Friday, 23 August 2013

Moore’s Law: The debate continues


[Chart] Source: Company reports, Bernstein, Brewin Dolphin


Moore’s Law is a forecast that the number of transistors on a chip will, on average, double every two years. It is named after Gordon Moore, an Intel co-founder, who first predicted this evolution back in 1965. Almost fifty years on, reality has rarely strayed from the prediction, and the Law is now used as a guiding principle by the industry, becoming, in effect, a self-fulfilling prophecy. However, as has happened at various times over the last half century, some analysts are questioning the industry’s ability to keep pace. In this thematic piece, we look at the drivers behind the continuance of the Law, the technological challenges that stand in its way and the effect on the various companies in the value chain.

The benefits of the Law’s persistence are clear: increased performance at lower cost has helped billions in their everyday lives. From space shuttles to washing machines, Moore’s Law has enabled the human race to achieve the extraordinary, as well as manage the mundane with greater ease. It is the reason we can carry more computing power in our pockets than previously filled an entire room. Part of this increase is due to Moore’s Law with respect to transistor count, and part is due to improvements in transistor technology; together these have led to an approximate doubling of chip performance every 18 months. The drive behind this evolution is, perhaps unsurprisingly, the pursuit of profit in an intensely competitive market, although competitors do sometimes come together to share technology or resources for mutual advancement.
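The compounding effect of that 18-month doubling is easy to underestimate. A minimal sketch (our own illustration, not from any company report) of the cumulative performance multiple it implies:

```python
def growth_factor(years, doubling_period_years=1.5):
    """Cumulative performance multiple after `years`, assuming
    performance doubles every `doubling_period_years` (18 months)."""
    return 2 ** (years / doubling_period_years)

# Doubling every 18 months compounds dramatically over time
for years in (3, 10, 30):
    print(f"{years:>2} years -> ~{growth_factor(years):,.0f}x")
```

On these assumptions, a decade delivers roughly a hundredfold improvement, and three decades roughly a millionfold, which is why room-sized computing power now fits in a pocket.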

Moore himself has said his Law will come to an end at some point due to the inability to shrink components beyond a certain size. Others have tried to estimate the timing of the end via the application of quantum physics or thermodynamics. With regard to the latter, a minimum energy level is required to switch a transistor on and off (between one and zero), and from this the minimum dimensions of a chip can be estimated. Bernstein, using this methodology, suggests that the 1.5nm node (or half-pitch, half the distance between identical features on a chip) is the lowest that can be achieved. Chips are currently shipping with node sizes of 22nm, suggesting that if Moore’s Law were to stay in place, it could last until 2028. If Moore’s Law were to slacken to a doubling of transistors every three years, for example, minimum scale would be reached by 2035.
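These dates can be roughly reproduced with back-of-the-envelope arithmetic (our own sketch, assuming each process generation doubles transistor density, so the linear half-pitch shrinks by a factor of √2 per generation):

```python
import math

CURRENT_NODE_NM = 22.0   # node shipping as of 2013
LIMIT_NM = 1.5           # Bernstein's estimated physical floor

# Generations needed for the half-pitch to fall from 22nm to 1.5nm,
# shrinking by sqrt(2) each generation (density doubles per generation)
generations = math.log(CURRENT_NODE_NM / LIMIT_NM, math.sqrt(2))

for cadence_years in (2, 3):
    end_year = 2013 + cadence_years * generations
    print(f"{cadence_years}-year cadence: limit reached around {end_year:.0f}")
```

This simple model gives roughly eight further generations, landing close to (though not exactly on) the 2028 and 2035 dates quoted above; the small differences reflect rounding in the node roadmap.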
As a result of increasing manufacturing complexity, the cost of producing chips is likely to become prohibitive at smaller node sizes, and this is just as likely to slow the pace of shrinkage as the technological barriers are. This clearly has implications for chip producers such as Samsung, Intel and TSMC, which may or may not be able to pass on costs to consumers. If not, there will be less investment in R&D and less progress along the roadmap to smaller node sizes. Instead of adhering to Moore’s Law of a doubling in transistor count every two years at the same cost, energy efficiency may take greater prominence to compensate, in some measure, for necessarily slower performance improvements. A slackening of Moore’s Law to a doubling every three years, rather than the traditional two, is therefore emerging as a realistic proposition.

Some argue that the possible slowing and eventual end of Moore’s Law is not necessarily as big a stumbling block to mankind’s progress as it may initially appear. Whilst we have benefitted enormously from its pace in the past, better use of technology to achieve the same aims can almost certainly be accomplished. For example, although the processing power of computers has doubled every 18 months or so, software requirements have grown exponentially too. Often the application, Microsoft Word for example, has not changed fundamentally from its original form; over time, however, the processing requirements of such applications have increased enormously as extra features (automatic spell-checking, for example) have been added, as their often slow response times demonstrate. More streamlined programming can therefore go a long way towards speeding up computing.

Features such as video streaming on mobile devices are often thought to have high processing requirements. In truth, these requirements can be met comfortably by current processing hardware; it is the bandwidth of the connection (the internet service) that is under strain. On this basis, some have suggested that we are perhaps reaching the stage where current computing power is good enough to do the job intended. We doubt this assessment: the next generation of devices and applications will most likely have new features requiring computing power beyond that currently available, so a slowing of Moore’s Law would, in time, almost certainly act as a drag on technological advancement.

Ruairidh Finlayson CA
Technology Analyst
