For half a century, the technology industry has been built on a set of rules that have defined how everything from power electronics to computer chips is designed. Among these rules is Moore's Law, which has underpinned innovation since 1965 but is expected to come to an end in the coming years. This is creating an urgent need for innovation in computing, design and materials, as Benjamin Stafford, Materials Science Specialist at materials search engine Matmatch, explains.
Since the first commercial computer chips appeared in 1961, humanity has become increasingly infatuated with digital technology, with computers and smart devices now playing a fundamental role in our lives. Each passing year brings hundreds of new applications of digital technology, requiring ever-growing levels of complex computation carried out quickly, efficiently and effectively.
As demand for greater processing power and speed in electronic devices has grown, design and electronics engineers have met the challenge by developing integrated circuit (IC) computer chips that feature ever more transistors. It is from this approach that Moore's Law arose, which states that the number of transistors per unit area in an IC will double every one to two years.
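The compounding effect of that doubling is easy to underestimate, so it can help to put numbers on it. The sketch below projects a transistor count forward under the doubling rule; the starting figures (the Intel 4004's 2,300 transistors in 1971) are illustrative, chosen here and not part of the original claim.

```python
# Illustrative sketch of Moore's Law scaling under an assumed
# two-year doubling period.
def transistors(start_count, start_year, year, doubling_years=2):
    """Project a transistor count forward by exponential doubling."""
    periods = (year - start_year) / doubling_years
    return start_count * 2 ** periods

# Ten doublings of the Intel 4004's 2,300 transistors (1971 -> 1991)
# gives 2,300 * 2**10 = 2,355,200 transistors.
print(round(transistors(2_300, 1971, 1991)))  # -> 2355200
```

Twenty years of doubling turns thousands of transistors into millions, which is why even a modest change in the doubling period matters so much to the industry.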
However, we are reaching the limits of Moore's Law. As the number of transistors packed into an IC increases, their size decreases. In fact, the latest production ICs have a minimum feature size of 7 nm, a distance spanned by just 17 silicon atoms. Packing any more transistors onto a chip raises the issue of quantum tunnelling, in which electrons leap through barriers and cause current leakage.
This issue hasn't hit researchers and design engineers suddenly. According to an interview with IBM's chief innovation officer Bernie Meyerson, quantum mechanical effects first caused transistors on ICs to fail in 2003.
Unfortunately, the rate of technological development hasn't slowed, leaving us in a situation where the end of Moore's Law is looming. Many in the industry are reluctant to make definitive statements on when the law will cease to hold, but the general consensus is that it will be some time between 2020 and 2025, and nobody seems to have a practical solution to this technological roadblock.
What we do have, however, is several proposed solutions that promise either an interim fix or a long-term answer.
Extending Moore's Law
One way to extend Moore's Law for years to come is by developing more application-specific ICs (ASICs) that are optimised to complete specific tasks. One example would be specialised artificial intelligence (AI) chips, which are currently being developed by the likes of Google and Microsoft.
AI chips are ASICs, much like graphics processing units (GPUs), that function to improve the effectiveness and efficiency of AI features in electronics. This could mean faster image recognition on a smartphone, as is the case with the chips from Intel Movidius, or more informed diagnostics from AI-equipped healthcare devices.
Switching to a more ASIC-focussed design could potentially offer a stopgap solution and extend Moore's Law by roughly ten years, but it will continue to exacerbate the industry issue of the von Neumann bottleneck: modern processors can run so fast that much of the computing time is wasted waiting for data to travel between the processor and memory.
A proposed solution to this is to develop 3D ICs, where several wafers are layered on top of each other and interconnected vertically. By stacking multiple ICs in this way, design engineers can reduce how far data has to travel while also minimising the footprint of chips. This could further extend the viability of existing computing methods, allowing the principle of Moore's Law to continue.
Selecting the right material
Materials science is another promising area of research in mitigating the limitations of existing computing methods, complementing any potential changes in design approach.
Part of the reason why Moore's Law may be coming to an end is the minimum size at which silicon transistors can continue to function as intended. While there is talk of 5 nm silicon chips being developed, it doesn't appear that silicon can take our current means of producing ICs much further.
For design engineers, the matter of cost has presented roadblocks in sourcing alternative materials. Silicon is highly abundant on Earth and cheap to source. Alternative materials could potentially mean a higher supply cost and a more expensive final product.
A potential solution to this problem comes in the form of carbon nanotubes (CNTs): tubes of carbon roughly 1 nm wide that conduct electricity more efficiently than silicon. The abundance of carbon on Earth means that, once the production process is optimised, CNTs could be relatively inexpensive to use.
However, CNTs are not yet a mature enough technology for mass production. IBM has been doing extensive research into CNT chips in recent years, but the technology is still years away from becoming commercially viable. It could be that CNTs become viable before Moore's Law ends, but it's not a chance many in the industry will want to take.
Another potential alternative to silicon is graphene. It's 200 times as conductive as silicon, far more heat resistant than silicon and stable in one-atom-thick layers. In addition to this, the material is produced from graphite, a crystalline allotrope of carbon. Since graphite is an abundant resource on Earth and carbon is one of the most common elements in the universe, it would be relatively inexpensive to source graphene from suppliers through services like Matmatch's materials search engine.
However, graphene is still a relatively new material and ICs using the material need further development before they become viable for the market. Although in its finished form graphene is very strong, the deposited layers can be damaged during the fabrication process.
In 2014, IBM successfully created an analogue graphene IC, a broadband radio-frequency mixer, by changing the process used to fabricate the transistors. However, changing the overall fabrication process at scale would be an expensive task, and one that few semiconductor manufacturers would likely undertake unless graphene chips became commercially viable.
Even so, the biggest issue with graphene is that it does not have a bandgap. The bandgap denotes the energy required to excite an electron from the valence band, the highest range of electron energy states occupied at absolute zero, to the conduction band, the lowest range of available electron states. Put simply, the bandgap is what allows semiconductor materials to switch between electrically insulating and conducting.
For semiconductors, this bandgap is large enough that there is a clear distinction between the on and off states. The lack of a bandgap is the main challenge preventing the adoption of graphene in ICs.
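A rough way to see why the bandgap matters is the Boltzmann factor exp(-Eg / 2kT), which estimates how readily thermal energy alone excites electrons across a gap of Eg at temperature T. The sketch below is a back-of-the-envelope illustration, not a device model; silicon's bandgap of about 1.12 eV at room temperature is a standard textbook value.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def excitation_factor(bandgap_ev, temp_k=300.0):
    """Boltzmann factor exp(-Eg / 2kT): the relative likelihood that
    thermal energy promotes an electron across the bandgap."""
    return math.exp(-bandgap_ev / (2 * K_B * temp_k))

# Silicon (Eg ~ 1.12 eV): factor of ~4e-10, so the 'off' state leaks
# almost nothing and is clearly distinguishable from 'on'.
print(excitation_factor(1.12))

# Graphene (Eg = 0): factor of exactly 1, meaning electrons conduct
# freely at all times and there is no 'off' state to switch to.
print(excitation_factor(0.0))
```

The twenty-odd orders of magnitude between the two values is, in essence, why a transistor made from pristine graphene cannot be turned off.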
However, materials scientists are developing ways of overcoming this. In 2015, researchers at Aalto University in Finland found that fabricating graphene in a ribbon shape introduced a bandgap to the material. Researchers at Sungkyunkwan University and the Institute for Basic Science in South Korea also found that a bandgap can be achieved by doping graphene.
Of course, more development is needed to reliably make graphene a contender for ICs. Design engineers and material scientists alike must experiment with graphene to do this, which could potentially revolutionise the semiconductor industry. Fortunately, graphene is becoming easier to source using platforms like Matmatch, which now features several forms of the material from trusted graphene suppliers.
While graphene might be at too early a stage to help extend the relevance of Moore's Law, it does have another potential use. According to research from EPFL's Laboratory of Photonics and Quantum Measurements, graphene could be a leading candidate for creating quantum capacitors.
Computing after Moore's Law
If there is one resonating lesson to be learned from the current situation, it's that our existing approaches to computing and computer chips need to change. Rather than just tweaking designs or changing materials, it could be that computing itself needs to change.
There are two emerging technologies that could succeed our current binary digital computers: neuromorphic systems, which are designed to imitate the structure of the human brain, and quantum computing, which uses quantum mechanics to drastically improve computer speed and performance. Both are in their infancy, but it's the latter that holds the most long-term promise.
Quantum computing deviates from traditional binary digital computing in many ways, not least in how the device operates at a functional level. Quantum computers use quantum bits (qubits) as their basic units of information, and these can leverage quantum mechanics to exist in superpositions of states. Graphene has shown potential as a material for quantum capacitors, which are necessary to create stable qubits that are resistant to electromagnetic interference.
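The difference from a classical bit can be sketched with a toy simulation. The code below assumes only the Born rule, under which measuring a qubit in the state alpha|0> + beta|1> yields 0 with probability |alpha|^2; it is a hypothetical illustration of the statistics, not how real quantum hardware or any quantum SDK operates.

```python
import math
import random

def measure(alpha, beta, shots=10_000, seed=0):
    """Sample repeated measurements of the qubit state alpha|0> + beta|1>,
    returning how many times each outcome occurs."""
    p0 = abs(alpha) ** 2  # Born rule: probability of reading out 0
    rng = random.Random(seed)
    zeros = sum(rng.random() < p0 for _ in range(shots))
    return zeros, shots - zeros

# An equal superposition: unlike a classical bit, which is always 0 or 1,
# the qubit is in both states until measured, collapsing to each outcome
# about half the time.
amp = 1 / math.sqrt(2)
print(measure(amp, amp))
```

A real quantum computer gains its power not from this randomness alone but from interference between many such amplitudes, which is also what makes the qubits so sensitive to their environment.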
Research from MIT in 2013 also found that graphene can be made into a topological insulator, meaning that electrons move around the material in different directions depending on their spin. This behaviour could be switched on and off, allowing electrons to enter a superposition of states as required.
The main limitation was that graphene required strong magnetic fields and near-absolute-zero temperatures to exhibit this effect. The latter is a limitation shared by current quantum computers, which require extremely low temperatures for their materials to act as superconductors. As a concept, quantum computing is still many years away from becoming practical.
We may be approaching the end of Moore's Law, but work and research are underway to ensure that technological development is not impeded by the nature of computing as we know it. It is only by innovating in material selection, chip design and computing itself that we can ensure a smooth journey into the future of computer technology.