The processing speed of a computer is largely determined by how quickly its processor can execute instructions. Colocation America states that the first computer processor ran at 740 kHz and was able to process 92,000 instructions per second. The processing speed of chips has roughly doubled every two years since then, for over 50 years, a trend known as Moore's Law. As MIT Technology Review puts it:
"Moore's Law is named after Intel co-founder Gordon Moore. He observed in 1965 that transistors were shrinking so fast that every year twice as many could fit onto a chip, and in 1975 adjusted the pace to a doubling every two years."
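The doubling described above compounds quickly. As a rough sketch (note that Moore's observation concerned transistor counts per chip, not clock speed, and the 740 kHz figure is used here only to illustrate the arithmetic), a quantity that doubles every two years grows by a factor of 2 for each two-year period:

```python
def doublings(years: int, period: int = 2) -> int:
    """Number of complete doubling periods in the given span of years."""
    return years // period

def projected_value(initial: float, years: int) -> float:
    """Project a value forward, assuming one doubling per period."""
    return initial * 2 ** doublings(years)

# 50 years at one doubling every two years is 25 doublings,
# a growth factor of 2**25, roughly 33.5 million.
print(projected_value(740, 50))  # the 740 kHz starting figure, scaled by 2**25
```

Whatever the starting point, 25 doublings multiply it by more than thirty million, which is why exponential trends like this cannot continue indefinitely.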
Unfortunately, a transistor can only be made so small before the laws of physics put a halt to things, and that is the situation we are arriving at now. Transistor features are already small enough to be measured in numbers of atoms. Once a chip features transistors as small as an individual atom, we will not be able to scale down any further.
Scaling down transistors is currently the main method of increasing the speed at which a computer can operate. Having more transistors on a chip means being able to complete more processes simultaneously.
We must find a new method of improving computing power that doesn’t rely on increasing the number of transistors.