Computers are used practically everywhere and in just about everything. The first computers, built in the mid-twentieth century, could fill an entire room. Today, thanks to improved semiconductor technologies, computers are small enough to fit in the palms of our hands, or even smaller. Manufacturers can now fabricate chips with features measured in nanometers (10⁻⁹ meters), sometimes called nanochips. This steady progress in miniaturization can be quantified by something known as Moore’s Law, which says that the number of transistors that can fit onto a chip doubles roughly every two years.
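To get a feel for what “doubling every two years” really means, here is a quick Python sketch. The starting transistor count and year are purely illustrative assumptions, not data from any particular chip, but the exponential growth it prints is exactly the trend Moore’s Law describes.

```python
# Illustration of Moore's Law: transistor counts doubling roughly
# every two years. The starting values below are illustrative only.

def projected_transistors(start_count: float, start_year: int,
                          year: int, doubling_period: float = 2.0) -> float:
    """Project a transistor count, assuming one doubling every `doubling_period` years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Suppose (hypothetically) a chip held about 2,000 transistors in 1970.
for year in range(1970, 2021, 10):
    count = projected_transistors(2_000, 1970, year)
    print(f"{year}: ~{count:,.0f} transistors")
```

Running this shows the count climbing from thousands to tens of billions over fifty years, which is why the growth looks like a straight line when plotted on a logarithmic scale, as in the graph below.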
You can probably guess where we’re going. If we keep shrinking our processors from chips to microchips to nanochips, it won’t be long before they get so small that they operate in the quantum realm. In fact, we’re nearly there already, and the appearance of quantum effects within “classical” computers is just over the horizon. In one sense, these effects are likely to be a nuisance for traditional computation, and we’ll probably reach the “classical limit” for miniaturizing traditional computers within the next decade or two.
This graph, which depicts the growth in the number of transistors on computer processors over the years, is a demonstration of Moore’s Law. Note that the vertical scale here is logarithmic, increasing in factors of 10. It won’t be long before computers enter the quantum realm.
On the other hand, this could represent a great opportunity. Might we be able to harness quantum effects to make a better computer? Many scientists believe the answer is yes, and we’ll explore why in this section.