The Four Generations of Computing
By Taylor Teasdale
The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. They were very expensive to operate; in addition to using a great deal of electricity, they generated a lot of heat, which was often the cause of malfunctions.
Transistors replaced vacuum tubes and ushered in the second generation of computing. The transistor was invented in 1947, but it did not see widespread use in computers until the late 1950s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient, and more reliable than their first-generation predecessors.
The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.
The microprocessor brought about the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet.