Microprocessors

presenting the complete internal workings of...

What is a microprocessor?

A microprocessor is a chip that incorporates all the functions of a computer's central processing unit (CPU) onto a single integrated circuit (IC). It is a multipurpose, programmable device which accepts digital data as input, processes it according to instructions stored in its memory, and provides the results as output. It is an example of "sequential digital logic" because it has internal memory: it can use the history of its inputs to determine its output. Microprocessors operate on numbers represented in binary.
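The idea of sequential logic (output depending on stored history, not just the current input) can be illustrated with a short sketch. The running-total unit below is hypothetical, standing in for a processor register:

```python
# A minimal sketch of sequential digital logic: the output depends not
# only on the current input but on stored state (the input history).
# This hypothetical unit keeps one 8-bit register, much as a processor
# keeps values in its internal registers.

class Accumulator:
    def __init__(self):
        self.state = 0  # internal memory (an 8-bit register)

    def clock(self, value):
        # Each "clock tick" combines the new input with the stored
        # history, wrapping at 256 because the register is 8 bits wide.
        self.state = (self.state + value) % 256
        return self.state

acc = Accumulator()
print(acc.clock(100))  # 100
print(acc.clock(100))  # 200 -- same input, different output: history matters
```

The same input (100) produces different outputs on the two ticks, which is exactly what distinguishes sequential logic from purely combinational logic.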


Low-cost computers on integrated circuits have transformed modern society. General-purpose microprocessors in PCs are used for computing, text editing, displaying information, and communicating over the Internet. Many more microprocessors are embedded in smaller systems, providing digital control in everything from household appliances to cars and mobile phones.

The history of the microprocessor

During the 1960s, computer processors were built from small ICs, each containing between 10 and 500 transistors, all of which had to be soldered onto printed circuit boards (PCBs) by hand. In the NASA Apollo moon missions of the 1960s and 70s, all on-board computing power for navigation and control was provided by a small processor known as the "Apollo Guidance Computer".


The integration of an entire CPU onto a single microchip greatly reduced the cost of processing power. IC processors were manufactured in large quantities by automated machinery, so the price of an individual CPU was low. Single-chip processors also increased reliability, because there were fewer electrical connections to fail. As microprocessors became faster, the cost of making them continued to decrease.


Microprocessors integrated into large-scale ICs adopted architectures that had previously been implemented with medium-scale ICs. They have since displaced most other forms of computer, with microprocessors used in everything from the smallest handheld devices to the largest mainframes and supercomputers.

The first true microprocessors were created in the early 1970s and used in electronic calculators, performing binary-coded decimal arithmetic on 4-bit words. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers, and various kinds of automation, followed soon after. Affordable 8-bit processors with 16-bit addressing made the first general-purpose microcomputers possible in the 1970s.


Since then, the increase in the capacity and speed of microprocessors has followed Moore's law, which originally stated that the number of components that can be fitted onto a chip doubles every year. The doubling period is now closer to two years, and the law has been revised to reflect this.

How does it work?

At its core, a microprocessor repeatedly performs the same cycle: fetch an instruction from memory, decode it, and execute it.
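As a rough sketch, the cycle a processor repeats (fetch an instruction, decode it, execute it) can be modelled in a few lines of Python. The three-instruction set below is invented purely for illustration:

```python
# A toy fetch-decode-execute loop, sketching how a processor steps
# through a program stored in memory. LOAD, ADD and HALT are a made-up
# instruction set, not any real processor's.

def run(program):
    acc = 0  # accumulator register
    pc = 0   # program counter: which instruction to fetch next
    while True:
        opcode, operand = program[pc]  # fetch
        pc += 1
        if opcode == "LOAD":           # decode and execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "HALT":
            return acc

program = [("LOAD", 5), ("ADD", 3), ("HALT", 0)]
print(run(program))  # 8
```

A real processor does the same thing in hardware, with the program counter, decoder and arithmetic unit all built from logic gates.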

What is microprocessor architecture?

Microprocessor architecture defines the internal layout of the microprocessor, like a road map. If you have ever seen a printed circuit board (PCB), you will know that it has many tracks leading to different places. Microprocessor architectures fall into two broad categories: CISC and RISC.


CISC stands for Complex Instruction Set Computer and RISC for Reduced Instruction Set Computer. As the names suggest, CISC processors can handle more complex instructions, each doing more work, whilst RISC processors use a smaller set of simpler instructions that can typically be decoded and executed faster.
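The contrast can be illustrated with a made-up example (neither listing is a real instruction set): the same task, adding one memory value to another, expressed as one complex CISC-style instruction versus several simple RISC-style steps:

```python
# Illustrative only: the same memory-to-memory addition written in a
# CISC style (one instruction does everything) and a RISC style
# (explicit loads and stores, with arithmetic on registers only).
# Both instruction sets here are invented for the comparison.

cisc_program = [
    "ADD [mem_a], [mem_b]",   # one instruction: load, add and store
]

risc_program = [
    "LOAD  r1, [mem_a]",      # fetch first operand into a register
    "LOAD  r2, [mem_b]",      # fetch second operand
    "ADD   r1, r1, r2",       # registers only: simple to decode
    "STORE r1, [mem_a]",      # write the result back to memory
]

print(len(cisc_program), len(risc_program))  # 1 4
```

The CISC program is shorter, but each of the four RISC instructions is simpler, which is what lets RISC hardware decode and execute them quickly.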

What is the processor speed?

Processor speed is often measured in instructions per second (IPS). Quoted IPS values usually represent the maximum instruction rate on sequences with few branches; more realistic workloads give lower figures. The performance of the memory also affects the processor's overall performance, but this is rarely reflected in IPS calculations.


Because of these problems, researchers have created benchmarks such as SPECint to measure real performance in commonly used applications, and raw IPS figures are now rarely quoted.


IPS is usually expressed as thousands of instructions per second (kIPS), millions of instructions per second (MIPS), billions of instructions per second (GIPS), or millions of operations per second (MOPS). These figures measure instruction throughput, which depends on both the clock speed and the number of clock cycles each instruction takes; they are not the same thing as the clock speed itself.
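The relationship between clock speed and instruction throughput is a simple division. The figures below are made up for illustration:

```python
# A back-of-the-envelope sketch relating clock speed to instruction
# throughput. Both numbers are hypothetical, chosen only to show the
# arithmetic: throughput = clock rate / average cycles per instruction.

clock_hz = 2_000_000_000    # a 2 GHz clock
cycles_per_instruction = 4  # average CPI on some imagined workload

ips = clock_hz / cycles_per_instruction  # instructions per second
mips = ips / 1_000_000                   # millions of instructions/second

print(mips)  # 500.0
```

This is why two processors with the same clock speed can have very different MIPS ratings: the one that needs fewer cycles per instruction gets more done per second.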