Hard Disk Drive

A hard disk drive (HDD) is a device that stores data using one or more rotating disks (platters) coated with magnetic material. Hard disk drives are non-volatile memory, retaining stored data even when the device is switched off. Data on the disk is organised into individual blocks called sectors. The hard disk drive was developed in the 1950s by an IBM team led by Reynold Johnson; the first production unit, the IBM 350, was announced in 1956 as part of the RAMAC system.



CPU

CPU stands for central processing unit; the CPU is often called the brain of a computer. J. Presper Eckert and John William Mauchly designed EDVAC, one of the first stored-program computers, and its stored-program design underlies the modern CPU. Earlier CPUs were built from discrete components or a few dozen integrated circuits; thousands of the refrigerator-sized IBM 1401, first introduced in 1959, were sold. The first CPU on a single chip (the microprocessor) was released in November 1971, which was a great achievement at the time.


Motherboard

A motherboard is the main circuit board found in general-purpose microcomputers and other systems. It allows communication between many of the crucial electronic components of a system. Unlike a backplane, a motherboard usually contains significant sub-systems such as the central processor, the chipset's input/output and memory controllers, interface connectors, and other components integrated for general-purpose use.

Touch Screens

A touchscreen is an input device normally layered on top of the electronic visual display of an information processing system. A user can give input to, or control, the information processing system through simple or multi-touch gestures by touching the screen with a special stylus and/or one or more fingers. Some touchscreens work with ordinary or specially coated gloves, while others respond only to a special stylus or pen. The user can use the touchscreen to react to what is displayed and to control how it is displayed; for example, zooming to increase the text size.


SSD

SSD stands for solid-state drive; it is also known as a solid-state disk. An SSD is a storage device that uses non-volatile flash memory, built from integrated circuit assemblies, to store data. It can be used instead of a hard disk drive (HDD) because an SSD is much faster. Early work on semiconductor-based storage came from companies including IBM.


Input Devices

Input devices allow us to enter raw data into any computer. The computer processes the data and then produces outputs that we can understand using an output device. Input devices can be manual or automatic.

The four generations of computers

Computers play a huge part in almost all of our lives, but how did these machines become so powerful and important? And what were some of the earliest models like? This collection of videos takes us through the Four Generations of computers, starting with Colossus, the world's first electronic computer (launched in 1944), and finishing with the BBC Micro (launched in 1981) and Fourth Generation Computers, looking at how technology changed throughout these years. Visiting locations such as The National Museum of Computing in Milton Keynes and The Centre for Computing History in Haverhill, we see an array of fascinating machines and learn about them along the way. This material forms part of The Open University course TU100.


Each generation increases in reliability, speed, efficiency and ease of use and decreases in cost & size.

The First Generation (1945 - 1955)

- Very large computers made up of vacuum tubes and often programmed using wiring plugboards
- Programmed using machine language
- Mostly used for numerical calculations, such as working out mathematical tables
- No OS

The Second Generation (1955 - 1965)

- Mainframes made up of transistors
- At first punch cards were used to provide input, then tapes were used (for batch processing)
- Used Assemblers and FORTRAN compilers for program writing
- Simple batch processing was used, with input files, programs and output on tape
- Smaller computers (e.g. IBM 1401) were used to read programs and data on punch cards on to input tapes and for offline printing
- Used mainly for scientific and engineering applications
- FMS (the Fortran Monitor System) and IBM's IBSYS were used as OSs for handling jobs (e.g. to read a job and to run it)

The Third Generation (1965 - 1980)

- Mainframes based on small-scale ICs were used.
- Capable of multiprogramming (running several jobs at the same time)
- Fixed disks were used and new jobs on cards to be executed could be read on to the disk while executing other jobs (spooling)
- Though the first models used multiprogrammed batch processing, to cater to increased response time, timesharing was introduced later (Time-sharing Systems)
- Complex OSs such as OS/360 were used.
- Used for various applications including scientific and business applications
- Minicomputers also appeared on the market; they were used by small departments etc. and became the platform for UNIX.

The Fourth Generation (1980 - present)

- Mainframes, Minicomputers, Workstations, Personal Computers (Desktop and portable) based on VLSI components
- Network operating systems that facilitate file sharing, remote logging etc. and Client Server computing.
- Distributed OSs that make use of multiple machines and processors to run applications.
- GUI based OS interfaces and applications.
- Virtual Machines and Network Computers (NCs)



Monitors (concurrent programming)

In concurrent programming, a monitor is a synchronization construct that allows threads to have both mutual exclusion and the ability to wait (block) for a certain condition to become true. Monitors also have a mechanism for signalling other threads that their condition has been met. A monitor consists of a mutex (lock) object and condition variables. A condition variable is basically a container of threads that are waiting for a certain condition. Monitors provide a mechanism for threads to temporarily give up exclusive access in order to wait for some condition to be met, before regaining exclusive access and resuming their task.

Another definition of monitor is a thread-safe class, object, or module that uses wrapped mutual exclusion in order to safely allow access to a method or variable by more than one thread. The defining characteristic of a monitor is that its methods are executed with mutual exclusion: at each point in time, at most one thread may be executing any of its methods. Using one or more condition variables, it can also provide the ability for threads to wait on a certain condition.

Monitors were invented by Per Brinch Hansen and C. A. R. Hoare, and were first implemented in Brinch Hansen's Concurrent Pascal language.
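The monitor pattern described above can be sketched in Python using a lock plus condition variables from the standard `threading` module; the `BoundedBuffer` class, its capacity, and the item values are invented for this example:

```python
import threading

class BoundedBuffer:
    """A monitor: one mutex for mutual exclusion plus condition variables."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.lock = threading.Lock()
        # Both condition variables share the monitor's single lock.
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:                       # enter the monitor
            while len(self.items) >= self.capacity:
                self.not_full.wait()          # give up the lock and block
            self.items.append(item)
            self.not_empty.notify()           # signal a waiting consumer

    def get(self):
        with self.lock:
            while not self.items:
                self.not_empty.wait()
            item = self.items.pop(0)
            self.not_full.notify()            # signal a waiting producer
            return item

buf = BoundedBuffer(2)
results = []

def consumer():
    for _ in range(3):
        results.append(buf.get())

t = threading.Thread(target=consumer)
t.start()
for x in [1, 2, 3]:
    buf.put(x)
t.join()
print(results)  # [1, 2, 3]
```

Note the `while` loops around `wait()`: a woken thread re-checks its condition after regaining the lock, which is exactly the "give up exclusive access, then regain it" behaviour the definition describes.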

Operating Systems / Other software

An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. The operating system is a component of the system software in a computer system. Application programs usually require an operating system to function.



Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources.

For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers.

Examples of popular desktop operating systems include Apple's OS X, Linux and its variants, and Microsoft Windows. So-called mobile operating systems include Android and iOS. Other classes of operating systems, such as real-time operating systems (RTOS), also exist.
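The idea of a system call can be made concrete with a small Python sketch; the low-level functions in the `os` module are thin wrappers over the corresponding kernel calls (the file name `syscall_demo.txt` is invented for the example):

```python
import os

# Each call below asks the OS kernel to do work on the program's behalf.
pid = os.getpid()                                   # process ID from the kernel
fd = os.open("syscall_demo.txt",                    # open(2): create/truncate a file
             os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"written via system calls\n")         # write(2): hand bytes to the kernel
os.close(fd)                                        # close(2): release the descriptor

with open("syscall_demo.txt", "rb") as f:
    data = f.read()
print(pid > 0, data)
```

This is the intermediary role described above: the application never touches the disk hardware itself; it asks the OS, via system calls, to do so.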


Tim Berners-Lee - Research


Timothy John Berners-Lee (born 8 June 1955) studied at The Queen's College, Oxford, from 1973 to 1976, where he received a first-class bachelor of arts degree in physics.[21]



    After graduation, Berners-Lee worked as an engineer at the telecommunications company Plessey in Poole, Dorset.[21] In 1978, he joined D. G. Nash in Ferndown, Dorset, where he helped create type-setting software for printers.[21]

    Berners-Lee worked as an independent contractor at CERN from June to December 1980. While in Geneva, he proposed a project based on the concept of hypertext, to facilitate sharing and updating information among researchers.[23] To demonstrate it, he built a prototype system named ENQUIRE.[24]

    After leaving CERN in late 1980, he went to work at John Poole's Image Computer Systems, Ltd, in Bournemouth, Dorset.[25] He ran the company's technical side for three years.[26] The project he worked on was a "real-time remote procedure call" which gave him experience in computer networking.[25] In 1984, he returned to CERN as a fellow.[24]

    In 1989, CERN was the largest Internet node in Europe, and Berners-Lee saw an opportunity to join hypertext with the Internet:

    "I just had to take the hypertext idea and connect it to the Transmission Control Protocol and domain name system ideas and—ta-da!—the World Wide Web[27] ... Creating the web was really an act of desperation, because the situation without it was very difficult when I was working at CERN later. Most of the technology involved in the web, like the hypertext, like the Internet, multifont text objects, had all been designed already. I just had to put them together. It was a step of generalising, going to a higher level of abstraction, thinking about all the documentation systems out there as being possibly part of a larger imaginary documentation system."[28]

    This NeXT Computer was used by Berners-Lee at CERN and became the world's first web server

    Berners-Lee wrote his proposal in March 1989 and, in 1990, redistributed it. It then was accepted by his manager, Mike Sendall.[29] He used similar ideas to those underlying the ENQUIRE system to create the World Wide Web, for which he designed and built the first Web browser. His software also functioned as an editor (called WorldWideWeb, running on the NeXTSTEP operating system), and the first Web server, CERN HTTPd (short for Hypertext Transfer Protocol daemon).

    "Mike Sendall buys a NeXT cube for evaluation, and gives it to Tim [Berners-Lee]. Tim's prototype implementation on NeXTStep is made in the space of a few months, thanks to the qualities of the NeXTStep software development system. This prototype offers WYSIWYG browsing/authoring! Current Web browsers used in "surfing the Internet" are mere passive windows, depriving the user of the possibility to contribute. During some sessions in the CERN cafeteria, Tim and I try to find a catching name for the system. I was determined that the name should not yet again be taken from Greek mythology..... Tim proposes "World-Wide Web". I like this very much, except that it is difficult to pronounce in French..." by Robert Cailliau, 2 November 1995.[30]


    The first website was built at CERN, within the border of France,[31] and was put online for the first time on 6 August 1991, running on a NeXT computer; it was the world's first-ever web site and web server. The first web page centred on information regarding the WWW project: visitors could learn more about hypertext, read technical details for creating their own web page, and even find an explanation of how to search the Web for information. There are no screenshots of this original page and, in any case, changes were made daily to the information available on the page as the WWW project developed. You may find a later copy (1992) on the World Wide Web Consortium website.[32]

    It provided an explanation of what the World Wide Web was, and how one could use a browser and set up a web server.[33][34][35][36]

    In 1994, Berners-Lee founded the W3C at the Massachusetts Institute of Technology. It comprised various companies that were willing to create standards and recommendations to improve the quality of the Web. Berners-Lee made his idea available freely, with no patent and no royalties due. The World Wide Web Consortium decided that its standards should be based on royalty-free technology, so that they easily could be adopted by anyone.[37]

    In 2001, Berners-Lee became a patron of the East Dorset Heritage Trust, having previously lived in Colehill in Wimborne, East Dorset.[38] In December 2004, he accepted a chair in computer science at the School of Electronics and Computer Science, University of Southampton, Hampshire, to work on the Semantic Web.[39][40]

    In a Times article in October 2009, Berners-Lee admitted that the initial pair of slashes ("//") in a web address were "unnecessary". He told the newspaper that he easily could have designed web addresses without the slashes. "There you go, it seemed like a good idea at the time," he said in his lighthearted apology.[41]



RAM

Random-access memory is a form of computer data storage. A random-access memory device allows data items to be accessed in almost the same amount of time irrespective of the physical location of data inside the memory. In contrast, with other direct-access data storage media such as hard disks, CD-RWs, DVD-RWs and the older drum memory, the time required to read and write data items varies significantly depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement.

Types of RAM

The two widely used forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In SRAM, a bit of data is stored using the state of a six transistor memory cell. This form of RAM is more expensive to produce, but is generally faster and requires less dynamic power than DRAM. In modern computers, SRAM is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a DRAM memory cell. The capacitor holds a high or low charge (1 or 0, respectively), and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers.
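The transistor-and-capacitor DRAM cell described above can be illustrated with a toy model (entirely invented for this example, not real hardware behaviour): the capacitor's charge leaks away over time, which is why DRAM must be periodically refreshed.

```python
class DRAMCell:
    """Toy model of one DRAM cell: a capacitor holds the bit;
    write/read stand in for the transistor gating access to it."""
    def __init__(self):
        self.charge = 0.0            # capacitor charge: high ~ 1, low ~ 0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def leak(self):
        self.charge *= 0.5           # charge leaks away between accesses

    def read(self):
        return 1 if self.charge > 0.25 else 0

    def refresh(self):
        self.write(self.read())      # re-write the sensed value at full charge

cell = DRAMCell()

cell.write(1)
for _ in range(3):
    cell.leak()                      # no refresh: charge decays to 0.125
no_refresh = cell.read()             # bit is lost

cell.write(1)
for _ in range(3):
    cell.leak()
    cell.refresh()                   # periodic refresh restores full charge
with_refresh = cell.read()           # bit survives
print(no_refresh, with_refresh)      # 0 1
```

Real DRAM controllers do exactly this in hardware, refreshing every row of cells many times per second.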

Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, read-only memory (ROM) stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writeable variants of ROM (such as EEPROM and flash memory) share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment. These persistent forms of semiconductor ROM include USB flash drives, memory cards for cameras and portable devices, etc. ECC memory (which can be either SRAM or DRAM) includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, using parity bits or error correction code.
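The parity-bit scheme that ECC memory builds on can be sketched as follows (the functions here are written for this example, not taken from any library):

```python
def parity_bit(bits):
    """Even parity: the check bit makes the total number of 1s even."""
    return sum(bits) % 2

def detect_error(bits, stored_parity):
    """A single flipped bit changes the parity, so the fault is detected."""
    return parity_bit(bits) != stored_parity

data = [1, 0, 1, 1, 0, 0, 1, 0]
p = parity_bit(data)                  # parity computed when the word is stored

assert not detect_error(data, p)      # unchanged data: no error reported

data[3] ^= 1                          # a random fault flips one bit
assert detect_error(data, p)          # parity no longer matches
print("single-bit error detected")
```

A single parity bit can only detect an odd number of flipped bits and cannot say which bit failed; ECC memory uses additional check bits (for example, Hamming codes) so the faulty bit can also be located and corrected.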

In general, the term RAM refers solely to solid-state memory devices (either DRAM or SRAM), and more specifically the main memory in most computers. In optical storage, the term DVD-RAM is somewhat of a misnomer since, unlike CD-RW or DVD-RW, it does not need to be erased before reuse. Nevertheless, a DVD-RAM behaves much like a hard disk drive, if somewhat slower.