Computer

A computer is a machine that can be programmed to carry out sequences of arithmetic or logical operations (computation) automatically. Modern digital electronic computers can perform generic sets of operations known as programs. These programs enable computers to perform a wide range of tasks. A computer system is a nominally complete computer that includes the hardware, operating system (main software), and peripheral equipment needed and used for full operation. This term may also refer to a group of computers that are linked and function together, such as a computer network or computer cluster.

A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer-aided design systems, and general-purpose devices like personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users.

Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (as predicted by Moore’s law), leading to the Digital Revolution during the late 20th to early 21st centuries.

Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, along with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitor screens, printers, etc.), and input/output devices that perform both functions (e.g., the 2000s-era touchscreen). Peripheral devices allow information to be retrieved from an external source and they enable the results of operations to be saved and retrieved.

Etymology


A human computer, with microscope and calculator, 1952

According to the Oxford English Dictionary, the first known use of computer was in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. During the latter part of this period women were often hired as computers because they could be paid less than their male counterparts.[1] By 1943, most human computers were women.[2]

The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The same dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897", and that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine".[3]

History

Pre-20th century

Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a][4] Counting rods are another example.

The Chinese suanpan (算盘). The number represented on this abacus is 6,302,715,408.

The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.[5]

The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price.[6] It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BC. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century.[7]

Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century.[8] The astrolabe was invented in the Hellenistic world in the 1st or 2nd century BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer[9][10] and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235.[11] Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe,[12] an early fixed-wired knowledge processing machine[13] with a gear train and gear-wheels,[14] c. 1000 AD.

The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation.

The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage.

The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft.

In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d’Art et d’Histoire of Neuchâtel, Switzerland, and still operates.[15]

In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which, through a system of pulleys and cylinders, could predict the perpetual calendar for every year from AD 0 (that is, 1 BC) to AD 4000, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.

The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators.[16] In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers.

First computer

Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer",[17] he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.[18][19]

The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage’s failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine’s computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.

Analog computers

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.[20] The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson.[16]

The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).

Digital computers

Electromechanical

By 1938, the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II similar devices were developed in other countries as well.

Replica of Konrad Zuse’s Z3, the first fully automatic, digital (electromechanical) computer

Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer.[21]

In 1941, Zuse followed up his earlier machine with the Z3, the world’s first working electromechanical programmable, fully automatic digital computer.[24][25] The Z3 was built with 2,000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz.[26] Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Using a binary system, rather than the harder-to-implement decimal system of Charles Babbage’s earlier design, meant that Zuse’s machines were easier to build and potentially more reliable, given the technologies available at that time.[27] The Z3 was not itself a universal computer but could be extended to be Turing complete.[28][29]

Zuse’s next computer, the Z4, became the world’s first commercial computer; after an initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich.[30] The computer was manufactured by Zuse’s own company, Zuse KG [de], founded in Berlin in 1941 as the first company devoted solely to developing computers.[30]

Vacuum tubes and digital electronic circuits


Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.[20] In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942,[31] the first "automatic electronic digital computer".[32] This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.[33]


During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women.[34][35] To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.[33] He spent eleven months from early February 1943 designing and building the first Colossus.[36] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944[37] and attacked its first message on 5 February.[33]

Colossus was the world’s first electronic digital programmable computer.[20] It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process.[38][39]

ENIAC was the first electronic, Turing-complete device, and performed ballistics trajectory calculations for the United States Army.

The ENIAC[40] (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls".[41][42]

It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules for multiplication, division, and square roots. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC’s development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.[43]

Modern computers

Concept of modern computer

The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper,[44] On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing’s design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper.[45] Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
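
The idea is small enough to capture in a few lines of code. The following Python sketch is a minimal, illustrative Turing machine simulator; the transition-table encoding and the example machine (which inverts a string of bits) are hypothetical choices for illustration, not Turing’s own notation.

  # A minimal sketch of a Turing machine: a transition table (the
  # "program") stored as data, executed by one fixed, general loop.
  # The encoding is illustrative, not Turing's notation.
  def run_turing_machine(program, tape, state="start"):
      cells = dict(enumerate(tape))          # sparse tape: position -> symbol
      position = 0
      while state != "halt":
          symbol = cells.get(position, " ")  # empty cells read as blank
          write, move, state = program[(state, symbol)]
          cells[position] = write
          position += 1 if move == "R" else -1
      return "".join(cells[i] for i in sorted(cells))

  # Example "program": flip every bit, halt at the first blank cell.
  invert_bits = {
      ("start", "0"): ("1", "R", "start"),
      ("start", "1"): ("0", "R", "start"),
      ("start", " "): (" ", "R", "halt"),
  }

  print(run_turing_machine(invert_bits, "10110"))  # -> "01001 " (trailing blank)

Because the program is ordinary data, the same fixed loop can run any transition table supplied to it; this is the essence of a programmable, stored-program machine.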

Stored programs


Early computing machines had fixed programs. Changing a machine’s function required rewiring and restructuring the machine.[33] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.[20]

The Manchester Baby was the world’s first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.[46] It was designed as a testbed for the Williams tube, the first random-access digital storage device.[47] Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer.[48] As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1.

The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world’s first commercially available general-purpose computer.[49] Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.[50] In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons’s LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951[51] and ran the world’s first routine office computer job.

Grace Hopper was the first to develop a compiler for a programming language.[2]

Transistors

The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley’s bipolar junction transistor in 1948.[52][53] From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, indefinite service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.[54]

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves.[55] Their first transistorised computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955,[56] built by the electronics division of the Atomic Energy Research Establishment at Harwell.[56][57]

MOSFET (MOS transistor), showing gate (G), body (B), source (S) and drain (D) terminals. The gate is separated from the body by an insulating layer (pink).

The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959.[58] It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses.[54] With its high scalability,[59] and much lower power consumption and higher density than bipolar junction transistors,[60] the MOSFET made it possible to build high-density integrated circuits.[61][62] In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. The MOSFET led to the microcomputer revolution,[63] and became the driving force behind the computer revolution.[64][65] The MOSFET is the most widely used transistor in computers,[66][67] and is the fundamental building block of digital electronics.[68]

Integrated circuits


Die photograph of a MOS 6502, an early 1970s microprocessor integrating 3500 transistors on a single chip


Integrated circuits are typically packaged in plastic, metal, or ceramic cases to protect the IC from damage and for ease of assembly.

The next great advance in computing power came with the advent of the integrated circuit (IC).
The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.[69]

The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.[70] Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[71] In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated".[72][73] However, Kilby’s invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip.[74] Kilby’s IC had external wire connections, which made it difficult to mass-produce.[75]

Noyce came up with his own idea of an integrated circuit half a year after Kilby.[76] Noyce’s invention was the first true monolithic IC chip.[77][75] His chip solved many practical problems that Kilby’s had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby’s chip was made of germanium. Noyce’s monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Mohamed M. Atalla’s work on semiconductor surface passivation by silicon dioxide in the late 1950s.[78][79][80]

Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors).[81] The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962.[82] General Microelectronics later introduced the first commercial MOS IC in 1964,[83] developed by Robert Norman.[82] Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968.[84] The MOSFET has since become the most critical device component in modern ICs.[81]

The development of the MOS integrated circuit led to the invention of the microprocessor,[85][86] and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004,[87] designed and realized by Federico Faggin with his silicon-gate MOS IC technology,[85] along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b][89] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip.[62]

Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin.[90] They may or may not have integrated RAM and flash memory. If not integrated, the RAM is usually placed directly above (known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC, all to improve data transfer speeds, as the data signals then do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power.

Mobile computers

The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s.[91] The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s.

These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market.[92] These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin.[90]

Types

Computers can be classified in a number of different ways, including:

By architecture

  • Analog computer
  • Digital computer
  • Hybrid computer
  • Harvard architecture
  • Von Neumann architecture
  • Complex instruction set computer
  • Reduced instruction set computer

By size, form-factor and purpose

  • Supercomputer
  • Mainframe computer
  • Minicomputer (term no longer used),[93] Midrange computer
  • Server
    • Rackmount server
    • Blade server
    • Tower server
  • Personal computer
    • Workstation
    • Microcomputer (term no longer used)[94]
      • Home computer (term fallen into disuse)[95]
    • Desktop computer
      • Tower desktop
      • Slimline desktop
        • Multimedia computer (non-linear editing systems, video editing PCs and the like; this term is no longer used)[96]
        • Gaming computer
      • All-in-one PC
      • Nettop (Small form factor PCs, Mini PCs)
      • Home theater PC
      • Keyboard computer
      • Portable computer
      • Thin client
      • Internet appliance
    • Laptop
      • Desktop replacement computer
      • Gaming laptop
      • Rugged laptop
      • 2-in-1 PC
      • Ultrabook
      • Chromebook
      • Subnotebook
      • Netbook
  • Mobile computers:
    • Tablet computer
    • Smartphone
    • Ultra-mobile PC
    • Pocket PC
    • Palmtop PC
    • Handheld PC
  • Wearable computer
    • Smartwatch
    • Smartglasses
  • Single-board computer
  • Plug computer
  • Stick PC
  • Programmable logic controller
  • Computer-on-module
  • System on module
  • System in a package
  • System-on-chip (Also known as an Application Processor or AP if it lacks circuitry such as radio circuitry)
  • Microcontroller

Hardware

Video demonstrating the standard components of a «slimline» computer

The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphics cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware.

History of computing hardware

First generation (mechanical/electromechanical)
  • Calculators: Pascal’s calculator, Arithmometer, Difference engine, Quevedo’s analytical machines
  • Programmable devices: Jacquard loom, Analytical engine, IBM ASCC/Harvard Mark I, Harvard Mark II, IBM SSEC, Z1, Z2, Z3

Second generation (vacuum tubes)
  • Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
  • Programmable devices: Colossus, ENIAC, Manchester Baby, EDSAC, Manchester Mark 1, Ferranti Pegasus, Ferranti Mercury, CSIRAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22

Third generation (discrete transistors and SSI, MSI, LSI integrated circuits)
  • Mainframes: IBM 7090, IBM 7080, IBM System/360, BUNCH
  • Minicomputers: HP 2116A, IBM System/32, IBM System/36, LINC, PDP-8, PDP-11
  • Desktop computers: HP 9100

Fourth generation (VLSI integrated circuits)
  • Minicomputers: VAX, IBM AS/400
  • 4-bit microcomputers: Intel 4004, Intel 4040
  • 8-bit microcomputers: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809, MOS Technology 6502, Zilog Z80
  • 16-bit microcomputers: Intel 8088, Zilog Z8000, WDC 65816/65802
  • 32-bit microcomputers: Intel 80386, Pentium, Motorola 68000, ARM
  • 64-bit microcomputers:[c] Alpha, MIPS, PA-RISC, PowerPC, SPARC, x86-64, ARMv8-A
  • Embedded computers: Intel 8048, Intel 8051
  • Personal computers: Desktop computer, Home computer, Laptop computer, Personal digital assistant (PDA), Portable computer, Tablet PC, Wearable computer

Theoretical/experimental
  • Quantum computer: IBM Q System One
  • Chemical computer
  • DNA computing
  • Optical computer
  • Spintronics-based computer
  • Wetware/organic computer

Other hardware topics

Peripheral devices (input/output)
  • Input: Mouse, keyboard, joystick, image scanner, webcam, graphics tablet, microphone
  • Output: Monitor, printer, loudspeaker
  • Both: Floppy disk drive, hard disk drive, optical disc drive, teleprinter

Computer buses
  • Short range: RS-232, SCSI, PCI, USB
  • Long range (computer networking): Ethernet, ATM, FDDI

A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
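
A toy model can make this concrete. The Python sketch below (with illustrative helper names, not a standard library) treats bits as 0/1 values and composes gates into a full adder, showing how circuits arranged as logic gates can control other circuits:

  # Toy model of logic gates operating on bits (0 or 1).
  # Helper names are illustrative, not a standard library.
  def NOT(a):    return 1 - a
  def AND(a, b): return a & b
  def OR(a, b):  return a | b
  def XOR(a, b): return a ^ b

  # Gates composed into a full adder: three input bits in,
  # a sum bit and a carry bit out.
  def full_adder(a, b, carry_in):
      s = XOR(XOR(a, b), carry_in)
      carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
      return s, carry_out

  print(full_adder(1, 1, 0))  # -> (0, 1): 1 + 1 = binary 10

Chaining such adders bit by bit yields the multi-bit addition an ALU performs.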

Input devices

When unprocessed data is sent to the computer with the help of input devices, the data is processed and sent to output devices. The input devices may be hand-operated or automated. The act of processing is mainly regulated by the CPU. Some examples of input devices are:

  • Computer keyboard
  • Digital camera
  • Digital video
  • Graphics tablet
  • Image scanner
  • Joystick
  • Microphone
  • Mouse
  • Overlay keyboard
  • Real-time clock
  • Trackball
  • Touchscreen
  • Light pen

Output devices

The means through which a computer gives output are known as output devices. Some examples of output devices are:

  • Computer monitor
  • Printer
  • PC speaker
  • Projector
  • Sound card
  • Video card

Control unit

Diagram showing how a particular MIPS architecture instruction would be decoded by the control system

The control unit (often called a control system or central controller) manages the computer’s various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[d] Control systems in advanced computers may change the order of execution of some instructions to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[e]

The control system’s function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU); a toy sketch of the cycle follows the list:

  1. Read the code for the next instruction from the cell indicated by the program counter.
  2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
  3. Increment the program counter so it points to the next instruction.
  4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
  5. Provide the necessary data to an ALU or register.
  6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
  7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
  8. Jump back to step (1).
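
The cycle can be sketched in Python as follows; the three-instruction machine code (LOAD, ADD, HALT) is invented purely for illustration and does not correspond to any real CPU:

  # Toy fetch-decode-execute loop following the steps above.
  # The instruction set is invented for illustration.
  memory = [
      ("LOAD", 0, 7),     # put the number 7 into register 0
      ("LOAD", 1, 35),    # put the number 35 into register 1
      ("ADD",  2, 0, 1),  # register 2 = register 0 + register 1
      ("HALT",),
  ]
  registers = [0, 0, 0]
  pc = 0                                # program counter

  while True:
      instruction = memory[pc]          # step 1: read the next instruction
      opcode, *operands = instruction   # step 2: decode it
      pc += 1                           # step 3: increment the program counter
      if opcode == "LOAD":              # steps 4-7: fetch data, operate, write back
          registers[operands[0]] = operands[1]
      elif opcode == "ADD":
          dest, a, b = operands
          registers[dest] = registers[a] + registers[b]
      elif opcode == "HALT":
          break

  print(registers[2])                   # -> 42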

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).

The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.

Central processing unit (CPU)

The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor.

Arithmetic logic unit (ALU)

The ALU is capable of performing two classes of operations: arithmetic and logic.[97] The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic.
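
The claim that simple operations suffice can be seen directly in code. The Python sketch below multiplies using nothing but addition and comparison, much as a computer whose ALU lacks a hardware multiplier might be programmed (real routines use faster shift-and-add methods; this naive version is for illustration):

  # Multiplication built from operations even a minimal ALU has:
  # addition and comparison. Naive illustrative sketch; practical
  # software routines use faster shift-and-add methods.
  def multiply(a, b):
      result = 0
      count = 0
      while count < b:          # compare: "is count less than b?"
          result = result + a   # add
          count = count + 1     # add
      return result

  print(multiply(6, 7))  # -> 42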

Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously.[98] Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.

Memory

A computer’s memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software’s responsibility to give significance to what the memory sees as nothing but a series of numbers.

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2⁸ = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two’s complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
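
A short Python sketch makes the byte arithmetic concrete: the same eight-bit pattern read both ways, a larger number spread across several consecutive bytes, and a negative number in two’s complement (int.to_bytes is standard Python):

  # One byte holds 256 patterns; the same pattern can be read as
  # 0..255 (unsigned) or -128..+127 (two's complement signed).
  pattern = 0b11111111
  print(pattern)                        # -> 255, read as unsigned
  print(pattern - 256)                  # -> -1,  read as two's complement

  # Larger numbers span several consecutive bytes: here, four.
  n = 123_456_789
  raw = n.to_bytes(4, "little")
  print(int.from_bytes(raw, "little"))  # -> 123456789

  # Negative numbers are stored in two's complement notation.
  print((-1).to_bytes(2, "little", signed=True).hex())  # -> 'ffff'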

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer’s speed.

Computer main memory comes in two principal varieties:

  • random-access memory or RAM
  • read-only memory or ROM

RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer’s initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer’s operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[f]

In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer’s part.
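
As a toy illustration of the idea (not any particular hardware design), the Python sketch below models a tiny direct-mapped cache in front of a slow main memory: each address maps to one cache slot, and a repeat access is served from the cache:

  # Toy direct-mapped cache in front of "slow" main memory.
  # Illustrative only; real caches work on fixed-size lines with
  # more elaborate placement and eviction policies.
  main_memory = {addr: addr * 2 for addr in range(1024)}  # pretend data
  CACHE_SLOTS = 8
  cache = {}                      # slot -> (address, value)
  hits = misses = 0

  def read(address):
      global hits, misses
      slot = address % CACHE_SLOTS              # each address maps to one slot
      if slot in cache and cache[slot][0] == address:
          hits += 1                             # fast path: already cached
          return cache[slot][1]
      misses += 1                               # slow path: fetch from memory
      value = main_memory[address]
      cache[slot] = (address, value)            # keep it for next time
      return value

  for _ in range(3):                            # frequently needed data...
      for address in (5, 6, 7):
          read(address)
  print(hits, misses)                           # -> 6 3: the repeats all hit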

Input/output (I/O)

I/O is the means by which a computer exchanges information with the outside world.[100] Devices that provide input or output to the computer are called peripherals.[101] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry.

Multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e., having the computer switch rapidly between running each program in turn.[102] One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.[103]
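
The time-slicing idea can be sketched in a few lines of Python, with generators standing in for programs and a loop standing in for the interrupt-driven switch; this is a toy model, not how an operating system is actually built:

  # Toy round-robin "time-sharing": each program runs for one slice,
  # is interrupted, and later resumes where it left off. Generators
  # stand in for programs; real systems use hardware interrupts.
  def program(name, steps):
      for i in range(steps):
          print(f"{name}: step {i}")
          yield                         # the "interrupt": give up the CPU

  def scheduler(programs):
      queue = list(programs)
      while queue:
          current = queue.pop(0)
          try:
              next(current)             # run one time slice
              queue.append(current)     # not finished: back of the queue
          except StopIteration:
              pass                      # this program has finished

  scheduler([program("A", 3), program("B", 2)])
  # The output interleaves A and B, as if both ran "at the same time".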

Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a «time slice» until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss.

Multiprocessing

Cray designed many supercomputers that used multiprocessing heavily.

Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.

Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[g] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.

Software

Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. Software is that part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither can be realistically used on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware".

Operating system / system software
  • Unix and BSD: UNIX System V, IBM AIX, HP-UX, Solaris (SunOS), IRIX, List of BSD operating systems
  • Linux: List of Linux distributions, Comparison of Linux distributions
  • Microsoft Windows: Windows 95, Windows 98, Windows NT, Windows 2000, Windows ME, Windows XP, Windows Vista, Windows 7, Windows 8, Windows 8.1, Windows 10, Windows 11
  • DOS: 86-DOS (QDOS), IBM PC DOS, MS-DOS, DR-DOS, FreeDOS
  • Macintosh operating systems: Classic Mac OS, macOS (previously OS X and Mac OS X)
  • Embedded and real-time: List of embedded operating systems
  • Experimental: Amoeba, Oberon–AOS, Bluebottle, A2, Plan 9 from Bell Labs

Library
  • Multimedia: DirectX, OpenGL, OpenAL, Vulkan (API)
  • Programming library: C standard library, Standard Template Library

Data
  • Protocol: TCP/IP, Kermit, FTP, HTTP, SMTP
  • File format: HTML, XML, JPEG, MPEG, PNG

User interface
  • Graphical user interface (WIMP): Microsoft Windows, GNOME, KDE, QNX Photon, CDE, GEM, Aqua
  • Text-based user interface: Command-line interface, Text user interface

Application software
  • Office suite: Word processing, Desktop publishing, Presentation program, Database management system, Scheduling & time management, Spreadsheet, Accounting software
  • Internet access: Browser, Email client, Web server, Mail transfer agent, Instant messaging
  • Design and manufacturing: Computer-aided design, Computer-aided manufacturing, Plant management, Robotic manufacturing, Supply chain management
  • Graphics: Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor, 3D computer graphics, Video editing, Image processing
  • Audio: Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music
  • Software engineering: Compiler, Assembler, Interpreter, Debugger, Text editor, Integrated development environment, Software performance analysis, Revision control, Software configuration management
  • Educational: Edutainment, Educational game, Serious game, Flight simulator
  • Games: Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively multiplayer, Interactive fiction
  • Misc: Artificial intelligence, Antivirus software, Malware scanner, Installer/package management systems, File manager

Languages

There are thousands of different programming languages—some intended for general purpose, others useful for only highly specialized applications.

Programming languages

  • Lists of programming languages: Timeline of programming languages, List of programming languages by category, Generational list of programming languages, List of programming languages, Non-English-based programming languages
  • Commonly used assembly languages: ARM, MIPS, x86
  • Commonly used high-level programming languages: Ada, BASIC, C, C++, C#, COBOL, Fortran, PL/I, REXX, Java, Lisp, Pascal, Object Pascal
  • Commonly used scripting languages: Bourne script, JavaScript, Python, Ruby, PHP, Perl

Programs

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.

Stored program architecture

This section applies to most common RAM machine–based computers.

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer’s memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language:

  begin:
  addi $8, $0, 0           # initialize sum to 0
  addi $9, $0, 1           # set first number to add = 1
  loop:
  slti $10, $9, 1001       # check whether the number is still <= 1000
  beq $10, $0, finish      # if not (the number exceeds 1000), exit the loop
  add $8, $8, $9           # add the number to the running sum
  addi $9, $9, 1           # get the next number
  j loop                   # repeat the summing process
  finish:
  add $2, $8, $0           # put the sum in the output register

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second.

Machine code

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer’s memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer’s memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture.[105][106] In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.

While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[h] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer’s assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
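
At its core, the translation an assembler performs is a table lookup from mnemonic to opcode. The Python sketch below assembles a few instructions of a hypothetical machine; the mnemonics and opcode numbers are invented for illustration:

  # Minimal sketch of an assembler: translate mnemonics into numeric
  # machine code. The opcode table is invented for illustration and
  # does not correspond to any real CPU.
  OPCODES = {"ADD": 0x01, "SUB": 0x02, "MULT": 0x03, "JUMP": 0x04}

  def assemble(source):
      machine_code = []
      for line in source.strip().splitlines():
          mnemonic, operand = line.split()
          machine_code += [OPCODES[mnemonic], int(operand)]
      return machine_code

  print(assemble("ADD 7\nMULT 3\nJUMP 0"))  # -> [1, 7, 3, 3, 4, 0]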

A 1970s punched card containing one line from a Fortran program. The card reads: "Z(1) = Y + W(1)" and is labeled "PROJ039" for identification purposes.

Programming language

Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques.

Low-level languages

Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer’s central processing unit (CPU). For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[i] Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80.

High-level languages

Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually «compiled» into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[j] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
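
A toy compiler for a single statement form can be sketched in a few lines of Python, targeting the invented mnemonics from the earlier sketches. A real compiler differs enormously in scale, but not in kind; retargeting it to another machine means swapping the translation table and back end, which is why one program can be compiled for many architectures:

  import re

  def compile_statement(stmt, addresses):
      # Handles only the form "result = x + y", purely for illustration.
      result, x, y = re.match(r"(\w+) = (\w+) \+ (\w+)", stmt).groups()
      return [f"LOAD {addresses[x]}",
              f"ADD {addresses[y]}",
              f"STORE {addresses[result]}"]

  print(compile_statement("total = a + b", {"a": 8, "b": 9, "total": 10}))
  # prints ['LOAD 8', 'ADD 9', 'STORE 10']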

Program design

Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, use of the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices, and solutions to the problem as applicable.[107] As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered.[108] Large programs involving thousands of lines of code and more require formal software methodologies.[109] The task of developing large software systems presents a significant intellectual challenge.[110] Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult;[111] the academic and professional discipline of software engineering concentrates specifically on this challenge.[112]

Bugs

The actual first computer bug, a moth found trapped on a relay of the Harvard Mark II computer

Errors in computer programs are called «bugs». They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to «hang», becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash.[113] Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer’s proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program’s design.[k] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with first using the term «bugs» in computing, after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.[114]
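
A minimal, hypothetical example of such a programmer error is the classic off-by-one mistake, where a loop bound falls one step short of the programmer’s intent:

  def sum_to(n):
      """Intended to sum the numbers 1 through n."""
      total = 0
      for i in range(1, n):  # bug: range(1, n) stops at n - 1
          total += i
      return total

  print(sum_to(5))  # prints 10, not the intended 15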

Networking and the Internet

Visualization of a portion of the routes on the Internet

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military’s SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre.[115] In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.[116] The technologies that made the ARPANET possible spread and evolved.

In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. «Wireless» networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

Unconventional computers

A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word «computer» is synonymous with a personal electronic computer,[l] a typical modern definition of a computer is: «A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information.»[117] According to this definition, any device that processes information qualifies as a computer.

Future

There is active research to make computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal and are able to calculate any computable function; they are limited only by their memory capacity and operating speed. However, different designs of computers can give very different performance for particular problems; for example, quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.

Computer architecture paradigms

There are many types of computer architectures:

  • Quantum computer vs. Chemical computer
  • Scalar processor vs. Vector processor
  • Non-Uniform Memory Access (NUMA) computers
  • Register machine vs. Stack machine
  • Harvard architecture vs. von Neumann architecture
  • Cellular architecture

Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing.[118] Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.
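
How little capability suffices is striking. The sketch below interprets subleq («subtract and branch if less than or equal to zero»), a well-known one-instruction machine that is Turing-complete given unbounded memory; the sample memory image uses that single instruction three times to add 3 and 4:

  def subleq(mem):
      """Run a subleq program: mem[b] -= mem[a]; jump to c if result <= 0."""
      pc = 0
      while 0 <= pc < len(mem):
          a, b, c = mem[pc:pc + 3]
          mem[b] -= mem[a]
          pc = c if mem[b] <= 0 else pc + 3
      return mem

  # Cells 0-8 hold three instructions; cells 9-11 hold the data 3, 4, 0.
  mem = [9, 11, 3,    # z -= A            (z becomes -3)
         11, 10, 6,   # B -= z            (B becomes 4 + 3 = 7)
         11, 11, -1,  # z -= z, then halt (negative target ends the loop)
         3, 4, 0]
  print(subleq(mem)[10])  # prints 7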

Artificial intelligence

A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning. Artificial intelligence based products generally fall into two major categories: rule-based systems and pattern recognition systems. Rule-based systems attempt to represent the rules used by human experts and tend to be expensive to develop. Pattern-based systems use data about a problem to generate conclusions. Examples of pattern-based systems include voice recognition, font recognition, translation and the emerging field of on-line marketing.
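
A minimal sketch of the contrast, using invented toy data: the first function encodes an expert’s hand-written rules, while the second derives its rules from labeled examples:

  def rule_based_spam(msg):
      # Hand-written expert rules: expensive to write and maintain.
      return "free" in msg.lower() or "winner" in msg.lower()

  def train_pattern_based(examples):
      # Derive the "rules" from labeled data instead of writing them by hand.
      spam_words = set()
      for text, is_spam in examples:
          if is_spam:
              spam_words.update(text.lower().split())
      return lambda msg: any(w in spam_words for w in msg.lower().split())

  examples = [("free prize winner", True), ("meeting at noon", False)]
  pattern_based_spam = train_pattern_based(examples)
  print(rule_based_spam("You are a WINNER"))     # True
  print(pattern_based_spam("claim your prize"))  # True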

Professions and organizations

As the use of computers has spread throughout society, an increasing number of careers involve computers.

Computer-related professions

Hardware-related: Electrical engineering, Electronic engineering, Computer engineering, Telecommunications engineering, Optical engineering, Nanoengineering
Software-related: Computer science, Computer engineering, Desktop publishing, Human–computer interaction, Information technology, Information systems, Computational science, Software engineering, Video game industry, Web design

The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.

Organizations

Standards groups: ANSI, IEC, IEEE, IETF, ISO, W3C
Professional societies: ACM, AIS, IET, IFIP, BCS
Free/open source software groups: Free Software Foundation, Mozilla Foundation, Apache Software Foundation

See also

  • Computability theory
  • Computer security
  • Glossary of computer hardware terms
  • History of computer science
  • List of computer term etymologies
  • List of computer system manufacturers
  • List of fictional computers
  • List of films about computers
  • List of pioneers in computer science
  • Pulse computation
  • TOP500 (list of most powerful computers)
  • Unconventional computing

Notes

  1. ^ According to Schmandt-Besserat 1981, these clay containers contained tokens, the total of which were the count of objects being transferred. The containers thus served as something of a bill of lading or an accounts book. In order to avoid breaking open the containers, first, clay impressions of the tokens were placed on the outside of the containers, for the count; the shapes of the impressions were abstracted into stylized marks; finally, the abstract marks were systematically used as numerals; these numerals were finally formalized as numbers.
    Eventually the marks on the outside of the containers were all that were needed to convey the count, and the clay containers evolved into clay tablets with marks for the count. Schmandt-Besserat 1999 estimates it took 4000 years.
  2. ^ The Intel 4004 (1971) die was 12 mm2, composed of 2300 transistors; by comparison, the Pentium Pro was 306 mm2, composed of 5.5 million transistors.[88]
  3. ^ Most major 64-bit instruction set architectures are extensions of earlier designs. All of the architectures listed in this table, except for Alpha, existed in 32-bit forms before their 64-bit incarnations were introduced.
  4. ^ The control unit’s role in interpreting instructions has varied somewhat in the past. Although the control unit is solely responsible for instruction interpretation in most modern computers, this is not always the case. Some computers have instructions that are partially interpreted by the control unit with further interpretation performed by another device. For example, EDVAC, one of the earliest stored-program computers, used a central control unit that interpreted only four instructions. All of the arithmetic-related instructions were passed on to its arithmetic unit and further decoded there.
  5. ^ Instructions often occupy more than one memory address, therefore the program counter usually increases by the number of memory locations required to store one instruction.
  6. ^ Flash memory also may only be rewritten a limited number of times before wearing out, making it less useful for heavy random access usage.[99]
  7. ^ However, it is also very common to construct supercomputers out of many pieces of cheap commodity hardware; usually individual computers connected by networks. These so-called computer clusters can often provide supercomputer performance at a much lower cost than customized designs. While custom architectures are still used for most of the most powerful supercomputers, there has been a proliferation of cluster computers in recent years.[104]
  8. ^ Even some later computers were commonly programmed directly in machine code. Some minicomputers like the DEC PDP-8 could be programmed directly from a panel of switches. However, this method was usually used only as part of the booting process. Most modern computers boot entirely automatically by reading a boot program from some non-volatile memory.
  9. ^ However, there is sometimes some form of machine language compatibility between different computers. An x86-64 compatible microprocessor like the AMD Athlon 64 is able to run most of the same programs that an Intel Core 2 microprocessor can, as well as programs designed for earlier microprocessors like the Intel Pentiums and Intel 80486. This contrasts with very early commercial computers, which were often one-of-a-kind and totally incompatible with other computers.
  10. ^ High level languages are also often interpreted rather than compiled. Interpreted languages are translated into machine code on the fly, while running, by another program called an interpreter.
  11. ^ It is not universally true that bugs are solely due to programmer oversight. Computer hardware may fail or may itself have a fundamental problem that produces unexpected results in certain situations. For instance, the Pentium FDIV bug caused some Intel microprocessors in the early 1990s to produce inaccurate results for certain floating point division operations. This was caused by a flaw in the microprocessor design and resulted in a partial recall of the affected devices.
  12. ^ According to the Shorter Oxford English Dictionary (6th ed, 2007), the word computer dates back to the mid 17th century, when it referred to «A person who makes calculations; specifically a person employed for this in an observatory etc.»

References

  1. ^ Evans 2018, p. 23.
  2. ^ a b Smith 2013, p. 6.
  3. ^ «computer (n.)». Online Etymology Dictionary. Archived from the original on 16 November 2016. Retrieved 19 August 2021.
  4. ^ Robson, Eleanor (2008). Mathematics in Ancient Iraq. p. 5. ISBN 978-0-691-09182-2.: calculi were in use in Iraq for primitive accounting systems as early as 3200–3000 BCE, with commodity-specific counting representation systems. Balanced accounting was in use by 3000–2350 BCE, and a sexagesimal number system was in use 2350–2000 BCE.
  5. ^ Flegg, Graham (1989). Numbers through the ages (1st ed.). Houndmills, Basingstoke, Hampshire: Macmillan Education. ISBN 0-333-49130-0. OCLC 24660570.
  6. ^ The Antikythera Mechanism Research Project Archived 28 April 2008 at the Wayback Machine, The Antikythera Mechanism Research Project. Retrieved 1 July 2007.
  7. ^ Marchant, Jo (1 November 2006). «In search of lost time». Nature. 444 (7119): 534–538. Bibcode:2006Natur.444..534M. doi:10.1038/444534a. PMID 17136067. S2CID 4305761.
  8. ^ G. Wiet, V. Elisseeff, P. Wolff, J. Naudu (1975). History of Mankind, Vol 3: The Great medieval Civilisations, p. 649. George Allen & Unwin Ltd, UNESCO.
  9. ^ Fuat Sezgin «Catalogue of the Exhibition of the Institute for the History of Arabic-Islamic Science (at the Johann Wolfgang Goethe University», Frankfurt, Germany) Frankfurt Book Fair 2004, pp. 35 & 38.
  10. ^ Charette, François (2006). «Archaeology: High tech from Ancient Greece». Nature. 444 (7119): 551–552. Bibcode:2006Natur.444..551C. doi:10.1038/444551a. PMID 17136077. S2CID 33513516.
  11. ^ Bedini, Silvio A.; Maddison, Francis R. (1966). «Mechanical Universe: The Astrarium of Giovanni de’ Dondi». Transactions of the American Philosophical Society. 56 (5): 1–69. doi:10.2307/1006002. JSTOR 1006002.
  12. ^ Price, Derek de S. (1984). «A History of Calculating Machines». IEEE Micro. 4 (1): 22–52. doi:10.1109/MM.1984.291305.
  13. ^ Őren, Tuncer (2001). «Advances in Computer and Information Sciences: From Abacus to Holonic Agents» (PDF). Turk J Elec Engin. 9 (1): 63–70. Archived (PDF) from the original on 15 September 2009. Retrieved 21 April 2016.
  14. ^ Donald Routledge Hill (1985). «Al-Biruni’s mechanical calendar», Annals of Science 42, pp. 139–163.
  15. ^ «The Writer Automaton, Switzerland». chonday.com. 11 July 2013. Archived from the original on 20 February 2015. Retrieved 28 January 2015.
  16. ^ a b Ray Girvan, «The revealed grace of the mechanism: computing after Babbage» Archived 3 November 2012 at the Wayback Machine, Scientific Computing World, May/June 2003
  17. ^ Halacy, Daniel Stephen (1970). Charles Babbage, Father of the Computer. Crowell-Collier Press. ISBN 978-0-02-741370-0.
  18. ^ «Babbage». Online stuff. Science Museum. 19 January 2007. Archived from the original on 7 August 2012. Retrieved 1 August 2012.
  19. ^ Graham-Cumming, John (23 December 2010). «Let’s build Babbage’s ultimate mechanical computer». opinion. New Scientist. Archived from the original on 5 August 2012. Retrieved 1 August 2012.
  20. ^ a b c d The Modern History of Computing. Stanford Encyclopedia of Philosophy. 2017. Archived from the original on 12 July 2010. Retrieved 7 January 2014.
  21. ^ Zuse, Horst. «Part 4: Konrad Zuse’s Z1 and Z3 Computers». The Life and Work of Konrad Zuse. EPE Online. Archived from the original on 1 June 2008. Retrieved 17 June 2008.
  22. ^ Bellis, Mary (15 May 2019) [First published 2006 at inventors.about.com/library/weekly/aa050298.htm]. «Biography of Konrad Zuse, Inventor and Programmer of Early Computers». thoughtco.com. Dotdash Meredith. Archived from the original on 13 December 2020. Retrieved 3 February 2021. Konrad Zuse earned the semiofficial title of ‘inventor of the modern computer’
  23. ^ «Who is the Father of the Computer?». www.computerhope.com.
  24. ^ Zuse, Konrad (2010) [1984]. The Computer – My Life Translated by McKenna, Patricia and Ross, J. Andrew from: Der Computer, mein Lebenswerk (1984). Berlin/Heidelberg: Springer-Verlag. ISBN 978-3-642-08151-4.
  25. ^ Salz Trautman, Peggy (20 April 1994). «A Computer Pioneer Rediscovered, 50 Years On». The New York Times. Archived from the original on 4 November 2016. Retrieved 15 February 2017.
  26. ^ Zuse, Konrad (1993). Der Computer. Mein Lebenswerk (in German) (3rd ed.). Berlin: Springer-Verlag. p. 55. ISBN 978-3-540-56292-4.
  27. ^ «Crash! The Story of IT: Zuse». Archived from the original on 18 September 2016. Retrieved 1 June 2016.
  28. ^ Rojas, R. (1998). «How to make Zuse’s Z3 a universal computer». IEEE Annals of the History of Computing. 20 (3): 51–54. doi:10.1109/85.707574. S2CID 14606587.
  29. ^ Rojas, Raúl. «How to Make Zuse’s Z3 a Universal Computer» (PDF). fu-berlin.de. Archived (PDF) from the original on 9 August 2017. Retrieved 28 September 2015.
  30. ^ a b O’Regan, Gerard (2010). A Brief History of Computing. Springer Nature. p. 65. ISBN 9783030665999.
  31. ^ «notice». Des Moines Register. 15 January 1941.
  32. ^ Arthur W. Burks (1989). The First Electronic Computer. ISBN 0472081047. Archived from the original on 29 July 2020. Retrieved 1 June 2019.
  33. ^ a b c d Copeland, Jack (2006). Colossus: The Secrets of Bletchley Park’s Codebreaking Computers. Oxford: Oxford University Press. pp. 101–115. ISBN 978-0-19-284055-4.
  34. ^ Miller, Joe (10 November 2014). «The woman who cracked Enigma cyphers». BBC News. Archived from the original on 10 November 2014. Retrieved 14 October 2018.
  35. ^ Bearne, Suzanne (24 July 2018). «Meet the female codebreakers of Bletchley Park». The Guardian. Archived from the original on 7 February 2019. Retrieved 14 October 2018.
  36. ^ «Bletchley’s code-cracking Colossus». BBC. Archived from the original on 4 February 2010. Retrieved 24 November 2021.
  37. ^ «Colossus – The Rebuild Story». The National Museum of Computing. Archived from the original on 18 April 2015. Retrieved 7 January 2014.
  38. ^ Randell, Brian; Fensom, Harry; Milne, Frank A. (15 March 1995). «Obituary: Allen Coombs». The Independent. Archived from the original on 3 February 2012. Retrieved 18 October 2012.
  39. ^ Fensom, Jim (8 November 2010). «Harry Fensom obituary». The Guardian. Archived from the original on 17 September 2013. Retrieved 17 October 2012.
  40. ^ John Presper Eckert Jr. and John W. Mauchly, Electronic Numerical Integrator and Computer, United States Patent Office, US Patent 3,120,606, filed 26 June 1947, issued 4 February 1964, and invalidated 19 October 1973 after court ruling on Honeywell v. Sperry Rand.
  41. ^ Evans 2018, p. 39.
  42. ^ Light 1999, p. 459.
  43. ^ «Generations of Computer». techiwarehouse.com. Archived from the original on 2 July 2015. Retrieved 7 January 2014.
  44. ^ Turing, A. M. (1937). «On Computable Numbers, with an Application to the Entscheidungsproblem». Proceedings of the London Mathematical Society. 2. 42 (1): 230–265. doi:10.1112/plms/s2-42.1.230. S2CID 73712.
  45. ^ Copeland, Jack (2004). The Essential Turing. p. 22: von Neumann … firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing—insofar as not anticipated by Babbage, Lovelace and others. Letter by Stanley Frankel to Brian Randell, 1972.
  46. ^ Enticknap, Nicholas (Summer 1998). «Computing’s Golden Jubilee». Resurrection (20). ISSN 0958-7403. Archived from the original on 9 January 2012. Retrieved 19 April 2008.
  47. ^ «Early computers at Manchester University». Resurrection. 1 (4). Summer 1992. ISSN 0958-7403. Archived from the original on 28 August 2017. Retrieved 7 July 2010.
  48. ^ «Early Electronic Computers (1946–51)». University of Manchester. Archived from the original on 5 January 2009. Retrieved 16 November 2008.
  49. ^ Napper, R. B. E. «Introduction to the Mark 1». The University of Manchester. Archived from the original on 26 October 2008. Retrieved 4 November 2008.
  50. ^ «Our Computer Heritage Pilot Study: Deliveries of Ferranti Mark I and Mark I Star computers». Computer Conservation Society. Archived from the original on 11 December 2016. Retrieved 9 January 2010.
  51. ^ Lavington, Simon. «A brief history of British computers: the first 25 years (1948–1973)». British Computer Society. Archived from the original on 5 July 2010. Retrieved 10 January 2010.
  52. ^ Lee, Thomas H. (2003). The Design of CMOS Radio-Frequency Integrated Circuits (PDF). Cambridge University Press. ISBN 9781139643771. Archived from the original (PDF) on 9 December 2019. Retrieved 31 July 2019.
  53. ^ Puers, Robert; Baldi, Livio; Voorde, Marcel Van de; Nooten, Sebastiaan E. van (2017). Nanoelectronics: Materials, Devices, Applications, 2 Volumes. John Wiley & Sons. p. 14. ISBN 9783527340538. Archived from the original on 3 March 2020. Retrieved 31 July 2019.
  54. ^ a b Moskowitz, Sanford L. (2016). Advanced Materials Innovation: Managing Global Technology in the 21st century. John Wiley & Sons. pp. 165–167. ISBN 9780470508923. Archived from the original on 3 March 2020. Retrieved 28 August 2019.
  55. ^ Lavington 1998, pp. 34–35.
  56. ^ a b Cooke-Yarborough, E. H. (June 1998). «Some early transistor applications in the UK». Engineering Science & Education Journal. 7 (3): 100–106. doi:10.1049/esej:19980301. ISSN 0963-7346. Archived from the original on 8 November 2020. Retrieved 7 June 2009. (subscription required)
  57. ^ Cooke-Yarborough, E.H. (1957). Introduction to Transistor Circuits. Edinburgh: Oliver and Boyd. p. 139.
  58. ^ «1960: Metal Oxide Semiconductor (MOS) Transistor Demonstrated». The Silicon Engine: A Timeline of Semiconductors in Computers. Computer History Museum. Archived from the original on 27 October 2019. Retrieved 31 August 2019.
  59. ^ Motoyoshi, M. (2009). «Through-Silicon Via (TSV)». Proceedings of the IEEE. 97 (1): 43–48. doi:10.1109/JPROC.2008.2007462. ISSN 0018-9219. S2CID 29105721.
  60. ^ Young, Ian (12 December 2018). «Transistors Keep Moore’s Law Alive». EETimes. Archived from the original on 24 September 2019. Retrieved 18 July 2019.
  61. ^ Laws, David (4 December 2013). «Who Invented the Transistor?». Computer History Museum. Archived from the original on 13 December 2013. Retrieved 20 July 2019.
  62. ^ a b Hittinger, William C. (1973). «Metal-Oxide-Semiconductor Technology». Scientific American. 229 (2): 48–59. Bibcode:1973SciAm.229b..48H. doi:10.1038/scientificamerican0873-48. ISSN 0036-8733. JSTOR 24923169.
  63. ^ Malmstadt, Howard V.; Enke, Christie G.; Crouch, Stanley R. (1994). Making the Right Connections: Microcomputers and Electronic Instrumentation. American Chemical Society. p. 389. ISBN 9780841228610. Archived from the original on 30 December 2019. Retrieved 28 August 2019. The relative simplicity and low power requirements of MOSFETs have fostered today’s microcomputer revolution.
  64. ^ Fossum, Jerry G.; Trivedi, Vishal P. (2013). Fundamentals of Ultra-Thin-Body MOSFETs and FinFETs. Cambridge University Press. p. vii. ISBN 9781107434493. Archived from the original on 3 March 2020. Retrieved 28 August 2019.
  65. ^ Marriott, J.W. (10 June 2019). «Remarks by Director Iancu at the 2019 International Intellectual Property Conference». United States Patent and Trademark Office. Archived from the original on 17 December 2019. Retrieved 20 July 2019.
  66. ^ «Dawon Kahng». National Inventors Hall of Fame. Archived from the original on 27 October 2019. Retrieved 27 June 2019.
  67. ^ «Martin Atalla in Inventors Hall of Fame, 2009». Archived from the original on 19 September 2019. Retrieved 21 June 2013.
  68. ^ «Triumph of the MOS Transistor». YouTube. Computer History Museum. 6 August 2010. Archived from the original on 18 August 2021. Retrieved 21 July 2019.
  69. ^ «The Hapless Tale of Geoffrey Dummer» Archived 11 May 2013 at the Wayback Machine, (n.d.), (HTML), Electronic Product News, accessed 8 July 2008.
  70. ^ Kilby, Jack (2000). «Nobel lecture» (PDF). Stockholm: Nobel Foundation. Archived (PDF) from the original on 29 May 2008. Retrieved 15 May 2008.
  71. ^ The Chip that Jack Built Archived 1 May 2015 at the Wayback Machine, (c. 2008), (HTML), Texas Instruments, Retrieved 29 May 2008.
  72. ^ Jack S. Kilby, Miniaturized Electronic Circuits, United States Patent Office, US Patent 3,138,743, filed 6 February 1959, issued 23 June 1964.
  73. ^ Winston, Brian (1998). Media Technology and Society: A History : From the Telegraph to the Internet. Routledge. p. 221. ISBN 978-0-415-14230-4. Archived from the original on 29 July 2020. Retrieved 6 June 2020.
  74. ^ Saxena, Arjun N. (2009). Invention of Integrated Circuits: Untold Important Facts. World Scientific. p. 140. ISBN 9789812814456. Archived from the original on 29 July 2020. Retrieved 28 August 2019.
  75. ^ a b «Integrated circuits». NASA. Archived from the original on 21 July 2019. Retrieved 13 August 2019.
  76. ^ Robert Noyce’s Unitary circuit, US patent 2981877, «Semiconductor device-and-lead structure», issued 1961-04-25, assigned to Fairchild Semiconductor Corporation.
  77. ^ «1959: Practical Monolithic Integrated Circuit Concept Patented». Computer History Museum. Archived from the original on 24 October 2019. Retrieved 13 August 2019.
  78. ^ Lojek, Bo (2007). History of Semiconductor Engineering. Springer Science & Business Media. p. 120. ISBN 9783540342588.
  79. ^ Bassett, Ross Knox (2007). To the Digital Age: Research Labs, Start-up Companies, and the Rise of MOS Technology. Johns Hopkins University Press. p. 46. ISBN 9780801886393. Archived from the original on 27 July 2020. Retrieved 31 July 2019.
  80. ^ Huff, Howard R.; Tsuya, H.; Gösele, U. (1998). Silicon Materials Science and Technology: Proceedings of the Eighth International Symposium on Silicon Materials Science and Technology. Electrochemical Society. pp. 181–182. ISBN 9781566771931. Archived from the original on 12 May 2020. Retrieved 28 August 2019.
  81. ^ a b Kuo, Yue (1 January 2013). «Thin Film Transistor Technology—Past, Present, and Future» (PDF). The Electrochemical Society Interface. 22 (1): 55–61. Bibcode:2013ECSIn..22a..55K. doi:10.1149/2.F06131if. ISSN 1064-8208. Archived (PDF) from the original on 29 August 2017. Retrieved 31 July 2019.
  82. ^ a b «Tortoise of Transistors Wins the Race — CHM Revolution». Computer History Museum. Archived from the original on 10 March 2020. Retrieved 22 July 2019.
  83. ^ «1964 – First Commercial MOS IC Introduced». Computer History Museum. Archived from the original on 22 December 2015. Retrieved 31 July 2019.
  84. ^ «1968: Silicon Gate Technology Developed for ICs». Computer History Museum. Archived from the original on 29 July 2020. Retrieved 22 July 2019.
  85. ^ a b «1971: Microprocessor Integrates CPU Function onto a Single Chip». Computer History Museum. Archived from the original on 12 August 2021. Retrieved 22 July 2019.
  86. ^ Colinge, Jean-Pierre; Greer, James C. (2016). Nanowire Transistors: Physics of Devices and Materials in One Dimension. Cambridge University Press. p. 2. ISBN 9781107052406. Archived from the original on 17 March 2020. Retrieved 31 July 2019.
  87. ^ «Intel’s First Microprocessor—the Intel 4004». Intel Corp. November 1971. Archived from the original on 13 May 2008. Retrieved 17 May 2008.
  88. ^ Patterson, David; Hennessy, John (1998). Computer Organization and Design. San Francisco: Morgan Kaufmann. pp. 27–39. ISBN 978-1-55860-428-5.
  89. ^ Federico Faggin, The Making of the First Microprocessor Archived 27 October 2019 at the Wayback Machine, IEEE Solid-State Circuits Magazine, Winter 2009, IEEE Xplore
  90. ^ a b «7 dazzling smartphone improvements with Qualcomm’s Snapdragon 835 chip». 3 January 2017. Archived from the original on 30 September 2019. Retrieved 5 April 2019.
  91. ^ Chartier, David (23 December 2008). «Global notebook shipments finally overtake desktops». Ars Technica. Archived from the original on 4 July 2017. Retrieved 14 June 2017.
  92. ^ IDC (25 July 2013). «Growth Accelerates in the Worldwide Mobile Phone and Smartphone Markets in the Second Quarter, According to IDC». Archived from the original on 26 June 2014.
  93. ^ «Google Books Ngram Viewer». books.google.com.
  94. ^ «Google Books Ngram Viewer». books.google.com.
  95. ^ «Google Books Ngram Viewer». books.google.com.
  96. ^ «Google Books Ngram Viewer». books.google.com.
  97. ^ David J. Eck (2000). The Most Complex Machine: A Survey of Computers and Computing. A K Peters, Ltd. p. 54. ISBN 978-1-56881-128-4.
  98. ^ Erricos John Kontoghiorghes (2006). Handbook of Parallel Computing and Statistics. CRC Press. p. 45. ISBN 978-0-8247-4067-2.
  99. ^ Verma & Mielke 1988.
  100. ^ Donald Eadie (1968). Introduction to the Basic Computer. Prentice-Hall. p. 12.
  101. ^ Arpad Barna; Dan I. Porat (1976). Introduction to Microcomputers and the Microprocessors. Wiley. p. 85. ISBN 978-0-471-05051-3.
  102. ^ Jerry Peek; Grace Todino; John Strang (2002). Learning the UNIX Operating System: A Concise Guide for the New User. O’Reilly. p. 130. ISBN 978-0-596-00261-9.
  103. ^ Gillian M. Davis (2002). Noise Reduction in Speech Applications. CRC Press. p. 111. ISBN 978-0-8493-0949-6.
  104. ^ TOP500 2006.
  105. ^ Cragon, Harvey (2000). Computer Architecture and Implementation. Cambridge University Press. p. 5. ISBN 9780521651684. Archived from the original on 30 July 2022. Retrieved 10 June 2022.
  106. ^ Xu, Zhiwei; Zhang, Jialin (2021). Computational Thinking: A Perspective on Computer Science. Singapore: Springer. p. 60. ISBN 9789811638480. Archived from the original on 30 July 2022. Retrieved 10 June 2022. It is called the stored program architecture or stored program model, also known as the von Neumann architecture. We will use these terms interchangeably.
  107. ^ Ronald J. Leach (27 January 2016). Introduction to Software Engineering. CRC Press. p. 11. ISBN 978-1-4987-0528-8. Retrieved 26 November 2022.
  108. ^ Hong Zhu (22 March 2005). Software Design Methodology: From Principles to Architectural Styles. Elsevier. pp. 47–72. ISBN 978-0-08-045496-2. Retrieved 26 November 2022.
  109. ^ Ronald J. Leach (27 January 2016). Introduction to Software Engineering. CRC Press. p. 56. ISBN 978-1-4987-0528-8. Retrieved 26 November 2022.
  110. ^ John Knight (12 January 2012). Fundamentals of Dependable Computing for Software Engineers. CRC Press. p. 186. ISBN 978-1-4665-1821-6. Retrieved 26 November 2022.
  111. ^ Frederick P. Brooks (Jr.) (1975). The Mythical Man-month: Essays on Software Engineering. Addison-Wesley Publishing Company. ISBN 978-0-201-00650-6. Retrieved 26 November 2022.
  112. ^ Ian Sommerville (2007). Software Engineering. Pearson Education. pp. 4–17. ISBN 978-0-321-31379-9. Retrieved 26 November 2022.
  113. ^ «Why do computers crash?». Scientific American. Archived from the original on 1 May 2018. Retrieved 3 March 2022.
  114. ^ Taylor, Alexander L., III (16 April 1984). «The Wizard Inside the Machine». Time. Archived from the original on 16 March 2007. Retrieved 17 February 2007.
  115. ^ Agatha C. Hughes (2000). Systems, Experts, and Computers. MIT Press. p. 161. ISBN 978-0-262-08285-3. The experience of SAGE helped make possible the first truly large-scale commercial real-time network: the SABRE computerized airline reservations system
  116. ^ Leiner, Barry M.; Cerf, Vinton G.; Clark, David D.; Kahn, Robert E.; Kleinrock, Leonard; Lynch, Daniel C.; Postel, Jon; Roberts, Larry G.; Wolf, Stephen (1999). «A Brief History of the Internet». arXiv:cs/9901011.
  117. ^ «Definition of computer». Thefreedictionary.com. Archived from the original on 26 December 2009. Retrieved 29 January 2012.
  118. ^ II, Joseph D. Dumas (2005). Computer Architecture: Fundamentals and Principles of Computer Design. CRC Press. p. 340. ISBN 9780849327490. Archived from the original on 23 June 2021. Retrieved 9 November 2020.

Sources

  • Evans, Claire L. (2018). Broad Band: The Untold Story of the Women Who Made the Internet. New York: Portfolio/Penguin. ISBN 9780735211759. Archived from the original on 28 February 2021. Retrieved 9 November 2020.
  • Fuegi, J.; Francis, J. (2003). «Lovelace & Babbage and the creation of the 1843 ‘notes’». IEEE Annals of the History of Computing. 25 (4): 16. doi:10.1109/MAHC.2003.1253887. S2CID 40077111.
  • Kempf, Karl (1961). Historical Monograph: Electronic Computers Within the Ordnance Corps. Aberdeen Proving Ground (United States Army). Archived from the original on 16 October 2006. Retrieved 24 October 2006.
  • Phillips, Tony (2000). «The Antikythera Mechanism I». American Mathematical Society. Archived from the original on 27 April 2006. Retrieved 5 April 2006.
  • Shannon, Claude Elwood (1940). A symbolic analysis of relay and switching circuits (Thesis). Massachusetts Institute of Technology. hdl:1721.1/11173.
  • Digital Equipment Corporation (1972). PDP-11/40 Processor Handbook (PDF). Maynard, MA: Digital Equipment Corporation. Archived (PDF) from the original on 1 December 2017. Retrieved 27 November 2017.
  • Swade, Doron D. (February 1993). «Redeeming Charles Babbage’s Mechanical Computer». Scientific American. 268 (2): 86–91. Bibcode:1993SciAm.268b..86S. doi:10.1038/scientificamerican0293-86. JSTOR 24941379.
  • Meuer, Hans; Strohmaier, Erich; Simon, Horst; Dongarra, Jack (13 November 2006). «Architectures Share Over Time». TOP500. Archived from the original on 20 February 2007. Retrieved 27 November 2006.
  • Lavington, Simon (1998). A History of Manchester Computers (2nd ed.). Swindon: The British Computer Society. ISBN 978-0-902505-01-8.
  • Light, Jennifer S. (1999). «When Computers Were Women». Technology and Culture. 40 (3): 455–483. doi:10.1353/tech.1999.0128. JSTOR 25147356. S2CID 108407884.
  • Schmandt-Besserat, Denise (1999). «Tokens: The Cognitive Significance». Documenta Praehistorica. XXVI. Archived from the original on 30 January 2012.
  • Schmandt-Besserat, Denise (1981). «Decipherment of the earliest tablets». Science. 211 (4479): 283–285. Bibcode:1981Sci...211..283S. doi:10.1126/science.211.4479.283. PMID 17748027.
  • Stokes, Jon (2007). Inside the Machine: An Illustrated Introduction to Microprocessors and Computer Architecture. San Francisco: No Starch Press. ISBN 978-1-59327-104-6.
  • Zuse, Konrad (1993). The Computer – My life. Berlin: Springer-Verlag. ISBN 978-0-387-56453-1.
  • Felt, Dorr E. (1916). Mechanical arithmetic, or The history of the counting machine. Chicago: Washington Institute.
  • Ifrah, Georges (2001). The Universal History of Computing: From the Abacus to the Quantum Computer. New York: John Wiley & Sons. ISBN 978-0-471-39671-0.
  • Berkeley, Edmund (1949). Giant Brains, or Machines That Think. John Wiley & Sons.
  • Cohen, Bernard (2000). «Howard Aiken, Portrait of a computer pioneer». Physics Today. Cambridge, Massachusetts: The MIT Press. 53 (3): 74–75. Bibcode:2000PhT...53c..74C. doi:10.1063/1.883007. ISBN 978-0-262-53179-5.
  • Ligonnière, Robert (1987). Préhistoire et Histoire des ordinateurs. Paris: Robert Laffont. ISBN 978-2-221-05261-7.
  • Couffignal, Louis (1933). Les machines à calculer; leurs principes, leur évolution. Paris: Gauthier-Villars.
  • Essinger, James (2004). Jacquard’s Web, How a hand loom led to the birth of the information age. Oxford University Press. ISBN 978-0-19-280577-5.
  • Hyman, Anthony (1985). Charles Babbage: Pioneer of the Computer. Princeton University Press. ISBN 978-0-691-02377-9.
  • Bowden, B. V. (1953). Faster than thought. New York, Toronto, London: Pitman publishing corporation.
  • Moseley, Maboth (1964). Irascible Genius, Charles Babbage, inventor. London: Hutchinson.
  • Collier, Bruce (1970). The little engine that could’ve: The calculating machines of Charles Babbage. Garland Publishing. ISBN 978-0-8240-0043-1. Archived from the original on 20 January 2007. Retrieved 24 October 2013.
  • Randell, Brian (1982). «From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate, Torres, and Bush» (PDF). Archived from the original (PDF) on 21 September 2013. Retrieved 29 October 2013.
  • Smith, Erika E. (2013). «Recognizing a Collective Inheritance through the History of Women in Computing». CLCWeb: Comparative Literature and Culture. 15 (1): 1–9. doi:10.7771/1481-4374.1972.
  • Verma, G.; Mielke, N. (1988). Reliability performance of ETOX based flash memories. IEEE International Reliability Physics Symposium.

Where does the word computer come from?

The word “Computer” comes from the word “compute”, which means to calculate. A computer is an electronic device that stores and processes data to give meaningful information. Processing is done with the help of instructions given by the user, which are also stored within the computer.

Is there any full form of computer?

Computer is not an acronym; it is a word derived from the word “compute”, which means to calculate. Some people say that COMPUTER stands for Common Operating Machine Purposely Used for Technological and Educational Research, but this backronym is an urban legend with no historical basis.

What is the full form of VGA?

The Video Graphics Array (VGA) connector is a standard connector used for computer video output.

What is the full form of GPS?

The Global Positioning System (GPS) is a U.S.-owned utility that provides users with positioning, navigation, and timing (PNT) services.


Today computers are everywhere: on your wrist, on your office table, in your car, in your house, in your TV, even in our Mars Rover. If a piece of technology does anything automatically, there is, directly or indirectly, a computer working behind it.

But have you ever asked yourself where the name computer came from, when it was first used, and by whom?

According to the Oxford English Dictionary, the first known use of the word “computer” was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwaite.

More interestingly, in this book the word computer was not used for a machine; it referred to a human computer, a person who carried out calculations more quickly than the average person. At that time, women were often hired as human computers because they could be paid less than men. According to Wikipedia, by 1943 most human computers were women.

After the creation of early mechanical computers such as the Difference Engine of 1822 and the Analytical Engine, the word computer took on a more familiar meaning: “a machine that can perform calculations”.

But with the invention of digital electronic computers such as ENIAC (Electronic Numerical Integrator And Computer) in 1946, the word came to be used in its modern sense: “a digital programmable machine”.

So, guys, that was a short history of our favorite word, computer. I hope it helps.


In this week’s Dispatches from The Secret Library, Dr Oliver Tearle considers the history and original meaning of a now ubiquitous word

Here’s a pub quiz question for you: in which century were the words ‘computer’ and ‘electricity’ first used in English writing? The twentieth? ‘Computer’ may lead us to that answer, but then we reflect on Michael Faraday’s important work on electricity in the previous century. And didn’t Charles Babbage devise a forerunner to the modern computer in his Difference Engine, some time in the nineteenth century? Perhaps that’s the answer.

But no: both words make their debut in the annals of English literature in the seventeenth century. And it was one man who helped to popularise both. But the origins of the term ‘computer’, in particular, are worthy of comment. The word obviously derives from the verb ‘compute’, which is from the Latin for ‘reckon with’ (from the prefix com- and the verb putāre meaning to reckon). But what about the meaning of the word ‘computer’?

First, to deal with the more recent and most familiar meaning of the word ‘computer’: the word first came to mean an electronic device used to store and communicate information (and all of its subsequent functions) only in the 1940s: the earliest citation in the Oxford English Dictionary is from 1946. This is fitting. As is well-known now (or at least better-known than in the decades immediately following the end of the Second World War), the work of Alan Turing and other codebreakers at Bletchley Park – where the huge early computer known as Colossus was built – helped to shorten the war by several years.

But after the end of the war, America began to develop the computer for commercial use, and Britain hushed up its role in inventing the modern machine. Turing, shamefully, was never honoured in his lifetime, and his tragic end (dying of cyanide poisoning from eating a poisoned apple, having been forced to undergo chemical castration for his homosexuality) prevented him from getting the recognition he deserved. (The rumour that the logo of Apple computers – an apple with a bite taken out – was a deliberate allusion to Turing’s death is, by the way, not true.)

But ‘computers’ had been around for centuries – or, at least, the word ‘computer’ had been. And one of its earliest uses in English was in the work of an important seventeenth-century prose writer, Sir Thomas Browne. It is in Browne’s work that we also find early (and in many cases, the earliest) instances of words including ambidextrous, approximate, botanical, carnivorous, coma, complicated, cryptography, discrimination, electricity, elevator, ferocious, hallucination, indigenous, insecurity, medical, prairie, prefix, selection, and many, many more. I’ve blogged previously about Browne and his remarkable list of neologisms here.

Browne was born in Cheapside in London in 1605 and died in 1682, on his 77th birthday. He wrote on various topics pertaining to the natural world, and this would be the subject of his most ambitious work, Pseudodoxia Epidemica, which was published in 1646, although it was so popular it went through many more editions during Browne’s lifetime.

The full title of this book was Pseudodoxia Epidemica or Enquiries into very many received tenets and commonly presumed truths, although it is sometimes known simply as Vulgar Errors. Its purpose was to examine the widely held superstitions and beliefs of the time, and to correct those which were false; in many ways, Browne, a one-man debunking machine, was the early modern version of the TV show QI.

The context of Browne’s use of the word ‘computer’, in Pseudodoxia Epidemica, was a consideration of the difference in dates between the Julian and Gregorian calendars. When Browne was writing in the 1640s, Britain was behind much of Europe in still following the old Julian calendar, while numerous countries on the Continent had already adopted the Gregorian (which Britain would not do until 1752). Browne writes:

Now it is manifest, and most men likewise know, that the calendars of these computers, and the accounts of these days are very different: the Greeks dissenting from the Latins, and the Latins from each other: the one observing the Julian or ancient account, as Great Britain and part of Germany; the other adhering to the Gregorian or new account, as Italy, France, Spain, and the United Provinces of the Netherlands.

The context of Browne’s use of the word makes it clear that the word ‘computer’ is here being used to refer to someone who makes a calculation, specifically about dates. And this is the earliest known meaning of the term ‘computer’, a sense that the OED now categorises as ‘chiefly historical’: ‘A person who makes calculations or computations; a calculator, a reckoner; spec. a person employed to make calculations in an observatory, in surveying, etc.’

But Sir Thomas Browne didn’t coin the word ‘computer’. If anyone should get the credit for doing that, and even here we should bear in mind the usual caveat (that ‘first known use’ of a word does not necessarily equate to actual coinage of said word), then it’s a man named Richard Brathwaite (1588-1673), an English poet who published a 1613 book called Yong Mans Gleanings.

It is in this book that we find the earliest recorded use of the term ‘computer’; as Brathwaite’s use of ‘he’ makes clear, he was also referring not to a counting device or machine but to a person who does the calculating.

I haue read the truest computer of Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number: The daies of Man are threescore and ten.

You can continue to explore the unusual stories behind well-known words with the surprising origins of the word virus, the history behind the word vaccine, and the reason why the word homophobia meant something quite different when it was first coined.

Oliver Tearle is the author of The Secret Library: A Book-Lovers’ Journey Through Curiosities of History, available now from Michael O’Mara Books.

Here’s what computer stands for:

Computer doesn’t stand for “Common Operating Machine Purposely Used for Technological and Educational Research.”

Computer isn’t an acronym at all.

Computer is a combination of the Latin words Com and Putare, meaning respectively, “with” (com) and “reckon” (putare).

So if you want to learn all about the tech term computer, then you’re in the right place.

Let’s jump right into it!

COMPUTER Full Form: Stands For What? (+ Interesting Facts)

Remember on Star Trek when the Captain used to start a conversation with the LCARS (Library Computer Access/Retrieval System) by saying, “Computer…?”

We’ve always enjoyed personifying computers, haven’t we?

Even in day-to-day life, we get a kick out of talking to Siri, Alexa, or Google Assistant, imagining that we’re conversing with a humanlike persona, our loyal “Computer” that follows our commands.

In fact, forecasts suggest that by 2024, the number of digital voice assistants will reach 8.4 billion units.

Have you ever wondered where the title “Computer” came from or what the correct computer full form is? 

What About that Computer Acronym?

The easiest answer suggests the full form of computer is “Common Operating Machine Purposely Used for Technological and Educational Research.” 

An old computer.

Unfortunately, this easy answer is also an urban legend. There is no evidence of that acronym appearing in the 20th century when modern computers debuted.

Besides, that acronym seems dishonest when you consider both the etymology and function of a computer.

A programmable machine “computes” or calculates equations. 

Technically speaking, and in the words of programmers, a computer’s function is to respond to a set of instructions and then execute a list of further corresponding instructions. 

For us, in simple words, these devices:

  • Accept and interpret data
  • Process the information
  • Produce the desired results
  • Store and retrieve data

Truly understanding the computer full form requires a bit of a history lesson. 

What Are Human Computers?

According to BBC News, since the verb “compute” was used to refer to performing math equations, humans were the first computers.

We simply used pre-mechanical age devices, like the abacus, to compile our data.

The term “compute” comes from the Latin word “Putare.”

According to Professor William J. Rapaport, the word is a combination of Com and Putare, meaning respectively, “with” (com) and “reckon” (putare).

The compound word, Computare, refers to a reckoning of arithmetic, or to “settle an account.” Now that sounds like something a human might have to do. Robots, not so much! 

Ancient Blurbs About Computers

References to the computer date well before the 20th century, but they are always about the act of computing, a distinctly human activity. 

For example: 

  • The Roman poet Virgil wrote in Georgics of computing, or pruning vines
  • Historian Tacitus used the word when counting soldiers
  • Pliny wrote in Natural History that “the breadth of Asia should be rightly calculated” (the same word, computetur)
  • Author Richard Braithwait wrote in The Yong Mans Gleanings that he “had read the truest computer of times”
  • Navy Administrator Samuel Pepys wrote of computing money for ships

The first known use of the word computer was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait.

[Wikipedia]

Computers for Hire

Searching job listings for “computers” today would return very different results than it did in the 18th or 19th centuries, when it was quite common for job ads to say “hiring computers.”

Nineteenth-century steam-powered robots, you ask? Unlikely! No, such ads meant human calculators, or “computers.”

Essayist and Reverend Jonathan Swift once wrote in the Edinburgh Weekly Journal of a young married woman working as a computer, one who kept track of her husband’s income.

The perception changed, of course, when inventors introduced the first mechanical computer.

Computers By Any Other Name

We have to assume that before the word computer existed, people simply called their pre-mechanical devices “tools” or something similar.

For thousands of years, humans used tools to perform equations and keep account of numbers. 

  • The Ishango Bone dates back to prehistoric Africa and is thought to be a primitive calculator. Its etchings sit in three columns, with marks grouped into sets that imply decimals or prime numbers.
  • The abacus dates back to at least 2400 B.C. in ancient Babylonia, perhaps even earlier. Since the word abacus literally means “tablet,” it may be the closest etymological relative of the mechanical “computer.”
Historic abacus.
  • The Antikythera mechanism is more complex, so much so that this 100 B.C.-era device is considered the first primitive analog computing device.
  • Physicist Derek J. de Solla Price once wrote that the Antikythera mechanism could calculate astronomical positions.

What Did We Call Modern Computers?

We didn’t call modern computers “computers,” at least not when they first debuted. 

Charles Babbage was the “father of the computer,” but to him and his 19th-century contemporaries, these steam-driven devices were called “machines.”

Babbage said, “A tool is usually more simple than a machine; it is generally used with the hand, while a machine is frequently moved by animal or steam power.”

His Difference Engine was designed to help perform navigational and astronomical calculations; his later, general-purpose design he called the “Analytical Engine.”

Charles Babbage's difference engine.

Perhaps the reason these amazing analytical machines were never actually called computers back in the day was that they weren’t yet “programmable” in the literal sense.

They were limited to performing only specific functions.

They were also called all sorts of nicknames other than “computer,” such as: 

  • In 1927, H. L. Hazen and Vannevar Bush built their “mechanized calculus” machine
  • In 1937, George Stibitz built his Model K “Adder”
  • Alan Turing described his Universal Turing Machine, or “computing machine”

The First Robotic “Computer”

By 1938, the name “computer” applied to non-human computers as well, that is, the programmable electromechanical and analog systems we all know and love.

Curiously, the first time we used the word “robot” was in 1920, when Czech author Karel Capek coined it from robota, meaning “forced labor.” That’s right, so whenever Bender Rodriguez complains about the abuse of robots at the hands of humans, you know he’s got a good point.

The modern colloquial usage of the word “computer” also changed after 1938, with new achievements such as: 

  • The United States Navy naming their submarine-bound analog system a “Torpedo Data Computer” 
  • John Vincent Atanasoff and Clifford Berry naming the first electronic digital machine the Atanasoff-Berry Computer
  • By 1946, the military-created Electronic Numerical Integrator and Computer (ENIAC) had debuted; its complete system weighed 30 tons and reportedly dimmed the lights of Philadelphia when powered up
  • The ENIAC could perform numerous functions besides ballistics, including weather prediction, atomic energy calculations, and other “general purposes”
Historic picture of the ENIAC machine.

For more background on the ENIAC, read more about the first computers and their technologies in Scientific American.

The Marketed “Personal Computer”

The last evolution in computer etymology is the result of great achievements in manufacturing and marketing work alike. 

Although modern computers developed rapidly from the 1950s to the 1970s, it took some time to transform the image of the computer from a giant machine performing highly specialized calculations into a “microcomputer,” a tool for small businesses and individuals.

Much of the groundwork for the modern computer was laid in 1968 when Douglas Engelbart of SRI International gave the “mother of all demos,” showcasing many modern concepts like the mouse, email, word processing, and even video conferencing.

The identity of the “computer” was only then taking shape as something a modern, non-technical audience could embrace: something humanlike, a virtual assistant.

The First Microcomputers

Computers only took on an identifiable persona when microcomputers appeared. Competition during the 1970s and 1980s was intense, with many companies fighting over this brand-new market:

  • The Micral N, designed in 1972 and sold from early 1973, debuted as a mostly “hobbyist” product
  • The IBM Special Computer APL Machine Portable (SCAMP) prototype came in 1973 and gained a reputation as the first real PC
  • The IBM 5100 was a milestone, as it could be programmed in BASIC and APL
  • The Xerox Alto debuted a graphical user interface
  • The Altair 8800 became the first true commercial success for a PC
  • Around 1980, Epson developed the HX-20, which became the first portable “laptop” computer to run on rechargeable batteries

The next revolution happened in 1976, when Steve Jobs and Steve Wozniak debuted the first Apple computer, which moved beyond the kit-style look of previous machines and was marketed to the non-technical consumer.

Apple’s revenue from its Macintosh computers has grown overall to this day, with sales of around 7.1 billion U.S. dollars in the third quarter of 2020 alone.

Apple's Revenue From Sales of Macintosh Computers Worldwide

[Apple]

Tandy made more strides with the TRS-80, while Commodore released the Amiga, which introduced modern features like multitasking and a windowed operating system.

The Life and Death of a Computer

By 1982, “The Computer” was a household word, publicized by Time magazine, which named it “Machine of the Year.”

Marketing evolutions continued in the 1980s and 1990s with video game systems and the universal software-focused approach of Windows and DOS. 

All was going well until… well, the computer died! Not so much in a HAL 9000 sort of way; it was simply the end of a generation and the reinvention of an old market.

By the time desktops and laptops declined in the 2000s in favor of smartphones and tablets, the colloquial use of “Computer” was all but finished.

These new internet-ready devices were computers, but no one called them by the computer full form anymore, nor was a computer acronym necessary. 

Worldwide sales of smartphones have been steadily increasing through the years; around 1.5 billion units were forecast to be sold to end users in 2021.

Global Smartphone Sales From 2007 to 2021 (In Million Units)

[Gartner]

The “end” of the market even prompted many programmers to give a symbolic eulogy for the death of the computer, a persona we once voted “Machine of the Year.”

Ironically, humans were the first computers, not machines. We didn’t even think of these amazing machines as “computers” until we understood the groundbreaking idea that machines could operate as humans do.

To Be, or Not to Be: Other Vital Tech Words and Their Meanings

Here’s a list of particularly interesting tech acronyms—or maybe they’re not even acronyms?

Learn what these stand for:

  • GOOGLE full form: what does it stand for?
  • INTERNET full form: what does it stand for?
  • WI-FI full form: what does it stand for?

Continue Learning about History

Where is the word computer derived from?

The word computer is derived from the word compute.


Did the word computer exist before computers as we know them came into being or does it just come from the word compute with its mathematical meaning?

Yes, it existed long before the machines: the word dates back to at least 1613 and comes from the verb compute, originally describing a person who performed calculations.


How old is the word computer?

The word is over 400 years old: its first recorded use was in 1613, long before the first electronic computer was made.


The meaning of the word computer?

The meaning of the word «COMPUTER» is sometimes given as an acronym:
Common
Operating
Machine
Purposely
Used for
Training
Education and
Research
That expansion, however, is a backronym invented long after the fact; the word really comes from the Latin computare, “to reckon.”


Where does the word adventure come from?

The word adventure comes from the Old French aventure, from the Latin adventura, “a thing about to happen,” from advenire, “to arrive.”

Let’s learn where the word computer came from. The most accurate or helpful solution is served by wiki.answers.com.

There are several answers to this question.

Best solution

Answer:

It comes from the ancient Latin word computare (built on putare), which meant to count or sum up.

Read more

Other solutions

Answer:

The original dictionary definition was «one who computes», meaning a human whose job was to compute…

Read more

Answer:

Back before the digital machines we have today there was no simple way to do lots and lots of calculations…

Read more

Answer:

The Bug: in 1947, Grace Murray Hopper was working on the Harvard University Mark II Aiken Relay Calculator…

Read more

Answer:

Computer used for person, 1646; mechanical calculating machine, 1897; & electronic machine, 194…

Read more

Answer:

No, it comes from a guy named Michael Dell…

Read more

Answer:

No, it didn’t; the word comes from the verb compute and predates the machines by centuries…

Read more

Answer:

The moth in the computer was found in 1947. The term «bug» had been in use for any malfunction or error of a…

Read more

Answer:

Claude E. Shannon first used the word bit in a 1948 paper. Shannon’s bit is a portmanteau word for binary…

Read more


Computer As It Is

The word «computer» comes from a Latin word which means to count.

Initially, the computer was designed as a tool to manipulate numbers. Although originally designed for arithmetic purposes, at present it is applicable to a great variety of tasks. Computers are now an integral part of our day-to-day lives. Today it would be difficult to find any task calling for the processing of large amounts of information that is not performed by a computer. The computer may be said to have become an important and powerful tool for collecting, recording, analysing, and distributing tremendous masses of information.

In science, computers digest and analyse masses of measurements, such as the positions and velocities of a spacecraft, and solve extraordinarily long and complex mathematical problems, such as the trajectory of the spacecraft.

In commerce, computers record and process inventories, purchases, bills, payrolls, bank deposits and the like, and keep track of ongoing business transactions.

In industry, they monitor and control manufacturing processes.

In government, computers keep statistics, analyse and distribute information.

Computer technology has made dramatic strides in applications across virtually every segment of modern industrialized culture, from product design and manufacturing through sales, warehousing and distribution. Nowadays computer-aided design can no longer be separated from computer-aided manufacturing; they are one and the same. Hence the acronym CAD/CAM. The list of applications is large and growing rapidly.

To many people, the computer is a superhuman robot. Indeed, it can perform lightning-fast calculations, executing billions of operations in a second. But the computer is not superhuman, for it can accomplish none of these things by itself. Every computer now in existence must be told what to do: it must have a set of instructions. The writing of these instructions is called programming. Programming is done by a human.
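
As a minimal illustration, consider three instructions written by a human; nothing here is specific to any real machine, and the numbers are invented for the example:

    # A tiny "set of instructions": the machine executes them exactly as
    # written and could not have composed them by itself.
    principal = 1000.0             # 1. set up the data
    interest = principal * 0.05    # 2. perform the calculation
    print("Interest:", interest)   # 3. report the result (prints 50.0)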

Surely there are similarities with the human brain, but one important difference exists: despite all its accomplishments, the electronic brain must be programmed by a human brain.

Although adopted for different purposes, computers hardly differ in structure. Regardless of their size or purpose, most computer systems consist of at least three elements: the input-output ports, the memory hierarchy and the central processing unit.

The input-output ports are known to be the paths whereby information (instructions and data) is fed into the computer or taken out of it.

There are several types of memory. Memory is essential to the computer’s operation. Items of information can be written to it, stored in it, retrieved from it on demand by the central processing unit, or erased to make room for other information.

The central processing unit, or CPU, controls the operation of the entire system by issuing commands to other parts of the system and by acting on their responses. When required, it reads information from the memory, interprets instructions, performs operations on the data according to the instructions, writes the results back into the memory, and moves information between memory levels or through the input-output ports.
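
The cycle just described (read from memory, interpret the instruction, operate on the data, write the result back) can be sketched in a few lines of Python. This is an illustrative toy, not a real instruction set; the opcodes LOAD, ADD and STORE are invented for the example:

    # Toy fetch-decode-execute loop: memory holds the data; the "program"
    # is a list of instructions the CPU interprets one by one.
    memory = {"a": 2, "b": 3, "result": 0}

    program = [
        ("LOAD", "a"),        # read memory["a"] into the accumulator
        ("ADD", "b"),         # add memory["b"] to the accumulator
        ("STORE", "result"),  # write the accumulator back to memory
    ]

    accumulator = 0
    for opcode, operand in program:   # fetch the next instruction
        if opcode == "LOAD":          # decode and execute it
            accumulator = memory[operand]
        elif opcode == "ADD":
            accumulator += memory[operand]
        elif opcode == "STORE":
            memory[operand] = accumulator

    print(memory["result"])  # prints 5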

Advances in microelectronic components led to the development of smaller computers. In 1971, Intel Corp. delivered the first microprocessor, the 4004: the central processing unit of a computer placed on a single silicon chip less than 1/4 inch square. When the central processing unit (CPU) of a computer is implemented in a single integrated circuit, or a very small number of them, we call it a microprocessor. When a computer incorporates a microprocessor as a major component, we call it a microcomputer. And when the entire computer, including CPU, memory and input-output capability, is incorporated into a single IC, we call the latter a one-chip microcomputer.

The first design was followed by many others, and progress toward smaller computers is certain to continue: nano-computers and pico-computers are gradually appearing. These computers are more flexible. Modern computers are virtually symbiotic.

Advances in microelectronics give rise to advances in computers. Computers today provide an expanding range of services.

Computers are classified by size and capability as microcomputers, mainframes and supercomputers, depending on the size of their main memories and on their processing speed.

Microcomputers are mostly used by individuals.

Mainframes are used by large corporations, government agencies and other large institutions.

Supercomputers are the largest and fastest of all computers, with processing speeds that may be measured in picoseconds (trillionths of a second). The boundaries separating the categories change frequently as computer technology advances.


The computer is one of the greatest inventions of its time. Billions of people around the world use computers in their daily lives. Over the decades, the computer has evolved from a very expensive and slow device into today’s extremely smart machines with incredible computing power.

The word computer came to us from the distant seventeenth century; its first recorded use is documented in the Oxford English Dictionary. Initially, the word was understood to mean a calculator, and it differed from today’s usage in that it could be applied to absolutely any computing device, not necessarily an electronic one.

The first computers, or calculators, were mechanical devices able to perform simple mathematical operations such as addition and subtraction. Later in the seventeenth century, machines appeared that could solve more complex problems, namely multiplication and division.

For some time, the technological improvement of computers stopped, and the emphasis shifted to perfecting mechanisms and reducing size. Computers still performed the four basic arithmetic operations, but became lighter and more compact.

In 1822, a machine capable of solving simple equations was designed: the greatest breakthrough yet in the field of computing technology. After the government approved the project, funds were allocated and the invention could be developed further. The machine was to be driven by a steam engine and to operate fully automatically. After a decade of continuous research, the first Analytical Engine was conceived: a multi-purpose computer that could operate on many numbers, work with memory and be programmed using punch cards.

Since then, the evolution of the computer has proceeded at an accelerated pace. Electrical relays were added to mechanical devices, and vacuum tubes joined them in turn. The power and speed of computers grew from year to year. And in 1946, the first general-purpose electronic computer appeared. Its weight, size and power consumption were, by our standards, simply shocking: the 30-ton weight alone conveys the scale of the machine, yet at the time it was a huge achievement.

With the advent of semiconductor devices, which gradually displaced vacuum tubes, the reliability of computers increased and their size shrank. The computer gained RAM to store information, and machines learned to write data to magnetic disks. The leader in computer production at the time was IBM.

At one point, scientists managed to integrate several semiconductor devices into a single chip. This gave a new impetus to the development of computer technology. The computer acquired a disk drive, a hard drive, a mouse and a graphical interface. Its size was reduced so much that the machine could be put on a desk. It was the birth of the personal computer, the prototype of what we know today.

Since then, mankind has been able to use the computer at home on a mass scale. The first personal computer was the IBM 5150, built around the Intel 8088 processor. The computer cost $1,565, was easy to use and took up relatively little space. The IBM 5150 was equipped with an Intel 8088 processor running at 4.77 megahertz and came with 16 or 64 kilobytes of pre-installed RAM. In just one month, IBM was able to sell 241,683 IBM PC computers. Under its agreement with Microsoft’s leadership, IBM paid the program’s creators a set amount for each copy of the operating system installed on an IBM PC. Thanks to the popularity of the IBM PC, Microsoft executives Bill Gates and Paul Allen soon became billionaires, and Microsoft took a leading position in the software market.

After the creation of the first commercial version of the personal computer, the main emphasis in the development of computer technology shifted to improving the quality and performance of machines.

Gradually, progress has brought the computer to what we see today. Machines became ever more powerful and more compact. Laptops, netbooks, tablet PCs and more appeared.

Today, computers are extremely powerful and more affordable than ever. They have penetrated almost every aspect of our lives. They are used as powerful tools for communication and trade. The future of computers is huge.

So we are on the threshold of a new computer age, when artificial intelligence may be invented. And only time will tell whether computers will become our best friends or our worst enemies, as some movies suggest.

That’s interesting. What will the development of computer technology lead to in the near future? What will our children use?
