Many of the technological advancements of the 1940s and 1950s came in the form of increasingly powerful analog computers, which analyze a continuous stream of information, much like that recorded on vinyl records. Analog computers worked well for solving big mathematical problems, such as the calculations related to electrical power delivery systems or the study of nuclear physics. However, one of their weaknesses was that they were inefficient at managing large amounts of data. Digital computers, or those that translate information into a complex series of ones and zeros, were far more capable of managing bulk data. Just a few years after the war, digital computing received a huge boost with the invention of the transistor, a device with far more computing potential than its predecessor, the vacuum tube. Scientists could amplify this enlarged computing capacity even further by wiring multiple transistors together in increasingly complex ways.
The use of multiple transistors for computing purposes was an important step, but it had obvious drawbacks. Making machines capable of processing a great deal of information required connecting many transistors, which took up a great deal of space. Then, in the late 1950s, inventors in the United States developed an innovative solution. Using silicon, they could integrate transistors and capacitors in a way that clumsy wiring could not accomplish. The silicon-based integrated circuit freed computer technology from size constraints and opened the door to additional advancements in computing power.
Even so, digital computers remained large, expensive, and complicated to operate, and their use was largely confined to universities and the military. Only gradually over the 1970s did computing technology become more widely available, largely thanks to mass-produced general-purpose computers, sometimes called minicomputers, designed by IBM and the Digital Equipment Corporation. These served a variety of government and private purposes, such as tabulating the census, managing the flow of tax monies, and processing calculations related to creditworthiness (Figure 15.15). But despite being somewhat cheaper than earlier mainframes, minicomputers remained out of reach for average users.
The journey from minicomputers to personal computers began with the Intel Corporation, established in 1968 in Mountain View, California, in a region now commonly called Silicon Valley. During the 1970s, Intel developed a line of integrated circuits that were not only more powerful than their predecessors but also programmable. These became known as microprocessors, and they revolutionized computing by holding all of a computer’s processing power in a single integrated circuit. In 1975, a company in New Mexico released the first marketed personal computer, the Altair 8800. This used an Intel microprocessor and was promoted to computer hobbyists eager to wield a level of computing power once available to only a few. The Altair’s popularity inspired competing products like the Apple, the Commodore, and the Tandy Radio Shack computer (Figure 15.16). These personal computer systems were far easier to use and appealed to a much larger market than just hobbyists.
By 1982, there were 5.5 million personal computers in the United States, and over the next decade, their number and computing power rose exponentially. Computers proliferated in government offices, private firms, and family homes. Then, in 1984, Apple introduced the world to the Macintosh computer, which not only used a mouse but also replaced the standard text-based command-line interface with one based on graphics and icons. Recognizing the user-friendly possibilities of this graphical interface, competitors followed suit. Before long, the design popularized by Apple had become the norm.
By the end of the 1980s, not only had personal computers become common, but the microprocessor itself could be found everywhere. Microprocessors were incorporated into automobiles, cash registers, televisions, and household appliances and made possible a variety of other electronic devices like videocassette recorders and video game systems (Figure 15.17). Computer systems were created to store and manage financial, educational, and health-care information. In one form or another and whether they realized it or not, by the 1990s, almost everyone in the developed world was interacting with computers.
Modems, which allow computers to exchange data over telephone lines, were hardly new in the 1990s, but they became much faster and more common with the rise of the internet. The origins of the internet date back to the 1960s and the efforts by government researchers in the United States to use computers to share information. These developments were especially important for the U.S. Department of Defense during the Cold War and resulted in the emergence of the Advanced Research Projects Agency Network (ARPANET). In creating ARPANET, researchers developed many of the technologies that over the next few decades formed the basis for the internet we know today.
The content of this course has been taken from the free World History, Volume 2: from 1400 textbook by OpenStax.