A Brief History of the Internet

In Academia, Technology by J Michel Metz

Dramatic, but untrue.

You probably thought that the Internet was created to survive a nuclear attack. Nothing could be further from the truth. In fact, the true origins of the Internet had no military usage whatsoever.

How do I know? I talked to several people who actually created the Internet (well, it was called the ARPANET back then). I took the material from those conversations, as well as other primary and secondary historical sources, and prepared a chronological description of how the Internet (as we knew it in 1994) came into being. Granted, this is a predominantly US-centric historical account – much work was being done in England and France at the same time, but their influence was beyond the scope of the paper.

This history is incredibly valuable. For example, it’s important to note that in today’s debate about Net Neutrality and government intervention, the story of the Internet’s development is a key component that is overlooked. Many of the metaphors used in 2015 about what the “Internet is for” and “why it exists” are, quite simply, dead wrong. Having a better understanding of the history of the (arguably) most important technological development since the Gutenberg printing press can, and should, provide a solid foundation for that debate.

Note: This is a peer-reviewed academic paper presented to the American Historical Association in October, 1995. It is also text prepared for, but ultimately excised from, my doctoral dissertation. In an effort to preserve the text it has been reprinted here, but artifacts from the original digital file (which was heavily corrupted) may remain. 

Introduction

One of the most used buzzwords in the media today is the Information Superhighway. By connecting the home to information services, the current administration would have the general public believe that the science fiction of the future is just around the corner.

In actuality, that superhighway exists already; it is called the Internet. Every day, millions of messages travel along the hundreds of computer networks that make up the Internet. Few studies have been done concerning the effectiveness of e-mail over the Internet. Fewer still have examined the culture created by those who use the computer (Curtis, 1992; Metz, 1993; Reid, 1991). And to date, examinations of how the Internet came to exist have been sparse and, in general, incomplete.

Indeed, the “Internet” is not its original name. As a project begun during the 1960s, the Internet’s precursor, ARPANET, had grown beyond its original design just fifteen years later, and the Internet took over as the backbone of network communications (Network-Service-Center, 1990). Just how did the ARPANET and later, Internet, come to be?

This paper examines that question, attempting to piece together the connections made through history that led to the foundation of ARPANET. While the ARPANET came into existence in the late 1960s, the events which led to its development can be traced to the 1930s and the advent of the computer. But the first concrete idea of resource-sharing — one of the main ideas behind ARPANET — was conceived in World War II.

In addition, the paper will briefly review computer history relevant to the development of the network: from UNIVAC to timesharing to the development of packet switching, and the concept of the ARPANET. The second part of this paper illustrates the course of events after the ARPANET was “turned on.” The development of the network as a communications medium is explored, and the rise of the Internet as it began to take precedence is illustrated.

The ARPANET: From Computer Birth to Network Offspring

The first completely electronic computer was ENIAC, or the Electronic Numerical Integrator and Computer, completed in 1946. While ENIAC was not the very first programmable computer, nor were its inventors — John Mauchley and J. Presper Eckert, Jr. — the first to conceive of its full potential (Augarten, 1984), the emergence of the ARPANET can be directly traced to ENIAC’s humble beginnings at the Moore School of the University of Pennsylvania in 1942.

Just after World War II, Vannevar Bush, then a prominent researcher at M.I.T., noted that scientists were increasingly unable to keep up with professional information (Press, 1993). First, they could not find or read everything in their fields. Second, what they did read (and write) usually got lost in the labyrinth of misfiled works because of inflexible indexing systems. Bush suggested a machine to help solve this dilemma. He envisioned a system in which the machine, complete with keyboard, knobs, levers, and displays, would have the ability to store information. This system would not be limited to indexing and storage, however; it would also allow annotation, with an almost unlimited ability to link documents (Press, 1993).

After ENIAC was built in 1946[1], its inventors developed the world’s first commercial computer company, the Electronic Control Company (Augarten, 1984). It later came to be known as the Eckert-Mauchley Computer Co., after its founders (ACM, 1993). Unfortunately, the two men were more scientists than businessmen and found themselves on the brink of bankruptcy more than once (ACM, 1993; Augarten, 1984).

In 1950, control passed from Eckert and Mauchley to the bigger and more financially sound Remington-Rand Company, and the two scientists’ vision — UNIVAC — became a reality a year later (Augarten, 1984). UNIVAC was successful and made quite an impression on the public during the 1952 election, when it became the first computer to tabulate election results. Its accurate predictions of the outcome hours before its human counterparts’ prognostications astounded both the press and the public (ACM, 1993; Augarten, 1984).

The result was electric, literally. UNIVAC remained the most popular machine throughout the 1950s (ACM, 1993). The effect in the business world was astounding, as the UNIVAC appeared in accounting houses and payroll departments worldwide (ACM, 1993).

Meanwhile, the computer courses offered by the Moore School were attended by representatives of several universities, companies, and government agencies that had the technical and financial resources to build their own computers. Despite the departure of its lead scientists, Mauchley and Eckert, the school maintained its reputation as the leader of computer science for several years (Augarten, 1984). Before long, the students became the masters, as General Electric, IBM, MIT, and Bell Labs entered the computer arena, with varying degrees of success (Augarten, 1984).

History of Timesharing

The problem with UNIVAC, and with the machines that immediately followed, was that they were prohibitively expensive. Companies wanted to own a computer, but faced paying hundreds of thousands of dollars either for their own machine or for renting time on machines owned by other corporations. While cheaper, the latter option left smaller companies waiting for convenient times of use rather than enjoying on-demand processing.

The reason for waiting delays was simple: computers worked through a process called “batch-processing:”

This cumbersome system compelled anyone seeking use of the machine to hand over to a computer operator a batch of cards punched with holes that coded the program to be run or the problem to be solved. The operator fed the cards into the computer for processing, one batch at a time. Depending on the length of the queue and the complexity of the programs and problems, it was not unusual to wait 24 hours or more for results (O’Neill, Bell, Nirenberg, & Stallings, 1986, p. 40).

As early computers were expensive, it was imperative that they not sit idle (Press, 1993). The batch processing method of computing kept the computers busy, but wasted the user’s time. The importance and usefulness of interactive computing was not hard to impress upon programmers: ‘every programmer who got a hands-on ‘shot’ at the console for debugging or typed a five-minute ‘hello-world’ program into an early timesharing system knew it was better to work interactively than in batch mode’ (Press, 1993, p. 29).

M.I.T., where scientist Jay Forrester created the Whirlwind computer in the 1950s, was the first to make this form of interactivity commonplace. The new machine processed real-time telemetry data while being used interactively by an operator at a display-based console (Press, 1993). Whirlwind was followed by other experimental interactive computers at M.I.T. All of the engineers using them understood the value of interaction, but it took other “visionaries” to spread the word.

One such visionary was M.I.T. scientist John McCarthy. In 1959, frustrated by the batch method of computer work, he outlined the idea for time-sharing in a memo to his colleagues (O’Neill, et al., 1986). Time-sharing is simply a technique for sharing a computer’s resources between multiple users, giving each user the illusion that he or she is the only person using the system (Phaffenberger, 1991). In extremely large systems, hundreds or even thousands of users can use the system simultaneously without realizing that others are doing the same. At peak usage times, however, performance time decreases significantly (Phaffenberger, 1991).
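To make the idea concrete, here is a minimal, purely illustrative sketch in modern Python (all names are hypothetical, not drawn from any system described here) of the round-robin scheduling at the heart of time-sharing: the machine cycles through waiting jobs, giving each a small slice of time, so every user appears to have the computer to themselves.

```python
# A minimal, hypothetical sketch of the time-sharing idea: the system cycles
# through users' pending jobs, giving each a small slice of processor time so
# that every user appears to have the machine to themselves.
from collections import deque

def run_time_shared(jobs, quantum=1):
    """jobs: dict of user -> units of work remaining. Purely illustrative."""
    queue = deque(jobs.items())
    schedule = []                       # record which user ran in each slice
    while queue:
        user, remaining = queue.popleft()
        schedule.append(user)           # this user gets the next time slice
        remaining -= quantum
        if remaining > 0:               # unfinished jobs rejoin the queue
            queue.append((user, remaining))
    return schedule

# Three users sharing one machine; no one waits for a whole batch to finish.
print(run_time_shared({"ada": 3, "bob": 2, "carol": 1}))
# ['ada', 'bob', 'carol', 'ada', 'bob', 'ada']
```

Under heavy load, of course, each user’s slices come around less often, which is exactly the peak-time slowdown noted above.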

McCarthy became the ‘visionary spokesperson’ for the technology (O’Neill, et al., 1986, p. 40). Ultimately, he envisioned time-sharing as a public utility, much like the telephone system. Subscribers would pay only for the capacity they actually used, but would have access to all the programming languages characteristic of a large computer system (O’Neill, et al., 1986).

The other, and perhaps more influential, spokesperson for time-sharing was M.I.T. research psychologist J.C.R. Licklider. In 1960, Licklider wrote one of the most influential papers on man-machine symbiosis:

Computing machines can do readily, well, and rapidly many things that are difficult or impossible for computers. That suggests that a symbiotic cooperation, if successful in integrating the positive characteristics of men and computers, would be of great value (Licklider, 1960).

Licklider’s use of the term ‘computers’ did not mean the machines that did the calculating. Instead, even as late as 1960, the term referred to human subjects who did the computing; literally, ‘human computers.’

The idea of time-sharing went on to mold the way scientists thought about interactive computing through the 1950s and 1960s. Real-time, interactive computing became the paradigm for how computers should share their processing ability, but, as will be shown later, it nearly killed the ARPANET before it even got started.

In the meantime, while people at M.I.T. enjoyed interactive computing, the rest of the world was submitting jobs and waiting for output. Time-sharing brought interactive computing to the public, and not many years later, companies designed for just this purpose began to appear. In 1965, Keydata Corporation leased dumb terminals (machines having a display screen and keyboard, but no computing power of their own) to 20 businesses around the metropolitan Boston area and linked those terminals to a UNIVAC mainframe computer in Cambridge. For the first time, a business could access a computer as easily as making a phone call, and far more economically than owning a six-figure machine or renting one for exclusive use (O’Neill, et al., 1986).

Ironically, one of the first externally-accessible prototypes for timesharing was created by a man who almost prevented the computer industry from taking off. George Stibitz, a former Bell Labs scientist, was the creative force behind Dartmouth College’s time-sharing system, a local network designed by students and faculty (O’Neill, et al., 1986). Eighteen years earlier, Stibitz had been a consultant on Mauchley and Eckert’s UNIVAC contract. At the time, Stibitz was a staunch advocate of tried-and-true relay computers and took an adversarial position against UNIVAC’s creation (Augarten, 1984). If the government had followed his advice not to extend a full contract for UNIVAC’s development, his own involvement in and success with Dartmouth’s historic time-sharing project would never have occurred.

By the mid- to late-1960s, time-sharing was the fastest growing element of the computer industry (Augarten, 1984). These systems were important early steps in the evolution of new ways to link computers into networks. They also entrenched ideas about networking into nearly immovable ideologies.

Problems with Time-share Networks

As the Cold War deepened, the U.S. government became more interested in computer communications networks. Those entrenched networking ideologies, however, interfered with overcoming two major problems that afflicted time-sharing systems: computer incompatibility and the unreliability of the data transmission method.

In general, different machines have different operating systems, and these systems are unintelligible to each other:

For example, one computer may express letters and numbers in a different code from another. Or one machine may transmit data at a rate that another is unprepared to accept. For networks to advance, they had to compensate for such variations, and they did so with an increasingly elaborate set of rules known in the trade as protocols (O’Neill, Bell, Nirenberg, & Stallings, 1986, p. 40).

The term ‘protocols,’ used in almost every incarnation of electronic communication, comes from the world of diplomacy, where it referred ‘to the formalities vital to ensuring that heads of state do not unwittingly give offense because of cultural differences’ (O’Neill, et al., 1986, p. 40). Computer protocols, then, described precisely the form that data must take before it can pass through a network, the rate at which computers may transmit information, and the method by which data is checked for errors that might occur en route. Protocols allowed different kinds (and makes) of computers to communicate in a common language, or “to talk with each other” (O’Neill, et al., 1986).

In theory, protocols should have made communication easier between host computers. But in practice such was not the case. With only one host computer in charge, no questions of authority or compatibility existed: the host was the authority. When attempts were made to connect with another computer, questions arose about which computer would transmit first and what the arrangements would be for translating their data into a commonalty (O’Neill, et al., 1986).

Consequently, the 1960s were marked by the need to provide a standardized code for computer communications systems. The first and most fundamental of these protocols addressed the need for a universal code to represent data (O’Neill, et al., 1986).

Each computer maker had its own way of encoding data within its machines. IBM, for example, had no fewer than nine different codes across its various computers (O’Neill, et al., 1986). In 1960, the task of adopting a standard fell to the American Standards Association (ASA), composed of representatives from the government, computer manufacturers, and the communications industry (O’Neill, et al., 1986). Four years later the ASA issued the American Standard Code for Information Interchange, or ASCII (rhymes with ‘passkey’) (O’Neill, et al., 1986). In essence, ASCII was one of the first computer communication network protocols.
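For illustration only, a tiny sketch in modern Python shows what such a shared code buys: once two machines agree on ASCII, the same numbers always mean the same characters, regardless of who built the hardware.

```python
# ASCII assigns every character a fixed 7-bit number, so any two machines that
# agree on the standard interpret the same bytes as the same text.
message = "ARPANET"
codes = [ord(ch) for ch in message]      # characters -> standard code points
print(codes)                             # [65, 82, 80, 65, 78, 69, 84]
print(bytes(codes).decode("ascii"))      # back to "ARPANET" on any machine
```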

Transmission Reliability

Once a common method of coding data for communications was established, another standard was needed to specify exactly how the encoded information would be transmitted by modem over a telephone wire. Telephone lines and computer circuits were extremely fragile and intolerant of environmental deviations, such as power outages and thunderstorms. As it was, customers of time-sharing computing balked at long-distance telephone charges that might cost more than the time-share itself, when even a slight surge in electrical power could wipe out all of a customer’s data stored temporarily in the computer’s memory. With these problems in mind, a coalition of interested groups joined to lay down the procedures for the link between computer and modem. This group, formed by representatives of the Electronics Industries Association, various manufacturers of communications equipment, and Bell Laboratories, produced such a protocol in 1969. Named RS-232-C (RS means ‘recommended standard’), it covered practically all aspects of the modem-computer dialogue (O’Neill, et al., 1986).

Even with a standardized code for representing data and a protocol for electronic communications, much remained to be settled, and the Advanced Research Projects Agency (ARPA) started an ambitious effort to address these protocol issues as well. The goal was to create a single coast-to-coast network of about a dozen computers of different types located at colleges, laboratories, and other institutions engaged in research for the Defense Department (O’Neill, et al., 1986).

The obvious solution to the networking problem seemed to be the time-sharing systems emerging at the time. In fact, ARPA was a major funder of several time-sharing experiments. This was no accident: Licklider was the first head of the Information Processing Techniques Office at ARPA (Press, 1993).

Two main problems existed with time-sharing, however. First, even though time-sharing enabled interactive computing, the systems were limited to command-oriented interfaces instead of more user-friendly WYSIWYG [what-you-see-is-what-you-get] displays (Press, 1993). Second, time-sharing was not the same thing as multi-processing (Augarten, 1984). Although time-shared computers could support many terminals and perform many tasks at the same time, they could not run different programs simultaneously. Combined with the unreliability of the telephone lines, the need for a different system was obvious.

ARPA assigned Lawrence G. Roberts the task of solving the networking problem. Roberts had earned his Ph.D. in electrical engineering at M.I.T. and served as chief of the software group at M.I.T.’s Lincoln Laboratory, where he helped develop systems for three-dimensional graphics input and timesharing before joining the agency in 1967 (O’Neill, et al., 1986).

Frustrated by the duplication of effort among different institutions, organizations, and researchers, Roberts knew there had to be a way to grant investigators at one institution access to the research tools and information that researchers at another institution had already developed; the existing, decentralized manner of work simply caused too much duplication (O’Neill, et al., 1986).

Part of the problem, as he saw it, was the love/hate relationship between the computer industry and the telephone company. From experiments he conducted at the Lincoln Laboratory, Roberts knew that the telephone system’s technique of handling ordinary phone calls as well as computer data through circuit switching was completely inappropriate for the network he envisioned (O’Neill, et al., 1986). Circuit switching was adequate for uninterrupted transmission (e.g., transmission of records that are millions of bits long), but was far from ideal for applications that produced data of a sporadic (or ‘bursty’) nature (O’Neill, et al., 1986).

For example, a programmer might send a computer a problem to solve over the telephone lines (as timesharing systems were designed), and then wait for the answer. The cost of an idle circuit quickly became a burden, especially for academic researchers with extremely limited budgets. Telephone circuits also caused problems in that they established a communications path at the beginning of a transmission and broke the circuit only after all the data had been transferred; as a result, the circuit was ‘blocked out’ of usage during idle time. The entire model of communications needed to be overhauled, and Roberts decided to turn to a new and unproven technology: packet-switching.

Packet switching was a theoretical method of data transmission envisioned by the brilliant computer communications specialist Paul Baran (Borsook, 1991). Baran’s distinguished career as an inventor and innovator began early. His inventions included the high-speed modem technology used widely today, fast-packet technology, and spread-spectrum satellite technology, as well as airport doorway metal detectors and the technologies involved in pay-per-view television. After receiving his electrical engineering degree from Drexel University in 1949, he worked at the Eckert-Mauchley Computer Co. as a technician on UNIVAC. In 1959, Baran went to work for Rand, where he wrote the 13-volume set On Distributed Communications, which mapped out the structure of packet-switched networking (Borsook, 1991). Baran’s ideas originated from work involved in making military voice communication circuits safe from wiretapping (Rosner, 1982). In the cold-war climate of the 1950s and 1960s, Baran’s work was considered a U.S. national security asset (Borsook, 1991).

Baran’s ingenious solution was first to digitize the information that was to be transmitted, converting it from an analog signal to a digital one, then to equip each junction, or node, of the network with a small, high-speed computer. The computers nearest the participants in a conversation would chop the outgoing portions into small segments and then transmit them by way of the computers at intervening nodes. En route, the message segments would be mixed with segments of other conversations. An electronic eavesdropper would detect only the garble from dozens of conversations. The computer at the node nearest the destination would reassemble the scrambled pieces into intelligible order and reconstruct the conversation. Because control of the network was to be distributed among all the computers and because there were many possible routes by which a chunk of data might reach its destination, the system could survive even if many parts were lost to bombing or sabotage (O’Neill, et al., 1986, p. 45).
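The following is a toy sketch in modern Python — not Baran’s actual design, and with invented message text — of the store-and-forward idea the passage describes: the sender chops a message into numbered segments, the network may deliver them out of order over different routes, and the receiving node restores them to intelligible order.

```python
# A toy illustration of the mechanism described above: chop a message into
# small numbered segments, let them arrive in any order, and reassemble them
# at the destination using the sequence numbers.
import random

def packetize(message, size=4):
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    return "".join(text for _, text in sorted(packets))  # order by sequence no.

packets = packetize("THIS MESSAGE TRAVELS IN PIECES")
random.shuffle(packets)                  # simulate arrival over different routes
assert reassemble(packets) == "THIS MESSAGE TRAVELS IN PIECES"
```

Because each segment is routed and delivered independently, losing any single path (or node) does not, by itself, destroy the message.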

To Roberts, the advantages of Baran’s packet switching were obvious, yet he ran into a brick wall of skepticism and hostility when he tried to propose this system to his colleagues in the communications field (O’Neill, et al., 1986). Roberts needed something other than a good idea to illustrate the advantages of this system.

Fortunately, the solution came with another major shift in the computer industry, one that foreshadowed the death of time-sharing utilities and brought new life to the possibility of Roberts’ ARPANET: the advent of minicomputers. In 1963, Digital Equipment Corporation produced the Programmed Data Processor model 8 (PDP-8), the first widely successful minicomputer (Augarten, 1984). Smaller than the gigantic mainframes produced by competitors such as IBM, it also offered practical computing power to smaller companies for the first time. About the size of a refrigerator, it cost only $18,000, one-twentieth to one-one-hundredth the cost of a mainframe computer. DEC’s PDP-8s were installed everywhere from engineers’ offices to the Navy’s submarines, and they were perfect for Roberts’ packet-switching scheme (Augarten, 1984; O’Neill, et al., 1986). Roberts now had enough evidence to support his ideas for a full proposal.

The Proposal

The original proposal date is open to debate, depending on who is asked. Hardy (1993) reports a citation of a 1962 report by Baran on this topic, and then adds that the initial plan for the ARPANET was distributed at the October 1967 Association for Computing Machinery (ACM) Symposium on Operating System Principles in Gatlinburg, Tennessee.

The discrepancy could be explained by two different proposals. First, the RAND Corporation had been conducting a survey of the effectiveness of the United States’ communications security, and Baran’s work emerged out of that examination (Borsook, 1991; Rosner, 1982). As part of that work, Baran may have proposed a way to strengthen U.S. networks against nuclear attacks, determining that a network based on packet switching would maintain its integrity better than the then-current circuit-based technology. Unfortunately, the proposal became entangled in bureaucratic politics within the Department of Defense (Borsook, 1991; O’Neill, et al., 1986). During that time, a British researcher named Donald Davies, working independently on a similar concept, gave a name to Baran’s message segments. He called them packets, and coined the name for the technique of transmitting them: packet-switching (O’Neill, et al., 1986).

The second proposal, concerning the ARPANET in particular, laid out a clear-cut plan of action. Given that the main priority was to keep communication lines open after a nuclear attack, the new network architecture proposed no central authority (Hardy, 1993; Krol, 1992; Sterling, 1993). America as a post-nuclear nation would need a “command-and-control network, linked from city to city, state to state, base to base” (Sterling, 1993). The problem was that no matter where the network lay, the physical links would always be vulnerable to the impact of atomic bombs. Theoretically, a nuclear war would shatter any communications network.

Therefore, the envisioned network would have absolutely no central authority. “Furthermore, it would be designed from the beginning to operate while in tatters” (Sterling, 1993). The working assumption would be that the network itself would be unreliable at all times. From there, it would:

start to transcend its own unreliability. All the nodes in the network would be equal in status to all other nodes, each node with its own authority to originate, pass, and receive messages. The messages themselves would be divided into packets, each packet separately addressed. Each packet would begin at some specified source node, and end at some other specified destination node. Each packet would wind its way through the network on an individual basis (Sterling, 1993).

Of course, since the network was considered unreliable at all times, the route a particular packet took was unimportant. Once the packets were split up and sent over the lines, they bounced from node to node in the correct general direction (more or less) until they finally ended up in the right place. Should a particular node or nodes be destroyed, the packets would simply be shunted across the net by whatever nodes happened to survive. While terribly inefficient, the system provided durability (Sterling, 1993).

During these proposal meetings, people would holler and scream, getting violently angry. These types of confrontations were not new: computer people wanted to apply their technology to the telephone system, but the engineers at Bell were extremely distrustful of ideas that did not come out of their own laboratories (O’Neill, et al., 1986). Finally, when word came that other countries were starting their own packet-switched networks based upon the same ideas, ARPA attached its name to the project.

The Birth

The first packet-switched network was not ARPANET; rather the National Physical Laboratory in Great Britain set up the first test network on these same principles in 1968 (Sterling, 1993). A Societé Internationale de Telecommunications Aeronautiques project in France was getting underway at the same time (Hardy, 1993).

ARPA’s main goal was to fund a larger, more ambitious project in the U.S.A. Work began in 1968 (Hardy, 1993), using high-speed supercomputers — or what passed for supercomputers at the time — the Honeywell 516s. These computers had only 12 K of memory, yet they were considered powerful minicomputers for their time. In contrast, most personal computers today come with a minimum of 4,096 K [Note: This article was written in 1995 – auth.]. For example, the Macintosh used to write this article has 12,288 K, more memory than 1,000 of the original Honeywell computers combined.

It was not until the fall of 1969, however, that the first ARPANET Interface Message Processor (IMP) node was installed at UCLA, on September 1 (Hardy, 1993; Sterling, 1993). By December 1969, three other hosts — the University of California at Santa Barbara (UCSB), Stanford Research Institute (now SRI International), and the University of Utah (Salt Lake City) — were connected (Rosner, 1982; Staff, 1992; Sterling, 1993). The network was christened ARPANET, after its Pentagon sponsor (Sterling, 1993).

These four computers could transfer data on dedicated high-speed transmission lines, and could even be programmed remotely from the other nodes. At the time, use of ARPANET was restricted to workers who were directly involved in defense work, including work carried out at universities (Staff, 1993). As Roberts had predicted, resource sharing played an extremely large role in the daily life of the network.

For example, the Stanford Research Institute, while awaiting the delivery of a computer to help with various research projects, tapped into a similar machine at the University of Utah to begin developing its software (O’Neill, et al., 1986). A software company in Massachusetts exploited the three-hour time zone difference to gain access to a mainframe during its off-peak hours more than 3,000 miles away, at the University of Southern California. By 1973, the increase in productivity gained from resource sharing was more than enough to offset the cost of operating the network, about $4 million a year (O’Neill, et al., 1986).

Growing Pains

As noted before, computer time was not only at a premium but a luxury, so these resource-sharing computers provided breakthrough services for scientists, programmers, and researchers[2]. By 1971 those four nodes had grown to fifteen, and in 1972, to thirty-seven (Sterling, 1993).

By 1971, the second year of operation, ARPANET’s users “had warped the computer-sharing network into a dedicated, high-speed, federally subsidized electronic post-office” (Sterling, 1993). Few people had envisioned the kind of impact it would have upon communications in general (Metcalfe, 1992), but the network’s creators’ attention now turned to the impact of these new messaging systems. As two of the original creators of the network wrote,

One of the advantages of the message system over letter mail was that, in an ARPANET message, one could write tersely and type imperfectly, even to an older person in a superior position and even to a person one did not know very well, and the recipient took no offense. The formality and perfection that most people expect in a typed letter did not become associated with network messages, probably because the network was so much faster, so much more like the telephone. Indeed, tolerance for informality and imperfect typing was even more evident when two users of the ARPANET linked their consoles together and typed back and forth in an alphanumeric conversation. Among the advantages of the network message services over the telephone were the fact that one could proceed immediately to the point without having to engage in small talk first, that the message services produced a preservable record, and that the sender and receiver did not have to be available at the same time (Licklider & Vezza, 1978).

All this talking caused the administrators of the network considerable consternation. Researchers were supposed to be using the network to run their programs on machines to which they couldn’t otherwise have had access. Instead, they “were using ARPANET to collaborate on projects, to trade notes on work, and eventually, to downright gossip and schmooze” (Sterling, 1993). People had their own personal accounts on their own machines, and their own personal addresses for electronic mail. They were not only extremely enthusiastic about this new medium of communication; they were far more enthusiastic about it than about long-distance remote-access computation (Sterling, 1993).

One user of this medium, Stu Denenberg, recalled participating in an early synchronous interactive remote-access computer game, the type of game that eventually evolved into Multiple-User Dimensions (MUDs). The game demonstrated the capability of the net to hide stereotypical cues about people. The game — called Empire — was simple: conquer the universe with the various tools at one’s disposal (rocket ships, armies, and planets) by allying oneself with other people who happened to be online at the time (Denenberg, 1994):

The key to winning was to get help from others. Without that help, a novice player was destroyed before he or she was able to determine how to stay alive. In other words, it was either seek help or be destroyed. After several attempts at deciphering the game alone, Denenberg finally asked someone for help. The mysterious stranger at the other end of the net taught him the tactics. Finally, when he thought he had enough of a handle on the game to venture out on his own, he was asked how old he was. When he revealed that he was 37 years of age, the other person sent back a long “YIIIIIIIIIIIKKEEEEEEEEESSS!!!!!!!!!!” Evidently, the person who helped him was only twelve. Up until that point, the age differential had been unknown, and the notion of equality was brought home to both players (Denenberg, 1994).

Communications of this sort brought about rapid changes in the network. Once the distribution (mailing) list was invented, for example, the true anarchic nature of the network presented itself. A mailing list is a “broadcasting technique in which an identical message could be sent automatically to large numbers of network subscribers” (Sterling, 1993). One of the first really big lists was “SF-LOVERS,” for science-fiction fans (Sterling, 1993). Discussing science fiction or playing Empire was not work-related and was frowned upon by many ARPANET computer administrators. Nevertheless, this disapproval did not prevent such events from happening (Sterling, 1993).

The early 1970s became the testing ground for experimental packet-switched networks. An early survey conducted in 1971 examined 10 such networks, most in research environments (Wood, 1985). During this time the ARPANET grew as well. Since the network was decentralized, adding a node here and a network there made expansion extremely easy, and over the next few years the network grew rapidly, adding connection nodes at an ever-increasing rate (Wood, 1985).

By 1975, the network had evolved from a research and development project into an operational service. The time had come for responsibility for its operation to be transferred to the Defense Communications Agency (Wood, 1985).

The key lay in its design. Since the network did not impose hardware requirements (as corporate networks of the time did), it could accommodate any kind of machine, as long as that machine could understand the packet-switched Network Control Protocol (or “NCP”) that ARPANET then used. Thus, the brand names of the machines, their contents, and the people using them were completely irrelevant to ARPANET (Sterling, 1993). With the concept of peer computing, any hope of total control, governmental or private, over the ARPANET quickly slipped away.

By the mid-1970s, the ARPANET had been interconnected with other types of packet networks, including satellite networks, packet radio networks, and local area networks (LANs) (Wood, 1985). These connections brought to light the importance of a universal standard for computer communications. The concern over international standards and protocols led to the development of the Transmission Control Protocol (TCP) and, later, the Internet Protocol (IP) (Wood, 1985).

The Transmission Control Protocol handles the messages themselves: it converts them into streams of packets at the source, then reassembles them back into messages when they arrive at their final destination. The other half, the Internet Protocol, handles the addressing, essentially making sure that the packets are routed across the correct nodes and networks. The powerful thing about IP was that it could do its job not just over ARPANET’s NCP, but also over other standards such as Ethernet, FDDI, and X.25 (Sterling, 1993).
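As a rough illustration of that division of labor — not the real protocols — the following modern-Python sketch has a “TCP-like” layer that numbers and reassembles segments and an “IP-like” layer that forwards each packet hop by hop using only its destination address. The node names and the routing table are hypothetical.

```python
# A toy sketch of the TCP/IP division of labor: the "IP-like" layer reads only
# the destination address and forwards a packet one hop at a time; the
# "TCP-like" layer puts the numbered segments back in order at the end.
ROUTES = {                       # next hop toward each destination, per node
    "ucla": {"utah": "sri"},
    "sri":  {"utah": "utah"},
}

def ip_forward(packet, node):
    """Return the next node this packet should be handed to."""
    return ROUTES[node][packet["dst"]]

def tcp_reassemble(packets):
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = [{"dst": "utah", "seq": 0, "data": "HELLO "},
           {"dst": "utah", "seq": 1, "data": "UTAH"}]

for p in packets:                # each packet is routed independently
    node = "ucla"
    while node != p["dst"]:
        node = ip_forward(p, node)

print(tcp_reassemble(packets))   # -> "HELLO UTAH"
```

The point of the split is that the forwarding layer never needs to understand the message, and the message layer never needs to know what kind of network carried each piece.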

The technological advances made particularly in the late 1970s provided many different social groups with possession of (or at least access to) powerful computers. These groups wanted access to the ARPANET, and two major factors allowed them to have it. First, since the TCP/IP suite was public-domain, it was free to anybody who wanted it. Second, and most importantly, the basic technology was decentralized and extremely anarchic by design. The combination of the two meant that it became difficult to stop people from barging in and linking up at some place or another, sometimes quite messily (Sterling, 1993).

The important thing to note here, however, is that no one wanted to stop them from joining in. As early as 1977, TCP/IP was being used by other networks to join to the ARPANET. The ARPANET backbone, the main communications network, was tightly controlled, however. Even as late as 1985, the ARPANET was intended “to be used solely for the conduct of or in support of official U.S. government business. It is used… by Department of Defense (DoD) users, by non-DoD government agencies, and by contractors sponsored by government agencies” (Wood, 1985, p. 156). By 1985 the network spanned the United States, across the Pacific to Hawaii, and across the Atlantic to England and Norway (Wood, 1985). Despite its reach, it was becoming a smaller and smaller community of users in light of the rapidly expanding galaxy of linked machines (Sterling, 1993).

As new machines became more and more interconnected with the ARPANET, and more and more networks joined in, the system started to become known as the “Internet,” or the “network of networks.”[3] The rapid expansion rate was encouraged. Since connecting to the Internet would cost the taxpayer little or nothing, as each node was completely independent with its own financing and technical requirements, it was “the more, the merrier” (Sterling, 1993). Like any system, the more the computer network comprised larger territories of people and resources, the more valuable it became (Sterling, 1993).

Adolescent Identity Crises

In 1982, the Defense Data Network was created; it came to incorporate MILNET, the military arm of the ARPANET, when the network branched into two separate entities a year later (Hardy, 1993). Together, the ARPANET and the MILNET formed the Internet (Staff, 1991). Each was given a network number, and gateways were installed to provide packet forwarding between them.

The split paved the way for more networks to join the Internet. The Defense Communications Agency (DCA) mandated the use of TCP/IP for all ARPANET hosts, and enforced this by modifying the packet switching software. As a result, all ARPANET hosts had to begin using TCP/IP protocols and interacting with the Internet environment (Staff, 1991). Essentially, this adoption of a TCP/IP standard meant that more networks could join without affecting the existing network.

With accessibility so easy, more and more networks joined the Internet. By 1985, the number was approximately one hundred. By 1987, the number had exceeded two hundred; by 1989, more than five hundred had connected. According to the DDN Network Information Center (DDN NIC), there were over 2,200 networks connected to the Internet as of January, 1990 (Staff, 1991).

In 1984, the National Science Foundation got into the act, through its office of Advanced Scientific Computing (Sterling, 1993). Its own backbone network, NSFNET, took over the ARPANET’s role as the Internet’s main backbone in 1986 (Hardy, 1993; Staff, 1991). The NSF did this to permit supercomputing centers to communicate (Staff, 1991).

The scope of the NSFNET has expanded, and today it is the U.S. national research network, extending to academic and commercial communities the TCP/IP services that were previously available only to government researchers. The NSFNET itself links several midlevel networks, which in turn connect networks at universities and commercial enterprises. NSFNET, like the Internet, is also a network of networks (Staff, 1991).

Under NSF guidance, the Internet blazed forward, setting “a blistering pace for technical advancement, linking newer, faster, shinier supercomputers, through thicker, faster links, upgraded and expanded, again and again, in 1986, 1988, [and] 1990. Other governmental agencies leapt in: NASA, the National Institutes of Health, the Department of Energy, each of them maintaining a digital satrapy in the Internet confederation” (Sterling, 1993).

Under the guiding hand of the NSF, the Internet’s backbone moved from 56 kbps (thousands of bits per second) lines to 1.544 Mbps (millions of bits per second) T-1 lines in 1988. Four years later, in December of 1992, the NSFNET had completely switched over to 45 Mbps T-3 lines (MERIT/NSFNET, 1993). In five years, the capacity of the NSFNET expanded almost 700 times through the implementation of leading-edge technologies. “Today, the network’s backbone service carries data at the equivalent of 1,400 pages of single-spaced, typed text per second. This means the information in a 20-volume encyclopedia can be sent across the network in under 23 seconds” (MERIT/NSFNET, 1993).
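A back-of-the-envelope check of those quoted figures is easy to do, assuming roughly 4,000 characters per single-spaced typed page and about 1,600 pages per encyclopedia volume (both assumptions of mine, not figures from the source):

```python
# Rough sanity check of the MERIT/NSFNET figures quoted above.
t3_bits_per_second = 45_000_000                    # T-3 backbone speed
bytes_per_second = t3_bits_per_second / 8          # ~5.6 million characters/s
pages_per_second = bytes_per_second / 4_000        # assume ~4,000 chars/page
encyclopedia_pages = 20 * 1_600                    # assume ~1,600 pages/volume
seconds_needed = encyclopedia_pages / pages_per_second
print(round(pages_per_second), round(seconds_needed, 1))   # ~1406 pages/s, ~22.8 s
```

Under those assumptions the arithmetic lands almost exactly on the quoted 1,400 pages per second and just under 23 seconds for the encyclopedia.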

The volume of traffic on the Internet has grown exponentially, doubling every year since the NSF took over. In 1987, the National Science Foundation predicted that “over the next five years NSFNET will reach more than 10,000 mathematicians, scientists, and engineers at 200 or more campuses and other research centers” (MERIT/NSFNET, 1993). In fact, those numbers exceeded all expectations: total NSFNET traffic grew from 195 million packets in August 1988 to almost 24 billion in November 1992, “a 100-fold increase in four years” (MERIT/NSFNET, 1993). During November 1992, the network reached its first billion-packet-a-day mark, and traffic increased by an average of 11 to 20 percent per month (MERIT/NSFNET, 1993; Sterling, 1993).

As the NSFNET grew to handle much of the load of the Internet, older and less sophisticated networks became less useful. A milestone occurred in June 1990, when the Defense Communications Agency shut down the ARPANET because the midlevel networks and NSFNET had replaced its functions. “Perhaps the greatest testimony to the architecture of the Internet is that when ARPANET, the network from which the Internet grew, was turned off, no one but network staff was aware of it” (Staff, 1991).

Epilogue

The story does not end here, of course. The technology incorporated in the ARPANET was later used in creating local area networks (LANs) and Ethernet, and it introduced a whole new culture of computer users beyond engineers and scientists. Less than ten years after its introduction, the Internet bloomed to include nearly 100 nodes (O’Neill, et al., 1986). Almost ten years after that, in 1988, the number of nodes reached approximately 50,000 (Staff, 1993). Two years later the number of nodes reached 300,000, and in January of 1992, the number of hosts on the Internet climbed over 700,000 as speed and technology improved (Staff, 1993).

The history goes on to include the creation of the Internet proper and, ultimately, how that may change to the National Research and Education Network, or NREN, as soon as 1996 (Staff, 1993). Other portions of the Internet are also worth exploring: Usenet, InterNIC, and BBN, as well as other major networks which are not part of the Internet but are connected to it (Bitnet, for example). Additionally, there is the question of the future.

Thus, the story of the ARPANET as outlined here has only just begun. Through understanding these humble beginnings, a full appreciation of where the systems are now, and where they may go in the future, can be achieved.

Bibliography

ACM (1993). The Machine That Changed The World. Boston, MA: Public Broadcasting System.

Augarten, S. (1984). Bit by Bit: An Illustrated History of Computers. New York: Ticknor & Fields.

Borsook, P. (1991). Paul Baran: Inventor Extraordinaire. Network World (August 19), 65.

Curtis, P. (1992). Mudding: Social Phenomena in Text-Based Virtual Realities. XEROX Parc.

Denenberg, S. (1994). Personal Communication.

Hardy, H. E. (1993). The History of the Net. Master’s thesis, Grand Valley State University.

Krol, E. (1992). The whole Internet: user’s guide & catalog. Sebastopol, CA: O’Reilly & Associates.

Licklider, J. C. R. (1960). Man-computer symbiosis. IRE Trans. Human Factors Electronics, March, 4-11.

Licklider, J. C. R., & Vezza, A. (1978). Applications of Information Technology. Proceedings of the IEEE, 66(11), 1330-1346.

MERIT/NSFNET (1993). T-1 Now Part of Internet History. Berkeley Computing, 3(1), January–February.

Metcalfe, B. (1992, September 21). Internet fogies to reminisce and argue at Interop Conference. InfoWorld, p. 45.

Metz, J. (1993). An Ethnographic Examination of Computer-Mediated Communication: The Culture of Compusex. Master’s thesis, University of South Dakota.

Network-Service-Center, N. S. F. (1990). The Internet Tour. Cambridge, MA: Bolt Beranek and Newman Inc.

O’Neill, E. F., Bell, G., Nirenberg, I. L., & Stallings, W. (1986). Communications. Alexandria, VA: Time-Life Books.

Phaffenberger, B. (1991). Que’s Computer User’s Dictionary (2nd ed.). Carmel, Indiana: Que Corporation.

Press, L. (1993). Before the Altair: The history of personal computing. Communications of the ACM, 36(3), 27.

Reid, E. M. (1991). Electropolis: Communication and Community on Internet Relay Chat. Unpublished thesis, University of Melbourne.

Rosner, R. D. (1982). Packet Switching: Tomorrow’s Communications Today. Belmont, CA: Lifetime Learning Publications.

Staff (1991). History of the Internet (MERIT Report). National Science Foundation.

Staff (1993, June 16). Internet: the story so far. PC User, p. 94.

Sterling, B. (1993). A Short History of the Internet. The Magazine of Fantasy and Science Fiction (February).

Wood, D. C. (1985). Computer Networks: A Survey. In W. Chou (Ed.), Computer Communications: Systems and Applications. Englewood Cliffs, N.J.: Prentice-Hall.

 

End Notes:

[1] An excellent description of the birth of ENIAC, as well as its European predecessors, is given in Augarten (1984) and in the Peabody Award-winning series ‘The Machine That Changed The World’ (ACM, 1993). For purposes here, however, it is the events that occurred as a result of ENIAC’s success upon which I will focus.

 

[2] There appears to be some debate as to when effective resource-sharing occurred on the network. Wood (1985) says that it wasn’t until 1971 that effective resource-sharing service began.

 

[3] Actually, the term “Internet” was originally coined in 1972 at the First International Conference on Computer Communications, held in Washington, D.C. A demonstration of the ARPANET was given to visiting project representatives from several countries to discuss the commencement of work on agreed-upon protocols (Hardy, 1993).

 
