When I open my browser and type in a website address, I do it automatically – like flipping a light switch without thinking about the electrical grid inside the wall. But if you stop for a moment and take this mechanism apart, the picture becomes far more interesting than it seems.
The World Wide Web – WWW – is not a synonym for the internet. It's one of its applications. And the confusion between these two concepts begins with the very first conversation about technology. Let's lay it all out on the table and see what's what.
The Internet vs. the WWW – What's the Difference?
Let's start with a basic mistake almost everyone makes. When people say, “I went online,” they usually mean they opened a browser and visited a website. But the internet existed long before browsers. And it continues to operate in places where there are no browsers at all.
The Internet is the physical and protocol-based infrastructure: millions of computers, servers, routers, and cables connected by a common set of rules for data transmission. This infrastructure is used for sending emails, transferring files, running instant messengers, and making video calls. These are all different services of the internet.
The WWW (World Wide Web) is one of these services. Its essence is hypertext: documents linked together by hyperlinks, which can be opened in a browser. Nothing more, nothing less. But it was this service that became what most people call “the internet” in everyday conversation – because it made the network understandable to someone without a technical background.
To understand the WWW, you first need to understand what came before it. And before it, there was ARPANET – a network developed with the support of the U.S. Department of Defense in the late 1960s. Its initial task was a technical one: to ensure stable data transmission between research institutions. The first transmission occurred in 1969 between the University of California, Los Angeles, and the Stanford Research Institute. The computer was supposed to transmit the word “login.” Only the first two letters – “lo” – made it through before the system crashed. Nevertheless, it was a historic moment.
Over the next decade, the network grew as universities, research centers, and government agencies connected to it. The first protocols emerged – sets of rules by which computers agree on how to transmit data. In 1983, ARPANET switched to the TCP/IP protocol, which remains the foundation of the internet today. This moment is generally considered the birth of the modern internet, as it established a unified foundation upon which different networks could operate.
TCP/IP: How the “Language” of the Internet Works
TCP/IP is a pair of protocols, and it's worth understanding them at least in principle. Imagine you're sending a large package through the regular mail, but it doesn't fit into a single box. So, you break it down into several smaller packets, number them, and send them separately along different routes. At the other end, the recipient reassembles the packets in the correct order.
This is exactly how IP (Internet Protocol) works: it divides data into packets and routes them across the network. Meanwhile, TCP (Transmission Control Protocol) ensures that all packets arrive and are reassembled correctly. If a packet gets lost, TCP requests it again. This is what makes data transmission over the internet reliable, even with an unstable connection.
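The split-number-reassemble idea can be shown in a few lines of Python. This is a toy illustration of the principle, not the real protocol: actual IP and TCP work at the operating-system level with binary headers, checksums, and retransmission timers.

```python
import random

def split_into_packets(data: bytes, size: int) -> list[tuple[int, bytes]]:
    """Divide data into numbered chunks, the way IP fragments a payload."""
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Reorder chunks by sequence number, the way TCP does on the receiving end."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"Hello across the network!"
packets = split_into_packets(message, size=5)
random.shuffle(packets)            # packets may travel different routes and arrive out of order
assert reassemble(packets) == message
```

The sequence numbers are what make out-of-order delivery harmless: the receiver can always restore the original order, and a missing number tells TCP exactly which packet to request again.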
In 1989, British scientist Tim Berners-Lee was working at CERN, the European Organization for Nuclear Research near Geneva. The problem he wanted to solve was purely practical: CERN employed thousands of scientists, and their documents were stored in various formats on different machines, making it extremely difficult to find the necessary information.
His proposal was titled “Information Management: A Proposal” – that was the exact heading of the internal document he submitted to his supervisor. His boss wrote a note in the margin: “Vague, but exciting.” This was perhaps the most modest assessment of an idea destined to change how humanity works with information.
In his proposal, Berners-Lee formulated three key concepts:
- HTML (HyperText Markup Language) – a markup language for creating documents with hyperlinks. Web pages are written in HTML, which describes a document's structure: headings, paragraphs, links, and images.
- HTTP (HyperText Transfer Protocol) – a protocol for transmitting hypertext. This is the set of rules by which a browser requests a document from a server, and the server sends it back. When you enter a website address, your browser sends an HTTP request to the server and receives an HTML page in return.
- URL (Uniform Resource Locator) – a universal address for a resource. In simple terms: an addressing system that allows any document on the network to be uniquely located.
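The URL in particular has a fixed anatomy that Python's standard library can take apart. The address below is a made-up example chosen to exercise every part:

```python
from urllib.parse import urlparse

# A hypothetical URL, broken into the parts a browser actually uses.
url = "https://www.example.org:443/docs/intro.html?lang=en#history"
parts = urlparse(url)

print(parts.scheme)    # 'https' – which protocol to speak
print(parts.hostname)  # 'www.example.org' – which server to ask (via DNS)
print(parts.port)      # 443 – which port on that server
print(parts.path)      # '/docs/intro.html' – which document to request
print(parts.query)     # 'lang=en' – extra parameters for the server
print(parts.fragment)  # 'history' – a position within the page (handled by the browser)
```

Each component answers a different question – how to talk, whom to talk to, and what to ask for – which is exactly what makes the URL a universal address.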
In 1991, the very first website in history was launched at CERN. It described what the World Wide Web was and how to use it. That site is still available at its original address – an artifact that has survived three decades and several generations of browsers.
Initially, the web was the domain of techies. The first browsers were text-based programs without graphics and required a basic knowledge of the command line. Everything changed in 1993 with the release of Mosaic, the first browser with a graphical user interface that could display images directly within a document instead of in a separate window. It was written by a team at the National Center for Supercomputing Applications in the US.
Mosaic did exactly what is needed for the mass adoption of any technology: it removed the barrier to entry. Now, all you needed to use the web was the ability to click a mouse. Users began connecting to the network by the thousands, and then by the millions.
The next step was taken by Netscape Navigator, a commercial browser released in 1994. It added support for encryption (the SSL protocol), which paved the way for e-commerce. Without encryption, transmitting your credit card details over the internet would have been like announcing the number out loud on a bus.
The Browser Wars and Their Aftermath
In the late 1990s, the so-called “browser wars” erupted between Netscape and Microsoft, which had released Internet Explorer and started bundling it with Windows. Microsoft won through distribution: a user would turn on their computer, and the browser was already there. By the early 2000s, Internet Explorer's market share had reached 90%.
But this victory had a downside: the monopoly led to stagnation. Microsoft slowed down its development pace, standards began to diverge, and web developers were forced to create sites specifically “for IE” and “for everyone else.” The situation changed with the arrival of Firefox in 2004 and then Chrome in 2008 – the latter eventually becoming the dominant browser for the next decade.
Let's break down a specific scenario. You open your browser and enter a website address. What happens next? Here's the full chain of events, step by step.
- DNS Query. The browser doesn't know where the server with the desired site is physically located. It sends a query to the DNS (Domain Name System), which translates the domain name into an IP address. It's like a phone book: you know the name, and the system gives you the number.
- Establishing a Connection. After getting the IP address, the browser establishes a TCP connection with the server. This is a three-way “handshake”: the browser says “hello,” the server replies “hello,” and the browser confirms receipt. Only then does data exchange begin.
- HTTPS and Encryption. If the site uses HTTPS (as most do today), the browser and server agree on encryption before transferring data – this is called the TLS handshake. The server provides a digital certificate, the browser verifies it, and all further communication is encrypted. This is why the lock icon in the address bar isn't just a decorative element; it's a real technical confirmation.
- HTTP Request. The browser requests a specific resource: “Give me the index.html file.” The server responds and transmits the file.
- Parsing and Rendering. The browser receives the HTML code and starts parsing it. It builds the DOM (Document Object Model), an internal model of the page. Along the way, it discovers links to CSS files (styles) and JavaScript files (logic) and requests them as well. After that, it renders the final page on the screen.
In practice, this entire process takes anywhere from a few dozen to a few hundred milliseconds. Less time than it takes for the eye to blink.
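The request/response half of this chain can be sketched with Python's standard library. This is a minimal sketch, not a browser: a throwaway local server stands in for a real website, so the DNS and TLS steps are replaced by a loopback address and plain HTTP.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny local server standing in for a real website (illustration only).
class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><h1>Hello</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)    # port 0: let the OS pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Steps 1-2 (name resolution and the TCP handshake) happen inside
# HTTPConnection; here the "DNS result" is simply 127.0.0.1.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/index.html")              # step 4: the HTTP request
response = conn.getresponse()                   # the server's reply
html = response.read().decode()                 # step 5: a browser would now parse this HTML
print(response.status, html)
server.shutdown()
```

What a real browser adds on top is everything after the transfer: building the DOM from that HTML, fetching the CSS and JavaScript it references, and rendering the result.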
Every device on the internet has an IP address – a numerical identifier. In the IPv4 version, an address looks like four numbers from 0 to 255, separated by dots: for example, 93.184.216.34. IPv4 allows for about 4.3 billion unique addresses – which turned out to be catastrophically few by modern internet standards. That's why IPv6 is being rolled out in parallel: its addresses are much longer and provide a practically inexhaustible supply – about 3.4 × 10^38 unique addresses.
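Both address-space sizes follow directly from the bit widths (32 bits for IPv4, 128 for IPv6), and Python's `ipaddress` module can handle both formats. The IPv6 address below is an arbitrary example chosen for illustration:

```python
import ipaddress

v4 = ipaddress.ip_address("93.184.216.34")                        # the IPv4 address from the text
v6 = ipaddress.ip_address("2606:2800:220:1:248:1893:25c8:1946")   # an example IPv6 address

print(v4.version, int(v4))   # under the dots, an IPv4 address is one 32-bit number
print(v6.version)

# Address-space sizes are just powers of two:
print(2 ** 32)               # IPv4: 4,294,967,296 (~4.3 billion)
print(f"{2 ** 128:.1e}")     # IPv6: ~3.4e+38
```

The dotted notation is purely for human convenience – routers work with the underlying integer.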
But remembering numerical addresses is inconvenient. This is precisely why the Domain Name System (DNS) was invented. A domain is a human-readable address for a site, with a hierarchical structure that is best read from right to left: in an address like docs.example.de, the .de part is a country-code top-level domain (Germany), example is the name of the company or project, and docs is a subdomain. DNS servers store lookup tables matching domains to IP addresses and answer queries from browsers all over the world.
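Both halves of this – the hierarchy and the lookup – are visible from Python. The domain docs.example.de is a hypothetical example; the resolution call uses "localhost", which any machine can resolve without touching the network:

```python
import socket

# The hierarchy: domain labels read right to left, from most general to most specific.
domain = "docs.example.de"               # hypothetical domain following the text's pattern
labels = domain.split(".")
print(list(reversed(labels)))            # ['de', 'example', 'docs']: TLD -> name -> subdomain

# The lookup: getaddrinfo is the OS-level call a browser ultimately relies on
# to turn a name into IP addresses. "localhost" resolves locally.
results = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)
for family, _, _, _, sockaddr in results:
    print(family.name, sockaddr[0])      # e.g. AF_INET 127.0.0.1, AF_INET6 ::1
```

For a real site, the same call would trigger the chain of DNS queries described above, usually answered from a nearby resolver's cache.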
The history of the web is often divided into informal generations. These aren't strict technical terms but rather descriptions of how the user's role has changed.
Web 1.0: Read, Don't Write
The first generation of the web – roughly from 1991 to 2004 – was essentially a digital library. Websites were static pages: a company or author would publish information, and the user would read it. No interactivity, no comments, no profiles. You were a spectator.
Web 2.0: Write, Share, Interact
Starting around the mid-2000s, the transition to what became known as Web 2.0 began. Technologically, this meant the emergence of dynamic pages, AJAX (asynchronous data loading without page reloads), and APIs between services. From a user's perspective, it meant the ability to create content themselves.
Blogs, social networks, video hosting services, and forums are all part of Web 2.0. The user was no longer just a consumer but also a producer. It was during this period that a few platforms concentrated a colossal amount of content and data, raising questions about who actually owns what the user creates.
What's Next: Decentralization and New Models
The discussion about the next step – so-called Web 3.0 – is ongoing but has yet to yield a clear result. Some define it as a decentralized architecture based on blockchain, where data doesn't belong to a single platform. Others see it as the semantic web, where machines understand the meaning of content, not just its structure. Still others believe that the talk of Web 3.0 has gotten ahead of reality.
Regardless of what the next stage is called, the trend is clear: the network is becoming less centralized, the issue of data ownership is growing more acute, and the demands for security and privacy are higher than ever.
Behind every website you open lies a physical infrastructure that few people think about. Data centers are buildings, sometimes tens of thousands of square meters in area, filled with server racks. They consume enormous amounts of electricity and require constant cooling. The largest of them are located in the US, Europe, and Asia and are managed by a handful of companies that effectively run most of the public internet.
The physical backbone of global connectivity is submarine cables. Thousands of kilometers of fiber-optic cables are laid across ocean floors, carrying about 95% of international internet traffic. Satellite solutions, which have been developing rapidly in recent years, still remain a supplement to the cable infrastructure rather than a replacement – though the gap is narrowing.
When one of these cables fails – due to a ship's anchor, an underwater earthquake, or simple wear and tear – it immediately affects connection speeds across entire regions. This isn't an abstraction: such incidents are recorded regularly and are well-documented.
In the early days of the web, data was transmitted in plain text. Anyone with access to “listen in” on traffic along the route between a user and a server could read everything: logins, passwords, the content of messages. This was the norm as long as the internet remained an academic environment. But with the advent of e-commerce, unencrypted data transmission became an obvious threat.
SSL (Secure Sockets Layer), later replaced by TLS (Transport Layer Security), solved this problem through cryptographic encryption. Today, HTTPS is the de facto standard: browsers explicitly warn users about sites running on insecure HTTP, and search engines factor in the presence of HTTPS for ranking.
The certificates that confirm a site's authenticity are issued by special organizations called Certificate Authorities. This creates a chain of trust: your browser trusts certain root authorities, they issue certificates to websites, and you get a guarantee that you are communicating with the exact server you intended to reach.
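This chain of trust is not unique to browsers – any TLS client starts from a bundle of trusted root certificates shipped with the operating system or application. Python's `ssl` module exposes this directly; the exact certificate counts printed will vary by system:

```python
import ssl

# A default client context behaves like a browser: it loads the system's
# trusted root CAs and refuses servers whose certificate chain doesn't
# lead back to one of them.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: server certificates must verify
print(ctx.check_hostname)                    # True: the cert must match the domain
print(ctx.cert_store_stats())                # e.g. {'x509': 140, 'crl': 0, 'x509_ca': 140}
```

Every HTTPS connection is checked against those roots: if a certificate can't be traced back to a trusted authority, the browser shows a warning instead of the page.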
If we put it all together, the picture looks like this:
- The Internet is the infrastructure. TCP/IP is the language that all participants speak.
- The WWW is one of the internet's services, based on three elements: HTML, HTTP, and URL.
- DNS is the address book that translates names into addresses.
- HTTPS is the encryption without which e-commerce and personal data would be defenseless.
- The browser is the tool that handles the entire technical process and shows the user the final page.
What Tim Berners-Lee proposed in 1989 as a tool for sharing scientific documents within a single organization has become an infrastructure without which it's hard to imagine the modern economy, education, medicine, and daily life. This isn't a metaphor – it's a simple fact, confirmed by statistics: according to the ITU, more than five billion people are connected to the internet today.
And despite its massive scale, it all rests on the same three ideas that one person wrote down on a few pages in a Swiss office more than thirty years ago. That's how it works. Now you know.