Computer Networking — Part One

Sanket Saxena
10 min read · Jul 1, 2023



Why Do We Need a Communication Model?

As our digital universe expands, the importance of robust, reliable communication systems has never been more critical. Yet, ensuring different network elements can communicate effectively and innovatively is no small task. This is where communication models like the OSI model come into play.

Agnostic Applications

A communication model is crucial to develop network-agnostic applications. Without a standard model, an application would need to know the details of the network medium — be it Wi-Fi, Ethernet, LTE, or Fiber — to communicate. It would necessitate developing separate applications for each network type, a complex and inefficient approach.

Network Equipment Management

Implementing a standard model like OSI simplifies network equipment management. It allows components from different manufacturers to interoperate, facilitating equipment upgrades and replacements. Without a standard model, replacing or upgrading network equipment can quickly turn into a nightmare.

Decoupled Innovation

Lastly, a standard model allows for decoupled innovation. Each layer of the model can be developed or enhanced independently without impacting the other layers. This flexibility drives technological advancement while preserving system stability.

Data Encapsulation

Data encapsulation is the process of adding headers and trailers to data as it moves down the OSI layers. The unit of data produced at each layer is called a Protocol Data Unit (PDU). At the transport layer, data is broken into segments, with headers containing source and destination ports. At the network layer, segments are encapsulated into packets, with source and destination IP addresses added. At the data link layer, packets are encapsulated into frames by adding headers and trailers containing MAC addresses and error-detection codes. Finally, at the physical layer, frames are converted into bits for transmission. On the receiving side, each layer strips its respective header (decapsulation) to extract the data intended for the layer above.
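The nesting described above can be sketched in a few lines of Python. The header layouts here are simplified stand-ins for illustration, not real TCP/IP header formats:

```python
# Toy illustration of encapsulation: each layer wraps the payload from
# the layer above with its own header (and, at Layer 2, a trailer).
# Field layouts are simplified stand-ins, not real protocol headers.

def transport_segment(data: bytes, src_port: int, dst_port: int) -> bytes:
    header = f"TCP {src_port}->{dst_port}|".encode()
    return header + data

def network_packet(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
    header = f"IP {src_ip}->{dst_ip}|".encode()
    return header + segment

def datalink_frame(packet: bytes, src_mac: str, dst_mac: str) -> bytes:
    header = f"ETH {src_mac}->{dst_mac}|".encode()
    trailer = b"|FCS"  # stand-in for the frame check sequence
    return header + packet + trailer

payload = b"GET / HTTP/1.1"
segment = transport_segment(payload, 49152, 80)
packet = network_packet(segment, "10.0.0.5", "93.184.216.34")
frame = datalink_frame(packet, "aa:bb:cc:dd:ee:ff", "11:22:33:44:55:66")
print(frame)
```

Decapsulation is the mirror image: each layer peels off its own header before handing the rest upward.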

The OSI model provides a standard set of rules or protocols that networked devices must follow to communicate. These protocols are globally recognized, meaning that manufacturers design their devices to comply with these rules. This enables devices from different manufacturers to communicate effectively with each other because they all adhere to the same set of protocols defined by the OSI model.

OSI Model: An Overview

  1. Application Layer (Layer 7): The topmost layer is where user interaction happens. When a user initiates a network application, it begins here. Protocols like HTTP for web services, FTP for file transfer, SMTP for email, and DNS for domain name resolution work at this layer. It facilitates the interaction between software applications and lower-level network services.
  2. Presentation Layer (Layer 6): Once the application layer has the data to send, the presentation layer prepares this data for delivery. This layer is responsible for the translation of data into a network-compatible format (serialization), and encrypting data for secure transmission. It can also compress data to make the transmission process more efficient.
  3. Session Layer (Layer 5): This layer establishes, manages, and terminates connections between applications on each end of the communication. It ensures that the session remains open long enough to transfer all the data being exchanged, then promptly closes the session to avoid leaving the connection open for longer than necessary. Protocols like Point-to-Point Tunneling Protocol (PPTP) for VPN connection or NetBIOS for local area networking work on this layer.
  4. Transport Layer (Layer 4): The Transport Layer is like the system’s traffic control center. It regulates data transmission by ensuring data packets are error-free, in sequence, without losses or duplications. It does so by using control measures, such as acknowledgments and timeouts. Here, the TCP or UDP protocol attaches source and destination ports to data packets, making them data segments.
  5. Network Layer (Layer 3): Here, the best path for data transmission is determined, known as routing. The Network Layer adds the logical addresses (source and destination IP addresses) to the data segments, turning them into packets. It handles the routing and forwarding of packets across networks. The Internet Protocol (IP), as well as routing protocols like OSPF and BGP, operate at this layer.
  6. Data Link Layer (Layer 2): The Data Link Layer takes packets from the Network Layer and divides them into frames. It adds headers and trailers to the data, which contain control information like source and destination MAC addresses and error detection codes. It also manages the physical addressing and media access control. Ethernet and PPP are examples of protocols that work at this layer.
  7. Physical Layer (Layer 1): This base layer is responsible for the actual physical connection between devices, transmitting raw bitstreams over this physical medium. This layer defines the electrical and mechanical aspects of the devices such as voltage levels, timing of voltage changes, physical data rates, maximum transmission distances, and physical connectors. Networking technologies like Ethernet cables (Cat5e, Cat6), coaxial cables, and fiber optics work at this layer.

Here’s a more practical look at where these devices and services operate within the OSI model, along with why and how efficiently they perform their functions:

Switch

  • OSI Layer: Data Link Layer (Layer 2)
  • Scenario: Switches are used within a network to connect multiple devices to one another. They are fast and efficient because they learn which device sits behind each port (via MAC addresses) and forward each frame only to the port of its destination, rather than broadcasting it to every port. This keeps traffic between two devices from congesting the rest of the network.
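The learn-and-forward behavior that makes switches efficient can be sketched as a small lookup table; the MAC addresses and port numbers below are made up for illustration:

```python
# Minimal sketch of a Layer 2 switch's learn-and-forward logic.
# MAC addresses and port numbers are made up for illustration.

mac_table = {}  # learned MAC address -> switch port

def handle_frame(src_mac, dst_mac, in_port):
    mac_table[src_mac] = in_port       # learn where the sender is
    if dst_mac in mac_table:
        return f"forward out port {mac_table[dst_mac]}"
    return "flood to all other ports"  # destination not yet learned

print(handle_frame("aa:aa", "bb:bb", 1))  # flood to all other ports
print(handle_frame("bb:bb", "aa:aa", 2))  # forward out port 1
```

Once both devices have sent a frame, the switch delivers subsequent frames only to the right port, which is why flooding dies down quickly on a real LAN.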

Router

  • OSI Layer: Network Layer (Layer 3)
  • Scenario: Routers are used when you need to connect two or more different networks, often in different geographical locations. They are not as fast as switches due to the higher level of processing involved. They read each packet’s IP address to determine the best path for the packet across the network, taking into account factors such as network congestion and the cost of different potential routes.

Load Balancer (ALB, NLB)

  • OSI Layer: ALB operates at the Application Layer (Layer 7) and NLB at the Transport Layer (Layer 4)
  • Scenario: Load balancers like ALB and NLB are used in AWS to distribute incoming application traffic across multiple targets, such as EC2 instances. They are efficient in ensuring high availability and fault tolerance of applications. ALB, being a Layer 7 load balancer, can route traffic based on the content of the message, like HTTP headers and request paths, making it more flexible. NLB, a Layer 4 load balancer, can handle millions of requests per second while maintaining ultra-low latencies, which is useful for handling TCP traffic where extreme performance is required.
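ALB's content-based (Layer 7) routing amounts to matching rules against the request before choosing a target. A minimal sketch, with hypothetical path prefixes and target group names:

```python
# Sketch of Layer 7 (content-based) routing as an ALB might apply it.
# The path prefixes and target group names are hypothetical.
rules = [
    ("/api/", "api-target-group"),
    ("/images/", "static-target-group"),
]
default_target = "web-target-group"

def route(path: str) -> str:
    for prefix, target in rules:
        if path.startswith(prefix):
            return target
    return default_target  # no rule matched

print(route("/api/users"))   # api-target-group
print(route("/index.html"))  # web-target-group
```

An NLB, by contrast, never looks at the request body or path; it forwards based on Layer 4 information (IP addresses and ports), which is part of why it can sustain much higher throughput.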

Reverse Proxy

  • OSI Layer: Application Layer (Layer 7)
  • Scenario: A reverse proxy is used to distribute network traffic to a number of servers to prevent any one server from becoming a bottleneck. It also provides an additional layer of abstraction and control to ensure the smooth flow of network traffic between clients and servers. Reverse proxies can enhance performance by compressing inbound and outbound data, and by caching; they also add security and anonymity.

API Gateway

  • OSI Layer: Application Layer (Layer 7)
  • Scenario: API Gateway is a managed service in AWS that acts as a “front door” for developers to access data, business logic, or functionality from back-end services. It handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, data transformation, and more. It’s quite efficient as it allows developers to concentrate on the application logic rather than dealing with the overhead of managing APIs.

Firewall

  • OSI Layer: Works across multiple layers (Layer 3 — Layer 7)
  • Scenario: Firewalls are security devices used to monitor and control incoming and outgoing network traffic based on predetermined security rules. They establish a barrier between a trusted and an untrusted network. Firewalls that operate at the higher levels can inspect the payload of a packet and make decisions based on the actual content of the traffic, offering more comprehensive security.

DNS

  • OSI Layer: Application Layer (Layer 7)
  • Scenario: DNS, or Domain Name System, is used to translate human-readable domain names (like www.example.com) into machine-readable IP addresses (like 192.0.2.1). It operates quite quickly, although exact speed can depend on various factors such as network congestion, physical location, and the specific settings of the DNS server. AWS’s Route 53 is an example of a DNS service provider.
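The lookup a DNS client performs is exposed by most standard libraries; for example, Python's socket module asks the system resolver. Resolving localhost here keeps the result predictable, since public names return network-dependent addresses:

```python
import socket

# Resolve a hostname to IP addresses, as a DNS client would.
# This goes through the system resolver, so the addresses returned
# for a public name will vary by network and location.
def resolve(hostname: str) -> list[str]:
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))  # typically includes 127.0.0.1 and/or ::1
```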

The OSI model is not without criticism. Its seven-layer structure can be complex for newcomers, and differentiating the functions of adjacent layers is often challenging, leading to debates about which layer handles which responsibility. In practice, consolidating layers 5–7 into a single application layer is often more useful.

The TCP/IP Model

The TCP/IP model is a streamlined alternative to the OSI model, featuring only four layers:

  • Application Layer: Consolidates layers 5, 6, and 7 of the OSI model.
  • Transport Layer: Corresponds to Layer 4 in the OSI model.
  • Internet Layer: Matches Layer 3 in the OSI model.
  • Network Interface Layer: Also called the link layer; similar to Layer 2 in the OSI model.

Interestingly, the TCP/IP model doesn’t officially cover the Physical Layer, leaving physical networking details more flexible.

Browsing Google.com Using TCP/IP Model

The process of browsing Google.com can also be explained using the TCP/IP model. Here’s how it works:

  1. Application Layer: You type www.google.com in your web browser. Your browser, functioning at the Application layer, initiates an HTTP request.
  2. Transport Layer: The request is then handed over to the Transport layer, where TCP (Transmission Control Protocol) adds its header information, including source and destination ports. The process of segmenting the data, if necessary, also occurs at this layer.
  3. Internet Layer: The Internet layer encapsulates the TCP segment into an IP packet, adding source and destination IP addresses.
  4. Network Interface Layer: Finally, the Network Interface layer further encapsulates the IP packet into a frame, attaching source and destination MAC addresses, and transmits it over the physical network.

On the server side, the layers interpret and strip their respective headers in the reverse order, with the Application layer ultimately presenting the requested Google homepage.
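The application layer's share of this work is just producing HTTP text; everything below is handled by the operating system's TCP/IP stack. In the sketch below, a real browser would open a TCP connection with socket.create_connection(("www.google.com", 80)), but a local socket pair stands in so the example runs offline:

```python
import socket

# The application layer only composes protocol text; the kernel's
# TCP/IP stack handles segmentation, IP packets, and frames.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: www.google.com\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode()

# A local socket pair stands in for a real TCP connection so this
# example runs without network access.
client, server = socket.socketpair()
client.sendall(request)            # the transport layer takes over here
received = server.recv(4096)
client.close()
server.close()

print(received.decode().splitlines()[0])  # GET / HTTP/1.1
```

Note that the code never touches ports, IP addresses (beyond the Host header), or MAC addresses directly; those belong to the layers below.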

Understanding AWS Networking

VPC (Virtual Private Cloud)

Amazon’s VPC lets you provision a logically isolated, virtual network where you can launch AWS resources. When setting up a VPC, you define its IP address space from which you can create subnets. To ensure high availability, you should deploy your resources across multiple Availability Zones (AZs) in a VPC.

VPC Peering: To connect one VPC to another, AWS offers VPC Peering. It’s a networking connection between two VPCs enabling you to route traffic between them privately. This connection doesn’t traverse the public internet and doesn’t require a gateway or VPN connection.

Subnet

Subnets allow you to segment your VPC’s IP address space, which enhances the security and traffic management of your applications. There are two types:

  • Public Subnets: Have a route to an Internet Gateway (IGW), so instances in them can communicate with the internet.
  • Private Subnets: Have no direct route to the IGW, so instances in them cannot communicate with the internet directly (a NAT Gateway can provide outbound access).

You can employ AWS Network Access Control Lists (NACLs) and Security Groups (SGs) to add a layer of security to subnets.

CIDR Notations and Masks

CIDR (Classless Inter-Domain Routing) notation is a compact representation of an IP address and its associated network mask. For example, ‘192.0.2.0/24’ denotes the network starting at 192.0.2.0 with a 24-bit routing prefix, equivalent to the subnet mask 255.255.255.0 (24 leading 1-bits), leaving 8 bits for host addresses. Plan your CIDR blocks carefully to avoid running out of IP addresses for your resources.
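Python's ipaddress module makes the arithmetic behind a CIDR block explicit:

```python
import ipaddress

# A /24 block: 24 prefix bits, 8 host bits -> 2**8 = 256 addresses.
net = ipaddress.ip_network("192.0.2.0/24")

print(net.netmask)        # 255.255.255.0
print(net.prefixlen)      # 24
print(net.num_addresses)  # 256
print(ipaddress.ip_address("192.0.2.17") in net)  # True
```

(When sizing VPC subnets, remember that AWS reserves a handful of addresses in each subnet for its own use, so the usable count is slightly lower than the raw block size.)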

Routing Tables

A routing table contains a set of rules, called routes, that determine where network traffic is directed. Each subnet must be associated with a routing table, which controls the traffic leaving that subnet. If a subnet doesn’t have a specific route table associated with it, it uses the main route table.
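Route selection follows longest-prefix match: among all routes whose CIDR block contains the destination, the most specific one wins. A minimal sketch with hypothetical route-table entries:

```python
import ipaddress

# Hypothetical route table: CIDR block -> target.
routes = {
    ipaddress.ip_network("10.0.0.0/16"): "local",
    ipaddress.ip_network("10.0.1.0/24"): "nat-gateway",
    ipaddress.ip_network("0.0.0.0/0"): "internet-gateway",
}

def lookup(dest: str) -> str:
    addr = ipaddress.ip_address(dest)
    # Longest-prefix match: the most specific containing block wins.
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(lookup("10.0.1.5"))  # nat-gateway (the /24 beats the /16)
print(lookup("10.0.7.9"))  # local
print(lookup("8.8.8.8"))   # internet-gateway (default route)
```

The 0.0.0.0/0 entry is the default route: it contains every address, so it catches anything no more specific route matches.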

Elastic IP

An Elastic IP is a static, public IPv4 address you can allocate to your account, and associate with your instance. You can quickly remap an Elastic IP to another instance in case of instance failure, which makes it ideal for fault-tolerant and highly available applications.

NAT Gateway

A NAT (Network Address Translation) Gateway enables instances in a private subnet to connect to the internet or other AWS services but prevents the internet from initiating a connection with those instances. If you have instances that require internet access for software updates but should not be directly accessible from the internet, a NAT Gateway is the perfect solution.

API Gateway

API Gateway is an AWS service for creating, deploying, and managing secure APIs at scale. It provides features like traffic management, API version control, and authorization and access control. It’s a key component for serverless architectures and mobile backend services.

Load Balancers (ALB and NLB)

Load balancers are crucial components for ensuring high availability and fault tolerance of applications. They distribute incoming traffic across multiple targets in multiple Availability Zones to minimize the risk of overloading a single instance.

  • Application Load Balancer (ALB): Best suited for load balancing of HTTP and HTTPS traffic.
  • Network Load Balancer (NLB): Best suited for load balancing of TCP traffic where extreme performance is required.

Transit Gateway

AWS Transit Gateway acts as a network transit hub for connecting your VPCs and on-premises networks. It simplifies network architecture, reduces operational overhead, and facilitates scalability. Transit Gateways are essential for building a highly available network architecture across multiple AWS accounts and VPCs.

Availability Zones (AZs)

Availability Zones are physically separate locations within an AWS region that are engineered to be insulated from failures in other AZs, providing inexpensive, low-latency network connectivity to other zones in the same region. Deploying your resources across multiple AZs can significantly increase the fault tolerance of your system.

VPN and Tunneling

Virtual Private Network (VPN) connections enable you to establish secure and private network connections between your network and your VPCs. AWS supports Internet Protocol Security (IPSec) VPN connections. VPN tunneling involves encapsulating a network protocol within packets carried by the second network. AWS Site-to-Site VPN creates a secure, IPsec Site-to-Site VPN tunnel for a safe and secure connection.

By leveraging these AWS networking services, you can build scalable, secure, and highly available network architectures. Each service plays a crucial role in ensuring your resources are available to serve your applications, and your data is secure from threats.
