Interprocess Communication and Sockets

From queues to networks — how processes actually talk

Author

Karsten Naert

Published

February 9, 2026

Introduction

In Lectures 1 and 2 we learned to spawn processes and threads, and we used multiprocessing.Queue to shuttle data between them. That’s interprocess communication—but it only works when all parties live on the same machine and share the same Python runtime.

What if the processes are on different machines? Or written in different languages? Or started at completely different times? That’s where sockets come in—the universal mechanism for processes to talk to each other, whether they’re next-door neighbors or on opposite sides of the planet.

This lecture bridges the gap from “processes sharing a queue” to “processes sending messages over a network.” By the end, you’ll understand the plumbing that makes everything from chat apps to REST APIs work.

This is the third lecture in a five-part series:

  1. Processes and Threads
  2. Multiprocessing and Multithreading in Practice
  3. Interprocess Communication and Sockets (you are here)
  4. Client-Server Architectures and RESTful APIs
  5. Async Programming, Event Loops, and ASGI

The Big Picture: What Is IPC?

IPC stands for Interprocess Communication: any mechanism that allows two or more processes to exchange data. It’s a surprisingly broad umbrella. Here’s a rough taxonomy:

| Scope | Mechanism | Speed | Python module |
| --- | --- | --- | --- |
| Same process (threads) | Shared variables, queue.Queue | ⚡ Fastest | threading, queue |
| Same machine (processes) | Pipes, shared memory, multiprocessing.Queue | 🚀 Fast | multiprocessing |
| Same machine (any language) | Unix domain sockets, named pipes, memory-mapped files | 🚀 Fast | socket, mmap |
| Different machines | Network sockets (TCP/UDP), HTTP, message brokers | 🐢 Slower (network) | socket, http, requests |

Notice how multiprocessing.Queue—which we used in Lecture 2 to send π estimation results from workers to the coordinator—is already IPC. Under the hood, it serializes your data with pickle, shoves it through an OS-level pipe, and deserializes it on the other side. We just never had to think about it because the Queue abstraction handled the messy bits.
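You can see that serialization step in isolation. The sketch below (the payload is made up) does by hand what Queue.put() and Queue.get() do under the hood with pickle:

```python
import pickle

payload = {"worker": 3, "pi_estimate": 3.1408}

wire_bytes = pickle.dumps(payload)   # what put() does before writing to the pipe
restored = pickle.loads(wire_bytes)  # what get() does after reading from the pipe

print(type(wire_bytes))  # <class 'bytes'>
print(restored == payload)  # True
```

This is also why anything you put on a multiprocessing.Queue must be picklable: the data has to survive a round trip through bytes.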

The Warehouse Analogy, Extended

In Lecture 1 we said processes are separate warehouses and threads are workers inside the same warehouse. Let’s extend that:

  • Threads (same warehouse): Workers yell across the room. Fast, but chaotic without coordination.
  • Pipes / shared memory (same street): Warehouses next door pass crates through a window or a shared loading dock. Fast, but you both need to be on the same street.
  • Network sockets (across the city/country): Warehouses send packages via the postal service. Slower, but works regardless of distance. And the postal service doesn’t care what’s inside the package—it just delivers bytes.

This lecture is about graduating from shouting across the room to using the postal service.

Quick Tour: Pipes and Shared Memory

Before we dive into sockets, let’s briefly see two other IPC mechanisms that multiprocessing provides. These are fast and convenient for same-machine communication, but limited in scope.

multiprocessing.Pipe: A Two-Way Channel

A Pipe creates a pair of connected endpoints. Data sent into one end comes out the other. Think of it as two tin cans connected by a string.

from multiprocessing import Process, Pipe

def child(conn):
    conn.send("Hello from the child!")
    response = conn.recv()
    print(f"Child got: {response}")
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()

    p = Process(target=child, args=(child_conn,))
    p.start()

    msg = parent_conn.recv()
    print(f"Parent got: {msg}")
    parent_conn.send("Hello back from the parent!")

    p.join()

Run this and you’ll see:

Parent got: Hello from the child!
Child got: Hello back from the parent!

Simple and elegant. But Pipe only connects two endpoints—it’s not a broadcast mechanism. And both processes must be spawned from the same Python program.

Pipe vs Queue

Pipe is lower-level and faster than Queue (no locking overhead), but it only supports exactly two endpoints. Queue supports many producers and consumers. Use Pipe when you have a simple parent↔︎child channel; use Queue for anything more complex.

multiprocessing.shared_memory: Direct Memory Access

Sometimes you don’t want to send data—you want both processes to see the same chunk of memory. The shared_memory module (Python 3.8+) lets you allocate a block of memory that multiple processes can read and write directly.

from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory
import struct
import time

def writer():
    shm = SharedMemory(name="my_shared_block", create=True, size=8)
    # Write the float 3.14159 into the shared memory block
    struct.pack_into('d', shm.buf, 0, 3.14159)
    print("Writer: wrote 3.14159 to shared memory")
    time.sleep(2)  # Keep the block alive long enough for the reader
    shm.close()
    shm.unlink()  # Free the shared memory block

def reader():
    shm = SharedMemory(name="my_shared_block", create=False)
    value = struct.unpack_from('d', shm.buf, 0)[0]
    print(f"Reader: read {value} from shared memory")
    shm.close()

if __name__ == '__main__':
    # In practice you'd run these as separate scripts.
    # Here we cheat and run them sequentially for demonstration.
    # (Note: we sleep in the writer rather than call input(), because
    # multiprocessing redirects a child process's stdin.)
    writer_process = Process(target=writer)
    writer_process.start()

    time.sleep(0.5)  # Give the writer a head start

    reader_process = Process(target=reader)
    reader_process.start()
    reader_process.join()

    writer_process.join()

Shared Memory Is Tricky

With shared memory, there’s no built-in synchronization. If two processes write to the same bytes simultaneously, you get corrupted data. You’d need to layer a Lock or Semaphore on top (remember those from Lecture 1?). For most use cases, Queue or Pipe is safer and simpler.

These mechanisms are useful, but they’re limited to processes on the same machine. For communication across machines—or even between programs that weren’t started together—we need something more universal. Enter sockets.

Enter Sockets: The Universal IPC

A socket is an endpoint for communication. Think of it as a mailbox: it has an address, and you can send to it or receive from it. Two sockets connected together form a communication channel.

Every network application you’ve ever used—your browser, Spotify, Discord, git push—is built on sockets. HTTP, SMTP, SSH, DNS: all of these are protocols that send structured bytes through sockets. The socket is the raw pipe; the protocol is the language spoken through it.

Addresses: IP + Port

A socket address has two parts:

  • IP address: identifies the machine. Like a street address. 127.0.0.1 (or localhost) means “this machine.”
  • Port number: identifies the process on that machine. Like an apartment number. Ports range from 0 to 65535. Ports below 1024 are reserved for well-known services (80 for HTTP, 443 for HTTPS, 22 for SSH, etc.).

So 127.0.0.1:8000 means “port 8000 on this machine.” When your browser visits http://localhost:8000, it’s connecting a socket to that address.
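Python can look up those well-known port numbers for you. This reads the operating system's services database, which is present on most systems:

```python
import socket

# Map well-known service names to their reserved port numbers
print(socket.getservbyname("http"))   # 80
print(socket.getservbyname("https"))  # 443
print(socket.getservbyname("ssh"))    # 22
```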

Python’s socket Module

The socket module is Python’s interface to the operating system’s socket API. It’s low-level—you’re dealing with raw bytes, not Python objects. That’s the point: understanding this layer demystifies everything above it (HTTP, REST, web frameworks).

import socket

# Create a socket
s = socket.socket(
    socket.AF_INET,      # Address family: IPv4
    socket.SOCK_DGRAM,   # Socket type: UDP (datagram)
)

print(f"Created socket: {s}")
s.close()

Two key choices when creating a socket:

| Parameter | Option | Meaning |
| --- | --- | --- |
| Address family | AF_INET | IPv4 (the one you know: 192.168.1.1) |
| | AF_INET6 | IPv6 (the newer, longer addresses) |
| Socket type | SOCK_DGRAM | UDP — connectionless, unreliable, fast |
| | SOCK_STREAM | TCP — connection-based, reliable, ordered |

We’ll start with UDP because it’s simpler—fewer moving parts. Then we’ll graduate to TCP.

UDP Sockets: Fire and Forget

UDP (User Datagram Protocol) is the simplest way to send data over a network. You put bytes into a datagram, throw it at an address, and hope for the best. There’s no connection, no acknowledgment, no ordering guarantee. Like sending a postcard: cheap and fast, but it might get lost and nobody will tell you.

Despite this, UDP is widely used: DNS lookups, video streaming, online gaming, and voice calls all prefer UDP’s speed over TCP’s reliability.

Ping-Pong: Two Processes Talking Over UDP

Let’s build the simplest possible network application: two scripts that send “ping” and “pong” back and forth over UDP.

udp_pong.py — the “server” (listens first):

# udp_pong.py — Run this first in one CMD window
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 9999))  # Claim port 9999 on localhost
print("Pong: listening on 127.0.0.1:9999 ...")

for i in range(5):
    data, addr = sock.recvfrom(1024)  # Wait for up to 1024 bytes
    message = data.decode()
    print(f"Pong: received '{message}' from {addr}")

    reply = f"pong-{i}"
    sock.sendto(reply.encode(), addr)  # Send reply back to sender
    print(f"Pong: sent '{reply}'")

sock.close()
print("Pong: done.")

udp_ping.py — the “client” (sends first):

# udp_ping.py — Run this second in another CMD window
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

server_addr = ("127.0.0.1", 9999)

for i in range(5):
    message = f"ping-{i}"
    sock.sendto(message.encode(), server_addr)
    print(f"Ping: sent '{message}'")

    data, addr = sock.recvfrom(1024)
    reply = data.decode()
    print(f"Ping: received '{reply}' from {addr}")

sock.close()
print("Ping: done.")

Running It

Open two CMD windows. In the first:

python udp_pong.py

In the second:

python udp_ping.py

You’ll see the two processes trading messages:

# Window 1 (pong)                    # Window 2 (ping)
Pong: listening on 127.0.0.1:9999    Ping: sent 'ping-0'
Pong: received 'ping-0' from ...     Ping: received 'pong-0' from ...
Pong: sent 'pong-0'                  Ping: sent 'ping-1'
Pong: received 'ping-1' from ...     Ping: received 'pong-1' from ...
...                                  ...

What’s Going On?

Let’s walk through the key socket calls:

  1. sock.bind(("127.0.0.1", 9999)) — The pong server claims port 9999. This is like putting your name on a mailbox. Without bind(), the OS assigns an ephemeral port automatically on the first send (which is what happens to the ping client).

  2. sock.recvfrom(1024) — Blocks until a datagram arrives. Returns the data (as bytes) and the sender’s address. The 1024 is the maximum number of bytes to read.

  3. sock.sendto(data, addr) — Sends bytes to the specified address. No connection needed—just fire and forget.

  4. .encode() / .decode() — Sockets deal in raw bytes, not strings. We encode strings to bytes before sending, and decode bytes back to strings after receiving. (This is bytes and str from the advanced Python section.)
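A quick round trip shows the conversion in step 4:

```python
msg = "ping-0"
wire = msg.encode()   # str -> bytes (UTF-8 by default); this is what goes on the wire
print(wire)           # b'ping-0'
print(wire.decode())  # ping-0
```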

No connect(), No listen(), No accept()

Notice what’s missing from the UDP example: there’s no connection establishment at all. The pong server doesn’t “accept” connections. The ping client doesn’t “connect.” They just send datagrams to addresses. This is the essence of UDP: stateless, connectionless communication.

The downside? If the pong server isn’t running when ping sends a message, the message simply vanishes. No error, no retry, no notification. The postcard fell in a ditch and nobody knows.

Try It: No Server Running

Start udp_pong.py, then start udp_ping.py and watch it work. Now try it in reverse: start udp_ping.py first, before the server is running. What happens? (Hint: on most systems recvfrom will block forever, waiting for a reply that never comes—press Ctrl+C to escape. On Windows you may instead see a ConnectionResetError, because the OS surfaces the ICMP “port unreachable” response on the next receive.)

TCP vs UDP

Before we build the TCP version, let’s compare the two protocols side by side.

| | UDP | TCP |
| --- | --- | --- |
| Connection | None — just send datagrams | Must establish a connection first |
| Reliability | No guarantees — packets can be lost, duplicated, or reordered | Guaranteed delivery, in order |
| Speed | Faster (less overhead) | Slower (acknowledgments, retransmissions) |
| Message boundaries | Preserved — one sendto() = one recvfrom() | Stream-based — no boundaries, bytes can arrive in chunks |
| Analogy | Postcard | Phone call |
| Use cases | DNS, video streaming, gaming | Web (HTTP), email (SMTP), file transfer (FTP) |

The critical difference for us: TCP is a byte stream, not a message stream. When you send “Hello” followed by “World” over TCP, the receiver might get “HelloWorld” in a single recv(), or “Hel” and “loWorld” in two separate calls. TCP doesn’t know or care where your messages begin and end—it just guarantees the bytes arrive in order.

This is why protocols like HTTP add structure on top of TCP: headers that tell you how many bytes the message body contains, so you know when to stop reading. We’ll see this up close shortly.
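The same idea as Content-Length can be sketched in a few lines as length-prefix framing: prepend each message with a fixed-size header stating its byte count, and read exactly that many bytes on the other side. The helper names below are ours, not a standard API; socketpair() gives us two already-connected stream sockets in one process, so no network setup is needed:

```python
import socket
import struct

def send_msg(sock, payload: bytes):
    # Prefix each message with a 4-byte big-endian length header
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recv_exactly(sock, n):
    # recv() may return fewer bytes than asked for, so loop until we have n
    data = b''
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        data += chunk
    return data

def recv_msg(sock):
    (length,) = struct.unpack('>I', recv_exactly(sock, 4))
    return recv_exactly(sock, length)

a, b = socket.socketpair()
send_msg(a, b"Hello")
send_msg(a, b"World")
m1, m2 = recv_msg(b), recv_msg(b)
print(m1, m2)  # b'Hello' b'World'
```

Even if TCP delivers "Hello" and "World" glued together, the length headers let the receiver cut the stream back into messages.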

The Warehouse Analogy

  • UDP = tossing a letter over the wall. Maybe someone catches it, maybe it lands in a puddle. No confirmation either way.
  • TCP = picking up the phone. You dial, the other side answers, you confirm you can hear each other (handshake), you talk, and when you’re done you both say goodbye (teardown). If a sentence gets garbled, you repeat it.

For the rest of this lecture (and Lectures 4-5), we’ll work exclusively with TCP. It’s the foundation of the web.

TCP Sockets: The Reliable Channel

TCP (Transmission Control Protocol) adds a connection layer on top of IP. Before any data flows, the two sides perform a three-way handshake:

  1. Client → Server: “I’d like to connect” (SYN)
  2. Server → Client: “Sure, I acknowledge” (SYN-ACK)
  3. Client → Server: “Great, let’s go” (ACK)

You don’t write this handshake yourself—the OS handles it when you call connect() (client) or accept() (server). But it’s good to know it’s happening, because it explains why TCP connections take a measurable amount of time to set up.
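To make that setup cost visible, here is a small sketch (the numbers vary by machine) that times connect() against a listener in the same process:

```python
import socket
import threading
import time

# A listener on an OS-assigned free port (port 0 means "pick one for me")
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
addr = server.getsockname()

# Accept in the background so connect() has someone to talk to
threading.Thread(target=server.accept, daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
t0 = time.perf_counter()
client.connect(addr)   # the three-way handshake happens inside this call
elapsed = time.perf_counter() - t0
print(f"connect() took {elapsed * 1000:.3f} ms")

client.close()
server.close()
```

On localhost this is fractions of a millisecond; across the internet it is at least one full round trip, which is why browsers reuse connections.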

Ping-Pong Over TCP

Let’s rebuild our ping-pong, this time with a reliable connection.

tcp_server.py — listens for a connection, then exchanges messages:

# tcp_server.py — Run this first
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCK_STREAM = TCP
server.bind(("127.0.0.1", 9999))
server.listen(1)  # Accept up to 1 pending connection
print("Server: listening on 127.0.0.1:9999 ...")

conn, addr = server.accept()  # Block until a client connects
print(f"Server: accepted connection from {addr}")

for i in range(5):
    data = conn.recv(1024)  # Wait for data from the client
    message = data.decode()
    print(f"Server: received '{message}'")

    reply = f"pong-{i}"
    conn.send(reply.encode())
    print(f"Server: sent '{reply}'")

conn.close()
server.close()
print("Server: done.")

tcp_client.py — connects to the server, then exchanges messages:

# tcp_client.py — Run this second
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 9999))  # Establish connection (three-way handshake)
print("Client: connected to server")

for i in range(5):
    message = f"ping-{i}"
    client.send(message.encode())
    print(f"Client: sent '{message}'")

    data = client.recv(1024)
    reply = data.decode()
    print(f"Client: received '{reply}'")

client.close()
print("Client: done.")

Running It

Same as before—two CMD windows:

# Window 1
python tcp_server.py

# Window 2
python tcp_client.py

The TCP Socket Dance

The TCP version has more ceremony than UDP. Here’s the sequence:

SERVER                              CLIENT
──────                              ──────
socket()                            socket()
bind(("127.0.0.1", 9999))
listen(1)
accept()  ←── blocks ──┐
                        │
                        ├──  connect(("127.0.0.1", 9999))
                        │    (three-way handshake happens here)
conn, addr = ...  ◄─────┘
                                    
recv()  ←── blocks ──┐
                     │
                     ├──  send("ping-0")
                     │
data = ...  ◄────────┘
send("pong-0")  ──────────────────► recv() → "pong-0"
...                                 ...
conn.close()                        client.close()
server.close()

Key differences from UDP:

  • listen(1) — Tells the OS this socket will accept incoming connections. The 1 is the backlog: how many pending connections to queue before refusing new ones.
  • accept() — Blocks until a client connects. Returns a new socket (conn) dedicated to this specific client, plus the client’s address. The original server socket continues listening for more clients.
  • connect() — The client initiates the three-way handshake.
  • send() / recv() — Instead of sendto() / recvfrom(). Since we have a connection, the socket already knows who’s on the other end.

Two Sockets on the Server

This is a subtle but important point. After accept(), the server has two sockets:

  1. The listening socket (server) — still bound to port 9999, ready to accept more connections.
  2. The connection socket (conn) — dedicated to talking with this particular client.

This separation is what makes it possible (in principle) to serve multiple clients. We’ll explore that soon.

A More Realistic Example: Echo Server

Our ping-pong was rigid—exactly 5 exchanges, then done. A real server should handle an arbitrary conversation. Here’s an echo server that reads messages until the client disconnects:

# tcp_echo_server.py
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # Allow port reuse
server.bind(("127.0.0.1", 9999))
server.listen(1)
print("Echo server listening on 127.0.0.1:9999 ...")

conn, addr = server.accept()
print(f"Client connected from {addr}")

while True:
    data = conn.recv(1024)
    if not data:  # Client closed the connection
        break
    message = data.decode()
    print(f"Received: '{message}' — echoing back")
    conn.send(data)  # Echo the raw bytes back

conn.close()
server.close()
print("Server shut down.")

# tcp_echo_client.py
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 9999))

messages = ["Hello", "How are you?", "Goodbye"]
for msg in messages:
    client.send(msg.encode())
    reply = client.recv(1024).decode()
    print(f"Sent: '{msg}' → Got back: '{reply}'")

client.close()

SO_REUSEADDR

You might have noticed server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1). Without this, if you stop the server and immediately restart it, you’ll get an “Address already in use” error. The OS keeps the port reserved for a short time after closing (the TIME_WAIT state). SO_REUSEADDR tells it “I know what I’m doing, let me reuse it.” Always include this for development servers.
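You can inspect the option with getsockopt(). A quick sketch:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
before = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
after = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
print(before, after)  # 0 before enabling, nonzero after
s.close()
```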

What an HTTP Message Actually Looks Like

We’ve been sending arbitrary strings like "ping-0" through our sockets. But real-world protocols impose structure on those bytes. The most important protocol for us is HTTP — the language of the web.

Here’s the key insight: HTTP is just text sent over a TCP socket. There’s no magic. When your browser visits a website, it opens a TCP connection to the server and sends something like this:

GET /index.html HTTP/1.1\r\n
Host: www.example.com\r\n
User-Agent: Mozilla/5.0\r\n
Accept: text/html\r\n
\r\n

That’s it. Plain text. Bytes on the wire. Let’s dissect it:

┌─── Request line: method, path, protocol version
│
│         GET /index.html HTTP/1.1\r\n
│         Host: www.example.com\r\n          ← Headers (key: value pairs)
│         User-Agent: Mozilla/5.0\r\n        ← Headers
│         Accept: text/html\r\n              ← Headers
│         \r\n                                ← Blank line = end of headers
│         (no body for GET requests)
│
└─── Everything above is the "payload" from TCP's perspective.
     TCP doesn't know or care that it's HTTP. It just delivers bytes.

And the server responds with something like:

HTTP/1.1 200 OK\r\n
Content-Type: text/html\r\n
Content-Length: 41\r\n
\r\n
<html><body><h1>Hello!</h1></body></html>

Again, let’s dissect:

┌─── Status line: protocol, status code, reason phrase
│
│         HTTP/1.1 200 OK\r\n
│         Content-Type: text/html\r\n        ← Response headers
│         Content-Length: 41\r\n             ← How many bytes in the body
│         \r\n                                ← Blank line = end of headers
│         <html><body>...                     ← Body (the actual content)
│
└─── TCP sees all of this as one big blob of bytes.

Headers Are the Envelope, Body Is the Letter

This is the mental model to hold onto going into Lecture 4:

  • Headers = metadata about the message: what type of content it is, how long it is, who sent it, caching rules, authentication tokens, etc.
  • Body = the actual content: HTML, JSON, an image, whatever.
  • The blank line (\r\n\r\n) separates headers from body.

From TCP’s point of view, headers and body are all just payload bytes. TCP doesn’t parse them—it doesn’t even know HTTP exists. It just guarantees the bytes arrive intact and in order. The HTTP protocol is a convention layered on top.
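To make the envelope/letter split concrete, here is a sketch that parses the response above by hand: split once on the blank line to separate envelope from letter, then split the envelope on \r\n:

```python
raw = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/html\r\n"
    b"Content-Length: 41\r\n"
    b"\r\n"
    b"<html><body><h1>Hello!</h1></body></html>"
)

# The blank line (\r\n\r\n) separates headers from body
head, _, body = raw.partition(b"\r\n\r\n")
status_line, *header_lines = head.decode().split("\r\n")
headers = dict(line.split(": ", 1) for line in header_lines)

print(status_line)                 # HTTP/1.1 200 OK
print(headers["Content-Length"])   # 41
print(len(body))                   # 41
```

Note that Content-Length and the actual body length agree: that header is how a real client knows it has read the whole body and can stop calling recv().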

Let’s Prove It: A Bare-Bones HTTP Server

We can build a (terrible, minimal) HTTP server with nothing but the socket module. This drives home that HTTP is just structured text over TCP:

# bare_http_server.py — A minimal HTTP server using raw sockets
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 8080))
server.listen(1)
print("Bare HTTP server on http://127.0.0.1:8080 — open this in your browser!")

while True:
    conn, addr = server.accept()
    request = conn.recv(4096).decode()
    print(f"--- Request from {addr} ---")
    print(request)
    print("--- End of request ---")

    # Build an HTTP response by hand
    body = "<html><body><h1>Hello from raw sockets!</h1></body></html>"
    response = (
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: text/html\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
        f"{body}"
    )
    conn.send(response.encode())
    conn.close()

Run this and open http://127.0.0.1:8080 in your browser. You’ll see “Hello from raw sockets!” in the browser, and in the terminal you’ll see the raw HTTP request your browser sent—complete with all the headers the browser automatically includes.

This is exactly what web frameworks like Flask and FastAPI do under the hood, just with far more sophistication. In Lecture 4, we’ll see how frameworks parse these headers and route requests to your Python functions. But now you know: there’s no magic. It’s bytes over TCP, with headers on top.
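As a hint of what those frameworks do, here is a hypothetical dispatch() sketch (the route table and handler names are made up) that pulls the path out of the request line, looks up a handler, and builds the response text:

```python
# Hypothetical routing sketch — not how any particular framework is implemented
routes = {
    "/": lambda: "<h1>Home</h1>",
    "/about": lambda: "<h1>About</h1>",
}

def dispatch(request_text: str) -> str:
    request_line = request_text.split("\r\n", 1)[0]   # e.g. "GET /about HTTP/1.1"
    method, path, _version = request_line.split(" ")
    handler = routes.get(path)
    body = handler() if handler else "<h1>404 Not Found</h1>"
    status = "200 OK" if handler else "404 Not Found"
    return (
        f"HTTP/1.1 {status}\r\n"
        "Content-Type: text/html\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
        f"{body}"
    )

print(dispatch("GET /about HTTP/1.1\r\nHost: localhost\r\n\r\n").split("\r\n")[0])
```

Swap the response-building code in bare_http_server.py for a call to dispatch() and you have the skeleton of a micro-framework.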

Exercise: Inspect HTTP with curl

You can also talk to our bare server using curl from CMD:

curl -v http://127.0.0.1:8080

The -v (verbose) flag shows you both the request headers curl sends and the response headers it receives. Compare this with what the server prints. They should match!

You can also use curl to talk to any website and see the raw HTTP exchange:

curl -v http://example.com

The Multi-Client Problem

Our TCP servers so far have a glaring limitation: they can only handle one client at a time. The echo server calls accept(), talks to the client, and only when that client disconnects does it loop back to accept another. Any other client connecting in the meantime is left waiting in the backlog queue.

Let’s make this concrete. Modify the echo server to loop and accept multiple clients sequentially:

# tcp_echo_sequential.py — Handles clients one at a time
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 9999))
server.listen(5)
print("Sequential echo server on 127.0.0.1:9999 ...")

while True:
    conn, addr = server.accept()
    print(f"Client {addr} connected")

    while True:
        data = conn.recv(1024)
        if not data:
            break
        conn.send(data)

    print(f"Client {addr} disconnected")
    conn.close()

Now try this: open three CMD windows. In the first, start the server. In the second, connect a client. In the third, try to connect another client simultaneously. The second client’s connection will be accepted by the OS (it sits in the backlog), but the server won’t actually accept() it or read its data until the first client disconnects.

This is clearly not how a real web server works. Google doesn’t make you wait until every other user has finished browsing before it serves your page.

The Thread-Per-Client Solution

The most intuitive fix: spawn a new thread for each client. The main thread loops on accept(), and each connection gets its own handler thread.

# tcp_echo_threaded.py — One thread per client
import socket
import threading

def handle_client(conn, addr):
    print(f"[Thread {threading.current_thread().name}] Handling {addr}")
    while True:
        data = conn.recv(1024)
        if not data:
            break
        conn.send(data)
    print(f"[Thread {threading.current_thread().name}] {addr} disconnected")
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 9999))
server.listen(5)
print("Threaded echo server on 127.0.0.1:9999 ...")

while True:
    conn, addr = server.accept()
    t = threading.Thread(target=handle_client, args=(conn, addr), daemon=True)
    t.start()

Now multiple clients can connect and chat simultaneously. Each gets their own thread, their own conn socket, their own handle_client loop. The main thread is free to keep accepting new connections.

This works! But there’s a catch that we’ll explore in detail in Lecture 5: threads are not free. Each thread costs memory (typically ~1 MB for its stack), and context-switching between thousands of threads grinds the OS scheduler to a halt. For a chat app with 10 users, threads are fine. For a web server handling 10,000 concurrent connections, threads are a disaster. The solution is async — but that’s getting ahead of ourselves.

Foreshadowing

The progression from here is:

  1. One client at a time — simple but useless for real servers. (This lecture, above.)
  2. Thread per client — works but doesn’t scale. (This lecture, just shown.)
  3. Async event loop — scales to thousands of connections on a single thread. (Lecture 5.)

Understanding why threads don’t scale is the motivation for async programming. Keep this in the back of your mind.

Putting It All Together: A Simple Chat Server

Let’s combine everything from this lecture and the previous two into a practical example: a multi-client chat server. Clients connect over TCP, and any message from one client is broadcast to all connected clients.

This ties together:

  • TCP sockets for network communication
  • Threading (from Lecture 2) for handling multiple clients
  • Locks (from Lecture 1) for safely managing the shared client list

The Server

# chat_server.py
import socket
import threading

HOST = "127.0.0.1"
PORT = 9999

clients = []          # List of (conn, addr) tuples
clients_lock = threading.Lock()

def broadcast(message, sender_addr):
    """Send a message to all connected clients except the sender."""
    with clients_lock:
        for conn, addr in clients:
            if addr != sender_addr:
                try:
                    conn.send(message.encode())
                except OSError:
                    pass  # Client probably disconnected

def handle_client(conn, addr):
    print(f"[Server] {addr} joined.")
    broadcast(f"*** {addr} joined the chat ***", sender_addr=None)

    with clients_lock:
        clients.append((conn, addr))

    try:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            message = data.decode()
            print(f"[{addr}] {message}")
            broadcast(f"[{addr}] {message}", sender_addr=addr)
    except ConnectionResetError:
        pass
    finally:
        with clients_lock:
            clients.remove((conn, addr))
        conn.close()
        print(f"[Server] {addr} left.")
        broadcast(f"*** {addr} left the chat ***", sender_addr=None)

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen(5)
    print(f"Chat server running on {HOST}:{PORT}")

    while True:
        conn, addr = server.accept()
        t = threading.Thread(target=handle_client, args=(conn, addr), daemon=True)
        t.start()

if __name__ == "__main__":
    main()

The Client

# chat_client.py
import socket
import threading

HOST = "127.0.0.1"
PORT = 9999

def receive_messages(sock):
    """Background thread: print incoming messages."""
    while True:
        try:
            data = sock.recv(1024)
            if not data:
                print("\n[Disconnected from server]")
                break
            print(f"\n{data.decode()}")
        except OSError:
            break

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((HOST, PORT))
    print(f"Connected to chat server at {HOST}:{PORT}")
    print("Type messages and press Enter. Press Ctrl+C to quit.\n")

    # Start a background thread to receive messages
    receiver = threading.Thread(target=receive_messages, args=(sock,), daemon=True)
    receiver.start()

    try:
        while True:
            message = input()
            if message:
                sock.send(message.encode())
    except (KeyboardInterrupt, EOFError):
        pass
    finally:
        sock.close()

if __name__ == "__main__":
    main()

Running It

Open three (or more) CMD windows:

# Window 1: Start the server
python chat_server.py

# Window 2: Start a client
python chat_client.py

# Window 3: Start another client
python chat_client.py

Type a message in one client window — it appears in the other. The server logs all messages and manages the connections.

How It Works

The architecture follows the same pattern we used in Lecture 2’s capstone:

┌──────────────────────────────────────────────────┐
│  Chat Server                                     │
│                                                  │
│  Main Thread             Handler Threads         │
│  ┌──────────┐     ┌──────────┐ ┌──────────┐     │
│  │ accept() │────►│ Client 1 │ │ Client 2 │ ... │
│  │  loop    │     │  thread  │ │  thread  │     │
│  └──────────┘     └──────────┘ └──────────┘     │
│                          │            │          │
│                     clients list (shared,        │
│                     protected by Lock)           │
└──────────────────────────────────────────────────┘

  • The main thread sits in an accept() loop, spawning a handler thread for each new client.
  • Each handler thread reads from its client and calls broadcast() to forward the message.
  • broadcast() iterates over the shared clients list under a Lock — the same synchronization primitive from Lecture 1.
  • The client uses a background thread to receive messages while input() blocks the main thread waiting for user input.

This is a real (if barebones) chat application. It demonstrates every concept from Lectures 1–3 working together. The only limitation is scalability — but that’s a story for Lecture 5.

Summary

We’ve climbed the IPC ladder from the simplest mechanisms to full network communication:

| Concept | Scope | Python | When to use |
| --- | --- | --- | --- |
| Pipe | Same machine, 2 endpoints | multiprocessing.Pipe | Simple parent↔︎child channel |
| Shared memory | Same machine, any # of processes | multiprocessing.shared_memory | High-speed shared data (with care) |
| Queue | Same machine, many producers/consumers | multiprocessing.Queue | General-purpose IPC |
| UDP socket | Any machines, connectionless | socket.SOCK_DGRAM | Speed over reliability (DNS, gaming) |
| TCP socket | Any machines, connection-based | socket.SOCK_STREAM | Reliability and ordering (HTTP, SSH) |

Key takeaways:

  1. IPC is a spectrum — from shared variables (threads) to network sockets (different machines). Each level trades speed for reach.
  2. Sockets are the foundation of all networking. HTTP, REST APIs, web frameworks — they all send structured bytes through TCP sockets.
  3. UDP is simple and fast but unreliable. TCP is reliable but requires connection setup and has no message boundaries.
  4. HTTP is just text over TCP — request line, headers, blank line, body. Headers are metadata; the body is the content. TCP sees it all as payload bytes.
  5. Thread-per-client is the straightforward approach to serving multiple clients, but it doesn’t scale to thousands of connections.
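Takeaway 4 is easy to verify by hand: an HTTP/1.1 exchange is just structured text. The sketch below builds a raw request and splits a canned response into its parts. No network is involved; the response bytes are made up for illustration:

```python
# A raw HTTP request: request line, headers, blank line (no body for GET)
request = (
    "GET /about HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "\r\n"
).encode()

# A canned response, as the bytes a server might send back
response = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/html\r\n"
    b"Content-Length: 14\r\n"
    b"\r\n"
    b"<h1>Hello</h1>"
)

# The blank line (\r\n\r\n) separates headers from body
head, _, body = response.partition(b"\r\n\r\n")
status_line, *header_lines = head.split(b"\r\n")

print(status_line)    # b'HTTP/1.1 200 OK'
print(body)           # b'<h1>Hello</h1>'
```

TCP never sees this structure; to the socket layer it is all just payload bytes.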

The story so far:

Lecture 1: Processes & threads exist. Here's how to create them.
Lecture 2: Here's how to combine them in a real application (π estimator).
Lecture 3: Here's how they talk to each other, locally and over the network. (You are here.)
Lecture 4: HTTP + client-server architecture → REST APIs.
Lecture 5: Async programming → scalable servers without thread-per-client.

Exercises & Project Ideas

Exercise 1: UDP Latency Measurement

Modify the UDP ping-pong example to measure round-trip latency:

  1. The ping client records time.perf_counter() before sending each ping.
  2. After receiving the pong, it records the time again and computes the round-trip time.
  3. After 100 exchanges, print the average, minimum, and maximum latency.

How does latency change if you increase the message size from 10 bytes to 10,000 bytes?
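A self-contained skeleton for this exercise, with the echo ("pong") side running in a background thread on loopback so the whole thing fits in one script. The names and the stop message are assumptions; adapt it to the ping-pong example from the lecture:

```python
import socket
import threading
import time

def echo_server(sock: socket.socket) -> None:
    """Echo every datagram back to its sender (the 'pong')."""
    while True:
        data, addr = sock.recvfrom(65535)
        if data == b"stop":
            break
        sock.sendto(data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))               # port 0: let the OS pick a free port
addr = server.getsockname()
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rtts = []
for _ in range(100):
    t0 = time.perf_counter()                # step 1: timestamp before sending
    client.sendto(b"ping", addr)
    client.recvfrom(1024)                   # block until the pong arrives
    rtts.append(time.perf_counter() - t0)   # step 2: round-trip time

client.sendto(b"stop", addr)
print(f"avg {sum(rtts)/len(rtts)*1e6:.1f} µs, "
      f"min {min(rtts)*1e6:.1f} µs, max {max(rtts)*1e6:.1f} µs")
```

Loopback latency is typically tens of microseconds; a real network adds orders of magnitude.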

Exercise 2: TCP File Transfer

Build a simple file transfer tool:

  1. file_server.py: Accepts a connection, receives a filename (as the first line), then receives file contents and saves them to disk.
  2. file_client.py: Connects to the server, sends a filename, then sends the file contents.

Hints:

  • Use \n as a delimiter between the filename and the file data.
  • Read/write in binary mode ('rb' / 'wb') for non-text files.
  • Remember that TCP is a byte stream — you may need to read in a loop until all data arrives.

Bonus: Add a progress indicator that shows bytes transferred.
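The byte-stream hint is the tricky part of this exercise. Two helpers like these (hypothetical names, one possible framing) handle the newline-delimited filename and then drain the rest of the stream; the demo uses `socket.socketpair()` so it runs without a real server:

```python
import socket

def recv_line(sock: socket.socket) -> bytes:
    """Read one byte at a time until a newline: the filename header."""
    line = b""
    while not line.endswith(b"\n"):
        chunk = sock.recv(1)
        if not chunk:            # peer closed before sending a full line
            break
        line += chunk
    return line.rstrip(b"\n")

def recv_all(sock: socket.socket) -> bytes:
    """Read until the peer closes the connection (end of file data)."""
    chunks = []
    while True:
        chunk = sock.recv(4096)  # TCP gives us arbitrary-sized pieces
        if not chunk:
            break
        chunks.append(chunk)
    return b"".join(chunks)

# Demo over a local socket pair (stands in for the client connection)
a, b = socket.socketpair()
a.sendall(b"report.pdf\nFILE CONTENTS")
a.close()
name = recv_line(b)
data = recv_all(b)
print(name, len(data))
```

Reading the filename byte by byte is slow but simple; a buffered approach is a good refinement once the basic version works.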

Exercise 3: Multi-Room Chat

Extend the chat server to support multiple rooms:

  1. When a client connects, they send /join <room_name> as their first message.
  2. Messages are only broadcast to clients in the same room.
  3. A client can switch rooms with /join <other_room>.
  4. /rooms lists all active rooms and their user counts.

This requires modifying the shared clients data structure — think about what data structure would be appropriate, and how to protect it with locks.
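One possible shape for that shared state, offered as an assumption rather than the only design: a dict mapping room names to member sets, guarded by a single lock:

```python
import threading
from collections import defaultdict

rooms = defaultdict(set)          # room name -> set of clients in that room
rooms_lock = threading.Lock()     # one lock guards the whole structure

def join_room(client, room: str) -> None:
    """Move a client into `room`, leaving whatever room it was in."""
    with rooms_lock:
        for members in rooms.values():
            members.discard(client)   # no-op if the client wasn't a member
        rooms[room].add(client)

def list_rooms() -> dict:
    """Snapshot of active rooms and their user counts (for /rooms)."""
    with rooms_lock:
        return {name: len(members) for name, members in rooms.items() if members}
```

With this in place, `broadcast()` only needs to iterate over `rooms[sender_room]` instead of a flat clients list.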

Exercise 4: Shared Memory NumPy Array

Use multiprocessing.shared_memory to share a NumPy array between two processes:

  1. Process A creates a shared memory block and maps a NumPy array onto it.
  2. Process A fills the array with random data and signals Process B (use a multiprocessing.Event).
  3. Process B attaches to the same shared memory, reads the array, computes the mean, and prints it.
import numpy as np
from multiprocessing.shared_memory import SharedMemory

# Create a shared memory block big enough for 1000 float64s (8 bytes each)
shm = SharedMemory(create=True, size=1000 * 8)

# Map a NumPy array onto the shared memory
arr = np.ndarray((1000,), dtype=np.float64, buffer=shm.buf)

# Now arr reads/writes directly to shared memory!
arr[:] = np.random.random(1000)
print(f"Mean: {arr.mean():.4f}")

# Process B would attach with SharedMemory(name=shm.name) and build
# the same np.ndarray view over its shm.buf.

shm.close()
shm.unlink()
Exercise 5: Extend the Bare HTTP Server

Take the bare_http_server.py from the HTTP section and extend it:

  1. Parse the request line to extract the method and path (e.g., GET /about).
  2. Serve different HTML content for /, /about, and /contact.
  3. Return a proper 404 Not Found response for unknown paths.
  4. Add a Date header to your response (use datetime.datetime.now(datetime.UTC)).

This is essentially building a micro web framework from scratch. Compare the amount of code needed here with what Flask or FastAPI give you for free — that’s the motivation for Lecture 4.

Project Idea: Remote Calculator

Build a client-server calculator:

  • The server accepts TCP connections and reads arithmetic expressions (e.g., "3 + 4 * 2").
  • It evaluates the expression safely (never eval()!) and sends back the result. Note that ast.literal_eval only accepts literals, not arithmetic, so walk the tree produced by ast.parse and allow only numeric and operator nodes, or write a simple parser.
  • The client provides a REPL where the user types expressions and sees results.
  • Bonus: make the server multi-threaded so multiple users can calculate simultaneously.
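A sketch of the safe-evaluation piece, walking the parsed tree and permitting only arithmetic nodes. This is one possible approach, not the only correct design:

```python
import ast
import operator

# Whitelist of allowed operations; anything else is rejected
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_eval(expr: str):
    """Evaluate an arithmetic expression without eval()."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed expression")   # names, calls, etc.
    return walk(ast.parse(expr, mode="eval").body)

print(safe_eval("3 + 4 * 2"))   # 11
```

Because the walker only recognizes constants and whitelisted operators, inputs like `__import__('os')` raise ValueError instead of executing.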


Next: Lecture 4 — Client-Server Architectures and RESTful APIs, where we move from raw sockets to structured HTTP APIs, and learn what “REST” actually means (spoiler: not what most people think).