```python
from multiprocessing import Process, Pipe

def child(conn):
    conn.send("Hello from the child!")
    response = conn.recv()
    print(f"Child got: {response}")
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=child, args=(child_conn,))
    p.start()
    msg = parent_conn.recv()
    print(f"Parent got: {msg}")
    parent_conn.send("Hello back from the parent!")
    p.join()
```

Interprocess Communication and Sockets
From queues to networks — how processes actually talk
Introduction
In Lectures 1 and 2 we learned to spawn processes and threads, and we used multiprocessing.Queue to shuttle data between them. That’s interprocess communication—but it only works when all parties live on the same machine and share the same Python runtime.
What if the processes are on different machines? Or written in different languages? Or started at completely different times? That’s where sockets come in—the universal mechanism for processes to talk to each other, whether they’re next-door neighbors or on opposite sides of the planet.
This lecture bridges the gap from “processes sharing a queue” to “processes sending messages over a network.” By the end, you’ll understand the plumbing that makes everything from chat apps to REST APIs work.
This is the third lecture in a five-part series:
- Processes and Threads
- Multiprocessing and Multithreading in Practice
- Interprocess Communication and Sockets (you are here)
- Client-Server Architectures and RESTful APIs
- Async Programming, Event Loops, and ASGI
The Big Picture: What Is IPC?
IPC stands for Interprocess Communication: any mechanism that allows two or more processes to exchange data. It’s a surprisingly broad umbrella. Here’s a rough taxonomy:
| Scope | Mechanism | Speed | Python module |
|---|---|---|---|
| Same process (threads) | Shared variables, `queue.Queue` | ⚡ Fastest | `threading`, `queue` |
| Same machine (processes) | Pipes, shared memory, `multiprocessing.Queue` | 🚀 Fast | `multiprocessing` |
| Same machine (any language) | Unix domain sockets, named pipes, memory-mapped files | 🚀 Fast | `socket`, `mmap` |
| Different machines | Network sockets (TCP/UDP), HTTP, message brokers | 🐢 Slower (network) | `socket`, `http`, `requests` |
Notice how multiprocessing.Queue—which we used in Lecture 2 to send π estimation results from workers to the coordinator—is already IPC. Under the hood, it serializes your data with pickle, shoves it through an OS-level pipe, and deserializes it on the other side. We just never had to think about it because the Queue abstraction handled the messy bits.
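You can watch that serialization step directly: anything you put on a `Queue` must survive a `pickle` round-trip. A minimal sketch (not one of the lecture scripts):

```python
import pickle
from multiprocessing import Queue

# What Queue does for you: pickle on the way in, unpickle on the way out.
q = Queue()
q.put({"task": "estimate_pi", "samples": 10_000})  # pickled behind the scenes
print(q.get())                                     # unpickled on the way out

# The same round-trip, done by hand:
payload = pickle.dumps({"task": "estimate_pi", "samples": 10_000})
print(type(payload))          # <class 'bytes'> -- what actually travels through the pipe
print(pickle.loads(payload))
```

This is also why objects that can't be pickled (open sockets, lambdas) can't be sent through a `Queue`.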
The Warehouse Analogy, Extended
In Lecture 1 we said processes are separate warehouses and threads are workers inside the same warehouse. Let’s extend that:
- Threads (same warehouse): Workers yell across the room. Fast, but chaotic without coordination.
- Pipes / shared memory (same street): Warehouses next door pass crates through a window or a shared loading dock. Fast, but you both need to be on the same street.
- Network sockets (across the city/country): Warehouses send packages via the postal service. Slower, but works regardless of distance. And the postal service doesn’t care what’s inside the package—it just delivers bytes.
This lecture is about graduating from shouting across the room to using the postal service.
Enter Sockets: The Universal IPC
A socket is an endpoint for communication. Think of it as a mailbox: it has an address, and you can send to it or receive from it. Two sockets connected together form a communication channel.
Every network application you’ve ever used—your browser, Spotify, Discord, git push—is built on sockets. HTTP, SMTP, SSH, DNS: all of these are protocols that send structured bytes through sockets. The socket is the raw pipe; the protocol is the language spoken through it.
Addresses: IP + Port
A socket address has two parts:
- IP address: identifies the machine. Like a street address. `127.0.0.1` (or `localhost`) means “this machine.”
- Port number: identifies the process on that machine. Like an apartment number. Ports range from 0 to 65535. Ports below 1024 are reserved for well-known services (80 for HTTP, 443 for HTTPS, 22 for SSH, etc.).
So 127.0.0.1:8000 means “port 8000 on this machine.” When your browser visits http://localhost:8000, it’s connecting a socket to that address.
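In Python, that address is literally a `(host, port)` tuple. As a quick sketch, `socket.getaddrinfo()` (roughly what `connect()` and `bind()` use internally to resolve names) shows how a hostname becomes such a pair:

```python
import socket

# Resolve "localhost" to an IPv4 TCP address, the way connect() would.
infos = socket.getaddrinfo("localhost", 8000,
                           family=socket.AF_INET, type=socket.SOCK_STREAM)
family, socktype, proto, canonname, sockaddr = infos[0]
print(sockaddr)   # typically ('127.0.0.1', 8000): the loopback address plus our port
```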
Python’s socket Module
The socket module is Python’s interface to the operating system’s socket API. It’s low-level—you’re dealing with raw bytes, not Python objects. That’s the point: understanding this layer demystifies everything above it (HTTP, REST, web frameworks).
```python
import socket

# Create a socket
s = socket.socket(
    socket.AF_INET,     # Address family: IPv4
    socket.SOCK_DGRAM,  # Socket type: UDP (datagram)
)
print(f"Created socket: {s}")
s.close()
```

Two key choices when creating a socket:
| Parameter | Option | Meaning |
|---|---|---|
| Address family | `AF_INET` | IPv4 (the one you know: `192.168.1.1`) |
| | `AF_INET6` | IPv6 (the newer, longer addresses) |
| Socket type | `SOCK_DGRAM` | UDP — connectionless, unreliable, fast |
| | `SOCK_STREAM` | TCP — connection-based, reliable, ordered |
We’ll start with UDP because it’s simpler—fewer moving parts. Then we’ll graduate to TCP.
UDP Sockets: Fire and Forget
UDP (User Datagram Protocol) is the simplest way to send data over a network. You put bytes into a datagram, throw it at an address, and hope for the best. There’s no connection, no acknowledgment, no ordering guarantee. Like sending a postcard: cheap and fast, but it might get lost and nobody will tell you.
Despite this, UDP is widely used: DNS lookups, video streaming, online gaming, and voice calls all prefer UDP’s speed over TCP’s reliability.
Ping-Pong: Two Processes Talking Over UDP
Let’s build the simplest possible network application: two scripts that send “ping” and “pong” back and forth over UDP.
udp_pong.py — the “server” (listens first):
```python
# udp_pong.py — Run this first in one CMD window
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 9999))  # Claim port 9999 on localhost
print("Pong: listening on 127.0.0.1:9999 ...")

for i in range(5):
    data, addr = sock.recvfrom(1024)  # Wait for up to 1024 bytes
    message = data.decode()
    print(f"Pong: received '{message}' from {addr}")
    reply = f"pong-{i}"
    sock.sendto(reply.encode(), addr)  # Send reply back to sender
    print(f"Pong: sent '{reply}'")

sock.close()
print("Pong: done.")
```

udp_ping.py — the “client” (sends first):
```python
# udp_ping.py — Run this second in another CMD window
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_addr = ("127.0.0.1", 9999)

for i in range(5):
    message = f"ping-{i}"
    sock.sendto(message.encode(), server_addr)
    print(f"Ping: sent '{message}'")
    data, addr = sock.recvfrom(1024)
    reply = data.decode()
    print(f"Ping: received '{reply}' from {addr}")

sock.close()
print("Ping: done.")
```

Running It
Open two CMD windows. In the first:
```shell
python udp_pong.py
```

In the second:

```shell
python udp_ping.py
```

You’ll see the two processes trading messages:
```
# Window 1 (pong)                      # Window 2 (ping)
Pong: listening on 127.0.0.1:9999      Ping: sent 'ping-0'
Pong: received 'ping-0' from ...       Ping: received 'pong-0' from ...
Pong: sent 'pong-0'                    Ping: sent 'ping-1'
Pong: received 'ping-1' from ...       Ping: received 'pong-1' from ...
...                                    ...
```
What’s Going On?
Let’s walk through the key socket calls:
- `sock.bind(("127.0.0.1", 9999))` — The pong server claims port 9999. This is like putting your name on a mailbox. Without `bind()`, the OS assigns a random port (which is what happens to the ping client).
- `sock.recvfrom(1024)` — Blocks until a datagram arrives. Returns the data (as `bytes`) and the sender’s address. The `1024` is the maximum number of bytes to read.
- `sock.sendto(data, addr)` — Sends bytes to the specified address. No connection needed—just fire and forget.
- `.encode()` / `.decode()` — Sockets deal in raw bytes, not strings. We encode strings to bytes before sending, and decode bytes back to strings after receiving. (This is `bytes` and `str` from the advanced Python section.)
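That last point is worth a ten-second experiment of its own, independent of the ping-pong scripts:

```python
text = "ping-0"
payload = text.encode()      # str -> bytes (UTF-8 by default)
print(payload)               # b'ping-0'
print(payload.decode())      # ping-0

# Non-ASCII text shows why the str/bytes distinction matters:
print("π".encode())          # b'\xcf\x80' -- one character, two bytes on the wire
```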
No connect(), No listen(), No accept()
Notice what’s missing from the UDP example: there’s no connection establishment at all. The pong server doesn’t “accept” connections. The ping client doesn’t “connect.” They just send datagrams to addresses. This is the essence of UDP: stateless, connectionless communication.
The downside? If the pong server isn’t running when ping sends a message, the message simply vanishes. No error, no retry, no notification. The postcard fell in a ditch and nobody knows.
Start udp_pong.py, then start udp_ping.py. Now try it in reverse: start udp_ping.py first, before the server is running. What happens? (Hint: recvfrom will block forever waiting for a reply that never comes. Press Ctrl+C to escape.)
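If you’d rather not reach for Ctrl+C, `settimeout()` (not used in the original scripts) converts that endless wait into an exception you can handle. A small sketch, assuming nothing is listening on port 9999:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)  # give up after 2 seconds instead of blocking forever

sock.sendto(b"ping-0", ("127.0.0.1", 9999))  # nobody listening: the datagram vanishes
try:
    data, addr = sock.recvfrom(1024)
    print(f"Reply: {data!r}")
except socket.timeout:
    print("No reply within 2 s: the postcard is gone.")
except OSError as e:
    # Some OSes surface an ICMP "port unreachable" as an error instead of silence.
    print(f"OS reported an error: {e}")
finally:
    sock.close()
```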
TCP vs UDP
Before we build the TCP version, let’s compare the two protocols side by side.
| | UDP | TCP |
|---|---|---|
| Connection | None — just send datagrams | Must establish a connection first |
| Reliability | No guarantees — packets can be lost, duplicated, or reordered | Guaranteed delivery, in order |
| Speed | Faster (less overhead) | Slower (acknowledgments, retransmissions) |
| Message boundaries | Preserved — one `sendto()` = one `recvfrom()` | Stream-based — no boundaries, bytes can arrive in chunks |
| Analogy | Postcard | Phone call |
| Use cases | DNS, video streaming, gaming | Web (HTTP), email (SMTP), file transfer (FTP) |
The critical difference for us: TCP is a byte stream, not a message stream. When you send “Hello” followed by “World” over TCP, the receiver might get “HelloWorld” in a single recv(), or “Hel” and “loWorld” in two separate calls. TCP doesn’t know or care where your messages begin and end—it just guarantees the bytes arrive in order.
This is why protocols like HTTP add structure on top of TCP: headers that tell you how many bytes the message body contains, so you know when to stop reading. We’ll see this up close shortly.
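One common framing technique, sketched here with hypothetical `send_msg`/`recv_msg` helpers (they are not part of the lecture code), is to prefix every message with a fixed-size length header. `socket.socketpair()` gives us two pre-connected stream sockets, so the demo fits in one process:

```python
import socket
import struct

def send_msg(sock, payload: bytes):
    # 4-byte big-endian length header, then the payload itself.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n: int) -> bytes:
    """Keep calling recv() until exactly n bytes have arrived."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

a, b = socket.socketpair()   # two connected stream sockets in one process
send_msg(a, b"Hello")
send_msg(a, b"World")        # two messages enter one byte stream...
print(recv_msg(b))           # b'Hello'  ...and come out with boundaries restored
print(recv_msg(b))           # b'World'
a.close()
b.close()
```

HTTP’s `Content-Length` header plays exactly this role, just spelled out in text instead of binary.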
The Warehouse Analogy
- UDP = tossing a letter over the wall. Maybe someone catches it, maybe it lands in a puddle. No confirmation either way.
- TCP = picking up the phone. You dial, the other side answers, you confirm you can hear each other (handshake), you talk, and when you’re done you both say goodbye (teardown). If a sentence gets garbled, you repeat it.
For the rest of this lecture (and Lectures 4-5), we’ll work exclusively with TCP. It’s the foundation of the web.
TCP Sockets: The Reliable Channel
TCP (Transmission Control Protocol) adds a connection layer on top of IP. Before any data flows, the two sides perform a three-way handshake:
- Client → Server: “I’d like to connect” (SYN)
- Server → Client: “Sure, I acknowledge” (SYN-ACK)
- Client → Server: “Great, let’s go” (ACK)
You don’t write this handshake yourself—the OS handles it when you call connect() (client) or accept() (server). But it’s good to know it’s happening, because it explains why TCP connections take a measurable amount of time to set up.
Ping-Pong Over TCP
Let’s rebuild our ping-pong, this time with a reliable connection.
tcp_server.py — listens for a connection, then exchanges messages:
```python
# tcp_server.py — Run this first
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCK_STREAM = TCP
server.bind(("127.0.0.1", 9999))
server.listen(1)  # Accept up to 1 pending connection
print("Server: listening on 127.0.0.1:9999 ...")

conn, addr = server.accept()  # Block until a client connects
print(f"Server: accepted connection from {addr}")

for i in range(5):
    data = conn.recv(1024)  # Wait for data from the client
    message = data.decode()
    print(f"Server: received '{message}'")
    reply = f"pong-{i}"
    conn.send(reply.encode())
    print(f"Server: sent '{reply}'")

conn.close()
server.close()
print("Server: done.")
```

tcp_client.py — connects to the server, then exchanges messages:
```python
# tcp_client.py — Run this second
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 9999))  # Establish connection (three-way handshake)
print("Client: connected to server")

for i in range(5):
    message = f"ping-{i}"
    client.send(message.encode())
    print(f"Client: sent '{message}'")
    data = client.recv(1024)
    reply = data.decode()
    print(f"Client: received '{reply}'")

client.close()
print("Client: done.")
```

Running It
Same as before—two CMD windows:
```shell
# Window 1
python tcp_server.py

# Window 2
python tcp_client.py
```

The TCP Socket Dance
The TCP version has more ceremony than UDP. Here’s the sequence:
```
SERVER                              CLIENT
──────                              ──────
socket()                            socket()
bind(("127.0.0.1", 9999))
listen(1)
accept()  ←── blocks ──┐
                       │
                       ├── connect(("127.0.0.1", 9999))
                       │   (three-way handshake happens here)
conn, addr = ... ◄─────┘
recv()    ←── blocks ──┐
                       │
                       ├── send("ping-0")
                       │
data = ... ◄───────────┘
send("pong-0") ──────────────────►  recv() → "pong-0"
...                                 ...
conn.close()                        client.close()
server.close()
```
Key differences from UDP:
- `listen(1)` — Tells the OS this socket will accept incoming connections. The `1` is the backlog: how many pending connections to queue before refusing new ones.
- `accept()` — Blocks until a client connects. Returns a new socket (`conn`) dedicated to this specific client, plus the client’s address. The original `server` socket continues listening for more clients.
- `connect()` — The client initiates the three-way handshake.
- `send()` / `recv()` — Instead of `sendto()` / `recvfrom()`. Since we have a connection, the socket already knows who’s on the other end.
This is a subtle but important point. After accept(), the server has two sockets:
- The listening socket (`server`) — still bound to port 9999, ready to accept more connections.
- The connection socket (`conn`) — dedicated to talking with this particular client.
This separation is what makes it possible (in principle) to serve multiple clients. We’ll explore that soon.
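A quick single-process sketch (hypothetical, not one of the lecture scripts; port 0 asks the OS for any free port) makes the two-socket split visible:

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: the OS picks a free port
port = server.getsockname()[1]
server.listen(2)

def connect_one():
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", port))
    return c

c1 = connect_one()
c2 = connect_one()                   # both sit in the backlog until accepted
conn1, addr1 = server.accept()       # a NEW socket per client...
conn2, addr2 = server.accept()

print(conn1.getpeername() != conn2.getpeername())  # True: two distinct peers
print(server.getsockname()[1] == port)             # True: listener still on its port

for s in (c1, c2, conn1, conn2, server):
    s.close()
```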
A More Realistic Example: Echo Server
Our ping-pong was rigid—exactly 5 exchanges, then done. A real server should handle an arbitrary conversation. Here’s an echo server that reads messages until the client disconnects:
```python
# tcp_echo_server.py
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # Allow port reuse
server.bind(("127.0.0.1", 9999))
server.listen(1)
print("Echo server listening on 127.0.0.1:9999 ...")

conn, addr = server.accept()
print(f"Client connected from {addr}")

while True:
    data = conn.recv(1024)
    if not data:  # Client closed the connection
        break
    message = data.decode()
    print(f"Received: '{message}' — echoing back")
    conn.send(data)  # Echo the raw bytes back

conn.close()
server.close()
print("Server shut down.")
```

```python
# tcp_echo_client.py
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 9999))

messages = ["Hello", "How are you?", "Goodbye"]
for msg in messages:
    client.send(msg.encode())
    reply = client.recv(1024).decode()
    print(f"Sent: '{msg}' → Got back: '{reply}'")

client.close()
```

SO_REUSEADDR
You might have noticed server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1). Without this, if you stop the server and immediately restart it, you’ll get an “Address already in use” error. The OS keeps the port reserved for a short time after closing (the TIME_WAIT state). SO_REUSEADDR tells it “I know what I’m doing, let me reuse it.” Always include this for development servers.
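You can provoke the underlying error deliberately. A small sketch (using the port-0 trick, which the lecture scripts don’t; note it shows the plain “already in use” case rather than TIME_WAIT itself, which is harder to reproduce on demand):

```python
import socket

s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))        # port 0: the OS picks a free port for us
port = s1.getsockname()[1]
print(f"s1 owns port {port}")

s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s2.bind(("127.0.0.1", port))  # same port, still owned by s1
except OSError as e:
    print(f"bind failed as expected: {e}")

s1.close()
s2.close()
```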
What an HTTP Message Actually Looks Like
We’ve been sending arbitrary strings like "ping-0" through our sockets. But real-world protocols impose structure on those bytes. The most important protocol for us is HTTP — the language of the web.
Here’s the key insight: HTTP is just text sent over a TCP socket. There’s no magic. When your browser visits a website, it opens a TCP connection to the server and sends something like this:
```
GET /index.html HTTP/1.1\r\n
Host: www.example.com\r\n
User-Agent: Mozilla/5.0\r\n
Accept: text/html\r\n
\r\n
```
That’s it. Plain text. Bytes on the wire. Let’s dissect it:
```
┌─── Request line: method, path, protocol version
│
│   GET /index.html HTTP/1.1\r\n
│   Host: www.example.com\r\n      ← Headers (key: value pairs)
│   User-Agent: Mozilla/5.0\r\n    ← Headers
│   Accept: text/html\r\n          ← Headers
│   \r\n                           ← Blank line = end of headers
│   (no body for GET requests)
│
└─── Everything above is the "payload" from TCP's perspective.
     TCP doesn't know or care that it's HTTP. It just delivers bytes.
```
And the server responds with something like:
```
HTTP/1.1 200 OK\r\n
Content-Type: text/html\r\n
Content-Length: 41\r\n
\r\n
<html><body><h1>Hello!</h1></body></html>
```
Again, let’s dissect:
```
┌─── Status line: protocol, status code, reason phrase
│
│   HTTP/1.1 200 OK\r\n
│   Content-Type: text/html\r\n    ← Response headers
│   Content-Length: 41\r\n         ← How many bytes in the body
│   \r\n                           ← Blank line = end of headers
│   <html><body>...                ← Body (the actual content)
│
└─── TCP sees all of this as one big blob of bytes.
```
This is the mental model to hold onto going into Lecture 4:
- Headers = metadata about the message: what type of content it is, how long it is, who sent it, caching rules, authentication tokens, etc.
- Body = the actual content: HTML, JSON, an image, whatever.
- The blank line (`\r\n\r\n`) separates headers from body.
From TCP’s point of view, headers and body are all just payload bytes. TCP doesn’t parse them—it doesn’t even know HTTP exists. It just guarantees the bytes arrive intact and in order. The HTTP protocol is a convention layered on top.
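The dissection above can be replayed in a few lines of code. A sketch (not part of the lecture scripts) that splits a raw response on the blank line and reads `Content-Length`:

```python
raw = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/html\r\n"
    b"Content-Length: 41\r\n"
    b"\r\n"
    b"<html><body><h1>Hello!</h1></body></html>"
)

head, _, body = raw.partition(b"\r\n\r\n")  # blank line splits headers from body
status_line, *header_lines = head.decode().split("\r\n")
headers = dict(line.split(": ", 1) for line in header_lines)

print(status_line)                     # HTTP/1.1 200 OK
print(headers["Content-Type"])         # text/html
print(int(headers["Content-Length"]))  # 41
print(len(body))                       # 41 -- Content-Length tells us when to stop reading
```

Real parsers handle more cases (folded headers, chunked encoding), but the core is exactly this split.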
Let’s Prove It: A Bare-Bones HTTP Server
We can build a (terrible, minimal) HTTP server with nothing but the socket module. This drives home that HTTP is just structured text over TCP:
```python
# bare_http_server.py — A minimal HTTP server using raw sockets
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 8080))
server.listen(1)
print("Bare HTTP server on http://127.0.0.1:8080 — open this in your browser!")

while True:
    conn, addr = server.accept()
    request = conn.recv(4096).decode()
    print(f"--- Request from {addr} ---")
    print(request)
    print("--- End of request ---")

    # Build an HTTP response by hand
    body = "<html><body><h1>Hello from raw sockets!</h1></body></html>"
    response = (
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: text/html\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
        f"{body}"
    )
    conn.send(response.encode())
    conn.close()
```

Run this and open http://127.0.0.1:8080 in your browser. You’ll see “Hello from raw sockets!” in the browser, and in the terminal you’ll see the raw HTTP request your browser sent—complete with all the headers the browser automatically includes.
This is exactly what web frameworks like Flask and FastAPI do under the hood, just with far more sophistication. In Lecture 4, we’ll see how frameworks parse these headers and route requests to your Python functions. But now you know: there’s no magic. It’s bytes over TCP, with headers on top.
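You can even prove it without a browser. Here is a self-contained sketch (a variant, not the original script) that runs a one-shot version of the server in a background thread (port 0 picks a free port, a detail the script above doesn’t use) and fetches it with the standard-library `urllib`:

```python
import socket
import threading
import urllib.request

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
port = server.getsockname()[1]
server.listen(1)

def serve_once():
    conn, _ = server.accept()
    conn.recv(4096)                    # read (and ignore) the request
    body = "<h1>Hello from raw sockets!</h1>"
    conn.send(
        (
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: text/html\r\n"
            f"Content-Length: {len(body)}\r\n"
            "\r\n"
            f"{body}"
        ).encode()
    )
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

# urllib speaks real HTTP, and our hand-rolled response satisfies it.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    print(resp.status)          # 200
    print(resp.read().decode())

server.close()
```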
The Multi-Client Problem
Our TCP servers so far have a glaring limitation: they can only handle one client at a time. The echo server calls accept(), talks to the client, and only when that client disconnects does it loop back to accept another. Any other client connecting in the meantime is left waiting in the backlog queue.
Let’s make this concrete. Modify the echo server to loop and accept multiple clients sequentially:
```python
# tcp_echo_sequential.py — Handles clients one at a time
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 9999))
server.listen(5)
print("Sequential echo server on 127.0.0.1:9999 ...")

while True:
    conn, addr = server.accept()
    print(f"Client {addr} connected")
    while True:
        data = conn.recv(1024)
        if not data:
            break
        conn.send(data)
    print(f"Client {addr} disconnected")
    conn.close()
```

Now try this: open three CMD windows. In the first, start the server. In the second, connect a client. In the third, try to connect another client simultaneously. The second client’s connection will be accepted by the OS (it sits in the backlog), but the server won’t actually `accept()` it or read its data until the first client disconnects.
This is clearly not how a real web server works. Google doesn’t make you wait until every other user has finished browsing before it serves your page.
The Thread-Per-Client Solution
The most intuitive fix: spawn a new thread for each client. The main thread loops on accept(), and each connection gets its own handler thread.
```python
# tcp_echo_threaded.py — One thread per client
import socket
import threading

def handle_client(conn, addr):
    print(f"[Thread {threading.current_thread().name}] Handling {addr}")
    while True:
        data = conn.recv(1024)
        if not data:
            break
        conn.send(data)
    print(f"[Thread {threading.current_thread().name}] {addr} disconnected")
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 9999))
server.listen(5)
print("Threaded echo server on 127.0.0.1:9999 ...")

while True:
    conn, addr = server.accept()
    t = threading.Thread(target=handle_client, args=(conn, addr), daemon=True)
    t.start()
```

Now multiple clients can connect and chat simultaneously. Each gets their own thread, their own `conn` socket, their own `handle_client` loop. The main thread is free to keep accepting new connections.
This works! But there’s a catch that we’ll explore in detail in Lecture 5: threads are not free. Each thread costs memory (typically ~1 MB for its stack), and context-switching between thousands of threads grinds the OS scheduler to a halt. For a chat app with 10 users, threads are fine. For a web server handling 10,000 concurrent connections, threads are a disaster. The solution is async — but that’s getting ahead of ourselves.
The progression from here is:
- One client at a time — simple but useless for real servers. (This lecture, above.)
- Thread per client — works but doesn’t scale. (This lecture, just shown.)
- Async event loop — scales to thousands of connections on a single thread. (Lecture 5.)
Understanding why threads don’t scale is the motivation for async programming. Keep this in the back of your mind.
Putting It All Together: A Simple Chat Server
Let’s combine everything from this lecture and the previous two into a practical example: a multi-client chat server. Clients connect over TCP, and any message from one client is broadcast to all connected clients.
This ties together:
- TCP sockets for network communication
- Threading (from Lecture 2) for handling multiple clients
- Locks (from Lecture 1) for safely managing the shared client list
The Server
```python
# chat_server.py
import socket
import threading

HOST = "127.0.0.1"
PORT = 9999

clients = []  # List of (conn, addr) tuples
clients_lock = threading.Lock()

def broadcast(message, sender_addr):
    """Send a message to all connected clients except the sender."""
    with clients_lock:
        for conn, addr in clients:
            if addr != sender_addr:
                try:
                    conn.send(message.encode())
                except OSError:
                    pass  # Client probably disconnected

def handle_client(conn, addr):
    print(f"[Server] {addr} joined.")
    broadcast(f"*** {addr} joined the chat ***", sender_addr=None)
    with clients_lock:
        clients.append((conn, addr))
    try:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            message = data.decode()
            print(f"[{addr}] {message}")
            broadcast(f"[{addr}] {message}", sender_addr=addr)
    except ConnectionResetError:
        pass
    finally:
        with clients_lock:
            clients.remove((conn, addr))
        conn.close()
        print(f"[Server] {addr} left.")
        broadcast(f"*** {addr} left the chat ***", sender_addr=None)

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen(5)
    print(f"Chat server running on {HOST}:{PORT}")
    while True:
        conn, addr = server.accept()
        t = threading.Thread(target=handle_client, args=(conn, addr), daemon=True)
        t.start()

if __name__ == "__main__":
    main()
```

The Client
```python
# chat_client.py
import socket
import threading

HOST = "127.0.0.1"
PORT = 9999

def receive_messages(sock):
    """Background thread: print incoming messages."""
    while True:
        try:
            data = sock.recv(1024)
            if not data:
                print("\n[Disconnected from server]")
                break
            print(f"\n{data.decode()}")
        except OSError:
            break

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((HOST, PORT))
    print(f"Connected to chat server at {HOST}:{PORT}")
    print("Type messages and press Enter. Press Ctrl+C to quit.\n")

    # Start a background thread to receive messages
    receiver = threading.Thread(target=receive_messages, args=(sock,), daemon=True)
    receiver.start()

    try:
        while True:
            message = input()
            if message:
                sock.send(message.encode())
    except (KeyboardInterrupt, EOFError):
        pass
    finally:
        sock.close()

if __name__ == "__main__":
    main()
```

Running It
Open three (or more) CMD windows:
```shell
# Window 1: Start the server
python chat_server.py

# Window 2: Start a client
python chat_client.py

# Window 3: Start another client
python chat_client.py
```

Type a message in one client window — it appears in the other. The server logs all messages and manages the connections.
How It Works
The architecture follows the same pattern we used in Lecture 2’s capstone:
```
┌──────────────────────────────────────────────────┐
│                   Chat Server                    │
│                                                  │
│  Main Thread           Handler Threads           │
│  ┌──────────┐    ┌──────────┐  ┌──────────┐      │
│  │ accept() │───►│ Client 1 │  │ Client 2 │ ...  │
│  │   loop   │    │  thread  │  │  thread  │      │
│  └──────────┘    └──────────┘  └──────────┘      │
│                       │             │            │
│                  clients list (shared,           │
│                  protected by Lock)              │
└──────────────────────────────────────────────────┘
```
- The main thread sits in an `accept()` loop, spawning a handler thread for each new client.
- Each handler thread reads from its client and calls `broadcast()` to forward the message.
- `broadcast()` iterates over the shared `clients` list under a `Lock` — the same synchronization primitive from Lecture 1.
- The client uses a background thread to receive messages while `input()` blocks the main thread waiting for user input.
This is a real (if barebones) chat application. It demonstrates every concept from Lectures 1–3 working together. The only limitation is scalability — but that’s a story for Lecture 5.
Summary
We’ve climbed the IPC ladder from the simplest mechanisms to full network communication:
| Concept | Scope | Python | When to use |
|---|---|---|---|
| Pipe | Same machine, 2 endpoints | `multiprocessing.Pipe` | Simple parent↔︎child channel |
| Shared memory | Same machine, any # of processes | `multiprocessing.shared_memory` | High-speed shared data (with care) |
| Queue | Same machine, many producers/consumers | `multiprocessing.Queue` | General-purpose IPC |
| UDP socket | Any machines, connectionless | `socket.SOCK_DGRAM` | Speed over reliability (DNS, gaming) |
| TCP socket | Any machines, connection-based | `socket.SOCK_STREAM` | Reliability and ordering (HTTP, SSH) |
Key takeaways:
- IPC is a spectrum — from shared variables (threads) to network sockets (different machines). Each level trades speed for reach.
- Sockets are the foundation of all networking. HTTP, REST APIs, web frameworks — they all send structured bytes through TCP sockets.
- UDP is simple and fast but unreliable. TCP is reliable but requires connection setup and has no message boundaries.
- HTTP is just text over TCP — request line, headers, blank line, body. Headers are metadata; the body is the content. TCP sees it all as payload bytes.
- Thread-per-client is the straightforward approach to serving multiple clients, but it doesn’t scale to thousands of connections.
The story so far:
Lecture 1: Processes & threads exist. Here's how to create them.
Lecture 2: Here's how to combine them in a real application (π estimator).
Lecture 3: Here's how they talk to each other, locally and over the network. (You are here.)
Lecture 4: HTTP + client-server architecture → REST APIs.
Lecture 5: Async programming → scalable servers without thread-per-client.
Exercises & Project Ideas
Additional Resources
- Python `socket` module documentation
- Python `socket` HOWTO — an excellent practical guide
- Beej’s Guide to Network Programming — the classic C sockets tutorial; concepts transfer directly to Python
- multiprocessing.shared_memory documentation
- HTTP/1.1 specification (RFC 9110) — the full standard (dense, but authoritative)
Next: Lecture 4 — Client-Server Architectures and RESTful APIs, where we move from raw sockets to structured HTTP APIs, and learn what “REST” actually means (spoiler: not what most people think).