Context managers and metaclasses

To infinity, and beyond

Setup: a small helper function used in the examples below:

def some_function(x, y):
    print("Some blabla")
    return x + y
Context Managers
The setup-execute-teardown pattern
A common pattern in programming is this:
- Set up a resource (open a file, create a connection, acquire a lock)
- Do some work with that resource
- Unconditionally tear down the resource (close the file, close the connection, release the lock)
The crucial point is that step 3 must always happen, even if step 2 fails with an exception. Otherwise you risk resource leaks, corrupted data, or deadlocks.
Python provides elegant syntax for this pattern: the context manager.
with something() as resource:
    do_stuff_with(resource)

The something() is the context manager, and it ensures proper cleanup no matter what happens inside the with block.
Using context managers
Example: Capturing output
Let’s say we have a function from a library that prints output, and we want to capture what it prints:
We could do this manually by redirecting sys.stdout:
import sys
import io

s = io.StringIO()
old_stdout, sys.stdout = sys.stdout, s  # redirect sys.stdout to s
some_function(3, 4)
sys.stdout = old_stdout  # reset sys.stdout to its old value
print(f"Captured: {s.getvalue()!r}")

Captured: 'Some blabla\n'
But what if some_function raises an exception? Then the reset line never runs, and sys.stdout stays redirected!
With a context manager, cleanup happens automatically:
import contextlib

s = io.StringIO()
try:
    with contextlib.redirect_stdout(s):
        some_function(3, 4)
        # Even if an exception happens here, stdout gets restored
except Exception:
    pass
print(f"Captured: {s.getvalue()!r}")
print("This prints normally")

Captured: 'Some blabla\n'
This prints normally
The beauty of context managers is that the cleanup code runs even if an exception occurs or a function returns inside the with block. This is similar to finally blocks, but much cleaner.
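To make the connection to finally concrete, here is a hand-rolled sketch of what the redirect above does under the hood (this is illustrative, not how you'd normally write it):

```python
import io
import sys

# Rough try/finally equivalent of:
#     with contextlib.redirect_stdout(s):
#         some_function(3, 4)
s = io.StringIO()
old_stdout = sys.stdout
sys.stdout = s               # "enter": redirect stdout
try:
    print("Some blabla")     # stand-in for some_function(3, 4)
finally:
    sys.stdout = old_stdout  # "exit": always runs, even on an exception

print(f"Captured: {s.getvalue()!r}")  # Captured: 'Some blabla\n'
```

The with statement packages exactly this pattern: the "enter" step before the block, the "exit" step in the finally.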
Other context managers you know
You’ve already used context managers for opening files:
from pathlib import Path

with Path('test.txt').open('w') as f:
    f.write('Hello from a context manager!\n')
# File is automatically closed here

Another example is creating zip files:

from zipfile import ZipFile

with ZipFile('myzip.zip', 'w') as myzip:
    myzip.write('test.txt')

Other useful context managers include:
- threading.Lock() for thread synchronization
- tempfile.TemporaryDirectory() for temporary directories
- Database connections from libraries like sqlite3
- pytest.raises() for testing exceptions
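Two of these in action (a small sketch; the file name scratch.txt is arbitrary):

```python
import tempfile
import threading
from pathlib import Path

# TemporaryDirectory removes the directory and its contents on exit
with tempfile.TemporaryDirectory() as tmpdir:
    scratch = Path(tmpdir) / 'scratch.txt'
    scratch.write_text('temporary data')
    print(scratch.read_text())
print(Path(tmpdir).exists())  # False: cleaned up automatically

# A Lock is released on exit, even if the block raises
lock = threading.Lock()
with lock:
    print(lock.locked())  # True
print(lock.locked())      # False
```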
Creating your own context managers
To create a context manager, you need to define what happens at three moments:
- When the context manager is created (__init__)
- When entering the with block (__enter__)
- When exiting the with block (__exit__)
In practice, steps 1 and 2 often happen on the same line of code, but that’s not strictly required.
Let’s build a StopWatch context manager that measures execution time.
Basic stopwatch
Without a context manager, timing code looks like this:
import time

t0 = time.perf_counter()
result = sum(i**2 for i in range(10_000_000))
t1 = time.perf_counter()
print(f'Command took {t1 - t0:.4f} s')

Command took 0.5978 s
Let’s turn this into a context manager:
class StopWatch:
    def __enter__(self):
        self.t0 = time.perf_counter()
    def __exit__(self, *args):
        self.t1 = time.perf_counter()
        print(f'Command took {self.t1 - self.t0:.4f} s')

with StopWatch():
    result = sum(i**2 for i in range(10_000_000))

Command took 0.5933 s
The *args in __exit__ are for exception information (we’ll get back to that).
Adding names
When nesting stopwatches, it helps to give them names:
class StopWatch:
    def __init__(self, name=''):
        self.name = name
    def __enter__(self):
        self.t0 = time.perf_counter()
    def __exit__(self, *args):
        self.t1 = time.perf_counter()
        print(f'Command "{self.name}" took {self.t1 - self.t0:.4f} s')

with StopWatch("sum of squares"):
    with StopWatch("first half"):
        result1 = sum(i**2 for i in range(5_000_000))
    with StopWatch("second half"):
        result2 = sum(i**2 for i in range(5_000_000, 10_000_000))
    print(f'Total: {result1 + result2}')

Command "first half" took 0.3369 s
Command "second half" took 0.3341 s
Total: 333333283333335000000
Command "sum of squares" took 0.6717 s
Using the context manager object
If you return self from __enter__, you can use the context manager object inside the with block:
class StopWatch:
    def __init__(self, name=''):
        self.name = name
    def __enter__(self):
        self.t0 = time.perf_counter()
        return self  # Return the context manager itself
    def elapsed(self):
        """Get time elapsed since entering the context."""
        t1 = time.perf_counter()
        return t1 - self.t0
    def __exit__(self, *args):
        self.t1 = time.perf_counter()
        print(f'Command "{self.name}" took {self.t1 - self.t0:.4f} s')

with StopWatch("sum of squares") as sw:
    result1 = sum(i**2 for i in range(5_000_000))
    print(f'After first half: {sw.elapsed():.4f} s')
    result2 = sum(i**2 for i in range(5_000_000, 10_000_000))

After first half: 0.3402 s
Command "sum of squares" took 0.6582 s
Exception handling
The __exit__ method receives three arguments when an exception occurs:
- exc_type: The exception type
- exc_value: The exception instance
- traceback: The traceback object
If __exit__ returns True, the exception is suppressed. Otherwise, it propagates.
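As a sketch of the "return True suppresses" rule, here is a hand-rolled version of contextlib.suppress (the class name Suppress is invented for illustration):

```python
class Suppress:
    """Swallow the given exception types (a sketch of contextlib.suppress)."""
    def __init__(self, *exc_types):
        self.exc_types = exc_types
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        # True -> suppress the exception; False/None -> let it propagate
        return exc_type is not None and issubclass(exc_type, self.exc_types)

with Suppress(ZeroDivisionError):
    x = 1 / 0
print("Still running")  # the ZeroDivisionError was swallowed
```

Exceptions that don't match the given types still propagate, because __exit__ returns False for them.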
class SafeStopWatch:
    def __init__(self, name=''):
        self.name = name
    def __enter__(self):
        self.t0 = time.perf_counter()
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        self.t1 = time.perf_counter()
        if exc_type is not None:
            print(f'Command "{self.name}" failed after {self.t1 - self.t0:.4f} s')
            print(f'Error: {exc_value}')
        else:
            print(f'Command "{self.name}" took {self.t1 - self.t0:.4f} s')
        # Return False to let the exception propagate
        return False

with SafeStopWatch("failing command"):
    x = 1 / 0

Command "failing command" failed after 0.0000 s
Error: division by zero

Traceback (most recent call last):
  ...
ZeroDivisionError: division by zero
Solution
import sys

class ImportManager:
    def __enter__(self):
        self.imports = set(sys.modules.keys())
    def __exit__(self, *args):
        new_imports = set(sys.modules.keys())
        extra_imports = new_imports - self.imports
        for k in extra_imports:
            print(f"Removing module: {k}")
            del sys.modules[k]

with ImportManager():
    import uuid
print('uuid' in sys.modules)  # False

True

Note that in this session the result is True rather than False: uuid had apparently already been imported by some other module before the with block, so it was in self.imports and was not removed.
The @contextmanager decorator
For simple cases, you can use the contextlib.contextmanager decorator instead of writing a class:
from contextlib import contextmanager

@contextmanager
def stopwatch(name=''):
    t0 = time.perf_counter()
    try:
        yield  # This is where the with block executes
    finally:
        t1 = time.perf_counter()
        print(f'Command "{name}" took {t1 - t0:.4f} s')

with stopwatch("quick test"):
    sum(i**2 for i in range(5_000_000))

Command "quick test" took 0.3290 s
The code before yield runs during __enter__, and the code after runs during __exit__. The finally ensures cleanup happens even with exceptions.
Use the decorator for simple, one-off context managers. Use a class when you need:
- Reusable, complex context managers
- Additional methods (like our elapsed() method)
- State management across multiple uses
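If the generator yields a value, that value becomes the as target, a decorator-based counterpart to returning self from __enter__. A sketch (stopwatch2 and the yielded lambda are invented names, not from the lecture):

```python
import time
from contextlib import contextmanager

@contextmanager
def stopwatch2(name=''):
    t0 = time.perf_counter()
    try:
        # Whatever we yield becomes the `as` target in the with statement
        yield lambda: time.perf_counter() - t0
    finally:
        print(f'Command "{name}" took {time.perf_counter() - t0:.4f} s')

with stopwatch2("demo") as elapsed:
    result1 = sum(i**2 for i in range(1_000_000))
    mid = elapsed()  # query the running time inside the block
print(f'Reading taken mid-block: {mid:.4f} s')
```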
Metaclasses
Metaclasses are powerful but rarely necessary. As Tim Peters wrote: “Metaclasses are deeper magic than 99% of users should ever worry about.”
Most problems that can be solved with metaclasses have simpler solutions using decorators, __init_subclass__, or simple inheritance. We’re covering them here to understand what happens behind the scenes when Python creates classes and objects.
Everything is an object (even classes)
In Python, everything is an object: variables, functions, modules… and classes too.
class K:
    a = 1

class L(K):
    b = 2

We can create instances of these classes:

k = K()
print(f'{k.a=}')
l = L()
print(f'{l.a=}, {l.b=}')

k.a=1
l.a=1, l.b=2
What’s the type of an instance?
print(type(k))

<class '__main__.K'>
What’s the type of a class?
print(type(K), type(L))

<class 'type'> <class 'type'>
The type of a class is type! This means type is a metaclass: a class whose instances are classes.
Creating classes with type()
Since classes are instances of type, we can create classes by calling type() directly:
K = type("K", (), {'a': 1})
L = type("L", (K,), {'b': 2})
print(K, L)

<class '__main__.K'> <class '__main__.L'>
The syntax is: type(name, bases, namespace)
- name: The class name as a string
- bases: A tuple of base classes
- namespace: A dictionary of class attributes and methods
These classes work exactly like normal classes:
k = K()
print(f'{k.a=}')
l = L()
print(f'{l.a=}, {l.b=}')

k.a=1
l.a=1, l.b=2
Defining a class with the class keyword is equivalent to instantiating type.
When Python sees class K: ..., it essentially calls type("K", bases, namespace) behind the scenes.
There’s normally no good reason to create classes this way, but it reveals an important truth about how Python works.
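Methods are just callables in the namespace dictionary. A small sketch (the Person class is invented for illustration):

```python
# Each callable in the namespace dict becomes a method of the new class
namespace = {
    '__init__': lambda self, name: setattr(self, 'name', name),
    'greet': lambda self: f"Hello, I am {self.name}",
}
Person = type("Person", (), namespace)

p = Person("Alice")
print(p.greet())     # Hello, I am Alice
print(type(Person))  # <class 'type'>
```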
Custom metaclasses
What if we want to customize class creation? We can create our own variant of type by subclassing it!
Let’s create a metaclass that automatically adds a creation_time attribute to every class:
import datetime

def now_str():
    now = datetime.datetime.now()
    return now.strftime("%Y/%m/%d %H:%M:%S")

class MyType(type):
    def __init__(self, name, bases, namespace):
        self.creation_time = now_str()
        super().__init__(name, bases, namespace)

Now we can create classes using our custom metaclass:

K = MyType("K", (), {'a': 1})
time.sleep(0.1)
L = MyType("L", (K,), {'b': 2})
print(f'{K.creation_time=}')
print(f'{L.creation_time=}')

K.creation_time='2025/11/15 19:32:45'
L.creation_time='2025/11/15 19:32:45'
But we’d prefer to use normal class syntax. We can specify the metaclass with the metaclass keyword:
class K(metaclass=MyType):
    a = 1

time.sleep(0.1)

class L(K):
    b = 2

print(f'{K.creation_time=}')
print(f'{L.creation_time=}')

K.creation_time='2025/11/15 19:32:45'
L.creation_time='2025/11/15 19:32:45'
Notice that L didn’t need to specify the metaclass - it’s inherited from K!
print(f'{type(K)=}')
print(f'{type(L)=}')

type(K)=<class '__main__.MyType'>
type(L)=<class '__main__.MyType'>
__new__ vs __init__ in metaclasses
In the previous example, we used __init__ which runs after the class is created. We can also use __new__ to intervene before the class is created:
class MyType(type):
    def __new__(meta, name, bases, namespace):
        # Modify the namespace before the class is created
        namespace['creation_time'] = now_str()
        cls = super().__new__(meta, name, bases, namespace)
        return cls

class K(metaclass=MyType):
    a = 1

class L(K):
    b = 2

print(f'{K.creation_time=}')
print(f'{L.creation_time=}')

K.creation_time='2025/11/15 19:32:45'
L.creation_time='2025/11/15 19:32:45'
When to use __new__ vs __init__
- Use __new__ when you need to modify the class before it's created (e.g., adding/removing attributes)
- Use __init__ when you need to perform setup after the class exists (e.g., registration, validation)
Here’s an example that validates method names:
class CapitalCheckerMeta(type):
    def __new__(meta, name, bases, namespace):
        for attr_name in namespace:
            if not attr_name.startswith('_') and attr_name[0].islower():
                raise ValueError(
                    f'Method {attr_name} in class {name} must start with uppercase'
                )
        cls = super().__new__(meta, name, bases, namespace)
        return cls

class BadClass(metaclass=CapitalCheckerMeta):
    def myMethod(self):  # lowercase 'm'
        pass

Traceback (most recent call last):
  ...
ValueError: Method myMethod in class BadClass must start with uppercase
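For contrast, here is a sketch that uses the metaclass's __init__ for after-the-fact registration (RegistryMeta, Shape, and Circle are invented names for illustration):

```python
class RegistryMeta(type):
    """Record every class created with this metaclass."""
    registry = {}
    def __init__(cls, name, bases, namespace):
        # The class object already exists at this point; we only record it
        super().__init__(name, bases, namespace)
        RegistryMeta.registry[name] = cls

class Shape(metaclass=RegistryMeta):
    pass

class Circle(Shape):  # the metaclass is inherited, so Circle registers too
    pass

print(sorted(RegistryMeta.registry))  # ['Circle', 'Shape']
```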
The complete creation chain
When you define a class and create instances, a complex sequence of method calls occurs. Let’s trace them:
class MyType(type):
    def __new__(meta, name, bases, namespace):
        print(f" __new__ of metaclass (creating class {name})")
        cls = super().__new__(meta, name, bases, namespace)
        return cls
    def __init__(cls, name, bases, namespace):
        print(f" __init__ of metaclass (initializing class {name})")
        super().__init__(name, bases, namespace)
    def __call__(cls, *args, **kwargs):
        print(f" __call__ of metaclass (about to create instance of {cls.__name__})")
        return super().__call__(*args, **kwargs)

print("=== Defining class K ===")

class K(metaclass=MyType):
    def __new__(cls):
        print(f" __new__ of class {cls.__name__}")
        return super().__new__(cls)
    def __init__(self):
        print(f" __init__ of class {self.__class__.__name__}")
        super().__init__()

print("\n=== Creating instance k ===")
k = K()

=== Defining class K ===
 __new__ of metaclass (creating class K)
 __init__ of metaclass (initializing class K)

=== Creating instance k ===
 __call__ of metaclass (about to create instance of K)
 __new__ of class K
 __init__ of class K
The sequence is:
When defining a class:
- __new__ of the metaclass (creates the class object)
- __init__ of the metaclass (initializes the class object)

When creating an instance:
- __call__ of the metaclass (acts as a gatekeeper)
- __new__ of the class (creates the instance)
- __init__ of the class (initializes the instance)
The metaclass’s __call__ method is a powerful control point - it’s invoked before any instance is created. This allows you to intercept instance creation entirely.
Practical example: Singleton pattern
A singleton is a class that only allows one instance to exist. Let’s see how to implement this pattern, starting with naive approaches.
Naive approach #1: Manual management
class Config:
    _instance = None
    @classmethod
    def get(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance
    def __init__(self):
        print("Initializing Config...")
        time.sleep(0.5)  # Simulate expensive setup
        self.settings = {'theme': 'dark', 'language': 'en'}

c1 = Config.get()
c2 = Config.get()
print(f'{c1 is c2=}')

Initializing Config...
c1 is c2=True
Problem: Users need to remember to call .get() instead of the normal constructor. Nothing prevents Config() from creating multiple instances!
Naive approach #2: Blocking the constructor
class Config:
    _instance = None
    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance
    def __init__(self):
        print("Initializing Config...")
        self.settings = {'theme': 'dark', 'language': 'en'}

c1 = Config()
c2 = Config()
print(f'{c1 is c2=}')

Initializing Config...
Initializing Config...
c1 is c2=True
Problem: __init__ runs every time! Watch:
c1 = Config()
c1.settings['custom'] = 'value'
print(f"Before: {c1.settings}")
c2 = Config()  # This calls __init__ again!
print(f"After: {c2.settings}")

Initializing Config...
Before: {'theme': 'dark', 'language': 'en', 'custom': 'value'}
Initializing Config...
After: {'theme': 'dark', 'language': 'en'}
Our custom setting disappeared! We could add a flag to prevent re-initialization, but this is getting messy.
The metaclass solution
class SingletonMeta(type):
    def __init__(cls, *args, **kwargs):
        cls._instance = None
        super().__init__(*args, **kwargs)
    def __call__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__call__(*args, **kwargs)
        return cls._instance

class Config(metaclass=SingletonMeta):
    def __init__(self):
        print("Initializing Config...")
        time.sleep(0.5)
        self.settings = {'theme': 'dark', 'language': 'en'}

c1 = Config()
c1.settings['custom'] = 'value'
print(f"c1: {c1.settings}")
c2 = Config()  # Returns existing instance, no re-init!
print(f"c2: {c2.settings}")
print(f'{c1 is c2=}')

Initializing Config...
c1: {'theme': 'dark', 'language': 'en', 'custom': 'value'}
c2: {'theme': 'dark', 'language': 'en', 'custom': 'value'}
c1 is c2=True
Perfect! The metaclass intercepts the call to Config() and returns the cached instance if it exists. The __init__ method only runs once.
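One caveat: the check-then-create in __call__ is not atomic, so two threads could both pass the None check before either assigns _instance. A hedged thread-safe variant adds a lock (ThreadSafeSingletonMeta and SafeConfig are invented names):

```python
import threading

class ThreadSafeSingletonMeta(type):
    def __init__(cls, *args, **kwargs):
        cls._instance = None
        cls._lock = threading.Lock()
        super().__init__(*args, **kwargs)
    def __call__(cls, *args, **kwargs):
        if cls._instance is None:           # fast path, no locking needed
            with cls._lock:                 # a context manager again!
                if cls._instance is None:   # re-check under the lock
                    cls._instance = super().__call__(*args, **kwargs)
        return cls._instance

class SafeConfig(metaclass=ThreadSafeSingletonMeta):
    def __init__(self):
        self.settings = {'theme': 'dark'}

print(SafeConfig() is SafeConfig())  # True
```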
Metaclasses in the standard library
While you rarely need to write your own metaclasses, they’re used in several important places:
Abstract Base Classes
from abc import ABC, ABCMeta, abstractmethod

class Animal(ABC):  # Equivalent to: metaclass=ABCMeta
    @abstractmethod
    def make_sound(self):
        pass

# Can't instantiate an abstract class
try:
    a = Animal()
except TypeError as e:
    print(f"Error: {e}")

Error: Can't instantiate abstract class Animal without an implementation for abstract method 'make_sound'
Enums
from enum import Enum, EnumMeta

class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

print(type(Color))  # EnumMeta (renamed EnumType in Python 3.11; EnumMeta remains an alias)
print(Color.RED)

<class 'enum.EnumType'>
Color.RED
ORMs like SQLAlchemy
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    email = Column(String)

# SQLAlchemy's metaclass converts Column definitions
# into proper database mappings
print(type(Base))  # DeclarativeBaseMeta (in SQLAlchemy 2.0+)

The metaclass processes the Column attributes and sets up the database mapping machinery. Pretty neat!
The __set_name__ hook
Python 3.6 introduced __set_name__, which is called on descriptors when they’re assigned to a class attribute. This provides a simpler alternative to some metaclass use cases.
class Field:
    def __init__(self):
        self.name = None
        self.value = None
    def __set_name__(self, owner, name):
        print(f"Field assigned as {name} in class {owner.__name__}")
        self.name = name
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__.get(self.name)
    def __set__(self, obj, value):
        print(f"Setting {self.name} = {value}")
        obj.__dict__[self.name] = value

class Model:
    name = Field()
    email = Field()

Field assigned as name in class Model
Field assigned as email in class Model
The __set_name__ hook runs when the class is created:
m = Model()
m.name = "Alice"
m.email = "[email protected]"
print(f'{m.name=}, {m.email=}')

Setting name = Alice
Setting email = [email protected]
m.name='Alice', m.email='[email protected]'
This is much simpler than using a metaclass to achieve the same behavior!
When (not) to use metaclasses
Don’t use metaclasses for:
- Simple validation → Use __init_subclass__ or decorators
- Adding methods to classes → Use mixins or class decorators
- Registering classes → Use class decorators or __init_subclass__
# Instead of a metaclass, use __init_subclass__:
class Plugin:
    plugins = []
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.plugins.append(cls)
        print(f"Registered plugin: {cls.__name__}")

class MyPlugin(Plugin):
    pass

class AnotherPlugin(Plugin):
    pass

print(f"Total plugins: {len(Plugin.plugins)}")

Registered plugin: MyPlugin
Registered plugin: AnotherPlugin
Total plugins: 2
Consider metaclasses for:
- Deep customization of class creation (like ABCMeta, EnumMeta)
- ORM frameworks where class definitions map to database schemas
- Automatic instance management (like the Singleton pattern)
- Complex validation that needs to happen at class definition time
If you’re not sure whether you need a metaclass, you don’t need a metaclass.
Try simpler solutions first: decorators, __init_subclass__, or composition. Metaclasses should be a last resort for truly complex scenarios.
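For instance, a class decorator can replace the singleton metaclass (singleton and AppConfig are invented names for this sketch):

```python
def singleton(cls):
    """Class decorator: cache and reuse the single instance of cls."""
    instance = None
    def get_instance(*args, **kwargs):
        nonlocal instance
        if instance is None:
            instance = cls(*args, **kwargs)
        return instance
    return get_instance

@singleton
class AppConfig:
    def __init__(self):
        self.settings = {'theme': 'dark'}

print(AppConfig() is AppConfig())  # True
```

Note the trade-off: after decoration, the name AppConfig refers to a function, so isinstance checks against it no longer work; the metaclass version keeps AppConfig a real class.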
Solution
import functools

class LoggingMeta(type):
    def __new__(meta, name, bases, namespace):
        for attr_name, attr_value in namespace.items():
            if callable(attr_value) and not attr_name.startswith('_'):
                namespace[attr_name] = meta._wrap_method(name, attr_name, attr_value)
        return super().__new__(meta, name, bases, namespace)
    @staticmethod
    def _wrap_method(class_name, method_name, method):
        @functools.wraps(method)
        def wrapper(*args, **kwargs):
            print(f"Calling {class_name}.{method_name}")
            return method(*args, **kwargs)
        return wrapper

class Calculator(metaclass=LoggingMeta):
    def add(self, x, y):
        return x + y
    def multiply(self, x, y):
        return x * y

calc = Calculator()
result = calc.add(3, 4)
print(f"Result: {result}")
result = calc.multiply(3, 4)
print(f"Result: {result}")

Calling Calculator.add
Result: 7
Calling Calculator.multiply
Result: 12