@patx
Last active December 7, 2025 18:10
Latest benchmarks for new MicroPie release (v0.17)

Benchmarking Python Web Frameworks: Blacksheep, Starlette, MicroPie, Quart, FastAPI

I created minimal apps for each framework, all returning {"Hello": "World"} as JSON. Each app ran under Uvicorn with a single worker to measure baseline performance. Tests hit http://127.0.0.1:8000 with wrk (15 s duration, 4 threads, 64 connections).
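A run like the one above can be automated with a small helper that pulls the Requests/sec figure out of wrk's summary output. This is an illustrative sketch, not part of the original benchmark scripts; the parse_req_per_sec name is hypothetical.

```python
import re

def parse_req_per_sec(wrk_output: str) -> float:
    """Extract the Requests/sec figure from wrk's summary output."""
    match = re.search(r"Requests/sec:\s+([\d.]+)", wrk_output)
    if match is None:
        raise ValueError("no Requests/sec line found in wrk output")
    return float(match.group(1))

# The benchmark command used above (15 s, 4 threads, 64 connections) was:
#   wrk -t4 -c64 -d15s http://127.0.0.1:8000
# Running it via subprocess and feeding stdout to parse_req_per_sec()
# gives the throughput number reported in the results table below.
```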

Blacksheep

from blacksheep import Application, get, json

app = Application()

@get("/")
async def home():
    return json({"Hello": "World"})

Starlette

from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route

async def homepage(request):
    return JSONResponse({'Hello': 'World'})

app = Starlette(routes=[Route('/', homepage)])

MicroPie

from micropie import App

class Root(App):
    async def index(self):
        return {"Hello": "World"}

app = Root()

Quart

from quart import Quart

app = Quart(__name__)

@app.route('/')
async def greet():
    return {"Hello": "World"}

FastAPI

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def read_root():
    return {"Hello": "World"}

Benchmark Results

I ran multiple tests and picked the best (highest Requests/sec) for each framework. The table shows results, ordered fastest to slowest by Requests/sec:

Framework     Requests/sec   Avg Latency (ms)   Max Latency (ms)   Transfer/sec (MB)   Total Requests   Data Read (MB)
Blacksheep        23041.66               2.82              88.84                3.12           345832            46.83
Starlette         21615.41               3.00              90.34                2.93           324374            43.93
MicroPie          18519.02               3.53             105.00                2.84           277960            42.68
FastAPI            8899.40               7.22              56.09                1.21           133542            18.08
Quart              8601.40               7.52             117.99                1.17           129089            17.60

Observations

MicroPie (18,519.02 req/s, 3.53 ms average latency) trails Blacksheep and Starlette but outperforms Quart and FastAPI, two frameworks that also prioritize clean code. MicroPie serializes dictionaries to JSON dynamically at runtime (using orjson if available) and sets headers such as Content-Type automatically. This keeps handler code simple: you return a plain dictionary ({"Hello": "World"}) rather than a response object such as Blacksheep's json() or Starlette's JSONResponse. The trade-off is overhead from runtime type checks and serialization, and supporting multiple response types (strings, bytes, iterables) adds per-request branching that increases latency.

In contrast:

  • Blacksheep’s json() function optimizes JSON serialization and headers.
  • Starlette’s JSONResponse class streamlines serialization.
  • FastAPI’s declarative syntax and OpenAPI features add overhead, even in this minimal example.
  • Quart’s Flask-inspired API and JSON serialization slow it down.

MicroPie’s leaner design, with optional orjson and minimal middleware, delivers more than double the throughput of FastAPI and Quart at roughly half the latency (3.53 ms vs. ~7.22–7.52 ms), striking a better balance between simplicity and performance.
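The dynamic serialization path described above can be sketched roughly as follows. This is an illustrative pattern under the stated assumptions (orjson used when importable, stdlib json otherwise), not MicroPie's actual source; the to_json_bytes and render names are hypothetical.

```python
# Prefer orjson when installed; fall back to the stdlib json module.
try:
    import orjson

    def to_json_bytes(obj) -> bytes:
        return orjson.dumps(obj)
except ImportError:
    import json

    def to_json_bytes(obj) -> bytes:
        return json.dumps(obj).encode("utf-8")

def render(result):
    """Dispatch on the handler's return type at runtime, as described
    above: dicts become JSON with Content-Type set; bytes and strings
    pass through with appropriate headers."""
    if isinstance(result, dict):
        return to_json_bytes(result), "application/json"
    if isinstance(result, bytes):
        return result, "application/octet-stream"
    if isinstance(result, str):
        return result.encode("utf-8"), "text/html; charset=utf-8"
    raise TypeError(f"unsupported response type: {type(result)!r}")
```

Each request pays for the isinstance checks and the serialization call, which is one plausible source of the gap between MicroPie and frameworks whose response objects serialize once, up front.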

Key Takeaways

MicroPie suits small to medium apps where clean code matters. FastAPI excels for large APIs with typing and documentation. Blacksheep and Starlette are best when speed is critical, despite more boilerplate.

  1. Blacksheep leads (23,041.66 req/s, 2.82 ms), best for high-throughput.
  2. Starlette follows (21,615.41 req/s, 3.00 ms), balancing speed and flexibility.
  3. MicroPie (18,519.02 req/s) beats Quart and FastAPI, offering clean code with strong performance, roughly 14–20% slower than Starlette and Blacksheep.
  4. FastAPI and Quart (~8,600–8,900 req/s) lag due to feature overhead.
  5. Test Conditions: Single Uvicorn worker shows baseline performance; adding workers may improve results, but rankings should hold.