MicroPie is built around streaming-first multipart handling, with no heavyweight form parsers.
This design keeps memory usage low and avoids the double-copy overhead seen in other frameworks.
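
At the ASGI level, the streaming-first pattern reduces to something like the sketch below. This is an illustration of the technique, not MicroPie's actual code; it assumes `aiofiles` for non-blocking writes, and a real handler would also feed each chunk through a multipart parser before writing.

```python
import aiofiles  # assumed dependency here for non-blocking file writes

async def stream_upload_to_disk(receive, dest_path: str) -> int:
    """Drain an ASGI receive queue, appending each body chunk to dest_path.

    No full-body buffer is ever built, so memory stays flat regardless of
    upload size. Returns the number of bytes written. (Multipart parsing
    is omitted for brevity; this shows only the chunk-to-disk flow.)
    """
    written = 0
    async with aiofiles.open(dest_path, "wb") as out:
        while True:
            message = await receive()              # next ASGI event
            if message["type"] == "http.disconnect":
                break                              # client went away mid-upload
            chunk = message.get("body", b"")
            if chunk:
                await out.write(chunk)             # write-through, no accumulation
                written += len(chunk)
            if not message.get("more_body", False):
                break                              # final body message received
    return written
```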
We benchmarked a single 13.4 GiB file upload in MicroPie against FastAPI, Litestar, and Starlette on the same machine.
| Framework | Upload Handler Summary | Multipart Parser | Elapsed (13.4 GiB) | Throughput |
|---|---|---|---|---|
| MicroPie | Streams chunks from ASGI queue straight to disk | `multipart` | ~32.6 s | ~420 MiB/s |
| Litestar | `UploadFile.read()` loop | `multipart` | ~93.4 s | ~147 MiB/s |
| Starlette | `await request.form()` + `file.read()` | `python-multipart` | ~102 s | ~135 MiB/s |
| FastAPI | `UploadFile = File(...)` + `file.read()` | `python-multipart` | ~101.8 s | ~135 MiB/s |
Note: Tests were run sequentially, single `uvicorn` worker, same machine, same filesystem target. Throughput varies with CPU, disk, and kernel I/O tuning.
- Verified file size and SHA-256 hash after upload (see the verification sketch after this list).
- Same Python version and `uvicorn` version.
- Same port, same filesystem target (no tmpfs vs disk mismatch).
- Disabled indexing/AV processes during runs.
- Raised request body limits for frameworks that require it.
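
The integrity check from the first bullet can use a chunked read so that verification itself stays memory-flat. A minimal sketch, with placeholder arguments for the expected values:

```python
import hashlib
import os

def verify_upload(path: str, expected_size: int, expected_sha256: str) -> bool:
    """Confirm an uploaded file arrived intact: exact size and matching SHA-256."""
    if os.path.getsize(path) != expected_size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB blocks so verification never loads the file whole.
        for block in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(block)
    return digest.hexdigest() == expected_sha256
```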
- MicroPie uses `multipart` in streaming mode: incoming chunks are pushed directly into the parser and written to disk as they arrive (see the sketch after this list).
- Litestar also depends on `multipart`, but its higher-level abstractions still buffer reads before handing them off.
- Starlette and FastAPI rely on `python-multipart`, which parses and buffers the entire request before files are available.
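
A minimal sketch of the streaming mode described in the first bullet, assuming the push-parser API (`PushMultipartParser`) available in recent releases of the `multipart` package; it illustrates the technique rather than reproducing MicroPie's implementation:

```python
from multipart import MultipartSegment, PushMultipartParser

def stream_parse(chunks, boundary: str, out_path: str) -> None:
    """Push body chunks through the parser as they arrive; file payload bytes
    go straight to disk, so at most one chunk is held in memory at a time."""
    with PushMultipartParser(boundary) as parser, open(out_path, "wb") as out:
        for chunk in chunks:                 # e.g. bodies pulled off the ASGI queue
            for event in parser.parse(chunk):
                if isinstance(event, MultipartSegment):
                    pass                     # start of a part: headers, name, filename
                elif event:                  # non-empty bytearray of payload data
                    out.write(event)
                else:                        # None marks the end of the current part
                    pass
```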
This design difference is why MicroPie shows much higher throughput and lower latency for very large uploads.