libuv is a multi-platform support library with a focus on asynchronous I/O. Originally developed for Node.js, it has become the foundation that powers Node.js's event-driven, non-blocking I/O model. Understanding libuv is crucial to understanding how Node.js handles concurrency and achieves high performance.
libuv is a C library that provides:
- Event loop powered by the operating system's native asynchronous I/O capabilities
- Asynchronous TCP and UDP sockets
- Asynchronous DNS resolution
- Asynchronous file and file system operations
- File system events and watching
- Child processes
- Thread pool for operations that cannot be done asynchronously at the OS level
- Signal handling
- High resolution clock
- Threading and synchronization primitives
The library was created to abstract away the differences between various operating systems (Windows, Linux, macOS, etc.) and provide a consistent API for asynchronous operations.
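In Node.js, these capabilities surface through familiar core APIs. A rough, illustrative mapping (not an exhaustive or authoritative list):

```js
// Illustrative only: each libuv capability maps to a familiar Node.js API.
const net = require('net');
const fs = require('fs');
const { spawn } = require('child_process');

setTimeout(() => {}, 100);                 // timers driven by the event loop
net.createServer();                        // asynchronous TCP sockets (not started here)
fs.readFile(__filename, () => {});         // file I/O, handled on the thread pool
fs.watch(__dirname, () => {}).close();     // file system events and watching
spawn(process.execPath, ['--version']);    // child processes
process.on('SIGINT', () => {});            // signal handling
console.log(process.hrtime.bigint());      // high resolution clock
```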
Node.js runs JavaScript, which is single-threaded. To handle multiple concurrent operations without blocking, Node.js needs a way to:
- Perform I/O operations asynchronously
- Handle multiple connections simultaneously
- Execute long-running operations without freezing the main thread
- Provide a consistent behavior across different operating systems
libuv solves all these problems by providing the underlying event loop and asynchronous I/O primitives.
The event loop is the core mechanism that allows Node.js to perform non-blocking I/O operations despite JavaScript being single-threaded. Here's how it works:
The libuv event loop operates in phases, each with its own queue of callbacks:
```
   ┌───────────────────────────┐
┌─>│           timers          │
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │     pending callbacks     │
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │       idle, prepare       │
│  └─────────────┬─────────────┘      ┌───────────────┐
│  ┌─────────────┴─────────────┐      │   incoming:   │
│  │           poll            │<─────┤  connections, │
│  └─────────────┬─────────────┘      │   data, etc.  │
│  ┌─────────────┴─────────────┐      └───────────────┘
│  │           check           │
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
└──┤      close callbacks      │
   └───────────────────────────┘
```
The timers phase executes callbacks scheduled by setTimeout() and setInterval(). A timer specifies the threshold after which its callback may be executed, not the exact time it will run.
```js
setTimeout(() => {
  console.log('Timer callback executed');
}, 100);
```

The pending callbacks phase executes I/O callbacks deferred to the next loop iteration. This includes callbacks for some system operations, such as TCP errors.
The idle and prepare phases are used internally by libuv and are not directly exposed to JavaScript.
The poll phase is the most important phase. Here:
- New I/O events are retrieved
- Most I/O-related callbacks are executed (everything except close callbacks, timer callbacks, and setImmediate() callbacks)
- The event loop may block, waiting for incoming connections or data
The poll phase has two main functions:
- Calculating how long it should block and poll for I/O
- Processing events in the poll queue
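For instance, a script whose only pending work is an in-flight I/O request simply parks in the poll phase until that request completes. A small sketch, reading this file:

```js
const fs = require('fs');

// With no timers or immediates scheduled, the event loop sits in the
// poll phase, waiting, until libuv reports that the read has completed.
fs.readFile(__filename, (err, data) => {
  if (err) throw err;
  console.log(`Read ${data.length} bytes; the loop was parked in poll until now`);
});
```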
In the check phase, setImmediate() callbacks are invoked. This allows you to execute code immediately after the poll phase completes.
```js
setImmediate(() => {
  console.log('setImmediate callback');
});
```

Finally, the close callbacks phase executes close event callbacks, such as socket.on('close', ...).
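For example, when a socket is destroyed abruptly, its 'close' handler is dispatched in this phase. A minimal sketch (the port is chosen automatically):

```js
const net = require('net');

// Minimal sketch: a handle closed abruptly with destroy() has its
// 'close' callback dispatched during the close callbacks phase.
const server = net.createServer((socket) => {
  socket.on('close', () => {
    console.log('Socket closed');
    server.close(); // nothing left to do; let the process exit
  });
  socket.destroy(); // abrupt close
});

server.listen(0, () => {
  // Connect a throwaway client just to trigger the flow
  const client = net.connect(server.address().port);
  client.on('error', () => {}); // ignore a possible ECONNRESET from the abrupt close
});
```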
While libuv provides asynchronous I/O for many operations, some operations don't have native asynchronous APIs in all operating systems. For these operations, libuv maintains a thread pool.
The following operations are executed in libuv's thread pool:
- File System Operations - all asynchronous fs.* module operations except fs.FSWatcher()
- DNS Operations - dns.lookup() (but not dns.resolve*(), which uses the c-ares library)
- CPU-Intensive Crypto Operations - crypto.pbkdf2(), crypto.scrypt(), crypto.randomBytes(), crypto.randomFill()
- Zlib Operations - all asynchronous zlib APIs (the explicitly synchronous ones block the calling thread instead)
By default, libuv creates a thread pool with 4 threads. You can configure this with the UV_THREADPOOL_SIZE environment variable:
```bash
UV_THREADPOOL_SIZE=8 node app.js
```

The maximum size is 1024 threads, but increasing it should be done carefully based on your application's needs.
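One way to see the effect is to time several pool-bound tasks started at once. A rough sketch using crypto.pbkdf2(), which runs on the thread pool:

```js
const crypto = require('crypto');

// Rough sketch: launch 5 pool-bound tasks at the same time.
// With UV_THREADPOOL_SIZE=4 (the default), tasks 1-4 finish together
// and task 5 finishes roughly one "batch" later; with UV_THREADPOOL_SIZE=8
// all five finish in the first batch.
const start = Date.now();
for (let i = 1; i <= 5; i++) {
  crypto.pbkdf2('password', 'salt', 200000, 64, 'sha512', () => {
    console.log(`task ${i} done after ${Date.now() - start}ms`);
  });
}
```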
Network operations (TCP, UDP, HTTP) use the OS kernel's native asynchronous capabilities:
- Linux: epoll
- macOS/BSD: kqueue
- Windows: IOCP (I/O Completion Ports)
- Solaris: Event ports
These operations do not use the thread pool because they're truly asynchronous at the OS level.
```js
const net = require('net');

// This uses the OS's native async I/O, not the thread pool
const server = net.createServer((socket) => {
  socket.on('data', (data) => {
    console.log('Received:', data);
  });
});

server.listen(3000);
```

File system operations use the thread pool because most operating systems don't provide truly asynchronous file I/O:
```js
const fs = require('fs');

// This operation will be queued in the thread pool
fs.readFile('/path/to/file', (err, data) => {
  if (err) throw err;
  console.log(data);
});
```

DNS resolution has two approaches:
```js
const dns = require('dns');

// dns.lookup() uses getaddrinfo() and the thread pool
dns.lookup('example.com', (err, address) => {
  console.log('Address:', address);
});

// dns.resolve4() uses the c-ares library (async, no thread pool)
dns.resolve4('example.com', (err, addresses) => {
  console.log('Addresses:', addresses);
});
```

While not part of the official event loop phases, it's important to understand process.nextTick() and Promise microtasks.
Callbacks passed to process.nextTick() are executed immediately after the current operation completes, before the event loop continues:
```js
console.log('1');
setTimeout(() => console.log('2'), 0);
process.nextTick(() => console.log('3'));
console.log('4');
// Output: 1, 4, 3, 2
```

Promises are handled in the microtask queue, which is processed after the process.nextTick() queue but before moving to the next event loop phase:
```js
console.log('1');
setTimeout(() => console.log('2'), 0);
Promise.resolve().then(() => console.log('3'));
process.nextTick(() => console.log('4'));
console.log('5');
// Output: 1, 5, 4, 3, 2
```

Here's the complete order of execution:
1. Synchronous code - the current operation
2. process.nextTick() queue - executed completely
3. Microtask queue - Promises, executed completely
4. Event loop phases - timers → pending callbacks → poll → check → close callbacks
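Putting it together, here is a single script exercising all four categories (a sketch; the exact timer delay is irrelevant here):

```js
console.log('sync');                                  // 1. synchronous code

process.nextTick(() => console.log('nextTick'));      // 2. nextTick queue
Promise.resolve().then(() => console.log('promise')); // 3. microtask queue

setTimeout(() => console.log('timeout'), 0);          // 4. timers phase
setImmediate(() => console.log('immediate'));         //    check phase

// Typical output: sync, nextTick, promise, then timeout/immediate
// (whose relative order in the main module is not guaranteed, as shown next)
```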
A classic comparison is setTimeout() with a 0 ms delay versus setImmediate():

```js
setTimeout(() => {
  console.log('timeout');
}, 0);

setImmediate(() => {
  console.log('immediate');
});
```

The order is non-deterministic when this runs in the main module, because it depends on the performance of the process. However, within an I/O cycle, setImmediate() is always executed first:
```js
const fs = require('fs');

fs.readFile(__filename, () => {
  setTimeout(() => {
    console.log('timeout');
  }, 0);
  setImmediate(() => {
    console.log('immediate');
  });
});

// Always outputs: immediate, timeout
```

When many thread pool operations start at once, they compete for the pool's threads:

```js
const fs = require('fs');

console.log('Start');

// These will compete for thread pool resources
for (let i = 0; i < 10; i++) {
  fs.readFile('/path/to/large/file', () => {
    console.log(`File ${i} read`);
  });
}

console.log('Files queued');

// With the default thread pool (4 threads):
// - The first 4 reads start immediately
// - The remaining 6 wait in the queue
// - As threads complete, they pick up waiting tasks
```
CPU-bound crypto work shows the difference between blocking the event loop and using the thread pool:

```js
const crypto = require('crypto');

console.log('Start');

// This blocks the event loop
const start = Date.now();
crypto.pbkdf2Sync('password', 'salt', 100000, 512, 'sha512');
console.log(`Sync took: ${Date.now() - start}ms`);

// This doesn't block (uses the thread pool)
crypto.pbkdf2('password', 'salt', 100000, 512, 'sha512', (err, key) => {
  console.log('Async completed');
});

console.log('End');
```

Avoid doing heavy synchronous work inside request handlers; offload it to worker threads or child processes instead:
```js
const express = require('express');
const { Worker } = require('worker_threads');

const app = express();

// Bad - blocks the event loop
app.get('/compute', (req, res) => {
  const result = doHeavyComputation(); // Synchronous
  res.json({ result });
});

// Good - offload to worker threads or child processes
app.get('/compute', (req, res) => {
  const worker = new Worker('./compute-worker.js');
  worker.on('message', (result) => {
    res.json({ result });
  });
});
```

If your application performs many file system operations or DNS lookups, consider increasing the thread pool size:

```js
// Must be set before any thread pool work is scheduled
// (or, more reliably, set UV_THREADPOOL_SIZE when launching the process)
process.env.UV_THREADPOOL_SIZE = 8;
```

Prefer asynchronous file system APIs over their synchronous counterparts:
```js
// Bad
const data = fs.readFileSync('/file'); // Blocks the event loop

// Good
fs.readFile('/file', (err, data) => {
  // Non-blocking
});

// Better (with async/await, inside an async function or an ES module)
const data = await fs.promises.readFile('/file');
```

Finally, be deliberate about process.nextTick() versus setImmediate():
```js
// Use nextTick sparingly - it can starve the event loop
process.nextTick(() => {
  // Executed before the event loop continues to I/O
});

// Use setImmediate for deferring execution
setImmediate(() => {
  // Executed in the check phase, after I/O callbacks
});
```

You can get a rough sense of event loop lag by measuring how late a repeating timer fires:
```js
let last = process.hrtime();

setInterval(() => {
  const delta = process.hrtime(last);
  const millis = delta[0] * 1e3 + delta[1] / 1e6;
  // The interval is 1000ms; anything significantly above that is event loop lag
  console.log(`Event loop delay: ${(millis - 1000).toFixed(1)}ms`);
  last = process.hrtime();
}, 1000);
```

When the event loop is blocked by synchronous operations:
- No new connections can be accepted
- No callbacks can be executed
- No timers can fire
- Application becomes unresponsive
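You can see this directly: a timer due in 100 ms cannot fire while a synchronous busy-loop holds the thread (a deliberately bad sketch):

```js
// Deliberately bad: a synchronous busy-loop blocks the event loop,
// so the 100ms timer cannot fire until the loop is free again.
setTimeout(() => {
  console.log(`Timer fired after ${Date.now() - start}ms (expected ~100ms)`);
}, 100);

const start = Date.now();
while (Date.now() - start < 2000) {
  // burn CPU for ~2 seconds
}
console.log('Busy-loop finished');
```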
When all thread pool threads are busy:
- New file system operations queue up
- DNS lookups are delayed
- Crypto operations are postponed
- Response times increase
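The same idea applies to the pool: if slow crypto jobs occupy all four default threads, an unrelated file read has to wait for a free thread. A rough sketch:

```js
const crypto = require('crypto');
const fs = require('fs');

const start = Date.now();

// Four slow jobs occupy the entire default thread pool...
for (let i = 0; i < 4; i++) {
  crypto.pbkdf2('password', 'salt', 500000, 64, 'sha512', () => {
    console.log(`pbkdf2 ${i} done after ${Date.now() - start}ms`);
  });
}

// ...so this small read is queued behind them and completes late.
fs.readFile(__filename, () => {
  console.log(`readFile done after ${Date.now() - start}ms`);
});
```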
To see what asynchronous operations your application creates, the async_hooks module can trace them as they are initialized:

```js
const async_hooks = require('async_hooks');
const fs = require('fs');

const hook = async_hooks.createHook({
  init(asyncId, type, triggerAsyncId) {
    // Use a synchronous write here: console.log is itself asynchronous
    // and would re-enter this hook
    fs.writeSync(1, `Async operation: ${type}\n`);
  }
});

hook.enable();
```

For timing specific stretches of work, perf_hooks provides marks and measures:
```js
const { performance, PerformanceObserver } = require('perf_hooks');

const obs = new PerformanceObserver((items) => {
  items.getEntries().forEach((entry) => {
    console.log(`${entry.name}: ${entry.duration}ms`);
  });
});
obs.observe({ entryTypes: ['measure'] });

performance.mark('A');
// ... do some work
performance.mark('B');
performance.measure('A to B', 'A', 'B');
```

libuv is the powerhouse behind Node.js's asynchronous capabilities. By understanding how it works:
- You can write more efficient Node.js applications
- You can avoid common pitfalls like blocking the event loop
- You can make informed decisions about threading and concurrency
- You can debug performance issues more effectively
Key takeaways:
- Network I/O is truly asynchronous (no thread pool)
- File system operations use the thread pool
- The event loop has distinct phases with specific purposes
- process.nextTick() and microtasks run between phases
- Blocking the event loop severely impacts performance
- Thread pool size can be configured based on workload
Understanding libuv's architecture helps you build scalable, high-performance Node.js applications that can handle thousands of concurrent connections efficiently.