Understanding libuv: The Heart of the Node.js Event Loop

Introduction

libuv is a multi-platform support library with a focus on asynchronous I/O. Originally developed for Node.js, it has become the foundation that powers Node.js's event-driven, non-blocking I/O model. Understanding libuv is crucial to understanding how Node.js handles concurrency and achieves high performance.

What is libuv?

libuv is a C library that provides:

  • Event loop powered by the operating system's native asynchronous I/O capabilities
  • Asynchronous TCP and UDP sockets
  • Asynchronous DNS resolution
  • Asynchronous file and file system operations
  • File system events and watching
  • Child processes
  • Thread pool for operations that cannot be done asynchronously at the OS level
  • Signal handling
  • High resolution clock
  • Threading and synchronization primitives

The library was created to abstract away the differences between various operating systems (Windows, Linux, macOS, etc.) and provide a consistent API for asynchronous operations.

Why Node.js Needs libuv

Node.js runs JavaScript, which is single-threaded. To handle multiple concurrent operations without blocking, Node.js needs a way to:

  1. Perform I/O operations asynchronously
  2. Handle multiple connections simultaneously
  3. Execute long-running operations without freezing the main thread
  4. Provide a consistent behavior across different operating systems

libuv solves all these problems by providing the underlying event loop and asynchronous I/O primitives.

The Event Loop Architecture

The event loop is the core mechanism that allows Node.js to perform non-blocking I/O operations despite JavaScript being single-threaded. Here's how it works:

Event Loop Phases

The libuv event loop operates in phases, each with its own queue of callbacks:

   ┌───────────────────────────┐
┌─>│           timers          │
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │     pending callbacks     │
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │       idle, prepare       │
│  └─────────────┬─────────────┘      ┌───────────────┐
│  ┌─────────────┴─────────────┐      │   incoming:   │
│  │           poll            │<─────┤  connections, │
│  └─────────────┬─────────────┘      │   data, etc.  │
│  ┌─────────────┴─────────────┐      └───────────────┘
│  │           check           │
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │      close callbacks      │
│  └─────────────┬─────────────┘
└──────────────────────────────┘

1. Timers Phase

Executes callbacks scheduled by setTimeout() and setInterval(). Timers specify the threshold after which a callback may be executed, not the exact time.

setTimeout(() => {
  console.log('Timer callback executed');
}, 100);

2. Pending Callbacks Phase

Executes I/O callbacks deferred to the next loop iteration. This includes callbacks for some system operations like TCP errors.
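
A minimal sketch of one such case, connecting to a hypothetical closed port:

const net = require('net');

// Connecting to a port with no listener produces a system-level error
// such as ECONNREFUSED; libuv may defer reporting it to the pending
// callbacks phase of a later loop iteration.
const socket = net.connect(1, '127.0.0.1');

socket.on('error', (err) => {
  console.error('Connection error:', err.code);
});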

3. Idle, Prepare Phase

Used internally by libuv. Not directly exposed to JavaScript.

4. Poll Phase

This is the most important phase where:

  • New I/O events are retrieved
  • I/O-related callbacks are executed (except close callbacks, timers, and setImmediate())
  • The event loop may block here waiting for incoming connections or data

The poll phase has two main functions:

  • Calculating how long it should block and poll for I/O
  • Processing events in the poll queue

Once the poll queue is empty, the event loop checks for timers whose thresholds have elapsed and, if any are ready, wraps back to the timers phase to execute their callbacks.
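
A rough illustration of the interplay between the poll phase and timers, assuming a 100 ms timer and a read of the script's own file:

const fs = require('fs');

// With a 100 ms timer pending, the poll phase will not block much
// longer than that: once the timer's threshold elapses, the loop
// wraps back to the timers phase to run its callback.
setTimeout(() => {
  console.log('timer fired');
}, 100);

// The read usually completes quickly, so its callback is picked up
// from the poll queue first.
fs.readFile(__filename, () => {
  console.log('file read (poll phase callback)');
});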

5. Check Phase

setImmediate() callbacks are invoked here. This allows you to execute code immediately after the poll phase completes.

setImmediate(() => {
  console.log('setImmediate callback');
});

6. Close Callbacks Phase

Executes close event callbacks, like socket.on('close', ...).
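
A small sketch, reusing port 3000 from the earlier network example:

const net = require('net');

const server = net.createServer((socket) => {
  // 'close' fires once the socket has fully closed; its callback is
  // invoked during the close callbacks phase.
  socket.on('close', () => {
    console.log('Socket closed');
  });

  socket.end('bye\n'); // end the connection from the server side
});

server.listen(3000);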

libuv Thread Pool

While libuv provides asynchronous I/O for many operations, some operations don't have native asynchronous APIs in all operating systems. For these operations, libuv maintains a thread pool.

Operations Using the Thread Pool

The following operations are executed in libuv's thread pool:

  1. File System Operations - All asynchronous fs module operations except fs.FSWatcher() (synchronous variants block the main thread instead)
  2. DNS Operations - dns.lookup() and certain dns.resolve() calls
  3. CPU-Intensive Crypto Operations - crypto.pbkdf2(), crypto.scrypt(), crypto.randomBytes(), crypto.randomFill()
  4. Zlib Operations - All asynchronous zlib APIs (synchronous variants run on the main thread)
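
As a rough sketch of how the default pool of 4 threads behaves, six asynchronous pbkdf2 calls tend to finish in two waves: four roughly together, then the remaining two once threads free up.

const crypto = require('crypto');

const start = Date.now();

for (let i = 1; i <= 6; i++) {
  // Each async pbkdf2 call is dispatched to a libuv thread pool thread
  crypto.pbkdf2('password', 'salt', 100000, 64, 'sha512', () => {
    console.log(`pbkdf2 #${i} done after ${Date.now() - start}ms`);
  });
}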

Thread Pool Size

By default, libuv creates a thread pool with 4 threads. You can configure this with the UV_THREADPOOL_SIZE environment variable:

UV_THREADPOOL_SIZE=8 node app.js

The maximum size is 1024 threads, but increasing it should be done carefully based on your application's needs.

How libuv Handles Different I/O Types

Network I/O

Network operations (TCP, UDP, HTTP) use the OS kernel's native asynchronous capabilities:

  • Linux: epoll
  • macOS/BSD: kqueue
  • Windows: IOCP (I/O Completion Ports)
  • Solaris: Event ports

These operations do not use the thread pool because they're truly asynchronous at the OS level.

const net = require('net');

// This uses the OS's native async I/O, not the thread pool
const server = net.createServer((socket) => {
  socket.on('data', (data) => {
    console.log('Received:', data);
  });
});

server.listen(3000);

File System I/O

File system operations use the thread pool because most operating systems don't provide truly asynchronous file I/O:

const fs = require('fs');

// This operation will be queued in the thread pool
fs.readFile('/path/to/file', (err, data) => {
  if (err) throw err;
  console.log(data);
});

DNS Operations

DNS resolution has two approaches:

const dns = require('dns');

// dns.lookup() uses getaddrinfo() and the thread pool
dns.lookup('example.com', (err, address) => {
  console.log('Address:', address);
});

// dns.resolve() uses c-ares library (async, no thread pool)
dns.resolve4('example.com', (err, addresses) => {
  console.log('Addresses:', addresses);
});

Process.nextTick() and Microtasks

While not part of the official event loop phases, it's important to understand process.nextTick() and Promise microtasks:

process.nextTick()

Callbacks passed to process.nextTick() are executed immediately after the current operation completes, before the event loop continues:

console.log('1');

setTimeout(() => console.log('2'), 0);

process.nextTick(() => console.log('3'));

console.log('4');

// Output: 1, 4, 3, 2

Promise Microtasks

Promises are handled in the microtask queue, which is processed after process.nextTick() but before moving to the next event loop phase:

console.log('1');

setTimeout(() => console.log('2'), 0);

Promise.resolve().then(() => console.log('3'));

process.nextTick(() => console.log('4'));

console.log('5');

// Output: 1, 5, 4, 3, 2

Execution Order

Here's the complete order of execution:

  1. Synchronous code - Current operation
  2. process.nextTick() queue - Executed completely
  3. Microtask queue - Promises, executed completely
  4. Event loop phases - Timers → Pending → Poll → Check → Close
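
Putting it together in one snippet (the timeout vs immediate ordering in the main module can vary, as discussed in Example 1 below):

const fs = require('fs');

console.log('sync');                                  // 1. synchronous code

process.nextTick(() => console.log('nextTick'));      // 2. nextTick queue

Promise.resolve().then(() => console.log('promise')); // 3. microtask queue

setTimeout(() => console.log('timeout'), 0);          // 4. timers phase

setImmediate(() => console.log('immediate'));         // 5. check phase

fs.readFile(__filename, () => {
  console.log('readFile (poll phase)');               // once the read completes
});

// Typical output: sync, nextTick, promise, timeout, immediate, readFile (poll phase)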

Practical Examples

Example 1: Understanding Timer vs setImmediate

setTimeout(() => {
  console.log('timeout');
}, 0);

setImmediate(() => {
  console.log('immediate');
});

When run from the main module, the order is non-deterministic because it is bound by the performance of the process. Within an I/O callback, however, the setImmediate callback always runs first:

const fs = require('fs');

fs.readFile(__filename, () => {
  setTimeout(() => {
    console.log('timeout');
  }, 0);
  
  setImmediate(() => {
    console.log('immediate');
  });
});

// Always outputs: immediate, timeout

Example 2: File System Operations and Thread Pool

const fs = require('fs');

console.log('Start');

// These will compete for thread pool resources
for (let i = 0; i < 10; i++) {
  fs.readFile('/path/to/large/file', () => {
    console.log(`File ${i} read`);
  });
}

console.log('Files queued');

// With default thread pool (4 threads):
// - First 4 files start immediately
// - Remaining 6 wait in queue
// - As threads complete, they pick up waiting tasks

Example 3: Blocking the Event Loop

const crypto = require('crypto');

console.log('Start');

// This blocks the event loop
const start = Date.now();
crypto.pbkdf2Sync('password', 'salt', 100000, 512, 'sha512');
console.log(`Sync took: ${Date.now() - start}ms`);

// This doesn't block (uses thread pool)
crypto.pbkdf2('password', 'salt', 100000, 512, 'sha512', (err, key) => {
  console.log('Async completed');
});

console.log('End');

Best Practices

1. Avoid Blocking the Event Loop

// Bad - blocks the event loop
app.get('/compute', (req, res) => {
  const result = doHeavyComputation(); // Synchronous
  res.json({ result });
});

// Good - offload to worker threads or child processes
const { Worker } = require('worker_threads');

app.get('/compute', (req, res) => {
  const worker = new Worker('./compute-worker.js');
  worker.on('message', (result) => {
    res.json({ result });
  });
});

2. Be Mindful of Thread Pool Starvation

If your application performs many file system operations or DNS lookups, consider increasing the thread pool size. The value must be set before the pool is first used, so prefer the UV_THREADPOOL_SIZE environment variable when launching Node, or set it at the very top of your entry file:

// Must run before any module schedules work on the thread pool
process.env.UV_THREADPOOL_SIZE = 8;

3. Use Asynchronous Methods

// Bad
const data = fs.readFileSync('/file'); // Blocks

// Good
fs.readFile('/file', (err, data) => {
  // Non-blocking
});

// Better (with modern async/await)
const data = await fs.promises.readFile('/file');

4. Understand nextTick vs setImmediate

// Use nextTick sparingly - recursive nextTick calls can starve the event loop
process.nextTick(() => {
  // Runs immediately after the current operation, before the event
  // loop is allowed to continue
});

// Use setImmediate to defer execution to the check phase
setImmediate(() => {
  // Runs in the check phase, after the poll phase's I/O callbacks
});

5. Monitor Event Loop Lag

let start = process.hrtime();

setInterval(() => {
  const delta = process.hrtime(start);
  const millis = delta[0] * 1e3 + delta[1] / 1e6;

  // The interval is scheduled every 1000ms, so anything above that
  // is time the event loop spent lagging
  const lag = Math.max(0, millis - 1000);
  console.log(`Event loop lag: ${lag.toFixed(1)}ms`);

  start = process.hrtime();
}, 1000);

Performance Implications

Event Loop Blocking

When the event loop is blocked by synchronous operations:

  • No new connections can be accepted
  • No callbacks can be executed
  • No timers can fire
  • Application becomes unresponsive
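
A minimal sketch that makes the effect visible: a 2-second busy loop stops the interval below from firing on schedule.

setInterval(() => console.log('tick', new Date().toISOString()), 100);

setTimeout(() => {
  // Busy-wait for ~2 seconds: no ticks, timers, or I/O callbacks can
  // run while this synchronous loop holds the event loop
  const end = Date.now() + 2000;
  while (Date.now() < end) { /* blocking */ }
}, 500);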

Thread Pool Saturation

When all thread pool threads are busy:

  • New file system operations queue up
  • DNS lookups are delayed
  • Crypto operations are postponed
  • Response times increase

Debugging and Monitoring

Tools for Debugging

// Use async_hooks to trace async operations as they are created
const async_hooks = require('async_hooks');
const fs = require('fs');

// Avoid console.log() inside async_hooks callbacks: it creates async
// resources and can recurse. fs.writeSync() is safe here.
const hook = async_hooks.createHook({
  init(asyncId, type, triggerAsyncId) {
    fs.writeSync(process.stdout.fd, `Async operation: ${type}\n`);
  }
});

hook.enable();

Monitoring Event Loop Health

const { performance, PerformanceObserver } = require('perf_hooks');

const obs = new PerformanceObserver((items) => {
  items.getEntries().forEach((entry) => {
    console.log(`${entry.name}: ${entry.duration}ms`);
  });
});

obs.observe({ entryTypes: ['measure'] });

performance.mark('A');
// ... do some work
performance.mark('B');
performance.measure('A to B', 'A', 'B');

Conclusion

libuv is the powerhouse behind Node.js's asynchronous capabilities. By understanding how it works:

  • You can write more efficient Node.js applications
  • You can avoid common pitfalls like blocking the event loop
  • You can make informed decisions about threading and concurrency
  • You can debug performance issues more effectively

Key takeaways:

  1. Network I/O is truly asynchronous (no thread pool)
  2. File system operations use the thread pool
  3. The event loop has distinct phases with specific purposes
  4. process.nextTick() and microtasks run between phases
  5. Blocking the event loop severely impacts performance
  6. Thread pool size can be configured based on workload

Understanding libuv's architecture helps you build scalable, high-performance Node.js applications that can handle thousands of concurrent connections efficiently.
