claude cosplaying as the V8-team speaking at JSConf
When ES6 Proxies shipped in 2015, they came with an asterisk. The spec gave us something genuinely powerful—metaprogramming primitives that could intercept fundamental object operations—but the performance story was grim. Internal benchmarks showed property access through a Proxy running 50-100x slower than direct access. We told developers "use sparingly" and hoped for the best.
A decade later, that story has changed—though not as dramatically as we hoped. This talk covers how we got Proxies from "too slow for production" to "fast enough that Vue and MobX build their reactivity systems on them," and where we still fall short.
To understand the optimizations, you need to understand what made Proxies pathologically slow in the first place.
V8's performance depends heavily on inline caches (ICs). When you write obj.foo, V8 doesn't actually look up foo every time. After the first access, it caches the object's "shape" (we call it a Map internally—confusingly unrelated to the JS Map type) and the property's offset. Subsequent accesses become a shape check and a direct memory load. This is why JavaScript can compete with statically typed languages on property access.
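As a minimal illustration of what the IC sees at a single call site (the comments describe typical behavior, not a guarantee):

```js
// getFoo has one property-load site; V8 attaches an inline cache to it.
function getFoo(obj) {
  return obj.foo;
}

getFoo({ foo: 1 });          // first call: full lookup, the IC records the shape
getFoo({ foo: 2 });          // same shape: cheap shape check plus direct load
getFoo({ bar: 0, foo: 3 });  // different shape, so the IC goes polymorphic
```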
Proxies broke this model completely. A Proxy doesn't have a shape in the traditional sense. Every property access must call the get trap, which is user-defined JavaScript. You can't inline a cache for "call this arbitrary function." In early implementations, every proxy.foo went through the full slow path: look up the handler, get the trap, call it, verify invariants. No caching, no optimization.
The Proxy spec includes invariant checks—runtime validations that ensure Proxy behavior stays somewhat consistent with its target. For example, if a target property is non-configurable and non-writable, the get trap must return the actual value. These checks require inspecting the target's property descriptor on every trapped operation. That's not cheap.
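Here's that invariant tripping at runtime; the engine has to read the target's descriptor on the access to catch the lie:

```js
// The target's 'x' is non-writable and non-configurable, so any get
// trap must report its true value.
const target = {};
Object.defineProperty(target, 'x', {
  value: 1, writable: false, configurable: false
});

const p = new Proxy(target, {
  get() { return 2; } // misreports the value
});

p.x; // TypeError: the trap returned 2, but the invariant requires 1
```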
Every Proxy operation must look up the trap on the handler before it can do anything, even when the handler doesn't define one. Missing a get trap? We still had to check for it, fall through to default behavior, and handle all the edge cases. The spec's flexibility came with a lot of conditional branching.
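A fully default Proxy makes the cost concrete:

```js
// An empty handler defines no traps, yet every operation still checks
// for one before falling through to the default behavior.
const p = new Proxy({ x: 1 }, {});

p.x;      // 1: no get trap, so the default [[Get]] forwards to the target
'x' in p; // true: no has trap, so [[HasProperty]] forwards as well
```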
Optimization happened in phases, each targeting a different source of overhead.
The first major win was simple: cache which traps a handler actually defines. When a Proxy is created, we inspect the handler and record a bitmap of present traps. This eliminated repeated handler.get lookups—we know at Proxy creation time whether to even attempt a trap call.
This alone cut overhead by 30-40% for handlers with sparse trap definitions.
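A sketch of the idea in JavaScript (the names here are ours for illustration, not V8 internals):

```js
// Illustrative sketch only; the real implementation is C++ and must also
// invalidate the bitmap if traps are later added to or removed from the
// handler, which is an ordinary mutable object.
const TRAP_BITS = { get: 1 << 0, set: 1 << 1, has: 1 << 2, ownKeys: 1 << 3 };

function trapBitmap(handler) {
  let bits = 0;
  for (const [name, bit] of Object.entries(TRAP_BITS)) {
    if (typeof handler[name] === 'function') bits |= bit;
  }
  return bits;
}

// At dispatch time, a single bit test replaces a handler.get lookup:
//   if (bits & TRAP_BITS.get) callTrap(); else takeDefaultPath();
```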
We noticed a common pattern: Proxies where traps exist but don't modify behavior meaningfully. Logging proxies, for example:
```js
const proxy = new Proxy(target, {
  get(target, prop, receiver) {
    console.log(`accessed ${prop}`);
    return Reflect.get(target, prop, receiver);
  }
});
```

The trap exists, but it's semantically transparent: it returns exactly what the target would return. We can't optimize away the trap call (side effects matter), but we can optimize what happens after.
We introduced "transparent Proxy" tracking. If a trap consistently returns Reflect.* results without modification, we mark the Proxy as transparent for that trap. Subsequent invariant checks become cheaper because we know the trap won't violate invariants.
The inline cache breakthrough came from recognizing that most Proxies share handlers. Libraries create thousands of Proxy instances with identical handler objects:
```js
const handler = {
  get(target, prop) { /* reactive tracking */ },
  set(target, prop, value) { /* reactive triggering */ }
};

// Used for every reactive object
const state = new Proxy(rawState, handler);
```

We introduced handler shape tracking. When multiple Proxies share the same handler object, their trap dispatch can be cached at the handler level, not the Proxy level. The IC now caches: "if the handler has this shape, call this trap function directly."
This brought Proxy property access from ~50x down to the 10-15x range for monomorphic cases—still not great, but a meaningful improvement. Megamorphic handlers (many different handler shapes in one call site) remain slower, but that's rare in practice.
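In code, the fast and slow shapes look like this (a sketch; whether dispatch actually stays cached depends on what each call site observes):

```js
// Monomorphic: many Proxies, one shared handler. Call sites see a single
// handler shape and trap dispatch stays cached.
const sharedHandler = { get: (t, prop) => t[prop] };
const a = new Proxy({ x: 1 }, sharedHandler);
const b = new Proxy({ x: 2 }, sharedHandler);

// Megamorphic risk: a fresh handler per Proxy defeats handler-level
// caching even though the trap bodies are identical.
const c = new Proxy({ x: 3 }, { get: (t, prop) => t[prop] });
const d = new Proxy({ x: 4 }, { get: (t, prop) => t[prop] });
```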
The spec requires invariant checks, but it doesn't require them to be expensive. We implemented lazy invariant verification:
For non-configurable properties, we cache the property's value at Proxy creation time. If the target is frozen or sealed, we know invariants are static. The runtime check becomes a simple comparison rather than a full [[GetOwnProperty]] on the target.
For extensibility checks, we track whether Object.preventExtensions has ever been called on the target. If not, the "cannot report new property on non-extensible target" check is trivially satisfied.
These optimizations reduced invariant checking overhead by 60-70% for frozen targets—which is exactly the case where invariants matter most.
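So if you can freeze your target, do it. A sketch of the pattern that benefits:

```js
// Freezing the target pins every descriptor, so per-access invariant
// checks reduce to cheap comparisons.
const config = Object.freeze({ apiUrl: 'https://example.com', retries: 3 });

const accessed = new Set();
const watched = new Proxy(config, {
  get(t, prop, receiver) {
    accessed.add(prop); // side effect: record which keys were read
    return Reflect.get(t, prop, receiver);
  }
});

watched.retries; // 3, via the cheap frozen-target path
```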
The most recent work brought Proxies into TurboFan, our optimizing compiler. Previously, Proxy operations always bailed out to the interpreter. Now, for sufficiently stable Proxy usage patterns, we generate optimized machine code.
Key techniques:
Trap inlining: For monomorphic handlers with small trap functions, we inline the trap directly into the calling code. The overhead of calling the trap disappears; it's just more code in the same function.
Speculative optimization: We speculatively assume transparent behavior and deoptimize if violated. This is the same strategy we use for regular JavaScript optimization—assume the common case and bail out when wrong.
Escape analysis: Proxies created and used within a single function, without escaping, can have their trap dispatch partially evaluated at compile time.
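For example, this shape of code is a candidate (whether it actually optimizes depends on heuristics, so treat this as illustrative):

```js
// The Proxy below is created, used, and dropped inside one function; it
// never escapes, so its trap dispatch can be partially evaluated.
function sumWithDefaults(obj) {
  const withDefaults = new Proxy(obj, {
    get: (t, prop) => (prop in t ? t[prop] : 0)
  });
  return withDefaults.a + withDefaults.b;
}

sumWithDefaults({ a: 1 });       // 1: b falls back to the default 0
sumWithDefaults({ a: 1, b: 2 }); // 3
```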
In idealized microbenchmarks with maximum TurboFan engagement, we've seen get overhead drop to ~2-3x. But here's the catch: real-world code rarely hits these ideal conditions. Mixed handler shapes, megamorphic call sites, and interpreter fallbacks mean production overhead remains higher—typically 7-15x for get operations.
Current V8 (as of early 2026) Proxy performance on our internal benchmarks:
| Operation | 2016 (baseline) | 2026 | Improvement |
|---|---|---|---|
| get | 36-50x slower | ~13x slower | 3-4x faster |
| set | 36x slower | ~27x slower | ~1.3x faster |
| has | 50x slower | ~7x slower | 7x faster |
| apply | 11x slower | ~4x slower | 3x faster |
| construct | 40x slower | ~5.5x slower | 7x faster |
| ownKeys | 80x slower | ~89x slower | None (slightly worse) |
Let me be honest with you: these numbers are worse than we hoped. The "2x slower" figures you may have seen quoted online represent idealized microbenchmarks with maximum TurboFan optimization. Real-world mixed usage lands in the 4-30x range depending on the trap.
The ownKeys result is particularly sobering. Despite years of work, we haven't moved the needle. The spec requires returning a fresh array on every call—that allocation pressure is fundamental. No amount of clever engineering can optimize away mandatory object creation.
So why do Vue 3 and MobX work well despite these numbers?
The answer is access patterns. Reactivity systems don't hit Proxies in tight numerical loops. They intercept property access during component renders—operations that are already dominated by DOM manipulation, virtual DOM diffing, and function call overhead. A 13x slowdown on property access is invisible when the surrounding code is 1000x slower.
Vue 3's reactivity system runs approximately 40% faster on current V8 than it did on V8 6.0 (2018). That's real, but it's not because Proxies got 10x faster—it's because we eliminated the worst pathological cases and the framework authors learned to work around our limitations.
The "Proxy is slow" conventional wisdom from 2018 is partially outdated. More accurate: "Proxy is slow, but not slow enough to matter for most use cases."
We're not done. Some Proxy patterns remain stubbornly slow—and some may never improve.
I'm going to be direct: ownKeys is where we failed. The benchmark shows ~89x overhead, essentially unchanged from 2016.
The spec requires ownKeys to return a fresh array every time. That array must be validated against the target's actual keys. If the target is non-extensible, every key must be accounted for. There's no way to cache this, no way to avoid the allocation, no way to skip the validation.
We've optimized the allocation path, the validation checks, the array creation—and we're still at 89x. This is what "spec-mandated overhead" looks like. If you're enumerating Proxy keys in a hot loop, the only fix is to not do that.
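Or, more precisely: don't enumerate through the Proxy. A sketch of the workaround, assuming the key set is stable:

```js
// Enumerate the raw target (or a cached key list) once, outside the
// loop, instead of the Proxy inside it.
const raw = { a: 1, b: 2, c: 3 };
const proxied = new Proxy(raw, {
  get: (t, prop) => t[prop]
});

const keys = Object.keys(raw); // one ownKeys call, on a plain object
let total = 0;
for (let i = 0; i < 1_000_000; i++) {
  for (const k of keys) total += proxied[k]; // get trap only
}
// Calling Object.keys(proxied) inside the loop would pay the ~89x
// ownKeys cost a million times instead.
```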
The set trap remains ~27x slower than direct assignment. Unlike get, which can often be optimized when the handler just passes through to Reflect.get, set operations carry more complex invariant checks and must handle the case where the trap returns false.
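That return-value check is observable from user code:

```js
'use strict';
// In strict mode, an assignment whose set trap returns a falsy value
// throws, so the engine checks the trap's return on every write.
const p = new Proxy({}, {
  set() { return false; } // reject every write
});

p.x = 1; // TypeError: the set trap returned a falsy value
```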
We have ideas for improvement here, but they require speculative optimization that risks deopt storms in polymorphic code.
Call sites that see many different handler shapes—common in generic utilities that operate on arbitrary Proxies—can't benefit from handler shape caching. We're exploring polymorphic inline caches for handlers, but the combinatorics are challenging.
Proxies that wrap objects from different realms (iframes, vm contexts) hit slow paths for security checks. This is fundamental to the architecture; we can't cache across security boundaries.
Given current performance characteristics, here's what we recommend:
Do use Proxies for: Reactivity systems, validation layers, API wrappers, mocking frameworks, observable patterns. The overhead is real but manageable when Proxy access isn't your bottleneck.
Avoid Proxies in: Tight numerical loops, high-frequency animation code, parser inner loops, any code path that accesses properties millions of times per frame. A 13x overhead compounds fast.
Never use Proxies for: Anything that relies heavily on Object.keys(), Object.entries(), for...in, or similar enumeration. The ~89x overhead on ownKeys is not going away.
Patterns that optimize better:
- Shared handler objects across many Proxy instances (monomorphic dispatch)
- get and has traps (7-13x overhead)
- apply trap for function wrapping (~4x overhead)
- Proxies with simple pass-through traps
Patterns that optimize poorly:
- set trap (~27x overhead)
- ownKeys trap (~89x overhead)
- Unique handler objects per Proxy instance
- Dynamically modifying handlers after creation
- getOwnPropertyDescriptor in hot paths
We're exploring several avenues for further improvement.
Handler protocols: A potential TC39 proposal would allow handlers to declare their behavior statically, enabling more aggressive optimization. Think of it as "trap type hints."
Proxy-aware garbage collection: Proxies create hidden reference chains that complicate GC. Better heuristics for Proxy liveness could reduce memory overhead.
WebAssembly interop: Fast paths for Proxies that wrap WASM memory or call WASM functions. Cross-language metaprogramming is increasingly common.
I want to leave you with an honest assessment.
We've made real progress. get went from 50x to 13x. has went from 50x to 7x. apply went from 11x to 4x. These are significant wins that enabled an entire generation of reactive frameworks.
But we also failed in places. ownKeys hasn't improved. set improved less than we hoped. The idealized "2x overhead" microbenchmarks you may have seen don't reflect real-world mixed usage, which lands in the 4-30x range.
Proxies exemplify a pattern in JavaScript's evolution: features ship slow, face skepticism, and then become fast enough through years of engine work. Not fast. Fast enough. There's a difference.
The bet on Proxies paid off—not because we made them as fast as regular objects, but because we made them fast enough that the overhead doesn't dominate in typical use cases. Vue 3 works. MobX works. That's the victory condition.
For the next generation of metaprogramming features, we're taking a different approach: designing for optimizability from day one rather than shipping slow and hoping we can fix it later. The Proxy experience taught us that some spec decisions create permanent performance floors.
Thank you for your time. I'll be at the V8 booth if you want to argue about whether 13x counts as "fast."
Questions? Find us at the V8 booth or file issues at chromium.org/v8.