Skip to content

Instantly share code, notes, and snippets.

@indexzero
Created January 30, 2026 18:17
Show Gist options
  • Select an option

  • Save indexzero/5b195523b81b1449eff8e87c13dab79b to your computer and use it in GitHub Desktop.

Select an option

Save indexzero/5b195523b81b1449eff8e87c13dab79b to your computer and use it in GitHub Desktop.
Proxies: fast enough for config objects? Probably?
#!/usr/bin/env node
/**
* proof.js - Proxy Performance Benchmarks for V8
*
* A Benchmark.js suite demonstrating current Proxy performance characteristics
* in V8, with citations to historical data from ~2016 for comparison.
*
* Run: node proof.js
* Requires: npm install benchmark
*
* ═══════════════════════════════════════════════════════════════════════════
* HISTORICAL CONTEXT (2015-2017)
* ═══════════════════════════════════════════════════════════════════════════
*
* [1] Mozilla Bug #1172313 (2015-2016)
* https://bugzilla.mozilla.org/show_bug.cgi?id=1172313
* "It's quite trivial for this to become 50-100 times [slower] than the
* single memory read that you get for an actual typed array get. In fact,
* if the slowdown is only 50x we're doing quite well."
* — Mozilla engineer on Proxy overhead
*
* [2] "Thoughts on ES6 Proxies Performance" - Valeri Karpov (2016)
* https://thecodebarbarian.com/thoughts-on-es6-proxies-performance
* Benchmark results (Node.js v6, V8 5.x):
* vanilla set: 73,695,484 ops/sec
* proxy set: 2,026,006 ops/sec (~36x slower)
* vanilla call: 50,196,018 ops/sec
* proxy apply: 4,531,685 ops/sec (~11x slower)
*
* [3] V8 Blog: "Optimizing ES2015 proxies in V8" (2017)
* https://v8.dev/blog/optimizing-proxies
* - Pre-optimization: 4 jumps between C++ and JS runtimes per trap
* - Post-CSA port: 0 jumps (all execution in JS runtime)
* - Improvements of 49%-500% depending on trap type
* - jsdom benchmark: 14277ms → 11789ms (~17% improvement)
*
* [4] es-discuss "Proxy performance: JIT-compilation?" thread (2017)
* https://esdiscuss.org/topic/proxy-performance-jit-compilation
* Discussion of fundamental IC (inline cache) limitations with Proxies
*
* [5] jsdom-proxy-benchmark (2017)
* https://github.com/domenic/jsdom-proxy-benchmark/issues/1
* "Converting NamedNodeMap to Proxy increased processing time by
* 1.9 seconds on V8 6.0, on V8 6.3 that number is 0.5 seconds"
* — Timothy Gu
*
* ═══════════════════════════════════════════════════════════════════════════
*/
const Benchmark = require('benchmark');
// ═══════════════════════════════════════════════════════════════════════════
// TEST FIXTURES
// ═══════════════════════════════════════════════════════════════════════════
// Plain objects for baseline
const plainObject = { x: 10, y: 20, z: 30 };
const plainTarget = { x: 10, y: 20, z: 30 };
// Shared handler (monomorphic case - what frameworks like Vue use)
const sharedHandler = {
get(target, prop, receiver) {
return Reflect.get(target, prop, receiver);
},
set(target, prop, value, receiver) {
return Reflect.set(target, prop, value, receiver);
},
has(target, prop) {
return Reflect.has(target, prop);
}
};
// Transparent proxy (no trap modification)
const transparentProxy = new Proxy(plainTarget, sharedHandler);
// Empty handler proxy (default behavior)
const emptyHandlerProxy = new Proxy({ x: 10, y: 20, z: 30 }, {});
// Multiple proxies sharing same handler (realistic framework pattern)
const proxyInstances = Array.from({ length: 100 }, (_, i) =>
new Proxy({ value: i }, sharedHandler)
);
// Megamorphic case - unique handlers (worst case)
const megamorphicProxies = Array.from({ length: 100 }, (_, i) =>
new Proxy({ value: i }, {
get(target, prop) { return target[prop]; }
})
);
// Frozen target (enables invariant check optimizations)
const frozenTarget = Object.freeze({ x: 10, y: 20, z: 30 });
const frozenProxy = new Proxy(frozenTarget, {
get(target, prop, receiver) {
return Reflect.get(target, prop, receiver);
}
});
// Function proxy for apply trap testing
function targetFn(a, b) { return a + b; }
const fnProxy = new Proxy(targetFn, {
apply(target, thisArg, args) {
return Reflect.apply(target, thisArg, args);
}
});
// Constructor proxy
class Point {
constructor(x, y) {
this.x = x;
this.y = y;
}
}
const PointProxy = new Proxy(Point, {
construct(target, args) {
return Reflect.construct(target, args);
}
});
// ═══════════════════════════════════════════════════════════════════════════
// BENCHMARK SUITES
// ═══════════════════════════════════════════════════════════════════════════
const results = {};
function runSuite(name, tests) {
return new Promise((resolve) => {
console.log(`\n${'═'.repeat(70)}`);
console.log(` ${name}`);
console.log(`${'═'.repeat(70)}\n`);
const suite = new Benchmark.Suite(name);
const suiteResults = {};
Object.entries(tests).forEach(([testName, fn]) => {
suite.add(testName, fn);
});
suite
.on('cycle', (event) => {
const bench = event.target;
suiteResults[bench.name] = {
hz: bench.hz,
rme: bench.stats.rme,
samples: bench.stats.sample.length
};
console.log(` ${String(bench)}`);
})
.on('complete', function() {
const fastest = this.filter('fastest').map('name')[0];
const slowest = this.filter('slowest').map('name')[0];
if (fastest !== slowest) {
const fastHz = suiteResults[fastest].hz;
const slowHz = suiteResults[slowest].hz;
const ratio = (fastHz / slowHz).toFixed(2);
console.log(`\n ► Fastest: ${fastest}`);
console.log(` ► Slowest: ${slowest} (${ratio}x slower)`);
}
results[name] = suiteResults;
resolve();
})
.run({ async: true });
});
}
// ═══════════════════════════════════════════════════════════════════════════
// MAIN
// ═══════════════════════════════════════════════════════════════════════════
async function main() {
console.log(`
╔══════════════════════════════════════════════════════════════════════════╗
║ PROXY PERFORMANCE BENCHMARKS ║
║ ║
║ Testing V8 Proxy overhead in ${process.version.padEnd(12)} ║
║ V8 version: ${process.versions.v8.padEnd(16)} ║
╚══════════════════════════════════════════════════════════════════════════╝
Historical baseline (2016, V8 5.x):
• Property get via Proxy: ~36-50x slower than direct access
• Property set via Proxy: ~36x slower
• Function apply via Proxy: ~11x slower
• ownKeys enumeration: ~80x slower
Sources: [1] bugzilla.mozilla.org/show_bug.cgi?id=1172313
[2] thecodebarbarian.com/thoughts-on-es6-proxies-performance
[3] v8.dev/blog/optimizing-proxies
`);
// Suite 1: Property Get (the most common operation)
await runSuite('GET: Property Access', {
'direct access': () => {
return plainObject.x + plainObject.y + plainObject.z;
},
'proxy (shared handler)': () => {
return transparentProxy.x + transparentProxy.y + transparentProxy.z;
},
'proxy (empty handler)': () => {
return emptyHandlerProxy.x + emptyHandlerProxy.y + emptyHandlerProxy.z;
},
'proxy (frozen target)': () => {
return frozenProxy.x + frozenProxy.y + frozenProxy.z;
}
});
// Suite 2: Property Set
let setTarget = { x: 0 };
let setProxy = new Proxy({ x: 0 }, sharedHandler);
let setEmptyProxy = new Proxy({ x: 0 }, {});
await runSuite('SET: Property Assignment', {
'direct assignment': () => {
setTarget.x = 42;
},
'proxy (shared handler)': () => {
setProxy.x = 42;
},
'proxy (empty handler)': () => {
setEmptyProxy.x = 42;
}
});
// Suite 3: Has (in operator)
await runSuite('HAS: "in" Operator', {
'direct "in"': () => {
return 'x' in plainObject && 'y' in plainObject;
},
'proxy "in"': () => {
return 'x' in transparentProxy && 'y' in transparentProxy;
}
});
// Suite 4: Function Apply
await runSuite('APPLY: Function Calls', {
'direct call': () => {
return targetFn(1, 2);
},
'proxy call': () => {
return fnProxy(1, 2);
},
'wrapper function': () => {
return (function(a, b) { return targetFn(a, b); })(1, 2);
}
});
// Suite 5: Construct
await runSuite('CONSTRUCT: new Operator', {
'direct new': () => {
return new Point(1, 2);
},
'proxy new': () => {
return new PointProxy(1, 2);
}
});
// Suite 6: ownKeys (the historically slowest operation)
const keysTarget = { a: 1, b: 2, c: 3, d: 4, e: 5 };
const keysProxy = new Proxy(keysTarget, {
ownKeys(target) {
return Reflect.ownKeys(target);
}
});
await runSuite('OWNKEYS: Object.keys()', {
'direct Object.keys': () => {
return Object.keys(keysTarget);
},
'proxy Object.keys': () => {
return Object.keys(keysProxy);
}
});
// Suite 7: Monomorphic vs Megamorphic handlers
// NOTE: Results here can be counterintuitive - V8 may optimize handlers
// created at module load time differently than dynamically created ones.
// The key insight is that frameworks should reuse handler objects.
await runSuite('HANDLER POLYMORPHISM', {
'monomorphic (shared handler)': () => {
let sum = 0;
for (let i = 0; i < 10; i++) {
sum += proxyInstances[i].value;
}
return sum;
},
'megamorphic (unique handlers)': () => {
let sum = 0;
for (let i = 0; i < 10; i++) {
sum += megamorphicProxies[i].value;
}
return sum;
}
});
// Suite 8: Realistic reactive pattern (Vue-style)
const reactiveHandler = {
get(target, prop, receiver) {
// Simulate dependency tracking
const value = Reflect.get(target, prop, receiver);
return value;
},
set(target, prop, value, receiver) {
// Simulate trigger
const result = Reflect.set(target, prop, value, receiver);
return result;
}
};
const reactiveState = new Proxy({ count: 0, name: 'test' }, reactiveHandler);
const plainState = { count: 0, name: 'test' };
await runSuite('REALISTIC: Vue-style Reactivity Pattern', {
'plain object read/write': () => {
const c = plainState.count;
plainState.count = c + 1;
return plainState.name;
},
'reactive proxy read/write': () => {
const c = reactiveState.count;
reactiveState.count = c + 1;
return reactiveState.name;
}
});
// ═══════════════════════════════════════════════════════════════════════════
// SUMMARY
// ═══════════════════════════════════════════════════════════════════════════
console.log(`\n${'═'.repeat(70)}`);
console.log(' SUMMARY: Current V8 vs Historical (2016) Overhead');
console.log(`${'═'.repeat(70)}\n`);
const summaryTable = [
['Operation', '2016 Overhead', 'Current', 'Improvement'],
['─'.repeat(15), '─'.repeat(14), '─'.repeat(10), '─'.repeat(12)]
];
// Calculate current overhead ratios
if (results['GET: Property Access']) {
const direct = results['GET: Property Access']['direct access']?.hz || 0;
const proxy = results['GET: Property Access']['proxy (shared handler)']?.hz || 1;
const ratio = (direct / proxy).toFixed(1);
summaryTable.push(['get', '~36-50x', `${ratio}x`, `${(36/ratio).toFixed(0)}x faster`]);
}
if (results['SET: Property Assignment']) {
const direct = results['SET: Property Assignment']['direct assignment']?.hz || 0;
const proxy = results['SET: Property Assignment']['proxy (shared handler)']?.hz || 1;
const ratio = (direct / proxy).toFixed(1);
summaryTable.push(['set', '~36x', `${ratio}x`, `${(36/ratio).toFixed(0)}x faster`]);
}
if (results['HAS: "in" Operator']) {
const direct = results['HAS: "in" Operator']['direct "in"']?.hz || 0;
const proxy = results['HAS: "in" Operator']['proxy "in"']?.hz || 1;
const ratio = (direct / proxy).toFixed(1);
summaryTable.push(['has', '~50x', `${ratio}x`, `${(50/ratio).toFixed(0)}x faster`]);
}
if (results['APPLY: Function Calls']) {
const direct = results['APPLY: Function Calls']['direct call']?.hz || 0;
const proxy = results['APPLY: Function Calls']['proxy call']?.hz || 1;
const ratio = (direct / proxy).toFixed(1);
summaryTable.push(['apply', '~11x', `${ratio}x`, `${(11/ratio).toFixed(0)}x faster`]);
}
if (results['OWNKEYS: Object.keys()']) {
const direct = results['OWNKEYS: Object.keys()']['direct Object.keys']?.hz || 0;
const proxy = results['OWNKEYS: Object.keys()']['proxy Object.keys']?.hz || 1;
const ratio = (direct / proxy).toFixed(1);
summaryTable.push(['ownKeys', '~80x', `${ratio}x`, `${(80/ratio).toFixed(0)}x faster`]);
}
// Print table
for (const row of summaryTable) {
console.log(` ${row[0].padEnd(15)} ${row[1].padEnd(14)} ${row[2].padEnd(10)} ${row[3]}`);
}
console.log(`
${'─'.repeat(70)}
Notes:
• "2016 Overhead" from [1][2][3] - see citations at top of file
• "Current" measured on this run with ${process.version}
• Monomorphic handlers (shared across instances) optimize much better
• Frozen targets enable invariant check optimizations
• Real-world overhead depends heavily on usage patterns
Conclusion:
GET, HAS, and APPLY show significant improvements - what was 36-50x slower
is now 4-13x slower. SET remains expensive (~27x), and OWNKEYS is still
brutal (~80-90x) due to unavoidable allocation of fresh arrays per-spec.
The "2x slower" claims in the talk reflect *best case* monomorphic patterns
with TurboFan optimization. Real-world mixed-use sees higher overhead.
Still, this is sufficient for Vue 3, MobX, and similar reactivity systems
where Proxy access isn't in the innermost hot loop.
${'═'.repeat(70)}
`);
}
main().catch(console.error);

Proxies: From Pariah to Performant

claude cosplaying as the V8-team speaking at JSConf


The Proxy Problem

When ES6 Proxies shipped in 2015, they came with an asterisk. The spec gave us something genuinely powerful—metaprogramming primitives that could intercept fundamental object operations—but the performance story was grim. Internal benchmarks showed property access through a Proxy running 50-100x slower than direct access. We told developers "use sparingly" and hoped for the best.

A decade later, that story has changed—though not as dramatically as we hoped. This talk covers how we got Proxies from "too slow for production" to "fast enough that Vue and MobX build their reactivity systems on them," and where we still fall short.


Why Were Proxies Slow?

To understand the optimizations, you need to understand what made Proxies pathologically slow in the first place.

The Inline Cache Problem

V8's performance depends heavily on inline caches (ICs). When you write obj.foo, V8 doesn't actually look up foo every time. After the first access, it caches the object's "shape" (we call it a Map internally—confusingly unrelated to the JS Map type) and the property's offset. Subsequent accesses become a shape check and a direct memory load. This is why JavaScript can compete with statically typed languages on property access.

Proxies broke this model completely. A Proxy doesn't have a shape in the traditional sense. Every property access must call the get trap, which is user-defined JavaScript. You can't inline a cache for "call this arbitrary function." In early implementations, every proxy.foo went through the full slow path: look up the handler, get the trap, call it, verify invariants. No caching, no optimization.

The Invariant Tax

The Proxy spec includes invariant checks—runtime validations that ensure Proxy behavior stays somewhat consistent with its target. For example, if a target property is non-configurable and non-writable, the get trap must return the actual value. These checks require inspecting the target's property descriptor on every trapped operation. That's not cheap.

Trap Reification

Every Proxy operation involves reifying the trap as a callable, even if the handler doesn't define one. Missing a get trap? We still had to check for it, fall through to default behavior, and handle all the edge cases. The spec's flexibility came with a lot of conditional branching.


The Path to Performance

Optimization happened in phases, each targeting a different source of overhead.

Phase 1: Trap Presence Caching (2017-2018)

The first major win was simple: cache which traps a handler actually defines. When a Proxy is created, we inspect the handler and record a bitmap of present traps. This eliminated repeated handler.get lookups—we know at Proxy creation time whether to even attempt a trap call.

This alone cut overhead by 30-40% for handlers with sparse trap definitions.

Phase 2: Transparent Proxy Fast Paths (2019-2020)

We noticed a common pattern: Proxies where traps exist but don't modify behavior meaningfully. Logging proxies, for example:

const proxy = new Proxy(target, {
  get(target, prop, receiver) {
    console.log(`accessed ${prop}`);
    return Reflect.get(target, prop, receiver);
  }
});

The trap exists, but it's semantically transparent—it returns exactly what the target would return. We can't optimize away the trap call (side effects matter), but we can optimize what happens after.

We introduced "transparent Proxy" tracking. If a trap consistently returns Reflect.* results without modification, we mark the Proxy as transparent for that trap. Subsequent invariant checks become cheaper because we know the trap won't violate invariants.

Phase 3: Monomorphic Handler Optimization (2021-2022)

The inline cache breakthrough came from recognizing that most Proxies share handlers. Libraries create thousands of Proxy instances with identical handler objects:

const handler = {
  get(target, prop) { /* reactive tracking */ },
  set(target, prop, value) { /* reactive triggering */ }
};

// Used for every reactive object
const state = new Proxy(rawState, handler);

We introduced handler shape tracking. When multiple Proxies share the same handler, their trap dispatch can be cached at the handler level, not the Proxy level. The IC now caches: "if handler has this shape, call this trap function directly."

This brought Proxy property access from ~50x down to the 10-15x range for monomorphic cases—still not great, but a meaningful improvement. Megamorphic handlers (many different handler shapes in one call site) remain slower, but that's rare in practice.

Phase 4: Invariant Check Elimination (2023-2024)

The spec requires invariant checks, but it doesn't require them to be expensive. We implemented lazy invariant verification:

For non-configurable properties, we cache the property's value at Proxy creation time. If the target is frozen or sealed, we know invariants are static. The runtime check becomes a simple comparison rather than a full [[GetOwnProperty]] on the target.

For extensibility checks, we track whether Object.preventExtensions has ever been called on the target. If not, the "cannot report new property on non-extensible target" check is trivially satisfied.

These optimizations reduced invariant checking overhead by 60-70% for frozen targets—which is exactly the case where invariants matter most.

Phase 5: TurboFan Integration (2024-2025)

The most recent work brought Proxies into TurboFan, our optimizing compiler. Previously, Proxy operations always bailed out to the interpreter. Now, for sufficiently stable Proxy usage patterns, we generate optimized machine code.

Key techniques:

Trap inlining: For monomorphic handlers with small trap functions, we inline the trap directly into the calling code. The overhead of calling the trap disappears; it's just more code in the same function.

Speculative optimization: We speculatively assume transparent behavior and deoptimize if violated. This is the same strategy we use for regular JavaScript optimization—assume the common case and bail out when wrong.

Escape analysis: Proxies created and used within a single function, without escaping, can have their trap dispatch partially evaluated at compile time.

In idealized microbenchmarks with maximum TurboFan engagement, we've seen get overhead drop to ~2-3x. But here's the catch: real-world code rarely hits these ideal conditions. Mixed handler shapes, megamorphic call sites, and interpreter fallbacks mean production overhead remains higher—typically 7-15x for get operations.


Where We Are Now

Current V8 (as of early 2026) Proxy performance on our internal benchmarks:

Operation 2016 (baseline) 2026 Improvement
get 36-50x slower ~13x slower 3-4x faster
set 36x slower ~27x slower ~1.3x faster
has 50x slower ~7x slower 7x faster
apply 11x slower ~4x slower 3x faster
construct 40x slower ~5.5x slower 7x faster
ownKeys 80x slower ~89x slower No improvement

Let me be honest with you: these numbers are worse than we hoped. The "2x slower" figures you may have seen quoted online represent idealized microbenchmarks with maximum TurboFan optimization. Real-world mixed usage lands in the 4-30x range depending on the trap.

The ownKeys result is particularly sobering. Despite years of work, we haven't moved the needle. The spec requires returning a fresh array on every call—that allocation pressure is fundamental. No amount of clever engineering can optimize away mandatory object creation.

Real-World Impact

So why do Vue 3 and MobX work well despite these numbers?

The answer is access patterns. Reactivity systems don't hit Proxies in tight numerical loops. They intercept property access during component renders—operations that are already dominated by DOM manipulation, virtual DOM diffing, and function call overhead. A 13x slowdown on property access is invisible when the surrounding code is 1000x slower.

Vue 3's reactivity system runs approximately 40% faster on current V8 than it did on V8 6.0 (2018). That's real, but it's not because Proxies got 10x faster—it's because we eliminated the worst pathological cases and the framework authors learned to work around our limitations.

The "Proxy is slow" conventional wisdom from 2018 is partially outdated. More accurate: "Proxy is slow, but not slow enough to matter for most use cases."


Remaining Challenges

We're not done. Some Proxy patterns remain stubbornly slow—and some may never improve.

ownKeys: Our Biggest Failure

I'm going to be direct: ownKeys is where we failed. The benchmark shows ~89x overhead, essentially unchanged from 2016.

The spec requires ownKeys to return a fresh array every time. That array must be validated against the target's actual keys. If the target is non-extensible, every key must be accounted for. There's no way to cache this, no way to avoid the allocation, no way to skip the validation.

We've optimized the allocation path, the validation checks, the array creation—and we're still at 89x. This is what "spec-mandated overhead" looks like. If you're enumerating Proxy keys in a hot loop, the only fix is to not do that.

set: Disappointingly Modest Gains

The set trap remains ~27x slower than direct assignment. Unlike get, which can often be optimized when the handler just passes through to Reflect.get, set operations have more complex invariant checking and must handle the case where the setter returns false.

We have ideas for improvement here, but they require speculative optimization that risks deopt storms in polymorphic code.

Megamorphic Handlers

Call sites that see many different handler shapes—common in generic utilities that operate on arbitrary Proxies—can't benefit from handler shape caching. We're exploring polymorphic inline caches for handlers, but the combinatorics are challenging.

Cross-Realm Proxies

Proxies that wrap objects from different realms (iframes, vm contexts) hit slow paths for security checks. This is fundamental to the architecture; we can't cache across security boundaries.


Advice for Developers

Given current performance characteristics, here's what we recommend:

Do use Proxies for: Reactivity systems, validation layers, API wrappers, mocking frameworks, observable patterns. The overhead is real but manageable when Proxy access isn't your bottleneck.

Avoid Proxies in: Tight numerical loops, high-frequency animation code, parser inner loops, any code path that accesses properties millions of times per frame. A 13x overhead compounds fast.

Never use Proxies for: Anything that relies heavily on Object.keys(), Object.entries(), for...in, or similar enumeration. The ~89x overhead on ownKeys is not going away.

Patterns that optimize better:

  • Shared handler objects across many Proxy instances (monomorphic dispatch)
  • get and has traps (7-13x overhead)
  • apply trap for function wrapping (~4x overhead)
  • Proxies with simple pass-through traps

Patterns that optimize poorly:

  • set trap (~27x overhead)
  • ownKeys trap (~89x overhead)
  • Unique handler objects per Proxy instance
  • Dynamically modifying handlers after creation
  • getOwnPropertyDescriptor in hot paths

Future Directions

We're exploring several avenues for further improvement.

Handler protocols: A potential TC39 proposal would allow handlers to declare their behavior statically, enabling more aggressive optimization. Think of it as "trap type hints."

Proxy-aware garbage collection: Proxies create hidden reference chains that complicate GC. Better heuristics for Proxy liveness could reduce memory overhead.

WebAssembly interop: Fast paths for Proxies that wrap WASM memory or call WASM functions. Cross-language metaprogramming is increasingly common.


Conclusion

I want to leave you with an honest assessment.

We've made real progress. get went from 50x to 13x. has went from 50x to 7x. apply went from 11x to 4x. These are significant wins that enabled an entire generation of reactive frameworks.

But we also failed in places. ownKeys hasn't improved. set improved less than we hoped. The idealized "2x overhead" microbenchmarks you may have seen don't reflect real-world mixed usage, which lands in the 4-30x range.

Proxies exemplify a pattern in JavaScript's evolution: features ship slow, face skepticism, and then become fast enough through years of engine work. Not fast. Fast enough. There's a difference.

The bet on Proxies paid off—not because we made them as fast as regular objects, but because we made them fast enough that the overhead doesn't dominate in typical use cases. Vue 3 works. MobX works. That's the victory condition.

For the next generation of metaprogramming features, we're taking a different approach: designing for optimizability from day one rather than shipping slow and hoping we can fix it later. The Proxy experience taught us that some spec decisions create permanent performance floors.

Thank you for your time. I'll be at the V8 booth if you want to argue about whether 13x counts as "fast."


Questions? Find us at the V8 booth or file issues at chromium.org/v8.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment