What This Guide Covers — and What It Deliberately Skips
Most JavaScript interview guides target beginners: what is a variable, what does typeof return, what is the difference between == and ===. Those questions are entry-level screens. If you are preparing for a mid or senior role, they are not what you will face.
This guide is built from the questions that actually decide outcomes at that level — questions we have seen asked, and asked ourselves, across interviews at companies ranging from early-stage startups to large public technology companies. Every question here is the kind that separates candidates who know the syntax from candidates who understand the runtime. For each question you get:
- What the interviewer is actually testing — the underlying concept, not just the surface answer
- What a mediocre answer sounds like — so you know what to avoid
- What an excellent answer includes — with working, annotated code
- Follow-up traps — the second question that trips people up after the first answer
The 25 questions are grouped into six themes that build on each other: scope and closures → the event loop → the prototype chain → functions and modern patterns → tricky output questions → performance and memory. Work through them in order the first time.
All 25 questions at a glance: closures and their practical use, the Temporal Dead Zone, the this keyword and its traps, the var a = b = 3 gotcha, hoisting precision, the event loop (call stack / microtask / macrotask), async execution order prediction, Promise combinators, async error handling edge cases, closure memory leaks, prototypal inheritance without classes, Object.create vs new vs extends, WeakMap vs Map, currying and partial application, call/apply/bind, Proxy and Reflect, generators vs async/await, debounce and throttle from scratch, typeof null / NaN !== NaN / [] == ![], the closure-in-a-loop bug, mixed microtask/macrotask output, [] + {} vs {} + [], five production memory leak patterns, V8 hidden classes and inline caches, and CommonJS vs ESM tree-shaking.
Scope, Closures & the Execution Context (Q1–5)
Q1. What is a JavaScript closure and what practical problem does it solve?
What the interviewer is testing: Whether you understand closures as a feature you use on purpose, not just a side effect you memorize a definition for.
The mediocre answer: "A closure is when a function remembers the variables from its outer scope even after the outer function has returned."
That is technically correct, but it is pure recitation. It does not demonstrate that you have ever actually used a closure to solve a real problem.
The excellent answer covers three things:
First, the mechanism: every function in JavaScript carries a reference to its lexical environment — the scope where it was defined, not where it is called. That bundled reference is the closure.
Second, the practical use case — data encapsulation without classes:
function createCounter(initialValue = 0) {
  let count = initialValue; // private, not accessible from outside
  return {
    increment() { count++; },
    decrement() { count--; },
    value() { return count; },
  };
}
const counter = createCounter(10);
counter.increment();
counter.increment();
console.log(counter.value()); // 12
console.log(counter.count); // undefined — genuinely private
Third, where closures create bugs — the classic loop trap (covered in Q20).
Follow-up question to expect: "Can closures cause memory leaks?" Yes — if a closure captures a large object and the closure itself is long-lived (attached to a DOM event listener, stored in a module-level Map), the object cannot be garbage-collected. Solution: null out captured references when no longer needed, or use WeakRef/WeakMap.
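A minimal sketch of the "null out captured references" fix (names are illustrative): keep only the small derived value you need, then drop the reference to the large object before returning the closure.

```javascript
function makeHandler() {
  let bigBuffer = new Array(1_000_000).fill(0); // large object the closure would otherwise retain
  const summary = bigBuffer.length;             // keep only the small value we actually need
  bigBuffer = null;                             // released: the closure now captures only `summary`
  return function onClick() {
    return `buffer had ${summary} entries`;
  };
}

const handler = makeHandler(); // long-lived, but retains almost nothing beyond `summary`
```

Even if `handler` lives for the lifetime of the page, the million-entry array is collectible as soon as `makeHandler` returns.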
Q2. What is the Temporal Dead Zone (TDZ) in JavaScript?
What the interviewer is testing: Do you understand why let and const were designed differently from var, not just that they are "block-scoped."
The precise answer: All three declarations (var, let, const) are hoisted — the JavaScript engine records their existence before executing any code. But only var is initialized to undefined at hoist time. let and const are hoisted but left uninitialized. The period between the start of the enclosing block and the declaration line is the Temporal Dead Zone (TDZ). Any access inside the TDZ throws a ReferenceError.
// var: hoisted AND initialized to undefined
console.log(x); // undefined (no error)
var x = 5;
// let: hoisted but NOT initialized — TDZ in effect
console.log(y); // ReferenceError: Cannot access 'y' before initialization
let y = 5;
Why this matters in practice: The TDZ was intentional. It prevents a whole class of bugs where you accidentally read a variable before it is meaningfully set. If you read var before its assignment, you silently get undefined — a value that looks valid but is not. The TDZ makes that mistake loud, not silent.
A subtler TDZ trap:
let x = 'outer';
{
  // TDZ starts here — 'x' inside this block is hoisted but not initialized
  console.log(x); // ReferenceError — NOT 'outer'
  let x = 'inner';
}
Many developers expect this to print 'outer' because the inner let x hasn't been reached yet. It doesn't — the inner declaration shadows the outer one from the start of the block, putting the inner x in TDZ.
Q3. How does the this keyword work in JavaScript — and what are its common traps?
What the interviewer is testing: Whether you can predict this in real code, not just recite rules.
The five rules, in priority order:
- new binding: this is the newly created object
- Explicit binding: call(), apply(), bind() set this to the first argument
- Implicit binding: obj.method() — this is obj
- Default binding: standalone call in non-strict mode → global object; in strict mode → undefined
- Arrow functions: no own this — inherits from the enclosing lexical scope at definition time
const obj = {
  name: 'Alice',
  greetRegular: function() {
    console.log(this.name); // 'Alice' — implicit binding
  },
  greetArrow: () => {
    console.log(this.name); // undefined — arrow uses lexical this (module/global scope)
  },
  greetDelayed: function() {
    setTimeout(function() {
      console.log(this.name); // undefined — default binding (strict) / global (non-strict)
    }, 100);
    setTimeout(() => {
      console.log(this.name); // 'Alice' — arrow captures `this` from greetDelayed's scope
    }, 100);
  },
};
The trap that gets people: Extracting a method loses its context.
const greet = obj.greetRegular;
greet(); // undefined — no longer called as obj.greetRegular, so implicit binding is gone
This is why React class component methods historically needed .bind(this) in the constructor. Arrow function class fields solve it permanently: the function is created once per instance during construction, with this lexically bound.
class Button {
  // Arrow class field — `this` is always the Button instance
  handleClick = () => {
    console.log(this.label); // safe, no bind needed
  };
  constructor(label) {
    this.label = label;
  }
}
const btn = new Button('Submit');
document.addEventListener('click', btn.handleClick); // this stays bound
One more trap to mention: In browsers, the default binding in non-strict mode binds this to window. In Node.js, it binds to global. Modern code uses globalThis, which normalizes across environments (browser, Node.js, web workers) — but be aware that losing this and accidentally reading from the global object is a silent, hard-to-debug failure mode.
Q4. What does var a = b = 3 actually do, and why is it dangerous?
What the interviewer is testing: Whether you understand variable declaration vs. assignment, and the global scope pollution trap.
(function() {
  var a = b = 3;
})();
console.log(typeof a); // 'undefined' — a is locally scoped
console.log(typeof b); // 'number' — b leaked to global scope!
Why: var a = b = 3 is parsed right-to-left as b = 3 (assignment, no declaration — creates a global) then var a = b (declares a locally). The var keyword only applies to a, not b.
In strict mode ('use strict'), the undeclared assignment to b throws a ReferenceError. This is one of the core reasons strict mode exists.
Correct way to declare both locally:
var b = 3, a = b; // or
var a, b;
b = 3;
a = b;
Q5. How does hoisting work for var, let, const, and function declarations?
What the interviewer is testing: Precision. Many developers know "hoisting exists" but have a fuzzy mental model of what actually gets hoisted and how.
Precise breakdown:
- var — hoisted and initialized to undefined
- let / const — hoisted but uninitialized (TDZ until declaration line)
- Function declarations — hoisted completely: both name and body available at top of scope
- Function expressions (var fn = function(){}) — only the var is hoisted (as undefined), not the function body
// Function declaration — works anywhere in scope
console.log(add(2, 3)); // 5
function add(a, b) { return a + b; }
// Function expression — does NOT work before the assignment
console.log(multiply(2, 3)); // TypeError: multiply is not a function
var multiply = function(a, b) { return a * b; };
Why function declarations are fully hoisted: It allows mutually recursive functions to be defined in any order. The engine does a first pass before execution, registering all function declarations in scope.
Practical advice for interviews: Always write functions before you call them. This makes code readable and avoids relying on hoisting behavior, which is a source of subtle bugs.
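The mutual-recursion point is easy to demonstrate: both declarations are registered in a first pass before any code runs, so either function can call the other regardless of source order (isEven/isOdd are illustrative names):

```javascript
// Callable before either declaration appears in the source
console.log(isEven(10)); // true
console.log(isOdd(7));   // true

function isEven(n) { return n === 0 ? true : isOdd(n - 1); }
function isOdd(n)  { return n === 0 ? false : isEven(n - 1); }
```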
The Event Loop & Async JavaScript (Q6–10)
Q6. How does the JavaScript event loop work? Explain the call stack, microtask queue, and macrotask queue.
What the interviewer is testing: This is the single most important question in JavaScript interviews at the mid-to-senior level. It underpins every async behavior you will ever debug.
The mental model to communicate:
JavaScript is single-threaded — one call stack, executing one thing at a time. The event loop is the mechanism that makes it appear concurrent. Here is the execution order:
- Call stack: Synchronous code runs here. Functions are pushed and popped. Nothing async can run while this is non-empty.
- Microtask queue: Filled by resolved Promises (.then, .catch), queueMicrotask(), and MutationObserver. Drained completely after every task, before yielding to the macrotask queue.
- Macrotask queue (task queue): Filled by setTimeout, setInterval, I/O callbacks, setImmediate (Node.js). One task is dequeued per event loop tick.
The key rule: After each macrotask completes (including the initial script execution), the engine drains the entire microtask queue before taking the next macrotask. Microtasks always come first.
console.log('1');
setTimeout(() => console.log('2'), 0);
Promise.resolve().then(() => console.log('3'));
console.log('4');
// Output: 1, 4, 3, 2
// Sync runs first (1, 4), then microtasks (3), then macrotasks (2)
What interviewers want to see you know beyond the basics:
- Microtask starvation is real: A while (true) { await Promise.resolve(); } loop will freeze the browser tab permanently because the microtask queue never empties, so the browser never gets a turn to paint a frame. The rendering pipeline sits between macrotasks — it cannot interrupt a microtask drain.
- await is microtask sugar: Every await suspends the async function and schedules its continuation as a microtask. Two consecutive awaits introduce two microtask "ticks" of delay, which matters when reasoning about interleaving.
- Node.js priority order: process.nextTick() callbacks drain first (before other microtasks), then Promise callbacks, then I/O callbacks, then setTimeout/setImmediate. This difference from browsers has caused real bugs in code that runs in both environments.
Q7. What is the output order of mixed Promise, async/await, and setTimeout code?
What the interviewer is testing: Can you trace execution order through mixed async code without running it?
async function main() {
  console.log('A');
  await Promise.resolve();
  console.log('B');
}
console.log('C');
main();
console.log('D');
setTimeout(() => console.log('E'), 0);
Promise.resolve().then(() => console.log('F'));
Step through it:
- 'C' — sync, first line executed
- 'A' — main() is called; its sync code runs until the first await
- 'D' — control returns to the call site after main() suspends
- The call stack is now empty, so the microtask queue drains:
- 'B' — the continuation after await Promise.resolve() (microtask)
- 'F' — the Promise.resolve().then() callback (microtask)
- Finally the macrotask queue runs:
- 'E' — the setTimeout callback
Output: C, A, D, B, F, E
In real interviews, the code is more complex. The strategy is always the same: trace synchronous code first, then microtasks in order, then macrotasks.
Q8. What is the difference between Promise.all, Promise.allSettled, Promise.any, and Promise.race?
What the interviewer is testing: Can you choose the right combinator for a given use case?
- Promise.all(promises): Resolves when all resolve; rejects immediately on the first rejection. Use when all results are required and any failure should abort. Classic: parallel API calls where you need every response.
- Promise.allSettled(promises): Always resolves (never rejects), with an array of { status, value/reason } objects. Use when you need to know the outcome of every promise regardless of failures. Classic: batch operations where you want a report, not a hard failure.
- Promise.any(promises): Resolves on the first fulfillment; rejects only if all reject (AggregateError). Use for race-to-success patterns. Classic: querying multiple redundant servers and using whichever responds first.
- Promise.race(promises): Settles (resolves or rejects) on whichever promise settles first. Use for timeout patterns. Classic: race a fetch against a setTimeout rejection.
// Timeout pattern with Promise.race
function fetchWithTimeout(url, ms) {
  const timeout = new Promise((_, reject) =>
    setTimeout(() => reject(new Error('Timeout')), ms)
  );
  return Promise.race([fetch(url), timeout]);
}
// Resilient batch with allSettled
const results = await Promise.allSettled([
  fetch('/api/users'),
  fetch('/api/orders'),
  fetch('/api/products'),
]);
const failed = results.filter(r => r.status === 'rejected');
const succeeded = results.filter(r => r.status === 'fulfilled');
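Promise.any deserves a sketch too, since it is the only combinator above without one: the first fulfillment wins, and rejections are ignored unless every promise rejects (the delays here are illustrative):

```javascript
const slow = new Promise((resolve) => setTimeout(() => resolve('slow'), 50));
const fast = new Promise((resolve) => setTimeout(() => resolve('fast'), 10));
const failing = Promise.reject(new Error('mirror down'));

Promise.any([slow, fast, failing]).then((winner) => {
  console.log(winner); // 'fast' — the immediate rejection did not abort the race
});
```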
Q9. Which async/await errors does try/catch fail to catch?
What the interviewer is testing: Real-world awareness of async error handling edge cases.
Case 1 — Unhandled promise rejection in a detached Promise:
async function process() {
  // This rejection is DETACHED — not awaited, not caught
  someAsyncOperation().then(doSomething);
  try {
    await anotherOperation();
  } catch (e) {
    // This catch does NOT cover the detached promise above
  }
}
Any promise you create but don't await and don't .catch() is a potential unhandled rejection. In Node.js 15+, this crashes the process. In browsers, it fires the unhandledrejection event.
Case 2 — Async callbacks inside sync iteration:
try {
  [1, 2, 3].forEach(async (id) => {
    const data = await fetch(`/api/${id}`);
    // Errors here are NOT caught by the outer try/catch
  });
} catch (e) {
  // Never runs for async errors inside forEach
}
forEach ignores returned promises — its callback's return value is discarded. Each async callback fires a detached promise chain. Use for...of with await for sequential processing, or Promise.all for parallel:
// Sequential — errors are caught
for (const id of [1, 2, 3]) {
  try {
    const data = await fetch(`/api/${id}`);
    await processData(data);
  } catch (e) {
    console.error(`Failed for id ${id}:`, e);
  }
}
// Parallel — one rejection rejects all (use allSettled if you want partial results)
const results = await Promise.all(
  [1, 2, 3].map(async (id) => {
    const data = await fetch(`/api/${id}`);
    return processData(data);
  })
);
Case 3 — Fire-and-forget without global handler:
// In production, always have a global safety net
process.on('unhandledRejection', (reason, promise) => {
  logger.error('Unhandled rejection:', reason);
  // decide whether to crash gracefully
});
// Browser equivalent
window.addEventListener('unhandledrejection', (event) => {
  console.error('Unhandled promise rejection:', event.reason);
  event.preventDefault(); // suppress browser console warning
});
Q10. How do closures cause memory leaks, and how do you detect them?
What the interviewer is testing: Production awareness. This distinguishes developers who have debugged real memory issues from those who have only read about them.
The mechanism: A closure keeps its entire lexical scope alive as long as the closure itself is reachable. If a closure references a large object, that object cannot be garbage collected even if you never access it again.
// Classic DOM listener leak
function attachHandler() {
  const largeData = new Array(1_000_000).fill('*'); // 1M items
  document.getElementById('btn').addEventListener('click', function handler() {
    // largeData is captured in this closure — while the listener stays
    // attached, the million-item array cannot be garbage collected
    console.log('clicked', largeData.length);
  });
  // largeData goes out of scope here, BUT the listener still holds a reference
}
Real patterns that cause leaks:
- Event listeners never removed (especially on long-lived DOM elements)
- Closures stored in module-level caches that grow unboundedly
- Timers (setInterval) that reference outer scope but are never cleared
- Accidental globals — variables attached to window that are never cleaned up
Detection: Chrome DevTools → Memory tab → Heap snapshot. Look for "Detached DOM tree" nodes and unexpected retained sizes.
Fixes: Remove event listeners when components unmount (removeEventListener or AbortController signal), use WeakMap for caches keyed by objects, clear intervals and timeouts.
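The AbortController approach is worth a sketch, since one abort() call removes every listener tied to the signal (mountWidget is a hypothetical name; this works with any EventTarget, not just DOM elements):

```javascript
function mountWidget(target, onClick) {
  const controller = new AbortController();
  // The listener is detached automatically when the signal aborts
  target.addEventListener('click', onClick, { signal: controller.signal });
  return function unmount() {
    controller.abort(); // removes all listeners registered with this signal
  };
}
```

On component unmount you call the returned function once, instead of bookkeeping each handler for a matching removeEventListener.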
Prototypes, Classes & the Object Model (Q11–13)
Q11. How does JavaScript prototypal inheritance work under the hood?
What the interviewer is testing: Do you understand what is actually happening, or are you just using ES6 class syntax without knowing the mechanics underneath?
The core model: In JavaScript, inheritance is object-to-object, not class-to-class. Every object has an internal slot called [[Prototype]] (accessible as __proto__ or via Object.getPrototypeOf()). When you access a property on an object, the engine looks at the object first. If not found, it looks at [[Prototype]], then that object's [[Prototype]], and so on up the chain until it reaches Object.prototype (whose [[Prototype]] is null). This chain traversal is prototype lookup.
const animal = {
  breathe() { return `${this.name} breathes`; },
};
const dog = Object.create(animal); // dog's [[Prototype]] is animal
dog.name = 'Rex';
dog.bark = function() { return 'Woof'; };
console.log(dog.breathe()); // 'Rex breathes' — found on animal via prototype chain
console.log(dog.bark()); // 'Woof' — found directly on dog
console.log(Object.getPrototypeOf(dog) === animal); // true
What constructor functions and classes are: Syntactic patterns that set up the prototype chain for you. class Dog extends Animal is syntactic sugar for manually linking Dog.prototype to Animal.prototype. The prototype chain is always what is running under the hood.
The crucial distinction: When you call a method via the prototype chain, this still refers to the original object (the one you called the method on), not the prototype where the method was defined. This is why this.name in breathe() returns 'Rex' — this is dog.
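The same chain checks can be run against class syntax, confirming the sugar is just prototype wiring:

```javascript
class Animal {
  breathe() { return `${this.name} breathes`; }
}
class Dog extends Animal {
  bark() { return 'Woof'; }
}

const rex = new Dog();
rex.name = 'Rex';
// Chain: rex -> Dog.prototype -> Animal.prototype -> Object.prototype -> null
console.log(Object.getPrototypeOf(rex) === Dog.prototype); // true
console.log(Object.getPrototypeOf(Dog.prototype) === Animal.prototype); // true
console.log(rex.breathe()); // 'Rex breathes' because `this` is rex, not the prototype
```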
Q12. What is the difference between Object.create(), new Constructor(), and class extends?
What the interviewer is testing: Depth of understanding of object creation in JavaScript.
Object.create(proto): Creates a new plain object whose [[Prototype]] is set to proto. No constructor is called. The most explicit and low-level way to set up inheritance.
new Constructor(): Four things happen implicitly: (1) a new empty object is created; (2) its [[Prototype]] is set to Constructor.prototype; (3) the constructor function is called with this bound to the new object; (4) if the constructor returns a non-primitive, that is used as the result; otherwise, the new object is returned.
// What `new` does, manually:
function simulatedNew(Constructor, ...args) {
  const obj = Object.create(Constructor.prototype);
  const result = Constructor.apply(obj, args);
  // a non-primitive return value (object or function) overrides the new object
  return (result !== null && typeof result === 'object') || typeof result === 'function'
    ? result
    : obj;
}
class extends: Does the same as the constructor function pattern, but also sets up the prototype-to-prototype link (so static methods are inherited too), and handles super() which is required before accessing this in derived class constructors.
When would you choose Object.create over class? When you want a pure data object that inherits from a specific prototype without invoking a constructor — useful for creating objects with a shared prototype for method lookup while keeping the object itself minimal.
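Two of the details above can be checked directly: extends also links constructor to constructor (so statics are inherited), while Object.create wires the chain with no constructor call at all (class and property names here are illustrative):

```javascript
class Base {
  static origin() { return 'base'; }
}
class Derived extends Base {}

console.log(Derived.origin()); // 'base', found via the constructor-to-constructor link
console.log(Object.getPrototypeOf(Derived) === Base); // true, set up by extends

// Object.create: a pure prototype link, no constructor ever runs
const proto = { describe() { return 'plain'; } };
const bare = Object.create(proto);
console.log(bare.describe()); // 'plain'
console.log(Object.getPrototypeOf(bare) === proto); // true
```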
Q13. What is the difference between WeakMap and Map, and when should you use each?
What the interviewer is testing: Awareness of memory management and garbage collection in JavaScript — a topic that separates intermediate from senior-level thinking.
The key difference: Map holds strong references to its keys. As long as a Map exists and contains a key, that key's object will not be garbage collected. WeakMap holds weak references to its keys (which must be objects). If the only remaining reference to an object is a WeakMap key, the object can be garbage collected, and the WeakMap entry is automatically removed.
When to use WeakMap:
Associating private data with DOM elements or external objects without preventing garbage collection:
const privateData = new WeakMap();
class Component {
  constructor(element) {
    privateData.set(element, { clickCount: 0, initialized: true });
  }
  handleClick(element) {
    const data = privateData.get(element);
    data.clickCount++;
  }
}
// When 'element' is removed from the DOM and has no other references,
// the WeakMap entry is automatically cleaned up.
// With a regular Map, you'd have a permanent memory leak.
Caching computed results where the cache should not prevent collection of the source object:
const cache = new WeakMap();
function expensiveComputation(obj) {
  if (cache.has(obj)) return cache.get(obj);
  const result = /* ... heavy work ... */ obj.data.reduce((a, b) => a + b, 0);
  cache.set(obj, result);
  return result;
}
// When obj is garbage collected, so is its cache entry.
What WeakMap cannot do: It is not iterable. You cannot get its size, loop over its entries, or clear it. This is by design — iteration would require strong references. If you need iteration, use a regular Map and manage cleanup yourself.
Functions, Patterns & Modern JavaScript (Q14–18)
Q14. What is currying in JavaScript? Implement a curry function that handles any arity.
What the interviewer is testing: Functional programming concepts and whether you can implement higher-order functions.
The concept: Currying transforms a function that takes multiple arguments into a sequence of functions that each take one argument. It is named after mathematician Haskell Curry. The practical benefit: partial application — you fix some arguments now and supply the rest later.
// Manual curry — readable
const add = (a) => (b) => (c) => a + b + c;
add(1)(2)(3); // 6
const add10 = add(10); // partially applied
add10(5)(3); // 18
The interesting interview question is: implement a curry() utility that curries any function automatically.
function curry(fn) {
  return function curried(...args) {
    if (args.length >= fn.length) {
      // We have enough arguments — call the original function
      return fn.apply(this, args);
    }
    // Not enough arguments yet — return a function that waits for more
    return function(...moreArgs) {
      return curried.apply(this, args.concat(moreArgs));
    };
  };
}
// Usage
const multiply = curry((a, b, c) => a * b * c);
multiply(2)(3)(4); // 24
multiply(2, 3)(4); // 24
multiply(2)(3, 4); // 24
multiply(2, 3, 4); // 24
const double = multiply(2); // partial application
const sixTimes = double(3); // more partial application
sixTimes(5); // 30
The follow-up trap interviewers use: This implementation relies on fn.length — the function's declared parameter count. fn.length does not count rest parameters (...args) or parameters with default values. So curry(function(...args) {}) has fn.length === 0 and will call the function immediately regardless of arguments passed.
const broken = curry((...args) => args.reduce((a, b) => a + b, 0));
broken(1, 2, 3); // Calls immediately — length is 0
// Fix: pass arity explicitly when the function uses rest params
function curryN(fn, arity = fn.length) {
  return function curried(...args) {
    if (args.length >= arity) return fn.apply(this, args);
    return function(...more) { return curried.apply(this, args.concat(more)); };
  };
}
Real-world use case: Configuring middleware pipelines and API client factories — you partially apply the base URL or auth token up front, then supply endpoint-specific args per call.
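A hedged sketch of that factory pattern (names, URL, and token are illustrative; the curried layers return a request description rather than calling fetch, to keep the sketch self-contained):

```javascript
const request = (baseUrl) => (token) => (endpoint) => ({
  url: `${baseUrl}${endpoint}`,
  headers: { Authorization: `Bearer ${token}` },
});

const apiRequest = request('https://api.example.com'); // fixed once per app
const authed = apiRequest('secret-token');             // fixed once per session

authed('/users');  // only the per-call argument varies now
authed('/orders');
```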
Q15. What is the difference between call(), apply(), and bind() in JavaScript?
What the interviewer is testing: Explicit this binding and when each makes sense.
- .call(thisArg, arg1, arg2, ...) — calls the function immediately, with this set to thisArg and the remaining arguments passed individually
- .apply(thisArg, [args]) — same as call, but the arguments are passed as an array. Useful when you already have the arguments in array form.
- .bind(thisArg, arg1, ...) — returns a new function with this (and optionally some arguments) permanently bound. Does not call immediately.
function greet(greeting, punctuation) {
  return `${greeting}, ${this.name}${punctuation}`;
}
const user = { name: 'Alice' };
greet.call(user, 'Hello', '!'); // 'Hello, Alice!'
greet.apply(user, ['Hello', '!']); // 'Hello, Alice!'
const boundGreet = greet.bind(user, 'Hello');
boundGreet('!'); // 'Hello, Alice!' — called later, '!' supplied now
boundGreet('?'); // 'Hello, Alice?'
Real use cases:
- .call: Borrowing methods — Array.prototype.slice.call(arguments) (pre-rest parameters)
- .apply: Spreading an array as arguments — Math.max.apply(null, numbers) (now: Math.max(...numbers))
- .bind: Fixing this for callbacks — React class component methods, setTimeout callbacks
Q16. What are JavaScript Proxy and Reflect, and what problems do they solve?
What the interviewer is testing: Knowledge of meta-programming features — this question separates developers who read the spec from those who just use common APIs.
Proxy: Wraps an object and intercepts operations on it via "traps" — handlers for get, set, has, deleteProperty, apply, construct, and more.
Reflect: A companion API that provides the default behavior for each trap. Instead of manually re-implementing what a property get would normally do, you call Reflect.get(target, prop, receiver).
// Validation proxy — prevents invalid property assignments
function createValidatedObject(target, validators) {
  return new Proxy(target, {
    set(obj, prop, value) {
      if (validators[prop] && !validators[prop](value)) {
        throw new TypeError(`Invalid value for ${prop}: ${value}`);
      }
      return Reflect.set(obj, prop, value); // default behavior
    },
  });
}
const user = createValidatedObject({}, {
  age: (v) => Number.isInteger(v) && v >= 0 && v <= 150,
  email: (v) => typeof v === 'string' && v.includes('@'),
});
user.age = 25; // OK
user.email = 'a@b'; // OK
user.age = -1; // TypeError: Invalid value for age: -1
Other practical use cases:
- Observability: Log every property access for debugging
- Immutable objects: Throw on any write attempt
- Lazy loading: Only fetch data when a property is actually accessed
- Vue 3's reactivity system: Built entirely on Proxy, replacing Vue 2's Object.defineProperty approach
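The observability use case takes only a few lines with the same Proxy + Reflect shape (traced and its logger parameter are illustrative names):

```javascript
function traced(target, log = console.log) {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      log(`read: ${String(prop)}`);             // record every property access
      return Reflect.get(obj, prop, receiver);  // then perform the default lookup
    },
  });
}

const config = traced({ retries: 3 });
config.retries; // logs 'read: retries' and evaluates to 3
```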
Q17. How do JavaScript generators work, and when should you use them over async/await?
What the interviewer is testing: Understanding of pausable execution and its use cases.
A generator function (function*) returns an iterator. Each yield pauses execution and returns a value to the caller. The caller resumes execution by calling .next(). Unlike async functions, generators are synchronous by default and give the caller full control over when to proceed.
function* range(start, end, step = 1) {
  for (let i = start; i < end; i += step) {
    yield i;
  }
}
for (const n of range(0, 10, 2)) {
  console.log(n); // 0, 2, 4, 6, 8
}
// Infinite sequence — safe because evaluation is lazy
function* fibonacci() {
  let [a, b] = [0, 1];
  while (true) {
    yield a;
    [a, b] = [b, a + b];
  }
}
const fib = fibonacci();
console.log(fib.next().value); // 0
console.log(fib.next().value); // 1
console.log(fib.next().value); // 1
console.log(fib.next().value); // 2
When generators beat async/await:
- Infinite or very large sequences where you do not want to materialize everything in memory at once
- Custom iterators for data structures (trees, graphs, linked lists)
- State machines where the caller explicitly controls progression
- Redux-Saga — generators make async side effects unit-testable by yielding plain objects that describe intent rather than executing it directly
Async generators (async function*) combine generators with async/await, making them the idiomatic way to process streaming data in modern JavaScript:
// Paginate an API without loading all pages at once
async function* paginatedFetch(baseUrl) {
  let page = 1;
  while (true) {
    const res = await fetch(`${baseUrl}?page=${page}`);
    const data = await res.json();
    if (data.results.length === 0) return;
    yield data.results;
    page++;
  }
}
// Consumer — processes one page at a time, memory stays bounded
for await (const page of paginatedFetch('/api/users')) {
  await bulkInsert(page);
}
This pattern is increasingly common with the Streams API and ReadableStream supporting async iteration natively in modern browsers and Node.js 16+.
Q18. What is the difference between debounce and throttle? Implement both from scratch.
What the interviewer is testing: Practical knowledge of performance patterns, and whether you can implement them without relying on lodash.
Debounce: The function fires only after the caller stops calling it for a specified delay. Resets the timer on every call. Use for: search input, window resize handling.
function debounce(fn, delay) {
  let timerId;
  return function(...args) {
    clearTimeout(timerId);
    timerId = setTimeout(() => {
      fn.apply(this, args);
    }, delay);
  };
}
const handleSearch = debounce((query) => {
  fetch(`/api/search?q=${query}`); // Only fires 300ms after user stops typing
}, 300);
Throttle: The function fires at most once per interval, regardless of how often the caller calls it. Use for: scroll events, mouse move, game loops.
function throttle(fn, interval) {
  let lastCallTime = 0;
  return function(...args) {
    const now = Date.now();
    if (now - lastCallTime >= interval) {
      lastCallTime = now;
      fn.apply(this, args);
    }
  };
}
const handleScroll = throttle(() => {
  updateScrollPosition(); // Fires at most once every 100ms no matter how fast scrolling happens
}, 100);
The conceptual distinction to nail in the interview: Debounce says "wait until things settle down, then act once." Throttle says "act at most this often, regardless of how busy it gets." Debounce collapses a burst into a single trailing event. Throttle samples from a burst at a fixed rate.
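One refinement worth mentioning in the interview: the throttle above silently drops the last calls in a burst. A sketch of a trailing-call variant (a simplification of what lodash's throttle does, not a drop-in replacement):

```javascript
function throttleWithTrailing(fn, interval) {
  let lastCallTime = 0;
  let trailingId = null;
  let lastArgs;
  return function (...args) {
    const now = Date.now();
    lastArgs = args; // always remember the most recent arguments
    if (now - lastCallTime >= interval) {
      lastCallTime = now;
      fn.apply(this, lastArgs); // leading call
    } else if (trailingId === null) {
      trailingId = setTimeout(() => {
        trailingId = null;
        lastCallTime = Date.now();
        fn.apply(this, lastArgs); // trailing call with the latest arguments
      }, interval - (now - lastCallTime));
    }
  };
}
```

A burst of calls now produces one leading invocation plus one trailing invocation carrying whatever arguments arrived last.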
Tricky Output & Gotcha Questions (Q19–22)
These questions are asked to test your understanding of JavaScript's type system, coercion rules, and reference semantics. Do not try to memorize outputs — understand why they happen, because the "why" is what interviewers want to hear.
Q19. Why does typeof null === "object", NaN !== NaN, and [] == ![] evaluate to true?
typeof null === "object" — a 30-year-old bug
In the original JavaScript implementation (1995), values were stored as 32-bit words. The type tag for objects was 000. null was represented as a null pointer — all zeros — so its type tag was also 000. typeof checked the tag and returned "object". This was a bug. It was never fixed because doing so would break too much existing code. The correct check for a non-null object is:
typeof value === 'object' && value !== null
NaN !== NaN — NaN is not reflexive
NaN stands for "Not a Number" and represents an undefined or unrepresentable numeric result (0/0, Math.sqrt(-1)). According to IEEE 754 floating-point standard, NaN is not equal to anything, including itself. This is the only value in JavaScript for which reflexive equality (x === x) is false.
const x = NaN;
x === x; // false
Number.isNaN(x); // true — correct way to check
isNaN('hello'); // true — dangerous: coerces to NaN first, then checks
Number.isNaN('hello'); // false — correct: checks without coercion
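A common follow-up probes the language's other equality algorithms, which treat NaN differently from ===:

```javascript
// SameValue (Object.is) and SameValueZero (Array.prototype.includes) both
// treat NaN as equal to itself; === (used by indexOf) does not.
console.log(Object.is(NaN, NaN));  // true
console.log([NaN].includes(NaN));  // true
console.log([NaN].indexOf(NaN));   // -1: indexOf uses ===, so NaN is never found
```

This is why `includes` is the safer membership check for arrays that might contain NaN.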
[] == ![] — abstract equality and type coercion
This one evaluates to true. Let's trace the coercions:
- ![] is evaluated first: [] is truthy, so ![] is false. Now we have [] == false
- Abstract equality: when comparing with a boolean, convert the boolean to a number: [] == 0
- Comparing object to number: convert the object via ToPrimitive: [].valueOf() returns the array itself (not primitive), so try [].toString(), which returns "". Now: "" == 0
- Comparing string to number: convert the string to a number: Number("") === 0
- Finally: 0 == 0 → true
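Each step of the trace can be verified directly in a console:

```javascript
console.log(![]);            // false: [] is truthy
console.log([].toString());  // "": ToPrimitive falls back to toString for arrays
console.log(Number(""));     // 0
console.log([] == ![]);      // true: the full coercion chain
```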
The lesson: Never use == unless you specifically need type coercion (rare). === is your default. This kind of coercion chain is exactly why.
Q20. What is the closure-in-a-loop problem in JavaScript and how do you fix it?
The problem:
for (var i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 0);
}
// Output: 3, 3, 3 (not 0, 1, 2)
All three callbacks share the same i variable (because var is function-scoped, not block-scoped). By the time the timeouts fire, the loop has finished and i is 3.
Fix 1 — Use let (block-scoped): Each iteration creates a new binding.
for (let i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 0); // 0, 1, 2
}
Fix 2 — IIFE to create a new scope per iteration: (historical, pre-ES6)
for (var i = 0; i < 3; i++) {
  (function(j) {
    setTimeout(() => console.log(j), 0);
  })(i);
}
Fix 3 — Use a factory function:
function makeCallback(n) {
  return () => console.log(n);
}
for (var i = 0; i < 3; i++) {
  setTimeout(makeCallback(i), 0);
}
In practice: Use let. The IIFE and factory approaches are useful to demonstrate understanding of how closures work, but let is the correct answer for any new code you write.
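A fourth variant occasionally comes up: setTimeout forwards any extra arguments to its callback, which sidesteps the shared binding without closing over i at all. The results array below is only there to make the output observable.

```javascript
const results = [];
for (var i = 0; i < 3; i++) {
  // Arguments after the delay are passed to the callback, so each
  // callback receives the value of i at scheduling time
  setTimeout((n) => results.push(n), 0, i);
}
setTimeout(() => console.log(results), 10); // [ 0, 1, 2 ]
```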
Q21. What is the output order of this mixed microtask and macrotask code?
console.log('start');
setTimeout(() => console.log('timeout 1'), 0);
Promise.resolve()
  .then(() => {
    console.log('promise 1');
    setTimeout(() => console.log('timeout 2'), 0);
  })
  .then(() => console.log('promise 2'));
setTimeout(() => console.log('timeout 3'), 0);
console.log('end');
Trace through it:
- Sync: 'start', 'end'
- Macrotask queue after sync: [timeout 1, timeout 3]
- Microtask queue after sync: [promise 1 handler]
- Drain microtasks: 'promise 1', which queues timeout 2 in the macrotask queue and schedules promise 2 as a new microtask
- Continue draining microtasks: 'promise 2'
- Microtask queue empty. Macrotask queue: [timeout 1, timeout 3, timeout 2]
- Execute macrotasks: 'timeout 1', 'timeout 3', 'timeout 2'
Output: start, end, promise 1, promise 2, timeout 1, timeout 3, timeout 2
Note that timeout 2 (scheduled inside the promise handler) always comes after timeout 3 (scheduled before the promise handler ran), because timeout 2 is queued to the macrotask queue after the promise microtasks drain.
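The same rule can be shown with queueMicrotask, which schedules directly onto the microtask queue: microtasks queued while the queue is draining still run before the next macrotask.

```javascript
const order = [];

setTimeout(() => order.push('macro'), 0);

queueMicrotask(() => {
  order.push('micro 1');
  // Queued mid-drain, but the drain continues until the queue is empty
  queueMicrotask(() => order.push('micro 2'));
});

setTimeout(() => console.log(order), 10); // [ 'micro 1', 'micro 2', 'macro' ]
```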
Q22. What does [] + {} evaluate to in JavaScript, and why does {} + [] give a different result?
[] + {}; // '[object Object]'
{} + []; // 0 (in some contexts)
[] + {}: Both operands are objects, so the + operator tries to convert them to primitives. [].toString() is "". ({}).toString() is "[object Object]". String concatenation: "" + "[object Object]" = "[object Object]".
{} + []: This depends on context. When evaluated as a statement (as in the browser console), the {} at the start is parsed as an empty block, not an empty object literal. So it becomes the +[] unary plus expression. +[] converts [] to a number: Number("") === 0. Result: 0.
When forced into an expression context (e.g., console.log({} + []) or let x = {} + []), the {} is correctly parsed as an object literal, and the result is "[object Object]" just like the first case.
The lesson: JavaScript's parser distinguishes between a block statement and an object literal based on context. This is one reason to always wrap objects in parentheses when using them as expressions: ({}).
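The ToPrimitive conversion behind both results can also be hooked directly with Symbol.toPrimitive, which makes the "hint" mechanism visible. The object below is purely illustrative:

```javascript
const obj = {
  [Symbol.toPrimitive](hint) {
    // hint is "number", "string", or "default"
    return hint === 'number' ? 42 : 'forty-two';
  }
};

console.log(+obj);     // 42: unary plus asks with the "number" hint
console.log(`${obj}`); // "forty-two": template literals use the "string" hint
console.log(obj + ''); // "forty-two": binary + uses the "default" hint
```

Plain arrays and objects have no Symbol.toPrimitive, which is why the valueOf/toString fallback order described above applies to them.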
Performance, Memory & Architecture (Q23–25)
Q23. What are the most common JavaScript memory leak patterns in production code?
What the interviewer is testing: Real-world experience with production systems, not just textbook knowledge.
1. Forgotten event listeners
// Leaks every time addHandler is called
function addHandler() {
  window.addEventListener('resize', heavyCallback);
  // Missing: window.removeEventListener('resize', heavyCallback)
}

// Fix: use AbortController (modern, clean)
const controller = new AbortController();
window.addEventListener('resize', heavyCallback, { signal: controller.signal });
// Later: controller.abort() — removes all listeners attached with this signal
2. Unbounded caches (growing Maps/Sets with no eviction)
const resultCache = new Map(); // Grows forever — memory exhaustion in long-running servers

// Fix: use an LRU cache, or a WeakMap keyed by object identity
// Simple bounded cache:
class LRUCache {
  constructor(limit) {
    this.limit = limit;
    this.cache = new Map();
  }
  get(key) {
    if (!this.cache.has(key)) return null;
    const value = this.cache.get(key);
    this.cache.delete(key);
    this.cache.set(key, value); // Move to end (most recent)
    return value;
  }
  set(key, value) {
    if (this.cache.has(key)) this.cache.delete(key);
    else if (this.cache.size >= this.limit) {
      this.cache.delete(this.cache.keys().next().value); // Evict oldest
    }
    this.cache.set(key, value);
  }
}
3. Closures retaining large scope variables unnecessarily
function process(data) {
  const bigBuffer = new ArrayBuffer(100_000_000); // 100MB
  const logSize = () => console.log(bigBuffer.byteLength); // inner function references bigBuffer
  // Closures created in the same scope share one context object. Because logSize
  // references bigBuffer, bigBuffer lives in that context — and the returned
  // function keeps the whole context alive even though it never uses bigBuffer.
  return function() {
    return data.length;
  };
}
// Fix: don't let a long-lived closure share a scope with short-lived large data —
// move the big allocation into its own function so nothing captures it
4. Detached DOM nodes
let detachedTree;
function createTree() {
  const ul = document.createElement('ul');
  // Populate with many children...
  detachedTree = ul; // Never attached (or removed from the DOM) but still referenced — not GC'd
}
// Fix: set detachedTree = null when done, or store only the data, not DOM nodes
5. setInterval without clearInterval
// Leaks the callback and everything it closes over for as long as the page lives
const id = setInterval(() => {
updateDashboard(); // closes over large state objects
}, 1000);
// Fix: always store the ID and clear on component unmount / page unload
window.addEventListener('beforeunload', () => clearInterval(id));
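The AbortController pattern from fix 1 extends naturally to intervals. The helper below is a hypothetical sketch, not a built-in API; it ties an interval's lifetime to the same signal that owns a component's listeners, so one abort() cleans up everything.

```javascript
// Hypothetical helper: clears the interval automatically when the signal aborts
function setAbortableInterval(fn, ms, signal) {
  const id = setInterval(fn, ms);
  signal.addEventListener('abort', () => clearInterval(id), { once: true });
  return id;
}

// Usage sketch
const controller = new AbortController();
let ticks = 0;
setAbortableInterval(() => ticks++, 5, controller.signal);
setTimeout(() => controller.abort(), 30); // after this, ticks stops growing
```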
Q24. How does V8 optimize JavaScript code? What are hidden classes and inline caches?
What the interviewer is testing: Understanding of runtime internals. Not every developer needs this, but senior developers should be able to reason about why certain code patterns are slower than others.
V8 (Chrome, Node.js) uses a multi-tier JIT (Just-In-Time) compilation pipeline. Code starts in the interpreter (Ignition), gets profiled, and hot paths are compiled to optimized machine code by TurboFan (V8's optimizing compiler). If the optimistic assumptions TurboFan made turn out to be wrong at runtime, V8 deoptimizes — it throws away the compiled code and falls back to the interpreter. Deoptimization is one of the biggest preventable sources of performance degradation in JavaScript-heavy workloads. Two key mechanisms underlie those optimistic assumptions:
Hidden classes (Shapes): V8 assigns a "hidden class" to objects based on their property layout. Objects with the same properties added in the same order share a hidden class, allowing V8 to generate fast property lookup code. Adding properties out of order or conditionally forces V8 to create multiple hidden classes, degrading performance.
// Fast: both objects get the same hidden class
function Point(x, y) {
  this.x = x;
  this.y = y;
}
const p1 = new Point(1, 2);
const p2 = new Point(3, 4);

// Slow: objects end up with different hidden classes
const a = {};
a.x = 1;
a.y = 2;
const b = {};
b.y = 2; // different order
b.x = 1; // a and b have different hidden classes
Inline caches (ICs): V8 caches the hidden class at property access call sites. If the same hidden class shows up consistently, V8 assumes it always will and generates a fast path. If different types show up at the same call site (polymorphism), V8 has to generate slower generic code or deoptimize.
Practical implications:
- Always initialize all object properties in the constructor in the same order — this keeps all instances on the same hidden class
- Avoid delete obj.prop — it forces a hidden class transition and can push the object into slow "dictionary" mode, losing all IC benefits
- Avoid mixed-type arrays ([1, 2, 'three']) — V8 uses specialized element representations internally; mixing types forces the slow generic path
- Pass consistent argument types to functions — polymorphic call sites prevent TurboFan from optimizing
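A sketch of how a property-access site goes polymorphic. The assertion below only checks the setup, not the timing; the point is that the two arrays present one shape versus two shapes to the same call site.

```javascript
function getX(p) { return p.x; } // one call site reading .x

// Monomorphic: every object has the same hidden class ({x, y} in that order)
const monos = Array.from({ length: 1000 }, (_, i) => ({ x: i, y: i }));

// Polymorphic: same properties, alternating insertion order, two hidden classes
const polys = Array.from({ length: 1000 }, (_, i) =>
  i % 2 === 0 ? { x: i, y: i } : { y: i, x: i }
);

let sum = 0;
for (const p of monos) sum += getX(p); // IC at p.x stays monomorphic (fast path)
for (const p of polys) sum += getX(p); // IC sees two shapes (polymorphic, slower)
console.log(sum); // 999000
```

In a real profile, the second loop's reads go through a polymorphic IC stub; with enough distinct shapes the site goes megamorphic and falls back to a generic lookup.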
How to diagnose deoptimizations in Node.js:
# Run with deoptimization tracing
node --trace-deopt --trace-opt your-script.js 2>&1 | grep -E "deopt|optimized"
# More detailed: V8 flags in a profile
node --prof your-script.js
node --prof-process isolate-*.log
In practice, most applications will never need to optimize at this level. But when you have a hot loop processing millions of items per second — parsers, data pipelines, game engines — understanding hidden classes and deoptimization is the difference between 10ms and 100ms.
Q25. What is the difference between CommonJS and ES Modules, and how does tree-shaking work?
What the interviewer is testing: Understanding of the module system, bundler tooling, and its effect on production bundle size.
CommonJS (require/module.exports):
- Dynamic — require() can be called anywhere, with any expression as the path
- Synchronous — the module is executed and its exports are available immediately
- Resolved at runtime — bundlers cannot statically determine what is imported
// CommonJS — dynamic, cannot be tree-shaken
const utils = require('./utils'); // entire module loaded
const name = condition ? require('./a') : require('./b'); // runtime path
ES Modules (import/export):
- Static — import statements must be at the top level, with literal specifiers
- Live bindings — imported names are live references, not copies
- Asynchronous loading (with import() for the dynamic case)
// ES Module — static, tree-shakeable
import { formatDate } from './utils'; // bundler knows exactly what is used
Tree-shaking: Because ES module imports are static, bundlers (webpack, Rollup, esbuild) can build a complete dependency graph at build time and identify which exports are actually used. Unused exports are marked as "dead code" and eliminated from the final bundle.
// utils.js
export function formatDate(d) { /* used */ }
export function parseCSV(s) { /* never imported anywhere */ }
export function slugify(s) { /* never imported anywhere */ }
// app.js
import { formatDate } from './utils';
// After tree-shaking: parseCSV and slugify are removed from the bundle
// Even though utils.js exports them
Why CommonJS cannot be tree-shaken: Because require() is a function call evaluated at runtime, a bundler cannot know what will be required without executing the code. Every CommonJS export must be included.
Side effects field in package.json: Libraries mark themselves as side-effect-free ("sideEffects": false) to tell bundlers that no module needs to be kept for its side effects alone, so any module whose exports are all unused can be dropped entirely, without even needing export-level analysis.
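As a sketch of how this appears in a library's package.json (the package name is illustrative):

```json
{
  "name": "my-lib",
  "sideEffects": false
}
```

An array form, e.g. `"sideEffects": ["./src/polyfills.js"]`, lists the exceptions: files that must be kept because they run code on import even when nothing is imported from them.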
Why this matters: A well-tree-shaken bundle can be 30–70% smaller than a naive one. Smaller bundles mean faster parse times, faster execution, and better Core Web Vitals. For large applications using utility libraries like lodash, switching from import _ from 'lodash' to named ES module imports (lodash-es) can save hundreds of kilobytes.
How to Actually Prepare — and How to Answer Well Under Pressure
Passing a technical interview is not just about knowing the right answers — it is about demonstrating that you think like a senior developer. An interviewer asking about closures is not checking whether you memorized a definition. They are watching how you construct an explanation: do you go from mechanism to implication to example, or do you recite a definition and stop?
How to structure every technical answer
Use this three-part structure for any conceptual question: mechanism → implication → example.
- Mechanism: What actually happens at the language/runtime level (technical accuracy)
- Implication: What breaks or gets subtle if you misunderstand it (judgment)
- Example: One concrete code snippet that makes it tangible (practical application)
A two-minute answer with this structure is consistently rated higher than a five-minute answer that covers more ground but loses clarity. Interviewers are time-constrained and are pattern-matching for "does this person think clearly under pressure."
When you do not know the answer
Say so directly: "I am not certain about the specifics here, but let me reason through it." Then reason through it out loud. Companies that hire well are looking for how you think, not what you have memorized. A candidate who says "I haven't used Proxy in production, but based on how traps work I would expect..." and reasons correctly will score above a candidate who confidently gives a half-correct answer and stops.
A two-week preparation schedule
- Week 1, days 1–2: Closures, TDZ, hoisting, this. Write every code example by hand without running it first. Predict the output, then verify.
- Week 1, days 3–4: Event loop deep dive. Write Promise + setTimeout examples and trace through the queues on paper. This is the topic most often underestimated in prep.
- Week 1, day 5: Prototype chain. Build a two-level inheritance hierarchy using only Object.create — no class keyword allowed. Then inspect the prototype chain with Object.getPrototypeOf.
- Week 2, days 1–2: Implement curry, debounce, throttle, and a simplified Promise from scratch. Attempt each without references first. The implementation process reveals gaps a reading-only approach misses.
- Week 2, days 3–4: Proxy/Reflect, WeakMap, generators, async generators. Build one small demo for each — a validation proxy, a bounded cache, and a paginated fetcher.
- Week 2, day 5: Two full mock sessions, spoken aloud, timed. Presenting to a colleague is ideal; recording yourself and reviewing is a viable solo alternative.
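As a reference point for the Week 2 implementation exercise, here is one common shape for curry. This is a sketch; interview variants differ on details such as placeholder support.

```javascript
// Collects arguments until fn's declared arity is met, then invokes
function curry(fn) {
  return function curried(...args) {
    if (args.length >= fn.length) return fn.apply(this, args);
    return (...more) => curried.apply(this, [...args, ...more]);
  };
}

const add3 = curry((a, b, c) => a + b + c);
console.log(add3(1)(2)(3)); // 6
console.log(add3(1, 2)(3)); // 6
console.log(add3(1)(2, 3)); // 6
```

Note that fn.length counts only parameters before the first default or rest parameter, which is a classic follow-up trap.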
What different company types actually test
Interview focus varies significantly by company type. Calibrate your preparation accordingly:
- FAANG / large tech companies: Heavy on output-prediction questions (Q7, Q21), prototype chain mechanics, and algorithmic thinking applied to JavaScript patterns. Expect output traces on a whiteboard and follow-up questions designed to probe the edges of every answer.
- Product-focused startups and scale-ups: More interested in async error handling (Q9), memory leak diagnosis (Q23), module systems (Q25), and framework-specific behavior rooted in JavaScript fundamentals. They want to know you can debug a production issue, not just recite specs.
- Frontend-specialist roles: Debounce/throttle implementation (Q18), event delegation, browser rendering pipeline (how microtask starvation causes frame drops), and how the DOM interacts with the event loop.
- Node.js / full-stack roles: process.nextTick vs Promises (Q6), stream processing with async generators (Q17), CommonJS vs ESM in a server context (Q25), and memory profiling in long-running processes (Q23, Q24)
- Staff and principal roles: All of the above, plus the ability to discuss trade-offs: when to use generators vs async/await, when WeakMap is the wrong tool, what V8 deoptimization means for a specific architectural decision. They are probing for judgment, not just knowledge.
Frequently Asked Questions
Do I need to memorize all 25 of these answers?
No. Memorizing answers is the wrong approach because interviewers ask follow-up questions that deviate from any script. Instead, understand the underlying concepts deeply enough that you can reason through variations you have never seen before. If you truly understand the event loop, you can answer any output-prediction question involving Promises and setTimeout — not just the specific examples you studied.
Are these questions still relevant in 2026 with TypeScript everywhere?
Yes, because TypeScript compiles to JavaScript. TypeScript's type system does not change how this, closures, the prototype chain, or the event loop work at runtime. If anything, TypeScript interviews add a separate layer on top of these JavaScript fundamentals, not instead of them. Understanding the runtime is more important than ever, because TypeScript gives a false sense of safety if you do not understand what the compiled output actually does.
How important are output-prediction questions at real companies?
It depends on the company. FAANG and similar companies use them to probe the precision of your mental model. Many product companies (startups, scale-ups) care more about system design, code quality, and debugging methodology. That said, being able to predict JavaScript output is a useful skill beyond interviews — it directly helps you debug production issues and review code more effectively.
What is the most common mistake developers make in JavaScript interviews?
Providing a surface-level definition when a demonstration is possible. "A closure is a function that remembers its outer scope" is a definition. "Here is a counter factory that uses closures to create genuinely private state, and here is how that differs from a simple module pattern" is a demonstration. Interviewers at mid-to-senior levels have heard the definitions. They want to see you apply the concept.
How should I handle questions about framework-specific behavior (React, Vue, etc.)?
Anchor your answer in the underlying JavaScript mechanism first, then connect it to the framework. "React's useEffect cleanup function runs on unmount — that is essentially the same problem as removing event listeners when a closure's lifetime ends, which matters because..." This approach demonstrates both JavaScript depth and framework knowledge, which is more impressive than either alone.
Is it worth practicing coding problems (LeetCode style) for JavaScript-specific interviews?
For front-end-focused roles, the time is better spent on the topics in this guide — closures, async patterns, the DOM, browser APIs, and framework internals. For full-stack Node.js roles, algorithms and data structures matter more. Check the job description: if it mentions "data structures and algorithms," prepare for those. If it mentions "deep JavaScript knowledge" or lists specific frameworks, focus here.
What JavaScript concepts are tested most frequently in technical interviews?
Based on patterns across interviews at various company types, the most frequently tested concepts in order of frequency are: (1) closures and scope — appears in nearly every mid+ interview in some form; (2) the event loop and async execution order — especially output-prediction questions; (3) this binding — both conceptually and in code tracing; (4) Promise error handling — particularly the detached promise and forEach patterns; (5) prototypal inheritance — especially at companies using vanilla JS or building frameworks. The questions least likely to appear at product companies but most likely at engine/infrastructure companies: V8 optimization (Q24) and generator internals (Q17). Debounce and throttle implementation (Q18) appears disproportionately often at frontend-specialist roles regardless of company size.


