Do Not Use Mocks in Your Tests

Apr 5, 2026
11 min read

Mocks feel good at first. They let you isolate logic, remove unpredictable dependencies, and make tests pass cleanly. The problem is they also let you lie to yourself about whether your code actually works.

This post is specifically about TypeScript tests, in Node.js or browser environments, using Vitest or Jest. The argument applies broadly, but the examples here are concrete.


What Tests Should Be

A test is a specification. It says: given these inputs, this is what the system does. If you cannot read a test and understand what the function under test actually does, the test is failing at its primary job.

Tests should have simple setup. Simple assertions. A reader should be able to understand the contract of the function without reading the implementation.

What tests should not be: a parallel implementation of your dependency graph, full of mocked modules that return undefined by default and silently pass no matter what you change in production code.


The First Problem: Mocks That Always Pass

Say you have two packages: core and customer. In core, you expose a function:

// core/src/label.ts
export function createCustomerLabel(id: string): string {
  return `customer:${id}`;
}

In customer, you use it:

// customer/src/createCustomer.ts
import { createCustomerLabel } from 'core';

export function createCustomer(id: string) {
  const label = createCustomerLabel(id);
  return { id, label };
}

You want to test createCustomer. The core package might make API calls someday, or it might change behavior. You don't know. So you mock it:

// customer/src/createCustomer.test.ts
import { vi, describe, it, expect } from 'vitest';

vi.mock('core');

import { createCustomer } from './createCustomer';

describe('createCustomer', () => {
  it('creates a customer', () => {
    const result = createCustomer('42');
    expect(result.id).toBe('42');
  });
});

This passes. createCustomerLabel returns undefined because you mocked the whole module. label is undefined. Your test does not check label, so it passes.

Now you refactor. You add createCustomerId from core to the label construction:

import { createCustomerLabel, createCustomerId } from 'core';

export function createCustomer(id: string) {
  const label = createCustomerLabel(id);
  const systemId = createCustomerId(id);
  return { id, label, systemId };
}

Your test still passes. createCustomerId is also mocked, returns undefined. Nothing breaks in the test suite. Everything is broken in production.

This is the failure mode: mocks make tests pass regardless of what the code does. You cannot write a failing test first in TDD because the mock absorbs the failure. You cannot trust a green test suite because it proves nothing about actual behavior.
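To make the failure mode concrete, here is a sketch of a state-based test with no mocks; the two functions from above are inlined so the example is self-contained. Asserting on the real output is exactly what the module mock prevented.

```typescript
// Inlined versions of the post's functions, so this sketch runs on its own.
function createCustomerLabel(id: string): string {
  return `customer:${id}`;
}

function createCustomer(id: string) {
  const label = createCustomerLabel(id);
  return { id, label };
}

// No vi.mock: the assertion checks the real label, so it fails the moment
// label becomes undefined, which is exactly what the module mock hid.
const result = createCustomer('42');
console.log(result.label); // "customer:42"
```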


The Second Problem: Inconsistent Mocking Across Tests

Consider a slightly more complex case. core now has two functions:

// core/src/country.ts
export function resolveCountryCode(): string {
  // reads from process.env.COUNTRY_CODE or an external API
  const code = process.env.COUNTRY_CODE;
  if (code === undefined) {
    throw new Error('Country code not set');
  }
  return code;
}

// core/src/label.ts
import { resolveCountryCode } from './country';

export function createCustomerLabel(id: string): string {
  const country = resolveCountryCode();
  return `${country}:customer:${id}`;
}

createCustomerLabel internally calls resolveCountryCode. Now in your customer tests, you want to verify that a missing country code is handled properly. So you start explicitly mocking resolveCountryCode in some tests, but not others. Some tests mock the whole core module. Some tests mock individual functions. Some tests do neither and fail intermittently depending on environment variables.

// test A - mocks the whole module
vi.mock('core');

// test B - mocks a specific function
vi.mocked(resolveCountryCode).mockReturnValue('US');

// test C - no mock, depends on process.env

Now your test file is a maze. A reader cannot tell what any given test is actually asserting. Is country code relevant here? Is the null case handled? Is this test environment-dependent? You cannot answer those questions without reading four different layers of setup.

This is a code smell. Tests that are inconsistent in what they mock are not specifications. They are implementation accidents.


The Third Problem: Type Safety Erosion

When you mock a whole module with vi.mock('core'), TypeScript stops helping you. Every mocked function silently returns undefined, and the compiler accepts it. You have lost the tool designed to catch contract violations at compile time.

Consider what happens when core changes a function signature: a parameter added, a return type narrowed. With real imports and explicit parameters, TypeScript tells you immediately which call sites are broken. With a module mock, nothing breaks at compile time. The test still passes. The type error surfaces at runtime, in production.

Explicit dependencies flip this around. If you add a parameter to a function, the compiler flags every caller and every test. The type system becomes part of your test suite rather than a layer you bypass.
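As a sketch of what "the compiler flags every caller" looks like in practice (the `LabelBuilder` name is illustrative, not from the post):

```typescript
// The dependency is a typed parameter instead of a mocked module import.
type LabelBuilder = (id: string) => string;

function createCustomer(id: string, buildLabel: LabelBuilder) {
  return { id, label: buildLabel(id) };
}

// If LabelBuilder later gains a parameter, say
// (id: string, countryCode: string) => string, this call site and every
// test that passes a stale function becomes a compile error, not a
// runtime surprise.
const result = createCustomer('7', (id) => `customer:${id}`);
console.log(result.label); // "customer:7"
```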


The Fourth Problem: Testing How Instead of What

Mocks encourage interaction testing: verifying how a function was called rather than what it produced.

expect(createCustomerLabel).toHaveBeenCalledWith('42');

This assertion tells you the function was called with '42'. It says nothing about whether the result was correct. You have tested implementation mechanics, not behavior.

State testing checks outputs:

expect(result.label).toBe('US:customer:42');

This is a specification. It survives refactors. If you inline createCustomerLabel, extract it, or rewrite it entirely, the state test still holds as long as the output is correct. The interaction test breaks the moment you change how the function is invoked internally, even if the behavior is identical.

The rule of thumb: test the public API (inputs and outputs), not the internal call graph.
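A small self-contained sketch of why state tests survive refactors: two implementations with identical behavior both satisfy the same state assertion, while an interaction test pinned to one call graph would break on the rewrite.

```typescript
// Original implementation.
function createLabelV1(id: string, country: string): string {
  return `${country}:customer:${id}`;
}

// Rewritten implementation: same output, different internals.
function createLabelV2(id: string, country: string): string {
  return [country, 'customer', id].join(':');
}

// The state test holds for both versions, because it checks the output.
for (const impl of [createLabelV1, createLabelV2]) {
  if (impl('42', 'US') !== 'US:customer:42') {
    throw new Error('behavior changed');
  }
}
```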


The Fifth Problem: Brittle Tests That Assume Internals

A mock is an implicit assertion about your implementation. When you mock createCustomerLabel in a test for createCustomer, you are asserting that createCustomer calls createCustomerLabel in a specific way. If you later inline that logic, reorganize the call structure, or extract it to a different function without changing observable behavior, the test breaks.

The test has become a snapshot of your implementation, not a contract on your behavior. These are the tests people dread touching. They fail for reasons unrelated to correctness, and the failure teaches you nothing about what is actually wrong. Fixing them is busy work, not engineering.

Tests should survive refactors. If a test breaks when you restructure code without changing behavior, the test is measuring the wrong thing.


The Sixth Problem: Bad Design Persisting

Mocks remove the pain signal that makes bad design visible.

When a function secretly depends on an environment variable, a global singleton, or a module-level side effect, a mock absorbs that problem. The test passes cleanly. Nobody has to confront the hidden dependency.

Explicit parameters make the pain immediate. A function with five hidden dependencies becomes visibly ugly when you have to list them all in the function signature. A test setup that requires injecting three collaborators is telling you the function has too many responsibilities. That ugliness is not a problem with the approach. It is the design revealing itself honestly.

Mocks silence that signal. The design stays hidden and broken. The tests stay green. And the production system carries the cost.
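Here is a sketch of what that honest ugliness looks like; all names are illustrative. A function that secretly used an env var, a clock, a logger, a payment client, and a feature flag now lists all five, and the test shows every collaborator as a plain value at the call site.

```typescript
// Five hidden dependencies, now visible in one (deliberately ugly) type.
type ChargeDeps = {
  countryCode: string;
  now: () => Date;
  log: (msg: string) => void;
  charge: (customerId: string, amount: number) => void;
  isBetaEnabled: boolean;
};

function chargeCustomer(id: string, amount: number, deps: ChargeDeps) {
  if (deps.isBetaEnabled) {
    deps.log(`[beta] charging ${id} in ${deps.countryCode} at ${deps.now().toISOString()}`);
  }
  deps.charge(id, amount);
}

// In a test, every collaborator is explicit: no vi.mock, no module state.
const logs: string[] = [];
const charges: Array<[string, number]> = [];
chargeCustomer('42', 100, {
  countryCode: 'US',
  now: () => new Date(0),
  log: (m) => logs.push(m),
  charge: (customerId, amount) => charges.push([customerId, amount]),
  isBetaEnabled: true,
});
```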


The Fix: Make Dependencies Explicit Parameters

The root cause of all these problems is the same: hidden dependencies. The function takes id but secretly needs resolveCountryCode, a global environment variable, and potentially an external API. The caller has no way to know any of this from the function signature.

The fix is to make every dependency a parameter.

// customer/src/createCustomer.ts
import { createCustomerLabel } from 'core';

type CreateCustomerOptions = {
  countryCode: string;
  buildLabel: (id: string, countryCode: string) => string;
};

export function createCustomer(id: string, options: CreateCustomerOptions) {
  const label = options.buildLabel(id, options.countryCode);
  return { id, label };
}

Now the test is explicit:

describe('createCustomer', () => {
  it('builds a customer with label', () => {
    const result = createCustomer('42', {
      countryCode: 'US',
      buildLabel: (id, country) => `${country}:customer:${id}`,
    });

    expect(result.label).toBe('US:customer:42');
  });

  it('handles empty country code', () => {
    const result = createCustomer('42', {
      countryCode: '',
      buildLabel: (id, country) => `${country}:customer:${id}`,
    });

    expect(result.label).toBe(':customer:42');
  });
});

No mocks. No vi.mock. No implicit module state. Every input is visible in the test body. Every output is verifiable. The test is the specification: it tells you exactly what createCustomer expects and what it produces.

If you want to test the real createCustomerLabel from core, you use it directly:

import { createCustomerLabel } from 'core';

it('uses real label builder', () => {
  const result = createCustomer('42', {
    countryCode: 'LT',
    buildLabel: createCustomerLabel,
  });

  expect(result.label).toBe('LT:customer:42');
});

Now that test is an integration test, explicit about what it uses, not hiding it behind a module mock.


What This Costs and Why It Is Worth It

Passing dependencies as parameters means your functions take more arguments. Some teams resist this because it "complicates the API." That objection is wrong.

The complication was always there; you were just hiding it. A function that secretly reads from an environment variable or calls an external API is not a simple function. It is a simple-looking function with hidden complexity. Making that complexity explicit in the signature does not add cognitive load; it moves the load from implicit to visible.
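One common way to keep call sites short while keeping dependencies explicit, sketched here under illustrative names (this pattern is an assumption, not from the post): give the dependencies a production default, so production callers pass nothing extra while tests inject stubs explicitly.

```typescript
function createCustomerLabel(id: string, countryCode: string): string {
  return `${countryCode}:customer:${id}`;
}

type Deps = {
  buildLabel: (id: string, countryCode: string) => string;
};

// Production wiring lives in one place, as a default value.
const defaultDeps: Deps = { buildLabel: createCustomerLabel };

function createCustomer(id: string, countryCode: string, deps: Deps = defaultDeps) {
  return { id, label: deps.buildLabel(id, countryCode) };
}

// Production callers stay simple:
const prod = createCustomer('42', 'LT');
console.log(prod.label); // "LT:customer:42"

// Tests still inject a stub, visibly, with no module mock:
const tested = createCustomer('42', 'LT', {
  buildLabel: (id, country) => `${country}:stub:${id}`,
});
```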

The benefit is that your tests become verifiable contracts. You can write a failing test before changing implementation. You can read any test in isolation and know exactly what it covers. You can add a dependency to a function and immediately see which tests need updating. The compiler tells you.

Mocks are not free. Their cost is tests that pass even when code is broken, test suites that lie, and modules that couple to each other through implicit global behavior. That cost compounds over time.

Pass dependencies as parameters. Write tests that read like specifications. Use mocks only when you genuinely have no other option, and when you do reach for one, treat it as a signal that your design has a hidden dependency worth making explicit.
