Smelly tests are easy to recognize once you’ve been burned by them enough times. This article walks through the most common automation smells, from flaky “just rerun it” tests to over-mocked setups that don’t reflect real user behavior. If your test suite technically passes but still feels untrustworthy, this is a quick, relatable guide to spotting problems early and understanding what they usually say about your test design.
Have you ever opened a test and immediately regretted it?
Nothing has failed yet. You have not even hit run. But your brain already knows. Something is off. The setup is massive. The mocks keep coming. The selectors look like they were copied from a browser dev tool at 2 a.m.
That’s not intuition. That’s experience.
Smelly tests announce themselves early. And once you know the signs, you start spotting (and smelling) them everywhere.
The “Just Rerun It” Test
This one does not fail often. Just often enough.
Someone says, “It’s flaky, but it usually passes.” That is tester code for “we don’t trust this test anymore.”
If a test needs a second run to prove it was right the first time, it is not protecting anything. It is training the team to ignore failures. That is worse than having no test at all.
Flakiness almost always means the test is guessing. Guessing about timing. Guessing about state. Guessing about what the system will do next.
Tests should not guess.
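To see the difference between guessing and knowing, here is a minimal Playwright-style sketch. The URL and the `#status` locator are hypothetical stand-ins for your app:

```typescript
import { test, expect } from '@playwright/test';

// Flaky: gambles that 3 seconds is always enough. Sometimes it isn't.
test('order status (guessing)', async ({ page }) => {
  await page.goto('https://example.com/orders/42');
  await page.waitForTimeout(3000);
  expect(await page.locator('#status').textContent()).toBe('Shipped');
});

// Stable: waits for the exact condition the test cares about.
test('order status (waiting)', async ({ page }) => {
  await page.goto('https://example.com/orders/42');
  // toHaveText auto-retries until the text appears or the test times out.
  await expect(page.locator('#status')).toHaveText('Shipped');
});
```

The second test states what must be true before it asserts. No guessing, no reruns.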
The DOM Archaeology Test
You change a label. A test fails. You move a div. Five more fail.
You inspect the selector, and it looks like an ancient map of the DOM. Nested. Fragile. Completely dependent on how things happen to be structured today.
These tests are not testing behavior. They are memorizing layout trivia.
If a tiny UI refactor breaks half your suite, your tests are too close to the implementation and too far from what users actually do. If that sounds familiar, brush up on Good Selectors.
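As a sketch, here is the same click written both ways in Playwright. The markup and the “Save” button are hypothetical:

```typescript
import { test } from '@playwright/test';

test('DOM archaeology vs. user-facing selectors', async ({ page }) => {
  await page.goto('https://example.com/settings');

  // Fragile: memorizes today's layout. Move one div and this dies.
  await page
    .locator('div.content > div:nth-child(3) > form span.btn-wrap > button')
    .click();

  // Resilient: describes what the user sees, not where it happens to live.
  await page.getByRole('button', { name: 'Save' }).click();
});
```

The second selector survives refactors because it is anchored to behavior, not structure.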
The Mock Museum
This is the one you can smell from across the room.
Mock the API. Mock the auth service. Mock the feature flags. Mock the dates. Mock the config. Mock the mock.
At some point, you stop testing your system. You are testing a carefully staged simulation that users will never see.
Mocks are not evil. They are useful. But when most of the system is fake, the confidence is fake too.
If users talk to the real backend, the test should probably do the same.
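For illustration, here is what a mock museum tends to look like in Jest. Every module name here (`./api`, `./auth`, `./featureFlags`, `./checkout`) is hypothetical:

```typescript
import { checkout } from './checkout';

jest.mock('./api', () => ({
  createOrder: jest.fn().mockResolvedValue({ id: 42 }),
}));
jest.mock('./auth', () => ({
  currentUser: () => ({ id: 'u1', plan: 'pro' }),
}));
jest.mock('./featureFlags', () => ({ isEnabled: () => true }));

test('checkout succeeds', async () => {
  // Every dependency is staged, so this mostly verifies the staging:
  // the mocked order id comes straight back out as the result.
  await expect(checkout({ items: 3 })).resolves.toEqual({ orderId: 42 });
});
```

A test that exercises a disposable real backend would tell you far more here, at the cost of a little speed.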
The “What Is This Even Testing?” Test
You read the test name. Still not sure.
You read the setup. Now you are even less sure.
You read the assertions and realize the test checks three different things, none of which are clearly related.
These tests usually grew over time. One more check here. One more helper there. Now nobody wants to touch it.
If a test does not tell a clear story, it will not age well.
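A sketch of the before and after, where `buildCart`, `applyDiscount`, and `renderSummary` are hypothetical helpers standing in for whatever your suite has accumulated:

```typescript
import { buildCart, applyDiscount, renderSummary } from './cart';

// Before: three loosely related checks, no clear story.
test('cart works', () => {
  const cart = buildCart([{ sku: 'A', price: 10 }]);
  expect(cart.total).toBe(10);
  expect(applyDiscount(cart, 'SAVE10').total).toBe(9);
  expect(renderSummary(cart)).toContain('1 item');
});

// After: one behavior per test, and each name tells the story.
test('totals a single-item cart', () => {
  expect(buildCart([{ sku: 'A', price: 10 }]).total).toBe(10);
});

test('applies a percentage discount code', () => {
  const cart = buildCart([{ sku: 'A', price: 10 }]);
  expect(applyDiscount(cart, 'SAVE10').total).toBe(9);
});
```

When the split tests fail, the name alone tells you what broke. The combined one just tells you to start digging.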
The Green Build That Nobody Believes
Everything passes. CI is green. And yet the release still feels risky.
This is the quiet smell. The one that does not show up in failure logs.
It happens when tests exist mostly because they always have. They check happy paths. They mirror the implementation. They never fail when real bugs happen.
A passing test is only useful if it would fail for the right reasons.
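A quick sketch of the difference, with a hypothetical `parseQuantity` function:

```typescript
import { parseQuantity } from './parse';

// Green forever: only exercises the input the code was written against.
test('parses a quantity', () => {
  expect(parseQuantity('3')).toBe(3);
});

// Earns its keep: fails if someone breaks the validation users depend on.
test('rejects non-numeric and negative quantities', () => {
  expect(() => parseQuantity('abc')).toThrow();
  expect(() => parseQuantity('-1')).toThrow();
});
```

The first test will stay green through almost any regression. The second one has a reason to exist.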
A Quick Smell Test You Can Run Anytime
When you are not sure about a test, try this:
Ask what real problem it would catch
Ask how confident you would feel removing it
Ask how often it breaks for unrelated reasons
Ask whether it looks like something a user actually does
If the answers feel uncomfortable, your nose is probably right.
You Do Not Need to Eliminate Every Smell
Every test suite has some stink. That is normal.
The goal is not a perfect suite. The goal is to stop adding new smells and slowly clean up the worst ones.
Fix flaky tests before adding more. Delete tests that do not earn their keep. Favor clarity over cleverness. Test the system your users actually use.
Good tests fade into the background. Smelly ones make themselves known.
And once you notice the smell, it is hard to ignore.