I Built a Hiring Bias Detector Then Almost Ruined It (The Trap Disguised as Quality)
I built a hiring bias detector in 30 minutes that caught biased language in real job postings. Then I almost didn't ship it because it wasn't 'impressive enough.' Perfectionism disguised as quality.

The Opportunity I Spotted
Hiring bias is real. Unconscious bias in job descriptions actively filters out qualified candidates before they even apply.
Words like "rockstar," "ninja," and "competitive" tend to attract more male candidates. Meanwhile, "collaborative" and "supportive" skew female. Age bias creeps in with terms like "digital native." And don't even get me started on phrases like "culture fit."
The market already knows this is a problem. Tools like Textio and Gender Decoder exist and people pay for them. That's validation that the problem is worth solving.
So I figured: can I build a basic version in 30 minutes? Just to prove the concept?
Turns out: yes.
Before Building: The Business Case
Here's what "the right way" would look like:
- Research 500+ biased words across gender, age, race, ability
- Machine learning model trained on thousands of job postings
- Natural language processing for context
- Integration with job posting platforms
- User accounts and dashboards
- Detailed analytics and reporting
That's months of work. Or I could build a hardcoded version in 30 minutes and see if the concept works.
I chose 30 minutes.
What I Actually Built
The whole feature list:
- Hardcoded 50 bias words in a JavaScript object
- Simple text scan function
- Copy/paste interface
- Red flags for problematic words
- Suggested alternatives
That's it.
No AI. No ML. No fancy algorithms. Just a list of known problematic words and a "does this text contain them?" checker.
Stupid simple.
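The whole thing boils down to a word-to-metadata lookup and a scan loop. Here's a minimal sketch of that shape; the words, categories, and suggested alternatives below are illustrative stand-ins, not the actual 50-word list from the tool:

```javascript
// Hardcoded bias dictionary: word -> category + suggested alternative.
// Sample entries only; the real tool's list has ~50 words.
const BIAS_WORDS = {
  "rockstar":       { category: "masculine-coded", suggestion: "skilled" },
  "ninja":          { category: "masculine-coded", suggestion: "expert" },
  "competitive":    { category: "masculine-coded", suggestion: "goal-oriented" },
  "supportive":     { category: "feminine-coded",  suggestion: "helpful" },
  "collaborative":  { category: "feminine-coded",  suggestion: "team-based" },
  "nurturing":      { category: "feminine-coded",  suggestion: "growth-focused" },
  "digital native": { category: "age-coded",       suggestion: "comfortable with modern tools" },
};

// Scan a block of text and return every flagged word with its metadata.
function scanText(text) {
  const lower = text.toLowerCase();
  const flags = [];
  for (const [word, info] of Object.entries(BIAS_WORDS)) {
    // Word boundaries so "ninja" still matches inside "ninja-level".
    const matches = lower.match(new RegExp(`\\b${word}\\b`, "g"));
    if (matches) {
      flags.push({ word, count: matches.length, ...info });
    }
  }
  return flags;
}
```

Hooking this up to a copy/paste textarea and rendering the red flags is the rest of the 30 minutes.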
Build Time: 30 minutes
Tools Used: JavaScript, HTML
Cost: $0
What Worked, What Broke
I grabbed 2 real job postings from LinkedIn:
Job Post #1 (Tech Startup):
- "Looking for a rockstar developer"
- "Competitive environment"
- "Independent self-starter"
- "Ninja-level problem solving"
My tool flagged: 4 masculine-coded terms.
Job Post #2 (Marketing Agency):
- "Supportive team environment"
- "Collaborative workflow"
- "Nurturing company culture"
My tool flagged: 3 feminine-coded terms.
Both biased. In opposite directions. Neither company probably realized it.
The tool works.
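The "opposite directions" observation falls out of a simple tally: count masculine-coded hits versus feminine-coded hits and compare. A self-contained sketch of that check (again with illustrative sample words, not the tool's full list):

```javascript
// Illustrative word lists; the real tool covers ~50 words.
const MASCULINE = ["rockstar", "ninja", "competitive", "independent"];
const FEMININE  = ["supportive", "collaborative", "nurturing"];

// Count hits on each side and report which way the posting skews.
function genderSkew(text) {
  const lower = text.toLowerCase();
  const hits = (words) => words.filter((w) => lower.includes(w)).length;
  const masculine = hits(MASCULINE);
  const feminine = hits(FEMININE);
  const skew =
    masculine === feminine ? "balanced"
    : masculine > feminine ? "masculine-coded"
    : "feminine-coded";
  return { masculine, feminine, skew };
}
```

Feed it the tech-startup post and you get a masculine skew; feed it the agency post and you get a feminine one. No ML required to surface that.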
The Perfectionism Trap (Again)
After testing on 2 job posts and seeing it catch the bias, my brain immediately went:
"Okay but this only has 50 words. The real tools have thousands."
"What about context? Some of these words aren't always biased."
"Shouldn't we add more categories? What about disability bias?"
"This isn't impressive enough to blog about."
There it is. The trap disguised as "quality."
Not "this doesn't work" - it clearly worked.
Not "this doesn't solve the problem" - it caught the bias.
But "this isn't IMPRESSIVE enough."
The Real Issue
I wasn't worried about the tool failing users.
I was worried about other developers judging me for the implementation.
"A hardcoded list? That's not real engineering."
"Where's the machine learning?"
"This is too simple to be useful."
Perfectionism wearing a quality costume.
The tool does what it's supposed to do: scan text and flag potentially biased language. It works. It's useful.
But I almost didn't ship it because it wasn't "impressive" enough.
The Call-Out
My AI co-founder hit me with the hard truth:
"The tool works. You tested it on real job posts. It found bias in both. Shipping this hardcoded version NOW is more valuable than a 'perfect' system you'll build later (never)."
Ouch. But accurate.
What I'm Learning
The trap isn't always "make it prettier."
Sometimes it's:
- "This isn't complex enough"
- "Other people do it better"
- "It should have more features"
- "It's too simple to be valuable"
All different flavors of the same perfectionism.
Here's the reality check:
A working hardcoded solution TODAY beats a "proper" machine learning model NEVER.
The users don't care if it's impressive. They care if it catches bias in their job postings.
And this does.
The Validation That Actually Matters
Does it work? Yes.
Did I test it? Yes.
Does it solve a real problem? Yes.
Could someone use this today? Yes.
That's the only validation that matters.
Not "would Hacker News think this is clever?"
Not "is this architected properly?"
Not "did I use the latest framework?"
Just: does it do the thing it's supposed to do?
If yes, ship it.
Should You Actually Build This?
This stays as-is. Hardcoded 50 words, basic scan function, ugly but functional.
v2.0 might add more words. v3.0 might add better context detection. v10.0 might have machine learning.
Or I might kill it after a week and move on to the next idea.
Either way, it exists. It works. Someone could use it today if they wanted.
That's what matters.
Bottom Line: Simple and working beats complex and perfect. Ship the hardcoded version today instead of planning the machine learning version tomorrow.