Why My Left Hand Disappeared in AI: A Simple Truth About Training Data

The other day, I gave an AI a simple prompt:
“Draw me writing with my left hand on a blackboard.”

What came out?
A beautiful image. Everything was perfect — except for one thing.
I was confidently writing with my right hand.

Now, I know what you’re thinking:

“Maybe the prompt wasn’t clear?”
Nope. Clear as day.
“Maybe the model misunderstood?”
That would assume it’s thinking. It’s not.

So why did it still show me writing with the right hand?

The answer is simple — and it tells us a lot about how AI actually works.

AI Doesn’t “Think” — It Predicts What’s Most Likely

When we say “AI generated this image,” we often assume it’s smart — like a little robot that understands the world.

But here’s the truth:

AI doesn’t understand. It just completes patterns.

These image models are trained on billions of images and captions scraped from the internet — stock photos, blog posts, media libraries, social content, you name it.

So when I say:

“Person writing with left hand on blackboard”

The model dives into its massive memory bank and says:

“Ah, I’ve seen lakhs (hundreds of thousands) of images of people writing… and 95% of them use their right hand.”

So even when I ask for the left, it defaults to the right: not because it misread the prompt, but because it has learned that right = most common.
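
If it helps to see the mechanics, here's a toy sketch in Python. To be clear: this is not how a real image model works inside. It's a made-up sampler, and every number in it (the 95/5 split, the prompt_strength knob) is invented purely to illustrate the point:

```python
import random

# Toy sketch only: a real image model has no list like this, but pretend
# these are the "writing hand" examples it absorbed during training.
# The 95/5 split is a made-up number.
training_data = ["right"] * 95 + ["left"] * 5

def generate(prompt_hand, prompt_strength=0.2):
    """Blend what the prompt asks for with what the data made likely.
    prompt_strength is an invented knob for how much the prompt
    outweighs the learned prior."""
    prior_left = training_data.count("left") / len(training_data)  # 0.05
    asked_left = 1.0 if prompt_hand == "left" else 0.0
    p_left = prompt_strength * asked_left + (1 - prompt_strength) * prior_left
    return "left" if random.random() < p_left else "right"

random.seed(0)
outputs = [generate("left") for _ in range(1000)]
print("Asked for 'left' 1000 times, got left:", outputs.count("left"))
# Every single prompt said "left", yet most outputs come back "right",
# because the weak prompt signal loses to the strong learned prior.
```

Run it and "right" wins roughly three times out of four, even though every prompt asked for "left". That lopsided result is my blackboard image in fifteen lines.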

It’s Not a Bug. It’s a Bias.

In India, we have a word for this kind of thing: aadat (habit).
The model isn’t being disobedient. It’s just doing what it’s always seen.

Now apply this to everything AI generates:

  • Ask for a CEO? You’ll often get a man in a blazer, clean-shaven, fair-skinned.
  • Ask for a teacher? You might get a woman, chalk in hand, smiling at a blackboard.
  • Ask for a wedding? Probably not South Indian unless you force it.

It’s not because AI hates diversity — it’s because it’s trained on a skewed slice of the internet, and the internet has its own biases.

And once a pattern becomes dominant in the data, it becomes default in the model.

Left-Handed in Real Life. Invisible in Data.

Roughly 1 in 10 people in the world are left-handed.
But on the internet? You’d think it’s 1 in 1000.

Because:

  • Stock photographers shoot for symmetry
  • Textbooks rarely label “left-handed”
  • Social content rarely tags hands at all
  • And most of what gets uploaded simply shows right-handers

So the model didn’t “forget” my left hand.
It just never really saw it enough to remember.

That’s how bias enters AI without anyone trying — not because someone hardcoded it, but because the data had a pattern, and the model learned it blindly.
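
If you want to see how fast that happens, here's a back-of-the-envelope sketch. The 1-in-10 figure is the commonly cited real-world rate; every filter rate below it is a number I invented just to show the compounding:

```python
# Back-of-the-envelope sketch. Only the first figure is real-world;
# every "survives_" fraction is invented purely to show how filters
# stack on top of each other.
real_world_rate = 0.10   # roughly 1 in 10 people are left-handed

survives_posing  = 0.3   # posed/stock shots favor the "standard" look
survives_caption = 0.2   # captions rarely mention handedness at all
survives_uploads = 0.5   # what actually gets uploaded skews mainstream

dataset_rate = (real_world_rate
                * survives_posing
                * survives_caption
                * survives_uploads)   # 0.003

print(f"Real world: about 1 in {1 / real_world_rate:.0f}")
print(f"In the data: about 1 in {1 / dataset_rate:.0f}")
# Three modest filters turn 1 in 10 into roughly 1 in 333: still out
# there, but far too rare for the model to learn "left" as a default.
```

Swap in your own guesses for the filter rates; the exact numbers matter far less than the multiplication.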

Why This Matters (Even If You’re Not Building AI)

Now you might say — “Big deal, it’s just a hand.”
But if AI can get something as basic as handedness wrong, what else might it subtly twist?

When we use AI to generate:

  • People
  • Stories
  • Personalities
  • Representations of culture, gender, identity…

…the model is pulling from what it has seen most often, not what is most accurate.

And unless we understand this, we’ll start accepting these defaults as reality — even when they exclude people like us.

The Bias Is Quiet, But It’s Always There

So next time AI shows you something that feels a little off —
Like a left-hander writing with their right,
Or a South Indian wedding that somehow looks like a Paris shoot,
Or a woman doctor who suddenly becomes a nurse…

Don’t blame the model.
Blame the data it was trained on.

Because AI doesn’t hallucinate out of nowhere.
It just repeats what it’s been taught to see.

And if the left hand disappears in your prompt — it’s not a glitch.

It’s the quiet cost of being statistically rare in a machine built for the majority.

