Nijin Muhammed

May 15, 2025 • 2 min read

Can You Even Tell If It's Real Anymore?

Google came out with this new thing called SynthID

These days, it’s getting really hard to know if something you see online is real — or made by AI. A photo, a video, a design, even a piece of writing — it might look real, feel human, but turns out, it’s made by a machine.

And honestly? Most of the time, you won’t even know.

This is a big deal. Because it affects how we trust what we see. It affects how people make decisions, what they believe, and even what they buy.

Now, Google just released something called SynthID: a new kind of invisible watermark for AI-generated content. Basically, it marks content while the AI is generating it, so a detector can later check whether something was made by AI. That sounds like a step forward, right?
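
If you're curious how a watermark can be invisible but still detectable, here's a little toy sketch in Python. To be clear: this is not Google's actual SynthID algorithm (that lives inside their models, and it's fancier). It just shows the general trick behind statistical text watermarks: a secret key quietly nudges which words the model picks, and a detector holding the same key can measure that nudge. Every name here (SECRET_KEY, is_green, all of it) is made up for illustration.

```python
import hashlib
import random

SECRET_KEY = "demo-key"  # hypothetical secret, held only by the watermarker

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically split the vocabulary into 'green' and 'red'
    halves, re-shuffled for every context by hashing it with the key."""
    digest = hashlib.sha256(f"{SECRET_KEY}:{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0

def pick_token(prev_token: str, candidates: list[str]) -> str:
    """Watermarking sampler: silently prefer 'green' candidates.
    The output still reads normally; the bias is purely statistical."""
    green = [t for t in candidates if is_green(prev_token, t)]
    return random.choice(green or candidates)

def green_fraction(tokens: list[str]) -> float:
    """Detector: unwatermarked text lands near 0.5; watermarked text
    scores noticeably higher, because the sampler favored green tokens."""
    pairs = list(zip(tokens, tokens[1:]))
    return sum(is_green(p, t) for p, t in pairs) / max(len(pairs), 1)
```

Run green_fraction over a long enough passage, and whoever holds the key can tell watermarked text from clean text, even though the text itself reads totally normal to a human.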

But here’s the issue...

The problem ain't the tech. It's the people using it.

See, for this watermarking thing to actually work, AI companies have to choose to use it. Google is using it in their own models, which is great. But not everyone’s gonna do the same. Companies like OpenAI probably won’t want to add watermarks to their content.

Why?

Because think about it — if people get to choose between content that’s watermarked and content that’s clean (no watermark), they’ll probably pick the clean one. Nobody wants their stuff marked, even if they don’t know it’s there. It feels like being tracked or tagged.

Okay… so what if they make a law?

Let’s say they pass a law saying everyone has to use watermarks. Would that solve it?

Nope.

Here’s why: open-source code.

Big companies like Meta already share their AI models openly. Anyone, good or bad, can download the weights and the code, run them on their own computer, and even tweak them. That means someone could just delete the watermarking step from the code and boom: back to clean, untraceable AI content.

Even if the law said, “Hey, open-source stuff has to include watermarks,” bad actors aren’t gonna care. They’ll just rip the watermark code out anyway.
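
To make that concrete with the toy sampler from earlier: "ripping the watermark out" is basically a one-line change. There's no lock to pick. You just stop calling the biased code path.

```python
def pick_token_clean(prev_token: str, candidates: list[str]) -> str:
    """The same sampler with the watermark bias deleted: plain random
    choice. (prev_token is kept only to mirror the original signature.)
    The output is statistically clean, and no detector can find a
    signal that was never embedded."""
    return random.choice(candidates)
```

That's the whole problem in a nutshell: the watermark only exists if the person running the model keeps it in.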

So where does that leave us?

This watermarking tech is cool. It’s smart. But right now, it can’t really stop people from hiding where content came from. We still won’t know for sure if something online was made by a human or an AI.

That’s kinda scary, right? Because it means fake content, deepfakes, misinformation — all of that stuff can still fly under the radar.

Tech is trying. But the real challenge isn’t just building it — it’s getting everyone to actually use it.


Thanks for reading. Stay curious, stay skeptical.

If this topic interests you, keep learning. Things are changing fast — and the more we understand, the better we can navigate this new world.

Feel free to share this with someone who needs to know.
