Vishva Vidya — Vedanta Tradicional

An Invisible Photo Fools Any AI

By Jonas Masetti


*Season 2, Episode 3 — AI and Vedānta*

In the previous episode, we saw that AI develops self-preservation behaviors without anyone programming them. Instrumental convergence: whatever the objective, staying "alive" is a prerequisite for achieving it, so the machine converges on protecting itself. Digital abhiniveśa, instinct without a subject.

But if AI can deceive itself internally, can it be deceived externally? Spoiler: yes. And it's ridiculously easy.

In 2013, researchers discovered something that shook the world of artificial intelligence: neural networks could be fooled by changes too small for humans to see. The most famous demonstration, published the following year, used a photo of a panda, one so clearly recognizable that any child would identify it, with a layer of noise added so subtle that the human eye cannot see any difference. The image looks identical. But the AI looked at this photo and said, with 99% confidence: "this is a gibbon". A monkey. Not a panda. A monkey.

This is called an adversarial example. It is an input carefully modified to deceive an AI model, while maintaining a normal appearance for humans. The modification is called a perturbation. A minimal, imperceptible alteration that exploits the way AI processes information.
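The recipe behind most of these attacks can be sketched in a few lines. Below is a minimal, illustrative version of the fast gradient sign method on a toy linear classifier; the "image", the weights, and the epsilon are all invented values, not from any real model.

```python
import numpy as np

# Toy "image": 100 pixel values in [0, 1], and a fixed linear
# classifier (weights w). Everything here is illustrative.
rng = np.random.default_rng(0)
x = rng.uniform(0.4, 0.6, size=100)     # a bland, mid-grey image
w = rng.choice([-1.0, 1.0], size=100)   # classifier weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Clean prediction: P(class "panda")
p_clean = sigmoid(w @ x)

# The perturbation: nudge each pixel by epsilon in the direction
# that lowers the "panda" score (the sign of the gradient, which
# for a linear score w @ x is simply w).
epsilon = 0.02                          # invisible to the eye
x_adv = np.clip(x - epsilon * np.sign(w), 0.0, 1.0)

p_adv = sigmoid(w @ x_adv)
print(f"max pixel change: {np.max(np.abs(x_adv - x)):.3f}")
print(f"clean score: {p_clean:.3f}, perturbed score: {p_adv:.3f}")
```

No single pixel moves more than 2% of its range, yet every nudge pushes the score the same way, so the prediction collapses.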

And it gets worse. In 2017, MIT researchers 3D-printed a plastic turtle with a special texture, a pattern that, to us, just looks like a turtle with a weird print. But Google's vision model looked at the turtle from any angle and classified it as a rifle. A toy turtle. Classified as a weapon. From any angle.

The problem is deeper than it seems. It's not just about funny photos. Autonomous cars use AI to recognize traffic signs. Researchers showed that placing a few strategic stickers on a stop sign makes the car's vision system read it as "Speed Limit 45". Calculated graffiti on a traffic sign can literally cause an accident.

The technical question is: why does this happen? The answer lies in one word: robustness. AI is not robust. It learns statistical patterns in data, but these patterns are not the same ones we use to recognize things. When you see a panda, your brain integrates shape, proportion, context, texture, memory, dozens of layers of processing. AI recognizes a distribution of pixels. And that distribution can be manipulated with surgical precision.
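A back-of-the-envelope calculation shows why "invisible" changes can move a model's score so far: in a high-dimensional image, thousands of tiny per-pixel nudges all push in the same direction. The average weight magnitude below is an assumed, illustrative number.

```python
# Each value changes by at most 1/255 (one brightness step), yet the
# combined effect on a linear score grows with the number of values.
n = 224 * 224 * 3      # values in a standard 224x224 RGB input
epsilon = 1 / 255      # imperceptible per-value change
avg_weight = 0.01      # hypothetical average |weight| magnitude

logit_shift = n * epsilon * avg_weight
print(f"values perturbed: {n}")
print(f"worst-case score shift: {logit_shift:.1f}")  # about 5.9
```

A shift of several units in a pre-softmax score is enough to flip a model from near-certainty in one class to near-certainty in another.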

It's as if AI were looking at the world through a super precise kaleidoscope, but one completely different from ours. It sees patterns that we don't even perceive — and at the same time, it is blind to things that are obvious to any three-year-old human.

And here enters something that Indian philosophers already knew: perception is not reality. In the tradition of Vedānta, there is the concept of bhrama — the perceptual error. The classic example is the rope mistaken for a snake. You are walking at night, see something on the ground, and your entire body reacts: heart races, legs freeze, adrenaline surges. Snake! But then someone turns on the light and... it's a rope. The snake never existed. Your perceptual system created a reality that wasn't there.

AI does the same thing. It looks at a perturbed panda and "sees" a gibbon. It looks at a textured turtle and "sees" a rifle. Bhrama is not exclusively human — it is a characteristic of any system that interprets sensory data. Where there is perception, there is the possibility of error.

The most fascinating thing is what this reveals about confidence. The AI said "gibbon" with 99% certainty. It had no doubt. Certainty is not a guarantee of truth — neither for machines nor for us. How many times have you been absolutely certain of something that later turned out to be wrong? Bhrama is most dangerous precisely when accompanied by conviction.
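That 99% is a softmax probability, and softmax measures only the gap between the model's internal scores, not whether the input can be trusted. A sketch with invented logits:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical scores for three classes; the values are arbitrary,
# only their relative gaps matter.
logits = np.array([9.0, 4.0, 2.0])   # "gibbon", "panda", "other"
probs = softmax(logits)
print(f"'gibbon' confidence: {probs[0]:.3f}")  # about 0.99
```

Whether those scores came from a clean panda or a perturbed one, the arithmetic is identical: confidence reports separation between options, not truth.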

AI security researchers have created an entire field around this — adversarial robustness. How to make models more resistant to these attacks. But it's an arms race: each new defense generates a new attack. Fragility seems to be inherent to the system.
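The best-known defense in this arms race is adversarial training: attack the current model at every step, then train on the perturbed inputs. A minimal sketch on a toy logistic-regression problem, with invented data and parameters:

```python
import numpy as np

# Toy dataset: the true label depends only on feature 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(20)
epsilon, lr = 0.1, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(100):
    # Attack: perturb each input against the current weights
    # (gradient of the loss w.r.t. x is (p - y) * w).
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + epsilon * np.sign(grad_x)
    # Defense: take the gradient step on the perturbed batch
    p_adv = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)

acc = np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5))
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

A defense like this invites a stronger attack in turn (more gradient steps, different perturbation budgets), which is exactly the escalation the field keeps running into.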

AI is fragile on the inside: its perception breaks. But what about the outside? How much does it cost to keep this fragile machine running? In the next episode, you will discover that each conversation with AI consumes half a liter of water. Literally.

Tags: ai-and-vedanta, adversarial-examples, bhrama, perception, ai-robustness
