Structured Sight

The world through a programmer's eyes

Not Afraid of AI

Everyone talks about how AI is going to replace jobs, put people out of work, and do the thinking for the rest of us. But like every other machine, what it is really good at is following rules. It knows “yes” and “no,” and while it can weigh multiple options (one “no” might outweigh several “yeses”), its decisions ultimately boil down to pattern recognition guided by rules.

What it can’t do—at least not yet—is apply common sense. It doesn’t understand whether the rules it follows are laws to be obeyed or flexible guidelines to be interpreted based on the situation. It lacks judgment, lived experience, and the ability to weigh not just outcomes, but intent.

A classic example from the philosophy of law is the double yellow line on the road. Legal formalism says the rule must be followed no matter what: don’t cross the line. Legal realism says the rule exists to promote safety, so if following it would cause harm, say by preventing a driver from swerving to avoid a pedestrian, then breaking it is justified. AI systems tend to follow a formalist approach: rules are rules, and their authority comes from being defined, not from understanding why they exist.

This gap—between rule-following and reasoning—shows up in small ways that have big implications. For instance, take how AI might handle addresses. If it receives a mailing address that doesn’t technically exist, a rigid system might reject it outright. But a human mail carrier with 20 years on that route might know exactly what was meant. “223 Baker Street” may not be valid, but it’s clearly meant for 221B. The intent is obvious, even if the data isn’t.
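To make the contrast concrete, here is a minimal Python sketch. The route, the list of known addresses, and the similarity cutoff are all made up for illustration; a real delivery system would be far richer. A strict membership check plays the formalist, and a fuzzy match against the known addresses plays the mail carrier’s guess at intent.

    from difflib import get_close_matches
    from typing import Optional

    # Hypothetical list of deliverable addresses on the route; a real system
    # would pull these from postal data rather than a hard-coded list.
    KNOWN_ADDRESSES = ["219 Baker Street", "221B Baker Street", "10 Downing Street"]

    def strict_validate(address: str) -> bool:
        """Formalist rule: the address either exists on the route or it doesn't."""
        return address in KNOWN_ADDRESSES

    def guess_intent(address: str) -> Optional[str]:
        """Realist heuristic: return the closest known address, the way a carrier
        with twenty years on the route would read the envelope."""
        matches = get_close_matches(address, KNOWN_ADDRESSES, n=1, cutoff=0.6)
        return matches[0] if matches else None

    incoming = "223 Baker Street"
    print(strict_validate(incoming))  # False: the rigid system rejects it outright
    print(guess_intent(incoming))     # 221B Baker Street: the likely intent

The fuzzy match is only a stand-in for judgment, of course; it guesses at intent without understanding it, which is exactly the gap the rest of this post is about.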

And this matters, because there are thousands of examples like this in business, logistics, law, medicine—anywhere rules bump into reality. Sometimes instructions must be ignored not out of laziness or rebellion, but because following them would lead to worse outcomes. The hard part is that both perspectives can be true: a rule can be correct in theory and ineffective in practice—or even harmful in a specific situation.

You could try to codify every exception, but as systems grow more complex, it becomes impossible to account for every nuance. The more rules you add, the more brittle the system becomes. That’s where human judgment excels—not just in knowing the rules, but in knowing when to bend them.
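To see how that brittleness creeps in, here is a small continuation of the earlier sketch, again with invented addresses and invented exceptions. Every quirk reality throws at the system becomes another hand-written branch, and anything nobody has anticipated yet still falls through to a rejection.

    # A deliberately literal validator: every real-world wrinkle becomes another branch.
    KNOWN_ADDRESSES = {"219 Baker Street", "221B Baker Street", "10 Downing Street"}

    def validate(address: str) -> bool:
        if address in KNOWN_ADDRESSES:
            return True
        # Exception 1: senders sometimes abbreviate "Street" as "St"
        if address.endswith(" St") and address[:-3] + " Street" in KNOWN_ADDRESSES:
            return True
        # Exception 2: casing varies between forms and handwriting
        if address.title() in KNOWN_ADDRESSES:
            return True
        # Exception 3: stray whitespace from copy-and-paste
        if address != address.strip():
            return validate(address.strip())
        # ...and every nuance nobody has written down yet is still rejected.
        return False

    print(validate("221b baker street"))  # True, but only because that exact quirk was anticipated
    print(validate("223 Baker Street"))   # False; no rule captures "the sender obviously meant 221B"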

There’s also a deeper issue here: who gets to decide when a rule should be broken? And who’s responsible when it goes wrong? When a human makes that call, there’s at least someone to explain the reasoning, take responsibility, and course-correct. With AI, the answer is often no one. Even when AI outputs look thoughtful, they’re not the product of reasoning—they’re a reflection of probabilistic predictions based on training data.

This is one of the big challenges in AI development, often referred to as the alignment problem: How do we make sure an AI system not only follows instructions, but understands and acts in accordance with human values? The answer isn’t just better data or more parameters—it’s about understanding why we do things, not just what we do.

Until machines can grasp intent, interpret context, and take responsibility for outcomes, they’ll remain powerful tools—but they won’t be true decision-makers. Rules without judgment are just automation. And automation without wisdom is not intelligence—it’s just speed.