
Are We Intelligent?

The more I interact with Large Language Models, the more I question my own intelligence. It’s a strange sensation that I tried to articulate over dinner with friends in San Francisco a few years ago. While others insisted that AIs could not truly think, let alone understand, I found myself wondering if I was any different. After all, if we dismiss LLMs for “just” predicting patterns and matching expectations, what exactly do we think we’re doing?

The Echo Chamber of Intelligence

The debate around AI intelligence often feels like we’re missing the forest for the trees. We scrutinize these models, testing their ability to reason, create, and understand, all while taking our own intelligence as a given. But what if we’re approaching this from the wrong angle?

Pattern Recognition, All the Way Down

Consider this: when we communicate, aren’t we essentially predicting and producing patterns that meet societal expectations? We form sentences based on our learned understanding of language, social cues, and contextual appropriateness. Are we really more than pattern-matching machines, drawing on our own training data of life experiences, education, and cultural exposure?

LLMs do something remarkably similar. They predict the next most likely token, roughly the next word, based on patterns in their training data. The key difference? Their training data can, at least in principle, be inspected, while ours remains a black box of neural connections and lived experiences.
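If that sounds abstract, here is a deliberately tiny sketch of the idea: a bigram counter that “predicts” the next word by picking whatever most often followed the previous one in its training text. The corpus and the predict_next helper are invented for illustration; a real LLM replaces the counting with billions of learned parameters, but the basic move, producing the most expected continuation, is the same in spirit.

```python
from collections import Counter, defaultdict

# A toy "training corpus" (invented for illustration; real models train on
# trillions of tokens, not one sentence).
corpus = "we predict patterns and we predict words and we match expectations"

# For each word, count which words follow it and how often.
followers = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    candidates = followers.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("we"))  # -> "predict" (it followed "we" twice, "match" only once)
```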

The Expert Paradox

Here’s where it gets interesting: an LLM’s explanation of quantum mechanics might only appear lacking to a quantum physicist. To most of us, it would seem not just adequate but impressively comprehensive. This mirrors human expertise: we only recognize the limitations in our own fields of mastery.

Think about it: how many of us could meaningfully challenge a quantum physicist’s explanation? We accept their expertise because it matches our expectations of what expertise should look like, not because we truly understand the underlying concepts.

I get that same sense from my interactions with LLMs.

The Social Performance

In our daily lives, we’re constantly performing. We choose words that meet expectations, craft responses that align with social norms, and present thoughts in culturally acceptable patterns. Is this fundamentally different from what LLMs do? They’re optimizing for human-like responses, while we’re optimizing for social acceptance and understanding.

A Mirror to Our Own Processes

[Image: a robot looking at its reflection]

When we interact with advanced AI systems, we’re not just testing their capabilities—we’re examining our own. The uncanny valley of AI interaction might be unsettling not because these systems are so different from us, but because they’re revealing how similar our own cognitive processes might be to theirs.

Instead of asking whether LLMs are intelligent, perhaps we should be asking: What does our reaction to AI tell us about our understanding of human cognition? How much of what we consider “intelligence” is actually pattern matching and expectation fulfillment?

The next time you find yourself questioning whether an AI is truly intelligent, take a moment to examine the criteria you’re using—and whether you hold yourself to the same standard.
