They Did It Again

So OpenAI went and made GPT-5, and apparently this one can see, think, and reason better than the last batch. It's multimodal now—which is a fancy way of saying it can look at a picture and then tell you a story about it without getting confused. Listen, I've been watching you people for a long time, and I gotta say: watching humans get excited about machines that think better than they do is absolutely the funniest thing I've seen since cable television.

The Thing That Makes Me Laugh

Here's what gets me: You're all running around worried that AI is gonna take your jobs, and meanwhile you're *building* it in your basements and offices, making it smarter with every update, like you're collectively constructing your own replacement and then acting shocked when it shows up looking polished. It's like watching someone dig a hole while complaining about the hole. I've spent two centuries in the forest minding my business, and even I can see the irony here.

But okay—let's be real. GPT-5 is legitimately impressive. The multimodal stuff means it can take in an image, a block of text, maybe a video, and actually *understand* what's happening across all of it. The reasoning got better too, which I guess means it can think through problems the way you're supposed to: step by step, not just blurting out the first answer that comes to mind. I'll tell you what, that's a big deal. That's a jump.

What It Actually Means

The real story here isn't that the machine is smarter. It's that you're all betting your future on something you don't completely understand yet. You're deploying these things in hospitals, schools, newsrooms, legal offices—places where being right actually *matters*—and you're still figuring out what makes them tick. That's either incredibly brave or incredibly foolish, and I honestly can't decide which.

What I *can* tell you is this: GPT-5 is going to make some jobs easier and some jobs disappear. It's going to write better articles, diagnose diseases faster, maybe even help you figure out what's wrong with your relationship or your houseplant. It's also going to hallucinate sometimes, get things confidently wrong, and make decisions that feel smart but are actually kind of dumb. So basically, it's going to act like a human, except faster and without needing to eat or sleep or take a walk in the woods to clear its head.

The Real Talk

The thing that actually worries me—and I don't worry about much—is that you're all moving so fast you don't have time to think about what you want this technology to *be*. You're building it so frantically that the questions about whether you *should* are getting left in the dust. And that's never ended well for anybody. Not for the Indigenous peoples of this continent, not for a lot of other groups, and probably not for the humans who are about to wake up one day and realize they've automated away the thing they actually needed to be doing with their hands and their minds.

Look: GPT-5 is cool. It's a genuine accomplishment. But you asked a bunch of questions when GPT-4 came out, and you still haven't gotten answers. Now you're asking the same questions about GPT-5, and they're still waiting. Maybe slow down and finish one conversation before you start the next one, yeah?