The mistake people make with AI is thinking the funniest part is the bug.
It is not the bug. It is the confidence.
Anybody can be wrong. A drunk comic at 3 AM can be wrong. A hedge-fund guy on his third espresso can be wrong. A columnist can be wrong in a thousand expensive words. The machine becomes funny when it is wrong with the tone of a dean, a consultant, a prophet, and a customer support rep all at once.
That is where the roast begins.
Claude, ChatGPT, and the rest of the silicon choir are sold as if they represent some clean civilizational leap. In practice they often feel like the most expensive magic 8-balls ever built, financed by people who think scale itself is a sacrament. The business layer is absurd enough on its own. Spend mountains, lose money on the premium users, call the whole bonfire inevitability. The product layer is no less ridiculous. Train on the whole internet, then occasionally produce a hallucinated legal citation delivered with the posture of a federal clerk.
That does not make the systems useless. It makes the gap between the sales pitch and the lived experience impossible to ignore for anybody with a pulse.
The real comedy is not "AI is dumb." That is too easy and no longer true in the lazy way skeptics wanted it to be. The real comedy is that the smartest tools in the room still keep revealing the oldest human weaknesses. Prestige. Suggestibility. Deference to fluent bullshit. The urge to mistake speed for understanding. The willingness to call a thing profound because it answered immediately in complete sentences.
That is why the consciousness debates always sound a little drunk to me. We are out here asking whether the machine has an inner life while it is still perfectly capable of confidently performing the outer life of a moron. Maybe one day that gap closes. Fine. Right now the funnier fact is that people keep trying to turn smooth language into evidence of metaphysical depth.