The Alchemist's Equivalence: A Dialogue on AI and the Progress of Knowledge

Using AI to Critique AI
Here we encounter a delicious irony: I used ChatGPT and Claude to help write a dialogue that questions the very foundations of artificial intelligence. LLMs proved invaluable in transforming my scattered thoughts into a coherent Platonic dialogue with engaging characters and flowing conversation.
The process wasn't automatic. I had to constantly fact-check, correct misconceptions, and guide the AI away from its tendency to generate plausible-sounding but inaccurate claims. This back-and-forth refinement was essential: the AI provided stylistic polish and structural coherence, while I supplied the conceptual direction and critical oversight.
This collaboration highlights a crucial point: these tools are extraordinarily useful for what they do well, but we shouldn't mistake their capabilities for genuine intelligence. They excel at pattern matching, style transformation, and linguistic fluency. What they can't do is think for themselves or question their own fundamental assumptions.
The real critique isn't of AI tools themselves—they're powerful and valuable. The problem lies in our tendency to confuse sophisticated pattern matching with true intelligence, and in our assumption that scaling up current approaches will somehow lead us to machines that think in the deep sense that humans do.
The Dialogue
Dr. John Computationalist, a computer scientist, sits across from Dr. Marcus Doubtful, a physicist turned philosopher. They're nursing their second round of beers, debating the parallels between alchemy and AI.
John: Look, Marcus, I think you're being way too doom-and-gloom about where we're headed. Yeah, sure, we don't have all the theory nailed down yet, but come on—machine translation that actually works, AI diagnosing cancer better than doctors, predicting how proteins fold... This isn't some pie-in-the-sky stuff. These are real wins.
Marcus: (chuckling) But that's exactly what I'm getting at! The alchemists had wins too. Jabir ibn Hayyan—eighth century—figured out how to make nitric acid, hydrochloric acid, sulfuric acid. We still use those compounds today. They perfected distillation, crystallization—
John: (interrupting) Oh, come off it. That's not the same thing at all. Those guys were literally trying to turn lead into gold and find magic potions for eternal life. We're working toward Artificial General Intelligence, or AGI, and we know general intelligence is possible. Last I checked, it actually exists. (taps his temple) Right here.
Marcus: Do we, though? I mean, are we really sure that what we call intelligence can be... captured? Bottled up in some algorithm? What if consciousness, understanding, hell—even basic meaning—what if all of that depends on something that just can't be programmed?
John: (rolling his eyes) Now you sound like my philosophy professor. "What is consciousness, man?" Come on. Look at GPT-4. It writes code, essays, solves calculus problems. That trajectory isn't just noise—it's progress.
Marcus: Sure, just like Paracelsus made medicines that actually worked while still explaining the body in terms of alchemical principles. Look, something working doesn't mean you understand it. When was the last time you could explain why one of these language models spits out the answer it does?
John: (sighs heavily) Okay, fine. Interpretability is... yeah, it's a problem. But we've got scientific method, right? Reproducible experiments, benchmarks, peer review—science works even when we're stumbling around in the dark.
Marcus: Does it, though? Really? How many AI papers can you actually reproduce? How often do these models completely fall apart the moment you show them something slightly different from what they trained on?
John: (getting a bit heated) That's a problem with some sloppy research practices, not with the whole field—
Marcus: (leaning forward) No, no, hear me out. Even if we fix all that, and I don't think we will, we're still missing the bigger picture. You can't say you understand something if you can't even define the limits of what you know. It's like... it's like trying to map a territory while wearing a blindfold.
John: (grudgingly) Alright, I'll give you that interpretability is harder than I'd like. But we're making progress there too—mechanistic interpretability, scaling laws, mathematical frameworks for learning...
Marcus: Maybe. But here's the thing about alchemy—it didn't evolve into chemistry because alchemists got better at alchemy. It changed because the outside world stopped putting up with unreliable results. Industry needed reactions that worked every time. Military needed precise metallurgy. Reality imposed discipline.
John: (perking up) And that's happening with AI too, isn't it? Companies are pouring billions into this stuff because it delivers.
Marcus: (grimly) Exactly. And that's the trap. You're succeeding just enough to avoid asking the hard questions. Why rock the boat when the money's flowing?
John: So... our success is actually screwing us over? (pauses, considering) That's depressing, but I can't say you're wrong.
Marcus: There were brilliant minds in alchemy too—skeptics, visionaries, people asking deep questions. But real change only came when the external pressure mounted. When unreliable theory started causing economic disasters or getting people killed.
John: (darkly) So we're waiting for AI to cause some massive systemic failure? Automated systems crashing the economy, legal systems making terrible decisions based on biased algorithms...
Marcus: It might take something like that, yeah. When we can't afford black boxes anymore. When interpretability becomes a matter of survival, not just academic curiosity.
John: God, that's grim. Waiting for disaster to make us smarter.
Marcus: There's another way. Lavoisier didn't wait for alchemy to collapse—he questioned its foundations when it was still making money. Insisted on conservation of mass, rigorous experiments. He chose understanding over tradition.
John: (leaning back) So what's that look like for us? Just... abandon neural networks? Deep learning? LLMs?
Marcus: Not abandon them. Demote them. Keep them as useful tools, but stop pretending they're the final answer to intelligence. Like early chemists kept their distillation equipment but ditched the theory of the four elements.
John: (frustrated) But the potential here is massive. We could cure diseases, model climate change, accelerate scientific discovery... You really want to slow all that down for the sake of theory?
Marcus: That's the eternal tradeoff, isn't it? Quick wins versus deep understanding. History's pretty clear on which approach wins in the long run.
John: (after a long pause) So the question is whether we'll pursue theory because we want to, or because reality forces us to after everything goes to hell.
Marcus: Pretty much. Success, overconfidence, ignored warnings, slow disillusionment... and then, if we're lucky, a new framework that actually explains why the old tools worked when they did work.
John: (standing up abruptly) And what if there is no framework? What if AGI is just impossible under our current assumptions?
Marcus: Then we're like alchemists trying to turn lead into gold with fire and mercury. They weren't wrong that transmutation was possible—nuclear physics proved that centuries later. But their methods? Completely hopeless for the task.
John: (pacing now) You're saying logic, math, programming—the foundations of science itself—have built-in limits? Come on, Marcus. I can write a program to solve any problem I can define clearly enough. Mathematics is the language of nature, right? Galileo said that.
Marcus: (standing too) And yet mathematics contains the seeds of its own incompleteness.
(He walks to the window, looking out)
Marcus: This isn't mysticism, John. Gödel showed that any consistent formal system rich enough to express arithmetic contains truths it can never prove. Turing proved there are problems no algorithm can ever solve; the halting problem is the classic example. These aren't edge cases. They're fundamental limits.
John: But those are just... technicalities. They don't stop practical progress.
Marcus: No, they don't stop you from building systems that look intelligent. But they do suggest that no program—no formal system—can capture everything we mean by human reasoning. Self-reference, meaning, intention, context... If understanding depends on those, then yeah, some aspects of mind might be beyond programming.
John: (throwing up his hands) So what's the alternative? Give up on formal methods entirely?
Marcus: Not give up—recognize the boundaries. Physicists didn't abandon Newton when they found relativity and quantum mechanics. They learned that Newtonian mechanics captures some aspects of reality, but not all. Maybe we'll find that logic and computation capture some aspects of mind, but we'll need entirely new concepts for the rest.
John: (challengingly) Then show me these new concepts, Marcus. Give me this new language. If you're going to tear down logic and math and programming, you better have something to replace them with.
Marcus: (smiling slightly) I'm not tearing anything down. I'm just pointing out the cracks that are already there, if you're willing to look. The limitations of logic, mathematics and programming when dealing with intelligence, consciousness, life itself. Maybe the cracks are a sign of something deeper—what if our assumptions are wrong?
(He turns back to the window)
Marcus: These questions—intelligence, consciousness, meaning, the nature of being, the limits of reason—they've been with us since the ancients. Kant, Bergson, Gödel, Turing, Husserl, Heidegger. Maybe it's time we stopped treating them as afterthoughts and made them the starting point.
John: (quietly) Sounds like the beginning of another conversation.
Marcus: (without turning around) Yeah. One we've been putting off way too long.