AI has empowered anyone to code, but, as with many technical matters, not understanding the fundamentals comes with risks, writes Lewis Liu
“Explain to me in plain English” or “tell me how you would explain this to a child”. These are classic requests a CEO might make of a technical member of staff. While I find the intellectual laziness tedious, I sympathize with the rationale: as a CEO, or any executive, you have to ingest a huge amount of information, from the macroscopic (geopolitics, central bank policy) to the mesoscopic (pricing, product-market fit) to the microscopic (that employee issue, website messaging). You “outsource” detailed thinking to employees, consultants or partners. There are problems enough with delegating critical thinking to other humans – so what happens when you delegate it to AI?
This column was inspired by a series of conversations over coffees and autumnal walks a few weeks ago when I hosted my old Harvard roommate, Momin Malik, who is now a leading AI safety researcher at the Mayo Clinic. We met sitting front-row center in our first ever college class freshman year (Theoretical Linear Algebra and Multivariable Calculus if you’re curious) and we’ve been debating ever since. Our debates started at the Harvard dining halls and continued at the Oxford dining halls, where we were both graduate students at the same time.
Momin has this habit of presenting me with a concept he’s given deep thought to, something that questions the very foundation of how we perceive reality. Back in college, it was whether the universe itself could or should be described by mathematics (I was a physics major in addition to being an art major) – still an ongoing debate. This time, it was whether the notion of abstraction, described below, will be detrimental to human progress, and whether AI will make it catastrophically worse.
Layers of abstraction
In science and mathematics, we call this notion of synthesizing upwards (as you do for your manager) “layers of abstraction”, and it is how we handle ever more complex systems. Take computers, for example. The transistor is the most fundamental unit of computation, a tiny semiconductor switch that turns electrical signals on and off. But adjusting transistors one by one would take forever, so we connect them into logic gates and circuits, which form the hardware foundation. From there, we move to assembly language, a human-readable shorthand for the machine instructions that drive those circuits, operating on bits and registers. That’s still painfully tedious, so we built higher-level languages like C and C++, which make it easier to manage memory and write somewhat human-readable code.
Still, even C can feel too close to the hardware: why deal with memory management at all? That’s where Python came in, freeing programmers from “low-level concerns”. And now, with tools like Cursor and Lovable, we’re crossing yet another threshold: why bother writing code at all if we can just describe what we want in plain English? Here, transistors are abstracted into logic gates and circuits, which are abstracted into assembly language, which are abstracted into C/C++, which are abstracted into Python, and now, increasingly, into vibe coding prompts – natural language itself as the next layer of abstraction.
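To make that ladder concrete, here is a minimal sketch, my own illustration rather than anything produced by these tools, of the same trivial task, adding up a handful of numbers, written one rung down in C. At this level the programmer must ask for memory, check the request succeeded, loop over the values and hand the memory back, all by hand.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t n = 5;

    int *values = malloc(n * sizeof *values); /* ask the machine for memory ourselves */
    if (values == NULL) {
        return 1;                             /* and decide what happens if it says no */
    }

    for (size_t i = 0; i < n; i++) {
        values[i] = (int)(i + 1);             /* fill the array: 1, 2, 3, 4, 5 */
    }

    long total = 0;
    for (size_t i = 0; i < n; i++) {
        total += values[i];                   /* the loop that Python's sum() hides */
    }

    printf("total = %ld\n", total);

    free(values);                             /* forget this line and the memory leaks */
    return 0;
}

In Python the whole exercise collapses to sum(values); in a vibe-coding tool it collapses further, to a single English sentence. Every decision in that listing, how much memory to request, what to do if the machine refuses, when to release it, is one that a higher layer quietly makes for you.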
At every level of abstraction, what you gain in ease of completing a larger task costs you two things: you lose control over detail (which might matter critically) and you bake core assumptions about the system into whatever you build. In the computer example above, vibe coding feels magical: any non-technical person can write a prompt and get a simple app built on Lovable or Claude Code. But you give up control over exactly how that application is built.
The shortcomings of vibe coding
There’s a general consensus that vibe coding isn’t yet ready for production-grade code; making changes to something vibe-coded by an AI often takes longer than writing the code manually from scratch. By vibe coding, you’re also assuming your application will take the form of the applications that came before it. There’s an interesting phenomenon called the “purple problem”, in which an overwhelming number of new applications are purple because vibe-coding tools over-index on purple, the most recent colour trend – an instance of what I’ve called “knowledge collapse” and written about extensively. That’s fine if you’re iterating on something simple like a pizza delivery app, but not if you’re trying to build something genuinely novel.
Let’s shift focus now to human thought and abstraction. We know that CEOs lose detail with every layer of management “simplifying concepts” for the layer above, and there have been many well-documented disastrous cases of this. Volkswagen’s “Dieselgate” scandal (2015) and BP’s Texas City refinery explosion (2005) both stemmed partly from information getting lost as it moved up the chain: engineers and managers knew the details, but by the time the information reached executives, the nuance was gone.
Now imagine what happens when AI enters the picture. Despite the problems with layers of abstraction in management, humans are unique and hold diverse views. A good manager can discern, based on context, trust and performance, what information should be abstracted up and what should be discarded. With AI, however, every human is equipped with the same centralised AI tool, feeding them the same perspective and the same information. If managers then use those same AI tools to decide which AI-generated information should be abstracted and acted on, you end up with the most generic, lowest-common-denominator set of decisions and knowledge.
This is scary on many fronts, but even from a purely capitalistic perspective, it will strip away your organization’s differentiation, leaving it as beige as everyone else’s. Don’t get me wrong: AI is an extremely powerful tool for abstraction when paired with unique datasets, context-aware users and models fine-tuned to individuals so that human uniqueness can still come through. It’s up to us humans to ensure we actively question how AI is abstracting our information and double-check our own assumptions.
So, as a leader, I implore you: next time, before you ask a colleague to “explain it in plain English”, try to understand one level of detail below, question the core assumptions and ask for the provenance of the information. If every leader does a little of that, maybe we can use AI as a powerful tool for abstraction rather than one that leads to knowledge collapse.
Dr Lewis Z. Liu is co-founder and CEO of Eigen Technologies