The End of the Error Message
In an "imperfect, real world," which one is more valuable? A system that is 100% right only 50% of the time (because it throws errors on the rest), or a system that is 95% right 100% of the time because it knows how to adapt?
The human hand doesn't give you an "error" when a glass is slightly tilted or slippery; it simply adjusts its grip.
The old world is breaking
Man, it was SOOOO convenient: with enough "thinking through" you knew exactly how your software worked. By handling exceptions, even if you initially missed a few edge cases, you could make the end result very reliable.
Almost perfect. Not good enough, almost perfect.
Some industries expect way more than that: it HAS TO BE perfect.
Seemingly, we also achieved that with the "old" paradigm: the system is "rigid" and "stupid", in the sense that you have to describe EVERY possible scenario and prepare the system "in advance" for any edge cases or potential failures.
This model is breaking.
The new world is frightening
Let's park the "end results" of vibe coding for a moment. Let's only focus on the conversational aspect of the "new world".
The questions and asks can be ambiguous. Unclear, even.
With the right techniques you can make the AI ask clarifying questions to the extent that it can pretty much nail the original intent.
Then, it can come up with answers.
The right answers all the time? NO.
The answers that "feel" correct (most of the time)? Yes.
And this is what makes it frightening: back in the "old world" you had a certain level of (sometimes false) control: you knew what you were doing, and so expected the machine to also know how to "react".
The bright side of the new paradigm is its flexibility: you CAN be ambiguous at the beginning, to handle the initial situation of "you don't know what you don't know".
Later, after a few rounds of clarifying questions you come to a common understanding; that's where the real work begins.
On the other hand, you lose control: you never know if the outcome is hallucinated or actually true.
Compliance
In robotics and biomechanics, compliance refers to the ability of a system to yield to external forces.
- Rigid Machinery (Low Compliance): If a standard industrial robot is programmed to grab a glass but the glass is 2cm to the left, the robot will likely hit the glass and knock it over - or worse, crush it. It has high precision but zero compliance.
- The Human Hand (High Compliance): Our hands are "soft." Our skin deforms, our joints have a natural springiness, and our nervous system makes micro-adjustments. When we grab a weirdly shaped rock, our hand complies with the shape of the rock.
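The contrast above can be sketched as a toy control loop (the numbers and thresholds here are made up for illustration, not real robotics): a rigid controller goes exactly where it was programmed, while a compliant one yields toward the feedback it "feels".

```python
GLASS_POSITION = 12.0  # where the glass actually sits (cm) - hypothetical

def rigid_grab(programmed_target: float) -> str:
    # Low compliance: the robot goes exactly where it was told.
    if abs(programmed_target - GLASS_POSITION) > 0.5:
        return "CRASH: knocked the glass over"
    return "grabbed"

def compliant_grab(programmed_target: float, max_steps: int = 20) -> str:
    # High compliance: contact feedback nudges the hand toward the glass.
    position = programmed_target
    for _ in range(max_steps):
        error = GLASS_POSITION - position  # "felt" through touch/springiness
        if abs(error) <= 0.5:
            return "grabbed (after adjusting)"
        position += 0.5 * error  # yield toward the external force
    return "gave up"

# The glass is 2 cm away from where the robot was programmed to reach:
print(rigid_grab(10.0))      # CRASH: knocked the glass over
print(compliant_grab(10.0))  # grabbed (after adjusting)
```

Same precision in both controllers; the difference is entirely in whether the system adjusts to the world or insists the world match its program.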
Prompting has very similar characteristics. At the very beginning of the conversation - perhaps in the most likely state of "you don't know what you don't know" - you take a "risk" by asking a very vague question or set of questions.
At that point there is a huge level of "tolerance".
Later things become clearer, ready for a more "rigid", more precise conversation.
Spec Driven Development
In the case of software development, SDD takes it a level further: since you specify many things "in advance" of starting a conversation with the AI, the context is already given. A fine blueprint, if you spent the time to define things.
Incomplete, missing or incorrect data
Another level of adaptability, that's sooooo nice.
Back in the old world the result was an error message.
Today? It's a note or another question: "Did you mean...?"
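That old-world/new-world contrast is easy to sketch in code. Here is a minimal example (the command set and function names are hypothetical) where the rigid version throws, while the adaptive version guesses the intent with the standard library's `difflib.get_close_matches` and asks back:

```python
import difflib

COMMANDS = {"status", "commit", "push", "pull"}

def old_world(cmd: str) -> str:
    # Rigid: anything unexpected is an error.
    if cmd not in COMMANDS:
        raise ValueError(f"unknown command: {cmd}")
    return f"running {cmd}"

def new_world(cmd: str) -> str:
    # Adaptive: recover the likely intent and ask a clarifying question.
    if cmd in COMMANDS:
        return f"running {cmd}"
    guesses = difflib.get_close_matches(cmd, COMMANDS, n=1)
    if guesses:
        return f"did you mean '{guesses[0]}'?"
    return "could you rephrase that?"

print(new_world("stauts"))  # did you mean 'status'?
```

A fuzzy string match is obviously a far cry from an LLM, but the shape of the behavior is the same: incorrect input becomes a question instead of a dead end.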
Tolerance vs Compliance
Tolerance is like a "safety net": the machine is "putting up with" a mistake. The machine doesn't do anything about the error; it just hopes the error isn't big enough to cause a crash.
Compliance is like a "shock absorber": AI doesn't just "tolerate" your typo; it uses probabilistic reasoning to actively figure out what you meant. It is a dynamic response.
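A tiny sketch makes the distinction concrete (both parsers are hypothetical): the tolerant one swallows the error and falls back to a default, while the compliant one actively recovers what the input probably meant.

```python
def tolerant_parse(value: str) -> int:
    # Tolerance: a safety net. Swallow the error, hope a default is fine.
    try:
        return int(value)
    except ValueError:
        return 0  # "putting up with" the mistake, doing nothing about it

def compliant_parse(value: str) -> int:
    # Compliance: a shock absorber. Actively recover the intended value.
    cleaned = "".join(ch for ch in value if ch.isdigit())
    if not cleaned:
        raise ValueError(f"nothing numeric in {value!r}")
    return int(cleaned)

print(tolerant_parse("1,024"))   # 0     - the typo silently becomes a wrong answer
print(compliant_parse("1,024"))  # 1024  - the typo is absorbed and corrected
```

Note that the tolerant version is arguably the more dangerous of the two: it never crashes, but it also never tells you it gave up.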
Where to go from here?
In the future I imagine systems will have more compliance, be more adaptable and, ultimately, become self-healing!
Oh yes, self-healing systems!
A great topic for another day...