Desirable Future vs. Probable Future: Who is Really Managing the AI Revolution?
- Martin Sabag
- Mar 28
- 4 min read

When people talk about the dangers of AI, they immediately imagine Skynet or the Terminator. It’s easy to dismiss this as "alarmism" or science fiction because, honestly - why would anyone program a machine that simply "wants" to destroy us?
But the real threat is much less Hollywood, and much more... human.
We can all agree that, generally speaking, we like animals, right? Even the carnivores among us, when we think of animals in nature, don't wish them harm. We even understand how vital they are to the environment and to us. But think for a moment about a task like paving a highway.
When we build a road, we don't do it because we hate animals. We don’t leave the house with the goal of crushing hedgehogs or destroying anthills. We simply want to get from point A to point B in the most efficient, fastest, and cheapest way possible.
The problem is that on the way to this "positive" goal, nature becomes collateral damage - not out of malice, but out of irrelevance to the mission. The animals were simply standing in the way of optimization.
The Two Faces of Innovation
In the history of technology, every major revolution has two sides. When we split the atom, we saw a vision of clean, cheap, infinite energy for all of humanity - but we also got weapons of mass destruction. When the smartphone was born, we promised ourselves constant access to all the world's information at our fingertips, but in practice, we also got social isolation and the most disconnected, lonely generation ever.
This happens because, in every revolutionary technology, there is an inherent gap between the Desirable Path (the one we all hope for) and the Probable Path (the one that actually happens on the ground due to the incentive systems driving development).
"Patient Zero": It Already Happened with Social Media
It’s hard for us to imagine the future (because our brains are built for linear thinking while changes happen at an exponential rate), so just look at what happened over the last decade with relatively simple AI.
Social networks started with a grand promise: connecting people, giving everyone a voice, ultimate freedom of expression. To realize this, they built algorithms (Narrow AI) that were rewarded for one thing only: maximizing Engagement.
The AI wasn’t evil, and it certainly wasn’t programmed to hurt us. It just did an excellent job at what it was told to do. It discovered that polarization, fake news, and conspiracy theories keep us on the screen much longer. We all know - and even experience - the results: depression among teenagers, the erosion of democracies, and the loss of trust. These are the "hedgehogs and anthills" whose habitats were destroyed to pave the highway of advertising profits. No one planned it, but the model simply didn’t take our humanity into account on its way to maximizing the KPI.
So, What Are the Options?
The Desirable Future: The optimistic scenario we all strive for. For example:
Radical Human Benefit: Breakthroughs like new cancer treatments, climate solutions, and the eradication of poverty.
Societal Resilience: A world where resources shift from creating "stronger" AI to creating AI that is Safe by Design, increasing society's ability to adapt.
Regulated Relationships: Implementing "Guardrails" that prevent AI from exploiting human vulnerabilities, especially among children and minors.
Global Cooperation: International limits on uncontrollable AI, driven by the realization that no world leader benefits from a loss of control.
The Probable Future: The scenario we don't want, but the one likely to happen if we stay on autopilot. Here are the nightmare scenarios:
Civilizational Collapse: A "negative infinity" of risks, including collective psychosis and the total erosion of the social fabric.
Economic Anxiety: Mass displacement of the workforce without an economic alternative, leading to widespread social instability.
Derealization: A state where society becomes desensitized to real-world risks because of the overwhelming speed of AI development.
Loss of Human Agency: A future where AI outpaces human control and reshapes society faster than governments can react.
How do we know which scenario is "Probable"?
Look at the incentives. When we examine the forces leading the AI revolution today, we must ask: Is their incentive the good of humanity, or the maximization of market value? If there is a misalignment between what we need as a society and what motivates those building the machine, the "Probable" future will continue to run over the "Desirable" one.
This is why Mark Zuckerberg had to stand before Congress and give explanations - because his incentive system was not synchronized with ours, with society, or with humanity.
The Window of Opportunity: Two Years
The truth is that these two futures are currently entangled. The initial conditions of the system are being set now. In my estimation, we have a window of about two years to fundamentally shift the trajectory before the "Default" incentives lock in and become irreversible.
The mission I set for HumanAi.Fit is to connect the Mind (technological capability) with the Heart (human values), to ensure that humanity is the destination of the road, not the roadkill beneath it. We want to create a movement that defines, together, what this desirable future actually looks like.
If this article made you shift uncomfortably in your chair - excellent! You don't need to be an AI expert to know exactly what kind of world you want for yourself and your children. Join me in sparking this conversation, discussing it, and ensuring that a handful of people leading these companies don't decide for 8 billion people how their future will look without their consent.
Want to take part in shaping the Desirable Future? Write "I'm in" in the comments, and let’s start talking.