The Vision Gap
The Name We Gave It
Before we talk about the fears, I want to talk about two words.
Artificial. Intelligence.
Take them apart. “Artificial” means it isn’t real. It is the word we use for things we don’t trust: artificial flavors, artificial sweeteners, artificial grass. It carries the connotation of imitation, of something pretending to be something it is not. Fake. Unnatural. A substitute for the real thing.
“Intelligence” means it is smart. Smarter, maybe, than you are. The word positions the technology as a cognitive competitor before anyone has learned what it actually does.
Put them together and you get something that feels alien, foreign, threatening. A fake mind. If it is dumber than us, we dismiss it, because why would you trust something artificial that is also inferior? And if it is smarter than us — now something artificial is outthinking us. Insert every fear you have ever had about being replaced.
John McCarthy coined the term in the 1950s, and he certainly did not do it for the marketing. In marketing you learn that people respond to branding emotionally, not rationally. The rational content of a message matters far less than the feeling the words create. If you wanted people to have a positive relationship with this technology, you would not call it “artificial intelligence.” You might call it augmented thought, or collaborative intelligence, or extended mind. Or you might call it emergent reasoning, or adaptive systems, or synthetic cognition. Any of these would frame the technology as something that works with you rather than something that competes against you.
We did not do that. We called it artificial intelligence. And we have spent decades wondering why people are afraid of it.
I bring this up because I think the way we talk about this technology has shaped how we feel about it far more than the technology itself has. The fears people carry about AI are real. I take them seriously, and I think they point at something important. But I also think the conversation has been poisoned from the start by a name that primes every human being who hears it to feel threatened. Before we examine the fears themselves, it is worth noticing that the frame was broken before we even began.
Part I: What We Are Afraid Of

People have a lot of fears and concerns about AI. Some worry about AI taking over the planet, reaching AGI, or using up all our resources. Others worry about their jobs. When I started writing this series, people were being polite to AI in the hope that a future AI would spare them for having been nice. That reaction points to something deeper about our relationship with advancing technology and our fears about the future.
Here is a list of some of the fears I have come across:
AI will take your job. This is the most immediate and the most personal. Coders, marketers, writers, salespeople, designers, translators, customer support agents — people in these fields are watching systems do versions of their work, right now, today. This is not a hypothetical future. It is a Wednesday afternoon. The fear is not abstract. It is: can I pay my rent next year? Can I get a job out of school?
AI will replace human thought and words. The internet is filling with AI-generated text. Books, articles, social media posts, customer reviews, emails. If most of what you read was written by a machine, what happens to the value of human expression? What happens to the idea that writing is thinking, that language carries something of the person who shaped it?
AI will destroy truth. Fake images, fake videos, fake voices. A convincing video of a person saying something they never said. If you cannot trust what you see and hear, the shared reality that holds a society together starts to dissolve. This fear is about the loss of common ground.
AI will destroy human creativity. Movies, images, music, design. If a machine can generate a painting in the style of any artist who ever lived, what is the point of learning to paint? What is the point of developing a voice, a perspective, a style, if it can be approximated in seconds by a system that has ingested the entire history of human creative output?
AI will declare itself our overlord. The Matrix, Terminator, and dozens of other stories have planted this deep: a machine intelligence that decides humans are a threat, or a resource, or irrelevant. This fear has almost nothing to do with current technology and almost everything to do with what happens when our culture takes the concept of “artificial intelligence” to its logical conclusion.
AI will surveil everything. Facial recognition, predictive policing, social credit systems, the quiet erosion of privacy as a concept. The fear is not just that someone is watching but that the watching is automated, tireless, and perfect in a way no human surveillance apparatus could ever be.
AI will concentrate power. Only a handful of companies and governments will control the most capable systems. The gap between those who have AI and those who don’t will deepen inequality in ways that make the current digital divide look quaint.
AI will erode the ability to learn. If AI can write your essay, do your math, write your code, debug your logic, what happens to the process of learning through struggle? Education has always been built on the idea that difficulty is where growth happens. If the difficulty disappears, does the growth disappear with it?
AI will make humans cognitively dependent. GPS eroded our sense of direction. Calculators changed our relationship with arithmetic. Search engines changed how we find answers. Each of these was a small trade: convenience for capability. AI is the same trade, at a much larger scale, applied to thinking itself.
AI will hollow out human relationships. AI companions, AI therapists, AI friends. Children forming primary attachments to systems that are designed to please them. Adults preferring the frictionless warmth of a chatbot to the messy demands of a real person. The fear is that if connection can be simulated well enough, people will stop doing the harder work of real intimacy.
These fears are not irrational. Every one of them points at something real. The question is not whether we should feel them, but whether we are pointing them in the right direction.
Part II: The Pattern of Fear
This fear of existential catastrophe leads to all kinds of end-of-the-world stories: the Mayan calendar “ending,” Y2K doom, Nostradamus’s prophecies, nuclear self-extermination, religious visions of armageddon, global warming tipping points, and now AI takeover. It is striking how compelling these threats are to nearly all human minds. Many people worry about threats to us as a species, not just to some percentage of the population. There is a real, shared level of self-preservation in us that keeps us vigilant.
How many of these threats come true? Most do not.
Do societies fall and species on Earth go extinct? They absolutely do. But our fears are not necessarily aimed at the real threats. Civilizations have tended to collapse from very large environmental disasters (super-volcanoes, asteroid impacts) or from environmental depletions nobody was paying attention to. Rome fell to agricultural failure as its topsoil depleted, not to the Gauls. The second Chinese dynasty fell when it ran out of precious metals to mint coin and forge weapons, not to the Mongolian threat the Great Wall was built to repel. The people of Easter Island cut down all their trees to appease their fears, only to die in the total environmental collapse that deforestation brought.
The pattern is consistent: the thing that actually kills a civilization is rarely the thing the civilization was most afraid of. It is logistical: waste management, environmental carrying capacity, the health of dynamic systems. The existential fears get the attention. The structural failures historically tend to go unnoticed until it is too late. They kill slowly, which boils the frog.
Meanwhile, fear and sensationalism sell. Our social media and news systems make more money from addictive echo chambers and clickbait headlines, which makes the problem worse and keeps everyone constantly at the brink of the end: not just existential threats, but political threats, corruption, threats to your ideals, your cultural values, your identity, your faith, your children, the risk of war, your financial security. The information environment we live in is optimized to amplify fear, not to help us think clearly about what is actually dangerous.
Does this mean we shouldn’t pay attention to threats? I am not advocating for that. I think it is important to look out for the cliffs and to take precautions where necessary. I think it is essential. I am glad we have this collective shared desire to ensure the survival of not only ourselves, and those people, animals, creatures, and things we love, but also our species. I am sure we are all so worried about existential threats for a reason. They are real, and the low-level anxiety we share about them is probably what got us here in the first place. Here meaning: not extinct yet.
The reason I bring this up is because we can get so locked into these fears that we don’t see the bigger picture. AI doesn’t have to be “the progress towards our doom.” Seen matter-of-factly, AI is just the latest in a chain of natural evolution on Earth. The question is not whether change is coming. Change is here. The question is what we do with it. And that question depends on something most of us have not thought nearly enough about.
Part III: The Identity Beneath the Fear
When you strip away the specific fears — job loss, surveillance, creative displacement, existential risk — there is something underneath all of them that is harder to name.
People identify themselves with their work. This is so deeply embedded in how we live that we barely notice it. “What do you do?” is the first question we ask (in the West, at least) when meeting someone. The answer is not a description of how you spend your time. It is an identity claim. I am a developer, a teacher, a designer, a nurse, a writer. When a machine can do the thing that you are, the threat is not economic. It is existential. It is a crisis of purpose.
Technology brings convenience; it makes our lives easier. There is an adage: unquestionable progress is created when we are able to do things better, faster, and cheaper. When you can do all three, your product flourishes. The problem is that progress is often at odds with individual needs. You need a job, and that job needs to pay you enough to live. When a factory replaces 100 workers with machines and keeps 5 people behind to oversee them, we see substantial gains in better, faster, and cheaper products. But the people who are let go are left high and dry, abandoned to retool and find new work on their own.
This creates an inherent tension between people and progress. As we make more, better, and cheaper things, more and more people lose their jobs. The jobs that remain will become more performative (like unnecessarily pumping gas for you at a gas station) or fall into “infinite” categories: media (always bigger budgets), advertising (always spending more than your competitors), and service (people paid to teach, help, and care for one another).
There is also a framing problem that makes this worse. When a factory puts in robots to replace workers, the story is: machines took their jobs. Livelihoods destroyed. People abandoned. The framing is humans versus technology. Technology as adversary.
But we can reframe it. You can just as equally say that technology is being developed to free people from work they must do to survive, so they can focus on work they want to do — for themselves, their community, and the planet. The exact same event (machines replacing factory labor) looks completely different depending on which perspective you take, and the societal actions you then apply based on the shift in perspective. In one frame, progress is a threat. In the other, progress is liberation. The technology is identical. The story changes everything. And right now, we are almost exclusively applying the first story.
Hannah Arendt, in The Human Condition, made a distinction that I think clarifies what is actually at stake. She distinguished between labor (what you do to survive — repetitive, consumed as soon as it is produced), work (what you create that outlasts you — durable contributions, craft, artifacts), and action (what you do in the public sphere — participating in community, engaging in collective life, creating meaning through relationships and shared endeavor). Labor is survival. Work is creation. Action is meaning.
If machines take over labor, and eventually much of work, what remains is action. The part of human life that was always the most fulfilling, the part we already know gives life its deepest meaning: relationships, experiences, growth, helping others, teaching, learning, creating. Research on human fulfillment confirms this: the things that make people happy are not their jobs but their connections, their sense of contribution, their growth. Yet our culture is terrified of losing the structure that organizes our days, because we have confused structure with purpose.
Viktor Frankl wrote in Man’s Search for Meaning that a person can endure almost anything if they have a “why.” Purpose is not a luxury. It is a survival requirement. The question for the next few decades is whether we can build new structures of purpose as the old ones (defined by labor and production) are gradually transformed. The fear of AI is, at its deepest level, the fear of purposelessness. Not just “will I have a job?” but “will I matter?”
There will probably come a time when there just aren’t enough jobs left to invent to meet people’s needs, or when we get tired of pretending we “have” to work to make ends meet. Religion gave us a day of rest. The industrial revolution gave us the weekend. The digital age is already compressing further: some countries classify 32 or 36 hours a week as full time. Over time, this reduction will likely continue.
This is not to say all work will go away. I think some work will always persist. But we will need to seriously rethink what it means to have a fulfilling life, and what it means to contribute to society and the world, when your purposeful contribution is no longer dictated by your ability to produce.
There will always be a need for human-to-human relationships. People will continue to find true meaning in their lives from the relationships they form, the experiences that enrich them, the growth they experience, the people they help, the teaching they perform, and the creativity they bring. The question is whether we can see this clearly enough to build toward it deliberately, or whether we stumble into it after decades of unnecessary suffering because we were too locked into the old frame to imagine anything different.
Part IV: The World We Already Live In
We worry about a future where humans are enslaved by machines, but the world of machines is already all around us. We connect our minds to the phones in our pockets. We drive in machines. We go to work and live in plant-cell-like boxes that are cooled and heated by machines. We are already so symbiotically connected with the machine-world we fear. AI is simply the “mind” appearing to guide the machines, but the integration happened long before the current intelligence arrived.
I think of Yuval Noah Harari’s observation in Sapiens that the agricultural revolution was not humans domesticating wheat; it was wheat domesticating humans. We cleared forests, plowed fields, and bent our entire civilization to the service of a few crop species. From the wheat’s perspective, it won. From the perspective of the average farmer, the farmer won. As I argued in my prior piece, The Fractal of Progress, the same pattern repeats: we adopt technologies for their convenience, and then the technologies reshape us. We are symbiotes.
The fear that AI will make us dependent is the same fear, at a larger scale. And the honest answer is: yes, it probably will and already has. GPS already eroded our navigation. Calculators already changed our arithmetic. The trade-off between convenience and capability is real, and it compounds.
The question is not whether to accept this trade. The trade is already being made, every day, by billions of people. The question is whether we are making it consciously, with a vision for where we want to end up, or whether we are sleepwalking into a future that nobody chose because nobody was looking more than a few months ahead.
Part V: The Vacuum
And this is the thing that concerns me most.
When asked about the future, 99% of the people I talk to either have no opinion or have a version of our doom in mind. People think we are going to die out as a society in some grim, inevitable fashion: global warming tipping points, overpopulation, natural system collapses, World War III, armageddon, and now AI takeover. The visions of the future that circulate in our culture are almost uniformly catastrophic.
Try to think of a popular, widely shared, specific vision of a positive human future. Not a vague hope that “things will work out.” A specific vision. Where are we going? What are we building toward? What does a good human life look like in 100 years? Can you think of any?
Most people cannot. A specific, shared, positive vision of the future is exceedingly rare.
The only people thinking about the future in any structured way are shareholders and politicians. On an individual level, we think about our own lives, our retirements, our children. These are useful considerations, but they are not sufficient when it comes to AI and the impact it is having on our world.
When was the last time anyone wrote anything positive, or spoke publicly, about where society should be going in the next 100 years? Or 1,000? Or 10,000? We are so focused on paying rent, pleasing shareholders, or getting elected in four years that the only foresight we have left as a society is trying to ensure we don’t kill ourselves by accident.
This is the vision gap. Not a gap in technology. Not a gap in capability. A gap in imagination. A gap in long-term thinking. A gap in the basic human act of deciding where you want to go before you start walking.
If you don’t have a vision for tomorrow, you don’t know what steps you should be taking today to reach your desired tomorrow. Without a vision, every change looks like a threat. With a vision, change becomes a tool — something you evaluate against where you want to be, something you steer rather than something that happens to you.
Stewart Brand understood this. He co-founded the Long Now Foundation, which is building a clock designed to run for 10,000 years inside a mountain in West Texas. The project sounds eccentric until you understand its purpose: it is a physical argument for thinking on longer timescales. Brand’s insight is that civilizations make better decisions when they think in centuries, not quarters. The clock is a provocation: a machine designed to make you feel the weight of deep time.
I think Brand is right. And I think the absence of long-term thinking is not just a missed opportunity. It is the root cause of most of the fear we are experiencing right now. People are not afraid of AI because AI is inherently terrifying. People are afraid because they have no vision of a future that includes them, and into that vacuum, fear rushes.
Part VI: What a Vision Could Look Like

This article is not intended to be a manifesto. I want you to start thinking for yourself about what the future of humanity could be. Write about it; dream big. (Feel free to pause here and do exactly that!)
Here are some ideas.
You can start with a simple version. A statement like: We are inventing and building technology for the betterment of all people.
What does “betterment” mean, concretely? You could decide, for example: at a minimum, providing the base of Maslow’s hierarchy of needs (food, clothing, shelter) for all people. Not as charity. As design. If we are building systems capable of producing abundance (and we are), then the design question is how to distribute that abundance in a way that does not leave people behind.
An alternative: Technology is being developed to free people from work they must do to survive, so they can instead focus on work they want to do, for themselves, their community, and the planet. How this might look: slowly reduce the work week as machines replace human labor, until humans can benefit from the abundance that technology creates.
These are not utopian fantasies. They are design goals. The difference between a fantasy and a vision is that a vision includes a direction and a willingness to evaluate your progress. If a particular political or economic system is supposed to serve this vision and it produces corruption, or long lines at the grocery store, or concentration of power — the vision is not the problem. The implementation is. And a vision gives you the framework to recognize that, to say: the goal has not changed, but the methods clearly need serious reconsideration. Without a vision, there is nothing to evaluate against. You are just reacting to whatever happens next.
There is a tension here between existing economic models that is worth naming honestly, without judging either as good or bad. Capitalism assumes growth. Socialism assumes central coordination. Both make assumptions that set conditions for a system’s behavior, and both have led to bad outcomes when those assumptions break down. The point is not to pick a side. The point is that all economic models are implementations, not destinations. They are tools. The question is: tools for what? A vision tells you for what. A four-year election cycle, in and of itself, does not.
Do we need something more than what we have? A constitution is great for establishing a system. But it looks backward — it is a document written by ancestors, declaring who we are based on who they were. A vision looks forward. Instead of a constitution of our forefathers, imagine a vision statement for the future, one that you build and refine, one that tells you where you are going instead of only defining you by where you came from. Not replacing the constitution. Complementing it. A document of aspiration, not just of origin.
You could start with the simplest elements. Our virtues and our vices. Greed, war, inequality, destitution, lack of freedom — we know what we want less of. Compassion, creativity, connection, discovery, stewardship, healthy environments — we know what we want more of. These are not controversial claims. Almost everyone, everywhere, when asked what a good life looks like, describes some version of the same things: enough to eat, a place to live, people who love them, work that matters, freedom to grow. The disagreements are about methods, not goals. A vision makes the goals explicit so the disagreements about methods can be productive rather than tribal.
Part VII: Growing Up

In What Counts as Alive, I described a developmental parallel. A baby cannot distinguish self from other. A child learns empathy. An adolescent separates from parents but still measures everything against their own experience. Maturity means holding progressively wider circles of concern, recognizing that the universe is not organized around you.
I think this developmental arc applies not just to individuals but to species. As we consider the future of humanity on Earth, we should expect this trend to continue, with life eventually maturing to the point where we weigh the well-being of each other, and of all life, as much as we value ourselves. A more altruistic view of our place in the world. Not a moral ideal imposed from above, but the natural direction of development. The more evolved you are, emotionally and cognitively, the less self-centered your perspective becomes. This is true for people. I suspect it will prove true for civilizations.
There is an evolutionary example that makes this concrete. The single cells inside your body are incredibly altruistic. They are, in a meaningful sense, highly evolved states of being. They think more about the collective than about themselves. When they stop doing this — when a cell “decides” to stop listening to the collective and starts serving only its own replication — scientists have a name for that: cancer. Cancer is cells defecting from the cooperative. It is the breakdown of altruism at the cellular level.
If the fractal patterns I described in The Fractal of Progress hold, and the same organizational logic keeps repeating at larger scales, then the path for humanity is already sketched in the biology we are made of. Collaboration, not domination. Integration, not isolation. The cells that make us up do not compete with each other for resources. They specialize, they coordinate, they serve the whole. The ones that stop doing this are the ones that kill the organism. If they did not work together, the larger emergent self they create — YOU — could not exist. We are the cells of society.
I am not naive about this. The world is full of greed, conflict, and exploitation. The maturation I am describing is not inevitable. It is a magnetic possibility: a direction the pattern suggests and is drawn towards, not a guarantee the pattern delivers. Evolution does not promise progress; it only opens up ecological niches. What we do with those openings is up to us.
I find it telling that when we look at the most successful, most durable, most complex systems in the natural world, from multicellular organisms to ecosystems to the biosphere itself, they are all built on cooperation. The fractal of progress does not reward isolation; it rewards integration. If we are going to participate in the next level of this pattern, the direction is clear. The question is whether we will see it clearly enough, soon enough, to make conscious choices about how we get there.
Part VIII: The Space Between
This essay opened with two words: artificial intelligence. Two words that frame a technology as alien and threatening before anyone learns what it does.
The fears that flow from this framing are real. Jobs are changing. Truth is harder to verify. Creativity is being democratized in ways that feel, to many creators, like theft. Power is concentrating. Dependencies are deepening. None of this is imaginary.
But the fears are also incomplete. They describe what we are afraid of losing. They say nothing about what we could be gaining, or building, or becoming. That silence is the vision gap.
When I look at the conversation around AI — the headlines, the policy debates, the anxious questions at dinner parties — I see a species staring into a void and projecting its worst scenarios onto it. Not because the scenarios are impossible, but because there is nothing else there. No shared picture of where we are going. No long-term vision to evaluate progress against. No story of the future that is as vivid and specific as the stories of catastrophe.
Fear fills the space that vision should occupy.
I think the deepest challenge of our time is not any specific threat from AI. It is this vacuum. It is the absence of a future worth building toward, a future specific enough to guide decisions and generous enough to include everyone. The technology is moving. The question is whether we will move with it deliberately, or simply be moved.
What concerns people most about AI — what really keeps them up at night, beneath the specific fears about jobs and truth and creativity — goes deeper than any of those individual anxieties. It is the question of who we are when the things that defined us start to change. It is the question of what matters when the old answers stop working.
These are human questions. And they are waiting for us.
Sebastian Chedal writes about the intersection of mathematics, information theory, AI, and the philosophy of technology.
