
Embracing a Future with AI

Feb 17, 2026 · 25 min read
AI · Technology · Entrepreneurship

The Post That Stopped My Scroll

A few weeks ago, I was doing what most of us do at 11 PM: lying in bed, doom-scrolling X when I should have been sleeping. Then I came across a post by Matt Shumer that stopped me cold. It wasn't inflammatory. It wasn't clickbait. It was just... honest. Brutally, uncomfortably honest about how fast AI is changing everything and how almost nobody is paying attention.

Eighty million people read that post. Eighty million. And the responses were what you'd expect: a mix of existential dread, denial, and a few people quietly nodding because they'd already seen it coming.

I was one of the people nodding.

Not because I'm smarter than anyone else. Not because I have some crystal ball. But because I spend every single day building with these tools. I'm the co-founder of Influx, a digital agency that has been doing patient acquisition for healthcare for over a decade. I'm not theorizing about AI from an ivory tower. I'm in the trenches. I'm watching what these systems can do today, and more importantly, I'm watching how fast "today" becomes "six months ago."

And here's what I keep coming back to: the dominant narrative around AI is almost entirely wrong.

Not wrong in the facts. The disruption is real, the speed is real, the scale is real. But wrong in the framing. Wrong in the conclusion. Because the loudest voices are painting a picture of a future we should dread, when what I see, what I live every day, is a future bursting with more opportunity than any of us have seen in our lifetimes.

This essay is my attempt to make the other case. The case for enthusiasm. The case for leaning in. The case that this moment, right now, is the most exciting time to be alive, to be building, to be creating. And the people who figure that out fastest are going to shape what comes next for all of us.


The New Gold Rush

Let me tell you about something many of you may have already heard of, but that still blows my mind every time I think about it.

About three months ago, a developer sat down and started building a product: ClawdBot (now OpenClaw). Just a few days ago, OpenAI acquired it for somewhere near or over a billion dollars. The whole thing, from first line of code to billion-dollar exit, took eighty-two days. Let me say that again: eighty-two days.

I don't care what industry you're in, what your background is, or how jaded you've become about tech hype. That number should make you sit up straight. Because it's not a fluke. It's a signal. It's telling us something fundamental about the world we're living in right now.

We've seen gold rushes before. The actual Gold Rush of 1849. The oil booms. The dot-com era. The mobile revolution. Each one created massive new wealth, reshaped industries, and created entirely new categories of work and opportunity that nobody could have predicted beforehand.

But none of them moved this fast.

I think about the people who left Europe in the 1800s. They packed up everything. Their families, their savings, their entire lives. They got on boats headed for a place most of them had never seen. Why? Because they heard there was opportunity. Land. Resources. A chance to build something from nothing. They didn't have guarantees. They didn't have a business plan reviewed by McKinsey. They had a gut feeling that the risk of staying put was greater than the risk of going.

That's where we are right now with AI. Except you don't have to get on a boat. You don't have to leave your family. You don't have to cross an ocean. The frontier is sitting on your desk. It's in your pocket. And the barrier to entry isn't physical courage or generational savings. It's curiosity and willingness to learn.

I co-founded a digital marketing agency over a decade ago. We've grown it to eight figures in annual revenue, still growing year over year, with a team of about 85 people. And I can tell you with absolute certainty that the tools available to a single motivated person today are more powerful than what my entire team had access to just three years ago. That's not an exaggeration. That's not marketing speak. That is the literal, practical reality of what AI has done to the cost and complexity of building things.

A solo developer can now ship software that would have required a team of ten. A single creator can produce video content that would have needed a production crew. A one-person marketing operation can run campaigns with the sophistication of an agency. The leverage is insane, and it's available to anyone willing to pick it up.

This is the gold rush. And most people are still debating whether gold is real.


The Fear Default

I get it. I really do.

When something this big shows up this fast, the natural human reaction is fear. It's not weakness. It's biology. We're wired to be suspicious of rapid change. Our ancestors who approached the rustling bush with caution lived longer than the ones who ran toward it yelling "cool, what's that?" Evolution built us to be skeptics first.

But here's the thing about that evolutionary wiring: it was optimized for a world of physical threats. Saber-toothed tigers. Rival tribes. Poisonous berries. It was not optimized for evaluating technological paradigm shifts. And when you apply threat-detection instincts to something like AI, you get a very predictable pattern.

First comes denial: "It's just a chatbot. It's autocomplete with a fancy wrapper."

Then comes dismissal: "It can't do anything creative. It just remixes existing stuff. There's no real intelligence there."

Then comes the gotcha game: "Look, it made a mistake. See? It hallucinated. It gave wrong information. It drew hands with seven fingers."

And every single one of those reactions feels satisfying in the moment. Every single one of them gives you a little hit of dopamine: see, I was right, it's not that impressive, I don't have to worry. But every single one of them is also a losing bet against the most well-funded, talent-dense technological arms race in human history.

I watch this play out in my own industry constantly. Someone will point to an AI-generated piece of marketing copy and say, "See? You can tell it's AI. It's generic. It doesn't have voice." And they're right. Today. But they were also right six months ago, and the gap between AI output and human output has closed by half in that time. They'll be right again six months from now, and the gap will have closed by half again.

Do you see the math? "It's not good enough yet" is not a permanent position. It's a countdown timer. And the timer is running faster than most people realize.

Remember when OpenAI first released Sora? The initial demos were ten-second clips. They were impressive for what they were, but they were clearly AI. The physics were a little off. Things morphed in weird ways. People confidently declared that AI video was years away from being useful.

A handful of months later, here we are with Higgsfield, Kling, and Seedance 2.0. We are getting five-minute mini movies. Are they perfect? No. Are they entertaining, engaging, and genuinely impressive? Absolutely. And the people who dismissed the ten-second clips? They are already moving to dismiss the five-minute versions. "Well, sure, but you can still tell. The lighting isn't quite right. The movements aren't natural."

My kids say the same thing. "Dad, you can tell it's AI." And I say, sure, yes you can. But remember when it was ten seconds of a woman walking down a street? Now it's a five-minute short film with dialogue and plot. Are they perfect? No. But they are getting better at a pace that should make every person in every creative industry pay very close attention.

They'll keep finding flaws. And the flaws will keep shrinking. Because that's what happens when tens of billions of dollars and the most brilliant engineers on the planet are all pushing in the same direction. Betting against improvement isn't skepticism. It's denial dressed up as sophistication.


The Creator's Superpower

Here's where I want to flip the script entirely. Because the disruption narrative ("AI is going to replace jobs, destroy industries, make humans obsolete") isn't just pessimistic. It's incomplete. It's looking at one side of the ledger and pretending the other side doesn't exist.

Yes, AI is going to disrupt existing workflows. Yes, some jobs as currently defined will change dramatically or disappear. That part is true, and I'm not going to pretend otherwise.

But here's what the fear narrative completely misses: AI doesn't just destroy capabilities. It distributes them.

Think about what it used to take to make a short film. You needed a camera, an expensive one. You needed lighting equipment. A sound setup. Editing software and the expertise to use it. A crew. Locations. Permits. Catering, probably. The barrier to entry for filmmaking was enormous, which meant that the only people who got to make films were the ones with access to capital and industry connections.

Now? A creator with a vision, a laptop, and access to the right AI tools can produce something that would have required a full production team five years ago. Not something identical. Something different. But something that can be genuinely compelling, emotionally resonant, and visually stunning in ways that weren't possible for an individual before.

The same thing is happening in music. In writing. In game development. In software engineering. In graphic design. In architecture. In every single creative and technical field, the tools are getting powerful enough that the bottleneck is no longer resources or technical skill. It's vision. Ideas. Taste. The human parts.

I find this incredibly exciting. Because it means that the people who have great ideas but lacked the resources to execute them are suddenly empowered. The kid in Broken Arrow, Oklahoma, or some small town in Wisconsin, or a village in India, who has a brilliant concept for a game, or a film, or a product, can now actually build it. Not someday. Now.

I see this in my own work every day. Small healthcare practices that could never afford a custom software team can now compete with hospital systems that have $50 million IT budgets. A small e-commerce brand can run marketing with the sophistication of a company ten times its size. The playing field isn't just leveling. It's inverting. Being small and nimble is becoming an advantage, because you can adopt new tools faster, iterate quicker, and don't have legacy systems and bureaucratic approval processes slowing you down.

If you're a creator, an entrepreneur, a builder of any kind, you have superpowers now. Actual, literal superpowers compared to what was possible even two years ago. And the crazy part is that barely anyone has figured this out yet. We're in the earliest innings of people understanding what's actually possible.


What I Think About at 2 AM

I have two kids. And like any parent, I think about their future constantly. What world am I leaving them? What skills will matter? What will their lives look like?

These questions hit different when you're deep in the AI world. Because I can see the trajectory in a way that most people can't. Not because I'm special, but because I'm immersed in it every day. And the honest truth is that some of what I see keeps me up at night.

Not the "robots taking over" kind of keeps-me-up. That's science fiction anxiety, and while I don't dismiss it entirely, it's not what occupies my thoughts at 2 AM.

What keeps me up is the speed of change relative to the speed of adaptation. Institutions move slowly. Education moves slowly. Public policy moves at a glacial pace. And AI is moving at the speed of a venture-backed rocket with unlimited fuel. The gap between what's possible and what most people understand to be possible is growing every single day, and that gap is where a lot of the danger lives.

My kids are going to grow up in a world where the most powerful creative and analytical tools in human history are freely available to anyone. That's amazing. But they're also going to grow up in a world where the most powerful manipulation, surveillance, and deception tools in human history are freely available to anyone. And that's terrifying.

I think about deepfakes. Not the obvious ones, the ones that are clearly fake and kind of funny. I think about the ones that are indistinguishable from reality. The ones that will be used to fabricate evidence, manipulate elections, destroy reputations, and create chaos at scale. We're not ready for that. We don't have the institutional immune system to handle it.

I think about social engineering attacks that are so personalized, so contextually aware, so psychologically sophisticated that even smart, cautious people will fall for them. Phishing emails written by AI that has studied your communication patterns, your relationships, your vulnerabilities. Not the clumsy Nigerian prince emails of the past. Something far more insidious.

I think about autonomous weapons systems. AI-powered surveillance states. The concentration of power in the hands of whoever controls the most capable models. The potential for these tools to be used for control rather than liberation.

I'm not going to pretend these risks aren't real. They are. And anyone who tells you otherwise is either naive or selling something.

But here's where I land, every single time, after running through the scenarios at 2 AM: the solution is not to slow down. It's not to hide. It's not to pretend we can put this genie back in the bottle. The genie is out. It's been out. And no amount of wishful thinking or regulatory hand-wringing is going to change that.

The only way we navigate this well is if the people building with AI for good outwork, outbuild, and outpace the people building with AI for harm.

That's it. That's the whole strategy. The optimists have to outrun the pessimists and the bad actors. The creators have to produce more than the destroyers. The people using these tools to make life better have to move faster and think bigger than the people using them to make life worse.

And that means we need more people leaning in, not fewer. Every person who sits on the sidelines out of fear is one fewer person pushing toward the good outcomes. Every person who dismisses AI as hype or threat is one fewer person building the future we actually want to live in.

That's what gets me out of bed in the morning. Not the fear, but the responsibility. The understanding that the people who engage with this technology thoughtfully and ambitiously are the ones who will determine whether my kids grow up in a world that's amazing or a world that's terrifying.

I choose amazing. And I'm willing to bet my time, my energy, and my career on making that the more likely outcome.


The Quality Trap

Let me spend a moment on something I see constantly, because it's one of the most seductive and most dangerous forms of AI skepticism.

I call it the Quality Trap. It goes like this: someone encounters an AI-generated output (an image, a piece of writing, a video, a piece of code) and they find a flaw. The hands are wrong. The reasoning has a gap. The code has a bug. The prose is a little flat. And they use that flaw as evidence that AI "isn't there yet" and therefore isn't worth taking seriously.

This feels smart. It feels like critical thinking. It feels like you're the discerning one in a room full of hype-drunk believers.

But it's a trap. Because it assumes that the current quality level is the permanent quality level. And if there's one thing we know about AI development, it's that quality only moves in one direction: up.

Let me put it in concrete terms. GPT-3 came out in 2020. It could generate text that was coherent but often nonsensical over longer passages. It couldn't follow complex instructions. It had no real reasoning ability. If you evaluated AI's potential based on GPT-3 and concluded "this is neat but limited," you would have been right. For about 18 months. Then GPT-3.5 showed up and was dramatically better. Then GPT-4. Then Claude. Then Gemini. Then the open-source models caught up. Then reasoning models emerged. The pace hasn't just continued. It's accelerated.

Every person who confidently declared "AI can't do X" has eventually been proven wrong. Not in decades. In months.

"AI can't write good code." It can now write code that passes senior engineer interviews.

"AI can't create realistic images." We now have images that fool professional photographers.

"AI can't reason about complex problems." Current models can pass the bar exam, medical licensing exams, and PhD-level science tests.

"AI can't make videos." We've covered this one.

"AI can't understand context and nuance." It increasingly can, and the gap between AI understanding and human understanding in specific domains is narrowing rapidly.

I'm not saying AI is perfect. It's not. I work with it every day and I see the limitations constantly. But I've also learned to distinguish between "this is a fundamental limitation" and "this is a temporary limitation that will be solved by more compute, better training data, and smarter architectures." Almost everything people point to as evidence of AI's inadequacy falls in the second category.

So when someone tells me "you can tell that image is AI-generated" or "that AI-written copy doesn't have the same soul as human writing" or "the AI video has uncanny movements," I don't disagree. I just add two words: "for now."

And "for now" has a very short shelf life in this industry.

The people who build their strategies around AI's current limitations are building on quicksand. The people who build their strategies around AI's trajectory are building on bedrock. I know which one I'm choosing.


What I'm Actually Doing About It

I want to get practical for a moment, because philosophy without action is just content.

At Influx, we've fundamentally restructured how we think about work. Not because we had to. We were profitable and growing before AI entered the chat. But because I could see that the agencies who figured out how to leverage AI would have such a dramatic efficiency and quality advantage that the ones who didn't would be left behind.

We're not replacing people with AI. We're amplifying people with AI. A copywriter on our team can now produce three times the output at higher quality because they're using AI as a drafting and ideation partner. Our digital specialists can surface insights in minutes that used to take days. Our designers can generate concept variations at a pace that lets them explore creative directions they never would have had time for before.

The result? Better work, happier clients, and a team that's more focused on strategy and creativity (the high-value stuff) instead of grinding through repetitive execution.

And personally? I'm learning constantly. I spend time every single day using new AI tools, testing capabilities, pushing boundaries. Not because it's my job, though it partly is, but because I genuinely find it fascinating. Every week, I can do something I couldn't do the week before. That feeling of expanding capability is addictive in the best possible way.

I'm also talking about it. Writing about it. Sharing what I'm learning. Because I believe deeply that one of the biggest risks we face isn't the technology itself. It's the knowledge gap. The gap between the people who understand what's happening and the people who don't. The more people we can bring along on this journey, the better our collective outcomes will be.


The Optimism Imperative

I want to be very precise about what I mean by optimism, because I think the word gets misused in these conversations.

I'm not talking about blind optimism. I'm not talking about ignoring risks or dismissing concerns or putting on rose-colored glasses and pretending everything will be fine. That's not optimism. That's delusion, and it's dangerous.

What I'm talking about is active optimism. The kind that says: "I see the risks clearly. I understand the potential for harm. And I'm going to do everything in my power to push the outcomes toward good." It's optimism as a strategy, not optimism as a feeling.

Because here's the uncomfortable truth: pessimism, in this context, is a self-fulfilling prophecy. If the smart, capable, well-intentioned people all decide that AI is too dangerous and step back, who's left building? The people who don't care about safety. The people who don't care about ethics. The people who see powerful tools and think only about power.

The most dangerous possible outcome isn't that AI becomes too powerful. It's that AI becomes too powerful and only the wrong people know how to use it.

Every educator who refuses to engage with AI is ceding that space to someone else. Every artist who dismisses AI tools is letting someone with less taste and less vision define the aesthetic future. Every business leader who pretends this isn't happening is guaranteeing that their company will be disrupted by someone who took it seriously.

I don't want to live in a world shaped only by tech bros and defense contractors. I want the artists involved. The teachers. The small business owners. The parents. The people who care about community and beauty and meaning. And the only way to get them involved is to make the case, clearly, compellingly, honestly, that this is their moment too.

Not despite the risks. Because of them.


A Letter to the Builders

I'm from Park City, Utah. A ski town. A place where people come to get away from it all, not to build tech companies. It's not Silicon Valley. It's not Austin or Miami or New York. It's a mountain town where people raise families and know their neighbors' names.

And I'm building with cutting-edge AI tools from there. Not because Park City is a tech hub (it definitely isn't) but because it doesn't need to be. The tools don't care where you live. The models don't check your zip code. The opportunities are genuinely, truly, actually available to anyone with an internet connection and the drive to pursue them.

This is what I mean when I say this is the new frontier. The old frontiers had geography. You had to be in San Francisco for the Gold Rush. You had to be in Detroit for the auto industry. You had to be in Silicon Valley for the first tech boom. The AI frontier has no geography. It's everywhere. It's in Park City and Mumbai and Nairobi and São Paulo.

That's revolutionary. And I don't think people have fully absorbed what it means.

It means the next billion-dollar company might be started by a 22-year-old in rural Montana. It means the next great film might be made by a teenager in Lagos. It means the next breakthrough in healthcare might come from a solo researcher in Vietnam who has access to the same models as a team at Stanford.

We're entering an era where the distribution of opportunity is more equal than it has ever been in human history. Not perfectly equal. There are still massive disparities in infrastructure, education, and access. But the trend line is unmistakable, and AI is accelerating it dramatically.

So this is my message to the builders. To the people who have ideas but have always felt like they didn't have the resources, the connections, the credentials, or the permission to pursue them:

The gate is open.

I don't mean that metaphorically. I mean that the actual, practical barriers that prevented you from building the thing you've been thinking about are lower right now than they have ever been, and they're getting lower every single day. The cost of starting a software company has collapsed. The cost of creating professional content has collapsed. The cost of testing and validating ideas has collapsed.

What hasn't collapsed, and what can never collapse, is the value of having a good idea in the first place. The value of taste. Of judgment. Of understanding what people need and caring enough to build it for them. Those are human qualities, and they're more valuable now than ever, precisely because the execution barriers have fallen away.

AI isn't replacing human creativity and judgment. It's making them the only things that matter.


The Next Five Years

I don't have a crystal ball. Nobody does. But I have a trendline, and trendlines are useful.

Here's what I think is coming over the next five years, based on what I'm seeing today:

The cost of creating a functional software product will approach zero. Not literally zero. There will still be hosting costs, there will still be edge cases that require human judgment. But the gap between "I have an idea" and "I have a working prototype" will shrink from months to hours. This is already happening, and it's going to accelerate.

Content creation will undergo a transformation as significant as the one caused by the printing press. The ability to create high-quality video, audio, text, and interactive experiences will become as universal as the ability to send an email. The bottleneck will shift entirely from production capability to creative vision.

Personalized education will become a reality. Every student will have access to a tutor that understands their learning style, adapts in real time, has infinite patience, and has mastery of every subject. This alone could be the most transformative application of AI: the democratization of excellent education.

Healthcare will be dramatically improved. Diagnosis will be faster and more accurate. Drug discovery will accelerate. Administrative burden, the thing that burns out doctors and drives up costs, will be largely automated. The healthcare system won't be fixed overnight, but the tools to fix it will be available.

And yes, the challenges I mentioned earlier will intensify too. The deepfakes will get better. The social engineering will get more sophisticated. The concentration of power will be a real and ongoing concern. These aren't problems we can solve once and forget about. They're ongoing tensions that will require ongoing vigilance, innovation, and moral courage.

But here's the thing: every era of human progress has come with new risks. Fire gave us warmth and also arson. The printing press gave us literacy and also propaganda. The internet gave us connection and also misinformation. Nuclear physics gave us energy and also weapons.

In every single case, the answer was not to abandon the technology. It was to develop the wisdom, the institutions, and the collective will to use it well. That's what we need to do now. And we need to do it with urgency, because the timeline is compressed in a way that previous revolutions weren't.


Don't Sit on the Sidelines

I'll end with something simple.

If you've read this far, you're already ahead of most people. Not because you agree with me (maybe you don't, and that's fine) but because you're engaging with the question. You're thinking about it. You're not looking away.

Now do something with that.

I don't care what it is. Learn a new AI tool. Build something small. Experiment. Fail. Try again. Start that project you've been thinking about. Write that thing. Create that product. Have that conversation with your team about how AI fits into your work.

Don't wait until you feel ready. Nobody feels ready. I didn't feel ready when I started pushing AI into every corner of my business. I just knew that waiting felt more dangerous than starting.

Don't wait until AI is "good enough." It's already good enough to be transformative, and it's getting better every single day.

Don't wait for permission. Nobody is going to tap you on the shoulder and say "okay, now it's your turn to participate in the biggest technological revolution in human history." You just have to decide to participate.

I think about my kids. I think about the world they're going to inherit. And I know that the shape of that world is being determined right now, not in five years, not in ten years, right now, by the people who are choosing to engage rather than observe.

I want to be one of those people. I want to build things that make the future better. I want to work alongside others who feel the same way. And I want to look back in twenty years and know that when the moment came, I didn't sit on the sidelines.

The moment is here. The tools are ready. The frontier is open.

What are you going to build?


Adam Daniells is the co-founder of Influx. He splits time between Utah, Oregon, and Florida with his family. Find him on Instagram at @catchupwithadam and on X at @adamdaniells.
