Mohamed Yassine Hemissi

Student • Dreamer • Human


© 2026 Mohamed Yassine Hemissi. All rights reserved.

AI Feels Powerful, AI Displacement Sounds Real, but Something Doesn't Add Up: My Thoughts


March 24, 2026 · 11 min read · By Mohamed Yassine Hemissi
Personal · Economics · Opinion

Note: This piece is intentionally messy and not a professional report. It should be read as a playful reflection and provocation rather than a formal analysis.

Introduction

Amid widespread claims that artificial intelligence will replace human labor entirely, I had to look at the situation from different perspectives. As someone who started in software around 8 years ago, I probably had a small existential crisis while running my first prompts through a large language model, because the outputs sounded good.

And why do I choose the word sound rather than simply confirming that they are? For multiple reasons. In this blog post I'm trying to lay out some of the ideas I see around AI displacement: why I'm somewhat relieved it won't happen tomorrow, why it may well happen in a few years, and why we're still not seeing signs or proof that it will really happen at the large scale we imagine.

The Historical Projections

Probably the thing I most hate to see, and honestly find too superficial as a standard of comparison, is the claim that the artificial intelligence revolution is similar to the Industrial Revolution of the 18th to mid-19th century, or that it's like the calculator not replacing the mathematician.

Why? Because artificial intelligence steps outside the frame of those tools and their usage. It is not a deterministic algorithm that follows predefined rules toward a fixed end. It is humanity's attempt to replicate the one thing we are, as of this date, unique in this universe for: the indeterminism and autonomy of what we call intelligence. In other words, it is an attempt to create a human-like intelligence that is controllable and not biologically fragile.

For now, it is just a representation of mathematical projections into spaces that carry semantic meaning because of our data and the evolution of the internet. It is not there yet, but it is already capable of replacing a lot of what I'll call tasks.

  • What are we really trying to achieve with AI? Is it evolving with the human race or replacing most of our intelligence? Are we trying to become creators, or just trying to find the next way to gain more economic power and world dominance?

How AI Works

Without getting into the technical details of what AI really is (assuming we mean LLMs when we say AI today), to simplify: it is like handing a machine a big box full of information and telling it to learn everything about the contents of that box. But instead of memorizing everything, the machine has enough logical structure, the rough equivalent of biological metabolism in a human projection, to store information as patterns.

At scale, we modify that logical structure so it serves these end goals, pattern recognition above all, ever better: different architectures, better data curation, more training, more optimization, bigger models, more injected formulas. Then we may also introduce some kid-teaching-like behavior, telling it what is good and what is right on specific tasks and rewarding it so it gets even better.

All of that is based on a wedding of linear algebra, the core, with calculus, probability, and statistics, plus computer science to host the computation. This produced the insane LLM that then enters the world of software engineering, where scripts, tools, and more engineering yield apps powered by AI that sound autonomous and intelligent. But this is only possible through large-scale training and powerful inference machines, and who pays the cost? Electricity and cooling, a lot of water, and a lot of climate skepticism when we love it blindly.
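To make the "saving information as patterns" idea concrete, here is a toy sketch of my own, nothing like how real LLMs are actually built: instead of memorizing (x, y) pairs from the box, the model compresses them into two learned parameters via gradient descent on a squared error, the same family of math (linear algebra plus calculus) mentioned above.

```python
# Toy data generated from a hidden pattern, y = 2x + 1: the "box" of information.
data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0   # the model's entire "memory": just two numbers, not the data
lr = 0.01         # learning rate

for _ in range(2000):            # training loop: nudge parameters to shrink error
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y    # prediction error on this example
        grad_w += 2 * err * x    # gradient of squared error w.r.t. w
        grad_b += 2 * err        # gradient w.r.t. b
    w -= lr * grad_w / len(data)
    b -= lr * grad_b / len(data)

# The learned pattern generalizes to inputs that were never in the box:
print(round(w * 100 + b))  # prints 201, though x=100 was never seen in training
```

The point of the sketch is the last line: the machine answers correctly for an input it never saw, because it stored the pattern rather than the examples. Real models do this with billions of parameters instead of two, plus the reward-style fine-tuning described above.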

  • Is it through insanely developed mathematical equations augmented with computation that we can really reach intelligence?

Benchmarks

Benchmarks are a great way to evaluate these language models' capabilities, and I would never doubt the relative importance of what the results confirm about models. But I don't feel they can tell us whether these models are ready to replace humans.

Humanity and intelligence are more than solving an equation, remembering grammar rules, or answering in a minute an objective question that would take us hours of thinking. Those are tasks you'll never be able to compete with a machine at, because it has raw computation on its side. But these machines are still far from understanding humans, far from understanding art as something other than an equation, far from the craziness of innovation, and they obviously suffer an amnesia tied to sessions and to their very existence, never accumulating what we humans call experience.

I see that artificial intelligence is functionally good, no doubt, even better than any of us. But structurally, in terms of memory, self-awareness, autonomy, self-directed improvement, and experiencing in the real sense of the word, which are at least to me what define intelligence, the current systems are far from it. They mimic it, and you'll realize that the more you work with them, but it is not the real thing.

  • Are these benchmarks enough to really evaluate intelligence and not stateful knowledge? And is experience even relevant?

Economic Reality

There will be inertia in companies, and fully adapting to AI will take time; it will be the result of structural change, for sure. What AI is good at is augmenting a human: what used to take me two weeks plus a designer, a reporter, and a manager, I can now do in less time, alone, with a team of AI. It won't be as good as that human team, though, so the best outcome I see is teams shrinking into solo domain experts augmented with AI.

And what is impressive is that the numbers show more jobs being created than removed. I also believe the narrative that the current economic situation is forcing companies to correct the massive post-COVID hiring and the insane investments of that period, which produced the job-market peak we all remember.

But when it comes to fully AI-run companies, I say we are not there yet, and I can claim that for a couple of reasons. Having used these tools, I know they need someone to engineer their context, augment them, and show them the successful path: in the short term they feel like the AGI we dream about, but in the long term, without guidance, they are a disaster, at least for now.

And for real, did we even check the bills for inference? Is pay-as-you-go on tokens really cheaper, at the end of the day, than paying a human employee for the same tasks? Is the economic risk of AI making mistakes that slow down business objectives, or cause breakage, really smaller than the cost of a human who at least has some awareness of responsibility and pays attention?

It is structural. Before saying AI is capable of fully replacing us, we need long-term statistics on its cost versus a human employee's cost, and proof that AI can do the job over the long term, not just insane boilerplate that makes me clap and share six Instagram reels about how the age of AI and AGI is here and it's the best time to build. Expertise versus cost versus statistics: only then can we say whether we'll be homeless or whether this is really an augmentation revolution.

  • Is inference cheaper than employees?
  • Can we trust a machine with the business objectives and risk of failure? What is the standard of QA on this?
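The first bullet can at least be framed as napkin arithmetic. Here is a sketch where every number is a made-up assumption of mine for illustration (the price, token counts, workload, and salary are not real data from any provider or labor statistic):

```python
# Napkin sketch of "inference vs. employee". All constants are hypothetical.
PRICE_PER_M_TOKENS = 15.0     # assumed blended $/1M tokens (input + output)
TOKENS_PER_TASK = 200_000     # assumed tokens burned per task, retries included
TASKS_PER_MONTH = 400         # assumed monthly workload
HUMAN_MONTHLY_COST = 6_000.0  # assumed fully loaded salary, $/month

# Monthly token bill: tasks * tokens, converted to millions, times the price.
ai_monthly_cost = TASKS_PER_MONTH * TOKENS_PER_TASK / 1_000_000 * PRICE_PER_M_TOKENS

print(f"AI:    ${ai_monthly_cost:,.0f}/month")   # prints AI:    $1,200/month
print(f"Human: ${HUMAN_MONTHLY_COST:,.0f}/month")  # prints Human: $6,000/month
```

Under these invented numbers the raw token bill looks five times cheaper, which is exactly why the headline narrative is seductive. But the sketch deliberately excludes the terms the questions above are really about: the engineer guiding the model, the cost of its mistakes, and QA. Change any assumption and the conclusion flips, which is the point: without long-term measured data, this is arithmetic on guesses.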

Psychosis

The yes-man is the type of friend, person, human, or version of yourself that never refuses an idea, offering no critique that would challenge you and make you rethink your existence. Human managers and teams do challenge you; that's why innovation happens, and that's how we mostly (not always, otherwise there would be no challenge at all) avoid failed products in the market and improve. Imagine a company of yes-men, and imagine the cost paid for ideas that were never critiqued.

Also, AI makes some tasks that used to challenge the brain so easy that we have to find other ways to train the muscle, or we end up as dopamine-extraction machines: throwing text, mining inference, watching green and red screens. But there is always what I call self-supervised evolution: you know what is happening, you know what productivity looks like in 2026, you don't refuse the wave but ride it, and you sub-task things in a way that keeps you a useful human, not a typing machine.

  • Are we losing our intelligence through mass reliance on AI? Or are we stepping into the next step in our apex-predator intelligence journey?

The Numbers and Sources

I'm not a fan of having this section in a blog post like this one, and I know almost no one will reach it, since I won't openly share the post (though it is accessible anyway); these are just my personal thoughts as of 2026-03-24. But I'll throw in some sources and numbers, for credibility and for the sake of studying what exists, so I don't sound like the people I try to avoid, the ones who just throw information around.

  • Am I sure of anything I said in this article? I'll answer this one: there would be no blog if I was sure.

AI curated the list.

  1. Anthropic - Labor Market Impacts (2026)
    Uses real AI usage data mapped to occupations; shows task-level augmentation and early hiring effects.

  2. 'Significant disruption': 300 million jobs at risk due to AI - HR Reporter
    Projects exposure of around 300M jobs globally with strong productivity-driven growth.

  3. NBER - Generative AI at Work (Brynjolfsson et al., 2023)
    Field experiment showing around 14% productivity gains in customer support tasks.

  4. Stanford HAI - AI Index Report (2025)
    Tracks global AI adoption, benchmarks, and economic impact trends.

  5. World Economic Forum - Future of Jobs Report
    Estimates 85M jobs displaced vs 97M created, indicating net positive labor shift.

  6. NVIDIA - State of AI Report (2026)
    Reports over 50% of firms achieving measurable productivity gains from AI adoption.