
If you’re in your early–mid 20s right now, the job market probably feels like a bad joke.
You did the “right” things. You went to uni. You picked something respectable like computer science, finance, or data. You watched those pandemic-era stories about developers getting six-figure offers and thought, “Yeah, I’ll have some of that.”
And then you graduated into this…

Job ads are drying up. Recruiters ghost you. LinkedIn is a rolling obituary of layoffs.
Meanwhile, your X feed is full of people saying:
“Should’ve learned prompt engineering bro”
It’s a neat story. AI arrives, humans get pushed aside, roll credits.
It’s catchy. It is also not the full story.
If we take a closer look at who is struggling the most, it is mainly one group.
Young people with degrees who want office jobs.

The pain is not spread evenly across the whole workforce. It is focused right where the old system promised the most: fresh graduates who want to be software developers, analysts, consultants, junior marketers, junior product people, and so on. The “good jobs”.
Meanwhile, many hands-on jobs, like customer service, construction, or hospitality, are not seeing the same kind of cliff. They are still hard jobs, but the graphs do not suddenly collapse in the same way.
So the first important point is not that “everyone is doomed”. It is that the on-ramp into white-collar work is breaking right where young workers are trying to step in.
Since 2022, you have seen the news. Intel cutting well into double-digit percentages of staff. Amazon removing tens of thousands of roles. Microsoft announcing round after round of layoffs in the thousands.
In reality, there are a few other things happening at the same time.
For years, interest rates were very low. Money was cheap. Companies felt safe hiring aggressively and chasing growth. Especially during the pandemic, when everything moved online, tech companies went even harder and hired like crazy.
Then, inflation kicked in.
Governments and central banks reacted by raising interest rates. Money stopped being cheap. Growth slowed down. Suddenly those giant teams that looked exciting in 2021 started to look bloated and expensive in 2023.
So companies (especially big tech) began to “correct” their own pandemic over-hiring. AI appeared right in the middle of this correction. It is real, and it does change things, but it also gave executives a very convenient story to tell when they were already planning to cut.
AI matters, but the economy and previous bad decisions matter a lot too.
The next question is obvious. Even if macroeconomics explains a lot, how capable are these models in practice? Are they really good enough to replace fresh grads?
I recently wrote about measuring real-world AI impact here. This is ongoing research btw.
When ChatGPT first arrived at the end of 2022, it indeed felt magical, but mostly for shallow things like short essays, fun conversations and very simple coding questions.
It was not yet a drop-in replacement for a full junior engineer or writer, or anything really.
Quite quickly, however, the models improved. Take SWE-bench, for example, a coding benchmark. Instead of asking trivia questions, it gives models real GitHub issues from actual Python projects and asks them to fix those issues. Early on, models barely registered on this kind of benchmark (GPT-3.5 resolved just 0.17% of issues). Now the best models, especially when combined with tools and structured “thinking” modes, score well over 70% on tough variants.
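To make that concrete, here is a toy, self-contained sketch of the shape of such a benchmark. Everything in it is invented for illustration (the tasks, the “model”, the pass/fail check); the real harness checks out the repository at a pinned commit, applies the model’s proposed patch, and re-runs the tests that reproduce the issue.

```python
from dataclasses import dataclass

# Toy sketch of a SWE-bench-style evaluation. All names and data here are
# illustrative; the real benchmark uses actual GitHub issues and test suites.

@dataclass
class Task:
    repo: str               # e.g. a real open source Python project
    problem_statement: str  # the text of the GitHub issue
    expected_fix: str       # stand-in for "the failing tests now pass"

def toy_model(task: Task) -> str:
    """Stand-in for the model: in the real benchmark it outputs a code diff."""
    return f"patch that handles: {task.problem_statement.lower()}"

def resolved(task: Task, patch: str) -> bool:
    """Stand-in for applying the patch and re-running the repo's tests."""
    return task.expected_fix in patch

tasks = [
    Task("example/project", "Crash when the input list is empty", "empty"),
    Task("example/project", "Wrong total in the monthly report", "rounding"),
]

score = sum(resolved(t, toy_model(t)) for t in tasks)
print(f"Resolved {score}/{len(tasks)} issues")  # the benchmark reports this ratio
```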
Going from well under 1% to over 70% is a huge jump on paper. However, there is still a difference between “a model can do this in a controlled benchmark” and “a company has wired this into its actual systems in a safe, reliable way”. Most teams are still somewhere in the messy middle of that journey.
So, we are not in a world where every junior job has been silently replaced by a specific AI model (some of the best-performing models even rely on custom scaffolding to squeeze out extra performance). We are in a world where the models are already pretty capable, and companies are slowly learning how to plug them into real workflows.
That is an important caveat.
The part that should make young workers nervous is what happens when companies start treating whole job roles as reinforcement learning (RL) environments.
Pure scaling of models, with more parameters and more data, seems to be running into diminishing returns. The frontier is shifting toward post-training and reinforcement learning, where the model is trained to behave like an agent with a role and a goal.
Think about roles such as:
Customer support. Read tickets, categorise them, reply, escalate, follow up, close, report.
HR screening. Read CVs, match them to job descriptions, filter and rank, send rejections or next steps.
Basic operations roles that live inside software tools.
These tasks are exactly the kind that AI is good at learning. They can be turned into a game where the model tries something, gets told “good” or “bad”, and slowly improves. That is reinforcement learning: a sandbox where the AI agent takes actions and the environment scores them.
If you can simulate that reliably, you can let models practise the role at scale. Not for a few hundred examples, but for millions of interactions. If you can teach a model to handle a big chunk of that kind of work, the number of humans needed at the bottom of the ladder goes down. Or at least, the bar for those humans goes up. Suddenly “entry level” means “already highly productive with AI tools from day one”.
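To make the “job role as an RL environment” idea concrete, here is a minimal, self-contained sketch. The ticket texts, the three categories, the +1 or -1 reward, and the tiny tabular “agent” are all invented for illustration; real systems fine-tune large models against far richer simulations, but the loop is the same: act, get scored, update.

```python
import random

# Toy "customer support" environment: the agent sees a ticket and must route it.
# Everything here (tickets, categories, rewards, the tabular learner) is illustrative.
TICKETS = [
    ("I was charged twice this month", "billing"),
    ("The app crashes when I log in", "technical"),
    ("How do I change my email address?", "account"),
]
CATEGORIES = ["billing", "technical", "account"]

def reward(action: str, correct: str) -> float:
    # The environment tells the agent "that was good" or "that was bad".
    return 1.0 if action == correct else -1.0

scores: dict[tuple[str, str], float] = {}  # the simplest possible learner: a score table

def act(ticket: str) -> str:
    key = ticket.split()[1]  # crude feature: the second word of the ticket
    return max(CATEGORIES, key=lambda c: scores.get((key, c), 0.0))

def update(ticket: str, action: str, r: float) -> None:
    key = ticket.split()[1]
    scores[(key, action)] = scores.get((key, action), 0.0) + r

# The RL loop: try, get scored, improve. Real training runs this millions of times.
for _ in range(1000):
    ticket, correct = random.choice(TICKETS)
    action = act(ticket)
    update(ticket, action, reward(action, correct))

print({ticket: act(ticket) for ticket, _ in TICKETS})  # learned routing after training
```

Swap the toy score table for a large model and the hand-written reward for a richer simulator, and you have the outline of how a whole support workflow becomes training data.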
So even if AI is not the main reason for the current wave of unemployment, it is definitely changing the shape of how people enter the job market. And there’s no going back.
Weird twist. For all the noise about AI, most people and most companies are still pretty bad at using it.
Many developers only started using AI coding assistants in a serious way around 2024. Many companies still do not have proper policies, tooling, or standards around AI. A lot of usage is still “paste random stuff into ChatGPT and hope for the best”.
So, we have this strange situation where:
The models are already strong enough to matter
The job market is already tense for other reasons
And most organisations are still clumsy beginners with AI
That means AI is not yet fully deployed in the way people imagine. In a lot of cases it is more of an excuse than a fully functioning replacement. The risk is what happens when companies finally figure out how to use it properly.
Telling you to “just learn to code” is lazy advice. You probably already tried something along those lines.
A few more practical ideas instead:
First, stop treating AI like your enemy and start treating it like your power tool. Most people barely scratch the surface. If you can show that you know how to use AI to make real work faster and better, you stand out. Not in theory, but with concrete stories. For example: “I built a script with the help of AI that cut this task...”
Second, move toward work that is harder to measure and automate. Models are very good when the rules are clear and the reward is obvious (RL). They struggle more when there are politics, emotions, and taste involved. Anything that requires judgment, context, and taking responsibility for a decision is safer than pure button clicking.
Third, replace the old “first job” signal with your own evidence. If it is harder to get that first official title, you need other things that prove you can operate. Tiny projects, volunteer work, open source contributions, internal tools for a friend’s business, anything that exists in the real world and has an outcome that you can point to. You can just build stuff.
Show that you can get things done.
Blaming AI for everything is emotionally satisfying. I get it.
It turns a mix of interest rates, corporate strategy, policy choices and bad timing into a single villain with a sci-fi face.
The problem is that when we lean too hard on that story, we miss the real levers that still exist.
If you are trying to break into knowledge work, you are not just competing with “AI”. You are competing with a completely new hiring landscape and a set of “tools” that are rapidly being trained to do parts of your job.
That sounds bleak, but there is an advantage in understanding it clearly.
Once you see the game clearly, you can start making moves that actually improve your odds.