AI won’t make you an expert, but it will make experts better

Imagine you’re working on a project with two colleagues. One is a seasoned expert; the other is just starting out. Both have access to the same AI tools.
Who do you think would benefit more?
A couple of articles I’ve read recently suggest that it’s the more experienced colleague.
AI amplifies expertise rather than compensating for inexperience
This article from The Economist argues that rather than AI being an equaliser, it will actually increase inequality:
In complex tasks such as research and management, new evidence indicates that high performers are best positioned to work with AI. Evaluating the output of models requires expertise and good judgment. Rather than narrowing disparities, AI is likely to widen workforce divides, much like past technological revolutions.
Essentially, experts can filter AI’s suggestions effectively, whereas novices struggle to discern good from bad outputs.
The wonderfully titled New Junior Developers Can’t Actually Code by Namanyay Goel has a similar theme:
Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They’re shipping code faster than ever. But when I dig deeper into their understanding of what they’re shipping? That’s where things get concerning.
Sure, the code works, but ask why it works that way instead of another way? Crickets. Ask about edge cases? Blank stares.
The foundational knowledge that used to come from struggling through problems is just… missing.
We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for this later.
Without struggling through problems themselves, people miss out on deep understanding and the problem-solving skills that come from it.
The key skill in the AI era: knowing when to use it
AI is great when it’s used for:
- Speeding up work that you already have expertise in.
- Getting a quick answer to something that is not your core competency.
But if you rely on it too heavily for your core work, you risk skipping the learning process and producing mediocre results.
Imagine a UX researcher who has never analysed research without AI. Are they really going to be able to judge the quality of AI-generated insights and craft a memorable narrative for the research?
AI as a power tool, not a cheat code
Although it would be handy to skip the learning and get all of your answers from AI, the best performers in the future are going to be the ones who blend deep expertise with AI skills.
Whenever you use AI, ask yourself:
- How important is the quality of the output?
- What’s the risk of getting it wrong?
- Am I able to judge the quality of the output?
The only way to develop this intuition is to use AI; just don’t let that come at the expense of building your foundational knowledge.