Consultant or ChatGPT? How to avoid AI irritation and build real expertise

11 February 2026 | Consultancy.eu

Edward Jansen is board member at ADC

It’s a familiar daily moment: you open an email or a proposal and it immediately feels like it was written by an AI. It’s superficial, vague and takes far too long to comb through. The result? You run it through your favourite AI tool to summarise it. This dynamic is also referred to as the Fishburne effect: AI produces AI slop that can only be processed and filtered by even more AI.

Artificial intelligence is fast, efficient and increasingly accessible. Precisely for that reason, the temptation to accept its output at face value is strong. But this is where a distinction emerges that is becoming ever more important for consultants: the difference between thoughtful use of AI and AI slop. AI slop is output that is shallow and lacks context, yet is still presented as professional expertise by the sender.

And this is not just annoying. It is risky. Not for AI as a technology, but for the consulting profession itself. When AI output is used without human oversight, interpretation, or independent judgement, the work quietly shifts from advising to reproducing. This not only creates more work through constant checking, but also erodes trust among clients, stakeholders, and colleagues. Exactly the opposite of what AI is supposed to deliver.

Thoughtless output undermines trust

AI is now deeply embedded in the day-to-day work of consultants: from search queries and analyses to code generation and strategic summaries. This accelerates processes, but at the same time blurs our view of quality.

In software development, poor output is often measurable: more bugs, incidents, or rework. In consultancy, it’s different. When is an analysis truly solid? When is advice sufficiently substantiated? And who can really tell the difference between a sharp insight and something that merely sounds plausible but is fundamentally shallow? This is where a blind spot emerges. If weak AI output goes unnoticed yet is still delivered as expertise, the profession loses its distinctive role as a trusted advisor.

Organisations don’t pay consultants for their tools; they pay for their judgement. Professional relationships are built on trust: trust that analyses are sound, advice is well thought through and conclusions are grounded in insight and experience.

That trust only exists when output is consistently high-quality and substantive. Superficial, thoughtless use of AI works directly against this. Unverified texts or analyses that sound logical but are thin in substance undermine confidence. Not only in the advice, but in the consultant behind it.

If you repeatedly accept unconsidered, superficial output as “good enough”, you inevitably put your client relationships under pressure.

Consultant or ChatGPT 2.0?

No one adds “powered by Google” to an analysis. Nor is it necessary to explicitly state that a document was (partly) created using AI. But every professional remains fully responsible for their output, regardless of the tools used.

AI can be extremely helpful for structuring, summarising and cleaning up content. But in advisory work, the bar is higher: the final product must be well-reasoned, tested against real-world experience and explicitly shaped by human judgement. That is where the value lies.

In situations where authenticity and expertise are explicitly expected, such as boardrooms, strategic programmes, or confidential decision-making, transparency pays off. Not out of obligation, but out of professionalism. Because ultimately the question is simple: is the client receiving advice from a consultant, or from ChatGPT?

Three ways to prevent AI slop

How do you ensure AI strengthens expertise rather than replaces it? Three practical guidelines:

Make high-quality AI output the standard
An open feedback culture goes beyond the question of whether AI is used. It requires clear agreements on what constitutes good output, and on when and how AI can be used responsibly. When does a text become too generic? Do we share AI-generated content with clients? Should all AI output always be reviewed?

When these conversations take place on a regular basis, a shared standard for responsible AI use emerges. This requires a safe feedback culture in which feedback is about safeguarding quality, not exerting control. Organisations that fail to set this standard explicitly risk allowing “good enough” to quietly become the norm.

Train beyond prompting
Good prompting is a basic skill, not the finish line. Equally important is training in responsible use: ethics, risks and impact. Think, for example, of discussing successful AI applications or holding critical feedback sessions on AI usage. This shifts the focus from speed to quality.

Take ownership of the final result
Use AI as an assistant, not as the author. The end product should show that choices were made, trade-offs considered and judgement applied. Expertise doesn’t lie in smooth sentences, but in the reasoning behind them. Those who give up that ownership ultimately lose their credibility as well.

AI isn’t going away; quite the opposite. Agents will become smarter, models more powerful and integration deeper. For consultants, adapting is not a choice but a necessity. But maintaining control remains essential. Because one thing is certain: no one pays for an AI expert; they pay for trust and deep, human insight. Thoughtless output undermines that trust, and with it the very core of the consulting profession.