AI Is Already Talking to AI, and We’re Not Ready for the Consequences
Why leaders must act now to keep values, context, and nuance in the loop.
Here’s the thing…
AI is already talking to AI more than most people realise. This is not some distant, sci-fi scenario. It is already woven into the way many organisations work.
Let me give you one example.
A candidate uses AI to fine-tune a job application so it slips past the automated filters. The recruiter then uses AI to scan, score, and shortlist that same application.
On paper, this looks like progress. Faster hiring. More consistency. Less admin.
But neither of those AIs has the faintest clue about the actual human being behind the resume.
The system is optimising for patterns, not people.
The Nuance Is Lost in the Crossfire
Here is the hidden cost of this automation loop.
The candidate believes they are simply playing the game better by using AI to get past the first round.
The recruiter believes they are making the process more objective by letting AI do the initial shortlist.
Both are right.
Both are also missing the point.
People are not patterns.
They are contradictions.
They are quirks.
They are life stories that do not fit neatly into a dataset.
If we do not teach AI how to read and value these human elements – values, intentions, context, lived experiences – it will default to the easiest thing it can see: patterns in historical data.
And historical data has a funny way of baking in yesterday’s biases.
Right Now, Humans Are Still in the Loop
At the moment, we still have human judgment acting as a safety net.
A hiring manager might spot that a gap in a resume was actually a sabbatical that built resilience.
A customer service lead might pick up on a subtle tone in a client’s voice that signals dissatisfaction, even if sentiment analysis marks it as “positive.”
Humans still have the ability to notice what the machine cannot measure.
But here is the reality we need to face: we are moving toward a future where AI will not just recommend, it will decide.
When that happens, there is no hacking the system.
There is only the system.
Why This Matters for Leaders Right Now
It is tempting to see this as an HR or recruitment issue. It is not.
This is an organisational capability issue.
In every industry, AI is starting to make the first call – who gets the loan, which supplier wins the tender, which project moves forward.
If the AI has not been taught what your organisation believes matters, it will be taught what is easiest to measure. And the easiest things to measure are rarely the ones that create long-term value.
In practice, that means:
A bank might approve clients who look good on paper while missing out on those with stronger long-term repayment behaviours
A healthcare system might prioritise patients who fit the statistical profile while overlooking those with unusual but urgent needs
A project portfolio system might rank initiatives by short-term ROI while ignoring strategic importance
This is where leaders need to step in.
The Three Levers Leaders Can Pull
Fixing this is not about buying another AI tool. It is about shaping the system so it makes decisions in line with your values and strategy. That means focusing on three things:
Vision – Define what “good” looks like for your organisation. Not in vague mission statements, but in clear, measurable terms.
Guardrails – Put principles and constraints in place so AI cannot drift into decisions that contradict your values.
Capability – Equip your people to actively shape how AI makes decisions, instead of leaving it to vendors.
These three levers are what I have seen make the difference in organisations that manage to keep AI aligned with what actually matters to them.
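To make the guardrails lever concrete, here is a minimal sketch of what one might look like in code: every AI recommendation is checked against explicit, human-authored rules before it takes effect, and anything that trips a rule goes back to a person. The class, rules, and thresholds below are my own illustrative assumptions, not a reference implementation.

```python
# Minimal guardrails sketch: human-authored rules sit between the model's
# recommendation and the final decision. All names and thresholds here are
# illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Decision:
    subject_id: str                      # who or what the decision is about
    action: str                          # e.g. "approve", "reject", "shortlist"
    score: float                         # the model's confidence or ranking score
    features: dict = field(default_factory=dict)


def rule_no_banned_attributes(decision: Decision) -> str | None:
    """Block decisions that relied on attributes the organisation has ruled out."""
    banned = {"age", "gender", "postcode"}
    used = banned & decision.features.keys()
    return f"uses banned attributes: {sorted(used)}" if used else None


def rule_low_confidence_needs_review(decision: Decision) -> str | None:
    """Send borderline calls to a human instead of letting the model decide."""
    return "score below review threshold" if decision.score < 0.7 else None


GUARDRAILS = [rule_no_banned_attributes, rule_low_confidence_needs_review]


def apply_guardrails(decision: Decision) -> tuple[bool, list[str]]:
    """Return (allowed, reasons); any triggered rule escalates to a person."""
    reasons = [msg for rule in GUARDRAILS if (msg := rule(decision))]
    return (not reasons, reasons)


# Example: a high-scoring rejection that quietly leaned on postcode is stopped.
allowed, reasons = apply_guardrails(
    Decision("cand-042", "reject", 0.91, {"postcode": "2000", "tenure": 3})
)
print(allowed, reasons)  # False ["uses banned attributes: ['postcode']"]
```

The point is not the particular rules. It is that the constraints live in plain sight, written by people who own the values, rather than buried in whatever the vendor shipped.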
The Real Risk No One Talks About
The bigger danger is not that AI will replace people.
The bigger danger is that AI will replace thinking.
When people trust the system too much, they stop questioning it. They start assuming the top-ranked option is right because it came from the machine.
That is how bias cements itself.
That is how organisations drift into decisions that no one fully understands or owns.
I heard about a team that discovered their AI-driven procurement system had been excluding smaller suppliers. Not because anyone programmed it to, but because the training data was dominated by past decisions that favoured larger vendors.
It took them more than a year to spot it. By the time they did, the damage to supplier relationships and community trust had already been done.
The uncomfortable truth is that these systems are mirrors. They reflect the data we give them. If we do not actively shape that reflection, we end up amplifying yesterday’s blind spots at tomorrow’s speed.
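One practical way to catch that kind of drift before a year goes by is a simple disparity check on the decisions the system is already making. Here is a minimal sketch, assuming the decision log records supplier size and outcome; the field names and the 0.8 rule-of-thumb threshold are my assumptions, and a flag is a prompt for a human look, not proof of bias.

```python
# Minimal disparity check: compare how often each supplier segment actually
# wins, and flag segments falling well behind the best-performing one.
from collections import defaultdict


def award_rates_by_segment(decisions: list[dict]) -> dict[str, float]:
    """Share of tenders awarded, per supplier segment, from the decision log."""
    totals, wins = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["supplier_size"]] += 1
        wins[d["supplier_size"]] += d["awarded"]
    return {segment: wins[segment] / totals[segment] for segment in totals}


def flag_disparity(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag segments whose award rate is below 80% of the best segment's rate."""
    best = max(rates.values())
    return [segment for segment, rate in rates.items() if rate < threshold * best]


# Tiny made-up log, just to show the shape of the check.
decision_log = [
    {"supplier_size": "large", "awarded": 1},
    {"supplier_size": "large", "awarded": 1},
    {"supplier_size": "large", "awarded": 0},
    {"supplier_size": "small", "awarded": 1},
    {"supplier_size": "small", "awarded": 0},
    {"supplier_size": "small", "awarded": 0},
]
rates = award_rates_by_segment(decision_log)
print(rates)                  # {'large': ~0.67, 'small': ~0.33}
print(flag_disparity(rates))  # ['small'] -- worth a human review
```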
How to Avoid the “Data-Only” Trap
Here is what I tell leaders:
AI is brilliant at spotting patterns, but people are not patterns.
So you need to feed it more than data. You need to feed it context.
That might mean:
Including qualitative feedback alongside quantitative metrics
Training models on examples that reflect the values you want in decisions, not just the outcomes you had before
Running human-override review cycles where judgment is applied to edge cases before the AI learns from them (a rough sketch follows below)
This is slower upfront, but it is the only way to avoid speed-baking your blind spots into the core of your business.
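To show what that review cycle might look like in practice, here is a rough sketch where edge cases are routed to a person before the model is allowed to learn from them. The case fields, the edge-case test, and the human_review callback are assumptions for illustration, not a prescription.

```python
# Sketch of a human-in-the-loop review cycle: edge cases get a person's
# judgment (and possibly a corrected label) before entering the training set.


def is_edge_case(case: dict) -> bool:
    # Anything the model is unsure about, or that looks unlike its past data,
    # counts as an edge case here. Real criteria would be organisation-specific.
    return case["model_confidence"] < 0.6 or case["novelty_score"] > 0.8


def cases_safe_to_learn_from(cases: list[dict], human_review) -> list[dict]:
    """Return only the cases the model should be allowed to learn from.

    human_review(case) is any function that asks a person for a corrected
    label, returning None if the case should stay out of training entirely.
    """
    approved = []
    for case in cases:
        if is_edge_case(case):
            verdict = human_review(case)        # judgment is applied first
            if verdict is None:
                continue                        # quarantined, never trained on
            case = {**case, "label": verdict}   # the human-corrected label wins
        approved.append(case)
    return approved
```

The code itself is trivial. What matters is the design choice it encodes: a person gets to shape the example before the model’s view of the world hardens around it.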
What This Looks Like in Practice
I have heard about banks discovering that their credit approval AI was rejecting certain customer segments at higher rates. Instead of simply tweaking thresholds, some leaders went back to first principles.
They defined what “good” meant in their lending portfolio, not just in default rates, but in customer relationships and community impact. They retrained the AI with that broader definition in mind.
The outcome, in one case I heard about, was a significant increase in approval rates for underserved segments without a rise in defaults. That was not because the model suddenly became more intelligent. It was because the leadership team became clearer about what truly mattered to them.
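For illustration, retraining against a broader definition of “good” can start with something as simple as rebuilding the training label from more than one outcome. This is a hedged sketch with made-up column names and weights; the weighting is a leadership call about what matters, not a modelling trick.

```python
# Hedged sketch: the training target combines repayment with relationship and
# community outcomes, instead of default rates alone. Column names and weights
# are illustrative assumptions.
import pandas as pd


def broader_good_label(loans: pd.DataFrame) -> pd.Series:
    """1 = a loan the bank would want more of, under the wider definition."""
    repaid = 1 - loans["defaulted"]                        # the old, narrow signal
    relationship = loans["products_held_after_2y"] >= 2    # customer stayed and deepened
    community = loans["in_underserved_segment"]            # stated strategic priority
    score = 0.6 * repaid + 0.25 * relationship + 0.15 * community
    # Repayment alone (0.6) no longer clears the bar; the wider factors must show up too.
    return (score >= 0.7).astype(int)


# loans["label"] = broader_good_label(loans)
# model.fit(loans[feature_columns], loans["label"])  # retrain on the new target
```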
Another example I came across involved a government agency using AI to shortlist grant applications. The system was heavily favouring organisations that had previously applied because past applicants had more complete data records.
By adjusting the weighting and adding context-based scoring for first-time applicants, they made the process more inclusive without lowering the quality of funded projects.
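Here is a hedged sketch of that kind of adjustment: first-time applicants are not penalised for thin historical records, and a context score stands in for the data they could not have. The field names and weights are assumptions for illustration, not the agency’s actual model.

```python
# Illustrative grant-scoring sketch: weighting shifts for first-time applicants
# so missing history does not sink an otherwise strong application.


def grant_score(application: dict) -> float:
    project_quality = application["project_quality"]      # 0..1, panel-assessed
    context_score = application["context_score"]          # 0..1, e.g. community need
    data_completeness = application["data_completeness"]  # 0..1, record history

    if application["first_time_applicant"]:
        # Lean on quality and context rather than records they cannot have.
        return 0.6 * project_quality + 0.4 * context_score
    # Returning applicants keep a modest weight on their track record.
    return 0.5 * project_quality + 0.3 * context_score + 0.2 * data_completeness


# A strong first-time applicant is no longer outscored purely because a
# repeat applicant has a more complete file.
print(grant_score({"first_time_applicant": True, "project_quality": 0.9,
                   "context_score": 0.8, "data_completeness": 0.2}))   # ~0.86
print(grant_score({"first_time_applicant": False, "project_quality": 0.7,
                   "context_score": 0.5, "data_completeness": 1.0}))   # ~0.70
```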
The Shift Leaders Need to Make
Leaders need to stop thinking about AI as just a tool and start seeing it as a decision-maker that needs onboarding.
You would never hire a senior manager and just throw them into the job with a pile of old reports. You would explain your strategy, your priorities, and your non-negotiables.
AI needs the same treatment.
It needs clarity on what “good” looks like in your world.
It needs boundaries so it does not wander into bad habits.
It needs feedback so it can get better at making the calls you want it to make.
Why This Is Urgent
Right now, most organisations still have humans making the final call. But as AI gets embedded deeper into workflows, it will start making more decisions without review.
By the time you realise the system is misaligned, it will already have made thousands of decisions that are hard to unwind.
This is why I tell CIOs and senior leaders: the time to shape AI is before it is running at full speed. Because once it is, you are not steering anymore, you are just reacting.
The Outcome Is Simple
Teach the machines what matters about people, or risk a future where people no longer matter to the machines.
If you want to go deeper on this, there are workshops designed to help leadership teams do exactly that. They cut through the noise, align on a practical vision, and build the guardrails and capability you need to lead AI adoption with confidence.
Not with hype. Not with endless pilots. With clarity, capability, and action.
Stay ahead, stay relevant,
Amit