Issue 3: Garbage In, Garbage Out: Why HR Must Lead the AI Policy Revolution
The hidden crisis of AI is that it makes mediocrity look like brilliance.
David Ballew, Founder & CEO
Originally Published: 3 September 2025
This analysis is based on Nimble Global's proprietary research and 30+ years of practical experience across over 90 countries.
© 2019 - 2026 Nimble Global. All rights reserved.
The British have a saying: 'Every day is a school day.' For me, using AI has been exactly that. It has been a serious learning curve, and it often takes multiple edits, sometimes across different tools, to achieve the correct output. I usually create three to five versions before I'm satisfied... that's my autistic and ADHD brain in action.
Some call that frustration; I call it professional rigour. That's when it hit me: if I need this level of refinement after decades in the workforce, what does that mean for employees who lack the depth to know when AI is wrong?
The Silent Crisis in Your Workforce
Across almost every organisation today, employees are already using AI tools to produce work, often without approval, oversight, or regulation. The grammar is nearly flawless, the structure is logical, and the tone is polished. On the surface, it looks like high-quality work.
But beneath that polish often sits something far more dangerous: content that is shallow, flawed, or even outright wrong. And the real danger is that neither employees nor their leaders can easily tell the difference.
For years, business leaders have been meticulous about systems and data, knowing that 'garbage in' always produces 'garbage out.' I've watched organisations build processes, controls, and quality checks to prevent bad inputs from poisoning outputs.
Yet AI is different. This revolution doesn't just change our tools; it changes our thinking.
And unlike past revolutions, the risks aren't confined to machines, supply chains, or code. They live inside the very judgment of our people.
If unchecked, AI will quietly redefine what competence looks like in your workforce. It will make mediocrity look like mastery. And unless organisations respond, they risk losing the very capability that has always set them apart: the ability to innovate responsibly, think critically, and most importantly, evaluate rigorously.
This is the hidden crisis of AI: it can make mediocrity look like brilliance.
The Overlooked IP Threat
There's another danger most leaders haven't faced squarely: the loss of intellectual property. Every time an employee pastes strategy decks, client data, or product code into an AI tool, they may be handing sensitive information to a system the company doesn't control. Depending on the tool's terms of service, that data might be retained, reused, or exposed.
A single 'innocent' prompt can put trade secrets at risk, violate data privacy regulations, or breach client confidentiality, and most employees don't even realise they're doing anything wrong.
Your IP can be lost in a single keystroke.
When AI Masks Weak Thinking
I recently reviewed a talent retention strategy that had all the hallmarks of a high-quality document. It was well written, structured neatly, and delivered with confidence. But a closer look revealed fundamental flaws. It recommended financial incentives for roles where research shows autonomy and purpose drive retention, and suggested annual reviews for high performers who need frequent feedback. The analyst who created it had used AI to fill in gaps they didn't understand. The scariest part? The document almost reached the leadership team.
Years ago, a university professor of mine, Kate Kyle, who has long since passed away, used to say, 'You can put lipstick on a pig, but it's still a pig.' Like me, she called things exactly as she saw them.
That's the real risk: employees are outsourcing their thinking, and AI is disguising the gaps.
Why HR Must Lead
It's tempting to think this is an IT problem. It isn't.
Technology teams can set guardrails, but the real issue is how people use AI at work. This is about capability, compliance, and culture, which sit firmly in the HR domain.
If left unmanaged, AI will rapidly erode three critical assets:
Compliance – legal documents, policies, and Performance Improvement Plans (PIPs) drafted with AI and issued without proper review
Culture – juniors who never build actual expertise because AI does the heavy lifting
Capability – a workforce that looks sharp on the surface but can't think deeply
Without HR leadership, AI will hollow out the foundations of your workforce. HR must take ownership now, before the risks harden into habits that are far harder to reverse.
What an AI Policy Should Do
The goal of an AI policy is to make sure employees learn to use AI responsibly, critically, and safely. At its core, an effective policy needs to do three things:
Verify outputs: Require employees to be able to explain, defend, and, where necessary, challenge AI-generated work.
Protect skills: Ensure employees, especially juniors, continue to practice core problem-solving without relying too heavily on AI.
Safeguard data: Make it crystal clear that confidential, client, or proprietary information must never be pasted into non-approved AI tools... doing so is like working over unsecured public Wi-Fi instead of the organisation's VPN.
From there, organisations can layer in more sophistication, including competency-based access levels, mentorship in AI literacy, and approved tool lists that balance innovation with security.
The Call to Action for HR Leaders
The biggest risk of AI is that your people will lose the ability to think critically while exposing your company's intellectual property. You'll be left with a workforce that sounds capable but can't truly deliver. That's why HR must lead. This is about protecting judgment, compliance, and culture. It's about ensuring AI becomes a multiplier of human capability rather than a mask for its absence.
So here's the uncomfortable question:
Has your organisation started drafting its AI policy? And if not, who is responsible for making it happen?
Closing Reflection
Many assume AI does all the work. In reality, the effort lies in asking the right questions (what AI practitioners call prompts), providing the right direction, and refining the output again and again until it truly delivers value.
It is a collaboration, not a free gift.
For me, AI has become a powerful partner that helps me structure my thoughts and polish communication in ways that once took much longer. That doesn't mean AI replaces my judgment... it augments it. And that's the point: when used wisely, AI doesn't erase human uniqueness, it amplifies it.
AI, like people, is still learning every day. So are we.
The organisations that embrace this mindset will thrive; those that do not will mistake polish (lipstick!) for progress and pay a heavy price.
Stay Nimble. Stay Compliant.
About the Author: With extensive experience in workforce compliance and global workforce solutions, David Ballew has consistently driven innovation and operational excellence. As the Founder and CEO of Nimble Global, David combines deep industry expertise with a unique perspective shaped by his neurodiverse AuDHD profile, enabling creative problem-solving and multidimensional insight. A pioneer in MSP models and workforce technologies, he is dedicated to bridging global compliance gaps and helping organisations build resilient, future-ready workforces.
Real People. Real Action. Real Innovation.
Disclaimer: This content is intended for informational purposes only and does not constitute legal, tax, or employment advice. Readers should consult qualified professionals in relevant jurisdictions before acting on the guidance provided. Nimble Global disclaims any liability for actions taken based on this publication.