AI is getting better and better, but it’s still a long way from being perfect. So when you’re creating your AI policy or framework, how do you protect your team from imperfect AI? Instead of relying on AI alone, strike the right balance between trusting the quality of AI and applying human intelligence to it.
What are you doing to educate your people about using AI in your team and organisation?
I was speaking recently with a senior leader at a very large organisation that has just started an ambitious 3-year project to embed AI into the core of what they do. When ChatGPT was first launched, they banned it because they were worried about people using it without really understanding its limitations. Now, they’re rolling out a major AI project to deliver better results.
This leader told me they want to be sure the AI tools they build are highly accurate and reliable so their staff know they can trust and rely on them. So, as part of the project, they’re investing a lot into testing, monitoring, and constant QA to make sure the AI results are as close to perfect as possible.
That’s one approach, and I know other organisations that are also going down that path.
But it’s an extreme approach, and I don’t think it’s the only option.
For most organisations and leaders, you don’t need to go to that extent. Instead, strike a balance. There’s nothing wrong with aiming for higher quality from AI, but beyond a certain point, each improvement becomes increasingly difficult and expensive. So, while you’re improving the technology, also educate your people about its limitations. Whether they’re using AI for creating, sharing, decision-making, or anything else, they should know it’s not perfect. It gives them a good draft, and they then need to apply their human intelligence to it.
This is the balance between technology and people: the quality of the technology working with the judgement and knowledge of the people applying it.
It’s like driving a car. You might know the road rules, know your car really well, and travel the route frequently. But you still have to drive with attention, focus, and concentration, because unexpected things occur and you need to apply your human intelligence.
As a leader, if you’re crafting your AI policies, guidelines, or framework, keep this in mind. Whatever you say about the AI itself, include education in the mix. You can write the best policies in the world and even have the best technology, but you still need to educate your people so they can exercise good judgement.
If you’re interested in more about this, I’m running a free public virtual masterclass for leaders about AI policies, frameworks, and guidelines. Register here, and feel free to share it with others in your team and network as well.