AI: Are We Holding It to Unrealistic Standards?
I recently gave a presentation on AI expectations where I explored a critical question: Are we hindering our own progress with AI by setting unrealistic expectations?
We’re in an exciting era. AI is increasingly capable of creating content, driving business outcomes, and even pushing the boundaries of what we thought possible. But there’s a gap between what we want from AI and what we’re willing to accept. Here’s a recording of the presentation. My thoughts continue after.
In my presentation, I shared a simple visual: a split image with a photo-realistic rendering of a professional woman on one side and a child’s crayon drawing of the same woman on the other. It represents the gap between the idealized output we envision and what we sometimes perceive we’re getting from AI. It’s important to remember that AI, like any new technology or new hire, requires time and resources to develop, refine, and deploy. We don’t expect human employees to be experts on day one, so why do we hold AI to that standard? In fact, many humans never reach 100% proficiency either.
I challenged the audience to think about the metrics they use to measure success, both for AI and for their human workforce. We often fall into the trap of comparing AI directly to a human in a specific role, and then we layer on even more stringent acceptance criteria for the AI because we’re afraid of where it might fall short, or how the consumers of the AI might react to it.
But is that a fair comparison? In reality, human accuracy is rarely perfect. Look at the pharmaceutical industry, which sustains high profit margins despite substantial production waste. Or the oil and gas sector, which extracts enormous value even though less than half of what’s pulled from the ground becomes usable gasoline. Or consider medical diagnoses, where error rates hover around 5%, translating to millions of misdiagnoses annually. Retail inventory records are accurate only about 65% of the time. Even movie screenwriting, a creative profession, sees a success rate of roughly 0.3%.
These are examples of essential industries thriving despite inefficiencies and inaccuracies. If we can tolerate these levels of imperfection in human processes, why are we so quick to dismiss AI for not achieving 100% accuracy?
Perhaps a more effective strategy would be to focus on how AI can augment existing processes and how we can tailor our “job description” for AI to be more process-specific. The real machines that have driven progress throughout history – like those in automated factories – aren’t humanoid robots attempting to replicate all human actions. They’re specialized tools designed for specific tasks within a larger system. AI is no different.
If you’re curious about how to get started, I recommended a few resources in my presentation: https://notebooklm.google.com/, https://ai.google.dev/, and https://cloud.google.com/.
The key takeaway is this: The biggest risk isn’t adopting AI – it’s not adopting AI. The losers in the new AI revolution will be the ones who wait for AI to be perfect before they start exploring its potential. The real challenge is for us to adapt our thinking, our metrics, and our expectations to embrace the immediate business value that AI can offer today.
The AI revolution is here, and it’s up to us to harness its power effectively.