A Fool with an AI Tool is Still a Fool
The attitude, culture and tools you need to oversee AI.
Can a tool eradicate the unintended negative consequences of human decision-making with AI? Adrian Bridgwater discusses this question in his excellent eWeek UK article, TIBCO and the Need for ModelOps in the Automation-Centric IT Stack, about TIBCO ModelOps.
He wrote, based on our interview, that:
"TIBCO says that with ModelOps, customers can move past the worry of unintended negative consequences of failed automation because of complex or poorly managed AI or rules-based models, making it safer to automate based on validated and secure AI models.
None of which is to say that this technology doesn’t automate and alleviate. It clearly does. #Citizendatascientists are probably inevitable, but they sometimes need to be treated with the same caution as citizen lion tamers. Just because you’ve got the hat doesn’t mean you know the job."
I think Adrian nails it, although, to be clear, we do NOT suggest that this tool, or any tool, can eradicate a human challenge like algorithmic risk and bias. Instead, our take is that #nocode #ModelOps tools can make it easier for what Daniel Kahneman calls Decision Observers to collaborate: to select, vet, discuss, observe and manage AI.
Yes, a fool with a tool is still a fool. But, on the other hand, #ModelOps tools in the hands of honest, smart, good humans will surely help mitigate the danger of using #AI at scale.