Feb 9, 2022
Explainable AI, The New Standard
Mike Cornell
Gunslinger and absolute bad man.
Explainable AI (XAI) is a topic of discussion that recently saw the American Democrats reopen their push for the Algorithmic Accountability Act, a bill that would require tech firms to account for their platforms' biases. Fear currently surrounds the growing application of AI within public sectors, where it can disadvantage marginalised groups.
Delving deeper into explainability, why is it such a hot topic? Certain AIs have produced discriminatory or false predictions because their datasets were biased, incomplete, or unrepresentative, thus creating a biased product.
In 2015, Amazon's hiring algorithm was found to be heavily biased against women. The model had been trained on the previous ten years of applicants, who were predominantly men, producing an NLP model that focussed on capturing certain linguistic regularities while ignoring others. There are clear ethical issues in using data to create a model that will have tangible outcomes on people's lives. Where was the due diligence, firstly in collecting the data, and secondly in analysing the dataset before building the model?
Facial recognition software presented a uniquely problematic scenario: in 2018, gender-recognition AIs from IBM, Microsoft and Megvii were shown to be accurate up to 99% of the time, provided the image was of a white male. Reliability dropped as low as 65% for anyone outside that bracket.
Garbage In, Garbage Out is a concept in computer science dictating that the quality of the data fed into an algorithm has a direct effect on the quality of its output. Undeniably, the datasets used in the examples above were inherently flawed, as they underrepresented the overall population. There are countless such examples.
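Garbage In, Garbage Out can be illustrated with a toy model. The sketch below is hypothetical (the data and "model" are invented for illustration, not drawn from any real system): a trivially naive hiring model trained on a skewed applicant history simply reproduces that skew.

```python
from collections import Counter

# Hypothetical illustration of Garbage In, Garbage Out: a naive
# "most common label" model trained on a skewed hiring history.
historical_hires = ["male"] * 90 + ["female"] * 10  # unrepresentative input

def train_majority_model(labels):
    """Return a 'model' that always predicts the most common label seen."""
    most_common_label, _ = Counter(labels).most_common(1)[0]
    return lambda applicant: most_common_label

model = train_majority_model(historical_hires)
# Skewed data in, skewed prediction out -- regardless of the applicant.
print(model({"name": "any applicant"}))  # → male
```

Real models are far more sophisticated, but the principle scales: whatever regularities dominate the training data dominate the output.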
Consequently, the problem lies in the fact that these AIs were unable to explain where the bias occurred; such datasets need to be interrogated by a data scientist to unveil the underlying discrimination.
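One simple form that interrogation can take is a representation audit: comparing each group's share of the training data against its share of the population the model is meant to serve. The numbers below are invented for illustration.

```python
# Hypothetical representation audit: group counts in a training set
# versus the population shares the model should reflect.
dataset_counts = {"white_male": 800, "white_female": 120, "other": 80}
population_share = {"white_male": 0.31, "white_female": 0.31, "other": 0.38}

total = sum(dataset_counts.values())
for group, count in dataset_counts.items():
    observed = count / total
    expected = population_share[group]
    # Flag any group with less than half its expected representation.
    flag = "UNDER-REPRESENTED" if observed < 0.5 * expected else "ok"
    print(f"{group}: {observed:.0%} of data vs {expected:.0%} of population -> {flag}")
```

Checks like this won't catch every source of bias, but they make the most glaring dataset imbalances visible before a model is ever trained.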
What does “explainable” mean in the context of AI? Jiva.ai words it simply as
“models that tell you why they behave a certain way as well as what the predictions are”.
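As a minimal sketch of what "why as well as what" can mean, consider a hand-written linear scorer that returns its prediction together with each feature's contribution to the score. The weights and feature names here are illustrative assumptions, not any real Jiva model.

```python
# A minimal sketch of a model that reports both *what* it predicts and
# *why*. Weights, bias, and features are invented for illustration.
WEIGHTS = {"age": 0.02, "blood_pressure": 0.015, "smoker": 0.4}
BIAS = -1.5

def predict_with_explanation(features):
    # Each feature's contribution to the score is computed explicitly,
    # so the "why" is available alongside the "what".
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    prediction = "high risk" if score > 0 else "low risk"
    why = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return prediction, why

pred, why = predict_with_explanation({"age": 64, "blood_pressure": 92, "smoker": 1})
print(pred)                  # what: high risk
for name, contribution in why:
    print(f"  {name}: {contribution:+.2f}")  # why: ranked feature contributions
```

Linear models are explainable almost for free; the hard part of XAI is recovering this kind of attribution from models whose internals are not so transparent.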
Coupled with Jiva's low-code/no-code approach, we've ensured that all predictions will be explainable to everyone.
Clinicians across the country are under a crushing burden; offering JivaRDX in a form that would require extra training isn't an option. The grand vision is to empower all clinicians to create their own models and receive explainable predictions; there is no 'black box' operating under the hood.
Considering that AI is currently an unregulated force, explainability should be at the forefront of every platform. Hospitals stratifying risk in triage, pharmaceutical companies engaged in drug discovery, and countless other public-sector bodies need the safety of explainability in their AI.