Unreadable Code

Explainable AI, The New Standard

Explainable AI (XAI) is a topic of discussion that recently saw American Democrats reopen their push for the Algorithmic Accountability Act, a bill that would require tech firms to account for the biases of their platforms. Fear currently surrounds the growing application of AI within the public sector, where it can disadvantage marginalised groups.

Delving deeper into explainability: why is it such a hot topic? Certain AI systems have produced discriminatory or false predictions because the datasets behind them were biased, incomplete, or unrepresentative, and a biased dataset creates a biased product.

During 2015, Amazon's hiring algorithm was found to be heavily biased against women. The model had been trained on the previous 10 years of applicants, who were predominantly men, producing an NLP model that captured one specific linguistic regularity while ignoring others. There are clear ethical issues in using data like this to create a model that will have tangible outcomes on people's lives. Where was the due diligence: primarily in collecting the data, and secondly in analysing the dataset before building the model?

Facial recognition software presented a uniquely problematic scenario. In 2018, IBM, Microsoft and Megvii created gender-recognition AIs that were accurate up to 99% of the time, provided the image was of a white male.

Reliability dropped as low as 35% for anyone outside that bracket.

Garbage In, Garbage Out is a concept in computer science which dictates that the quality of the data fed into an algorithm has a direct effect on the quality of its output. The datasets used in the examples above were undeniably flawed, as they under-represented the overall population. There are countless such examples.
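The principle can be seen in a deliberately naive toy model (the training data here is invented for illustration and does not come from any of the cases above):

```python
from collections import Counter

def train_majority_classifier(labels):
    """A deliberately naive 'model': it learns only the most common
    label in its training data and predicts that label for everyone."""
    return Counter(labels).most_common(1)[0][0]

# Skewed training data: historical hires were predominantly men,
# so the model's single learned answer reflects that imbalance.
historical_hires = ["male"] * 9 + ["female"]

learned = train_majority_classifier(historical_hires)
print(learned)  # the skew in the input dictates the output
```

However sophisticated the real model, the same mechanism applies: it can only reflect the regularities present in what it was fed.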

Consequently, the problem lies in the fact that these AIs were unable to explain where the bias occurred; such datasets need to be interrogated by a data scientist to unveil the underlying discrimination.
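One simple form that interrogation can take is breaking accuracy down per demographic group, since an aggregate number can hide exactly the kind of disparity the facial-recognition studies exposed. The records below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic group, true label, prediction).
records = [
    ("lighter_male", 1, 1), ("lighter_male", 0, 0), ("lighter_male", 1, 1),
    ("lighter_male", 0, 0), ("lighter_male", 1, 1), ("lighter_male", 0, 0),
    ("darker_female", 1, 0), ("darker_female", 0, 1), ("darker_female", 1, 1),
]

def accuracy_by_group(records):
    """Break accuracy down per group to expose disparities
    that a single overall score would hide."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        hits[group] += int(truth == prediction)
    return {group: hits[group] / totals[group] for group in totals}

overall = sum(t == p for _, t, p in records) / len(records)
per_group = accuracy_by_group(records)
print(overall)    # looks respectable in aggregate
print(per_group)  # the per-group breakdown tells a different story
```

Here the overall accuracy looks acceptable while one subgroup fares far worse, which is precisely why the dataset, not just the headline metric, has to be examined.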

What does "explainable" mean in the context of AI? Jiva.ai puts it simply:

“models that tell you why they behave a certain way as well as what the predictions are”.

Coupled with Jiva's low-code/no-code approach, we've ensured that all predictions will be explainable to everyone.

Clinicians across the country are under a crushing burden; offering JivaRDX while requiring users to undertake extra training isn't an option. The grand vision is to empower all clinicians to create their own models and receive explainable predictions; there is no 'black box' operating under the hood.


Considering that AI is currently an unregulated force, explainability should be at the forefront of all platforms. Hospitals stratifying risk in triage, pharmaceutical companies engaged in drug discovery, and countless public-sector bodies need the safety of explainability in their AI.

Mike Cornell
apcornell.90@gmail.com



