
Explainable AI, The New Standard

Explainable AI (XAI) has become such a topic of discussion that US Democrats reopened their push for the Algorithmic Accountability Act, a bill that would compel tech firms to account for the biases of their platforms. Fear currently surrounds the growing application of AI within public sectors, where it can disadvantage marginalised groups.

Delving deeper into explainability: why is it such a hot topic? Certain AI systems have produced discriminatory or false predictions because the dataset behind them was biased, incomplete, or unrepresentative, creating a biased product.

In 2015, Amazon’s hiring algorithm was found to be heavily biased against women. The model had been trained on the previous ten years of applicants, who were predominantly men, producing an NLP model that captured certain linguistic regularities while ignoring others. There are clear ethical issues in using data to build a model that has tangible outcomes on people’s lives. Where was the due diligence, first in collecting the data and second in analysing the dataset before creating the model?

Facial recognition software presents a uniquely problematic scenario. In 2018, IBM, Microsoft and Megvii had each created gender-recognition AIs that were accurate up to 99% of the time, provided the image was of a white male.

Reliability dropped as low as 35% for anyone outside that bracket.

Garbage in, garbage out is a concept in computer science which dictates that the quality of the data fed into an algorithm has a direct effect on the quality of the output. The datasets used in the examples above were undeniably flawed: they underrepresented the overall population. There are countless such examples.

Consequently, the problem lies in the fact that these AIs were unable to explain where the bias occurred; such datasets need to be interrogated by a data scientist to unveil the underlying discrimination.
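One concrete way a data scientist might begin that interrogation is to break a model's accuracy down by demographic group rather than trusting a single aggregate number. A minimal sketch in plain Python (the group names, labels and counts here are invented for illustration):

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute per-group accuracy from (group, true_label, predicted_label)
    triples. A single aggregate accuracy can hide a model that fails badly
    on an underrepresented group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical predictions: group_b is badly underrepresented (10 of 110
# records), so its failures barely dent the headline accuracy.
records = (
    [("group_a", 1, 1)] * 95 + [("group_a", 1, 0)] * 5   # 95% on group A
    + [("group_b", 1, 1)] * 4 + [("group_b", 1, 0)] * 6  # 40% on group B
)
overall = sum(t == p for _, t, p in records) / len(records)  # 0.9 overall
per_group = accuracy_by_group(records)
```

The aggregate figure of 90% looks respectable; only the disaggregated view reveals the 40% accuracy on the minority group — exactly the pattern the facial-recognition studies above exposed.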

What does “explainable” mean in the context of AI? Jiva.ai puts it simply:

“models that tell you why they behave a certain way as well as what the predictions are”.
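A linear model is one of the simplest illustrations of that idea: its prediction decomposes into per-feature contributions, so it can report the "why" alongside the "what". A minimal sketch — the weights, bias and clinical features below are hypothetical, not any real Jiva model:

```python
def explain_prediction(weights, bias, features):
    """Return a linear model's score together with each feature's
    contribution -- the 'why' alongside the 'what'."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical risk model over two made-up clinical features.
weights = {"age": 0.02, "blood_pressure": 0.01}
score, why = explain_prediction(weights, bias=-1.5,
                                features={"age": 60, "blood_pressure": 140})
# 'why' now shows how much each feature pushed the score up or down.
```

Real platforms typically use richer explanation techniques (feature-attribution methods such as SHAP, for example), but the principle is the same: every prediction arrives with an account of what drove it.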

Coupled with Jiva’s low-code/no-code approach, we’ve ensured that all predictions are explainable to everyone.

Clinicians across the country are under a crushing burden, so offering JivaRDX in a form that requires extra training isn’t an option. The grand vision is to empower all clinicians to create their own models and receive explainable predictions; there is no ‘black box’ operating under the hood.

Considering that AI is currently an unregulated force, explainability should be at the forefront of every platform. Hospitals stratifying triage risk, pharmaceutical companies engaged in drug discovery, and countless public-sector bodies need the safety of explainability in their AI.

Mike Cornell
apcornell.90@gmail.com



