Unreadable Code

Explainable AI, The New Standard

Explainable AI (XAI) has become such a topic of discussion that US Democrats have reopened their push for the Algorithmic Accountability Act, a bill that would require tech firms to account for bias on their platforms. Fear currently surrounds the growing application of AI within public sectors, where it can disadvantage marginalised groups.

Delving deeper into explainability, why is it such a hot topic? Certain AI systems have produced discriminatory or false predictions because the datasets behind them were biased, incomplete, or unrepresentative, thus creating a biased product.

In 2015, Amazon’s hiring algorithm was found to be heavily biased against women. The model had been trained on the previous ten years of applicants, who were predominantly men, producing an NLP model that captured certain linguistic regularities and ignored others. There are clear ethical issues in using data like this to create a model that has tangible outcomes on people’s lives. Where was the due diligence, first in collecting the data and second in analysing the dataset before building the model?

Facial recognition software presented a uniquely problematic scenario. In 2018, IBM, Microsoft and Megvii offered gender-recognition AIs that were accurate up to 99% of the time, provided the image was of a white male.

Reliability dropped sharply for anyone outside this bracket, with error rates as high as 35% for darker-skinned women.

Garbage in, garbage out is a concept in computer science which holds that the quality of the data fed into an algorithm directly determines the quality of its output. The datasets used in the examples above were undeniably flawed, as they under-represented parts of the overall population, and there are countless similar examples.
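To make "garbage in, garbage out" concrete, here is a minimal sketch of how a skewed training set becomes a skewed model. Everything below is invented for illustration: the groups, labels, and counts are hypothetical, and the "model" is deliberately trivial, a classifier that simply memorises the most frequent outcome per group.

```python
from collections import Counter

# Hypothetical, deliberately skewed training data: one group is
# heavily represented with positive outcomes, the other is sparse
# and mostly negative. The numbers are invented for illustration.
train = (
    [("male", "hire")] * 80 + [("male", "reject")] * 20 +
    [("female", "hire")] * 2 + [("female", "reject")] * 8
)

def fit_majority(rows):
    """Learn the most frequent label per group. The skew in the
    training data becomes the model's behaviour."""
    by_group = {}
    for group, label in rows:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = fit_majority(train)
print(model)  # {'male': 'hire', 'female': 'reject'}
```

Nothing in the algorithm is "prejudiced"; it faithfully reproduces the imbalance it was fed, which is exactly the failure mode described above.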



Consequently, the problem is that these AIs were unable to explain where the bias arose; such datasets need to be interrogated by a data scientist to uncover the underlying discrimination.
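One of the simplest forms that interrogation can take is a representation audit: tabulating how a protected attribute is distributed in the dataset before any model is trained. The sketch below is illustrative only; the field name and figures are hypothetical.

```python
from collections import Counter

def representation_report(rows, attribute):
    """Return the share of each value of a given attribute,
    so under-represented groups are visible before training."""
    counts = Counter(row[attribute] for row in rows)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

# Hypothetical applicant records; the 90/10 split is invented.
applicants = [{"gender": "male"}] * 90 + [{"gender": "female"}] * 10
shares = representation_report(applicants, "gender")
print(shares)  # {'male': 0.9, 'female': 0.1}
```

A report like this would have flagged the imbalance in the hiring and facial-recognition datasets long before their models reached production.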



What does “explainable” mean in the context of AI? Jiva.ai puts it simply:

“models that tell you why they behave a certain way as well as what the predictions are”.

Coupled with Jiva’s low-code/no-code approach, we’ve ensured that all predictions are explainable to everyone.

Clinicians across the country are under a crushing burden, so offering JivaRDX in a form that requires extra training isn’t an option. The grand vision is to empower all clinicians to create their own models and receive explainable predictions; there is no ‘black box’ operating under the hood.


Considering that AI is currently an unregulated force, explainability should be at the forefront of every platform. Hospitals stratifying risk in triage, pharmaceutical companies engaged in drug discovery, and countless public-sector bodies need the safety of explainability in their AI.

Mike Cornell

Gunslinger and absolute bad man.
