The Technical Side of XAI

Giancarlo Mori
12 min read · Jan 30, 2023


Photo by Milad Fakurian on Unsplash

Explainable AI (XAI) is one of the buzzwords in the field of machine learning (ML), and for good reason. Unlike their human counterparts, computer algorithms do not offer any obvious intuition for why they make a particular decision or prediction. XAI seeks to bridge this gap by providing insight into what was previously viewed as an inaccessible “black box.”

This opens up new possibilities in areas where trust and comprehension are necessary, such as law, medicine, government, finance, and manufacturing. We can now understand why a model makes its decisions and whether it could be improved. Various methods are already in use to gain more understanding of AI systems, and emerging techniques on the horizon may yield even deeper insight into these powerful tools.
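To make this concrete, here is a minimal sketch of one widely used model-agnostic technique, permutation feature importance: shuffle each feature in turn and measure how much the model's score drops. The dataset and model below are illustrative assumptions, not something referenced in this article.

```python
# Illustrative sketch of permutation feature importance using scikit-learn.
# The dataset and model choice are assumptions for demonstration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
ranked = sorted(
    zip(X.columns, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Because the technique only needs predictions, not model internals, the same few lines work for any fitted estimator, which is exactly the kind of post-hoc explanation XAI is concerned with.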

XAI is crucial if we are to safely leverage machine learning capabilities while maintaining our trust in and understanding of them. It has a big role to play in machine learning and will remain one of the hottest topics for some time.

This blog will dive into the technical aspects of XAI, but for a more general introduction, make sure to check out my previous piece, “What is Explainable AI (XAI) and Why You Should Care.”

How Do We Achieve XAI Models?
