OmniXAI

 A Python Library for Explainable AI

Meet OmniXAI

Omni eXplainable AI (OmniXAI) is an open-source Python library for explainable AI (XAI), offering omni-way explainability and interpretable machine learning capabilities to address many of the pain points in explaining decisions made by AI models in practice. OmniXAI aims to be a one-stop, comprehensive library that makes explainable AI easy for data scientists, ML researchers, and practitioners who need explanations for various types of data and models at different stages of the ML process. OmniXAI includes a rich family of explanation methods and provides an easy-to-use unified interface for generating explanations in your applications by writing only a few lines of code. OmniXAI also offers a dashboard for visualizing explanations to gain more insight into model decisions.

Multiple Data Types

Support for tabular data, images, text, and time series, along with many standard data pre-processing transforms.

Various ML Frameworks

Support for the most popular machine learning frameworks and models, e.g., PyTorch, TensorFlow, scikit-learn, and custom black-box models.

Various ML Stages

A one-stop solution for analyzing the different stages of a standard ML pipeline in real-world applications, e.g., exploratory data analysis (EDA), model development, and model evaluation.

Multiple Explanations

Support for the most popular explanation methods, e.g., feature-attribution, counterfactual, and gradient-based explanations, for analyzing different aspects of an ML model.

Easy-to-use

A simple but unified interface for generating various explanations by writing only a few lines of code. Supports Jupyter Notebook environments.

Visualization

A visualization tool for examining the generated explanations, comparing explanation methods, and gaining more insight into AI models.

Latest Publications

Read our latest papers and blog posts to discover OmniXAI's functionality and how it compares to other XAI libraries.

- OmniXAI: A Library for Explainable AI (read on arXiv)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation (read on arXiv)