Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
Implementation of LIME focused on producing user-centric local explanations for image classifiers.
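The core LIME idea behind repositories like this one is to fit a weighted linear surrogate around a single prediction. Below is a minimal, illustrative sketch of that idea for tabular inputs using only numpy and scikit-learn; it is not this repository's API (which targets image classifiers), and all function and variable names here are hypothetical.

```python
# LIME-style local explanation: perturb around one instance, query the
# black box, weight samples by proximity, fit an interpretable surrogate.
# Illustrative only -- not the API of any particular LIME repository.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Black-box model trained on a synthetic task where feature 0 dominates.
X = rng.normal(size=(500, 4))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def lime_explain(instance, predict_fn, n_samples=1000, kernel_width=1.0):
    """Explain one prediction with a locally weighted linear surrogate."""
    # 1. Perturb the instance with Gaussian noise.
    samples = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    # 2. Query the black box on the perturbed samples.
    preds = predict_fn(samples)
    # 3. Weight perturbed samples by proximity to the instance (RBF kernel).
    dists = np.linalg.norm(samples - instance, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # 4. Fit an interpretable surrogate; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)
    return surrogate.coef_

coefs = lime_explain(np.zeros(4), black_box.predict)
print(coefs)  # feature 0 should carry the largest weight
```

For images, the same recipe applies with superpixels toggled on and off in place of Gaussian feature noise.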
This repository contains the Python scripts I wrote and ran to develop a series of analytic models using datasets from the book "The Elements of Statistical Learning" by Hastie, Tibshirani, and Friedman.
This repository includes a machine learning modeling study on predicting customers' hotel cancellations and the reasons behind them.
This repository contains code from my thesis on explaining RL agents to humans. It includes DQN training, genetic algorithms for optimizing agent trajectories, and tools for creating agent action videos.
A curated list of awesome papers on NLP, Computer Vision, Model Compression, XAI, Reinforcement Learning, Security, and more.
A package for Counterfactual Explanations and Algorithmic Recourse in Julia.
An awesome & curated list for Artificial General Intelligence, an emerging interdisciplinary field that combines artificial intelligence and computational cognitive science.
Model-agnostic Statistical/Machine Learning explainability (currently Python) for tabular data
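One widely used model-agnostic technique for tabular data is permutation importance: shuffle one feature column at a time and measure how much a held-out score degrades. The sketch below uses only scikit-learn's own `permutation_importance`; it illustrates the general idea, not this repository's API, and the synthetic dataset is an assumption for demonstration.

```python
# Permutation importance: a common model-agnostic explainability technique
# for tabular data, sketched with plain scikit-learn. Illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic task: 5 features, only the first 2 are informative
# (shuffle=False keeps the informative features in the first columns).
X, y = make_classification(n_samples=600, n_features=5, n_informative=2,
                           n_redundant=0, shuffle=False, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column in turn and measure the drop in test score;
# a large drop means the model relied on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:+.3f}")
```

Because it only needs a fitted model and a scoring function, this works identically for any estimator, which is what "model-agnostic" means in practice.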
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible AI and Human-Centered AI.
Fit interpretable models. Explain blackbox machine learning.
As part of the Explainable AI Toolkit (XAITK), XAITK-Saliency is an open source, explainable AI framework for visual saliency algorithm interfaces and implementations, built for analytics and autonomy applications.
A Python library for explainable AI using approximate reasoning
moDel Agnostic Language for Exploration and eXplanation
My GitHub page
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
Interpretable Machine Learning via Rule Extraction
[ICLR 2024] Official implementation of the paper "GNNBoundary"
[ICLR 2023] Official implementation of the paper "GNNInterpreter"