Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
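For example, a minimal Grad-CAM sketch with the pytorch-grad-cam package, assuming a torchvision ResNet-50; the input tensor, image, and class index below are placeholders:

```python
# Minimal Grad-CAM sketch (assumes pytorch-grad-cam and torchvision installed).
import numpy as np
import torch
from torchvision.models import resnet50
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
from pytorch_grad_cam.utils.image import show_cam_on_image

model = resnet50(weights="IMAGENET1K_V2").eval()
target_layers = [model.layer4[-1]]          # last conv block of ResNet-50

input_tensor = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
cam = GradCAM(model=model, target_layers=target_layers)
grayscale_cam = cam(input_tensor=input_tensor,
                    targets=[ClassifierOutputTarget(281)])  # 281 = tabby cat

# Overlay the heatmap on a 0..1 float RGB image (stand-in here).
rgb_img = np.random.rand(224, 224, 3).astype(np.float32)
visualization = show_cam_on_image(rgb_img, grayscale_cam[0, :], use_rgb=True)
```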
Fit interpretable models. Explain blackbox machine learning.
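A minimal sketch of the InterpretML glassbox workflow, fitting an Explainable Boosting Machine on placeholder synthetic data and viewing its global explanation:

```python
# Minimal InterpretML sketch: fit a glassbox EBM, then inspect it.
from sklearn.datasets import make_classification
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

show(ebm.explain_global())  # interactive per-feature importance dashboard
```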
Model interpretability and understanding for PyTorch
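A minimal Captum sketch, computing Integrated Gradients attributions for a stand-in model and batch; the tiny network and random inputs are placeholders:

```python
# Minimal Captum sketch: per-feature Integrated Gradients attributions.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3)).eval()
inputs = torch.randn(4, 10, requires_grad=True)

ig = IntegratedGradients(model)
attributions, delta = ig.attribute(inputs, target=0,
                                   return_convergence_delta=True)
print(attributions.shape)  # (4, 10): one attribution score per input feature
```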
A curated list of awesome responsible machine learning resources.
A collection of research materials on explainable AI/ML
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
H2O.ai Machine Learning Interpretability Resources
Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code. We are looking for co-authors to take this project forward. Reach out at ms8909@nyu.edu
Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as Layer-wise Relevance Propagation (LRP).
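A minimal Zennit sketch, assuming a torchvision VGG16; the EpsilonPlusFlat composite applies LRP rules during the backward pass, and the class index is a placeholder:

```python
# Minimal Zennit sketch: LRP-style relevance via a composite of rules.
import torch
from torchvision.models import vgg16
from zennit.composites import EpsilonPlusFlat
from zennit.attribution import Gradient

model = vgg16().eval()
data = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in image

composite = EpsilonPlusFlat()
with Gradient(model=model, composite=composite) as attributor:
    # One-hot output gradient selects the class whose relevance is computed.
    output, relevance = attributor(data, torch.eye(1000)[[0]])
print(relevance.shape)  # same shape as the input
```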
PyTorch Explain: Interpretable Deep Learning in Python.
Package for building explainable forecasting and nowcasting models with state-of-the-art deep neural networks and dynamic factor models on time-series data sets with a single line of code. Also provides utilities for time-series signal similarity matching and for removing noise from time-series signals.
Explainable AI in Julia.
All about explainable AI, algorithmic fairness, and more.
Code for the NeurIPS 2021 paper "Scalable Rule-Based Representation Learning for Interpretable Classification" and the TPAMI paper "Learning Interpretable Rules for Scalable Data Representation and Classification".
Material related to my book Intuitive Machine Learning. Some of this material is also featured in my new book Synthetic Data and Generative AI.
Modular Python Toolbox for Fairness, Accountability and Transparency Forensics
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible, Trustworthy, and Human-Centered AI.
XAI-based human-in-the-loop framework for automatic rule learning.
Python library to explain Tree Ensemble models (TE) like XGBoost, using a rule list.
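A hedged sketch of the TE2Rules workflow based on its README; the ModelExplainer name, argument types, and supported ensemble classes are assumptions to verify against the installed version:

```python
# Sketch: extract a rule list that mimics a trained tree ensemble (te2rules).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from te2rules.explainer import ModelExplainer  # per the project README

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = [f"f{i}" for i in range(5)]
model = GradientBoostingClassifier(n_estimators=10).fit(X, y)

explainer = ModelExplainer(model=model, feature_names=feature_names)
# Rules are fit to the model's own predictions, not the ground-truth labels.
rules = explainer.explain(X=X.tolist(), y=model.predict(X).tolist())
print(rules[:3])  # human-readable rules approximating the ensemble
```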