Neural Network Verification Software Tool (MATLAB; updated Jun 12, 2024)
🐢 Open-Source Evaluation & Testing for LLMs and ML models
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible AI, Trustworthy AI, and Human-Centered AI.
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
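Evasion attacks of the kind ART implements typically perturb an input in the direction of the loss gradient. As a toy illustration of that idea (this is a minimal NumPy sketch of the Fast Gradient Sign Method, not ART's actual API; the function name and values are hypothetical):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """One Fast Gradient Sign Method step: shift every input
    feature by eps in the sign direction of the loss gradient,
    then clip back to the valid [0, 1] feature range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy input and a made-up loss gradient.
x = np.array([0.5, 0.2, 0.95])
grad = np.array([1.0, -2.0, 3.0])
x_adv = fgsm_perturb(x, grad, eps=0.1)  # [0.6, 0.1, 1.0]
```

The perturbation is bounded by eps per feature, which is why FGSM-style attacks are a standard first benchmark for robustness evaluation.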
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
Moonshot - A simple and modular tool to evaluate and red-team any LLM application.
CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models
Code for paper "FreezeAsGuard: Mitigating Illegal Adaptation of Diffusion Models via Selective Tensor Freezing"
[ICML 2024] Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts (Official PyTorch Implementation)
Code & Data of PoisonedRAG paper
[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
Evaluation & testing framework for computer vision models
AI-HCI research project studying the key factors that affect user trust in an AI system's recommendations.
An open-source Python toolbox for backdoor attacks and defenses.
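Backdoor attacks of the kind such toolboxes study poison training data by stamping a small trigger pattern onto inputs whose labels are flipped to an attacker-chosen class. A minimal sketch of the trigger-stamping step (pure NumPy; the function name, patch placement, and values are illustrative, not any toolbox's API):

```python
import numpy as np

def add_trigger(img, patch_value=1.0, size=2):
    """Stamp a small square 'trigger' patch into the bottom-right
    corner of a grayscale image with pixel values in [0, 1].
    Returns a poisoned copy; the original image is untouched."""
    poisoned = img.copy()
    poisoned[-size:, -size:] = patch_value
    return poisoned

clean = np.zeros((6, 6))       # all-black toy image
poisoned = add_trigger(clean)  # white 2x2 patch in the corner
```

At training time, an attacker pairs such poisoned images with a fixed target label; at test time, any input carrying the patch is misclassified to that label while clean accuracy stays high, which is what makes backdoors hard to detect.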
Code from PLDI '23 paper "Architecture-Preserving Provable Repair of Deep Neural Networks."
We make Generative AI accessible to Federal agencies and businesses. Easy-to-use ezGPT™ platform eliminates the need for in-house expertise and delivers pre-built solutions for rapid innovation. With security and privacy at its core, we unlock the potential of AI. Our innovative chatbot guides users, ensuring a smooth and successful experience.
Birhanu Eshete is an Associate Professor of Computer Science at the University of Michigan, Dearborn. His main research focus is trustworthy machine learning, with emphasis on security, safety, privacy, interpretability, fairness, and the dynamics thereof. He also studies online cybercrime and advanced persistent threats (APTs).