This repository is a curated collection of resources for aspiring professionals looking to master Generative AI and Red-Teaming. Whether you're a beginner or looking to deepen your expertise, this guide provides a structured path to understanding the intricacies of AI security and development.
- Essential Mathematics for Machine Learning Playlist
- Linear Algebra for Machine Learning and Generative AI
- Understanding Regression Lines
- Python for Beginners
- Key Concepts: Functions, Loops, Dictionaries, Arrays
- PyTorch Full Course
- TensorFlow Tutorial
- Google's Machine Learning Crash Course
- Machine Learning Specialization, recommended by an IIT Bombay PhD scholar
- Machine Learning with Python and Scikit-Learn
- Huntr
- OWASP Top 10 for LLM with Hands-on Practice
- AI Hacking Playground / Crucible Platform / Adversarial ML Article
- Web LLM Attacks: PortSwigger
- 0din AI Red Teaming
- AI Supply Chain Security Insights
- LLM Hackathon Environment
- Invariant Labs
- RedTeam Arena
- GenAI Doctor
- Excellent Research Paper Stack for LLM Red-Teaming
- OWASP Top 10 for Large Language Models
- MITRE ATLAS Coverage for LLMs
- Dreadnode Research
- MLSec YouTube Channel
- Practical LLM Security by NVIDIA
- Compromising LLMs: AI Malware
- Marta Janus and Eoin Wickens - Sleeping with one AI open
- AI Vulnerability Insights
- AI Security Overview
- Scaling Runtime Application Security
- Shadow Vulnerabilities in AI/ML Data Stacks
- HiddenLayer Security
- Weaponizing ML Models with Ransomware
- Shadow Logic
- Embrace The Red Blog
- Oligo Blog
- Mindgard Academy
- Wonderful articles about LLM security
- WithSecure Research
- LLM Chronicles
- WithSecure Consulting
- Kaggle
- Let's Build GPT: From Scratch
- DeepLearning.ai
- Comprehensive LLM Course
- Hugging Face Course
- GenAI Foundation
- LLM and GenAI in Cybersecurity
- Python Risk Identification Tool for generative AI (PyRIT) Video
- Adversarial Robustness Toolbox (ART); something like Metasploit for machine learning, with a GitHub repository that walks you through testing step by step
- Foolbox; a Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
- NVIDIA's garak (Generative AI Red-teaming and Assessment Kit)
- LLM Fuzzer
- Secimport: Tailor-Made eBPF Sandbox for Python Applications
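Toolkits like ART and Foolbox above automate white-box evasion attacks against real PyTorch, TensorFlow, and JAX models. To show the core idea they implement, here is a minimal NumPy sketch of the Fast Gradient Sign Method (FGSM) against a toy linear classifier; the model, weights, and perturbation budget are illustrative assumptions, not taken from any of the listed tools.

```python
import numpy as np

# Toy linear "model": logits = W @ x + b with softmax cross-entropy loss.
# FGSM perturbs the INPUT in the direction that increases the loss:
#     x_adv = x + eps * sign( dL/dx )
# ART and Foolbox apply the same idea to real deep-learning models.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))          # 3 classes, 4 input features (toy values)
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_and_grad(x, y):
    """Cross-entropy loss and its gradient with respect to the input x."""
    p = softmax(W @ x + b)
    loss = -np.log(p[y])
    dlogits = p.copy()
    dlogits[y] -= 1.0                # dL/dlogits = p - one_hot(y)
    return loss, W.T @ dlogits       # chain rule: dL/dx = W^T dL/dlogits

x = rng.normal(size=4)                       # clean input
y = int(np.argmax(W @ x + b))                # the model's own prediction

eps = 0.5                                    # L-infinity perturbation budget
loss_clean, grad = loss_and_grad(x, y)
x_adv = x + eps * np.sign(grad)              # single FGSM step

loss_adv, _ = loss_and_grad(x_adv, y)
print(loss_clean, loss_adv)                  # adversarial loss is higher
```

In practice you would not hand-roll this: the listed toolkits wrap trained models, batch the attack, and offer many stronger variants (PGD, Carlini-Wagner, and so on), but each of those builds on this same gradient-guided perturbation.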
- Hugging Face Security Documentation
- Protect AI
- Pangea
- Promptarmor
- Securiti
- Lasso Security
- Mindgard
- Robust Intelligence; protects AI applications with an AI Firewall
- Oligo; research blog
- LAKERA
- Mithril Security
- Mend
Contributions are welcome! If you have additional resources, tools, or insights, please submit a pull request or open an issue.