Oreilly – Securing Generative AI 2024-11
Oreilly – Securing Generative AI 2024-11 Downloadly IRSpace

Securing Generative AI. This comprehensive course teaches you how to securely design, develop, and implement generative AI systems, specifically large language models (LLMs) and retrieval-based content generation (RAG) systems. You will learn about the various threats that threaten AI and learn practical solutions to combat them. Topics covered in this course include malicious command injection, insecure output management, penetration assessment models, and securing RAG systems. In this course, you will learn the principles and best practices of AI security and be able to make your AI systems more secure and resilient to cyberattacks.
What you will learn:
- Security at All Stages: Learn how to integrate security into all stages of AI development, deployment, and operations.
- Practical Skills: Gain practical skills using real-world examples of AI and machine learning
- Artificial Intelligence Threats: Familiarity with common threats in AI, such as malicious command injection
- Securing Large Language Models: Learn how to secure large language models
- RAG Systems Security: Protecting Content Production Systems with Data Recovery
- Tools and Techniques: Familiarity with various tools and techniques for AI security
- Secure Design Principles: Understanding secure design principles and the importance of transparency in AI systems
This course is suitable for people who:
- They work in the field of artificial intelligence and machine learning.
- Responsible for the design and development of artificial intelligence systems.
- Are looking to increase the security of their organization’s AI systems
- Want to learn about threats and security solutions in the field of artificial intelligence
Securing Generative AI Course Details
- Publisher: Oreilly
- Instructor: Omar Santos
- Training level: Beginner to advanced
- Training duration: 3 hours and 31 minutes
Course headings
- 1. Introduction
Securing Generative AI: Introduction - Lesson 1: Introduction to AI Threats and LLM Security
Learning objectives
1.1 Understanding the Significance of LLMs in the AI Landscape
1.2 Exploring the Resources for this Course – GitHub Repositories and Others
1.3 Introducing Retrieval Augmented Generation (RAG)
1.4 Understanding the OWASP Top-10 Risks for LLMs
1.5 Exploring the MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems) Framework
1.6 Understanding the NIST Taxonomy and Terminology of Attacks and Mitigations - Lesson 2: Understanding Prompt Injection Insecure Output Handling
Learning objectives
2.1 Defining Prompt Injection Attacks
2.2 Exploring Real-life Prompt Injection Attacks
2.3 Using ChatML for OpenAI API Calls to Indicate to the LLM the Source of Prompt Input
2.4 Enforcing Privilege Control on LLM Access to Backend Systems
2.5 Best Practices Around API Tokens for Plugins, Data Access, and Function-level Permissions
2.6 Understanding Insecure Output Handling Attacks
2.7 Using the OWASP ASVS to Protect Against Insecure Output Handling - Lesson 3: Training Data Poisoning, Model Denial of Service Supply Chain Vulnerabilities
Learning objectives
3.1 Understanding Training Data Poisoning Attacks
3.2 Exploring Model Denial of Service Attacks
3.3 Understanding the Risks of the AI and ML Supply Chain
3.4 Best Practices when Using Open-Source Models from Hugging Face and Other Sources
3.5 Securing Amazon BedRock, SageMaker, Microsoft Azure AI Services, and Other Environments - Lesson 4: Sensitive Information Disclosure, Insecure Plugin Design, and Excessive Agency
Learning objectives
4.1 Understanding Sensitive Information Disclosure
4.2 Exploiting Insecure Plugin Design
4.3 Avoiding Excessive Agency - Lesson 5: Overreliance, Model Theft, and Red Teaming AI Models
Learning objectives
5.1 Understanding Overreliance
5.2 Exploring Model Theft Attacks
5.3 Understanding Red Teaming of AI Models - Lesson 6: Protecting Retrieval Augmented Generation (RAG) Implementations
Learning objectives
6.1 Understanding the RAG, LangChain, Llama Index, and AI Orchestration
6.2 Securing Embedding Models
6.3 Securing Vector Databases
6.4 Monitoring and Incident Response - Summary
Securing Generative AI: Summary
Course prerequisites
- Linux system with Python 3.x installed.
Course images
Sample course video
Installation Guide
After Extract, view with your favorite player.
Subtitles: None
Quality: 720p
Download link
File(s) password: www.downloadly.ir
File size
706 MB