O'Reilly – Generative AI in Action, Video Edition 2024-11

Generative AI in Action, Video Edition. This comprehensive course shows you how generative AI can transform your business by simplifying the creation of text, images, and code. The course teaches the fundamentals of generative AI and its practical applications in enterprise environments, from generating text and images for product catalogs and marketing campaigns to producing technical reports and even writing software. Course author Amit Bahree shares his experience leading generative AI projects at Microsoft for nearly a decade, including before the current GPT revolution.
What you will learn:
- Practical applications of generative AI: an introduction to how generative AI is being used in the real world
- Architectural patterns, integration guidelines, and best practices: learn how to implement and leverage generative AI
- The latest techniques: familiarity with techniques such as RAG, prompt engineering, and multimodal generation (a short illustrative sketch follows this list)
- The challenges and risks of generative AI: understanding and managing issues such as hallucinations and jailbreaks
- Integrating generative AI with business and IT strategy: learn how to use generative AI strategically
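To give a concrete flavor of the prompt-engineering and chat-completion material listed above, here is a minimal sketch in Python. It is an illustration, not code from the course: the model name (gpt-4o-mini), the example prompts, and the settings are assumptions, and it requires the openai package (v1+) with an OPENAI_API_KEY environment variable set.

```python
# Minimal prompt-engineering sketch (illustrative only; not code from the course).
# Assumes the `openai` Python package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# A structured prompt: a system instruction plus one few-shot example,
# one of the in-context learning patterns the course discusses.
messages = [
    {"role": "system", "content": "You are a concise product copywriter. Answer in two sentences."},
    {"role": "user", "content": "Write catalog copy for a stainless steel water bottle."},
    {"role": "assistant", "content": "Keeps drinks cold for 24 hours in a leak-proof, brushed-steel body. Built for commutes, gyms, and everything in between."},
    {"role": "user", "content": "Write catalog copy for noise-cancelling headphones."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",   # assumed model name; substitute your own model or deployment
    messages=messages,
    temperature=0.7,       # higher temperature for more creative copy
    max_tokens=120,
)

print(response.choices[0].message.content)
```

The same pattern carries over to the Azure OpenAI deployments covered later in the course; only the client configuration and the deployment name change.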
Who is this course suitable for?
- This course is suitable for enterprise architects, developers, and data scientists interested in enhancing their architectures with generative AI.
Generative AI in Action, Video Edition Course Details
- Publisher: O'Reilly
- Instructor: Amit Bahree
- Training level: Beginner to advanced
- Training duration: 12 hours and 27 minutes
Course headings
- Part 1. Foundations of generative AI
- Chapter 1. Introduction to generative AI
Chapter 1. What is generative AI?
Chapter 1. What can we generate?
Chapter 1. Enterprise use cases
Chapter 1. When not to use generative AI
Chapter 1. How is generative AI different from traditional AI?
Chapter 1. What approach should enterprises take?
Chapter 1. Architecture considerations
Chapter 1. So your enterprise wants to use generative AI. Now what?
Chapter 1. Summary
- Chapter 2. Introduction to large language models
Chapter 2. Overview of LLMs
Chapter 2. Transformer architecture
Chapter 2. Training cutoff
Chapter 2. Types of LLMs
Chapter 2. Small language models
Chapter 2. Open source vs. commercial LLMs
Chapter 2. Key concepts of LLMs
Chapter 2. Summary
- Chapter 3. Working through an API: Generating text
Chapter 3. Completion API
Chapter 3. Advanced completion API options
Chapter 3. Chat completion API
Chapter 3. Summary
- Chapter 4. From pixels to pictures: Generating images
Chapter 4. Image generation with Stable Diffusion
Chapter 4. Image generation with other providers
Chapter 4. Editing and enhancing images using Stable Diffusion
Chapter 4. Summary
- Chapter 5. What else can AI generate?
Chapter 5. Additional code-related tasks
Chapter 5. Other code generation tools
Chapter 5. Video generation
Chapter 5. Audio and music generation
Chapter 5. Summary
- Part 2. Advanced techniques and applications
- Chapter 6. Guide to prompt engineering
Chapter 6. The basics of prompt engineering
Chapter 6. In-context learning and prompting
Chapter 6. Prompt engineering techniques
Chapter 6. Image prompting
Chapter 6. Prompt injection
Chapter 6. Prompt engineering challenges
Chapter 6. Best practices
Chapter 6. Summary
- Chapter 7. Retrieval-augmented generation: The secret weapon
Chapter 7. RAG benefits
Chapter 7. RAG architecture
Chapter 7. Retriever system
Chapter 7. Understanding vector databases
Chapter 7. RAG challenges
Chapter 7. Overcoming challenges for chunking
Chapter 7. Chunking PDFs
Chapter 7. Summary
- Chapter 8. Chatting with your data
Chapter 8. Using a vector database
Chapter 8. Planning for retrieving the information
Chapter 8. Retrieving the data
Chapter 8. Search using Redis
Chapter 8. An end-to-end chat implementation powered by RAG
Chapter 8. Using Azure OpenAI on your data
Chapter 8. Benefits of bringing your data using RAG
Chapter 8. Summary
- Chapter 9. Tailoring models with model adaptation and fine-tuning
Chapter 9. When to fine-tune an LLM
Chapter 9. Fine-tuning OpenAI models
Chapter 9. Deployment of a fine-tuned model
Chapter 9. Training an LLM
Chapter 9. Model adaptation techniques
Chapter 9. RLHF overview
Chapter 9. Summary
- Part 3. Deployment and ethical considerations
- Chapter 10. Application architecture for generative AI apps
Chapter 10. Generative AI: Application stack
Chapter 10. Orchestration layer
Chapter 10. Grounding layer
Chapter 10. Model layer
Chapter 10. Response filtering
Chapter 10. Summary
- Chapter 11. Scaling up: Best practices for production deployment
Chapter 11. Deployment options
Chapter 11. Managed LLMs via API
Chapter 11. Best practices for production deployment
Chapter 11. GenAI operational considerations
Chapter 11. LLMOps and MLOps
Chapter 11. Checklist for production deployment
Chapter 11. Summary
- Chapter 12. Evaluations and benchmarks
Chapter 12. Traditional evaluation metrics
Chapter 12. LLM task-specific benchmarks
Chapter 12. New evaluation benchmarks
Chapter 12. Human evaluation
Chapter 12. Summary
- Chapter 13. Guide to ethical GenAI: Principles, practices, and pitfalls
Chapter 13. Understanding GenAI attacks
Chapter 13. A responsible AI lifecycle
Chapter 13. Red-teaming
Chapter 13. Content safety
Chapter 13. Summary
- Appendix A. The book’s GitHub repository
- Appendix B. Responsible AI tools
Appendix B. Transparency notes
Appendix B. HAX Toolkit
Appendix B. Responsible AI Toolbox
Appendix B. Learning Interpretability Tool (LIT)
Appendix B. AI Fairness 360
Appendix B. C2PA
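As a preview of the retriever system covered in Chapter 7 and the "chat with your data" flow in Chapter 8, the sketch below shows the core RAG idea with a toy in-memory retriever: score document chunks against a question, keep the top matches, and place them into a grounded prompt. The bag-of-words scoring and the example chunks are simplifications for illustration only; the course itself works with embedding models, vector databases, and services such as Redis and Azure OpenAI.

```python
# Toy retrieval-augmented generation (RAG) flow - illustration only.
# Real systems use embedding models and a vector database; here we use
# a simple bag-of-words cosine similarity from the standard library.
import math
from collections import Counter

CHUNKS = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Premium support is available 24/7 for enterprise customers.",
    "Shipping is free on orders over 50 dollars within the US.",
]

def embed(text: str) -> Counter:
    """Very rough 'embedding': lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank chunks by similarity to the question and keep the top k."""
    q = embed(question)
    ranked = sorted(CHUNKS, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Build a grounded prompt that restricts the model to the retrieved context."""
    context = "\n".join(f"- {c}" for c in retrieve(question))
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    # The grounded prompt would then be sent to a chat completion API,
    # as in the earlier sketch.
    print(build_prompt("How long do I have to return an item?"))
```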
Installation Guide
After extracting the archive, watch the videos with your preferred player.
Subtitles: None
Quality: 1080p
Download link
File(s) password: www.downloadly.ir
File size: 2.2 GB