Oreilly – How LLMs Understand & Generate Human Language 2024-9

How LLMs Understand & Generate Human Language. Generative language models, such as ChatGPT and Microsoft Bing Chat, have become everyday tools for many of us, yet their inner workings remain a mystery to most. How does ChatGPT know what the next word should be? How does it understand the meaning of the text you give it? Everyone, from those who have never interacted with a chatbot to those who use one regularly, can benefit from a basic understanding of how these language models work. This course answers some of these basic questions about how generative AI works.
In this course, participants are introduced to the concept of “word embeddings”: not only how they are used inside these models, but also how they can be applied to analyze large amounts of textual information using concepts such as vector stores and retrieval-augmented generation. Understanding how these models work is important because it clarifies both their capabilities and their limitations. This knowledge will allow you to critically examine the results these models produce and gain a better sense of their strengths and weaknesses. Furthermore, understanding the fundamentals of how LLMs work helps you follow future developments in this field and interact more effectively with this emerging technology.
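To make the “word embeddings” idea concrete, here is a minimal sketch in Python. The three-dimensional vectors and their values are invented for illustration; real models learn embeddings with hundreds of dimensions from large text corpora.

```python
import numpy as np

# Hand-made 3-D "embeddings": invented values, purely for illustration.
# Real LLMs learn much higher-dimensional vectors from huge text corpora.
embeddings = {
    "king":  np.array([0.8, 0.6, 0.1]),
    "queen": np.array([0.7, 0.7, 0.1]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; values near 1.0 mean similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.99, related words
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ~0.31, unrelated words
```

Because similar words end up near each other in this vector space, arithmetic on the vectors can stand in for comparisons of meaning, which is the property the rest of the course builds on.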
What you will learn
- How to convert human language into mathematics that models understand.
- How output words are selected by generative language models (see the sampling sketch after this list).
- Why some application strategies and specific tasks perform better with LLMs than others.
- The concept of “word embeddings” and how they power LLMs.
- The concepts of “vector stores” and “retrieval-augmented generation” and why they matter.
- How to critically evaluate the results obtained from large language models.
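As a rough illustration of how output words are selected, here is a minimal sketch of temperature-based softmax sampling, one common selection strategy; the four-word vocabulary and the logit scores below are invented for illustration.

```python
import numpy as np

vocab = ["cat", "dog", "the", "sat"]       # toy vocabulary (invented)
logits = np.array([2.0, 1.5, 0.2, 3.1])    # hypothetical per-word model scores

def sample_next_word(vocab, logits, temperature=1.0):
    """Convert logits to probabilities with softmax, then sample one word.

    Lower temperature sharpens the distribution (more deterministic output);
    higher temperature flattens it (more varied output).
    """
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return np.random.choice(vocab, p=probs)

print(sample_next_word(vocab, logits, temperature=0.7))  # most often "sat"
```

A real LLM does the same thing over a vocabulary of tens of thousands of tokens, with the logits produced by the network at each generation step.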
This course is suitable for people who:
- Are interested in demystifying how generative language models work.
- Want to be able to discuss these models with their colleagues in an informed way.
- Want to peek inside the black box of LLMs but don’t have time for deep, hands-on study.
- See potential applications in their work for ChatGPT, other text-based generative AI, or embedding-based storage and retrieval.
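For the last point, here is a minimal sketch of how a vector store and retrieval-augmented generation (RAG) fit together. The documents and all embedding vectors are hand-made for illustration; a real system would produce them with a trained embedding model and send the final prompt to an LLM.

```python
import numpy as np

# Tiny "vector store": each document is paired with an embedding vector.
# These 2-D vectors are invented; real systems use a trained embedding model.
store = [
    ("Transformers build contextual word embeddings with attention.", np.array([0.9, 0.1])),
    ("Vector stores index documents by their embedding vectors.",     np.array([0.2, 0.9])),
]

def retrieve(query_vec):
    """Return the stored document whose embedding best matches the query."""
    return max(store, key=lambda pair: float(np.dot(query_vec, pair[1])))[0]

query = "How can I search documents by meaning?"
query_vec = np.array([0.3, 0.8])  # pretend this came from the same embedding model

context = retrieve(query_vec)
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # in RAG, this augmented prompt is what the LLM actually receives
```

The retrieval step grounds the model’s answer in stored text rather than relying only on what it memorized during training, which is why Lesson 4 pairs vector storage with retrieval-augmented generation.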
Course Details: How LLMs Understand & Generate Human Language
- Publisher: Oreilly
- Instructor: Kate Harwood
- Training level: Beginner to advanced
- Training duration: 1 hour and 54 minutes
Course headings
- Introduction
- How LLMs Understand & Generate Human Language: Introduction
- Lesson 1: Introduction to LLMs and Generative AI
- Learning objectives
- 1.1 LLMs
- 1.2 Generative AI
- 1.3 Machine Learning General Overview
- Lesson 2: Word Embeddings
- Learning objectives
- 2.1 How Do AI Models “Read” Input?
- 2.2 How Do We Capture Word Meanings?
- 2.3 Word Embedding Space
- 2.4 How Do LLMs Learn Word Embeddings?
- 2.5 Tokenization
- 2.6 Putting It All Together
- 2.7 A Cool Side-Effect
- Lesson 3: Word Embeddings in Generative Language Models
- Learning objectives
- 3.1 How Are Word Embeddings Used in Generative Language Models?
- 3.2 RNNs
- 3.3 Transformers: Attention
- 3.4 Transformers: Contextual Word Embeddings
- 3.5 Transformers for Generation
- 3.6 What Works Well (and What Can Go Wrong) When We Train Models on Word Embeddings?
- Lesson 4: Other Use Cases for Embeddings
- Learning objectives
- 4.1 Summary
- 4.2 Vector Storage
- 4.3 Retrieval Augmented Generation
- Summary
- How LLMs Understand & Generate Human Language: Summary
Installation Guide
After extracting the archive, watch with your favorite player.
Subtitles: None
Quality: 720p
Download link
File(s) password: www.downloadly.ir
File size: 369 MB