Oreilly – Deep Learning Patterns and Practices 2022-3

Deep Learning Patterns and Practices is a comprehensive guide to the best practices, repeatable architectures, and design patterns you need to take deep learning models from experimentation to production. A major challenge in deep learning is moving emerging technologies out of R&D labs and into production; this book helps you meet that challenge with the latest insights from author Andrew Ferlitsch, a fellow at Google Cloud AI. Deep learning models are presented in a novel way, as extensible design patterns that you can reuse in your own software projects. Each technique is explained in plain language, accompanied by easy-to-follow diagrams and code samples.

What you will learn:

  • Modern convolutional networks: you will learn the internal details of these networks and develop a solid understanding of how they perform.
  • The procedural reuse design pattern for CNN architectures: with this pattern, you can design your convolutional network architectures in a modular, reusable way.
  • Models suited to mobile and IoT devices: you will learn how to optimize deep learning models for resource-constrained devices.
  • Deploying large-scale models: you will learn the different methods for deploying and managing deep learning models at scale.
  • Optimizing hyperparameter settings: you will learn how to tune your model's hyperparameters to achieve the best performance.
  • Moving a model to production: you will learn the steps involved in migrating a model from a development environment to a production environment.
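As a taste of the hyperparameter-search material covered later in the course (section 10.2.3, random search), here is a minimal, framework-free sketch. The `toy_loss` function and the sampling range are illustrative stand-ins, not code from the course; in practice the loss would come from training and validating a real model.

```python
import random

def toy_loss(lr):
    # Hypothetical stand-in for a validation loss, minimized near lr = 0.01.
    return (lr - 0.01) ** 2

def random_search(loss_fn, n_trials=50, seed=42):
    """Sample candidate learning rates log-uniformly and keep the best one."""
    rng = random.Random(seed)
    best_lr, best_loss = None, float("inf")
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-4, 0)  # sample lr in [1e-4, 1] on a log scale
        loss = loss_fn(lr)
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr, best_loss

best_lr, best_loss = random_search(toy_loss)
```

Random search samples the space independently on each trial, which tends to cover important hyperparameters better than an equally sized grid.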

This course is suitable for people who:

  • Are familiar with the Python programming language.
  • Are familiar with the basic concepts of deep learning.
  • Want to improve their skills in deep learning and model deployment.

Course details: Deep Learning Patterns and Practices

  • Publisher: Oreilly
  • Instructor: Andrew Ferlitsch
  • Training level: Beginner to advanced
  • Training duration: 13 hours and 53 minutes

Course headings

  • Part 1. Deep learning fundamentals
  • 1 Designing modern machine learning
  • 1.1 A focus on adaptability
  • 1.1.1 Computer vision leading the way
  • 1.1.2 Beyond computer vision: NLP, NLU, structured data
  • 1.2 The evolution in machine learning approaches
  • 1.2.1 Classical AI vs. narrow AI
  • 1.2.2 Next steps in computer learning
  • 1.3 The benefits of design patterns
  • Summary
  • 2 Deep neural networks
  • 2.1 Neural network basics
  • 2.1.1 Input layer
  • 2.1.2 Deep neural networks
  • 2.1.3 Feed-forward networks
  • 2.1.4 Sequential API method
  • 2.1.5 Functional API method
  • 2.1.6 Input shape vs. input layer
  • 2.1.7 Dense layer
  • 2.1.8 Activation functions
  • 2.1.9 Shorthand syntax
  • 2.1.10 Improving accuracy with an optimizer
  • 2.2 DNN binary classifier
  • 2.3 DNN multiclass classifier
  • 2.4 DNN multilabel multiclass classifier
  • 2.5 Simple image classifier
  • 2.5.1 Flattening
  • 2.5.2 Overfitting and dropout
  • Summary
  • 3 Convolutional and residual neural networks
  • 3.1 Convolutional neural networks
  • 3.1.1 Why we use a CNN over a DNN for image models
  • 3.1.2 Downsampling (resizing)
  • 3.1.3 Feature detection
  • 3.1.4 Pooling
  • 3.1.5 Flattening
  • 3.2 The ConvNet design for a CNN
  • 3.3 VGG networks
  • 3.4 ResNet networks
  • 3.4.1 Architecture
  • 3.4.2 Batch normalization
  • 3.4.3 ResNet50
  • Summary
  • 4 Training fundamentals
  • 4.1 Forward feeding and backward propagation
  • 4.1.1 Feeding
  • 4.1.2 Backward propagation
  • 4.2 Dataset splitting
  • 4.2.1 Training and test sets
  • 4.2.2 One-hot encoding
  • 4.3 Data normalization
  • 4.3.1 Normalization
  • 4.3.2 Standardization
  • 4.4 Validation and overfitting
  • 4.4.1 Validation
  • 4.4.2 Loss monitoring
  • 4.4.3 Going deeper with layers
  • 4.5 Convergence
  • 4.6 Checkpointing and early stopping
  • 4.6.1 Checkpointing
  • 4.6.2 Early stopping
  • 4.7 Hyperparameters
  • 4.7.1 Epochs
  • 4.7.2 Steps
  • 4.7.3 Batch size
  • 4.7.4 Learning rate
  • 4.8 Invariance
  • 4.8.1 Translational invariance
  • 4.8.2 Scale invariance
  • 4.8.3 TF.Keras ImageDataGenerator
  • 4.9 Raw (disk) datasets
  • 4.9.1 Directory structure
  • 4.9.2 CSV file
  • 4.9.3 JSON file
  • 4.9.4 Reading images
  • 4.9.5 Resizing
  • 4.10 Model save/restore
  • 4.10.1 Save
  • 4.10.2 Restore
  • Summary
  • Part 2. Basic design pattern
  • 5 Procedural design patterns
  • 5.1 Basic neural network architecture
  • 5.2 Stem component
  • 5.2.1 VGG
  • 5.2.2 ResNet
  • 5.2.3 ResNeXt
  • 5.2.4 Xception
  • 5.3 Pre-stem
  • 5.4 Learner component
  • 5.4.1 ResNet
  • 5.4.2 DenseNet
  • 5.5 Task component
  • 5.5.1 ResNet
  • 5.5.2 Multilayer output
  • 5.5.3 SqueezeNet
  • 5.6 Beyond computer vision: NLP
  • 5.6.1 Natural-language understanding
  • 5.6.2 Transformer architecture
  • Summary
  • 6 Wide convolutional neural networks
  • 6.1 Inception v1
  • 6.1.1 Naive inception module
  • 6.1.2 Inception v1 module
  • 6.1.3 Stem
  • 6.1.4 Learner
  • 6.1.5 Auxiliary classifiers
  • 6.1.6 Classifier
  • 6.2 Inception v2: Factoring convolutions
  • 6.3 Inception v3: Architecture redesign
  • 6.3.1 Inception groups and blocks
  • 6.3.2 Normal convolution
  • 6.3.3 Spatial separable convolution
  • 6.3.4 Stem redesign and implementation
  • 6.3.5 Auxiliary classifier
  • 6.4 ResNeXt: Wide residual neural networks
  • 6.4.1 ResNeXt block
  • 6.4.2 ResNeXt architecture
  • 6.5 Wide residual network
  • 6.5.1 WRN-50-2 architecture
  • 6.5.2 Wide residual block
  • 6.6 Beyond computer vision: Structured data
  • Summary
  • 7 Alternative connectivity patterns
  • 7.1 DenseNet: Densely connected convolutional neural network
  • 7.1.1 Dense group
  • 7.1.2 Dense block
  • 7.1.3 DenseNet macro-architecture
  • 7.1.4 Dense transition block
  • 7.2 Xception: Extreme Inception
  • 7.2.1 Xception architecture
  • 7.2.2 Entry flow of Xception
  • 7.2.3 Middle flow of Xception
  • 7.2.4 Exit flow of Xception
  • 7.2.5 Depthwise separable convolution
  • 7.2.6 Depthwise convolution
  • 7.2.7 Pointwise convolution
  • 7.3 SE-Net: Squeeze and excitation
  • 7.3.1 Architecture of SE-Net
  • 7.3.2 Group and block of SE-Net
  • 7.3.3 SE link
  • Summary
  • 8 Mobile convolutional neural networks
  • 8.1 MobileNet v1
  • 8.1.1 Architecture
  • 8.1.2 Width multiplier
  • 8.1.3 Resolution multiplier
  • 8.1.4 Stem
  • 8.1.5 Learner
  • 8.1.6 Classifier
  • 8.2 MobileNet v2
  • 8.2.1 Architecture
  • 8.2.2 Stem
  • 8.2.3 Learner
  • 8.2.4 Classifier
  • 8.3 SqueezeNet
  • 8.3.1 Architecture
  • 8.3.2 Stem
  • 8.3.3 Learner
  • 8.3.4 Classifier
  • 8.3.5 Bypass connections
  • 8.4 ShuffleNet v1
  • 8.4.1 Architecture
  • 8.4.2 Stem
  • 8.4.3 Learner
  • 8.5 Deployment
  • 8.5.1 Quantization
  • 8.5.2 TF Lite conversion and prediction
  • Summary
  • 9 Autoencoders
  • 9.1 Deep neural network autoencoders
  • 9.1.1 Autoencoder architecture
  • 9.1.2 Encoder
  • 9.1.3 Decoder
  • 9.1.4 Training
  • 9.2 Convolutional autoencoders
  • 9.2.1 Architecture
  • 9.2.2 Encoder
  • 9.2.3 Decoder
  • 9.3 Sparse autoencoders
  • 9.4 Denoising autoencoders
  • 9.5 Super-resolution
  • 9.5.1 Pre-upsampling SR
  • 9.5.2 Post-upsampling SR
  • 9.6 Pretext tasks
  • 9.7 Beyond computer vision: sequence to sequence
  • Summary
  • Part 3. Working with pipelines
  • 10 Hyperparameter tuning
  • 10.1 Weight initialization
  • 10.1.1 Weight distributions
  • 10.1.2 Lottery hypothesis
  • 10.1.3 Warm-up (numerical stability)
  • 10.2 Hyperparameter search fundamentals
  • 10.2.1 Manual method for hyperparameter search
  • 10.2.2 Grid search
  • 10.2.3 Random search
  • 10.2.4 KerasTuner
  • 10.3 Learning rate scheduler
  • 10.3.1 Keras decay parameter
  • 10.3.2 Keras learning rate scheduler
  • 10.3.3 Ramp
  • 10.3.4 Constant step
  • 10.3.5 Cosine annealing
  • 10.4 Regularization
  • 10.4.1 Weight regularization
  • 10.4.2 Label smoothing
  • 10.5 Beyond computer vision
  • Summary
  • 11 Transfer learning
  • 11.1 TF.Keras prebuilt models
  • 11.1.1 Base model
  • 11.1.2 Pretrained ImageNet models for prediction
  • 11.1.3 New classifier
  • 11.2 TF Hub prebuilt models
  • 11.2.1 Using TF Hub pretrained models
  • 11.2.2 New classifier
  • 11.3 Transfer learning between domains
  • 11.3.1 Similar tasks
  • 11.3.2 Distinct tasks
  • 11.3.3 Domain-specific weights
  • 11.3.4 Domain transfer weight initialization
  • 11.3.5 Negative transfer
  • 11.4 Beyond computer vision
  • Summary
  • 12 Data distributions
  • 12.1 Distribution types
  • 12.1.1 Population distribution
  • 12.1.2 Sampling distribution
  • 12.1.3 Subpopulation distribution
  • 12.2 Out of distribution
  • 12.2.1 The MNIST curated dataset
  • 12.2.2 Setting up the environment
  • 12.2.3 The challenge (“in the wild”)
  • 12.2.4 Training as a DNN
  • 12.2.5 Training as a CNN
  • 12.2.6 Image augmentation
  • 12.2.7 Final test
  • Summary
  • 13 Data pipeline
  • 13.1 Data formats and storage
  • 13.1.1 Compressed and raw-image formats
  • 13.1.2 HDF5 format
  • 13.1.3 DICOM format
  • 13.1.4 TFRecord format
  • 13.2 Data feeding
  • 13.2.1 NumPy
  • 13.2.2 TFRecord
  • 13.3 Data preprocessing
  • 13.3.1 Preprocessing with a pre-stem
  • 13.3.2 Preprocessing with TF Extended
  • 13.4 Data augmentation
  • 13.4.1 Invariance
  • 13.4.2 Augmentation with tf.data
  • 13.4.3 Pre-stem
  • Summary
  • 14 Training and deployment pipeline
  • 14.1 Model feeding
  • 14.1.1 Model feeding with tf.data.Dataset
  • 14.1.2 Distributed feeding with tf.Strategy
  • 14.1.3 Model feeding with TFX
  • 14.2 Training schedulers
  • 14.2.1 Pipeline versioning
  • 14.2.2 Metadata
  • 14.2.3 History
  • 14.3 Model evaluations
  • 14.3.1 Candidate vs. blessed model
  • 14.3.2 TFX evaluation
  • 14.4 Serving predictions
  • 14.4.1 On-demand (live) serving
  • 14.4.2 Batch prediction
  • 14.4.3 TFX pipeline components for deployment
  • 14.4.4 A/B testing
  • 14.4.5 Load balancing
  • 14.4.6 Continuous evaluation
  • 14.5 Evolution in production pipeline design
  • 14.5.1 Machine learning as a pipeline
  • 14.5.2 Machine learning as a CI/CD production process
  • 14.5.3 Model amalgamation in production
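The data-preparation topics from chapter 4 above (one-hot encoding, normalization, standardization) can be sketched in a few lines of NumPy. The array values below are made-up examples for illustration, not data from the course:

```python
import numpy as np

# Hypothetical pixel data in the range [0, 255]
images = np.array([[0, 64], [128, 255]], dtype=np.float32)

# Normalization (4.3.1): rescale values into [0, 1]
normalized = images / 255.0

# Standardization (4.3.2): shift and scale to zero mean, unit variance
standardized = (images - images.mean()) / images.std()

# One-hot encoding (4.2.2): integer class labels to indicator vectors
labels = np.array([0, 2, 1])
one_hot = np.eye(3)[labels]
```

Normalization keeps input magnitudes small and uniform, which helps gradient-based training converge; one-hot encoding matches the output shape expected by a softmax multiclass classifier.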

Images from the Deep Learning Patterns and Practices course

Sample course video

Installation Guide

After extracting the files, watch the videos with your preferred media player.

Subtitles: None

Quality: 720p

Download link

Download Part 1 – 1 GB

Download Part 2 – 552 MB

File(s) password: www.downloadly.ir

File size

1.5 GB