Deep learning neural networks have become easy to create. However,
tuning these models for maximum performance remains a challenge for
most modelers. This course will teach you how to get better results
from your deep learning models as a machine learning practitioner.
The course starts with an introduction to the problem of overfitting
and a tour of regularization techniques. You'll learn to configure
stochastic gradient descent for better results by tuning the batch
size, loss function, and learning rate, and to avoid exploding
gradients with gradient clipping.
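To make the gradient clipping idea concrete, here is a minimal Keras sketch (the toy model and the clipnorm value are assumptions for illustration, not taken from the course materials):

    # Minimal sketch of gradient clipping with SGD in Keras. Illustrative
    # only: the toy model and clipnorm value are assumptions, not course code.
    from tensorflow import keras

    model = keras.Sequential([
        keras.Input(shape=(10,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1),
    ])

    # clipnorm rescales any gradient whose L2 norm exceeds 1.0, which
    # keeps exploding gradients from destabilizing training.
    opt = keras.optimizers.SGD(learning_rate=0.01, clipnorm=1.0)
    model.compile(optimizer=opt, loss="mse")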
After that, you'll learn to reduce overfitting by updating the loss
function with techniques such as weight regularization, weight
constraints, and activation regularization. Next, you'll apply
dropout, inject noise, use early stopping, and combine the predictions
from multiple models.
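These overfitting-reduction techniques map onto Keras layer arguments and callbacks; the sketch below combines them in one toy model (layer sizes and all hyperparameter values are assumptions):

    # Minimal sketch combining the overfitting-reduction techniques above
    # in Keras. Illustrative only; hyperparameters are assumptions.
    from tensorflow import keras
    from tensorflow.keras import constraints, layers, regularizers

    model = keras.Sequential([
        keras.Input(shape=(10,)),
        layers.GaussianNoise(0.1),  # noise injection on the inputs
        layers.Dense(
            64, activation="relu",
            kernel_regularizer=regularizers.l2(1e-4),     # weight regularization
            kernel_constraint=constraints.MaxNorm(3.0),   # weight constraint
            activity_regularizer=regularizers.l1(1e-5),   # activation regularization
        ),
        layers.Dropout(0.5),  # dropout
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

    # Early stopping halts training once validation loss stops improving
    # and restores the best weights seen so far.
    early_stop = keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=10, restore_best_weights=True)
    # model.fit(X_train, y_train, validation_data=(X_val, y_val),
    #           epochs=500, callbacks=[early_stop])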
You'll also look at ensemble learning techniques, learn to diagnose
poor model training and problems such as premature convergence, and
accelerate the model training process. Then, you'll combine the
predictions from multiple models saved during a single training run
using techniques such as horizontal ensembles and snapshot ensembles.
Finally, you'll diagnose high variance in a final model and improve
its average predictive skill.
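As an illustration of the single-run ensemble idea, here is a minimal horizontal-ensemble sketch in Keras: save the model at several epochs of one training run, then average the saved models' predictions (file names, epoch choices, and data variables are assumptions):

    # Minimal sketch of a horizontal (epoch-based) ensemble in Keras.
    # Illustrative only; paths, epoch range, and data names are assumptions.
    import numpy as np
    from tensorflow import keras

    # During training, save a snapshot of the model after every epoch.
    checkpoint = keras.callbacks.ModelCheckpoint("model_epoch_{epoch:02d}.keras")
    # model.fit(X_train, y_train, epochs=30, callbacks=[checkpoint])

    def horizontal_ensemble_predict(X, epochs):
        """Average the predictions of models saved at the given epochs."""
        preds = [keras.models.load_model(f"model_epoch_{e:02d}.keras").predict(X)
                 for e in epochs]
        return np.mean(preds, axis=0)

    # e.g. combine the last five snapshots of the run:
    # y_hat = horizontal_ensemble_predict(X_test, epochs=range(26, 31))

Averaging snapshots from one run captures much of the variance-reduction benefit of a traditional ensemble without the cost of training several models from scratch.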
By the end of this course, you'll have learned a range of techniques
for getting better results from deep learning models.
All the resource files are available in the GitHub repository at
https://github.com/PacktPublishing/Performance-Tuning-Deep-Learning-Models-Master-Class
Product details
ISBN: 9781803243894
Published: 2023
Edition: 1st edition
Publisher: Packt Publishing
Language: English
Format: Digital book
Author: