Andrej Karpathy Building makemore Part 3: Activations & Gradients, BatchNorm
🎯 Uploaded automatically via bot:
🚫 Original video:
📺 This video belongs to the channel “Andrej Karpathy” (@AndrejKarpathy). It is presented in our community solely for informational, scientific, educational, or cultural purposes. Our community claims no rights to this video. Please support the author by visiting his original channel.
✉️ If you have a copyright complaint regarding this video, please contact us at support@ and we will remove it immediately.
📃 Original description:
We dive into some of the internals of MLPs with multiple layers and scrutinize the statistics of the forward pass activations, backward pass gradients, and some of the pitfalls when they are improperly scaled. We also look at the typical diagnostic tools and visualizations you’d want to use to understand the health of your deep network. We learn why training deep neural nets can be fragile and introduce the first modern innovation that made doing so much easier: Batch Normalization. Residual connections and the Adam optimizer remain notable todos for a later video.
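As a rough illustration of the forward-pass diagnostics discussed in the video (a hedged sketch, not the notebook’s code; the layer sizes are illustrative, while the 5/3 tanh gain and the 0.97 saturation threshold follow the lecture’s conventions):

```python
import torch

torch.manual_seed(1)
n_hidden = 100                     # illustrative width, not the lecture's exact config
x = torch.randn(1000, n_hidden)    # stand-in for a batch of inputs
acts = []
for _ in range(5):                 # a small stack of tanh layers
    W = torch.randn(n_hidden, n_hidden) * (5/3) / n_hidden**0.5  # Kaiming-style scale for tanh
    x = torch.tanh(x @ W)
    acts.append(x)

for i, h in enumerate(acts):
    sat = (h.abs() > 0.97).float().mean().item()   # fraction of near-saturated tanh units
    print(f"layer {i}: mean {h.mean().item():+.3f}  std {h.std().item():.3f}  saturated {sat:.1%}")
```

With the Kaiming-style scale the printed std stays roughly constant across layers and saturation stays low; dropping or inflating the scale factor is an easy way to reproduce the vanishing/saturating behaviours the video visualizes.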
Links:
makemore on github:
jupyter notebook I built in this video:
Colab notebook:
my website:
my twitter:
Discord channel:
Useful links:
“Kaiming init” paper:
BatchNorm paper:
Bengio et al. 2003 MLP language model paper (pdf):
Good paper illustrating some of the problems with batchnorm in practice:
Exercises:
E01: I did not get around to seeing what happens when you initialize all weights and biases to zero. Try this and train the neural net. You might think either that 1) the network trains just fine or 2) the network doesn’t train at all, but actually it is 3) the network trains but only partially, and achieves a pretty bad final performance. Inspect the gradients and activations to figure out what is happening and why the network is only partially training, and what part is being trained exactly.
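A minimal starting point for E01 (a sketch only; the tiny MLP below is an illustrative stand-in, not the makemore notebook): zero-initialize every parameter and print per-parameter gradient norms after one backward pass to see which parts of the network actually receive a learning signal.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(42)
vocab_size, block_size, n_embd, n_hidden = 27, 3, 10, 64   # illustrative sizes

model = nn.Sequential(
    nn.Embedding(vocab_size, n_embd),
    nn.Flatten(),
    nn.Linear(block_size * n_embd, n_hidden),
    nn.Tanh(),
    nn.Linear(n_hidden, vocab_size),
)
for p in model.parameters():
    nn.init.zeros_(p)              # the all-zero initialization from the exercise

xb = torch.randint(0, vocab_size, (32, block_size))        # fake batch of contexts
yb = torch.randint(0, vocab_size, (32,))                   # fake targets
F.cross_entropy(model(xb), yb).backward()

for name, p in model.named_parameters():
    print(f"{name:12s} grad norm = {p.grad.norm().item():.6f}")
```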
E02: BatchNorm, unlike other normalization layers like LayerNorm/GroupNorm etc., has the big advantage that after training, the batchnorm gamma/beta can be “folded into” the weights of the preceding Linear layers, effectively erasing the need to forward it at test time. Set up a small 3-layer MLP with batchnorms, train the network, then “fold” the batchnorm gamma/beta into the preceding Linear layer’s W,b by creating a new W2, b2 and erasing the batch norm. Verify that this gives the same forward pass during inference. i.e. we see that the batchnorm is there just for stabilizing the training, and can be thrown out after training is done! pretty cool.
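A minimal sketch of the fold in E02 (assuming an nn.Linear feeding directly into an nn.BatchNorm1d; the sizes and variable names are illustrative, not the notebook’s). At inference BN computes gamma*(z - mu)/sqrt(var + eps) + beta on z = x @ W.T + b, so scaling each row of W by s = gamma/sqrt(var + eps) and setting b2 = s*(b - mu) + beta reproduces it exactly:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
lin = nn.Linear(20, 30)
bn = nn.BatchNorm1d(30)

with torch.no_grad():
    bn.weight.uniform_(0.5, 1.5)   # make gamma/beta non-trivial for the check
    bn.bias.uniform_(-0.5, 0.5)
for _ in range(100):               # forward a few batches so the running stats are non-trivial
    bn(lin(torch.randn(64, 20)))
lin.eval(); bn.eval()

# fold: s = gamma / sqrt(var + eps);  W2 = s * W (row-wise);  b2 = s*(b - mu) + beta
s = bn.weight / torch.sqrt(bn.running_var + bn.eps)
W2 = lin.weight * s[:, None]
b2 = s * (lin.bias - bn.running_mean) + bn.bias

folded = nn.Linear(20, 30)
with torch.no_grad():
    folded.weight.copy_(W2)
    folded.bias.copy_(b2)

x = torch.randn(8, 20)
print(torch.allclose(bn(lin(x)), folded(x), atol=1e-5))    # expect True
```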
Chapters:
intro
starter code
fixing the initial loss
fixing the saturated tanh
calculating the init scale: “Kaiming init”
batch normalization
batch normalization: summary
real example: resnet50 walkthrough
summary of the lecture
just kidding: part2: PyTorch-ifying the code
viz #1: forward pass activations statistics
viz #2: backward pass gradient statistics
the fully linear case of no non-linearities
viz #3: parameter activation and gradient statistics
viz #4: update:data ratio over time
bringing back batchnorm, looking at the visualizations
summary of the lecture for real this time