Paper repro: “Self-Normalizing Neural Networks”

  • March 13, 2018

SNNs are really cool: their goal is to build neural networks in which, if the input to any layer is normally distributed, the output of that layer will automatically be normally distributed too. This is a big deal because normalizing layer outputs is known to be a very effective way to improve the performance of neural networks, but the current ways of doing it (e.g. BatchNorm) are essentially bolted-on hacks, whereas in an SNN the normalization is an intrinsic property of the network's mathematics.

Source: becominghuman.ai
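To see the self-normalizing effect in action, here is a minimal NumPy sketch (mine, not from the post): it applies the paper's SELU activation after random linear layers initialized the way SNNs assume (weights with standard deviation 1/√fan_in), and checks that a standard-normal input keeps roughly zero mean and unit variance as it flows through the stack. The constants are the fixed values from the SNN paper; the layer sizes and depth are arbitrary choices for the demo.

```python
import numpy as np

# SELU constants from Klambauer et al., "Self-Normalizing Neural Networks"
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    # Scaled exponential linear unit: SCALE * x for x > 0,
    # SCALE * ALPHA * (exp(x) - 1) otherwise.
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

rng = np.random.default_rng(0)
n, d = 4096, 512          # batch size and layer width (arbitrary for the demo)

# Standard-normal input: mean 0, variance 1.
x = rng.standard_normal((n, d))

for _ in range(8):        # 8 random fully-connected layers
    # "LeCun normal" init (std = 1/sqrt(fan_in)), the init SNNs assume.
    w = rng.standard_normal((d, d)) / np.sqrt(d)
    x = selu(x @ w)

# Activations should stay near the (mean=0, var=1) fixed point.
print(x.mean(), x.var())
```

Contrast this with a plain ReLU in the same loop: the mean drifts positive and the variance shrinks or grows layer by layer, which is exactly the drift BatchNorm exists to correct after the fact.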

