Hidden layer explained

In artificial neural networks, a hidden layer is a layer of artificial neurons that is neither an input layer nor an output layer. The simplest examples appear in multilayer perceptrons (MLPs), as illustrated in the diagram.[1]

An MLP without any hidden layer can represent only linear (affine) functions of its input. Adding hidden layers with nonlinear activation functions lets the model represent nonlinear functions of the input as well.
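As a minimal sketch of this idea, the following forward pass adds one hidden layer with a ReLU activation between the input and the output. The layer sizes, random weights, and ReLU choice are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=3)            # input vector (3 features, assumed size)
W1 = rng.normal(size=(4, 3))      # hidden-layer weights (4 hidden units)
b1 = np.zeros(4)                  # hidden-layer biases
W2 = rng.normal(size=(1, 4))      # output-layer weights
b2 = np.zeros(1)                  # output-layer bias

h = np.maximum(0, W1 @ x + b1)    # hidden layer: affine map + ReLU nonlinearity
y = W2 @ h + b2                   # output layer; without h this would be purely linear
print(y.shape)                    # (1,)
```

Dropping the `np.maximum` (the activation) would collapse the two layers into a single affine map, which is why the nonlinearity is essential.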

In typical machine learning practice, the weights and biases are initialized (often randomly), then iteratively updated during training, with gradients computed via backpropagation.
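The training loop above can be sketched as one gradient-descent step on a one-hidden-layer MLP with a squared-error loss. The data, layer sizes, and learning rate here are illustrative assumptions; the backward pass is the chain rule applied layer by layer.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))           # 8 samples, 3 features (assumed sizes)
t = rng.normal(size=(8, 1))           # illustrative regression targets

# Random initialization of weights and biases.
W1 = rng.normal(size=(3, 4)) * 0.5
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)) * 0.5
b2 = np.zeros(1)

def forward(X):
    h = np.maximum(0, X @ W1 + b1)    # hidden activations (ReLU)
    y = h @ W2 + b2                   # linear output layer
    return h, y

h, y = forward(X)
loss_before = np.mean((y - t) ** 2)   # mean squared error

# Backpropagation: propagate the loss gradient back through each layer.
grad_y = 2 * (y - t) / len(X)         # dL/dy
grad_W2 = h.T @ grad_y
grad_b2 = grad_y.sum(axis=0)
grad_h = grad_y @ W2.T
grad_h[h <= 0] = 0                    # ReLU gradient: zero where the unit was inactive
grad_W1 = X.T @ grad_h
grad_b1 = grad_h.sum(axis=0)

lr = 0.01                             # learning rate (assumed)
W1 -= lr * grad_W1; b1 -= lr * grad_b1
W2 -= lr * grad_W2; b2 -= lr * grad_b2

_, y = forward(X)
loss_after = np.mean((y - t) ** 2)
```

With a sufficiently small learning rate, a single step along the negative gradient reduces the training loss; deep-learning frameworks automate exactly this gradient computation.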

References

  1. Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "5.1. Multilayer Perceptrons". Dive into Deep Learning. Cambridge: Cambridge University Press. ISBN 978-1-009-38943-3. https://d2l.ai/chapter_multilayer-perceptrons/mlp.html