A technique used in training artificial neural networks that normalizes the inputs to each layer over a mini-batch, typically standardizing them to zero mean and unit variance and then applying a learnable scale and shift, to make the training process faster and more stable.
In a deep learning model for image classification, batch normalization can be applied to the activations of each convolutional layer to reduce internal covariate shift and improve the model's generalization.
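The per-feature normalization described above can be sketched in a few lines of NumPy; the function name `batch_norm` and the toy data here are illustrative, not from any particular library:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Per-feature statistics computed over the mini-batch (axis 0)
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    # Standardize to zero mean / unit variance, then apply the
    # learnable scale (gamma) and shift (beta) parameters
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(32, 4))  # batch of 32, 4 features
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```

After normalization, each feature of `y` has mean approximately 0 and standard deviation approximately 1, regardless of the scale of the input; frameworks additionally track running statistics for use at inference time, which this sketch omits.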