
This post by Lorenz Kuger reflects on the recent success of machine learning models and the challenges that come with training them. Motivated by these challenges, Kuger highlights a recently published article on modifying gradient descent so that it avoids saddle points, a direction that has so far received comparatively little attention. The paper, now published in the European Journal of Applied Mathematics (EJAM), introduces a deterministic gradient-based approach to avoiding saddle points, an issue central to the training of neural networks.
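To see why saddle points are a problem for plain gradient descent, consider the classic toy function f(x, y) = x^2 - y^2: its gradient vanishes at the origin even though the origin is neither a minimum nor a maximum. The Python sketch below is a generic illustration of this stalling behaviour, not the deterministic method from the EJAM paper; the step size, starting points, and iteration count are arbitrary choices made for demonstration.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Plain gradient descent; returns the final iterate."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# f(x, y) = x**2 - y**2 has a saddle point at the origin:
# the gradient (2x, -2y) vanishes there.
grad_f = lambda p: np.array([2.0 * p[0], -2.0 * p[1]])

# Initialized exactly on the stable manifold (y = 0), the iterates
# converge straight to the saddle and stall there.
stuck = gradient_descent(grad_f, [1.0, 0.0])
print(stuck)    # approximately [0, 0]: trapped at the saddle

# Any tiny perturbation off that manifold eventually escapes,
# since the y-component grows by a factor (1 + 2*lr) each step.
escaped = gradient_descent(grad_f, [1.0, 1e-8])
print(escaped)  # the y-coordinate has grown large: the saddle is left behind
```

The catch is that the escape above relies on the initial point happening to sit off the stable manifold; the research discussed in the post instead asks how gradient descent itself can be modified, deterministically, so that saddle points are avoided by design.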