This article discusses the development of artificial neural networks, beginning with the Perceptron, which was introduced approximately 65 years ago. More advanced architectures consisting of numerous feedforward (consecutive) layers were later introduced, and this depth is the essential ingredient of current deep learning algorithms. Deep learning is used to improve performance on analytical and physical tasks without human intervention, and it underlies everyday automation products such as emerging technologies for self-driving cars and autonomous chatbots. The key question driving the new research is whether efficient learning of non-trivial classification tasks can be achieved with brain-inspired shallow feedforward networks.
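To make the distinction concrete, the sketch below is a minimal illustration (not code from the study) of a shallow feedforward network: a single hidden layer of non-linear units between input and output. It is trained on a toy XOR-style task that a single Perceptron cannot solve; the layer sizes, activation functions, and training settings are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, non-linearly-separable task (XOR): a single Perceptron cannot solve it,
# but a shallow network with one hidden layer of non-linear units can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Shallow feedforward network: input (2) -> hidden (8, tanh) -> output (1, sigmoid).
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass through the two weight layers.
    h = np.tanh(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # predicted probability of class 1

    # Backpropagation of the binary cross-entropy loss.
    grad_out = (p - y) / len(X)               # gradient at the output pre-activation
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1.0 - h**2)   # tanh derivative
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Plain gradient-descent update.
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("predictions:", p.round(3).ravel())  # approaches [0, 1, 1, 0]
```

A deep network simply stacks many such hidden layers in sequence; the research question summarized above is whether architectures with very few layers, closer to the brain's shallow and wide organization, can learn comparably difficult classification tasks efficiently.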