This thesis investigates two directions for advancing deep learning. The first paper studies the generalisation of neural networks with rectified linear activation units (“ReLUs”) and proposes a tropical algebra-based algorithm called TropEx for extracting the coefficients of a network's linear regions; since a ReLU network computes a piecewise affine function, each such region is fully described by an affine map. The second paper proposes a parametric rational activation function called ERA, which is learnable during network training and significantly increases network expressivity. ERA outperforms previous activation functions when used in small architectures, which is relevant given the growing size of neural networks and the associated computational cost and electricity usage.
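
To make the notion of linear-region coefficients concrete, the following is a minimal autograd sketch, not the TropEx algorithm itself (which is tropical-algebraic); the function name `local_affine_map` and the PyTorch setup are illustrative assumptions. Because a ReLU network is affine on each region, the Jacobian at an input x gives the slope matrix A of the region containing x, and b = f(x) − Ax gives its intercept.

```python
import torch
import torch.nn as nn

def local_affine_map(net, x):
    """Recover the affine map z -> A z + b that a piecewise-affine (ReLU)
    network computes on the linear region containing x."""
    x = x.detach().requires_grad_(True)
    A = torch.autograd.functional.jacobian(net, x)  # slope matrix of the region
    with torch.no_grad():
        b = net(x) - A @ x                          # intercept of the region
    return A, b

# Example: for a small ReLU network, A and b reproduce net(x) exactly
# for every input inside the same linear region.
net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.randn(4)
A, b = local_affine_map(net, x)
print(torch.allclose(net(x), A @ x + b, atol=1e-5))  # True
```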
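
ERA belongs to the family of rational activations, i.e. learnable elementwise ratios of polynomials P(x)/Q(x). Below is a minimal PyTorch sketch of such an activation; the class name `RationalActivation`, the polynomial degrees, and the pole-free denominator parameterisation via factors (x − a_j)² + b_j² are illustrative assumptions, not ERA's exact formulation from the paper.

```python
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    """Learnable rational activation P(x) / Q(x), applied elementwise.
    The denominator is a product of irreducible quadratic factors
    (x - a_j)^2 + b_j^2 + eps, kept strictly positive so the function
    has no poles on the real line (an assumed parameterisation)."""
    def __init__(self, num_degree=5, den_pairs=2, eps=1e-6):
        super().__init__()
        # Numerator coefficients p_0 + p_1 x + ... + p_d x^d (trainable).
        self.p = nn.Parameter(0.1 * torch.randn(num_degree + 1))
        # Each pair (a_j, b_j) contributes one quadratic factor (trainable).
        self.a = nn.Parameter(torch.zeros(den_pairs))
        self.b = nn.Parameter(torch.ones(den_pairs))
        self.eps = eps

    def forward(self, x):
        num = sum(c * x**k for k, c in enumerate(self.p))
        den = torch.ones_like(x)
        for a_j, b_j in zip(self.a, self.b):
            den = den * ((x - a_j) ** 2 + b_j ** 2 + self.eps)
        return num / den

# Drop-in replacement for a fixed activation in a small architecture;
# the rational coefficients are optimised jointly with the weights.
net = nn.Sequential(nn.Linear(4, 16), RationalActivation(), nn.Linear(16, 3))
```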