Artificial neurons as ridge functions

Artificial neurons are a particular case of ridge functions: a ridge function has the form g(⟨a, x⟩), a scalar function applied to a linear projection of the input, and a neuron σ(⟨w, x⟩ + b) fits this form. Consequently, linear combinations of ridge functions generalize multilayer perceptrons with one hidden layer, and the approximation theory of ridge functions therefore provides an upper bound on the approximation capabilities of such networks.
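The correspondence can be sketched in code. The helper names below (`ridge`, `neuron`, `mlp`) are illustrative, not from any library; the bias is absorbed by appending a constant 1 to the input, a standard trick.

```python
import numpy as np

def ridge(x, a, g):
    """Ridge function: a scalar function g applied to the projection <a, x>."""
    return g(np.dot(a, x))

def neuron(x, w, b, sigma=np.tanh):
    """An artificial neuron sigma(<w, x> + b) is a ridge function:
    absorb the bias by appending 1 to x and b to w."""
    return ridge(np.append(x, 1.0), np.append(w, b), sigma)

def mlp(x, W, b, c, sigma=np.tanh):
    """One-hidden-layer perceptron: a linear combination (weights c)
    of ridge functions sigma(<W[i], x> + b[i])."""
    return float(c @ sigma(W @ x + b))

x = np.array([0.5, -1.0])
w = np.array([2.0, 1.0])
print(neuron(x, w, 0.5))  # equals np.tanh(2*0.5 + 1*(-1.0) + 0.5) = np.tanh(0.5)
```

Note that a general ridge function allows an arbitrary profile g, while the network fixes one activation σ, which is why sums of ridge functions upper-bound what the network can express.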

Why convolutions are everywhere

Modern deep learning architectures often include convolutions. Convolutions are highly optimized, efficient operations, but the reason they are so widespread is more profound: we are going to show that convolutions arise from simple assumptions on the data.