This paper introduces a "state-space" representation of deep learning and generalizes the ordinary notion of multi-layer neural networks to "implicit" ones, where the computational graph is allowed to have cycles. It also discusses well-posedness, i.e., the existence and uniqueness of solutions to the equilibrium equation.
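For concreteness, here is a minimal sketch of the kind of equilibrium computation at stake (the matrices A, B, C, D, the nonlinearity phi, and the fixed-point iteration below are my own illustration, not necessarily the authors' exact notation or solver): the state x is defined implicitly by x = phi(Ax + Bu) and the prediction is y = Cx + Du, with well-posedness guaranteed under a sufficient condition such as a contraction bound on A.

```python
import numpy as np

def implicit_forward(A, B, C, D, u, phi=np.tanh, n_iter=200, tol=1e-8):
    """Solve the equilibrium equation x = phi(A x + B u) by fixed-point
    iteration, then return the prediction y = C x + D u.

    The iteration converges when the equation is well-posed, e.g. if
    ||A||_inf < 1 and phi is componentwise non-expansive (an illustrative
    sufficient condition, not necessarily the one used in the paper).
    """
    x = np.zeros(A.shape[0])
    for _ in range(n_iter):
        x_new = phi(A @ x + B @ u)
        if np.max(np.abs(x_new - x)) < tol:
            x = x_new
            break
        x = x_new
    return C @ x + D @ u

# Toy example: scale A so that ||A||_inf < 1, ensuring a unique equilibrium.
rng = np.random.default_rng(0)
n, p, q = 5, 3, 2
A = rng.standard_normal((n, n))
A = 0.9 * A / np.abs(A).sum(axis=1).max()  # enforce ||A||_inf < 1
B = rng.standard_normal((n, p))
C = rng.standard_normal((q, n))
D = rng.standard_normal((q, p))
u = rng.standard_normal(p)
print(implicit_forward(A, B, C, D, u))
```

A feed-forward network is recovered as the special case where A is (block) strictly lower triangular, so the cycle-free structure makes the equilibrium trivially solvable layer by layer.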