The paper is about training implicit models, which can be viewed as networks with effectively infinitely many layers. Instead of employing expensive implicit differentiation and solving a linear system for the exact gradient in the backward pass, the paper proposes a novel approach, named Phantom Gradient, that forgoes the heavy computation of the exact gradient while providing an update direction that is empirically preferable for training implicit models.
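To make the contrast concrete, here is a minimal scalar sketch (not the paper's implementation) of the two gradient estimates for a fixed point z* = f(z*, w). The exact implicit gradient solves dz*/dw = (∂f/∂w) / (1 − ∂f/∂z), while a phantom-style estimate differentiates K damped update steps unrolled from the (detached) fixed point. The layer `f`, the damping factor `lam`, and the step count `K` are illustrative choices, not values from the paper.

```python
import numpy as np

def f(z, w):
    # Hypothetical contractive "layer": z_{t+1} = tanh(w*z + 1)
    return np.tanh(w * z + 1.0)

def fixed_point(w, iters=200):
    # Find z* = f(z*, w) by plain fixed-point iteration (no gradients needed)
    z = 0.0
    for _ in range(iters):
        z = f(z, w)
    return z

def exact_implicit_grad(w):
    # Implicit differentiation of z* = f(z*, w):
    #   dz*/dw = (df/dw) / (1 - df/dz), evaluated at the fixed point
    z = fixed_point(w)
    s = 1.0 - np.tanh(w * z + 1.0) ** 2  # tanh'(w*z + 1)
    dfdz = s * w
    dfdw = s * z
    return dfdw / (1.0 - dfdz)

def phantom_grad(w, K=5, lam=0.8):
    # Phantom-style estimate: unroll K damped steps
    #   z <- lam * f(z, w) + (1 - lam) * z
    # starting from the detached fixed point, and accumulate dz/dw
    # through the unrolled computation only.
    z = fixed_point(w)
    dz = 0.0
    for _ in range(K):
        s = 1.0 - np.tanh(w * z + 1.0) ** 2
        dz = lam * (s * w * dz + s * z) + (1.0 - lam) * dz
        z = lam * f(z, w) + (1.0 - lam) * z
    return dz
```

With `lam=1.0` and large `K`, the accumulated gradient is a truncated Neumann series that converges to the exact implicit gradient; a small `K` with damping gives a cheap approximation whose direction stays close to the exact one.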

Keywords: Artificial Intelligence, Data Science, Machine Learning