Research projects include:
Energy-based learning with ConvNet functions exhibits surprising behaviors not encountered for earlier energy functions.
Two distinct outcomes are possible: convergent learning and non-convergent learning.
Convergent learning is consistent with conventional theoretical expectations but is difficult to achieve in practice. Learning convergent energy functions with realistic long-run MCMC samples is essential for energy landscape mapping applications. My research introduces the first ConvNet energy functions with realistic long-run MCMC samples in the image space.
Non-convergent learning is an unexpected phenomenon of ConvNet energy functions, explored here. Informative initialization methods for MCMC sampling, such as Contrastive Divergence and Persistent Contrastive Divergence, are not needed for stable learning and high-quality short-run synthesis. By initializing MCMC samples from noise images rather than a data image or a persistent image, one can learn a non-convergent energy function that generates realistic images from noise, like a generator or flow model.
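As a minimal sketch of the short-run synthesis idea, the sampler below runs a fixed number of Langevin updates starting from pure noise. The quadratic energy and NumPy implementation are illustrative stand-ins for a learned ConvNet energy and a deep learning framework; the function name and parameters are hypothetical.

```python
import numpy as np

def short_run_langevin(energy_grad, x_init, n_steps=100, step_size=0.1, rng=None):
    """Run K Langevin steps x <- x - (s/2) * grad U(x) + sqrt(s) * eps,
    initialized from noise rather than data (CD) or persistent chains (PCD)."""
    rng = rng or np.random.default_rng(0)
    x = x_init.copy()
    for _ in range(n_steps):
        x = x - 0.5 * step_size * energy_grad(x) \
              + np.sqrt(step_size) * rng.normal(size=x.shape)
    return x

# Toy quadratic energy U(x) = ||x||^2 / 2, so grad U(x) = x and the
# stationary distribution is approximately standard normal.
rng = np.random.default_rng(0)
samples = short_run_langevin(lambda x: x, rng.normal(size=(1000, 2)),
                             n_steps=500, rng=rng)
```

In the learning loop, these short-run samples play the role of the negative samples that update the energy function's parameters.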
The local modes of an energy function are stable states that appear with high probability. An energy function defines a non-Euclidean geometry over the state space, and geodesic distances along the energy manifold provide a measure of conceptual similarity between states. Related groups of local modes form macroscopic non-convex structures that are analogous to the folding funnels of protein potentials. I use a novel MCMC algorithm, explored here, to detect metastable structures of learned energy functions that correspond to intuitive image concepts.
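To make the notion of local modes concrete, here is a hypothetical one-dimensional illustration (a double-well energy, not a learned ConvNet energy): gradient descent maps each starting point to the mode of its basin of attraction, and the set of distinct arrival points exposes the stable states of the landscape.

```python
import numpy as np

def find_local_mode(energy_grad, x0, step_size=1e-3, n_steps=5000):
    """Descend the energy surface to the local mode (minimum) whose
    basin of attraction contains the starting point x0."""
    x = x0
    for _ in range(n_steps):
        x = x - step_size * energy_grad(x)
    return x

# Double-well energy U(x) = (x^2 - 1)^2 with two stable states at x = -1, +1.
grad = lambda x: 4.0 * x * (x**2 - 1.0)
starts = [x for x in np.linspace(-2.0, 2.0, 9) if abs(x) > 1e-6]  # skip unstable x = 0
modes = {round(float(find_local_mode(grad, x)), 2) for x in starts}
```

Mapping which modes are reachable from one another at a given energy level, rather than just locating them, is what distinguishes metastable-structure detection from simple mode finding.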
Image classifier networks are highly susceptible to imperceptible perturbations that drastically alter network output. These perturbations can cause a network to give nonsensical labels for images that are clearly recognizable to a human. In contrast to existing approaches that modify classifier training to learn robust networks, this project seeks to secure naturally trained classifiers using image transformation alone. Long-run MCMC sampling with a convergent energy function preserves the recognizable image features needed for classification while removing the adversarial signals that disrupt classifier performance. The resulting defense is the first to secure highly vulnerable classifiers trained on natural images alone, providing the first viable and competitive alternative to adversarial training and related robust training modifications.
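The defense idea can be sketched as follows, under simplifying assumptions: the chain is initialized at the (possibly attacked) input image and Langevin sampling drifts it toward high-probability states of the energy function, washing out the low-magnitude adversarial signal while keeping the dominant image structure. The toy single-mode energy below is a stand-in for a trained convergent EBM, and all names are hypothetical.

```python
import numpy as np

def purify(energy_grad, x_adv, n_steps=1000, step_size=1e-2, seed=0):
    """Langevin purification: sample from the energy model starting at the
    input image, clipping to the valid pixel range after each update."""
    rng = np.random.default_rng(seed)
    x = x_adv.copy()
    for _ in range(n_steps):
        x = x - 0.5 * step_size * energy_grad(x) \
              + np.sqrt(step_size) * rng.normal(size=x.shape)
        x = np.clip(x, 0.0, 1.0)  # stay inside the [0, 1] image space
    return x

# Toy energy whose single mode is a mid-gray image: U(x) = 5 * ||x - 0.5||^2.
grad = lambda x: 10.0 * (x - 0.5)
x_adv = np.ones((32, 32))      # stand-in for a heavily perturbed input
x_purified = purify(grad, x_adv)
```

The purified image would then be passed to the unmodified, naturally trained classifier in place of the raw input.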