My current research has two major branches:

Energy-based learning with ConvNet functions exhibits surprising behaviors not encountered for earlier energy functions.
Two distinct outcomes are possible: convergent learning and non-convergent learning.

Convergent learning is consistent with conventional theoretical expectations but is difficult to achieve in practice.
Learning convergent energy functions with realistic long-run MCMC samples is essential for energy landscape mapping applications.
My research introduces the first ConvNet energy functions with realistic long-run MCMC samples in the image space.
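The workhorse sampler in this setting is Langevin dynamics. As a minimal sketch (my own toy construction, not the research code), the update below runs Langevin chains on a simple quadratic energy whose long-run samples should match a standard normal; for a ConvNet energy, the hand-coded gradient would be replaced by backpropagation through the network.

```python
import numpy as np

# Toy Langevin dynamics sketch (illustrative, not the author's code).
# Target energy: E(x) = x^2 / 2, whose stationary density is N(0, 1).

def energy_grad(x):
    return x  # dE/dx for E(x) = x^2 / 2

def langevin_sample(x0, n_steps, step_size, rng):
    """Langevin update: x <- x - (s^2 / 2) * dE/dx + s * noise."""
    x = x0
    for _ in range(n_steps):
        x = (x - 0.5 * step_size**2 * energy_grad(x)
             + step_size * rng.standard_normal(x.shape))
    return x

rng = np.random.default_rng(0)
x0 = np.zeros(5000)  # 5000 independent chains, all started at zero
samples = langevin_sample(x0, n_steps=2000, step_size=0.1, rng=rng)
print(samples.mean(), samples.var())  # approximately 0 and 1
```

With a long enough run, the chains forget their initialization and the sample statistics match the model density; convergent learning requires this realistic long-run behavior to hold for the learned energy.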

Non-convergent learning is an unexpected phenomenon that occurs for ConvNet energy functions and is explored here.
Informative MCMC initialization, as used in Contrastive Divergence and Persistent Contrastive Divergence, is not needed for stable learning and high-quality short-run synthesis.
By initializing MCMC samples from noise images rather than a data image or persistent image, one can learn a non-convergent energy function that can generate realistic images from noise like a generator or flow model.
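The following toy sketch (my own construction, under strong simplifying assumptions, not the author's method) illustrates the idea in one dimension: a Gaussian energy E_theta(x) = theta * x^2 is trained against data from N(0, 0.5^2) using only short-run Langevin chains initialized from fresh noise at every update, with no data-based or persistent initialization.

```python
import numpy as np

# Toy sketch of noise-initialized short-run EBM learning (hypothetical
# 1-D example, not the research code). Model: E_theta(x) = theta * x^2.

rng = np.random.default_rng(1)
data = rng.normal(0.0, 0.5, size=1000)  # stand-in for training images
theta = 1.0                              # energy parameter
step, k_steps, lr = 0.1, 50, 0.5

def short_run_samples(theta, n, rng):
    x = rng.standard_normal(n)           # chains start from pure noise
    for _ in range(k_steps):             # only a short MCMC run
        grad = 2.0 * theta * x           # d/dx of theta * x^2
        x = x - 0.5 * step**2 * grad + step * rng.standard_normal(n)
    return x

for _ in range(200):
    x_model = short_run_samples(theta, 1000, rng)
    # Maximum-likelihood-style update: match model and data statistics.
    theta += lr * (np.mean(x_model**2) - np.mean(data**2))

final = short_run_samples(theta, 10000, rng)
print(theta, final.var())  # synthesis variance matches the data variance
```

Note what converges here: the short-run samples reproduce the data statistics, even though the learned theta need not be the maximum-likelihood parameter, since the chains never reach the model's stationary distribution. The short-run sampler behaves like a generator mapping noise to realistic samples.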

The local modes of an energy function are stable states that appear with high probability. An energy function defines a non-Euclidean geometry over the state space. Geodesic distances along the energy manifold provide a measure of conceptual similarity between states. Related groups of local modes form macroscopic non-convex structures that are analogous to the folding funnels of protein potentials. I use a novel MCMC algorithm to detect metastable structures of learned energy functions that correspond to intuitive image concepts, as explored here.
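A minimal 1-D sketch of the mode-mapping idea (a hypothetical toy example, not the novel MCMC algorithm itself): on the double-well energy E(x) = (x^2 - 1)^2, plain gradient descent assigns each starting state to the local mode whose basin of attraction contains it, a simple analogue of grouping states into metastable structures.

```python
import numpy as np

# Toy basin-of-attraction mapping (illustrative, not the research
# algorithm). Double-well energy E(x) = (x^2 - 1)^2 has two local
# modes at x = -1 and x = +1; x = 0 is a local maximum between them.

def energy_grad(x):
    return 4.0 * x * (x**2 - 1.0)  # dE/dx for E(x) = (x^2 - 1)^2

def descend_to_mode(x, lr=0.01, n_steps=2000):
    """Gradient descent on the energy until a local mode is reached."""
    for _ in range(n_steps):
        x = x - lr * energy_grad(x)
    return x

rng = np.random.default_rng(0)
starts = rng.uniform(-2.0, 2.0, size=200)
modes = np.array([descend_to_mode(x) for x in starts])
basins = np.sign(np.round(modes))  # -1 or +1: which funnel each state fell into
print(np.unique(basins))
```

In the image space the same grouping must respect the high-dimensional geometry of the ConvNet energy, which is why metastability detection requires an MCMC algorithm rather than simple descent, but the basin structure being mapped is the same.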