Daniel D. Johnson


Here are some of the research projects I've worked on. You might also be interested in my Google Scholar profile.

Learning graph structure with a finite-state automaton layer (pdf, talk, code)

Daniel D. Johnson, Hugo Larochelle, Daniel Tarlow
NeurIPS 2020 (spotlight presentation); also presented at GRL+ 2020
We propose a differentiable layer that learns to add new edges to a graph based on a continuous relaxation of the accepting paths of a finite-state automaton. This layer can reproduce static analyses of Python programs and, when combined with a larger graph-based model and trained end to end, improves performance on the variable-misuse program-understanding task.
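The core idea can be illustrated with a toy sketch (not the paper's implementation; all names and shapes here are hypothetical): represent the graph's edges and the automaton's transitions as soft tensors, accumulate the weight of every path whose edge-type sequence drives the automaton from an initial to an accepting state, and read the result off as a matrix of soft new edges.

```python
import numpy as np

def soft_automaton_edges(adjacency, transition, init, accept, max_steps=4):
    """Toy sketch of a relaxed finite-state automaton over graph paths.

    adjacency:  (T, N, N) soft adjacency matrix per edge type t
    transition: (T, S, S) soft automaton transition matrix per edge type
    init:       (S,) initial automaton state distribution
    accept:     (S,) accepting-state weights
    Returns an (N, N) matrix of soft new-edge weights.
    """
    T, N, _ = adjacency.shape
    S = init.shape[0]
    # state[i, j, s]: total weight of paths from node i to node j
    # that leave the automaton in state s
    state = np.zeros((N, N, S))
    state[np.arange(N), np.arange(N), :] = init  # length-0 paths
    total = np.zeros((N, N))
    for _ in range(max_steps):
        # extend every path by one edge of type t, updating the
        # automaton state accordingly (sums over t, j, s)
        state = np.einsum('ijs,tjk,tsu->iku', state, adjacency, transition)
        # paths that currently sit in an accepting state emit an edge
        total += state @ accept
    return total
```

With hard 0/1 tensors this reduces to counting accepting paths of bounded length; with soft values everything stays differentiable, so the automaton itself could be learned by gradient descent.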

dex-lang (code)

Multiple contributors
A research language for typed, functional array processing. My main contributions so far are extensible record and variant types, and I'm working on using these to support statically-typed named axes.

Latent Gaussian Activity Propagation: Using smoothness and structure to separate and localize sounds in large noisy environments (pdf, poster)

Daniel D. Johnson, Daniel Gorelik, Ross E. Mawhorter, Kyle Suver, Weiqing Gu, Steven Xing, Cody Gabriel, Peter Sankhagowit
NeurIPS 2018
We present an approach for simultaneously separating and localizing multiple sound sources recorded by multiple microphones using MAP inference and automatic differentiation. We start with a smoothness prior over the temporal, frequential, and spatial characteristics of the source sounds and use gradient ascent to find the most likely separation.
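The MAP-by-gradient-ascent recipe can be shown on a drastically simplified toy problem (this is not the paper's model, just the same inference pattern): maximize a Gaussian log-likelihood plus a smoothness prior that penalizes squared differences between neighboring samples of a 1-D latent signal.

```python
import numpy as np

def map_smooth_estimate(obs, lam=2.0, lr=0.05, steps=2000):
    """Toy MAP inference: find the latent signal x maximizing
    -||obs - x||^2 - lam * sum_i (x[i+1] - x[i])^2 by gradient ascent."""
    x = np.zeros_like(obs)
    for _ in range(steps):
        grad = 2.0 * (obs - x)        # likelihood term: pull toward data
        d = np.diff(x)
        grad[:-1] += 2.0 * lam * d    # smoothness prior: discrete
        grad[1:]  -= 2.0 * lam * d    # Laplacian pulls neighbors together
        x = x + lr * grad
    return x
```

The optimum solves a linear system here, but gradient ascent (in practice, automatic differentiation over a much richer objective) generalizes to priors over temporal, frequential, and spatial structure where no closed form exists.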

Learning graphical state transitions (pdf, talk, code)

Daniel D. Johnson
ICLR 2017 (oral presentation)
This work introduces the GGT-NN, a model that uses graphs as an intermediate representation and learns to modify them in response to textual input. The model is able to build knowledge graphs for most of the bAbI tasks and can also simulate the operation of simple Turing machines.

Learning to create jazz melodies using a product of experts (pdf, blog)

Daniel D. Johnson, Robert M. Keller, Nicholas Weintraut
International Conference on Computational Creativity 2017
We describe a neural network architecture that learns to generate jazz melodies over chord progressions. The predictions are generated by combining information from an "interval expert" that follows the melody contour and a "chord expert" that scores notes based on their position relative to the root note of the active chord.
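The combination step of a product of experts is simple to sketch (a generic illustration, not the paper's exact formulation): each expert outputs unnormalized log-scores over candidate notes, and multiplying the experts' probabilities amounts to adding their logits before a single softmax.

```python
import numpy as np

def product_of_experts(interval_logits, chord_logits):
    """Combine two experts' scores over candidate next notes by
    multiplying their probability distributions, i.e. summing logits
    and renormalizing with a (numerically stable) softmax."""
    logits = interval_logits + chord_logits
    e = np.exp(logits - logits.max())
    return e / e.sum()
```

A note must score reasonably under both experts to stay probable: the product sharply suppresses any candidate that either expert vetoes, which is the appeal over averaging.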

Generating polyphonic music with tied-parallel networks (pdf, code, blog)

Daniel D. Johnson
EvoMusArt 2017
This work describes an architecture for generating polyphonic piano-roll music by using two arrays of LSTMs along perpendicular axes: time-axis LSTMs pass states forward in time, and note-axis LSTMs pass states upward across notes. The work started as a blog post and was later published as a conference paper.
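The two-axis recurrence pattern can be sketched with toy tanh cells standing in for the LSTMs (all names and weight shapes here are hypothetical, for illustration only): one pass scans each note's features forward in time with shared weights, then a second pass scans the resulting states upward across notes at each timestep.

```python
import numpy as np

def tanh_cell(h, x, W):
    # toy recurrent cell standing in for an LSTM
    return np.tanh(W @ np.concatenate([h, x]))

def two_axis_pass(features, Wt, Wn, hidden=8):
    """Sketch of the tied-parallel two-axis recurrence.

    features: (time, notes, feat) input array
    Wt: (hidden, hidden + feat)   shared time-axis cell weights
    Wn: (hidden, hidden + hidden) shared note-axis cell weights
    """
    T, N, _ = features.shape
    # time-axis pass: one recurrence per note, shared weights, over time
    time_out = np.zeros((T, N, hidden))
    for n in range(N):
        h = np.zeros(hidden)
        for t in range(T):
            h = tanh_cell(h, features[t, n], Wt)
            time_out[t, n] = h
    # note-axis pass: one recurrence per timestep, from low to high notes
    note_out = np.zeros((T, N, hidden))
    for t in range(T):
        h = np.zeros(hidden)
        for n in range(N):
            h = tanh_cell(h, time_out[t, n], Wn)
            note_out[t, n] = h
    return note_out
```

Sharing weights across notes in the time axis gives translation invariance over pitch, while the note-axis pass lets each note's prediction condition on the notes chosen below it, so chords stay coherent.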