James Gornet

Email: jgornet [at] caltech [dot] edu
232 Annenberg Hall
PhD Candidate in Computational and Neural Systems, Caltech

Leveraging a neural network's geometric representation from self-supervised learning

Neural networks are frequently used for tasks distinct from the ones they were trained on. Language models such as GPT-3, for example, are trained to predict the next word from the surrounding text; people then use them to answer various questions about a given passage. Neural networks can answer these questions because they learn to represent human language. Although neural networks are often assumed to be black boxes, they nevertheless possess a tractable geometric structure provided by their nonlinear activations. My work studies a neural network's internal geometric representation using differential geometry, and I develop algorithms that leverage this representation for tasks distinct from the network's training.

Understanding the geometry of natural language

Computational linguistics models the syntax and semantics of human language using mathematical methods; these methods, however, are restricted to pairwise relationships between words. Language models such as GPT-3 capture far richer statistical correlations and use them to generate human-like text. My work uses differential geometry to understand how these language models represent human language. Geometric properties shared across different language models point to complex semantic structure that pairwise correlations cannot capture.
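As a minimal sketch of the kind of analysis involved, consider estimating the local intrinsic dimension of a language model's token representations: each token's nearest-neighbor neighborhood approximates a patch of the representation manifold, and the number of principal components needed to explain its variance estimates the tangent space dimension. (GPT-2, the Hugging Face transformers library, the neighborhood size, and the 90% variance threshold are all illustrative choices here, not part of my method.)

```python
import numpy as np
import torch
from sklearn.neighbors import NearestNeighbors
from transformers import AutoModel, AutoTokenizer

# Load a small pretrained language model (GPT-2 as an illustrative stand-in).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

text = "The quick brown fox jumps over the lazy dog. " * 20
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Token embeddings from the final hidden layer: shape (num_tokens, hidden_dim).
X = outputs.hidden_states[-1][0].numpy()

# Estimate the local intrinsic dimension at each token by PCA on its
# k-nearest-neighbor neighborhood: the neighborhood approximates a patch of
# the representation manifold, and the tangent space dimension is roughly the
# number of principal components needed to explain most of the local variance.
k = 16
neighbors = NearestNeighbors(n_neighbors=k).fit(X)
_, idx = neighbors.kneighbors(X)

dims = []
for i in range(X.shape[0]):
    patch = X[idx[i]] - X[idx[i]].mean(axis=0)  # center the neighborhood
    s = np.linalg.svd(patch, compute_uv=False)  # local principal values
    variance = np.cumsum(s**2) / np.sum(s**2)
    dims.append(int(np.searchsorted(variance, 0.90) + 1))

print(f"median local intrinsic dimension: {np.median(dims):.0f}")
```

Comparing such estimates across layers and across models is one simple way shared geometric properties can surface.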

Extending neurosymbolic programming to write provably correct code

Transformers and other attention-based neural networks can learn to generate complex algorithms in programming languages such as JavaScript and C++. These networks, however, frequently generate incorrect code and cannot validate the correctness of the code they produce. (Human programmers, of course, share both shortcomings.) Dependent type theory studies methods for writing provably correct code. My work integrates neurosymbolic programming with dependent type theory through the univalence axiom, which states, intuitively, that types behave like path space objects.
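To give a flavor of how dependent types rule out incorrect code before it runs, here is a minimal Lean 4 sketch. It illustrates dependent typing in general, not the univalence-based system described above: a vector type indexed by its length makes taking the head of an empty vector a type error rather than a runtime failure.

```lean
-- Minimal illustration of dependent typing (not the univalence-based
-- system described above). A vector type indexed by its length:
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- `head` is total: its argument type `Vec α (n + 1)` rules out the empty
-- vector, so the missing `nil` case is verified by the type checker rather
-- than left as a potential runtime crash.
def Vec.head : Vec α (n + 1) → α
  | .cons x _ => x
```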

Past projects

Understanding generative feedback in vision

Within the past decade, neuroscientists have advanced experimental methods significantly. With optogenetics, calcium imaging, and kilo-neuron electrode arrays, it is possible to record from and perturb neural systems at the level of individual cells. In addition, organizations such as the Allen Institute for Brain Science and the Janelia Research Campus provide massive experimental datasets. These datasets, which take thousands of person-hours to collect, contain a wealth of information, from visual processing to whole-brain connectomics of the fly. Statistical analysis has therefore become the bottleneck to understanding the computational principles these experiments describe. By developing new statistical methods, I can interpret these experiments in a verifiable way. As an example, I am currently working with neuroscientists and machine learning scientists on exact system identification of the retina: by parameterizing the retina as a ReLU neural network, we can provably recover its exact synaptic connectivity from only the input-output dataset generated by visual stimuli.
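The parameterization itself is simple; the PyTorch sketch below shows its shape. (The architecture sizes, synthetic data, and training loop are illustrative assumptions, and the exact-recovery guarantee is the subject of the research, not shown here.)

```python
import torch
import torch.nn as nn

# Parameterize the retina as a one-hidden-layer ReLU network: the first
# layer's weights play the role of synaptic connectivity from photoreceptor
# inputs to interneurons, the second layer's the readout to ganglion cells.
class RetinaModel(nn.Module):
    def __init__(self, n_inputs: int, n_hidden: int, n_outputs: int):
        super().__init__()
        self.hidden = nn.Linear(n_inputs, n_hidden)
        self.readout = nn.Linear(n_hidden, n_outputs)

    def forward(self, stimulus: torch.Tensor) -> torch.Tensor:
        return self.readout(torch.relu(self.hidden(stimulus)))

# Fit the model to an input-output dataset of visual stimuli and responses.
# (Synthetic data stands in for recordings; sizes are arbitrary.)
stimuli = torch.randn(4096, 64)
responses = torch.randn(4096, 8)  # placeholder for recorded ganglion output

model = RetinaModel(n_inputs=64, n_hidden=32, n_outputs=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(1000):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(stimuli), responses)
    loss.backward()
    optimizer.step()

# After training, model.hidden.weight is the estimate of the connectivity.
```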

In addition to developing statistical methods, I use machine learning tools to model various neural systems. By modeling the visual pathways with a deep convolutional neural network, it is possible to characterize feedback and lateral connections as a recurrent neural network. While these overparameterized networks do not provide a one-to-one mapping between biological and artificial computation, they offer cognitive-level insights into visual computation in neural systems. Together with neuroscientists and machine learning scientists, I am currently developing models of the visual pathways based on generative neural networks.
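The PyTorch sketch below shows the general shape of such a model; it is an illustrative feedback loop in the style of predictive coding, not the architecture of the publication below. A feedforward encoder produces a latent code, a generative decoder feeds a reconstruction back, and the estimate of the input is refined over a few recurrent iterations.

```python
import torch
import torch.nn as nn

class GenerativeFeedbackNet(nn.Module):
    """Feedforward recognition plus generative feedback, unrolled in time."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # bottom-up recognition pathway
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(  # top-down generative feedback
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor, steps: int = 4) -> torch.Tensor:
        estimate = x
        for _ in range(steps):
            latent = self.encoder(estimate)      # bottom-up pass
            feedback = self.decoder(latent)      # top-down reconstruction
            estimate = 0.5 * x + 0.5 * feedback  # blend input with feedback
        return self.encoder(estimate)

net = GenerativeFeedbackNet()
features = net(torch.randn(1, 1, 28, 28))
```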

Selected Publications

Yujia Huang, James Gornet, Sihui Dai, Zhiding Yu, Doris Y. Tsao, Anima Anandkumar. “Neural Networks with Recurrent Generative Feedback,” in Advances in Neural Information Processing Systems 33, 2020. [arXiv]

Neuronal anatomy reconstruction and analysis

Reconstructing multiple molecularly defined neurons from individual brains, and across multiple brain regions, can reveal organizational principles of the nervous system. Understanding how neurons of similar and different types position themselves relative to one another requires reconstructing somatodendritic morphologies across the same brain. While electron microscopy provides dense neuronal reconstructions that could answer these questions, its small field of view has limited its use for studying brain-wide organization. Oblique light-sheet microscopy, which can image thousands of individual neurons at submicron resolution, enables a fast, versatile approach to this problem. Moreover, by registering reconstructed neurons to a common coordinate framework such as the Allen Mouse Brain Atlas, it is possible to study neuronal structure and organization across many brain regions and neuronal cell classes.

However, scaling neuronal reconstruction to such large datasets is not trivial. The gold standard of manual reconstruction is a tedious and labor-intensive process, with a single neuron taking a few hours. Many automated methods exist for neuronal reconstruction from fluorescence microscopy images; however, these algorithms can have slow reconstruction speeds, make topological errors despite high voxel-wise accuracy, and remain vulnerable to rare but important image artifacts. To tackle these issues, I implemented a convolutional neural network that reconstructs neurons in light microscopy images. To prevent topological errors in the reconstructions, I developed a connectivity-based regularization technique and simulated the image artifacts using data augmentations. For efficient, scalable processing, I designed NeuroTorch, a freely available neuronal reconstruction framework.
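As an example of the artifact simulation, the sketch below corrupts a training volume with two microscopy-like artifacts. (This is a minimal NumPy illustration with made-up artifact types and probabilities, not NeuroTorch's actual augmentation API.)

```python
import numpy as np

def simulate_artifacts(volume: np.ndarray, rng: np.random.Generator,
                       p_missing: float = 0.1, p_occlusion: float = 0.1):
    """Corrupt a 3D image volume (z, y, x) with microscopy-like artifacts."""
    volume = volume.copy()
    # Missing section: an entire z-slice lost during imaging or stitching.
    if rng.random() < p_missing:
        volume[rng.integers(volume.shape[0])] = 0.0
    # Occlusion: a bright blob, e.g. a fluorescent bead or a dust particle.
    if rng.random() < p_occlusion:
        z, y, x = (rng.integers(s) for s in volume.shape)
        volume[max(z-2, 0):z+2, max(y-8, 0):y+8, max(x-8, 0):x+8] = volume.max()
    return volume

rng = np.random.default_rng(0)
augmented = simulate_artifacts(np.random.rand(32, 128, 128), rng)
```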

Selected Publications

James Gornet, Kannan Umadevi Venkataraju, Arun Narasimhan, Nicholas Turner, H. Sebastian Seung, Pavel Osten, Uygar Sümbül. “Reconstructing neuronal anatomy from whole-brain images,” in 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI), 2019. [arXiv]

James Gornet, Nicholas Turner, Kisuk Lee. “NeuroTorch: A framework for reconstructing neuronal morphology from optical microscopy images,” 2018. [GitHub]

Kannan Umadevi Venkataraju, James Gornet, Gayathri Murugaiyan, Zhuhao Wu, Pavel Osten. “Development of brain templates for whole brain atlases,” Proc. SPIE 10865, Neural Imaging and Sensing, 2019, 1086511. [SPIE]