The broad goal of our lab is to understand the neural mechanisms that translate sensory inputs into behavioral outputs. Although genes and neural circuits have traditionally been studied in isolation, one at a time, recent advances in data acquisition (e.g., RNA-Seq, dense neural recordings) and scientific computation have enabled the construction of rigorous quantitative frameworks that describe the overall architecture of, for example, the transcriptome of a particular cell type or population activity in a given brain region. However, we lack a similar quantitative framework for characterizing the output of the brain: behavior itself. The ability to objectively and comprehensively characterize patterns of action is essential if we are ever to understand the myriad interrelationships between genes, neural activity and behavior.

To this end, our lab has recently developed a new method for characterizing the underlying structure of spontaneous or stimulus-evoked mouse behavior (Wiltschko et al., Neuron 88:1-15, 2015). Our approach was inspired by the groundbreaking work of ethologists like Niko Tinbergen and Konrad Lorenz, who posited that complex patterns of behavior are composed of atomic behavioral modules that are placed into specific sequences by the brain to generate meaningful action. To identify potential behavioral modules in rodents, we image moving mice with high spatial and temporal fidelity using a 3D infrared imaging technique that requires only a single camera. By analyzing data from this 3D imaging stream using recently developed techniques in computational inference, we have found that mouse behavior can be effectively described as a series of reused and stereotyped modules with defined transition probabilities. By analogy to birdsong, we refer to each of these units of behavior as a behavioral “syllable,” which is placed into sequences by the brain using the predictable rules of a behavioral “grammar.” Each behavioral syllable is a 3D motif of behavior (e.g., a turn to the right, a pause, a head bob to the left) that the mouse reuses repeatedly during complex behavior; the number and form of these syllables are specified by experimental context and genetics. For example, in a typical open field assay a C57BL/6 mouse will express about 60 syllables whose average duration is approximately 300 milliseconds.

By using this combined 3D imaging and computational modeling technique, we can automatically identify both predicted and surprising alterations in behavior induced by a variety of experimental manipulations, ranging from changes in the sensory environment to activity changes within a specific neural circuit. This work demonstrates that mouse body language is built from identifiable components and is organized in a predictable fashion; deciphering this language establishes an objective framework for characterizing the influence of environmental cues, genes and neural activity on behavior.
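To make the modeling idea concrete, the sketch below shows one way a segmentation of this kind could be approximated with off-the-shelf tools: depth frames are compressed with PCA, and a Gaussian hidden Markov model (via the hmmlearn package) assigns each frame a discrete state, with the fitted transition matrix playing the role of the grammar. This is only a simplified illustration, not the AR-HMM and Bayesian inference machinery described in Wiltschko et al.; the file name, component counts and frame rate are placeholders.

# Illustrative sketch only; the published approach fits an autoregressive HMM
# with Bayesian inference. All variable names and parameters are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM

# depth_frames: (n_frames, height, width) array of mouse-centered depth images
# acquired at ~30 Hz from a single overhead depth camera (placeholder file).
depth_frames = np.load("example_depth_frames.npy")
n_frames = depth_frames.shape[0]

# Compress each frame to a low-dimensional pose representation.
pca = PCA(n_components=10)
poses = pca.fit_transform(depth_frames.reshape(n_frames, -1))

# Fit an HMM whose hidden states stand in for behavioral syllables.
n_syllables = 60  # on the order of what an open field session yields
model = GaussianHMM(n_components=n_syllables, covariance_type="diag", n_iter=50)
model.fit(poses)

# Discrete syllable label for every frame, and the learned "grammar":
# the matrix of transition probabilities between syllables.
syllable_labels = model.predict(poses)
grammar = model.transmat_

# Average syllable duration in milliseconds, from changepoints in the labels.
frame_rate_hz = 30.0
n_bouts = 1 + np.sum(np.diff(syllable_labels) != 0)
print("mean syllable duration (ms):", 1000.0 * n_frames / (n_bouts * frame_rate_hz))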

Ongoing projects in the lab include using this method to characterize the structure of odor-driven innate behaviors, interrogative experiments (involving both recordings and neural manipulations) that correlate neural activity with observed behavioral syllables and grammar, and additional technical development aimed at improving both the spatiotemporal resolution of the imaging and the underlying models that infer structure in the data. We are also broadening our inferential framework to capture joint structure in behavioral data; this will enable rigorous exploration of joint dependencies in the behavior of two interacting mice, and the inference of causal relationships between activity in populations of neurons and specific behavioral syllables.

Requests for code: Please email dattalab@hms.harvard.edu with your full contact information and a valid GitHub account name. You will be sent a Material Transfer Agreement from Harvard Medical School; upon successful completion of the MTA you will be added to a private GitHub repository that contains the image extraction, inference and modeling code described in Wiltschko et al., along with example 3D imaging data to facilitate exploration.

Note that we do not currently have a distributable “pipeline” or “system” that enables users to simply plug in a depth camera, install some software, and generate segmentations of behavior; as a practical matter, applying our approach to newly acquired datasets requires significant prior computer science experience, particularly with methods in inferential statistics. The pre-processing code has been optimized for the Kinect for Windows and will not work with alternative depth cameras without significant modification. The code we are posting (as detailed in the MTA) is research-grade and is being made available as-is; we cannot provide support for specific use cases.
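For readers wondering what such pre-processing involves, the fragment below is a rough, hypothetical illustration (it is not the distributed code): it estimates a static background from overhead depth frames, thresholds the height above the floor, and crops a box around the largest connected blob. The file names, noise thresholds and crop sizes are assumptions, and these are exactly the kinds of parameters that are specific to a given depth camera such as the Kinect for Windows.

# Rough illustration of camera-specific depth pre-processing; not the lab's code.
import numpy as np
from scipy import ndimage

raw_frames = np.load("raw_depth_frames.npy")      # (n_frames, height, width), in mm
background = np.median(raw_frames[:300], axis=0)  # empty-arena depth estimate

def extract_mouse(frame, background, min_height_mm=15, crop_px=80):
    """Return a crop centered on the largest above-floor blob (the mouse)."""
    height_above_floor = background - frame       # mouse is closer to the camera than the floor
    mask = height_above_floor > min_height_mm     # suppress sensor noise near the floor
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    largest = 1 + np.argmax(ndimage.sum(mask, labels, range(1, n + 1)))
    cy, cx = ndimage.center_of_mass(labels == largest)
    cy, cx = int(cy), int(cx)
    padded = np.pad(height_above_floor * (labels == largest), crop_px)
    return padded[cy:cy + 2 * crop_px, cx:cx + 2 * crop_px]

crops = [extract_mouse(f, background) for f in raw_frames]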

We are acutely aware of the need for a simple plug-and-play system that would allow our method to be implemented broadly across laboratories, and we are working actively with software engineers to develop a robust and flexible system that will enable most researchers to apply this method easily. As progress is made in this regard, additional information will be posted to this page.

Last modified: December 1, 2015