One broad goal of our lab is to understand the neural mechanisms that translate sensory inputs into behavioral outputs. Although traditionally genes and neural circuits are studied in isolation, one at a time, recent advances in data acquisition and scientific computation have enabled the construction of rigorous quantitative frameworks that describe the overall architecture of, for example, the transcriptome of a particular cell type or the population activity of a given brain region. However, we lack a similar quantitative framework for characterizing the output of the brain: behavior itself. The ability to objectively and comprehensively characterize patterns of action is essential if we are ever to understand the myriad interrelationships between genes, neural activity and behavior.
To this end, our lab has recently developed a new method for characterizing the underlying structure of spontaneous or stimulus-evoked mouse behavior, which we refer to as Motion Sequencing (MoSeq) (Wiltschko et al.). MoSeq was inspired by groundbreaking work done by ethologists like Niko Tinbergen and Konrad Lorenz, who posited that complex patterns of behavior are composed of atomic behavioral modules that are placed into specific sequences by the brain to generate meaningful action. To identify potential behavioral modules in rodents, we image moving mice with high spatial and temporal fidelity using a 3D infrared imaging technique that requires only a single camera. By analyzing data from this 3D imaging stream using recently developed techniques in computational inference, we find that mouse behavior can be effectively described as a series of reused and stereotyped modules with defined transition probabilities. By analogy to birdsong, we refer to each of these units of behavior as a behavioral “syllable,” which is placed into sequences by the brain using the predictable rules of a behavioral “grammar.” Each behavioral syllable is a 3D motif of behavior (e.g., a turn to the right, a pause, a headbob to the left) that the mouse reuses repeatedly during complex behavior; the number and form of these syllables are specified by experimental context and genetics, and are identified by MoSeq without human supervision, based upon regularities in the data. By using this combined 3D imaging and computational modeling technique, we can automatically identify both predicted and surprising alterations in behavior induced by a variety of experimental manipulations, ranging from changes in the sensory environment to activity changes within a specific neural circuit.
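To build intuition for what a behavioral “grammar” means, the sketch below walks a toy first-order Markov transition matrix over a handful of invented syllable labels. This is purely illustrative: the syllable names and probabilities here are made up, and the actual MoSeq model infers syllables and their transition statistics directly from 3D imaging data using far richer inference machinery than this toy.

```python
import random

# Toy "grammar": a first-order Markov transition matrix over three
# hypothetical syllables. Labels and probabilities are invented for
# illustration only; MoSeq learns these from data, unsupervised.
TRANSITIONS = {
    "pause":        {"pause": 0.2, "turn_right": 0.5, "headbob_left": 0.3},
    "turn_right":   {"pause": 0.6, "turn_right": 0.1, "headbob_left": 0.3},
    "headbob_left": {"pause": 0.7, "turn_right": 0.2, "headbob_left": 0.1},
}

def sample_sequence(start, length, rng=random):
    """Sample a syllable sequence by walking the transition matrix."""
    seq = [start]
    for _ in range(length - 1):
        probs = TRANSITIONS[seq[-1]]
        nxt = rng.choices(list(probs), weights=list(probs.values()))[0]
        seq.append(nxt)
    return seq

print(sample_sequence("pause", 10))
```

In this framing, characterizing behavior means estimating both the syllable inventory (the rows of the matrix) and the transition probabilities (the entries), rather than hand-scoring individual actions.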
Mouse body language is therefore built from identifiable components and is organized in a predictable fashion; the ability of MoSeq to decipher this language establishes an objective framework for characterizing the influence of environmental cues, genes and neural activity on behavior.
Recently we have rendered MoSeq compatible with simultaneous tethered neural recordings using silicon probes, fiber photometry and miniscopes (Markowitz et al.). This advance has allowed us to identify specific neural representations for both syllables and grammar within corticostriatal circuits, and further to demonstrate that these circuits are required to string together syllables into appropriate sequences that encode both spontaneous and odor-evoked behaviors. These specific experiments demonstrate that action selection occurs on a moment-to-moment basis to enable animals to adapt to the world, and set the stage for future experiments that take advantage of MoSeq to decipher how motor-related circuits “decode” activity in sensory circuits to compose meaningful context-specific behavioral sequences.
Ongoing projects in the lab include using MoSeq to characterize the structure of solitary and social behaviors, to understand how syllables and grammar are influenced by sensory information, to reveal how unrestrained behavior might influence sensory representations, and to measure behavioral variability to better understand how action evolves during learning. We are also broadening the inferential framework we use to capture joint structure in behavioral data; this approach will enable rigorous exploration of joint dependencies, which will be crucial for understanding how behavior is shaped in response to either internal or external state. Finally, we are going closed-loop: triggering or inhibiting neural activity based upon the expression of a particular syllable or syllable sequence, to falsify theories about the relationship between brain and behavior.
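The closed-loop idea can be sketched as a simple pattern-watcher: scan the stream of decoded syllable labels and fire a trigger whenever a target sequence is completed. Everything here is a hypothetical stand-in (the stream, the labels, and the trigger callback are invented for illustration, not part of any MoSeq release); the real system would decode syllables and drive stimulation hardware in real time.

```python
def closed_loop(syllable_stream, target, on_trigger):
    """Call on_trigger() each time `target` appears as a contiguous
    subsequence of the incoming syllable label stream.

    Hypothetical sketch: in a real rig, `syllable_stream` would be
    decoded online and `on_trigger` would gate opto/chemogenetic
    stimulation hardware.
    """
    window = []          # sliding window of the most recent labels
    triggers = 0
    for syllable in syllable_stream:
        window.append(syllable)
        if len(window) > len(target):
            window.pop(0)            # keep window at target length
        if window == list(target):
            on_trigger()
            triggers += 1
    return triggers

# Usage: count completions of the (invented) sequence pause -> turn.
stream = ["pause", "turn", "groom", "pause", "turn", "pause"]
n = closed_loop(stream, ("pause", "turn"), on_trigger=lambda: None)
print(n)  # → 2
```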
Requests for code: Please email email@example.com with your full contact information and a valid GitHub account name. You will be sent a Material Transfer Agreement from Harvard Medical School; upon successful completion of the MTA you will be added to a private GitHub repository that contains the image extraction, inference and modeling code described in Markowitz et al., along with example 3D imaging data to facilitate exploration. Note that the new codebase from Markowitz et al. supersedes that from Wiltschko et al.; we encourage any users of the old codebase (and the original Kinect) to switch, as the new code is better documented and the Kinect 2 has higher resolution and better signal-to-noise.
Note that we do not currently have a distributable “pipeline” or a “system” that enables users to simply plug in a depth camera, install some software, and generate segmentations of behavior; from a practical perspective, applying our approach to newly acquired datasets requires significant prior computer science experience, particularly with approaches in inferential statistics. The pre-processing code has been optimized for the Kinect (in Wiltschko et al.) and for the Kinect 2 (in Markowitz et al.), and will not work with alternative depth cameras without significant modification. The code we are posting (as detailed in the MTA) is research-grade and is being made available as-is; we cannot provide support for specific use cases.
We are acutely aware of the need for a simple plug-and-play system to implement our method broadly across laboratories, and are working actively to obtain resources that would enable us to develop a robust and flexible system that will enable most researchers to easily apply this method. As progress is made in this regard, additional information will be posted to this page.
Last modified: August 17, 2018