The acquisition and execution of motor skills are mediated by a distributed motor network, spanning cortical and subcortical brain areas. The sensorimotor striatum is an important cog in this network, yet the roles of its two main inputs, from motor cortex and thalamus, remain largely unknown. To address this, we silenced the inputs in rats trained on a task that results in highly stereotyped and idiosyncratic movement patterns. While striatal-projecting motor cortex neurons were critical for learning these skills, silencing this pathway after learning had no effect on performance. In contrast, silencing striatal-projecting thalamus neurons disrupted the execution of the learned skills, causing rats to revert to species-typical pressing behaviors and preventing them from relearning the task. These results show distinct roles for motor cortex and thalamus in the learning and execution of motor skills and suggest that their interaction in the striatum underlies experience-dependent changes in subcortical motor circuits.
Understanding the biological basis of social and collective behaviors in animals is a key goal of the life sciences, and may yield important insights for engineering intelligent multi-agent systems. A critical step in understanding the mechanisms underlying social behaviors is a precise readout of the full 3D pose of interacting animals. While approaches for multi-animal pose estimation are beginning to emerge, they remain challenging to compare due to the lack of standardized benchmark datasets for multi-animal 3D pose estimation. Here we introduce the PAIR-R24M (Paired Acquisition of Interacting Rats) dataset for multi-animal 3D pose estimation, which contains 21.5 million frames of RGB video and 3D ground-truth motion capture of dyadic interactions in laboratory rats. PAIR-R24M contains data from 18 distinct pairs of rats across diverse behaviors, from 30 different viewpoints. The data are temporally contiguous and annotated with 11 behavioral categories and 3 interaction categories, using a multi-animal extension of a recently developed behavioral segmentation approach. We used a novel multi-animal version of the recently published DANNCE network to establish a strong baseline for multi-animal 3D pose estimation without motion capture. These recordings are of sufficient resolution to allow us to examine cross-pair differences in social interactions and to identify conserved patterns of social interaction across rats.
The basal ganglia are known to influence action selection and modulation of movement vigor, but whether and how they contribute to specifying the kinematics of learned motor skills is not understood. Here, we probe this question by recording and manipulating basal ganglia activity in rats trained to generate complex task-specific movement patterns with rich kinematic structure. We find that the sensorimotor arm of the basal ganglia circuit is crucial for generating the detailed movement patterns underlying the acquired motor skills. Furthermore, the neural representations in the striatum, and the control function they subserve, do not depend on inputs from the motor cortex. Taken together, these results extend our understanding of the basal ganglia by showing that they can specify and control the fine-grained details of learned motor skills through their interactions with lower-level motor circuits.
Comprehensive descriptions of animal behavior require precise three-dimensional (3D) measurements of whole-body movements. Although two-dimensional approaches can track visible landmarks in restrictive environments, performance drops in freely moving animals, due to occlusions and appearance changes. Therefore, we designed DANNCE to robustly track anatomical landmarks in 3D across species and behaviors. DANNCE uses projective geometry to construct inputs to a convolutional neural network that leverages learned 3D geometric reasoning. We trained and benchmarked DANNCE using a dataset of nearly seven million frames that relates color videos and rodent 3D poses. In rats and mice, DANNCE robustly tracked dozens of landmarks on the head, trunk, and limbs of freely moving animals in naturalistic settings. We extended DANNCE to datasets from rat pups, marmosets, and chickadees, and demonstrated quantitative profiling of behavioral lineage during development.
Animal pose estimation from video data is an important step in many biological studies, but current methods struggle in complex environments where occlusions are common and training data is scarce. Recent work has demonstrated improved accuracy with deep neural networks, but these methods often do not incorporate prior distributions that could improve localization. Here we present GIMBAL: a hierarchical von Mises-Fisher-Gaussian model that improves upon deep networks' estimates by leveraging spatiotemporal constraints. The spatial constraints come from the animal's skeleton, which induces a curved manifold of keypoint configurations. The temporal constraints come from the postural dynamics, which govern how angles between keypoints change over time. Importantly, the conditional conjugacy of the model permits simple and efficient Bayesian inference algorithms. We assess the model on a unique experimental dataset with video of a freely behaving rodent from multiple viewpoints and ground-truth motion capture data for 20 keypoints. GIMBAL extends existing techniques, and in doing so offers more accurate estimates of keypoint positions, especially in challenging contexts.
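The skeletal (spatial) constraint described above can be illustrated with a minimal sketch: noisy 3D keypoint estimates are projected back onto the manifold defined by fixed bone lengths, by rescaling each child keypoint's offset from its parent. The three-keypoint chain, bone lengths, and function names below are illustrative assumptions for exposition; GIMBAL itself performs full Bayesian inference over von Mises-Fisher-Gaussian distributions rather than this deterministic projection.

```python
import numpy as np

# Hypothetical 3-keypoint kinematic chain (e.g., hip -> knee -> ankle)
# with fixed parent-to-child bone lengths; units are illustrative.
BONE_LENGTHS = [0.9, 1.1]

def project_to_skeleton(keypoints, bone_lengths):
    """Project noisy 3D keypoint estimates onto the bone-length manifold
    by rescaling each child's offset from its parent to the known length."""
    constrained = [np.asarray(keypoints[0], dtype=float)]  # root kept as-is
    for child, length in zip(keypoints[1:], bone_lengths):
        offset = np.asarray(child, dtype=float) - constrained[-1]
        direction = offset / np.linalg.norm(offset)  # keep estimated direction
        constrained.append(constrained[-1] + length * direction)
    return np.array(constrained)

noisy = [(0.0, 0.0, 0.0), (0.5, 0.6, 0.1), (1.2, 0.9, -0.2)]
clean = project_to_skeleton(noisy, BONE_LENGTHS)
# Each bone in `clean` now has exactly the specified length.
```

The key design point this sketch captures is that the skeleton removes degrees of freedom: only keypoint directions remain free, which is why GIMBAL models them with von Mises-Fisher distributions on the sphere.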
In mammalian animal models, high-resolution kinematic tracking is restricted to brief sessions in constrained environments, limiting our ability to probe naturalistic behaviors and their neural underpinnings. To address this, we developed CAPTURE (Continuous Appendicular and Postural Tracking Using Retroreflector Embedding), a behavioral monitoring system that combines motion capture and deep learning to continuously track the 3D kinematics of a rat’s head, trunk, and limbs for week-long timescales in freely behaving animals. CAPTURE realizes 10- to 100-fold gains in precision and robustness compared with existing convolutional network approaches to behavioral tracking. We demonstrate CAPTURE’s ability to comprehensively profile the kinematics and sequential organization of natural rodent behavior, its variation across individuals, and its perturbation by drugs and disease, including identifying perseverative grooming states in a rat model of fragile X syndrome. CAPTURE significantly expands the range of behaviors and contexts that can be quantitatively investigated, opening the door to a new understanding of natural behavior and its neural basis.
To generate adaptive behaviors, animals must learn from their interactions with the environment. Describing the algorithms that govern this learning process and how they are implemented in the brain is a major goal of neuroscience. Careful and controlled observations of animal learning by Thorndike, Pavlov and others, now more than a century ago, identified intuitive rules by which animals (including humans) can learn from their experiences by associating sensory stimuli and motor actions with rewards. But going from explaining learning in simple paradigms to deciphering how complex problems are solved in rich and dynamic environments has proven difficult (Figure 1). Recently, this effort has received help from computer scientists and engineers hoping to emulate intelligent adaptive behaviors in machines. Inspired by the animal behavior literature, pioneers in artificial intelligence developed a rigorous and mathematically principled framework within which reward-based learning can be formalized and studied. Not only has the field of reinforcement learning become a boon to machine learning and artificial intelligence, it has also provided a theoretical foundation for biologists interested in deciphering how the brain implements reinforcement learning algorithms. The ability of reinforcement learning agents to solve complex, high-dimensional learning problems has been dramatically enhanced by using deep neural networks (deep reinforcement learning, Figure 1). Indeed, aided by ever-increasing computational resources, deep reinforcement learning algorithms can now outperform human experts on a host of well-defined complex tasks …
Though the temporal precision of neural computation has been studied intensively, a data-driven determination of this precision remains a fundamental challenge. Reproducible spike patterns may be obscured on single trials by uncontrolled temporal variability in behavior and cognition and may not be time locked to measurable signatures in behavior or local field potentials (LFP). To overcome these challenges, we describe a general-purpose time warping framework that reveals precise spike-time patterns in an unsupervised manner, even when these patterns are decoupled from behavior or are temporally stretched across single trials. We demonstrate this method across diverse systems: cued reaching in nonhuman primates, motor sequence production in rats, and olfaction in mice. This approach flexibly uncovers diverse dynamical firing patterns, including pulsatile responses to behavioral events, LFP-aligned oscillatory spiking, and even unanticipated patterns, such as 7 Hz oscillations in rat motor cortex that are not time locked to measured behaviors or LFP.
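The core intuition behind time warping — that trial-to-trial temporal jitter hides reproducible spike patterns, and that aligning trials to a common template reveals them — can be sketched with a shift-only toy example. Everything below is synthetic and illustrative: the data, parameters, and function names are assumptions, and this plain correlation-based alignment is far simpler than the unsupervised warping framework the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a firing-rate bump that jitters in time across trials.
T, n_trials = 100, 20
template = np.exp(-0.5 * ((np.arange(T) - 50) / 5.0) ** 2)
true_shifts = rng.integers(-10, 11, size=n_trials)
trials = np.array([np.roll(template, s) for s in true_shifts])

def align_by_shift(trials, max_shift=15):
    """Shift-only time warping: align each trial to the trial average
    by the integer shift that maximizes its correlation with the mean."""
    target = trials.mean(axis=0)
    aligned = []
    for trial in trials:
        best = max(range(-max_shift, max_shift + 1),
                   key=lambda s: float(np.dot(np.roll(trial, -s), target)))
        aligned.append(np.roll(trial, -best))
    return np.array(aligned)

aligned = align_by_shift(trials)
# Across-trial variance drops sharply after alignment, exposing the
# underlying pattern that raw trial averaging would have smeared out.
```

In the raw data, averaging across trials broadens the bump; after alignment the shared temporal structure stands out, which is the effect the full framework achieves for spike trains without any behavioral alignment signal.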
Parallel developments in neuroscience and deep learning have led to mutually productive exchanges, pushing our understanding of real and artificial neural networks in sensory and cognitive systems. However, this interaction between fields is less developed in the study of motor control. In this work, we develop a virtual rodent as a platform for the grounded study of motor activity in artificial models of embodied control. We then use this platform to study motor activity across contexts by training a model to solve four complex tasks. Using methods familiar to neuroscientists, we describe the behavioral representations and algorithms employed by different layers of the network using a neuroethological approach to characterize motor activity relative to the rodent's behavior and goals. We find that the model uses two classes of representations which respectively encode the task-specific behavioral strategies and task-invariant behavioral kinematics. These representations are reflected in the sequential activity and population dynamics of neural subpopulations. Overall, the virtual rodent facilitates grounded collaborations between deep reinforcement learning and motor neuroscience.
Trial-to-trial movement variability can both drive motor learning and interfere with expert performance, suggesting benefits of regulating it in context-specific ways. Here we address whether and how the brain regulates motor variability as a function of performance by training rats to execute ballistic forelimb movements for reward. Behavioral datasets comprising millions of trials revealed that motor variability is regulated by two distinct processes. A fast process modulates variability as a function of recent trial outcomes, increasing it when performance is poor and vice versa. A slower process tunes the gain of the fast process based on the uncertainty in the task's reward landscape. Simulations demonstrated that this regulation strategy optimizes reward accumulation over a wide range of time horizons, while also promoting learning. Our results uncover a sophisticated algorithm implemented by the brain to adaptively regulate motor variability to improve task performance.
The development of increasingly sophisticated methods for recording and manipulating neural activity is revolutionizing neuroscience. By probing how activity patterns in different types of neurons and circuits contribute to behavior, these tools can help inform mechanistic models of brain function and explain the roles of distinct circuit elements. However, in systems where functions are distributed over large networks, interpreting causality experiments can be challenging. Here we review common assumptions underlying circuit manipulations in behaving animals and discuss the strengths and limitations of different approaches.
Addressing how neural circuits underlie behavior is routinely done by measuring electrical activity from single neurons in experimental sessions. While such recordings yield snapshots of neural dynamics during specified tasks, they are ill-suited for tracking single-unit activity over longer timescales relevant for most developmental and learning processes, or for capturing neural dynamics across different behavioral states. Here we describe an automated platform for continuous long-term recordings of neural activity and behavior in freely moving rodents. An unsupervised algorithm identifies and tracks the activity of single units over weeks of recording, dramatically simplifying the analysis of large datasets. Months-long recordings from motor cortex and striatum made and analyzed with our system revealed remarkable stability in basic neuronal properties, such as firing rates and inter-spike interval distributions. Interneuronal correlations and the representation of different movements and behaviors were similarly stable. This establishes the feasibility of high-throughput long-term extracellular recordings in behaving animals.
Trial-to-trial variability in the execution of movements and motor skills is ubiquitous and widely considered to be the unwanted consequence of a noisy nervous system. However, recent studies have suggested that motor variability may also be a feature of how sensorimotor systems operate and learn. This view, rooted in reinforcement learning theory, equates motor variability with purposeful exploration of motor space that, when coupled with reinforcement, can drive motor learning. Here we review studies that explore the relationship between motor variability and motor learning in both humans and animal models. We discuss neural circuit mechanisms that underlie the generation and regulation of motor variability and consider the implications that this work has for our understanding of motor learning. Published in the Annual Review of Neuroscience, Volume 40 (2017).
Trial-and-error learning requires evaluating variable actions and reinforcing successful variants. In songbirds, vocal exploration is induced by LMAN, the output of a basal ganglia-related circuit that also contributes a corrective bias to the vocal output. This bias is gradually consolidated in RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using stochastic gradient descent, we derive how the activity in 'tutor' circuits (e.g., LMAN) should match plasticity mechanisms in 'student' circuits (e.g., RA) to achieve efficient learning. We further describe a reinforcement learning framework through which the tutor can build its teaching signal. We show that mismatches between the tutor signal and the plasticity mechanism can impair learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning.
Rapid and reversible manipulations of neural activity in behaving animals are transforming our understanding of brain function. An important assumption underlying much of this work is that evoked behavioural changes reflect the function of the manipulated circuits. We show that this assumption is problematic because it disregards indirect effects on the independent functions of downstream circuits. Transient inactivations of motor cortex in rats and nucleus interface (Nif) in songbirds severely degraded task-specific movement patterns and courtship songs, respectively, which are learned skills that recover spontaneously after permanent lesions of the same areas. We resolve this discrepancy in songbirds, showing that Nif silencing acutely affects the function of HVC, a downstream song control nucleus. Paralleling song recovery, the off-target effects resolved within days of Nif lesions, a recovery consistent with homeostatic regulation of neural activity in HVC. These results have implications for interpreting transient circuit manipulations and for understanding recovery after brain lesions.
Motor cortex is widely believed to underlie the acquisition and execution of motor skills, but its contributions to these processes are not fully understood. One reason is that studies on motor skills often conflate motor cortex's established role in dexterous control with roles in learning and producing task-specific motor sequences. To dissociate these aspects, we developed a motor task for rats that trains spatiotemporally precise movement patterns without requirements for dexterity. Remarkably, motor cortex lesions had no discernible effect on the acquired skills, which were expressed in their distinct pre-lesion forms on the very first day of post-lesion training. Motor cortex lesions prior to training, however, rendered rats unable to acquire the stereotyped motor sequences required for the task. These results suggest a remarkable capacity of subcortical motor circuits to execute learned skills and a previously unappreciated role for motor cortex in "tutoring" these circuits during learning.
Motor skill learning is characterized by improved performance and reduced motor variability. The neural mechanisms that couple skill level and variability, however, are not known. The zebra finch, a songbird, presents a unique opportunity to address this question because production of learned song and induction of vocal variability are instantiated in distinct circuits that converge on a motor cortex analogue controlling vocal output. To probe the interplay between learning and variability, we made intracellular recordings from neurons in this area, characterizing how their inputs from the functionally distinct pathways change throughout song development. We found that inputs that drive stereotyped song patterns are strengthened and pruned, while inputs that induce variability remain unchanged. A simple network model showed that strengthening and pruning of action-specific connections reduces the sensitivity of motor control circuits to variable input and neural 'noise'. This identifies a simple and general mechanism for learning-related regulation of motor variability.