Strandquist, a graduate student working in the labs of CNT members Rajesh Rao and Bing Brunton, is starting a new series of posts on our Engage and Enable blog. She will explain her research and offer insight into the process for aspiring engineers and scientists.
Hello! I am a second-year Ph.D. student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, and I am fortunate to work with CNT Co-Director Rajesh Rao and CNT member Bingni Brunton, whose work includes investigating neural processes of natural human behaviors. In subsequent posts I’m going to talk about my experiences getting started in research, in the hope of demystifying the process for anyone who is new. For now, I want to introduce myself and the types of research I am interested in.
I come from a non-traditional background, with a career in the arts prior to pursuing higher education. As an undergraduate studying computer science, I became curious about how psychological processes such as logic and creativity, which to me seemed very different, could be modeled and studied within neuroscience. This sparked my eventual interest in understanding how the human brain works in everyday life. When searching for potential graduate research programs, I focused on professors whose labs conducted interesting research in such areas, leading me to Rao and Brunton. Brunton’s lab in particular fascinated me because of its dataset of long-term intracranial recordings of patients engaging in normal human behaviors while in a hospital for up to ten days. These neural recordings are paired with time-synced video, allowing visual inspection of a patient’s behavior at any time during the hospital stay. This dataset offers the rare opportunity to peek into the brain while patients engage in typical human activities such as conversation, eating, watching television and sleeping.
Studying neural processes for advancing brain-computer interfaces
Strandquist (center) is shown in this September 2019 photo with CNT member Bing Brunton (far right) and other members of the Brunton Lab at the University of Washington.
A compelling motivator for studying the brain is to help people suffering from neurological diseases. Such diseases can damage or cause outright loss of critical functions such as speech or movement. One way to improve the lives of people with such impairments is through the use of brain-computer interfaces (BCIs), which are computational devices designed to interface between the brain and the outside world. BCIs take in neural activity data and translate it into intended actions. This can enable a host of different applications, including gaming, education, security and privacy, and restoration of neurologically impaired abilities. Recent advances include a BCI-powered exoskeleton that helped a paralyzed patient walk. BCIs have enormous potential to impact human quality of life, and the field has only just started to scratch the surface of this potential.
Most BCI literature uses data collected from carefully controlled experiments in the lab. This is useful because it lets researchers focus on the neural activity of interest while minimizing noise. As you might imagine, however, the brain is not a carefully controlled environment in day-to-day life. If we are to see how brains work in the real world, and importantly, if we are to solve real-world problems involving the human brain, it’s essential that we close this technological gap by increasing our understanding of neural processes occurring outside controlled environments. The Brunton lab’s “naturalistic” dataset offers the chance to do just that.
My initial exploration of this data started with isolating moments in time where patients engaged in conversation with family members or doctors. Prior work has shown different regions of the brain are responsible for speaking versus listening, and I wanted to see if this could be detected with machine learning algorithms given our naturalistic data. I discovered that a simple Random Forest classifier could be trained to discriminate between these two behaviors given this data. Upon probing which areas of the brain yielded the most important features for the classifier’s learning, some subjects’ data showed Broca’s area (a part of the cortex known to be active in speech production) as being particularly useful. While this early result was encouraging, there was an insufficient amount of data to draw solid conclusions — I needed to scale up. This brought me to a difficulty encountered by many in data science, that of data labels.
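To make the setup concrete, here is a minimal sketch of this kind of decoding, using scikit-learn. Everything here is a stand-in: the data is synthetic, the electrode count is arbitrary, and the "informative electrode" is injected by hand rather than measured, so this illustrates the approach, not the actual recordings or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical setup: each row is one time window of neural features
# (e.g., band power per electrode); labels are 0 = listening, 1 = speaking.
rng = np.random.default_rng(0)
n_windows, n_electrodes = 400, 16
X = rng.normal(size=(n_windows, n_electrodes))
y = rng.integers(0, 2, size=n_windows)

# Inject a signal into one synthetic "electrode" so the classes are
# separable, loosely standing in for an informative region like Broca's area.
X[y == 1, 3] += 2.0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)

# Feature importances reveal which electrodes drove the classification,
# analogous to probing which brain areas were most useful.
top_electrode = int(np.argmax(clf.feature_importances_))
```

Probing `feature_importances_` is what lets this kind of model point back at brain regions: the electrode carrying the injected signal should dominate the ranking.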
Unsupervised machine learning
Advances in machine learning are partly due to deep learning models, especially in the realm of supervised deep learning. Supervised learning requires a label to indicate what each data point represents, but labels are generally very time-consuming to collect.
Given this, I stepped back and asked another question: could a model find structure in brain data in an unsupervised way? Unsupervised learning is designed to discover features in data without labels. An additional advantage of this approach is that it makes fewer assumptions about the data, allowing the model to discover whatever hidden structure may exist. This is a useful attribute for variable neural data, where scientists don’t always know what features they should be looking for. Structure found in this way can then be used as feature input for a classifier. Feeding features discovered by an unsupervised method into a classifier is a form of self-supervised learning, which brings me to my current research focus.
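The unsupervised-features-into-a-classifier idea can be sketched in a few lines. Here PCA stands in for the unsupervised step and the data is synthetic, so this is only an illustration of the pattern, not the lab's actual pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Synthetic data with hidden structure correlated with a behavior label.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 50))
y = rng.integers(0, 2, size=300)
X[y == 1, :5] += 1.5  # hidden structure tied to behavior 1

# Step 1 (unsupervised): discover a low-dimensional representation
# without ever seeing the labels.
features = PCA(n_components=10).fit_transform(X)

# Step 2 (supervised, on discovered features): a simple classifier
# maps those features to behaviors using the few labels we have.
clf = LogisticRegression().fit(features, y)
```

The key point is the division of labor: the expensive labels are only needed for the small classifier at the end, not for learning the representation.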
I currently seek to design a self-supervised brain decoder that can identify what behaviors a human is engaged in given naturalistic brain data. There are many ways this idea could be implemented. I was interested in the potential of unsupervised generative models, which simulate how real-world data is produced by learning the joint distribution over all variables that they see. This is particularly useful where labeled data is scarce but unlabeled data is abundant, as in my case. A particular type of generative model is the variational autoencoder (VAE) which requires relatively minimal data for learning. These insights inspired my current endeavor where I am collaborating with neuroscientists in my lab to design a VAE that can find hidden structures within neural data. As an initial test, I am using a publicly available dataset of intracranial activity recorded during experimentally defined motor tasks. My model must first be shown to work on clean data if it is to be successful given noisy real-world data. Preliminary results from this test show that the model is able to find separable features that could correspond to the different motor tasks.
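Two ingredients distinguish a VAE from a plain autoencoder: the reparameterization trick for sampling the latent variable, and a KL-divergence term in its objective that keeps the latent distribution close to a prior. Both can be shown with toy numbers; the values and dimensions below are illustrative and have nothing to do with my model's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose the encoder has produced a mean and log-variance for a
# 2-dimensional latent variable z (values are made up).
mu = np.array([0.5, -0.2])
log_var = np.array([-1.0, -0.5])

# Reparameterization trick: sample z = mu + sigma * eps, where eps is
# standard normal noise. In a real framework this keeps gradients
# flowing through mu and log_var during training.
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# Closed-form KL divergence between N(mu, sigma^2) and the standard
# normal prior, as used in standard VAE implementations. The full loss
# adds a reconstruction term computed from the decoder's output.
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

During training, minimizing the reconstruction term plus this KL term encourages a smooth latent space, which is what makes the learned structure useful as features downstream.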
My next steps include further exploration of model interpretation, to discover whether the features the model learns actually correspond to the tasks known to exist in the data. Once the architecture is sufficiently successful, I will extend it to the naturalistic data. Following this, I am also planning to explore adding a recurrent neural network to my design. Recurrent neural networks use memory to learn temporal correlations, and so can learn the transitions between behaviors. This would allow behaviors to be decoded continuously in real time. A continuous brain decoder of naturalistic brain data will bring the neural engineering community much closer to practical deployment of BCIs.
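The "memory" of a recurrent network lives in a hidden state that is updated at every time step, which is what would let a decoder track transitions between behaviors over a stream of neural features. A toy vanilla-RNN update, with arbitrary sizes and random untrained weights, looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 8, 4  # arbitrary illustrative sizes

# Random, untrained weights; a real model would learn these.
W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))

def rnn_step(x, h):
    """One vanilla RNN update: the new state mixes the current input
    with the previous state, so past inputs influence future outputs."""
    return np.tanh(W_x @ x + W_h @ h)

# Process a stream of feature vectors one time step at a time,
# carrying the hidden state forward — this is what enables
# continuous, real-time decoding.
h = np.zeros(hidden_dim)
for t in range(10):
    x_t = rng.normal(size=input_dim)  # one time step of features
    h = rnn_step(x_t, h)
```

Because the state is updated incrementally, the decoder never needs to see a complete recording before producing an output, which is the property that matters for real-time use.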
This is certainly an ambitious project, and I am grateful to be working with my advisors as well as colleagues who are experts in areas including machine learning, neuroscience, dynamical systems and statistics. Please feel free to contact me with any questions about my project or about the research process in general.