
Neural Modeling and Interface Lab

Building Brain-Like Devices for Cognitive Restoration

We develop biomimetic devices and next-generation neural interfaces to understand brain functions and build cortical prostheses. Our research focuses on investigating cognitive processes during naturalistic behaviors, particularly in regions like the hippocampus, to advance treatments for neurological disorders.

Research Focus Areas

The overarching goal of our research is to build biomimetic devices for treating neurological disorders. Specifically, we develop next-generation modeling and neural interface methodologies to investigate brain function during naturalistic behaviors, to understand how regions such as the hippocampus carry out cognitive functions, and to build cortical prostheses that restore and enhance cognitive functions lost to disease or injury.


Computational Modeling

We combine mechanistic and input-output modeling approaches to build computational models that probe the underlying mechanisms of learning and memory, and we use these models to develop hippocampal prostheses that restore and enhance memory functions lost to disease or injury.


Neural Interface Development

We develop invasive and noninvasive neural interface technologies for chronic, wireless, multi-region, large-scale recording and stimulation of the nervous system in untethered animals. These tools allow us to study neural function during naturalistic behaviors and to build implantable biomimetic neural prostheses.


Current Team Members

Meet the interdisciplinary researchers driving innovation in our lab.

Dr. Daniel Bone

Senior Research Associate

Behavioral Signal Processing

Dr. Emily Mower Provost

Research Scientist

Emotion Recognition

Dr. Victor Martinez

Lead Research Engineer

Speech Processing

Dr. Sarah Taylor

Research Associate working on computer vision and machine learning approaches for facial expression analysis and audiovisual speech processing.

Research Associate

Computer Vision

Michael Chen

PhD candidate researching deep learning approaches for multimodal emotion recognition in human-computer interaction contexts.

PhD Candidate

Multimodal Deep Learning

Sophia Rodriguez

PhD student focusing on natural language processing for mental health applications, developing models to analyze linguistic patterns in clinical contexts.

PhD Student

NLP for Healthcare

James Wilson

PhD candidate working on speech signal processing for children with developmental disorders, developing automated assessment tools.

PhD Candidate

Speech Signal Processing

Aisha Patel

PhD student researching computer vision approaches for human behavior analysis, with applications in healthcare and assistive technologies.

PhD Student

Computer Vision

Featured Publications

Our latest research contributions to the scientific community.

Multimodal Behavior Modeling Research

Multimodal Behavior Modeling for Mental Health Assessment: A Deep Learning Approach

Authors: S. Narayanan, D. Bone, E. Mower Provost, M. Chen, S. Rodriguez

Published in: IEEE Transactions on Affective Computing, 2024

This paper presents a deep learning framework that integrates multimodal behavioral signals (speech, language, and facial expressions) to assess mental health conditions, demonstrating significant improvements over unimodal approaches.

Speech Emotion Recognition Research

Self-Supervised Learning for Speech Emotion Recognition with Limited Labeled Data

Authors: V. Martinez, S. Taylor, J. Wilson, A. Patel, S. Narayanan

Published in: Proceedings of INTERSPEECH, 2024

This work introduces a self-supervised learning approach that leverages large amounts of unlabeled speech data to improve emotion recognition performance when labeled data are scarce.

Transformer-Based Models Research

Transformer-Based Models for Multimodal Fusion in Human Behavior Analysis

Authors: M. Chen, S. Rodriguez, C.-C. Jay Kuo, M. Matarić, S. Narayanan

Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023

This paper presents a transformer-based architecture for fusing multimodal information in human behavior analysis tasks, achieving state-of-the-art performance on several benchmark datasets.

Latest News

Stay updated with the latest developments from our lab.

Lab Research
June 15, 2025

New Grant Awarded for Neural Interface Research

Our lab has received a $2.5M grant from the NIH to develop next-generation neural interfaces for memory restoration.

Conference Presentation
June 10, 2025

Best Paper Award at ICML 2025

Our team's work on self-supervised learning for neural data analysis received the Best Paper Award at ICML 2025.

Student Success
June 5, 2025

PhD Students Win Innovation Awards

Two of our PhD students have been recognized with USC's Outstanding Innovation Awards for their work in neural prosthetics.


Featured In

Our research has been featured in leading academic and media platforms.

USC Viterbi
MIT Technology Review
NBC News
New Scientist
IEEE Xplore
Nature
USC News

Contact Us

Interested in collaborating or learning more about our research? Get in touch with us.


Visit Us

Address

Signal Analysis and Interpretation Laboratory
University of Southern California
3740 McClintock Avenue, EEB 400
Los Angeles, CA 90089-2564

Email

slab@usc.edu

Phone

+1 (213) 740-3477