Vikram Ramanarayanan

Chief Science Officer, Modality.AI
Email: vikram.ramanarayanan@modality.ai

Assistant Adjunct Professor, Otolaryngology - Head and Neck Surgery (OHNS)
University of California, San Francisco (UCSF)

Sr. Principal Research Scientist (WOC), SF VA Health Care System
US Department of Veterans Affairs (VA)

Previous Affiliations:

  • Managing Sr. Research Scientist & Office Manager, ETS (San Francisco, CA)

  • SAIL Lab, University of Southern California (Los Angeles, CA)

  • Speech and Audio Group, Indian Institute of Science (Bangalore, India)

Google Scholar Profile
LinkedIn Profile


I am the Chief Science Officer at Modality.AI, which offers clinically validated, HIPAA-compliant solutions for remote patient monitoring and assessment of neurological and mental health. Modality.AI uses a conversational AI system to monitor patients' speech and facial responses through browser-based video calls. I am also an Assistant Adjunct Professor in the Department of Otolaryngology - Head and Neck Surgery at the University of California, San Francisco (UCSF), where I collaborate with Dr. John Houde and Dr. Srikantan Nagarajan on speech motor control modeling research.

I was previously a Managing Senior Research Scientist in the R&D division at Educational Testing Service (ETS), where I managed the San Francisco office and directed research on dialog and multimodal systems with applications to language learning and behavioral assessment. Our team was awarded the prestigious ETS Presidential Award for our work on multimodal dialog systems. I earned my M.S. and Ph.D. degrees at the University of Southern California, where I worked with an amazing, interdisciplinary group of researchers to explore the cognitive and technological aspects of speech science. My principal advisor was Dr. Shrikanth Narayanan.

I have 17 years of interdisciplinary R&D and leadership experience in the areas of speech science, multimodal signal processing, dialog systems, linguistics, neuroscience, and machine learning. My work has won four Best Paper awards and an Editor's Choice award at top international conferences and journals, and has resulted in over 150 publications and 35 patents (10 granted, 25 pending). I am also a Fellow of the USC Sidney Harman Academy for Polymathic Study. My research is currently supported by two grant awards from the National Institutes of Health: "A digital tool for monitoring speech decline in ALS" (as a co-PI) and "Neurological voice disorders: Differentiating pathophysiology and developing novel treatments".

I am passionate about mentorship, and have mentored or co-mentored a total of 34 students (2 postdoctoral scholars, 8 PhD scholars, 12 Master's students, and 12 undergraduate students) in my career across a variety of research topics, including, but not limited to, speech analysis and modeling for health and neuroscience (primarily at UCSF and Modality.AI), multimodal dialog for language learning and assessment (primarily at ETS), and general speech signal processing and modeling (primarily at USC).