Research Publications

Dissertation

  • Vikram Ramanarayanan, Toward understanding speech planning by observing its execution – representations, modeling and analysis. Ph.D. Thesis, University of Southern California, 2014 (171 pp.; publication no. 3643149). [link]

Book Chapters

  1. Vikram Ramanarayanan and Christina Hagedorn (2021), Magnetic Resonance Imaging, in: Manual of Clinical Phonetics, Martin J. Ball (Editor), Routledge. [link]

  2. Vikram Ramanarayanan, Keelan Evanini and Eugene Tsuprun (2019), Beyond monologues: Automated processing of conversational speech, in: Automated Speaking Assessment: Using Language Technologies to Score Spontaneous Speech, K. Zechner and K. Evanini, Eds., London: Routledge/Taylor & Francis. [link]

  3. Vikram Ramanarayanan, Robert Pugh, Yao Qian and David Suendermann-Oeft (2019). Automatic turn-level language identification for code-switched Spanish-English dialog, in: 9th International Workshop on Spoken Dialogue System Technology: Lecture Notes in Electrical Engineering, L. F. D'Haro, R. Banchs, H. Li, Eds., Springer. [link]

  4. Vikram Ramanarayanan, David Suendermann-Oeft, Patrick Lange, Robert Mundkowsky, Alexei V. Ivanov, Zhou Yu, Yao Qian and Keelan Evanini (2017), Assembling the jigsaw: How multiple open standards are synergistically combined in the HALEF multimodal dialog system, in: Multimodal Interaction with W3C Standards: Towards Natural User Interfaces to Everything, D. A. Dahl, Ed., New York: Springer. [link]

  5. Zhou Yu, Vikram Ramanarayanan, Robert Mundkowsky, Patrick Lange, Alexei Ivanov, Alan W. Black, and David Suendermann-Oeft (2017). Multimodal HALEF: An open-source modular web-based multimodal dialog framework, in: Dialogues with Social Robots, Springer, Singapore. [link]

  6. David Suendermann-Oeft, Vikram Ramanarayanan, Moritz Teckenbrock, Felix Neutatz and Dennis Schmidt (2016), HALEF: an open-source standard-compliant telephony-based spoken dialog system – a review and an outlook, in: Natural Language Dialog Systems and Intelligent Assistants, Springer. [link]

Journals

  1. Joshua Cohen, Vanessa Richter, Michael Neumann, David Black, Allie Haq, Jennifer Wright-Berryman and Vikram Ramanarayanan (2023). A Multimodal Dialog Approach To Mental State Characterization in Clinically Depressed, Anxious, and Suicidal Populations, in: Frontiers in Psychology. [link]

  2. Kwang S. Kim, Jessica L. Gaines, Benjamin Parrell, Vikram Ramanarayanan, Srikantan S. Nagarajan and John F. Houde (2023). Mechanisms of sensorimotor adaptation in a hierarchical state feedback control model of speech, in: PLOS Computational Biology. [link]

  3. Vikram Ramanarayanan, Adam Lammert, Hannah Rowe, Thomas F. Quatieri and Jordan R. Green (2022). Speech as a Biomarker: Opportunities, Interpretability, and Challenges, in: Perspectives of the ASHA Special Interest Groups. [pdf]

  4. Jessica Gaines, Kwang Kim, Benjamin Parrell, Vikram Ramanarayanan, Srikantan Nagarajan and John Houde (2021). Discrete constriction locations describe a comprehensive range of vocal tract shapes in the Maeda model, in: Journal of the Acoustical Society of America Express Letters. [pdf]

  5. Hardik Kothare, Inez Raharjo, Vikram Ramanarayanan, Kamalini Ranasinghe, Benjamin Parrell, Keith Johnson, John F. Houde, and Srikantan S. Nagarajan (2020). Sensorimotor adaptation of speech depends on the direction of auditory feedback alteration, in: Journal of the Acoustical Society of America, 148:6 (3682-3697). [pdf]

  6. Vikram Ramanarayanan, Benjamin Parrell, Srikantan Nagarajan and John Houde (2019). The FACTS model of speech motor control: Fusing state estimation and task-based control, in: PLOS Computational Biology, 15(9): e1007321. [pdf]

  7. Yao Qian, Rutuja Ubale, Patrick Lange, Keelan Evanini, Vikram Ramanarayanan and Frank K. Soong (2019). Spoken Language Understanding of Human-Machine Conversations for Language Learning Applications, in: Journal of Signal Processing Systems. [pdf]

  8. Vikram Ramanarayanan, Sam Tilsen, Michael Proctor, Johannes Töger, Louis Goldstein, Krishna Nayak and Shrikanth S. Narayanan (2018), Analysis of Speech Production Real-Time MRI, in: Computer Speech and Language. [pdf]

  9. Tetyana Sydorenko, Tom Smits, Keelan Evanini and Vikram Ramanarayanan (2018), Simulated Speaking Environments for Language Learning: Insights from Three Cases, in: Computer Assisted Language Learning. [pdf]

  10. Colin Vaz, Vikram Ramanarayanan and Shrikanth S. Narayanan (2018), Acoustic Denoising using Dictionary Learning with Spectral and Temporal Regularization, in: IEEE/ACM Transactions on Audio, Speech, and Language Processing. doi: 10.1109/TASLP.2018.2800280. [pdf]

  11. Ming Li, Jangwon Kim, Adam Lammert, Prasanta Ghosh, Vikram Ramanarayanan and Shrikanth Narayanan (2016), Speaker verification based on the fusion of speech acoustics and inverted articulatory signals, in: Computer Speech and Language. [pdf]

  12. Vikram Ramanarayanan, Maarten Van Segbroeck, and Shrikanth S. Narayanan (2015), Directly data-derived articulatory gesture-like representations retain discriminatory information about phone categories, in: Computer Speech and Language. [pdf]

  13. Adam Lammert, Louis Goldstein, Vikram Ramanarayanan, and Shrikanth S. Narayanan (2015), Gestural Control in the English Past-Tense Suffix: An Articulatory Study Using Real-Time MRI, in: Phonetica, 71 (229–248) (DOI:10.1159/000371820). [pdf] (Editor's Choice Article)

  14. Vikram Ramanarayanan, Adam Lammert, Louis Goldstein, and Shrikanth S. Narayanan (2014), Are Articulatory Settings Mechanically Advantageous for Speech Motor Control?, in: PLoS ONE, 9(8): e104168. doi:10.1371/journal.pone.0104168. [pdf]

  15. Shrikanth Narayanan, Asterios Toutios, Vikram Ramanarayanan, Adam Lammert, Jangwon Kim, Sungbok Lee, Krishna Nayak, Yoon-Chul Kim, Yinghua Zhu, Louis Goldstein, Dani Byrd, Erik Bresch, Prasanta Ghosh, Athanasios Katsamanis and Michael Proctor (2014), Real-time magnetic resonance imaging and electromagnetic articulography database for speech production research, in: Journal of the Acoustical Society of America, 136:3 (1307–1311). [pdf]

  16. Vikram Ramanarayanan, Louis Goldstein, and Shrikanth S. Narayanan (2013), Articulatory movement primitives – extraction, interpretation and validation, in: Journal of the Acoustical Society of America, 134:2 (1378-1394). [pdf]

  17. Vikram Ramanarayanan, Louis Goldstein, Dani Byrd and Shrikanth S. Narayanan (2013), A real-time MRI investigation of articulatory setting across different speaking styles, in: Journal of the Acoustical Society of America, 134:1 (510-519). [pdf]

  18. Vikram Ramanarayanan, Erik Bresch, Dani Byrd, Louis Goldstein and Shrikanth S. Narayanan (2009), Analysis of pausing behavior in spontaneous speech using real-time magnetic resonance imaging of articulation, in: Journal of the Acoustical Society of America Express Letters, 126:5 (EL160-EL165). [pdf]

Published Research Reports

  1. Vikram Ramanarayanan, David Pautler, Patrick Lange and David Suendermann-Oeft (2018), Interview With an Avatar: A Real-Time Cloud-Based Virtual Dialog Agent for Educational and Job Training Applications. ETS Research Memorandum Series, RM-18-02. Princeton, NJ: Educational Testing Service. [link]

  2. Vikram Ramanarayanan, Patrick Lange, Keelan Evanini, Hillary Molloy, Eugene Tsuprun, Yao Qian and David Suendermann-Oeft (2017), Using Vision and Speech Features for Automated Prediction of Performance Metrics in Multimodal Dialogs. ETS Research Report Series, Wiley, doi:10.1002/ets2.12146. [link]

  3. David Suendermann-Oeft, Vikram Ramanarayanan, Zhou Yu, Yao Qian, Keelan Evanini, Patrick Lange, Xinhao Wang and Klaus Zechner (2017), A Multimodal Dialog System for Language Assessment: Current State and Future Directions. ETS Research Report Series, Wiley, doi:10.1002/ets2.12149. [link]

  4. Vikram Ramanarayanan, David Suendermann-Oeft, Patrick Lange, Alexei V. Ivanov, Keelan Evanini, Zhou Yu, Eugene Tsuprun, and Yao Qian (2016), Bootstrapping Development of a Cloud-Based Spoken Dialog System in the Educational Domain From Scratch Using Crowdsourced Data, in: ETS Research Report Series, Wiley. doi: 10.1002/ets2.12105. [link]

Conference Publications

  1. Vanessa Richter, Michael Neumann and Vikram Ramanarayanan (2024). Towards remote differential diagnosis of mental and neurological disorders using automatically extracted speech and facial features, in proceedings of: 38th Annual AAAI Conference on Artificial Intelligence (Workshop on Machine Learning for Cognitive and Mental Health) 2024, Vancouver, Canada, February 2024. [pdf]

  2. Nikhil Sukhdev, Oliver Roesler, Michael Neumann, Meredith Bartlett, Doug Habberstad and Vikram Ramanarayanan (2024). On The Feasibility of Multimodal Dialog Based Remote Balance Assessment, in proceedings of: 38th Annual AAAI Conference on Artificial Intelligence (8th International Workshop on Health Intelligence) 2024, Vancouver, Canada, February 2024. [pdf]

  3. Hardik Kothare, Michael Neumann and Vikram Ramanarayanan (2024). Relationship between sample size and responsiveness of speech-based digital biomarkers in ALS, in proceedings of: International Society for CNS Clinical Trials and Methodology (ISCTM 2024) Spring Conference, Washington, D.C., February 2024.

  4. Abhishek Hosamath, Lakshmi Arbatti, Hardik Kothare, Sejal Desai, Ira Shoulson, Vikram Ramanarayanan (2024). Computational linguistics analysis of cognition-related problems reported by Parkinson’s Disease patients, in proceedings of: Motor Speech Conference 2024, San Diego, CA, February 2024.

  5. Nikhil Sukhdev, Oliver Roesler, Michael Neumann, Sejal Desai, Ira Shoulson and Vikram Ramanarayanan (2024). Multimodal Dialog Based Remote Assessment of Balance in Parkinson’s Disease and Other Movement Disorders, in proceedings of: Motor Speech Conference 2024, San Diego, CA, February 2024.

  6. Jackson Liscombe, Reva Bajjuri, Hardik Kothare, Vikram Ramanarayanan (2024). Analytical validation of Canonical Timing Alignment (CTA) and other timing-related speech biomarkers in Amyotrophic Lateral Sclerosis (ALS) extracted automatically using a remote patient monitoring platform, in proceedings of: Motor Speech Conference 2024, San Diego, CA, February 2024.

  7. Cathy Zhang, Oliver Roesler, Jackson Liscombe, Reva Bajjuri, Hardik Kothare, Vikram Ramanarayanan (2024). Analytical validation of facial metrics in Amyotrophic Lateral Sclerosis (ALS) extracted using a multimodal remote patient monitoring platform, in proceedings of: Motor Speech Conference 2024, San Diego, CA, February 2024.

  8. Carly Demopoulos, Linnea Lampinen, Cristian Preciado, Hardik Kothare and Vikram Ramanarayanan (2024). Objective and Subjective Assessment of Facial and Vocal Affect Production in Autistic and Neurotypical Children and Adolescents, in proceedings of: Motor Speech Conference 2024, San Diego, CA, February 2024.

  9. Jessica Gaines, Kwang Kim, Alvince Pongos, Ben Parrell, Vikram Ramanarayanan, Srikantan Nagarajan, John Houde (2024). Bayesian inference of state feedback control parameters reveals control differences in f0 perturbation responses in cerebellar ataxia, in proceedings of: Motor Speech Conference 2024, San Diego, CA, February 2024.

  10. Jackson Liscombe, Hardik Kothare, Michael Neumann and Vikram Ramanarayanan (2023). Speech Biomarkers of Lyme Disease: A First Exploratory Analysis, in proceedings of: American Speech-Language-Hearing Association (ASHA) Convention 2023, Boston, MA, November 2023. [pdf] (ASHA Meritorious Poster Award & ASHA Changemaker Session Laureate)

  11. Michael Neumann, Hardik Kothare, Jackson Liscombe and Vikram Ramanarayanan (2023). Assessing the Utility of Vowel Space Characteristics in Remotely Recorded Speech for ALS Progress Monitoring, in proceedings of: American Speech-Language-Hearing Association (ASHA) Convention 2023, Boston, MA, November 2023. [pdf]

  12. Michael Neumann, Hardik Kothare, Christian Yavorsky, Anzalee Khan, Jean-Pierre Lindenmayer and Vikram Ramanarayanan (2023). Towards an Interpretable Index Score for the Assessment of Schizophrenia based on Multimodal Speech and Facial Biomarkers, in proceedings of: International Society for CNS Clinical Trials and Methodology (ISCTM 2023) Autumn Conference, Barcelona, Spain, October 2023. [pdf]

  13. Hardik Kothare, Michael Neumann, Vanessa Richter, Oliver Roesler, Jackson Liscombe, Anzalee Khan, Sandy Snyder, Christian Yavorsky, Benedicto Parker, Theresa Abad, Jessica E Huber, Jean-Pierre Lindenmayer and Vikram Ramanarayanan (2023). Differentiating primary and secondary expressive negative symptoms for remote digital trials using a multimodal dialogue platform, in proceedings of: International Society for CNS Clinical Trials and Methodology (ISCTM 2023) Autumn Conference, Barcelona, Spain, October 2023. [pdf]

  14. Vikram Ramanarayanan, David Pautler, Lakshmi Arbatti, Abhishek Hosamath, Michael Neumann, Hardik Kothare, Oliver Roesler, Jackson Liscombe, Andrew Cornish, Doug Habberstad, Vanessa Richter, David Fox, David Suendermann-Oeft and Ira Shoulson (2023). When Words Speak Just as Loudly as Actions: Virtual Agent Based Remote Health Assessment Integrating What Patients Say with What They Do, in proceedings of: Interspeech 2023, Dublin, Ireland, August 2023. [pdf]

  15. Michael Neumann, Hardik Kothare, Doug Habberstad, Vikram Ramanarayanan (2023). A Multimodal Investigation of Speech, Text, Cognitive and Facial Video Features for Characterizing Depression With and Without Medication, in proceedings of: Interspeech 2023, Dublin, Ireland, August 2023. [pdf]

  16. Michael Neumann, Hardik Kothare and Vikram Ramanarayanan (2023). Combining Multiple Multimodal Speech Features into an Interpretable Index Score for Capturing Disease Progression in Amyotrophic Lateral Sclerosis, in proceedings of: Interspeech 2023, Dublin, Ireland, August 2023. [pdf]

  17. Hardik Kothare, Michael Neumann, Jackson Liscombe, Jordan Green, Vikram Ramanarayanan (2023). Responsiveness, Sensitivity and Clinical Utility of Timing-Related Speech Biomarkers for Remote Monitoring of ALS Disease Progression, in proceedings of: Interspeech 2023, Dublin, Ireland, August 2023. [pdf]

  18. Vanessa Richter, Michael Neumann, Jordan Green, Brian Richburg, Oliver Roesler, Hardik Kothare, Vikram Ramanarayanan (2023). Remote Assessment for ALS using Multimodal Dialog Agents: Data Quality, Feasibility and Task Compliance, in proceedings of: Interspeech 2023, Dublin, Ireland, August 2023. [pdf]

  19. Hardik Kothare, Andrew Exner, Sandy Snyder, Jessica Huber and Vikram Ramanarayanan (2023). Lexico-semantic differences between people with PD and healthy controls observed in a story retell task, in proceedings of: World Parkinson Congress 2023, Barcelona, Spain, July 2023. [pdf]

  20. Jessica Huber, Andrew Exner, Sandy Snyder, Vikram Ramanarayanan, Hardik Kothare, Jackson Liscombe, Renee Kohlmeier, Shreya Sridhar, Brianna Coster, Kaylee Patterson and Helen Willis (2023). Patterns and Consistency of Speech Changes in a National Sample of People with Parkinson Disease in the United States, in proceedings of: World Parkinson Congress 2023, Barcelona, Spain, July 2023. [pdf]

  21. Jean-Pierre Lindenmayer, Anzalee Khan, Vikram Ramanarayanan, David Suendermann-Oeft, David Pautler, Benedicto Parker and Mohan Parak (2023). A multimodal speech and facial digital assessment to assess negative symptoms, in proceedings of: Schizophrenia International Research Society (SIRS) Symposium on Digital Biotyping of Negative Symptoms: Advances and Challenges, Toronto, Canada, May 2023. [pdf]

  22. Lakshmi Arbatti, Abhishek Hosamath, Vikram Ramanarayanan, Ira Shoulson (2023). What Do Patients Say About Their Disease Symptoms? Deep Multilabel Text Classification With Human-in-the-Loop Curation for Automatic Labeling of Patient Self Reports of Problems, in: arXiv. [pdf].

  23. Jackson Liscombe, Hardik Kothare, Michael Neumann, David Pautler, and Vikram Ramanarayanan (2023). Pathology-specific settings for voice activity detection in a multimodal dialog agent for digital health monitoring, in proceedings of: International Workshop on Spoken Dialog Systems 2023, Los Angeles, CA, February 2023. [pdf]

  24. Daniel Tisdale, Jackson Liscombe, David Pautler and Vikram Ramanarayanan (2023). Towards integrating eye-gaze tracking into a multimodal dialog agent for remote patient assessment, in proceedings of: International Workshop on Spoken Dialog Systems 2023, Los Angeles, CA, February 2023. [pdf]

  25. Hardik Kothare, Doug Habberstad, Michael Neumann, Sarah White, David Pautler and Vikram Ramanarayanan (2023). Impact of synthetic voice and avatar animation on the usability of a dialogue agent for digital health monitoring, in proceedings of: International Workshop on Spoken Dialog Systems 2023, Los Angeles, CA, February 2023. [pdf]

  26. Anzalee Khan, Vikram Ramanarayanan, Hardik Kothare, David Pautler, Mohan Parak, Benedicto Parker, David Suendermann-Oeft, Christian Yavorsky, and Jean-Pierre Lindenmayer (2023). Using an AI-Driven Platform to Detect Negative Symptoms of Schizophrenia Through Facial and Acoustic Analysis, in proceedings of: International Society for CNS Clinical Trials and Methodology (ISCTM 2023) Spring Conference, Washington, D.C., USA, February 2023. [pdf]

  27. Vanessa Richter, Michael Neumann, Hardik Kothare, Oliver Roesler, Jackson Liscombe, David Suendermann-Oeft, Sebastian Prokop, Anzalee Khan, Christian Yavorsky, Jean-Pierre Lindenmayer and Vikram Ramanarayanan (2022). Towards Multimodal Dialog-Based Speech & Facial Biomarkers of Schizophrenia, in proceedings of: International Conference on Multimodal Interaction (ICMI) 2022 Workshop on Social Affective Multimodal Interaction for Health, Bengaluru, India, November 2022. [pdf]

  28. Oliver Roesler, Hardik Kothare, William Burke, Michael Neumann, Jackson Liscombe, Andrew Cornish, Doug Habberstad, David Pautler, David Suendermann-Oeft and Vikram Ramanarayanan (2022). Exploring Facial Metric Normalization For Within- and Between-Subject Comparisons in a Multimodal Health Monitoring Agent, in proceedings of: International Conference on Multimodal Interaction (ICMI) 2022 Workshop on Social Affective Multimodal Interaction for Health, Bengaluru, India, November 2022. [pdf]

  29. Andrew Exner, Vikram Ramanarayanan, David Pautler, Hardik Kothare, Jackson Liscombe and Jessica Huber (2022). Use of a Telehealth Platform to Automatically Assess Prosodic Contours in Parkinson Disease, in proceedings of: American Speech-Language-Hearing Association (ASHA) Convention 2022, New Orleans, LA, November 2022. [pdf]

  30. Hardik Kothare, Michael Neumann, Jackson Liscombe, Oliver Roesler, William Burke, Andrew Exner, Sandy Snyder, Andrew Cornish, Doug Habberstad, David Pautler, David Suendermann-Oeft, Jessica Huber and Vikram Ramanarayanan (2022). Statistical and Clinical Utility of Multimodal Dialogue-Based Speech and Facial Metrics for Parkinson's Disease Assessment, in proceedings of: Interspeech 2022, Incheon, South Korea, September 2022. [pdf]

  31. Hardik Kothare, Michael Neumann, Jackson Liscombe, Oliver Roesler, Doug Habberstad, William Burke, Andrew Cornish, Lakshmi Arbatti, Abhishek Hosamath, David Fox, David Pautler, David Suendermann-Oeft, Ira Shoulson and Vikram Ramanarayanan (2022). Assessment of atypical speech in Multiple Sclerosis via a multimodal dialogue platform: An exploratory study, in proceedings of: 8th International Conference on Speech Motor Control, Groningen, the Netherlands, August 2022. [pdf]

  32. Jackson Liscombe, Michael Neumann, Hardik Kothare, Oliver Roesler, David Suendermann-Oeft and Vikram Ramanarayanan (2022). On Timing and Pronunciation Metrics for Intelligibility Assessment in Pathological ALS Speech, in proceedings of: 8th International Conference on Speech Motor Control, Groningen, the Netherlands, August 2022. [pdf]

  33. Hardik Kothare, Vikram Ramanarayanan, Oliver Roesler, Michael Neumann, Jackson Liscombe, William Burke, Andrew Cornish, Doug Habberstad, Brandon Kopald, Alison Bai, Yelena Markiv, Levi Cole, Sara Markuson, Yasmine Bensidi-Slimane, Alaa Sakallah, Katherine Brogan, Linnea Lampinen, Sara Skiba, David Suendermann-Oeft, David Pautler and Carly Demopoulos (2022). Atypical speech acoustics and jaw kinematics during affect production in children with Autism Spectrum Disorder assessed by an interactive multimodal conversational platform, in proceedings of: 8th International Conference on Speech Motor Control, Groningen, the Netherlands, August 2022. [pdf]

  34. Kwang Kim, Jessica Gaines, Ben Parrell, Vikram Ramanarayanan, Srikantan Nagarajan, John Houde (2022). Prediction errors drive auditory-motor adaptation in a hierarchical FACTS model, in proceedings of: 8th International Conference on Speech Motor Control, Groningen, the Netherlands, August 2022. [pdf]

  35. Jessica Gaines, Kwang Kim, Ben Parrell, Vikram Ramanarayanan, Richard Ivry, Srikantan Nagarajan, John Houde (2022). Bayesian Inference of State Feedback Control Model Parameters for Pitch Perturbation Responses, in proceedings of: 8th International Conference on Speech Motor Control, Groningen, the Netherlands, August 2022. [pdf]

  36. Hardik Kothare, Oliver Roesler, William Burke, Michael Neumann, Jackson Liscombe, Andrew Exner, Sandy Snyder, Andrew Cornish, Doug Habberstad, David Pautler, David Suendermann-Oeft, Jessica Huber, and Vikram Ramanarayanan (2022). Speech, Facial and Fine Motor Features for Conversation-Based Remote Assessment and Monitoring of Parkinson’s Disease, in proceedings of: 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) 2022, Glasgow, Scotland, July 2022. [pdf]

  37. Anzalee Khan, Sebastian Prokop, Saqib Bashir, Jean-Pierre Lindenmayer, Beverly Insel, David Pautler, David Suendermann-Oeft, Christian Yavorsky and Vikram Ramanarayanan (2022). Reliability, validity and internal consistency of multimodal AI based facial and acoustic biomarkers of negative symptoms in schizophrenia, in proceedings of: Annual Meeting of the Schizophrenia International Research Society (SIRS) 2022, Florence, Italy, April 2022. [pdf]

  38. William Burke, David Pautler, Hardik Kothare, Doug Habberstad, Oliver Roesler, Michael Neumann, Jackson Liscombe, Andrew Cornish, David Suendermann-Oeft and Vikram Ramanarayanan (2022). On The Feasibility of Remotely Administered and Self Driven Cognitive Assessments Using a Multimodal Dialog Agent, in proceedings of: Annual Meeting of the Cognitive Neuroscience Society (CNS), San Francisco, CA, April 2022. [pdf]

  39. Vikram Ramanarayanan, Michael Neumann, Aria Anvar, David Suendermann-Oeft, Oliver Roesler, Jackson Liscombe, Hardik Kothare, James D. Berry, Ernest Fraenkel, Raquel Norel, Alex Sherman, Jochen Kumm, and Indu Navar (2022). Lessons Learned From A Large-Scale Audio-Visual Remote Data Collection For Amyotrophic Lateral Sclerosis Research, in proceedings of: Annual Meeting of the American Academy of Neurology (AAN), April 2022. [pdf]

  40. Oliver Roesler, William Burke, Hardik Kothare, Jackson Liscombe, Michael Neumann, Andrew Cornish, Doug Habberstad, David Pautler, David Suendermann-Oeft and Vikram Ramanarayanan (2022). Multimodal dialog based remote patient monitoring of motor function in Parkinson’s Disease and other movement disorders, in proceedings of: Motor Speech Conference 2022, Charleston, SC, February 2022. [pdf]

  41. Jackson Liscombe, Alexander Ocampo, Hardik Kothare, Oliver Roesler, Michael Neumann, Doug Habberstad, Andrew Cornish, David Pautler, David Suendermann-Oeft and Vikram Ramanarayanan (2022). On the robust automatic computation of speaking and articulation duration in ALS patients versus healthy controls, in proceedings of: Motor Speech Conference 2022, Charleston, SC, February 2022. [pdf]

  42. Andrew Exner, Vikram Ramanarayanan, David Pautler, Sandy Snyder, Hardik Kothare, Jackson Liscombe, Shreya Sridhar, Oliver Roesler, William Burke, Michael Neumann, David Suendermann-Oeft and Jessica Huber (2022). Collecting remote voice and movement data from people with Parkinson’s disease (PD) using multimodal conversational AI: Lessons learned from a national study, in proceedings of: Motor Speech Conference 2022, Charleston, SC, February 2022. [pdf]

  43. Michael Neumann, Oliver Roesler, Jackson Liscombe, Hardik Kothare, David Suendermann-Oeft, James D. Berry, Ernest Fraenkel, Raquel Norel, Aria Anvar, Indu Navar, Alexander V. Sherman, Jordan R. Green and Vikram Ramanarayanan (2021). Multimodal dialog based speech and facial biomarkers capture differential disease progression rates for ALS remote patient monitoring, in proceedings of: The 32nd International Symposium on Amyotrophic Lateral Sclerosis and Motor Neuron Disease, Virtual, December 2021. [pdf]

  44. Hardik Kothare, Michael Neumann, Oliver Roesler, Jackson Liscombe, David Pautler, David Suendermann-Oeft, Christian Yavorsky, Anzalee Khan, Jean-Pierre Lindenmayer and Vikram Ramanarayanan (2021). Multimodal conversational technology for remote assessment of symptom severity in people with schizophrenia, in proceedings of: Society for Neuroscience (SfN) Annual Meeting, Chicago, IL, November 2021. [slides, video]

  45. Andrew Exner, Vikram Ramanarayanan, David Pautler, Hardik Kothare, Sandy Snyder, and Jessica E. Huber (2021). Development of a Telehealth Platform for the Assessment of Motor Speech Disorders, in proceedings of: American Speech-Language-Hearing Association (ASHA) Annual Convention, November 2021. [pdf]

  46. Jackson Liscombe, Hardik Kothare, Michael Neumann, Alexander Ocampo, Oliver Roesler, Doug Habberstad, Andrew Cornish, David Pautler, David Suendermann-Oeft and Vikram Ramanarayanan (2021). Voice Activity Detection in Dialog Agents for Dysarthric Speakers, in proceedings of: International Workshop on Spoken Dialog Systems 2021, Singapore, November 2021. [pdf]

  47. Anzalee Khan, Jean-Pierre Lindenmayer, Saqib Basir, Sebastian Prokop, Beverly Insel, Brianna Fitapelli, Krishnapriya Bodicherla, Mohan Parak, Benedicto Parker, Christian Yavorsky, Hardik Kothare, David Pautler, David Suendermann-Oeft and Vikram Ramanarayanan (2021). Computerized facial and acoustic analysis during speech as a measurement of negative symptoms in schizophrenia, in proceedings of: International Society for CNS Clinical Trials and Methodology (ISCTM 2021) Autumn Conference, Lisbon, Portugal, September 2021. [pdf]

  48. Vikram Ramanarayanan, Andrew H. Exner, David Pautler, Sandy Snyder, William Burke, Hardik Kothare, Jackson Liscombe, David Suendermann-Oeft, Ashleigh Lambert, Renee Kohlmeier, and Jessica E. Huber (2021). Multimodal Conversational AI for Remote Patient Monitoring and Analysis of Parkinson’s Disease, in proceedings of: Movement Disorders Society (MDS) Congress 2021, September 2021. [pdf, video]

  49. Michael Neumann, Oliver Roesler, Jackson Liscombe, Hardik Kothare, David Suendermann-Oeft, David Pautler, Indu Navar, Aria Anvar, Jochen Kumm, Raquel Norel, Ernest Fraenkel, Alexander Sherman, James Berry, Gary Pattee, Jun Wang, Jordan Green and Vikram Ramanarayanan (2021). Investigating the Utility of Multimodal Conversational Technology and Audiovisual Analytic Measures with Applications Toward Early Diagnosis and Monitoring of Amyotrophic Lateral Sclerosis at Scale, in proceedings of: Interspeech 2021, Virtual Conference, Sept 2021. [pdf, video]

  50. Hardik Kothare, Vikram Ramanarayanan, Oliver Roesler, Michael Neumann, Jackson Liscombe, William Burke, Andrew Cornish, Doug Habberstad, Alaa Sakallah, Sara Markuson, Seemran Kansara, Afik Faerman, Yasmine Bensidi-Slimane, Laura Fry, Saige Portera, David Suendermann-Oeft, David Pautler and Carly Demopoulos (2021), Investigating the interplay between affective, phonatory and motoric subsystems in Autism Spectrum Disorder using an audiovisual dialogue agent, in proceedings of: Interspeech 2021, Virtual Conference, Sept 2021 (also presented at the 2021 National Autism Conference, Penn State University, PA, Aug 2021). [pdf].

  51. Rahul R. Divekar, Haley Lepp, Pravin Chopade, Aaron Albin, Daniel Brenner and Vikram Ramanarayanan (2021). Conversational Agents in Language Education: Where They Fit and Their Research Challenges, in proceedings of: 23rd International Conference on Human-Computer Interaction, Washington DC, USA, July 2021. [pdf]

  52. Aria Anvar, David Suendermann-Oeft, David Pautler, Vikram Ramanarayanan, Jochen Kumm, Raquel Norel, Ernest Fraenkel, Indu Navar (2021). Towards A Large-Scale Audio-Visual Corpus For Research on Amyotrophic Lateral Sclerosis, in proceedings of: Annual Meeting of the American Academy of Neurology, April 2021. [pdf]

  53. Katherine Stasaski and Vikram Ramanarayanan (2020). Automatic Feedback Generation for Dialog-Based Language Tutors Using Transformer Models and Active Learning, in proceedings of: NeurIPS 2020 Workshop on Human-in-the-Loop Dialogue Systems, Dec 2020. [pdf]

  54. Hardik Kothare and Vikram Ramanarayanan (2020). Remote monitoring of respiratory function using a cloud-based multimodal dialogue system, in proceedings of: Virtual International Seminar on Speech Production (ISSP), Dec 2020. [pdf, video]

  55. Jessica Gaines, Kwang Kim, Benjamin Parrell, Vikram Ramanarayanan, Srikantan Nagarajan and John Houde (2020). Discrete constriction locations describe a comprehensive range of vocal tract shapes in the Maeda model, in proceedings of: Virtual International Seminar on Speech Production (ISSP), Dec 2020. [pdf]

  56. Vikram Ramanarayanan (2020). Design and Development of a Human-Machine Dialog Corpus for the Automated Assessment of Conversational English Proficiency, in proceedings of: Interspeech 2020, Virtual Conference, Oct 2020. [pdf]

  57. Vikram Ramanarayanan, Matt Mulholland and Debanjan Ghosh (2020). Exploring Recurrent, Memory and Attention Based Architectures for Scoring Interactional Aspects of Human-Machine Text Dialog, arXiv. [pdf]

  58. Haley Lepp, Chee Wee Leong, Katrina Roohr, Michelle Martin-Raugh and Vikram Ramanarayanan (2020). Effect of modality on human and machine scoring of presentation videos, in proceedings of: International Conference on Multimodal Interaction (ICMI 2020), Virtual Conference, Oct 2020 [pdf].

  59. Vikram Ramanarayanan, Oliver Roesler, Michael Neumann, David Pautler, Doug Habberstad, Andrew Cornish, Hardik Kothare, Vignesh Murali, Jackson Liscombe, Dirk Schnelle-Walka, Patrick Lange, and David Suendermann-Oeft (2020), Toward Remote Patient Monitoring of Speech, Video, Cognitive and Respiratory Biomarkers Using Multimodal Dialog Technology, in proceedings of: Interspeech 2020, Virtual Conference, Oct 2020. [pdf]

  60. Michael Neumann, Oliver Roesler, David Suendermann-Oeft and Vikram Ramanarayanan (2020). On the Utility of Audiovisual Dialog Technologies and Signal Analytics for Real-time Remote Monitoring of Depression Biomarkers, in proceedings of: ACL 2020 Virtual Workshop on NLP for Medical Conversations, July 2020. [pdf, video]

  61. Vikram Ramanarayanan, Benjamin Parrell, Srikantan Nagarajan and John Houde (2020). Simulating adaptation in the FACTS model of speech motor control, in proceedings of: Motor Speech Conference 2020, Santa Barbara, CA, Feb 2020 [pdf].

  62. Vikram Ramanarayanan, David Suendermann-Oeft, David Pautler, Oliver Roesler, Jackson Liscombe, Michael Neumann, Doug Habberstad and Andrew Cornish (2020). Leveraging Multimodal Dialog Technologies for Patient Health Diagnosis, Monitoring, and Intervention, in proceedings of: Motor Speech Conference 2020, Santa Barbara, CA, Feb 2020 [pdf].

  63. Rutuja Ubale, Vikram Ramanarayanan, Yao Qian, Keelan Evanini, Chee Wee Leong and Chong Min Lee (2019). Native Language Identification from Raw Waveforms Using Deep Convolutional Neural Networks with Attentive Pooling, in proceedings of: 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Sentosa, Singapore, Dec 2019 [pdf].

  64. Vikram Ramanarayanan, Matthew Mulholland and Yao Qian (2019). Scoring Interactional Aspects of Human–Machine Dialog for Language Learning and Assessment using Text Features, in proceedings of: Annual Conference of the Joint ACL/ISCA Special Interest Group on Discourse and Dialogue (SIGDIAL 2019), Stockholm, Sweden, Sept 2019 [pdf].

  65. Chee Wee Leong, Katrina Roohr, Vikram Ramanarayanan, Michelle Martin-Raugh, Harrison Kell, Rutuja Ubale, Yao Qian, Zydrune Mladineo and Laura McCulla (2019). To Trust or Not to Trust? A Study of Human Bias in Automated Video Interview Assessments, in proceedings of: International Conference on Multimodal Interaction (ICMI 2019), Suzhou, China, Oct 2019 [pdf].

  66. John Houde, Benjamin Parrell, Vikram Ramanarayanan and Srikantan Nagarajan (2019). The FACTS model: using state estimation and task-based feedback control to model the speech motor system, in proceedings of: Annual Conference of the Cognitive Neuroscience Society (CNS 2019), San Francisco, CA, March 2019 [pdf].

  67. Vikram Ramanarayanan, Benjamin Parrell, Srikantan Nagarajan and John Houde (2018). FACTS: A hierarchical task-based control model of speech incorporating sensory feedback, in proceedings of: Interspeech 2018, Hyderabad, India, Sept 2018 [pdf].

  68. Keelan Evanini, Matthew Mulholland, Rutuja Ubale, Yao Qian, Robert Pugh, Vikram Ramanarayanan and Aoife Cahill (2018). Improvements to an Automated Content Scoring System for Spoken CALL Responses: The ETS submission to the Second Spoken CALL Shared Task, in proceedings of: Interspeech 2018, Hyderabad, India, Sept 2018 [pdf].

  69. Vikram Ramanarayanan, David Pautler, Patrick Lange, Eugene Tsuprun, Rutuja Ubale, Keelan Evanini and David Suendermann-Oeft (2018). Toward Scalable Dialog Technology for Conversational Language Learning: A Case Study of the TOEFL MOOC, in proceedings of: Interspeech 2018, Hyderabad, India, Sept 2018 [pdf].

  70. Keelan Evanini, Veronika Timpe-Laughlin, Eugene Tsuprun, Ian Blood, Jeremy Lee, James Bruno, Vikram Ramanarayanan, Patrick Lange and David Suendermann-Oeft (2018). Game-based spoken dialog language learning applications for young students, in proceedings of: Interspeech 2018, Hyderabad, India, Sept 2018 [pdf].

  71. Vikram Ramanarayanan and Robert Pugh (2018). Automatic Token and Turn Level Language Identification for Code-Switched Text Dialog: An Analysis Across Language Pairs and Corpora, in proceedings of: Annual Conference of the Joint ACL/ISCA Special Interest Group on Discourse and Dialogue (SIGDIAL 2018), Melbourne, Australia, July 2018 [pdf].

  72. David Pautler, Vikram Ramanarayanan, Kirby Cofino, Patrick Lange and David Suendermann-Oeft (2018). Leveraging Multimodal Dialog Technology for the Design of Automated and Interactive Student Agents for Teacher Training, in proceedings of: Annual Conference of the Joint ACL/ISCA Special Interest Group on Discourse and Dialogue (SIGDIAL 2018), Melbourne, Australia, July 2018 [pdf].

  73. Vikram Ramanarayanan and Michelle LaMar (2018). Toward Automatically Measuring Learner Ability from Human-Machine Dialog Interactions using Novel Psychometric Models, in proceedings of: 13th NAACL Workshop on Innovative Use of NLP for Building Educational Applications (BEA-NAACL 2018), New Orleans, Louisiana, June 2018 [pdf].

  74. Vikram Ramanarayanan, Rodolfo Long, David Pautler, Jason White and David Suendermann-Oeft (2018). Leveraging Multimodal Dialog Technology for the Design of Accessible Science Simulations, in proceedings of: ACM CHI Conference on Human Factors in Computing Systems (CHI 2018) Workshop on Inclusive Educational Technologies, Montreal, Canada, April 2018 [pdf].

  75. Vikram Ramanarayanan, Robert Pugh, Yao Qian and David Suendermann-Oeft (2018). Automatic Turn-Level Language Identification for Code-Switched Spanish-English Dialog, in proceedings of: International Workshop on Spoken Dialog Systems (IWSDS 2018), Singapore, May 2018 [pdf].

  76. Vikram Ramanarayanan, Chee Wee Leong, David Suendermann-Oeft and Keelan Evanini (2017). Crowdsourcing Ratings of Caller Engagement in Thin-Slice Videos of Human-Machine Dialog: Benefits and Pitfalls, in proceedings of: International Conference on Multimodal Interaction (ICMI 2017), Glasgow, Scotland, Nov 2017 [pdf].

  77. Kirby Cofino, Vikram Ramanarayanan, Patrick Lange, David Pautler, David Suendermann-Oeft and Keelan Evanini (2017). A Modular, Multimodal Open-Source Virtual Interviewer Dialog Agent, in proceedings of: International Conference on Multimodal Interaction (ICMI 2017), Glasgow, Scotland, Nov 2017 [pdf].

  78. Vikram Ramanarayanan and David Suendermann-Oeft (2017). “Jee haan, I'd like both, por favor”: Elicitation of a Code-Switched Corpus of Hindi-English and Spanish-English Human-Machine Dialog, in proceedings of: Interspeech 2017, Stockholm, Sweden, Aug 2017 [pdf].

  79. Vikram Ramanarayanan, Patrick Lange, Keelan Evanini, Hillary Molloy and David Suendermann-Oeft (2017). Issues in Human and Automated Scoring of Fluency, Pronunciation and Intonation During Human–Machine Spoken Dialog Interactions, in proceedings of: Interspeech 2017, Stockholm, Sweden, Aug 2017 [pdf].

  80. Vikram Ramanarayanan, Chee Wee Leong and David Suendermann-Oeft (2017). Rushing to Judgement: How Do Laypeople Rate Caller Engagement in Thin-Slice Videos of Human–Machine Dialog?, in proceedings of: Interspeech 2017, Stockholm, Sweden, Aug 2017 [pdf].

  81. Yao Qian, Rutuja Ubale, Vikram Ramanarayanan, David Suendermann-Oeft, Keelan Evanini, Patrick Lange and Eugene Tsuprun (2017). Towards End-to-End Modeling of Spoken Language Understanding in a Cloud-based Spoken Dialog System, in proceedings of: 21st Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2017 - SaarDial), Saarbrücken, Germany, Aug 2017 [pdf].

  82. Dirk Schnelle-Walka, Vikram Ramanarayanan, Stefan Radomski, Patrick Lange and David Suendermann-Oeft (2017). An Open Source Standards-Compliant Voice Browser with Support for Multiple Language Understanding Implementations, in proceedings of: 21st Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2017 - SaarDial), Saarbrücken, Germany, Aug 2017 [pdf].

  83. Veronika Timpe-Laughlin, Keelan Evanini, Ashley Green, Ian Blood, Judit Dombi and Vikram Ramanarayanan (2017). Designing interactive, automated dialogues for L2 pragmatics learning, in proceedings of: 21st Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2017 - SaarDial), Saarbrücken, Germany, Aug 2017 [pdf].

  84. Tanner Sorensen, Zisis Skordilis, Asterios Toutios, Yoon-Chul Kim, Yinghua Zhu, Jangwon Kim, Adam Lammert, Vikram Ramanarayanan, Louis Goldstein, Dani Byrd, Krishna Nayak and Shrikanth Narayanan (2017). Database of volumetric and real-time vocal tract MRI for speech science, in proceedings of: Interspeech 2017, Stockholm, Sweden, Aug 2017 [pdf].

  85. Benjamin Parrell, Vikram Ramanarayanan, Srikantan Nagarajan and John Houde (2017). A hierarchical feedback control model for speech simulates task-specific responses to auditory and physical perturbations, in proceedings of: 7th International Conference on Speech Motor Control, Groningen, the Netherlands, July 2017 [pdf].

  86. John Houde, Srikantan S. Nagarajan, Benjamin Parrell and Vikram Ramanarayanan (2017). The state of modeling speech production as state feedback control, in proceedings of: 7th International Conference on Speech Motor Control, Groningen, the Netherlands, July 2017 [pdf].

  87. Keelan Evanini, Eugene Tsuprun, Veronika Timpe-Laughlin, Vikram Ramanarayanan, Patrick Lange and David Suendermann-Oeft (2017). Evaluating the Impact of Local Context on CALL Applications Using Spoken Dialog Systems, in proceedings of: Computer Assisted Language Learning (CALL) Research Conference 2017, Berkeley, CA, July 2017 [pdf].

  88. Zhou Yu, Vikram Ramanarayanan, Patrick Lange and David Suendermann-Oeft (2017). An Open-Source Dialog System with Real-Time User Engagement Coordination for Job Interview Training Applications, in proceedings of: International Workshop on Spoken Dialog Systems (IWSDS 2017), Pittsburgh, PA, June 2017 [pdf].

  89. Vikram Ramanarayanan, Patrick Lange, Keelan Evanini, Hillary Molloy, Eugene Tsuprun and David Suendermann-Oeft (2017). Crowdsourcing multimodal dialog interactions: Lessons learned from the HALEF case, in proceedings of: Association for the Advancement of Artificial Intelligence (AAAI) 2017 Workshop on Crowdsourcing, Deep Learning and Artificial Intelligence Agents, San Francisco, CA, Feb 2017 [pdf].

  90. Vikram Ramanarayanan, Patrick Lange, David Pautler, Zhou Yu and David Suendermann-Oeft (2016). Interview with an Avatar: A real-time engagement tracking-enabled cloud-based multimodal dialog system for learning and assessment, in proceedings of: IEEE Spoken Language Technology Workshop (SLT 2016), San Diego, CA, Dec 2016 [pdf].

  91. Hardik Kothare, Vikram Ramanarayanan, Benjamin Parrell, John F. Houde, Srikantan S. Nagarajan (2016). Sensorimotor adaptation to real-time formant shifts is influenced by the direction and magnitude of shift, in proceedings of: Society for Neuroscience Conference (SfN 2016), San Diego, CA, Nov 2016 [link].

  92. Vikram Ramanarayanan, Benjamin Parrell, Louis Goldstein, Srikantan Nagarajan and John Houde (2016). A new model of speech motor control based on task dynamics and state feedback, in proceedings of: Interspeech 2016, San Francisco, CA, Sept 2016 [pdf].

  93. Yao Qian, Jidong Tao, David Suendermann-Oeft, Keelan Evanini, Alexei V. Ivanov and Vikram Ramanarayanan (2016). Noise and metadata sensitive bottleneck features for improving speaker recognition with non-native speech input, in proceedings of: Interspeech 2016, San Francisco, CA, Sept 2016 [pdf].

  94. Vikram Ramanarayanan and Saad Khan (2016). Novel features for capturing cooccurrence behavior in dyadic collaborative problem solving tasks, in proceedings of: Educational Data Mining (EDM 2016), Raleigh, North Carolina, June 2016 [pdf].

  95. Zhou Yu, Vikram Ramanarayanan, Patrick Lange, Robert Mundkowsky and David Suendermann-Oeft (2016). Multimodal HALEF: An open-source modular web-based multimodal dialog framework, in proceedings of: International Workshop on Spoken Dialog Systems (IWSDS 2016), Saariselkä, Finland, Jan 2016 [pdf].

  96. Alexei V. Ivanov, Patrick L. Lange, Vikram Ramanarayanan and David Suendermann-Oeft (2016). Designing an optimal ASR system for spontaneous non-native speech in a spoken dialog application, in proceedings of: International Workshop on Spoken Dialog Systems (IWSDS 2016), Saariselkä, Finland, Jan 2016 [pdf].

  97. Vikram Ramanarayanan, Zhou Yu, Robert Mundkowsky, Patrick Lange, Alexei V. Ivanov, Alan W. Black, and David Suendermann-Oeft (2015). A modular open-source standard-compliant dialog system framework with video support, in proceedings of: IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU 2015), Scottsdale, AZ, Dec 2015 [pdf].

  98. Zhou Yu, Vikram Ramanarayanan, David Suendermann-Oeft, Xinhao Wang, Klaus Zechner, Lei Chen, Jidong Tao and Yao Qian (2015). Using Bidirectional LSTM Recurrent Neural Networks to Learn High-Level Abstractions of Sequential Features for Automated Scoring of Non-Native Spontaneous Speech, in proceedings of: IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2015), Scottsdale, AZ, Dec 2015 [pdf].

  99. Vikram Ramanarayanan, Chee Wee Leong, Lei Chen, Gary Feng and David Suendermann-Oeft (2015). Evaluating speech, face, emotion and body movement time-series features for automated multimodal presentation scoring, in proceedings of: International Conference on Multimodal Interaction (ICMI 2015), Seattle, WA, Nov 2015 [pdf].

  100. Vikram Ramanarayanan, David Suendermann-Oeft, Alexei V. Ivanov, and Keelan Evanini (2015). A distributed cloud-based dialog system for conversational application development, in proceedings of: 16th Annual SIGdial Meeting on Discourse and Dialogue (SIGDIAL 2015), Prague, Czech Republic [pdf].

  101. Alexei V. Ivanov, Vikram Ramanarayanan, David Suendermann-Oeft, Melissa Lopez, Keelan Evanini, and Jidong Tao (2015). Automated speech recognition technology for dialogue interaction with non-native interlocutors, in proceedings of: 16th Annual SIGdial Meeting on Discourse and Dialogue (SIGDIAL 2015), Prague, Czech Republic [pdf].

  102. Vikram Ramanarayanan, Lei Chen, Chee Wee Leong, Gary Feng and David Suendermann-Oeft (2015). An analysis of time-aggregated and time-series features for scoring different aspects of multimodal presentation data, in proceedings of: Interspeech 2015, Dresden, Germany, Sept 2015 [pdf].

  103. Zisis Skordilis, Vikram Ramanarayanan, Louis Goldstein and Shrikanth Narayanan (2015). Experimental assessment of the tongue incompressibility hypothesis during speech production, in proceedings of: Interspeech 2015, Dresden, Germany, Sept 2015 [pdf].

  104. David Suendermann-Oeft, Vikram Ramanarayanan, Moritz Teckenbrock, Felix Neutatz and Dennis Schmidt (2015). HALEF: an open-source standard-compliant telephony-based modular spoken dialog system – A review and an outlook, in proceedings of: International Workshop on Spoken Dialog Systems, Busan, South Korea, Jan 2015 [pdf].

  105. Vikram Ramanarayanan, Louis Goldstein and Shrikanth Narayanan (2014). Speech motor control primitives arising from a dynamical systems model of vocal tract articulation, in proceedings of: Interspeech 2014, Singapore, Sept 2014 [pdf].

  106. Colin Vaz, Vikram Ramanarayanan and Shrikanth Narayanan (2014). Joint filtering and factorization for recovering latent structure from noisy speech data, in proceedings of: Interspeech 2014, Singapore, Sept 2014 [pdf].

  107. Andres Benitez, Vikram Ramanarayanan, Louis Goldstein and Shrikanth Narayanan (2014). A real-time MRI study of articulatory setting in second language speech, in proceedings of: Interspeech 2014, Singapore, Sept 2014 [pdf].

  108. Vikram Ramanarayanan, Louis Goldstein and Shrikanth Narayanan (2014). Speech motor control primitives arising from a dynamical systems model of vocal tract articulation, in proceedings of: International Seminar on Speech Production 2014, Cologne, Germany, May 2014 [pdf]. (Northern Digital Inc. Excellence Award for Best Paper)

  109. Vikram Ramanarayanan, Adam Lammert, Louis Goldstein and Shrikanth Narayanan (2013). Articulatory settings facilitate mechanically advantageous motor control of vocal tract articulators, in proceedings of: Interspeech 2013, Lyon, France, Aug 2013 [pdf].

  110. Vikram Ramanarayanan, Maarten Van Segbroeck and Shrikanth Narayanan (2013). On the nature of data-driven primitive representations of speech articulation, in proceedings of: Interspeech 2013 Workshop on Speech Production in Automatic Speech Recognition (SPASR), Lyon, France, Aug 2013 [pdf].

  111. Colin Vaz, Vikram Ramanarayanan and Shrikanth Narayanan (2013). A two-step technique for MRI audio enhancement using dictionary learning and wavelet packet analysis, in proceedings of: Interspeech 2013, Lyon, France, Aug 2013 [pdf]. (Best Student Paper Award)

  112. Zhaojun Yang, Vikram Ramanarayanan, Dani Byrd and Shrikanth Narayanan (2013). The effect of word frequency and lexical class on articulatory-acoustic coupling, in proceedings of: Interspeech 2013, Lyon, France, Aug 2013 [pdf].

  113. Adam Lammert, Vikram Ramanarayanan, Michael Proctor and Shrikanth Narayanan (2013). Vocal tract cross-distance estimation from real-time MRI using region-of-interest analysis, in proceedings of: Interspeech 2013, Lyon, France, Aug 2013 [pdf].

  114. Daniel Bone, Chi-Chun Lee, Vikram Ramanarayanan, Shrikanth Narayanan, Renske S. Hoedemaker and Peter C. Gordon (2013). Analyzing eye-voice coordination in Rapid Automatized Naming, in proceedings of: Interspeech 2013, Lyon, France, Aug 2013 [pdf].

  115. Ming Li, Jangwon Kim, Prasanta Ghosh, Vikram Ramanarayanan and Shrikanth Narayanan (2013). Speaker verification based on fusion of acoustic and articulatory information, in proceedings of: Interspeech 2013, Lyon, France, Aug 2013 [pdf].

  116. Vikram Ramanarayanan, Prasanta Ghosh, Adam Lammert and Shrikanth S. Narayanan (2012), Exploiting speech production information for automatic speech and speaker modeling and recognition – possibilities and new opportunities, in proceedings of: APSIPA 2012, Los Angeles, CA, Dec 2012 [pdf].

  117. Vikram Ramanarayanan, Naveen Kumar and Shrikanth S. Narayanan (2012), A framework for unusual event detection in videos of informal classroom settings, in: NIPS 2012 workshop on Personalizing Education with Machine Learning, Lake Tahoe, NV, Dec 2012 [pdf].

  118. Vikram Ramanarayanan, Athanasios Katsamanis and Shrikanth Narayanan (2011). Automatic data-driven learning of articulatory primitives from real-time MRI data using convolutive NMF with sparseness constraints, in proceedings of: Interspeech 2011, Florence, Italy, Aug 2011 [pdf].

  119. Athanasios Katsamanis, Erik Bresch, Vikram Ramanarayanan and Shrikanth Narayanan (2011). Validating rt-MRI based articulatory representations via articulatory recognition, in proceedings of: Interspeech 2011, Florence, Italy, Aug 2011 [pdf].

  120. Shrikanth Narayanan, Erik Bresch, Prasanta Ghosh, Louis Goldstein, Athanasios Katsamanis, Yoon Kim, Adam Lammert, Michael Proctor, Vikram Ramanarayanan, and Yinghua Zhu (2011). A Multimodal Real-Time MRI Articulatory Corpus for Speech Research, in proceedings of: Interspeech 2011, Florence, Italy, Aug 2011 (authors after first in alphabetical order) [pdf].

  121. Vikram Ramanarayanan, Dani Byrd, Louis Goldstein and Shrikanth Narayanan (2011). An MRI study of articulatory settings of L1 and L2 speakers of American English, in: International Seminar on Speech Production 2011, Montreal, Canada, June 2011 [pdf].

  122. Vikram Ramanarayanan, Adam Lammert, Dani Byrd, Louis Goldstein and Shrikanth Narayanan (2011). Planning and Execution in Soprano Singing and Speaking Behavior: an Acoustic/Articulatory Study Using Real-Time MRI, in: International Seminar on Speech Production 2011, Montreal, Canada, June 2011 [pdf].

  123. Vikram Ramanarayanan, Dani Byrd, Louis Goldstein and Shrikanth Narayanan (2010). Investigating articulatory setting - pauses, ready position and rest - using real-time MRI, in proceedings of: Interspeech 2010, Makuhari, Japan, Sept 2010 [pdf].

  124. Vikram Ramanarayanan, Dani Byrd, Louis Goldstein and Shrikanth Narayanan (2010). A joint acoustic-articulatory study of nasal spectral reduction in read versus spontaneous speaking styles, in: Speech Prosody 2010, Chicago, Illinois, May 2010 [pdf].

  125. Vikram Ramanarayanan (2010). Prosodic variation within speech planning and execution - insights from real-time MRI, in: São Paulo School of Speech Dynamics, São Paulo, Brazil, June 2010 (unpublished poster summarizing research work done during 2009-10) [pdf].

  126. Vikram Ramanarayanan, Erik Bresch, Dani Byrd, Louis Goldstein and Shrikanth S. Narayanan (2009), Real-time MRI tracking of articulation during grammatical and ungrammatical pauses in speech, in: 157th Meeting of the Acoustical Society of America, Portland, Oregon, May 2009 [pdf].

  127. Ed Holsinger, Vikram Ramanarayanan, Dani Byrd, Louis Goldstein, Maria Gorno-Tempini, Shrikanth Narayanan (2009). Beyond acoustic data: Characterizing disordered speech using direct articulatory evidence from real time imaging, in: 157th Meeting of the Acoustical Society of America, Portland, Oregon, May 2009 [pdf].

Technical Reports

  1. Vikram Ramanarayanan, Panayiotis Georgiou and Shrikanth S. Narayanan (2012). Investigating duration modeling within a statistical data-driven front-end for speech synthesis.

  2. Vikram Ramanarayanan and Shrikanth Narayanan (2010). An approach toward understanding the variant and invariant aspects of speech production using low-rank–sparse matrix decompositions [pdf].