Background: Social communication is a significant area of weakness for individuals with developmental differences such as autism spectrum disorder (ASD), but it is notoriously expensive and time-consuming to measure. A recent movement toward fine-grained behavioral imaging using cutting-edge technologies could drastically improve our ability to automatically capture subtle and complex social communication impairments, thus laying the groundwork for personalized interventions. To this end, we developed a self-contained, scalable, 5-minute computer program to elicit vocalizations from individuals aged 6 and older (Computerized Social Affective Language Task; C-SALT). Programmed in Unity3D, this cross-platform tool incorporates established research paradigms (e.g., a social narrative task, phonetic measures) into an engaging screen-based interface. Our ultimate goal is to generate clinically meaningful language profiles that can be compared to national norms, for the purposes of: (1) improving screening and diagnosis of social impairment in remote areas, (2) informing treatment planning by clarifying areas of strength and weakness, and (3) revolutionizing how we measure intervention efficacy. To accomplish these goals, we have begun to partner with colleagues and community organizations to deploy C-SALT in summer camps, in schools, and via the internet. Here, we describe our pilot efforts to collect data from child and adolescent participants with ASD, participants with other psychiatric or psychological conditions, and typical controls.
Assess the feasibility of using C-SALT, a low-cost computer program that children can operate independently, to gather vocalization data as part of a community-based social communication and motor battery.
C-SALT was administered to 67 children (mean age = 10.6 years; 77% male) enrolled in summer camps for children with disabilities or in general YMCA programs. Thirty-seven participants had ASD according to parent report, 18 were typically developing controls, and 12 had non-ASD clinical diagnoses or first-degree relatives with ASD. C-SALT was the last task in a 20-minute mobile battery.
Despite being the final task in the battery, C-SALT data were successfully collected from 80% of participants. Of the participants who did not complete C-SALT, 65% had ASD and 50% had parent-reported speech-language impairments (mean age: 11.29 years). In response to this finding, we developed C-SALT-PL, which contains paradigms modified to suit the needs of pre-literate or minimally verbal participants. Using largely automated methods (e.g., time stamps built into C-SALT output for each child), we have begun segmenting and analyzing the facial expression, gesture, and audio data collected via C-SALT. Although these analyses are preliminary, we expect participant vocalizations to diverge in two primary areas that affect social communication: acoustic properties of voice (pitch variation, volume control, shimmer, and jitter) and word choice (word frequency, lexical diversity, social/nonsocial focus). Our measure of sustained phonation, in particular, holds promise as a language-agnostic measure of vocal-motor control.
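To make these measures concrete, the sketch below shows simplified, illustrative versions of two of them: local jitter and shimmer (cycle-to-cycle variability in glottal period and peak amplitude, respectively) and a type-token ratio for lexical diversity. This is not C-SALT's actual analysis pipeline; the function names and inputs are hypothetical, and production analyses would typically rely on dedicated acoustic software rather than hand-rolled formulas.

```python
# Illustrative sketch only -- hypothetical helpers, not the C-SALT pipeline.

def local_jitter(periods):
    """Local jitter: mean absolute difference between consecutive
    glottal cycle periods, divided by the mean period (a fraction)."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amplitudes):
    """Local shimmer: the analogous ratio computed over the peak
    amplitude of each glottal cycle."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

def type_token_ratio(transcript):
    """Lexical diversity: unique word types over total word tokens."""
    tokens = transcript.lower().split()
    return len(set(tokens)) / len(tokens)

# A perfectly steady voice has zero jitter; varied wording raises TTR.
steady = local_jitter([0.010, 0.010, 0.010, 0.010])   # periods in seconds
ttr = type_token_ratio("the dog saw the cat")          # 4 types / 5 tokens
```

In practice, type-token ratio is sensitive to sample length, so length-normalized variants are often preferred when transcripts differ in duration.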
This project capitalizes on a natural synergy between computational linguistics and developmental psychopathology to precisely quantify real-world social communication difficulties in children with ASD. C-SALT and C-SALT-PL will be available for demonstration at IMFAR 2017, along with pilot data on the ability of these measures to distinguish diagnostic groups and quantify social communicative ability with high precision.