Background: Deficits in nonverbal communication are required for an autism spectrum disorder (ASD) diagnosis. Unfortunately, nonverbal communication is poorly captured by informant-based questionnaires, the standard gatekeeper of ASD screening. Thus, novel measures are needed to assess nonverbal communication, which is central to clinical decision making. During the diagnostic process, clinicians integrate developmental history, including when and how atypical behaviors manifest, with behavioral observation. Behavioral observation includes attention to both quantitative features of behavior (e.g., how often a child makes eye contact) and qualitative features (e.g., how good the eye contact was). Questionnaires are better suited to measuring quantitative features of behavior; thus, we sought to explore how classification of ASD versus typically developing controls (TDCs) would improve when adding a measure well suited to quantifying behavioral quality: specifically, features of spontaneously produced co-speech gestures. The literature on co-speech gesture in ASD is small but suggests that qualitative differences, e.g., in how gesture is formed, used functionally, or integrated with speech, best discriminate ASD from controls. Here we employ gesture as an example of broader differences in nonverbal communication.
To determine whether continuous gesture variables, including frequency, size, and amount of informational content, could predict diagnostic group membership (ASD versus TDC).
Adults with ASD (n=24) and TDCs (n=10) were matched on chronological age, gender, and full scale IQ (see Table 1; analyses of data on 14 additional TDCs will be integrated into this presentation by May 2017). Participants completed a 20-minute referential communication task, administered via two networked laptops, designed to elicit back-and-forth conversational interaction in a controlled setting. All hand gestures produced during the task were tagged and coded for a variety of semantic and motor features by coders trained to reliability. Three continuous variables were included in this analysis: rate, size, and amount of information in gesture. All participants completed the Social Responsiveness Scale: Self-Report (SRS-SR).
We predicted that the combination of the SRS-SR, which measures the frequency of social impairment and repetitive behaviors/interests in everyday contexts, and the gesture variables, which measure qualitative features of behavior in vivo, would have particularly high predictive power for ASD diagnosis. Logistic regression was used to test this hypothesis. Entered alone, SRS-SR scores predicted diagnostic group membership with 82.4% accuracy (p<.001); entered alone, the gesture variables predicted group membership with 85.3% accuracy (p=.005). When the predictors were combined into a single model, 97.1% classification accuracy was achieved, with between 57% and 81% of the variance explained (p<.001). All participants with ASD were correctly classified, as were 9/10 TDC participants.
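The combined model described above can be sketched as a logistic regression with the questionnaire score and gesture variables as joint predictors. This is an illustrative sketch only: the data below are simulated, and all variable names, group means, and effect sizes are hypothetical, not taken from the study.

```python
# Hypothetical sketch of the combined classification model:
# logistic regression predicting group (ASD vs. TDC) from an
# SRS-SR total score plus three gesture variables (rate, size,
# informational content). All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_asd, n_tdc = 24, 10  # sample sizes mirror the abstract
# Columns: SRS-SR score, gesture rate, gesture size, information content.
X_asd = rng.normal(loc=[70.0, 5.0, 2.0, 3.0], scale=[10, 1, 0.5, 1], size=(n_asd, 4))
X_tdc = rng.normal(loc=[45.0, 8.0, 3.0, 5.0], scale=[10, 1, 0.5, 1], size=(n_tdc, 4))
X = np.vstack([X_asd, X_tdc])
y = np.array([1] * n_asd + [0] * n_tdc)  # 1 = ASD, 0 = TDC

model = LogisticRegression(max_iter=1000).fit(X, y)
acc = accuracy_score(y, model.predict(X))
print(f"In-sample classification accuracy: {acc:.1%}")
```

With a sample this small, in-sample accuracy will be optimistic; cross-validation or a held-out sample would be needed to estimate generalization.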
Features of co-speech gesture can independently predict ASD diagnosis, and combining them with an informant-based questionnaire yields very high predictive power, suggesting that these two types of measurement capture independent variance associated with ASD diagnosis. Questionnaires are an efficient measure of everyday behavior; however, no efficient proxy for behavioral observation exists. Here we demonstrate that features of co-speech gesture capture behavioral differences in ASD that are not easily measured by a questionnaire and that, when combined with a questionnaire, can have strong predictive power for ASD diagnosis.