Research

This section describes the research projects I have been involved in recently. Please contact me if you are interested in viewing versions of any unpublished work.

Multimodal Speech
Convergence and Divergence in Dyadic Interaction
Demonyms and Allomorphy
English Derivational Affix Positioning

Multimodal Speech

While speech researchers often focus on the acoustic or articulatory details of spoken language, research increasingly demonstrates that hand and head movements, facial expressions, body posture, and similar behaviors are closely linked to speech. For example, hand gestures have been shown to aid speech processing and perception, and bodily movements may influence the production of speech. One issue that has proved problematic for linguistic research on multimodal speech is the need to have humans identify and annotate manual gesture. Speech gestures (movements of the tongue, jaw, lips, larynx, velum, etc.) are easily linked to words that have stable meanings in languages, but most manual gestures (in spoken language) do not have fixed meanings. Nonetheless, early researchers in the field identified manual gestures descriptively, using form and proposed function, which led to researcher- and study-dependent definitions of manual gesture. At issue are questions like how to define the start or end of a gesture, whether a complex movement counts as a single gesture or as two or more discrete gestures, and what ‘part’ of a gesture is timed to coincide with some acoustic or articulatory ‘anchor point’ in speech. These questions are made even more complex by the range of data collection methods that can be used to study multimodal speech – acoustic recordings of speech, video recordings of bodily movement, and a variety of motion tracking methods have all been used in different studies.

My research uses quantitative measurement techniques to automatically detect manual gesture and to analyze the relationship between manual gesture and speech. I have employed a computer vision method called optical flow to quantify manual movement from 2D video recordings, together with electromagnetic articulography (EMA) to simultaneously measure the movement of speech articulators. This makes it possible to measure quantitative properties of individual manual gestures, such as peak velocity magnitude, path length, and gesturing rate, as well as properties of the relationship between speech and manual gesture, such as the likelihood of correlation between speech and manual gesture in a given condition or over a span of time.
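
As an illustration of the kind of pipeline this involves, the sketch below computes a per-frame movement magnitude signal from a 2D video using OpenCV's Farneback dense optical flow and shows how such a signal could be correlated with an articulator velocity signal. The file name, region of interest, and EMA signal are hypothetical placeholders; this is a minimal sketch of the general approach, not the software used in the studies listed below.

    # Minimal sketch: quantify manual movement from 2D video with dense optical
    # flow (OpenCV's Farneback method). File names and the region of interest
    # are hypothetical placeholders.
    import cv2
    import numpy as np

    def movement_signal(video_path, roi=None):
        """Return a per-frame movement magnitude time series for a video."""
        cap = cv2.VideoCapture(video_path)
        ok, frame = cap.read()
        prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        magnitudes = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Dense optical flow: a (height, width, 2) array of pixel displacements.
            flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            speed = np.linalg.norm(flow, axis=2)
            if roi is not None:              # restrict to a region, e.g. the hands
                y0, y1, x0, x1 = roi
                speed = speed[y0:y1, x0:x1]
            magnitudes.append(speed.sum())   # total movement in this frame
            prev = gray
        cap.release()
        return np.array(magnitudes)

    # Hypothetical usage: correlate video-derived hand movement with an
    # EMA-derived articulator velocity signal, assumed to be resampled to the
    # video frame rate.
    hand_movement = movement_signal("speaker01_camera.mp4", roi=(200, 480, 0, 640))
    # jaw_velocity = ...  # loaded and resampled from the EMA recording
    # r = np.corrcoef(hand_movement, jaw_velocity)[0, 1]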

In addition to using more quantitative methods to collect and analyze data, I show that quantitative analysis can be performed on spontaneous multimodal speech in naturalistic settings. Linguistic researchers in multimodal speech have noted that the production of manual or bodily gestures can sometimes be biased or limited by the experimental task, and that more research in ecologically valid settings is needed. My research investigates whether spontaneous manual gestures and their association with speech are dependent on the speech task or speech context (such as demonstration versus conversation) in which they occur. I am also investigating how speech prosody interacts with manual gesture; for example, whether manual gestures occurring during phrase-initial regions in speech have different properties than manual gestures occurring during peak-intensity or phrase-final regions. The results of this work point to promising new directions for linguistic research on multimodality.

Associated Work

Gordon Danner, S., Vilela Barbosa, A., & Goldstein, L. 2017. Quantitative Analysis of Multimodal Speech Data. Journal of Phonetics. Manuscript submitted for publication.

Gordon Danner, S. 2017. Effects of Speech Context on Characteristics of Manual Gesture. Ms, USC

Samantha Gordon Danner, Louis Goldstein & Eric Vatikiotis-Bateson. 2017. Task-dependent Coordination of Vocal Tract and Manual Gestures. LSA 2017 Annual Meeting, Austin, TX.  (More information about software used coming soon)

Samantha Gordon Danner, Louis Goldstein, Eric Vatikiotis-Bateson, Rob Fuhrman & Adriano Vilela Barbosa. Using Optical Flow and Electromagnetic Articulography in Multimodal Speech Research. 16th Speech Science and Technology Conference, Western Sydney University (Parramatta, Australia).

Gordon Danner, S. 2016. On the Coordination of Vocal Tract and Manual Gestures. Ms, USC

Convergence and Divergence in Dyadic Interaction

In NIH-funded work with Dani Byrd, Louis Goldstein, Yoonjeong Lee, Sungbok Lee and Ben Parrell, we have employed a unique dual-EMA setup to collect articulatory and acoustic data from two speakers before, during, and after a dyadic interaction. Using a prosodically-controlled maze task, we investigate how speakers’ acoustic and articulatory speech behavior adapts to their dyad partner’s speech over the course of an interaction. We find that, in many cases, one speaker in the pair is more ‘malleable’ than the other speaker; however, the means by which speakers converge and diverge (e.g., acoustically, prosodically, and/or articulatorily), and whether speakers converge at all, vary by speaker pair. The variety of ways that speakers accommodate one another (or not) in dyadic interaction shows that speakers are likely to exhibit cognitive and/or motor control of convergence. This finding also suggests that convergence cannot be solely attributed to low-level imitation or other automatic processes; there may be a social or cognitive benefit to convergence that speakers make use of in interaction.
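
As a simple illustration of how convergence can be operationalized, the sketch below tests whether the absolute difference between two speakers on some per-trial measure (for example, articulation rate) shrinks across an interaction. The data are hypothetical, and this is only one possible operationalization; it is not necessarily the analysis used in the work cited below.

    # Illustrative sketch: does the absolute difference between two speakers on
    # some acoustic measure shrink across trials? The arrays are hypothetical.
    import numpy as np
    from scipy import stats

    speaker_a = np.array([5.1, 5.0, 4.9, 4.8, 4.8, 4.7])  # per-trial measure, speaker A
    speaker_b = np.array([4.2, 4.3, 4.4, 4.5, 4.6, 4.6])  # per-trial measure, speaker B

    difference = np.abs(speaker_a - speaker_b)
    trial = np.arange(len(difference))

    # A negative slope (difference decreasing over trials) is consistent with
    # convergence; a positive slope is consistent with divergence.
    slope, intercept, r, p, se = stats.linregress(trial, difference)
    print(f"slope = {slope:.3f}, p = {p:.3f}")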

Associated Work

Lee, Y., Gordon Danner, S., Parrell, B., Lee, S., Goldstein, L., & Byrd, D. 2017. Articulatory, Acoustic, and Prosodic Accommodation in a Cooperative Maze Navigation Task. PLoS ONE. Manuscript submitted for publication (under revision).

Yoonjeong Lee, Samantha Gordon Danner, Benjamin Parrell, Sungbok Lee, Louis Goldstein & Dani Byrd. 2016. Acoustic and Articulatory Measures of Convergence in a Cooperative Maze Navigation Task. 5th Joint Meeting of the Acoustical Society of America and the Acoustical Society of Japan (Honolulu, Hawaii). (Poster presented by Yoonjeong Lee)

Yoonjeong Lee, Samantha Gordon Danner, Benjamin Parrell, Sungbok Lee, Louis Goldstein & Dani Byrd. 2016. Prosodic Convergence During and After a Cooperative Maze Task. LabPhon 15, Cornell University.

Demonyms and Allomorphy

Demonyms are names for a person who is from or resides in a city, country, or region; e.g., a person from California is a Californian, a person from New York is a New Yorker, a person from China is Chinese, etc. In English, demonyms are typically formed with suffixes like -(i)an, -ese, -er, or -ite, but the rules for when to use each suffix are not always (exclusively) phonologically conditioned. Because there are so many different demonym suffixes in English, demonyms offer a unique opportunity to study allomorphy, the different forms that an affix may take. My research proposes that, when more than one allomorph produces an acceptable surface form, speakers rely on non-phonological information such as familiarity and frequency to produce an output demonym. To test this proposal, I surveyed speakers about the demonyms they preferred for real and fictional place names. The main finding of this work is that demonym allomorph distributions differ between items that speakers identified as familiar (place names or demonyms they had heard before) and items that speakers identified as unfamiliar. This result has implications for the study of synchronic language change and the representation of phonological knowledge.

Associated Work

Danner, S. 2016. Selectional Effects in Allomorph Competition. Proceedings of the Annual Meetings on Phonology, 2. dx.doi.org/10.3765/amp.v2i0.3763

Samantha Gordon. 2015. Factors Informing Conditioned Allomorph Selection. 28th CUNY Conference on Sentence Processing, USC.

Gordon, S. 2014. Selectional Effects in Allomorph Competition. Ms, USC

Samantha Gordon. 2014. What Do You Call a Person From…? Annual Meetings on Phonology 2014, MIT.

English Derivational Affix Positioning

In derivational morphology (the study of how roots and affixes combine to form words), it is typically understood that words have minimally a stem or root, and that various affixes can attach to the stem or root to form words with different meanings and parts of speech. The ways that affixes and roots combine (as root + suffix or prefix + root) are generally fixed within a language, yet speakers understand how to productively apply affixes to form novel words. This research investigates whether speakers recognize properties of affixes that serve to distinguish prefixes from suffixes, and whether speakers have mental representations of affix position as part of their knowledge of the language. A corpus study of English affixes examined whether there is an association between affix position and syllable shape, edge segments, or the conditional probabilities of bigrams at affix+stem junctions. The corpus study informed the design of an artificial language study using nonce stem and affix pairs (with English phonotactics), in which speakers formed words from stems and unmarked affixes, placing each affix in their preferred position, and then read complex words formed from the stem and affix pairs. Speakers’ position preferences and reading response latencies were measured. One finding of this research is that participants preferred to create prefixed words and that response latencies were shorter for prefixed words (even though many natural languages show a suffixing bias). Participants also demonstrated that knowledge of syllable shape was important in determining affix position.
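
For illustration, the sketch below computes the kind of junction measure mentioned above: the conditional probability of a stem-initial segment given the affix-final segment, estimated from a toy set of (affix, stem) pairs. The data and the orthographic segmentation are hypothetical; the actual corpus study would operate over phonological segments in a real lexicon.

    # Illustrative sketch of a junction measure: P(stem-initial segment |
    # affix-final segment), estimated over a toy set of (affix, stem) pairs.
    # The data are hypothetical, not the corpus used in the study.
    from collections import Counter

    prefixed = [("un", "happy"), ("un", "kind"), ("re", "write"), ("re", "read")]

    junction_bigrams = Counter()   # counts of (affix-final, stem-initial) pairs
    affix_finals = Counter()       # counts of affix-final segments

    for affix, stem in prefixed:
        junction_bigrams[(affix[-1], stem[0])] += 1
        affix_finals[affix[-1]] += 1

    # Conditional probability for each attested junction bigram
    for (final, initial), count in junction_bigrams.items():
        prob = count / affix_finals[final]
        print(f"P({initial!r} | {final!r}) = {prob:.2f}")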

Associated Work

Samantha Gordon Danner, Elsi Kaiser & Louis Goldstein. 2015. Positional Preference and Response Latency in a Complex Word Production Task. American International Morphology Meeting 3, University of Massachusetts Amherst.

Samantha Gordon Danner, Louis Goldstein & Elsi Kaiser. 2015. Speech Onset Latency and Preferred Affix Position. Architectures and Mechanisms of Language Processing, University of Malta.

Samantha Gordon, Elsi Kaiser & Louis Goldstein. 2015. Positioning English Derivational Affixes in the Lexicon. 9th International Conference on the Mental Lexicon, Niagara-on-the-Lake, ON, Canada.

Gordon, S. 2014. Deriving Attraction: Influences on the Affixal Position of Derivational Morphemes. Ms, USC
