Will Styler's Homepage

Will Styler

Post-Doctoral Research Fellow - University of Michigan Linguistics

Publications

Here's an up-to-date listing of my publications, along with PDF copies for most of them. Please see my full CV (PDF) for additional details. To see citations in other people's work, visit Will's Google Scholar Page.

Refereed Publications

A. Coetzee, P. S. Beddor, K. Shedden, W. Styler, and D. Wissing. Plosive voicing in Afrikaans: Differential cue weighting and tonogenesis. Journal of Phonetics, 66:185-216. 2018. - Download as a PDF file

W. Styler. On the Acoustical Features of Vowel Nasality in English and French. Journal of the Acoustical Society of America. 142(4):2469-2482. Oct. 2017. - Download as a PDF file

G. Savova, S. Pradhan, M. Palmer, W. Styler, W. Chapman, and N. Elhadad. Annotating the Clinical Text - MiPACQ, ShARe, SHARPn, and THYME Corpora. In Handbook of Linguistic Annotation. Ed. James Pustejovsky and Nancy Ide. Springer. 2017.

R. Scarborough, W. Styler, and L. Marques. Coarticulation and contrast: Neighborhood density conditioned phonetic variation in French. In Proceedings of the 18th International Congress of Phonetic Sciences, Glasgow, Aug. 2015. - Download as a PDF file

W. Styler, S. Bethard, S. Finan, M. Palmer, S. Pradhan, P. C. De Groen, B. Erickson, T. Miller, C. Lin, G. K. Savova, and J. Pustejovsky. Temporal annotation in the clinical domain. Transactions of the Association for Computational Linguistics, 2, 2014. - Download as a PDF file

R. Ikuta, W. Styler, M. Hamang, T. O’Gorman, and M. Palmer. Challenges of adding causation to Richer Event Descriptions. In Proceedings of the 2014 ACL EVENT Workshop. Association for Computational Linguistics, June 2014. - Download as a PDF file

W.-T. Chen and W. Styler. Anafora: A web-based general purpose annotation tool. In Proceedings of the 2013 NAACL HLT Demonstration Session, pages 14-19, Atlanta, Georgia, June 2013. Association for Computational Linguistics. - Download as a PDF file

D. Albright, A. Lanfranchi, A. Fredriksen, W. Styler, C. Warner, J. D. Hwang, J. D. Choi, D. Dligach, R. D. Nielsen, J. Martin, W. Ward, M. Palmer, and G. K. Savova. Towards comprehensive syntactic and semantic annotations of the clinical narrative. Journal of the American Medical Informatics Association, December 2012. - Download as a PDF file

R. Scarborough, W. Styler, and G. Zellou. Nasal Coarticulation in Lexical Perception: The Role of Neighborhood-Conditioned Variation. In Proceedings of the 17th International Congress of Phonetic Sciences, pages 1-4, Hong Kong, Aug. 2011. - Download as a PDF file

G. K. Savova, S. Bethard, W. Styler, J. Martin, and M. Palmer. Towards temporal relation discovery from the clinical narrative. In AMIA Annual Symposium Proceedings, page 445. AMIA, 2009. - Download as a PDF file

Non-Refereed Publications

W. Styler. Using Praat for Linguistic Research. Published in July 2011 for the 2011 LSA Linguistic Institute’s Praat Workshop, and continuously maintained at http://savethevowels.org/praat/.

Dissertation: 'On the Acoustical and Perceptual Features of Vowel Nasality'

Overview

Vowel nasality is, simply put, the difference in the vowel sound between the English words "pat" and "pant", or between the French "beau" and "bon". This phenomenon is used in languages around the world, but is relatively poorly understood from an acoustical standpoint, meaning that although we as human listeners can easily hear that a vowel is or isn't nasalized, it's quite difficult for us to measure or identify that nasality in a laboratory context.

The goal of my dissertation is to better understand vowel nasality in language by discovering not just what parts of the sound signal change in oral vs. nasal vowels, but which parts of the signal are actually used by listeners to perceive differences in nasality.

I've written up a summary of the process, aimed at a more general audience, here, or you can read the abstract below.

Dissertation Abstract

Although much is known about the linguistic function of vowel nasality, either contrastive (as in French) or coarticulatory (as in English), less is known about its perception. This study uses careful examination of production patterns, along with data from both machine learning and human listeners to establish which acoustical features are useful (and used) for identifying vowel nasality.

A corpus of 4,778 oral and nasal or nasalized vowels in English and French was collected, and data for 29 potential perceptual features was extracted. A series of linear mixed-effects regressions identified seven promising features with large oral-to-nasal differences, and highlighted some cross-linguistic differences in the relative importance of these features.

Two machine learning algorithms, Support Vector Machines and Random Forests, were trained on this data to identify the features or feature groupings that were most effective at predicting nasality token-by-token in each language. The list of promising features was thus narrowed to four: A1-P0, vowel duration, spectral tilt, and formant frequency/bandwidth.

These four features were manipulated in vowels in oral and nasal contexts in English, adding nasal features to oral vowels and reducing nasal features in nasalized vowels, in an attempt to influence oral/nasal classification. These stimuli were presented to native English listeners in a lexical choice task with phoneme masking, measuring oral/nasal classification accuracy and reaction time. Only modifications to vowel formant structure caused any perceptual change for listeners, resulting in increased reaction times, as well as increased oral/nasal confusion in the oral-to-nasal (feature addition) stimuli. Classification of already-nasal vowels was not affected by any modifications, suggesting a perceptual role for other acoustical characteristics alongside nasality-specific cues. A Support Vector Machine trained on the same stimuli showed a similar pattern of sensitivity to the experimental modifications.

Thus, based on both the machine learning and human perception results, formant structure, particularly F1 bandwidth, appears to be the primary cue to the perception of nasality in English. This close relationship of nasal- and oral-cavity derived acoustical cues leads to a strong perceptual role for both the oral and nasal aspects of nasal vowels.

Dissertation Details

Title: "On the Acoustical and Perceptual Features of Vowel Nasality"

Advisor: Dr. Rebecca Scarborough

Defense Date: March 18th, 2015

Download: Download a PDF Copy (3.4 MB) - BibTeX Citation

Related Work: