In day-to-day life, producing and comprehending language appears effortless—we can do it for hours on end, often while engaging in other activities. But in reality, it is breathtaking to consider the amount of information that must be processed, and the different types of knowledge that must be integrated. Here are just a few of the things we accomplish in the course of normal language comprehension: First, we have to locate the beginnings and endings of words in a continuous stream of sound. We have to make on-the-fly decisions about how the words should be put together in phrases, often deciding among multiple possibilities. And most complex of all, we have to integrate information from the linguistic stream with contextual information, including the visual scene, information about the speaker's state of mind or goals, our past experiences of the world, and so on. Much of my scientific research has explored how these many threads of information are put together during real-time language processing, and how children acquire the knowledge and skills necessary to accomplish the same feat.

Selected Publications:

Grodner, D., & Sedivy, J. (in press). The effects of speaker-specific information on pragmatic inferences. In N. Pearlmutter & E. Gibson (Eds.), The Processing and Acquisition of Reference. Cambridge, MA: MIT Press.

Sedivy, J. (2007). Implicatures in real-time conversation: A view from language processing research. Philosophy Compass, 2(3), 475-496. doi:10.1111/j.1747-9991.2007.00082.x

Yee, E., & Sedivy, J. (2006). Eye movements reveal transient semantic activation during spoken word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(1), 1-14.

Myung, J.-Y., Blumstein, S., & Sedivy, J. (2006). Playing on the typewriter, typing on the piano: Manipulation knowledge of objects. Cognition, 98, 224-243.

Sedivy, J. (2003). Pragmatic versus form-based accounts of referential contrast: Evidence for effects of informativity expectations. Journal of Psycholinguistic Research.

Sussman, R., & Sedivy, J. (2003). The time course of processing syntactic dependencies: Evidence from eye movements during spoken narratives. Language and Cognitive Processes, 18, 143-163.

Spivey, M., Tanenhaus, M., Eberhard, K., & Sedivy, J. (2002). Eye movements and spoken language comprehension: Effects of visual context on syntactic ambiguity resolution. Cognitive Psychology, 45, 447-481.

Sedivy, J. (2002). Invoking discourse-based contrast sets and resolving syntactic ambiguities. Journal of Memory and Language, 341-370.

Nadig, A., & Sedivy, J. (2002). Evidence of perspective-taking constraints in children's on-line reference resolution. Psychological Science, 13(4), 329-336.

Sedivy, J., Tanenhaus, M., Chambers, C., & Carlson, G. (1999). Achieving incremental semantic interpretation through contextual representation. Cognition, 71, 109-147.

Spivey-Knowlton, M., & Sedivy, J. (1995). Parsing attachment ambiguities with multiple constraints. Cognition, 55, 227-267.

Tanenhaus, M., Spivey-Knowlton, M., Eberhard, K., & Sedivy, J. (1995). Integration of visual and linguistic information during spoken language comprehension. Science, 268, 1632-1634.