A workshop held on July 9–10, 2011 at the University of Colorado at Boulder, funded by the National Science Foundation.
Natural language technology is moving from text retrieval and search applications to tasks that require genuine understanding of natural language, as well as interaction between language understanding and reasoning. Within the computational linguistics community, a common perspective on these natural language understanding tasks has emerged under the heading of “textual inference”. The aim is to develop systems that, given two natural language statements, can decide what the inferential relation between them is. Textual inference simplifies the general language understanding problem by restricting attention to direct inferences, avoiding complicated chains of inference and specialized world knowledge. Semantics as practiced by linguists could play a role in the development of textual inference systems, but most current work in linguistic semantics has a very different focus. This workshop brings together researchers interested in semantics, both “lexical” and “formal”, and in computational textual inference to discuss the virtues and drawbacks of various semantic approaches. The aim of the workshop is to make the community of semanticists more aware of the computational issues in natural language understanding and to expose computer scientists to a variety of semantic approaches.
July 9

|9:30||Cleo Condoravdi (PARC and Stanford) and Annie Zaenen (Stanford CSLI) Introduction to Textual Inference|
|10:00||Mark Sammons (UIUC) Tools to support textual inference|
|10:45||Lauri Karttunen and Annie Zaenen (Stanford CSLI) From syntax to abstract knowledge representation|
|13:00||James Allen (University of Rochester) Deep understanding and textual inference|
|14:00||Len Schubert (University of Rochester) Natural Logic-like inference and commonsense reasoning|
|15:30||Mark Sammons (UIUC) Inference chains based on transformation rules|
|16:30||Discussion on practical proposals to improve the analysis of textual inference|
July 10

|9:30||Larry Moss (Indiana University) Monotonicity and polarity in Natural Logic|
|10:00||Alex Djalali and Chris Potts (Stanford University) Synthetic logic characterizations of meanings extracted from large corpora|
|11:30||Jason Perry and Chung-chieh Shan (Rutgers University) Back to the model|
|12:30||Discussion led by Matthew Stone (Rutgers University)|
|14:30||Graham Katz (Georgetown University) Generics and habituals|
|15:30||Cleo Condoravdi (PARC and Stanford University) Linking semantics for modification|
|16:30||General Discussion introduced by Mats Rooth (Cornell University)|