Patents by Inventor David Leo Wright Hall
David Leo Wright Hall has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10276160
Abstract: An interaction assistant conducts multiple-turn interaction dialogs with a user in which context is maintained between turns, and the system manages the dialog to achieve an inferred goal for the user. The system includes a linguistic interface to a user and a parser for processing linguistic events from the user. A dialog manager of the system is configured to receive alternative outputs from the parser, to select an action, and to cause the action to be performed based on the received alternative outputs. The system further includes a dialog state for an interaction with the user, and the alternative outputs represent alternative transitions from a current dialog state to a next dialog state. The system further includes a storage for a plurality of templates, and each dialog state is defined in terms of an interrelationship of one or more instances of the templates.
Type: Grant
Filed: November 10, 2016
Date of Patent: April 30, 2019
Assignee: Semantic Machines, Inc.
Inventors: Jacob Daniel Andreas, Taylor Darwin Berg-Kirkpatrick, Pengyu Chen, Jordan Rian Cohen, Laurence Steven Gillick, David Leo Wright Hall, Daniel Klein, Michael Newman, Adam David Pauls, Daniel Lawrence Roth, Jesse Daniel Eskes Rusak, Andrew Robert Volpe, Steven Andrew Wegmann
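The abstract describes a dialog manager choosing among alternative parser outputs, each representing a transition to a next dialog state. The patented implementation is not public; the sketch below is only a hypothetical stand-in illustrating that loop, with invented names and scores:

```python
# Loose illustration of the dialog-manager turn described above: the parser
# yields alternative next-state transitions, the manager picks one, applies
# its state updates, and performs its action. All names here are invented.
def manage_turn(state, alternatives, score):
    """Pick the best-scoring transition and apply its updates to the state."""
    best = max(alternatives, key=score)
    next_state = dict(state)
    next_state.update(best["updates"])   # transition to the next dialog state
    return next_state, best["action"]

state = {"goal": None}
alts = [
    {"action": "ask_city", "updates": {"goal": "book_flight"}, "p": 0.7},
    {"action": "greet", "updates": {}, "p": 0.3},
]
state, action = manage_turn(state, alts, lambda a: a["p"])
print(action, state["goal"])  # ask_city book_flight
```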
-
Publication number: 20190103107
Abstract: A method includes receiving an utterance at a computerized automated assistant system, and detecting, via a date/time constraint module of the computerized automated assistant system, one or more constraints in the utterance associated with a date or time. The utterance is associated with a domain. The method further comprises generating, via the date/time constraint module, a periodic set for each of the one or more constraints associated with the date or time, and combining, via the date/time constraint module, the one or more periodic sets. The method further comprises processing, via a dialogue manager module of the computerized automated assistant system, the combined periodic sets to determine an action, and executing the action at the computerized automated assistant system.
Type: Application
Filed: July 13, 2018
Publication date: April 4, 2019
Applicant: Semantic Machines, Inc.
Inventors: Jordan Rian Cohen, David Leo Wright Hall, Jason Andrew Wolfe, Daniel Lawrence Roth, Daniel Klein
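The application does not disclose how a "periodic set" is represented. As a loose illustration only, one can model each date/time constraint as a membership predicate over timestamps and combine sets by intersection; every name below is hypothetical, not from the filing:

```python
from datetime import datetime

# Hypothetical sketch: a "periodic set" as a membership predicate over datetimes.
def weekly(weekday):
    """Periodic set of all times falling on a given weekday (0 = Monday)."""
    return lambda t: t.weekday() == weekday

def daily_hours(start_hour, end_hour):
    """Periodic set of all times inside a daily hour window."""
    return lambda t: start_hour <= t.hour < end_hour

def combine(*periodic_sets):
    """Combine constraints by intersection: a time must satisfy all of them."""
    return lambda t: all(p(t) for p in periodic_sets)

# "Friday afternoon" as the intersection of two periodic constraints.
friday_afternoon = combine(weekly(4), daily_hours(12, 18))
print(friday_afternoon(datetime(2019, 4, 5, 15, 0)))  # 2019-04-05 is a Friday
```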
-
Publication number: 20190103092
Abstract: A method for a dialogue system includes establishing a dialogue session between an application executing on a server and a remote machine. The dialogue session includes one or more utterances received from a user at the remote machine. A natural language processing machine identifies a request associated with a computer-readable representation of an utterance. A dialogue expansion machine generates a plurality of alternative actions for responding to the request. A previously-trained machine learning confidence model assesses a confidence score for each alternative. If the highest confidence score does not satisfy a threshold, the plurality of alternatives, including the top alternative, are transmitted to a remote machine (which may be the same or a different remote machine) for review by a human reviewer. After the dialogue system and/or the human reviewer select an alternative, computer-readable instructions defining the selected alternative are executed.
Type: Application
Filed: July 16, 2018
Publication date: April 4, 2019
Applicant: Semantic Machines, Inc.
Inventors: Jesse Daniel Eskes Rusak, David Leo Wright Hall, Jason Andrew Wolfe, Daniel Lawrence Roth, Daniel Klein, Jordan Rian Cohen
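The routing decision in this abstract (execute the top alternative when confident, escalate everything to a human reviewer otherwise) can be sketched as follows. The scoring model, threshold value, and all names are invented for illustration; the filing does not specify them:

```python
# Hypothetical sketch of confidence-threshold routing: score the candidate
# actions and escalate to a human reviewer when the top score is too low.
def route(alternatives, score, threshold=0.8):
    """Return ('auto', best) if confident, else ('review', ranked list)."""
    ranked = sorted(alternatives, key=score, reverse=True)
    best = ranked[0]
    if score(best) >= threshold:
        return ("auto", best)      # execute the top alternative directly
    return ("review", ranked)      # send every alternative to a reviewer

alts = ["book_flight", "check_weather", "unknown"]
conf = {"book_flight": 0.92, "check_weather": 0.45, "unknown": 0.1}
print(route(alts, conf.get))  # → ('auto', 'book_flight')
```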
-
Publication number: 20190066660
Abstract: An automated natural dialogue system provides a combination of structure and flexibility to allow for ease of annotation of dialogues as well as learning and expanding the capabilities of the dialogue system based on natural language interactions.
Type: Application
Filed: August 28, 2018
Publication date: February 28, 2019
Applicant: Semantic Machines, Inc.
Inventors: Percy Shuo Liang, David Leo Wright Hall, Jesse Daniel Eskes Rusak, Daniel Klein
-
Publication number: 20180374479
Abstract: A system that provides a sharable language interface for implementing automated assistants in new domains and applications. A dialogue assistant that is trained in a first domain can receive a specification in a second domain. The specification can include language structure data such as schemas, recognizers, resolvers, constraints and invariants, actions, language hints, generation templates, and other data. The specification data is applied to the automated assistant to enable the automated assistant to provide interactive dialogue with a user in a second domain associated with the received specification. In some instances, portions of the specification may be automatically mapped to portions of the first domain. By learning new domains and applications through receipt of objects and properties rather than by retooling the interface entirely, the present system is much more efficient at learning how to provide interactive dialogue in new domains than previous systems.
Type: Application
Filed: March 2, 2018
Publication date: December 27, 2018
Applicant: Semantic Machines, Inc.
Inventors: David Leo Wright Hall, Daniel Klein, David Ernesto Heekin Burkett, Jordan Rian Cohen, Daniel Lawrence Roth
-
Publication number: 20180350349
Abstract: A system that allows non-engineer administrators, without programming, machine language, or artificial intelligence system knowledge, to expand the capabilities of a dialogue system. The dialogue system may have a knowledge system, user interface, and learning model. A user interface allows non-engineers to utilize the knowledge system, defined by a small set of primitives and a simple language, to annotate a user utterance. The annotation may include selecting actions to take based on the utterance and subsequent actions and configuring associations. A dialogue state is continuously updated and provided to the user as the actions and associations take place. Rules are generated based on the actions, associations, and dialogue state that allow for computing a wide range of results.
Type: Application
Filed: February 23, 2018
Publication date: December 6, 2018
Applicant: Semantic Machines, Inc.
Inventors: Percy Shuo Liang, David Leo Wright Hall, Joshua James Clausman
-
Publication number: 20180308481
Abstract: A system that transforms queries for each dialogue domain into constraint graphs, including both constraints explicitly provided by the user as well as implicit constraints that are inherent to the domain. Once all the domain-specific constraints have been collected into a graph, general-purpose domain-independent algorithms can be used to draw inferences for both intent disambiguation and constraint propagation. Given a candidate interpretation of a user utterance as the posting, modification, or retraction of a constraint, constraint inference techniques such as arc consistency and satisfiability checking can be used to answer questions. The underlying engine can also handle soft constraints, in cases where the constraint may be violated for some cost or in cases where there are different degrees of violations.
Type: Application
Filed: April 20, 2018
Publication date: October 25, 2018
Applicant: Semantic Machines, Inc.
Inventors: Jordan Cohen, Daniel Klein, David Leo Wright Hall, Jason Wolfe, Daniel Roth
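Arc consistency, the general-purpose technique this abstract names, prunes a variable's domain values that have no supporting value in a neighboring variable. The patented engine is not public; the toy constraint graph below only illustrates that standard revision step, with invented variables and constraints:

```python
# Illustrative sketch of an AC-3-style "revise" step over a tiny constraint
# graph: drop values of x with no supporting value of y under `allowed`.
def revise(domains, x, y, allowed):
    """Prune unsupported values from domains[x]; return True if anything changed."""
    pruned = {vx for vx in domains[x]
              if not any((vx, vy) in allowed for vy in domains[y])}
    domains[x] -= pruned
    return bool(pruned)

# Two variables: meeting day and room, with an availability constraint.
domains = {"day": {"mon", "tue", "wed"}, "room": {"A", "B"}}
allowed = {("mon", "A"), ("tue", "B")}       # (day, room) pairs that work
revise(domains, "day", "room", allowed)
print(domains["day"])  # "wed" is pruned: no room supports it
```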
-
Publication number: 20180261205
Abstract: A system that allows non-engineer administrators, without programming, machine language, or artificial intelligence system knowledge, to expand the capabilities of a dialogue system. The dialogue system may have a knowledge system, user interface, and learning model. A user interface allows non-engineers to utilize the knowledge system, defined by a small set of primitives and a simple language, to annotate a user utterance. The annotation may include selecting actions to take based on the utterance and subsequent actions and configuring associations. A dialogue state is continuously updated and provided to the user as the actions and associations take place. Rules are generated based on the actions, associations, and dialogue state that allow for computing a wide range of results.
Type: Application
Filed: May 8, 2018
Publication date: September 13, 2018
Applicant: Semantic Machines, Inc.
Inventors: Percy Shuo Liang, David Leo Wright Hall, Joshua James Clausman
-
Publication number: 20180246954
Abstract: A system that generates natural language content. The system generates and maintains a dialogue state representation having a process view, query view, and data view. The three-view dialogue state representation is continuously updated during discourse between an agent and a user, and rules can be automatically generated based on the discourse. Upon a content generation event, an object description can be generated based on the dialogue state representation. A string is then determined from the object description, using a hybrid approach of the automatically generated rules and other rules learned from annotation and other user input. The string is translated to text or speech and output by the agent. The present system also incorporates learning techniques, for example when ranking output and processing annotation templates.
Type: Application
Filed: February 8, 2018
Publication date: August 30, 2018
Applicant: Semantic Machines, Inc.
Inventors: Jacob Daniel Andreas, David Leo Wright Hall, Daniel Klein, Adam Pauls
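The generation pipeline described above goes from a dialogue state to an object description and then to a string. As a minimal sketch only, the final step can be pictured as a rule table mapping an object description to a template string; the rule names, description format, and template are all invented here, not taken from the filing:

```python
# Hypothetical sketch of the last stage of the pipeline: render an object
# description into an output string via a (normally learned) rule table.
RULES = {
    "confirm_booking": "Your {item} is booked for {time}.",
}

def generate(obj):
    """Look up a generation rule for the description type and fill it in."""
    template = RULES[obj["type"]]
    return template.format(**obj["args"])

desc = {"type": "confirm_booking",
        "args": {"item": "flight", "time": "3 PM Friday"}}
print(generate(desc))  # Your flight is booked for 3 PM Friday.
```

In the patented system the rules are generated automatically from discourse and learned from annotation rather than hand-written as here.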
-
Publication number: 20180226068
Abstract: A conversational system receives an utterance, and a parser performs a parsing operation on the utterance, resulting in some words being parsed and some words not being parsed. For the words that are not parsed, words or phrases determined to be unimportant are ignored. The resulting unparsed words are processed to determine the likelihood that they are important and whether they should be addressed by the automated assistant. For example, if a score associated with an important unparsed word achieves a particular threshold, then a course of action to take for the utterance may include providing a message that the portion of the utterance associated with the important unparsed word cannot be handled.
Type: Application
Filed: January 31, 2018
Publication date: August 9, 2018
Applicant: Semantic Machines, Inc.
Inventors: David Leo Wright Hall, Daniel Klein
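The thresholding this abstract describes can be sketched in a few lines: score each unparsed word for importance, and emit a "can't handle" message when an important one remains. The stopword list, scores, and message wording below are invented for illustration:

```python
# Hypothetical sketch: ignore unimportant unparsed words, and flag the
# important ones whose score reaches the threshold.
STOPWORDS = {"please", "um", "kindly"}

def respond_to_unparsed(unparsed_words, importance, threshold=0.5):
    """Return a can't-handle message for important unparsed words, else None."""
    important = [w for w in unparsed_words
                 if w not in STOPWORDS and importance(w) >= threshold]
    if important:
        return f"Sorry, I can't handle: {', '.join(important)}"
    return None  # nothing important was left unparsed; proceed normally

scores = {"refund": 0.9, "um": 0.0, "basically": 0.2}
print(respond_to_unparsed(["um", "basically", "refund"],
                          lambda w: scores.get(w, 0.0)))
```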
-
Publication number: 20180174585
Abstract: An interaction assistant conducts multiple-turn interaction dialogs with a user in which context is maintained between turns, and the system manages the dialog to achieve an inferred goal for the user. The system includes a linguistic interface to a user and a parser for processing linguistic events from the user. A dialog manager of the system is configured to receive alternative outputs from the parser, to select an action, and to cause the action to be performed based on the received alternative outputs. The system further includes a dialog state for an interaction with the user, and the alternative outputs represent alternative transitions from a current dialog state to a next dialog state. The system further includes a storage for a plurality of templates, and each dialog state is defined in terms of an interrelationship of one or more instances of the templates.
Type: Application
Filed: February 14, 2018
Publication date: June 21, 2018
Inventors: Jacob Andreas, Taylor D. Berg-Kirkpatrick, Pengyu Chen, Jordan R. Cohen, Laurence S. Gillick, David Leo Wright Hall, Daniel Klein, Michael Newman, Adam D. Pauls, Daniel L. Roth, Jesse Daniel Eskes Rusak, Andrew R. Volpe, Steven A. Wegmann
-
Publication number: 20180114522
Abstract: A system eliminates alignment processing and performs TTS functionality using a new neural architecture. The neural architecture includes an encoder and a decoder. The encoder receives an input and encodes it into vectors. The encoder applies a sequence of transformations to the input and generates a vector representing the entire sentence. The decoder takes the encoding and outputs an audio file, which can include compressed audio frames.
Type: Application
Filed: October 24, 2017
Publication date: April 26, 2018
Applicant: Semantic Machines, Inc.
Inventors: David Leo Wright Hall, Daniel Klein, Daniel Roth, Lawrence Gillick, Andrew Maas, Steven Wegmann
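The abstract specifies only the dataflow: an encoder folds a whole sentence into a fixed-size vector, and a decoder emits audio frames from that vector. The toy below shows that shape-level dataflow only; it contains no real neural network, and every function is an invented stand-in:

```python
# Shape-level sketch of the encoder/decoder dataflow described above.
# The arithmetic is arbitrary; only the input/output shapes are the point.
import random

def encode(sentence, dim=8):
    """Fold character codes into a fixed-size sentence vector (toy encoder)."""
    vec = [0.0] * dim
    for i, ch in enumerate(sentence):
        vec[i % dim] += ord(ch) / 1000.0
    return vec

def decode(vec, n_frames=3, frame_size=4):
    """Emit a fixed number of 'audio frames' from the sentence vector (toy)."""
    rng = random.Random(42)
    return [[v * rng.random() for v in vec[:frame_size]]
            for _ in range(n_frames)]

frames = decode(encode("hello world"))
print(len(frames), len(frames[0]))  # 3 frames of 4 values each
```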
-
Publication number: 20180061408
Abstract: An automated assistant automatically recognizes speech, decodes paraphrases in the recognized speech, performs an action or task based on the decoder output, and provides a response to the user. The response may be text or audio, and may be translated to include paraphrasing. The automatically recognized speech may be processed to determine partitions in the speech, which may in turn be processed to identify paraphrases in the partitions. A decoder may process an input utterance text to identify paraphrase content to include in a segment or sentence. The decoder may paraphrase the input utterance to make the utterance, updated with one or more paraphrases, more easily parsed by a parser. A translator may process a generated response to make the response sound more natural. The translator may replace content of the generated response with paraphrase content based on the state of the conversation with the user, including salience data.
Type: Application
Filed: August 4, 2017
Publication date: March 1, 2018
Applicant: Semantic Machines, Inc.
Inventors: Jacob Daniel Andreas, David Ernesto Heekin Burkett, Pengyu Chen, Jordan Rian Cohen, Gregory Christopher Durrett, Laurence Steven Gillick, David Leo Wright Hall, Daniel Klein, Adam David Pauls, Daniel Lawrence Roth, Jesse Daniel Eskes Rusak, Yan Virin, Charles Clayton Wooters
-
Publication number: 20170352344
Abstract: The intonation model of the present technology disclosed herein assigns different words within a sentence to be prominent, analyzes multiple prominence possibilities (in some cases, all prominence possibilities), and learns parameters of the model using large amounts of data. Unlike previous systems, intonation patterns are discovered from data.
Type: Application
Filed: February 9, 2017
Publication date: December 7, 2017
Applicant: Semantic Machines, Inc.
Inventors: Taylor Darwin Berg-Kirkpatrick, William Hui-Dee Chang, David Leo Wright Hall, Daniel Klein
-
Publication number: 20170147554
Abstract: A method for configuring an automated dialogue system uses traces of interactions via a graphical user interface (GUI) for an application. Each trace includes interactions in the context of a plurality of presentations of the GUI. Elements of one or more presentations of the GUI are identified, and templates are associated with portions of the trace. Each template has one or more defined inputs and a defined output. For each template of the plurality of templates, the portions of the traces are processed to automatically configure the template by specifying a procedure for providing values of inputs to the template via the GUI and obtaining a value of an output. The automated dialogue system is configured with the configured templates, thereby avoiding manual configuration of the dialogue system.
Type: Application
Filed: November 22, 2016
Publication date: May 25, 2017
Inventors: Pengyu Chen, Jordan R. Cohen, Laurence S. Gillick, David Leo Wright Hall, Daniel Klein, Adam D. Pauls, Daniel L. Roth, Jesse Daniel Eskes Rusak
-
Publication number: 20170140755
Abstract: An interaction assistant conducts multiple-turn interaction dialogs with a user in which context is maintained between turns, and the system manages the dialog to achieve an inferred goal for the user. The system includes a linguistic interface to a user and a parser for processing linguistic events from the user. A dialog manager of the system is configured to receive alternative outputs from the parser, to select an action, and to cause the action to be performed based on the received alternative outputs. The system further includes a dialog state for an interaction with the user, and the alternative outputs represent alternative transitions from a current dialog state to a next dialog state. The system further includes a storage for a plurality of templates, and each dialog state is defined in terms of an interrelationship of one or more instances of the templates.
Type: Application
Filed: November 10, 2016
Publication date: May 18, 2017
Inventors: Jacob Andreas, Taylor D. Berg-Kirkpatrick, Pengyu Chen, Jordan R. Cohen, Laurence S. Gillick, David Leo Wright Hall, Daniel Klein, Michael Newman, Adam D. Pauls, Daniel L. Roth, Jesse Daniel Eskes Rusak, Andrew R. Volpe, Steven A. Wegmann
-
Publication number: 20170118344
Abstract: An approach to providing communication assistance to an operator of a vehicle makes use of software having a first component executing on a personal device of the operator as well as a second component executing on a server in communication with the personal device. In some implementations, handling a call involves establishing a first two-way audio link between the server and the calling device, and a second two-way audio link between the server and the user device. The server passes some of the audio from the calling device to the user device, and monitors a user's voice input, or lack thereof, to determine how to handle the call.
Type: Application
Filed: October 20, 2016
Publication date: April 27, 2017
Inventors: Jordan R. Cohen, Daniel L. Roth, David Leo Wright Hall, Jesse Daniel Eskes Rusak, Andrew Robert Volpe, Sean Daniel True, Damon R. Pender, Laurence S. Gillick, Yan Virin