Parsing for Meaning Understanding (EPO) Patents (Class 704/E15.026)
  • Patent number: 11874844
    Abstract: From a set of natural language text documents, a concept tree is constructed. For a node in the concept tree, a polarity of the subset of documents represented by the node is scored. A second set of natural language text documents is added to the subset, resulting in a modified subset whose polarity score falls within a predefined neutral polarity score range. From the modified subset, a bin of sentences is selected according to a sentence selection parameter, each sentence in the bin being extracted from a selected document in the modified subset. Any sentence having a factuality score below a threshold factuality score is removed from the bin. From the filtered bin of sentences, a new natural language text document is generated using a transformer deep learning narration generation model.
    Type: Grant
    Filed: March 9, 2023
    Date of Patent: January 16, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Aaron K. Baughman, Gray Franklin Cannon, Stephen C. Hammer, Shikhar Kwatra
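The staged pipeline this abstract describes (balance a document subset to a neutral polarity range, then drop low-factuality sentences before narration) can be sketched roughly as follows. The scoring fields, the neutral range, and all function names are invented placeholders for illustration, not the patented implementation:

```python
# Minimal sketch of two of the abstract's filtering stages, under assumed
# per-document "polarity" and per-sentence "factuality" scores in [-1, 1]
# and [0, 1] respectively.

NEUTRAL_RANGE = (-0.2, 0.2)  # assumed "predefined neutral polarity score range"

def polarity(docs):
    """Mean per-document polarity of a subset (placeholder scoring)."""
    return sum(d["polarity"] for d in docs) / len(docs)

def balance_subset(subset, candidates):
    """Add candidate documents until the subset's polarity is near-neutral."""
    subset = list(subset)
    for doc in sorted(candidates, key=lambda d: d["polarity"]):
        lo, hi = NEUTRAL_RANGE
        if lo <= polarity(subset) <= hi:
            break
        subset.append(doc)
    return subset

def filter_factual(sentences, threshold=0.5):
    """Remove sentences whose factuality score falls below the threshold."""
    return [s for s in sentences if s["factuality"] >= threshold]
```

A transformer narration model would then be run over the surviving sentences; that step is omitted here.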
  • Patent number: 11848013
    Abstract: Implementations set forth herein relate to an automated assistant capable of bypassing a solicitation of the user for supplemental data needed to complete an action when a previously-queried application can provide that data. For instance, when a user invokes the automated assistant to complete a first action with a first application, the user may provide many pertinent details. Those details may be useful to a second application that the user subsequently invokes via the automated assistant to complete a second action. To save the user from having to repeat the details to the automated assistant, the automated assistant can interact with the first application to obtain any information essential for the second application to complete the second action. The automated assistant can then provide that information to the second application without soliciting the user for it.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: December 19, 2023
    Assignee: GOOGLE LLC
    Inventors: Scott Davies, Ruxandra Davies
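The slot-reuse idea above can be sketched as a fallback chain: fill each required slot from applications the assistant already queried, and ask the user only for what remains. The `Application` class and slot names are invented for this example:

```python
# Hypothetical sketch: reuse data a prior application already holds instead of
# re-soliciting the user.

class Application:
    def __init__(self, name, known_slots):
        self.name = name
        self.known_slots = dict(known_slots)  # details gathered earlier from the user

    def query_slot(self, slot):
        return self.known_slots.get(slot)

def complete_action(required_slots, prior_apps, ask_user):
    """Fill required slots from previously-queried apps first; fall back to the user."""
    filled = {}
    for slot in required_slots:
        value = None
        for app in prior_apps:
            value = app.query_slot(slot)
            if value is not None:
                break
        filled[slot] = value if value is not None else ask_user(slot)
    return filled
```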
  • Patent number: 11687539
    Abstract: From a set of natural language text documents, a concept tree is constructed. For a node in the concept tree, a polarity of the subset of documents represented by the node is scored. A second set of natural language text documents is added to the subset, resulting in a modified subset whose polarity score falls within a predefined neutral polarity score range. From the modified subset, a bin of sentences is selected according to a sentence selection parameter, each sentence in the bin being extracted from a selected document in the modified subset. Any sentence having a factuality score below a threshold factuality score is removed from the bin. From the filtered bin of sentences, a new natural language text document is generated using a transformer deep learning narration generation model.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: June 27, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Aaron K. Baughman, Gray Franklin Cannon, Stephen C. Hammer, Shikhar Kwatra
  • Patent number: 10162456
    Abstract: Touch events can be predicted relative to a visual display by maintaining a database of aggregated touch event history data, relative to the visual display, gathered from a plurality of touch screen devices. The database can be queried according to a set of input parameters defining an environment for use of the visual display. The query results can be analyzed to predict a set of touch events within the environment, based upon inferences obtained from the results. A representation of the set of touch events can be displayed along with the visual display.
    Type: Grant
    Filed: June 24, 2016
    Date of Patent: December 25, 2018
    Assignee: International Business Machines Corporation
    Inventors: Trudy L. Hewitt, Debra J. McKinney, Christina L. Wetli
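A rough sketch of the query-and-predict loop above, with an in-memory list standing in for the aggregated history database; the environment parameters and region names are assumptions for illustration:

```python
# Predict likely touch regions for a display by querying aggregated touch
# history filtered to a matching environment.

from collections import Counter

def query_history(history, **params):
    """Select touch events whose recorded context matches all input parameters."""
    return [e for e in history
            if all(e["context"].get(k) == v for k, v in params.items())]

def predict_touches(history, top_n=2, **params):
    """Rank screen regions by touch frequency within the matching environment."""
    counts = Counter(e["region"] for e in query_history(history, **params))
    return [region for region, _ in counts.most_common(top_n)]
```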
  • Patent number: 9406025
    Abstract: Touch events can be predicted relative to a visual display by maintaining a database of aggregated touch event history data, relative to the visual display, gathered from a plurality of touch screen devices. The database can be queried according to a set of input parameters defining an environment for use of the visual display. The query results can be analyzed to predict a set of touch events within the environment, based upon inferences obtained from the results. A representation of the set of touch events can be displayed along with the visual display.
    Type: Grant
    Filed: June 4, 2014
    Date of Patent: August 2, 2016
    Assignee: International Business Machines Corporation
    Inventors: Trudy L. Hewitt, Debra J. McKinney, Christina L. Wetli
  • Publication number: 20100217591
    Abstract: The present invention provides systems, software, and methods for accurate vowel detection in speech-to-text conversion. The method includes the steps of applying a voice recognition algorithm to a first user speech input so as to detect known words and residual undetected words, and detecting at least one undetected vowel in the residual undetected words by applying a vowel recognition algorithm fitted to the user's vowels from the known words, so as to accurately detect the vowels in the undetected words and enhance the conversion of voice to text.
    Type: Application
    Filed: January 8, 2008
    Publication date: August 26, 2010
    Inventor: Avraham Shpigel
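The two-pass idea above (fit vowel templates from the user's correctly decoded words, then classify vowels in the residual words) can be sketched with toy one-dimensional formant-like features; the feature values and nearest-template rule are simplifying assumptions, not the patent's algorithm:

```python
# User-fitted vowel recognition sketch: average each vowel's feature over the
# known words, then classify an unknown vowel by the nearest template.

def fit_vowel_templates(known_samples):
    """Average each vowel's feature over the user's correctly decoded words."""
    sums, counts = {}, {}
    for vowel, feature in known_samples:
        sums[vowel] = sums.get(vowel, 0.0) + feature
        counts[vowel] = counts.get(vowel, 0) + 1
    return {v: sums[v] / counts[v] for v in sums}

def classify_vowel(feature, templates):
    """Pick the user-fitted vowel template closest to the observed feature."""
    return min(templates, key=lambda v: abs(templates[v] - feature))
```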
  • Publication number: 20100128042
    Abstract: A system and method for generating and displaying text on a screen as an animated flow from a digital input of conventional text. The invention divides text into short-scan lines of coherent semantic value that progressively animate from invisible to visible and back to invisible; multiple-line displays are frequent. The effect is aesthetically engaging, perceptually focusing, and cognitively immersive: the reader watches the text like watching a movie. The invention may exist in whole or in part as a standalone application on a specific screen device, and includes a manual authoring tool that allows the insertion of non-text media such as sound, images, and advertisements.
    Type: Application
    Filed: July 10, 2009
    Publication date: May 27, 2010
    Inventors: Anthony Confrey, Dennis Downey
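The segmentation step above can be approximated by a greedy splitter that breaks at punctuation or a maximum width; this is only a crude stand-in for the abstract's "coherent semantic value" segmentation, with an invented word limit:

```python
# Split text into "short-scan lines": greedy chunks bounded by a maximum word
# count, preferring to break after punctuation.

def short_scan_lines(text, max_words=4):
    lines, current = [], []
    for word in text.split():
        current.append(word)
        if len(current) == max_words or word[-1] in ",.;:!?":
            lines.append(" ".join(current))
            current = []
    if current:  # flush any trailing partial line
        lines.append(" ".join(current))
    return lines
```

The animation (fade-in/fade-out per line) is a presentation concern omitted here.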
  • Publication number: 20100100547
    Abstract: A method and system for generating information tags from product-related documents. The system includes an accessible storage storing text documents, wherein the text documents are related to a plurality of products. The system includes a memory access module for retrieving a document from the accessible storage related to a specified product selected from the plurality of products. The system includes a parser module for parsing the retrieved document into sentences, wherein each sentence is stored as an array. The system includes a filter module for filtering the parsed sentences into a result set, wherein the result set includes a set of tags extracted from the retrieved document relevant to the selected product. The system includes an output module for outputting the result set to the accessible storage.
    Type: Application
    Filed: October 20, 2009
    Publication date: April 22, 2010
    Applicant: Flixbee, Inc.
    Inventors: Hamilton A. Ulmer, Svyatoslav Mishchenko
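The parse-then-filter pipeline above can be sketched as follows; the sentence splitter is naive and the relevance test (case-insensitive substring match plus a length cap) is a placeholder assumption, not Flixbee's filter:

```python
# Tag generation sketch: parse a product document into sentences, then keep
# short candidates that mention the selected product.

import re

def parse_sentences(document):
    """Split a document into sentence strings (naive punctuation split)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]

def extract_tags(document, product, max_words=6):
    """Filter parsed sentences into short tags relevant to the product."""
    return [
        s for s in parse_sentences(document)
        if product.lower() in s.lower() and len(s.split()) <= max_words
    ]
```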
  • Publication number: 20100076761
    Abstract: Non-verbalized tokens, such as punctuation, are automatically predicted and inserted into a transcription of speech in which the tokens were not explicitly verbalized. Token prediction may be integrated with speech decoding, rather than performed as a post-process to speech decoding.
    Type: Application
    Filed: September 25, 2009
    Publication date: March 25, 2010
    Inventors: Juergen Fritsch, Anoop Deoras, Detlef Koll
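The abstract's contrast (punctuation competing inside the decoder's search rather than being inserted afterwards) can be illustrated with a toy greedy decoder whose language model scores words and punctuation tokens jointly; the bigram table is invented illustration data:

```python
# Toy joint decoding sketch: punctuation tokens are ordinary candidates,
# scored by the same bigram "language model" as words.

BIGRAM_SCORES = {
    ("you", "PERIOD"): 0.6, ("you", "and"): 0.2,
    ("PERIOD", "thank"): 0.5, ("and", "thank"): 0.3,
}

def decode_step(prev_token, candidates):
    """Jointly rank word and punctuation candidates after prev_token."""
    return max(candidates, key=lambda t: BIGRAM_SCORES.get((prev_token, t), 0.0))

def decode(start, candidate_lists):
    """Greedy decode: at each step, pick the best next token (word or punct)."""
    out, prev = [], start
    for candidates in candidate_lists:
        prev = decode_step(prev, candidates)
        out.append(prev)
    return out
```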
  • Publication number: 20090112594
    Abstract: Disclosed are systems, methods, and computer-readable media for training acoustic models for an automatic speech recognition (ASR) system. The method includes receiving a speech signal; defining at least one syllable boundary position in the received speech signal; based on the at least one syllable boundary position, generating for each consonant in a consonant phoneme inventory a pre-vocalic position label and a post-vocalic position label to expand the consonant phoneme inventory; reformulating a lexicon to reflect the expanded consonant phoneme inventory; and training a language model for the ASR system based on the reformulated lexicon.
    Type: Application
    Filed: October 31, 2007
    Publication date: April 30, 2009
    Applicant: AT&T Labs
    Inventors: Yeon-Jun Kim, Alistair Conkie, Andrej Ljolje, Ann K. Syrdal
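The inventory expansion above can be sketched by relabeling each consonant as pre-vocalic or post-vocalic relative to its syllable's vowel; the phone notation, vowel test, and `_pre`/`_post` suffixes are simplified assumptions, not AT&T's lexicon format:

```python
# Expand a consonant inventory with positional labels, given syllable
# boundary positions (as split indices into the phone sequence).

VOWELS = {"a", "e", "i", "o", "u"}

def relabel_syllable(phones):
    """Tag consonants in one syllable relative to the syllable's vowel."""
    vowel_idx = next(i for i, p in enumerate(phones) if p in VOWELS)
    return [
        p if p in VOWELS else p + ("_pre" if i < vowel_idx else "_post")
        for i, p in enumerate(phones)
    ]

def reformulate(word_phones, boundaries):
    """Apply positional relabeling syllable by syllable."""
    out, start = [], 0
    for end in list(boundaries) + [len(word_phones)]:
        out.extend(relabel_syllable(word_phones[start:end]))
        start = end
    return out
```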
  • Publication number: 20080221903
    Abstract: Improved techniques are disclosed for permitting a user to employ more human-based grammar (i.e., free form or conversational input) while addressing a target system via a voice system. For example, a technique for determining intent associated with a spoken utterance of a user comprises the following steps/operations. Decoded speech uttered by the user is obtained. An intent is then extracted from the decoded speech uttered by the user. The intent is extracted in an iterative manner such that a first class is determined after a first iteration and a sub-class of the first class is determined after a second iteration. The first class and the sub-class of the first class are hierarchically indicative of the intent of the user, e.g., a target and data that may be associated with the target. The multi-stage intent extraction approach may have more than two iterations.
    Type: Application
    Filed: May 22, 2008
    Publication date: September 11, 2008
    Applicant: International Business Machines Corporation
    Inventors: Dimitri Kanevsky, Joseph Simon Reisinger, Robert Sicconi, Mahesh Viswanathan
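The iterative extraction above (first pass picks a top-level class, second pass a sub-class within it) can be sketched with keyword matching standing in for the statistical classifiers the patent implies; the intent tree and keyword lists are invented:

```python
# Two-stage intent extraction sketch over a hypothetical class hierarchy.

INTENT_TREE = {
    "navigation": {"set_destination": ["go to", "drive to"],
                   "cancel_route": ["cancel route", "stop navigation"]},
    "media": {"play": ["play", "listen to"],
              "pause": ["pause", "stop music"]},
}

def classify(utterance, classes):
    """Pick the class whose keyword list best matches the utterance."""
    scores = {name: sum(kw in utterance for kw in kws)
              for name, kws in classes.items()}
    return max(scores, key=scores.get)

def extract_intent(utterance):
    """Iteration 1: top-level class; iteration 2: sub-class within it."""
    first = classify(
        utterance,
        {c: [kw for kws in subs.values() for kw in kws]
         for c, subs in INTENT_TREE.items()},
    )
    return first, classify(utterance, INTENT_TREE[first])
```

Further iterations (e.g., extracting data associated with the target) would follow the same narrow-then-refine pattern.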
  • Publication number: 20080154581
    Abstract: Methods and systems for dynamic natural language understanding. A hierarchical structure of semantic categories is exploited to assist in the natural language understanding. Optionally, the natural language to be understood includes a request.
    Type: Application
    Filed: March 12, 2008
    Publication date: June 26, 2008
    Applicant: Intelligate, Ltd.
    Inventors: Ofer Lavi, Gadiel Auerbach, Eldad Persky