Patents Examined by Jesse S Pullias
  • Patent number: 10529330
    Abstract: The present invention relates to a speech recognition apparatus and system for recognizing speech, converting the speech into text, and displaying the input state in real time for correction. In the speech recognition apparatus, speech received from a speech input unit is converted into text word by word, and the converted text is displayed in real time on a first display window; the words displayed on the first display window are then combined into a sentence, which is displayed on a second display window in real time. The user can thus see intuitively which words are combined to form which sentence, so that text generated through speech recognition can be easily corrected.
    Type: Grant
    Filed: November 24, 2017
    Date of Patent: January 7, 2020
    Assignee: SORIZAVA CO., LTD.
    Inventor: Munhak An
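A minimal Python sketch of the two-window behavior described in the abstract of patent 10529330 above: words are shown individually as they are recognized, and the combined sentence view updates with them. The class, method names, and correction flow are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch (not the patented implementation): two display buffers are
# kept in sync as words arrive from a hypothetical recognizer, mirroring the
# word-level window and the assembled-sentence window described in the abstract.

class TwoWindowDisplay:
    """Keeps a per-word view and a combined-sentence view in sync."""

    def __init__(self):
        self.word_window = []      # first display window: individual recognized words
        self.sentence_window = ""  # second display window: sentence built from those words

    def on_word_recognized(self, word: str) -> None:
        # Append the newly recognized word and rebuild the sentence view
        # so both windows reflect the current input state in real time.
        self.word_window.append(word)
        self.sentence_window = " ".join(self.word_window)

    def correct_word(self, index: int, replacement: str) -> None:
        # A user correction to one word immediately propagates to the sentence view.
        self.word_window[index] = replacement
        self.sentence_window = " ".join(self.word_window)


if __name__ == "__main__":
    display = TwoWindowDisplay()
    for w in ["speech", "is", "convrted", "to", "text"]:
        display.on_word_recognized(w)
    display.correct_word(2, "converted")   # fix a misrecognized word
    print(display.word_window)
    print(display.sentence_window)
```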
  • Patent number: 10515627
    Abstract: A method and apparatus for building an acoustic feature extracting model, and an acoustic feature extracting method and apparatus. The method of building the model comprises: taking first acoustic features, extracted from speech data corresponding to user identifiers, as training data; and using the training data to train a deep neural network to obtain the acoustic feature extracting model, where the training target of the deep neural network is to maximize similarity between second acoustic features of the same user and minimize similarity between second acoustic features of different users. The model can thus learn for itself the acoustic features that best achieve the training target. Compared with conventional acoustic feature extraction, which relies on a preset feature type and transformation, the disclosed approach offers better flexibility and higher accuracy.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: December 24, 2019
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Chao Li, Xiaokong Ma, Bing Jiang, Xiangang Li
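The training target described in the abstract of patent 10515627 (maximize same-user similarity, minimize different-user similarity) resembles a contrastive objective. The sketch below shows one such loss in plain numpy; the specific loss form and margin are assumptions, since the abstract does not specify them.

```python
# A minimal numpy sketch of the kind of training target described in the abstract:
# maximize similarity between embeddings ("second acoustic features") of the same
# user and minimize it between different users. The contrastive margin loss below
# is an assumption; the patent does not specify a loss form.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def contrastive_loss(emb_a, emb_b, same_user: bool, margin: float = 0.5) -> float:
    """Small loss when same-user pairs are similar and different-user pairs are dissimilar."""
    sim = cosine(emb_a, emb_b)
    if same_user:
        return 1.0 - sim               # push same-user similarity toward 1
    return max(0.0, sim - margin)      # push different-user similarity below the margin

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    user1_a, user1_b = rng.normal(size=64), rng.normal(size=64)
    user2 = rng.normal(size=64)
    print("same-user loss :", contrastive_loss(user1_a, user1_b, same_user=True))
    print("cross-user loss:", contrastive_loss(user1_a, user2, same_user=False))
```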
  • Patent number: 10511554
    Abstract: Techniques facilitating maintenance of tribal knowledge for accelerated compliance control deployment are provided. In one example, a system includes a memory that stores computer executable components and a processor that executes computer executable components stored in the memory, wherein the computer executable components include a knowledge base generation component that generates a knowledge graph corresponding to respective commitments created via tribal exchanges, the knowledge graph comprising a semantic level and an operational level; a semantic graph population component that populates the semantic level of the knowledge graph based on identified parties to the respective commitments; and an operational graph population component that populates the operational level of the knowledge graph based on tracked status changes associated with the respective commitments.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: December 17, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Constantin Mircea Adam, Muhammed Fatih Bulut, Richard Baxter Hull, Anup Kalia, Maja Vukovic, Jin Xiao
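A schematic sketch of the two-level knowledge graph described in the abstract of patent 10511554, assuming a simple in-memory representation: a semantic level recording the parties to each commitment and an operational level recording tracked status changes. All class and field names are illustrative.

```python
# Illustrative two-level structure: the semantic level records who committed what
# to whom; the operational level records status changes tracked for each commitment.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Commitment:
    commitment_id: str
    debtor: str          # party who made the commitment
    creditor: str        # party the commitment was made to
    description: str

@dataclass
class KnowledgeGraph:
    semantic: Dict[str, Commitment] = field(default_factory=dict)    # semantic level
    operational: Dict[str, List[str]] = field(default_factory=dict)  # operational level

    def add_commitment(self, c: Commitment) -> None:
        self.semantic[c.commitment_id] = c
        self.operational[c.commitment_id] = ["created"]

    def track_status(self, commitment_id: str, status: str) -> None:
        self.operational[commitment_id].append(status)

if __name__ == "__main__":
    kg = KnowledgeGraph()
    kg.add_commitment(Commitment("c1", "ops-team", "auditor", "patch servers by Friday"))
    kg.track_status("c1", "in_progress")
    kg.track_status("c1", "fulfilled")
    print(kg.semantic["c1"].description, kg.operational["c1"])
```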
  • Patent number: 10503836
    Abstract: A method and a computer program product, implementable on a computing device, are configured for generating natural language communication. A user interface displays a first number of recipient-specific pseudo-predefined elements. A semantic allocation of the elements is received from a user, whereby at least part of the first number of recipient-specific pseudo-predefined elements is allocated into at least two different classes. The semantically allocated elements are then allocated into at least one logical class, which defines, for each semantically allocated element, a specific portion or context of the communication to be generated automatically. A natural language sentence is generated for each logical class containing at least one of the semantically allocated elements. Each generated sentence includes the elements allocated to the respective logical class, or words describing the semantic meaning of those elements.
    Type: Grant
    Filed: April 13, 2016
    Date of Patent: December 10, 2019
    Assignee: EQUIVALENTOR OY
    Inventors: Joni Latvala, Saku Valkama
  • Patent number: 10490185
    Abstract: A method and system for providing dynamic conversation between an application and a user are discussed. The method includes utilizing a computing device to receive a requirement input from the user for the application. The method further includes determining a goal of the user based on the requirement input. Based on the goal, a plurality of conversation threads is initiated with the user, wherein each of the plurality of conversation threads has a degree of association with the goal. Thereafter, a plurality of slots is dynamically generated based on the goal and the plurality of conversation threads. A slot of the plurality of slots stores a data value corresponding to the requirement input of the user.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: November 26, 2019
    Assignee: Wipro Limited
    Inventors: Manjunath Ramachandra Iyer, Meenakshi Sundaram Murugeshan
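An illustrative sketch of the goal/thread/slot structure described in the abstract of patent 10490185: a goal inferred from the requirement input spawns conversation threads with a degree of association to the goal, and slots generated for those threads store the user's data values. The example goal, threads, and slots are invented placeholders.

```python
# Placeholder data model: a goal spawns threads, each with a relevance score and
# slots that hold values supplied by the user during the conversation.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Slot:
    name: str
    value: Optional[str] = None

@dataclass
class ConversationThread:
    topic: str
    relevance_to_goal: float          # "degree of association" with the goal
    slots: List[Slot] = field(default_factory=list)

def build_threads(goal: str) -> List[ConversationThread]:
    # Stand-in for the dynamic thread/slot generation driven by the goal.
    if goal == "book_travel":
        return [
            ConversationThread("flight", 0.9, [Slot("origin"), Slot("destination"), Slot("date")]),
            ConversationThread("hotel", 0.6, [Slot("city"), Slot("nights")]),
        ]
    return [ConversationThread("general", 0.1, [Slot("details")])]

def fill_slot(threads, topic, slot_name, value):
    for t in threads:
        if t.topic == topic:
            for s in t.slots:
                if s.name == slot_name:
                    s.value = value

if __name__ == "__main__":
    threads = build_threads("book_travel")
    fill_slot(threads, "flight", "destination", "Berlin")
    print([(t.topic, [(s.name, s.value) for s in t.slots]) for t in threads])
```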
  • Patent number: 10489488
    Abstract: A system and method for automatically generating a narrative story receives data and information pertaining to a domain event. The received data and information and/or one or more derived features are then used to identify a plurality of angles for the narrative story. The plurality of angles is then filtered, for example through use of parameters that specify a focus for the narrative story, length of the narrative story, etc. Points associated with the filtered plurality of angles are then assembled and the narrative story is rendered using the filtered plurality of angles and the assembled points.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: November 26, 2019
    Assignee: NARRATIVE SCIENCE INC.
    Inventors: Lawrence A. Birnbaum, Kristian J. Hammond, Nicholas D. Allen, John R. Templon
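A high-level sketch of the pipeline in the abstract of patent 10489488 (derive features, identify angles, filter them, assemble points, render the story). The concrete angles, filtering parameters, and rendering are invented for illustration and are not the patented system.

```python
# Toy narrative-generation pipeline: features -> angles -> filtered angles ->
# points -> rendered story. Every rule below is a placeholder.
def derive_features(event):
    return {"margin": event["home_score"] - event["away_score"]}

def identify_angles(event, features):
    angles = []
    if features["margin"] >= 20:
        angles.append("blowout_win")
    if features["margin"] > 0:
        angles.append("home_victory")
    return angles

def filter_angles(angles, focus=None, max_angles=1):
    # Filtering by a focus parameter and a length budget, as the abstract suggests.
    kept = [a for a in angles if focus is None or a == focus]
    return kept[:max_angles]

def assemble_points(event, angles):
    return [f"{event['home']} beat {event['away']} {event['home_score']}-{event['away_score']}."
            for _ in angles]

def render(angles, points):
    return " ".join(points) + (" It was a rout." if "blowout_win" in angles else "")

if __name__ == "__main__":
    game = {"home": "Lions", "away": "Tigers", "home_score": 98, "away_score": 70}
    feats = derive_features(game)
    angles = filter_angles(identify_angles(game, feats))
    print(render(angles, assemble_points(game, angles)))
```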
  • Patent number: 10489498
    Abstract: Techniques and systems are described in which a document management system is configured to update content of document portions of digital documents. In one example, an update to the digital document is triggered when the document management system detects a triggering change applied to an initial portion of the digital document. In response to the triggering change, the document management system then determines whether trailing changes are to be made to other document portions, such as other document portions in the same digital document or in another digital document. To do so, triggering and trailing change representations are generated and compared to determine the similarity of candidate document portions to the initial document portion.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: November 26, 2019
    Assignee: Adobe Inc.
    Inventors: Vishwa Vinay, Sopan Khosla, Sanket Vaibhav Mehta, Sahith Thallapally, Gaurav Verma
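A rough sketch of the comparison step described in the abstract of patent 10489498: a representation of the triggering change is compared with representations of candidate document portions, and sufficiently similar portions are flagged for trailing updates. The bag-of-words representation and threshold are assumptions.

```python
# Compare a triggering change against candidate document portions using a simple
# bag-of-words cosine similarity; portions above a threshold are trailing candidates.
from collections import Counter
import math

def representation(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine_sim(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def find_trailing_candidates(triggering_text, candidate_portions, threshold=0.5):
    trigger_rep = representation(triggering_text)
    return [p for p in candidate_portions
            if cosine_sim(trigger_rep, representation(p)) >= threshold]

if __name__ == "__main__":
    trigger = "warranty period extended to 24 months"
    portions = ["the warranty period is 12 months",
                "shipping is free within the EU"]
    print(find_trailing_candidates(trigger, portions))
```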
  • Patent number: 10468025
    Abstract: The present invention discloses a speech interaction method and apparatus, and pertains to the field of speech processing technologies. The method includes: acquiring speech data of a user; performing user attribute recognition on the speech data to obtain a first user attribute recognition result; performing content recognition on the speech data to obtain a content recognition result of the speech data; and performing a corresponding operation according to at least the first user attribute recognition result and the content recognition result, so as to respond to the speech data. According to the present invention, after speech data is acquired, user attribute recognition and content recognition are separately performed on the speech data to obtain a first user attribute recognition result and a content recognition result, and a corresponding operation is performed according to at least the first user attribute recognition result and the content recognition result.
    Type: Grant
    Filed: May 1, 2018
    Date of Patent: November 5, 2019
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Hongbo Jin, Zhuolin Jiang
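A toy dispatch sketch in the spirit of the abstract of patent 10468025: attribute recognition and content recognition run separately on the same speech data, and the response depends on both results. The recognizers and the response table are placeholders.

```python
# Placeholder recognizers and a simple dispatch that combines a user-attribute
# result with a content (intent) result to choose an operation.
def recognize_attributes(speech_data: bytes) -> dict:
    # Stand-in: a real system would infer attributes such as age group from the audio.
    return {"age_group": "child"}

def recognize_content(speech_data: bytes) -> str:
    # Stand-in for the content recognizer.
    return "play_music"

def respond(speech_data: bytes) -> str:
    attrs = recognize_attributes(speech_data)
    intent = recognize_content(speech_data)
    if intent == "play_music" and attrs.get("age_group") == "child":
        return "playing a children's playlist"
    if intent == "play_music":
        return "playing your usual playlist"
    return "sorry, I did not understand"

if __name__ == "__main__":
    print(respond(b"\x00\x01"))
```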
  • Patent number: 10453441
    Abstract: In some implementations, a language proficiency of a user of a client device is determined by one or more computers. The one or more computers then determines a text segment for output by a text-to-speech module based on the determined language proficiency of the user. After determining the text segment for output, the one or more computers generates audio data including a synthesized utterance of the text segment. The audio data including the synthesized utterance of the text segment is then provided to the client device for output.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: October 22, 2019
    Assignee: Google LLC
    Inventors: Matthew Sharifi, Jakob Foerster
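A minimal sketch of the idea in the abstract of patent 10453441: the text segment passed to the text-to-speech module is chosen according to the user's determined language proficiency. The proficiency scale, thresholds, and phrasings are assumptions.

```python
# Choose a simpler or richer phrasing of the same answer based on a 0-1
# proficiency score, then hand it to a stand-in synthesizer.
def select_text(proficiency: float) -> str:
    if proficiency < 0.4:
        return "Rain today. Take an umbrella."
    if proficiency < 0.8:
        return "It will rain today, so you should take an umbrella."
    return "Showers are expected throughout the day, so taking an umbrella would be prudent."

def synthesize(text: str) -> bytes:
    # Stand-in for a text-to-speech module producing audio data.
    return text.encode("utf-8")

if __name__ == "__main__":
    audio = synthesize(select_text(0.3))
    print(len(audio), "bytes of synthesized audio")
```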
  • Patent number: 10453452
    Abstract: A pillow remote controller using acoustic commands includes a pillow defining an open interior compartment configured to house one or more pressure sensors electrically and operably coupled with an electronics enclosure. The electronics enclosure contains various electronic components, including a power control and alarm circuit; an audio receiving, decoding, and function command circuit; a transmitter selector and interface circuit; and various transmitters for transmitting an operating function to an electronic product. The electronics are operable to receive an acoustic command, decode it, and provide a function command to the transmitter selector and interface circuit. The function command is interfaced with a particular type of transmitter and transmitted optically or wirelessly to the electronic product. One or more human presence sensors may be operably coupled with the power control and alarm circuit to confirm human presence before enabling operation.
    Type: Grant
    Filed: August 11, 2017
    Date of Patent: October 22, 2019
    Inventor: Raymond Henry Hardman
  • Patent number: 10430557
    Abstract: Methods and systems for monitoring compliance of a patient with a prescribed treatment regimen are described. Patient activity is detected unobtrusively with an activity sensor at the patient location, and activity data is transmitted to a monitoring location. Patient speech detected during use of a communication system such as a mobile telephone by the patient may also be used as an activity signal. Patient activity and/or speech is processed at the patient location or monitoring location to identify activity parameters or patterns that indicate whether the patient has complied with the prescribed treatment regimen. The activity sensor and other components at the patient location may be incorporated into, or associated with, a cell phone, computing system, game system, or vehicle system, for example. The system may provide a report to an interested party, for example a medical care provider or insurance company, regarding patient compliance with the prescribed treatment regimen.
    Type: Grant
    Filed: January 6, 2017
    Date of Patent: October 1, 2019
    Assignee: Elwha LLC
    Inventors: Jeffrey A. Bowers, Paul Duesterhoft, Daniel Hawkins, Roderick A. Hyde, Edward K. Y. Jung, Jordin T. Kare, Eric C. Leuthardt, Nathan P. Myhrvold, Michael A. Smith, Elizabeth A. Sweeney, Clarence T. Tegreene, Lowell L. Wood, Jr.
  • Patent number: 10416956
    Abstract: A display apparatus controlled based on a user's uttered voice and a method of controlling a display apparatus based on a user's uttered voice are provided. A display apparatus includes a processor, a memory, and a display. The processor is configured to receive an uttered voice of a user, determine text corresponding to the uttered voice of the user as an intermediate recognition result, determine a command based on a result obtained by comparing the intermediate recognition result with a previous intermediate recognition result that is stored in the memory, and perform an operation according to the command.
    Type: Grant
    Filed: October 4, 2017
    Date of Patent: September 17, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jang-ho Jin, Young-jun Ryu, Ho-gun Lim
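A minimal sketch of the comparison described in the abstract of patent 10416956: each intermediate recognition result is compared with the previously stored one, and a command is determined from the full text or from the newly added words. The command table and matching rule are invented for illustration.

```python
# Compare the current intermediate recognition result with the stored previous one
# and look up a command from either the full text or the newly appended words.
COMMANDS = {"volume up": "VOLUME_UP", "channel nine": "SET_CHANNEL_9"}

def new_words(previous: str, current: str) -> str:
    prev_tokens, cur_tokens = previous.split(), current.split()
    # Words appended since the last intermediate result (fall back to the full text).
    if cur_tokens[:len(prev_tokens)] == prev_tokens:
        return " ".join(cur_tokens[len(prev_tokens):])
    return current

def command_from_intermediate(previous: str, current: str):
    delta = new_words(previous, current)
    return COMMANDS.get(current.strip()) or COMMANDS.get(delta.strip())

if __name__ == "__main__":
    prev, cur = "volume", "volume up"
    print(command_from_intermediate(prev, cur))   # -> VOLUME_UP
```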
  • Patent number: 10381000
    Abstract: Compact finite state transducers (FSTs) for automatic speech recognition (ASR). An HCLG FST and/or G FST may be compacted at training time to reduce the size of the FST to be used at runtime. The compact FSTs may be significantly smaller (e.g., 50% smaller) in terms of memory size, thus reducing the use of computing resources at runtime to operate the FSTs. The individual arcs and states of each FST may be compacted by binning individual weights, thus reducing the number of bits needed for each weight. Further, certain fields such as a next state ID may be left out of a compact FST if an estimation technique can be used to reproduce the next state at runtime. During runtime, portions of the FSTs may be decompressed for processing by an ASR engine.
    Type: Grant
    Filed: January 8, 2018
    Date of Patent: August 13, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Denis Sergeyevich Filimonov, Gautam Tiwari, Shaun Nidhiri Joseph, Ariya Rastrow
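A toy sketch of the weight-binning idea described in the abstract of patent 10381000: arc weights are quantized into a small number of bins so that each weight can be stored in far fewer bits, at some cost in precision. The bin count, codebook layout, and FST details are assumptions.

```python
# Quantize float arc weights into a small codebook of bins; each weight is then
# stored as a small integer index instead of a full 32-bit float.
import numpy as np

def bin_weights(weights: np.ndarray, num_bins: int = 16):
    """Quantize float weights to small integer bin indices plus a codebook."""
    edges = np.linspace(weights.min(), weights.max(), num_bins)
    indices = np.searchsorted(edges, weights, side="left").astype(np.uint8)
    indices = np.clip(indices, 0, num_bins - 1)
    return indices, edges

def unbin_weights(indices: np.ndarray, edges: np.ndarray) -> np.ndarray:
    # Reconstruct approximate weights from the codebook at runtime.
    return edges[indices]

if __name__ == "__main__":
    arc_weights = np.random.default_rng(1).uniform(0.0, 10.0, size=8).astype(np.float32)
    idx, codebook = bin_weights(arc_weights)
    approx = unbin_weights(idx, codebook)
    print("original:", np.round(arc_weights, 2))
    print("binned  :", np.round(approx, 2))
    print("bits per weight: 32 ->", int(np.ceil(np.log2(len(codebook)))))
```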
  • Patent number: 10360066
    Abstract: In one example in accordance with the present disclosure, a method may include classifying each word in a natural language statement and determining an implementation, from a set of possible implementations, for a workflow platform based on the classified words. The method may also include mapping a first of the classified words to a task selected from a set of possible tasks associated with the implementation and mapping a second of the classified words to an input parameter associated with the task. The method may also include generating a workflow for the workflow platform using the task and the input parameter.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: July 23, 2019
    Assignee: ENTIT SOFTWARE LLC
    Inventors: Adarsh Suparna, Pramod Annachira Vitala
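An illustrative sketch of the mapping described in the abstract of patent 10360066: words in a natural language statement are classified, one classified word is mapped to a task from a catalog and another to an input parameter, and a small workflow record is produced. The classifier and task catalog are placeholders, not the patented pipeline.

```python
# Classify words in a natural language statement, map them to a task and an input
# parameter for an assumed workflow platform, and emit a small workflow record.
TASK_CATALOG = {"restart": "restart_service", "provision": "provision_vm"}

def classify_words(statement: str):
    labels = {}
    for w in statement.lower().split():
        if w in TASK_CATALOG:
            labels[w] = "ACTION"
        elif w.isalnum() and w not in {"the", "please", "a"}:
            labels[w] = "PARAMETER"
    return labels

def generate_workflow(statement: str):
    labels = classify_words(statement)
    action = next((w for w, l in labels.items() if l == "ACTION"), None)
    param = next((w for w, l in labels.items() if l == "PARAMETER"), None)
    if action is None:
        return None
    return {"platform": "generic", "task": TASK_CATALOG[action], "input": {"target": param}}

if __name__ == "__main__":
    print(generate_workflow("please restart webserver01"))
```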
  • Patent number: 10339931
    Abstract: The present disclosure involves systems, software, and computer-implemented methods for personalizing interactions within a conversational interface based on an input context. One example system performs operations including receiving a conversational input via a conversational interface associated with a particular user profile. The input is analyzed via a natural language processing engine to determine an intent and a personality input type. A persona response type associated with the determined personality input type is identified, and responsive content is determined. A particular persona that is associated with the particular user profile and that corresponds to the identified persona response type is then identified, based on a related set of social network activity information associated with the user profile.
    Type: Grant
    Filed: October 4, 2017
    Date of Patent: July 2, 2019
    Assignee: The Toronto-Dominion Bank
    Inventors: Dean C. N. Tseretopoulos, Robert Alexander McCarter, Sarabjit Singh Walia, Vipul Kishore Lalka, Nadia Moretti, Paige Elyse Dickie, Denny Devasia Kuruvilla, Milos Dunjic, Dino Paul D'Agostino, Arun Victor Jagga, John Jong-Suk Lee, Rakesh Thomas Jethwa
  • Patent number: 10332518
    Abstract: Speech recognition is performed on a received utterance to determine a plurality of candidate text representations of the utterance, including a primary text representation and one or more alternative text representations. Natural language processing is performed on the primary text representation to determine a plurality of candidate actionable intents, including a primary actionable intent and one or more alternative actionable intents. A result is determined based on the primary actionable intent. The result is provided to the user. A recognition correction trigger is detected. In response to detecting the recognition correction trigger, a set of alternative intent affordances and a set of alternative text affordances are concurrently displayed.
    Type: Grant
    Filed: August 15, 2017
    Date of Patent: June 25, 2019
    Assignee: Apple Inc.
    Inventors: Ashish Garg, Harry J. Saddler, Shweta Grampurohit, Robert A. Walker, Rushin N. Shah, Matthew S. Seigel, Matthias Paulik
  • Patent number: 10318630
    Abstract: In various example embodiments, a textual identification system is configured to receive a set of search terms and identify a set of textual data based on the search terms. The textual identification system retrieves a data structure including textual identifications for the set of textual data and processes the data structure to generate a modified data structure. The textual identification system sums rows within the modified data structure and identifies one or more elements of interest. The textual identification system then causes presentation of the elements of interest in a first portion of a graphical user interface and the textual identifications for the set of textual data in a second portion of the graphical user interface.
    Type: Grant
    Filed: August 16, 2017
    Date of Patent: June 11, 2019
    Assignee: Palantir Technologies Inc.
    Inventors: Maxim Kesin, Paul Gribelyuk
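A small numpy sketch of the summing step described in the abstract of patent 10318630: a term-by-document matrix stands in for the modified data structure, row sums rank the rows, and the top rows are treated as elements of interest. The matrix construction and ranking are assumptions for illustration.

```python
# Sum the rows of a term-by-document count matrix and return the top-ranked terms
# as "elements of interest".
import numpy as np

def elements_of_interest(matrix: np.ndarray, terms, top_k: int = 2):
    row_sums = matrix.sum(axis=1)                 # sum rows of the modified data structure
    top_rows = np.argsort(row_sums)[::-1][:top_k]
    return [(terms[i], int(row_sums[i])) for i in top_rows]

if __name__ == "__main__":
    terms = ["merger", "invoice", "meeting"]
    # rows: terms; columns: documents matched by the search terms
    counts = np.array([[3, 5, 2],
                       [0, 1, 0],
                       [2, 2, 1]])
    print(elements_of_interest(counts, terms))    # highest-frequency terms first
```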
  • Patent number: 10319390
    Abstract: A system and method for improving intelligibility of speech is provided. The system and method may include obtaining an input audio signal frame, classifying the input audio signal frame into a first category or a second category, wherein the first category corresponds to the noise being stronger than the speech signal, and the second category corresponds to the speech signal being stronger than the noise, decomposing the input audio signal frame into a plurality of sub-band components; de-noising each sub-band component of the input audio signal frame in parallel by applying a first wavelet de-noising method including a first wavelet transform and a predetermined threshold for the sub-band component, and a second wavelet de-noising method including a second wavelet transform and the predetermined threshold for the sub-band component, wherein the predetermined threshold for each sub-band component is based on at least one previous noise-dominant signal frame received by the receiving arrangement.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: June 11, 2019
    Assignee: New York University
    Inventors: Roozbeh Soleymani, Ivan W. Selesnick, David M. Landsberger
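A simplified sketch of per-sub-band wavelet denoising with a threshold derived from a noise-dominant frame, in the spirit of the abstract of patent 10319390. It uses PyWavelets with a single wavelet and soft thresholding; the patent describes applying two wavelet de-noising methods per sub-band in parallel, which is not reproduced here.

```python
# Decompose a frame into wavelet sub-bands, soft-threshold the detail sub-bands
# with a threshold estimated from a previous noise-dominant frame, and reconstruct.
import numpy as np
import pywt

def denoise_frame(frame: np.ndarray, noise_frame: np.ndarray,
                  wavelet: str = "db4", level: int = 3) -> np.ndarray:
    # Threshold estimated from the finest sub-band of a noise-dominant frame.
    threshold = np.std(pywt.wavedec(noise_frame, wavelet, level=level)[-1])
    coeffs = pywt.wavedec(frame, wavelet, level=level)        # decompose into sub-bands
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 512)
    clean = np.sin(2 * np.pi * 5 * t)
    noisy = clean + 0.3 * rng.normal(size=t.size)
    noise_only = 0.3 * rng.normal(size=t.size)                 # stand-in noise-dominant frame
    out = denoise_frame(noisy, noise_only)
    print("noisy RMSE   :", round(float(np.sqrt(np.mean((noisy - clean) ** 2))), 3))
    print("denoised RMSE:", round(float(np.sqrt(np.mean((out[:clean.size] - clean) ** 2))), 3))
```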
  • Patent number: 10319381
    Abstract: An interaction assistant conducts multi-turn interaction dialogs with a user in which context is maintained between turns, and the system manages the dialog to achieve an inferred goal for the user. The system includes a linguistic interface to a user and a parser for processing linguistic events from the user. A dialog manager of the system is configured to receive alternative outputs from the parser, select an action, and cause the action to be performed based on the received alternative outputs. The system further includes a dialog state for an interaction with the user, and the alternative outputs represent alternative transitions from a current dialog state to a next dialog state. The system further includes storage for a plurality of templates, and each dialog state is defined in terms of an interrelationship of one or more instances of the templates.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: June 11, 2019
    Assignee: Semantic Machines, Inc.
    Inventors: Jacob Daniel Andreas, Daniel Lawrence Roth, Jesse Daniel Eskes Rusak, Andrew Robert Volpe, Steven Andrew Wegmann, Taylor Darwin Berg-Kirkpatrick, Pengyu Chen, Jordan Rian Cohen, Laurence Steven Gillick, David Leo Wright Hall, Daniel Klein, Michael Newman, Adam David Pauls
  • Patent number: 10319382
    Abstract: Methods, apparatus, and computer-readable media are described herein related to a user interface (UI) that can be implemented on a head-mountable device (HMD). The UI can include a voice-navigable UI. The voice-navigable UI can include a voice navigable menu that includes one or more menu items. The voice-navigable UI can also present a first visible menu that includes at least a portion of the voice navigable menu. In response to a first utterance comprising one of the one or more menu items, the voice-navigable UI can modify the first visible menu to display one or more commands associated with the first menu item. In response to a second utterance comprising a first command, the voice-navigable UI can invoke the first command. In some embodiments, the voice-navigable UI can display a second visible menu, where the first command can be displayed above other menu items in the second visible menu.
    Type: Grant
    Filed: June 6, 2018
    Date of Patent: June 11, 2019
    Assignee: Google LLC
    Inventors: Michael J. LeBeau, Clifford Ivar Nass
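A bare-bones sketch of a voice-navigable menu in the spirit of the abstract of patent 10319382: uttering a menu item reveals that item's commands, and uttering a command invokes it. The menu contents and command handlers are invented for illustration.

```python
# Minimal voice-navigable menu: the first utterance selects a menu item and shows
# its commands; the second utterance invokes one of those commands.
MENU = {
    "camera": {"take photo": lambda: "photo captured",
               "record video": lambda: "recording started"},
    "messages": {"read latest": lambda: "reading latest message"},
}

class VoiceMenu:
    def __init__(self, menu):
        self.menu = menu
        self.visible = list(menu)          # first visible menu: top-level items

    def on_utterance(self, utterance: str):
        utterance = utterance.lower().strip()
        if utterance in self.menu:                         # first utterance: a menu item
            self.visible = list(self.menu[utterance])      # show its commands
            self.current = utterance
            return f"showing commands: {self.visible}"
        commands = self.menu.get(getattr(self, "current", ""), {})
        if utterance in commands:                          # second utterance: a command
            return commands[utterance]()                   # invoke the command
        return "not recognized"

if __name__ == "__main__":
    ui = VoiceMenu(MENU)
    print(ui.on_utterance("camera"))
    print(ui.on_utterance("take photo"))
```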