Patents Examined by Jesse Pullias
  • Patent number: 10019993
    Abstract: Methods, apparatus, and computer-readable media are described herein related to a user interface (UI) that can be implemented on a head-mountable device (HMD). The UI can include a voice-navigable UI. The voice-navigable UI can include a voice navigable menu that includes one or more menu items. The voice-navigable UI can also present a first visible menu that includes at least a portion of the voice navigable menu. In response to a first utterance comprising one of the one or more menu items, the voice-navigable UI can modify the first visible menu to display one or more commands associated with the first menu item. In response to a second utterance comprising a first command, the voice-navigable UI can invoke the first command. In some embodiments, the voice-navigable UI can display a second visible menu, where the first command can be displayed above other menu items in the second visible menu.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: July 10, 2018
    Assignee: Google LLC
    Inventors: Michael J. LeBeau, Clifford Ivar Nass
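    Illustrative sketch: the two-utterance flow in the abstract of patent 10019993 (a first utterance opens a menu item, a second invokes one of its commands) can be read as a small state machine. The Python sketch below is a hypothetical rendering of that flow; the menu contents, the class name, and the handle_utterance entry point are assumptions for illustration, not the patented implementation.
```python
# Hypothetical sketch of a voice-navigable menu: a first utterance selects a
# menu item and reveals its commands; a second utterance invokes a command.

class VoiceNavigableMenu:
    def __init__(self, menu):
        # menu maps a spoken menu item to its commands, and each command
        # to a callable that implements it (all names are assumptions).
        self.menu = menu
        self.selected_item = None

    def visible_menu(self):
        """Return what the HMD would currently render."""
        if self.selected_item is None:
            return list(self.menu)                      # top-level items
        return list(self.menu[self.selected_item])      # commands for the item

    def handle_utterance(self, utterance):
        utterance = utterance.strip().lower()
        if self.selected_item is None and utterance in self.menu:
            self.selected_item = utterance              # first utterance: open item
            return self.visible_menu()
        if self.selected_item is not None:
            commands = self.menu[self.selected_item]
            if utterance in commands:
                commands[utterance]()                   # second utterance: invoke
                self.selected_item = None
        return self.visible_menu()


if __name__ == "__main__":
    ui = VoiceNavigableMenu({
        "camera": {"take picture": lambda: print("click"),
                   "record video": lambda: print("recording")},
        "navigation": {"directions home": lambda: print("routing home")},
    })
    print(ui.handle_utterance("camera"))        # shows the camera commands
    print(ui.handle_utterance("take picture"))  # invokes the command
```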
  • Patent number: 10013980
    Abstract: A user is allowed to communicate with a chatbot. A menu is provided to the user that includes a list of actions that can be performed by the user. Whenever natural language input asking a question is received from the user, this input is forwarded to the chatbot, a response to this input is received from the chatbot, this response is provided to the user, and the menu is again provided to the user. Whenever natural language input is received from the user requesting an action that is not one of the actions in the menu, this input is forwarded to the chatbot, a response to this input is received from the chatbot, where this response includes another menu that includes a list of subsequent actions that are related to the requested action and can be performed by the user, and this other menu is provided to the user.
    Type: Grant
    Filed: October 4, 2016
    Date of Patent: July 3, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yuval Pinchas Borsutsky, Keren Damari, William D. Ramsey, Benny Schlesinger, Eldar Cohen
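    Illustrative sketch: the menu-mediated chatbot loop in the abstract of patent 10013980 (questions are forwarded to the chatbot and the menu is re-shown; off-menu action requests return a follow-up menu of related actions) could look roughly like the sketch below. The fake_chatbot stand-in, the menus, and the question/off-menu heuristics are assumptions, not the patented implementation.
```python
# Hypothetical sketch of the menu-mediated chatbot loop: questions go to the
# chatbot and the top-level menu is re-shown; off-menu action requests come
# back with a follow-up menu of related actions.

TOP_MENU = ["check order status", "update address", "talk to support"]

def fake_chatbot(user_input):
    """Stand-in for the real chatbot service (assumption for illustration)."""
    if user_input.endswith("?"):
        return {"reply": "Here is the answer to your question.", "submenu": None}
    return {"reply": f"I can help with '{user_input}'.",
            "submenu": ["cancel order", "change delivery date"]}

def handle_turn(user_input, menu=TOP_MENU):
    if user_input.endswith("?"):                       # natural-language question
        response = fake_chatbot(user_input)
        return response["reply"], menu                 # answer, then re-show the menu
    if user_input not in menu:                         # off-menu action request
        response = fake_chatbot(user_input)
        return response["reply"], response["submenu"]  # related-actions menu
    return f"Performing: {user_input}", menu           # on-menu action

if __name__ == "__main__":
    print(handle_turn("where is my package?"))
    print(handle_turn("return my order"))
```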
  • Patent number: 10007720
    Abstract: Example embodiments provide a system and method for analyzing conversations and determining whether to participate with a response. A networked system receives, over a network, a communication that is a part of a conversation involving one or more users, whereby the networked system is a participant in the conversation. The networked system analyzes the communication including parsing key terms from the communication. The networked system then identifies a sentiment of a user among the one or more users based on the parsed key terms. Based on the identified sentiment, the networked system determines whether to respond to the communication. In response to a determination to respond, the networked system generates a customized response and transmits the customized response, over the network, to a device of the user. The customized response may comprise questions or a set of options related to the conversation.
    Type: Grant
    Filed: November 9, 2016
    Date of Patent: June 26, 2018
    Assignee: Hipmunk, Inc.
    Inventors: Adam Julian Goldstein, Alex Quintana, Eric Palm, Gregory Millam, Zohaib Ahmed
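    Illustrative sketch: the decision flow in the abstract of patent 10007720 (parse key terms, identify sentiment, decide whether to respond with a customized response) is sketched below. The keyword parsing, the toy sentiment lexicon, and the negative-sentiment trigger are assumptions for illustration only.
```python
# Hypothetical sketch: parse key terms from a conversation message, score a
# simple sentiment, and decide whether to reply with a customized response.

NEGATIVE = {"delayed", "cancelled", "expensive", "terrible"}
POSITIVE = {"great", "cheap", "perfect", "thanks"}

def parse_key_terms(message):
    return [w.strip(".,!?").lower() for w in message.split()]

def sentiment(terms):
    score = sum(t in POSITIVE for t in terms) - sum(t in NEGATIVE for t in terms)
    return "negative" if score < 0 else "positive" if score > 0 else "neutral"

def maybe_respond(message):
    terms = parse_key_terms(message)
    mood = sentiment(terms)
    if mood == "negative":           # only step in when the user seems unhappy
        return ("Sorry about that. Would you like to (a) search other flights "
                "or (b) talk to an agent?")
    return None                      # stay silent, keep listening

if __name__ == "__main__":
    print(maybe_respond("My flight got cancelled and the rebooking is expensive"))
    print(maybe_respond("Thanks, the hotel was great"))
```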
  • Patent number: 10002192
    Abstract: Systems for receiving, analyzing, and organizing audio content contained within a plurality of media files are disclosed. The systems generally include a server that is configured to receive, index, and store a plurality of media files, which are received by the server from a plurality of sources, within at least one database in communication with the server. In addition, the server is configured to make one or more of the media files accessible to, and searchable by, one or more persons other than the original sources of such media files. Still further, the server may be configured to organize audio content included within each of the plurality of media files into bipartite graphs; segment media files into parts that exhibit similar attributes; extract and present metadata to a user pertaining to each media file; and employ multi-variable ranking methods to prioritize media file search results.
    Type: Grant
    Filed: July 7, 2015
    Date of Patent: June 19, 2018
    Inventors: Jan Jannink, Walter Bachtiger
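    Illustrative sketch: the multi-variable ranking mentioned at the end of the abstract of patent 10002192 could combine several weighted features per search hit, as sketched below. The feature set and weights are assumptions, not the patented ranking method.
```python
# Hypothetical sketch of multi-variable ranking of media-file search results:
# each result is scored on several weighted features and the list is sorted.

from dataclasses import dataclass

@dataclass
class MediaResult:
    title: str
    term_matches: int        # how many query terms the transcript matched
    recency_days: int        # age of the recording in days
    duration_minutes: float

WEIGHTS = {"term_matches": 3.0, "recency": 1.0, "duration": 0.2}  # assumed weights

def score(result: MediaResult) -> float:
    recency = 1.0 / (1 + result.recency_days)          # newer files score higher
    return (WEIGHTS["term_matches"] * result.term_matches
            + WEIGHTS["recency"] * recency
            + WEIGHTS["duration"] * min(result.duration_minutes, 10))

def rank(results):
    return sorted(results, key=score, reverse=True)

if __name__ == "__main__":
    hits = [MediaResult("board meeting", 4, 2, 55.0),
            MediaResult("standup", 1, 0, 12.0)]
    for r in rank(hits):
        print(r.title, round(score(r), 2))
```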
  • Patent number: 9996532
    Abstract: Systems and methods for building a dialog-state specific multi-turn contextual language understanding system are provided. More specifically, the systems and methods infer or are configured to infer a state-specific schema and/or state-specific rules from a formed single-shot language understanding model and/or a single-shot rule set. As such, the systems and methods only require the information necessary to form a single-shot language understanding model and/or a single-shot rule set from a builder to form or build the dialog-state specific multi-turn contextual language understanding system.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: June 12, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ruhi Sarikaya, Young-Bum Kim, Alexandre Rochette
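    Illustrative sketch: one way to read "inferring state-specific rules from a single-shot rule set" in the abstract of patent 9996532 is that the builder supplies only flat pattern-to-intent rules, and per-state subsets are derived automatically. The sketch below is speculative: the FOLLOW_UPS mapping and the conditioning on the previous intent are assumptions for illustration, not the patented inference.
```python
# Hypothetical sketch: derive dialog-state-specific rule subsets from a flat,
# builder-supplied single-shot rule set by conditioning on the previous intent.

SINGLE_SHOT_RULES = {                  # assumed builder-supplied single-shot rules
    "book a flight to <city>": "book_flight",
    "<month> <day>": "provide_date",
    "economy or business": "provide_class",
}

# Assumed inference: after an intent fires, only rules that can fill its
# remaining slots stay active in that dialog state.
FOLLOW_UPS = {"book_flight": ["provide_date", "provide_class"],
              "provide_date": ["provide_class"]}

def rules_for_state(previous_intent):
    if previous_intent is None:
        return SINGLE_SHOT_RULES                       # first turn: full rule set
    allowed = set(FOLLOW_UPS.get(previous_intent, []))
    return {p: i for p, i in SINGLE_SHOT_RULES.items() if i in allowed}

if __name__ == "__main__":
    print(list(rules_for_state(None).values()))            # all intents available
    print(list(rules_for_state("book_flight").values()))   # only follow-up intents
```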
  • Patent number: 9990337
    Abstract: A system and method for automatically generating a narrative story receives data and information pertaining to a domain event. The received data and information and/or one or more derived features are then used to identify a plurality of angles for the narrative story. The plurality of angles is then filtered, for example through use of parameters that specify a focus for the narrative story, length of the narrative story, etc. Points associated with the filtered plurality of angles are then assembled and the narrative story is rendered using the filtered plurality of angles and the assembled points.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: June 5, 2018
    Assignee: Narrative Science Inc.
    Inventors: Lawrence A. Birnbaum, Kristian J. Hammond, Nicholas D. Allen, John R. Templon
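    Illustrative sketch: the angle-based pipeline in the abstract of patent 9990337 (derive features, identify angles, filter them by story parameters, assemble points, render) is sketched below. The angles, priorities, and the one-sentence rendering are toy assumptions, not the patented generator.
```python
# Hypothetical sketch of the angle-based pipeline: derive features from event
# data, test candidate angles, filter them by a length parameter, assemble
# points, and render the surviving angles as a story.

def derive_features(event):
    return {"margin": abs(event["home_score"] - event["away_score"]),
            "winner": event["home"] if event["home_score"] > event["away_score"]
                      else event["away"]}

ANGLES = [
    {"name": "blowout",   "applies": lambda f: f["margin"] >= 20, "priority": 2},
    {"name": "close_win", "applies": lambda f: f["margin"] <= 3,  "priority": 3},
    {"name": "recap",     "applies": lambda f: True,              "priority": 1},
]

def generate_story(event, max_angles=2):
    features = derive_features(event)
    candidates = [a for a in ANGLES if a["applies"](features)]        # identify angles
    chosen = sorted(candidates, key=lambda a: -a["priority"])[:max_angles]  # filter
    point = f"{features['winner']} won by {features['margin']}."       # assemble points
    return " ".join(f"[{a['name']}] {point}" for a in chosen)          # render

if __name__ == "__main__":
    print(generate_story({"home": "Lions", "away": "Bears",
                          "home_score": 31, "away_score": 10}))
```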
  • Patent number: 9990924
    Abstract: The present invention discloses a speech interaction method and apparatus, and pertains to the field of speech processing technologies. The method includes: acquiring speech data of a user; performing user attribute recognition on the speech data to obtain a first user attribute recognition result; performing content recognition on the speech data to obtain a content recognition result of the speech data; and performing a corresponding operation according to at least the first user attribute recognition result and the content recognition result, so as to respond to the speech data. According to the present invention, after speech data is acquired, user attribute recognition and content recognition are separately performed on the speech data to obtain a first user attribute recognition result and a content recognition result, and a corresponding operation is performed according to at least the first user attribute recognition result and the content recognition result.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: June 5, 2018
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Hongbo Jin, Zhuolin Jiang
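    Illustrative sketch: the abstract of patent 9990924 describes running user-attribute recognition and content recognition separately on the same speech data and then choosing a response from both results. The sketch below is a hypothetical rendering; the pitch-based age heuristic and the playlist choice are assumptions for illustration.
```python
# Hypothetical sketch: run user-attribute recognition and content recognition
# separately on the same speech data, then respond based on both results.

def recognize_user_attributes(speech_data):
    """Stand-in attribute recognizer (e.g. age group); assumed for illustration."""
    return {"age_group": "child" if speech_data.get("pitch_hz", 0) > 250 else "adult"}

def recognize_content(speech_data):
    """Stand-in content recognizer; assumed for illustration."""
    return speech_data.get("transcript", "")

def respond(speech_data):
    attributes = recognize_user_attributes(speech_data)
    content = recognize_content(speech_data)
    if "play music" in content:
        playlist = "children's songs" if attributes["age_group"] == "child" else "top hits"
        return f"Playing {playlist}."
    return "Sorry, I did not catch that."

if __name__ == "__main__":
    print(respond({"pitch_hz": 300, "transcript": "play music please"}))
    print(respond({"pitch_hz": 120, "transcript": "play music please"}))
```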
  • Patent number: 9984689
    Abstract: Disclosed is an apparatus and method for correcting pronunciation by contextual recognition. The apparatus may include an interface configured to receive, from a speech recognition server, first text data obtained by converting speech data to a text, and a processor configured to extract a keyword from the received first text data, calculate a suitability of a word in the first text data in association with the extracted keyword, and update the first text data to second text data by replacing, with an alternative word, a word in the first text data having a suitability less than a preset reference value.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: May 29, 2018
    Inventor: Sung Hyuk Kim
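    Illustrative sketch: the correction flow in the abstract of patent 9984689 (extract a keyword, score each word's suitability against it, replace low-suitability words) is sketched below. The co-occurrence table, the threshold, and the homophone substitution table are assumptions, not the patented method.
```python
# Hypothetical sketch: extract a keyword from recognized text, score each word's
# suitability against that keyword with a toy co-occurrence table, and replace
# low-suitability words with alternatives that fit the context better.

COOCCURRENCE = {                       # assumed keyword -> word suitability scores
    "weather": {"rain": 0.9, "reign": 0.1, "sunny": 0.8},
}
ALTERNATIVES = {"reign": "rain"}       # assumed homophone substitution table
THRESHOLD = 0.3                        # assumed preset reference value

def extract_keyword(words):
    return next((w for w in words if w in COOCCURRENCE), None)

def correct(first_text):
    words = first_text.lower().split()
    keyword = extract_keyword(words)
    if keyword is None:
        return first_text
    suitability = COOCCURRENCE[keyword]
    corrected = [ALTERNATIVES.get(w, w)
                 if suitability.get(w, 1.0) < THRESHOLD else w
                 for w in words]
    return " ".join(corrected)          # the updated, second text data

if __name__ == "__main__":
    print(correct("the weather forecast says reign tomorrow"))
    # -> "the weather forecast says rain tomorrow"
```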
  • Patent number: 9978361
    Abstract: Systems and methods for building a dialog-state specific multi-turn contextual language understanding system are provided. More specifically, the systems and methods infer or are configured to infer a state-specific schema and/or state-specific rules from a formed single-shot language understanding model and/or a single-shot rule set. As such, the systems and methods only require the information necessary to form a single-shot language understanding model and/or a single-shot rule set from a builder to form or build the dialog-state specific multi-turn contextual language understanding system.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: May 22, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ruhi Sarikaya, Young-Bum Kim, Alexandre Rochette
  • Patent number: 9972334
    Abstract: A device includes a decoder configured to receive an encoded audio signal and to generate a synthesized signal based on the encoded audio signal. The device further includes a classifier configured to classify the synthesized signal based on at least one parameter determined from the encoded audio signal.
    Type: Grant
    Filed: May 12, 2016
    Date of Patent: May 15, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Subasingha Shaminda Subasingha, Vivek Rajendran, Venkata Subrahmanyam Chandra Sekhar Chebiyyam, Venkatraman Atti, Pravin Kumar Ramadas, Daniel Jared Sinder, Stephane Pierre Villette
  • Patent number: 9972308
    Abstract: Methods, a system, and a classifier are provided. A method includes preparing, by a processor, pairs for an information retrieval task. Each pair includes (i) a training-stage speech recognition result for a respective sequence of training words and (ii) an answer label corresponding to the training-stage speech recognition result. The method further includes obtaining, by the processor, a respective rank for the answer label included in each pair to obtain a set of ranks. The method also includes determining, by the processor, for each pair, an end of question part in the training-stage speech recognition result based on the set of ranks. The method additionally includes building, by the processor, the classifier such that the classifier receives a recognition-stage speech recognition result and returns a corresponding end of question part for the recognition-stage speech recognition result, based on the end of question part determined for the pairs.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: May 15, 2018
    Assignee: International Business Machines Corporation
    Inventors: Tohru Nagano, Ryuki Tachibana
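    Illustrative sketch: the training-signal construction in the abstract of patent 9972308 (use retrieval ranks of the answer label over prefixes of the recognition result to locate the end of the question part) could look roughly like the sketch below. The retrieval_rank stand-in and the shortest-best-prefix boundary rule are assumptions, not the patented procedure.
```python
# Hypothetical sketch: for each (speech-recognition result, answer label) pair,
# find the shortest prefix whose retrieval rank for the label is best, and use
# that boundary as the "end of question" training signal for the classifier.

def retrieval_rank(prefix_words, answer_label):
    """Stand-in ranker: more overlapping words -> better (lower) rank."""
    overlap = len(set(prefix_words) & set(answer_label.split()))
    return max(1, 10 - overlap)

def end_of_question_index(words, answer_label):
    ranks = [retrieval_rank(words[:i + 1], answer_label) for i in range(len(words))]
    best = min(ranks)
    return ranks.index(best) + 1        # shortest prefix reaching the best rank

def build_training_examples(pairs):
    examples = []
    for text, label in pairs:
        words = text.split()
        boundary = end_of_question_index(words, label)
        # each position is labeled 1 if it ends the question part, else 0
        examples.append((words, [int(i + 1 == boundary) for i in range(len(words))]))
    return examples

if __name__ == "__main__":
    pairs = [("what is the return policy um let me think", "return policy details")]
    print(build_training_examples(pairs))
```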
  • Patent number: 9972305
    Abstract: An apparatus for normalizing input data of an acoustic model includes a window extractor configured to extract windows of frame data to be input to an acoustic model from frame data of a speech to be recognized, and a normalizer configured to normalize the frame data to be input to the acoustic model in units of the extracted windows.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: May 15, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: In Chul Song, Young Sang Choi, Hwi Dong Na
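    Illustrative sketch: the window extraction and per-window normalization described in the abstract of patent 9972305 is sketched below. The window size, non-overlapping stride, and zero-mean/unit-variance statistics are assumptions for illustration.
```python
# Hypothetical sketch: slice frame data into fixed-size windows and normalize
# each window (zero mean, unit variance per feature) before feeding the
# acoustic model.

import numpy as np

def extract_windows(frames, window_size=8):
    """frames: (num_frames, feature_dim) array of e.g. filterbank features."""
    windows = []
    for start in range(0, len(frames) - window_size + 1, window_size):
        windows.append(frames[start:start + window_size])
    return windows

def normalize_window(window, eps=1e-8):
    mean = window.mean(axis=0)
    std = window.std(axis=0)
    return (window - mean) / (std + eps)   # per-window, per-feature normalization

if __name__ == "__main__":
    frames = np.random.randn(32, 40) * 5 + 3         # 32 frames of 40-dim features
    normalized = [normalize_window(w) for w in extract_windows(frames)]
    print(len(normalized), normalized[0].shape)      # 4 windows of shape (8, 40)
```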
  • Patent number: 9972314
    Abstract: Techniques and architectures may be used to generate and perform a process using weighted finite-state transducers involving generic input search graphs. The process need not pursue theoretical optimality; instead, search graphs may be optimized without an a priori optimization step. The process may result in an automatic speech recognition (ASR) decoder that is substantially faster than ASR decoders that include the optimization step.
    Type: Grant
    Filed: June 1, 2016
    Date of Patent: May 15, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zhenghao Wang, Xuedong Huang, Huaming Wang
  • Patent number: 9959869
    Abstract: Features are disclosed for processing a user utterance with respect to multiple subject matters or domains, and for selecting a likely result from a particular domain with which to respond to the utterance or otherwise take action. A user utterance may be transcribed by an automatic speech recognition (“ASR”) module, and the results may be provided to a multi-domain natural language understanding (“NLU”) engine. The multi-domain NLU engine may process the transcription(s) in multiple individual domains rather than in a single domain. In some cases, the transcription(s) may be processed in multiple individual domains in parallel or substantially simultaneously. In addition, hints may be generated based on previous user interactions and other data. The ASR module, multi-domain NLU engine, and other components of a spoken language processing system may use the hints to more efficiently process input or more accurately generate output.
    Type: Grant
    Filed: September 4, 2017
    Date of Patent: May 1, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Lambert Mathias, Ying Shi, Imre Attila Kiss, Ryan Paul Thomas, Frederic Johan Georges Deramat
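    Illustrative sketch: the parallel multi-domain processing and hint usage in the abstract of patent 9959869 is sketched below. The domain scorers, the hint bonus, and the choice of thread-based parallelism are assumptions, not the patented system.
```python
# Hypothetical sketch: run several domain-specific NLU interpreters on the same
# ASR transcription in parallel and keep the highest-confidence result, with a
# small bonus for domains hinted at by previous interactions.

from concurrent.futures import ThreadPoolExecutor

def music_domain(text):
    score = 0.9 if "play" in text else 0.1
    return {"domain": "music", "intent": "play_song", "score": score}

def weather_domain(text):
    score = 0.9 if "weather" in text else 0.1
    return {"domain": "weather", "intent": "get_forecast", "score": score}

DOMAINS = [music_domain, weather_domain]

def interpret(transcription, hints=()):
    with ThreadPoolExecutor() as pool:                       # process domains in parallel
        results = list(pool.map(lambda d: d(transcription), DOMAINS))
    for result in results:
        if result["domain"] in hints:                        # prior interactions bias the choice
            result["score"] += 0.05
    return max(results, key=lambda r: r["score"])

if __name__ == "__main__":
    print(interpret("play some jazz", hints={"music"}))
```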
  • Patent number: 9953654
    Abstract: A voice command recognition apparatus and a method thereof are described. The voice command recognition apparatus includes audio sensors placed at different locations and a context determiner configured to determine user context, including a vocalization from a user, based on a voice received at the audio sensors. A command recognizer in the voice command recognition apparatus is configured either to activate and recognize a voice command or to remain inactive, according to the recognized context.
    Type: Grant
    Filed: February 6, 2015
    Date of Patent: April 24, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Min Young Mun, Young Sang Choi
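    Illustrative sketch: the gating behaviour in the abstract of patent 9953654 (a context determiner decides whether the vocalization is directed at the device; only then does the command recognizer activate) is sketched below. The sensor-placement heuristic, the wake-word flag, and the command set are assumptions for illustration.
```python
# Hypothetical sketch: a context determiner decides from multi-sensor audio
# whether the vocalization is directed at the device; only then is the (more
# expensive) command recognizer activated.

def determine_context(sensor_levels, wake_word_heard):
    """sensor_levels: loudness at each audio sensor placed at a different location."""
    loudest = max(sensor_levels)
    facing_device = sensor_levels.index(loudest) == 0   # sensor 0 sits on the device
    return {"directed_at_device": wake_word_heard or facing_device}

def recognize_command(audio_text):
    return audio_text if audio_text in {"lights on", "lights off"} else None

def handle_audio(sensor_levels, wake_word_heard, audio_text):
    context = determine_context(sensor_levels, wake_word_heard)
    if not context["directed_at_device"]:
        return None                       # recognizer remains inactive
    return recognize_command(audio_text)  # recognizer activated

if __name__ == "__main__":
    print(handle_audio([0.8, 0.3, 0.2], False, "lights on"))   # -> 'lights on'
    print(handle_audio([0.2, 0.9, 0.7], False, "lights on"))   # -> None (not directed)
```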
  • Patent number: 9953639
    Abstract: Disclosed are a voice recognition system and a method for constructing it. By layering the system, general semantic recognition performed by the system is separated from application-specific semantic recognition; and by classifying application programs and abstracting out a common performance function, the system can efficiently find the application program whose semantics match the voice content, and third-party programs can easily be added to the existing voice recognition system. The invention maps the performance function to a regular expression with semantic variables, so that the system can recognize a wider range of semantic expressions as semantic recognition is optimized, giving the system more natural, human-like behavior.
    Type: Grant
    Filed: June 26, 2014
    Date of Patent: April 24, 2018
    Inventor: Hua Xu
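    Illustrative sketch: mapping a performance function to a regular expression with semantic variables, as described in the abstract of patent 9953639, can be pictured as a shared dispatch layer that matches utterances and routes them to registered application functions. The patterns, handlers, and registration decorator below are assumptions for illustration.
```python
# Hypothetical sketch: applications register performance functions under regular
# expressions with named semantic variables; the shared layer matches the
# utterance, extracts the variables, and dispatches to the matching function.

import re

REGISTRY = []   # (compiled pattern, handler) pairs registered by applications

def register(pattern):
    def decorator(func):
        REGISTRY.append((re.compile(pattern), func))
        return func
    return decorator

@register(r"(?:please )?call (?P<contact>\w+)")
def call_contact(contact):
    return f"dialer: calling {contact}"

@register(r"set (?:an )?alarm for (?P<time>\d{1,2} (?:am|pm))")
def set_alarm(time):
    return f"clock: alarm set for {time}"

def dispatch(utterance):
    for pattern, handler in REGISTRY:
        match = pattern.fullmatch(utterance.lower())
        if match:
            return handler(**match.groupdict())
    return "no matching application"

if __name__ == "__main__":
    print(dispatch("Please call Alice"))
    print(dispatch("set alarm for 7 am"))
```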
  • Patent number: 9940925
    Abstract: An apparatus comprises: a memory; and a processor coupled to the memory and configured to: receive a spoken phrase associated with a printed phrase from a tamper-evident component of a product; obtain a notification associated with authentication of the product based on the spoken phrase; and provide the notification in a visual manner, in an audio manner, or a combined audio and visual manner. A method comprises: creating a tamper-evident component comprising an obscuring mechanism and a printed phrase, wherein the obscuring mechanism obscures the printed phrase from view; providing the tamper-evident component for integration into a product; receiving a spoken phrase from a first consumer; analyzing the spoken phrase; generating a notification associated with authentication of the product based on the analyzing; and transmitting the notification to the first consumer.
    Type: Grant
    Filed: June 17, 2016
    Date of Patent: April 10, 2018
    Assignee: Authentix, Inc.
    Inventors: Mohamed Lazzouni, Will Ewin
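    Illustrative sketch: the authentication step in the abstract of patent 9940925 compares a spoken phrase from the consumer with the phrase printed under the product's tamper-evident obscuring layer. The sketch below assumes a phrase store keyed by serial number and a simple normalized exact-match comparison; both are assumptions for illustration.
```python
# Hypothetical sketch of the authentication flow: the spoken phrase captured
# from the consumer is compared against the phrase printed under the product's
# tamper-evident obscuring layer, and a notification is generated.

ISSUED_PHRASES = {"SN-1001": "blue falcon seven"}   # printed under the scratch-off layer

def normalize(phrase):
    return " ".join(phrase.lower().split())

def authenticate(serial_number, spoken_phrase):
    expected = ISSUED_PHRASES.get(serial_number)
    if expected is None:
        return "unknown product"
    if normalize(spoken_phrase) == normalize(expected):
        return "authentic"                 # notification delivered visually and/or audibly
    return "possible counterfeit"

if __name__ == "__main__":
    print(authenticate("SN-1001", "Blue Falcon Seven"))
    print(authenticate("SN-1001", "red falcon seven"))
```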
  • Patent number: 9928234
    Abstract: An example method for natural language text classification based on semantic features comprises: performing semantico-syntactic analysis of a natural language text to produce a semantic structure representing a set of semantic classes; associating a first semantic class of the set of semantic classes with a first value reflecting a specified semantic class attribute; identifying a second semantic class associated with the first semantic class by a pre-defined semantic relationship; associating the second semantic class with a second value reflecting the specified semantic class attribute, wherein the second value is determined by applying a pre-defined transformation to the first value; evaluating a feature of the natural language text based on the first value and the second value; and determining, by a classifier model using the evaluated feature of the natural language text, a degree of association of the natural language text with a category of a pre-defined set of categories.
    Type: Grant
    Filed: May 18, 2016
    Date of Patent: March 27, 2018
    Assignee: ABBYY PRODUCTION LLC
    Inventors: Sergey Kolotienko, Konstantin Anisimovich, Andrey Valerievich Myakutin, Evgeny Mikhaylovich Indenbom
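    Illustrative sketch: the value propagation in the abstract of patent 9928234 (a matched semantic class gets a value, a related class gets a transformed value, and both feed a text feature for the classifier) is sketched below. The toy class hierarchy and the decay transformation are assumptions, not the patented model.
```python
# Hypothetical sketch of propagating a semantic-class attribute value along a
# class hierarchy: the matched class gets a value, related (here, parent)
# classes get a transformed (decayed) value, and the values become a feature.

SEMANTIC_PARENTS = {           # assumed toy hierarchy: class -> parent class
    "TO_PURCHASE": "TO_ACQUIRE",
    "TO_ACQUIRE": "ACTION",
}

def propagate(first_class, first_value, transform=lambda v: v * 0.5):
    """Assign first_value to the matched class and transformed values upward."""
    values = {first_class: first_value}
    current, value = first_class, first_value
    while current in SEMANTIC_PARENTS:
        current = SEMANTIC_PARENTS[current]
        value = transform(value)          # pre-defined transformation of the value
        values[current] = value
    return values

def text_feature(class_values):
    """Evaluate a single text feature from the propagated values."""
    return sum(class_values.values())

if __name__ == "__main__":
    values = propagate("TO_PURCHASE", 1.0)
    print(values)                 # {'TO_PURCHASE': 1.0, 'TO_ACQUIRE': 0.5, 'ACTION': 0.25}
    print(text_feature(values))   # 1.75 -> fed into the classifier model
```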
  • Patent number: 9922667
    Abstract: Various embodiments relate to detecting at least one of conversation, the presence, or the identity of others during presentation of digital content on a computing device. When another person is detected, one or more actions may be taken with respect to the digital content. For example, the digital content may be minimized, moved, resized, or otherwise modified.
    Type: Grant
    Filed: January 16, 2015
    Date of Patent: March 20, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Arthur Charles Tomlin, Dave Hill, Jonathan Paulovich, Evan Michael Keibler, Jason Scott, Cameron G. Brown, Thomas Forsythe, Jeffrey A. Kohler, Brian Murphy
  • Patent number: 9916302
    Abstract: Provided is a text processing system capable of classifying a plurality of texts into groups whose overviews can be grasped, and of classifying texts that semantically entail one another into the same group even when they are not determined to have an entailment relation. Entailment recognition means (71) performs entailment recognition between the given texts. Group generation means (72) selects an individual text and generates a group whose members are the texts entailing the selected text. Group integration means (73) integrates groups when they satisfy a predetermined condition based on the degree of member overlap between the groups.
    Type: Grant
    Filed: July 10, 2015
    Date of Patent: March 13, 2018
    Assignee: NEC Corporation
    Inventors: Masaaki Tsuchida, Kai Ishikawa, Takashi Onishi, Kosuke Yamamoto
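    Illustrative sketch: the three stages in the abstract of patent 9916302 (entailment recognition between texts, group generation around each selected text, and group integration based on member overlap) are sketched below. The word-subset entailment test and the 0.5 overlap threshold are assumptions for illustration, not the patented method.
```python
# Hypothetical sketch of the grouping flow: an entailment check between texts,
# a group per selected text containing the texts that entail it, and a merge of
# groups whose member overlap exceeds a threshold.

def entails(text_a, text_b):
    """Toy check: text_a entails text_b if text_b's words are a subset of text_a's."""
    return set(text_b.lower().split()) <= set(text_a.lower().split())

def generate_groups(texts):
    groups = []
    for selected in texts:                                    # group generation
        members = {t for t in texts if entails(t, selected)}
        if members:
            groups.append(members)
    return groups

def integrate_groups(groups, min_overlap=0.5):                # group integration
    merged = []
    for group in groups:
        for existing in merged:
            overlap = len(group & existing) / min(len(group), len(existing))
            if overlap >= min_overlap:
                existing |= group
                break
        else:
            merged.append(set(group))
    return merged

if __name__ == "__main__":
    texts = ["the server crashed last night",
             "the server crashed",
             "the database server crashed last night"]
    print(integrate_groups(generate_groups(texts)))
```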