Patents Examined by Jesse Pullias
  • Patent number: 9911407
    Abstract: A system and method are presented for the synthesis of speech from provided text. Particularly, the generation of parameters within the system is performed as a continuous approximation in order to mimic the natural flow of speech as opposed to a step-wise approximation of the feature stream. Provided text may be partitioned and parameters generated using a speech model. The generated parameters from the speech model may then be used in a post-processing step to obtain a new set of parameters for application in speech synthesis.
    Type: Grant
    Filed: January 14, 2015
    Date of Patent: March 6, 2018
    Assignee: Interactive Intelligence Group, Inc.
    Inventors: Yingyi Tan, Aravind Ganapathiraju, Felix Immanuel Wyss
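A minimal sketch of the general idea in 9911407's abstract: turning a step-wise parameter stream into a continuously varying one. The linear interpolation below is a generic stand-in for the patent's post-processing step, and all names and values are illustrative.

```python
# Sketch only: smoothing a step-wise parameter track into a continuous one.
# The interpolation used here is a generic stand-in, not the patented post-processing.

def smooth_track(step_values, frames_per_step):
    """Expand per-step parameter values to per-frame values and interpolate
    linearly between step midpoints so the track varies continuously."""
    midpoints = [(i + 0.5) * frames_per_step for i in range(len(step_values))]
    total_frames = len(step_values) * frames_per_step
    track = []
    for f in range(total_frames):
        # Hold the boundary values, interpolate between surrounding midpoints otherwise.
        if f <= midpoints[0]:
            track.append(step_values[0])
        elif f >= midpoints[-1]:
            track.append(step_values[-1])
        else:
            j = next(i for i in range(len(midpoints) - 1)
                     if midpoints[i] <= f < midpoints[i + 1])
            t = (f - midpoints[j]) / (midpoints[j + 1] - midpoints[j])
            track.append((1 - t) * step_values[j] + t * step_values[j + 1])
    return track

print(smooth_track([1.0, 3.0, 2.0], frames_per_step=5))
```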
  • Patent number: 9911409
    Abstract: A speech recognition apparatus includes a processor configured to recognize a user's speech using any one or combination of two or more of an acoustic model, a pronunciation dictionary including primitive words, and a language model including primitive words; and correct word spacing in a result of speech recognition based on a word-spacing model.
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: March 6, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Seokjin Hong
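A minimal sketch of word-spacing correction in the spirit of 9911409's abstract, using a toy lexicon and greedy longest-match segmentation in place of the trained word-spacing model; the lexicon and example text are invented.

```python
# Sketch only: re-spacing a recognition result with a toy lexicon, standing in
# for the word-spacing model described in the abstract.

LEXICON = {"turn", "on", "the", "lights", "in", "living", "room"}

def respace(text, lexicon, max_len=7):
    """Greedy longest-match segmentation of a string with missing/extra spaces."""
    s = text.replace(" ", "")
    words, i = [], 0
    while i < len(s):
        for l in range(min(max_len, len(s) - i), 0, -1):
            if s[i:i + l] in lexicon or l == 1:
                words.append(s[i:i + l])
                i += l
                break
    return " ".join(words)

print(respace("turnon thelights in thelivingroom", LEXICON))
# -> "turn on the lights in the living room"
```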
  • Patent number: 9904678
    Abstract: A graphical user interface displays multiple display areas that include menu items displayed in an original language. User input selecting a translation language for at least one display area of the plurality of display areas is received. The way in which a menu item is used within the at least one display area is determined, in the context of the at least one display area. Based on this determination, a translated version of the menu item is generated and displayed in the display area.
    Type: Grant
    Filed: May 23, 2017
    Date of Patent: February 27, 2018
    Assignee: iHeartMedia Management Services, Inc.
    Inventor: David C. Jellison, Jr.
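A minimal sketch of context-dependent menu-item translation as described in 9904678's abstract above; the contexts, the Spanish strings, and the lookup-table approach are illustrative assumptions, not the patented determination logic.

```python
# Sketch only: context-sensitive translation of a menu item, keyed by how the
# item is used in its display area. Contexts and translations are invented.

TRANSLATIONS = {
    ("Play", "media transport control"): "Reproducir",   # the verb, as a button
    ("Play", "theater listings"):        "Obra",          # the noun, a stage play
}

def translate_menu_item(item, display_area_context, language="es"):
    # A real system would classify the usage; here the context is passed in.
    return TRANSLATIONS.get((item, display_area_context), item)

print(translate_menu_item("Play", "media transport control"))   # -> "Reproducir"
print(translate_menu_item("Play", "theater listings"))          # -> "Obra"
```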
  • Patent number: 9898457
    Abstract: Examples are provided for detecting and removing non-natural language within natural language in order to enhance content analysis of the natural language. A plurality of terms is identified in a phrase, and a sliding window having a defined length is placed over a first sequence of terms from the plurality of terms. The first sequence of terms includes a first term, a second term, and a third term, the first term and the third term being adjacent to the second term. Based on the first term, the second term, and the third term, a determination is made as to whether the second term represents non-natural language. Upon determining that the second term is non-natural language, the second term is labeled as such and removed from the plurality of terms.
    Type: Grant
    Filed: October 3, 2016
    Date of Patent: February 20, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Pranab Mohanty, Intaik Park, Kieran Brantner-Magee, Lucas Lin, Saikat Sen, Korhan Ileri
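A minimal sketch of the length-3 sliding window from 9898457's abstract: the middle term is judged from its two neighbors, labeled, and removed. The heuristic rule below is an invented stand-in for the patent's classifier, and the example tokens are made up.

```python
# Sketch only: a length-3 sliding window that labels the middle term as
# non-natural language using a trivial stand-in rule (the patent's classifier
# would be a trained model, not this heuristic).
import re

def looks_non_natural(prev_term, term, next_term):
    # Stand-in rule: a mostly non-alphabetic token flanked by ordinary words.
    symbol_heavy = len(re.sub(r"[A-Za-z]", "", term)) > len(term) / 2
    flanked_by_words = prev_term.isalpha() and next_term.isalpha()
    return symbol_heavy and flanked_by_words

def remove_non_natural(terms):
    keep = list(terms)
    for i in range(1, len(terms) - 1):
        if looks_non_natural(terms[i - 1], terms[i], terms[i + 1]):
            keep[i] = None                      # label, then drop below
    return [t for t in keep if t is not None]

print(remove_non_natural(["restart", "srv-01:8443/%7Efoo", "the", "service", "today"]))
```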
  • Patent number: 9892112
    Abstract: Embodiments relate to a system, program product, and method for use with an intelligent computer platform to decipher analogical phrases. Deciphering the phrase includes parsing it into subcomponents, identifying a category for each parsed subcomponent and a syntactic structure of the phrase, and generating a list of definitions for each parsed subcomponent. Definitions in the list are ranked according to relevance, and an outcome is identified based on ranked relevancy, the outcome being the definition with the highest relevance in the list. A corpus is searched for evidence of a pattern associated with the list. Each definition in the list is scored according to a weighted calculation based on congruence of corpus evidence with the pattern. An outcome is generated from the pattern that receives the highest score based on congruence with the corpus evidence.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: February 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Corville O. Allen, Andrew R. Freed
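A minimal sketch of the weighted scoring step in 9892112's abstract: each candidate definition is scored by combining its relevance ranking with the amount of corpus evidence matching its pattern. The weights, data, and the example phrase reading are assumptions, not values from the patent.

```python
# Sketch only: weighted scoring of candidate definitions against corpus evidence.

def score_definitions(candidates, corpus_hits, w_relevance=0.4, w_evidence=0.6):
    """candidates: {definition: relevance in [0, 1]}
       corpus_hits: {definition: count of corpus passages matching its pattern}"""
    max_hits = max(corpus_hits.values()) or 1
    scores = {
        d: w_relevance * rel + w_evidence * (corpus_hits.get(d, 0) / max_hits)
        for d, rel in candidates.items()
    }
    return max(scores, key=scores.get), scores

best, scores = score_definitions(
    {"literal: a physical bridge": 0.9, "figurative: reconcile a disagreement": 0.7},
    {"literal: a physical bridge": 2, "figurative: reconcile a disagreement": 14},
)
print(best)   # the figurative reading wins on corpus evidence
```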
  • Patent number: 9886501
    Abstract: A method, system and computer-usable medium are disclosed for using a contextual graph to summarize a corpus of content. Natural Language Processing (NLP) preprocessing operations are performed on text within an input corpus to form a grammatical analysis. In turn, the grammatical analysis is used to generate semantic associations between phrases in the input corpus. The resulting semantic associations are then used to determine the thematic relevance of the individual sentences in the input corpus to form a context-based ranking. In turn, the context-based ranking is used to construct a context graph, the vertices of which are represented by phrases, and the edges are represented by an aggregate score resulting from performing calculations associated with semantic similarity of the phrases. The resulting context graph is then used to generate a content summarization for the input corpus.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: February 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayanan Krishnamurthy, Niyati Parameswaran, Sridhar Sudarsan
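A minimal sketch of a context graph in the spirit of 9886501's abstract: phrases as vertices, edges weighted by a similarity score, and a ranking derived from aggregate edge weight. Token-overlap similarity and the example phrases are stand-ins for the patent's semantic scoring and summarization logic.

```python
# Sketch only: a tiny context graph whose vertices are phrases and whose edge
# weights come from a stand-in similarity (token overlap), then a degree-style ranking.
from itertools import combinations

def jaccard(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def rank_phrases(phrases, min_sim=0.1):
    edges = {(a, b): jaccard(a, b) for a, b in combinations(phrases, 2)
             if jaccard(a, b) >= min_sim}
    weight = {p: sum(w for (a, b), w in edges.items() if p in (a, b)) for p in phrases}
    return sorted(phrases, key=weight.get, reverse=True)

phrases = ["cloud storage pricing", "cloud storage tiers", "support contact hours"]
print(rank_phrases(phrases))   # the storage phrases rank above the outlier
```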
  • Patent number: 9886942
    Abstract: In some implementations, a language proficiency of a user of a client device is determined by one or more computers. The one or more computers then determines a text segment for output by a text-to-speech module based on the determined language proficiency of the user. After determining the text segment for output, the one or more computers generates audio data including a synthesized utterance of the text segment. The audio data including the synthesized utterance of the text segment is then provided to the client device for output.
    Type: Grant
    Filed: April 3, 2017
    Date of Patent: February 6, 2018
    Assignee: Google LLC
    Inventors: Matthew Sharifi, Jakob Nicolaus Foerster
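A minimal sketch of choosing the text segment for synthesis by the user's estimated language proficiency, as in 9886942's abstract; the proficiency score, threshold, and response texts are invented.

```python
# Sketch only: selecting a text segment for TTS output by estimated proficiency.

RESPONSES = {
    "basic":    "Your package arrives Tuesday.",
    "advanced": "Your package is out for delivery and should arrive Tuesday afternoon.",
}

def pick_text_segment(proficiency_score):
    """proficiency_score in [0, 1]; higher means more proficient."""
    return RESPONSES["advanced"] if proficiency_score >= 0.6 else RESPONSES["basic"]

print(pick_text_segment(0.3))   # -> simpler phrasing for a less proficient user
```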
  • Patent number: 9881082
    Abstract: A method, system and computer-usable medium are disclosed for generating a context-sensitive summarization of a corpus of content. Natural Language Processing (NLP) operations are performed on text within an input corpus to extract phrases, which are then used to generate a grammatical analysis. In turn, the grammatical analysis is used to determine the thematic relevance of individual sentences in the input corpus. Sentences within the input corpus are then ranked according to their respective thematic relevance. This ranking is used to construct a contextualized content graph, which in turn is used to generate a content summarization for the input corpus.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: January 30, 2018
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayanan Krishnamurthy, Niyati Parameswaran, Sridhar Sudarsan
  • Patent number: 9881636
    Abstract: Systems and methods for escalation detection using sentiment analysis are disclosed. A computer-implemented method of the invention includes: determining, by a computer device, the occurrence of an interaction event between a first party and a second party within a recording including audio data; analyzing, by the computer device, the audio data for a change in tone over time; analyzing, by the computer device, the audio data for the presence of any negative tones; determining, by the computer device, whether the change in tone, the presence of any negative tones, or a combination of the change in tone and the presence of any negative tones, indicates an escalation during the interaction event to generate escalation data; and saving, by the computer device, the escalation data.
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: January 30, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Rhonda L. Childress, Kim A. Eckert, Ryan D. McNair
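A minimal sketch of the escalation decision in 9881636's abstract, combining a change in tone over time with the presence of strongly negative tones; the score scale and thresholds are invented placeholders for the patent's audio analysis.

```python
# Sketch only: flagging escalation from per-segment tone scores (negative is lower).
# The thresholds and the scores themselves are invented placeholders.

def detect_escalation(tone_scores, drop_threshold=0.3, negative_threshold=-0.5):
    """tone_scores: chronological sentiment-like scores in [-1, 1]."""
    change_in_tone = tone_scores[0] - tone_scores[-1] >= drop_threshold
    negative_present = any(s <= negative_threshold for s in tone_scores)
    escalated = change_in_tone or negative_present
    return {"escalated": escalated,
            "tone_drop": change_in_tone,
            "negative_tones": negative_present}

print(detect_escalation([0.4, 0.1, -0.2, -0.6]))
```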
  • Patent number: 9875235
    Abstract: Non-limiting examples of the present disclosure describe natural language translation capabilities that enable automated process flow diagram generation from received input. Input may be received through an application for automated generation of a process flow diagram. The received input may be provided to a natural language processing component of a language understanding intelligence service. A data object, received from the natural language processing component, may be accessed. The data object provides data for creation of a process flow diagram based on the received input. In examples, the data object is generated based on natural language processing by the natural language processing component and at least one user defined grammar rule, provided by the application, for converting the received input to one or more process flow steps. The process flow diagram may be presented within the application. Other examples are also described such as reverse engineering an existing process flow diagram.
    Type: Grant
    Filed: October 5, 2016
    Date of Patent: January 23, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dinesh Chanra Das, Unmesh Tambwekar, Meng Khan Seah, Terence H. Lee, Srinivasu Geddam, Vedant Dharnidharka, Archit Shukla, Vivekanand Pandey
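A minimal sketch of the final assembly step suggested by 9875235's abstract: ordered steps extracted from natural language become the nodes and edges of a process flow diagram. The data-object format here is an assumption, not the service's actual output.

```python
# Sketch only: turning ordered process steps (as produced by NLP) into
# flow-diagram nodes and edges. The input format and step texts are invented.

def to_flow_diagram(steps):
    nodes = [{"id": i, "label": s} for i, s in enumerate(steps, start=1)]
    edges = [{"from": i, "to": i + 1} for i in range(1, len(steps))]
    return {"nodes": nodes, "edges": edges}

steps = ["Receive order", "Check inventory", "Ship order"]
print(to_flow_diagram(steps))
```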
  • Patent number: 9875735
    Abstract: Disclosed herein are systems, methods, and computer-readable media for providing an automatic, synthetically generated voice describing media content, the method comprising receiving one or more pieces of metadata for a primary media content, selecting at least one piece of metadata for output, and outputting the at least one piece of metadata as synthetically generated speech with the primary media content. Other aspects of the invention involve alternative output, outputting speech simultaneously with the primary media content or during gaps in it, translating metadata in a foreign language, and tailoring the voice, accent, and language to match the metadata and/or the primary media content. A user may control output via a user interface, or output may be customized based on preferences in a user profile.
    Type: Grant
    Filed: January 27, 2017
    Date of Patent: January 23, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Linda Roberts, Hong Thi Nguyen, Horst J Schroeter
  • Patent number: 9865258
    Abstract: A method for recognizing a voice context for a voice control function in a vehicle. The method encompasses reading in a gaze direction datum regarding a current gaze direction of an occupant of the vehicle; allocating the gaze direction datum to a viewing zone in an interior of the vehicle in order to obtain a viewing zone datum regarding a viewing zone currently being viewed by the occupant; and determining, by utilization of the viewing zone datum, a voice context datum regarding a predetermined voice context allocated to the viewing zone currently being viewed.
    Type: Grant
    Filed: May 17, 2016
    Date of Patent: January 9, 2018
    Assignee: ROBERT BOSCH GMBH
    Inventor: Philippe Dreuw
  • Patent number: 9865259
    Abstract: An electronic device may operate in different modes of operation. In a first mode of operation, the electronic device may receive user speech via a microphone, generate an audio signal that represents the user speech, and then send the audio signal to one or more remote computing devices for analysis. In a second mode of operation, the electronic device may receive audio data from a peripheral device and then output audible content represented by the audio data. In some instances, the electronic device may operate in the first mode of operation and/or the second mode of operation based on whether the electronic device can communicate with the one or more remote computing devices over a wide-area network.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: January 9, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Marcello Typrin, Steve Hoonsuck Yum, Chris Stewart Hagler
  • Patent number: 9865268
    Abstract: Techniques for authenticating users at devices that interact with the users via voice input. For instance, the described techniques may allow a voice-input device to safely verify the identity of a user by engaging in a back-and-forth conversation. The device or another device coupled thereto may then verify the accuracy of the responses from the user during the conversation, as well as compare an audio signature associated with the user's responses to a pre-stored audio signature associated with the user. By utilizing multiple checks, the described techniques are able to accurately and safely authenticate the user based solely on an audible conversation between the user and the voice-input device.
    Type: Grant
    Filed: March 14, 2016
    Date of Patent: January 9, 2018
    Assignee: Amazon Technologies, Inc.
    Inventor: Preethi Narayanan
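A minimal sketch of the two checks described in 9865268's abstract: the spoken responses must be correct and the audio signature must match a stored one. Both matching functions below are trivial stand-ins, and the data is invented.

```python
# Sketch only: combining answer verification with a voice-signature comparison.

def answers_correct(expected, given):
    return all(e.strip().lower() == g.strip().lower() for e, g in zip(expected, given))

def voiceprint_matches(stored_signature, live_signature, threshold=0.8):
    # Stand-in similarity: fraction of feature values that are close enough.
    matches = sum(1 for a, b in zip(stored_signature, live_signature) if abs(a - b) < 0.05)
    return matches / len(stored_signature) >= threshold

def authenticate(expected, given, stored_signature, live_signature):
    return answers_correct(expected, given) and voiceprint_matches(stored_signature, live_signature)

print(authenticate(["blue", "42"], ["Blue", "42"],
                   [0.2, 0.5, 0.9, 0.4], [0.21, 0.52, 0.88, 0.41]))
```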
  • Patent number: 9865254
    Abstract: Compact finite state transducers (FSTs) for automatic speech recognition (ASR). An HCLG FST and/or G FST may be compacted at training time to reduce the size of the FST to be used at runtime. The compact FSTs may be significantly smaller (e.g., 50% smaller) in terms of memory size, thus reducing the use of computing resources at runtime to operate the FSTs. The individual arcs and states of each FST may be compacted by binning individual weights, thus reducing the number of bits needed for each weight. Further, certain fields such as a next state ID may be left out of a compact FST if an estimation technique can be used to reproduce the next state at runtime. During runtime, portions of the FSTs may be decompressed for processing by an ASR engine.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: January 9, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Denis Sergeyevich Filimonov, Gautam Tiwari, Shaun Nidhiri Joseph, Ariya Rastrow
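A minimal sketch of weight binning as described in 9865254's abstract: arc weights are mapped to a small set of bins so each arc can store a few-bit index instead of a full float. Uniform bins and a bin count of 16 are illustrative choices, not the patented scheme.

```python
# Sketch only: binning FST arc weights so each arc stores a small bin index
# instead of a full float, roughly the weight-quantization idea in the abstract.

def build_bins(weights, n_bins=16):
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / n_bins or 1.0
    # Represent each bin by its center value.
    centers = [lo + (i + 0.5) * step for i in range(n_bins)]
    return lo, step, centers

def quantize(weight, lo, step, n_bins=16):
    return min(int((weight - lo) / step), n_bins - 1)   # small integer, few bits

weights = [0.12, 3.4, 7.8, 2.2, 5.05]
lo, step, centers = build_bins(weights)
indices = [quantize(w, lo, step) for w in weights]
print(indices)                                     # 4-bit bin indices stored on the arcs
print([round(centers[i], 2) for i in indices])     # approximate weights recovered at runtime
```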
  • Patent number: 9865273
    Abstract: A tangible multimedia content playback method and apparatus is provided. The tangible multimedia content playback method includes extracting effect data from multimedia content, mapping the extracted effect data to a timeline of the multimedia content, establishing, when the multimedia content is played, a connection to at least one peripheral device pertaining to the effect data, and controlling the at least one peripheral device pertaining to the effect data to match with the timeline.
    Type: Grant
    Filed: January 13, 2015
    Date of Patent: January 9, 2018
    Assignee: Samsung Electronics Co., Ltd
    Inventors: Sungjoon Won, Sehee Han
  • Patent number: 9858930
    Abstract: An electronic device and an audio converting method thereof are provided. The method includes determining state information of the electronic device, receiving or outputting an audio signal based on a first mode, when the state information is first state information, and receiving or outputting the audio signal based on a second mode, when the state information is second state information.
    Type: Grant
    Filed: May 23, 2016
    Date of Patent: January 2, 2018
    Assignee: Samsung Electronics Co., Ltd
    Inventors: Hanho Ko, Namil Lee, Sunghoon Kim, Sohae Kim
  • Patent number: 9858926
    Abstract: A control device includes: a storage that stores a dialog model in which a question to a user, a reply candidate to the question from the user, and a control content of each electronic device are associated with an input query from the user; an acquirer that acquires environmental data in the surroundings of the user; a calculator that, based on the environmental data, calculates environment-predicted data that predicts the environment in the surroundings of the user after a predetermined period of time has elapsed, for the case in which each control content corresponding to the input query is executed; and a question selector that, based on the environment-predicted data, selects the question corresponding to the control content that maximizes a measure of the comfort of the user's surrounding environment when that control content is executed.
    Type: Grant
    Filed: November 24, 2014
    Date of Patent: January 2, 2018
    Assignee: DENSO CORPORATION
    Inventors: Mitsuo Yamamoto, Satoshi Kondo, Ryuichi Suzuki, Naoyori Tanzawa
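A minimal sketch of the question-selection step in 9858926's abstract: predict the comfort of the user's environment for each candidate control action and ask the question tied to the best one. The comfort model, candidate actions, and temperatures are invented.

```python
# Sketch only: choosing the question whose associated control action is
# predicted to maximize comfort. The comfort model is an invented placeholder.

def predict_comfort(environment, action):
    # Hypothetical comfort score after applying the action (higher is better).
    target = 22.0                                   # assumed comfortable temperature
    predicted_temp = environment["temp_c"] + action["temp_delta"]
    return -abs(predicted_temp - target)

def select_question(environment, candidates):
    best = max(candidates, key=lambda c: predict_comfort(environment, c["action"]))
    return best["question"]

candidates = [
    {"question": "Shall I turn on the air conditioning?", "action": {"temp_delta": -4.0}},
    {"question": "Shall I open the window?",              "action": {"temp_delta": -1.5}},
]
print(select_question({"temp_c": 27.0}, candidates))
```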
  • Patent number: 9852214
    Abstract: Methods and systems are provided for generating automatic program recommendations based on user interactions. In some embodiments, control circuitry processes verbal data received during an interaction between a user of a user device and a person with whom the user is interacting. The control circuitry analyzes the verbal data to automatically identify a media asset referred to during the interaction by at least one of the user and the person with whom the user is interacting. The control circuitry adds the identified media asset to a list of media assets associated with the user of the user device. The list of media assets is transmitted to a second user device of the user.
    Type: Grant
    Filed: March 23, 2016
    Date of Patent: December 26, 2017
    Assignee: Rovi Guides, Inc.
    Inventors: Brian Fife, Jason Braness, Michael Papish, Thomas Steven Woods
  • Patent number: 9836457
    Abstract: Different forward-translated sentences are generated by translating a received translation-source sentence in a first language into a second language. Backward-translated sentences are generated by backward-translating the different forward-translated sentences into the first language. When an operation for selecting one of the backward-translated sentences is received during output of the backward-translated sentences on an information output device, the forward-translated sentence corresponding to the selected backward-translated sentence is output onto the information output device.
    Type: Grant
    Filed: May 18, 2016
    Date of Patent: December 5, 2017
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Nanami Fujiwara, Masaki Yamauchi
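A minimal sketch of the round-trip flow in 9836457's abstract: several forward translations are produced, each is translated back, the user picks a backward translation, and the paired forward translation is output. The placeholder translation functions and German strings are assumptions standing in for a real MT engine.

```python
# Sketch only: forward/backward translation flow with placeholder translators.

def forward_translate(sentence_l1):
    # Hypothetical: return several candidate translations in language 2.
    return ["Bitte setzen Sie sich.", "Bitte nehmen Sie Platz."]

def backward_translate(sentence_l2):
    # Hypothetical: translate each candidate back into language 1.
    return {"Bitte setzen Sie sich.": "Please sit down.",
            "Bitte nehmen Sie Platz.": "Please take a seat."}[sentence_l2]

def choose_translation(sentence_l1, pick):
    candidates = forward_translate(sentence_l1)
    back = {backward_translate(c): c for c in candidates}
    # The user is shown the backward translations and picks the one that
    # matches their intent; the paired forward translation is then output.
    return back[pick]

print(choose_translation("Please have a seat.", pick="Please take a seat."))
```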