Patents Examined by Jesse Pullias
  • Patent number: 9633661
    Abstract: A portable music device may operate in response to user speech. In situations in which the music device is operating primarily from battery power, a push-to-talk (PTT) button may be used to indicate when the user is directing speech to the device. When the music device is receiving external power, the music device may continuously monitor a microphone signal to detect a user utterance of a wakeword, which may be used to indicate that subsequent speech is directed to the device. When operating from battery power, the device may send audio to a network-based support service for speech recognition and natural language understanding. When operating from external power, the speech recognition and/or natural language understanding may be performed by the music device itself.
    Type: Grant
    Filed: February 2, 2015
    Date of Patent: April 25, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Marcello Typrin, Steve Hoonsuck Yum, Chris Stewart Hagler
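    A minimal Python sketch of the power-dependent capture and processing modes described in the abstract above; the function name, dictionary fields, and mode labels are illustrative assumptions, not taken from the patent.
      def select_speech_modes(on_external_power: bool) -> dict:
          """Choose how speech is captured and where it is processed."""
          if on_external_power:
              return {
                  "capture_trigger": "wakeword",      # continuously monitor the microphone
                  "processing": "on_device",          # local speech recognition / NLU
              }
          return {
              "capture_trigger": "push_to_talk",      # PTT button conserves battery
              "processing": "network_service",        # send audio to a network-based service
          }

      print(select_speech_modes(on_external_power=True))
      print(select_speech_modes(on_external_power=False))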
  • Patent number: 9633005
    Abstract: A system for natural language processing is provided. A first natural language processing program may be constructed using language-independent semantic descriptions and language-dependent morphological, lexical, and syntactic descriptions of one or more target languages. The natural language processing program may include any of machine translation, fact extraction, semantic indexing, semantic search, sentiment analysis, document classification, summarization, big data analysis, or another program. Additional sets of natural language processing programs may be constructed.
    Type: Grant
    Filed: October 8, 2014
    Date of Patent: April 25, 2017
    Assignee: ABBYY InfoPoisk LLC
    Inventors: Tatiana Danielyan, Anatoly Starostin, Konstantin Zuev, Konstantin Anisimovich, Vladimir Selegey
  • Patent number: 9633660
    Abstract: The present disclosure generally relates to systems and methods for processing received voice inputs for user identification. In an example process, voice input can be processed using a subset of words from a library used to identify the words or phrases of the voice input. The subset can be selected such that voice inputs provided by the user are more likely to include words from the subset. The subset of the library can be selected using any suitable approach, including for example based on the user's interests and words that relate to those interests. For example, the subset can include one or more words related to media items stored by the user on the electronic device, names of the user's contacts, applications or processes used by the user, or any other words relating to the user's interactions with the device.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: April 25, 2017
    Assignee: Apple Inc.
    Inventor: Allen P. Haughay
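    A minimal Python sketch of narrowing the recognition library to a user-specific subset as described above, assuming contacts, media titles, and application names as the user data sources; all names are illustrative.
      def build_user_subset(full_library, contacts, media_titles, app_names):
          """Keep only library words that also appear in the user's own data."""
          user_words = set()
          for phrase in list(contacts) + list(media_titles) + list(app_names):
              user_words.update(w.lower() for w in phrase.split())
          return {w for w in full_library if w.lower() in user_words}

      library = {"call", "play", "Alice", "Bob", "Bohemian", "Rhapsody", "weather"}
      subset = build_user_subset(library, contacts=["Alice Smith"],
                                 media_titles=["Bohemian Rhapsody"], app_names=["Weather"])
      print(subset)   # words tied to this user's contacts, media, and apps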
  • Patent number: 9626352
    Abstract: An approach is provided to resolve anaphors between posts, or threads, in a threaded discussion, for example an online forum. The approach analyzes a number of posts that are included in threads of an online forum. During the analysis, the approach identifies terms in parent posts, detects anaphors in child posts that reference the terms in the parent posts, and resolves the anaphor found in the child post with the term. The parent post with the identified term and the child post with the resolved anaphor are then stored in the memory for use by information handling systems, such as question answering (QA) systems.
    Type: Grant
    Filed: December 2, 2014
    Date of Patent: April 18, 2017
    Assignee: International Business Machines Corporation
    Inventors: Corville O. Allen, Donna K. Byron, Andrew R. Freed
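    A deliberately naive Python sketch of resolving an anaphor in a child post against a term identified in its parent post, in the spirit of the abstract above; the term-extraction heuristic and anaphor list are assumptions.
      ANAPHORS = {"it", "this", "that", "they"}

      def extract_terms(parent_post):
          """Toy term extraction: capitalized words that are not sentence-initial."""
          words = parent_post.split()
          return [w.strip(".,") for w in words[1:] if w[0].isupper()]

      def resolve_anaphors(parent_post, child_post):
          """Replace anaphors in the child post with the most recent parent term."""
          terms = extract_terms(parent_post)
          if not terms:
              return child_post
          referent = terms[-1]
          return " ".join(referent if w.lower().strip(".,?") in ANAPHORS else w
                          for w in child_post.split())

      print(resolve_anaphors("I installed PostgreSQL yesterday.",
                             "Does it support JSON queries?"))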
  • Patent number: 9626961
    Abstract: Systems and methods are described for personifying communications. According to at least one embodiment, the computer-implemented method for personifying a natural-language communication includes observing a linguistic pattern of a user. The method may also include analyzing the linguistic pattern of the user and adapting the natural-language communication based at least in part on the analyzed linguistic pattern of the user. In some embodiments, observing the linguistic pattern of the user may include receiving data indicative of the linguistic pattern of the user. The data may be one of verbal data or written data. Written data may include at least one of a text message, email, social media post, or computer-readable note. Verbal data may include at least one of a recorded telephone conversation, voice command, or voice message.
    Type: Grant
    Filed: January 28, 2015
    Date of Patent: April 18, 2017
    Assignee: Vivint, Inc.
    Inventors: Jefferson Lyman, Nic Brunson, Wade Shearer, Mike Warner, Stefan Walger
  • Patent number: 9626338
    Abstract: According to one embodiment, a markup assistance apparatus includes an acquisition unit, a first calculation unit, a detection unit, and a presentation unit. The acquisition unit acquires a feature amount for respective tags, each of the tags being used to control text-to-speech processing of a markup text. The first calculation unit calculates, for respective character strings, a variance of the feature amounts of the tags assigned to the character string in a markup text. The detection unit detects, as a first candidate containing a tag to be corrected, a first character string assigned a first tag whose variance is not less than a first threshold value. The presentation unit presents the first candidate.
    Type: Grant
    Filed: January 15, 2015
    Date of Patent: April 18, 2017
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Kouichirou Mori, Masahiro Morita
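    A short Python sketch of flagging correction candidates by the variance of the feature amounts of the tags assigned to a character string, as described above; the feature values and threshold are assumed inputs.
      from statistics import pvariance

      def correction_candidates(tagged_strings, feature_amounts, threshold):
          """
          tagged_strings: {character_string: [tag, tag, ...]} across a markup text
          feature_amounts: {tag: feature_amount}
          Returns strings whose assigned tags have feature-amount variance >= threshold.
          """
          candidates = []
          for string, tags in tagged_strings.items():
              values = [feature_amounts[t] for t in tags]
              if len(values) > 1 and pvariance(values) >= threshold:
                  candidates.append(string)   # present this string and its tags for review
          return candidates

      tags_by_string = {"hello": ["prosody_slow", "prosody_fast"], "world": ["emphasis"]}
      features = {"prosody_slow": 0.2, "prosody_fast": 0.9, "emphasis": 0.5}
      print(correction_candidates(tags_by_string, features, threshold=0.05))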
  • Patent number: 9613660
    Abstract: A computing device may receive or otherwise access a base audio layer and one or more enhancement audio layers. The computing device can reconstruct the retrieved base layer and/or enhancement layers into a single data stream or audio file. The local computing device may process audio frames in the highest retrieved enhancement layer whose data can be validated (or in a lower layer if the data in the audio frames of the enhancement layer(s) cannot be validated) and build a stream or audio file based on the audio frames in that layer.
    Type: Grant
    Filed: April 4, 2014
    Date of Patent: April 4, 2017
    Assignee: DTS, Inc.
    Inventors: Mark Rogers Johnson, Phillip L. Maness
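    A simplified Python sketch of selecting the highest enhancement layer whose frames validate and building the output from it, per the abstract above; the checksum-based validation is an assumption for illustration.
      import hashlib

      def frames_valid(frames):
          """Toy validation: every frame carries a payload plus its SHA-256 digest."""
          return all(hashlib.sha256(payload).hexdigest() == digest
                     for payload, digest in frames)

      def build_stream(base_layer, enhancement_layers):
          """Use the highest layer (base first) whose frames can be validated."""
          layers = [base_layer] + list(enhancement_layers)
          for layer in reversed(layers):
              if frames_valid(layer):
                  return b"".join(payload for payload, _ in layer)
          raise ValueError("no layer could be validated")

      frame = (b"audio-bytes", hashlib.sha256(b"audio-bytes").hexdigest())
      print(build_stream([frame], [[frame], [(b"corrupt", "bad-digest")]]))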
  • Patent number: 9585616
    Abstract: Methods and systems are described for monitoring patient speech to determine compliance of the patient with a prescribed regimen for treating a brain-related disorder. Patient speech is detected with an audio sensor at the patient location, and speech data is transmitted to a monitoring location. The audio sensor and other components at the patient location may be incorporated into, or associated with, a cell phone, computing system, or stand-alone microprocessor-based device, for example. Patient speech is processed at the patient location and/or monitoring location to identify speech parameters and/or patterns that indicate whether the patient has complied with the prescribed treatment regimen. Patient identity may be determined through biometric identification or other authentication techniques. The system may provide a report to an interested party, for example a medical care provider, based on whether (and/or the extent to which) the patient has complied with the prescribed treatment regimen.
    Type: Grant
    Filed: November 17, 2014
    Date of Patent: March 7, 2017
    Assignee: Elwha LLC
    Inventors: Jeffrey A. Bowers, Paul Duesterhoft, Daniel Hawkins, Roderick A. Hyde, Edward K. Y. Jung, Jordin T. Kare, Eric C. Leuthardt, Nathan P. Myhrvold, Michael A. Smith, Elizabeth A. Sweeney, Clarence T. Tegreene, Lowell L. Wood, Jr.
  • Patent number: 9589107
    Abstract: Methods and systems are described for monitoring patient speech to determine compliance of the patient with a prescribed regimen for treating a brain-related disorder. Patient speech is detected with an audio sensor at the patient location, and speech data is transmitted to a monitoring location. Patient speech is processed at the patient location and/or monitoring location to identify speech parameters and/or patterns that indicate whether the patient has complied with the prescribed treatment regimen. Patient identity may be determined through biometric identification or other authentication techniques. The system may provide a report to an interested party, for example a medical care provider, based on whether (and/or the extent to which) the patient has complied with the prescribed treatment regimen. The monitoring system may transmit a report to a wireless device such as a pager or cell phone, generate an alarm or notification, and/or store information for later use.
    Type: Grant
    Filed: November 17, 2014
    Date of Patent: March 7, 2017
    Assignee: Elwha LLC
    Inventors: Jeffrey A. Bowers, Paul Duesterhoft, Daniel Hawkins, Roderick A. Hyde, Edward K. Y. Jung, Jordin T. Kare, Eric C. Leuthardt, Nathan P. Myhrvold, Michael A. Smith, Elizabeth A. Sweeney, Clarence T. Tegreene, Lowell L. Wood, Jr.
  • Patent number: 9583097
    Abstract: In an electronic device, a method includes analyzing help information associated with a software application to identify a sequence of manipulations of viewable elements associated with an instance of an operation by the software application. The method further includes generating a voice command set based on the sequence of manipulations of viewable elements and storing the voice command set. The method further includes receiving voice input from a user, determining the voice input represents a voice command of the voice command set, and performing an emulated manipulation sequence of viewable elements based on the voice command to actuate an instance of the operation by the software application, the emulated manipulation sequence based on the sequence of manipulations of viewable elements.
    Type: Grant
    Filed: January 30, 2015
    Date of Patent: February 28, 2017
    Assignee: Google Inc.
    Inventors: Amit Kumar Agrawal, Raymond B. Essick, IV, Satyabrata Rout
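    A hedged Python sketch of mapping a voice command derived from help information to a recorded sequence of viewable-element manipulations and replaying it, as described above; the command, element names, and actions are illustrative.
      voice_command_set = {
          # command phrase -> sequence of viewable-element manipulations from help info
          "attach a photo": [("tap", "compose_button"), ("tap", "attach_icon"),
                             ("tap", "photo_picker")],
      }

      def perform(manipulation):
          action, element = manipulation
          print(f"emulating {action} on {element}")   # stand-in for real UI automation

      def handle_voice_input(utterance):
          steps = voice_command_set.get(utterance.lower())
          if steps is None:
              print("not a known command")
              return
          for step in steps:          # emulate the manipulation sequence end to end
              perform(step)

      handle_voice_input("Attach a photo")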
  • Patent number: 9582246
    Abstract: A contextual state of a graphical user interface presented via a display of the computing system is identified. A voice command is selected from a set of voice commands based on the contextual state of the graphical user interface. A context-specific voice-command suggestion corresponding to the selected voice command is identified. A graphical user interface including the context-specific voice-command suggestion is presented via a display.
    Type: Grant
    Filed: January 21, 2015
    Date of Patent: February 28, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Christian Klein, Zach Johnson, Matthew Fleming, Gregg Wygonik
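    A minimal Python sketch of selecting and surfacing context-specific voice-command suggestions keyed on the contextual state of the graphical user interface, as described above; the states and commands are assumptions.
      COMMANDS_BY_CONTEXT = {
          "media_playback": ["pause", "next track", "volume up"],
          "home_screen": ["open settings", "launch browser"],
      }

      def suggest_commands(gui_state, max_suggestions=2):
          """Select voice commands for the current GUI state and suggest a few."""
          commands = COMMANDS_BY_CONTEXT.get(gui_state, [])
          return commands[:max_suggestions]     # suggestions to render in the GUI

      print(suggest_commands("media_playback"))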
  • Patent number: 9583102
    Abstract: A method of controlling an interactive system includes the steps of: referring to a storage portion storing a plurality of pieces of response information about a manner of operation responsive to a user, each associated with a priority serving as an index when being selected; selecting one piece of response information in accordance with the priorities of the plurality of pieces of response information; executing response processing for the user based on the selected response information; accepting voice input for the response processing from the user; evaluating the user's reaction to the response processing based on a manner of voice of the accepted voice input; and changing the priority of the selected response information stored in the storage portion based on an evaluation result.
    Type: Grant
    Filed: January 21, 2015
    Date of Patent: February 28, 2017
    Assignee: Sharp Kabushiki Kaisha
    Inventor: Makoto Shinkai
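    An illustrative Python sketch of priority-driven response selection with a feedback-based priority update, following the abstract above; how the user's vocal reaction is scored is an assumption.
      responses = [
          {"text": "Shall I read the news?", "priority": 0.6},
          {"text": "Would you like some music?", "priority": 0.4},
      ]

      def select_response():
          return max(responses, key=lambda r: r["priority"])

      def update_priority(response, reaction_score, rate=0.1):
          """reaction_score in [-1, 1], e.g. derived from the tone of the user's reply."""
          response["priority"] += rate * reaction_score

      chosen = select_response()
      print("responding:", chosen["text"])
      update_priority(chosen, reaction_score=-1.0)   # user sounded annoyed: lower priority
      print("new priority:", chosen["priority"])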
  • Patent number: 9583101
    Abstract: The present invention discloses a speech interaction method and apparatus, and pertains to the field of speech processing technologies. The method includes: acquiring speech data of a user; performing user attribute recognition on the speech data to obtain a first user attribute recognition result; performing content recognition on the speech data to obtain a content recognition result of the speech data; and performing a corresponding operation according to at least the first user attribute recognition result and the content recognition result, so as to respond to the speech data. In this way, user attribute recognition and content recognition are performed separately on the acquired speech data, and the response is determined by both recognition results.
    Type: Grant
    Filed: January 20, 2015
    Date of Patent: February 28, 2017
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Hongbo Jin, Zhuolin Jiang
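    A hedged Python sketch of combining a user-attribute recognition result with a content recognition result to choose an operation, as described above; the attribute labels and routing rules are illustrative assumptions.
      def respond(speech_data):
          attribute = recognize_user_attribute(speech_data)   # e.g. "child" or "adult"
          content = recognize_content(speech_data)            # e.g. the transcribed request
          if content == "play a video" and attribute == "child":
              return "playing a kid-friendly video"
          if content == "play a video":
              return "playing your video"
          return "sorry, I did not understand"

      def recognize_user_attribute(speech_data):
          return speech_data.get("attribute", "adult")        # stand-in for a real classifier

      def recognize_content(speech_data):
          return speech_data.get("content", "")               # stand-in for ASR + NLU

      print(respond({"attribute": "child", "content": "play a video"}))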
  • Patent number: 9575624
    Abstract: The amount of speech output to a blind or low-vision user using a screen reader application is automatically adjusted based on how the user navigates to a control in a graphical user interface. Navigation by mouse presumes the user has greater knowledge of the identity of the control than navigation by tab keystroke, which is more indicative of a user searching for a control. In addition, accelerator keystrokes indicate a higher level of specificity in setting focus on a control, and thus less verbosity is required to sufficiently inform the screen reader user.
    Type: Grant
    Filed: September 9, 2014
    Date of Patent: February 21, 2017
    Assignee: Freedom Scientific
    Inventors: Garald Lee Voorhees, Glen Gordon, Eric Damery
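    A minimal Python sketch of adjusting screen-reader verbosity by how focus reached a control, per the abstract above; the verbosity levels themselves are assumptions.
      def verbosity_for_navigation(method):
          """More speech for exploratory navigation, less when the user knows the target."""
          if method == "tab":          # tabbing suggests the user is searching
              return "full"            # speak name, role, state, and hints
          if method == "mouse":        # pointing suggests the user already found it
              return "medium"          # speak name and role
          if method == "accelerator":  # a shortcut names the control directly
              return "brief"           # speak only what is strictly needed
          return "full"

      for m in ("tab", "mouse", "accelerator"):
          print(m, "->", verbosity_for_navigation(m))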
  • Patent number: 9570092
    Abstract: Devices, systems, methods, media, and programs for detecting an emotional state change in an audio signal are provided. A number of segments of the audio signal are analyzed based on separate lexical and acoustic evaluations, and, for each segment, an emotional state and a confidence score of the emotional state are determined. A current emotional state of the audio signal is tracked for each of the number of segments. For a particular segment, it is determined whether the current emotional state of the audio signal changes to another emotional state based on the emotional state and a comparison of the confidence score of the particular segment to a predetermined threshold.
    Type: Grant
    Filed: April 26, 2016
    Date of Patent: February 14, 2017
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Dimitrios Dimitriadis, Mazin E. Gilbert, Taniya Mishra, Horst J. Schroeter
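    An illustrative Python sketch of tracking the current emotional state across segments and switching only on a sufficiently confident change, as described above; the threshold value is an assumption.
      def track_emotional_state(segments, threshold=0.7):
          """segments: iterable of (emotion_label, confidence) from lexical+acoustic analysis."""
          current = None
          history = []
          for emotion, confidence in segments:
              if current is None or (emotion != current and confidence >= threshold):
                  current = emotion          # change state only when confident enough
              history.append(current)
          return history

      segments = [("neutral", 0.9), ("angry", 0.5), ("angry", 0.85), ("neutral", 0.6)]
      print(track_emotional_state(segments))   # ['neutral', 'neutral', 'angry', 'angry']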
  • Patent number: 9570072
    Abstract: An exemplary noise reduction system and method processes a speech signal that is delivered in a noisy channel or with ambient noise. Some exemplary embodiments of the system and method use filters to extract speech information, and focus on a subset of harmonics that are least corrupted by noise. Some exemplary embodiments disregard signal harmonics with low signal-to-noise ratio(s), and disregard amplitude modulations that are inconsistent with speech. An exemplary system and method processes a signal that focuses on a subset of harmonics that are least corrupted by noise, disregards the signal harmonics with low signal-to-noise ratio(s), and disregards amplitude modulations that are inconsistent with speech.
    Type: Grant
    Filed: April 18, 2016
    Date of Patent: February 14, 2017
    Assignee: SCTI Holdings, Inc.
    Inventor: Mark Pinson
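    A simplified Python sketch of retaining only the harmonics least corrupted by noise, as described above; the SNR threshold and data layout are assumptions.
      def select_harmonics(harmonics, snr_threshold_db=10.0):
          """harmonics: list of dicts with 'freq_hz', 'amplitude', 'snr_db'."""
          return [h for h in harmonics if h["snr_db"] >= snr_threshold_db]

      harmonics = [
          {"freq_hz": 200, "amplitude": 1.0, "snr_db": 18.0},
          {"freq_hz": 400, "amplitude": 0.6, "snr_db": 4.0},    # too noisy: disregarded
          {"freq_hz": 600, "amplitude": 0.4, "snr_db": 12.0},
      ]
      print([h["freq_hz"] for h in select_harmonics(harmonics)])   # [200, 600]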
  • Patent number: 9570095
    Abstract: In accordance with an implementation of the disclosure, systems and methods are provided for providing an estimate for noise in a speech signal. An instantaneous power value is received that corresponds to a frequency index of a portion of the speech signal. A first weighted power value is updated based on the instantaneous power value and a first weighting parameter. A second weighted power value is updated based on the first weighted power value and a second weighting parameter. An estimate of the noise is computed from the instantaneous power value and the second weighted power value.
    Type: Grant
    Filed: January 20, 2015
    Date of Patent: February 14, 2017
    Assignee: Marvell International Ltd.
    Inventor: Kapil Jain
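    A minimal Python sketch of the two-stage weighting described above: a first weighted power follows the instantaneous power, a second weighted power follows the first, and the noise estimate is computed from the instantaneous and second values. The exponential-smoothing update rule, parameter values, and final min() combination are assumptions.
      def update_noise_estimate(p_inst, state, w1=0.9, w2=0.95):
          """state: dict holding 'p1' and 'p2' for one frequency index."""
          state["p1"] = w1 * state["p1"] + (1.0 - w1) * p_inst       # first weighted power
          state["p2"] = w2 * state["p2"] + (1.0 - w2) * state["p1"]  # second weighted power
          return min(p_inst, state["p2"])   # assumed combination: cap estimate at current power

      state = {"p1": 0.0, "p2": 0.0}
      for p in (0.2, 0.25, 3.0, 0.22):      # a burst of speech energy at the third frame
          print(round(update_noise_estimate(p, state), 4))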
  • Patent number: 9558462
    Abstract: Methods and systems for identifying conditional actions in a business process are disclosed. In accordance with one such method, text fragments are extracted from input documents. In addition, a plurality of pairs of the text fragments that respectively include text fragments that are similar according to a pre-defined similarity standard are determined. For each pair of at least a subset of the pairs, at least one difference between the text fragments of the corresponding pair is determined. Further, at least two particular pairs of the subset of the pairs are merged in response to determining that the particular pairs have at least one of the determined differences in common. Additionally, the merged particular pairs are output to indicate the conditional actions in the business process.
    Type: Grant
    Filed: January 14, 2016
    Date of Patent: January 31, 2017
    Assignee: International Business Machines Corporation
    Inventors: Taiga Nakamura, Hironori Takeuchi
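    A toy Python sketch of the pipeline described above: pair up similar text fragments, compute their word-level differences, and merge pairs that share a difference; the similarity measure and threshold are illustrative assumptions.
      from difflib import SequenceMatcher

      def similar(a, b, threshold=0.7):
          return SequenceMatcher(None, a, b).ratio() >= threshold

      def word_differences(a, b):
          return set(a.split()) ^ set(b.split())   # symmetric difference of word sets

      fragments = [
          "send the invoice if the amount exceeds 500",
          "send the invoice if the amount exceeds 1000",
          "escalate the ticket if the amount exceeds 500",
          "escalate the ticket if the amount exceeds 1000",
      ]

      pairs = [(a, b) for i, a in enumerate(fragments) for b in fragments[i + 1:]
               if similar(a, b)]
      diffs = {pair: word_differences(*pair) for pair in pairs}

      # Merge pairs that have at least one difference token in common: the shared
      # difference tokens hint at a conditional action (here, the 500/1000 threshold).
      merged = [(p, q) for i, p in enumerate(pairs) for q in pairs[i + 1:]
                if diffs[p] & diffs[q]]
      print(len(pairs), "similar pairs;", len(merged), "merged pairs")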
  • Patent number: 9558735
    Abstract: Disclosed herein are systems, methods, and computer-readable media for providing an automatic synthetically generated voice describing media content, the method comprising receiving one or more pieces of metadata for a primary media content, selecting at least one piece of metadata for output, and outputting the at least one piece of metadata as synthetically generated speech with the primary media content. Other aspects of the invention involve alternative output, outputting speech simultaneously with the primary media content, outputting speech during gaps in the primary media content, translating metadata in a foreign language, and tailoring the voice, accent, and language to match the metadata and/or primary media content. A user may control output via a user interface, or output may be customized based on preferences in a user profile.
    Type: Grant
    Filed: March 18, 2016
    Date of Patent: January 31, 2017
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Linda Roberts, Hong Thi Nguyen, Horst J. Schroeter
  • Patent number: 9552808
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for decoding parameters for Viterbi search are disclosed. In one aspect, a method includes the actions of receiving lattice data that defines a plurality of lattices. The actions include, for each defined lattice: determining a particular path that traverses the lattice; determining a node cost of a path from the start node to the frame node; determining a beam size for each frame; determining a beam cost width for each frame; determining a maximum beam size from the beam sizes determined for the frames; and determining a maximum beam cost width from the beam cost widths determined for the frames. The actions include selecting a particular beam size and a particular beam cost width. The actions include determining paths for additional lattices using the pruning parameters of the particular beam size and the particular beam cost width.
    Type: Grant
    Filed: November 25, 2014
    Date of Patent: January 24, 2017
    Assignee: Google Inc.
    Inventor: Yasuhisa Fujii
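    An illustrative Python sketch, in the spirit of the abstract above, of deriving pruning parameters from reference lattices: per-frame beam sizes and beam cost widths along a chosen path, whose maxima are then reused to decode additional lattices; the lattice representation is an assumption.
      def pruning_parameters(lattices):
          """
          Each lattice: list of frames; each frame is (path_node_cost, all_node_costs),
          where path_node_cost is the cost at that frame of a chosen reference path and
          all_node_costs are the best-path costs of every node alive at that frame.
          """
          max_beam_size = 0
          max_beam_cost_width = 0.0
          for lattice in lattices:
              for path_cost, all_costs in lattice:
                  best = min(all_costs)
                  # beam size: nodes at least as good as the path node, i.e. how many
                  # hypotheses must be kept per frame for the path to survive pruning
                  beam_size = sum(1 for c in all_costs if c <= path_cost)
                  # beam cost width: how far above the frame's best cost the path node sits
                  beam_cost_width = path_cost - best
                  max_beam_size = max(max_beam_size, beam_size)
                  max_beam_cost_width = max(max_beam_cost_width, beam_cost_width)
          return max_beam_size, max_beam_cost_width

      # One toy lattice with two frames: (reference-path node cost, all node costs)
      lattice = [(2.0, [1.5, 2.0, 3.1]), (4.2, [4.2, 5.0])]
      print(pruning_parameters([lattice]))   # (2, 0.5)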