Patents Examined by Shaun Roberts
  • Patent number: 10430522
Abstract: An adaptive localization system translates and displays translated content to a user, for example through a website or application using the adaptive localization system. A user can view, receive, or otherwise interact with the translated content, which can be translated differently based on desired language, geographic location, intended user, or other relevant characteristics of the viewing user. The adaptive localization engine can translate the inherent meaning of content rather than, for example, creating an exact grammatical or “word-for-word” translation of individual words or phrases in the content. The adaptive localization engine displays alternate variations of the same translation of content to different users and, based on user response to the alternate translations, determines the accuracy or correctness of certain translations of content and modifies future translations accordingly.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: October 1, 2019
    Assignee: Qordoba, Inc.
    Inventors: Waseem Alshikh, May Habib
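The feedback loop this abstract describes can be sketched as a small variant selector: serve alternate translations, record user responses, and prefer the variant with the best response rate. This is a minimal illustration; the class, counters, and selection rule are my assumptions, not details from the patent.

```python
class TranslationVariantSelector:
    """Tracks alternate translations of one source phrase and scores
    them from user responses. Names are illustrative only."""

    def __init__(self, variants):
        # per-variant counters: [positive responses, times shown]
        self.stats = {v: [0, 0] for v in variants}

    def pick_variant(self):
        # show the least-exposed variant so every alternative gathers feedback
        return min(self.stats, key=lambda v: self.stats[v][1])

    def record_response(self, variant, positive):
        pos, shown = self.stats[variant]
        self.stats[variant] = [pos + (1 if positive else 0), shown + 1]

    def best_variant(self):
        # the variant with the highest positive-response rate wins
        def rate(v):
            pos, shown = self.stats[v]
            return pos / shown if shown else 0.0
        return max(self.stats, key=rate)
```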
  • Patent number: 10430896
    Abstract: There is provided an information processing server including a specification unit configured to specify an individual registered in a predetermined database on the basis of identification information sent from a near-field communication device and collected voice information.
    Type: Grant
    Filed: July 28, 2017
    Date of Patent: October 1, 2019
    Assignee: SONY CORPORATION
    Inventor: Kazuyoshi Horie
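The specification unit's two-factor lookup could look roughly like the following: candidates are retrieved by the NFC identifier, then narrowed by the collected voice information. The database layout and the voiceprint-label matching are invented stand-ins for whatever the server actually does.

```python
def identify_individual(nfc_id, voice_info, database):
    """Look up individuals registered under the near-field communication
    identifier, then narrow by the collected voice information
    (here a simple voiceprint label; a toy assumption)."""
    candidates = database.get(nfc_id, [])
    return next((p for p in candidates if p["voiceprint"] == voice_info), None)
```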
  • Patent number: 10418026
    Abstract: Systems and methods are described for processing and interpreting audible commands spoken in one or more languages. Speech recognition systems disclosed herein may be used as a stand-alone speech recognition system or comprise a portion of another content consumption system. A requesting user may provide audio input (e.g., command data) to the speech recognition system via a computing device to request an entertainment system to perform one or more operational commands. The speech recognition system may analyze the audio input across a variety of linguistic models, and may parse the audio input to identify a plurality of phrases and corresponding action classifiers. In some embodiments, the speech recognition system may utilize the action classifiers and other information to determine the one or more identified phrases that appropriately match the desired intent and operational command associated with the user's spoken command.
    Type: Grant
    Filed: July 15, 2016
    Date of Patent: September 17, 2019
    Assignee: Comcast Cable Communications, LLC
    Inventors: George Thomas Des Jardins, Vikrant Sagar
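The parse-phrases-and-match-action-classifiers step can be sketched as a table of classifier phrases consulted against a transcript. The classifier names, phrase lists, and prefix-matching rule below are all illustrative assumptions, not the patent's actual linguistic models.

```python
# Hypothetical action classifiers: phrases that signal each operational command.
ACTION_CLASSIFIERS = {
    "tune": ["watch", "go to", "turn to"],
    "search": ["find", "search for", "look for"],
    "volume": ["volume up", "volume down", "mute"],
}

def classify_command(transcript):
    """Return (action, remainder) for the first classifier phrase that
    matches the transcript, or (None, transcript) if nothing matches."""
    text = transcript.lower().strip()
    for action, phrases in ACTION_CLASSIFIERS.items():
        for phrase in phrases:
            if text == phrase or text.startswith(phrase + " "):
                return action, text[len(phrase):].strip()
    return None, text
```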
  • Patent number: 10417335
Abstract: Various aspects of the subject technology relate to systems, methods, and machine-readable media for automated quantitative assessment of text complexity. A system may include processing at least one body of text in a text-based query using a natural language processing engine. The processed text may include sub-blocks of text in a predetermined sequence size such as an n-gram. The system may compare reference bases to the processed text, where each reference base is associated with a different natural language. The system determines which of the reference bases has the highest number of matching words within the body of text, and thereby identifies that reference base as the source language of the supplied text. The system then determines an average complexity score for each n-gram using a quantitative assessment engine. The system then applies a readability score to the body of text based on the average complexity scores of the n-grams.
    Type: Grant
    Filed: October 10, 2017
    Date of Patent: September 17, 2019
    Assignee: Colossio, Inc.
    Inventor: Joseph A. Jaroch
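The two stages of this abstract — picking the reference base with the most matching words, then averaging per-n-gram complexity into a readability score — can be sketched as below. The word-set matching follows the abstract; the complexity function itself (average word length per n-gram) is an invented stand-in, since the patent does not specify it here.

```python
def detect_language(text, reference_bases):
    """Pick the reference base (language -> vocabulary set) with the
    highest number of matching words in the body of text."""
    words = set(text.lower().split())
    return max(reference_bases, key=lambda lang: len(words & reference_bases[lang]))

def readability_score(text, n=2):
    """Toy complexity: average word length within each n-gram of words,
    then the mean over all n-grams. The real scoring is unspecified."""
    words = text.split()
    ngrams = [words[i:i + n] for i in range(len(words) - n + 1)]
    scores = [sum(len(w) for w in g) / n for g in ngrams]
    return sum(scores) / len(scores) if scores else 0.0
```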
  • Patent number: 10403291
    Abstract: Methods, systems, apparatus, including computer programs encoded on computer storage medium, to facilitate language independent-speaker verification. In one aspect, a method includes actions of receiving, by a user device, audio data representing an utterance of a user. Other actions may include providing, to a neural network stored on the user device, input data derived from the audio data and a language identifier. The neural network may be trained using speech data representing speech in different languages or dialects. The method may include additional actions of generating, based on output of the neural network, a speaker representation and determining, based on the speaker representation and a second representation, that the utterance is an utterance of the user. The method may provide the user with access to the user device based on determining that the utterance is an utterance of the user.
    Type: Grant
    Filed: June 1, 2018
    Date of Patent: September 3, 2019
    Assignee: Google LLC
    Inventors: Ignacio Lopez Moreno, Li Wan, Quan Wang
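Once the neural network has produced a speaker representation, the final comparison step reduces to scoring the new embedding against the enrolled one. A common choice is cosine similarity against a threshold, sketched below; the network itself, the threshold value, and nonzero vectors are all assumptions outside this snippet.

```python
import math

def cosine_similarity(a, b):
    # standard cosine similarity of two nonzero vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def verify_speaker(utterance_embedding, enrolled_embedding, threshold=0.8):
    """Accept the utterance as the enrolled user's if the two speaker
    representations are similar enough. Threshold is illustrative."""
    return cosine_similarity(utterance_embedding, enrolled_embedding) >= threshold
```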
  • Patent number: 10403278
    Abstract: Systems and processes for operating an intelligent automated assistant to provide media items based on phonetic matching techniques are provided. An example method includes receiving a speech input from a user and determining whether the speech input includes a user request for a media item. The method further includes, in accordance with a determination that the speech input includes a user request for obtaining a media item, determining a candidate media item from a plurality of media items. The method further includes determining, based on a difference between a phonetic representation of the candidate media item and a phonetic representation of the speech input, whether the candidate media item is to be provided to the user. The method further includes, in accordance with a determination that the candidate media item is to be provided to the user, providing the candidate media item to the user.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: September 3, 2019
    Assignee: Apple Inc.
    Inventors: Adrian Skilling, Melvyn J. Hunt, Gunnar Evermann
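The "difference between phonetic representations" test in this abstract can be illustrated with a plain Levenshtein edit distance over phoneme strings: pick the closest candidate media item and reject it if the difference is too large. The phoneme notation and the distance threshold below are illustrative assumptions.

```python
def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def best_phonetic_match(query_phonemes, candidates, max_distance=2):
    """Return the candidate whose phonetic representation is closest to
    the spoken query, or None if even the best is too different."""
    best = min(candidates, key=lambda c: edit_distance(query_phonemes, c))
    return best if edit_distance(query_phonemes, best) <= max_distance else None
```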
  • Patent number: 10390160
    Abstract: An apparatus and method for verifying voice messages generated by notification devices in an emergency address system includes one or more verification devices and a validation module, which can be part of a control panel or a connected services system. The verification devices can be mobile computing devices or permanently installed devices associated with each notification device in the emergency address system. The verification devices include microphones, network interfaces, and controllers executing speech to text conversion processes. During testing, the notification devices play voice messages and the verification devices detect the messages, convert the messages to text and send the text-converted messages to the validation module, which validates the text-converted messages against the intended messages for each notification device and confirms that the messages were played in the correct locations.
    Type: Grant
    Filed: June 12, 2017
    Date of Patent: August 20, 2019
    Assignee: Tyco Fire & Security GmbH
    Inventor: Joseph Piccolo, III
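The validation module's core check — text-converted message versus intended message — can be sketched as a normalized word-overlap comparison. The normalization and the overlap threshold are invented stand-ins for whatever tolerance the real system applies to transcription noise.

```python
import re

def normalize(text):
    # lowercase and strip punctuation so minor transcription noise
    # does not fail validation
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).split()

def validate_message(transcribed, intended, min_word_overlap=0.9):
    """Accept the playback if enough of the intended message's words
    appear in the speech-to-text transcription."""
    got, want = normalize(transcribed), normalize(intended)
    if not want:
        return False
    matched = sum(1 for w in want if w in got)
    return matched / len(want) >= min_word_overlap
```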
  • Patent number: 10373604
Abstract: An acoustic model is adapted, relating acoustic units to speech vectors. The acoustic model comprises a set of acoustic model parameters related to a given speech factor. The acoustic model parameters enable the acoustic model to output speech vectors with different values of the speech factor. The method comprises inputting a sample of speech which is corrupted by noise; determining values of the set of acoustic model parameters which enable the acoustic model to output speech with a first value of the speech factor; and employing said determined values of the set of speech factor parameters in said acoustic model. The acoustic model parameters are obtained by obtaining corrupted speech factor parameters using the sample of speech, and mapping the corrupted speech factor parameters to clean acoustic model parameters using noise characterization parameters characterizing the noise.
    Type: Grant
    Filed: February 2, 2017
    Date of Patent: August 6, 2019
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kayoko Yanagisawa
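The final mapping step — corrupted speech-factor parameters plus noise characterization in, clean model parameters out — might be pictured as a linear correction; the linear form and the weight matrix below are invented stand-ins, since the abstract leaves the mapping unspecified.

```python
def map_to_clean(corrupted_params, noise_char, weights):
    """clean_i = corrupted_i - sum_j weights[i][j] * noise_char[j].
    A toy linear de-noising map; the real mapping is model-specific."""
    return [c - sum(w * n for w, n in zip(row, noise_char))
            for c, row in zip(corrupted_params, weights)]
```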
  • Patent number: 10373614
    Abstract: In one example, an assistant support server may maintain a web portal to crowdsource responses to a user input. The assistant support server may maintain a web portal accessible by a developer device. The assistant support server may store an assistant rule based on a developer input associating an input word set describing a hypothetical user input with a deep link for a website. The assistant support server may receive in the web portal the developer input. The assistant support server may direct a smart assistant module executed by a user device to connect to the deep link in response to receiving a user input from the smart assistant module matching the input word set.
    Type: Grant
    Filed: December 8, 2016
    Date of Patent: August 6, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kiril Seksenov, Avishek Mazumder, Kevin Hill, Aditya Pruthi
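The rule store this abstract describes — developer-supplied input word sets paired with deep links, matched against user input — can be sketched as below. The subset-matching semantics are my assumption; the patent only says the user input "matches" the input word set.

```python
class AssistantRuleStore:
    """Minimal sketch of the crowdsourced assistant rules: an input word
    set maps to a deep link for a website."""

    def __init__(self):
        self.rules = []  # (frozenset of words, deep link)

    def add_rule(self, input_words, deep_link):
        self.rules.append((frozenset(w.lower() for w in input_words), deep_link))

    def resolve(self, user_input):
        words = set(user_input.lower().split())
        for word_set, link in self.rules:
            if word_set <= words:   # every rule word present in the input
                return link
        return None
```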
  • Patent number: 10366725
    Abstract: Systems and methods are provided to implement and facilitate cross-fading, interstitials and other effects/processing of two or more media elements in a personalized media delivery service. Effects or crossfade processing can occur on the broadcast, publisher or server-side, but can still be personalized to a specific user, in a manner that minimizes processing on the downstream side or client device. The cross-fade can be implemented after decoding, processing, re-encoding, and rechunking the relevant chunks of each component clip. Alternatively, the cross-fade or other effect can be implemented on the relevant chunks in the compressed domain, thus obviating any loss of quality by re-encoding. A large scale personalized content delivery service can limit the processing to essentially the first and last chunks of any file, there being no need to process the full clip.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: July 30, 2019
    Assignee: Sirius XM Radio Inc.
    Inventors: Raymond Lowe, Christopher Ward
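Since the abstract notes that only the first and last chunks of any file need processing, the decoded-domain case reduces to blending the tail samples of one clip with the head samples of the next. A linear cross-fade over equal-length sample lists (length at least 2) is assumed here for illustration.

```python
def crossfade(clip_a_tail, clip_b_head):
    """Linear cross-fade: ramp clip A's last chunk down while ramping
    clip B's first chunk up, sample by sample."""
    n = len(clip_a_tail)
    return [a * (1 - i / (n - 1)) + b * (i / (n - 1))
            for i, (a, b) in enumerate(zip(clip_a_tail, clip_b_head))]
```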
  • Patent number: 10346544
    Abstract: Approaches presented herein enable assignment of translated work to an agent in a support environment based on a confidence factor that measures accuracy of translation and an agent's language skill. Specifically, agent proficiencies in a set of natural languages are measured and scored. An incoming communication is translated into one or more natural languages and each language translation is assigned a translation score based on a confidence of translation. The skill score and translation score are utilized to calculate a confidence factor for each language. In one approach, the communication is assigned to an agent that has a confidence factor greater than a predetermined threshold confidence factor. In another approach, the communication is only assigned if a rule optimizing agent availability and risk of constrained resources is satisfied.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: July 9, 2019
    Assignee: International Business Machines Corporation
    Inventors: Gary R. Brophy, Dennis D. Koski, Todd A. Mueller, Jeffrey A. Schmidt
  • Patent number: 10332516
Abstract: A method is implemented to move media content display between two media output devices. A server system determines in a voice message recorded by an electronic device a media transfer request that includes a user voice command to transfer media content to a destination media output device and a user voice designation of the destination media output device. The server system then obtains from a source cast device instant media play information including information of a media play application, the media content that is being played, and a temporal position. The server system further identifies a destination cast device associated in a user domain coupled to the destination media output device, and sends to the destination cast device a media play request including the instant media play information, thereby enabling the destination cast device to execute the media play application for playing the media content from the temporal position.
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: June 25, 2019
    Assignee: GOOGLE LLC
    Inventors: Raunaq Shah, Matt Van Der Staay
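The server-side transfer step can be sketched as: look up the destination cast device by its voice-designated name in the user domain, then hand it the source device's instant media play information (app, content, temporal position). The data shapes below are illustrative; the real system would send a network media play request rather than mutate a dictionary.

```python
def transfer_media(user_domain, destination_name, source_device):
    """user_domain: {output-device name: cast device record};
    source_device exposes its instant media play info."""
    play_info = source_device["play_info"]   # app, content id, temporal position
    destination = user_domain.get(destination_name)
    if destination is None:
        raise KeyError(f"no cast device named {destination_name!r}")
    # stand-in for sending the media play request to the destination cast device
    destination["pending_request"] = dict(play_info)
    return destination
```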
  • Patent number: 10325594
    Abstract: Techniques related to key phrase detection for applications such as wake on voice are discussed. Such techniques may include updating a start state based rejection model and a key phrase model based on scores of sub-phonetic units from an acoustic model to generate a rejection likelihood score and a key phrase likelihood score and determining whether received audio input is associated with a predetermined key phrase based on the rejection likelihood score and the key phrase likelihood score.
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: June 18, 2019
    Assignee: Intel IP Corporation
    Inventors: Tobias Bocklet, Joachim Hofer
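The accept/reject decision — a key-phrase likelihood accumulated along the expected sub-phonetic units versus a rejection likelihood from the best competing score per frame — can be sketched as below. The per-frame score dictionaries and the comparison margin are invented; the real models are updated HMM-style state machines, not this toy.

```python
def detect_key_phrase(frame_scores, key_phrase_units, margin=0.0):
    """frame_scores: one dict per audio frame mapping sub-phonetic unit
    to acoustic score; key_phrase_units: expected unit per frame.
    Accept when the key-phrase path is competitive with the best path."""
    key_score = 0.0
    reject_score = 0.0
    for frame, unit in zip(frame_scores, key_phrase_units):
        key_score += frame.get(unit, 0.0)
        reject_score += max(frame.values())
    return key_score >= reject_score - margin
```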
  • Patent number: 10318632
    Abstract: A data input system is described which has a user interface which receives one or more context text items in a sequence of text items input by a user. A processor of the data input system uses a plurality of language models to predict, from each language model, a next item in the sequence of text items. The processor uses a dynamic model which is bespoke to the user as a result of learning text items which the user has previously used, to predict a next item in the sequence of text items. The processor weights the predicted next item from the dynamic model using at least per term weights, each per term weight representing a likelihood of an associated term of the dynamic model given one of the language models.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: June 11, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Joseph Osborne, Abigail Bunyan
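The per-term weighting of the dynamic model can be sketched as blending two next-word distributions, scaling each dynamic-model prediction by its per-term weight before combining. The additive combination and renormalization below are assumptions; the abstract specifies only that per-term weights scale the dynamic model's predictions.

```python
def blend_predictions(general_probs, dynamic_probs, per_term_weights):
    """Combine a general language model's next-word probabilities with a
    user-specific dynamic model, weighting each dynamic term by the
    likelihood of that term given the general model."""
    combined = dict(general_probs)
    for term, p in dynamic_probs.items():
        w = per_term_weights.get(term, 0.0)
        combined[term] = combined.get(term, 0.0) + w * p
    # renormalize so the blended scores behave like probabilities
    total = sum(combined.values())
    return {t: v / total for t, v in combined.items()}
```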
  • Patent number: 10311144
    Abstract: The present disclosure generally relates to systems and processes for emoji word sense disambiguation. In one example process, a word sequence is received. A word-level feature representation is determined for each word of the word sequence and a global semantic representation for the word sequence is determined. For a first word of the word sequence, an attention coefficient is determined based on a congruence between the word-level feature representation of the first word and the global semantic representation for the word sequence. The word-level feature representation of the first word is adjusted based on the attention coefficient. An emoji likelihood is determined based on the adjusted word-level feature representation of the first word. In accordance with the emoji likelihood satisfying one or more criteria, an emoji character corresponding to the first word is presented for display.
    Type: Grant
    Filed: August 16, 2017
    Date of Patent: June 4, 2019
    Assignee: Apple Inc.
    Inventors: Jerome R. Bellegarda, Bishal Barman
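The attention step can be pictured as: measure congruence between the word-level feature and the global semantic representation, squash it into a coefficient, scale the word representation by it, and score against an emoji embedding. The dot-product congruence, logistic squashing, and all vectors below are invented for illustration.

```python
import math

def attention_coefficient(word_vec, global_vec):
    # congruence as a dot product, squashed to (0, 1) with a logistic
    dot = sum(a * b for a, b in zip(word_vec, global_vec))
    return 1.0 / (1.0 + math.exp(-dot))

def emoji_likelihood(word_vec, global_vec, emoji_vec):
    """Adjust the word-level feature by its attention coefficient, then
    score it against a (hypothetical) emoji embedding."""
    alpha = attention_coefficient(word_vec, global_vec)
    adjusted = [alpha * x for x in word_vec]
    return sum(a * b for a, b in zip(adjusted, emoji_vec))
```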
  • Patent number: 10313536
Abstract: An information processing system is provided that confirms the meaning of difference information in meeting content with reference to sound information. Differences between meeting content contained in image information of a paper medium captured at each detection cycle and meeting content contained in master information are extracted. Surrounding sounds are recorded while the meeting is being held. The sound information of the recorded sounds is associated with the difference information of the extracted differences, and also with the master information, according to the timing at which the differences in the meeting content are detected.
    Type: Grant
    Filed: June 5, 2017
    Date of Patent: June 4, 2019
    Assignee: Konica Minolta, Inc.
    Inventor: Yoshifumi Wagatsuma
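The timing-based association can be sketched as a join between detected differences and recorded sound segments: each difference is paired with the audio segment covering the moment it was detected. The tuple layouts are illustrative.

```python
def associate_sounds(differences, sound_segments):
    """differences: [(timestamp, diff description)]; sound_segments:
    [(start, end, audio reference)]. Pair each difference with the
    segment covering its detection time, or None if none covers it."""
    out = []
    for t, diff in differences:
        audio = next((ref for s, e, ref in sound_segments if s <= t <= e), None)
        out.append((diff, audio))
    return out
```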
  • Patent number: 10303745
    Abstract: Systems and methods associated with pagination point identification are disclosed. One example system includes an interface logic. The interface logic may receive a content element to be arranged within a layout having a first page with a fixed size. The system also includes a pagination logic. The pagination logic may identify a pagination point within a content element based on semantic information from the content element. The system also includes a layout logic. The layout logic may arrange a portion of the content element within the first page based on the pagination point and the fixed size.
    Type: Grant
    Filed: June 16, 2014
    Date of Patent: May 28, 2019
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Joshua Halpern, Niranjan Damera Venkata
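A much-simplified version of the pagination logic: break only at semantic boundaries (here, paragraph boundaries) while respecting the page's fixed size. Real pagination points would come from richer semantic information than the per-paragraph heights assumed below.

```python
def paginate(paragraphs, page_height, para_heights):
    """Arrange content on fixed-size pages, breaking only at paragraph
    boundaries. Returns a list of pages, each a list of paragraphs."""
    pages, current, used = [], [], 0
    for para, h in zip(paragraphs, para_heights):
        if current and used + h > page_height:
            pages.append(current)     # start a new page at this boundary
            current, used = [], 0
        current.append(para)
        used += h
    if current:
        pages.append(current)
    return pages
```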
  • Patent number: 10304461
Abstract: The present disclosure relates to remote service requesting and processing. The method includes: receiving a service processing request sent by a terminal; recognizing voice data in the service processing request to obtain voice recognition information; and sending the voice recognition information and an Internet application identifier to a corresponding third-party server according to a public identifier, so that the third-party server processes a corresponding service according to the voice recognition information.
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: May 28, 2019
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Hao Chen
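The dispatch step can be sketched as: recognize the voice data, then forward the recognition text and application identifier to the third-party server registered under the request's public identifier. Field names and the recognizer callback are illustrative.

```python
def route_service_request(request, third_party_servers, recognize):
    """request: {voice_data, public_id, app_id}; third_party_servers:
    {public identifier: server handle}; recognize: voice -> text."""
    text = recognize(request["voice_data"])
    server = third_party_servers[request["public_id"]]
    payload = {"recognition": text, "app_id": request["app_id"]}
    return server, payload
```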
  • Patent number: 10276158
    Abstract: A system, method and computer-readable storage devices are disclosed for multi-modal interactions with a system via a long-touch gesture on a touch-sensitive display. A system operating per this disclosure can receive a multi-modal input comprising speech and a touch on a display, wherein the speech comprises a pronoun. When the touch on the display has a duration longer than a threshold duration, the system can identify an object within a threshold distance of the touch, associate the object with the pronoun in the speech, to yield an association, and perform an action based on the speech and the association.
    Type: Grant
    Filed: October 31, 2014
    Date of Patent: April 30, 2019
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Brant J. Vasilieff, Patrick Ehlen, Michael J. Johnston
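The long-touch binding can be sketched as: if the touch exceeds the duration threshold and the speech contains a pronoun, bind the nearest on-screen object within the distance threshold to that pronoun. The thresholds, pronoun list, and object/touch layouts below are illustrative assumptions.

```python
import math

def resolve_long_touch(speech, touch, objects,
                       min_duration=0.5, max_distance=50.0):
    """Return the id of the object the pronoun refers to, or None.
    touch: {x, y, duration}; objects: [{id, x, y}]."""
    pronouns = {"this", "that", "it", "these", "those"}
    if touch["duration"] < min_duration or not pronouns & set(speech.lower().split()):
        return None
    def dist(obj):
        return math.hypot(obj["x"] - touch["x"], obj["y"] - touch["y"])
    nearest = min(objects, key=dist)
    return nearest["id"] if dist(nearest) <= max_distance else None
```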
  • Patent number: 10262664
Abstract: When combining digital data sets in the time domain into a combined digital data set, a subset of samples of each digital data set is adjusted to enable unraveling the data during decoding. To enable correction during decoding of an error introduced by the adjustment, an error approximation is stored for each adjusted sample. A set of error approximations is created and indexed, allowing a substantial reduction in the size of the error approximations to be stored for the adjusted samples. Instead of creating a set of error approximations for each combined digital data set, one set of error approximations is created based on the errors introduced when creating multiple combined digital data sets.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: April 16, 2019
    Assignee: AURO TECHNOLOGIES
    Inventors: Wilfried Van Baelen, Bert Van Daele
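The indexed error-approximation scheme can be sketched as quantization against a shared table: each adjustment error is stored only as the index of its closest approximation level, and the one level table is shared across many combined data sets. How the levels are chosen is not specified here; fixed levels are assumed.

```python
def build_error_index(errors, levels):
    """Store each adjustment error as the index of the nearest shared
    approximation level instead of the raw value."""
    return [min(range(len(levels)), key=lambda i: abs(levels[i] - e))
            for e in errors]

def reconstruct_errors(indices, levels):
    # the decoder recovers approximate errors from the shared level table
    return [levels[i] for i in indices]
```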