Patents Examined by Thierry L Pham
  • Patent number: 9843859
    Abstract: Preprocessing speech signals from an indirect conduction microphone. One exemplary method preprocesses the speech signal in two stages. In stage one, an external speech sample is characterized using an auto regression model, and coefficients from the model are convolved with the internal speech signal from the indirect conduction microphone to produce a pre-conditioned internal speech signal. In stage two, a training sound is received by the indirect conduction microphone and filtered through a low-pass filter. The result is then modeled using auto regression, and inverted to produce an inverted filter model. The pre-conditioned internal speech signal is convolved with the inverted filter model to remove negative or undesirable acoustic characteristics and loss from the speech signal from the indirect conduction microphone.
    Type: Grant
    Filed: May 28, 2015
    Date of Patent: December 12, 2017
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Cheah Heng Tan, Linus Francis, Robert J. Novorita
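    The two-stage idea can be sketched in a few lines. This is an illustrative reading only: the abstract does not specify the auto-regression estimation method, so a plain least-squares fit (via NumPy) stands in for it, and the model order of 8 is an arbitrary choice.

    ```python
    import numpy as np

    def ar_coefficients(signal, order):
        """Fit an auto-regressive model x[n] ~ sum_k a[k]*x[n-k] by least squares."""
        # Design matrix: each row holds the `order` samples preceding one target sample.
        rows = [signal[i - order:i][::-1] for i in range(order, len(signal))]
        X = np.array(rows)
        y = signal[order:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    def precondition(internal, external, order=8):
        """Stage one: convolve the internal (indirect-conduction) signal with AR
        coefficients estimated from the external speech sample."""
        a = ar_coefficients(external, order)
        return np.convolve(internal, a, mode="same")
    ```

    Stage two would follow the same pattern: fit an AR model to the low-pass-filtered training sound, invert it, and convolve the pre-conditioned signal with the inverted filter.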
  • Patent number: 9842100
    Abstract: A system and method for analyzing narrative data based on a functional ontology, using semiotic square functions to produce analyzed data outputs. A computer-implemented method accesses narrative data and reads a semiotic square function data table for each verb in the sequence of words. Each semiotic square function data table classifies at least one verb in each sentence pattern as a functional type and includes one or more words in a semiotic square relationship to the classified verb, the functional type applying at least one symmetrical relationship between a first actor and a second actor in the narrative data. The method parses each sentence containing a verb that matches a functional type, matches sentence subjects and objects to an event template, and outputs an analysis of the narrative data relative to a common story theme based on a sequence of event records.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: December 12, 2017
    Assignee: TRIPLEDIP, LLC
    Inventors: Claude Vogel, Susan Decker
  • Patent number: 9836458
    Abstract: A method, system and computer program product for enabling attendees of a web conference to view materials of the web conference in their native language. When the conference server determines that the preferred native language of the attendee differs from the preferred native language of the presenter of the web conference, the conference server creates a virtual environment that is a clone of a host environment of the presenter that runs a native language pack of the preferred native language of the attendee. Upon the presenter starting the web conference, the screen shot shared by the presenter to the attendees is captured from the host environment of the presenter and then translated into the preferred native language of the attendee using the native language pack of the attendee's virtual environment. The translated screen shot is then sent to the attendee in the attendee's preferred native language from the virtual environment.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: December 5, 2017
    Assignee: International Business Machines Corporation
    Inventors: Qi En Jiang, Joey H. Y. Tseng, Di Wu, Xi Bo Zhu, Dong Jun Zong
  • Patent number: 9837076
    Abstract: Methods and systems are provided for customizing an action. In some implementations, voice input is received from a user and a context is determined from the voice input. Potential contextual data is identified based on the context and the voice input. A level of confidence is determined for an association of the potential contextual data and the context. An action is performed based on the voice input, the potential contextual data, and the level of confidence. The potential contextual data is used to customize the action.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: December 5, 2017
    Assignee: Google Inc.
    Inventors: Zoltan Stekkelpak, Gyula Simonyi
  • Patent number: 9817819
    Abstract: In one embodiment, a method includes sending, to a client system of a first user, instructions configured to present a translation prompt comprising a first text string; receiving, from the client system, a first input by the first user, wherein the first input corresponds to a first translation for the first text string; and calculating a reliability-value for the first translation based on the first input and a credibility-score of the first user, wherein the credibility-score of the first user is based on responses by the first user to checker-translation prompts, wherein the checker-translation prompts each comprise a control string for which a correct translation is known, and wherein the credibility-score is based on the number of responses by the first user that match the respective correct translations for the control strings.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: November 14, 2017
    Assignee: Facebook, Inc.
    Inventor: Luis Francisco Sarmenta
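    One plausible reading of the scoring scheme, as a short sketch. The abstract fixes only the inputs (matches against control strings drive the credibility-score; credibility-scores drive the reliability-value), so the specific formulas below, a match fraction and a credibility-weighted tally, are assumptions for illustration.

    ```python
    def credibility_score(responses, correct_translations):
        """Fraction of a user's checker-prompt responses that match the known
        correct translation of each control string."""
        if not responses:
            return 0.0
        matches = sum(1 for control, answer in responses.items()
                      if answer == correct_translations.get(control))
        return matches / len(responses)

    def reliability_values(votes):
        """votes: (candidate_translation, submitter_credibility) pairs.
        Each candidate's reliability is the sum of its submitters' credibility."""
        totals = {}
        for translation, cred in votes:
            totals[translation] = totals.get(translation, 0.0) + cred
        return totals
    ```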
  • Patent number: 9813589
    Abstract: A printing apparatus includes a receiving unit which receives print data, an operating unit which receives a print instruction from a user, a display unit which displays a password entry screen for receiving a password entry from a user, and a printing unit. Upon receiving a print instruction from a user through the operating unit, the printing unit prints the print data without accepting a password through the password entry screen if the password added to the print data matches a fixed password; it also prints the print data if a print instruction is received through the operating unit, the password added to the print data matches the fixed password, and the password received through the password entry screen matches the password added to the print data.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: November 7, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventor: Naoya Kakutani
  • Patent number: 9799335
    Abstract: Embodiments of the present disclosure provide a method and device for speech recognition. The solution comprises: receiving a first speech signal issued by a user; performing analog to digital conversion on the first speech signal to generate a first digital signal after the analog to digital conversion; extracting a first speech parameter from the first digital signal, the first speech parameter describing a speech feature of the first speech signal; if the first speech parameter coincides with a first prestored speech parameter in a sample library, executing control signalling instructed by the first digital signal, the sample library prestoring prestored speech parameters of N users, N≥1. The solution can be applied in a speech recognition process and can improve the accuracy of speech recognition.
    Type: Grant
    Filed: August 12, 2015
    Date of Patent: October 24, 2017
    Assignees: BOE TECHNOLOGY GROUP CO., LTD., BEIJING BOE MULTIMEDIA TECHNOLOGY CO., LTD.
    Inventor: Bendeng Lv
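    The "coincides with a prestored parameter" check can be sketched as a nearest-match test. The abstract does not say which speech parameters are extracted or how coincidence is judged, so the Euclidean-distance threshold below is purely illustrative.

    ```python
    def match_prestored(params, library, tolerance=0.1):
        """Return the index of the first prestored parameter vector within
        `tolerance` (Euclidean distance) of the extracted parameters, else -1."""
        for i, stored in enumerate(library):
            dist = sum((p - s) ** 2 for p, s in zip(params, stored)) ** 0.5
            if dist <= tolerance:
                return i
        return -1
    ```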
  • Patent number: 9786295
    Abstract: A voice processing apparatus includes: a feature amount acquisition unit configured to acquire a spectrum of an audio signal for each frame; an utterance state determination unit configured to determine an utterance state for each frame on the basis of the audio signal; and a spectrum normalization unit configured to calculate a normalized spectrum in a current utterance by normalizing a spectrum for each frame in the current utterance using at least an average spectrum acquired until the present time.
    Type: Grant
    Filed: August 12, 2016
    Date of Patent: October 10, 2017
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Keisuke Nakamura, Kazuhiro Nakadai
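    The normalization step can be sketched as below, assuming "average spectrum acquired until the present time" means a running mean over the frames of the current utterance, which is one reading of the abstract.

    ```python
    import numpy as np

    def normalize_utterance(frames):
        """frames: 2-D array (n_frames, n_bins) of per-frame magnitude spectra.
        Each frame is divided by the mean spectrum of all frames seen so far."""
        out = np.empty(frames.shape, dtype=float)
        running_sum = np.zeros(frames.shape[1])
        for i, frame in enumerate(frames):
            running_sum += frame
            avg = running_sum / (i + 1)
            out[i] = frame / np.maximum(avg, 1e-12)  # avoid division by zero
        return out
    ```

    For a stationary spectrum the normalized frames converge to all-ones, which is the point of the normalization: it removes per-recording spectral bias.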
  • Patent number: 9779724
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting alternates in speech recognition. In some implementations, data is received that indicates multiple speech recognition hypotheses for an utterance. Based on the multiple speech recognition hypotheses, multiple alternates for a particular portion of a transcription of the utterance are identified. For each of the identified alternates, one or more feature scores are determined, the feature scores are input to a trained classifier, and an output is received from the classifier. A subset of the identified alternates is selected, based on the classifier outputs, to provide for display. Data indicating the selected subset of the alternates is provided for display.
    Type: Grant
    Filed: November 4, 2014
    Date of Patent: October 3, 2017
    Assignee: Google Inc.
    Inventors: Alexander H. Gruenstein, Dave Harwath, Ian C. McGraw
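    The selection step reduces to "score, rank, truncate". The classifier below is just any callable from a feature vector to a score; the actual trained classifier and feature set are not described in the abstract.

    ```python
    def select_alternates(alternates, classifier, k=3):
        """Score each alternate's feature vector with the classifier and keep
        the top k for display."""
        ranked = sorted(alternates, key=lambda alt: classifier(alt["features"]),
                        reverse=True)
        return ranked[:k]
    ```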
  • Patent number: 9779085
    Abstract: A natural language processing (“NLP”) manager is provided that manages NLP model training. An unlabeled corpus of multilingual documents is provided that spans a plurality of target languages. A multilingual embedding is trained on the corpus of multilingual documents as input training data, the multilingual embedding being generalized across the target languages by modifying the input training data and/or transforming multilingual dictionaries into constraints in an underlying optimization problem. An NLP model is trained on training data for a first language of the target languages, using word embeddings of the trained multilingual embedding as features. The trained NLP model is applied to data from a second language of the target languages, the first and second languages being different.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: October 3, 2017
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Michael Louis Wick, Pallika Haridas Kanani, Adam Craig Pocock
  • Patent number: 9773214
    Abstract: A user requests a plurality of content feeds. Content associated with the content feeds is periodically retrieved and converted to a print format. Contents associated with the content feeds are stored in the print format. Indications corresponding to the content feeds are provided to a network connected printer and displayed on a user interface thereof, including an indication that new content is available in the print format.
    Type: Grant
    Filed: August 6, 2012
    Date of Patent: September 26, 2017
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Vishwanath Ramaiah Nanjundaiah, Venugopal Kumarahalli Srinivasmurthy
  • Patent number: 9760569
    Abstract: Methods and/or systems for providing a translation result based on various semantic categories may be provided. A translation result providing method using a computer may include generating translations by translating a source sentence of a source language into a target language, classifying the translations into respective semantic categories, and providing the classified translations to a user terminal.
    Type: Grant
    Filed: April 7, 2015
    Date of Patent: September 12, 2017
    Assignee: Naver Corporation
    Inventors: Joong-Hwi Shin, Jin-I Park, Jong-Hwan Kim, Kyong-Hee Kwon, Jun-Seok Kim
  • Patent number: 9747276
    Abstract: Embodiments relate to determining a crowd behavior. A method of determining a crowd behavior is provided. The method collects, at one or more recording points in a crowd of individuals, audible expressions that the individuals of the crowd make. The method generates a graph of the audible expressions as the audible expressions are collected from the individuals. The method determines a crowd behavior by performing a graphical text analysis on the graph. The method outputs an indication of the crowd behavior to trigger a crowd control measure.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: August 29, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Guillermo A. Cecchi, James R. Kozloski, Clifford A. Pickover, Irina Rish
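    The abstract leaves "graphical text analysis" unspecified; the sketch below only shows the graph-building step, using word adjacency within each collected expression as the edge relation (an assumption, not the patented analysis).

    ```python
    from collections import defaultdict

    def build_expression_graph(expressions):
        """Directed word-adjacency graph: an edge w1 -> w2 for every pair of
        consecutive words in any collected audible expression."""
        graph = defaultdict(set)
        for text in expressions:
            words = text.lower().split()
            for w1, w2 in zip(words, words[1:]):
                graph[w1].add(w2)
            if words:                      # keep terminal words as nodes too
                graph.setdefault(words[-1], set())
        return dict(graph)
    ```

    A downstream behavior classifier could then operate on graph features such as degree distribution or clustering.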
  • Patent number: 9747927
    Abstract: A system for multifaceted singing analysis for retrieval of songs or music including singing voices having some relationship in latent semantics with a singing voice included in one particular song or music. A topic analyzing processor uses a topic model to analyze a plurality of vocal symbolic time series obtained for a plurality of musical audio signals. The topic analyzing processor generates a vocal topic distribution for each of the musical audio signals, whereby the vocal topic distribution is composed of a plurality of vocal topics each indicating a relationship of one of the musical audio signals with the other musical audio signals. The topic analyzing processor generates a vocal symbol distribution for each of the vocal topics, whereby the vocal symbol distribution indicates occurrence probabilities for the vocal symbols. A multifaceted singing analyzing processor performs analysis of singing voices included in musical audio signals from a multifaceted viewpoint.
    Type: Grant
    Filed: August 15, 2014
    Date of Patent: August 29, 2017
    Assignee: NATIONAL INSTITUTE OF ADVANCED INDUSTRIAL SCIENCE AND TECHNOLOGY
    Inventors: Tomoyasu Nakano, Kazuyoshi Yoshii, Masataka Goto
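    Retrieval over the vocal topic distributions can be sketched with cosine similarity; the abstract does not name the similarity measure, so this choice is an assumption for illustration.

    ```python
    import math

    def topic_similarity(dist_a, dist_b):
        """Cosine similarity between two vocal-topic distributions."""
        dot = sum(a * b for a, b in zip(dist_a, dist_b))
        na = math.sqrt(sum(a * a for a in dist_a))
        nb = math.sqrt(sum(b * b for b in dist_b))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve_similar(query_dist, library):
        """Rank (song_id, topic_distribution) pairs by closeness to the query."""
        ranked = sorted(library,
                        key=lambda item: topic_similarity(query_dist, item[1]),
                        reverse=True)
        return [song_id for song_id, _ in ranked]
    ```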
  • Patent number: 9747277
    Abstract: Embodiments relate to determining a crowd behavior. A method of determining a crowd behavior is provided. The method collects, at one or more recording points in a crowd of individuals, audible expressions that the individuals of the crowd make. The method generates a graph of the audible expressions as the audible expressions are collected from the individuals. The method determines a crowd behavior by performing a graphical text analysis on the graph. The method outputs an indication of the crowd behavior to trigger a crowd control measure.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: August 29, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Guillermo A. Cecchi, James R. Kozloski, Clifford A. Pickover, Irina Rish
  • Patent number: 9734143
    Abstract: Technology is disclosed that improves language processing engines by using multi-media (image, video, etc.) context data when training and applying language models. Multi-media context data can be obtained from one or more sources such as object/location/person identification in the multi-media, multi-media characteristics, labels or characteristics provided by an author of the multi-media, or information about the author of the multi-media. This context data can be used as additional input for a machine learning process that creates a model used in language processing. The resulting model can be used as part of various language processing engines such as a translation engine, correction engine, tagging engine, etc., by taking multi-media context/labeling for a content item as part of the input for computing results of the model.
    Type: Grant
    Filed: December 17, 2015
    Date of Patent: August 15, 2017
    Assignee: Facebook, Inc.
    Inventors: Kay Rottmann, Mirjam Maess
  • Patent number: 9727700
    Abstract: A computer network system and method for printing accompanying information and prescription labels in pharmacies, comprises: a central CS; a PMS; a data transmission network through which said PMS and said central CS can communicate; wherein said PMS includes an I/O terminal, a scanner, and a first printer; wherein said PMS includes a PMS SO and a Catalina SO; said PMS SO is configured to receive and store prescription information for a prescription, and to associate a prescription identification with said prescription; said Catalina SO is configured to select accompanying information for said prescription, and to format and save said accompanying information in an accompanying information print file; and said PMS is configured to print a prescription label for said prescription and said accompanying information print file.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: August 8, 2017
    Assignee: inVentiv Health, Inc.
    Inventors: Michael F. Roberts, Simon Banfield
  • Patent number: 9721008
    Abstract: A method for generating a recipe from a literary work. The method may include ingesting a plurality of recipe content using a plurality of natural language processing (NLP) technology. The method may further include creating an ingredient ontology based on the ingested plurality of recipe content. The method may further include ingesting a plurality of content of the literary work using the plurality of NLP technology. The method may further include generating a knowledge graph based on the ingested literary work. The method may further include calculating a relatedness score based on the number of edges between the first plurality of nodes and the second plurality of nodes. The method may further include generating a plurality of recipes based on the calculated relatedness score satisfying a predetermined threshold.
    Type: Grant
    Filed: June 9, 2016
    Date of Patent: August 1, 2017
    Assignee: International Business Machines Corporation
    Inventors: Donna K. Byron, Florian Pinel
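    The edge-count relatedness can be sketched as a breadth-first search over the knowledge graph, scoring two concepts as the reciprocal of one plus their shortest-path edge count. The scoring function itself is an assumption; the abstract only says the score is based on the number of edges between node sets.

    ```python
    from collections import deque

    def relatedness(graph, start, goal):
        """1 / (1 + shortest-path edge count) between two nodes; 0.0 if unreachable."""
        if start == goal:
            return 1.0
        seen = {start}
        queue = deque([(start, 0)])
        while queue:
            node, edges_so_far = queue.popleft()
            for nbr in graph.get(node, ()):
                if nbr == goal:
                    return 1.0 / (2 + edges_so_far)  # path uses edges_so_far + 1 edges
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append((nbr, edges_so_far + 1))
        return 0.0
    ```

    Recipes would then be generated only for concept pairs whose score clears the predetermined threshold.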
  • Patent number: 9703775
    Abstract: In one embodiment, a method includes selecting a first text string from a set of text strings to be translated, wherein each text string of the set of text strings is associated with a priority value that is based on previously-calculated reliability-values of one or more translations for that text string, and wherein the first text string is selected based on its priority value; sending, to a client system of a user, instructions configured to present a translation prompt comprising the first text string and a translation-input field, wherein the user is associated with a credibility-score based on prior translation activity; receiving, from the client system, an input by the user corresponding to a translation for the first text string; and calculating a reliability-value for the translation based on the input and the credibility-score of the user.
    Type: Grant
    Filed: August 16, 2016
    Date of Patent: July 11, 2017
    Assignee: Facebook, Inc.
    Inventor: Luis Francisco Sarmenta
  • Patent number: 9704504
    Abstract: The voice analysis device includes: a voice information acquiring unit that acquires a voice signal generated by plural voice acquiring units disposed at different distances from a speaking section of a speaker and acquiring voice of the speaker; and an identification unit that identifies the speaker corresponding to the voice having been acquired, on the basis of intensities of respective peaks in a frequency spectrum of a first enhanced waveform and a frequency spectrum of a second enhanced waveform. The first enhanced waveform is a waveform where a voice signal of a predetermined target speaker has been enhanced, and the second enhanced waveform is a waveform where a voice signal of a speaker other than the target speaker has been enhanced.
    Type: Grant
    Filed: July 6, 2015
    Date of Patent: July 11, 2017
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Seiya Inagi, Haruo Harada, Hirohito Yoneyama, Kei Shimotani, Akira Fujii, Kiyoshi Iida