Patents by Inventor Michel Assayag

Michel Assayag has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10242670
    Abstract: A system and method for syntactic re-ranking of possible transcriptions generated by automatic speech recognition are disclosed. A computer system accesses acoustic data for a recorded spoken language and generates a plurality of potential transcriptions for the acoustic data. The computer system scores the plurality of potential transcriptions to create an initial likelihood score for the plurality of potential transcriptions. For a particular potential transcription in the plurality of transcriptions, the computer system generates a syntactic likelihood score. The computer system creates an adjusted score for the particular potential transcription by combining the initial likelihood score and the syntactic likelihood score for the particular potential transcription.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: March 26, 2019
    Assignee: Intel Corporation
    Inventors: Oren Pereg, Moshe Wasserblat, Jonathan Mamou, Michel Assayag
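As an illustrative sketch only (not the patented implementation), the adjusted-score combination described in the abstract can be modeled as a weighted sum of two log-likelihoods per hypothesis; the scores, weight, and example hypotheses below are all assumed for demonstration:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str
    initial_score: float    # log-likelihood from the first ASR pass
    syntactic_score: float  # log-likelihood from a syntactic re-scorer (assumed given)

def rerank(hypotheses, weight=0.3):
    """Adjusted score: weighted combination of initial and syntactic scores."""
    def adjusted(h):
        return (1 - weight) * h.initial_score + weight * h.syntactic_score
    return sorted(hypotheses, key=adjusted, reverse=True)

hyps = [
    Hypothesis("recognize speech", initial_score=-2.0, syntactic_score=-1.0),
    Hypothesis("wreck a nice beach", initial_score=-1.8, syntactic_score=-5.0),
]
best = rerank(hyps)[0]  # the syntactically better hypothesis wins
```

Here the second hypothesis has a slightly better initial score, but its poor syntactic score pulls its adjusted score below the first hypothesis, illustrating the re-ranking effect.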
  • Publication number: 20180082680
    Abstract: A system and method for syntactic re-ranking of possible transcriptions generated by automatic speech recognition are disclosed. A computer system accesses acoustic data for a recorded spoken language and generates a plurality of potential transcriptions for the acoustic data. The computer system scores the plurality of potential transcriptions to create an initial likelihood score for the plurality of potential transcriptions. For a particular potential transcription in the plurality of transcriptions, the computer system generates a syntactic likelihood score. The computer system creates an adjusted score for the particular potential transcription by combining the initial likelihood score and the syntactic likelihood score for the particular potential transcription.
    Type: Application
    Filed: September 21, 2016
    Publication date: March 22, 2018
    Inventors: Oren Pereg, Moshe Wasserblat, Jonathan Mamou, Michel Assayag
  • Publication number: 20180075841
    Abstract: Technologies for detecting an end of a sentence in automatic speech recognition are disclosed. An automatic speech recognition device may acquire speech data, and identify phonemes and words of the speech data. The automatic speech recognition device may perform a syntactic parse based on the recognized words, and determine an end of a sentence based on the syntactic parse. For example, if the syntactic parse indicates that a certain set of consecutive recognized words form a syntactically complete and correct sentence, the automatic speech recognition device may determine that there is an end of a sentence at the end of that set of words.
    Type: Application
    Filed: November 15, 2017
    Publication date: March 15, 2018
    Inventors: Oren Shamir, Oren Pereg, Moshe Wasserblat, Jonathan Mamou, Michel Assayag
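The end-of-sentence technique above can be sketched as a scan that marks a boundary at the earliest prefix a parser judges complete. A real implementation would use a syntactic parser; the toy completeness check below (a sentence is "complete" once it contains a known subject and a known verb) is purely an assumed stand-in:

```python
def is_complete_sentence(words):
    """Toy stand-in for a syntactic parse: 'complete' once the span
    contains both a known subject and a known verb (assumed word lists)."""
    SUBJECTS = {"i", "we", "they", "she", "he"}
    VERBS = {"agree", "left", "arrived", "spoke"}
    lowered = [w.lower() for w in words]
    return any(w in SUBJECTS for w in lowered) and any(w in VERBS for w in lowered)

def find_sentence_ends(words):
    """Scan the running transcript; mark an end of sentence at the earliest
    prefix judged complete, then continue from that point."""
    ends, start = [], 0
    for i in range(1, len(words) + 1):
        if is_complete_sentence(words[start:i]):
            ends.append(i)
            start = i
    return ends

ends = find_sentence_ends(["we", "agree", "they", "left", "early"])
```

With the toy check, boundaries are detected after "we agree" and after "they left", mirroring the abstract's idea of segmenting at syntactically complete word runs.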
  • Patent number: 9858923
    Abstract: Generally, this disclosure provides systems, devices, methods and computer readable media for adaptation of language models and semantic tracking to improve automatic speech recognition (ASR). A system for recognizing phrases of speech from a conversation may include an ASR circuit configured to transcribe a user's speech to a first estimated text sequence, based on a generalized language model. The system may also include a language model matching circuit configured to analyze the first estimated text sequence to determine a context and to select a personalized language model (PLM), from a plurality of PLMs, based on that context. The ASR circuit may further be configured to re-transcribe the speech based on the selected PLM to generate a lattice of paths of estimated text sequences, wherein each of the paths of estimated text sequences comprises one or more words and an acoustic score associated with each of the words.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: January 2, 2018
    Assignee: Intel Corporation
    Inventors: Moshe Wasserblat, Oren Pereg, Michel Assayag, Alexander Sivak, Shahar Taite, Tomer Rider
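The two-pass flow above (generalized first pass, context detection, personalized second pass) can be sketched under toy assumptions; the unigram "language models", context keyword sets, and vocabulary below are all invented for illustration, not taken from the patent:

```python
import math

# Toy unigram language models: word -> probability (assumed, illustrative).
GENERAL_LM = {"book": 0.2, "flight": 0.1, "cook": 0.2, "rice": 0.1}
PERSONAL_LMS = {
    "travel":  {"book": 0.3, "flight": 0.4, "cook": 0.01, "rice": 0.01},
    "cooking": {"book": 0.05, "flight": 0.01, "cook": 0.4, "rice": 0.3},
}
CONTEXT_KEYWORDS = {"travel": {"flight", "airport"}, "cooking": {"rice", "oven"}}

def select_plm(first_pass_words):
    """Pick the personalized LM whose context keywords best match pass one."""
    scores = {ctx: len(set(first_pass_words) & kw)
              for ctx, kw in CONTEXT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best, PERSONAL_LMS[best]

def score(words, lm):
    """Log-probability of a word sequence under a unigram model."""
    return sum(math.log(lm.get(w, 1e-6)) for w in words)

context, plm = select_plm(["book", "flight"])
# Second pass: the selected travel PLM now prefers in-context word sequences.
assert score(["book", "flight"], plm) > score(["cook", "rice"], plm)
```

The sketch compresses the patent's lattice of scored paths into plain word sequences; the point it illustrates is only the context-driven switch from a generalized to a personalized model between passes.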
  • Patent number: 9837069
    Abstract: Technologies for detecting an end of a sentence in automatic speech recognition are disclosed. An automatic speech recognition device may acquire speech data, and identify phonemes and words of the speech data. The automatic speech recognition device may perform a syntactic parse based on the recognized words, and determine an end of a sentence based on the syntactic parse. For example, if the syntactic parse indicates that a certain set of consecutive recognized words form a syntactically complete and correct sentence, the automatic speech recognition device may determine that there is an end of a sentence at the end of that set of words.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: December 5, 2017
    Assignee: Intel Corporation
    Inventors: Oren Shamir, Oren Pereg, Moshe Wasserblat, Jonathan Mamou, Michel Assayag
  • Publication number: 20170178625
    Abstract: Systems and techniques for re-ranking automatic speech recognition (ASR) hypotheses are described herein. A ranked list of ASR hypotheses may be obtained. A set of ASR hypotheses may be selected from the list. The set of ASR hypotheses may be re-ranked using semantic coherence scoring between words in the ASR hypotheses. An ASR hypothesis from the set of ASR hypotheses with a highest re-rank may be outputted.
    Type: Application
    Filed: December 21, 2015
    Publication date: June 22, 2017
    Inventors: Jonathan Mamou, Moshe Wasserblat, Oren Pereg, Michel Assayag, Orgad Keller
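One way to read "semantic coherence scoring between words" is as a mean pairwise similarity over word representations. The sketch below assumes that reading and uses tiny hand-made 2-D vectors in place of real word embeddings; everything here is illustrative, not the patented method:

```python
import itertools
import math

# Toy 2-D word vectors standing in for real embeddings (assumed).
VEC = {
    "river": (1.0, 0.0), "bank": (0.9, 0.3), "loan": (0.8, 0.5),
    "ran":   (0.1, 1.0), "rank": (0.0, 1.0),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def coherence(words):
    """Mean pairwise cosine similarity between the words' vectors."""
    pairs = list(itertools.combinations(words, 2))
    return sum(cosine(VEC[a], VEC[b]) for a, b in pairs) / len(pairs)

def rerank(hypotheses):
    """Order hypotheses by semantic coherence, most coherent first."""
    return sorted(hypotheses, key=coherence, reverse=True)

top = rerank([["river", "bank", "loan"], ["river", "rank", "ran"]])[0]
```

The semantically consistent hypothesis ("river", "bank", "loan") scores higher than the acoustically similar but incoherent one, which is the intuition the abstract describes.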
  • Publication number: 20170178623
    Abstract: Technologies for detecting an end of a sentence in automatic speech recognition are disclosed. An automatic speech recognition device may acquire speech data, and identify phonemes and words of the speech data. The automatic speech recognition device may perform a syntactic parse based on the recognized words, and determine an end of a sentence based on the syntactic parse. For example, if the syntactic parse indicates that a certain set of consecutive recognized words form a syntactically complete and correct sentence, the automatic speech recognition device may determine that there is an end of a sentence at the end of that set of words.
    Type: Application
    Filed: December 22, 2015
    Publication date: June 22, 2017
    Inventors: Oren Shamir, Oren Pereg, Moshe Wasserblat, Jonathan Mamou, Michel Assayag
  • Publication number: 20170092266
    Abstract: Generally, this disclosure provides systems, devices, methods and computer readable media for adaptation of language models and semantic tracking to improve automatic speech recognition (ASR). A system for recognizing phrases of speech from a conversation may include an ASR circuit configured to transcribe a user's speech to a first estimated text sequence, based on a generalized language model. The system may also include a language model matching circuit configured to analyze the first estimated text sequence to determine a context and to select a personalized language model (PLM), from a plurality of PLMs, based on that context. The ASR circuit may further be configured to re-transcribe the speech based on the selected PLM to generate a lattice of paths of estimated text sequences, wherein each of the paths of estimated text sequences comprises one or more words and an acoustic score associated with each of the words.
    Type: Application
    Filed: September 24, 2015
    Publication date: March 30, 2017
    Applicant: Intel Corporation
    Inventors: Moshe Wasserblat, Oren Pereg, Michel Assayag, Alexander Sivak, Shahar Taite, Tomer Rider
  • Patent number: 9591349
    Abstract: Various systems and methods for providing a repositionable video display on a mobile device, to emulate the effect of user-controlled binoculars, are described herein. In one example, one or more high resolution video sources (such as UltraHD video cameras) obtain video that is wirelessly broadcasted to mobile devices. The mobile device processes the broadcast based on the approximate location of the spectator's mobile device, relative to a scene within the field of view of the mobile device. The location of the mobile device may be derived from a combination of network monitoring, camera inputs, object recognition, and the like. Accordingly, the spectator can obtain a virtual magnification of a scene from an external video source displayed on the spectator's mobile device.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: March 7, 2017
    Assignee: Intel Corporation
    Inventors: Michel Assayag, Shahar Taite, Moshe Wasserblat, Tomer Rider, Oren Pereg, Alexander Sivak
  • Publication number: 20160379630
    Abstract: Various systems and methods for providing speech recognition services are described herein. A user device for providing speech recognition services includes a speech module to maintain a speech recognition model of a user of the user device; a user interaction module to detect an initiation of an interaction between the user and a target device; and a transmission module to transmit the speech recognition model to the target device, the target device to use the speech recognition model to enhance a speech recognition process executed by the target device during the interaction between the user and the target device.
    Type: Application
    Filed: June 25, 2015
    Publication date: December 29, 2016
    Applicant: Intel Corporation
    Inventors: Michel Assayag, Moshe Wasserblat, Oren Pereg, Shahar Taite, Alexander Sivak, Tomer Rider
  • Publication number: 20160379056
    Abstract: Various systems and methods for capturing media moments are described herein. An autonomous camera system for capturing media moments includes a configuration module to receive configuration parameters; a flight control module to autonomously maneuver the autonomous camera system over a crowd of people; a search module to search for a subject in the crowd of people based on the configuration parameters; and a control module to perform an action when the subject is found in the crowd of people.
    Type: Application
    Filed: June 24, 2015
    Publication date: December 29, 2016
    Inventors: Shahar Taite, Tomer Rider, Michel Assayag
  • Publication number: 20160189037
    Abstract: One embodiment provides an apparatus. The apparatus includes a processor; at least one peripheral device coupled to the processor; a memory coupled to the processor; a generic sentiment model and a first domain training corpus stored in memory; and a hybrid sentiment analyzer logic stored in memory and to execute on the processor. The hybrid sentiment analyzer logic includes a sentiment lexicon generator logic to generate a domain sentiment lexicon based, at least in part, on the first domain training corpus and to store the domain sentiment lexicon in memory, a lexicon-based sentiment classifier logic to generate an annotated training corpus unsupervisedly, based, at least in part, on the domain sentiment lexicon and to store the annotated training corpus in memory, and a model-based sentiment adaptor logic to adapt the generic sentiment model based, at least in part, on the annotated training corpus to generate an adapted sentiment model and to store the adapted sentiment model in memory.
    Type: Application
    Filed: December 24, 2014
    Publication date: June 30, 2016
    Applicant: Intel Corporation
    Inventors: Oren Pereg, Moshe Wasserblat, Michel Assayag, Alexander Sivak, Saurav Sahay, Junaith Ahemed Shahabdeen
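The first two stages of the hybrid pipeline above (seed lexicon expanded into a domain lexicon, then unsupervised lexicon-based annotation) can be sketched with a simple co-occurrence heuristic. The seed words, corpus, and heuristic are all assumed for illustration; the final stage, adapting the generic model on the resulting labels, would train on `labels` and is omitted:

```python
SEED_LEXICON = {"great": 1, "awful": -1}  # generic sentiment seeds (assumed)

def build_domain_lexicon(corpus, seeds):
    """Extend the seed lexicon: a word inherits the polarity of the seeds
    it co-occurs with in the domain corpus (simple co-occurrence heuristic)."""
    lexicon = dict(seeds)
    for sentence in corpus:
        words = sentence.lower().split()
        polarity = sum(seeds.get(w, 0) for w in words)
        if polarity:
            for w in words:
                if w not in seeds:
                    lexicon.setdefault(w, 1 if polarity > 0 else -1)
    return lexicon

def annotate(corpus, lexicon):
    """Lexicon-based classifier: label each sentence by summed polarity."""
    labels = []
    for sentence in corpus:
        s = sum(lexicon.get(w, 0) for w in sentence.lower().split())
        labels.append("pos" if s > 0 else "neg" if s < 0 else "neutral")
    return labels

domain_corpus = ["great battery life", "awful battery drain"]
lexicon = build_domain_lexicon(domain_corpus, SEED_LEXICON)
labels = annotate(["battery life rocks"], lexicon)
```

Note how the domain lexicon lets the classifier label a sentence containing no seed word at all, which is what makes the unsupervised annotation stage useful as training data for model adaptation.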
  • Publication number: 20160182940
    Abstract: Various systems and methods for providing a repositionable video display on a mobile device, to emulate the effect of user-controlled binoculars, are described herein. In one example, one or more high resolution video sources (such as UltraHD video cameras) obtain video that is wirelessly broadcasted to mobile devices. The mobile device processes the broadcast based on the approximate location of the spectator's mobile device, relative to a scene within the field of view of the mobile device. The location of the mobile device may be derived from a combination of network monitoring, camera inputs, object recognition, and the like. Accordingly, the spectator can obtain a virtual magnification of a scene from an external video source displayed on the spectator's mobile device.
    Type: Application
    Filed: December 23, 2014
    Publication date: June 23, 2016
    Inventors: Michel Assayag, Shahar Taite, Moshe Wasserblat, Tomer Rider, Oren Pereg, Alexander Sivak