Patents by Inventor Shamim Begum

Shamim Begum has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO). Brief, illustrative code sketches for several of these inventions appear after the listing.

  • Patent number: 10695663
    Abstract: Systems, apparatus and methods may provide for audio processing of received user audio input from a microphone that may optionally be a tissue conducting microphone. Audio processing may be further conducted on received ambient audio from one or more additional microphones. A translator may translate the ambient audio into content to be output to a user. In an embodiment, ambient audio is translated into visual content to be displayed on a virtual reality device.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: June 30, 2020
    Assignee: Intel Corporation
    Inventors: Shamim Begum, Kofi C. Whitney
  • Patent number: 10621968
    Abstract: A method for establishing an articulatory speech synthesis model of a person's voice includes acquiring image data representing a visage of a person, in which the visage includes facial characteristics defining exteriorly visible articulatory speech synthesis model parameters of the person's voice; selecting a predefined articulatory speech synthesis model from among stores of predefined models, the selection based at least in part on one or both of the facial characteristics or the exteriorly visible articulatory speech synthesis model parameters; and associating at least a portion of the selected predefined articulatory speech synthesis model with the articulatory speech synthesis model of the person's voice.
    Type: Grant
    Filed: July 18, 2018
    Date of Patent: April 14, 2020
    Assignee: Intel Corporation
    Inventors: Shamim Begum, Alexander A. Oganezov
  • Patent number: 10540975
    Abstract: Technologies for automatic speech recognition using articulatory parameters are disclosed. An automatic speech recognition device may capture speech data from a speaker and also capture an image of the speaker. The automatic speech recognition device may determine one or more articulatory parameters based on the image, such as a jaw angle, a lip protrusion, or a lip height, and compare those parameters with articulatory parameters of training users. After selecting training users with articulatory parameters similar to the speaker's, the automatic speech recognition device may select training data associated with the selected training users, including parameters to use for an automatic speech recognition algorithm. By using the parameters already optimized for training users with articulatory parameters similar to the speaker's, the automatic speech recognition device may quickly adapt an automatic speech recognition algorithm to the speaker.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: January 21, 2020
    Assignee: Intel Corporation
    Inventors: Shamim Begum, Alexander A. Oganezov
  • Publication number: 20180322862
    Abstract: A method for establishing an articulatory speech synthesis model of a person's voice includes acquiring image data representing a visage of a person, in which the visage includes facial characteristics defining exteriorly visible articulatory speech synthesis model parameters of the person's voice; selecting a predefined articulatory speech synthesis model from among stores of predefined models, the selection based at least in part on one or both of the facial characteristics or the exteriorly visible articulatory speech synthesis model parameters; and associating at least a portion of the selected predefined articulatory speech synthesis model with the articulatory speech synthesis model of the person's voice.
    Type: Application
    Filed: July 18, 2018
    Publication date: November 8, 2018
    Applicant: Intel Corporation
    Inventors: Shamim Begum, Alexander A. Oganezov
  • Patent number: 10055661
    Abstract: Various systems and methods for implementing skin texture-based authentication are described herein. A system comprises a capture module to obtain, at a wearable device worn by a user, an input representation of the user's skin; an analysis module to identify a set of features in the input representation; and an authentication module to authenticate the user based on the set of features.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: August 21, 2018
    Assignee: Intel Corporation
    Inventors: Alexander Oganezov, Shamim Begum
  • Patent number: 10056073
    Abstract: A method, performed by a user equipment device, for text-to-speech conversion entails sending to an articulatory model server exterior facial structural information of a person, receiving from the articulatory model server at least a portion of a predefined articulatory model that corresponds to the exterior facial structural information, the predefined articulatory model representing a voice of a modeled person who is different from the person, and generating, based at least partly on the predefined articulatory model, speech from text stored in a memory of the user equipment device. Furthermore, a method of configuring text-to-speech conversion for a user equipment device entails determining at least a portion of an articulatory model that corresponds to exterior facial structural information based on a comparison of the exterior facial structural information to exterior facial structural information stored in a database of articulatory models.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: August 21, 2018
    Assignee: Intel Corporation
    Inventors: Shamim Begum, Alexander A. Oganezov
  • Publication number: 20170287464
    Abstract: Disclosed are embodiments for use in an articulatory-based text-to-speech conversion system configured to establish an articulatory speech synthesis model of a person's voice based on facial characteristics defining exteriorly visible articulatory speech synthesis model parameters of the person's voice and on a predefined articulatory speech synthesis model selected from among stores of predefined models.
    Type: Application
    Filed: February 23, 2017
    Publication date: October 5, 2017
    Applicant: Intel Corporation
    Inventors: Shamim Begum, Alexander A. Oganezov
  • Publication number: 20170278517
    Abstract: Technologies for automatic speech recognition using articulatory parameters are disclosed. An automatic speech recognition device may capture speech data from a speaker and also capture an image of the speaker. The automatic speech recognition device may determine one or more articulatory parameters based on the image, such as a jaw angle, a lip protrusion, or a lip height, and compare those parameters with articulatory parameters of training users. After selecting training users with articulatory parameters similar to the speaker's, the automatic speech recognition device may select training data associated with the selected training users, including parameters to use for an automatic speech recognition algorithm. By using the parameters already optimized for training users with articulatory parameters similar to the speaker's, the automatic speech recognition device may quickly adapt an automatic speech recognition algorithm to the speaker.
    Type: Application
    Filed: March 25, 2016
    Publication date: September 28, 2017
    Inventors: Shamim Begum, Alexander A. Oganezov
  • Publication number: 20170173454
    Abstract: Systems, apparatus and methods may provide for audio processing of received user audio input from a microphone that may optionally be a tissue conducting microphone. Audio processing may be further conducted on received ambient audio from one or more additional microphones. A translator may translate the ambient audio into content to be output to a user. In an embodiment, ambient audio is translated into visual content to be displayed on a virtual reality device.
    Type: Application
    Filed: December 22, 2015
    Publication date: June 22, 2017
    Inventors: Shamim Begum, Kofi C. Whitney
  • Patent number: 9607609
    Abstract: Disclosed are embodiments for use in an articulatory-based text-to-speech conversion system configured to establish an articulatory speech synthesis model of a person's voice based on facial characteristics defining exteriorly visible articulatory speech synthesis model parameters of the person's voice and on a predefined articulatory speech synthesis model selected from among stores of predefined models.
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: March 28, 2017
    Assignee: Intel Corporation
    Inventors: Shamim Begum, Alexander A. Oganezov
  • Publication number: 20160283808
    Abstract: Various systems and methods for implementing skin texture-based authentication are described herein. A system comprises a capture module to obtain, at a wearable device worn by a user, an input representation of the user's skin; an analysis module to identify a set of features in the input representation; and an authentication module to authenticate the user based on the set of features.
    Type: Application
    Filed: March 24, 2015
    Publication date: September 29, 2016
    Inventors: Alexander Oganezov, Shamim Begum
  • Publication number: 20160285929
    Abstract: A mechanism is described for facilitating dynamic and seamless transitioning into online meetings at computing devices according to one embodiment. A method of embodiments, as described herein, includes receiving a request from a participant of a plurality of participants of an online meeting, where the request indicates disengagement of the participant from the meeting. The method may further include initiating recording of proceedings of the online meeting during absence of the participant, where the proceedings include conversations of remaining participants of the plurality of participants during the absence of the participant. The method may further include intelligently formatting the recording and replaying the formatted recording to the participant while transitioning the participant back into the online meeting.
    Type: Application
    Filed: March 27, 2015
    Publication date: September 29, 2016
    Applicant: Intel Corporation
    Inventors: Alexander A. Oganezov, Shamim Begum
  • Publication number: 20160093284
    Abstract: Disclosed are embodiments for use in an articulatory-based text-to-speech conversion system configured to establish an articulatory speech synthesis model of a person's voice based on facial characteristics defining exteriorly visible articulatory speech synthesis model parameters of the person's voice and on a predefined articulatory speech synthesis model selected from among stores of predefined models.
    Type: Application
    Filed: September 25, 2014
    Publication date: March 31, 2016
    Inventors: Shamim Begum, Alexander A. Oganezov
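
Illustrative Code Sketches

The short Python sketches below are informal readings of several inventions listed above, not reproductions of the patented implementations; every function name, data structure, and value in them is an assumption made for illustration.

Patent 10695663 and publication 20170173454 describe processing user audio (optionally from a tissue conducting microphone) separately from ambient audio captured by additional microphones, with the ambient audio translated into content such as a visual overlay for a virtual reality device. A minimal sketch of that split pipeline, with transcribe and translate_ambient_to_visual as hypothetical stand-ins for the real audio processing:

```python
# Hypothetical sketch of the ambient-audio translation pipeline described in
# patent 10695663 / publication 20170173454. All names are illustrative.
from dataclasses import dataclass
from typing import List


@dataclass
class AudioFrame:
    source: str        # "user" (e.g. tissue-conducting mic) or "ambient"
    samples: List[float]


def transcribe(frames: List[AudioFrame]) -> str:
    """Stand-in for a speech-to-text step; a real system would run ASR here."""
    return "approaching siren" if frames else ""


def translate_ambient_to_visual(ambient: List[AudioFrame]) -> str:
    """Turn ambient audio into content suitable for display on a VR device."""
    text = transcribe(ambient)
    return f"[VR overlay] {text}" if text else ""


def process(user_frames: List[AudioFrame], ambient_frames: List[AudioFrame]) -> dict:
    # User audio (possibly from a tissue-conducting microphone) is processed
    # separately from ambient audio captured by additional microphones.
    return {
        "user_command": transcribe(user_frames),
        "visual_content": translate_ambient_to_visual(ambient_frames),
    }


if __name__ == "__main__":
    user = [AudioFrame("user", [0.01, 0.02])]
    ambient = [AudioFrame("ambient", [0.2, -0.1])]
    print(process(user, ambient))
```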
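
Patents 10621968 and 9607609 (and publications 20180322862, 20170287464, and 20160093284) describe selecting a predefined articulatory speech synthesis model based on exteriorly visible facial characteristics and associating it with a model of the person's voice. A minimal sketch, assuming a small in-memory store of predefined models and a nearest-neighbor selection rule (both assumptions):

```python
# Hypothetical sketch of selecting a predefined articulatory speech synthesis
# model from exteriorly visible facial characteristics. Parameter names and
# the nearest-neighbor selection rule are assumptions.
import math

# Store of predefined models, keyed by illustrative visible parameters (0..1).
PREDEFINED_MODELS = {
    "model_a": {"jaw_width": 0.42, "lip_thickness": 0.30, "mouth_width": 0.55},
    "model_b": {"jaw_width": 0.61, "lip_thickness": 0.48, "mouth_width": 0.40},
}


def extract_visible_parameters(image_data: bytes) -> dict:
    """Stand-in for image analysis of the person's visage."""
    return {"jaw_width": 0.58, "lip_thickness": 0.45, "mouth_width": 0.43}


def select_predefined_model(params: dict) -> str:
    """Pick the stored model whose visible parameters are closest."""
    def distance(candidate: dict) -> float:
        return math.sqrt(sum((candidate[k] - params[k]) ** 2 for k in params))
    return min(PREDEFINED_MODELS, key=lambda name: distance(PREDEFINED_MODELS[name]))


def build_person_model(image_data: bytes) -> dict:
    params = extract_visible_parameters(image_data)
    chosen = select_predefined_model(params)
    # Associate (a portion of) the selected predefined model with the
    # person's own articulatory speech synthesis model.
    return {"base_model": chosen, "visible_parameters": params}


if __name__ == "__main__":
    print(build_person_model(b"raw image bytes"))
```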
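
Patent 10540975 and publication 20170278517 describe adapting automatic speech recognition by estimating articulatory parameters (jaw angle, lip protrusion, lip height) from an image of the speaker, finding training users with similar parameters, and reusing their already-optimized recognition parameters. A minimal sketch, in which the training data, the distance metric, and the vtln_warp parameter are chosen purely for illustration:

```python
# Hypothetical sketch of the speaker-adaptation flow in patent 10540975 /
# publication 20170278517. Names and data are illustrative.
import math

TRAINING_USERS = [
    {"id": "u1", "jaw_angle": 18.0, "lip_protrusion": 2.1, "lip_height": 9.5,
     "asr_params": {"vtln_warp": 0.97}},
    {"id": "u2", "jaw_angle": 24.0, "lip_protrusion": 3.4, "lip_height": 12.0,
     "asr_params": {"vtln_warp": 1.04}},
]


def estimate_articulatory_parameters(image_data: bytes) -> dict:
    """Stand-in for image analysis of the speaker."""
    return {"jaw_angle": 23.0, "lip_protrusion": 3.1, "lip_height": 11.2}


def most_similar_users(speaker: dict, k: int = 1) -> list:
    """Rank training users by closeness of their articulatory parameters."""
    def distance(user: dict) -> float:
        return math.sqrt(sum((user[key] - speaker[key]) ** 2 for key in speaker))
    return sorted(TRAINING_USERS, key=distance)[:k]


def adapt_recognizer(image_data: bytes) -> dict:
    speaker = estimate_articulatory_parameters(image_data)
    similar = most_similar_users(speaker)
    # Reuse parameters already optimized for similar training users so the
    # recognizer adapts to the new speaker quickly.
    return similar[0]["asr_params"]


if __name__ == "__main__":
    print(adapt_recognizer(b"speaker image"))
```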
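
Patent 10055661 and publication 20160283808 describe a capture module, an analysis module, and an authentication module for skin texture-based authentication on a wearable device. A minimal sketch of that three-stage flow; the feature representation and the acceptance threshold are assumptions:

```python
# Hypothetical sketch of skin texture-based authentication on a wearable
# device (patent 10055661 / publication 20160283808). Features and threshold
# are illustrative assumptions.
from typing import List


def capture_skin_representation() -> List[float]:
    """Stand-in for the capture module on the wearable device."""
    return [0.12, 0.80, 0.33, 0.57]


def extract_features(representation: List[float]) -> List[float]:
    """Stand-in for the analysis module (e.g. texture descriptors)."""
    return [round(value, 1) for value in representation]


def authenticate(enrolled: List[float], threshold: float = 0.15) -> bool:
    """Authentication module: accept if features are close to the template."""
    features = extract_features(capture_skin_representation())
    error = sum(abs(a - b) for a, b in zip(features, enrolled)) / len(enrolled)
    return error <= threshold


if __name__ == "__main__":
    enrolled_template = [0.1, 0.8, 0.3, 0.6]
    print("authenticated" if authenticate(enrolled_template) else "rejected")
```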
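
Patent 10056073 describes a user equipment device that sends exterior facial structural information to an articulatory model server, receives back at least a portion of a predefined articulatory model representing a different, modeled person's voice, and uses it to generate speech from stored text. In the sketch below the server is an in-memory lookup standing in for what would in practice be a network service; the matching rule and all parameters are assumptions:

```python
# Hypothetical sketch of the user-equipment side of patent 10056073. The
# in-memory "server" below stands in for a remote articulatory model server.
import math

SERVER_DATABASE = {
    "voice_alpha": {"jaw_width": 0.40, "mouth_width": 0.52},
    "voice_beta": {"jaw_width": 0.62, "mouth_width": 0.41},
}


def model_server_lookup(facial_info: dict) -> dict:
    """Server side: return the predefined model closest to the request."""
    def distance(name: str) -> float:
        stored = SERVER_DATABASE[name]
        return math.sqrt(sum((stored[k] - facial_info[k]) ** 2 for k in facial_info))
    best = min(SERVER_DATABASE, key=distance)
    return {"model_id": best, "parameters": SERVER_DATABASE[best]}


def text_to_speech(text: str, facial_info: dict) -> str:
    model = model_server_lookup(facial_info)  # would be an RPC/HTTP call in practice
    # Stand-in for articulatory synthesis using the returned model portion.
    return f"<audio synthesized from {model['model_id']}: '{text}'>"


if __name__ == "__main__":
    stored_text = "Meeting starts at nine."
    print(text_to_speech(stored_text, {"jaw_width": 0.60, "mouth_width": 0.43}))
```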
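
Publication 20160285929 describes recording an online meeting's proceedings while a participant is away and replaying an intelligently formatted version of the recording to transition the participant back in. A minimal sketch, with a trivial length filter standing in for the intelligent formatting step:

```python
# Hypothetical sketch of the meeting-transition flow in publication
# 20160285929. Class and method names are assumptions.
from typing import List


class MeetingTransitioner:
    def __init__(self) -> None:
        self.recording: List[str] = []
        self.absent = False

    def on_disengage(self) -> None:
        """Participant signals they are stepping away; start recording."""
        self.absent = True
        self.recording = []

    def on_utterance(self, speaker: str, text: str) -> None:
        """Record remaining participants' conversation while the user is away."""
        if self.absent:
            self.recording.append(f"{speaker}: {text}")

    def on_return(self) -> List[str]:
        """Format the recording and replay it while transitioning back."""
        self.absent = False
        # "Intelligent formatting" stand-in: keep only substantive lines.
        return [line for line in self.recording if len(line.split()) > 3]


if __name__ == "__main__":
    t = MeetingTransitioner()
    t.on_disengage()
    t.on_utterance("Alex", "Let's move the launch to Friday.")
    t.on_utterance("Sam", "Okay.")
    for line in t.on_return():
        print(line)
```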