Patents by Inventor Shamim Begum
Shamim Begum has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 10695663
  Abstract: Systems, apparatus and methods may provide for audio processing of received user audio input from a microphone that may optionally be a tissue conducting microphone. Audio processing may be further conducted on received ambient audio from one or more additional microphones. A translator may translate the ambient audio into content to be output to a user. In an embodiment, ambient audio is translated into visual content to be displayed on a virtual reality device.
  Type: Grant
  Filed: December 22, 2015
  Date of Patent: June 30, 2020
  Assignee: Intel Corporation
  Inventors: Shamim Begum, Kofi C. Whitney
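The routing the abstract describes — user audio from a tissue-conducting microphone passed through, ambient audio from additional microphones sent to a translator for visual output — can be sketched as below. This is an illustrative sketch only; the names `AudioFrame` and `translate_ambient` are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class AudioFrame:
    source: str    # "tissue_mic" for user input, "ambient" for other microphones
    samples: list  # raw audio samples for this frame

def translate_ambient(frames, translate):
    """Split frames by source: ambient frames are translated into displayable
    content (e.g. captions for a VR headset); user audio passes through."""
    visual, user_audio = [], []
    for frame in frames:
        if frame.source == "ambient":
            visual.append(translate(frame.samples))
        else:
            user_audio.append(frame.samples)
    return visual, user_audio
```

A caller would supply a real translator in place of the stub, e.g. `translate_ambient(frames, speech_to_caption)`.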
- Patent number: 10621968
  Abstract: A method for establishing an articulatory speech synthesis model of a person's voice includes acquiring image data representing a visage of a person, in which the visage includes facial characteristics defining exteriorly visible articulatory speech synthesis model parameters of the person's voice; selecting a predefined articulatory speech synthesis model from among stores of predefined models, the selection based at least in part on one or both of the facial characteristics or the exteriorly visible articulatory speech synthesis model parameters; and associating at least a portion of the selected predefined articulatory speech synthesis model with the articulatory speech synthesis model of the person's voice.
  Type: Grant
  Filed: July 18, 2018
  Date of Patent: April 14, 2020
  Assignee: Intel Corporation
  Inventors: Shamim Begum, Alexander A. Oganezov
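The selection step — choosing a predefined model whose reference facial characteristics best match the acquired ones — can be illustrated with a simple nearest-neighbor lookup. This is a minimal sketch under the assumption that facial characteristics are encoded as numeric feature vectors; `select_model` and the `"features"` key are hypothetical names.

```python
import math

def select_model(face_features, model_store):
    """Return the stored predefined model whose reference feature vector
    is closest (Euclidean distance) to the measured facial features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model_store, key=lambda m: dist(m["features"], face_features))
```

In practice the matching criterion and feature encoding would be whatever the patent's "stores of predefined models" actually index on; Euclidean distance is only a stand-in.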
- Patent number: 10540975
  Abstract: Technologies for automatic speech recognition using articulatory parameters are disclosed. An automatic speech recognition device may capture speech data from a speaker and also capture an image of the speaker. The automatic speech recognition device may determine one or more articulatory parameters based on the image, such as a jaw angle, a lip protrusion, or a lip height, and compare those parameters with articulatory parameters of training users. After selecting training users with articulatory parameters similar to the speaker's, the automatic speech recognition device may select training data associated with the selected training users, including parameters to use for an automatic speech recognition algorithm. By using the parameters already optimized for training users with similar articulatory parameters as the speaker, the automatic speech recognition device may quickly adapt an automatic speech recognition algorithm to the speaker.
  Type: Grant
  Filed: March 25, 2016
  Date of Patent: January 21, 2020
  Assignee: Intel Corporation
  Inventors: Shamim Begum, Alexander A. Oganezov
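The adaptation idea in the abstract — measure the speaker's jaw angle, lip protrusion, and lip height, find the training users with the closest measurements, and reuse their already-optimized recognizer parameters — can be sketched as a k-nearest-neighbor average. All names here (`adapt_parameters`, the `"articulatory"` and `"asr_params"` keys) are illustrative assumptions, not the patent's implementation.

```python
def adapt_parameters(speaker, training_users, k=2):
    """Average the ASR parameters of the k training users whose articulatory
    measurements (jaw angle, lip protrusion, lip height) are closest to the
    speaker's, as a starting point for adapting the recognizer."""
    def sq_dist(user):
        return sum((a - s) ** 2 for a, s in zip(user["articulatory"], speaker))
    nearest = sorted(training_users, key=sq_dist)[:k]
    n_params = len(nearest[0]["asr_params"])
    return [sum(u["asr_params"][i] for u in nearest) / k for i in range(n_params)]
```

Averaging is only one plausible way to combine the selected users' training data; the patent leaves the combination open.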
- Publication number: 20180322862
  Abstract: A method for establishing an articulatory speech synthesis model of a person's voice includes acquiring image data representing a visage of a person, in which the visage includes facial characteristics defining exteriorly visible articulatory speech synthesis model parameters of the person's voice; selecting a predefined articulatory speech synthesis model from among stores of predefined models, the selection based at least in part on one or both of the facial characteristics or the exteriorly visible articulatory speech synthesis model parameters; and associating at least a portion of the selected predefined articulatory speech synthesis model with the articulatory speech synthesis model of the person's voice.
  Type: Application
  Filed: July 18, 2018
  Publication date: November 8, 2018
  Applicant: Intel Corporation
  Inventors: Shamim Begum, Alexander A. Oganezov
- Patent number: 10055661
  Abstract: Various systems and methods for implementing skin texture-based authentication are described herein. A system comprises a capture module to obtain, at a wearable device worn by a user, an input representation of the user's skin; an analysis module to identify a set of features in the input representation; and an authentication module to authenticate the user based on the set of features.
  Type: Grant
  Filed: March 24, 2015
  Date of Patent: August 21, 2018
  Assignee: Intel Corporation
  Inventors: Alexander Oganezov, Shamim Begum
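The capture/analysis/authentication pipeline reduces, at its core, to comparing a fresh skin-texture feature vector against an enrolled template. The sketch below assumes cosine similarity with a fixed threshold; the function name, feature encoding, and threshold are all hypothetical, not drawn from the patent.

```python
def authenticate(sample_features, enrolled_features, threshold=0.9):
    """Accept the user when the cosine similarity between the freshly
    captured feature vector and the enrolled template meets the threshold."""
    dot = sum(a * b for a, b in zip(sample_features, enrolled_features))
    norm_a = sum(a * a for a in sample_features) ** 0.5
    norm_b = sum(b * b for b in enrolled_features) ** 0.5
    return dot / (norm_a * norm_b) >= threshold
```

A real system would also handle enrollment, liveness, and normalization of the capture conditions, none of which this sketch attempts.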
- Patent number: 10056073
  Abstract: A method, performed by a user equipment device, for text-to-speech conversion entails sending to an articulatory model server exterior facial structural information of a person, receiving from the articulatory model server at least a portion of a predefined articulatory model that corresponds to the exterior facial structural information, the predefined articulatory model representing a voice of a modeled person who is different from the person, and generating, based at least partly on the predefined articulatory model, speech from text stored in a memory of the user equipment device. Furthermore, a method of configuring text-to-speech conversion for a user equipment device entails determining at least a portion of an articulatory model that corresponds to exterior facial structural information based on a comparison of the exterior facial structural information to exterior facial structural information stored in a database of articulatory models.
  Type: Grant
  Filed: February 23, 2017
  Date of Patent: August 21, 2018
  Assignee: Intel Corporation
  Inventors: Shamim Begum, Alexander A. Oganezov
- Publication number: 20170287464
  Abstract: Disclosed are embodiments for use in an articulatory-based text-to-speech conversion system configured to establish an articulatory speech synthesis model of a person's voice based on facial characteristics defining exteriorly visible articulatory speech synthesis model parameters of the person's voice and on a predefined articulatory speech synthesis model selected from among stores of predefined models.
  Type: Application
  Filed: February 23, 2017
  Publication date: October 5, 2017
  Applicant: Intel Corporation
  Inventors: Shamim Begum, Alexander A. Oganezov
- Publication number: 20170278517
  Abstract: Technologies for automatic speech recognition using articulatory parameters are disclosed. An automatic speech recognition device may capture speech data from a speaker and also capture an image of the speaker. The automatic speech recognition device may determine one or more articulatory parameters based on the image, such as a jaw angle, a lip protrusion, or a lip height, and compare those parameters with articulatory parameters of training users. After selecting training users with articulatory parameters similar to the speaker's, the automatic speech recognition device may select training data associated with the selected training users, including parameters to use for an automatic speech recognition algorithm. By using the parameters already optimized for training users with similar articulatory parameters as the speaker, the automatic speech recognition device may quickly adapt an automatic speech recognition algorithm to the speaker.
  Type: Application
  Filed: March 25, 2016
  Publication date: September 28, 2017
  Inventors: Shamim Begum, Alexander A. Oganezov
- Publication number: 20170173454
  Abstract: Systems, apparatus and methods may provide for audio processing of received user audio input from a microphone that may optionally be a tissue conducting microphone. Audio processing may be further conducted on received ambient audio from one or more additional microphones. A translator may translate the ambient audio into content to be output to a user. In an embodiment, ambient audio is translated into visual content to be displayed on a virtual reality device.
  Type: Application
  Filed: December 22, 2015
  Publication date: June 22, 2017
  Inventors: Shamim Begum, Kofi C. Whitney
- Patent number: 9607609
  Abstract: Disclosed are embodiments for use in an articulatory-based text-to-speech conversion system configured to establish an articulatory speech synthesis model of a person's voice based on facial characteristics defining exteriorly visible articulatory speech synthesis model parameters of the person's voice and on a predefined articulatory speech synthesis model selected from among stores of predefined models.
  Type: Grant
  Filed: September 25, 2014
  Date of Patent: March 28, 2017
  Assignee: Intel Corporation
  Inventors: Shamim Begum, Alexander A. Oganezov
- Publication number: 20160283808
  Abstract: Various systems and methods for implementing skin texture-based authentication are described herein. A system comprises a capture module to obtain, at a wearable device worn by a user, an input representation of the user's skin; an analysis module to identify a set of features in the input representation; and an authentication module to authenticate the user based on the set of features.
  Type: Application
  Filed: March 24, 2015
  Publication date: September 29, 2016
  Inventors: Alexander Oganezov, Shamim Begum
- Publication number: 20160285929
  Abstract: A mechanism is described for facilitating dynamic and seamless transitioning into online meetings at computing devices according to one embodiment. A method of embodiments, as described herein, includes receiving a request from a participant of a plurality of participants of an online meeting, where the request indicates disengagement of the participant from the meeting. The method may further include initiating recording of proceedings of the online meeting during absence of the participant, where the proceedings include conversations of remaining participants of the plurality of participants during the absence of the participant. The method may further include intelligently formatting the recording and replaying the formatted recording to the participant while transitioning the participant back into the online meeting.
  Type: Application
  Filed: March 27, 2015
  Publication date: September 29, 2016
  Applicant: Intel Corporation
  Inventors: Alexander A. Oganezov, Shamim Begum
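The described flow — mark the point of disengagement, record what the remaining participants say, then format and replay the missed portion on return — can be sketched with a small in-memory model. The `Meeting` class and its methods are hypothetical names for illustration; the "intelligent formatting" step is reduced here to a simple speaker-labeled transcript.

```python
class Meeting:
    """Toy model of recording a participant's missed conversation."""

    def __init__(self):
        self.log = []      # chronological (speaker, utterance) pairs
        self.absent = {}   # participant -> log index where absence began

    def disengage(self, participant):
        # Remember where in the proceedings this participant left.
        self.absent[participant] = len(self.log)

    def say(self, speaker, utterance):
        self.log.append((speaker, utterance))

    def rejoin(self, participant):
        # Format the recording made during the absence and return it for replay.
        start = self.absent.pop(participant)
        return [f"{speaker}: {utterance}" for speaker, utterance in self.log[start:]]
```

The real mechanism would operate on recorded audio/video rather than text, but the bookkeeping — index the departure point, slice the proceedings on return — is the same shape.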
- Publication number: 20160093284
  Abstract: Disclosed are embodiments for use in an articulatory-based text-to-speech conversion system configured to establish an articulatory speech synthesis model of a person's voice based on facial characteristics defining exteriorly visible articulatory speech synthesis model parameters of the person's voice and on a predefined articulatory speech synthesis model selected from among stores of predefined models.
  Type: Application
  Filed: September 25, 2014
  Publication date: March 31, 2016
  Inventors: Shamim Begum, Alexander A. Oganezov