Patents by Inventor Phu Nguyen
Phu Nguyen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220130374
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable media, for speech recognition using multi-dialect and multilingual models. In some implementations, audio data indicating audio characteristics of an utterance is received. Input features determined based on the audio data are provided to a speech recognition model that has been trained to output scores indicating the likelihood of linguistic units for each of multiple different languages or dialects. The speech recognition model can be one that has been trained using cluster adaptive training. Output that the speech recognition model generated in response to receiving the input features is received. A transcription of the utterance, generated based on the output of the speech recognition model, is provided.
Type: Application
Filed: January 10, 2022
Publication date: April 28, 2022
Applicant: Google LLC
Inventors: Zhifeng Chen, Bo Li, Eugene Weinstein, Yonghui Wu, Pedro J. Moreno Mengibar, Ron J. Weiss, Khe Chai Sim, Tara N. Sainath, Patrick An Phu Nguyen
-
Patent number: 11238845
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable media, for speech recognition using multi-dialect and multilingual models. In some implementations, audio data indicating audio characteristics of an utterance is received. Input features determined based on the audio data are provided to a speech recognition model that has been trained to output scores indicating the likelihood of linguistic units for each of multiple different languages or dialects. The speech recognition model can be one that has been trained using cluster adaptive training. Output that the speech recognition model generated in response to receiving the input features is received. A transcription of the utterance, generated based on the output of the speech recognition model, is provided.
Type: Grant
Filed: November 14, 2019
Date of Patent: February 1, 2022
Assignee: Google LLC
Inventors: Zhifeng Chen, Bo Li, Eugene Weinstein, Yonghui Wu, Pedro J. Moreno Mengibar, Ron J. Weiss, Khe Chai Sim, Tara N. Sainath, Patrick An Phu Nguyen
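The two multi-dialect entries above hinge on cluster adaptive training: the model keeps several basis parameter sets and interpolates between them with per-language or per-dialect weights. Below is a minimal NumPy sketch of that interpolation idea; the function names, tensor shapes, and the single adapted layer are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def cat_layer(bases, interpolation_weights):
    """Cluster adaptive training (CAT) sketch: the effective weight matrix
    is an interpolation of per-cluster basis matrices, with interpolation
    weights chosen per language or dialect."""
    # bases: (num_clusters, d_in, d_hidden); interpolation_weights: (num_clusters,)
    return np.tensordot(interpolation_weights, bases, axes=1)

def frame_scores(features, dialect_weights, bases, out_proj):
    """Toy forward pass: a dialect-adapted hidden layer, then a per-frame
    softmax distribution over linguistic units."""
    W = cat_layer(bases, dialect_weights)        # dialect-specific layer
    hidden = np.tanh(features @ W)
    logits = hidden @ out_proj                   # scores over linguistic units
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

Changing only `dialect_weights` retargets the shared model to a different language or dialect without swapping out the full parameter set.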
-
Publication number: 20220005465
Abstract: A method for performing speech recognition using sequence-to-sequence models includes receiving audio data for an utterance and providing features indicative of acoustic characteristics of the utterance as input to an encoder. The method also includes processing an output of the encoder using an attender to generate a context vector, generating speech recognition scores using the context vector and a decoder trained using a training process, and generating a transcription for the utterance using word elements selected based on the speech recognition scores. The transcription is provided as an output of the automatic speech recognition (ASR) system.
Type: Application
Filed: September 20, 2021
Publication date: January 6, 2022
Applicant: Google LLC
Inventors: Rohit Prakash Prabhavalkar, Zhifeng Chen, Bo Li, Chung-Cheng Chiu, Kanury Kanishka Rao, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Michiel A. U. Bacchiani, Tara N. Sainath, Jan Kazimierz Chorowski, Anjuli Patricia Kannan, Ekaterina Gonina, Patrick An Phu Nguyen
-
Publication number: 20210358491
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable storage media, for speech recognition using attention-based sequence-to-sequence models. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A sequence of feature vectors indicative of the acoustic characteristics of the utterance is generated. The sequence of feature vectors is processed using a speech recognition model that has been trained using a loss function that uses N-best lists of decoded hypotheses, the speech recognition model including an encoder, an attention module, and a decoder. The encoder and decoder each include one or more recurrent neural network layers. A sequence of output vectors representing distributions over a predetermined set of linguistic units is obtained. A transcription for the utterance is obtained based on the sequence of output vectors. Data indicating the transcription of the utterance is provided.
Type: Application
Filed: July 27, 2021
Publication date: November 18, 2021
Applicant: Google LLC
Inventors: Rohit Prakash Prabhavalkar, Tara N. Sainath, Yonghui Wu, Patrick An Phu Nguyen, Zhifeng Chen, Chung-Cheng Chiu, Anjuli Patricia Kannan
-
Patent number: 11145293
Abstract: Methods, systems, and apparatus, including computer-readable media, for performing speech recognition using sequence-to-sequence models. An automated speech recognition (ASR) system receives audio data for an utterance and provides features indicative of acoustic characteristics of the utterance as input to an encoder. The system processes an output of the encoder using an attender to generate a context vector and generates speech recognition scores using the context vector and a decoder trained using a training process that selects at least one input to the decoder with a predetermined probability. An input to the decoder during training is selected between input data based on a known value for an element in a training example, and input data based on an output of the decoder for the element in the training example. A transcription is generated for the utterance using word elements selected based on the speech recognition scores. The transcription is provided as an output of the ASR system.
Type: Grant
Filed: July 19, 2019
Date of Patent: October 12, 2021
Assignee: Google LLC
Inventors: Rohit Prakash Prabhavalkar, Zhifeng Chen, Bo Li, Chung-Cheng Chiu, Kanury Kanishka Rao, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Michiel A. U. Bacchiani, Tara N. Sainath, Jan Kazimierz Chorowski, Anjuli Patricia Kannan, Ekaterina Gonina, Patrick An Phu Nguyen
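The training process described in this entry — choosing each decoder input, with a predetermined probability, between the known token from the training example and the decoder's own previous output — is the idea commonly known as scheduled sampling. A minimal sketch, with hypothetical function names, of just that selection step:

```python
import random

def choose_decoder_input(ground_truth_token, predicted_token,
                         keep_truth_prob, rng=random):
    """With probability `keep_truth_prob`, feed the known token from the
    training example; otherwise feed back the token the decoder itself
    just predicted. Exposes the model to its own errors during training."""
    if rng.random() < keep_truth_prob:
        return ground_truth_token
    return predicted_token

def decoder_inputs_for_example(reference_tokens, predicted_tokens,
                               keep_truth_prob, rng=random):
    """Build the full decoder input sequence for one training example."""
    return [choose_decoder_input(t, p, keep_truth_prob, rng)
            for t, p in zip(reference_tokens, predicted_tokens)]
```

In practice `keep_truth_prob` is often annealed from 1.0 (pure teacher forcing) toward a lower value as training progresses; the schedule itself is not specified by the abstract.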
-
Publication number: 20210304475
Abstract: A virtual make-up apparatus and method: store cosmetic item information of cosmetic items of different colors; store a different texture component for each stored cosmetic item of a specific color; extract an object portion image of a virtual make-up from a facial image; extract color information from the object portion image; designate an item of the virtual make-up corresponding to a stored cosmetic item and output a color image by applying a color corresponding to the designated item on the object portion image; output a texture image, based on analyzed color information corresponding to a stored cosmetic item, by adding a texture component to a part of the object portion image; and display a virtual make-up image of virtual make-up using the designated item applied on the facial image, by using the color and texture images, and the object portion image of the virtual make-up of the facial image.
Type: Application
Filed: June 11, 2021
Publication date: September 30, 2021
Applicant: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Phu NGUYEN, Yoshiteru TANAKA, Hiroto TOMITA
-
Patent number: 11107463
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable storage media, for speech recognition using attention-based sequence-to-sequence models. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A sequence of feature vectors indicative of the acoustic characteristics of the utterance is generated. The sequence of feature vectors is processed using a speech recognition model that has been trained using a loss function that uses N-best lists of decoded hypotheses, the speech recognition model including an encoder, an attention module, and a decoder. The encoder and decoder each include one or more recurrent neural network layers. A sequence of output vectors representing distributions over a predetermined set of linguistic units is obtained. A transcription for the utterance is obtained based on the sequence of output vectors. Data indicating the transcription of the utterance is provided.
Type: Grant
Filed: August 1, 2019
Date of Patent: August 31, 2021
Assignee: Google LLC
Inventors: Rohit Prakash Prabhavalkar, Tara N. Sainath, Yonghui Wu, Patrick An Phu Nguyen, Zhifeng Chen, Chung-Cheng Chiu, Anjuli Patricia Kannan
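A loss function over N-best lists of decoded hypotheses, as described in this entry, is commonly realized as an expected word-error objective: each hypothesis's word-error count is weighted by its renormalized model probability. The toy sketch below illustrates that computation; the function names and the baseline (mean) subtraction are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def edit_distance(a, b):
    """Standard Levenshtein distance over word lists."""
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1), dtype=int)
    d[:, 0] = np.arange(m + 1)
    d[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i, j] = min(d[i - 1, j] + 1,          # deletion
                          d[i, j - 1] + 1,          # insertion
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[m, n]

def expected_wer_loss(nbest, log_probs, reference):
    """Expected number of word errors over an N-best list, weighting each
    hypothesis by its probability renormalized over the list."""
    p = np.exp(log_probs - np.max(log_probs))
    p = p / p.sum()                            # distribution restricted to the N-best
    errors = np.array([edit_distance(h, reference) for h in nbest], dtype=float)
    errors -= errors.mean()                    # baseline subtraction reduces variance
    return float((p * errors).sum())
```

Minimizing this quantity pushes probability mass toward the hypotheses in the list with fewer word errors.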
-
Patent number: 11069105
Abstract: This virtual make-up apparatus extracts an object portion image of a virtual make-up from a facial image captured by a camera, applies, on the object portion image, a color corresponding to an item in accordance with designation of the item for the virtual make-up, and adds a texture component different for each item, to a part of the object portion image. The virtual make-up apparatus displays, on a display unit, a virtual make-up image in which a virtual make-up using an item is applied on the facial image, by using the object portion image to which color is applied, an image in which a texture component is added to the part of the object portion image, and the object portion image of the virtual make-up of the facial image.
Type: Grant
Filed: August 23, 2017
Date of Patent: July 20, 2021
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Phu Nguyen, Yoshiteru Tanaka, Hiroto Tomita
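At its core, the pipeline in this entry blends an item's color into the extracted region of the face and then adds an item-specific texture component (e.g. gloss highlights) on top. A toy NumPy sketch of such a blend; the alpha-blending formula, parameter names, and gains are illustrative assumptions, not the patented method.

```python
import numpy as np

def apply_virtual_makeup(region, item_color, texture,
                         alpha=0.5, texture_gain=0.3):
    """Blend a cosmetic item's RGB color into the extracted region image,
    then add the item's texture component on top.

    region:     (H, W, 3) pixel array of the extracted object portion
    item_color: (3,) RGB color for the designated item
    texture:    (H, W, 3) item-specific texture component
    """
    region = region.astype(float)
    colored = (1.0 - alpha) * region + alpha * np.asarray(item_color, dtype=float)
    textured = colored + texture_gain * texture
    return np.clip(textured, 0, 255).astype(np.uint8)
```

The composited region would then be pasted back into the full facial image for display.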
-
Publication number: 20210174391
Abstract: A computer-implemented system including a data store with a content items database and a user account database, and one or more servers configured to execute a computer program to perform one or more of the following: receiving content items from the content items database that are associated with a topic selected by a user for posting on a social network, wherein at least one content item is associated with a URL; estimating a post-to-reaction filter for a time interval for the social network for the user; calculating a reaction profile associated with reactions to posts on the social network by aggregating reaction times of a plurality of users on the social network for one or more content items posted on the social network; and determining a schedule for posting the content items on the social network as a function of the post-to-reaction filter and the reaction profile.
Type: Application
Filed: November 18, 2020
Publication date: June 10, 2021
Applicant: Khoros, LLC
Inventors: Allison Savage, Morten Moeller, Phu Nguyen, Guoning Hu, Nemanja Spasojevic
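The scheduling idea in this entry can be pictured as combining an audience reaction profile (when the audience tends to react) with a post-to-reaction delay filter (how long after a post reactions arrive), then posting at the hours that score highest. A toy sketch under those assumptions; binning by hour of day and the function names are illustrative, not taken from the patent.

```python
import numpy as np

def reaction_profile(reaction_hours, num_bins=24):
    """Aggregate observed reaction times (hour of day) into a
    normalized per-hour audience activity profile."""
    counts = np.bincount(np.asarray(reaction_hours) % num_bins,
                         minlength=num_bins)
    return counts / max(counts.sum(), 1)

def best_posting_hours(profile, post_to_reaction_filter, top_k=3):
    """Score each candidate posting hour by correlating the audience
    profile with the estimated post-to-reaction delay filter, and return
    the top-scoring hours for the posting schedule."""
    n = len(profile)
    scores = np.zeros(n)
    for post_hour in range(n):
        for delay, weight in enumerate(post_to_reaction_filter):
            scores[post_hour] += weight * profile[(post_hour + delay) % n]
    return list(np.argsort(scores)[::-1][:top_k])
```

For example, if the audience mostly reacts at noon and reactions typically arrive one hour after posting, the sketch schedules the post for 11:00.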
-
Patent number: 10933605
Abstract: A reduced vibration structure comprises honeycomb and a vibration damping coating on at least a portion of the internal surface of at least a portion of the cells of the honeycomb. The vibration damping coating is formed by curing a coating composition comprising acrylic polymer or copolymer emulsion and a vibration damping filler. The structure can include an adhesive coupled to both the upper surface and the lower surface of the honeycomb and two pieces of sheathing coupled to the adhesive, one on the upper surface and one on the lower surface of the honeycomb.
Type: Grant
Filed: July 22, 2016
Date of Patent: March 2, 2021
Assignee: The Gill Corporation
Inventors: Hongbin Shen, Phu Nguyen
-
Patent number: 10902462
Abstract: A computer-implemented system is disclosed for providing a platform for managing a campaign for publishing data content on social networks to increase audience member reaction to the data content.
Type: Grant
Filed: April 28, 2017
Date of Patent: January 26, 2021
Assignee: Khoros, LLC
Inventors: Allison Savage, Morten Moeller, Phu Nguyen, Guoning Hu, Nemanja Spasojevic
-
Publication number: 20210012089
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing point cloud data representing a sensor measurement of a scene captured by one or more sensors to generate an object detection output that identifies locations of one or more objects in the scene. When deployed within an on-board system of a vehicle, the object detection output that is generated can be used to make autonomous driving decisions for the vehicle with enhanced accuracy.
Type: Application
Filed: July 8, 2020
Publication date: January 14, 2021
Inventors: Jonathon Shlens, Patrick An Phu Nguyen, Benjamin James Caine, Jiquan Ngiam, Wei Han, Brandon Chauloon Yang, Yuning Chai, Pei Sun, Yin Zhou, Xi Yi, Ouais Alsharif, Zhifeng Chen, Vijay Vasudevan
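A common first step in point cloud object detection of the kind described above is to grid the raw (x, y, z) sensor points into voxels before a network predicts object locations from the gridded representation. The sketch below shows only that gridding plus a trivial density threshold standing in for the detection head; it is an illustration of the general technique, not the patented pipeline.

```python
import numpy as np

def voxelize(points, voxel_size=1.0, grid=(10, 10, 5)):
    """Bucket raw (x, y, z) points into a fixed voxel grid and count the
    points per voxel; points falling outside the grid are dropped."""
    occupancy = np.zeros(grid, dtype=np.int32)
    idx = np.floor(points / voxel_size).astype(int)
    for i, j, k in idx:
        if 0 <= i < grid[0] and 0 <= j < grid[1] and 0 <= k < grid[2]:
            occupancy[i, j, k] += 1
    return occupancy

def detect_dense_voxels(occupancy, min_points=5):
    """Toy stand-in for a detection head: report voxel coordinates dense
    enough to suggest an object is present."""
    return [tuple(map(int, v)) for v in np.argwhere(occupancy >= min_points)]
```

A real system would feed the gridded representation to a learned network rather than thresholding densities, but the data flow — points in, object locations out — is the same.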
-
Publication number: 20200258091
Abstract: Social customer service and support systems integrated with social media and social networks are disclosed. More particularly, a social customer care platform system is disclosed to allow customer care functions, and in particular to allow customer service agents to identify, prioritize, match and triage customer support requests that may arise through a social network and may be serviced using a social network. It manages and tracks a high volume of customer interactions and provides for monitoring of Internet social network posts relevant to a business's products or services, along with the ability to capture, monitor, filter, make sense of and respond to, in near real-time, tens of thousands of social interactions.
Type: Application
Filed: November 25, 2019
Publication date: August 13, 2020
Applicant: Khoros, LLC
Inventors: Dewey Gaedcke, Phu Nguyen, James David Evans, Morten Moeller
-
Publication number: 20200214427
Abstract: A terminal captures first and second images respectively indicating facial images of a user before and after makeup, acquires information on a type or region of the makeup performed by the user, and transmits the first and second images and the information on the type or region of the makeup, in association with each other, to a server. The server deduces a makeup color of the makeup performed by the user based on the first and second images and the information on the type or region of the makeup, extracts at least one similar makeup item having the makeup color based on information on the makeup color and a makeup item database, and transmits information on the at least one similar makeup item to the terminal. The terminal displays, on a display unit, the information on the at least one similar makeup item transmitted from the server.
Type: Application
Filed: September 22, 2017
Publication date: July 9, 2020
Applicant: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Yoshiteru TANAKA, Phu NGUYEN
-
Publication number: 20200184575
Abstract: The present invention relates to social customer service and support systems integrated with social media and social networks. More particularly, the invention provides a social customer care platform system to allow customer care functions, and in particular to allow customer service agents to identify, prioritize, match and triage customer support requests that may arise through a social network and may be serviced using a social network. It manages and tracks a high volume of customer interactions and provides for monitoring of Internet social network posts relevant to a business's products or services, along with the ability to capture, monitor, filter, make sense of and respond to, in near real-time, tens of thousands of social interactions.
Type: Application
Filed: December 2, 2019
Publication date: June 11, 2020
Applicant: Khoros, LLC
Inventors: Dewey Gaedcke, Phu Nguyen, James David Evans, Morten Moeller
-
Publication number: 20200160836
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable media, for speech recognition using multi-dialect and multilingual models. In some implementations, audio data indicating audio characteristics of an utterance is received. Input features determined based on the audio data are provided to a speech recognition model that has been trained to output scores indicating the likelihood of linguistic units for each of multiple different languages or dialects. The speech recognition model can be one that has been trained using cluster adaptive training. Output that the speech recognition model generated in response to receiving the input features is received. A transcription of the utterance, generated based on the output of the speech recognition model, is provided.
Type: Application
Filed: November 14, 2019
Publication date: May 21, 2020
Inventors: Zhifeng Chen, Bo Li, Eugene Weinstein, Yonghui Wu, Pedro J. Moreno Mengibar, Ron J. Weiss, Khe Chai Sim, Tara N. Sainath, Patrick An Phu Nguyen
-
Publication number: 20200051298
Abstract: This virtual make-up apparatus extracts an object portion image of a virtual make-up from a facial image captured by a camera, applies, on the object portion image, a color corresponding to an item in accordance with designation of the item for the virtual make-up, and adds a texture component different for each item, to a part of the object portion image. The virtual make-up apparatus displays, on a display unit, a virtual make-up image in which a virtual make-up using an item is applied on the facial image, by using the object portion image to which color is applied, an image in which a texture component is added to the part of the object portion image, and the object portion image of the virtual make-up of the facial image.
Type: Application
Filed: August 23, 2017
Publication date: February 13, 2020
Applicant: Panasonic Intellectual Property Management Co., Ltd.
Inventors: Phu NGUYEN, Yoshiteru TANAKA, Hiroto TOMITA
-
Publication number: 20200043483
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable storage media, for speech recognition using attention-based sequence-to-sequence models. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A sequence of feature vectors indicative of the acoustic characteristics of the utterance is generated. The sequence of feature vectors is processed using a speech recognition model that has been trained using a loss function that uses N-best lists of decoded hypotheses, the speech recognition model including an encoder, an attention module, and a decoder. The encoder and decoder each include one or more recurrent neural network layers. A sequence of output vectors representing distributions over a predetermined set of linguistic units is obtained. A transcription for the utterance is obtained based on the sequence of output vectors. Data indicating the transcription of the utterance is provided.
Type: Application
Filed: August 1, 2019
Publication date: February 6, 2020
Inventors: Rohit Prakash Prabhavalkar, Tara N. Sainath, Yonghui Wu, Patrick An Phu Nguyen, Zhifeng Chen, Chung-Cheng Chiu, Anjuli Patricia Kannan
-
Publication number: 20200027444
Abstract: Methods, systems, and apparatus, including computer-readable media, for performing speech recognition using sequence-to-sequence models. An automated speech recognition (ASR) system receives audio data for an utterance and provides features indicative of acoustic characteristics of the utterance as input to an encoder. The system processes an output of the encoder using an attender to generate a context vector and generates speech recognition scores using the context vector and a decoder trained using a training process that selects at least one input to the decoder with a predetermined probability. An input to the decoder during training is selected between input data based on a known value for an element in a training example, and input data based on an output of the decoder for the element in the training example. A transcription is generated for the utterance using word elements selected based on the speech recognition scores. The transcription is provided as an output of the ASR system.
Type: Application
Filed: July 19, 2019
Publication date: January 23, 2020
Inventors: Rohit Prakash Prabhavalkar, Zhifeng Chen, Bo Li, Chung-Cheng Chiu, Kanury Kanishka Rao, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Michiel A. U. Bacchiani, Tara N. Sainath, Jan Kazimierz Chorowski, Anjuli Patricia Kannan, Ekaterina Gonina, Patrick An Phu Nguyen
-
Patent number: 10497069
Abstract: Social customer service and support systems integrated with social media and social networks are disclosed. More particularly, a social customer care platform system is disclosed to allow customer care functions, and in particular to allow customer service agents to identify, prioritize, match and triage customer support requests that may arise through a social network and may be serviced using a social network. It manages and tracks a high volume of customer interactions and provides for monitoring of Internet social network posts relevant to a business's products or services, along with the ability to capture, monitor, filter, make sense of and respond to, in near real-time, tens of thousands of social interactions.
Type: Grant
Filed: October 28, 2016
Date of Patent: December 3, 2019
Assignee: Khoros, LLC
Inventors: Dewey Gaedcke, Phu Nguyen, James David Evans, Morten Moeller