Patents by Inventor Phu Nguyen

Phu Nguyen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220130374
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable media, for speech recognition using multi-dialect and multilingual models. In some implementations, audio data indicating audio characteristics of an utterance is received. Input features determined based on the audio data are provided to a speech recognition model that has been trained to output scores indicating the likelihood of linguistic units for each of multiple different languages or dialects. The speech recognition model can be one that has been trained using cluster adaptive training. Output that the speech recognition model generated in response to receiving the input features determined based on the audio data is received. A transcription of the utterance generated based on the output of the speech recognition model is provided.
    Type: Application
    Filed: January 10, 2022
    Publication date: April 28, 2022
    Applicant: Google LLC
    Inventors: Zhifeng Chen, Bo Li, Eugene Weinstein, Yonghui Wu, Pedro J. Moreno Mengibar, Ron J. Weiss, Khe Chai Sim, Tara N. Sainath, Patrick An Phu Nguyen
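The abstract above describes feeding a single speech recognition model input features together with an indication of the language or dialect, so one shared model serves many variants. A minimal sketch of that conditioning idea (all names and the one-hot scheme here are illustrative assumptions, not taken from the patent):

```python
# Hypothetical sketch: condition a single multilingual ASR model on a
# dialect identifier by appending a one-hot dialect vector to each
# acoustic feature frame.

DIALECTS = ["en-US", "en-GB", "en-IN"]   # toy dialect inventory

def one_hot(dialect):
    """Encode the dialect as a one-hot vector over the known dialects."""
    vec = [0.0] * len(DIALECTS)
    vec[DIALECTS.index(dialect)] = 1.0
    return vec

def condition_features(frames, dialect):
    """Append the dialect one-hot vector to every acoustic feature frame,
    so one shared model can adapt its behavior per language/dialect."""
    tag = one_hot(dialect)
    return [frame + tag for frame in frames]

frames = [[0.1, 0.2], [0.3, 0.4]]        # toy 2-dim acoustic features
conditioned = condition_features(frames, "en-GB")
# each frame now carries the dialect identity alongside its acoustics
```

In practice the dialect signal could instead select interpolation weights over cluster bases (as in cluster adaptive training); the one-hot concatenation above is only the simplest form of the idea.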
  • Patent number: 11238845
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable media, for speech recognition using multi-dialect and multilingual models. In some implementations, audio data indicating audio characteristics of an utterance is received. Input features determined based on the audio data are provided to a speech recognition model that has been trained to output scores indicating the likelihood of linguistic units for each of multiple different languages or dialects. The speech recognition model can be one that has been trained using cluster adaptive training. Output that the speech recognition model generated in response to receiving the input features determined based on the audio data is received. A transcription of the utterance generated based on the output of the speech recognition model is provided.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: February 1, 2022
    Assignee: Google LLC
    Inventors: Zhifeng Chen, Bo Li, Eugene Weinstein, Yonghui Wu, Pedro J. Moreno Mengibar, Ron J. Weiss, Khe Chai Sim, Tara N. Sainath, Patrick An Phu Nguyen
  • Publication number: 20220005465
    Abstract: A method for performing speech recognition using sequence-to-sequence models includes receiving audio data for an utterance and providing features indicative of acoustic characteristics of the utterance as input to an encoder. The method also includes processing an output of the encoder using an attender to generate a context vector, generating speech recognition scores using the context vector and a decoder trained using a training process, and generating a transcription for the utterance using word elements selected based on the speech recognition scores. The transcription is provided as an output of the automated speech recognition (ASR) system.
    Type: Application
    Filed: September 20, 2021
    Publication date: January 6, 2022
    Applicant: Google LLC
    Inventors: Rohit Prakash Prabhavalkar, Zhifeng Chen, Bo Li, Chung-Cheng Chiu, Kanury Kanishka Rao, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Michiel A.U. Bacchiani, Tara N. Sainath, Jan Kazimierz Chorowski, Anjuli Patricia Kannan, Ekaterina Gonina, Patrick An Phu Nguyen
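The encoder/attender/decoder pipeline in the abstract above hinges on the attender producing a context vector: an attention-weighted sum of encoder outputs. A minimal dot-product attention sketch (illustrative only; the patented system's attention mechanism may differ):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(encoder_outputs, decoder_state):
    """Attender sketch: score each encoder output against the decoder state,
    normalize with softmax, and return the weighted sum as a context vector."""
    scores = [dot(h, decoder_state) for h in encoder_outputs]
    weights = softmax(scores)
    dim = len(encoder_outputs[0])
    return [sum(w * h[i] for w, h in zip(weights, encoder_outputs))
            for i in range(dim)]

enc = [[1.0, 0.0], [0.0, 1.0]]          # toy encoder outputs for two frames
ctx = attend(enc, [10.0, 0.0])          # decoder state strongly matching frame 1
```

The decoder then combines the context vector with its own state to produce speech recognition scores over word elements.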
  • Publication number: 20210358491
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable storage media, for speech recognition using attention-based sequence-to-sequence models. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A sequence of feature vectors indicative of the acoustic characteristics of the utterance is generated. The sequence of feature vectors is processed using a speech recognition model that has been trained using a loss function that uses N-best lists of decoded hypotheses, the speech recognition model including an encoder, an attention module, and a decoder. The encoder and decoder each include one or more recurrent neural network layers. A sequence of output vectors representing distributions over a predetermined set of linguistic units is obtained. A transcription for the utterance is obtained based on the sequence of output vectors. Data indicating the transcription of the utterance is provided.
    Type: Application
    Filed: July 27, 2021
    Publication date: November 18, 2021
    Applicant: Google LLC
    Inventors: Rohit Prakash Prabhavalkar, Tara N. Sainath, Yonghui Wu, Patrick An Phu Nguyen, Zhifeng Chen, Chung-Cheng Chiu, Anjuli Patricia Kannan
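The abstract above mentions training with a loss function over N-best lists of decoded hypotheses. One common form of such a loss is the expected number of word errors under the model's (renormalized) hypothesis probabilities; the sketch below illustrates that computation (an assumption about the loss form, not a claim about the patented training procedure):

```python
def edit_distance(a, b):
    """Word-level Levenshtein distance between two token lists."""
    dp = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, wb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (wa != wb))  # substitution
    return dp[len(b)]

def expected_word_errors(nbest, reference):
    """N-best loss sketch: expected word-error count, weighting each
    hypothesis by its probability renormalized over the N-best list."""
    total = sum(p for _, p in nbest)
    return sum(p / total * edit_distance(hyp.split(), reference.split())
               for hyp, p in nbest)

nbest = [("the cat sat", 0.6), ("the cat sad", 0.3), ("a cat sat", 0.1)]
loss = expected_word_errors(nbest, "the cat sat")
```

Minimizing this quantity pushes probability mass toward hypotheses with fewer word errors, rather than optimizing per-token likelihood alone.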
  • Patent number: 11145293
    Abstract: Methods, systems, and apparatus, including computer-readable media, for performing speech recognition using sequence-to-sequence models. An automated speech recognition (ASR) system receives audio data for an utterance and provides features indicative of acoustic characteristics of the utterance as input to an encoder. The system processes an output of the encoder using an attender to generate a context vector and generates speech recognition scores using the context vector and a decoder trained using a training process that selects at least one input to the decoder with a predetermined probability. An input to the decoder during training is selected between input data based on a known value for an element in a training example, and input data based on an output of the decoder for the element in the training example. A transcription is generated for the utterance using word elements selected based on the speech recognition scores. The transcription is provided as an output of the ASR system.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: October 12, 2021
    Assignee: Google LLC
    Inventors: Rohit Prakash Prabhavalkar, Zhifeng Chen, Bo Li, Chung-Cheng Chiu, Kanury Kanishka Rao, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Michiel A. U. Bacchiani, Tara N. Sainath, Jan Kazimierz Chorowski, Anjuli Patricia Kannan, Ekaterina Gonina, Patrick An Phu Nguyen
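The training process in the abstract above selects the decoder's input with a predetermined probability between the ground-truth element and the model's own previous output, a scheme commonly known as scheduled sampling. A minimal sketch of that selection step (names and the flat probability are illustrative assumptions):

```python
import random

def choose_decoder_inputs(ground_truth, model_outputs, p_model, rng):
    """Scheduled-sampling sketch: at each step, feed the decoder the model's
    own previous prediction with probability p_model, otherwise the known
    ground-truth element from the training example."""
    return [m if rng.random() < p_model else g
            for g, m in zip(ground_truth, model_outputs)]

rng = random.Random(0)
truth = ["the", "cat", "sat"]
preds = ["the", "cat", "sad"]          # model's own (possibly wrong) outputs
inputs = choose_decoder_inputs(truth, preds, p_model=0.0, rng=rng)
# with p_model=0.0 the decoder always sees the ground truth
```

Exposing the decoder to its own outputs during training reduces the mismatch with inference, where no ground truth is available.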
  • Publication number: 20210304475
    Abstract: A virtual make-up apparatus and method: store cosmetic item information of cosmetic items of different colors; store a different texture component for each stored cosmetic item of a specific color; extract an object portion image of a virtual make-up from a facial image; extract color information from the object portion image; designate an item of the virtual make-up corresponding to a stored cosmetic item and output a color image by applying a color corresponding to the designated item on the object portion image; output a texture image, based on analyzed color information corresponding to a stored cosmetic item, by adding a texture component to a part of the object portion image; and display a virtual make-up image of virtual make-up using the designated item applied on the facial image, by using the color and texture images, and the object portion image of the virtual make-up of the facial image.
    Type: Application
    Filed: June 11, 2021
    Publication date: September 30, 2021
    Applicant: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Phu NGUYEN, Yoshiteru TANAKA, Hiroto TOMITA
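The virtual make-up abstract above combines two operations on the extracted object portion: applying an item's color and adding a per-item texture component. A toy per-pixel sketch of that composition (the blend weights and formula are illustrative assumptions, not from the patent):

```python
def apply_virtual_makeup(pixel, item_color, texture, alpha=0.5, beta=0.2):
    """Blend an item color onto one pixel of the extracted object portion
    (e.g. lips), then add a per-item texture component (e.g. gloss).
    alpha/beta are illustrative blend weights."""
    colored = tuple((1 - alpha) * p + alpha * c
                    for p, c in zip(pixel, item_color))
    # add the texture contribution, clamping to the valid channel range
    return tuple(min(255.0, v + beta * t) for v, t in zip(colored, texture))

lip_pixel = (180.0, 100.0, 100.0)      # toy RGB value from the facial image
lipstick = (200.0, 30.0, 60.0)         # color of the designated cosmetic item
gloss = (50.0, 50.0, 50.0)             # texture component stored for this item
result = apply_virtual_makeup(lip_pixel, lipstick, gloss)
```

A real implementation would run this over the whole masked region and composite the result back onto the facial image.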
  • Patent number: 11107463
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable storage media, for speech recognition using attention-based sequence-to-sequence models. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A sequence of feature vectors indicative of the acoustic characteristics of the utterance is generated. The sequence of feature vectors is processed using a speech recognition model that has been trained using a loss function that uses N-best lists of decoded hypotheses, the speech recognition model including an encoder, an attention module, and a decoder. The encoder and decoder each include one or more recurrent neural network layers. A sequence of output vectors representing distributions over a predetermined set of linguistic units is obtained. A transcription for the utterance is obtained based on the sequence of output vectors. Data indicating the transcription of the utterance is provided.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: August 31, 2021
    Assignee: Google LLC
    Inventors: Rohit Prakash Prabhavalkar, Tara N. Sainath, Yonghui Wu, Patrick An Phu Nguyen, Zhifeng Chen, Chung-Cheng Chiu, Anjuli Patricia Kannan
  • Patent number: 11069105
    Abstract: This virtual make-up apparatus extracts an object portion image of a virtual make-up from a facial image captured by a camera, applies, on the object portion image, a color corresponding to an item in accordance with designation of the item for the virtual make-up, and adds a texture component different for each item, to a part of the object portion image. The virtual make-up apparatus displays, on a display unit, a virtual make-up image in which a virtual make-up using an item is applied on the facial image, by using the object portion image to which color is applied, an image in which a texture component is added to the part of the object portion image, and the object portion image of the virtual make-up of the facial image.
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: July 20, 2021
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Phu Nguyen, Yoshiteru Tanaka, Hiroto Tomita
  • Publication number: 20210174391
    Abstract: A computer-implemented system including a data store (a content items database and a user account database) and one or more servers configured to execute a computer program to perform one or more of the following: receiving content items from the content items database that are associated with a topic selected by a user for posting on a social network, wherein at least one content item is associated with a URL; estimating a post-to-reaction filter for a time interval for the social network for the user; calculating a reaction profile associated with reactions to posts on the social network by aggregating reaction times of a plurality of users on the social network for one or more content items posted on the social network; and determining a schedule for posting the content items on the social network as a function of the post-to-reaction filter and the reaction profile.
    Type: Application
    Filed: November 18, 2020
    Publication date: June 10, 2021
    Applicant: Khoros, LLC
    Inventors: Allison Savage, Morten Moeller, Phu Nguyen, Guoning Hu, Nemanja Spasojevic
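The scheduling abstract above aggregates reaction times across users into a reaction profile and schedules posts accordingly. A minimal sketch of that aggregation step (the hour-of-day bucketing and "post at the peak hour" rule are illustrative assumptions):

```python
from collections import Counter

def reaction_profile(reaction_hours):
    """Aggregate observed reaction times (hour of day) across users into a
    per-hour reaction count profile."""
    return Counter(reaction_hours)

def best_posting_hour(profile):
    """Schedule posts for the hour with the highest aggregate reaction count."""
    return max(profile, key=profile.get)

hours = [9, 12, 12, 18, 12, 9]          # toy observed reaction timestamps
profile = reaction_profile(hours)
slot = best_posting_hour(profile)       # hour with the most reactions
```

The patented system additionally filters this profile through a per-network post-to-reaction delay estimate before producing the schedule.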
  • Patent number: 10933605
    Abstract: A reduced vibration structure comprises honeycomb and a vibration damping coating on at least a portion of the internal surface of at least a portion of the cells of the honeycomb. The vibration damping coating is formed by curing a coating composition comprising acrylic polymer or copolymer emulsion and a vibration damping filler. The structure can include an adhesive coupled to both the upper surface and the lower surface of the honeycomb and two pieces of sheathing coupled to the adhesive, one on the upper surface and one on the lower surface of the honeycomb.
    Type: Grant
    Filed: July 22, 2016
    Date of Patent: March 2, 2021
    Assignee: The Gill Corporation
    Inventors: Hongbin Shen, Phu Nguyen
  • Patent number: 10902462
    Abstract: A computer-implemented system is disclosed for providing a platform for managing a campaign for publishing data content on social networks to increase audience member reaction to the data content.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: January 26, 2021
    Assignee: Khoros, LLC
    Inventors: Allison Savage, Morten Moeller, Phu Nguyen, Guoning Hu, Nemanja Spasojevic
  • Publication number: 20210012089
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing point cloud data representing a sensor measurement of a scene captured by one or more sensors to generate an object detection output that identifies locations of one or more objects in the scene. When deployed within an on-board system of a vehicle, the object detection output that is generated can be used to make autonomous driving decisions for the vehicle with enhanced accuracy.
    Type: Application
    Filed: July 8, 2020
    Publication date: January 14, 2021
    Inventors: Jonathon Shlens, Patrick An Phu Nguyen, Benjamin James Caine, Jiquan Ngiam, Wei Han, Brandon Chauloon Yang, Yuning Chai, Pei Sun, Yin Zhou, Xi Yi, Ouais Alsharif, Zhifeng Chen, Vijay Vasudevan
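Point-cloud object detectors like the one described above typically begin by grouping raw sensor points into spatial cells before running a learned detection head. A sketch of that grouping step (illustrative preprocessing only, not the patented architecture):

```python
def voxelize(points, cell_size):
    """Group 3-D sensor points into voxel cells keyed by integer grid
    coordinates -- a common first step before a detector processes
    each occupied cell."""
    voxels = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        voxels.setdefault(key, []).append((x, y, z))
    return voxels

pts = [(0.2, 0.3, 0.1), (0.4, 0.1, 0.2), (1.5, 0.2, 0.0)]   # toy lidar points
grid = voxelize(pts, cell_size=1.0)
# two points fall in cell (0,0,0); one falls in cell (1,0,0)
```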
  • Publication number: 20200258091
    Abstract: Social customer service and support systems integrated with social media and social networks are disclosed. More particularly, a social customer care platform system is disclosed to allow customer care functions, and in particular to allow customer service agents to identify, prioritize, match and triage customer support requests that may arise through a social network and may be serviced using a social network. It manages and tracks a high volume of customer interactions and provides for monitoring of Internet social network posts relevant to a business's products or services along with the ability to capture, monitor, filter, make sense of and respond to, in near real-time, tens of thousands of social interactions.
    Type: Application
    Filed: November 25, 2019
    Publication date: August 13, 2020
    Applicant: Khoros, LLC
    Inventors: Dewey Gaedcke, Phu Nguyen, James David Evans, Morten Moeller
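The platform abstract above centers on identifying, prioritizing, and triaging support requests arriving from social networks. A toy priority-queue triage sketch (the scoring heuristic of urgency keywords plus author reach is purely an illustrative assumption):

```python
import heapq

def triage(posts):
    """Order incoming social posts for agents by a simple priority score
    (illustrative: urgency keywords weighted heavily, plus author
    follower count as a tiebreaker); highest priority first."""
    urgent_words = {"outage", "refund", "broken"}

    def score(post):
        text, followers = post
        urgency = sum(w in text.lower() for w in urgent_words)
        return urgency * 10000 + followers

    heap = [(-score(p), i, p) for i, p in enumerate(posts)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

queue = triage([("love the new app", 500),
                ("site is broken, need refund", 50),
                ("question about billing", 5000)])
# the urgent post is surfaced first despite its author's small following
```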
  • Publication number: 20200214427
    Abstract: A terminal captures first and second images respectively indicating facial images of a user before and after makeup, acquires information on a type or region of the makeup performed by the user, and transmits the first and second images and the information on the type or region of the makeup in association with each other to a server. The server deduces a makeup color of the makeup performed by the user based on the first and second images and the information on the type or region of the makeup performed by the user, extracts at least one similar makeup item having the makeup color based on information on the makeup color and a makeup item database, and transmits information on the at least one similar makeup item to the terminal. The terminal displays, on a display unit, the information on the at least one similar makeup item transmitted from the server.
    Type: Application
    Filed: September 22, 2017
    Publication date: July 9, 2020
    Applicant: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Yoshiteru TANAKA, Phu NGUYEN
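The abstract above deduces the applied makeup color from before/after images and matches it against an item database. A toy sketch of both steps (the mean-difference estimate and squared-distance matching are illustrative assumptions, not the patented method):

```python
def deduce_makeup_color(before, after):
    """Estimate the applied makeup color as the per-channel difference
    between after and before pixels, averaged over the made-up region."""
    n = len(before)
    return tuple(sum(a[c] - b[c] for a, b in zip(after, before)) / n
                 for c in range(3))

def nearest_item(color, item_db):
    """Pick the catalog item whose stored color is closest
    (squared per-channel distance) to the deduced color."""
    def dist(item_color):
        return sum((a - b) ** 2 for a, b in zip(color, item_color))
    return min(item_db, key=lambda name: dist(item_db[name]))

before = [(100.0, 80.0, 80.0), (110.0, 82.0, 78.0)]   # toy region pixels
after = [(150.0, 60.0, 90.0), (160.0, 62.0, 88.0)]
delta = deduce_makeup_color(before, after)
db = {"rose": (48.0, -18.0, 12.0), "coral": (70.0, 5.0, -5.0)}
match = nearest_item(delta, db)
```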
  • Publication number: 20200184575
    Abstract: The present invention relates to social customer service and support systems integrated with social media and social networks. More particularly, the invention provides a social customer care platform system to allow customer care functions, and in particular to allow customer service agents to identify, prioritize, match and triage customer support requests that may arise through a social network and may be serviced using a social network. It manages and tracks a high volume of customer interactions and provides for monitoring of Internet social network posts relevant to a business's products or services along with the ability to capture, monitor, filter, make sense of and respond to, in near real-time, tens of thousands of social interactions.
    Type: Application
    Filed: December 2, 2019
    Publication date: June 11, 2020
    Applicant: Khoros, LLC
    Inventors: Dewey Gaedcke, Phu Nguyen, James David Evans, Morten Moeller
  • Publication number: 20200160836
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable media, for speech recognition using multi-dialect and multilingual models. In some implementations, audio data indicating audio characteristics of an utterance is received. Input features determined based on the audio data are provided to a speech recognition model that has been trained to output scores indicating the likelihood of linguistic units for each of multiple different languages or dialects. The speech recognition model can be one that has been trained using cluster adaptive training. Output that the speech recognition model generated in response to receiving the input features determined based on the audio data is received. A transcription of the utterance generated based on the output of the speech recognition model is provided.
    Type: Application
    Filed: November 14, 2019
    Publication date: May 21, 2020
    Inventors: Zhifeng Chen, Bo Li, Eugene Weinstein, Yonghui Wu, Pedro J. Moreno Mengibar, Ron J. Weiss, Khe Chai Sim, Tara N. Sainath, Patrick An Phu Nguyen
  • Publication number: 20200051298
    Abstract: This virtual make-up apparatus extracts an object portion image of a virtual make-up from a facial image captured by a camera, applies, on the object portion image, a color corresponding to an item in accordance with designation of the item for the virtual make-up, and adds a texture component different for each item, to a part of the object portion image. The virtual make-up apparatus displays, on a display unit, a virtual make-up image in which a virtual make-up using an item is applied on the facial image, by using the object portion image to which color is applied, an image in which a texture component is added to the part of the object portion image, and the object portion image of the virtual make-up of the facial image.
    Type: Application
    Filed: August 23, 2017
    Publication date: February 13, 2020
    Applicant: Panasonic Intellectual Property Management Co., Ltd.
    Inventors: Phu NGUYEN, Yoshiteru TANAKA, Hiroto TOMITA
  • Publication number: 20200043483
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable storage media, for speech recognition using attention-based sequence-to-sequence models. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A sequence of feature vectors indicative of the acoustic characteristics of the utterance is generated. The sequence of feature vectors is processed using a speech recognition model that has been trained using a loss function that uses N-best lists of decoded hypotheses, the speech recognition model including an encoder, an attention module, and a decoder. The encoder and decoder each include one or more recurrent neural network layers. A sequence of output vectors representing distributions over a predetermined set of linguistic units is obtained. A transcription for the utterance is obtained based on the sequence of output vectors. Data indicating the transcription of the utterance is provided.
    Type: Application
    Filed: August 1, 2019
    Publication date: February 6, 2020
    Inventors: Rohit Prakash Prabhavalkar, Tara N. Sainath, Yonghui Wu, Patrick An Phu Nguyen, Zhifeng Chen, Chung-Cheng Chiu, Anjuli Patricia Kannan
  • Publication number: 20200027444
    Abstract: Methods, systems, and apparatus, including computer-readable media, for performing speech recognition using sequence-to-sequence models. An automated speech recognition (ASR) system receives audio data for an utterance and provides features indicative of acoustic characteristics of the utterance as input to an encoder. The system processes an output of the encoder using an attender to generate a context vector and generates speech recognition scores using the context vector and a decoder trained using a training process that selects at least one input to the decoder with a predetermined probability. An input to the decoder during training is selected between input data based on a known value for an element in a training example, and input data based on an output of the decoder for the element in the training example. A transcription is generated for the utterance using word elements selected based on the speech recognition scores. The transcription is provided as an output of the ASR system.
    Type: Application
    Filed: July 19, 2019
    Publication date: January 23, 2020
    Inventors: Rohit Prakash Prabhavalkar, Zhifeng Chen, Bo Li, Chung-Cheng Chiu, Kanury Kanishka Rao, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Michiel A.U. Bacchiani, Tara N. Sainath, Jan Kazimierz Chorowski, Anjuli Patricia Kannan, Ekaterina Gonina, Patrick An Phu Nguyen
  • Patent number: 10497069
    Abstract: Social customer service and support systems integrated with social media and social networks are disclosed. More particularly, a social customer care platform system is disclosed to allow customer care functions, and in particular to allow customer service agents to identify, prioritize, match and triage customer support requests that may arise through a social network and may be serviced using a social network. It manages and tracks a high volume of customer interactions and provides for monitoring of Internet social network posts relevant to a business's products or services along with the ability to capture, monitor, filter, make sense of and respond to, in near real-time, tens of thousands of social interactions.
    Type: Grant
    Filed: October 28, 2016
    Date of Patent: December 3, 2019
    Assignee: Khoros, LLC
    Inventors: Dewey Gaedcke, Phu Nguyen, James David Evans, Morten Moeller