Patents by Inventor Justin Lu
Justin Lu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240347060
Abstract: Some implementations process, using warm word model(s), a stream of audio data to determine a portion of the audio data that corresponds to particular word(s) and/or phrase(s) (e.g., a warm word) associated with an assistant command, process, using an automatic speech recognition (ASR) model, a preamble portion of the audio data (e.g., that precedes the warm word) and/or a postamble portion of the audio data (e.g., that follows the warm word) to generate ASR output, and determine, based on processing the ASR output, whether a user intended the assistant command to be performed. Additional or alternative implementations can process the stream of audio data using a speaker identification (SID) model to determine whether the audio data is sufficient to identify the user that provided a spoken utterance captured in the stream of audio data, and determine if that user is authorized to cause performance of the assistant command.
Type: Application
Filed: June 21, 2024
Publication date: October 17, 2024
Inventors: Victor Carbune, Matthew Sharifi, Ondrej Skopek, Justin Lu, Daniel Valcarce, Kevin Kilgour, Mohamad Hassan Rom, Nicolo D'Ercole, Michael Golikov
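The control flow this abstract describes (detect a warm word, run ASR only on the audio around it to infer intent, then gate on speaker identification) can be sketched as a toy simulation. Everything here is illustrative: real implementations run neural models over audio frames, while this sketch uses word lists and trivial stand-in checks so the flow is runnable; the warm words, intent cues, and function names are invented for the example.

```python
# Toy sketch of the warm-word pipeline from the abstract above.
# "Audio" is a list of words; the detector, intent check, and SID
# check are trivial stand-ins for the models the abstract names.

WARM_WORDS = {"stop", "pause"}            # words tied to assistant commands
INTENT_CUES = {"please", "now", "music"}  # cues suggesting the command was intended

def find_warm_word(stream):
    """Return (index, word) of the first warm word, or None."""
    for i, token in enumerate(stream):
        if token in WARM_WORDS:
            return i, token
    return None

def command_intended(preamble, postamble):
    """Stand-in for processing ASR output on the preamble/postamble:
    treat any intent cue in the surrounding words as confirmation."""
    return any(t in INTENT_CUES for t in preamble + postamble)

def speaker_authorized(speaker_id, authorized):
    """Stand-in for the speaker identification (SID) check."""
    return speaker_id in authorized

def handle_stream(stream, speaker_id, authorized):
    hit = find_warm_word(stream)
    if hit is None:
        return None
    i, word = hit
    # ASR is applied only to the portions before and after the warm word.
    if not command_intended(stream[:i], stream[i + 1:]):
        return None
    if not speaker_authorized(speaker_id, authorized):
        return None
    return word  # the assistant command to perform

print(handle_stream(["please", "stop", "the", "timer"], "alice", {"alice"}))  # -> stop
print(handle_stream(["he", "said", "stop", "earlier"], "alice", {"alice"}))   # -> None
```

The point of the preamble/postamble step is visible in the second call: the warm word "stop" occurs, but the surrounding words give no sign the user meant it as a command, so nothing is performed.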
-
Publication number: 20240321277
Abstract: Implementations described herein relate to an application and/or automated assistant that can identify arrangement operations to perform for arranging text during speech-to-text operations—without a user having to expressly identify the arrangement operations. In some instances, a user that is dictating a document (e.g., an email, a text message, etc.) can provide a spoken utterance to an application in order to incorporate textual content. However, in some of these instances, certain corresponding arrangements are needed for the textual content in the document. The textual content that is derived from the spoken utterance can be arranged by the application based on an intent, vocalization features, and/or contextual features associated with the spoken utterance and/or a type of the application associated with the document, without the user expressly identifying the corresponding arrangements.
Type: Application
Filed: May 29, 2024
Publication date: September 26, 2024
Inventors: Victor Carbune, Krishna Sapkota, Behshad Behzadi, Julia Proskurnia, Jacopo Sannazzaro Natta, Justin Lu, Magali Boizot-Roche, Marius Sajgalik, Nicolo D'Ercole, Zaheed Sabur, Luv Kothari
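The idea of choosing arrangement operations the user never dictates aloud can be illustrated with a minimal sketch. This is an assumption-laden toy: the abstract says arrangement can depend on intent, vocalization features, contextual features, and application type; the sketch below keys off application type alone, and the app names and rules are invented for the example.

```python
# Toy sketch of inferring text-arrangement operations during dictation,
# per the abstract above. Only the application type drives the choice
# here; real systems also use intent, vocalization, and context signals.

def arrange_dictated_text(text, app_type):
    """Apply arrangement operations the user did not expressly request."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if app_type == "email":
        # Emails get sentence capitalization and one sentence per line.
        return "\n".join(s[0].upper() + s[1:] + "." for s in sentences)
    if app_type == "chat":
        # Chat messages stay casual: one line, no trailing periods.
        return " ".join(sentences)
    return text  # unknown app type: leave the transcript untouched

print(arrange_dictated_text("thanks for the update. see you monday", "email"))
```

The same spoken utterance thus yields differently arranged text depending on the destination document, which is the behavior the abstract describes.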
-
Patent number: 12057119
Abstract: Some implementations process, using warm word model(s), a stream of audio data to determine a portion of the audio data that corresponds to particular word(s) and/or phrase(s) (e.g., a warm word) associated with an assistant command, process, using an automatic speech recognition (ASR) model, a preamble portion of the audio data (e.g., that precedes the warm word) and/or a postamble portion of the audio data (e.g., that follows the warm word) to generate ASR output, and determine, based on processing the ASR output, whether a user intended the assistant command to be performed. Additional or alternative implementations can process the stream of audio data using a speaker identification (SID) model to determine whether the audio data is sufficient to identify the user that provided a spoken utterance captured in the stream of audio data, and determine if that user is authorized to cause performance of the assistant command.
Type: Grant
Filed: January 3, 2023
Date of Patent: August 6, 2024
Assignee: GOOGLE LLC
Inventors: Victor Carbune, Matthew Sharifi, Ondrej Skopek, Justin Lu, Daniel Valcarce, Kevin Kilgour, Mohamad Hassan Rom, Nicolo D'Ercole, Michael Golikov
-
Patent number: 12033637
Abstract: Implementations described herein relate to an application and/or automated assistant that can identify arrangement operations to perform for arranging text during speech-to-text operations—without a user having to expressly identify the arrangement operations. In some instances, a user that is dictating a document (e.g., an email, a text message, etc.) can provide a spoken utterance to an application in order to incorporate textual content. However, in some of these instances, certain corresponding arrangements are needed for the textual content in the document. The textual content that is derived from the spoken utterance can be arranged by the application based on an intent, vocalization features, and/or contextual features associated with the spoken utterance and/or a type of the application associated with the document, without the user expressly identifying the corresponding arrangements.
Type: Grant
Filed: June 3, 2021
Date of Patent: July 9, 2024
Assignee: GOOGLE LLC
Inventors: Victor Carbune, Krishna Sapkota, Behshad Behzadi, Julia Proskurnia, Jacopo Sannazzaro Natta, Justin Lu, Magali Boizot-Roche, Márius Šajgalík, Nicolo D'Ercole, Zaheed Sabur, Luv Kothari
-
Publication number: 20240081519
Abstract: A brush for applying a product including a cosmetic, care, or pharmaceutical product onto the keratinous surface of a user. The applying member includes a support head and a plurality of bristles extends from a distal end face of the support head along a longitudinal axis of the support head. The distal end face of the support head comprises a first distal end face and a second distal end face. The first distal end face and the second distal end face are substantially flat surfaces that intersect each other to form a ridge on the distal end face. The ridge on the distal end face extends along an axis perpendicular to a longitudinal axis of the support head.
Type: Application
Filed: September 11, 2022
Publication date: March 14, 2024
Inventors: Justin Lu, Feng-Ying Fu, Jun Shen
-
Publication number: 20230143177
Abstract: Some implementations process, using warm word model(s), a stream of audio data to determine a portion of the audio data that corresponds to particular word(s) and/or phrase(s) (e.g., a warm word) associated with an assistant command, process, using an automatic speech recognition (ASR) model, a preamble portion of the audio data (e.g., that precedes the warm word) and/or a postamble portion of the audio data (e.g., that follows the warm word) to generate ASR output, and determine, based on processing the ASR output, whether a user intended the assistant command to be performed. Additional or alternative implementations can process the stream of audio data using a speaker identification (SID) model to determine whether the audio data is sufficient to identify the user that provided a spoken utterance captured in the stream of audio data, and determine if that user is authorized to cause performance of the assistant command.
Type: Application
Filed: January 3, 2023
Publication date: May 11, 2023
Inventors: Victor Carbune, Matthew Sharifi, Ondrej Skopek, Justin Lu, Daniel Valcarce, Kevin Kilgour, Mohamad Hassan Rom, Nicolo D'Ercole, Michael Golikov
-
Publication number: 20230061929
Abstract: Implementations described herein relate to configuring a dynamic warm word button, that is associated with a client device, with particular assistant commands based on detected occurrences of warm word activation events at the client device. In response to detecting an occurrence of a given warm word activation event at the client device, implementations can determine whether user verification is required for a user that actuated the warm word button. Further, in response to determining that the user verification is required for the user that actuated the warm word button, the user verification can be performed. Moreover, in response to determining that the user that actuated the warm word button has been verified, implementations can cause an automated assistant to perform the particular assistant command associated with the warm word activation event. Audio-based and/or non-audio-based techniques can be utilized to perform the user verification.
Type: Application
Filed: November 22, 2021
Publication date: March 2, 2023
Inventors: Victor Carbune, Antonio Gaetani, Bastiaan Van Eeckhoudt, Daniel Valcarce, Michael Golikov, Justin Lu, Ondrej Skopek, Nicolo D'Ercole, Zaheed Sabur, Behshad Behzadi, Luv Kothari
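The dynamic binding described in this abstract (an activation event configures the button with a command, and a press runs it only after any required user verification) can be sketched in a few lines. The event names, command names, and the membership-based verification rule below are hypothetical stand-ins; the abstract leaves the verification technique open (audio-based and/or non-audio-based).

```python
# Toy sketch of the dynamic warm-word button flow from the abstract above.
# A single button binding is rebound by whichever activation event fired last.

BUTTON_BINDING = {}  # command currently assigned to the warm word button

def on_activation_event(event, command, needs_verification):
    """Configure the button with the command tied to this activation event."""
    BUTTON_BINDING["event"] = event
    BUTTON_BINDING["command"] = command
    BUTTON_BINDING["needs_verification"] = needs_verification

def on_button_press(user, verified_users):
    """Run the bound command, verifying the user first when required."""
    if "command" not in BUTTON_BINDING:
        return None  # no activation event has configured the button yet
    if BUTTON_BINDING["needs_verification"] and user not in verified_users:
        return None  # verification required but not satisfied
    return BUTTON_BINDING["command"]

on_activation_event("incoming_call", "answer_call", needs_verification=True)
print(on_button_press("alice", {"alice"}))  # -> answer_call
print(on_button_press("bob", {"alice"}))    # -> None
```

Rebinding on each activation event is what makes the button "dynamic": the same physical press answers a call while the phone is ringing and could, say, stop a timer while a timer is sounding.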
-
Patent number: 11557293
Abstract: Some implementations process, using warm word model(s), a stream of audio data to determine a portion of the audio data that corresponds to particular word(s) and/or phrase(s) (e.g., a warm word) associated with an assistant command, process, using an automatic speech recognition (ASR) model, a preamble portion of the audio data (e.g., that precedes the warm word) and/or a postamble portion of the audio data (e.g., that follows the warm word) to generate ASR output, and determine, based on processing the ASR output, whether a user intended the assistant command to be performed. Additional or alternative implementations can process the stream of audio data using a speaker identification (SID) model to determine whether the audio data is sufficient to identify the user that provided a spoken utterance captured in the stream of audio data, and determine if that user is authorized to cause performance of the assistant command.
Type: Grant
Filed: May 17, 2021
Date of Patent: January 17, 2023
Assignee: GOOGLE LLC
Inventors: Victor Carbune, Matthew Sharifi, Ondrej Skopek, Justin Lu, Daniel Valcarce, Kevin Kilgour, Mohamad Hassan Rom, Nicolo D'Ercole, Michael Golikov
-
Publication number: 20220366903
Abstract: Some implementations process, using warm word model(s), a stream of audio data to determine a portion of the audio data that corresponds to particular word(s) and/or phrase(s) (e.g., a warm word) associated with an assistant command, process, using an automatic speech recognition (ASR) model, a preamble portion of the audio data (e.g., that precedes the warm word) and/or a postamble portion of the audio data (e.g., that follows the warm word) to generate ASR output, and determine, based on processing the ASR output, whether a user intended the assistant command to be performed. Additional or alternative implementations can process the stream of audio data using a speaker identification (SID) model to determine whether the audio data is sufficient to identify the user that provided a spoken utterance captured in the stream of audio data, and determine if that user is authorized to cause performance of the assistant command.
Type: Application
Filed: May 17, 2021
Publication date: November 17, 2022
Inventors: Victor Carbune, Matthew Sharifi, Ondrej Skopek, Justin Lu, Daniel Valcarce, Kevin Kilgour, Mohamad Hassan Rom, Nicolo D'Ercole, Michael Golikov
-
Publication number: 20220366911
Abstract: Implementations described herein relate to an application and/or automated assistant that can identify arrangement operations to perform for arranging text during speech-to-text operations—without a user having to expressly identify the arrangement operations. In some instances, a user that is dictating a document (e.g., an email, a text message, etc.) can provide a spoken utterance to an application in order to incorporate textual content. However, in some of these instances, certain corresponding arrangements are needed for the textual content in the document. The textual content that is derived from the spoken utterance can be arranged by the application based on an intent, vocalization features, and/or contextual features associated with the spoken utterance and/or a type of the application associated with the document, without the user expressly identifying the corresponding arrangements.
Type: Application
Filed: June 3, 2021
Publication date: November 17, 2022
Inventors: Victor Carbune, Krishna Sapkota, Behshad Behzadi, Julia Proskurnia, Jacopo Sannazzaro Natta, Justin Lu, Magali Boizot-Roche, Márius Šajgalík, Nicolo D'Ercole, Zaheed Sabur, Luv Kothari
-
Patent number: 11364640
Abstract: Presented are automatic end-of-arm tool changing devices, methods for making/using such tool changing devices, and automated robotic systems with such tool changing devices. A tool changing device for an automated robotic system includes a quick-change (QC) interlock subassembly that attaches to an end effector. The QC subassembly includes a housing, one or more locking pins movable on the QC housing, and one or more anchor pins projecting from the QC housing. A finger block (FB) subassembly, which performs a task on a target object, includes a housing, a robot tool mounted to the FB housing, and one or more key slots and one or more pin holes in the FB housing. Each key slot receives an anchor pin; once the anchor pin is slid to a locking end of the key slot, each locking pin automatically slides into a pin hole to thereby lock together the QC and FB subassemblies.
Type: Grant
Filed: April 16, 2021
Date of Patent: June 21, 2022
Assignee: Sirius Automation Group Inc.
Inventors: Lawrence Markus, Justin Lu
-
Patent number: D1064400
Type: Grant
Filed: September 23, 2022
Date of Patent: February 25, 2025
Assignee: APR Beauty Group, Inc.
Inventors: Justin Lu, Feng-Ying Fu, Jun Shen