Patents by Inventor Alex Park

Alex Park has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Brief, simplified code sketches of the listed methods follow the listing.

  • Publication number: 20250119834
    Abstract: A method is disclosed herein. The method includes receiving, from a server, first data indicative of a first likelihood of a user equipment (UE) successfully performing at least one of a position fix or a transmission of sensor data, based on a first set of characteristics associated with the UE and at least one additional UE. The method also includes computing, based on the first data and a second set of characteristics associated with the UE, a second likelihood of the UE successfully performing at least one of the position fix or the transmission of the sensor data. The method further includes scheduling, based on the second likelihood, a wake-up time instance or a sleep time instance, and transitioning the UE (1) from a sleep state to an active state at the wake-up time instance or (2) from the active state to the sleep state at the sleep time instance.
    Type: Application
    Filed: October 9, 2023
    Publication date: April 10, 2025
    Inventors: An Chen, Alex Park
  • Publication number: 20250029624
    Abstract: A method for automatic speech recognition using joint acoustic echo cancellation, speech enhancement, and voice separation includes receiving, at a contextual frontend processing model, input speech features corresponding to a target utterance. The method also includes receiving, at the contextual frontend processing model, at least one of a reference audio signal, a contextual noise signal including noise prior to the target utterance, or a speaker embedding including voice characteristics of a target speaker that spoke the target utterance. The method further includes processing, using the contextual frontend processing model, the input speech features and the at least one of the reference audio signal, the contextual noise signal, or the speaker embedding to generate enhanced speech features.
    Type: Application
    Filed: October 4, 2024
    Publication date: January 23, 2025
    Applicant: Google LLC
    Inventors: Arun Narayanan, Tom O'Malley, Quan Wang, Alex Park, James Walker, Nathan David Howard, Yanzhang He, Chung-Cheng Chiu
  • Patent number: 12119014
    Abstract: A method for automatic speech recognition using joint acoustic echo cancellation, speech enhancement, and voice separation includes receiving, at a contextual frontend processing model, input speech features corresponding to a target utterance. The method also includes receiving, at the contextual frontend processing model, at least one of a reference audio signal, a contextual noise signal including noise prior to the target utterance, or a speaker embedding including voice characteristics of a target speaker that spoke the target utterance. The method further includes processing, using the contextual frontend processing model, the input speech features and the at least one of the reference audio signal, the contextual noise signal, or the speaker embedding to generate enhanced speech features.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: October 15, 2024
    Assignee: Google LLC
    Inventors: Arun Narayanan, Tom O'Malley, Quan Wang, Alex Park, James Walker, Nathan David Howard, Yanzhang He, Chung-Cheng Chiu
  • Publication number: 20230038982
    Abstract: A method for automatic speech recognition using joint acoustic echo cancellation, speech enhancement, and voice separation includes receiving, at a contextual frontend processing model, input speech features corresponding to a target utterance. The method also includes receiving, at the contextual frontend processing model, at least one of a reference audio signal, a contextual noise signal including noise prior to the target utterance, or a speaker embedding including voice characteristics of a target speaker that spoke the target utterance. The method further includes processing, using the contextual frontend processing model, the input speech features and the at least one of the reference audio signal, the contextual noise signal, or the speaker embedding to generate enhanced speech features.
    Type: Application
    Filed: December 14, 2021
    Publication date: February 9, 2023
    Applicant: Google LLC
    Inventors: Arun Narayanan, Tom O'Malley, Quan Wang, Alex Park, James Walker, Nathan David Howard, Yanzhang He, Chung-Cheng Chiu
  • Publication number: 20090132252
    Abstract: Disclosed methods and apparatus segment a signal, such as an acoustic speech signal, into coherent segments, such as coherent topics. In the case of an acoustic speech signal, the segmentation relies on only raw acoustic information and may be performed without requiring access to, or generation of, a transcript of the acoustic speech signal. Recurring acoustic patterns are found by matching pairs of sounds, based on acoustic similarity. Information about distributional similarity from multiple local comparisons is aggregated and is further processed to fill gaps in the data by growing regions that represent recurring acoustic patterns. Selection criteria are used to identify coherent topics represented by the grown regions and topic boundaries therebetween. Another signal, such as a video signal, may be partitioned according to topic boundaries identified in an acoustic speech signal that is related to the video signal.
    Type: Application
    Filed: November 20, 2007
    Publication date: May 21, 2009
    Applicant: Massachusetts Institute of Technology
    Inventors: Igor Malioutov, Alex Park
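
The sketches below are simplified, hypothetical illustrations of the methods summarized above; they are not the claimed implementations. First, a minimal Python sketch of the wake-up/sleep scheduling idea in publication 20250119834. The weighting of battery level and signal quality, the success threshold, and the function names are assumptions made only to show the data flow from a server-provided first likelihood to a locally computed second likelihood and a scheduled state transition.

from dataclasses import dataclass

SUCCESS_THRESHOLD = 0.6  # hypothetical cut-off for a "likely successful" position fix or transmission

@dataclass
class UeCharacteristics:
    battery_level: float   # 0.0 - 1.0, local characteristic of the UE
    signal_quality: float  # 0.0 - 1.0, local characteristic of the UE

def combine_likelihoods(server_likelihood: float, local: UeCharacteristics) -> float:
    """Second likelihood: the server's first likelihood adjusted by the UE's own characteristics."""
    local_factor = 0.5 * local.battery_level + 0.5 * local.signal_quality
    return server_likelihood * local_factor

def schedule_transition(server_likelihood: float, local: UeCharacteristics,
                        now: float, check_interval: float) -> tuple[str, float]:
    """Return ('wake', t) or ('sleep', t) depending on the computed second likelihood."""
    second_likelihood = combine_likelihoods(server_likelihood, local)
    if second_likelihood >= SUCCESS_THRESHOLD:
        return "wake", now                    # transition sleep -> active at the wake-up time instance
    return "sleep", now + check_interval      # stay asleep and re-evaluate at the sleep time instance

if __name__ == "__main__":
    ue = UeCharacteristics(battery_level=0.8, signal_quality=0.9)
    print(schedule_transition(server_likelihood=0.7, local=ue, now=0.0, check_interval=30.0))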
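
Next, a sketch of the input/output contract of the contextual frontend described in publications 20250029624 and 20230038982 and patent 12119014. The actual frontend is a learned neural model; the hand-set masking arithmetic below is a hypothetical stand-in used only to show how the optional reference audio, contextual noise, and speaker embedding inputs combine with the input speech features to produce enhanced features of the same shape.

import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def contextual_frontend(input_features: np.ndarray,
                        reference_audio_features: np.ndarray | None = None,
                        contextual_noise_features: np.ndarray | None = None,
                        speaker_embedding: np.ndarray | None = None) -> np.ndarray:
    """Return enhanced speech features with the same [frames, bins] shape as input_features."""
    mask = np.ones_like(input_features)
    if reference_audio_features is not None:
        # Echo-cancellation cue: aligned [frames, bins] features of the device's own playback.
        mask -= 0.5 * sigmoid(reference_audio_features)
    if contextual_noise_features is not None:
        # Noise captured before the target utterance, summarized as a per-bin profile.
        noise_profile = contextual_noise_features.mean(axis=0, keepdims=True)
        mask -= 0.3 * sigmoid(noise_profile)
    if speaker_embedding is not None:
        # Voice characteristics of the target speaker, shape [bins].
        mask *= sigmoid(input_features @ speaker_embedding)[:, None]
    return input_features * np.clip(mask, 0.0, 1.0)

In a trained system the mask (or the enhanced features directly) would be produced by the model rather than by these fixed weights; the point here is only the set of optional context inputs named in the abstract.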
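
Finally, a sketch of the transcript-free topic segmentation idea in publication 20090132252: compare acoustic segments by similarity and propose topic boundaries where similarity across a point drops. The cosine-style similarity, the fixed window, and the threshold are simplified placeholders, not the pattern-matching and region-growing procedure claimed in the patent.

import numpy as np

def segment_by_acoustic_similarity(features: np.ndarray, window: int = 10,
                                   threshold: float = 0.5) -> list[int]:
    """features: [frames, dims] acoustic features. Returns frame indices proposed as topic boundaries."""
    # Normalize rows so the dot product behaves like cosine similarity.
    norms = np.linalg.norm(features, axis=1, keepdims=True) + 1e-8
    unit = features / norms
    boundaries = []
    for t in range(window, len(unit) - window):
        left = unit[t - window:t].mean(axis=0)    # average acoustics before the candidate point
        right = unit[t:t + window].mean(axis=0)   # average acoustics after the candidate point
        if float(left @ right) < threshold:       # low cross-similarity suggests a topic change
            boundaries.append(t)
    return boundaries

The boundaries found this way could then be used to partition a related signal, such as the accompanying video, as the abstract describes.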