Patents by Inventor Junwen Bai

Junwen Bai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230299446
    Abstract: A method for producing a battery includes discharging bubbles that stick to an inner surface of an injection nozzle to the outside of the nozzle, together with an electrolyte ejected from the nozzle, by ejecting the electrolyte from the nozzle under atmospheric pressure before evacuation is performed.
    Type: Application
    Filed: December 13, 2022
    Publication date: September 21, 2023
    Inventors: Takeyuki OZAKI, Junwen BAI
  • Publication number: 20230104228
    Abstract: A method includes receiving audio features and generating a latent speech representation based on the audio features. The method also includes generating a target quantized vector token and a target token index for a corresponding latent speech representation. The method also includes generating a contrastive context vector for a corresponding unmasked or masked latent speech representation and deriving a contrastive self-supervised loss based on the corresponding contrastive context vector and the corresponding target quantized vector token. The method also includes generating a high-level context vector based on the contrastive context vector and, for each high-level context vector, learning to predict the target token index at the corresponding time step using a cross-entropy loss based on the target token index. (A minimal code sketch of this two-loss setup appears after the listing.)
    Type: Application
    Filed: September 6, 2022
    Publication date: April 6, 2023
    Applicant: Google LLC
    Inventors: Bo Li, Junwen Bai, Yu Zhang, Ankur Bapna, Nikhil Siddhartha, Khe Chai Sim, Tara N. Sainath
  • Publication number: 20220067534
    Abstract: Embodiments described herein combine masked reconstruction and predictive coding. Specifically, unlike contrastive learning, the mutual information between past states and future states is directly estimated. Context information can also be directly captured via shifted masked reconstruction: unlike standard masked reconstruction, the target observations are shifted slightly toward the future to incorporate more predictability. The estimated mutual information and the shifted masked reconstruction loss can then be combined as the loss function to update the neural model. (A minimal code sketch of this combined objective appears after the listing.)
    Type: Application
    Filed: August 28, 2020
    Publication date: March 3, 2022
    Inventors: Junwen Bai, Weiran Wang, Yingbo Zhou, Caiming Xiong
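For publication 20230104228, the sketch below illustrates the two losses the abstract describes: a contrastive loss between context vectors and quantized target vectors, plus a cross-entropy loss that predicts each target token index from a high-level context vector. The module layout, dimensions, quantizer, and distractor-sampling scheme are illustrative assumptions rather than the patented implementation, and masking of the latent representations is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLossPretrainer(nn.Module):
    """Sketch: contrastive loss on quantized targets plus cross-entropy on token indices.

    All names and sizes are illustrative assumptions, not the patented design.
    """

    def __init__(self, feat_dim=80, dim=256, codebook_size=1024):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, dim)                     # latent speech representation
        self.codebook = nn.Embedding(codebook_size, dim)            # target quantized vector tokens
        self.contrastive_net = nn.GRU(dim, dim, batch_first=True)   # contrastive context vectors
        self.highlevel_net = nn.GRU(dim, dim, batch_first=True)     # high-level context vectors
        self.token_head = nn.Linear(dim, codebook_size)             # predicts the target token index

    def quantize(self, latents):
        # Nearest-codebook-entry quantization: returns target vectors and indices.
        dists = (latents.unsqueeze(-2) - self.codebook.weight).pow(2).sum(-1)  # (B, T, K)
        idx = dists.argmin(dim=-1)                                   # target token index per step
        return self.codebook(idx), idx

    def forward(self, audio_features):
        # audio_features: (B, T, feat_dim)
        latents = self.encoder(audio_features)                       # (B, T, D)
        targets, target_idx = self.quantize(latents.detach())

        context, _ = self.contrastive_net(latents)                   # contrastive context vectors
        # Contrastive loss: each context vector should match its own quantized
        # target, with the other time steps of the utterance acting as distractors.
        logits = torch.einsum("btd,bsd->bts", context, targets)      # (B, T, T)
        labels = torch.arange(logits.size(1), device=logits.device)
        labels = labels.unsqueeze(0).expand(logits.size(0), -1)      # (B, T)
        contrastive_loss = F.cross_entropy(logits.transpose(1, 2), labels)

        high, _ = self.highlevel_net(context)                        # high-level context vectors
        token_logits = self.token_head(high)                         # (B, T, K)
        ce_loss = F.cross_entropy(token_logits.transpose(1, 2), target_idx)

        return contrastive_loss + ce_loss
```

A forward pass on a batch of hypothetical log-mel features, e.g. `TwoLossPretrainer()(torch.randn(4, 100, 80))`, returns the summed scalar loss.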
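For publication 20220067534, the sketch below illustrates the combined objective the abstract describes: a directly estimated mutual-information term between past and future states plus a shifted masked reconstruction loss. The Donsker-Varadhan-style estimator, the shift size, and the masking scheme are assumptions made for illustration rather than the patented method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShiftedMaskedPredictiveCoder(nn.Module):
    """Sketch: shifted masked reconstruction plus a direct mutual-information estimate.

    All names, sizes, and the MI estimator are illustrative assumptions.
    """

    def __init__(self, feat_dim=80, dim=256, shift=3):
        super().__init__()
        self.shift = shift                                   # how far targets are shifted ahead
        self.encoder = nn.GRU(feat_dim, dim, batch_first=True)
        self.decoder = nn.Linear(dim, feat_dim)              # reconstructs shifted observations
        self.critic = nn.Bilinear(dim, dim, 1)               # scores (past state, future state) pairs

    def mutual_information(self, states):
        # Donsker-Varadhan-style lower bound on I(past; future): joint pairs are
        # (state_t, state_{t+shift}); the marginal shuffles future states in time.
        past, future = states[:, :-self.shift], states[:, self.shift:]
        joint = self.critic(past, future).mean()
        perm = torch.randperm(future.size(1), device=states.device)
        marginal = self.critic(past, future[:, perm]).exp().mean()
        return joint - marginal.log()

    def forward(self, x, mask):
        # x: (B, T, feat_dim) observations; mask: (B, T) bool, True where inputs are masked.
        states, _ = self.encoder(x.masked_fill(mask.unsqueeze(-1), 0.0))
        # Shifted masked reconstruction: at each masked position, reconstruct the
        # observation `shift` steps ahead rather than the masked frame itself.
        recon = self.decoder(states[:, :-self.shift])
        target = x[:, self.shift:]
        recon_mask = mask[:, :-self.shift].unsqueeze(-1).float()
        recon_loss = (F.l1_loss(recon, target, reduction="none") * recon_mask).sum()
        recon_loss = recon_loss / recon_mask.sum().clamp(min=1.0)
        # Combined loss: reconstruction plus the negated MI estimate (MI is maximized).
        return recon_loss - self.mutual_information(states)
```

A call such as `ShiftedMaskedPredictiveCoder()(torch.randn(4, 100, 80), torch.rand(4, 100) < 0.15)` returns the combined scalar loss for a randomly masked batch.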