Patents by Inventor Jiahui YU

Jiahui YU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240132731
    Abstract: Provided are a surface zwitterionized silicone antifouling coating and a preparation method thereof. In the disclosure, hydroxyl-terminated polydimethylsiloxane and N-aminoethyl-3-aminopropyl methyl dimethoxysilane are used as raw materials, and tetramethylammonium hydroxide pentahydrate is used as a catalyst, thereby preparing a series of polydimethylsiloxane resins containing amino side chains in different proportions; the side-chain amino-modified polydimethylsiloxane is then crosslinked and cured using 3-glycidyloxypropyltrimethoxysilane as a curing agent; after that, the cured silicone coating is soaked in a solution of 1,3-propane sultone in acetone for zwitterionization.
    Type: Application
    Filed: September 27, 2023
    Publication date: April 25, 2024
    Inventors: Jun WANG, Zhixin WANG, Dalei SONG, Rongrong CHEN, Qi LIU, Jingyuan LIU, Jing YU, Jiahui ZHU, Gaohui SUN
  • Publication number: 20240128377
    Abstract: A display panel includes a gate electrode, a source electrode, a drain electrode, and a metal oxide layer disposed corresponding to the gate electrode. The metal oxide layer includes a lower metal oxide layer and an upper metal oxide layer stacked on the lower metal oxide layer. The lower metal oxide layer includes an indium oxide and a lanthanoid oxide. The upper metal oxide layer is located on a surface of the lower metal oxide layer adjacent to the source electrode and the drain electrode. The source electrode and the drain electrode are connected to the upper metal oxide layer. The upper metal oxide layer includes an indium oxide and a lanthanoid oxide, and the upper metal oxide layer includes a polycrystalline phase.
    Type: Application
    Filed: December 30, 2022
    Publication date: April 18, 2024
    Applicant: GUANGZHOU CHINA STAR OPTOELECTRONICS SEMICONDUCTOR DISPLAY TECHNOLOGY CO., LTD.
    Inventors: Jiahui Huang, Zhixiong Jiang, Qiang Wang, Cheng Gong, Mingjiue Yu, Zhihui Cai
  • Patent number: 11951445
    Abstract: The present disclosure relates to the field of materials for uranium extraction from seawater (UES), and in particular, to a photothermal photocatalytic membrane for seawater desalination and uranium extraction and a preparation method therefor. The present disclosure provides a photothermal photocatalytic membrane for seawater desalination and uranium extraction and a preparation method therefor. The preparation method includes: fixing a treated carbon cloth to a glass plate, pouring casting solution 1 onto the carbon cloth to form a first layer of film, forming a second layer of film using casting solution 2, and putting the second layer of film into a first coagulation bath and a second coagulation bath in sequence to form the photothermal photocatalytic membrane. The photothermal photocatalytic membrane is supported by the carbon cloth, and its surface has a micro-nano structure.
    Type: Grant
    Filed: May 10, 2023
    Date of Patent: April 9, 2024
    Assignee: Harbin Engineering University
    Inventors: Jun Wang, Bingtao Zhang, Hongsen Zhang, Qi Liu, Jiahui Zhu, Jingyuan Liu, Jing Yu, Rongrong Chen, Lele Wang
  • Publication number: 20240112088
    Abstract: Systems and methods are provided for vector-quantized image modeling using vision transformers and improved codebook handling. In particular, the present disclosure provides a Vector-quantized Image Modeling (VIM) approach that involves pretraining a machine learning model (e.g., Transformer model) to predict rasterized image tokens autoregressively. The discrete image tokens can be encoded from a learned Vision-Transformer-based VQGAN (example implementations of which can be referred to as ViT-VQGAN). The present disclosure proposes multiple improvements over vanilla VQGAN from architecture to codebook learning, yielding better efficiency and reconstruction fidelity. The improved ViT-VQGAN further improves vector-quantized image modeling tasks, including unconditional image generation, conditioned image generation (e.g., class-conditioned image generation), and unsupervised representation learning.
    Type: Application
    Filed: November 27, 2023
    Publication date: April 4, 2024
    Inventors: Jiahui Yu, Xin Li, Han Zhang, Vijay Vasudevan, Alexander Yeong-Shiuh Ku, Jason Michael Baldridge, Yuanzhong Xu, Jing Yu Koh, Thang Minh Luong, Gunjan Baid, Zirui Wang, Yonghui Wu
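The entry above describes encoding images into discrete tokens with a learned codebook before modeling them autoregressively. As a hedged illustration of the vector-quantization step only (not the patented ViT-VQGAN), the sketch below snaps toy patch embeddings to their nearest codebook entries; all tensor sizes and names are invented for the example. An autoregressive Transformer would then be trained to predict the resulting token ids in raster order.

```python
# Minimal vector-quantization sketch (not the patented ViT-VQGAN itself):
# patch embeddings are snapped to their nearest codebook entries, producing
# discrete token ids that an autoregressive model could predict.
import torch

def vector_quantize(patch_embeddings, codebook):
    """patch_embeddings: (num_patches, dim); codebook: (codebook_size, dim)."""
    distances = torch.cdist(patch_embeddings, codebook)  # (num_patches, codebook_size)
    token_ids = distances.argmin(dim=-1)                 # discrete image tokens
    quantized = codebook[token_ids]                      # nearest-code reconstruction
    return token_ids, quantized

# Toy usage with random data standing in for a ViT encoder's patch outputs.
torch.manual_seed(0)
patches = torch.randn(64, 32)      # 64 patches, 32-dim embeddings (illustrative sizes)
codebook = torch.randn(512, 32)    # 512-entry codebook (illustrative size)
ids, quantized = vector_quantize(patches, codebook)
print(ids.shape, quantized.shape)  # torch.Size([64]) torch.Size([64, 32])
```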
  • Patent number: 11925906
    Abstract: A preparation method and a device for a seawater desalination and seawater uranium extraction membrane lining are provided. The preparation method is as follows: the lining is prepared as a finished product for standby through a process that includes cleaning, drying, restoring circularity, generating burrs, and fixing the length of the burrs; the device includes a first module, a second module, a third module, a fourth module, and a fifth module. The present invention has the advantages of simple operation, short processing time, low cost, and an obvious treatment effect: the bonding strength between the separation function layer and the lining is enhanced, the separation function layer does not easily fall off, the resistance to physical damage is greatly increased, and the initial bubble point pressure of the prepared enhanced film is high.
    Type: Grant
    Filed: October 16, 2023
    Date of Patent: March 12, 2024
    Assignee: HARBIN ENGINEERING UNIVERSITY
    Inventors: Jun Wang, Bingtao Zhang, Hongsen Zhang, Qi Liu, Jing Yu, Jiahui Zhu, Jingyuan Liu, Rongrong Chen
  • Patent number: 11916289
    Abstract: A foldable terminal device includes a feed source, a first part, and a second part. The first part is configured with a first antenna element that is fed by the feed source. The second part is configured with a second antenna element that is coupled to the first antenna element for coupled feeding when the foldable terminal device is folded. An operating frequency band of the second antenna element includes an operating frequency band of the first antenna element.
    Type: Grant
    Filed: March 7, 2020
    Date of Patent: February 27, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jiahui Chu, Dong Yu, Meng Hou
  • Publication number: 20230351149
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing multi-modal inputs using contrastive captioning neural networks.
    Type: Application
    Filed: April 28, 2023
    Publication date: November 2, 2023
    Inventors: Jiahui Yu, Zirui Wang, Vijay Vasudevan, Ho Man Yeung, Seyed Mojtaba Seyedhosseini Tarzjani, Yonghui Wu
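The abstract above is terse, but contrastive captioning generally combines an image-text contrastive objective with a caption-generation objective. The sketch below is a minimal, assumed illustration of such a combined loss, not the patented networks; the temperature, loss weight, and toy tensors are placeholders.

```python
# Hedged sketch of a combined contrastive + captioning objective.
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature      # (batch, batch) similarities
    targets = torch.arange(image_emb.size(0))            # matching pairs lie on the diagonal
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

def captioning_loss(decoder_logits, caption_tokens):
    # Standard next-token cross-entropy over the caption vocabulary.
    return F.cross_entropy(decoder_logits.reshape(-1, decoder_logits.size(-1)),
                           caption_tokens.reshape(-1))

# Toy tensors standing in for encoder/decoder outputs.
torch.manual_seed(0)
img, txt = torch.randn(8, 256), torch.randn(8, 256)
logits, tokens = torch.randn(8, 12, 1000), torch.randint(0, 1000, (8, 12))
total = contrastive_loss(img, txt) + 2.0 * captioning_loss(logits, tokens)
print(float(total))
```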
  • Publication number: 20230281400
    Abstract: Example embodiments of the present disclosure relate to systems and methods for pretraining image-processing models on weakly-supervised image-text pairs. The pretraining can include receiving a training sequence for the machine-learned image-processing model. The training sequence can include text tokens and image tokens. A prefix sequence can contain the image tokens. A remainder sequence can include a remainder set of the text tokens. The pretraining can include determining, using the prefix sequence as an input to the machine-learned image-processing model, an objective based on recovery of the remainder sequence. The pretraining can include updating one or more learnable parameters of the machine-learned image-processing model based on the objective.
    Type: Application
    Filed: March 3, 2022
    Publication date: September 7, 2023
    Inventors: Zirui Wang, Jiahui Yu, Yuan Cao, Wei Yu, Zihang Dai
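As described above, the model conditions on a prefix sequence (image tokens plus a leading span of text tokens) and is trained to recover the remainder of the text. The sketch below is a toy version of that objective under stated assumptions: a tiny LSTM stands in for the machine-learned image-processing model, and the token ids are random.

```python
# Minimal illustration of prefix-style pretraining: the loss is computed only
# on the text "remainder", while image tokens and the text prefix are given.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, dim = 100, 32
embed = nn.Embedding(vocab, dim)
lstm = nn.LSTM(dim, dim, batch_first=True)   # stand-in for the image-processing model
head = nn.Linear(dim, vocab)

image_tokens = torch.randint(0, vocab, (1, 6))   # discretized image tokens (toy)
text_tokens = torch.randint(0, vocab, (1, 10))
prefix_len = 4                                   # leading text tokens join the prefix

sequence = torch.cat([image_tokens, text_tokens], dim=1)
hidden, _ = lstm(embed(sequence))
logits = head(hidden[:, :-1])                    # predict the next token at each position
targets = sequence[:, 1:].clone()
# Only the text remainder contributes to the objective; prefix positions are masked out.
targets[:, : image_tokens.size(1) + prefix_len - 1] = -100
loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1), ignore_index=-100)
print(float(loss))
```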
  • Publication number: 20230237993
    Abstract: Systems and methods of the present disclosure are directed to a computing system, including one or more processors and a machine-learned multi-mode speech recognition model configured to operate in a streaming recognition mode or a contextual recognition mode. The computing system can perform operations including obtaining speech data and a ground truth label and processing the speech data using the contextual recognition mode to obtain contextual prediction data. The operations can include evaluating a difference between the contextual prediction data and the ground truth label and processing the speech data using the streaming recognition mode to obtain streaming prediction data. The operations can include evaluating a difference between the streaming prediction data and the ground truth label, and a difference between the contextual prediction data and the streaming prediction data. The operations can include adjusting parameters of the speech recognition model.
    Type: Application
    Filed: October 1, 2021
    Publication date: July 27, 2023
    Inventors: Jiahui Yu, Ruoming Pang, Wei Han, Anmol Gulati, Chung-Cheng Chiu, Bo Li, Tara N. Sainath, Yonghui Wu
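A hedged reading of the abstract above is a distillation-style objective: both modes are trained against the ground truth, and the streaming mode is additionally pulled toward the contextual mode's predictions. The sketch below illustrates that kind of combined loss; the KL term, weights, and tensor shapes are assumptions for illustration, not the patented training procedure.

```python
# Toy dual-mode training loss: supervised terms for both modes plus a
# distillation term pulling streaming predictions toward contextual ones.
import torch
import torch.nn.functional as F

def dual_mode_loss(streaming_logits, contextual_logits, labels, distill_weight=0.5):
    ce_streaming = F.cross_entropy(streaming_logits, labels)
    ce_contextual = F.cross_entropy(contextual_logits, labels)
    distill = F.kl_div(F.log_softmax(streaming_logits, dim=-1),
                       F.softmax(contextual_logits.detach(), dim=-1),
                       reduction="batchmean")
    return ce_streaming + ce_contextual + distill_weight * distill

torch.manual_seed(0)
streaming = torch.randn(4, 30)      # 4 frames, 30 output symbols (toy sizes)
contextual = torch.randn(4, 30)
labels = torch.randint(0, 30, (4,))
print(float(dual_mode_loss(streaming, contextual, labels)))
```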
  • Publication number: 20230130634
    Abstract: A computer-implemented method includes receiving a sequence of acoustic frames as input to an automatic speech recognition (ASR) model. Here, the ASR model includes a causal encoder and a decoder. The method also includes generating, by the causal encoder, a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The method also includes generating, by the decoder, a first probability distribution over possible speech recognition hypotheses. Here, the causal encoder includes a stack of causal encoder layers each including a Recurrent Neural Network (RNN) Attention-Performer module that applies linear attention.
    Type: Application
    Filed: September 29, 2022
    Publication date: April 27, 2023
    Applicant: Google LLC
    Inventors: Tara N. Sainath, Rami Botros, Anmol Gulati, Krzysztof Choromanski, Ruoming Pang, Trevor Strohman, Weiran Wang, Jiahui Yu
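The abstract above hinges on attention that applies linearly in the encoder. The sketch below shows generic causal linear attention with an elu+1 feature map rather than the Performer's random-feature kernel, so it should be read as an illustration of linear attention, not the patented RNN Attention-Performer module.

```python
# Causal linear attention: keep running sums of phi(k_j) v_j^T and phi(k_j),
# so each step costs O(dim * dim_v) instead of attending over all frames.
import torch
import torch.nn.functional as F

def causal_linear_attention(q, k, v, eps=1e-6):
    """q, k: (seq, dim); v: (seq, dim_v)."""
    q, k = F.elu(q) + 1, F.elu(k) + 1          # positive feature map
    state = torch.zeros(q.size(1), v.size(1))  # running sum of phi(k_j) v_j^T
    norm = torch.zeros(q.size(1))              # running sum of phi(k_j)
    outputs = []
    for i in range(q.size(0)):                 # strictly causal: only frames <= i
        state = state + torch.outer(k[i], v[i])
        norm = norm + k[i]
        outputs.append((q[i] @ state) / (q[i] @ norm + eps))
    return torch.stack(outputs)

torch.manual_seed(0)
q, k, v = torch.randn(16, 8), torch.randn(16, 8), torch.randn(16, 8)
print(causal_linear_attention(q, k, v).shape)  # torch.Size([16, 8])
```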
  • Publication number: 20230107493
    Abstract: A method includes receiving a sequence of input audio frames corresponding to an utterance captured by a user device, the utterance including a plurality of words. For each input audio frame, the method includes predicting, using a word boundary detection model configured to receive the sequence of input audio frames as input, whether the input audio frame is a word boundary. The method includes batching the input audio frames into a plurality of batches based on the input audio frames predicted as word boundaries, wherein each batch includes a corresponding plurality of batched input audio frames. For each of the plurality of batches, the method includes processing, using a speech recognition model, the corresponding plurality of batched input audio frames in parallel to generate a speech recognition result.
    Type: Application
    Filed: September 21, 2022
    Publication date: April 6, 2023
    Applicant: Google LLC
    Inventors: Shaan Jagdeep Patrick Bijwadia, Tara N. Sainath, Jiahui Yu, Shuo-yiin Chang, Yanzhang He
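The batching step described above can be illustrated with plain Python: frames are accumulated until a frame flagged as a word boundary closes the batch, and each closed batch can then be recognized in parallel. The boundary flags below are a toy stand-in for the word boundary detection model's output.

```python
# Group frames into batches that end at predicted word boundaries.
def batch_by_word_boundaries(frames, boundary_flags):
    batches, current = [], []
    for frame, is_boundary in zip(frames, boundary_flags):
        current.append(frame)
        if is_boundary:              # close the batch at a predicted word boundary
            batches.append(current)
            current = []
    if current:                      # trailing frames with no boundary yet
        batches.append(current)
    return batches

frames = [f"frame_{i}" for i in range(10)]
flags = [False, False, True, False, True, False, False, False, True, False]
for batch in batch_by_word_boundaries(frames, flags):
    print(batch)
```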
  • Publication number: 20220405579
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for selecting a neural network to perform a particular machine learning task while satisfying a set of constraints.
    Type: Application
    Filed: March 3, 2021
    Publication date: December 22, 2022
    Inventors: Jiahui Yu, Pengchong Jin, Hanxiao Liu, Gabriel Mintzer Bender, Pieter-Jan Kindermans, Mingxing Tan, Xiaodan Song, Ruoming Pang, Quoc V. Le
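The abstract above is brief, but the selection step it names can be sketched as filtering candidate networks by hard constraints and then picking the best performer. The candidate dictionaries, constraint names, and numbers below are invented for illustration and are not drawn from the patent.

```python
# Keep only candidates satisfying every constraint, then pick the most accurate.
def select_network(candidates, constraints):
    feasible = [c for c in candidates
                if all(c[key] <= limit for key, limit in constraints.items())]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c["accuracy"])

candidates = [
    {"name": "net_a", "accuracy": 0.81, "latency_ms": 12.0, "params_m": 5.2},
    {"name": "net_b", "accuracy": 0.84, "latency_ms": 25.0, "params_m": 9.8},
    {"name": "net_c", "accuracy": 0.83, "latency_ms": 18.0, "params_m": 7.1},
]
print(select_network(candidates, {"latency_ms": 20.0, "params_m": 8.0}))
# -> net_c: the most accurate candidate that meets both constraints
```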
  • Patent number: 11436775
    Abstract: Predicting patch displacement maps using a neural network is described. Initially, a digital image on which an image editing operation is to be performed is provided as input to a patch matcher having an offset prediction neural network. From this image and based on the image editing operation for which this network is trained, the offset prediction neural network generates an offset prediction formed as a displacement map, which has offset vectors that represent a displacement of pixels of the digital image to different locations for performing the image editing operation. Pixel values of the digital image are copied to the image pixels affected by the operation.
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: September 6, 2022
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
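The core data structure above is a displacement map: one offset vector per pixel indicating where its replacement value should be copied from. The sketch below applies such a map with NumPy; the random offsets stand in for the offset prediction neural network's output.

```python
# Apply a per-pixel (dy, dx) displacement map: each output pixel copies the
# input pixel at its own location plus its offset vector.
import numpy as np

def apply_displacement_map(image, offsets):
    """image: (H, W, C); offsets: (H, W, 2) integer (dy, dx) per pixel."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + offsets[..., 0], 0, h - 1)
    src_x = np.clip(xs + offsets[..., 1], 0, w - 1)
    return image[src_y, src_x]

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(32, 32, 3))
offsets = rng.integers(-3, 4, size=(32, 32, 2))   # toy stand-in for predicted offsets
print(apply_displacement_map(image, offsets).shape)  # (32, 32, 3)
```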
  • Publication number: 20220207321
    Abstract: Systems and methods can utilize a conformer model to process a data set for various data processing tasks, including, but not limited to, speech recognition, sound separation, protein synthesis determination, video or other image set analysis, and natural language processing. The conformer model can use feed-forward blocks, a self-attention block, and a convolution block to process data to learn global interactions and relative-offset-based local correlations of the input data.
    Type: Application
    Filed: December 31, 2020
    Publication date: June 30, 2022
    Inventors: Anmol Gulati, Ruoming Pang, Niki Parmar, Jiahui Yu, Wei Han, Chung-Cheng Chiu, Yu Zhang, Yonghui Wu, Shibo Wang, Weikeng Qin, Zhengdong Zhang
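The abstract above lists the ingredients of a conformer-style block: feed-forward blocks, a self-attention block, and a convolution block. The sketch below wires them together in the commonly described order (half-step feed-forward, self-attention, convolution, half-step feed-forward, final normalization) as a hedged illustration; layer sizes, activation, and kernel width are assumptions, not the patented model.

```python
import torch
import torch.nn as nn

class ConformerBlockSketch(nn.Module):
    def __init__(self, dim=64, heads=4, kernel_size=7):
        super().__init__()
        self.ff1 = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim),
                                 nn.SiLU(), nn.Linear(4 * dim, dim))
        self.attn_norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.conv_norm = nn.LayerNorm(dim)
        # depthwise convolution over time for local correlations
        self.depthwise = nn.Conv1d(dim, dim, kernel_size,
                                   padding=kernel_size // 2, groups=dim)
        self.ff2 = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim),
                                 nn.SiLU(), nn.Linear(4 * dim, dim))
        self.final_norm = nn.LayerNorm(dim)

    def forward(self, x):                          # x: (batch, time, dim)
        x = x + 0.5 * self.ff1(x)                  # first half-step feed-forward block
        h = self.attn_norm(x)
        x = x + self.attn(h, h, h)[0]              # self-attention block: global interactions
        c = self.depthwise(self.conv_norm(x).transpose(1, 2)).transpose(1, 2)
        x = x + c                                  # convolution block: local correlations
        x = x + 0.5 * self.ff2(x)                  # second half-step feed-forward block
        return self.final_norm(x)

print(ConformerBlockSketch()(torch.randn(2, 50, 64)).shape)  # torch.Size([2, 50, 64])
```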
  • Patent number: 11334971
    Abstract: Digital image completion by learning generation and patch matching jointly is described. Initially, a digital image having at least one hole is received. This holey digital image is provided as input to an image completer formed with a dual-stage framework that combines a coarse image neural network and an image refinement network. The coarse image neural network generates a coarse prediction of imagery for filling the holes of the holey digital image. The image refinement network receives the coarse prediction as input, refines the coarse prediction, and outputs a filled digital image having refined imagery that fills these holes. The image refinement network generates refined imagery using a patch matching technique, which includes leveraging information corresponding to patches of known pixels for filtering patches generated based on the coarse prediction. Based on this, the image completer outputs the filled digital image with the refined imagery.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: May 17, 2022
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
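The refinement stage above leverages patches of known pixels to filter patches generated from the coarse prediction. The sketch below reduces that idea to cosine-similarity matching between flattened patch vectors, a deliberate simplification of the network described in the patent.

```python
# Match hole-region patches against known-pixel patches by cosine similarity.
import torch
import torch.nn.functional as F

def match_patches(coarse_patches, known_patches):
    """coarse_patches: (n, d); known_patches: (m, d). Returns best-match indices."""
    similarity = F.normalize(coarse_patches, dim=-1) @ F.normalize(known_patches, dim=-1).t()
    return similarity.argmax(dim=-1)   # index of the closest known patch for each hole patch

torch.manual_seed(0)
coarse = torch.randn(10, 27)    # 10 hole patches, e.g. flattened 3x3x3 patches (toy)
known = torch.randn(40, 27)     # 40 patches sampled from known pixels (toy)
best = match_patches(coarse, known)
refined = known[best]           # copy/blend information from the matched known patches
print(best.shape, refined.shape)
```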
  • Publication number: 20220122622
    Abstract: An automated speech recognition (ASR) model includes a first encoder, a second encoder, and a decoder. The first encoder receives, as input, a sequence of acoustic frames, and generates, at each of a plurality of output steps, a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The second encoder receives, as input, the first higher order feature representation generated by the first encoder at each of the plurality of output steps, and generates, at each of the plurality of output steps, a second higher order feature representation for a corresponding first higher order feature frame. The decoder receives, as input, the second higher order feature representation generated by the second encoder at each of the plurality of output steps, and generates, at each of the plurality of time steps, a first probability distribution over possible speech recognition hypotheses.
    Type: Application
    Filed: April 21, 2021
    Publication date: April 21, 2022
    Applicant: Google LLC
    Inventors: Arun Narayanan, Tara Sainath, Chung-Cheng Chiu, Ruoming Pang, Rohit Prabhavalkar, Jiahui Yu, Ehsan Variani, Trevor Strohman
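The cascaded arrangement above (first encoder, then second encoder, then decoder) can be sketched as a simple pipeline. The GRU and linear layers below are illustrative stand-ins chosen for brevity, not the patented encoders or decoder.

```python
# Cascaded-encoder sketch: frames -> first encoder -> second encoder -> decoder.
import torch
import torch.nn as nn

feature_dim, hidden, vocab = 80, 128, 64
first_encoder = nn.GRU(feature_dim, hidden, batch_first=True)   # e.g. a streaming-style encoder
second_encoder = nn.GRU(hidden, hidden, batch_first=True)       # consumes the first encoder's output
decoder = nn.Linear(hidden, vocab)                               # per-frame distribution over hypotheses

frames = torch.randn(1, 100, feature_dim)        # 1 utterance, 100 acoustic frames (toy)
first_repr, _ = first_encoder(frames)            # first higher order feature representation
second_repr, _ = second_encoder(first_repr)      # second higher order feature representation
log_probs = decoder(second_repr).log_softmax(-1) # distribution at each output step
print(log_probs.shape)                           # torch.Size([1, 100, 64])
```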
  • Publication number: 20220122586
    Abstract: A computer-implemented method of training a streaming speech recognition model that includes receiving, as input to the streaming speech recognition model, a sequence of acoustic frames. The streaming speech recognition model is configured to learn an alignment probability between the sequence of acoustic frames and an output sequence of vocabulary tokens. The vocabulary tokens include a plurality of label tokens and a blank token. At each output step, the method includes determining a first probability of emitting one of the label tokens and determining a second probability of emitting the blank token. The method also includes generating the alignment probability at a sequence level based on the first probability and the second probability. The method also includes applying a tuning parameter to the alignment probability at the sequence level to maximize the first probability of emitting one of the label tokens.
    Type: Application
    Filed: September 9, 2021
    Publication date: April 21, 2022
    Applicant: Google LLC
    Inventors: Jiahui Yu, Chung-cheng Chiu, Bo Li, Shuo-yiin Chang, Tara Sainath, Wei Han, Anmol Gulati, Yanzhang He, Arun Narayanan, Yonghui Wu, Ruoming Pang
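A hedged, much-simplified reading of the abstract above: each step contributes a label-emission log-probability and a blank log-probability to a sequence-level alignment score, and a tuning parameter adds extra weight to the label-emission term. The toy single-alignment loss below illustrates that weighting only; it is not the full transducer objective from the patent.

```python
# Toy sequence-level loss with a tuning parameter boosting label emission.
import torch

def tuned_alignment_loss(label_logprobs, blank_logprobs, tuning_param=0.1):
    """label_logprobs / blank_logprobs: (steps,) log-probabilities along one alignment."""
    alignment_logprob = label_logprobs.sum() + blank_logprobs.sum()   # sequence-level alignment
    # Extra weight on label emission encourages emitting label tokens (rather than blanks).
    return -(alignment_logprob + tuning_param * label_logprobs.sum())

torch.manual_seed(0)
label_lp = torch.log(torch.rand(5).clamp(min=1e-3))
blank_lp = torch.log(torch.rand(5).clamp(min=1e-3))
print(float(tuned_alignment_loss(label_lp, blank_lp)))
```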
  • Patent number: 11250548
    Abstract: Digital image completion using deep learning is described. Initially, a digital image having at least one hole is received. This holey digital image is provided as input to an image completer formed with a framework that combines generative and discriminative neural networks based on learning architecture of the generative adversarial networks. From the holey digital image, the generative neural network generates a filled digital image having hole-filling content in place of holes. The discriminative neural networks detect whether the filled digital image and the hole-filling digital content correspond to or include computer-generated content or are photo-realistic. The generating and detecting are iteratively continued until the discriminative neural networks fail to detect computer-generated content for the filled digital image and hole-filling content or until detection surpasses a threshold difficulty.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: February 15, 2022
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
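The generative/discriminative pairing above can be shown with a toy masked-image setup: a generator proposes content for the hole, only the hole region is replaced, and a discriminator scores real versus filled images. Network sizes and the mask below are assumptions for illustration; the iterative train-until-indistinguishable loop is omitted.

```python
# Toy hole-filling forward pass with a generator and a discriminator.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 3, 3, padding=1))        # image + hole mask -> fill
discriminator = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

image = torch.rand(1, 3, 32, 32)
mask = torch.zeros(1, 1, 32, 32)
mask[:, :, 8:24, 8:24] = 1.0                     # the "hole" region
holey = image * (1 - mask)

fill = generator(torch.cat([holey, mask], dim=1))
filled = holey + fill * mask                     # only the hole is replaced
real_score = discriminator(image)
fake_score = discriminator(filled)
print(filled.shape, float(real_score), float(fake_score))
```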
  • Patent number: 10839575
    Abstract: Certain embodiments involve using an image completion neural network to perform user-guided image completion. For example, an image editing application accesses an input image having a completion region to be replaced with new image content. The image editing application also receives a guidance input that is applied to a portion of a completion region. The image editing application provides the input image and the guidance input to an image completion neural network that is trained to perform image-completion operations using guidance input. The image editing application produces a modified image by replacing the completion region of the input image with the new image content generated with the image completion network. The image editing application outputs the modified image having the new image content.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: November 17, 2020
    Assignee: ADOBE INC.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
  • Publication number: 20200342576
    Abstract: Digital image completion by learning generation and patch matching jointly is described. Initially, a digital image having at least one hole is received. This holey digital image is provided as input to an image completer formed with a dual-stage framework that combines a coarse image neural network and an image refinement network. The coarse image neural network generates a coarse prediction of imagery for filling the holes of the holey digital image. The image refinement network receives the coarse prediction as input, refines the coarse prediction, and outputs a filled digital image having refined imagery that fills these holes. The image refinement network generates refined imagery using a patch matching technique, which includes leveraging information corresponding to patches of known pixels for filtering patches generated based on the coarse prediction. Based on this, the image completer outputs the filled digital image with the refined imagery.
    Type: Application
    Filed: July 14, 2020
    Publication date: October 29, 2020
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu