Patents by Inventor Yijun Li

Yijun Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250061346
    Abstract: A method of determining interaction information, an electronic device and a storage medium are provided, which relate to the field of artificial intelligence technology, in particular to large models, generative models, natural language processing (NLP), intelligent search, and other fields. An implementation is to determine a plurality of questioning dimensions according to query information of a subject and historical query information, where each questioning dimension includes a dimension name and a plurality of options; determine a target questioning dimension from the plurality of questioning dimensions according to evaluation values of the plurality of questioning dimensions and whether semantic information of the plurality of questioning dimensions is consistent with semantic information of a query result associated with the query information; and determine the interaction information according to the dimension name and the plurality of options in the target questioning dimension.
    Type: Application
    Filed: October 31, 2024
    Publication date: February 20, 2025
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Xiao LI, Xin JIA, Simiu GU, Junfeng WANG, Haibo SHI, Yu LU, Sheng XU, Liang ZHANG, Wenjie ZHOU, Yijun LIU, Mei LU, Zichen WU, Min YANG, Huanjie WANG, Qiao TANG, Mengmeng CUI
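The dimension-selection step in the abstract above can be sketched in a few lines. This is a hypothetical pure-Python illustration, not the patented implementation; the names `QuestionDimension`, `evaluation`, and `semantic_match` are assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class QuestionDimension:
    name: str              # dimension name
    options: list          # the plurality of options
    evaluation: float      # evaluation value of the dimension
    semantic_match: bool   # consistent with the query result's semantics?

def select_target_dimension(dimensions):
    # Keep only dimensions whose semantics agree with the query result,
    # then pick the one with the highest evaluation value.
    candidates = [d for d in dimensions if d.semantic_match]
    if not candidates:
        return None
    return max(candidates, key=lambda d: d.evaluation)

def build_interaction_info(dim):
    # Interaction information = the dimension name plus its options.
    return {"question": dim.name, "options": dim.options}

dims = [
    QuestionDimension("cuisine", ["Sichuan", "Cantonese"], 0.9, True),
    QuestionDimension("budget", ["low", "high"], 0.95, False),
]
target = select_target_dimension(dims)
```

Here the higher-scoring "budget" dimension is rejected because its semantics do not match the query result, so "cuisine" is selected.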
  • Patent number: 12230014
    Abstract: An image generation system enables user input during the process of training a generative model to influence the model's ability to generate new images with desired visual features. A source generative model for a source domain is fine-tuned using training images in a target domain to provide an adapted generative model for the target domain. Interpretable factors are determined for the source generative model and the adapted generative model. A user interface is provided that enables a user to select one or more interpretable factors. The user-selected interpretable factor(s) are used to generate a user-adapted generative model, for instance, by using a loss function based on the user-selected interpretable factor(s). The user-adapted generative model can be used to create new images in the target domain.
    Type: Grant
    Filed: February 25, 2022
    Date of Patent: February 18, 2025
    Assignee: ADOBE INC.
    Inventors: Yijun Li, Utkarsh Ojha, Richard Zhang, Jingwan Lu, Elya Shechtman, Alexei A. Efros
  • Publication number: 20250054473
    Abstract: The present disclosure provides a method for configuring a learning model for music generation and the corresponding learning model. The method includes training a masked autoencoder on training data using a combination of a reconstruction loss over the time and frequency domains and a patch-based adversarial objective operating at different resolutions. An omnidirectional latent diffusion model is trained based on music data represented in a latent space to obtain a pretrained diffusion model. The pretrained diffusion model is fine-tuned based on text-guided music generation, bidirectional music in-painting, and unidirectional music continuation. The method enables high-fidelity music generation conditioned on text or music representations while maintaining computational efficiency.
    Type: Application
    Filed: August 6, 2024
    Publication date: February 13, 2025
    Inventors: Yijun Wang, Yao Yao, Peike Li, Boyu Chen, David McDonald, Nicolas Fourrier, Erin Zink, Aaron McDonald, Yilun Wang, Yikai Wang
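The combined time- and frequency-domain reconstruction loss mentioned above can be illustrated with a minimal sketch. This is an assumption-laden stand-in (a naive DFT instead of a real FFT, plain MSE, no adversarial term), not the patent's loss.

```python
import cmath
import math

def dft_mag(x):
    # Naive DFT magnitude spectrum (an illustrative stand-in for an FFT).
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

def reconstruction_loss(signal, recon, freq_weight=1.0):
    # Reconstruction error measured in both the time domain and the
    # frequency domain, as the abstract describes.
    return mse(signal, recon) + freq_weight * mse(dft_mag(signal), dft_mag(recon))
```

A perfect reconstruction gives zero loss; any time- or frequency-domain deviation increases it.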
  • Publication number: 20250037431
    Abstract: Systems and methods for training a Generative Adversarial Network (GAN) using feature regularization are described herein. Embodiments are configured to generate a candidate image using a generator network of a GAN, classify the candidate image as real or generated using a discriminator network of the GAN, and train the GAN to generate realistic images based on the classifying of the candidate image. The training process includes regularizing a gradient with respect to features extracted using a discriminator network of the GAN.
    Type: Application
    Filed: July 24, 2023
    Publication date: January 30, 2025
    Inventors: Min Jin Chong, Krishna Kumar Singh, Yijun Li, Jingwan Lu
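The feature-gradient regularization described above can be sketched without autograd by using finite differences. Everything below is a hypothetical toy (a linear "discriminator head", finite-difference gradients, illustrative constants), not the patented training procedure.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def grad_norm_sq(f, x, eps=1e-5):
    # Finite-difference squared norm of df/dx (a stand-in for autograd).
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2.0 * eps))
    return sum(v * v for v in g)

def d_loss_with_feature_penalty(d_head, real_feats, fake_feats, gamma=10.0):
    # Real/fake classification loss plus a gradient penalty taken with
    # respect to the extracted features (not the input pixels).
    cls = (-math.log(sigmoid(d_head(real_feats)))
           - math.log(1.0 - sigmoid(d_head(fake_feats))))
    return cls + 0.5 * gamma * grad_norm_sq(d_head, real_feats)

# Toy linear "discriminator head" over a 3-dimensional feature vector.
d_head = lambda f: 0.5 * f[0] - 0.2 * f[1] + 0.1 * f[2]
loss = d_loss_with_feature_penalty(d_head, [1.0, 0.5, -0.3], [-0.8, 0.2, 0.4])
```

Penalizing the gradient in feature space rather than pixel space is the distinguishing choice the abstract highlights.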
  • Publication number: 20250023788
    Abstract: A data management method includes network management service producer network elements that send, to a network management service consumer network element, the moments at which measurement data of network objects is obtained for the first time after measurement jobs are activated. The network management service consumer network element then creates digital twins based on measurement data of the network objects at, and after, the latest moment among the moments at which the producer network elements first obtained the measurement data of the network objects.
    Type: Application
    Filed: September 30, 2024
    Publication date: January 16, 2025
    Inventors: Yexing Li, Yijun Yu
  • Patent number: 12175641
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly restoring degraded digital images utilizing a deep learning framework for repairing local defects, correcting global imperfections, and/or enhancing depicted faces. In particular, the disclosed systems can utilize a defect detection neural network to generate a segmentation map indicating locations of local defects within a digital image. In addition, the disclosed systems can utilize an inpainting algorithm to determine pixels for inpainting the local defects to reduce their appearance. In some embodiments, the disclosed systems utilize a global correction neural network to determine and repair global imperfections. Further, the disclosed systems can enhance one or more faces depicted within a digital image utilizing a face enhancement neural network as well.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: December 24, 2024
    Assignee: Adobe Inc.
    Inventors: Ionut Mironica, Yijun Li
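The three-stage restoration flow above (local defect repair, global correction, face enhancement) can be expressed as a simple pipeline. The function below is an illustrative sketch with toy stand-ins for the neural networks; none of the names come from the patent.

```python
def restore_image(image, detect_defects, inpaint, global_correct, enhance_faces):
    # Restoration pipeline from the abstract: repair local defects,
    # then correct global imperfections, then enhance faces.
    mask = detect_defects(image)      # segmentation map of local defects
    image = inpaint(image, mask)      # reduce the local defects' appearance
    image = global_correct(image)     # repair global imperfections
    return enhance_faces(image)       # enhance depicted faces

# Toy stand-ins operating on a 1-D "image" of pixel values.
detect = lambda img: [p < 0.0 for p in img]              # negatives = defects
fill = lambda img, m: [0.0 if d else p for p, d in zip(img, m)]
brighten = lambda img: [p + 0.1 for p in img]            # "global correction"
identity = lambda img: img                               # "face enhancement"

restored = restore_image([0.5, -1.0, 0.3], detect, fill, brighten, identity)
```

The point of the sketch is the staging: the defect mask gates the inpainting before the global and face-specific passes run.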
  • Patent number: 12162466
    Abstract: Methods, systems, apparatus, and articles of manufacture to control a vehicle based on signal blending are disclosed. An example apparatus disclosed herein includes programmable circuitry to at least determine a first yaw rate signal based on first signal data output by a yaw rate sensor of a vehicle, determine a second yaw rate signal based on second signal data output by a steering wheel angle sensor of the vehicle, determine a blended yaw rate signal based on the first yaw rate signal and the second yaw rate signal, and adjust a torque to be applied by a motor of the vehicle based on the blended yaw rate signal.
    Type: Grant
    Filed: November 30, 2023
    Date of Patent: December 10, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Ashrit Das, Joshua Guerra, Benjamin James Northrup, Lodewijk Maarten Erik Wijffels, Ziyu Ke, Ronald Loyd Chadwick, Yijun Li
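The blending and torque-adjustment steps above reduce to a weighted combination and a feedback correction. The sketch below is a minimal illustration under assumed names (`w`, `k_p`); the patent does not specify this particular blend or control law.

```python
def blend_yaw_rate(sensor_yaw, steer_yaw, w):
    # Blend the yaw-rate-sensor signal with the yaw rate derived from
    # the steering wheel angle; w is the sensor signal's weight (0..1).
    return w * sensor_yaw + (1.0 - w) * steer_yaw

def adjust_torque(base_torque, target_yaw, blended_yaw, k_p=2.0):
    # Simple proportional correction of motor torque from the
    # blended-yaw-rate error (an illustrative control law).
    return base_torque + k_p * (target_yaw - blended_yaw)

blended = blend_yaw_rate(1.0, 0.0, 0.7)       # 0.7
torque = adjust_torque(100.0, 0.5, blended)   # 100 + 2 * (0.5 - 0.7)
```

Blending lets the controller fall back smoothly between the two yaw-rate sources instead of switching hard.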
  • Patent number: 12159413
    Abstract: In implementations of systems for image inversion using multiple latent spaces, a computing device implements an inversion system to generate a segment map that segments an input digital image into a first image region and a second image region and assigns the first image region to a first latent space and the second image region to a second latent space that corresponds to a layer of a convolutional neural network. An inverted latent representation of the input digital image is computed using a binary mask for the second image region. The inversion system modifies the inverted latent representation of the input digital image using an edit direction vector that corresponds to a visual feature. An output digital image is generated that depicts a reconstruction of the input digital image having the visual feature based on the modified inverted latent representation of the input digital image.
    Type: Grant
    Filed: March 14, 2022
    Date of Patent: December 3, 2024
    Assignee: Adobe Inc.
    Inventors: Gaurav Parmar, Krishna Kumar Singh, Yijun Li, Richard Zhang, Jingwan Lu
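The mask-gated combination of two latent spaces and the edit-direction step above can be sketched element-wise. This is a hypothetical illustration on flat vectors; real latent codes live in structured GAN latent spaces.

```python
def combine_latents(latent_a, latent_b, mask):
    # Per-element combination: mask==1 takes the second latent space's
    # code (the masked image region), mask==0 keeps the first.
    return [b if m else a for a, b, m in zip(latent_a, latent_b, mask)]

def apply_edit_direction(latent, direction, strength=1.0):
    # Move the inverted representation along an edit direction vector
    # that corresponds to a visual feature.
    return [z + strength * d for z, d in zip(latent, direction)]

inverted = combine_latents([1.0, 2.0, 3.0], [10.0, 20.0, 30.0], [0, 1, 0])
edited = apply_edit_direction([1.0, 2.0], [0.5, -0.5], strength=2.0)
```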
  • Publication number: 20240338799
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing machine learning models to generate modified digital images. In particular, in some embodiments, the disclosed systems generate image editing directions between textual identifiers of two visual features utilizing a language prediction machine learning model and a text encoder. In some embodiments, the disclosed systems generate an inversion of a digital image utilizing a regularized inversion model to guide forward diffusion of the digital image. In some embodiments, the disclosed systems utilize cross-attention guidance to preserve structural details of a source digital image when generating a modified digital image with a diffusion neural network.
    Type: Application
    Filed: March 3, 2023
    Publication date: October 10, 2024
    Inventors: Yijun Li, Richard Zhang, Krishna Kumar Singh, Jingwan Lu, Gaurav Parmar, Jun-Yan Zhu
  • Publication number: 20240331236
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing machine learning models to generate modified digital images. In particular, in some embodiments, the disclosed systems generate image editing directions between textual identifiers of two visual features utilizing a language prediction machine learning model and a text encoder. In some embodiments, the disclosed systems generate an inversion of a digital image utilizing a regularized inversion model to guide forward diffusion of the digital image. In some embodiments, the disclosed systems utilize cross-attention guidance to preserve structural details of a source digital image when generating a modified digital image with a diffusion neural network.
    Type: Application
    Filed: March 3, 2023
    Publication date: October 3, 2024
    Inventors: Yijun Li, Richard Zhang, Krishna Kumar Singh, Jingwan Lu, Gaurav Parmar, Jun-Yan Zhu
  • Patent number: 12083730
    Abstract: A rotating extrusion rheometer includes a control monitoring mechanism, a melt extrusion mechanism, a rotating extrusion rheology machine head, a sensor, a drive chain wheel, a coupler and an electric motor. The control monitoring mechanism, the melt extrusion mechanism, and the rotating extrusion rheology machine head are sequentially connected. The rotating extrusion rheology machine head is formed by a connecting pipe (1), a flow dividing support (3), a lower machine neck (12), a machine head piece (15), an opening mold (17), an opening-mold driving chain wheel (20), a core bar (21) and a core-bar driving mechanism. The rheology measurement method comprises first collecting certain parameter values of the rheometer, and then obtaining the rheological behavior of the polymer melt during the rotating extrusion process by calculation with the derived formula.
    Type: Grant
    Filed: November 22, 2018
    Date of Patent: September 10, 2024
    Assignee: SICHUAN UNIVERSITY
    Inventors: Qi Wang, Min Nie, Lin Pi, Yijun Li, Shibing Bai
  • Publication number: 20240296607
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing machine learning models to generate modified digital images. In particular, in some embodiments, the disclosed systems generate image editing directions between textual identifiers of two visual features utilizing a language prediction machine learning model and a text encoder. In some embodiments, the disclosed systems generate an inversion of a digital image utilizing a regularized inversion model to guide forward diffusion of the digital image. In some embodiments, the disclosed systems utilize cross-attention guidance to preserve structural details of a source digital image when generating a modified digital image with a diffusion neural network.
    Type: Application
    Filed: March 3, 2023
    Publication date: September 5, 2024
    Inventors: Yijun Li, Richard Zhang, Krishna Kumar Singh, Jingwan Lu, Gaurav Parmar, Jun-Yan Zhu
  • Publication number: 20240290022
    Abstract: Avatar generation from an image is performed using semi-supervised machine learning. An image space model undergoes unsupervised training from images to generate latent image vectors responsive to image inputs. An avatar parameter space model undergoes unsupervised training from avatar parameter values for avatar parameters to generate latent avatar parameter vectors responsive to avatar parameter value inputs. A cross-modal mapping model undergoes supervised training on image-avatar parameter pair inputs corresponding to the latent image vectors and the latent avatar parameter vectors. The trained image space model generates a latent image vector from an image input. The trained cross-modal mapping model translates the latent image vector to a latent avatar parameter vector. The trained avatar parameter space model generates avatar parameter values from the latent avatar parameter vector. The latent avatar parameter vector can be used to render an avatar having features corresponding to the input image.
    Type: Application
    Filed: February 28, 2023
    Publication date: August 29, 2024
    Inventors: Yijun LI, Yannick HOLD-GEOFFROY, Manuel Rodriguez Ladron DE GUEVARA, Jose Ignacio Echevarria VALLESPI, Daichi ITO, Cameron Younger SMITH
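The inference path above (image model, cross-modal mapping, avatar parameter model) chains three trained components. The sketch below shows only that chaining, with toy lambdas standing in for the trained models; all names are assumptions.

```python
def image_to_avatar(image, image_encoder, cross_modal_map, param_decoder):
    # Pipeline from the abstract: image -> latent image vector ->
    # latent avatar-parameter vector -> avatar parameter values.
    z_image = image_encoder(image)        # trained image space model
    z_params = cross_modal_map(z_image)   # trained cross-modal mapping model
    return param_decoder(z_params)        # trained avatar parameter space model

# Toy stand-ins for the three trained models.
encoder = lambda img: [sum(img) / len(img)]
mapper = lambda z: [2.0 * z[0]]
decoder = lambda z: {"jaw_width": z[0]}

params = image_to_avatar([1.0, 2.0, 3.0], encoder, mapper, decoder)
```

Only the middle mapping needs supervised image-avatar pairs; the two ends are trained without labels, which is the "semi-supervised" part of the abstract.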
  • Publication number: 20240233318
    Abstract: An image generation system implements a multi-branch GAN to generate images that each express visually similar content in a different modality. A generator portion of the multi-branch GAN includes multiple branches that are each tasked with generating one of the different modalities. A discriminator portion of the multi-branch GAN includes multiple fidelity discriminators, one for each of the generator branches, and a consistency discriminator, which constrains the outputs generated by the different generator branches to appear visually similar to one another. During training, outputs from each of the fidelity discriminators and the consistency discriminator are used to compute a non-saturating GAN loss. The non-saturating GAN loss is used to refine parameters of the multi-branch GAN during training until model convergence. The trained multi-branch GAN generates multiple images from a single input, where each of the multiple images depicts visually similar content expressed in a different modality.
    Type: Application
    Filed: October 21, 2022
    Publication date: July 11, 2024
    Applicant: Adobe Inc.
    Inventors: Yijun Li, Zhixin Shu, Zhen Zhu, Krishna Kumar Singh
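The non-saturating GAN loss that the abstract computes from the fidelity and consistency discriminators has a standard closed form, -log D(G(z)). The sketch below shows that form summed over discriminator logits; it is a generic illustration, not the patented training code.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def non_saturating_g_loss(disc_logits):
    # Non-saturating generator loss, -log D(G(z)), summed over the
    # per-branch fidelity discriminators and the consistency
    # discriminator.
    return sum(-math.log(sigmoid(l)) for l in disc_logits)
```

Unlike the original minimax loss, this form keeps gradients strong when the discriminators confidently reject generated samples.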
  • Publication number: 20240221252
    Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure identify an original image depicting a face, identify a scribble image including a mask that indicates a portion of the original image for adding makeup to the face, and generate a target image depicting the face using a machine learning model based on the original image and the scribble image, where the target image includes the makeup in the portion indicated by the scribble image.
    Type: Application
    Filed: January 4, 2023
    Publication date: July 4, 2024
    Inventors: Abhishek Lalwani, Xiaoyang Li, Yijun Li
  • Publication number: 20240169488
    Abstract: Systems and methods for synthesizing images with increased high-frequency detail are described. Embodiments are configured to identify an input image including a noise level and encode the input image to obtain image features. A diffusion model reduces a resolution of the image features at an intermediate stage of the model using a wavelet transform to obtain reduced image features at a reduced resolution, and generates an output image based on the reduced image features using the diffusion model. In some cases, the output image comprises a version of the input image that has a reduced noise level compared to the noise level of the input image.
    Type: Application
    Filed: November 17, 2022
    Publication date: May 23, 2024
    Inventors: Nan Liu, Yijun Li, Michaël Yanis Gharbi, Jingwan Lu
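The wavelet-based resolution reduction described above can be illustrated with a 1-D Haar transform, which halves resolution while keeping the information needed to invert exactly. This is a minimal sketch of the transform itself, not the diffusion model's actual architecture.

```python
def haar_downsample(x):
    # 1-D Haar analysis: average adjacent pairs (coarse band) and take
    # their half-differences (detail band), halving the resolution.
    approx = [(x[i] + x[i + 1]) / 2.0 for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / 2.0 for i in range(0, len(x) - 1, 2)]
    return approx, detail

def haar_upsample(approx, detail):
    # Exact inverse: interleave sums and differences of the two bands.
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

coarse, fine = haar_downsample([1.0, 2.0, 3.0, 4.0])
```

Because the detail band is retained, the intermediate downsampling loses no information, unlike plain pooling.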
  • Publication number: 20240135672
    Abstract: An image generation system implements a multi-branch GAN to generate images that each express visually similar content in a different modality. A generator portion of the multi-branch GAN includes multiple branches that are each tasked with generating one of the different modalities. A discriminator portion of the multi-branch GAN includes multiple fidelity discriminators, one for each of the generator branches, and a consistency discriminator, which constrains the outputs generated by the different generator branches to appear visually similar to one another. During training, outputs from each of the fidelity discriminators and the consistency discriminator are used to compute a non-saturating GAN loss. The non-saturating GAN loss is used to refine parameters of the multi-branch GAN during training until model convergence. The trained multi-branch GAN generates multiple images from a single input, where each of the multiple images depicts visually similar content expressed in a different modality.
    Type: Application
    Filed: October 20, 2022
    Publication date: April 25, 2024
    Applicant: Adobe Inc.
    Inventors: Yijun Li, Zhixin Shu, Zhen Zhu, Krishna Kumar Singh
  • Publication number: 20240135572
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
    Type: Application
    Filed: March 27, 2023
    Publication date: April 25, 2024
    Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz
  • Publication number: 20240135511
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
    Type: Application
    Filed: March 27, 2023
    Publication date: April 25, 2024
    Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz, Qing Liu, Jianming Zhang, Zhe Lin
  • Publication number: 20240135512
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
    Type: Application
    Filed: March 27, 2023
    Publication date: April 25, 2024
    Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz, Qing Liu, Jianming Zhang, Zhe Lin