Patents by Inventor Qiqi Hou

Qiqi Hou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250166236
    Abstract: Certain aspects of the present disclosure provide techniques for generating an output image based on a text prompt. A method may include receiving the text prompt; providing a user interface comprising one or more input elements associated with one or more words of the text prompt; receiving input corresponding to at least one of the one or more input elements, the input indicating a semantic importance for each of at least one of the one or more words associated with the at least one of the one or more input elements; and generating the output image based on the text prompt and the input.
    Type: Application
    Filed: November 16, 2023
    Publication date: May 22, 2025
    Inventors: Kambiz AZARIAN YAZDI, Fatih Murat PORIKLI, Qiqi HOU, Debasmit DAS
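    The abstract does not specify how the per-word importance input conditions generation; one common approach in text-to-image pipelines is to scale each word's text-encoder embedding by its user-assigned weight before it reaches the model. A minimal NumPy sketch of that idea (function and variable names are hypothetical, not from the patent):

```python
import numpy as np

def weight_prompt_embeddings(token_embeddings, importance):
    """Scale each token's embedding by its user-assigned semantic importance.

    token_embeddings: (num_tokens, dim) array of text-encoder outputs.
    importance: one weight per token from the UI input elements (1.0 = unchanged).
    """
    weights = np.asarray(importance, dtype=float).reshape(-1, 1)
    return token_embeddings * weights

# Emphasize the middle word of a three-word prompt twice as strongly.
emb = np.ones((3, 4))
cond = weight_prompt_embeddings(emb, [1.0, 2.0, 1.0])
```

    The weighted embeddings would then replace the unweighted ones as the conditioning input to the image generator.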
  • Publication number: 20250131277
    Abstract: A method for training a control neural network includes initializing a baseline diffusion model for training the control neural network, each stage of a control neural network training pipeline corresponding to an element of the baseline diffusion model. The method also includes training the control neural network in a stage-wise manner, with each stage of the training pipeline receiving an input from the previous stage and from the corresponding element of the diffusion model.
    Type: Application
    Filed: October 23, 2023
    Publication date: April 24, 2025
    Inventors: Risheek GARREPALLI, Shubhankar Mangesh BORSE, Jisoo JEONG, Qiqi HOU, Shreya KADAMBI, Munawar HAYAT, Fatih Murat PORIKLI
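    Stage-wise training as described, where each control stage consumes the previous stage's output together with the matching element of the frozen baseline diffusion model, can be sketched as a simple loop (all names are hypothetical; a real pipeline would operate on network modules rather than strings):

```python
def train_stagewise(control_stages, diffusion_elements, initial_input, train_stage):
    """Train control stages in order; stage i receives the previous stage's
    output and the corresponding (frozen) diffusion-model element."""
    prev = initial_input
    for stage, element in zip(control_stages, diffusion_elements):
        prev = train_stage(stage, prev, element)
    return prev

# Toy stand-in: "training" a stage just records which inputs it saw.
log = []
def toy_train(stage, prev, element):
    log.append((stage, prev, element))
    return f"{stage}_out"

final = train_stagewise(["s1", "s2"], ["e1", "e2"], "x0", toy_train)
```

    The loop makes the dependency explicit: stage 2 cannot be trained until stage 1 has produced its output.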
  • Publication number: 20250131276
    Abstract: A method for training a diffusion model includes randomly selecting, for each iteration of a step distillation training process, a teacher model of a group of teacher models. The method also includes applying, at each iteration, a clipped input space within step distillation of the randomly selected teacher model. The method further includes updating, at each iteration, parameters of the diffusion model based on guidance from the randomly selected teacher model.
    Type: Application
    Filed: October 23, 2023
    Publication date: April 24, 2025
    Inventors: Risheek GARREPALLI, Shubhankar Mangesh BORSE, Jisoo JEONG, Qiqi HOU, Shreya KADAMBI, Munawar HAYAT, Fatih Murat PORIKLI
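    The per-iteration recipe the abstract describes (randomly pick one teacher from a pool, clip the input space, then update the student from that teacher's guidance) can be sketched as follows; the models here are toy callables and every name is hypothetical:

```python
import random

def distillation_step(student_update, teachers, batch, clip_range=(-1.0, 1.0),
                      rng=random):
    """One iteration: randomly select a teacher, clip the input space,
    and update the student from the teacher's guidance."""
    teacher = rng.choice(teachers)                    # random teacher per iteration
    lo, hi = clip_range
    clipped = [min(hi, max(lo, x)) for x in batch]    # clipped input space
    target = teacher(clipped)                         # teacher guidance
    return student_update(clipped, target)

# Toy teachers that scale their input; the student update just echoes the target.
teachers = [lambda xs: [2 * v for v in xs], lambda xs: [3 * v for v in xs]]
out = distillation_step(lambda x, t: t, teachers, [0.5, 5.0],
                        rng=random.Random(0))
```

    Whichever teacher is drawn, the guidance is computed on the clipped batch (here `[0.5, 1.0]`), so the student never sees targets from outside the clipped input space.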
  • Publication number: 20250131606
    Abstract: A processor-implemented method includes receiving a text-semantic input at a first stage of a neural network, the first stage including a first convolutional block and no attention layers. The method receives, at a second stage, a first output from the first stage. The second stage comprises a first downsampling block including a first attention layer and a second convolutional block. The method receives, at a third stage, a second output from the second stage. The third stage comprises a first upsampling block including a second attention layer and a first set of convolutional blocks. The method receives, at a fourth stage, the first output from the first stage and a third output from the third stage. The fourth stage comprises a second upsampling block including no attention layers and a second set of convolutional blocks. The method generates an image at the fourth stage, based on the text-semantic input.
    Type: Application
    Filed: October 23, 2023
    Publication date: April 24, 2025
    Inventors: Shubhankar Mangesh BORSE, Risheek GARREPALLI, Qiqi HOU, Jisoo JEONG, Shreya KADAMBI, Munawar HAYAT, Fatih Murat PORIKLI
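    The stage layout the abstract walks through (attention only in the two middle stages, with the first stage's output fed forward into the fourth) can be summarized as plain data, independent of any framework; block names below are descriptive, not from the patent:

```python
# Four-stage layout per the abstract: attention only in stages 2 and 3.
stages = [
    {"stage": 1, "blocks": ["conv"],                     "attention": False},
    {"stage": 2, "blocks": ["downsample", "conv"],       "attention": True},
    {"stage": 3, "blocks": ["upsample", "conv", "conv"], "attention": True},
    {"stage": 4, "blocks": ["upsample", "conv", "conv"], "attention": False},
]
# Stage 4 consumes the outputs of stages 1 and 3 (a skip from stage 1).
stage4_inputs = [1, 3]

attention_stages = [s["stage"] for s in stages if s["attention"]]
```

    Keeping attention out of the first and last stages, where feature maps are largest, is a common way to cut the cost of attention in image-generation networks.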
  • Publication number: 20250131325
    Abstract: A method for training a diffusion model includes compressing the diffusion model by removing at least one of: one or more model parameters or one or more giga multiply-accumulate operations (GMACs). The method also includes performing guidance conditioning to train the compressed diffusion model, the guidance conditioning combining a conditional output and an unconditional output from respective teacher models. The method further includes performing, after the guidance conditioning, step distillation on the compressed diffusion model.
    Type: Application
    Filed: October 23, 2023
    Publication date: April 24, 2025
    Inventors: Risheek GARREPALLI, Shubhankar Mangesh BORSE, Jisoo JEONG, Qiqi HOU, Shreya KADAMBI, Munawar HAYAT, Fatih Murat PORIKLI
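    Combining a conditional and an unconditional teacher output is the classifier-free-guidance recipe; assuming that is what the abstract's "guidance conditioning" refers to, the combined distillation target looks like this (a sketch, not the patent's implementation):

```python
import numpy as np

def guided_target(uncond_out, cond_out, guidance_scale=7.5):
    """Classifier-free-guidance combination of the two teacher outputs:
    uncond + s * (cond - uncond). s = 1 recovers the conditional output."""
    uncond = np.asarray(uncond_out, dtype=float)
    cond = np.asarray(cond_out, dtype=float)
    return uncond + guidance_scale * (cond - uncond)

target = guided_target([0.0, 1.0], [1.0, 1.0], guidance_scale=2.0)
```

    Distilling against this combined target lets the compressed student reproduce guided outputs in a single pass, without running conditional and unconditional branches at inference time.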
  • Publication number: 20240368618
    Abstract: The present invention relates to the field of biotechnology, and specifically to a PPO polypeptide tolerant to PPO-inhibiting herbicides and uses thereof. The polypeptide contains the motif "LLLNYI", in which the leucine (L) at position 3 is substituted with any other amino acid, or the tyrosine (Y) at position 5 is substituted with any other amino acid. The polypeptide can be used in plants, including commercial crops, to greatly improve resistance to PPO-inhibiting herbicides, exploiting its herbicide-resistance characteristics and herbicide selectivity to control weed growth economically.
    Type: Application
    Filed: March 25, 2022
    Publication date: November 7, 2024
    Inventors: Sudong MO, Guizhi LIU, Lei WANG, Qiqi HOU, Bo CHEN
  • Publication number: 20240364925
    Abstract: Systems and techniques are described herein for processing video data. For example, a machine-learning based stereo video coding system can obtain video data including at least a right-view image of a right view of a scene and a left-view image of a left view of the scene. The machine-learning based stereo video coding system can compress the right-view image and the left-view image in parallel to generate a latent representation of the right-view image and the left-view image. The right-view image and the left-view image can be compressed in parallel based on inter-view information between the right-view image and the left-view image, determined using one or more parallel autoencoders.
    Type: Application
    Filed: April 15, 2024
    Publication date: October 31, 2024
    Inventors: Hoang Cong Minh LE, Qiqi HOU, Farzad FARHADZADEH, Amir SAID, Auke Joris WIGGERS, Guillaume Konrad SAUTIERE, Reza POURREZA
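    The key idea, compressing both views in parallel while letting each branch use inter-view information from the other, can be sketched with toy encoders; the cross-mixing term below stands in for whatever the patent's parallel autoencoders actually exchange, and all names are hypothetical:

```python
import numpy as np

def compress_stereo(left, right, encode, cross_weight=0.1):
    """Encode both views in parallel; each latent then mixes in the other
    view's features as a stand-in for inter-view information exchange."""
    z_left, z_right = encode(left), encode(right)
    return (z_left + cross_weight * z_right,
            z_right + cross_weight * z_left)

encode = lambda x: np.asarray(x, dtype=float) * 0.5   # toy "encoder"
lat_l, lat_r = compress_stereo([2.0], [4.0], encode)
```

    Because the left and right views of a scene are highly redundant, sharing information between the branches lets the pair be coded more compactly than compressing each view independently.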
  • Publication number: 20240043859
    Abstract: The present invention belongs to the field of biotechnology and in particular relates to a protein and gene that can confer resistance to hormone herbicides and ACCase-inhibitor herbicides, to uses thereof, and to herbicide-tolerant plants, seeds, cells, and plant parts, together with methods of applying them.
    Type: Application
    Filed: August 4, 2021
    Publication date: February 8, 2024
    Inventors: Huarong LI, Wei QI, Guizhi LIU, Qiqi HOU
  • Patent number: 10733699
    Abstract: A face replacement system for replacing a target face with a source face can include a facial landmark determination model having a cascade multichannel convolutional neural network (CMC-CNN) to process both the target and the source face. A face warping module is able to warp the source face using determined facial landmarks that match the determined facial landmarks of the target face, and a face selection module is able to select a facial region of interest in the source face. An image blending module is used to blend the target face with the selected source region of interest.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: August 4, 2020
    Assignee: DEEP NORTH, INC.
    Inventors: Jinjun Wang, Qiqi Hou
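    The final step of the pipeline, blending the selected source region onto the target face, can be sketched as a masked alpha blend (landmark detection, warping, and region selection are omitted, and the names are hypothetical; the patent does not specify the blending operator):

```python
import numpy as np

def blend_faces(target_face, source_face, roi_mask, alpha=0.8):
    """Blend the selected source region of interest onto the target face.
    roi_mask is 1.0 inside the selected source region, 0.0 outside."""
    m = np.asarray(roi_mask, dtype=float) * alpha
    return target_face * (1.0 - m) + source_face * m

target = np.zeros((2, 2))
source = np.ones((2, 2))
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
blended = blend_faces(target, source, mask, alpha=0.5)
```

    In practice the mask edges would be feathered so the source region transitions smoothly into the surrounding target face.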
  • Publication number: 20190122329
    Abstract: A face replacement system for replacing a target face with a source face can include a facial landmark determination model having a cascade multichannel convolutional neural network (CMC-CNN) to process both the target and the source face. A face warping module is able to warp the source face using determined facial landmarks that match the determined facial landmarks of the target face, and a face selection module is able to select a facial region of interest in the source face. An image blending module is used to blend the target face with the selected source region of interest.
    Type: Application
    Filed: October 24, 2017
    Publication date: April 25, 2019
    Inventors: Jinjun Wang, Qiqi Hou