Patents by Inventor Can Qin

Can Qin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250057990
    Abstract: A tissue-specific manganese based magnetic resonance imaging (MRI) agent includes the formula: wherein T includes at least one cell specific or tissue specific targeting moiety; C includes at least one pyclen based chelating agent complexed with a Mn ion, and L includes at least one optional linker that covalently links the at least one targeting moiety to the at least one chelating agent.
    Type: Application
    Filed: December 20, 2022
    Publication date: February 20, 2025
    Inventors: Zheng-Rong Lu, Jing-Can Qin, Ryan Hall
  • Publication number: 20240386623
    Abstract: Embodiments described herein provide a method of image generation. The method includes a fixed diffusion model and a trainable diffusion model. The fixed diffusion model may be pretrained on a large training corpus. The trainable diffusion model may be used to control the image generation of the fixed diffusion model by modifying internal representations of the fixed diffusion model. A task instruction may be provided in addition to a text prompt, and the task instruction, together with the visual conditions, may guide the trainable diffusion model. The visual conditions may be adapted according to the task instruction. During training, a fixed number of task instructions may be used. At inference, unseen task instructions may be handled by combining convolutional kernels of the visual condition adapter.
    Type: Application
    Filed: September 29, 2023
    Publication date: November 21, 2024
    Inventors: Ning Yu, Can Qin, Shu Zhang, Yihao Feng, Xinyi Yang, Ran Xu
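
    The abstract above describes a frozen pretrained diffusion model whose internal representations are modified by a trainable condition adapter. A minimal NumPy sketch of that control pattern is below; all names, dimensions, and the zero-initialized projection (a common trick so the adapter starts as a no-op) are illustrative assumptions, not details from the patent.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fixed_block(x, W):
        # Stand-in for one frozen block of the pretrained diffusion model.
        return np.maximum(x @ W, 0.0)

    # Trainable adapter: encodes the visual condition, then applies a
    # zero-initialized projection, so before any training the adapter
    # contributes nothing and the frozen model's behavior is preserved.
    W_fixed = rng.standard_normal((16, 16))   # frozen weights
    W_enc = rng.standard_normal((16, 16))     # trainable encoder weights
    W_zero = np.zeros((16, 16))               # trainable, zero-initialized

    def adapter(cond):
        return np.maximum(cond @ W_enc, 0.0) @ W_zero

    x = rng.standard_normal((2, 16))      # noisy latent features
    cond = rng.standard_normal((2, 16))   # visual condition (e.g. edge-map features)

    # The adapter's output is added to the fixed block's internal
    # representation, steering generation without touching frozen weights.
    out = fixed_block(x, W_fixed) + adapter(cond)
    ```

    At initialization `out` equals the frozen model's output exactly; training `W_enc` and `W_zero` then gradually injects the visual condition.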
  • Publication number: 20240185035
    Abstract: Embodiments described herein provide a mechanism for replacing existing text encoders in text-to-image generation models with more powerful pre-trained language models. Specifically, a translation network is trained to map features from the pre-trained language model output into the space of the target text encoder. The training preserves the rich structure of the pre-trained language model while allowing it to operate within the text-to-image generation model. The resulting modularized text-to-image model receives a prompt and generates an image representing the features contained in the prompt.
    Type: Application
    Filed: January 31, 2023
    Publication date: June 6, 2024
    Inventors: Ning Yu, Can Qin, Chen Xing, Shu Zhang, Stefano Ermon, Caiming Xiong, Ran Xu
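
    The translation network described above maps language-model features into the embedding space the generator's original text encoder produced. A minimal sketch follows; the dimensions, the two-layer MLP form, and all variable names are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical dimensions: the pre-trained language model emits 32-d
    # token features; the generator's original text encoder emitted 16-d.
    LM_DIM, ENC_DIM = 32, 16

    W1 = rng.standard_normal((LM_DIM, 64)) * 0.1
    W2 = rng.standard_normal((64, ENC_DIM)) * 0.1

    def translate(lm_features):
        # Small trainable MLP mapping language-model features into the
        # target text encoder's space, so the rest of the generation
        # pipeline can consume them unchanged.
        h = np.maximum(lm_features @ W1, 0.0)
        return h @ W2

    prompt_features = rng.standard_normal((1, 7, LM_DIM))  # 7 prompt tokens
    cond = translate(prompt_features)  # drop-in replacement for encoder output
    ```

    Only the translation network is trained; the language model and the downstream image generator stay fixed, which is what makes the swap modular.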
  • Publication number: 20240152771
    Abstract: Tabular data machine-learning model techniques and systems are described. In one example, common-sense knowledge is infused into training data through use of a knowledge graph to provide external knowledge to supplement a tabular data corpus. In another example, a dual-path architecture is employed to configure an adapter module. In an implementation, the adapter module is added as part of a pre-trained machine-learning model for general purpose tabular models. Specifically, dual-path adapters are trained using the knowledge graphs and semantically augmented training data. A path-wise attention layer is applied to fuse a cross-modality representation of the two paths for a final result.
    Type: Application
    Filed: November 3, 2022
    Publication date: May 9, 2024
    Applicant: Adobe Inc.
    Inventors: Can Qin, Sungchul Kim, Tong Yu, Ryan A. Rossi, Handong Zhao
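
    The path-wise attention layer described in the last abstract fuses two path representations into one. A minimal sketch of such a fusion is below; the path contents, dimensions, and scoring scheme are illustrative assumptions, not details from the patent.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    DIM = 16

    # Hypothetical path outputs: one path encodes the raw tabular input,
    # the other encodes knowledge-graph-augmented features.
    table_path = rng.standard_normal((4, DIM))  # 4 table rows
    kg_path = rng.standard_normal((4, DIM))

    def pathwise_attention(paths, w):
        # Score each path per row, softmax over the path axis, then
        # fuse the representations as an attention-weighted sum.
        stacked = np.stack(paths, axis=1)   # (rows, n_paths, dim)
        scores = stacked @ w                # (rows, n_paths)
        weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
        return (weights[..., None] * stacked).sum(axis=1)

    w = rng.standard_normal(DIM)  # trainable scoring vector
    fused = pathwise_attention([table_path, kg_path], w)
    ```

    Because the softmax weights sum to one, each fused row is a convex combination of the two path representations, letting the model lean on the knowledge-graph path only where it helps.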