Patents by Inventor Zhe Lin

Zhe Lin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12045963
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems detect, via a graphical user interface of a client device, a user selection of an object portrayed within a digital image. The disclosed systems determine, in response to detecting the user selection of the object, a relationship between the object and an additional object portrayed within the digital image. The disclosed systems receive one or more user interactions for modifying the object. The disclosed systems modify the digital image in response to the one or more user interactions by modifying the object and the additional object based on the relationship between the object and the additional object.
    Type: Grant
    Filed: November 23, 2022
    Date of Patent: July 23, 2024
    Assignee: Adobe Inc.
    Inventors: Scott Cohen, Zhe Lin, Zhihong Ding, Luis Figueroa, Kushal Kafle
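
The relationship-aware behavior described in the abstract above can be pictured with a small data-structure sketch: an edit applied to a selected object is propagated to objects linked to it. This is only an illustration; the class names and relationship encoding are assumed and are not taken from the patent.

    from dataclasses import dataclass, field

    @dataclass
    class SceneObject:
        name: str
        x: float = 0.0
        y: float = 0.0

    @dataclass
    class Scene:
        objects: dict = field(default_factory=dict)
        # Maps an object to related objects that should follow its edits,
        # e.g. a hat resting on the person wearing it.
        relationships: dict = field(default_factory=dict)

        def move(self, name, dx, dy):
            """Move the selected object and every object related to it."""
            for target in [name, *self.relationships.get(name, [])]:
                obj = self.objects[target]
                obj.x += dx
                obj.y += dy

    scene = Scene(
        objects={"person": SceneObject("person"), "hat": SceneObject("hat")},
        relationships={"person": ["hat"]},
    )
    scene.move("person", dx=10, dy=0)
    print(scene.objects["hat"].x)  # 10.0 -- the hat follows the person
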
  • Publication number: 20240229097
    Abstract: A device for detecting cellular mechanical force is provided. The device includes a base and an array of micropillars located on the base. These micropillars are deformable in response to cellular mechanical force, and one or more of the micropillars have a light-reflective layer. Furthermore, a detection system, a detection method, and a manufacturing method are provided. The solution offers advantages such as high throughput, low cost, single-cell resolution, real-time monitoring, high sensitivity, and the capability of simulating a cell microenvironment as well as the components and morphologies of extracellular matrices, and it can suit a wide range of technical demands.
    Type: Application
    Filed: March 22, 2024
    Publication date: July 11, 2024
    Inventor: Zhe LIN
  • Patent number: 12019671
    Abstract: Digital content search techniques are described. In one example, the techniques are incorporated as part of a multi-head self-attention module of a transformer using machine learning. A localized self-attention module, for instance, is incorporated as part of the multi-head self-attention module that applies local constraints to the sequence. This can be performed in a variety of ways. In a first instance, a model-based local encoder is used, examples of which include a fixed-depth recurrent neural network (RNN) and a convolutional network. In a second instance, a masking-based local encoder is used, examples of which include use of a fixed window, Gaussian initialization, and an adaptive predictor.
    Type: Grant
    Filed: October 14, 2021
    Date of Patent: June 25, 2024
    Assignee: Adobe Inc.
    Inventors: Handong Zhao, Zhankui He, Zhaowen Wang, Ajinkya Gorakhnath Kale, Zhe Lin
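
The masking-based local encoder mentioned in the abstract above can be approximated by restricting attention to a fixed window around each position. The PyTorch sketch below (single head, no learned projections, arbitrary window size) illustrates the idea only and is not the patented module.

    import torch
    import torch.nn.functional as F

    def local_self_attention(x, window=2):
        """Self-attention over x of shape (batch, length, dim), masked so each
        position attends only within a fixed local window."""
        b, n, d = x.shape
        scores = torch.matmul(x, x.transpose(1, 2)) / d ** 0.5   # (b, n, n)
        pos = torch.arange(n)
        mask = (pos[None, :] - pos[:, None]).abs() > window      # True = blocked
        scores = scores.masked_fill(mask, float("-inf"))
        return torch.matmul(F.softmax(scores, dim=-1), x)

    out = local_self_attention(torch.randn(2, 16, 32))
    print(out.shape)  # torch.Size([2, 16, 32])
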
  • Patent number: 12020414
    Abstract: The present disclosure relates to an object selection system that accurately detects and automatically selects target instances of user-requested objects (e.g., a query object instance) in a digital image. In one or more embodiments, the object selection system can analyze one or more user inputs to determine an optimal object attribute detection model from multiple specialized and generalized object attribute models. Additionally, the object selection system can utilize the selected object attribute model to detect and select one or more target instances of a query object in an image, where the image includes multiple instances of the query object.
    Type: Grant
    Filed: August 15, 2022
    Date of Patent: June 25, 2024
    Assignee: Adobe Inc.
    Inventors: Scott Cohen, Zhe Lin, Mingyang Ling
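
A minimal sketch of the model-routing idea in the abstract above: inspect the query for an attribute cue and dispatch to a specialized detector, falling back to a generalized one. The cue keywords and model names here are hypothetical.

    def select_attribute_model(query, specialized_models, general_model):
        """Return a specialized attribute detector whose cue appears in the
        query; otherwise fall back to the generalized model."""
        for cue, model in specialized_models.items():
            if cue in query.lower():
                return model
        return general_model

    specialized = {"left": "position_model", "red": "color_model"}
    print(select_attribute_model("the red car", specialized, "general_model"))
    # -> color_model
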
  • Patent number: 12008464
    Abstract: Approaches are described for determining facial landmarks in images. An input image is provided to at least one trained neural network that determines a face region (e.g., bounding box of a face) of the input image and initial facial landmark locations corresponding to the face region. The initial facial landmark locations are provided to a 3D face mapper that maps the initial facial landmark locations to a 3D face model. A set of facial landmark locations are determined from the 3D face model. The set of facial landmark locations are provided to a landmark location adjuster that adjusts positions of the set of facial landmark locations based on the input image. The input image is presented on a user device using the adjusted set of facial landmark locations.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: June 11, 2024
    Assignee: Adobe Inc.
    Inventors: Haoxiang Li, Zhe Lin, Jonathan Brandt, Xiaohui Shen
  • Patent number: 12008698
    Abstract: A non-transitory computer-readable medium includes program code that is stored thereon. The program code is executable by one or more processing devices for performing operations including generating, using a model, a learned image representation of a target image. The operations further include generating, using a text embedding model, a text embedding of a text query. The text embedding and the learned image representation of the target image are in a same embedding space. Additionally, the operations include convolving the learned image representation of the target image with the text embedding of the text query. Moreover, the operations include generating an object-segmented image based on the convolving of the learned image representation of the target image with the text embedding.
    Type: Grant
    Filed: March 3, 2023
    Date of Patent: June 11, 2024
    Assignee: Adobe Inc.
    Inventors: Midhun Harikumar, Pranav Aggarwal, Baldo Faieta, Ajinkya Kale, Zhe Lin
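
Because the text embedding and the learned image representation share an embedding space, the convolution step described above amounts to using the text vector as a 1x1 kernel over the image features. A rough PyTorch sketch, with random tensors standing in for the learned representations:

    import torch
    import torch.nn.functional as F

    def text_conditioned_heatmap(image_features, text_embedding):
        """Convolve (batch, dim, H, W) image features with a (dim,) text
        embedding; high responses mark pixels matching the query."""
        kernel = text_embedding.view(1, -1, 1, 1)       # 1x1 convolution kernel
        return torch.sigmoid(F.conv2d(image_features, kernel))

    feats = torch.randn(1, 64, 32, 32)   # stand-in learned image representation
    query = torch.randn(64)              # stand-in text embedding
    print(text_conditioned_heatmap(feats, query).shape)  # torch.Size([1, 1, 32, 32])
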
  • Patent number: 12008739
    Abstract: The present disclosure relates to systems and methods for automatically processing images based on a user request. In some examples, a request is divided into a retouching command (e.g., a global edit) and an inpainting command (e.g., a local edit). A retouching mask and an inpainting mask are generated to indicate areas where the edits will be applied. A photo-request attention and a multi-modal modulation process are applied to features representing the image, and a modified image that incorporates the user's request is generated using the modified features.
    Type: Grant
    Filed: October 27, 2021
    Date of Patent: June 11, 2024
    Assignee: Adobe Inc.
    Inventors: Ning Xu, Zhe Lin, Franck Dernoncourt
  • Publication number: 20240185393
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for accurately, efficiently, and flexibly generating harmonized digital images utilizing a self-supervised image harmonization neural network. In particular, the disclosed systems can implement, and learn parameters for, a self-supervised image harmonization neural network to extract content from one digital image (disentangled from its appearance) and appearance from another digital image (disentangled from its content). For example, the disclosed systems can utilize a dual data augmentation method to generate diverse triplets for parameter learning (including input digital images, reference digital images, and pseudo ground truth digital images), via cropping a digital image with perturbations using three-dimensional color lookup tables (“LUTs”).
    Type: Application
    Filed: February 13, 2024
    Publication date: June 6, 2024
    Inventors: He Zhang, Yifan Jiang, Yilin Wang, Jianming Zhang, Kalyan Sunkavalli, Sarah Kong, Su Chen, Sohrab Amirghodsi, Zhe Lin
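
One ingredient of the dual data augmentation described above is perturbing appearance with a 3D color lookup table while keeping content fixed. The NumPy sketch below jitters an identity LUT and applies it by nearest-cell lookup; the bin count and noise strength are arbitrary, and the triplet construction is only gestured at in the closing comment.

    import numpy as np

    def apply_random_lut(image, bins=8, strength=0.1, rng=None):
        """Shift an image's appearance with a randomly perturbed 3D color LUT."""
        rng = np.random.default_rng() if rng is None else rng
        grid = np.linspace(0.0, 1.0, bins)
        r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
        lut = np.stack([r, g, b], axis=-1)                 # identity LUT, (bins, bins, bins, 3)
        lut = np.clip(lut + rng.normal(0.0, strength, lut.shape), 0.0, 1.0)
        idx = np.clip((image * (bins - 1)).round().astype(int), 0, bins - 1)
        return lut[idx[..., 0], idx[..., 1], idx[..., 2]]  # nearest-cell lookup

    crop = np.random.rand(64, 64, 3)        # stand-in for a cropped digital image
    shifted = apply_random_lut(crop)        # same content, different appearance
    # A harmonization triplet could pair such crops as (input, reference,
    # pseudo ground truth); the exact construction is beyond this sketch.
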
  • Publication number: 20240169624
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems generate, utilizing a segmentation neural network, an object mask for each object of a plurality of objects of a digital image. The disclosed systems detect a first user interaction with an object in the digital image displayed via a graphical user interface. The disclosed systems surface, via the graphical user interface, the object mask for the object in response to the first user interaction. The disclosed systems perform an object-aware modification of the digital image in response to a second user interaction with the object mask for the object.
    Type: Application
    Filed: November 23, 2022
    Publication date: May 23, 2024
    Inventors: Jonathan Brandt, Scott Cohen, Zhe Lin, Zhihong Ding, Darshan Prasad, Matthew Joss, Celso Gomes, Jianming Zhang, Olena Soroka, Klaas Stoeckmann, Michael Zimmermann, Thomas Muehrke
  • Publication number: 20240169631
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing to remove a shadow for an object. For instance, in one or more embodiments, the disclosed systems receive a digital image depicting a scene. The disclosed systems access a shadow mask of the shadow in a first location. Further, the disclosed systems generate the modified digital image without the shadow by generating a fill for the first location that preserves a visible location of the first location. Moreover, the disclosed systems generate the digital image without the shadow for the object by combining the fill with the digital image.
    Type: Application
    Filed: December 7, 2023
    Publication date: May 23, 2024
    Inventors: Soo Ye Kim, Zhe Lin, Scott Cohen, Jianming Zhang, Luis Figueroa, Zhihong Ding
  • Publication number: 20240169623
    Abstract: Systems and methods for multi-modal image generation are provided. One or more aspects of the systems and methods include obtaining a text prompt and layout information indicating a target location for an element of the text prompt within an image to be generated and computing a text feature map including a plurality of values corresponding to the element of the text prompt at pixel locations corresponding to the target location. Then the image is generated based on the text feature map using a diffusion model. The generated image includes the element of the text prompt at the target location.
    Type: Application
    Filed: November 22, 2022
    Publication date: May 23, 2024
    Inventors: Yu Zeng, Zhe Lin, Jianming Zhang, Qing Liu, Jason Wen Yong Kuen, John Philip Collomosse
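
The text feature map described above can be built by writing the element's embedding into the pixels of its target box and zeros elsewhere; the map then conditions the diffusion model. A small NumPy illustration with assumed shapes:

    import numpy as np

    def layout_text_feature_map(text_embedding, box, height, width):
        """Place a text element's embedding at its target box (x0, y0, x1, y1),
        leaving zeros elsewhere, to form an (H, W, dim) conditioning map."""
        feature_map = np.zeros((height, width, text_embedding.shape[0]))
        x0, y0, x1, y1 = box
        feature_map[y0:y1, x0:x1] = text_embedding
        return feature_map

    embedding = np.random.rand(16)        # stand-in embedding of one prompt element
    fmap = layout_text_feature_map(embedding, box=(8, 4, 24, 20), height=32, width=32)
    print(fmap.shape, fmap[10, 10].any(), fmap[0, 0].any())  # (32, 32, 16) True False
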
  • Publication number: 20240171848
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems provide, for display within a graphical user interface of a client device, a digital image displaying a plurality of objects, the plurality of objects comprising a plurality of different types of objects. The disclosed systems generate, utilizing a segmentation neural network and without user input, an object mask for objects of the plurality of objects. The disclosed systems determine, utilizing a distractor detection neural network, a classification for the objects of the plurality of objects. The disclosed systems remove at least one object from the digital image, based on classifying the at least one object as a distracting object, by deleting the object mask for the at least one object.
    Type: Application
    Filed: November 23, 2022
    Publication date: May 23, 2024
    Inventors: Luis Figueroa, Zhihong Ding, Scott Cohen, Zhe Lin, Qing Liu
  • Publication number: 20240169500
    Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure receive an image comprising a first region that includes content and a second region to be inpainted. Noise is then added to the image to obtain a noisy image, and a plurality of intermediate output images are generated based on the noisy image using a diffusion model trained using a perceptual loss. The intermediate output images predict a final output image based on a corresponding intermediate noise level of the diffusion model. The diffusion model then generates the final output image based on the intermediate output image. The final output image includes inpainted content in the second region that is consistent with the content in the first region.
    Type: Application
    Filed: November 22, 2022
    Publication date: May 23, 2024
    Inventors: Haitian Zheng, Zhe Lin, Jianming Zhang, Connelly Stuart Barnes, Elya Shechtman, Jingwan Lu, Qing Liu, Sohrab Amirghodsi, Yuqian Zhou, Scott Cohen
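
Predicting a final output image from an intermediate noise level follows the usual DDPM relation x0 ≈ (x_t - sqrt(1 - alpha_bar_t) * predicted_noise) / sqrt(alpha_bar_t), and a perceptual loss compares such predictions to the target in a feature space. The sketch below uses a random convolution as a stand-in feature extractor and is not the trained model from the publication.

    import torch
    import torch.nn.functional as F

    def predict_final_image(x_t, predicted_noise, alpha_bar_t):
        """DDPM-style estimate of the clean image from noise level t."""
        return (x_t - (1 - alpha_bar_t) ** 0.5 * predicted_noise) / alpha_bar_t ** 0.5

    def perceptual_loss(prediction, target, feature_net):
        """Compare images in a feature space rather than pixel space."""
        return F.l1_loss(feature_net(prediction), feature_net(target))

    feature_net = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)  # stand-in features
    x0 = torch.rand(1, 3, 64, 64)
    alpha_bar_t = torch.tensor(0.5)
    noise = torch.randn_like(x0)
    x_t = alpha_bar_t.sqrt() * x0 + (1 - alpha_bar_t).sqrt() * noise
    x0_hat = predict_final_image(x_t, noise, alpha_bar_t)   # exact when noise is known
    print(perceptual_loss(x0_hat, x0, feature_net).item())  # ~0 up to float error
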
  • Publication number: 20240169685
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems receive a digital image from a client device. The disclosed systems detect, utilizing a shadow detection neural network, an object portrayed in the digital image. The disclosed systems detect, utilizing the shadow detection neural network, a shadow portrayed in the digital image. The disclosed systems generate, utilizing the shadow detection neural network, an object-shadow pair prediction that associates the shadow with the object.
    Type: Application
    Filed: November 23, 2022
    Publication date: May 23, 2024
    Inventors: Luis Figueroa, Zhe Lin, Zhihong Ding, Scott Cohen
  • Publication number: 20240169502
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems detect, via a graphical user interface of a client device, a user selection of an object portrayed within a digital image. The disclosed systems determine, in response to detecting the user selection of the object, a relationship between the object and an additional object portrayed within the digital image. The disclosed systems receive one or more user interactions for modifying the object. The disclosed systems modify the digital image in response to the one or more user interactions by modifying the object and the additional object based on the relationship between the object and the additional object.
    Type: Application
    Filed: November 23, 2022
    Publication date: May 23, 2024
    Inventors: Scott Cohen, Zhe Lin, Zhihong Ding, Luis Figueroa, Kushal Kafle
  • Publication number: 20240169630
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing to synthesize shadows for object(s). For instance, in one or more embodiments, the disclosed systems receive a digital image depicting a scene. The disclosed systems access an object mask of the object depicted in the digital image. The disclosed systems further combine the object mask, the digital image, and a noise representation to generate a combined representation. Moreover, the disclosed systems generate a shadow for the object from the combined representation and further generate the modified digital image by combining the shadow with the digital image.
    Type: Application
    Filed: December 7, 2023
    Publication date: May 23, 2024
    Inventors: Soo Ye Kim, Zhe Lin, Scott Cohen, Jianming Zhang, Luis Figueroa
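
The combined representation in the abstract above can be read as channel-wise stacking of the image, the object mask, and a noise map before it is fed to a shadow generator. A NumPy sketch with assumed shapes:

    import numpy as np

    def combined_representation(image, object_mask, rng=None):
        """Stack an (H, W, 3) image, an (H, W) object mask, and an (H, W)
        noise map into one (H, W, 5) input for a shadow-synthesis model."""
        rng = np.random.default_rng() if rng is None else rng
        noise = rng.standard_normal(object_mask.shape)
        return np.concatenate([image, object_mask[..., None], noise[..., None]], axis=-1)

    image = np.random.rand(64, 64, 3)
    mask = np.zeros((64, 64)); mask[20:40, 20:40] = 1.0    # object mask
    print(combined_representation(image, mask).shape)       # (64, 64, 5)
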
  • Publication number: 20240169628
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that provide a graphical user interface experience to move objects and generate new shadows within a digital image scene. For instance, in one or more embodiments, the disclosed systems receive a digital image depicting a scene. The disclosed systems receive a selection to position an object in a first location within the scene. Further, the disclosed systems composite an image by placing the object at the first location within the scene of the digital image. Moreover, the disclosed systems generate a modified digital image having a shadow of the object, where the shadow is consistent with the scene, and provide the modified digital image to the client device.
    Type: Application
    Filed: September 1, 2023
    Publication date: May 23, 2024
    Inventors: Soo Ye Kim, Zhe Lin, Scott Cohen, Jianming Zhang, Luis Figueroa, Zhihong Ding
  • Publication number: 20240169501
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems generate, utilizing a segmentation neural network and without user input, object masks for objects in a digital image. The disclosed systems determine foreground and background abutting an object mask. The disclosed systems generate an expanded object mask by expanding the object mask into the foreground abutting the object mask by a first amount and expanding the object mask into the background abutting the object mask by a second amount that differs from the first amount. The disclosed systems inpaint a hole corresponding to the expanded object mask utilizing an inpainting neural network.
    Type: Application
    Filed: November 23, 2022
    Publication date: May 23, 2024
    Inventors: Qing Liu, Zhe Lin, Luis Figueroa, Scott Cohen
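
The asymmetric expansion described above (grow the mask less into abutting foreground, more into abutting background) can be sketched with two binary dilations. The pixel amounts here are arbitrary, and the expanded region would then define the hole handed to an inpainting network.

    import numpy as np
    from scipy.ndimage import binary_dilation

    def expand_object_mask(object_mask, background_mask, fg_pixels=2, bg_pixels=8):
        """Grow an object mask by a small amount where it abuts other
        foreground and by a larger amount where it abuts background."""
        grown_fg = binary_dilation(object_mask, iterations=fg_pixels)
        grown_bg = binary_dilation(object_mask, iterations=bg_pixels)
        foreground = ~background_mask
        return object_mask | (grown_fg & foreground) | (grown_bg & background_mask)

    obj = np.zeros((64, 64), bool); obj[20:40, 20:40] = True
    background = np.zeros((64, 64), bool); background[:, 32:] = True  # right half
    expanded = expand_object_mask(obj, background)
    print(obj.sum(), expanded.sum())  # the hole grows more on the background side
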
  • Patent number: D1029088
    Type: Grant
    Filed: April 17, 2023
    Date of Patent: May 28, 2024
    Inventors: Zhe Lin, David Protet, Weichao Zhou
  • Patent number: D1029926
    Type: Grant
    Filed: April 17, 2023
    Date of Patent: June 4, 2024
    Inventors: Zhe Lin, David Protet, Pinggang Lai