Patents by Inventor Zhe Lin

Zhe Lin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230104262
    Abstract: Various disclosed embodiments are directed to refining or correcting individual semantic segmentation/instance segmentation masks that have already been produced by baseline models in order to generate a final coherent panoptic segmentation map. Specifically, a refinement model, such as an encoder-decoder-based neural network, generates or predicts various data objects, such as foreground masks, bounding box offset maps, center maps, center offset maps, and coordinate convolution. This, among other functionality described herein, reduces the inaccuracies and computing resource consumption of existing technologies.
    Type: Application
    Filed: October 6, 2021
    Publication date: April 6, 2023
    Inventors: Zhe Lin, Simon Su Chen, Jason Wen Yong Kuen, Bo Sun
  • Publication number: 20230105994
    Abstract: In implementations of resource-aware training for a neural network, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training the neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons while the reborn neurons are trained.
    Type: Application
    Filed: December 9, 2022
    Publication date: April 6, 2023
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang
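    The dead/survived partition described in this abstract can be sketched in a few lines. This is a minimal illustration of the thresholding criterion only; the function name, the threshold value, and the use of NumPy are assumptions, not the patented implementation:

    ```python
    import numpy as np

    def partition_neurons(activation_scales, threshold=0.01):
        """Split neuron indices into 'dead' and 'survived' by activation scale.

        Per the abstract: neurons whose activation scale falls below the
        threshold are dead; those at or above it are survived neurons.
        """
        scales = np.asarray(activation_scales, dtype=float)
        dead = np.flatnonzero(scales < threshold)
        survived = np.flatnonzero(scales >= threshold)
        return dead, survived

    # Example: neurons 1 and 3 have near-zero activation scales.
    dead, survived = partition_neurons([0.5, 0.001, 0.2, 0.0, 0.03])
    print(dead.tolist(), survived.tolist())  # [1, 3] [0, 2, 4]
    ```

    In the described system, the dead indices would then be reattached to layers containing survived neurons as "reborn" neurons and trained without connections to the survived ones.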
  • Publication number: 20230097192
    Abstract: The present invention provides a string-type mooring system. A support frame is provided on a dock. Two free guide rollers are provided at vertically corresponding positions, respectively below a cross arm of the support frame and above the dock. Each free guide roller is wound with a cable. One end of each cable is connected to a platform arm fixed on a platform, and the other end is horizontally connected to one end of a spring. The other ends of the two springs are respectively connected to a hydraulic device. The present invention provides an omnidirectional restoring force for the moored platform through the elastic deformation of the springs to control the movement response of the platform within a certain range. The present invention can also accommodate the slow change in the vertical position of the platform caused by tidal fluctuation.
    Type: Application
    Filed: March 17, 2020
    Publication date: March 30, 2023
    Inventors: Lei Sun, Chong Fu, Zhe Lin
  • Publication number: 20230096636
    Abstract: The present invention provides a long-term mooring device. A support frame is provided on a dock. The dock is provided with a free guide roller, which is wound with a cable. An upper end of the cable is horizontally connected to a spring fixed on a lower side of a cross arm of the support frame, via a corresponding free guide roller provided on the lower side of the cross arm. The middle of the cable passes through an inertial induction self-locking connection joint fixed on an end of a platform arm. The platform arm is fixed on a platform. The present invention provides an omnidirectional restoring force for the moored platform through the elastic deformation of the spring to control the movement response of the platform within a certain range.
    Type: Application
    Filed: March 17, 2020
    Publication date: March 30, 2023
    Inventors: Lei Sun, Chong Fu, Zhe Lin
  • Patent number: 11615567
    Abstract: A non-transitory computer-readable medium includes program code that is stored thereon. The program code is executable by one or more processing devices for performing operations including generating, by a model that includes trainable components, a learned image representation of a target image. The operations further include generating, by a text embedding model, a text embedding of a text query. The text embedding and the learned image representation of the target image are in a same embedding space. Additionally, the operations include generating a class activation map of the target image by, at least, convolving the learned image representation of the target image with the text embedding of the text query. Moreover, the operations include generating an object-segmented image using the class activation map of the target image.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: March 28, 2023
    Assignee: Adobe Inc.
    Inventors: Midhun Harikumar, Pranav Aggarwal, Baldo Faieta, Ajinkya Kale, Zhe Lin
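    Convolving a spatial image representation with a single text embedding, as this abstract describes, reduces for a 1×1 kernel to a dot product at every spatial location. The sketch below illustrates that operation under stated assumptions: the function name, the `(H, W, D)` layout, and the min-max normalization are illustrative choices, not the patented method:

    ```python
    import numpy as np

    def class_activation_map(image_features, text_embedding):
        """Compute an (H, W) activation map for a text query.

        image_features: (H, W, D) learned image representation.
        text_embedding: (D,) text embedding in the same embedding space.
        A 1x1 convolution with the embedding as its kernel is a per-pixel
        dot product over the feature dimension.
        """
        cam = np.tensordot(image_features, text_embedding, axes=([2], [0]))
        # Normalize to [0, 1] so the map can be thresholded into a mask.
        cam = cam - cam.min()
        if cam.max() > 0:
            cam = cam / cam.max()
        return cam
    ```

    An object-segmented image could then be produced by thresholding the map, which is the role the class activation map plays in the claimed pipeline.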
  • Patent number: 11610393
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and efficiently learning parameters of a distilled neural network from parameters of a source neural network utilizing multiple augmentation strategies. For example, the disclosed systems can generate lightly augmented digital images and heavily augmented digital images. The disclosed systems can further learn parameters for a source neural network from the lightly augmented digital images. Moreover, the disclosed systems can learn parameters for a distilled neural network from the parameters learned for the source neural network. For example, the disclosed systems can compare classifications of heavily augmented digital images generated by the source neural network and the distilled neural network to transfer learned parameters from the source neural network to the distilled neural network via a knowledge distillation loss function.
    Type: Grant
    Filed: October 2, 2020
    Date of Patent: March 21, 2023
    Assignee: Adobe Inc.
    Inventors: Jason Wen Yong Kuen, Zhe Lin, Jiuxiang Gu
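    The knowledge-distillation loss mentioned above can be illustrated with the standard temperature-scaled KL divergence between teacher (source) and student (distilled) predictions on the same heavily augmented input. This is a generic textbook formulation for context; the patent's exact loss function may differ:

    ```python
    import numpy as np

    def softmax(logits, temperature=1.0):
        """Temperature-scaled softmax over a 1-D logit vector."""
        z = np.asarray(logits, dtype=float) / temperature
        z = z - z.max()  # subtract max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    def distillation_loss(student_logits, teacher_logits, temperature=4.0):
        """KL(teacher || student) on temperature-softened distributions."""
        p = softmax(teacher_logits, temperature)
        q = softmax(student_logits, temperature)
        return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
    ```

    When the student reproduces the teacher's classification of an augmented image exactly, the loss is zero; disagreement yields a positive penalty that drives parameter transfer.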
  • Publication number: 20230079886
    Abstract: A panoptic labeling system includes a modified panoptic labeling neural network (“modified PLNN”) that is trained to generate labels for pixels in an input image. The panoptic labeling system generates modified training images by combining training images with mask instances from annotated images. The modified PLNN determines a set of labels representing categories of objects depicted in the modified training images. The modified PLNN also determines a subset of the labels representing categories of objects depicted in the input image. For each mask pixel in a modified training image, the modified PLNN calculates a probability indicating whether the mask pixel has the same label as an object pixel. The modified PLNN generates a mask label for each mask pixel, based on the probability. The panoptic labeling system provides the mask label to, for example, a digital graphics editing system that uses the labels to complete an infill operation.
    Type: Application
    Filed: October 20, 2022
    Publication date: March 16, 2023
    Inventors: Sohrab Amirghodsi, Zhe Lin, Yilin Wang, Tianshu Yu, Connelly Barnes, Elya Shechtman
  • Patent number: 11605156
    Abstract: Methods and systems are provided for accurately filling holes, regions, and/or portions of images using iterative image inpainting. In particular, iterative inpainting utilizes a confidence analysis of predicted pixels determined during the iterations of inpainting. For instance, a confidence analysis can provide information that can be used as feedback to progressively fill undefined pixels that comprise the holes, regions, and/or portions of an image where information for those respective pixels is not known. To allow for accurate image inpainting, one or more neural networks can be used, for instance, a coarse result neural network (e.g., a GAN comprised of a generator and a discriminator) and a fine result neural network (e.g., a GAN comprised of a generator and two discriminators).
    Type: Grant
    Filed: July 14, 2022
    Date of Patent: March 14, 2023
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Yu Zeng, Jimei Yang, Jianming Zhang, Elya Shechtman
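    The confidence-as-feedback loop this abstract describes can be sketched as follows. The `predict` callable stands in for the coarse/fine GAN generators; the function name, threshold, and stopping rule are illustrative assumptions, not the patented procedure:

    ```python
    import numpy as np

    def iterative_inpaint(image, mask, predict, conf_threshold=0.8, max_iters=10):
        """Progressively fill a hole, accepting only confident predictions.

        image: (H, W) array of pixel values.
        mask: boolean (H, W) array, True where pixels are undefined.
        predict: callable (image, mask) -> (predicted_image, confidence_map),
                 a stand-in for the inpainting network.
        """
        image, mask = image.copy(), mask.copy()
        for _ in range(max_iters):
            if not mask.any():
                break  # hole fully filled
            pred, conf = predict(image, mask)
            accept = mask & (conf >= conf_threshold)
            if not accept.any():
                break  # nothing confident enough this round; stop
            image[accept] = pred[accept]  # commit confident pixels
            mask[accept] = False          # shrink the hole for next pass
        return image, mask
    ```

    Each pass commits only the pixels the model is confident about, so the hole shrinks from its well-constrained borders inward across iterations.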
  • Patent number: 11605168
    Abstract: Techniques are disclosed for characterizing and defining the location of a copy space in an image. A methodology implementing the techniques according to an embodiment includes applying a regression convolutional neural network (CNN) to an image. The regression CNN is configured to predict properties of the copy space such as size and type (natural or manufactured). The prediction is conditioned on a determination of the presence of the copy space in the image. The method further includes applying a segmentation CNN to the image. The segmentation CNN is configured to generate one or more pixel-level masks to define the location of copy spaces in the image, whether natural or manufactured, or to define the location of a background region of the image. The segmentation CNN may include a first stage comprising convolutional layers and a second stage comprising pairs of boundary refinement layers and bilinear up-sampling layers.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: March 14, 2023
    Assignee: Adobe Inc.
    Inventors: Mingyang Ling, Alex Filipkowski, Zhe Lin, Jianming Zhang, Samarth Gulati
  • Patent number: 11604822
    Abstract: Multi-modal differential search with real-time focus adaptation techniques are described that overcome the challenges of conventional techniques in a variety of ways. In one example, a model is trained to support a visually guided machine-learning embedding space that supports visual intuition as to “what” is represented by text. The visually guided language embedding space supported by the model, once trained, may then be used to support visual intuition as part of a variety of functionality. In one such example, the visually guided language embedding space as implemented by the model may be leveraged as part of a multi-modal differential search to support search of digital images and other digital content with real-time focus adaptation which overcomes the challenges of conventional techniques.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: March 14, 2023
    Assignee: Adobe Inc.
    Inventors: Pranav Vineet Aggarwal, Zhe Lin, Baldo Antonio Faieta, Saeid Motiian
  • Patent number: 11605019
    Abstract: Visually guided machine-learning language model and embedding techniques are described that overcome the challenges of conventional techniques in a variety of ways. In one example, a model is trained to support a visually guided machine-learning embedding space that supports visual intuition as to “what” is represented by text. The visually guided language embedding space supported by the model, once trained, may then be used to support visual intuition as part of a variety of functionality. In one such example, the visually guided language embedding space as implemented by the model may be leveraged as part of a multi-modal differential search to support search of digital images and other digital content with real-time focus adaptation which overcomes the challenges of conventional techniques.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: March 14, 2023
    Assignee: Adobe Inc.
    Inventors: Pranav Vineet Aggarwal, Zhe Lin, Baldo Antonio Faieta, Saeid Motiian
  • Patent number: 11594077
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified digital images based on verbal and/or gesture input by utilizing a natural language processing neural network and one or more computer vision neural networks. The disclosed systems can receive verbal input together with gesture input. The disclosed systems can further utilize a natural language processing neural network to generate a verbal command based on verbal input. The disclosed systems can select a particular computer vision neural network based on the verbal input and/or the gesture input. The disclosed systems can apply the selected computer vision neural network to identify pixels within a digital image that correspond to an object indicated by the verbal input and/or gesture input. Utilizing the identified pixels, the disclosed systems can generate a modified digital image by performing one or more editing actions indicated by the verbal input and/or gesture input.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: February 28, 2023
    Assignee: Adobe Inc.
    Inventors: Trung Bui, Zhe Lin, Walter Chang, Nham Le, Franck Dernoncourt
  • Patent number: 11593948
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that utilize a progressive refinement network to refine alpha mattes generated utilizing a mask-guided matting neural network. In particular, the disclosed systems can use the matting neural network to process a digital image and a coarse guidance mask to generate alpha mattes at discrete neural network layers. In turn, the disclosed systems can use the progressive refinement network to combine alpha mattes and refine areas of uncertainty. For example, the progressive refinement network can combine a core alpha matte corresponding to more certain core regions of a first alpha matte and a boundary alpha matte corresponding to uncertain boundary regions of a second, higher resolution alpha matte. Based on the combination of the core alpha matte and the boundary alpha matte, the disclosed systems can generate a final alpha matte for use in image matting processes.
    Type: Grant
    Filed: February 17, 2021
    Date of Patent: February 28, 2023
    Assignee: Adobe Inc.
    Inventors: Qihang Yu, Jianming Zhang, He Zhang, Yilin Wang, Zhe Lin, Ning Xu
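    The core/boundary combination step can be illustrated with a simple masked selection. This sketch assumes the boundary matte has already been upsampled to the core matte's resolution and that an uncertainty mask is available; the function name is hypothetical and the real progressive refinement network operates across multiple network layers:

    ```python
    import numpy as np

    def combine_mattes(core_alpha, boundary_alpha, uncertainty_mask):
        """Fuse two alpha mattes: keep the core matte where the prediction
        is certain, and substitute the higher-resolution boundary matte in
        uncertain boundary regions.

        core_alpha, boundary_alpha: (H, W) alpha values in [0, 1].
        uncertainty_mask: boolean (H, W), True at uncertain pixels.
        """
        return np.where(uncertainty_mask, boundary_alpha, core_alpha)
    ```

    Applying this fusion at successively higher resolutions yields the final alpha matte used for image matting.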
  • Publication number: 20230042221
    Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that perform language guided digital image editing utilizing a cycle-augmentation generative-adversarial neural network (CAGAN) that is augmented using a cross-modal cyclic mechanism. For example, the disclosed systems generate an editing description network that generates language embeddings which represent image transformations applied between a digital image and a modified digital image. The disclosed systems can further train a GAN to generate modified images by providing an input image and natural language embeddings generated by the editing description network (representing various modifications to the digital image from a ground truth modified image). In some instances, the disclosed systems also utilize an image request attention approach with the GAN to generate images that include adaptive edits in different spatial locations of the image.
    Type: Application
    Filed: July 23, 2021
    Publication date: February 9, 2023
    Inventors: Ning Xu, Zhe Lin
  • Patent number: 11574142
    Abstract: The technology described herein is directed to a reinforcement learning based framework for training a natural media agent to learn a rendering policy without human supervision or labeled datasets. The reinforcement learning based framework feeds the natural media agent a training dataset to implicitly learn the rendering policy by exploring a canvas and minimizing a loss function. Once trained, the natural media agent can be applied to any reference image to generate a series (or sequence) of continuous-valued primitive graphic actions, e.g., sequence of painting strokes, that when rendered by a synthetic rendering environment on a canvas, reproduce an identical or transformed version of the reference image subject to limitations of an action space and the learned rendering policy.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: February 7, 2023
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xihui Liu, Quan Hung Tran, Jianming Zhang, Handong Zhao
  • Patent number: 11574392
    Abstract: The present disclosure relates to an image merging system that automatically and seamlessly detects and merges missing people for a set of digital images into a composite group photo. For instance, the image merging system utilizes a number of models and operations to automatically analyze multiple digital images to identify a missing person from a base image, segment the missing person from the second image, and generate a composite group photo by merging the segmented image of the missing person into the base image. In this manner, the image merging system automatically creates merged group photos that appear natural and realistic.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: February 7, 2023
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Vipul Dalal, Vera Lychagina, Shabnam Ghadar, Saeid Motiian, Rohith mohan Dodle, Prethebha Chandrasegaran, Mina Doroudi, Midhun Harikumar, Kannan Iyer, Jayant Kumar, Gaurav Kukal, Daniel Miranda, Charles R McKinney, Archit Kalra
  • Patent number: 11568544
    Abstract: The present disclosure relates to utilizing a neural network having a two-stream encoder architecture to accurately generate composite digital images that realistically portray a foreground object from one digital image against a scene from another digital image. For example, the disclosed systems can utilize a foreground encoder of the neural network to identify features from a foreground image and further utilize a background encoder to identify features from a background image. The disclosed systems can then utilize a decoder to fuse the features together and generate a composite digital image. The disclosed systems can train the neural network utilizing an easy-to-hard data augmentation scheme implemented via self-teaching. The disclosed systems can further incorporate the neural network within an end-to-end framework for automation of the image composition process.
    Type: Grant
    Filed: September 23, 2021
    Date of Patent: January 31, 2023
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Jianming Zhang, He Zhang, Federico Perazzi
  • Publication number: 20230024955
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for detecting and classifying an exposure defect in an image using neural networks trained via a limited amount of labeled training images. An image may be applied to a first neural network to determine whether the image includes an exposure defect. A detected defective image may be applied to a second neural network to determine an exposure defect classification for the image. The exposure defect classification can include severe underexposure, medium underexposure, mild underexposure, mild overexposure, medium overexposure, severe overexposure, and/or the like. The image may be presented to a user along with the exposure defect classification.
    Type: Application
    Filed: September 30, 2022
    Publication date: January 26, 2023
    Inventors: Akhilesh Kumar, Zhe Lin, William Lawrence Marino
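    The two-stage detect-then-classify cascade reads naturally as a short dispatch function. `detect_defect` and `classify_defect` are hypothetical stand-ins for the two trained neural networks, and the label strings are illustrative:

    ```python
    def classify_exposure(image, detect_defect, classify_defect):
        """Two-stage cascade from the abstract: a binary detector gates the
        finer-grained classifier, so well-exposed images skip stage two."""
        if not detect_defect(image):
            return "well_exposed"
        # Only images flagged as defective reach the second network,
        # which assigns one of the graded under-/overexposure labels.
        return classify_defect(image)
    ```

    Gating the classifier behind a detector keeps the second network's training problem narrow, which matches the abstract's emphasis on learning from limited labeled data.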
  • Patent number: 11551093
    Abstract: In implementations of resource-aware training for a neural network, one or more computing devices of a system implement an architecture optimization module for monitoring parameter utilization while training the neural network. Dead neurons of the neural network are identified as having activation scales less than a threshold. Neurons with activation scales greater than or equal to the threshold are identified as survived neurons. The dead neurons are converted to reborn neurons by adding the dead neurons to layers of the neural network having the survived neurons. The reborn neurons are prevented from connecting to the survived neurons while the reborn neurons are trained.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: January 10, 2023
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Siyuan Qiao, Jianming Zhang
  • Patent number: 11544831
    Abstract: The present disclosure relates to training and utilizing an image exposure transformation network to generate a long-exposure image from a single short-exposure image (e.g., still image). In various embodiments, the image exposure transformation network is trained using adversarial learning, long-exposure ground truth images, and a multi-term loss function. In some embodiments, the image exposure transformation network includes an optical flow prediction network and/or an appearance guided attention network. Trained embodiments of the image exposure transformation network generate realistic long-exposure images from single short-exposure images without additional information.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: January 3, 2023
    Assignee: Adobe Inc.
    Inventors: Yilin Wang, Zhe Lin, Zhaowen Wang, Xin Lu, Xiaohui Shen, Chih-Yao Hsieh