Patents by Inventor Xiaohui Shen

Xiaohui Shen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10810721
Abstract: Digital image defect identification and correction techniques are described. In one example, a digital medium environment is configured to identify and correct a digital image defect through identification of a defect in a digital image using machine learning. The identification includes generating a plurality of defect type scores using a plurality of defect type identification models, as part of machine learning, for a plurality of different defect types, and determining that the digital image includes the defect based on the generated plurality of defect type scores. A correction is generated for the identified defect and the digital image is output as including the generated correction.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: October 20, 2020
    Assignee: Adobe Inc.
    Inventors: Radomir Mech, Ning Yu, Xiaohui Shen, Zhe Lin
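
The defect-scoring flow described in the abstract of patent 10810721 can be illustrated with a minimal sketch: several per-defect-type models each score the image, and the image is flagged as defective when any score crosses a decision threshold. The defect types, tiny model, and threshold below are illustrative assumptions, not the patented models.

```python
# A minimal sketch (PyTorch), assuming each defect type has its own scoring model.
import torch
import torch.nn as nn

DEFECT_TYPES = ["exposure", "noise", "haze", "blur"]  # hypothetical defect types

class DefectScorer(nn.Module):
    """Tiny stand-in for one per-defect-type identification model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1), nn.Sigmoid())
    def forward(self, image):
        return self.net(image)  # defect type score in [0, 1]

def identify_defects(image, threshold=0.5):
    """Score an image against every defect type and flag those above threshold."""
    scorers = {name: DefectScorer().eval() for name in DEFECT_TYPES}
    with torch.no_grad():
        scores = {name: float(m(image)) for name, m in scorers.items()}
    return scores, [name for name, s in scores.items() if s > threshold]

scores, detected = identify_defects(torch.rand(1, 3, 224, 224))
print(scores, detected)
```
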
  • Patent number: 10810707
Abstract: Techniques for generating depth-of-field blur effects on digital images by a digital effect generation system of a computing device are described. The digital effect generation system is configured to generate depth-of-field blur effects on objects based on a focal depth value that defines a depth plane in the digital image and an aperture value that defines the intensity of the blur effect applied to the digital image. The digital effect generation system is also configured to improve the accuracy with which depth-of-field blur effects are generated by performing up-sampling operations and implementing a unique focal loss algorithm that effectively minimizes focal loss within digital images.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: October 20, 2020
    Assignee: Adobe Inc.
    Inventors: Jianming Zhang, Zhe Lin, Xiaohui Shen, Oliver Wang, Lijun Wang
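
As a rough illustration of the depth-of-field idea in patent 10810707, the sketch below blends a sharp image with a blurred copy, with per-pixel blur strength growing with the aperture value and with distance from the focal plane defined by the focal depth value. The blending scheme and parameters are assumptions; the patented up-sampling and focal-loss details are not reproduced.

```python
# A minimal sketch with NumPy/SciPy, assuming a per-pixel depth map is available.
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_of_field_blur(image, depth, focal_depth, aperture, max_sigma=8.0):
    """Blend sharp and blurred copies per pixel; blur grows with aperture and
    with distance from the focal plane (|depth - focal_depth|)."""
    # Per-pixel blur strength in [0, 1]: zero on the focal plane.
    strength = np.clip(aperture * np.abs(depth - focal_depth), 0.0, 1.0)
    blurred = np.stack([gaussian_filter(image[..., c], sigma=max_sigma)
                        for c in range(image.shape[-1])], axis=-1)
    alpha = strength[..., None]  # broadcast blur strength over color channels
    return (1.0 - alpha) * image + alpha * blurred

img = np.random.rand(256, 256, 3)
depth = np.tile(np.linspace(0.0, 1.0, 256)[:, None], (1, 256))  # synthetic depth ramp
out = depth_of_field_blur(img, depth, focal_depth=0.5, aperture=2.0)
```
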
  • Patent number: 10783622
    Abstract: The present disclosure relates to training and utilizing an image exposure transformation network to generate a long-exposure image from a single short-exposure image (e.g., still image). In various embodiments, the image exposure transformation network is trained using adversarial learning, long-exposure ground truth images, and a multi-term loss function. In some embodiments, the image exposure transformation network includes an optical flow prediction network and/or an appearance guided attention network. Trained embodiments of the image exposure transformation network generate realistic long-exposure images from single short-exposure images without additional information.
    Type: Grant
    Filed: April 25, 2018
    Date of Patent: September 22, 2020
    Assignee: ADOBE INC.
    Inventors: Yilin Wang, Zhe Lin, Zhaowen Wang, Xin Lu, Xiaohui Shen, Chih-Yao Hsieh
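
A hedged sketch of the multi-term training objective mentioned in the abstract of patent 10783622: the generator is penalized both for deviating from the long-exposure ground truth and for failing to fool a discriminator. The specific terms and weights below are placeholders, not the patented loss function.

```python
# A minimal sketch (PyTorch) of a multi-term generator loss combining a pixel
# reconstruction term with an adversarial term; the weights are assumptions.
import torch
import torch.nn.functional as F

def generator_loss(fake_long, real_long, disc_logits_on_fake,
                   w_pix=1.0, w_adv=0.01):
    pixel_term = F.l1_loss(fake_long, real_long)          # match ground-truth long exposure
    adv_term = F.binary_cross_entropy_with_logits(        # encourage fooling the discriminator
        disc_logits_on_fake, torch.ones_like(disc_logits_on_fake))
    return w_pix * pixel_term + w_adv * adv_term

fake = torch.rand(2, 3, 64, 64, requires_grad=True)
real = torch.rand(2, 3, 64, 64)
loss = generator_loss(fake, real, torch.randn(2, 1))
loss.backward()
```
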
  • Patent number: 10776671
    Abstract: Techniques are disclosed for blur classification. The techniques utilize an image content feature map, a blur map, and an attention map, thereby combining low-level blur estimation with a high-level understanding of important image content in order to perform blur classification. The techniques allow for programmatically determining if blur exists in an image, and determining what type of blur it is (e.g., high blur, low blur, middle or neutral blur, or no blur). According to one example embodiment, if blur is detected, an estimate of spatially-varying blur amounts is performed and blur desirability is categorized in terms of image quality.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: September 15, 2020
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Shanghang Zhang, Radomir Mech
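
One way to picture the fusion described in the abstract of patent 10776671 is attention-weighted pooling over a content feature map and a blur map, followed by a small classifier over blur categories. The sketch below is a hypothetical illustration; the tensor shapes and the four-way label set are assumptions.

```python
# A minimal sketch (PyTorch): fuse a content feature map, a blur map, and an
# attention map, then classify the blur category.
import torch
import torch.nn as nn

BLUR_CLASSES = ["no_blur", "low_blur", "medium_blur", "high_blur"]

class BlurClassifier(nn.Module):
    def __init__(self, feat_channels=32):
        super().__init__()
        self.head = nn.Linear(feat_channels + 1, len(BLUR_CLASSES))
    def forward(self, content_feat, blur_map, attention_map):
        # Normalize attention into weights that sum to 1 over spatial positions.
        w = attention_map.flatten(2).softmax(dim=-1).view_as(attention_map)
        fused = torch.cat([content_feat, blur_map], dim=1)   # B x (C+1) x H x W
        pooled = (fused * w).sum(dim=(2, 3))                 # attention-weighted pooling
        return self.head(pooled)                             # logits over blur classes

model = BlurClassifier()
logits = model(torch.rand(1, 32, 28, 28), torch.rand(1, 1, 28, 28), torch.rand(1, 1, 28, 28))
```
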
  • Patent number: 10762608
Abstract: Embodiments of the present disclosure relate to a sky editing system and related processes for sky editing. The sky editing system includes a composition detector to determine the composition of a target image. A sky search engine in the sky editing system is configured to find a reference image with a composition similar to that of the target image. Subsequently, a sky editor replaces content of the sky in the target image with content of the sky in the reference image. As such, the sky editing system transforms the target image into a new image with a preferred sky background.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: September 1, 2020
    Assignee: ADOBE INC.
    Inventors: Xiaohui Shen, Yi-Hsuan Tsai, Kalyan K. Sunkavalli, Zhe Lin
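
The final compositing step described in the abstract of patent 10762608 can be sketched as a masked blend: sky pixels in the target are replaced by the corresponding pixels of a reference image. The retrieval step is reduced here to a crude grid-based layout score; both pieces are illustrative assumptions, not the patented system.

```python
# A minimal NumPy sketch of sky replacement plus a placeholder composition score.
import numpy as np

def composition_score(a, b, grid=8):
    """Crude layout similarity: compare coarse grid-averaged luminance maps."""
    def coarse(img):
        h, w = img.shape[:2]
        return img[:h - h % grid, :w - w % grid].mean(axis=-1).reshape(
            grid, (h - h % grid) // grid, grid, (w - w % grid) // grid).mean(axis=(1, 3))
    return -np.abs(coarse(a) - coarse(b)).mean()

def replace_sky(target, reference, sky_mask):
    """sky_mask: float array in [0, 1], 1 where the target's sky should be replaced."""
    m = sky_mask[..., None]
    return (1.0 - m) * target + m * reference

target = np.random.rand(128, 128, 3)
reference = np.random.rand(128, 128, 3)
score = composition_score(target, reference)          # higher means more similar layout
mask = np.zeros((128, 128)); mask[:40, :] = 1.0       # assume the top rows are sky
result = replace_sky(target, reference, mask)
```
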
  • Publication number: 20200272822
    Abstract: In implementations of object detection in images, object detectors are trained using heterogeneous training datasets. A first training dataset is used to train an image tagging network to determine an attention map of an input image for a target concept. A second training dataset is used to train a conditional detection network that accepts as conditional inputs the attention map and a word embedding of the target concept. Despite the conditional detection network being trained with a training dataset having a small number of seen classes (e.g., classes in a training dataset), it generalizes to novel, unseen classes by concept conditioning, since the target concept propagates through the conditional detection network via the conditional inputs, thus influencing classification and region proposal. Hence, classes of objects that can be detected are expanded, without the need to scale training databases to include additional classes.
    Type: Application
    Filed: May 14, 2020
    Publication date: August 27, 2020
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Mingyang Ling, Jianming Zhang, Jason Wen Yong Kuen
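
A minimal sketch of the concept-conditioning idea in publication 20200272822: a detection head receives image features plus two conditional inputs, the attention map for the target concept and the concept's word embedding. The modulation scheme and dimensions below are assumptions for illustration.

```python
# A minimal sketch (PyTorch) of a conditionally modulated detection head.
import torch
import torch.nn as nn

class ConditionalDetectionHead(nn.Module):
    def __init__(self, feat_ch=64, embed_dim=50):
        super().__init__()
        self.embed_proj = nn.Linear(embed_dim, feat_ch)
        self.box_head = nn.Conv2d(feat_ch, 4, kernel_size=1)    # per-location box offsets
        self.score_head = nn.Conv2d(feat_ch, 1, kernel_size=1)  # per-location score for the concept

    def forward(self, features, attention_map, word_embedding):
        # Modulate features by the attention map and the concept embedding, so the
        # same head can be steered toward concepts unseen during detection training.
        cond = self.embed_proj(word_embedding)[:, :, None, None]
        h = features * attention_map + cond
        return self.box_head(h), self.score_head(h)

head = ConditionalDetectionHead()
boxes, scores = head(torch.rand(1, 64, 32, 32), torch.rand(1, 1, 32, 32), torch.rand(1, 50))
```
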
  • Patent number: 10755391
    Abstract: Digital image completion by learning generation and patch matching jointly is described. Initially, a digital image having at least one hole is received. This holey digital image is provided as input to an image completer formed with a dual-stage framework that combines a coarse image neural network and an image refinement network. The coarse image neural network generates a coarse prediction of imagery for filling the holes of the holey digital image. The image refinement network receives the coarse prediction as input, refines the coarse prediction, and outputs a filled digital image having refined imagery that fills these holes. The image refinement network generates refined imagery using a patch matching technique, which includes leveraging information corresponding to patches of known pixels for filtering patches generated based on the coarse prediction. Based on this, the image completer outputs the filled digital image with the refined imagery.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: August 25, 2020
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
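
The dual-stage framework in the abstract of patent 10755391 (a coarse prediction followed by refinement) can be sketched as two networks applied in sequence, with known pixels preserved and only hole pixels taken from the refined output. The tiny placeholder networks below stand in for the patented architecture; the patch-matching step itself is omitted.

```python
# A minimal sketch (PyTorch) of the coarse-then-refine completion pipeline.
import torch
import torch.nn as nn

def tiny_net(in_ch):
    return nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())

class TwoStageCompleter(nn.Module):
    def __init__(self):
        super().__init__()
        self.coarse = tiny_net(in_ch=4)   # RGB + hole mask
        self.refine = tiny_net(in_ch=7)   # RGB + mask + coarse RGB

    def forward(self, image, hole_mask):
        holey = image * (1.0 - hole_mask)                       # zero out hole pixels
        coarse = self.coarse(torch.cat([holey, hole_mask], 1))  # rough fill
        refined = self.refine(torch.cat([holey, hole_mask, coarse], 1))
        # Keep known pixels from the input; use refined content only inside the hole.
        return image * (1.0 - hole_mask) + refined * hole_mask

completer = TwoStageCompleter()
out = completer(torch.rand(1, 3, 64, 64), torch.zeros(1, 1, 64, 64).bernoulli_(0.2))
```
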
  • Patent number: 10755099
    Abstract: In implementations of object detection in images, object detectors are trained using heterogeneous training datasets. A first training dataset is used to train an image tagging network to determine an attention map of an input image for a target concept. A second training dataset is used to train a conditional detection network that accepts as conditional inputs the attention map and a word embedding of the target concept. Despite the conditional detection network being trained with a training dataset having a small number of seen classes (e.g., classes in a training dataset), it generalizes to novel, unseen classes by concept conditioning, since the target concept propagates through the conditional detection network via the conditional inputs, thus influencing classification and region proposal. Hence, classes of objects that can be detected are expanded, without the need to scale training databases to include additional classes.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: August 25, 2020
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Mingyang Ling, Jianming Zhang, Jason Wen Yong Kuen
  • Patent number: 10747811
Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. In order to train models of the convolutional neural networks, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the corresponding positive foreground or background digital image in the triplet.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: August 18, 2020
    Assignee: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Kalyan Krishna Sunkavalli, Hengshuang Zhao, Brian Lynn Price
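
A hedged sketch of the two-stream triplet setup in the abstract of patent 10747811: separate encoders embed foregrounds and backgrounds, and a triplet loss pulls a foreground toward the background from the same source image while pushing it away from a mismatched one. The encoders and margin are placeholders.

```python
# A minimal sketch (PyTorch) of two-stream embeddings trained with a triplet loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder(dim=32):
    return nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim))

fg_stream, bg_stream = make_encoder(), make_encoder()
triplet = nn.TripletMarginLoss(margin=0.5)

fg = torch.rand(4, 3, 64, 64)        # positive foreground crops
bg_pos = torch.rand(4, 3, 64, 64)    # backgrounds taken from the same source images
bg_neg = torch.rand(4, 3, 64, 64)    # dissimilar backgrounds
loss = triplet(F.normalize(fg_stream(fg)),
               F.normalize(bg_stream(bg_pos)),
               F.normalize(bg_stream(bg_neg)))
loss.backward()
```
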
  • Publication number: 20200250465
Abstract: Embodiments of the present invention provide an automated image tagging system that can predict a set of tags, along with relevance scores, that can be used for keyword-based image retrieval, image tag proposal, and image tag auto-completion based on user input. Initially, during training, a clustering technique is utilized to reduce cluster imbalance in the data that is input into a convolutional neural network (CNN) for training feature data. In embodiments, the clustering technique can also be utilized to compute data point similarity that can be utilized for tag propagation (to tag untagged images). During testing, a diversity-based voting framework is utilized to overcome user tagging biases. In some embodiments, bigram re-weighting can down-weight a keyword that is likely to be part of a bigram based on a predicted tag set.
    Type: Application
    Filed: April 20, 2020
    Publication date: August 6, 2020
Inventors: Zhe Lin, Xiaohui Shen, Jonathan Brandt, Jianming Zhang, Chen Fang
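
The cluster-then-rebalance step in the abstract of publication 20200250465 can be illustrated as: cluster image features, then resample so each cluster contributes equally to the CNN training set. The feature dimensionality, cluster count, and per-cluster quota below are assumptions.

```python
# A minimal sketch (scikit-learn/NumPy) of clustering features and rebalancing them.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))            # stand-in image feature vectors
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(features)

per_cluster = 50                                    # equal quota per cluster
balanced_idx = np.concatenate([
    rng.choice(np.where(labels == c)[0], size=per_cluster, replace=True)
    for c in range(10)])
balanced_features = features[balanced_idx]          # rebalanced set for CNN training
```
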
  • Patent number: 10735096
    Abstract: An example remote radio apparatus is provided, including a body, a mainboard, a mainboard heat sink, a maintenance cavity, an optical module, and an optical module heat sink. The maintenance cavity and the optical module heat sink are integrally connected, while the optical module is mounted on a bottom surface of the optical module heat sink. The maintenance cavity and the optical module heat sink are mounted on a side surface of the body, and the mainboard heat sink is mounted on and covers the mainboard. The mainboard heat sink and the mainboard are installed on a front surface of the body, and the mainboard heat sink and the optical module heat sink are spaced by a preset distance. The temperature of the optical module is controlled within a range required by a specification.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: August 4, 2020
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Xiaoming Shi, Xiaohui Shen, Dan Liang, Haigang Xiong, Haizheng Tang
  • Publication number: 20200210763
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for utilizing a deep neural network-based model to identify similar digital images for query digital images. For example, the disclosed systems utilize a deep neural network-based model to analyze query digital images to generate deep neural network-based representations of the query digital images. In addition, the disclosed systems can generate results of visually-similar digital images for the query digital images based on comparing the deep neural network-based representations with representations of candidate digital images. Furthermore, the disclosed systems can identify visually similar digital images based on user-defined attributes and image masks to emphasize specific attributes or portions of query digital images.
    Type: Application
    Filed: March 12, 2020
    Publication date: July 2, 2020
    Inventors: Zhe Lin, Xiaohui Shen, Mingyang Ling, Jianming Zhang, Jason Kuen, Brett Butterfield
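
A minimal sketch of the retrieval flow in the abstract of publication 20200210763: the query and candidate images are embedded by the same network and candidates are ranked by similarity to the query. The backbone below is a placeholder, and the attribute- and mask-based emphasis from the abstract is not reproduced.

```python
# A minimal sketch (PyTorch) of embedding-based visual similarity ranking.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))

def embed(images):
    """L2-normalized deep representations, so dot products are cosine similarities."""
    with torch.no_grad():
        return F.normalize(backbone(images), dim=1)

query = torch.rand(1, 3, 128, 128)
candidates = torch.rand(20, 3, 128, 128)
similarity = embed(candidates) @ embed(query).T          # 20 x 1 cosine similarities
ranking = similarity.squeeze(1).argsort(descending=True) # most similar candidates first
```
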
  • Publication number: 20200202601
    Abstract: Predicting patch displacement maps using a neural network is described. Initially, a digital image on which an image editing operation is to be performed is provided as input to a patch matcher having an offset prediction neural network. From this image and based on the image editing operation for which this network is trained, the offset prediction neural network generates an offset prediction formed as a displacement map, which has offset vectors that represent a displacement of pixels of the digital image to different locations for performing the image editing operation. Pixel values of the digital image are copied to the image pixels affected by the operation.
    Type: Application
    Filed: March 2, 2020
    Publication date: June 25, 2020
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
  • Publication number: 20200184610
Abstract: Digital image completion using deep learning is described. Initially, a digital image having at least one hole is received. This holey digital image is provided as input to an image completer formed with a framework that combines generative and discriminative neural networks based on the learning architecture of generative adversarial networks. From the holey digital image, the generative neural network generates a filled digital image having hole-filling content in place of the holes. The discriminative neural networks detect whether the filled digital image and the hole-filling content include computer-generated content or are photo-realistic. The generating and detecting continue iteratively until the discriminative neural networks fail to detect computer-generated content in the filled digital image and hole-filling content, or until detection surpasses a threshold difficulty.
    Type: Application
    Filed: February 14, 2020
    Publication date: June 11, 2020
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
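
The iterative generate-and-detect loop in the abstract of publication 20200184610 is, at its core, adversarial training: a generator fills holes while a discriminator tries to tell filled images from real ones, and the two are updated in alternation. The sketch below uses tiny placeholder networks and synthetic data, not the patented architecture.

```python
# A minimal sketch (PyTorch) of alternating generator/discriminator updates for hole filling.
import torch
import torch.nn as nn
import torch.nn.functional as F

gen = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
disc = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)

real = torch.rand(2, 3, 64, 64)
mask = torch.zeros(2, 1, 64, 64); mask[:, :, 16:48, 16:48] = 1.0   # square hole

for step in range(2):  # illustrative number of alternating updates
    filled = gen(torch.cat([real * (1 - mask), mask], 1))
    # Discriminator update: push real images toward 1, filled images toward 0.
    d_loss = F.binary_cross_entropy_with_logits(disc(real), torch.ones(2, 1)) + \
             F.binary_cross_entropy_with_logits(disc(filled.detach()), torch.zeros(2, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator update: make filled images look real to the discriminator.
    g_loss = F.binary_cross_entropy_with_logits(disc(filled), torch.ones(2, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```
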
  • Publication number: 20200175651
Abstract: Techniques for generating depth-of-field blur effects on digital images by a digital effect generation system of a computing device are described. The digital effect generation system is configured to generate depth-of-field blur effects on objects based on a focal depth value that defines a depth plane in the digital image and an aperture value that defines the intensity of the blur effect applied to the digital image. The digital effect generation system is also configured to improve the accuracy with which depth-of-field blur effects are generated by performing up-sampling operations and implementing a unique focal loss algorithm that effectively minimizes focal loss within digital images.
    Type: Application
    Filed: November 29, 2018
    Publication date: June 4, 2020
    Applicant: Adobe Inc.
    Inventors: Jianming Zhang, Zhe Lin, Xiaohui Shen, Oliver Wang, Lijun Wang
  • Publication number: 20200175700
Abstract: A joint training technique for depth map generation, implemented by a depth prediction system as part of a computing device, is described. The depth prediction system is configured to generate a candidate feature map from features extracted from training digital images, generate a candidate segmentation map and a candidate depth map from the generated candidate feature map, and jointly train portions of the depth prediction system using a loss function. Consequently, the depth prediction system is able to generate a depth map that identifies depths of objects using ordinal depth information and accurately delineates object boundaries within a single digital image.
    Type: Application
    Filed: November 29, 2018
    Publication date: June 4, 2020
    Applicant: Adobe Inc.
    Inventors: Jianming Zhang, Zhe Lin, Xiaohui Shen, Oliver Wang, Lijun Wang
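
A hedged sketch of the joint training described in publication 20200175700: a shared feature map feeds both a segmentation head and a depth head, and their losses are summed into one objective. The channel counts, class count, and loss terms below are assumptions, not the patented loss function.

```python
# A minimal sketch (PyTorch) of jointly training segmentation and depth heads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointDepthSegNet(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(32, num_classes, 1)   # candidate segmentation map
        self.depth_head = nn.Conv2d(32, 1, 1)           # candidate depth map

    def forward(self, x):
        feat = self.backbone(x)                         # shared candidate feature map
        return self.seg_head(feat), self.depth_head(feat)

net = JointDepthSegNet()
image = torch.rand(2, 3, 64, 64)
seg_gt = torch.randint(0, 5, (2, 64, 64))
depth_gt = torch.rand(2, 1, 64, 64)

seg_pred, depth_pred = net(image)
loss = F.cross_entropy(seg_pred, seg_gt) + F.l1_loss(depth_pred, depth_gt)  # joint loss
loss.backward()
```
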
  • Patent number: 10672164
Abstract: Predicting patch displacement maps using a neural network is described. Initially, a digital image on which an image editing operation is to be performed is provided as input to a patch matcher having an offset prediction neural network. From this image and based on the image editing operation for which this network is trained, the offset prediction neural network generates an offset prediction formed as a displacement map, which has offset vectors that represent a displacement of pixels of the digital image to different locations for performing the image editing operation. Pixel values of the digital image are copied to the image pixels affected by the operation by determining the offset vectors that correspond to the image pixels affected by the image editing operation and mapping the pixel values of the image pixels represented by the determined offset vectors to the affected pixels.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: June 2, 2020
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
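
Applying a predicted displacement map, as described in the abstract of patent 10672164, amounts to copying each affected pixel's value from the location its offset vector points at. The sketch below applies a synthetic displacement map; predicting it with the offset prediction neural network is the patented part and is not reproduced.

```python
# A minimal NumPy sketch of copying pixel values according to a displacement map.
import numpy as np

def apply_displacement(image, hole_mask, offsets):
    """offsets: H x W x 2 integer (dy, dx) vectors, used only where hole_mask is 1."""
    h, w = hole_mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + offsets[..., 0], 0, h - 1)
    src_x = np.clip(xs + offsets[..., 1], 0, w - 1)
    out = image.copy()
    # Copy values from the source locations the offset vectors point at.
    out[hole_mask == 1] = image[src_y[hole_mask == 1], src_x[hole_mask == 1]]
    return out

img = np.random.rand(64, 64, 3)
mask = np.zeros((64, 64), dtype=int); mask[20:40, 20:40] = 1
offsets = np.zeros((64, 64, 2), dtype=int); offsets[..., 0] = 25   # copy from 25 rows below
result = apply_displacement(img, mask, offsets)
```
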
  • Patent number: 10664719
Abstract: Embodiments of the present invention provide an automated image tagging system that can predict a set of tags, along with relevance scores, that can be used for keyword-based image retrieval, image tag proposal, and image tag auto-completion based on user input. Initially, during training, a clustering technique is utilized to reduce cluster imbalance in the data that is input into a convolutional neural network (CNN) for training feature data. In embodiments, the clustering technique can also be utilized to compute data point similarity that can be utilized for tag propagation (to tag untagged images). During testing, a diversity-based voting framework is utilized to overcome user tagging biases. In some embodiments, bigram re-weighting can down-weight a keyword that is likely to be part of a bigram based on a predicted tag set.
    Type: Grant
    Filed: February 12, 2016
    Date of Patent: May 26, 2020
    Assignee: ADOBE INC.
    Inventors: Zhe Lin, Xiaohui Shen, Jonathan Brandt, Jianming Zhang, Chen Fang
  • Publication number: 20200151448
    Abstract: In implementations of object detection in images, object detectors are trained using heterogeneous training datasets. A first training dataset is used to train an image tagging network to determine an attention map of an input image for a target concept. A second training dataset is used to train a conditional detection network that accepts as conditional inputs the attention map and a word embedding of the target concept. Despite the conditional detection network being trained with a training dataset having a small number of seen classes (e.g., classes in a training dataset), it generalizes to novel, unseen classes by concept conditioning, since the target concept propagates through the conditional detection network via the conditional inputs, thus influencing classification and region proposal. Hence, classes of objects that can be detected are expanded, without the need to scale training databases to include additional classes.
    Type: Application
    Filed: November 13, 2018
    Publication date: May 14, 2020
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Mingyang Ling, Jianming Zhang, Jason Wen Yong Kuen
  • Patent number: 10628708
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for utilizing a deep neural network-based model to identify similar digital images for query digital images. For example, the disclosed systems utilize a deep neural network-based model to analyze query digital images to generate deep neural network-based representations of the query digital images. In addition, the disclosed systems can generate results of visually-similar digital images for the query digital images based on comparing the deep neural network-based representations with representations of candidate digital images. Furthermore, the disclosed systems can identify visually similar digital images based on user-defined attributes and image masks to emphasize specific attributes or portions of query digital images.
    Type: Grant
    Filed: May 18, 2018
    Date of Patent: April 21, 2020
    Assignee: ADOBE INC.
    Inventors: Zhe Lin, Xiaohui Shen, Mingyang Ling, Jianming Zhang, Jason Kuen, Brett Butterfield