Patents by Inventor Xiaohui Shen

Xiaohui Shen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10614557
    Abstract: Digital image completion using deep learning is described. Initially, a digital image having at least one hole is received. This holey digital image is provided as input to an image completer formed with a framework that combines generative and discriminative neural networks based on the learning architecture of generative adversarial networks. From the holey digital image, the generative neural network generates a filled digital image having hole-filling content in place of the holes. The discriminative neural networks detect whether the filled digital image and the hole-filling content are computer-generated or photo-realistic. The generating and detecting continue iteratively until the discriminative neural networks fail to detect computer-generated content in the filled digital image and hole-filling content, or until detection surpasses a threshold difficulty.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: April 7, 2020
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
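The iterate-until-undetectable loop this abstract describes can be sketched with toy stand-ins. The mean-fill "generator", the statistics-based "detector", and every threshold below are illustrative inventions for the sketch, not the patent's learned networks:

```python
def fill_holes(image, hole_mask, noise=0.0):
    """Stand-in 'generator': fill each hole pixel with the mean of known pixels."""
    known = [p for p, m in zip(image, hole_mask) if not m]
    fill = sum(known) / len(known)
    return [fill + noise if m else p for p, m in zip(image, hole_mask)]

def detects_fake(image, hole_mask, filled, tol=0.05):
    """Stand-in 'discriminator': flags the fill as synthetic while it deviates
    too much from the statistics of the known pixels."""
    known = [p for p, m in zip(image, hole_mask) if not m]
    target = sum(known) / len(known)
    fake = [p for p, m in zip(filled, hole_mask) if m]
    return any(abs(p - target) > tol for p in fake)

def complete_image(image, hole_mask, max_iters=10):
    noise = 0.5  # start with an implausible fill, refine each round
    for _ in range(max_iters):
        filled = fill_holes(image, hole_mask, noise)
        if not detects_fake(image, hole_mask, filled):
            return filled  # detector can no longer tell the fill apart
        noise *= 0.5  # 'training step': make the generator's output more plausible
    return filled
```

In the patent both sides are neural networks trained jointly; here the "training step" is just shrinking a noise term until the detector passes.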
  • Patent number: 10607329
    Abstract: Methods and systems are provided for using a single image of an indoor scene to estimate illumination of an environment that includes the portion captured in the image. A neural network system may be trained to estimate illumination by generating recovery light masks indicating the probability of each pixel within the larger environment being a light source. Additionally, low-frequency RGB images may be generated that indicate low-frequency information for the environment. The neural network system may be trained using training input images that are extracted from known panoramic images. Once trained, the neural network system infers plausible illumination information from a single image to realistically illuminate images and objects being manipulated in graphics applications, such as image compositing, modeling, and reconstruction.
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: March 31, 2020
    Assignee: Adobe Inc.
    Inventors: Kalyan K. Sunkavalli, Xiaohui Shen, Mehmet Ersin Yumer, Marc-André Gardner, Emiliano Gambaretto
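A recovery light mask of the kind described, a per-pixel probability of being a light source, can be illustrated with a simple luminance sigmoid. The threshold and sharpness knobs are made up for this sketch; the patent learns the mask with a trained neural network:

```python
import math

def recovery_light_mask(luminance, threshold=0.8, sharpness=10.0):
    """Map each pixel's luminance (0..1) to a probability that it is a light
    source, via a sigmoid centred on `threshold` (both knobs are illustrative)."""
    return [1.0 / (1.0 + math.exp(-sharpness * (l - threshold))) for l in luminance]
```

Dark pixels get probabilities near zero, pixels at the threshold get 0.5, and bright pixels approach one.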
  • Patent number: 10600171
    Abstract: Certain embodiments involve blending images using neural networks to automatically generate alignment or photometric adjustments that control image blending operations. For instance, a foreground image and a background image data are provided to an adjustment-prediction network that has been trained, using a reward network, to compute alignment or photometric adjustments that optimize blending reward scores. An adjustment action (e.g., an alignment or photometric adjustment) is computed by applying the adjustment-prediction network to the foreground image and the background image data. A target background region is extracted from the background image data by applying the adjustment action to the background image data. The target background region is blended with the foreground image, and the resultant blended image is outputted.
    Type: Grant
    Filed: March 7, 2018
    Date of Patent: March 24, 2020
    Assignee: Adobe Inc.
    Inventors: Jianming Zhang, Zhe Lin, Xiaohui Shen, Wei-Chih Hung, Joon-Young Lee
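The align-extract-blend pipeline can be sketched in one dimension. The exhaustive SSD search below stands in for the trained adjustment-prediction network, and the "reward" is just a negated squared error, assumed purely for illustration:

```python
def best_alignment(foreground, background):
    """Stand-in for the adjustment-prediction network: exhaustively score
    candidate offsets by a (negated) sum-of-squared-differences 'reward'."""
    w = len(foreground)
    scores = {x: -sum((f - b) ** 2 for f, b in zip(foreground, background[x:x + w]))
              for x in range(len(background) - w + 1)}
    return max(scores, key=scores.get)

def extract_region(background, x, w):
    """Crop a strip of width w starting at x (the 'adjustment action')."""
    return background[x:x + w]

def blend(foreground, region, alpha=0.5):
    """Simple alpha blend of the foreground over the extracted region."""
    return [alpha * f + (1 - alpha) * b for f, b in zip(foreground, region)]
```

In the patent the adjustment is predicted in one shot by a network trained against a reward network, rather than found by brute-force search.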
  • Patent number: 10593043
    Abstract: Systems and methods are disclosed for segmenting a digital image to identify an object portrayed in the digital image from background pixels in the digital image. In particular, in one or more embodiments, the disclosed systems and methods use a first neural network and a second neural network to generate image information used to generate a segmentation mask that corresponds to the object portrayed in the digital image. Specifically, in one or more embodiments, the disclosed systems and methods optimize a fit between a mask boundary of the segmentation mask to edges of the object portrayed in the digital image to accurately segment the object within the digital image.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: March 17, 2020
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Yibing Song, Xin Lu, Xiaohui Shen, Jimei Yang
  • Publication number: 20200074600
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Application
    Filed: November 8, 2019
    Publication date: March 5, 2020
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
  • Publication number: 20200065956
    Abstract: Systems and methods are disclosed for estimating aesthetic quality of digital images using deep learning. In particular, the disclosed systems and methods describe training a neural network to generate an aesthetic quality score for digital images. In particular, the neural network includes a training structure that compares relative rankings of pairs of training images to accurately predict a relative ranking of a digital image. Additionally, in training the neural network, an image rating system can utilize content-aware and user-aware sampling techniques to identify pairs of training images that have similar content and/or that have been rated by the same or different users. Using content-aware and user-aware sampling techniques, the neural network can be trained to accurately predict aesthetic quality ratings that reflect subjective opinions of most users as well as provide aesthetic scores for digital images that represent the wide spectrum of aesthetic preferences of various users.
    Type: Application
    Filed: October 31, 2019
    Publication date: February 27, 2020
    Inventors: Xiaohui Shen, Zhe Lin, Shu Kong, Radomir Mech
  • Patent number: 10565472
    Abstract: In embodiments of event image curation, a computing device includes memory that stores a collection of digital images associated with a type of event, such as a digital photo album of digital photos associated with the event, or a video of image frames and the video is associated with the event. A curation application implements a convolutional neural network, which receives the digital images and a designation of the type of event. The convolutional neural network can then determine an importance rating of each digital image within the collection of the digital images based on the type of the event. The importance rating of a digital image is representative of an importance of the digital image to a person in context of the type of the event. The convolutional neural network generates an output of representative digital images from the collection based on the importance rating of each digital image.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: February 18, 2020
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Yufei Wang, Radomir Mech, Xiaohui Shen, Gavin Stuart Peter Miller
  • Patent number: 10516830
    Abstract: Various embodiments describe facilitating real-time crops on an image. In an example, an image processing application executed on a device receives image data corresponding to a field of view of a camera of the device. The image processing application renders a major view on a display of the device in a preview mode. The major view presents a previewed image based on the image data. The image processing application receives a composition score of a cropped image from a deep-learning system. The image processing application renders a sub-view presenting the cropped image based on the composition score in the preview mode. Based on a user interaction, the image processing application renders the cropped image in the major view with the sub-view in the preview mode.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: December 24, 2019
    Assignee: Adobe Inc.
    Inventors: Jianming Zhang, Zijun Wei, Zhe Lin, Xiaohui Shen, Radomir Mech
  • Patent number: 10515443
    Abstract: Systems and methods are disclosed for estimating aesthetic quality of digital images using deep learning. In particular, the disclosed systems and methods describe training a neural network to generate an aesthetic quality score for digital images. In particular, the neural network includes a training structure that compares relative rankings of pairs of training images to accurately predict a relative ranking of a digital image. Additionally, in training the neural network, an image rating system can utilize content-aware and user-aware sampling techniques to identify pairs of training images that have similar content and/or that have been rated by the same or different users. Using content-aware and user-aware sampling techniques, the neural network can be trained to accurately predict aesthetic quality ratings that reflect subjective opinions of most users as well as provide aesthetic scores for digital images that represent the wide spectrum of aesthetic preferences of various users.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: December 24, 2019
    Assignee: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Shu Kong, Radomir Mech
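The pairwise training structure this abstract describes, comparing relative rankings of pairs of training images, is typically driven by a margin-based loss. A minimal sketch (the margin value is assumed, not taken from the patent):

```python
def pairwise_ranking_loss(score_hi, score_lo, margin=1.0):
    """Hinge-style pairwise loss: zero once the higher-rated image of the pair
    out-scores the lower-rated one by at least `margin`; otherwise the network
    is penalised in proportion to the violation."""
    return max(0.0, margin - (score_hi - score_lo))
```

Content-aware and user-aware sampling then decide *which* image pairs are fed to this loss, so the ranking is learned from comparable images and consistent raters.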
  • Patent number: 10497122
    Abstract: Various embodiments describe using a neural network to evaluate image crops in substantially real-time. In an example, a computer system performs unsupervised training of a first neural network based on unannotated image crops, followed by a supervised training of the first neural network based on annotated image crops. Once this first neural network is trained, the computer system inputs image crops generated from images to this trained network and receives composition scores therefrom. The computer system performs supervised training of a second neural network based on the images and the composition scores.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: December 3, 2019
    Assignee: Adobe Inc.
    Inventors: Jianming Zhang, Zijun Wei, Zhe Lin, Xiaohui Shen, Radomir Mech
  • Patent number: 10496699
    Abstract: A framework is provided for associating dense images with topics. The framework is trained utilizing images, each having multiple regions, multiple visual characteristics and multiple keyword tags associated therewith. For each region of each image, visual features are computed from the visual characteristics utilizing a convolutional neural network, and an image feature vector is generated from the visual features. The keyword tags are utilized to generate a weighted word vector for each image by calculating a weighted average of word vector representations representing keyword tags associated with the image. The image feature vector and the weighted word vector are aligned in a common embedding space and a heat map is computed for the image. Once trained, the framework can be utilized to automatically tag images and rank the relevance of images with respect to queried keywords based upon associated heat maps.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: December 3, 2019
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Jianming Zhang, Hailin Jin, Yingwei Li
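The weighted word vector step can be shown directly: a weighted average of the word vectors of an image's keyword tags. The two-dimensional vectors and weights in the test are toys; a real system would use pretrained word embeddings:

```python
def weighted_word_vector(tag_vectors, weights):
    """Weighted average of the word vectors for an image's keyword tags.
    `tag_vectors` maps tag -> vector; `weights` aligns with the tags in order."""
    total = sum(weights)
    dim = len(next(iter(tag_vectors.values())))
    avg = [0.0] * dim
    for (tag, vec), w in zip(tag_vectors.items(), weights):
        for i in range(dim):
            avg[i] += w * vec[i] / total
    return avg
```

The resulting vector is what gets aligned with the image feature vector in the common embedding space.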
  • Publication number: 20190362199
    Abstract: Techniques are disclosed for blur classification. The techniques utilize an image content feature map, a blur map, and an attention map, thereby combining low-level blur estimation with a high-level understanding of important image content in order to perform blur classification. The techniques allow for programmatically determining if blur exists in an image, and determining what type of blur it is (e.g., high blur, low blur, middle or neutral blur, or no blur). According to one example embodiment, if blur is detected, an estimate of spatially-varying blur amounts is performed and blur desirability is categorized in terms of image quality.
    Type: Application
    Filed: May 25, 2018
    Publication date: November 28, 2019
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Shanghang Zhang, Radomir Mech
  • Publication number: 20190361994
    Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. In order to train models of the convolutional neural networks, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the positive foreground or background digital image that is also included as part of the triplet.
    Type: Application
    Filed: May 22, 2018
    Publication date: November 28, 2019
    Applicant: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Kalyan Krishna Sunkavalli, Hengshuang Zhao, Brian Lynn Price
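Training on triplets of a matching foreground/background pair plus a dissimilar negative usually comes down to a triplet margin loss over the learned embeddings. A minimal sketch (the margin and the toy embeddings are assumptions for illustration):

```python
def l2_sq(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull the anchor toward its positive embedding and push it away from the
    negative one; loss is zero once the gap exceeds `margin`."""
    return max(0.0, l2_sq(anchor, positive) - l2_sq(anchor, negative) + margin)
```

Here the two-stream CNN would produce the anchor and candidate embeddings; the loss only sees the resulting vectors.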
  • Patent number: 10489688
    Abstract: Techniques and systems are described to determine personalized digital image aesthetics in a digital medium environment. In one example, a personalized offset is generated to adapt a generic model for digital image aesthetics. A generic model, once trained, is used to generate training aesthetics scores from a personal training data set that corresponds to an entity, e.g., a particular user, group of users, and so on. The image aesthetics system then generates residual scores (e.g., offsets) as a difference between the training aesthetics score and the personal aesthetics score for the personal training digital images. The image aesthetics system then employs machine learning to train a personalized model to predict the residual scores as a personalized offset using the residual scores and personal training digital images.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: November 26, 2019
    Assignee: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Radomir Mech, Jian Ren
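The residual-offset idea is simple to sketch: the personalized model only has to predict how far a user's ratings deviate from the generic model. In this toy version the "personalized model" is just the mean residual, purely for illustration; the patent trains a model to predict the residuals:

```python
def train_residuals(generic_scores, personal_scores):
    """Residual (offset) targets: how far the user's ratings sit from the
    generic model's predictions on their training images."""
    return [p - g for g, p in zip(generic_scores, personal_scores)]

def mean_offset(residuals):
    """Toy 'personalized model': a single constant offset."""
    return sum(residuals) / len(residuals)

def personalized_score(generic_score, predicted_offset):
    """At inference, the generic prediction is shifted by the learned offset."""
    return generic_score + predicted_offset
```
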
  • Publication number: 20190354802
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for utilizing a deep neural network-based model to identify similar digital images for query digital images. For example, the disclosed systems utilize a deep neural network-based model to analyze query digital images to generate deep neural network-based representations of the query digital images. In addition, the disclosed systems can generate results of visually-similar digital images for the query digital images based on comparing the deep neural network-based representations with representations of candidate digital images. Furthermore, the disclosed systems can identify visually similar digital images based on user-defined attributes and image masks to emphasize specific attributes or portions of query digital images.
    Type: Application
    Filed: May 18, 2018
    Publication date: November 21, 2019
    Inventors: Zhe Lin, Xiaohui Shen, Mingyang Ling, Jianming Zhang, Jason Kuen, Brett Butterfield
  • Publication number: 20190355102
    Abstract: Digital image completion by learning generation and patch matching jointly is described. Initially, a digital image having at least one hole is received. This holey digital image is provided as input to an image completer formed with a dual-stage framework that combines a coarse image neural network and an image refinement network. The coarse image neural network generates a coarse prediction of imagery for filling the holes of the holey digital image. The image refinement network receives the coarse prediction as input, refines the coarse prediction, and outputs a filled digital image having refined imagery that fills these holes. The image refinement network generates refined imagery using a patch matching technique, which includes leveraging information corresponding to patches of known pixels for filtering patches generated based on the coarse prediction. Based on this, the image completer outputs the filled digital image with the refined imagery.
    Type: Application
    Filed: May 15, 2018
    Publication date: November 21, 2019
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
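The patch-matching refinement can be illustrated with a nearest-patch lookup under sum-of-squared-differences. The real image refinement network learns this matching end to end, so the code below is only a toy analogue:

```python
def best_match(patch, known_patches):
    """Return the known patch with the smallest sum of squared differences
    to a patch from the coarse prediction."""
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(known_patches, key=lambda k: ssd(patch, k))

def refine(coarse_patches, known_patches):
    """Second-stage refinement: replace each coarse patch with its nearest
    known patch (a stand-in for the learned patch-matching layer)."""
    return [best_match(p, known_patches) for p in coarse_patches]
```

The coarse network's rough fill steers the search; the known pixels supply the texture that ends up in the final image.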
  • Patent number: 10475169
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: November 12, 2019
    Assignee: Adobe Inc.
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
  • Publication number: 20190342007
    Abstract: An example remote radio apparatus is provided, including a body, a mainboard, a mainboard heat sink, a maintenance cavity, an optical module, and an optical module heat sink. The maintenance cavity and the optical module heat sink are integrally connected, while the optical module is mounted on a bottom surface of the optical module heat sink. The maintenance cavity and the optical module heat sink are mounted on a side surface of the body, and the mainboard heat sink is mounted on and covers the mainboard. The mainboard heat sink and the mainboard are installed on a front surface of the body, and the mainboard heat sink and the optical module heat sink are spaced by a preset distance. The temperature of the optical module is controlled within a range required by a specification.
    Type: Application
    Filed: July 19, 2019
    Publication date: November 7, 2019
    Inventors: Xiaoming SHI, Xiaohui SHEN, Dan LIANG, Haigang XIONG, Haizheng TANG
  • Patent number: 10467529
    Abstract: In embodiments of convolutional neural network joint training, a computing system memory maintains different data batches of multiple digital image items, where the digital image items of the different data batches have some common features. A convolutional neural network (CNN) receives input of the digital image items of the different data batches, and classifier layers of the CNN are trained to recognize the common features in the digital image items of the different data batches. The recognized common features are input to fully-connected layers of the CNN that distinguish between the recognized common features of the digital image items of the different data batches. A scoring difference is determined between item pairs of the digital image items in a particular one of the different data batches. A piecewise ranking loss algorithm maintains the scoring difference between the item pairs, and the scoring difference is used to train CNN regression functions.
    Type: Grant
    Filed: June 8, 2016
    Date of Patent: November 5, 2019
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Yufei Wang, Radomir Mech, Xiaohui Shen, Gavin Stuart Peter Miller
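A piecewise ranking loss of the kind referenced treats similar and clearly ordered pairs differently: pairs with close ground-truth importance are pushed toward a small score gap, ordered pairs toward a gap of at least a margin. The band edges below are illustrative, not values from the patent:

```python
def piecewise_ranking_loss(pred_diff, true_diff, low=0.1, high=0.5):
    """Illustrative piecewise loss on a pair's predicted score difference.
    `true_diff` is the ground-truth gap; `pred_diff` is the model's gap."""
    if true_diff < low:                        # similar items: gap should stay small
        return max(0.0, abs(pred_diff) - low) ** 2
    return max(0.0, high - pred_diff) ** 2     # ordered items: enforce a margin
```

This is how the scoring difference between item pairs is "maintained": similar pairs are not forced apart, while genuinely different pairs are kept separated.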
  • Publication number: 20190332937
    Abstract: Provided are systems and techniques that provide an output phrase describing an image. An example method includes creating, with a convolutional neural network, feature maps describing image features in locations in the image. The method also includes providing a skeletal phrase for the image by processing the feature maps with a first long short-term memory (LSTM) neural network trained based on a first set of ground truth phrases which exclude attribute words. Then, attribute words are provided by processing the skeletal phrase and the feature maps with a second LSTM neural network trained based on a second set of ground truth phrases including words for attributes. Then, the method combines the skeletal phrase and the attribute words to form the output phrase.
    Type: Application
    Filed: July 10, 2019
    Publication date: October 31, 2019
    Inventors: Zhe Lin, Yufei Wang, Scott Cohen, Xiaohui Shen