Patents by Inventor Xiaohui Shen

Xiaohui Shen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104810
    Abstract: Embodiments of the disclosure provide a method and a device for processing a portrait image. The method includes: acquiring a to-be-processed portrait image; inputting the to-be-processed portrait image into an image processing model, and acquiring a head smear image output by the image processing model, where the image processing model is configured to smear a hair area of a portrait located above a preset boundary in the portrait image, and the image processing model is generated by training on a sample data set of sample portrait images and corresponding sample head smear images; rendering the head smear image with a head effect material to obtain a portrait image with the effect added; and displaying the portrait image with the effect added.
    Type: Application
    Filed: November 22, 2021
    Publication date: March 28, 2024
    Inventors: Xiao YANG, Jianwei LI, Ding LIU, Yangyue WAN, Xiaohui SHEN, Jianchao YANG
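The two-stage pipeline in this abstract (a learned model smears the hair region, then an effect material is rendered over it) can be sketched in miniature. This is an illustrative toy only: the real smearing step is a trained network, while here a stand-in simply fills rows above a preset boundary; the function names and the list-of-rows image format are assumptions.

```python
def smear_head(image, boundary_row, smear=0):
    """Stand-in for the learned model: smear every row above the preset boundary."""
    return [[smear] * len(row) if y < boundary_row else list(row)
            for y, row in enumerate(image)]

def render_effect(smeared, material):
    """Composite a head-effect material over the smeared pixels (value 0)."""
    return [[material if px == 0 else px for px in row] for row in smeared]

portrait = [[1, 2], [3, 4], [5, 6]]          # tiny 3x2 stand-in "portrait"
smeared = smear_head(portrait, boundary_row=1)
result = render_effect(smeared, material=9)
```

The split mirrors the claim structure: the model output (head smear image) is an intermediate artifact that the effect-rendering step consumes.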
  • Patent number: 11868889
    Abstract: In implementations of object detection in images, object detectors are trained using heterogeneous training datasets. A first training dataset is used to train an image tagging network to determine an attention map of an input image for a target concept. A second training dataset is used to train a conditional detection network that accepts as conditional inputs the attention map and a word embedding of the target concept. Despite the conditional detection network being trained with a training dataset having a small number of seen classes (e.g., classes in a training dataset), it generalizes to novel, unseen classes by concept conditioning, since the target concept propagates through the conditional detection network via the conditional inputs, thus influencing classification and region proposal. Hence, classes of objects that can be detected are expanded, without the need to scale training databases to include additional classes.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Mingyang Ling, Jianming Zhang, Jason Wen Yong Kuen
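The concept-conditioning idea above (an attention map and a word embedding fed as conditional inputs, so unseen classes can still be scored) can be sketched as follows. The dot-product scoring and all names here are illustrative assumptions, not the patented network.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def conditional_scores(region_feats, attention, concept_embedding):
    """Score each region for the target concept after attention weighting."""
    return [att * dot(feat, concept_embedding)
            for feat, att in zip(region_feats, attention)]

regions = [[1.0, 0.0], [0.0, 1.0]]   # toy features for two region proposals
attn = [0.9, 0.1]                    # attention map over the two regions
cat_embedding = [1.0, 0.0]           # word embedding for the target concept
scores = conditional_scores(regions, attn, cat_embedding)
```

Because the concept enters only through its embedding, swapping in the embedding of a novel class re-ranks the same regions without retraining, which is the generalization property the abstract describes.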
  • Patent number: 11836887
    Abstract: A video generation method includes: acquiring an original image corresponding to a target frame, and identifying a target object in the original image; according to a sliding zooming strategy, performing sliding zooming processing on an initial background image in the original image excluding the target object, so as to obtain a target background image, wherein the sliding zooming strategy is at least used for indicating a sliding direction and a zooming direction of the initial background image, and the sliding direction is opposite to the zooming direction; according to the position of the target object in the original image, superimposing an image of the target object onto the target background image to obtain a target image corresponding to the target frame; and generating a target video on the basis of the target image corresponding to the target frame.
    Type: Grant
    Filed: December 29, 2022
    Date of Patent: December 5, 2023
    Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
    Inventors: Xiaojie Jin, Xiaohui Shen, Yan Wang
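The sliding-zoom strategy (zoom direction opposite to slide direction, as in a dolly-zoom) reduces to simple crop-window arithmetic on the background. A minimal sketch, assuming linear per-frame schedules and integer pixel coordinates; the parameter names are illustrative.

```python
def background_windows(width, frames, shrink_per_frame=10, slide_per_frame=2):
    """Per-frame (left, right) crop of the background: the window shrinks
    (zooming in) while its center slides the opposite way."""
    windows = []
    for t in range(frames):
        w = width - shrink_per_frame * t            # zoom in: window shrinks
        center = width // 2 - slide_per_frame * t   # slide opposite the zoom
        windows.append((center - w // 2, center + w // 2))
    return windows

wins = background_windows(width=100, frames=3)
```

Each window is then rescaled to the output size and the (unzoomed) target object is composited back at its original position, per the abstract.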
  • Publication number: 20230386521
    Abstract: Provided are a transition type determination method, an electronic device and a storage medium. The method includes: acquiring a picture matching degree between a candidate transition type and a transition position of two adjacent video clips, and acquiring a music matching degree of the candidate transition type and background music of a video to which the two adjacent video clips belong; and determining a target transition type for the transition position according to the picture matching degree and the music matching degree, where the target transition type is used for a transition effect between the two adjacent video clips.
    Type: Application
    Filed: August 15, 2023
    Publication date: November 30, 2023
    Inventors: Xiaojie JIN, Xuchen SONG, Gen LI, Yan WANG, Xiaohui SHEN
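Combining the two matching degrees into a single score for picking the target transition type can be sketched as a weighted sum. The weights and the candidate format are assumptions for illustration; the patent does not specify this exact combination rule.

```python
def pick_transition(candidates, picture_weight=0.6, music_weight=0.4):
    """candidates: {name: (picture_match, music_match)} -> best transition name."""
    def combined(item):
        _, (pic, mus) = item
        return picture_weight * pic + music_weight * mus
    return max(candidates.items(), key=combined)[0]

best = pick_transition({"fade": (0.8, 0.4), "wipe": (0.5, 0.9)})
```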
  • Patent number: 11816888
    Abstract: Embodiments of the present invention provide an automated image tagging system that can predict a set of tags, along with relevance scores, that can be used for keyword-based image retrieval, image tag proposal, and image tag auto-completion based on user input. Initially, during training, a clustering technique is utilized to reduce cluster imbalance in the data that is input into a convolutional neural network (CNN) for training feature data. In embodiments, the clustering technique can also be utilized to compute data point similarity that can be utilized for tag propagation (to tag untagged images). During testing, a diversity based voting framework is utilized to overcome user tagging biases. In some embodiments, bigram re-weighting can down-weight a keyword that is likely to be part of a bigram based on a predicted tag set.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: November 14, 2023
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Jonathan Brandt, Jianming Zhang, Chen Fang
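The bigram re-weighting step mentioned at the end of the abstract can be sketched directly: a unigram tag whose word appears in a predicted bigram (e.g. "new" when "new york" is also predicted) is down-weighted. The 0.5 factor and the tag-score format are illustrative assumptions.

```python
def reweight_bigrams(tag_scores, bigrams, factor=0.5):
    """Down-weight unigram tags whose words appear in a predicted bigram."""
    predicted = set(tag_scores)
    out = {}
    for tag, score in tag_scores.items():
        in_bigram = any(" " in bg and tag in bg.split() and bg in predicted
                        for bg in bigrams)
        out[tag] = score * factor if in_bigram else score
    return out

scores = reweight_bigrams(
    {"new": 0.9, "york": 0.8, "new york": 0.95, "sky": 0.7},
    bigrams={"new york"})
```

The bigram itself keeps its score; only its constituent unigrams are suppressed, which favors the more specific tag in keyword retrieval.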
  • Patent number: 11804043
    Abstract: The present disclosure describes techniques for detecting objects in a video. The techniques comprise: extracting features from each of a plurality of frames of the video; generating a first attentive feature by applying a first attention model to at least some of the features extracted from any particular frame among the plurality of frames, wherein the first attention model identifies correlations between a plurality of locations in the particular frame by computing relationships between any two locations among the plurality of locations; generating a second attentive feature by applying a second attention model to at least one pair of features at different levels selected from the features extracted from the particular frame, wherein the second attention model identifies a correlation between at least one pair of locations corresponding to the at least one pair of features; and generating a representation of an object included in the particular frame.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: October 31, 2023
    Assignee: Lemon Inc.
    Inventors: Xiaojie Jin, Yi-Wen Chen, Xiaohui Shen
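The first attention model described (pairwise relationships between any two locations in a frame) follows the shape of non-local self-attention. A pure-Python toy, assuming one feature vector per location; the real model operates on learned feature maps.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def location_attention(feats):
    """feats: one vector per location -> attention-mixed vectors."""
    out = []
    for q in feats:
        sims = [dot(q, k) for k in feats]      # pairwise location correlations
        exps = [math.exp(s) for s in sims]
        total = sum(exps)
        weights = [e / total for e in exps]    # softmax over all locations
        mixed = [sum(w * v[i] for w, v in zip(weights, feats))
                 for i in range(len(q))]
        out.append(mixed)
    return out

attended = location_attention([[1.0, 0.0], [0.0, 1.0]])
```

Each output location is a softmax-weighted mix of every location's features, so long-range correlations inform the object representation, as the abstract claims.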
  • Patent number: 11783861
    Abstract: Provided are a transition type determination method, an electronic device and a storage medium. The method includes: acquiring a picture matching degree of a candidate transition type with a transition position between two adjacent video clips, and acquiring a music matching degree of the candidate transition type; determining, based on the acquired picture matching degree and the acquired music matching degree, a matching degree of the candidate transition type at the transition position; and determining, according to the matching degree, whether to determine the candidate transition type as a target transition type, where the target transition type is used for a transition effect between the two adjacent video clips.
    Type: Grant
    Filed: December 22, 2022
    Date of Patent: October 10, 2023
    Assignees: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD., BYTEDANCE INC.
    Inventors: Xiaojie Jin, Xuchen Song, Gen Li, Yan Wang, Xiaohui Shen
  • Patent number: 11769466
    Abstract: Embodiments of the present application provide an image display method and apparatus, a device, and a storage medium. The method comprises: displaying a preceding image in a first display period, the preceding image comprising a video image sequence or a single image; superimposing a foreground target area of a succeeding image on an upper layer of the preceding image in a second display period for display, the succeeding image comprising a video image sequence or a single image; and displaying the succeeding image in a third display period. According to the method, a good playback effect can be achieved in scenarios where time variations are desired.
    Type: Grant
    Filed: August 12, 2022
    Date of Patent: September 26, 2023
    Assignee: DOUYIN VISION CO., LTD.
    Inventors: Peng Wang, Xiaohui Shen, Yan Wang
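The three display periods in this abstract can be sketched as simple compositing: the preceding image alone, then the succeeding image's foreground overlaid on it, then the succeeding image alone. The flat-list frame format and None-as-transparent convention are simplifying assumptions.

```python
def composite(foreground, background):
    """Overlay a foreground target area; None pixels are transparent."""
    return [b if f is None else f for f, b in zip(foreground, background)]

def display_periods(preceding, succeeding, succeeding_foreground):
    first = preceding                                   # first display period
    second = composite(succeeding_foreground, preceding)  # second period
    third = succeeding                                  # third display period
    return [first, second, third]

frames = display_periods(
    preceding=[1, 1, 1],
    succeeding=[2, 2, 2],
    succeeding_foreground=[None, 2, None])
```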
  • Patent number: 11663762
    Abstract: Embodiments of the present invention are directed to facilitating region of interest preservation. In accordance with some embodiments of the present invention, a region of interest preservation score using adaptive margins is determined. The region of interest preservation score indicates an extent to which at least one region of interest is preserved in a candidate image crop associated with an image. A region of interest positioning score is determined that indicates an extent to which a position of the at least one region of interest is preserved in the candidate image crop associated with the image. The region of interest preservation score and/or the region of interest positioning score are used to select a set of one or more candidate image crops as image crop suggestions.
    Type: Grant
    Filed: October 29, 2020
    Date of Patent: May 30, 2023
    Assignee: Adobe Inc.
    Inventors: Jianming Zhang, Zhe Lin, Radomir Mech, Xiaohui Shen
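A region-of-interest preservation score of the kind described can be sketched as the fraction of the (margin-shrunk) ROI area that a candidate crop keeps. The (x0, y0, x1, y1) rectangle format and this exact formula are illustrative assumptions.

```python
def intersect_area(a, b):
    """Overlap area of two (x0, y0, x1, y1) rectangles."""
    w = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    return w * h

def preservation_score(roi, crop, margin=0):
    """Fraction of the margin-shrunk ROI that the candidate crop preserves."""
    x0, y0, x1, y1 = roi
    core = (x0 + margin, y0 + margin, x1 - margin, y1 - margin)
    area = (core[2] - core[0]) * (core[3] - core[1])
    return intersect_area(core, crop) / area if area > 0 else 0.0

score = preservation_score(roi=(0, 0, 10, 10), crop=(0, 0, 10, 5))
```

An adaptive margin shrinks the ROI before measuring, so crops that clip only the ROI's edges are penalized less than crops cutting through its center.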
  • Publication number: 20230153941
    Abstract: A video generation method includes: acquiring an original image corresponding to a target frame, and identifying a target object in the original image; according to a sliding zooming strategy, performing sliding zooming processing on an initial background image in the original image excluding the target object, so as to obtain a target background image, wherein the sliding zooming strategy is at least used for indicating a sliding direction and a zooming direction of the initial background image, and the sliding direction is opposite to the zooming direction; according to the position of the target object in the original image, superimposing an image of the target object onto the target background image to obtain a target image corresponding to the target frame; and generating a target video on the basis of the target image corresponding to the target frame.
    Type: Application
    Filed: December 29, 2022
    Publication date: May 18, 2023
    Inventors: Xiaojie JIN, Xiaohui SHEN, Yan WANG
  • Publication number: 20230126653
    Abstract: Provided are a transition type determination method, an electronic device and a storage medium. The method includes: acquiring a picture matching degree of a candidate transition type with a transition position between two adjacent video clips, and acquiring a music matching degree of the candidate transition type; determining, based on the acquired picture matching degree and the acquired music matching degree, a matching degree of the candidate transition type at the transition position; and determining, according to the matching degree, whether to determine the candidate transition type as a target transition type, where the target transition type is used for a transition effect between the two adjacent video clips.
    Type: Application
    Filed: December 22, 2022
    Publication date: April 27, 2023
    Inventors: Xiaojie JIN, Xuchen SONG, Gen LI, Yan WANG, Xiaohui SHEN
  • Patent number: 11544831
    Abstract: The present disclosure relates to training and utilizing an image exposure transformation network to generate a long-exposure image from a single short-exposure image (e.g., still image). In various embodiments, the image exposure transformation network is trained using adversarial learning, long-exposure ground truth images, and a multi-term loss function. In some embodiments, the image exposure transformation network includes an optical flow prediction network and/or an appearance guided attention network. Trained embodiments of the image exposure transformation network generate realistic long-exposure images from single short-exposure images without additional information.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: January 3, 2023
    Assignee: Adobe Inc.
    Inventors: Yilin Wang, Zhe Lin, Zhaowen Wang, Xin Lu, Xiaohui Shen, Chih-Yao Hsieh
  • Publication number: 20220398402
    Abstract: The present disclosure describes techniques for detecting objects in a video. The techniques comprise: extracting features from each of a plurality of frames of the video; generating a first attentive feature by applying a first attention model to at least some of the features extracted from any particular frame among the plurality of frames, wherein the first attention model identifies correlations between a plurality of locations in the particular frame by computing relationships between any two locations among the plurality of locations; generating a second attentive feature by applying a second attention model to at least one pair of features at different levels selected from the features extracted from the particular frame, wherein the second attention model identifies a correlation between at least one pair of locations corresponding to the at least one pair of features; and generating a representation of an object included in the particular frame.
    Type: Application
    Filed: June 15, 2021
    Publication date: December 15, 2022
    Inventors: Xiaojie JIN, Yi-Wen CHEN, Xiaohui SHEN
  • Publication number: 20220383836
    Abstract: Embodiments of the present application provide an image display method and apparatus, a device, and a storage medium. The method comprises: displaying a preceding image in a first display period, the preceding image comprising a video image sequence or a single image; superimposing a foreground target area of a succeeding image on an upper layer of the preceding image in a second display period for display, the succeeding image comprising a video image sequence or a single image; and displaying the succeeding image in a third display period. According to the method, a good playback effect can be achieved in scenarios where time variations are desired.
    Type: Application
    Filed: August 12, 2022
    Publication date: December 1, 2022
    Inventors: Peng WANG, Xiaohui SHEN, Yan WANG
  • Patent number: 11507800
    Abstract: Semantic segmentation techniques and systems are described that overcome the challenges of limited availability of training data to describe the potentially millions of tags that may be used to describe semantic classes in digital images. In one example, the techniques are configured to train neural networks to leverage different types of training datasets using sequential neural networks and use of vector representations to represent the different semantic classes.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: November 22, 2022
    Assignee: ADOBE INC.
    Inventors: Zhe Lin, Yufei Wang, Xiaohui Shen, Scott David Cohen, Jianming Zhang
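The vector representations of semantic classes mentioned above imply a simple assignment rule: a predicted embedding is matched to the most similar class vector. A cosine-similarity sketch, with all vectors and names illustrative; the patented system learns these representations rather than hand-specifying them.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def assign_class(pixel_embedding, class_vectors):
    """Pick the semantic class whose vector is most similar to the embedding."""
    return max(class_vectors,
               key=lambda c: cosine(pixel_embedding, class_vectors[c]))

label = assign_class([0.9, 0.1], {"sky": [1.0, 0.0], "grass": [0.0, 1.0]})
```

Because classes live in a shared embedding space, tags never seen at training time can still be matched by proximity, which is how the approach scales past a fixed label set.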
  • Patent number: 11443412
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: September 13, 2022
    Assignee: ADOBE INC.
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
  • Patent number: 11436775
    Abstract: Predicting patch displacement maps using a neural network is described. Initially, a digital image on which an image editing operation is to be performed is provided as input to a patch matcher having an offset prediction neural network. From this image and based on the image editing operation for which this network is trained, the offset prediction neural network generates an offset prediction formed as a displacement map, which has offset vectors that represent a displacement of pixels of the digital image to different locations for performing the image editing operation. Pixel values of the digital image are copied to the image pixels affected by the operation.
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: September 6, 2022
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu
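Applying a predicted displacement map as described (each affected pixel takes its value from the source pixel its offset vector points at) can be sketched in one dimension. The 1-D image and the {index: offset} map format are simplifying assumptions.

```python
def apply_displacement(image, offsets):
    """offsets: {target_index: source_offset}; copy source pixel values in."""
    out = list(image)
    for target, offset in offsets.items():
        source = target + offset
        if 0 <= source < len(image):        # ignore offsets that leave the image
            out[target] = image[source]
    return out

# Fill a "hole" at index 2 (value None) from the pixel two steps to its left.
filled = apply_displacement([7, 8, None, 9], {2: -2})
```

In the patent the offset prediction network emits one such vector per affected pixel, so the copy step is a dense gather rather than this sparse dictionary.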
  • Patent number: 11410038
    Abstract: Various embodiments describe frame selection based on training and using a neural network. In an example, the neural network is a convolutional neural network trained with training pairs. Each training pair includes two training frames from a frame collection. The loss function relies on the estimated quality difference between the two training frames. Further, the definition of the loss function varies based on the actual quality difference between these two frames. In a further example, the neural network is trained by incorporating facial heatmaps generated from the training frames and facial quality scores of faces detected in the training frames. In addition, the training involves using a feature mean that represents an average of the features of the training frames belonging to the same frame collection. Once the neural network is trained, a frame collection is input thereto and a frame is selected based on generated quality scores.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: August 9, 2022
    Assignee: ADOBE INC.
    Inventors: Zhe Lin, Xiaohui Shen, Radomir Mech, Jian Ren
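The abstract says the loss definition varies with the actual quality difference between the two training frames. One natural reading is a hinge loss whose margin grows with the true gap; the hinge form and the scaling factor are assumptions for illustration.

```python
def pairwise_frame_loss(pred_better, pred_worse, actual_gap, scale=1.0):
    """Hinge loss with a margin proportional to the true quality difference."""
    margin = scale * actual_gap
    return max(0.0, margin - (pred_better - pred_worse))

# A pair already separated by more than the margin incurs no loss...
loss_easy = pairwise_frame_loss(pred_better=0.9, pred_worse=0.1, actual_gap=0.5)
# ...while a barely separated pair is penalized.
loss_hard = pairwise_frame_loss(pred_better=0.4, pred_worse=0.3, actual_gap=0.5)
```

This way clearly different frames are pushed further apart than near-ties, matching the abstract's claim that the loss depends on the estimated and actual quality differences.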
  • Publication number: 20220157054
    Abstract: In implementations of object detection in images, object detectors are trained using heterogeneous training datasets. A first training dataset is used to train an image tagging network to determine an attention map of an input image for a target concept. A second training dataset is used to train a conditional detection network that accepts as conditional inputs the attention map and a word embedding of the target concept. Despite the conditional detection network being trained with a training dataset having a small number of seen classes (e.g., classes in a training dataset), it generalizes to novel, unseen classes by concept conditioning, since the target concept propagates through the conditional detection network via the conditional inputs, thus influencing classification and region proposal. Hence, classes of objects that can be detected are expanded, without the need to scale training databases to include additional classes.
    Type: Application
    Filed: January 31, 2022
    Publication date: May 19, 2022
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Mingyang Ling, Jianming Zhang, Jason Wen Yong Kuen
  • Patent number: 11334971
    Abstract: Digital image completion by learning generation and patch matching jointly is described. Initially, a digital image having at least one hole is received. This holey digital image is provided as input to an image completer formed with a dual-stage framework that combines a coarse image neural network and an image refinement network. The coarse image neural network generates a coarse prediction of imagery for filling the holes of the holey digital image. The image refinement network receives the coarse prediction as input, refines the coarse prediction, and outputs a filled digital image having refined imagery that fills these holes. The image refinement network generates refined imagery using a patch matching technique, which includes leveraging information corresponding to patches of known pixels for filtering patches generated based on the coarse prediction. Based on this, the image completer outputs the filled digital image with the refined imagery.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: May 17, 2022
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Xiaohui Shen, Jimei Yang, Jiahui Yu