Patents by Inventor Jianchao Yang

Jianchao Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10565518
    Abstract: The present disclosure is directed to collaborative feature learning using social media data. For example, a machine learning system may identify social media data that includes user behavioral data, which indicates user interactions with content items. Using the identified user behavioral data, the machine learning system may determine latent representations of the content items. In some embodiments, the machine learning system may train a machine-learning model based on the latent representations. Further, the machine learning system may extract features of the content items from the trained machine-learning model.
    Type: Grant
    Filed: June 23, 2015
    Date of Patent: February 18, 2020
    Assignee: Adobe Inc.
    Inventors: Hailin Jin, Chen Fang, Jianchao Yang, Zhe Lin
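The workflow this abstract describes (behavioral data → latent representations → item features) can be illustrated with a deliberately simple stand-in. This is not the patented method: it just represents each content item by its column of a hypothetical user-interaction matrix and reads item similarity off those behavioral vectors.

```python
import math

# Hypothetical sketch: represent each content item by the vector of user
# interactions with it (who interacted, how often). These behavioral
# vectors act as crude latent representations of the items.
def item_vectors(interactions):
    # interactions[user][item] -> one column vector per item
    n_items = len(interactions[0])
    return [[row[i] for row in interactions] for i in range(n_items)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

interactions = [  # rows: users, columns: content items (made-up data)
    [3, 0, 1],
    [2, 0, 1],
    [0, 4, 0],
]
items = item_vectors(interactions)
# Items 0 and 2 share an audience, so their behavioral vectors align.
print(round(cosine(items[0], items[2]), 3))  # → 0.981
```

A real system would learn lower-dimensional latent representations (e.g. by matrix factorization or a neural model) rather than using raw columns, but the behavioral-similarity signal is the same.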
  • Publication number: 20200050866
    Abstract: A mobile device can generate real-time complex visual image effects using an asynchronous processing pipeline. A first pipeline applies a complex image process, such as a neural network, to keyframes of a live image sequence. A second pipeline generates flow maps that describe feature transformations in the image sequence. The flow maps can be used to process non-keyframes on the fly. The processed keyframes and non-keyframes can be used to display a complex visual effect on the mobile device in real time or near real time.
    Type: Application
    Filed: October 16, 2019
    Publication date: February 13, 2020
    Inventors: Samuel Edward Hare, Fedir Poliakov, Guohui Wang, Xuehan Xiong, Jianchao Yang, Linjie Yang, Shah Tanmay Anilkumar
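The two-pipeline idea above can be sketched in a few lines. All of the functions here are hypothetical stand-ins (the "expensive effect" substitutes for a neural network, and the "flow map" for real optical flow); the point is only the structure: heavy work runs on keyframes, and cheap flow-based warping fills in the frames between.

```python
# Hypothetical sketch of the asynchronous two-pipeline design:
# an expensive effect runs only on keyframes, while cheap flow maps
# propagate the last keyframe result to the frames in between.
KEYFRAME_INTERVAL = 3

def expensive_effect(frame):
    # Stand-in for a neural-network style transform.
    return [v * 2 for v in frame]

def flow_map(prev_frame, frame):
    # Stand-in flow: per-element difference between consecutive frames.
    return [b - a for a, b in zip(prev_frame, frame)]

def apply_flow(processed, flow):
    # Warp the last processed result using the flow map.
    return [p + f for p, f in zip(processed, flow)]

def process_sequence(frames):
    out, last = [], None
    for i, frame in enumerate(frames):
        if i % KEYFRAME_INTERVAL == 0:
            last = expensive_effect(frame)  # slow pipeline (keyframes)
        else:
            # fast pipeline (non-keyframes, processed on the fly)
            last = apply_flow(last, flow_map(frames[i - 1], frame))
        out.append(last)
    return out

frames = [[1, 1], [2, 2], [3, 3], [4, 4]]
print(process_sequence(frames))  # → [[2, 2], [3, 3], [4, 4], [8, 8]]
```

In a real implementation the two pipelines would run asynchronously (the flow pipeline never waits for the neural network), which is what makes the effect feel real-time on a phone.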
  • Patent number: 10474900
    Abstract: A mobile device can generate real-time complex visual image effects using an asynchronous processing pipeline. A first pipeline applies a complex image process, such as a neural network, to keyframes of a live image sequence. A second pipeline generates flow maps that describe feature transformations in the image sequence. The flow maps can be used to process non-keyframes on the fly. The processed keyframes and non-keyframes can be used to display a complex visual effect on the mobile device in real time or near real time.
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: November 12, 2019
    Assignee: Snap Inc.
    Inventors: Samuel Edward Hare, Fedir Poliakov, Guohui Wang, Xuehan Xiong, Jianchao Yang, Linjie Yang, Shah Tanmay Anilkumar
  • Publication number: 20190279046
    Abstract: Systems, devices, media, and methods are presented for identifying and categorically labeling objects within a set of images. The systems and methods receive an image depicting an object of interest, detect at least a portion of the object of interest within the image using a multilayer object model, determine context information, and identify the object of interest included in two or more bounding boxes.
    Type: Application
    Filed: May 28, 2019
    Publication date: September 12, 2019
    Inventors: Wei Han, Jianchao Yang, Ning Zhang, Jia Li
  • Patent number: 10402689
    Abstract: A machine learning system can generate an image mask (e.g., a pixel mask) comprising pixel assignments for pixels. The pixels can be assigned to classes, including, for example, face, clothes, body skin, or hair. The machine learning system can be implemented using a convolutional neural network that is configured to execute efficiently on computing devices having limited resources, such as mobile phones. The pixel mask can be used to more accurately display video effects interacting with a user or subject depicted in the image.
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: September 3, 2019
    Assignee: Snap Inc.
    Inventors: Lidiia Bogdanovych, William Brendel, Samuel Edward Hare, Fedir Poliakov, Guohui Wang, Xuehan Xiong, Jianchao Yang, Linjie Yang
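The pixel-mask idea above reduces to a per-pixel argmax over class score maps. The sketch below is hypothetical: the hand-written scores stand in for the output of the lightweight convolutional network the abstract describes, and the class list mirrors the examples given (face, clothes, body skin, hair).

```python
# Hypothetical sketch: assign every pixel to one of a small set of
# classes by taking the argmax over per-class score maps.
CLASSES = ["face", "clothes", "skin", "hair"]

def pixel_mask(score_maps):
    """score_maps[c][y][x] = score of class c at pixel (y, x)."""
    h, w = len(score_maps[0]), len(score_maps[0][0])
    return [[max(range(len(CLASSES)), key=lambda c: score_maps[c][y][x])
             for x in range(w)] for y in range(h)]

scores = [  # made-up network output for a 2x2 image
    [[0.9, 0.1], [0.2, 0.1]],  # face
    [[0.0, 0.7], [0.1, 0.2]],  # clothes
    [[0.1, 0.1], [0.8, 0.1]],  # skin
    [[0.0, 0.1], [0.1, 0.9]],  # hair
]
mask = pixel_mask(scores)
print([[CLASSES[c] for c in row] for row in mask])
# → [['face', 'clothes'], ['skin', 'hair']]
```

The resulting mask is what lets a video effect interact selectively with, say, hair pixels but not clothes pixels.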
  • Publication number: 20190268716
    Abstract: Systems and methods are provided for receiving, at a first computing device, a request from a user to activate a new media collection, sending, by the first computing device, the request to a server computer for activation of the new media collection, receiving, by the first computing device, confirmation that the new media collection was activated, receiving, at the first computing device, a plurality of content messages associated with the new media collection, receiving, at the first computing device, from the user, a selection of the plurality of content messages to be included in the new media collection, sending, to the server computer, an indication of the selection of the content messages to be included in the new media collection, wherein the server computer causes the selection of content messages to be included in the new media collection and displayed in response to a request from at least a second computing device to view the new media collection.
    Type: Application
    Filed: March 5, 2019
    Publication date: August 29, 2019
    Inventors: Nicholas Richard Allen, Sheldon Chang, Maria Pavlovskaia, Amer Shahnawaz, Jianchao Yang
  • Patent number: 10395100
    Abstract: Systems, devices, media, and methods are presented for modeling facial representations using image segmentation with a client device. The systems and methods receive an image depicting a face, detect at least a portion of the face within the image, and identify a set of facial features within the portion of the face. The systems and methods generate a descriptor function representing the set of facial features, fit object functions of the descriptor function, identify an identification probability for each facial feature, and assign an identification to each facial feature.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: August 27, 2019
    Assignee: Snap Inc.
    Inventors: Jia Li, Xutao Lv, Xiaoyu Wang, Xuehan Xiong, Jianchao Yang
  • Patent number: 10387514
    Abstract: Systems, devices, methods, media, and instructions for automated image processing and content curation are described. In one embodiment, a server computer system receives a plurality of content communications from a plurality of client devices, each content communication comprising an associated piece of content and corresponding metadata. Each content communication is processed to determine associated context values for each piece of content, each associated context value comprising at least one content value generated by machine vision processing of the associated piece of content. A first content collection is automatically generated based on the context values, and a set of user accounts is associated with the collection. An identifier associated with the first content collection is published to user devices associated with the user accounts. In various additional embodiments, different content values, image processing operations, and content selection operations are used to curate content collections.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: August 20, 2019
    Assignee: Snap Inc.
    Inventors: Jianchao Yang, Yuke Zhu, Ning Xu, Kevin Dechau Tang, Jia Li
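The curation step above (machine-vision content values plus selection operations) can be sketched as a simple filter-and-rank. Everything here is hypothetical: the label/quality fields stand in for the "content values" the abstract attributes to machine vision processing.

```python
# Hypothetical sketch: content messages carry context values (a
# machine-vision label set plus a quality score); a collection is
# curated by keeping the highest-quality items matching its topic.
def curate(messages, topic, min_quality=0.5):
    matched = [m for m in messages
               if topic in m["labels"] and m["quality"] >= min_quality]
    return sorted(matched, key=lambda m: m["quality"], reverse=True)

messages = [  # made-up content messages
    {"id": 1, "labels": {"beach", "sunset"}, "quality": 0.9},
    {"id": 2, "labels": {"beach"}, "quality": 0.4},   # below threshold
    {"id": 3, "labels": {"city"}, "quality": 0.8},    # wrong topic
    {"id": 4, "labels": {"beach"}, "quality": 0.7},
]
print([m["id"] for m in curate(messages, "beach")])  # → [1, 4]
```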
  • Patent number: 10382373
    Abstract: Systems, devices, methods, media, and instructions for automated image processing and content curation are described. In one embodiment, a server computer system receives a content message from a first content source and analyzes the content message to determine one or more quality scores and one or more content values associated with the content message. The server computer system compares the content message against a plurality of content collections in the database to identify a match between at least one of the one or more content values and a topic associated with at least a first content collection of the plurality of content collections, and automatically adds the content message to the first content collection based at least in part on the match. In various embodiments, different content values, image processing operations, and content selection operations are used to curate content collections.
    Type: Grant
    Filed: August 31, 2016
    Date of Patent: August 13, 2019
    Assignee: Snap Inc.
    Inventors: Jianchao Yang, Yuke Zhu, Ning Xu, Kevin Dechau Tang, Jia Li
  • Patent number: 10346723
    Abstract: Systems, devices, media, and methods are presented for identifying and categorically labeling objects within a set of images. The systems and methods receive an image depicting an object of interest, detect at least a portion of the object of interest within the image using a multilayer object model, determine context information, and identify the object of interest included in two or more bounding boxes.
    Type: Grant
    Filed: November 1, 2016
    Date of Patent: July 9, 2019
    Assignee: Snap Inc.
    Inventors: Wei Han, Jianchao Yang, Ning Zhang, Jia Li
  • Patent number: 10285001
    Abstract: Systems and methods are provided for receiving, at a first computing device, a request from a user to activate a new media collection, sending the request to a server computer for activation of the new media collection, receiving confirmation that the new media collection was activated, receiving a plurality of content messages associated with the new media collection, receiving from the user, a selection of the plurality of content messages to be included in the new media collection, sending, to the server computer, an indication of the selection of the content messages to be included in the new media collection, wherein the server computer causes the selection of content messages to be included in the new media collection and displayed in response to a request from at least a second computing device to view the new media collection.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: May 7, 2019
    Assignee: Snap Inc.
    Inventors: Nicholas Richard Allen, Sheldon Chang, Maria Pavlovskaia, Amer Shahnawaz, Jianchao Yang
  • Publication number: 20190087660
    Abstract: A mobile device can generate real-time complex visual image effects using an asynchronous processing pipeline. A first pipeline applies a complex image process, such as a neural network, to keyframes of a live image sequence. A second pipeline generates flow maps that describe feature transformations in the image sequence. The flow maps can be used to process non-keyframes on the fly. The processed keyframes and non-keyframes can be used to display a complex visual effect on the mobile device in real time or near real time.
    Type: Application
    Filed: September 15, 2017
    Publication date: March 21, 2019
    Inventors: Samuel Edward Hare, Fedir Poliakov, Guohui Wang, Xuehan Xiong, Jianchao Yang, Linjie Yang, Shah Tanmay Anilkumar
  • Patent number: 10198626
    Abstract: Systems, devices, media, and methods are presented for modeling facial representations using image segmentation with a client device. The systems and methods receive an image depicting a face, detect at least a portion of the face within the image, and identify a set of facial features within the portion of the face. The systems and methods generate a descriptor function representing the set of facial features, fit object functions of the descriptor function, identify an identification probability for each facial feature, and assign an identification to each facial feature.
    Type: Grant
    Filed: October 19, 2016
    Date of Patent: February 5, 2019
    Assignee: Snap Inc.
    Inventors: Jia Li, Xutao Lv, Xiaoyu Wang, Xuehan Xiong, Jianchao Yang
  • Patent number: 10198671
    Abstract: A dense captioning system and method are provided for processing an image to produce a feature map of the image, analyzing the feature map to generate proposed bounding boxes for a plurality of visual concepts within the image, analyzing the feature map to determine a plurality of region features of the image, and analyzing the feature map to determine a context feature for the image. For each region feature of the plurality of region features, the dense captioning system further provides for analyzing the region feature to determine a detection score, calculating a caption for a bounding box for a visual concept in the image using the region feature and the context feature, and localizing the visual concept by adjusting the bounding box around the visual concept based on the caption to generate an adjusted bounding box.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: February 5, 2019
    Assignee: Snap Inc.
    Inventors: Linjie Yang, Kevin Dechau Tang, Jianchao Yang, Jia Li
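The data flow in the dense-captioning abstract (feature map → region features + one global context feature → per-region score and caption) can be shown as a skeleton. Every function here is a trivial hypothetical stand-in for the corresponding learned component; only the plumbing matches the description.

```python
# Hypothetical skeleton of the dense-captioning flow: a feature map
# yields region features plus one global context feature; each region
# then gets a detection score and a caption computed from the region
# feature together with the context feature.
def feature_map(image):
    return [[sum(row)] for row in image]          # stand-in feature extractor

def region_features(fmap):
    return fmap                                   # one stand-in feature per row

def context_feature(fmap):
    return sum(f[0] for f in fmap) / len(fmap)    # global average as context

def detection_score(region):
    return region[0]

def caption(region, context):
    # Stand-in captioner: compares the region to the global context.
    return "salient region" if region[0] > context else "background region"

image = [[1, 2], [5, 6], [0, 1]]                  # made-up 3-row "image"
fmap = feature_map(image)
ctx = context_feature(fmap)
captions = [(detection_score(r), caption(r, ctx)) for r in region_features(fmap)]
print(captions)
```

In the patented system the caption also feeds back into localization (the bounding box is adjusted based on the caption), a step omitted from this sketch.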
  • Patent number: 10198801
    Abstract: Systems and methods are provided for image enhancement using self-examples in combination with external examples. In one embodiment, an image manipulation application receives an input image patch of an input image. The image manipulation application determines a first weight for an enhancement operation using self-examples and a second weight for an enhancement operation using external examples. The image manipulation application generates a first interim output image patch by applying the enhancement operation using self-examples to the input image patch and a second interim output image patch by applying the enhancement operation using external examples to the input image patch. The image manipulation application generates an output image patch by combining the first and second interim output image patches as modified using the first and second weights.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: February 5, 2019
    Assignee: Adobe Systems Incorporated
    Inventors: Jianchao Yang, Zhe Lin
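The final combination step in this abstract is a weighted blend of two interim patches. A minimal sketch, assuming normalized per-patch weights (the patch values and weights are made up; the two interim patches stand in for the self-example and external-example enhancement results):

```python
# Hypothetical sketch of the weighted combination step: two interim
# enhanced patches (one from self-examples, one from external examples)
# are blended using per-patch weights.
def combine_patches(self_patch, external_patch, w_self, w_external):
    total = w_self + w_external
    return [(w_self * s + w_external * e) / total
            for s, e in zip(self_patch, external_patch)]

self_patch = [10.0, 20.0]      # interim result using self-examples
external_patch = [30.0, 40.0]  # interim result using external examples
print(combine_patches(self_patch, external_patch, 0.25, 0.75))
# → [25.0, 35.0]
```

The interesting part of the patent is how the two weights are determined per patch; the blend itself is this simple convex combination.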
  • Patent number: 10140261
    Abstract: Font graphs are defined as having a finite set of nodes representing fonts and a finite set of undirected edges denoting similarities between fonts. The font graphs enable users to browse and identify similar fonts. Indications corresponding to the degree of similarity between connected nodes may be provided. A selection of a desired font, or of characteristics associated with one or more attributes of the desired font, is received from a user interacting with the font graph. The font graph is dynamically redefined based on the selection.
    Type: Grant
    Filed: May 23, 2014
    Date of Patent: November 27, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Jianchao Yang, Hailin Jin, Jonathan Brandt
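The font-graph structure is a weighted undirected graph supporting similarity queries. A minimal sketch (class name, API, and the similarity scores are all hypothetical illustrations, not the patented representation):

```python
from collections import defaultdict

# Hypothetical sketch: a font graph with fonts as nodes and undirected
# weighted edges as similarity scores, supporting a "find similar" query.
class FontGraph:
    def __init__(self):
        self.edges = defaultdict(dict)

    def add_similarity(self, font_a, font_b, score):
        # Undirected edge: store it in both directions.
        self.edges[font_a][font_b] = score
        self.edges[font_b][font_a] = score

    def similar_fonts(self, font, min_score=0.0):
        # Neighbors above the threshold, most similar first.
        return sorted((f for f, s in self.edges[font].items() if s >= min_score),
                      key=lambda f: -self.edges[font][f])

g = FontGraph()
g.add_similarity("Helvetica", "Arial", 0.95)
g.add_similarity("Helvetica", "Futura", 0.40)
g.add_similarity("Garamond", "Georgia", 0.70)
print(g.similar_fonts("Helvetica", min_score=0.5))  # → ['Arial']
```

"Dynamically redefining" the graph on selection would amount to re-filtering or re-weighting edges around the chosen node.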
  • Publication number: 20180260655
    Abstract: Techniques are disclosed for image feature representation. The techniques exhibit discriminative power that can be used in any number of classification tasks, and are particularly effective with respect to fine-grained image classification tasks. In an embodiment, a given image to be classified is divided into image patches. A vector is generated for each image patch. Each image patch vector is compared to the Gaussian mixture components (each mixture component is also a vector) of a Gaussian Mixture Model (GMM). Each such comparison generates a similarity score for each image patch vector. For each Gaussian mixture component, the image patch vectors associated with a similarity score that is too low are eliminated. The selectively pooled vectors from all the Gaussian mixture components are then concatenated to form the final image feature vector, which can be provided to a classifier so the given input image can be properly categorized.
    Type: Application
    Filed: May 15, 2018
    Publication date: September 13, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Jianchao Yang, Jonathan Brandt
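The selective-pooling procedure in this abstract is concrete enough to sketch end to end: score each patch vector against each mixture component, drop low-similarity patches per component, pool the survivors, and concatenate. The sketch below is a simplification under stated assumptions: cosine similarity stands in for the GMM posterior, and max-pooling stands in for whatever pooling the patent uses.

```python
import math

# Hypothetical sketch of selective pooling against Gaussian mixture
# components: patches below a similarity threshold are dropped per
# component, survivors are max-pooled per dimension, and the pooled
# vectors are concatenated into the final image feature vector.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def selective_pool(patch_vectors, components, threshold=0.5):
    feature = []
    for comp in components:
        kept = [p for p in patch_vectors if cosine(p, comp) >= threshold]
        if kept:
            pooled = [max(vals) for vals in zip(*kept)]  # max-pool per dim
        else:
            pooled = [0.0] * len(comp)
        feature.extend(pooled)  # concatenate across components
    return feature

patches = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]  # made-up patch vectors
components = [[1.0, 0.0], [0.0, 1.0]]           # made-up mixture means
print(selective_pool(patches, components))      # → [1.0, 0.1, 0.0, 1.0]
```

The resulting concatenated vector is what would be handed to a classifier for the fine-grained categorization task.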
  • Patent number: 10055895
    Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment, a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking is used based on a determination that the target is outside a boundary area. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed, along with presentation of the AR sticker object on an output display of the device.
    Type: Grant
    Filed: January 29, 2016
    Date of Patent: August 21, 2018
    Assignee: Snap Inc.
    Inventors: Jia Li, Linjie Luo, Rahul Sheth, Ning Xu, Jianchao Yang
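The local/global switch above is essentially a two-state machine driven by whether the target is inside a boundary area. A toy sketch (1-D positions and the boundary values are hypothetical; real tracking would match templates against image patches):

```python
# Hypothetical sketch of the local/global tracking switch: local
# tracking (with the AR sticker shown) while the target is inside the
# boundary area, global tracking once it leaves, local again on return.
def track(positions, boundary=(0, 10)):
    lo, hi = boundary
    states = []
    for x in positions:
        if lo <= x <= hi:
            states.append("local")   # in bounds: local template + sticker
        else:
            states.append("global")  # out of bounds: global template
    return states

print(track([2, 5, 12, 15, 8]))
# → ['local', 'local', 'global', 'global', 'local']
```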
  • Patent number: 10042866
    Abstract: In various implementations, a personal asset management application is configured to perform operations that facilitate the ability to search multiple images, irrespective of whether the images have characterizing tags associated with them, based on a simple text-based query. A first search is conducted by processing the text-based query to produce a first set of result images, which is used to generate a visually-based query. A second search is conducted employing the visually-based query derived from the first set of result images returned by the text-based first search. The second search can generate a second set of result images, each having visual similarity to at least one of the images in the first set of result images.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: August 7, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Zhe Lin, Jonathan Brandt, Xiaohui Shen, Jae-Pil Heo, Jianchao Yang
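The two-stage search can be sketched directly: a text query matches tagged images, and those hits seed a visual-similarity search that also reaches untagged images. The tag/feature fields and the distance threshold are hypothetical illustrations:

```python
# Hypothetical sketch of the two-stage search: a text query first
# matches images by tags, then the tag hits seed a visual-similarity
# search over all images, including untagged ones.
def text_search(images, query):
    return [img for img in images if query in img.get("tags", ())]

def visual_search(images, seeds, max_dist=1.0):
    def dist(a, b):  # L1 distance between feature vectors
        return sum(abs(x - y) for x, y in zip(a, b))
    return [img for img in images
            if any(dist(img["feature"], s["feature"]) <= max_dist
                   for s in seeds)]

images = [  # made-up indexed images
    {"id": 1, "tags": ("dog",), "feature": [1.0, 1.0]},
    {"id": 2, "tags": (), "feature": [1.2, 0.9]},   # untagged but similar
    {"id": 3, "tags": (), "feature": [5.0, 5.0]},
]
seeds = text_search(images, "dog")       # stage 1: text-based query
results = visual_search(images, seeds)   # stage 2: visually-based query
print([img["id"] for img in results])    # → [1, 2]
```

Note how image 2, which has no tags, is still retrieved because it is visually close to a stage-1 result; that is the point of the two-stage design.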
  • Patent number: 10043101
    Abstract: Techniques are disclosed for image feature representation. The techniques exhibit discriminative power that can be used in any number of classification tasks, and are particularly effective with respect to fine-grained image classification tasks. In an embodiment, a given image to be classified is divided into image patches. A vector is generated for each image patch. Each image patch vector is compared to the Gaussian mixture components (each mixture component is also a vector) of a Gaussian Mixture Model (GMM). Each such comparison generates a similarity score for each image patch vector. For each Gaussian mixture component, the image patch vectors associated with a similarity score that is too low are eliminated. The selectively pooled vectors from all the Gaussian mixture components are then concatenated to form the final image feature vector, which can be provided to a classifier so the given input image can be properly categorized.
    Type: Grant
    Filed: November 7, 2014
    Date of Patent: August 7, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Jianchao Yang, Jonathan Brandt