Patents by Inventor Jianchao Yang

Jianchao Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11023514
    Abstract: Systems and methods are provided for receiving, at a server computer, a plurality of content messages from a plurality of content sources, each content message comprising media content and associated with a predetermined media collection, for each of the plurality of content messages received, analyzing each of the plurality of content messages to determine a quality score for each of the plurality of content messages, and storing each of the plurality of content messages in a database along with the quality score for each of the plurality of content messages.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: June 1, 2021
    Assignee: Snap Inc.
    Inventors: Nicholas Richard Allen, Sheldon Chang, Maria Pavlovskaia, Amer Shahnawaz, Jianchao Yang
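The abstract above describes scoring incoming content messages and storing each message with its quality score. A minimal Python sketch of that flow, assuming a toy variance-based quality heuristic and an in-memory list standing in for the database (all names here are hypothetical, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class ContentMessage:
    source: str
    collection_id: str           # the predetermined media collection
    media_content: list[float]   # stand-in for decoded media samples

# In-memory stand-in for the database mentioned in the abstract.
database: list[dict] = []

def quality_score(message: ContentMessage) -> float:
    """Toy quality heuristic: variance of the media samples.

    The patent does not specify the scoring model; this is only a placeholder.
    """
    samples = message.media_content
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

def ingest(messages: list[ContentMessage]) -> None:
    # Analyze each content message and store it along with its quality score.
    for msg in messages:
        database.append({"message": msg, "score": quality_score(msg)})

ingest([ContentMessage("cam-1", "collection-a", [0.1, 0.9, 0.4, 0.8])])
print(database[0]["score"])
```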
  • Publication number: 20210073597
    Abstract: Systems, devices, media, and methods are presented for identifying and categorically labeling objects within a set of images. The systems and methods receive an image depicting an object of interest, detect at least a portion of the object of interest within the image using a multilayer object model, determine context information, and identify the object of interest included in two or more bounding boxes.
    Type: Application
    Filed: November 17, 2020
    Publication date: March 11, 2021
    Inventors: Wei Han, Jianchao Yang, Ning Zhang, Jia Li
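The abstract above describes identifying an object of interest that appears in two or more bounding boxes. The multilayer object model and the context step are not specified, so this sketch only illustrates the final part, assuming an IoU-based overlap test between labeled detections (all names and thresholds are hypothetical):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def objects_in_multiple_boxes(detections, iou_threshold=0.5):
    """Keep labels supported by two or more overlapping bounding boxes."""
    confirmed = set()
    for i, (box_a, label_a) in enumerate(detections):
        for box_b, label_b in detections[i + 1:]:
            if label_a == label_b and iou(box_a, box_b) >= iou_threshold:
                confirmed.add(label_a)
    return confirmed

print(objects_in_multiple_boxes([
    ((10, 10, 50, 50), "cat"),
    ((12, 11, 52, 49), "cat"),
    ((80, 80, 120, 120), "dog"),
]))   # {'cat'}
```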
  • Publication number: 20210073613
    Abstract: A compact neural network system can generate multiple individual filters from a compound filter. Each convolutional layer of a convolutional neural network can include a compound filter used to generate the individual filters for that layer. The individual filters overlap in the compound filter and can be extracted using a sampling operation. The extracted individual filters can share weights with nearby filters, thereby reducing the overall size of the convolutional neural network.
    Type: Application
    Filed: November 23, 2020
    Publication date: March 11, 2021
    Inventors: Yingzhen Yang, Jianchao Yang, Ning Xu
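To illustrate the compound-filter idea in the abstract above: a minimal sketch that samples overlapping individual filters out of one larger compound filter, so nearby filters share weights. The filter sizes, stride, and NumPy layout are assumptions for illustration, not details from the publication:

```python
import numpy as np

def extract_filters(compound, num_filters, size, stride):
    """Slice overlapping `size` x `size` filters out of one 2-D compound filter.

    Because stride < size, adjacent filters overlap and therefore share
    weights, which is the weight-sharing idea the abstract describes.
    """
    rows, cols = compound.shape
    positions = [(r, c)
                 for r in range(0, rows - size + 1, stride)
                 for c in range(0, cols - size + 1, stride)]
    return np.stack([compound[r:r + size, c:c + size]
                     for r, c in positions[:num_filters]])

compound = np.random.randn(8, 8)           # one compound filter for a layer
individual = extract_filters(compound, num_filters=9, size=3, stride=2)
print(individual.shape)                     # (9, 3, 3)
```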
  • Patent number: 10929673
    Abstract: A mobile device can generate real-time complex visual image effects using an asynchronous processing pipeline. A first pipeline applies a complex image process, such as a neural network, to keyframes of a live image sequence. A second pipeline generates flow maps that describe feature transformations in the image sequence. The flow maps can be used to process non-keyframes on the fly. The processed keyframes and non-keyframes can be used to display a complex visual effect on the mobile device in real time or near real time.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: February 23, 2021
    Assignee: Snap Inc.
    Inventors: Samuel Edward Hare, Fedir Poliakov, Guohui Wang, Xuehan Xiong, Jianchao Yang, Linjie Yang, Shah Tanmay Anilkumar
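The abstract above describes two pipelines: an expensive process applied only to keyframes, and flow maps used to propagate the keyframe results to the remaining frames. A minimal sketch of that split, with toy stand-ins for the neural network and the flow computation (every function here is a hypothetical placeholder):

```python
def expensive_effect(frame):
    """Stand-in for the heavy per-keyframe process (e.g. a neural network)."""
    return [p * 0.5 for p in frame]

def flow_map(keyframe, frame):
    """Stand-in for a flow map describing feature motion since the keyframe."""
    return [b - a for a, b in zip(keyframe, frame)]

def apply_flow(processed_keyframe, flow):
    """Cheaply carry the processed keyframe forward to a non-keyframe."""
    return [p + f for p, f in zip(processed_keyframe, flow)]

def render(frames, keyframe_interval=3):
    output, keyframe, processed_keyframe = [], None, None
    for i, frame in enumerate(frames):
        if i % keyframe_interval == 0:
            keyframe = frame
            processed_keyframe = expensive_effect(frame)   # slow pipeline
            output.append(processed_keyframe)
        else:
            flow = flow_map(keyframe, frame)               # fast pipeline
            output.append(apply_flow(processed_keyframe, flow))
    return output

print(render([[1.0, 2.0], [1.5, 2.5], [2.0, 3.0], [2.5, 3.5]]))
```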
  • Publication number: 20210027100
    Abstract: A machine learning system can generate an image mask (e.g., a pixel mask) comprising pixel assignments for pixels. The pixels can be assigned to classes, including, for example, face, clothes, body skin, or hair. The machine learning system can be implemented using a convolutional neural network that is configured to execute efficiently on computing devices having limited resources, such as mobile phones. The pixel mask can be used to more accurately display video effects interacting with a user or subject depicted in the image.
    Type: Application
    Filed: August 13, 2020
    Publication date: January 28, 2021
    Inventors: Lidiia Bogdanovych, William Brendel, Samuel Edward Hare, Fedir Poliakov, Guohui Wang, Xuehan Xiong, Jianchao Yang, Linjie Yang
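The abstract above describes a pixel mask that assigns each pixel to a class such as face, clothes, body skin, or hair. A minimal sketch, assuming the network outputs a per-class score map and the mask is the per-pixel argmax; the class list and array shapes are illustrative only:

```python
import numpy as np

CLASSES = ["background", "face", "clothes", "body_skin", "hair"]

def pixel_mask(class_scores: np.ndarray) -> np.ndarray:
    """Turn per-class score maps of shape (C, H, W) into a pixel mask (H, W).

    Each pixel gets the index of its highest-scoring class, the usual way a
    segmentation network's output is converted into a mask.
    """
    return class_scores.argmax(axis=0)

# Hypothetical 4x4 output for the 5 classes above (random stand-in for a CNN).
scores = np.random.rand(len(CLASSES), 4, 4)
mask = pixel_mask(scores)
print(mask)                      # class index per pixel
print(CLASSES[mask[0, 0]])       # label of the top-left pixel
```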
  • Publication number: 20210021551
    Abstract: Systems, devices, methods, media, and instructions for automated image processing and content curation are described. In one embodiment a server computer system communicates at least a portion of a first content collection to a first client device, and receives a first selection communication in response, the first selection communication identifying a first piece of content of the first plurality of pieces of content. The server analyzes the first piece of content to identify a set of context values for the first piece of content, and accesses a second content collection comprising pieces of content sharing at least a portion of the set of context values of the first piece of content. In various embodiments, different content values, image processing operations, and content selection operations are used to curate the content collections.
    Type: Application
    Filed: July 1, 2020
    Publication date: January 21, 2021
    Inventors: Jianchao Yang, Yuke Zhu, Ning Xu, Kevin Dechau Tang, Jia Li
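The abstract above describes deriving context values from a selected piece of content and finding other collections whose content shares those values. A minimal sketch, assuming context values are simple tags and a collection qualifies when it shares at least one (the data layout and names are hypothetical):

```python
def context_values(piece: dict) -> set[str]:
    """Stand-in for image analysis that derives context values for a piece."""
    return set(piece.get("tags", []))

def related_collections(selected_piece, collections, min_shared=1):
    """Return collections whose pieces share context values with the selection."""
    target = context_values(selected_piece)
    related = []
    for name, pieces in collections.items():
        shared = target & set().union(*(context_values(p) for p in pieces))
        if len(shared) >= min_shared:
            related.append(name)
    return related

selected = {"tags": ["beach", "sunset"]}
collections = {
    "coastal": [{"tags": ["beach", "surf"]}],
    "city": [{"tags": ["skyline"]}],
}
print(related_collections(selected, collections))   # ['coastal']
```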
  • Publication number: 20210014636
    Abstract: Systems and methods are provided for receiving, at a first computing device, a request from a user to activate a new media collection, sending, by the first computing device, the request to a server computer for activation of the new media collection, receiving, by the first computing device, confirmation that the new media collection was activated, receiving, at the first computing device, a plurality of content messages associated with the new media collection, receiving, at the first computing device, from the user, a selection of the plurality of content messages to be included in the new media collection, sending, to the server computer, an indication of the selection of the content messages to be included in the new media collection, wherein the server computer causes the selection of content messages to be included in the new media collection and displayed in response to a request from at least a second computing device to view the new media collection.
    Type: Application
    Filed: September 28, 2020
    Publication date: January 14, 2021
    Inventors: Nicholas Richard Allen, Sheldon Chang, Maria Pavlovskaia, Amer Shahnawaz, Jianchao Yang
  • Publication number: 20200410773
    Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking based on a determination that the target is outside a boundary area is used. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
    Type: Application
    Filed: July 13, 2020
    Publication date: December 31, 2020
    Inventors: Jia Li, Linjie Luo, Rahul Bhupendra Sheth, Ning Xu, Jianchao Yang
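The abstract above describes switching from local template tracking to global tracking when the target leaves a boundary area, then resuming local tracking (and presentation of the AR sticker) when it returns. A minimal sketch of that mode switch, with tracking itself reduced to precomputed target positions, which is an assumption for brevity:

```python
def inside_boundary(position, boundary):
    """True if the tracked target is inside the (x1, y1, x2, y2) boundary area."""
    x, y = position
    return boundary[0] <= x <= boundary[2] and boundary[1] <= y <= boundary[3]

def track(target_positions, boundary):
    """Switch between local and global tracking as the target leaves and
    re-enters the boundary area, mirroring the behaviour in the abstract."""
    mode = "local"
    for position in target_positions:
        if mode == "local" and not inside_boundary(position, boundary):
            mode = "global"     # target left the boundary: track globally
        elif mode == "global" and inside_boundary(position, boundary):
            mode = "local"      # target back inside: resume local tracking
        yield position, mode    # "local" means the AR sticker is rendered

boundary = (0, 0, 100, 100)
for pos, mode in track([(50, 50), (150, 40), (60, 70)], boundary):
    print(pos, mode)
```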
  • Patent number: 10872276
    Abstract: Systems, devices, media, and methods are presented for identifying and categorically labeling objects within a set of images. The systems and methods receive an image depicting an object of interest, detect at least a portion of the object of interest within the image using a multilayer object model, determine context information, and identify the object of interest included in two or more bounding boxes.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: December 22, 2020
    Assignee: Snap Inc.
    Inventors: Wei Han, Jianchao Yang, Ning Zhang, Jia Li
  • Patent number: 10872292
    Abstract: A compact neural network system can generate multiple individual filters from a compound filter. Each convolutional layer of a convolutional neural network can include a compound filter used to generate the individual filters for that layer. The individual filters overlap in the compound filter and can be extracted using a sampling operation. The extracted individual filters can share weights with nearby filters, thereby reducing the overall size of the convolutional neural network.
    Type: Grant
    Filed: October 9, 2018
    Date of Patent: December 22, 2020
    Assignee: Snap Inc.
    Inventors: Yingzhen Yang, Jianchao Yang, Ning Xu
  • Patent number: 10834525
    Abstract: Systems and methods are provided for receiving a request to activate a new media collection, input increasing the default predetermined window of time that the new media collection is accessible, and input decreasing or increasing the default geographic boundary size for where media content for the media collection originates. The systems and methods further provide for sending the request to a server computer to activate the new media collection, receiving confirmation that the new media collection was activated and causing a plurality of content messages to be included in the new media collection and displayed in response to a request from at least one computing device to view the new media collection based on determining the request occurs within the increased predetermined window of time that the new media collection is accessible.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: November 10, 2020
    Assignee: Snap Inc.
    Inventors: Nicholas Richard Allen, Sheldon Chang, Maria Pavlovskaia, Amer Shahnawaz, Jianchao Yang
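The abstract above describes gating access to a media collection by an adjustable time window and geographic boundary. A minimal sketch of those two checks, assuming a simple radius-based boundary and datetime arithmetic; the defaults, units, and distance approximation are illustrative, not taken from the patent:

```python
from datetime import datetime, timedelta

def within_window(activated_at, window, request_time):
    """True if the view request arrives inside the collection's time window."""
    return activated_at <= request_time <= activated_at + window

def within_boundary(center, radius_km, point):
    """Crude flat-earth distance check standing in for a real geo boundary test."""
    dx = (center[0] - point[0]) * 111.0      # ~km per degree, rough approximation
    dy = (center[1] - point[1]) * 111.0
    return (dx * dx + dy * dy) ** 0.5 <= radius_km

activated = datetime(2020, 1, 1, 12, 0)
window = timedelta(hours=48)                 # e.g. increased from a 24h default
print(within_window(activated, window, datetime(2020, 1, 2, 10, 0)))   # True
print(within_boundary((40.0, -74.0), 5.0, (40.01, -74.02)))            # True
```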
  • Publication number: 20200320353
    Abstract: A dense captioning system and method is provided for analyzing an image to generate proposed bounding regions for a plurality of visual concepts within the image, generating a region feature for each proposed bounding region to generate a plurality of region features of the image, and determining a context feature for the image using a proposed bounding region that is a largest in size of the proposed bounding regions. For each region feature of the plurality of region features of the image, the dense captioning system and method further provides for analyzing the region feature to determine for the region feature a detection score that indicates a likelihood that the region feature comprises an actual object, and generating a caption for a visual concept in the image using the region feature and the context feature when a detection score is above a specified threshold value.
    Type: Application
    Filed: June 17, 2020
    Publication date: October 8, 2020
    Inventors: Linjie Yang, Kevin Dechau Tang, Jianchao Yang, Jia Li
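The abstract above describes taking a context feature from the largest proposed bounding region and captioning only regions whose detection score clears a threshold. A minimal sketch of that control flow, with placeholder functions standing in for the real feature extractor, detector, and caption generator:

```python
import numpy as np

def region_feature(image, box):
    """Stand-in for a pooled feature over one proposed bounding region."""
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2].mean(axis=(0, 1))

def detection_score(feature):
    """Stand-in for the likelihood that the region contains an actual object."""
    return float(feature.mean())

def caption(feature, context):
    """Stand-in caption generator combining region and context features."""
    return f"object with mean feature {float((feature + context).mean()):.2f}"

def dense_captions(image, boxes, threshold=0.3):
    # The context feature comes from the largest proposed bounding region.
    largest = max(boxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))
    context = region_feature(image, largest)
    results = []
    for box in boxes:
        feature = region_feature(image, box)
        if detection_score(feature) > threshold:   # only likely objects get captions
            results.append((box, caption(feature, context)))
    return results

image = np.random.rand(64, 64, 3)
print(dense_captions(image, [(0, 0, 64, 64), (10, 10, 30, 30)]))
```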
  • Patent number: 10776663
    Abstract: A machine learning system can generate an image mask (e.g., a pixel mask) comprising pixel assignments for pixels. The pixels can be assigned to classes, including, for example, face, clothes, body skin, or hair. The machine learning system can be implemented using a convolutional neural network that is configured to execute efficiently on computing devices having limited resources, such as mobile phones. The pixel mask can be used to more accurately display video effects interacting with a user or subject depicted in the image.
    Type: Grant
    Filed: July 25, 2019
    Date of Patent: September 15, 2020
    Assignee: Snap Inc.
    Inventors: Lidiia Bogdanovych, William Brendel, Samuel Edward Hare, Fedir Poliakov, Guohui Wang, Xuehan Xiong, Jianchao Yang, Linjie Yang
  • Patent number: 10748347
    Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking based on a determination that the target is outside a boundary area is used. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: August 18, 2020
    Assignee: Snap Inc.
    Inventors: Jia Li, Linjie Luo, Rahul Bhupendra Sheth, Ning Xu, Jianchao Yang
  • Publication number: 20200258313
    Abstract: Systems, devices, media, and methods are presented for object detection and inserting graphical elements into an image stream in response to detecting the object. The systems and methods detect an object of interest in received frames of a video stream. The systems and methods identify a bounding box for the object of interest and estimate a three-dimensional position of the object of interest based on a scale of the object of interest. The systems and methods generate one or more graphical elements having a size based on the scale of the object of interest and a position based on the three-dimensional position estimated for the object of interest. The one or more graphical elements are generated within the video stream to form a modified video stream. The systems and methods cause presentation of the modified video stream including the object of interest and the one or more graphical elements.
    Type: Application
    Filed: April 29, 2020
    Publication date: August 13, 2020
    Inventors: Travis Chen, Samuel Edward Hare, Yuncheng Li, Tony Mathew, Jonathan Solichin, Jianchao Yang, Ning Zhang
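The abstract above describes estimating a three-dimensional position from the object's on-screen scale and sizing a graphical element accordingly. A minimal sketch, assuming a pinhole-style depth estimate from the bounding-box height and a known real-world height; both are assumptions, not details from the publication:

```python
def estimate_depth(box_height_px, real_height_m, focal_px):
    """Pinhole-camera style depth estimate from the object's on-screen scale."""
    return focal_px * real_height_m / box_height_px

def place_graphic(box, real_height_m=1.7, focal_px=1000.0):
    """Size and position a graphical element relative to the detected object."""
    x1, y1, x2, y2 = box
    depth = estimate_depth(y2 - y1, real_height_m, focal_px)
    position = ((x1 + x2) / 2, (y1 + y2) / 2, depth)   # estimated 3-D position
    size_px = (y2 - y1) * 0.5                          # scaled to the object
    return {"position": position, "size_px": size_px}

print(place_graphic((400, 200, 600, 900)))   # e.g. a person about 700 px tall
```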
  • Publication number: 20200250870
    Abstract: Systems and methods are provided for receiving, at a server computer, a plurality of content messages from a plurality of content sources, each content message comprising media content, for each of the plurality of content messages received, associating the media content with a predetermined media collection, and storing the content message in a database. The systems and methods further provide for causing the plurality of content messages to be displayed on an operator device with other content messages associated with the media collection, determining that a predetermined trigger related to the media collection has been activated, updating an identifier of the media collection from a first indicator to a second indicator indicating an action needs to be taken on the media collection, and causing the updated identifier with the second indicator to be displayed on an operator device.
    Type: Application
    Filed: April 23, 2020
    Publication date: August 6, 2020
    Inventors: Nicholas Richard Allen, Sheldon Chang, Maria Pavlovskaia, Amer Shahnawaz, Jianchao Yang
  • Patent number: 10733255
    Abstract: Systems, devices, methods, media, and instructions for automated image processing and content curation are described. In one embodiment a server computer system communicates at least a portion of a first content collection to a first client device, and receives a first selection communication in response, the first selection communication identifying a first piece of content of the first plurality of pieces of content. The server analyzes the first piece of content to identify a set of context values for the first piece of content, and accesses a second content collection comprising pieces of content sharing at least a portion of the set of context values of the first piece of content. In various embodiments, different content values, image processing operations, and content selection operations are used to curate the content collections.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: August 4, 2020
    Assignee: Snap Inc.
    Inventors: Jianchao Yang, Yuke Zhu, Ning Xu, Kevin Dechau Tang, Jia Li
  • Patent number: 10726306
    Abstract: A dense captioning system and method is provided for analyzing an image to generate proposed bounding regions for a plurality of visual concepts within the image, generating a region feature for each proposed bounding region to generate a plurality of region features of the image, and determining a context feature for the image using a proposed bounding region that is a largest in size of the proposed bounding regions. For each region feature of the plurality of region features of the image, the dense captioning system and method further provides for analyzing the region feature to determine for the region feature a detection score that indicates a likelihood that the region feature comprises an actual object, and generating a caption for a visual concept in the image using the region feature and the context feature when a detection score is above a specified threshold value.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: July 28, 2020
    Assignee: Snap Inc.
    Inventors: Linjie Yang, Kevin Dechau Tang, Jianchao Yang, Jia Li
  • Patent number: 10679389
    Abstract: Systems and methods are provided for receiving, at a server computer, a plurality of content messages from a plurality of content sources, each content message comprising media content, for each of the plurality of content messages received, associating the media content with a predetermined media collection, and storing the content message in a database. The systems and methods further provide for causing the plurality of content messages to be displayed on an operator device with other content messages associated with the media collection, determining that a predetermined trigger related to the media collection has been activated, updating an identifier of the media collection from a first indicator to a second indicator indicating an action needs to be taken on the media collection, and causing the updated identifier with the second indicator to be displayed on an operator device.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: June 9, 2020
    Assignee: Snap Inc.
    Inventors: Nicholas Richard Allen, Sheldon Chang, Maria Pavlovskaia, Amer Shahnawaz, Jianchao Yang
  • Patent number: 10679428
    Abstract: Systems, devices, media, and methods are presented for object detection and inserting graphical elements into an image stream in response to detecting the object. The systems and methods detect an object of interest in received frames of a video stream. The systems and methods identify a bounding box for the object of interest and estimate a three-dimensional position of the object of interest based on a scale of the object of interest. The systems and methods generate one or more graphical elements having a size based on the scale of the object of interest and a position based on the three-dimensional position estimated for the object of interest. The one or more graphical elements are generated within the video stream to form a modified video stream. The systems and methods cause presentation of the modified video stream including the object of interest and the one or more graphical elements.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: June 9, 2020
    Assignee: Snap Inc.
    Inventors: Travis Chen, Samuel Edward Hare, Yuncheng Li, Tony Mathew, Jonathan Solichin, Jianchao Yang, Ning Zhang