Patents by Inventor Jianchao Yang

Jianchao Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210021551
    Abstract: Systems, devices, methods, media, and instructions for automated image processing and content curation are described. In one embodiment, a server computer system communicates at least a portion of a first content collection to a first client device, and receives a first selection communication in response, the first selection communication identifying a first piece of content of the first plurality of pieces of content. The server analyzes the first piece of content to identify a set of context values for the first piece of content, and accesses a second content collection comprising pieces of content sharing at least a portion of the set of context values of the first piece of content. In various embodiments, different content values, image processing operations, and content selection operations are used to curate the content collections.
    Type: Application
    Filed: July 1, 2020
    Publication date: January 21, 2021
    Inventors: Jianchao Yang, Yuke Zhu, Ning Xu, Kevin Dechau Tang, Jia Li
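
A minimal Python sketch of the matching step described in the abstract above: given the context values identified for a selected piece of content, find a second collection that shares enough of them. The function name, data layout, and overlap threshold are illustrative assumptions rather than details taken from the patent.

```python
def find_related_collection(selected_context, collections, min_overlap=2):
    """Return the id of the collection sharing the most context values with
    the selected piece of content, provided the overlap meets a threshold.

    selected_context: set of context values identified for the selected content
    collections: mapping of collection id -> set of context values
    min_overlap: minimum number of shared context values required
    """
    best_id, best_overlap = None, 0
    for collection_id, context_values in collections.items():
        overlap = len(selected_context & context_values)
        if overlap >= min_overlap and overlap > best_overlap:
            best_id, best_overlap = collection_id, overlap
    return best_id

# Hypothetical context values produced by analyzing the selected content.
selected = {"beach", "sunset", "outdoor"}
candidates = {
    "collection_a": {"beach", "surfing", "outdoor"},
    "collection_b": {"city", "night", "indoor"},
}
print(find_related_collection(selected, candidates))  # collection_a
```
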
  • Publication number: 20210014636
    Abstract: Systems and methods are provided for receiving, at a first computing device, a request from a user to activate a new media collection, sending, by the first computing device, the request to a server computer for activation of the new media collection, receiving, by the first computing device, confirmation that the new media collection was activated, receiving, at the first computing device, a plurality of content messages associated with the new media collection, receiving, at the first computing device, from the user, a selection of the plurality of content messages to be included in the new media collection, sending, to the server computer, an indication of the selection of the content messages to be included in the new media collection, wherein the server computer causes the selection of content messages to be included in the new media collection and displayed in response to a request from at least a second computing device to view the new media collection.
    Type: Application
    Filed: September 28, 2020
    Publication date: January 14, 2021
    Inventors: Nicholas Richard Allen, Sheldon Chang, Maria Pavlovskaia, Amer Shahnawaz, Jianchao Yang
  • Publication number: 20200410773
    Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment, a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking is used based on a determination that the target is outside a boundary area. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
    Type: Application
    Filed: July 13, 2020
    Publication date: December 31, 2020
    Inventors: Jia Li, Linjie Luo, Rahul Bhupendra Sheth, Ning Xu, Jianchao Yang
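
A minimal Python sketch of the control flow described in the abstract above: track the target with a local template while it stays inside the boundary area, and fall back to global motion tracking once it leaves. The `track_template` and `track_global_motion` callables are hypothetical placeholders for the template-matching and motion-estimation steps, which the abstract does not specify.

```python
def track_ar_target(frames, target_template, boundary, track_template, track_global_motion):
    """Track an AR target frame by frame, switching between local and global tracking.

    boundary: (x_min, y_min, x_max, y_max) area in which local tracking runs
    track_template(frame, template) -> (x, y) estimated target position
    track_global_motion(frame, prev_frame) -> (dx, dy) estimated frame motion
    """
    positions, prev_frame, last_position = [], None, None
    for frame in frames:
        if last_position is None or inside(last_position, boundary):
            # Local tracking: match the target template in the current frame.
            last_position = track_template(frame, target_template)
        else:
            # Global tracking: follow overall frame movement until the
            # target comes back inside the boundary area.
            dx, dy = track_global_motion(frame, prev_frame)
            last_position = (last_position[0] + dx, last_position[1] + dy)
        positions.append(last_position)
        prev_frame = frame
    return positions

def inside(position, boundary):
    x, y = position
    x_min, y_min, x_max, y_max = boundary
    return x_min <= x <= x_max and y_min <= y <= y_max
```
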
  • Patent number: 10872292
    Abstract: A compact neural network system can generate multiple individual filters from a compound filter. Each convolutional layer of a convolutional neural network can include a compound filter used to generate individual filters for that layer. The individual filters overlap in the compound filter and can be extracted using a sampling operation. The extracted individual filters can share weights with nearby filters, thereby reducing the overall size of the convolutional neural network.
    Type: Grant
    Filed: October 9, 2018
    Date of Patent: December 22, 2020
    Assignee: Snap Inc.
    Inventors: Yingzhen Yang, Jianchao Yang, Ning Xu
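
A minimal NumPy sketch of the idea in the abstract above: a single compound filter is sampled with a sliding window, so overlapping individual filters share weights and the layer stores far fewer parameters. The 1-D sliding-window sampling pattern is an illustrative assumption; the patent does not fix the exact sampling operation.

```python
import numpy as np

def extract_filters(compound_filter, filter_size, stride=1):
    """Slice overlapping individual filters out of a larger compound filter.

    compound_filter: array of shape (compound_len, k, k) holding the shared weights
    filter_size: number of channel slices each individual filter uses
    stride: offset between consecutive filters; a stride smaller than
            filter_size makes neighbouring filters overlap and share weights
    """
    compound_len = compound_filter.shape[0]
    filters = [compound_filter[start:start + filter_size]
               for start in range(0, compound_len - filter_size + 1, stride)]
    return np.stack(filters)

# A compound filter with 8 channel slices yields 5 overlapping 4-slice filters,
# even though only the 8 slices are actually stored.
compound = np.random.randn(8, 3, 3)
individual = extract_filters(compound, filter_size=4, stride=1)
print(individual.shape)  # (5, 4, 3, 3)
```
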
  • Patent number: 10872276
    Abstract: Systems, devices, media, and methods are presented for identifying and categorically labeling objects within a set of images. The systems and methods receive an image depicting an object of interest, detect at least a portion of the object of interest within the image using a multilayer object model, determine context information, and identify the object of interest included in two or more bounding boxes.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: December 22, 2020
    Assignee: Snap Inc.
    Inventors: Wei Han, Jianchao Yang, Ning Zhang, Jia Li
  • Patent number: 10834525
    Abstract: Systems and methods are provided for receiving a request to activate a new media collection, input increasing the default predetermined window of time during which the new media collection is accessible, and input decreasing or increasing the default geographic boundary size from which media content for the media collection originates. The systems and methods further provide for sending the request to a server computer to activate the new media collection, receiving confirmation that the new media collection was activated, and causing a plurality of content messages to be included in the new media collection and displayed in response to a request from at least one computing device to view the new media collection, based on determining the request occurs within the increased predetermined window of time during which the new media collection is accessible.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: November 10, 2020
    Assignee: Snap Inc.
    Inventors: Nicholas Richard Allen, Sheldon Chang, Maria Pavlovskaia, Amer Shahnawaz, Jianchao Yang
  • Publication number: 20200320353
    Abstract: A dense captioning system and method are provided for analyzing an image to generate proposed bounding regions for a plurality of visual concepts within the image, generating a region feature for each proposed bounding region to generate a plurality of region features of the image, and determining a context feature for the image using the proposed bounding region that is the largest in size of the proposed bounding regions. For each region feature of the plurality of region features of the image, the dense captioning system and method further provide for analyzing the region feature to determine a detection score that indicates a likelihood that the region feature comprises an actual object, and generating a caption for a visual concept in the image using the region feature and the context feature when the detection score is above a specified threshold value.
    Type: Application
    Filed: June 17, 2020
    Publication date: October 8, 2020
    Inventors: Linjie Yang, Kevin Dechau Tang, Jianchao Yang, Jia Li
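
A minimal Python sketch of the selection logic described in the abstract above: the context feature is taken from the largest proposed bounding region, and a caption is generated only for regions whose detection score clears a threshold. The `score_region` and `generate_caption` callables stand in for the detection and captioning models, which the abstract does not detail.

```python
def dense_captions(region_features, bounding_regions, score_region,
                   generate_caption, threshold=0.5):
    """Caption the regions whose detection score exceeds a threshold.

    region_features: list of feature vectors, one per proposed bounding region
    bounding_regions: list of (x, y, w, h) boxes aligned with region_features
    score_region(feature) -> float likelihood that the region is an actual object
    generate_caption(feature, context) -> caption string for the region
    """
    # The context feature comes from the largest proposed bounding region.
    areas = [w * h for (_, _, w, h) in bounding_regions]
    context_feature = region_features[areas.index(max(areas))]

    captions = []
    for feature, box in zip(region_features, bounding_regions):
        if score_region(feature) > threshold:
            captions.append((box, generate_caption(feature, context_feature)))
    return captions
```
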
  • Patent number: 10776663
    Abstract: A machine learning system can generate an image mask (e.g., a pixel mask) comprising pixel assignments for pixels. The pixels can be assigned to classes, including, for example, face, clothes, body skin, or hair. The machine learning system can be implemented using a convolutional neural network that is configured to execute efficiently on computing devices having limited resources, such as mobile phones. The pixel mask can be used to more accurately display video effects interacting with a user or subject depicted in the image.
    Type: Grant
    Filed: July 25, 2019
    Date of Patent: September 15, 2020
    Assignee: Snap Inc.
    Inventors: Lidiia Bogdanovych, William Brendel, Samuel Edward Hare, Fedir Poliakov, Guohui Wang, Xuehan Xiong, Jianchao Yang, Linjie Yang
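
A minimal NumPy sketch of how a pixel mask like the one described above could drive a video effect: pixels assigned to one class (here, hair) are blended with an effect color. The class indices and the blending scheme are illustrative assumptions; the mask itself would come from the convolutional neural network.

```python
import numpy as np

# Assumed class indices; the real model defines its own label set.
CLASSES = {"background": 0, "face": 1, "clothes": 2, "body_skin": 3, "hair": 4}

def apply_effect(image, pixel_mask, effect_color, target_class="hair", alpha=0.6):
    """Blend an effect color into the pixels assigned to one mask class.

    image: (H, W, 3) uint8 frame
    pixel_mask: (H, W) integer class assignment per pixel
    """
    output = image.astype(np.float32)
    selected = pixel_mask == CLASSES[target_class]
    output[selected] = ((1 - alpha) * output[selected]
                        + alpha * np.array(effect_color, dtype=np.float32))
    return output.astype(np.uint8)

# Toy example: a 2x2 frame in which one pixel is labelled "hair".
image = np.full((2, 2, 3), 128, dtype=np.uint8)
mask = np.array([[0, 4], [0, 0]])
print(apply_effect(image, mask, effect_color=(255, 0, 0))[0, 1])  # reddened pixel
```
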
  • Patent number: 10748347
    Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment, a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking is used based on a determination that the target is outside a boundary area. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: August 18, 2020
    Assignee: Snap Inc.
    Inventors: Jia Li, Linjie Luo, Rahul Bhupendra Sheth, Ning Xu, Jianchao Yang
  • Publication number: 20200258313
    Abstract: Systems, devices, media, and methods are presented for object detection and inserting graphical elements into an image stream in response to detecting the object. The systems and methods detect an object of interest in received frames of a video stream. The systems and methods identify a bounding box for the object of interest and estimate a three-dimensional position of the object of interest based on a scale of the object of interest. The systems and methods generate one or more graphical elements having a size based on the scale of the object of interest and a position based on the three-dimensional position estimated for the object of interest. The one or more graphical elements are generated within the video stream to form a modified video stream. The systems and methods cause presentation of the modified video stream including the object of interest and the one or more graphical elements.
    Type: Application
    Filed: April 29, 2020
    Publication date: August 13, 2020
    Inventors: Travis Chen, Samuel Edward Hare, Yuncheng Li, Tony Mathew, Jonathan Solichin, Jianchao Yang, Ning Zhang
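
A minimal Python sketch of one common way to estimate a three-dimensional position from bounding-box scale, as described in the abstract above: a pinhole-camera approximation in which an assumed real-world object height and an assumed focal length give depth, and the box centre is back-projected. The abstract does not specify the estimation method, so this is an illustrative choice.

```python
def estimate_3d_position(bbox, real_height_m, focal_px, image_size):
    """Approximate an object's 3-D position from the scale of its bounding box.

    bbox: (x, y, w, h) in pixels
    real_height_m: assumed real-world height of the object class in metres
    focal_px: assumed focal length in pixels (pinhole camera model)
    image_size: (width, height) of the frame in pixels
    """
    x, y, w, h = bbox
    # An object of known height appears smaller the farther away it is.
    depth = focal_px * real_height_m / h
    # Back-project the bounding-box centre to camera coordinates.
    cx, cy = image_size[0] / 2, image_size[1] / 2
    world_x = (x + w / 2 - cx) * depth / focal_px
    world_y = (y + h / 2 - cy) * depth / focal_px
    return world_x, world_y, depth

# Example: a 300 px tall detection of an object assumed to be 1.7 m tall.
print(estimate_3d_position((500, 200, 150, 300), 1.7, focal_px=1000,
                           image_size=(1920, 1080)))
```
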
  • Publication number: 20200250870
    Abstract: Systems and methods are provided for receiving, at a server computer, a plurality of content messages from a plurality of content sources, each content message comprising media content, for each of the plurality of content messages received, associating the media content with a predetermined media collection, and storing the content message in a database. The systems and methods further provide for causing the plurality of content messages to be displayed on an operator device with other content messages associated with the media collection, determining that a predetermined trigger related to the media collection has been activated, updating an identifier of the media collection from a first indicator to a second indicator indicating that an action needs to be taken on the media collection, and causing the updated identifier with the second indicator to be displayed on an operator device.
    Type: Application
    Filed: April 23, 2020
    Publication date: August 6, 2020
    Inventors: Nicholas Richard Allen, Sheldon Chang, Maria Pavlovskaia, Amer Shahnawaz, Jianchao Yang
  • Patent number: 10733255
    Abstract: Systems, devices, methods, media, and instructions for automated image processing and content curation are described. In one embodiment, a server computer system communicates at least a portion of a first content collection to a first client device, and receives a first selection communication in response, the first selection communication identifying a first piece of content of the first plurality of pieces of content. The server analyzes the first piece of content to identify a set of context values for the first piece of content, and accesses a second content collection comprising pieces of content sharing at least a portion of the set of context values of the first piece of content. In various embodiments, different content values, image processing operations, and content selection operations are used to curate the content collections.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: August 4, 2020
    Assignee: Snap Inc.
    Inventors: Jianchao Yang, Yuke Zhu, Ning Xu, Kevin Dechau Tang, Jia Li
  • Patent number: 10726306
    Abstract: A dense captioning system and method are provided for analyzing an image to generate proposed bounding regions for a plurality of visual concepts within the image, generating a region feature for each proposed bounding region to generate a plurality of region features of the image, and determining a context feature for the image using the proposed bounding region that is the largest in size of the proposed bounding regions. For each region feature of the plurality of region features of the image, the dense captioning system and method further provide for analyzing the region feature to determine a detection score that indicates a likelihood that the region feature comprises an actual object, and generating a caption for a visual concept in the image using the region feature and the context feature when the detection score is above a specified threshold value.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: July 28, 2020
    Assignee: Snap Inc.
    Inventors: Linjie Yang, Kevin Dechau Tang, Jianchao Yang, Jia Li
  • Patent number: 10679428
    Abstract: Systems, devices, media, and methods are presented for object detection and inserting graphical elements into an image stream in response to detecting the object. The systems and methods detect an object of interest in received frames of a video stream. The systems and methods identify a bounding box for the object of interest and estimate a three-dimensional position of the object of interest based on a scale of the object of interest. The systems and methods generate one or more graphical elements having a size based on the scale of the object of interest and a position based on the three-dimensional position estimated for the object of interest. The one or more graphical elements are generated within the video stream to form a modified video stream. The systems and methods cause presentation of the modified video stream including the object of interest and the one or more graphical elements.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: June 9, 2020
    Assignee: Snap Inc.
    Inventors: Travis Chen, Samuel Edward Hare, Yuncheng Li, Tony Mathew, Jonathan Solichin, Jianchao Yang, Ning Zhang
  • Patent number: 10679389
    Abstract: Systems and methods are provided for receiving, at a server computer, a plurality of content messages from a plurality of content sources, each content message comprising media content, for each of the plurality of content messages received, associating the media content with a predetermined media collection, and storing the content message in a database. The systems and methods further provide for causing the plurality of content messages to be displayed on an operator device with other content messages associated with the media collection, determining that a predetermined trigger related to the media collection has been activated, updating an identifier of the media collection from a first indicator to a second indicator indicating that an action needs to be taken on the media collection, and causing the updated identifier with the second indicator to be displayed on an operator device.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: June 9, 2020
    Assignee: Snap Inc.
    Inventors: Nicholas Richard Allen, Sheldon Chang, Maria Pavlovskaia, Amer Shahnawaz, Jianchao Yang
  • Publication number: 20200152238
    Abstract: Systems and methods are described for determining a first media item related to an event, of a plurality of stored media items each comprising video content related to the event, that was captured in a device orientation corresponding to a first device orientation detected for the first computing device; providing, to the first computing device, the first media item to be displayed on the first computing device; in response to a detected change to a second device orientation for the first computing device, determining a second media item that was captured in a device orientation corresponding to the second device orientation detected for the first computing device; and providing, to the first computing device, the second media item to be displayed on the first computing device.
    Type: Application
    Filed: January 15, 2020
    Publication date: May 14, 2020
    Inventors: Jia Li, Nathan Litke, Jose Jesus (Joseph) Paredes, Rahul Bhupendra Sheth, Daniel Szeto, Ning Xu, Jianchao Yang
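
A minimal Python sketch of the lookup described in the abstract above: among media items stored for an event, return one whose capture orientation matches the orientation currently detected on the viewing device, so that rotating the device switches the item served. The orientation labels and storage layout are assumptions.

```python
def select_media_for_orientation(stored_media, event_id, device_orientation):
    """Pick a media item for an event captured in the requested orientation.

    stored_media: list of dicts with "event", "orientation", and "url" keys
    device_orientation: orientation detected on the viewing device,
                        e.g. "portrait" or "landscape"
    """
    for item in stored_media:
        if item["event"] == event_id and item["orientation"] == device_orientation:
            return item
    return None

media = [
    {"event": "concert", "orientation": "portrait", "url": "clip_a.mp4"},
    {"event": "concert", "orientation": "landscape", "url": "clip_b.mp4"},
]
# When the device rotates, a different clip is selected for the same event.
print(select_media_for_orientation(media, "concert", "landscape")["url"])  # clip_b.mp4
```
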
  • Publication number: 20200128194
    Abstract: Systems and methods are described for determining that a user interaction with a display of a computing device during display of a video comprising a sequence of frames indicates a region of interest in a current frame of the sequence of frames of the displayed video. For each frame of the sequence of frames after the current frame, the frame is cropped to generate a cropped frame comprising a portion of the frame that includes the region of interest, the cropped frame is enlarged based on a display size corresponding to an angle or orientation of the computing device during display of the video, and the enlarged cropped frame is displayed in the sequence of frames of the video on the display of the computing device in place of the original frame.
    Type: Application
    Filed: December 20, 2019
    Publication date: April 23, 2020
    Inventors: Jia Li, Nathan Litke, Jose Jesus (Joseph) Paredes, Rahul Bhupendra Sheth, Daniel Szeto, Ning Xu, Jianchao Yang
  • Patent number: 10623662
    Abstract: Systems and methods are described for receiving, at a computing device, a video comprising a plurality of frames and determining, by the computing device, that vertical cropping should be performed for the video. For each frame of the plurality of frames, the computing device processes the video by analyzing the frame to determine a region of interest in the frame, wherein the frame is a first frame, cropping the first frame based on the region of interest in the frame to produce a vertically cropped frame for the video, determining a second frame immediately preceding the first frame, and smoothing a trajectory from the second frame to the vertically cropped frame. The vertically cropped frame is displayed to a user instead of the first frame.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: April 14, 2020
    Assignee: Snap Inc.
    Inventors: Jia Li, Nathan Litke, Jose Jesus (Joseph) Paredes, Rahul Bhupendra Sheth, Daniel Szeto, Ning Xu, Jianchao Yang
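
A minimal Python sketch of the smoothing step described in the abstract above: the horizontal position of the vertical crop follows the detected region of interest, but only part of the way on each frame, so the crop window does not jump between consecutive frames. The exponential-moving-average smoothing and the crop width are illustrative choices; the patent does not specify the smoothing method.

```python
def smooth_crop_centers(roi_centers, alpha=0.2):
    """Smooth the horizontal crop position across frames.

    roi_centers: per-frame x coordinate of the detected region of interest
    alpha: smoothing factor; smaller values follow the region of interest more slowly
    """
    smoothed, current = [], roi_centers[0]
    for center in roi_centers:
        # Move the crop only part of the way toward the new region of interest.
        current = (1 - alpha) * current + alpha * center
        smoothed.append(current)
    return smoothed

def crop_bounds(frame_width, crop_width, center_x):
    """Return the left/right bounds of a vertical crop centred on center_x."""
    left = min(max(int(center_x - crop_width / 2), 0), frame_width - crop_width)
    return left, left + crop_width

# The region of interest jumps at frame 2, but the crop trajectory moves gradually.
centers = smooth_crop_centers([960, 960, 1400, 1400, 1400])
print([crop_bounds(1920, 608, c) for c in centers])
```
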
  • Patent number: 10622023
    Abstract: Systems and methods are described for receiving, at a computing device, a plurality of video sources related to an event. For each video source of the plurality of video sources, the systems and methods further provide for analyzing, by the computing device, the video source of the plurality of video sources to determine a device orientation to associate with the video source, associating, by the computing device, the device orientation with the video source, and storing, by the computing device, the video source and the associated device orientation.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: April 14, 2020
    Assignee: Snap Inc.
    Inventors: Jia Li, Nathan Litke, Jose Jesus (Joseph) Paredes, Rahul Bhupendra Sheth, Daniel Szeto, Ning Xu, Jianchao Yang
  • Patent number: 10621468
    Abstract: Techniques are disclosed for image feature representation. The techniques exhibit discriminative power that can be used in any number of classification tasks, and are particularly effective with respect to fine-grained image classification tasks. In an embodiment, a given image to be classified is divided into image patches. A vector is generated for each image patch. Each image patch vector is compared to the Gaussian mixture components (each mixture component is also a vector) of a Gaussian Mixture Model (GMM). Each such comparison generates a similarity score for each image patch vector. For each Gaussian mixture component, the image patch vectors associated with a similarity score that is too low are eliminated. The selectively pooled vectors from all the Gaussian mixture components are then concatenated to form the final image feature vector, which can be provided to a classifier so the given input image can be properly categorized.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: April 14, 2020
    Assignee: Adobe Inc.
    Inventors: Jianchao Yang, Jonathan Brandt
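
A minimal NumPy sketch of the selective pooling pipeline outlined in the abstract above: each patch vector is scored against each Gaussian mixture component, low-scoring patches are eliminated per component, the surviving vectors are pooled, and the pooled vectors are concatenated into the final image feature. The diagonal-Gaussian similarity score, the top-k elimination rule, and average pooling are illustrative assumptions; the patch encoder and the trained GMM are taken as given.

```python
import numpy as np

def selective_pool_features(patch_vectors, gmm_means, gmm_variances, keep_ratio=0.25):
    """Pool patch vectors per Gaussian component, keeping only the most similar patches.

    patch_vectors: (num_patches, dim) array of patch descriptors
    gmm_means: (num_components, dim) means of a trained Gaussian mixture model
    gmm_variances: (num_components, dim) diagonal variances of the mixture
    keep_ratio: fraction of the most similar patches kept per component
    """
    pooled = []
    keep = max(1, int(keep_ratio * len(patch_vectors)))
    for mean, var in zip(gmm_means, gmm_variances):
        # Similarity of each patch to this component (diagonal Gaussian log-likelihood).
        diff = patch_vectors - mean
        log_likelihood = -0.5 * np.sum(diff ** 2 / var + np.log(var), axis=1)
        # Eliminate patch vectors whose similarity score is too low for this component.
        survivors = patch_vectors[np.argsort(log_likelihood)[-keep:]]
        # Pool the surviving patch vectors (average pooling here).
        pooled.append(survivors.mean(axis=0))
    # Concatenate the pooled vectors from all components into the final feature.
    return np.concatenate(pooled)

# 100 patch descriptors and a 16-component GMM give a 16 * 64 = 1024-dim feature.
patches = np.random.randn(100, 64)
means = np.random.randn(16, 64)
variances = np.ones((16, 64))
print(selective_pool_features(patches, means, variances).shape)  # (1024,)
```
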