Patents by Inventor Baoyuan Wang

Baoyuan Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9886094
    Abstract: Low-latency gesture detection is described, for example, to compute a gesture class from a live stream of image frames of a user making a gesture, for example, as part of a natural user interface controlling a game system or other system. In examples, machine learning components are trained to learn gesture primitives and at test time, are able to detect gestures using the learned primitives, in a fast, accurate manner. For example, a gesture primitive is a latent (unobserved) variable describing features of a subset of frames from a sequence of frames depicting a gesture. For example, the subset of frames has many fewer frames than a sequence of frames depicting a complete gesture. In various examples gesture primitives are learnt from instance level features computed by aggregating frame level features to capture temporal structure. In examples frame level features comprise body position and body part articulation state features.
    Type: Grant
    Filed: April 28, 2014
    Date of Patent: February 6, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Baoyuan Wang, Szymon Piotr Stachniak, Zhuowen Tu, Baining Guo, Ke Deng
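The abstract above can be sketched in simplified form: frame-level features are mean-pooled over short sliding windows into instance-level features, and a detection fires as soon as any instance feature falls near a learned primitive. All names, the mean-pooling choice, and the distance threshold are illustrative, not the patented method.

```python
import math

def aggregate_instance_features(frame_features, window=4):
    """Aggregate frame-level features over short sliding windows into
    instance-level features capturing temporal structure
    (mean pooling here is an illustrative choice)."""
    instances = []
    for start in range(len(frame_features) - window + 1):
        chunk = frame_features[start:start + window]
        instances.append([sum(col) / window for col in zip(*chunk)])
    return instances

def detect_gesture(instance_features, primitives, threshold=0.5):
    """Report a detection as soon as any instance feature lies close to a
    learned primitive, so a gesture can fire before it fully completes."""
    for inst in instance_features:
        for prim in primitives:
            if math.dist(inst, prim) < threshold:
                return True
    return False
```

Because each window spans only a handful of frames, a match against a primitive can be reported well before the full gesture sequence has been observed, which is the low-latency property the abstract emphasizes.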
  • Publication number: 20180025749
    Abstract: Described are various technologies that pertain to automatically generating looped videos or cinemagraphs by selecting objects to animate from an input video. In one implementation a group of semantically labeled objects from an input video is received. Candidate objects from the input video that can appear as a moving object in an output cinemagraph or looped video are identified. Candidate video loops are generated using the selected candidate objects. One or more of these candidate video loops are then selected to create a final cinemagraph. The selection of candidate video loops used to create the final cinemagraph can be made by a user or by a predictive model trained to evaluate the subjective attractiveness of the candidate video loops.
    Type: Application
    Filed: September 14, 2016
    Publication date: January 25, 2018
    Inventors: Tae-Hyun Oh, Sing Bing Kang, Neel Suresh Joshi, Baoyuan Wang
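The pipeline described above can be outlined as: enumerate one candidate loop per labeled object (that object moving, the rest frozen), then let a scoring function stand in for either the user's choice or the trained attractiveness model. The data shapes and function names are hypothetical.

```python
def generate_candidate_loops(labeled_objects):
    """Propose one candidate video loop per semantically labeled object:
    that object animates while the others stay frozen (hypothetical shape)."""
    return [{"moving": obj,
             "frozen": [o for o in labeled_objects if o != obj]}
            for obj in labeled_objects]

def select_final_loop(candidates, score_fn):
    """Pick the candidate loop the scoring function rates highest; score_fn
    models either a user's choice or a trained attractiveness predictor."""
    return max(candidates, key=score_fn)
```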
  • Publication number: 20180005077
    Abstract: A “Best of Burst Selector,” or “BoB Selector,” automatically selects a subjectively best image from a single set of images of a scene captured in a burst or continuous capture mode, captured as a video sequence, or captured as multiple images of the scene over any arbitrary period of time and any arbitrary timing between images. This set of images is referred to as a burst set. Selection of the subjectively best image is achieved in real-time by applying a machine-learned model to the burst set. The machine-learned model of the BoB Selector is trained to select one or more subjectively best images from the burst set in a way that closely emulates human selection based on subjective subtleties of human preferences. Images automatically selected by the BoB Selector are presented to a user or saved for further processing.
    Type: Application
    Filed: December 12, 2016
    Publication date: January 4, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Baoyuan Wang, Sing Bing Kang, Joshua Bryan Weisberg
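At its core, the selection step above reduces to ranking the burst set by a learned quality score and keeping the top result(s). This sketch assumes a black-box scoring callable in place of the trained model; the interface is illustrative.

```python
def best_of_burst(burst, model_score, top_k=1):
    """Rank a burst set with a machine-learned quality score and return
    the top_k subjectively best images (hypothetical interface: any
    callable mapping an image to a number stands in for the model)."""
    ranked = sorted(burst, key=model_score, reverse=True)
    return ranked[:top_k]
```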
  • Patent number: 9807301
    Abstract: A “Moment Capture System” automatically captures and buffers multiple image frames, both prior to and following digital camera shutter activation. This capture and buffering is performed in combination with real-time automatic control of camera settings (e.g., exposure time, capture rate, ISO, white balance, aperture, etc.) based on an ongoing real-time evaluation of contents and characteristics of most recent previously buffered frames. Whenever a shutter activation (e.g., a “tap”) is detected, the Moment Capture System pulls some number of pre-tap frames from the buffer and adds some number of post-tap frames to create an “image moment.” Image moments are defined as sets of sequential frames spanning a time period before and after the detected tap. In various implementations, the Moment Capture System performs automated selection of perceptually best images from the buffered frames associated with each tap.
    Type: Grant
    Filed: July 26, 2016
    Date of Patent: October 31, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Joshua Bryan Weisberg, Baoyuan Wang, Sing Bing Kang
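The buffering behavior described above maps naturally onto a bounded ring buffer: frames are continuously appended, and a tap assembles the most recent pre-tap frames plus a fixed number of post-tap frames into an "image moment". The class below is a minimal sketch under that assumption; frame counts and names are illustrative.

```python
from collections import deque

class MomentCapture:
    """Continuously buffer recent frames; on shutter tap, assemble an
    'image moment' from pre-tap buffered frames plus post-tap frames."""

    def __init__(self, pre_frames=3, post_frames=2):
        self.buffer = deque(maxlen=pre_frames)  # ring buffer of latest frames
        self.post_frames = post_frames

    def on_frame(self, frame):
        """Called for every captured frame before any tap occurs."""
        self.buffer.append(frame)

    def on_tap(self, frame_stream):
        """On shutter activation, combine buffered pre-tap frames with the
        next post_frames frames from the live stream."""
        moment = list(self.buffer)
        for _, frame in zip(range(self.post_frames), frame_stream):
            moment.append(frame)
        return moment
```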
  • Patent number: 9779774
    Abstract: A cinemagraph is generated that includes one or more video loops. A cinemagraph generator receives an input video, and semantically segments the frames to identify regions that correspond to semantic objects and the semantic object depicted in each identified region. Input time intervals are then computed for the pixels of the frames of the input video. An input time interval for a particular pixel includes a per-pixel loop period and a per-pixel start time of a loop at the particular pixel. In addition, the input time interval of a pixel is based, in part, on one or more semantic terms which keep pixels associated with the same semantic object in the same video loop. A cinemagraph is then created using the input time intervals computed for the pixels of the frames of the input video.
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: October 3, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sing Bing Kang, Neel Suresh Joshi, Hugues Hoppe, Tae-Hyun Oh, Baoyuan Wang
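The per-pixel input time interval described above can be expressed as a small mapping: an output frame at time t samples the input at the pixel's start time plus an offset modulo its loop period, and all pixels of one semantic object share the same interval so the object loops as a unit. This is a simplified sketch; the static interval convention and names are assumptions.

```python
def looped_pixel_time(t, start, period):
    """Map output time t to an input frame index for a pixel whose loop
    has the given per-pixel start time and loop period."""
    return start + ((t - start) % period)

def per_pixel_intervals(pixel_labels, object_intervals, static=(0, 1)):
    """Give every pixel of the same semantic object the same
    (start, period) so the whole object loops together; pixels with no
    labeled object stay static (period 1)."""
    return {p: object_intervals.get(lbl, static)
            for p, lbl in pixel_labels.items()}
```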
  • Publication number: 20150309579
    Abstract: Low-latency gesture detection is described, for example, to compute a gesture class from a live stream of image frames of a user making a gesture, for example, as part of a natural user interface controlling a game system or other system. In examples, machine learning components are trained to learn gesture primitives and at test time, are able to detect gestures using the learned primitives, in a fast, accurate manner. For example, a gesture primitive is a latent (unobserved) variable describing features of a subset of frames from a sequence of frames depicting a gesture. For example, the subset of frames has many fewer frames than a sequence of frames depicting a complete gesture. In various examples gesture primitives are learnt from instance level features computed by aggregating frame level features to capture temporal structure. In examples frame level features comprise body position and body part articulation state features.
    Type: Application
    Filed: April 28, 2014
    Publication date: October 29, 2015
    Applicant: Microsoft Corporation
    Inventors: Baoyuan Wang, Szymon Piotr Stachniak, Zhuowen Tu, Baining Guo, Ke Deng
  • Publication number: 20150003256
    Abstract: A computer-implemented method and telecommunications diagnostic apparatus correlate packets on a core network with those on an access network. An identification attribute is generated from Information Elements (IEs) in S1AP packets that are present in the core network and accessible to the access network. The generated identification attribute is attached to both an access session and a core session, and data packets are correlated between the two by checking whether a core session contains the same identification attribute as an access session within the life span of the access session.
    Type: Application
    Filed: October 28, 2013
    Publication date: January 1, 2015
    Applicant: Tektronix, Inc.
    Inventors: Aleksey G. Ivershen, Vignesh Janakiraman, Ge Zhang, Baoyuan Wang
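The correlation rule above can be sketched directly: a core session is matched to an access session when both carry the same identification attribute and the core packet's timestamp falls within the access session's life span. The dictionary fields below are hypothetical stand-ins for the real session records.

```python
def correlate_sessions(core_sessions, access_sessions):
    """Match core-network sessions to access-network sessions that carry
    the same identification attribute (derived from S1AP IEs) within the
    access session's life span."""
    matches = []
    for a in access_sessions:
        for c in core_sessions:
            if (c["ident"] == a["ident"]
                    and a["start"] <= c["time"] <= a["end"]):
                matches.append((a["id"], c["id"]))
    return matches
```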
  • Patent number: 8254573
    Abstract: System and method for forwarding a ciphering key to a decipher application comprising capturing a first message carrying the ciphering key from a first network interface, identifying a network node associated with the first network interface, identifying a monitor responsible for processing messages captured from interfaces coupled to the network node, and forwarding the ciphering key to the monitor. In an alternative embodiment, the method may further comprise capturing second messages carrying encrypted messages from a second network interface, and deciphering the second messages using the ciphering key. The method may also comprise identifying user equipment associated with the first messages, and selecting a deciphering application running on the monitor using a user equipment identity.
    Type: Grant
    Filed: March 5, 2008
    Date of Patent: August 28, 2012
    Assignee: Tektronix, Inc.
    Inventors: Fangming Huang, Baoyuan Wang
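The routing described above can be sketched as two table lookups (interface to node, node to responsible monitor) followed by storing the key under the user equipment identity for later deciphering. The XOR "cipher" and all names are toy stand-ins, not the actual cipher or API.

```python
def forward_key(interface, ue_id, key, node_by_interface,
                monitor_by_node, key_store):
    """Deliver a captured ciphering key to the monitor responsible for the
    network node behind the interface it was captured from, indexed by
    user equipment identity."""
    node = node_by_interface[interface]
    monitor = monitor_by_node[node]
    key_store.setdefault(monitor, {})[ue_id] = key
    return monitor

def decipher(monitor, ue_id, ciphertext, key_store):
    """Select the deciphering key by user equipment identity; a toy XOR
    stands in for the real deciphering algorithm."""
    key = key_store[monitor][ue_id]
    return bytes(b ^ key for b in ciphertext)
```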
  • Publication number: 20080240438
    Abstract: System and method for forwarding a ciphering key to a decipher application comprising capturing a first message carrying the ciphering key from a first network interface, identifying a network node associated with the first network interface, identifying a monitor responsible for processing messages captured from interfaces coupled to the network node, and forwarding the ciphering key to the monitor. In an alternative embodiment, the method may further comprise capturing second messages carrying encrypted messages from a second network interface, and deciphering the second messages using the ciphering key. The method may also comprise identifying user equipment associated with the first messages, and selecting a deciphering application running on the monitor using a user equipment identity.
    Type: Application
    Filed: March 5, 2008
    Publication date: October 2, 2008
    Applicant: Tektronix, Inc.
    Inventors: Fangming Huang, Baoyuan Wang