Patents by Inventor Baoyuan Wang

Baoyuan Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240082247
    Abstract: The present disclosure relates to the combination of IN10018 and an epidermal growth factor receptor tyrosine kinase inhibitor for the treatment of tumors.
    Type: Application
    Filed: November 6, 2023
    Publication date: March 14, 2024
    Inventors: Baoyuan Zhang, Xuebin Liu, Jiaming Gao, Ping Zhang, Ran Pang, Zaiqi Wang
  • Publication number: 20240069932
    Abstract: A content display method and a terminal device are provided. The method includes obtaining target configuration data for a target quick application. In response to a trigger operation on second related content in the target quick application, a user terminal determines the to-be-displayed content associated with the second related content; if the target configuration data includes an opening manner for the to-be-displayed content, the user terminal opens an interface containing the to-be-displayed content based on that opening manner.
    Type: Application
    Filed: December 13, 2021
    Publication date: February 29, 2024
    Inventor: Baoyuan Wang
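The configuration-driven opening described in this abstract can be sketched in a few lines. Everything below — the function name, the config keys, and the "opening manner" values — is hypothetical, illustrating the flow rather than the patented implementation:

```python
# Hypothetical sketch of config-driven content opening in a "quick application".
# Keys and manner values are made up for illustration, not from the patent.

def open_related_content(target_config: dict, content_id: str) -> str:
    """Resolve the to-be-displayed content for a triggered item and open it
    using the opening manner from the configuration data, if one is given."""
    content = target_config.get("related", {}).get(content_id)
    if content is None:
        return "no-op"                       # nothing associated with the trigger
    manner = target_config.get("opening_manner", {}).get(content_id)
    if manner is not None:
        return f"open[{manner}]:{content}"   # manner supplied by configuration
    return f"open[default]:{content}"        # fall back to a default manner

config = {
    "related": {"article_2": "articles/42"},
    "opening_manner": {"article_2": "half_screen_card"},
}
print(open_related_content(config, "article_2"))  # open[half_screen_card]:articles/42
```

The point of the configuration indirection is that the server can change how content opens (card, full screen, external browser) without shipping a new client.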
  • Publication number: 20220392166
    Abstract: Techniques performed by a data processing system for reconstructing a three-dimensional (3D) model of the face of a human subject herein include obtaining source data comprising a two-dimensional (2D) image, three-dimensional (3D) image, or depth information representing a face of a human subject. Reconstructing the 3D model of the face also includes generating a 3D model of the face of the human subject based on the source data by analyzing the source data to produce a coarse 3D model of the face of the human subject, and refining the coarse 3D model through free-form deformation to produce a fitted 3D model. The coarse 3D model may be a 3D Morphable Model (3DMM), and the coarse 3D model may be refined through free-form deformation in which the deformation of the mesh is limited by applying an as-rigid-as-possible (ARAP) deformation constraint.
    Type: Application
    Filed: August 12, 2022
    Publication date: December 8, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Noranart VESDAPUNT, Wenbin ZHU, Hsiang-Tao WU, Zeyu CHEN, Baoyuan WANG
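The coarse-to-fine idea in this abstract — pull a coarse mesh toward observed targets while a rigidity term limits non-rigid distortion — can be shown with a toy energy minimization. This is only a sketch in the spirit of an ARAP-style constraint, not the patented method; the energy and optimizer below are simplified (a per-edge length-preservation penalty instead of full ARAP local rotations):

```python
# Toy sketch: fit a coarse mesh to target vertices under a rigidity penalty.
# Not the patented algorithm; the edge-length term merely approximates the
# role of an as-rigid-as-possible (ARAP) deformation constraint.
import numpy as np

def constrained_fit(coarse, target, edges, lam=1.0, steps=200, lr=0.1):
    """Gradient descent on ||V - target||^2 + lam * sum_e (|e(V)| - |e(coarse)|)^2."""
    V = coarse.copy()
    rest = {e: np.linalg.norm(coarse[e[0]] - coarse[e[1]]) for e in edges}
    for _ in range(steps):
        grad = 2.0 * (V - target)                  # data term pulls toward targets
        for (i, j) in edges:                       # rigidity term preserves edge lengths
            d = V[i] - V[j]
            L = np.linalg.norm(d) + 1e-12
            g = 2.0 * lam * (L - rest[(i, j)]) * d / L
            grad[i] += g
            grad[j] -= g
        V -= lr * grad
    return V

coarse = np.array([[0., 0.], [1., 0.], [0., 1.]])
target = np.array([[0., 0.], [1.2, 0.], [0., 1.2]])
fitted = constrained_fit(coarse, target, edges=[(0, 1), (0, 2), (1, 2)])
```

The fitted mesh lands between the coarse shape and the targets: the data term alone would snap to the targets, while the rigidity term resists stretching the triangle's edges.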
  • Patent number: 11443484
    Abstract: Techniques performed by a data processing system for reconstructing a three-dimensional (3D) model of the face of a human subject herein include obtaining source data comprising a two-dimensional (2D) image, three-dimensional (3D) image, or depth information representing a face of a human subject. Reconstructing the 3D model of the face also includes generating a 3D model of the face of the human subject based on the source data by analyzing the source data to produce a coarse 3D model of the face of the human subject, and refining the coarse 3D model through free-form deformation to produce a fitted 3D model. The coarse 3D model may be a 3D Morphable Model (3DMM), and the coarse 3D model may be refined through free-form deformation in which the deformation of the mesh is limited by applying an as-rigid-as-possible (ARAP) deformation constraint.
    Type: Grant
    Filed: July 15, 2020
    Date of Patent: September 13, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Noranart Vesdapunt, Wenbin Zhu, Hsiang-Tao Wu, Zeyu Chen, Baoyuan Wang
  • Publication number: 20220081790
    Abstract: The disclosure relates to the technical field of electrolysis cells, and in particular to a solid oxide electrolysis cell (SOEC) and a preparation method thereof. The SOEC provided by the disclosure adopts an n-type TiO2 layer and a p-type La0.6Sr0.4Co0.2Fe0.8O3−δ layer as an electrolyte layer. Although the n-type TiO2 and the p-type La0.6Sr0.4Co0.2Fe0.8O3−δ have both ionic and electronic conductivities, the electric field effect of a PN junction between the two layers can effectively cut off the transmission of intermediate layer electrons and enable ions to rapidly pass through. The SOEC can effectively avoid short circuit and exhibit excellent performance. Furthermore, the above structure allows the SOEC to have a stable performance output, and the SOEC can be produced on a large scale due to low material cost.
    Type: Application
    Filed: December 30, 2020
    Publication date: March 17, 2022
    Inventors: Xunying Wang, Chongqing Liu, Ying Chen, Baoyuan Wang, Wenjing Dong, Chen Xia
  • Patent number: 11238885
    Abstract: A computer-implemented technique for animating a visual representation of a face based on spoken words of a speaker is described herein. A computing device receives an audio sequence comprising content features reflective of spoken words uttered by a speaker. The computing device generates latent content variables and latent style variables based upon the audio sequence. The latent content variables are used to synchronize movement of lips on the visual representation to the spoken words uttered by the speaker. The latent style variables are derived from an expected appearance of facial features of the speaker as the speaker utters the spoken words and are used to synchronize movement of full facial features of the visual representation to the spoken words uttered by the speaker. The computing device causes the visual representation of the face to be animated on a display based upon the latent content variables and the latent style variables.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: February 1, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Gaurav Mittal, Baoyuan Wang
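The content/style split this abstract describes can be illustrated schematically: one latent drives lip sync (what is said), the other drives full-face expression (how it is said). The projections, dimensions, and decoders below are toy placeholders, not the trained networks from the patent:

```python
# Illustrative sketch of a content/style latent split for audio-driven face
# animation. All weights are random stand-ins for learned parameters.
import numpy as np

rng = np.random.default_rng(0)
W_content = rng.standard_normal((8, 40))   # hypothetical learned projections
W_style   = rng.standard_normal((4, 40))
W_lips    = rng.standard_normal((10, 8))   # lip blendshape decoder
W_face    = rng.standard_normal((20, 4))   # full-face blendshape decoder

def animate(audio_features: np.ndarray) -> np.ndarray:
    z_content = np.tanh(W_content @ audio_features)  # what is being said
    z_style   = np.tanh(W_style @ audio_features)    # how the speaker looks saying it
    lips = W_lips @ z_content                        # lip motion from content only
    face = W_face @ z_style                          # expression from style only
    return np.concatenate([lips, face])              # 30 animation parameters per frame

params = animate(rng.standard_normal(40))
```

Keeping lip motion a function of the content latent alone is what makes the lips stay synchronized to the words even when the style latent varies the overall expression.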
  • Publication number: 20210358212
    Abstract: Techniques performed by a data processing system for reconstructing a three-dimensional (3D) model of the face of a human subject herein include obtaining source data comprising a two-dimensional (2D) image, three-dimensional (3D) image, or depth information representing a face of a human subject. Reconstructing the 3D model of the face also includes generating a 3D model of the face of the human subject based on the source data by analyzing the source data to produce a coarse 3D model of the face of the human subject, and refining the coarse 3D model through free-form deformation to produce a fitted 3D model. The coarse 3D model may be a 3D Morphable Model (3DMM), and the coarse 3D model may be refined through free-form deformation in which the deformation of the mesh is limited by applying an as-rigid-as-possible (ARAP) deformation constraint.
    Type: Application
    Filed: July 15, 2020
    Publication date: November 18, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Noranart VESDAPUNT, Wenbin ZHU, Hsiang-Tao WU, Zeyu CHEN, Baoyuan WANG
  • Patent number: 11036975
    Abstract: Described herein is a human pose prediction system and method. An image comprising at least a portion of a human body is received. A trained neural network is used to predict one or more human features (e.g., joints/aspects of a human body) within the received image, and to predict one or more human poses in accordance with the predicted one or more human features. The trained neural network can be an end-to-end trained, single stage deep neural network. An action is performed based on the predicted one or more human poses. For example, the human pose(s) can be displayed as an overlay on the received image.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: June 15, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Noranart Vesdapunt, Baoyuan Wang, Ying Jin, Pierrick Arsenault
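A common way a pose network's predicted joint features are turned into keypoints is heatmap decoding: take the peak response per joint and suppress low-confidence peaks. The sketch below illustrates that decoding step only; the thresholds and shapes are illustrative, not taken from the patent:

```python
# Minimal sketch of heatmap-based joint decoding (illustrative, not the
# patented single-stage network itself).
import numpy as np

def decode_pose(heatmaps: np.ndarray, threshold: float = 0.5):
    """heatmaps: (num_joints, H, W) network output. Returns per-joint (x, y)
    at the peak response, or None when the peak is below the threshold."""
    joints = []
    for hm in heatmaps:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        joints.append((int(x), int(y)) if hm[y, x] >= threshold else None)
    return joints

hm = np.zeros((2, 4, 4))
hm[0, 1, 2] = 0.9          # confident joint at x=2, y=1
hm[1, 3, 0] = 0.2          # low-confidence joint: suppressed
print(decode_pose(hm))     # [(2, 1), None]
```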
  • Patent number: 10713816
    Abstract: Disclosed in some examples, are methods, systems, and machine readable mediums that correct image color casts by utilizing a fully convolutional network (FCN), where the patches in an input image may differ in influence over the color constancy estimation. This influence is formulated as a confidence weight that reflects the value of a patch for inferring the illumination color. The confidence weights are integrated into a novel pooling layer where they are applied to local patch estimates in determining a global color constancy result.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: July 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yuanming Hu, Baoyuan Wang, Stephen S. Lin
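The pooling idea at the core of this abstract — weight each patch's local illuminant estimate by a confidence score before aggregating — is compact enough to show directly. The estimates and weights below are made-up inputs; in the patent the weights come out of the trained FCN's pooling layer:

```python
# Sketch of confidence-weighted pooling of per-patch illuminant estimates
# into one global color constancy result. Inputs are illustrative.
import numpy as np

def confidence_pool(patch_estimates: np.ndarray, confidences: np.ndarray) -> np.ndarray:
    """patch_estimates: (N, 3) RGB illuminant estimate per patch.
    confidences: (N,) non-negative weight per patch.
    Returns the unit-norm global illuminant estimate."""
    pooled = (confidences[:, None] * patch_estimates).sum(axis=0)
    return pooled / np.linalg.norm(pooled)

estimates = np.array([[0.8, 0.6, 0.4],   # informative patch (e.g., specular highlight)
                      [0.3, 0.3, 0.3]])  # ambiguous gray patch
weights = np.array([0.9, 0.1])           # confident patch dominates the pool
global_illuminant = confidence_pool(estimates, weights)
```

Down-weighting ambiguous patches is exactly why this beats uniform averaging: textureless or gray regions carry little information about the illuminant color.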
  • Publication number: 20200193152
    Abstract: Described herein is a human pose prediction system and method. An image comprising at least a portion of a human body is received. A trained neural network is used to predict one or more human features (e.g., joints/aspects of a human body) within the received image, and to predict one or more human poses in accordance with the predicted one or more human features. The trained neural network can be an end-to-end trained, single stage deep neural network. An action is performed based on the predicted one or more human poses. For example, the human pose(s) can be displayed as an overlay on the received image.
    Type: Application
    Filed: December 14, 2018
    Publication date: June 18, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Noranart VESDAPUNT, Baoyuan WANG, Ying JIN, Pierrick ARSENAULT
  • Patent number: 10671895
    Abstract: A “Best of Burst Selector,” or “BoB Selector,” automatically selects a subjectively best image from a single set of images of a scene captured in a burst or continuous capture mode, captured as a video sequence, or captured as multiple images of the scene over any arbitrary period of time and any arbitrary timing between images. This set of images is referred to as a burst set. Selection of the subjectively best image is achieved in real-time by applying a machine-learned model to the burst set. The machine-learned model of the BoB Selector is trained to select one or more subjectively best images from the burst set in a way that closely emulates human selection based on subjective subtleties of human preferences. Images automatically selected by the BoB Selector are presented to a user or saved for further processing.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: June 2, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Baoyuan Wang, Sing Bing Kang, Joshua Bryan Weisberg
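The selection loop this abstract describes is simple once a scoring function exists. The patent's scorer is a machine-learned model of human preference; in the sketch below a crude sharpness proxy (variance of a Laplacian-like filter) stands in for it, purely to show the loop:

```python
# Sketch only: score each frame of a burst set and keep the best. The
# sharpness proxy below is a stand-in for the patented learned model.
import numpy as np

def sharpness(img: np.ndarray) -> float:
    """Variance of a discrete Laplacian response; higher = more detail."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return float(lap.var())

def best_of_burst(burst: list) -> int:
    """Return the index of the highest-scoring frame in the burst set."""
    return int(np.argmax([sharpness(f) for f in burst]))

rng = np.random.default_rng(1)
sharp = rng.standard_normal((16, 16))          # frame with high-frequency detail
blurry = np.ones((16, 16)) * sharp.mean()      # flat frame, no detail
print(best_of_burst([blurry, sharp]))          # 1
```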
  • Publication number: 20200135226
    Abstract: A computer-implemented technique for animating a visual representation of a face based on spoken words of a speaker is described herein. A computing device receives an audio sequence comprising content features reflective of spoken words uttered by a speaker. The computing device generates latent content variables and latent style variables based upon the audio sequence. The latent content variables are used to synchronize movement of lips on the visual representation to the spoken words uttered by the speaker. The latent style variables are derived from an expected appearance of facial features of the speaker as the speaker utters the spoken words and are used to synchronize movement of full facial features of the visual representation to the spoken words uttered by the speaker. The computing device causes the visual representation of the face to be animated on a display based upon the latent content variables and the latent style variables.
    Type: Application
    Filed: October 29, 2018
    Publication date: April 30, 2020
    Inventors: Gaurav MITTAL, Baoyuan WANG
  • Patent number: 10530991
    Abstract: An “Exposure Controller” provides various techniques for training and applying a deep convolution network to provide real-time automated camera exposure control, as a real-time function of scene semantic context, in a way that improves image quality for a wide range of image subject types in a wide range of real-world lighting conditions. The deep learning approach applied by the Exposure Controller to implement this functionality first uses supervised learning to achieve a good anchor point that mimics integral exposure control for a particular camera model or type, followed by refinement through reinforcement learning. The end-to-end system (e.g., exposure control and image capture) provided by the Exposure Controller provides real-time performance for predicting and setting camera exposure values to improve overall visual quality of the resulting image over a wide range of image capture scenarios (e.g., back-lit scenes, front lighting, rapid changes to lighting conditions, etc.).
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: January 7, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Baoyuan Wang, Sing Bing Kang
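The "anchor point that mimics integral exposure control" mentioned in the abstract can be illustrated with a plain feedback loop: nudge the exposure value until the frame's mean luminance hits a target. This is only the classical baseline the learned controller is anchored to, not the deep network or its reinforcement-learning refinement:

```python
# Illustrative feedback sketch of integral-style exposure control (the
# baseline behavior, not the patented learned policy).
import numpy as np

def expose(scene_luminance: np.ndarray, ev: float) -> np.ndarray:
    """Simulated capture: each +1 EV doubles captured light, clipped to [0, 1]."""
    return np.clip(scene_luminance * 2.0 ** ev, 0.0, 1.0)

def control_exposure(scene, target=0.5, ev=0.0, gain=2.0, steps=50):
    for _ in range(steps):
        mean = expose(scene, ev).mean()
        ev += gain * (target - mean)       # integral-style correction toward target
    return ev

scene = np.full((8, 8), 0.1)               # dark / back-lit scene
ev = control_exposure(scene)               # converges near log2(5) ≈ 2.32 EV
```

The appeal of a learned controller over this loop is that it can react in one shot to scene semantics (faces, backlighting) instead of iterating on a global mean.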
  • Patent number: 10445586
    Abstract: Techniques for automatically selecting image frames from a video and providing the selected image frames to a device for display are disclosed.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: October 15, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Utkarsh Sinha, Kandarpkumar J. Makwana, Melissa Regalia, Wei-Chih Chen, Joshua B. Weisberg, Baoyuan Wang, Gil M. Nahmias, Noranart Vesdapunt
  • Publication number: 20190180109
    Abstract: Techniques for automatically selecting image frames from a video and providing the selected image frames to a device for display are disclosed.
    Type: Application
    Filed: March 16, 2018
    Publication date: June 13, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Utkarsh SINHA, Kandarpkumar J. MAKWANA, Melissa REGALIA, Wei-Chih CHEN, Joshua B. WEISBERG, Baoyuan WANG, Gil M. NAHMIAS, Noranart VESDAPUNT
  • Publication number: 20190019311
    Abstract: Disclosed in some examples, are methods, systems, and machine readable mediums that correct image color casts by utilizing a fully convolutional network (FCN), where the patches in an input image may differ in influence over the color constancy estimation. This influence is formulated as a confidence weight that reflects the value of a patch for inferring the illumination color. The confidence weights are integrated into a novel pooling layer where they are applied to local patch estimates in determining a global color constancy result.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Inventors: Yuanming Hu, Baoyuan Wang, Stephen S. Lin
  • Patent number: 10104561
    Abstract: A computer-implemented method and telecommunications diagnostic apparatus that correlate packets on a core network with those on the access network. An identification attribute is generated from Information Elements (IEs) in S1AP packets that are present in the core network and accessible to the access network. The generated identification attribute is integrated into an access session and a core session of the access and core networks to correlate data packets between the two sessions, by checking whether a core session contains the same identification attribute as an access session within the life span of the access session.
    Type: Grant
    Filed: October 28, 2013
    Date of Patent: October 16, 2018
    Assignee: NETSCOUT SYSTEMS TEXAS, LLC
    Inventors: Aleksey G. Ivershen, Vignesh Janakiraman, Ge Zhang, Baoyuan Wang
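The correlation this abstract describes reduces to: derive an attribute from IEs visible on both networks, then match sessions on that attribute within the access session's life span. The sketch below is schematic; the field names (and the choice of eNB/MME UE S1AP IDs as the basis of the attribute) are hypothetical, not taken from the patent:

```python
# Schematic sketch of core/access session correlation. Field names and the
# attribute derivation are hypothetical placeholders.
def ident_attr(ies: dict) -> str:
    # e.g., derived from IEs such as eNB-UE-S1AP-ID and MME-UE-S1AP-ID
    return f"{ies['enb_ue_id']}:{ies['mme_ue_id']}"

def correlate(core_sessions, access_sessions):
    """Pair each core session with an access session sharing the same
    identification attribute, within the access session's life span."""
    pairs = []
    for c in core_sessions:
        for a in access_sessions:
            if (ident_attr(c["ies"]) == ident_attr(a["ies"])
                    and a["start"] <= c["time"] <= a["end"]):
                pairs.append((c["id"], a["id"]))
    return pairs

access = [{"id": "A1", "ies": {"enb_ue_id": 7, "mme_ue_id": 42}, "start": 0, "end": 100}]
core = [{"id": "C1", "ies": {"enb_ue_id": 7, "mme_ue_id": 42}, "time": 50}]
print(correlate(core, access))   # [('C1', 'A1')]
```

The life-span check matters because S1AP UE identifiers are reused over time, so the same attribute can legitimately reappear in a later, unrelated session.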
  • Publication number: 20180220061
    Abstract: An “Exposure Controller” provides various techniques for training and applying a deep convolution network to provide real-time automated camera exposure control, as a real-time function of scene semantic context, in a way that improves image quality for a wide range of image subject types in a wide range of real-world lighting conditions. The deep learning approach applied by the Exposure Controller to implement this functionality first uses supervised learning to achieve a good anchor point that mimics integral exposure control for a particular camera model or type, followed by refinement through reinforcement learning. The end-to-end system (e.g., exposure control and image capture) provided by the Exposure Controller provides real-time performance for predicting and setting camera exposure values to improve overall visual quality of the resulting image over a wide range of image capture scenarios (e.g., back-lit scenes, front lighting, rapid changes to lighting conditions, etc.).
    Type: Application
    Filed: August 7, 2017
    Publication date: August 2, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Baoyuan Wang, Sing Bing Kang
  • Publication number: 20180121733
    Abstract: A “Quality Predictor” applies a machine-learned quality model to predict subjective quality of an output video of an image sequence processing algorithm without actually running that algorithm on a temporal sequence of image frames (referred to as “candidate sets”). Candidate sets having sufficiently high predicted quality scores are processed by the image sequence processing algorithm to produce an output video. Therefore, the Quality Predictor reduces computational overhead by eliminating unnecessary processing of candidate sets when the image sequence processing algorithm is not expected to produce acceptable results. The quality model is trained on a combination of human quality scores of output videos generated by the image sequence processing algorithm and image features extracted from frames of image sequences used to generate those output videos.
    Type: Application
    Filed: October 27, 2016
    Publication date: May 3, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Neel Suresh Joshi, Baoyuan Wang, Sing Bing Kang
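The gating pattern in this abstract — predict output quality from cheap input features and only run the expensive image-sequence algorithm when the prediction clears a threshold — looks like this in miniature. The linear "model," its two toy features, and its weights are placeholders for the trained quality model:

```python
# Sketch of quality-gated processing. The feature extractor and linear
# scorer are toy stand-ins for the patented learned quality model.
import numpy as np

def frame_features(frames: np.ndarray) -> np.ndarray:
    """Two toy features: mean brightness and mean inter-frame motion."""
    motion = np.abs(np.diff(frames, axis=0)).mean() if len(frames) > 1 else 0.0
    return np.array([frames.mean(), motion])

def predict_quality(frames, weights=np.array([1.0, -5.0]), bias=0.0) -> float:
    return float(weights @ frame_features(frames) + bias)  # higher = better

def maybe_process(frames, threshold=0.3):
    """Run the costly pipeline only when predicted quality is acceptable."""
    if predict_quality(frames) < threshold:
        return None                      # skip: output not expected to look good
    return "output_video"                # placeholder for the real processing

steady = np.full((5, 4, 4), 0.5)                      # bright, no motion -> high score
shaky = np.random.default_rng(2).random((5, 4, 4))    # heavy motion -> low score
```

The savings come from never invoking the expensive algorithm on candidate sets it would have wasted time on.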
  • Patent number: 9886094
    Abstract: Low-latency gesture detection is described, for example, to compute a gesture class from a live stream of image frames of a user making a gesture, for example, as part of a natural user interface controlling a game system or other system. In examples, machine learning components are trained to learn gesture primitives and at test time, are able to detect gestures using the learned primitives, in a fast, accurate manner. For example, a gesture primitive is a latent (unobserved) variable describing features of a subset of frames from a sequence of frames depicting a gesture. For example, the subset of frames has many fewer frames than a sequence of frames depicting a complete gesture. In various examples gesture primitives are learnt from instance-level features computed by aggregating frame-level features to capture temporal structure. In examples frame-level features comprise body position and body part articulation state features.
    Type: Grant
    Filed: April 28, 2014
    Date of Patent: February 6, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Baoyuan Wang, Szymon Piotr Stachniak, Zhuowen Tu, Baining Guo, Ke Deng
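The aggregation step the abstract attributes to gesture primitives — frame-level features pooled over a short window into an instance-level feature that keeps some temporal structure — can be illustrated with a toy classifier. The features, the aggregation choice (mean plus first-to-last delta), and the centroids are all illustrative placeholders, not the learned model:

```python
# Illustrative sketch: pool frame-level features into an instance-level
# feature, then classify with nearest centroid. All values are toy data.
import numpy as np

def instance_feature(frame_features: np.ndarray) -> np.ndarray:
    """Aggregate a (T, D) window of frame features, keeping some temporal
    structure by concatenating the mean with the first-to-last delta."""
    return np.concatenate([frame_features.mean(axis=0),
                           frame_features[-1] - frame_features[0]])

def classify(window, centroids):
    f = instance_feature(window)
    dists = {name: np.linalg.norm(f - c) for name, c in centroids.items()}
    return min(dists, key=dists.get)

# Toy 1-D "hand height" feature: a wave raises the hand, a rest does not.
wave = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
rest = np.zeros((10, 1))
centroids = {"wave": np.array([0.5, 1.0]), "rest": np.array([0.0, 0.0])}
print(classify(wave, centroids))   # wave
```

Classifying on short windows rather than whole gesture sequences is what buys the low latency the abstract emphasizes: a decision can be made before the gesture completes.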