Patents by Inventor Baoyuan Wang
Baoyuan Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240082247
Abstract: The present disclosure relates to the combination of IN10018 and an epidermal growth factor receptor tyrosine kinase inhibitor for the treatment of tumors.
Type: Application
Filed: November 6, 2023
Publication date: March 14, 2024
Inventors: Baoyuan Zhang, Xuebin Liu, Jiaming Gao, Ping Zhang, Ran Pang, Zaiqi Wang
-
Publication number: 20240069932
Abstract: A content display method and a terminal device are provided. The method includes obtaining target configuration data for a target quick application. A user terminal determines, in response to a trigger operation on second related content in the target quick application, to-be-displayed content associated with the second related content. If the target configuration data includes an opening manner for the to-be-displayed content, the user terminal opens an interface including the to-be-displayed content based on that opening manner.
Type: Application
Filed: December 13, 2021
Publication date: February 29, 2024
Inventor: Baoyuan Wang
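The abstract above describes config-driven dispatch: content is opened in the manner named by the target configuration data, with a fallback when none is configured. A minimal sketch of that lookup logic; all names here (`open_content`, `openers`, the config keys) are illustrative placeholders, not from the patent:

```python
def open_content(content_id, config, default_opener, openers):
    # Look up the configured opening manner for this content, if any;
    # fall back to the default opening manner when the config names none.
    manner = config.get("opening_manner", {}).get(content_id)
    opener = openers.get(manner, default_opener)
    return opener(content_id)
```

The design keeps the terminal's display logic independent of the quick application: new opening manners only require a new entry in the configuration data and the opener table.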
-
Publication number: 20220392166
Abstract: Techniques performed by a data processing system for reconstructing a three-dimensional (3D) model of the face of a human subject include obtaining source data comprising a two-dimensional (2D) image, a 3D image, or depth information representing the face of the subject. Reconstructing the 3D model also includes generating a 3D model of the face based on the source data by analyzing the source data to produce a coarse 3D model, and refining the coarse 3D model through free-form deformation to produce a fitted 3D model. The coarse 3D model may be a 3D Morphable Model (3DMM), and it may be refined through free-form deformation in which the deformation of the mesh is limited by applying an as-rigid-as-possible (ARAP) deformation constraint.
Type: Application
Filed: August 12, 2022
Publication date: December 8, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Noranart Vesdapunt, Wenbin Zhu, Hsiang-Tao Wu, Zeyu Chen, Baoyuan Wang
-
Patent number: 11443484
Abstract: Techniques performed by a data processing system for reconstructing a three-dimensional (3D) model of the face of a human subject include obtaining source data comprising a two-dimensional (2D) image, a 3D image, or depth information representing the face of the subject. Reconstructing the 3D model also includes generating a 3D model of the face based on the source data by analyzing the source data to produce a coarse 3D model, and refining the coarse 3D model through free-form deformation to produce a fitted 3D model. The coarse 3D model may be a 3D Morphable Model (3DMM), and it may be refined through free-form deformation in which the deformation of the mesh is limited by applying an as-rigid-as-possible (ARAP) deformation constraint.
Type: Grant
Filed: July 15, 2020
Date of Patent: September 13, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Noranart Vesdapunt, Wenbin Zhu, Hsiang-Tao Wu, Zeyu Chen, Baoyuan Wang
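The abstracts above name the as-rigid-as-possible (ARAP) constraint but give no code. Below is a minimal NumPy sketch of the ARAP energy that such a constrained free-form deformation step penalizes: each vertex's local neighborhood should move as close to rigidly as possible. The mesh representation, neighbor structure, and function names are assumptions for illustration, not the patented implementation:

```python
import numpy as np

def best_fit_rotation(P, Q):
    # Kabsch algorithm: least-squares rotation R with R @ p ~ q
    # for paired local edge sets P (rest) and Q (deformed).
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    return Vt.T @ D @ U.T

def arap_energy(rest, deformed, neighbors):
    # Sum over vertices i of sum_j || (q_j - q_i) - R_i (p_j - p_i) ||^2,
    # where R_i is the best local rotation: the ARAP deformation energy.
    energy = 0.0
    for i, nbrs in neighbors.items():
        P = np.array([rest[j] - rest[i] for j in nbrs])
        Q = np.array([deformed[j] - deformed[i] for j in nbrs])
        R = best_fit_rotation(P, Q)
        energy += np.sum((Q - P @ R.T) ** 2)
    return energy
```

A rigid motion of the whole mesh yields zero energy; stretching or shearing raises it, which is what limits the free-form deformation.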
-
Publication number: 20220081790
Abstract: The disclosure relates to the technical field of electrolysis cells, and in particular to a solid oxide electrolysis cell (SOEC) and a preparation method thereof. The SOEC provided by the disclosure adopts an n-type TiO2 layer and a p-type La0.6Sr0.4Co0.2Fe0.8O3−δ layer as the electrolyte layer. Although both the n-type TiO2 and the p-type La0.6Sr0.4Co0.2Fe0.8O3−δ have ionic and electronic conductivities, the electric field effect of the PN junction between the two layers effectively cuts off the transmission of electrons through the intermediate layer while enabling ions to pass through rapidly. The SOEC can thereby avoid short circuits and exhibits excellent performance. Furthermore, this structure gives the SOEC a stable performance output, and the SOEC can be produced on a large scale due to low material cost.
Type: Application
Filed: December 30, 2020
Publication date: March 17, 2022
Inventors: Xunying Wang, Chongqing Liu, Ying Chen, Baoyuan Wang, Wenjing Dong, Chen Xia
-
Patent number: 11238885
Abstract: A computer-implemented technique for animating a visual representation of a face based on spoken words of a speaker is described herein. A computing device receives an audio sequence comprising content features reflective of spoken words uttered by a speaker. The computing device generates latent content variables and latent style variables based upon the audio sequence. The latent content variables are used to synchronize movement of lips on the visual representation to the spoken words uttered by the speaker. The latent style variables are derived from an expected appearance of facial features of the speaker as the speaker utters the spoken words and are used to synchronize movement of full facial features of the visual representation to the spoken words. The computing device causes the visual representation of the face to be animated on a display based upon the latent content variables and the latent style variables.
Type: Grant
Filed: October 29, 2018
Date of Patent: February 1, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Gaurav Mittal, Baoyuan Wang
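As a toy illustration of the content/style split described above: the same audio features can be projected into per-frame content codes (driving lip sync) and one pooled style code (driving overall facial appearance), then decoded together into per-frame face parameters. The real system is a trained neural network; the linear maps, dimensions, and names below are placeholders, not the patented architecture:

```python
import numpy as np

def encode_audio(audio_feats, W_content, W_style):
    # Per-frame content code (phonetic, lip-sync information) and a single
    # style code pooled over time (speaker's expected facial appearance).
    z_content = np.tanh(audio_feats @ W_content)           # (T, Dc)
    z_style = np.tanh(audio_feats.mean(axis=0) @ W_style)  # (Ds,)
    return z_content, z_style

def animate(z_content, z_style, W_decode):
    # Broadcast the style code over time and decode per-frame face parameters.
    T = z_content.shape[0]
    z = np.concatenate([z_content, np.tile(z_style, (T, 1))], axis=1)
    return z @ W_decode  # (T, num_face_parameters)
```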
-
Publication number: 20210358212
Abstract: Techniques performed by a data processing system for reconstructing a three-dimensional (3D) model of the face of a human subject include obtaining source data comprising a two-dimensional (2D) image, a 3D image, or depth information representing the face of the subject. Reconstructing the 3D model also includes generating a 3D model of the face based on the source data by analyzing the source data to produce a coarse 3D model, and refining the coarse 3D model through free-form deformation to produce a fitted 3D model. The coarse 3D model may be a 3D Morphable Model (3DMM), and it may be refined through free-form deformation in which the deformation of the mesh is limited by applying an as-rigid-as-possible (ARAP) deformation constraint.
Type: Application
Filed: July 15, 2020
Publication date: November 18, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Noranart Vesdapunt, Wenbin Zhu, Hsiang-Tao Wu, Zeyu Chen, Baoyuan Wang
-
Patent number: 11036975
Abstract: Described herein is a human pose prediction system and method. An image comprising at least a portion of a human body is received. A trained neural network is used to predict one or more human features (e.g., joints/aspects of a human body) within the received image and to predict one or more human poses in accordance with the predicted features. The trained neural network can be an end-to-end trained, single-stage deep neural network. An action is performed based on the predicted poses; for example, the pose(s) can be displayed as an overlay on the received image.
Type: Grant
Filed: December 14, 2018
Date of Patent: June 15, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Noranart Vesdapunt, Baoyuan Wang, Ying Jin, Pierrick Arsenault
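The patent text does not specify the network's output format. Assuming the common per-joint heatmap representation (an assumption, not the patented method), decoding the predicted joint locations for an overlay can be sketched as:

```python
import numpy as np

def decode_joints(heatmaps):
    # heatmaps: (J, H, W) per-joint confidence maps from the network.
    # Returns (J, 2) integer (x, y) peak locations plus per-joint scores.
    J, H, W = heatmaps.shape
    flat = heatmaps.reshape(J, -1)
    idx = flat.argmax(axis=1)
    ys, xs = np.unravel_index(idx, (H, W))
    return np.stack([xs, ys], axis=1), flat.max(axis=1)
```

The (x, y) coordinates, scaled back to the input image size, are what a pose overlay would draw on the received image.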
-
Patent number: 10713816
Abstract: Disclosed in some examples are methods, systems, and machine-readable mediums that correct image color casts by utilizing a fully convolutional network (FCN), where the patches in an input image may differ in influence over the color constancy estimation. This influence is formulated as a confidence weight that reflects the value of a patch for inferring the illumination color. The confidence weights are integrated into a novel pooling layer where they are applied to local patch estimates in determining a global color constancy result.
Type: Grant
Filed: July 14, 2017
Date of Patent: July 14, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yuanming Hu, Baoyuan Wang, Stephen S. Lin
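The pooling described above can be sketched outside the network: each patch contributes an illuminant estimate, and the confidence weights decide how much each contributes to the global result. Function and variable names are illustrative, not from the patent:

```python
import numpy as np

def pooled_illuminant(patch_estimates, confidences):
    # Confidence-weighted average of per-patch RGB illuminant estimates,
    # normalized to a unit-norm global illuminant color.
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()
    global_est = (np.asarray(patch_estimates, dtype=float) * w[:, None]).sum(axis=0)
    return global_est / np.linalg.norm(global_est)
```

High-confidence patches (e.g., those containing specular highlights or neutral surfaces) dominate the estimate, while low-value patches contribute little.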
-
Publication number: 20200193152
Abstract: Described herein is a human pose prediction system and method. An image comprising at least a portion of a human body is received. A trained neural network is used to predict one or more human features (e.g., joints/aspects of a human body) within the received image and to predict one or more human poses in accordance with the predicted features. The trained neural network can be an end-to-end trained, single-stage deep neural network. An action is performed based on the predicted poses; for example, the pose(s) can be displayed as an overlay on the received image.
Type: Application
Filed: December 14, 2018
Publication date: June 18, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Noranart Vesdapunt, Baoyuan Wang, Ying Jin, Pierrick Arsenault
-
Patent number: 10671895
Abstract: A "Best of Burst Selector," or "BoB Selector," automatically selects a subjectively best image from a single set of images of a scene captured in a burst or continuous-capture mode, captured as a video sequence, or captured as multiple images of the scene over any arbitrary period of time with any arbitrary timing between images. This set of images is referred to as a burst set. Selection of the subjectively best image is achieved in real time by applying a machine-learned model to the burst set. The machine-learned model of the BoB Selector is trained to select one or more subjectively best images from the burst set in a way that closely emulates human selection based on subjective subtleties of human preferences. Images automatically selected by the BoB Selector are presented to a user or saved for further processing.
Type: Grant
Filed: December 12, 2016
Date of Patent: June 2, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Baoyuan Wang, Sing Bing Kang, Joshua Bryan Weisberg
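Stripped of the learned model itself, the selection step reduces to scoring every frame in the burst set and keeping the top-ranked one(s). In this sketch `score_fn` stands in for the machine-learned model, which is the part the patent actually claims:

```python
def select_best(burst, score_fn, k=1):
    # Rank the burst set by the model's score and return the indices of the
    # top-k frames (k=1 gives the single "best of burst" image).
    ranked = sorted(range(len(burst)), key=lambda i: score_fn(burst[i]), reverse=True)
    return ranked[:k]
```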
-
Publication number: 20200135226
Abstract: A computer-implemented technique for animating a visual representation of a face based on spoken words of a speaker is described herein. A computing device receives an audio sequence comprising content features reflective of spoken words uttered by a speaker. The computing device generates latent content variables and latent style variables based upon the audio sequence. The latent content variables are used to synchronize movement of lips on the visual representation to the spoken words uttered by the speaker. The latent style variables are derived from an expected appearance of facial features of the speaker as the speaker utters the spoken words and are used to synchronize movement of full facial features of the visual representation to the spoken words. The computing device causes the visual representation of the face to be animated on a display based upon the latent content variables and the latent style variables.
Type: Application
Filed: October 29, 2018
Publication date: April 30, 2020
Inventors: Gaurav Mittal, Baoyuan Wang
-
Patent number: 10530991
Abstract: An "Exposure Controller" provides various techniques for training and applying a deep convolutional network to provide real-time automated camera exposure control, as a real-time function of scene semantic context, in a way that improves image quality for a wide range of image subject types under a wide range of real-world lighting conditions. The deep learning approach applied by the Exposure Controller first uses supervised learning to achieve a good anchor point that mimics integral exposure control for a particular camera model or type, followed by refinement through reinforcement learning. The end-to-end system (exposure control and image capture) provides real-time performance for predicting and setting camera exposure values to improve the overall visual quality of the resulting image over a wide range of capture scenarios (e.g., back-lit scenes, front lighting, rapid changes in lighting conditions).
Type: Grant
Filed: August 7, 2017
Date of Patent: January 7, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Baoyuan Wang, Sing Bing Kang
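The two-stage training pipeline is beyond a listing sketch, but the runtime side amounts to repeatedly nudging the camera's exposure value toward the network's prediction for the current frame. The rate limit below is an assumed implementation detail (to keep the preview from oscillating), not something the patent text specifies:

```python
def exposure_step(current_ev, predicted_ev, max_delta=0.5):
    # Move the exposure value toward the model's prediction, rate-limited
    # so abrupt scene changes do not make the live preview flicker.
    delta = max(-max_delta, min(max_delta, predicted_ev - current_ev))
    return current_ev + delta
```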
-
Patent number: 10445586
Abstract: Techniques for automatically selecting image frames from a video and providing the selected image frames to a device for display are disclosed.
Type: Grant
Filed: March 16, 2018
Date of Patent: October 15, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Utkarsh Sinha, Kandarpkumar J. Makwana, Melissa Regalia, Wei-Chih Chen, Joshua B. Weisberg, Baoyuan Wang, Gil M. Nahmias, Noranart Vesdapunt
-
Publication number: 20190180109
Abstract: Techniques for automatically selecting image frames from a video and providing the selected image frames to a device for display are disclosed.
Type: Application
Filed: March 16, 2018
Publication date: June 13, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Utkarsh Sinha, Kandarpkumar J. Makwana, Melissa Regalia, Wei-Chih Chen, Joshua B. Weisberg, Baoyuan Wang, Gil M. Nahmias, Noranart Vesdapunt
-
Publication number: 20190019311
Abstract: Disclosed in some examples are methods, systems, and machine-readable mediums that correct image color casts by utilizing a fully convolutional network (FCN), where the patches in an input image may differ in influence over the color constancy estimation. This influence is formulated as a confidence weight that reflects the value of a patch for inferring the illumination color. The confidence weights are integrated into a novel pooling layer where they are applied to local patch estimates in determining a global color constancy result.
Type: Application
Filed: July 14, 2017
Publication date: January 17, 2019
Inventors: Yuanming Hu, Baoyuan Wang, Stephen S. Lin
-
Patent number: 10104561
Abstract: A computer-implemented method and telecommunications diagnostic apparatus that correlates packets on a core network with those on the access network. An identification attribute is generated from Information Elements (IEs) in S1AP packets present in the core network and accessible to the access network. The generated identification attribute is integrated into an access session and a core session of the access and core networks to correlate data packets between the two, by checking whether a core session contains the same identification attribute as an access session within the life span of that access session.
Type: Grant
Filed: October 28, 2013
Date of Patent: October 16, 2018
Assignee: NETSCOUT SYSTEMS TEXAS, LLC
Inventors: Aleksey G. Ivershen, Vignesh Janakiraman, Ge Zhang, Baoyuan Wang
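The correlation rule in the abstract (same identification attribute, within the access session's life span) amounts to a simple join over session records. The dict fields below are illustrative placeholders, not the S1AP wire format:

```python
def correlate(access_sessions, core_sessions):
    # Pair each core session with the first access session that carries the
    # same identification attribute within that access session's life span.
    matches = []
    for core in core_sessions:
        for acc in access_sessions:
            if core["ident"] == acc["ident"] and acc["start"] <= core["ts"] <= acc["end"]:
                matches.append((acc["id"], core["id"]))
                break
    return matches
```

The time-window check matters because identification attributes can be reused: a match is only valid while the access session that generated the attribute is alive.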
-
Publication number: 20180220061
Abstract: An "Exposure Controller" provides various techniques for training and applying a deep convolutional network to provide real-time automated camera exposure control, as a real-time function of scene semantic context, in a way that improves image quality for a wide range of image subject types under a wide range of real-world lighting conditions. The deep learning approach applied by the Exposure Controller first uses supervised learning to achieve a good anchor point that mimics integral exposure control for a particular camera model or type, followed by refinement through reinforcement learning. The end-to-end system (exposure control and image capture) provides real-time performance for predicting and setting camera exposure values to improve the overall visual quality of the resulting image over a wide range of capture scenarios (e.g., back-lit scenes, front lighting, rapid changes in lighting conditions).
Type: Application
Filed: August 7, 2017
Publication date: August 2, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Baoyuan Wang, Sing Bing Kang
-
Publication number: 20180121733
Abstract: A "Quality Predictor" applies a machine-learned quality model to predict the subjective quality of the output video of an image sequence processing algorithm without actually running that algorithm on a temporal sequence of image frames (referred to as a "candidate set"). Candidate sets having sufficiently high predicted quality scores are processed by the image sequence processing algorithm to produce an output video. The Quality Predictor therefore reduces computational overhead by eliminating unnecessary processing of candidate sets when the image sequence processing algorithm is not expected to produce acceptable results. The quality model is trained on a combination of human quality scores of output videos generated by the image sequence processing algorithm and image features extracted from frames of the image sequences used to generate those output videos.
Type: Application
Filed: October 27, 2016
Publication date: May 3, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Neel Suresh Joshi, Baoyuan Wang, Sing Bing Kang
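The gating idea above reduces to a threshold check before the expensive algorithm runs. Here `predict_quality` stands in for the trained quality model, and the threshold value is an assumption for illustration:

```python
def gated_process(candidate_sets, predict_quality, run_algorithm, threshold=0.7):
    # Run the expensive image-sequence algorithm only on candidate sets
    # whose predicted quality score clears the threshold; skip the rest.
    return [run_algorithm(frames)
            for frames in candidate_sets
            if predict_quality(frames) >= threshold]
```

Because the predictor only looks at features of the input frames, the cost of scoring a candidate set is small compared with running the full algorithm on it.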
-
Patent number: 9886094
Abstract: Low-latency gesture detection is described, for example, to compute a gesture class from a live stream of image frames of a user making a gesture, for example, as part of a natural user interface controlling a game system or other system. In examples, machine learning components are trained to learn gesture primitives and, at test time, are able to detect gestures using the learned primitives in a fast, accurate manner. A gesture primitive is a latent (unobserved) variable representing features of a subset of frames from a sequence of frames depicting a gesture; the subset has many fewer frames than a sequence depicting a complete gesture. In various examples, gesture primitives are learned from instance-level features computed by aggregating frame-level features to capture temporal structure. In examples, frame-level features comprise body position and body part articulation state features.
Type: Grant
Filed: April 28, 2014
Date of Patent: February 6, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Baoyuan Wang, Szymon Piotr Stachniak, Zhuowen Tu, Baining Guo, Ke Deng
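One simple way to realize "instance-level features computed by aggregating frame-level features to capture temporal structure" is segment-wise temporal pooling: split the frame sequence into ordered segments, pool within each, and concatenate. The segment count and mean-pooling choice below are illustrative assumptions, not the patented aggregation:

```python
import numpy as np

def instance_features(frame_features, num_segments=3):
    # Mean-pool frame-level features within consecutive temporal segments
    # and concatenate, keeping coarse temporal order in the descriptor.
    segments = np.array_split(np.asarray(frame_features, dtype=float), num_segments)
    return np.concatenate([seg.mean(axis=0) for seg in segments])
```

Because each segment covers only part of the sequence, the descriptor distinguishes, say, a hand rising then falling from the reverse, which pure whole-sequence pooling cannot.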