Patents by Inventor Jiaolong YANG
Jiaolong YANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250030816
Abstract: According to implementations of the subject matter described herein, there is provided a solution for an immersive video conference. In the solution, a conference mode for the video conference is first determined, the conference mode indicating a layout of a virtual conference space for the video conference, and viewpoint information associated with a second participant in the video conference is determined based on the layout. Furthermore, a first view of a first participant is determined based on the viewpoint information and then sent to a conference device associated with the second participant to display a conference image to the second participant. In this way, participants can obtain a more authentic and immersive video conference experience, and a desired virtual conference space layout can be obtained more flexibly as needed.
Type: Application
Filed: November 10, 2022
Publication date: January 23, 2025
Inventors: Jiaolong YANG, Yizhong Zhang, Xin TONG, Baining GUO
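The viewpoint step described in this abstract can be sketched in a few lines. This is an illustrative assumption, not the patented method: it assumes a round-table layout with evenly spaced seats, and derives a viewpoint (position plus gaze direction) from one participant's seat toward another's. All names are hypothetical.

```python
import math

def seat_position(seat_index: int, num_seats: int, radius: float = 1.5):
    """Place seats evenly on a circle of the given radius (assumed layout)."""
    angle = 2.0 * math.pi * seat_index / num_seats
    return (radius * math.cos(angle), radius * math.sin(angle))

def viewpoint_info(viewer_seat: int, target_seat: int, num_seats: int):
    """Viewpoint = the viewer's position plus a unit gaze vector at the target."""
    vx, vy = seat_position(viewer_seat, num_seats)
    tx, ty = seat_position(target_seat, num_seats)
    dx, dy = tx - vx, ty - vy
    norm = math.hypot(dx, dy)
    return {"position": (vx, vy), "gaze": (dx / norm, dy / norm)}

# Viewer at seat 0 looks at the participant directly across a 4-seat table.
info = viewpoint_info(viewer_seat=0, target_seat=2, num_seats=4)
```

A renderer would then use `position` and `gaze` as the camera pose for the first view sent to the viewer's conference device.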
-
Publication number: 20240324837
Abstract: A dirt treatment apparatus, a cleaning device and a control method for the cleaning device are provided. The dirt treatment apparatus includes a box body and a separation apparatus. The separation apparatus is assembled in the box body, and the box body is provided with a suction channel allowing for communication between an interior and an exterior of the box body, and an air outlet. The separation apparatus includes a main body member and a blocking member. The main body member is provided in the box body, and the main body member and an inner wall of the box body define a separation channel in communication between the suction channel and the air outlet. The blocking member is fitted on the main body member and blocks the flow path of all airflows flowing from the suction channel to the air outlet.
Type: Application
Filed: June 11, 2024
Publication date: October 3, 2024
Inventors: Jin KANG, Guohui ZENG, Jiaolong YANG, Guiyong NING, Yulong YANG
-
Patent number: 12079936
Abstract: In accordance with implementations of the present disclosure, there is provided a solution for portrait editing and synthesis. In this solution, a first image about a head of a user is obtained. A three-dimensional head model representing the head of the user is generated based on the first image. In response to receiving a command of changing a head feature of the user, the three-dimensional head model is transformed to reflect the changed head feature. A second image about the head of the user is generated based on the transformed three-dimensional head model, and reflects the changed head feature of the user. In this way, the solution can realize editing of features like a head pose and/or a facial expression based on a single portrait image without manual intervention and automatically synthesize a corresponding image.
Type: Grant
Filed: June 8, 2020
Date of Patent: September 3, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jiaolong Yang, Fang Wen, Dong Chen, Xin Tong
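The transform step in this abstract, changing a head pose on the fitted 3-D model before re-rendering, can be sketched as a simple vertex rotation. This is a minimal illustration under assumed data structures (a head model as a vertex list, a yaw-only pose change), not the patented pipeline.

```python
import math

def rotate_head_yaw(vertices, yaw_radians):
    """Rotate head-model vertices about the vertical (y) axis."""
    c, s = math.cos(yaw_radians), math.sin(yaw_radians)
    return [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in vertices]

# A toy "head model": a single nose-tip vertex pointing along +z.
head = [(0.0, 0.0, 1.0)]
# Respond to a "turn the head 90 degrees" command, then re-render.
turned = rotate_head_yaw(head, math.pi / 2)
```

In the described solution the transformed model would then be rendered to synthesize the second portrait image reflecting the new pose.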
-
Publication number: 20240161382
Abstract: According to implementations of the present disclosure, there is provided a solution for completing textures of an object. In this solution, a complete texture map of an object is generated from a partial texture map of the object according to a texture generation model. A first prediction on whether a texture of at least one block in the complete texture map is an inferred texture is determined according to a texture discrimination model. A second image of the object is generated based on the complete texture map. A second prediction on whether the first image and the second image are generated images is determined according to an image discrimination model. The texture generation model and the texture and image discrimination models are trained based on the first and second predictions.
Type: Application
Filed: April 26, 2021
Publication date: May 16, 2024
Inventors: Jongyoo KIM, Jiaolong YANG, Xin TONG
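The generator/discriminator wiring in this abstract can be illustrated with stub models. The real components are neural networks; the placeholder functions below are hypothetical stand-ins that only show how the first prediction (per-block "is this texture inferred?") is produced from a completed texture map.

```python
def texture_generation_model(partial_map):
    """Stub generator: fill missing (None) blocks with an inferred texture."""
    return [block if block is not None else "inferred" for block in partial_map]

def texture_discrimination_model(complete_map):
    """Stub discriminator: per-block first prediction of inferred textures."""
    return [block == "inferred" for block in complete_map]

# A partial texture map with two observed blocks and one missing block.
partial = ["observed", None, "observed"]
complete = texture_generation_model(partial)
first_prediction = texture_discrimination_model(complete)
```

In adversarial training as described, the generator would be updated to make these predictions wrong (inferred blocks indistinguishable from observed ones) while both discriminators are updated to make them right.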
-
Publication number: 20220222897
Abstract: In accordance with implementations of the present disclosure, there is provided a solution for portrait editing and synthesis. In this solution, a first image about a head of a user is obtained. A three-dimensional head model representing the head of the user is generated based on the first image. In response to receiving a command of changing a head feature of the user, the three-dimensional head model is transformed to reflect the changed head feature. A second image about the head of the user is generated based on the transformed three-dimensional head model, and reflects the changed head feature of the user. In this way, the solution can realize editing of features like a head pose and/or a facial expression based on a single portrait image without manual intervention and automatically synthesize a corresponding image.
Type: Application
Filed: June 8, 2020
Publication date: July 14, 2022
Inventors: Jiaolong Yang, Fang Wen, Dong Chen, Xin Tong
-
Publication number: 20200226392
Abstract: Implementations of the subject matter described herein provide a solution for thin object detection based on computer vision technology. In the solution, a plurality of images containing at least one thin object to be detected are obtained. A plurality of edges are extracted from the plurality of images, and respective depths of the plurality of edges are determined. In addition, the at least one thin object contained in the plurality of images is identified based on the respective depths of the plurality of edges, the identified at least one thin object being represented by at least one of the plurality of edges. A thin object is an object with a significantly small ratio of cross-sectional area to length. It is usually difficult to detect such thin objects with conventional detection solutions, but the implementations of the present disclosure effectively solve this problem.
Type: Application
Filed: May 23, 2018
Publication date: July 16, 2020
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Gang HUA, Jiaolong YANG, Chunshui ZHAO, Chen ZHOU
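The identification step, keeping only edges whose depths support a real thin object, can be sketched as follows. The depth-consistency criterion here (an edge like a wire should get agreeing depth estimates across the input images) is an illustrative assumption, not the disclosed algorithm, and all names are hypothetical.

```python
def thin_object_edges(edge_depths, tolerance=0.05):
    """Keep edges whose per-image depth estimates are mutually consistent.

    edge_depths maps an edge id to its estimated depth (in meters) in each
    of the input images.
    """
    kept = []
    for edge_id, depths in edge_depths.items():
        if max(depths) - min(depths) <= tolerance:
            kept.append(edge_id)
    return kept

# A wire gives stable depths across views; a shadow edge does not.
edges = {"wire": [2.00, 2.01, 2.02], "shadow": [2.0, 3.5, 1.2]}
candidates = thin_object_edges(edges)
```

The surviving edges would then represent the detected thin objects, as the abstract describes.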
-
Patent number: 10223612
Abstract: In a video frame processing system, a feature extractor generates, based on a plurality of data sets corresponding to a plurality of frames of a video, a plurality of feature sets, respective ones of the feature sets including features extracted from respective ones of the data sets. A first stage of a feature aggregator generates a kernel for a second stage of the feature aggregator. The kernel is adapted to content of the feature sets so as to emphasize desirable ones of the feature sets and deemphasize undesirable ones of the feature sets. In the second stage of the feature aggregator, the kernel generated by the first stage is applied to the plurality of feature sets to generate a plurality of significances corresponding to the plurality of feature sets. The feature sets are weighted based on corresponding significances, and the weighted feature sets are aggregated to generate an aggregated feature set.
Type: Grant
Filed: September 1, 2016
Date of Patent: March 5, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Gang Hua, Peiran Ren, Jiaolong Yang
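The second-stage aggregation this abstract describes, kernel applied to feature sets to get significances, significances turned into weights, weighted feature sets summed, can be sketched with plain Python. The dot-product kernel and softmax normalization are assumptions for illustration (in the claimed system the first stage adapts the kernel to the feature content; here it is a fixed vector).

```python
import math

def aggregate(feature_sets, kernel):
    """Weight each per-frame feature set by its significance, then sum."""
    # Significance of each feature set: its dot product with the kernel.
    significances = [sum(k * f for k, f in zip(kernel, fs)) for fs in feature_sets]
    # Softmax turns significances into nonnegative weights summing to 1.
    exp_s = [math.exp(s) for s in significances]
    total = sum(exp_s)
    weights = [e / total for e in exp_s]
    dim = len(feature_sets[0])
    return [sum(w * fs[i] for w, fs in zip(weights, feature_sets)) for i in range(dim)]

# Two frame features; this kernel emphasizes the first (higher significance).
frames = [[1.0, 0.0], [0.0, 1.0]]
agg = aggregate(frames, kernel=[2.0, 0.0])
```

With these inputs the first frame receives most of the weight, so the aggregate lies close to its feature set, matching the "emphasize desirable feature sets" behavior described above.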
-
Publication number: 20180060698
Abstract: In a video frame processing system, a feature extractor generates, based on a plurality of data sets corresponding to a plurality of frames of a video, a plurality of feature sets, respective ones of the feature sets including features extracted from respective ones of the data sets. A first stage of a feature aggregator generates a kernel for a second stage of the feature aggregator. The kernel is adapted to content of the feature sets so as to emphasize desirable ones of the feature sets and deemphasize undesirable ones of the feature sets. In the second stage of the feature aggregator, the kernel generated by the first stage is applied to the plurality of feature sets to generate a plurality of significances corresponding to the plurality of feature sets. The feature sets are weighted based on corresponding significances, and the weighted feature sets are aggregated to generate an aggregated feature set.
Type: Application
Filed: September 1, 2016
Publication date: March 1, 2018
Inventors: Gang HUA, Peiran REN, Jiaolong YANG