Patents by Inventor Tianran Wang

Tianran Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11943099
    Abstract: A method according to embodiments of this application includes: A first network device sends a first packet to a second network device, where the first packet includes first indication information, and the first indication information indicates a support status of an iFIT capability corresponding to the first network device or a first service module included in the first network device. The first network device sends the packet to the second network device to notify the second network device of the support status of the iFIT capability of the first network device. In this way, the second network device can determine, based on the iFIT capability of the first network device, whether to encapsulate a measurement header, preventing a service packet from being processed incorrectly because the first network device cannot strip the measurement header from it.
    Type: Grant
    Filed: January 12, 2023
    Date of Patent: March 26, 2024
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Shunwan Zhuang, Haibo Wang, Tianran Zhou, Weidong Li, Jie Dong
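The decision the second network device makes can be sketched as follows. This is a minimal Python illustration, not the patented implementation: the function name, the dictionary packet format, and the telemetry-header shape are all hypothetical.

```python
def build_service_packet(payload, peer_supports_ifit, telemetry_header):
    # Encapsulate the iFIT measurement header only when the downstream
    # device has advertised support for the capability; otherwise send
    # the service packet untouched, since a peer without iFIT support
    # could not strip the header and would mis-process the packet.
    packet = {"payload": payload}
    if peer_supports_ifit:
        packet["ifit_header"] = telemetry_header
    return packet
```

The capability advertisement in the first packet is what lets the sender set `peer_supports_ifit` correctly.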
  • Patent number: 11922605
    Abstract: A method includes receiving, at a conference endpoint, video captured using a wide angle lens. The method further includes selecting a view region in a frame of the video. The method further includes selectively applying, based on a size of the view region, deformation correction or distortion correction to the view region to generate a corrected video frame. The method further includes transmitting the corrected video frame to a remote endpoint.
    Type: Grant
    Filed: November 23, 2018
    Date of Patent: March 5, 2024
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Tianran Wang, Hailin Song, Wenxue He
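The size-based selection between the two correction modes can be sketched as below, assuming the view region is a rectangle cropped from the wide-angle frame. The area-ratio rule and the 0.7 threshold are illustrative assumptions, not values from the patent.

```python
def choose_correction(view_region, frame_size, area_ratio_threshold=0.7):
    """Pick a correction mode for a view region cropped from a
    wide-angle frame.

    Hypothetical rule: regions close to the full frame get lens
    distortion correction, while tight crops (where subjects near the
    frame edge appear stretched) get deformation correction.
    """
    _, _, w, h = view_region            # (x, y, width, height)
    frame_w, frame_h = frame_size
    ratio = (w * h) / (frame_w * frame_h)
    return "distortion" if ratio >= area_ratio_threshold else "deformation"
```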
  • Patent number: 11803984
    Abstract: A method (1000) for operating cameras (202) in a cascaded network (100), comprising: capturing a first view (1200) with a first lens (326) having a first focal point (328) and a first centroid (352), the first view (1200) depicting a subject (1106); capturing a second view (1202) with a second lens (326) having a second focal point (328) and a second centroid (352); detecting a first location of the subject (1106) relative to the first lens (326), wherein detecting the first location of the subject (1106) relative to the first lens (326) is based on audio captured by a plurality of microphones (204); estimating a second location of the subject (1106) relative to the second lens (326), based on the first location of the subject (1106) relative to the first lens (326); and selecting a portion (1206) of the second view (1202) as depicting the subject (1106) based on the estimate of the second location of the subject (1106) relative to the second lens (326).
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: October 31, 2023
    Assignee: Plantronics, Inc.
    Inventors: Yongkang Fan, Hai Xu, Wenxue He, Hailin Song, Tianran Wang, Xi Lu
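The hand-off of the subject's location from one camera to the other can be illustrated with a simple coordinate translation. This sketch assumes both lenses share one coordinate frame and ignores their relative rotation; the function and parameter names are hypothetical.

```python
def to_second_lens(subject_from_first, first_lens_pos, second_lens_pos):
    # The subject's world position is first_lens_pos + subject_from_first.
    # Subtracting the second lens's position gives the subject's location
    # relative to the second lens, which the second camera can then use
    # to select the portion of its view that depicts the subject.
    return tuple(
        s + (f - g)
        for s, f, g in zip(subject_from_first, first_lens_pos, second_lens_pos)
    )
```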
  • Publication number: 20230245271
    Abstract: A real-time method (600) for enhancing facial images (102). Degraded images (102) of a person—such as might be transmitted during a videoconference—are rectified based on a single high-definition reference image (604) of the person who is talking. Facial landmarks (501) are used to map (210) image data from the reference image (604) to an intervening image (622) having a landmark configuration like that in a degraded image (102). The degraded images (102) and their corresponding intervening images (622) are blended using an artificial neural network (800, 900) to produce high-quality images (108) of the person who is speaking during a videoconference.
    Type: Application
    Filed: July 6, 2020
    Publication date: August 3, 2023
    Inventors: Hailin Song, Hai Xu, Yongkang Fan, Tianran Wang, Xi Lu
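The per-frame pipeline in the abstract can be sketched at a high level as below. The warp and the blending network are taken as caller-supplied callables, since the trained model itself is not described here; every name in this sketch is hypothetical.

```python
def enhance_frame(degraded, reference, warp_to_landmarks, blend):
    # Warp the reference image so its facial landmarks line up with the
    # degraded frame, producing the intervening image; then blend the
    # degraded frame with the intervening image (in the patent, via an
    # artificial neural network) to get the high-quality output frame.
    intervening = warp_to_landmarks(reference, degraded)
    return blend(degraded, intervening)
```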
  • Publication number: 20230186654
    Abstract: Systems and methods are provided for identifying and displaying whiteboard text and/or an active speaker in a video-based presentation, e.g., a video conference. Video images of an environment including a whiteboard may be captured by a video camera system. The video images may be analyzed to detect at least one text-containing area in the environment. Each text-containing area may be analyzed to determine whether it is an area of a whiteboard. When a text-containing area is identified as a whiteboard area, an area of view including the text-containing whiteboard area may be selected for display, e.g., a subset of the full frame captured by the video system. A video feed from the video camera system may be controlled to display the selected area of view at a client device, to provide a useful view of the whiteboard text and/or a speaking person located near the whiteboard text.
    Type: Application
    Filed: May 12, 2020
    Publication date: June 15, 2023
    Applicant: Polycom Communications Technology (Beijing) Co., Ltd.
    Inventors: Xi Lu, Tianran Wang, Hailin Song, Hai Xu, Yongkang Fan
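The selection step the abstract describes—picking a text-containing area judged to be on a whiteboard as the area of view—can be sketched as follows. The whiteboard classifier is left as a caller-supplied callable, and the first-match policy is an assumption for illustration.

```python
def select_area_of_view(text_areas, is_whiteboard_area, full_frame):
    # Among the text-containing areas detected in the frame, display the
    # first one judged to lie on a whiteboard; if none qualifies, fall
    # back to showing the full captured frame.
    for area in text_areas:
        if is_whiteboard_area(area):
            return area
    return full_frame
```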
  • Publication number: 20230136314
    Abstract: A method may include calculating a color gain by applying an automatic white balance (AWB) algorithm to a video frame of a video feed, calculating an illumination color by applying a machine learning model to the video frame, transforming the illumination color into an equivalent color gain, determining that a difference between the color gain and the equivalent color gain exceeds a difference threshold, reversing an effect of the illumination color on the video frame based on the threshold being exceeded to obtain a corrected video frame, and transmitting the corrected video frame to an endpoint.
    Type: Application
    Filed: May 12, 2020
    Publication date: May 4, 2023
    Applicant: Polycom Communications Technology (Beijing) Co., Ltd.
    Inventors: Tianran Wang, Hai Xu, Xingyue Huang, Yongkang Fan, Wenxue He
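The comparison the abstract describes can be sketched as below. This is an illustrative Python sketch, not the patented implementation: the gray-world AWB, the gain conversion, and the 0.15 threshold are all assumptions standing in for the actual algorithm and machine learning model.

```python
import numpy as np

def gray_world_gain(frame):
    # Simple AWB stand-in: scale each channel so its mean matches the
    # overall mean of the frame.
    means = frame.reshape(-1, 3).mean(axis=0)
    return means.mean() / means

def illumination_to_gain(illum_rgb):
    # Transform an estimated illuminant color into the equivalent
    # per-channel color gain that would neutralize it.
    illum = np.asarray(illum_rgb, dtype=float)
    return illum.mean() / illum

def correct_frame(frame, illum_rgb, diff_threshold=0.15):
    awb_gain = gray_world_gain(frame)
    eq_gain = illumination_to_gain(illum_rgb)
    # If the two estimates disagree beyond the threshold, reverse the
    # effect of the model's illuminant on the frame.
    if np.abs(awb_gain - eq_gain).max() > diff_threshold:
        return np.clip(frame * eq_gain, 0, 255).astype(frame.dtype)
    return frame
```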
  • Publication number: 20220398864
    Abstract: A method performs zooming based on gesture detection. A visual stream is presented using a first zoom configuration for a zoom state. An attention gesture is detected from a set of first images from the visual stream. The zoom state is adjusted from the first zoom configuration to a second zoom configuration to zoom in on a person in response to detecting the attention gesture. The visual stream is presented using the second zoom configuration after adjusting the zoom state to the second zoom configuration. Whether the person is speaking is determined from a set of second images from the visual stream. The zoom state is adjusted to the first zoom configuration to zoom out from the person in response to determining that the person is not speaking. The visual stream is presented using the first zoom configuration after adjusting the zoom state to the first zoom configuration.
    Type: Application
    Filed: September 24, 2019
    Publication date: December 15, 2022
    Inventors: Xi Lu, Tianran Wang, Hailin Song, Hai Xu, Yongkang Fan
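The zoom behavior above is a small two-state machine, which can be sketched as follows. The class and the two detector callables are hypothetical stand-ins for the gesture and speech detectors the abstract assumes.

```python
class GestureZoomController:
    """Two-state zoom controller: zoom in on an attention gesture,
    zoom back out when the person stops speaking."""

    ZOOMED_OUT, ZOOMED_IN = "zoomed_out", "zoomed_in"

    def __init__(self, detect_attention, is_speaking):
        self.state = self.ZOOMED_OUT
        self.detect_attention = detect_attention  # gesture detector
        self.is_speaking = is_speaking            # speech detector

    def update(self, images):
        if self.state == self.ZOOMED_OUT:
            # Zoom in on the person when an attention gesture is seen.
            if self.detect_attention(images):
                self.state = self.ZOOMED_IN
        else:
            # Zoom back out once the person is no longer speaking.
            if not self.is_speaking(images):
                self.state = self.ZOOMED_OUT
        return self.state
```

Each call to `update` consumes the latest batch of images from the visual stream and returns which zoom configuration to present.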
  • Publication number: 20220319032
    Abstract: A method (1000) for operating cameras (202) in a cascaded network (100), comprising: capturing a first view (1200) with a first lens (326) having a first focal point (328) and a first centroid (352), the first view (1200) depicting a subject (1106); capturing a second view (1202) with a second lens (326) having a second focal point (328) and a second centroid (352); detecting a first location of the subject (1106) relative to the first lens (326), wherein detecting the first location of the subject (1106) relative to the first lens (326) is based on audio captured by a plurality of microphones (204); estimating a second location of the subject (1106) relative to the second lens (326), based on the first location of the subject (1106) relative to the first lens (326); and selecting a portion (1206) of the second view (1202) as depicting the subject (1106) based on the estimate of the second location of the subject (1106) relative to the second lens (326).
    Type: Application
    Filed: June 4, 2020
    Publication date: October 6, 2022
    Applicant: Plantronics, Inc.
    Inventors: Yongkang Fan, Hai Xu, Wenxue He, Hailin Song, Tianran Wang, Xi Lu
  • Publication number: 20220319034
    Abstract: A teleconferencing system (100) comprises: a first camera (202) including a first lens (326) having a first focal point (328) and a first centroid (352), and configured to capture a first view (900) corresponding to a subject (702); a second camera (202) including a second lens (326) having a second focal point (328) and a second centroid (352), and configured to capture a second view (902) corresponding to the subject (702); and a processor (206) coupled to the first camera (202) and the second camera (202). The processor (206) is configured to: estimate a first orientation (351) of the subject (702) relative to the first lens (326) and a second orientation (351) of the subject relative to the second lens (326); and determine that the first orientation (351) is more closely aligned with a first line (307) from the first centroid (352) to the first focal point (328) than the second orientation (351) is aligned with a second line (307) from the second centroid (352) to the second focal point (328).
    Type: Application
    Filed: June 4, 2020
    Publication date: October 6, 2022
    Applicant: Plantronics, Inc.
    Inventors: Yongkang Fan, Hai Xu, Hailin Song, Tianran Wang, Xi Lu
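The alignment test in the abstract reduces to comparing the angle between each estimated orientation and its lens's centroid-to-focal-point line, which the cosine (normalized dot product) captures. The 2-D sketch below is an illustration under that reading; the function names are hypothetical.

```python
import math

def alignment(orientation, centroid, focal_point):
    # Cosine of the angle between the subject's orientation vector and
    # the line from the lens centroid to the focal point; larger means
    # more closely aligned.
    line = (focal_point[0] - centroid[0], focal_point[1] - centroid[1])
    dot = orientation[0] * line[0] + orientation[1] * line[1]
    norm = math.hypot(*orientation) * math.hypot(*line)
    return dot / norm if norm else -1.0

def pick_camera(orientations, cameras):
    # cameras: list of (centroid, focal_point); orientations[i] is the
    # subject's estimated orientation relative to camera i's lens.
    # Return the index of the camera whose line is best aligned.
    scores = [alignment(o, c, f) for o, (c, f) in zip(orientations, cameras)]
    return max(range(len(scores)), key=scores.__getitem__)
```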
  • Publication number: 20220303555
    Abstract: A method may include identifying, in a frame of a video feed, a region of interest (ROI) and a background, encoding the background using a first quantization parameter to obtain an encoded low-quality background, encoding the ROI using a second quantization parameter to obtain an encoded high-quality ROI, and encoding location information of the ROI to obtain encoded location information. The method may further include combining the encoded low-quality background, the encoded high-quality ROI, and the encoded location information to obtain a combined package. The method may further include transmitting the combined package to a remote endpoint.
    Type: Application
    Filed: June 10, 2020
    Publication date: September 22, 2022
    Applicant: Plantronics, Inc.
    Inventors: Xi Lu, Yu Chen, Hai Xu, Tianran Wang, Hailin Song, Lirong Zhang
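The packaging step above can be sketched as follows. The `encode` stand-in only mimics a quantization parameter by coarsening pixel values; a real implementation would pass each region to a video codec with the given QP. All names here are hypothetical.

```python
def encode(block, qp):
    # Toy stand-in for a codec: a larger QP means coarser quantization
    # and therefore lower quality.
    step = 1 + qp
    return [[(px // step) * step for px in row] for row in block]

def package_frame(frame, roi, qp_background=40, qp_roi=20):
    # roi = (top, left, height, width) within the frame. The background
    # is encoded with a large QP (low quality), the ROI with a small QP
    # (high quality), and the ROI's location is included so the receiver
    # can composite the two.
    top, left, h, w = roi
    roi_block = [row[left:left + w] for row in frame[top:top + h]]
    return {
        "background": encode(frame, qp_background),
        "roi": encode(roi_block, qp_roi),
        "roi_location": roi,
    }
```

The combined package is what gets transmitted to the remote endpoint.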
  • Patent number: 11438549
    Abstract: A videoconferencing endpoint is described that uses a combination of face detection, motion detection, and upper body detection to select participants of a videoconference for group framing. Motion detection is used to remove fake faces and to detect motion in regions around detected faces during postprocessing. Upper body detection is used in conjunction with motion detection in postprocessing to retain faces initially found by face detection for group framing even after a participant has turned away from the camera, allowing the endpoint to keep tracking each participant's region better than would be possible using only the unstable results of face detection.
    Type: Grant
    Filed: November 22, 2018
    Date of Patent: September 6, 2022
    Assignee: Poly, Inc.
    Inventors: Tianran Wang, Wenxue He, Lidan Qin, Hai Xu
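The postprocessing step—keeping a detected face when motion surrounds it or an upper body overlaps it—can be sketched as below. The box representation and helper names are illustrative assumptions.

```python
def overlaps(a, b):
    # Boxes as (x, y, w, h); True if the two rectangles intersect.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def filter_faces(faces, motion_regions, upper_bodies):
    # Keep a detected face if there is motion around it (a still, fake
    # face has none), or if an upper body overlaps it, which lets the
    # endpoint retain a participant who has turned away from the camera.
    kept = []
    for face in faces:
        if any(overlaps(face, m) for m in motion_regions):
            kept.append(face)
        elif any(overlaps(face, b) for b in upper_bodies):
            kept.append(face)
    return kept
```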
  • Publication number: 20220270216
    Abstract: A videoconferencing endpoint can adaptively adjust for lens distortion and image deformation depending on the distance of the subject from a camera and the radial distance of the subject from the center of the camera's field of view.
    Type: Application
    Filed: July 30, 2020
    Publication date: August 25, 2022
    Applicant: Plantronics, Inc.
    Inventors: Tianran Wang, Hai Xu, Xingyue Huang, Hailin Song
  • Publication number: 20220005162
    Abstract: A method includes receiving, at a conference endpoint, video captured using a wide angle lens. The method further includes selecting a view region in a frame of the video. The method further includes selectively applying, based on a size of the view region, deformation correction or distortion correction to the view region to generate a corrected video frame. The method further includes transmitting the corrected video frame to a remote endpoint.
    Type: Application
    Filed: November 23, 2018
    Publication date: January 6, 2022
    Inventors: Tianran Wang, Hailin Song, Wenxue He
  • Publication number: 20220006974
    Abstract: A videoconferencing endpoint is described that uses a combination of face detection, motion detection, and upper body detection to select participants of a videoconference for group framing. Motion detection is used to remove fake faces and to detect motion in regions around detected faces during postprocessing. Upper body detection is used in conjunction with motion detection in postprocessing to retain faces initially found by face detection for group framing even after a participant has turned away from the camera, allowing the endpoint to keep tracking each participant's region better than would be possible using only the unstable results of face detection.
    Type: Application
    Filed: November 22, 2018
    Publication date: January 6, 2022
    Inventors: Tianran Wang, Wenxue He, Lidan Qin, Hai Xu