Patents by Inventor Ziheng Wang

Ziheng Wang has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240126986
Abstract: Examples of the disclosure relate to a method and apparatus for processing a document, a device, and a medium. The method includes: obtaining document parameter information of a document to be created in response to a preset online document creation condition being satisfied; and generating a new target online document based on the document parameter information. The preset online document creation condition includes at least one of the following conditions: a preset document update period is reached, and a target data object is collected through the target online document in the update period; and a size of data in a first online document reaches a preset size, or the first online document is determined to be unable to fully hold the size of data to be written in a next data collection period or a data collection task.
    Type: Application
    Filed: December 14, 2023
    Publication date: April 18, 2024
    Inventors: Changming WANG, Ziheng SONG, Zhiwei YUAN
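As a rough illustration of the condition logic described in the abstract above, the sketch below checks when a new online document should be created; every field name and the overall structure are assumptions made for this example, not details taken from the filing.

```python
# Hedged sketch of the document-creation condition check. All field names
# here are illustrative assumptions, not the patented implementation.
from dataclasses import dataclass

@dataclass
class DocState:
    period_elapsed: bool        # preset document update period reached
    collected_in_period: bool   # target data object collected via the document
    current_size: int           # size of data already in the first online document
    max_size: int               # preset size limit
    next_period_estimate: int   # size of data expected in the next collection period

def should_create_new_document(s: DocState) -> bool:
    period_condition = s.period_elapsed and s.collected_in_period
    capacity_condition = (s.current_size >= s.max_size
                          or s.current_size + s.next_period_estimate > s.max_size)
    return period_condition or capacity_condition
```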
  • Publication number: 20230368530
Abstract: Various of the disclosed embodiments relate to systems and methods for recognizing types of surgical operations from data gathered in a surgical theater, such as recognizing a surgery procedure and corresponding specialty from endoscopic video data. Some embodiments select discrete frame sets from the data for individual consideration by a corpus of machine learning models. Some embodiments may include an uncertainty indication with each classification to guide downstream decision-making based upon the classification. For example, where the system is used as part of a data annotation pipeline, uncertain classifications may be flagged for downstream confirmation and review by a human reviewer.
    Type: Application
    Filed: November 17, 2021
    Publication date: November 16, 2023
    Inventors: Ziheng Wang, Kiran Bhattacharyya, Anthony Jarc
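The classification-with-uncertainty idea in the abstract above can be sketched roughly as follows: average softmax outputs from a corpus of models over a sampled frame set and flag high-entropy predictions for human review. The sampling scheme, entropy measure, and threshold are illustrative assumptions, not the disclosed implementation.

```python
# Hypothetical sketch: frame-set sampling plus ensemble classification with
# an entropy-based uncertainty flag for human review.
import numpy as np

def sample_frame_sets(num_frames: int, set_size: int = 16, num_sets: int = 4):
    """Pick evenly spaced, discrete frame-index sets from a video."""
    starts = np.linspace(0, max(num_frames - set_size, 0), num_sets, dtype=int)
    return [list(range(s, s + set_size)) for s in starts]

def classify_with_uncertainty(frame_set_probs: np.ndarray, flag_threshold: float = 0.5):
    """frame_set_probs: (num_models, num_classes) softmax outputs, one row per model.

    Returns the consensus class, a normalized entropy score, and whether the
    prediction should be flagged for downstream human review.
    """
    mean_probs = frame_set_probs.mean(axis=0)            # average over the model corpus
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-9))
    uncertainty = entropy / np.log(len(mean_probs))      # normalize to [0, 1]
    return int(mean_probs.argmax()), uncertainty, uncertainty > flag_threshold
```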
  • Publication number: 20230316756
Abstract: Various of the disclosed embodiments relate to systems and methods for processing surgical data to facilitate further downstream operations. For example, some embodiments may include machine learning systems trained to recognize whether video from surgical visualization tools, such as endoscopes, depicts a field of view inside or outside the patient body. The system may excise or white out frames of video appearing outside the patient so as to remove potentially compromising personal information, such as the identities of members of the surgical team, the patient's identity, configurations of the surgical theater, etc. Appropriate removal of such non-surgical data may facilitate downstream processing, e.g., by complying with regulatory requirements as well as by removing extraneous data potentially inimical to further downstream processing, such as training a downstream classifier.
    Type: Application
    Filed: November 18, 2021
    Publication date: October 5, 2023
    Inventors: Ziheng Wang, Kiran Bhattacharyya, Samuel Bretz, Anthony Jarc, Xi Liu, Andrea Villa, Aneeq Zia
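A minimal sketch of the redaction step described above, assuming a per-frame inside/outside-body classifier is already available; the whiteout strategy and interface are assumptions for illustration.

```python
# Minimal sketch: white out frames that an assumed classifier marks as
# outside the patient body, so identities and theater details are removed.
import numpy as np

def redact_out_of_body_frames(frames: np.ndarray, is_inside_body) -> np.ndarray:
    """frames: (N, H, W, 3) uint8 video; is_inside_body: frame -> bool."""
    redacted = frames.copy()
    for i, frame in enumerate(frames):
        if not is_inside_body(frame):
            redacted[i] = 255  # replace with a blank (white) frame
    return redacted
```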
  • Publication number: 20230053235
    Abstract: Systems, methods, and non-transitory computer-readable media can collect a set of training videos as training data, wherein the set of training videos are labeled with one or more labels based on one or more video quality metrics associated with an evaluation objective. A machine learning model is trained based on the training data. A video to be evaluated is received. The video is assigned to a first video quality category of a plurality of video quality categories based on the machine learning model.
    Type: Application
    Filed: November 2, 2022
    Publication date: February 16, 2023
    Inventors: Wook Jin Chung, Ziheng Wang, Allen Yang Liu, Joyce Marie Hodel
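The train-then-categorize flow in the abstract above might look roughly like this on pre-extracted per-video features; the feature representation, model choice, and label scheme are assumptions, not details from the filing. The same abstract also appears in the granted patent 11521386 and publication 20210027065 below.

```python
# Hedged sketch: train a classifier on labeled training videos (represented
# by assumed per-video feature vectors), then assign an incoming video to a
# quality category.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_quality_model(features: np.ndarray, labels: np.ndarray) -> LogisticRegression:
    """features: (num_videos, num_features); labels: quality category per video."""
    return LogisticRegression(max_iter=1000).fit(features, labels)

def assign_quality_category(model: LogisticRegression, video_features: np.ndarray):
    """Assign an incoming video to one of the learned quality categories."""
    return model.predict(video_features.reshape(1, -1))[0]
```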
  • Patent number: 11521386
    Abstract: Systems, methods, and non-transitory computer-readable media can collect a set of training videos as training data, wherein the set of training videos are labeled with one or more labels based on one or more video quality metrics associated with an evaluation objective. A machine learning model is trained based on the training data. A video to be evaluated is received. The video is assigned to a first video quality category of a plurality of video quality categories based on the machine learning model.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: December 6, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Wook Jin Chung, Ziheng Wang, Allen Yang Liu, Joyce Marie Hodel
  • Publication number: 20220249019
Abstract: According to an aspect of the invention, there is provided a low back pain analysis device comprising a processor and a storage medium having computer program instructions stored thereon. When executed by the processor, the instructions perform processing of obtaining a relationship between low back pain and the result of a pattern, the result being obtained by using clustering to classify gravity center movement data acquired by a sensor that is attached to furniture and acquires the gravity center movement data for a sitting period, including a period during which a person is sitting on the furniture.
    Type: Application
    Filed: January 18, 2022
    Publication date: August 11, 2022
    Inventors: Ziheng Wang, Keizo Sato, Ryoichi Nagatomi
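One way to sketch the clustering step described above is to group windows of gravity-center traces with k-means and tabulate how often each pattern co-occurs with reported low back pain; the window length, the number of clusters, and the pain labels are assumed inputs.

```python
# Illustrative sketch only: cluster center-of-gravity movement windows and
# relate each pattern to reported low back pain.
import numpy as np
from sklearn.cluster import KMeans

def cluster_sitting_patterns(windows: np.ndarray, k: int = 4) -> np.ndarray:
    """windows: (num_windows, window_len * 2) flattened (x, y) gravity-center traces."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(windows)

def pattern_pain_rates(pattern_ids: np.ndarray, has_pain: np.ndarray) -> dict:
    """Fraction of windows in each pattern that come from sitters reporting pain."""
    return {int(p): float(has_pain[pattern_ids == p].mean())
            for p in np.unique(pattern_ids)}
```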
  • Patent number: 11138440
    Abstract: Systems, methods, and non-transitory computer-readable media can receive a set of video frames associated with a video. For each video frame of the set of video frames, a plurality of interest points are identified based on an interest point detector. For each video frame of the set of video frames, it is determined whether the video frame depicts the same static image as a next video frame in the set of video frames based on the plurality of interest points identified in each video frame.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: October 5, 2021
    Assignee: Facebook, Inc.
    Inventors: Jianyu Wang, Lei Huang, Guangshuo Liu, Renbin Peng, Ziheng Wang, Di Liu
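A rough sketch of the per-frame comparison described above, using OpenCV's ORB as a stand-in interest point detector; the detector choice and the matching rule are assumptions rather than the patented method.

```python
# Heuristic sketch: two consecutive frames are treated as the same static
# image when they yield the same number of interest points at (nearly) the
# same locations.
import cv2
import numpy as np

def frame_is_static(frame_a: np.ndarray, frame_b: np.ndarray, tol: float = 1.0) -> bool:
    orb = cv2.ORB_create()
    kps_a = orb.detect(cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY), None)
    kps_b = orb.detect(cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY), None)
    if len(kps_a) == 0 or len(kps_a) != len(kps_b):
        return False
    pts_a = np.array([kp.pt for kp in kps_a])
    pts_b = np.array([kp.pt for kp in kps_b])
    return bool(np.all(np.abs(pts_a - pts_b) < tol))
```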
  • Patent number: 11017237
    Abstract: Systems, methods, and non-transitory computer-readable media can receive a set of video frames associated with a video. Dynamic regions in each video frame of the set of video frames can be filtered out, wherein each dynamic region represents a region in which a threshold level of movement is detected. A determination can be made for each video frame of the set of filtered video frames, whether the video frame comprises synthetic overlaid text based on a machine learning model.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: May 25, 2021
    Assignee: Facebook, Inc.
    Inventors: Lei Huang, Jianyu Wang, Guangshuo Liu, Renbin Peng, Ziheng Wang, Di Liu
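The two-stage pipeline in the abstract above, filtering dynamic regions and then classifying what remains, could be sketched as follows; the differencing threshold and the classifier interface are assumptions.

```python
# Hypothetical sketch: mask out moving regions via simple frame differencing,
# then hand the filtered frame to a supplied text classifier.
import numpy as np

def mask_dynamic_regions(prev_frame: np.ndarray, frame: np.ndarray,
                         motion_thresh: int = 25) -> np.ndarray:
    """Zero out pixels whose intensity changed more than motion_thresh between frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)).max(axis=-1)
    filtered = frame.copy()
    filtered[diff > motion_thresh] = 0
    return filtered

def has_overlaid_text(prev_frame, frame, text_classifier) -> bool:
    """text_classifier: callable mapping a frame to a probability of synthetic overlaid text."""
    return text_classifier(mask_dynamic_regions(prev_frame, frame)) > 0.5
```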
  • Patent number: 10956746
    Abstract: Systems, methods, and non-transitory computer-readable media can receive a set of video frames associated with a video. A determination can be made that a first set of consecutive video frames of the set of video frames depicts identical content to a second set of consecutive video frames of the set of video frames, wherein the first set of consecutive video frames and the second set of consecutive video frames satisfy a threshold number of consecutive video frames. The video is identified as a looping video based on the determination that the first set of consecutive video frames depicts identical content to the second set of consecutive video frames.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: March 23, 2021
    Assignee: Facebook, Inc.
    Inventors: Lei Huang, Guangshuo Liu, Renbin Peng, Ziheng Wang, Di Liu, Jianyu Wang
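A sketch of the repeated-segment check described above under simple assumptions: represent each frame with a coarse content hash and look for two disjoint runs of at least `min_run` consecutive frames with matching hashes.

```python
# Illustrative sketch: coarse frame hashing plus a search for a repeated run
# of consecutive frames, taken as evidence of a looping video.
import numpy as np

def frame_hash(frame: np.ndarray) -> bytes:
    """Crude content hash: 8x8 mean-downsampled grayscale bytes."""
    gray = frame.mean(axis=-1)
    h, w = gray.shape
    h8, w8 = h - h % 8, w - w % 8
    small = gray[:h8, :w8].reshape(8, h8 // 8, 8, w8 // 8).mean(axis=(1, 3))
    return small.astype(np.uint8).tobytes()

def is_looping(frames, min_run: int = 30) -> bool:
    """True if some stretch of >= min_run consecutive frames repeats later in the video."""
    hashes = [frame_hash(f) for f in frames]
    n = len(hashes)
    for offset in range(min_run, n - min_run + 1):
        run = 0
        for i in range(n - offset):
            run = run + 1 if hashes[i] == hashes[i + offset] else 0
            if run >= min_run:
                return True
    return False
```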
  • Patent number: 10922548
    Abstract: Systems, methods, and non-transitory computer-readable media can receive a set of video frames associated with a video. A determination can be made that a threshold number of video frames of the set of video frames depict two or more reaction icons of a set of reaction icons. The video can be identified as a poll video based on the determining that the threshold number of video frames of the set of video frames depict two or more reaction icons of the set of reaction icons.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: February 16, 2021
    Assignee: Facebook, Inc.
    Inventors: Lei Huang, Jianyu Wang, Guangshuo Liu, Renbin Peng, Ziheng Wang, Raghu Prasad Chalasani
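The poll-video rule in the abstract above reduces to a small check once an icon detector exists; `count_reaction_icons` is an assumed detector (for example, template matching against known reaction icons), and the frame-count threshold is illustrative.

```python
# Illustrative sketch only: flag a video as a poll video when enough frames
# show two or more of the known reaction icons.
def is_poll_video(frames, count_reaction_icons, min_frames_with_icons: int = 30) -> bool:
    hits = sum(1 for frame in frames if count_reaction_icons(frame) >= 2)
    return hits >= min_frames_with_icons
```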
  • Publication number: 20210027065
    Abstract: Systems, methods, and non-transitory computer-readable media can collect a set of training videos as training data, wherein the set of training videos are labeled with one or more labels based on one or more video quality metrics associated with an evaluation objective. A machine learning model is trained based on the training data. A video to be evaluated is received. The video is assigned to a first video quality category of a plurality of video quality categories based on the machine learning model.
    Type: Application
    Filed: July 26, 2019
    Publication date: January 28, 2021
    Inventors: Wook Jin Chung, Ziheng Wang, Allen Yang Liu, Joyce Marie Hodel
  • Patent number: 10684674
    Abstract: A virtual reality system includes a head-mounted display (HMD) having one or more facial sensors and illumination sources mounted to a surface of the HMD. For example, the facial sensors are image capture devices coupled to a bottom side of the HMD. The illumination sources illuminate portions of a user's face outside of the HMD, while the facial sensors capture images of the illuminated portions of the user's face. A controller receives the captured images and generates a representation of the portions of the user's face by identifying landmarks of the user's face in the captured images and performing other suitable image processing methods. Based on the representation, the controller or another component of the virtual reality system generates content for presentation to the user.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: June 16, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Dov Katz, Michael John Toksvig, Ziheng Wang, Timothy Paul Omernick, Torin Ross Herndon
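A hedged sketch of the capture-then-landmark flow described above; `detect_landmarks` stands in for any 2D facial-landmark detector, and the representation step is reduced to averaging landmark positions, which is an assumption for illustration only.

```python
# Rough sketch: run an assumed landmark detector on images captured by the
# HMD-mounted sensors and combine the results into one face representation.
import numpy as np

def represent_lower_face(camera_frames, detect_landmarks):
    """camera_frames: images from sensors on the HMD's bottom side.

    detect_landmarks: callable mapping an image to an (N, 2) landmark array
    with a fixed landmark count. Returns consensus landmark positions.
    """
    landmark_sets = [detect_landmarks(f) for f in camera_frames]
    return np.mean(np.stack(landmark_sets), axis=0)
```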
  • Patent number: 10430988
    Abstract: A facial tracking system generates a virtual rendering of a portion of a face of a user wearing a head-mounted display (HMD). The facial tracking system illuminates portions of the face inside the HMD. The facial tracking system captures a plurality of facial data of the portion of the face using one or more facial sensors located inside the HMD. A plurality of planar sections of the portion of the face are identified based at least in part on the plurality of facial data. The plurality of planar sections are mapped to one or more landmarks of the face. Facial animation information is generated based at least in part on the mapping, the facial animation information describing a portion of a virtual face corresponding to the portion of the user's face.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: October 1, 2019
    Assignee: Facebook Technologies, LLC
    Inventors: Dov Katz, Michael John Toksvig, Ziheng Wang, Timothy Paul Omernick, Torin Ross Herndon
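One step implied by the abstract above, identifying planar sections of the face, can be sketched as a least-squares plane fit to a small patch of 3D facial-sensor samples; the point format and the patching strategy are assumptions.

```python
# Geometric sketch: fit a plane to one patch of 3D facial-sensor points by
# SVD; the resulting planar sections would later be mapped to landmarks.
import numpy as np

def fit_planar_section(points: np.ndarray):
    """points: (N, 3) 3D samples of one facial patch.

    Returns (centroid, unit normal) of the best-fit plane via SVD of the
    mean-centered points.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]  # direction of least variance
    return centroid, normal / np.linalg.norm(normal)
```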
  • Patent number: 9959678
    Abstract: A head mounted display (HMD) in a VR system includes sensors for tracking the eyes and face of a user wearing the HMD. The VR system records calibration attributes such as landmarks of the face of the user. Light sources illuminate portions of the user's face covered by the HMD. In conjunction, facial sensors capture facial data. The VR system analyzes the facial data to determine the orientation of planar sections of the illuminated portions of face. The VR system aggregates planar sections of the face and maps the planar sections to landmarks of the face to generate a facial animation of the user, which can also include eye orientation information. The facial animation is represented as a virtual avatar and presented to the user.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: May 1, 2018
    Assignee: Oculus VR, LLC
    Inventors: Dov Katz, Michael John Toksvig, Ziheng Wang, Timothy Paul Omernick, Torin Ross Herndon
  • Publication number: 20170352178
    Abstract: A facial tracking system generates a virtual rendering of a portion of a face of a user wearing a head-mounted display (HMD). The facial tracking system illuminates portions of the face inside the HMD. The facial tracking system captures a plurality of facial data of the portion of the face using one or more facial sensors located inside the HMD. A plurality of planar sections of the portion of the face are identified based at least in part on the plurality of facial data. The plurality of planar sections are mapped to one or more landmarks of the face. Facial animation information is generated based at least in part on the mapping, the facial animation information describing a portion of a virtual face corresponding to the portion of the user's face.
    Type: Application
    Filed: June 3, 2016
    Publication date: December 7, 2017
    Inventors: Dov Katz, Michael John Toksvig, Ziheng Wang, Timothy Paul Omernick, Torin Ross Herndon
  • Publication number: 20170352183
    Abstract: A head mounted display (HMD) in a VR system includes sensors for tracking the eyes and face of a user wearing the HMD. The VR system records calibration attributes such as landmarks of the face of the user. Light sources illuminate portions of the user's face covered by the HMD. In conjunction, facial sensors capture facial data. The VR system analyzes the facial data to determine the orientation of planar sections of the illuminated portions of face. The VR system aggregates planar sections of the face and maps the planar sections to landmarks of the face to generate a facial animation of the user, which can also include eye orientation information. The facial animation is represented as a virtual avatar and presented to the user.
    Type: Application
    Filed: June 3, 2016
    Publication date: December 7, 2017
    Inventors: Dov Katz, Michael John Toksvig, Ziheng Wang, Timothy Paul Omernick, Torin Ross Herndon
  • Publication number: 20170287194
    Abstract: A virtual reality system includes a head-mounted display (HMD) having one or more facial sensors and illumination sources mounted to a surface of the HMD. For example, the facial sensors are image capture devices coupled to a bottom side of the HMD. The illumination sources illuminate portions of a user's face outside of the HMD, while the facial sensors capture images of the illuminated portions of the user's face. A controller receives the captured images and generates a representation of the portions of the user's face by identifying landmarks of the user's face in the captured images and performing other suitable image processing methods. Based on the representation, the controller or another component of the virtual reality system generates content for presentation to the user.
    Type: Application
    Filed: April 1, 2016
    Publication date: October 5, 2017
    Inventors: Dov Katz, Michael John Toksvig, Ziheng Wang, Timothy Paul Omernick, Torin Ross Herndon