Patents by Inventor Ruizhe Wang

Ruizhe Wang has filed for patents covering the following inventions. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO). Short illustrative code sketches for several of the listed techniques appear after the listing; they are simplified toy examples, not the patented implementations.

  • Publication number: 20240112063
    Abstract: A method and system for estimating the ground state energy of a quantum Hamiltonian. The disclosed algorithm may run on any hardware and is suited for early fault-tolerant quantum computers. The algorithm employs low-depth quantum circuits with a single ancilla qubit, combined with classical post-processing. It first draws samples from Hadamard tests in which the unitary is a controlled time evolution of the Hamiltonian. The samples are used to evaluate the convolution of the spectral measure with a filter function and then to infer the ground state energy from this convolution. Quantum circuit depth is linear in the inverse spectral gap and poly-logarithmic in the inverse target accuracy and inverse initial overlap. Runtime is polynomial in the inverse spectral gap, inverse target accuracy, and inverse initial overlap. The algorithm produces a highly accurate estimate of the ground state energy with reasonable runtime using low-depth quantum circuits.
    Type: Application
    Filed: September 8, 2023
    Publication date: April 4, 2024
    Inventors: Guoming Wang, Peter Douglas Johnson, Ruizhe Zhang, Daniel Stilck França, Shuchen Zhu
  • Patent number: 11195318
    Abstract: A non-transitory, tangible, computer-readable storage medium may contain a program of instructions that cause a computer system running the program of instructions to automatically generate a 3D avatar of a living being, including automatically: causing one or more sensors to generate 3D data indicative of the three-dimensional shape and appearance of at least a portion of the living being; and generating, based on the 3D data, a virtual character that can be animated and controlled.
    Type: Grant
    Filed: April 23, 2015
    Date of Patent: December 7, 2021
    Assignee: UNIVERSITY OF SOUTHERN CALIFORNIA
    Inventors: Evan Suma, Gerard Medioni, Mark Bolas, Ari Y. Shapiro, Wei-Wen Feng, Ruizhe Wang
  • Patent number: 9940727
    Abstract: The present disclosure describes systems and techniques relating to generating three dimensional (3D) models from range sensor data. According to an aspect, frames of range scan data captured using one or more three dimensional (3D) sensors are obtained, where the frames correspond to different views of an object or scene; point clouds for the frames are registered with each other by maximizing coherence of projected occluding boundaries of the object or scene within the frames, using an optimization algorithm with a cost function that computes pairwise or global contour correspondences; and the registered point clouds are provided for use in 3D modeling of the object or scene. Further, the cost function, which maximizes contour coherence, can be used with more than two point clouds (more than two frames at a time) in a global optimization framework.
    Type: Grant
    Filed: June 19, 2015
    Date of Patent: April 10, 2018
    Assignee: University of Southern California
    Inventors: Gerard Guy Medioni, Ruizhe Wang, Jongmoo Choi
  • Patent number: 9418475
    Abstract: The present disclosure describes systems and techniques relating to generating three dimensional (3D) models from range sensor data. According to an aspect, multiple 3D point clouds, which are captured using one or more 3D cameras, are obtained. At least two of the 3D point clouds correspond to different positions of a body relative to at least a single one of the one or more 3D cameras. Two or more of the 3D point clouds are identified as corresponding to two or more predefined poses, and a segmented representation of the body is generated, in accordance with a 3D part-based volumetric model including cylindrical representations, based on the two 3D point clouds identified as corresponding to the two predefined poses.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: August 16, 2016
    Assignee: University of Southern California
    Inventors: Gerard Guy Medioni, Jongmoo Choi, Ruizhe Wang
  • Publication number: 20150371432
    Abstract: The present disclosure describes systems and techniques relating to generating three dimensional (3D) models from range sensor data. According to an aspect, frames of range scan data captured using one or more three dimensional (3D) sensors are obtained, where the frames correspond to different views of an object or scene; point clouds for the frames are registered with each other by maximizing coherence of projected occluding boundaries of the object or scene within the frames, using an optimization algorithm with a cost function that computes pairwise or global contour correspondences; and the registered point clouds are provided for use in 3D modeling of the object or scene. Further, the cost function, which maximizes contour coherence, can be used with more than two point clouds (more than two frames at a time) in a global optimization framework.
    Type: Application
    Filed: June 19, 2015
    Publication date: December 24, 2015
    Inventors: Gerard Guy Medioni, Ruizhe Wang, Jongmoo Choi
  • Publication number: 20150356767
    Abstract: A non-transitory, tangible, computer-readable storage medium may contain a program of instructions that cause a computer system running the program of instructions to automatically generate a 3D avatar of a living being, including automatically: causing one or more sensors to generate 3D data indicative of the three-dimensional shape and appearance of at least a portion of the living being; and generating, based on the 3D data, a virtual character that can be animated and controlled.
    Type: Application
    Filed: April 23, 2015
    Publication date: December 10, 2015
    Applicant: UNIVERSITY OF SOUTHERN CALIFORNIA
    Inventors: Evan Suma, Gerard Medioni, Mark Bolas, Ari Y. Shapiro, Wei-Wen Feng, Ruizhe Wang
  • Publication number: 20130286012
    Abstract: The present disclosure describes systems and techniques relating to generating three dimensional (3D) models from range sensor data. According to an aspect, multiple 3D point clouds, which are captured using one or more 3D cameras, are obtained. At least two of the 3D point clouds correspond to different positions of a body relative to at least a single one of the one or more 3D cameras. Two or more of the 3D point clouds are identified as corresponding to two or more predefined poses, and a segmented representation of the body is generated, in accordance with a 3D part-based volumetric model including cylindrical representations, based on the two 3D point clouds identified as corresponding to the two predefined poses.
    Type: Application
    Filed: March 13, 2013
    Publication date: October 31, 2013
    Applicant: UNIVERSITY OF SOUTHERN CALIFORNIA
    Inventors: Gerard Guy Medioni, Jongmoo Choi, Ruizhe Wang
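
Publication 20240112063 lends itself to a small classical emulation. The sketch below is a toy illustration only, not the patented algorithm: a 4x4 Hamiltonian with a known spectrum stands in for the real problem, the Hadamard-test outcomes are simulated rather than measured on hardware, and the Gaussian filter, time grid, shot count, and peak threshold are assumptions chosen for this demo rather than parameters from the filing.

```python
"""Toy classical emulation of the recipe in publication 20240112063: sample
Hadamard tests of <phi|exp(-iHt)|phi>, convolve the spectral measure with a
filter function, and read the ground-state energy off the leftmost peak.
The Hamiltonian, Gaussian filter, grids, shot counts and threshold are all
assumptions made for this demo, not parameters from the filing."""
import numpy as np

rng = np.random.default_rng(0)

# Toy Hamiltonian with a known spectrum (ground energy -0.6, spectral gap 0.5)
# hidden behind a random eigenbasis Q.  A real device would not know these.
spectrum = np.array([-0.6, -0.1, 0.3, 0.7])
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
phi = Q @ np.array([0.8, 0.6, 0.0, 0.0])        # initial state, overlap 0.8 with ground state

def hadamard_test(t, imaginary=False, shots=400):
    """Emulate the +/-1 outcomes of a one-ancilla Hadamard test whose mean is
    Re (or Im) <phi|exp(-iHt)|phi>; on hardware these come from measurements."""
    amp = phi.conj() @ (Q @ (np.exp(-1j * spectrum * t) * (Q.conj().T @ phi)))
    mean = amp.imag if imaginary else amp.real
    return np.where(rng.random(shots) < (1 + mean) / 2, 1.0, -1.0).mean()

# Classical post-processing: approximate the convolution of the spectral measure
# with a Gaussian filter,
#   C(x) ~ (dt / 2pi) * sum_j exp(-sigma^2 t_j^2 / 2) exp(i x t_j) <phi|exp(-iH t_j)|phi>.
sigma, dt = 0.1, 0.2                            # filter width and time step (accuracy knobs)
t_grid = dt * np.arange(-250, 251)
weights = dt / (2 * np.pi) * np.exp(-(sigma * t_grid) ** 2 / 2)
signal = np.array([hadamard_test(t) + 1j * hadamard_test(t, imaginary=True) for t in t_grid])

xs = np.linspace(-1.0, 1.0, 2001)
conv = np.array([np.real(np.sum(weights * np.exp(1j * x * t_grid) * signal)) for x in xs])

# The ground-state energy sits at the leftmost significant peak of the convolution.
threshold = 0.25 * conv.max()
start = int(np.argmax(conv > threshold))        # first index of the leftmost region above threshold
end = start
while end < len(xs) and conv[end] > threshold:
    end += 1
estimate = xs[start + int(np.argmax(conv[start:end]))]
print(f"true ground energy: {spectrum[0]:+.3f}   estimate: {estimate:+.3f}")
```

The point the sketch tries to convey is that the only quantum work is estimating the real and imaginary parts of <phi|exp(-iHt)|phi> at sampled times with a one-ancilla circuit; everything else is classical post-processing of those samples.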
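
Patent 11195318 and its earlier publication 20150356767 describe a capture-to-character pipeline rather than a specific numerical method, so the sketch below only shows one plausible shape such a pipeline could take. The synthetic "scans", the three-joint skeleton, and the nearest-joint skinning scheme are illustrative assumptions, not anything taken from the filing.

```python
"""Minimal pipeline sketch for the avatar-capture idea in patent 11195318 /
publication 20150356767: acquire 3D data of a person from one or more sensors
and turn it into a character that can be animated and controlled.  The data
structures, synthetic "sensor", and nearest-joint skinning below are
illustrative assumptions, not the patented implementation."""
from dataclasses import dataclass
import numpy as np

rng = np.random.default_rng(1)

@dataclass
class Avatar:
    vertices: np.ndarray      # (N, 3) fused surface points of the subject
    colors: np.ndarray        # (N, 3) per-point appearance
    joints: np.ndarray        # (J, 3) skeleton joint positions
    weights: np.ndarray       # (N, J) skinning weights used for animation

def capture_frames(num_frames=4, points_per_frame=500):
    """Stand-in for the sensor step: each 'frame' is a colored partial scan."""
    frames = []
    for _ in range(num_frames):
        pts = rng.normal(scale=0.3, size=(points_per_frame, 3)) + [0.0, 1.0, 0.0]
        rgb = rng.uniform(size=(points_per_frame, 3))
        frames.append((pts, rgb))
    return frames

def build_avatar(frames, joints):
    """Fuse the scans and attach nearest-joint skinning weights for animation."""
    pts = np.vstack([p for p, _ in frames])
    rgb = np.vstack([c for _, c in frames])
    dist = np.linalg.norm(pts[:, None, :] - joints[None, :, :], axis=2)   # (N, J)
    weights = 1.0 / (dist + 1e-6)
    weights /= weights.sum(axis=1, keepdims=True)                         # normalize per point
    return Avatar(pts, rgb, joints, weights)

def animate(avatar, joint_offsets):
    """Drive the character: move each point by the weighted average joint offset."""
    return avatar.vertices + avatar.weights @ joint_offsets               # (N, 3)

# Toy skeleton (pelvis, chest, head) and a small "raise the head" motion.
joints = np.array([[0.0, 0.8, 0.0], [0.0, 1.2, 0.0], [0.0, 1.6, 0.0]])
avatar = build_avatar(capture_frames(), joints)
posed = animate(avatar, np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.05, 0.0]]))
print("avatar points:", avatar.vertices.shape, " posed points:", posed.shape)
```

In a real system the capture step would return fused depth and colour scans of the subject and the rigging step would fit a full skeleton, but the data flow the abstract describes (sensors produce 3D data, and the 3D data yields an animatable, controllable character) is the part the sketch mirrors.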
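
For the contour-coherence registration of patent 9940727 and publication 20150371432, the following sketch shows only a pairwise, 2D flavour of the idea: approximate each view's projected occluding boundary with a crude angular-bin silhouette, build nearest-neighbour contour correspondences, and iterate a closed-form (Kabsch) rigid update. The silhouette heuristic, the 2D setting, the synthetic elliptical object, and every parameter are simplifying assumptions; the filing's global, multi-frame optimization is not reproduced here.

```python
"""Toy pairwise sketch of contour-coherence registration (patent 9940727 /
publication 20150371432): extract an approximate projected occluding boundary,
match contour points between two views, and solve for the rigid motion that
aligns them.  The 2D setting, angular-bin silhouette, and ICP-style loop are
simplifying assumptions, not the patented global optimization."""
import numpy as np

rng = np.random.default_rng(2)

def silhouette(points_2d, bins=72):
    """Crude stand-in for the projected occluding boundary: keep the point
    farthest from the centroid in each angular bin."""
    centered = points_2d - points_2d.mean(axis=0)
    ang = np.arctan2(centered[:, 1], centered[:, 0])
    r = np.linalg.norm(centered, axis=1)
    bin_idx = np.digitize(ang, np.linspace(-np.pi, np.pi, bins + 1)) - 1
    keep = [np.flatnonzero(bin_idx == b)[np.argmax(r[bin_idx == b])]
            for b in range(bins) if np.any(bin_idx == b)]
    return points_2d[keep]

def best_rigid_2d(src, dst):
    """Closed-form (Kabsch) rigid transform minimizing sum ||R @ src_i + t - dst_i||^2."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    if np.linalg.det(U @ Vt) < 0:                   # guard against a reflection
        Vt[-1] *= -1
    R = (U @ Vt).T
    return R, cd - R @ cs

def register_by_contours(view_a, view_b, iterations=30):
    """Pairwise registration: repeatedly match contour points of view_a to their
    nearest contour points in view_b and refine the rigid transform."""
    R, t = np.eye(2), np.zeros(2)
    contour_a, contour_b = silhouette(view_a), silhouette(view_b)
    for _ in range(iterations):
        moved = contour_a @ R.T + t
        dist = np.linalg.norm(moved[:, None, :] - contour_b[None, :, :], axis=2)
        matched = contour_b[np.argmin(dist, axis=1)]            # contour correspondences
        dR, dt = best_rigid_2d(moved, matched)
        R, t = dR @ R, dR @ t + dt                              # compose the update
    return R, t

# Two "views" of the same elliptical object, offset by a known rigid motion.
theta = np.deg2rad(12.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
t_true = np.array([0.15, -0.05])
points = rng.normal(size=(800, 2)) * [1.0, 0.4]
view_b = points
view_a = points @ R_true.T + t_true + rng.normal(scale=0.005, size=points.shape)

R_est, t_est = register_by_contours(view_a, view_b)
angle_err = np.rad2deg(np.arccos(np.clip(np.trace(R_est @ R_true) / 2, -1.0, 1.0)))
print(f"rotation error: {angle_err:.2f} deg")
print(f"estimated translation: {t_est}   expected: {-(R_true.T @ t_true)}")
```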
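
Finally, for the part-based volumetric model of patent 9418475 and publication 20130286012, the sketch below covers just the cylindrical-representation piece: given points already segmented into a body part, fit a cylinder by taking the dominant PCA direction as the axis and deriving the radius and length from it. The pose identification and segmentation steps of the filing are assumed to have already happened, and the synthetic torso and arm scans are fabricated for the demo.

```python
"""Sketch of the cylindrical part representation in patent 9418475 /
publication 20130286012: given points already segmented into a body part,
fit a cylinder (axis from PCA, radius from the mean distance to the axis)
and collect the parts into a simple volumetric body model.  The synthetic
scans and the PCA-based fit are illustrative assumptions only."""
from dataclasses import dataclass
import numpy as np

rng = np.random.default_rng(3)

@dataclass
class Cylinder:
    center: np.ndarray   # (3,) centroid of the part
    axis: np.ndarray     # (3,) unit direction of the part's main axis
    radius: float        # mean distance of the points from the axis
    length: float        # extent of the part along the axis

def fit_cylinder(points):
    """Fit a cylinder to a segmented part: PCA gives the axis, then radius and
    length follow from distances to, and projections onto, that axis."""
    center = points.mean(axis=0)
    centered = points - center
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]                                          # dominant direction
    along = centered @ axis                               # signed position along the axis
    radial = centered - np.outer(along, axis)             # component orthogonal to the axis
    return Cylinder(center, axis, float(np.linalg.norm(radial, axis=1).mean()),
                    float(along.max() - along.min()))

def sample_cylinder(axis, radius, length, n=2000):
    """Synthetic 'scan' of a limb-like part: noisy points on a cylinder surface."""
    axis = axis / np.linalg.norm(axis)
    u = np.cross(axis, [1.0, 0.0, 0.0])                   # build a frame perpendicular to the axis
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    h = rng.uniform(-length / 2, length / 2, n)
    ang = rng.uniform(0, 2 * np.pi, n)
    pts = np.outer(h, axis) + radius * (np.outer(np.cos(ang), u) + np.outer(np.sin(ang), v))
    return pts + rng.normal(scale=0.003, size=pts.shape)

# A toy segmented "body": an upright torso and a slanted upper arm.
segments = {
    "torso": sample_cylinder(np.array([0.0, 1.0, 0.0]), radius=0.15, length=0.6),
    "upper_arm": sample_cylinder(np.array([1.0, 1.0, 0.0]), radius=0.05, length=0.3) + [0.3, 0.2, 0.0],
}
body_model = {name: fit_cylinder(pts) for name, pts in segments.items()}
for name, cyl in body_model.items():
    print(f"{name:10s} radius ~ {cyl.radius:.3f}   length ~ {cyl.length:.3f}")
```

Collecting one fitted cylinder per segmented part gives a compact volumetric stand-in for the body, which is the role the cylindrical representations play in the abstract.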