Patents by Inventor Minh Phuoc Vo

Minh Phuoc Vo has filed patents protecting the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11762080
    Abstract: A method includes receiving a first wireless signal detected by a first device in an environment, the first wireless signal including a first distortion pattern caused by an object moving in the environment, receiving a second wireless signal detected by a second device in the environment, the second wireless signal including a second distortion pattern caused by the object moving in the environment, determining, by comparing the first distortion pattern to the second distortion pattern, that the first distortion pattern and the second distortion pattern correspond to a same movement event associated with the object moving in the environment, determining a timing offset between the first device and the second device based on information associated with the first distortion pattern and the second distortion pattern, and determining, based on the timing offset, temporal correspondences between data generated by the first device and data generated by the second device.
    Type: Grant
    Filed: December 2, 2020
    Date of Patent: September 19, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Minh Phuoc Vo, Kiran Kumar Somasundaram, Steven John Lovegrove
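The core idea in the abstract above, recovering a timing offset between two unsynchronized devices by matching the distortion patterns they each observed, can be illustrated with a simple cross-correlation lag search. This is a hypothetical sketch in plain Python; the function name, the sample-aligned search, and the toy spike signals are illustrative assumptions, not the patented implementation.

```python
def estimate_timing_offset(pattern_a, pattern_b, max_lag):
    """Return the lag (in samples) that best aligns pattern_b with
    pattern_a, found by maximizing their raw cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, a in enumerate(pattern_a):
            j = i + lag
            if 0 <= j < len(pattern_b):
                score += a * pattern_b[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Two devices observe the same movement event; device B's samples
# lag device A's by three ticks of its local clock.
signal_a = [0.0] * 30
signal_b = [0.0] * 30
signal_a[9:12] = [0.5, 1.0, 0.5]   # distortion pattern seen by device A
signal_b[12:15] = [0.5, 1.0, 0.5]  # same event, three samples later at B
offset = estimate_timing_offset(signal_a, signal_b, max_lag=5)  # → 3
```

Once the offset is known, timestamps from the two devices can be shifted onto a common timeline to establish temporal correspondences between their data streams.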
  • Patent number: 11651540
    Abstract: In one embodiment, a method includes adjusting parameters of a three-dimensional geometry corresponding to a first person to make the three-dimensional geometry represent a desired pose for the first person, accessing a neural texture encoding an appearance of the first person, generating a first rendered neural texture based on a mapping between (1) a portion of the three-dimensional geometry that is visible from a viewing direction and (2) the neural texture, generating a second rendered neural texture by processing the first rendered neural texture using a first neural network, determining normal information associated with the portion of the three-dimensional geometry that is visible from the viewing direction, and generating a rendered image for the first person in the desired pose by processing the second rendered neural texture and the normal information using a second neural network.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: May 16, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Minh Phuoc Vo, Christoph Lassner, Carsten Sebastian Stoll, Amit Raj
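The abstract above describes a staged rendering pipeline: pose a 3D geometry, rasterize a learned neural texture over the visible surface, refine the result with one network, then fuse it with surface normals in a second network to produce the final image. Below is a minimal structural sketch of that data flow only; the "networks" are placeholder callables, and every name is an illustrative assumption rather than the patented method. In a real renderer the visible texture coordinates and normals would be derived from the posed geometry; here they are supplied directly to keep the sketch short.

```python
def pose_geometry(template_vertices, pose_offsets):
    """Adjust template mesh vertices so the geometry represents the desired pose."""
    return [(x + dx, y + dy, z + dz)
            for (x, y, z), (dx, dy, dz) in zip(template_vertices, pose_offsets)]

def rasterize_neural_texture(neural_texture, visible_uvs):
    """First rendered neural texture: look up the learned feature vector
    at each texture coordinate visible from the viewing direction."""
    return [neural_texture[v][u] for (u, v) in visible_uvs]

def render_person(template_vertices, pose_offsets, neural_texture,
                  visible_uvs, normals, refine_net, shading_net):
    geometry = pose_geometry(template_vertices, pose_offsets)  # desired pose
    first = rasterize_neural_texture(neural_texture, visible_uvs)
    second = refine_net(first)            # second rendered neural texture
    return shading_net(second, normals)   # fuse features + normals -> image
```

A toy run might use a 2x2 texture of one-dimensional features with `refine_net` doubling each feature and `shading_net` adding the per-pixel normal term, just to exercise the stages end to end.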
  • Publication number: 20220082679
    Abstract: A method includes receiving a first wireless signal detected by a first device in an environment, the first wireless signal including a first distortion pattern caused by an object moving in the environment, receiving a second wireless signal detected by a second device in the environment, the second wireless signal including a second distortion pattern caused by the object moving in the environment, determining, by comparing the first distortion pattern to the second distortion pattern, that the first distortion pattern and the second distortion pattern correspond to a same movement event associated with the object moving in the environment, determining a timing offset between the first device and the second device based on information associated with the first distortion pattern and the second distortion pattern, and determining, based on the timing offset, temporal correspondences between data generated by the first device and data generated by the second device.
    Type: Application
    Filed: December 2, 2020
    Publication date: March 17, 2022
    Inventors: Minh Phuoc Vo, Kiran Kumar Somasundaram, Steven John Lovegrove
  • Publication number: 20220036626
    Abstract: In one embodiment, a method includes adjusting parameters of a three-dimensional geometry corresponding to a first person to make the three-dimensional geometry represent a desired pose for the first person, accessing a neural texture encoding an appearance of the first person, generating a first rendered neural texture based on a mapping between (1) a portion of the three-dimensional geometry that is visible from a viewing direction and (2) the neural texture, generating a second rendered neural texture by processing the first rendered neural texture using a first neural network, determining normal information associated with the portion of the three-dimensional geometry that is visible from the viewing direction, and generating a rendered image for the first person in the desired pose by processing the second rendered neural texture and the normal information using a second neural network.
    Type: Application
    Filed: April 12, 2021
    Publication date: February 3, 2022
    Inventors: Minh Phuoc Vo, Christoph Lassner, Carsten Sebastian Stoll, Amit Raj
  • Patent number: 10535156
    Abstract: Examples of the present disclosure describe systems and methods for scene reconstruction from bursts of image data. In an example, an image capture device may gather information from multiple positions within the scene. At each position, a burst of image data may be captured, such that other images within the burst may be used to identify common image features, anchor points, and geometry, in order to generate a scene reconstruction as observed from the position. Thus, as a result of capturing bursts from multiple positions in a scene, multiple burst reconstructions may be generated. Each burst may be oriented within the scene by identifying a key frame for each burst and using common image features and anchor points among the key frames to determine a camera position for each key frame. The burst reconstructions may then be combined into a unified reconstruction, thereby generating a high-quality reconstruction of the scene.
    Type: Grant
    Filed: April 4, 2017
    Date of Patent: January 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Neel Suresh Joshi, Sudipta Narayan Sinha, Minh Phuoc Vo
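The burst-reconstruction approach above hinges on registering each per-burst reconstruction into a common frame using anchor points shared between key frames. The sketch below illustrates only that alignment-and-merge step, assuming (purely for illustration) that anchors are keyed by a feature id and that a translation-only fit suffices; the patented system estimates full camera poses per key frame.

```python
def align_key_frames(anchors_a, anchors_b):
    """Estimate the translation mapping burst B's anchor points onto
    burst A's, averaged over anchors matched in both key frames."""
    shared = anchors_a.keys() & anchors_b.keys()
    n = len(shared)
    return tuple(
        sum(anchors_a[k][axis] - anchors_b[k][axis] for k in shared) / n
        for axis in range(3)
    )

def merge_reconstructions(points_a, points_b, offset):
    """Shift burst B's points into burst A's frame and combine the two
    burst reconstructions into a single unified reconstruction."""
    dx, dy, dz = offset
    return points_a + [(x + dx, y + dy, z + dz) for (x, y, z) in points_b]
```

With anchors `{"door", "lamp"}` seen in both key frames, `align_key_frames` recovers the offset between the two burst coordinate frames, and `merge_reconstructions` applies it before concatenating the point sets.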
  • Publication number: 20180225836
    Abstract: Examples of the present disclosure describe systems and methods for scene reconstruction from bursts of image data. In an example, an image capture device may gather information from multiple positions within the scene. At each position, a burst of image data may be captured, such that other images within the burst may be used to identify common image features, anchor points, and geometry, in order to generate a scene reconstruction as observed from the position. Thus, as a result of capturing bursts from multiple positions in a scene, multiple burst reconstructions may be generated. Each burst may be oriented within the scene by identifying a key frame for each burst and using common image features and anchor points among the key frames to determine a camera position for each key frame. The burst reconstructions may then be combined into a unified reconstruction, thereby generating a high-quality reconstruction of the scene.
    Type: Application
    Filed: April 4, 2017
    Publication date: August 9, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Neel Suresh Joshi, Sudipta Narayan Sinha, Minh Phuoc Vo