Patents by Inventor Fabian Andres Prada Nino

Fabian Andres Prada Nino has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240177419
    Abstract: Methods, systems, and storage media for modeling subjects in a virtual environment are disclosed. Exemplary implementations may include: receiving, from a client device, image data including at least one subject; extracting, from the image data, a face of the at least one subject and an object interacting with the face, wherein the object may be glasses worn by the subject; generating a set of face primitives based on the face, the set of face primitives comprising geometry and appearance information; generating a set of object primitives based on a set of latent codes for the object; generating an appearance model of photometric interactions between the face and the object; and rendering an avatar in the virtual environment based on the appearance model, the set of face primitives, and the set of object primitives. (An illustrative code sketch of this pipeline follows the listing below.)
    Type: Application
    Filed: November 29, 2023
    Publication date: May 30, 2024
    Inventors: Shunsuke Saito, Junxuan Li, Tomas Simon Kreuz, Jason Saragih, Shun Iwase, Timur Bagautdinov, Rohan Joshi, Fabian Andres Prada Nino, Takaaki Shiratori, Yaser Sheikh, Stephen Anthony Lombardi
  • Publication number: 20220237879
    Abstract: A method for training a real-time, direct clothing model for animating an avatar of a subject is provided. The method includes collecting multiple images of a subject, forming a three-dimensional clothing mesh and a three-dimensional body mesh based on the images of the subject, and aligning the three-dimensional clothing mesh to the three-dimensional body mesh to form a skin-clothing boundary and a garment texture. The method also includes determining a loss factor based on a predicted cloth position and garment texture and on a position and garment texture interpolated from the images of the subject, and updating a three-dimensional model including the three-dimensional clothing mesh and the three-dimensional body mesh according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to cause the system to execute the above method are also provided. (A hedged sketch of this loss and update step follows the listing below.)
    Type: Application
    Filed: January 14, 2022
    Publication date: July 28, 2022
    Inventors: Chenglei Wu, Fabian Andres Prada Nino, Timur Bagautdinov, Weipeng Xu, Jessica Hodgins, Donglai Xiang
  • Patent number: 10304244
    Abstract: In some examples, a computing device can determine synthetic meshes based on source meshes of a source mesh sequence and target meshes of a target mesh sequence. The computing device can then place the respective synthetic meshes based at least in part on a rigid transformation to define a processor-generated character. For example, the computing device can determine subsets of the mesh sequences based on a similarity criterion. The computing device can determine modified first and second meshes having a connectivity corresponding to a reference mesh. The computing device can then determine the synthetic meshes based on the modified first and second meshes. In some examples, the computing device can project source and target textures onto the synthetic mesh to provide projected source and target textures. The computing device can determine a synthetic texture registered to the synthetic mesh based on the projected source and target textures. (A hedged sketch of this mesh and texture blending follows the listing below.)
    Type: Grant
    Filed: July 8, 2016
    Date of Patent: May 28, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ming Chuang, Alvaro Collet Romea, Hugues H. Hoppe, Fabian Andres Prada Nino
  • Publication number: 20180012407
    Abstract: In some examples, a computing device can determine synthetic meshes based on source meshes of a source mesh sequence and target meshes of a target mesh sequence. The computing device can then place the respective synthetic meshes based at least in part on a rigid transformation to define a processor-generated character. For example, the computing device can determine subsets of the mesh sequences based on a similarity criterion. The computing device can determine modified first and second meshes having a connectivity corresponding to a reference mesh. The computing device can then determine the synthetic meshes based on the modified first and second meshes. In some examples, the computing device can project source and target textures onto the synthetic mesh to provide projected source and target textures. The computing device can determine a synthetic texture registered to the synthetic mesh based on the projected source and target textures.
    Type: Application
    Filed: July 8, 2016
    Publication date: January 11, 2018
    Inventors: Ming Chuang, Alvaro Collet Romea, Hugues H. Hoppe, Fabian Andres Prada Nino
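
Illustrative code sketches

The avatar pipeline in publication 20240177419 can be read as: extract the face and the worn object (for example, glasses), generate a set of face primitives and a set of object primitives decoded from latent codes, model the photometric interaction between them, and render the avatar. The sketch below is a minimal, hedged illustration of that data flow only; every class, function, and the proximity-based shading stand-in are hypothetical and are not the patent's implementation.

```python
# Hypothetical sketch of the face/object primitive pipeline (20240177419).
# All names and the shading heuristic are illustrative assumptions.
from dataclasses import dataclass

import numpy as np


@dataclass
class Primitives:
    """A set of volumetric primitives: positions plus per-primitive features."""
    centers: np.ndarray   # (N, 3) primitive centers
    features: np.ndarray  # (N, F) geometry/appearance features


def generate_face_primitives(face_image: np.ndarray, n: int = 256, f: int = 8) -> Primitives:
    # Placeholder encoder: a real system would regress these from the face crop.
    rng = np.random.default_rng(0)
    return Primitives(rng.normal(size=(n, 3)), rng.normal(size=(n, f)))


def generate_object_primitives(latent_code: np.ndarray, n: int = 64, f: int = 8) -> Primitives:
    # Placeholder decoder: tile the per-object latent code (e.g. a glasses style
    # code) into per-primitive features; a real decoder would be learned.
    rng = np.random.default_rng(1)
    return Primitives(rng.normal(size=(n, 3)), np.tile(np.resize(latent_code, f), (n, 1)))


def appearance_model(face: Primitives, obj: Primitives) -> np.ndarray:
    """Toy stand-in for a learned photometric-interaction model: darken face
    features near object primitives (e.g. shadows cast by glasses)."""
    d = np.linalg.norm(face.centers[:, None, :] - obj.centers[None, :, :], axis=-1)
    occlusion = np.exp(-d.min(axis=1))  # proximity-based shading weight per face primitive
    return face.features * (1.0 - 0.5 * occlusion)[:, None]


def render_avatar(face: Primitives, obj: Primitives) -> np.ndarray:
    shaded_face = appearance_model(face, obj)
    # A real renderer would ray-march the primitives; here we just pool features.
    return np.concatenate([shaded_face.mean(axis=0), obj.features.mean(axis=0)])


if __name__ == "__main__":
    face_crop = np.zeros((256, 256, 3))        # stand-in for the extracted face image
    glasses_code = np.array([0.3, -1.2, 0.7])  # stand-in latent code for the glasses
    face_prims = generate_face_primitives(face_crop)
    object_prims = generate_object_primitives(glasses_code)
    print(render_avatar(face_prims, object_prims).shape)
```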
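
Publication 20220237879 describes a loss factor comparing a predicted cloth position and garment texture against a position and texture interpolated from the captured images, followed by an update of the combined clothing-and-body model. The sketch below is a hedged illustration in PyTorch under the assumption of a generic differentiable model; the toy model, loss weights, and names are invented for illustration and do not reproduce the patented method.

```python
# Hypothetical sketch of the clothing loss and update step (20220237879).
# The toy model, weights, and targets are illustrative assumptions.
import torch


def clothing_loss(pred_pos, pred_tex, target_pos, target_tex,
                  w_pos: float = 1.0, w_tex: float = 0.1) -> torch.Tensor:
    """Loss factor = weighted sum of a geometry term (predicted vs. interpolated
    cloth vertex positions) and an appearance term (garment texture)."""
    pos_term = torch.mean((pred_pos - target_pos) ** 2)
    tex_term = torch.mean(torch.abs(pred_tex - target_tex))
    return w_pos * pos_term + w_tex * tex_term


class ToyClothingModel(torch.nn.Module):
    """Stand-in for the combined clothing+body model: maps a body pose vector to
    clothing vertex positions and holds a per-vertex texture color."""
    def __init__(self, n_verts: int = 1000, pose_dim: int = 72):
        super().__init__()
        self.geom = torch.nn.Linear(pose_dim, n_verts * 3)
        self.tex = torch.nn.Parameter(torch.rand(n_verts, 3))

    def forward(self, pose: torch.Tensor):
        return self.geom(pose).view(-1, 3), self.tex


if __name__ == "__main__":
    model = ToyClothingModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    pose = torch.randn(72)
    target_pos = torch.randn(1000, 3)  # stand-in for positions interpolated from the images
    target_tex = torch.rand(1000, 3)   # stand-in for garment texture sampled from the images
    for step in range(100):
        pred_pos, pred_tex = model(pose)
        loss = clothing_loss(pred_pos, pred_tex, target_pos, target_tex)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final loss: {loss.item():.4f}")
```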
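
Patent 10304244 and its published application 20180012407 describe determining synthetic meshes from source and target meshes that share a reference connectivity, placing them with a rigid transformation, and blending source and target textures projected onto the synthetic mesh. The sketch below assumes simple per-vertex and per-texel linear blends as stand-ins for those steps; the patent text quoted here does not specify these exact operations, and all function names are hypothetical.

```python
# Hypothetical sketch of synthetic mesh and texture blending (10304244 / 20180012407).
# The linear blends and the rigid placement values are illustrative assumptions.
import numpy as np


def blend_meshes(src_verts: np.ndarray, tgt_verts: np.ndarray, t: float) -> np.ndarray:
    """Per-vertex interpolation; both meshes share the reference connectivity."""
    return (1.0 - t) * src_verts + t * tgt_verts


def apply_rigid_transform(verts: np.ndarray, rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Place the synthetic mesh with a 3x3 rotation and a translation vector."""
    return verts @ rotation.T + translation


def blend_textures(src_tex: np.ndarray, tgt_tex: np.ndarray, t: float) -> np.ndarray:
    """Blend source and target textures already projected onto the synthetic mesh."""
    return np.clip((1.0 - t) * src_tex + t * tgt_tex, 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    faces = np.array([[0, 1, 2], [0, 2, 3]])        # shared (reference) connectivity
    src = rng.normal(size=(4, 3))                   # source mesh vertices
    tgt = src + rng.normal(scale=0.1, size=(4, 3))  # target mesh vertices
    synthetic = blend_meshes(src, tgt, t=0.5)

    theta = np.pi / 6  # rigid placement: rotate 30 degrees about z, lift by 1 unit
    rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                         [np.sin(theta),  np.cos(theta), 0.0],
                         [0.0,            0.0,           1.0]])
    placed = apply_rigid_transform(synthetic, rotation, np.array([0.0, 0.0, 1.0]))

    src_tex = rng.random((16, 16, 3))  # source texture projected onto the synthetic mesh
    tgt_tex = rng.random((16, 16, 3))  # target texture projected onto the synthetic mesh
    synthetic_tex = blend_textures(src_tex, tgt_tex, t=0.5)
    print(placed.shape, faces.shape, synthetic_tex.shape)
```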