Patents by Inventor Alvaro Collet Romea

Alvaro Collet Romea has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230334798
    Abstract: The disclosed computer-implemented method may include acquiring, from a client device within a real-world environment, information representative of the real-world environment, and transmitting the information representative of the real-world environment to a relocalization service. The method may further include receiving, from the relocalization service, (1) an anchor point that may include a mapped position within the real-world environment, and (2) a determined position within the real-world environment of the client device relative to the mapped position of the anchor point. The method may further include sending an identifier of the anchor point to an asset management service, and obtaining, from the asset management service, a digital asset. The method may further include presenting the digital asset at a position within an artificial environment relative to the mapped position of the anchor point. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: April 20, 2023
    Publication date: October 19, 2023
    Inventors: Alvaro Collet Romea, Jingming Dong, Xiaoyang Gao, Jiawen Zhang, Yuheng Ren, Raul Mur Artal, Christopher Sweeney, Jakob Julian Engel
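The flow this abstract claims — send environment data to a relocalization service, receive an anchor point plus the client's position relative to it, resolve the anchor's identifier to a digital asset, and present that asset relative to the anchor — can be sketched as follows. This is a minimal illustration only: the in-memory tables, the `AnchorPoint` fields, and all function names are hypothetical stand-ins for the claimed services, not the actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AnchorPoint:
    anchor_id: str
    mapped_position: tuple  # (x, y, z) in the mapped real-world environment

# Hypothetical in-memory stand-ins for the relocalization and asset services.
RELOCALIZATION_DB = {"room-1": AnchorPoint("anchor-42", (1.0, 0.0, 2.0))}
ASSET_DB = {"anchor-42": "virtual_lamp.glb"}

def relocalize(environment_info):
    """Return an anchor point and the client's position relative to it."""
    anchor = RELOCALIZATION_DB[environment_info["map_id"]]
    cx, cy, cz = environment_info["client_position"]
    ax, ay, az = anchor.mapped_position
    return anchor, (cx - ax, cy - ay, cz - az)

def fetch_asset(anchor_id):
    """Resolve an anchor identifier to a digital asset."""
    return ASSET_DB[anchor_id]

def place_asset(environment_info):
    """Acquire, relocalize, fetch, and present — the four claimed steps."""
    anchor, client_offset = relocalize(environment_info)
    asset = fetch_asset(anchor.anchor_id)
    # The asset is presented at the anchor's mapped position; the client
    # renders it using its own offset from that anchor.
    return {"asset": asset,
            "position": anchor.mapped_position,
            "client_offset": client_offset}
```

A caller would supply whatever environment capture the client produced; here a map identifier and a raw position stand in for that data.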
  • Patent number: 11715269
    Abstract: The disclosed computer-implemented method may include acquiring, from a client device within a real-world environment, information representative of the real-world environment, and transmitting the information representative of the real-world environment to a relocalization service. The method may further include receiving, from the relocalization service, (1) an anchor point that may include a mapped position within the real-world environment, and (2) a determined position within the real-world environment of the client device relative to the mapped position of the anchor point. The method may further include sending an identifier of the anchor point to an asset management service, and obtaining, from the asset management service, a digital asset. The method may further include presenting the digital asset at a position within an artificial environment relative to the mapped position of the anchor point. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: August 1, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Alvaro Collet Romea, Jingming Dong, Xiaoyang Gao, Jiawen Zhang, Yuheng Ren, Raul Mur Artal, Christopher Sweeney, Jakob Julian Engel
  • Patent number: 11546567
    Abstract: The subject disclosure is directed towards a framework that is configured to allow different background-foreground segmentation modalities to contribute towards segmentation. In one aspect, pixels are processed based upon RGB background separation, chroma keying, IR background separation, current depth versus background depth, and current depth versus threshold background depth modalities. Each modality may contribute as a factor that the framework combines to determine a probability as to whether a pixel is foreground or background. The probabilities are fed into a global segmentation framework to obtain a segmented image.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: January 3, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alvaro Collet Romea, Bao Zhang, Adam G. Kirk
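The fusion step this abstract describes — each modality contributing a factor toward a per-pixel foreground probability — can be sketched as a simple combiner. The weighted average and the modality names are illustrative assumptions; the patent's global segmentation framework is far more involved.

```python
def combine_modalities(pixel_scores, weights=None):
    """Combine per-modality foreground probabilities for one pixel.

    pixel_scores maps a modality name (e.g. 'rgb', 'chroma', 'ir',
    'depth_vs_background', 'depth_vs_threshold') to a probability in
    [0, 1]. A weighted average stands in for the framework's fusion.
    """
    if weights is None:
        weights = {name: 1.0 for name in pixel_scores}
    total = sum(weights[name] for name in pixel_scores)
    return sum(pixel_scores[name] * weights[name]
               for name in pixel_scores) / total

def segment(pixels, threshold=0.5):
    """Label each pixel foreground (True) or background (False)."""
    return [combine_modalities(p) >= threshold for p in pixels]
```

In the claimed system the combined probabilities feed a global solver rather than a fixed threshold; the threshold here just makes the toy example concrete.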
  • Patent number: 11132841
    Abstract: The disclosed computer-implemented method may include acquiring, from a client device within a real-world environment, information representative of the real-world environment, and transmitting the information representative of the real-world environment to a relocalization service. The method may further include receiving, from the relocalization service, (1) an anchor point that may include a mapped position within the real-world environment, and (2) a determined position within the real-world environment of the client device relative to the mapped position of the anchor point. The method may further include sending an identifier of the anchor point to an asset management service, and obtaining, from the asset management service, a digital asset. The method may further include presenting the digital asset at a position within an artificial environment relative to the mapped position of the anchor point. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: September 28, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Alvaro Collet Romea, Jingming Dong, Xiaoyang Gao, Jiawen Zhang, Yuheng Ren, Raul Mur Artal, Christopher Sweeney, Jakob Julian Engel
  • Patent number: 10796185
    Abstract: In one embodiment, a method includes generating, by a device, first tracking data using a first tracking algorithm, based on first video frames associated with a scene. An augmented-reality (AR) effect may be displayed based on the first tracking data. The device may generate a first confidence score associated with the first tracking data and determine that the first confidence score is above a threshold. The device may generate, based on second video frames subsequent to the first video frames, second tracking data using the first tracking algorithm. The device may determine that an associated second confidence score is below a threshold. In response, the device may generate, based on third video frames subsequent to the second video frames, third tracking data using a second tracking algorithm different from the first. The device may then display the AR effect based on the third tracking data.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: October 6, 2020
    Assignee: Facebook, Inc.
    Inventors: Alvaro Collet Romea, Tullie Murrell, Hermes Germi Pique Corchs, Krishnan Ramnath, Thomas Ward Meyer, Jiao Li, Steven Kish
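The confidence-driven fallback this abstract claims — track with a first algorithm while confidence stays above a threshold, then switch to a second algorithm for subsequent frames once it drops — can be sketched as below. The two tracker stubs, the threshold value, and the frame format are all hypothetical.

```python
THRESHOLD = 0.6  # illustrative confidence threshold

def primary_tracker(frame):
    """Stand-in first tracking algorithm: returns (data, confidence)."""
    return f"primary:{frame['id']}", frame["quality"]

def fallback_tracker(frame):
    """Stand-in second tracking algorithm, used after confidence drops."""
    return f"fallback:{frame['id']}", 0.9

def track(frames, threshold=THRESHOLD):
    """Per-frame tracking data, switching algorithms on low confidence."""
    results, use_fallback = [], False
    for frame in frames:
        if use_fallback:
            data, _ = fallback_tracker(frame)
        else:
            data, confidence = primary_tracker(frame)
            if confidence < threshold:
                # Confidence below threshold: subsequent frames use
                # the second tracking algorithm, as in the claim.
                use_fallback = True
        results.append(data)
    return results
```

The AR effect would be rendered from whichever tracking data each frame yields; here the returned strings stand in for that data.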
  • Publication number: 20200175764
    Abstract: The disclosed computer-implemented method may include acquiring, from a client device within a real-world environment, information representative of the real-world environment, and transmitting the information representative of the real-world environment to a relocalization service. The method may further include receiving, from the relocalization service, (1) an anchor point that may include a mapped position within the real-world environment, and (2) a determined position within the real-world environment of the client device relative to the mapped position of the anchor point. The method may further include sending an identifier of the anchor point to an asset management service, and obtaining, from the asset management service, a digital asset. The method may further include presenting the digital asset at a position within an artificial environment relative to the mapped position of the anchor point. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: November 30, 2018
    Publication date: June 4, 2020
    Inventors: Alvaro Collet Romea, Jingming Dong, Xiaoyang Gao, Jiawen Zhang, Yuheng Ren, Raul Mur Artal, Christopher Sweeney, Jakob Julian Engel
  • Patent number: 10665028
    Abstract: In one embodiment, a method includes determining, using one or more location sensors of a computing device, an approximate location of the computing device, identifying a content object located within a threshold distance of the approximate location, wherein an augmented-reality map associates the content object with a stored model of a real-world object and specifies a location of the content object on or relative to the stored model of the real-world object, obtaining an image from a camera of the device, identifying, in the image, a target real-world object that matches the stored model of the real-world object, determining a content object location based on a location of the target real-world object and the location of the content object on or relative to the model of the real-world object, and displaying the content object at the content object location.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: May 26, 2020
    Assignee: Facebook, Inc.
    Inventors: Matthew Adam Simari, Alvaro Collet Romea, Krishnan Kumar Ramnath
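Two pieces of the claimed method lend themselves to a short sketch: filtering the augmented-reality map for content objects within a threshold distance of the device, and computing a display position from a detected object's location plus the content's stored offset. The map entries and field names below are invented for illustration.

```python
import math

# Hypothetical AR map: content objects tied to stored real-world models,
# with a location for proximity search and an offset relative to the model.
AR_MAP = [
    {"content": "menu_overlay", "model": "cafe_sign",
     "location": (10.0, 5.0), "offset": (0.0, 1.0, 0.0)},
]

def nearby_content(device_location, radius):
    """Content objects within a threshold distance of the device."""
    return [entry for entry in AR_MAP
            if math.dist(device_location, entry["location"]) <= radius]

def display_position(detected_object_position, offset):
    """Place content on or relative to the detected real-world object."""
    return tuple(p + o for p, o in zip(detected_object_position, offset))
```

The image-matching step (recognizing the target real-world object in the camera frame) is elided; `detected_object_position` stands in for its output.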
  • Publication number: 20190379873
    Abstract: The subject disclosure is directed towards a framework that is configured to allow different background-foreground segmentation modalities to contribute towards segmentation. In one aspect, pixels are processed based upon RGB background separation, chroma keying, IR background separation, current depth versus background depth, and current depth versus threshold background depth modalities. Each modality may contribute as a factor that the framework combines to determine a probability as to whether a pixel is foreground or background. The probabilities are fed into a global segmentation framework to obtain a segmented image.
    Type: Application
    Filed: December 7, 2018
    Publication date: December 12, 2019
    Inventors: Alvaro Collet Romea, Bao Zhang, Adam G. Kirk
  • Publication number: 20190371067
    Abstract: In one embodiment, a method includes determining, using one or more location sensors of a computing device, an approximate location of the computing device, identifying a content object located within a threshold distance of the approximate location, wherein an augmented-reality map associates the content object with a stored model of a real-world object and specifies a location of the content object on or relative to the stored model of the real-world object, obtaining an image from a camera of the device, identifying, in the image, a target real-world object that matches the stored model of the real-world object, determining a content object location based on a location of the target real-world object and the location of the content object on or relative to the model of the real-world object, and displaying the content object at the content object location.
    Type: Application
    Filed: August 9, 2018
    Publication date: December 5, 2019
    Inventors: Matthew Adam Simari, Alvaro Collet Romea, Krishnan Kumar Ramnath
  • Patent number: 10394221
    Abstract: Systems, devices, and methods are described herein for transforming three dimensional (3D) video data into a 3D printable model. In one aspect, a method for transforming 3D video data may include receiving 3D video data indicated or selected for 3D printing. The selected portion of the 3D video data, which may include a frame of the 3D video data, may be repaired or modified to generate a 3D model that defines at least one enclosed volume. At least one of the enclosed volumes of the 3D video data may be re-oriented based on at least one capability of a target 3D printing device. In some aspects, the re-orienting may be performed to optimize at least one of a total print volume or print orientation of the at least one enclosed volume. In some aspects, the method may be performed in response to a single selection or action performed by a user.
    Type: Grant
    Filed: August 12, 2016
    Date of Patent: August 27, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kristofer N. Iverson, Patrick John Sweeney, William Crow, Dennis Evseev, Steven Craig Sullivan, Alvaro Collet Romea, Ming Chuang, Zheng Wang, Emmett Lalish
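The two claimed transformation steps — repairing a selected frame so it defines an enclosed (watertight) volume, and re-orienting it to fit a target printer's capability — can be sketched as below. The mesh representation (a bounding box plus a watertight flag) and the height-based capability check are toy assumptions; real repair closes holes in the surface geometry.

```python
def repair_frame(mesh):
    """Stand-in repair: mark the mesh as defining an enclosed volume."""
    return {**mesh, "watertight": True}

def reorient(mesh, printer_max_height):
    """Lay the model down if its height exceeds the printer's capability."""
    w, h, d = mesh["bbox"]
    if h > printer_max_height:
        w, h = h, w  # rotate so the tallest axis lies flat on the bed
    return {**mesh, "bbox": (w, h, d)}

def to_printable(frame_mesh, printer_max_height=20.0):
    """Single-action pipeline: repair, then re-orient for the printer."""
    return reorient(repair_frame(frame_mesh), printer_max_height)
```

The single `to_printable` call mirrors the claim that the whole method may run in response to one user action.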
  • Patent number: 10304244
    Abstract: In some examples, a computing device can determine synthetic meshes based on source meshes of a source mesh sequence and target meshes of a target mesh sequence. The computing device can then place the respective synthetic meshes based at least in part on a rigid transformation to define a processor-generated character. For example, the computing device can determine subsets of the mesh sequences based on a similarity criterion. The computing device can determine modified first and second meshes having a connectivity corresponding to a reference mesh. The computing device can then determine the synthetic meshes based on the modified first and second meshes. In some examples, the computing device can project source and target textures onto the synthetic mesh to provide projected source and target textures. The computing device can determine a synthetic texture registered to the synthetic mesh based on the projected source and target textures.
    Type: Grant
    Filed: July 8, 2016
    Date of Patent: May 28, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ming Chuang, Alvaro Collet Romea, Hugues H. Hoppe, Fabian Andres Prada Nino
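The core idea — bring source and target meshes onto a shared reference connectivity, then blend them into a synthetic mesh — can be sketched as a vertex interpolation. The naive index-based resampling below is only a placeholder for the surface correspondence the patent describes, and all names are invented.

```python
def remesh_to_reference(mesh, reference_len):
    """Stand-in for re-parameterizing a mesh to reference connectivity.

    Real systems compute surface correspondence; here vertices are
    naively resampled by index to the reference vertex count.
    """
    verts = mesh["vertices"]
    return [verts[min(i, len(verts) - 1)] for i in range(reference_len)]

def synthesize(source, target, t, reference_len=4):
    """Blend modified source/target meshes that share connectivity.

    t in [0, 1] moves the synthetic mesh from source (0) to target (1).
    """
    s = remesh_to_reference(source, reference_len)
    g = remesh_to_reference(target, reference_len)
    return [tuple((1 - t) * a + t * b for a, b in zip(p, q))
            for p, q in zip(s, g)]
```

Texture synthesis (projecting source and target textures onto the synthetic mesh) would follow the same pattern on texel values rather than vertex positions.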
  • Publication number: 20190138834
    Abstract: In one embodiment, a method includes generating, by a device, first tracking data using a first tracking algorithm, based on first video frames associated with a scene. An augmented-reality (AR) effect may be displayed based on the first tracking data. The device may generate a first confidence score associated with the first tracking data and determine that the first confidence score is above a threshold. The device may generate, based on second video frames subsequent to the first video frames, second tracking data using the first tracking algorithm. The device may determine that an associated second confidence score is below a threshold. In response, the device may generate, based on third video frames subsequent to the second video frames, third tracking data using a second tracking algorithm different from the first. The device may then display the AR effect based on the third tracking data.
    Type: Application
    Filed: November 3, 2017
    Publication date: May 9, 2019
    Inventors: Alvaro Collet Romea, Tullie Murrell, Hermes Germi Pique Corchs, Krishnan Ramnath, Thomas Ward Meyer, Jiao Li, Steven Kish
  • Patent number: 10163247
    Abstract: A computing system is configured for context-adaptive allocation of render model resources that may sacrifice some level of detail in a computational description of a 3D scene before rendering in order to accommodate resource limitations in a rendering environment such as available processor cycles and/or bandwidth for data transmission to a processor. Such resource limitations can often preclude rendering a richly detailed 3D scene, particularly in full-motion and/or in real time. An importance function describing the relative perceptual importance of elements that make up the 3D scene is utilized to enable resources to be adaptively allocated so that more resources go to visual elements of the 3D scene that have a higher perceptual importance. The rendered output may thus optimize visual fidelity for the computational description within the resource constrained rendering environment.
    Type: Grant
    Filed: July 14, 2015
    Date of Patent: December 25, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alvaro Collet Romea, Ming Chuang, Pat Sweeney, Steve Sullivan
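The importance-function idea reduces, in its simplest form, to splitting a fixed render budget across scene elements in proportion to their perceptual importance. The sketch below assumes a proportional split; the patent's allocation is context-adaptive and need not be linear.

```python
def allocate_budget(elements, total_budget):
    """Split a render budget across scene elements by importance.

    elements maps an element name to its importance weight (the value
    of the importance function for that element); returns each
    element's share of total_budget, e.g. a triangle or byte budget.
    """
    total_weight = sum(elements.values())
    return {name: total_budget * weight / total_weight
            for name, weight in elements.items()}
```

For example, weighting a face three times as heavily as the background sends three quarters of the budget to the face, which is the sense in which higher perceptual importance attracts more resources.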
  • Publication number: 20180046167
    Abstract: Systems, devices, and methods are described herein for transforming three dimensional (3D) video data into a 3D printable model. In one aspect, a method for transforming 3D video data may include receiving 3D video data indicated or selected for 3D printing. The selected portion of the 3D video data, which may include a frame of the 3D video data, may be repaired or modified to generate a 3D model that defines at least one enclosed volume. At least one of the enclosed volumes of the 3D video data may be re-oriented based on at least one capability of a target 3D printing device. In some aspects, the re-orienting may be performed to optimize at least one of a total print volume or print orientation of the at least one enclosed volume. In some aspects, the method may be performed in response to a single selection or action performed by a user.
    Type: Application
    Filed: August 12, 2016
    Publication date: February 15, 2018
    Inventors: Kristofer N. Iverson, Patrick John Sweeney, William Crow, Dennis Evseev, Steven Craig Sullivan, Alvaro Collet Romea, Ming Chuang, Zheng Wang, Emmett Lalish
  • Publication number: 20180012407
    Abstract: In some examples, a computing device can determine synthetic meshes based on source meshes of a source mesh sequence and target meshes of a target mesh sequence. The computing device can then place the respective synthetic meshes based at least in part on a rigid transformation to define a processor-generated character. For example, the computing device can determine subsets of the mesh sequences based on a similarity criterion. The computing device can determine modified first and second meshes having a connectivity corresponding to a reference mesh. The computing device can then determine the synthetic meshes based on the modified first and second meshes. In some examples, the computing device can project source and target textures onto the synthetic mesh to provide projected source and target textures. The computing device can determine a synthetic texture registered to the synthetic mesh based on the projected source and target textures.
    Type: Application
    Filed: July 8, 2016
    Publication date: January 11, 2018
    Inventors: Ming Chuang, Alvaro Collet Romea, Hugues H. Hoppe, Fabian Andres Prada Nino
  • Patent number: 9665978
    Abstract: Consistent tessellation via topology-aware surface tracking is provided in which a series of meshes is approximated by taking one or more keyframe meshes from the series, calculating a transformation field to transform each keyframe mesh into each mesh of the series, and substituting the transformed keyframe meshes for the original meshes. The keyframe mesh may be selected based upon a scoring metric. An error measurement on the transformed keyframe that exceeds a tolerance or threshold may suggest that another keyframe be selected for one or more frames in the series. The sequence of frames may be divided into a number of subsequences to permit parallel processing, including two or more recursive levels of keyframe substitution. The transformed keyframe meshes achieve more consistent tessellation of the object across the series.
    Type: Grant
    Filed: July 20, 2015
    Date of Patent: May 30, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ming Chuang, Alvaro Collet Romea, Pat Sweeney, Steve Sullivan, Don Gillett
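The keyframe-substitution logic — score-based keyframe selection, then re-selection whenever the transformed keyframe's fit error exceeds a tolerance — can be sketched with toy frames. The scalar "shape" value and the absolute-difference error are placeholders for the real transformation field and its error measurement.

```python
def choose_keyframe(frames):
    """Pick the best-scoring mesh in the series as the keyframe."""
    return max(frames, key=lambda f: f["score"])

def track(frames, tolerance):
    """Assign each frame the keyframe it will be deformed from.

    The transformation field itself is elided; a scalar difference
    stands in for the fit error of the transformed keyframe. A new
    keyframe is chosen whenever that error exceeds tolerance.
    """
    key = choose_keyframe(frames)
    assignment = []
    for frame in frames:
        error = abs(frame["shape"] - key["shape"])  # toy fit error
        if error > tolerance:
            key = frame  # re-select: this frame becomes the new keyframe
        assignment.append(key["id"])
    return assignment
```

Splitting the series into subsequences and running this per subsequence would give the parallel, recursive variant the abstract mentions.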
  • Patent number: 9646410
    Abstract: A three-dimensional (3D) scene is computationally reconstructed using a combination of plural modeling techniques. Point clouds representing an object in the 3D scene are generated by different modeling techniques, and each point is encoded with a confidence value which reflects a degree of accuracy in describing the surface of the object in the 3D scene based on strengths and weaknesses of each modeling technique. The point clouds are merged such that a point for each location on the object is selected according to the modeling technique that provides the highest confidence.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: May 9, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alvaro Collet Romea, Steve Sullivan, Adam Kirk
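The merge rule — for each surface location, keep the point produced by the modeling technique with the highest confidence — can be sketched directly. Representing a surface location as a dictionary key is an assumption; in practice correspondence between clouds must itself be established.

```python
def merge_point_clouds(clouds):
    """Merge per-technique point clouds, keeping the most confident point.

    Each cloud maps a surface-location key to (point, confidence).
    For every location, the point from the technique with the highest
    confidence wins.
    """
    best = {}
    for cloud in clouds:
        for loc, (point, confidence) in cloud.items():
            if loc not in best or confidence > best[loc][1]:
                best[loc] = (point, confidence)
    return {loc: point for loc, (point, _) in best.items()}
```

With a stereo-derived cloud confident near textured regions and a silhouette-derived cloud confident near edges, the merged cloud takes the stronger estimate at each location.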
  • Publication number: 20170024930
    Abstract: Consistent tessellation via topology-aware surface tracking is provided in which a series of meshes is approximated by taking one or more keyframe meshes from the series, calculating a transformation field to transform each keyframe mesh into each mesh of the series, and substituting the transformed keyframe meshes for the original meshes. The keyframe mesh may be selected based upon a scoring metric. An error measurement on the transformed keyframe that exceeds a tolerance or threshold may suggest that another keyframe be selected for one or more frames in the series. The sequence of frames may be divided into a number of subsequences to permit parallel processing, including two or more recursive levels of keyframe substitution. The transformed keyframe meshes achieve more consistent tessellation of the object across the series.
    Type: Application
    Filed: July 20, 2015
    Publication date: January 26, 2017
    Inventors: Ming Chuang, Alvaro Collet Romea, Pat Sweeney, Steve Sullivan, Don Gillett
  • Publication number: 20170018111
    Abstract: A computing system is configured for context-adaptive allocation of render model resources that may sacrifice some level of detail in a computational description of a 3D scene before rendering in order to accommodate resource limitations in a rendering environment such as available processor cycles and/or bandwidth for data transmission to a processor. Such resource limitations can often preclude rendering a richly detailed 3D scene, particularly in full-motion and/or in real time. An importance function describing the relative perceptual importance of elements that make up the 3D scene is utilized to enable resources to be adaptively allocated so that more resources go to visual elements of the 3D scene that have a higher perceptual importance. The rendered output may thus optimize visual fidelity for the computational description within the resource constrained rendering environment.
    Type: Application
    Filed: July 14, 2015
    Publication date: January 19, 2017
    Inventors: Alvaro Collet Romea, Ming Chuang, Pat Sweeney, Steve Sullivan
  • Publication number: 20170004649
    Abstract: A three-dimensional (3D) scene is computationally reconstructed using a combination of plural modeling techniques. Point clouds representing an object in the 3D scene are generated by different modeling techniques, and each point is encoded with a confidence value which reflects a degree of accuracy in describing the surface of the object in the 3D scene based on strengths and weaknesses of each modeling technique. The point clouds are merged such that a point for each location on the object is selected according to the modeling technique that provides the highest confidence.
    Type: Application
    Filed: June 30, 2015
    Publication date: January 5, 2017
    Inventors: Alvaro Collet Romea, Steve Sullivan, Adam Kirk