Patents by Inventor Alvaro Collet Romea
Alvaro Collet Romea has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230334798
Abstract: The disclosed computer-implemented method may include acquiring, from a client device within a real-world environment, information representative of the real-world environment, and transmitting the information representative of the real-world environment to a relocalization service. The method may further include receiving, from the relocalization service, (1) an anchor point that may include a mapped position within the real-world environment, and (2) a determined position within the real-world environment of a client device relative to the mapped position of the anchor point. The method may further include sending an identifier of the anchor point to an asset management service, and obtaining, from the asset management service, a digital asset. The method may further include presenting the digital asset at a position within an artificial environment relative to the mapped position of the anchor point. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: April 20, 2023
Publication date: October 19, 2023
Inventors: Alvaro Collet Romea, Jingming Dong, Xiaoyang Gao, Jiawen Zhang, Yuheng Ren, Raul Mur Artal, Christopher Sweeney, Jakob Julian Engel
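The client-side flow the abstract describes (relocalize, resolve the anchor to an asset, present the asset relative to the anchor) can be sketched as below. The service stubs, names, and return shapes are assumptions for illustration, not the patented implementation.

```python
# Hypothetical sketch of the client flow in the abstract. The two service
# calls are stand-ins: relocalize() plays the relocalization service,
# fetch_asset() plays the asset management service.

from dataclasses import dataclass


@dataclass
class AnchorPoint:
    anchor_id: str
    mapped_position: tuple  # (x, y, z) mapped position in the environment


def relocalize(env_info):
    """Stand-in relocalization service: returns an anchor point plus the
    client's determined position relative to that anchor."""
    anchor = AnchorPoint("anchor-001", (0.0, 0.0, 0.0))
    client_pose = (1.5, 0.0, -2.0)  # client position relative to the anchor
    return anchor, client_pose


def fetch_asset(anchor_id):
    """Stand-in asset management service, keyed by the anchor identifier."""
    return {"anchor-001": "virtual_sign.glb"}.get(anchor_id)


def place_asset(env_info):
    anchor, client_pose = relocalize(env_info)
    asset = fetch_asset(anchor.anchor_id)
    # Present the asset at a position relative to the anchor's mapped
    # position (here, at the anchor itself -- an assumption); the client
    # pose lets a renderer express that position in the client's frame.
    asset_position = anchor.mapped_position
    return asset, asset_position, client_pose
```

The anchor identifier decouples the two services: the asset service never needs the map, only the id the relocalization service handed back.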
-
Patent number: 11715269
Abstract: The disclosed computer-implemented method may include acquiring, from a client device within a real-world environment, information representative of the real-world environment, and transmitting the information representative of the real-world environment to a relocalization service. The method may further include receiving, from the relocalization service, (1) an anchor point that may include a mapped position within the real-world environment, and (2) a determined position within the real-world environment of a client device relative to the mapped position of the anchor point. The method may further include sending an identifier of the anchor point to an asset management service, and obtaining, from the asset management service, a digital asset. The method may further include presenting the digital asset at a position within an artificial environment relative to the mapped position of the anchor point. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: August 23, 2021
Date of Patent: August 1, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Alvaro Collet Romea, Jingming Dong, Xiaoyang Gao, Jiawen Zhang, Yuheng Ren, Raul Mur Artal, Christopher Sweeney, Jakob Julian Engel
-
Patent number: 11546567
Abstract: The subject disclosure is directed towards a framework that is configured to allow different background-foreground segmentation modalities to contribute towards segmentation. In one aspect, pixels are processed based upon RGB background separation, chroma keying, IR background separation, current depth versus background depth, and current depth versus threshold background depth modalities. Each modality may contribute as a factor that the framework combines to determine a probability as to whether a pixel is foreground or background. The probabilities are fed into a global segmentation framework to obtain a segmented image.
Type: Grant
Filed: December 7, 2018
Date of Patent: January 3, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alvaro Collet Romea, Bao Zhang, Adam G. Kirk
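The per-pixel fusion step the abstract describes, where each modality contributes a factor toward a foreground probability, can be sketched as below. The weighted-product combination rule is an assumption; the patent combines modality factors and then feeds probabilities into a global segmentation framework, which this sketch omits.

```python
# Minimal sketch of multi-modality fusion: each modality (RGB separation,
# chroma keying, IR separation, depth tests, ...) emits a per-pixel
# foreground probability, and the factors are combined (here as a weighted
# product -- an assumption) into one normalized probability.

def fuse_foreground_probability(modality_probs, weights=None):
    """Combine per-modality foreground probabilities for a single pixel."""
    if weights is None:
        weights = [1.0] * len(modality_probs)
    fg = 1.0
    bg = 1.0
    for p, w in zip(modality_probs, weights):
        fg *= p ** w          # evidence the pixel is foreground
        bg *= (1.0 - p) ** w  # evidence the pixel is background
    return fg / (fg + bg)     # normalized foreground probability


# A pixel where RGB separation, chroma key, and a depth test all lean
# foreground fuses to a probability near 1:
p = fuse_foreground_probability([0.9, 0.8, 0.7])
```

The optional weights let stronger modalities (for this pixel or scene) dominate, which matches the framework's premise that each modality has different strengths and weaknesses.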
-
Patent number: 11132841
Abstract: The disclosed computer-implemented method may include acquiring, from a client device within a real-world environment, information representative of the real-world environment, and transmitting the information representative of the real-world environment to a relocalization service. The method may further include receiving, from the relocalization service, (1) an anchor point that may include a mapped position within the real-world environment, and (2) a determined position within the real-world environment of a client device relative to the mapped position of the anchor point. The method may further include sending an identifier of the anchor point to an asset management service, and obtaining, from the asset management service, a digital asset. The method may further include presenting the digital asset at a position within an artificial environment relative to the mapped position of the anchor point. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: November 30, 2018
Date of Patent: September 28, 2021
Assignee: Facebook Technologies, LLC
Inventors: Alvaro Collet Romea, Jingming Dong, Xiaoyang Gao, Jiawen Zhang, Yuheng Ren, Raul Mur Artal, Christopher Sweeney, Jakob Julian Engel
-
Patent number: 10796185
Abstract: In one embodiment, a method includes generating, by a device, first tracking data using a first tracking algorithm, based on first video frames associated with a scene. An augmented-reality (AR) effect may be displayed based on the first tracking data. The device may generate a first confidence score associated with the first tracking data and determine that the first confidence score is above a threshold. The device may generate, based on second video frames subsequent to the first video frames, second tracking data using the first tracking algorithm. The device may determine that an associated second confidence score is below a threshold. In response, the device may generate, based on third video frames subsequent to the second video frames, third tracking data using a second tracking algorithm different from the first. The device may then display the AR effect based on the third tracking data.
Type: Grant
Filed: November 3, 2017
Date of Patent: October 6, 2020
Assignee: Facebook, Inc.
Inventors: Alvaro Collet Romea, Tullie Murrell, Hermes Germi Pique Corchs, Krishnan Ramnath, Thomas Ward Meyer, Jiao Li, Steven Kish
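The confidence-gated switch between tracking algorithms can be sketched as below. The tracker internals are placeholders (assumptions); only the switching rule follows the abstract: keep using the first algorithm while its confidence stays above a threshold, and fall back to the second algorithm when it drops.

```python
# Sketch of confidence-gated tracker fallback. Trackers are modeled as
# callables returning (tracking_data, confidence_score); real trackers
# would consume video frames.

def choose_tracker(primary, secondary, frames, threshold=0.6):
    """Run the primary tracker; fall back to the secondary when the
    primary's confidence score drops below the threshold."""
    data, confidence = primary(frames)
    if confidence >= threshold:
        return "primary", data
    # Confidence below threshold: switch to the second tracking algorithm
    # for these (and subsequent) frames, per the abstract.
    data, _ = secondary(frames)
    return "secondary", data


# Placeholder trackers (assumptions) returning (tracking_data, confidence):
strong = lambda frames: ({"pose": "region-tracked"}, 0.9)
weak = lambda frames: ({"pose": "region-tracked"}, 0.3)
fallback = lambda frames: ({"pose": "gyro-only"}, 0.5)
```

The AR effect would then be displayed from whichever tracking data was returned, so the effect persists across the switch even when the richer algorithm loses the scene.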
-
Publication number: 20200175764
Abstract: The disclosed computer-implemented method may include acquiring, from a client device within a real-world environment, information representative of the real-world environment, and transmitting the information representative of the real-world environment to a relocalization service. The method may further include receiving, from the relocalization service, (1) an anchor point that may include a mapped position within the real-world environment, and (2) a determined position within the real-world environment of a client device relative to the mapped position of the anchor point. The method may further include sending an identifier of the anchor point to an asset management service, and obtaining, from the asset management service, a digital asset. The method may further include presenting the digital asset at a position within an artificial environment relative to the mapped position of the anchor point. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: November 30, 2018
Publication date: June 4, 2020
Inventors: Alvaro Collet Romea, Jingming Dong, Xiaoyang Gao, Jiawen Zhang, Yuheng Ren, Raul Mur Artal, Christopher Sweeney, Jakob Julian Engel
-
Patent number: 10665028
Abstract: In one embodiment, a method includes determining, using one or more location sensors of a computing device, an approximate location of the computing device; identifying a content object located within a threshold distance of the approximate location, wherein an augmented-reality map associates the content object with a stored model of a real-world object and specifies a location of the content object on or relative to the stored model of the real-world object; obtaining an image from a camera of the device; identifying, in the image, a target real-world object that matches the stored model of the real-world object; determining a content object location based on a location of the target real-world object and the location of the content object on or relative to the model of the real-world object; and displaying the content object at the content object location.
Type: Grant
Filed: August 9, 2018
Date of Patent: May 26, 2020
Assignee: Facebook, Inc.
Inventors: Matthew Adam Simari, Alvaro Collet Romea, Krishnan Kumar Ramnath
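The two-stage lookup in the abstract, first a coarse shortlist of content objects near the device's sensed location, then placement relative to the real-world object recognized in the camera image, can be sketched as below. The map schema, field names, and values are assumptions for illustration.

```python
# Sketch of location-gated AR content placement. An "AR map" here is a
# list of content objects, each carrying an approximate location (for the
# coarse shortlist) and an offset relative to its associated real-world
# object's model (for final placement).

import math


def nearby_content(ar_map, device_location, threshold_m):
    """Shortlist content objects within threshold_m of the device."""
    return [
        c for c in ar_map
        if math.dist(c["approx_location"], device_location) <= threshold_m
    ]


def place(content, target_object_location):
    """Final content location = recognized object's location plus the
    content's stored offset relative to that object's model."""
    return tuple(t + o for t, o in zip(target_object_location, content["offset"]))


# Hypothetical map with one nearby and one distant content object:
ar_map = [
    {"name": "menu-overlay", "approx_location": (0.0, 0.0), "offset": (0.5, 1.2, 0.0)},
    {"name": "far-billboard", "approx_location": (500.0, 0.0), "offset": (0.0, 3.0, 0.0)},
]
candidates = nearby_content(ar_map, (1.0, 1.0), threshold_m=50.0)
```

The shortlist keeps image matching cheap: only the models of nearby candidates need to be compared against the camera frame.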
-
Publication number: 20190379873
Abstract: The subject disclosure is directed towards a framework that is configured to allow different background-foreground segmentation modalities to contribute towards segmentation. In one aspect, pixels are processed based upon RGB background separation, chroma keying, IR background separation, current depth versus background depth, and current depth versus threshold background depth modalities. Each modality may contribute as a factor that the framework combines to determine a probability as to whether a pixel is foreground or background. The probabilities are fed into a global segmentation framework to obtain a segmented image.
Type: Application
Filed: December 7, 2018
Publication date: December 12, 2019
Inventors: Alvaro Collet Romea, Bao Zhang, Adam G. Kirk
-
Publication number: 20190371067
Abstract: In one embodiment, a method includes determining, using one or more location sensors of a computing device, an approximate location of the computing device; identifying a content object located within a threshold distance of the approximate location, wherein an augmented-reality map associates the content object with a stored model of a real-world object and specifies a location of the content object on or relative to the stored model of the real-world object; obtaining an image from a camera of the device; identifying, in the image, a target real-world object that matches the stored model of the real-world object; determining a content object location based on a location of the target real-world object and the location of the content object on or relative to the model of the real-world object; and displaying the content object at the content object location.
Type: Application
Filed: August 9, 2018
Publication date: December 5, 2019
Inventors: Matthew Adam Simari, Alvaro Collet Romea, Krishnan Kumar Ramnath
-
Patent number: 10394221
Abstract: Systems, devices, and methods are described herein for transforming three-dimensional (3D) video data into a 3D printable model. In one aspect, a method for transforming 3D video data may include receiving 3D video data indicated or selected for 3D printing. The selected portion of the 3D video data, which may include a frame of the 3D video data, may be repaired or modified to generate a 3D model that defines at least one enclosed volume. At least one of the enclosed volumes of the 3D video data may be re-oriented based on at least one capability of a target 3D printing device. In some aspects, the re-orienting may be performed to optimize at least one of a total print volume or print orientation of the at least one enclosed volume. In some aspects, the method may be performed in response to a single selection or action performed by a user.
Type: Grant
Filed: August 12, 2016
Date of Patent: August 27, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kristofer N. Iverson, Patrick John Sweeney, William Crow, Dennis Evseev, Steven Craig Sullivan, Alvaro Collet Romea, Ming Chuang, Zheng Wang, Emmett Lalish
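The re-orientation step, rotating an enclosed volume to suit a target printer's capabilities, can be sketched as below. Restricting candidates to axis permutations and ranking by print height are simplifying assumptions; the patent describes re-orienting based on printer capabilities generally, not this particular rule.

```python
# Sketch of capability-driven re-orientation: try axis permutations of an
# enclosed volume and keep one whose bounding box fits the printer's
# build volume, preferring the lowest print height.

from itertools import permutations


def bounding_box(points):
    """Axis-aligned bounding-box extents (dx, dy, dz) of a point set."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))


def best_orientation(points, printer_xyz):
    """Return an axis permutation whose bounding box fits printer_xyz,
    preferring low print height (z); None if nothing fits."""
    candidates = []
    for perm in permutations(range(3)):
        box = bounding_box([tuple(p[i] for i in perm) for p in points])
        if all(extent <= cap for extent, cap in zip(box, printer_xyz)):
            candidates.append((box[2], perm))  # rank by print height
    return min(candidates)[1] if candidates else None
```

A real pipeline would also mesh-repair the frame into watertight (enclosed) volumes first, as the abstract notes, since printers cannot slice open surfaces.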
-
Patent number: 10304244
Abstract: In some examples, a computing device can determine synthetic meshes based on source meshes of a source mesh sequence and target meshes of a target mesh sequence. The computing device can then place the respective synthetic meshes based at least in part on a rigid transformation to define a processor-generated character. For example, the computing device can determine subsets of the mesh sequences based on a similarity criterion. The computing device can determine modified first and second meshes having a connectivity corresponding to a reference mesh. The computing device can then determine the synthetic meshes based on the modified first and second meshes. In some examples, the computing device can project source and target textures onto the synthetic mesh to provide projected source and target textures. The computing device can determine a synthetic texture registered to the synthetic mesh based on the projected source and target textures.
Type: Grant
Filed: July 8, 2016
Date of Patent: May 28, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ming Chuang, Alvaro Collet Romea, Hugues H. Hoppe, Fabian Andres Prada Nino
-
Publication number: 20190138834
Abstract: In one embodiment, a method includes generating, by a device, first tracking data using a first tracking algorithm, based on first video frames associated with a scene. An augmented-reality (AR) effect may be displayed based on the first tracking data. The device may generate a first confidence score associated with the first tracking data and determine that the first confidence score is above a threshold. The device may generate, based on second video frames subsequent to the first video frames, second tracking data using the first tracking algorithm. The device may determine that an associated second confidence score is below a threshold. In response, the device may generate, based on third video frames subsequent to the second video frames, third tracking data using a second tracking algorithm different from the first. The device may then display the AR effect based on the third tracking data.
Type: Application
Filed: November 3, 2017
Publication date: May 9, 2019
Inventors: Alvaro Collet Romea, Tullie Murrell, Hermes Germi Pique Corchs, Krishnan Ramnath, Thomas Ward Meyer, Jiao Li, Steven Kish
-
Patent number: 10163247
Abstract: A computing system is configured for context-adaptive allocation of render model resources that may sacrifice some level of detail in a computational description of a 3D scene before rendering in order to accommodate resource limitations in a rendering environment, such as available processor cycles and/or bandwidth for data transmission to a processor. Such resource limitations can often preclude rendering a richly detailed 3D scene, particularly in full motion and/or in real time. An importance function describing the relative perceptual importance of elements that make up the 3D scene is utilized to enable resources to be adaptively allocated so that more resources go to visual elements of the 3D scene that have a higher perceptual importance. The rendered output may thus optimize visual fidelity for the computational description within the resource-constrained rendering environment.
Type: Grant
Filed: July 14, 2015
Date of Patent: December 25, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alvaro Collet Romea, Ming Chuang, Pat Sweeney, Steve Sullivan
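The allocation idea, spending a constrained detail budget where the importance function says it matters most, can be sketched as below. The proportional split is an assumption; the patent describes an importance function guiding adaptive allocation, not this exact formula.

```python
# Sketch of importance-weighted resource allocation: divide a total detail
# budget (e.g. a triangle or byte count) across scene elements in
# proportion to their perceptual-importance scores, so high-importance
# elements keep more detail when the budget is tight.

def allocate_budget(importance, total_budget):
    """Map {element: importance_score} to {element: budget_share}."""
    total = sum(importance.values())
    return {name: total_budget * score / total
            for name, score in importance.items()}


# A face typically carries more perceptual importance than backdrop
# geometry (scores here are illustrative assumptions):
shares = allocate_budget({"face": 0.6, "torso": 0.3, "backdrop": 0.1}, 100_000)
```

Because the split is relative, the same importance function adapts to any budget the rendering environment can afford at that moment, which is the "context-adaptive" part.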
-
Publication number: 20180046167
Abstract: Systems, devices, and methods are described herein for transforming three-dimensional (3D) video data into a 3D printable model. In one aspect, a method for transforming 3D video data may include receiving 3D video data indicated or selected for 3D printing. The selected portion of the 3D video data, which may include a frame of the 3D video data, may be repaired or modified to generate a 3D model that defines at least one enclosed volume. At least one of the enclosed volumes of the 3D video data may be re-oriented based on at least one capability of a target 3D printing device. In some aspects, the re-orienting may be performed to optimize at least one of a total print volume or print orientation of the at least one enclosed volume. In some aspects, the method may be performed in response to a single selection or action performed by a user.
Type: Application
Filed: August 12, 2016
Publication date: February 15, 2018
Inventors: Kristofer N. Iverson, Patrick John Sweeney, William Crow, Dennis Evseev, Steven Craig Sullivan, Alvaro Collet Romea, Ming Chuang, Zheng Wang, Emmett Lalish
-
Publication number: 20180012407
Abstract: In some examples, a computing device can determine synthetic meshes based on source meshes of a source mesh sequence and target meshes of a target mesh sequence. The computing device can then place the respective synthetic meshes based at least in part on a rigid transformation to define a processor-generated character. For example, the computing device can determine subsets of the mesh sequences based on a similarity criterion. The computing device can determine modified first and second meshes having a connectivity corresponding to a reference mesh. The computing device can then determine the synthetic meshes based on the modified first and second meshes. In some examples, the computing device can project source and target textures onto the synthetic mesh to provide projected source and target textures. The computing device can determine a synthetic texture registered to the synthetic mesh based on the projected source and target textures.
Type: Application
Filed: July 8, 2016
Publication date: January 11, 2018
Inventors: Ming Chuang, Alvaro Collet Romea, Hugues H. Hoppe, Fabian Andres Prada Nino
-
Patent number: 9665978
Abstract: Consistent tessellation via topology-aware surface tracking is provided, in which a series of meshes is approximated by taking one or more keyframe meshes from the series, calculating a transformation field to transform the keyframe mesh into each mesh of the series, and substituting the transformed keyframe meshes for the original meshes. The keyframe mesh may be selected based upon a scoring metric. An error measurement on the transformed keyframe exceeding a tolerance or threshold may suggest that another keyframe be selected for one or more frames in the series. The sequence of frames may be divided into a number of subsequences to permit parallel processing, including two or more recursive levels of keyframe substitution. The transformed keyframe meshes achieve more consistent tessellation of the object across the series.
Type: Grant
Filed: July 20, 2015
Date of Patent: May 30, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ming Chuang, Alvaro Collet Romea, Pat Sweeney, Steve Sullivan, Don Gillett
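The keyframe-substitution loop, reuse the current keyframe mesh until its fitting error against a frame exceeds tolerance, then promote that frame to be the new keyframe, can be sketched as below. The error function here is a toy stand-in (an assumption); in the patent the error measures the transformed keyframe against the original mesh.

```python
# Sketch of keyframe substitution for consistent tessellation: each frame
# is represented by a transformed keyframe mesh, and a new keyframe is
# started whenever the fitting error exceeds the tolerance.

def segment_by_keyframe(n_frames, err, tolerance):
    """Return, per frame, the index of the keyframe mesh representing it.

    err(keyframe_index, frame_index) measures how poorly the transformed
    keyframe fits that frame's original mesh (stand-in for the patent's
    error measurement).
    """
    keyframe = 0
    assignment = []
    for i in range(n_frames):
        if err(keyframe, i) > tolerance:
            keyframe = i  # error too high: select a new keyframe here
        assignment.append(keyframe)
    return assignment


# Toy error that grows as frames drift away from the keyframe:
drift_err = lambda k, i: abs(i - k) * 0.1
```

Because every frame in a segment shares one keyframe's connectivity, vertex correspondences (and hence texturing and compression) stay consistent across the segment; the subsequences also make the parallel processing the abstract mentions straightforward.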
-
Patent number: 9646410
Abstract: A three-dimensional (3D) scene is computationally reconstructed using a combination of plural modeling techniques. Point clouds representing an object in the 3D scene are generated by different modeling techniques, and each point is encoded with a confidence value which reflects a degree of accuracy in describing the surface of the object in the 3D scene based on strengths and weaknesses of each modeling technique. The point clouds are merged such that a point for each location on the object is selected according to the modeling technique that provides the highest confidence.
Type: Grant
Filed: June 30, 2015
Date of Patent: May 9, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alvaro Collet Romea, Steve Sullivan, Adam Kirk
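The merge rule the abstract describes, per surface location, keep the point from whichever modeling technique reports the highest confidence, can be sketched as below. Representing "locations" as dictionary keys is a simplifying assumption; a real pipeline would establish correspondences spatially.

```python
# Sketch of confidence-based point-cloud merging: several modeling
# techniques each propose a (point, confidence) pair per surface location,
# and the merged cloud keeps the highest-confidence proposal at each one.

def merge_point_clouds(clouds):
    """clouds: list of {location_key: (point, confidence)} dicts, one per
    modeling technique. Returns {location_key: point} keeping the
    highest-confidence point at each location."""
    best = {}
    for cloud in clouds:
        for key, (point, conf) in cloud.items():
            if key not in best or conf > best[key][1]:
                best[key] = (point, conf)
    return {key: point for key, (point, _) in best.items()}


# Hypothetical example: one technique is confident near silhouette edges,
# the other on textured surface regions.
merged = merge_point_clouds([
    {"edge": ((0.0, 0.1, 0.0), 0.9), "face": ((1.0, 0.0, 0.0), 0.4)},
    {"edge": ((0.0, 0.2, 0.0), 0.5), "face": ((1.0, 0.1, 0.0), 0.8)},
])
```

Encoding confidence per point lets each technique contribute only where it is strong, which is the stated rationale for combining plural modeling techniques.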
-
Publication number: 20170024930
Abstract: Consistent tessellation via topology-aware surface tracking is provided, in which a series of meshes is approximated by taking one or more keyframe meshes from the series, calculating a transformation field to transform the keyframe mesh into each mesh of the series, and substituting the transformed keyframe meshes for the original meshes. The keyframe mesh may be selected based upon a scoring metric. An error measurement on the transformed keyframe exceeding a tolerance or threshold may suggest that another keyframe be selected for one or more frames in the series. The sequence of frames may be divided into a number of subsequences to permit parallel processing, including two or more recursive levels of keyframe substitution. The transformed keyframe meshes achieve more consistent tessellation of the object across the series.
Type: Application
Filed: July 20, 2015
Publication date: January 26, 2017
Inventors: Ming Chuang, Alvaro Collet Romea, Pat Sweeney, Steve Sullivan, Don Gillett
-
Publication number: 20170018111
Abstract: A computing system is configured for context-adaptive allocation of render model resources that may sacrifice some level of detail in a computational description of a 3D scene before rendering in order to accommodate resource limitations in a rendering environment, such as available processor cycles and/or bandwidth for data transmission to a processor. Such resource limitations can often preclude rendering a richly detailed 3D scene, particularly in full motion and/or in real time. An importance function describing the relative perceptual importance of elements that make up the 3D scene is utilized to enable resources to be adaptively allocated so that more resources go to visual elements of the 3D scene that have a higher perceptual importance. The rendered output may thus optimize visual fidelity for the computational description within the resource-constrained rendering environment.
Type: Application
Filed: July 14, 2015
Publication date: January 19, 2017
Inventors: Alvaro Collet Romea, Ming Chuang, Pat Sweeney, Steve Sullivan
-
Publication number: 20170004649
Abstract: A three-dimensional (3D) scene is computationally reconstructed using a combination of plural modeling techniques. Point clouds representing an object in the 3D scene are generated by different modeling techniques, and each point is encoded with a confidence value which reflects a degree of accuracy in describing the surface of the object in the 3D scene based on strengths and weaknesses of each modeling technique. The point clouds are merged such that a point for each location on the object is selected according to the modeling technique that provides the highest confidence.
Type: Application
Filed: June 30, 2015
Publication date: January 5, 2017
Inventors: Alvaro Collet Romea, Steve Sullivan, Adam Kirk