Patents by Inventor Albert Parra Pozo
Albert Parra Pozo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11967014
Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations to convert from the orientation the images were taken from to a viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sending/receiving systems and, in some implementations, can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
Type: Grant
Filed: May 3, 2023
Date of Patent: April 23, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Brian Keith Cabral, Albert Parra Pozo
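The staged pipeline this abstract describes can be sketched as a registry of named stages applied in order, with uncustomized stages passing data through unchanged. All names below (Pipeline, STAGES, the stage functions) are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Stage names as listed in the abstract's pipeline description.
STAGES = ["calibrate", "capture", "tag_and_filter", "compress",
          "decompress", "reconstruct", "render", "display"]

@dataclass
class Pipeline:
    # Maps a stage name to the function that transforms the data at that stage.
    stages: dict = field(default_factory=dict)

    def register(self, name: str, fn: Callable[[Any], Any]) -> None:
        if name not in STAGES:
            raise ValueError(f"unknown stage: {name}")
        self.stages[name] = fn

    def run(self, frame: Any) -> Any:
        # Apply every stage in order; unregistered stages are the identity.
        data = frame
        for name in STAGES:
            data = self.stages.get(name, lambda x: x)(data)
        return data

pipeline = Pipeline()
pipeline.register("compress", lambda d: {"compressed": d})
pipeline.register("decompress", lambda d: d["compressed"])
result = pipeline.run({"image": "sender_frame"})
```

Because compress and decompress are inverses here, the frame survives the round trip unchanged; in a real system each stage would transform the intermediate representation.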
-
Publication number: 20230267675
Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations to convert from the orientation the images were taken from to a viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sending/receiving systems and, in some implementations, can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
Type: Application
Filed: May 3, 2023
Publication date: August 24, 2023
Inventors: Brian Keith Cabral, Albert Parra Pozo
-
Patent number: 11676330
Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations to convert from the orientation the images were taken from to a viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sending/receiving systems and, in some implementations, can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
Type: Grant
Filed: March 7, 2022
Date of Patent: June 13, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Brian Keith Cabral, Albert Parra Pozo
-
Publication number: 20220413434
Abstract: A holographic calling system can capture and encode holographic data on the sender side of a holographic calling pipeline and decode and present the holographic data as a 3D representation of the sender on the receiver side of the pipeline. The holographic calling pipeline can include stages to capture audio, color images, and depth images; densify the depth images to have a depth value for each pixel while generating parts masks and a body model; use the masks to segment the images into parts needed for hologram generation; convert depth images into a 3D mesh; paint the 3D mesh with color data; perform torso disocclusion; perform face reconstruction; and perform audio synchronization. In various implementations, different ones of these stages can be performed sender-side or receiver-side. The holographic calling pipeline also includes sender-side compression, transmission over a communication channel, and receiver-side decompression and hologram output.
Type: Application
Filed: June 28, 2021
Publication date: December 29, 2022
Inventors: Albert Parra Pozo, Joseph Virskus, Ganesh Venkatesh, Kai Li, Shen-Chi Chen, Amit Kumar, Rakesh Ranjan, Brian Keith Cabral, Samuel Alan Johnson, Wei Ye, Michael Alexander Snower, Yash Patel
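The densification stage (giving every depth pixel a value) can be illustrated with a toy scanline fill. The function name and fill strategy here are assumptions for illustration only, not the patented method:

```python
def densify_depth(depth_rows):
    """Fill missing depth samples (None) so every pixel has a depth value.

    Toy stand-in for the densification stage: each missing pixel takes the
    nearest valid value earlier on its row, falling back to the row mean
    when a row begins with a gap.
    """
    out = []
    for row in depth_rows:
        valid = [d for d in row if d is not None]
        fallback = sum(valid) / len(valid) if valid else 0.0
        filled, last = [], None
        for d in row:
            if d is None:
                d = last if last is not None else fallback
            filled.append(d)
            last = d
        out.append(filled)
    return out

# Two rows with holes: the first fills from the left neighbor,
# the second fills its leading gap from the row mean.
dense = densify_depth([[1.0, None, 3.0], [None, 2.0, None]])
```

A production densifier would use a learned or spatially aware fill (the abstract also mentions producing parts masks and a body model alongside it); the point here is only that the output has a depth per pixel.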
-
Publication number: 20220413433
Abstract: A holographic calling system can capture and encode holographic data on the sender side of a holographic calling pipeline and decode and present the holographic data as a 3D representation of the sender on the receiver side of the pipeline. The holographic calling pipeline can include stages to capture audio, color images, and depth images; densify the depth images to have a depth value for each pixel while generating parts masks and a body model; use the masks to segment the images into parts needed for hologram generation; convert depth images into a 3D mesh; paint the 3D mesh with color data; perform torso disocclusion; perform face reconstruction; and perform audio synchronization. In various implementations, different ones of these stages can be performed sender-side or receiver-side. The holographic calling pipeline also includes sender-side compression, transmission over a communication channel, and receiver-side decompression and hologram output.
Type: Application
Filed: June 28, 2021
Publication date: December 29, 2022
Inventors: Albert Parra Pozo, Joseph Virskus, Ganesh Venkatesh, Kai Li, Shen-Chi Chen, Amit Kumar, Rakesh Ranjan, Brian Keith Cabral, Samuel Alan Johnson, Wei Ye, Michael Alexander Snower, Yash Patel
-
Patent number: 11461962
Abstract: A holographic calling system can capture and encode holographic data on the sender side of a holographic calling pipeline and decode and present the holographic data as a 3D representation of the sender on the receiver side of the pipeline. The holographic calling pipeline can include stages to capture audio, color images, and depth images; densify the depth images to have a depth value for each pixel while generating parts masks and a body model; use the masks to segment the images into parts needed for hologram generation; convert depth images into a 3D mesh; paint the 3D mesh with color data; perform torso disocclusion; perform face reconstruction; and perform audio synchronization. In various implementations, different ones of these stages can be performed sender-side or receiver-side. The holographic calling pipeline also includes sender-side compression, transmission over a communication channel, and receiver-side decompression and hologram output.
Type: Grant
Filed: June 28, 2021
Date of Patent: October 4, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: Albert Parra Pozo, Joseph Virskus, Ganesh Venkatesh, Kai Li, Shen-Chi Chen, Amit Kumar, Rakesh Ranjan, Brian Keith Cabral, Samuel Alan Johnson, Wei Ye, Michael Alexander Snower, Yash Patel
-
Publication number: 20220189105
Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations to convert from the orientation the images were taken from to a viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sending/receiving systems and, in some implementations, can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
Type: Application
Filed: March 7, 2022
Publication date: June 16, 2022
Inventors: Brian Keith Cabral, Albert Parra Pozo
-
Patent number: 11302063
Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations to convert from the orientation the images were taken from to a viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sending/receiving systems and, in some implementations, can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
Type: Grant
Filed: July 21, 2020
Date of Patent: April 12, 2022
Assignee: Facebook Technologies, LLC
Inventors: Brian Keith Cabral, Albert Parra Pozo
-
Publication number: 20220028157
Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations to convert from the orientation the images were taken from to a viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sending/receiving systems and, in some implementations, can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
Type: Application
Filed: July 21, 2020
Publication date: January 27, 2022
Inventors: Brian Keith Cabral, Albert Parra Pozo
-
Patent number: 10623718
Abstract: A camera calibration system jointly calibrates multiple cameras in a camera rig system. The camera calibration system obtains configuration information about the multiple cameras in the camera rig system, such as the position and orientation of each camera relative to the other cameras. The camera calibration system estimates calibration parameters (e.g., rotation and translation) for the multiple cameras based on the obtained configuration information. The camera calibration system receives 2D images of a test object captured by the multiple cameras and obtains known information about the test object, such as its location, size, texture, and visually distinguishable points. The camera calibration system then generates a 3D model of the test object based on the received 2D images and the estimated calibration parameters. The generated 3D model is compared with the actual test object to determine a calibration error.
Type: Grant
Filed: November 29, 2018
Date of Patent: April 14, 2020
Assignee: Facebook, Inc.
Inventors: Brian Keith Cabral, Albert Parra Pozo, Forrest Samuel Briggs, Joyce Hsu
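The final evaluation step, comparing the generated 3D model against the known test object, reduces in the simplest case to a point-to-point distance. A minimal sketch, where the choice of metric (mean Euclidean distance over corresponding points) is an assumption for illustration rather than the patented error measure:

```python
import math

def calibration_error(reconstructed_pts, reference_pts):
    """Mean Euclidean distance between corresponding 3D points of the
    reconstructed model and the known test object, as a scalar error."""
    if len(reconstructed_pts) != len(reference_pts):
        raise ValueError("point lists must correspond one-to-one")
    total = sum(math.dist(p, q)
                for p, q in zip(reconstructed_pts, reference_pts))
    return total / len(reference_pts)

# Two reconstructed points, each 0.1 units off along z.
err = calibration_error([(0, 0, 0), (1, 0, 0)],
                        [(0, 0, 0.1), (1, 0, -0.1)])
```

A lower error indicates the estimated rotation/translation parameters reproduce the test object's known geometry more faithfully; the calibration loop would adjust parameters to drive this number down.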
-
Patent number: 10257501
Abstract: A canvas generation system generates a canvas view of a scene based on a set of original camera views depicting the scene, for example to recreate a scene in virtual reality. Canvas views can be generated based on a set of synthetic views generated from a set of original camera views. Synthetic views can be generated, for example, by shifting and blending relevant original camera views based on an optical flow across multiple original camera views. An optical flow can be generated using an iterative method which individually optimizes the optical flow vector for each pixel of a camera view and propagates changes in the optical flow to neighboring optical flow vectors.
Type: Grant
Filed: April 11, 2016
Date of Patent: April 9, 2019
Assignee: Facebook, Inc.
Inventors: Brian Keith Cabral, Forrest Samuel Briggs, Albert Parra Pozo, Peter Vajda
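The per-pixel optimize-and-propagate idea can be shown on a 1D row of scalar flow values: each pixel is pulled toward its own data term while neighbor averaging propagates changes along the row. This toy version (names, weights, and the scalar simplification are all assumptions) mirrors only the structure of the iteration, not the actual matching objective:

```python
def refine_flow(flow, data_term, weight=0.5, iterations=10):
    """Iteratively update each pixel's flow value toward its data term
    while a neighbor average propagates changes, Gauss-Seidel style:
    updated values are reused immediately within the same sweep."""
    flow = list(flow)
    for _ in range(iterations):
        for i in range(len(flow)):
            neighbours = [flow[j] for j in (i - 1, i + 1)
                          if 0 <= j < len(flow)]
            smooth = sum(neighbours) / len(neighbours)
            flow[i] = weight * data_term[i] + (1 - weight) * smooth
    return flow

# With a constant data term, the flow field converges to that constant.
refined = refine_flow([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], iterations=50)
```

In the 2D case each pixel's flow is a vector, the data term comes from image matching costs, and the same propagation couples each update to its four or eight neighbors.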
-
Publication number: 20190098287
Abstract: A camera calibration system jointly calibrates multiple cameras in a camera rig system. The camera calibration system obtains configuration information about the multiple cameras in the camera rig system, such as the position and orientation of each camera relative to the other cameras. The camera calibration system estimates calibration parameters (e.g., rotation and translation) for the multiple cameras based on the obtained configuration information. The camera calibration system receives 2D images of a test object captured by the multiple cameras and obtains known information about the test object, such as its location, size, texture, and visually distinguishable points. The camera calibration system then generates a 3D model of the test object based on the received 2D images and the estimated calibration parameters. The generated 3D model is compared with the actual test object to determine a calibration error.
Type: Application
Filed: November 29, 2018
Publication date: March 28, 2019
Inventors: Brian Keith Cabral, Albert Parra Pozo, Forrest Samuel Briggs, Joyce Hsu
-
Patent number: 10230904
Abstract: A camera system is configured to capture 360 degree image information of a local area, at least a portion of which is in stereo. The camera system includes a plurality of peripheral cameras, a plurality of axis cameras, a first rigid plate, and a second rigid plate, each aligned along an alignment axis. The peripheral cameras are arranged in a ring configuration that allows objects in the local area past a threshold distance to be within the fields of view of at least two peripheral cameras. The first and second rigid plates secure to a top and a bottom surface of the ring of peripheral cameras, respectively. At least one axis camera is arranged along the alignment axis and is coupled perpendicularly to a surface of the first rigid plate.
Type: Grant
Filed: April 11, 2016
Date of Patent: March 12, 2019
Assignee: Facebook, Inc.
Inventors: Brian Keith Cabral, Forrest Samuel Briggs, Joyce Hsu, Albert Parra Pozo, Andrew Hamilton Coward
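The overlap property of the ring (objects beyond a threshold distance fall within at least two peripheral cameras' fields of view) can be checked geometrically in 2D by counting which outward-facing cameras contain a point in their view cone. The rig parameters below (14 cameras, 0.15 m radius, 77 degree field of view) are illustrative assumptions, not the patented values:

```python
import math

def cameras_seeing(point, n_cameras=14, radius=0.15, fov_deg=77.0):
    """Count the ring cameras (evenly spaced on a circle, facing
    radially outward) whose horizontal field of view contains `point`."""
    half_fov = math.radians(fov_deg) / 2
    count = 0
    for k in range(n_cameras):
        theta = 2 * math.pi * k / n_cameras
        cam = (radius * math.cos(theta), radius * math.sin(theta))
        forward = (math.cos(theta), math.sin(theta))  # outward direction
        dx, dy = point[0] - cam[0], point[1] - cam[1]
        norm = math.hypot(dx, dy)
        if norm == 0:
            continue
        # Point is visible if its bearing is within half the FOV
        # of the camera's forward direction.
        cos_angle = (dx * forward[0] + dy * forward[1]) / norm
        if cos_angle >= math.cos(half_fov):
            count += 1
    return count
```

With these numbers, a point 10 m out is covered by several adjacent cameras (enabling stereo), while a point a few centimeters past the ring is seen by only one, which is exactly the threshold-distance behavior the abstract describes.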
-
Patent number: 10200624
Abstract: A camera system is configured to capture, via a plurality of cameras, 360 degree image information of a local area, at least a portion of which is in stereo. The camera system determines respective exposure settings for the plurality of cameras. A minimum shutter speed and a maximum shutter speed are determined from the determined exposure settings. A set of test exposure settings is determined using the determined minimum shutter speed and maximum shutter speed. A set of test images is captured using the plurality of cameras at each test exposure setting in the set of test exposure settings. Each set of test images includes images from each of the plurality of cameras that are captured using a same respective test exposure setting. A global exposure setting is selected based on the captured sets of test images. The selected global exposure setting is applied to the plurality of cameras.
Type: Grant
Filed: April 11, 2016
Date of Patent: February 5, 2019
Assignee: Facebook, Inc.
Inventors: Brian Keith Cabral, Albert Parra Pozo, Forrest Samuel Briggs
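The selection procedure described here (span a set of test settings between the slowest and fastest per-camera shutter speeds, then keep the best-scoring one) can be sketched as follows. The scoring function is a placeholder standing in for evaluating the captured test image sets, and the default shown (prefer the midpoint) is purely illustrative:

```python
def select_global_exposure(per_camera_shutters, n_tests=5, score=None):
    """Pick one shutter speed to apply to all cameras.

    Builds `n_tests` evenly spaced candidate settings between the
    minimum and maximum of the per-camera exposure choices, then
    returns the candidate with the highest score. `score` rates a
    candidate (higher is better) and stands in for judging the test
    images captured at that setting.
    """
    lo, hi = min(per_camera_shutters), max(per_camera_shutters)
    if n_tests == 1 or lo == hi:
        candidates = [lo]
    else:
        step = (hi - lo) / (n_tests - 1)
        candidates = [lo + i * step for i in range(n_tests)]
    if score is None:
        # Placeholder scorer: prefer the setting nearest the midpoint.
        score = lambda s: -abs(s - (lo + hi) / 2)
    return max(candidates, key=score)

# Three cameras metered 1/120 s, 1/250 s, and 1/500 s individually.
setting = select_global_exposure([1 / 120, 1 / 250, 1 / 500])
```

Applying one global setting sacrifices per-camera optimality but keeps brightness consistent across the ring, which matters when neighboring views are later blended into a single 360 degree image.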
-
Patent number: 10187629
Abstract: A camera calibration system jointly calibrates multiple cameras in a camera rig system. The camera calibration system obtains configuration information about the multiple cameras in the camera rig system, such as the position and orientation of each camera relative to the other cameras. The camera calibration system estimates calibration parameters (e.g., rotation and translation) for the multiple cameras based on the obtained configuration information. The camera calibration system receives 2D images of a test object captured by the multiple cameras and obtains known information about the test object, such as its location, size, texture, and visually distinguishable points. The camera calibration system then generates a 3D model of the test object based on the received 2D images and the estimated calibration parameters. The generated 3D model is compared with the actual test object to determine a calibration error.
Type: Grant
Filed: April 11, 2016
Date of Patent: January 22, 2019
Assignee: Facebook, Inc.
Inventors: Brian Keith Cabral, Albert Parra Pozo, Forrest Samuel Briggs, Joyce Hsu
-
Patent number: 10165258
Abstract: A canvas generation system generates a canvas view of a scene based on a set of original camera views depicting the scene, for example to recreate a scene in virtual reality. Canvas views can be generated based on a set of synthetic views generated from a set of original camera views. Synthetic views can be generated, for example, by shifting and blending relevant original camera views based on an optical flow across multiple original camera views. An optical flow can be generated using an iterative method which individually optimizes the optical flow vector for each pixel of a camera view and propagates changes in the optical flow to neighboring optical flow vectors.
Type: Grant
Filed: April 11, 2016
Date of Patent: December 25, 2018
Assignee: Facebook, Inc.
Inventors: Brian Keith Cabral, Forrest Samuel Briggs, Albert Parra Pozo, Peter Vajda
-
Patent number: 10057562
Abstract: A canvas generation system generates a canvas view of a scene based on a set of original camera views depicting the scene, for example to recreate a scene in virtual reality. Canvas views can be generated based on a set of synthetic views generated from a set of original camera views. Synthetic views can be generated, for example, by shifting and blending relevant original camera views based on an optical flow across multiple original camera views. An optical flow can be generated using an iterative method which individually optimizes the optical flow vector for each pixel of a camera view and propagates changes in the optical flow to neighboring optical flow vectors.
Type: Grant
Filed: April 11, 2016
Date of Patent: August 21, 2018
Assignee: Facebook, Inc.
Inventors: Brian Keith Cabral, Forrest Samuel Briggs, Albert Parra Pozo, Peter Vajda
-
Publication number: 20170295309
Abstract: A camera system is configured to capture, via a plurality of cameras, 360 degree image information of a local area, at least a portion of which is in stereo. The camera system determines respective exposure settings for the plurality of cameras. A minimum shutter speed and a maximum shutter speed are determined from the determined exposure settings. A set of test exposure settings is determined using the determined minimum shutter speed and maximum shutter speed. A set of test images is captured using the plurality of cameras at each test exposure setting in the set of test exposure settings. Each set of test images includes images from each of the plurality of cameras that are captured using a same respective test exposure setting. A global exposure setting is selected based on the captured sets of test images. The selected global exposure setting is applied to the plurality of cameras.
Type: Application
Filed: April 11, 2016
Publication date: October 12, 2017
Inventors: Brian Keith Cabral, Albert Parra Pozo, Forrest Samuel Briggs
-
Publication number: 20170295358
Abstract: A camera calibration system jointly calibrates multiple cameras in a camera rig system. The camera calibration system obtains configuration information about the multiple cameras in the camera rig system, such as the position and orientation of each camera relative to the other cameras. The camera calibration system estimates calibration parameters (e.g., rotation and translation) for the multiple cameras based on the obtained configuration information. The camera calibration system receives 2D images of a test object captured by the multiple cameras and obtains known information about the test object, such as its location, size, texture, and visually distinguishable points. The camera calibration system then generates a 3D model of the test object based on the received 2D images and the estimated calibration parameters. The generated 3D model is compared with the actual test object to determine a calibration error.
Type: Application
Filed: April 11, 2016
Publication date: October 12, 2017
Inventors: Brian Keith Cabral, Albert Parra Pozo, Forrest Samuel Briggs, Joyce Hsu
-
Publication number: 20170295359
Abstract: A canvas generation system generates a canvas view of a scene based on a set of original camera views depicting the scene, for example to recreate a scene in virtual reality. Canvas views can be generated based on a set of synthetic views generated from a set of original camera views. Synthetic views can be generated, for example, by shifting and blending relevant original camera views based on an optical flow across multiple original camera views. An optical flow can be generated using an iterative method which individually optimizes the optical flow vector for each pixel of a camera view and propagates changes in the optical flow to neighboring optical flow vectors.
Type: Application
Filed: April 11, 2016
Publication date: October 12, 2017
Inventors: Brian Keith Cabral, Forrest Samuel Briggs, Albert Parra Pozo, Peter Vajda