Patents by Inventor Albert Parra Pozo

Albert Parra Pozo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11967014
    Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations from the orientation in which the images were captured to the viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sender/receiver systems and, in some implementations, can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
    Type: Grant
    Filed: May 3, 2023
    Date of Patent: April 23, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Brian Keith Cabral, Albert Parra Pozo
  • Publication number: 20230267675
    Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations from the orientation in which the images were captured to the viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sender/receiver systems and, in some implementations, can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
    Type: Application
    Filed: May 3, 2023
    Publication date: August 24, 2023
    Inventors: Brian Keith CABRAL, Albert PARRA POZO
  • Patent number: 11676330
    Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations from the orientation in which the images were captured to the viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sender/receiver systems and, in some implementations, can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
    Type: Grant
    Filed: March 7, 2022
    Date of Patent: June 13, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Brian Keith Cabral, Albert Parra Pozo
  • Publication number: 20220413434
    Abstract: A holographic calling system can capture and encode holographic data at the sender side of a holographic calling pipeline and decode and present the holographic data as a 3D representation of the sender at the receiver side of the pipeline. The holographic calling pipeline can include stages to capture audio, color images, and depth images; densify the depth images so that each pixel has a depth value while generating part masks and a body model; use the masks to segment the images into the parts needed for hologram generation; convert the depth images into a 3D mesh; paint the 3D mesh with color data; perform torso disocclusion; perform face reconstruction; and perform audio synchronization. In various implementations, different ones of these stages can be performed sender-side or receiver-side. The holographic calling pipeline also includes sender-side compression, transmission over a communication channel, and receiver-side decompression and hologram output.
    Type: Application
    Filed: June 28, 2021
    Publication date: December 29, 2022
    Inventors: Albert PARRA POZO, Joseph VIRSKUS, Ganesh VENKATESH, Kai LI, Shen-Chi CHEN, Amit KUMAR, Rakesh RANJAN, Brian Keith CABRAL, Samuel Alan JOHNSON, Wei YE, Michael Alexander SNOWER, Yash PATEL
  • Publication number: 20220413433
    Abstract: A holographic calling system can capture and encode holographic data at the sender side of a holographic calling pipeline and decode and present the holographic data as a 3D representation of the sender at the receiver side of the pipeline. The holographic calling pipeline can include stages to capture audio, color images, and depth images; densify the depth images so that each pixel has a depth value while generating part masks and a body model; use the masks to segment the images into the parts needed for hologram generation; convert the depth images into a 3D mesh; paint the 3D mesh with color data; perform torso disocclusion; perform face reconstruction; and perform audio synchronization. In various implementations, different ones of these stages can be performed sender-side or receiver-side. The holographic calling pipeline also includes sender-side compression, transmission over a communication channel, and receiver-side decompression and hologram output.
    Type: Application
    Filed: June 28, 2021
    Publication date: December 29, 2022
    Inventors: Albert PARRA POZO, Joseph VIRSKUS, Ganesh VENKATESH, Kai LI, Shen-Chi CHEN, Amit KUMAR, Rakesh RANJAN, Brian Keith CABRAL, Samuel Alan JOHNSON, Wei YE, Michael Alexander SNOWER, Yash PATEL
  • Patent number: 11461962
    Abstract: A holographic calling system can capture and encode holographic data at the sender side of a holographic calling pipeline and decode and present the holographic data as a 3D representation of the sender at the receiver side of the pipeline. The holographic calling pipeline can include stages to capture audio, color images, and depth images; densify the depth images so that each pixel has a depth value while generating part masks and a body model; use the masks to segment the images into the parts needed for hologram generation; convert the depth images into a 3D mesh; paint the 3D mesh with color data; perform torso disocclusion; perform face reconstruction; and perform audio synchronization. In various implementations, different ones of these stages can be performed sender-side or receiver-side. The holographic calling pipeline also includes sender-side compression, transmission over a communication channel, and receiver-side decompression and hologram output.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: October 4, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Albert Parra Pozo, Joseph Virskus, Ganesh Venkatesh, Kai Li, Shen-Chi Chen, Amit Kumar, Rakesh Ranjan, Brian Keith Cabral, Samuel Alan Johnson, Wei Ye, Michael Alexander Snower, Yash Patel
  • Publication number: 20220189105
    Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations from the orientation in which the images were captured to the viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sender/receiver systems and, in some implementations, can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
    Type: Application
    Filed: March 7, 2022
    Publication date: June 16, 2022
    Inventors: Brian Keith CABRAL, Albert PARRA POZO
  • Patent number: 11302063
    Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations from the orientation in which the images were captured to the viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sender/receiver systems and, in some implementations, can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: April 12, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Brian Keith Cabral, Albert Parra Pozo
  • Publication number: 20220028157
    Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations from the orientation in which the images were captured to the viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sender/receiver systems and, in some implementations, can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
    Type: Application
    Filed: July 21, 2020
    Publication date: January 27, 2022
    Inventors: Brian Keith Cabral, Albert Parra Pozo
  • Patent number: 10623718
    Abstract: A camera calibration system jointly calibrates multiple cameras in a camera rig system. The camera calibration system obtains configuration information about the multiple cameras in the camera rig system, such as position and orientation for each camera relative to other cameras. The camera calibration system estimates calibration parameters (e.g., rotation and translation) for the multiple cameras based on the obtained configuration information. The camera calibration system receives 2D images of a test object captured by the multiple cameras and obtains known information about the test object such as location, size, texture and detailed information of visually distinguishable points of the test object. The camera calibration system then generates a 3D model of the test object based on the received 2D images and the estimated calibration parameters. The generated 3D model is evaluated in comparison with the actual test object to determine a calibration error.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: April 14, 2020
    Assignee: Facebook, Inc.
    Inventors: Brian Keith Cabral, Albert Parra Pozo, Forrest Samuel Briggs, Joyce Hsu
  • Patent number: 10257501
    Abstract: A canvas generation system generates a canvas view of a scene based on a set of original camera views depicting the scene, for example to recreate a scene in virtual reality. Canvas views can be generated based on a set of synthetic views generated from a set of original camera views. Synthetic views can be generated, for example, by shifting and blending relevant original camera views based on an optical flow across multiple original camera views. An optical flow can be generated using an iterative method which individually optimizes the optical flow vector for each pixel of a camera view and propagates changes in the optical flow to neighboring optical flow vectors.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: April 9, 2019
    Assignee: Facebook, Inc.
    Inventors: Brian Keith Cabral, Forrest Samuel Briggs, Albert Parra Pozo, Peter Vajda
  • Publication number: 20190098287
    Abstract: A camera calibration system jointly calibrates multiple cameras in a camera rig system. The camera calibration system obtains configuration information about the multiple cameras in the camera rig system, such as position and orientation for each camera relative to other cameras. The camera calibration system estimates calibration parameters (e.g., rotation and translation) for the multiple cameras based on the obtained configuration information. The camera calibration system receives 2D images of a test object captured by the multiple cameras and obtains known information about the test object such as location, size, texture and detailed information of visually distinguishable points of the test object. The camera calibration system then generates a 3D model of the test object based on the received 2D images and the estimated calibration parameters. The generated 3D model is evaluated in comparison with the actual test object to determine a calibration error.
    Type: Application
    Filed: November 29, 2018
    Publication date: March 28, 2019
    Inventors: Brian Keith Cabral, Albert Parra Pozo, Forrest Samuel Briggs, Joyce Hsu
  • Patent number: 10230904
    Abstract: A camera system is configured to capture 360 degree image information of a local area, at least a portion of which is in stereo. The camera system includes a plurality of peripheral cameras, a plurality of axis cameras, a first rigid plate, and a second rigid plate, each aligned along an alignment axis. The peripheral cameras are arranged in a ring configuration that allows objects in the local area past a threshold distance to be within the fields of view of at least two peripheral cameras. The first and second rigid plates secure to a top and a bottom surface of the ring of peripheral cameras, respectively. At least one axis camera is arranged along the alignment axis and is coupled perpendicularly to a surface of the first rigid plate.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: March 12, 2019
    Assignee: Facebook, Inc.
    Inventors: Brian Keith Cabral, Forrest Samuel Briggs, Joyce Hsu, Albert Parra Pozo, Andrew Hamilton Coward
  • Patent number: 10200624
    Abstract: A camera system is configured to capture, via a plurality of cameras, 360 degree image information of a local area, at least a portion of which is in stereo. The camera system determines respective exposure settings for the plurality of cameras. A minimum shutter speed and a maximum shutter speed are determined from the determined exposure settings. A set of test exposure settings is determined using the determined minimum shutter speed and maximum shutter speed. A set of test images is captured using the plurality of cameras at each test exposure setting in the set of test exposure settings. Each set of test images includes images from each of the plurality of cameras that are captured using a same respective test exposure setting. A global exposure setting is selected based on the captured sets of test images. The selected global exposure setting is applied to the plurality of cameras.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: February 5, 2019
    Assignee: Facebook, Inc.
    Inventors: Brian Keith Cabral, Albert Parra Pozo, Forrest Samuel Briggs
  • Patent number: 10187629
    Abstract: A camera calibration system jointly calibrates multiple cameras in a camera rig system. The camera calibration system obtains configuration information about the multiple cameras in the camera rig system, such as position and orientation for each camera relative to other cameras. The camera calibration system estimates calibration parameters (e.g., rotation and translation) for the multiple cameras based on the obtained configuration information. The camera calibration system receives 2D images of a test object captured by the multiple cameras and obtains known information about the test object such as location, size, texture and detailed information of visually distinguishable points of the test object. The camera calibration system then generates a 3D model of the test object based on the received 2D images and the estimated calibration parameters. The generated 3D model is evaluated in comparison with the actual test object to determine a calibration error.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: January 22, 2019
    Assignee: Facebook, Inc.
    Inventors: Brian Keith Cabral, Albert Parra Pozo, Forrest Samuel Briggs, Joyce Hsu
  • Patent number: 10165258
    Abstract: A canvas generation system generates a canvas view of a scene based on a set of original camera views depicting the scene, for example to recreate a scene in virtual reality. Canvas views can be generated based on a set of synthetic views generated from a set of original camera views. Synthetic views can be generated, for example, by shifting and blending relevant original camera views based on an optical flow across multiple original camera views. An optical flow can be generated using an iterative method which individually optimizes the optical flow vector for each pixel of a camera view and propagates changes in the optical flow to neighboring optical flow vectors.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: December 25, 2018
    Assignee: Facebook, Inc.
    Inventors: Brian Keith Cabral, Forrest Samuel Briggs, Albert Parra Pozo, Peter Vajda
  • Patent number: 10057562
    Abstract: A canvas generation system generates a canvas view of a scene based on a set of original camera views depicting the scene, for example to recreate a scene in virtual reality. Canvas views can be generated based on a set of synthetic views generated from a set of original camera views. Synthetic views can be generated, for example, by shifting and blending relevant original camera views based on an optical flow across multiple original camera views. An optical flow can be generated using an iterative method which individually optimizes the optical flow vector for each pixel of a camera view and propagates changes in the optical flow to neighboring optical flow vectors.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: August 21, 2018
    Assignee: Facebook, Inc.
    Inventors: Brian Keith Cabral, Forrest Samuel Briggs, Albert Parra Pozo, Peter Vajda
  • Publication number: 20170295309
    Abstract: A camera system is configured to capture, via a plurality of cameras, 360 degree image information of a local area, at least a portion of which is in stereo. The camera system determines respective exposure settings for the plurality of cameras. A minimum shutter speed and a maximum shutter speed are determined from the determined exposure settings. A set of test exposure settings is determined using the determined minimum shutter speed and maximum shutter speed. A set of test images is captured using the plurality of cameras at each test exposure setting in the set of test exposure settings. Each set of test images includes images from each of the plurality of cameras that are captured using a same respective test exposure setting. A global exposure setting is selected based on the captured sets of test images. The selected global exposure setting is applied to the plurality of cameras.
    Type: Application
    Filed: April 11, 2016
    Publication date: October 12, 2017
    Inventors: Brian Keith Cabral, Albert Parra Pozo, Forrest Samuel Briggs
  • Publication number: 20170295358
    Abstract: A camera calibration system jointly calibrates multiple cameras in a camera rig system. The camera calibration system obtains configuration information about the multiple cameras in the camera rig system, such as position and orientation for each camera relative to other cameras. The camera calibration system estimates calibration parameters (e.g., rotation and translation) for the multiple cameras based on the obtained configuration information. The camera calibration system receives 2D images of a test object captured by the multiple cameras and obtains known information about the test object such as location, size, texture and detailed information of visually distinguishable points of the test object. The camera calibration system then generates a 3D model of the test object based on the received 2D images and the estimated calibration parameters. The generated 3D model is evaluated in comparison with the actual test object to determine a calibration error.
    Type: Application
    Filed: April 11, 2016
    Publication date: October 12, 2017
    Inventors: Brian Keith Cabral, Albert Parra Pozo, Forrest Samuel Briggs, Joyce Hsu
  • Publication number: 20170295359
    Abstract: A canvas generation system generates a canvas view of a scene based on a set of original camera views depicting the scene, for example to recreate a scene in virtual reality. Canvas views can be generated based on a set of synthetic views generated from a set of original camera views. Synthetic views can be generated, for example, by shifting and blending relevant original camera views based on an optical flow across multiple original camera views. An optical flow can be generated using an iterative method which individually optimizes the optical flow vector for each pixel of a camera view and propagates changes in the optical flow to neighboring optical flow vectors.
    Type: Application
    Filed: April 11, 2016
    Publication date: October 12, 2017
    Inventors: Brian Keith Cabral, Forrest Samuel Briggs, Albert Parra Pozo, Peter Vajda
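The 3D conversation patents above describe an ordered pipeline of stages (calibrate, capture, tag and filter, compress, decompress, reconstruct, render, display) that can be customized per conversation context. The flow can be sketched as a simple stage chain; the context-dict interface and stage names below are illustrative assumptions, not the patented implementation:

```python
# Minimal sketch of a staged processing pipeline like the one described in
# the 3D conversation abstracts. Each stage receives and returns a frame
# context; here the stages just record their execution order.

STAGE_ORDER = [
    "calibrate", "capture", "tag_and_filter", "compress",
    "decompress", "reconstruct", "render", "display",
]

def make_stage(name):
    """Return a stage function that records its execution in the context."""
    def stage(ctx):
        ctx.setdefault("trace", []).append(name)
        return ctx
    return stage

def run_pipeline(ctx, stages=None):
    """Run the frame context through each stage in order.

    Passing a custom `stages` list mirrors the claim that pipeline stages
    can be customized based on a conversation context.
    """
    for name in (stages or STAGE_ORDER):
        ctx = make_stage(name)(ctx)
    return ctx

result = run_pipeline({"frame_id": 0})
```

A receiver that performs reconstruction locally could, under this sketch, simply pass a shortened `stages` list covering only the receiver-side stages.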
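The canvas-generation patents (10257501, 10165258, 10057562) describe an iterative optical-flow method that individually optimizes the flow vector for each pixel and propagates changes to neighboring vectors. A toy 1-D version, with brute-force per-pixel search and neighbor averaging standing in for the real optimization, might look like this (an illustrative simplification, not the patented algorithm):

```python
# Toy 1-D sketch of iterative per-pixel optical-flow refinement: optimize
# each pixel's flow vector individually, then propagate changes to
# neighboring vectors by local averaging.

def estimate_flow(src, dst, max_shift=3, iterations=5):
    n = len(src)
    flow = [0] * n
    for _ in range(iterations):
        for i in range(n):
            # Individually optimize this pixel's flow: choose the shift
            # whose matched destination value best agrees with the source.
            flow[i] = min(
                range(-max_shift, max_shift + 1),
                key=lambda s: abs(src[i] - dst[(i + s) % n]),
            )
        # Propagate changes to neighbors by averaging adjacent vectors.
        flow = [
            round((flow[i - 1] + flow[i] + flow[(i + 1) % n]) / 3)
            for i in range(n)
        ]
    return flow
```

For a destination that is the source cyclically shifted by one pixel, the recovered flow should be a constant field of -1.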
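Patent 10200624 describes metering each camera, taking the minimum and maximum shutter speeds, generating a set of test exposure settings between them, and selecting a single global setting from the test captures. A sketch of that selection flow follows; the linear sampling of candidates and the squared-error score are illustrative assumptions (the patent selects based on captured test images, which are not modeled here):

```python
# Sketch of global exposure selection: sample candidate settings between
# the minimum and maximum metered shutter speeds, then pick the candidate
# that best fits all cameras under a simple squared-error score.

def candidate_settings(metered, steps=5):
    """Linearly sample `steps` test settings between min and max metered values."""
    lo, hi = min(metered), max(metered)
    if steps == 1:
        return [lo]
    return [lo + (hi - lo) * k / (steps - 1) for k in range(steps)]

def select_global_exposure(metered, steps=5):
    """Choose the candidate closest (in squared error) to every camera's metering."""
    def score(setting):
        return sum((setting - m) ** 2 for m in metered)
    return min(candidate_settings(metered, steps), key=score)
```

The selected value is then applied uniformly to all cameras, trading per-camera optimality for consistent brightness across the stitched 360 degree view.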
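The camera calibration patents (10623718, 10187629) evaluate a generated 3D model of a test object against the actual object to determine a calibration error. A common way to express such an error is reprojection: project the known test-object points through the estimated parameters and compare with the observed 2-D detections. The pinhole model and RMS metric below are standard choices assumed for illustration, not taken from the patents:

```python
# Minimal sketch of calibration-error evaluation: project known 3-D points
# of a test object through estimated camera parameters and measure the RMS
# distance to the observed 2-D points.
import math

def project(point3d, focal):
    """Pinhole projection of (x, y, z) to image coordinates (u, v)."""
    x, y, z = point3d
    return (focal * x / z, focal * y / z)

def calibration_error(test_points, observations, focal):
    """RMS reprojection error between projected and observed 2-D points."""
    total = 0.0
    for p, obs in zip(test_points, observations):
        u, v = project(p, focal)
        total += (u - obs[0]) ** 2 + (v - obs[1]) ** 2
    return math.sqrt(total / len(test_points))
```

A small error indicates the estimated rotation, translation, and intrinsics are consistent with the physical rig; a large error signals the joint calibration should be re-run.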