Patents by Inventor Matthew Steven Chapman

Matthew Steven Chapman has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240109564
Abstract: A method is provided that can include activating at least two wireless communication channels in parallel between a first wireless transceiver and a second wireless transceiver. Each of the at least two wireless communication channels can operate at a different radio carrier frequency, and the first wireless transceiver may be part of a first vehicle. The method can also include transmitting, by the first wireless transceiver, common information in parallel on the at least two wireless communication channels to the second wireless transceiver and deactivating the at least two wireless communication channels.
    Type: Application
    Filed: December 12, 2023
    Publication date: April 4, 2024
    Inventors: Padam Dhoj Swar, Carl L. Haas, Danial Rice, Rebecca W. Dreasher, Adam Hausmann, Matthew Steven Vrba, Edward J. Kuchar, James Lucas, Andrew Ryan Staats, Jerrid D. Chapman, Jeffrey D. Kernwein, Janmejay Tripathy, Stephen Craven, Tania Lindsley, Derek K. Woo, Ann K. Grimm, Scott Sollars, Phillip A. Burgart, James Allen Oswald, Shannon K. Struttmann, Stuart J. Barr, Keith Smith, Francois P. Pretorius, Craig K. Green, Kendrick Gawne, Irwin Morris, Joseph W. Gorman, Srivallidevi Muthusami, Mahesh Babu Natarajan, Jeremiah Dirnberger, Adam Franco
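The redundancy scheme this abstract describes — the same information sent in parallel on channels at different carrier frequencies, so the link survives loss on either one — can be sketched in plain Python. The lossy-channel model and the sequence-number framing below are illustrative assumptions, not details from the patent.

```python
import random

def transmit(payload_frames, channels, drop_rates, seed=0):
    """Send every frame on every channel; each channel independently drops
    frames with its own probability (channels at different carrier
    frequencies fade independently, which is the point of the redundancy)."""
    rng = random.Random(seed)
    received = []  # (channel_id, sequence_number, frame) tuples that survived
    for seq, frame in enumerate(payload_frames):
        for ch, p_drop in zip(channels, drop_rates):
            if rng.random() >= p_drop:
                received.append((ch, seq, frame))
    return received

def receive(received):
    """Keep the first valid copy of each sequence number; discard the
    duplicate that arrives on the other channel."""
    frames = {}
    for _ch, seq, frame in received:
        frames.setdefault(seq, frame)
    return [frames[s] for s in sorted(frames)]

frames = ["position", "speed", "brake"]
rx = receive(transmit(frames, channels=[0, 1], drop_rates=[0.5, 0.5]))
```

A frame is lost only when both channels drop it, so the delivered fraction improves from `1 - p` per channel to `1 - p**2` for independent losses.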
  • Publication number: 20220277421
    Abstract: In one embodiment, a method includes receiving a pair of stereo images having a resolution lower than a target resolution, generating an initial first feature map for a first image of the pair based on first channels associated with the first image and generating an initial second feature map for a second image of the pair based on second channels associated with the second image, generating a first feature map based on combining the first channels with the initial first feature map, generating a second feature map based on combining the second channels with the initial second feature map, up-sampling the first feature map and the second feature map to the target resolution, warping the up-sampled second feature map, and generating a reconstructed image corresponding to the first image having the target resolution based on the up-sampled first feature map and the up-sampled and warped second feature map.
    Type: Application
    Filed: May 16, 2022
    Publication date: September 1, 2022
    Inventors: Lei Xiao, Salah Eddine Nouri, Douglas Robert Lanman, Anton S. Kaplanyan, Alexander Jobe Fix, Matthew Steven Chapman
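Two spatial steps of the stereo pipeline above — up-sampling each view's feature map to the target resolution, then warping the second view's features into the first view's frame before reconstruction — can be illustrated in plain Python. The nearest-neighbour up-sampling and the constant integer disparity are simplifying assumptions for illustration; the patent prescribes neither.

```python
def upsample_nearest(fmap, factor):
    """Nearest-neighbour up-sampling of a 2-D feature map (list of rows)."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out

def warp_horizontal(fmap, disparity, fill=0.0):
    """Shift the second view's features sideways by a (here constant)
    disparity so they align with the first view; samples shifted in from
    outside the frame get `fill`."""
    warped = []
    for row in fmap:
        if disparity >= 0:
            warped.append([fill] * disparity + row[:len(row) - disparity])
        else:
            warped.append(row[-disparity:] + [fill] * (-disparity))
    return warped

right_features = [[1.0, 2.0], [3.0, 4.0]]
up = upsample_nearest(right_features, 2)   # now at the target resolution
aligned = warp_horizontal(up, 1)           # aligned with the first view
```

In the actual method the warp would use a per-pixel disparity with sub-pixel interpolation, and a learned model fuses the two aligned feature maps into the reconstructed first-view image.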
  • Patent number: 11367165
    Abstract: In one embodiment, a method includes receiving a first frame associated with a first time and one or more second frames of a video having a resolution lower than a target resolution, wherein each second frame is associated with a second time prior to the first time, generating a first feature map for the first frame and one or more second feature maps for the one or more second frames, up-sampling the first feature map and the one or more second feature maps to the target resolution, warping each of the up-sampled second feature maps according to a motion estimation between the associated second time and the first time, and generating a reconstructed frame having the target resolution corresponding to the first frame by using a machine-learning model to process the up-sampled first feature map and the one or more up-sampled and warped second feature maps.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: June 21, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Lei Xiao, Salah Eddine Nouri, Douglas Robert Lanman, Anton S. Kaplanyan, Alexander Jobe Fix, Matthew Steven Chapman
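The temporal step that distinguishes this method — warping each earlier frame's up-sampled feature map according to the motion estimated between its time and the current time — is a resampling of the feature map along the motion field. A minimal backward warp over a per-pixel integer motion field (an assumption; production systems use sub-pixel optical flow with interpolation):

```python
def backward_warp(fmap, flow, fill=0.0):
    """Warp an earlier frame's feature map to the current time: each
    current-frame pixel fetches the feature it moved from, i.e.
    source = target - motion. Out-of-bounds lookups get `fill`."""
    h, w = len(fmap), len(fmap[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y][x]
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = fmap[sy][sx]
    return out

prev = [[1.0, 2.0], [3.0, 4.0]]
motion = [[(0, 1), (0, 1)], [(0, 1), (0, 1)]]  # everything moved one pixel right
warped = backward_warp(prev, motion)
```

After this alignment, the machine-learning model in the abstract fuses the current frame's features with the warped features of the earlier frames to produce the reconstructed high-resolution frame.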
  • Publication number: 20210366082
    Abstract: In one embodiment, a method includes receiving a first frame associated with a first time and one or more second frames of a video having a resolution lower than a target resolution, wherein each second frame is associated with a second time prior to the first time, generating a first feature map for the first frame and one or more second feature maps for the one or more second frames, up-sampling the first feature map and the one or more second feature maps to the target resolution, warping each of the up-sampled second feature maps according to a motion estimation between the associated second time and the first time, and generating a reconstructed frame having the target resolution corresponding to the first frame by using a machine-learning model to process the up-sampled first feature map and the one or more up-sampled and warped second feature maps.
    Type: Application
    Filed: September 30, 2020
    Publication date: November 25, 2021
    Inventors: Lei Xiao, Salah Eddine Nouri, Douglas Robert Lanman, Anton S. Kaplanyan, Alexander Jobe Fix, Matthew Steven Chapman
  • Patent number: 11113794
    Abstract: In one embodiment, a computing system may receive current eye-tracking data associated with a user of a head-mounted display. The system may dynamically adjust a focal length of the head-mounted display based on the current eye-tracking data. The system may generate an in-focus image of a scene and a corresponding depth map of the scene. The system may generate a circle-of-confusion map for the scene based on the depth map. The circle-of-confusion map encodes a desired focal surface in the scene. The system may generate, using a machine-learning model, an output image with a synthesized defocus-blur effect by processing the in-focus image, the corresponding depth map, and the circle-of-confusion map of the scene. The system may display the output image with the synthesized defocus-blur effect to the user via the head-mounted display having the adjusted focal length.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: September 7, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Douglas Robert Lanman, Matthew Steven Chapman, Alexander Jobe Fix, Anton S. Kaplanyan, Lei Xiao
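The circle-of-confusion map this system derives from the depth map has a closed form under the thin-lens model. The formula below is a common formulation and an assumption here — the patent only says the map encodes the desired focal surface: for a lens of focal length f and aperture diameter A focused at depth s, an object at depth d blurs to a circle of diameter A * (f / (s - f)) * |d - s| / d.

```python
def circle_of_confusion(depth, focus_depth, focal_length, aperture):
    """Thin-lens blur-circle diameter for an object at `depth` when the
    lens is focused at `focus_depth` (all distances in metres).
    Zero exactly on the focal surface, growing with defocus."""
    return aperture * (focal_length / (focus_depth - focal_length)) \
        * abs(depth - focus_depth) / depth

def coc_map(depth_map, focus_depth, focal_length, aperture):
    """Per-pixel circle-of-confusion map computed from a depth map."""
    return [[circle_of_confusion(d, focus_depth, focal_length, aperture)
             for d in row] for row in depth_map]

# Focused at 2 m with a 50 mm lens and 10 mm aperture: the in-focus
# pixel gets zero blur, the farther pixel a positive blur diameter.
m = coc_map([[2.0, 4.0]], focus_depth=2.0, focal_length=0.05, aperture=0.01)
```

In the described system, the eye-tracking data selects `focus_depth`, and the resulting map steers the machine-learning model's synthesized defocus blur.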
  • Patent number: 11094075
    Abstract: In one embodiment, a system may access a training sample that includes training images and corresponding training depth maps of a scene, with the training images being associated with different predetermined viewpoints of the scene. The system may generate elemental images of the scene by processing the training images and the training depth maps using a machine-learning model. The elemental images are associated with more viewpoints of the scene than the predetermined viewpoints associated with the training images. The system may update the machine-learning model based on a comparison between the generated elemental images of the scene and target elemental images that are each associated with a predetermined viewpoint. The updated machine-learning model is configured to generate elemental images of a scene of interest based on input images and corresponding depth maps of the scene of interest from different viewpoints.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: August 17, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Douglas Robert Lanman, Matthew Steven Chapman, Alexander Jobe Fix, Anton S. Kaplanyan, Lei Xiao
  • Publication number: 20200311881
    Abstract: In one embodiment, a computing system may receive current eye-tracking data associated with a user of a head-mounted display. The system may dynamically adjust a focal length of the head-mounted display based on the current eye-tracking data. The system may generate an in-focus image of a scene and a corresponding depth map of the scene. The system may generate a circle-of-confusion map for the scene based on the depth map. The circle-of-confusion map encodes a desired focal surface in the scene. The system may generate, using a machine-learning model, an output image with a synthesized defocus-blur effect by processing the in-focus image, the corresponding depth map, and the circle-of-confusion map of the scene. The system may display the output image with the synthesized defocus-blur effect to the user via the head-mounted display having the adjusted focal length.
    Type: Application
    Filed: June 16, 2020
    Publication date: October 1, 2020
    Inventors: Douglas Robert Lanman, Matthew Steven Chapman, Alexander Jobe Fix, Anton S. Kaplanyan, Lei Xiao
  • Patent number: 10740876
    Abstract: In one embodiment, a system may access a training sample from a training dataset, including a training image of a scene and a corresponding depth map. The system may access a circle-of-confusion map for the scene, which is generated based on the depth map and encodes a desired focal surface in the scene. The system may generate an output image by processing the training image, the corresponding depth map, and the corresponding circle-of-confusion map using a machine-learning model. The system may update the machine-learning model based on a comparison between the generated output image and a target image depicting the scene with a desired defocus-blur effect. The updated machine-learning model is configured to generate images with defocus-blur effect based on input images and corresponding depth maps.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: August 11, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Douglas Robert Lanman, Matthew Steven Chapman, Alexander Jobe Fix, Anton S. Kaplanyan, Lei Xiao
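The update step in this abstract (and in the two related training patents in this listing) — generate an output image, compare it against a target with the desired defocus-blur effect, and adjust the model based on the comparison — is ordinary supervised training. A toy version with a one-parameter "model" and a mean-squared-error comparison; the scalar model and the loss choice are illustrative assumptions, since the patent specifies neither:

```python
def mse(pred, target):
    """Pixel-wise mean-squared error between two flat image lists."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def train_step(weight, image, target, lr=0.1):
    """One gradient-descent update of a scalar model pred = weight * pixel.
    d(mse)/d(weight) = 2 * mean((weight * x - t) * x)."""
    pred = [weight * x for x in image]
    grad = 2 * sum((p - t) * x
                   for p, t, x in zip(pred, target, image)) / len(image)
    return weight - lr * grad, mse(pred, target)

w = 0.0
image, target = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # target is 2 * image
for _ in range(50):
    w, loss = train_step(w, image, target)
```

The comparison drives `w` toward the value that reproduces the target, just as the patent's comparison between generated and target images drives the model's parameters toward producing the desired blur.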
  • Patent number: 10664953
    Abstract: In one embodiment, a system may access a training sample from a training dataset. The training sample may include a training image of a scene and a corresponding depth map of the scene. The system may generate a plurality of decomposition images by processing the training image and the corresponding depth map using a machine-learning model. The system may generate a focal stack based on the plurality of decomposition images and update the machine-learning model based on a comparison between the generated focal stack and a target focal stack associated with the training sample. The updated machine-learning model is configured to generate decomposition images with defocus-blur effect based on input images and corresponding depth maps.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: May 26, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Douglas Robert Lanman, Matthew Steven Chapman, Alexander Jobe Fix, Anton S. Kaplanyan, Lei Xiao
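The step above of generating a focal stack from the decomposition images can be pictured as compositing: each focal-stack slice combines the decomposition layers in some proportion. The patent does not specify how slices combine the layers; a linear per-slice weighting is one simple assumption for illustration.

```python
def compose_focal_stack(layers, weights):
    """Build a focal stack from decomposition images: each slice is a
    weighted sum of the layers, with weights[s][l] giving layer l's
    contribution to slice s. All layers share one resolution."""
    h, w = len(layers[0]), len(layers[0][0])
    stack = []
    for slice_weights in weights:
        img = [[sum(wl * layers[l][y][x]
                    for l, wl in enumerate(slice_weights))
                for x in range(w)] for y in range(h)]
        stack.append(img)
    return stack

# Two 1x1 decomposition layers; slice 0 shows only layer 0,
# slice 1 blends the two layers equally.
stack = compose_focal_stack([[[1.0]], [[3.0]]], [[1.0, 0.0], [0.5, 0.5]])
```

The training loop in the abstract would then compare a stack composed this way against the target focal stack and update the model that produced the decomposition images.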