Patents by Inventor David DREIZNER

David DREIZNER has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11665308
    Abstract: Methods and systems for generating free viewpoint videos (FVVs) based on images captured in a sports arena. A method includes projecting, onto objects within a filming area within the sports arena, a predefined pattern including a large set of features; generating, based on signals captured by each of a plurality of depth cameras, a point cloud for each depth camera, wherein the plurality of depth cameras is deployed in proximity to the filming area, wherein the captured signals are reflected off of the objects within the filming area; creating, based on the plurality of point clouds, a unified point cloud; meshing points in the unified point cloud to generate a three-dimensional (3D) model of the objects; texturing the 3D model based on images captured by the plurality of depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to a viewpoint.
    Type: Grant
    Filed: January 25, 2018
    Date of Patent: May 30, 2023
    Assignee: TETAVI, LTD.
    Inventors: Michael Tamir, Michael Birnboim, David Dreizner, Michael Priven, Vsevolod Kagarlitsky
  • Patent number: 11632489
    Abstract: Systems and methods for foreground/background separation and for studio production of a FVV. A method includes projecting, onto objects in a filming area within a studio, a predefined pattern including a large set of features; generating, based on signals reflected off of the objects and captured by a plurality of depth cameras deployed in proximity to the filming area, a local point cloud for each depth camera; separating, based on the local point clouds, between a background and a foreground of the filming area; creating, based on the local point clouds, a unified point cloud; meshing points in the unified point cloud to generate a 3D model of the objects; texturing the 3D model based on the separation and images captured by the depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to at least one viewpoint.
    Type: Grant
    Filed: January 29, 2018
    Date of Patent: April 18, 2023
    Assignee: TETAVI, LTD.
    Inventors: Michael Tamir, Michael Birnboim, David Dreizner, Michael Priven, Vsevolod Kagarlitsky
  • Publication number: 20180220125
    Abstract: Methods and systems for generating free viewpoint videos (FVVs) based on images captured in a sports arena. A method includes projecting, onto objects within a filming area within the sports arena, a predefined pattern including a large set of features; generating, based on signals captured by each of a plurality of depth cameras, a point cloud for each depth camera, wherein the plurality of depth cameras is deployed in proximity to the filming area, wherein the captured signals are reflected off of the objects within the filming area; creating, based on the plurality of point clouds, a unified point cloud; meshing points in the unified point cloud to generate a three-dimensional (3D) model of the objects; texturing the 3D model based on images captured by the plurality of depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to a viewpoint.
    Type: Application
    Filed: January 25, 2018
    Publication date: August 2, 2018
    Applicant: Tetavi Ltd.
    Inventors: Michael TAMIR, Michael BIRNBOIM, David DREIZNER, Michael PRIVEN, Vsevolod KAGARLITSKY
  • Publication number: 20180220048
    Abstract: Systems and methods for foreground/background separation and for studio production of a FVV. A method includes projecting, onto objects in a filming area within a studio, a predefined pattern including a large set of features; generating, based on signals reflected off of the objects and captured by a plurality of depth cameras deployed in proximity to the filming area, a local point cloud for each depth camera; separating, based on the local point clouds, between a background and a foreground of the filming area; creating, based on the local point clouds, a unified point cloud; meshing points in the unified point cloud to generate a 3D model of the objects; texturing the 3D model based on the separation and images captured by the depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to at least one viewpoint.
    Type: Application
    Filed: January 29, 2018
    Publication date: August 2, 2018
    Applicant: Tetavi Ltd.
    Inventors: Michael TAMIR, Michael BIRNBOIM, David DREIZNER, Michael PRIVEN, Vsevolod KAGARLITSKY
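
Illustrative sketch of the point-cloud pipeline described in patent 11665308 and publication 20180220125 above (generating a unified point cloud from multiple depth cameras and meshing it into a 3D model). This is a minimal, hedged example only, not the patented implementation: the library choice (NumPy + Open3D), the Poisson meshing step, the voxel size, and all function names are assumptions introduced for illustration.

```python
# Minimal sketch, not the patented method: merge per-camera point clouds into a
# world frame, then mesh the unified cloud. Open3D and all parameters are
# illustrative assumptions.
import numpy as np
import open3d as o3d

def unify_point_clouds(local_clouds, extrinsics):
    """Transform each camera's local point cloud into a shared world frame
    and merge the results into a single unified point cloud."""
    unified = o3d.geometry.PointCloud()
    for points, cam_to_world in zip(local_clouds, extrinsics):
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(points)
        pcd.transform(cam_to_world)          # local camera frame -> world frame
        unified += pcd                       # accumulate into one cloud
    return unified.voxel_down_sample(voxel_size=0.01)  # thin out near-duplicates

def mesh_unified_cloud(unified):
    """Mesh the unified point cloud into a 3D surface model. Poisson surface
    reconstruction is used here purely as an example of a meshing step."""
    unified.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(unified, depth=8)
    return mesh

if __name__ == "__main__":
    # Synthetic stand-ins for two depth cameras observing the same object.
    rng = np.random.default_rng(0)
    clouds = [rng.normal(size=(5000, 3)) * 0.2 for _ in range(2)]
    poses = [np.eye(4), np.eye(4)]
    poses[1][:3, 3] = [0.05, 0.0, 0.0]       # second camera slightly offset
    mesh = mesh_unified_cloud(unify_point_clouds(clouds, poses))
    print("triangles:", np.asarray(mesh.triangles).shape[0])
```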
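
Illustrative sketch of the foreground/background separation step described in patent 11632489 and publication 20180220048 above. Again a hedged example rather than the patented method: it assumes the studio background lies beyond a known distance from each depth camera and uses a simple depth threshold; the threshold value and function names are assumptions for illustration.

```python
# Minimal sketch, not the patented method: split a per-camera point cloud into
# foreground and background by depth, assuming the backdrop sits beyond a known
# distance. Threshold and names are illustrative assumptions.
import numpy as np

def split_foreground_background(points, background_depth=3.0):
    """Split an (N, 3) local point cloud (camera frame, z = depth in metres)
    into foreground and background subsets using a depth threshold."""
    depth = points[:, 2]
    foreground_mask = depth < background_depth
    return points[foreground_mask], points[~foreground_mask]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic cloud: a subject about 1.5 m away plus a backdrop about 4 m away.
    subject = rng.normal(loc=[0.0, 0.0, 1.5], scale=0.2, size=(2000, 3))
    backdrop = rng.normal(loc=[0.0, 0.0, 4.0], scale=0.1, size=(2000, 3))
    fg, bg = split_foreground_background(np.vstack([subject, backdrop]))
    print(f"foreground points: {len(fg)}, background points: {len(bg)}")
```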