Patents Assigned to Tetavi Ltd.
  • Patent number: 11893688
    Abstract: A computer-implemented method of smoothing a transition between two mesh sequences to be rendered successively comprises steps of: (a) providing first and second mesh sequences {1} and {2} formed of mesh frames, respectively, to be fused into a fused sequence; (b) selecting mesh frames gn and gm being candidates for fusing therebetween; calculating geometric rigid and/or non-rigid transformations of candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; applying calculated geometric rigid and/or non-rigid transformations to candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; calculating textural transformations of said candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; and applying calculated textural transformations to said candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}.
    Type: Grant
    Filed: December 15, 2021
    Date of Patent: February 6, 2024
    Assignee: TETAVI LTD.
    Inventors: Sefy Kagarlitsky, Shirley Keinan, Amir Green, Yair Baruch, Roi Lev, Michael Birnboim, Miky Tamir
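    The geometric step of the abstract above can be sketched in miniature: align one candidate frame onto the other with a rigid transformation, then blend vertices to smooth the transition. This is a hypothetical, heavily simplified sketch (translation-only alignment, meshes as vertex lists); the patented method also covers non-rigid and textural transformations.

    ```python
    # Hypothetical sketch: fuse two candidate mesh frames g_n and g_m.
    # A mesh frame is modeled as a list of (x, y, z) vertex tuples.

    def centroid(verts):
        n = len(verts)
        return tuple(sum(v[i] for v in verts) / n for i in range(3))

    def align_translation(src, dst):
        """Translation-only rigid transform mapping src's centroid onto dst's."""
        cs, cd = centroid(src), centroid(dst)
        t = tuple(cd[i] - cs[i] for i in range(3))
        return [tuple(v[i] + t[i] for i in range(3)) for v in src]

    def blend(a, b, alpha):
        """Linear vertex blend used to smooth the transition (alpha 0 -> a, 1 -> b)."""
        return [tuple((1 - alpha) * va[i] + alpha * vb[i] for i in range(3))
                for va, vb in zip(a, b)]

    g_n = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]   # last candidate frame of sequence {1}
    g_m = [(5, 5, 5), (6, 5, 5), (5, 6, 5)]   # first candidate frame of sequence {2}

    aligned = align_translation(g_m, g_n)     # bring g_m into g_n's frame
    fused = blend(g_n, aligned, 0.5)          # halfway transition frame
    ```

    A full implementation would estimate rotation as well (e.g. a Kabsch-style fit) and warp texture coordinates alongside the geometry.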
  • Patent number: 11665308
    Abstract: Methods and systems for generating free viewpoint videos (FVVs) based on images captured in a sports arena. A method includes projecting, onto objects within a filming area within the sports arena, a predefined pattern including a large set of features; generating, based on signals captured by each of a plurality of depth cameras, a point cloud for each depth camera, wherein the plurality of depth cameras is deployed in proximity to the filming area, wherein the captured signals are reflected off of the objects within the filming area; creating, based on the plurality of point clouds, a unified point cloud; meshing points in the unified point cloud to generate a three-dimensional (3D) model of the objects; texturing the 3D model based on images captured by the plurality of depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to a viewpoint.
    Type: Grant
    Filed: January 25, 2018
    Date of Patent: May 30, 2023
    Assignee: TETAVI, LTD.
    Inventors: Michael Tamir, Michael Birnboim, David Dreizner, Michael Priven, Vsevolod Kagarlitsky
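    The point-cloud unification step described above can be sketched as follows. This is a hypothetical simplification that assumes each depth camera's pose is a known translation; real systems recover full 6-DoF extrinsics through calibration before merging.

    ```python
    # Hypothetical sketch: build a unified point cloud from per-camera clouds.

    def to_world(local_cloud, camera_offset):
        """Map points from a camera's local frame into the shared world frame."""
        ox, oy, oz = camera_offset
        return [(x + ox, y + oy, z + oz) for (x, y, z) in local_cloud]

    def unify(clouds_with_offsets):
        """Concatenate all camera clouds after mapping them to the world frame."""
        unified = []
        for cloud, offset in clouds_with_offsets:
            unified.extend(to_world(cloud, offset))
        return unified

    # Two cameras observing the filming area from different positions.
    cam_a = ([(0.0, 0.0, 1.0), (0.1, 0.0, 1.0)], (0.0, 0.0, 0.0))
    cam_b = ([(0.0, 0.0, 1.0)], (2.0, 0.0, 0.0))

    cloud = unify([cam_a, cam_b])   # unified point cloud, ready for meshing
    ```

    Downstream, the unified cloud would be meshed (e.g. via surface reconstruction) and textured from the cameras' color images before rendering the FVV.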
  • Patent number: 11632489
    Abstract: Systems and methods for foreground/background separation and for studio production of a FVV. A method includes projecting, onto objects in a filming area within a studio, a predefined pattern including a large set of features; generating, based on signals reflected off of the objects and captured by a plurality of depth cameras deployed in proximity to the filming area, a local point cloud for each depth camera; separating, based on the local point clouds, between a background and a foreground of the filming area; creating, based on the local point clouds, a unified point cloud; meshing points in the unified point cloud to generate a 3D model of the objects; texturing the 3D model based on the separation and images captured by the depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to at least one viewpoint.
    Type: Grant
    Filed: January 29, 2018
    Date of Patent: April 18, 2023
    Assignee: TETAVI, LTD.
    Inventors: Michael Tamir, Michael Birnboim, David Dreizner, Michael Priven, Vsevolod Kagarlitsky
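    The separation step above can be illustrated with a minimal sketch: classify points as foreground or background by depth. This is a hypothetical stand-in; the patented method derives the split from the local point clouds themselves rather than a fixed threshold.

    ```python
    # Hypothetical sketch: depth-threshold foreground/background separation.
    # Points nearer to the camera than `threshold` are treated as foreground.

    def split_by_depth(points, threshold):
        """Partition (x, y, z) points into (foreground, background) by z-depth."""
        fg = [p for p in points if p[2] < threshold]
        bg = [p for p in points if p[2] >= threshold]
        return fg, bg

    points = [(0.0, 0.0, 1.2), (0.5, 0.1, 1.5), (0.0, 0.0, 4.0)]
    fg, bg = split_by_depth(points, threshold=3.0)
    ```

    The separation is then reused when texturing, so background pixels do not bleed onto the foreground 3D model.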
  • Patent number: 11574443
    Abstract: A method and system for improving a three-dimensional (3D) representation of objects using semantic data. The method comprises receiving an input data generated in response to captured video in a filming area; setting at least one parameter for each region in the input data; and generating a 3D representation based in part on the at least one parameter and semantic data associated with the input data.
    Type: Grant
    Filed: March 10, 2021
    Date of Patent: February 7, 2023
    Assignee: Tetavi Ltd.
    Inventors: Michael Tamir, Gilad Talmon, Vsevolod Kagarlitsky, Shirley Keinan, David Drezner, Yair Baruch, Michael Birnboim
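    The per-region parameter step above can be sketched as a lookup from semantic label to reconstruction parameters, e.g. denser meshing for faces than for floors. The label names and parameter values here are illustrative assumptions, not taken from the patent.

    ```python
    # Hypothetical sketch: choose reconstruction parameters per region
    # based on that region's semantic label.

    PARAMS = {
        "face":  {"mesh_resolution": "high",   "smoothing": 0.1},
        "body":  {"mesh_resolution": "medium", "smoothing": 0.3},
        "floor": {"mesh_resolution": "low",    "smoothing": 0.8},
    }
    DEFAULT = {"mesh_resolution": "medium", "smoothing": 0.5}

    def parameters_for(regions):
        """Attach one parameter set to each semantically labeled region."""
        return {name: PARAMS.get(label, DEFAULT) for name, label in regions}

    regions = [("r1", "face"), ("r2", "floor"), ("r3", "unknown")]
    params = parameters_for(regions)
    ```

    The 3D representation is then generated region by region, spending detail where the semantics say it matters most.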
  • Publication number: 20210304495
    Abstract: A method and system for improving a three-dimensional (3D) representation of objects using semantic data. The method comprises receiving an input data generated in response to captured video in a filming area; setting at least one parameter for each region in the input data; and generating a 3D representation based in part on the at least one parameter and semantic data associated with the input data.
    Type: Application
    Filed: March 10, 2021
    Publication date: September 30, 2021
    Applicant: Tetavi Ltd.
    Inventors: Michael TAMIR, Gilad TALMON, Vsevolod KAGARLITSKY, Shirley KEINAN, David DREZNER, Yair BARUCH, Michael BIRNBOIM
  • Publication number: 20180220048
    Abstract: Systems and methods for foreground/background separation and for studio production of a FVV. A method includes projecting, onto objects in a filming area within a studio, a predefined pattern including a large set of features; generating, based on signals reflected off of the objects and captured by a plurality of depth cameras deployed in proximity to the filming area, a local point cloud for each depth camera; separating, based on the local point clouds, between a background and a foreground of the filming area; creating, based on the local point clouds, a unified point cloud; meshing points in the unified point cloud to generate a 3D model of the objects; texturing the 3D model based on the separation and images captured by the depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to at least one viewpoint.
    Type: Application
    Filed: January 29, 2018
    Publication date: August 2, 2018
    Applicant: Tetavi Ltd.
    Inventors: Michael TAMIR, Michael BIRNBOIM, David DREIZNER, Michael PRIVEN, Vsevolod KAGARLITSKY
  • Publication number: 20180220125
    Abstract: Methods and systems for generating free viewpoint videos (FVVs) based on images captured in a sports arena. A method includes projecting, onto objects within a filming area within the sports arena, a predefined pattern including a large set of features; generating, based on signals captured by each of a plurality of depth cameras, a point cloud for each depth camera, wherein the plurality of depth cameras is deployed in proximity to the filming area, wherein the captured signals are reflected off of the objects within the filming area; creating, based on the plurality of point clouds, a unified point cloud; meshing points in the unified point cloud to generate a three-dimensional (3D) model of the objects; texturing the 3D model based on images captured by the plurality of depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to a viewpoint.
    Type: Application
    Filed: January 25, 2018
    Publication date: August 2, 2018
    Applicant: Tetavi Ltd.
    Inventors: Michael TAMIR, Michael BIRNBOIM, David DREIZNER, Michael PRIVEN, Vsevolod KAGARLITSKY