Patents by Inventor Michael Birnboim

Michael Birnboim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240046551
    Abstract: A method for generating a volumetric image of a subject from at least one two-dimensional (2D) image, said at least one 2D image having a limited number of viewpoints, said volumetric image being insertable into an environment.
    Type: Application
    Filed: July 31, 2023
    Publication date: February 8, 2024
    Inventors: Matan EFRIMA, Amir GREEN, Vsevolod KAGARLITSKY, Michael BIRNBOIM, Gilad TALMON
  • Patent number: 11893688
    Abstract: A computer-implemented method of smoothing a transition between two mesh sequences to be rendered successively comprises steps of: (a) providing first and second mesh sequences {1} and {2} formed of mesh frames, respectively, to be fused into a fused sequence; (b) selecting mesh frames gn and gm being candidates for fusing therebetween; calculating geometric rigid and/or non-rigid transformations of candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; applying calculated geometric rigid and/or non-rigid transformations to candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; calculating textural transformations of said candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; and applying calculated textural transformations to said candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}.
    Type: Grant
    Filed: December 15, 2021
    Date of Patent: February 6, 2024
    Assignee: TETAVI LTD.
    Inventors: Sefy Kagarlitsky, Shirley Keinan, Amir Green, Yair Baruch, Roi Lev, Michael Birnboim, Miky Tamir
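The rigid part of the frame-to-frame transformation described above could, for instance, be estimated with the Kabsch algorithm; the sketch below is illustrative only (not the patented method) and assumes the two candidate mesh frames are given as vertex arrays with known one-to-one correspondence:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate the rotation R and translation t that best map the
    vertices `src` onto `dst` in the least-squares sense (Kabsch)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Applying the recovered (R, t) to the first candidate frame brings it into alignment with the second before any non-rigid or textural adjustment.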
  • Publication number: 20230252657
    Abstract: A method for generating metadata, for texels in an object in a volumetric video, to accompany the volumetric video. The method comprises steps of: inputting a 2D representation of the object; identifying areas in the representation that have the same one or more properties with respect to light; and generating input material groups, where all texels in each input material group have the same properties with respect to light. There exists a correspondence between at least part of the input representation and at least part of the object in the volumetric video, and hence between texels in the representation and texels in the object, so that output material groups can be generated from the input material groups and the properties with respect to light can be stored with the volumetric video as metadata.
    Type: Application
    Filed: February 1, 2023
    Publication date: August 10, 2023
    Inventors: Yair BARUCH, Vsevolod KAGARLITSKY, Shirley KEINAN, Amir GREEN, Yigal EILAM, Matan EFRIMA, Michael BIRNBOIM, Gilad TALMON
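The grouping step above amounts to partitioning texels by identical light-response properties. A minimal sketch (the property names are illustrative, not from the patent):

```python
from collections import defaultdict

def material_groups(texels):
    """Group texel ids by identical light-response properties.
    `texels` maps texel id -> a tuple of properties, e.g.
    (albedo, roughness); texels sharing a tuple form one group."""
    groups = defaultdict(list)
    for texel_id, props in texels.items():
        groups[props].append(texel_id)
    return dict(groups)
```

Each resulting group can then be carried alongside the volumetric video as metadata, with its shared properties stored once per group.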
  • Patent number: 11665308
    Abstract: Methods and systems for generating free viewpoint videos (FVVs) based on images captured in a sports arena. A method includes projecting, onto objects within a filming area within the sports arena, a predefined pattern including a large set of features; generating, based on signals captured by each of a plurality of depth cameras, a point cloud for each depth camera, wherein the plurality of depth cameras is deployed in proximity to the filming area, wherein the captured signals are reflected off of the objects within the filming area; creating, based on the plurality of point clouds, a unified point cloud; meshing points in the unified point cloud to generate a three-dimensional (3D) model of the objects; texturing the 3D model based on images captured by the plurality of depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to a viewpoint.
    Type: Grant
    Filed: January 25, 2018
    Date of Patent: May 30, 2023
    Assignee: TETAVI, LTD.
    Inventors: Michael Tamir, Michael Birnboim, David Dreizner, Michael Priven, Vsevolod Kagarlitsky
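The "unified point cloud" step in the pipeline above can be pictured as transforming each camera's local cloud into a shared world frame and concatenating; a minimal sketch, assuming calibrated camera poses are available:

```python
import numpy as np

def unify_point_clouds(clouds, poses):
    """Transform each depth camera's local point cloud into a shared
    world frame and concatenate them into one unified cloud.
    `clouds`: list of (N_i, 3) arrays in camera coordinates;
    `poses`: list of (R, t) pairs giving each camera's world pose."""
    world = [pts @ R.T + t for pts, (R, t) in zip(clouds, poses)]
    return np.vstack(world)
```

The unified cloud is then meshed and textured in the subsequent steps.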
  • Patent number: 11632489
    Abstract: Systems and methods for foreground/background separation and for studio production of a FVV. A method includes projecting, onto objects in a filming area within a studio, a predefined pattern including a large set of features; generating, based on signals reflected off of the objects and captured by a plurality of depth cameras deployed in proximity to the filming area, a local point cloud for each depth camera; separating, based on the local point clouds, between a background and a foreground of the filming area; creating, based on the local point clouds, a unified point cloud; meshing points in the unified point cloud to generate a 3D model of the objects; texturing the 3D model based on the separation and images captured by the depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to at least one viewpoint.
    Type: Grant
    Filed: January 29, 2018
    Date of Patent: April 18, 2023
    Assignee: TETAVI, LTD.
    Inventors: Michael Tamir, Michael Birnboim, David Dreizner, Michael Priven, Vsevolod Kagarlitsky
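One simple way to separate foreground from background in a depth-based cloud (illustrative only, not the patented separation method) is to compare each point's depth against a pre-captured background reference, shown here as a scalar for brevity:

```python
import numpy as np

def separate_foreground(points, bg_depth, axis=2, margin=0.05):
    """Split a local point cloud into foreground and background.
    Points closer to the camera than the background reference
    `bg_depth` by more than `margin` are treated as foreground."""
    depth = points[:, axis]
    fg_mask = depth < (bg_depth - margin)
    return points[fg_mask], points[~fg_mask]
```

The separation can then inform texturing, as in the abstract above, so background pixels do not bleed into the foreground model.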
  • Publication number: 20230050535
    Abstract: A method for generating one or more 3D models of at least one living object from at least one 2D image comprising the at least one living object. The one or more 3D models can be modified and enhanced. The resulting one or more 3D models can be transformed into at least one 2D display image; the point of view of the output 2D image(s) can be different from that of the input 2D image(s).
    Type: Application
    Filed: January 6, 2022
    Publication date: February 16, 2023
    Inventors: Vsevolod KAGARLITSKY, Shirley KEINAN, Amir GREEN, Yair BARUCH, Roi LEV, Michael BIRNBOIM, Michael TAMIR
  • Patent number: 11574443
    Abstract: A method and system for improving a three-dimensional (3D) representation of objects using semantic data. The method comprises receiving an input data generated in response to captured video in a filming area; setting at least one parameter for each region in the input data; and generating a 3D representation based in part on the at least one parameter and semantic data associated with the input data.
    Type: Grant
    Filed: March 10, 2021
    Date of Patent: February 7, 2023
    Assignee: Tetavi Ltd.
    Inventors: Michael Tamir, Gilad Talmon, Vsevolod Kagarlitsky, Shirley Keinan, David Drezner, Yair Baruch, Michael Birnboim
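Setting "at least one parameter for each region" based on semantic data can be sketched as a label-driven lookup; the labels and parameter names below are illustrative assumptions, not from the patent:

```python
def parameters_for_regions(region_labels, defaults, overrides):
    """Assign a reconstruction parameter set to each region based on
    its semantic label. `overrides` maps labels (e.g. 'face') to
    parameter dicts that refine `defaults`."""
    params = {}
    for region_id, label in region_labels.items():
        p = dict(defaults)
        p.update(overrides.get(label, {}))
        params[region_id] = p
    return params
```

For example, a region labelled as a face might receive a higher mesh-detail parameter than a region labelled as floor.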
  • Publication number: 20220385941
    Abstract: The present invention generally pertains to systems, methods, and non-transitory processor-readable mediums for ensuring a match between geometry and texture when playing volumetric videos in a web browser.
    Type: Application
    Filed: May 25, 2022
    Publication date: December 1, 2022
    Inventors: Ofer RUBINSTEIN, Yigal EILAM, Michael BIRNBOIM, Vsevolod KAGARLITSKY, Gilad TALMON, Michael TAMIR
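Keeping geometry and texture matched during playback can be pictured as only ever presenting a frame index present in both streams' buffers; this is a heavily simplified sketch of the general idea, not the patented mechanism:

```python
def next_synced_frame(geometry_buffer, texture_buffer):
    """Return the next frame index present in both buffers, dropping
    earlier unmatched frames, so geometry and texture are always
    rendered from the same frame. Returns None if no frame matches."""
    common = set(geometry_buffer) & set(texture_buffer)
    if not common:
        return None
    idx = min(common)
    geometry_buffer[:] = [i for i in geometry_buffer if i >= idx]
    texture_buffer[:] = [i for i in texture_buffer if i >= idx]
    return idx
```

In a browser player the two buffers would typically be fed by separate download or decode paths, which is why they can drift apart in the first place.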
  • Publication number: 20220309733
    Abstract: System and method for texturing a 3D surface using 2D images sourced from a plurality of imaging devices. System and method for applying a realistic texture to a model, based on texture found in one or more two-dimensional (2D) images of the object, with the texture covering the entire 3D model even if there are portions of the object that were invisible in the 2D image. System and method which does not require machine learning, which can blend between images, and which can fill in portions of a 3D model that are invisible in the 2D image.
    Type: Application
    Filed: March 28, 2022
    Publication date: September 29, 2022
    Inventors: Vsevolod KAGARLITSKY, Shirley KEINAN, Michael BIRNBOIM, Michal HEKER, Gilad TALMON, Michael TAMIR
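Blending between source images at a surface point is commonly done by weighting each camera by how directly it faces the surface; this cosine-weighting sketch is a standard heuristic offered for illustration, not the patented method:

```python
import numpy as np

def blend_texture(colors, normal, view_dirs):
    """Blend per-camera colour samples for one surface point.
    `colors`: one colour per camera; `normal`: unit surface normal;
    `view_dirs`: unit vectors from the point toward each camera.
    Cameras viewing the point edge-on or from behind get zero weight."""
    colors = np.asarray(colors, dtype=float)
    w = np.clip([np.dot(normal, d) for d in view_dirs], 0.0, None)
    if w.sum() == 0.0:
        return colors.mean(axis=0)  # point invisible to all cameras
    return np.average(colors, axis=0, weights=w)
```

Points invisible to every camera (zero total weight) are where some form of fill-in, as mentioned in the abstract, becomes necessary.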
  • Publication number: 20220217321
    Abstract: A computer-implemented method of generating a database for training a neural network configured for converting 2D images into 3D models, comprising steps of: (a) obtaining 3D models; (b) rendering said 3D models in a 2D format from at least one viewpoint; and (c) collecting pairs, each comprising a rendered 2D image frame and the corresponding sampled 3D model.
    Type: Application
    Filed: December 30, 2021
    Publication date: July 7, 2022
    Inventors: Vsevolod KAGARLITSKY, Shirley KEINAN, Michael BIRNBOIM, Amir GREEN, Alik MOKEICHEV, Michal HEKER, Yair BARUCH, Gil WOHLSTADTER, Gilad TALMON, Michael TAMIR
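Steps (a)-(c) above reduce to a nested loop over models and viewpoints; in this sketch `render` is a caller-supplied stand-in for an actual renderer, not part of the patent:

```python
def build_training_pairs(models, viewpoints, render):
    """Build (2D render, 3D model) training pairs by rendering each
    3D model from every viewpoint. `render(model, viewpoint)` returns
    a 2D image frame of `model` seen from `viewpoint`."""
    pairs = []
    for model in models:
        for vp in viewpoints:
            pairs.append((render(model, vp), model))
    return pairs
```

The resulting pairs supply supervised input/target examples for training the 2D-to-3D network.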
  • Patent number: 11373354
    Abstract: A system and method for creating 3D graphics representations from video. The method includes: generating a skeletal model for each of at least one non-rigid object shown in a video feed, wherein the video feed illustrates a sports event in which at least one of the non-rigid objects is moving; determining at least one 3D rigged model for the at least one skeletal model; and rendering the at least one skeletal model as a 3D representation of the sports event, wherein rendering the 3D skeletal model further comprises wrapping each of at least one 3D skeletal model with one of the at least one 3D rigged model, each 3D skeletal model corresponding to one of the at least one skeletal model, wherein each 3D rigged model is moved according to the movement of the respective skeletal model when the 3D skeletal model is wrapped with the 3D rigged model.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: June 28, 2022
    Assignee: Track160, Ltd.
    Inventors: Michael Tamir, Michael Birnboim, Yaacov Chernoi, Antonio Dello Iacono, Tamir Anavi, Michael Priven, Alexander Yudashkin
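Moving a rigged model according to a skeletal model is typically done with skinning weights; the sketch below uses linear blend skinning with joint translations only, a deliberate simplification of the full joint transforms implied by the abstract:

```python
import numpy as np

def wrap_rigged_model(vertices, weights, joint_offsets):
    """Move each vertex of a rigged model by the skinning-weighted sum
    of its joints' frame-to-frame offsets.
    `vertices`: (V, 3); `weights`: (V, J) with rows summing to 1;
    `joint_offsets`: (J, 3) per-joint translation for this frame."""
    return vertices + weights @ joint_offsets
```

Each frame of the tracked skeleton yields new joint offsets, so the wrapped 3D model follows the motion of the player it represents.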
  • Publication number: 20220189115
    Abstract: A computer-implemented method of smoothing a transition between two mesh sequences to be rendered successively comprises steps of: (a) providing first and second mesh sequences {1} and {2} formed of mesh frames, respectively, to be fused into a fused sequence; (b) selecting mesh frames gn and gm being candidates for fusing therebetween; calculating geometric rigid and/or non-rigid transformations of candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; applying calculated geometric rigid and/or non-rigid transformations to candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; calculating textural transformations of said candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}; and applying calculated textural transformations to said candidate frames gn and gm belonging to said first and second mesh sequences {1} and {2}.
    Type: Application
    Filed: December 15, 2021
    Publication date: June 16, 2022
    Inventors: Sefy KAGARLITSKY, Shirley KEINAN, Amir GREEN, Yair BARUCH, Roi LEV, Michael BIRNBOIM, Miky TAMIR
  • Patent number: 11348255
    Abstract: A method and system for tracking movements of objects in a sports activity are provided. The method includes matching video captured by at least one camera with sensory data captured by each of a plurality of tags, wherein each of the at least one camera is deployed in proximity to a monitored area, wherein each of the plurality of tags is disposed on an object of a plurality of monitored objects moving within the monitored area; and determining, based on the video and sensory data, at least one performance profile for each of the monitored objects, wherein each performance profile is determined based on positions of the respective monitored object moving within the monitored area.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: May 31, 2022
    Assignee: Track160, Ltd.
    Inventors: Michael Tamir, Michael Birnboim, Antonio Dello Iacono, Yaacov Chernoi, Tamir Anavi, Michael Priven, Alexander Yudashkin
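Matching video with per-tag sensory data usually comes down to pairing each sensor sample with the nearest video frame by timestamp; a minimal sketch, assuming both streams share a clock and are sorted:

```python
import bisect

def match_samples_to_frames(frame_times, sample_times):
    """For each sensor sample timestamp, return the index of the
    nearest video frame. Both lists are sorted ascending."""
    matches = []
    for t in sample_times:
        i = bisect.bisect_left(frame_times, t)
        if i == 0:
            matches.append(0)
        elif i == len(frame_times):
            matches.append(len(frame_times) - 1)
        else:
            before, after = frame_times[i - 1], frame_times[i]
            matches.append(i if after - t < t - before else i - 1)
    return matches
```

Once samples and frames are paired, positions from both sources can be fused into the per-object performance profiles the abstract describes.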
  • Publication number: 20210304495
    Abstract: A method and system for improving a three-dimensional (3D) representation of objects using semantic data. The method comprises receiving an input data generated in response to captured video in a filming area; setting at least one parameter for each region in the input data; and generating a 3D representation based in part on the at least one parameter and semantic data associated with the input data.
    Type: Application
    Filed: March 10, 2021
    Publication date: September 30, 2021
    Applicant: Tetavi Ltd.
    Inventors: Michael TAMIR, Gilad TALMON, Vsevolod KAGARLITSKY, Shirley KEINAN, David DREZNER, Yair BARUCH, Michael BIRNBOIM
  • Publication number: 20200193671
    Abstract: A system and method for creating 3D graphics representations from video. The method includes: generating a skeletal model for each of at least one non-rigid object shown in a video feed, wherein the video feed illustrates a sports event in which at least one of the non-rigid objects is moving; determining at least one 3D rigged model for the at least one skeletal model; and rendering the at least one skeletal model as a 3D representation of the sports event, wherein rendering the 3D skeletal model further comprises wrapping each of at least one 3D skeletal model with one of the at least one 3D rigged model, each 3D skeletal model corresponding to one of the at least one skeletal model, wherein each 3D rigged model is moved according to the movement of the respective skeletal model when the 3D skeletal model is wrapped with the 3D rigged model.
    Type: Application
    Filed: February 25, 2020
    Publication date: June 18, 2020
    Applicant: Track160, Ltd.
    Inventors: Michael TAMIR, Michael BIRNBOIM, Yaacov CHERNOI, Antonio Dello IACONO, Tamir ANAVI, Michael PRIVEN, Alexander YUDASHKIN
  • Publication number: 20180350084
    Abstract: A method and system for tracking movements of objects in a sports activity are provided. The method includes matching video captured by at least one camera with sensory data captured by each of a plurality of tags, wherein each of the at least one camera is deployed in proximity to a monitored area, wherein each of the plurality of tags is disposed on an object of a plurality of monitored objects moving within the monitored area; and determining, based on the video and sensory data, at least one performance profile for each of the monitored objects, wherein each performance profile is determined based on positions of the respective monitored object moving within the monitored area.
    Type: Application
    Filed: June 4, 2018
    Publication date: December 6, 2018
    Applicant: Track160, Ltd.
    Inventors: Michael TAMIR, Michael BIRNBOIM, Antonio Dello IACONO, Yaacov CHERNOI, Tamir ANAVI, Michael PRIVEN, Alexander YUDASHKIN
  • Publication number: 20180220125
    Abstract: Methods and systems for generating free viewpoint videos (FVVs) based on images captured in a sports arena. A method includes projecting, onto objects within a filming area within the sports arena, a predefined pattern including a large set of features; generating, based on signals captured by each of a plurality of depth cameras, a point cloud for each depth camera, wherein the plurality of depth cameras is deployed in proximity to the filming area, wherein the captured signals are reflected off of the objects within the filming area; creating, based on the plurality of point clouds, a unified point cloud; meshing points in the unified point cloud to generate a three-dimensional (3D) model of the objects; texturing the 3D model based on images captured by the plurality of depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to a viewpoint.
    Type: Application
    Filed: January 25, 2018
    Publication date: August 2, 2018
    Applicant: Tetavi Ltd.
    Inventors: Michael TAMIR, Michael BIRNBOIM, David DREIZNER, Michael PRIVEN, Vsevolod KAGARLITSKY
  • Publication number: 20180220048
    Abstract: Systems and methods for foreground/background separation and for studio production of a FVV. A method includes projecting, onto objects in a filming area within a studio, a predefined pattern including a large set of features; generating, based on signals reflected off of the objects and captured by a plurality of depth cameras deployed in proximity to the filming area, a local point cloud for each depth camera; separating, based on the local point clouds, between a background and a foreground of the filming area; creating, based on the local point clouds, a unified point cloud; meshing points in the unified point cloud to generate a 3D model of the objects; texturing the 3D model based on the separation and images captured by the depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to at least one viewpoint.
    Type: Application
    Filed: January 29, 2018
    Publication date: August 2, 2018
    Applicant: Tetavi Ltd.
    Inventors: Michael TAMIR, Michael BIRNBOIM, David DREIZNER, Michael PRIVEN, Vsevolod KAGARLITSKY
  • Publication number: 20170287521
    Abstract: Disclosed are methods, circuits, devices, systems and associated computer executable code for composing composite content. According to embodiments, there is provided an authoring device which may facilitate acquisition or generation of one or more content segments at least partially based on one or more portions of a composite content authoring template. According to further embodiments, content segments produced by the authoring device may be automatically processed in accordance with instructions embedded within the same template used by the authoring device.
    Type: Application
    Filed: June 20, 2017
    Publication date: October 5, 2017
    Applicant: Showbox Ltd.
    Inventors: Efraim ATAD, Tomer AFEK, Doron SEGEV, Yaron WAXMAN, Michael BIRNBOIM
  • Patent number: 9715900
    Abstract: Disclosed are methods, circuits, devices, systems and associated computer executable code for composing composite content. According to embodiments, there is provided an authoring device which may facilitate acquisition or generation of one or more content segments at least partially based on one or more portions of a composite content authoring template. According to further embodiments, content segments produced by the authoring device may be automatically processed in accordance with instructions embedded within the same template used by the authoring device.
    Type: Grant
    Filed: July 16, 2014
    Date of Patent: July 25, 2017
    Assignee: Showbox Ltd.
    Inventors: Effi Atad, Tomer Afek, Doron Segev, Yaron Waxman, Michael Birnboim