Patents by Inventor Michael Birnboim
Michael Birnboim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Publication number: 20240307738
Abstract: A computer-implemented system for assisting a sports game analyst comprises: a user interface operable to interact with a user; a memory storing records of positions of sports game players and a game object within the playing ground; and a processor cooperatively operable with the user interface and memory. The processor is configured to execute a trained artificial intelligence algorithm, querying the real-time positions of the sports game players and game object and predicting their future positions within the playing ground.
Type: Application
Filed: July 14, 2022
Publication date: September 19, 2024
Inventors: Michael TAMIR, Tamir ANAVI, Ariel GREISAS, Michael BIRNBOIM, Slava CHERNOI, Alex YUDASHKIN
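The abstract does not disclose the prediction model itself. As a minimal illustrative sketch of the query-then-predict interface, the hypothetical function below stands in for the trained AI algorithm with a constant-velocity extrapolation over one tracked object (all names are assumptions, not from the patent):

```python
# Minimal sketch: predict a future position of a tracked player or
# ball. The patent uses a trained AI model; constant-velocity
# extrapolation here only illustrates the query/predict interface.

def predict_position(track, horizon=1.0):
    """track: time-ordered list of (t, x, y) samples for one object.
    Returns the extrapolated (x, y) at last timestamp + horizon."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * horizon, y1 + vy * horizon)

ball_track = [(0.0, 10.0, 5.0), (1.0, 12.0, 6.0)]
future = predict_position(ball_track, horizon=2.0)  # -> (16.0, 8.0)
```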
Publication number: 20240282046
Abstract: Systems and methods for generating a volumetric video in which all frames are temporally coherent.
Type: Application
Filed: February 16, 2023
Publication date: August 22, 2024
Inventors: Vsevolod KAGARLITSKY, Shirley KEINAN, Amir GREEN, Michal HEKER, Michael BIRNBOIM, Gilad TALMON
Publication number: 20240282054
Abstract: A method for finding a deformation field that transforms a source frame of a volumetric video into a target frame, comprising the steps of building a texture implicit function for the target frame and training a neural network to generate the deformation field between the source and target frames, the texture implicit function serving as a texture-matching loss for the neural network.
Type: Application
Filed: February 16, 2023
Publication date: August 22, 2024
Inventors: Vsevolod KAGARLITSKY, Shirley KEINAN, Amir GREEN, Michal HEKER, Michael BIRNBOIM, Gilad TALMON
Publication number: 20240267609
Abstract: The disclosure relates to a method for video capturing, by a camera-equipped drone, of the activity of at least one athlete practicing in an area. The method includes the drone hovering in a pre-specified flight zone; the camera capturing video streams of the area and the practicing athlete; analyzing the video streams to assess the athlete's actions and decide on desired shooting parameters; determining desired drone maneuvering and camera alignment; and the drone executing navigation instructions and camera parameter alignment in accordance with that determination. The method may also include the drone issuing a signal for the attention of the athlete.
Type: Application
Filed: June 9, 2022
Publication date: August 8, 2024
Inventors: Miky TAMIR, Michael BIRNBOIM, Gil SHAMAI, Shaked DOVRAT
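The maneuvering step can be pictured as a simple follow controller: move the drone along the line toward the athlete until a desired standoff distance is reached, keeping the camera aimed at the athlete. This is a hypothetical sketch of that one step, not the patented method; the function name, standoff parameter, and 2D simplification are all assumptions:

```python
# Illustrative follow controller for the drone maneuvering step
# (2D, translation only; all names hypothetical).

def next_drone_command(athlete_xy, drone_xy, standoff=5.0):
    """Return (move_vector, look_at): the translation that closes the
    gap to the standoff distance, and the point to aim the camera at."""
    dx, dy = athlete_xy[0] - drone_xy[0], athlete_xy[1] - drone_xy[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0:
        return ((0.0, 0.0), athlete_xy)
    excess = dist - standoff  # negative -> back away from the athlete
    move = (dx / dist * excess, dy / dist * excess)
    return (move, athlete_xy)
```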
Patent number: 12020363
Abstract: A system and method for texturing a 3D surface using 2D images sourced from a plurality of imaging devices. The system applies a realistic texture to a model based on texture found in one or more two-dimensional (2D) images of the object, with the texture covering the entire 3D model even if portions of the object are invisible in the 2D images. The approach does not require machine learning, can blend between images, and can fill in portions of the 3D model that are invisible in the 2D images.
Type: Grant
Filed: March 28, 2022
Date of Patent: June 25, 2024
Assignee: TETAVI LTD.
Inventors: Vsevolod Kagarlitsky, Shirley Keinan, Michael Birnboim, Michal Heker, Gilad Talmon, Michael Tamir
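Blending between images is commonly done by weighting each camera's color contribution by how directly it sees the surface point. The sketch below illustrates one such weighting (cosine of the angle between surface normal and view direction); it is an assumed, minimal stand-in, not the patented algorithm:

```python
# Sketch of view-weighted texture blending for one surface point.
# Points seen by no camera return None and would be filled from
# textured neighbors in a later pass. All names are hypothetical.

def blend_color(normal, views):
    """normal: unit surface normal. views: list of (view_dir, color),
    view_dir being the unit vector from the point toward the camera.
    Returns the visibility-weighted average color, or None if unseen."""
    total_w, acc = 0.0, [0.0, 0.0, 0.0]
    for view_dir, color in views:
        w = sum(n * v for n, v in zip(normal, view_dir))  # cosine weight
        if w > 0:  # camera is in front of the surface
            total_w += w
            acc = [a + w * c for a, c in zip(acc, color)]
    if total_w == 0:
        return None  # invisible from every camera
    return tuple(a / total_w for a in acc)
```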
Publication number: 20240046551
Abstract: A method for generating a volumetric image of a subject from at least one two-dimensional (2D) image having a limited number of viewpoints, the volumetric image being insertable into an environment.
Type: Application
Filed: July 31, 2023
Publication date: February 8, 2024
Inventors: Matan EFRIMA, Amir GREEN, Vsevolod KAGARLITSKY, Michael BIRNBOIM, Gilad TALMON
Patent number: 11893688
Abstract: A computer-implemented method of smoothing a transition between two mesh sequences to be rendered successively comprises steps of: (a) providing first and second mesh sequences {1} and {2}, each formed of mesh frames, to be fused into a fused sequence; (b) selecting mesh frames gn and gm as candidates for fusing; calculating geometric rigid and/or non-rigid transformations of the candidate frames gn and gm; applying the calculated geometric transformations to them; calculating textural transformations of the candidate frames; and applying the calculated textural transformations.
Type: Grant
Filed: December 15, 2021
Date of Patent: February 6, 2024
Assignee: TETAVI LTD.
Inventors: Sefy Kagarlitsky, Shirley Keinan, Amir Green, Yair Baruch, Roi Lev, Michael Birnboim, Miky Tamir
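The simplest rigid alignment between two candidate frames is a translation that matches their centroids before fusing. The sketch below shows only that step, under the stated simplification (real pipelines also solve for rotation and non-rigid residuals); function and variable names are illustrative, not from the patent:

```python
# Minimal sketch of the geometric alignment step between candidate
# frames gn and gm: translate gm so its centroid coincides with gn's.

def centroid(verts):
    n = len(verts)
    return tuple(sum(v[i] for v in verts) / n for i in range(3))

def align_frames(gn_verts, gm_verts):
    """Return gm's vertices shifted onto gn's centroid (rigid
    translation only; rotation/non-rigid terms omitted)."""
    cn, cm = centroid(gn_verts), centroid(gm_verts)
    shift = tuple(a - b for a, b in zip(cn, cm))
    return [tuple(v[i] + shift[i] for i in range(3)) for v in gm_verts]
```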
Publication number: 20230252657
Abstract: A method for generating metadata to accompany a volumetric video for texels in an object in a volumetric video. The method comprises steps of: inputting a 2D representation of the object; identifying areas in the representation that have the same one or more properties with respect to light; and generating input material groups, where all texels in each input material group have the same properties with respect to light. There exists a correspondence between at least part of the input representation and at least part of the object in the volumetric video, so that there is a correspondence between texels in the representation and texels in the object, so that output material groups can be generated from the input material groups and the properties with respect to light can be stored with the volumetric video as metadata.
Type: Application
Filed: February 1, 2023
Publication date: August 10, 2023
Inventors: Yair BARUCH, Vsevolod KAGARLITSKY, Shirley KEINAN, Amir GREEN, Yigal EILAM, Matan EFRIMA, Michael BIRNBOIM, Gilad TALMON
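The grouping step amounts to partitioning texels by identical light properties. A minimal sketch, assuming each texel carries a hypothetical (albedo, roughness) property pair (the patent does not name the properties):

```python
# Sketch of building material groups: texels with identical
# light-response properties fall into the same group, which can then
# be stored as metadata alongside the volumetric video.

def material_groups(texels):
    """texels: dict texel_id -> properties tuple (e.g. albedo,
    roughness). Returns dict properties -> sorted list of texel ids."""
    groups = {}
    for tid, props in texels.items():
        groups.setdefault(props, []).append(tid)
    return {k: sorted(v) for k, v in groups.items()}
```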
Patent number: 11665308
Abstract: Methods and systems for generating free viewpoint videos (FVVs) based on images captured in a sports arena. A method includes projecting, onto objects within a filming area within the sports arena, a predefined pattern including a large set of features; generating, based on signals captured by each of a plurality of depth cameras, a point cloud for each depth camera, wherein the plurality of depth cameras is deployed in proximity to the filming area, wherein the captured signals are reflected off of the objects within the filming area; creating, based on the plurality of point clouds, a unified point cloud; meshing points in the unified point cloud to generate a three-dimensional (3D) model of the objects; texturing the 3D model based on images captured by the plurality of depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to a viewpoint.
Type: Grant
Filed: January 25, 2018
Date of Patent: May 30, 2023
Assignee: TETAVI, LTD.
Inventors: Michael Tamir, Michael Birnboim, David Dreizner, Michael Priven, Vsevolod Kagarlitsky
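Before meshing, the per-camera point clouds must be merged into one cloud in a shared arena frame. A minimal sketch of that merge step, simplified to translation-only camera poses (a real pipeline would apply full rigid transforms; all names are assumptions):

```python
# Sketch of unifying per-camera point clouds: transform each local
# cloud into the shared arena frame using that camera's pose
# (translation-only here) and merge, deduplicating coincident points.

def unify_clouds(clouds):
    """clouds: list of (camera_offset, points) pairs, where points are
    (x, y, z) in the camera's local frame. Returns world-frame points."""
    unified = set()
    for (ox, oy, oz), points in clouds:
        for (x, y, z) in points:
            unified.add((x + ox, y + oy, z + oz))
    return unified
```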
Patent number: 11632489
Abstract: Systems and methods for foreground/background separation and for studio production of a FVV. A method includes projecting, onto objects in a filming area within a studio, a predefined pattern including a large set of features; generating, based on signals reflected off of the objects and captured by a plurality of depth cameras deployed in proximity to the filming area, a local point cloud for each depth camera; separating, based on the local point clouds, between a background and a foreground of the filming area; creating, based on the local point clouds, a unified point cloud; meshing points in the unified point cloud to generate a 3D model of the objects; texturing the 3D model based on the separation and images captured by the depth cameras; and rendering the textured 3D model as a FVV including a series of video frames with respect to at least one viewpoint.
Type: Grant
Filed: January 29, 2018
Date of Patent: April 18, 2023
Assignee: TETAVI, LTD.
Inventors: Michael Tamir, Michael Birnboim, David Dreizner, Michael Priven, Vsevolod Kagarlitsky
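With depth cameras, a common way to separate foreground from background is to compare each pixel's depth against a captured empty-studio background map: anything measurably closer is foreground. This is a generic sketch of that idea, not the patented method; the threshold value and names are assumptions:

```python
# Sketch of depth-based foreground/background separation: a pixel is
# foreground if it is closer than the empty-studio background depth
# at the same pixel by more than a tolerance (in metres).

def separate(depth, background, threshold=0.05):
    """depth, background: dicts pixel -> metres.
    Returns (foreground, background) pixel-id sets."""
    fg, bg = set(), set()
    for px, d in depth.items():
        if background[px] - d > threshold:
            fg.add(px)
        else:
            bg.add(px)
    return fg, bg
```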
Publication number: 20230050535
Abstract: A method for generating one or more 3D models of at least one living object from at least one 2D image comprising the at least one living object. The one or more 3D models can be modified and enhanced. The resulting one or more 3D models can be transformed into at least one 2D display image; the point of view of the output 2D image(s) can be different from that of the input 2D image(s).
Type: Application
Filed: January 6, 2022
Publication date: February 16, 2023
Inventors: Vsevolod KAGARLITSKY, Shirley KEINAN, Amir GREEN, Yair BARUCH, Roi LEV, Michael BIRNBOIM, Michael TAMIR
Patent number: 11574443
Abstract: A method and system for improving a three-dimensional (3D) representation of objects using semantic data. The method comprises receiving input data generated in response to video captured in a filming area; setting at least one parameter for each region in the input data; and generating a 3D representation based in part on the at least one parameter and semantic data associated with the input data.
Type: Grant
Filed: March 10, 2021
Date of Patent: February 7, 2023
Assignee: Tetavi Ltd.
Inventors: Michael Tamir, Gilad Talmon, Vsevolod Kagarlitsky, Shirley Keinan, David Drezner, Yair Baruch, Michael Birnboim
Publication number: 20220385941
Abstract: The present invention generally pertains to systems, methods, and non-transitory processor-readable mediums for ensuring a match between geometry and texture when playing volumetric videos in a web browser.
Type: Application
Filed: May 25, 2022
Publication date: December 1, 2022
Inventors: Ofer RUBINSTEIN, Yigal EILAM, Michael BIRNBOIM, Vsevolod KAGARLITSKY, Gilad TALMON, Michael TAMIR
Publication number: 20220309733
Abstract: A system and method for texturing a 3D surface using 2D images sourced from a plurality of imaging devices. The system applies a realistic texture to a model based on texture found in one or more two-dimensional (2D) images of the object, with the texture covering the entire 3D model even if portions of the object are invisible in the 2D images. The approach does not require machine learning, can blend between images, and can fill in portions of the 3D model that are invisible in the 2D images.
Type: Application
Filed: March 28, 2022
Publication date: September 29, 2022
Inventors: Vsevolod KAGARLITSKY, Shirley KEINAN, Michael BIRNBOIM, Michal HEKER, Gilad TALMON, Michael TAMIR
Publication number: 20220217321
Abstract: A computer-implemented method of generating a database for training a neural network configured to convert 2D images into 3D models, comprising steps of: (a) obtaining 3D models; (b) rendering said 3D models in a 2D format from at least one viewpoint; and (c) collecting pairs, each comprising a rendered 2D image frame and the corresponding 3D model.
Type: Application
Filed: December 30, 2021
Publication date: July 7, 2022
Inventors: Vsevolod KAGARLITSKY, Shirley KEINAN, Michael BIRNBOIM, Amir GREEN, Alik MOKEICHEV, Michal HEKER, Yair BARUCH, Gil WOHLSTADTER, Gilad TALMON, Michael TAMIR
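The three steps above reduce to a nested loop: render every model from every viewpoint and collect (2D frame, 3D model) training pairs. A minimal sketch, where `render` is a hypothetical stand-in for a real renderer (the patent does not specify one):

```python
# Sketch of the database-building loop: one (rendered 2D frame,
# source 3D model) pair per model/viewpoint combination.

def build_pairs(models, viewpoints, render):
    """render(model, viewpoint) -> 2D frame (caller-supplied)."""
    pairs = []
    for model in models:
        for vp in viewpoints:
            pairs.append((render(model, vp), model))
    return pairs

# Example with a dummy renderer that just tags the model with the view:
dummy = lambda m, vp: (m, vp)
pairs = build_pairs(["chair"], [(0, 0), (0, 90)], dummy)
```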
Patent number: 11373354
Abstract: A system and method for creating 3D graphics representations from video. The method includes: generating a skeletal model for each of at least one non-rigid object shown in a video feed, wherein the video feed illustrates a sports event in which at least one of the non-rigid objects is moving; determining at least one 3D rigged model for the at least one skeletal model; and rendering the at least one skeletal model as a 3D representation of the sports event, wherein rendering the 3D skeletal model further comprises wrapping each of at least one 3D skeletal model with one of the at least one 3D rigged model, each 3D skeletal model corresponding to one of the at least one skeletal model, wherein each 3D rigged model is moved according to the movement of the respective skeletal model when the 3D skeletal model is wrapped with the 3D rigged model.
Type: Grant
Filed: February 25, 2020
Date of Patent: June 28, 2022
Assignee: Track160, Ltd.
Inventors: Michael Tamir, Michael Birnboim, Yaacov Chernoi, Antonio Dello Iacono, Tamir Anavi, Michael Priven, Alexander Yudashkin
Publication number: 20220189115
Abstract: A computer-implemented method of smoothing a transition between two mesh sequences to be rendered successively comprises steps of: (a) providing first and second mesh sequences {1} and {2}, each formed of mesh frames, to be fused into a fused sequence; (b) selecting mesh frames gn and gm as candidates for fusing; calculating geometric rigid and/or non-rigid transformations of the candidate frames gn and gm; applying the calculated geometric transformations to them; calculating textural transformations of the candidate frames; and applying the calculated textural transformations.
Type: Application
Filed: December 15, 2021
Publication date: June 16, 2022
Inventors: Sefy KAGARLITSKY, Shirley KEINAN, Amir GREEN, Yair BARUCH, Roi LEV, Michael BIRNBOIM, Miky TAMIR
Patent number: 11348255
Abstract: A method and system for tracking movements of objects in a sports activity are provided. The method includes matching video captured by at least one camera with sensory data captured by each of a plurality of tags, wherein each of the at least one camera is deployed in proximity to a monitored area, wherein each of the plurality of tags is disposed on an object of a plurality of monitored objects moving within the monitored area; and determining, based on the video and sensory data, at least one performance profile for each of the monitored objects, wherein each performance profile is determined based on positions of the respective monitored object moving within the monitored area.
Type: Grant
Filed: June 4, 2018
Date of Patent: May 31, 2022
Assignee: Track160, Ltd.
Inventors: Michael Tamir, Michael Birnboim, Antonio Dello Iacono, Yaacov Chernoi, Tamir Anavi, Michael Priven, Alexander Yudashkin
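Matching camera observations to tag readings typically starts with aligning the two streams in time. A minimal sketch of nearest-timestamp matching, the precursor to building per-object performance profiles (the tolerance, field layout, and names are assumptions, not from the patent):

```python
# Sketch of matching video-derived events with tag (sensor) samples
# by nearest timestamp within a tolerance, in seconds.

def match(video_events, tag_samples, tolerance=0.05):
    """Both inputs: lists of (timestamp, payload), time-sorted.
    Returns a list of (video_payload, tag_payload) matched pairs."""
    pairs = []
    for vt, vp in video_events:
        best = min(tag_samples, key=lambda s: abs(s[0] - vt), default=None)
        if best is not None and abs(best[0] - vt) <= tolerance:
            pairs.append((vp, best[1]))
    return pairs
```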
Publication number: 20210304495
Abstract: A method and system for improving a three-dimensional (3D) representation of objects using semantic data. The method comprises receiving input data generated in response to video captured in a filming area; setting at least one parameter for each region in the input data; and generating a 3D representation based in part on the at least one parameter and semantic data associated with the input data.
Type: Application
Filed: March 10, 2021
Publication date: September 30, 2021
Applicant: Tetavi Ltd.
Inventors: Michael TAMIR, Gilad TALMON, Vsevolod KAGARLITSKY, Shirley KEINAN, David DREZNER, Yair BARUCH, Michael BIRNBOIM
Publication number: 20200193671
Abstract: A system and method for creating 3D graphics representations from video. The method includes: generating a skeletal model for each of at least one non-rigid object shown in a video feed, wherein the video feed illustrates a sports event in which at least one of the non-rigid objects is moving; determining at least one 3D rigged model for the at least one skeletal model; and rendering the at least one skeletal model as a 3D representation of the sports event, wherein rendering the 3D skeletal model further comprises wrapping each of at least one 3D skeletal model with one of the at least one 3D rigged model, each 3D skeletal model corresponding to one of the at least one skeletal model, wherein each 3D rigged model is moved according to the movement of the respective skeletal model when the 3D skeletal model is wrapped with the 3D rigged model.
Type: Application
Filed: February 25, 2020
Publication date: June 18, 2020
Applicant: Track160, Ltd.
Inventors: Michael TAMIR, Michael BIRNBOIM, Yaacov CHERNOI, Antonio Dello IACONO, Tamir ANAVI, Michael PRIVEN, Alexander YUDASHKIN