Patents by Inventor Matteo Shapira
Matteo Shapira has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230071981
Abstract: Embodiments of the present disclosure may include a method to augment pilot control of a drone, the method including receiving a planned flight route. Embodiments may also include receiving sensor information from at least one environment sensor along the planned flight route. In some embodiments, the at least one environment sensor may be located at a predefined location. Embodiments may also include estimating a drone location from the sensor information. Embodiments may also include receiving a speed vector of the drone. Embodiments may also include comparing the drone location to an expected drone location along the planned flight route. Embodiments may also include deriving a flight control command and a speed vector command to return the drone to a point along the planned flight route.
Type: Application
Filed: September 9, 2022
Publication date: March 9, 2023
Applicant: XTEND Reality Expansion Ltd.
Inventors: Aviv Shapira, Matteo Shapira, Reuven Rubi Liani, Adir Tubi
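The abstract above compares an estimated drone location to the expected location on the planned route and derives a command to return to the route. A minimal Python sketch of that comparison step, under illustrative assumptions (the nearest-waypoint rule, unit-speed command, and all names are mine, not from the patent):

```python
import math

def correction_command(estimated_pos, route, speed=1.0):
    """Return a velocity command of magnitude `speed` pointing from the
    estimated drone position back toward the nearest waypoint on the
    planned route. Illustrative only; the patent does not specify this API."""
    nearest = min(route, key=lambda wp: math.dist(estimated_pos, wp))
    delta = [w - p for w, p in zip(nearest, estimated_pos)]
    norm = math.hypot(*delta)
    if norm == 0.0:
        return (0.0, 0.0, 0.0)  # already on the route; no correction needed
    return tuple(speed * d / norm for d in delta)
```

A drone drifted to the origin with waypoints at (3, 4, 0) and (10, 0, 0) would be commanded toward the closer one, yielding the unit vector (0.6, 0.8, 0.0).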
-
Patent number: 11463678
Abstract: A system for social interaction using a photo-realistic novel view of an event includes a multi-view reconstruction system for developing transmission data of the event, and a plurality of client-side rendering devices, each rendering device receiving the transmission data from the multi-view reconstruction system and rendering the transmission data as the photo-realistic novel view. A method of social interaction using a photo-realistic novel view of an event includes transmitting by a server side transmission data of the event; receiving by a first user on a first rendering device the data transmission; selecting by the first user a path for rendering on the first rendering device at least one novel view; rendering by the first rendering device the at least one novel view; and saving by the user on the first rendering device novel view data for the at least one novel view.
Type: Grant
Filed: May 29, 2020
Date of Patent: October 4, 2022
Assignee: Intel Corporation
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi, Adi Gilat
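The claimed method has the user select a path along which the client device renders novel views. One way to picture the path-selection step is interpolating user-chosen waypoints into per-frame virtual camera positions; this sketch assumes simple linear interpolation, which the patent does not specify:

```python
def interpolate_path(waypoints, steps_per_segment):
    """Expand a user-selected list of camera waypoints into per-frame
    virtual camera positions by linear interpolation between consecutive
    waypoints (an illustrative stand-in for the patent's path selection)."""
    positions = []
    for a, b in zip(waypoints, waypoints[1:]):
        for i in range(steps_per_segment):
            t = i / steps_per_segment
            positions.append(tuple(ak + t * (bk - ak) for ak, bk in zip(a, b)))
    positions.append(waypoints[-1])  # include the final waypoint exactly
    return positions
```

Each resulting position would then be handed to the renderer to produce one novel-view frame.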
-
Publication number: 20220113720
Abstract: The present invention relates to a system and method to facilitate remote and accurate maneuvering of an unmanned aerial vehicle (UAV) under communication latency. The UAV provides a ground control station (GCS) with a video feed of an area of interest (AOI) a time T after the video is actually captured by the UAV, due to latency in video and communication between the UAV and the GCS. The GCS further receives control commands directly from a controller of the user, and transmits the video feed, along with interactive marker(s), step-by-step marking of singular points using a raycast vector, and/or a virtual UAV overlaid on top of the video feed, to a display module or VR headset associated with the user. This enables the user to assess how much movement the user is exerting on the UAV through the controller, and to see how the actual UAV will perform or maneuver in the AOI after time T, thereby facilitating the user to continuously assess and rectify maneuvering or directing of the UAV in the AOI.
Type: Application
Filed: October 8, 2021
Publication date: April 14, 2022
Applicant: XTEND Reality Expansion Ltd.
Inventors: Aviv Shapira, Matteo Shapira, Reuven Liani, Adir Tubi
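The virtual-UAV overlay described above shows the pilot where the real aircraft will be after the latency T, even though the video feed is T seconds old. The simplest predictor for that overlay is dead reckoning from the last known pose; this sketch assumes constant velocity over the latency window, which is my simplification rather than the patent's stated model:

```python
def predict_pose(last_known_pos, velocity, latency_t):
    """Dead-reckon where the UAV will actually be, given the last position
    visible in the delayed feed and its velocity, so a virtual UAV can be
    drawn at the predicted spot. Constant velocity is an assumed model."""
    return tuple(p + v * latency_t for p, v in zip(last_known_pos, velocity))
```

For example, a UAV last seen at (0, 0, 10) moving at 2 m/s along x with 1.5 s of latency would be overlaid at (3.0, 0.0, 10.0).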
-
Publication number: 20220075370
Abstract: The disclosure relates to systems, methods and programs for maneuvering unmanned vehicles. More specifically, the disclosure relates to systems, methods and programs for controlling the maneuverability of unmanned vehicles (ground, aerial and marine) by coupling vehicle controls with a point of regard (PoR) in a 2D plane, translated to a continuously updating flight vector in 3D space, based on 12 DOF head pose and/or hand gesture of a user.
Type: Application
Filed: January 24, 2020
Publication date: March 10, 2022
Applicant: XTEND Reality Expansion Ltd.
Inventors: Matteo Shapira, Aviv Shapira, Adir Tubi, Rubi Liani, Erez Nehama
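The core idea above is translating a 2D point of regard into a 3D flight vector. One common way to do this is to cast a ray from the gaze pixel through a pinhole camera model; the pinhole assumption, field-of-view parameter, and all names below are illustrative, not taken from the patent:

```python
import math

def por_to_flight_vector(por_xy, screen_wh, fov_deg=90.0, speed=1.0):
    """Translate a 2D point of regard (in pixels) into a 3D unit-scaled
    flight direction by ray-casting through an assumed pinhole camera.
    Gazing at screen center yields a straight-ahead vector (0, 0, 1)."""
    w, h = screen_wh
    f = (w / 2) / math.tan(math.radians(fov_deg) / 2)  # focal length, pixels
    x = por_xy[0] - w / 2  # offset from the optical center
    y = por_xy[1] - h / 2
    norm = math.sqrt(x * x + y * y + f * f)
    return (speed * x / norm, speed * y / norm, speed * f / norm)
```

In a full system this vector would be recomputed every frame as the PoR moves, giving the continuously updating flight vector the abstract describes.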
-
Publication number: 20200404247
Abstract: A system for social interaction using a photo-realistic novel view of an event includes a multi-view reconstruction system for developing transmission data of the event, and a plurality of client-side rendering devices, each rendering device receiving the transmission data from the multi-view reconstruction system and rendering the transmission data as the photo-realistic novel view. A method of social interaction using a photo-realistic novel view of an event includes transmitting by a server side transmission data of the event; receiving by a first user on a first rendering device the data transmission; selecting by the first user a path for rendering on the first rendering device at least one novel view; rendering by the first rendering device the at least one novel view; and saving by the user on the first rendering device novel view data for the at least one novel view.
Type: Application
Filed: May 29, 2020
Publication date: December 24, 2020
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi, Adi Gilat
-
Patent number: 10728528
Abstract: A system for social interaction using a photo-realistic novel view of an event includes a multi-view reconstruction system for developing transmission data of the event, and a plurality of client-side rendering devices, each rendering device receiving the transmission data from the multi-view reconstruction system and rendering the transmission data as the photo-realistic novel view. A method of social interaction using a photo-realistic novel view of an event includes transmitting by a server side transmission data of the event; receiving by a first user on a first rendering device the data transmission; selecting by the first user a path for rendering on the first rendering device at least one novel view; rendering by the first rendering device the at least one novel view; and saving by the user on the first rendering device novel view data for the at least one novel view.
Type: Grant
Filed: April 1, 2015
Date of Patent: July 28, 2020
Assignee: Intel Corporation
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi, Adi Gilat
-
Publication number: 20200145643
Abstract: A method of limiting processing by a 3D reconstruction system of an environment in a 3D reconstruction of an event includes dividing by the subdivision module the volume into sub-volumes; projecting from each camera each of the sub-volumes to create a set of sub-volume masks relative to each camera; creating an imaging mask for each camera; comparing for each camera by the subdivision module the respective imaging mask to the respective sub-volume mask and extracting at least one feature from at least one imaging mask; saving by the subdivision module the at least one feature to a subspace division mask; cropping the at least one feature from the imaging frames using the subspace division mask; and processing only the at least one feature for a 3D reconstruction. The system includes cameras for recording the event in imaging frames; and a subdivision module for dividing the volume into sub-volumes.
Type: Application
Filed: October 9, 2019
Publication date: May 7, 2020
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi, Adi Gilat
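The comparing-and-cropping step above intersects each camera's imaging mask with a projected sub-volume mask so that only the surviving pixels are sent to 3D reconstruction. A toy version of that intersection on binary masks (the mask representation and bounding-box crop rule are my illustrative choices):

```python
def crop_feature(imaging_mask, subvolume_mask):
    """Intersect a camera's imaging mask with a projected sub-volume mask
    and return the (row_min, col_min, row_max, col_max) bounding box of the
    surviving pixels, i.e. the crop to process for 3D reconstruction.
    Masks are equal-size lists of 0/1 rows; None means nothing survives."""
    hits = [(r, c)
            for r, row in enumerate(imaging_mask)
            for c, v in enumerate(row)
            if v and subvolume_mask[r][c]]
    if not hits:
        return None  # this sub-volume contributes nothing for this camera
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return (min(rows), min(cols), max(rows), max(cols))
```

Processing only these crops, rather than full frames, is what limits the reconstruction workload.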
-
Patent number: 10567740
Abstract: A method of generating user-selectable novel views of an event on a viewing device includes reconstructing by a server system for each camera image data into at least one foreground model for the respective camera and an environment model for the respective camera; joining by the server system the foreground model for each camera to create a visual atlas of all foreground models; creating by the server system foreground mapping data for foreground image data in the visual atlas; creating by the server system environment mapping data for environment image data in each respective environment model; transmitting by the server system each compressed data in a sequence it was compressed; receiving by the viewing device all compressed data; uncompressing by the viewing device all compressed data; selecting by a user the novel view; and rendering by the viewing device each novel view.
Type: Grant
Filed: August 27, 2018
Date of Patent: February 18, 2020
Assignee: Intel Corporation
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi, Adi Gilat
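The joining step above packs every camera's foreground model into one visual atlas, with mapping data recording where each camera's pixels landed. A minimal sketch of such packing, assuming a simple side-by-side row layout and crop sizes given as (width, height) pairs (both assumptions are mine; the patent does not fix a layout):

```python
def build_visual_atlas(foreground_crops):
    """Join per-camera foreground crops into one atlas laid out side by
    side, returning the atlas (width, height) and per-camera mapping data
    (the horizontal offset of each camera's region in the atlas)."""
    atlas_width = sum(w for w, _ in foreground_crops)
    atlas_height = max(h for _, h in foreground_crops)
    mapping, x = {}, 0
    for cam, (w, h) in enumerate(foreground_crops):
        mapping[cam] = {"x_offset": x, "width": w, "height": h}
        x += w  # next camera's region starts where this one ends
    return (atlas_width, atlas_height), mapping
```

The viewing device would use the mapping data to look up the right atlas region when rendering a novel view.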
-
Patent number: 10491887
Abstract: A method of limiting processing by a 3D reconstruction system of an environment in a 3D reconstruction of an event includes dividing by the subdivision module the volume into sub-volumes; projecting from each camera each of the sub-volumes to create a set of sub-volume masks relative to each camera; creating an imaging mask for each camera; comparing for each camera by the subdivision module the respective imaging mask to the respective sub-volume mask and extracting at least one feature from at least one imaging mask; saving by the subdivision module the at least one feature to a subspace division mask; cropping the at least one feature from the imaging frames using the subspace division mask; and processing only the at least one feature for a 3D reconstruction. The system includes cameras for recording the event in imaging frames; and a subdivision module for dividing the volume into sub-volumes.
Type: Grant
Filed: December 18, 2017
Date of Patent: November 26, 2019
Assignee: Intel Corporation
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi, Adi Gilat
-
Patent number: 10477189
Abstract: A system for multi-view reconstruction of a photo-realistic rendering of an event includes cameras for imaging the event with image frames; a controller having a CEM module for modeling an environment from image data of the image frames, and an FES module for segmenting a foreground from the environment from image data of the image frames and constructing a 3D data representation; and a configuration engine including a path selection module, the configuration engine for configuring and rendering the photo-realistic rendering along a path selected by a user using the path selection module, the path having at least one novel view image. The photo-realistic rendering has less than a 10% discrepancy between output pixel raster values of the novel view image and the image frames imaged by the cameras.
Type: Grant
Filed: April 1, 2015
Date of Patent: November 12, 2019
Assignee: Intel Corporation
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi, Adi Gilat
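The claim above bounds the rendering quality by a 10% discrepancy between the novel view's pixel raster values and the real camera frames. A toy metric in that spirit, counting the fraction of differing pixels (the exact comparison rule is not given in the abstract, so this per-pixel inequality count is an illustrative stand-in):

```python
def pixel_discrepancy(rendered, reference):
    """Fraction of pixel positions whose raster values differ between a
    rendered novel view and a reference camera frame, both given as
    equal-size 2D lists. A value below 0.10 would satisfy a 10% bound."""
    total = diff = 0
    for row_out, row_ref in zip(rendered, reference):
        for a, b in zip(row_out, row_ref):
            total += 1
            diff += (a != b)  # bool counts as 0 or 1
    return diff / total
```

For instance, one differing pixel out of four gives a discrepancy of 0.25, which would exceed the claimed bound.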
-
Publication number: 20180367788
Abstract: A method of generating user-selectable novel views of an event on a viewing device includes reconstructing by a server system for each camera image data into at least one foreground model for the respective camera and an environment model for the respective camera; joining by the server system the foreground model for each camera to create a visual atlas of all foreground models; creating by the server system foreground mapping data for foreground image data in the visual atlas; creating by the server system environment mapping data for environment image data in each respective environment model; transmitting by the server system each compressed data in a sequence it was compressed; receiving by the viewing device all compressed data; uncompressing by the viewing device all compressed data; selecting by a user the novel view; and rendering by the viewing device each novel view.
Type: Application
Filed: August 27, 2018
Publication date: December 20, 2018
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi, Adi Gilat
-
Publication number: 20180261002
Abstract: A method of limiting processing by a 3D reconstruction system of an environment in a 3D reconstruction of an event includes dividing by the subdivision module the volume into sub-volumes; projecting from each camera each of the sub-volumes to create a set of sub-volume masks relative to each camera; creating an imaging mask for each camera; comparing for each camera by the subdivision module the respective imaging mask to the respective sub-volume mask and extracting at least one feature from at least one imaging mask; saving by the subdivision module the at least one feature to a subspace division mask; cropping the at least one feature from the imaging frames using the subspace division mask; and processing only the at least one feature for a 3D reconstruction. The system includes cameras for recording the event in imaging frames; and a subdivision module for dividing the volume into sub-volumes.
Type: Application
Filed: December 18, 2017
Publication date: September 13, 2018
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi, Adi Gilat
-
Patent number: 10063851
Abstract: A method of generating user-selectable novel views of an event on a viewing device includes reconstructing by a server system for each camera image data into at least one foreground model for the respective camera and an environment model for the respective camera; joining by the server system the foreground model for each camera to create a visual atlas of all foreground models; creating by the server system foreground mapping data for foreground image data in the visual atlas; creating by the server system environment mapping data for environment image data in each respective environment model; transmitting by the server system each compressed data in a sequence it was compressed; receiving by the viewing device all compressed data; uncompressing by the viewing device all compressed data; selecting by a user the novel view; and rendering by the viewing device each novel view.
Type: Grant
Filed: April 1, 2015
Date of Patent: August 28, 2018
Assignee: Intel Corporation
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi, Adi Gilat
-
Patent number: 9846961
Abstract: A method of limiting processing by a 3D reconstruction system of an environment in a 3D reconstruction of an event includes dividing by the subdivision module the volume into sub-volumes; projecting from each camera each of the sub-volumes to create a set of sub-volume masks relative to each camera; creating an imaging mask for each camera; comparing for each camera by the subdivision module the respective imaging mask to the respective sub-volume mask and extracting at least one feature from at least one imaging mask; saving by the subdivision module the at least one feature to a subspace division mask; cropping the at least one feature from the imaging frames using the subspace division mask; and processing only the at least one feature for a 3D reconstruction. The system includes cameras for recording the event in imaging frames; and a subdivision module for dividing the volume into sub-volumes.
Type: Grant
Filed: April 1, 2015
Date of Patent: December 19, 2017
Assignee: Intel Corporation
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi, Adi Gilat
-
Publication number: 20160189421
Abstract: A method of limiting processing by a 3D reconstruction system of an environment in a 3D reconstruction of an event includes dividing by the subdivision module the volume into sub-volumes; projecting from each camera each of the sub-volumes to create a set of sub-volume masks relative to each camera; creating an imaging mask for each camera; comparing for each camera by the subdivision module the respective imaging mask to the respective sub-volume mask and extracting at least one feature from at least one imaging mask; saving by the subdivision module the at least one feature to a subspace division mask; cropping the at least one feature from the imaging frames using the subspace division mask; and processing only the at least one feature for a 3D reconstruction. The system includes cameras for recording the event in imaging frames; and a subdivision module for dividing the volume into sub-volumes.
Type: Application
Filed: April 1, 2015
Publication date: June 30, 2016
Applicant: Replay Technologies Inc.
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi, Adi Gilat
-
Publication number: 20160182894
Abstract: A method of generating user-selectable novel views of an event on a viewing device includes reconstructing by a server system for each camera image data into at least one foreground model for the respective camera and an environment model for the respective camera; joining by the server system the foreground model for each camera to create a visual atlas of all foreground models; creating by the server system foreground mapping data for foreground image data in the visual atlas; creating by the server system environment mapping data for environment image data in each respective environment model; transmitting by the server system each compressed data in a sequence it was compressed; receiving by the viewing device all compressed data; uncompressing by the viewing device all compressed data; selecting by a user the novel view; and rendering by the viewing device each novel view.
Type: Application
Filed: April 1, 2015
Publication date: June 23, 2016
Applicant: Replay Technologies Inc.
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi, Adi Gilat
-
Publication number: 20150319424
Abstract: A system for multi-view reconstruction of a photo-realistic rendering of an event includes cameras for imaging the event with image frames; a controller having a CEM module for modeling an environment from image data of the image frames, and an FES module for segmenting a foreground from the environment from image data of the image frames and constructing a 3D data representation; and a configuration engine including a path selection module, the configuration engine for configuring and rendering the photo-realistic rendering along a path selected by a user using the path selection module, the path having at least one novel view image. The photo-realistic rendering has less than a 10% discrepancy between output pixel raster values of the novel view image and the image frames imaged by the cameras.
Type: Application
Filed: April 1, 2015
Publication date: November 5, 2015
Applicant: Replay Technologies Inc.
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi
-
Publication number: 20150317822
Abstract: A system for social interaction using a photo-realistic novel view of an event includes a multi-view reconstruction system for developing transmission data of the event, and a plurality of client-side rendering devices, each rendering device receiving the transmission data from the multi-view reconstruction system and rendering the transmission data as the photo-realistic novel view. A method of social interaction using a photo-realistic novel view of an event includes transmitting by a server side transmission data of the event; receiving by a first user on a first rendering device the data transmission; selecting by the first user a path for rendering on the first rendering device at least one novel view; rendering by the first rendering device the at least one novel view; and saving by the user on the first rendering device novel view data for the at least one novel view.
Type: Application
Filed: April 1, 2015
Publication date: November 5, 2015
Applicant: Replay Technologies Inc.
Inventors: Oren Haimovitch-Yogev, Matteo Shapira, Aviv Shapira, Diego Prilusky, Yaniv Ben Zvi