Patents by Inventor Yaron Eshet
Yaron Eshet has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11796666
Abstract: A radar system includes antenna elements and receive channels. An adaptive switch couples the receive channels to a subset of the antenna elements as selected antenna elements. The selected antenna elements receive reflected signals from reflection by objects and each of the receive channels outputs a digital signal based on the reflected signal from the coupled selected antenna element. A controller processes the digital signal from each receive channel to estimate a direction of arrival (DOA) to each object and generate candidate configurations of the switch. Assessing the candidate configurations includes performing a multi-step assessment using a decision tree with each candidate configuration as a root and examining accuracy of an output at a last step in the decision tree to select a selected candidate configuration based on the accuracy. The switch is configured according to the selected candidate configuration prior to receiving the reflected signals for a next iteration.
Type: Grant
Filed: January 20, 2021
Date of Patent: October 24, 2023
Assignee: GM Global Technology Operations LLC
Inventors: Yaron Eshet, Igal Bilik
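The configuration-assessment idea in this entry can be illustrated with a toy sketch. This is not the patented decision-tree procedure: `aperture_score` is an assumed stand-in for the DOA-accuracy metric, and exhaustive enumeration stands in for generating candidate switch configurations.

```python
import itertools

def aperture_score(positions):
    # Assumed proxy for DOA accuracy: a wider spread of the selected
    # elements gives a larger effective aperture and hence finer
    # angular resolution.
    return max(positions) - min(positions)

def select_configuration(element_positions, n_channels):
    # Enumerate candidate configurations (subsets of antenna elements
    # that the adaptive switch could couple to the receive channels),
    # assess each one, and keep the best-scoring configuration.
    best_cfg, best_score = None, float("-inf")
    for cfg in itertools.combinations(range(len(element_positions)), n_channels):
        score = aperture_score([element_positions[i] for i in cfg])
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg

# Six element positions (in wavelengths), three receive channels.
positions = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
cfg = select_configuration(positions, n_channels=3)
```

The real system avoids this brute-force enumeration; the multi-step decision-tree assessment in the claim prunes candidates instead.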
-
Publication number: 20220229169
Abstract: A radar system includes antenna elements and receive channels. An adaptive switch couples the receive channels to a subset of the antenna elements as selected antenna elements. The selected antenna elements receive reflected signals from reflection by objects and each of the receive channels outputs a digital signal based on the reflected signal from the coupled selected antenna element. A controller processes the digital signal from each receive channel to estimate a direction of arrival (DOA) to each object and generate candidate configurations of the switch. Assessing the candidate configurations includes performing a multi-step assessment using a decision tree with each candidate configuration as a root and examining accuracy of an output at a last step in the decision tree to select a selected candidate configuration based on the accuracy. The switch is configured according to the selected candidate configuration prior to receiving the reflected signals for a next iteration.
Type: Application
Filed: January 20, 2021
Publication date: July 21, 2022
Inventors: Yaron Eshet, Igal Bilik
-
Patent number: 11009591
Abstract: Deep learning in a radar system includes obtaining un-aliased time samples from a first radar system. A method includes under-sampling the un-aliased time samples to obtain aliased time samples of a first configuration, matched filtering the un-aliased time samples to obtain an un-aliased data cube and the aliased time samples to obtain an aliased data cube, and using a first neural network to obtain a de-aliased data cube. A first neural network is trained to obtain a trained first neural network. The under-sampling of the un-aliased time samples is repeated to obtain second aliased time samples of a second configuration. The method includes training a second neural network to obtain a trained second neural network, comparing results to choose a selected neural network corresponding with a selected configuration, and using the selected neural network with a second radar system that has the selected configuration to detect one or more objects.
Type: Grant
Filed: February 1, 2019
Date of Patent: May 18, 2021
Assignee: GM Global Technology Operations LLC
Inventors: Yaron Eshet, Oded Bialer, Igal Bilik
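The aliasing that the networks in this entry learn to undo can be demonstrated in a few lines. A minimal NumPy sketch (not the patented method): under-sampling a tone beyond the reduced Nyquist rate folds it onto a lower frequency bin, and that fold is the artifact a de-aliasing network would be trained to reverse.

```python
import numpy as np

# 24 Hz tone sampled at 64 Hz: fully resolvable (un-aliased samples).
fs, n = 64, 64
t = np.arange(n) / fs
tone = np.cos(2 * np.pi * 24 * t)

def under_sample(x, factor):
    # Keep every `factor`-th time sample, emulating a cheaper radar
    # configuration with a reduced sampling rate.
    return x[::factor]

# Effective rate drops to 32 Hz (Nyquist 16 Hz), so the 24 Hz tone
# folds onto 32 - 24 = 8 Hz in the aliased spectrum.
aliased = under_sample(tone, 2)
spectrum = np.abs(np.fft.rfft(aliased))
peak_hz = np.argmax(spectrum) * 32 / len(aliased)  # FFT bin -> Hz
```

Training data pairs in the claim are built exactly this way: the un-aliased cube is the target and the aliased cube (here, the folded spectrum) is the network input.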
-
Patent number: 10976412
Abstract: A system and method to use deep learning for super resolution in a radar system include obtaining first-resolution time samples from reflections based on transmissions by a first-resolution radar system of multiple frequency-modulated signals. The first-resolution radar system includes multiple transmit elements and multiple receive elements. The method also includes reducing resolution of the first-resolution time samples to obtain second-resolution time samples, implementing a matched filter on the first-resolution time samples to obtain a first-resolution data cube and on the second-resolution time samples to obtain a second-resolution data cube, processing the second-resolution data cube with a neural network to obtain a third-resolution data cube, and training the neural network based on a first loss obtained by comparing the first-resolution data cube with the third-resolution data cube. The neural network is used with a second-resolution radar system to detect one or more objects.
Type: Grant
Filed: February 1, 2019
Date of Patent: April 13, 2021
Assignee: GM Global Technology Operations LLC
Inventors: Yaron Eshet, Igal Bilik, Oded Bialer
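The training signal described here — compare the first-resolution data cube against the cube reconstructed from reduced-resolution samples — can be sketched in one dimension. The `upsample` function below is a linear-interpolation stand-in for the neural network, used only to make the loss computation concrete; it is not the claimed architecture.

```python
import numpy as np

def reduce_resolution(cube, factor):
    # Emulate the second-resolution radar by dropping samples from the
    # first-resolution data.
    return cube[::factor]

def upsample(cube, factor):
    # Stand-in for the neural network: linear interpolation back to the
    # first resolution. The real system learns this mapping instead.
    x_old = np.arange(len(cube)) * factor
    x_new = np.arange(len(cube) * factor)
    return np.interp(x_new, x_old, cube)

def training_loss(first_res, third_res):
    # First loss: mean squared difference between the first-resolution
    # cube and the reconstructed (third-resolution) cube.
    return float(np.mean((first_res - third_res) ** 2))

first_res = np.linspace(0.0, 1.0, 16)        # one slice of the data cube
second_res = reduce_resolution(first_res, 2)
third_res = upsample(second_res, 2)
loss = training_loss(first_res, third_res)
```

Gradient descent on this loss would drive the learned upsampler toward reproducing the full-resolution cube from cheap low-resolution measurements.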
-
Publication number: 20200249314
Abstract: A system and method to use deep learning for super resolution in a radar system include obtaining first-resolution time samples from reflections based on transmissions by a first-resolution radar system of multiple frequency-modulated signals. The first-resolution radar system includes multiple transmit elements and multiple receive elements. The method also includes reducing resolution of the first-resolution time samples to obtain second-resolution time samples, implementing a matched filter on the first-resolution time samples to obtain a first-resolution data cube and on the second-resolution time samples to obtain a second-resolution data cube, processing the second-resolution data cube with a neural network to obtain a third-resolution data cube, and training the neural network based on a first loss obtained by comparing the first-resolution data cube with the third-resolution data cube. The neural network is used with a second-resolution radar system to detect one or more objects.
Type: Application
Filed: February 1, 2019
Publication date: August 6, 2020
Inventors: Yaron Eshet, Igal Bilik, Oded Bialer
-
Publication number: 20200249315
Abstract: Deep learning in a radar system includes obtaining un-aliased time samples from a first radar system. A method includes under-sampling the un-aliased time samples to obtain aliased time samples of a first configuration, matched filtering the un-aliased time samples to obtain an un-aliased data cube and the aliased time samples to obtain an aliased data cube, and using a first neural network to obtain a de-aliased data cube. A first neural network is trained to obtain a trained first neural network. The under-sampling of the un-aliased time samples is repeated to obtain second aliased time samples of a second configuration. The method includes training a second neural network to obtain a trained second neural network, comparing results to choose a selected neural network corresponding with a selected configuration, and using the selected neural network with a second radar system that has the selected configuration to detect one or more objects.
Type: Application
Filed: February 1, 2019
Publication date: August 6, 2020
Inventors: Yaron Eshet, Oded Bialer, Igal Bilik
-
Patent number: 10366278
Abstract: A method for processing data includes receiving a depth map of a scene containing at least a humanoid head, the depth map comprising a matrix of pixels having respective pixel depth values. A digital processor extracts from the depth map a curvature map of the scene. The curvature map includes respective curvature values of at least some of the pixels in the matrix. The curvature values are processed in order to identify a face in the scene.
Type: Grant
Filed: May 11, 2017
Date of Patent: July 30, 2019
Assignee: Apple Inc.
Inventor: Yaron Eshet
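A minimal sketch of extracting a curvature map from a depth map, assuming a simple Laplacian (sum of second finite differences) as the curvature measure; the abstract does not specify this particular operator, and the face-identification step on top of it is omitted.

```python
import numpy as np

def curvature_map(depth):
    # Approximate per-pixel curvature of the depth surface with the
    # Laplacian: second finite differences along each image axis.
    # Its sign separates convex from concave surface patches.
    dzz_y = np.gradient(np.gradient(depth, axis=0), axis=0)
    dzz_x = np.gradient(np.gradient(depth, axis=1), axis=1)
    return dzz_x + dzz_y

# A paraboloid depth patch: constant curvature away from the borders.
y, x = np.mgrid[-5:6, -5:6]
depth = (x ** 2 + y ** 2).astype(float)
curv = curvature_map(depth)
```

Because curvature depends only on the shape of the surface, such a map is insensitive to absolute distance, which is what makes it a useful feature for finding heads and faces in depth data.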
-
Patent number: 10043279
Abstract: A method for processing data includes receiving a depth map of a scene containing at least a part of a body of a humanoid form. The depth map includes a matrix of pixels having respective pixel depth values. A digital processor extracts from the depth map a curvature map of the scene. The curvature map includes respective curvature values and curvature orientations of at least some of the pixels in the matrix. The depth map is segmented using the curvature values and curvature orientations in the curvature map so as to extract three-dimensional (3D) coordinates of one or more limbs of the humanoid form.
Type: Grant
Filed: September 22, 2016
Date of Patent: August 7, 2018
Assignee: Apple Inc.
Inventor: Yaron Eshet
-
Publication number: 20180082109
Abstract: A method for processing data includes receiving a depth map of a scene containing at least a humanoid head, the depth map comprising a matrix of pixels having respective pixel depth values. A digital processor extracts from the depth map a curvature map of the scene. The curvature map includes respective curvature values of at least some of the pixels in the matrix. The curvature values are processed in order to identify a face in the scene.
Type: Application
Filed: May 11, 2017
Publication date: March 22, 2018
Inventor: Yaron Eshet
-
Patent number: 9846960
Abstract: The automated camera array calibration technique described herein leverages corresponding depth and single- or multi-spectral intensity data (e.g., RGB (Red Green Blue) data) captured by hybrid capture devices to automatically determine camera geometry. In one embodiment it does this by finding common features in the depth maps of two hybrid capture devices and deriving a rough extrinsic calibration based on those shared depth map features. It then uses features of the intensity (e.g., RGB) data corresponding to the depth maps to refine the rough extrinsic calibration.
Type: Grant
Filed: August 3, 2012
Date of Patent: December 19, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adam G. Kirk, Yaron Eshet, David Eraker
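The "rough extrinsic calibration based on shared depth map features" step can be sketched with a standard Kabsch/Procrustes fit between corresponding 3-D feature points from two devices. This is an assumed implementation — the entry only requires some alignment derived from shared depth features, which RGB feature matches then refine.

```python
import numpy as np

def rough_extrinsics(pts_a, pts_b):
    # Kabsch/Procrustes fit of the rotation R and translation t that map
    # camera A's shared depth-map features onto camera B's frame.
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    h = (pts_a - ca).T @ (pts_b - cb)           # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cb - r @ ca
    return r, t

# Synthetic check: five shared 3-D features seen by camera A, and the
# same features in camera B's frame under a known rotation/translation.
c, s = np.cos(0.3), np.sin(0.3)
r_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
pts_a = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
pts_b = pts_a @ r_true.T + t_true
r_est, t_est = rough_extrinsics(pts_a, pts_b)
```

In practice depth-derived correspondences are noisy, which is why the technique follows this closed-form fit with a refinement pass over the sharper RGB features.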
-
Patent number: 9727776
Abstract: The description relates to estimating object orientation. One example includes determining a first estimate of object orientation using a first technique and image data. In this example, a second estimate of the object orientation can be determined using a second technique and the image data. The first estimate can be corrected with the second estimate to generate a corrected object orientation estimate which can be output.
Type: Grant
Filed: May 27, 2014
Date of Patent: August 8, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Bhaven P. Dedhia, Yaron Eshet, Geoffrey J. Hulten
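A toy sketch of correcting one orientation estimate with another. The rule below — flip the first estimate when the two disagree by roughly 180 degrees — is an assumed example of such a correction, not the specific method claimed.

```python
def correct_orientation(first_deg, second_deg):
    # Assumed correction rule: when the two estimates point in roughly
    # opposite directions, treat the first as flipped and rotate it by
    # 180 degrees; otherwise keep the first estimate.
    diff = (first_deg - second_deg) % 360
    if 90 < diff < 270:
        return (first_deg + 180) % 360
    return first_deg
```

A front/back flip is a common failure mode for orientation estimators, so a cheap second technique that only has to get the hemisphere right can repair a precise but ambiguous first estimate.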
-
Publication number: 20150348269
Abstract: The description relates to estimating object orientation. One example includes determining a first estimate of object orientation using a first technique and image data. In this example, a second estimate of the object orientation can be determined using a second technique and the image data. The first estimate can be corrected with the second estimate to generate a corrected object orientation estimate which can be output.
Type: Application
Filed: May 27, 2014
Publication date: December 3, 2015
Applicant: Microsoft Corporation
Inventors: Bhaven P. Dedhia, Yaron Eshet, Geoffrey J. Hulten
-
Patent number: 9098908
Abstract: Methods and systems for generating a depth map are provided. The method includes projecting an infrared (IR) dot pattern onto a scene. The method also includes capturing stereo images from each of two or more synchronized IR cameras, detecting a number of dots within the stereo images, computing a number of feature descriptors for the dots in the stereo images, and computing a disparity map between the stereo images. The method further includes generating a depth map for the scene using the disparity map.
Type: Grant
Filed: October 21, 2011
Date of Patent: August 4, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adam G. Kirk, Yaron Eshet, Kestutis Patiejunas, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, Simon Winder
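Once the disparity map between the two IR images is computed, converting it to depth uses the standard stereo relation Z = f·B/d. A minimal sketch, where `focal_px` and `baseline_m` are assumed calibration values rather than figures from this entry:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Standard pinhole-stereo relation: depth Z = f * B / d, applied
    # per pixel of the disparity map between the synchronized IR views.
    return focal_px * baseline_m / disparity_px

# 50 px disparity, 500 px focal length, 10 cm baseline -> 1 m depth.
z = depth_from_disparity(50.0, 500.0, 0.1)
```

The projected dot pattern exists to make this disparity computation reliable: it gives the matcher texture to lock onto even on blank walls or uniform clothing.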
-
Publication number: 20130321586
Abstract: Cloud-based free viewpoint video (FVV) streaming technique embodiments presented herein generally employ a cloud-based FVV pipeline to create, render and transmit FVV frames depicting a captured scene as it would be viewed from a current synthetic viewpoint selected by an end user and received from a client computing device. The FVV frames consume roughly the same bandwidth as a conventional streaming movie. To change viewpoints, a new viewpoint is sent from the client to the cloud, and a new streaming movie is initiated from that viewpoint. Frames associated with that viewpoint are created, rendered and transmitted to the client until a new viewpoint request is received.
Type: Application
Filed: August 17, 2012
Publication date: December 5, 2013
Applicant: Microsoft Corporation
Inventors: Adam Kirk, Patrick Sweeney, Don Gillett, Neil Fishman, Kanchan Mitra, Amit Mital, David Harnett, Yaron Eshet, Simon Winder, David Eraker
-
Publication number: 20130321589
Abstract: The automated camera array calibration technique described herein leverages corresponding depth and single- or multi-spectral intensity data (e.g., RGB (Red Green Blue) data) captured by hybrid capture devices to automatically determine camera geometry. In one embodiment it does this by finding common features in the depth maps of two hybrid capture devices and deriving a rough extrinsic calibration based on those shared depth map features. It then uses features of the intensity (e.g., RGB) data corresponding to the depth maps to refine the rough extrinsic calibration.
Type: Application
Filed: August 3, 2012
Publication date: December 5, 2013
Applicant: Microsoft Corporation
Inventors: Adam G. Kirk, Yaron Eshet, David Eraker
-
Publication number: 20130321396
Abstract: Free viewpoint video of a scene is generated and presented to a user. An arrangement of sensors generates streams of sensor data each of which represents the scene from a different geometric perspective. The sensor data streams are calibrated. A scene proxy is generated from the calibrated sensor data streams. The scene proxy geometrically describes the scene as a function of time and includes one or more types of geometric proxy data which is matched to a first set of current pipeline conditions in order to maximize the photo-realism of the free viewpoint video resulting from the scene proxy at each point in time. A current synthetic viewpoint of the scene is generated from the scene proxy. This viewpoint generation maximizes the photo-realism of the current synthetic viewpoint based upon a second set of current pipeline conditions. The current synthetic viewpoint is displayed.
Type: Application
Filed: August 30, 2012
Publication date: December 5, 2013
Applicant: Microsoft Corporation
Inventors: Adam Kirk, Kanchan Mitra, Patrick Sweeney, Don Gillett, Neil Fishman, Simon Winder, Yaron Eshet, David Harnett, Amit Mital, David Eraker
-
Publication number: 20130100256
Abstract: Methods and systems for generating a depth map are provided. The method includes projecting an infrared (IR) dot pattern onto a scene. The method also includes capturing stereo images from each of two or more synchronized IR cameras, detecting a number of dots within the stereo images, computing a number of feature descriptors for the dots in the stereo images, and computing a disparity map between the stereo images. The method further includes generating a depth map for the scene using the disparity map.
Type: Application
Filed: October 21, 2011
Publication date: April 25, 2013
Applicant: Microsoft Corporation
Inventors: Adam G. Kirk, Yaron Eshet, Kestutis Patiejunas, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, Simon Winder
-
Publication number: 20130095920
Abstract: Methods and systems for generating free viewpoint video using an active infrared (IR) stereo module are provided. The method includes computing a depth map for a scene using an active IR stereo module. The depth map may be computed by projecting an IR dot pattern onto the scene, capturing stereo images from each of two or more synchronized IR cameras, detecting dots within the stereo images, computing feature descriptors corresponding to the dots in the stereo images, computing a disparity map between the stereo images, and generating the depth map using the disparity map. The method also includes generating a point cloud for the scene using the depth map, generating a mesh of the point cloud, and generating a projective texture map for the scene from the mesh of the point cloud. The method further includes generating the video for the scene using the projective texture map.
Type: Application
Filed: October 13, 2011
Publication date: April 18, 2013
Applicant: Microsoft Corporation
Inventors: Kestutis Patiejunas, Kanchan Mitra, Patrick Sweeney, Yaron Eshet, Adam G. Kirk, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, David Harnett, Amit Mital, Simon Winder
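The "point cloud for the scene using the depth map" step in this pipeline is a standard pinhole back-projection. A minimal sketch, assuming known camera intrinsics (`fx`, `fy`, `cx`, `cy` are hypothetical calibration parameters, not values from this entry):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    # Back-project every depth pixel through the pinhole camera model;
    # the resulting 3-D points would then be meshed and given a
    # projective texture map, per the pipeline described above.
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]       # pixel row/column coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Tiny 2x2 depth map with unit focal lengths and principal point at (0, 0).
cloud = depth_to_point_cloud(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

Meshing this cloud and projecting the camera images back onto the mesh as texture is what lets the free viewpoint renderer synthesize views the physical cameras never captured.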