Patents by Inventor Yaron Eshet

Yaron Eshet has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11796666
    Abstract: A radar system includes antenna elements and receive channels. An adaptive switch couples the receive channels to a subset of the antenna elements as selected antenna elements. The selected antenna elements receive signals reflected by objects, and each receive channel outputs a digital signal based on the reflected signal from its coupled antenna element. A controller processes the digital signal from each receive channel to estimate a direction of arrival (DOA) to each object and to generate candidate configurations of the switch. Assessing the candidate configurations includes performing a multi-step assessment using a decision tree with each candidate configuration as a root and examining the accuracy of the output at the last step of the decision tree to select a candidate configuration based on that accuracy. The switch is configured according to the selected candidate configuration before the reflected signals for the next iteration are received.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: October 24, 2023
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Yaron Eshet, Igal Bilik
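The configuration search this abstract describes can be illustrated with a toy sketch. Everything here is an assumption for illustration only: the array sizes, the accuracy proxy `doa_accuracy`, and the flattening of the patent's multi-step decision-tree assessment into a single score per candidate root.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

N_ELEMENTS = 8   # antenna elements in the full array (illustrative)
N_CHANNELS = 4   # receive channels the adaptive switch can couple

def doa_accuracy(subset):
    """Illustrative accuracy proxy: wider apertures and spread-out
    elements tend to give better direction-of-arrival estimates.
    The noise term stands in for assessment uncertainty."""
    subset = np.asarray(subset)
    aperture = subset.max() - subset.min()
    spread = np.mean(np.diff(np.sort(subset)))
    return aperture + spread + rng.normal(0, 0.01)

# Each candidate configuration couples the channels to a subset of elements.
candidates = list(itertools.combinations(range(N_ELEMENTS), N_CHANNELS))

# Assess every candidate (each the root of its own evaluation) and keep
# the configuration whose final output scores best; the switch would then
# be set to this configuration before the next iteration of reflections.
best = max(candidates, key=doa_accuracy)
print("selected elements:", best)
```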
  • Publication number: 20220229169
    Abstract: A radar system includes antenna elements and receive channels. An adaptive switch couples the receive channels to a subset of the antenna elements as selected antenna elements. The selected antenna elements receive signals reflected by objects, and each receive channel outputs a digital signal based on the reflected signal from its coupled antenna element. A controller processes the digital signal from each receive channel to estimate a direction of arrival (DOA) to each object and to generate candidate configurations of the switch. Assessing the candidate configurations includes performing a multi-step assessment using a decision tree with each candidate configuration as a root and examining the accuracy of the output at the last step of the decision tree to select a candidate configuration based on that accuracy. The switch is configured according to the selected candidate configuration before the reflected signals for the next iteration are received.
    Type: Application
    Filed: January 20, 2021
    Publication date: July 21, 2022
    Inventors: Yaron Eshet, Igal Bilik
  • Patent number: 11009591
    Abstract: Deep learning in a radar system includes obtaining un-aliased time samples from a first radar system. A method includes under-sampling the un-aliased time samples to obtain aliased time samples of a first configuration, matched filtering the un-aliased time samples to obtain an un-aliased data cube and the aliased time samples to obtain an aliased data cube, and using a first neural network to obtain a de-aliased data cube. The first neural network is trained to obtain a trained first neural network. The under-sampling of the un-aliased time samples is repeated to obtain second aliased time samples of a second configuration. The method includes training a second neural network to obtain a trained second neural network, comparing the results to choose a selected neural network corresponding to a selected configuration, and using the selected neural network with a second radar system that has the selected configuration to detect one or more objects.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: May 18, 2021
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Yaron Eshet, Oded Bialer, Igal Bilik
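The under-sampling step that the de-aliasing network must undo can be demonstrated on a single tone. The FFT peak search below stands in for matched filtering, the neural networks themselves are omitted, and all values (sampling rate, tone frequency, under-sampling factors) are illustrative assumptions.

```python
import numpy as np

fs = 1000.0                          # "un-aliased" sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
f_tone = 400.0                       # tone to recover
x = np.cos(2 * np.pi * f_tone * t)   # un-aliased time samples

def dominant_freq(samples, rate):
    """Matched-filter stand-in: locate the strongest spectral peak."""
    spec = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1 / rate)
    return freqs[np.argmax(spec)]

# Two candidate under-sampling configurations, as in the abstract.
for factor in (2, 4):
    aliased = x[::factor]            # aliased time samples
    est = dominant_freq(aliased, fs / factor)
    # A trained network would map the aliased data cube back to a
    # de-aliased one; here we only expose the aliasing it must undo.
    print(f"factor {factor}: apparent tone {est:.0f} Hz (true {f_tone:.0f} Hz)")
```

Both configurations fold the 400 Hz tone down to an apparent 100 Hz, which is exactly the ambiguity the trained network is asked to resolve.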
  • Patent number: 10976412
    Abstract: A system and method to use deep learning for super resolution in a radar system include obtaining first-resolution time samples from reflections based on transmissions by a first-resolution radar system of multiple frequency-modulated signals. The first-resolution radar system includes multiple transmit elements and multiple receive elements. The method also includes reducing resolution of the first-resolution time samples to obtain second-resolution time samples, implementing a matched filter on the first-resolution time samples to obtain a first-resolution data cube and on the second-resolution time samples to obtain a second-resolution data cube, processing the second-resolution data cube with a neural network to obtain a third-resolution data cube, and training the neural network based on a first loss obtained by comparing the first-resolution data cube with the third-resolution data cube. The neural network is used with a second-resolution radar system to detect one or more objects.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: April 13, 2021
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Yaron Eshet, Igal Bilik, Oded Bialer
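A minimal sketch of the training-loss data flow in this abstract, with nearest-neighbour upsampling standing in for the neural network. The cube shapes and the pooling scheme are assumptions, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(2)

# First-resolution "data cube": range x Doppler x channel bins (shapes assumed).
hi_cube = rng.standard_normal((32, 32, 8))

# Reduce resolution (here: averaging 2x2 range-Doppler blocks), mimicking
# the second-resolution radar with fewer effective bins.
lo_cube = hi_cube.reshape(16, 2, 16, 2, 8).mean(axis=(1, 3))

def upsample(cube):
    """Stand-in for the neural network: nearest-neighbour upsampling.
    The patent trains a network; this only shows the data flow."""
    return cube.repeat(2, axis=0).repeat(2, axis=1)

sr_cube = upsample(lo_cube)          # third-resolution data cube

# First loss: compare the first-resolution and third-resolution cubes;
# training would minimize this over network parameters.
loss = np.abs(hi_cube - sr_cube).mean()
print(f"L1 loss: {loss:.3f}")
```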
  • Publication number: 20200249315
    Abstract: Deep learning in a radar system includes obtaining un-aliased time samples from a first radar system. A method includes under-sampling the un-aliased time samples to obtain aliased time samples of a first configuration, matched filtering the un-aliased time samples to obtain an un-aliased data cube and the aliased time samples to obtain an aliased data cube, and using a first neural network to obtain a de-aliased data cube. The first neural network is trained to obtain a trained first neural network. The under-sampling of the un-aliased time samples is repeated to obtain second aliased time samples of a second configuration. The method includes training a second neural network to obtain a trained second neural network, comparing the results to choose a selected neural network corresponding to a selected configuration, and using the selected neural network with a second radar system that has the selected configuration to detect one or more objects.
    Type: Application
    Filed: February 1, 2019
    Publication date: August 6, 2020
    Inventors: Yaron Eshet, Oded Bialer, Igal Bilik
  • Publication number: 20200249314
    Abstract: A system and method to use deep learning for super resolution in a radar system include obtaining first-resolution time samples from reflections based on transmissions by a first-resolution radar system of multiple frequency-modulated signals. The first-resolution radar system includes multiple transmit elements and multiple receive elements. The method also includes reducing resolution of the first-resolution time samples to obtain second-resolution time samples, implementing a matched filter on the first-resolution time samples to obtain a first-resolution data cube and on the second-resolution time samples to obtain a second-resolution data cube, processing the second-resolution data cube with a neural network to obtain a third-resolution data cube, and training the neural network based on a first loss obtained by comparing the first-resolution data cube with the third-resolution data cube. The neural network is used with a second-resolution radar system to detect one or more objects.
    Type: Application
    Filed: February 1, 2019
    Publication date: August 6, 2020
    Inventors: Yaron Eshet, Igal Bilik, Oded Bialer
  • Patent number: 10366278
    Abstract: A method for processing data includes receiving a depth map of a scene containing at least a humanoid head, the depth map comprising a matrix of pixels having respective pixel depth values. A digital processor extracts from the depth map a curvature map of the scene. The curvature map includes respective curvature values of at least some of the pixels in the matrix. The curvature values are processed in order to identify a face in the scene.
    Type: Grant
    Filed: May 11, 2017
    Date of Patent: July 30, 2019
    Assignee: Apple Inc.
    Inventor: Yaron Eshet
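One plausible reading of the curvature-map step, sketched on a synthetic depth map. The Laplacian-of-depth proxy and the threshold are assumptions; the patent's exact curvature definition is not reproduced here.

```python
import numpy as np

# Synthetic depth map: a spherical bump (a crude "head") on a flat wall.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
r2 = (yy - 32) ** 2 + (xx - 32) ** 2
depth = np.where(r2 < 20**2,
                 100.0 - np.sqrt(np.maximum(20**2 - r2, 0.0)),
                 100.0)

def curvature_map(z):
    """Curvature proxy from second derivatives of the pixel depth
    values (Laplacian, a mean-curvature-like quantity)."""
    zy, zx = np.gradient(z)
    zyy = np.gradient(zy, axis=0)
    zxx = np.gradient(zx, axis=1)
    return zxx + zyy

curv = curvature_map(depth)
face_mask = curv > 0.01      # curved pixels: candidate head region
print("curved pixels:", int(face_mask.sum()))
```

The flat wall has zero curvature while the bump's interior is uniformly curved, which is the cue the processing step exploits to localize a face.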
  • Patent number: 10043279
    Abstract: A method for processing data includes receiving a depth map of a scene containing at least a part of a body of a humanoid form. The depth map includes a matrix of pixels having respective pixel depth values. A digital processor extracts from the depth map a curvature map of the scene. The curvature map includes respective curvature values and curvature orientations of at least some of the pixels in the matrix. The depth map is segmented using the curvature values and curvature orientations in the curvature map so as to extract three-dimensional (3D) coordinates of one or more limbs of the humanoid form.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: August 7, 2018
    Assignee: Apple Inc.
    Inventor: Yaron Eshet
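A companion sketch for the limb case, computing curvature values and gradient orientations on a synthetic cylindrical "limb". As above, the curvature proxy and threshold are illustrative assumptions rather than the patent's definitions.

```python
import numpy as np

# Synthetic depth map: a horizontal cylinder (a crude "limb") on a wall.
h, w = 48, 96
yy = np.mgrid[0:h, 0:w][0].astype(float)
d2 = (yy - 24.0) ** 2
depth = np.where(d2 < 10**2,
                 80.0 - np.sqrt(np.maximum(10**2 - d2, 0.0)),
                 80.0)

# Curvature values and orientations from derivatives of the depth values.
zy, zx = np.gradient(depth)
zyy = np.gradient(zy, axis=0)
zxx = np.gradient(zx, axis=1)
curv = zxx + zyy                 # curvature magnitude proxy
orient = np.arctan2(zy, zx)      # per-pixel gradient orientation

# Segment: keep curved pixels, then read off the limb's extent; with real
# intrinsics these pixel spans become 3D limb coordinates.
mask = curv > 0.01
ys, xs = np.nonzero(mask)
print("limb rows", ys.min(), "-", ys.max())
```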
  • Publication number: 20180082109
    Abstract: A method for processing data includes receiving a depth map of a scene containing at least a humanoid head, the depth map comprising a matrix of pixels having respective pixel depth values. A digital processor extracts from the depth map a curvature map of the scene. The curvature map includes respective curvature values of at least some of the pixels in the matrix. The curvature values are processed in order to identify a face in the scene.
    Type: Application
    Filed: May 11, 2017
    Publication date: March 22, 2018
    Inventor: Yaron Eshet
  • Patent number: 9846960
    Abstract: The automated camera array calibration technique described herein leverages corresponding depth and single- or multi-spectral intensity data (e.g., RGB (Red Green Blue) data) captured by hybrid capture devices to automatically determine camera geometry. In one embodiment, it finds common features in the depth maps of two hybrid capture devices and derives a rough extrinsic calibration from the shared depth-map features. It then uses the features of the intensity (e.g., RGB) data corresponding to the depth maps to refine the rough extrinsic calibration.
    Type: Grant
    Filed: August 3, 2012
    Date of Patent: December 19, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam G. Kirk, Yaron Eshet, David Eraker
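The rough extrinsic calibration from shared depth-map features can be sketched as a rigid alignment of corresponding 3D points. The Kabsch/Procrustes solver below is a standard stand-in, not necessarily the patent's method, and the RGB refinement stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def rough_extrinsics(pts_a, pts_b):
    """Rigid transform (R, t) mapping camera A's 3D feature points onto
    camera B's via the Kabsch method: rough extrinsic calibration from
    shared depth-map features."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_a - ca).T @ (pts_b - cb)      # cross-covariance of features
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Shared features seen in both depth maps, expressed in each camera's frame.
pts_a = rng.standard_normal((30, 3))
theta = np.deg2rad(25)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
pts_b = pts_a @ R_true.T + t_true

R, t = rough_extrinsics(pts_a, pts_b)
print("rotation error:", np.abs(R - R_true).max())
```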
  • Patent number: 9727776
    Abstract: The description relates to estimating object orientation. One example includes determining a first estimate of object orientation using a first technique and image data. In this example, a second estimate of the object orientation can be determined using a second technique and the image data. The first estimate can be corrected with the second estimate to generate a corrected object orientation estimate which can be output.
    Type: Grant
    Filed: May 27, 2014
    Date of Patent: August 8, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Bhaven P. Dedhia, Yaron Eshet, Geoffrey J. Hulten
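A toy version of "correcting the first estimate with the second", assuming the common case where a fine estimator suffers a 180-degree front/back ambiguity that a coarser second estimator can resolve. The fusion rule is illustrative, not taken from the patent.

```python
def correct_orientation(first_deg, second_deg):
    """Correct a fine-grained first orientation estimate with a coarse
    second one: if the two disagree by roughly 180 degrees, flip the
    first estimate; otherwise trust it as-is."""
    diff = (first_deg - second_deg + 180.0) % 360.0 - 180.0
    if abs(diff) > 90.0:
        first_deg = (first_deg + 180.0) % 360.0
    return first_deg

print(correct_orientation(10.0, 12.0))    # estimates agree: keep first
print(correct_orientation(190.0, 8.0))    # flipped: corrected to 10.0
```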
  • Publication number: 20150348269
    Abstract: The description relates to estimating object orientation. One example includes determining a first estimate of object orientation using a first technique and image data. In this example, a second estimate of the object orientation can be determined using a second technique and the image data. The first estimate can be corrected with the second estimate to generate a corrected object orientation estimate which can be output.
    Type: Application
    Filed: May 27, 2014
    Publication date: December 3, 2015
    Applicant: Microsoft Corporation
    Inventors: Bhaven P. Dedhia, Yaron Eshet, Geoffrey J. Hulten
  • Patent number: 9098908
    Abstract: Methods and systems for generating a depth map are provided. The method includes projecting an infrared (IR) dot pattern onto a scene. The method also includes capturing stereo images from each of two or more synchronized IR cameras, detecting a number of dots within the stereo images, computing a number of feature descriptors for the dots in the stereo images, and computing a disparity map between the stereo images. The method further includes generating a depth map for the scene using the disparity map.
    Type: Grant
    Filed: October 21, 2011
    Date of Patent: August 4, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam G. Kirk, Yaron Eshet, Kestutis Patiejunas, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, Simon Winder
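Once the disparity map between the stereo images is computed, depth follows from standard triangulation, Z = f·B/d. The focal length and baseline below are made-up values for illustration.

```python
import numpy as np

FOCAL_PX = 600.0     # focal length in pixels (illustrative)
BASELINE_M = 0.12    # distance between the two IR cameras, metres

def depth_from_disparity(disparity_px):
    """Triangulate per-dot depth from the disparity between the two
    synchronized IR views: Z = f * B / d (disparity assumed positive)."""
    return FOCAL_PX * BASELINE_M / np.asarray(disparity_px, dtype=float)

# Dots matched between the stereo images, with disparities in pixels;
# halving the disparity doubles the depth.
disparities = np.array([36.0, 18.0, 9.0])
print(depth_from_disparity(disparities))   # 2 m, 4 m, 8 m
```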
  • Publication number: 20130321586
    Abstract: The cloud-based free viewpoint video (FVV) streaming technique embodiments presented herein generally employ a cloud-based FVV pipeline to create, render, and transmit FVV frames depicting a captured scene as it would be viewed from the current synthetic viewpoint, which is selected by an end user and received from a client computing device. The FVV frames consume roughly the same bandwidth as a conventional streaming movie. To change viewpoints, a new viewpoint is sent from the client to the cloud, and a new streaming movie is initiated from that viewpoint. Frames associated with that viewpoint are created, rendered, and transmitted to the client until a new viewpoint request is received.
    Type: Application
    Filed: August 17, 2012
    Publication date: December 5, 2013
    Applicant: Microsoft Corporation
    Inventors: Adam Kirk, Patrick Sweeney, Don Gillett, Neil Fishman, Kanchan Mitra, Amit Mital, David Harnett, Yaron Eshet, Simon Winder, David Eraker
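The viewpoint-switching control flow can be modelled as a small loop: stream frames for the current synthetic viewpoint, then restart the stream when a new request arrives. The function, its frame strings, and the fixed frames-per-request count are purely illustrative.

```python
from collections import deque

def stream_fvv(viewpoint_requests, frames_per_request=3):
    """Toy model of the abstract's control flow: create/render/transmit
    frames for the current viewpoint until the client sends a new one."""
    requests = deque(viewpoint_requests)
    current = requests.popleft()
    sent = []
    while True:
        for _ in range(frames_per_request):
            sent.append(f"frame@{current}")    # create, render, transmit
        if not requests:                       # no new viewpoint: end demo
            break
        current = requests.popleft()           # restart stream at new view
    return sent

print(stream_fvv(["front", "left"]))
```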
  • Publication number: 20130321589
    Abstract: The automated camera array calibration technique described herein leverages corresponding depth and single- or multi-spectral intensity data (e.g., RGB (Red Green Blue) data) captured by hybrid capture devices to automatically determine camera geometry. In one embodiment, it finds common features in the depth maps of two hybrid capture devices and derives a rough extrinsic calibration from the shared depth-map features. It then uses the features of the intensity (e.g., RGB) data corresponding to the depth maps to refine the rough extrinsic calibration.
    Type: Application
    Filed: August 3, 2012
    Publication date: December 5, 2013
    Applicant: Microsoft Corporation
    Inventors: Adam G. Kirk, Yaron Eshet, David Eraker
  • Publication number: 20130321396
    Abstract: Free viewpoint video of a scene is generated and presented to a user. An arrangement of sensors generates streams of sensor data each of which represents the scene from a different geometric perspective. The sensor data streams are calibrated. A scene proxy is generated from the calibrated sensor data streams. The scene proxy geometrically describes the scene as a function of time and includes one or more types of geometric proxy data which is matched to a first set of current pipeline conditions in order to maximize the photo-realism of the free viewpoint video resulting from the scene proxy at each point in time. A current synthetic viewpoint of the scene is generated from the scene proxy. This viewpoint generation maximizes the photo-realism of the current synthetic viewpoint based upon a second set of current pipeline conditions. The current synthetic viewpoint is displayed.
    Type: Application
    Filed: August 30, 2012
    Publication date: December 5, 2013
    Applicant: Microsoft Corporation
    Inventors: Adam Kirk, Kanchan Mitra, Patrick Sweeney, Don Gillett, Neil Fishman, Simon Winder, Yaron Eshet, David Harnett, Amit Mital, David Eraker
  • Publication number: 20130100256
    Abstract: Methods and systems for generating a depth map are provided. The method includes projecting an infrared (IR) dot pattern onto a scene. The method also includes capturing stereo images from each of two or more synchronized IR cameras, detecting a number of dots within the stereo images, computing a number of feature descriptors for the dots in the stereo images, and computing a disparity map between the stereo images. The method further includes generating a depth map for the scene using the disparity map.
    Type: Application
    Filed: October 21, 2011
    Publication date: April 25, 2013
    Applicant: Microsoft Corporation
    Inventors: Adam G. Kirk, Yaron Eshet, Kestutis Patiejunas, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, Simon Winder
  • Publication number: 20130095920
    Abstract: Methods and systems for generating free viewpoint video using an active infrared (IR) stereo module are provided. The method includes computing a depth map for a scene using an active IR stereo module. The depth map may be computed by projecting an IR dot pattern onto the scene, capturing stereo images from each of two or more synchronized IR cameras, detecting dots within the stereo images, computing feature descriptors corresponding to the dots in the stereo images, computing a disparity map between the stereo images, and generating the depth map using the disparity map. The method also includes generating a point cloud for the scene using the depth map, generating a mesh of the point cloud, and generating a projective texture map for the scene from the mesh of the point cloud. The method further includes generating the video for the scene using the projective texture map.
    Type: Application
    Filed: October 13, 2011
    Publication date: April 18, 2013
    Applicant: Microsoft Corporation
    Inventors: Kestutis Patiejunas, Kanchan Mitra, Patrick Sweeney, Yaron Eshet, Adam G. Kirk, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, David Harnett, Amit Mital, Simon Winder
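The depth-map-to-point-cloud step of this pipeline can be sketched as a pinhole back-projection. The intrinsics are made-up values, and the meshing and projective-texture steps are omitted.

```python
import numpy as np

FOCAL_PX = 600.0       # focal length in pixels (illustrative)
CX, CY = 32.0, 24.0    # principal point for a 64x48 image (illustrative)

def depth_to_point_cloud(depth):
    """Back-project each depth pixel to a 3D point with a pinhole model:
    X = (u - cx) * Z / f, Y = (v - cy) * Z / f."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth
    x = (u - CX) * z / FOCAL_PX
    y = (v - CY) * z / FOCAL_PX
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((48, 64), 2.0)     # flat wall 2 m from the camera
cloud = depth_to_point_cloud(depth)
print(cloud.shape)                 # one 3D point per depth pixel
```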