Patents by Inventor Noa Garnett
Noa Garnett has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11893086
Abstract: A system for analyzing images includes a processing device that includes a receiving module configured to receive an image, and an analysis module configured to apply the received image to a machine learning network and classify one or more features in the received image, the machine learning network configured to propagate image data through a plurality of convolutional layers, each convolutional layer of the plurality of convolutional layers including a plurality of filter channels, the machine learning network including a bottleneck layer configured to recognize an image feature based on a shape of an image component. The system also includes an output module configured to output characterization data that includes a classification of the one or more features.
Type: Grant
Filed: March 10, 2021
Date of Patent: February 6, 2024
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Dan Levi, Noa Garnett, Roy Uziel
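The bottleneck layer in the abstract above can be pictured as a channel squeeze: activations are forced through a narrow set of channels and expanded back, which encourages compact, shape-level encodings. A minimal NumPy sketch; the layer sizes, weights, and function names are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution: mixes channels at each pixel (x: H x W x C_in, w: C_in x C_out)."""
    return x @ w

def bottleneck_block(x, c_bottleneck, rng):
    """Squeeze the channel dimension through a narrow bottleneck, then expand back.

    Weights are random here purely for illustration; a trained network would
    learn them.
    """
    c_in = x.shape[-1]
    w_down = rng.standard_normal((c_in, c_bottleneck)) * 0.1
    w_up = rng.standard_normal((c_bottleneck, c_in)) * 0.1
    squeezed = np.maximum(conv1x1(x, w_down), 0.0)  # ReLU in the narrow space
    return conv1x1(squeezed, w_up)

rng = np.random.default_rng(0)
feature_map = rng.standard_normal((8, 8, 64))      # toy 8x8 map with 64 filter channels
out = bottleneck_block(feature_map, c_bottleneck=8, rng=rng)
print(out.shape)  # spatial size and channel count preserved; information squeezed through 8 channels
```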
-
Patent number: 11731635
Abstract: A method of monitoring a driver in an autonomous vehicle includes monitoring, with a driver monitoring system, a driver of a vehicle; collecting, with a data processor, data from the driver monitoring system related to gaze behavior of the driver; classifying, with the data processor, the driver as one of a plurality of driver statuses based on the data from the driver monitoring system; and sending, with the data processor, instructions to at least one vehicle system based on the classification of the driver.
Type: Grant
Filed: December 20, 2021
Date of Patent: August 22, 2023
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Ke Liu, Ron Hecht, Noa Garnett, Yi Guo Glaser, Omer Tsimhoni
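As a toy illustration of gaze-based status classification, the sketch below thresholds the fraction of gaze samples that land inside a road region of the image. The status labels, thresholds, and region coordinates are hypothetical, not the patent's:

```python
def on_road_fraction(gaze_samples, road_region):
    """gaze_samples: (x, y) points; road_region: (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = road_region
    hits = sum(1 for x, y in gaze_samples if xmin <= x <= xmax and ymin <= y <= ymax)
    return hits / len(gaze_samples)

def classify_driver(frac_on_road):
    """Map the fraction of on-road gaze samples to a coarse driver status.

    Thresholds are illustrative, not from the patent."""
    if frac_on_road >= 0.8:
        return "attentive"
    if frac_on_road >= 0.4:
        return "distracted"
    return "inattentive"

samples = [(0.5, 0.4), (0.6, 0.5), (0.1, 0.9), (0.55, 0.45)]
frac = on_road_fraction(samples, road_region=(0.3, 0.2, 0.7, 0.6))
print(classify_driver(frac))  # 3 of 4 samples on road -> "distracted"
```

A downstream controller could then map each status to an instruction (e.g. an alert, or a handover request) for the relevant vehicle system.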
-
Patent number: 11731639
Abstract: A vehicle having an imaging sensor that is arranged to monitor a field-of-view (FOV) that includes a travel surface proximal to the vehicle is described. Detecting the travel lane includes capturing a FOV image of a viewable region of the travel surface. The FOV image is converted, via an artificial neural network, to a plurality of feature maps. The feature maps are projected, via an inverse perspective mapping algorithm, onto a bird's-eye-view (BEV) orthographic grid. The feature maps include travel lane segments and feature embeddings, and the travel lane segments are represented as line segments. The line segments are concatenated for the plurality of grid sections based upon the feature embeddings to form a predicted lane. The concatenation, or clustering, is accomplished via the feature embeddings.
Type: Grant
Filed: March 3, 2020
Date of Patent: August 22, 2023
Assignee: GM Global Technology Operations LLC
Inventors: Netalee Efrat Sela, Max Bluvstein, Dan Levi, Noa Garnett, Bat El Shlomo
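Inverse perspective mapping, as used above to project feature maps onto the BEV grid, amounts to applying a ground-plane homography to image coordinates. A minimal NumPy sketch; the homography values are made up for illustration (in practice they come from camera calibration):

```python
import numpy as np

def inverse_perspective_map(pixels, H):
    """Project image pixels onto a bird's-eye-view ground plane via homography H (3x3)."""
    pts = np.hstack([pixels, np.ones((len(pixels), 1))])   # homogeneous coordinates
    ground = pts @ H.T
    return ground[:, :2] / ground[:, 2:3]                  # de-homogenize

# Toy homography: identity plus a single perspective-divide term (illustrative only).
H = np.array([[1.0, 0.00, 0.0],
              [0.0, 1.00, 0.0],
              [0.0, 0.01, 1.0]])
pixels = np.array([[100.0, 200.0],
                   [100.0, 400.0]])
bev = inverse_perspective_map(pixels, H)
print(bev)  # pixels lower in the image land closer together laterally on the ground plane
```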
-
Patent number: 11726202
Abstract: A method of designing a radar system includes implementing a supervised learning process of a neural network to determine a weight corresponding with each of a plurality of patch antennas of a radar system. Each of the plurality of patch antennas is sized in accordance with the weight.
Type: Grant
Filed: December 10, 2020
Date of Patent: August 15, 2023
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Oded Bialer, Noa Garnett, Amnon Jonas, Ofer Givati
-
Publication number: 20230192095
Abstract: A method of monitoring a driver in an autonomous vehicle includes monitoring, with a driver monitoring system, a driver of a vehicle; collecting, with a data processor, data from the driver monitoring system related to gaze behavior of the driver; classifying, with the data processor, the driver as one of a plurality of driver statuses based on the data from the driver monitoring system; and sending, with the data processor, instructions to at least one vehicle system based on the classification of the driver.
Type: Application
Filed: December 20, 2021
Publication date: June 22, 2023
Inventors: Ke Liu, Ron Hecht, Noa Garnett, Yi Guo Glaser, Omer Tsimhoni
-
Publication number: 20230134125
Abstract: A system in a vehicle includes an image sensor to obtain images in an image sensor coordinate system and a depth sensor to obtain point clouds in a depth sensor coordinate system. Processing circuitry implements a neural network to determine a validation state of a transformation matrix that transforms the point clouds in the depth sensor coordinate system to transformed point clouds in the image sensor coordinate system. The transformation matrix includes rotation parameters and translation parameters.
Type: Application
Filed: November 1, 2021
Publication date: May 4, 2023
Inventors: Michael Baltaxe, Dan Levi, Noa Garnett, Doron Portnoy, Amit Batikoff, Shahar Ben Ezra, Tal Furman
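The transformation being validated above is a standard rigid transform, p_cam = R p_depth + t, built from the rotation and translation parameters. A NumPy sketch with made-up extrinsics (it also includes a quick algebraic sanity check that R is a valid rotation, which is separate from the learned validation the publication describes):

```python
import numpy as np

def transform_points(points, R, t):
    """Map depth-sensor points into the image-sensor frame: p' = R p + t."""
    return points @ R.T + t

# Made-up extrinsics: a 90-degree yaw rotation plus an offset.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.5, 0.0, 1.2])

# Algebraic check on the rotation parameters: orthonormal with determinant +1.
assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)

cloud = np.array([[1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0]])
print(transform_points(cloud, R, t))
```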
-
Publication number: 20220292316
Abstract: A system for analyzing images includes a processing device that includes a receiving module configured to receive an image, and an analysis module configured to apply the received image to a machine learning network and classify one or more features in the received image, the machine learning network configured to propagate image data through a plurality of convolutional layers, each convolutional layer of the plurality of convolutional layers including a plurality of filter channels, the machine learning network including a bottleneck layer configured to recognize an image feature based on a shape of an image component. The system also includes an output module configured to output characterization data that includes a classification of the one or more features.
Type: Application
Filed: March 10, 2021
Publication date: September 15, 2022
Inventors: Dan Levi, Noa Garnett, Roy Uziel
-
Publication number: 20220187446
Abstract: A method of designing a radar system includes implementing a supervised learning process of a neural network to determine a weight corresponding with each of a plurality of patch antennas of a radar system. Each of the plurality of patch antennas is sized in accordance with the weight.
Type: Application
Filed: December 10, 2020
Publication date: June 16, 2022
Inventors: Oded Bialer, Noa Garnett, Amnon Jonas, Ofer Givati
-
Publication number: 20210276574
Abstract: A vehicle having an imaging sensor that is arranged to monitor a field-of-view (FOV) that includes a travel surface proximal to the vehicle is described. Detecting the travel lane includes capturing a FOV image of a viewable region of the travel surface. The FOV image is converted, via an artificial neural network, to a plurality of feature maps. The feature maps are projected, via an inverse perspective mapping algorithm, onto a bird's-eye-view (BEV) orthographic grid. The feature maps include travel lane segments and feature embeddings, and the travel lane segments are represented as line segments. The line segments are concatenated for the plurality of grid sections based upon the feature embeddings to form a predicted lane. The concatenation, or clustering, is accomplished via the feature embeddings.
Type: Application
Filed: March 3, 2020
Publication date: September 9, 2021
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Netalee Efrat Sela, Max Bluvstein, Dan Levi, Noa Garnett, Bat El Shlomo
-
Patent number: 11100344
Abstract: Systems and methods to perform image-based three-dimensional (3D) lane detection involve obtaining known 3D points of one or more lane markings in an image including the one or more lane markings. The method includes overlaying a grid of anchor points on the image. Each of the anchor points is a center of i concentric circles. The method also includes generating an i-length vector and setting an indicator value for each of the anchor points based on the known 3D points as part of a training process of a neural network, and using the neural network to obtain 3D points of one or more lane markings in a second image.
Type: Grant
Filed: November 21, 2019
Date of Patent: August 24, 2021
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Dan Levi, Noa Garnett
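The anchor-grid targets described above can be illustrated with a toy encoding: for each anchor, count the lane points inside each of its i concentric circles, and set the indicator when any point falls inside the outermost circle. This is an illustrative assumption; the patent's exact target format is not reproduced here:

```python
import numpy as np

def anchor_targets(points_2d, anchors, radii):
    """Build, per anchor, an i-length ring-count vector and a presence indicator.

    points_2d: (N, 2) lane-marking points; anchors: list of (x, y) centers;
    radii: i circle radii per anchor (smallest to largest).
    """
    vectors, indicators = [], []
    for ax, ay in anchors:
        d = np.hypot(points_2d[:, 0] - ax, points_2d[:, 1] - ay)
        rings = np.array([np.sum(d <= r) for r in radii])  # cumulative count per circle
        vectors.append(rings)
        indicators.append(int(rings[-1] > 0))
    return np.array(vectors), np.array(indicators)

lane_pts = np.array([[1.0, 1.0], [1.5, 1.2]])
anchors = [(1.0, 1.0), (5.0, 5.0)]
radii = [0.5, 1.0, 2.0]                                    # i = 3 concentric circles
vec, ind = anchor_targets(lane_pts, anchors, radii)
print(ind)  # the anchor near the lane points fires; the far one does not
```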
-
Publication number: 20210158058
Abstract: Systems and methods to perform image-based three-dimensional (3D) lane detection involve obtaining known 3D points of one or more lane markings in an image including the one or more lane markings. The method includes overlaying a grid of anchor points on the image. Each of the anchor points is a center of i concentric circles. The method also includes generating an i-length vector and setting an indicator value for each of the anchor points based on the known 3D points as part of a training process of a neural network, and using the neural network to obtain 3D points of one or more lane markings in a second image.
Type: Application
Filed: November 21, 2019
Publication date: May 27, 2021
Inventors: Dan Levi, Noa Garnett
-
Patent number: 11003920
Abstract: A vehicle, a system for operating a vehicle, and a method of navigating a vehicle are described. The system includes a sensor and a multi-layer convolutional neural network. The sensor generates an image indicative of a road scene of the vehicle. The multi-layer convolutional neural network generates a plurality of feature maps from the image via a first processing pathway, projects at least one of the plurality of feature maps onto a defined plane relative to a defined coordinate system of the road scene to obtain at least one projected feature map, applies a convolution to the at least one projected feature map in a second processing pathway to obtain a final feature map, and determines lane information from the final feature map. A control system adjusts operation of the vehicle using the lane information.
Type: Grant
Filed: November 13, 2018
Date of Patent: May 11, 2021
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Dan Levi, Noa Garnett
-
Publication number: 20200151465
Abstract: A vehicle, a system for operating a vehicle, and a method of navigating a vehicle are described. The system includes a sensor and a multi-layer convolutional neural network. The sensor generates an image indicative of a road scene of the vehicle. The multi-layer convolutional neural network generates a plurality of feature maps from the image via a first processing pathway, projects at least one of the plurality of feature maps onto a defined plane relative to a defined coordinate system of the road scene to obtain at least one projected feature map, applies a convolution to the at least one projected feature map in a second processing pathway to obtain a final feature map, and determines lane information from the final feature map. A control system adjusts operation of the vehicle using the lane information.
Type: Application
Filed: November 13, 2018
Publication date: May 14, 2020
Inventors: Dan Levi, Noa Garnett
-
Publication number: 20190383900
Abstract: A radar system and a method to configure the radar system involve one or more transmit antennas to transmit a radio frequency signal, and one or more receive antennas to receive reflected energy based on the radio frequency signal transmitted by the one or more transmit antennas. The one or more transmit antennas and the one or more receive antennas are arranged in an array. A processor jointly determines a spacing among the one or more transmit antennas and the one or more receive antennas arranged in the array and a mapping between data obtained by the one or more receive antennas and an angle of arrival of one or more targets detected in the data. Joint determination refers to both determination of the spacing being in consideration of the mapping and determination of the mapping being in consideration of the spacing.
Type: Application
Filed: June 18, 2018
Publication date: December 19, 2019
Inventors: Oded Bialer, Noa Garnett, Dan Levi
-
Patent number: 10474908
Abstract: A method in a vehicle for performing multiple on-board sensing tasks concurrently in the same network using deep learning algorithms is provided. The method includes receiving vision sensor data from a sensor on the vehicle, determining a set of features from the vision sensor data using a plurality of feature layers in a convolutional neural network, and concurrently estimating, using the convolutional neural network, bounding boxes for detected objects, free-space boundaries, and object poses for detected objects from the set of features determined by the plurality of feature layers. The neural network may include a plurality of free-space estimation layers configured to determine the boundaries of free-space in the vision sensor data, a plurality of object detection layers configured to detect objects in the image and to estimate bounding boxes that surround the detected objects, and a plurality of object pose detection layers configured to estimate the direction of each object.
Type: Grant
Filed: July 6, 2017
Date of Patent: November 12, 2019
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Dan Levi, Noa Garnett, Ethan Fetaya, Shaul Oron
-
Publication number: 20190012548
Abstract: A method in a vehicle for performing multiple on-board sensing tasks concurrently in the same network using deep learning algorithms is provided. The method includes receiving vision sensor data from a sensor on the vehicle, determining a set of features from the vision sensor data using a plurality of feature layers in a convolutional neural network, and concurrently estimating, using the convolutional neural network, bounding boxes for detected objects, free-space boundaries, and object poses for detected objects from the set of features determined by the plurality of feature layers. The neural network may include a plurality of free-space estimation layers configured to determine the boundaries of free-space in the vision sensor data, a plurality of object detection layers configured to detect objects in the image and to estimate bounding boxes that surround the detected objects, and a plurality of object pose detection layers configured to estimate the direction of each object.
Type: Application
Filed: July 6, 2017
Publication date: January 10, 2019
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Dan Levi, Noa Garnett, Ethan Fetaya, Shaul Oron
-
Publication number: 20160217335
Abstract: Methods and systems are provided for detecting an object in an image. In one embodiment, a method includes: receiving, by a processor, data from a single sensor, the data representing an image; dividing, by the processor, the image into vertical sub-images; processing, by the processor, the vertical sub-images based on deep learning models; and detecting, by the processor, an object based on the processing.
Type: Application
Filed: April 7, 2016
Publication date: July 28, 2016
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Dan Levi, Noa Garnett
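Dividing an image into vertical sub-images, as in the method above, is a simple column-wise split. A one-function sketch; the strip count and equal-width split are illustrative assumptions:

```python
import numpy as np

def vertical_subimages(image, n):
    """Split an H x W image into n vertical strips, each to be processed separately."""
    return np.array_split(image, n, axis=1)

img = np.zeros((4, 9))                       # toy 4x9 "image"
strips = vertical_subimages(img, 3)
print([s.shape for s in strips])             # three strips of shape (4, 3)
```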
-
Publication number: 20130135479
Abstract: A method for generating a virtual tour (VT) comprising, while shooting a video with an image-capturing device, identifying three distinct states of its motion: a) turning around ("scanning"); b) moving forward and backward ("walking"); and c) staying in place and holding the image-capturing device; and thereafter combining them together to create a virtual tour.
Type: Application
Filed: November 29, 2011
Publication date: May 30, 2013
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Orna Bregman-Amitai, Eduard Oks, Noa Garnett