Patents by Inventor Bat El Shlomo
Bat El Shlomo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11842546
Abstract: A system in a vehicle includes a first sensor to obtain first sensor data from a first field of view and provide a first top-view feature representation. The system also includes a second sensor to obtain second sensor data from a second field of view with an overlap with the first field of view and provide a second top-view feature representation. Processing circuitry implements a neural network and provides a top-view three-dimensional stixel representation based on the first top-view feature representation and the second top-view feature representation. The top-view three-dimensional stixel representation is used to control an operation of the vehicle.
Type: Grant
Filed: May 13, 2021
Date of Patent: December 12, 2023
Assignee: GM Global Technology Operations LLC
Inventors: Keren Rotker, Max Bluvstein, Bat El Shlomo
-
Patent number: 11756312
Abstract: Systems and methods include obtaining observation points of a lane line using a sensor of a vehicle. Each observation point indicates a location of a point on the lane line. A method includes generating or updating a lane model with the observation points. The lane model indicates a path of the lane line and the lane model is expressed in a lane-specific coordinate system that differs from a vehicle coordinate system that is defined by an orientation of the vehicle. The method also includes transforming the lane-specific coordinate system to maintain a correspondence between the lane-specific coordinate system and the vehicle coordinate system based on a change in orientation of the vehicle resulting in a change in the vehicle coordinate system.
Type: Grant
Filed: September 17, 2020
Date of Patent: September 12, 2023
Assignee: GM Global Technology Operations LLC
Inventors: Alon Raveh, Shaul Oron, Bat El Shlomo
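The frame-maintenance step above lends itself to a small worked example. This is a hedged sketch assuming a planar (2D pose) motion model; `update_lane_frame` and its arguments are illustrative names, not taken from the patent.

```python
import math

# Sketch: when the vehicle frame moves by (dx, dy) and rotates by dyaw,
# re-express lane-model points by applying the inverse motion, so the
# lane-specific frame keeps its correspondence with the new vehicle frame.

def update_lane_frame(points, dx, dy, dyaw):
    """Apply the inverse of the vehicle's motion to each (x, y) lane point."""
    c, s = math.cos(-dyaw), math.sin(-dyaw)
    out = []
    for x, y in points:
        # first remove the translation, then undo the rotation
        tx, ty = x - dx, y - dy
        out.append((c * tx - s * ty, s * tx + c * ty))
    return out

lane = [(10.0, 0.0), (20.0, 0.5)]
# vehicle moved 1 m forward with no rotation: lane points shift back 1 m
print(update_lane_frame(lane, 1.0, 0.0, 0.0))  # [(9.0, 0.0), (19.0, 0.5)]
```

The patent transforms the lane-specific coordinate system itself rather than individual points, but the inverse-motion arithmetic is the same idea.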
-
Patent number: 11731639
Abstract: A vehicle having an imaging sensor that is arranged to monitor a field-of-view (FOV) that includes a travel surface proximal to the vehicle is described. Detecting the travel lane includes capturing a FOV image of a viewable region of the travel surface. The FOV image is converted, via an artificial neural network, to a plurality of feature maps. The feature maps are projected, via an inverse perspective mapping algorithm, onto a bird's-eye-view (BEV) orthographic grid. The feature maps include travel lane segments and feature embeddings, and the travel lane segments are represented as line segments. The line segments are concatenated for the plurality of grid sections based upon the feature embeddings to form a predicted lane. The concatenation, or clustering, is accomplished via the feature embeddings.
Type: Grant
Filed: March 3, 2020
Date of Patent: August 22, 2023
Assignee: GM Global Technology Operations LLC
Inventors: Netalee Efrat Sela, Max Bluvstein, Dan Levi, Noa Garnett, Bat El Shlomo
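The clustering-by-embedding step can be illustrated with a toy. This is a sketch under stated assumptions: the greedy rule, the Euclidean metric, and the `max_dist` threshold are hypothetical stand-ins for whatever grouping the trained embeddings induce.

```python
# Illustrative sketch of the clustering step: line segments from BEV grid
# sections are grouped into one lane when their feature embeddings are
# close. Threshold and metric are assumptions, not from the patent.

def cluster_by_embedding(embeddings, max_dist=0.5):
    """Greedy clustering: each segment joins the first cluster whose
    representative embedding lies within max_dist (Euclidean)."""
    clusters = []  # list of (representative_embedding, [segment indices])
    for i, emb in enumerate(embeddings):
        for rep, members in clusters:
            d = sum((a - b) ** 2 for a, b in zip(rep, emb)) ** 0.5
            if d <= max_dist:
                members.append(i)
                break
        else:
            clusters.append((emb, [i]))
    return [members for _, members in clusters]

# segments 0 and 1 share a lane; segment 2 belongs to another lane
embs = [(0.0, 0.0), (0.1, 0.0), (3.0, 3.0)]
print(cluster_by_embedding(embs))  # [[0, 1], [2]]
```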
-
Patent number: 11613272
Abstract: Systems and methods involve obtaining observation points of a lane line using one or more sensors of a vehicle. Each observation point indicates a location of a point on the lane line. A method includes obtaining uncertainty values, each uncertainty value corresponding with one of the observation points. A lane model is generated or updated using the observation points. The lane model indicates a path of the lane line. An uncertainty model is generated or updated using the uncertainty values corresponding with the observation points. The uncertainty model indicates uncertainty associated with each portion of the lane model.
Type: Grant
Filed: September 17, 2020
Date of Patent: March 28, 2023
Assignee: GM Global Technology Operations LLC
Inventors: Alon Raveh, Shaul Oron, Bat El Shlomo
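One simple way to attach an uncertainty value to "each portion" of a lane model is to bin observation points along the lane and aggregate per bin. This is only an assumed representation for illustration; the patent does not specify binning, and all names here are hypothetical.

```python
# Assumed-representation sketch: bin observation points by longitudinal
# position along the lane and keep the mean uncertainty per bin, so each
# portion of the lane model carries its own uncertainty value.

def uncertainty_per_portion(positions, uncertainties, bin_size=10.0):
    """Average uncertainty of the observation points falling in each bin."""
    bins = {}
    for pos, u in zip(positions, uncertainties):
        key = int(pos // bin_size)
        bins.setdefault(key, []).append(u)
    return {k: sum(v) / len(v) for k, v in sorted(bins.items())}

positions = [2.0, 7.0, 14.0]          # metres along the lane
uncerts = [0.25, 0.75, 0.5]           # per-point uncertainty values
print(uncertainty_per_portion(positions, uncerts))  # {0: 0.5, 1: 0.5}
```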
-
Publication number: 20220366176
Abstract: A system in a vehicle includes a first sensor to obtain first sensor data from a first field of view and provide a first top-view feature representation. The system also includes a second sensor to obtain second sensor data from a second field of view with an overlap with the first field of view and provide a second top-view feature representation. Processing circuitry implements a neural network and provides a top-view three-dimensional stixel representation based on the first top-view feature representation and the second top-view feature representation. The top-view three-dimensional stixel representation is used to control an operation of the vehicle.
Type: Application
Filed: May 13, 2021
Publication date: November 17, 2022
Inventors: Keren Rotker, Max Bluvstein, Bat El Shlomo
-
Patent number: 11390286
Abstract: A system for end-to-end prediction of lane detection uncertainty includes a sensor device of a host vehicle generating data related to a road surface and a navigation controller including a computerized processor operable to monitor an input image from the sensor device, utilize a convolutional neural network to analyze the input image and output a lane prediction and a lane uncertainty prediction, and generate a commanded navigation plot based upon the lane prediction and the lane uncertainty prediction. The convolutional neural network is initially trained using a per-point association and error calculation, including associating a selected ground truth lane to a selected set of data points related to a predicted lane and then associating at least one point of the selected ground truth lane to a corresponding data point from the selected set of data points related to the predicted lane.
Type: Grant
Filed: March 4, 2020
Date of Patent: July 19, 2022
Assignee: GM Global Technology Operations LLC
Inventors: Netalee Efrat Sela, Max Bluvstein, Bat El Shlomo
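The per-point association described above can be sketched as a nearest-neighbour match followed by an error average. This is a minimal illustration, not the patented training procedure; the matching rule and error definition are assumptions.

```python
# Hedged sketch of per-point association: match every ground-truth lane
# point to its nearest predicted point and report the mean matched
# distance as a training error. Details are illustrative.

def associate_and_error(gt_points, pred_points):
    """Return (match indices, mean Euclidean error) for the association."""
    matches, total = [], 0.0
    for gx, gy in gt_points:
        best = min(range(len(pred_points)),
                   key=lambda i: (pred_points[i][0] - gx) ** 2
                               + (pred_points[i][1] - gy) ** 2)
        px, py = pred_points[best]
        matches.append(best)
        total += ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5
    return matches, total / len(gt_points)

gt = [(0.0, 0.0), (10.0, 0.0)]
pred = [(0.0, 1.0), (10.0, -1.0)]
print(associate_and_error(gt, pred))  # ([0, 1], 1.0)
```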
-
Patent number: 11380110
Abstract: Vehicles and methods for detecting a three-dimensional (3D) position of a traffic sign and controlling a feature of the vehicle based on the 3D position of the traffic sign. An image is received from a camera. The image is processed using a neural network. The neural network includes a traffic sign class block regressing a traffic sign class for a traffic sign included in the image and a rotation block regressing an orientation for the traffic sign. Dimensions for the traffic sign are retrieved from an information database based on the traffic sign class. A 3D position of the traffic sign is determined based on the dimensions of the traffic sign and the orientation of the traffic sign. A feature of the vehicle is controlled based on the 3D position of the traffic sign.
Type: Grant
Filed: December 17, 2020
Date of Patent: July 5, 2022
Assignee: GM Global Technology Operations LLC
Inventors: Erez Ben Yaacov, Elad Plaut, Bat El Shlomo
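The geometric core of this recovery, a known physical size plus a regressed orientation yielding depth, can be shown with a simplified pinhole model. This is a sketch under stated assumptions: the focal length, the cosine foreshortening, and `sign_depth` itself are illustrative, not the patented computation.

```python
import math

# Simplified pinhole sketch: given the sign's real width (looked up from
# its class), its yaw (from the rotation head), and its pixel width,
# depth follows from similar triangles. Focal length is an assumption.

def sign_depth(real_width_m, yaw_rad, pixel_width, focal_px=1000.0):
    """Depth in metres; yaw foreshortens the apparent width by cos(yaw)."""
    apparent_width = real_width_m * math.cos(yaw_rad)
    return focal_px * apparent_width / pixel_width

# a 0.6 m-wide sign, face-on, spanning 60 px -> 10 m away
print(sign_depth(0.6, 0.0, 60.0))  # 10.0
```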
-
Publication number: 20220198203
Abstract: Vehicles and methods for detecting a three-dimensional (3D) position of a traffic sign and controlling a feature of the vehicle based on the 3D position of the traffic sign. An image is received from a camera. The image is processed using a neural network. The neural network includes a traffic sign class block regressing a traffic sign class for a traffic sign included in the image and a rotation block regressing an orientation for the traffic sign. Dimensions for the traffic sign are retrieved from an information database based on the traffic sign class. A 3D position of the traffic sign is determined based on the dimensions of the traffic sign and the orientation of the traffic sign. A feature of the vehicle is controlled based on the 3D position of the traffic sign.
Type: Application
Filed: December 17, 2020
Publication date: June 23, 2022
Applicant: GM Global Technology Operations LLC
Inventors: Erez Ben Yaacov, Elad Plaut, Bat El Shlomo
-
Publication number: 20220083791
Abstract: Systems and methods include obtaining observation points of a lane line using a sensor of a vehicle. Each observation point indicates a location of a point on the lane line. A method includes generating or updating a lane model with the observation points. The lane model indicates a path of the lane line and the lane model is expressed in a lane-specific coordinate system that differs from a vehicle coordinate system that is defined by an orientation of the vehicle. The method also includes transforming the lane-specific coordinate system to maintain a correspondence between the lane-specific coordinate system and the vehicle coordinate system based on a change in orientation of the vehicle resulting in a change in the vehicle coordinate system.
Type: Application
Filed: September 17, 2020
Publication date: March 17, 2022
Inventors: Alon Raveh, Shaul Oron, Bat El Shlomo
-
Publication number: 20220080997
Abstract: Systems and methods involve obtaining observation points of a lane line using one or more sensors of a vehicle. Each observation point indicates a location of a point on the lane line. A method includes obtaining uncertainty values, each uncertainty value corresponding with one of the observation points. A lane model is generated or updated using the observation points. The lane model indicates a path of the lane line. An uncertainty model is generated or updated using the uncertainty values corresponding with the observation points. The uncertainty model indicates uncertainty associated with each portion of the lane model.
Type: Application
Filed: September 17, 2020
Publication date: March 17, 2022
Inventors: Alon Raveh, Shaul Oron, Bat El Shlomo
-
Patent number: 11270170
Abstract: A vehicle, system and method of detecting an object. The system includes an image network, a radar network and a head. The image network receives image data and proposes a boundary box from the image data and an object proposal. The radar network receives radar data and the boundary box and generates a fused set of data including the radar data and the image data. The head determines a parameter of the object from the object proposal and the fused set of data.
Type: Grant
Filed: March 18, 2020
Date of Patent: March 8, 2022
Assignee: GM Global Technology Operations LLC
Inventors: Liat Sless, Gilad Cohen, Shaul Oron, Bat El Shlomo, Roee Lahav
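The fused-set-of-data step can be made concrete with a toy. This is a loose illustration only: the patent fuses data through a radar network, whereas here radar returns are simply pooled inside an image-proposed box, and every name and shape is an assumption.

```python
# Illustrative sketch of camera-radar fusion: radar returns whose image
# projection falls inside an image-proposed boundary box are pooled with
# the box to form the record a detection head would consume.

def fuse_box_with_radar(box, radar_points):
    """box = (x1, y1, x2, y2) in image coords; radar_points = list of
    (u, v, range_m). Returns the box plus the radar ranges inside it."""
    x1, y1, x2, y2 = box
    inside = [r for (u, v, r) in radar_points
              if x1 <= u <= x2 and y1 <= v <= y2]
    return {"box": box, "radar_ranges": inside}

box = (100, 50, 200, 150)
radar = [(120, 60, 22.5), (300, 80, 40.0), (150, 140, 23.1)]
print(fuse_box_with_radar(box, radar))
# {'box': (100, 50, 200, 150), 'radar_ranges': [22.5, 23.1]}
```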
-
Patent number: 11214261
Abstract: Methods and apparatus are provided for detecting and assigning objects to sensed values. An object detection arrangement includes a processor that is programmed to execute a first branch of instructions and a second branch of instructions. Each branch of instructions includes receiving a modality from at least one sensor of a group of sensors via a respective interface and determining an output value based on the modality. The object detection arrangement includes an association distance matrix. Modalities of different branches of instructions define different modalities of an object external to the object detection arrangement. The object detection arrangement cumulates the output values, and the association distance matrix associates an object to the cumulated output values to thereby detect and track the object external to the object detection arrangement.
Type: Grant
Filed: June 11, 2019
Date of Patent: January 4, 2022
Assignee: GM Global Technology Operations LLC
Inventors: Max Bluvstein, Shaul Oron, Bat El Shlomo
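An association distance matrix, in general form, pairs tracked objects with sensed outputs by distance. The sketch below uses a greedy smallest-entry rule as a simple stand-in (an optimal solver such as the Hungarian algorithm could replace it); the arrangement in the patent is not specified at this level, so everything here is illustrative.

```python
# Sketch of association via a distance matrix: tracked objects are
# assigned to cumulated sensor outputs by repeatedly taking the smallest
# remaining entry. Greedy rule and 2D positions are assumptions.

def associate(tracks, detections):
    """tracks/detections: lists of (x, y). Returns {track_i: det_j}."""
    pairs = sorted(
        (((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5, i, j)
        for i, (tx, ty) in enumerate(tracks)
        for j, (dx, dy) in enumerate(detections))
    assigned, used_t, used_d = {}, set(), set()
    for _dist, i, j in pairs:
        if i not in used_t and j not in used_d:
            assigned[i] = j
            used_t.add(i)
            used_d.add(j)
    return assigned

tracks = [(0.0, 0.0), (5.0, 5.0)]
dets = [(5.1, 5.0), (0.2, 0.1)]
print(associate(tracks, dets))  # {1: 0, 0: 1}
```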
-
Publication number: 20210295113
Abstract: A vehicle, system and method of detecting an object. The system includes an image network, a radar network and a head. The image network receives image data and proposes a boundary box from the image data and an object proposal. The radar network receives radar data and the boundary box and generates a fused set of data including the radar data and the image data. The head determines a parameter of the object from the object proposal and the fused set of data.
Type: Application
Filed: March 18, 2020
Publication date: September 23, 2021
Inventors: Liat Sless, Gilad Cohen, Shaul Oron, Bat El Shlomo, Roee Lahav
-
Publication number: 20210276564
Abstract: A system for end-to-end prediction of lane detection uncertainty includes a sensor device of a host vehicle generating data related to a road surface and a navigation controller including a computerized processor operable to monitor an input image from the sensor device, utilize a convolutional neural network to analyze the input image and output a lane prediction and a lane uncertainty prediction, and generate a commanded navigation plot based upon the lane prediction and the lane uncertainty prediction. The convolutional neural network is initially trained using a per-point association and error calculation, including associating a selected ground truth lane to a selected set of data points related to a predicted lane and then associating at least one point of the selected ground truth lane to a corresponding data point from the selected set of data points related to the predicted lane.
Type: Application
Filed: March 4, 2020
Publication date: September 9, 2021
Applicant: GM Global Technology Operations LLC
Inventors: Netalee Efrat Sela, Max Bluvstein, Bat El Shlomo
-
Publication number: 20210276574
Abstract: A vehicle having an imaging sensor that is arranged to monitor a field-of-view (FOV) that includes a travel surface proximal to the vehicle is described. Detecting the travel lane includes capturing a FOV image of a viewable region of the travel surface. The FOV image is converted, via an artificial neural network, to a plurality of feature maps. The feature maps are projected, via an inverse perspective mapping algorithm, onto a bird's-eye-view (BEV) orthographic grid. The feature maps include travel lane segments and feature embeddings, and the travel lane segments are represented as line segments. The line segments are concatenated for the plurality of grid sections based upon the feature embeddings to form a predicted lane. The concatenation, or clustering, is accomplished via the feature embeddings.
Type: Application
Filed: March 3, 2020
Publication date: September 9, 2021
Applicant: GM Global Technology Operations LLC
Inventors: Netalee Efrat Sela, Max Bluvstein, Dan Levi, Noa Garnett, Bat El Shlomo
-
Publication number: 20200391750
Abstract: Methods and apparatus are provided for detecting and assigning objects to sensed values. An object detection arrangement includes a processor that is programmed to execute a first branch of instructions and a second branch of instructions. Each branch of instructions includes receiving a modality from at least one sensor of a group of sensors via a respective interface and determining an output value based on the modality. The object detection arrangement includes an association distance matrix. Modalities of different branches of instructions define different modalities of an object external to the object detection arrangement. The object detection arrangement cumulates the output values, and the association distance matrix associates an object to the cumulated output values to thereby detect and track the object external to the object detection arrangement.
Type: Application
Filed: June 11, 2019
Publication date: December 17, 2020
Applicant: GM Global Technology Operations LLC
Inventors: Max Bluvstein, Shaul Oron, Bat El Shlomo