Patents by Inventor Shaul Oron

Shaul Oron has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240143074
    Abstract: A method of training a disparity estimation network. The method includes obtaining an eye-gaze dataset having first images with at least one gaze direction associated with each of the first images. A gaze prediction neural network is trained based on the eye-gaze dataset to develop a model trained to provide a gaze prediction for an external image. A depth database is obtained that includes second images having depth information associated with each of the second images. A disparity estimation neural network for object detection is trained based on an output from the gaze prediction neural network and an output from the depth database.
    Type: Application
    Filed: October 18, 2023
    Publication date: May 2, 2024
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Ron M. Hecht, Omer Tsimhoni, Dan Levi, Shaul Oron, Andrea Forgacs, Ohad Rahamim, Gershon Celniker
  • Patent number: 11798295
    Abstract: A vehicle, system and method of navigating the vehicle. The system includes a sensor and a processor. The sensor is configured to obtain a first set of detection points representative of a lane of a road section at a first time step and a second set of detection points representative of the lane at a second time step. The processor is configured to determine a set of predicted points for the second time step from the first set of detection points, obtain a set of fused points from the second set of detection points and the set of predicted points, and navigate the vehicle using the set of fused points.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: October 24, 2023
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Alon Raveh, Shaul Oron
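
The prediction-and-fusion step described in the abstract above can be illustrated with a minimal sketch: points detected at the first time step are extrapolated to the second time step and then blended with the fresh detections. The constant-velocity motion model and the fixed blending weight are illustrative assumptions only; the patent does not commit to either.

```python
def predict_points(points, velocities, dt):
    """Extrapolate last step's lane detection points to the current
    time step using a (deliberately simple) constant-velocity model."""
    return [(x + vx * dt, y + vy * dt)
            for (x, y), (vx, vy) in zip(points, velocities)]

def fuse_points(detected, predicted, w_detected=0.7):
    """Blend freshly detected points with predicted points. A fixed
    weight stands in for whatever filter the claims actually cover."""
    return [(w_detected * xd + (1 - w_detected) * xp,
             w_detected * yd + (1 - w_detected) * yp)
            for (xd, yd), (xp, yp) in zip(detected, predicted)]
```

The fused points, rather than the raw detections, would then feed the navigation step, smoothing out single-frame detection noise.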
  • Patent number: 11756312
    Abstract: Systems and methods include obtaining observation points of a lane line using a sensor of a vehicle. Each observation point indicates a location of a point on the lane line. A method includes generating or updating a lane model with the observation points. The lane model indicates a path of the lane line and the lane model is expressed in a lane-specific coordinate system that differs from a vehicle coordinate system that is defined by an orientation of the vehicle. The method also includes transforming the lane-specific coordinate system to maintain a correspondence between the lane-specific coordinate system and the vehicle coordinate system based on a change in orientation of the vehicle resulting in a change in the vehicle coordinate system.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: September 12, 2023
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Alon Raveh, Shaul Oron, Bat El Shlomo
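
The coordinate-system maintenance described above amounts to a rigid transform: when the vehicle translates and yaws, points held in the lane-specific frame are re-expressed so they stay consistent with the new vehicle frame. A minimal sketch, taking the pose change (dx, dy, dtheta) as given:

```python
import math

def reframe(points, dx, dy, dtheta):
    """Re-express points after the vehicle translates by (dx, dy) and
    yaws by dtheta, all measured in the old vehicle frame. This is the
    transform that keeps the lane-specific coordinate system in
    correspondence with the moving vehicle coordinate system."""
    c, s = math.cos(dtheta), math.sin(dtheta)
    out = []
    for x, y in points:
        tx, ty = x - dx, y - dy          # undo the translation
        out.append((c * tx + s * ty,     # rotate by -dtheta
                    -s * tx + c * ty))
    return out
```

For example, a point one unit ahead of the vehicle ends up one unit to the vehicle's right after a 90-degree left turn in place.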
  • Patent number: 11613272
    Abstract: Systems and methods involve obtaining observation points of a lane line using one or more sensors of a vehicle. Each observation point indicates a location of a point on the lane line. A method includes obtaining uncertainty values, each uncertainty value corresponding with one of the observation points. A lane model is generated or updated using the observation points. The lane model indicates a path of the lane line. An uncertainty model is generated or updated using the uncertainty values corresponding with the observation points. The uncertainty model indicates uncertainty associated with each portion of the lane model.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: March 28, 2023
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Alon Raveh, Shaul Oron, Bat El Shlomo
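
One conventional way to realize the per-portion uncertainty update described above is inverse-variance fusion, shown here purely as an illustrative stand-in for the claimed uncertainty model: each new observation tightens the variance of the corresponding portion of the lane model.

```python
def fuse_uncertain(model_val, model_var, obs_val, obs_var):
    """Fuse a lane-model estimate with a new observation by
    inverse-variance weighting. The fused variance is never larger
    than either input, so confidence accumulates with observations."""
    fused_var = model_var * obs_var / (model_var + obs_var)
    fused_val = fused_var * (model_val / model_var + obs_val / obs_var)
    return fused_val, fused_var
```
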
  • Publication number: 20220366173
    Abstract: A vehicle, system and method of navigating the vehicle. The system includes a sensor and a processor. The sensor is configured to obtain a first set of detection points representative of a lane of a road section at a first time step and a second set of detection points representative of the lane at a second time step. The processor is configured to determine a set of predicted points for the second time step from the first set of detection points, obtain a set of fused points from the second set of detection points and the set of predicted points, and navigate the vehicle using the set of fused points.
    Type: Application
    Filed: April 27, 2021
    Publication date: November 17, 2022
    Inventors: Alon Raveh, Shaul Oron
  • Publication number: 20220083791
    Abstract: Systems and methods include obtaining observation points of a lane line using a sensor of a vehicle. Each observation point indicates a location of a point on the lane line. A method includes generating or updating a lane model with the observation points. The lane model indicates a path of the lane line and the lane model is expressed in a lane-specific coordinate system that differs from a vehicle coordinate system that is defined by an orientation of the vehicle. The method also includes transforming the lane-specific coordinate system to maintain a correspondence between the lane-specific coordinate system and the vehicle coordinate system based on a change in orientation of the vehicle resulting in a change in the vehicle coordinate system.
    Type: Application
    Filed: September 17, 2020
    Publication date: March 17, 2022
    Inventors: Alon Raveh, Shaul Oron, Bat El Shlomo
  • Publication number: 20220080997
    Abstract: Systems and methods involve obtaining observation points of a lane line using one or more sensors of a vehicle. Each observation point indicates a location of a point on the lane line. A method includes obtaining uncertainty values, each uncertainty value corresponding with one of the observation points. A lane model is generated or updated using the observation points. The lane model indicates a path of the lane line. An uncertainty model is generated or updated using the uncertainty values corresponding with the observation points. The uncertainty model indicates uncertainty associated with each portion of the lane model.
    Type: Application
    Filed: September 17, 2020
    Publication date: March 17, 2022
    Inventors: Alon Raveh, Shaul Oron, Bat El Shlomo
  • Patent number: 11270170
    Abstract: A vehicle, system and method of detecting an object. The system includes an image network, a radar network and a head. The image network receives image data and proposes a boundary box from the image data and an object proposal. The radar network receives radar data and the boundary box and generates a fused set of data including the radar data and the image data. The head determines a parameter of the object from the object proposal and the fused set of data.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: March 8, 2022
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Liat Sless, Gilad Cohen, Shaul Oron, Bat El Shlomo, Roee Lahav
  • Patent number: 11214261
    Abstract: Methods and apparatus are provided for detecting and assigning objects to sensed values. An object detection arrangement includes a processor that is programmed to execute a first branch of instructions and a second branch of instructions. Each branch of instructions includes receiving a modality from at least one sensor of a group of sensors via a respective interface and determining an output value based on the modality. The object detection arrangement includes an association distance matrix. Modalities of different branches of instructions define different modalities of an object external to the object detection arrangement. The object detection arrangement cumulates the output values, and the association distance matrix associates an object to the cumulated output values to thereby detect and track the object external to the object detection arrangement.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: January 4, 2022
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Max Bluvstein, Shaul Oron, Bat El Shlomo
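
The association step in the abstract above can be sketched with a cumulated distance matrix and a greedy nearest-neighbour assignment. Summing per-dimension distances stands in for cumulating the output values of the two instruction branches; the gate threshold and greedy strategy are illustrative choices (Hungarian matching would be the heavier alternative).

```python
def associate(tracks, detections, gate=5.0):
    """Build a distance matrix by cumulating per-component distances
    between each track and each detection, then greedily assign each
    track to its nearest unused detection within the gate."""
    dist = [[sum(abs(t[k] - d[k]) for k in range(len(t)))
             for d in detections] for t in tracks]
    pairs, used = [], set()
    for ti, row in enumerate(dist):
        best = min((c for c in range(len(row)) if c not in used),
                   key=lambda c: row[c], default=None)
        if best is not None and row[best] < gate:
            pairs.append((ti, best))
            used.add(best)
    return pairs
```
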
  • Publication number: 20210295113
    Abstract: A vehicle, system and method of detecting an object. The system includes an image network, a radar network and a head. The image network receives image data and proposes a boundary box from the image data and an object proposal. The radar network receives radar data and the boundary box and generates a fused set of data including the radar data and the image data. The head determines a parameter of the object from the object proposal and the fused set of data.
    Type: Application
    Filed: March 18, 2020
    Publication date: September 23, 2021
    Inventors: Liat Sless, Gilad Cohen, Shaul Oron, Bat El Shlomo, Roee Lahav
  • Publication number: 20200391750
    Abstract: Methods and apparatus are provided for detecting and assigning objects to sensed values. An object detection arrangement includes a processor that is programmed to execute a first branch of instructions and a second branch of instructions. Each branch of instructions includes receiving a modality from at least one sensor of a group of sensors via a respective interface and determining an output value based on the modality. The object detection arrangement includes an association distance matrix. Modalities of different branches of instructions define different modalities of an object external to the object detection arrangement. The object detection arrangement cumulates the output values, and the association distance matrix associates an object to the cumulated output values to thereby detect and track the object external to the object detection arrangement.
    Type: Application
    Filed: June 11, 2019
    Publication date: December 17, 2020
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Max Bluvstein, Shaul Oron, Bat El Shlomo
  • Patent number: 10699563
    Abstract: In one example implementation according to aspects of the present disclosure, a computer-implemented method includes projecting, by a processing device, tracked targets onto a virtual cylindrical omni-directional camera (VCOC). The method further includes projecting, by the processing device, detected targets onto the VCOC. The method further includes computing, by the processing device, a two-dimensional intersection-over-union (2D-IOU) between the tracked targets and the detected targets. The method further includes performing, by the processing device, an association between the tracked targets and the detected targets based at least in part on the computed IOU. The method further includes controlling, by the processing device, a vehicle based at least in part on the association.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: June 30, 2020
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventor: Shaul Oron
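
Once tracked and detected targets are projected onto the virtual cylindrical omni-directional camera, they reduce to plain rectangles on the unrolled cylinder surface, and the association score is an ordinary 2D intersection-over-union. A sketch of just that computation (the cylindrical projection itself is omitted):

```python
def iou_2d(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2), imagined here on the unrolled VCOC surface."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # overlap height
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0
```

Tracked-to-detected association would then match pairs with high IOU, for instance by thresholding or by optimal assignment on 1 - IOU.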
  • Patent number: 10474908
    Abstract: A method in a vehicle for performing multiple on-board sensing tasks concurrently in the same network using deep learning algorithms is provided. The method includes receiving vision sensor data from a sensor on the vehicle, determining a set of features from the vision sensor data using a plurality of feature layers in a convolutional neural network, and concurrently estimating, using the convolutional neural network, bounding boxes for detected objects, free-space boundaries, and object poses for detected objects from the set of features determined by the plurality of feature layers. The neural network may include a plurality of free-space estimation layers configured to determine the boundaries of free-space in the vision sensor data, a plurality of object detection layers configured to detect objects in the image and to estimate bounding boxes that surround the detected objects, and a plurality of object pose detection layers configured to estimate the direction of each object.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: November 12, 2019
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Dan Levi, Noa Garnett, Ethan Fetaya, Shaul Oron
  • Publication number: 20190012548
    Abstract: A method in a vehicle for performing multiple on-board sensing tasks concurrently in the same network using deep learning algorithms is provided. The method includes receiving vision sensor data from a sensor on the vehicle, determining a set of features from the vision sensor data using a plurality of feature layers in a convolutional neural network, and concurrently estimating, using the convolutional neural network, bounding boxes for detected objects, free-space boundaries, and object poses for detected objects from the set of features determined by the plurality of feature layers. The neural network may include a plurality of free-space estimation layers configured to determine the boundaries of free-space in the vision sensor data, a plurality of object detection layers configured to detect objects in the image and to estimate bounding boxes that surround the detected objects, and a plurality of object pose detection layers configured to estimate the direction of each object.
    Type: Application
    Filed: July 6, 2017
    Publication date: January 10, 2019
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Dan Levi, Noa Garnett, Ethan Fetaya, Shaul Oron
  • Patent number: 9678749
    Abstract: A processor includes a front end including a decoder, an execution unit including a shift-sum multiplier (SSM), and a retirement unit. The decoder includes logic to identify a multiplication instruction to multiply a first number and a second number. The execution unit includes logic to, based on the instruction, access a look-up table based on the second number to determine a plurality of shift parameters and one or more flag parameters. The SSM includes logic to use the shift parameters to shift the first number to determine a plurality of partial products, and the flag parameters to determine signs of the partial products. The SSM also includes logic to sum the partial products to yield a result of the multiplication instruction.
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: June 13, 2017
    Assignee: Intel Corporation
    Inventors: Shaul Oron, Gilad Michael
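
The shift-sum idea above can be sketched in a few lines: the second operand is decomposed into signed powers of two, standing in for the patent's look-up table of shift and flag parameters, and each partial product is a shifted copy of the first operand. This is a simplification for illustration; the actual LUT encoding claimed in the patent is not reproduced here.

```python
def signed_power_decomposition(n):
    """Decompose n into signed powers of two, returning (shift, sign)
    pairs such that sum(sign * 2**shift) == n. Plays the role of the
    look-up table of shift and flag parameters."""
    params = []
    sign = 1 if n >= 0 else -1
    n = abs(n)
    shift = 0
    while n:
        if n & 1:
            params.append((shift, sign))
        n >>= 1
        shift += 1
    return params

def shift_sum_multiply(a, b):
    """Multiply a by b using only shifts and adds, mirroring the
    shift-sum multiplier (SSM) described in the abstract."""
    result = 0
    for shift, sign in signed_power_decomposition(b):
        result += sign * (a << shift)   # one partial product per shift
    return result
```
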
  • Patent number: 9489720
    Abstract: System, apparatus, method, and computer readable media for texture enhanced non-local means (NLM) image denoising. In embodiments, detail is preserved in filtered image data through a blending between the noisy input target pixel value and the NLM pixel value that is driven by self-similarity and further informed by an independent measure of local texture. In embodiments, the blending is driven by one or more blending weights or coefficients indicative of texture so that the level of detail preserved by the enhanced noise reduction filter scales with the amount of texture. Embodiments herein may thereby denoise regions of an image that lack significant texture (i.e., are smooth) more aggressively than more highly textured regions. In further embodiments, the blending coefficient is further determined based on similarity scores of candidate patches with the number of those scores considered being based on the texture score.
    Type: Grant
    Filed: September 23, 2014
    Date of Patent: November 8, 2016
    Assignee: Intel Corporation
    Inventors: Shaul Oron, Gilad Michael
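
A toy 1-D version of the texture-driven blending described above: the plain NLM output is mixed back toward the noisy input in proportion to a local-variance texture score, so smooth regions receive the full NLM smoothing while textured regions retain more of their original detail. The parameters h and k and the variance-based texture score are illustrative choices, not taken from the patent.

```python
import math

def nlm_texture_denoise(signal, half=1, h=10.0, k=50.0):
    """Texture-aware non-local means on a 1-D signal."""
    n = len(signal)
    out = list(signal)                      # border samples pass through
    for i in range(half, n - half):
        patch = signal[i - half:i + half + 1]
        # Plain NLM: average samples weighted by patch similarity.
        num = den = 0.0
        for j in range(half, n - half):
            other = signal[j - half:j + half + 1]
            d2 = sum((a - b) ** 2 for a, b in zip(patch, other))
            w = math.exp(-d2 / (h * h))
            num += w * signal[j]
            den += w
        nlm = num / den
        # Texture score: local variance around sample i.
        mean = sum(patch) / len(patch)
        tex = sum((v - mean) ** 2 for v in patch) / len(patch)
        alpha = tex / (tex + k)             # more texture -> keep more input
        out[i] = alpha * signal[i] + (1 - alpha) * nlm
    return out
```

On a nearly flat noisy signal the texture score is small, so the output is dominated by the NLM average and the noise is strongly suppressed.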
  • Publication number: 20160179514
    Abstract: A processor includes a front end including a decoder, an execution unit including a shift-sum multiplier (SSM), and a retirement unit. The decoder includes logic to identify a multiplication instruction to multiply a first number and a second number. The execution unit includes logic to, based on the instruction, access a look-up table based on the second number to determine a plurality of shift parameters and one or more flag parameters. The SSM includes logic to use the shift parameters to shift the first number to determine a plurality of partial products, and the flag parameters to determine signs of the partial products. The SSM also includes logic to sum the partial products to yield a result of the multiplication instruction.
    Type: Application
    Filed: December 22, 2014
    Publication date: June 23, 2016
    Inventors: Shaul Oron, Gilad Michael
  • Publication number: 20160086317
    Abstract: System, apparatus, method, and computer readable media for texture enhanced non-local means (NLM) image denoising. In embodiments, detail is preserved in filtered image data through a blending between the noisy input target pixel value and the NLM pixel value that is driven by self-similarity and further informed by an independent measure of local texture. In embodiments, the blending is driven by one or more blending weights or coefficients indicative of texture so that the level of detail preserved by the enhanced noise reduction filter scales with the amount of texture. Embodiments herein may thereby denoise regions of an image that lack significant texture (i.e., are smooth) more aggressively than more highly textured regions. In further embodiments, the blending coefficient is further determined based on similarity scores of candidate patches with the number of those scores considered being based on the texture score.
    Type: Application
    Filed: September 23, 2014
    Publication date: March 24, 2016
    Inventors: Shaul Oron, Gilad Michael