Patents by Inventor Dan Levi

Dan Levi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12270891
    Abstract: A system includes a transmitter of a radar system to transmit transmitted signals, and a receiver of the radar system to receive received signals based on reflection of one or more of the transmitted signals by one or more objects. The system also includes a processor to train a neural network with reference data obtained by simulating a higher resolution radar system than the radar system to obtain a trained neural network. The trained neural network enhances detection of the one or more objects based on obtaining and processing the received signals in a vehicle. One or more operations of the vehicle are controlled based on the detection of the one or more objects.
    Type: Grant
    Filed: September 19, 2022
    Date of Patent: April 8, 2025
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Oded Bialer, Yuval Haitman, Dan Levi
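
The entry above describes training a neural network against reference data simulated from a higher-resolution radar. As a rough sketch of that kind of training setup (not the patented method), the fragment below fits a small convolutional network to map a low-resolution radar map toward a simulated high-resolution target; the tensor names, shapes, and architecture are all assumptions.

# Illustrative sketch only: a small network trained to map low-resolution
# radar maps to targets simulated by a higher-resolution radar system.
# All names, shapes, and layer sizes are assumptions, not patent details.
import torch
import torch.nn as nn

class RadarEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = RadarEnhancer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Hypothetical batch: a low-resolution measurement and its simulated
# high-resolution reference.
low_res_spectrum = torch.rand(8, 1, 64, 64)
high_res_target = torch.rand(8, 1, 64, 64)

for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(low_res_spectrum), high_res_target)
    loss.backward()
    optimizer.step()
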
  • Patent number: 12269500
    Abstract: A system in a vehicle includes an image sensor to obtain images in an image sensor coordinate system and a depth sensor to obtain point clouds in a depth sensor coordinate system. Processing circuitry implements a neural network to determine a validation state of a transformation matrix that transforms the point clouds in the depth sensor coordinate system to transformed point clouds in the image sensor coordinate system. The transformation matrix includes rotation parameters and translation parameters.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: April 8, 2025
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Michael Baltaxe, Dan Levi, Noa Garnett, Doron Portnoy, Amit Batikoff, Shahar Ben Ezra, Tal Furman
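
For the entry above, the core geometric object is a transformation matrix with rotation and translation parameters that maps depth-sensor point clouds into the image sensor's coordinate system. The sketch below shows only that transformation step under assumed names (`transform_points`, a 4x4 matrix `T`); the validation neural network itself is not reproduced.

# Illustrative sketch of applying a rotation/translation matrix to a point
# cloud before its alignment with the image sensor frame is checked.
# Matrix values and point cloud are hypothetical.
import numpy as np

def transform_points(points_xyz: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a 4x4 extrinsic matrix (rotation + translation) to Nx3 points."""
    homogeneous = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    return (homogeneous @ T.T)[:, :3]

T = np.eye(4)                      # hypothetical depth-to-image extrinsics
T[:3, 3] = [0.1, -0.05, 0.2]       # translation parameters
cloud = np.random.rand(1000, 3)    # hypothetical depth-sensor points
cloud_in_camera = transform_points(cloud, T)
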
  • Patent number: 12252014
    Abstract: A method of controlling a color display screen aboard a motor vehicle includes performing, via a host computer, a color calibration test of a user of the motor vehicle in which the user is subjected to a calibrated set of color-coded test information. The method includes receiving a color perception response of the user to the calibrated set of color-coded test information via the host computer. Additionally, the method includes mapping a reduced visual gamut of the user via the host computer using the color perception response, and then commanding adjustment of user-specific color settings of the motor vehicle using the reduced visual gamut to thereby accommodate a color perception deficiency of the user.
    Type: Grant
    Filed: November 10, 2022
    Date of Patent: March 18, 2025
    Assignee: GM Global Technology Operations LLC
    Inventors: Yael Shmueli Friedland, Asaf Degani, Dan Levi, Tzvi Philipp, Eran Kishon
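
As a loose illustration of the entry above (not the patented procedure), the toy sketch below records which calibrated test patches a user failed to identify and shifts those display colors; the patch values, response data, and adjustment rule are all hypothetical.

# Toy sketch of the general idea: test patches the user cannot distinguish
# are treated as lying outside the user's usable gamut, and their display
# colors are shifted. Everything here is a hypothetical placeholder.
test_patches = {"red": (220, 40, 40), "green": (40, 180, 60), "amber": (240, 180, 30)}
responses = {"red": True, "green": False, "amber": True}   # calibration results

confused = {name for name, ok in responses.items() if not ok}

def adjust_color(name, rgb):
    # Shift confusable hues toward higher contrast (illustrative rule only).
    if name in confused:
        r, g, b = rgb
        return (min(r + 30, 255), g, min(b + 60, 255))
    return rgb

user_palette = {name: adjust_color(name, rgb) for name, rgb in test_patches.items()}
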
  • Patent number: 12229926
    Abstract: A method for deblurring a blurred image includes dividing the blurred image into overlapping regions each having a size and an offset from neighboring overlapping regions along a first direction as determined by a period of a ringing artifact in the blurred image, or by obtained blur characteristics relating to the blurred image and/or attributable to the optical system, or by a detected cause capable of producing the blur characteristics, stacking the overlapping regions to produce a stacked output, wherein the overlapping regions are sequentially organized along the first direction, convolving the stacked output through a first convolutional neural network (CNN) to produce a first CNN output having reduced blur as compared to the stacked output, and assembling the first CNN output into a re-assembled image, and processing the re-assembled image through a second CNN to produce a deblurred image having reduced residual artifacts as compared to the re-assembled image.
    Type: Grant
    Filed: July 13, 2022
    Date of Patent: February 18, 2025
    Assignee: GM Global Technology Operations LLC
    Inventors: Tzvi Philipp, Dan Levi, Shai Silberstein
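
The region-stacking step described above can be sketched briefly. In the fragment below, the strip size and offset are hypothetical stand-ins for the values the abstract derives from the ringing-artifact period or blur characteristics, and the two CNN stages are omitted.

# Sketch of the region-stacking step only: the blurred image is cut into
# overlapping strips along one direction and stacked for a CNN.
import numpy as np

def stack_overlapping_regions(image: np.ndarray, size: int, offset: int) -> np.ndarray:
    """Return an array of shape (num_regions, size, width)."""
    h, _ = image.shape
    starts = range(0, h - size + 1, offset)
    return np.stack([image[s:s + size, :] for s in starts])

blurred = np.random.rand(128, 256)   # hypothetical grayscale image
stacked = stack_overlapping_regions(blurred, size=32, offset=16)
# `stacked` would then be convolved by the first CNN and re-assembled.
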
  • Publication number: 20250040886
    Abstract: A garment for bringing an EM transducer into contact with a thoracic skin surface area of a wearer is disclosed. The garment comprises a thoracic garment having an EM transducer placement portion and a pressure applying element associated with the EM transducer placement portion for applying pressure on an EM transducer secured in the associated EM transducer placement portion when the thoracic garment is worn by a wearer, so that the EM transducer applies a respective pressure on a thoracic skin surface area of the wearer.
    Type: Application
    Filed: October 21, 2024
    Publication date: February 6, 2025
    Applicant: Sensible Medical Innovations Ltd.
    Inventors: Amir SAROKA, Leonid VOSHIN, Jonathan BAR-OR, Tal LEVI, Ofer KARP, Yiftach BARASH, Nadav MIZRAHI, Dan RAPPAPORT, Shlomi BERGIDA, Jonathan BAHAT
  • Publication number: 20250037424
    Abstract: Herein, a technology that facilitates the optimization of vision-language (VL) based classifiers with text embeddings is discussed. The technology includes tuning the VL-based classifier employing a pre-trained image encoder of a vision-language model (VLM) for image embedding of pre-classified images and a pre-trained textual encoder of the VLM for textual embedding of a set of differing textual sentences. The technology further includes determining an optimized set of differing textual sentences of a superset of textual sentences. The optimized set of differing textual sentences has a minimal classification loss of the VL-based classifier when classifying the pre-classified images.
    Type: Application
    Filed: July 26, 2023
    Publication date: January 30, 2025
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Roy Uziel, Oded Bialer, Dan Levi
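
The selection criterion above, choosing the set of sentences with minimal classification loss on pre-classified images, can be sketched with placeholder embeddings. The random arrays below stand in for the frozen VLM's image and text encoders; nothing here is taken from the application itself.

# Sketch: from a superset of candidate sentence sets, keep the set whose text
# embeddings give the lowest classification loss on pre-classified images.
import numpy as np

rng = np.random.default_rng(0)
image_embeddings = rng.normal(size=(100, 512))       # pre-classified images
labels = rng.integers(0, 2, size=100)                # ground-truth classes
candidate_sets = [rng.normal(size=(2, 512)) for _ in range(5)]  # 2 classes per set

def cross_entropy(images, texts, labels):
    logits = images @ texts.T
    logits = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-9))

losses = [cross_entropy(image_embeddings, texts, labels) for texts in candidate_sets]
best_set = candidate_sets[int(np.argmin(losses))]    # minimal classification loss
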
  • Publication number: 20240411804
    Abstract: A method for open-vocabulary query-based dense retrieval is provided. The method includes monitoring camera data including an image related to an object and referencing a set of queries, each of the queries describing a candidate object to be updated by a remote server device. An encoder of an open-vocabulary pre-trained vision-language model system is utilized to initialize a predefined embedding for each query, and a classifier is initialized by mapping the predefined embeddings to weights of the classifier. The method further includes applying a dense open-vocabulary image encoder on the camera data to create a mass of dense embeddings including a set of spatially-arranged embeddings for the image, each including a matrix including embedding vectors. The classifier is utilized by applying the classifier to the plurality of embedding vectors to classify the object within the operating environment as an identified object. The method further includes publishing the identified object.
    Type: Application
    Filed: June 7, 2023
    Publication date: December 12, 2024
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Hila Levi, Dan Levi
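
The classification step above, mapping query text embeddings to classifier weights and applying them to spatially arranged dense embeddings, is sketched below with placeholder arrays; the open-vocabulary encoders and the publishing step are not shown, and all shapes are assumptions.

# Sketch: query text embeddings act as linear-classifier weights applied to a
# grid of dense image embeddings.
import numpy as np

rng = np.random.default_rng(1)
query_embeddings = rng.normal(size=(10, 256))        # one row per query
dense_embeddings = rng.normal(size=(32, 32, 256))    # spatially arranged embeddings

scores = dense_embeddings @ query_embeddings.T       # (32, 32, 10)
predicted_query = scores.argmax(axis=-1)             # best-matching query per location
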
  • Publication number: 20240371132
    Abstract: A system for classifying a road crossing intention of a pedestrian includes a processor including a pretrained image encoder generating an image embedding based upon an input image. The system further includes a remote server receiving the image embedding. The remote server device further references a plurality of pretrained image and text embeddings each corresponding to either a positive road crossing intention or a negative road crossing intention. The remote server device further determines a plurality of proximity values evaluating whether the input image is closer to the positive road crossing intention or the negative road crossing intention, evaluating the image embedding against each of the pretrained embeddings. The remote server device further classifies a road crossing intention of the pedestrian based upon the plurality of proximity values. The system further generates a road crossing intention output based upon the road crossing intention of the pedestrian.
    Type: Application
    Filed: May 3, 2023
    Publication date: November 7, 2024
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Roy Uziel, Oded Bialer, Dan Levi
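
The proximity comparison above can be sketched with cosine similarity against stored positive and negative reference embeddings. The embeddings below are random placeholders, and averaging similarities per group is an assumed aggregation, not necessarily the one used in the application.

# Sketch: classify an image embedding by whichever group of reference
# embeddings ("will cross" / "will not cross") it is closer to.
import numpy as np

rng = np.random.default_rng(2)
image_embedding = rng.normal(size=256)
positive_refs = rng.normal(size=(20, 256))   # positive crossing intention
negative_refs = rng.normal(size=(20, 256))   # negative crossing intention

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

pos_score = np.mean([cosine(image_embedding, r) for r in positive_refs])
neg_score = np.mean([cosine(image_embedding, r) for r in negative_refs])
intention = "crossing" if pos_score > neg_score else "not crossing"
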
  • Publication number: 20240329225
    Abstract: A disclosure is presented for estimating the height of an object using ultrasonic sensors carried onboard a vehicle. The height may be estimated by generating ultrasonic distance measurements based on a time of flight associated with ultrasonic reflections detected with the ultrasonic sensors, generating a three-dimensional (3D) occupancy grid to volumetrically represent at least a portion of an environment within a field of view of the ultrasonic sensors, the 3D occupancy grid including a plurality of volumetric pixels (voxels), assigning a count value to each of the voxels based on the distance measurements, and estimating the height according to a relative spatial relationship between the vehicle and the voxel having the greatest count value.
    Type: Application
    Filed: March 27, 2023
    Publication date: October 3, 2024
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Michael Baltaxe, Michael Slutsky, Ariel Rubanenko, Dan Levi
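
A brief sketch of the voxel-counting idea above: each time-of-flight distance increments the voxels whose range from the sensor matches the measurement, and the height is read from the voxel with the greatest count. Grid size, voxel resolution, sensor pose, and the sample distances below are hypothetical.

# Sketch of voxel counting from ultrasonic distance measurements.
import numpy as np

grid = np.zeros((40, 40, 20), dtype=int)            # x, y, z voxels
voxel_size = 0.05                                   # 5 cm voxels (assumed)
sensor_pos = np.array([1.0, 1.0, 0.3])              # assumed sensor position, metres

xs, ys, zs = np.indices(grid.shape)
centers = np.stack([xs, ys, zs], axis=-1) * voxel_size + voxel_size / 2
ranges = np.linalg.norm(centers - sensor_pos, axis=-1)

for measured in [0.8, 0.82, 0.79]:                  # hypothetical ToF distances
    grid[np.abs(ranges - measured) < voxel_size] += 1

best_voxel = np.unravel_index(grid.argmax(), grid.shape)
estimated_height = best_voxel[2] * voxel_size       # height above the grid origin
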
  • Publication number: 20240300518
    Abstract: A detection system for a host vehicle includes a camera, global positioning system (“GPS”) receiver, compass, and electronic control unit (“ECU”). The camera collects polarimetric image data forming an imaged drive scene inclusive of a road surface illuminated by the Sun. The GPS receiver outputs a present location of the vehicle as a date-and-time-stamped coordinate set. The compass provides a directional heading of the vehicle. The ECU determines the Sun's location relative to the vehicle and camera using an input data set, including the present location and directional heading. The ECU also detects a specular reflecting area or areas on the road surface using the polarimetric image data and Sun's location, with the specular reflecting area(s) forming an output data set. The ECU then executes a control action aboard the host vehicle in response to the output data set.
    Type: Application
    Filed: March 6, 2023
    Publication date: September 12, 2024
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Michael Baltaxe, Tzvi Philipp, Dan Levi
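
Only the final detection step above is sketched below: given a Sun azimuth assumed to have been derived from the GPS date-and-time-stamped coordinates and compass heading, pixels with a high degree of linear polarization and an angle of linear polarization consistent with a toy specular-geometry model are flagged. The thresholds and the expected-angle model are illustrative assumptions.

# Sketch of flagging candidate specular road pixels from polarimetric data.
import numpy as np

dolp = np.random.rand(480, 640)                 # degree of linear polarization
aolp = np.random.rand(480, 640) * np.pi         # angle of linear polarization
sun_azimuth = np.deg2rad(135.0)                 # hypothetical Sun azimuth

expected_aolp = (sun_azimuth + np.pi / 2) % np.pi   # toy expected specular angle
angle_error = np.abs(((aolp - expected_aolp) + np.pi / 2) % np.pi - np.pi / 2)

specular_mask = (dolp > 0.4) & (angle_error < np.deg2rad(10))
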
  • Publication number: 20240300517
    Abstract: A system for a host vehicle operating on a road surface includes a polarimetric camera, a global positioning system (“GPS”) receiver, a compass, and an electronic control unit (“ECU”). The camera collects polarimetric image data of a drive scene, including a potential driving path on the road surface. The ECU receives the polarimetric image data, estimates the Sun location using the GPS receiver and compass, and computes an ideal representation of the road surface using the Sun location. The ECU normalizes the polarimetric image data such that the road surface has a normalized representation in the drive scene, i.e., its angle of linear polarization (“AoLP”) and degree of linear polarization (“DoLP”) equal predetermined fixed values. The ECU executes a control action using the normalized representation.
    Type: Application
    Filed: March 6, 2023
    Publication date: September 12, 2024
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Michael Baltaxe, Tzvi Philipp, Tomer Pe'er, Dan Levi
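
The normalization above can be sketched as shifting and scaling the polarimetric channels so that a road region hits fixed AoLP and DoLP targets. The road mask, target values, and the particular shift/scale rule below are assumptions; the Sun-location estimate and ideal-surface computation are not shown.

# Sketch of normalizing AoLP/DoLP so the road region matches fixed values.
import numpy as np

aolp = np.random.rand(480, 640) * np.pi
dolp = np.random.rand(480, 640)
road_mask = np.zeros((480, 640), dtype=bool)
road_mask[300:, :] = True                       # hypothetical road region

target_aolp, target_dolp = np.pi / 2, 0.3       # assumed fixed target values
aolp_norm = (aolp - aolp[road_mask].mean() + target_aolp) % np.pi
dolp_norm = np.clip(dolp * (target_dolp / (dolp[road_mask].mean() + 1e-9)), 0, 1)
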
  • Publication number: 20240257527
    Abstract: A free space estimation and visualization system for a host vehicle includes a camera configured to collect red-green-blue (“RGB”)-polarimetric image data of drive environs of the host vehicle, including a potential driving path. An electronic control unit (“ECU”) receives the RGB-polarimetric image data and estimates free space in the driving path by processing the RGB-polarimetric image data via a run-time neural network. Control actions are taken in response to the estimated free space. A method for use with the visualization system includes collecting RGB and lidar data of target drive scenes and generating, via a first neural network, pseudo-labels of the scenes. The method includes collecting RGB-polarimetric data via a camera and thereafter training a second neural network using the RGB-polarimetric data and pseudo-labels. The second neural network is used in the ECU to estimate free space in the potential driving path.
    Type: Application
    Filed: January 30, 2023
    Publication date: August 1, 2024
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Michael Baltaxe, Tomer Pe'er, Dan Levi
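
The two-stage training described above, pseudo-labels from an RGB+lidar network supervising an RGB-polarimetric network, is sketched below with tiny placeholder models; the channel counts, shapes, and thresholding rule are assumptions.

# Sketch: a first network's predictions on RGB+lidar data become pseudo-labels
# for training a second network that only sees RGB-polarimetric input.
import torch
import torch.nn as nn

teacher = nn.Conv2d(4, 1, 3, padding=1)    # placeholder for the RGB+lidar model
student = nn.Conv2d(7, 1, 3, padding=1)    # placeholder for the RGB-polarimetric model
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

rgb_lidar = torch.rand(2, 4, 64, 64)       # hypothetical RGB + lidar channels
rgb_polar = torch.rand(2, 7, 64, 64)       # hypothetical RGB + polarimetric channels

with torch.no_grad():
    pseudo_labels = (torch.sigmoid(teacher(rgb_lidar)) > 0.5).float()

loss = nn.functional.binary_cross_entropy_with_logits(student(rgb_polar), pseudo_labels)
loss.backward()
optimizer.step()
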
  • Publication number: 20240157791
    Abstract: A method of controlling a color display screen aboard a motor vehicle includes performing, via a host computer, a color calibration test of a user of the motor vehicle in which the user is subjected to a calibrated set of color-coded test information. The method includes receiving a color perception response of the user to the calibrated set of color-coded test information via the host computer. Additionally, the method includes mapping a reduced visual gamut of the user via the host computer using the color perception response, and then commanding adjustment of user-specific color settings of the motor vehicle using the reduced visual gamut to thereby accommodate a color perception deficiency of the user.
    Type: Application
    Filed: November 10, 2022
    Publication date: May 16, 2024
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Yael Shmueli Friedland, Asaf Degani, Dan Levi, Tzvi Philipp, Eran Kishon
  • Publication number: 20240143074
    Abstract: A method of training a disparity estimation network. The method includes obtaining an eye-gaze dataset having first images with at least one gaze direction associated with each of the first images. A gaze prediction neural network is trained based on the eye-gaze dataset to develop a model trained to provide a gaze prediction for an external image. A depth database is obtained that includes second images having depth information associated with each of the second images. A disparity estimation neural network for object detection is trained based on an output from the gaze prediction neural network and an output from the depth database.
    Type: Application
    Filed: October 18, 2023
    Publication date: May 2, 2024
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Ron M. Hecht, Omer Tsimhoni, Dan Levi, Shaul Oron, Andrea Forgacs, Ohad Rahamim, Gershon Celniker
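
The abstract above does not specify exactly how the gaze network's output and the depth database are combined, so the sketch below shows one plausible reading: a frozen gaze predictor weights the disparity network's loss on depth-database samples. All modules and shapes are placeholders.

# Sketch of one possible coupling between a gaze predictor and a disparity
# network during training (an illustrative assumption, not the filed method).
import torch
import torch.nn as nn

gaze_net = nn.Conv2d(3, 1, 3, padding=1)        # stands in for the gaze predictor
disparity_net = nn.Conv2d(3, 1, 3, padding=1)   # network being trained
optimizer = torch.optim.Adam(disparity_net.parameters(), lr=1e-3)

image = torch.rand(1, 3, 64, 64)                # image from the depth database
depth_gt = torch.rand(1, 1, 64, 64)             # associated depth information

with torch.no_grad():
    gaze_weight = torch.sigmoid(gaze_net(image))  # predicted gaze attention map

loss = (gaze_weight * (disparity_net(image) - depth_gt) ** 2).mean()
loss.backward()
optimizer.step()
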
  • Publication number: 20240094377
    Abstract: A system includes a transmitter of a radar system to transmit transmitted signals, and a receiver of the radar system to receive received signals based on reflection of one or more of the transmitted signals by one or more objects. The system also includes a processor to train a neural network with reference data obtained by simulating a higher resolution radar system than the radar system to obtain a trained neural network. The trained neural network enhances detection of the one or more objects based on obtaining and processing the received signals in a vehicle. One or more operations of the vehicle are controlled based on the detection of the one or more objects.
    Type: Application
    Filed: September 19, 2022
    Publication date: March 21, 2024
    Inventors: Oded Bialer, Yuval Haitman, Dan Levi
  • Publication number: 20240077607
    Abstract: A method, system, and vehicle that repetitively correct angle offsets in a synthetic aperture radar (SAR) image of a vehicle while the vehicle is in motion by utilizing a radar system and a camera to determine an accurate velocity of a measured object by matching angles of the object in the SAR image with angles of the object in the camera image, thereby reducing angle offsets of objects in the SAR image. The method includes obtaining an SAR image of another vehicle via a radar unit of the vehicle, obtaining a camera image of the other vehicle via a camera unit of the vehicle, determining an association between at least one object in the SAR image and a corresponding at least one object in the camera image, correcting a velocity estimation of the vehicle based on the determined association, and adjusting the SAR image based on the corrected velocity estimation.
    Type: Application
    Filed: September 1, 2022
    Publication date: March 7, 2024
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Oded Bialer, Dan Levi
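
Only the bookkeeping of the correction loop above is sketched below: the angle offset between the SAR and camera measurements of the same object nudges the ego-velocity estimate. The sensitivity factor and the way the SAR angle is updated are hypothetical stand-ins for the actual SAR geometry, not the filed estimator.

# Toy sketch of the iterative angle-offset / velocity correction bookkeeping.
angle_sar_deg = 12.4          # object angle measured in the SAR image
angle_camera_deg = 11.1       # same object's angle in the camera image
velocity_estimate = 18.0      # current ego-velocity estimate, m/s
sensitivity = 0.5             # assumed m/s per degree of angle offset

for _ in range(5):            # repeat as new image pairs arrive
    offset = angle_sar_deg - angle_camera_deg
    velocity_estimate -= sensitivity * offset
    angle_sar_deg -= offset   # stand-in for re-forming the SAR image
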
  • Patent number: 11893086
    Abstract: A system for analyzing images includes a processing device that includes a receiving module configured to receive an image, and an analysis module configured to apply the received image to a machine learning network and classify one or more features in the received image, the machine learning network configured to propagate image data through a plurality of convolutional layers, each convolutional layer of the plurality of convolutional layers including a plurality of filter channels, the machine learning network including a bottleneck layer configured to recognize an image feature based on a shape of an image component. The system also includes an output module configured to output characterization data that includes a classification of the one or more features.
    Type: Grant
    Filed: March 10, 2021
    Date of Patent: February 6, 2024
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Dan Levi, Noa Garnett, Roy Uziel
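
The bottleneck idea above, a layer with few filter channels between wider convolutional stages, is easy to sketch; the layer sizes below are arbitrary and not taken from the patent.

# Sketch of a classifier with a channel bottleneck between convolutional stages.
import torch
import torch.nn as nn

class BottleneckClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 8, 1), nn.ReLU(),            # bottleneck: few filter channels
            nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

logits = BottleneckClassifier()(torch.rand(4, 3, 64, 64))   # (4, num_classes)
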
  • Publication number: 20240020797
    Abstract: A method for deblurring a blurred image includes dividing the blurred image into overlapping regions each having a size and an offset from neighboring overlapping regions along a first direction as determined by a period of a ringing artifact in the blurred image, or by obtained blur characteristics relating to the blurred image and/or attributable to the optical system, or by a detected cause capable of producing the blur characteristics, stacking the overlapping regions to produce a stacked output, wherein the overlapping regions are sequentially organized along the first direction, convolving the stacked output through a first convolutional neural network (CNN) to produce a first CNN output having reduced blur as compared to the stacked output, and assembling the first CNN output into a re-assembled image, and processing the re-assembled image through a second CNN to produce a deblurred image having reduced residual artifacts as compared to the re-assembled image.
    Type: Application
    Filed: July 13, 2022
    Publication date: January 18, 2024
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Tzvi Philipp, Dan Levi, Shai Silberstein
  • Patent number: 11731639
    Abstract: A vehicle having an imaging sensor that is arranged to monitor a field-of-view (FOV) that includes a travel surface proximal to the vehicle is described. Detecting the travel lane includes capturing a FOV image of a viewable region of the travel surface. The FOV image is converted, via an artificial neural network, to a plurality of feature maps. The feature maps are projected, via an inverse perspective mapping algorithm, onto a bird's-eye-view (BEV) orthographic grid. The feature maps include travel lane segments and feature embeddings, and the travel lane segments are represented as line segments. The line segments are concatenated for the plurality of grid sections based upon the feature embeddings to form a predicted lane. The concatenation, or clustering, is accomplished via the feature embeddings.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: August 22, 2023
    Assignee: GM Global Technology Operations LLC
    Inventors: Netalee Efrat Sela, Max Bluvstein, Dan Levi, Noa Garnett, Bat El Shlomo
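
The final clustering step above, concatenating line segments whose feature embeddings belong together, is sketched below with a simple nearest-mean grouping; the embeddings, distance threshold, and grouping rule are assumptions, and the network and inverse perspective mapping are not shown.

# Sketch: group line segments into lanes by proximity of their feature embeddings.
import numpy as np

rng = np.random.default_rng(3)
segment_embeddings = rng.normal(size=(12, 8))    # one embedding per line segment
threshold = 2.0                                  # assumed clustering distance

lanes = []                                       # each lane is a list of segment ids
for idx, emb in enumerate(segment_embeddings):
    for lane in lanes:
        center = segment_embeddings[lane].mean(axis=0)
        if np.linalg.norm(emb - center) < threshold:
            lane.append(idx)
            break
    else:
        lanes.append([idx])
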
  • Publication number: 20230134125
    Abstract: A system in a vehicle includes an image sensor to obtain images in an image sensor coordinate system and a depth sensor to obtain point clouds in a depth sensor coordinate system. Processing circuitry implements a neural network to determine a validation state of a transformation matrix that transforms the point clouds in the depth sensor coordinate system to transformed point clouds in the image sensor coordinate system. The transformation matrix includes rotation parameters and translation parameters.
    Type: Application
    Filed: November 1, 2021
    Publication date: May 4, 2023
    Inventors: Michael Baltaxe, Dan Levi, Noa Garnett, Doron Portnoy, Amit Batikoff, Shahar Ben Ezra, Tal Furman