Patents by Inventor Daniel Hendricus Franciscus FONTIJNE

Daniel Hendricus Franciscus FONTIJNE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11927668
    Abstract: Disclosed are techniques for employing deep learning to analyze radar signals. In an aspect, an on-board computer of a host vehicle receives, from a radar sensor of the vehicle, a plurality of radar frames, executes a neural network on a subset of the plurality of radar frames, and detects one or more objects in the subset of the plurality of radar frames based on execution of the neural network on the subset of the plurality of radar frames. Further, techniques for transforming polar coordinates to Cartesian coordinates in a neural network are disclosed. In an aspect, a neural network receives a plurality of radar frames in polar coordinate space, a polar-to-Cartesian transformation layer of the neural network transforms the plurality of radar frames to Cartesian coordinate space, and the neural network outputs the plurality of radar frames in the Cartesian coordinate space.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: March 12, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Daniel Hendricus Franciscus Fontijne, Amin Ansari, Bence Major, Ravi Teja Sukhavasi, Radhika Dilip Gowaikar, Xinzhou Wu, Sundar Subramanian, Michael John Hamilton
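    The polar-to-Cartesian transformation described in this abstract can be illustrated with a minimal NumPy sketch. This is not the patented implementation; the grid dimensions, the ±90° azimuth span, and the nearest-neighbor resampling are assumptions made for illustration only.

    ```python
    import numpy as np

    def polar_to_cartesian(frame, r_max, out_size):
        """Resample a radar frame from polar (range x azimuth) onto a Cartesian grid.

        frame: 2-D array indexed [range_bin, azimuth_bin]; azimuth assumed to
               span -pi/2..pi/2 (0 = straight ahead).
        r_max: range (e.g., in meters) represented by the last range bin.
        out_size: side length of the square Cartesian output grid.
        """
        n_r, n_a = frame.shape
        # Cartesian grid: x is cross-range (-r_max..r_max), y is down-range (0..r_max).
        x = np.linspace(-r_max, r_max, out_size)
        y = np.linspace(0.0, r_max, out_size)
        xx, yy = np.meshgrid(x, y)
        r = np.hypot(xx, yy)              # radius of each Cartesian cell
        theta = np.arctan2(xx, yy)        # azimuth of each Cartesian cell
        # Map (r, theta) back to polar bin indices (nearest neighbor).
        r_idx = np.clip(np.round(r / r_max * (n_r - 1)).astype(int), 0, n_r - 1)
        a_idx = np.clip(np.round((theta + np.pi / 2) / np.pi * (n_a - 1)).astype(int), 0, n_a - 1)
        out = frame[r_idx, a_idx]
        out[r > r_max] = 0.0              # cells outside the sensed sector
        return out
    ```

    In a network, the same coordinate remapping would typically be realized as a differentiable sampling layer so that gradients can flow through it during training.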
  • Patent number: 11899099
    Abstract: Disclosed are techniques for fusing camera and radar frames to perform object detection in one or more spatial domains. In an aspect, an on-board computer of a host vehicle receives, from a camera sensor of the host vehicle, a plurality of camera frames, receives, from a radar sensor of the host vehicle, a plurality of radar frames, performs a camera feature extraction process on a first camera frame of the plurality of camera frames to generate a first camera feature map, performs a radar feature extraction process on a first radar frame of the plurality of radar frames to generate a first radar feature map, converts the first camera feature map and/or the first radar feature map to a common spatial domain, and concatenates the first radar feature map and the first camera feature map to generate a first concatenated feature map in the common spatial domain.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: February 13, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Radhika Dilip Gowaikar, Ravi Teja Sukhavasi, Daniel Hendricus Franciscus Fontijne, Bence Major, Amin Ansari, Teck Yian Lim, Sundar Subramanian, Xinzhou Wu
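    The fusion step described in this abstract — per-sensor feature extraction followed by channel-wise concatenation in a common spatial domain — can be sketched as follows. The stub feature extractor and the shared-resolution assumption are illustrative only; a real system would use CNN backbones and an explicit domain conversion.

    ```python
    import numpy as np

    def extract_features(frame, n_channels=4):
        """Stand-in feature extractor: stack shifted copies as 'channels'.

        A real pipeline would run a convolutional backbone here; this stub
        exists only to show the shapes involved.
        """
        return np.stack([np.roll(frame, s, axis=0) for s in range(n_channels)])

    def fuse(camera_frame, radar_frame):
        """Fuse per-sensor feature maps by channel-wise concatenation.

        Assumes both feature maps have already been converted to a common
        spatial domain (same H x W grid).
        """
        cam_feat = extract_features(camera_frame)   # (C1, H, W)
        rad_feat = extract_features(radar_frame)    # (C2, H, W)
        return np.concatenate([rad_feat, cam_feat], axis=0)  # (C1 + C2, H, W)
    ```

    The concatenated map would then feed a shared detection head, letting the detector weigh radar and camera evidence jointly at each spatial location.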
  • Patent number: 11620499
    Abstract: Aspects described herein provide a method including: receiving input data at a machine learning model, comprising: a plurality of processing layers; a plurality of gate logics; a plurality of gates; and a fully connected layer; determining, based on a plurality of gate parameters associated with the plurality of gate logics, a subset of the plurality of processing layers with which to process the input data; processing the input data with the subset of the plurality of processing layers and the fully connected layer to generate an inference; determining a prediction loss based on the inference and a training label associated with the input data; determining an energy loss based on the subset of the plurality of processing layers used to process the input data; and optimizing the machine learning model based on: the prediction loss; the energy loss; and a prior probability associated with the training label.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: April 4, 2023
    Assignee: Qualcomm Incorporated
    Inventors: Jamie Menjay Lin, Daniel Hendricus Franciscus Fontijne, Edwin Chongwoo Park
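    The gating scheme in this abstract — gate logics selecting a subset of layers, then a loss combining prediction error with an energy penalty — can be sketched minimally as below. The residual-block form, threshold, and energy weight are assumptions; the abstract's prior-probability term is noted but omitted here for brevity.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def gated_forward(x, layers, gate_params, threshold=0.5):
        """Run only the layers whose gate logic opens its gate.

        layers: list of (W, b) pairs for simple dense residual blocks.
        gate_params: one logit per layer; sigmoid(logit) > threshold => layer runs.
        Returns the output and the indices of the layers that executed.
        """
        executed = []
        for i, (W, b) in enumerate(layers):
            if sigmoid(gate_params[i]) > threshold:
                x = x + np.tanh(x @ W + b)   # residual block, only if gate is open
                executed.append(i)
        return x, executed

    def total_loss(pred_loss, executed, n_layers, energy_weight=0.1):
        """Combine the prediction loss with an energy loss proportional to
        the fraction of layers actually used."""
        energy_loss = len(executed) / n_layers
        return pred_loss + energy_weight * energy_loss
    ```

    During training, both terms (plus the label's prior probability, per the abstract) would be optimized jointly, so the model learns to skip layers whose cost outweighs their accuracy contribution.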
  • Patent number: 11443522
    Abstract: Methods of processing vehicle sensor information for object detection may include capturing sensor information, generating a feature map based on the captured sensor information, associating with each pixel of the feature map a prior box having a set of two or more width priors and a set of two or more height priors, determining a confidence value of each height prior and each width prior, outputting an indication of a detected object based on the highest-confidence height prior and the highest-confidence width prior, and performing a vehicle operation based on the output indication of the detected object. Embodiments may include determining, for each pixel of the feature map, one or more prior boxes having a center value, a size value, and a set of orientation priors, determining a confidence value for each orientation prior, and outputting an indication of the orientation of a detected object based on the highest-confidence orientation prior.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: September 13, 2022
    Assignee: Qualcomm Incorporated
    Inventors: Bence Major, Daniel Hendricus Franciscus Fontijne, Ravi Teja Sukhavasi, Amin Ansari
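    The decoding step in this abstract — selecting the highest-confidence width prior and height prior independently at a feature-map pixel — can be sketched as follows. The prior values themselves are hypothetical; a real detector would learn or hand-tune them per object class.

    ```python
    import numpy as np

    # Hypothetical candidate box dimensions (e.g., meters) for one pixel's prior box.
    WIDTH_PRIORS = np.array([0.5, 1.0, 2.0])
    HEIGHT_PRIORS = np.array([1.0, 1.5, 4.0])

    def decode_box(width_conf, height_conf):
        """Pick the highest-confidence width prior and height prior independently.

        width_conf / height_conf: per-prior confidence scores for one
        feature-map pixel, e.g. the outputs of softmax heads.
        """
        w = WIDTH_PRIORS[np.argmax(width_conf)]
        h = HEIGHT_PRIORS[np.argmax(height_conf)]
        return w, h
    ```

    Treating width and height as separate classification problems over small prior sets keeps the output space compact compared with regressing free-form box dimensions; the abstract's orientation priors would be decoded the same way.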
  • Publication number: 20210255304
    Abstract: Disclosed are techniques for employing deep learning to analyze radar signals. In an aspect, an on-board computer of a host vehicle receives, from a radar sensor of the vehicle, a plurality of radar frames, executes a neural network on a subset of the plurality of radar frames, and detects one or more objects in the subset of the plurality of radar frames based on execution of the neural network on the subset of the plurality of radar frames. Further, techniques for transforming polar coordinates to Cartesian coordinates in a neural network are disclosed. In an aspect, a neural network receives a plurality of radar frames in polar coordinate space, a polar-to-Cartesian transformation layer of the neural network transforms the plurality of radar frames to Cartesian coordinate space, and the neural network outputs the plurality of radar frames in the Cartesian coordinate space.
    Type: Application
    Filed: November 27, 2019
    Publication date: August 19, 2021
    Inventors: Daniel Hendricus Franciscus FONTIJNE, Amin ANSARI, Bence MAJOR, Ravi Teja SUKHAVASI, Radhika Dilip GOWAIKAR, Xinzhou WU, Sundar SUBRAMANIAN, Michael John HAMILTON
  • Publication number: 20210158145
    Abstract: Aspects described herein provide a method including: receiving input data at a machine learning model, comprising: a plurality of processing layers; a plurality of gate logics; a plurality of gates; and a fully connected layer; determining, based on a plurality of gate parameters associated with the plurality of gate logics, a subset of the plurality of processing layers with which to process the input data; processing the input data with the subset of the plurality of processing layers and the fully connected layer to generate an inference; determining a prediction loss based on the inference and a training label associated with the input data; determining an energy loss based on the subset of the plurality of processing layers used to process the input data; and optimizing the machine learning model based on: the prediction loss; the energy loss; and a prior probability associated with the training label.
    Type: Application
    Filed: November 25, 2019
    Publication date: May 27, 2021
    Inventors: Jamie Menjay LIN, Daniel Hendricus Franciscus FONTIJNE, Edwin Chongwoo PARK
  • Publication number: 20200175286
    Abstract: Methods of processing vehicle sensor information for object detection may include capturing sensor information, generating a feature map based on the captured sensor information, associating with each pixel of the feature map a prior box having a set of two or more width priors and a set of two or more height priors, determining a confidence value of each height prior and each width prior, outputting an indication of a detected object based on the highest-confidence height prior and the highest-confidence width prior, and performing a vehicle operation based on the output indication of the detected object. Embodiments may include determining, for each pixel of the feature map, one or more prior boxes having a center value, a size value, and a set of orientation priors, determining a confidence value for each orientation prior, and outputting an indication of the orientation of a detected object based on the highest-confidence orientation prior.
    Type: Application
    Filed: December 2, 2019
    Publication date: June 4, 2020
    Inventors: Bence MAJOR, Daniel Hendricus Franciscus FONTIJNE, Ravi Teja SUKHAVASI, Amin ANSARI
  • Publication number: 20200175315
    Abstract: Disclosed are techniques for fusing camera and radar frames to perform object detection in one or more spatial domains. In an aspect, an on-board computer of a host vehicle receives, from a camera sensor of the host vehicle, a plurality of camera frames, receives, from a radar sensor of the host vehicle, a plurality of radar frames, performs a camera feature extraction process on a first camera frame of the plurality of camera frames to generate a first camera feature map, performs a radar feature extraction process on a first radar frame of the plurality of radar frames to generate a first radar feature map, converts the first camera feature map and/or the first radar feature map to a common spatial domain, and concatenates the first radar feature map and the first camera feature map to generate a first concatenated feature map in the common spatial domain.
    Type: Application
    Filed: November 27, 2019
    Publication date: June 4, 2020
    Inventors: Radhika Dilip GOWAIKAR, Ravi Teja SUKHAVASI, Daniel Hendricus Franciscus FONTIJNE, Bence MAJOR, Amin ANSARI, Teck Yian LIM, Sundar SUBRAMANIAN, Xinzhou WU