Patents by Inventor Daniel Hendricus Franciscus FONTIJNE
Daniel Hendricus Franciscus FONTIJNE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11927668
Abstract: Disclosed are techniques for employing deep learning to analyze radar signals. In an aspect, an on-board computer of a host vehicle receives, from a radar sensor of the vehicle, a plurality of radar frames, executes a neural network on a subset of the plurality of radar frames, and detects one or more objects in the subset of the plurality of radar frames based on execution of the neural network on the subset of the plurality of radar frames. Further, techniques for transforming polar coordinates to Cartesian coordinates in a neural network are disclosed. In an aspect, a neural network receives a plurality of radar frames in polar coordinate space, a polar-to-Cartesian transformation layer of the neural network transforms the plurality of radar frames to Cartesian coordinate space, and the neural network outputs the plurality of radar frames in the Cartesian coordinate space.
Type: Grant
Filed: November 27, 2019
Date of Patent: March 12, 2024
Assignee: QUALCOMM Incorporated
Inventors: Daniel Hendricus Franciscus Fontijne, Amin Ansari, Bence Major, Ravi Teja Sukhavasi, Radhika Dilip Gowaikar, Xinzhou Wu, Sundar Subramanian, Michael John Hamilton
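The polar-to-Cartesian transformation layer described in this abstract can be illustrated with a minimal sketch. The function below is a hypothetical nearest-neighbor resampling of a polar radar frame (range x azimuth bins) onto a Cartesian grid; the patent does not specify this exact interpolation scheme, field of view, or grid layout, so all parameters here are illustrative assumptions.

```python
import numpy as np

def polar_to_cartesian(frame, r_max, grid_size):
    """Resample a polar radar frame (range x azimuth bins) onto a
    Cartesian grid via nearest-neighbor lookup."""
    n_range, n_az = frame.shape
    out = np.zeros((grid_size, grid_size), dtype=frame.dtype)
    # Cartesian grid covering x in [-r_max, r_max], y in [0, r_max]
    xs = np.linspace(-r_max, r_max, grid_size)
    ys = np.linspace(0.0, r_max, grid_size)
    xx, yy = np.meshgrid(xs, ys)
    r = np.sqrt(xx**2 + yy**2)          # radial distance of each cell
    theta = np.arctan2(xx, yy)          # azimuth, 0 = straight ahead
    # Map (r, theta) back to the nearest polar bin indices
    r_idx = np.clip((r / r_max * (n_range - 1)).round().astype(int),
                    0, n_range - 1)
    az_idx = np.clip(((theta + np.pi / 2) / np.pi * (n_az - 1)).round().astype(int),
                     0, n_az - 1)
    valid = r <= r_max                  # cells outside max range stay zero
    out[valid] = frame[r_idx[valid], az_idx[valid]]
    return out

# Example: a 64x32 polar frame mapped to a 128x128 Cartesian grid
polar = np.random.rand(64, 32).astype(np.float32)
cart = polar_to_cartesian(polar, r_max=50.0, grid_size=128)
print(cart.shape)  # (128, 128)
```

In practice such a layer would be implemented with differentiable grid sampling so gradients flow through it during training; the lookup above only conveys the geometric mapping.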
-
Patent number: 11899099
Abstract: Disclosed are techniques for fusing camera and radar frames to perform object detection in one or more spatial domains. In an aspect, an on-board computer of a host vehicle receives, from a camera sensor of the host vehicle, a plurality of camera frames, receives, from a radar sensor of the host vehicle, a plurality of radar frames, performs a camera feature extraction process on a first camera frame of the plurality of camera frames to generate a first camera feature map, performs a radar feature extraction process on a first radar frame of the plurality of radar frames to generate a first radar feature map, converts the first camera feature map and/or the first radar feature map to a common spatial domain, and concatenates the first radar feature map and the first camera feature map to generate a first concatenated feature map in the common spatial domain.
Type: Grant
Filed: November 27, 2019
Date of Patent: February 13, 2024
Assignee: QUALCOMM Incorporated
Inventors: Radhika Dilip Gowaikar, Ravi Teja Sukhavasi, Daniel Hendricus Franciscus Fontijne, Bence Major, Amin Ansari, Teck Yian Lim, Sundar Subramanian, Xinzhou Wu
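The fusion step in this abstract (concatenating camera and radar feature maps once they share a spatial domain) reduces to a channel-wise concatenation. The sketch below assumes both maps have already been converted to the common spatial domain and use a (channels, height, width) layout; these conventions are illustrative, not taken from the patent.

```python
import numpy as np

def fuse_feature_maps(camera_feat, radar_feat):
    """Concatenate camera and radar feature maps (C, H, W) along the
    channel axis, assuming both are already in the common spatial domain."""
    assert camera_feat.shape[1:] == radar_feat.shape[1:], \
        "spatial dimensions must match before concatenation"
    return np.concatenate([camera_feat, radar_feat], axis=0)

camera_feat = np.random.rand(64, 32, 32)   # 64 camera feature channels
radar_feat = np.random.rand(32, 32, 32)    # 32 radar feature channels
fused = fuse_feature_maps(camera_feat, radar_feat)
print(fused.shape)  # (96, 32, 32)
```

A detection head would then operate on the fused map; the hard part the abstract points at is the conversion to a common spatial domain (e.g. projecting image-plane features into a bird's-eye view), which is omitted here.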
-
Patent number: 11620499
Abstract: Aspects described herein provide a method including: receiving input data at a machine learning model comprising: a plurality of processing layers; a plurality of gate logics; a plurality of gates; and a fully connected layer; determining, based on a plurality of gate parameters associated with the plurality of gate logics, a subset of the plurality of processing layers with which to process the input data; processing the input data with the subset of the plurality of processing layers and the fully connected layer to generate an inference; determining a prediction loss based on the inference and a training label associated with the input data; determining an energy loss based on the subset of the plurality of processing layers used to process the input data; and optimizing the machine learning model based on: the prediction loss; the energy loss; and a prior probability associated with the training label.
Type: Grant
Filed: November 25, 2019
Date of Patent: April 4, 2023
Assignee: Qualcomm Incorporated
Inventors: Jamie Menjay Lin, Daniel Hendricus Franciscus Fontijne, Edwin Chongwoo Park
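The gated-execution idea in this abstract (gates select a subset of layers, and an energy loss penalizes how many layers run) can be sketched as follows. The sigmoid gating, toy ReLU layers, threshold, and `lam` weighting are all illustrative assumptions; the patent's actual gate logics and loss formulation are not reproduced here.

```python
import numpy as np

def gated_forward(x, layers, gate_params, threshold=0.5):
    """Run x through only those layers whose gate probability exceeds the
    threshold; return the output and the fraction of layers executed."""
    used = 0
    for weight, g in zip(layers, gate_params):
        if 1.0 / (1.0 + np.exp(-g)) > threshold:   # sigmoid gate decision
            x = np.maximum(weight @ x, 0.0)        # toy ReLU layer
            used += 1
    return x, used / len(layers)

def total_loss(prediction_loss, energy_fraction, lam=0.1):
    """Combined objective: task loss plus an energy penalty proportional
    to the fraction of layers executed."""
    return prediction_loss + lam * energy_fraction

rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 8)) for _ in range(4)]
gates = np.array([2.0, -3.0, 1.0, -1.0])           # two gates open
out, frac = gated_forward(rng.standard_normal(8), layers, gates)
print(frac)  # 0.5
```

Jointly minimizing the two loss terms trades accuracy against compute: the optimizer learns gate parameters that skip layers whenever the prediction loss does not suffer too much.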
-
Patent number: 11443522
Abstract: Methods of processing vehicle sensor information for object detection may include generating a feature map based on captured sensor information, associating with each pixel of the feature map a prior box having a set of two or more width priors and a set of two or more height priors, determining a confidence value of each height prior and each width prior, outputting an indication of a detected object based on a highest confidence height prior and a highest confidence width prior, and performing a vehicle operation based on the output indication of a detected object. Embodiments may include determining for each pixel of the feature map one or more prior boxes having a center value, a size value, and a set of orientation priors, determining a confidence value for each orientation prior, and outputting an indication of the orientation of a detected object based on the highest confidence orientation.
Type: Grant
Filed: December 2, 2019
Date of Patent: September 13, 2022
Assignee: Qualcomm Incorporated
Inventors: Bence Major, Daniel Hendricus Franciscus Fontijne, Ravi Teja Sukhavasi, Amin Ansari
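The prior-selection step described here (per pixel, pick the width prior and height prior with the highest confidence) can be sketched as below. The prior values and confidence-map shapes are made-up examples; the patent does not prescribe these numbers.

```python
import numpy as np

def select_priors(width_conf, height_conf, width_priors, height_priors):
    """Per feature-map pixel, pick the width and height prior with the
    highest confidence to form the detected box size."""
    w = width_priors[np.argmax(width_conf, axis=-1)]    # (H, W) widths
    h = height_priors[np.argmax(height_conf, axis=-1)]  # (H, W) heights
    return w, h

width_priors = np.array([1.0, 2.0, 4.0])    # candidate box widths (m)
height_priors = np.array([1.5, 3.0])        # candidate box heights (m)
# Confidence maps over a 2x2 feature map, one score per prior
width_conf = np.random.rand(2, 2, 3)
height_conf = np.random.rand(2, 2, 2)
w, h = select_priors(width_conf, height_conf, width_priors, height_priors)
print(w.shape, h.shape)  # (2, 2) (2, 2)
```

Decoupling width and height priors (rather than enumerating full width-height box combinations) keeps the number of confidence outputs additive in the prior counts instead of multiplicative; orientation priors in the second embodiment would be selected the same way.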
-
Publication number: 20210255304
Abstract: Disclosed are techniques for employing deep learning to analyze radar signals. In an aspect, an on-board computer of a host vehicle receives, from a radar sensor of the vehicle, a plurality of radar frames, executes a neural network on a subset of the plurality of radar frames, and detects one or more objects in the subset of the plurality of radar frames based on execution of the neural network on the subset of the plurality of radar frames. Further, techniques for transforming polar coordinates to Cartesian coordinates in a neural network are disclosed. In an aspect, a neural network receives a plurality of radar frames in polar coordinate space, a polar-to-Cartesian transformation layer of the neural network transforms the plurality of radar frames to Cartesian coordinate space, and the neural network outputs the plurality of radar frames in the Cartesian coordinate space.
Type: Application
Filed: November 27, 2019
Publication date: August 19, 2021
Inventors: Daniel Hendricus Franciscus FONTIJNE, Amin ANSARI, Bence MAJOR, Ravi Teja SUKHAVASI, Radhika Dilip GOWAIKAR, Xinzhou WU, Sundar SUBRAMANIAN, Michael John HAMILTON
-
Publication number: 20210158145
Abstract: Aspects described herein provide a method including: receiving input data at a machine learning model comprising: a plurality of processing layers; a plurality of gate logics; a plurality of gates; and a fully connected layer; determining, based on a plurality of gate parameters associated with the plurality of gate logics, a subset of the plurality of processing layers with which to process the input data; processing the input data with the subset of the plurality of processing layers and the fully connected layer to generate an inference; determining a prediction loss based on the inference and a training label associated with the input data; determining an energy loss based on the subset of the plurality of processing layers used to process the input data; and optimizing the machine learning model based on: the prediction loss; the energy loss; and a prior probability associated with the training label.
Type: Application
Filed: November 25, 2019
Publication date: May 27, 2021
Inventors: Jamie Menjay LIN, Daniel Hendricus Franciscus FONTIJNE, Edwin Chongwoo PARK
-
Publication number: 20200175286
Abstract: Methods of processing vehicle sensor information for object detection may include generating a feature map based on captured sensor information, associating with each pixel of the feature map a prior box having a set of two or more width priors and a set of two or more height priors, determining a confidence value of each height prior and each width prior, outputting an indication of a detected object based on a highest confidence height prior and a highest confidence width prior, and performing a vehicle operation based on the output indication of a detected object. Embodiments may include determining for each pixel of the feature map one or more prior boxes having a center value, a size value, and a set of orientation priors, determining a confidence value for each orientation prior, and outputting an indication of the orientation of a detected object based on the highest confidence orientation.
Type: Application
Filed: December 2, 2019
Publication date: June 4, 2020
Inventors: Bence MAJOR, Daniel Hendricus Franciscus FONTIJNE, Ravi Teja SUKHAVASI, Amin ANSARI
-
Publication number: 20200175315
Abstract: Disclosed are techniques for fusing camera and radar frames to perform object detection in one or more spatial domains. In an aspect, an on-board computer of a host vehicle receives, from a camera sensor of the host vehicle, a plurality of camera frames, receives, from a radar sensor of the host vehicle, a plurality of radar frames, performs a camera feature extraction process on a first camera frame of the plurality of camera frames to generate a first camera feature map, performs a radar feature extraction process on a first radar frame of the plurality of radar frames to generate a first radar feature map, converts the first camera feature map and/or the first radar feature map to a common spatial domain, and concatenates the first radar feature map and the first camera feature map to generate a first concatenated feature map in the common spatial domain.
Type: Application
Filed: November 27, 2019
Publication date: June 4, 2020
Inventors: Radhika Dilip GOWAIKAR, Ravi Teja SUKHAVASI, Daniel Hendricus Franciscus FONTIJNE, Bence MAJOR, Amin ANSARI, Teck Yian LIM, Sundar SUBRAMANIAN, Xinzhou WU