Patents by Inventor Bence MAJOR
Bence MAJOR has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240144087
Abstract: Certain aspects of the present disclosure provide techniques and apparatus for beam selection using machine learning. A plurality of data samples corresponding to a plurality of data modalities is accessed. A plurality of features is generated by, for each respective data sample of the plurality of data samples, performing feature extraction based at least in part on a respective modality of the respective data sample. The plurality of features is fused using one or more attention-based models, and a wireless communication configuration is generated based on processing the fused plurality of features using a machine learning model.
Type: Application
Filed: June 23, 2023
Publication date: May 2, 2024
Inventors: Fabio Valerio MASSOLI, Ang LI, Shreya KADAMBI, Hao YE, Arash BEHBOODI, Joseph Binamira SORIAGA, Bence MAJOR, Maximilian Wolfgang Martin ARNOLD
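The attention-based fusion step described in this abstract can be illustrated with a minimal NumPy sketch. This is not the patented implementation: the function names, the use of self-attention over one feature vector per modality, and the mean-pooled fused output are all assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(features):
    """Fuse per-modality feature vectors with scaled dot-product self-attention.

    features: (num_modalities, dim) array, one row per modality.
    Returns a single fused (dim,) vector (mean over attended tokens).
    """
    d = features.shape[-1]
    scores = features @ features.T / np.sqrt(d)   # (M, M) attention logits
    weights = softmax(scores, axis=-1)            # each row sums to 1
    attended = weights @ features                 # (M, dim) attended tokens
    return attended.mean(axis=0)                  # fused representation

# Toy example: three modalities' features for one data sample
rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 8))
fused = attention_fuse(feats)
print(fused.shape)  # (8,)
```

In the claimed pipeline this fused representation would then feed a downstream model that outputs the wireless communication configuration (e.g. a beam index); that head is omitted here.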
-
Patent number: 11927668
Abstract: Disclosed are techniques for employing deep learning to analyze radar signals. In an aspect, an on-board computer of a host vehicle receives, from a radar sensor of the vehicle, a plurality of radar frames, executes a neural network on a subset of the plurality of radar frames, and detects one or more objects in the subset of the plurality of radar frames based on execution of the neural network on the subset of the plurality of radar frames. Further, techniques for transforming polar coordinates to Cartesian coordinates in a neural network are disclosed. In an aspect, a neural network receives a plurality of radar frames in polar coordinate space, a polar-to-Cartesian transformation layer of the neural network transforms the plurality of radar frames to Cartesian coordinate space, and the neural network outputs the plurality of radar frames in the Cartesian coordinate space.
Type: Grant
Filed: November 27, 2019
Date of Patent: March 12, 2024
Assignee: QUALCOMM Incorporated
Inventors: Daniel Hendricus Franciscus Fontijne, Amin Ansari, Bence Major, Ravi Teja Sukhavasi, Radhika Dilip Gowaikar, Xinzhou Wu, Sundar Subramanian, Michael John Hamilton
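The core of the polar-to-Cartesian transformation layer is a resampling from (range, azimuth) bins onto an (x, y) grid. The sketch below shows that idea with nearest-neighbour lookup in NumPy; the patent's layer sits inside a neural network and may interpolate differently, so treat the grid layout, field of view, and function name as illustrative assumptions.

```python
import numpy as np

def polar_to_cartesian(frame, out_size=64, max_range=1.0):
    """Resample a radar frame from (range, azimuth) polar bins onto a
    Cartesian (x, y) grid using nearest-neighbour lookup.

    frame: (n_range, n_azimuth) array; azimuth assumed to span -90..+90 deg.
    """
    n_range, n_az = frame.shape
    xs = np.linspace(-max_range, max_range, out_size)   # lateral axis
    ys = np.linspace(0.0, max_range, out_size)          # forward axis
    X, Y = np.meshgrid(xs, ys)
    r = np.hypot(X, Y)                                  # radius of each cell
    theta = np.arctan2(X, Y)                            # azimuth, 0 = straight ahead
    r_idx = np.clip((r / max_range * (n_range - 1)).round().astype(int),
                    0, n_range - 1)
    a_idx = np.clip(((theta + np.pi / 2) / np.pi * (n_az - 1)).round().astype(int),
                    0, n_az - 1)
    out = frame[r_idx, a_idx]
    out[r > max_range] = 0.0                            # outside the sensed area
    return out

polar = np.random.default_rng(1).random((32, 16))       # (range bins, azimuth bins)
cart = polar_to_cartesian(polar)
print(cart.shape)  # (64, 64)
```

Implementing the resampling as a differentiable layer (rather than this fixed lookup) is what lets it live inside the network rather than as preprocessing.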
-
Patent number: 11899099
Abstract: Disclosed are techniques for fusing camera and radar frames to perform object detection in one or more spatial domains. In an aspect, an on-board computer of a host vehicle receives, from a camera sensor of the host vehicle, a plurality of camera frames, receives, from a radar sensor of the host vehicle, a plurality of radar frames, performs a camera feature extraction process on a first camera frame of the plurality of camera frames to generate a first camera feature map, performs a radar feature extraction process on a first radar frame of the plurality of radar frames to generate a first radar feature map, converts the first camera feature map and/or the first radar feature map to a common spatial domain, and concatenates the first radar feature map and the first camera feature map to generate a first concatenated feature map in the common spatial domain.
Type: Grant
Filed: November 27, 2019
Date of Patent: February 13, 2024
Assignee: QUALCOMM Incorporated
Inventors: Radhika Dilip Gowaikar, Ravi Teja Sukhavasi, Daniel Hendricus Franciscus Fontijne, Bence Major, Amin Ansari, Teck Yian Lim, Sundar Subramanian, Xinzhou Wu
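The final concatenation step is straightforward once both feature maps share a spatial domain: stack them along the channel axis. A minimal sketch, assuming channel-first (C, H, W) layout and that the domain conversion has already happened; names are hypothetical.

```python
import numpy as np

def concat_feature_maps(cam_feat, radar_feat):
    """Concatenate camera and radar feature maps along the channel axis.

    Both inputs are (C, H, W) arrays already converted to a common
    spatial domain, so only the channel counts may differ.
    """
    assert cam_feat.shape[1:] == radar_feat.shape[1:], "spatial dims must match"
    return np.concatenate([cam_feat, radar_feat], axis=0)   # (C1 + C2, H, W)

cam = np.zeros((16, 32, 32))     # 16-channel camera feature map
radar = np.ones((8, 32, 32))     # 8-channel radar feature map
fused = concat_feature_maps(cam, radar)
print(fused.shape)  # (24, 32, 32)
```

A detection head would then consume the concatenated map; the interesting engineering in the claim is the spatial-domain conversion that makes this concatenation valid, which is not shown here.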
-
Publication number: 20230259600
Abstract: Certain aspects of the present disclosure provide techniques and apparatus for biometric authentication using an anti-spoofing protection model refined using online data. The method generally includes receiving a biometric data input for a user. Features for the received biometric data input are extracted through a first machine learning model. It is determined, using the extracted features for the received biometric data input and a second machine learning model, whether the received biometric data input for the user is authentic or inauthentic. It is determined whether to add the extracted features for the received biometric data input, labeled with an indication of whether the received biometric data input is authentic or inauthentic, to a finetuning data set. The second machine learning model is adjusted based on the finetuning data set.
Type: Application
Filed: January 17, 2023
Publication date: August 17, 2023
Inventors: Davide BELLI, Bence MAJOR, Amir JALALIRAD, Daniel Hendricus Franciscus DIJKMAN, Fatih Murat PORIKLI
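The "determine whether to add to a finetuning data set" step can be sketched as a self-labeling gate: keep a sample only when the model's own prediction is confident enough to serve as a label. The rule and thresholds below are hypothetical illustrations, not the criterion claimed in the application.

```python
import numpy as np

def maybe_add_to_finetune_set(features, authentic_prob, finetune_set,
                              low=0.1, high=0.9):
    """Append (features, label) to the online finetuning set only when the
    anti-spoofing model is confident in its own prediction.

    authentic_prob: model's P(authentic) for this biometric input.
    low/high: illustrative confidence thresholds for self-labeling.
    """
    if authentic_prob >= high or authentic_prob <= low:
        label = int(authentic_prob >= high)    # 1 = authentic, 0 = spoof
        finetune_set.append((features, label))
    return finetune_set

pool = []
maybe_add_to_finetune_set(np.zeros(4), 0.95, pool)   # confident -> added
maybe_add_to_finetune_set(np.zeros(4), 0.50, pool)   # uncertain -> skipped
print(len(pool))  # 1
```

The second model would then be periodically finetuned on `pool`, closing the online-refinement loop the abstract describes.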
-
Publication number: 20220327189
Abstract: Certain aspects of the present disclosure provide techniques and apparatus for biometric authentication using neural-network-based anti-spoofing protection mechanisms. An example method generally includes receiving an image of a biometric data source for a user; extracting, through a first artificial neural network, features for at least the received image; combining the extracted features for the at least the received image and a combined feature representation of a plurality of enrollment biometric data source images; determining, using the combined extracted features for the at least the received image and the combined feature representation as input into a second artificial neural network, whether the received image of the biometric data source for the user is from a real biometric data source or a copy of the real biometric data source; and taking one or more actions to allow or deny the user access to a protected resource based on the determination.
Type: Application
Filed: April 8, 2022
Publication date: October 13, 2022
Inventors: Davide BELLI, Bence MAJOR, Daniel Hendricus Franciscus DIJKMAN, Fatih Murat PORIKLI
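The combination step in this claim — probe features plus a combined enrollment representation, fed to a second network — can be sketched with a linear scorer standing in for that second network. The mean-pooled enrollment representation, the concatenation, and all names here are assumptions for illustration only.

```python
import numpy as np

def antispoof_score(probe_feat, enroll_feats, w, b=0.0):
    """Score a probe image as real vs. spoof.

    probe_feat: (d,) features of the received image (from the first network).
    enroll_feats: (n, d) features of the enrollment images.
    w, b: parameters of a hypothetical linear classifier standing in for
    the second neural network.
    Returns P(real) in (0, 1).
    """
    enroll_rep = enroll_feats.mean(axis=0)            # combined enrollment representation
    combined = np.concatenate([probe_feat, enroll_rep])
    logit = combined @ w + b
    return 1.0 / (1.0 + np.exp(-logit))               # sigmoid

rng = np.random.default_rng(3)
probe = rng.standard_normal(8)
enroll = rng.standard_normal((5, 8))                  # features of 5 enrollment images
w = rng.standard_normal(16)
score = antispoof_score(probe, enroll, w)
print(0.0 < score < 1.0)  # True
```

An access-control wrapper would then compare `score` to a threshold to allow or deny the protected resource, per the last step of the claim.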
-
Patent number: 11443522
Abstract: Methods of processing vehicle sensor information for object detection may include generating a feature map based on captured sensor information, associating with each pixel of the feature map a prior box having a set of two or more width priors and a set of two or more height priors, determining a confidence value of each height prior and each width prior, outputting an indication of a detected object based on a highest confidence height prior and a highest confidence width prior, and performing a vehicle operation based on the output indication of a detected object. Embodiments may include determining for each pixel of the feature map one or more prior boxes having a center value, a size value, and a set of orientation priors, determining a confidence value for each orientation prior, and outputting an indication of the orientation of a detected object based on the highest confidence orientation prior.
Type: Grant
Filed: December 2, 2019
Date of Patent: September 13, 2022
Assignee: Qualcomm Incorporated
Inventors: Bence Major, Daniel Hendricus Franciscus Fontijne, Ravi Teja Sukhavasi, Amin Ansari
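The prior-selection step — picking the highest-confidence width prior and height prior independently for a feature-map pixel — reduces to two argmax operations. A minimal sketch with illustrative prior values and confidences (not the patent's actual priors):

```python
import numpy as np

def decode_box(width_priors, height_priors, w_conf, h_conf):
    """Pick the highest-confidence width and height prior for one
    feature-map pixel and return the decoded box size (w, h)."""
    w = width_priors[np.argmax(w_conf)]
    h = height_priors[np.argmax(h_conf)]
    return w, h

width_priors = [1.0, 2.0, 4.0]    # candidate box widths (illustrative)
height_priors = [0.5, 1.0]        # candidate box heights (illustrative)
w, h = decode_box(width_priors, height_priors,
                  w_conf=[0.1, 0.7, 0.2], h_conf=[0.3, 0.6])
print(w, h)  # 2.0 1.0
```

Selecting width and height priors independently, rather than one prior box per anchor, is what lets a small set of priors cover many box shapes; the orientation priors in the second embodiment would be handled with a third argmax in the same way.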
-
Publication number: 20210255304
Abstract: Disclosed are techniques for employing deep learning to analyze radar signals. In an aspect, an on-board computer of a host vehicle receives, from a radar sensor of the vehicle, a plurality of radar frames, executes a neural network on a subset of the plurality of radar frames, and detects one or more objects in the subset of the plurality of radar frames based on execution of the neural network on the subset of the plurality of radar frames. Further, techniques for transforming polar coordinates to Cartesian coordinates in a neural network are disclosed. In an aspect, a neural network receives a plurality of radar frames in polar coordinate space, a polar-to-Cartesian transformation layer of the neural network transforms the plurality of radar frames to Cartesian coordinate space, and the neural network outputs the plurality of radar frames in the Cartesian coordinate space.
Type: Application
Filed: November 27, 2019
Publication date: August 19, 2021
Inventors: Daniel Hendricus Franciscus FONTIJNE, Amin ANSARI, Bence MAJOR, Ravi Teja SUKHAVASI, Radhika Dilip GOWAIKAR, Xinzhou WU, Sundar SUBRAMANIAN, Michael John HAMILTON
-
Publication number: 20210150347
Abstract: Aspects described herein provide a method of performing guided training of a neural network model, including: receiving supplementary domain feature data; providing the supplementary domain feature data to a fully connected layer of a neural network model; receiving from the fully connected layer supplementary domain feature scaling data; providing the supplementary domain feature scaling data to an activation function; receiving from the activation function supplementary domain feature weight data; receiving a set of feature maps from a first convolution layer of the neural network model; fusing the supplementary domain feature weight data with the set of feature maps to form fused feature maps; and providing the fused feature maps to a second convolution layer of the neural network model.
Type: Application
Filed: November 13, 2020
Publication date: May 20, 2021
Inventors: Shubhankar Mange BORSE, Nojun KWAK, Daniel Hendricus Franciscus DIJKMAN, Bence MAJOR
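The chain in this abstract — supplementary features through a fully connected layer, then an activation, then fusion with convolutional feature maps — can be sketched as per-channel gating. The choice of sigmoid as the activation and multiplication as the fusion are assumptions; the claim leaves both open.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_supplementary(feature_maps, supp_features, fc_weights):
    """Fuse supplementary domain features into convolutional feature maps.

    feature_maps: (C, H, W) output of the first convolution layer.
    supp_features: (d,) supplementary domain feature data.
    fc_weights: (d, C) hypothetical fully connected layer weights.
    """
    scaling = supp_features @ fc_weights        # FC layer -> scaling data
    gates = sigmoid(scaling)                    # activation -> weight data in (0, 1)
    return feature_maps * gates[:, None, None]  # fuse: gate each channel over H, W

rng = np.random.default_rng(2)
fmaps = rng.standard_normal((4, 8, 8))          # (channels, H, W)
supp = rng.standard_normal(6)                   # supplementary domain features
W = rng.standard_normal((6, 4))                 # illustrative FC weights
fused = fuse_supplementary(fmaps, supp, W)
print(fused.shape)  # (4, 8, 8)
```

The fused maps keep the convolutional shape, so they can feed the second convolution layer unchanged, which is what makes this a drop-in guidance mechanism.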
-
Publication number: 20200175286
Abstract: Methods of processing vehicle sensor information for object detection may include generating a feature map based on captured sensor information, associating with each pixel of the feature map a prior box having a set of two or more width priors and a set of two or more height priors, determining a confidence value of each height prior and each width prior, outputting an indication of a detected object based on a highest confidence height prior and a highest confidence width prior, and performing a vehicle operation based on the output indication of a detected object. Embodiments may include determining for each pixel of the feature map one or more prior boxes having a center value, a size value, and a set of orientation priors, determining a confidence value for each orientation prior, and outputting an indication of the orientation of a detected object based on the highest confidence orientation prior.
Type: Application
Filed: December 2, 2019
Publication date: June 4, 2020
Inventors: Bence MAJOR, Daniel Hendricus Franciscus FONTIJNE, Ravi Teja SUKHAVASI, Amin ANSARI
-
Publication number: 20200175315
Abstract: Disclosed are techniques for fusing camera and radar frames to perform object detection in one or more spatial domains. In an aspect, an on-board computer of a host vehicle receives, from a camera sensor of the host vehicle, a plurality of camera frames, receives, from a radar sensor of the host vehicle, a plurality of radar frames, performs a camera feature extraction process on a first camera frame of the plurality of camera frames to generate a first camera feature map, performs a radar feature extraction process on a first radar frame of the plurality of radar frames to generate a first radar feature map, converts the first camera feature map and/or the first radar feature map to a common spatial domain, and concatenates the first radar feature map and the first camera feature map to generate a first concatenated feature map in the common spatial domain.
Type: Application
Filed: November 27, 2019
Publication date: June 4, 2020
Inventors: Radhika Dilip GOWAIKAR, Ravi Teja SUKHAVASI, Daniel Hendricus Franciscus FONTIJNE, Bence MAJOR, Amin ANSARI, Teck Yian LIM, Sundar SUBRAMANIAN, Xinzhou WU