Patents by Inventor Bence MAJOR

Bence MAJOR has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240144087
    Abstract: Certain aspects of the present disclosure provide techniques and apparatus for beam selection using machine learning. A plurality of data samples corresponding to a plurality of data modalities is accessed. A plurality of features is generated by, for each respective data sample of the plurality of data samples, performing feature extraction based at least in part on a respective modality of the respective data sample. The plurality of features is fused using one or more attention-based models, and a wireless communication configuration is generated based on processing the fused plurality of features using a machine learning model.
    Type: Application
    Filed: June 23, 2023
    Publication date: May 2, 2024
    Inventors: Fabio Valerio MASSOLI, Ang LI, Shreya KADAMBI, Hao YE, Arash BEHBOODI, Joseph Binamira SORIAGA, Bence MAJOR, Maximilian Wolfgang Martin ARNOLD
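The attention-based fusion step this abstract describes can be sketched as follows. This is a minimal illustration, not the patented method: the L2-norm attention scores, the codebook-matching beam selector, and all function names are assumptions made for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_modalities(features):
    """Attention-weighted fusion of per-modality feature vectors.

    features: array of shape (num_modalities, feature_dim); each row is the
    extracted feature for one data modality (e.g. camera, GPS, LiDAR).
    """
    # Score each modality (L2 norm used here as a stand-in for a learned
    # query/key dot product), then normalize the scores into attention weights.
    scores = np.linalg.norm(features, axis=1)
    weights = softmax(scores)
    # The weighted sum collapses the modalities into one fused feature vector.
    return weights @ features

def select_beam(fused, beam_codebook):
    """Pick the beam whose codebook entry best matches the fused feature."""
    return int(np.argmax(beam_codebook @ fused))
```

In practice the fused features would feed a trained model that outputs the wireless communication configuration; the codebook dot product above stands in for that last step.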
  • Patent number: 11927668
    Abstract: Disclosed are techniques for employing deep learning to analyze radar signals. In an aspect, an on-board computer of a host vehicle receives, from a radar sensor of the vehicle, a plurality of radar frames, executes a neural network on a subset of the plurality of radar frames, and detects one or more objects in the subset of the plurality of radar frames based on execution of the neural network on the subset of the plurality of radar frames. Further, techniques for transforming polar coordinates to Cartesian coordinates in a neural network are disclosed. In an aspect, a neural network receives a plurality of radar frames in polar coordinate space, a polar-to-Cartesian transformation layer of the neural network transforms the plurality of radar frames to Cartesian coordinate space, and the neural network outputs the plurality of radar frames in the Cartesian coordinate space.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: March 12, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Daniel Hendricus Franciscus Fontijne, Amin Ansari, Bence Major, Ravi Teja Sukhavasi, Radhika Dilip Gowaikar, Xinzhou Wu, Sundar Subramanian, Michael John Hamilton
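The polar-to-Cartesian transformation layer mentioned in this abstract can be sketched as a nearest-neighbour resampling of the radar frame onto a Cartesian grid. The grid geometry, field of view, and function name are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def polar_to_cartesian(frame, max_range, out_size):
    """Nearest-neighbour resampling of a polar radar frame onto a Cartesian grid.

    frame: (num_range_bins, num_azimuth_bins) array; azimuth assumed to span
    [-pi/2, pi/2) around the sensor boresight.
    Returns an (out_size, out_size) Cartesian image; cells outside the
    sensor's field of view or range are left at zero.
    """
    n_r, n_a = frame.shape
    out = np.zeros((out_size, out_size))
    # Cartesian coordinates of each output cell, sensor at the bottom centre.
    xs = np.linspace(-max_range, max_range, out_size)
    ys = np.linspace(0, max_range, out_size)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            r = np.hypot(x, y)
            a = np.arctan2(x, y)  # angle from boresight (the +y axis)
            if r >= max_range:
                continue
            ri = int(r / max_range * n_r)
            ai = int((a + np.pi / 2) / np.pi * n_a)
            if 0 <= ai < n_a:
                out[i, j] = frame[ri, ai]
    return out
```

Inside a neural network this resampling would typically be implemented with a fixed, differentiable grid-sample operation so the layers before and after it can be trained end to end.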
  • Patent number: 11899099
    Abstract: Disclosed are techniques for fusing camera and radar frames to perform object detection in one or more spatial domains. In an aspect, an on-board computer of a host vehicle receives, from a camera sensor of the host vehicle, a plurality of camera frames, receives, from a radar sensor of the host vehicle, a plurality of radar frames, performs a camera feature extraction process on a first camera frame of the plurality of camera frames to generate a first camera feature map, performs a radar feature extraction process on a first radar frame of the plurality of radar frames to generate a first radar feature map, converts the first camera feature map and/or the first radar feature map to a common spatial domain, and concatenates the first radar feature map and the first camera feature map to generate a first concatenated feature map in the common spatial domain.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: February 13, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Radhika Dilip Gowaikar, Ravi Teja Sukhavasi, Daniel Hendricus Franciscus Fontijne, Bence Major, Amin Ansari, Teck Yian Lim, Sundar Subramanian, Xinzhou Wu
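The concatenation step this abstract describes can be sketched as follows. The nearest-neighbour resize standing in for the conversion to a common spatial domain, and both function names, are assumptions made for the example.

```python
import numpy as np

def resize_to(fm, height, width):
    """Nearest-neighbour resize so one feature map can be brought into the
    other's spatial domain before concatenation. fm: (channels, h, w)."""
    c, h, w = fm.shape
    ri = np.arange(height) * h // height
    ci = np.arange(width) * w // width
    return fm[:, ri][:, :, ci]

def concat_feature_maps(camera_fm, radar_fm):
    """Concatenate camera and radar feature maps along the channel axis.

    Both maps must already be in the same (common) spatial domain, shaped
    (channels, height, width) with matching spatial dimensions.
    """
    assert camera_fm.shape[1:] == radar_fm.shape[1:], "spatial domains differ"
    return np.concatenate([camera_fm, radar_fm], axis=0)
```

The concatenated map would then feed a shared detection head that sees both sensors' evidence at every spatial location.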
  • Publication number: 20230259600
    Abstract: Certain aspects of the present disclosure provide techniques and apparatus for biometric authentication using an anti-spoofing protection model refined using online data. The method generally includes receiving a biometric data input for a user. Features for the received biometric data input are extracted through a first machine learning model. It is determined, using the extracted features for the received biometric data input and a second machine learning model, whether the received biometric data input for the user is authentic or inauthentic. It is determined whether to add the extracted features for the received biometric data input, labeled with an indication of whether the received biometric data input is authentic or inauthentic, to a finetuning data set. The second machine learning model is adjusted based on the finetuning data set.
    Type: Application
    Filed: January 17, 2023
    Publication date: August 17, 2023
    Inventors: Davide BELLI, Bence MAJOR, Amir JALALIRAD, Daniel Hendricus Franciscus DIJKMAN, Fatih Murat PORIKLI
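The decision of whether to add an online sample to the finetuning data set could be sketched as confidence gating, as below. The thresholds, the pseudo-labelling rule, and the function name are illustrative assumptions; the abstract does not specify the selection criterion.

```python
def maybe_add_to_finetune_set(features, score, finetune_set,
                              low=0.1, high=0.9):
    """Confidence-gated selection of online samples for finetuning.

    score: anti-spoofing model output in [0, 1], the estimated probability
    that the biometric input is authentic. Only highly confident predictions
    are pseudo-labelled and kept, so uncertain samples cannot poison the
    finetuning data set.
    """
    if score >= high:
        finetune_set.append((features, 1))  # labelled authentic
    elif score <= low:
        finetune_set.append((features, 0))  # labelled spoofed
    return finetune_set
```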
  • Publication number: 20220327189
    Abstract: Certain aspects of the present disclosure provide techniques and apparatus for biometric authentication using neural-network-based anti-spoofing protection mechanisms. An example method generally includes receiving an image of a biometric data source for a user; extracting, through a first artificial neural network, features for at least the received image; combining the extracted features for the at least the received image and a combined feature representation of a plurality of enrollment biometric data source images; determining, using the combined extracted features for the at least the received image and the combined feature representation as input into a second artificial neural network, whether the received image of the biometric data source for the user is from a real biometric data source or a copy of the real biometric data source; and taking one or more actions to allow or deny the user access to a protected resource based on the determination.
    Type: Application
    Filed: April 8, 2022
    Publication date: October 13, 2022
    Inventors: Davide BELLI, Bence MAJOR, Daniel Hendricus Franciscus DIJKMAN, Fatih Murat PORIKLI
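The input to the second (decision) network described in this abstract can be sketched as follows. The mean is used here as the combination of enrollment features purely for illustration; the abstract does not say how the enrollment images are combined, and the function names are assumptions.

```python
import numpy as np

def combine_enrollment(enroll_features):
    """Collapse the per-image enrollment features into one combined
    representation (a simple mean, as a stand-in)."""
    return np.mean(enroll_features, axis=0)

def liveness_input(probe_features, enroll_features):
    """Build the input to the second network: the probe image's features
    concatenated with the combined enrollment representation, so the
    real-vs-copy decision can condition on the enrolled user."""
    return np.concatenate([probe_features,
                           combine_enrollment(enroll_features)])
```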
  • Patent number: 11443522
    Abstract: Methods of processing vehicle sensor information for object detection may include generating a feature map based on captured sensor information, associating with each pixel of the feature map a prior box having a set of two or more width priors and a set of two or more height priors, determining a confidence value of each height prior and each width prior, outputting an indication of a detected object based on a highest-confidence height prior and a highest-confidence width prior, and performing a vehicle operation based on the output indication of a detected object. Embodiments may include determining for each pixel of the feature map one or more prior boxes having a center value, a size value, and a set of orientation priors, determining a confidence value for each orientation prior, and outputting an indication of the orientation of a detected object based on the highest-confidence orientation prior.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: September 13, 2022
    Assignee: Qualcomm Incorporated
    Inventors: Bence Major, Daniel Hendricus Franciscus Fontijne, Ravi Teja Sukhavasi, Amin Ansari
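The decoding step this abstract describes, selecting the highest-confidence width prior and height prior independently, can be sketched as follows; the function name and inputs are assumptions for the example.

```python
import numpy as np

def decode_detection(width_priors, height_priors, w_conf, h_conf):
    """Pick the highest-confidence width prior and height prior
    independently and report the resulting box size."""
    wi = int(np.argmax(w_conf))
    hi = int(np.argmax(h_conf))
    return width_priors[wi], height_priors[hi]
```

Keeping separate prior sets per dimension lets the network express, say, a wide-but-short object without needing a joint prior box for every width/height combination.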
  • Publication number: 20210255304
    Abstract: Disclosed are techniques for employing deep learning to analyze radar signals. In an aspect, an on-board computer of a host vehicle receives, from a radar sensor of the vehicle, a plurality of radar frames, executes a neural network on a subset of the plurality of radar frames, and detects one or more objects in the subset of the plurality of radar frames based on execution of the neural network on the subset of the plurality of radar frames. Further, techniques for transforming polar coordinates to Cartesian coordinates in a neural network are disclosed. In an aspect, a neural network receives a plurality of radar frames in polar coordinate space, a polar-to-Cartesian transformation layer of the neural network transforms the plurality of radar frames to Cartesian coordinate space, and the neural network outputs the plurality of radar frames in the Cartesian coordinate space.
    Type: Application
    Filed: November 27, 2019
    Publication date: August 19, 2021
    Inventors: Daniel Hendricus Franciscus FONTIJNE, Amin ANSARI, Bence MAJOR, Ravi Teja SUKHAVASI, Radhika Dilip GOWAIKAR, Xinzhou WU, Sundar SUBRAMANIAN, Michael John HAMILTON
  • Publication number: 20210150347
    Abstract: Aspects described herein provide a method of performing guided training of a neural network model, including: receiving supplementary domain feature data; providing the supplementary domain feature data to a fully connected layer of a neural network model; receiving from the fully connected layer supplementary domain feature scaling data; providing the supplementary domain feature scaling data to an activation function; receiving from the activation function supplementary domain feature weight data; receiving a set of feature maps from a first convolution layer of the neural network model; fusing the supplementary domain feature weight data with the set of feature maps to form fused feature maps; and providing the fused feature maps to a second convolution layer of the neural network model.
    Type: Application
    Filed: November 13, 2020
    Publication date: May 20, 2021
    Inventors: Shubhankar Mange BORSE, Nojun KWAK, Daniel Hendricus Franciscus DIJKMAN, Bence MAJOR
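The fully-connected-then-activation-then-fusion pipeline this abstract walks through can be sketched as per-channel feature rescaling. Treating the fusion as elementwise channel scaling, and the sigmoid as the activation, are assumptions for the example, as are the function names.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def guided_fusion(supp_features, W, b, feature_maps):
    """Fuse supplementary-domain information into a CNN's feature maps.

    supp_features: (d,) supplementary domain feature vector.
    W, b: weights of the fully connected layer, shaped (c, d) and (c,),
          producing one scaling value per channel of `feature_maps`.
    feature_maps: (c, h, w) output of the first convolution layer.
    """
    scaling = W @ supp_features + b  # fully connected layer -> scaling data
    weights = sigmoid(scaling)       # activation -> feature weight data
    # Rescale each channel by its weight before the second convolution layer.
    return feature_maps * weights[:, None, None]
```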
  • Publication number: 20200175286
    Abstract: Methods of processing vehicle sensor information for object detection may include generating a feature map based on captured sensor information, associating with each pixel of the feature map a prior box having a set of two or more width priors and a set of two or more height priors, determining a confidence value of each height prior and each width prior, outputting an indication of a detected object based on a highest-confidence height prior and a highest-confidence width prior, and performing a vehicle operation based on the output indication of a detected object. Embodiments may include determining for each pixel of the feature map one or more prior boxes having a center value, a size value, and a set of orientation priors, determining a confidence value for each orientation prior, and outputting an indication of the orientation of a detected object based on the highest-confidence orientation prior.
    Type: Application
    Filed: December 2, 2019
    Publication date: June 4, 2020
    Inventors: Bence MAJOR, Daniel Hendricus Franciscus FONTIJNE, Ravi Teja SUKHAVASI, Amin ANSARI
  • Publication number: 20200175315
    Abstract: Disclosed are techniques for fusing camera and radar frames to perform object detection in one or more spatial domains. In an aspect, an on-board computer of a host vehicle receives, from a camera sensor of the host vehicle, a plurality of camera frames, receives, from a radar sensor of the host vehicle, a plurality of radar frames, performs a camera feature extraction process on a first camera frame of the plurality of camera frames to generate a first camera feature map, performs a radar feature extraction process on a first radar frame of the plurality of radar frames to generate a first radar feature map, converts the first camera feature map and/or the first radar feature map to a common spatial domain, and concatenates the first radar feature map and the first camera feature map to generate a first concatenated feature map in the common spatial domain.
    Type: Application
    Filed: November 27, 2019
    Publication date: June 4, 2020
    Inventors: Radhika Dilip GOWAIKAR, Ravi Teja SUKHAVASI, Daniel Hendricus Franciscus FONTIJNE, Bence MAJOR, Amin ANSARI, Teck Yian LIM, Sundar SUBRAMANIAN, Xinzhou WU