Patents by Inventor Senthil Kumar Yogamani
Senthil Kumar Yogamani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250054285
Abstract: A sensor data processing system includes a perception unit that collects data representing positions of sensors on a vehicle and obtains environmental information around the vehicle via the sensors. The system also includes a feature fusion unit that combines first environmental information from the sensors into first fused feature data representing first positions of objects around the vehicle, provides the first fused feature data to an object tracking unit, receives feedback for the first fused feature data from the object tracking unit, and combines second environmental information from the sensors, using the feedback, into second fused feature data representing second positions of objects around the vehicle. The system may then at least partially control operation of the vehicle using the second fused feature data.
Type: Application
Filed: August 10, 2023
Publication date: February 13, 2025
Applicant: QUALCOMM Incorporated
Inventors: Senthil Kumar Yogamani, Varun Ravi Kumar, Venkatraman Narayanan
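The two-pass fusion loop in this abstract can be sketched as follows. This is a hypothetical toy illustration, not the claimed implementation: sensor feature maps are reduced to per-sensor lists of (x, y) object estimates, and the tracking unit's feedback is modeled as per-sensor confidence weights.

```python
# Hypothetical sketch of the two-pass fusion described in the abstract: fuse
# per-sensor object-position estimates, hand the result to a tracker, and use
# the tracker's feedback (here, per-sensor confidence weights) to re-fuse the
# next batch of sensor data. All weights and estimates are illustrative.

def fuse(sensor_features, weights):
    """Weighted average of per-sensor (x, y) estimates, one per object."""
    fused = []
    for estimates in zip(*sensor_features):  # one tuple of estimates per object
        wsum = sum(weights)
        x = sum(w * e[0] for w, e in zip(weights, estimates)) / wsum
        y = sum(w * e[1] for w, e in zip(weights, estimates)) / wsum
        fused.append((x, y))
    return fused

# First pass: no feedback yet, so camera and lidar count equally.
cam, lidar = [(1.0, 2.0)], [(1.2, 2.2)]
first = fuse([cam, lidar], weights=[0.5, 0.5])

# Second pass: suppose the tracking unit reports that the lidar estimates
# agreed better with its motion model, so its feedback weight is raised.
second = fuse([cam, lidar], weights=[0.3, 0.7])
```

The key point the sketch captures is that the tracker closes a loop over fusion: the second combination of sensor data is not independent of the first, but re-weighted by what tracking learned from it.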
-
Publication number: 20250035448
Abstract: Disclosed are techniques for localization of an object. For example, a device can generate, based on sensor data obtained from sensor(s) associated with an object, a predicted map comprising predicted nodes associated with a predicted location of the object within an environment. The device can receive a high definition (HD) map comprising HD nodes associated with an HD location of the object within the environment. The device can further match the predicted nodes with the HD nodes to determine pair(s) of matched nodes between the predicted map and the HD map. The device can determine, based on a comparison between nodes in each pair of the pair(s) of matched nodes, a respective node score for each pair of the pair(s) of matched nodes. The device can determine, based on the respective node score for each pair of the pair(s) of matched nodes, a location of the object within the environment.
Type: Application
Filed: July 27, 2023
Publication date: January 30, 2025
Inventors: Heesoo MYEONG, Senthil Kumar YOGAMANI, Varun RAVI KUMAR
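The match-score-localize pipeline above can be illustrated with a minimal sketch. The nearest-neighbor matching, the inverse-distance scoring, and the score-weighted averaging are all assumptions made for illustration; the abstract does not specify how nodes are matched or scored.

```python
import math

# Hypothetical sketch of node matching between a predicted map and an HD map.
# Nodes are simplified to (x, y) points; the matching rule and node score are
# illustrative stand-ins for whatever the actual system computes.

def match_nodes(pred_nodes, hd_nodes):
    """Pair each predicted node with its nearest HD node."""
    return [(p, min(hd_nodes, key=lambda h: math.dist(p, h)))
            for p in pred_nodes]

def node_score(p, q):
    """Higher score for closer matches (inverse-distance)."""
    return 1.0 / (1.0 + math.dist(p, q))

def localize(pred_nodes, hd_nodes):
    """Score-weighted average of the matched HD nodes as the object location."""
    pairs = match_nodes(pred_nodes, hd_nodes)
    scores = [node_score(p, q) for p, q in pairs]
    total = sum(scores)
    x = sum(s * q[0] for s, (_, q) in zip(scores, pairs)) / total
    y = sum(s * q[1] for s, (_, q) in zip(scores, pairs)) / total
    return (x, y)

# Two predicted nodes, each slightly off its HD counterpart:
loc = localize([(0.1, 0.0), (1.9, 0.0)], [(0.0, 0.0), (2.0, 0.0)])
```

Because both matches here are equally good, the location lands midway between the two matched HD nodes; poorer matches would pull the estimate toward the better-scoring pairs.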
-
Publication number: 20250031089
Abstract: Aspects presented herein may enable a UE (user equipment) to detect and identify a weather condition of an environment based on the sparsity of fast Fourier transform (FFT) or discrete wavelet transform (DWT) coefficients derived from a set of range images associated with the environment. In one aspect, a UE converts a set of point clouds associated with an environment to a set of range images based on a spherical projection. The UE applies at least one of an FFT or a DWT to the set of range images to obtain a set of FFT coefficients or a set of DWT coefficients. The UE identifies a level of a condition for the environment based on a sparsity of the set of FFT coefficients or the set of DWT coefficients.
Type: Application
Filed: July 21, 2023
Publication date: January 23, 2025
Inventors: Ming-Yuan YU, Senthil Kumar YOGAMANI, Varun RAVI KUMAR
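The intuition behind the sparsity test can be sketched in a few lines: a clear-weather range scan varies smoothly, so its spectral energy concentrates in a few coefficients, while rain or fog scatters returns and spreads energy across many coefficients. The naive DFT, the sparsity measure, and the sample signals below are illustrative assumptions, not the patented method.

```python
import cmath

# Hypothetical sketch: measure how many DFT coefficients of a range-image row
# carry non-negligible energy. A noisier (weather-degraded) row is less sparse.
# The DFT stands in for the FFT; the 10%-of-peak threshold is an assumption.

def dft(signal):
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def sparsity(signal, frac=0.1):
    """Fraction of DFT coefficients above `frac` of the peak magnitude."""
    mags = [abs(c) for c in dft(signal)]
    peak = max(mags)
    return sum(m > frac * peak for m in mags) / len(mags)

smooth = [10.0, 10.5, 11.0, 11.5, 12.0, 11.5, 11.0, 10.5]  # clear scan row
noisy = [10.0, 14.2, 8.1, 13.7, 9.0, 15.1, 7.5, 12.9]      # scattered returns
```

Comparing `sparsity(noisy)` against `sparsity(smooth)` shows the degraded row spreading energy over more coefficients, which is the signal the abstract maps to a weather-condition level.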
-
Publication number: 20240428547
Abstract: An apparatus for multi-object tracking determines a current representation of a current object in a current image. The apparatus computes a joint Gaussian distribution between the current representation of the current object and a previous representation stored in one or more memory buffers, wherein the previous representation was determined from a previous image. The apparatus updates the one or more memory buffers based on the joint Gaussian distribution. For example, the apparatus determines whether to remove or replace the previous representation in the one or more memory buffers based on values of a covariance matrix of the joint Gaussian distribution.
Type: Application
Filed: June 22, 2023
Publication date: December 26, 2024
Inventors: Rajeev Yasarla, Varun Ravi Kumar, Senthil Kumar Yogamani
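A toy version of the covariance-driven buffer update might look like the sketch below. The embeddings, the scalar cross-covariance (standing in for entries of the joint Gaussian's covariance matrix), and both thresholds are illustrative assumptions.

```python
# Hypothetical sketch of the memory-buffer update: the cross-covariance between
# the current representation and a stored one indicates whether they describe
# the same object. High covariance -> replace (refresh the track's memory);
# near-zero covariance -> remove (stale entry). Thresholds are illustrative.

def cross_cov(a, b):
    """Sample cross-covariance of two equal-length feature vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

def update_buffer(buffer, current, replace_thr=0.5, remove_thr=0.05):
    """Replace the stored representation the current one matches; drop stale ones."""
    new_buffer, matched = [], False
    for stored in buffer:
        c = abs(cross_cov(current, stored))
        if c >= replace_thr and not matched:
            new_buffer.append(current)   # same object: refresh its memory slot
            matched = True
        elif c > remove_thr:
            new_buffer.append(stored)    # keep other live tracks
        # else: stale entry, removed from the buffer
    if not matched:
        new_buffer.append(current)       # unmatched: treat as a new object
    return new_buffer

buf = [[1.0, 2.0, 3.0, 4.0],  # tracks the same object as `current`
       [0.0, 0.0, 0.1, 0.0]]  # stale, near-constant representation
buf = update_buffer(buf, [1.1, 2.1, 3.2, 3.9])
```

After the update, the matching slot has been replaced by the current representation and the stale slot has been dropped, which mirrors the remove-or-replace decision the abstract ties to the covariance matrix.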
-
Publication number: 20240425042
Abstract: A system for navigation stores, in a time-based buffer, a first set of frames acquired by a sensor; stores, in a distance-based buffer, a second set of frames acquired by the sensor; performs moving object segmentation on the first set of frames and the second set of frames to identify at least one moving object in a scene of the frames; predicts a trajectory of the at least one moving object; and performs a navigation function based on the predicted trajectory.
Type: Application
Filed: June 23, 2023
Publication date: December 26, 2024
Inventors: Ming-Yuan Yu, Varun Ravi Kumar, Senthil Kumar Yogamani
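The dual-buffer idea can be made concrete with a short sketch: a time-based buffer keeps the last N frames regardless of motion, while a distance-based buffer admits a frame only after the ego vehicle has traveled a minimum distance, so stop-and-go traffic does not fill it with near-duplicates. The buffer sizes, the distance gap, and the constant-velocity trajectory prediction are illustrative assumptions.

```python
from collections import deque

# Hypothetical sketch of the two buffers plus a toy trajectory predictor.
# Frames are placeholder strings; ego position is a 1-D odometer reading.

time_buffer = deque(maxlen=5)  # last 5 frames, by time
dist_buffer = []               # frames spaced by travelled distance
MIN_GAP_M = 2.0
last_pos = None

def ingest(frame, ego_pos):
    global last_pos
    time_buffer.append(frame)
    if last_pos is None or abs(ego_pos - last_pos) >= MIN_GAP_M:
        dist_buffer.append(frame)
        last_pos = ego_pos

def predict_next(track):
    """Constant-velocity extrapolation of a moving object's (x, y) track."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

# Stop-and-go driving: six frames by time, but only three by distance.
for i, pos in enumerate([0.0, 0.3, 0.5, 2.4, 2.5, 4.6]):
    ingest(f"frame{i}", pos)
pred = predict_next([(0.0, 0.0), (1.0, 0.5)])
```

Segmenting moving objects on both buffers gives the system complementary temporal baselines: short, dense history from the time buffer and wide spatial coverage from the distance buffer.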
-
Publication number: 20240428441
Abstract: In some aspects, a device may obtain, via a camera associated with the device, an image that includes one or more objects located within an area of the device. The device may generate a first three-dimensional output based at least in part on the image. The device may obtain, via an audio component associated with the device, an audio input associated with the one or more objects. The device may generate a second three-dimensional output based at least in part on the audio input. The device may detect the one or more objects based at least in part on the first three-dimensional output and the second three-dimensional output. Numerous other aspects are described.
Type: Application
Filed: June 22, 2023
Publication date: December 26, 2024
Inventors: Balaji Shankar BALACHANDRAN, Varun RAVI KUMAR, Senthil Kumar YOGAMANI
-
Publication number: 20240420293
Abstract: A method of processing image data includes receiving, with a frame correction machine-learning (ML) model executing on processing circuitry, an image frame captured from a first camera of a plurality of cameras; performing, with the frame correction ML model executing on the processing circuitry, image frame correction to generate a corrected image frame based on weights or biases of the frame correction ML model applied to two or more of: samples of the image frame, samples of previously captured image frames from the first camera, or samples from image frames from other cameras of the plurality of cameras; and performing, with the processing circuitry, post-processing based on the corrected image frame.
Type: Application
Filed: June 16, 2023
Publication date: December 19, 2024
Inventors: Deeksha Dixit, Varun Ravi Kumar, Senthil Kumar Yogamani
-
Publication number: 20240412486
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method of image processing includes receiving an image frame from an image sensor of a camera; receiving an indicator associated with a type of lens of the camera; determining a first tensor grid associated with the indicator, the first tensor grid including a plurality of image framework positions associated with the type of lens; and determining, using a machine learning model, a bird's-eye-view (BEV) feature map corresponding to the image frame based on features of the image frame and the first tensor grid. Other aspects and features are also claimed and described.
Type: Application
Filed: June 6, 2023
Publication date: December 12, 2024
Inventors: Varun Ravi Kumar, Senthil Kumar Yogamani, Bala Murali Manoghar Sai Sudhakar
-
Publication number: 20240412494
Abstract: This disclosure provides systems, methods, and devices that support image processing. In a first aspect, a method for multi-sensor fusion includes receiving first information indicative of a first set of bird's-eye-view (BEV) features of image data captured by an image sensor; receiving second information indicative of a second set of BEV features of non-image sensor data captured by a non-image sensor; and determining fused data that combines the image data and the non-image sensor data based on the first information, the second information, and third information indicative of differences between BEV features of training data and the first set of BEV features and the second set of BEV features. The BEV features of the training data include a third set of BEV features associated with the image sensor and a fourth set of BEV features associated with the non-image sensor. Other aspects and features are also claimed and described.
Type: Application
Filed: June 9, 2023
Publication date: December 12, 2024
Inventors: Balaji Shankar Balachandran, Varun Ravi Kumar, Senthil Kumar Yogamani
-
Publication number: 20240412534
Abstract: Systems and techniques are described herein for determining road profiles. For instance, a method for determining a road profile is provided. The method may include extracting image features from one or more images of an environment, wherein the environment includes a road; generating a segmentation mask based on the image features; determining a subset of the image features based on the segmentation mask; generating image-based three-dimensional features based on the subset of the image features; obtaining point-cloud-based three-dimensional features derived from a point cloud representative of the environment; combining the image-based three-dimensional features and the point-cloud-based three-dimensional features to generate combined three-dimensional features; and generating a road profile based on the combined three-dimensional features.
Type: Application
Filed: June 6, 2023
Publication date: December 12, 2024
Inventors: Senthil Kumar YOGAMANI, Varun RAVI KUMAR, Deeksha DIXIT
-
Publication number: 20240400079
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method of image processing includes receiving, by a processor, image data from a camera image sensor; receiving, by the processor, point cloud data from a light detection and ranging (LiDAR) sensor; generating, by the processor and using a first machine learning model, fused image data that combines the image data and the point cloud data; and determining, by the processor and using a second machine learning model, whether the fused image data satisfies a criterion based on whether a population risk function of the first machine learning model exceeds a threshold. Other aspects and features are also claimed and described.
Type: Application
Filed: June 5, 2023
Publication date: December 5, 2024
Inventors: Sweta Priyadarshi, Shivansh Rao, Varun Ravi Kumar, Senthil Kumar Yogamani
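The gating step this abstract describes can be sketched minimally: fused camera/LiDAR output is trusted only while an estimate of the fusion model's population risk stays below a threshold. The empirical-mean risk proxy, the function names, and the loss values below are all illustrative assumptions; the patent does not disclose how the risk function is computed.

```python
# Hypothetical sketch: gate fused sensor data on an estimate of the fusion
# model's population risk. A running mean of held-out per-sample losses stands
# in for the true risk functional; the threshold is an assumption.

def population_risk_estimate(losses):
    """Empirical proxy for the population risk of the fusion model."""
    return sum(losses) / len(losses)

def fused_data_ok(losses, threshold=0.2):
    """Accept the fused output only while estimated risk is under threshold."""
    return population_risk_estimate(losses) <= threshold

calibration_losses = [0.12, 0.18, 0.15, 0.11]
ok = fused_data_ok(calibration_losses)
```

The design choice worth noting is that the check runs on a second model's assessment of the first, rather than trusting the fusion model's own confidence.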
-
Publication number: 20240395007
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method of image processing includes receiving a plurality of image frames by a computing device and using machine learning models to identify corrupted or occluded image frames. A first machine learning model may identify corrupted image frames, while a second machine learning model may identify partially occluded image frames. The method may further include generating updated versions of image frames captured by vehicle cameras, such as based on feature vectors from the first and second machine learning models. The feature vectors may be fused and provided to a third machine learning model to generate updated versions of occluded image frames. The method may further include determining vehicle control instructions based on the updated versions. Other aspects and features are also claimed and described.
Type: Application
Filed: May 22, 2023
Publication date: November 28, 2024
Inventors: Varun Ravi Kumar, Debasmit Das, Senthil Kumar Yogamani
-
Publication number: 20240378911
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method is provided to train a machine learning model using image data and position data to identify contact points and ground surface normal vectors. Image data is received that depicts an object, and position data for the object is also received, such as point cloud position information for various points along the object's exterior surface. Two sets of labels may then be determined based on the position data, with one set identifying where the object contacts a ground surface and another identifying at least one normal vector for the ground surface. The machine learning model may then be trained based on both sets of labels to determine three-dimensional bounding boxes, normal maps, or combinations thereof. Other aspects and features are also claimed and described.
Type: Application
Filed: May 9, 2023
Publication date: November 14, 2024
Inventors: Balaji Shankar Balachandran, Varun Ravi Kumar, Senthil Kumar Yogamani
-
Publication number: 20240378743
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method is provided that includes determining a first set of feature vectors for received images for a top view representation of an area surrounding a vehicle and a second set of feature vectors for a cylindrical representation of the area. The method may further include determining a first set of locations based on the first set of feature vectors and determining a second set of locations based on the second set of feature vectors. A third set of locations may be determined based on the first and second sets of locations, such as combining the first and second sets using a transformer attention process. Vehicle control instructions may then be determined based on the third set of locations. Other aspects and features are also claimed and described.
Type: Application
Filed: May 9, 2023
Publication date: November 14, 2024
Inventors: Varun Ravi Kumar, Senthil Kumar Yogamani
-
Publication number: 20240378872
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method of image processing includes receiving, by a processor, first kinematic information associated with a camera image sensor; receiving, by the processor, point cloud data from a light detection and ranging (LiDAR) sensor; generating, by the processor, first image data that is time-synchronized with the point cloud data based on the first kinematic information and a neural radiance fields (NeRF) model; and generating, by the processor, fused data that combines the first image data and the point cloud data. Other aspects and features are also claimed and described.
Type: Application
Filed: May 9, 2023
Publication date: November 14, 2024
Inventors: Nirnai Ach, Mireille Lucette Laure Gregoire, Julia Kabalar, Senthil Kumar Yogamani
-
Publication number: 20240371023
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method is provided for determining the locations and bounding surfaces of objects depicted in image frames captured by fisheye image sensors attached to a vehicle. The method includes receiving raw fisheye image data from the sensor and using machine learning models to determine the locations and three-dimensional bounding surfaces of objects in the image frame. The bounding surfaces may be defined by three-dimensional polar coordinates representing portions of the viewing area of the fisheye image sensor. Control instructions for the vehicle may then be determined based on the bounding surfaces. Other aspects and features are also claimed and described.
Type: Application
Filed: May 4, 2023
Publication date: November 7, 2024
Inventors: Balaji Shankar Balachandran, Varun Ravi Kumar, Senthil Kumar Yogamani
-
Publication number: 20240371147
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method of fusing features from near-field images and far-field images is provided that includes determining feature vectors and spatial locations for received images from near-field and far-field image sensors. A first set of weighted feature vectors may be determined based on spatial locations of the features and a second set of weighted feature vectors may be determined based on corresponding features between the feature vectors. Fused feature vectors may then be determined based on the weighted feature vectors, such as using a transformer attention process trained to select and combine features from both sets of weighted feature vectors. Vehicle control instructions may be determined based on the fused feature vectors. Other aspects and features are also claimed and described.
Type: Application
Filed: May 5, 2023
Publication date: November 7, 2024
Inventors: Varun Ravi Kumar, Senthil Kumar Yogamani
-
Publication number: 20240371168
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method is provided that includes generating a top view image of an object using a plurality of images captured from different views. The method involves determining portions of the images that depict the object and generating novel views of the object from at least one novel view not present within the plurality of images. Corresponding portions containing an occluded view and an unobstructed view of the object are identified, and corrected views for occluded views are determined based on corresponding unobstructed views using a machine learning model. A top view image may then be generated based on the corrected views. The invention enables improved visibility for autonomous driving systems in situations where objects are occluded or partially obstructed. Other aspects and features are also claimed and described.
Type: Application
Filed: May 3, 2023
Publication date: November 7, 2024
Inventors: Deeksha Dixit, Varun Ravi Kumar, Senthil Kumar Yogamani
-
Publication number: 20240312226
Abstract: A computer-implemented method for analyzing a roundabout in an environment for a vehicle is disclosed. The method includes generating at least one initial feature map by applying a feature encoder module of a trained neural network to an input image, where the input image depicts the roundabout. The method then includes applying a classificator module of the trained neural network to the initial feature map; an output of the classificator module represents a road region on the input image. The method then includes applying a radius estimation module of the trained neural network to the initial feature map; an output of the radius estimation module depends on an inner radius of the roundabout and an outer radius of the roundabout. The method finally includes determining an entry point and an exit point of the roundabout depending on the output of the classificator module and the output of the radius estimation module.
Type: Application
Filed: July 5, 2022
Publication date: September 19, 2024
Applicant: CONNAUGHT ELECTRONICS Ltd.
Inventors: Akhilesh Kumar Malviya, Arindam Das, Senthil Kumar Yogamani
-
Publication number: 20240312188
Abstract: Systems and techniques are described herein for training an object-detection model. For instance, a method for training an object-detection model is provided. The method may include obtaining a light detection and ranging (LIDAR) capture; obtaining a first LIDAR-based representation of an object as captured from a first distance; obtaining a second LIDAR-based representation of the object as captured from a second distance; augmenting the LIDAR capture using the first LIDAR-based representation of the object and the second LIDAR-based representation of the object to generate an augmented LIDAR capture; and training a machine-learning object-detection model using the augmented LIDAR capture.
Type: Application
Filed: March 17, 2023
Publication date: September 19, 2024
Inventors: Venkatraman NARAYANAN, Varun RAVI KUMAR, Senthil Kumar YOGAMANI
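The augmentation step can be sketched simply: point-cloud crops of the same object captured at two distances are pasted into a background LIDAR capture, so the detector trains on both a dense near-range variant and a sparse far-range variant of the object. The point lists, placement offsets, and function names below are illustrative assumptions.

```python
# Hypothetical sketch of the LIDAR-capture augmentation: insert two distance
# variants of one object (dense near crop, sparse far crop) into a background
# scene. Points are (x, y, z) tuples; placements are illustrative.

def translate(points, dx, dy):
    """Shift a point-cloud crop to its placement in the scene."""
    return [(x + dx, y + dy, z) for x, y, z in points]

def augment(capture, near_obj, far_obj, near_at=(5.0, 0.0), far_at=(40.0, 0.0)):
    """Insert both distance variants of the object into the capture."""
    out = list(capture)
    out += translate(near_obj, *near_at)  # dense crop placed close to ego
    out += translate(far_obj, *far_at)    # sparse crop placed far away
    return out

background = [(10.0, 3.0, 0.0), (20.0, -4.0, 0.1)]
near = [(0.0, 0.0, 0.5), (0.1, 0.0, 0.5), (0.0, 0.1, 0.6)]  # dense: 3 points
far = [(0.0, 0.0, 0.5)]                                     # sparse: 1 point
augmented = augment(background, near, far)
```

Training on captures augmented this way exposes the detector to the range-dependent point density of real LIDAR without needing the object to actually appear at every distance in the recorded data.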