Patents by Inventor Senthil Kumar Yogamani
Senthil Kumar Yogamani has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240371168
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method is provided that includes generating a top view image of an object using a plurality of images captured from different views. The method involves determining portions of the images that depict the object and generating novel views of the object from at least one viewpoint not present within the plurality of images. Corresponding portions containing an occluded view and an unobstructed view of the object are identified, and corrected views for the occluded views are determined based on the corresponding unobstructed views using a machine learning model. A top view image may then be generated based on the corrected views. The invention enables improved visibility for autonomous driving systems in situations where objects are occluded or partially obstructed. Other aspects and features are also claimed and described.
Type: Application
Filed: May 3, 2023
Publication date: November 7, 2024
Inventors: Deeksha Dixit, Varun Ravi Kumar, Senthil Kumar Yogamani
-
Publication number: 20240371023
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method is provided for determining the locations and bounding surfaces of objects depicted in image frames captured by fisheye image sensors attached to a vehicle. The method includes receiving raw fisheye image data from the sensors and using machine learning models to determine the locations and three-dimensional bounding surfaces of objects in the image frame. The bounding surfaces may be defined by three-dimensional polar coordinates representing portions of the viewing area of the fisheye image sensor. Control instructions for the vehicle may then be determined based on the bounding surfaces. Other aspects and features are also claimed and described.
Type: Application
Filed: May 4, 2023
Publication date: November 7, 2024
Inventors: Balaji Shankar Balachandran, Varun Ravi Kumar, Senthil Kumar Yogamani
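The abstract above describes bounding surfaces defined in three-dimensional polar coordinates over a fisheye sensor's viewing area. As a minimal sketch, the class below shows one way such a polar bound could be represented and queried; the class name, fields, and containment test are illustrative assumptions, not taken from the patent.

```python
import math

# Hypothetical polar bounding "surface": a wedge of the sensor's viewing
# area, bounded in azimuth, elevation, and range. All names are assumed.
class PolarBound:
    def __init__(self, az_min, az_max, el_min, el_max, r_min, r_max):
        self.az = (az_min, az_max)  # horizontal viewing angle, radians
        self.el = (el_min, el_max)  # vertical viewing angle, radians
        self.r = (r_min, r_max)     # distance from the sensor, metres

    def contains(self, x, y, z):
        """Check whether a Cartesian point falls inside the polar bound."""
        r = math.sqrt(x * x + y * y + z * z)
        az = math.atan2(y, x)
        el = math.asin(z / r) if r > 0 else 0.0
        return (self.az[0] <= az <= self.az[1]
                and self.el[0] <= el <= self.el[1]
                and self.r[0] <= r <= self.r[1])

bound = PolarBound(-0.5, 0.5, -0.2, 0.2, 1.0, 10.0)
print(bound.contains(5.0, 0.0, 0.0))   # point straight ahead at 5 m → True
```

Representing bounds this way keeps them aligned with what a wide-angle sensor actually observes (angles and range) rather than forcing detections into axis-aligned Cartesian boxes.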
-
Publication number: 20240371147
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method of fusing features from near-field images and far-field images is provided that includes determining feature vectors and spatial locations for received images from near-field and far-field image sensors. A first set of weighted feature vectors may be determined based on spatial locations of the features and a second set of weighted feature vectors may be determined based on corresponding features between the feature vectors. Fused feature vectors may then be determined based on the weighted feature vectors, such as using a transformer attention process trained to select and combine features from both sets of weighted feature vectors. Vehicle control instructions may be determined based on the fused feature vectors. Other aspects and features are also claimed and described.
Type: Application
Filed: May 5, 2023
Publication date: November 7, 2024
Inventors: Varun Ravi Kumar, Senthil Kumar Yogamani
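The entry above describes two weighting stages (spatial location and feature correspondence) followed by a learned fusion step. The sketch below is a toy stand-in under stated assumptions: the weighting functions and the simple blend replacing the trained transformer attention are inventions for illustration only.

```python
import math

def spatial_weight(location, near_range=10.0):
    """Weight by where the feature sits: nearby locations favour the
    near-field sensor, distant ones the far-field sensor. (Assumed form.)"""
    distance = math.hypot(*location)
    w_near = max(0.0, 1.0 - distance / near_range)
    return w_near, 1.0 - w_near

def correspondence_weight(f_near, f_far):
    """Weight by how well the two feature vectors agree (cosine similarity)."""
    dot = sum(a * b for a, b in zip(f_near, f_far))
    norm = (math.sqrt(sum(a * a for a in f_near))
            * math.sqrt(sum(b * b for b in f_far)))
    return dot / norm if norm else 0.0

def fuse(f_near, f_far, location):
    """Blend the two weighted vectors; high agreement pulls the result
    toward their average. This stands in for the learned attention step."""
    w_near, w_far = spatial_weight(location)
    agreement = correspondence_weight(f_near, f_far)
    return [agreement * 0.5 * (a + b)
            + (1.0 - agreement) * (w_near * a + w_far * b)
            for a, b in zip(f_near, f_far)]
```

For identical near- and far-field features, `fuse` simply returns them unchanged; disagreeing features are resolved by the spatial prior.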
-
Publication number: 20240312226
Abstract: A computer-implemented method for analyzing a roundabout in an environment for a vehicle is disclosed. The method includes generating at least one initial feature map by applying a feature encoder module of a trained neural network to an input image depicting the roundabout. Next, a classificator module of the trained neural network is applied to the initial feature map; its output represents a road region on the input image. A radius estimation module of the trained neural network is then applied to the initial feature map; its output depends on the inner radius and the outer radius of the roundabout. Finally, an entry point and an exit point of the roundabout are determined depending on the output of the classificator module and the output of the radius estimation module.
Type: Application
Filed: July 5, 2022
Publication date: September 19, 2024
Applicant: Connaught Electronics Ltd.
Inventors: Akhilesh Kumar Malviya, Arindam Das, Senthil Kumar Yogamani
-
Publication number: 20240312188
Abstract: Systems and techniques are described herein for training an object-detection model. For instance, a method for training an object-detection model is provided. The method may include obtaining a light detection and ranging (LIDAR) capture; obtaining a first LIDAR-based representation of an object as captured from a first distance; obtaining a second LIDAR-based representation of the object as captured from a second distance; augmenting the LIDAR capture using the first LIDAR-based representation of the object and the second LIDAR-based representation of the object to generate an augmented LIDAR capture; and training a machine-learning object-detection model using the augmented LIDAR capture.
Type: Application
Filed: March 17, 2023
Publication date: September 19, 2024
Inventors: Venkatraman Narayanan, Varun Ravi Kumar, Senthil Kumar Yogamani
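The augmentation step above, pasting an object's point cloud as observed at two distances into a background LIDAR capture, can be sketched as follows. The point format `(x, y, z)` and the placement logic are assumptions for illustration; a real pipeline would also handle occlusion and ground alignment.

```python
# Minimal sketch of LIDAR augmentation: copies of an object's point
# cloud, captured at two distances, are translated into a background
# capture to create an extra training example.

def translate(points, dx, dy):
    """Shift a point cloud on the ground plane."""
    return [(x + dx, y + dy, z) for x, y, z in points]

def augment_capture(capture, obj_near, obj_far, near_pos, far_pos):
    """Insert the near- and far-distance views of the object into the
    capture at the requested ground positions."""
    augmented = list(capture)
    augmented += translate(obj_near, *near_pos)
    augmented += translate(obj_far, *far_pos)
    return augmented

background = [(0.0, 0.0, 0.0)]
near_view = [(0.1, 0.0, 0.5), (0.0, 0.1, 0.5)]  # denser: object seen close up
far_view = [(0.0, 0.0, 0.5)]                    # sparser: object seen far away
sample = augment_capture(background, near_view, far_view, (5.0, 0.0), (40.0, 0.0))
print(len(sample))   # → 4
```

Using distance-specific representations matters because LIDAR returns thin out with range, so a detector trained only on close-up pastes would not generalize to distant objects.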
-
Publication number: 20240273742
Abstract: Disclosed are systems, apparatuses, processes, and computer-readable media for processing image data. For example, a process can include obtaining segmentation information associated with an image of a scene, the image including a plurality of pixels having a resolution, and obtaining depth information associated with one or more objects in the scene. A plurality of features can be generated corresponding to the plurality of pixels, wherein each feature of the plurality of features corresponds to a particular pixel of the plurality of pixels, and wherein each feature includes respective segmentation information of the particular pixel and respective depth information of the particular pixel. The plurality of features can be processed to generate a dense depth output corresponding to the image.
Type: Application
Filed: February 6, 2023
Publication date: August 15, 2024
Inventors: Debasmit Das, Varun Ravi Kumar, Shubhankar Mangesh Borse, Senthil Kumar Yogamani
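The per-pixel features above pair a segmentation label with a depth value, which are then processed into a dense depth output. As a toy sketch of that idea, the function below fills pixels lacking a depth measurement with the mean depth of their segment; the real method uses learned processing, so this averaging stand-in is purely an assumption.

```python
# Toy densification: each pixel carries (segment label, optional depth);
# pixels with no measurement borrow the mean depth of their segment.

def densify(seg, sparse_depth):
    """seg and sparse_depth are equal-length flat lists; None marks a
    pixel with no depth measurement."""
    # Accumulate mean measured depth per segment label.
    totals = {}
    for label, d in zip(seg, sparse_depth):
        if d is not None:
            s, n = totals.get(label, (0.0, 0))
            totals[label] = (s + d, n + 1)
    means = {label: s / n for label, (s, n) in totals.items()}
    # Dense output: keep measurements, fill gaps from the segment mean.
    return [d if d is not None else means.get(label, 0.0)
            for label, d in zip(seg, sparse_depth)]

seg = [0, 0, 0, 1, 1]
depth = [2.0, None, 4.0, None, 8.0]
print(densify(seg, depth))   # → [2.0, 3.0, 4.0, 8.0, 8.0]
```

The segmentation label acts as a hint about which measured pixels are relevant to an unmeasured one, which is the intuition behind combining the two modalities per pixel.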
-
Publication number: 20240249530
Abstract: Techniques and systems are provided for processing sensor data. For instance, a process can include obtaining first sensor data of an environment, wherein the first sensor data includes a representation of a first object occluding a second object; obtaining second sensor data of the environment, wherein the second sensor data includes points associated with the first object and points associated with the second object; generating estimated segment data from the first sensor data, wherein the estimated segment data includes a first segment corresponding to the first object; matching points associated with the first object to the first segment; and deemphasizing points associated with the second object based on matching the points associated with the first object to the first segment.
Type: Application
Filed: January 19, 2023
Publication date: July 25, 2024
Inventors: Varun Ravi Kumar, Senthil Kumar Yogamani, Shubhankar Mangesh Borse
-
Publication number: 20240249527
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method is provided that includes receiving sensor data from a plurality of sensors on a vehicle and determining a three-dimensional representation of an area surrounding the vehicle by mapping the sensor data onto a three-dimensional surface. The plurality of sensors may include at least one perspective view sensor and at least one top view sensor, and the three-dimensional surface may include sensor data from the at least one perspective view sensor and sensor data from the at least one top view sensor. The method may further include determining, with a machine learning model, one or more characteristics of the area surrounding the vehicle based on the three-dimensional representation. Other aspects and features are also claimed and described.
Type: Application
Filed: January 24, 2023
Publication date: July 25, 2024
Inventors: Balaji Shankar Balachandran, Varun Ravi Kumar, Senthil Kumar Yogamani
-
Publication number: 20240221384
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method of image processing includes receiving first and second image data from first and second cameras having different lens types. A first field of view of the second image data overlaps at least a portion of a second field of view of the first image data. The method further includes determining a point in space based on the first image data and the second image data and calculating a distance between the first camera and the point in space based on the lens type of the first camera and the lens type of the second camera. Other aspects and features are also claimed and described.
Type: Application
Filed: November 13, 2023
Publication date: July 4, 2024
Inventors: Louis Joseph Kerofsky, Senthil Kumar Yogamani, Madhumitha Sakthi
-
Publication number: 20240221132
Abstract: This disclosure provides systems, methods, and devices for vehicle driving assistance systems that support image processing. In a first aspect, a method of image processing includes receiving first image data from a first image sensor of a first camera; receiving second image data from a second image sensor of a second camera, wherein a first field of view of the second image data overlaps at least a portion of a second field of view of the first image data; generating, based on the first image data and the second image data, a bird's eye view composite image; detecting one or more holes in the bird's eye view composite image; and filling the one or more holes in the bird's eye view composite image using one or more rays of the first camera. Other aspects and features are also claimed and described.
Type: Application
Filed: November 2, 2023
Publication date: July 4, 2024
Inventors: Louis Joseph Kerofsky, Kuan-Ting Shen, Madhumitha Sakthi, Senthil Kumar Yogamani
-
Publication number: 20240153249
Abstract: This disclosure provides systems, methods, and devices for image signal processing that support training object recognition models. In a first aspect, a method of image processing includes training a first modality imaging system; receiving time-synchronized first input data samples and second input data samples from the first modality imaging system and a second modality imaging system, respectively; processing the first input data samples in the first modality imaging system to generate first output; processing the second input data samples in the second modality imaging system to generate second output; and training the second modality imaging system based on the first output and the second output. Other aspects and features are also claimed and described.
Type: Application
Filed: September 14, 2023
Publication date: May 9, 2024
Inventors: Shubhankar Mangesh Borse, Marvin Richard Klingner, Varun Ravi Kumar, Senthil Kumar Yogamani, Fatih Murat Porikli
-
Publication number: 20240095937
Abstract: Techniques and systems are provided for generating depth information for an image. For instance, a process can include obtaining one or more images of an environment. The process can further include generating a set of features for the one or more images. The process can also include combining the set of features with one or more distance maps to generate combined feature distance information, wherein the one or more distance maps indicate distances based on relative height above a ground level. The process can further include generating depth information of the environment based on the combined feature distance information, and outputting the depth information of the environment.
Type: Application
Filed: September 20, 2022
Publication date: March 21, 2024
Inventors: David Unger, Senthil Kumar Yogamani, Varun Ravi Kumar
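The distance maps above encode a geometric prior: under a flat-ground assumption, how far below the horizon a pixel row sits implies a distance to the ground point it sees. The sketch below attaches such a prior to per-row features; the camera parameters (mounting height, focal length, horizon row) and the simple pinhole formula are illustrative assumptions.

```python
# Sketch of a height-based distance prior: rows further below the
# horizon see nearer ground, via distance = cam_height * focal / drop.

def ground_distance(row, horizon_row, cam_height=1.5, focal=800.0):
    """Distance to the ground point seen at an image row below the horizon."""
    drop = row - horizon_row          # pixels below the horizon line
    if drop <= 0:
        return float('inf')           # at or above the horizon: no ground hit
    return cam_height * focal / drop

def attach_distance_prior(features, horizon_row):
    """features: one feature list per image row; append the prior to each."""
    return [feats + [ground_distance(row, horizon_row)]
            for row, feats in enumerate(features)]
```

A depth network consuming such combined feature-distance inputs starts from a plausible geometric estimate instead of having to learn scale from appearance alone.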
-
Publication number: 20240070541
Abstract: Techniques and systems are provided for training a machine learning (ML) model. A technique can include generating a first set of features for objects in images, predicting image feature labels for the first set of features, comparing the predicted image feature labels to ground truth image feature labels to evaluate a first loss function, performing a perspective transform on the first set of features to generate bird's eye view (BEV) projected image features, combining the BEV projected image features and a first set of flattened features to generate combined image features, generating a segmented BEV map of the environment based on the combined image features, comparing the segmented BEV map to a ground truth segmented BEV map to evaluate a second loss function, and training the ML model for generation of segmented BEV maps based on the evaluated first loss function and the evaluated second loss function.
Type: Application
Filed: August 4, 2023
Publication date: February 29, 2024
Inventors: Shubhankar Mangesh Borse, Varun Ravi Kumar, David Unger, Senthil Kumar Yogamani, Fatih Murat Porikli
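Training on the two evaluated losses above amounts to minimising a combined objective. The sketch below shows one way to combine them; the use of cross-entropy for both terms and the fixed weighting are assumptions, and a real pipeline would backpropagate through the ML model rather than just evaluate scalars.

```python
import math

def cross_entropy(pred, truth, eps=1e-9):
    """Mean negative log-probability assigned to the true (one-hot) classes."""
    hits = [math.log(p + eps) for p, t in zip(pred, truth) if t == 1]
    return -sum(hits) / max(1, len(hits))

def combined_loss(feature_pred, feature_truth, bev_pred, bev_truth,
                  bev_weight=1.0):
    """First loss: predicted image feature labels vs. ground truth.
    Second loss: segmented BEV map vs. ground truth.
    Training minimises their weighted sum."""
    loss_image = cross_entropy(feature_pred, feature_truth)
    loss_bev = cross_entropy(bev_pred, bev_truth)
    return loss_image + bev_weight * loss_bev

# Perfect predictions give (near) zero combined loss.
print(abs(combined_loss([1.0, 0.0], [1, 0], [1.0], [1])) < 1e-6)   # → True
```

Supervising both the perspective-view features and the final BEV map gives the model a training signal at two stages, which typically stabilises learning of the projection step.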
-
Patent number: 10950127
Abstract: The invention relates to a method for operating a driver assistance system (2) of a motor vehicle (1), including a) Capturing an environment (4) of the motor vehicle (1) by a capturing device (3) of the driver assistance system (2); b) Detecting an accessible freespace (6) in the captured environment (4) by a computing device (5) of the driver assistance system (2); c) Detecting and Classifying at least one object (7a-7e) in the captured environment (4) that is located at a border (8) of the freespace (6) by a neural network (9) of the driver assistance system (2); d) Assigning a part (10a-10e) of the border (8) of the freespace (6) to the detected and classified object (7a-7e); and e) Categorizing a part (11a-11e) of the freespace (6) adjacent to the part (10a-10e) of the border (8) that is assigned to the detected and classified object (7a-7e) in dependence upon the class of that classified object (7a-7e), so as to enable improved safety in driving.
Type: Grant
Filed: August 27, 2018
Date of Patent: March 16, 2021
Assignee: Connaught Electronics Ltd.
Inventors: Senthil Kumar Yogamani, Sunil Chandra
-
Publication number: 20190080604
Abstract: The invention relates to a method for operating a driver assistance system (2) of a motor vehicle (1), including a) Capturing an environment (4) of the motor vehicle (1) by a capturing device (3) of the driver assistance system (2); b) Detecting an accessible freespace (6) in the captured environment (4) by a computing device (5) of the driver assistance system (2); c) Detecting and Classifying at least one object (7a-7e) in the captured environment (4) that is located at a border (8) of the freespace (6) by a neural network (9) of the driver assistance system (2); d) Assigning a part (10a-10e) of the border (8) of the freespace (6) to the detected and classified object (7a-7e); and e) Categorizing a part (11a-11e) of the freespace (6) adjacent to the part (10a-10e) of the border (8) that is assigned to the detected and classified object (7a-7e) in dependence upon the class of that classified object (7a-7e), so as to enable improved safety in driving.
Type: Application
Filed: August 27, 2018
Publication date: March 14, 2019
Applicant: Connaught Electronics Ltd.
Inventors: Senthil Kumar Yogamani, Sunil Chandra