Patents by Inventor Apurbaa MALLIK
Apurbaa MALLIK has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240053487
Abstract: Systems and methods for transforming autonomous aerial vehicle sensor data between platforms are disclosed herein. An example method can include receiving, by a UAV, vehicle radar data and radar calibration data from a vehicle, as well as location information for a location of the vehicle, determining a simulated UAV at the location, establishing an orientation for a simulated radar on a bottom of the simulated UAV, determining a height for the simulated UAV to match a field of view of the simulated radar, performing a geometrical transformation to convert the vehicle radar data set into a UAV perspective, converting the vehicle radar data into a vehicle coordinate frame using the radar calibration data and vehicle global positioning system (GPS) coordinates, converting the vehicle coordinate frame from a global frame into a UAV coordinate frame using UAV GPS coordinates, and converting the UAV coordinate frame into a simulated radar sensor frame.
Type: Application
Filed: August 11, 2022
Publication date: February 15, 2024
Applicant: Ford Global Technologies, LLC
Inventors: Steven Chao, Ganesh Kumar, Apurbaa Mallik
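The chain of coordinate-frame conversions in this abstract can be illustrated with a small sketch. The function names, the yaw-only rotations, and the shared local frame derived from the GPS fixes are assumptions made for illustration, not details taken from the patent:

```python
import numpy as np

def yaw_rotation(theta):
    """Rotation about the vertical axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def vehicle_to_uav_frame(points_vehicle, vehicle_pose, uav_pose):
    """Re-express radar points (N x 3, vehicle frame) in a UAV frame.

    Each pose is (position (3,), yaw); both positions are assumed to
    share one global frame, e.g. a local frame derived from GPS fixes.
    """
    v_pos, v_yaw = vehicle_pose
    u_pos, u_yaw = uav_pose
    # Vehicle frame -> global frame.
    pts_global = points_vehicle @ yaw_rotation(v_yaw).T + v_pos
    # Global frame -> UAV frame (inverse of the UAV pose).
    return (pts_global - u_pos) @ yaw_rotation(u_yaw)
```

A final fixed transform from the UAV body frame to the simulated radar's sensor frame (its mounting orientation on the bottom of the UAV) would be applied the same way.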
-
Patent number: 11887396
Abstract: A method for identifying a hand pose in a vehicle involves identifying a hand image for a hand in the vehicle by extraction from a vehicle image of the vehicle. A plurality of contextual images of the hand image is obtained based on a single point. Each of the plurality of contextual images is processed using one or more layers of a neural network to obtain a plurality of contextual features associated with the hand image. A hand pose associated with the hand is identified based on the plurality of contextual features using a classifier model.
Type: Grant
Filed: August 27, 2019
Date of Patent: January 30, 2024
Assignee: MERCEDES-BENZ GROUP AG
Inventors: Hisham Cholakkal, Sanath Narayan, Arjun Jain, Shuaib Ahmed, Amit Bhatkal, Mallikarjun Byrasandra Ramalinga Reddy, Apurbaa Mallik
-
Publication number: 20240005637
Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to receive an image frame from a camera, generate a feature map from the image frame, generate a depth map from the feature map, classify an object in the image frame based on the feature map, and estimate a distance to the object based on the depth map and based on an input used to generate the feature map.
Type: Application
Filed: June 30, 2022
Publication date: January 4, 2024
Applicant: Ford Global Technologies, LLC
Inventors: Zafar Iqbal, Hitha Revalla, Apurbaa Mallik, Gurjeet Singh, Vijay Nagasamy
-
Patent number: 11772656
Abstract: A system includes a computer including a processor and a memory, the memory storing instructions executable by the processor to generate a synthetic image by adjusting respective color values of one or more pixels of a reference image based on a specified meteorological optical range from a vehicle sensor to simulated fog, and input the synthetic image to a machine learning program to train the machine learning program to identify a meteorological optical range from the vehicle sensor to actual fog.
Type: Grant
Filed: July 8, 2020
Date of Patent: October 3, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Apurbaa Mallik, Kaushik Balakrishnan, Vijay Nagasamy, Praveen Narayanan, Sowndarya Sundar
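One common way to realize range-dependent color adjustment of this kind is Koschmieder's fog model, which blends each pixel toward an airlight color using a transmission term derived from the meteorological optical range (MOR). This is a standard model offered purely as an illustration, not the patented method, and the per-pixel depth map is an assumed input:

```python
import numpy as np

def add_fog(image, depth, mor, airlight=255.0):
    """Blend a reference image toward fog, given per-pixel depth (meters).

    Koschmieder's law: transmission t = exp(-beta * d), with extinction
    coefficient beta = 3.912 / MOR (MOR is defined by a 5% contrast
    threshold, and -ln(0.05) ~= 3.912).
    """
    beta = 3.912 / mor
    t = np.exp(-beta * depth)        # per-pixel transmission in [0, 1]
    if image.ndim == 3:              # broadcast over color channels
        t = t[..., None]
    return image * t + airlight * (1.0 - t)
```

Pixels at zero range keep their reference color; pixels far beyond the MOR converge to the airlight value.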
-
Publication number: 20230186637
Abstract: The disclosure is generally directed to systems and methods for inference quality determination of a deep neural network (DNN) without requiring ground truth information, for use in driver-assisted vehicles, including receiving an image frame from a source; applying a normal inference DNN model to the image frame to produce a first inference with a first bounding box; applying a deep inference DNN model to a plurality of filtered versions of the image frame to produce a plurality of deep inferences with a plurality of bounding boxes; comparing the plurality of bounding boxes to identify a cluster condition of the plurality of bounding boxes; and determining an inference quality of the image frame of the normal inference DNN model as a function of the cluster condition.
Type: Application
Filed: December 10, 2021
Publication date: June 15, 2023
Applicant: Ford Global Technologies, LLC
Inventors: Gurjeet Singh, Apurbaa Mallik, Zafar Iqbal, Hitha Revalla, Steven Chao, Vijay Nagasamy
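Comparing bounding boxes for a cluster condition can be sketched with a pairwise intersection-over-union check. The threshold and the all-pairs criterion here are illustrative assumptions, not the patent's definition of the cluster condition:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def boxes_clustered(boxes, threshold=0.5):
    """True if every pair of boxes overlaps above the IoU threshold,
    i.e. the detections agree tightly (suggesting a reliable inference)."""
    return all(iou(a, b) >= threshold
               for i, a in enumerate(boxes)
               for b in boxes[i + 1:])
```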
-
Publication number: 20230123899
Abstract: A computer includes a processor and a memory storing instructions executable by the processor to receive image data from a camera, generate a depth map from the image data, detect an object in the image data, apply a bounding box circumscribing the object to the depth map, mask the depth map by setting depth values for pixels in the bounding box in the depth map to a depth value of a closest pixel in the bounding box, and determine a distance to the object based on the masked depth map. The closest pixel is closest to the camera of the pixels in the bounding box.
Type: Application
Filed: October 18, 2021
Publication date: April 20, 2023
Applicant: Ford Global Technologies, LLC
Inventors: Zafar Iqbal, Hitha Revalla, Apurbaa Mallik, Gurjeet Singh, Vijay Nagasamy
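The masking step described in this abstract is straightforward to sketch; the function name and the (x1, y1, x2, y2) box convention are assumed for illustration:

```python
import numpy as np

def mask_depth_to_closest(depth, box):
    """Set every depth value inside the box to the box's minimum depth.

    `depth` is an H x W map of distances from the camera; `box` is
    (x1, y1, x2, y2) in pixel coordinates. The minimum depth inside the
    box belongs to the pixel closest to the camera, matching the
    masking step the abstract describes.
    """
    x1, y1, x2, y2 = box
    masked = depth.copy()
    masked[y1:y2, x1:x2] = depth[y1:y2, x1:x2].min()
    return masked
```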
-
Patent number: 11604946
Abstract: A training system for a deep neural network and a method of training are disclosed. The system and/or method may comprise: receiving, from an eye-tracking system associated with a sensor, an image frame captured while an operator is controlling a vehicle; receiving, from the eye-tracking system, eyeball gaze data corresponding to the image frame; and iteratively training the deep neural network to determine an object of interest depicted within the image frame based on the eyeball gaze data. The deep neural network generates at least one feature map and determines a proposed region corresponding to the object of interest within the at least one feature map based on the eyeball gaze data.
Type: Grant
Filed: May 6, 2020
Date of Patent: March 14, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Apurbaa Mallik, Vijay Nagasamy, Aniruddh Ravindran
-
Publication number: 20220388535
Abstract: A first image can be acquired from a first sensor included in a vehicle and input to a deep neural network to determine a first bounding box for a first object. A second image can be acquired from the first sensor. Latitudinal and longitudinal motion data can be input from second sensors included in the vehicle, corresponding to the time between inputting the first image and inputting the second image. A second bounding box can be determined by translating the first bounding box based on the latitudinal and longitudinal motion data. The second image can be cropped based on the second bounding box. The cropped second image can be input to the deep neural network to detect a second object. The first image, the first bounding box, the second image, and the second bounding box can be output.
Type: Application
Filed: June 3, 2021
Publication date: December 8, 2022
Applicant: Ford Global Technologies, LLC
Inventors: Gurjeet Singh, Apurbaa Mallik, Rohun Atluri, Vijay Nagasamy, Praveen Narayanan
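The box-translation and cropping steps might look like the sketch below. Converting metric vehicle motion into a pixel offset is glossed over (it would require the camera calibration), and all names are illustrative rather than from the patent:

```python
import numpy as np

def translate_box(box, dx, dy, width, height):
    """Shift a bounding box (x1, y1, x2, y2) by (dx, dy) pixels,
    clipped to the image bounds. The pixel offset corresponding to
    the vehicle's latitudinal/longitudinal motion is assumed given.
    """
    x1, y1, x2, y2 = box
    return (max(0, min(width,  x1 + dx)), max(0, min(height, y1 + dy)),
            max(0, min(width,  x2 + dx)), max(0, min(height, y2 + dy)))

def crop(image, box):
    """Crop an H x W (x C) image array to the box."""
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]
```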
-
Publication number: 20220188621
Abstract: A system comprises a computer including a processor and a memory. The memory stores instructions executable by the processor to generate a low-level representation of the input source domain data; generate an embedding of the input source domain data; generate a high-level feature representation of features of the input source domain data; generate output target domain data in the target domain that includes semantics corresponding to the input source domain data by processing the high-level feature representation of the features of the input source domain data using a domain low-level decoder neural network layer that generates data from the target domain; and modify a loss function such that latent attributes corresponding to the embedding are selected from a same probability distribution.
Type: Application
Filed: December 10, 2020
Publication date: June 16, 2022
Applicant: Ford Global Technologies, LLC
Inventors: Praveen Narayanan, Nikita Jaipuria, Apurbaa Mallik, Punarjay Chakravarty, Ganesh Kumar
-
Publication number: 20220009498
Abstract: A system includes a computer including a processor and a memory, the memory storing instructions executable by the processor to generate a synthetic image by adjusting respective color values of one or more pixels of a reference image based on a specified meteorological optical range from a vehicle sensor to simulated fog, and input the synthetic image to a machine learning program to train the machine learning program to identify a meteorological optical range from the vehicle sensor to actual fog.
Type: Application
Filed: July 8, 2020
Publication date: January 13, 2022
Applicant: Ford Global Technologies, LLC
Inventors: Apurbaa Mallik, Kaushik Balakrishnan, Vijay Nagasamy, Praveen Narayanan, Sowndarya Sundar
-
Publication number: 20210350184
Abstract: A training system for a deep neural network and a method of training are disclosed. The system and/or method may comprise: receiving, from an eye-tracking system associated with a sensor, an image frame captured while an operator is controlling a vehicle; receiving, from the eye-tracking system, eyeball gaze data corresponding to the image frame; and iteratively training the deep neural network to determine an object of interest depicted within the image frame based on the eyeball gaze data. The deep neural network generates at least one feature map and determines a proposed region corresponding to the object of interest within the at least one feature map based on the eyeball gaze data.
Type: Application
Filed: May 6, 2020
Publication date: November 11, 2021
Applicant: Ford Global Technologies, LLC
Inventors: Apurbaa Mallik, Vijay Nagasamy, Aniruddh Ravindran
-
Publication number: 20210342579
Abstract: A method for identifying a hand pose in a vehicle involves identifying a hand image for a hand in the vehicle by extraction from a vehicle image of the vehicle. A plurality of contextual images of the hand image is obtained based on a single point. Each of the plurality of contextual images is processed using one or more layers of a neural network to obtain a plurality of contextual features associated with the hand image. A hand pose associated with the hand is identified based on the plurality of contextual features using a classifier model.
Type: Application
Filed: August 27, 2019
Publication date: November 4, 2021
Inventors: Hisham CHOLAKKAL, Sanath NARAYAN, Arjun JAIN, Shuaib AHMED, Amit BHATKAL, Mallikarjun BYRASANDRA RAMALINGA REDDY, Apurbaa MALLIK
-
Patent number: 10890444
Abstract: A method and system for estimating three-dimensional measurements of a physical object by utilizing readings from inertial sensors are provided. The method involves capturing, by a handheld unit, three-dimensional aspects of the physical object. The raw recordings are received from the inertial sensors and are used to develop a raw rotation matrix. The raw rotation matrix is subjected to low-pass filtering to obtain a processed rotation matrix constituted of filtered Euler angles, wherein coordinates from the processed rotation matrix are used to estimate the gravitational component along the three axes, leading to determination of acceleration values and further calculation of the measurement of each dimension of the physical object.
Type: Grant
Filed: May 13, 2016
Date of Patent: January 12, 2021
Assignee: Tata Consultancy Services Limited
Inventors: Apurbaa Mallik, Brojeshwar Bhowmick, Aniruddha Sinha, Aishwarya Visvanathan, Sudeb Das
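The filtering and gravity-removal steps can be illustrated with a minimal sketch, assuming a first-order low-pass filter and a known rotation matrix; the patent does not specify these particular choices:

```python
import numpy as np

def low_pass(angles, alpha=0.1):
    """First-order low-pass (exponential moving average) of a stream
    of Euler-angle samples (N x 3). A simple stand-in for the filtering
    step; the abstract does not name a particular filter design.
    """
    out = np.empty_like(angles)
    out[0] = angles[0]
    for i in range(1, len(angles)):
        out[i] = alpha * angles[i] + (1.0 - alpha) * out[i - 1]
    return out

def linear_acceleration(accel, rotation, g=9.81):
    """Subtract the gravity component, expressed in the sensor frame
    via the (filtered) rotation matrix, from a raw accelerometer
    reading, leaving the motion-induced acceleration."""
    gravity_sensor = rotation.T @ np.array([0.0, 0.0, g])
    return accel - gravity_sensor
```

Double-integrating the resulting accelerations over each captured stroke would yield the per-dimension measurements.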
-
Patent number: 10475231
Abstract: Methods and systems for change detection utilizing three-dimensional (3D) point-cloud processing are provided. The method includes detecting changes in a surface based on a surface-fitting approach with a locally weighted Moving Least Squares (MLS) approximation. The method includes acquiring and comparing the surface geometry of a reference point-cloud defining a reference surface and a template point-cloud defining a template surface at local regions or local surfaces using the surface-fitting approach. The method provides effective change detection for both rigid and non-rigid changes, reduces false detections due to the presence of noise, and is independent of factors such as texture or illumination of an object or scene being tracked for change detection.
Type: Grant
Filed: February 15, 2018
Date of Patent: November 12, 2019
Assignee: Tata Consultancy Services Limited
Inventors: Brojeshwar Bhowmick, Swapna Agarwal, Sanjana Sinha, Balamuralidhar Purushothaman, Apurbaa Mallik
-
Publication number: 20190080503
Abstract: Methods and systems for change detection utilizing three-dimensional (3D) point-cloud processing are provided. The method includes detecting changes in a surface based on a surface-fitting approach with a locally weighted Moving Least Squares (MLS) approximation. The method includes acquiring and comparing the surface geometry of a reference point-cloud defining a reference surface and a template point-cloud defining a template surface at local regions or local surfaces using the surface-fitting approach. The method provides effective change detection for both rigid and non-rigid changes, reduces false detections due to the presence of noise, and is independent of factors such as texture or illumination of an object or scene being tracked for change detection.
Type: Application
Filed: February 15, 2018
Publication date: March 14, 2019
Applicant: Tata Consultancy Services Limited
Inventors: Brojeshwar BHOWMICK, Swapna AGARWAL, Sanjana SINHA, Balamuralidhar PURUSHOTHAMAN, Apurbaa MALLIK
-
Patent number: 9865061
Abstract: Disclosed is a method and system for constructing a 3D structure. The system of the present disclosure comprises an image capturing unit for capturing images of an object. The system comprises a gyroscope, a magnetometer, and an accelerometer for determining extrinsic camera parameters, wherein the extrinsic camera parameters comprise a rotation and a translation of the images. Further, the system determines an internal calibration matrix once. The system uses the extrinsic camera parameters and the internal calibration matrix for determining a fundamental matrix. The system extracts features of the images for establishing point correspondences between the images. Further, the point correspondences are filtered using the fundamental matrix for generating filtered point correspondences. The filtered point correspondences are triangulated for determining 3D points representing the 3D structure. Further, the 3D structure may be optimized for eliminating reprojection errors associated with the 3D structure.
Type: Grant
Filed: September 23, 2014
Date of Patent: January 9, 2018
Assignee: Tata Consultancy Services Limited
Inventors: Brojeshwar Bhowmick, Apurbaa Mallik, Arindam Saha
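Deriving a fundamental matrix from known extrinsics and a single internal calibration matrix has a standard closed form, F = K^-T [t]_x R K^-1, which fits the pipeline the abstract describes; the sketch below assumes both views share the calibration matrix K, as the abstract's once-determined internal calibration suggests:

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_matrix(K, R, t):
    """F = K^-T [t]_x R K^-1 for two views sharing one internal
    calibration matrix K, with relative rotation R and translation t.
    Correspondences can then be filtered with the epipolar constraint
    x2^T F x1 = 0, as the abstract describes.
    """
    Kinv = np.linalg.inv(K)
    return Kinv.T @ skew(t) @ R @ Kinv
```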
-
Patent number: 9830736
Abstract: Disclosed is a method for segmenting a plurality of objects from a two-dimensional (2D) video captured through a depth camera and an RGB/G camera. The method comprises detecting camera motion in each 2D frame of the plurality of 2D frames from the 2D video and generating a first set of 2D frames without any camera motion. The method further comprises generating a plurality of cloud points for the first set of 2D frames corresponding to each pixel associated with a 2D frame in the first set of 2D frames. The method further comprises generating a 3D grid comprising a plurality of voxels. The method further comprises determining valid voxels and invalid voxels in the 3D grid. Further, a 3D connected component labeling technique is applied to the set of valid voxels to segment the plurality of objects in the 2D video.
Type: Grant
Filed: January 22, 2014
Date of Patent: November 28, 2017
Assignee: TATA CONSULTANCY SERVICES LIMITED
Inventors: Aniruddha Sinha, Tanushyam Chattopadhyay, Sangheeta Roy, Apurbaa Mallik
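The final step, 3D connected-component labeling over the valid voxels, can be sketched as a breadth-first search on a boolean occupancy grid; 6-connectivity is an illustrative choice, not something specified in the abstract:

```python
import numpy as np
from collections import deque

def label_voxels(occupied):
    """3D connected-component labeling (6-connectivity) of a boolean
    voxel grid via breadth-first search. Returns an int grid where 0
    is background and 1..N index the components (segmented objects)."""
    labels = np.zeros(occupied.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(occupied)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (x + dx, y + dy, z + dz)
                if all(0 <= c < s for c, s in zip(n, occupied.shape)) \
                        and occupied[n] and not labels[n]:
                    labels[n] = current
                    queue.append(n)
    return labels
```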
-
Publication number: 20160334210
Abstract: A method and system for estimating three-dimensional measurements of a physical object by utilizing readings from inertial sensors are provided. The method involves capturing, by a handheld unit, three-dimensional aspects of the physical object. The raw recordings are received from the inertial sensors and are used to develop a raw rotation matrix. The raw rotation matrix is subjected to low-pass filtering to obtain a processed rotation matrix constituted of filtered Euler angles, wherein coordinates from the processed rotation matrix are used to estimate the gravitational component along the three axes, leading to determination of acceleration values and further calculation of the measurement of each dimension of the physical object.
Type: Application
Filed: May 13, 2016
Publication date: November 17, 2016
Applicant: Tata Consultancy Services Limited
Inventors: Apurbaa MALLIK, Brojeshwar BHOWMICK, Aniruddha SINHA, Aishwarya VISVANATHAN, Sudeb DAS
-
Publication number: 20160035124
Abstract: Disclosed is a method for segmenting a plurality of objects from a two-dimensional (2D) video captured through a depth camera and an RGB/G camera. The method comprises detecting camera motion in each 2D frame of the plurality of 2D frames from the 2D video and generating a first set of 2D frames without any camera motion. The method further comprises generating a plurality of cloud points for the first set of 2D frames corresponding to each pixel associated with a 2D frame in the first set of 2D frames. The method further comprises generating a 3D grid comprising a plurality of voxels. The method further comprises determining valid voxels and invalid voxels in the 3D grid. Further, a 3D connected component labeling technique is applied to the set of valid voxels to segment the plurality of objects in the 2D video.
Type: Application
Filed: January 22, 2014
Publication date: February 4, 2016
Inventors: Aniruddha SINHA, Tanushyam CHATTOPADHYAY, Sangheeta ROY, Apurbaa MALLIK
-
Publication number: 20150371396
Abstract: Disclosed is a method and system for constructing a 3D structure. The system of the present disclosure comprises an image capturing unit for capturing images of an object. The system comprises a gyroscope, a magnetometer, and an accelerometer for determining extrinsic camera parameters, wherein the extrinsic camera parameters comprise a rotation and a translation of the images. Further, the system determines an internal calibration matrix once. The system uses the extrinsic camera parameters and the internal calibration matrix for determining a fundamental matrix. The system extracts features of the images for establishing point correspondences between the images. Further, the point correspondences are filtered using the fundamental matrix for generating filtered point correspondences. The filtered point correspondences are triangulated for determining 3D points representing the 3D structure. Further, the 3D structure may be optimized for eliminating reprojection errors associated with the 3D structure.
Type: Application
Filed: September 23, 2014
Publication date: December 24, 2015
Inventors: Brojeshwar BHOWMICK, Apurbaa MALLIK, Arindam SAHA