Patents by Inventor Apurbaa MALLIK

Apurbaa MALLIK has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240053487
    Abstract: Systems and methods for transforming autonomous aerial vehicle sensor data between platforms are disclosed herein. An example method can include receiving, by a UAV, vehicle radar data and radar calibration data from a vehicle, as well as location information for a location of the vehicle, determining a simulated UAV at the location, establishing an orientation for a simulated radar on a bottom of the simulated UAV, determining a height for the simulated UAV to match a field of view of the simulated radar, performing a geometrical transformation to convert the vehicle radar data set into a UAV perspective, converting the vehicle radar data into a vehicle coordinate frame using the radar calibration data and vehicle global positioning system (GPS) coordinates, converting the vehicle coordinate frame from a global frame into a UAV coordinate frame using UAV GPS coordinates, and converting the UAV coordinate frame into a simulated radar sensor frame.
    Type: Application
    Filed: August 11, 2022
    Publication date: February 15, 2024
    Applicant: Ford Global Technologies, LLC
    Inventors: Steven Chao, Ganesh Kumar, Apurbaa Mallik
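    The chain of frame conversions described in the abstract (radar frame → vehicle frame → global frame → UAV frame) and the height-matching step can be sketched with plain rotation matrices. This is an illustrative sketch only: it assumes yaw-only rotations, a radar mounted at the vehicle origin, and a nadir-pointing simulated radar with a conical field of view; none of these function names come from the patent.

```python
import numpy as np

def yaw_rotation(theta):
    """Rotation about the z-axis (yaw-only, for brevity)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def vehicle_radar_to_uav_frame(p_radar, vehicle_pos, vehicle_yaw,
                               uav_pos, uav_yaw):
    """Radar frame -> vehicle frame -> global frame -> UAV frame.
    The radar is assumed to sit at the vehicle origin, so the first
    conversion is the identity."""
    p_global = yaw_rotation(vehicle_yaw) @ p_radar + vehicle_pos
    return yaw_rotation(uav_yaw).T @ (p_global - uav_pos)

def simulated_uav_height(ground_radius, half_fov):
    """Altitude at which a downward-facing radar with half field of
    view `half_fov` (radians) covers a ground circle of `ground_radius`."""
    return ground_radius / np.tan(half_fov)
```

    For example, a point one meter ahead of the vehicle, seen from a simulated UAV hovering ten meters directly above it, lands at (1, 0, -10) in the UAV frame.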
  • Patent number: 11887396
    Abstract: A method for identifying a hand pose in a vehicle involves identifying a hand image for a hand in the vehicle by extraction from a vehicle image of the vehicle. A plurality of contextual images of the hand image is obtained based on a single point associated with the hand image. Each of the plurality of contextual images is processed using one or more layers of a neural network to obtain a plurality of contextual features associated with the hand image. A hand pose associated with the hand is identified based on the plurality of contextual features using a classifier model.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: January 30, 2024
    Assignee: MERCEDES-BENZ GROUP AG
    Inventors: Hisham Cholakkal, Sanath Narayan, Arjun Jain, Shuaib Ahmed, Amit Bhatkal, Mallikarjun Byrasandra Ramalinga Reddy, Apurbaa Mallik
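    The "plurality of contextual images" obtained around a single point can be pictured as a stack of crops of increasing size centered on that point, each giving the network more surrounding context. The square patches, the specific sizes, and the border clamping below are illustrative assumptions, not details from the patent.

```python
import numpy as np

def contextual_crops(image, center, sizes):
    """Extract square patches of increasing size around a single
    point of the hand image, clamped to the image borders."""
    cy, cx = center
    height, width = image.shape[:2]
    crops = []
    for s in sizes:
        half = s // 2
        y0, y1 = max(0, cy - half), min(height, cy + half)
        x0, x1 = max(0, cx - half), min(width, cx + half)
        crops.append(image[y0:y1, x0:x1])
    return crops
```

    Each crop would then be fed through the shared neural-network layers to produce one contextual feature per scale.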
  • Publication number: 20240005637
    Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to receive an image frame from a camera, generate a feature map from the image frame, generate a depth map from the feature map, classify an object in the image frame based on the feature map, and estimate a distance to the object based on the depth map and on an input used to generate the feature map.
    Type: Application
    Filed: June 30, 2022
    Publication date: January 4, 2024
    Applicant: Ford Global Technologies, LLC
    Inventors: Zafar Iqbal, Hitha Revalla, Apurbaa Mallik, Gurjeet Singh, Vijay Nagasamy
  • Patent number: 11772656
    Abstract: A system includes a computer including a processor and a memory, the memory storing instructions executable by the processor to generate a synthetic image by adjusting respective color values of one or more pixels of a reference image based on a specified meteorological optical range from a vehicle sensor to simulated fog, and input the synthetic image to a machine learning program to train the machine learning program to identify a meteorological optical range from the vehicle sensor to actual fog.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: October 3, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Apurbaa Mallik, Kaushik Balakrishnan, Vijay Nagasamy, Praveen Narayanan, Sowndarya Sundar
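    One standard way to realize the color adjustment the abstract describes is the Koschmieder atmospheric-scattering model, with the extinction coefficient tied to the meteorological optical range (MOR) through the 2% contrast threshold (beta = 3.912 / MOR). The patent does not name its fog model, so this is an illustrative sketch; a per-pixel depth map and a uniform airlight color are assumed.

```python
import numpy as np

def add_synthetic_fog(image, depth, mor, airlight=255.0):
    """Blend each pixel toward the airlight color according to its
    transmittance t = exp(-beta * depth), with beta = 3.912 / MOR.
    `image` is a float array; `depth` holds per-pixel range in meters."""
    beta = 3.912 / mor          # extinction coefficient from the MOR
    t = np.exp(-beta * depth)   # per-pixel transmittance
    return image * t + airlight * (1.0 - t)
```

    Pixels at zero depth are unchanged, while distant pixels fade toward the airlight color, which is exactly the depth-dependent whitening a network must learn to invert when estimating the MOR of real fog.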
  • Publication number: 20230186637
    Abstract: The disclosure is generally directed to systems and methods for inference quality determination of a deep neural network (DNN) without requiring ground truth information, for use in driver-assisted vehicles, including receiving an image frame from a source; applying a normal inference DNN model to the image frame to produce a first inference with a first bounding box; applying a deep inference DNN model to a plurality of filtered versions of the image frame to produce a plurality of deep inferences with a plurality of bounding boxes; comparing the plurality of bounding boxes to identify a cluster condition of the plurality of bounding boxes; and determining an inference quality of the image frame of the normal inference DNN model as a function of the cluster condition.
    Type: Application
    Filed: December 10, 2021
    Publication date: June 15, 2023
    Applicant: Ford Global Technologies, LLC
    Inventors: Gurjeet Singh, Apurbaa Mallik, Zafar Iqbal, Hitha Revalla, Steven Chao, Vijay Nagasamy
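    The "cluster condition" over bounding boxes can be illustrated with an intersection-over-union (IoU) check: if boxes produced from the filtered versions of the frame agree with the normal-inference box, confidence in the inference is high. The 0.5 threshold and the fraction-based score below are hypothetical stand-ins for the patent's actual criterion.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def inference_quality(normal_box, deep_boxes, iou_thresh=0.5):
    """Fraction of deep-inference boxes that cluster around the
    normal-inference box (a hypothetical cluster criterion)."""
    hits = sum(1 for b in deep_boxes if iou(normal_box, b) >= iou_thresh)
    return hits / len(deep_boxes)
```

    A score near 1 means the deep inferences tightly cluster around the normal inference; a low score flags a frame whose normal inference may be unreliable.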
  • Publication number: 20230123899
    Abstract: A computer includes a processor and a memory storing instructions executable by the processor to receive image data from a camera, generate a depth map from the image data, detect an object in the image data, apply a bounding box circumscribing the object to the depth map, mask the depth map by setting depth values for pixels in the bounding box in the depth map to a depth value of a closest pixel in the bounding box, and determine a distance to the object based on the masked depth map. The closest pixel is the pixel in the bounding box that is closest to the camera.
    Type: Application
    Filed: October 18, 2021
    Publication date: April 20, 2023
    Applicant: Ford Global Technologies, LLC
    Inventors: Zafar Iqbal, Hitha Revalla, Apurbaa Mallik, Gurjeet Singh, Vijay Nagasamy
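    The masking step described above is short enough to sketch directly with numpy; the (x0, y0, x1, y1) box convention and returning the closest depth as the object distance are assumptions about details the abstract leaves open.

```python
import numpy as np

def distance_to_object(depth_map, bbox):
    """Set every depth value inside the bounding box to the depth of
    the pixel closest to the camera, then report that depth as the
    distance to the object."""
    x0, y0, x1, y1 = bbox
    closest = depth_map[y0:y1, x0:x1].min()  # pixel nearest the camera
    masked = depth_map.copy()
    masked[y0:y1, x0:x1] = closest           # masking step from the abstract
    return masked, closest
```

    Flattening the box to its nearest depth guards against the depth map bleeding background values through gaps in the object's silhouette.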
  • Patent number: 11604946
    Abstract: A training system for a deep neural network and method of training is disclosed. The system and/or method may comprise: receiving, from an eye-tracking system associated with a sensor, an image frame captured while an operator is controlling a vehicle; receiving, from the eye-tracking system, eyeball gaze data corresponding to the image frame; and iteratively training the deep neural network to determine an object of interest depicted within the image frame based on the eyeball gaze data. The deep neural network generates at least one feature map and determines a proposed region corresponding to the object of interest within the at least one feature map based on the eyeball gaze data.
    Type: Grant
    Filed: May 6, 2020
    Date of Patent: March 14, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Apurbaa Mallik, Vijay Nagasamy, Aniruddh Ravindran
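    Mapping an eyeball-gaze fixation in image coordinates onto a proposed region of a downsampled feature map reduces to a scale-and-window operation. The 3x3 window, integer rounding, and coordinate conventions below are illustrative assumptions rather than the patent's region-proposal mechanism.

```python
def gaze_to_proposal(gaze_xy, image_size, feat_size, box=3):
    """Map a gaze fixation (x, y) in image coordinates to a small
    proposed-region window on a feature map of size feat_size."""
    gx, gy = gaze_xy
    img_w, img_h = image_size
    feat_w, feat_h = feat_size
    fx = int(gx * feat_w / img_w)   # scale gaze to feature-map cells
    fy = int(gy * feat_h / img_h)
    half = box // 2
    x0, y0 = max(0, fx - half), max(0, fy - half)
    x1 = min(feat_w, fx + half + 1)
    y1 = min(feat_h, fy + half + 1)
    return (x0, y0, x1, y1)
```

    During training, the proposed window would steer the network's attention toward the object the operator was actually looking at.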
  • Publication number: 20220388535
    Abstract: A first image can be acquired from a first sensor included in a vehicle and input to a deep neural network to determine a first bounding box for a first object. A second image can be acquired from the first sensor. Latitudinal and longitudinal motion data can be input from second sensors included in the vehicle, corresponding to the time between inputting the first image and inputting the second image. A second bounding box can be determined by translating the first bounding box based on the latitudinal and longitudinal motion data. The second image can be cropped based on the second bounding box. The cropped second image can be input to the deep neural network to detect a second object. The first image, the first bounding box, the second image, and the second bounding box can be output.
    Type: Application
    Filed: June 3, 2021
    Publication date: December 8, 2022
    Applicant: Ford Global Technologies, LLC
    Inventors: Gurjeet Singh, Apurbaa Mallik, Rohun Atluri, Vijay Nagasamy, Praveen Narayanan
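    The box-translation step can be sketched with a fixed meters-to-pixels scale. A real system would project the motion through the camera model, so the flat `px_per_m` factor here is a deliberate simplification and not the patent's geometry.

```python
def translate_box(box, dlon_m, dlat_m, px_per_m):
    """Shift a bounding box (x0, y0, x1, y1) by the longitudinal and
    latitudinal motion accumulated between the two image frames,
    converted to pixels with a fixed scale factor."""
    dx = dlon_m * px_per_m
    dy = dlat_m * px_per_m
    return (box[0] + dx, box[1] + dy, box[2] + dx, box[3] + dy)
```

    The translated box then defines the crop of the second image that is handed back to the deep neural network for re-detection.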
  • Publication number: 20220188621
    Abstract: A system comprises a computer including a processor and a memory. The memory stores instructions executable by the processor to cause the processor to: generate a low-level representation of the input source domain data; generate an embedding of the input source domain data; generate a high-level feature representation of features of the input source domain data; generate output target domain data in the target domain that includes semantics corresponding to the input source domain data by processing the high-level feature representation of the features of the input source domain data using a domain low-level decoder neural network layer that generates data in the target domain; and modify a loss function such that latent attributes corresponding to the embedding are selected from a same probability distribution.
    Type: Application
    Filed: December 10, 2020
    Publication date: June 16, 2022
    Applicant: Ford Global Technologies, LLC
    Inventors: Praveen Narayanan, Nikita Jaipuria, Apurbaa Mallik, Punarjay Chakravarty, Ganesh Kumar
  • Publication number: 20220009498
    Abstract: A system includes a computer including a processor and a memory, the memory storing instructions executable by the processor to generate a synthetic image by adjusting respective color values of one or more pixels of a reference image based on a specified meteorological optical range from a vehicle sensor to simulated fog, and input the synthetic image to a machine learning program to train the machine learning program to identify a meteorological optical range from the vehicle sensor to actual fog.
    Type: Application
    Filed: July 8, 2020
    Publication date: January 13, 2022
    Applicant: Ford Global Technologies, LLC
    Inventors: Apurbaa Mallik, Kaushik Balakrishnan, Vijay Nagasamy, Praveen Narayanan, Sowndarya Sundar
  • Publication number: 20210350184
    Abstract: A training system for a deep neural network and method of training is disclosed. The system and/or method may comprise: receiving, from an eye-tracking system associated with a sensor, an image frame captured while an operator is controlling a vehicle; receiving, from the eye-tracking system, eyeball gaze data corresponding to the image frame; and iteratively training the deep neural network to determine an object of interest depicted within the image frame based on the eyeball gaze data. The deep neural network generates at least one feature map and determines a proposed region corresponding to the object of interest within the at least one feature map based on the eyeball gaze data.
    Type: Application
    Filed: May 6, 2020
    Publication date: November 11, 2021
    Applicant: Ford Global Technologies, LLC
    Inventors: Apurbaa Mallik, Vijay Nagasamy, Aniruddh Ravindran
  • Publication number: 20210342579
    Abstract: A method for identifying a hand pose in a vehicle involves identifying a hand image for a hand in the vehicle by extraction from a vehicle image of the vehicle. A plurality of contextual images of the hand image is obtained based on a single point associated with the hand image. Each of the plurality of contextual images is processed using one or more layers of a neural network to obtain a plurality of contextual features associated with the hand image. A hand pose associated with the hand is identified based on the plurality of contextual features using a classifier model.
    Type: Application
    Filed: August 27, 2019
    Publication date: November 4, 2021
    Inventors: Hisham CHOLAKKAL, Sanath NARAYAN, Arjun JAIN, Shuaib AHMED, Amit BHATKAL, Mallikarjun BYRASANDRA RAMALINGA REDDY, Apurbaa MALLIK
  • Patent number: 10890444
    Abstract: A method and system for estimating three-dimensional measurements of a physical object by utilizing readings from inertial sensors are provided. The method involves capturing, by a handheld unit, three-dimensional aspects of the physical object. The raw recordings are received from the inertial sensors and are used to develop a raw rotation matrix. The raw rotation matrix is subjected to low-pass filtering to obtain a processed rotation matrix constituted of filtered Euler angles, whose coordinates are used to estimate the gravitational component along the three axes, leading to determination of acceleration values and further calculation of the measurement of each dimension of the physical object.
    Type: Grant
    Filed: May 13, 2016
    Date of Patent: January 12, 2021
    Assignee: Tata Consultancy Services Limited
    Inventors: Apurbaa Mallik, Brojeshwar Bhowmick, Aniruddha Sinha, Aishwarya Visvanathan, Sudeb Das
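    A one-dimensional version of the pipeline (low-pass filter to isolate gravity, subtract it, then integrate twice) can be sketched as follows. The exponential filter and its coefficient are illustrative stand-ins for the patent's rotation-matrix-based gravity estimate, which operates on all three axes.

```python
import numpy as np

def measure_dimension(accel, dt, alpha=0.9):
    """Estimate the extent swept by a handheld unit along one axis:
    low-pass filter the accelerometer signal to isolate gravity,
    subtract it, then integrate the remaining acceleration twice."""
    gravity = np.empty_like(accel)
    g = accel[0]
    for i, a in enumerate(accel):          # exponential low-pass filter
        g = alpha * g + (1.0 - alpha) * a
        gravity[i] = g
    linear = accel - gravity               # gravity-free acceleration
    velocity = np.cumsum(linear) * dt      # first integration
    position = np.cumsum(velocity) * dt    # second integration
    return abs(position[-1] - position[0])
```

    A stationary unit reports an essentially zero dimension, since the filtered gravity estimate absorbs the entire constant accelerometer reading.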
  • Patent number: 10475231
    Abstract: Methods and systems for change detection utilizing three-dimensional (3D) point-cloud processing are provided. The method includes detecting changes in the surface based on a surface fitting approach with a locally weighted Moving Least Squares (MLS) approximation. The method includes acquiring and comparing the surface geometry of a reference point-cloud defining a reference surface and a template point-cloud defining a template surface at local regions or local surfaces using the surface fitting approach. The method provides effective change detection for both rigid and non-rigid changes, reduces false detections due to the presence of noise, and is independent of factors such as texture or illumination of an object or scene being tracked for change detection.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: November 12, 2019
    Assignee: Tata Consultancy Services Limited
    Inventors: Brojeshwar Bhowmick, Swapna Agarwal, Sanjana Sinha, Balamuralidhar Purushothaman, Apurbaa Mallik
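    A minimal version of the locally weighted MLS comparison can be sketched by fitting a weighted plane to a neighborhood of the reference cloud and thresholding each template point's distance to it. The Gaussian weight, bandwidth `h`, and threshold `tau` are illustrative parameters, and a plane is the simplest local surface the MLS family admits.

```python
import numpy as np

def mls_plane_distance(point, ref_cloud, h=1.0):
    """Distance from a template point to a locally weighted plane
    fitted to the reference cloud (Gaussian weights, bandwidth h)."""
    d2 = np.sum((ref_cloud - point) ** 2, axis=1)
    w = np.exp(-d2 / h ** 2)
    centroid = (w[:, None] * ref_cloud).sum(axis=0) / w.sum()
    centered = ref_cloud - centroid
    cov = (w[:, None] * centered).T @ centered
    normal = np.linalg.eigh(cov)[1][:, 0]   # smallest-eigenvalue vector
    return abs((point - centroid) @ normal)

def detect_changes(template, reference, tau=0.1, h=1.0):
    """Flag template points lying farther than tau from the locally
    fitted reference surface."""
    return np.array([mls_plane_distance(p, reference, h) > tau
                     for p in template])
```

    Because the comparison is purely geometric, it is unaffected by texture or illumination, matching the independence claim in the abstract.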
  • Publication number: 20190080503
    Abstract: Methods and systems for change detection utilizing three-dimensional (3D) point-cloud processing are provided. The method includes detecting changes in the surface based on a surface fitting approach with a locally weighted Moving Least Squares (MLS) approximation. The method includes acquiring and comparing the surface geometry of a reference point-cloud defining a reference surface and a template point-cloud defining a template surface at local regions or local surfaces using the surface fitting approach. The method provides effective change detection for both rigid and non-rigid changes, reduces false detections due to the presence of noise, and is independent of factors such as texture or illumination of an object or scene being tracked for change detection.
    Type: Application
    Filed: February 15, 2018
    Publication date: March 14, 2019
    Applicant: Tata Consultancy Services Limited
    Inventors: Brojeshwar BHOWMICK, Swapna AGARWAL, Sanjana SINHA, Balamuralidhar PURUSHOTHAMAN, Apurbaa MALLIK
  • Patent number: 9865061
    Abstract: Disclosed is a method and system for constructing a 3D structure. The system of the present disclosure comprises an image capturing unit for capturing images of an object. The system comprises a gyroscope, a magnetometer, and an accelerometer for determining extrinsic camera parameters, wherein the extrinsic camera parameters comprise a rotation and a translation of the images. Further, the system determines an internal calibration matrix once. The system uses the extrinsic camera parameters and the internal calibration matrix for determining a fundamental matrix. The system extracts features of the images for establishing point correspondences between the images. Further, the point correspondences are filtered using the fundamental matrix for generating filtered point correspondences. The filtered point correspondences are triangulated for determining 3D points representing the 3D structure. Further, the 3D structure may be optimized for eliminating reprojection errors associated with the 3D structure.
    Type: Grant
    Filed: September 23, 2014
    Date of Patent: January 9, 2018
    Assignee: Tata Consultancy Services Limited
    Inventors: Brojeshwar Bhowmick, Apurbaa Mallik, Arindam Saha
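    The step that builds the fundamental matrix from the sensor-derived extrinsics and the one-time internal calibration follows the standard epipolar relation F = K^-T [t]x R K^-1. The sketch below assumes the same calibration matrix K for both images, which matches the abstract's "determines an internal calibration matrix once".

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_from_pose(K, R, t):
    """Fundamental matrix from the relative rotation R and
    translation t (from gyroscope/magnetometer/accelerometer) and
    the internal calibration matrix K: F = K^-T [t]x R K^-1."""
    K_inv = np.linalg.inv(K)
    return K_inv.T @ skew(t) @ R @ K_inv
```

    Any correspondence (x1, x2) consistent with this geometry satisfies x2^T F x1 = 0, which is the filter the abstract applies to the raw point correspondences before triangulation.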
  • Patent number: 9830736
    Abstract: Disclosed is a method for segmenting a plurality of objects from a two-dimensional (2D) video captured through a depth camera and an RGB/G camera. The method comprises detecting camera motion in each 2D frame of the plurality of 2D frames from the 2D video and generating a first set of 2D frames without any camera motion. The method further comprises generating a plurality of cloud points for the first set of 2D frames corresponding to each pixel associated with a 2D frame in the first set of 2D frames. The method further comprises generating a 3D grid comprising a plurality of voxels. The method further comprises determining valid voxels and invalid voxels in the 3D grid. Further, a 3D connected component labeling technique is applied to the set of valid voxels to segment the plurality of objects in the 2D video.
    Type: Grant
    Filed: January 22, 2014
    Date of Patent: November 28, 2017
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Aniruddha Sinha, Tanushyam Chattopadhyay, Sangheeta Roy, Apurbaa Mallik
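    The 3D connected-component labeling over valid voxels can be sketched as a 6-connected BFS flood fill over a boolean occupancy grid; the boolean representation and the choice of 6-connectivity are assumptions about details the abstract leaves open.

```python
import numpy as np
from collections import deque

def label_components(valid):
    """6-connected 3D connected-component labeling over a boolean
    voxel grid of valid voxels, via BFS flood fill. Returns the
    integer label grid and the number of components found."""
    labels = np.zeros(valid.shape, dtype=int)
    count = 0
    for start in zip(*np.nonzero(valid)):
        if labels[start]:
            continue                      # voxel already labeled
        count += 1
        labels[start] = count
        queue = deque([start])
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (x + dx, y + dy, z + dz)
                if all(0 <= n[i] < valid.shape[i] for i in range(3)) \
                        and valid[n] and not labels[n]:
                    labels[n] = count
                    queue.append(n)
    return labels, count
```

    Each resulting label corresponds to one segmented object, which can then be projected back onto the 2D frames.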
  • Publication number: 20160334210
    Abstract: A method and system for estimating three-dimensional measurements of a physical object by utilizing readings from inertial sensors are provided. The method involves capturing, by a handheld unit, three-dimensional aspects of the physical object. The raw recordings are received from the inertial sensors and are used to develop a raw rotation matrix. The raw rotation matrix is subjected to low-pass filtering to obtain a processed rotation matrix constituted of filtered Euler angles, whose coordinates are used to estimate the gravitational component along the three axes, leading to determination of acceleration values and further calculation of the measurement of each dimension of the physical object.
    Type: Application
    Filed: May 13, 2016
    Publication date: November 17, 2016
    Applicant: Tata Consultancy Services Limited
    Inventors: Apurbaa MALLIK, Brojeshwar BHOWMICK, Aniruddha SINHA, Aishwarya VISVANATHAN, Sudeb DAS
  • Publication number: 20160035124
    Abstract: Disclosed is a method for segmenting a plurality of objects from a two-dimensional (2D) video captured through a depth camera and an RGB/G camera. The method comprises detecting camera motion in each 2D frame of the plurality of 2D frames from the 2D video and generating a first set of 2D frames without any camera motion. The method further comprises generating a plurality of cloud points for the first set of 2D frames corresponding to each pixel associated with a 2D frame in the first set of 2D frames. The method further comprises generating a 3D grid comprising a plurality of voxels. The method further comprises determining valid voxels and invalid voxels in the 3D grid. Further, a 3D connected component labeling technique is applied to the set of valid voxels to segment the plurality of objects in the 2D video.
    Type: Application
    Filed: January 22, 2014
    Publication date: February 4, 2016
    Inventors: Aniruddha SINHA, Tanushyam CHATTOPADHYAY, Sangheeta ROY, Apurbaa MALLIK
  • Publication number: 20150371396
    Abstract: Disclosed is a method and system for constructing a 3D structure. The system of the present disclosure comprises an image capturing unit for capturing images of an object. The system comprises a gyroscope, a magnetometer, and an accelerometer for determining extrinsic camera parameters, wherein the extrinsic camera parameters comprise a rotation and a translation of the images. Further, the system determines an internal calibration matrix once. The system uses the extrinsic camera parameters and the internal calibration matrix for determining a fundamental matrix. The system extracts features of the images for establishing point correspondences between the images. Further, the point correspondences are filtered using the fundamental matrix for generating filtered point correspondences. The filtered point correspondences are triangulated for determining 3D points representing the 3D structure. Further, the 3D structure may be optimized for eliminating reprojection errors associated with the 3D structure.
    Type: Application
    Filed: September 23, 2014
    Publication date: December 24, 2015
    Inventors: Brojeshwar BHOWMICK, Apurbaa MALLIK, Arindam SAHA