Patents by Inventor Punarjay Chakravarty

Punarjay Chakravarty has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240087332
    Abstract: A computer is programmed to receive image data from a sensor; generate a feature pyramid from the image data, the feature pyramid including a plurality of features; apply a plurality of preliminary bounding boxes to the features to generate a plurality of preliminarily bounded features, each preliminarily bounded feature being a pairing of one of the preliminary bounding boxes and one of the features; execute a machine-learning program on the preliminarily bounded features to determine a plurality of classifications and a respective plurality of predicted bounding boxes; and actuate a component of a machine, e.g., a vehicle, based on the classifications and the predicted bounding boxes. The machine-learning program is a two-stage object detector having a first stage and a second stage. The first stage selects a subset of the preliminarily bounded features to pass to the second stage.
    Type: Application
    Filed: September 14, 2022
    Publication date: March 14, 2024
    Applicant: Ford Global Technologies, LLC
    Inventors: Cédric Picron, Tinne Tuytelaars, Punarjay Chakravarty, Shubham Shrivastava
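    A minimal sketch of the two-stage selection idea summarized in the abstract above, assuming illustrative layer sizes, a simple top-k objectness rule, and stand-in data; it is not the patented implementation.

```python
# Hedged sketch: stage one scores (preliminary box, feature) pairs and only the
# top-k pairs are passed to stage two for classification and box refinement.
# Dimensions, the keep count, and the data are illustrative assumptions.
import torch
import torch.nn as nn

class TwoStageHead(nn.Module):
    def __init__(self, feat_dim=256, num_classes=80, keep=100):
        super().__init__()
        self.keep = keep
        self.stage1_score = nn.Linear(feat_dim, 1)          # objectness per pair
        self.stage2_cls = nn.Linear(feat_dim, num_classes)  # classification
        self.stage2_box = nn.Linear(feat_dim, 4)            # bounding-box refinement

    def forward(self, pair_feats, prelim_boxes):
        # pair_feats: (N, feat_dim) feature pooled for each preliminary box
        # prelim_boxes: (N, 4) the preliminary bounding boxes
        scores = self.stage1_score(pair_feats).squeeze(-1)
        top = scores.topk(min(self.keep, scores.numel())).indices  # stage-one selection
        kept_feats, kept_boxes = pair_feats[top], prelim_boxes[top]
        cls_logits = self.stage2_cls(kept_feats)
        box_deltas = self.stage2_box(kept_feats)
        return cls_logits, kept_boxes + box_deltas

# Usage with random stand-in data:
head = TwoStageHead()
feats, boxes = torch.randn(1000, 256), torch.rand(1000, 4)
classes, pred_boxes = head(feats, boxes)
```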
  • Publication number: 20240054673
    Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to receive an image frame from a camera, the image frame showing a vehicle, determine a two-dimensional bounding box around the vehicle in the image frame, determine a first three-dimensional point that is an intersection between (a) a ray extending from the camera through a center of the two-dimensional bounding box and (b) a first plane that is parallel to a ground plane, determine a second three-dimensional point on the vehicle, and determine an orientation of the vehicle based on the first three-dimensional point, the second three-dimensional point, and the ground plane. The ground plane represents a surface on which the vehicle sits.
    Type: Application
    Filed: August 9, 2022
    Publication date: February 15, 2024
    Applicant: Ford Global Technologies, LLC
    Inventors: Shubham Shrivastava, Bhushan Ghadge, Punarjay Chakravarty
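    A minimal geometric sketch of the ray-plane construction described in the abstract above, using numpy only; the camera pose, intrinsics, plane height, and the choice of the second point are assumptions, not values from the patent.

```python
# Intersect the ray through the bounding-box center with a plane parallel to the
# ground, then derive a heading from two 3D points projected onto the ground plane.
import numpy as np

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])                 # assumed pinhole intrinsics
cam_pos = np.array([0.0, 0.0, 5.0])              # camera center in world coordinates
R_wc = np.diag([1.0, -1.0, -1.0])                # camera looking straight down (assumed)

def ray_plane_point(pixel, plane_height):
    """Intersect the camera ray through `pixel` with the plane z = plane_height."""
    d_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    d_world = R_wc @ d_cam
    t = (plane_height - cam_pos[2]) / d_world[2]
    return cam_pos + t * d_world

bbox_center = (700.0, 420.0)                      # center of the 2D bounding box
p1 = ray_plane_point(bbox_center, plane_height=0.75)     # point on the plane parallel to ground
p2 = ray_plane_point((760.0, 430.0), plane_height=0.75)  # second point on the vehicle (assumed)
heading = p2[:2] - p1[:2]                         # project onto the ground plane (z = 0)
yaw = np.arctan2(heading[1], heading[0])          # vehicle orientation about the vertical axis
print(f"yaw = {np.degrees(yaw):.1f} deg")
```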
  • Publication number: 20240046563
    Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to jointly train a geometric NeRF multilayer perceptron (MLP) and a color NeRF MLP to model a scene using an occupancy grid map, camera data of the scene from a camera, and lidar data of the scene from a lidar; supervise the geometric NeRF MLP with the lidar data during the joint training; and supervise the color NeRF MLP with the camera data during the joint training. The geometric NeRF MLP is a neural radiance field modeling a geometry of the scene, and the color NeRF MLP is a neural radiance field modeling colors of the scene.
    Type: Application
    Filed: July 25, 2023
    Publication date: February 8, 2024
    Applicants: Ford Global Technologies, LLC, The Regents of the University of Michigan
    Inventors: Alexandra Carlson, Nikita Jaipuria, Punarjay Chakravarty, Manikandasriram Srinivasan Ramanagopal, Ramanarayan Vasudevan, Katherine Skinner
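    A heavily simplified sketch of jointly optimizing a geometry MLP supervised by lidar-derived targets and a color MLP supervised by camera pixels, as outlined above. Volume rendering, positional encoding, and the occupancy grid map are omitted; all shapes and targets are stand-ins.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

geom_mlp = mlp(3, 1)    # xyz -> occupancy/density (geometry branch)
color_mlp = mlp(6, 3)   # xyz + view direction -> rgb (color branch)

opt = torch.optim.Adam(list(geom_mlp.parameters()) + list(color_mlp.parameters()), lr=1e-3)

# Stand-in batches: lidar supervises geometry, camera supervises color.
pts = torch.rand(1024, 3)                                 # sampled 3D points
lidar_occ = torch.randint(0, 2, (1024, 1)).float()        # occupancy from lidar returns
dirs = torch.rand(1024, 3)
cam_rgb = torch.rand(1024, 3)                             # pixel colors from the camera

for _ in range(100):
    opt.zero_grad()
    geom_loss = nn.functional.binary_cross_entropy_with_logits(geom_mlp(pts), lidar_occ)
    color_loss = nn.functional.mse_loss(
        torch.sigmoid(color_mlp(torch.cat([pts, dirs], dim=1))), cam_rgb)
    (geom_loss + color_loss).backward()                    # joint training step
    opt.step()
```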
  • Publication number: 20240037772
    Abstract: A robot can be moved in a structure that includes a plurality of downward-facing cameras, and, as the robot moves, upward images can be captured with an upward-facing camera mounted to the robot. Downward images can be captured with the respective downward-facing cameras. Upward-facing camera poses can be determined at respective times based on the upward images. Further, respective poses of the downward-facing cameras can be determined based on (a) describing motion of the robot from the downward images, and (b) the upward-facing camera poses determined from the upward images.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 1, 2024
    Applicant: Ford Global Technologies, LLC
    Inventors: Subodh Mishra, Punarjay Chakravarty, Sagar Manglani, Sushruth Nagesh, Gaurav Pandey
  • Patent number: 11887317
    Abstract: A plurality of agent locations can be determined at a plurality of time steps by inputting a plurality of images to a perception algorithm that inputs the plurality of images and outputs agent labels and the agent locations. A plurality of first uncertainties corresponding to the agent locations can be determined at the plurality of time steps by inputting the plurality of agent locations to a filter algorithm that inputs the agent locations and outputs the plurality of first uncertainties corresponding to the plurality of agent locations. A plurality of predicted agent trajectories and potential trajectories corresponding to the predicted agent trajectories can be determined by inputting the plurality of agent locations at the plurality of time steps and the first uncertainties corresponding to the agent locations at the plurality of time steps to a variational autoencoder.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: January 30, 2024
    Assignees: Ford Global Technologies, LLC, The Board of Trustees of the Leland Stanford Junior University
    Inventors: Boris Ivanovic, Yifeng Lin, Shubham Shrivastava, Punarjay Chakravarty, Marco Pavone
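    A sketch of the middle step only: a generic constant-velocity Kalman filter (a stand-in for the unspecified filter algorithm) that turns per-frame agent locations into per-time-step location uncertainties. The perception network and the variational autoencoder stages are omitted; noise levels are assumptions.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)  # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)                                # observe x, y
Q = 0.01 * np.eye(4)            # process noise (assumed)
R = 0.25 * np.eye(2)            # measurement noise (assumed)

x = np.zeros(4)                 # state: [x, y, vx, vy]
P = np.eye(4)
uncertainties = []

for z in np.cumsum(np.random.randn(50, 2) * 0.1 + 0.2, axis=0):  # fake agent locations
    x, P = F @ x, F @ P @ F.T + Q                                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                                # Kalman gain
    x = x + K @ (z - H @ x)                                       # update with the location
    P = (np.eye(4) - K @ H) @ P
    uncertainties.append(P[:2, :2].copy())    # positional covariance at this time step
```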
  • Patent number: 11887323
    Abstract: A method may include: receiving a first image captured by a camera at a first time instance, wherein the first image includes at least a portion of an observed vehicle; determining a first ray angle based on a coordinate system of an ego-vehicle and a coordinate system of the observed vehicle corresponding to the first image; receiving a second image captured by the camera at a second time instance, wherein the second image includes at least a portion of the observed vehicle oriented at a different viewpoint; determining a second ray angle based on a coordinate system of the ego-vehicle and the coordinate system of the observed vehicle corresponding to the second image; determining a local angle difference based on the first ray angle and the second ray angle; and training a deep neural network using the local angle difference, the first image, and the second image.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: January 30, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Tinne Tuytelaars, Cédric Picron, Tom Roussel
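    A much-simplified sketch of the ray-angle bookkeeping in the abstract above: intrinsics, pixel coordinates, and the sign convention are assumptions, and the network training step that consumes the local angle difference is omitted.

```python
import numpy as np

fx, cx = 800.0, 640.0     # assumed focal length and principal point (pixels)

def ray_angle(u):
    """Angle between the camera's forward axis and the ray through pixel column u."""
    return np.arctan2(u - cx, fx)

u_t1 = 700.0   # bounding-box center of the observed vehicle in the first image
u_t2 = 540.0   # bounding-box center in the second image (different viewpoint)

theta1, theta2 = ray_angle(u_t1), ray_angle(u_t2)
local_angle_difference = theta2 - theta1   # supervision signal paired with the two images
print(np.degrees(local_angle_difference))
```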
  • Publication number: 20230419539
    Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to receive at least one image frame from at least one camera, the at least one image frame showing a vehicle; determine at least one baseline pixel coordinate within one image frame for the vehicle; initialize a plurality of initial poses for the vehicle in a preset pattern; for each initial pose, determine a respective final pose by minimizing a reprojection error between the at least one baseline pixel coordinate and at least one respective estimated pixel coordinate, the at least one respective estimated pixel coordinate resulting from reprojecting the vehicle to pixel coordinates in the one image frame; and select a first final pose from the final poses, the first final pose having the lowest minimized reprojection error of the final poses.
    Type: Application
    Filed: May 17, 2022
    Publication date: December 28, 2023
    Applicant: Ford Global Technologies, LLC
    Inventors: Sushruth Nagesh, Shubham Shrivastava, Punarjay Chakravarty
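    A sketch of the multi-start search described above: initialize poses in a preset grid, locally minimize a reprojection error from each start, and keep the pose with the lowest final error. The camera model, the two-point vehicle model, and the observed pixels are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

K = np.array([[800.0, 0, 640.0], [0, 800.0, 360.0], [0, 0, 1.0]])
vehicle_model = np.array([[2.0, 0.0, 0.5], [-2.0, 0.0, 0.5]])   # front/rear points (vehicle frame)
observed_px = np.array([[700.0, 400.0], [620.0, 390.0]])         # baseline pixel coordinates

def project(pose):
    x, y, yaw = pose
    Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0], [np.sin(yaw), np.cos(yaw), 0], [0, 0, 1]])
    pts_world = vehicle_model @ Rz.T + np.array([x, y, 0.0])
    pts_cam = pts_world + np.array([0.0, 0.0, 10.0])              # assumed fixed camera offset
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def reprojection_error(pose):
    return np.sum((project(pose) - observed_px) ** 2)

# Preset pattern of initial poses (grid over x, y and two headings).
initial_poses = [(x, y, yaw) for x in (-2, 0, 2) for y in (-2, 0, 2) for yaw in (0.0, np.pi / 2)]
results = [minimize(reprojection_error, p0, method="Nelder-Mead") for p0 in initial_poses]
best = min(results, key=lambda r: r.fun)    # final pose with the lowest minimized error
print(best.x, best.fun)
```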
  • Patent number: 11827203
    Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to capture, from a camera, one or more images, wherein the one or more images include at least a portion of a vehicle, receive a plurality of keypoints corresponding to markers on the vehicle and instantiate a virtual vehicle corresponding to the vehicle. The instructions include further instructions to determine rotational and translation parameters of the vehicle by matching a plurality of virtual keypoints to the plurality of keypoints and determine a multi-degree of freedom (MDF) pose of the vehicle based on the rotational and translation parameters.
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: November 28, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Shubham Shrivastava
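    A sketch using OpenCV's standard PnP solver as a stand-in for the matching step described above: virtual keypoints on a vehicle model are aligned with detected image keypoints to recover rotation and translation (the multi-DoF pose). The keypoint layout, intrinsics, and detections are assumptions.

```python
import numpy as np
import cv2

# Virtual keypoints on the vehicle model (vehicle frame, meters) -- assumed layout.
virtual_kp = np.array([[2.0, 0.9, 0.5], [2.0, -0.9, 0.5], [-2.0, 0.9, 0.5],
                       [-2.0, -0.9, 0.5], [0.0, 0.0, 1.5], [1.0, 0.0, 1.2]], dtype=np.float64)
# Detected keypoints in the image corresponding to the markers (pixels) -- stand-ins.
image_kp = np.array([[720.0, 400.0], [690.0, 430.0], [530.0, 405.0],
                     [505.0, 435.0], [610.0, 340.0], [655.0, 360.0]], dtype=np.float64)
K = np.array([[800.0, 0, 640.0], [0, 800.0, 360.0], [0, 0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(virtual_kp, image_kp, K, distCoeffs=None)
R, _ = cv2.Rodrigues(rvec)           # rotation matrix from the Rodrigues vector
print("rotation:\n", R, "\ntranslation:", tvec.ravel())  # together: the recovered pose
```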
  • Publication number: 20230267640
    Abstract: A two-dimensional image segment that includes an outline of an object can be determined in a top-down fisheye image. A six degree of freedom (DoF) pose for the object can be determined based on determining a three-dimensional bounding box determined by one or more of (1) an axis of the two-dimensional image segment in a ground plane included in the top-down fisheye image and a three-dimensional model of the object and (2) inputting the two-dimensional image segment to a deep neural network trained to determine a three-dimensional bounding box for the object.
    Type: Application
    Filed: February 21, 2022
    Publication date: August 24, 2023
    Applicant: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Subodh Mishra, Mostafa Parchami, Gaurav Pandey, Shubham Shrivastava
  • Publication number: 20230252667
    Abstract: An approximate camera location on a route can be determined by inputting a first image acquired by a vehicle camera to a first convolutional neural network. First image feature points can be extracted from the first image using a feature extraction algorithm. Pose estimation parameters for a pose estimation algorithm can be selected based on the approximate camera location. A six degree-of-freedom (DoF) camera pose can be determined by inputting the first image feature points and second feature points included in a structure-from-motion (SfM) map based on the route to the pose estimation algorithm which is controlled by the pose estimation parameters. A six DoF vehicle pose can be determined based on the six DoF camera pose.
    Type: Application
    Filed: February 8, 2022
    Publication date: August 10, 2023
    Applicant: Ford Global Technologies, LLC
    Inventors: Ming Xu, Sourav Garg, Michael Milford, Punarjay Chakravarty, Shubham Shrivastava
  • Patent number: 11720995
    Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to input a fisheye image to a vector quantized variational autoencoder. The vector quantized variational autoencoder can encode the fisheye image to first latent variables based on an encoder. The vector quantized variational autoencoder can quantize the first latent variables to generate second latent variables based on a dictionary of embeddings. The vector quantized variational autoencoder can decode the second latent variables to a rectified rectilinear image using a decoder and output the rectified rectilinear image.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: August 8, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Praveen Narayanan, Ramchandra Ganesh Karandikar, Nikita Jaipuria, Punarjay Chakravarty, Ganesh Kumar
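    A sketch of just the vector-quantization step in a VQ-VAE, as described above: continuous latents from the encoder are snapped to their nearest entries in a dictionary of embeddings before decoding. The encoder/decoder networks and the fisheye-to-rectilinear pairing are omitted; sizes are assumptions.

```python
import torch

num_embeddings, dim = 512, 64
codebook = torch.randn(num_embeddings, dim)          # dictionary of embeddings

def quantize(latents):
    # latents: (N, dim) first latent variables from the encoder
    dists = torch.cdist(latents, codebook)            # distance to every code
    indices = dists.argmin(dim=1)                      # nearest embedding per latent
    return codebook[indices], indices                  # second (quantized) latent variables

first_latents = torch.randn(1024, dim)
second_latents, codes = quantize(first_latents)
print(second_latents.shape, codes[:8])
```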
  • Patent number: 11710254
    Abstract: A first six degree-of-freedom (DoF) pose of an object from a perspective of a first image sensor is determined with a neural network. A second six DoF pose of the object from a perspective of a second image sensor is determined with the neural network. A pose offset between the first and second six DoF poses is determined. A first projection offset is determined for a first two-dimensional (2D) bounding box generated from the first six DoF pose. A second projection offset is determined for a second 2D bounding box generated from the second six DoF pose. A total offset is determined by combining the pose offset, the first projection offset, and the second projection offset. Parameters of a loss function are updated based on the total offset. The updated parameters are provided to the neural network to obtain an updated total offset.
    Type: Grant
    Filed: April 7, 2021
    Date of Patent: July 25, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Shubham Shrivastava, Punarjay Chakravarty, Gaurav Pandey
  • Publication number: 20230186587
    Abstract: An image can be input to a deep neural network to determine a point in the image based on a center of a Gaussian heatmap corresponding to an object included in the image. The deep neural network can determine an object descriptor corresponding to the object and include the object descriptor in an object vector attached to the point. The deep neural network can determine object parameters including a three-dimensional location of the object in global coordinates and predicted pixel offsets of the object. The object parameters can be included in the object vector, and the deep neural network can predict a future location of the object in global coordinates based on the point and the object vector.
    Type: Application
    Filed: December 14, 2021
    Publication date: June 15, 2023
    Applicant: Ford Global Technologies, LLC
    Inventors: Shubham Shrivastava, Punarjay Chakravarty, Gaurav Pandey
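    A sketch of decoding the center point and its attached object vector from a Gaussian heatmap, in the spirit of the description above. The heatmap, the vector channels, and their interpretation (descriptor, 3D location, pixel offsets) are stand-ins, not the network's real outputs.

```python
import numpy as np

H, W, C = 96, 160, 8
heatmap = np.random.rand(H, W) * 0.1
heatmap[40, 70] = 0.97                        # pretend the network fired here
object_vectors = np.random.randn(H, W, C)     # per-pixel object-vector channels (assumed layout)

y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)   # center of the Gaussian peak
vector = object_vectors[y, x]                 # object vector attached to that point
descriptor, xyz, offsets = vector[:3], vector[3:6], vector[6:8]
print((x, y), descriptor, xyz, offsets)
```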
  • Patent number: 11670088
    Abstract: A plurality of temporally successive vehicle sensor images are received as input to a variational autoencoder neural network that outputs an averaged semantic birds-eye view image that includes respective pixels determined by averaging semantic class values of corresponding pixels in respective images in the plurality of temporally successive vehicle sensor images. From a plurality of topological nodes that each specify respective real-world locations, a topological node closest to the vehicle, and a three degree-of-freedom pose for the vehicle relative to the topological node closest to the vehicle, is determined based on the averaged semantic birds-eye view image. A real-world three degree-of-freedom pose for the vehicle is determined by combining the three degree-of-freedom pose for the vehicle relative to the topological node and the real-world location of the topological node closest to the vehicle.
    Type: Grant
    Filed: December 7, 2020
    Date of Patent: June 6, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Mokshith Voodarla, Shubham Shrivastava, Punarjay Chakravarty
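    A sketch of two small pieces of the pipeline above: averaging per-pixel semantic class values over successive birds-eye-view frames, and combining the closest topological node with a relative pose. The frames, node locations, relative pose, and the autoencoder itself are stand-ins.

```python
import numpy as np

bev_frames = np.random.rand(5, 128, 128)          # semantic class values for 5 successive frames
averaged_bev = bev_frames.mean(axis=0)             # averaged semantic birds-eye view image

nodes = np.array([[10.0, 2.0], [34.5, -7.2], [58.0, 11.9]])   # real-world node locations (assumed)
coarse_vehicle_xy = np.array([33.0, -5.0])
closest = nodes[np.argmin(np.linalg.norm(nodes - coarse_vehicle_xy, axis=1))]

# Real-world pose = node location combined with the pose relative to that node.
relative_pose = np.array([1.2, -1.8, 0.05])        # (dx, dy, dyaw) w.r.t. the node (assumed)
world_pose = np.array([closest[0] + relative_pose[0],
                       closest[1] + relative_pose[1],
                       relative_pose[2]])
print(world_pose)
```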
  • Patent number: 11662741
    Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to determine an eccentricity map based on video image data and determine vehicle motion data by processing the eccentricity map and two red, green, blue (RGB) video images with a deep neural network trained to output vehicle motion data in global coordinates. The instructions can further include instructions to operate a vehicle based on the vehicle motion data.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: May 30, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Bruno Sielly Jales Costa, Gintaras Vincent Puskorius
  • Publication number: 20230136871
    Abstract: A first plurality of center points of first two-dimensional bounding boxes corresponding to a vehicle occurring in a first plurality of images acquired by a first camera can be determined. A second plurality of center points of second two-dimensional bounding boxes corresponding to the vehicle occurring in a second plurality of images acquired by a second camera can also be determined. A plurality of non-linear equations based on the locations of the first and second pluralities of center points and first and second camera parameters corresponding to the first and second cameras can be determined. The plurality of non-linear equations can be solved simultaneously for the locations of the vehicle with respect to the first and second cameras and the six degree of freedom pose of the second camera with respect to the first camera. Real-world coordinates of the six degree of freedom pose of the second camera can be determined based on real-world coordinates of a six degree of freedom pose of the first camera.
    Type: Application
    Filed: November 1, 2021
    Publication date: May 4, 2023
    Applicant: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Shubham Shrivastava, Bhushan Ghadge, Mostafa Parchami, Gaurav Pandey
  • Patent number: 11625856
    Abstract: Example localization systems and methods are described. In one implementation, a method receives a camera image from a vehicle camera and cleans the camera image using a VAE-GAN (variational autoencoder combined with a generative adversarial network) algorithm. The method further receives a vector map related to an area proximate the vehicle and generates a synthetic image based on the vector map. The method then localizes the vehicle based on the cleaned camera image and the synthetic image.
    Type: Grant
    Filed: January 27, 2021
    Date of Patent: April 11, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Sarah Houts, Praveen Narayanan, Punarjay Chakravarty, Gaurav Pandey, Graham Mills, Tyler Reid
  • Patent number: 11619727
    Abstract: A calibration device and method of calculating a global multi-degree of freedom (MDF) pose of a camera affixed to a structure is disclosed. The method may comprise: determining, via a computer of a calibration device, a calibration device MDF pose with respect to a global coordinate system corresponding to the structure; receiving, from an image system including the camera, a camera MDF pose with respect to the calibration device, wherein a computer of the image system determines the camera MDF pose based on an image captured by the camera including at least a calibration board affixed to the calibration device; calculating the global MDF pose based on the calibration device MDF pose and the camera MDF pose; and transmitting the global MDF pose to the image system such that a computer of the image system can use the global MDF pose for calibration purposes.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: April 4, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Sagar Manglani, Punarjay Chakravarty, Shubham Shrivastava
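    A sketch of the pose composition at the heart of the method above: the global pose of the fixed camera is the calibration device's global pose composed with the camera's pose relative to that device. The homogeneous 4x4 transforms and example values are assumptions.

```python
import numpy as np

def se3(yaw, t):
    """Build a 4x4 homogeneous transform from a yaw angle and a translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    T[:3, 3] = t
    return T

T_global_device = se3(np.pi / 6, [12.0, 4.0, 0.0])    # calibration device pose in global frame
T_device_camera = se3(-np.pi / 2, [0.3, 0.0, 2.1])    # camera pose relative to the device
T_global_camera = T_global_device @ T_device_camera    # global MDF pose of the camera
print(T_global_camera)
```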
  • Patent number: 11620475
    Abstract: The present disclosure discloses a system and a method that includes receiving, at a decoder, a latent representation of an image having a first domain, and generating a reconstructed image having a second domain, wherein the reconstructed image is generated based on the latent representation.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: April 4, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Praveen Narayanan, Nikita Jaipuria, Punarjay Chakravarty, Vidya Nariyambut Murali
  • Publication number: 20230097584
    Abstract: A plurality of virtual three-dimensional (3D) points distributed on a 3D reference plane for a camera array including a plurality of cameras are randomly selected. The plurality of cameras includes a host camera and one or more additional cameras. Respective two-dimensional (2D) projections of the plurality of virtual 3D points for the plurality of cameras are determined based on respective poses of the cameras. For the respective one or more additional cameras, respective homography matrices are determined based on the 2D projections for the respective camera and the 2D projections for the host camera. The respective homography matrices map the 2D projections for the respective camera to the 2D projections for the host camera. A stitched image is generated based on respective images captured by the plurality of cameras and the respective homography matrices.
    Type: Application
    Filed: September 27, 2021
    Publication date: March 30, 2023
    Applicant: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Shubham Shrivastava
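    A sketch of the homography construction described above: virtual 3D points on a reference plane are projected into a host camera and a second camera, and a homography mapping the second camera's projections to the host's is estimated. Camera poses, intrinsics, and point counts are illustrative assumptions.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)
K = np.array([[800.0, 0, 640.0], [0, 800.0, 360.0], [0, 0, 1.0]])

def project(points_w, R, t):
    pts_cam = points_w @ R.T + t
    uv = pts_cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

# Virtual 3D points randomly distributed on the reference plane z = 0.
plane_pts = np.column_stack([rng.uniform(-5, 5, 20), rng.uniform(-5, 5, 20), np.zeros(20)])

host_uv = project(plane_pts, np.eye(3), np.array([0.0, 0.0, 10.0]))      # host camera
yaw = 0.2
R2 = np.array([[np.cos(yaw), -np.sin(yaw), 0], [np.sin(yaw), np.cos(yaw), 0], [0, 0, 1]])
other_uv = project(plane_pts, R2, np.array([1.0, -0.5, 10.0]))            # additional camera

# Homography mapping the other camera's 2D projections onto the host camera's,
# later used to warp that camera's image into the stitched host view.
H, _ = cv2.findHomography(other_uv, host_uv)
print(H)
```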