Patents by Inventor Punarjay Chakravarty

Punarjay Chakravarty has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12333757
    Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to receive at least one image frame from at least one camera, the at least one image frame showing a vehicle; determine at least one baseline pixel coordinate within one image frame for the vehicle; initialize a plurality of initial poses for the vehicle in a preset pattern; for each initial pose, determine a respective final pose by minimizing a reprojection error between the at least one baseline pixel coordinate and at least one respective estimated pixel coordinate, the at least one respective estimated pixel coordinate resulting from reprojecting the vehicle to pixel coordinates in the one image frame; and select a first final pose from the final poses, the first final pose having the lowest minimized reprojection error of the final poses.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: June 17, 2025
    Assignee: Ford Global Technologies, LLC
    Inventors: Sushruth Nagesh, Shubham Shrivastava, Punarjay Chakravarty
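The multi-start search in this abstract — initialize several poses, refine each by minimizing reprojection error, keep the best — can be sketched in a toy form. This is an illustration only, not the patented method: the two-marker vehicle model, the focal length, and the derivative-free pattern search are all hypothetical stand-ins.

```python
import math

FOCAL = 500.0  # hypothetical focal length in pixels

def reprojection_error(pose, observed):
    """Sum of squared pixel errors for two hypothetical markers on the
    vehicle, one at x and one at x + 1, both at depth z."""
    x, z = pose
    u1 = FOCAL * x / z
    u2 = FOCAL * (x + 1.0) / z
    return (u1 - observed[0]) ** 2 + (u2 - observed[1]) ** 2

def refine(pose, observed, iters=1000):
    """Derivative-free pattern search standing in for the abstract's
    reprojection-error minimization."""
    x, z = pose
    best = reprojection_error((x, z), observed)
    step = 1.0
    for _ in range(iters):
        moved = False
        for dx, dz in ((step, 0), (-step, 0), (0, step), (0, -step)):
            e = reprojection_error((x + dx, z + dz), observed)
            if e < best:
                x, z, best, moved = x + dx, z + dz, e, True
        if not moved:
            step *= 0.5
            if step < 1e-9:
                break
    return (x, z), best

def select_pose(observed, initial_poses):
    """Refine every initial pose and keep the one with the lowest
    minimized reprojection error."""
    results = [refine(p, observed) for p in initial_poses]
    return min(results, key=lambda r: r[1])
```

With observations generated from a true pose of (2, 10), the search recovers that pose from a coarse grid of initializations.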
  • Patent number: 12229985
    Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to receive an image frame from a camera, the image frame showing a vehicle, determine a two-dimensional bounding box around the vehicle in the image frame, determine a first three-dimensional point that is an intersection between (a) a ray extending from the camera through a center of the two-dimensional bounding box and (b) a first plane that is parallel to a ground plane, determine a second three-dimensional point on the vehicle, and determine an orientation of the vehicle based on the first three-dimensional point, the second three-dimensional point, and the ground plane. The ground plane represents a surface on which the vehicle sits.
    Type: Grant
    Filed: August 9, 2022
    Date of Patent: February 18, 2025
    Assignee: Ford Global Technologies, LLC
    Inventors: Shubham Shrivastava, Bhushan Ghadge, Punarjay Chakravarty
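The geometric core of this abstract — intersecting a camera ray with a plane parallel to the ground, then deriving an orientation from two 3D points — can be sketched directly. A minimal illustration with a y-up coordinate convention assumed (not taken from the patent):

```python
import math

def ray_plane_intersection(origin, direction, plane_y):
    """Intersect a ray from the camera with the horizontal plane
    y = plane_y (parallel to the ground plane y = 0)."""
    t = (plane_y - origin[1]) / direction[1]
    return tuple(o + t * d for o, d in zip(origin, direction))

def yaw_from_points(p1, p2):
    """Orientation about the ground-plane normal from two 3D points,
    using only their ground-plane (x, z) components."""
    return math.atan2(p2[2] - p1[2], p2[0] - p1[0])
```

For a camera 2 m above the ground looking through a bounding-box centre, the intersection with a plane at half the vehicle height gives the first 3D point; the yaw follows from the second point.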
  • Patent number: 12217450
    Abstract: An approximate camera location on a route can be determined by inputting a first image acquired by a vehicle camera to a first convolutional neural network. First image feature points can be extracted from the first image using a feature extraction algorithm. Pose estimation parameters for a pose estimation algorithm can be selected based on the approximate camera location. A six degree-of-freedom (DoF) camera pose can be determined by inputting the first image feature points and second feature points included in a structure-from-motion (SfM) map based on the route to the pose estimation algorithm, which is controlled by the pose estimation parameters. A six DoF vehicle pose can be determined based on the six DoF camera pose.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: February 4, 2025
    Assignee: Ford Global Technologies, LLC
    Inventors: Ming Xu, Sourav Garg, Michael Milford, Punarjay Chakravarty, Shubham Shrivastava
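The coarse-to-fine idea here — an approximate location selects the parameters for the precise pose estimator — can be sketched with a nearest-neighbour lookup. The descriptors stand in for the CNN output and the parameter table is entirely hypothetical:

```python
def approximate_location(query_descriptor, map_descriptors):
    """Nearest-neighbour match of a global image descriptor (a stand-in
    for the first convolutional neural network) against descriptors
    stored along the route."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(map_descriptors)),
               key=lambda i: dist2(query_descriptor, map_descriptors[i]))

# Hypothetical per-location tuning for the downstream pose estimator.
POSE_PARAMS = {
    0: {"ransac_threshold": 2.0, "max_iterations": 500},
    1: {"ransac_threshold": 8.0, "max_iterations": 2000},
}

def select_pose_params(location_index, table=POSE_PARAMS):
    """Select pose-estimation parameters based on the approximate
    camera location, as described in the abstract."""
    return table[location_index]
```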
  • Patent number: 12197208
    Abstract: A first plurality of center points of first two-dimensional bounding boxes corresponding to a vehicle occurring in a first plurality of images acquired by a first camera can be determined. A second plurality of center points of second two-dimensional bounding boxes corresponding to the vehicle occurring in a second plurality of images acquired by a second camera can also be determined. A plurality of non-linear equations based on the locations of the first and second pluralities of center points and first and second camera parameters corresponding to the first and second cameras can be determined. The plurality of non-linear equations can be solved simultaneously for the locations of the vehicle with respect to the first and second cameras and the six degree of freedom pose of the second camera with respect to the first camera.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: January 14, 2025
    Assignee: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Shubham Shrivastava, Bhushan Ghadge, Mostafa Parchami, Gaurav Pandey
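The simultaneous solve in this abstract couples unknown vehicle locations with an unknown camera-to-camera pose. A drastically simplified 1D linear analogue (not the patent's nonlinear system) still shows the coupling: camera 1 sits at 0, camera 2 at an unknown offset d, and the vehicle at unknown positions x_t, with observations u1_t = x_t and u2_t = x_t - d.

```python
def solve_simultaneously(obs_cam1, obs_cam2):
    """Least-squares solution of the 1D toy system: d is the mean
    disagreement between the two cameras, and each x_t follows from
    averaging the two (offset-corrected) observations."""
    n = len(obs_cam1)
    d = sum(u1 - u2 for u1, u2 in zip(obs_cam1, obs_cam2)) / n
    xs = [(u1 + u2 + d) / 2.0 for u1, u2 in zip(obs_cam1, obs_cam2)]
    return d, xs
```

Setting the gradient of the squared error to zero gives exactly these closed forms; the patent's full system replaces them with iteratively solved nonlinear projection equations.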
  • Patent number: 12164021
    Abstract: A computer includes a processor and a memory storing instructions executable by the processor to receive radar data including a radar pixel having a radial velocity from a radar; receive camera data including an image frame including camera pixels from a camera; map the radar pixel to the image frame; generate a region of the image frame surrounding the radar pixel; determine association scores for the respective camera pixels in the region; select a first camera pixel of the camera pixels from the region, the first camera pixel having a greatest association score of the association scores; and calculate a full velocity of the radar pixel using the radial velocity of the radar pixel and a first optical flow at the first camera pixel. The association scores indicate a likelihood that the respective camera pixels correspond to a same point in an environment as the radar pixel.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: December 10, 2024
    Assignees: Ford Global Technologies, LLC, Board of Trustees of Michigan State University
    Inventors: Xiaoming Liu, Daniel Morris, Yunfei Long, Marcos Paul Gerardo Castro, Punarjay Chakravarty, Praveen Narayanan
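Two steps from this abstract can be sketched: picking the camera pixel with the greatest association score, and combining the radial velocity with flow-derived motion. The velocity combination below assumes a simplified planar geometry, not the patent's actual 3D formulation:

```python
import math

def select_associated_pixel(scores):
    """Pick the camera pixel with the greatest association score in the
    region around the projected radar pixel."""
    return max(scores, key=scores.get)

def full_velocity_2d(radial_speed, bearing, transverse_speed):
    """Combine the radar's radial speed with a transverse speed derived
    from optical flow, in the plane: the full velocity is the sum of its
    components along and across the radar ray."""
    r = (math.cos(bearing), math.sin(bearing))   # unit vector along the ray
    t = (-math.sin(bearing), math.cos(bearing))  # perpendicular direction
    return (radial_speed * r[0] + transverse_speed * t[0],
            radial_speed * r[1] + transverse_speed * t[1])
```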
  • Publication number: 20240404081
    Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to determine a quantity of relative motion between a host vehicle and a target vehicle; in response to the quantity of relative motion being below a threshold, determine a pose of the host vehicle subject to a constraint on motion of the host vehicle; and in response to the quantity of relative motion being above the threshold, determine the pose of the host vehicle without the constraint.
    Type: Application
    Filed: June 2, 2023
    Publication date: December 5, 2024
    Applicants: Ford Global Technologies, LLC, QUEENSLAND UNIVERSITY OF TECHNOLOGY
    Inventors: Stephen Hausler, Shubham Shrivastava, Punarjay Chakravarty, Ankit Girish Vora, Michael Milford, Sourav Garg
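The threshold gate described here can be sketched as a toy: below the relative-motion threshold, the constraint is modelled (crudely) as holding the previous pose; above it, the unconstrained estimate is accepted. The real method solves a constrained pose problem rather than freezing the pose:

```python
def update_pose(previous_pose, measured_pose, relative_motion, threshold):
    """Gate the pose update on the quantity of relative motion between
    host and target vehicle."""
    if relative_motion < threshold:
        # Stand-in for solving with a motion constraint on the host.
        return previous_pose
    return measured_pose
```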
  • Patent number: 12100189
    Abstract: An image can be input to a deep neural network to determine a point in the image based on a center of a Gaussian heatmap corresponding to an object included in the image. The deep neural network can determine an object descriptor corresponding to the object and include the object descriptor in an object vector attached to the point. The deep neural network can determine object parameters including a three-dimensional location of the object in global coordinates and predicted pixel offsets of the object. The object parameters can be included in the object vector, and the deep neural network can predict a future location of the object in global coordinates based on the point and the object vector.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: September 24, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Shubham Shrivastava, Punarjay Chakravarty, Gaurav Pandey
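Two pieces of this pipeline are easy to sketch: locating the Gaussian-heatmap peak that defines the point, and applying a predicted pixel offset to it. A minimal illustration with hypothetical names (the deep network that produces the heatmap and offsets is out of scope):

```python
def heatmap_peak(heatmap):
    """Return the (row, col) of the maximum of a 2D heatmap — the point
    at the centre of the Gaussian corresponding to the object."""
    best = (0, 0)
    for r, row in enumerate(heatmap):
        for c, value in enumerate(row):
            if value > heatmap[best[0]][best[1]]:
                best = (r, c)
    return best

def predict_future_location(point, predicted_offset):
    """Apply a predicted pixel offset from the object vector to the
    detected point."""
    return (point[0] + predicted_offset[0], point[1] + predicted_offset[1])
```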
  • Patent number: 12061253
    Abstract: A computer includes a processor and a memory storing instructions executable by the processor to receive radar data from a radar, the radar data including radar pixels having respective measured depths; receive camera data from a camera, the camera data including an image frame including camera pixels; map the radar pixels to the image frame; generate respective regions of the image frame surrounding the respective radar pixels; for each region, determine confidence scores for the respective camera pixels in that region; output a depth map of projected depths for the respective camera pixels based on the confidence scores; and operate a vehicle including the radar and the camera based on the depth map. The confidence scores indicate confidence in applying the measured depth of the radar pixel for that region to the respective camera pixels.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: August 13, 2024
    Assignees: Ford Global Technologies, LLC, Board of Trustees of Michigan State University
    Inventors: Yunfei Long, Daniel Morris, Xiaoming Liu, Marcos Paul Gerardo Castro, Praveen Narayanan, Punarjay Chakravarty
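The depth-map assembly in this abstract — each camera pixel takes the measured depth of the radar pixel whose region covers it with the highest confidence — can be sketched with dictionaries standing in for image arrays (the confidence network itself is omitted):

```python
def project_depths(radar_pixels):
    """Build a depth map by giving each camera pixel the measured depth
    of the most confident overlapping radar region.

    radar_pixels: list of dicts with a measured 'depth' and a 'conf'
    mapping from camera pixels in the surrounding region to scores."""
    depth_map, best_conf = {}, {}
    for rp in radar_pixels:
        for pixel, conf in rp["conf"].items():
            if conf > best_conf.get(pixel, 0.0):
                best_conf[pixel] = conf
                depth_map[pixel] = rp["depth"]
    return depth_map
```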
  • Publication number: 20240264276
    Abstract: A computer that includes a processor and a memory, the memory including instructions executable by the processor to generate radar data by projecting radar returns of objects within a scene onto an image plane of camera data of the scene, based on extrinsic and intrinsic parameters of a camera and extrinsic parameters of a radar sensor. The image data can be received at an image channel of an image/radar convolutional neural network (CNN), and the radar data can be received at a radar channel of the image/radar CNN, wherein features are transferred from the image channel to the radar channel at multiple stages. Image object features and image confidence scores can be determined by the image channel, and radar object features and radar confidence scores by the radar channel. The image object features can be combined with the radar object features using a weighted sum.
    Type: Application
    Filed: January 26, 2024
    Publication date: August 8, 2024
    Applicants: Ford Global Technologies, LLC, Board of Trustees of Michigan State University
    Inventors: Yunfei Long, Daniel Morris, Abhinav Kumar, Xiaoming Liu, Marcos Paul Gerardo Castro, Punarjay Chakravarty, Praveen Narayanan
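The final fusion step — combining image and radar object features with a weighted sum — is simple to sketch. The normalization by total confidence is an assumption for illustration; the patent only states a weighted sum:

```python
def fuse_features(image_features, radar_features, image_conf, radar_conf):
    """Confidence-weighted sum of image and radar object features."""
    total = image_conf + radar_conf
    return [(image_conf * a + radar_conf * b) / total
            for a, b in zip(image_features, radar_features)]
```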
  • Patent number: 12045950
    Abstract: A plurality of virtual three-dimensional points distributed on a 3D reference plane for a camera array including a plurality of cameras are randomly selected. The plurality of cameras includes a host camera and one or more additional cameras. Respective two-dimensional projections of the plurality of virtual 3D points for the plurality of cameras are determined based on respective poses of the cameras. For the respective one or more additional cameras, respective homography matrices are determined based on the 2D projections for the respective camera and the 2D projections for the host camera. The respective homography matrices map the 2D projections for the respective camera to the 2D projections for the host camera. A stitched image is generated based on respective images captured by the plurality of cameras and the respective homography matrices.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: July 23, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Shubham Shrivastava
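The abstract maps each camera's 2D projections onto the host camera's via a homography. A full homography needs an 8-parameter solve; as a sketch, the scale-plus-translation special case has a closed-form least-squares fit (this restricted model is an assumption for illustration, not the patent's general mapping):

```python
def fit_scale_translation(cam_projections, host_projections):
    """Least-squares fit of q = s * p + t between one camera's 2D
    projections of the virtual plane points and the host camera's."""
    n = len(cam_projections)
    px = sum(p[0] for p in cam_projections) / n
    py = sum(p[1] for p in cam_projections) / n
    qx = sum(q[0] for q in host_projections) / n
    qy = sum(q[1] for q in host_projections) / n
    num = sum((p[0] - px) * (q[0] - qx) + (p[1] - py) * (q[1] - qy)
              for p, q in zip(cam_projections, host_projections))
    den = sum((p[0] - px) ** 2 + (p[1] - py) ** 2 for p in cam_projections)
    s = num / den
    return s, (qx - s * px, qy - s * py)

def warp_point(p, s, t):
    """Map a point from the camera's image into the host image for
    stitching."""
    return (s * p[0] + t[0], s * p[1] + t[1])
```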
  • Patent number: 12008787
    Abstract: A depth image of an object can be input to a deep neural network to determine a first four degree-of-freedom pose of the object. The first four degree-of-freedom pose and a three-dimensional model of the object can be input to a silhouette rendering program to determine a first two-dimensional silhouette of the object. A second two-dimensional silhouette of the object can be determined based on thresholding the depth image. A loss function can be determined based on comparing the first two-dimensional silhouette of the object to the second two-dimensional silhouette of the object. Deep neural network parameters can be optimized based on the loss function and the deep neural network can be output.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: June 11, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Shubham Shrivastava, Gaurav Pandey, Punarjay Chakravarty
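Two steps of this training loop can be sketched: thresholding the depth image into a silhouette, and comparing two silhouettes with a loss. The IoU-style loss below is a simple stand-in; the patent does not specify the comparison function:

```python
def silhouette_from_depth(depth_image, threshold):
    """Threshold a depth image into a binary silhouette: pixels nearer
    than the threshold are treated as the object."""
    return [[1 if d < threshold else 0 for d in row] for row in depth_image]

def silhouette_loss(a, b):
    """One minus intersection-over-union between two binary silhouettes."""
    inter = sum(x * y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(max(x, y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return 1.0 - inter / union if union else 0.0
```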
  • Publication number: 20240087332
    Abstract: A computer is programmed to receive image data from a sensor; generate a feature pyramid from the image data, the feature pyramid including a plurality of features; apply a plurality of preliminary bounding boxes to the features to generate a plurality of preliminarily bounded features, each preliminarily bounded feature being a pairing of one of the preliminary bounding boxes and one of the features; execute a machine-learning program on the preliminarily bounded features to determine a plurality of classifications and a respective plurality of predicted bounding boxes; and actuate a component of a machine, e.g., a vehicle, based on the classifications and the predicted bounding boxes. The machine-learning program is a two-stage object detector having a first stage and a second stage. The first stage selects a subset of the preliminarily bounded features to pass to the second stage.
    Type: Application
    Filed: September 14, 2022
    Publication date: March 14, 2024
    Applicant: Ford Global Technologies, LLC
    Inventors: Cédric Picron, Tinne Tuytelaars, Punarjay Chakravarty, Shubham Shrivastava
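The first stage's role — selecting a subset of the preliminarily bounded features to pass to the second stage — can be sketched as a top-k filter on an objectness score (the scoring network itself is omitted, and the tuple layout is hypothetical):

```python
def first_stage(preliminarily_bounded, k):
    """Keep the k highest-scoring (score, feature) candidates; only
    these reach the second stage of the two-stage detector."""
    return sorted(preliminarily_bounded,
                  key=lambda item: item[0], reverse=True)[:k]
```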
  • Publication number: 20240054673
    Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to receive an image frame from a camera, the image frame showing a vehicle, determine a two-dimensional bounding box around the vehicle in the image frame, determine a first three-dimensional point that is an intersection between (a) a ray extending from the camera through a center of the two-dimensional bounding box and (b) a first plane that is parallel to a ground plane, determine a second three-dimensional point on the vehicle, and determine an orientation of the vehicle based on the first three-dimensional point, the second three-dimensional point, and the ground plane. The ground plane represents a surface on which the vehicle sits.
    Type: Application
    Filed: August 9, 2022
    Publication date: February 15, 2024
    Applicant: Ford Global Technologies, LLC
    Inventors: Shubham Shrivastava, Bhushan Ghadge, Punarjay Chakravarty
  • Publication number: 20240046563
    Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to jointly train a geometric NeRF multilayer perceptron (MLP) and a color NeRF MLP to model a scene using an occupancy grid map, camera data of the scene from a camera, and lidar data of the scene from a lidar; supervise the geometric NeRF MLP with the lidar data during the joint training; and supervise the color NeRF MLP with the camera data during the joint training. The geometric NeRF MLP is a neural radiance field modeling a geometry of the scene, and the color NeRF MLP is a neural radiance field modeling colors of the scene.
    Type: Application
    Filed: July 25, 2023
    Publication date: February 8, 2024
    Applicants: Ford Global Technologies, LLC, THE REGENTS OF THE UNIVERSITY OF MICHIGAN
    Inventors: Alexandra Carlson, Nikita Jaipuria, Punarjay Chakravarty, Manikandasriram Srinivasan Ramanagopal, Ramanarayan Vasudevan, Katherine Skinner
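The supervision split in this abstract — camera data supervises the colour NeRF MLP, lidar data the geometric one — can be sketched as a joint loss over per-ray predictions (the MLPs themselves are replaced here by their outputs, and the squared-error form is an assumption):

```python
def joint_nerf_loss(pred_colors, camera_colors, pred_depths, lidar_depths,
                    w_color=1.0, w_geometry=1.0):
    """Joint objective: the colour branch is penalized against camera
    colours and the geometric branch against lidar depths."""
    color = sum((a - b) ** 2
                for a, b in zip(pred_colors, camera_colors)) / len(camera_colors)
    geometry = sum((a - b) ** 2
                   for a, b in zip(pred_depths, lidar_depths)) / len(lidar_depths)
    return w_color * color + w_geometry * geometry
```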
  • Publication number: 20240037772
    Abstract: A robot can be moved in a structure that includes a plurality of downward-facing cameras, and, as the robot moves, upward images can be captured with an upward-facing camera mounted to the robot. Downward images can be captured with the respective downward-facing cameras. Upward-facing camera poses can be determined at respective times based on the upward images. Further, respective poses of the downward-facing cameras can be determined based on (a) the motion of the robot as determined from the downward images, and (b) the upward-facing camera poses determined from the upward images.

    Type: Application
    Filed: July 27, 2022
    Publication date: February 1, 2024
    Applicant: Ford Global Technologies, LLC
    Inventors: Subodh Mishra, Punarjay Chakravarty, Sagar Manglani, Sushruth Nagesh, Gaurav Pandey
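Chaining camera and robot poses, as this abstract does, comes down to composing rigid transforms. A planar (SE(2)) sketch of pose composition, with poses as (x, y, theta) tuples:

```python
import math

def compose_se2(a, b):
    """Compose two planar poses: the pose of frame b expressed through
    frame a (rotate b's translation by a's heading, then add)."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + math.cos(at) * bx - math.sin(at) * by,
            ay + math.sin(at) * bx + math.cos(at) * by,
            at + bt)
```

For example, a robot at (1, 0) heading 90° carrying a camera offset 1 m forward places that camera at (1, 1).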
  • Patent number: 11887317
    Abstract: A plurality of agent locations can be determined at a plurality of time steps by inputting a plurality of images to a perception algorithm that inputs the plurality of images and outputs agent labels and the agent locations. A plurality of first uncertainties corresponding to the agent locations can be determined at the plurality of time steps by inputting the plurality of agent locations to a filter algorithm that inputs the agent locations and outputs the plurality of first uncertainties corresponding to the plurality of agent locations. A plurality of predicted agent trajectories and potential trajectories corresponding to the predicted agent trajectories can be determined by inputting the plurality of agent locations at the plurality of time steps and the first uncertainties corresponding to the agent locations at the plurality of time steps to a variational autoencoder.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: January 30, 2024
    Assignees: Ford Global Technologies, LLC, The Board of Trustees of the Leland Stanford Junior University
    Inventors: Boris Ivanovic, Yifeng Lin, Shubham Shrivastava, Punarjay Chakravarty, Marco Pavone
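The filter algorithm in this abstract turns agent locations into locations plus uncertainties. A toy 1D Kalman filter shows the shape of that step (the actual filter, and the variational autoencoder downstream, are not specified here):

```python
def filter_locations(measurements, meas_var=1.0, proc_var=0.1):
    """Toy 1D Kalman filter: each measurement yields a location estimate
    and an uncertainty (variance), the 'first uncertainties' fed to the
    trajectory predictor."""
    x, p = measurements[0], 1.0
    out = []
    for z in measurements:
        p = p + proc_var            # predict: uncertainty grows
        k = p / (p + meas_var)      # Kalman gain
        x = x + k * (z - x)         # update the location estimate
        p = (1.0 - k) * p           # update the uncertainty
        out.append((x, p))
    return out
```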
  • Patent number: 11887323
    Abstract: A method may include: receiving a first image captured by a camera at a first time instance, wherein the first image includes at least a portion of an observed vehicle; determining a first ray angle based on a coordinate system of an ego-vehicle and a coordinate system of the observed vehicle corresponding to the first image; receiving a second image captured by the camera at a second time instance, wherein the second image includes at least a portion of the observed vehicle oriented at a different viewpoint; determining a second ray angle based on a coordinate system of the ego-vehicle and the coordinate system of the observed vehicle corresponding to the second image; determining a local angle difference based on the first ray angle and the second ray angle; and training a deep neural network using the local angle difference, the first image, and the second image.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: January 30, 2024
    Assignee: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Tinne Tuytelaars, Cédric Picron, Tom Roussel
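The ray angles and their local difference, which supply this method's self-supervision signal, can be sketched directly (coordinate conventions here are assumptions, not taken from the patent):

```python
import math

def ray_angle(ego_xy, observed_xy):
    """Bearing of the ray from the ego-vehicle to the observed vehicle."""
    return math.atan2(observed_xy[1] - ego_xy[1],
                      observed_xy[0] - ego_xy[0])

def local_angle_difference(angle1, angle2):
    """Difference between two ray angles, wrapped into (-pi, pi]."""
    d = angle2 - angle1
    while d > math.pi:
        d -= 2.0 * math.pi
    while d <= -math.pi:
        d += 2.0 * math.pi
    return d
```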
  • Publication number: 20230419539
    Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to receive at least one image frame from at least one camera, the at least one image frame showing a vehicle; determine at least one baseline pixel coordinate within one image frame for the vehicle; initialize a plurality of initial poses for the vehicle in a preset pattern; for each initial pose, determine a respective final pose by minimizing a reprojection error between the at least one baseline pixel coordinate and at least one respective estimated pixel coordinate, the at least one respective estimated pixel coordinate resulting from reprojecting the vehicle to pixel coordinates in the one image frame; and select a first final pose from the final poses, the first final pose having the lowest minimized reprojection error of the final poses.
    Type: Application
    Filed: May 17, 2022
    Publication date: December 28, 2023
    Applicant: Ford Global Technologies, LLC
    Inventors: Sushruth Nagesh, Shubham Shrivastava, Punarjay Chakravarty
  • Patent number: 11827203
    Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to capture, from a camera, one or more images, wherein the one or more images include at least a portion of a vehicle, receive a plurality of keypoints corresponding to markers on the vehicle and instantiate a virtual vehicle corresponding to the vehicle. The instructions include further instructions to determine rotational and translational parameters of the vehicle by matching a plurality of virtual keypoints to the plurality of keypoints and determine a multi-degree of freedom (MDF) pose of the vehicle based on the rotational and translational parameters.
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: November 28, 2023
    Assignee: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Shubham Shrivastava
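Matching virtual keypoints to observed ones to recover rotation and translation has a closed form in the planar case (2D Kabsch alignment). This is a planar analogue for illustration; the patent's pose is multi-degree-of-freedom:

```python
import math

def match_keypoints(virtual_pts, observed_pts):
    """Closed-form 2D rigid alignment: the rotation angle and translation
    minimizing squared distances between matched keypoints."""
    n = len(virtual_pts)
    vx = sum(p[0] for p in virtual_pts) / n
    vy = sum(p[1] for p in virtual_pts) / n
    ox = sum(p[0] for p in observed_pts) / n
    oy = sum(p[1] for p in observed_pts) / n
    # Angle from summed cross and dot products of the centred point sets.
    num = sum((v[0] - vx) * (o[1] - oy) - (v[1] - vy) * (o[0] - ox)
              for v, o in zip(virtual_pts, observed_pts))
    den = sum((v[0] - vx) * (o[0] - ox) + (v[1] - vy) * (o[1] - oy)
              for v, o in zip(virtual_pts, observed_pts))
    theta = math.atan2(num, den)
    tx = ox - (math.cos(theta) * vx - math.sin(theta) * vy)
    ty = oy - (math.sin(theta) * vx + math.cos(theta) * vy)
    return theta, (tx, ty)
```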
  • Publication number: 20230267640
    Abstract: A two-dimensional image segment that includes an outline of an object can be determined in a top-down fisheye image. A six degree of freedom (DoF) pose for the object can be determined based on determining a three-dimensional bounding box determined by one or more of (1) an axis of the two-dimensional image segment in a ground plane included in the top-down fisheye image and a three-dimensional model of the object and (2) inputting the two-dimensional image segment to a deep neural network trained to determine a three-dimensional bounding box for the object.
    Type: Application
    Filed: February 21, 2022
    Publication date: August 24, 2023
    Applicant: Ford Global Technologies, LLC
    Inventors: Punarjay Chakravarty, Subodh Mishra, Mostafa Parchami, Gaurav Pandey, Shubham Shrivastava
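Route (1) in this abstract needs the axis of the 2D image segment in the ground plane. One standard way to get it — the principal axis from the segment's second moments — can be sketched as follows (this choice of axis estimator is an assumption, not stated in the patent):

```python
import math

def segment_axis_angle(pixels):
    """Principal-axis orientation of a 2D image segment, from the
    eigen-direction of its second-moment matrix."""
    n = len(pixels)
    cx = sum(p[0] for p in pixels) / n
    cy = sum(p[1] for p in pixels) / n
    sxx = sum((p[0] - cx) ** 2 for p in pixels)
    syy = sum((p[1] - cy) ** 2 for p in pixels)
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in pixels)
    return 0.5 * math.atan2(2.0 * sxy, sxx - syy)
```

A segment elongated along the x-axis yields angle 0; one along the diagonal yields pi/4.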