Patents by Inventor Shubham Shrivastava
Shubham Shrivastava has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12197208
Abstract: A first plurality of center points of first two-dimensional bounding boxes corresponding to a vehicle occurring in a first plurality of images acquired by a first camera can be determined. A second plurality of center points of second two-dimensional bounding boxes corresponding to the vehicle occurring in a second plurality of images acquired by a second camera can also be determined. A plurality of non-linear equations based on the locations of the first and second pluralities of center points and first and second camera parameters corresponding to the first and second cameras can be determined. The plurality of non-linear equations can be solved simultaneously for the locations of the vehicle with respect to the first and second cameras and the six degree of freedom pose of the second camera with respect to the first camera.
Type: Grant
Filed: November 1, 2021
Date of Patent: January 14, 2025
Assignee: Ford Global Technologies, LLC
Inventors: Punarjay Chakravarty, Shubham Shrivastava, Bhushan Ghadge, Mostafa Parchami, Gaurav Pandey
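The simultaneous nonlinear solve described above is a joint optimization over vehicle locations and camera pose; the geometric building block it rests on can be sketched with plain NumPy. The sketch below is illustrative only, not the patented method: it triangulates a single bounding-box center seen by two cameras whose intrinsics `K` and poses `R, t` are assumed known, by intersecting the two back-projected rays in a least-squares sense (the patent instead solves jointly for the second camera's pose as well).

```python
import numpy as np

def ray(K, R, t, pixel):
    """Back-project a pixel to a world-space ray (origin, unit direction),
    assuming the pinhole model x_cam = R @ X + t."""
    d = R.T @ np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    origin = -R.T @ np.asarray(t, float)
    return origin, d / np.linalg.norm(d)

def triangulate(o1, d1, o2, d2):
    """Closest point to two rays via linear least squares."""
    # Solve [d1 -d2] @ [s, u] = o2 - o1 for the two ray parameters,
    # then average the two closest points.
    A = np.stack([d1, -d2], axis=1)
    s, u = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return 0.5 * ((o1 + s * d1) + (o2 + u * d2))
```

With noisy center points from many frames, one would stack such residuals and minimize them jointly, which is where the simultaneous solution of the patent comes in.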
-
Publication number: 20240404081
Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to determine a quantity of relative motion between a host vehicle and a target vehicle; in response to the quantity of relative motion being below a threshold, determine a pose of the host vehicle subject to a constraint on motion of the host vehicle; and in response to the quantity of relative motion being above the threshold, determine the pose of the host vehicle without the constraint.
Type: Application
Filed: June 2, 2023
Publication date: December 5, 2024
Applicants: Ford Global Technologies, LLC, QUEENSLAND UNIVERSITY OF TECHNOLOGY
Inventors: Stephen Hausler, Shubham Shrivastava, Punarjay Chakravarty, Ankit Girish Vora, Michael Milford, Sourav Garg
-
Patent number: 12100189
Abstract: An image can be input to a deep neural network to determine a point in the image based on a center of a Gaussian heatmap corresponding to an object included in the image. The deep neural network can determine an object descriptor corresponding to the object and include the object descriptor in an object vector attached to the point. The deep neural network can determine object parameters including a three-dimensional location of the object in global coordinates and predicted pixel offsets of the object. The object parameters can be included in the object vector, and the deep neural network can predict a future location of the object in global coordinates based on the point and the object vector.
Type: Grant
Filed: December 14, 2021
Date of Patent: September 24, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Shubham Shrivastava, Punarjay Chakravarty, Gaurav Pandey
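The heatmap-plus-object-vector decoding step above can be sketched in a few lines of NumPy. This is a minimal sketch under assumptions (a single object, the peak taken as a plain argmax, a hypothetical `object_vectors` tensor layout); the patent's network learns both tensors and attaches richer parameters to the point.

```python
import numpy as np

def decode_detection(heatmap, object_vectors):
    """Read the strongest heatmap peak and the object vector attached to it.

    heatmap: (H, W) array of Gaussian-smoothed center confidences.
    object_vectors: (H, W, C) array; the channel vector at a pixel holds the
    descriptor/offset parameters predicted for an object centered there.
    """
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return (int(y), int(x)), object_vectors[y, x]
```

A real decoder would also apply local-maximum suppression to recover multiple objects rather than a single argmax.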
-
Patent number: 12045950
Abstract: A plurality of virtual three-dimensional points distributed on a 3D reference plane for a camera array including a plurality of cameras are randomly selected. The plurality of cameras includes a host camera and one or more additional cameras. Respective two-dimensional projections of the plurality of virtual 3D points for the plurality of cameras are determined based on respective poses of the cameras. For the respective one or more additional cameras, respective homography matrices are determined based on the 2D projections for the respective camera and the 2D projections for the host camera. The respective homography matrices map the 2D projections for the respective camera to the 2D projections for the host camera. A stitched image is generated based on respective images captured by the plurality of cameras and the respective homography matrices.
Type: Grant
Filed: September 27, 2021
Date of Patent: July 23, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Punarjay Chakravarty, Shubham Shrivastava
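The homography estimation at the heart of this stitching pipeline can be sketched with the standard direct linear transform (DLT). This is a generic textbook routine, not code from the patent; it recovers the 3x3 homography mapping one camera's 2D projections of the shared virtual points onto the host camera's projections, given four or more correspondences.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: H mapping src (N, 2) points to dst (N, 2), N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the constraint A @ h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

Because the patent generates the correspondences synthetically from known camera poses, the points are noise-free and DLT alone suffices; with real image matches one would typically add normalization and RANSAC.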
-
Patent number: 12008787
Abstract: A depth image of an object can be input to a deep neural network to determine a first four degree-of-freedom pose of the object. The first four degree-of-freedom pose and a three-dimensional model of the object can be input to a silhouette rendering program to determine a first two-dimensional silhouette of the object. A second two-dimensional silhouette of the object can be determined based on thresholding the depth image. A loss function can be determined based on comparing the first two-dimensional silhouette of the object to the second two-dimensional silhouette of the object. Deep neural network parameters can be optimized based on the loss function and the deep neural network can be output.
Type: Grant
Filed: July 20, 2021
Date of Patent: June 11, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Shubham Shrivastava, Gaurav Pandey, Punarjay Chakravarty
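The thresholded-silhouette and silhouette-comparison steps can be sketched as follows. This is an assumed stand-in, not the patent's loss: the depth band (`near`, `far`) is hypothetical, and 1 minus intersection-over-union is used here merely as one plausible way to compare two binary silhouettes.

```python
import numpy as np

def silhouette_from_depth(depth, near=0.1, far=5.0):
    """Binary object mask: pixels whose depth falls inside the band."""
    return (depth > near) & (depth < far)

def silhouette_loss(pred_mask, target_mask):
    """1 - IoU between two binary silhouettes (0 when they match exactly)."""
    inter = np.logical_and(pred_mask, target_mask).sum()
    union = np.logical_or(pred_mask, target_mask).sum()
    return 1.0 - inter / max(union, 1)
```

In training, the predicted mask would come from the differentiable silhouette renderer driven by the network's four-DoF pose estimate, so the loss gradient can flow back into the network parameters.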
-
Patent number: 11931868
Abstract: A wrench head for a universal wrench is disclosed that may include a housing having an upper portion and a lower portion. The wrench head may further include a base plate positioned inside the housing towards the upper portion of the housing, and a plurality of push-rods positioned inside the housing. The wrench head may further include a resilient member positioned inside the housing between the base plate and the distal end of each of the plurality of push-rods. In the default position, the resilient member biases each of the plurality of push-rods, and in the retracted position, the proximal end of each of the plurality of push-rods is pushed against the resilient member to be received along with a portion of the resilient member by a compartment of the plurality of compartments defined at the bottom surface of the base plate.
Type: Grant
Filed: February 2, 2022
Date of Patent: March 19, 2024
Assignee: L&T TECHNOLOGY SERVICES LIMITED
Inventor: Shubham Shrivastava
-
Publication number: 20240087332
Abstract: A computer is programmed to receive image data from a sensor; generate a feature pyramid from the image data, the feature pyramid including a plurality of features; apply a plurality of preliminary bounding boxes to the features to generate a plurality of preliminarily bounded features, each preliminarily bounded feature being a pairing of one of the preliminary bounding boxes and one of the features; execute a machine-learning program on the preliminarily bounded features to determine a plurality of classifications and a respective plurality of predicted bounding boxes; and actuate a component of a machine, e.g., a vehicle, based on the classifications and the predicted bounding boxes. The machine-learning program is a two-stage object detector having a first stage and a second stage. The first stage selects a subset of the preliminarily bounded features to pass to the second stage.
Type: Application
Filed: September 14, 2022
Publication date: March 14, 2024
Applicant: Ford Global Technologies, LLC
Inventors: Cédric Picron, Tinne Tuytelaars, Punarjay Chakravarty, Shubham Shrivastava
-
Publication number: 20240054673
Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to receive an image frame from a camera, the image frame showing a vehicle, determine a two-dimensional bounding box around the vehicle in the image frame, determine a first three-dimensional point that is an intersection between (a) a ray extending from the camera through a center of the two-dimensional bounding box and (b) a first plane that is parallel to a ground plane, determine a second three-dimensional point on the vehicle, and determine an orientation of the vehicle based on the first three-dimensional point, the second three-dimensional point, and the ground plane. The ground plane represents a surface on which the vehicle sits.
Type: Application
Filed: August 9, 2022
Publication date: February 15, 2024
Applicant: Ford Global Technologies, LLC
Inventors: Shubham Shrivastava, Bhushan Ghadge, Punarjay Chakravarty
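The ray-plane intersection used to place the first three-dimensional point is standard geometry and can be sketched directly. This is a generic routine, not the claimed method itself: the patent additionally chooses the plane's height and derives an orientation from the two recovered points.

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Point where the ray origin + t * direction (t >= 0) meets the plane."""
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # ray is parallel to the plane
    t = np.dot(plane_normal, np.asarray(plane_point, float) - origin) / denom
    return None if t < 0 else origin + t * direction
```

Here `direction` would be the back-projected ray through the bounding-box center, and the plane would sit parallel to the ground at an assumed height (e.g., half the vehicle's height).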
-
Patent number: 11887317
Abstract: A plurality of agent locations can be determined at a plurality of time steps by inputting a plurality of images to a perception algorithm that inputs the plurality of images and outputs agent labels and the agent locations. A plurality of first uncertainties corresponding to the agent locations can be determined at the plurality of time steps by inputting the plurality of agent locations to a filter algorithm that inputs the agent locations and outputs the plurality of first uncertainties corresponding to the plurality of agent locations. A plurality of predicted agent trajectories and potential trajectories corresponding to the predicted agent trajectories can be determined by inputting the plurality of agent locations at the plurality of time steps and the first uncertainties corresponding to the agent locations at the plurality of time steps to a variational autoencoder.
Type: Grant
Filed: August 30, 2021
Date of Patent: January 30, 2024
Assignees: Ford Global Technologies, LLC, The Board of Trustees of the Leland Stanford Junior University
Inventors: Boris Ivanovic, Yifeng Lin, Shubham Shrivastava, Punarjay Chakravarty, Marco Pavone
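A filter algorithm that turns raw agent locations into locations plus uncertainties is typically some variant of a Kalman filter. The sketch below is an assumed illustration, not the patent's filter: a one-dimensional constant-velocity Kalman filter whose per-step position variance plays the role of the "first uncertainties" a downstream trajectory predictor could consume (the hypothetical noise parameters `q` and `r` are mine).

```python
import numpy as np

def kalman_cv(zs, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over 1-D position measurements zs.

    Returns a list of (filtered position, position variance) per time step.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    x, P = np.array([zs[0], 0.0]), np.eye(2)
    out = []
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q       # predict
        S = H @ P @ H.T + r                 # innovation covariance
        K = P @ H.T / S                     # Kalman gain (2x1)
        x = x + (K * (z - H @ x)).ravel()   # update state
        P = (np.eye(2) - K @ H) @ P         # update covariance
        out.append((x[0], P[0, 0]))
    return out
```

In the patented pipeline the (location, uncertainty) pairs over time, per agent, feed the variational autoencoder that generates the predicted trajectories.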
-
Publication number: 20230419539
Abstract: A computer includes a processor and a memory, and the memory stores instructions executable by the processor to receive at least one image frame from at least one camera, the at least one image frame showing a vehicle; determine at least one baseline pixel coordinate within one image frame for the vehicle; initialize a plurality of initial poses for the vehicle in a preset pattern; for each initial pose, determine a respective final pose by minimizing a reprojection error between the at least one baseline pixel coordinate and at least one respective estimated pixel coordinate, the at least one respective estimated pixel coordinate resulting from reprojecting the vehicle to pixel coordinates in the one image frame; and select a first final pose from the final poses, the first final pose having the lowest minimized reprojection error of the final poses.
Type: Application
Filed: May 17, 2022
Publication date: December 28, 2023
Applicant: Ford Global Technologies, LLC
Inventors: Sushruth Nagesh, Shubham Shrivastava, Punarjay Chakravarty
-
Patent number: 11827203
Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to capture, from a camera, one or more images, wherein the one or more images include at least a portion of a vehicle, receive a plurality of keypoints corresponding to markers on the vehicle and instantiate a virtual vehicle corresponding to the vehicle. The instructions include further instructions to determine rotational and translation parameters of the vehicle by matching a plurality of virtual keypoints to the plurality of keypoints and determine a multi-degree of freedom (MDF) pose of the vehicle based on the rotational and translation parameters.
Type: Grant
Filed: January 14, 2021
Date of Patent: November 28, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Punarjay Chakravarty, Shubham Shrivastava
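Recovering rotation and translation from matched keypoints is a classical alignment problem. As a hedged illustration (not the patent's algorithm, which matches virtual keypoints against image observations), the sketch below uses the Kabsch method to find the rigid transform aligning two sets of corresponding 3D keypoints:

```python
import numpy as np

def rigid_transform(model_pts, obs_pts):
    """Kabsch: rotation R and translation t with obs ≈ R @ model + t.

    model_pts, obs_pts: (N, 3) arrays of corresponding points, N >= 3.
    """
    cm, co = model_pts.mean(axis=0), obs_pts.mean(axis=0)
    M = (model_pts - cm).T @ (obs_pts - co)  # cross-covariance
    U, _, Vt = np.linalg.svd(M)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # fix improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = co - R @ cm
    return R, t
```

With only 2D image keypoints available, the analogous step would be a perspective-n-point (PnP) solve against the virtual vehicle's 3D marker positions.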
-
Publication number: 20230267640
Abstract: A two-dimensional image segment that includes an outline of an object can be determined in a top-down fisheye image. A six degree of freedom (DoF) pose for the object can be determined based on determining a three-dimensional bounding box determined by one or more of (1) an axis of the two-dimensional image segment in a ground plane included in the top-down fisheye image and a three-dimensional model of the object and (2) inputting the two-dimensional image segment to a deep neural network trained to determine a three-dimensional bounding box for the object.
Type: Application
Filed: February 21, 2022
Publication date: August 24, 2023
Applicant: Ford Global Technologies, LLC
Inventors: Punarjay Chakravarty, Subodh Mishra, Mostafa Parchami, Gaurav Pandey, Shubham Shrivastava
-
Publication number: 20230252667
Abstract: An approximate camera location on a route can be determined by inputting a first image acquired by a vehicle camera to a first convolutional neural network. First image feature points can be extracted from the first image using a feature extraction algorithm. Pose estimation parameters for a pose estimation algorithm can be selected based on the approximate camera location. A six degree-of-freedom (DoF) camera pose can be determined by inputting the first image feature points and second feature points included in a structure-from-motion (SfM) map based on the route to the pose estimation algorithm which is controlled by the pose estimation parameters. A six DoF vehicle pose can be determined based on the six DoF camera pose.
Type: Application
Filed: February 8, 2022
Publication date: August 10, 2023
Applicant: Ford Global Technologies, LLC
Inventors: Ming Xu, Sourav Garg, Michael Milford, Punarjay Chakravarty, Shubham Shrivastava
-
Patent number: 11710254
Abstract: A first six degree-of-freedom (DoF) pose of an object from a perspective of a first image sensor is determined with a neural network. A second six DoF pose of the object from a perspective of a second image sensor is determined with the neural network. A pose offset between the first and second six DoF poses is determined. A first projection offset is determined for a first two-dimensional (2D) bounding box generated from the first six DoF pose. A second projection offset is determined for a second 2D bounding box generated from the second six DoF pose. A total offset is determined by combining the pose offset, the first projection offset, and the second projection offset. Parameters of a loss function are updated based on the total offset. The updated parameters are provided to the neural network to obtain an updated total offset.
Type: Grant
Filed: April 7, 2021
Date of Patent: July 25, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Shubham Shrivastava, Punarjay Chakravarty, Gaurav Pandey
-
Publication number: 20230186587
Abstract: An image can be input to a deep neural network to determine a point in the image based on a center of a Gaussian heatmap corresponding to an object included in the image. The deep neural network can determine an object descriptor corresponding to the object and include the object descriptor in an object vector attached to the point. The deep neural network can determine object parameters including a three-dimensional location of the object in global coordinates and predicted pixel offsets of the object. The object parameters can be included in the object vector, and the deep neural network can predict a future location of the object in global coordinates based on the point and the object vector.
Type: Application
Filed: December 14, 2021
Publication date: June 15, 2023
Applicant: Ford Global Technologies, LLC
Inventors: Shubham Shrivastava, Punarjay Chakravarty, Gaurav Pandey
-
Patent number: 11670088
Abstract: A plurality of temporally successive vehicle sensor images are received as input to a variational autoencoder neural network that outputs an averaged semantic birds-eye view image that includes respective pixels determined by averaging semantic class values of corresponding pixels in respective images in the plurality of temporally successive vehicle sensor images. From a plurality of topological nodes that each specify respective real-world locations, a topological node closest to the vehicle, and a three degree-of-freedom pose for the vehicle relative to the topological node closest to the vehicle, is determined based on the averaged semantic birds-eye view image. A real-world three degree-of-freedom pose for the vehicle is determined by combining the three degree-of-freedom pose for the vehicle relative to the topological node and the real-world location of the topological node closest to the vehicle.
Type: Grant
Filed: December 7, 2020
Date of Patent: June 6, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Mokshith Voodarla, Shubham Shrivastava, Punarjay Chakravarty
-
Publication number: 20230136871
Abstract: A first plurality of center points of first two-dimensional bounding boxes corresponding to a vehicle occurring in a first plurality of images acquired by a first camera can be determined. A second plurality of center points of second two-dimensional bounding boxes corresponding to the vehicle occurring in a second plurality of images acquired by a second camera can also be determined. A plurality of non-linear equations based on the locations of the first and second pluralities of center points and first and second camera parameters corresponding to the first and second cameras can be determined. The plurality of non-linear equations can be solved simultaneously for the locations of the vehicle with respect to the first and second cameras and the six degree of freedom pose of the second camera with respect to the first camera. Real-world coordinates of the six degree of freedom pose of the second camera can be determined based on real-world coordinates of a six degree of freedom pose of the first camera.
Type: Application
Filed: November 1, 2021
Publication date: May 4, 2023
Applicant: Ford Global Technologies, LLC
Inventors: Punarjay Chakravarty, Shubham Shrivastava, Bhushan Ghadge, Mostafa Parchami, Gaurav Pandey
-
Patent number: 11619727
Abstract: A calibration device and method of calculating a global multi-degree of freedom (MDF) pose of a camera affixed to a structure is disclosed. The method may comprise: determining, via a computer of a calibration device, a calibration device MDF pose with respect to a global coordinate system corresponding to the structure; receiving, from an image system including the camera, a camera MDF pose with respect to the calibration device, wherein a computer of the image system determines the camera MDF pose based on an image captured by the camera including at least a calibration board affixed to the calibration device; calculating the global MDF pose based on the calibration device MDF pose and the camera MDF pose; and transmitting the global MDF pose to the image system such that a computer of the image system can use the global MDF pose for calibration purposes.
Type: Grant
Filed: June 29, 2020
Date of Patent: April 4, 2023
Assignee: FORD GLOBAL TECHNOLOGIES, LLC
Inventors: Sagar Manglani, Punarjay Chakravarty, Shubham Shrivastava
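Calculating the global camera pose from the device's global pose and the camera's pose relative to the device is a pose composition. As a minimal sketch (generic homogeneous-transform algebra, with hypothetical frame names, not code from the patent):

```python
import numpy as np

def to_hom(R, t):
    """Pack a rotation matrix and translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def global_camera_pose(T_world_device, T_device_camera):
    """Global camera pose = (global pose of the calibration device)
    composed with (camera pose w.r.t. the device)."""
    return T_world_device @ T_device_camera
```

Note the frame convention matters: this assumes each transform maps points from the child frame into the parent frame, so the composition world←device←camera is a single matrix product.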
-
Publication number: 20230097584
Abstract: A plurality of virtual three-dimensional (3D) points distributed on a 3D reference plane for a camera array including a plurality of cameras are randomly selected. The plurality of cameras includes a host camera and one or more additional cameras. Respective two-dimensional (2D) projections of the plurality of virtual 3D points for the plurality of cameras are determined based on respective poses of the cameras. For the respective one or more additional cameras, respective homography matrices are determined based on the 2D projections for the respective camera and the 2D projections for the host camera. The respective homography matrices map the 2D projections for the respective camera to the 2D projections for the host camera. A stitched image is generated based on respective images captured by the plurality of cameras and the respective homography matrices.
Type: Application
Filed: September 27, 2021
Publication date: March 30, 2023
Applicant: Ford Global Technologies, LLC
Inventors: Punarjay Chakravarty, Shubham Shrivastava
-
Publication number: 20230074293
Abstract: A plurality of agent locations can be determined at a plurality of time steps by inputting a plurality of images to a perception algorithm that inputs the plurality of images and outputs agent labels and the agent locations. A plurality of first uncertainties corresponding to the agent locations can be determined at the plurality of time steps by inputting the plurality of agent locations to a filter algorithm that inputs the agent locations and outputs the plurality of first uncertainties corresponding to the plurality of agent locations. A plurality of predicted agent trajectories and potential trajectories corresponding to the predicted agent trajectories can be determined by inputting the plurality of agent locations at the plurality of time steps and the first uncertainties corresponding to the agent locations at the plurality of time steps to a variational autoencoder.
Type: Application
Filed: August 30, 2021
Publication date: March 9, 2023
Applicants: Ford Global Technologies, LLC, The Board of Trustees of the Leland Stanford Junior University
Inventors: Boris Ivanovic, Yifeng Lin, Shubham Shrivastava, Punarjay Chakravarty, Marco Pavone