Patents by Inventor Gaurav Pandey
Gaurav Pandey has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240409158
Abstract: A trailer angle identification system includes an imaging device configured to capture an image. A controller is configured to receive steering angle data corresponding to a steering angle of a vehicle, receive vehicle speed data corresponding to a vehicle speed, and estimate an angle of the trailer relative to the vehicle by processing the image, the steering angle data, and the vehicle speed data in at least one neural network.
Type: Application
Filed: August 23, 2024
Publication date: December 12, 2024
Applicant: Ford Global Technologies, LLC
Inventors: Gaurav Pandey, Nikita Jaipuria
-
Patent number: 12097899
Abstract: A trailer angle identification system includes an imaging device configured to capture an image. A controller is configured to receive steering angle data corresponding to a steering angle of a vehicle, receive vehicle speed data corresponding to a vehicle speed, and estimate an angle of the trailer relative to the vehicle by processing the image, the steering angle data, and the vehicle speed data in at least one neural network.
Type: Grant
Filed: January 19, 2022
Date of Patent: September 24, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Gaurav Pandey, Nikita Jaipuria
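As a rough illustration of the fusion described in this abstract (image plus steering angle plus vehicle speed fed to a neural network), here is a minimal PyTorch sketch. The architecture, layer sizes, input resolution, and angle units are invented for the example and are not taken from the patent.

```python
import torch
import torch.nn as nn

class TrailerAngleNet(nn.Module):
    """Illustrative fusion network: image + steering angle + speed -> trailer angle."""
    def __init__(self):
        super().__init__()
        # Small convolutional encoder for the rear-camera image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP head that fuses image features with the two scalar signals.
        self.head = nn.Sequential(
            nn.Linear(32 + 2, 64), nn.ReLU(),
            nn.Linear(64, 1),  # predicted trailer angle (e.g., radians)
        )

    def forward(self, image, steering_angle, speed):
        feats = self.encoder(image)
        scalars = torch.stack([steering_angle, speed], dim=1)
        return self.head(torch.cat([feats, scalars], dim=1))

model = TrailerAngleNet()
angle = model(torch.randn(1, 3, 224, 224), torch.tensor([0.1]), torch.tensor([5.0]))
```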
-
Patent number: 12100189
Abstract: An image can be input to a deep neural network to determine a point in the image based on a center of a Gaussian heatmap corresponding to an object included in the image. The deep neural network can determine an object descriptor corresponding to the object and include the object descriptor in an object vector attached to the point. The deep neural network can determine object parameters including a three-dimensional location of the object in global coordinates and predicted pixel offsets of the object. The object parameters can be included in the object vector, and the deep neural network can predict a future location of the object in global coordinates based on the point and the object vector.
Type: Grant
Filed: December 14, 2021
Date of Patent: September 24, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Shubham Shrivastava, Punarjay Chakravarty, Gaurav Pandey
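A small NumPy sketch of the Gaussian-heatmap idea mentioned above: render a heatmap around an object's pixel location and recover the point as the heatmap peak. The image size, sigma, and center values are made up for illustration; the patent's network predicts such heatmaps rather than rendering them directly.

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=4.0):
    """Render a 2D Gaussian centered on an object's (x, y) pixel location."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = center
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

heatmap = gaussian_heatmap((128, 128), center=(40, 90))
# The detected point is the heatmap peak; an object vector (descriptor,
# 3D location, pixel offsets) would be read from feature maps at this point.
peak_y, peak_x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
print(peak_x, peak_y)  # -> 40 90
```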
-
Patent number: 12073588
Abstract: A camera is positioned to obtain an image of an object. The image is input to a neural network that outputs a three-dimensional (3D) bounding box for the object relative to a pixel coordinate system and object parameters. Then a center of a bottom face of the 3D bounding box is determined in pixel coordinates. The bottom face of the 3D bounding box is located in a ground plane in the image. Based on calibration parameters for the camera that transform pixel coordinates into real-world coordinates, a) a distance from the center of the bottom face of the 3D bounding box to the camera relative to a real-world coordinate system and b) an angle between a line extending from the camera to the center of the bottom face of the 3D bounding box and an optical axis of the camera are determined. The calibration parameters include a camera height relative to the ground plane, a camera focal distance, and a camera tilt relative to the ground plane.
Type: Grant
Filed: September 24, 2021
Date of Patent: August 27, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Mostafa Parchami, Enrique Corona, Kunjan Singh, Gaurav Pandey
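One plausible pinhole-geometry reading of the distance-and-angle computation described above, sketched in NumPy. The function name, the axis conventions, and all intrinsic values are assumptions for the example; the patent does not specify these details.

```python
import numpy as np

def distance_and_angle(u, v, fx, fy, cx, cy, cam_height, cam_tilt):
    """Back-project a pixel on the ground plane (e.g., the center of the
    bottom face of a 3D bounding box) and return (distance to camera, angle
    between the viewing ray and the optical axis). Pinhole camera assumed;
    cam_tilt is the downward pitch of the camera in radians."""
    # Ray through the pixel in camera coordinates (z is the optical axis).
    ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Angle between the viewing ray and the optical axis.
    angle = np.arccos(ray_cam[2] / np.linalg.norm(ray_cam))
    # Rotate the ray by the camera tilt about the x-axis so "down" is +y.
    c, s = np.cos(cam_tilt), np.sin(cam_tilt)
    R_tilt = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    ray_level = R_tilt @ ray_cam
    # Scale the ray so it meets the ground plane cam_height below the camera.
    scale = cam_height / ray_level[1]
    ground_point = scale * ray_level
    distance = np.linalg.norm(ground_point)
    return distance, angle

d, a = distance_and_angle(u=700, v=600, fx=1000, fy=1000, cx=640, cy=360,
                          cam_height=1.5, cam_tilt=np.deg2rad(10))
```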
-
Patent number: 12046132
Abstract: First feature points can be determined which correspond to pose-invariant surface model properties based on first data points included in a first lidar point cloud acquired by a sensor. A three-dimensional occupancy grid can be determined based on first data points included in the first lidar point cloud. Dynamic objects in a second lidar point cloud acquired by the sensor can be determined based on the occupancy grid. Second feature points can be determined which correspond to pose-invariant surface model properties based on second data points included in the second lidar point cloud not including the dynamic objects. A difference can be determined between corresponding feature points included in the first feature points and the second feature points. A traffic infrastructure system can be alerted based on the difference exceeding a threshold.
Type: Grant
Filed: November 17, 2021
Date of Patent: July 23, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Matthew Baer Wilmes, Gaurav Pandey, Devarth Parikh
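A minimal NumPy sketch of the occupancy-grid filtering step described above: cells occupied by the reference scan define the static scene, and points of a later scan that fall outside those cells are treated as dynamic and dropped. The cell size, the random data, and the set-based grid representation are all assumptions for illustration.

```python
import numpy as np

def occupancy_grid(points, cell=0.5):
    """Mark which grid cells contain at least one point of the reference cloud."""
    cells = np.floor(points / cell).astype(int)
    return {tuple(c) for c in cells}

def remove_dynamic_points(points, grid, cell=0.5):
    """Drop points of a later scan that land in cells empty in the reference
    grid; such points are treated as belonging to dynamic objects."""
    cells = np.floor(points / cell).astype(int)
    keep = np.array([tuple(c) in grid for c in cells])
    return points[keep]

reference_cloud = np.random.uniform(0, 20, size=(5000, 3))
later_cloud = np.random.uniform(0, 20, size=(5000, 3))
grid = occupancy_grid(reference_cloud)
static_points = remove_dynamic_points(later_cloud, grid)
# Pose-invariant surface features would then be computed on static_points and
# compared against those of the reference cloud; a large difference raises an alert.
```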
-
Patent number: 12008787
Abstract: A depth image of an object can be input to a deep neural network to determine a first four degree-of-freedom pose of the object. The first four degree-of-freedom pose and a three-dimensional model of the object can be input to a silhouette rendering program to determine a first two-dimensional silhouette of the object. A second two-dimensional silhouette of the object can be determined based on thresholding the depth image. A loss function can be determined based on comparing the first two-dimensional silhouette of the object to the second two-dimensional silhouette of the object. Deep neural network parameters can be optimized based on the loss function and the deep neural network can be output.
Type: Grant
Filed: July 20, 2021
Date of Patent: June 11, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Shubham Shrivastava, Gaurav Pandey, Punarjay Chakravarty
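A small NumPy sketch of the silhouette-comparison idea above: threshold the depth image to get one silhouette, compare it with a rendered silhouette, and turn the disagreement into a loss. The 1 minus IoU loss, the depth threshold, and the toy masks are stand-ins; the patent does not name a specific comparison function.

```python
import numpy as np

def silhouette_from_depth(depth, max_range=2.0):
    """Threshold a depth image: pixels closer than max_range are 'object'."""
    return (depth > 0) & (depth < max_range)

def silhouette_loss(rendered, observed):
    """1 - IoU between two binary silhouettes; lower means better agreement."""
    intersection = np.logical_and(rendered, observed).sum()
    union = np.logical_or(rendered, observed).sum()
    return 1.0 - intersection / max(union, 1)

depth = np.zeros((64, 64)); depth[20:40, 20:40] = 1.0            # toy depth image
rendered = np.zeros((64, 64), dtype=bool); rendered[22:42, 20:40] = True
observed = silhouette_from_depth(depth)
print(silhouette_loss(rendered, observed))
```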
-
Patent number: 11893004
Abstract: A computer includes a processor and a memory storing instructions executable by the processor to receive a time series of vectors from a sensor, determine a weighted moving mean of the vectors, determine an inverse covariance matrix of the vectors, receive a current vector from the sensor, determine a squared Mahalanobis distance between the current vector and the weighted moving mean, and output an indicator of an anomaly with the sensor in response to the squared Mahalanobis distance exceeding a threshold. The squared Mahalanobis distance is determined by using the inverse covariance matrix.
Type: Grant
Filed: August 26, 2020
Date of Patent: February 6, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Gaurav Pandey, Brian George Buss, Dimitar Petrov Filev
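The Mahalanobis-distance check described above is compact enough to sketch directly in NumPy. The exponential weighting scheme, the regularization term, and the threshold value are illustrative assumptions, not values from the patent.

```python
import numpy as np

def detect_anomaly(history, current, alpha=0.1, threshold=16.0):
    """Flag a sensor reading whose squared Mahalanobis distance from a
    weighted moving mean exceeds a threshold."""
    # Exponentially weighted moving mean of the historical vectors.
    mean = history[0].astype(float)
    for x in history[1:]:
        mean = (1 - alpha) * mean + alpha * x
    # Inverse covariance of the history (regularized for numerical stability).
    cov = np.cov(history, rowvar=False) + 1e-6 * np.eye(history.shape[1])
    inv_cov = np.linalg.inv(cov)
    # Squared Mahalanobis distance of the current vector from the moving mean.
    diff = current - mean
    d2 = diff @ inv_cov @ diff
    return d2, d2 > threshold

rng = np.random.default_rng(0)
history = rng.normal(size=(200, 3))
print(detect_anomaly(history, np.array([0.1, -0.2, 0.0])))   # nominal reading
print(detect_anomaly(history, np.array([8.0, 8.0, 8.0])))    # anomalous reading
```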
-
Patent number: 11893085
Abstract: An image including a first object can be input to a deep neural network trained to detect objects. The deep neural network can output a first feature vector corresponding to the first object. A first distance can be measured from the first feature vector to a feature vector subspace determined using a k-means singular value decomposition (K-SVD) algorithm on an overcomplete dictionary of feature vectors. The first object can be determined to correspond to an anomaly based on the first distance.
Type: Grant
Filed: May 4, 2021
Date of Patent: February 6, 2024
Assignee: Ford Global Technologies, LLC
Inventor: Gaurav Pandey
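A minimal NumPy sketch of the distance-to-subspace test mentioned above: project a feature vector onto the span of dictionary atoms and use the residual norm as the anomaly score. The random dictionary stands in for one learned with K-SVD, and the threshold is purely illustrative.

```python
import numpy as np

def distance_to_subspace(feature, dictionary):
    """Residual norm after projecting a feature vector onto the subspace
    spanned by the dictionary atoms (columns); a large residual suggests an anomaly."""
    coeffs, *_ = np.linalg.lstsq(dictionary, feature, rcond=None)
    residual = feature - dictionary @ coeffs
    return np.linalg.norm(residual)

rng = np.random.default_rng(1)
dictionary = rng.normal(size=(128, 16))       # stand-in for a K-SVD-learned dictionary
inlier = dictionary @ rng.normal(size=16)     # lies in the span of the atoms
outlier = rng.normal(size=128)                # generic vector, mostly outside the span
print(distance_to_subspace(inlier, dictionary))   # ~0
print(distance_to_subspace(outlier, dictionary))  # large
anomaly = distance_to_subspace(outlier, dictionary) > 1.0  # illustrative threshold
```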
-
Publication number: 20240037772
Abstract: A robot can be moved in a structure that includes a plurality of downward-facing cameras, and, as the robot moves, upward images can be captured with an upward-facing camera mounted to the robot. Downward images can be captured with the respective downward-facing cameras. Upward-facing camera poses can be determined at respective times based on the upward images. Further, respective poses of the downward-facing cameras can be determined based on (a) describing motion of the robot from the downward images, and (b) the upward-facing camera poses determined from the upward images.
Type: Application
Filed: July 27, 2022
Publication date: February 1, 2024
Applicant: Ford Global Technologies, LLC
Inventors: Subodh Mishra, Punarjay Chakravarty, Sagar Manglani, Sushruth Nagesh, Gaurav Pandey
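The core geometric step implied by this abstract is a pose composition: if the robot's world pose is known from the upward-facing camera and a downward-facing camera observes the same robot, the downward camera's world pose follows by chaining transforms. A toy NumPy sketch, with all poses invented for illustration; the actual method would average over many observations and handle estimation noise.

```python
import numpy as np

def make_pose(R, t):
    """Homogeneous 4x4 transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def downward_camera_pose(T_world_robot, T_cam_robot):
    """Compose the robot's world pose (from upward images) with the robot's
    pose as seen by a downward camera to get that camera's world pose."""
    return T_world_robot @ np.linalg.inv(T_cam_robot)

# Toy example: a 90-degree yaw for the robot and a camera 3 m above it.
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
T_world_robot = make_pose(Rz, np.array([2.0, 1.0, 0.0]))        # from upward images
T_cam_robot = make_pose(np.eye(3), np.array([0.0, 0.0, 3.0]))   # robot seen by downward camera
print(downward_camera_pose(T_world_robot, T_cam_robot))
```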
-
Patent number: 11797014
Abstract: A method for controlling a robotic vehicle in a delivery environment includes causing the robotic vehicle to deploy from an autonomous vehicle (AV) at a first AV position in the delivery environment. The method further includes localizing, via a robotic vehicle controller, an initial position within a global reference map using a robot vehicle perception system, receiving, from the AV, a 3-dimensional (3D) augmented map and localizing an updated position in the delivery environment based on the 3D augmented map and the global reference map. The robot vehicle perception system senses obstacle characteristics, and generates a unified 3D augmented map with robot-sensed obstacle characteristics. The method further includes generating a dynamic path plan to a package delivery destination using the unified 3D augmented map, and actuating the robot vehicle to the package delivery destination according to the dynamic path plan.
Type: Grant
Filed: February 9, 2021
Date of Patent: October 24, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Gaurav Pandey, James McBride, Justin Miller, Jianbo Lu, Yifan Chen
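To make the "dynamic path plan" step concrete, here is a deliberately simple grid search sketch: a breadth-first search over an occupancy map standing in for the unified 3D augmented map, re-run whenever newly sensed obstacles change the map. The grid, start, and goal are toy values; the patent does not specify a particular planner.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid (0 = free, 1 = obstacle);
    returns a list of cells from start to goal, or None if unreachable."""
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

occupancy = [[0, 0, 0, 0],
             [1, 1, 0, 1],
             [0, 0, 0, 0],
             [0, 1, 1, 0]]
print(plan_path(occupancy, start=(0, 0), goal=(3, 3)))
```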
-
Publication number: 20230267640
Abstract: A two-dimensional image segment that includes an outline of an object can be determined in a top-down fisheye image. A six degree of freedom (DoF) pose for the object can be determined based on determining a three-dimensional bounding box determined by one or more of (1) an axis of the two-dimensional image segment in a ground plane included in the top-down fisheye image and a three-dimensional model of the object and (2) inputting the two-dimensional image segment to a deep neural network trained to determine a three-dimensional bounding box for the object.
Type: Application
Filed: February 21, 2022
Publication date: August 24, 2023
Applicant: Ford Global Technologies, LLC
Inventors: Punarjay Chakravarty, Subodh Mishra, Mostafa Parchami, Gaurav Pandey, Shubham Shrivastava
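A NumPy sketch of option (1) in this abstract: estimate the segment's axis in the ground plane by taking the principal direction of its pixels, which fixes the planar part of the pose (x, y, yaw); the remaining degrees of freedom would come from the 3D model. The scale factor and toy mask are assumptions, and the axis direction carries a 180-degree ambiguity this sketch does not resolve.

```python
import numpy as np

def segment_axis_pose(mask, meters_per_pixel=0.05):
    """From a top-down binary segment mask, estimate planar pose: the centroid
    gives (x, y) in the ground plane and the principal axis gives the yaw."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    # Principal axis of the segment via eigen-decomposition of its covariance.
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]
    yaw = np.arctan2(axis[1], axis[0])
    x, y = centroid * meters_per_pixel
    return x, y, yaw

mask = np.zeros((200, 200), dtype=bool)
mask[95:105, 60:140] = True               # elongated segment along the image x-axis
print(segment_axis_pose(mask))            # yaw ~ 0 rad (up to the 180-degree ambiguity)
```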
-
Publication number: 20230267278
Abstract: Methods, systems, and computer program products for context-based response generation are provided herein. A method includes: obtaining conversation logs comprising agent responses matched to contexts and a set of designated responses that are not matched to the contexts; replacing at least a portion of the agent responses with the designated responses to form modified conversation logs; training a first model, using the modified conversation logs, to output a designated response in the set for a given context and a second model, using the historical conversation logs, to output one of the agent responses for a given context; identifying one or more new responses based at least in part on the output of the second machine learning model for a particular one of the contexts; and retraining the first machine learning model based at least in part on the one or more new responses.
Type: Application
Filed: February 18, 2022
Publication date: August 24, 2023
Inventors: Gaurav Pandey, Danish Contractor, Nathaniel Mills, Jatin Ganhotra, Ross Warren Judd, Sachindra Joshi, Luis A. Lastras-Montano
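This abstract's first step, mapping free-form agent responses onto a fixed set of designated responses, can be sketched with a simple similarity match. TF-IDF cosine similarity here is only a stand-in for whatever matching the method actually uses, and all of the example responses and the threshold comment are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

designated = [
    "I can help you reset your password.",
    "Your order has been shipped.",
    "Please hold while I transfer you to an agent.",
]
agent_responses = [
    "Sure, let me help you reset that password right away.",
    "Good news, your order was shipped out this morning.",
]

# Embed both sets and pair each agent response with its closest designated
# response; a similarity threshold would decide whether to replace it when
# building the modified conversation logs.
vectorizer = TfidfVectorizer().fit(designated + agent_responses)
sims = cosine_similarity(vectorizer.transform(agent_responses),
                         vectorizer.transform(designated))
for response, row in zip(agent_responses, sims):
    best = row.argmax()
    print(f"{response!r} -> {designated[best]!r} (similarity {row[best]:.2f})")
```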
-
Publication number: 20230242118
Abstract: A computer includes a processor and a memory storing instructions executable by the processor to receive a waypoint; generate a path for a vehicle from a starting point to the waypoint, the path characterized by a preset number of parameters; and instruct a propulsion and a steering system of the vehicle to actuate to navigate the vehicle along the path.
Type: Application
Filed: February 1, 2022
Publication date: August 3, 2023
Applicants: Ford Global Technologies, LLC; The Regents of the University of Michigan
Inventors: Gabor Orosz, Sanghoon Oh, Hongtei Eric Tseng, Qi Chen, Gaurav Pandey
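One common way to describe a path "characterized by a preset number of parameters" is a fixed-order curve such as a cubic Bezier, where four control points fully determine the path. The sketch below uses that interpretation; the patent does not say which parameterization is used, and the control points here are arbitrary.

```python
import numpy as np

def bezier_path(start, waypoint, control1, control2, n=50):
    """Cubic Bezier curve from start to waypoint; the path is fully described
    by a fixed number of parameters (the four 2D control points)."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2, p3 = map(np.asarray, (start, control1, control2, waypoint))
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

path = bezier_path(start=(0.0, 0.0), waypoint=(20.0, 10.0),
                   control1=(10.0, 0.0), control2=(15.0, 10.0))
# Each (x, y) sample on `path` would be handed to the propulsion/steering controller.
print(path[0], path[-1])   # starts at (0, 0), ends at the waypoint
```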
-
Patent number: 11710254
Abstract: A first six degree-of-freedom (DoF) pose of an object from a perspective of a first image sensor is determined with a neural network. A second six DoF pose of the object from a perspective of a second image sensor is determined with the neural network. A pose offset between the first and second six DoF poses is determined. A first projection offset is determined for a first two-dimensional (2D) bounding box generated from the first six DoF pose. A second projection offset is determined for a second 2D bounding box generated from the second six DoF pose. A total offset is determined by combining the pose offset, the first projection offset, and the second projection offset. Parameters of a loss function are updated based on the total offset. The updated parameters are provided to the neural network to obtain an updated total offset.
Type: Grant
Filed: April 7, 2021
Date of Patent: July 25, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Shubham Shrivastava, Punarjay Chakravarty, Gaurav Pandey
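A rough NumPy sketch of how the offsets named in this abstract could be combined into a single scalar. The specific definitions (translation norm plus rotation angle for the pose offset, bounding-box center distance for the projection offsets, unweighted sum for the total) are assumptions for illustration only.

```python
import numpy as np

def pose_offset(pose_a, pose_b):
    """Translation distance plus rotation geodesic angle between two 6DoF poses
    given as 4x4 homogeneous transforms."""
    t_err = np.linalg.norm(pose_a[:3, 3] - pose_b[:3, 3])
    R_rel = pose_a[:3, :3].T @ pose_b[:3, :3]
    angle = np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0))
    return t_err + angle

def projection_offset(projected_box, detected_box):
    """Distance between the centers of a 2D box projected from a pose and the
    box actually detected in that camera's image (boxes as x1, y1, x2, y2)."""
    center = lambda b: np.array([(b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0])
    return np.linalg.norm(center(projected_box) - center(detected_box))

pose_cam1 = np.eye(4)
pose_cam2 = np.eye(4); pose_cam2[:3, 3] = [0.1, 0.0, 0.05]
total_offset = (pose_offset(pose_cam1, pose_cam2)
                + projection_offset((100, 100, 200, 200), (104, 98, 206, 199))
                + projection_offset((310, 120, 420, 230), (305, 125, 415, 232)))
# The total offset would drive updates to the training loss parameters.
print(total_offset)
```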
-
Publication number: 20230227104
Abstract: A trailer angle identification system includes an imaging device configured to capture an image. A controller is configured to receive steering angle data corresponding to a steering angle of a vehicle, receive vehicle speed data corresponding to a vehicle speed, and estimate an angle of the trailer relative to the vehicle by processing the image, the steering angle data, and the vehicle speed data in at least one neural network.
Type: Application
Filed: January 19, 2022
Publication date: July 20, 2023
Applicant: Ford Global Technologies, LLC
Inventors: Gaurav Pandey, Nikita Jaipuria
-
Publication number: 20230186587
Abstract: An image can be input to a deep neural network to determine a point in the image based on a center of a Gaussian heatmap corresponding to an object included in the image. The deep neural network can determine an object descriptor corresponding to the object and include the object descriptor in an object vector attached to the point. The deep neural network can determine object parameters including a three-dimensional location of the object in global coordinates and predicted pixel offsets of the object. The object parameters can be included in the object vector, and the deep neural network can predict a future location of the object in global coordinates based on the point and the object vector.
Type: Application
Filed: December 14, 2021
Publication date: June 15, 2023
Applicant: Ford Global Technologies, LLC
Inventors: Shubham Shrivastava, Punarjay Chakravarty, Gaurav Pandey
-
Publication number: 20230179547
Abstract: Methods, systems, and computer program products for implementing automated communication exchange programs for attended robotic process automation are provided herein. A computer-implemented method includes invoking, during a user communication associated with an attended robotic process automation context, at least one automated communication exchange program in response to at least one user input; determining, using the at least one automated communication exchange program, information directed to the at least one user input; carrying out, using the at least one automated communication exchange program, at least a portion of the user communication subsequent to determining the information directed to the at least one user input; and performing one or more automated actions in connection with automatically carrying out the at least a portion of the user communication.
Type: Application
Filed: December 7, 2021
Publication date: June 8, 2023
Inventors: Danish Contractor, Ateret Anaby-Tavor, Gaurav Pandey
-
Patent number: 11668794
Abstract: An extrinsic-calibration system includes at least one target and a computer. Each target includes three flat surfaces that are mutually nonparallel and a corner at which the three surfaces intersect. The computer is programmed to estimate a first set of relative positions of the corner in a first coordinate frame from a first sensor, the first set of relative positions corresponding one-to-one to a set of absolute positions of the at least one target; estimate a second set of relative positions of the corner in a second coordinate frame from a second sensor, the second set of relative positions corresponding one-to-one to the set of absolute positions; and estimate a rigid transformation between the first coordinate frame and the second coordinate frame based on the first set of relative positions and the second set of relative positions.
Type: Grant
Filed: January 7, 2020
Date of Patent: June 6, 2023
Assignee: Ford Global Technologies, LLCC
Inventors: Gaurav Pandey, Weifeng Xiong
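Estimating a rigid transformation from two matched sets of corner positions, as described above, is classically done with a Kabsch/Procrustes fit. A minimal NumPy sketch on synthetic data; the corner positions and the number of targets are invented, and the patent does not state which solver it uses.

```python
import numpy as np

def rigid_transform(points_a, points_b):
    """Least-squares rotation R and translation t with points_b ~ R @ points_a + t
    (Kabsch/Procrustes), given matched corner positions from the two sensors."""
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
    H = (points_a - ca).T @ (points_b - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

rng = np.random.default_rng(2)
corners_sensor1 = rng.uniform(-5, 5, size=(6, 3))      # corner positions, first frame
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
corners_sensor2 = corners_sensor1 @ true_R.T + np.array([1.0, 2.0, 0.5])
R, t = rigid_transform(corners_sensor1, corners_sensor2)
print(np.allclose(R, true_R), np.round(t, 3))          # True [1. 2. 0.5]
```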
-
Patent number: 11671385
Abstract: Methods, systems, and computer program products for implementing automated communication exchange programs for attended robotic process automation are provided herein. A computer-implemented method includes invoking, during a user communication associated with an attended robotic process automation context, at least one automated communication exchange program in response to at least one user input; determining, using the at least one automated communication exchange program, information directed to the at least one user input; carrying out, using the at least one automated communication exchange program, at least a portion of the user communication subsequent to determining the information directed to the at least one user input; and performing one or more automated actions in connection with automatically carrying out the at least a portion of the user communication.
Type: Grant
Filed: December 7, 2021
Date of Patent: June 6, 2023
Assignee: International Business Machines Corporation
Inventors: Danish Contractor, Ateret Anaby-Tavor, Gaurav Pandey
-
Publication number: 20230154313
Abstract: First feature points can be determined which correspond to pose-invariant surface model properties based on first data points included in a first lidar point cloud acquired by a sensor. A three-dimensional occupancy grid can be determined based on first data points included in the first lidar point cloud. Dynamic objects in a second lidar point cloud acquired by the sensor can be determined based on the occupancy grid. Second feature points can be determined which correspond to pose-invariant surface model properties based on second data points included in the second lidar point cloud not including the dynamic objects. A difference can be determined between corresponding feature points included in the first feature points and the second feature points. A traffic infrastructure system can be alerted based on the difference exceeding a threshold.
Type: Application
Filed: November 17, 2021
Publication date: May 18, 2023
Applicant: Ford Global Technologies, LLC
Inventors: Matthew Baer Wilmes, Gaurav Pandey, Devarth Parikh