Patents by Inventor Karankumar Patel

Karankumar Patel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11958201
    Abstract: Systems and methods for visuo-tactile object pose estimation are provided. In one embodiment, a method includes receiving image data and depth data about an object and generating a visual estimate of the object based on that data. The method further includes receiving tactile data about the object and generating a tactile estimate of the object based on the tactile data. The method then estimates a pose of the object based on the visual estimate and the tactile estimate. (A brief pipeline sketch follows this entry.)
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: April 16, 2024
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Nawid Jamali, Huckleberry Febbo, Karankumar Patel, Soshi Iba, Akinobu Hayashi, Itoshi Naramura
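The granted patent above describes forming a visual estimate from image and depth data, a tactile estimate from tactile data, and fusing the two into an object pose. The sketch below illustrates that flow under stated assumptions: the `visual_estimate` and `tactile_estimate` functions are placeholders for learned estimators, and the confidence-weighted blend in `fuse` is one simple fusion rule chosen for illustration, not the patented method.

```python
"""Minimal sketch of a visuo-tactile pose-estimation flow. All names and the
fusion rule are illustrative assumptions, not the patented implementation."""

from dataclasses import dataclass
import numpy as np


@dataclass
class PoseEstimate:
    position: np.ndarray     # (3,) translation, e.g. in metres
    quaternion: np.ndarray   # (4,) unit quaternion (w, x, y, z)
    confidence: float        # scalar weight used during fusion


def visual_estimate(rgb: np.ndarray, depth: np.ndarray) -> PoseEstimate:
    """Placeholder for a learned visual estimator consuming image + depth data."""
    centroid = np.array([0.30, 0.05, 0.60])           # dummy output
    return PoseEstimate(centroid, np.array([1.0, 0.0, 0.0, 0.0]), confidence=0.7)


def tactile_estimate(tactile: np.ndarray) -> PoseEstimate:
    """Placeholder for a tactile estimator consuming contact/pressure readings."""
    contact_point = np.array([0.31, 0.04, 0.59])      # dummy output
    return PoseEstimate(contact_point, np.array([1.0, 0.0, 0.0, 0.0]), confidence=0.3)


def fuse(visual: PoseEstimate, tactile: PoseEstimate) -> PoseEstimate:
    """Confidence-weighted blend of the two estimates (one simple fusion rule)."""
    w = np.array([visual.confidence, tactile.confidence])
    w = w / w.sum()
    position = w[0] * visual.position + w[1] * tactile.position
    quat = w[0] * visual.quaternion + w[1] * tactile.quaternion
    quat = quat / np.linalg.norm(quat)                # renormalise the blended quaternion
    return PoseEstimate(position, quat, confidence=float(w.max()))


if __name__ == "__main__":
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)     # dummy sensor inputs
    depth = np.zeros((480, 640), dtype=np.float32)
    tactile = np.zeros((16, 16), dtype=np.float32)
    pose = fuse(visual_estimate(rgb, depth), tactile_estimate(tactile))
    print("fused position:", pose.position, "quaternion:", pose.quaternion)
```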
  • Publication number: 20230316734
    Abstract: Pose fusion estimation may be achieved by receiving a first and a second set of data from a first and a second set of sensors and passing both sets of data through a graph-based neural network to generate a set of geometric features, which a pose fusion network then uses to generate a first and a second pose estimate. A second portion of the pose fusion network may receive the set of geometric features and generate a second set of geometric features and the second pose estimate based on the set of geometric features. A first portion of the pose fusion network may receive the first set of data and the second set of geometric features and generate the first pose estimate based on a fusion of the first set of data and the second set of geometric features. (A brief data-flow sketch follows this entry.)
    Type: Application
    Filed: March 31, 2022
    Publication date: October 5, 2023
    Inventors: Daksh Dhingra, Nawid Jamali, Snehal Dikhale, Karankumar Patel, Soshi Iba
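The abstract above describes a graph-based network that produces geometric features and a two-portion pose fusion network that turns those features, plus the first set of raw sensor data, into two pose estimates. Below is a minimal data-flow sketch; the tiny random linear "networks", the feature sizes, and fusion by concatenation are illustrative assumptions standing in for the described architecture.

```python
"""Minimal data-flow sketch of a two-branch pose-fusion scheme. The random
linear maps are placeholders for trained networks; shapes are assumptions."""

import numpy as np

rng = np.random.default_rng(0)


def graph_network(data_1: np.ndarray, data_2: np.ndarray) -> np.ndarray:
    """Stand-in for the graph-based neural network that yields geometric features."""
    stacked = np.concatenate([data_1, data_2])
    return np.tanh(rng.standard_normal((32, stacked.size)) @ stacked)    # (32,) features


def second_portion(geo_features: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Second portion: consumes geometric features, emits refined features and a pose."""
    refined = np.tanh(rng.standard_normal((32, geo_features.size)) @ geo_features)
    pose_2 = rng.standard_normal((7, refined.size)) @ refined            # 3-D position + quaternion
    return refined, pose_2


def first_portion(data_1: np.ndarray, refined_features: np.ndarray) -> np.ndarray:
    """First portion: fuses the raw first-sensor data with the refined features."""
    fused = np.concatenate([data_1, refined_features])
    return rng.standard_normal((7, fused.size)) @ fused                  # first pose estimate


if __name__ == "__main__":
    data_1 = rng.standard_normal(64)    # e.g. flattened features from the first sensor set
    data_2 = rng.standard_normal(16)    # e.g. flattened features from the second sensor set
    geo = graph_network(data_1, data_2)
    refined, pose_2 = second_portion(geo)
    pose_1 = first_portion(data_1, refined)
    print("first pose estimate:", pose_1)
    print("second pose estimate:", pose_2)
```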
  • Publication number: 20220080598
    Abstract: Systems and methods for visuo-tactile object pose estimation are provided. In one embodiment, a method includes receiving image data and depth data about an object and generating a visual estimate of the object based on that data. The method further includes receiving tactile data about the object and generating a tactile estimate of the object based on the tactile data. The method then estimates a pose of the object based on the visual estimate and the tactile estimate.
    Type: Application
    Filed: September 17, 2020
    Publication date: March 17, 2022
    Inventors: Nawid Jamali, Huckleberry Febbo, Karankumar Patel, Soshi Iba, Akinobu Hayashi, Itoshi Naramura
  • Publication number: 20220084241
    Abstract: Systems and methods for visuo-tactile object pose estimation are provided. In one embodiment, a computer-implemented method includes receiving image data, depth data, and tactile data about an object in the environment. The method also includes generating a visual estimate of the object that includes an object point cloud, and generating a tactile estimate of the object that includes a surface point cloud based on the tactile data. The method then estimates a pose of the object based on the visual estimate and the tactile estimate by fusing the object point cloud and the surface point cloud in a 3D space. The pose is a six-dimensional pose. (A brief fusion sketch follows this entry.)
    Type: Application
    Filed: July 12, 2021
    Publication date: March 17, 2022
    Inventors: Snehal Dikhale, Karankumar Patel, Daksh Dhingra, Soshi Iba, Nawid Jamali
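The abstract above describes generating an object point cloud from vision, a surface point cloud from touch, and fusing them in 3D to estimate a six-dimensional pose. The sketch below illustrates one simple way to do this: merge the clouds and take the centroid and principal axes (via SVD) as a rough translation and rotation. That PCA step is a deliberately simple stand-in for the learned estimator the abstract implies, not the claimed method.

```python
"""Minimal sketch of fusing vision- and touch-derived point clouds and reading
off a rough 6-D pose. The PCA-based pose is an illustrative simplification."""

import numpy as np


def fuse_point_clouds(object_cloud: np.ndarray, surface_cloud: np.ndarray) -> np.ndarray:
    """Merge the visually and tactilely derived clouds in a shared 3-D frame."""
    return np.vstack([object_cloud, surface_cloud])


def pose_from_cloud(cloud: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Rough 6-D pose: translation = centroid, rotation = principal axes (PCA)."""
    centroid = cloud.mean(axis=0)
    centred = cloud - centroid
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    rotation = vt.T                      # columns are the principal axes
    if np.linalg.det(rotation) < 0:      # keep a right-handed rotation matrix
        rotation[:, -1] *= -1
    return centroid, rotation


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    object_cloud = rng.normal(loc=[0.30, 0.0, 0.60], scale=[0.05, 0.02, 0.01], size=(500, 3))
    surface_cloud = rng.normal(loc=[0.32, 0.0, 0.61], scale=0.005, size=(50, 3))
    t, R = pose_from_cloud(fuse_point_clouds(object_cloud, surface_cloud))
    print("translation:", t)
    print("rotation:\n", R)
```

In practice the tactile surface points would refine the visually estimated pose rather than simply being pooled with it; the pooled-PCA version is used here only to keep the example self-contained.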
  • Publication number: 20210270605
    Abstract: Systems and methods for tactile output estimation are provided. In one embodiment, the system includes a depth map module, an estimation module, and a surface module. The depth map module is configured to identify a region of interest (RoI) of an object, where the area of the RoI corresponds to the size of a tactile sensor. The depth map module is further configured to receive depth data for the RoI from a depth sensor and generate a depth map for the RoI based on a volume of the depth data relative to a frame of reference of the RoI. The estimation module is configured to estimate a tactile sensor output based on the depth map. The surface module is configured to determine surface properties based on the estimated tactile sensor output. (A brief module-level sketch follows this entry.)
    Type: Application
    Filed: September 17, 2020
    Publication date: September 2, 2021
    Inventors: Karankumar Patel, Soshi Iba, Nawid Jamali
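The abstract above lays out three modules: a depth map module that crops an RoI matching the tactile sensor's footprint and expresses depth relative to the RoI, an estimation module that predicts the tactile sensor output from that depth map, and a surface module that derives surface properties. The sketch below mirrors that structure; the fixed 16-pixel footprint, the depth-to-signal mapping, and the roughness/slope statistics are illustrative assumptions rather than the described system.

```python
"""Minimal sketch of a depth-map -> tactile-output -> surface-properties flow.
Footprint size, the depth-to-signal mapping, and the statistics are assumptions."""

import numpy as np

SENSOR_SIZE_PX = 16   # assumed tactile-sensor footprint, in depth-image pixels


def depth_map_for_roi(depth_image: np.ndarray, center: tuple[int, int]) -> np.ndarray:
    """Depth-map module: crop an RoI matching the sensor footprint and express
    depth relative to the RoI's own frame of reference."""
    r, c = center
    half = SENSOR_SIZE_PX // 2
    roi = depth_image[r - half:r + half, c - half:c + half]
    return roi - roi.min()            # heights relative to the closest point in the RoI


def estimate_tactile_output(roi_depth: np.ndarray) -> np.ndarray:
    """Estimation module: map relative depth to a normalised pseudo-tactile signal."""
    span = roi_depth.max() - roi_depth.min()
    return 1.0 - roi_depth / span if span > 0 else np.zeros_like(roi_depth)


def surface_properties(tactile: np.ndarray) -> dict:
    """Surface module: summary statistics standing in for surface properties."""
    gy, gx = np.gradient(tactile)
    return {"roughness": float(tactile.std()),
            "mean_slope": float(np.hypot(gx, gy).mean())}


if __name__ == "__main__":
    yy, xx = np.mgrid[0:480, 0:640]
    depth_image = 0.6 + 0.0002 * np.sin(xx / 15.0)     # synthetic gently rippled surface
    roi = depth_map_for_roi(depth_image, center=(240, 320))
    tactile = estimate_tactile_output(roi)
    print(surface_properties(tactile))
```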