Patents by Inventor Nawid JAMALI

Nawid JAMALI has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11958201
    Abstract: Systems and methods for visuo-tactile object pose estimation are provided. In one embodiment, a method includes receiving image data about an object and receiving depth data about the object. The method also includes generating a visual estimate of the object based on the image data and the depth data. The method further includes receiving tactile data about the object and generating a tactile estimate of the object based on the tactile data. Finally, the method estimates a pose of the object based on the visual estimate and the tactile estimate.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: April 16, 2024
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Nawid Jamali, Huckleberry Febbo, Karankumar Patel, Soshi Iba, Akinobu Hayashi, Itoshi Naramura
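The pipeline in the abstract above (visual estimate from image and depth, tactile estimate from touch, then fusion into a pose) can be sketched as follows. This is purely illustrative: the centroid heuristics and the convex-combination fusion are assumptions for the sketch, not the patented method, which uses learned estimators.

```python
import numpy as np

def visual_pose(image, depth):
    """Toy visual estimate: centroid (x, y, z) of pixels where depth > 0.
    A real system would feed both image and depth to a learned model."""
    ys, xs = np.nonzero(depth > 0)
    return np.array([xs.mean(), ys.mean(), depth[ys, xs].mean()])

def tactile_pose(contacts):
    """Toy tactile estimate: mean of sensed contact locations."""
    return np.asarray(contacts, dtype=float).mean(axis=0)

def fuse_poses(visual, tactile, w_visual=0.5):
    """Convex combination of the two estimates (the patent's fusion is learned)."""
    return w_visual * visual + (1.0 - w_visual) * tactile

# Example: a flat patch of depth 2.0 seen visually, two tactile contacts near it.
depth = np.zeros((4, 4))
depth[1:3, 1:3] = 2.0
v = visual_pose(None, depth)
t = tactile_pose([[1.5, 1.5, 2.2], [1.5, 1.5, 1.8]])
fused = fuse_poses(v, t)
```

Because both modalities agree here, the fused estimate coincides with each; in practice the weighting would come from the learned model's confidence in each modality.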
  • Publication number: 20240062410
    Abstract: A system and method for multimodal object-centric representation learning include receiving data associated with an image and a depth map of an object. The system and method also include determining an object-surface point cloud based on the image and the depth map. The system and method additionally include determining multi-resolution receptive fields based on the object-surface point cloud. The system and method further include passing the multi-resolution receptive fields through convolutional encoders to learn an object-centric representation of the object.
    Type: Application
    Filed: August 19, 2022
    Publication date: February 22, 2024
    Inventors: Alireza REZAZADEH, Nawid JAMALI, Soshi IBA
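The first two steps of the abstract above (depth map to object-surface point cloud, then multi-resolution receptive fields) can be sketched like this. The pinhole intrinsics and radii are illustrative values, and radius-ball grouping is only one plausible reading of "multi-resolution receptive fields"; each grouped neighborhood would then feed its own convolutional encoder.

```python
import numpy as np

def depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0):
    """Back-project a depth map into an (N, 3) object-surface point cloud
    using a pinhole camera model (intrinsics here are illustrative)."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # keep only pixels on the object

def receptive_fields(points, center, radii=(0.05, 0.1, 0.2)):
    """Multi-resolution receptive fields: one neighborhood per radius.
    Each neighborhood would be passed to its own convolutional encoder."""
    d = np.linalg.norm(points - center, axis=1)
    return [points[d <= r] for r in radii]

# Example: a 3x3 patch of depth 1.0 inside a 5x5 depth map.
depth = np.zeros((5, 5))
depth[1:4, 1:4] = 1.0
cloud = depth_to_point_cloud(depth)
fields = receptive_fields(cloud, cloud.mean(axis=0),
                          radii=(0.001, 0.0021, 1.0))
```

Nesting the radii gives progressively larger contexts around the same center, which is what lets the downstream encoders mix fine surface detail with coarse shape.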
  • Publication number: 20230316734
    Abstract: Pose fusion estimation may be achieved via a first and a second set of sensors receiving a first and a second set of data, which are passed through a graph-based neural network to generate a set of geometric features; these in turn are passed through a pose fusion network to generate a first and a second pose estimate. A second portion of the pose fusion network may receive the set of geometric features and generate a second set of geometric features and the second pose estimate based on the set of geometric features. A first portion of the pose fusion network may receive the first set of data and the second set of geometric features and generate the first pose estimate based on a fusion of the first set of data and the second set of geometric features.
    Type: Application
    Filed: March 31, 2022
    Publication date: October 5, 2023
    Inventors: Daksh DHINGRA, Nawid JAMALI, Snehal DIKHALE, Karankumar PATEL, Soshi IBA
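The two-portion dataflow in the abstract above can be made concrete with a tiny stand-in network. All dimensions, the random linear layers, and the 7-vector pose (translation plus quaternion) are assumptions for the sketch; the publication's portions are trained networks, not random maps.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(d_in, d_out):
    """A fixed random linear map standing in for a trained layer."""
    return rng.standard_normal((d_in, d_out)) * 0.1

class SecondPortion:
    """Geometric features -> (second geometric features, second pose)."""
    def __init__(self, d_feat, d_pose=7):       # pose: xyz + quaternion
        self.Wf = linear(d_feat, d_feat)
        self.Wp = linear(d_feat, d_pose)
    def __call__(self, geo):
        feats2 = np.tanh(geo @ self.Wf)
        return feats2, feats2 @ self.Wp

class FirstPortion:
    """Fuses first-sensor data with the second geometric features -> first pose."""
    def __init__(self, d_data, d_feat, d_pose=7):
        self.Wp = linear(d_data + d_feat, d_pose)
    def __call__(self, data1, feats2):
        return np.concatenate([data1, feats2]) @ self.Wp

geo = np.ones(16)        # geometric features from the graph-based network
data1 = np.ones(8)       # raw data from the first set of sensors
second = SecondPortion(16)
first = FirstPortion(8, 16)
feats2, pose2 = second(geo)
pose1 = first(data1, feats2)
```

The point of the structure is the asymmetry: the first pose estimate sees both its own raw sensor data and the features distilled by the second portion, so one modality can correct the other.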
  • Publication number: 20220080598
    Abstract: Systems and methods for visuo-tactile object pose estimation are provided. In one embodiment, a method includes receiving image data about an object and receiving depth data about the object. The method also includes generating a visual estimate of the object based on the image data and the depth data. The method further includes receiving tactile data about the object and generating a tactile estimate of the object based on the tactile data. Finally, the method estimates a pose of the object based on the visual estimate and the tactile estimate.
    Type: Application
    Filed: September 17, 2020
    Publication date: March 17, 2022
    Inventors: Nawid Jamali, Huckleberry Febbo, Karankumar Patel, Soshi Iba, Akinobu Hayashi, Itoshi Naramura
  • Publication number: 20220084241
    Abstract: Systems and methods for visuo-tactile object pose estimation are provided. In one embodiment, a computer-implemented method includes receiving image data, depth data, and tactile data about an object in an environment. The computer-implemented method also includes generating a visual estimate of the object that includes an object point cloud. The computer-implemented method further includes generating a tactile estimate of the object that includes a surface point cloud based on the tactile data. The computer-implemented method then estimates a pose of the object based on the visual estimate and the tactile estimate by fusing the object point cloud and the surface point cloud in a 3D space. The pose is a six-dimensional pose.
    Type: Application
    Filed: July 12, 2021
    Publication date: March 17, 2022
    Inventors: Snehal DIKHALE, Karankumar PATEL, Daksh DHINGRA, Soshi IBA, Nawid JAMALI
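The fusion step of the abstract above (object point cloud plus surface point cloud in one 3D frame, yielding a six-dimensional pose) can be sketched with classical geometry. Centroid-plus-principal-axes is a crude stand-in chosen for the sketch; the publication regresses the 6D pose with a learned model.

```python
import numpy as np

def fuse_clouds(object_pc, surface_pc):
    """Place the visual (object) and tactile (surface) clouds in one 3D frame."""
    return np.vstack([object_pc, surface_pc])

def estimate_6d_pose(points):
    """Crude 6D pose: translation = centroid; rotation = principal axes
    from an SVD of the centered cloud."""
    t = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - t, full_matrices=False)
    R = vt.T
    if np.linalg.det(R) < 0:     # enforce a proper rotation (det = +1)
        R[:, -1] *= -1
    return R, t

# Example: a flat visual cloud plus two tactile contacts above and below it.
object_pc = np.array([[0, 0, 0], [2, 0, 0], [0, 1, 0], [2, 1, 0]], float)
surface_pc = np.array([[1, 0.5, 0.1], [1, 0.5, -0.1]])
R, t = estimate_6d_pose(fuse_clouds(object_pc, surface_pc))
```

A 6D pose is exactly a rotation (3 degrees of freedom) plus a translation (3 more), which is why the sketch returns the pair (R, t).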
  • Patent number: 11185978
    Abstract: Methods, grasping systems, and computer-readable media storing computer executable code for grasping an object are provided. In an example, a depth image of the object may be obtained by a grasping system. A potential grasp point of the object may be determined by the grasping system based on the depth image. A tactile output corresponding to the potential grasp point may be estimated by the grasping system based on data from the depth image. The grasping system may be controlled to grasp the object at the potential grasp point based on the estimated tactile output.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: November 30, 2021
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Nawid Jamali, Soshi Iba
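The control loop in the abstract above (depth image, grasp point, estimated tactile output, grasp decision) can be sketched with a flatness heuristic. The local-variance rule and the stability threshold are assumptions for the sketch; the patent estimates the tactile output from depth data with a learned model.

```python
import numpy as np

def potential_grasp_point(depth, patch=1):
    """Pick the on-object pixel whose neighborhood is flattest (lowest local
    depth spread) and return it with that spread as a proxy 'tactile output'."""
    h, w = depth.shape
    best, best_std = None, np.inf
    for r in range(patch, h - patch):
        for c in range(patch, w - patch):
            if depth[r, c] <= 0:          # skip background pixels
                continue
            window = depth[r - patch:r + patch + 1, c - patch:c + patch + 1]
            if window.std() < best_std:
                best, best_std = (r, c), window.std()
    return best, best_std

def grasp_if_stable(depth, max_std=0.05):
    """Grasp at the candidate point only if the estimated contact is stable."""
    point, estimated_tactile = potential_grasp_point(depth)
    return point if estimated_tactile <= max_std else None

flat = np.full((5, 5), 1.0)
flat[1, 1] = 1.5                          # a bump the gripper should avoid
bumpy = np.array([[1., 2., 1.], [2., 1., 2.], [1., 2., 1.]])
```

On the first map the heuristic selects a flat pixel away from the bump; on the second, every candidate neighborhood is rough, so no grasp is attempted.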
  • Publication number: 20210270605
    Abstract: Systems and methods for tactile output estimation are provided. In one embodiment, the system includes a depth map module, an estimation module, and a surface module. The depth map module is configured to identify a region of interest (RoI) of an object. The area of the RoI corresponds to a tactile sensor size of a tactile sensor. The depth map module is further configured to receive depth data for the RoI from a depth sensor and generate a depth map for the RoI based on a volume of the depth data relative to a frame of reference of the RoI. The estimation module is configured to estimate a tactile sensor output based on the depth map. The surface module is configured to determine surface properties based on the estimated tactile sensor output.
    Type: Application
    Filed: September 17, 2020
    Publication date: September 2, 2021
    Inventors: Karankumar Patel, Soshi Iba, Nawid Jamali
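The module chain in the abstract above (sensor-sized RoI, RoI-relative depth map, estimated tactile output, surface properties) can be sketched as three small functions. The linear contact model and the roughness/peak properties are assumptions for the sketch, not the claimed estimation.

```python
import numpy as np

def roi_depth_map(depth, center, size):
    """Crop a tactile-sensor-sized RoI and express depth in the RoI's own
    frame of reference (relative to its closest point)."""
    r, c = center
    h = size // 2
    roi = depth[r - h:r + h + 1, c - h:c + h + 1]
    return roi - roi.min()

def estimate_tactile_output(roi_map, stiffness=1.0):
    """Toy contact model: per-taxel response proportional to how far each
    point protrudes toward the sensor (nearer surface -> stronger response)."""
    return stiffness * (roi_map.max() - roi_map)

def surface_properties(tactile):
    """Derive simple surface properties from the estimated sensor output."""
    return {"roughness": float(tactile.std()), "peak": float(tactile.max())}

# Example: a ramped surface under a 3x3 tactile sensor.
depth = np.tile(np.arange(5.0), (5, 1))
roi = roi_depth_map(depth, center=(2, 2), size=3)
tactile = estimate_tactile_output(roi)
props = surface_properties(tactile)
```

The key idea the sketch preserves is that the depth map is expressed in the RoI's own frame, so the estimated sensor output depends only on local relief, not on absolute distance to the camera.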
  • Publication number: 20210107139
    Abstract: Aspects of the present disclosure include methods, apparatuses, and computer-readable media for causing a robot to perform an undesirable task, including receiving an indication identifying a person performing a task at a first time, receiving a plurality of input data associated with the person while performing the task, determining whether the task is undesirable based on the plurality of input data, and causing, in response to determining that the task is undesirable, the robot to perform the task at a second time after the first time.
    Type: Application
    Filed: October 11, 2019
    Publication date: April 15, 2021
    Inventors: Nawid JAMALI, Soshi IBA
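The decision flow in the abstract above can be sketched as a threshold check plus a scheduler. The signal names (`repetitions`, `strain`) and thresholds are hypothetical examples; the publication determines undesirability from a plurality of input data, not this fixed rule.

```python
def is_undesirable(signals, thresholds=None):
    """Hypothetical rule: the observed task is undesirable if any measured
    signal exceeds its threshold (signal names here are made up)."""
    thresholds = thresholds or {"repetitions": 50, "strain": 0.7}
    return any(signals.get(k, 0) > v for k, v in thresholds.items())

def observe_and_delegate(task, signals, robot_queue):
    """Watch a person perform a task at a first time; if it is undesirable,
    queue it for the robot to perform at a second, later time."""
    if is_undesirable(signals):
        robot_queue.append(task)
    return robot_queue

# A highly repetitive task gets delegated; a light one does not.
delegated = observe_and_delegate(
    "sort_recycling", {"repetitions": 120, "strain": 0.4}, [])
kept = observe_and_delegate("water_plants", {"repetitions": 3}, [])
```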
  • Publication number: 20200215685
    Abstract: Methods, grasping systems, and computer-readable media storing computer executable code for grasping an object are provided. In an example, a depth image of the object may be obtained by a grasping system. A potential grasp point of the object may be determined by the grasping system based on the depth image. A tactile output corresponding to the potential grasp point may be estimated by the grasping system based on data from the depth image. The grasping system may be controlled to grasp the object at the potential grasp point based on the estimated tactile output.
    Type: Application
    Filed: January 8, 2019
    Publication date: July 9, 2020
    Inventors: Nawid JAMALI, Soshi IBA