Patents by Inventor Ademola Ayodeji ORIDATE

Ademola Ayodeji ORIDATE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240116181
    Abstract: Aspects of the disclosure are directed towards generating a trajectory for use by a robot to perform an operation on a target object. A method includes instructing a robot to traverse a surface of the target object from a starting position and to obtain sensor input from a sensor system of the robot while following a first trajectory specified by a CAD file for the target object. Sensor input may be received that includes a first height to the target object and a width and a depth of a seam in the surface. A second trajectory that modifies the first trajectory based on the sensor input may be generated for traversing the surface of the target object. The robot may be instructed to move to the starting position, traverse the surface of the target object using the second trajectory, and apply a sealant to the seam. (An illustrative sketch of this scan-and-correct workflow follows this entry.)
    Type: Application
    Filed: September 29, 2023
    Publication date: April 11, 2024
    Inventors: Ademola Ayodeji ORIDATE, William WILDER, Nicole MAGPANTAY, Daniel MORAN
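A minimal, hypothetical sketch of the scan-and-correct idea described in the abstract above: a nominal path from a CAD file is offset using heights measured on a first pass. The Waypoint and SeamMeasurement names and the standoff-only correction are illustrative assumptions, not the patented method.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Waypoint:
    x: float
    y: float
    z: float

@dataclass
class SeamMeasurement:
    height: float  # measured standoff height to the surface
    width: float   # measured seam width
    depth: float   # measured seam depth

def adjust_trajectory(cad_waypoints: List[Waypoint],
                      measurements: List[SeamMeasurement],
                      nominal_standoff: float = 10.0) -> List[Waypoint]:
    """Offset each CAD waypoint so the tool keeps the nominal standoff
    above the surface that was actually measured on the first pass."""
    adjusted = []
    for wp, m in zip(cad_waypoints, measurements):
        # Correct z by the difference between nominal and measured standoff;
        # x and y are kept from the CAD path in this simplified sketch.
        dz = nominal_standoff - m.height
        adjusted.append(Waypoint(wp.x, wp.y, wp.z + dz))
    return adjusted

if __name__ == "__main__":
    cad_path = [Waypoint(0.0, i * 5.0, 50.0) for i in range(4)]
    scan = [SeamMeasurement(height=9.2 + 0.3 * i, width=2.0, depth=1.5)
            for i in range(4)]
    for wp in adjust_trajectory(cad_path, scan):
        print(wp)
```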
  • Patent number: 11951635
    Abstract: Aspects of the disclosure are directed towards generating a trajectory for use by a robot to perform an operation on a target object. A method includes instructing a robot to traverse a surface of the target object from a starting position and to obtain sensor input from a sensor system of the robot while following a first trajectory specified by a CAD file for the target object. Sensor input may be received that includes a first height to the target object and a width and a depth of a seam in the surface. A second trajectory that modifies the first trajectory based on the sensor input may be generated for traversing the surface of the target object. The robot may be instructed to move to the starting position, traverse the surface of the target object using the second trajectory, and apply a sealant to the seam.
    Type: Grant
    Filed: September 29, 2023
    Date of Patent: April 9, 2024
    Assignee: WILDER SYSTEMS INC
    Inventors: Ademola Ayodeji Oridate, William Wilder, Nicole Magpantay, Daniel Moran
  • Patent number: 11931910
    Abstract: Aspects of the disclosure are directed towards artificial intelligence-based modeling of target objects, such as aircraft parts. In an example, a system initially trains a machine learning (ML) model on synthetic images generated from multi-dimensional representations of target objects. The same system or a different system subsequently further trains the ML model on actual images generated by cameras positioned by robots relative to target objects. The ML model can be used to process an image generated by a camera positioned by a robot relative to a target object based on a multi-dimensional representation of the target object. The output of the ML model can indicate, for a detected target, position data, a target type, and/or a visual inspection property. This output can then be used to update the multi-dimensional representation, which is then used to perform robotics operations on the target object. (A minimal sketch of this two-stage training follows this entry.)
    Type: Grant
    Filed: August 9, 2023
    Date of Patent: March 19, 2024
    Assignee: Wilder Systems Inc.
    Inventors: Ademola Ayodeji Oridate, William Wilder, Spencer Voiss
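A minimal sketch of the two-stage training idea described in the abstract above (pre-train on synthetic renders, then fine-tune on real camera images), assuming PyTorch and stand-in random tensors in place of real datasets; the tiny classifier and dataset helpers are hypothetical, not the model described in the patent.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def make_loader(n: int, batch: int = 8) -> DataLoader:
    # Stand-in data: 3x32x32 "images" and binary labels.
    xs = torch.randn(n, 3, 32, 32)
    ys = torch.randint(0, 2, (n,))
    return DataLoader(TensorDataset(xs, ys), batch_size=batch, shuffle=True)

def train(model: nn.Module, loader: DataLoader, epochs: int, lr: float) -> None:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for xs, ys in loader:
            opt.zero_grad()
            loss = loss_fn(model(xs), ys)
            loss.backward()
            opt.step()

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),
)

# Stage 1: pre-train on synthetic renders of the target objects.
train(model, make_loader(64), epochs=2, lr=1e-3)
# Stage 2: fine-tune on real camera images captured during robot operations.
train(model, make_loader(32), epochs=2, lr=1e-4)
```

The lower learning rate in the second stage is a common fine-tuning choice, not a requirement stated in the abstract.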
  • Patent number: 11911921
    Abstract: Aspects of the disclosure are directed towards artificial intelligence-based modeling of target objects, such as aircraft parts. In an example, a system initially trains a machine learning (ML) model on synthetic images generated from multi-dimensional representations of target objects. The same system or a different system subsequently further trains the ML model on actual images generated by cameras positioned by robots relative to target objects. The ML model can be used to process an image generated by a camera positioned by a robot relative to a target object based on a multi-dimensional representation of the target object. The output of the ML model can indicate, for a detected target, position data, a target type, and/or a visual inspection property. This output can then be used to update the multi-dimensional representation, which is then used to perform robotics operations on the target object.
    Type: Grant
    Filed: August 9, 2023
    Date of Patent: February 27, 2024
    Assignee: WILDER SYSTEMS INC.
    Inventors: Ademola Ayodeji Oridate, William Wilder, Spencer Voiss
  • Publication number: 20240051148
    Abstract: Aspects of the disclosure are directed towards artificial intelligence-based modeling of target objects, such as aircraft parts. In an example, a system initially trains a machine learning (ML) model on synthetic images generated from multi-dimensional representations of target objects. The same system or a different system subsequently further trains the ML model on actual images generated by cameras positioned by robots relative to target objects. The ML model can be used to process an image generated by a camera positioned by a robot relative to a target object based on a multi-dimensional representation of the target object. The output of the ML model can indicate, for a detected target, position data, a target type, and/or a visual inspection property. This output can then be used to update the multi-dimensional representation, which is then used to perform robotics operations on the target object.
    Type: Application
    Filed: August 9, 2023
    Publication date: February 15, 2024
    Inventors: Ademola Ayodeji ORIDATE, William WILDER, Spencer VOISS
  • Publication number: 20240051149
    Abstract: Aspects of the disclosure are directed towards artificial intelligence-based modeling of target objects, such as aircraft parts. In an example, a system initially trains a machine learning (ML) model on synthetic images generated from multi-dimensional representations of target objects. The same system or a different system subsequently further trains the ML model on actual images generated by cameras positioned by robots relative to target objects. The ML model can be used to process an image generated by a camera positioned by a robot relative to a target object based on a multi-dimensional representation of the target object. The output of the ML model can indicate, for a detected target, position data, a target type, and/or a visual inspection property. This output can then be used to update the multi-dimensional representation, which is then used to perform robotics operations on the target object.
    Type: Application
    Filed: August 9, 2023
    Publication date: February 15, 2024
    Inventors: Ademola Ayodeji ORIDATE, William WILDER, Spencer VOISS
  • Publication number: 20240051116
    Abstract: Aspects of the disclosure are directed towards path generation. A method includes a user interface (UI) displaying a first page on a first pane, wherein the first page provides a first control input for registering a working frame of a target object with a reference frame of a robot. The method further includes receiving, via the UI, a first user selection of the first control input for registering the working frame with the reference frame. Based on detection of the first user selection, the UI can display a second page on the first pane, wherein the second page provides a second control input for generating a path for the robot to traverse over a surface of the target object. The method further includes receiving, via the UI, a second user selection of the second control input and, based on detection of the second user selection, generating the path. (A minimal sketch of this two-page flow follows this entry.)
    Type: Application
    Filed: August 9, 2023
    Publication date: February 15, 2024
    Inventors: Ademola Ayodeji ORIDATE, William WILDER, Spencer VOISS
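A minimal sketch of the two-page flow described in the abstract above, written as a plain Python state machine rather than any particular UI framework; the class and method names are hypothetical.

```python
class PathGenerationPane:
    """Illustrative pane that shows a registration page first, then a
    path-generation page once registration has been triggered."""

    def __init__(self):
        self.page = "registration"  # first page shown on the pane

    def on_register_clicked(self):
        # First control input: register the part's working frame with the
        # robot's reference frame, then advance to the second page.
        print("registering working frame with robot reference frame")
        self.page = "path_generation"

    def on_generate_path_clicked(self):
        # Second control input: only meaningful once the second page is shown.
        if self.page != "path_generation":
            raise RuntimeError("register the working frame first")
        print("generating path over the target surface")

pane = PathGenerationPane()
pane.on_register_clicked()
pane.on_generate_path_clicked()
```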
  • Publication number: 20240051139
    Abstract: Aspects of the disclosure are directed towards path generation. A method includes a device registering a working frame of a target object with a reference frame of a robot. The device can generate a path over a representation of a surface of the target object. The device can generate a trajectory over the surface of the target object based on the registration, the path, and a surface normal. The device can classify a target type for a real-world target using a machine learning model based on scanned data of the surface of the target object. The device can generate a robot job file, wherein the robot job file comprises the trajectory and an autonomous operation instruction. The device can transmit the robot job file to a robot controller. (A minimal sketch of this pipeline follows this entry.)
    Type: Application
    Filed: August 9, 2023
    Publication date: February 15, 2024
    Inventors: Ademola Ayodeji ORIDATE, William WILDER, Spencer VOISS
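A minimal sketch of the register, path, trajectory, and job-file steps described in the abstract above; the function names, data shapes, and JSON job format are illustrative assumptions, not the actual robot job format.

```python
import json

def register(scan_points, reference_points):
    # Placeholder registration: in practice a point-cloud fit would return a
    # rigid transform aligning the part's working frame with the robot frame.
    return {"translation": [0.0, 0.0, 0.0], "rotation_deg": 0.0}

def generate_trajectory(path, transform, surface_normal):
    # Attach an approach orientation (from the surface normal) to each
    # path point; a real system would also apply the registration transform.
    return [{"xyz": p, "approach": surface_normal} for p in path]

def build_job_file(trajectory, operation):
    # Bundle the trajectory with an autonomous operation instruction.
    return json.dumps({"trajectory": trajectory, "operation": operation})

transform = register(scan_points=[], reference_points=[])
path = [[0.0, 0.0, 50.0], [0.0, 10.0, 50.0]]
trajectory = generate_trajectory(path, transform, surface_normal=[0.0, 0.0, 1.0])
job = build_job_file(trajectory, operation="inspect_fastener")
print(job)  # in the described workflow this payload would go to a robot controller
```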
  • Patent number: 11897145
    Abstract: Aspects of the disclosure are directed towards path generation. A method includes a device registering a working frame of a target object with a reference frame of a robot. The device can generate a path over a representation of a surface of the target object. The device can generate a trajectory over the surface of the target object based on the registration, the path, and a surface normal. The device can classify a target type for a real-world target using a machine learning model based on scanned data of the surface of the target object. The device can generate a robot job file, wherein the robot job file comprises the trajectory and an autonomous operation instruction. The device can transmit the robot job file to a robot controller.
    Type: Grant
    Filed: August 9, 2023
    Date of Patent: February 13, 2024
    Assignee: WILDER SYSTEMS INC.
    Inventors: Ademola Ayodeji Oridate, William Wilder, Spencer Voiss
  • Publication number: 20230405818
    Abstract: Aspects of the disclosure are directed towards decontamination of a target object. A method includes a device registering a target object by identifying corresponding pairs of data points from a first set of data points of a three-dimensional representation of the target object and a second set of data points associated with a reference three-dimensional representation. The device localizes a position of the target object with respect to a position of a robotic arm based at least in part on the three-dimensional representation. The device generates a set of waypoints based on a subset of the first set of data points, the waypoints being arranged on the three-dimensional representation. The device determines a first path for the robotic arm to traverse over the surface of the target object based on the waypoints. The device receives a location on the surface of the target object that comprises a contaminant. (A minimal sketch of the correspondence and waypoint steps follows this entry.)
    Type: Application
    Filed: June 20, 2023
    Publication date: December 21, 2023
    Inventors: Ademola Ayodeji ORIDATE, William WILDER, Spencer VOISS, Jacob ST. JOHN
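A minimal sketch of two steps from the abstract above: pairing scanned points with a reference model by nearest neighbour, and selecting waypoints around a reported contaminant location. It assumes NumPy and random stand-in point clouds, and is not the registration or path-planning method claimed in the patent.

```python
import numpy as np

def correspondences(scan: np.ndarray, reference: np.ndarray) -> list:
    """For each scan point, find the index of the closest reference point."""
    pairs = []
    for i, p in enumerate(scan):
        d = np.linalg.norm(reference - p, axis=1)
        pairs.append((i, int(np.argmin(d))))
    return pairs

def waypoints_near(scan: np.ndarray, contaminant_xyz: np.ndarray,
                   radius: float) -> np.ndarray:
    """Keep scan points within `radius` of the reported contaminant
    as waypoints for a localized cleaning pass."""
    d = np.linalg.norm(scan - contaminant_xyz, axis=1)
    return scan[d <= radius]

scan = np.random.rand(200, 3) * 100.0
reference = np.random.rand(200, 3) * 100.0
print(len(correspondences(scan, reference)), "point pairs")
print(waypoints_near(scan, np.array([50.0, 50.0, 50.0]), radius=15.0).shape)
```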