Patents by Inventor Kazuhiro SASABUCHI

Kazuhiro SASABUCHI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240051128
    Abstract: The techniques disclosed herein enable a machine learning model to learn a termination condition of a sub-task. A sub-task is one of a number of sub-tasks that, when performed in sequence, accomplish a long-running task. A machine learning model used to perform the sub-task is augmented to also provide a termination signal. The termination signal indicates whether the sub-task's termination condition has been met. Monitoring the termination signal while performing the sub-task enables subsequent sub-tasks to begin seamlessly at the appropriate time. A termination condition may be learned from the same data used to train other model outputs. In some configurations, the model learns whether a sub-task is complete by periodically attempting subsequent sub-tasks. If a subsequent sub-task can be performed, positive reinforcement is provided for the termination condition. The termination condition may also be trained using synthetic scenarios designed to test when the termination condition has been met. (A minimal sketch of this termination-signal hand-off appears after this listing.)
    Type: Application
    Filed: December 5, 2022
    Publication date: February 15, 2024
    Inventors: Kartavya NEEMA, Kazuhiro SASABUCHI, Aydan AKSOYLAR, Naoki WAKE, Jun TAKAMATSU, Ruofan KONG, Marcos de MOURA CAMPOS, Victor SHNAYDER, Brice Hoani Valentin CHUNG, Katsushi IKEUCHI
  • Patent number: 11731271
    Abstract: Traditionally, robots may learn to perform tasks by observation in clean or sterile environments. However, robots are unable to accurately learn tasks by observation in real environments (e.g., cluttered, noisy, chaotic environments). Methods and systems are provided for teaching robots to learn tasks in real environments based on input (e.g., verbal or textual cues). In particular, a verbal-based Focus-of-Attention (FOA) model receives input and parses it to recognize at least a task and a target object name. This information is used to spatio-temporally filter a demonstration of the task to allow the robot to focus on the target object and movements associated with the target object within a real environment. In this way, using the verbal-based FOA, a robot is able to recognize “where and when” to pay attention to the demonstration of the task, thereby enabling the robot to learn the task by observation in a real environment. (An illustrative sketch of the FOA filtering appears after this listing.)
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: August 22, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Naoki Wake, Kazuhiro Sasabuchi, Katsushi Ikeuchi
  • Publication number: 20210402593
    Abstract: Traditionally, robots may learn to perform tasks by observation in clean or sterile environments. However, robots are unable to accurately learn tasks by observation in real environments (e.g., cluttered, noisy, chaotic environments). Methods and systems are provided for teaching robots to learn tasks in real environments based on input (e.g., verbal or textual cues). In particular, a verbal-based Focus-of-Attention (FOA) model receives input and parses it to recognize at least a task and a target object name. This information is used to spatio-temporally filter a demonstration of the task to allow the robot to focus on the target object and movements associated with the target object within a real environment. In this way, using the verbal-based FOA, a robot is able to recognize “where and when” to pay attention to the demonstration of the task, thereby enabling the robot to learn the task by observation in a real environment. (This is the published application for the grant above; the same FOA sketch after this listing applies.)
    Type: Application
    Filed: June 30, 2020
    Publication date: December 30, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Naoki WAKE, Kazuhiro SASABUCHI, Katsushi IKEUCHI
  • Publication number: 20210402597
    Abstract: Systems, methods, and computer-readable media are disclosed for task-oriented motion mapping on an agent using body role division. One method includes: receiving task demonstration information of a particular task; receiving a set of instructions for the particular task; receiving a configuration of an agent to perform the particular task, the configuration of the agent including a plurality of joints, each joint belonging to one or more of a configurational group, a positional group, and an orientational group; mapping the configurational group of the agent based on the task demonstration information; changing values in the orientational group based on one or more of the task demonstration information and the set of instructions; changing values in the positional group based on the set of instructions; and producing a task-oriented motion mapping based on the mapped configurational group, the changed values in the orientational group, and the changed values in the positional group. (A sketch of this body-role-division mapping appears after this listing.)
    Type: Application
    Filed: June 29, 2020
    Publication date: December 30, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kazuhiro SASABUCHI, Naoki WAKE, Katsushi IKEUCHI
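
Illustrative code sketches

The Python sketches below are editorial illustrations of the techniques described in the abstracts above; none of the code comes from the patents, and every class, function, and parameter name is a hypothetical stand-in. Publication 20240051128 augments each sub-task's model with a termination signal and advances to the next sub-task when that signal fires. A minimal sketch of such a runner, assuming each policy's forward pass returns an (action, termination probability) pair:

    import random
    from typing import List, Tuple

    class SubTaskPolicy:
        """Hypothetical sub-task model whose step returns (action, termination_prob)."""

        def __init__(self, name: str) -> None:
            self.name = name

        def step(self, observation: float) -> Tuple[str, float]:
            # Stand-in for a learned model: an action plus a termination signal.
            termination_prob = min(1.0, observation)  # toy: "done" as obs nears 1.0
            return f"{self.name}-action", termination_prob

    def run_task(policies: List[SubTaskPolicy], threshold: float = 0.9) -> None:
        """Run sub-tasks in sequence, handing off when the termination signal fires."""
        for policy in policies:
            observation = 0.0
            while True:
                action, term = policy.step(observation)
                # ... apply `action` to the robot/environment and observe the result ...
                observation = min(1.0, observation + random.uniform(0.1, 0.3))
                if term >= threshold:   # termination condition met:
                    break               # begin the next sub-task seamlessly

    run_task([SubTaskPolicy("reach"), SubTaskPolicy("grasp"), SubTaskPolicy("lift")])

Only the inference-time hand-off is sketched here; the abstract's training ideas (positive reinforcement when a subsequent sub-task succeeds, synthetic test scenarios) would supply the labels for the termination output.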
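Patent 11731271 and its published application 20210402593 describe a verbal-based Focus-of-Attention model: parse the verbal input for a task and a target object name, then spatio-temporally filter the demonstration so the robot attends only to the target object while it matters. A toy sketch, assuming a "<verb> the <object>" instruction format and per-frame object positions (both are assumptions, not the patent's representation):

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Frame:
        time: float
        objects: dict  # object name -> (x, y) position observed in this frame

    def parse_instruction(text: str) -> Tuple[str, str]:
        """Toy parser assuming "<verb> the <object>" phrasing (an assumption)."""
        words = text.lower().replace("the ", "").split()
        return words[0], words[-1]  # (task verb, target object name)

    def focus_of_attention(frames: List[Frame], target: str) -> List[Frame]:
        """Temporal filter: keep frames where the target is present and has just
        appeared or moved (spatial filtering would further crop around it)."""
        kept = []
        prev = None
        for frame in frames:
            pos = frame.objects.get(target)
            if pos is not None and pos != prev:
                kept.append(frame)
            prev = pos
        return kept

    demo = [
        Frame(0.0, {"cup": (0, 0), "plate": (5, 5)}),
        Frame(0.5, {"cup": (0, 0), "plate": (5, 5)}),  # cup not yet moving
        Frame(1.0, {"cup": (1, 0), "plate": (5, 5)}),  # cup moves: relevant
        Frame(1.5, {"cup": (2, 1), "plate": (5, 5)}),
    ]
    task, target = parse_instruction("grasp the cup")
    relevant = focus_of_attention(demo, target)
    print(task, target, [f.time for f in relevant])  # grasp cup [0.0, 1.0, 1.5]

Frames in which the cup is absent or stationary are dropped, which is the "where and when" selection the abstract describes.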
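Publication 20210402597 divides an agent's joints into configurational, positional, and orientational groups and treats each group differently when mapping a demonstrated motion onto the agent. A minimal sketch, assuming dictionary-valued joint angles; the group names follow the abstract, but the update rules here are simplified stand-ins:

    from typing import Dict, List

    def map_motion(
        joint_groups: Dict[str, List[str]],  # group name -> joint names
        demonstration: Dict[str, float],     # joint values observed in the demo
        instructions: Dict[str, float],      # joint values derived from instructions
    ) -> Dict[str, float]:
        """Produce a task-oriented motion mapping using body role division."""
        mapping: Dict[str, float] = {}
        # 1. Configurational group: mapped directly from the task demonstration.
        for joint in joint_groups.get("configurational", []):
            mapping[joint] = demonstration[joint]
        # 2. Orientational group: demonstration values, overridden by instructions.
        for joint in joint_groups.get("orientational", []):
            mapping[joint] = instructions.get(joint, demonstration[joint])
        # 3. Positional group: values taken from the instruction set.
        for joint in joint_groups.get("positional", []):
            mapping[joint] = instructions[joint]
        return mapping

    groups = {
        "configurational": ["elbow"],
        "orientational": ["wrist"],
        "positional": ["shoulder"],
    }
    demo = {"elbow": 0.7, "wrist": 1.2, "shoulder": 0.3}
    instr = {"wrist": 1.0, "shoulder": 0.5}
    print(map_motion(groups, demo, instr))  # {'elbow': 0.7, 'wrist': 1.0, 'shoulder': 0.5}

The point of the role division is that different joints serve different purposes in a task, so the configurational posture can come from the human demonstration while position and orientation targets come from the instructions.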