Patents by Inventor Kazuto Murase
Kazuto Murase has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11958202
Abstract: A system and method for performing object detection are presented. The system receives spatial structure information associated with an object which is or has been in a camera field of view of a spatial structure sensing camera. The spatial structure information is generated by the spatial structure sensing camera, and includes depth information for an environment in the camera field of view. The system determines a container pose based on the spatial structure information, wherein the container pose is for describing at least one of an orientation for the container or a depth value for at least a portion of the container. The system further determines an object pose based on the container pose, wherein the object pose is for describing at least one of an orientation for the object or a depth value for at least a portion of the object.
Type: Grant
Filed: August 12, 2021
Date of Patent: April 16, 2024
Assignee: MUJIN, INC.
Inventors: Xutao Ye, Kazuto Murase
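The two-stage estimation the abstract describes (container pose first, object pose derived from it) can be sketched roughly as follows. This is a hypothetical illustration, not MUJIN's patented implementation: the plane-fit helper, the function names, and the assumption that the object shares the container's orientation are all mine.

```python
import numpy as np

def estimate_container_pose(container_points):
    """Fit a plane to 3D points sampled from the container surface.

    Returns (normal, depth): the unit plane normal stands in for the
    container's orientation, the centroid's z value for its depth.
    """
    centroid = container_points.mean(axis=0)
    # Principal-component plane fit: the smallest singular vector is the normal.
    _, _, vt = np.linalg.svd(container_points - centroid)
    normal = vt[-1]
    if normal[2] < 0:          # orient the normal toward the camera
        normal = -normal
    return normal, centroid[2]

def estimate_object_pose(object_points, container_normal):
    """Describe the object pose based on the container pose, assuming the
    object rests flat in the container and shares its orientation."""
    return container_normal, object_points.mean(axis=0)[2]
```

A real system would segment the container and object regions out of the depth image first; here they are taken as given point sets.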
-
Patent number: 11493928
Abstract: A trajectory generation apparatus according to an embodiment includes an arithmetic unit capable of generating a positional trajectory of a movable part of a robot, in which: the arithmetic unit is further capable of generating a velocity trajectory of the movable part; and a predetermined time before switching a trajectory along which the movable part is moved from the velocity trajectory to the positional trajectory, the arithmetic unit predicts a position of the movable part at the time of the switching and generates the positional trajectory that starts from the predicted position.
Type: Grant
Filed: November 13, 2019
Date of Patent: November 8, 2022
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Yuta Watanabe, Kazuto Murase, Koji Terada
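The switching scheme in the abstract — predicting where the movable part will be a fixed lead time before the velocity-to-position handover, and starting the positional trajectory from that prediction — can be sketched as below. This is a minimal constant-velocity sketch with hypothetical names; the patent does not specify the prediction model or the trajectory generator.

```python
def predict_switch_position(position, velocity, lead_time):
    """Predict where the movable part will be when control switches from
    the velocity trajectory to the positional trajectory, assuming the
    velocity stays constant over the short lead time."""
    return position + velocity * lead_time

def positional_trajectory(start, goal, steps):
    """Linearly interpolated positional trajectory beginning at the
    predicted switch position (a placeholder for a real planner)."""
    return [start + (goal - start) * i / steps for i in range(steps + 1)]
```

Starting the positional trajectory at the predicted position, rather than at the position measured when planning begins, avoids a discontinuity at the moment of switching.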
-
Patent number: 11407102
Abstract: A grasping robot includes: a grasping mechanism configured to grasp a target object; an image-pickup unit configured to shoot a surrounding environment; an extraction unit configured to extract a graspable part that can be grasped by the grasping mechanism in the surrounding environment by using a learned model that uses an image acquired by the image-pickup unit as an input image; a position detection unit configured to detect a position of the graspable part; a recognition unit configured to recognize a state of the graspable part by referring to a lookup table that associates the position of the graspable part with a movable state thereof; and a grasping control unit configured to control the grasping mechanism so as to displace the graspable part in accordance with the state of the graspable part recognized by the recognition unit.
Type: Grant
Filed: November 25, 2019
Date of Patent: August 9, 2022
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventor: Kazuto Murase
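The recognition step — looking up a graspable part's movable state from its detected position, then choosing a displacement accordingly — can be sketched as follows. The table contents, the quantization scheme, and the state names are invented for illustration; the patent only states that such a lookup table exists.

```python
# Hypothetical lookup table associating a graspable part's (quantized)
# position with its movable state, as the abstract describes.
MOVABLE_STATE_TABLE = {
    (0, 1): "hinged",    # e.g. a door handle: part rotates about a hinge
    (2, 0): "sliding",   # e.g. a drawer handle: part translates outward
}

def quantize(position, cell=0.5):
    """Map a continuous detected position to a table key."""
    return tuple(int(p // cell) for p in position)

def plan_displacement(position):
    """Choose how to displace the graspable part based on the movable
    state recognized from the lookup table."""
    state = MOVABLE_STATE_TABLE.get(quantize(position), "unknown")
    if state == "hinged":
        return "rotate about hinge axis"
    if state == "sliding":
        return "translate along slide axis"
    return "do not grasp"
```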
-
Publication number: 20210370518
Abstract: A system and method for performing object detection are presented. The system receives spatial structure information associated with an object which is or has been in a camera field of view of a spatial structure sensing camera. The spatial structure information is generated by the spatial structure sensing camera, and includes depth information for an environment in the camera field of view. The system determines a container pose based on the spatial structure information, wherein the container pose is for describing at least one of an orientation for the container or a depth value for at least a portion of the container. The system further determines an object pose based on the container pose, wherein the object pose is for describing at least one of an orientation for the object or a depth value for at least a portion of the object.
Type: Application
Filed: August 12, 2021
Publication date: December 2, 2021
Inventors: Xutao Ye, Kazuto Murase
-
Patent number: 11130237
Abstract: A system and method for performing object detection are presented. The system receives spatial structure information associated with an object which is or has been in a camera field of view of a spatial structure sensing camera. The spatial structure information is generated by the spatial structure sensing camera, and includes depth information for an environment in the camera field of view. The system determines a container pose based on the spatial structure information, wherein the container pose is for describing at least one of an orientation for the container or a depth value for at least a portion of the container. The system further determines an object pose based on the container pose, wherein the object pose is for describing at least one of an orientation for the object or a depth value for at least a portion of the object.
Type: Grant
Filed: July 15, 2020
Date of Patent: September 28, 2021
Assignee: MUJIN, INC.
Inventors: Xutao Ye, Kazuto Murase
-
Publication number: 20210276197
Abstract: A system and method for performing object detection are presented. The system receives spatial structure information associated with an object which is or has been in a camera field of view of a spatial structure sensing camera. The spatial structure information is generated by the spatial structure sensing camera, and includes depth information for an environment in the camera field of view. The system determines a container pose based on the spatial structure information, wherein the container pose is for describing at least one of an orientation for the container or a depth value for at least a portion of the container. The system further determines an object pose based on the container pose, wherein the object pose is for describing at least one of an orientation for the object or a depth value for at least a portion of the object.
Type: Application
Filed: July 15, 2020
Publication date: September 9, 2021
Inventors: Xutao Ye, Kazuto Murase
-
Patent number: 10713486
Abstract: A failure diagnosis support system includes first image acquisition means mounted on a robot for acquiring an image of the robot; and control means for controlling position and orientation of the first image acquisition means. The control means controls the position and orientation of the first image acquisition means at a predetermined timing so that the first image acquisition means faces a predetermined part of the robot. The first image acquisition means acquires an image of the predetermined part at the position and orientation controlled by the control means.
Type: Grant
Filed: February 26, 2018
Date of Patent: July 14, 2020
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Kazuto Murase, Yuka Hashiguchi
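Orienting the image acquisition means so that it faces a predetermined part reduces to computing a camera orientation from two positions. A minimal pan/tilt sketch under my own coordinate conventions (the patent does not disclose a specific control law):

```python
import math

def aim_camera(camera_pos, target_pos):
    """Compute pan and tilt angles (radians) that point a robot-mounted
    camera at the predetermined part located at target_pos."""
    dx = target_pos[0] - camera_pos[0]
    dy = target_pos[1] - camera_pos[1]
    dz = target_pos[2] - camera_pos[2]
    pan = math.atan2(dy, dx)                    # yaw in the horizontal plane
    tilt = math.atan2(dz, math.hypot(dx, dy))   # elevation above that plane
    return pan, tilt
```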
-
Publication number: 20200164507
Abstract: A grasping robot includes: a grasping mechanism configured to grasp a target object; an image-pickup unit configured to shoot a surrounding environment; an extraction unit configured to extract a graspable part that can be grasped by the grasping mechanism in the surrounding environment by using a learned model that uses an image acquired by the image-pickup unit as an input image; a position detection unit configured to detect a position of the graspable part; a recognition unit configured to recognize a state of the graspable part by referring to a lookup table that associates the position of the graspable part with a movable state thereof; and a grasping control unit configured to control the grasping mechanism so as to displace the graspable part in accordance with the state of the graspable part recognized by the recognition unit.
Type: Application
Filed: November 25, 2019
Publication date: May 28, 2020
Inventor: Kazuto Murase
-
Publication number: 20200159235
Abstract: A trajectory generation apparatus according to an embodiment includes an arithmetic unit capable of generating a positional trajectory of a movable part of a robot, in which: the arithmetic unit is further capable of generating a velocity trajectory of the movable part; and a predetermined time before switching a trajectory along which the movable part is moved from the velocity trajectory to the positional trajectory, the arithmetic unit predicts a position of the movable part at the time of the switching and generates the positional trajectory that starts from the predicted position.
Type: Application
Filed: November 13, 2019
Publication date: May 21, 2020
Applicant: Toyota Jidosha Kabushiki Kaisha
Inventors: Yuta Watanabe, Kazuto Murase, Koji Terada
-
Patent number: 10607079
Abstract: Systems, robots, and methods for generating three-dimensional skeleton representations of people are disclosed. A method includes generating, from a two-dimensional image, a two-dimensional skeleton representation of a person present in the two-dimensional image. The two-dimensional skeleton representation includes a plurality of joints and a plurality of links between individual joints of the plurality of joints. The method further includes positioning a cone around one or more links of the plurality of links, and identifying points of a depth cloud that intersect with the cone positioned around the one or more links of the two-dimensional skeleton. The points of the depth cloud are generated by a depth sensor and each point provides depth information. The method also includes projecting the two-dimensional skeleton representation into three-dimensional space using the depth information of the points that intersect with the cone, thereby generating the three-dimensional skeleton representation of the person.
Type: Grant
Filed: January 11, 2018
Date of Patent: March 31, 2020
Assignee: Toyota Research Institute, Inc.
Inventors: Brandon Northcutt, Kevin Stone, Konstantine Mushegian, Katarina Bouma, Kazuto Murase, Akiyoshi Ochiai
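The lifting step the abstract describes — gathering depth-cloud points around a 2D skeleton link, then projecting the skeleton into 3D using their depth — can be sketched as below. This uses a fixed-radius tube in image space as a simple stand-in for the patent's cone, and a pinhole back-projection for the 3D lift; the radius, intrinsics, and function names are assumptions for illustration.

```python
import numpy as np

def depths_near_link(points_uv, depths, j1, j2, radius=10.0):
    """Select depths of cloud points whose image coordinates fall within
    `radius` pixels of the 2D link segment j1-j2 (a fixed-radius stand-in
    for the cone positioned around the link)."""
    seg = np.asarray(j2, float) - np.asarray(j1, float)
    rel = points_uv - np.asarray(j1, float)
    # Parameter of each point's projection onto the segment, clamped to it.
    t = np.clip(rel @ seg / (seg @ seg), 0.0, 1.0)
    dist = np.linalg.norm(rel - np.outer(t, seg), axis=1)
    return depths[dist <= radius]

def lift_joint(joint_uv, fx, fy, cx, cy, depth):
    """Back-project a 2D joint into 3D with a pinhole camera model,
    using a depth value aggregated from the points near its links."""
    u, v = joint_uv
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])
```

In practice the selected depths would be aggregated robustly (e.g. a median) before lifting, so stray background points do not pull a joint off the body.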
-
Publication number: 20190095711
Abstract: Systems, robots, and methods for generating three-dimensional skeleton representations of people are disclosed. A method includes generating, from a two-dimensional image, a two-dimensional skeleton representation of a person present in the two-dimensional image. The two-dimensional skeleton representation includes a plurality of joints and a plurality of links between individual joints of the plurality of joints. The method further includes positioning a cone around one or more links of the plurality of links, and identifying points of a depth cloud that intersect with the cone positioned around the one or more links of the two-dimensional skeleton. The points of the depth cloud are generated by a depth sensor and each point provides depth information. The method also includes projecting the two-dimensional skeleton representation into three-dimensional space using the depth information of the points that intersect with the cone, thereby generating the three-dimensional skeleton representation of the person.
Type: Application
Filed: January 11, 2018
Publication date: March 28, 2019
Inventors: Brandon Northcutt, Kevin Stone, Konstantine Mushegian, Katarina Bouma, Kazuto Murase, Akiyoshi Ochiai
-
Publication number: 20180268217
Abstract: A failure diagnosis support system includes first image acquisition means mounted on a robot for acquiring an image of the robot; and control means for controlling position and orientation of the first image acquisition means. The control means controls the position and orientation of the first image acquisition means at a predetermined timing so that the first image acquisition means faces a predetermined part of the robot. The first image acquisition means acquires an image of the predetermined part at the position and orientation controlled by the control means.
Type: Application
Filed: February 26, 2018
Publication date: September 20, 2018
Applicant: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Kazuto Murase, Yuka Hashiguchi