Patents by Inventor Hitoshi Hasunuma
Hitoshi Hasunuma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11919164
Abstract: A robot system (1) includes a robot (10), a motion sensor (11), surrounding environment sensors (12, 13), an operation apparatus (21), a learning control section (41), and a relay apparatus (30). The robot (10) performs work based on an operation command. The operation apparatus (21) detects and outputs an operator-operating force applied by an operator. The learning control section (41) outputs a calculation operating force. The relay apparatus (30) outputs the operation command based on the operator-operating force and the calculation operating force. The learning control section (41) estimates and outputs the calculation operating force using a model constructed by machine learning of the operator-operating force, the operation data and surrounding environment data outputted by the sensors (11 to 13), and the operation command outputted by the relay apparatus (30).
Type: Grant
Filed: May 24, 2019
Date of Patent: March 5, 2024
Assignee: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Hitoshi Hasunuma, Jun Fujimori, Hiroki Kinoshita, Takeshi Yamamoto, Hiroki Takahashi, Kazuki Kurashima
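The relay logic this abstract describes — forming one operation command from the operator's force and the model-estimated "calculation operating force" — could be sketched as below. The blending rule, function name, and weight parameter are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of the relay apparatus: blend the operator-operating
# force with the learned model's calculation operating force into a single
# operation command. The weighted-sum rule is an assumption for illustration.

def relay_command(operator_force, calc_force, operator_weight=0.5):
    """Blend the operator's force with the model's estimated force."""
    return operator_weight * operator_force + (1.0 - operator_weight) * calc_force

# With equal weighting, the command is the midpoint of the two forces.
cmd = relay_command(2.0, 4.0)
```

In practice the weighting could vary over time (e.g., shifting toward the model as its estimates improve), but the abstract does not specify the combination rule.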
-
Publication number: 20240051134
Abstract: A controller that performs a control in which a robot autonomously performs a given work includes a first processor. The first processor performs processing including acquiring state information including a state of a workpiece that is a work target while performing the given work, determining candidates of a work position of the workpiece based on the state information, transmitting a selection request for requesting a selection of the work position from the candidates of the work position to an operation terminal, the operation terminal being communicably connected to the first processor via a communication network, and, when information on a selected position that is the selected work position is received from the operation terminal, causing the robot to operate autonomously according to the selected position.
Type: Application
Filed: December 16, 2021
Publication date: February 15, 2024
Applicant: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Hitoshi HASUNUMA, Masayuki KAMON, Takeshi YAMAMOTO
-
Publication number: 20240042620
Abstract: A robot system includes a plurality of self-propelled robots, each including an autonomously travelable carriage and a robotic arm mounted on the carriage, and a single manipulation console that is operated by an operator to allow the operator to manually operate the plurality of self-propelled robots. The plurality of self-propelled robots include a first self-propelled robot that performs a given first work, and a second self-propelled robot that performs a given second work different in kind from the first work.
Type: Application
Filed: December 21, 2021
Publication date: February 8, 2024
Applicant: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Masayuki KAMON, Hitoshi HASUNUMA
-
Patent number: 11858140
Abstract: A robot system includes a robot, state detection sensors, a timekeeping unit, a learning control unit, a determination unit, an operation device, an input unit, and an additional learning unit. The determination unit determines whether or not the work of the robot can be continued under the control of the learning control unit based on the state values detected by the state detection sensors, and outputs a determination result. The additional learning unit performs additional learning using the determination result indicating that the work of the robot cannot be continued, the operator operation force and work state output by the operation device and the input unit, and the timer signal output by the timekeeping unit.
Type: Grant
Filed: May 24, 2019
Date of Patent: January 2, 2024
Assignee: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Hitoshi Hasunuma, Takuya Shitaka, Takeshi Yamamoto, Kazuki Kurashima
-
Publication number: 20230321812
Abstract: A remote control system includes: an operator that is operated by a user; a robot that applies a treatment to an object in accordance with an action of the operator; a contact force sensor that is disposed in the robot and detects an operating state of the robot; an imager that captures images of at least one of the robot or the object; a display that displays the captured images captured by the imager and presents the captured images to the user operating the operator; and a controller that performs action control of at least one of the robot or the operator in accordance with detection results of the contact force sensor. The controller delays the action control to reduce a lag between the action control and display timings of the captured images by the display.
Type: Application
Filed: June 13, 2023
Publication date: October 12, 2023
Applicant: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Kentaro AZUMA, Hitoshi HASUNUMA
-
Publication number: 20230249341
Abstract: The robot teaching method includes a pre-registration step, a robot operation step, and a teaching step. The pre-registration step is for specifying a relative self-position of a measuring device with respect to the surrounding environment by measuring the surrounding environment using the measuring device, and registering an environment teaching point that is a teaching point of the robot specified using the relative self-position. The robot operation step is for automatically operating the robot so that the relative self-position of the robot with respect to the surrounding environment becomes equal to the environment teaching point in a state where the measuring device is attached to the robot. The teaching step is for registering a detection value of a position and a posture of the robot measured by an internal sensor as teaching information in a state where the relative self-position of the robot with respect to the surrounding environment is equal to the environment teaching point.
Type: Application
Filed: June 21, 2021
Publication date: August 10, 2023
Applicant: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Kazuki KURASHIMA, Hitoshi HASUNUMA, Takeshi YAMAMOTO, Masaomi IIDA, Tomomi SANO
-
Patent number: 11701772
Abstract: The automatic operation system includes a plurality of learned imitation models and a model selecting unit. The learned imitation models are constructed by machine learning of operation history data, the operation history data being classified into several groups by an automatic classification algorithm, the operation history data of each group being learned by the imitation model corresponding to the group. The operation history data include data indicating a surrounding environment and data indicating an operation of an operator in the surrounding environment. The model selecting unit selects one imitation model from the several imitation models based on a result of classifying data indicating a given surrounding environment by the automatic classification algorithm.
Type: Grant
Filed: June 8, 2018
Date of Patent: July 18, 2023
Inventors: Hitoshi Hasunuma, Masayuki Enomoto, Jun Fujimori
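The scheme in this abstract — group the operation history, keep one imitation model per group, and at run time select the model whose group matches the current environment — could be sketched as follows. Nearest-centroid grouping and the string stand-ins for learned models are illustrative assumptions, not the patented classification algorithm.

```python
# Minimal sketch: classify a surrounding-environment vector into one of the
# pre-computed groups (here, by nearest centroid) and select the imitation
# model learned for that group. Centroids, models, and the distance metric
# are assumptions for illustration.

def nearest_group(centroids, environment):
    """Index of the centroid closest (squared Euclidean) to the environment."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, environment))
    return min(range(len(centroids)), key=lambda i: dist(centroids[i]))

def select_model(models, centroids, environment):
    """Pick the imitation model for the group the environment falls into."""
    return models[nearest_group(centroids, environment)]

centroids = [(0.0, 0.0), (10.0, 10.0)]
models = ["model_flat_terrain", "model_cluttered"]  # stand-ins for learned models
chosen = select_model(models, centroids, (9.0, 11.0))
```

Any automatic classification algorithm (k-means, a learned classifier, etc.) could play the role of `nearest_group`; the abstract leaves the choice open.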
-
Publication number: 20230045162
Abstract: A training data screening device includes a data evaluation model, a data evaluator, a memory, and a training data screener. The data evaluation model is constructed by machine learning on at least a part of the collected data, or by machine learning on data different from the collected data. The data evaluator evaluates the input collected data using the data evaluation model. The memory stores the evaluated data, which is the collected data evaluated by the data evaluator. The training data screener screens the training data for constructing the learning model from the evaluated data stored in the memory by an instruction of an operator to whom an evaluation result of the data evaluator is presented, or automatically screens the training data based on the evaluation result.
Type: Application
Filed: December 22, 2020
Publication date: February 9, 2023
Applicant: Kawasaki Jukogyo Kabushiki Kaisha
Inventors: Takeshi YAMAMOTO, Hitoshi HASUNUMA, Kazuki KURASHIMA
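The automatic screening path in this abstract — score each collected sample with an evaluation model, store the scores, and keep only samples that pass — could be sketched as below. The scoring function (fraction of non-missing fields) and the threshold are placeholder assumptions standing in for the learned data evaluation model.

```python
# Sketch of the screening flow: evaluate each collected sample, then screen
# training data automatically by threshold. A real data evaluation model
# would be learned; this completeness score is an illustrative stand-in.

def evaluate(sample):
    """Stand-in evaluator: score = fraction of fields that are present."""
    values = list(sample.values())
    return sum(v is not None for v in values) / len(values)

def screen(collected, threshold=0.8):
    """Keep samples whose evaluation score meets the threshold (automatic mode)."""
    evaluated = [(s, evaluate(s)) for s in collected]   # evaluated data in memory
    return [s for s, score in evaluated if score >= threshold]

collected = [
    {"image": "img0", "force": 1.2, "label": "grasp"},
    {"image": "img1", "force": None, "label": None},    # incomplete sample
]
training = screen(collected)
```

In the operator-driven mode the abstract also describes, the stored scores would be presented to a human who makes the keep/drop decision instead of the threshold.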
-
Publication number: 20220388160
Abstract: A control device includes: first circuitry that generates a command to cause a robot to autonomously grind a grinding target portion; second circuitry that generates a command to cause the robot to grind a grinding target portion according to manipulation information from an operation device; third circuitry that controls operation of the robot according to the command; storage that stores image data of a grinding target portion and operation data of the robot corresponding to the command; and fourth circuitry that performs machine learning by using image data of a grinding target portion and the operation data for the grinding target portion, receives the image data as input data, and outputs an operation correspondence command corresponding to the operation data as output data. The first circuitry generates the command based on the operation correspondence command.
Type: Application
Filed: November 16, 2020
Publication date: December 8, 2022
Applicant: Kawasaki Jukogyo Kabushiki Kaisha
Inventors: Shingo YONEMOTO, Takanori KOZUKI, Masahiko AKAMATSU, Hitoshi HASUNUMA, Masayuki KAMON
-
Publication number: 20220220709
Abstract: Construction machinery with learning function includes an operating part having a working part, a manipulating part, a work-state detecting part, an operation-state detecting part, a reaction detecting part, a learning data memory configured to store a command outputted from the manipulating part in a time series as command data, and store, in a time series as estimation basic data, work-state data, operation-state data, and reaction data, a learning module configured to execute machine learning of command data stored in the learning data memory by using estimation basic data stored in the learning data memory, and, after the machine learning, receive an input of the estimation basic data during the operation of the operating part, and output an estimated command of the command, and a hydraulic drive system configured to drive the operating part based on one of the command and the estimated command, or both of the command and the estimated command.
Type: Application
Filed: May 20, 2020
Publication date: July 14, 2022
Applicant: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Masayuki KAMON, Hitoshi HASUNUMA, Shigetsugu TANAKA
-
Publication number: 20220152823
Abstract: A machine learning model operation management system includes a model building server and an operation server. The model building server builds a trained machine learning model based on received training data. When the trained machine learning model stored in a robot controller operates for determining the operation of a robot, the operation server receives operation information generated by the robot controller. Data of the trained machine learning model built by the model building server is assigned with model identification information that uniquely identifies the trained machine learning model. The robot controller makes an external inquiry as to whether or not it has permission to use the trained machine learning model stored in itself, and if it has the permission to use, it makes the trained machine learning model available for use. The operation server stores the operation information in association with the model identification information.
Type: Application
Filed: February 27, 2020
Publication date: May 19, 2022
Applicant: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Hitoshi HASUNUMA, Jun YAMAGUCHI
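The management flow this abstract describes — each trained model carries unique identification information, the controller makes an external permission inquiry before using its stored model, and operation information is logged against the model id — could be sketched as follows. The class and method names are assumptions for illustration.

```python
# Sketch of the model operation management flow: permission inquiry before
# use, and operation information stored keyed by model identification
# information. Names and the in-memory "server" are illustrative assumptions.

class OperationServer:
    def __init__(self, permitted_model_ids):
        self.permitted = set(permitted_model_ids)
        self.operation_log = {}          # model_id -> list of operation records

    def has_permission(self, model_id):
        return model_id in self.permitted

    def record(self, model_id, operation_info):
        self.operation_log.setdefault(model_id, []).append(operation_info)

class RobotController:
    def __init__(self, model_id, server):
        self.model_id = model_id         # model identification information
        self.server = server

    def run(self, operation_info):
        # External inquiry before the stored model is made available for use.
        if not self.server.has_permission(self.model_id):
            return False
        self.server.record(self.model_id, operation_info)
        return True

server = OperationServer(permitted_model_ids={"model-001"})
ok = RobotController("model-001", server).run("pick-and-place cycle 1")
denied = RobotController("model-999", server).run("pick-and-place cycle 1")
```

Keying the log by model id is what lets the operation server later associate observed robot behavior with the exact trained model that produced it.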
-
Patent number: 11305427
Abstract: A robot system includes a robot, an image acquisition part, an image prediction part, and an operation controller. The image acquisition part acquires a current image captured by a robot camera arranged to move with the end effector. The image prediction part predicts a next image to be captured by the robot camera based on a teaching image model and the current image. The teaching image model is constructed by machine learning of teaching images which the robot camera is predicted to capture while the movable part performs an adjustment operation. The operation controller calculates the command value for operating the movable part so that the image captured by the robot camera approaches the next image, and controls the movable part based on the command value.
Type: Grant
Filed: November 28, 2018
Date of Patent: April 19, 2022
Assignee: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Hitoshi Hasunuma, Masayuki Enomoto
-
Publication number: 20220088775
Abstract: A robot control device includes: a trained model built by being trained on work data; a control data acquisition section which acquires control data of the robot based on data from the trained model; base trained models built for each of a plurality of simple operations by being trained on work data; an operation label storage section which stores operation labels corresponding to the base trained models; a base trained model combination information acquisition section which acquires combination information when the trained model is represented by a combination of a plurality of the base trained models, by acquiring a similarity between the trained model and the respective base trained models; and an information output section which outputs the operation label corresponding to each of the base trained models which represent the trained model.
Type: Application
Filed: December 27, 2019
Publication date: March 24, 2022
Applicant: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Hitoshi HASUNUMA, Takeshi YAMAMOTO, Kazuki KURASHIMA
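The idea in this abstract — compare a trained model against base models for simple operations and report the operation labels of the similar bases as a human-readable description — could be sketched as below. Representing models as parameter vectors and using cosine similarity are illustrative assumptions; the abstract does not specify the similarity measure.

```python
# Sketch: describe a trained model by the operation labels of the base
# trained models it resembles. Models are reduced to plain vectors and
# compared by cosine similarity, both assumptions for illustration.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def describe_model(model_vec, base_models, labels, threshold=0.5):
    """Operation labels of base models whose similarity exceeds the threshold."""
    return [labels[i] for i, base in enumerate(base_models)
            if cosine_similarity(model_vec, base) > threshold]

bases = [(1.0, 0.0), (0.0, 1.0)]     # stand-ins for base trained models
labels = ["reach", "grasp"]          # stored operation labels
desc = describe_model((0.7, 0.7), bases, labels)
```

A model similar to both bases is reported with both labels, which is the "combination information" the abstract refers to.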
-
Publication number: 20220063091
Abstract: A robot control device includes a modification work trained model building section. The modification work trained model building section builds a modification work trained model by training on modification work data when a user's modification operation is performed to intervene in a provisional operation of a robot arm to perform a series of operations. In the modification work data, input data is a state of the robot arm and its surroundings when the robot arm is operating, and output data is data of the operation by a user for modifying the provisional operation, or the modification operation of the robot arm by the user's operation for modifying the provisional operation.
Type: Application
Filed: December 27, 2019
Publication date: March 3, 2022
Applicant: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Hitoshi HASUNUMA, Takeshi YAMAMOTO, Kazuki KURASHIMA
-
Publication number: 20220016761
Abstract: A robot control device includes: a learned model created through learning work data composed of input and output data, the input data including states of a robot and its surroundings where humans operate the robot to perform a series of works, the output data including the corresponding human operation or the movement of the robot caused by it; a control data acquisition section that acquires control data by obtaining, from the model, output data related to human operation or movement presumed in response to and in accordance with the input data; a completion rate acquisition section that acquires a completion rate indicating to which progress level in the series of works the output data corresponds; and a certainty factor acquisition section that acquires a certainty factor indicating a probability of the presumption in a case where the model outputs the output data in response to the input data.
Type: Application
Filed: December 27, 2019
Publication date: January 20, 2022
Applicant: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Hitoshi HASUNUMA, Takeshi YAMAMOTO, Kazuki KURASHIMA
-
Publication number: 20220011750
Abstract: The information projection system includes stereo cameras, a controller, and projectors. The controller has a communication device, an analysis unit, a registration unit, and a projection control unit. The communication device acquires sets of appearance information in which each stereo camera detects an appearance of a workplace. The analysis unit analyzes the sets of appearance information and creates map information indicating shapes and positions of objects existing in the workplace. The registration unit creates and registers work status information based on the map information that is individually created from the sets of appearance information respectively detected by the plurality of stereo cameras. The projection control unit creates an auxiliary image for assisting a work based on the work status information, and outputs the auxiliary image to each projector.
Type: Application
Filed: December 18, 2019
Publication date: January 13, 2022
Applicant: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Hitoshi HASUNUMA, Shigekazu SHIKODA, Takeshi YAMAMOTO, Naohiro NAKAMURA, Kazuki KURASHIMA
-
Publication number: 20210220990
Abstract: A robot system (1) includes a robot (10), a motion sensor (11), surrounding environment sensors (12, 13), an operation apparatus (21), a learning control section (41), and a relay apparatus (30). The robot (10) performs work based on an operation command. The operation apparatus (21) detects and outputs an operator-operating force applied by an operator. The learning control section (41) outputs a calculation operating force. The relay apparatus (30) outputs the operation command based on the operator-operating force and the calculation operating force. The learning control section (41) estimates and outputs the calculation operating force using a model constructed by machine learning of the operator-operating force, the operation data and surrounding environment data outputted by the sensors (11 to 13), and the operation command outputted by the relay apparatus (30).
Type: Application
Filed: May 24, 2019
Publication date: July 22, 2021
Applicant: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Hitoshi HASUNUMA, Jun FUJIMORI, Hiroki KINOSHITA, Takeshi YAMAMOTO, Hiroki TAKAHASHI, Kazuki KURASHIMA
-
Publication number: 20210197369
Abstract: A robot system includes a robot, state detection sensors, a timekeeping unit, a learning control unit, a determination unit, an operation device, an input unit, and an additional learning unit. The determination unit determines whether or not the work of the robot can be continued under the control of the learning control unit based on the state values detected by the state detection sensors, and outputs a determination result. The additional learning unit performs additional learning using the determination result indicating that the work of the robot cannot be continued, the operator operation force and work state output by the operation device and the input unit, and the timer signal output by the timekeeping unit.
Type: Application
Filed: May 24, 2019
Publication date: July 1, 2021
Applicant: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Hitoshi HASUNUMA, Takuya SHITAKA, Takeshi YAMAMOTO, Kazuki KURASHIMA
-
Publication number: 20200366815
Abstract: An environment acquisition system includes a housing, a visual sensor, and a data processor. The visual sensor is accommodated in the housing and is capable of repeatedly acquiring environmental information about the environment outside the housing. The data processor performs an estimation process of a position and a posture of the visual sensor and a generating process of external environment three-dimensional data based on the environmental information acquired by the visual sensor or information obtained from the environmental information. The visual sensor can acquire the environmental information in a state where the posture of the housing is not controlled and the housing is not in contact with the ground and is not mechanically restrained from outside.
Type: Application
Filed: November 5, 2018
Publication date: November 19, 2020
Applicant: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventor: Hitoshi HASUNUMA
-
Publication number: 20200353620
Abstract: A robot system includes a robot, an image acquisition part, an image prediction part, and an operation controller. The image acquisition part acquires a current image captured by a robot camera arranged to move with the end effector. The image prediction part predicts a next image to be captured by the robot camera based on a teaching image model and the current image. The teaching image model is constructed by machine learning of teaching images which the robot camera is predicted to capture while the movable part performs an adjustment operation. The operation controller calculates the command value for operating the movable part so that the image captured by the robot camera approaches the next image, and controls the movable part based on the command value.
Type: Application
Filed: November 28, 2018
Publication date: November 12, 2020
Applicant: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Hitoshi HASUNUMA, Masayuki ENOMOTO