Patents by Inventor Ryota HIGA
Ryota HIGA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240124251

Abstract: The loading container information input means 71 accepts input of information on the target container, which is the container to be loaded next. The inquiring means 72 transmits the current loading state and information on the target container to a container loading planning device, which replies with a loading position for the container in response to an inquiry, to inquire about the loading position of the target container. The evaluation means 73 outputs an evaluation value for loading the target container at the loading position received from the container loading planning device. The output means 74 outputs the evaluation values in time-series order corresponding to the loading of the target containers.

Type: Application
Filed: February 24, 2021
Publication date: April 18, 2024
Applicant: NEC Corporation
Inventor: Ryota HIGA
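The inquiry/evaluation flow in this abstract can be illustrated with a minimal sketch. The planner interface and the scoring rule below are invented stand-ins for demonstration, not the patented method.

```python
# Hypothetical sketch of the flow: inquire about a position, evaluate it,
# and emit evaluation values in loading (time-series) order.

def plan_loading_position(loading_state, container):
    """Stand-in for the container loading planning device's reply."""
    # Place the container on top of the current stack (illustrative only).
    return (0, 0, sum(c["height"] for c in loading_state))

def evaluate_position(loading_state, container, position):
    """Illustrative evaluation value: lower placements score higher."""
    _, _, z = position
    return 1.0 / (1.0 + z)

def load_containers(containers):
    loading_state, evaluations = [], []
    for container in containers:
        position = plan_loading_position(loading_state, container)  # inquiry
        evaluations.append(evaluate_position(loading_state, container, position))
        loading_state.append(container)
    return evaluations  # evaluation values in time-series order

scores = load_containers([{"height": 1}, {"height": 2}, {"height": 1}])
```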
-
Publication number: 20240127115

Abstract: The loading container information input means 71 accepts input of information on the target container. The inquiring means 72 transmits the current loading state and information on the target container to the container loading planning device 80 to inquire about the loading position of the target container. The evaluation means 73 outputs an evaluation value for loading the target container at the received loading position. The output means 74 outputs data including the loading state and information of the target container, the loading position of the target container, and the evaluation value as training data. The learning means 91 learns the model by machine learning using the output training data. The loading position determination means 81 determines the loading position of the target container using the learned model.

Type: Application
Filed: February 24, 2021
Publication date: April 18, 2024
Applicant: NEC Corporation
Inventor: Ryota HIGA
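The training-data assembly step described here can be sketched as follows. The toy planner and scoring rule are assumptions for illustration; only the shape of each training example (state, container, position, evaluation value) mirrors the abstract.

```python
# Illustrative sketch of assembling training data for a loading-position model.

def suggest_position(state, container):
    return len(state)  # toy planner: next free slot

def evaluate(state, container, position):
    return float(position == len(state))  # 1.0 if placed at the next slot

def build_training_data(containers):
    state, rows = [], []
    for container in containers:
        position = suggest_position(state, container)
        value = evaluate(state, container, position)
        # One training example: loading state, container, position, evaluation.
        rows.append({"state": list(state), "container": container,
                     "position": position, "value": value})
        state.append(container)
    return rows

rows = build_training_data(["A", "B", "C"])
```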
-
Publication number: 20230394970

Abstract: A learning means 81 learns a plan evaluation function that evaluates an internal value in an own agent when a mission including an action is planned so as to maximize a value of a mission evaluation function that calculates a value of the action of the own agent in a certain state or an expected value of a cumulative sum of the values. An evaluation means 82 evaluates, using a utility function that defines a difference between the internal values calculated using the plan evaluation function, a utility of the mission when a target resource, which is a resource to be a target candidate for negotiation, is transferred to another agent or when the target resource is transferred from the other agent.

Type: Application
Filed: October 28, 2020
Publication date: December 7, 2023
Applicant: NEC Corporation
Inventor: Ryota HIGA
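The utility-as-difference idea can be made concrete with a toy example: the utility of transferring a resource is the change in the agent's internal plan value. The value function below is an invented stand-in, not the patented plan evaluation function.

```python
# Toy sketch: utility of a resource transfer as a difference of internal values.

def plan_value(resources):
    """Illustrative internal value: diminishing returns over resources."""
    return sum(1.0 / (i + 1) for i in range(len(resources)))

def transfer_utility(resources, target):
    """Utility of giving away `target`: value after minus value before."""
    remaining = [r for r in resources if r != target]
    return plan_value(remaining) - plan_value(resources)

u = transfer_utility(["fuel", "crane", "truck"], "truck")
```

A negative utility signals that the agent would be worse off handing the resource over, which is the quantity a negotiation step would weigh.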
-
Publication number: 20230385892

Abstract: An execution planning means 81 calculates, with an offer from another agent as a constraint condition, a first value which is a value of an optimal execution plan up to achievement of an objective planned based on a state transition by an action taken according to a policy of an own agent. A determination means 82 determines, with the first value as an argument, whether or not a value calculated by a utility function, which is a function defining a utility of an execution plan of the own agent when the offer from the other agent is accepted, is greater than a predetermined threshold value. The determination means 82 determines to accept the offer from the other agent when the value is greater than the threshold value, and determines to reject the offer from the other agent when the value is equal to or less than the threshold value.

Type: Application
Filed: October 23, 2020
Publication date: November 30, 2023
Applicant: NEC Corporation
Inventor: Ryota HIGA
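The accept/reject rule above reduces to a compact decision procedure. The plan-value function, utility function, and numbers below are illustrative assumptions.

```python
# Minimal sketch of the decision rule: re-plan under the offer's constraint,
# feed the resulting plan value to a utility function, compare to a threshold.

def best_plan_value_under(offer_constraint):
    """Stand-in for re-planning with the offer as a constraint condition."""
    return 10.0 - offer_constraint.get("cost", 0.0)

def decide_offer(offer, utility, threshold):
    first_value = best_plan_value_under(offer)  # value of optimal plan under offer
    return utility(first_value) > threshold     # accept iff utility exceeds threshold

accept = decide_offer({"cost": 2.0}, utility=lambda v: v / 10.0, threshold=0.5)
reject = decide_offer({"cost": 6.0}, utility=lambda v: v / 10.0, threshold=0.5)
```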
-
Publication number: 20230314147

Abstract: An object of the present disclosure is to provide a path generation method that can generate a set of paths from any start nodes to any goal nodes with a distribution given by the user. A path generator (06) includes a path finder (13) that generates a plurality of paths based on a plurality of weights between nodes, the nodes being included in a map, and a weight generator (12) that generates the plurality of weights defined between the nodes based on a predetermined distribution.

Type: Application
Filed: September 29, 2020
Publication date: October 5, 2023
Applicant: NEC Corporation
Inventors: Aayush AGGARWAL, Ryota HIGA, Hiroaki INOTSUME
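The weight-generator/path-finder split can be sketched directly: draw each edge weight from a user-given distribution, then run a shortest-path search per draw, yielding a distribution over paths. The graph, uniform distribution, and Dijkstra search below are illustrative choices.

```python
# Rough sketch: random edge weights + repeated shortest-path search
# produce a set of paths governed by the weight distribution.
import heapq
import random

def shortest_path(edges, start, goal):
    """Dijkstra over a dict {node: [(neighbor, weight), ...]}."""
    heap, seen = [(0.0, start, [start])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in edges.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return None

def sample_paths(graph, start, goal, n, rng):
    paths = []
    for _ in range(n):
        # Weight generator: redraw each edge weight from a uniform distribution.
        weighted = {u: [(v, rng.uniform(0.5, 2.0)) for v, _ in adj]
                    for u, adj in graph.items()}
        paths.append(tuple(shortest_path(weighted, start, goal)))
    return paths

graph = {"s": [("a", 1), ("b", 1)], "a": [("g", 1)], "b": [("g", 1)], "g": []}
paths = sample_paths(graph, "s", "g", 10, random.Random(0))
```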
-
Patent number: 11740912

Abstract: An operation support apparatus (100) includes a storage unit (110) configured to store time-series data (111) and operation information (112), a specification unit (120) configured to specify a plurality of change points in a change trend of the states from the time-series data (111) and specify each of a plurality of time windows as one of a plurality of operating modes in the target system, and an operation-set generation unit (130) configured to extract, for each of the plurality of time windows, a set of operations performed at a time included in that time window from the operation information (112), generate an operating-mode operation set (113) in which the operating modes corresponding to the respective time windows are associated with the extracted set of operations, and store the generated operating-mode operation set (113) in the storage unit (110).

Type: Grant
Filed: February 1, 2019
Date of Patent: August 29, 2023
Assignee: NEC CORPORATION
Inventors: Ryota Higa, Junya Kato
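The pipeline in this abstract can be illustrated end to end: detect change points in the measured state series, treat the windows between them as operating modes, and group operator actions by window. Thresholding the first difference is an illustrative change-point rule, not the patented one.

```python
# Toy sketch: change points -> time windows (operating modes) -> operation sets.

def change_points(series, threshold):
    return [i for i in range(1, len(series))
            if abs(series[i] - series[i - 1]) > threshold]

def operation_sets(series, operations, threshold):
    """operations: list of (time_index, op_name)."""
    cps = change_points(series, threshold)
    bounds = [0] + cps + [len(series)]
    windows = list(zip(bounds[:-1], bounds[1:]))  # [start, end) per mode
    return {mode: sorted({op for t, op in operations if start <= t < end})
            for mode, (start, end) in enumerate(windows)}

series = [1.0, 1.1, 5.0, 5.1, 5.2, 1.0]
ops = [(0, "warmup"), (2, "open_valve"), (3, "hold"), (5, "shutdown")]
modes = operation_sets(series, ops, threshold=2.0)
```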
-
Publication number: 20230030599

Abstract: An input unit 81 receives an input of information on a container to be loaded, the loading status of a freight car, and a container arrival prediction. A loading position determination unit 82 determines a loading position of the container to be loaded on a freight car based on a policy function, trained on past loading results or loading plans, that calculates a selection probability for each candidate loading position given the loading status of the freight car, and a value function that calculates a value for the loading status of the freight car. The loading position determination unit 82 then determines the loading position of the container based on the value function calculated from the container arrival prediction together with the policy function.

Type: Application
Filed: January 20, 2020
Publication date: February 2, 2023
Applicant: NEC Corporation
Inventor: Ryota HIGA
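The combination of a policy function and a value function can be sketched minimally. Both functions below are invented stand-ins for the trained models, and the scoring combination is an illustrative choice.

```python
# Minimal sketch: policy probabilities over candidate positions, weighted by
# a value function that accounts for predicted container arrivals.

def policy(status, candidates):
    """Toy policy: prefer lower positions; returns selection probabilities."""
    scores = [1.0 / (1 + pos) for pos in candidates]
    total = sum(scores)
    return [s / total for s in scores]

def value(status, pos, arrival_prediction):
    """Toy value: penalize positions that conflict with predicted arrivals."""
    return -1.0 if pos in arrival_prediction.get("reserved", []) else 1.0

def choose_position(status, candidates, arrival_prediction):
    probs = policy(status, candidates)
    # Combine policy probability with the value of the resulting state.
    scored = [(p * value(status, pos, arrival_prediction), pos)
              for p, pos in zip(probs, candidates)]
    return max(scored)[1]

pos = choose_position(status={}, candidates=[0, 1, 2],
                      arrival_prediction={"reserved": [0]})
```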
-
Patent number: 11494247

Abstract: A model generation apparatus (2000) acquires component failure data in which a usage status is associated with a failure record of a component. The model generation apparatus (2000) generates, for each of a plurality of component groups, a prediction model for predicting the number of failures of each component included in the component group by using the component failure data relating to the components belonging to the component group. The prediction model computes a prediction value of the total number of failures of the components belonging to a corresponding component group from the usage status, and computes a prediction value of the number of failures of each component belonging to the component group from the computed prediction value of the total number of failures.

Type: Grant
Filed: September 6, 2019
Date of Patent: November 8, 2022
Assignee: NEC CORPORATION
Inventor: Ryota Higa
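The two-stage structure (group total first, then per-component split) can be demonstrated with a toy model. The linear total model and the proportional-share rule are assumptions for illustration only.

```python
# Illustrative two-stage failure prediction: usage -> group total -> per part.

def predict_total_failures(usage_hours, rate_per_hour):
    """Stage 1: predict the group's total failures from usage status."""
    return usage_hours * rate_per_hour

def predict_per_component(total, failure_history):
    """Stage 2: allocate the predicted total by historical failure share."""
    past_total = sum(failure_history.values())
    return {part: total * count / past_total
            for part, count in failure_history.items()}

total = predict_total_failures(usage_hours=1000, rate_per_hour=0.01)
per_part = predict_per_component(total, {"bearing": 6, "seal": 3, "belt": 1})
```

Chaining the stages keeps per-component predictions consistent with the group total by construction, which is the property the abstract emphasizes.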
-
Publication number: 20220318029

Abstract: An operation support apparatus (100) includes a storage unit (110) configured to store time-series data (111) obtained by measuring states of a target system controlled according to a plurality of operations performed by an operator, and operation information (112), the operation information (112) being a set of at least one of the plurality of operations and a time, a specification unit (120) configured to specify a plurality of change points in a change trend of the states from the time-series data (111), and specify each of a plurality of time windows as one of a plurality of operating modes in the target system, the plurality of time windows being separated at at least one of the plurality of change points, and an operation-set generation unit (130) configured to extract, for each of the plurality of time windows, a set of operations performed at a time included in that time window from the operation information (112), and generate an operating-mode operation set (113) in which the operating modes corresponding to the respective time windows are associated with the extracted set of operations.

Type: Application
Filed: February 1, 2019
Publication date: October 6, 2022
Applicant: NEC Corporation
Inventors: Ryota HIGA, Junya KATO
-
Publication number: 20220036122

Abstract: An object of the present disclosure is to utilize a model adapted to a predetermined system and efficiently adapt the model to another system with an environment or an agent similar to those of the predetermined system. An information processing apparatus (1) according to the present disclosure includes a generation unit (11) configured to correct a first model, adapted to a first system operated based on a first condition including a specific environment and a specific agent, using a correction model to thereby generate a second model, and an adaptation unit (12) configured to adapt the second model to a second system operated based on a second condition, the second condition being partially different from the first condition.

Type: Application
Filed: September 27, 2018
Publication date: February 3, 2022
Applicant: NEC Corporation
Inventor: Ryota HIGA
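The generation step, composing a correction model with an existing model rather than retraining from scratch, can be sketched conceptually. The affine correction below is an illustrative choice, not the patented correction model.

```python
# Conceptual sketch: second model = correction model applied to the first model.

def first_model(x):
    """Model adapted to the first system (assumed given)."""
    return 2.0 * x + 1.0

def make_second_model(base_model, scale, offset):
    """Generate the second model by correcting the first model's output."""
    return lambda x: scale * base_model(x) + offset

# Suppose the second system's response is slightly rescaled and shifted.
second_model = make_second_model(first_model, scale=1.1, offset=-0.2)
y = second_model(2.0)
```

Only the small correction (here, two scalars) needs fitting on the second system, which is where the claimed efficiency comes from.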
-
Publication number: 20220012540

Abstract: The learning device 80 includes an input unit 81 and an imitation learning unit 82. The input unit 81 receives input of a type of a reward function. The imitation learning unit 82 learns a policy by imitation learning based on training data. The imitation learning unit 82 learns the reward function according to the type by the imitation learning, based on a form defined depending on the type.

Type: Application
Filed: December 7, 2018
Publication date: January 13, 2022
Applicant: NEC Corporation
Inventor: Ryota HIGA
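The key idea, that the reward function's *type* (functional form) is an input and only its parameters are learned, can be shown schematically. The trivial least-squares fit below is a stand-in for the imitation-learning procedure.

```python
# Schematic sketch: fit reward parameters within a caller-chosen functional form.

def make_reward(kind, w):
    if kind == "linear":
        return lambda s: w * s
    if kind == "quadratic":
        return lambda s: w * s * s
    raise ValueError("unknown reward type: " + kind)

def fit_reward(kind, data):
    """data: list of (state, observed_reward); fit w by least squares."""
    feats = [((s if kind == "linear" else s * s), r) for s, r in data]
    num = sum(f * r for f, r in feats)
    den = sum(f * f for f, _ in feats)
    return make_reward(kind, num / den)

reward = fit_reward("quadratic", [(1.0, 2.0), (2.0, 8.0), (3.0, 18.0)])
```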
-
Publication number: 20210398019

Abstract: A learning device 80 is a learning device for learning a model applied to a device that performs processing using a specific model, and includes an input unit 81 and an imitation learning unit 82. The input unit 81 receives input of a functional form of a reward. The imitation learning unit 82 learns a policy by imitation learning based on training data. The imitation learning unit 82 learns a reward function depending on the input functional form of the reward by the imitation learning.

Type: Application
Filed: December 7, 2018
Publication date: December 23, 2021
Applicant: NEC Corporation
Inventor: Ryota HIGA
-
Publication number: 20210318921

Abstract: A model generation apparatus (2000) acquires component failure data in which a usage status is associated with a failure record of a component. The model generation apparatus (2000) generates, for each of a plurality of component groups, a prediction model for predicting the number of failures of each component included in the component group by using the component failure data relating to the components belonging to the component group. The prediction model computes a prediction value of the total number of failures of the components belonging to a corresponding component group from the usage status, and computes a prediction value of the number of failures of each component belonging to the component group from the computed prediction value of the total number of failures.

Type: Application
Filed: September 6, 2019
Publication date: October 14, 2021
Applicant: NEC Corporation
Inventor: Ryota HIGA
-
Publication number: 20210264307

Abstract: A model setting unit 81 sets, as a problem setting to be targeted in reinforcement learning, a model in which a policy for determining an action to be taken in an environmental state is associated with a Boltzmann distribution representing a probability distribution of a prescribed state, and a reward function for determining a reward obtainable from an environmental state and an action selected in the state is associated with a physical equation representing a physical quantity corresponding to an energy. A parameter estimation unit 82 estimates parameters of the physical equation by performing the reinforcement learning using training data including the state based on the set model. A difference detection unit 83 detects differences between previously estimated parameters of the physical equation and newly estimated parameters of the physical equation.

Type: Application
Filed: June 26, 2018
Publication date: August 26, 2021
Applicant: NEC Corporation
Inventor: Ryota HIGA
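Two ingredients of this setup can be illustrated in isolation: a Boltzmann (softmax) policy over action energies, and drift detection as a comparison between previously and newly estimated physical parameters. The energies, parameter names, and tolerance are illustrative assumptions.

```python
# Toy sketch: Boltzmann policy over energies + parameter-drift detection.
import math

def boltzmann_policy(energies, temperature):
    """Softmax over negative energies: low-energy actions are more probable."""
    weights = [math.exp(-e / temperature) for e in energies]
    total = sum(weights)
    return [w / total for w in weights]

def detect_drift(old_params, new_params, tolerance):
    """Flag parameters whose estimate moved by more than `tolerance`."""
    return {name for name in old_params
            if abs(old_params[name] - new_params[name]) > tolerance}

probs = boltzmann_policy([0.0, 1.0, 2.0], temperature=1.0)
drift = detect_drift({"mass": 1.0, "damping": 0.5},
                     {"mass": 1.02, "damping": 0.9}, tolerance=0.1)
```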
-
Publication number: 20210201138

Abstract: A model setting unit 81 sets, as a problem setting to be targeted in reinforcement learning, a model in which a policy for determining an action to be taken in an environmental state is associated with a Boltzmann distribution representing a probability distribution of a prescribed state, and a reward function for determining a reward obtainable from an environmental state and an action selected in the state is associated with a physical equation representing a physical quantity corresponding to an energy. A parameter estimation unit 82 estimates parameters of the physical equation by performing the reinforcement learning using learning data including the state based on the set model.

Type: Application
Filed: May 25, 2018
Publication date: July 1, 2021
Applicant: NEC Corporation
Inventor: Ryota HIGA
-
Publication number: 20210042584

Abstract: An information processing device (2000) includes an acquisition unit (2020) and a learning unit (2040). The acquisition unit (2020) acquires one or more pieces of action data. The action data are data each piece of which associates a state vector representing a state of an environment with an action that is performed in a state represented by the state vector. The learning unit (2040) generates a policy function P and a reward function r through imitation learning using the acquired action data. The reward function r outputs, when given a state vector S as input, a reward r(S) that is acquired in a state represented by the state vector S. The policy function accepts, as input, an output r(S) of the reward function upon input of a state vector S, and outputs an action a = P(r(S)) to be performed in a state represented by the state vector S.

Type: Application
Filed: January 30, 2018
Publication date: February 11, 2021
Applicant: NEC CORPORATION
Inventors: Ryota HIGA, Itaru NISHIOKA
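The composition highlighted here, a policy that consumes the reward function's output rather than the raw state, a = P(r(S)), can be sketched directly. Both functions below are hand-written stand-ins for the learned ones.

```python
# Sketch of the composed form a = P(r(S)) from the abstract.

def reward(state_vector):
    """Stand-in for learned r(S): a toy weighted sum of state features."""
    return sum(s * w for s, w in zip(state_vector, [0.5, -1.0]))

def policy(r_value):
    """Stand-in for learned P: chooses an action from the reward signal alone."""
    return "advance" if r_value > 0 else "retreat"

action = policy(reward([4.0, 1.0]))  # r(S) = 4*0.5 - 1*1.0 = 1.0
```

Because P sees only r(S), the reward function acts as a learned bottleneck between state and action, which is the structural point of the claim.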