Patents by Inventor Riu HIRAI

Riu HIRAI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230401071
    Abstract: An information processing terminal includes a processor, a memory, a communication unit, a container reception module, a container execution module, and a sensor. The container reception module receives a container image via the communication unit. When the container image is received, the container execution module boots the container and executes the application contained in the container image. The application is booted according to the recognition status reported by the sensor and performs predetermined processing on the sensing data acquired from the sensor.
    Type: Application
    Filed: October 20, 2021
    Publication date: December 14, 2023
    Applicant: Hitachi, Ltd.
    Inventors: Riu HIRAI, Keiki NAKAMURA, Tasuku SHIMADA
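
A minimal Python sketch of the container-triggered execution flow described in the abstract above. The container image is modeled as a plain callable and the recognition status as a boolean; these names and the in-process "boot" are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ContainerImage:
    name: str
    app: Callable[[bytes], bytes]          # the application packaged in the image

class ExecutionModule:
    """Stands in for the container reception and execution modules."""
    def __init__(self) -> None:
        self.image: Optional[ContainerImage] = None

    def receive(self, image: ContainerImage) -> None:
        # container reception module: store the image delivered via the communication unit
        self.image = image

    def on_sensor(self, recognized: bool, sensing_data: bytes) -> Optional[bytes]:
        # boot the application only when the sensor reports a recognition event
        if self.image is None or not recognized:
            return None
        return self.image.app(sensing_data)

if __name__ == "__main__":
    module = ExecutionModule()
    module.receive(ContainerImage("edge-app", lambda d: d.upper()))
    print(module.on_sensor(recognized=True, sensing_data=b"object detected"))
```
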
  • Publication number: 20230097594
    Abstract: An information processing device executes a DNN computation by a neural network including a plurality of layers. The information processing device executes a computation process corresponding to a given layer in the neural network, on a first area and on a second area different from the first area, the first and second areas being included in a feature map inputted to the neural network. The information processing device synthesizes a result of the computation process on the first area and a result of the computation process on the second area, and outputs the synthesized computation process results, as a result of the computation process on the feature map.
    Type: Application
    Filed: March 12, 2021
    Publication date: March 30, 2023
    Applicant: Hitachi Astemo, Ltd.
    Inventors: Riu HIRAI, Hiroaki ITO, Hiroo UCHIDA, Goichi ONO, Tadashi KISHIMOTO
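
A minimal sketch of the split-and-synthesize idea in the abstract above: the layer computation runs separately on two areas of the feature map and the partial results are concatenated. A 1x1 (pointwise) layer is assumed here because it splits exactly; spatial kernels would need the boundary handling the patent presumably addresses.

```python
import numpy as np

def pointwise_layer(fmap: np.ndarray, weight: np.ndarray) -> np.ndarray:
    # fmap: (H, W, C_in), weight: (C_in, C_out) -> (H, W, C_out)
    return fmap @ weight

def run_split(fmap: np.ndarray, weight: np.ndarray) -> np.ndarray:
    h = fmap.shape[0] // 2
    first, second = fmap[:h], fmap[h:]            # two different areas of the map
    out_first = pointwise_layer(first, weight)    # computation on the first area
    out_second = pointwise_layer(second, weight)  # computation on the second area
    return np.concatenate([out_first, out_second], axis=0)  # synthesize the results

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fmap, weight = rng.normal(size=(8, 8, 4)), rng.normal(size=(4, 16))
    print(np.allclose(run_split(fmap, weight), pointwise_layer(fmap, weight)))  # True
```
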
  • Patent number: 11427141
    Abstract: A computing system includes a server, and an on-vehicle device mounted on a vehicle. The server includes: a server storage part containing a learned model; a simplification part which generates contraction information for determining calculation precision by using the learned model and an object of inference; and a server communication part for transmitting the contraction information to the on-vehicle device. The on-vehicle device includes: a signal input part to which an output signal from a sensor mounted on the vehicle is inputted; an on-vehicle communication part for receiving the contraction information; an inference part for making an inference on the output signal; a reconfigurable logic circuit; and a reconfiguration part for configuring the inference part in the logic circuit based on the contraction information.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: August 30, 2022
    Assignee: HITACHI ASTEMO, LTD.
    Inventors: Riu Hirai, Goichi Ono, Taisuke Ueta
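
A minimal sketch of the server/on-vehicle split described in the granted patent above, assuming the "contraction information" can be reduced to a per-layer bit width. The real system configures a reconfigurable logic circuit; plain Python can only mimic the precision choice.

```python
import numpy as np

def make_contraction_info(weights):
    # server side: choose a smaller bit width for layers with a narrow value range
    return {name: (4 if np.abs(w).max() < 1.0 else 8) for name, w in weights.items()}

def quantize(x, bits):
    # on-vehicle side: run the layer at the precision the server requested
    scale = (2 ** (bits - 1) - 1) / max(float(np.abs(x).max()), 1e-12)
    return np.round(x * scale) / scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = {"conv1": rng.normal(0.0, 0.1, (3, 3)), "fc": rng.normal(0.0, 2.0, (4, 4))}
    info = make_contraction_info(model)              # transmitted to the vehicle
    quantized = {name: quantize(w, info[name]) for name, w in model.items()}
    print(info, quantized["conv1"].round(3))
```
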
  • Patent number: 11388223
    Abstract: Learning accuracy is improved while avoiding the transfer of a dataset from an edge terminal to a cloud server. A management device accessible to a target object to be managed has a processor executing a program, a storage device storing the program, and a communication interface communicable with the target object. The processor executes: a reception process for receiving first environmental information representing a first environment of the target object; a first generation process for generating relevant information representing the relevancy between the first environmental information and second environmental information representing a second environment of the target object; a second generation process for generating, based on the relevant information and a second learned model for inference by the target object in the second environment, a first learned model for inference by the target object in the first environment; and a transmission process for transmitting the first learned model to the target object.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: July 12, 2022
    Assignee: HITACHI, LTD.
    Inventors: Riu Hirai, Goichi Ono, Daisuke Ishii, Yuji Ogata
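
A minimal sketch of the idea in the granted patent above, under the assumption that an environment is summarized as a feature vector and a learned model as a weight vector; the cosine similarity standing in for the "relevant information" and the relevance-weighted blend are illustrative choices, not the patented method.

```python
import numpy as np

def relevance(env_a, env_b):
    # relevant information: cosine similarity between two environment summaries
    return float(env_a @ env_b / (np.linalg.norm(env_a) * np.linalg.norm(env_b)))

def generate_first_model(env1, env_models):
    # env_models: (environment vector, learned model) pairs from other environments.
    # Blend their models according to relevance, so no raw dataset has to be
    # transferred from the edge terminal to the cloud server.
    weights = np.array([max(relevance(env1, e), 0.0) for e, _ in env_models]) + 1e-9
    weights /= weights.sum()
    return sum(w * m for w, (_, m) in zip(weights, env_models))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    env1 = rng.normal(size=4)
    others = [(rng.normal(size=4), rng.normal(size=8)) for _ in range(2)]
    print(generate_first_model(env1, others))
```
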
  • Patent number: 11343309
    Abstract: Provided is a server load prediction system that predicts the load placed on a server connected to apparatuses installed in a production process. The system includes: an apparatus requirement specification storage unit that stores the requirement specification each apparatus imposes on a server; a server specification storage unit that stores a server specification indicating the capability of each server; an input information creation unit that receives an input of a calculation condition and creates the input parameters required to execute a simulation for calculating the server load; a server load calculation unit that calculates, by executing the simulation, the server load caused by the apparatuses used in the process designated under the calculation condition; and an output unit that outputs the calculation result.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: May 24, 2022
    Assignee: HITACHI, LTD.
    Inventors: Eriko Takeda, Daisuke Ishii, Takumi Oishi, Kunihiko Toumura, Riu Hirai
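
A minimal sketch of the load calculation in the granted patent above, with the requirement specification reduced to a CPU demand per apparatus and the server specification to a single capacity figure; the real system runs a simulation rather than this closed-form sum, and all numbers here are made up.

```python
# hypothetical requirement specifications: CPU demand each apparatus places on a server
APPARATUS_SPEC = {"press": 12.0, "welder": 8.5, "inspector": 20.0}
# hypothetical server specification: CPU capacity held by each server
SERVER_SPEC = {"server-A": 100.0}

def predict_load(process_apparatuses, server):
    # calculation condition: which apparatuses the designated process uses
    demand = sum(APPARATUS_SPEC[a] for a in process_apparatuses)
    return demand / SERVER_SPEC[server]          # predicted utilization, 0.0-1.0+

if __name__ == "__main__":
    print(f"predicted load: {predict_load(['press', 'inspector'], 'server-A'):.0%}")
```
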
  • Publication number: 20220122290
    Abstract: Provided are: an extraction unit that extracts, based on image information including respective images of a plurality of objects captured in the front direction of a worker, a plurality of sets of combinations of planes and coordinates corresponding to the objects; a plane selection unit that selects a designated set from among the sets of combinations of planes and coordinates extracted by the extraction unit, according to a rule in which a priority is defined for the plane corresponding to each object; and an analysis unit that calculates a gaze point coordinate indicating the gaze point of the worker, based on visual line information indicating the visual line position of the worker and on information on the combination of a plane and a coordinate belonging to the designated set selected by the plane selection unit.
    Type: Application
    Filed: October 13, 2021
    Publication date: April 21, 2022
    Inventors: Taku Kumon, Riu Hirai, Keiki Nakamura
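
A minimal sketch of the plane-selection step in the application above, assuming a "plane" can be reduced to a label plus one 2D coordinate and that the priority rule is a fixed ranking; the real analysis unit works from visual line information in 3D, which this sketch does not model.

```python
import math

# hypothetical priority rule: lower value means higher priority
PRIORITY = {"monitor": 0, "workbench": 1, "floor": 2}

def gaze_point(candidates, visual_line):
    # candidates: (plane label, 2D coordinate) pairs extracted from the images
    best_rank = min(PRIORITY[plane] for plane, _ in candidates)        # plane selection
    selected = [c for c in candidates if PRIORITY[c[0]] == best_rank]  # designated set
    # analysis: take the candidate closest to the measured visual line position
    return min(selected, key=lambda c: math.dist(c[1], visual_line))[1]

if __name__ == "__main__":
    cands = [("floor", (0.0, 0.0)), ("monitor", (1.2, 0.8)), ("monitor", (2.0, 2.0))]
    print(gaze_point(cands, visual_line=(1.0, 1.0)))
```
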
  • Publication number: 20220051163
    Abstract: A work support apparatus holds values indicating information on at least one of an apparatus used for the work and the worker, a work-change recognition determination algorithm, and the work support content corresponding to each determination result. The apparatus applies the values indicating information on at least one of the apparatus and the worker to the determination algorithm, determines whether or not the worker recognizes a work change indicating at least one of a change in the work process and a transition between a normal state and an abnormal state of the work, and outputs, to an output device, the work support content corresponding to the determination result.
    Type: Application
    Filed: July 30, 2021
    Publication date: February 17, 2022
    Applicant: HITACHI, LTD.
    Inventors: Taku KUMON, Riu HIRAI
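
A minimal sketch of the determination-and-output flow in the application above. The reaction-time threshold standing in for the work change recognition determination algorithm, and the string support contents, are illustrative assumptions only.

```python
# hypothetical support contents keyed by the determination result
SUPPORT_CONTENT = {True: "Proceed to the next step.",
                   False: "Alert: the work process has changed. Please confirm."}

def worker_recognized_change(reaction_time_s: float, threshold_s: float = 2.0) -> bool:
    # determination algorithm: a slow reaction suggests the change went unnoticed
    return reaction_time_s <= threshold_s

def support_output(reaction_time_s: float) -> str:
    # output the support content corresponding to the determination result
    return SUPPORT_CONTENT[worker_recognized_change(reaction_time_s)]

if __name__ == "__main__":
    print(support_output(reaction_time_s=3.4))
```
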
  • Publication number: 20220051030
    Abstract: An information processing device includes a DNN operation unit that executes a DNN operation by a neural network including plural layers and a weight storage unit that stores a weight used in the DNN operation, and the DNN operation unit specifies data having a value larger than a predetermined threshold as operation target data among data input to a predetermined layer of the neural network, acquires a weight corresponding to the operation target data from the weight storage unit, and executes an operation of the predetermined layer based on the operation target data and the weight acquired from the weight storage unit.
    Type: Application
    Filed: December 10, 2019
    Publication date: February 17, 2022
    Inventors: Riu HIRAI, Goichi ONO, Hiroaki ITO
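
A minimal sketch of the threshold-based operation skipping described in the application above, assuming a fully connected layer and an in-memory weight matrix as the weight storage unit. Only inputs whose magnitude exceeds the threshold are kept as operation target data, and only their corresponding weight rows are fetched.

```python
import numpy as np

def sparse_layer(x: np.ndarray, weights: np.ndarray, threshold: float) -> np.ndarray:
    idx = np.flatnonzero(np.abs(x) > threshold)   # operation target data only
    # fetch only the weight rows that correspond to the retained inputs
    return x[idx] @ weights[idx, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, w = rng.normal(size=64), rng.normal(size=(64, 16))
    dense, sparse = x @ w, sparse_layer(x, w, threshold=0.5)
    print("max abs error from skipping:", np.abs(dense - sparse).max())
```
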
  • Publication number: 20220036190
    Abstract: Processing time of a neural network is shortened, and the number of operations of the neural network is reduced such that a plurality of calculators can be used effectively. A neural network reduction device (100) reduces the number of operations performed on the neural network by an operation device (140) that includes a plurality of calculators. The neural network reduction device includes: a calculator allocation unit (102) that sets the number of calculators allocated to the calculation processing of the neural network; a number-of-operations setting unit (103) that sets the number of operations of the reduced neural network based on the number of allocated calculators; and a neural network reduction unit (104) that reduces the neural network such that the number of operations performed by the operation device (140) equals the number set by the number-of-operations setting unit (103).
    Type: Application
    Filed: January 8, 2020
    Publication date: February 3, 2022
    Applicant: Hitachi Astemo, Ltd.
    Inventors: Hiroaki ITO, Goichi ONO, Riu HIRAI
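
A minimal sketch of the idea in the application above, with channel pruning standing in for "reducing the neural network" and the operation count rounded to a multiple of the calculator count so every calculator stays busy; the pruning criterion is an illustrative assumption.

```python
import numpy as np

def reduced_channel_count(requested: int, n_calculators: int) -> int:
    # number-of-operations setting: largest multiple of the calculator count
    return max(n_calculators, (requested // n_calculators) * n_calculators)

def prune_channels(weight: np.ndarray, keep: int) -> np.ndarray:
    # neural network reduction: keep the output channels with the largest L1 norm
    # (weight: C_out x C_in)
    order = np.argsort(-np.abs(weight).sum(axis=1))
    return weight[np.sort(order[:keep])]

if __name__ == "__main__":
    w = np.random.default_rng(0).normal(size=(70, 32))
    keep = reduced_channel_count(requested=70, n_calculators=16)   # -> 64
    print(prune_channels(w, keep).shape)
```
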
  • Publication number: 20210306411
    Abstract: Provided is a server load prediction system that predicts the load placed on a server connected to apparatuses installed in a production process. The system includes: an apparatus requirement specification storage unit that stores the requirement specification each apparatus imposes on a server; a server specification storage unit that stores a server specification indicating the capability of each server; an input information creation unit that receives an input of a calculation condition and creates the input parameters required to execute a simulation for calculating the server load; a server load calculation unit that calculates, by executing the simulation, the server load caused by the apparatuses used in the process designated under the calculation condition; and an output unit that outputs the calculation result.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 30, 2021
    Applicant: Hitachi, Ltd.
    Inventors: Eriko Takeda, Daisuke Ishii, Takumi Oishi, Kunihiko Toumura, Riu Hirai
  • Publication number: 20210170962
    Abstract: A computing system includes a server, and an on-vehicle device mounted on a vehicle. The server includes: a server storage part containing a learned model; a simplification part which generates contraction information for determining calculation precision by using the learned model and an object of inference; and a server communication part for transmitting the contraction information to the on-vehicle device. The on-vehicle device includes: a signal input part to which an output signal from a sensor mounted on the vehicle is inputted; an on-vehicle communication part for receiving the contraction information; an inference part for making an inference on the output signal; a reconfigurable logic circuit; and a reconfiguration part for configuring the inference part in the logic circuit based on the contraction information.
    Type: Application
    Filed: December 5, 2018
    Publication date: June 10, 2021
    Applicant: HITACHI AUTOMOTIVE SYSTEMS, LTD.
    Inventors: Riu HIRAI, Goichi ONO, Taisuke UETA
  • Publication number: 20210037084
    Abstract: Learning accuracy is improved while avoiding the transfer of a dataset from an edge terminal to a cloud server. A management device accessible to a target object to be managed has a processor executing a program, a storage device storing the program, and a communication interface communicable with the target object. The processor executes: a reception process for receiving first environmental information representing a first environment of the target object; a first generation process for generating relevant information representing the relevancy between the first environmental information and second environmental information representing a second environment of the target object; a second generation process for generating, based on the relevant information and a second learned model for inference by the target object in the second environment, a first learned model for inference by the target object in the first environment; and a transmission process for transmitting the first learned model to the target object.
    Type: Application
    Filed: July 23, 2020
    Publication date: February 4, 2021
    Inventors: Riu HIRAI, Goichi ONO, Daisuke ISHII, Yuji OGATA
  • Patent number: 10014956
    Abstract: Provided is an optical receiver module which includes a conversion unit which converts an input optical signal to an electrical signal, an amplification unit which amplifies the electrical signal and outputs an amplified signal, a reception unit which directly or indirectly receives the amplified signal, and an offsetting unit which offsets the electrical signal such that a difference between a center of an intensity width of the electrical signal and a center of an intensity range of a signal capable of being received by the reception unit becomes small.
    Type: Grant
    Filed: December 1, 2016
    Date of Patent: July 3, 2018
    Assignee: Oclaro Japan, Inc.
    Inventor: Riu Hirai
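
A minimal sketch of the offsetting idea in the granted patent above, assuming the electrical signal and the receivable range are each described only by minimum and maximum intensity levels; the returned value is the shift that aligns their centers.

```python
def offset_to_center(sig_min: float, sig_max: float,
                     rx_min: float, rx_max: float) -> float:
    signal_center = (sig_min + sig_max) / 2.0   # center of the signal's intensity width
    rx_center = (rx_min + rx_max) / 2.0         # center of the receivable intensity range
    return rx_center - signal_center            # shift that makes their difference small

if __name__ == "__main__":
    # e.g. a 0.1-0.7 V signal offset into a receiver that accepts 0.0-1.0 V
    print(offset_to_center(0.1, 0.7, 0.0, 1.0))   # -> 0.1
```
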
  • Publication number: 20170180057
    Abstract: Provided is an optical receiver module which includes a conversion unit which converts an input optical signal to an electrical signal, an amplification unit which amplifies the electrical signal and outputs an amplified signal, a reception unit which directly or indirectly receives the amplified signal, and an offsetting unit which offsets the electrical signal such that a difference between a center of an intensity width of the electrical signal and a center of an intensity range of a signal capable of being received by the reception unit becomes small.
    Type: Application
    Filed: December 1, 2016
    Publication date: June 22, 2017
    Inventor: Riu HIRAI
  • Patent number: 8520196
    Abstract: Bending of an optical fiber, at which heat may be generated by high output power, can be detected without using a dedicated light source. An optical communication module that outputs a continuous wave light generated by at least one light source to an optical fiber transmission line includes: (1) a loss measurement unit that measures a loss of the amplified spontaneous emission (ASE) generated by allowing the continuous wave light output from the light source to create stimulated Raman scattering in the optical fiber transmission line; (2) a fiber abnormality analyzer that detects an abnormal state of the optical fiber transmission line on the basis of the ASE loss information measured by the loss measurement unit; and (3) a light source controller that controls the supply state of the continuous wave light from the light source on the basis of the detection result of the fiber abnormality analyzer.
    Type: Grant
    Filed: February 1, 2012
    Date of Patent: August 27, 2013
    Assignee: Hitachi, Ltd.
    Inventors: Riu Hirai, Nobuhiko Kikuchi, Tetsuya Uda, Shinya Sasaki
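
A minimal sketch of the decision flow in the granted patent above, assuming the measured ASE loss arrives as a single dB value and that a fixed threshold stands in for the fiber abnormality analyzer; the actual analyzer and light source controller are not specified at this level of detail.

```python
def fiber_abnormal(ase_loss_db: float, threshold_db: float = 3.0) -> bool:
    # fiber abnormality analyzer: an unusually large ASE loss suggests the fiber
    # is bent and may be heating up
    return ase_loss_db > threshold_db

def control_light_source(ase_loss_db: float) -> str:
    # light source controller: stop supplying the continuous wave light on abnormality
    return "shutdown" if fiber_abnormal(ase_loss_db) else "keep transmitting"

if __name__ == "__main__":
    print(control_light_source(ase_loss_db=4.2))
```
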
  • Publication number: 20120224168
    Abstract: Bending of an optical fiber, at which heat may be generated by high output power, can be detected without using a dedicated light source. An optical communication module that outputs a continuous wave light generated by at least one light source to an optical fiber transmission line includes: (1) a loss measurement unit that measures a loss of the amplified spontaneous emission (ASE) generated by allowing the continuous wave light output from the light source to create stimulated Raman scattering in the optical fiber transmission line; (2) a fiber abnormality analyzer that detects an abnormal state of the optical fiber transmission line on the basis of the ASE loss information measured by the loss measurement unit; and (3) a light source controller that controls the supply state of the continuous wave light from the light source on the basis of the detection result of the fiber abnormality analyzer.
    Type: Application
    Filed: February 1, 2012
    Publication date: September 6, 2012
    Applicant: HITACHI, LTD.
    Inventors: Riu HIRAI, Nobuhiko KIKUCHI, Tetsuya UDA, Shinya SASAKI