Patents by Inventor Mitsuo Hayasaka

Mitsuo Hayasaka has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11977487
    Abstract: A management node manages a distributed file and object storage that makes files available to an application, including files managed by the storage of another site. The management node includes a processor configured to determine the access pattern of the files used by an application and, based on that pattern, control how the local site's distributed file and object storage caches the files managed by the other site's storage before the application is executed. (A simplified sketch of this caching decision follows this entry.)
    Type: Grant
    Filed: September 2, 2022
    Date of Patent: May 7, 2024
    Assignee: HITACHI, LTD.
    Inventors: Shimpei Nomura, Mitsuo Hayasaka
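    Sketch (illustrative only, not taken from the patent): a minimal Python rendering of the pre-execution caching decision, assuming a hypothetical access-history record and a threshold on past accesses; AccessRecord and plan_prefetch are invented names.
      # Decide which remote-site files to cache locally before an application runs,
      # based on that application's past access pattern.
      from dataclasses import dataclass

      @dataclass
      class AccessRecord:
          path: str          # file path as seen by the application
          site: str          # site whose storage manages the file
          access_count: int  # how often the application has read the file

      def plan_prefetch(history: list[AccessRecord], local_site: str,
                        min_accesses: int = 3) -> list[str]:
          """Return remote-site files worth caching before the application starts."""
          return [r.path for r in history
                  if r.site != local_site and r.access_count >= min_accesses]

      # Files on site-B that the app reads repeatedly get cached at site-A.
      history = [AccessRecord("/data/train.csv", "site-B", 12),
                 AccessRecord("/data/tmp.log", "site-B", 1),
                 AccessRecord("/data/model.bin", "site-A", 40)]
      print(plan_prefetch(history, local_site="site-A"))  # ['/data/train.csv']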
  • Publication number: 20240104064
    Abstract: The present invention maintains availability and scales out file performance while suppressing costs. A unified storage has a plurality of controllers and a storage apparatus (storage device unit), and each controller is equipped with one or more main processors (CPU) and one or more channel adapters (FE-I/F). Each main processor runs a block storage control program to process data input to and output from the storage apparatus. Each channel adapter has a processor (CPU) that, after receiving an access request, transmits to and receives from a main processor. The processors in the plurality of channel adapters cooperate to run a distributed file system and distribute data written as files across the plurality of controllers.
    Type: Application
    Filed: March 1, 2023
    Publication date: March 28, 2024
    Applicant: Hitachi, Ltd.
    Inventors: Yuto KAMO, Mitsuo HAYASAKA, Norio SHIMOZONO
  • Publication number: 20240103934
    Abstract: A memory of an application platform stores, for each site, a performance model indicating the relationship between program performance and the amount of hardware resources needed to achieve that performance, and a power consumption model indicating the relationship between the resource allocation amount, that is, the amount allocated to the program, and the power consumed when the program is executed.
    Type: Application
    Filed: March 10, 2023
    Publication date: March 28, 2024
    Applicant: Hitachi, Ltd.
    Inventors: Shimpei NOMURA, Mitsuo HAYASAKA
  • Publication number: 20240103935
    Abstract: The CPU of the management node measures the power consumption of a computer node while having that node run a power measurement benchmark that exercises the hardware resource allocated to a program the node will execute, varying the amount of the resource used across runs. From the measurement results, the CPU generates a power consumption model representing the relationship between the amount of the resource allocated to the program and the power consumed. (A minimal model-fitting sketch follows this entry.)
    Type: Application
    Filed: March 13, 2023
    Publication date: March 28, 2024
    Applicant: Hitachi, Ltd.
    Inventors: Akio SHIMADA, Mitsuo HAYASAKA
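    Sketch (illustrative only, not taken from the patent): a least-squares line stands in for whatever model form the patent actually uses; the (resource amount, watts) samples are assumed to come from the benchmark runs described above.
      # Fit a power-consumption model from benchmark runs executed at different
      # resource allocations (here: CPU cores allocated to the program).
      def fit_power_model(samples: list[tuple[float, float]]) -> tuple[float, float]:
          """Least-squares fit of power_watts ~= a * resource_amount + b."""
          n = len(samples)
          sx = sum(x for x, _ in samples)
          sy = sum(y for _, y in samples)
          sxx = sum(x * x for x, _ in samples)
          sxy = sum(x * y for x, y in samples)
          a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
          b = (sy - a * sx) / n
          return a, b

      # Example measurements: (cores allocated during the benchmark, measured watts).
      samples = [(1, 55.0), (2, 68.0), (4, 93.0), (8, 142.0)]
      a, b = fit_power_model(samples)
      print(f"predicted watts at 6 cores: {a * 6 + b:.1f}")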
  • Publication number: 20230409401
    Abstract: A management node manages performance resource amount information indicating the relationship between the amount of hardware resources allocated to software executed on a predetermined node and the performance of that software, and includes a storage unit and a processor connected to the storage unit. The management node stores in the storage unit a performance model management table that associates each performance model with the execution environment in which an application of a computing node is executed, acquires operation information from which the performance of the application executed in that environment can be determined, and modifies the performance model corresponding to the execution environment based on the operation information. (A brief model-correction sketch follows this entry.)
    Type: Application
    Filed: March 10, 2023
    Publication date: December 21, 2023
    Applicant: Hitachi, Ltd.
    Inventors: Mitsuo Hayasaka, Kazumasa Matsubara, Akio Shimada
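    Sketch (illustrative only, not taken from the patent): one plausible way to correct a stored performance model from observed operation information, using exponential smoothing; the table layout and the smoothing factor are assumptions.
      # Keep a per-execution-environment performance model and nudge it toward
      # the performance actually observed when the application runs.
      performance_model_table = {
          # execution environment -> predicted throughput (requests/s) per core
          "env-gpu-a100": 120.0,
          "env-cpu-only": 35.0,
      }

      def correct_model(env: str, cores: int, observed_throughput: float,
                        alpha: float = 0.2) -> None:
          """Blend the observed per-core throughput into the stored model."""
          observed_per_core = observed_throughput / cores
          old = performance_model_table[env]
          performance_model_table[env] = (1 - alpha) * old + alpha * observed_per_core

      correct_model("env-cpu-only", cores=4, observed_throughput=160.0)
      print(performance_model_table["env-cpu-only"])  # 36.0: moved from 35.0 toward 40.0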
  • Patent number: 11842308
    Abstract: A computer system stores management information that manages a workflow together with a deletion flag, hidden from the user, that indicates deletion of data in the workflow. The system executes a workflow comprising one or more processes that convert input data into output data, and records in the management information a lineage of the executed workflow, including information about the input and output data. The system deletes data selected from the executed workflow and sets the deletion flag for that data in the management information. In response to an access to first data whose deletion flag is set, the system regenerates the first data based on the management information and clears its deletion flag. (A toy lineage-and-regeneration sketch follows this entry.)
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: December 12, 2023
    Assignee: Hitachi, Ltd.
    Inventors: Ken Nomura, Mitsuo Hayasaka, Mutsumi Hosoya
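    Sketch (illustrative only, not taken from the patent): a toy lineage store showing the delete-then-regenerate-on-access behaviour the abstract describes; class and method names are invented.
      # Record workflow lineage, mark intermediate data as deleted, and
      # regenerate it transparently when it is accessed again.
      class Lineage:
          def __init__(self):
              self.steps = {}       # output name -> (function, input names)
              self.data = {}        # name -> value (None once deleted)
              self.deleted = set()  # hidden "deletion flags"

          def run_step(self, func, inputs, output):
              self.steps[output] = (func, inputs)
              self.data[output] = func(*(self.read(i) for i in inputs))

          def delete(self, name):
              self.data[name] = None
              self.deleted.add(name)

          def read(self, name):
              if name in self.deleted:   # access to flagged data: regenerate it
                  func, inputs = self.steps[name]
                  self.data[name] = func(*(self.read(i) for i in inputs))
                  self.deleted.discard(name)
              return self.data[name]

      wf = Lineage()
      wf.data["raw"] = [3, 1, 2]
      wf.run_step(sorted, ["raw"], "sorted")
      wf.delete("sorted")            # reclaim space, keep only the lineage
      print(wf.read("sorted"))       # regenerated on access: [1, 2, 3]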
  • Publication number: 20230367477
    Abstract: Provided is a storage system that improves the effectiveness of data compression. When the data of a chunk in one content matches the data of a chunk in another content, a NAS collects those chunks into a duplicate chunk storage content, compresses it, and stores the compressed duplicate chunk storage content in a storage device. When such a match is found for a given content, a processor of the NAS head uses feature information on the chunks to identify the duplicate chunk storage content that already holds similar chunks, and writes the chunks to that content. (A small similarity-routing sketch follows this entry.)
    Type: Application
    Filed: September 13, 2022
    Publication date: November 16, 2023
    Applicant: Hitachi, Ltd.
    Inventors: Yuto KAMO, Mitsuo HAYASAKA
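    Sketch (illustrative only, not taken from the patent): routing a duplicated chunk to the duplicate-chunk storage content holding the most similar chunks; a byte histogram stands in for the patent's feature information.
      # Group similar chunks into the same container so that compressing the
      # container works on data that resembles itself.
      def feature(chunk: bytes) -> list[int]:
          """Crude similarity feature: a 16-bucket byte histogram (stand-in only)."""
          hist = [0] * 16
          for b in chunk:
              hist[b >> 4] += 1
          return hist

      def distance(f1: list[int], f2: list[int]) -> int:
          return sum(abs(a - b) for a, b in zip(f1, f2))

      def choose_container(chunk: bytes, containers: dict[str, list[list[int]]]) -> str:
          """Pick the duplicate-chunk storage content with the nearest stored feature."""
          f = feature(chunk)
          return min(containers,
                     key=lambda name: min(distance(f, g) for g in containers[name]))

      containers = {"dedup-A": [feature(b"aaaabbbb" * 64)],
                    "dedup-B": [feature(bytes(range(256)) * 2)]}
      print(choose_container(b"aaaabbbc" * 64, containers))  # 'dedup-A'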
  • Publication number: 20230367503
    Abstract: Utilization efficiency of storage devices in a computer system is improved. The computer system includes a distributed FS made up of a plurality of distributed FS servers that distribute and manage files, a plurality of compute servers that each execute a predetermined process using a PV provided by the distributed FS, and a management server that manages allocation of PVs to the compute servers. A processor of the management server determines whether the data in a PV is already protected by redundancy across the plurality of compute servers, and when it is, allocates to those compute servers a PV for which the distributed FS does not itself perform redundancy-based data protection. (A short sketch of this allocation decision follows this entry.)
    Type: Application
    Filed: September 1, 2022
    Publication date: November 16, 2023
    Applicant: Hitachi, Ltd.
    Inventors: Takayuki FUKATANI, Mitsuo HAYASAKA
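    Sketch (illustrative only, not taken from the patent): the core allocation decision in a few lines, with an invented volume-class naming scheme.
      # Avoid double data protection: if the compute servers already replicate
      # the data, give them a PV the distributed FS does not protect again.
      def choose_pv_class(compute_replicas: int) -> str:
          """Return the (hypothetical) volume class to request from the distributed FS."""
          if compute_replicas >= 2:
              # The compute layer keeps redundant copies, so storage-side
              # redundancy would only spend extra capacity.
              return "distributed-fs-unprotected"
          return "distributed-fs-protected"

      print(choose_pv_class(compute_replicas=3))  # 'distributed-fs-unprotected'
      print(choose_pv_class(compute_replicas=1))  # 'distributed-fs-protected'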
  • Publication number: 20230350723
    Abstract: A control server checks whether a resource necessary for process execution is present in a computer system. The control server determines that generating a placement plan is possible when a relative guarantee is possible for a reservation acceptance limit, the relative guarantee securing a guarantee volume that is at least an absolute resource volume and, where possible, a resource volume up to an upper limit volume, and it then starts generating the placement plan. From just after generation of the placement plan starts until just before the proposed placement plan is selected and approved, the control server makes a reservation with a relative guarantee in which, for the resource and the reservation acceptance limit, the guarantee volume and upper limit volume applied to the resource volume of the resource differ from the guarantee volume and upper limit volume applied to the reservation acceptance limit.
    Type: Application
    Filed: March 7, 2023
    Publication date: November 2, 2023
    Applicant: Hitachi, Ltd.
    Inventors: Ken NOMURA, Akio Shimada, Mitsuo Hayasaka
  • Publication number: 20230334072
    Abstract: A customer inputs a question sentence describing a problem to be resolved into an automatic question answering system, and the system answers it. A history of the conversation is recorded in the system as conversation history data. When the system fails to give a suitable answer in a question-and-answer session, it escalates the question to a support representative. In that case, the question sentences and the answer sentence with which the support representative resolved the problem are added to the question-answer pair data as new question-answer pairs, enhancing the accuracy of automatic question answering. (A small sketch of this feedback loop follows this entry.)
    Type: Application
    Filed: February 17, 2023
    Publication date: October 19, 2023
    Applicant: Hitachi, Ltd.
    Inventors: Keiichi MATSUZAWA, Mitsuo HAYASAKA
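    Sketch (illustrative only, not taken from the patent): the feedback loop that grows the question-answer pair data from escalated sessions; data shapes and the example content are assumptions.
      # When the bot escalates to a human, fold the human's resolving answer back
      # into the question-answer pairs used for future automatic answers.
      qa_pairs: list[tuple[str, str]] = [
          ("How do I reset my password?", "Use the 'Forgot password' link."),
      ]

      def record_escalation(customer_questions: list[str], human_answer: str) -> None:
          """Pair every question from the failed session with the resolving answer."""
          for question in customer_questions:
              qa_pairs.append((question, human_answer))

      record_escalation(
          ["My backup job hangs at 99%", "It still hangs after a restart"],
          "Apply the latest patch; the job stalls when the catalog volume is full.",
      )
      print(len(qa_pairs))  # 3: the original pair plus two newly learned pairs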
  • Publication number: 20230319131
    Abstract: An information processing system includes an Application Platform capable of communicating with edge nodes (Edge1) that are connected so as to communicate with one another. The Application Platform includes a second processor, information on the microservices and data held by each Edge1, and performance information describing the performance of each Edge1. The second processor combines a plurality of predetermined microservices operating on predetermined data and causes Edge1 to execute them in a predetermined order. When the application is executed, microservices and data are moved between Edge1 nodes based on the information about the microservices and data held by each Edge1 and on the performance information.
    Type: Application
    Filed: June 8, 2023
    Publication date: October 5, 2023
    Applicant: Hitachi, Ltd.
    Inventors: Shimpei NOMURA, Mitsuo HAYASAKA, Kazumasa MATSUBARA, Eiichi INOUE
  • Patent number: 11778020
    Abstract: The aim is to make it easy and fast to scale out the servers that execute an application. In a computer system that includes one or more compute servers, each with an application container that executes the application, and a management server that manages the compute servers, the management server is configured to, when increasing the number of compute servers whose execution units run the application, identify the logical unit in which the data unit used by an existing compute server's execution unit during execution is stored, and to configure a newly added compute server to refer to that logical unit when its execution unit executes the application.
    Type: Grant
    Filed: September 2, 2022
    Date of Patent: October 3, 2023
    Assignee: HITACHI, LTD.
    Inventors: Yugo Yamauchi, Mitsuo Hayasaka, Takayuki Fukatani
  • Patent number: 11768616
    Abstract: A management node that controls the amount of a node's hardware resources allocated to a distributed data store includes a disk device and a CPU. The disk device stores an application performance model indicating the correspondence between application performance and distributed data store performance, and a data store performance model indicating the correspondence between the hardware resource amount and the performance of the distributed data store. The CPU receives target performance information for the application, uses the application performance model to determine the required performance, that is, the distributed data store performance needed to achieve the performance specified by the target performance information, uses the data store performance model to determine the hardware resource amount needed to achieve the required performance, and sets the determined resource amount to be allocated to the distributed data store. (A minimal sketch chaining the two models follows this entry.)
    Type: Grant
    Filed: March 11, 2022
    Date of Patent: September 26, 2023
    Assignee: Hitachi, Ltd.
    Inventors: Akio Shimada, Mitsuo Hayasaka
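    Sketch (illustrative only, not taken from the patent): chaining the two models to turn a target application performance into a data-store resource amount; the linear model forms and constants are assumptions.
      # application performance model: app_perf ≈ 0.8 * datastore_perf
      # data store performance model: datastore_perf ≈ 25 * cpu_cores
      import math

      def required_datastore_perf(target_app_perf: float) -> float:
          """Invert the application performance model."""
          return target_app_perf / 0.8

      def required_cores(datastore_perf: float) -> int:
          """Invert the data store performance model, rounding up to whole cores."""
          return math.ceil(datastore_perf / 25)

      target = 400.0                       # target application performance
      needed = required_datastore_perf(target)
      print(required_cores(needed))        # 20 cores allocated to the data store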
  • Patent number: 11768608
    Abstract: A computer system includes a NoSQL/SQL cluster and a distributed storage. To make storage target data redundant, a compute node in the NoSQL/SQL cluster instructs other compute nodes in the cluster to write the storage target data into the distributed storage. For a file containing the storage target data, the compute node deduplicates data across the storage apparatuses of a plurality of storage nodes in the distributed storage. The distributed storage uses erasure coding both to store a file of storage target data newly found to be a duplicate during deduplication and to store a file of storage target data that was not deduplicated. (A short deduplication-and-erasure-coding sketch follows this entry.)
    Type: Grant
    Filed: March 15, 2022
    Date of Patent: September 26, 2023
    Assignee: HITACHI, LTD.
    Inventors: Yuto Kamo, Mitsuo Hayasaka
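    Sketch (illustrative only, not taken from the patent): deduplicating by content hash before an erasure-coded write; the 4+2 fragment layout and function names are invented.
      # Deduplicate files by content hash across storage nodes, then erasure-code
      # whatever actually needs to be stored.
      import hashlib

      dedup_index: dict[str, str] = {}   # content hash -> location of the stored copy

      def erasure_code_and_place(path: str, data: bytes, data_frags: int = 4,
                                 parity_frags: int = 2) -> str:
          # Placeholder for a real erasure-coded write (data + parity fragments
          # spread over storage nodes); here we only report the layout.
          return f"{path} stored as {data_frags}+{parity_frags} fragments"

      def store(path: str, data: bytes) -> str:
          digest = hashlib.sha256(data).hexdigest()
          if digest in dedup_index:
              return dedup_index[digest]   # duplicate: reference the existing copy
          location = erasure_code_and_place(path, data)
          dedup_index[digest] = location
          return location

      print(store("/files/a.bin", b"hello"))
      print(store("/files/b.bin", b"hello"))  # same content: deduplicated reference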
  • Publication number: 20230297486
    Abstract: An application platform stores an application performance model and a data store performance model for each of a plurality of site systems. A processor receives target performance information, determines an application resource allocation for the site systems based on the application performance model, determines the required performance of the data store for the site systems, determines a data store resource allocation for the site systems based on the data store performance model, and searches for a placement plan for the application and the data store that can realize those resource allocations across the plurality of site systems.
    Type: Application
    Filed: September 9, 2022
    Publication date: September 21, 2023
    Inventors: Shimpei NOMURA, Mitsuo HAYASAKA
  • Publication number: 20230281161
    Abstract: A file virtualization function is provided easily, without being affected by the application that accesses the file system. A CPF node containerizes an application program and an IO Hook program and provides them to a client. The application program issues calls to the virtual file system provided by the IO Hook program in response to file operation requests from the client. The IO Hook program updates the file's state management information based on the input information or operation content for the virtual file system associated with the operation request. The file virtualization program performs file management between the CPF and a CAS based on the state management information and passes the calls on to a distributed file system program.
    Type: Application
    Filed: September 1, 2022
    Publication date: September 7, 2023
    Applicant: Hitachi, Ltd.
    Inventors: Takeshi KITAMURA, Ryo FURUHASHI, Mitsuo HAYASAKA, Shimpei NOMURA, Masanori TAKATA
  • Publication number: 20230259286
    Abstract: A computer system includes a distributed file system cluster and a block SDS cluster. The distributed file system cluster has a plurality of distributed file system nodes and stores a managed file redundantly in a plurality of volumes managed by those nodes. The block SDS cluster has a plurality of block SDS nodes and provides the volumes based on the storage regions of the block SDS nodes' storage apparatuses. A CPU of a management server performs control so that the volumes that redundantly store a file in the distributed file system cluster are not volumes based on the storage region of a storage apparatus of one block SDS node. (A brief anti-affinity placement sketch follows this entry.)
    Type: Application
    Filed: September 2, 2022
    Publication date: August 17, 2023
    Applicant: Hitachi, Ltd.
    Inventors: Masanori TAKATA, Mitsuo HAYASAKA
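    Sketch (illustrative only, not taken from the patent): the anti-affinity rule in miniature, assuming the management server knows which block SDS node backs each volume; names are invented.
      # Pick backing volumes for a file's redundant copies so that no two copies
      # land on volumes served by the same block SDS node.
      def place_redundant_volumes(volumes: dict[str, str], copies: int) -> list[str]:
          """volumes maps volume name -> owning block SDS node."""
          chosen, used_nodes = [], set()
          for vol, node in volumes.items():
              if node not in used_nodes:
                  chosen.append(vol)
                  used_nodes.add(node)
              if len(chosen) == copies:
                  return chosen
          raise RuntimeError("not enough independent block SDS nodes for redundancy")

      volumes = {"vol-1": "sds-node-1", "vol-2": "sds-node-1", "vol-3": "sds-node-2"}
      print(place_redundant_volumes(volumes, copies=2))  # ['vol-1', 'vol-3']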
  • Patent number: 11706287
    Abstract: An information processing system includes an Application Platform capable of communicating with edge nodes (Edge1) that are connected so as to communicate with one another. The Application Platform includes a second processor, information on the microservices and data held by each Edge1, and performance information describing the performance of each Edge1. The second processor combines a plurality of predetermined microservices operating on predetermined data and causes Edge1 to execute them in a predetermined order. When the application is executed, microservices and data are moved between Edge1 nodes based on the information about the microservices and data held by each Edge1 and on the performance information.
    Type: Grant
    Filed: July 5, 2022
    Date of Patent: July 18, 2023
    Assignee: HITACHI, LTD.
    Inventors: Shimpei Nomura, Mitsuo Hayasaka, Kazumasa Matsubara, Eiichi Inoue
  • Publication number: 20230224362
    Abstract: The aim is to make it easy and fast to scale out the servers that execute an application. In a computer system that includes one or more compute servers, each with an application container that executes the application, and a management server that manages the compute servers, the management server is configured to, when increasing the number of compute servers whose execution units run the application, identify the logical unit in which the data unit used by an existing compute server's execution unit during execution is stored, and to configure a newly added compute server to refer to that logical unit when its execution unit executes the application.
    Type: Application
    Filed: September 2, 2022
    Publication date: July 13, 2023
    Applicant: Hitachi, Ltd.
    Inventors: Yugo YAMAUCHI, Mitsuo HAYASAKA, Takayuki FUKATANI
  • Patent number: 11687239
    Abstract: A file storage system configured to use a second storage system includes a first file system provided to an application, a first storage system in which files are stored by the first file system, a processor, state management information storing the state of each file, a state information management unit that manages the state management information, and a file virtualization unit that manages files stored in the first and second storage systems. The processor calls the first file system based on a file operation request from the application, and the first file system processes the request. The state information management unit updates the file's state management information based on the input information to the first file system associated with the operation request, or on the operation content.
    Type: Grant
    Filed: March 24, 2022
    Date of Patent: June 27, 2023
    Assignee: Hitachi, Ltd.
    Inventors: Shimpei Nomura, Mitsuo Hayasaka, Masanori Takata