Patents by Inventor Naonobu Sukegawa

Naonobu Sukegawa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20140380328
    Abstract: A computer system includes: a physical computer including plural physical processors, a peripheral device connected to the plural physical processors, and a memory connected to the plural physical processors; and a management computer connected to the physical computer. The physical computer includes plural physical processor environments on each of which a virtual computer can be built, and the management computer includes an environment table indicating correspondence between plural physical processor environments each of which has the physical processor and on each of which a virtual computer can be built and an executable software program in each of the physical processor environments. When a specific software program is executed in the physical computer, a physical processor environment corresponding to the software program to be executed is selected from the plural physical processor environments by referring to the environment table, and a virtual computer is built on the selected physical processor environment.
    Type: Application
    Filed: June 20, 2014
    Publication date: December 25, 2014
    Applicant: HITACHI, LTD.
    Inventors: Toshiyuki UKAI, Naonobu SUKEGAWA
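The environment-table selection described in this abstract can be sketched roughly as follows. This is an illustrative model only; the table contents and all names (`ENV_TABLE`, `select_environment`, `build_virtual_computer`) are assumptions, not taken from the patent.

```python
# Hypothetical environment table: physical processor environment ->
# software programs executable in that environment.
ENV_TABLE = {
    "env-a": {"solver", "renderer"},
    "env-b": {"database"},
}

def select_environment(program, env_table=ENV_TABLE):
    """Return a physical processor environment able to execute `program`."""
    for env, programs in env_table.items():
        if program in programs:
            return env
    raise LookupError(f"no environment can execute {program!r}")

def build_virtual_computer(program):
    """Select an environment via the table, then build a VM on it."""
    env = select_environment(program)
    return {"vm_on": env, "runs": program}
```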
  • Patent number: 8296746
    Abstract: A method of generating optimum parallel codes from a source code for a computer system configured of plural processors that share a cache memory or a main memory is provided. A preset code is read and operation amounts and process contents are analyzed while distinguishing dependence and independence among processes from the code. Then, the amount of data to be reused among processes is analyzed, and the amount of data that accesses the main memory is analyzed. Further, upon the reception of a parallel code generation policy inputted by a user, the processes of the code are divided, and while estimating an execution cycle from the operation amount and process contents thereof, the cache use of the reuse data, and the main memory access data amount, a parallelization method with which the execution cycle becomes shortest is executed.
    Type: Grant
    Filed: February 6, 2008
    Date of Patent: October 23, 2012
    Assignee: Hitachi, Ltd.
    Inventors: Koichi Takayama, Naonobu Sukegawa
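The cost model implied by this abstract — estimate an execution cycle count from the operation amount, cache-resident reuse data, and main-memory traffic, then pick the shortest candidate — can be sketched as below. The cycle weights and candidate representation are illustrative assumptions.

```python
def estimate_cycles(ops, reuse_bytes, main_mem_bytes,
                    cycles_per_op=1, cache_cycles_per_byte=0.5,
                    mem_cycles_per_byte=10):
    """Rough execution-cycle estimate: compute cost plus cache and
    main-memory access costs (weights are assumed, not from the patent)."""
    return (ops * cycles_per_op
            + reuse_bytes * cache_cycles_per_byte
            + main_mem_bytes * mem_cycles_per_byte)

def pick_parallelization(candidates):
    """candidates: list of (name, ops, reuse_bytes, main_mem_bytes).
    Return the name of the candidate with the shortest estimated cycle."""
    return min(candidates,
               key=lambda c: estimate_cycles(c[1], c[2], c[3]))[0]
```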
  • Patent number: 8234453
    Abstract: To provide an easy way to constitute a processor from a plurality of LSIs, the processor includes: a first LSI containing a processor; a second LSI having a cache memory; and information transmission paths connecting the first LSI to a plurality of the second LSIs, in which the first LSI contains an address information issuing unit which broadcasts, to the second LSIs, via the information transmission paths, address information of data, the second LSI includes: a partial address information storing unit which stores a part of address information; a partial data storing unit which stores data that is associated with the address information; and a comparison unit which compares the address information broadcast with the address information stored in the partial address information storing unit to judge whether a cache hit occurs, and the comparison units of the plurality of the second LSIs are connected to the information transmission paths.
    Type: Grant
    Filed: February 11, 2008
    Date of Patent: July 31, 2012
    Assignee: Hitachi, Ltd.
    Inventor: Naonobu Sukegawa
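The broadcast-and-compare structure of this multi-LSI cache can be modeled in a few lines: each "second LSI" holds part of the address information, and the "first LSI" broadcasts an address that every slice compares in parallel. This is a behavioral toy, not the hardware design; all class and function names are illustrative.

```python
class CacheSliceLSI:
    """One 'second LSI': partial address info plus the associated data."""
    def __init__(self):
        self.lines = {}  # partial address information -> stored data

    def fill(self, address, data):
        self.lines[address] = data

    def compare(self, address):
        """Comparison unit: report (hit, data) for a broadcast address."""
        if address in self.lines:
            return True, self.lines[address]
        return False, None

def broadcast_lookup(slices, address):
    """The 'first LSI' broadcasts the address over the transmission paths;
    any slice reporting a hit supplies the data."""
    for s in slices:
        hit, data = s.compare(address)
        if hit:
            return data
    return None  # cache miss
```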
  • Patent number: 8108629
    Abstract: Provided is a method of managing, in a computer including a processor and a memory that stores information referred to by the processor, the memory. The memory includes a plurality of memory banks, respective power supplies of which are independently controlled. The respective memory banks include a plurality of physical pages. The method includes collecting the physical pages having same degrees of use frequencies in the same memory bank, selecting the memory bank, the power supply for which is controlled, on the basis of the use frequency, and controlling the power supply for the memory bank selected.
    Type: Grant
    Filed: February 16, 2007
    Date of Patent: January 31, 2012
    Assignee: Hitachi, Ltd.
    Inventors: Masaaki Shimizu, Naonobu Sukegawa
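The page-grouping idea in this abstract — collect pages with similar use frequencies into the same bank so whole cold banks can be powered down — can be sketched as follows. Bank size, threshold, and all names are illustrative assumptions.

```python
def group_pages_into_banks(page_freqs, pages_per_bank):
    """page_freqs: {page_id: use_frequency}. Sorting by frequency places
    pages with similar use frequencies in the same memory bank."""
    ordered = sorted(page_freqs, key=page_freqs.get, reverse=True)
    return [ordered[i:i + pages_per_bank]
            for i in range(0, len(ordered), pages_per_bank)]

def banks_to_power_off(banks, page_freqs, threshold):
    """Select banks whose hottest page is still below the use-frequency
    threshold; their power supplies can be cut independently."""
    return [i for i, bank in enumerate(banks)
            if max(page_freqs[p] for p in bank) < threshold]
```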
  • Patent number: 7958508
    Abstract: Provided is a method used in a computer system which includes at least one host computer, for managing a job to be executed by the host computer and a power supply of the host computer, the method including the procedures of: receiving the job; storing the received job; scheduling an execution plan for the stored job; determining, based on the execution plan of the job, a timing to execute power control of the host computer; determining a host computer to execute the power control when the determined timing to execute the power control is reached; controlling the power supply of the determined host computer; and executing the scheduled job.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: June 7, 2011
    Assignee: Hitachi, Ltd.
    Inventors: Masaaki Shimizu, Naonobu Sukegawa
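The workflow in this abstract — queue jobs, build an execution plan, and derive power-control timings from that plan — can be sketched as below. The round-robin placement, boot lead time, and data shapes are illustrative assumptions.

```python
BOOT_LEAD_TIME = 60  # assumed seconds a host needs to power on before a job

def schedule_jobs(jobs, hosts):
    """jobs: list of (job_id, start_time); hosts: list of host names.
    Round-robin the stored jobs over the hosts to form an execution plan."""
    ordered = sorted(jobs, key=lambda j: j[1])
    return [(job_id, start, hosts[i % len(hosts)])
            for i, (job_id, start) in enumerate(ordered)]

def power_on_timings(plan):
    """From the execution plan, power each host on BOOT_LEAD_TIME before
    its earliest scheduled job."""
    earliest = {}
    for _, start, host in plan:
        earliest[host] = min(start, earliest.get(host, start))
    return {host: start - BOOT_LEAD_TIME for host, start in earliest.items()}
```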
  • Patent number: 7895399
    Abstract: A processor reads a program including a prefetch command and a load command and data from a main memory, and executes the program. The processor includes: a processor core that executes the program; an L2 cache that stores data on the main memory for each predetermined unit of data storage; and a prefetch unit that pre-reads the data into the L2 cache from the main memory on the basis of a request for prefetch from the processor core. The prefetch unit includes: an L2 cache management table including an area in which a storage state is held for each position in the unit of data storage of the L2 cache and an area in which a request for prefetch is reserved; and a prefetch control unit that instructs the L2 cache to perform either the reserved request for prefetch or the request for prefetch from the processor core.
    Type: Grant
    Filed: February 13, 2007
    Date of Patent: February 22, 2011
    Assignee: Hitachi, Ltd.
    Inventors: Aki Tomita, Naonobu Sukegawa
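The reservation behavior of the prefetch unit — hold a prefetch request in the management table while the target line is busy, and issue it once the line is filled — can be modeled as a toy state machine. The states and method names are assumptions for illustration only.

```python
class PrefetchUnit:
    """Toy model: a management table tracking per-line storage state and
    at most one reserved prefetch request per line."""
    def __init__(self):
        self.table = {}  # line address -> {"state": ..., "reserved": bool}

    def request_prefetch(self, line):
        entry = self.table.setdefault(line, {"state": "free",
                                             "reserved": False})
        if entry["state"] == "busy":
            entry["reserved"] = True   # reserve instead of dropping
            return "reserved"
        entry["state"] = "busy"        # issue the prefetch now
        return "issued"

    def line_filled(self, line):
        """On fill completion, issue any reserved request for the line."""
        entry = self.table[line]
        entry["state"] = "free"
        if entry["reserved"]:
            entry["reserved"] = False
            return self.request_prefetch(line)
        return None
```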
  • Patent number: 7739530
    Abstract: Provided is a method of reliably reducing power consumption of a computer, while promoting prompt compilation of a source code and execution of an output code. The method according to this invention includes the steps of: reading a code which is preset and analyzing an amount of operation of the CPU and an access amount with respect to the cache memory based on the code; obtaining an execution rate of the CPU and an access rate with respect to the cache memory based on the amount of operation and the access amount; determining an area in which the access rate with respect to the cache memory is higher than the execution rate of the CPU, based on the code; adding a code for enabling the power consumption reduction function to the area; and generating an execution code executable on the computer, based on the code.
    Type: Grant
    Filed: February 16, 2007
    Date of Patent: June 15, 2010
    Assignee: Hitachi, Ltd.
    Inventors: Koichi Takayama, Naonobu Sukegawa
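The compile-time analysis in this abstract — compare the cache access rate against the CPU execution rate for each code region, and wrap memory-bound regions with the power-reduction function — can be sketched as below. The region representation and the marker strings are hypothetical.

```python
def access_and_execution_rates(region):
    """region: dict with an operation count and a cache access count.
    Returns (cpu_execution_rate, cache_access_rate) as fractions."""
    total = region["ops"] + region["cache_accesses"]
    return region["ops"] / total, region["cache_accesses"] / total

def add_power_hints(regions):
    """Emit the code stream, wrapping regions whose cache access rate
    exceeds their execution rate in (hypothetical) power-control markers."""
    out = []
    for r in regions:
        exec_rate, access_rate = access_and_execution_rates(r)
        memory_bound = access_rate > exec_rate
        if memory_bound:
            out.append("ENABLE_POWER_REDUCTION")
        out.append(r["code"])
        if memory_bound:
            out.append("DISABLE_POWER_REDUCTION")
    return out
```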
  • Publication number: 20100064070
    Abstract: In order to improve throughput by suppressing contention for hardware resources in a computer to which a data transfer unit is coupled, a control unit transfers data between a first interface coupled to the computer and a second interface. The control unit includes a memory transaction issuing unit which, when one of the first interface and the second interface receives an access request to a memory of the computer, issues a memory transaction for the main memory to the first interface. The first interface includes a plurality of interfaces coupled in parallel to the computer, and the control unit further includes a memory transaction distribution unit which extracts the address of the main memory contained in the memory transaction issued by the memory transaction issuing unit and selects the interface whose address designation information corresponds to the extracted address, to transmit the memory transaction.
    Type: Application
    Filed: August 24, 2009
    Publication date: March 11, 2010
    Inventors: Chihiro Yoshimura, Yoshiko Nagasaka, Naonobu Sukegawa, Koichi Takayama
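The distribution step — extract the main-memory address from a transaction and route it to the parallel interface whose address-designation information covers it — can be sketched as a range lookup. The interface names and address ranges are illustrative assumptions.

```python
# Hypothetical address-designation information for two parallel interfaces.
INTERFACES = [
    {"name": "if0", "lo": 0x0000, "hi": 0x7FFF},
    {"name": "if1", "lo": 0x8000, "hi": 0xFFFF},
]

def distribute(transaction, interfaces=INTERFACES):
    """Extract the main-memory address from the transaction and select the
    interface whose designated range contains it."""
    addr = transaction["address"]
    for iface in interfaces:
        if iface["lo"] <= addr <= iface["hi"]:
            return iface["name"]
    raise ValueError(f"no interface designated for address {addr:#x}")
```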
  • Publication number: 20090172288
    Abstract: To provide an easy way to constitute a processor from a plurality of LSIs, the processor includes: a first LSI containing a processor; a second LSI having a cache memory; and information transmission paths connecting the first LSI to a plurality of the second LSIs, in which the first LSI contains an address information issuing unit which broadcasts, to the second LSIs, via the information transmission paths, address information of data, the second LSI includes: a partial address information storing unit which stores a part of address information; a partial data storing unit which stores data that is associated with the address information; and a comparison unit which compares the address information broadcast with the address information stored in the partial address information storing unit to judge whether a cache hit occurs, and the comparison units of the plurality of the second LSIs are connected to the information transmission paths.
    Type: Application
    Filed: February 11, 2008
    Publication date: July 2, 2009
    Inventor: Naonobu Sukegawa
  • Publication number: 20090113404
    Abstract: A method of generating optimum parallel codes from a source code for a computer system configured of plural processors that share a cache memory or a main memory is provided. A preset code is read and operation amounts and process contents are analyzed while distinguishing dependence and independence among processes from the code. Then, the amount of data to be reused among processes is analyzed, and the amount of data that accesses the main memory is analyzed. Further, upon the reception of a parallel code generation policy inputted by a user, the processes of the code are divided, and while estimating an execution cycle from the operation amount and process contents thereof, the cache use of the reuse data, and the main memory access data amount, a parallelization method with which the execution cycle becomes shortest is executed.
    Type: Application
    Filed: February 6, 2008
    Publication date: April 30, 2009
    Inventors: Koichi Takayama, Naonobu Sukegawa
  • Publication number: 20090106499
    Abstract: Non-speculatively prefetched data is prevented from being discarded from a cache memory before being accessed. In a cache memory including a cache control unit for reading data from a main memory into the cache memory and registering the data in the cache memory upon reception of a fill request from a processor and for accessing the data in the cache memory upon reception of a memory instruction from the processor, a cache line of the cache memory includes a registration information storage unit for storing information indicating whether the registered data is written into the cache line in response to the fill request and whether the registered data is accessed by the memory instruction. The cache control unit sets information in the registration information storage unit for performing a prefetch based on the fill request and resets the information for accessing the cache line based on the memory instruction.
    Type: Application
    Filed: February 14, 2008
    Publication date: April 23, 2009
    Inventors: Hidetaka Aoki, Naonobu Sukegawa
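The registration-information bit described here — set when a line is filled by a prefetch, reset when a memory instruction accesses the line, and consulted so that prefetched-but-unused data is not evicted early — can be modeled as a toy cache. All names and the victim-selection policy are illustrative.

```python
class CacheLine:
    def __init__(self, addr, data):
        self.addr, self.data = addr, data
        self.prefetched_unused = False  # the registration-information bit

def fill(cache, addr, data, by_prefetch):
    line = CacheLine(addr, data)
    line.prefetched_unused = by_prefetch  # set on a prefetch fill
    cache[addr] = line

def access(cache, addr):
    line = cache[addr]
    line.prefetched_unused = False        # reset on a memory instruction
    return line.data

def choose_victim(cache):
    """Prefer evicting lines that are not awaiting their first access."""
    for addr, line in cache.items():
        if not line.prefetched_unused:
            return addr
    return next(iter(cache))              # all protected: fall back
```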
  • Publication number: 20080222434
    Abstract: Provided is a method used in a computer system which includes at least one host computer, the method including managing a job to be executed by the host computer and a power supply of the host computer, the method including the procedures of: receiving the job; storing the received job; scheduling an execution plan for the stored job; determining, based on the execution plan of the job, a timing to execute power control of the host computer; determining a host computer to execute the power control when the determined timing to execute the power control is reached; controlling the power supply of the determined host computer; and executing the scheduled job.
    Type: Application
    Filed: February 1, 2008
    Publication date: September 11, 2008
    Inventors: Masaaki Shimizu, Naonobu Sukegawa
  • Patent number: 7366814
    Abstract: Interrupt processing generated in a processor for arithmetic operations is offloaded onto a system control processor, thereby reducing disturbance to the processor for arithmetic operations. A heterogeneous multiprocessor system includes: means which accepts an interrupt in each CPU; means which looks up the accepted interrupt in an interrupt destination management table to select an interrupt destination CPU; means which queues the accepted interrupt; means which generates an inter-CPU interrupt to the selected interrupt destination CPU; means which, in the interrupt destination CPU, receives the inter-CPU interrupt from the interrupt source CPU, performs the interrupt process of the interrupt source CPU, and generates an inter-CPU interrupt back to the interrupt source CPU; means which performs an interrupt end process; and means which performs the interrupt process in its own CPU when the interrupt destination CPU selected as a result of the lookup in the interrupt destination management table is its own CPU.
    Type: Grant
    Filed: February 21, 2006
    Date of Patent: April 29, 2008
    Assignee: Hitachi, Ltd.
    Inventors: Masaaki Shimizu, Naonobu Sukegawa
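The offload decision — consult a destination table for an accepted interrupt, and either handle it locally or queue it and raise an inter-CPU interrupt to the selected destination — can be sketched as below. The table contents and all names are illustrative assumptions.

```python
# Hypothetical interrupt destination management table.
INTERRUPT_DESTINATIONS = {"timer": "cpu0", "network": "sys_cpu",
                          "disk": "sys_cpu"}

def accept_interrupt(own_cpu, irq, queues):
    """Look up the destination for `irq`; handle locally if it is this CPU,
    otherwise queue the interrupt for the destination (standing in for the
    inter-CPU interrupt). `queues` maps CPU -> pending work."""
    dest = INTERRUPT_DESTINATIONS.get(irq, own_cpu)
    if dest == own_cpu:
        return f"{own_cpu} handled {irq} locally"
    queues.setdefault(dest, []).append((own_cpu, irq))
    return f"{own_cpu} offloaded {irq} to {dest}"
```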
  • Publication number: 20080059715
    Abstract: A processor reads a program including a prefetch command and a load command and data from a main memory, and executes the program. The processor includes: a processor core that executes the program; an L2 cache that stores data on the main memory for each predetermined unit of data storage; and a prefetch unit that pre-reads the data into the L2 cache from the main memory on the basis of a request for prefetch from the processor core. The prefetch unit includes: an L2 cache management table including an area in which a storage state is held for each position in the unit of data storage of the L2 cache and an area in which a request for prefetch is reserved; and a prefetch control unit that instructs the L2 cache to perform either the reserved request for prefetch or the request for prefetch from the processor core.
    Type: Application
    Filed: February 13, 2007
    Publication date: March 6, 2008
    Inventors: Aki Tomita, Naonobu Sukegawa
  • Publication number: 20080034236
    Abstract: Provided is a method of reliably reducing power consumption of a computer, while promoting prompt compilation of a source code and execution of an output code. The method according to this invention includes the steps of: reading a code which is preset and analyzing an amount of operation of the CPU and an access amount with respect to the cache memory based on the code; obtaining an execution rate of the CPU and an access rate with respect to the cache memory based on the amount of operation and the access amount; determining an area in which the access rate with respect to the cache memory is higher than the execution rate of the CPU, based on the code; adding a code for enabling the power consumption reduction function to the area; and generating an execution code executable on the computer, based on the code.
    Type: Application
    Filed: February 16, 2007
    Publication date: February 7, 2008
    Inventors: Koichi Takayama, Naonobu Sukegawa
  • Publication number: 20080034234
    Abstract: Provided is a method of managing, in a computer including a processor and a memory that stores information referred to by the processor, the memory. The memory includes a plurality of memory banks, respective power supplies of which are independently controlled. The respective memory banks include a plurality of physical pages. The method includes collecting the physical pages having same degrees of use frequencies in the same memory bank, selecting the memory bank, the power supply for which is controlled, on the basis of the use frequency, and controlling the power supply for the memory bank selected.
    Type: Application
    Filed: February 16, 2007
    Publication date: February 7, 2008
    Inventors: Masaaki Shimizu, Naonobu Sukegawa
  • Patent number: 7293092
    Abstract: A parallel or grid computing system that has a plurality of nodes and achieves job scheduling for the nodes with a view toward optimizing system efficiency. The parallel or grid computing system has a plurality of nodes for transmitting and receiving data and a communication path for exchanging data among the nodes, which are either a transmitting node for transmitting data or a receiving node for processing a job dependent on transmitted data, and further has a time measuring means for measuring the time interval between the instant at which data is called for by a job and the instant at which the data is transmitted from a transmitting node to a receiving node, a time counting means for adding up the measured wait-time data for each job, and a job scheduling means for determining the priority of jobs in accordance with the counted wait time and scheduling the jobs accordingly.
    Type: Grant
    Filed: January 14, 2003
    Date of Patent: November 6, 2007
    Assignee: Hitachi, Ltd.
    Inventor: Naonobu Sukegawa
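The wait-time accounting in this abstract — measure the interval between a job calling for data and the data arriving, accumulate it per job, and prioritize accordingly — can be sketched as below. The priority rule (longest accumulated wait first) and all names are assumptions.

```python
class WaitTimeScheduler:
    """Toy model: accumulate per-job data-wait time and order jobs by it."""
    def __init__(self):
        self.total_wait = {}  # job id -> accumulated wait time

    def record_wait(self, job, requested_at, received_at):
        """Add one measured interval between the data request and arrival."""
        self.total_wait[job] = (self.total_wait.get(job, 0)
                                + (received_at - requested_at))

    def priority_order(self):
        """Jobs sorted so the most-delayed job runs first."""
        return sorted(self.total_wait, key=self.total_wait.get, reverse=True)
```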
  • Publication number: 20070180198
    Abstract: When the same data is used in a multiprocessor system, cache misses are reduced to prevent a coherence request from frequently occurring between processors.
    Type: Application
    Filed: February 22, 2006
    Publication date: August 2, 2007
    Inventors: Hidetaka Aoki, Naonobu Sukegawa
  • Publication number: 20070124523
    Abstract: Interrupt processing generated in a processor for arithmetic operations is offloaded onto a system control processor, thereby reducing disturbance to the processor for arithmetic operations. A heterogeneous multiprocessor system includes: means which accepts an interrupt in each CPU; means which looks up the accepted interrupt in an interrupt destination management table to select an interrupt destination CPU; means which queues the accepted interrupt; means which generates an inter-CPU interrupt to the selected interrupt destination CPU; means which, in the interrupt destination CPU, receives the inter-CPU interrupt from the interrupt source CPU, performs the interrupt process of the interrupt source CPU, and generates an inter-CPU interrupt back to the interrupt source CPU; means which performs an interrupt end process; and means which performs the interrupt process in its own CPU when the interrupt destination CPU selected as a result of the lookup in the interrupt destination management table is its own CPU.
    Type: Application
    Filed: February 21, 2006
    Publication date: May 31, 2007
    Inventors: Masaaki Shimizu, Naonobu Sukegawa
  • Publication number: 20070124567
    Abstract: A processor system capable of improving the usability and performance of an on-chip heterogeneous multiprocessor is provided. The processor system has a processor and a memory, the processor including one control unit that reads a program, a plurality of arithmetic units to which a SIMD instruction of the program read by the control unit is transmitted, and a shared cache capable of storing the program read by the control unit from the memory and allowing the control unit and the plurality of arithmetic units to read and write data. An instruction transmitted from the control unit to the plurality of arithmetic units specifies whether, while the plurality of arithmetic units execute instructions, execution of the instruction is to be suspended until an external signal is received from an arithmetic unit different from the one executing the instruction.
    Type: Application
    Filed: February 22, 2006
    Publication date: May 31, 2007
    Inventors: Aki Tomita, Hidetaka Aoki, Naonobu Sukegawa
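The suspend-until-signal flag described in this abstract can be illustrated with a toy instruction stream: each instruction optionally names another arithmetic unit whose external signal must arrive before execution proceeds. The instruction format and unit model are assumptions for illustration.

```python
def run_unit(unit_id, instructions, signals):
    """instructions: list of (op, wait_for_unit_or_None); signals: set of
    unit ids that have already signalled. Executes ops in order, suspending
    (stopping here) at the first instruction whose required external signal
    has not yet arrived. Returns the ops executed."""
    executed = []
    for op, wait_for in instructions:
        if wait_for is not None and wait_for not in signals:
            break  # suspend execution until the signal arrives
        executed.append(op)
    return executed
```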