Patents by Inventor Kenichi Tsuchiya

Kenichi Tsuchiya has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7836287
    Abstract: A method and processor for reducing the fetch time of target instructions of a predicted taken branch instruction. Each entry in a buffer, referred to herein as a “branch target buffer”, may store an address of a branch instruction predicted taken and the instructions beginning at the target address of the branch instruction predicted taken. When an instruction is fetched from the instruction cache, a particular entry in the branch target buffer is indexed using particular bits of the fetched instruction. The address of the branch instruction in the indexed entry is compared with the address of the instruction fetched from the instruction cache. If there is a match, then the instructions beginning at the target address of that branch instruction are dispatched directly behind the branch instruction. In this manner, the fetch time of target instructions of a predicted taken branch instruction is reduced.
    Type: Grant
    Filed: July 20, 2008
    Date of Patent: November 16, 2010
    Assignee: International Business Machines Corporation
    Inventors: Richard William Doing, Brett Olsson, Kenichi Tsuchiya
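
As a concrete illustration of the lookup flow this abstract describes, here is a minimal C sketch; it is not code from the patent, and the table size, entry fields, and names (`btb`, `BTB_ENTRIES`, `btb_lookup`) are all assumptions. Low-order bits of the fetch address index the buffer, the stored branch address is compared against the fetched address, and on a match the cached target instructions are returned so they can be dispatched directly behind the branch.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BTB_ENTRIES  64          /* assumed table size                     */
#define TARGET_SLOTS 4           /* assumed number of stored target insns  */

/* One BTB entry: the address of a branch predicted taken, plus the first
 * few instructions starting at its predicted target address.              */
typedef struct {
    int      valid;
    uint32_t branch_addr;
    uint32_t target_insns[TARGET_SLOTS];
} btb_entry_t;

static btb_entry_t btb[BTB_ENTRIES];

/* Index the BTB with low-order bits of the fetch address, then confirm the
 * full branch address matches.  On a hit, the stored target instructions
 * can be dispatched directly behind the branch instruction.               */
static const uint32_t *btb_lookup(uint32_t fetch_addr)
{
    btb_entry_t *e = &btb[(fetch_addr >> 2) % BTB_ENTRIES];
    if (e->valid && e->branch_addr == fetch_addr)
        return e->target_insns;          /* hit: bypass the normal fetch   */
    return NULL;                         /* miss: fetch the target normally*/
}

int main(void)
{
    /* Install one entry as if a taken branch at 0x1000 had been observed. */
    btb_entry_t *e = &btb[(0x1000u >> 2) % BTB_ENTRIES];
    e->valid = 1;
    e->branch_addr = 0x1000u;
    uint32_t insns[TARGET_SLOTS] = { 0xA1, 0xA2, 0xA3, 0xA4 };
    memcpy(e->target_insns, insns, sizeof insns);

    printf("lookup 0x1000: %s\n", btb_lookup(0x1000u) ? "hit" : "miss");
    printf("lookup 0x2000: %s\n", btb_lookup(0x2000u) ? "hit" : "miss");
    return 0;
}
```
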
  • Patent number: 7809922
    Abstract: A node of a multiple-node system includes a translation lookaside buffer (TLB), a cache, and a TLB snoop mechanism. The node shares memory with the other nodes of the multiple-node system and is connected with the other nodes via a bus. The TLB snoop mechanism snoops inbound memory access requests and/or outbound memory access requests. Inbound requests are received from over the bus and are intended for the cache. However, the cache receives only the inbound requests that relate to memory addresses having associated entries within the TLB. Outbound requests are received from within the node and are intended for transmission over the bus. However, the bus coherently transmits only the outbound requests that relate to memory addresses that are part of memory pages whose shared-memory page flags are set. All other outbound memory access requests are sent over the bus non-coherently.
    Type: Grant
    Filed: October 21, 2007
    Date of Patent: October 5, 2010
    Assignee: International Business Machines Corporation
    Inventors: Makoto Ueda, Kenichi Tsuchiya
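
The filtering behavior in this abstract can be pictured with a small C sketch; it is only an illustration under assumed names and sizes (`tlb_find`, `TLB_SIZE`, the `shared_page` flag), not the patent's implementation. Inbound snoops are forwarded to the cache only when the address has a TLB entry, and outbound requests go over the bus coherently only when the page is marked shared.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define TLB_SIZE   8             /* assumed, kept tiny for illustration    */

typedef struct {
    bool     valid;
    uint32_t vpn;                /* virtual page number                    */
    bool     shared_page;        /* shared-memory page flag                */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_SIZE];

static tlb_entry_t *tlb_find(uint32_t addr)
{
    uint32_t vpn = addr >> PAGE_SHIFT;
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return &tlb[i];
    return NULL;
}

/* Inbound snoop: only requests whose address has an associated TLB entry
 * are passed on to the cache; everything else is filtered out.            */
static bool forward_inbound_to_cache(uint32_t addr)
{
    return tlb_find(addr) != NULL;
}

/* Outbound request: sent coherently over the bus only if the page is
 * marked shared; otherwise it is sent non-coherently.                     */
static bool send_outbound_coherently(uint32_t addr)
{
    tlb_entry_t *e = tlb_find(addr);
    return e != NULL && e->shared_page;
}

int main(void)
{
    tlb[0] = (tlb_entry_t){ .valid = true, .vpn = 0x10, .shared_page = true  };
    tlb[1] = (tlb_entry_t){ .valid = true, .vpn = 0x20, .shared_page = false };

    printf("inbound 0x10000 -> cache? %d\n", forward_inbound_to_cache(0x10000));
    printf("inbound 0x30000 -> cache? %d\n", forward_inbound_to_cache(0x30000));
    printf("outbound 0x10000 coherent? %d\n", send_outbound_coherently(0x10000));
    printf("outbound 0x20000 coherent? %d\n", send_outbound_coherently(0x20000));
    return 0;
}
```
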
  • Patent number: 7779237
    Abstract: A method, system and processor for adaptively and selectively controlling the instruction execution frequency of a data processor. Processing logic or a software compiler determines when a number of first-type instructions, requiring longer execution latency, are scheduled to be executed. The logic/compiler then triggers the CPM unit to automatically switch the execution frequency of the instruction processor from a first frequency that is optimal for processing regular-type instructions to a second, pre-established lower frequency that is optimal for processing the first-type instructions, to enable more efficient execution and higher execution throughput of the number of first-type operations within the processor. When the first-type instructions have completed execution, the processor's instruction execution frequency is returned to the first optimal frequency.
    Type: Grant
    Filed: July 11, 2007
    Date of Patent: August 17, 2010
    Assignee: International Business Machines Corporation
    Inventors: Anthony Correale, Jr., Kenichi Tsuchiya
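
To make the switching behavior in this abstract concrete, here is a minimal sketch in C; the frequencies, counters, and function names are assumptions for illustration only, not the patented logic. While any long-latency ("first-type") instructions are in flight, the model runs at a lower pre-established frequency, and once they all complete it returns to the frequency used for regular instructions.

```c
#include <stdio.h>

/* Assumed illustrative frequencies, in MHz.                               */
enum { FREQ_NORMAL_MHZ = 1000, FREQ_LONG_LATENCY_MHZ = 500 };

typedef struct {
    int freq_mhz;
    int long_latency_in_flight;  /* count of first-type instructions       */
} cpm_state_t;

/* Called when the logic/compiler schedules a first-type instruction:
 * switch to the lower, pre-established frequency while any are in flight. */
static void on_long_latency_issue(cpm_state_t *s)
{
    s->long_latency_in_flight++;
    s->freq_mhz = FREQ_LONG_LATENCY_MHZ;
}

/* Called when a first-type instruction completes: once none remain, return
 * to the frequency that is optimal for regular-type instructions.         */
static void on_long_latency_complete(cpm_state_t *s)
{
    if (s->long_latency_in_flight > 0 && --s->long_latency_in_flight == 0)
        s->freq_mhz = FREQ_NORMAL_MHZ;
}

int main(void)
{
    cpm_state_t cpm = { FREQ_NORMAL_MHZ, 0 };
    on_long_latency_issue(&cpm);
    printf("during long-latency work: %d MHz\n", cpm.freq_mhz);
    on_long_latency_complete(&cpm);
    printf("after completion:         %d MHz\n", cpm.freq_mhz);
    return 0;
}
```
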
  • Patent number: 7711930
    Abstract: A method and apparatus for executing instructions in a pipeline processor. The method decreases the latency between an instruction cache and a pipeline processor when bubbles occur in the processing stream due to an execution of a branch correction, or when an interrupt changes the sequence of an instruction stream. The latency is reduced when a decode stage for detecting branch prediction and a related instruction queue location have invalid data representing a bubble in the processing stream. Instructions for execution are inserted in parallel into the decode stage and instruction queue, thereby reducing by one cycle time the length of the pipeline stage.
    Type: Grant
    Filed: October 8, 2007
    Date of Patent: May 4, 2010
    Assignee: International Business Machines Corporation
    Inventors: James N. Dieffenderfer, Richard W. Doing, Brian M. Stempel, Steven R. Testa, Kenichi Tsuchiya
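
The cycle saved by the parallel insertion described above can be sketched as follows; this is an assumed model, not the patent's logic, and the structure and function names (`frontend_t`, `deliver_from_icache`) are invented. When both the decode stage and its related instruction-queue location hold invalid data (a bubble), the newly fetched instruction is written into both at once instead of passing through the queue first.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    bool     valid;
    uint32_t insn;
} slot_t;

typedef struct {
    slot_t decode;      /* decode stage that detects branch prediction     */
    slot_t queue_head;  /* related instruction-queue location              */
} frontend_t;

/* If both the decode stage and the related queue slot hold invalid data (a
 * bubble after a branch correction or interrupt), write the new instruction
 * into both in parallel, saving one cycle of queue-to-decode latency.      */
static void deliver_from_icache(frontend_t *fe, uint32_t insn)
{
    if (!fe->decode.valid && !fe->queue_head.valid) {
        fe->decode     = (slot_t){ true, insn };   /* parallel insert       */
        fe->queue_head = (slot_t){ true, insn };
    } else {
        fe->queue_head = (slot_t){ true, insn };   /* normal path via queue */
    }
}

int main(void)
{
    frontend_t fe = { { false, 0 }, { false, 0 } };
    deliver_from_icache(&fe, 0xDEADBEEF);
    printf("decode valid after bubble fill: %d\n", fe.decode.valid);
    return 0;
}
```
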
  • Publication number: 20090116323
    Abstract: A system for at-functional-clock-speed continuous scan array built-in self testing (ABIST) of multiport memory is disclosed. During ABIST testing, functional addressing latches from a first port are used as shadow latches for a second port's addressing latches. The arrangement reduces the amount of test-only hardware on a chip and reduces the need to write complex testing software. Higher level functions may be inserted between the shadow latches and the addressing latches to automatically provide functions such as inversions.
    Type: Application
    Filed: January 7, 2009
    Publication date: May 7, 2009
    Applicant: International Business Machines Corporation
    Inventors: Robert Glen Gerowitz, Kenichi Tsuchiya
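
A rough software model of the latch sharing described in this abstract is shown below; it is only an assumed sketch (the struct, the `abist_shift` helper, and the use of bitwise inversion as the "higher level function" are all illustrative choices, not the patented circuit). In test mode, port A's functional addressing latch doubles as the shadow latch feeding port B's addressing latch, optionally through an inversion.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint16_t port_a_addr;   /* functional addressing latch, port A          */
    uint16_t port_b_addr;   /* functional addressing latch, port B          */
} addr_latches_t;

/* In ABIST mode, port A's functional latch is reused as the shadow latch
 * for port B, so no test-only shadow latch is needed.  A higher-level
 * function (here: bitwise inversion) can sit between the two so the ports
 * exercise complementary addresses.                                        */
static void abist_shift(addr_latches_t *l, uint16_t next_a, int invert)
{
    uint16_t shadow = l->port_a_addr;
    l->port_b_addr  = invert ? (uint16_t)~shadow : shadow;
    l->port_a_addr  = next_a;
}

int main(void)
{
    addr_latches_t latches = { 0x000F, 0 };
    abist_shift(&latches, 0x0010, /*invert=*/1);
    printf("port A: 0x%04X  port B: 0x%04X\n",
           latches.port_a_addr, latches.port_b_addr);
    return 0;
}
```
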
  • Publication number: 20090102979
    Abstract: A portable television-broadcast reception unit is provided that makes it easy to change the direction of a directional antenna toward a direction in which picture quality sufficient for watching television can be obtained.
    Type: Application
    Filed: April 25, 2006
    Publication date: April 23, 2009
    Inventors: Masayoshi Matsuoka, Yusuke Mizuno, Takefumi Oota, Satoshi Abe, Kenichi Tsuchiya
  • Publication number: 20090106502
    Abstract: A node of a multiple-node system includes a translation lookaside buffer (TLB), a cache, and a TLB snoop mechanism. The node shares memory with the other nodes of the multiple-node system and is connected with the other nodes via a bus. The TLB snoop mechanism snoops inbound memory access requests and/or outbound memory access requests. Inbound requests are received from over the bus and are intended for the cache. However, the cache receives only the inbound requests that relate to memory addresses having associated entries within the TLB. Outbound requests are received from within the node and are intended for transmission over the bus. However, the bus coherently transmits only the outbound requests that relate to memory addresses that are part of memory pages whose shared-memory page flags are set. All other outbound memory access requests are sent over the bus non-coherently.
    Type: Application
    Filed: October 21, 2007
    Publication date: April 23, 2009
    Inventors: Makoto Ueda, Kenichi Tsuchiya
  • Patent number: 7506225
    Abstract: A system for at-functional-clock-speed continuous scan array built-in self testing (ABIST) of multiport memory is disclosed. During ABIST testing, functional addressing latches from a first port are used as shadow latches for a second port's addressing latches. The arrangement reduces the amount of test-only hardware on a chip and reduces the need to write complex testing software. Higher level functions may be inserted between the shadow latches and the addressing latches to automatically provide functions such as inversions.
    Type: Grant
    Filed: October 14, 2005
    Date of Patent: March 17, 2009
    Assignee: International Business Machines Corporation
    Inventors: Robert Glen Gerowitz, Kenichi Tsuchiya
  • Publication number: 20090019265
    Abstract: A method, system and processor for adaptively and selectively controlling the instruction execution frequency of a data processor. Processing logic or a software compiler determines when a number of first-type instructions, requiring longer execution latency, are scheduled to be executed. The logic/compiler then triggers the CPM unit to automatically switch the execution frequency of the instruction processor from a first frequency that is optimal for processing regular-type instructions to a second, pre-established lower frequency that is optimal for processing the first-type instructions, to enable more efficient execution and higher execution throughput of the number of first-type operations within the processor. When the first-type instructions have completed execution, the processor's instruction execution frequency is returned to the first optimal frequency.
    Type: Application
    Filed: July 11, 2007
    Publication date: January 15, 2009
    Inventors: Anthony Correale, Jr., Kenichi Tsuchiya
  • Publication number: 20090019264
    Abstract: A method, system and processor for increasing the instruction throughput in a processor executing longer-latency instructions within the instruction pipeline. Logic associated with the specific stages of the execution pipeline responsible for executing the particular type of instructions determines when at least a threshold number of the particular-type instructions is scheduled to be executed. The logic then automatically changes the execution cycle frequency of those specific pipeline stages from a first cycle frequency to a second, pre-established higher cycle frequency, which enables more efficient execution and higher execution throughput of the particular-type instructions. The cycle frequency of only the one or more functional stages is switched to the higher cycle frequency, independent of the cycle frequency of the other functional stages in the processor pipeline.
    Type: Application
    Filed: July 11, 2007
    Publication date: January 15, 2009
    Inventors: Anthony Correale, Jr., Kenichi Tsuchiya
  • Publication number: 20090005225
    Abstract: A charging roll includes a shaft and an ionically conductive elastic layer formed around the shaft. The ionically conductive elastic layer is formed of a rubber composition free of any electron-conductive agent and containing 0.7 to 1.0 parts by weight of a peroxide cross-linking agent per 100 parts by weight of an ion-conductive rubber. The ion-conductive rubber is formed of at least one of an epichlorohydrin rubber and a nitrile rubber, and a percentage of a rubber component in the ionically conductive elastic layer measured by thermogravimetric analysis is 90% or more by weight.
    Type: Application
    Filed: June 18, 2008
    Publication date: January 1, 2009
    Applicant: Tokai Rubber Industries, Ltd.
    Inventors: Kenichi Tsuchiya, Naoaki Sasakibara, Fumio Misumi, Satoshi Suzuki, Kadai Takeyama
  • Publication number: 20080320236
    Abstract: A system includes processor units, caches, memory shared by the processor units, a system bus interface, and a cache snoop interface. Each processor unit has one of the caches. The system bus interface communicatively connects the processor units to the memory via at least the caches, and is a non-cache-snoop system bus interface. The cache snoop interface communicatively connects the caches, and is independent of the system bus interface. Upon a given processor unit writing a new value to an address within the memory, such that the new value and the address are cached within the cache of the given processor unit, a write invalidation event is sent over the cache snoop interface to the caches of the processor units other than the given processor unit. This event invalidates the address as stored within any of the caches other than the cache of the given processor unit.
    Type: Application
    Filed: June 25, 2007
    Publication date: December 25, 2008
    Inventors: Makoto Ueda, Kenichi Tsuchiya, Takeo Nakada, Norio Fujita
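
The write-invalidation event in this abstract can be modeled with a short C sketch; the cache layout, the direct-mapped indexing, and the function names here are assumptions for illustration, not the patent's design. A write by one processor unit caches the line locally and then invalidates any copy of that address held by the other units' caches, standing in for the event broadcast over the separate cache snoop interface.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_UNITS 4
#define LINES     16             /* assumed, tiny direct-mapped caches     */

typedef struct {
    bool     valid[LINES];
    uint32_t tag[LINES];
} cache_t;

static cache_t caches[NUM_UNITS];

static int line_index(uint32_t addr) { return (addr >> 6) % LINES; }

/* Write by one unit: cache the new value locally, then broadcast a write
 * invalidation event over the (separate) cache snoop interface so every
 * other cache drops its copy of that address.                             */
static void write_addr(int unit, uint32_t addr)
{
    int idx = line_index(addr);
    caches[unit].valid[idx] = true;
    caches[unit].tag[idx]   = addr;

    for (int u = 0; u < NUM_UNITS; u++) {
        if (u == unit)
            continue;
        if (caches[u].valid[idx] && caches[u].tag[idx] == addr)
            caches[u].valid[idx] = false;       /* snoop invalidation       */
    }
}

int main(void)
{
    write_addr(0, 0x4000);       /* unit 0 caches the line                 */
    write_addr(1, 0x4000);       /* unit 1 writes: unit 0's copy goes away */
    printf("unit 0 still valid? %d\n", caches[0].valid[line_index(0x4000)]);
    printf("unit 1 valid?       %d\n", caches[1].valid[line_index(0x4000)]);
    return 0;
}
```
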
  • Publication number: 20080276070
    Abstract: A method and processor for reducing the fetch time of target instructions of a predicted taken branch instruction. Each entry in a buffer, referred to herein as a “branch target buffer”, may store an address of a branch instruction predicted taken and the instructions beginning at the target address of the branch instruction predicted taken. When an instruction is fetched from the instruction cache, a particular entry in the branch target buffer is indexed using particular bits of the fetched instruction. The address of the branch instruction in the indexed entry is compared with the address of the instruction fetched from the instruction cache. If there is a match, then the instructions beginning at the target address of that branch instruction are dispatched directly behind the branch instruction. In this manner, the fetch time of target instructions of a predicted taken branch instruction is reduced.
    Type: Application
    Filed: July 20, 2008
    Publication date: November 6, 2008
    Applicant: International Business Machines Corporation
    Inventors: Richard William Doing, Brett Olsson, Kenichi Tsuchiya
  • Publication number: 20080276071
    Abstract: A method and processor for reducing the fetch time of target instructions of a predicted taken branch instruction. Each entry in a buffer, referred to herein as a “branch target buffer”, may store an address of a branch instruction predicted taken and the instructions beginning at the target address of the branch instruction predicted taken. When an instruction is fetched from the instruction cache, a particular entry in the branch target buffer is indexed using particular bits of the fetched instruction. The address of the branch instruction in the indexed entry is compared with the address of the instruction fetched from the instruction cache. If there is a match, then the instructions beginning at the target address of that branch instruction are dispatched directly behind the branch instruction. In this manner, the fetch time of target instructions of a predicted taken branch instruction is reduced.
    Type: Application
    Filed: July 20, 2008
    Publication date: November 6, 2008
    Applicant: International Business Machines Corporation
    Inventors: Richard William Doing, Brett Olsson, Kenichi Tsuchiya
  • Patent number: 7437543
    Abstract: A method and processor for reducing the fetch time of target instructions of a predicted taken branch instruction. Each entry in a buffer, referred to herein as a “branch target buffer”, may store an address of a branch instruction predicted taken and the instructions beginning at the target address of the branch instruction predicted taken. When an instruction is fetched from the instruction cache, a particular entry in the branch target buffer is indexed using particular bits of the fetched instruction. The address of the branch instruction in the indexed entry is compared with the address of the instruction fetched from the instruction cache. If there is a match, then the instructions beginning at the target address of that branch instruction are dispatched directly behind the branch instruction. In this manner, the fetch time of target instructions of a predicted taken branch instruction is reduced.
    Type: Grant
    Filed: April 19, 2005
    Date of Patent: October 14, 2008
    Assignee: International Business Machines Corporation
    Inventors: Richard William Doing, Brett Olsson, Kenichi Tsuchiya
  • Patent number: 7425738
    Abstract: A metal thin film provided on a substrate and having a metal with a face-centered cubic crystal structure, wherein the metal thin film is preferentially oriented in a (111) plane, and a (100) plane which is not parallel to a surface of the substrate is present on a surface of the thin film. In this metal thin film, the metal with a face-centered cubic crystal structure includes at least one element selected from the group consisting of Pt, Ir, and Ru.
    Type: Grant
    Filed: April 14, 2005
    Date of Patent: September 16, 2008
    Assignee: Seiko Epson Corporation
    Inventors: Tatsuo Sawasaki, Kenichi Kurokawa, Teruo Tagawa, Kenichi Tsuchiya
  • Publication number: 20080177981
    Abstract: A method and apparatus for executing instructions in a pipeline processor. The method decreases the latency between an instruction cache and a pipeline processor when bubbles occur in the processing stream due to an execution of a branch correction, or when an interrupt changes the sequence of an instruction stream. The latency is reduced when a decode stage for detecting branch prediction and a related instruction queue location have invalid data representing a bubble in the processing stream. Instructions for execution are inserted in parallel into the decode stage and instruction queue, thereby reducing by one cycle time the length of the pipeline stage.
    Type: Application
    Filed: October 8, 2007
    Publication date: July 24, 2008
    Applicant: International Business Machines Corporation
    Inventors: James N. Dieffenderfer, Richard W. Doing, Brian M. Stempel, Steven R. Testa, Kenichi Tsuchiya
  • Publication number: 20080022044
    Abstract: A processor contains multiple levels of registers having different access latency. A relatively smaller set of registers is contained in a relatively faster higher level register bank, and a larger, more complete set of the registers is contained in a relatively slower lower level register bank. Physically, the higher level register bank is placed closer to functional logic which receives inputs from the registers. Selection logic enables selecting output of either register bank for input to processor execution logic. Preferably, the lower level bank includes a complete set of all processor registers, and the higher level bank includes a smaller subset of the registers, duplicating information in the lower level bank. The higher level bank is preferably accessible in a single clock cycle.
    Type: Application
    Filed: August 8, 2007
    Publication date: January 24, 2008
    Applicant: International Business Machines Corporation
    Inventors: Nathan Nunamaker, Jack Randolph, Kenichi Tsuchiya
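
The two-level register arrangement in this abstract is sketched below as an assumed C model; the bank sizes, mapping array, and function names are invented for illustration and do not come from the patent. Reads are served from the small, faster upper bank when the register is duplicated there, and otherwise fall back to the complete, slower lower bank; writes update both copies so they stay consistent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_REGS  64             /* complete architected register set      */
#define FAST_REGS 8              /* assumed size of the faster upper bank  */

static uint64_t slow_bank[NUM_REGS];          /* complete, slower bank     */
static uint64_t fast_bank[FAST_REGS];         /* duplicates a hot subset   */
static int      fast_map[FAST_REGS];          /* which register each fast  */
static bool     fast_valid[FAST_REGS];        /* slot currently holds      */

/* Selection logic: deliver the value from the single-cycle upper bank when
 * the register is duplicated there, otherwise use the lower bank.         */
static uint64_t read_reg(int r)
{
    for (int i = 0; i < FAST_REGS; i++)
        if (fast_valid[i] && fast_map[i] == r)
            return fast_bank[i];              /* fast path                 */
    return slow_bank[r];                      /* slow path, always present */
}

/* Writes go to the lower bank and, if the register is duplicated in the
 * upper bank, to that copy as well.                                       */
static void write_reg(int r, uint64_t v)
{
    slow_bank[r] = v;
    for (int i = 0; i < FAST_REGS; i++)
        if (fast_valid[i] && fast_map[i] == r)
            fast_bank[i] = v;
}

int main(void)
{
    fast_map[0] = 3; fast_valid[0] = true;    /* register 3 is duplicated  */
    write_reg(3, 42);
    write_reg(40, 7);
    printf("r3=%llu (fast), r40=%llu (slow)\n",
           (unsigned long long)read_reg(3),
           (unsigned long long)read_reg(40));
    return 0;
}
```
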
  • Patent number: 7321954
    Abstract: An LRU array and method for tracking the accessing of lines of an associative cache. The most recently accessed lines of the cache are identified in the array, and cache lines can be blocked from being replaced. The LRU array contains a data array having a row of data representing each line of the associative cache having a common address portion. A first set of data for the cache line identifies the relative age of the cache line for each way with respect to every other way. A second set of data identifies whether a line of one of the ways is not to be replaced. For cache line replacement, the cache controller selects the least recently accessed line using the contents of the LRU array, considering the value of the first set of data as well as the value of the second set of data, which indicates whether or not a way is locked. Updates to the LRU occur after each prefetch or fetch of a line, or when a line replaces another line in the cache memory.
    Type: Grant
    Filed: August 11, 2004
    Date of Patent: January 22, 2008
    Assignee: International Business Machines Corporation
    Inventors: James N. Dieffenderfer, Richard W. Doing, Brian E. Frankel, Kenichi Tsuchiya
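
To illustrate how the two sets of data in this abstract interact, here is a minimal C sketch of one set of a 4-way cache; the per-way age encoding, the lock-bit array, and the helper names are assumptions, not the patented structure. Touching a way makes it most recently used, and victim selection picks the oldest way whose lock bit is not set, so locked ways are never replaced.

```c
#include <stdbool.h>
#include <stdio.h>

#define WAYS 4

typedef struct {
    int  age[WAYS];              /* relative age of each way vs the others */
    bool locked[WAYS];           /* ways that must not be replaced         */
} lru_set_t;

/* On a fetch or prefetch of a way, make it the most recently used: ways
 * younger than it age by one, and it becomes age 0.                       */
static void lru_touch(lru_set_t *s, int way)
{
    for (int w = 0; w < WAYS; w++)
        if (s->age[w] < s->age[way])
            s->age[w]++;
    s->age[way] = 0;
}

/* Victim selection: pick the least recently used way whose lock bit is not
 * set; locked ways are skipped entirely.                                  */
static int lru_victim(const lru_set_t *s)
{
    int victim = -1;
    for (int w = 0; w < WAYS; w++)
        if (!s->locked[w] && (victim < 0 || s->age[w] > s->age[victim]))
            victim = w;
    return victim;               /* -1 only if every way is locked         */
}

int main(void)
{
    lru_set_t set = { .age = { 0, 1, 2, 3 }, .locked = { false } };
    set.locked[3] = true;        /* the oldest way is locked               */
    lru_touch(&set, 1);
    printf("victim way: %d\n", lru_victim(&set));  /* way 2, not locked 3  */
    return 0;
}
```

Keeping the lock bits alongside the age data means a single read of the LRU row is enough to pick a replacement candidate that respects both recency and locking.
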
  • Patent number: 7284092
    Abstract: A processor contains multiple levels of registers having different access latency. A relatively smaller set of registers is contained in a relatively faster higher level register bank, and a larger, more complete set of the registers is contained in a relatively slower lower level register bank. Physically, the higher level register bank is placed closer to functional logic which receives inputs from the registers. Preferably, the lower level bank includes a complete set of all processor registers, and the higher level bank includes a smaller subset of the registers, duplicating information in the lower level bank. The higher level bank is preferably accessible in a single clock cycle.
    Type: Grant
    Filed: June 24, 2004
    Date of Patent: October 16, 2007
    Assignee: International Business Machines Corporation
    Inventors: Nathan Samuel Nunamaker, Jack Chris Randolph, Kenichi Tsuchiya