Patents by Inventor Takeki Osanai

Takeki Osanai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20070022277
    Abstract: Systems and methods for modes of operation for processing data are disclosed. While executing a program in one mode, the hazard checking logic present in the microprocessor may be used to check for, or ameliorate, hazards caused by the program's execution. When a program does not need this hazard checking, however, the microprocessor may execute it in a mode in which some portion of the hazard checking logic is not used. This allows higher-speed execution of such programs by eliminating dependency checking, the detection of false load/store dependencies, the insertion of unnecessary stalls into the execution pipeline, and other hardware operations. Furthermore, reducing the use of hazard detection logic may also decrease power consumption.
    Type: Application
    Filed: July 20, 2005
    Publication date: January 25, 2007
    Inventors: Kenji Iwamura, Takeki Osanai, Yukio Watanabe
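The mode-dependent hazard checking described in the abstract above can be illustrated with a minimal software sketch. This is a hypothetical simulation, not the patented circuit: the instruction format, function names, and one-stall-per-hazard cost model are all illustrative assumptions.

```python
# Illustrative sketch of mode-controlled hazard checking. Each instruction
# is (destination_register, [source_registers]); all names are hypothetical.

def find_hazards(instructions):
    """Return indices of instructions that read a register written by an
    earlier instruction (read-after-write dependency)."""
    written = set()
    hazards = []
    for i, (dst, srcs) in enumerate(instructions):
        if any(s in written for s in srcs):
            hazards.append(i)
        written.add(dst)
    return hazards

def execute(instructions, hazard_checking=True):
    """Count cycles: one per instruction, plus one stall per detected
    hazard when checking is enabled. In the unchecked mode (a program
    known to be hazard-free), no stalls are inserted."""
    cycles = len(instructions)
    if hazard_checking:
        cycles += len(find_hazards(instructions))
    return cycles

# A short dependent chain: each instruction reads the previous result.
program = [("r1", []), ("r2", ["r1"]), ("r3", ["r2"])]
```

Running the same program with checking disabled skips the stall insertion entirely, which is the speed (and power) benefit the abstract claims.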
  • Publication number: 20060277351
    Abstract: Systems and methods for the implementation of more efficient cache locking mechanisms are disclosed. These systems and methods may alleviate the need to present both a virtual address (VA) and a physical address (PA) to a cache mechanism. A translation table is utilized to store both the address and the locking information associated with a virtual address, and this locking information is passed to the cache along with the address of the data. The cache can then lock data based on this information. Additionally, this locking information may be used to override the replacement mechanism used with the cache, thus keeping locked data in the cache. The translation table may also store translation table lock information such that entries in the translation table are locked as well.
    Type: Application
    Filed: June 6, 2005
    Publication date: December 7, 2006
    Inventors: Takeki Osanai, Kimberly Fernsler
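The locking scheme above, in which the translation table carries lock information alongside the address and the cache's replacement policy is overridden for locked data, can be sketched as follows. This is a simplified software model under assumed structures (a fully associative cache with LRU replacement); it is not the patented design.

```python
# Hypothetical sketch: lock information is stored with the translation and
# travels to the cache with the address; replacement skips locked lines.

class TranslationTable:
    def __init__(self):
        self.entries = {}                  # virtual page -> (physical page, locked)

    def map(self, vpage, ppage, locked=False):
        self.entries[vpage] = (ppage, locked)

    def translate(self, vpage):
        return self.entries[vpage]         # lock bit is passed along with the address

class Cache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}                    # physical address -> locked flag
        self.lru = []                      # least recently used first

    def access(self, paddr, locked):
        if paddr in self.lines:
            self.lru.remove(paddr)
        elif len(self.lines) >= self.capacity:
            # Replacement is overridden: the victim is the oldest *unlocked*
            # line, so locked data stays resident in the cache.
            victim = next(a for a in self.lru if not self.lines[a])
            self.lru.remove(victim)
            del self.lines[victim]
        self.lines[paddr] = locked
        self.lru.append(paddr)

tt = TranslationTable()
tt.map(0x0, 0x100, locked=True)            # lock the data behind virtual page 0
tt.map(0x1, 0x101)
tt.map(0x2, 0x102)
cache = Cache(capacity=2)
for vpage in (0x0, 0x1, 0x2):
    ppage, lock = tt.translate(vpage)
    cache.access(ppage, lock)
```

Note how the third access evicts the unlocked line rather than the locked one, even though the locked line is older in LRU order.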
  • Publication number: 20060271767
    Abstract: Systems and methods for determining dependencies between processor instructions in multiple phases. In one embodiment, a partial comparison is made between the addresses of a sequence of instructions. Younger instructions having potential dependencies on older instructions are suspended if the partial comparison yields a match. One or more subsequent comparisons are made for suspended instructions based on portions of the addresses referenced by the instructions that were not previously compared. If subsequent comparisons determine that the addresses of the instructions do not match, the suspended instructions are reinstated and execution of the suspended instructions is resumed. In one embodiment, data needed by suspended instructions is speculatively requested in case the instructions are reinstated.
    Type: Application
    Filed: May 31, 2005
    Publication date: November 30, 2006
    Inventors: Takeki Osanai, Kenji Iwamura
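The multi-phase comparison above can be sketched as a fast partial compare followed by a full compare. The 12-bit partial width and the return values are illustrative assumptions, not details from the patent.

```python
# Hypothetical two-phase dependency check: a cheap compare on the low
# address bits suspends possibly dependent younger instructions; the full
# compare later either confirms the dependency or reinstates them.

PARTIAL_BITS = 12          # low bits compared in the fast first phase (assumed)

def partial_match(addr_a, addr_b):
    mask = (1 << PARTIAL_BITS) - 1
    return (addr_a & mask) == (addr_b & mask)

def check_dependency(older_addr, younger_addr):
    """Return 'proceed', 'dependent', or 'suspend-then-reinstate'."""
    if not partial_match(older_addr, younger_addr):
        return "proceed"                  # cheap compare rules out a conflict
    # Partial bits matched: the younger op is suspended while the
    # remaining address bits are compared.
    if older_addr == younger_addr:
        return "dependent"                # true dependency confirmed
    return "suspend-then-reinstate"       # false match on the low bits only
```

A real implementation would also speculatively request the data for suspended instructions, as the abstract notes, so reinstatement is cheap.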
  • Publication number: 20060174083
    Abstract: A method, an apparatus, and a computer program product are provided for detecting load/store dependency in a memory system by dynamically changing the address width used for comparison. An incoming load/store operation must be compared against the operations in the pipeline and the queues to avoid address conflicts. The present invention introduces a cache-hit or cache-miss input into the load/store dependency logic: if the incoming load operation is a cache hit, the quadword-boundary address value is used for detection; if it is a cache miss, the cacheline-boundary address value is used. This enhances the performance of LHS (load-hit-store) and LHR (load-hit-reload) operations in a memory system.
    Type: Application
    Filed: February 3, 2005
    Publication date: August 3, 2006
    Inventors: Brian Barrick, Dwain Hicks, Takeki Osanai, David Ray
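The dynamic-width comparison above can be sketched directly. The 16-byte quadword and 128-byte cacheline sizes are assumptions chosen for illustration (128 bytes matches common IBM cacheline sizes, but the patent text here does not specify them).

```python
# Hypothetical sketch: on a cache hit the load is compared against queued
# operations at quadword granularity; on a miss, at cacheline granularity,
# since the whole line will be reloaded.

QUADWORD = 16        # bytes (assumed)
CACHELINE = 128      # bytes (assumed)

def conflicts(load_addr, queued_addrs, cache_hit):
    """Return the queued addresses that conflict with the incoming load
    at the granularity selected by the hit/miss input."""
    granularity = QUADWORD if cache_hit else CACHELINE
    block = load_addr // granularity
    return [a for a in queued_addrs if a // granularity == block]
```

The same incoming load thus conflicts with fewer queued operations on a hit (fine-grained compare) than on a miss (coarse, line-granular compare), which is the performance point of the abstract.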
  • Publication number: 20060106985
    Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.
    Type: Application
    Filed: November 12, 2004
    Publication date: May 18, 2006
    Applicants: International Business Machines Corporation, Toshiba America Electronic Components, Inc., Kabushiki Kaisha Toshiba
    Inventors: Brian Barrick, Kimberly Fernsler, Dwain Hicks, Takeki Osanai, David Ray
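The hit-suppression rule above, ignoring the cache hit signal when an older load to the same address is still queued, can be modeled with a small sketch. The class and method names are illustrative, not from the patent.

```python
# Hypothetical sketch: a load that hits the cache is nevertheless queued
# behind an earlier queued load to the same address, so the two complete
# in order.

class LoadUnit:
    def __init__(self, cache_lines):
        self.cache = set(cache_lines)     # addresses currently in the cache
        self.queue = []                   # pending loads, oldest first

    def issue_load(self, addr):
        """Return 'hit' if the load completes immediately, else 'queued'."""
        hit = addr in self.cache
        older_same_addr = addr in self.queue
        if hit and not older_same_addr:
            return "hit"
        # The cache hit signal is ignored when an older load to the same
        # address is still queued; the load waits its turn in the queue.
        self.queue.append(addr)
        return "queued"

lsu = LoadUnit(cache_lines={0x40})
first = lsu.issue_load(0x80)      # miss: queued
lsu.cache.add(0x80)               # a later refill brings the line in
second = lsu.issue_load(0x80)     # hit signal ignored: older load still queued
```

Without the suppression, the second load would return newer data before the first load completed, violating ordering.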
  • Publication number: 20060106987
    Abstract: The present invention provides a method for a load address dependency mechanism in a high-frequency, low-power processor. A load instruction corresponding to a memory address is received. At least one unexecuted preceding instruction corresponding to the memory address is identified. The load instruction is then stored in a miss queue and tagged as a local miss.
    Type: Application
    Filed: November 18, 2004
    Publication date: May 18, 2006
    Applicants: International Business Machines Corporation, Toshiba America Electronic Components, Inc., Kabushiki Kaisha Toshiba
    Inventors: Brian Barrick, Kimberly Fernsler, Dwain Hicks, David Ray, David Shippy, Takeki Osanai
  • Publication number: 20060107021
    Abstract: Methods for executing load instructions are disclosed. In one method, a load instruction and corresponding thread information are received. Address information of the load instruction is used to generate an address of the needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load and/or store instruction specifying the same address. If such a previous load/store instruction is found, the thread information is used to determine if the previous load/store instruction is from the same thread. If the previous load/store instruction is from the same thread, the cache hit signal is ignored, and the load instruction is stored in the queue. A load/store unit is also described.
    Type: Application
    Filed: November 12, 2004
    Publication date: May 18, 2006
    Applicants: International Business Machines Corporation, Toshiba America Electronic Components, Inc., Kabushiki Kaisha Toshiba
    Inventors: Brian Barrick, Kimberly Fernsler, Dwain Hicks, Takeki Osanai, David Ray
  • Publication number: 20060036638
    Abstract: A system and method for determining whether to retire a data entry from a buffer. A portion of the retirement conditions is processed before the data entry is considered for retirement, allowing faster processing of the remaining retirement conditions when retirement is actually evaluated. The results of this pre-processing are stored as predecoded retirement information, which is later combined with the remaining retirement conditions to determine whether the data is to be retired from the buffer.
    Type: Application
    Filed: August 16, 2004
    Publication date: February 16, 2006
    Inventors: Takeki Osanai, Brian Barrick
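The split retirement check above can be sketched as a stored predecoded bit plus a late condition. The specific conditions (address translated, no exception, data returned) are hypothetical examples; the patent abstract does not enumerate them.

```python
# Hypothetical sketch: conditions whose inputs are known when the entry is
# written are predecoded into one stored bit, so retirement time only has
# to evaluate the late-arriving conditions.

class BufferEntry:
    def __init__(self, translated, no_exception):
        # Predecode: fold the early conditions into a single stored bit
        # when the entry is created.
        self.predecoded_ok = translated and no_exception

def can_retire(entry, data_returned):
    # At retirement, only the stored bit and the late condition are
    # checked, instead of re-evaluating every condition from scratch.
    return entry.predecoded_ok and data_returned
```

The win is in the critical path: the retirement decision becomes a two-input check regardless of how many early conditions were folded into the predecoded bit.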
  • Publication number: 20060020759
    Abstract: The present invention provides a method of updating the cache state information for store transactions in a system in which store transactions read the cache state information only upon entering the unit pipe or the store portion of the store/load queue. In this invention, store transactions in the unit pipe and queue are checked whenever a cache line is modified, and their cache state information is updated as necessary. When the modification is an invalidate, the check tests whether the two share the same physical addressable location. When the modification is a validate, the check tests whether the two involve the same data cache line.
    Type: Application
    Filed: July 22, 2004
    Publication date: January 26, 2006
    Applicants: International Business Machines Corporation, Toshiba America Electronic Components, Inc., Kabushiki Kaisha Toshiba
    Inventors: Brian Barrick, Dwain Hicks, Takeki Osanai
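The update scheme above, patching the cache-state bit of already-queued stores whenever a line is invalidated or validated, can be sketched with a minimal store queue model. The dictionary-based structures and the single line-address match are illustrative simplifications (the abstract distinguishes physical-location matching for invalidates from line matching for validates).

```python
# Hypothetical sketch: stores read their cache-state bit once on entering
# the queue, so later line invalidations/validations must update the
# queued copies in place.

class StoreQueue:
    def __init__(self):
        self.entries = []                  # list of {"line": addr, "valid": bool}

    def enqueue(self, line, line_valid_in_cache):
        self.entries.append({"line": line, "valid": line_valid_in_cache})

    def on_invalidate(self, line):
        # A cache line was invalidated: clear the cached-state bit of
        # every queued store to the same line.
        for e in self.entries:
            if e["line"] == line:
                e["valid"] = False

    def on_validate(self, line):
        # A cache line was filled/validated: set the bit for matching stores.
        for e in self.entries:
            if e["line"] == line:
                e["valid"] = True

sq = StoreQueue()
sq.enqueue(0x40, line_valid_in_cache=True)
sq.enqueue(0x80, line_valid_in_cache=False)
sq.on_invalidate(0x40)                     # snoop patches the first entry
sq.on_validate(0x80)                       # refill patches the second entry
```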
  • Patent number: 6389527
    Abstract: The present invention comprises an LSU that executes load/store instructions. The LSU includes a DCACHE, which temporarily holds data read from and written to external memory; an SPRAM, used for specific purposes other than caching; and an address generator that produces virtual addresses for accessing the DCACHE and the SPRAM. Because the SPRAM can load and store data through the LSU pipeline and exchanges data with external memory via DMA transfer, the invention is especially suited to high-speed processing of large amounts of data, such as image data. Because the LSU can access the SPRAM with the same latency as the DCACHE, once data in external memory has been transferred to the SPRAM the processor can operate on it there, processing large amounts of data in less time than direct access to external memory would require.
    Type: Grant
    Filed: February 8, 1999
    Date of Patent: May 14, 2002
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Michael Raam, Toru Utsumi, Takeki Osanai, Kamran Malik
  • Patent number: 6360298
    Abstract: A load/store instruction control method of a microprocessor according to the present invention has the following feature. The circuit implements a non-blocking cache, which does not stall the pipeline of the microprocessor even when a load/store instruction causes a cache miss. When a load instruction to a no-write-allocate area (one in which store data is written directly to a lower-level memory in the cache hierarchy on a cache miss) causes a cache miss, and a subsequent store instruction misses on the same cache line as the preceding load, the store data of the subsequent store instruction is written to the corresponding cache line during or after the DCACHE refill triggered by the preceding load. Consequently, data inconsistency, in which only the lower-level memory in the cache hierarchy holds the new data while the DCACHE holds old data, does not occur.
    Type: Grant
    Filed: February 10, 2000
    Date of Patent: March 19, 2002
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Takeki Osanai, Johnny K. Szeto, Kyle Tsukamoto
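The miss-merge behavior above can be sketched as a store that deposits its data into a line already being refilled by an earlier load, so the refilled line never ends up stale. The class design and line/offset representation are illustrative assumptions.

```python
# Hypothetical sketch of the non-blocking miss merge: a store that misses
# on a cache line already being refilled by a preceding load writes its
# data into that line when the refill completes.

class NonBlockingCache:
    def __init__(self):
        self.lines = {}            # line address -> {offset: value}
        self.refilling = {}        # line address -> pending store merges

    def load_miss(self, line):
        self.refilling[line] = {}  # refill begins; the pipeline is not stalled

    def store(self, line, offset, value):
        if line in self.lines:
            self.lines[line][offset] = value
        elif line in self.refilling:
            # Merge the store into the line being refilled by the older
            # load, so the DCACHE never holds stale data for this line.
            self.refilling[line][offset] = value
        # else: no-write-allocate case -- the data would be written
        # directly to the lower-level memory (not modeled here).

    def refill_complete(self, line, memory_data):
        data = dict(memory_data)
        data.update(self.refilling.pop(line))   # pending store data wins
        self.lines[line] = data

cache = NonBlockingCache()
cache.load_miss(0x40)                     # preceding load misses
cache.store(0x40, 0, 99)                  # subsequent store misses the same line
cache.refill_complete(0x40, {0: 1, 1: 2}) # refill arrives with old memory data
```

After the refill, the line holds the merged store data at offset 0 and the memory data elsewhere, avoiding the inconsistency the abstract describes.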
  • Patent number: 6327665
    Abstract: An incrementer (5a) adds 1 to the contents of a register (7a) in synchronism with a local clock. A register (7b), which has a larger bit width than the register (7a), is connected to an incrementer (5b). The incrementer (5b) adds 1 to the contents of the register (7b) in synchronism with a system clock. The local clock is terminated by the action of a NAND gate (9) when the most significant bit of the register (7a) becomes 1. The most significant bit of the register (7a) generates a full-bit clear signal (11) for the register (7b), and all bits of the register (7b) are cleared to 0's when this signal becomes 1. Likewise, the most significant bit of the register (7b) serves as a full-bit clear signal for the register (7a), and all bits of the register (7a) are cleared to 0's when that signal becomes 1.
    Type: Grant
    Filed: October 29, 1997
    Date of Patent: December 4, 2001
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Takeki Osanai
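The alternating two-register counter in the last abstract can be simulated in a few lines. This is a simplified behavioral sketch under assumed bit widths (4 and 8); it also gates the wide register's counting to the phase in which the local clock is stopped, which is a modeling simplification rather than a detail stated in the abstract.

```python
# Hypothetical simulation of the dual-clock counter: the small register
# counts on the local clock until its MSB sets, which gates the local
# clock off and clears the wide register; the wide register then counts
# on the system clock until its MSB sets, which clears the small register
# and re-enables the local clock.

SMALL_BITS, WIDE_BITS = 4, 8       # illustrative widths

class DualClockTimer:
    def __init__(self):
        self.small = 0             # register 7a, clocked by the local clock
        self.wide = 0              # register 7b, clocked by the system clock

    @staticmethod
    def msb(value, bits):
        return (value >> (bits - 1)) & 1

    def local_clock_tick(self):
        if self.msb(self.small, SMALL_BITS):
            return False           # NAND gate has stopped the local clock
        self.small += 1
        if self.msb(self.small, SMALL_BITS):
            self.wide = 0          # full-bit clear signal to the wide register
        return True

    def system_clock_tick(self):
        if self.msb(self.small, SMALL_BITS):      # simplification: count only
            self.wide += 1                        # while the local clock is off
            if self.msb(self.wide, WIDE_BITS):
                self.small = 0     # full-bit clear re-enables the local clock

timer = DualClockTimer()
local_ticks = 0
while timer.local_clock_tick():    # local phase: runs until MSB of 7a sets
    local_ticks += 1
for _ in range(1 << (WIDE_BITS - 1)):  # system phase: 128 ticks to set MSB of 7b
    timer.system_clock_tick()
```

Stopping the fast local clock for the long system-clocked phase is what makes this arrangement attractive for low-power timers.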