Patents by Inventor Jih-Kwon Peir

Jih-Kwon Peir has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8842690
    Abstract: A data structure is provided for storing network contact information based on an array of physical memory locations. Virtual vectors are constructed for each source, wherein each element in each virtual vector is assigned to a corresponding physical memory location within the array. The physical memory locations are shared between the virtual vectors uniformly at random so that the noise introduced by sharing can be predicted and removed. A method for storing network contact information is also provided in which a hash function is performed using the address of a source host to find a virtual vector for holding information about the source host. A second hash function is performed using the address of a destination host to find a virtual memory location, within the virtual vector, for holding information about the destination host. Finally, information is stored at a physical memory location assigned to the virtual memory location.
    Type: Grant
    Filed: April 2, 2010
    Date of Patent: September 23, 2014
    Assignee: University of Florida Research Foundation, Incorporated
    Inventors: Shigang Chen, Jih-Kwon Peir, Myungkeun Yoon, Tao Li
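To make the two-hash scheme in the abstract of patent 8842690 concrete, here is a minimal Python sketch. The array size, virtual-vector length, SHA-256-based hashing, and the linear-counting estimator are illustrative assumptions rather than the patented parameters, and the sketch omits the noise-removal step the abstract mentions.

```python
import hashlib
import math

M = 1 << 20   # assumed size of the shared physical bit array
V = 256       # assumed number of elements in each source's virtual vector


def _hash(*parts: str) -> int:
    # Deterministic 64-bit hash over the joined parts (illustrative only).
    digest = hashlib.sha256("|".join(parts).encode()).digest()
    return int.from_bytes(digest[:8], "big")


physical = bytearray(M)  # one byte per bit, for clarity


def record_contact(src: str, dst: str) -> None:
    """Set the bit for the (src, dst) contact in src's virtual vector."""
    elem = _hash("dst", dst) % V              # hash of destination: element within the virtual vector
    slot = _hash("vec", src, str(elem)) % M   # hash of source + element: shared physical location
    physical[slot] = 1


def estimate_spread(src: str) -> float:
    """Rough count of distinct destinations contacted by src.

    Uses the standard bitmap (linear-counting) estimator; the patented
    method additionally removes the noise introduced by uniformly random
    sharing of physical locations, which this sketch does not attempt.
    """
    zeros = sum(1 for i in range(V)
                if physical[_hash("vec", src, str(i)) % M] == 0)
    zeros = max(zeros, 1)                     # avoid log of zero when the vector is saturated
    return V * math.log(V / zeros)
```

Because every element of every virtual vector maps onto the same shared array, memory stays small even for many sources; the cost is the sharing noise that the full method must estimate and subtract.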
  • Publication number: 20110289295
    Abstract: A data structure is provided for storing network contact information based on an array of physical memory locations. Virtual vectors are constructed for each source, wherein each element in each virtual vector is assigned to a corresponding physical memory location within the array. The physical memory locations are shared between the virtual vectors uniformly at random so that the noise introduced by sharing can be predicted and removed. A method for storing network contact information is also provided in which a hash function is performed using the address of a source host to find a virtual vector for holding information about the source host. A second hash function is performed using the address of a destination host to find a virtual memory location, within the virtual vector, for holding information about the destination host. Finally, information is stored at a physical memory location assigned to the virtual memory location.
    Type: Application
    Filed: April 2, 2010
    Publication date: November 24, 2011
    Applicant: University of Florida Research Foundation, Inc.
    Inventors: Shigang Chen, Jih-Kwon Peir, Myungkeun Yoon, Tao Li
  • Patent number: 7076613
    Abstract: The invention provides a cache management system comprising in various embodiments pre-load and pre-own functionality to enhance cache efficiency in shared memory distributed cache multiprocessor computer systems. Some embodiments of the invention comprise an invalidation history table to record the line addresses of cache lines invalidated through dirty or clean invalidation, and which is used such that invalidated cache lines recorded in an invalidation history table are reloaded into cache by monitoring the bus for cache line addresses of cache lines recorded in the invalidation history table. In some further embodiments, a write-back bit associated with each L2 cache entry records when either a hit to the same line in another processor is detected or when the same line is invalidated in another processor's cache, and the system broadcasts write-backs from the selected local cache only when the line being written back has a write-back bit that has been set.
    Type: Grant
    Filed: January 21, 2004
    Date of Patent: July 11, 2006
    Assignee: Intel Corporation
    Inventors: Jih-Kwon Peir, Steve Y. Zhang, Scott H. Robinson, Konrad Lai, Wen-Hann Wang
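The invalidation-history-table mechanism described in patent 7076613 can be pictured with a short Python sketch. Only the pre-load path is shown; the table capacity, the `on_bus_transaction` snoop hook, and the `cache.fill` interface are hypothetical names invented for illustration, not structures defined by the patent.

```python
from collections import OrderedDict

IHT_CAPACITY = 64  # assumed table size; the patent does not commit to one


class InvalidationHistoryTable:
    """Remembers line addresses invalidated out of the local cache (dirty or
    clean) and pre-loads a line when its address is seen on the bus again."""

    def __init__(self, cache):
        self.cache = cache              # local cache being managed (hypothetical interface)
        self.table = OrderedDict()      # recently invalidated line addresses, oldest first

    def on_invalidate(self, line_addr: int) -> None:
        """Record a local cache line that was just invalidated."""
        self.table[line_addr] = True
        self.table.move_to_end(line_addr)
        if len(self.table) > IHT_CAPACITY:
            self.table.popitem(last=False)          # drop the oldest entry

    def on_bus_transaction(self, line_addr: int, data: bytes) -> None:
        """Snoop the bus: reload a previously invalidated line when it reappears."""
        if line_addr in self.table:
            self.cache.fill(line_addr, data)        # hypothetical fill method
            del self.table[line_addr]
```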
  • Publication number: 20040268054
    Abstract: The invention provides a cache management system comprising in various embodiments pre-load and pre-own functionality to enhance cache efficiency in shared memory distributed cache multiprocessor computer systems. Some embodiments of the invention comprise an invalidation history table to record the line addresses of cache lines invalidated through dirty or clean invalidation, and which is used such that invalidated cache lines recorded in an invalidation history table are reloaded into cache by monitoring the bus for cache line addresses of cache lines recorded in the invalidation history table. In some further embodiments, a write-back bit associated with each L2 cache entry records when either a hit to the same line in another processor is detected or when the same line is invalidated in another processor's cache, and the system broadcasts write-backs from the selected local cache only when the line being written back has a write-back bit that has been set.
    Type: Application
    Filed: January 21, 2004
    Publication date: December 30, 2004
    Applicant: Intel Corporation
    Inventors: Jih-Kwon Peir, Steve Y. Zhang, Scott H. Robinson, Konrad Lai, Wen-Hann Wang
  • Patent number: 6725341
    Abstract: The invention provides a cache management system comprising in various embodiments pre-load and pre-own functionality to enhance cache efficiency in shared memory distributed cache multiprocessor computer systems. Some embodiments of the invention comprise an invalidation history table to record the line addresses of cache lines invalidated through dirty or clean invalidation, and which is used such that invalidated cache lines recorded in an invalidation history table are reloaded into cache by monitoring the bus for cache line addresses of cache lines recorded in the invalidation history table. In some further embodiments, a write-back bit associated with each L2 cache entry records when either a hit to the same line in another processor is detected or when the same line is invalidated in another processor's cache, and the system broadcasts write-backs from the selected local cache only when the line being written back has a write-back bit that has been set.
    Type: Grant
    Filed: June 28, 2000
    Date of Patent: April 20, 2004
    Assignee: Intel Corporation
    Inventors: Jih-Kwon Peir, Steve Y. Zhang, Scott H. Robinson, Konrad Lai, Wen-Hann Wang
  • Patent number: 6711662
    Abstract: A shared-memory system includes processing modules communicating with each other through a network. Each of the processing modules includes a processor, a cache, and a memory unit that is locally accessible by the processor and remotely accessible via the network by all other processors. A home directory records states and locations of data blocks in the memory unit. A prediction facility that contains reference history information of the data blocks predicts a next requester of a number of the data blocks that have been referenced recently. The next requester is informed by the prediction facility of the current owner of the data block. As a result, the next requester can issue a request to the current owner directly without an additional hop through the home directory.
    Type: Grant
    Filed: March 29, 2001
    Date of Patent: March 23, 2004
    Assignee: Intel Corporation
    Inventors: Jih-Kwon Peir, Konrad Lai
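As a rough illustration of the prediction facility in patent 6711662, the sketch below keeps a short per-block reference history at the home directory and notifies the predicted next requester of the current owner, so that requester can later contact the owner directly. The class name, the two-entry history depth, and the `send` callback are assumptions made for the example.

```python
from collections import defaultdict, deque

HISTORY_DEPTH = 2  # assumed length of the per-block reference history


class HomeDirectoryPredictor:
    """Per-block reference history kept at the home directory; the most
    recent requesters of a block are used to guess the next one."""

    def __init__(self):
        self.owner = {}                                           # block -> current owner node
        self.history = defaultdict(lambda: deque(maxlen=HISTORY_DEPTH))

    def on_request(self, block: int, requester: int) -> None:
        """Record a reference to the block and update its owner."""
        self.history[block].append(requester)
        self.owner[block] = requester

    def predict_next_requester(self, block: int):
        """Naive predictor: assume requesters alternate, so the node seen
        just before the current owner is likely to ask next."""
        h = self.history[block]
        return h[0] if len(h) == HISTORY_DEPTH else None

    def notify(self, block: int, send):
        """Tell the predicted next requester who currently owns the block,
        so its later request can skip the extra hop through the directory."""
        nxt = self.predict_next_requester(block)
        if nxt is not None:
            send(nxt, {"block": block, "owner": self.owner[block]})
```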
  • Publication number: 20030208665
    Abstract: A processor may use a cache hit/miss prediction table (CPT) to predict whether a load will hit or miss and use this information to schedule dependent instructions in the instruction pipeline. The CPT may be a Bloom filter which uses a portion of the load address to index the table.
    Type: Application
    Filed: May 1, 2002
    Publication date: November 6, 2003
    Inventors: Jih-Kwon Peir, Konrad Lai
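A minimal sketch of how a Bloom-filter-style cache hit/miss prediction table indexed by a portion of the load address might look. The table size, the 64-byte line-offset shift, and the single hash function are illustrative assumptions; a real CPT design could differ.

```python
BITS = 4096        # assumed predictor size (power of two)
LINE_SHIFT = 6     # assumed 64-byte cache lines


class CacheHitPredictor:
    """Bloom-filter-style cache hit/miss prediction table (CPT).

    A bit array is indexed by part of the load address: a clear bit means
    the line cannot be in the cache (predict miss), so the scheduler can
    hold back dependent instructions instead of issuing them on the
    assumption of a hit."""

    def __init__(self):
        self.bits = bytearray(BITS)

    def _index(self, load_addr: int) -> int:
        return (load_addr >> LINE_SHIFT) & (BITS - 1)

    def predict_hit(self, load_addr: int) -> bool:
        return bool(self.bits[self._index(load_addr)])

    def on_fill(self, load_addr: int) -> None:
        """A line was brought into the cache: mark its index."""
        self.bits[self._index(load_addr)] = 1

    def on_evict(self, load_addr: int) -> None:
        """Clearing on eviction can create false miss predictions for other
        lines that share the index; that costs performance, not correctness."""
        self.bits[self._index(load_addr)] = 0
```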
  • Publication number: 20020144063
    Abstract: A shared-memory system includes processing modules communicating with each other through a network. Each of the processing modules includes a processor, a cache, and a memory unit that is locally accessible by the processor and remotely accessible via the network by all other processors. A home directory records states and locations of data blocks in the memory unit. A prediction facility that contains reference history information of the data blocks predicts a next requester of a number of the data blocks that have been referenced recently. The next requester is informed by the prediction facility of the current owner of the data block. As a result, the next requester can issue a request to the current owner directly without an additional hop through the home directory.
    Type: Application
    Filed: March 29, 2001
    Publication date: October 3, 2002
    Inventors: Jih-Kwon Peir, Konrad Lai
  • Patent number: 6128755
    Abstract: A multiprocessor computer system and associated method having processing error detection capability is disclosed for error-free processing of an instruction set. The instruction set is replicated and processed substantially in parallel through a plurality of processing nodes of the computer system. Each processing node collects a compressed hardware signature commensurate with and derived from the execution of the instruction set. Subsequent to instruction set processing, the collected hardware signatures from each processing node are compared and the presence or absence of a processing error is determined with reference to a predetermined voting scheme. Processing of the instruction sets through the plurality of processing nodes is typically asynchronous, with synchronization occurring subsequent to each processor's execution of the instruction set, such that each processor can be driven by an independent clock.
    Type: Grant
    Filed: August 25, 1994
    Date of Patent: October 3, 2000
    Assignee: International Business Machines Corporation
    Inventors: Stephen Edward Bello, Kien Anh Hua, Jih-Kwon Peir
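The signature-collection and voting idea of patent 6128755 can be sketched briefly in Python. The SHA-256 digest standing in for a compressed hardware signature, the `execute` callback, and the simple majority vote are assumptions for illustration only.

```python
import hashlib
from collections import Counter


def run_and_sign(execute, instruction_set) -> str:
    """Run the replicated instruction set on one node and return a compressed
    signature of the execution (a SHA-256 digest stands in for the hardware
    signature each processing node would collect)."""
    trace = execute(instruction_set)          # 'execute' is a hypothetical per-node runner
    return hashlib.sha256(repr(trace).encode()).hexdigest()


def vote(signatures):
    """Compare the nodes' signatures under a simple majority-vote scheme.

    Returns (agreed_signature, error_flag): agreement by a majority means
    no processing error; any split short of a majority flags an error."""
    counts = Counter(signatures)
    signature, hits = counts.most_common(1)[0]
    if hits > len(signatures) // 2:
        return signature, False
    return None, True


# Example with three replicated nodes, executed asynchronously and then
# synchronized at this comparison point:
#   sigs = [run_and_sign(node, program) for node in (node_a, node_b, node_c)]
#   agreed, error = vote(sigs)
```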
  • Patent number: 5148538
    Abstract: This invention implements a cache access system that shortens the address generation machine cycle of a digital computer, while simultaneously avoiding the synonym problem of logical addressing. The invention is based on the concept of predicting what the real address used in the cache memory will be, independent of the generation of the logical address. The prediction involves recalling the last real address used to access the cache memory for a particular instruction, and then using that real address to access the cache memory. Incorrect guesses are corrected and kept to a minimum through monitoring the history of instructions and real addresses called for in the computer. This allows the cache memory to retrieve the information faster than waiting for the virtual address to be generated and then translating the virtual address into a real address.
    Type: Grant
    Filed: October 20, 1989
    Date of Patent: September 15, 1992
    Assignee: International Business Machines Corporation
    Inventors: Joseph O. Celtruda, Kien A. Hua, Anderson H. Hunt, Lishing Liu, Jih-Kwon Peir, David R. Pruett, Joseph L. Temple, III
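The last-real-address prediction described in patent 5148538 amounts to a small table keyed by instruction address. The sketch below is a minimal software analogue; the class name and dictionary representation are chosen purely for illustration.

```python
class RealAddressPredictor:
    """Remembers, per instruction address, the last real (physical) address
    that instruction used, so the cache can be probed with the predicted
    real address before the logical address is even generated and translated."""

    def __init__(self):
        self.last_real = {}                      # instruction address -> last real address used

    def predict(self, instr_addr: int):
        """Return the predicted real address, or None if there is no history."""
        return self.last_real.get(instr_addr)

    def update(self, instr_addr: int, real_addr: int) -> None:
        """Record the verified real address once translation completes; this
        also corrects any earlier wrong guess for this instruction."""
        self.last_real[instr_addr] = real_addr
```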