Patents by Inventor Brian David Barrick
Brian David Barrick has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 7904697
Abstract: An apparatus and method for executing a Load Register instruction in which the source data of the Load Register instruction is retained in its original physical register while the architected target register is mapped to that same physical register. In this state the two architected registers alias to one physical register. When the source register of the Load Register instruction is specified as the target of a subsequent instruction, a free physical register is assigned to the Load Register's source register, and with this assignment the alias is broken. Similarly, when the target register of the Load Register instruction is the target of a subsequent instruction, a new physical register is assigned to the Load Register's target register, and with this assignment the alias is broken.
Type: Grant
Filed: March 7, 2008
Date of Patent: March 8, 2011
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Brian William Curran, Lee Evan Eisen
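The aliasing scheme in this abstract can be illustrated with a minimal Python sketch of a rename map. All names here (`RenameMap`, `load_register`, the register labels) are illustrative, not from the patent; real hardware tracks this in rename tables, not dictionaries.

```python
# Illustrative sketch of register aliasing for a Load Register instruction:
# the architected target is mapped to the source's physical register,
# avoiding a data copy, and the alias is broken when either architected
# register is later written.

class RenameMap:
    def __init__(self):
        self.map = {}          # architected register -> physical register
        self.next_phys = 0

    def fresh(self):
        p = self.next_phys
        self.next_phys += 1
        return p

    def write(self, arch):
        """A plain instruction writing `arch` gets a fresh physical register,
        which is what breaks any existing alias on `arch`."""
        self.map[arch] = self.fresh()
        return self.map[arch]

    def load_register(self, dst, src):
        """Alias dst to src's physical register instead of copying the data."""
        if src not in self.map:
            self.map[src] = self.fresh()
        self.map[dst] = self.map[src]   # both architected regs share one physical reg

rm = RenameMap()
rm.write("r1")                  # r1 -> p0
rm.load_register("r2", "r1")    # r2 aliases p0; no data movement
assert rm.map["r1"] == rm.map["r2"]
rm.write("r2")                  # later write to r2 breaks the alias
assert rm.map["r1"] != rm.map["r2"]
```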
-
Patent number: 7769985
Abstract: The present invention provides a method for a load address dependency mechanism in a high-frequency, low-power processor. A load instruction corresponding to a memory address is received. At least one unexecuted preceding instruction corresponding to the same memory address is identified. The load instruction is stored in a miss queue and tagged as a local miss.
Type: Grant
Filed: February 4, 2008
Date of Patent: August 3, 2010
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Kimberly Marie Fernsler, Dwain Alan Hicks, David Scott Ray, David Shippy, Takeki Osanai
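A minimal sketch of the dependency check described above, with hypothetical names; the real mechanism operates on hardware queues and address comparators rather than Python lists.

```python
# Illustrative sketch of the load-address dependency mechanism: if any
# unexecuted preceding instruction touches the same memory address, the
# load is parked in a miss queue and tagged as a "local miss".

def issue_load(load_addr, pending_instructions, miss_queue):
    """pending_instructions: list of (address, executed) tuples for
    instructions preceding the load in program order."""
    depends = any(addr == load_addr and not executed
                  for addr, executed in pending_instructions)
    if depends:
        # Defer the load instead of forwarding possibly stale data.
        miss_queue.append({"addr": load_addr, "local_miss": True})
        return "local_miss"
    return "issue"

mq = []
pending = [(0x100, True), (0x200, False)]   # an op at 0x200 not yet executed
status = issue_load(0x200, pending, mq)     # load depends on it -> local miss
```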
-
Patent number: 7730290
Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.
Type: Grant
Filed: February 25, 2008
Date of Patent: June 1, 2010
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Kimberly Marie Fernsler, Dwain A. Hicks, Takeki Osanai, David Scott Ray
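The hit-suppression step in this abstract can be sketched as follows; the structure and names are hypothetical, and a real load/store unit would compare address tags in hardware.

```python
# Illustrative sketch: a cache hit signal is ignored when an older load
# to the same address is already queued (e.g. still waiting on a reload),
# so the younger load is queued behind it instead of bypassing it.

def execute_load(addr, cache, load_queue):
    cache_hit = addr in cache
    older_same_addr = any(q_addr == addr for q_addr in load_queue)
    if cache_hit and not older_same_addr:
        return ("hit", cache[addr])
    # Either a genuine miss, or the hit signal is ignored: queue the load.
    load_queue.append(addr)
    return ("queued", None)

cache = {0x40: "data_A"}
queue = [0x40]                             # an older load to 0x40 is outstanding
result = execute_load(0x40, cache, queue)  # hit signal ignored -> queued
fresh = execute_load(0x80, cache, [])      # plain miss -> queued
```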
-
Publication number: 20090228692
Abstract: An apparatus and method for executing a Load Register instruction in which the source data of the Load Register instruction is retained in its original physical register while the architected target register is mapped to that same physical register. In this state the two architected registers alias to one physical register. When the source register of the Load Register instruction is specified as the target of a subsequent instruction, a free physical register is assigned to the Load Register's source register, and with this assignment the alias is broken. Similarly, when the target register of the Load Register instruction is the target of a subsequent instruction, a new physical register is assigned to the Load Register's target register, and with this assignment the alias is broken.
Type: Application
Filed: March 7, 2008
Publication date: September 10, 2009
Applicant: International Business Machines Corporation
Inventors: Brian David Barrick, Brian William Curran, Lee Evan Eisen
-
Patent number: 7464242
Abstract: A method, an apparatus, and a computer program product are provided for detecting load/store dependency in a memory system by dynamically changing the address width used for comparison. An incoming load/store operation must be compared to the operations in the pipeline and the queues to avoid address conflicts. Overall, the present invention introduces a cache hit or cache miss input into the load/store dependency logic. If the incoming load operation is a cache hit, then the quadword boundary address value is used for detection. If the incoming load operation is a cache miss, then the cacheline boundary address value is used for detection. This invention enhances the performance of LHS and LHR operations in a memory system.
Type: Grant
Filed: February 3, 2005
Date of Patent: December 9, 2008
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Dwain Alan Hicks, Takeki Osanai, David Scott Ray
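The variable-width comparison reads naturally as a mask selection. This sketch assumes 16-byte quadwords and 128-byte cache lines purely for illustration; the patent does not fix these sizes.

```python
# Illustrative sketch of dynamic address-width dependency detection.
# Assumed sizes: 16-byte quadwords and 128-byte cache lines. On a cache
# hit only the quadword matters; on a miss the whole cache line will be
# fetched, so any overlap within the line is a conflict.

QUADWORD_MASK = ~0xF     # drop low 4 bits  -> 16-byte granularity
CACHELINE_MASK = ~0x7F   # drop low 7 bits  -> 128-byte granularity

def conflicts(load_addr, queued_addr, cache_hit):
    mask = QUADWORD_MASK if cache_hit else CACHELINE_MASK
    return (load_addr & mask) == (queued_addr & mask)

# Same cache line, different quadwords:
assert not conflicts(0x1008, 0x1048, cache_hit=True)   # hit: compare quadwords
assert conflicts(0x1008, 0x1048, cache_hit=False)      # miss: compare cache lines
```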
-
Publication number: 20080148017
Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.
Type: Application
Filed: February 25, 2008
Publication date: June 19, 2008
Inventors: Brian David Barrick, Kimberly Marie Fernsler, Dwain A. Hicks, Takeki Osanai, David Scott Ray
-
Publication number: 20080141014
Abstract: The present invention provides a method for a load address dependency mechanism in a high-frequency, low-power processor. A load instruction corresponding to a memory address is received. At least one unexecuted preceding instruction corresponding to the same memory address is identified. The load instruction is stored in a miss queue and tagged as a local miss.
Type: Application
Filed: February 4, 2008
Publication date: June 12, 2008
Inventors: Brian David Barrick, Kimberly Marie Fernsler, Dwain Alan Hicks, David Scott Ray, David Shippy, Takeki Osanai
-
Patent number: 7376816
Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.
Type: Grant
Filed: November 12, 2004
Date of Patent: May 20, 2008
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Kimberly Marie Fernsler, Dwain A. Hicks, Takeki Osanai, David Scott Ray
-
Patent number: 7363468
Abstract: The present invention provides a method for a load address dependency mechanism in a high-frequency, low-power processor. A load instruction corresponding to a memory address is received. At least one unexecuted preceding instruction corresponding to the same memory address is identified. The load instruction is stored in a miss queue and tagged as a local miss.
Type: Grant
Filed: November 18, 2004
Date of Patent: April 22, 2008
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Kimberly Marie Fernsler, Dwain Alan Hicks, David Scott Ray, David Shippy, Takeki Osanai
-
Patent number: 7302527
Abstract: Methods for executing load instructions are disclosed. In one method, a load instruction and corresponding thread information are received. Address information of the load instruction is used to generate an address of the needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load and/or store instruction specifying the same address. If such a previous load/store instruction is found, the thread information is used to determine if the previous load/store instruction is from the same thread. If the previous load/store instruction is from the same thread, the cache hit signal is ignored, and the load instruction is stored in the queue. A load/store unit is also described.
Type: Grant
Filed: November 12, 2004
Date of Patent: November 27, 2007
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Kimberly Marie Fernsler, Dwain A. Hicks, Takeki Osanai, David Scott Ray
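The thread-aware twist in this abstract is the extra comparison on thread identity; a hypothetical sketch of just that decision:

```python
# Illustrative sketch of the thread-aware variant: a matching queued
# load/store only forces the hit signal to be ignored when it comes from
# the same thread; a match from a different thread leaves the hit valid.

def should_ignore_hit(addr, thread, queue):
    """queue holds (address, thread_id) pairs for older load/store ops."""
    return any(q_addr == addr and q_thread == thread
               for q_addr, q_thread in queue)

queue = [(0x40, 0)]                           # older op to 0x40 from thread 0
assert should_ignore_hit(0x40, 0, queue)      # same thread: ignore the hit
assert not should_ignore_hit(0x40, 1, queue)  # other thread: hit stands
```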
-
Patent number: 7302530
Abstract: The present invention provides a method of updating the cache state information for store transactions in a system in which store transactions only read the cache state information upon entering the unit pipe or the store portion of the store/load queue. In this invention, store transactions in the unit pipe and queue are checked whenever a cache line is modified, and their cache state information is updated as necessary. When the modification is an invalidate, the check tests that the two share the same physical addressable location. When the modification is a validate, the check tests that the two involve the same data cache line.
Type: Grant
Filed: July 22, 2004
Date of Patent: November 27, 2007
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Dwain Alan Hicks, Takeki Osanai
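A simplified sketch of the patching step described above. It assumes a 128-byte line and matches both invalidates and validates by cache line for brevity; the patent distinguishes matching by physical location (invalidate) from matching by data cache line (validate), a finer distinction than this model captures.

```python
# Illustrative sketch: in-flight stores snapshot their cache state when
# they enter the queue, so any later cache-line change must patch those
# snapshots to keep them consistent.

LINE_BYTES = 128  # assumed cache-line size

def line_of(addr):
    return addr // LINE_BYTES

def on_cache_line_change(stores, event, addr):
    """stores: list of dicts with 'addr' and a 'line_valid' snapshot."""
    for st in stores:
        if event == "invalidate" and line_of(st["addr"]) == line_of(addr):
            st["line_valid"] = False     # the store's line was invalidated
        elif event == "validate" and line_of(st["addr"]) == line_of(addr):
            st["line_valid"] = True      # the store's data cache line arrived
    return stores

inflight = [{"addr": 0x1010, "line_valid": True},
            {"addr": 0x2020, "line_valid": False}]
on_cache_line_change(inflight, "invalidate", 0x1000)  # hits first store's line
on_cache_line_change(inflight, "validate", 0x2020)    # second store's line fills
```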
-
Patent number: 7130947
Abstract: The present invention provides a method of arbitration for resources that allows requestors from multiple frequency domains. Most requestors generate requests at full speed. A small number of low-speed requestors generate requests every two full-speed cycles and hold their requests for two full-speed cycles. The arbitration method gives priority to the requests from the low-speed requestors and guarantees that two requests made by the half-speed requestors at the beginning of a low-speed cycle will be granted over the course of the low-speed cycle. The requests generated by the low-speed requestors are issued in phases. Issuance of later phases of a request is blocked when the request has been granted in an earlier phase.
Type: Grant
Filed: April 29, 2004
Date of Patent: October 31, 2006
Assignee: International Business Machines Corporation
Inventor: Brian David Barrick
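A heavily simplified, single-requestor-per-domain sketch of the phased arbitration described above; names and the two-phase bookkeeping are illustrative, and the patent's guarantee for two simultaneous half-speed requests is not modeled here.

```python
# Illustrative sketch of the cross-frequency arbiter: a half-speed
# requestor holds its request for two full-speed cycles (two "phases"),
# the arbiter prioritizes it, and a grant in phase 0 blocks the
# phase-1 reissue of the same request.

def arbitrate(cycles):
    """cycles: list of (half_speed_pending, full_speed_pending) per
    full-speed cycle; returns who was granted each cycle."""
    grants = []
    half_granted = False      # granted earlier in the current low-speed cycle?
    for i, (half, full) in enumerate(cycles):
        phase = i % 2         # two full-speed cycles per low-speed cycle
        if phase == 0:
            half_granted = False
        if half and not half_granted:
            grants.append("half")     # low-speed request wins arbitration
            half_granted = True       # block its later phase
        elif full:
            grants.append("full")
        else:
            grants.append(None)
    return grants

# A half-speed request held over cycles 0-1 while a full-speed requestor
# also asks every cycle: the half-speed one wins once, then yields.
g = arbitrate([(True, True), (True, True), (False, True), (False, True)])
```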
-
Patent number: 6895454
Abstract: A method and an apparatus for sharing a request queue between two or more destinations. The method and apparatus utilize a common data table and a common age queue. The age queue is used to select the oldest request. The corresponding request from the common data table is then extracted and sent to the appropriate destination.
Type: Grant
Filed: October 18, 2001
Date of Patent: May 17, 2005
Assignee: International Business Machines Corporation
Inventor: Brian David Barrick
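The data-table-plus-age-queue split can be sketched directly; class and method names here are hypothetical.

```python
# Illustrative sketch of a request queue shared between destinations:
# a common data table holds the requests, and a common age queue
# (oldest first) picks which entry to dispatch next.
from collections import deque

class SharedRequestQueue:
    def __init__(self):
        self.table = {}          # entry id -> (destination, payload)
        self.age = deque()       # entry ids, oldest at the left
        self.next_id = 0

    def enqueue(self, destination, payload):
        self.table[self.next_id] = (destination, payload)
        self.age.append(self.next_id)
        self.next_id += 1

    def dispatch(self):
        """Extract the oldest request and send it to its destination."""
        entry = self.age.popleft()
        return self.table.pop(entry)

q = SharedRequestQueue()
q.enqueue("memory", "read A")
q.enqueue("io", "write B")
dest, payload = q.dispatch()     # oldest request goes first
```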
-
Patent number: 6779162
Abstract: A method of analyzing timing reports in a microprocessor design for quick identification of all negative timing paths is provided. Timing paths are first grouped and saved in a list file. A timing analysis program searches the timing report file for timing paths that match those in the list file. Summary reports are generated for the existing timing paths, and if there are new timing paths, summary reports for the new timing paths are generated as well. The new timing paths go through the same procedure until all negative timing paths are identified.
Type: Grant
Filed: January 7, 2002
Date of Patent: August 17, 2004
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Alvan Wing Ng
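The triage step described above amounts to matching negative-slack paths against a known list; a hypothetical sketch (path names, slack values, and the dict shapes are all illustrative):

```python
# Illustrative sketch of the timing-report triage: known negative paths
# are grouped in a list file; the analysis matches report paths against
# that list and splits the results into existing vs. new negative paths.

def classify_paths(report_paths, known_paths):
    """report_paths: {path_name: slack_ns}; known_paths: set of names.
    Returns (existing_negative, new_negative)."""
    existing, new = {}, {}
    for name, slack in report_paths.items():
        if slack >= 0:
            continue                  # only negative-slack paths matter
        (existing if name in known_paths else new)[name] = slack
    return existing, new

report = {"clk->regA": -0.12, "clk->regB": 0.05, "clk->regC": -0.30}
known = {"clk->regA"}
existing, new = classify_paths(report, known)
```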
-
Publication number: 20030131328
Abstract: A method of analyzing timing reports in a microprocessor design for quick identification of all negative timing paths is provided. Timing paths are first grouped and saved in a list file. A timing analysis program searches the timing report file for timing paths that match those in the list file. Summary reports are generated for the existing timing paths, and if there are new timing paths, summary reports for the new timing paths are generated as well. The new timing paths go through the same procedure until all negative timing paths are identified.
Type: Application
Filed: January 7, 2002
Publication date: July 10, 2003
Applicant: International Business Machines Corporation
Inventors: Brian David Barrick, Alvan Wing Ng
-
Patent number: 6578130
Abstract: A method and apparatus for prefetching data in computer systems that tracks the number of prefetches currently active and compares that number to a preset maximum number of allowable prefetches to determine if additional prefetches should currently be performed. By limiting the number of prefetches being performed at any given time, the use of system resources for prefetching can be controlled, and thus system performance can be optimized.
Type: Grant
Filed: October 18, 2001
Date of Patent: June 10, 2003
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Michael John Mayfield, Brian Patrick Hanley
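The throttling idea is a simple counter against a preset ceiling; a minimal sketch with hypothetical names:

```python
# Illustrative sketch of prefetch throttling: track the number of
# in-flight prefetches and refuse new ones past a preset maximum.

class PrefetchThrottle:
    def __init__(self, max_active):
        self.max_active = max_active
        self.active = 0

    def try_prefetch(self):
        if self.active >= self.max_active:
            return False          # limit reached: skip this prefetch
        self.active += 1
        return True

    def complete(self):
        self.active -= 1          # a prefetch finished; free a slot

t = PrefetchThrottle(max_active=2)
launched = [t.try_prefetch() for _ in range(3)]   # third one is refused
t.complete()
retry = t.try_prefetch()                          # a slot freed up
```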
-
Publication number: 20030079068
Abstract: A method and an apparatus for sharing a request queue between two or more destinations. The method and apparatus utilize a common data table and a common age queue. The age queue is used to select the oldest request. The corresponding request from the common data table is then extracted and sent to the appropriate destination.
Type: Application
Filed: October 18, 2001
Publication date: April 24, 2003
Applicant: International Business Machines Corporation
Inventor: Brian David Barrick
-
Publication number: 20030079089
Abstract: A method and apparatus for prefetching data in computer systems that tracks the number of prefetches currently active and compares that number to a preset maximum number of allowable prefetches to determine if additional prefetches should currently be performed. By limiting the number of prefetches being performed at any given time, the use of system resources for prefetching can be controlled, and thus system performance can be optimized.
Type: Application
Filed: October 18, 2001
Publication date: April 24, 2003
Applicant: International Business Machines Corporation
Inventors: Brian David Barrick, Michael John Mayfield, Brian Patrick Hanley