Patents by Inventor Dwain A. Hicks

Dwain A. Hicks has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11409663
    Abstract: A computer system includes a translation lookaside buffer (TLB) and a processor. The TLB comprises a first TLB array and a second TLB array, and stores entries comprising virtual address information and corresponding real address information. The processor is configured to receive a first virtual address for translation, and to concurrently determine if the TLB stores a physical address associated with the first virtual address based on a first portion and a second portion of the first virtual address. The first portion is associated with a first page size and the second portion is associated with a second page size (different from the first page size). The first portion is used to perform a lookup in either one of the first TLB array and the second TLB array, and the second portion is used to perform a lookup in the other one of the first TLB array and the second TLB array.
    Type: Grant
    Filed: November 4, 2020
    Date of Patent: August 9, 2022
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Dwain A. Hicks
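
The concurrent two-array lookup described in the abstract above can be pictured with a small software model. The sketch below is a hypothetical Python model, not the patented hardware: the TlbArray class, the 4 KiB and 2 MiB page sizes, the direct-mapped geometry, and the install/translate helpers are all illustrative assumptions. Each array is probed with the address portion for its own page size, and a hit in either array yields the real address.

```python
SMALL_PAGE = 4 * 1024          # assumed 4 KiB page size
LARGE_PAGE = 2 * 1024 * 1024   # assumed 2 MiB page size

class TlbArray:
    """A direct-mapped array of (virtual tag, real page number) entries."""
    def __init__(self, sets, page_size):
        self.sets = sets
        self.page_size = page_size
        self.entries = {}          # set index -> (virtual tag, real page number)

    def portion(self, vaddr):
        """The index/tag portion of the virtual address for this page size."""
        vpn = vaddr // self.page_size
        return vpn % self.sets, vpn // self.sets

    def lookup(self, vaddr):
        index, tag = self.portion(vaddr)
        entry = self.entries.get(index)
        if entry and entry[0] == tag:
            return entry[1] * self.page_size + vaddr % self.page_size
        return None

    def install(self, vaddr, raddr):
        index, tag = self.portion(vaddr)
        self.entries[index] = (tag, raddr // self.page_size)

def translate(arrays, vaddr):
    """Probe both arrays concurrently (here, in one pass) and take any hit."""
    for array in arrays:
        real = array.lookup(vaddr)
        if real is not None:
            return real
    return None                     # TLB miss: a page-table walk would follow

tlb = [TlbArray(64, SMALL_PAGE), TlbArray(64, LARGE_PAGE)]
tlb[0].install(0x0000_7000, 0x1234_5000)       # small-page mapping
tlb[1].install(0x4000_0000, 0x8000_0000)       # large-page mapping
print(hex(translate(tlb, 0x0000_7ABC)))        # hit in the small-page array
print(hex(translate(tlb, 0x4000_0123)))        # hit in the large-page array
```
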
  • Patent number: 11221963
    Abstract: A computer system includes a translation lookaside buffer (TLB) data cache and a processor. The TLB data cache includes a hierarchical configuration comprising a first TLB array, a second TLB array, a third TLB array, and a fourth TLB array. The processor is configured to receive a first address for translation to a second address, and determine whether translation should be performed using a hierarchical page table or a hashed page table. The processor also determines (using a first portion of the first address) whether the first TLB array stores a mapping of the first portion of the first address in response to determining that the translation should be performed using the hashed page table, and retrieves the second address from the third TLB array or the fourth TLB array in response to determining that the first TLB array stores the mapping of the first portion of the first address.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: January 11, 2022
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Dwain A. Hicks
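
One way to read the four-array arrangement in the abstract above is as a small first-level directory that is consulted according to the translation mode (hashed versus hierarchical page table), with a hit there steering the lookup to one of two second-level arrays. The Python sketch below is a loose, hypothetical model under that reading; the dictionary-based arrays, the 24-bit/12-bit address split, and the translate method are assumptions, not the claimed design.

```python
class TwoLevelTlb:
    def __init__(self):
        self.first = {}    # first-level array consulted for hashed-table translations
        self.second = {}   # first-level array consulted for hierarchical translations
        self.third = {}    # second-level array: virtual page -> real page
        self.fourth = {}   # second-level array: virtual page -> real page

    def translate(self, vaddr, uses_hashed_table):
        first_portion = vaddr >> 24                    # assumed address split
        directory = self.first if uses_hashed_table else self.second
        level2_id = directory.get(first_portion)
        if level2_id is None:
            return None                                # miss: walk the page table
        level2 = self.third if level2_id == 3 else self.fourth
        real_page = level2.get(vaddr >> 12)
        if real_page is None:
            return None
        return (real_page << 12) | (vaddr & 0xFFF)

tlb = TwoLevelTlb()
tlb.first[0x00] = 3              # first-level hit steers lookup to the third array
tlb.third[0x00000] = 0x4A7E5     # virtual page 0 -> real page 0x4A7E5
print(hex(tlb.translate(0x0000_0ABC, uses_hashed_table=True)))  # 0x4a7e5abc
```
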
  • Patent number: 11157415
    Abstract: Operation of a multi-slice processor that includes a plurality of execution slices, a plurality of load/store slices, and one or more page walk caches, where operation includes: receiving, at a load/store slice, an instruction to be issued; determining, at the load/store slice, a process type indicating a source of the instruction to be a host process or a guest process; and determining, in accordance with an allocation policy and in dependence upon the process type, an allocation of an entry of the page walk cache, wherein the page walk cache comprises one or more entries for both host processes and guest processes.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: October 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Dwain A. Hicks, Jonathan H. Raymond, George W. Rohrbaugh, III, Shih-Hsiung S. Tung
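
The allocation-policy idea in the abstract above, where host and guest translations share the same page walk cache entries but are allocated according to a policy keyed on the process type, can be sketched in software. The Python model below assumes a simple guest quota as the policy; the PageWalkCache class, its capacity, and the eviction order are illustrative assumptions rather than the claimed mechanism.

```python
from collections import OrderedDict

class PageWalkCache:
    def __init__(self, capacity=8, guest_quota=4):
        self.capacity = capacity
        self.guest_quota = guest_quota        # assumed policy knob
        self.entries = OrderedDict()          # key -> (process_type, payload)

    def allocate(self, key, process_type, payload):
        if process_type == "guest":
            guests = [k for k, (t, _) in self.entries.items() if t == "guest"]
            if len(guests) >= self.guest_quota:
                # Evict the oldest guest entry so guests cannot crowd out the host.
                del self.entries[guests[0]]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry overall
        self.entries[key] = (process_type, payload)

pwc = PageWalkCache()
pwc.allocate(0x10, "host", "host partial walk")
pwc.allocate(0x20, "guest", "guest partial walk")
print(list(pwc.entries))                      # [16, 32]
```
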
  • Patent number: 11061810
    Abstract: A system and method of stopping program execution includes tagging an entry in a virtual cache with an indicator bit where the virtual address of the entry corresponds to a virtual address range in a breakpoint register; in response to a second virtual cache data access demand matching the entry tagged with the indicator bit, determining whether the second data access demand matches the virtual address range of the breakpoint register; and in response to the second data access demand matching the virtual address range of the breakpoint register, flagging an exception and stopping execution of the program. In an embodiment, the method or system enters a slow-mode in response to the second data access demand matching the virtual cache entry with the indicator bit, and performs a full comparison between the second data access demand and the breakpoint register virtual address range.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: July 13, 2021
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Dwain A. Hicks, David A. Hrusecky, Bryan Lloyd
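
The abstract above describes marking virtual-cache entries that overlap the breakpoint register's address range so that only accesses hitting a marked entry pay for the full ("slow-mode") range comparison. The Python sketch below is a hypothetical model of that filtering step; the line size, the VirtualCache class, and the exception type are assumptions.

```python
LINE_SIZE = 128  # assumed cache-line size in bytes

class VirtualCache:
    def __init__(self, bp_lo, bp_hi):
        self.bp_lo, self.bp_hi = bp_lo, bp_hi   # breakpoint register range
        self.lines = {}                          # line address -> indicator bit

    def fill(self, vaddr):
        line = vaddr & ~(LINE_SIZE - 1)
        # Tag the line if any of its bytes fall inside the breakpoint range.
        self.lines[line] = not (line + LINE_SIZE <= self.bp_lo or line > self.bp_hi)

    def access(self, vaddr):
        line = vaddr & ~(LINE_SIZE - 1)
        if line not in self.lines:
            self.fill(vaddr)
        if self.lines[line]:
            # Slow mode: full comparison against the breakpoint register range.
            if self.bp_lo <= vaddr <= self.bp_hi:
                raise RuntimeError(f"breakpoint exception at {vaddr:#x}")
        return f"data at {vaddr:#x}"

cache = VirtualCache(bp_lo=0x1010, bp_hi=0x10EF)
print(cache.access(0x2000))   # untagged line: fast path, no range comparison
print(cache.access(0x1004))   # tagged line, address outside the range: no exception
try:
    cache.access(0x1020)      # tagged line, address inside the range
except RuntimeError as exc:
    print(exc)                # breakpoint exception at 0x1020
```
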
  • Publication number: 20210049107
    Abstract: A computer system includes a translation lookaside buffer (TLB) and a processor. The TLB comprises a first TLB array and a second TLB array, and stores entries comprising virtual address information and corresponding real address information. The processor is configured to receive a first virtual address for translation, and to concurrently determine if the TLB stores a physical address associated with the first virtual address based on a first portion and a second portion of the first virtual address. The first portion is associated with a first page size and the second portion is associated with a second page size (different from the first page size). The first portion is used to perform a lookup in either one of the first TLB array and the second TLB array, and the second portion is used to perform a lookup in the other one of the first TLB array and the second TLB array.
    Type: Application
    Filed: November 4, 2020
    Publication date: February 18, 2021
    Inventors: David Campbell, Dwain A. Hicks
  • Patent number: 10915459
    Abstract: A computer system includes a translation lookaside buffer (TLB) and a processor. The TLB comprises a first TLB array and a second TLB array, and stores entries comprising virtual address information and corresponding real address information. The processor is configured to receive a first virtual address for translation, and to concurrently determine if the TLB stores a physical address associated with the first virtual address based on a first portion and a second portion of the first virtual address. The first portion is associated with a first page size and the second portion is associated with a second page size (different from the first page size). The first portion is used to perform a lookup in either one of the first TLB array and the second TLB array, and the second portion is used to perform a lookup in the other one of the first TLB array and the second TLB array.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: February 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Dwain A. Hicks
  • Patent number: 10824494
    Abstract: Operation of a multi-slice processor that includes a plurality of execution slices, a plurality of load/store slices, and one or more translation caches, where operation includes: determining, at the load/store slice, a real address from a cache hit in the translation cache for an effective address for an instruction received at a load/store slice; determining, at the load/store slice, an error condition corresponding to an access of the real address; determining, at the load/store slice, a process type indicating a source of the instruction to be a guest process; and responsive to determining the error condition, initiating, in dependence upon the process type indicating a source of the instruction to be a guest process, an effective address translation corresponding to a cache miss in the translation cache for the effective address for the instruction.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: November 3, 2020
    Assignee: International Business Machines Corporation
    Inventors: Dwain A. Hicks, Jonathan H. Raymond, Shih-Hsiung S. Tung
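
The recovery flow in the abstract above, where a translation-cache hit followed by an error on the real-address access is retried for guest-originated instructions as though the cache had missed, can be sketched as follows. The Python model is hypothetical; the LoadStoreSlice class, the PermissionError error model, and the callback names are assumptions.

```python
class LoadStoreSlice:
    def __init__(self, translation_cache, full_translate, access_memory):
        self.tcache = translation_cache       # effective -> real address cache
        self.full_translate = full_translate  # the slow, authoritative translation
        self.access_memory = access_memory    # may signal an error condition

    def execute_load(self, effective_addr, process_type):
        real = self.tcache.get(effective_addr)
        if real is None:
            real = self.full_translate(effective_addr)
            self.tcache[effective_addr] = real
        try:
            return self.access_memory(real)
        except PermissionError:
            if process_type != "guest":
                raise                         # host errors are reported as-is
            # Guest error: redo the translation as though the cache had missed.
            real = self.full_translate(effective_addr)
            self.tcache[effective_addr] = real
            return self.access_memory(real)

def access(real_addr):
    if real_addr == 0xDEAD000:                # assume the stale mapping faults
        raise PermissionError
    return f"loaded from {real_addr:#x}"

lsu_slice = LoadStoreSlice({0x100: 0xDEAD000},            # stale cached mapping
                           full_translate=lambda ea: 0xBEEF000 | ea,
                           access_memory=access)
print(lsu_slice.execute_load(0x100, "guest"))  # retried with a fresh translation
```
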
  • Publication number: 20200272557
    Abstract: A system and method of stopping program execution includes tagging an entry in a virtual cache with an indicator bit where the virtual address of the entry corresponds to a virtual address range in a breakpoint register; in response to a second virtual cache data access demand matching the entry tagged with the indicator bit, determining whether the second data access demand matches the virtual address range of the breakpoint register; and in response to the second data access demand matching the virtual address range of the breakpoint register, flagging an exception and stopping execution of the program. In an embodiment, the method or system enters a slow-mode in response to the second data access demand matching the virtual cache entry with the indicator bit, and performs a full comparison between the second data access demand and the breakpoint register virtual address range.
    Type: Application
    Filed: February 21, 2019
    Publication date: August 27, 2020
    Inventors: David Campbell, Dwain A. Hicks, David A. Hrusecky, Bryan Lloyd
  • Patent number: 10740248
    Abstract: A method or system of translating a virtualized address to a real address is disclosed that includes receiving a virtualized address for translation; generating a predicted intermediate address translation using a portion of the bit field of the virtualized address; determining a predicted real address using the predicted intermediate address or portion thereof; performing a translation of the virtualized address to an actual intermediate address; determining whether the predicted intermediate address is the same as the actual intermediate address; and in response to the predicted intermediate address being the same as the actual intermediate address, providing the predicted real address as the real address.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: August 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Dwain A. Hicks, Christian Jacobi
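
The prediction scheme in the abstract above guesses the intermediate (first-stage) translation from a portion of the virtualized address so the second-stage lookup can begin early, then confirms the guess against the actual first-stage result. The Python sketch below is a hypothetical model of that check; the predictor keyed on bits 16-23, the page granularity, and the stage1/stage2 tables are assumptions.

```python
def predict_intermediate(vaddr, predictor):
    """Guess the intermediate page from a portion of the virtualized address."""
    return predictor.get((vaddr >> 16) & 0xFF)

def translate(vaddr, predictor, stage1, stage2):
    predicted_ipage = predict_intermediate(vaddr, predictor)
    predicted_real = None
    if predicted_ipage is not None:
        # Start the second-stage lookup early with the predicted intermediate page.
        predicted_real = stage2.get(predicted_ipage)
    actual_ipage = stage1[vaddr >> 12]             # authoritative first-stage walk
    if predicted_ipage == actual_ipage and predicted_real is not None:
        return predicted_real | (vaddr & 0xFFF)    # prediction confirmed
    return stage2[actual_ipage] | (vaddr & 0xFFF)  # fall back to the full path

predictor = {0x34: 0x7A}          # address portion -> guessed intermediate page
stage1 = {0x12345: 0x7A}          # virtual page -> actual intermediate page
stage2 = {0x7A: 0xC0000000}       # intermediate page -> real page base
print(hex(translate(0x12345ABC, predictor, stage1, stage2)))  # 0xc0000abc
```
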
  • Publication number: 20200192817
    Abstract: A method or system of translating a virtualized address to a real address is disclosed that includes receiving a virtualized address for translation; generating a predicted intermediate address translation using a portion of the bit field of the virtualized address; determining a predicted real address using the predicted intermediate address or portion thereof; performing a translation of the virtualized address to an actual intermediate address; determining whether the predicted intermediate address is the same as the actual intermediate address; and in response to the predicted intermediate address being the same as the actual intermediate address, providing the predicted real address as the real address.
    Type: Application
    Filed: December 13, 2018
    Publication date: June 18, 2020
    Inventors: David Campbell, Dwain A. Hicks, Christian Jacobi
  • Publication number: 20200183858
    Abstract: A computer system includes a translation lookaside buffer (TLB) data cache and a processor. The TLB data cache includes a hierarchical configuration comprising a first TLB array, a second TLB array, a third TLB array, and a fourth TLB array. The processor is configured to receive a first address for translation to a second address, and determine whether translation should be performed using a hierarchical page table or a hashed page table. The processor also determines (using a first portion of the first address) whether the first TLB array stores a mapping of the first portion of the first address in response to determining that the translation should be performed using the hashed page table, and retrieves the second address from the third TLB array or the fourth TLB array in response to determining that the first TLB array stores the mapping of the first portion of the first address.
    Type: Application
    Filed: January 27, 2020
    Publication date: June 11, 2020
    Inventors: David Campbell, Dwain A. Hicks
  • Publication number: 20200174793
    Abstract: A method of optimized congruence class matching for concurrent memory translation requests to avoid memory access conflicts with respect to a virtual memory managed by a processor is provided. The method includes initiating a first table walk by a first memory access of the concurrent memory translation requests and pending a subsequent table walk initiated by a subsequent memory access of the concurrent memory translation requests. Then, the method determines whether the subsequent table walk will cause a memory access conflict with the first table walk based on the optimized congruence class matching. The subsequent memory access is rejected when the subsequent table walk will cause the memory access conflict with the first table walk.
    Type: Application
    Filed: December 4, 2018
    Publication date: June 4, 2020
    Inventors: David Campbell, Dwain A. Hicks, Christian Jacobi, Kerey M. Tassin
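
The conflict check in the abstract above compares the congruence class a new table walk would touch against that of the walk already in flight, and rejects the newcomer when they match. The Python sketch below is a hypothetical single-walker simplification; the 64 congruence classes, the hash on the virtual page number, and the TableWalker class are assumptions.

```python
CONGRUENCE_CLASSES = 64

def congruence_class(vaddr):
    """Congruence class the table walk would touch; a simple assumed hash."""
    return (vaddr >> 12) % CONGRUENCE_CLASSES

class TableWalker:
    def __init__(self):
        self.in_flight = None                 # congruence class of the active walk

    def request_walk(self, vaddr):
        cls = congruence_class(vaddr)
        if self.in_flight is None:
            self.in_flight = cls
            return "walk started"
        if cls == self.in_flight:
            return "rejected: conflicts with the in-flight walk"
        return "walk started in parallel"

walker = TableWalker()
print(walker.request_walk(0x0000_1000))   # starts the first walk (class 1)
print(walker.request_walk(0x0004_1000))   # same congruence class: rejected
print(walker.request_walk(0x0000_2000))   # different class: allowed to proceed
```
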
  • Patent number: 10649778
    Abstract: A method of optimized congruence class matching for concurrent memory translation requests to avoid memory access conflicts with respect to a virtual memory managed by a processor is provided. The method includes initiating a first table walk by a first memory access of the concurrent memory translation requests and pending a subsequent table walk initiated by a subsequent memory access of the concurrent memory translation requests. Then, the method determines whether the subsequent table walk will cause a memory access conflict with the first table walk based on the optimized congruence class matching. The subsequent memory access is rejected when the subsequent table walk will cause the memory access conflict with the first table walk.
    Type: Grant
    Filed: December 4, 2018
    Date of Patent: May 12, 2020
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Dwain A. Hicks, Christian Jacobi, Kerey M. Tassin
  • Publication number: 20200133881
    Abstract: A computer system includes a translation lookaside buffer (TLB) and a processor. The TLB comprises a first TLB array and a second TLB array, and stores entries comprising virtual address information and corresponding real address information. The processor is configured to receive a first virtual address for translation, and to concurrently determine if the TLB stores a physical address associated with the first virtual address based on a first portion and a second portion of the first virtual address. The first portion is associated with a first page size and the second portion is associated with a second page size (different from the first page size). The first portion is used to perform a lookup in either one of the first TLB array and the second TLB array, and the second portion is used to perform a lookup in the other one of the first TLB array and the second TLB array.
    Type: Application
    Filed: October 29, 2018
    Publication date: April 30, 2020
    Inventors: David Campbell, Dwain A. Hicks
  • Patent number: 10621106
    Abstract: A computer system includes a translation lookaside buffer (TLB) data cache and a processor. The TLB data cache includes a hierarchical configuration comprising a first TLB array, a second TLB array, a third TLB array, and a fourth TLB array. The processor is configured to receive a first address for translation to a second address, and determine whether translation should be performed using a hierarchical page table or a hashed page table. The processor also determines (using a first portion of the first address) whether the first TLB array stores a mapping of the first portion of the first address in response to determining that the translation should be performed using the hashed page table, and retrieves the second address from the third TLB array or the fourth TLB array in response to determining that the first TLB array stores the mapping of the first portion of the first address.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: April 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Dwain A. Hicks
  • Patent number: 10534715
    Abstract: Operation of a multi-slice processor that includes a plurality of execution slices, a plurality of load/store slices, and one or more page walk caches, where operation includes: receiving, at a load/store slice, an instruction to be issued; determining, at the load/store slice, a process type indicating a source of the instruction to be a host process or a guest process; and determining, in accordance with an allocation policy and in dependence upon the process type, an allocation of an entry of the page walk cache, wherein the page walk cache comprises one or more entries for both host processes and guest processes.
    Type: Grant
    Filed: April 22, 2016
    Date of Patent: January 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Dwain A. Hicks, Jonathan H. Raymond, George W. Rohrbaugh, III, Shih-Hsiung S. Tung
  • Patent number: 10042691
    Abstract: Operation of a multi-slice processor that includes a plurality of execution slices, a plurality of load/store slices, and one or more translation caches, where operation includes: determining, at the load/store slice, a real address from a cache hit in the translation cache for an effective address for an instruction received at a load/store slice; determining, at the load/store slice, an error condition corresponding to an access of the real address; determining, at the load/store slice, a process type indicating a source of the instruction to be a guest process; and responsive to determining the error condition, initiating, in dependence upon the process type indicating a source of the instruction to be a guest process, an effective address translation corresponding to a cache miss in the translation cache for the effective address for the instruction.
    Type: Grant
    Filed: April 26, 2016
    Date of Patent: August 7, 2018
    Assignee: International Business Machines Corporation
    Inventors: Dwain A. Hicks, Jonathan H. Raymond, Shih-Hsiung S. Tung
  • Patent number: 7730290
    Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.
    Type: Grant
    Filed: February 25, 2008
    Date of Patent: June 1, 2010
    Assignee: International Business Machines Corporation
    Inventors: Brian David Barrick, Kimberly Marie Fernsler, Dwain A. Hicks, Takeki Osanai, David Scott Ray
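
The queue check shared by the three load-instruction entries in this listing (starting with the abstract above) can be modelled in a few lines: a load that hits in the cache still searches a queue of earlier loads, and if an older load to the same address is queued, the hit signal is ignored and the new load is queued as well. The Python sketch below is hypothetical; the deque-based queue and the LoadStoreUnit class are assumptions.

```python
from collections import deque

class LoadStoreUnit:
    def __init__(self, cache):
        self.cache = cache        # address -> data
        self.queue = deque()      # addresses of earlier, still-pending loads

    def execute_load(self, addr):
        hit = addr in self.cache
        older_load_queued = any(pending == addr for pending in self.queue)
        if hit and not older_load_queued:
            return ("hit", self.cache[addr])
        # Either a miss, or an older load to the same address is already queued:
        # ignore the hit signal and queue this load behind it.
        self.queue.append(addr)
        return ("queued", None)

lsu = LoadStoreUnit(cache={0x80: "fresh data"})
lsu.queue.append(0x80)            # an older load to 0x80 is still pending
print(lsu.execute_load(0x80))     # ('queued', None): the hit signal is ignored
print(lsu.execute_load(0x100))    # ('queued', None): an ordinary miss
```
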
  • Publication number: 20080148017
    Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.
    Type: Application
    Filed: February 25, 2008
    Publication date: June 19, 2008
    Inventors: Brian David Barrick, Kimberly Marie Fernsler, Dwain A. Hicks, Takeki Osanai, David Scott Ray
  • Patent number: 7376816
    Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.
    Type: Grant
    Filed: November 12, 2004
    Date of Patent: May 20, 2008
    Assignee: International Business Machines Corporation
    Inventors: Brian David Barrick, Kimberly Marie Fernsler, Dwain A. Hicks, Takeki Osanai, David Scott Ray