Patents by Inventor David S. Levitan

David S. Levitan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180275993
    Abstract: A computer processor includes a branch prediction unit that includes a local branch predictor and one or more global branch predictors. Managing power consumption in such a computer processor includes, for each of a plurality of branch instructions: performing, by the local branch predictor, a local branch prediction; performing, by each of the global branch predictors, a global branch prediction; determining to utilize the local branch prediction over the global branch predictions as a branch prediction for the branch instruction; incrementing a value of a counter; determining whether the value of the counter exceeds a predetermined threshold; and if the value of the counter exceeds the predetermined threshold, powering down at least one of the global branch predictors and configuring the branch prediction unit to bypass the powered down global branch predictor for branch predictions of subsequent branch instructions.
    Type: Application
    Filed: June 1, 2018
    Publication date: September 27, 2018
    Inventors: David S. Levitan, Nicholas R. Orzol, Robert A. Philhower
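    Illustrative sketch: the counter-gated power-down described in this abstract can be modeled in software roughly as follows. This is a rough model, not the claimed hardware design; the class name, the threshold value, and the use of a single global predictor are assumptions made for illustration.

      class BranchPredictionUnit:
          """Toy model of gating a global branch predictor behind a usage counter."""

          def __init__(self, threshold=1024):
              self.threshold = threshold          # predetermined threshold (illustrative value)
              self.local_preferred = 0            # counts how often the local prediction was chosen
              self.global_powered_on = True       # power state of the global predictor

          def predict(self, local_prediction, global_prediction, use_local):
              if not self.global_powered_on:
                  # The unit is configured to bypass the powered-down global predictor.
                  return local_prediction
              if use_local:                       # the chooser preferred the local prediction
                  self.local_preferred += 1
                  if self.local_preferred > self.threshold:
                      self.global_powered_on = False   # power down the global predictor
                  return local_prediction
              return global_prediction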
  • Patent number: 10078514
    Abstract: A technique for operating a processor includes allocating an entry in a prefetch filter queue (PFQ) for a cache line address (CLA) in response to the CLA missing in an upper level instruction cache. In response to the CLA subsequently hitting in the upper level instruction cache, an associated prefetch value for the entry in the PFQ is updated. In response to the entry being aged-out of the PFQ, an entry in a backing array for the CLA and the associated prefetch value is allocated. In response to subsequently determining that prefetching is required for the CLA, the backing array is accessed to determine the associated prefetch value for the CLA. A cache line at the CLA and a number of sequential cache lines specified by the associated prefetch value in the backing array are then prefetched into the upper level instruction cache.
    Type: Grant
    Filed: May 11, 2016
    Date of Patent: September 18, 2018
    Assignee: International Business Machines Corporation
    Inventors: Richard J. Eickemeyer, Sheldon B. Levenstein, David S. Levitan, Mauricio J. Serrano, Brian W. Thompto
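    Illustrative sketch: a software analogue of the prefetch filter queue (PFQ) flow in this abstract. The queue depth, the FIFO aging policy, and the method names are assumptions; the patented mechanism is a hardware structure.

      from collections import OrderedDict

      class PrefetchFilter:
          def __init__(self, pfq_depth=8):
              self.pfq = OrderedDict()            # cache line address (CLA) -> prefetch value
              self.backing = {}                   # backing array for aged-out entries
              self.pfq_depth = pfq_depth

          def on_icache_miss(self, cla):
              self.pfq[cla] = 0                   # allocate a PFQ entry for the missing CLA
              if len(self.pfq) > self.pfq_depth:
                  old_cla, value = self.pfq.popitem(last=False)   # age out the oldest entry
                  self.backing[old_cla] = value                   # allocate a backing-array entry

          def on_icache_hit(self, cla):
              if cla in self.pfq:
                  self.pfq[cla] += 1              # update the associated prefetch value

          def lines_to_prefetch(self, cla):
              value = self.backing.get(cla, 0)    # consult the backing array when prefetching is required
              return [cla + i for i in range(value + 1)]   # the line itself plus `value` sequential lines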
  • Patent number: 10037207
    Abstract: A computer processor includes a branch prediction unit that includes a local branch predictor and one or more global branch predictors. Managing power consumption in such a computer processor includes, for each of a plurality of branch instructions: performing, by the local branch predictor, a local branch prediction; performing, by each of the global branch predictors, a global branch prediction; determining to utilize the local branch prediction over the global branch predictions as a branch prediction for the branch instruction; incrementing a value of a counter; determining whether the value of the counter exceeds a predetermined threshold; and if the value of the counter exceeds the predetermined threshold, powering down at least one of the global branch predictors and configuring the branch prediction unit to bypass the powered down global branch predictor for branch predictions of subsequent branch instructions.
    Type: Grant
    Filed: July 27, 2016
    Date of Patent: July 31, 2018
    Assignee: International Business Machines Corporation
    Inventors: David S. Levitan, Nicholas R. Orzol, Robert A. Philhower
  • Patent number: 9996351
    Abstract: A computer processor includes a branch prediction unit that includes a local branch predictor and one or more global branch predictors. Managing power consumption in such a computer processor includes, for each of a plurality of branch instructions: performing, by the local branch predictor, a local branch prediction; performing, by each of the global branch predictors, a global branch prediction; determining to utilize the local branch prediction over the global branch predictions as a branch prediction for the branch instruction; incrementing a value of a counter; determining whether the value of the counter exceeds a predetermined threshold; and if the value of the counter exceeds the predetermined threshold, powering down at least one of the global branch predictors and configuring the branch prediction unit to bypass the powered down global branch predictor for branch predictions of subsequent branch instructions.
    Type: Grant
    Filed: May 26, 2016
    Date of Patent: June 12, 2018
    Assignee: International Business Machines Corporation
    Inventors: David S. Levitan, Nicholas R. Orzol, Robert A. Philhower
  • Patent number: 9983878
    Abstract: Branch prediction is provided by generating a first index from a previous instruction address and from a first branch history vector having a first length. A second index is generated from the previous instruction address and from a second branch history vector that is longer than the first vector. Using the first index, a first branch prediction is retrieved from a first branch prediction table. Using the second index, a second branch prediction is retrieved from a second branch prediction table. Based upon additional branch history data, the first branch history vector and the second branch history vector are updated. A first hash value is generated from a current instruction address and the updated first branch history vector. A second hash value is generated from the current instruction address and the updated second branch history vector. One of the branch predictions is selected based upon the hash values.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: May 29, 2018
    Assignee: International Business Machines Corporation
    Inventors: David S. Levitan, Jose E. Moreira, Mauricio J. Serrano
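    Illustrative sketch: the two-table lookup and selection described in this abstract, written as plain Python. The hash function, table sizes, and the rule that combines the two hash values into a final choice are guesses for illustration; the abstract does not specify them.

      SHORT_BITS, LONG_BITS, TABLE_SIZE = 8, 16, 4096
      table_short = [0] * TABLE_SIZE              # first branch prediction table (short history)
      table_long = [0] * TABLE_SIZE               # second branch prediction table (long history)

      def make_index(addr, history, bits):
          # Fold the instruction address with the low `bits` bits of a history vector.
          return (addr ^ (history & ((1 << bits) - 1))) % TABLE_SIZE

      def predict(prev_addr, cur_addr, hist_short, hist_long, last_outcome):
          pred_short = table_short[make_index(prev_addr, hist_short, SHORT_BITS)]
          pred_long = table_long[make_index(prev_addr, hist_long, LONG_BITS)]

          # Update both history vectors with the most recent branch outcome (0 or 1).
          hist_short = ((hist_short << 1) | last_outcome) & ((1 << SHORT_BITS) - 1)
          hist_long = ((hist_long << 1) | last_outcome) & ((1 << LONG_BITS) - 1)

          # Hash the current address with each updated vector and use the hashes to pick one prediction.
          hash_short = make_index(cur_addr, hist_short, SHORT_BITS)
          hash_long = make_index(cur_addr, hist_long, LONG_BITS)
          chosen = pred_long if (hash_long ^ hash_short) & 1 else pred_short   # arbitrary selection rule
          return chosen, hist_short, hist_long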
  • Patent number: 9904551
    Abstract: Branch prediction is provided by generating a first index from a previous instruction address and from a first branch history vector having a first length. A second index is generated from the previous instruction address and from a second branch history vector that is longer than the first vector. Using the first index, a first branch prediction is retrieved from a first branch prediction table. Using the second index, a second branch prediction is retrieved from a second branch prediction table. Based upon additional branch history data, the first branch history vector and the second branch history vector are updated. A first hash value is generated from a current instruction address and the updated first branch history vector. A second hash value is generated from the current instruction address and the updated second branch history vector. One of the branch predictions is selected based upon the hash values.
    Type: Grant
    Filed: November 3, 2016
    Date of Patent: February 27, 2018
    Assignee: International Business Machines Corporation
    Inventors: David S. Levitan, Jose E. Moreira, Mauricio J. Serrano
  • Patent number: 9898295
    Abstract: Branch prediction is provided by generating a first index from a previous instruction address and from a first branch history vector having a first length. A second index is generated from the previous instruction address and from a second branch history vector that is longer than the first vector. Using the first index, a first branch prediction is retrieved from a first branch prediction table. Using the second index, a second branch prediction is retrieved from a second branch prediction table. Based upon additional branch history data, the first branch history vector and the second branch history vector are updated. A first hash value is generated from a current instruction address and the updated first branch history vector. A second hash value is generated from the current instruction address and the updated second branch history vector. One of the branch predictions is selected based upon the hash values.
    Type: Grant
    Filed: November 3, 2016
    Date of Patent: February 20, 2018
    Assignee: International Business Machines Corporation
    Inventors: David S. Levitan, Jose E. Moreira, Mauricio J. Serrano
  • Publication number: 20180004516
    Abstract: Administering ITAGs in a computer processor, includes, for each instruction in a single-thread mode: incrementing a value of a wrap around counter; setting a wrap bit to a predefined value if incrementing the value causes the counter to wrap around; generating, in dependence upon the counter value and the wrap bit, an ITAG for the instruction, the ITAG comprising a bit string having a wrap bit and an index comprising the counter value; and, for each instruction in a multi-thread mode: incrementing the value of the wrap around counter; setting a wrap bit to a predefined value if incrementing the value causes the counter to wrap around; and generating, in dependence upon the counter value and the wrap bit, an ITAG for the instruction, the ITAG comprising a bit string having the wrap bit, a thread identifier, and an index comprising the counter value.
    Type: Application
    Filed: July 1, 2016
    Publication date: January 4, 2018
    Inventors: Kurt A. Feiste, Hung Q. Le, David S. Levitan, Albert J. Van Norstrand, Jr.
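    Illustrative sketch: one possible bit layout for the ITAGs described in this abstract. The field widths, the packing order, and treating "setting the wrap bit to a predefined value" as a toggle are assumptions, since the abstract does not fix them.

      INDEX_BITS = 6        # width of the wrap-around counter index (assumed)
      THREAD_BITS = 2       # width of the thread identifier in multi-thread mode (assumed)

      counter = 0
      wrap_bit = 0

      def next_itag(thread_id=None):
          """Generate the next ITAG; pass thread_id when running in multi-thread mode."""
          global counter, wrap_bit
          counter += 1
          if counter >= (1 << INDEX_BITS):      # incrementing caused the counter to wrap around
              counter = 0
              wrap_bit ^= 1                     # "predefined value" modeled here as a toggle
          if thread_id is None:
              # Single-thread mode: a bit string of the wrap bit followed by the counter index.
              return (wrap_bit << INDEX_BITS) | counter
          # Multi-thread mode: wrap bit, thread identifier, then the counter index.
          return (wrap_bit << (THREAD_BITS + INDEX_BITS)) | (thread_id << INDEX_BITS) | counter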
  • Publication number: 20170344372
    Abstract: A computer processor includes a branch prediction unit that includes a local branch predictor and one or more global branch predictors. Managing power consumption in such a computer processor includes, for each of a plurality of branch instructions: performing, by the local branch predictor, a local branch prediction; performing, by each of the global branch predictors, a global branch prediction; determining to utilize the local branch prediction over the global branch predictions as a branch prediction for the branch instruction; incrementing a value of a counter; determining whether the value of the counter exceeds a predetermined threshold; and if the value of the counter exceeds the predetermined threshold, powering down at least one of the global branch predictors and configuring the branch prediction unit to bypass the powered down global branch predictor for branch predictions of subsequent branch instructions.
    Type: Application
    Filed: July 27, 2016
    Publication date: November 30, 2017
    Inventors: David S. Levitan, Nicholas R. Orzol, Robert A. Philhower
  • Publication number: 20170344378
    Abstract: Methods and apparatus for managing an effective address table (EAT) in a multi-slice processor including receiving, from an instruction sequence unit, a next-to-complete instruction tag (ITAG); obtaining, from the EAT, a first ITAG from a tail-plus-one EAT row, wherein the EAT comprises a tail EAT row that precedes the tail-plus-one EAT row; determining, based on a comparison of the next-to-complete ITAG and the first ITAG, that the tail EAT row has completed; and retiring the tail EAT row based on the determination.
    Type: Application
    Filed: July 27, 2016
    Publication date: November 30, 2017
    Inventors: Akash V. Giri, David S. Levitan, Mehul Patel, Albert J. Van Norstrand, Jr.
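    Illustrative sketch: the retirement check in this abstract reduces to comparing the next-to-complete ITAG with the first ITAG of the tail-plus-one row. Modeling the EAT as a list of dictionaries and ITAGs as plain integers is an assumption for illustration.

      def try_retire_tail(eat_rows, next_to_complete_itag):
          """eat_rows[0] is the tail row; eat_rows[1] is the tail-plus-one row."""
          if len(eat_rows) < 2:
              return eat_rows
          first_itag = eat_rows[1]["first_itag"]
          # If the next-to-complete ITAG has reached the tail-plus-one row, every
          # instruction tracked by the tail row has completed, so the tail row retires.
          if next_to_complete_itag >= first_itag:
              return eat_rows[1:]
          return eat_rows

      rows = [{"first_itag": 0}, {"first_itag": 32}]
      print(try_retire_tail(rows, 40))          # tail row covering ITAGs 0..31 is retired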
  • Publication number: 20170344379
    Abstract: Methods and apparatus for generating a mask vector for determining a processor instruction address using an instruction tag (ITAG) in a multi-slice processor including receiving a first ITAG value and an interrupt ITAG value; generating the mask vector divided into mask sections comprising a plurality of elements with unset flags; for each mask section: if the mask section comprises the first ITAG value, setting a flag of an element in the mask section corresponding to the first ITAG value; if the mask section comprises the interrupt ITAG value, setting a flag of an element in the mask section corresponding to the interrupt ITAG value; setting each flag of each element in the mask vector between the element in the mask vector corresponding to the first ITAG value and the element in the mask vector corresponding to the interrupt ITAG value; and providing the mask vector to an instruction fetch unit.
    Type: Application
    Filed: May 24, 2016
    Publication date: November 30, 2017
    Inventors: David S. Levitan, Mehul Patel
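    Illustrative sketch: the net effect of the mask-vector construction in this abstract is a run of set flags between the first ITAG and the interrupt ITAG. The per-section processing described in the abstract is collapsed into a single loop here, and the vector length is arbitrary.

      def build_mask(vector_len, first_itag, interrupt_itag):
          lo, hi = sorted((first_itag, interrupt_itag))
          mask = [0] * vector_len               # every element starts with its flag unset
          for i in range(lo, hi + 1):           # set flags for the two ITAGs and everything between them
              mask[i] = 1
          return mask

      print(build_mask(16, 3, 9))               # elements 3 through 9 have their flags set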
  • Publication number: 20170344370
    Abstract: Operation of a multi-slice processor implementing a tagged geometric history length prediction unit and an effective address table aligned with an update table, where the multi-slice processor includes a plurality of execution slices. Operation of such a multi-slice processor includes: receiving, at an effective address table and at a TAGE update table, information for a branch instruction dispatched to an execution slice, wherein the effective address table and the TAGE update table are in alignment; responsive to the branch instruction being taken, updating the effective address table and the TAGE update table to indicate the branch instruction being taken; and updating, in dependence upon the alignment between the effective address table and the TAGE update table, the TAGE branch prediction unit with update information from both the effective address table and the TAGE update table.
    Type: Application
    Filed: May 25, 2016
    Publication date: November 30, 2017
    Inventors: David S. Levitan, Nicholas R. Orzol
  • Publication number: 20170344469
    Abstract: Methods and apparatus for managing an effective address table (EAT) in a multi-slice processor including receiving, from an instruction sequence unit, a next-to-complete instruction tag (ITAG); obtaining, from the EAT, a first ITAG from a tail-plus-one EAT row, wherein the EAT comprises a tail EAT row that precedes the tail-plus-one EAT row; determining, based on a comparison of the next-to-complete ITAG and the first ITAG, that the tail EAT row has completed; and retiring the tail EAT row based on the determination.
    Type: Application
    Filed: May 31, 2016
    Publication date: November 30, 2017
    Inventors: Akash V. Giri, David S. Levitan, Mehul Patel, Albert J. Van Norstrand, Jr.
  • Publication number: 20170344377
    Abstract: A computer processor includes a branch prediction unit that includes a local branch predictor and one or more global branch predictors. Managing power consumption in such a computer processor includes, for each of a plurality of branch instructions: performing, by the local branch predictor, a local branch prediction; performing, by each of the global branch predictors, a global branch prediction; determining to utilize the local branch prediction over the global branch predictions as a branch prediction for the branch instruction; incrementing a value of a counter; determining whether the value of the counter exceeds a predetermined threshold; and if the value of the counter exceeds the predetermined threshold, powering down at least one of the global branch predictors and configuring the branch prediction unit to bypass the powered down global branch predictor for branch predictions of subsequent branch instructions.
    Type: Application
    Filed: May 26, 2016
    Publication date: November 30, 2017
    Inventors: David S. Levitan, Nicholas R. Orzol, Robert A. Philhower
  • Publication number: 20170344368
    Abstract: Methods and apparatus for identifying an effective address (EA) using an interrupt instruction tag (ITAG) in a multi-slice processor including receiving, by an instruction fetch unit of the processor, the interrupt ITAG; retrieving an effective address table (EAT) row from an EAT, wherein the EAT row comprises a range of EAs and a first ITAG of a range of ITAGs; accessing a processor instruction vector comprising a plurality of elements, each element corresponding to one of a plurality of ITAGs; applying a mask to the processor instruction vector to obtain a portion of the processor instruction vector that begins with an element corresponding to the first ITAG and is defined by an element corresponding to the interrupt ITAG; calculating an EA offset; and identifying the EA for the interrupt ITAG using the EA offset and the range of EAs in the retrieved EAT row.
    Type: Application
    Filed: May 31, 2016
    Publication date: November 30, 2017
    Inventors: David S. Levitan, Mehul Patel, Albert J. Van Norstrand, Jr., Phillip G. Williams
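    Illustrative sketch: the offset calculation in this abstract, modeled with a base effective address per EAT row and fixed-width 4-byte instructions. Both of those details, and counting set vector elements to form the offset, are assumptions.

      def ea_for_interrupt(eat_row, instruction_vector, interrupt_itag):
          """eat_row holds 'base_ea' and 'first_itag'; instruction_vector has one element per ITAG."""
          first = eat_row["first_itag"]
          # Mask the vector down to the span from the first ITAG through the interrupt ITAG.
          masked = instruction_vector[first:interrupt_itag + 1]
          ea_offset = (sum(masked) - 1) * 4     # instructions before the interrupt ITAG, 4 bytes each
          return eat_row["base_ea"] + ea_offset

      row = {"base_ea": 0x1000, "first_itag": 8}
      vec = [1] * 32
      print(hex(ea_for_interrupt(row, vec, 12)))   # 0x1010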
  • Publication number: 20170329607
    Abstract: Hazard avoidance in a multi-slice processor including adding, to a hazard table, an entry for an effective address, wherein the entry comprises an instruction tag (ITAG) offset for the effective address; fetching, by an instruction fetch unit, a processor instruction from a memory location using the effective address; determining that the hazard table includes the entry for the effective address; retrieving, from the hazard table, the ITAG offset for the effective address; identifying a prior internal operation (IOP) using the ITAG offset; and decoding the processor instruction into a load IOP with a dependency on the prior IOP.
    Type: Application
    Filed: May 16, 2016
    Publication date: November 16, 2017
    Inventors: Richard J. Eickemeyer, John B. Griswell, Jr., David S. Levitan, Brian W. Thompto
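    Illustrative sketch: the hazard table in this abstract behaves like a lookup keyed by effective address. Dictionaries stand in for the hardware structures, and deriving the prior IOP's tag by subtracting the stored ITAG offset is an assumption.

      hazard_table = {}      # effective address -> ITAG offset of the prior IOP

      def add_hazard_entry(effective_address, itag_offset):
          hazard_table[effective_address] = itag_offset

      def fetch_and_decode(effective_address, memory, current_itag):
          instruction = memory[effective_address]          # fetch from the memory location
          if effective_address in hazard_table:
              prior_iop = current_itag - hazard_table[effective_address]   # identify the prior IOP
              # Decode into a load IOP carrying a dependency on the prior IOP.
              return {"op": "load_iop", "insn": instruction, "depends_on": prior_iop}
          return {"op": "iop", "insn": instruction, "depends_on": None}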
  • Publication number: 20170329715
    Abstract: Hazard avoidance in a multi-slice processor including adding, to a hazard table, an entry for an effective address, wherein the entry comprises an instruction tag (ITAG) offset for the effective address; fetching, by an instruction fetch unit, a processor instruction from a memory location using the effective address; determining that the hazard table includes the entry for the effective address; retrieving, from the hazard table, the ITAG offset for the effective address; identifying a prior internal operation (IOP) using the ITAG offset; and decoding the processor instruction into a load IOP with a dependency on the prior IOP.
    Type: Application
    Filed: July 26, 2016
    Publication date: November 16, 2017
    Inventors: Richard J. Eickemeyer, John B. Griswell, Jr., David S. Levitan, Brian W. Thompto
  • Publication number: 20170329608
    Abstract: A technique for operating a processor includes allocating an entry in a prefetch filter queue (PFQ) for a cache line address (CLA) in response to the CLA missing in an upper level instruction cache. In response to the CLA subsequently hitting in the upper level instruction cache, an associated prefetch value for the entry in the PFQ is updated. In response to the entry being aged-out of the PFQ, an entry in a backing array for the CLA and the associated prefetch value is allocated. In response to subsequently determining that prefetching is required for the CLA, the backing array is accessed to determine the associated prefetch value for the CLA. A cache line at the CLA and a number of sequential cache lines specified by the associated prefetch value in the backing array are then prefetched into the upper level instruction cache.
    Type: Application
    Filed: May 11, 2016
    Publication date: November 16, 2017
    Inventors: Richard J. Eickemeyer, Sheldon B. Levenstein, David S. Levitan, Mauricio J. Serrano, Brian W. Thompto
  • Publication number: 20170315810
    Abstract: A technique for operating a processor includes identifying a difficult branch instruction (branch) whose target address (target) has been mispredicted multiple times. Information about the branch (which includes a current target and a next target) is learned and stored in a data structure. In response to the branch executing subsequent to the storing, whether a branch target of the branch corresponds to the current target in the data structure is determined. In response to the branch target of the branch corresponding to the current target of the branch in the data structure, the next target of the branch that is associated with the current target of the branch in the data structure is determined. In response to detecting that a next instance of the branch has been fetched, the next target of the branch is utilized as the predicted target for execution of the next instance of the branch.
    Type: Application
    Filed: April 28, 2016
    Publication date: November 2, 2017
    Inventors: Richard J. Eickemeyer, Naga P. Gorti, David S. Levitan, Albert J. Van Norstrand, Jr.
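    Illustrative sketch: the current-target/next-target learning in this abstract, modeled as a per-branch chain keyed by the last observed target. The misprediction threshold and the dictionary shapes are assumptions made for illustration.

      MISPREDICT_THRESHOLD = 3          # mispredictions before a branch counts as "difficult" (assumed)

      mispredict_count = {}             # branch address -> misprediction count
      target_chain = {}                 # branch address -> {current target -> next target}
      last_target = {}                  # branch address -> most recently observed target

      def on_branch_executed(branch, actual_target, mispredicted):
          if mispredicted:
              mispredict_count[branch] = mispredict_count.get(branch, 0) + 1
          if mispredict_count.get(branch, 0) >= MISPREDICT_THRESHOLD:
              previous = last_target.get(branch)
              if previous is not None:
                  # Learn that after going to `previous`, this branch next went to `actual_target`.
                  target_chain.setdefault(branch, {})[previous] = actual_target
          last_target[branch] = actual_target

      def predicted_target(branch):
          # When the next instance of the branch is fetched, predict the target chained
          # to the branch's current target, if one has been learned.
          return target_chain.get(branch, {}).get(last_target.get(branch))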
  • Publication number: 20170277535
    Abstract: A technique for operating a processor includes receiving, by a history buffer, a flush tag associated with an oldest instruction to be flushed from a processor pipeline. In response to the flush tag being older than a first instruction tag that identifies a first instruction associated with a current value stored in a register of the register file and younger than a second instruction tag that identifies a second instruction associated with a previous value that was stored in the register of the register file, the history buffer transfers the previous value for the register to the register file. In response to the flush tag not being older than the first instruction tag and younger than the second instruction tag, the history buffer does not transfer the previous value for the register to the register file (as such, the register maintains the current value following a pipeline flush).
    Type: Application
    Filed: March 24, 2016
    Publication date: September 28, 2017
    Inventors: Hung Q. Le, David S. Levitan, Dung Q. Nguyen, Albert J. Van Norstrand, Jr.
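    Illustrative sketch: the restore decision in this abstract depends only on how the flush tag orders against the two instruction tags held with each history buffer entry. Assuming numerically smaller ITAGs are older:

      def restore_on_flush(register_file, history_buffer, flush_tag):
          """Each history buffer entry: (register, previous_value, first_itag, second_itag)."""
          for reg, previous_value, first_itag, second_itag in history_buffer:
              # first_itag produced the register's current value; second_itag produced the previous value.
              if second_itag < flush_tag < first_itag:
                  # The flush removes the producer of the current value but not the producer of the
                  # previous value, so the previous value is transferred back to the register file.
                  register_file[reg] = previous_value
              # Otherwise the register keeps its current value following the pipeline flush.

      regs = {"r3": 7}
      restore_on_flush(regs, [("r3", 5, 40, 20)], 30)
      print(regs)        # {'r3': 5}: the instruction that wrote 7 (ITAG 40) was flushed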