Patents by Inventor Bryan Lloyd

Bryan Lloyd has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230367595
    Abstract: A computer system, processor, programming instructions and/or method for managing operations of a gather buffer for a processor core load storage unit. The processor core includes a processing pipeline having one or more execution units for processing unaligned load instructions that execute in two phases to be satisfied. A buffer storage element is provided having a plurality of entries for temporarily collecting partial writeback results retrieved from memory that are associated with first-phase accesses for each of a plurality of unaligned load instructions. An associated logic controller device tracks the two parts of the unaligned load to be gathered at independent times, wherein the partial result stored at the buffer storage element comprises the first part of an unaligned load. The second-phase load access for the same instruction is performed independently and later merged with the first part of the load data at byte granularity to satisfy the load.
    Type: Application
    Filed: July 25, 2023
    Publication date: November 16, 2023
    Inventors: Kimberly M. Fernsler, Bryan Lloyd, David A. Hrusecky, David Campbell
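
The gather-buffer mechanism in the entry above can be pictured with a small software model. The C sketch below is only an illustration of the idea, not code from the patent: it records the bytes returned by the first-phase cache-line access in a buffer entry, then merges them at byte granularity with the bytes from the independently timed second-phase access. The 16-byte line, 8-byte load width, and all names (gb_entry, phase1, phase2) are my assumptions.

```c
/* Minimal C model of a gather buffer that merges the two phases of an
 * unaligned load at byte granularity.  Names and sizes are illustrative,
 * not taken from the patent. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE  16               /* pretend cache-line size           */
#define WIDTH 8                /* load width in bytes               */

typedef struct {
    uint8_t bytes[WIDTH];      /* partially gathered writeback data */
    uint8_t valid[WIDTH];      /* which bytes phase 1 already filled */
} gb_entry;

/* Phase 1: collect the bytes that live in the first cache line. */
static void phase1(gb_entry *e, const uint8_t *mem, uint64_t addr)
{
    uint64_t line_end = (addr / LINE + 1) * LINE;
    memset(e->valid, 0, sizeof e->valid);
    for (uint64_t a = addr; a < line_end && a < addr + WIDTH; a++) {
        e->bytes[a - addr] = mem[a];
        e->valid[a - addr] = 1;
    }
}

/* Phase 2: independently fetch the remaining bytes from the next line
 * and merge them with the gathered first part to satisfy the load. */
static uint64_t phase2(gb_entry *e, const uint8_t *mem, uint64_t addr)
{
    for (int i = 0; i < WIDTH; i++)
        if (!e->valid[i])
            e->bytes[i] = mem[addr + i];
    uint64_t result;
    memcpy(&result, e->bytes, sizeof result);   /* little-endian host */
    return result;
}

int main(void)
{
    uint8_t mem[64];
    for (int i = 0; i < 64; i++) mem[i] = (uint8_t)i;

    gb_entry e;
    uint64_t addr = 12;                /* crosses the 16-byte boundary */
    phase1(&e, mem, addr);
    printf("merged load = 0x%016llx\n",
           (unsigned long long)phase2(&e, mem, addr));
    return 0;
}
```

Run as written, the merged value matches what a single aligned load over the same byte range would return, which is the point of gathering the two phases.
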
  • Patent number: 11775337
    Abstract: A first instruction for processing by a processor core is received. Whether the instruction is a larx is determined. Responsive to determining the instruction is a larx, whether a cacheline associated with the larx is locked is determined. Responsive to determining the cacheline associated with the larx is not locked, the cacheline associated with the larx is locked and a counter associated with a first thread of the processor core is started. The first thread is processing the first instruction.
    Type: Grant
    Filed: September 2, 2021
    Date of Patent: October 3, 2023
    Assignee: International Business Machines Corporation
    Inventors: Bryan Lloyd, Guy L. Guthrie, Susan E. Eisen, Dhivya Jeganathan, Luke Murray
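
As a rough picture of the larx handling described in the entry above, the sketch below (my own C model, not the patent's logic) checks whether an incoming instruction is a larx, locks the target cacheline if it is currently free, and starts a counter associated with the issuing thread. The structure names and the tick-based counter are assumptions.

```c
/* Toy C model: on a larx, lock the target cacheline (if free) and
 * start a counter for the issuing thread. */
#include <stdbool.h>
#include <stdio.h>

enum opcode { OP_LARX, OP_OTHER };

typedef struct {
    bool     locked;
    unsigned owner_thread;
} cacheline;

typedef struct {
    unsigned ticks;           /* cycles the reservation has been held */
    bool     running;
} thread_counter;

static void handle_instruction(enum opcode op, unsigned thread,
                               cacheline *cl, thread_counter *ctr)
{
    if (op != OP_LARX)                 /* only larx is of interest here   */
        return;
    if (cl->locked)                    /* line already locked: do nothing */
        return;
    cl->locked       = true;           /* lock the line for this larx     */
    cl->owner_thread = thread;
    ctr[thread].running = true;        /* start the per-thread counter    */
    ctr[thread].ticks   = 0;
}

int main(void)
{
    cacheline cl = { false, 0 };
    thread_counter ctr[4] = { { 0, false } };

    handle_instruction(OP_LARX, 1, &cl, ctr);
    printf("line locked by thread %u, counter running=%d\n",
           cl.owner_thread, ctr[1].running);
    return 0;
}
```
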
  • Publication number: 20230293632
    Abstract: The present disclosure is directed to the use of perlecan compositions to reduce the risk of mortality in subjects due to neurological injury such as stroke, including large vessel occlusion, and traumatic brain injury. The disclosure is also directed to the use of perlecan compositions to reduce mortality in stroke patients treated with tPA.
    Type: Application
    Filed: July 26, 2021
    Publication date: September 21, 2023
    Applicant: STREAM BIOMEDICAL, INC.
    Inventors: Huston Davis Adkisson, Bryan Lloyd Clossen, Gary B. Gage
  • Patent number: 11755324
    Abstract: A computer system, processor, programming instructions and/or method for managing operations of a gather buffer for a processor core load storage unit. The processor core includes a processing pipeline having one or more execution units for processing unaligned load instructions that execute in two phases to be satisfied. A buffer storage element is provided having a plurality of entries for temporarily collecting partial writeback results retrieved from memory that are associated with first-phase accesses for each of a plurality of unaligned load instructions. An associated logic controller device tracks the two parts of the unaligned load to be gathered at independent times, wherein the partial result stored at the buffer storage element comprises the first part of an unaligned load. The second-phase load access for the same instruction is performed independently and later merged with the first part of the load data at byte granularity to satisfy the load.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: September 12, 2023
    Assignee: International Business Machines Corporation
    Inventors: Kimberly M. Fernsler, Bryan Lloyd, David A. Hrusecky, David A. Campbell
  • Patent number: 11748104
    Abstract: Technology for fusing certain load instructions and compare-immediate instructions in a computer processor having a load-store architecture with respect to transferring data between memory and registers of the computer processor. In some embodiments the load and compare-immediate instructions are consecutive. In some embodiments, the instructions are only merged if: (i) the respective RA and RT fields of the two instructions match; (ii) the immediate field of the compare-immediate instruction has a certain value, or falls within a range of certain values; and/or (iii) the instructions are received in a consecutive manner.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: September 5, 2023
    Assignee: International Business Machines Corporation
    Inventors: Bryan Lloyd, David A. Hrusecky, Sundeep Chadha, Dung Q. Nguyen, Christian Gerhard Zoellin, Brian W. Thompto, Sheldon Bernard Levenstein, Phillip G. Williams
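
The merge conditions listed in the abstract above translate naturally into a small predicate. The C sketch below is a hedged illustration only: it fuses a load with a following compare-immediate when the compare reads the register the load wrote, the immediate falls in an assumed small window, and the two instructions arrive consecutively. The field layout and the ±16 window are my assumptions, not values from the patent.

```c
/* Sketch of a load / compare-immediate fusion check. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int rt;          /* target register */
    int ra;          /* base register   */
    int16_t imm;     /* immediate field */
} insn;

static bool can_fuse(const insn *load, const insn *cmpi, bool consecutive)
{
    if (!consecutive)                       return false;  /* (iii) back-to-back      */
    if (cmpi->ra != load->rt)               return false;  /* (i) compare reads the loaded reg */
    if (cmpi->imm < -16 || cmpi->imm > 16)  return false;  /* (ii) assumed immediate window    */
    return true;
}

int main(void)
{
    insn ld   = { .rt = 5, .ra = 1, .imm = 8 };   /* ld   r5, 8(r1) */
    insn cmpi = { .rt = 0, .ra = 5, .imm = 0 };   /* cmpi r5, 0     */
    printf("fuse? %s\n", can_fuse(&ld, &cmpi, true) ? "yes" : "no");
    return 0;
}
```
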
  • Patent number: 11687337
    Abstract: A method for operation of a processor core is provided. A rejected first load instruction is received that has been rejected due to a false load-hit-store detection against a first store instruction. A warning label is generated on a basis of the false load-hit-store detection. The warning label is added to the received first load instruction to create a labeled first load instruction. The labeled first load instruction is issued such that the warning label causes the labeled first load instruction to bypass the first store instruction in the store reorder queue and thereby avoid another false load-hit-store detection against the first store instruction. A computer system and a processor core configured to operate according to the method are also disclosed herein.
    Type: Grant
    Filed: August 20, 2021
    Date of Patent: June 27, 2023
    Assignee: International Business Machines Corporation
    Inventors: Bryan Lloyd, Brian Chen, Kimberly M. Fernsler
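
To make the warning-label flow in the entry above concrete, the following C sketch (an illustration, not the patent's design) rejects a load on a partial-address match against a store, attaches a label naming that store, and lets the relabeled load bypass that one store when it is re-issued. The hash-based address compare and all structure names are assumptions.

```c
/* Minimal model of labeling a load after a false load-hit-store (LHS)
 * detection so the re-issued load skips the offending store. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  id;
    bool warn;          /* warning label                     */
    int  skip_store;    /* store entry to bypass when warned */
} load_op;

typedef struct { int id; unsigned addr_hash; } store_op;

/* Returns true if the load must be rejected against this store. */
static bool lhs_check(const load_op *ld, const store_op *st, unsigned ld_hash)
{
    if (ld->warn && ld->skip_store == st->id)
        return false;                  /* labeled load bypasses this store      */
    return ld_hash == st->addr_hash;   /* (possibly false) partial-address hit  */
}

int main(void)
{
    store_op st = { .id = 7, .addr_hash = 0x3c };
    load_op  ld = { .id = 1, .warn = false };

    if (lhs_check(&ld, &st, 0x3c)) {   /* first issue: false hit, reject */
        ld.warn = true;                /* generate warning label         */
        ld.skip_store = st.id;
    }
    printf("second issue rejected? %s\n",
           lhs_check(&ld, &st, 0x3c) ? "yes" : "no");
    return 0;
}
```
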
  • Patent number: 11650926
    Abstract: A system and method of handling data access demands in a processor virtual cache that includes: determining if a virtual cache data access demand missed because of a difference in the context tag of the data access demand and a corresponding entry in the virtual cache with the same virtual address as the data access demand; in response to the virtual cache missing, determining whether the alias tag valid bit is set in the corresponding entry of the virtual cache; in response to the alias tag valid bit not being set, determining whether the virtual cache data access demand is a synonym of the corresponding entry in the virtual cache; and in response to the virtual access demand being a synonym of the corresponding entry in the virtual cache with the same virtual address but a different context tag, updating information in a tagged entry in an alias table.
    Type: Grant
    Filed: July 8, 2021
    Date of Patent: May 16, 2023
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Bryan Lloyd
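
The miss handling in the entry above can be sketched as follows in C. This is an illustration under simplifying assumptions: the "same virtual address, different context tag" case is detected directly, the synonym test is approximated by comparing real addresses supplied by the caller, and the alias table is a single entry; none of these details come from the patent.

```c
/* Rough model: a virtual-cache entry holds a virtual address, a context
 * tag and an alias-tag valid bit; a same-vaddr / different-context
 * demand that turns out to be a synonym updates an alias-table entry. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t vaddr;
    uint32_t ctx_tag;
    bool     alias_valid;   /* alias tag valid bit */
} vcache_entry;

typedef struct {
    uint64_t vaddr;
    uint32_t ctx_tag;       /* context tag of the aliasing access */
    bool     valid;
} alias_entry;

static void handle_miss(vcache_entry *e, alias_entry *alias,
                        uint64_t dem_vaddr, uint32_t dem_ctx,
                        uint64_t dem_raddr, uint64_t entry_raddr)
{
    /* Miss caused only by a context-tag mismatch on the same vaddr? */
    if (dem_vaddr != e->vaddr || dem_ctx == e->ctx_tag)
        return;
    if (e->alias_valid)
        return;                         /* already tracked; nothing to do */
    if (dem_raddr != entry_raddr)
        return;                         /* not a synonym, a genuine miss  */

    /* Synonym: record it in the alias table and tag the cache entry. */
    alias->vaddr   = dem_vaddr;
    alias->ctx_tag = dem_ctx;
    alias->valid   = true;
    e->alias_valid = true;
}

int main(void)
{
    vcache_entry e = { 0x1000, /*ctx*/1, false };
    alias_entry  a = { 0 };
    handle_miss(&e, &a, 0x1000, /*ctx*/2, /*raddr*/0xA000, /*raddr*/0xA000);
    printf("alias recorded: valid=%d ctx=%u\n", a.valid, a.ctx_tag);
    return 0;
}
```
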
  • Patent number: 11645208
    Abstract: A computer system includes a processor and a prefetch engine. The processor is configured to generate a demand access stream. The prefetch engine is configured to generate a first prefetch request and a second prefetch request based on the demand access stream, to output the first prefetch request to a first translation lookaside buffer (TLB), and to output the second prefetch request to a second TLB that is different from the first TLB. The processor performs a first TLB lookup in the first TLB based on one of the demand access stream or the first prefetch request, and performs a second TLB lookup in the second TLB based on the second prefetch request.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: May 9, 2023
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Bryan Lloyd, George W. Rohrbaugh, III, Vivek Britto, Mohit Karve
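
A toy model of the split-TLB arrangement described above is shown below in C; it is an illustration only. Demand accesses (and the first prefetch request) probe one TLB while the second prefetch request probes a separate TLB; the TLB sizes, the linear lookup, and the steering rule are assumptions.

```c
/* Two separate TLBs: one for the demand stream / first prefetch
 * request, one for the second prefetch request. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TLB_SIZE 8

typedef struct { uint64_t vpn, pfn; bool valid; } tlb_entry;

static bool tlb_lookup(const tlb_entry *tlb, uint64_t vpn, uint64_t *pfn)
{
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn) { *pfn = tlb[i].pfn; return true; }
    return false;
}

int main(void)
{
    tlb_entry tlb1[TLB_SIZE] = { { .vpn = 0x10, .pfn = 0xAA, .valid = true } };
    tlb_entry tlb2[TLB_SIZE] = { { .vpn = 0x11, .pfn = 0xBB, .valid = true } };
    uint64_t pfn;

    /* Demand access (and first prefetch request): first TLB.  */
    printf("demand   0x10 -> %s\n", tlb_lookup(tlb1, 0x10, &pfn) ? "hit" : "miss");
    /* Second prefetch request: second, separate TLB.          */
    printf("prefetch 0x11 -> %s\n", tlb_lookup(tlb2, 0x11, &pfn) ? "hit" : "miss");
    return 0;
}
```

Keeping prefetch translations in their own TLB means speculative prefetches do not evict translations the demand stream still needs, which is the apparent motivation for the split.
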
  • Publication number: 20230063976
    Abstract: A computer system, processor, programming instructions and/or method for managing operations of a gather buffer for a processor core load storage unit. The processor core includes a processing pipeline having one or more execution units for processing unaligned load instructions that execute in two phases to be satisfied. A buffer storage element is provided having a plurality of entries for temporarily collecting partial writeback results retrieved from memory that are associated with first-phase accesses for each of a plurality of unaligned load instructions. An associated logic controller device tracks the two parts of the unaligned load to be gathered at independent times, wherein the partial result stored at the buffer storage element comprises the first part of an unaligned load. The second-phase load access for the same instruction is performed independently and later merged with the first part of the load data at byte granularity to satisfy the load.
    Type: Application
    Filed: August 31, 2021
    Publication date: March 2, 2023
    Inventors: Kimberly M. Fernsler, Bryan Lloyd, David A. Hrusecky, David A. Campbell
  • Publication number: 20230061030
    Abstract: A first instruction for processing by a processor core is received. Whether the instruction is a larx is determined. Responsive to determining the instruction is a larx, whether a cacheline associated with the larx is locked is determined. Responsive to determining the cacheline associated with the larx is not locked, the cacheline associated with the larx is locked and a counter associated with a first thread of the processor core is started. The first thread is processing the first instruction.
    Type: Application
    Filed: September 2, 2021
    Publication date: March 2, 2023
    Inventors: Bryan Lloyd, Guy L. Guthrie, Susan E. Eisen, Dhivya Jeganathan, Luke Murray
  • Publication number: 20230056077
    Abstract: A method for operation of a processor core is provided. A rejected first load instruction is received that has been rejected due to a false load-hit-store detection against a first store instruction. A warning label is generated on a basis of the false load-hit-store detection. The warning label is added to the received first load instruction to create a labeled first load instruction. The labeled first load instruction is issued such that the warning label causes the labeled first load instruction to bypass the first store instruction in the store reorder queue and thereby avoid another false load-hit-store detection against the first store instruction. A computer system and a processor core configured to operate according to the method are also disclosed herein.
    Type: Application
    Filed: August 20, 2021
    Publication date: February 23, 2023
    Inventors: Bryan Lloyd, Brian Chen, Kimberly M. Fernsler
  • Publication number: 20230028929
    Abstract: A method for operation of a processor core is provided. First instruction data is consulted to determine whether a second instruction has execution data that matches the first instruction data. The first instruction data is from a first instruction. In response to determining that the second instruction has execution data that matches the first instruction data, prior data is copied into the second instruction. The first instruction depends on the prior data. After receiving an availability indication of the prior data, both the first instruction and the second instruction are woken for execution, without requiring execution of the first instruction before waking of the second instruction. The second instruction is executed by using the prior data as a skip of the first instruction. A computer system and a processor core configured to operate according to the method are also disclosed herein.
    Type: Application
    Filed: July 14, 2021
    Publication date: January 26, 2023
    Inventors: Brian D. Barrick, Bryan Lloyd, Dung Q. Nguyen, Brian W. Thompto, Edmund Joseph Gieske, John B. Griswell, Jr.
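
The wake-up shortcut described in the entry above can be modeled compactly. In the C sketch below (an illustration, not the patent's issue-queue design), when the second instruction's source matches the result of a first instruction that merely forwards "prior data", the prior-data tag is copied into the second entry, so a single availability broadcast wakes both and the second can execute without waiting for the first. Tags and structures are assumptions.

```c
/* Toy issue-queue model: collapse the second instruction's dependence
 * onto the prior data so one broadcast wakes both entries. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  src_tag;      /* tag of the data this entry waits on */
    bool ready;
} iq_entry;

/* Copy the first entry's dependence into the second when they match. */
static void collapse(iq_entry *first, iq_entry *second, int first_result_tag)
{
    if (second->src_tag == first_result_tag)   /* second consumes first   */
        second->src_tag = first->src_tag;      /* point it at prior data  */
}

/* Availability broadcast: wake every entry waiting on this tag. */
static void broadcast(iq_entry *q, int n, int tag)
{
    for (int i = 0; i < n; i++)
        if (q[i].src_tag == tag)
            q[i].ready = true;
}

int main(void)
{
    enum { PRIOR = 3, FIRST_RESULT = 9 };
    iq_entry q[2] = {
        { .src_tag = PRIOR },          /* first: waits on prior data       */
        { .src_tag = FIRST_RESULT },   /* second: nominally waits on first */
    };
    collapse(&q[0], &q[1], FIRST_RESULT);
    broadcast(q, 2, PRIOR);            /* prior data becomes available     */
    printf("first ready=%d, second ready=%d\n", q[0].ready, q[1].ready);
    return 0;
}
```
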
  • Patent number: 11537402
    Abstract: A method for operation of a processor core is provided. First instruction data is consulted to determine whether a second instruction has execution data that matches the first instruction data. The first instruction data is from a first instruction. In response to determining that the second instruction has execution data that matches the first instruction data, prior data is copied into the second instruction. The first instruction depends on the prior data. After receiving an availability indication of the prior data, both the first instruction and the second instruction are woken for execution, without requiring execution of the first instruction before waking of the second instruction. The second instruction is executed by using the prior data as a skip of the first instruction. A computer system and a processor core configured to operate according to the method are also disclosed herein.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: December 27, 2022
    Assignee: International Business Machines Corporation
    Inventors: Brian D. Barrick, Bryan Lloyd, Dung Q. Nguyen, Brian W. Thompto, Edmund Joseph Gieske, John B. Griswell, Jr.
  • Patent number: 11537519
    Abstract: A memory-referent instruction is executed to calculate a target effective address (EA) of a corresponding memory-referent request. An array entry in an upper level cache is allocated, and the EA is specified in a corresponding EA directory entry. While in-flight, the memory-referent request is buffered in a queue in association with a pointer to the entry in the EA directory. Based on receiving a translation invalidation request requesting invalidation of an address translation in a translation structure, the processor core walks the EA directory, determines the EA in the entry matches an address range specified by the translation invalidation request, and, based on the match, precisely marks the memory-referent request using the pointer to the EA directory entry. Based on the marking, the translation invalidation request is permitted to complete with reference to the processor core only after the memory-referent request has drained from the processing unit.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: December 27, 2022
    Assignee: International Business Machines Corporation
    Inventors: Derek E. Williams, Guy L. Guthrie, Hugh Shen, David Campbell, Bryan Lloyd, Samuel David Kirchhoff, Jeffrey A. Stuecheli
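
The precise-marking step in the entry above can be pictured with the short C model below, which is illustrative only: each queued memory-referent request carries an index into an EA directory, and a translation-invalidation request walks the queue, follows each pointer, and marks the requests whose EA falls in the invalidated range. The directory and queue layouts are assumptions.

```c
/* Mark in-flight requests whose EA lies in an invalidated translation
 * range; the invalidation may complete only after they drain. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DIR_ENTRIES 4
#define QUEUE_DEPTH 4

typedef struct { uint64_t ea; bool valid; } ea_dir_entry;
typedef struct { int dir_idx; bool in_flight; bool marked; } queued_req;

static void handle_tlb_invalidate(ea_dir_entry *dir, queued_req *q,
                                  uint64_t inv_base, uint64_t inv_len)
{
    for (int i = 0; i < QUEUE_DEPTH; i++) {
        if (!q[i].in_flight) continue;
        const ea_dir_entry *d = &dir[q[i].dir_idx];     /* follow the pointer */
        if (d->valid && d->ea >= inv_base && d->ea < inv_base + inv_len)
            q[i].marked = true;         /* must drain before completion      */
    }
}

int main(void)
{
    ea_dir_entry dir[DIR_ENTRIES] = { { 0x2000, true }, { 0x9000, true } };
    queued_req   q[QUEUE_DEPTH]   = { { 0, true, false }, { 1, true, false } };

    handle_tlb_invalidate(dir, q, 0x2000, 0x1000);   /* invalidate 0x2000..0x2fff */
    printf("req0 marked=%d, req1 marked=%d\n", q[0].marked, q[1].marked);
    return 0;
}
```
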
  • Patent number: 11520585
    Abstract: In at least one embodiment, a processing unit includes a processor core and a vertical cache hierarchy including at least a store-through upper-level cache and a store-in lower-level cache. The upper-level cache includes a data array and an effective address (EA) directory. The processor core includes an execution unit, an address translation unit, and a prefetch unit configured to initiate allocation of a directory entry in the EA directory for a store target EA without prefetching a cache line of data into the corresponding data entry in the data array. The processor core caches in the directory entry an EA-to-RA address translation information for the store target EA, such that a subsequent demand store access that hits in the directory entry can avoid a performance penalty associated with address translation by the translation unit.
    Type: Grant
    Filed: April 1, 2021
    Date of Patent: December 6, 2022
    Assignee: International Business Machines Corporation
    Inventors: Bryan Lloyd, Brian W. Thompto, George W. Rohrbaugh, III, Mohit Karve, Vivek Britto
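
As a rough software picture of the store-prefetch behavior above, the C sketch below (an illustration under my own assumptions about the structures) allocates only the EA-directory entry for a store target, caches the EA-to-RA translation there without fetching the data line, and lets a later demand store reuse that cached translation.

```c
/* EA-directory-only prefetch for a store target: cache the translation,
 * skip the data fetch, and let the demand store hit the entry. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t ea, ra;     /* cached EA-to-RA translation      */
    bool     valid;
    bool     has_data;   /* data array line actually present */
} l1_entry;

static void prefetch_store_target(l1_entry *e, uint64_t ea, uint64_t ra)
{
    e->ea = ea; e->ra = ra;
    e->valid = true;
    e->has_data = false;         /* no cache line is fetched */
}

/* Demand store: reuse the cached translation instead of re-translating. */
static bool demand_store(const l1_entry *e, uint64_t ea, uint64_t *ra)
{
    if (e->valid && e->ea == ea) { *ra = e->ra; return true; }
    return false;                /* would need the address-translation unit */
}

int main(void)
{
    l1_entry e = { 0 };
    prefetch_store_target(&e, 0x4000, 0x1234000);

    uint64_t ra;
    if (demand_store(&e, 0x4000, &ra))
        printf("store hit EA directory, RA=0x%llx, data fetched=%d\n",
               (unsigned long long)ra, e.has_data);
    return 0;
}
```
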
  • Patent number: 11520704
    Abstract: A load-store unit (LSU) of a processor core determines whether or not a second store operation specifies an adjacent update to that specified by a first store operation. The LSU additionally determines whether the total store data length of the first and second store operations exceeds a maximum size. Based on determining the second store operation specifies an adjacent update and the total store data length does not exceed the maximum size, the LSU merges the first and second store operations and writes merged store data into a same write block of a cache. Based on determining that the total store data length exceeds the maximum size, the LSU splits the second store operation into first and second portions, merges the first portion with the first store operation, and writes store data of the partially merged store operation into the write block.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: December 6, 2022
    Assignee: International Business Machines Corporation
    Inventors: Robert A. Cordes, Bryan Lloyd
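
The merge-or-split decision in the entry above maps naturally onto a small routine. The C sketch below is illustrative, not the patent's LSU logic: adjacent stores are merged while the combined length fits an assumed 16-byte write block, and any overflow is left behind as a shortened second store.

```c
/* Merge a second store into a first when they are adjacent; split the
 * second store if the combined length exceeds the write-block width. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_MERGE 16             /* assumed write-block width in bytes */

typedef struct {
    uint64_t addr;
    uint8_t  data[MAX_MERGE];
    unsigned len;
} store_op;

/* Returns bytes left over in 'second' after merging what fits. */
static unsigned try_merge(store_op *first, store_op *second)
{
    if (second->addr != first->addr + first->len)
        return second->len;                       /* not adjacent: no merge */
    unsigned room = MAX_MERGE - first->len;
    unsigned take = second->len < room ? second->len : room;
    memcpy(first->data + first->len, second->data, take);
    first->len += take;
    /* Shift any leftover bytes to the front of the (now split) second store. */
    memmove(second->data, second->data + take, second->len - take);
    second->addr += take;
    second->len  -= take;
    return second->len;
}

int main(void)
{
    store_op a = { .addr = 0x100, .len = 10 };
    store_op b = { .addr = 0x10a, .len = 12 };
    unsigned left = try_merge(&a, &b);
    printf("merged store len=%u, leftover second store len=%u\n", a.len, left);
    return 0;
}
```

With the sizes above, 6 of the second store's 12 bytes fit the write block and are merged; the remaining 6 bytes stay behind as a shortened store, mirroring the split case in the abstract.
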
  • Patent number: 11500774
    Abstract: A system and method of handling access demands in a virtual cache comprising, by a processing system, checking if a virtual cache access demand missed because of a synonym tagged in the virtual cache; in response to the virtual cache access demand missing because of a synonym tagged in the virtual cache, updating the virtual address tag in the virtual cache to a new virtual address tag; searching for additional synonyms tagged in the virtual cache; and in response to finding additional synonyms tagged in the virtual cache, updating the virtual address tag of the additional synonyms to the new virtual address tag.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: November 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Bryan Lloyd
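
A minimal picture of the tag-update pass described above is given below in C, as an illustration only: after a miss caused by a tagged synonym, every synonym-tagged entry has its virtual address tag rewritten to the new tag. The single-array cache and the synonym flag are my simplifications.

```c
/* Rewrite the virtual tags of synonym-tagged virtual-cache entries to
 * the new virtual tag carried by the missing access. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SETS 4

typedef struct { uint64_t vtag; bool synonym; bool valid; } vc_entry;

static void retag_synonyms(vc_entry *cache, int n, uint64_t new_vtag)
{
    for (int i = 0; i < n; i++)
        if (cache[i].valid && cache[i].synonym) {
            cache[i].vtag    = new_vtag;   /* point the entry at the new tag */
            cache[i].synonym = false;
        }
}

int main(void)
{
    vc_entry cache[SETS] = {
        { .vtag = 0xAAA, .synonym = true,  .valid = true },
        { .vtag = 0xAAA, .synonym = true,  .valid = true },
        { .vtag = 0xBBB, .synonym = false, .valid = true },
    };
    retag_synonyms(cache, SETS, 0xCCC);    /* miss arrived with tag 0xCCC */
    printf("tags now: 0x%llx 0x%llx 0x%llx\n",
           (unsigned long long)cache[0].vtag,
           (unsigned long long)cache[1].vtag,
           (unsigned long long)cache[2].vtag);
    return 0;
}
```
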
  • Publication number: 20220309001
    Abstract: A computer system includes a processor and a prefetch engine. The processor is configured to generate a demand access stream. The prefetch engine is configured to generate a first prefetch request and a second prefetch request based on the demand access stream, to output the first prefetch request to a first translation lookaside buffer (TLB), and to output the second prefetch request to a second TLB that is different from the first TLB. The processor performs a first TLB lookup in the first TLB based on one of the demand access stream or the first prefetch request, and performs a second TLB lookup in the second TLB based on the second prefetch request.
    Type: Application
    Filed: March 29, 2021
    Publication date: September 29, 2022
    Inventors: David Campbell, Bryan Lloyd, George W. Rohrbaugh, III, Vivek Britto, Mohit Karve
  • Patent number: 11379241
    Abstract: A system includes at least one computer processor having a load store execution unit (LSU) for processing load and store instructions, wherein the LSU includes (a) a store queue having a plurality of entries for storing data, each store queue entry having a data field for storing the data, the data field having a width for storing the data; and (b) a gather buffer for holding data, wherein the processor is configured to: process oversize data larger than the width of the data field of the store queue, and process an oversize load instruction for oversize data by executing two passes through the LSU, a first pass through the LSU configured to store a first portion of the oversize data in the gather buffer and a second pass through the LSU configured to merge the first portion of the oversize data with a second portion of the oversize data.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: July 5, 2022
    Assignee: International Business Machines Corporation
    Inventors: Bryan Lloyd, Brian Chen, Kimberly M. Fernsler, Robert A. Cordes, David A. Hrusecky
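
The two-pass handling of oversize data described above can be sketched as follows. The C model below is an illustration, with an 8-byte store-queue data field and a 16-byte oversize value chosen arbitrarily: the first pass parks the first portion in a gather buffer and the second pass merges in the remainder.

```c
/* Two LSU passes for data wider than the store-queue data field. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FIELD 8                       /* assumed store-queue data-field width */

typedef struct { uint8_t bytes[2 * FIELD]; } gather_buf;

static void pass1(gather_buf *gb, const uint8_t *src)
{
    memcpy(gb->bytes, src, FIELD);            /* hold first portion   */
}

static void pass2(gather_buf *gb, const uint8_t *src, uint8_t *out)
{
    memcpy(gb->bytes + FIELD, src, FIELD);    /* merge second portion */
    memcpy(out, gb->bytes, 2 * FIELD);        /* full oversize result */
}

int main(void)
{
    uint8_t data[2 * FIELD], out[2 * FIELD];
    for (int i = 0; i < 2 * FIELD; i++) data[i] = (uint8_t)(0xF0 + i);

    gather_buf gb;
    pass1(&gb, data);                 /* first LSU pass  */
    pass2(&gb, data + FIELD, out);    /* second LSU pass */
    printf("first byte 0x%02x, last byte 0x%02x\n", out[0], out[2 * FIELD - 1]);
    return 0;
}
```
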
  • Patent number: 11321088
    Abstract: A computer system, processor, and/or load-store unit has a data cache for storing data, the data cache having a plurality of entries to store the data, each data cache entry addressed by a row and a Way, each data cache row having a plurality of the data cache Ways; a first Address Directory organized and arranged the same as the data cache, where each first Address Directory entry is addressed by a row and a Way and each row has a plurality of Ways; a store reorder queue for tracking the store instructions; and a load reorder queue for tracking load instructions. Each of the load and store reorder queues has a Way bit field, preferably less than six bits, for identifying the data cache Way and/or a first Address Directory Way, where the Way bit field acts as a proxy for a larger address, e.g., a real page number.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: May 3, 2022
    Assignee: International Business Machines Corporation
    Inventors: Bryan Lloyd, Samuel David Kirchhoff, Brian Chen, Kimberly M. Fernsler, David A. Hrusecky
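
Finally, the Way-as-proxy idea in the last entry above can be illustrated with a few lines of C (again an illustration, with made-up sizes): a load/store reorder queue entry stores only a row index and a small Way field, and the full real page number is recovered by indexing the address directory with that pair.

```c
/* A row + Way pair as a compact proxy for a real page number. */
#include <stdint.h>
#include <stdio.h>

#define ROWS 64
#define WAYS 8                         /* a Way field of 3 bits suffices */

typedef struct { uint64_t real_page; } dir_entry;

typedef struct {
    uint8_t row;
    uint8_t way;                       /* small proxy for the real page number */
} lrq_entry;

int main(void)
{
    static dir_entry dir[ROWS][WAYS];
    dir[5][2].real_page = 0xABCDE;     /* directory holds the full tag */

    lrq_entry e = { .row = 5, .way = 2 };   /* queue stores only row + Way */
    printf("queue entry (%u,%u) -> real page 0x%llx\n",
           e.row, e.way,
           (unsigned long long)dir[e.row][e.way].real_page);
    return 0;
}
```

Storing a few Way bits instead of a full real page number keeps each reorder-queue entry small while still letting the hardware recover the full address through the directory when it is needed.
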