Patents by Inventor Bartholomew Blaner

Bartholomew Blaner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10846235
    Abstract: An integrated circuit for a coherent data processing system includes a first communication interface for communicatively coupling the integrated circuit with the coherent data processing system, a second communication interface for communicatively coupling the integrated circuit with an accelerator unit including an effective address-based accelerator cache for buffering copies of data from a system memory of the coherent data processing system, and a real address-based directory inclusive of contents of the accelerator cache. The real address-based directory assigns entries based on real addresses utilized to identify storage locations in the system memory. The integrated circuit further includes request logic that communicates memory access requests and request responses with the accelerator unit via the second communication interface.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: November 24, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bartholomew Blaner, Michael S. Siegel, Jeffrey A. Stuecheli, William J. Starke, Kenneth M. Valk, John D. Irish, Lakshminarayana Arimilli
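    Illustrative sketch (not from the patent): a minimal Python model of a real-address-based directory kept inclusive of an effective-address-based accelerator cache, assuming a 128-byte line size and illustrative names; a bus snoop carrying a real address only needs to disturb the accelerator if the directory says the line is held there.
      CACHE_LINE = 128  # assumed line size in bytes

      class HostProxyDirectory:
          """Tracks, by real address, every line the accelerator cache currently holds."""

          def __init__(self):
              self.entries = set()  # real addresses of lines checked out to the accelerator

          def record_checkout(self, real_addr):
              # Called when request logic grants the accelerator a copy of a line.
              self.entries.add(real_addr & ~(CACHE_LINE - 1))

          def record_castout(self, real_addr):
              # Called when the accelerator evicts or releases a line.
              self.entries.discard(real_addr & ~(CACHE_LINE - 1))

          def snoop(self, real_addr):
              # System snoops use real addresses; because the directory is inclusive
              # of the accelerator cache, a miss here means the accelerator itself
              # need not be interrupted at all.
              return (real_addr & ~(CACHE_LINE - 1)) in self.entries

      if __name__ == "__main__":
          d = HostProxyDirectory()
          d.record_checkout(0x10000080)
          print(d.snoop(0x100000C0))  # True: same 128-byte line
          print(d.snoop(0x20000000))  # False: never cached by the accelerator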
  • Patent number: 10831593
    Abstract: Hardware accelerator memory address translation fault resolution is provided. A hardware accelerator and a switchboard are in communication with a processing core. The hardware accelerator determines at least one memory address translation related to an operation having a fault. The operation and the fault memory address translation are flushed from the hardware accelerator including augmenting the operation with an entity identifier. The switchboard forwards the operation with the fault memory address translation and the entity identifier from the hardware accelerator to a second buffer. The operating system repairs the fault memory address translation. The operating system sends the operation to the processing core utilizing an effective address based on the entity identifier. The switchboard, supported by the processing core, forwards the operation with the repaired memory address translation to a first buffer and the hardware accelerator executes the operation with the repaired address.
    Type: Grant
    Filed: January 3, 2019
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Richard L. Arndt, Bartholomew Blaner
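    Illustrative sketch (not from the patent): a high-level Python model of the flush/repair/resubmit flow, assuming a hypothetical page table, buffers, and entity-identifier field; the faulting operation is flushed to the second buffer tagged with its entity identifier, the operating system installs a valid translation, and the operation re-enters the first buffer for execution.
      from collections import deque

      page_table = {0x7000: None}     # effective page -> real page (None = faulting)
      first_buffer = deque()          # operations queued for the accelerator
      second_buffer = deque()         # faulted operations awaiting OS repair

      def accelerator_execute(op):
          real = page_table.get(op["ea"] & ~0xFFF)
          if real is None:
              # Flush the operation, augmented with the entity identifier of its
              # submitter, so the OS can locate the right address space to repair.
              second_buffer.append(op)
              return "flushed"
          return "executed at real address " + hex(real | (op["ea"] & 0xFFF))

      def os_repair_and_resubmit():
          while second_buffer:
              op = second_buffer.popleft()
              page_table[op["ea"] & ~0xFFF] = 0x9000   # install a valid translation
              first_buffer.append(op)                  # switchboard forwards it back

      if __name__ == "__main__":
          first_buffer.append({"ea": 0x7010, "entity_id": 42})
          print(accelerator_execute(first_buffer.popleft()))  # flushed
          os_repair_and_resubmit()
          print(accelerator_execute(first_buffer.popleft()))  # executed at 0x9010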
  • Patent number: 10824585
    Abstract: An array processor includes a managing element having a load streaming unit coupled to multiple processing elements. The load streaming unit provides input data portions to each of a first subset of the processing elements and also receives output data from each of a second subset of the processing elements based on a comparatively sorted combination of the input data portions provided to the first subset of processing elements. Furthermore, each of the processing elements is configurable by the managing element to compare input data portions received from either the load streaming unit or two or more of the other processing elements, wherein the input data portions are stored for processing in respective queues. Each processing element is further configurable to select an input data portion to be output data based on the comparison, and in response to selecting the input data portion, remove a queue entry corresponding to the selected input data portion.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: November 3, 2020
    Assignee: International Business Machines Corporation
    Inventors: Ganesh Balakrishnan, Bartholomew Blaner, John J. Reilly, Jeffrey A. Stuecheli
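    Illustrative sketch (not the hardware design): one processing element modeled in Python as a generator that compares the head entries of its input queues, emits the smaller one as output data, and removes the corresponding queue entry; chaining such elements merges sorted input streams.
      from collections import deque

      def processing_element(queue_a, queue_b):
          """Yield a merged (sorted) stream from two sorted input queues."""
          while queue_a or queue_b:
              if not queue_b or (queue_a and queue_a[0] <= queue_b[0]):
                  yield queue_a.popleft()   # select the smaller head and dequeue it
              else:
                  yield queue_b.popleft()

      if __name__ == "__main__":
          # Two upstream elements would feed sorted runs into this element's queues.
          a = deque([1, 4, 9])
          b = deque([2, 3, 10])
          print(list(processing_element(a, b)))   # [1, 2, 3, 4, 9, 10]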
  • Patent number: 10824953
    Abstract: Various implementations of a method, system, and computer program product for pattern matching using a reconfigurable array processor are disclosed. In one embodiment, a processor array manager of the reconfigurable array processor receives an input data stream for pattern matching and generates a tokenized input data stream from the input data stream. A different portion of the tokenized input data stream is provided to each of a plurality of processing elements of the reconfigurable array processor. Each processing element can compare the received portion of the tokenized input data stream against one or more reference patterns to generate an intermediate result that indicates whether the portion of the tokenized input data stream matches a reference pattern. The processor array manager can combine the intermediate results received from each processing element to yield a final result that indicates whether the input data stream includes a reference pattern.
    Type: Grant
    Filed: January 21, 2015
    Date of Patent: November 3, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bulent Abali, Ganesh Balakrishnan, Bartholomew Blaner, Peter A. Sandon, Jeffrey A. Stuecheli
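    Illustrative sketch (token encoding, overlap handling, and element count are assumptions): a Python model of the split/compare/combine flow, where the manager tokenizes the stream, each "processing element" checks its portion against the reference patterns, and the intermediate results are combined into the final result.
      def tokenize(stream):
          return stream.split()

      def processing_element(tokens, reference_patterns):
          # Intermediate result: does this portion contain any reference pattern?
          return any(tok in reference_patterns for tok in tokens)

      def array_match(stream, reference_patterns, num_elements=4):
          tokens = tokenize(stream)
          chunk = max(1, -(-len(tokens) // num_elements))          # ceiling division
          portions = [tokens[i:i + chunk] for i in range(0, len(tokens), chunk)]
          intermediate = [processing_element(p, reference_patterns) for p in portions]
          return any(intermediate)          # the manager combines intermediate results

      if __name__ == "__main__":
          print(array_match("the quick brown fox", {"brown"}))   # True
          print(array_match("the quick red fox", {"brown"}))     # False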
  • Patent number: 10824952
    Abstract: Various implementations of a method, system, and computer program product for pattern matching using a reconfigurable array processor are disclosed. In one embodiment, a processor array manager of the reconfigurable array processor receives an input data stream for pattern matching and generates a tokenized input data stream from the input data stream. A different portion of the tokenized input data stream is provided to each of a plurality of processing elements of the reconfigurable array processor. Each processing element can compare the received portion of the tokenized input data stream against one or more reference patterns to generate an intermediate result that indicates whether the portion of the tokenized input data stream matches a reference pattern. The processor array manager can combine the intermediate results received from each processing element to yield a final result that indicates whether the input data stream includes a reference pattern.
    Type: Grant
    Filed: September 22, 2014
    Date of Patent: November 3, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bulent Abali, Ganesh Balakrishnan, Bartholomew Blaner, Peter A. Sandon, Jeffrey A. Stuecheli
  • Patent number: 10817491
    Abstract: Responsive to a data lookup in a buffer triggered for a search string, a processor searches for a selection of pairs from among multiple pairs of a hash table read from at least one address hash of the search string and matching at least one data hash of the search string, each row of the hash table assigned to a separate address hash, each of the pairs comprising a pointer to a location in the buffer and a tag with a previous data hash of a previously buffered string in the buffer. The processor identifies, from among the selection of pairs, at least one separate location in the buffer most frequently pointed to by two or more pointers within the selection of pairs. The processor, responsive to at least one read string from the buffer at the at least one separate location matching at least a substring of the search string, outputs the at least one separate location as the response to the data lookup.
    Type: Grant
    Filed: October 26, 2017
    Date of Patent: October 27, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bulent Abali, Bartholomew Blaner, John J. Reilly
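    Illustrative sketch (hash choices, sizes, and the history data are assumptions): a simplified Python model in the spirit of an LZ-style history lookup, where hash-table rows are indexed by an address hash of the search string, each entry pairs a buffer pointer with a data-hash tag, candidates whose tags match are counted, and the most frequently pointed-to location is verified against the buffer before being returned.
      from collections import Counter

      HISTORY = b"abcabcabcxyz"                        # previously buffered data

      def addr_hash(s):
          return sum(s) % 8                            # selects a hash-table row

      def data_hash(s):
          return (s[0] ^ s[-1]) & 0xF                  # short tag stored beside the pointer

      # Build the table: each row holds (pointer, tag) pairs for buffered substrings.
      table = {row: [] for row in range(8)}
      for pos in range(len(HISTORY) - 3):
          sub = HISTORY[pos:pos + 3]
          table[addr_hash(sub)].append((pos, data_hash(sub)))

      def lookup(search):
          row = table[addr_hash(search)]
          # Selection of pairs: keep only pointers whose tag matches the data hash.
          candidates = [ptr for ptr, tag in row if tag == data_hash(search)]
          if not candidates:
              return None
          # Prefer the location pointed to most often, then verify against the buffer.
          for loc, _count in Counter(candidates).most_common():
              if HISTORY[loc:loc + len(search)] == search:
                  return loc
          return None

      if __name__ == "__main__":
          print(lookup(b"abc"))   # a verified match location in the history buffer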
  • Patent number: 10803040
    Abstract: Responsive to a data lookup in a buffer triggered for a search string, a processor searches for a selection of pairs from among multiple pairs of a hash table read from at least one address hash of the search string and matching at least one data hash of the search string, each row of the hash table assigned to a separate address hash, each of the pairs comprising a pointer to a location in the buffer and a tag with a previous data hash of a previously buffered string in the buffer. The processor identifies, from among the selection of pairs, at least one separate location in the buffer most frequently pointed to by two or more pointers within the selection of pairs. The processor, responsive to at least one read string from the buffer at the at least one separate location matching at least a substring of the search string, outputs the at least one separate location as the response to the data lookup.
    Type: Grant
    Filed: August 28, 2017
    Date of Patent: October 13, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bulent Abali, Bartholomew Blaner, John J. Reilly
  • Patent number: 10761995
    Abstract: An integrated circuit includes a first communication interface for communicatively coupling the integrated circuit with a coherent data processing system, a second communication interface for communicatively coupling the integrated circuit with an accelerator unit including an effective address-based accelerator cache for buffering copies of data from a system memory, and a real address-based directory inclusive of contents of the accelerator cache. The real address-based directory assigns entries based on real addresses utilized to identify storage locations in the system memory. The integrated circuit further includes directory control logic that configures at least a number of congruence classes utilized in the real address-based directory based on configuration parameters specified on behalf of or by the accelerator unit.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: September 1, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bartholomew Blaner, Jeffrey A. Stuecheli, Michael S. Siegel, William J. Starke, Curtis C. Wollbrink, Kenneth M. Valk, Lakshminarayana Arimilli, John D. Irish
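    Illustrative sketch (field positions and line size are assumptions): a small Python model of mapping a real address to a congruence class when the number of classes is a configurable power of two, the kind of directory geometry the configuration parameters would select.
      CACHE_LINE = 128  # assumed line size in bytes

      def congruence_class(real_addr, num_classes):
          assert num_classes & (num_classes - 1) == 0, "expected a power of two"
          return (real_addr // CACHE_LINE) % num_classes

      if __name__ == "__main__":
          # The same real address lands in different classes as the accelerator's
          # configuration parameters change the directory geometry.
          print(congruence_class(0x00012380, num_classes=256))
          print(congruence_class(0x00012380, num_classes=1024))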
  • Patent number: 10698812
    Abstract: Updating cache devices includes a processor to detect a first set of hash functions and a first bit array corresponding to elements of a cache. In some examples, the processor detects a first instruction to add a new element to the cache and modify the first bit array based on the new element. Additionally, the processor processes a first invalidation operation and generates a second bit array and a second set of hash functions, while processing additional instructions. The processor deletes the first bit array and the first set of hash functions in response to detecting that the second bit array and the second set of hash functions have each been generated. Some examples process a second invalidation operation with the second set of hash functions and the second bit array.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: June 30, 2020
    Assignee: International Business Machines Corporation
    Inventors: Michael Bar-Joshua, Bartholomew Blaner, Yiftach Benjamini, Michael Grubman
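    Illustrative sketch (sizes and hash choices are assumptions): a Python model of the two-generation scheme, where a bit array plus a set of hash functions summarize cache contents Bloom-filter style; during an invalidation a second bit array and hash set are built while work continues, and the first pair is deleted once the replacements are ready.
      import hashlib

      class CacheFilter:
          def __init__(self, bits=1024, seeds=(0, 1, 2)):
              self.bits, self.seeds = bits, seeds
              self.array = bytearray(bits)

          def _positions(self, key):
              for seed in self.seeds:
                  digest = hashlib.blake2b(f"{seed}:{key}".encode(), digest_size=4).digest()
                  yield int.from_bytes(digest, "big") % self.bits

          def add(self, key):                    # instruction to add a new element
              for p in self._positions(key):
                  self.array[p] = 1

          def might_contain(self, key):          # any zero bit => definitely absent
              return all(self.array[p] for p in self._positions(key))

      if __name__ == "__main__":
          current = CacheFilter(seeds=(0, 1, 2))        # first bit array and hash set
          current.add("line:0x1000")
          # Invalidation: generate the second bit array and hash set from the
          # surviving cache elements, then delete the first pair.
          replacement = CacheFilter(seeds=(3, 4, 5))    # second bit array and hash set
          for surviving_element in []:                  # nothing survives in this example
              replacement.add(surviving_element)
          current = replacement                         # the first pair is dropped
          print(current.might_contain("line:0x1000"))   # False after the invalidation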
  • Publication number: 20200167224
    Abstract: Aspects include copying a plurality of input data into a buffer of a processor configured to perform speculatively executing pipelined streaming of the input data. A bit counter maintains a difference in a number of input bits from the input data entering a pipeline of the processor and a number of the input bits consumed in the pipeline. The pipeline is flushed based on detecting an error. A portion of the input data is recirculated from the buffer into the pipeline based on a value of the bit counter.
    Type: Application
    Filed: November 26, 2018
    Publication date: May 28, 2020
    Inventors: Bulent Abali, Bartholomew Blaner, John J. Reilly
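    Illustrative sketch (the pipeline itself is faked; only the bookkeeping is the point): a Python model where the input is copied into a buffer, a bit counter tracks bits that have entered the pipeline but not yet been consumed, and after an error-triggered flush that counter says how far back into the buffer to rewind for recirculation.
      buffer = b"speculative streaming input"   # copy of the input data
      bits_entered = 0
      bits_consumed = 0

      def feed(n_bytes):
          global bits_entered
          bits_entered += 8 * n_bytes           # bits issued into the pipeline

      def consume(n_bytes):
          global bits_consumed
          bits_consumed += 8 * n_bytes          # bits the pipeline safely retired

      def flush_and_recirculate():
          global bits_entered
          in_flight_bits = bits_entered - bits_consumed   # the bit counter's value
          restart_byte = bits_consumed // 8
          bits_entered = bits_consumed                    # speculative work is discarded
          return buffer[restart_byte:restart_byte + in_flight_bits // 8]

      if __name__ == "__main__":
          feed(16)        # 16 bytes issued speculatively
          consume(10)     # only 10 bytes were consumed before an error was detected
          print(flush_and_recirculate())   # b"e stre": bytes 10..15 re-enter the pipeline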
  • Patent number: 10599569
    Abstract: A technique for operating a memory management unit (MMU) of a processor includes the MMU detecting that one or more address translation invalidation requests are indicated for an accelerator unit (AU). In response to detecting that the invalidation requests are indicated, the MMU issues a raise barrier request for the AU. In response to detecting a raise barrier response from the AU to the raise barrier request the MMU issues the invalidation requests to the AU. In response to detecting an address translation invalidation response from the AU to each of the invalidation requests, the MMU issues a lower barrier request to the AU. In response to detecting a lower barrier response from the AU to the lower barrier request, the MMU resumes handling address translation check-in and check-out requests received from the AU.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: March 24, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bartholomew Blaner, Jay G. Heaslip, Robert D. Herzl, Jody B. Joyner
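    Illustrative sketch (message names mirror the abstract; everything else is an assumption): a Python model of the ordering the MMU enforces, with the accelerator unit reduced to a simple object that answers each request.
      class AcceleratorUnit:
          def __init__(self):
              self.translations = {0x1000, 0x2000}
              self.barrier_raised = False

          def raise_barrier(self):
              self.barrier_raised = True    # AU stops issuing check-in/check-out requests
              return "raise_barrier_response"

          def invalidate(self, ea):
              self.translations.discard(ea)
              return "invalidation_response"

          def lower_barrier(self):
              self.barrier_raised = False
              return "lower_barrier_response"

      def mmu_invalidate(au, pending_invalidations):
          # 1. Invalidations are indicated, so raise the barrier and wait for the response.
          assert au.raise_barrier() == "raise_barrier_response"
          # 2. Issue every invalidation request and wait for each response.
          for ea in pending_invalidations:
              assert au.invalidate(ea) == "invalidation_response"
          # 3. Lower the barrier; only then resume normal check-in/check-out handling.
          assert au.lower_barrier() == "lower_barrier_response"

      if __name__ == "__main__":
          au = AcceleratorUnit()
          mmu_invalidate(au, [0x1000])
          print(au.translations, au.barrier_raised)   # {8192} False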
  • Patent number: 10585744
    Abstract: Hardware accelerator memory address translation fault resolution is provided. A hardware accelerator and a switchboard are in communication with a processing core. The hardware accelerator pulls an operation from a first buffer and adjusts a receive credit value in a first window context operatively coupled to the hypervisor. The receive credit value limits a first quantity of one or more first tasks in the first buffer. The hardware accelerator determines at least one memory address translation related to the operation having a fault. The switchboard forwards the operation with the fault memory address translation from the hardware accelerator to a second buffer. The operation and the fault memory address translation are flushed from the hardware accelerator, and the operating system repairs the fault memory address translation.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: March 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Richard L. Arndt, Bartholomew Blaner
  • Publication number: 20200073817
    Abstract: A method, computer program product, and a computer system are disclosed for processing information in a processor that in one or more embodiments includes receiving a request for an Effective Address to Real Address Translation (ERAT); determining whether there is a permissions miss; changing, in response to determining there is a permission miss, permissions of an ERAT cache entry; and providing a Real Address translation. The method, computer program product, and computer system may optionally include providing a promote checkout request to a memory management unit (MMU).
    Type: Application
    Filed: August 31, 2018
    Publication date: March 5, 2020
    Inventors: Bartholomew Blaner, Jay G. Heaslip, Benjamin Herrenschmidt, Robert D. Herzl, Jody Joyner, Jon K. Kriegel, Charles D. Wait
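    Illustrative sketch (field names and the promote-checkout stand-in are assumptions): a Python model of the permission-miss path, where the effective address already has an ERAT entry but its cached permissions do not cover the request, so the entry's permissions are changed and the real address is returned without a full re-translation.
      erat = {0x7000: {"ra": 0x9000, "perms": {"read"}}}   # EA page -> cached entry

      def mmu_promote_checkout(ea_page, needed_perms):
          # Stand-in for asking the MMU whether the entry may be held with more rights.
          return needed_perms                               # assume the MMU agrees

      def translate(ea, needed_perms):
          entry = erat.get(ea & ~0xFFF)
          if entry is None:
              raise LookupError("ERAT miss: a full translation is needed")
          if not needed_perms <= entry["perms"]:            # permission miss, not an ERAT miss
              entry["perms"] |= mmu_promote_checkout(ea & ~0xFFF, needed_perms)
          return entry["ra"] | (ea & 0xFFF)

      if __name__ == "__main__":
          print(hex(translate(0x7010, {"read"})))            # 0x9010, ordinary hit
          print(hex(translate(0x7010, {"read", "write"})))   # 0x9010 after the promotion
          print(erat[0x7000]["perms"])                        # now includes "write"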
  • Publication number: 20200073816
    Abstract: A method, computer program product, and a computer system are disclosed for processing information in a processor that in one or more embodiments includes setting a threshold number of free Effective to Real Address Translation (ERAT) cache entries in an ERAT cache; determining whether a total number of free ERAT cache entries is less than or equal to the threshold number of free ERAT cache entries; allocating, in response to determining that the total number of free entries is less than or equal to the threshold number, one or more active ERAT cache entries to be speculatively checked in to a memory management unit (MMU); and speculatively checking in the one or more active ERAT cache entries to the MMU.
    Type: Application
    Filed: August 30, 2018
    Publication date: March 5, 2020
    Applicants: International Business Machines Corporation
    Inventors: Bartholomew Blaner, Jay G. Heaslip, Robert D. Herzl, Jody B. Joyner, Jeffrey A. Stuecheli
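    Illustrative sketch (the victim choice and sizes are assumptions): a Python model of the threshold policy, where once the number of free ERAT entries drops to the threshold, the oldest active entries are speculatively checked in to the MMU so later misses never wait for an eviction.
      from collections import OrderedDict

      CAPACITY, THRESHOLD = 8, 2
      erat = OrderedDict()             # EA page -> real page, oldest entry first

      def check_in_to_mmu(ea_page):
          pass                         # stand-in for the speculative check-in message

      def install(ea_page, ra_page):
          erat[ea_page] = ra_page
          erat.move_to_end(ea_page)
          free = CAPACITY - len(erat)
          if free <= THRESHOLD:        # total free entries <= threshold number
              victims = list(erat)[: THRESHOLD - free + 1]
              for victim in victims:   # speculatively check active entries back in
                  check_in_to_mmu(victim)
                  del erat[victim]

      if __name__ == "__main__":
          for page in range(0x10):
              install(page << 12, (page + 0x80) << 12)
          print(len(erat), CAPACITY - len(erat))   # 5 3: free entries stay above the threshold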
  • Patent number: 10572337
    Abstract: Hardware accelerator memory address translation fault resolution is provided. A hardware accelerator and a switchboard are in communication with a processing core. The hardware accelerator determines at least one memory address translation related to an operation having a fault. The operation and the fault memory address translation are flushed from the hardware accelerator including augmenting the operation with an entity identifier. The switchboard forwards the operation with the fault memory address translation and the entity identifier from the hardware accelerator to a second buffer. The operating system repairs the fault memory address translation. The operating system sends the operation to the processing core utilizing an effective address based on the entity identifier. The switchboard, supported by the processing core, forwards the operation with the repaired memory address translation to a first buffer and the hardware accelerator executes the operation with the repaired address.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: February 25, 2020
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Richard L. Arndt, Bartholomew Blaner
  • Patent number: 10572381
    Abstract: Updating cache devices includes a processor to detect a first set of hash functions and a first bit array corresponding to elements of a cache. In some examples, the processor detects a first instruction to add a new element to the cache and modify the first bit array based on the new element. Additionally, the processor processes a first invalidation operation and generates a second bit array and a second set of hash functions, while processing additional instructions. The processor deletes the first bit array and the first set of hash functions in response to detecting that the second bit array and the second set of hash functions have each been generated. Some examples process a second invalidation operation with the second set of hash functions and the second bit array.
    Type: Grant
    Filed: December 14, 2017
    Date of Patent: February 25, 2020
    Assignee: International Business Machines Corporation
    Inventors: Michael Bar-Joshua, Yiftach Benjamini, Bartholomew Blaner, Michael Grubman
  • Patent number: 10565102
    Abstract: Updating cache devices includes a processor to detect a first set of hash functions and a first bit array corresponding to elements of a cache. In some examples, the processor detects a first instruction to add a new element to the cache and modify the first bit array based on the new element. Additionally, the processor processes a first invalidation operation and generates a second bit array and a second set of hash functions, while processing additional instructions. The processor deletes the first bit array and the first set of hash functions in response to detecting that the second bit array and the second set of hash functions have each been generated. Some examples process a second invalidation operation with the second set of hash functions and the second bit array.
    Type: Grant
    Filed: July 19, 2017
    Date of Patent: February 18, 2020
    Assignee: International Business Machines Corporation
    Inventors: Michael Bar-Joshua, Yiftach Benjamini, Bartholomew Blaner, Michael Grubman
  • Patent number: 10552313
    Abstract: Updating cache devices includes a processor to detect a first set of hash functions and a first bit array corresponding to elements of a cache. In some examples, the processor detects a first instruction to add a new element to the cache and modify the first bit array based on the new element. Additionally, the processor processes a first invalidation operation and generates a second bit array and a second set of hash functions, while processing additional instructions. The processor deletes the first bit array and the first set of hash functions in response to detecting that the second bit array and the second set of hash functions have each been generated. Some examples process a second invalidation operation with the second set of hash functions and the second bit array.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: February 4, 2020
    Assignee: International Business Machines Corporation
    Inventors: Michael Bar-Joshua, Yiftach Benjamini, Bartholomew Blaner, Michael Grubman
  • Patent number: 10545816
    Abstract: Hardware accelerator memory address translation fault resolution is provided. A hardware accelerator and a switchboard are in communication with a processing core. The hardware accelerator pulls an operation from a first buffer and adjusts a receive credit value in a first window context operatively coupled to the hypervisor. The receive credit value limits a first quantity of one or more first tasks in the first buffer. The hardware accelerator determines at least one memory address translation related to the operation having a fault. The switchboard forwards the operation with the fault memory address translation from the hardware accelerator to a second buffer. The operation and the fault memory address translation are flushed from the hardware accelerator, and the operating system repairs the fault memory address translation.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: January 28, 2020
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Richard L. Arndt, Bartholomew Blaner
  • Patent number: 10528418
    Abstract: Hardware accelerator memory address translation fault resolution is provided. A hardware accelerator and a switchboard are in communication with a processing core. The hardware accelerator determines at least one memory address translation related to an operation having a fault. The switchboard forwards the operation with the fault memory address translation from the hardware accelerator to a second buffer. The operation and the fault memory address translation are flushed from the hardware accelerator, and the operating system repairs the fault memory address translation. The switchboard forwards the operation with the repaired memory address translation from the second buffer to a first buffer and the hardware accelerator executes the operation with the repaired address.
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: January 7, 2020
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Richard L. Arndt, Bartholomew Blaner