Patents by Inventor Arthur J. O'Neill

Arthur J. O'Neill is a named inventor on the patent filings listed below. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10824565
    Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: November 3, 2020
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill, Robert J. Sonnelitter, III
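Several of the filings below share this abstract; the protocol is chosen only once, at initialization, from the discovered topology. The following is a minimal sketch in C of that idea, assuming a hypothetical system that picks between a snoop-style and a directory-style coherency handler based on the number of clusters found; the structures, names, and threshold are illustrative and not taken from the patent.

```c
#include <stdio.h>

/* Hypothetical topology descriptor discovered at initialization time. */
struct topology {
    int clusters;             /* number of processor clusters found */
    int chips_per_cluster;    /* CP chips attached to each cluster  */
};

enum protocol { PROTO_SNOOP, PROTO_DIRECTORY };

/* Pick one of the supported coherency protocols from the discovered
 * topology; the threshold here is an arbitrary illustrative choice. */
static enum protocol select_protocol(const struct topology *t)
{
    return (t->clusters > 1) ? PROTO_DIRECTORY : PROTO_SNOOP;
}

/* All later coherency requests are routed through the protocol that
 * was chosen once at initialization. */
static void handle_coherency_request(enum protocol p, unsigned long addr)
{
    if (p == PROTO_SNOOP)
        printf("snoop broadcast for line 0x%lx\n", addr);
    else
        printf("directory lookup for line 0x%lx\n", addr);
}

int main(void)
{
    struct topology t = { .clusters = 4, .chips_per_cluster = 3 }; /* example config */
    enum protocol p = select_protocol(&t);          /* done once at init */
    handle_coherency_request(p, 0x1000);
    handle_coherency_request(p, 0x2040);
    return 0;
}
```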
  • Patent number: 10402328
    Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
    Type: Grant
    Filed: January 2, 2018
    Date of Patent: September 3, 2019
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill, Robert J. Sonnelitter, III
  • Patent number: 10394712
    Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
    Type: Grant
    Filed: January 2, 2018
    Date of Patent: August 27, 2019
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill, Robert J. Sonnelitter, III
  • Publication number: 20190205251
    Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 4, 2019
    Applicant: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill, Robert J. Sonnelitter, III
  • Patent number: 10310982
    Abstract: A computer-implemented method for managing cache memory in a distributed symmetric multiprocessing computer is described. The method may include receiving, at a first central processor (CP) chip, a fetch request from a first chip. The method may further include determining via address compare mechanisms on the first CP chip whether one or more of a second CP chip and a third CP chip is requesting access to a target line. The first chip, the second chip, and the third chip are within the same chip cluster. The method further includes providing access to the target line if both of the second CP chip and the third CP chip have accessed the target line at least one time since the first CP chip has accessed the target line.
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: June 4, 2019
    Assignee: International Business Machines Corporation
    Inventors: Deanna Postles Dunn Berger, Johnathon J. Hoste, Pak-kin Mak, Arthur J. O'Neill, Jr., Robert J. Sonnelitter, III
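A toy model in C of the access check described in this abstract, assuming a three-chip cluster in which each chip's accesses to the contested target line are tracked relative to the requesting chip; all names and the bookkeeping layout are illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

#define CHIPS_PER_CLUSTER 3

/* Bookkeeping kept from the point of view of one requesting chip:
 * which peer chips have touched the target line since the requester
 * last held it.  Purely illustrative layout. */
struct line_state {
    bool peer_accessed[CHIPS_PER_CLUSTER];  /* index of requester unused */
};

/* Grant the requester only if every other chip in the cluster has
 * accessed the target line at least once since the requester did. */
static bool may_access(const struct line_state *line, int requester)
{
    for (int chip = 0; chip < CHIPS_PER_CLUSTER; chip++)
        if (chip != requester && !line->peer_accessed[chip])
            return false;          /* a peer has not had its turn yet */
    return true;
}

/* Record an access; a grant to the requester starts a new window in
 * which the peers must each be served again. */
static void record_access(struct line_state *line, int chip, int requester)
{
    if (chip == requester)
        for (int i = 0; i < CHIPS_PER_CLUSTER; i++)
            line->peer_accessed[i] = false;
    else
        line->peer_accessed[chip] = true;
}

int main(void)
{
    struct line_state line = { { false, false, false } };
    int requester = 0;

    printf("grant? %d\n", may_access(&line, requester));  /* 0: peers not served */
    record_access(&line, 1, requester);
    record_access(&line, 2, requester);
    printf("grant? %d\n", may_access(&line, requester));  /* 1: both peers served */
    record_access(&line, requester, requester);           /* window resets */
    printf("grant? %d\n", may_access(&line, requester));  /* 0 again */
    return 0;
}
```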
  • Patent number: 10169260
    Abstract: In an approach for managing data transfer across a bus shared by processors, a request for a first set of data is received from a first processor. A request for a second set of data is received from a second processor. First portions of the first set of data and the second set of data are written to a buffer. Additional portions of each set of data are written to the buffer as portions are received. It is determined that a portion of the first set of data has a higher priority to the bus than a portion of the second set of data based on a priority scheme, wherein the priority scheme is based on return progress of each respective set of data having at least a portion of data in the buffer. The portion of the first set of data is granted access to the bus.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael Fee, Arthur J. O'Neill, Jr.
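A small C sketch of the buffer arbitration described above. The abstract only states that priority follows the return progress of the data sets with portions in the buffer; the specific policy here, favoring the transfer that is furthest along so its buffer space frees up sooner, is an assumption for illustration, as are all names and fields.

```c
#include <stdio.h>

/* One in-flight data return sharing the bus; fields are illustrative. */
struct transfer {
    const char *name;
    int portions_total;     /* portions making up the full data set    */
    int portions_returned;  /* portions already sent back over the bus */
    int portions_buffered;  /* portions waiting in the shared buffer   */
};

/* Pick which buffered transfer gets the next bus beat.  Here the
 * priority scheme favors the transfer whose return is furthest along
 * (an assumed policy; the abstract only ties priority to return
 * progress of the sets that have data in the buffer). */
static struct transfer *arbitrate(struct transfer *a, struct transfer *b)
{
    if (a->portions_buffered == 0) return b;
    if (b->portions_buffered == 0) return a;
    double pa = (double)a->portions_returned / a->portions_total;
    double pb = (double)b->portions_returned / b->portions_total;
    return (pa >= pb) ? a : b;
}

int main(void)
{
    struct transfer t1 = { "proc-1 fetch", 8, 6, 1 };   /* 75% returned */
    struct transfer t2 = { "proc-2 fetch", 8, 2, 3 };   /* 25% returned */
    struct transfer *winner = arbitrate(&t1, &t2);

    printf("%s gets the bus next\n", winner->name);
    winner->portions_returned++;
    winner->portions_buffered--;
    return 0;
}
```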
  • Publication number: 20180365070
    Abstract: Methods, systems, and computer program products for managing broadcasts in a distributed symmetric multiprocessing computer are provided. Aspects include defining a default broadcast rate for a plurality of processors in the distributed symmetric multiprocessing computer, wherein the default broadcast rate is a rate at which a processor broadcasts a request for a resource. The one or more broadcasted requests by a first processor are monitored and related responses are utilized to determine a state of the one or more broadcasted requests. The default broadcast rate is adjusted based at least in part on the state of the one or more broadcasted requests.
    Type: Application
    Filed: June 16, 2017
    Publication date: December 20, 2018
    Inventors: Michael A. Blake, Mike Chow, Kenneth D. Klapproth, Pak-kin Mak, Arthur J. O'Neill, Jr., Robert J. Sonnelitter, III
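A rough C sketch of the adaptive broadcast rate described in this abstract, assuming that rejected (contended) requests drive the rate down from its default and granted requests let it recover; the step sizes, state names, and limits are illustrative.

```c
#include <stdio.h>

/* Possible outcomes of a broadcast request, as inferred from the
 * responses the requesting processor receives back.  Illustrative. */
enum bcast_state { BCAST_GRANTED, BCAST_REJECTED };

struct broadcaster {
    int rate;        /* broadcasts permitted per interval */
    int min_rate;
    int max_rate;
};

/* Every processor starts at the default rate, then adjusts it from the
 * observed state of its outstanding broadcasts: back off on rejection
 * (contention for the resource), recover when requests are granted.
 * The exact step sizes are an assumption for illustration. */
static void adjust_rate(struct broadcaster *b, enum bcast_state state)
{
    if (state == BCAST_REJECTED && b->rate > b->min_rate)
        b->rate /= 2;                       /* contention: slow down   */
    else if (state == BCAST_GRANTED && b->rate < b->max_rate)
        b->rate += 1;                       /* progress: speed back up */
}

int main(void)
{
    struct broadcaster cpu = { .rate = 8, .min_rate = 1, .max_rate = 8 };

    adjust_rate(&cpu, BCAST_REJECTED);
    adjust_rate(&cpu, BCAST_REJECTED);
    printf("rate after contention: %d\n", cpu.rate);   /* 2 */
    adjust_rate(&cpu, BCAST_GRANTED);
    printf("rate after a grant:    %d\n", cpu.rate);   /* 3 */
    return 0;
}
```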
  • Publication number: 20180173630
    Abstract: A computer-implemented method for managing cache memory in a distributed symmetric multiprocessing computer is described. The method may include receiving, at a first central processor (CP) chip, a fetch request from a first chip. The method may further include determining via address compare mechanisms on the first CP chip whether one or more of a second CP chip and a third CP chip is requesting access to a target line. The first chip, the second chip, and the third chip are within the same chip cluster. The method further includes providing access to the target line if both of the second CP chip and the third CP chip have accessed the target line at least one time since the first CP chip has accessed the target line.
    Type: Application
    Filed: December 15, 2016
    Publication date: June 21, 2018
    Inventors: Deanna Postles Dunn Berger, Johnathon J. Hoste, Pak-kin Mak, Arthur J. O'Neill, Jr., Robert J. Sonnelitter, III
  • Publication number: 20180121359
    Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
    Type: Application
    Filed: January 2, 2018
    Publication date: May 3, 2018
    Applicant: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill, Robert J. Sonnelitter, III
  • Publication number: 20180121358
    Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
    Type: Application
    Filed: January 2, 2018
    Publication date: May 3, 2018
    Applicant: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill, Robert J. Sonnelitter, III
  • Publication number: 20180107617
    Abstract: In an approach for managing data transfer across a bus shared by processors, a request for a first set of data is received from a first processor. A request for a second set of data is received from a second processor. First portions of the first set of data and the second set of data are written to a buffer. Additional portions of each set of data are written to the buffer as portions are received. It is determined that a portion of the first set of data has a higher priority to the bus than a portion of the second set of data based on a priority scheme, wherein the priority scheme is based on return progress of each respective set of data having at least a portion of data in the buffer. The portion of the first set of data is granted access to the bus.
    Type: Application
    Filed: December 15, 2017
    Publication date: April 19, 2018
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael Fee, Arthur J. O'Neill, Jr.
  • Patent number: 9898407
    Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: February 20, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill, Robert J. Sonnelitter, III
  • Patent number: 9892067
    Abstract: In an approach for managing data transfer across a bus shared by processors, a request for a first set of data is received from a first processor. A request for a second set of data is received from a second processor. First portions of the first set of data and the second set of data are written to a buffer. Additional portions of each set of data are written to the buffer as portions are received. It is determined that a portion of the first set of data has a higher priority to the bus than a portion of the second set of data based on a priority scheme, wherein the priority scheme is based on return progress of each respective set of data having at least a portion of data in the buffer. The portion of the first set of data is granted access to the bus.
    Type: Grant
    Filed: January 29, 2015
    Date of Patent: February 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael Fee, Arthur J. O'Neill, Jr.
  • Patent number: 9886382
    Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
    Type: Grant
    Filed: November 20, 2014
    Date of Patent: February 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill, Robert J. Sonnelitter, III
  • Patent number: 9792213
    Abstract: Various embodiments mitigate busy time in a hierarchical store-through memory cache structure including a cache directory associated with a memory cache. The cache directory is divided into a plurality of portions each associated with a portion of memory cache. A determination is made that a first subpipe of a shared cache pipeline comprises a non-store request. The shared pipeline is communicatively coupled to the plurality of portions of the cache directory. A store command is prevented from being placed in a second subpipe of the shared cache pipeline based on determining that a first subpipe of the shared cache pipeline comprises a non-store request. Simultaneous cache lookup operations are supported between the plurality of portions of the cache directory and cache write operations. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory.
    Type: Grant
    Filed: September 8, 2015
    Date of Patent: October 17, 2017
    Assignee: International Business Machines Corporation
    Inventors: Deanna P. Berger, Michael F. Fee, Christine C. Jones, Arthur J. O'Neill, Diana L. Orf, Robert J. Sonnelitter, III
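A simplified C model of the store-gating rule in this abstract: a store is kept out of the second subpipe whenever the first subpipe holds a non-store request, so directory lookups and cache writes do not collide on the shared resources. The two-subpipe structure and operation names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

enum op_kind { OP_NONE, OP_STORE, OP_FETCH };   /* OP_FETCH = non-store */

/* A two-subpipe shared cache pipeline stage; illustrative layout. */
struct shared_pipe {
    enum op_kind subpipe[2];
};

/* Gate modeled on the abstract above: a store may only be placed in
 * the second subpipe when the first subpipe does not already hold a
 * non-store request. */
static bool try_issue_store(struct shared_pipe *p)
{
    if (p->subpipe[0] == OP_FETCH)
        return false;                 /* non-store present: hold the store */
    if (p->subpipe[1] == OP_NONE) {
        p->subpipe[1] = OP_STORE;     /* store proceeds alongside subpipe 0 */
        return true;
    }
    return false;
}

int main(void)
{
    struct shared_pipe p = { { OP_STORE, OP_NONE } };
    printf("store issued alongside a store: %d\n", try_issue_store(&p)); /* 1 */

    struct shared_pipe q = { { OP_FETCH, OP_NONE } };
    printf("store issued alongside a fetch: %d\n", try_issue_store(&q)); /* 0 */
    return 0;
}
```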
  • Patent number: 9703661
    Abstract: In an approach for taking corrupt portions of cache offline during runtime, a notification of a section of a cache to be taken offline is received, wherein the section includes one or more sets in one or more indexes of the cache. An indication is associated with each set of the one or more sets in a first index of the one or more indexes, wherein the indication marks the respective set as unusable for future operations. Data is purged from the one or more sets in the first index of the cache. Each set of the one or more sets in the first index is marked as invalid.
    Type: Grant
    Filed: February 5, 2015
    Date of Patent: July 11, 2017
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Michael A. Blake, Michael Fee, Arthur J. O'Neill, Jr.
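A minimal C sketch of the runtime offlining steps in this abstract (shared with the following entry): mark the affected sets unusable, purge their data, and mark them invalid. The directory layout, sizes, and field names are assumptions made for illustration.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define INDEXES 4
#define SETS    8   /* sets (ways) per index, kept small for the sketch */

/* Minimal per-set directory state; field names are illustrative. */
struct cache_set {
    bool valid;
    bool offline;     /* "unusable for future operations" indication */
    char data[64];
};

static struct cache_set cache[INDEXES][SETS];

/* Take one set of one index offline during runtime: mark it unusable,
 * purge its data, then mark it invalid, mirroring the steps in the
 * abstract above. */
static void take_set_offline(int index, int set)
{
    struct cache_set *s = &cache[index][set];
    s->offline = true;                   /* no new allocations land here  */
    memset(s->data, 0, sizeof s->data);  /* purge the (possibly bad) data */
    s->valid = false;                    /* future lookups miss this set  */
}

/* Replacement policy honors the offline indication. */
static bool set_usable(int index, int set)
{
    return !cache[index][set].offline;
}

int main(void)
{
    cache[2][5].valid = true;
    take_set_offline(2, 5);
    printf("index 2, set 5 usable: %d\n", set_usable(2, 5));  /* 0 */
    printf("index 2, set 4 usable: %d\n", set_usable(2, 4));  /* 1 */
    return 0;
}
```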
  • Patent number: 9678848
    Abstract: In an approach for taking corrupt portions of cache offline during runtime, a notification of a section of a cache to be taken offline is received, wherein the section includes one or more sets in one or more indexes of the cache. An indication is associated with each set of the one or more sets in a first index of the one or more indexes, wherein the indication marks the respective set as unusable for future operations. Data is purged from the one or more sets in the first index of the cache. Each set of the one or more sets in the first index is marked as invalid.
    Type: Grant
    Filed: September 7, 2016
    Date of Patent: June 13, 2017
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Michael A. Blake, Michael Fee, Arthur J. O'Neill, Jr.
  • Patent number: 9645904
    Abstract: A technique is provided for accumulating failures. A failure of a first row is detected in a group of array macros, the first row having first row address values. A mask has mask bits corresponding to each of the first row address values. The mask bits are initially in active status. A failure of a second row, having second row address values, is detected. When none of the first row address values matches the second row address values, and when mask bits are all in the active status, the array macros are determined to be bad. When at least one of the first row address values matches the second row address values, mask bits that correspond to at least one of the first row address values that match are kept in active status, and mask bits that correspond to non-matching first address values are set to inactive status.
    Type: Grant
    Filed: August 25, 2016
    Date of Patent: May 9, 2017
    Assignee: International Business Machines Corporation
    Inventors: Michael F. Fee, Patrick J. Meaney, Arthur J. O'Neill, Jr.
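A compact C sketch of the failure-accumulation rule in this abstract, treating each bit of the row address as one "address value"; the abstract only spells out the two-failure case, so the bit-level model and any behavior beyond the second failure are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

struct fail_accum {
    bool have_first;            /* a first failing row has been seen        */
    unsigned char first_addr;   /* its row address values                   */
    unsigned char mask;         /* one mask bit per address value, 1=active */
    bool macros_bad;
};

/* Accumulate a detected row failure following the rule in the abstract:
 * if none of the first row's address values matches the new failure and
 * the mask bits are all still active, the group of array macros is
 * determined to be bad; otherwise only the matching positions are kept
 * in the active status. */
static void record_failure(struct fail_accum *fa, unsigned char row_addr)
{
    if (!fa->have_first) {
        fa->have_first = true;
        fa->first_addr = row_addr;
        fa->mask = 0xFF;                      /* all mask bits start active */
        return;
    }

    unsigned char match = (unsigned char)~(fa->first_addr ^ row_addr);
    if (match == 0 && fa->mask == 0xFF)
        fa->macros_bad = true;                /* unrelated failure: bad macros */
    else
        fa->mask &= match;                    /* keep only matching positions  */
}

int main(void)
{
    struct fail_accum fa = { 0 };
    record_failure(&fa, 0xA5);                /* first failing row              */
    record_failure(&fa, 0xA7);                /* some values match: narrow mask */
    printf("macros bad: %d, mask: 0x%02X\n", fa.macros_bad, fa.mask);

    struct fail_accum fb = { 0 };
    record_failure(&fb, 0xA5);
    record_failure(&fb, 0x5A);                /* nothing matches: macros bad    */
    printf("macros bad: %d\n", fb.macros_bad);
    return 0;
}
```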
  • Patent number: 9600360
    Abstract: An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways including a cache data way that contains the data block. The data block is fetched and it is determined whether the in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, returning the fetched data block to the requestor and performing an ECC process for the fetched data block subsequent to returning the fetched data block to the requestor. Based on determining that in-line ECC checking and correcting should not be bypassed, performing the ECC process for the fetched data block and returning the fetched data block to the requestor subsequent to performing the ECC process.
    Type: Grant
    Filed: November 21, 2014
    Date of Patent: March 21, 2017
    Assignee: International Business Machines Corporation
    Inventors: Michael F. Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Deanna Postles Dunn Berger
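A small C sketch of the bypass decision in this abstract (shared with the following entry), assuming a per-way bypass indicator and a stand-in ECC routine: when the indicator is set, the block is handed to the requestor before the ECC step runs; otherwise ECC runs in line first. All names and the data layout are illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

#define WAYS 4

/* Per-way bypass indicator plus the (simulated) data; illustrative. */
struct cache_way {
    bool bypass_ecc;     /* this way is trusted enough to check late */
    unsigned data;
};

static struct cache_way ways[WAYS] = {
    { true,  0x11 }, { false, 0x22 }, { true, 0x33 }, { false, 0x44 },
};

/* Stand-in for the in-line ECC check-and-correct step. */
static unsigned ecc_check_and_correct(unsigned block)
{
    /* a real implementation would verify and repair the block here */
    return block;
}

/* Fetch path following the abstract above: when the way's bypass
 * indicator is set, hand the block to the requestor first and run ECC
 * afterwards; otherwise run ECC in line before the block is handed over. */
static void fetch(int way, void (*deliver)(unsigned block))
{
    unsigned block = ways[way].data;

    if (ways[way].bypass_ecc) {
        deliver(block);                       /* requestor gets data first */
        ecc_check_and_correct(block);         /* ECC runs after delivery   */
    } else {
        block = ecc_check_and_correct(block); /* ECC runs before delivery  */
        deliver(block);
    }
}

static void requestor(unsigned block)
{
    printf("requestor received 0x%02X\n", block);
}

int main(void)
{
    fetch(0, requestor);   /* bypass path  */
    fetch(1, requestor);   /* in-line path */
    return 0;
}
```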
  • Patent number: 9600361
    Abstract: An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways including a cache data way that contains the data block. The data block is fetched and it is determined whether the in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, returning the fetched data block to the requestor and performing an ECC process for the fetched data block subsequent to returning the fetched data block to the requestor. Based on determining that in-line ECC checking and correcting should not be bypassed, performing the ECC process for the fetched data block and returning the fetched data block to the requestor subsequent to performing the ECC process.
    Type: Grant
    Filed: August 12, 2015
    Date of Patent: March 21, 2017
    Assignee: International Business Machines Corporation
    Inventors: Michael F. Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Deanna Postles Dunn Berger