Patents by Inventor Arthur J. O'Neill, Jr.

Arthur J. O'Neill, Jr. has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160239375
    Abstract: A technique is provided for accumulating failures. A failure of a first row is detected in a group of array macros, the first row having first row address values. A mask has mask bits corresponding to each of the first row address values. The mask bits are initially in active status. A failure of a second row, having second row address values, is detected. When none of the first row address values matches the second row address values, and when mask bits are all in the active status, the array macros are determined to be bad. When at least one of the first row address values matches the second row address values, mask bits that correspond to at least one of the first row address values that match are kept in active status, and mask bits that correspond to non-matching first row address values are set to inactive status.
    Type: Application
    Filed: March 21, 2016
    Publication date: August 18, 2016
    Inventors: Michael F. Fee, Patrick J. Meaney, Arthur J. O'Neill, Jr.
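    The accumulation scheme described in this abstract lends itself to a short sketch. The C program below is illustrative only; the number of row address fields, the structure names, and the fall-through behaviour for cases the abstract does not cover are assumptions, not details from the patent.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_ADDR_FIELDS 4   /* hypothetical count of row address values */

    struct fail_state {
        unsigned first_addr[NUM_ADDR_FIELDS];  /* address values of the first failing row */
        bool mask_active[NUM_ADDR_FIELDS];     /* mask bits, initially all active */
    };

    static void record_first_failure(struct fail_state *s,
                                     const unsigned addr[NUM_ADDR_FIELDS]) {
        for (int i = 0; i < NUM_ADDR_FIELDS; i++) {
            s->first_addr[i]  = addr[i];
            s->mask_active[i] = true;          /* mask bits start in active status */
        }
    }

    /* Returns true when the group of array macros should be declared bad. */
    static bool accumulate_failure(struct fail_state *s,
                                   const unsigned addr[NUM_ADDR_FIELDS]) {
        bool any_match = false, all_active = true;
        for (int i = 0; i < NUM_ADDR_FIELDS; i++) {
            if (s->first_addr[i] == addr[i]) any_match = true;
            if (!s->mask_active[i])          all_active = false;
        }
        if (!any_match && all_active)
            return true;                       /* no shared address values: macros are bad */
        if (any_match) {
            for (int i = 0; i < NUM_ADDR_FIELDS; i++)
                if (s->first_addr[i] != addr[i])
                    s->mask_active[i] = false; /* deactivate non-matching address values */
        }
        return false;
    }

    int main(void) {
        struct fail_state s = {0};
        unsigned first[NUM_ADDR_FIELDS]  = {3, 7, 1, 0};
        unsigned second[NUM_ADDR_FIELDS] = {3, 9, 1, 5}; /* shares two values with first */
        record_first_failure(&s, first);
        printf("array macros bad? %s\n", accumulate_failure(&s, second) ? "yes" : "no");
        return 0;
    }
    ```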
  • Publication number: 20160239378
    Abstract: A method, system, and/or computer program product for dynamic array masking is provided. Dynamic array masking includes, during execution of computer instructions that access a cache memory, detecting an error condition in a portion of the cache memory. The portion of the cache memory contains an array macro. Dynamic array masking, during the execution of the computer instructions that access a cache memory, further includes dynamically setting mask bits to indicate the error condition in the portion of the cache memory and preventing subsequent writes to the portion of the cache memory in accordance with the dynamically set mask bits. Embodiments also include evicting cache entries from the portion of the cache memory. This evicting can include performing a cache purge operation for the cache entries corresponding to the dynamically set mask bits.
    Type: Application
    Filed: February 12, 2015
    Publication date: August 18, 2016
    Inventors: Michael A. Blake, Hieu T. Huynh, Pak-kin Mak, Arthur J. O'Neill, Jr., Rebecca S. Wisniewski
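    A minimal sketch of the dynamic-masking flow described in this abstract: a per-portion mask bit is set when an error is detected, later writes honour the mask, and resident entries in the masked portion are purged. The eight-portion layout and all names are assumptions for illustration, not taken from the patent.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    #define PORTIONS 8                 /* hypothetical number of array macros */

    static bool mask_bit[PORTIONS];    /* set => portion is masked off */
    static int  resident[PORTIONS];    /* stand-in for cache entries per portion */

    static void purge_portion(int p) {
        resident[p] = 0;               /* evict entries from the masked portion */
    }

    static void error_detected(int p) {
        mask_bit[p] = true;            /* dynamically set the mask bit */
        purge_portion(p);              /* cache purge for the masked portion */
    }

    static bool try_write(int p) {
        if (mask_bit[p]) return false; /* writes honour the mask bits */
        resident[p]++;
        return true;
    }

    int main(void) {
        printf("write before error: %d\n", try_write(3));
        error_detected(3);
        printf("write after mask:   %d\n", try_write(3));
        return 0;
    }
    ```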
  • Publication number: 20160232067
    Abstract: In an approach for taking corrupt portions of cache offline during runtime, a notification of a section of a cache to be taken offline is received, wherein the section includes one or more sets in one or more indexes of the cache. An indication is associated with each set of the one or more sets in a first index of the one or more indexes, wherein the indication marks the respective set as unusable for future operations. Data is purged from the one or more sets in the first index of the cache. Each set of the one or more sets in the first index is marked as invalid.
    Type: Application
    Filed: February 5, 2015
    Publication date: August 11, 2016
    Inventors: Ekaterina M. Ambroladze, Michael A. Blake, Michael Fee, Arthur J. O'Neill, Jr.
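    The offline sequence in this abstract (mark the sets unusable, purge their data, then mark them invalid) can be sketched as follows; the 16-index, 4-set geometry and the flag names are illustrative assumptions only.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    #define INDEXES 16   /* hypothetical congruence classes */
    #define SETS     4   /* hypothetical sets per index */

    static bool offline[INDEXES][SETS]; /* marked unusable for future operations */
    static bool valid[INDEXES][SETS];   /* set currently holds data */

    static void take_sets_offline(int index, const bool which[SETS]) {
        for (int s = 0; s < SETS; s++) {
            if (!which[s]) continue;
            offline[index][s] = true;   /* associate the "unusable" indication */
            /* purge: write back or discard the data held in this set */
            valid[index][s] = false;    /* finally mark the set invalid */
        }
    }

    int main(void) {
        bool which[SETS] = { false, true, true, false };
        valid[5][1] = valid[5][2] = true;
        take_sets_offline(5, which);
        printf("index 5, set 1: offline=%d valid=%d\n", offline[5][1], valid[5][1]);
        return 0;
    }
    ```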
  • Publication number: 20160232052
    Abstract: In an approach for taking corrupt portions of cache offline during runtime, a notification of a section of a cache to be taken offline is received, wherein the section includes one or more sets in one or more indexes of the cache. An indication is associated with each set of the one or more sets in a first index of the one or more indexes, wherein the indication marks the respective set as unusable for future operations. Data is purged from the one or more sets in the first index of the cache. Each set of the one or more sets in the first index is marked as invalid.
    Type: Application
    Filed: April 13, 2016
    Publication date: August 11, 2016
    Inventors: Ekaterina M. Ambroladze, Michael A. Blake, Michael Fee, Arthur J. O'Neill, Jr.
  • Publication number: 20160232099
    Abstract: In an approach for backing up designated data located in a cache, data stored within an index of a cache is identified, wherein the data has an associated designation indicating that the data is applicable to be backed up to a higher level memory. It is determined that the data stored to the cache has been updated. A status associated with the data is adjusted, such that the adjusted status indicates that the data stored to the cache has not been changed. A copy of the data is created. The copy of the data is stored to the higher level memory.
    Type: Application
    Filed: February 9, 2015
    Publication date: August 11, 2016
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Garrett M. Drapala, Michael Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Diana L. Orf
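    A small sketch of the backup flow in this abstract: a line carrying the backup designation is copied to higher-level memory once it has been updated, with its change status cleared before the copy is made. The line size, field names, and the single-buffer stand-in for the higher-level memory are assumptions for the example.

    ```c
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define LINE_BYTES 16

    struct cache_line {
        char data[LINE_BYTES];
        bool backup_designated;     /* eligible for backup to higher-level memory */
        bool changed;               /* set when the cached data is updated */
    };

    static char higher_level[LINE_BYTES];  /* stand-in for the higher-level memory */

    static void update_line(struct cache_line *l, const char *src) {
        strncpy(l->data, src, LINE_BYTES - 1);
        l->changed = true;
    }

    static void back_up_if_needed(struct cache_line *l) {
        if (!l->backup_designated || !l->changed) return;
        l->changed = false;                        /* adjust status: "not changed" */
        memcpy(higher_level, l->data, LINE_BYTES); /* copy stored to higher level */
    }

    int main(void) {
        struct cache_line line = { .backup_designated = true };
        update_line(&line, "backup me");
        back_up_if_needed(&line);
        printf("higher-level memory holds: %s\n", higher_level);
        return 0;
    }
    ```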
  • Publication number: 20160224481
    Abstract: In an approach for managing data transfer across a bus shared by processors, a request for a first set of data is received from a first processor. A request for a second set of data is received from a second processor. First portions of the first set of data and the second set of data are written to a buffer. Additional portions of each set of data are written to the buffer as portions are received. It is determined that a portion of the first set of data has a higher priority to the bus than a portion of the second set of data based on a priority scheme, wherein the priority scheme is based on return progress of each respective set of data having at least a portion of data in the buffer. The portion of the first set of data is granted access to the bus.
    Type: Application
    Filed: January 29, 2015
    Publication date: August 4, 2016
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael Fee, Arthur J. O'Neill, Jr.
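    The abstract leaves the exact priority rule open, so the sketch below encodes one plausible reading: among transfers with portions waiting in the buffer, the bus is granted to the one whose return is furthest along. The structure names and the two-transfer setup are illustrative assumptions, not the patent's scheme.

    ```c
    #include <stdio.h>

    struct transfer {
        const char *name;
        int portions_total;      /* portions making up the requested data set */
        int portions_returned;   /* portions already returned over the shared bus */
        int portions_buffered;   /* portions currently waiting in the buffer */
    };

    /* One plausible rule: among transfers with data in the buffer, grant the
     * bus to the one that has made the most return progress. */
    static const struct transfer *pick_next(const struct transfer *a,
                                            const struct transfer *b) {
        if (a->portions_buffered == 0) return b;
        if (b->portions_buffered == 0) return a;
        double pa = (double)a->portions_returned / a->portions_total;
        double pb = (double)b->portions_returned / b->portions_total;
        return (pa >= pb) ? a : b;
    }

    int main(void) {
        struct transfer p1 = { "first processor",  8, 6, 1 };
        struct transfer p2 = { "second processor", 8, 1, 1 };
        printf("bus granted to the %s\n", pick_next(&p1, &p2)->name);
        return 0;
    }
    ```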
  • Publication number: 20160147658
    Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
    Type: Application
    Filed: November 20, 2014
    Publication date: May 26, 2016
    Applicant: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. Berger, Michael Fee, Arthur J. O'Neill, Jr., Robert J. Sonnelitter, III
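    At this level the idea reduces to an initialization-time decision. The sketch below assumes, purely for illustration, that the discovered cluster count selects between a single-cluster and a multi-cluster coherency protocol; the actual protocols and selection criteria are not specified in the abstract.

    ```c
    #include <stdio.h>

    enum coherency_protocol { SINGLE_CLUSTER_PROTOCOL, MULTI_CLUSTER_PROTOCOL };

    /* Hypothetical initialization-time choice: the number of clusters found
     * while probing the topology decides which protocol handles coherency
     * requests for the configuration. */
    static enum coherency_protocol select_protocol(int clusters_found) {
        return clusters_found > 1 ? MULTI_CLUSTER_PROTOCOL : SINGLE_CLUSTER_PROTOCOL;
    }

    int main(void) {
        int clusters = 4;   /* a real system would discover this at initialization */
        printf("selected protocol: %s\n",
               select_protocol(clusters) == MULTI_CLUSTER_PROTOCOL
                   ? "multi-cluster" : "single-cluster");
        return 0;
    }
    ```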
  • Publication number: 20160147664
    Abstract: An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways including a cache data way that contains the data block. The data block is fetched and it is determined whether the in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, returning the fetched data block to the requestor and performing an ECC process for the fetched data block subsequent to returning the fetched data block to the requestor. Based on determining that in-line ECC checking and correcting should not be bypassed, performing the ECC process for the fetched data block and returning the fetched data block to the requestor subsequent to performing the ECC process.
    Type: Application
    Filed: November 21, 2014
    Publication date: May 26, 2016
    Inventors: Michael F. Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Deanna Postles Dunn Berger
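    The per-way bypass decision described in this abstract can be sketched as follows: when the bypass indicator of the hit way is set, the data is returned first and ECC runs afterwards; otherwise ECC checking and correcting runs in line before the return. The eight-way geometry and function names are assumptions for illustration.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    #define WAYS 8

    static bool bypass_indicator[WAYS];   /* hypothetical per-way bypass indicator */

    static void ecc_check_and_correct(int way) {
        printf("  ECC checked/corrected for way %d\n", way);
    }

    static void return_to_requestor(int way) {
        printf("  data from way %d returned to requestor\n", way);
    }

    static void fetch(int way) {
        if (bypass_indicator[way]) {
            return_to_requestor(way);     /* return first, for lower latency */
            ecc_check_and_correct(way);   /* ECC runs after the return */
        } else {
            ecc_check_and_correct(way);   /* in-line ECC before the return */
            return_to_requestor(way);
        }
    }

    int main(void) {
        bypass_indicator[2] = true;
        printf("way 2 (bypass set):\n");
        fetch(2);
        printf("way 5 (bypass clear):\n");
        fetch(5);
        return 0;
    }
    ```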
  • Publication number: 20160147597
    Abstract: An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways including a cache data way that contains the data block. The data block is fetched and it is determined whether the in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, returning the fetched data block to the requestor and performing an ECC process for the fetched data block subsequent to returning the fetched data block to the requestor. Based on determining that in-line ECC checking and correcting should not be bypassed, performing the ECC process for the fetched data block and returning the fetched data block to the requestor subsequent to performing the ECC process.
    Type: Application
    Filed: August 12, 2015
    Publication date: May 26, 2016
    Inventors: Michael F. Fee, Pak-kin Mak, Arthur J. O'Neill, Jr., Deanna Postles Dunn Berger
  • Patent number: 9189415
    Abstract: A method for implementing embedded dynamic random access memory (eDRAM) refreshing in a high performance cache architecture. The method includes receiving a memory access request, via a cache controller, from a memory refresh requestor, the memory access request for a memory address range in a cache memory. The method also includes detecting that the cache memory located at the memory address range is available to receive the memory access request and sending the memory access request to a memory request interpreter. The method further includes receiving the memory access request from the cache controller, determining that the memory access request is a request to refresh contents of the memory address range in the cache memory, and refreshing data in the memory address range.
    Type: Grant
    Filed: October 22, 2012
    Date of Patent: November 17, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael Fee, Arthur J. O'Neill, Jr., Robert J. Sonnelitter, III
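    A compact sketch of the request flow described in this patent's abstract: the cache controller forwards a request only once the target range is available, and the memory request interpreter recognizes refresh requests and refreshes that range. The request kinds, names, and the availability stub are assumptions made for the example.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    enum request_kind { REQ_READ, REQ_WRITE, REQ_REFRESH };

    struct mem_request {
        enum request_kind kind;
        int range;                     /* identifier of the memory address range */
    };

    static bool range_available(int range) {
        (void)range;
        return true;                   /* stand-in for the controller's check */
    }

    static void refresh_range(int range) {
        printf("refreshing eDRAM contents of range %d\n", range);
    }

    /* Memory request interpreter: decide whether the request is a refresh. */
    static void interpret(const struct mem_request *req) {
        if (req->kind == REQ_REFRESH)
            refresh_range(req->range);
        /* reads and writes would be serviced here in a real design */
    }

    /* Cache controller: forward the request once the range is available. */
    static void cache_controller(const struct mem_request *req) {
        if (range_available(req->range))
            interpret(req);
    }

    int main(void) {
        struct mem_request refresh = { REQ_REFRESH, 42 };
        cache_controller(&refresh);
        return 0;
    }
    ```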
  • Patent number: 9104581
    Abstract: A memory refresh requestor, a memory request interpreter, a cache memory, and a cache controller are provided on a single chip. The cache controller is configured to receive a memory access request for a memory address range in the cache memory, detect that the cache memory located at the memory address range is available, and send the memory access request to the memory request interpreter when the memory address range is available. The memory request interpreter is configured to receive the memory access request from the cache controller, determine whether the memory access request is a request to refresh the contents of the memory address range, and refresh data in the memory address range when the memory access request is a request to refresh memory.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: August 11, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael Fee, Arthur J. O'Neill, Jr., Robert J. Sonnelitter, III
  • Patent number: 8671267
    Abstract: A pipelined processing device includes: a device controller configured to receive a request to perform an operation; a plurality of subcontrollers configured to receive at least one instruction associated with the operation, each of the plurality of subcontrollers including a counter configured to generate an active time value indicating at least a portion of a time taken to process the at least one instruction; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor configured to receive the active time value; and a shared pipeline storage area configured to store the active time value for each of the plurality of subcontrollers.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: March 11, 2014
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna Postles Dunn Berger, Michael Fee, Christine C. Jones, Arthur J. O'Neill, Jr., Diana Lynn Orf, Robert J. Sonnelitter, III
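    The counter idea above can be illustrated briefly: each subcontroller accumulates an active-time value while it is busy with its instruction and deposits that value into a shared pipeline storage area. The cycle loop and all names below are assumptions for the example, not the patent's design.

    ```c
    #include <stdio.h>

    #define SUBCONTROLLERS 4

    /* Shared pipeline storage area: one active-time value per subcontroller. */
    static unsigned long shared_active_time[SUBCONTROLLERS];

    struct subcontroller {
        int id;
        unsigned long counter;   /* counts cycles spent processing its instruction */
    };

    static void tick(struct subcontroller *s, int busy) {
        if (busy)
            s->counter++;        /* accumulate active time */
    }

    static void report(const struct subcontroller *s) {
        shared_active_time[s->id] = s->counter;  /* value stored in the shared area */
    }

    int main(void) {
        struct subcontroller sc = { .id = 1, .counter = 0 };
        for (int cycle = 0; cycle < 10; cycle++)
            tick(&sc, cycle < 7);                /* busy for 7 of 10 cycles */
        report(&sc);
        printf("subcontroller 1 active time: %lu cycles\n", shared_active_time[1]);
        return 0;
    }
    ```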
  • Patent number: 8645796
    Abstract: Dynamic pipeline cache error correction includes receiving a request to perform an operation that requires a storage cache slot, the storage cache slot residing in a cache. The dynamic pipeline cache error correction also includes accessing the storage cache slot, determining a cache hit for the storage cache slot, identifying and correcting any correctable soft errors associated with the storage cache slot. The dynamic cache error correction further includes updating the cache with results of corrected data.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: February 4, 2014
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Michael Fee, Edward T. Gerchman, Arthur J. O'Neill, Jr.
  • Patent number: 8560767
    Abstract: Embodiments relate to embedded Dynamic Random Access Memory (eDRAM) refresh rates in a high performance cache architecture. An aspect includes receiving a plurality of first signals. A refresh request is transmitted via a refresh requestor to a cache memory at a first refresh rate having an interval that includes a subset of the first signals. The first refresh rate corresponds to a maximum refresh rate. A refresh counter is reset based on receiving a second signal. The refresh counter is incremented after receiving each of a number of refresh requests. A current count is transmitted from the refresh counter to the refresh requestor based on receiving a third signal. The refresh request is transmitted at a second refresh rate, which is less than the first refresh rate. The refresh request is transmitted based on receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold.
    Type: Grant
    Filed: July 11, 2012
    Date of Patent: October 15, 2013
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Michael Fee, Arthur J. O'Neill, Jr., Scott B. Swaney
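    A simplified reading of the rate adjustment in this abstract: refresh requests are counted, and once the count exceeds a threshold the requestor drops from the maximum refresh rate to a slower one. The constants, the counting loop, and the omission of the reset/transmit signals are assumptions made to keep the sketch short.

    ```c
    #include <stdio.h>

    #define FAST_INTERVAL      4   /* cycles between refreshes at the maximum rate */
    #define SLOW_INTERVAL      8   /* cycles between refreshes at the reduced rate */
    #define REFRESH_THRESHOLD 16   /* count above which the slower rate is used */

    static unsigned refresh_counter;  /* incremented per transmitted refresh request */

    static unsigned current_interval(void) {
        return refresh_counter > REFRESH_THRESHOLD ? SLOW_INTERVAL : FAST_INTERVAL;
    }

    int main(void) {
        for (int request = 0; request < 24; request++) {
            unsigned interval = current_interval();
            refresh_counter++;        /* a refresh request has been transmitted */
            if (request % 6 == 0)
                printf("request %2d sent with interval %u\n", request, interval);
        }
        return 0;
    }
    ```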
  • Patent number: 8522076
    Abstract: A pipelined processing device includes: a processor configured to receive a request to perform an operation; a plurality of processing controllers configured to receive at least one instruction associated with the operation, each of the plurality of processing controllers including a memory to store at least one instruction therein; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor including shared error detection logic configured to detect a parity error in the at least one instruction as the at least one instruction is processed in a pipeline and generate an error signal; and a pipeline bus connected to each of the plurality of processing controllers and configured to communicate the error signal from the error detection logic.
    Type: Grant
    Filed: June 23, 2010
    Date of Patent: August 27, 2013
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna Postles Dunn Berger, Michael Fee, Arthur J. O'Neill, Jr., Diana Lynn Orf, Robert J. Sonnelitter, III
  • Patent number: 8495287
    Abstract: A method of debugging an embedded dynamic random access memory (eDRAM) element of a processor core is provided. An aspect includes, based on an error occurring in the eDRAM element, stopping a functional clock, and not stopping a refresh clock. Another aspect includes, based on the functional clock being stopped, creating a fence signal that prevents all commands other than a refresh command, the refresh command being based on the refresh clock, from entering into the eDRAM element. Another aspect includes initializing a line fetch controller of the processor core with at least one of write data and read data. Another aspect includes restarting the functional clock. Another aspect includes performing at least one of write requests and read requests to the eDRAM element based on the at least one of the write data and the read data from the line fetch controller based on the functional clock.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventors: Adam B. Collura, Michael Fee, Arthur J. O'Neill, Jr., Gerard M. Salem, Robert J. Sonnelitter, III
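    The debug sequence in this abstract reduces to a small state machine: on an eDRAM error the functional clock stops (the refresh clock does not), a fence admits only refresh commands, and after the line fetch controller is loaded the functional clock restarts and normal traffic resumes. Everything below, including the command set, is an illustrative assumption.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    enum cmd { CMD_READ, CMD_WRITE, CMD_REFRESH };

    static bool functional_clock_running = true;
    static bool fence_up;                 /* blocks everything except refreshes */

    /* Error seen in the eDRAM element: stop the functional clock (the refresh
     * clock keeps running) and raise the fence. */
    static void enter_debug_stop(void) {
        functional_clock_running = false;
        fence_up = true;
    }

    /* Only refresh commands may enter the eDRAM while fenced or stopped. */
    static bool command_allowed(enum cmd c) {
        if ((fence_up || !functional_clock_running) && c != CMD_REFRESH)
            return false;
        return true;
    }

    /* After the line fetch controller is loaded with write/read data, the
     * functional clock restarts and normal requests flow again. */
    static void resume(void) {
        fence_up = false;
        functional_clock_running = true;
    }

    int main(void) {
        enter_debug_stop();
        printf("write allowed while fenced:   %d\n", command_allowed(CMD_WRITE));
        printf("refresh allowed while fenced: %d\n", command_allowed(CMD_REFRESH));
        resume();
        printf("write allowed after restart:  %d\n", command_allowed(CMD_WRITE));
        return 0;
    }
    ```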
  • Patent number: 8468421
    Abstract: A memory system is provided. The memory system includes a memory element that is configured to selectively output data stored to and data fetched from the memory element. An error checking station is configured to receive the data stored to and the data fetched from the memory element. The error checking station is further configured to perform error checking on the data.
    Type: Grant
    Filed: June 23, 2010
    Date of Patent: June 18, 2013
    Assignee: International Business Machines Corporation
    Inventors: Michael Fee, Arthur J. O'Neill, Jr.
  • Patent number: 8392621
    Abstract: A method of managing a temporary memory includes: receiving a request to transfer data from a source location to a destination location, the data transfer request associated with an operation to be performed, the operation selected from an input into an intermediate temporary memory and an output; checking a two-state indicator associated with the temporary memory, the two-state indicator having a first state indicating that an immediately preceding operation on the temporary memory was an input to the temporary memory and a second state indicating that the immediately preceding operation was an output from the temporary memory; and performing the operation responsive to one of: the operation being an input operation and the two-state indicator being in the second state, indicating that the immediately preceding operation was an output; and the operation being an output operation and the two-state indicator being in the first state, indicating that the immediately preceding operation was an input.
    Type: Grant
    Filed: June 22, 2010
    Date of Patent: March 5, 2013
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna Postles Dunn Berger, Michael Fee, Arthur J. O'Neill, Jr., Diana Lynn Orf, Robert J. Sonnelitter, III
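    The two-state indicator described here behaves like a simple ping-pong gate: an input is accepted only if the immediately preceding operation was an output, and an output only if the preceding operation was an input. The sketch below assumes the indicator starts in the "output" state so the first input is permitted; that initial state is not specified in the abstract.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    enum last_op { LAST_WAS_INPUT, LAST_WAS_OUTPUT };

    /* Two-state indicator: records whether the immediately preceding operation
     * on the temporary memory was an input or an output. */
    static enum last_op indicator = LAST_WAS_OUTPUT;

    static bool do_input(void) {                     /* store data into the buffer */
        if (indicator != LAST_WAS_OUTPUT) return false;  /* previous op was an input */
        indicator = LAST_WAS_INPUT;
        return true;
    }

    static bool do_output(void) {                    /* drain data from the buffer */
        if (indicator != LAST_WAS_INPUT) return false;   /* previous op was an output */
        indicator = LAST_WAS_OUTPUT;
        return true;
    }

    int main(void) {
        printf("first input:  %d\n", do_input());    /* accepted */
        printf("second input: %d\n", do_input());    /* rejected */
        printf("output:       %d\n", do_output());   /* accepted */
        return 0;
    }
    ```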
  • Patent number: 8365055
    Abstract: An approach includes defining a set of correctable error and uncorrectable error syndrome code points, generating an error correction code (ECC) syndrome decode, treating the uncorrectable error syndrome code points as "don't cares," and logically minimizing the ECC syndrome decode for the determination of the correctable error syndrome code points on that basis, whereby output data can be ignored for the uncorrectable error syndrome code points.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: January 29, 2013
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Patrick J. Meaney, Arthur J. O'Neill, Jr.
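    A toy example of the minimization idea, using a hypothetical 3-bit syndrome in which 001, 010, and 100 are correctable single-bit errors and every other nonzero value is uncorrectable: treating the uncorrectable syndromes as don't-cares lets the decode for "flip data bit 0" shrink from (s0 AND NOT s1 AND NOT s2) to just s0, and the two decodes agree on every syndrome whose output matters.

    ```c
    #include <stdio.h>

    /* Hypothetical 3-bit syndrome: 001, 010, 100 are correctable single-bit
     * errors, 000 means no error, and every other value is uncorrectable.
     * Full decode for "flip data bit 0":            s0 AND NOT s1 AND NOT s2
     * Minimized decode (UE values as don't-cares):  s0                        */
    static int full_decode_flip0(unsigned s) { return (s & 1) && !(s & 2) && !(s & 4); }
    static int min_decode_flip0(unsigned s)  { return (s & 1) != 0; }

    int main(void) {
        unsigned care_points[] = { 0x0, 0x1, 0x2, 0x4 }; /* no-error + CE syndromes */
        for (int i = 0; i < 4; i++) {
            unsigned s = care_points[i];
            printf("syndrome %u: full=%d minimized=%d\n",
                   s, full_decode_flip0(s), min_decode_flip0(s));
        }
        return 0;   /* the decodes differ only on uncorrectable (ignored) syndromes */
    }
    ```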
  • Patent number: 8364899
    Abstract: User-controlled targeted cache purging includes receiving a request to perform an operation to purge data from a cache, the request including an index identifier identifying an index associated with the cache. The index specifies a portion of the cache to be purged. The user-controlled targeted cache purging also includes purging the data from the cache, and providing notification of successful completion of the operation.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: January 29, 2013
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Patrick J. Meaney, Arthur J. O'Neill, Jr.
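    The operation in this abstract amounts to invalidating only the portion of the cache named by the request's index identifier and then reporting completion. The sketch below assumes a 16-index, 4-way directory purely for illustration.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    #define INDEXES 16
    #define WAYS     4

    static bool valid[INDEXES][WAYS];   /* stand-in for directory valid bits */

    /* Purge only the portion of the cache named by the index identifier, then
     * report successful completion of the operation. */
    static bool purge_index(int index_id) {
        if (index_id < 0 || index_id >= INDEXES)
            return false;
        for (int w = 0; w < WAYS; w++)
            valid[index_id][w] = false; /* data purged from this portion only */
        return true;
    }

    int main(void) {
        valid[9][0] = valid[9][3] = true;
        printf("purge of index 9 %s\n", purge_index(9) ? "completed" : "failed");
        return 0;
    }
    ```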