Patents by Inventor Glenn David Gilda

Glenn David Gilda has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960426
    Abstract: Concurrent servicing of a first cable of a cable pair while a second cable of the cable pair remains operational improves multi-processor computer system availability and serviceability. A pair of processing chips in different processing drawers may continue operation by way of the second cable while the first cable is degraded or serviced. Upon the servicing of the first cable, the serviced cable may transition to a fully operational state with no interruptions to the operations between the processing drawers by way of the second cable. Since cable faults are typically more common than processing chip or processing drawer faults, identification of cable faults and the ability to maintain operations of the processing drawers connected therewith is increasingly important.
    Type: Grant
    Filed: June 1, 2022
    Date of Patent: April 16, 2024
    Assignee: International Business Machines Corporation
    Inventors: Rajat Rao, Patrick James Meaney, Glenn David Gilda, Michael Jason Cade, Robert J Sonnelitter, III, Hubert Harrer, Xiaomin Duan, Christian Jacobi, Arthur O'Neill
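To make the concurrent-servicing idea in patent 11960426 concrete, here is a minimal behavioral sketch in Python. It is not the patented implementation; the CablePair class, the state names, and the failover rules are illustrative assumptions that only show traffic staying on the peer cable while the other cable is degraded, serviced, and returned to a fully operational state.

```python
# Minimal sketch (not the patented implementation) of a cable-pair failover
# state machine; states and routing rules are hypothetical.
from enum import Enum, auto

class CableState(Enum):
    OPERATIONAL = auto()
    DEGRADED = auto()
    IN_SERVICE = auto()   # being concurrently serviced or replaced

class CablePair:
    """Two cables linking processing chips in different drawers."""
    def __init__(self):
        self.states = [CableState.OPERATIONAL, CableState.OPERATIONAL]

    def active_cables(self):
        # Traffic is routed only over fully operational cables.
        return [i for i, s in enumerate(self.states) if s is CableState.OPERATIONAL]

    def mark_degraded(self, idx):
        self.states[idx] = CableState.DEGRADED

    def begin_service(self, idx):
        # Servicing is allowed only while the peer cable stays operational,
        # so drawer-to-drawer operations continue uninterrupted.
        peer = 1 - idx
        assert self.states[peer] is CableState.OPERATIONAL, "peer cable must carry traffic"
        self.states[idx] = CableState.IN_SERVICE

    def complete_service(self, idx):
        # The serviced cable transitions back to fully operational.
        self.states[idx] = CableState.OPERATIONAL

pair = CablePair()
pair.mark_degraded(0)        # fault detected on cable 0
pair.begin_service(0)        # concurrent repair; cable 1 keeps the link alive
print("active during service:", pair.active_cables())   # [1]
pair.complete_service(0)
print("active after service:", pair.active_cables())    # [0, 1]
```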
  • Publication number: 20240103967
    Abstract: A memory controller stores each of a plurality of data blocks encoded by error correction code (ECC) across multiple channels of a redundant memory system. Based on receiving, from the memory system, channel data of a fetch operation requesting a data block, the memory controller decodes the channel data and concurrently generates a predicted channel mark based on tests of channel-induced syndromes generated from the channel data. The predicted channel mark identifies a marked channel among the multiple channels as a likely source of data errors. The memory controller determines whether the decoding detects an uncorrectable error in the channel data and, based on determining the decoding detects an uncorrectable error in the channel data, re-reads channel data corresponding to the data block and corrects the re-read channel data by excluding, from decoding, channel data received from the marked channel.
    Type: Application
    Filed: September 28, 2022
    Publication date: March 28, 2024
    Inventors: Barry M. Trager, Patrick James Meaney, Glenn David Gilda, Lawrence Jones
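A rough sketch of the channel-marking flow described in publication 20240103967, with simple XOR parity standing in for the actual ECC and a toy per-channel checksum standing in for the channel-induced syndrome tests; all function names and data layouts are illustrative assumptions, not the patented scheme.

```python
# Sketch only: XOR parity replaces the real ECC, a byte-sum replaces the
# syndrome tests, and the "predicted channel mark" is the channel whose
# per-channel check fails while the decode is underway.
from functools import reduce

def xor_block(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def checksum(block):
    return sum(block) & 0xFF  # toy stand-in for a per-channel syndrome test

def write_block(data_channels):
    channels = data_channels + [xor_block(data_channels)]     # last channel = parity
    return channels, [checksum(c) for c in channels]

def fetch_block(channels, checks, reread):
    # Predicted channel mark, generated while decoding: the channel whose
    # stored check no longer matches is the likely source of data errors.
    marked = next((i for i, c in enumerate(channels) if checksum(c) != checks[i]), None)
    uncorrectable = xor_block(channels) != bytes(len(channels[0]))   # parity mismatch
    if uncorrectable and marked is not None:
        channels = list(reread())                              # re-read the data block
        others = [c for i, c in enumerate(channels) if i != marked]
        channels[marked] = xor_block(others)                   # exclude the marked channel
    return channels[:-1]                                       # drop the parity channel

data = [b"\x11\x11", b"\x22\x22", b"\x33\x33"]
chans, checks = write_block(data)
chans[1] = b"\xff\x22"                                         # corrupt channel 1
print(fetch_block(chans, checks, reread=lambda: chans))        # original data recovered
```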
  • Patent number: 11907074
    Abstract: Embodiments of the invention are directed to a computer-implemented method of operating a data transmission system. The data transmission system includes a transmitter and a receiver. The computer-implemented method includes using the transmitter to send serialized data from the transmitter through a plurality of lanes to the receiver. The transmitter sends the serialized data at a first serialization ratio. The receiver is configured to receive and load the serialized data at a second deserialization ratio, wherein the first serialization ratio is greater than the second deserialization ratio.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: February 20, 2024
    Assignee: International Business Machines Corporation
    Inventors: Patrick James Meaney, Ashutosh Mishra, Paul Allen Ganfield, Christian Jacobi, Logan Ian Friedman, Jentje Leenstra, Glenn David Gilda, Michael B. Spear
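The sketch below illustrates, under assumed numbers, the core relationship in patent 11907074: the transmitter gathers more bits per lane per cycle (the serialization ratio) than the receiver loads per cycle (the deserialization ratio), so the receiver performs several loads to drain each transmitted beat. The bit-packing scheme here is invented purely for illustration.

```python
# Illustrative only: tx_ratio bits per lane are sent per transmitter beat,
# while the receiver loads rx_ratio bits per lane at a time (rx_ratio < tx_ratio).
def serialize(data_bits, lanes, tx_ratio):
    """Split the bit stream into per-lane groups of tx_ratio bits per beat."""
    beats = []
    for i in range(0, len(data_bits), lanes * tx_ratio):
        chunk = data_bits[i:i + lanes * tx_ratio]
        beats.append([chunk[lane * tx_ratio:(lane + 1) * tx_ratio]
                      for lane in range(lanes)])
    return beats

def deserialize(beats, rx_ratio):
    """Receiver loads rx_ratio bits per lane per receive cycle; with
    rx_ratio < tx_ratio it takes several receive cycles to drain one beat."""
    out = []
    for beat in beats:
        lane_buf = [[] for _ in beat]
        for offset in range(0, len(beat[0]), rx_ratio):    # multiple loads per beat
            for lane, bits in enumerate(beat):
                lane_buf[lane].extend(bits[offset:offset + rx_ratio])
        for buf in lane_buf:
            out.extend(buf)
    return out

bits = list(range(48))                    # stand-in "bits" so ordering is visible
beats = serialize(bits, lanes=2, tx_ratio=8)
assert deserialize(beats, rx_ratio=4) == bits
```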
  • Publication number: 20230393999
    Abstract: Concurrent servicing of a first cable of a cable pair while a second cable of the cable pair remains operational improves multi-processor computer system availability and serviceability. A pair of processing chips in different processing drawers may continue operation by way of the second cable while the first cable is degraded or serviced. Upon the servicing of the first cable, the serviced cable may transition to a fully operational state with no interruptions to the operations between the processing drawers by way of the second cable. Since cable faults are typically more common than processing chip or processing drawer faults, identification of cable faults and the ability to maintain operations of the processing drawers connected therewith is increasingly important.
    Type: Application
    Filed: June 1, 2022
    Publication date: December 7, 2023
    Inventors: Rajat Rao, Patrick James Meaney, Glenn David Gilda, Michael Jason Cade, Robert J Sonnelitter, III, Hubert Harrer, Xiaomin Duan, Christian Jacobi, Arthur O'Neill
  • Patent number: 11646861
    Abstract: A computer-implemented method includes using a transmitter to send data from the transmitter through a plurality of lanes to a receiver using a synchronous operation mode that includes sending the data from the transmitter through the plurality of lanes to the receiver in a synchronous transmission manner that relies on an alignment between a transmitter clock frequency and a receiver clock frequency. A synchronous operation performance analysis (SOPA) is performed during the synchronous operation mode. A switch from the synchronous operation mode to an asynchronous operation mode is made based on a result of performing the SOPA. The asynchronous operation mode includes sending the data from the transmitter through the plurality of lanes to the receiver without requiring alignment between the transmitter clock frequency and the receiver clock frequency.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: May 9, 2023
    Assignee: International Business Machines Corporation
    Inventors: Patrick James Meaney, Ashutosh Mishra, Paul Allen Ganfield, Christian Jacobi, Logan Ian Friedman, Jentje Leenstra, Glenn David Gilda, Jason Andrew Thompson, Yvonne Hanson Kleppel
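A hypothetical sketch of the mode-switch decision in patent 11646861: transfer synchronously, periodically run a synchronous operation performance analysis, and drop to asynchronous operation when the analysis indicates the clocks no longer align well enough. The slip-rate metric and threshold are made-up stand-ins for whatever the real SOPA measures.

```python
# Simplified model of the SOPA-driven fallback; the metric and threshold are
# invented for illustration and are not taken from the patent.
from dataclasses import dataclass

@dataclass
class LinkMonitor:
    slip_events: int = 0
    cycles: int = 0
    mode: str = "synchronous"
    slip_threshold: float = 1e-3   # illustrative threshold

    def record_cycle(self, alignment_slipped: bool):
        self.cycles += 1
        self.slip_events += int(alignment_slipped)

    def run_sopa(self):
        """Synchronous operation performance analysis (simplified)."""
        if self.cycles == 0:
            return
        slip_rate = self.slip_events / self.cycles
        if self.mode == "synchronous" and slip_rate > self.slip_threshold:
            # Clocks no longer track closely enough; stop relying on alignment.
            self.mode = "asynchronous"

link = LinkMonitor()
for cycle in range(10_000):
    link.record_cycle(alignment_slipped=(cycle % 500 == 0))   # injected slips
link.run_sopa()
print(link.mode)   # "asynchronous" once the slip rate exceeds the threshold
```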
  • Publication number: 20230115533
    Abstract: Embodiments of the invention are directed to a computer-implemented method of operating a data transmission system. The data transmission system includes a transmitter and a receiver. The computer-implemented method includes using the transmitter to send serialized data from the transmitter through a plurality of lanes to the receiver. The transmitter sends the serialized data at a first serialization ratio. The receiver is configured to receive and load the serialized data at a second deserialization ratio, wherein the first serialization ratio is greater than the second deserialization ratio.
    Type: Application
    Filed: September 24, 2021
    Publication date: April 13, 2023
    Inventors: Patrick James Meaney, Ashutosh Mishra, Paul Allen Ganfield, Christian Jacobi, Logan Ian Friedman, Jentje Leenstra, Glenn David Gilda, Michael B. Spear
  • Publication number: 20230098514
    Abstract: A computer-implemented method includes using a transmitter to send data from the transmitter through a plurality of lanes to a receiver using a synchronous operation mode that includes sending the data from the transmitter through the plurality of lanes to the receiver in a synchronous transmission manner that relies on an alignment between a transmitter clock frequency and a receiver clock frequency. A synchronous operation performance analysis (SOPA) is performed during the synchronous operation mode. A switch from the synchronous operation mode to an asynchronous operation mode is made based on a result of performing the SOPA. The asynchronous operation mode includes sending the data from the transmitter through the plurality of lanes to the receiver without requiring alignment between the transmitter clock frequency and the receiver clock frequency.
    Type: Application
    Filed: September 24, 2021
    Publication date: March 30, 2023
    Inventors: Patrick James Meaney, Ashutosh Mishra, Paul Allen Ganfield, Christian Jacobi, Logan Ian Friedman, Jentje Leenstra, Glenn David Gilda, Jason Andrew Thompson, Yvonne Hanson Kleppel
  • Patent number: 11609817
    Abstract: A computer-implemented method includes fetching, by a controller, data using a plurality of memory channels of a memory system. The method further includes detecting, by the controller, that a first memory channel of the plurality of memory channels has not returned data. The method further includes marking, by the controller, the first memory channel from the plurality of memory channels as unavailable. The method further includes, in response to a fetch, reconstructing, by the controller, fetch data based on data received from all memory channels other than the first memory channel.
    Type: Grant
    Filed: September 9, 2021
    Date of Patent: March 21, 2023
    Assignee: International Business Machines Corporation
    Inventors: Patrick James Meaney, Glenn David Gilda, David D. Cadigan, Lawrence Jones
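The following sketch mimics the behavior claimed in patent 11609817 at a very high level: a channel that does not return data is marked unavailable, and the fetch is reconstructed from the remaining channels. XOR parity again stands in for the real redundancy code, and the class and method names are invented.

```python
# High-level sketch: channels are callables that return bytes or None; a
# non-returning channel is marked unavailable and its data is rebuilt.
from functools import reduce

class RedundantMemory:
    def __init__(self, channels):
        self.channels = channels                 # callables returning bytes or None
        self.unavailable = set()

    def fetch(self):
        returned = {}
        for i, chan in enumerate(self.channels):
            if i in self.unavailable:
                continue
            data = chan()
            if data is None:                     # channel did not return data
                self.unavailable.add(i)          # mark it unavailable
            else:
                returned[i] = data
        missing = set(range(len(self.channels))) - set(returned)
        if len(missing) == 1:
            # Rebuild the missing channel's contribution from all the others.
            (m,) = missing
            returned[m] = bytes(reduce(lambda a, b: a ^ b, col)
                                for col in zip(*returned.values()))
        return [returned[i] for i in sorted(returned)]

stored = [b"\xaa", b"\xbb", bytes([0xAA ^ 0xBB])]            # data, data, parity
mem = RedundantMemory([lambda: stored[0], lambda: None, lambda: stored[2]])
print(mem.fetch())        # channel 1 reconstructed as b'\xbb' and marked unavailable
```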
  • Patent number: 11520659
    Abstract: A computer-implemented method includes refreshing a set of memory channels in a memory system substantially simultaneously, each memory channel refreshing a rank that is distinct from each of the other ranks being refreshed. Further, the method includes marking a memory channel from the set of memory channels as being unavailable for the rank being refreshed in the memory channel. In one or more examples, the method further includes blocking a fetch command to the memory channel for the rank being refreshed in the memory channel.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: December 6, 2022
    Assignee: International Business Machines Corporation
    Inventors: Patrick James Meaney, Glenn David Gilda, David D. Cadigan, Christian Jacobi, Lawrence Jones, Stephen J. Powell
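A small illustrative model of the refresh scheme in patent 11520659: every channel refreshes a different rank at the same time, the (channel, rank) pair under refresh is marked unavailable, and fetch commands to it are blocked. The rotation rule and class names are assumptions, not taken from the patent.

```python
# Sketch of staggered refresh: no two channels refresh the same rank at once,
# so a blocked fetch can still be served by the other channels.
class RefreshScheduler:
    def __init__(self, num_channels, num_ranks):
        self.num_channels = num_channels
        self.num_ranks = num_ranks
        self.refreshing = {}          # channel -> rank currently being refreshed

    def start_refresh(self, base_rank):
        # Each channel refreshes a distinct rank (rotated so no two channels
        # take the same rank out of service at the same time).
        self.refreshing = {ch: (base_rank + ch) % self.num_ranks
                           for ch in range(self.num_channels)}

    def fetch_allowed(self, channel, rank):
        # Block fetches to a channel only for the rank it is refreshing.
        return self.refreshing.get(channel) != rank

sched = RefreshScheduler(num_channels=4, num_ranks=4)
sched.start_refresh(base_rank=0)
print(sched.fetch_allowed(channel=0, rank=0))   # False: rank 0 refreshing on ch 0
print(sched.fetch_allowed(channel=1, rank=0))   # True: ch 1 is refreshing rank 1
```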
  • Patent number: 11449397
    Abstract: A computer-implemented method for memory macro disablement in a cache memory includes identifying a defective portion of a memory macro of a cache memory bank. The method includes iteratively testing each line of the memory macro, the testing including attempting at least one write operation at each line of the memory macro. The method further includes determining that an error occurred during the testing. The method further includes, in response to determining the memory macro as being defective, disabling write operations for a portion of the cache memory bank that includes the memory macro by generating a logical mask that includes at least bits comprising a compartment bit, and read address bits.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: September 20, 2022
    Assignee: International Business Machines Corporation
    Inventors: Gregory J. Fredeman, Glenn David Gilda, Thomas E. Miller, Arthur O'Neill
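To show the shape of the macro-disablement flow in patent 11449397, here is a simplified sketch: write-test every line of a macro, and if an error is seen, record a mask keyed by the compartment and the high-order address bits so later writes to that portion of the bank are blocked. The address-to-macro mapping here is a deliberately simple assumption.

```python
# Sketch of macro disablement; the bit layout of the logical mask is hypothetical.
class CacheBank:
    def __init__(self, compartments, lines_per_macro):
        self.disable_masks = []      # list of (compartment, macro index) masks
        self.compartments = compartments
        self.lines_per_macro = lines_per_macro

    def test_macro(self, compartment, macro_index, write_line):
        """Attempt a write to every line of the macro; report any error."""
        base = macro_index * self.lines_per_macro
        return any(not write_line(compartment, base + line)
                   for line in range(self.lines_per_macro))

    def disable_macro(self, compartment, macro_index):
        self.disable_masks.append((compartment, macro_index))

    def write_allowed(self, compartment, address):
        macro_index = address // self.lines_per_macro   # "read address bits"
        return (compartment, macro_index) not in self.disable_masks

bank = CacheBank(compartments=8, lines_per_macro=64)
flaky = lambda comp, line: not (comp == 3 and line == 130)    # injected defect
if bank.test_macro(compartment=3, macro_index=2, write_line=flaky):
    bank.disable_macro(compartment=3, macro_index=2)
print(bank.write_allowed(3, 130))   # False: the defective macro is masked off
print(bank.write_allowed(3, 64))    # True: other macros stay writable
```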
  • Publication number: 20210406126
    Abstract: A computer-implemented method includes fetching, by a controller, data using a plurality of memory channels of a memory system. The method further includes detecting, by the controller, that a first memory channel of the plurality of memory channels has not returned data. The method further includes marking, by the controller, the first memory channel from the plurality of memory channels as unavailable. The method further includes, in response to a fetch, reconstructing, by the controller, fetch data based on data received from all memory channels other than the first memory channel.
    Type: Application
    Filed: September 9, 2021
    Publication date: December 30, 2021
    Inventors: Patrick James Meaney, Glenn David Gilda, David D. Cadigan, Lawrence Jones
  • Patent number: 11200119
    Abstract: A computer-implemented method includes fetching, by a controller, data using a plurality of memory channels of a memory system. The method further includes detecting, by the controller, that a first memory channel of the plurality of memory channels has not returned data. The method further includes marking, by the controller, the first memory channel from the plurality of memory channels as unavailable. The method further includes, in response to a fetch, reconstructing, by the controller, fetch data based on data received from all memory channels other than the first memory channel.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: December 14, 2021
    Assignee: International Business Machines Corporation
    Inventors: Patrick James Meaney, Glenn David Gilda, David D. Cadigan, Lawrence Jones
  • Publication number: 20210216400
    Abstract: A computer-implemented method includes fetching, by a controller, data using a plurality of memory channels of a memory system. The method further includes detecting, by the controller, that a first memory channel of the plurality of memory channels has not returned data. The method further includes marking, by the controller, the first memory channel from the plurality of memory channels as unavailable. The method further includes, in response to a fetch, reconstructing, by the controller, fetch data based on data received from all memory channels other than the first memory channel.
    Type: Application
    Filed: January 13, 2020
    Publication date: July 15, 2021
    Inventors: Patrick James Meaney, Glenn David Gilda, David D. Cadigan, Lawrence Jones
  • Publication number: 20210216401
    Abstract: A computer-implemented method includes refreshing a set of memory channels in a memory system substantially simultaneously, each memory channel refreshing a rank that is distinct from each of the other ranks being refreshed. Further, the method includes marking a memory channel from the set of memory channels as being unavailable for the rank being refreshed in the memory channel. In one or more examples, the method further includes blocking a fetch command to the memory channel for the rank being refreshed in the memory channel.
    Type: Application
    Filed: January 13, 2020
    Publication date: July 15, 2021
    Inventors: Patrick James Meaney, Glenn David Gilda, David D. Cadigan, Christian Jacobi, Lawrence Jones, Stephen J. Powell
  • Publication number: 20210073087
    Abstract: A computer-implemented method for memory macro disablement in a cache memory includes identifying a defective portion of a memory macro of a cache memory bank. The method includes iteratively testing each line of the memory macro, the testing including attempting at least one write operation at each line of the memory macro. The method further includes determining that an error occurred during the testing. The method further includes, in response to determining the memory macro as being defective, disabling write operations for a portion of the cache memory bank that includes the memory macro by generating a logical mask that includes at least bits comprising a compartment bit, and read address bits.
    Type: Application
    Filed: September 11, 2019
    Publication date: March 11, 2021
    Inventors: Gregory J. Fredeman, Glenn David Gilda, Thomas E. Miller, Arthur O'Neill
  • Patent number: 6920519
    Abstract: Dynamic routing of data to multiple processor complexes. PCI address space is subdivided among a plurality of processor complexes. Translation table entries at each processor complex determine which processor complex is to receive a DMA transfer, thereby enabling routing of DMA data to one I/O hub node while accessing translation table entries at another I/O hub node. Further, interrupt requests may be dynamically routed to multiple processor complexes.
    Type: Grant
    Filed: May 10, 2000
    Date of Patent: July 19, 2005
    Assignee: International Business Machines Corporation
    Inventors: Bruce Leroy Beukema, Timothy Carl Bronson, Ronald Edward Fuhs, Glenn David Gilda, Anthony J Bybell, Stefan Peter Jackowski, William Garrett Verdoorn, Jr., Phillip G Williams
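A minimal sketch of the routing idea in patent 6920519: the PCI address space is divided into pages, a translation entry looked up at the I/O hub names both the backing real address and the processor complex that should receive the DMA, and the write is forwarded there. The entry fields and page size are illustrative assumptions.

```python
# Illustrative model of translation-table-driven DMA routing across complexes.
from dataclasses import dataclass

@dataclass
class TranslationEntry:
    real_address: int        # system memory address backing the PCI page
    target_complex: int      # which processor complex receives the DMA

PAGE = 4096

class IOHub:
    def __init__(self, table, complexes):
        self.table = table               # PCI page number -> TranslationEntry
        self.complexes = complexes       # complex id -> memory dict

    def dma_write(self, pci_address, data):
        entry = self.table[pci_address // PAGE]
        memory = self.complexes[entry.target_complex]
        memory[entry.real_address + pci_address % PAGE] = data

complex0, complex1 = {}, {}
hub = IOHub(
    table={0: TranslationEntry(0x1000, target_complex=0),
           1: TranslationEntry(0x8000, target_complex=1)},
    complexes={0: complex0, 1: complex1},
)
hub.dma_write(0x0010, b"to complex 0")
hub.dma_write(0x1020, b"to complex 1")
print(complex0, complex1)    # each write lands in the complex named by its entry
```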
  • Patent number: 6785759
    Abstract: A processor system includes an I/O bus to host bridge in which I/O address translation elements are shared across multiple I/O bus bridges. A TCE manager is provided for retaining in cache a TCE entry associated with a discarded channel for association with a new channel responsive to a subsequent read request for a memory page referenced by the TCE entry.
    Type: Grant
    Filed: May 10, 2000
    Date of Patent: August 31, 2004
    Assignee: International Business Machines Corporation
    Inventors: Bruce Leroy Beukema, Timothy Carl Bronson, Ronald Edward Fuhs, Glenn David Gilda
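Below is a loose sketch of the TCE-retention behavior in patent 6785759: translation entries stay cached when their channel is discarded, so a later read of the same memory page by a new channel reuses the cached entry instead of fetching it again. All names here are invented for the example.

```python
# Sketch of retaining cached TCEs across channel teardown; names are hypothetical.
class TCECache:
    def __init__(self):
        self.entries = {}            # page number -> translation (real address)
        self.owner = {}              # page number -> owning channel (or None)

    def lookup(self, channel, page, fetch_tce):
        if page not in self.entries:
            self.entries[page] = fetch_tce(page)   # miss: read TCE from memory
        self.owner[page] = channel                 # (re)associate with the channel
        return self.entries[page]

    def discard_channel(self, channel):
        # Drop the association but keep the entry cached for reuse.
        for page, ch in self.owner.items():
            if ch == channel:
                self.owner[page] = None

fetches = []
cache = TCECache()
cache.lookup(channel=1, page=0x42, fetch_tce=lambda p: fetches.append(p) or p << 12)
cache.discard_channel(1)
cache.lookup(channel=2, page=0x42, fetch_tce=lambda p: fetches.append(p) or p << 12)
print(len(fetches))   # 1: the second channel reuses the retained entry
```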
  • Patent number: 6490660
    Abstract: A coherency controller for configurable caches. A base microprocessor design accommodates system configurations both with and without L2 cache tag and data arrays installed. Second level cache control logic exists within the microprocessor chip, and when the external second level cache tag and data arrays are removed their inputs to the microprocessor are tied to an inactive state. A configuration switch is set in the second level cache controller that causes snoop requests from a system bus to get reflected onto a first level cache snooping path. The first level cache status is then fed back to the second level cache controller, in a manner consistent with the timing required for support of a second level cache search, and fed into the second level cache status signal generation logic, effectively making the second level cache controller believe that the second level cache still exists for snooping.
    Type: Grant
    Filed: July 1, 2000
    Date of Patent: December 3, 2002
    Assignee: International Business Machines Corporation
    Inventors: Glenn David Gilda, Steven Lee Gregor
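A behavior-level sketch of the configuration switch in patent 6490660: when the external L2 tag and data arrays are absent, a system-bus snoop is reflected onto the L1 snoop path and the L1 status is returned through the L2 status path, so the rest of the system still sees an L2-style snoop response. The MESI-like state strings are placeholders.

```python
# Behavioral model only; timing and signal-level details are not represented.
class L2Controller:
    def __init__(self, l2_tags, l1_snoop, l2_present):
        self.l2_tags = l2_tags           # dict address -> state, or None if removed
        self.l1_snoop = l1_snoop         # callable: address -> cache-line state
        self.l2_present = l2_present     # the configuration switch

    def snoop(self, address):
        if self.l2_present:
            return self.l2_tags.get(address, "invalid")
        # L2 arrays removed: reflect the snoop onto the L1 snoop path and feed
        # the L1 status back through the L2 status generation logic.
        return self.l1_snoop(address)

l1_lines = {0x100: "modified"}
ctrl = L2Controller(l2_tags=None, l1_snoop=lambda a: l1_lines.get(a, "invalid"),
                    l2_present=False)
print(ctrl.snoop(0x100))   # "modified", supplied by L1 even though L2 is absent
print(ctrl.snoop(0x200))   # "invalid"
```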
  • Patent number: 6438657
    Abstract: A cache system is provided for accessing set associative caches with no increase in critical path delay, for reducing the latency penalty for cache accesses, for reducing snoop busy time, and for responding to MRU misses and cache misses. A two level cache subsystem including an L1 cache and an L2 cache is provided. A cache directory is accessed for a second snoop request while a directory access from a first snoop request is being evaluated. During a REQUEST stage, a directory access snoop to the directory of the L1 cache is requested; and responsive thereto, during a SNOOP stage, the directory is accessed; during an ACCESS stage, the cache arrays are accessed while processing results from the SNOOP stage. If multiple data transfers are required out of the L1 cache, a pipeline hold is issued to the REQUEST and SNOOP stages, and the ACCESS stage is repeated. During a FLUSH stage, cache data read from the L1 cache during the ACCESS stage is sent to the L2 cache.
    Type: Grant
    Filed: September 17, 1999
    Date of Patent: August 20, 2002
    Assignee: International Business Machines Corporation
    Inventor: Glenn David Gilda
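The sketch below models a subset of the pipeline in patent 6438657 (REQUEST, SNOOP, and ACCESS stages only, with FLUSH omitted): a new directory lookup is launched while the previous snoop's result is processed, and a multi-beat data transfer holds the earlier stages. The cycle accounting is simplified and purely illustrative.

```python
# Simplified three-stage model of the overlapped directory/array accesses.
def run_pipeline(requests, directory, beats_per_hit):
    """requests: list of addresses; directory: set of addresses present in L1."""
    timeline = []
    cycle = 0
    pending = list(requests)
    in_snoop, in_access = None, None
    access_beats_left = 0
    while pending or in_snoop is not None or in_access is not None:
        # ACCESS stage: read data for a directory hit, possibly over many beats.
        if in_access is not None:
            timeline.append((cycle, "ACCESS", in_access))
            access_beats_left -= 1
            if access_beats_left <= 0:
                in_access = None
        # SNOOP stage advances only when ACCESS is not holding the pipeline.
        if in_access is None and in_snoop is not None:
            hit = in_snoop in directory
            timeline.append((cycle, "SNOOP", in_snoop))
            in_access = in_snoop if hit else None
            access_beats_left = beats_per_hit if hit else 0
            in_snoop = None
        # REQUEST stage: launch the next directory access.
        if in_snoop is None and pending:
            in_snoop = pending.pop(0)
            timeline.append((cycle, "REQUEST", in_snoop))
        cycle += 1
    return timeline

for event in run_pipeline([0x10, 0x20, 0x30], directory={0x20}, beats_per_hit=2):
    print(event)   # shows the overlap and the hold during the multi-beat ACCESS
```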
  • Patent number: 6138206
    Abstract: A cache system provides for accessing set associative caches with no increase in critical path delay, for reducing the latency penalty for cache accesses and for responding to slot MRU misses and cache misses. An N-way set associative cache is provided, each set of said cache including an SRAM array macro having a memory element, an internal SRAM data register, and a read enable signal line. Read enable is generated as the NOR of slot miss and cache miss signals, and the internal SRAM data register is responsive to the slot miss signal for registering data output during a first cycle for use in a next following cycle.
    Type: Grant
    Filed: June 12, 1997
    Date of Patent: October 24, 2000
    Assignee: International Business Machines Corporation
    Inventors: Michael Todd Fisher, Glenn David Gilda
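Finally, a loose behavioral sketch of the mechanism in patent 6138206: each set's SRAM macro has a read enable generated as the NOR of the slot-miss and cache-miss signals, and on a slot MRU miss the macro's internal data register holds the previously read data for use in the following cycle instead of triggering another array read. The class models behavior only, not circuit timing.

```python
# Behavioral model of one associativity set's SRAM macro and its data register.
class SRAMMacro:
    def __init__(self, array):
        self.array = array          # address -> data for this associativity set
        self.data_register = None   # internal output register

    def cycle(self, address, slot_miss, cache_miss):
        read_enable = not (slot_miss or cache_miss)    # NOR of the miss signals
        if read_enable:
            self.data_register = self.array.get(address)
        # On a slot miss the register simply holds the previously registered
        # data so it can be used in the next cycle without another array read.
        return self.data_register

set0 = SRAMMacro({0x40: "line A"})
print(set0.cycle(0x40, slot_miss=False, cache_miss=False))  # "line A" (normal read)
print(set0.cycle(0x40, slot_miss=True,  cache_miss=False))  # "line A" held in register
```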