Patents by Inventor Charles P. Ryan

Charles P. Ryan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 6973539
    Abstract: A multiprocessor write-into-cache data processing system includes a feature for preventing hogging of ownership of a first gateword stored in the memory which governs access to a first common code/data set shared by processes running in the processors by imposing first delays on all other processors in the system while, at the same time, mitigating any adverse effect on performance of processors attempting to access a gateword other than the first gateword. This is achieved by starting a second delay in any processor which is seeking ownership of a gateword other than the first gateword and truncating the first delay in all such processors by subtracting the elapsed time indicated by the second delay from the elapsed time indicated by the first delay.
    Type: Grant
    Filed: April 30, 2003
    Date of Patent: December 6, 2005
    Assignee: Bull HN Information Systems Inc.
    Inventors: Charles P. Ryan, Wayne R. Buzby
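
The delay-truncation arithmetic described in the 6973539 abstract can be pictured with a small C sketch. This is an illustration only, not the patented hardware; the function and variable names (truncate_first_delay, first_delay_remaining, second_delay_elapsed) are assumptions introduced here.

```c
#include <stdio.h>

/* Illustrative sketch only: when a processor begins seeking a *different*
 * gateword, a second delay is started; the remaining portion of the first
 * (anti-hogging) delay is then shortened by the cycles already spent in
 * the second delay, so the processor is not penalized twice. */
static unsigned truncate_first_delay(unsigned first_delay_remaining,
                                     unsigned second_delay_elapsed)
{
    if (second_delay_elapsed >= first_delay_remaining)
        return 0;                                   /* first delay satisfied */
    return first_delay_remaining - second_delay_elapsed;
}

int main(void)
{
    /* Example: a 200-cycle anti-hogging delay, with 120 cycles already
     * spent waiting on an unrelated gateword. */
    printf("remaining delay: %u cycles\n", truncate_first_delay(200, 120));
    return 0;
}
```
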
  • Patent number: 6970977
    Abstract: In a multiprocessor write-into-cache data processing system including: a memory; at least first and second shared caches; a system bus coupling the memory and the shared caches; at least one processor having a private cache coupled, respectively, to each shared cache; method and apparatus for preventing hogging of ownership of a gateword stored in the memory which governs access to common code/data shared by processes running in the processors, by which a read copy of the gateword is obtained by a given processor by performing successive swap operations between the memory and the given processor's shared cache, and between that shared cache and its private cache. If the gateword is found to be OPEN, it is CLOSEd by the given processor, and successive swap operations are performed between the given processor's private cache and shared cache, and between the shared cache and memory, to write the gateword CLOSEd in memory so that the given processor obtains exclusive access to the governed common code/data.
    Type: Grant
    Filed: March 31, 2003
    Date of Patent: November 29, 2005
    Assignee: Bull HN Information Systems Inc.
    Inventors: Wayne R. Buzby, Charles P. Ryan, Robert J. Baryla, William A. Shelly, Lowell D. McCulley
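
The swap sequence in the 6970977 abstract can be sketched in C as a gateword copy moving between three storage levels. This is a minimal model under stated assumptions, not the patented mechanism; the types and function names (swap_level, try_acquire) are invented for illustration.

```c
#include <stdio.h>
#include <stdbool.h>

/* Illustrative sketch only: the gateword moves memory -> shared cache ->
 * private cache to be read, and is written back CLOSEd along the reverse
 * path so memory holds the CLOSEd copy. */
enum gate_state { GATE_OPEN, GATE_CLOSED };

struct level { enum gate_state word; bool valid; };

/* Model a swap operation: the gateword copy (and ownership) moves from one
 * cache/memory level to another. */
static void swap_level(struct level *from, struct level *to)
{
    to->word = from->word;
    to->valid = true;
    from->valid = false;
}

static bool try_acquire(struct level *memory, struct level *shared,
                        struct level *priv)
{
    swap_level(memory, shared);   /* memory -> shared cache             */
    swap_level(shared, priv);     /* shared cache -> private cache      */
    if (priv->word != GATE_OPEN)
        return false;             /* another processor holds the gate   */
    priv->word = GATE_CLOSED;     /* close it locally ...               */
    swap_level(priv, shared);     /* ... private cache -> shared cache  */
    swap_level(shared, memory);   /* ... shared cache -> memory         */
    return true;                  /* memory now holds the CLOSEd word   */
}

int main(void)
{
    struct level memory = { GATE_OPEN, true };
    struct level shared = { GATE_OPEN, false };
    struct level priv   = { GATE_OPEN, false };

    printf("acquired: %s\n", try_acquire(&memory, &shared, &priv) ? "yes" : "no");
    printf("gateword in memory: %s\n",
           memory.word == GATE_CLOSED ? "CLOSED" : "OPEN");
    return 0;
}
```
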
  • Patent number: 6898738
    Abstract: Cache memory, and thus computer system, reliability is increased by duplicating cache tag entries. Each cache tag has a primary entry and a duplicate entry. Then, when cache tags are associatively searched, both the primary and the duplicate entry are compared to the search value. At the same time, they are also parity checked and compared against each other. If a match is made on either the primary entry or the duplicate entry, and that entry does not have a parity error, a cache “hit” is indicated. All single bit cache tag parity errors are detected and compensated for. Almost all multiple bit cache tag parity errors are detected.
    Type: Grant
    Filed: July 17, 2001
    Date of Patent: May 24, 2005
    Assignee: Bull HN Information Systems Inc.
    Inventors: Charles P. Ryan, William A. Shelly, Stephen A. Schuerich
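
The duplicated-tag lookup of patent 6898738 can be illustrated with a short C sketch: each tag is held in two copies with a stored parity bit, and a hit requires a match on a copy whose parity checks. The structures and helper names here are assumptions, not the patent's circuit.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Illustrative sketch only: a duplicated cache tag with per-copy parity.
 * A hit requires a match on a copy whose parity checks out, so a
 * single-bit error in one copy is tolerated. */
struct tag_copy { uint32_t value; uint8_t parity; };   /* stored parity bit */
struct dup_tag  { struct tag_copy primary, duplicate; };

static uint8_t parity_of(uint32_t v)
{
    uint8_t p = 0;
    while (v) { p ^= (uint8_t)(v & 1u); v >>= 1; }
    return p;                                   /* 1 if odd number of set bits */
}

static bool copy_ok(const struct tag_copy *c)
{
    return parity_of(c->value) == c->parity;    /* parity-error check */
}

static bool tag_hit(const struct dup_tag *t, uint32_t search)
{
    if (copy_ok(&t->primary)   && t->primary.value   == search) return true;
    if (copy_ok(&t->duplicate) && t->duplicate.value == search) return true;
    return false;
}

int main(void)
{
    uint32_t tag = 0x1234;
    struct dup_tag entry = {
        { tag ^ 0x0010, parity_of(tag) },  /* primary copy hit by a bit flip */
        { tag,          parity_of(tag) },  /* duplicate copy is intact       */
    };
    printf("hit on 0x%x: %s\n", tag, tag_hit(&entry, tag) ? "yes" : "no");
    return 0;
}
```
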
  • Patent number: 6868483
    Abstract: In a multiprocessor data processing system including: a main memory; at least first and second shared caches; a system bus coupling the main memory and the first and second shared caches; at least four processors having respective private caches with the first and second private caches being coupled to the first shared cache and to one another via a first internal bus, and the third and fourth private caches being coupled to the second shared cache and to one another via a second internal bus; method and apparatus for preventing hogging of ownership of a gateword stored in the main memory which governs access to common code/data shared by processes running in at least three of the processors. Each processor includes a gate control flag. A gateword CLOSE command establishes ownership of the gateword in one processor and prevents other processors from accessing the guarded code/data until the one processor has completed its use.
    Type: Grant
    Filed: September 26, 2002
    Date of Patent: March 15, 2005
    Assignee: Bull HN Information Systems Inc.
    Inventors: Wayne R. Buzby, Charles P. Ryan
  • Publication number: 20040221107
    Abstract: A multiprocessor write-into-cache data processing system includes a feature for preventing hogging of ownership of a first gateword stored in the memory which governs access to a first common code/data set shared by processes running in the processors by imposing first delays on all other processors in the system while, at the same time, mitigating any adverse effect on performance of processors attempting to access a gateword other than the first gateword. This is achieved by starting a second delay in any processor which is seeking ownership of a gateword other than the first gateword and truncating the first delay in all such processors by subtracting the elapsed time indicated by the second delay from the elapsed time indicated by the first delay.
    Type: Application
    Filed: April 30, 2003
    Publication date: November 4, 2004
    Inventors: Charles P. Ryan, Wayne R. Buzby
  • Publication number: 20040193804
    Abstract: In a multiprocessor write-into-cache data processing system including: a memory; at least first and second shared caches; a system bus coupling the memory and the shared caches; at least one processor having a private cache coupled, respectively, to each shared cache; method and apparatus for preventing hogging of ownership of a gateword stored in the memory which governs access to common code/data shared by processes running in the processors, by which a read copy of the gateword is obtained by a given processor by performing successive swap operations between the memory and the given processor's shared cache, and between that shared cache and its private cache. If the gateword is found to be OPEN, it is CLOSEd by the given processor, and successive swap operations are performed between the given processor's private cache and shared cache, and between the shared cache and memory, to write the gateword CLOSEd in memory so that the given processor obtains exclusive access to the governed common code/data.
    Type: Application
    Filed: March 31, 2003
    Publication date: September 30, 2004
    Applicant: Bull HN Information Systems Inc.
    Inventors: Wayne R. Buzby, Charles P. Ryan, Robert J. Baryla, William A. Shelly, Lowell D. McCulley
  • Patent number: 6760811
    Abstract: In a multiprocessor data processing system including: a memory, first and second shared caches, a system bus coupling the memory and the shared caches, first, second, third and fourth processors having, respectively, first, second, third and fourth private caches with the first and second private caches being coupled to the first shared cache, and the third and fourth private caches being coupled to the second shared cache, gateword hogging is prevented by providing a gate control flag in each processor. Priority is established for a processor to next acquire ownership of the gate control word by: broadcasting a “set gate control flag” command to all processors such that setting the gate control flags establishes delays during which ownership of the gate control word will not be requested by another processor for predetermined periods established in each processor.
    Type: Grant
    Filed: August 15, 2002
    Date of Patent: July 6, 2004
    Assignee: Bull HN Information Systems Inc.
    Inventors: Wayne R. Buzby, Charles P. Ryan
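
The flag-and-delay priority mechanism of patent 6760811 might look roughly like the following C sketch, where the broadcast command sets every processor's gate control flag and each processor then refrains from requesting the gateword until its own predetermined hold-off expires. All names and numbers are illustrative assumptions.

```c
#include <stdio.h>
#include <stdbool.h>

#define NPROC 4

/* Illustrative sketch only: each processor has a gate control flag and its
 * own predetermined hold-off period; broadcasting "set gate control flag"
 * starts those hold-offs so a starved processor can acquire the gateword. */
struct cpu {
    bool gate_flag;          /* set by the broadcast command         */
    unsigned holdoff;        /* predetermined delay, in cycles       */
    unsigned waited;         /* cycles elapsed since the broadcast   */
};

static void broadcast_set_gate_flag(struct cpu cpus[NPROC])
{
    for (int i = 0; i < NPROC; i++) {
        cpus[i].gate_flag = true;
        cpus[i].waited = 0;
    }
}

static bool may_request_gateword(const struct cpu *c)
{
    /* A processor refrains from requesting the gateword until its own
     * hold-off period has expired. */
    return !c->gate_flag || c->waited >= c->holdoff;
}

int main(void)
{
    struct cpu cpus[NPROC] = {
        { false, 100, 0 }, { false, 200, 0 }, { false, 300, 0 }, { false, 400, 0 }
    };
    broadcast_set_gate_flag(cpus);
    for (int i = 0; i < NPROC; i++)
        cpus[i].waited = 150;                       /* pretend 150 cycles pass */
    for (int i = 0; i < NPROC; i++)
        printf("cpu %d may request gateword: %s\n", i,
               may_request_gateword(&cpus[i]) ? "yes" : "no");
    return 0;
}
```
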
  • Publication number: 20040064645
    Abstract: In a multiprocessor data processing system including: a main memory; at least first and second shared caches; a system bus coupling the main memory and the first and second shared caches; at least four processors having respective private caches with the first and second private caches being coupled to the first shared cache and to one another via a first internal bus, and the third and fourth private caches being coupled to the second shared cache and to one another via a second internal bus; method and apparatus for preventing hogging of ownership of a gateword stored in the main memory which governs access to common code/data shared by processes running in at least three of the processors. Each processor includes a gate control flag. A gateword CLOSE command establishes ownership of the gateword in one processor and prevents other processors from accessing the guarded code/data until the one processor has completed its use.
    Type: Application
    Filed: September 26, 2002
    Publication date: April 1, 2004
    Inventors: Wayne R. Buzby, Charles P. Ryan
  • Publication number: 20040034741
    Abstract: In a multiprocessor data processing system including: a memory, first and second shared caches, a system bus coupling the memory and the shared caches, first, second, third and fourth processors having, respectively, first, second, third and fourth private caches with the first and second private caches being coupled to the first shared cache, and the third and fourth private caches being coupled to the second shared cache, gateword hogging is prevented by providing a gate control flag in each processor. Priority is established for a processor to next acquire ownership of the gate control word by: broadcasting a “set gate control flag” command to all processors such that setting the gate control flags establishes delays during which ownership of the gate control word will not be requested by another processor for predetermined periods established in each processor.
    Type: Application
    Filed: August 15, 2002
    Publication date: February 19, 2004
    Inventors: Wayne R. Buzby, Charles P. Ryan
  • Patent number: 6604060
    Abstract: In a Cache-Coherent Non-Uniform Memory Architecture (CC-NUMA), the time, measured in cycles, that it takes for cache control signals to travel between processors (92) sharing an L2 cache (94) differs from the time it takes for those signals to travel between processors (92) not sharing the same L2 cache (94). This difference, or DELTA, is determined dynamically by computing (332) the time it takes for an invalidate cache line command to travel between a local processor (92) and a master processor (92). The same computation (334) is then made for the time it takes the signal to travel between a remote processor (92) and the master processor (92). The difference (336) is the DELTA value in cycles. This DELTA value can then be used to bias delay values when exhaustively testing the interactions among multiple processors in a CC-NUMA environment (180).
    Type: Grant
    Filed: June 29, 2000
    Date of Patent: August 5, 2003
    Assignee: Bull HN Information Systems Inc.
    Inventors: Charles P. Ryan, Eric E. Conway
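
The DELTA computation of patent 6604060 reduces to one subtraction once the two travel times are measured; the C sketch below shows that arithmetic. The cycle counts and the rule that processors 2n and 2n+1 share an L2 are assumptions made up for the example.

```c
#include <stdio.h>

/* Illustrative sketch only: DELTA is the difference between the cycles an
 * invalidate cache line command takes over the remote path and over the
 * local (shared-L2) path. */
static unsigned invalidate_travel_cycles(int from_cpu, int master_cpu)
{
    /* Stand-in for a cycle-counted measurement of the command travelling
     * from one processor to the master; sharing rule is assumed. */
    return (from_cpu / 2 == master_cpu / 2) ? 12 : 37;
}

int main(void)
{
    unsigned local_cycles  = invalidate_travel_cycles(1, 0);  /* shares the master's L2 */
    unsigned remote_cycles = invalidate_travel_cycles(2, 0);  /* on a different L2      */
    int delta = (int)remote_cycles - (int)local_cycles;

    printf("DELTA = %d cycles\n", delta);
    /* DELTA can then bias the delay values used when exhaustively testing
     * processor interactions in the CC-NUMA system. */
    return 0;
}
```
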
  • Patent number: 6530076
    Abstract: A processor (92) contains a Trace RAM (210) for tracing internal processor signals and operands. A first trace mode separately traces microcode instruction execution and cache controller execution. Selectable groups of signals are traced from both the cache controller (256) and the arithmetic (AX) processor (260). A second trace mode selectively traces full operand words that result from a microcode instruction (242). Each microcode instruction word (242) has a trace enable bit (244) that, when enabled, causes the results of that microcode instruction (242) to be recorded in the Trace RAM (210).
    Type: Grant
    Filed: December 23, 1999
    Date of Patent: March 4, 2003
    Assignee: Bull HN Information Systems Inc.
    Inventors: Charles P. Ryan, Ron Yoder, William A. Shelly
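
The trace-enable mechanism of patent 6530076 can be approximated in C as a ring buffer written only when a microcode word's trace bit is set. This is a software stand-in under assumed names (ucode_word, trace_ram), not the processor's actual Trace RAM logic.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define TRACE_DEPTH 8

/* Illustrative sketch only: a microcode word carries a trace-enable bit;
 * when it is set, the instruction's result operand is captured in a small
 * on-chip trace RAM (modelled here as a ring buffer). */
struct ucode_word { uint64_t result; bool trace_enable; };

struct trace_ram { uint64_t entries[TRACE_DEPTH]; unsigned next; };

static void execute_ucode(const struct ucode_word *w, struct trace_ram *t)
{
    /* ... normal execution of the microcode word would happen here ... */
    if (w->trace_enable) {
        t->entries[t->next % TRACE_DEPTH] = w->result;   /* capture operand */
        t->next++;
    }
}

int main(void)
{
    struct trace_ram ram = {0};
    struct ucode_word prog[] = {
        { 0x1111, false },   /* not traced */
        { 0x2222, true  },   /* traced     */
        { 0x3333, true  },   /* traced     */
    };
    for (unsigned i = 0; i < sizeof prog / sizeof prog[0]; i++)
        execute_ucode(&prog[i], &ram);
    for (unsigned i = 0; i < ram.next; i++)
        printf("trace[%u] = 0x%llx\n", i, (unsigned long long)ram.entries[i]);
    return 0;
}
```
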
  • Publication number: 20030018936
    Abstract: Cache memory, and thus computer system, reliability is increased by duplicating cache tag entries. Each cache tag has a primary entry and a duplicate entry. Then, when cache tags are associatively searched, both the primary and the duplicate entry are compared to the search value. At the same time, they are also parity checked and compared against each other. If a match is made on either the primary entry or the duplicate entry, and that entry does not have a parity error, a cache “hit” is indicated. All single bit cache tag parity errors are detected and compensated for. Almost all multiple bit cache tag parity errors are detected.
    Type: Application
    Filed: July 17, 2001
    Publication date: January 23, 2003
    Applicant: Bull HN Information Systems Inc.
    Inventors: Charles P. Ryan, William A. Shelly, Stephen A. Schuerich
  • Patent number: 6442681
    Abstract: A cache used with a pipelined processor includes an instruction cache, instruction buffers for receiving instruction sub-blocks from the instruction cache and providing instructions to the pipelined processor, and a branch cache. The branch cache includes an instruction buffer adjunct for storing an information set for each sub-block resident in the instruction buffers. A branch cache directory stores instruction buffer addresses corresponding to current entries in the instruction buffer adjunct, and a target address RAM stores target addresses developed from prior searches of the branch cache. A delay pipe is used to selectively step an information set read from the instruction buffer adjunct in synchronism with a transfer instruction traversing the pipeline.
    Type: Grant
    Filed: December 28, 1998
    Date of Patent: August 27, 2002
    Assignee: Bull HN Information Systems Inc.
    Inventors: Charles P. Ryan, Patrice Brossard
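
The delay pipe of patent 6442681 behaves like a shift register that advances the branch-cache information set one stage per clock so it stays aligned with the instruction in the execution pipeline. The C sketch below models that behavior; the stage count and names are assumptions.

```c
#include <stdio.h>
#include <stdint.h>

#define PIPE_STAGES 4

/* Illustrative sketch only: the delay pipe is modelled as serially-coupled
 * registers, so an information set read from the instruction buffer adjunct
 * steps forward one register per clock, staying aligned with the transfer
 * instruction moving through the execution pipeline. */
struct delay_pipe { uint32_t reg[PIPE_STAGES]; };

static uint32_t pipe_step(struct delay_pipe *p, uint32_t info_in)
{
    uint32_t info_out = p->reg[PIPE_STAGES - 1];   /* leaves with the instruction */
    for (int i = PIPE_STAGES - 1; i > 0; i--)
        p->reg[i] = p->reg[i - 1];                 /* shift one stage per clock   */
    p->reg[0] = info_in;
    return info_out;
}

int main(void)
{
    struct delay_pipe pipe = {0};
    for (uint32_t clock = 1; clock <= 6; clock++) {
        uint32_t out = pipe_step(&pipe, clock);    /* clock number stands in for an info set */
        printf("clock %u: info set leaving pipe = %u\n", clock, out);
    }
    return 0;
}
```
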
  • Patent number: 6249880
    Abstract: Interactions among multiple processors (92) are exhaustively tested. A master processor (92) retrieves test information for a set of tests from a test table (148). It then enters a series of embedded loops, with one loop for each of the tested processors (92). A cycle delay count for each of the tested processors (92) is incremented (152, 162, 172) through a range specified in the test table entry. For each combination of cycle delay count loop indices, a single test is executed (176). In each such test (176), the master processor (92) sets up (182) each of the other processors (92) being tested. This setup (182) specifies the delay count and the code for that processor (92) to execute. When each processor (92) is set up (182), it waits (192) for a synchronize interrupt (278). When all processors (92) have been set up (182), the master processor (92) issues (191) the synchronize interrupt signal (276). Each processor (92) then starts traces (193) and delays (194) the specified number of cycles.
    Type: Grant
    Filed: September 17, 1998
    Date of Patent: June 19, 2001
    Assignee: Bull HN Information Systems Inc.
    Inventors: William A. Shelly, Charles P. Ryan
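
The exhaustive test driver of patent 6249880 is essentially a set of nested loops over per-processor delay ranges, with one test run per combination. The C sketch below keeps only that loop structure; the hardware setup, synchronize interrupt, and tracing steps are reduced to a comment and a printf, and the table values are invented.

```c
#include <stdio.h>

/* Illustrative sketch only: the master steps each tested processor's cycle
 * delay through a range taken from a test table, and runs one test per
 * combination of delay values. */
struct test_entry { unsigned lo, hi; };   /* delay range for one processor */

static void run_one_test(unsigned d0, unsigned d1, unsigned d2)
{
    /* Each processor would be set up with its delay and test code, wait for
     * the synchronize interrupt, then start tracing and delay d cycles. */
    printf("test with delays %u/%u/%u\n", d0, d1, d2);
}

int main(void)
{
    struct test_entry table[3] = { {0, 2}, {0, 2}, {0, 2} };

    for (unsigned d0 = table[0].lo; d0 <= table[0].hi; d0++)
        for (unsigned d1 = table[1].lo; d1 <= table[1].hi; d1++)
            for (unsigned d2 = table[2].lo; d2 <= table[2].hi; d2++)
                run_one_test(d0, d1, d2);   /* 27 combinations in this sketch */
    return 0;
}
```
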
  • Patent number: 6230263
    Abstract: A processor (92) in a data processing system (80) provides a DELAY instruction. Executing the DELAY instruction causes the processor (92) to delay a specified integral number of clock (98) cycles before continuing. Delays are guaranteed to have a linear relationship, with a constant slope, to the specified number of clock cycles. Incrementing the specified delay through a range allows exhaustive testing of interactions among multiple processors.
    Type: Grant
    Filed: September 17, 1998
    Date of Patent: May 8, 2001
    Inventors: Charles P. Ryan, Ronald W. Yoder, William A. Shelly
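
The guarantee in patent 6230263 is that a DELAY of n cycles costs a fixed overhead plus exactly n, a straight line with constant slope, which is what makes sweeping n through a range an exhaustive test of relative timings. The tiny C sketch below just models that linear cost; the overhead value is an assumption.

```c
#include <stdio.h>

/* Illustrative model only: DELAY n is assumed to cost a fixed overhead plus
 * one cycle per requested count, i.e. a linear relationship with constant
 * slope, so stepping n through a range covers every relative timing. */
static unsigned long delay_cost_cycles(unsigned n)
{
    const unsigned long overhead = 3;   /* assumed fixed issue/retire cost */
    return overhead + n;
}

int main(void)
{
    for (unsigned n = 0; n <= 4; n++)
        printf("DELAY %u -> %lu cycles\n", n, delay_cost_cycles(n));
    return 0;
}
```
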
  • Patent number: 6223228
    Abstract: Two instructions are provided to synchronize multiple processors (92) in a data processing system (80). A Transmit Sync instruction (TSYNC) transmits a synchronize processor interrupt (276) to all of the active processors (92) in the system (80). Processors (92) wait for receipt of the synchronize signal (278) by executing a Wait for Sync (WSYNC) instruction. Each of the processors waiting for such a signal (278) is activated at the next clock cycle after receipt of the interrupt signal (278). An optional timeout value is provided to protect against hanging a waiting processor (92) that misses the interrupt (278). Whenever the WSYNC instruction is activated by receipt of the interrupt (278), a trace is started to trace a fixed number of events to an internal Trace Cache (58).
    Type: Grant
    Filed: September 17, 1998
    Date of Patent: April 24, 2001
    Assignee: Bull HN Information Systems Inc.
    Inventors: Charles P. Ryan, William A. Shelly, Ronald W. Yoder
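
The TSYNC/WSYNC pair of patent 6223228 resembles a release-style barrier with a timeout. The sketch below emulates it with POSIX threads and an atomic flag (compile with -lpthread); it is an analogy in software, not the patented interrupt mechanism, and the timeout handling is simplified.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

/* Software analogy only: TSYNC is modelled as raising a shared flag, WSYNC
 * as waiting for that flag with a timeout so a waiter that misses the
 * signal does not hang forever. */
static atomic_bool sync_signal = false;

static int wsync(unsigned timeout_spins)
{
    for (unsigned spin = 0; spin < timeout_spins; spin++) {
        if (atomic_load(&sync_signal))
            return 0;                 /* released by the sync signal       */
        usleep(1000);                 /* 1 ms per poll in this simulation  */
    }
    return -1;                        /* timed out instead of hanging      */
}

static void *worker(void *arg)
{
    int id = *(int *)arg;
    if (wsync(1000) == 0)
        printf("processor %d released by sync\n", id);
    else
        printf("processor %d timed out\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int ids[2] = { 1, 2 };

    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);

    usleep(10000);                    /* let both workers reach WSYNC      */
    atomic_store(&sync_signal, true); /* TSYNC: release them together      */

    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```
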
  • Patent number: 6175897
    Abstract: A cache used with a pipelined processor includes an instruction cache, instruction buffers for receiving instruction sub-blocks from the instruction cache and providing instructions to the pipelined processor, and a branch cache. The branch cache includes an instruction buffer adjunct for storing an information set for each sub-block resident in the instruction buffers. A branch cache directory stores instruction buffer addresses corresponding to current entries in the instruction buffer adjunct, and a target address RAM stores target addresses developed from prior searches of the branch cache. A delay pipe, constituting serially-coupled registers, is used to step an information set read from the instruction buffer adjunct in synchronism with a transfer instruction traversing the pipeline.
    Type: Grant
    Filed: December 28, 1998
    Date of Patent: January 16, 2001
    Assignee: Bull HN Information Systems Inc.
    Inventors: Charles P. Ryan, Patrice Brossard
  • Patent number: 5701426
    Abstract: A data processing system which employs a cache memory feature and a method for lowering the cache miss ratio for called operands in the data processing system are disclosed. Recent cache misses are stored in a first in, first out miss stack, and the stored addresses are searched for displacement patterns thereamong. Any detected pattern is then employed to predict a succeeding cache miss by prefetching from main memory the signal identified by the predictive address. The apparatus for performing this task is preferably hard wired for speed purposes and includes subtraction circuits for evaluating variously displaced addresses in the miss stack and comparator circuits for determining if the outputs from at least two subtraction circuits are the same indicating a pattern yielding information which can be combined with an address in the stack to develop a predictive address.
    Type: Grant
    Filed: March 31, 1995
    Date of Patent: December 23, 1997
    Assignee: Bull Information Systems Inc.
    Inventor: Charles P. Ryan
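
The miss-stack prediction shared by patents 5701426, 5694572, and 5495591 can be shown in a few lines of C: recent miss addresses are differenced, and when at least two displacements agree, the common stride is added to the newest miss to form a predictive prefetch address. The stack depth and addresses below are illustrative assumptions, and the software models what the abstracts describe as subtraction and comparator circuits.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define MISS_STACK_DEPTH 4

/* Illustrative sketch only: recent miss addresses sit in a FIFO "miss
 * stack" (entry 0 is the most recent).  If the displacements between
 * successive entries agree, that common stride plus the newest address
 * yields a predictive address to prefetch. */
static bool predict_next_miss(const uint32_t stack[MISS_STACK_DEPTH],
                              uint32_t *predicted)
{
    int32_t d01 = (int32_t)(stack[0] - stack[1]);   /* newest - next newest */
    int32_t d12 = (int32_t)(stack[1] - stack[2]);

    if (d01 == d12 && d01 != 0) {                   /* two subtractors agree */
        *predicted = stack[0] + (uint32_t)d01;      /* extend the pattern    */
        return true;
    }
    return false;
}

int main(void)
{
    /* Misses at 0x1300, 0x1200, 0x1100, 0x1000: a constant stride of 0x100. */
    uint32_t miss_stack[MISS_STACK_DEPTH] = { 0x1300, 0x1200, 0x1100, 0x1000 };
    uint32_t addr;

    if (predict_next_miss(miss_stack, &addr))
        printf("prefetch 0x%x from main memory\n", (unsigned)addr);   /* 0x1400 */
    else
        printf("no displacement pattern detected\n");
    return 0;
}
```
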
  • Patent number: 5694572
    Abstract: In a data processing system which employs a cache memory feature, a method and exemplary special purpose apparatus for practicing the method are disclosed to lower the cache miss ratio for called operands. Recent cache misses are stored in a first in, first out miss stack, and the stored addresses are searched for displacement patterns thereamong. Any detected pattern is then employed to predict a succeeding cache miss by prefetching from main memory the signal identified by the predictive address. The apparatus for performing this task is preferably hard wired for speed purposes and includes subtraction circuits for evaluating variously displaced addresses in the miss stack and comparator circuits for determining if the outputs from at least two subtraction circuits are the same indicating a pattern yielding information which can be combined with an address in the stack to develop a predictive address.
    Type: Grant
    Filed: February 26, 1992
    Date of Patent: December 2, 1997
    Assignee: Bull HN Information Systems Inc.
    Inventor: Charles P. Ryan
  • Patent number: 5495591
    Abstract: For a data processing system which employs a cache memory, the disclosure includes both a method for lowering the cache miss ratio for requested operands and an example of special purpose apparatus for practicing the method. Recent cache misses are stored in a first in, first out miss stack, and the stored addresses are searched for displacement patterns thereamong. Any detected pattern is then employed to predict a succeeding cache miss by prefetching from main memory the signal identified by the predictive address. The apparatus for performing this task is preferably hard wired for speed purposes and includes subtraction circuits for evaluating variously displaced addresses in the miss stack and comparator circuits for determining if the outputs from at least two subtraction circuits are the same, indicating a pattern which yields information which can be combined with an address in the stack to develop a predictive address.
    Type: Grant
    Filed: June 30, 1992
    Date of Patent: February 27, 1996
    Assignee: Bull HN Information Systems Inc.
    Inventor: Charles P. Ryan