Patents by Inventor David A. Shedivy

David A. Shedivy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8566607
    Abstract: In a first aspect, a first cryptography method is provided. The first method includes the steps of (1) in response to receiving a request to perform a first operation on data in a first memory cacheline, accessing data associated with the first memory cacheline; (2) performing cryptography on data of the first memory cacheline when necessary; and (3) speculatively accessing data associated with a second memory cacheline based on the first memory cacheline before receiving a request to perform an operation on data in the second memory cacheline. Numerous other aspects are provided.
    Type: Grant
    Filed: August 26, 2005
    Date of Patent: October 22, 2013
    Assignee: International Business Machines Corporation
    Inventors: William T. Flynn, David A. Shedivy
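The cryptography method above (patent 8566607) pairs demand decryption of a requested cacheline with speculative access to a related cacheline. The following is a minimal sketch of that idea, not the patented implementation: it assumes a next-sequential-line prefetch policy and substitutes a trivial XOR stand-in for the cipher.

```python
# Minimal sketch (not the patented design): hide decryption latency by
# speculatively fetching and decrypting the next sequential cacheline.
CACHELINE = 64  # bytes; assumed line size

def xor_crypt(data: bytes, key: int) -> bytes:
    """Stand-in cipher so the example stays self-contained."""
    return bytes(b ^ key for b in data)

class EncryptedMemory:
    def __init__(self, image: bytes, key: int = 0x5A):
        self.key = key
        # Store the backing image already "encrypted".
        self.lines = {a: xor_crypt(image[a:a + CACHELINE], key)
                      for a in range(0, len(image), CACHELINE)}
        self.speculative = {}   # decrypted lines fetched ahead of any request

    def read_line(self, addr: int) -> bytes:
        base = addr - addr % CACHELINE
        # Use speculatively decrypted data if it is already on hand.
        plain = self.speculative.pop(base, None)
        if plain is None:
            plain = xor_crypt(self.lines[base], self.key)
        # Speculatively access and decrypt the line after this one,
        # before any request for it is actually received.
        nxt = base + CACHELINE
        if nxt in self.lines and nxt not in self.speculative:
            self.speculative[nxt] = xor_crypt(self.lines[nxt], self.key)
        return plain

mem = EncryptedMemory(bytes(range(256)) * 2)
assert mem.read_line(0) == bytes(range(64))     # demand read
assert 64 in mem.speculative                    # next line prefetched and decrypted
```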
  • Publication number: 20130194964
    Abstract: Techniques are provided for routing table synchronization for a distributed network switch. In one embodiment, a first frame having a source address and a destination address is received. If no routing entry for the source address is found in a routing table of a first switch module, routing information is determined for the source address and a routing entry is generated. An indication is sent to a second switch module, to request a routing entry for the source address to be generated in the second switch module, based on the routing information.
    Type: Application
    Filed: February 1, 2012
    Publication date: August 1, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Claude Basso, David A. Shedivy, Colin B. Verrilli, Bruce M. Walk, Daniel Wind
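As a rough illustration of the routing-table synchronization described in publication 20130194964, the sketch below has a switch module learn an entry for an unknown source address and ask a peer module to install the same entry. The class and method names are assumptions made for the example, not the patent's terminology.

```python
# Minimal sketch (assumptions, not the patented protocol): a switch module that
# learns a routing entry for an unknown source address and asks a peer module
# to generate the same entry, keeping the routing tables synchronized.
from dataclasses import dataclass, field

@dataclass
class SwitchModule:
    name: str
    routing_table: dict = field(default_factory=dict)  # MAC -> ingress port
    peers: list = field(default_factory=list)

    def receive_frame(self, src_mac: str, dst_mac: str, ingress_port: int):
        if src_mac not in self.routing_table:
            # Determine routing information and generate a local entry.
            self.routing_table[src_mac] = ingress_port
            # Indicate to peer modules that they should generate the entry too.
            for peer in self.peers:
                peer.install_entry(src_mac, ingress_port, learned_from=self.name)

    def install_entry(self, mac: str, port: int, learned_from: str):
        self.routing_table.setdefault(mac, port)

a, b = SwitchModule("A"), SwitchModule("B")
a.peers.append(b)
a.receive_frame("aa:bb:cc:dd:ee:01", "ff:ff:ff:ff:ff:ff", ingress_port=3)
assert b.routing_table["aa:bb:cc:dd:ee:01"] == 3   # entry synchronized to B
```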
  • Publication number: 20130188637
    Abstract: Techniques are provided for multicast miss notification for a distributed network switch. In one embodiment, a bridge element in the distributed network switch receives a frame destined for a multicast group on a network. If a local multicast forwarding table of the bridge element does not include any forwarding entry for the multicast group, a forwarding entry is selected from the local multicast forwarding table as a candidate for being replaced. An indication of the candidate is sent to a management controller in the distributed network switch.
    Type: Application
    Filed: January 19, 2012
    Publication date: July 25, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Josep Cors, Todd A. Greenfield, David A. Shedivy, Bruce M. Walk
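The miss-notification flow in publication 20130188637 can be pictured roughly as below. The least-recently-used choice of replacement candidate and the controller callback are illustrative assumptions; the abstract does not prescribe them.

```python
# Sketch under stated assumptions: on a multicast-group miss, pick a victim
# entry from the local forwarding table and notify the management controller.
from collections import OrderedDict

class Controller:
    def notify_miss(self, group, candidate):
        print(f"miss for {group}; replacement candidate: {candidate}")

class BridgeElement:
    def __init__(self, controller):
        self.mcast_table = OrderedDict()   # group -> list of egress ports
        self.controller = controller

    def forward(self, group):
        if group in self.mcast_table:
            self.mcast_table.move_to_end(group)   # mark entry as recently used
            return self.mcast_table[group]
        # Miss: select the least recently used entry as the candidate for
        # replacement and send an indication to the management controller.
        candidate = next(iter(self.mcast_table), None)
        self.controller.notify_miss(group, candidate)
        return []                                  # e.g. flood within the switch

be = BridgeElement(Controller())
be.mcast_table["239.1.1.1"] = [1, 2]
assert be.forward("239.2.2.2") == []               # miss notifies the controller
```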
  • Patent number: 8379642
    Abstract: Systems and methods to multicast data frames are provided. A particular apparatus includes a plurality of computing nodes and a distributed virtual bridge. The distributed virtual bridge includes a plurality of bridge elements coupled to the plurality of computing nodes. The plurality of bridge elements are configured to forward a copy of a multicast data frame to the plurality of computing nodes using group member information associated with addresses of the plurality of computing nodes. A controlling bridge coupled to the plurality of bridge elements is configured to communicate the group member information to the plurality of bridge elements.
    Type: Grant
    Filed: April 26, 2010
    Date of Patent: February 19, 2013
    Assignee: International Business Machines Corporation
    Inventors: William J. Armstrong, Claude Basso, Josep Cors, Kyle A. Lucke, David A. Shedivy, Kenneth M. Valk, Bruce M. Walk
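A toy rendering of the multicast scheme in patent 8379642: a controlling bridge distributes group member information to bridge elements, which then replicate a frame to each member node. The method names (push_group, multicast) are invented for the example.

```python
# Illustrative sketch only: a controlling bridge pushes multicast group
# membership to bridge elements, which then forward a copy of a frame to each
# member computing node.
class ControllingBridge:
    def __init__(self):
        self.bridge_elements = []

    def push_group(self, group, members):
        # Communicate group member information to every bridge element.
        for be in self.bridge_elements:
            be.groups[group] = list(members)

class BridgeElement:
    def __init__(self, nodes):
        self.groups = {}          # multicast group -> member node addresses
        self.nodes = nodes        # address -> computing node (here: a list)

    def multicast(self, group, frame):
        for addr in self.groups.get(group, []):
            self.nodes[addr].append(frame)   # forward a copy to each member

nodes = {"node1": [], "node2": [], "node3": []}
cb = ControllingBridge()
be = BridgeElement(nodes)
cb.bridge_elements.append(be)
cb.push_group("grp-A", ["node1", "node3"])
be.multicast("grp-A", frame=b"payload")
assert nodes["node1"] == [b"payload"] and nodes["node3"] == [b"payload"]
```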
  • Publication number: 20120236858
    Abstract: Systems and methods to multicast data frames are provided. A particular apparatus includes a plurality of computing nodes and a distributed virtual bridge. The distributed virtual bridge includes a plurality of bridge elements coupled to the plurality of computing nodes. The plurality of bridge elements are configured to forward a copy of a multicast data frame to the plurality of computing nodes using group member information associated with addresses of the plurality of computing nodes. A controlling bridge coupled to the plurality of bridge elements is configured to communicate the group member information to the plurality of bridge elements.
    Type: Application
    Filed: May 31, 2012
    Publication date: September 20, 2012
    Inventors: William J. Armstrong, Claude Basso, Josep Cors, Kyle A. Lucke, David A. Shedivy, Kenneth M. Valk, Bruce M. Walk
  • Publication number: 20110261815
    Abstract: Systems and methods to multicast data frames are provided. A particular apparatus includes a plurality of computing nodes and a distributed virtual bridge. The distributed virtual bridge includes a plurality of bridge elements coupled to the plurality of computing nodes. The plurality of bridge elements are configured to forward a copy of a multicast data frame to the plurality of computing nodes using group member information associated with addresses of the plurality of computing nodes. A controlling bridge coupled to the plurality of bridge elements is configured to communicate the group member information to the plurality of bridge elements.
    Type: Application
    Filed: April 26, 2010
    Publication date: October 27, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: William J. Armstrong, Claude Basso, Josep Cors, Kyle A. Lucke, David A. Shedivy, Kenneth M. Valk, Bruce M. Walk
  • Publication number: 20110264610
    Abstract: Systems and methods to forward data frames are provided. A particular apparatus may include a plurality of server computers and a distributed virtual bridge. The distributed virtual bridge may include a plurality of bridge elements coupled to the plurality of server computers and configured to forward a data frame between the plurality of server computers. The plurality of bridge elements may further be configured to automatically learn address data associated with the data frame. A controlling bridge may be coupled to the plurality of bridge elements. The controlling bridge may include a global forwarding table that is automatically updated to include the address data and is accessible to the plurality of bridge elements.
    Type: Application
    Filed: April 26, 2010
    Publication date: October 27, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: William J. Armstrong, Claude Basso, Josep Cors, David R. Engebretsen, Kyle A. Lucke, David A. Shedivy, Colin B. Verrilli, Bruce M. Walk
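Publication 20110264610 combines automatic address learning with a global forwarding table held by the controlling bridge. The sketch below models only that relationship; the learning trigger and lookup API are assumptions for illustration.

```python
# Sketch under assumptions: bridge elements learn source addresses from frames
# they forward and report them to a controlling bridge, whose global forwarding
# table is then accessible to all elements.
class ControllingBridge:
    def __init__(self):
        self.global_forwarding_table = {}   # MAC -> (bridge element, port)

    def learn(self, mac, element, port):
        self.global_forwarding_table[mac] = (element, port)

class BridgeElement:
    def __init__(self, name, controlling_bridge):
        self.name = name
        self.cb = controlling_bridge

    def handle_frame(self, src_mac, dst_mac, ingress_port):
        # Automatically learn address data from the forwarded frame.
        self.cb.learn(src_mac, self.name, ingress_port)
        # Look up the destination in the shared global table.
        return self.cb.global_forwarding_table.get(dst_mac)  # None -> flood

cb = ControllingBridge()
e1, e2 = BridgeElement("elem1", cb), BridgeElement("elem2", cb)
e1.handle_frame("mac-A", "mac-B", ingress_port=1)     # learns mac-A, floods
hit = e2.handle_frame("mac-B", "mac-A", ingress_port=7)
assert hit == ("elem1", 1)                            # mac-A now known globally
```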
  • Publication number: 20110243134
    Abstract: Systems and methods to forward data frames are provided. A particular method may include receiving a data frame at a distributed virtual bridge. The distributed virtual bridge includes a first bridge element coupled to a first server computer and a second bridge element coupled to the first bridge element and to a second server computer. The distributed virtual bridge further includes a controlling bridge coupled to the first bridge element and to the second bridge element. The controlling bridge includes a global forwarding table. The data frame is forwarded from the first bridge element to the second bridge element of the distributed virtual bridge using address data associated with the data frame. A logical network associated with the frame may additionally be used to forward the data frame.
    Type: Application
    Filed: March 31, 2010
    Publication date: October 6, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: William J. Armstrong, Claude Basso, David R. Engebretsen, Kyle A. Lucke, Jeffrey J. Lynch, David A. Shedivy, Colin B. Verrilli, Bruce M. Walk
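Publication 20110243134 forwards frames using address data together with an associated logical network. A minimal sketch of a forwarding key that includes the logical network, with assumed table contents:

```python
# Illustrative only: forwarding decisions keyed by both the frame's address
# data and its logical network, so identical addresses in different logical
# networks resolve to different bridge elements.
forwarding_table = {
    ("lnet-1", "mac-X"): "bridge-element-2",
    ("lnet-2", "mac-X"): "bridge-element-5",
}

def forward(logical_network, dst_mac):
    return forwarding_table.get((logical_network, dst_mac), "flood")

assert forward("lnet-1", "mac-X") == "bridge-element-2"
assert forward("lnet-2", "mac-X") == "bridge-element-5"
assert forward("lnet-1", "mac-Y") == "flood"
```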
  • Patent number: 8028128
    Abstract: In a method of managing a cache directory in a memory system, an original system address is presented to the cache directory when corresponding associativity data is allocated to an associativity class in the cache directory. The original system address is normalized by removing address space corresponding to a memory hole, thereby generating a normalized address. The normalized address is stored in the cache directory. The normalized address is de-normalized, thereby generating a de-normalized address, when the associativity data is cast out of the cache directory to make room for new associativity data. The de-normalized address is sent to the memory system for coherency management.
    Type: Grant
    Filed: October 10, 2007
    Date of Patent: September 27, 2011
    Assignee: International Business Machines Corporation
    Inventors: Duane A. Averill, Herman L. Blackmon, Joseph A. Kirscht, David A. Shedivy
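The normalization in patent 8028128 removes a memory hole's address space before an address is stored in the cache directory and restores it on castout. A toy model of that round trip, with made-up hole location and size:

```python
# Toy model (hole start and size are invented values): addresses above the
# memory hole are normalized downward before being stored in the cache
# directory, and de-normalized when an entry is cast out.
HOLE_START = 0x0_C000_0000       # assumed start of the memory hole
HOLE_SIZE  = 0x0_4000_0000       # assumed size of the hole (1 GiB)

def normalize(system_addr):
    """Remove the hole's address space so directory tags stay compact."""
    return system_addr - HOLE_SIZE if system_addr >= HOLE_START + HOLE_SIZE else system_addr

def denormalize(normalized_addr):
    """Recreate the original system address when casting the entry out."""
    return normalized_addr + HOLE_SIZE if normalized_addr >= HOLE_START else normalized_addr

addr = 0x1_2345_6780
assert denormalize(normalize(addr)) == addr      # round-trips for high memory
assert normalize(0x1000) == 0x1000               # low memory is unaffected
```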
  • Patent number: 7925857
    Abstract: In a method of generating a cache directory to include a plurality of associativity classes, each associativity class includes an address tag including a plurality of address bits. Each address tag is configured to store a unique address to a specific location in a memory space. An amount of memory that is in an actually configured portion of the memory space is determined. A minimum number of bits necessary to address each memory location in the actually configured portion of the memory space is determined. Each address tag is configured in each associativity class to include the minimum number of bits necessary to address each memory location in the actually configured portion of the memory space. The cache directory is configured to include a maximum number of associativity classes per line in the cache directory.
    Type: Grant
    Filed: January 24, 2008
    Date of Patent: April 12, 2011
    Assignee: International Business Machines Corporation
    Inventors: Duane A. Averill, Herman L. Blackmon, Joseph A. Kirscht, David A. Shedivy
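Patent 7925857 sizes each address tag to the minimum number of bits needed for the memory that is actually configured rather than the full architected space. Illustrative arithmetic only, with an assumed cacheline size and capacity:

```python
# Illustrative arithmetic: bits needed to name every cacheline in the
# configured memory, versus the full architected address space.
import math

def tag_bits(configured_bytes, cacheline_bytes=64):
    """Minimum bits needed to address each line in the configured memory."""
    lines = configured_bytes // cacheline_bytes
    return max(1, math.ceil(math.log2(lines)))

# 16 GiB configured, versus a 48-bit architected space with 64-byte lines:
print(tag_bits(16 * 2**30))   # 28 bits instead of 42 (48 minus 6 offset bits)
```

Narrower tags are what leave room for the "maximum number of associativity classes per line" mentioned in the abstract.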
  • Patent number: 7650259
    Abstract: A method, system, and computer program product for tuning a set of chipset parameters to achieve optimal chipset performance under varying workload characteristics. A set of workload characteristics of a current workload type is determined. An instruction stream is generated using weighted parameters derived from the set of workload characteristics of the current workload type. A set of chipset parameters is generated and integrated within the instruction stream. The instruction stream is loaded to one or more processors and executed to collect and analyze performance data relating to the chipset's performance. The analysis includes comparing the set of performance data of a plurality of different instruction streams having the same set of workload characteristics. Each executed instruction stream is executed with at least one different combination of chipset parameters. A determination is made regarding which combination of chipset parameters provides the best performance data for the current workload.
    Type: Grant
    Filed: October 1, 2007
    Date of Patent: January 19, 2010
    Assignee: International Business Machines Corporation
    Inventors: Herman L. Blackmon, Joseph A. Kirscht, David A. Shedivy, Brian T. Vanderpool
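Patent 7650259 searches combinations of chipset parameters by executing instruction streams built from workload-weighted parameters and comparing the resulting performance data. The loop below is a hedged sketch of that search only; the parameter names, workload weights, and scoring function are invented, and no real instruction stream is generated.

```python
# Sketch of the tuning loop: try each combination of (assumed) chipset
# parameters against the same workload mix and keep the best-scoring one.
import itertools, random

def run_stream(workload_weights, params):
    """Stand-in for executing an instruction stream and collecting perf data."""
    random.seed(hash((tuple(sorted(params.items())), tuple(workload_weights))))
    return random.uniform(0.5, 1.0) * (1.2 if params["prefetch_depth"] == 4 else 1.0)

workload = (0.7, 0.2, 0.1)                       # e.g. read/write/atomic mix
param_space = {"prefetch_depth": [2, 4, 8], "open_page_policy": [True, False]}

best = None
for combo in itertools.product(*param_space.values()):
    params = dict(zip(param_space.keys(), combo))
    score = run_stream(workload, params)         # same workload, new parameters
    if best is None or score > best[1]:
        best = (params, score)
print("best chipset parameters for this workload:", best[0])
```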
  • Publication number: 20090193199
    Abstract: In a method of generating a cache directory to include a plurality of associativity classes, each associativity class includes an address tag including a plurality of address bits. Each address tag is configured to store a unique address to a specific location in a memory space. An amount of memory that is in an actually configured portion of the memory space is determined. A minimum number of bits necessary to address each memory location in the actually configured portion of the memory space is determined. Each address tag is configured in each associativity class to include the minimum number of bits necessary to address each memory location in the actually configured portion of the memory space. The cache directory is configured to include a maximum number of associativity classes per line in the cache directory.
    Type: Application
    Filed: January 24, 2008
    Publication date: July 30, 2009
    Inventors: Duane A. Averill, Herman L. Blackmon, Joseph A. Kirscht, David A. Shedivy
  • Publication number: 20090100229
    Abstract: In a method of managing a cache directory in a memory system, an original system address is presented to the cache directory when corresponding associativity data is allocated to an associativity class in the cache directory. The original system address is normalized by removing address space corresponding to a memory hole, thereby generating a normalized address. The normalized address is stored in the cache directory. The normalized address is de-normalized, thereby generating a de-normalized address, when the associativity data is cast out of the cache directory to make room for new associativity data. The de-normalized address is sent to the memory system for coherency management.
    Type: Application
    Filed: October 10, 2007
    Publication date: April 16, 2009
    Inventors: Duane A. Averill, Herman L. Blackmon, Joseph A. Kirscht, David A. Shedivy
  • Publication number: 20090094385
    Abstract: A technique for handling commands includes assigning respective first tags to ordered commands included in an ordered command stream. Respective second tags are then assigned to subsequent commands that follow an initial command (included in the ordered commands). Each of the respective second tags corresponds to one of the respective first tags that is associated with an immediate previous one of the ordered commands. The initial command is sent to an execution engine in a first cycle. At least one of the subsequent commands is sent to the execution engine prior to completion of execution of the initial command.
    Type: Application
    Filed: October 8, 2007
    Publication date: April 9, 2009
    Inventors: Ronald E. Freking, Ryan S. Haraden, David A. Shedivy, Kenneth M. Valk
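Publication 20090094385 chains commands with a second tag that names the first tag of the immediately preceding command, so later commands can be issued before earlier ones complete. A minimal sketch with assumed field names:

```python
# Minimal sketch: tag each ordered command and link it to its predecessor so
# an execution engine can accept later commands early while still knowing what
# must retire ahead of them.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    opcode: str
    tag: int                        # "first tag" assigned to this command
    prev_tag: Optional[int] = None  # "second tag": first tag of the predecessor

def tag_stream(opcodes):
    cmds, prev = [], None
    for i, op in enumerate(opcodes):
        cmds.append(Command(op, tag=i, prev_tag=prev))
        prev = i
    return cmds

stream = tag_stream(["write A", "write B", "write C"])
# All commands can be sent to the engine immediately; the engine simply retires
# a command only after the command named by prev_tag has retired.
for c in stream:
    print(c)
```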
  • Publication number: 20090089554
    Abstract: A method, system, and computer program product for tuning a set of chipset parameters to achieve optimal chipset performance under varying workload characteristics. A set of workload characteristics of a current workload type is determined. An instruction stream is generated using weighted parameters derived from the set of workload characteristics of the current workload type. A set of chipset parameters is generated and integrated within the instruction stream. The instruction stream is loaded to one or more processors and executed to collect and analyze performance data relating to the chipset's performance. The analysis includes comparing the set of performance data of a plurality of different instruction streams having the same set of workload characteristics. Each executed instruction stream is executed with at least one different combination of chipset parameters. A determination is made regarding which combination of chipset parameters provides the best performance data for the current workload.
    Type: Application
    Filed: October 1, 2007
    Publication date: April 2, 2009
    Inventors: Herman L. Blackmon, Joseph A. Kirscht, David A. Shedivy, Brian T. Vanderpool
  • Publication number: 20080301376
    Abstract: A memory controller receives a stream of DMA write operations and enqueues them in a queue enforcing a First-In First-Out (FIFO) order. Prior to processing a particular DMA write operation, the memory controller acquires coherency ownership of a target memory block and stores the result in a low latency array. In response to acquiring coherency ownership, this low latency array is updated to a coherency state signifying coherency ownership by the memory controller. In a pipelined array access, both the low latency array and the second array are accessed and if the lower latency second array indicates the particular coherency state with no collision indication, the memory controller signals that the particular DMA write operation can be performed, where the signaling occurs prior to results being obtained from the higher latency first array at the normal end of the array access pipeline.
    Type: Application
    Filed: May 31, 2007
    Publication date: December 4, 2008
    Inventors: Brian D. Allison, David A. Shedivy, Kenneth M. Valk, Brian T. Vanderpool
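Publication 20080301376 releases a queued DMA write as soon as a low-latency array confirms coherency ownership, without waiting for the slower directory lookup to finish. The following is a toy model only; the array names, states, and collision flag are assumptions, not the patented microarchitecture.

```python
# Toy model: DMA writes are queued FIFO; ownership acquired ahead of time is
# recorded in a small low-latency array, and a write gets an early "go" when
# that fast array confirms ownership with no collision.
from collections import deque

class MemoryController:
    def __init__(self):
        self.write_queue = deque()       # enforces first-in first-out order
        self.fast_array = {}             # addr -> (state, collision); low latency
        self.slow_directory = {}         # authoritative but higher latency

    def enqueue(self, addr, data):
        self.write_queue.append((addr, data))
        # Acquire coherency ownership ahead of processing and record it.
        self.fast_array[addr] = ("owned_by_mc", False)
        self.slow_directory[addr] = "owned_by_mc"

    def process_next(self):
        addr, data = self.write_queue.popleft()
        state, collision = self.fast_array.get(addr, (None, True))
        early_go = (state == "owned_by_mc") and not collision
        # In hardware the slow directory result would arrive cycles later;
        # early_go lets the write proceed before that result is available.
        return addr, early_go

mc = MemoryController()
mc.enqueue(0x100, b"a")
mc.enqueue(0x140, b"b")
assert mc.process_next() == (0x100, True)
```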
  • Publication number: 20080244190
    Abstract: In response to a memory access request missing in a central coherence directory of a data processing system, the central coherence directory issues a back-invalidate request and provides an indication of one or more processors possibly caching a copy of a victim memory block associated with a victim memory address. In response to the back-invalidate request, a memory controller initiates a lookup of coherency information for the victim memory address in the central coherence directory and, prior to receipt of the coherency information, speculatively issues a set of back-invalidate commands on one or more of multiple processor buses to invalidate any cached copy of the victim memory block. In response to receipt of the coherency information, the memory controller determines whether the set of speculatively issued back-invalidate commands was under-inclusive, and if not, removes a victim entry associated with the victim memory address from the central coherence directory.
    Type: Application
    Filed: March 30, 2007
    Publication date: October 2, 2008
    Inventors: David A. Shedivy, Brian T. Vanderpool
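Publication 20080244190 issues back-invalidate commands speculatively from sharer hints and later checks whether the speculation was under-inclusive once the real coherency information arrives. A sketch with invented bus names and directory contents:

```python
# Sketch under assumptions: broadcast back-invalidates on the hinted buses,
# then reconcile against the directory's coherency information when it arrives.
def speculative_back_invalidate(victim_addr, hinted_buses, directory):
    issued = set(hinted_buses)
    for bus in issued:
        print(f"back-invalidate {victim_addr:#x} on bus {bus} (speculative)")
    # Later: coherency information for the victim address becomes available.
    actual_sharers = set(directory.get(victim_addr, []))
    under_inclusive = not actual_sharers.issubset(issued)
    if not under_inclusive:
        directory.pop(victim_addr, None)   # safe to remove the victim entry
    return under_inclusive

directory = {0x8000: ["bus0", "bus2"]}
assert speculative_back_invalidate(0x8000, ["bus0", "bus1", "bus2"], directory) is False
assert 0x8000 not in directory             # victim entry removed after the check
```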
  • Publication number: 20070067644
    Abstract: A method, a computer program product and a memory control unit operate to store encrypted data in a memory. In response to receiving a memory write command having write data and a memory address, a determination is made if a corresponding region of the memory is specified to store encrypted data. If the corresponding region of the memory is specified to store encrypted data, the method and computer program product retrieve an encryption key predefined for use with the received memory address and retrieve a write counter associated with the write data, increment a value of the write counter, construct data so as to include at least a portion of the memory address, a current value of the write counter and a fill pattern, and apply the constructed data to a first input of an encryption algorithm and apply the retrieved encryption key to a second input of the encryption algorithm.
    Type: Application
    Filed: August 26, 2005
    Publication date: March 22, 2007
    Inventors: William Flynn, David Shedivy, William Posey, Mark Marson
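Publication 20070067644 constructs the encryption input from part of the memory address, a per-data-unit write counter, and a fill pattern. The sketch below stands in for that construction; the counter width, fill pattern, and SHA-256-derived keystream are assumptions replacing the unspecified encryption algorithm.

```python
# Illustrative sketch only: build a block from (address portion, write counter,
# fill pattern), combine it with a per-region key, and use the result to
# transform the write data.
import hashlib

write_counters = {}                         # per-data-unit write counters
region_keys = {0: b"region-zero-key"}       # key predefined per encrypted region

def encrypt_write(addr, data: bytes, region=0):
    # Increment the write counter associated with this data unit.
    ctr = write_counters.get(addr, 0) + 1
    write_counters[addr] = ctr
    # Construct a block from part of the address, the counter, and a fill pattern.
    constructed = addr.to_bytes(8, "little") + ctr.to_bytes(8, "little") + b"\x00" * 16
    # Stand-in for the encryption engine: derive a keystream from (key, block).
    keystream = hashlib.sha256(region_keys[region] + constructed).digest()
    return bytes(d ^ k for d, k in zip(data, keystream))

c1 = encrypt_write(0x1000, b"secret data bytes")
c2 = encrypt_write(0x1000, b"secret data bytes")
assert c1 != c2    # the counter makes repeated writes of the same data differ
```

The final assertion shows why the counter matters: two writes of identical plaintext to the same address produce different ciphertexts.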
  • Publication number: 20070050641
    Abstract: In a first aspect, a first cryptography method is provided. The first method includes the steps of (1) in response to receiving a request to perform a first operation on data in a first memory cacheline, accessing data associated with the first memory cacheline; (2) performing cryptography on data of the first memory cacheline when necessary; and (3) speculatively accessing data associated with a second memory cacheline based on the first memory cacheline before receiving a request to perform an operation on data in the second memory cacheline. Numerous other aspects are provided.
    Type: Application
    Filed: August 26, 2005
    Publication date: March 1, 2007
    Applicant: International Business Machines Corporation
    Inventors: William Flynn, David Shedivy
  • Publication number: 20070050642
    Abstract: An electrical circuit includes a first interface for coupling to a data processor bus; a second interface for coupling to a memory; at least one data encryption engine and storage for storing a data structure specifying, for individual ones of a plurality of partitions of the memory, whether use of the at least one encryption engine for data read operations and data write operations is enabled for the associated partition and, if it is, information descriptive of at least one input to the encryption engine for that partition, comprising information related to a plurality of counters individual ones of which count write operations to an individual one of a plurality of data units storable in that partition.
    Type: Application
    Filed: August 26, 2005
    Publication date: March 1, 2007
    Inventors: William Flynn, David Shedivy, William Posey, Mark Marson, Brian McWhirter, Scott Mishima
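Publication 20070050642 keeps a per-partition data structure recording whether the encryption engine is used for reads and writes and, if so, which inputs (including per-data-unit write counters) feed it. A minimal sketch of such a structure follows; the field names and sizes are assumptions made for the example.

```python
# Illustrative per-partition control structure: whether encryption is enabled
# for the partition and, if so, the key index and write counters that feed the
# encryption engine.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PartitionCryptoEntry:
    encrypt_reads: bool
    encrypt_writes: bool
    key_index: int                      # which engine key to use, if enabled
    write_counters: List[int] = field(default_factory=list)  # one per data unit

# One entry per memory partition, indexed by partition number.
partition_table = {
    0: PartitionCryptoEntry(encrypt_reads=True, encrypt_writes=True,
                            key_index=3, write_counters=[0] * 1024),
    1: PartitionCryptoEntry(encrypt_reads=False, encrypt_writes=False, key_index=0),
}

def on_write(partition, unit_index):
    entry = partition_table[partition]
    if entry.encrypt_writes:
        entry.write_counters[unit_index] += 1   # counter feeds the engine input
        return entry.key_index
    return None                                  # bypass the encryption engine

assert on_write(0, 7) == 3        # encrypted partition: count the write, use key 3
assert on_write(1, 7) is None     # plaintext partition: engine bypassed
```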