Patents by Inventor Scot Rider
Scot Rider has filed patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250103494
Abstract: Systems and techniques for concurrently performing a fill and byte merge operation in a data processing system are described. An example technique includes receiving a memory access request from a user interface. A determination is made that the memory access request has encountered a cache miss within a cache directory in the computing system. In response to the determination, a fetch request is transmitted to an upper level cache within the computing system for a cache line associated with the memory access request. Dirty portions of the cache line are concurrently written and merged, based on the memory access request, with fill data of the cache line obtained from the upper level cache into a line buffer of a line engine within the computing system.
Type: Application
Filed: September 22, 2023
Publication date: March 27, 2025
Inventor: Scot RIDER
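The byte-merge step this abstract describes can be sketched in a few lines of Python. This is an illustrative model only, not the patented hardware: `merge_fill`, the byte-level dirty mask, and the data shapes are all assumptions made here for clarity.

```python
def merge_fill(fill_data: bytes, store_data: bytes, dirty_mask: list[bool]) -> bytes:
    """Merge store bytes into fetched fill data inside a line buffer:
    bytes marked dirty by the memory access request win over fill bytes."""
    return bytes(s if dirty else f
                 for f, s, dirty in zip(fill_data, store_data, dirty_mask))


# Fill data arrives from the upper level cache; the request dirtied bytes 0 and 2.
merged = merge_fill(b"\x11\x22\x33\x44", b"\xaa\xbb\xcc\xdd",
                    [True, False, True, False])
```

The point of doing the write and the merge concurrently, per the abstract, is that neither the store data nor the fill data has to wait for the other before the line buffer holds the final cache line.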
-
Publication number: 20250103507
Abstract: Systems and techniques for controlling allocation of a cache line when performing a memory access request are described. An example technique includes receiving a memory access request from a user interface. The memory access request includes an indication of whether a cache line associated with the memory access request is to be allocated into a lower level cache within a computing system. The cache line is written into a line buffer within a line engine in the computing system in response to receiving the memory access request. The cache line is moved from the line buffer to an upper level cache in the computing system, without involving the lower level cache, when the memory access request indicates the cache line is not to be allocated into the lower level cache.Type: Application
Filed: September 22, 2023
Publication date: March 27, 2025
Inventor: Scot RIDER
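The allocate/no-allocate routing can be modeled as below. A minimal sketch under assumed names (`LineEngine`, `handle_request`) with dict-backed caches standing in for real cache arrays; the actual mechanism is hardware, not Python.

```python
class LineEngine:
    """Toy model: a line buffer that routes a cache line either into the
    lower level cache or straight to the upper level cache."""

    def __init__(self):
        self.lower_cache = {}  # stands in for the lower level cache
        self.upper_cache = {}  # stands in for the upper level cache

    def handle_request(self, addr: int, line: bytes, allocate_lower: bool):
        line_buffer = line  # the cache line lands in the line buffer first
        if allocate_lower:
            self.lower_cache[addr] = line_buffer
        else:
            # the request said "do not allocate": bypass the lower cache
            self.upper_cache[addr] = line_buffer


eng = LineEngine()
eng.handle_request(0x100, b"data", allocate_lower=False)
```

The benefit the abstract implies is that streaming or one-shot data never pollutes the lower level cache when the requester knows it will not be reused.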
-
Publication number: 20250077430
Abstract: Techniques and apparatus for performing real-time tracking and reporting of snoop activity within a data processing system are described. An example technique includes performing a local snoop operation for multiple processors within a cluster. A snoop tracing message with information associated with the local snoop operation is generated upon determining that the local snoop operation is successful. The snoop tracing message is transmitted to a storage device. Another example technique includes determining a location in memory of a computing system where a fetch request resolves. Information indicating the location in memory of the computing system where the fetch request resolves is encoded within a fetch response. The fetch response is transmitted to a processor. One or more counters within the processor that are used to track snoop activity are incremented based on the encoded information.Type: Application
Filed: August 28, 2023
Publication date: March 6, 2025
Inventors: Scot RIDER, Timothy BRONSON, Clinton E. BUBB, Craig R. WALTERS
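The counter-increment half of this abstract can be illustrated as follows. The two-bit `SOURCE_CODES` encoding is purely hypothetical; the patent does not specify the encoding, only that the fetch response carries the resolve location and that counters are bumped from it.

```python
from collections import Counter

# Hypothetical encoding of "where did this fetch resolve?"
SOURCE_CODES = {0b00: "local_cache", 0b01: "remote_cache", 0b10: "memory"}


def account_fetch_response(counters: Counter, encoded_source: int) -> None:
    """Increment the snoop-activity counter selected by the location
    code that was encoded into the fetch response."""
    counters[SOURCE_CODES[encoded_source]] += 1


snoop_counters = Counter()
account_fetch_response(snoop_counters, 0b01)  # resolved in a remote cache
```

Keeping the counters in the processor, fed by a code already riding on the fetch response, is what makes the tracking "real-time" with no extra bus traffic.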
-
Publication number: 20250053514
Abstract: Techniques and apparatus for maintaining cache coherency in a data processing system are described. An example technique includes receiving a fetch request from a processor of a plurality of processors in a cluster. A local snoop operation is performed for the cluster in response to the fetch request and without involving an upper level cache associated with the cluster. A fetch response is sent to the processor based on the local snoop operation. Another technique includes receiving a fetch request from a processor of a plurality of processors in a cluster. A snoop request is sent to trigger a local snoop operation for the cluster, in response to the fetch request. A snoop response including an indication that at least one processor in the cluster is in an offline state is received in response to the snoop request.Type: Application
Filed: August 8, 2023
Publication date: February 13, 2025
Inventor: Scot RIDER
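The cluster-local snoop described here can be sketched as below. The cluster representation (a list of dicts with an `offline` flag and a `cache` set) is an assumption for illustration; the key property from the abstract is that only peers in the cluster are queried, never the upper level cache.

```python
def local_snoop(cluster: list[dict], addr: int) -> list[str]:
    """Snoop only the peer processors in the cluster for `addr`;
    the upper level cache is deliberately never consulted."""
    responses = []
    for cpu in cluster:
        if cpu.get("offline"):
            responses.append("offline")  # reported back in the snoop response
        elif addr in cpu["cache"]:
            responses.append("hit")
        else:
            responses.append("miss")
    return responses


cluster = [{"offline": False, "cache": {0x40}},
           {"offline": True, "cache": set()}]
```

Resolving the fetch inside the cluster avoids a round trip to the shared upper level cache, and the explicit "offline" indication lets the requester make progress rather than wait on a dead peer.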
-
Publication number: 20230344667
Abstract: Embodiments for providing single-producer-multiple consumers synchronization and multicast data transfer by a processor are disclosed. Multicast data transfer is synchronized based on an identification tag and a request from each one of a plurality of recipients for the multicast data. The multicast data is transferred to each of the plurality of recipients based on the identification tag, the request from each one of the plurality of recipients, and a list of the plurality of recipients.Type: Application
Filed: April 22, 2022
Publication date: October 26, 2023
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Vijayalakshmi SRINIVASAN, Scot RIDER, Swagath VENKATARAMANI, Kailash GOPALAKRISHNAN, Sunil K. SHUKLA, Brian William CURRAN, Martin A. LUTZ
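The three ingredients named in the abstract (an identification tag, a per-recipient request, and a recipient list) can be modeled with a small class. `MulticastBuffer` and its method names are assumptions made here; the real mechanism is a processor feature, not a Python object.

```python
class MulticastBuffer:
    """Single producer, multiple consumers: data tagged with an id is
    handed out only to listed recipients that explicitly request it."""

    def __init__(self, tag: str, recipients: list[str]):
        self.tag = tag
        self.pending = set(recipients)  # recipients that have not yet requested
        self.data = None

    def produce(self, data: bytes) -> None:
        self.data = data  # single producer writes once

    def request(self, consumer: str, tag: str):
        """A consumer's request is honored only if the tag matches."""
        if tag != self.tag or self.data is None:
            return None
        self.pending.discard(consumer)
        return self.data

    def complete(self) -> bool:
        return not self.pending  # every listed recipient has been served
```

Tag matching plus the recipient list is what synchronizes the transfer: the producer writes once, and the buffer knows exactly when every consumer on the list has pulled its copy.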
-
Patent number: 11604653
Abstract: Provided are embodiments for a computer-implemented method, system and computer program product for identifying dependencies in a control sequence. Embodiments include receiving a control block that comprises a first error dependency (EDEP) level, maintaining the first EDEP level, and determining whether the received control block was successfully executed. Embodiments also include receiving a subsequent control block that comprises a second EDEP level, comparing the first EDEP level and the second EDEP level, and providing the subsequent control block for execution based at least in part on the successful execution of the received control block, and on the second EDEP level being less than or equal to the first EDEP level.Type: Grant
Filed: December 11, 2020
Date of Patent: March 14, 2023
Assignee: International Business Machines Corporation
Inventors: Scot Rider, Marcel Schaal
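The gating rule in this abstract reduces to a small predicate, sketched here with assumed names; the claims cover a full method and system, not just this comparison.

```python
def may_execute(first_edep: int, first_succeeded: bool, second_edep: int) -> bool:
    """A subsequent control block is released for execution only if the
    prior block executed successfully AND the subsequent block's error
    dependency (EDEP) level is <= the maintained level."""
    return first_succeeded and second_edep <= first_edep


# A level-2 block may follow a successful level-3 block:
ok = may_execute(first_edep=3, first_succeeded=True, second_edep=2)
```

Encoding the dependency as a comparable level, rather than an explicit block-to-block link, lets the hardware decide release eligibility with a single compare.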
-
Publication number: 20220188119
Abstract: Provided are embodiments for a computer-implemented method, system and computer program product for identifying dependencies in a control sequence. Embodiments include receiving a control block that comprises a first error dependency (EDEP) level, maintaining the first EDEP level, and determining whether the received control block was successfully executed. Embodiments also include receiving a subsequent control block that comprises a second EDEP level, comparing the first EDEP level and the second EDEP level, and providing the subsequent control block for execution based at least in part on the successful execution of the received control block, and on the second EDEP level being less than or equal to the first EDEP level.Type: Application
Filed: December 11, 2020
Publication date: June 16, 2022
Inventors: Scot Rider, Marcel Schaal
-
Patent number: 11288001
Abstract: Aspects include receiving a request from a requesting system to move data from a source memory on a source system to a target memory on a target system. The receiving is at a first hardware engine configured to access the source memory and the target memory. In response to receiving the request, the first hardware engine reads the data from the source memory and writes the data to the target memory. In response to the reading being completed, the first hardware engine transmits a data clearing request to a second hardware engine that is configured to access the source memory. The data clearing request specifies a location of the data in the source memory to be cleared.Type: Grant
Filed: December 4, 2020
Date of Patent: March 29, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Scot Rider, Marcel Schaal
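The move-then-clear sequencing can be sketched as two cooperating functions. Dicts stand in for the two memories, and `clear_engine` models the second hardware engine; all names are illustrative assumptions.

```python
def clear_engine(mem: dict, addrs: list[int]) -> None:
    """Second engine: zero the specified source locations."""
    for a in addrs:
        mem[a] = 0


def move_then_clear(source: dict, target: dict, addrs: list[int], clearer) -> None:
    """First engine: read from source, write to target, then hand the
    clearing work off to the second engine once the read is complete."""
    data = [source[a] for a in addrs]       # read completes first
    for a, d in zip(addrs, data):
        target[a] = d                       # write to target memory
    clearer(source, addrs)                  # clearing request to second engine


src, dst = {0: 7, 1: 8}, {}
move_then_clear(src, dst, [0, 1], clear_engine)
```

Delegating the clear to a second engine lets the mover return as soon as the copy is safe, instead of stalling on the (slower) source-side scrub.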
-
Publication number: 20080112314
Abstract: Method, system and program product are provided for packet flow control for a switching node of a data transfer network. The method includes actively managing space allocations in a central queue of a switching node allotted to the ports of the switching node based on the amount of unused space currently available in the central queue and an amount of currently-vacant storage space in a storage device of a port. In a further aspect, the method includes separately tracking unallocated space and vacated allocated space, which had been used to buffer packets received by the ports but were vacated since a previous management update due to a packet being removed from the central queue. Each port is offered vacated space that is currently allocated to that port and a quantity of the currently unallocated space in the central queue to distribute to one or more virtual lanes of the port.Type: Application
Filed: January 15, 2008
Publication date: May 15, 2008
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Scot RIDER
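The per-port space offer described here can be sketched numerically. The equal split of unallocated space across ports is purely an illustrative assumption; the abstract does not specify the distribution policy, only that each port is offered its own vacated space plus some share of the unallocated pool.

```python
def space_offer(unallocated: int, vacated_by_port: dict[str, int]) -> dict[str, int]:
    """Offer each port its own vacated (still-allocated) space plus a
    share of the central queue's unallocated space. The equal split
    here is an assumption, not the patented policy."""
    share = unallocated // len(vacated_by_port)
    return {port: vacated + share for port, vacated in vacated_by_port.items()}


# 12 units unallocated; ports have vacated 2, 0 and 5 units respectively.
offers = space_offer(12, {"p0": 2, "p1": 0, "p2": 5})
```

Tracking vacated space separately from unallocated space matters because vacated space is already owned by a port and can be re-offered to it immediately, without rebalancing the whole queue.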
-
Publication number: 20070280248
Abstract: A method is provided for reducing size of memory required for a switching node's forwarding table by employing forwarding tables of different types to map received data packets addressed to downstream nodes and upstream nodes to appropriate output ports of the switching node. The method includes receiving a data packet at a data transfer node of a network and selecting a forwarding table from multiple types of forwarding tables accessible by the node based on an attribute associated with the received data packet, and mapping the data packet to an output port of the node utilizing the forwarding table selected from the multiple types of forwarding tables based on the attribute associated with the packet.Type: Application
Filed: August 20, 2007
Publication date: December 6, 2007
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jay Herring, Scot Rider
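Selecting a forwarding table by a packet attribute can be shown in a few lines. The `direction` attribute and the dict-based tables are assumptions for illustration; the space saving comes from each table type only needing entries for its own direction.

```python
def forward(packet: dict, downstream_table: dict, upstream_table: dict) -> int:
    """Pick the forwarding table from an attribute of the packet itself,
    then map the destination to an output port via that table."""
    table = downstream_table if packet["direction"] == "down" else upstream_table
    return table[packet["dest"]]


# Downstream and upstream tables can be much smaller than one combined table.
down, up = {5: 2}, {5: 7}
port = forward({"direction": "down", "dest": 5}, down, up)
```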
-
Publication number: 20070248096
Abstract: Method, system and program product are provided for reducing size of memory required for a switching node's forwarding table by employing forwarding tables of different types to map received data packets addressed to downstream nodes and upstream nodes to appropriate output ports of the switching node. The method includes receiving a data packet at a data transfer node of a network and selecting a forwarding table from multiple types of forwarding tables accessible by the node based on an attribute associated with the received data packet, and mapping the data packet to an output port of the node utilizing the forwarding table selected from the multiple types of forwarding tables based on the attribute associated with the packet.Type: Application
Filed: June 21, 2007
Publication date: October 25, 2007
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jay Herring, Scot Rider
-
Publication number: 20050226145
Abstract: Method, system and program product are provided for packet flow control for a switching node of a data transfer network. The method includes actively managing space allocations in a central queue of a switching node allotted to the ports of the switching node based on the amount of unused space currently available in the central queue. In a further aspect, the method includes separately tracking unallocated space and vacated allocated space, which had been used to buffer packets received by the ports but were vacated since a previous management update due to a packet being removed from the central queue. Each port is offered vacated space that is currently allocated to that port and a quantity of the currently unallocated space in the central queue to distribute to one or more virtual lanes of the port.Type: Application
Filed: April 9, 2004
Publication date: October 13, 2005
Applicant: International Business Machines Corporation
Inventors: Derrick Garmire, Jay Herring, Ronald Linton, Scot Rider
-
Publication number: 20050226146
Abstract: Method, system and program product are provided for packet flow control for a switching node of a data transfer network. The method includes actively managing space allocations in a central queue of a switching node allotted to the ports of the switching node based on the amount of unused space currently available in the central queue and an amount of currently-vacant storage space in a storage device of a port. In a further aspect, the method includes separately tracking unallocated space and vacated allocated space, which had been used to buffer packets received by the ports but were vacated since a previous management update due to a packet being removed from the central queue. Each port is offered vacated space that is currently allocated to that port and a quantity of the currently unallocated space in the central queue to distribute to one or more virtual lanes of the port.Type: Application
Filed: April 9, 2004
Publication date: October 13, 2005
Applicant: International Business Machines Corporation
Inventor: Scot Rider
-
Publication number: 20050149600
Abstract: Method, system and program product are provided for reducing size of memory required for a switching node's forwarding table by employing forwarding tables of different types to map received data packets addressed to downstream nodes and upstream nodes to appropriate output ports of the switching node. The method includes receiving a data packet at a data transfer node of a network and selecting a forwarding table from multiple types of forwarding tables accessible by the node based on an attribute associated with the received data packet, and mapping the data packet to an output port of the node utilizing the forwarding table selected from the multiple types of forwarding tables based on the attribute associated with the packet.Type: Application
Filed: December 17, 2003
Publication date: July 7, 2005
Applicant: International Business Machines Corporation
Inventors: Jay Herring, Scot Rider
-
Publication number: 20050120257
Abstract: A method and system are provided for processing data packets at a data-transfer network node. The method and system include determining a length of time that a packet has been buffered at the node by associating a timer with each data packet received and buffered in the node's central queue. The central queue subsequently reads the associated timer to determine a length of time that a data packet has been buffered prior to the data packet being transmitted by the node. If a packet has been buffered too long, then the queue discards the packet. Otherwise, the queue permits transmission of the packet. The amount of circuitry in the switching node's central queue is reduced by locating the packet timers in timer logic external to the queue.Type: Application
Filed: December 2, 2003
Publication date: June 2, 2005
Applicant: International Business Machines Corporation
Inventors: Derrick Garmire, Scot Rider
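The discard-or-transmit decision at dequeue time reduces to a single age check, sketched here with assumed names and units (the time base and threshold are not specified in the abstract):

```python
def on_dequeue(enqueue_time: int, now: int, max_age: int) -> str:
    """Read the packet's externally-kept timer at dequeue: drop the
    packet if it has been buffered longer than max_age, else send it."""
    return "discard" if (now - enqueue_time) > max_age else "transmit"


# A packet enqueued at t=100, dequeued at t=150, with a 40-tick budget:
verdict = on_dequeue(enqueue_time=100, now=150, max_age=40)
```

The hardware insight in the abstract is placement, not the comparison itself: the timers live in logic external to the central queue, so the queue's own circuitry stays small.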