Patents by Inventor Christopher Joseph Corsi

Christopher Joseph Corsi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11645113
    Abstract: In some examples, a system receives a first unit of work to be scheduled in the system that includes a plurality of collections of processing units to execute units of work, where each respective collection of processing units of the plurality of collections of processing units is associated with a corresponding scheduling queue. The system selects, for the first unit of work according to a first criterion, candidate collections from among the plurality of collections of processing units, and enqueues the first unit of work in the scheduling queue associated with a selected collection of processing units that is selected, according to a selection criterion, from among the candidate collections. (An illustrative sketch of this two-stage selection appears after this listing.)
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: May 9, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Christopher Joseph Corsi, Prashanth Soundarapandian, Matti Antero Vanninen, Siddharth Munshi
  • Publication number: 20220350648
    Abstract: In some examples, a system receives a first unit of work to be scheduled in the system that includes a plurality of collections of processing units to execute units of work, where each respective collection of processing units of the plurality of collections of processing units is associated with a corresponding scheduling queue. The system selects, for the first unit of work according to a first criterion, candidate collections from among the plurality of collections of processing units, and enqueues the first unit of work in the scheduling queue associated with a selected collection of processing units that is selected, according to a selection criterion, from among the candidate collections.
    Type: Application
    Filed: April 30, 2021
    Publication date: November 3, 2022
    Inventors: Christopher Joseph Corsi, Prashanth Soundarapandian, Matti Antero Vanninen, Siddharth Munshi
  • Patent number: 10162686
    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler. (A toy sketch of the intra-LLC and inter-LLC balancing appears after this listing.)
    Type: Grant
    Filed: November 8, 2017
    Date of Patent: December 25, 2018
    Assignee: NetApp, Inc.
    Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa
  • Publication number: 20180067784
    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
    Type: Application
    Filed: November 8, 2017
    Publication date: March 8, 2018
    Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa
  • Patent number: 9842008
    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: December 12, 2017
    Assignee: NetApp, Inc.
    Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa
  • Publication number: 20170315878
    Abstract: A technique efficiently manages a snapshot and/or clone by a volume layer of a storage input/output (I/O) stack executing on one or more nodes of a cluster. According to the technique, an ownership attribute is included in the metadata entries of a dense tree data structure for extents, which eliminates otherwise-needed reference count operations for snapshots and reduces reference count operations for clones. Illustratively, a copy of a parent dense tree level created by a copy-on-write (COW) operation is referred to as a "derived level", whereas the existing level of the parent dense tree is referred to as a "source level". The source level may be persistently linked to the derived level by keeping "level identifying key information" in a respective dense tree source level header. Moreover, two different types of dense tree derivations are defined: a derive relationship and a reverse-derive relationship. (A sketch of the source/derived linkage appears after this listing.)
    Type: Application
    Filed: April 29, 2016
    Publication date: November 2, 2017
    Inventors: Prahlad Purohit, Ling Zheng, Christopher Joseph Corsi
  • Publication number: 20170315740
    Abstract: A technique paces and balances a flow of messages related to processing of input/output (I/O) requests between subsystems, such as layers of a storage I/O stack, of one or more nodes of a cluster. The I/O requests may be directed to externally-generated user data, e.g., write requests generated by a host coupled to the cluster, and internally-generated metadata, e.g., write and delete requests generated by a volume layer of the storage I/O stack. The user data (and metadata) may be organized as an arbitrary number of variable-length extents of one or more host-visible logical units (LUNs) served by the nodes. The metadata may include mappings from host-visible logical block address ranges (i.e., offset ranges) of a LUN to extent keys, which reference locations of the extents stored on storage devices, such as solid state drives (SSDs), of a storage array coupled to the nodes. (A credit-based pacing sketch appears after this listing.)
    Type: Application
    Filed: April 29, 2016
    Publication date: November 2, 2017
    Inventors: Christopher Joseph Corsi, Anshul Pundir, Michael L. Federwisch, Zhen Zeng
  • Publication number: 20160246655
    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 25, 2016
    Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa
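
Illustrative sketches of the listed techniques

The two-stage selection in the abstracts of patent 11645113 and publication 20220350648 can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation: the NUMA-affinity "first criterion", the least-loaded "selection criterion", and every name below are assumptions made for the example.

    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Collection:
        # A collection of processing units and its corresponding scheduling queue.
        name: str
        numa_node: int                                # hypothetical locality attribute
        queue: deque = field(default_factory=deque)   # this collection's scheduling queue

    def schedule(work, collections, preferred_numa):
        # First criterion (assumed): prefer collections on the work's NUMA node.
        candidates = [c for c in collections if c.numa_node == preferred_numa]
        if not candidates:
            candidates = list(collections)            # fall back to all collections
        # Selection criterion (assumed): enqueue on the least-loaded candidate.
        target = min(candidates, key=lambda c: len(c.queue))
        target.queue.append(work)
        return target

    pools = [Collection(f"cores-{i}", numa_node=i % 2) for i in range(4)]
    print(schedule("first-unit-of-work", pools, preferred_numa=0).name)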
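
The intra-LLC versus inter-LLC split in the cache-affinity abstracts (patents 10162686 and 9842008 and their companion publications) can be illustrated with a toy two-tier balancer. The imbalance threshold, the one-task migration step, and the data layout are all assumptions; the point is only that migration inside an LLC domain is preferred because it preserves cache affinity.

    from dataclasses import dataclass

    @dataclass
    class Core:
        runnable: int = 0               # queued non-blocking services on this core

    @dataclass
    class LLCDomain:                    # cores sharing one last level cache
        cores: list

        def load(self):
            return sum(c.runnable for c in self.cores)

    def balance(domains, threshold=2):
        # Intra-LLC pass: shift work between cores sharing an LLC; cheap,
        # since the migrated work keeps its cached data warm.
        for dom in domains:
            busiest = max(dom.cores, key=lambda c: c.runnable)
            idlest = min(dom.cores, key=lambda c: c.runnable)
            if busiest.runnable - idlest.runnable >= threshold:
                busiest.runnable -= 1
                idlest.runnable += 1
        # Inter-LLC pass: only if whole domains remain imbalanced, migrate
        # across LLCs and accept the cache-refill cost.
        src_dom = max(domains, key=LLCDomain.load)
        dst_dom = min(domains, key=LLCDomain.load)
        if src_dom.load() - dst_dom.load() >= threshold:
            max(src_dom.cores, key=lambda c: c.runnable).runnable -= 1
            min(dst_dom.cores, key=lambda c: c.runnable).runnable += 1

    domains = [LLCDomain([Core(5), Core(1)]), LLCDomain([Core(0), Core(0)])]
    balance(domains)
    print([[c.runnable for c in d.cores] for d in domains])   # [[3, 2], [1, 0]]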
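
For publication 20170315878, the source-level/derived-level linkage and the per-entry ownership attribute can be sketched as follows. The field names and the rule that derived copies start out unowned are assumptions; the point the abstract makes is that a copy-on-write of a level need not update per-extent reference counts.

    from dataclasses import dataclass, field

    @dataclass
    class MetadataEntry:
        extent_key: str
        owned: bool = True              # ownership attribute for this extent

    @dataclass
    class DenseTreeLevel:
        level_key: str                                # "level identifying key information"
        entries: dict = field(default_factory=dict)
        derived: "DenseTreeLevel" = None              # derive relationship (source -> derived)
        source: "DenseTreeLevel" = None               # reverse-derive relationship

    def cow_level(src, new_key):
        # Copy-on-write: the derived level shares the extents but marks its
        # entries unowned, so no reference counts change for the snapshot.
        drv = DenseTreeLevel(new_key, source=src)
        for offset, e in src.entries.items():
            drv.entries[offset] = MetadataEntry(e.extent_key, owned=False)
        src.derived = drv               # source level header records the derived level
        return drv

    parent = DenseTreeLevel("L0@vol1", entries={0: MetadataEntry("extent-abc")})
    snap = cow_level(parent, "L0@snap1")
    assert parent.entries[0].owned and not snap.entries[0].owned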
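
The pacing and balancing described in publication 20170315740 resembles credit-based scheduling between message classes. The sketch below is one plausible shape, assuming per-round credits; the actual mechanism, message classes, and credit values are not specified by the abstract.

    from collections import deque

    class PacedQueue:
        def __init__(self, credits):
            self.credits = credits      # messages this class may forward per round
            self.pending = deque()

    def pump(queues):
        # One pacing round: each message class forwards at most its credit
        # allotment, so a burst of user writes cannot starve metadata requests.
        forwarded = []
        for q in queues.values():
            for _ in range(min(q.credits, len(q.pending))):
                forwarded.append(q.pending.popleft())
        return forwarded

    queues = {"user-write": PacedQueue(8), "meta-write": PacedQueue(4), "meta-delete": PacedQueue(2)}
    for i in range(20):
        queues["user-write"].pending.append(f"write-{i}")
    queues["meta-delete"].pending.append("delete-extent")
    print(pump(queues))                 # at most 8 user writes plus the metadata delete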