Patents by Inventor Christopher Joseph Corsi
Christopher Joseph Corsi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240411579
  Abstract: In some examples, a system creates a partition map that maps partitions of a data bucket to respective virtual processors executed in a cluster of computer nodes. Responsive to a request to access a data object in the data bucket, the system identifies which partition contains metadata for the data object based on a key associated with the data object, and identifies, based on the identified partition and using the partition map, a virtual processor that has the metadata for the data object. Responsive to a migration of a first virtual processor from a first to a second computer node, the system updates a virtual processor-computer node map that maps the respective virtual processors to corresponding computer nodes of the cluster of computer nodes, where the partition map remains unchanged in response to the migration of the first virtual processor from the first computer node to the second computer node.
  Type: Application
  Filed: June 12, 2023
  Publication date: December 12, 2024
  Inventors: Vinay Devadas, Srikant Varadan, Christopher Joseph Corsi, Shrikant Pramod Mether
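The two-map indirection the abstract describes can be illustrated with a short sketch. Everything below (the PartitionMaps class, NUM_PARTITIONS, the SHA-256 key hash) is an invented stand-in, not the patent's actual design; the point is that a key always resolves to the same partition and virtual processor, while a migration rewrites only the virtual processor-to-node map.

```python
import hashlib

NUM_PARTITIONS = 8  # illustrative; the abstract does not fix a count

class PartitionMaps:
    def __init__(self, partition_to_vproc, vproc_to_node):
        # Stable map: partition index -> virtual processor id.
        self.partition_to_vproc = partition_to_vproc
        # Volatile map: virtual processor id -> computer node. Only this
        # map changes when a virtual processor migrates between nodes.
        self.vproc_to_node = vproc_to_node

    def locate(self, object_key: str):
        # Derive the partition from the object's key, then follow both maps.
        digest = hashlib.sha256(object_key.encode()).digest()
        partition = int.from_bytes(digest[:4], "big") % NUM_PARTITIONS
        vproc = self.partition_to_vproc[partition]
        return vproc, self.vproc_to_node[vproc]

    def migrate(self, vproc: int, new_node: str):
        # Migration rewrites only the vproc-to-node map; the partition
        # map is untouched, so key-to-partition routing stays stable.
        self.vproc_to_node[vproc] = new_node

maps = PartitionMaps({p: p % 2 for p in range(NUM_PARTITIONS)},
                     {0: "node-a", 1: "node-b"})
print(maps.locate("bucket/object-1"))
maps.migrate(1, "node-c")   # partition map unchanged
print(maps.locate("bucket/object-1"))
```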
- Publication number: 20240354144
  Abstract: In some examples, a first computing node receives a write request, where the first computing node is part of a collection of multiple computing nodes, and a plurality of virtual processors are executable in the multiple computing nodes to manage access of data in a shared storage system. In response to the write request, a first virtual processor at the first computing node sends, to a second virtual processor, a request for metadata stored by the second virtual processor. The first virtual processor updates an intent structure in a nonvolatile memory with information indicating an intent to write data for the write request. In response to the metadata received at the first virtual processor from the second virtual processor, a write of the data is initiated to cause storage of the data in the shared storage system.
  Type: Application
  Filed: April 24, 2023
  Publication date: October 24, 2024
  Inventors: Vinay Devadas, Srikant Varadan, Christopher Joseph Corsi
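A rough sketch of the write path the abstract describes follows; all names (VirtualProcessor, intent_log, get_metadata) are invented for illustration. The order of operations is the substance: record the intent in nonvolatile memory, fetch metadata from the owning virtual processor, then initiate the write to shared storage.

```python
class VirtualProcessor:
    def __init__(self, vproc_id, intent_log, peers, shared_storage):
        self.vproc_id = vproc_id
        self.intent_log = intent_log    # stand-in for nonvolatile memory
        self.peers = peers              # vproc id -> VirtualProcessor
        self.storage = shared_storage   # stand-in for the shared storage system

    def get_metadata(self, key):
        # Stand-in for metadata this virtual processor stores for the key.
        return {"location": "blk:" + key}

    def handle_write(self, key, data, metadata_owner):
        # 1. Record an intent entry in nonvolatile memory so the write
        #    can be recovered if this node fails mid-flight.
        self.intent_log.append({"vproc": self.vproc_id, "key": key, "state": "intent"})
        # 2. Request the metadata from the virtual processor that holds it.
        metadata = self.peers[metadata_owner].get_metadata(key)
        # 3. With the metadata in hand, initiate the write to shared storage.
        self.storage[metadata["location"]] = data
        self.intent_log.append({"vproc": self.vproc_id, "key": key, "state": "committed"})

storage, log = {}, []
vp2 = VirtualProcessor(2, log, {}, storage)
vp1 = VirtualProcessor(1, log, {2: vp2}, storage)
vp1.handle_write("obj-7", b"payload", metadata_owner=2)
```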
- Patent number: 11645113
  Abstract: In some examples, a system receives a first unit of work to be scheduled in the system that includes a plurality of collections of processing units to execute units of work, where each respective collection of processing units of the plurality of collections of processing units is associated with a corresponding scheduling queue. The system selects, for the first unit of work according to a first criterion, candidate collections from among the plurality of collections of processing units, and enqueues the first unit of work in the scheduling queue associated with a selected collection of processing units that is selected, according to a selection criterion, from among the candidate collections.
  Type: Grant
  Filed: April 30, 2021
  Date of Patent: May 9, 2023
  Assignee: Hewlett Packard Enterprise Development LP
  Inventors: Christopher Joseph Corsi, Prashanth Soundarapandian, Matti Antero Vanninen, Siddharth Munshi
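The two-stage selection reads naturally as filter-then-pick. Below is a minimal sketch under assumed criteria (a locality test as the first criterion, least load as the selection criterion); the criteria, class, and field names are illustrative, not taken from the claims.

```python
from collections import deque

class Collection:
    """One collection of processing units with its own scheduling queue."""
    def __init__(self, name, load, numa_node):
        self.name = name
        self.load = load            # illustrative load metric
        self.numa_node = numa_node
        self.queue = deque()

def schedule(work, collections, first_criterion, selection_criterion):
    # Stage 1: narrow to candidate collections using the first criterion.
    candidates = [c for c in collections if first_criterion(work, c)]
    # Stage 2: choose one candidate via the selection criterion and
    # enqueue the unit of work on its scheduling queue.
    chosen = min(candidates, key=selection_criterion)
    chosen.queue.append(work)
    return chosen

cols = [Collection("c0", load=3, numa_node=0), Collection("c1", load=1, numa_node=0)]
schedule("unit-of-work-1", cols,
         first_criterion=lambda w, c: c.numa_node == 0,  # e.g., locality
         selection_criterion=lambda c: c.load)           # e.g., least loaded
```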
- Publication number: 20220350648
  Abstract: In some examples, a system receives a first unit of work to be scheduled in the system that includes a plurality of collections of processing units to execute units of work, where each respective collection of processing units of the plurality of collections of processing units is associated with a corresponding scheduling queue. The system selects, for the first unit of work according to a first criterion, candidate collections from among the plurality of collections of processing units, and enqueues the first unit of work in the scheduling queue associated with a selected collection of processing units that is selected, according to a selection criterion, from among the candidate collections.
  Type: Application
  Filed: April 30, 2021
  Publication date: November 3, 2022
  Inventors: Christopher Joseph Corsi, Prashanth Soundarapandian, Matti Antero Vanninen, Siddharth Munshi
- Patent number: 10162686
  Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
  Type: Grant
  Filed: November 8, 2017
  Date of Patent: December 25, 2018
  Assignee: NetApp, Inc.
  Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa
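As a hedged illustration of the two ideas in this abstract, here is a sketch of splitting logical processors between the MK scheduler and OS-scheduled blocking services, and of balancing first within an LLC domain and then across LLCs. The function names, llc_domains layout, and load metric are all assumptions for the example.

```python
def partition_cpus(logical_cpus, mk_count):
    # A predetermined number of logical processors go to the MK scheduler
    # for non-blocking services; the remainder are left to blocking
    # services scheduled by the operating system kernel scheduler.
    return logical_cpus[:mk_count], logical_cpus[mk_count:]

def pick_core(llc_domains, loads):
    # Intra-LLC balancing: find the least-loaded core within each LLC
    # domain. Inter-LLC balancing: pick the domain whose best core is
    # least loaded overall.
    best = {dom: min(cores, key=lambda c: loads[c])
            for dom, cores in llc_domains.items()}
    dom = min(best, key=lambda d: loads[best[d]])
    return best[dom]

mk_cpus, os_cpus = partition_cpus(list(range(8)), mk_count=6)
llc_domains = {"llc0": mk_cpus[:3], "llc1": mk_cpus[3:]}
loads = {c: c % 4 for c in mk_cpus}   # fake per-core load
print(pick_core(llc_domains, loads))
```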
- Publication number: 20180067784
  Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
  Type: Application
  Filed: November 8, 2017
  Publication date: March 8, 2018
  Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa
- Patent number: 9842008
  Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
  Type: Grant
  Filed: February 24, 2016
  Date of Patent: December 12, 2017
  Assignee: NetApp, Inc.
  Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa
- Publication number: 20170315878
  Abstract: A technique efficiently manages a snapshot and/or clone by a volume layer of a storage input/output (I/O) stack executing on one or more nodes of a cluster. According to the technique, an ownership attribute included in metadata entries of a dense tree data structure for extents eliminates otherwise-needed reference count operations for snapshots and reduces reference count operations for clones. Illustratively, a copy of a parent dense tree level created by a copy-on-write (COW) operation is referred to as a "derived level", whereas the existing level of the parent dense tree is referred to as a "source level". The source level may be persistently linked to the derived level by keeping "level identifying key information" in a respective dense tree source level header. Moreover, two different types of dense tree derivations are defined: a derive relationship and a reverse-derive relationship.
  Type: Application
  Filed: April 29, 2016
  Publication date: November 2, 2017
  Inventors: Prahlad Purohit, Ling Zheng, Christopher Joseph Corsi
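A toy model of the source/derived level relationship under copy-on-write might look like the following. DenseTreeLevel, level_key, and source_key are guesses at the structures the abstract names, not the actual on-disk format, and only the forward derive relationship is sketched.

```python
class DenseTreeLevel:
    """Illustrative level of a dense tree; field names are guesses."""
    def __init__(self, entries, level_key):
        # entries: extent key -> metadata, including an 'owner' attribute
        # so snapshots/clones can avoid per-extent reference counting.
        self.entries = dict(entries)
        self.level_key = level_key   # "level identifying key information"
        self.source_key = None       # links a derived level to its source

def copy_on_write(source, derived_key):
    # The COW copy of the parent level is the "derived level"; the existing
    # parent level remains the "source level". The derived level records
    # the source's level key, persistently linking the two.
    derived = DenseTreeLevel(source.entries, derived_key)
    derived.source_key = source.level_key
    return derived

parent = DenseTreeLevel({"ext-1": {"owner": "parent"}}, level_key="L0@gen42")
snap = copy_on_write(parent, derived_key="L0@gen43")
assert snap.source_key == parent.level_key   # the derive relationship
```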
- Publication number: 20170315740
  Abstract: A technique paces and balances a flow of messages related to processing of input/output (I/O) requests between subsystems, such as layers of a storage I/O stack, of one or more nodes of a cluster. The I/O requests may be directed to externally-generated user data, e.g., write requests generated by a host coupled to the cluster, and internally-generated metadata, e.g., write and delete requests generated by a volume layer of the storage I/O stack. The user data (and metadata) may be organized as an arbitrary number of variable-length extents of one or more host-visible logical units (LUNs) served by the nodes. The metadata may include mappings from host-visible logical block address ranges (i.e., offset ranges) of a LUN to extent keys, which reference locations of the extents stored on storage devices, such as solid state drives (SSDs), of a storage array coupled to the nodes.
  Type: Application
  Filed: April 29, 2016
  Publication date: November 2, 2017
  Inventors: Christopher Joseph Corsi, Anshul Pundir, Michael L. Federwisch, Zhen Zeng
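The abstract does not spell out how pacing is enforced; a per-class credit scheme is one plausible reading, sketched here with invented names (MessagePacer, credits_per_class) and semaphores standing in for whatever flow-control primitive the actual stack uses.

```python
import threading
from collections import deque

class MessagePacer:
    """Per-class credit pacing between two layers of an I/O stack.

    Purely illustrative: separate credit pools keep externally generated
    user writes and internally generated volume-layer metadata messages
    in balance on their way to the lower layer.
    """
    def __init__(self, credits_per_class):
        self.credits = {cls: threading.Semaphore(n)
                        for cls, n in credits_per_class.items()}
        self.in_flight = deque()

    def send(self, msg_class, message):
        # Blocks until the lower layer has capacity for this class.
        self.credits[msg_class].acquire()
        self.in_flight.append((msg_class, message))

    def complete(self):
        # Lower layer finished the oldest message: return its credit.
        msg_class, _ = self.in_flight.popleft()
        self.credits[msg_class].release()

pacer = MessagePacer({"user_write": 4, "metadata": 2})
pacer.send("user_write", "extent-put")
pacer.complete()
```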
- Publication number: 20160246655
  Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
  Type: Application
  Filed: February 24, 2016
  Publication date: August 25, 2016
  Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa