Patents by Inventor T. David Evans

T. David Evans is named as an inventor on the following patent filings. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12216926
    Abstract: A method for dispatching input-output in a system. The system may include a centralized processing circuit, a plurality of persistent storage targets, a first input-output processor, and a second input-output processor. The method may include determining whether the first input-output processor is connected to a first target of the plurality of persistent storage targets; determining whether the second input-output processor is connected to the first target; and in response to determining that both the first input-output processor is connected to the first target, and the second input-output processor is connected to the first target, dispatching a first plurality of input-output requests, each to either the first input-output processor or the second input-output processor, the dispatching being in proportion to a service rate of the first input-output processor to the first target and a service rate of the second input-output processor to the first target, respectively.
    Type: Grant
    Filed: July 20, 2023
    Date of Patent: February 4, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Zhengyu Yang, Nithya Ramakrishnan, Allen Russell Andrews, Sudheendra Grama Sampath, T. David Evans, Clay Mayers
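The patent above describes dispatching I/O requests to whichever connected I/O processor, in proportion to each processor's service rate to the shared target. A minimal sketch of that weighted-dispatch idea, assuming a probabilistic split; the function and variable names (dispatch, connected, service_rate) are illustrative, not from the patent:

```python
import random

def dispatch(requests, connected, service_rate):
    """Assign each request to an I/O processor that is connected to its target,
    weighting the choice by each processor's service rate to that target."""
    assignments = {}
    for req_id, target in requests:
        # Only processors actually connected to this target are candidates.
        candidates = [p for p in connected if target in connected[p]]
        if not candidates:
            raise RuntimeError(f"no I/O processor reaches target {target}")
        weights = [service_rate[(p, target)] for p in candidates]
        # Probabilistic split in proportion to the service rates.
        assignments[req_id] = random.choices(candidates, weights=weights)[0]
    return assignments

# Example: both processors reach target "t0"; io1 is twice as fast to it,
# so it should receive roughly two thirds of the requests.
connected = {"io0": {"t0"}, "io1": {"t0", "t1"}}
service_rate = {("io0", "t0"): 100.0, ("io1", "t0"): 200.0, ("io1", "t1"): 150.0}
requests = [(i, "t0") for i in range(9000)]
counts = {}
for proc in dispatch(requests, connected, service_rate).values():
    counts[proc] = counts.get(proc, 0) + 1
print(counts)
```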
  • Publication number: 20230359377
    Abstract: A method for dispatching input-output in a system. The system may include a centralized processing circuit, a plurality of persistent storage targets, a first input-output processor, and a second input-output processor. The method may include determining whether the first input-output processor is connected to a first target of the plurality of persistent storage targets; determining whether the second input-output processor is connected to the first target; and in response to determining that both the first input-output processor is connected to the first target, and the second input-output processor is connected to the first target, dispatching a first plurality of input-output requests, each to either the first input-output processor or the second input-output processor, the dispatching being in proportion to a service rate of the first input-output processor to the first target and a service rate of the second input-output processor to the first target, respectively.
    Type: Application
    Filed: July 20, 2023
    Publication date: November 9, 2023
    Inventors: Zhengyu Yang, Nithya Ramakrishnan, Allen Russell Andrews, Sudheendra Grama Sampath, T. David Evans, Clay Mayers
  • Patent number: 11740815
    Abstract: A method for dispatching input-output in a system. The system may include a centralized processing circuit, a plurality of persistent storage targets, a first input-output processor, and a second input-output processor. The method may include determining whether the first input-output processor is connected to a first target of the plurality of persistent storage targets; determining whether the second input-output processor is connected to the first target; and in response to determining that both the first input-output processor is connected to the first target, and the second input-output processor is connected to the first target, dispatching a first plurality of input-output requests, each to either the first input-output processor or the second input-output processor, the dispatching being in proportion to a service rate of the first input-output processor to the first target and a service rate of the second input-output processor to the first target, respectively.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: August 29, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Zhengyu Yang, Nithya Ramakrishnan, Allen Russell Andrews, Sudheendra Grama Sampath, T. David Evans, Clay Mayers
  • Patent number: 11507592
    Abstract: A method of adapting a first key-value store to a second key-value store may include determining a conversion strategy based on one or more characteristics of the first key-value store and one or more characteristics of the second key-value store, converting the second key-value store to a converted key-value store based on the conversion strategy, and mapping the first key-value store to the converted key-value store based on a mapping function. The converted key-value store may be accessed on-the-fly. A data storage system may include a key-value interface configured to provide access to a lower key-value store, and a key-value adapter coupled to the key-value interface and configured to adapt an upper key-value store to the lower key-value store, wherein the key-value adapter may be configured to adapt at least two different types of the upper key-value store to the lower key-value store.
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: November 22, 2022
    Inventors: Zhengyu Yang, Thomas Edward Rainey, III, Michael Kurt Gehlen, Ping Terence Wong, Venkatraman Balasubramanian, Olufogorehan Adetayo Tunde-Onadele, Nithya Ramakrishnan, T. David Evans, Clay Mayers
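Patent 11507592 above describes adapting an upper key-value store to a lower one through a conversion strategy and a mapping function. A minimal sketch of the adapter pattern, assuming a simple key-prefix mapping function; the class and method names are hypothetical, not from the patent:

```python
class LowerKVStore:
    """Stand-in for the lower (backing) key-value store."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]

class KVAdapter:
    """Adapts an upper key-value namespace onto the lower store by rewriting
    keys with a namespace prefix (one possible mapping function)."""
    def __init__(self, lower, namespace):
        self._lower = lower
        self._prefix = namespace + "/"

    def _map(self, key):
        # Mapping function: upper-store key -> lower-store key.
        return self._prefix + key

    def put(self, key, value):
        self._lower.put(self._map(key), value)

    def get(self, key):
        return self._lower.get(self._map(key))

# Two different upper stores share one lower store without key collisions.
lower = LowerKVStore()
users = KVAdapter(lower, "users")
sessions = KVAdapter(lower, "sessions")
users.put("42", {"name": "alice"})
sessions.put("42", {"token": "abc"})
print(users.get("42"), sessions.get("42"))
```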
  • Patent number: 11240294
    Abstract: A load balancing system includes: a centralized queue; a pool of resource nodes connected to the centralized queue; one or more processors; and memory coupled to the one or more processors and storing instructions that, when executed by the one or more processors, cause the one or more processors to: monitor a queue status of the centralized queue to identify a bursty traffic period; calculate an index value for a load associated with the bursty traffic period; select a load balancing strategy based on the index value; distribute the load to the pool of resource nodes based on the load balancing strategy; observe a state of the pool of resource nodes in response to the load balancing strategy; calculate a reward based on the observed state; and adjust the load balancing strategy based on the reward.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: February 1, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Venkatraman Balasubramanian, Olufogorehan Adetayo Tunde-Onadele, Zhengyu Yang, Ping Terence Wong, Nithya Ramakrishnan, T. David Evans, Clay Mayers
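Patent 11240294 above describes selecting a load-balancing strategy for a bursty period from an index value, then adjusting the strategy using a reward computed from the observed node state. A minimal sketch under those assumptions; the strategies, index, and reward below are illustrative stand-ins, not the patented ones:

```python
import random
import statistics

# Candidate load-balancing strategies (illustrative stand-ins).
def round_robin(load, nodes, state):
    return {n: load // len(nodes) for n in nodes}

def least_loaded(load, nodes, state):
    # Send the whole burst to the currently least-loaded node.
    target = min(nodes, key=lambda n: state[n])
    return {n: (load if n == target else 0) for n in nodes}

STRATEGIES = [round_robin, least_loaded]
scores = [1.0 for _ in STRATEGIES]          # running preference per strategy

def burst_index(queue_depths):
    # Index value for the bursty period: mean observed queue depth.
    return statistics.mean(queue_depths)

def balance_burst(queue_depths, nodes, state):
    load = int(burst_index(queue_depths) * len(queue_depths))
    # Select a strategy, biased toward those with better past rewards.
    idx = random.choices(range(len(STRATEGIES)), weights=scores)[0]
    shares = STRATEGIES[idx](load, nodes, state)
    for node, share in shares.items():
        state[node] += share                # observe the resulting node state
    # Reward: lower imbalance across nodes is better; fold it into the score.
    imbalance = max(state.values()) - min(state.values())
    scores[idx] += 1.0 / (1.0 + imbalance)
    return shares

state = {"node0": 5, "node1": 40, "node2": 10}
print(balance_burst([8, 12, 20, 30], list(state), state))
```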
  • Patent number: 11216190
    Abstract: A system and method for managing input output queue pairs. In some embodiments, the method includes calculating a system utilization ratio, the system utilization ratio being a ratio of: an arrival rate of input output requests, to a service rate; determining whether: the system utilization ratio has exceeded a first threshold utilization during a time period exceeding a first threshold length, and adding a new queue pair is expected to improve system performance; and in response to determining: that the system utilization ratio has exceeded the first threshold utilization during a time period exceeding the first threshold length, and that adding a new queue pair is expected to improve system performance: adding a new queue pair.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: January 4, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Zhengyu Yang, Nithya Ramakrishnan, Allen Russell Andrews, Sudheendra G. Sampath, T. David Evans, Clay Mayers
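Patent 11216190 above describes adding an I/O queue pair when the utilization ratio (arrival rate over service rate) stays above a threshold for long enough and an extra pair is expected to help. A minimal sketch of that control loop; the thresholds, the monotonic-clock timing, and the "would it help" check are illustrative assumptions:

```python
import time

class QueuePairScaler:
    """Adds an I/O queue pair when utilization stays high for long enough and
    another pair is still expected to help (thresholds are illustrative)."""
    def __init__(self, util_threshold=0.8, hold_seconds=5.0, max_pairs=8):
        self.util_threshold = util_threshold
        self.hold_seconds = hold_seconds
        self.max_pairs = max_pairs
        self.queue_pairs = 1
        self._over_since = None

    def update(self, arrival_rate, service_rate, now=None):
        now = time.monotonic() if now is None else now
        utilization = arrival_rate / service_rate    # system utilization ratio
        if utilization <= self.util_threshold:
            self._over_since = None                  # burst ended, reset timer
            return self.queue_pairs
        if self._over_since is None:
            self._over_since = now
        sustained = (now - self._over_since) >= self.hold_seconds
        # Crude "expected to improve" check: stop before extra pairs saturate.
        would_help = self.queue_pairs < self.max_pairs
        if sustained and would_help:
            self.queue_pairs += 1
            self._over_since = now                   # require a fresh window
        return self.queue_pairs

scaler = QueuePairScaler()
for t in range(12):
    print(t, scaler.update(arrival_rate=950.0, service_rate=1000.0, now=float(t)))
```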
  • Publication number: 20210389891
    Abstract: A method for dispatching input-output in a system. The system may include a centralized processing circuit, a plurality of persistent storage targets, a first input-output processor, and a second input-output processor. The method may include determining whether the first input-output processor is connected to a first target of the plurality of persistent storage targets; determining whether the second input-output processor is connected to the first target; and in response to determining that both the first input-output processor is connected to the first target, and the second input-output processor is connected to the first target, dispatching a first plurality of input-output requests, each to either the first input-output processor or the second input-output processor, the dispatching being in proportion to a service rate of the first input-output processor to the first target and a service rate of the second input-output processor to the first target, respectively.
    Type: Application
    Filed: August 30, 2021
    Publication date: December 16, 2021
    Inventors: Zhengyu Yang, Nithya Ramakrishnan, Allen Russell Andrews, Sudheendra Grama Sampath, T. David Evans, Clay Mayers
  • Patent number: 11144226
    Abstract: A method for dispatching input-output in a system. The system may include a centralized processing circuit, a plurality of persistent storage targets, a first input-output processor, and a second input-output processor. The method may include determining whether the first input-output processor is connected to a first target of the plurality of persistent storage targets; determining whether the second input-output processor is connected to the first target; and in response to determining that both the first input-output processor is connected to the first target, and the second input-output processor is connected to the first target, dispatching a first plurality of input-output requests, each to either the first input-output processor or the second input-output processor, the dispatching being in proportion to a service rate of the first input-output processor to the first target and a service rate of the second input-output processor to the first target, respectively.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: October 12, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Zhengyu Yang, Nithya Ramakrishnan, Allen Russell Andrews, Sudheendra Grama Sampath, T. David Evans, Clay Mayers
  • Publication number: 20210058453
    Abstract: A load balancing system includes: a centralized queue; a pool of resource nodes connected to the centralized queue; one or more processors; and memory coupled to the one or more processors and storing instructions that, when executed by the one or more processors, cause the one or more processors to: monitor a queue status of the centralized queue to identify a bursty traffic period; calculate an index value for a load associated with the bursty traffic period; select a load balancing strategy based on the index value; distribute the load to the pool of resource nodes based on the load balancing strategy; observe a state of the pool of resource nodes in response to the load balancing strategy; calculate a reward based on the observed state; and adjust the load balancing strategy based on the reward.
    Type: Application
    Filed: December 6, 2019
    Publication date: February 25, 2021
    Inventors: Venkatraman Balasubramanian, Olufogorehan Adetayo Tunde-Onadele, Zhengyu Yang, Ping Terence Wong, Nithya Ramakrishnan, T. David Evans, Clay Mayers
  • Publication number: 20200387312
    Abstract: A system and method for managing input output queue pairs. In some embodiments, the method includes calculating a system utilization ratio, the system utilization ratio being a ratio of: an arrival rate of input output requests, to a service rate; determining whether: the system utilization ratio has exceeded a first threshold utilization during a time period exceeding a first threshold length, and adding a new queue pair is expected to improve system performance; and in response to determining: that the system utilization ratio has exceeded the first threshold utilization during a time period exceeding the first threshold length, and that adding a new queue pair is expected to improve system performance: adding a new queue pair.
    Type: Application
    Filed: August 9, 2019
    Publication date: December 10, 2020
    Inventors: Zhengyu Yang, Nithya Ramakrishnan, Allen Russell Andrews, Sudheendra G. Sampath, T. David Evans, Clay Mayers
  • Patent number: 10852990
    Abstract: A non-volatile memory (NVM) express (NVMe) system includes at least one user application, an NVMe controller and a hypervisor. Each user application runs in a respective virtual machine environment and including a user input/output (I/O) queue. The NVMe controller is coupled to at least one NVM storage device, and the NVMe controller includes a driver that includes at least one device queue. The hypervisor is coupled to the user I/O queue of each user application and to the NVMe controller, and selectively forces each user I/O queue to empty to a corresponding device queue in the driver of the NVMe controller or enables a private I/O channel between the user I/O queue and a corresponding device queue in the driver of the NVMe controller.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: December 1, 2020
    Inventors: Zhengyu Yang, Morteza Hoseinzadeh, Ping Wong, John Artoux, T. David Evans
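Patent 10852990 above describes a hypervisor that either forces a user I/O queue to empty into the NVMe driver's device queue or wires a private I/O channel between the two. A minimal in-memory model of that choice; the classes and queue representation are illustrative, not the actual NVMe data structures:

```python
from collections import deque

class UserIOQueue:
    """User I/O queue inside a virtual machine (simplified)."""
    def __init__(self):
        self.pending = deque()
        self.private_channel = None   # set when the hypervisor enables passthrough

    def submit(self, request):
        if self.private_channel is not None:
            # Private channel: the request goes straight to the device queue.
            self.private_channel.append(request)
        else:
            # Otherwise it waits until the hypervisor flushes the queue.
            self.pending.append(request)

class Hypervisor:
    """Either forces a user I/O queue to empty into its device queue, or wires
    a private channel so later submissions bypass the hypervisor."""
    def flush(self, user_queue, device_queue):
        while user_queue.pending:
            device_queue.append(user_queue.pending.popleft())

    def enable_private_channel(self, user_queue, device_queue):
        self.flush(user_queue, device_queue)     # drain anything already queued
        user_queue.private_channel = device_queue

device_queue = deque()                           # driver-side device queue
vm_queue = UserIOQueue()
hv = Hypervisor()
vm_queue.submit("read lba 0")
hv.flush(vm_queue, device_queue)                 # forced empty by the hypervisor
hv.enable_private_channel(vm_queue, device_queue)
vm_queue.submit("write lba 8")                   # now bypasses the hypervisor
print(list(device_queue))
```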
  • Publication number: 20200326868
    Abstract: A method for dispatching input-output in a system. The system may include a centralized processing circuit, a plurality of persistent storage targets, a first input-output processor, and a second input-output processor. The method may include determining whether the first input-output processor is connected to a first target of the plurality of persistent storage targets; determining whether the second input-output processor is connected to the first target; and in response to determining that both the first input-output processor is connected to the first target, and the second input-output processor is connected to the first target, dispatching a first plurality of input-output requests, each to either the first input-output processor or the second input-output processor, the dispatching being in proportion to a service rate of the first input-output processor to the first target and a service rate of the second input-output processor to the first target, respectively.
    Type: Application
    Filed: July 1, 2019
    Publication date: October 15, 2020
    Inventors: Zhengyu Yang, Nithya Ramakrishnan, Allen Russell Andrews, Sudheendra Grama Sampath, T. David Evans, Clay Mayers
  • Patent number: 10795583
    Abstract: A system for performing auto-tiering is disclosed. The system may include a plurality of storage devices offering a plurality of resources and organized into storage tiers. The storage devices may store data for virtual machines. A receiver may receive I/O commands and performance data for the virtual machines. A transmitter may transmit responses to the I/O commands. An auto-tiering controller may select storage tiers to store the data for the virtual machines and may migrate data between storage tiers responsive to the performance data. The selection of the storage tiers may optimize the performance of all virtual machines across all storage tiers, factoring the change in performance of the virtual machines and a migration cost to migrate data between storage tiers.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: October 6, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Zhengyu Yang, T. David Evans, Allen Andrews, Clay Mayers, Thomas Rory Bolt
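Patent 10795583 above describes migrating virtual machine data between storage tiers when the expected performance gain justifies the migration cost. A minimal greedy sketch under assumed tier latencies and a simplified cost model; none of the numbers or names come from the patent:

```python
# Greedy auto-tiering pass: promote a VM's data to a faster tier only when the
# estimated performance gain outweighs the cost of migrating it.
TIERS = ["nvme", "ssd", "hdd"]                    # fastest to slowest (assumed)
TIER_LATENCY_US = {"nvme": 20, "ssd": 100, "hdd": 5000}

def migration_cost(vm):
    # Cost grows with the amount of data that would have to move (simplified).
    return vm["gb"] * 0.5

def expected_gain(vm, new_tier):
    # Gain: I/O rate times the per-request latency saved by the faster tier.
    saved_us = TIER_LATENCY_US[vm["tier"]] - TIER_LATENCY_US[new_tier]
    return vm["iops"] * saved_us / 1_000_000.0

def retier(vms):
    plan = []
    for vm in vms:
        current = TIERS.index(vm["tier"])
        for new_tier in TIERS[:current]:          # only consider faster tiers
            if expected_gain(vm, new_tier) > migration_cost(vm):
                plan.append((vm["name"], vm["tier"], new_tier))
                vm["tier"] = new_tier
                break
    return plan

vms = [
    {"name": "db",   "tier": "hdd", "iops": 20000, "gb": 50},
    {"name": "logs", "tier": "hdd", "iops": 50,    "gb": 500},
]
print(retier(vms))   # the hot "db" VM is promoted; the cold "logs" VM stays put
```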
  • Patent number: 10496541
    Abstract: A system is disclosed. The system may include a virtual machine server, which may include a processor, a memory, and at least two virtual machines that may be stored in the memory and executed by the processor. The virtual machine server may also include a virtual machine hypervisor to manage the operations of the virtual machine. The virtual machine server may also include a cache that may include at least one storage device. A Dynamic Cache Partition Manager (DCPM) may manage the partition of the cache into a performance guarantee zone, which may be partitioned into regions, and a spike buffer zone.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: December 3, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Zhengyu Yang, T. David Evans
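Patent 10496541 above describes a Dynamic Cache Partition Manager that splits a cache into a performance guarantee zone, itself divided into regions, plus a spike buffer zone. A minimal sketch assuming demand-proportional regions and a fixed spike-buffer fraction; both policies are illustrative:

```python
class DynamicCachePartitionManager:
    """Splits a cache budget into a performance guarantee zone, with one region
    per virtual machine, plus a spike buffer zone (policy is illustrative)."""
    def __init__(self, total_blocks, spike_fraction=0.2):
        self.spike_buffer = int(total_blocks * spike_fraction)
        self.guarantee_zone = total_blocks - self.spike_buffer
        self.regions = {}

    def partition(self, vm_demand):
        """Give each VM a region proportional to its demand within the guarantee
        zone; the spike buffer is held back to absorb bursts."""
        total_demand = sum(vm_demand.values()) or 1
        self.regions = {
            vm: self.guarantee_zone * demand // total_demand
            for vm, demand in vm_demand.items()
        }
        return self.regions

dcpm = DynamicCachePartitionManager(total_blocks=10_000)
print(dcpm.partition({"vm1": 300, "vm2": 100}))   # {'vm1': 6000, 'vm2': 2000}
print("spike buffer blocks:", dcpm.spike_buffer)  # 2000 blocks held in reserve
```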
  • Publication number: 20190163636
    Abstract: A system is disclosed. The system may include a virtual machine server, which may include a processor, a memory, and at least two virtual machines that may be stored in the memory and executed by the processor. The virtual machine server may also include a virtual machine hypervisor to manage the operations of the virtual machine. The virtual machine server may also include a cache that may include at least one storage device. A Dynamic Cache Partition Manager (DCPM) may manage the partition of the cache into a performance guarantee zone, which may be partitioned into regions, and a spike buffer zone.
    Type: Application
    Filed: February 7, 2018
    Publication date: May 30, 2019
    Inventors: Zhengyu Yang, T. David Evans
  • Patent number: 9864551
    Abstract: Example implementations relate to determining, based on a system busy level, throughput of logical volumes. In example implementations, a system busy level may be increased in response to a determination that a latency goal associated with one of a plurality of logical volumes has not been met. A throughput for a subset of the plurality of logical volumes may be determined based on the system busy level.
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: January 9, 2018
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Ming Ma, Siamak Nazari, James R. Cook, T. David Evans
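Patent 9864551 above describes raising a system busy level when a logical volume misses its latency goal and setting throughput for a subset of volumes from that level. A minimal sketch assuming the throttled subset is the best-effort volumes and a linear throttle; both assumptions are illustrative:

```python
def update_busy_level(busy_level, goal_volumes, max_level=10):
    """Raise the system busy level when any latency goal is missed; otherwise
    let it decay back toward zero."""
    missed = any(v["latency_ms"] > v["latency_goal_ms"] for v in goal_volumes)
    return min(busy_level + 1, max_level) if missed else max(busy_level - 1, 0)

def throughput_limits(busy_level, best_effort_volumes, max_level=10):
    """Throttle the best-effort subset of volumes more deeply as the busy
    level rises (a linear policy, chosen for illustration)."""
    factor = 1.0 - busy_level / (max_level + 1)
    return {v["name"]: int(v["baseline_iops"] * factor) for v in best_effort_volumes}

goal_volumes = [{"name": "db", "latency_ms": 9.0, "latency_goal_ms": 5.0}]
best_effort = [{"name": "backup", "baseline_iops": 4000}]
busy = update_busy_level(0, goal_volumes)           # goal missed -> level 1
print(busy, throughput_limits(busy, best_effort))   # backup gets throttled
```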
  • Publication number: 20160062670
    Abstract: Example implementations relate to determining, based on a system busy level, throughput of logical volumes. In example implementations, a system busy level may be increased in response to a determination that a latency goal associated with one of a plurality of logical volumes has not been met. A throughput for a subset of the plurality of logical volumes may be determined based on the system busy level.
    Type: Application
    Filed: August 29, 2014
    Publication date: March 3, 2016
    Inventors: Ming Ma, Siamak Nazari, James R. Cook, T. David Evans
  • Patent number: 9047225
    Abstract: An improved technique for managing data replacement in a cache dynamically selects a data replacement protocol from among multiple candidates based on which data replacement protocol produces the greatest cache hit rate. The technique includes selecting one of multiple data replacement protocols using a random selection process that can be biased to favor the selection of certain protocols over others. Data are evicted from the cache using the selected data replacement protocol, and the cache hit rate is monitored. The selected data replacement protocol is then rewarded in response to the detected cache hit rate. The selection process is repeated, and a newly selected data replacement protocol is put into use. Operation tends to converge on an optimal data replacement protocol that best suits the application and current operating environment of the cache.
    Type: Grant
    Filed: September 27, 2012
    Date of Patent: June 2, 2015
    Assignee: EMC Corporation
    Inventor: T. David Evans
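Patent 9047225 above describes choosing a cache replacement protocol by biased random selection and rewarding the chosen protocol according to the cache hit rate it produces. A minimal sketch with two stand-in protocols (LRU and random eviction); the epoch structure and reward scaling are assumptions:

```python
import random
from collections import OrderedDict

# Two candidate replacement protocols for a fixed-size cache (stand-ins).
def evict_lru(cache):
    cache.popitem(last=False)            # OrderedDict keeps least-recent first

def evict_random(cache):
    cache.pop(random.choice(list(cache)))

PROTOCOLS = [evict_lru, evict_random]
weights = [1.0, 1.0]                     # selection bias, grown by rewards

def access(cache, key, capacity, protocol):
    hit = key in cache
    if hit:
        cache.move_to_end(key)           # mark as most recently used
    else:
        if len(cache) >= capacity:
            protocol(cache)              # evict using the selected protocol
        cache[key] = None
    return hit

def run_epoch(cache, keys, capacity):
    """Pick a protocol by biased random selection, use it for one epoch, then
    reward it in proportion to the hit rate it achieved."""
    idx = random.choices(range(len(PROTOCOLS)), weights=weights)[0]
    hits = sum(access(cache, k, capacity, PROTOCOLS[idx]) for k in keys)
    hit_rate = hits / len(keys)
    weights[idx] += hit_rate             # reward the protocol that was used
    return PROTOCOLS[idx].__name__, round(hit_rate, 3)

cache = OrderedDict()
workload = [random.randint(0, 50) for _ in range(2000)]
for _ in range(5):
    print(run_epoch(cache, workload, capacity=20))
```

Over repeated epochs the selection weights grow fastest for whichever protocol keeps producing the better hit rate, so the biased selection tends to converge on it, matching the convergence behavior the abstract describes.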
  • Patent number: 8924647
    Abstract: An improved technique for managing data replacement in a multi-level cache dynamically selects a data replacement protocol from among multiple candidates based on which data replacement protocol produces the greatest cache hit rate. The technique includes selecting one of multiple data replacement protocols using a random selection process that can be biased to favor the selection of certain protocols over others. Data are evicted from each level of the multi-level cache using the selected data replacement protocol, and the cache hit rate is monitored. The selected data replacement protocol is then rewarded in response to the detected cache hit rate. The selection process is repeated, and a newly selected data replacement protocol is put into use. Operation tends to converge on an optimal data replacement protocol that best suits the application and current operating environment of the multi-level cache.
    Type: Grant
    Filed: September 26, 2012
    Date of Patent: December 30, 2014
    Assignee: EMC Corporation
    Inventor: T. David Evans
  • Patent number: 8874494
    Abstract: An improved technique for replacing storage elements in a redundant group of storage elements of a storage array dynamically selects a storage element type from among multiple candidates based on which storage element type produces the greatest service level of the redundant group. The technique includes selecting one of multiple storage element types using a random selection process that can be biased to favor the selection of certain storage element types over others. A storage element of the selected type is added to the redundant group. The selected storage element type is then rewarded based on the service level that results from adding the storage element of the selected type. The selection process is repeated, and a newly selected storage element type is put into use. Operation tends to converge on an optimal storage element type that maximizes the service level of the redundant group.
    Type: Grant
    Filed: September 28, 2012
    Date of Patent: October 28, 2014
    Assignee: EMC Corporation
    Inventor: T. David Evans
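Patent 8874494 above describes choosing a replacement storage element type by biased random selection and rewarding the chosen type according to the redundant group's resulting service level. A minimal sketch with invented element types and a toy service-level model; neither comes from the patent:

```python
import random

# Candidate storage element types for rebuilding a redundant group; the names
# and the service-level model below are invented for illustration.
ELEMENT_TYPES = ["ssd", "10k_hdd", "7k_hdd"]
weights = {t: 1.0 for t in ELEMENT_TYPES}

def measure_service_level(group):
    # Service level of the redundant group, limited here by its slowest member;
    # a real array would measure this from live I/O statistics instead.
    per_type_score = {"ssd": 1.0, "10k_hdd": 0.6, "7k_hdd": 0.3}
    return min(per_type_score[t] for t in group)

def replace_failed_element(group, failed_index):
    """Pick a replacement type by biased random selection, install it, then
    reward that type according to the resulting service level."""
    choice = random.choices(ELEMENT_TYPES,
                            weights=[weights[t] for t in ELEMENT_TYPES])[0]
    group[failed_index] = choice
    weights[choice] += measure_service_level(group)
    return choice

group = ["ssd", "ssd", "ssd", "7k_hdd"]      # a RAID-like redundant group
for _ in range(10):
    replace_failed_element(group, failed_index=3)
print(group, {t: round(w, 2) for t, w in weights.items()})
```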