Patents by Inventor John Krasner

John Krasner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11956130
    Abstract: The cause of a failure to satisfy a high priority service level objective (SLO) of a storage object or storage group is localized within a storage array. Storage objects that have been assigned low priority SLOs are analyzed to determine whether their performance paths overlap with the performance path of the high priority SLO storage object or storage group at the location of the cause of the failure. The low priority SLO storage objects whose performance paths overlap at that location are targeted for IO data rate reduction in order to free resources and restore compliance with the high priority SLO; the other low priority SLO storage objects are not targeted.
    Type: Grant
    Filed: October 3, 2022
    Date of Patent: April 9, 2024
    Assignee: DELL PRODUCTS L.P.
    Inventors: John Creed, Arieh Don, John Krasner
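A minimal sketch of the targeting step this abstract describes, assuming a hypothetical StorageObject record whose performance_path is the set of array components its IOs traverse; the real localization and throttling machinery is more involved:

```python
from dataclasses import dataclass

@dataclass
class StorageObject:
    name: str
    priority: str         # "high" or "low" service level objective
    performance_path: set # components (ports, engines, drives) the IOs traverse

def select_throttle_targets(objects, failure_location):
    """Return only the low-priority objects whose performance paths cross
    the component where the high-priority SLO failure was localized."""
    return [
        obj for obj in objects
        if obj.priority == "low" and failure_location in obj.performance_path
    ]

# Example: only the low-priority object sharing the failed component is targeted.
objs = [
    StorageObject("db-vol", "high", {"port-1", "engine-2", "drive-7"}),
    StorageObject("backup-vol", "low", {"port-1", "engine-2", "drive-9"}),
    StorageObject("scratch-vol", "low", {"port-3", "engine-1", "drive-4"}),
]
targets = select_throttle_targets(objs, failure_location="engine-2")
print([t.name for t in targets])  # ['backup-vol']
```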
  • Publication number: 20240113946
    Abstract: The cause of a failure to satisfy a high priority service level objective (SLO) of a storage object or storage group is localized within a storage array. Storage objects that have been assigned low priority SLOs are analyzed to determine whether their performance paths overlap with the performance path of the high priority SLO storage object or storage group at the location of the cause of the failure. The low priority SLO storage objects whose performance paths overlap at that location are targeted for IO data rate reduction in order to free resources and restore compliance with the high priority SLO; the other low priority SLO storage objects are not targeted.
    Type: Application
    Filed: October 3, 2022
    Publication date: April 4, 2024
    Applicant: DELL PRODUCTS L.P.
    Inventors: John Creed, Arieh Don, John Krasner
  • Patent number: 11762770
    Abstract: One or more aspects of the present disclosure relate to cache memory management. In embodiments, a global memory of a storage array can be dynamically partitioned into one or more cache partitions based on the anticipated activity of one or more input/output (IO) service level (SL) workload volumes.
    Type: Grant
    Filed: October 22, 2020
    Date of Patent: September 19, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: John Creed, John Krasner
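A minimal sketch of proportional cache partitioning, under the assumption that anticipated activity is summarized as one expected IO rate per service level (all names and figures are hypothetical, not the patented implementation):

```python
def partition_global_memory(total_cache_slots, anticipated_activity):
    """Split a global-memory cache into per-SL partitions sized in
    proportion to each workload's anticipated activity.

    anticipated_activity: {service_level: expected IO rate}
    """
    total = sum(anticipated_activity.values())
    return {
        sl: int(total_cache_slots * rate / total)
        for sl, rate in anticipated_activity.items()
    }

# Diamond gets the largest partition because it is expected to be busiest.
print(partition_global_memory(100_000, {"diamond": 60, "gold": 30, "bronze": 10}))
# {'diamond': 60000, 'gold': 30000, 'bronze': 10000}
```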
  • Patent number: 11640311
    Abstract: One or more aspects of the present disclosure relate to allocating virtual memory to one or more virtual machines (VMs). The one or more VMs can be established by a hypervisor of a storage device. The virtual memory can be allocated to the established one or more VMs. The virtual memory can correspond to non-volatile (NV) memory of a global memory of the storage device.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: May 2, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Serge Pirotte, John Krasner, Chakib Ouarraoui, Mark Halstead
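A toy illustration of the allocation idea, with a made-up NvGlobalMemory class standing in for the storage device's non-volatile global memory; the hypervisor interaction is elided:

```python
class NvGlobalMemory:
    """Toy model of non-volatile global memory on a storage device,
    from which virtual memory is carved out for guest VMs."""

    def __init__(self, total_mb):
        self.free_mb = total_mb
        self.allocations = {}  # vm_name -> allocated MB

    def allocate_virtual_memory(self, vm_name, size_mb):
        if size_mb > self.free_mb:
            raise MemoryError(f"only {self.free_mb} MB of NV memory left")
        self.free_mb -= size_mb
        self.allocations[vm_name] = size_mb
        return size_mb

# A hypervisor establishing two VMs and backing them with NV memory.
nv = NvGlobalMemory(total_mb=4096)
nv.allocate_virtual_memory("vm-ctrl", 1024)
nv.allocate_virtual_memory("vm-data", 2048)
print(nv.allocations, nv.free_mb)  # {'vm-ctrl': 1024, 'vm-data': 2048} 1024
```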
  • Patent number: 11625327
    Abstract: Embodiments of the present disclosure relate to cache memory management. Based on anticipated input/output (I/O) workloads, the sizes of one or more mirrored and un-mirrored caches of global memory and of their respective cache slot pools are dynamically balanced. Each of the mirrored and un-mirrored caches can be segmented into one or more cache pools, each having slots of a distinct size. Each cache pool can be assigned an amount of the cache slots of its distinct size based on the anticipated I/O workloads. Cache pools can be further assigned distinctly sized cache slots based on expected service levels (SLs) of a customer, or based on one or more of predicted I/O request sizes and predicted frequencies of different I/O request sizes within the anticipated I/O workloads.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: April 11, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: John Krasner, Ramesh Doddaiah
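A minimal sketch of assigning distinctly sized cache slots to pools from a predicted request-size mix; the slot sizes and fractions are invented for illustration:

```python
def size_cache_pools(total_slots, predicted_size_mix):
    """Assign cache slots to pools in proportion to the predicted
    frequency of each I/O request size.

    predicted_size_mix: {slot_size_kb: predicted fraction of IOs}
    """
    return {
        size_kb: int(total_slots * fraction)
        for size_kb, fraction in predicted_size_mix.items()
    }

# Mostly small IOs expected, so the 8 KB pool gets most of the slots.
print(size_cache_pools(50_000, {8: 0.7, 64: 0.2, 128: 0.1}))
# {8: 35000, 64: 10000, 128: 5000}
```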
  • Patent number: 11609695
    Abstract: A data model is trained to determine whether data is raw, compressed, and/or encrypted. The data model may also be trained to recognize which compression algorithm was used to compress data and predict compression ratios for the data using different compression algorithms. A storage system uses the data model to independently identify raw data. The raw data is grouped based on similarity of statistical features and group members are compressed with the same compression algorithm and may be encrypted after compression with the same encryption algorithm. The data model may also be used to identify sub-optimally compressed data, which may be uncompressed and grouped for compression using a different compression algorithm.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: March 21, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: John Krasner, Sweetesh Singh
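A one-feature stand-in for the trained data model: byte entropy alone separates raw data from compressed or encrypted data in many cases, though the model described here uses richer statistical features. The 7.5 bits/byte threshold is a hypothetical choice:

```python
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; values near 8 suggest compressed
    or encrypted data, lower values suggest raw (compressible) data."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def classify(data: bytes, threshold=7.5) -> str:
    # A real system would use a trained model over many features;
    # this one-feature rule is only a stand-in.
    return "raw" if byte_entropy(data) < threshold else "compressed-or-encrypted"

print(classify(b"hello hello hello " * 100))  # raw
print(classify(os.urandom(2048)))             # compressed-or-encrypted
```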
  • Patent number: 11561698
    Abstract: A storage array that uses NVMe-oF to interconnect compute nodes with NVMe SSDs via a fabric and NVMe offload engines implements flow control based on transaction latency. Transaction latency is the elapsed time between the send-side completion message and the receive-side completion message for a single transaction. Counts of total transactions and over-latency-limit transactions are accumulated over a time interval. If the over-limit rate exceeds a threshold, then the maximum allowed number of enqueued pending transactions is reduced. The maximum allowed number of enqueued pending transactions is periodically restored to a default value.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: January 24, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Jinxian Xing, Julie Zhivich, John Krasner
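A toy version of the flow-control loop the abstract describes: accumulate per-interval counts, shrink the allowed queue depth when the over-limit rate crosses a threshold, and periodically restore the default. All constants are hypothetical:

```python
class FlowControl:
    """Latency-based flow control: halve the allowed number of pending
    transactions when too many exceed the latency limit in an interval."""

    DEFAULT_MAX_PENDING = 256

    def __init__(self, latency_limit_us=500, over_limit_threshold=0.05):
        self.latency_limit_us = latency_limit_us
        self.over_limit_threshold = over_limit_threshold
        self.max_pending = self.DEFAULT_MAX_PENDING
        self.total = 0
        self.over_limit = 0

    def record(self, send_done_us, recv_done_us):
        # Transaction latency is the gap between the send-side and
        # receive-side completion messages.
        self.total += 1
        if recv_done_us - send_done_us > self.latency_limit_us:
            self.over_limit += 1

    def end_interval(self):
        if self.total and self.over_limit / self.total > self.over_limit_threshold:
            self.max_pending = max(1, self.max_pending // 2)  # throttle
        self.total = self.over_limit = 0

    def restore_default(self):
        self.max_pending = self.DEFAULT_MAX_PENDING  # called periodically

fc = FlowControl()
for i in range(100):
    fc.record(0, 900 if i < 10 else 100)  # 10% of transactions over the limit
fc.end_interval()
print(fc.max_pending)  # 128
```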
  • Publication number: 20220334727
    Abstract: A storage array that uses NVMe-oF to interconnect compute nodes with NVMe SSDs via a fabric and NVMe offload engines implements flow control based on transaction latency. Transaction latency is the elapsed time between the send-side completion message and the receive-side completion message for a single transaction. Counts of total transactions and over-latency-limit transactions are accumulated over a time interval. If the over-limit rate exceeds a threshold, then the maximum allowed number of enqueued pending transactions is reduced. The maximum allowed number of enqueued pending transactions is periodically restored to a default value.
    Type: Application
    Filed: April 20, 2021
    Publication date: October 20, 2022
    Applicant: EMC IP Holding Company LLC
    Inventors: Jinxian Xing, Julie Zhivich, John Krasner
  • Patent number: 11416057
    Abstract: One or more aspects of the present disclosure relate to data protection techniques in response to power disruptions. A power supply from a continuous power source for a storage device can be monitored, and a power disruption event interrupting that supply can be identified. In response to detecting such an event, the storage system can be switched to a backup power supply, and power consumption of one or more components of the storage device can be controlled based on information associated with each component and the amount of power available in the backup power supply. Further, one or more power interruption operations can be performed while the backup power supply retains sufficient power for performing them.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: August 16, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: John Krasner, Clifford Lim, Sweetesh Singh
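A minimal sketch of budget-driven throttling after switching to backup power, assuming each component exposes a current draw and a minimum floor (all names and figures invented):

```python
def throttle_for_backup(components, backup_power_watts):
    """Scale each component's power toward its floor so the total draw
    fits what the backup supply can deliver.

    components: {name: (current_watts, minimum_watts)}
    """
    draw = sum(w for w, _ in components.values())
    if draw <= backup_power_watts:
        return {name: w for name, (w, _) in components.items()}
    scale = backup_power_watts / draw
    # Floors are respected even if that slightly exceeds the budget.
    return {name: max(floor, watts * scale)
            for name, (watts, floor) in components.items()}

# Backup supply delivers 150 W; the array normally draws 380 W.
print(throttle_for_backup(
    {"cpu": (120, 40), "drives": (200, 80), "fans": (60, 20)},
    backup_power_watts=150,
))
```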
  • Publication number: 20220179829
    Abstract: A data model is trained to predict compressibility of binary data structures based on component entropy and predict relative compression efficiency for various compression algorithms based on component size. A recommendation engine in a storage system uses the data model to predict compressibility of binary data and determines whether to compress the binary data based on predicted compressibility. If the recommendation engine determines that compression of the binary data is justified, then a compression algorithm is recommended based on predicted relative compression efficiency. For example, the compression algorithm predicted to yield the greatest compression ratio or shortest compression/decompression time may be recommended.
    Type: Application
    Filed: December 7, 2020
    Publication date: June 9, 2022
    Applicant: EMC IP Holding Company LLC
    Inventors: John Krasner, Sweetesh Singh
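A minimal sketch of the recommendation step, assuming the trained model has already produced per-algorithm predicted compression ratios (the algorithm names, ratios, and 1.2 cutoff are hypothetical):

```python
def recommend_compression(predicted, min_ratio=1.2):
    """Pick the compression algorithm with the best predicted ratio, or
    recommend no compression when even the best predicted ratio is too
    small to justify the CPU cost.

    predicted: {algorithm: predicted compression ratio} from the data model.
    """
    best_algo = max(predicted, key=predicted.get)
    if predicted[best_algo] < min_ratio:
        return None  # not worth compressing
    return best_algo

print(recommend_compression({"lz4": 1.8, "zstd": 2.4, "gzip": 2.1}))  # zstd
print(recommend_compression({"lz4": 1.05, "zstd": 1.1}))              # None
```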
  • Publication number: 20220129379
    Abstract: One or more aspects of the present disclosure relate to cache memory management. In embodiments, a global memory of a storage array can be dynamically partitioned into one or more cache partitions based on the anticipated activity of one or more input/output (IO) service level (SL) workload volumes.
    Type: Application
    Filed: October 22, 2020
    Publication date: April 28, 2022
    Applicant: EMC IP Holding Company LLC
    Inventors: John Creed, John Krasner
  • Publication number: 20220121571
    Abstract: Remote cache slots are donated in a storage array without requiring a cache-slot-starved compute node to search for candidates in remote portions of a shared memory. One or more donor compute nodes create donor cache slots that are reserved for donation. The cache-slot-starved compute node broadcasts a message to the donor compute nodes indicating a need for donor cache slots. The donor compute nodes provide donor cache slots to the cache-slot-starved compute node in response to the message. The message may be broadcast by updating a mask of compute node operational status in the shared memory. The donor cache slots may be provided by providing pointers to the donor cache slots.
    Type: Application
    Filed: October 20, 2020
    Publication date: April 21, 2022
    Applicant: EMC IP Holding Company LLC
    Inventors: John Creed, Steve Ivester, John Krasner, Kaustubh Sahasrabudhe
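A toy model of the donation protocol, with Python lists standing in for the shared-memory status mask and the slot pointers; all names are illustrative:

```python
class ComputeNode:
    """Toy compute node holding cache slots reserved for donation.
    Slot handles stand in for pointers into shared memory."""

    def __init__(self, name, donor_slots):
        self.name = name
        self.donor_slots = list(donor_slots)

    def donate(self, count):
        given, self.donor_slots = self.donor_slots[:count], self.donor_slots[count:]
        return given

def broadcast_starvation(donors, needed):
    """Stand-in for updating the shared-memory status mask: each donor
    responds with pointers to its reserved donor cache slots until the
    starved node's need is met."""
    received = []
    for donor in donors:
        received.extend(donor.donate(max(0, needed - len(received))))
    return received

donors = [ComputeNode("node-B", ["B:slot0", "B:slot1"]),
          ComputeNode("node-C", ["C:slot0", "C:slot1", "C:slot2"])]
print(broadcast_starvation(donors, needed=4))
# ['B:slot0', 'B:slot1', 'C:slot0', 'C:slot1']
```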
  • Publication number: 20220066647
    Abstract: A data model is trained to determine whether data is raw, compressed, and/or encrypted. The data model may also be trained to recognize which compression algorithm was used to compress data and predict compression ratios for the data using different compression algorithms. A storage system uses the data model to independently identify raw data. The raw data is grouped based on similarity of statistical features and group members are compressed with the same compression algorithm and may be encrypted after compression with the same encryption algorithm. The data model may also be used to identify sub-optimally compressed data, which may be uncompressed and grouped for compression using a different compression algorithm.
    Type: Application
    Filed: September 2, 2020
    Publication date: March 3, 2022
    Applicant: EMC IP Holding Company LLC
    Inventors: John Krasner, Sweetesh Singh
  • Patent number: 11243829
    Abstract: Embodiments of the present disclosure relate to an apparatus comprising a memory and at least one processor. The at least one processor is configured to: dynamically obtain a plurality of metadata from a global memory of a storage system; dynamically predict anticipated metadata based on the dynamically obtained metadata, wherein anticipated metadata is relevant to anticipated input/output (I/O) operations of the storage system; and dynamically instruct the storage system to load anticipated metadata into the global memory.
    Type: Grant
    Filed: July 25, 2019
    Date of Patent: February 8, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: John Krasner, Jason Duquette
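A minimal sketch of the prefetch idea, assuming a hypothetical dict-backed metadata store keyed by page, with one metadata page covering 1024 LBAs:

```python
def prefetch_metadata(global_memory, predicted_lbas, metadata_store):
    """Load metadata pages for the LBAs that the model predicts upcoming
    I/O will touch, so those I/Os avoid a metadata miss."""
    for lba in predicted_lbas:
        page = f"md-{lba // 1024}"  # one metadata page per 1024 LBAs
        if page not in global_memory:
            global_memory[page] = metadata_store[page]

store = {f"md-{i}": f"track-table-{i}" for i in range(8)}
gm = {}
prefetch_metadata(gm, predicted_lbas=[100, 2100, 5000], metadata_store=store)
print(sorted(gm))  # ['md-0', 'md-2', 'md-4']
```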
  • Publication number: 20220026970
    Abstract: One or more aspects of the present disclosure relate to data protection techniques in response to power disruptions. A power supply from a continuous power source for a storage device can be monitored, and a power disruption event interrupting that supply can be identified. In response to detecting such an event, the storage system can be switched to a backup power supply, and power consumption of one or more components of the storage device can be controlled based on information associated with each component and the amount of power available in the backup power supply. Further, one or more power interruption operations can be performed while the backup power supply retains sufficient power for performing them.
    Type: Application
    Filed: July 27, 2020
    Publication date: January 27, 2022
    Applicant: EMC IP Holding Company LLC
    Inventors: John Krasner, Clifford Lim, Sweetesh Singh
  • Patent number: 11138123
    Abstract: Embodiments of the present disclosure relate to an apparatus comprising a memory and at least one processor. The at least one processor is configured to: analyze input/output (I/O) operations received by a storage system; dynamically predict anticipated I/O operations of the storage system based on the analysis; and dynamically control a size of a local cache of the storage system based on the anticipated I/O operations.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: October 5, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: John Krasner, Ramesh Doddaiah
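A minimal sketch of the sizing policy, assuming a made-up 64 MB-per-1000-IOPS tuning constant and a capacity cap:

```python
def resize_local_cache(current_mb, predicted_iops, capacity_mb, mb_per_kiops=64):
    """Size the local cache proportionally to the predicted I/O rate,
    clamped to what the node can spare; return the new size and the
    adjustment to apply."""
    target = min(capacity_mb, int(predicted_iops / 1000 * mb_per_kiops))
    return target, target - current_mb

print(resize_local_cache(2048, predicted_iops=80_000, capacity_mb=8192))
# (5120, 3072): grow the cache by 3 GB ahead of the predicted load
```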
  • Patent number: 11099754
    Abstract: An apparatus comprises at least one processing device comprising a processor coupled to a memory. The at least one processing device is configured to receive, via a multi-path layer of at least one host device, at least one indication of a predicted distribution of input-output operations directed from the at least one host device to a storage system for a given time interval. The at least one processing device is also configured to determine a cache memory configuration for a cache memory associated with the storage system based at least in part on the at least one indication of the predicted distribution of input-output operations for the given time interval. The at least one processing device is further configured to provision the cache memory with the determined cache memory configuration for the given time interval.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: August 24, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Sanjib Mallick, John Krasner, Arieh Don, Ramesh Doddaiah
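A minimal sketch of turning a host-predicted read/write mix into a cache configuration for the interval; the double weighting of writes (to cover mirrored write slots) is an assumed policy, not the patented method:

```python
def provision_cache(total_slots, predicted_distribution):
    """Convert the multi-path layer's predicted read/write mix for the
    next interval into read and mirrored-write slot counts."""
    reads = predicted_distribution["reads"]
    writes = predicted_distribution["writes"]
    weighted = reads + 2 * writes  # writes consume two (mirrored) slots
    return {
        "read_slots": int(total_slots * reads / weighted),
        "mirrored_write_slots": int(total_slots * 2 * writes / weighted),
    }

# Prediction reported by the host for the next interval (hypothetical numbers).
print(provision_cache(100_000, {"reads": 70, "writes": 30}))
# {'read_slots': 53846, 'mirrored_write_slots': 46153}
```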
  • Publication number: 20210173782
    Abstract: Embodiments of the present disclosure relate to cache memory management. Based on anticipated input/output (I/O) workloads, the sizes of one or more mirrored and un-mirrored caches of global memory and of their respective cache slot pools are dynamically balanced. Each of the mirrored and un-mirrored caches can be segmented into one or more cache pools, each having slots of a distinct size. Each cache pool can be assigned an amount of the cache slots of its distinct size based on the anticipated I/O workloads. Cache pools can be further assigned distinctly sized cache slots based on expected service levels (SLs) of a customer, or based on one or more of predicted I/O request sizes and predicted frequencies of different I/O request sizes within the anticipated I/O workloads.
    Type: Application
    Filed: December 10, 2019
    Publication date: June 10, 2021
    Applicant: EMC IP Holding Company LLC
    Inventors: John Krasner, Ramesh Doddaiah
  • Publication number: 20210096997
    Abstract: Embodiments of the present disclosure relate to an apparatus comprising a memory and at least one processor. The at least one processor is configured to: analyze input/output (I/O) operations received by a storage system; dynamically predict anticipated I/O operations of the storage system based on the analysis; and dynamically control a size of a local cache of the storage system based on the anticipated I/O operations.
    Type: Application
    Filed: October 1, 2019
    Publication date: April 1, 2021
    Applicant: EMC IP Holding Company LLC
    Inventors: John Krasner, Ramesh Doddaiah
  • Publication number: 20210064403
    Abstract: One or more aspects of the present disclosure relate to allocating virtual memory to one or more virtual machines (VMs). The one or more VMs can be established by a hypervisor of a storage device. The virtual memory can be allocated to the established one or more VMs. The virtual memory can correspond to non-volatile (NV) memory of a global memory of the storage device.
    Type: Application
    Filed: August 27, 2019
    Publication date: March 4, 2021
    Applicant: EMC IP Holding Company LLC
    Inventors: Serge Pirotte, John Krasner, Chakib Ouarraoui, Mark Halstead