Patents by Inventor John Krasner
John Krasner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11956130
Abstract: The cause of a failure to satisfy a high priority service level objective of a storage object or storage group is localized within a storage array. Storage objects that have been assigned low priority service level objectives are analyzed to determine whether their performance paths overlap with the performance path of the high priority service level objective storage object or storage group at the location of the cause of the failure. The low priority service level objective storage objects having performance paths that overlap with the performance path of the high priority service level objective storage object or storage group at the location of the cause of the failure are targeted for IO data rate reduction in order to free resources to restore compliance with the high priority service level objective. The other low priority service level objective storage objects are not targeted.
Type: Grant
Filed: October 3, 2022
Date of Patent: April 9, 2024
Assignee: DELL PRODUCTS L.P.
Inventors: John Creed, Arieh Don, John Krasner
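The targeting step in this abstract can be sketched as a small routine. This is an illustrative assumption, not Dell's implementation: the `StorageObject` class, the path sets, and the rate-limit figures are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class StorageObject:
    name: str
    priority: str                            # "high" or "low"
    path: set = field(default_factory=set)   # components the IO path crosses
    io_rate_limit: float = 1000.0            # illustrative IOPS cap

def throttle_overlapping(objects, failed_component, factor=0.5):
    """Cut the IO rate limit of low-priority objects whose performance path
    crosses the component where the SLO failure was localized; leave the
    other low-priority objects untouched."""
    throttled = []
    for obj in objects:
        if obj.priority == "low" and failed_component in obj.path:
            obj.io_rate_limit *= factor
            throttled.append(obj.name)
    return throttled
```

A low-priority object sharing only non-failed components keeps its full rate, matching the abstract's point that untargeted objects are left alone.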
-
Publication number: 20240113946
Abstract: The cause of a failure to satisfy a high priority service level objective of a storage object or storage group is localized within a storage array. Storage objects that have been assigned low priority service level objectives are analyzed to determine whether their performance paths overlap with the performance path of the high priority service level objective storage object or storage group at the location of the cause of the failure. The low priority service level objective storage objects having performance paths that overlap with the performance path of the high priority service level objective storage object or storage group at the location of the cause of the failure are targeted for IO data rate reduction in order to free resources to restore compliance with the high priority service level objective. The other low priority service level objective storage objects are not targeted.
Type: Application
Filed: October 3, 2022
Publication date: April 4, 2024
Applicant: DELL PRODUCTS L.P.
Inventors: John Creed, Arieh Don, John Krasner
-
Patent number: 11762770
Abstract: One or more aspects of the present disclosure relate to cache memory management. In embodiments, a global memory of a storage array can be dynamically partitioned into one or more cache partitions based on an anticipated activity of one or more input/output (IO) service level (SL) workload volumes.
Type: Grant
Filed: October 22, 2020
Date of Patent: September 19, 2023
Assignee: EMC IP Holding Company LLC
Inventors: John Creed, John Krasner
-
Patent number: 11640311
Abstract: One or more aspects of the present disclosure relate to allocating virtual memory to one or more virtual machines (VMs). The one or more VMs can be established by a hypervisor of a storage device. The virtual memory can be allocated to the established one or more VMs. The virtual memory can correspond to non-volatile (NV) memory of a global memory of the storage device.
Type: Grant
Filed: August 27, 2019
Date of Patent: May 2, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Serge Pirotte, John Krasner, Chakib Ouarraoui, Mark Halstead
-
Patent number: 11625327
Abstract: Embodiments of the present disclosure relate to cache memory management. Based on anticipated input/output (I/O) workloads, the sizes of one or more mirrored and un-mirrored caches of global memory and their respective cache slot pools are dynamically balanced. Each of the mirrored/unmirrored caches can be segmented into one or more cache pools, each having slots of a distinct size. Each cache pool can be assigned an amount of the one or more cache slots of the distinct size based on the anticipated I/O workloads. Cache pools can further be assigned the amount of distinctly sized cache slots based on expected service levels (SLs) of a customer. Cache pools can also be assigned the amount of the distinctly sized cache slots based on one or more of predicted I/O request sizes and predicted frequencies of different I/O request sizes of the anticipated I/O workloads.
Type: Grant
Filed: December 10, 2019
Date of Patent: April 11, 2023
Assignee: EMC IP Holding Company LLC
Inventors: John Krasner, Ramesh Doddaiah
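One way to picture the pool-assignment idea is a proportional split of a slot budget across pools keyed by slot size. The proportional policy, the slot sizes, and the rounding rule below are assumptions for illustration, not the claimed method.

```python
def balance_cache_pools(total_slots, predicted_freq):
    """Divide a fixed slot budget among cache pools of distinct slot sizes
    (keys, in KB) in proportion to the predicted fraction of I/O requests
    of each size (values)."""
    total = sum(predicted_freq.values())
    pools = {size: int(total_slots * f / total)
             for size, f in predicted_freq.items()}
    # Hand any rounding remainder to the pool with the busiest predicted size.
    busiest = max(predicted_freq, key=predicted_freq.get)
    pools[busiest] += total_slots - sum(pools.values())
    return pools
```

A service-level-aware variant would simply weight the predicted fractions before splitting.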
-
Patent number: 11609695
Abstract: A data model is trained to determine whether data is raw, compressed, and/or encrypted. The data model may also be trained to recognize which compression algorithm was used to compress data and predict compression ratios for the data using different compression algorithms. A storage system uses the data model to independently identify raw data. The raw data is grouped based on similarity of statistical features and group members are compressed with the same compression algorithm and may be encrypted after compression with the same encryption algorithm. The data model may also be used to identify sub-optimally compressed data, which may be uncompressed and grouped for compression using a different compression algorithm.
Type: Grant
Filed: September 2, 2020
Date of Patent: March 21, 2023
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: John Krasner, Sweetesh Singh
-
Patent number: 11561698
Abstract: A storage array that uses NVMEoF to interconnect compute nodes with NVME SSDs via a fabric and NVME offload engines implements flow control based on transaction latency. Transaction latency is the elapsed time between the send side completion message and receive side completion message for a single transaction. Counts of total transactions and over-latency-limit transactions are accumulated over a time interval. If the over limit rate exceeds a threshold, then the maximum allowed number of enqueued pending transactions is reduced. The maximum allowed number of enqueued pending transactions is periodically restored to a default value.
Type: Grant
Filed: April 20, 2021
Date of Patent: January 24, 2023
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Jinxian Xing, Julie Zhivich, John Krasner
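The reduce-then-restore policy in this abstract can be sketched as a small counter class. The latency limit, thresholds, and cap values below are invented defaults, and the interval boundary is modeled as an explicit method call rather than a timer.

```python
class LatencyFlowControl:
    """Per-interval flow control: count over-latency-limit transactions,
    shrink the pending-transaction cap when the over-limit rate exceeds a
    threshold, and restore the default cap otherwise."""

    def __init__(self, default_max_pending=64, latency_limit_us=500,
                 over_limit_threshold=0.05, reduced_max_pending=16):
        self.default_max_pending = default_max_pending
        self.max_pending = default_max_pending
        self.latency_limit_us = latency_limit_us
        self.over_limit_threshold = over_limit_threshold
        self.reduced_max_pending = reduced_max_pending
        self.total = 0
        self.over_limit = 0

    def record(self, send_done_us, recv_done_us):
        # Transaction latency = elapsed time between the send-side and
        # receive-side completion messages.
        self.total += 1
        if recv_done_us - send_done_us > self.latency_limit_us:
            self.over_limit += 1

    def end_interval(self):
        if self.total and self.over_limit / self.total > self.over_limit_threshold:
            self.max_pending = self.reduced_max_pending
        else:
            self.max_pending = self.default_max_pending  # periodic restore
        self.total = self.over_limit = 0
```

Restoring the default cap each quiet interval keeps a transient fabric slowdown from throttling the array permanently.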
-
Publication number: 20220334727
Abstract: A storage array that uses NVMEoF to interconnect compute nodes with NVME SSDs via a fabric and NVME offload engines implements flow control based on transaction latency. Transaction latency is the elapsed time between the send side completion message and receive side completion message for a single transaction. Counts of total transactions and over-latency-limit transactions are accumulated over a time interval. If the over limit rate exceeds a threshold, then the maximum allowed number of enqueued pending transactions is reduced. The maximum allowed number of enqueued pending transactions is periodically restored to a default value.
Type: Application
Filed: April 20, 2021
Publication date: October 20, 2022
Applicant: EMC IP HOLDING COMPANY LLC
Inventors: Jinxian Xing, Julie Zhivich, John Krasner
-
Patent number: 11416057
Abstract: One or more aspects of the present disclosure relate to data protection techniques in response to power disruptions. A power supply from a continuous power source for a storage device can be monitored. A power disruption event interrupting the power supply from the continuous power source can further be identified. In response to detecting such an event, a storage system can be switched to a backup power supply, and power consumption of one or more components of the storage device can be controlled based on information associated with each component and the amount of power available in the backup power supply. Further, one or more power interruption operations can be performed while the backup power supply includes sufficient power for performing the power interruption operations.
Type: Grant
Filed: July 27, 2020
Date of Patent: August 16, 2022
Assignee: EMC IP Holding Company LLC
Inventors: John Krasner, Clifford Lim, Sweetesh Singh
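The per-component power budgeting step might look like the greedy sketch below. The component names, priorities, and energy figures are invented, and the greedy keep-highest-priority policy is an assumed heuristic rather than the claimed technique.

```python
def plan_power_interruption(components, backup_wh):
    """Given (name, priority, watt_hours_needed) tuples and the energy
    available in the backup supply, keep components in descending priority
    order while they fit the budget; everything else is powered down so the
    power-interruption operations (e.g. destaging cached writes) can finish."""
    keep = sorted(components, key=lambda c: c[1], reverse=True)
    powered, budget = [], backup_wh
    for name, _prio, wh in keep:
        if wh <= budget:
            powered.append(name)
            budget -= wh
    return powered
```
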
-
Publication number: 20220179829
Abstract: A data model is trained to predict compressibility of binary data structures based on component entropy and predict relative compression efficiency for various compression algorithms based on component size. A recommendation engine in a storage system uses the data model to predict compressibility of binary data and determines whether to compress the binary data based on predicted compressibility. If the recommendation engine determines that compression of the binary data is justified, then a compression algorithm is recommended based on predicted relative compression efficiency. For example, the compression algorithm predicted to yield the greatest compression ratio or shortest compression/decompression time may be recommended.
Type: Application
Filed: December 7, 2020
Publication date: June 9, 2022
Applicant: EMC IP HOLDING COMPANY LLC
Inventors: John Krasner, Sweetesh Singh
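An entropy-based compressibility check is the simplest stand-in for the trained data model this abstract describes. The plain Shannon-entropy measure and the 7.2 bits-per-byte threshold below are assumptions for illustration; the patented model presumably learns richer features.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (0..8)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def recommend_compression(data: bytes, entropy_threshold=7.2) -> bool:
    """Near-maximal entropy suggests the data is already compressed or
    encrypted, so compressing it again is not justified; lower entropy
    suggests compression will pay off."""
    return byte_entropy(data) < entropy_threshold
```
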
-
Publication number: 20220129379
Abstract: One or more aspects of the present disclosure relate to cache memory management. In embodiments, a global memory of a storage array can be dynamically partitioned into one or more cache partitions based on an anticipated activity of one or more input/output (IO) service level (SL) workload volumes.
Type: Application
Filed: October 22, 2020
Publication date: April 28, 2022
Applicant: EMC IP Holding Company LLC
Inventors: John Creed, John Krasner
-
Publication number: 20220121571
Abstract: Remote cache slots are donated in a storage array without requiring a cache slot starved compute node to search for candidates in remote portions of a shared memory. One or more donor compute nodes create donor cache slots that are reserved for donation. The cache slot starved compute node broadcasts a message to the donor compute nodes indicating a need for donor cache slots. The donor compute nodes provide donor cache slots to the cache slot starved compute node in response to the message. The message may be broadcast by updating a mask of compute node operational status in the shared memory. The donor cache slots may be provided by providing pointers to the donor cache slots.Type: Application
Filed: October 20, 2020
Publication date: April 21, 2022
Applicant: EMC IP HOLDING COMPANY LLC
Inventors: John Creed, Steve Ivester, John Krasner, Kaustubh Sahasrabudhe
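A minimal sketch of the donor-slot protocol, under stated assumptions: the node and slot representations, the set-based status mask, and the synchronous collection call are all invented for illustration (real shared memory would use atomic updates and actual pointers).

```python
class ComputeNode:
    def __init__(self, name, spare_slots):
        self.name = name
        self.donor_slots = list(spare_slots)  # slots reserved for donation

class SharedMemory:
    """Stand-in for the shared-memory status mask used to broadcast need."""

    def __init__(self, nodes):
        self.nodes = nodes
        self.starved_mask = set()

    def broadcast_starved(self, node_name):
        # The starved node announces itself by updating the shared mask
        # instead of scanning remote memory for candidate slots.
        self.starved_mask.add(node_name)

    def collect_donations(self, node_name, wanted):
        """Donor nodes answer the broadcast with (donor, slot) pairs, i.e.
        pointers to pre-reserved donor cache slots."""
        donated = []
        for node in self.nodes:
            while node.donor_slots and len(donated) < wanted:
                donated.append((node.name, node.donor_slots.pop()))
        self.starved_mask.discard(node_name)
        return donated
```
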
-
Publication number: 20220066647
Abstract: A data model is trained to determine whether data is raw, compressed, and/or encrypted. The data model may also be trained to recognize which compression algorithm was used to compress data and predict compression ratios for the data using different compression algorithms. A storage system uses the data model to independently identify raw data. The raw data is grouped based on similarity of statistical features and group members are compressed with the same compression algorithm and may be encrypted after compression with the same encryption algorithm. The data model may also be used to identify sub-optimally compressed data, which may be uncompressed and grouped for compression using a different compression algorithm.
Type: Application
Filed: September 2, 2020
Publication date: March 3, 2022
Applicant: EMC IP HOLDING COMPANY LLC
Inventors: John Krasner, Sweetesh Singh
-
Patent number: 11243829
Abstract: Embodiments of the present disclosure relate to an apparatus comprising a memory and at least one processor. The at least one processor is configured to: dynamically obtain a plurality of metadata from a global memory of a storage system; dynamically predict anticipated metadata based on the dynamically obtained metadata, wherein anticipated metadata is relevant to anticipated input/output (I/O) operations of the storage system; and dynamically instruct the storage system to load anticipated metadata into the global memory.
Type: Grant
Filed: July 25, 2019
Date of Patent: February 8, 2022
Assignee: EMC IP Holding Company LLC
Inventors: John Krasner, Jason Duquette
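The observe-predict-load loop in this abstract can be pictured as below. The frequency-based predictor here is a toy stand-in for whatever model the patent actually claims, and the capacity-limited set standing in for global memory is an invented simplification.

```python
from collections import Counter

class MetadataPrefetcher:
    """Observe metadata accesses, predict which metadata the anticipated
    I/O will need, and preload it into (a stand-in for) global memory."""

    def __init__(self, global_memory_capacity=3):
        self.capacity = global_memory_capacity
        self.access_counts = Counter()
        self.global_memory = set()

    def observe(self, metadata_key):
        self.access_counts[metadata_key] += 1

    def preload(self):
        # Toy prediction: the most frequently accessed metadata is assumed
        # to be most relevant to upcoming I/O operations.
        top = self.access_counts.most_common(self.capacity)
        self.global_memory = {key for key, _ in top}
        return self.global_memory
```
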
-
Publication number: 20220026970
Abstract: One or more aspects of the present disclosure relate to data protection techniques in response to power disruptions. A power supply from a continuous power source for a storage device can be monitored. A power disruption event interrupting the power supply from the continuous power source can further be identified. In response to detecting such an event, a storage system can be switched to a backup power supply, and power consumption of one or more components of the storage device can be controlled based on information associated with each component and the amount of power available in the backup power supply. Further, one or more power interruption operations can be performed while the backup power supply includes sufficient power for performing the power interruption operations.
Type: Application
Filed: July 27, 2020
Publication date: January 27, 2022
Applicant: EMC IP Holding Company LLC
Inventors: John Krasner, Clifford Lim, Sweetesh Singh
-
Patent number: 11138123
Abstract: Embodiments of the present disclosure relate to an apparatus comprising a memory and at least one processor. The at least one processor is configured to: analyze input/output (I/O) operations received by a storage system; dynamically predict anticipated I/O operations of the storage system based on the analysis; and dynamically control a size of a local cache of the storage system based on the anticipated I/O operations.
Type: Grant
Filed: October 1, 2019
Date of Patent: October 5, 2021
Assignee: EMC IP Holding Company LLC
Inventors: John Krasner, Ramesh Doddaiah
-
Patent number: 11099754
Abstract: An apparatus comprises at least one processing device comprising a processor coupled to a memory. The at least one processing device is configured to receive, via a multi-path layer of at least one host device, at least one indication of a predicted distribution of input-output operations directed from the at least one host device to a storage system for a given time interval. The at least one processing device is also configured to determine a cache memory configuration for a cache memory associated with the storage system based at least in part on the at least one indication of the predicted distribution of input-output operations for the given time interval. The at least one processing device is further configured to provision the cache memory with the determined cache memory configuration for the given time interval.
Type: Grant
Filed: May 14, 2020
Date of Patent: August 24, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Sanjib Mallick, John Krasner, Arieh Don, Ramesh Doddaiah
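The determine-then-provision step can be sketched as turning a predicted distribution into a cache carve-up. The workload classes and the proportional-split heuristic below are assumptions for illustration, not the claimed configuration method.

```python
def cache_config_for_interval(cache_mb, predicted):
    """predicted maps a workload class (as a multi-path layer might report
    it) to the fraction of I/O expected in the coming interval; carve the
    cache proportionally and return MB per class."""
    total = sum(predicted.values())
    return {cls: round(cache_mb * frac / total)
            for cls, frac in predicted.items()}
```

A new configuration would be computed and applied at the start of each predicted interval.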
-
Publication number: 20210173782
Abstract: Embodiments of the present disclosure relate to cache memory management. Based on anticipated input/output (I/O) workloads, the sizes of one or more mirrored and un-mirrored caches of global memory and their respective cache slot pools are dynamically balanced. Each of the mirrored/unmirrored caches can be segmented into one or more cache pools, each having slots of a distinct size. Each cache pool can be assigned an amount of the one or more cache slots of the distinct size based on the anticipated I/O workloads. Cache pools can further be assigned the amount of distinctly sized cache slots based on expected service levels (SLs) of a customer. Cache pools can also be assigned the amount of the distinctly sized cache slots based on one or more of predicted I/O request sizes and predicted frequencies of different I/O request sizes of the anticipated I/O workloads.
Type: Application
Filed: December 10, 2019
Publication date: June 10, 2021
Applicant: EMC IP Holding Company LLC
Inventors: John Krasner, Ramesh Doddaiah
-
Publication number: 20210096997
Abstract: Embodiments of the present disclosure relate to an apparatus comprising a memory and at least one processor. The at least one processor is configured to: analyze input/output (I/O) operations received by a storage system; dynamically predict anticipated I/O operations of the storage system based on the analysis; and dynamically control a size of a local cache of the storage system based on the anticipated I/O operations.
Type: Application
Filed: October 1, 2019
Publication date: April 1, 2021
Applicant: EMC IP Holding Company LLC
Inventors: John Krasner, Ramesh Doddaiah
-
Publication number: 20210064403
Abstract: One or more aspects of the present disclosure relate to allocating virtual memory to one or more virtual machines (VMs). The one or more VMs can be established by a hypervisor of a storage device. The virtual memory can be allocated to the established one or more VMs. The virtual memory can correspond to non-volatile (NV) memory of a global memory of the storage device.
Type: Application
Filed: August 27, 2019
Publication date: March 4, 2021
Applicant: EMC IP Holding Company LLC
Inventors: Serge Pirotte, John Krasner, Chakib Ouarraoui, Mark Halstead