Patents by Inventor Jon I. Krasner

Jon I. Krasner has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11573833
    Abstract: Allocating CPU cores to a thread running in a system that supports multiple concurrent threads includes training a first model to optimize core allocations to threads using training data that includes performance data, initially allocating cores to threads based on the first model, and adjusting core allocations to threads based on a second model that uses run time data and run time performance measurements. The system may be a storage system. The training data may include I/O workload data obtained at customer sites. The I/O workload data may include data about I/O rates, thread execution times, system response times, and Logical Block Addresses. The training data may include data from a site that is expected to run the second model. The first model may categorize storage system workloads and determine core allocations for different categories of workloads. Initially allocating cores to threads may include using information from the first model.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: February 7, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Jon I. Krasner, Edward P. Goodwin
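The two-stage scheme in the abstract above — a first model trained offline on workload data that proposes an initial core allocation, and a second model that adjusts it from run-time measurements — can be illustrated with a toy sketch. This is an assumption-laden illustration, not the patented implementation; the category names, thresholds, and feedback rule are all invented for clarity.

```python
# Illustrative sketch of a two-model core-allocation scheme (not the patent's
# actual method). First model: learned offline from workload training data.
# Second model: a run-time feedback rule on measured response time.

def train_first_model(training_data):
    """Map each workload category to the average best core count seen in training."""
    by_category = {}
    for sample in training_data:  # e.g. derived from I/O rates, thread times, LBAs
        by_category.setdefault(sample["category"], []).append(sample["best_cores"])
    return {cat: round(sum(v) / len(v)) for cat, v in by_category.items()}

def initial_allocation(model, category, default=2):
    """Initially allocate cores using information from the first model."""
    return model.get(category, default)

def adjust_allocation(cores, runtime_response_ms, target_ms, max_cores=16):
    """Second model: adjust the allocation from run-time performance measurements."""
    if runtime_response_ms > target_ms and cores < max_cores:
        return cores + 1
    if runtime_response_ms < 0.5 * target_ms and cores > 1:
        return cores - 1
    return cores

# Hypothetical training data, standing in for I/O workload data from customer sites.
training = [
    {"category": "read-heavy", "best_cores": 4},
    {"category": "read-heavy", "best_cores": 6},
    {"category": "write-heavy", "best_cores": 8},
]
model = train_first_model(training)
cores = initial_allocation(model, "read-heavy")                            # 5
cores = adjust_allocation(cores, runtime_response_ms=12.0, target_ms=8.0)  # 6
```

In this toy version the "first model" is just a per-category average; the patent's claims cover trained models generally, not this particular rule.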
  • Patent number: 11392306
    Abstract: Memory of a storage system is made available (i.e., exposed) for use as host memory of a host, for example, as an extension of the main memory of the host. The host may be directly connected to an internal fabric of the data storage system. Portions of the storage system memory (SSM) may be allocated for use as host memory, and this may be communicated to the host system. The host OS and applications executing thereon then may make use of the SSM as if it were memory of the host system, for example, as second-tier persistent memory. The amount of SSM made available may be dynamically increased and decreased. The SSM may be accessed by the host system as memory; i.e., in accordance with memory-based instructions, for example, using remote direct memory access instructions. The SSM may be write protected using mirroring, vaulting and other techniques.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: July 19, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Jon I. Krasner, Arieh Don, Yaron Dar
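The abstract above describes exposing portions of storage system memory (SSM) to a host and dynamically growing and shrinking the exposed amount. The following minimal sketch models only that bookkeeping; the class, sizes, and method names are assumptions for illustration, and the real mechanism (internal-fabric attachment, RDMA access, mirroring/vaulting protection) is not modeled.

```python
# Toy model of dynamically exposing storage system memory (SSM) to a host as
# second-tier memory. Purely illustrative accounting, not the patented design.

class SSMExposure:
    def __init__(self, total_bytes):
        self.total = total_bytes
        self.exposed = 0  # bytes currently offered to the host

    def grow(self, nbytes):
        """Offer more SSM to the host, capped at the system's capacity."""
        nbytes = min(nbytes, self.total - self.exposed)
        self.exposed += nbytes
        return nbytes  # amount actually added; communicated to the host system

    def shrink(self, nbytes):
        """Reclaim SSM from the host (the host must release the range first)."""
        nbytes = min(nbytes, self.exposed)
        self.exposed -= nbytes
        return nbytes

ssm = SSMExposure(total_bytes=64 << 30)  # 64 GiB of storage-system memory
added = ssm.grow(16 << 30)               # expose 16 GiB to the host
reclaimed = ssm.shrink(4 << 30)          # later reclaim 4 GiB
```

The host side — treating the exposed range as memory and issuing RDMA-style memory instructions against it — is outside this sketch.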
  • Publication number: 20210342078
    Abstract: Memory of a storage system is made available (i.e., exposed) for use as host memory of a host, for example, as an extension of the main memory of the host. The host may be directly connected to an internal fabric of the data storage system. Portions of the storage system memory (SSM) may be allocated for use as host memory, and this may be communicated to the host system. The host OS and applications executing thereon then may make use of the SSM as if it were memory of the host system, for example, as second-tier persistent memory. The amount of SSM made available may be dynamically increased and decreased. The SSM may be accessed by the host system as memory; i.e., in accordance with memory-based instructions, for example, using remote direct memory access instructions. The SSM may be write protected using mirroring, vaulting and other techniques.
    Type: Application
    Filed: May 1, 2020
    Publication date: November 4, 2021
    Applicant: EMC IP Holding Company LLC
    Inventors: Jon I. Krasner, Arieh Don, Yaron Dar
  • Patent number: 11003539
    Abstract: Offload processing may be provided that is not dedicated to a primary processor or a subset of primary processors. A system may have one or more offload processing devices, including one or more APUs, coupled to data storage slots of the system, which can be shared by multiple primary processors of the system. Each offload processing device may be configured to be coupled to a storage slot, for example, as if the device were a storage drive, and include an interface in conformance with a version of an NVMe specification and may have a form factor in accordance with the U.2 specification. The APU within each offload processing device may be communicatively coupled to one or more primary processors by switching fabric disposed between the one or more primary processors and the storage slot to which the offload processing device is connected.
    Type: Grant
    Filed: January 15, 2019
    Date of Patent: May 11, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Jon I. Krasner, Jonathan P. Sprague, Jason J. Duquette
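The key idea in the abstract above is that offload devices sit in ordinary drive slots behind a switching fabric, so any primary processor can reach any offload APU rather than each processor owning a dedicated one. The sketch below models only that routing; slot numbers, task names, and the first-free scheduling policy are invented for illustration and are not claimed by the patent.

```python
# Hypothetical model of shared offload processing: APU devices in storage slots,
# reachable from any primary processor via a switching fabric. Not the patented
# implementation; the dispatch policy here is a trivial first-free search.

class OffloadDevice:
    def __init__(self, slot):
        self.slot = slot   # storage slot occupied by the U.2-form-factor device
        self.busy = False

class SwitchingFabric:
    """Sits between the primary processors and the offload devices' slots."""

    def __init__(self, devices):
        self.devices = devices

    def dispatch(self, primary_cpu, task):
        """Route a task from any primary CPU to the first free offload device."""
        for dev in self.devices:
            if not dev.busy:
                dev.busy = True
                return f"cpu{primary_cpu}->slot{dev.slot}:{task}"
        return None  # all shared offload devices are busy

fabric = SwitchingFabric([OffloadDevice(slot=3), OffloadDevice(slot=4)])
r1 = fabric.dispatch(0, "compress")  # cpu0 routed to the APU in slot 3
r2 = fabric.dispatch(1, "hash")      # a different CPU shares the same pool
```

Because the fabric, not a fixed wiring, connects CPUs to devices, both dispatches above succeed even though they come from different primary processors.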
  • Publication number: 20210034419
    Abstract: Allocating CPU cores to a thread running in a system that supports multiple concurrent threads includes training a first model to optimize core allocations to threads using training data that includes performance data, initially allocating cores to threads based on the first model, and adjusting core allocations to threads based on a second model that uses run time data and run time performance measurements. The system may be a storage system. The training data may include I/O workload data obtained at customer sites. The I/O workload data may include data about I/O rates, thread execution times, system response times, and Logical Block Addresses. The training data may include data from a site that is expected to run the second model. The first model may categorize storage system workloads and determine core allocations for different categories of workloads. Initially allocating cores to threads may include using information from the first model.
    Type: Application
    Filed: July 31, 2019
    Publication date: February 4, 2021
    Applicant: EMC IP Holding Company LLC
    Inventors: Jon I. Krasner, Edward P. Goodwin
  • Patent number: 10795612
    Abstract: Offload processing may be provided that is not dedicated to a primary processor or a subset of primary processors. A system may have one or more offload processors, for example, GPUs, coupled to data storage slots of the system, which can be shared by multiple primary processors of the system. The offload processor(s) may be housed within a device configured to be coupled to a storage slot, for example, as if the device were a storage drive. The one or more offload processors may be housed within a device that includes an interface in conformance with a version of an NVMe specification and may have a form factor in accordance with the U.2 specification. Offload processing devices may be communicatively coupled to one or more primary processors by switching fabric disposed between the one or more primary processors and the storage slot to which the offload processing device is connected.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: October 6, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Jon I. Krasner, Jason J. Duquette, Jonathan P. Sprague
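This earlier grant describes the GPU variant of the same slot-based offload idea: the offload processor is housed in a device that presents an NVMe interface in a U.2 form factor, as if it were a storage drive. The toy enumeration below distinguishes such devices from ordinary drives by a device-class tag; the field name and values are illustrative assumptions, not taken from the NVMe specification.

```python
# Illustrative slot inventory: GPU offload devices plug into the same slots as
# drives and present a drive-like (NVMe, U.2) interface. The "class" tag used
# to tell them apart here is an invented convenience, not a real NVMe field.

SLOT_INVENTORY = [
    {"slot": 0, "class": "nvme-ssd"},
    {"slot": 1, "class": "offload-gpu"},  # GPU device presenting as a drive
    {"slot": 2, "class": "nvme-ssd"},
    {"slot": 3, "class": "offload-gpu"},
]

def offload_slots(inventory):
    """Return the slots holding shareable offload processors."""
    return [d["slot"] for d in inventory if d["class"] == "offload-gpu"]

print(offload_slots(SLOT_INVENTORY))  # the slots a switching fabric would share
```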
  • Publication number: 20200226027
    Abstract: Offload processing may be provided that is not dedicated to a primary processor or a subset of primary processors. A system may have one or more offload processing devices, including one or more APUs, coupled to data storage slots of the system, which can be shared by multiple primary processors of the system. Each offload processing device may be configured to be coupled to a storage slot, for example, as if the device were a storage drive, and include an interface in conformance with a version of an NVMe specification and may have a form factor in accordance with the U.2 specification. The APU within each offload processing device may be communicatively coupled to one or more primary processors by switching fabric disposed between the one or more primary processors and the storage slot to which the offload processing device is connected.
    Type: Application
    Filed: January 15, 2019
    Publication date: July 16, 2020
    Applicant: EMC IP Holding Company LLC
    Inventors: Jon I. Krasner, Jonathan P. Sprague, Jason J. Duquette
  • Publication number: 20200042234
    Abstract: Offload processing may be provided that is not dedicated to a primary processor or a subset of primary processors. A system may have one or more offload processors, for example, GPUs, coupled to data storage slots of the system, which can be shared by multiple primary processors of the system. The offload processor(s) may be housed within a device configured to be coupled to a storage slot, for example, as if the device were a storage drive. The one or more offload processors may be housed within a device that includes an interface in conformance with a version of an NVMe specification and may have a form factor in accordance with the U.2 specification. Offload processing devices may be communicatively coupled to one or more primary processors by switching fabric disposed between the one or more primary processors and the storage slot to which the offload processing device is connected.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Applicant: EMC IP Holding Company LLC
    Inventors: Jon I. Krasner, Jason J. Duquette, Jonathan P. Sprague