Patents Examined by Glenn Gossage
  • Patent number: 10346306
    Abstract: Methods and apparatuses relating to memory performance monitoring are described, including a processor and method that utilize a monitor flag and first and second allocators for allocating virtual memory regions.
    Type: Grant
    Filed: April 2, 2016
    Date of Patent: July 9, 2019
    Assignee: Intel Corporation
    Inventors: Amitabha Roy, Subramanya R. Dulloor, Rajesh M. Sankaran
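    Illustrative sketch (not part of the patent record): a minimal Python model of routing an allocation through one of two allocators based on a monitor flag, so only flagged virtual memory regions incur monitoring overhead; all class and function names are invented.
      # Hypothetical model, not the patented processor: the monitor flag selects
      # between a plain allocator and one that records performance-monitoring events.
      class PlainAllocator:
          def allocate(self, size):
              return {"size": size, "monitored": False}
      class MonitoringAllocator:
          def __init__(self):
              self.events = []                      # invented event log for monitoring data
          def allocate(self, size):
              self.events.append(("alloc", size))
              return {"size": size, "monitored": True}
      first, second = PlainAllocator(), MonitoringAllocator()
      def allocate_region(size, monitor_flag):
          # The flag decides which allocator serves the virtual memory region.
          return (second if monitor_flag else first).allocate(size)
      print(allocate_region(4096, monitor_flag=True))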
  • Patent number: 10339072
    Abstract: A system with memory includes a repeater architecture where the memory connects to a host with one bandwidth, and a repeater extends a channel with a lower bandwidth. A memory circuit includes a first group of memory devices coupled point-to-point to a host device via a first group of read signal lines. The memory circuit includes a second group of memory devices coupled point-to-point to the first group of memory devices via a second group of read signal lines to extend the memory channel to the second group of memory devices. The second group of read signal lines has fewer read signal lines than the first group. The memory circuit includes a repeater to share read bandwidth between the first and second groups of memory devices, with up to a portion of the bandwidth for reads to the second group of memory devices, and at least an amount equal to the bandwidth less the portion for reads to the first group of memory devices.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: July 2, 2019
    Assignee: Intel Corporation
    Inventors: Bill Nale, Pete D. Vogt
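    Illustrative sketch (not the circuit): back-of-the-envelope arithmetic for the read-bandwidth split the abstract describes, where the extended (second) group gets at most a fixed portion of the channel bandwidth and the first group keeps at least the remainder; all numbers are invented.
      # Hypothetical numbers in GB/s; only the up-to/at-least split follows the abstract.
      def split_read_bandwidth(total, extended_portion, demand_first, demand_second):
          to_second = min(demand_second, extended_portion)   # second group: up to the portion
          to_first = min(demand_first, total - to_second)    # first group: at least total - portion
          return to_first, to_second
      print(split_read_bandwidth(total=24, extended_portion=8, demand_first=20, demand_second=12))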
  • Patent number: 10339050
    Abstract: An apparatus, memory controller, memory module and method are provided for controlling data transfer in memory. The apparatus comprises a memory controller and a plurality of memory modules. The memory controller orchestrates direct data transfer by issuing a first direct transfer command to a first memory module and a second direct transfer command to a second memory module. The first memory module is responsive to receipt of the first direct transfer command to directly transmit the data for receipt by the second memory module in a way that bypasses the memory controller. The second memory module is responsive to the second direct transfer command to receive the data from the first memory module directly, rather than via the memory controller. One of the first and second memory modules may be used as a cache for data stored in the other memory module. The direct data transfer may comprise a data move or a data copy operation.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: July 2, 2019
    Assignee: Arm Limited
    Inventors: Andreas Hansson, Wendy Arnott Elsasser, Michael Andrew Campbell
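    Illustrative sketch (invented classes, not the Arm design): the controller only issues the pair of direct-transfer commands; the data moves module-to-module and never passes through the controller.
      class MemoryModule:
          def __init__(self, name):
              self.name, self.store, self.inbox = name, {}, {}
          def send_direct(self, addr, peer):        # reaction to the first direct transfer command
              peer.inbox[addr] = self.store[addr]
          def receive_direct(self, addr):           # reaction to the second direct transfer command
              self.store[addr] = self.inbox.pop(addr)
      class MemoryController:
          def direct_transfer(self, src, dst, addr):
              # Orchestration only: the payload bypasses the controller entirely.
              src.send_direct(addr, dst)
              dst.receive_direct(addr)
      a, b = MemoryModule("module-A"), MemoryModule("module-B")
      a.store[0x100] = b"cache line"
      MemoryController().direct_transfer(a, b, 0x100)
      print(b.store[0x100])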
  • Patent number: 10331567
    Abstract: A prefetch circuit may include a memory, each entry of which may store an address and other prefetch data used to generate prefetch requests. For each entry, there may be at least one “quality factor” (QF) that may control prefetch request generation for that entry. A global quality factor (GQF) may control generation of prefetch requests across the plurality of entries. The prefetch circuit may include one or more additional prefetch mechanisms. For example, a stride-based prefetch circuit may be included that may generate prefetch requests for strided access patterns having strides larger than a certain stride size. Another example is a spatial memory streaming (SMS)-based mechanism in which prefetch data from multiple evictions from the memory in the prefetch circuit is captured and used for SMS prefetching based on how well the prefetch data appears to match a spatial memory streaming pattern.
    Type: Grant
    Filed: February 17, 2017
    Date of Patent: June 25, 2019
    Assignee: Apple Inc.
    Inventors: Stephan G. Meier, Tyler J. Huberty, Nikhil Gupta, Francesco Spadini, Gideon Levinsky
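    Illustrative sketch (invented credit values, not Apple's implementation): per-entry and global quality factors act as credit pools, a prefetch request is generated only when both have credit, and useful prefetches replenish the credit.
      ENTRY_COST, GLOBAL_COST = 4, 1                # hypothetical credit costs
      class PrefetchEntry:
          def __init__(self, addr, qf=16):
              self.addr, self.qf = addr, qf
      class PrefetchCircuit:
          def __init__(self, global_qf=64):
              self.global_qf = global_qf
          def maybe_issue(self, entry):
              if entry.qf >= ENTRY_COST and self.global_qf >= GLOBAL_COST:
                  entry.qf -= ENTRY_COST            # spend per-entry credit
                  self.global_qf -= GLOBAL_COST     # spend global credit
                  return f"prefetch 0x{entry.addr:x}"
              return None                           # request generation throttled
          def on_useful_prefetch(self, entry, refund=8):
              entry.qf += refund                    # accuracy feedback restores credit
              self.global_qf += refund
      circuit, entry = PrefetchCircuit(), PrefetchEntry(0x4000)
      print(circuit.maybe_issue(entry), entry.qf, circuit.global_qf)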
  • Patent number: 10296249
    Abstract: Systems and methods for processing non-contiguous submission and completion queues are disclosed. Non-Volatile Memory Express (NVMe) implements a paired submission queue and completion queue mechanism, with host software on a host device placing commands into the submission queue. The submission and completion queues may be contiguous or non-contiguous in host device memory. Non-contiguous queues may be defined by a link to a list on the host device that lists the non-contiguous sections in memory. In practice, the memory device stores the list in one type of memory (such as a dynamic random access memory (DRAM) cache) and the link in a different type of memory (such as always-on memory or non-volatile memory). In this way, the link may be accessed in various modes (such as low power mode) in order to recreate the list in DRAM.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: May 21, 2019
    Assignee: Western Digital Technologies, Inc.
    Inventor: Shay Benisty
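    Illustrative sketch (invented data structures): only the link to the host's segment list is kept in always-on memory, so after a low-power exit the device can walk the host list again and recreate the full list in DRAM.
      host_memory = {0x9000: [0x1000, 0x2000, 0x7000]}    # host-side list of queue segments
      class MemoryDevice:
          def __init__(self, link):
              self.always_on = {"queue_list_link": link}  # survives low-power mode
              self.dram = {}                              # contents lost in low-power mode
          def rebuild_list(self):
              link = self.always_on["queue_list_link"]
              self.dram["segments"] = list(host_memory[link])
          def enter_low_power(self):
              self.dram.clear()
      dev = MemoryDevice(link=0x9000)
      dev.rebuild_list(); dev.enter_low_power(); dev.rebuild_list()
      print(dev.dram["segments"])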
  • Patent number: 10296368
    Abstract: Hypervisor-independent block-level live browse is used for directly accessing backed up virtual machine (VM) data. Hypervisor-free file-level recovery (block-level pseudo-mount) from backed up VMs also is disclosed. Backed up virtual machine (“VM”) data can be browsed without needing or using a hypervisor. Individual backed up VM files can be requested and restored to anywhere without a hypervisor and without the need to restore the rest of the backed up virtual disk. Hypervisor-agnostic VM backups can be browsed and recovered without a hypervisor and from anywhere, and individual backed up VM files can be restored to anywhere, e.g., to a different VM platform, to a non-VM environment, without restoring an entire virtual disk, and without a recovery data agent at the destination.
    Type: Grant
    Filed: February 17, 2017
    Date of Patent: May 21, 2019
    Assignee: Commvault Systems, Inc.
    Inventors: Henry Wallace Dornemann, Rahul S. Pawar, Amit Mitkar, Sunil Kumar Gutta, Sumedh Pramod Degaonkar, Jianwei Chen
  • Patent number: 10289752
    Abstract: A processor may include a gather-update-scatter accelerator, and an allocator comprising circuitry to direct an instruction to the accelerator for execution. The instruction may include a search index, an operation to be performed, and a scalar data value. The accelerator may include a content-addressable memory (CAM) storing multiple entries, each of which stores a respective index key and a data value associated with the index key. The accelerator may include a CAM controller, which includes circuitry. The CAM controller may be configured to select, based on the information in the instruction, one of the plurality of entries in the CAM on which to operate. The CAM controller may be configured to perform an arithmetic or logical operation on the selected entry dependent on the information in the instruction. The CAM controller may be configured to store a result of the operation in the selected entry in the CAM.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: May 14, 2019
    Assignee: Intel Corporation
    Inventors: Ganesh Venkatesh, Nicholas P. Carter, Deborah T. Marr
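    Illustrative sketch (the CAM is modelled as a Python dict and the operation names are invented): one instruction carries a search index, an operation, and a scalar; the controller selects the matching entry, applies the operation, and writes the result back in place.
      import operator
      class GatherUpdateScatterAccelerator:
          OPS = {"add": operator.add, "mul": operator.mul,
                 "or": operator.or_, "and": operator.and_}
          def __init__(self):
              self.cam = {}                         # index key -> associated data value
          def execute(self, search_index, op, scalar):
              if search_index not in self.cam:      # allocate on miss (illustrative choice)
                  self.cam[search_index] = 0
              result = self.OPS[op](self.cam[search_index], scalar)
              self.cam[search_index] = result       # result stored back into the selected entry
              return result
      acc = GatherUpdateScatterAccelerator()
      acc.execute(42, "add", 5)
      print(acc.execute(42, "add", 3))              # 8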
  • Patent number: 10282294
    Abstract: A system and method for mitigating overhead for accessing metadata for a cache in a hybrid memory module are disclosed. The method includes: providing a hybrid memory module including a DRAM cache, a flash memory, and an SRAM for storing a metadata cache; obtaining a host address including a DRAM cache tag and a DRAM cache index; and obtaining a metadata address from the DRAM cache index, wherein the metadata address includes a metadata cache tag and a metadata cache index. The method further includes determining a metadata cache hit based on a presence of a matching metadata cache entry in the metadata cache stored in the SRAM; in a case of a metadata cache hit, obtaining a cached copy of data included in the DRAM cache and skipping access to metadata included in the DRAM cache; and returning the data obtained from the DRAM cache to a host computer. The SRAM may further store a Bloom filter, and a potential DRAM cache hit may be determined based on a result of a Bloom filter test.
    Type: Grant
    Filed: May 4, 2017
    Date of Patent: May 7, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Mu-Tien Chang, Dimin Niu, Hongzhong Zheng
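    Illustrative sketch (invented bit widths and a toy Bloom filter): a hit in the SRAM metadata cache lets the controller read the DRAM cache directly and skip the DRAM metadata access; on a metadata miss, the Bloom filter indicates whether probing the DRAM cache is worthwhile.
      import hashlib
      INDEX_BITS, META_INDEX_BITS = 12, 6           # hypothetical field widths
      def bloom_maybe_present(bloom_bits, tag):
          h = int.from_bytes(hashlib.sha256(str(tag).encode()).digest()[:4], "big")
          return all(bloom_bits[(h >> s) % len(bloom_bits)] for s in (0, 8, 16))
      def lookup(host_addr, sram_metadata_cache, bloom_bits):
          dram_index = (host_addr >> 6) & ((1 << INDEX_BITS) - 1)   # drop cache-line offset
          dram_tag = host_addr >> (6 + INDEX_BITS)
          meta_index = dram_index & ((1 << META_INDEX_BITS) - 1)
          meta_tag = dram_index >> META_INDEX_BITS
          entry = sram_metadata_cache.get(meta_index)
          if entry and entry["tag"] == meta_tag:
              return "metadata cache hit: read DRAM data, skip DRAM metadata"
          if bloom_maybe_present(bloom_bits, dram_tag):
              return "potential DRAM cache hit: probe DRAM metadata"
          return "DRAM cache miss: fetch from flash"
      print(lookup(0x1ABCD40, {0x35: {"tag": 0x3C}}, bloom_bits=[1] * 64))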
  • Patent number: 10282309
    Abstract: Systems, apparatuses, and methods for implementing per-page control of physical address space distribution among memory modules are disclosed. A computing system includes a plurality of processing units coupled to a plurality of memory modules. A determination is made as to which physical address space distribution granularity to implement for physical memory pages allocated for a first data structure. The determination can be made on a per-data-structure basis (e.g., file, page, block, etc.) or on a per-application basis. A physical address space distribution granularity is encoded as a property of each physical memory page allocated for the first data structure, and the physical memory pages of the first data structure are distributed across the plurality of memory modules based on the selected physical address space distribution granularity.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: May 7, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Nuwan S. Jayasena, Hyojong Kim, Hyesoon Kim
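    Illustrative sketch (invented granularities): each physical page carries its own distribution granularity, which decides how addresses inside the page are spread across the memory modules.
      NUM_MODULES, PAGE_SIZE = 4, 4096
      page_granularity = {0: 64, 1: 4096}           # page 0: 64 B interleave; page 1: whole page together
      def module_for(addr):
          page = addr // PAGE_SIZE
          gran = page_granularity[page]             # granularity encoded as a per-page property
          return (addr // gran) % NUM_MODULES
      print(module_for(0x00), module_for(0x40), module_for(0x80))   # page 0 spreads across modules
      print(module_for(0x1000), module_for(0x1040))                 # page 1 stays on one module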
  • Patent number: 10275180
    Abstract: An Ethernet solid-state drive (Ethernet SSD or eSSD) system and corresponding method provide improved latency and throughput associated with storage functionalities. The eSSD system includes at least one primary SSD, at least one secondary SSD, an Ethernet switch, and a storage-offload engine (SoE) controller. The SoE controller may operate in a replication mode and/or an erasure-coding mode. In either mode, the SoE controller receives a first write command sent from a remote device to at least one primary SSD. In the replication mode, the SoE controller sends a second write command to the at least one secondary SSD to replicate data associated with the first write command at the at least one secondary SSD. In the erasure-coding mode, the SoE determines erasure codes associated with the first write command and manages distribution of the write data and associated erasure codes. The SoE controller may also receive read commands, data cloning commands and data movement commands from the remote device.
    Type: Grant
    Filed: July 19, 2017
    Date of Patent: April 30, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Chinnakrishnan Ballapuram, Ajay Sundar Raj, Robert Brennan
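    Illustrative sketch (toy SSD objects, XOR parity standing in for a real erasure code): the storage-offload engine intercepts the first write and either replicates it to the secondary SSDs or splits it into shards plus parity and distributes those.
      class Ssd:
          def __init__(self, name):
              self.name, self.blocks = name, {}
          def write(self, lba, data):
              self.blocks[lba] = data
      class StorageOffloadEngine:
          def __init__(self, primary, secondaries, mode="replication"):
              self.primary, self.secondaries, self.mode = primary, secondaries, mode
          def handle_write(self, lba, data):
              self.primary.write(lba, data)                     # first write command
              if self.mode == "replication":
                  for ssd in self.secondaries:
                      ssd.write(lba, data)                      # second write command(s)
              else:                                             # erasure-coding mode (XOR parity here)
                  half = len(data) // 2
                  a, b = data[:half], data[half:]
                  parity = bytes(x ^ y for x, y in zip(a, b))
                  for ssd, shard in zip(self.secondaries, (a, b, parity)):
                      ssd.write(lba, shard)
      soe = StorageOffloadEngine(Ssd("primary"), [Ssd("s1"), Ssd("s2"), Ssd("s3")], mode="erasure")
      soe.handle_write(7, b"ABCDWXYZ")
      print([s.blocks[7] for s in soe.secondaries])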
  • Patent number: 10261915
    Abstract: A processor architecture partitions on-chip data caches to efficiently cache translation entries alongside data, reducing conflicts between virtual-to-physical address translation and data accesses. The architecture includes processor cores that include a first level translation lookaside buffer (TLB) and a second level TLB located either internally within each processor core or shared across the processor cores. Furthermore, the architecture includes a second level data cache (e.g., located either internally within each processor core or shared across the processor cores) partitioned to store both data and translation entries. Furthermore, the architecture includes a third level data cache connected to the processor cores, where the third level data cache is partitioned to store both data and translation entries. The third level data cache is shared across the processor cores. The processor architecture can also include a data stack distance profiler and a translation stack distance profiler.
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: April 16, 2019
    Assignee: Board of Regents, The University Of Texas System
    Inventors: Lizy K. John, Yashwant Marathe, Jee Ho Ryoo, Nagendra Gulur
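    Illustrative sketch (invented way counts and LRU policy): one cache structure holds both data lines and translation entries, but each kind evicts only within its own partition, so translations and data do not thrash each other.
      from collections import OrderedDict
      class PartitionedCache:
          def __init__(self, data_ways=6, translation_ways=2):
              self.capacity = {"data": data_ways, "translation": translation_ways}
              self.ways = {"data": OrderedDict(), "translation": OrderedDict()}
          def access(self, kind, key, value):
              part = self.ways[kind]
              if key in part:
                  part.move_to_end(key)             # hit: refresh LRU position
                  return part[key]
              if len(part) >= self.capacity[kind]:
                  part.popitem(last=False)          # evict only inside this partition
              part[key] = value
              return value
      l2 = PartitionedCache()
      l2.access("translation", 0x1000, "phys 0x8000")   # translation entry cached alongside data
      l2.access("data", 0x8000, "cache line")
      print(list(l2.ways["translation"]), list(l2.ways["data"]))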
  • Patent number: 10261688
    Abstract: An apparatus and method for performing search and replace operations at a storage controller of a storage device are disclosed. The storage controller can receive a search command with one or more parameters that instructs the storage controller to search for a data pattern in data stored in a memory of the apparatus. The storage controller can locally search the data in the memory for the data pattern according to the parameters without transferring the data to a processor to perform the search. The parameters can include, but are not limited to, the data pattern or template to be searched, a data pattern length, a bit-mask, a logical block address (LBA) range, a byte offset, and an alignment parameter. Verdict bits can be provided to indicate data chunks in the memory that match the data pattern. Flags may define potential outputs to provide after searching, such as location and number of matches.
    Type: Grant
    Filed: April 2, 2016
    Date of Patent: April 16, 2019
    Assignee: Intel Corporation
    Inventors: Sanjeev Trika, Kshitij Doshi
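    Illustrative sketch (the medium is modelled as a list of data chunks and the parameter encoding is invented): the device scans locally using the pattern, bit-mask, LBA range, byte offset, and alignment, and returns one verdict bit per chunk instead of shipping the data to the host.
      def local_search(chunks, pattern, mask, lba_range, byte_offset, alignment):
          first, last = lba_range
          verdict_bits = []
          for lba in range(first, last + 1):
              chunk, hit = chunks[lba], False
              for pos in range(byte_offset, len(chunk) - len(pattern) + 1, alignment):
                  window = chunk[pos:pos + len(pattern)]
                  if all((w & m) == (p & m) for w, p, m in zip(window, pattern, mask)):
                      hit = True
                      break
              verdict_bits.append(1 if hit else 0)  # one verdict bit per searched chunk
          return verdict_bits
      chunks = [b"xxxxCAFExxxx", b"nothing here", b"..CAFF......"]
      print(local_search(chunks, pattern=b"CAFE", mask=bytes([0xFF, 0xFF, 0xFF, 0x00]),
                         lba_range=(0, 2), byte_offset=0, alignment=2))   # [1, 0, 1]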
  • Patent number: 10254970
    Abstract: Techniques for obtaining consistent read performance are disclosed that may include: receiving measured read I/O (input/output) response times for flash storage devices; and determining, in accordance with a specified allowable variation, whether a first of the measured read I/O response times for a first of the flash storage devices is inconsistent with respect to other ones of the measured read I/O response times. Responsive to determining that the first measured read I/O response time is inconsistent, first processing may be performed that corrects or alleviates the inconsistency of the first measured read I/O response time. The first processing may include varying the first measured read I/O response time of the first flash storage device by enforcing, for the first flash storage device, a write I/O workload limit, a read I/O workload limit, and an idle capacity limit. Data portions may be ranked and selected for data movement based on read workload, write workload, or idle capacity.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: April 9, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Owen Martin, Hui Wang, Malak Alshawabkeh, Adnan Sahin, Arieh Don, Xiaomei Liu
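    Illustrative sketch (median baseline, invented thresholds and limit values): a device whose measured read response time deviates from the others by more than the allowable variation is flagged, and per-device workload and idle-capacity limits are produced for it.
      from statistics import median
      def find_inconsistent(read_response_us, allowable_variation=0.20):
          baseline = median(read_response_us.values())
          return [dev for dev, rt in read_response_us.items()
                  if abs(rt - baseline) > allowable_variation * baseline]
      def corrective_limits(device):
          # Invented limits: throttle writes/reads and cap idle capacity on the outlier.
          return {"device": device, "write_io_limit": 2000,
                  "read_io_limit": 8000, "idle_capacity_limit": 0.10}
      measured = {"flash0": 210, "flash1": 205, "flash2": 480, "flash3": 215}
      print([corrective_limits(d) for d in find_inconsistent(measured)])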
  • Patent number: 10241908
    Abstract: Dynamically varying Over-Provisioning (OP) enables improvements in lifetime, reliability, and/or performance of a Solid-State Disk (SSD) and/or a flash memory therein. A host coupled to the SSD writes newer data to the SSD. If the newer host data is less random than older host data, then entropy of host data on the SSD decreases. In response, an SSD controller dynamically alters allocations of the flash memory, decreasing host allocation and increasing OP allocation. If the newer host data is more random, then the SSD controller dynamically increases the host allocation and decreases the OP allocation. The SSD controller dynamically allocates the OP allocation between host OP and system OP proportionally in accordance with a ratio of bandwidths of host and system data writes to the flash memory. Changes in allocations are selectively made in response to improved compression or deduplication of the host data, or in response to a host command.
    Type: Grant
    Filed: April 22, 2012
    Date of Patent: March 26, 2019
    Assignee: Seagate Technology LLC
    Inventor: Andrew John Tomlin
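    Illustrative sketch (the entropy-to-capacity mapping and all numbers are invented; only the direction of the adjustments follows the abstract): lower host-data entropy shrinks the host allocation and grows over-provisioning, and the OP is then split between host OP and system OP in proportion to write bandwidths.
      def host_allocation(raw_capacity_gb, host_data_entropy):
          # entropy in [0, 1]: more compressible data -> smaller host allocation, more OP
          host_gb = raw_capacity_gb * (0.70 + 0.20 * host_data_entropy)
          return host_gb, raw_capacity_gb - host_gb          # (host allocation, total OP)
      def split_op(total_op_gb, host_write_bw, system_write_bw):
          host_share = host_write_bw / (host_write_bw + system_write_bw)
          return total_op_gb * host_share, total_op_gb * (1 - host_share)
      host_gb, op_gb = host_allocation(512, host_data_entropy=0.6)
      print(host_gb, op_gb, split_op(op_gb, host_write_bw=300, system_write_bw=100))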
  • Patent number: 10235054
    Abstract: Methods and systems for caching utilize first and second caches, which may include a dynamic random-access memory (DRAM) cache and a next generation non-volatile memory (NGNVM) cache such as NAND flash memory. The methods and systems may be used for memory caching and/or page caching. The second caches are managed in an exclusive fashion, resulting in an aggregate cache having a storage capacity generally equal to the sum of the individual cache storage capacities. Cache free lists may be associated with the first and second page caches, and pages within a cache free list may be mapped back to an associated cache without accessing a backing store. Data can be migrated between the first cache and the second caches based upon access heuristics and application hints.
    Type: Grant
    Filed: December 9, 2014
    Date of Patent: March 19, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Roy E. Clark, Adrian Michaud
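    Illustrative sketch (invented sizes): a page lives in exactly one of the two caches, so the aggregate capacity is the sum of both; eviction from the DRAM cache migrates the page into the flash-backed cache instead of duplicating it.
      from collections import OrderedDict
      class ExclusiveCaches:
          def __init__(self, dram_pages=2, flash_pages=4):
              self.dram, self.flash = OrderedDict(), OrderedDict()
              self.dram_cap, self.flash_cap = dram_pages, flash_pages
          def get(self, page, backing_store):
              if page in self.dram:
                  self.dram.move_to_end(page)
                  return self.dram[page]
              data = self.flash.pop(page) if page in self.flash else backing_store[page]
              self._insert_dram(page, data)         # promotion moves (never copies) the page
              return data
          def _insert_dram(self, page, data):
              if len(self.dram) >= self.dram_cap:
                  victim, vdata = self.dram.popitem(last=False)
                  if len(self.flash) >= self.flash_cap:
                      self.flash.popitem(last=False)
                  self.flash[victim] = vdata        # demote the DRAM victim to the flash cache
              self.dram[page] = data
      caches, store = ExclusiveCaches(), {p: f"page-{p}" for p in range(10)}
      for p in (0, 1, 2, 3, 0):
          caches.get(p, store)
      print(list(caches.dram), list(caches.flash))  # no page appears in both caches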
  • Patent number: 10228885
    Abstract: Systems and methods are disclosed which facilitate management of thin provisioned data storage. Specifically, portions of thinly provisioned data stores may be deallocated when they contain invalid data, such as data deleted by a user. A user may transmit notifications, which may include delete notifications, such as TRIM commands, to a provider of the data store (or to the data store itself) that data has been deleted. A management component may modify the data store, or metadata corresponding to the data store, to reflect the deletion. The management component may further monitor portions of the data store to determine whether individual portions contain entirely invalid data. If so, the portion may be deallocated from the thin provisioned data store, resulting in more efficient thin provisioning. Deallocation may be enabled even where deletion notifications from a user do not correspond directly to allocated storage portions.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: March 12, 2019
    Assignee: Amazon Technologies, Inc.
    Inventor: Pradeep Vincent
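    Illustrative sketch (invented portion size and range encoding): delete notifications mark byte ranges invalid, and an allocated portion is released only once every byte in it is invalid, even when the notifications do not line up with portion boundaries.
      PORTION_SIZE = 4096
      class ThinVolume:
          def __init__(self, num_portions):
              self.allocated = set(range(num_portions))
              self.invalid = {p: set() for p in range(num_portions)}   # invalid offsets per portion
          def on_delete_notification(self, start, length):
              for off in range(start, start + length):                 # may straddle portions
                  p = off // PORTION_SIZE
                  self.invalid[p].add(off % PORTION_SIZE)
                  if len(self.invalid[p]) == PORTION_SIZE and p in self.allocated:
                      self.allocated.discard(p)                        # whole portion invalid: deallocate
      vol = ThinVolume(num_portions=3)
      vol.on_delete_notification(start=1024, length=4096)              # straddles portions 0 and 1
      vol.on_delete_notification(start=0, length=1024)                 # portion 0 now fully invalid
      print(sorted(vol.allocated))                                     # [1, 2]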
  • Patent number: 10222993
    Abstract: A computing device and method for establishing more direct access to a storage device from unprivileged code in an unprivileged storage architecture component. Using a storage infrastructure driver to discover and enumerate storage architecture component(s), a user-mode application requests at least one portion of the storage device to store application-related data corresponding to executing input/output activity. The storage infrastructure driver maps the at least one portion of the storage device to substantially match an address space associated with the application-related data and configures at least one path for the user-mode application to perform block-level input/output between the storage device and the unprivileged storage architecture component. A virtual function may be generated corresponding to at least one path between a computing device and the unprivileged storage architecture component to execute input/output activity using the address space.
    Type: Grant
    Filed: July 6, 2016
    Date of Patent: March 5, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dmitry Malloy, Dexter Paul Bradshaw, Suyash Sinha
  • Patent number: 10223276
    Abstract: Systems and methods for page cache management during migration are disclosed. A method may include initiating, by a processing device of a source host machine, a migration process for migration of a virtualized component from the source host machine to a destination host machine. The method may also include obtaining a list of outstanding store requests corresponding to the virtualized component, the outstanding store requests maintained in a page cache of the source host machine and transmitting the list to the destination host machine. The method may further include providing instructions to cancel the outstanding store requests in the page cache, and providing instructions to clear remaining entries associated with the virtualized component in the page cache. The virtualized component may include a virtual machine or a container, and the outstanding store requests may correspond to requests for non-shared resources, such as a memory page.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: March 5, 2019
    Assignee: Red Hat, Inc.
    Inventor: Michael Tsirkin
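    Illustrative sketch (invented data structures; send_to_destination is a stand-in for the real transport): the source host collects the outstanding store requests for the migrating component, sends the list to the destination, cancels them locally, and clears the component's remaining page cache entries.
      class SourceHost:
          def __init__(self):
              # page cache keyed by (component, page); True marks an outstanding store request
              self.page_cache = {("vm1", 0): True, ("vm1", 1): False, ("vm2", 0): True}
          def migrate(self, component, send_to_destination):
              outstanding = [page for (c, page), dirty in self.page_cache.items()
                             if c == component and dirty]
              send_to_destination(component, outstanding)     # destination reissues these stores
              for page in outstanding:
                  self.page_cache[(component, page)] = False  # cancel outstanding stores locally
              for key in [k for k in self.page_cache if k[0] == component]:
                  del self.page_cache[key]                    # clear remaining entries for the component
      src = SourceHost()
      src.migrate("vm1", send_to_destination=lambda c, pages: print("send", c, pages))
      print(src.page_cache)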
  • Patent number: 10216440
    Abstract: A method, computer program product, and computer system for disk management in a distributed storage system. The distributed storage system comprises a plurality of disks within a main disk ring, where the disks store target data. In one embodiment, the method comprises dividing the target data into cold target data and hot target data, and grouping one or more disks within the main disk ring into a cold data disk ring and the remaining one or more disks within the main disk ring into a hot data disk ring. The method further comprises migrating the cold target data on disks not within the cold data disk ring onto disks within the cold data disk ring while migrating the hot target data on disks not within the hot data disk ring onto disks within the hot data disk ring, and reducing a spinning rate of disks within the cold data disk ring.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: February 26, 2019
    Assignee: International Business Machines Corporation
    Inventors: Lei Chen, Li Chen, Xiaoyang Yang, Jun Wei Zhang
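    Illustrative sketch (the access-count threshold, ring split, and spin rates are invented; only the overall flow follows the abstract): disks are grouped into cold and hot rings, data sitting in the wrong ring is scheduled for migration, and the cold ring is assigned a lower spinning rate.
      def manage_rings(disks, placement, access_counts, hot_threshold=100, cold_fraction=0.5):
          n_cold = int(len(disks) * cold_fraction)
          cold_ring, hot_ring = set(disks[:n_cold]), set(disks[n_cold:])
          migrations, spin_rates = [], {}
          for obj, disk in placement.items():
              target = hot_ring if access_counts.get(obj, 0) >= hot_threshold else cold_ring
              if disk not in target:                           # data sitting in the wrong ring
                  migrations.append((obj, disk, sorted(target)[0]))
          for d in disks:
              spin_rates[d] = 7200 if d in hot_ring else 5400  # cold-ring disks spin slower
          return migrations, spin_rates
      disks = ["d0", "d1", "d2", "d3"]
      placement = {"objA": "d0", "objB": "d3"}
      print(manage_rings(disks, placement, access_counts={"objA": 500, "objB": 3}))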
  • Patent number: 10216432
    Abstract: Systems and techniques are provided for managing performance of a backup environment. A set of rules are stored, with each rule specifying a threshold value of a backup configuration parameter. Configurations of the backup environment are periodically obtained. Each obtained configuration includes a current value of the backup configuration parameter. A determination is made for each configuration as to whether the current value exceeds a suggested value, where the suggested value is based on the threshold value. If the current value exceeds the suggested value, an entry including an alert of a first type is written to a log. The log is analyzed, and if the frequency of entries in the log including alerts of the first type exceeds a threshold frequency, an entry including an alert of a second type, different from the first type, is written to the log. The threshold value of the backup configuration parameter may specify a maximum number of backup streams or a maximum number of backup clients, for example.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: February 26, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Gururaj Kulkarni, Shelesh Chopra, Vladimir Mandic
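    Illustrative sketch (the rule encoding, the 90% margin used to derive the suggested value, and the frequency threshold are invented): each periodic configuration sample that exceeds the suggested value logs a first-type alert, and too many first-type alerts for the same parameter escalate to a second-type alert.
      from collections import Counter
      def check_configuration(rules, current_config, log):
          for param, threshold in rules.items():
              suggested = 0.9 * threshold                      # suggested value derived from the threshold
              if current_config.get(param, 0) > suggested:
                  log.append({"param": param, "alert": "first"})
      def analyze_log(log, max_first_alerts=3):
          counts = Counter(e["param"] for e in log if e["alert"] == "first")
          for param, n in counts.items():
              if n > max_first_alerts:                         # too many first-type alerts: escalate
                  log.append({"param": param, "alert": "second"})
      rules = {"max_backup_streams": 100, "max_backup_clients": 500}
      log = []
      for observed in (95, 96, 92, 97, 88):                    # periodic samples of backup streams
          check_configuration(rules, {"max_backup_streams": observed}, log)
      analyze_log(log)
      print([e for e in log if e["alert"] == "second"])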