Patents Examined by Hiep T. Nguyen
  • Patent number: 10579544
    Abstract: Particular embodiments described herein provide for an electronic device that can be configured to receive a request from a process to access data in a system, determine if the data is in a virtualized protected area of memory in the system, and allow access to the data if the data is in the virtualized protected area of memory and the process is a trusted process. The electronic device can also be configured to determine if new data should be protected, store the new data in the virtualized protected area of memory in the system if the new data should be protected, and store the new data in an unprotected area of memory in the system if the new data should not be protected.
    Type: Grant
    Filed: December 12, 2018
    Date of Patent: March 3, 2020
    Assignee: McAfee, LLC
    Inventors: Joel R. Spurlock, Zheng Zhang, Aditya Kapoor, Jonathan L. Edwards, Khai N. Pham
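The access-control flow in the abstract above can be pictured with a short sketch. It is only an illustration under assumed names (ProtectedMemory, process_trusted, should_protect), not code from the patent.

```python
# Minimal sketch of the protected-memory access check described above.
class ProtectedMemory:
    def __init__(self):
        self.protected = {}    # virtualized protected area (key -> data)
        self.unprotected = {}  # ordinary memory

    def read(self, key, process_trusted):
        """Allow access to protected data only for trusted processes."""
        if key in self.protected:
            if process_trusted:
                return self.protected[key]
            raise PermissionError("untrusted process denied protected data")
        return self.unprotected.get(key)

    def write(self, key, data, should_protect):
        """Route new data to the protected or unprotected area."""
        target = self.protected if should_protect else self.unprotected
        target[key] = data
```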
  • Patent number: 10579526
    Abstract: A data processing apparatus includes receiving circuitry, which receives a snoop request sent by a source node in respect of requested data, and transmitting circuitry. Cache circuitry caches at least one data value. The snoop request includes an indication as to whether the requested data is to be returned to the source node, and, when the at least one data value includes the requested data, the transmitting circuitry transmits a response to the source node including said requested data, in dependence on said indication.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: March 3, 2020
    Assignee: ARM Limited
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Klas Magnus Bruce
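A rough sketch of the snoop-handling behavior described above, assuming a simple dictionary-backed cache and illustrative message fields; the actual apparatus is hardware circuitry.

```python
# Return the cached data to the source node only when the snoop request's
# indication asks for it; otherwise acknowledge without forwarding the value.
def handle_snoop(cache, snoop):
    addr = snoop["addr"]
    if addr in cache and snoop["return_data"]:
        return {"dest": snoop["source"], "addr": addr, "data": cache[addr]}
    return {"dest": snoop["source"], "addr": addr, "data": None}
```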
  • Patent number: 10579301
    Abstract: A processing platform is configured to communicate over a network with one or more client devices, and to receive a request from a given one of the client devices for a proposed configuration of a storage system. The processing platform identifies based at least in part on the received request at least one processor to be utilized in implementing the storage system, selects a particular one of a plurality of storage system performance models based at least in part on the identified processor, computes a performance metric for the storage system utilizing the selected storage system performance model and one or more characteristics of the identified processor, generates presentation output comprising: (i) the performance metric, and (ii) information characterizing at least a portion of the proposed configuration of the storage system, and delivers the presentation output to the given client device over the network.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: March 3, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Dan Aharoni, Rui Ding, Mingjie Zhou
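The model-selection step described above can be sketched as follows; the model table, its inputs, and the IOPS-style metric are purely assumed for illustration.

```python
# Hypothetical per-processor performance models; real models would be far richer.
MODELS = {
    "cpu_a": lambda cores, ghz: cores * ghz * 1000,
    "cpu_b": lambda cores, ghz: cores * ghz * 1200,
}

def propose_configuration(request):
    model = MODELS[request["processor"]]        # select the model for this processor
    metric = model(request["cores"], request["ghz"])
    # Presentation output: the metric plus part of the proposed configuration.
    return {"performance_metric": metric, "configuration": request}
```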
  • Patent number: 10572171
    Abstract: A storage system according to an aspect of the present invention includes one or more storage devices for storing write data to which a write request from a host computer is directed, and a storage controller that provides one or more volumes to the host computer. Further, the storage system manages, for each partition within the volume, the time when a write request was last received from the host computer. The storage controller then performs a deduplication process upon detecting a partition that has not received a write request for a predetermined time or more since the last write request was received.
    Type: Grant
    Filed: February 29, 2016
    Date of Patent: February 25, 2020
    Assignee: Hitachi, Ltd.
    Inventors: Nobumitsu Takaoka, Akira Yamamoto, Tomohiro Kawaguchi, Yasuo Watanabe, Yoshihiro Yoshii, Kazuki Matsugami
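The idle-partition trigger described above might look roughly like this; the threshold and the deduplicate() stub are assumptions, not details from the patent.

```python
import time

IDLE_SECONDS = 3600      # assumed "predetermined time"
last_write = {}          # partition id -> time of last write request

def on_write(partition, now=None):
    last_write[partition] = now if now is not None else time.time()

def dedup_idle_partitions(now=None):
    now = now if now is not None else time.time()
    for partition, t in last_write.items():
        if now - t >= IDLE_SECONDS:       # no writes for the predetermined time
            deduplicate(partition)

def deduplicate(partition):
    print(f"deduplicating partition {partition}")   # stand-in for the real process
```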
  • Patent number: 10566063
    Abstract: A system comprises a memory device comprising a plurality of memory cells; and a processing device coupled to the memory device, the processing device configured to iteratively: determine a set of read results based on reading a subset of memory cells according to read levels maintained within optimization trim data, wherein the optimization trim data initially comprises at least one read level in addition to a target trim; calibrate the set of read levels based on the set of read results; and remove the calibrated read levels from the optimization trim data when the calibrated read levels satisfy a calibration condition.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: February 18, 2020
    Assignee: Micron Technology, Inc.
    Inventors: Michael Sheperek, Larry J. Koudele, Steve Kientz
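The iterative calibration loop described above can be sketched as follows; target error counts, step size, and convergence tolerance are illustrative assumptions.

```python
def calibrate(read_cells, trims, target_errors=10, step=1, tolerance=2, max_rounds=100):
    """trims maps a read-level name to a voltage offset; returns calibrated offsets."""
    pending, calibrated = dict(trims), {}
    for _ in range(max_rounds):
        if not pending:
            break
        for name in list(pending):
            errors = read_cells(pending[name])        # bit errors seen at this level
            if abs(errors - target_errors) <= tolerance:
                calibrated[name] = pending.pop(name)  # condition met: remove from trim data
            elif errors > target_errors:
                pending[name] -= step                 # nudge the read level down
            else:
                pending[name] += step                 # nudge the read level up
    return calibrated
```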
  • Patent number: 10558391
    Abstract: A data processing system includes: a memory device suitable for performing an operation corresponding to a command and outputting memory data; a data collecting device suitable for collecting big data by integrating the command and the memory data at a predetermined cycle or at every predetermined time, splitting the collected big data based on a predetermined unit, and transferring the split big data; and a data processing device suitable for storing the split big data received from the data collecting device in block-based files in a High-Availability Distributed Object-Oriented Platform (HADOOP) distributed file system (HDFS), classifying the block-based files based on a particular memory command, and processing the block-based files.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: February 11, 2020
    Assignee: SK hynix Inc.
    Inventors: Kyu-Sun Lee, Nam-Young Ahn, Eung-Bo Shim
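The collection-and-split stage described above can be pictured with a small sketch; the split unit is assumed, and the HDFS storage and classification steps are only hinted at.

```python
SPLIT_UNIT = 4        # records per split piece (assumed "predetermined unit")

def collect_and_split(records):
    """records: (command, memory_data) tuples gathered in one collection cycle."""
    return [records[i:i + SPLIT_UNIT] for i in range(0, len(records), SPLIT_UNIT)]

def classify_by_command(pieces, command):
    # The processing device would store the pieces as block-based files in HDFS
    # and group them by memory command; here we only show the grouping.
    return [rec for piece in pieces for rec in piece if rec[0] == command]
```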
  • Patent number: 10552061
    Abstract: A metadata track stores metadata corresponding to both a first customer data track and a second customer data track. In response to receiving a first request to perform a write on the first customer data track from a two track write process, exclusive access to the first customer data track is provided to the first request, and shared access to the metadata track is provided to the first request. In response to receiving a second request to perform a write on the second customer data track from the two track write process, exclusive access to the second customer data track is provided to the second request, and shared access to the metadata track is provided to the second request prior to providing exclusive access to the metadata track to at least one process that is waiting for exclusive access to the metadata track.
    Type: Grant
    Filed: August 8, 2017
    Date of Patent: February 4, 2020
    Assignee: International Business Machines Corporation
    Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta, Jared M. Minch, Beth A. Peterson
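The lock policy described above, in which two-track write requests receive shared access to the metadata track ahead of a waiting exclusive request, can be modeled roughly as follows; the class and its structure are assumptions, not IBM's implementation.

```python
import threading

class MetadataTrackLock:
    def __init__(self):
        self._cv = threading.Condition()
        self._shared = 0
        self._exclusive = False

    def acquire_shared(self):
        # Two-track writers wait only if exclusive access is currently *held*;
        # processes merely waiting for exclusive access do not block them.
        with self._cv:
            while self._exclusive:
                self._cv.wait()
            self._shared += 1

    def release_shared(self):
        with self._cv:
            self._shared -= 1
            self._cv.notify_all()

    def acquire_exclusive(self):
        with self._cv:
            while self._shared or self._exclusive:
                self._cv.wait()
            self._exclusive = True

    def release_exclusive(self):
        with self._cv:
            self._exclusive = False
            self._cv.notify_all()
```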
  • Patent number: 10540240
    Abstract: Embodiments of the present disclosure provide a solution for data backup and recovery in a storage system. When a source device in the storage system backs up, to a backup-end device, a data block that is written after a snapshot Sn, the source device performs a logical operation, such as an exclusive-NOR or exclusive-OR operation, on the written data block and the original data block of the written data block recorded in the snapshot Sn, and then compresses the data block obtained from the logical operation. This improves the compression ratio of the data block, thereby reducing the amount of data sent to the backup-end device and saving transmission bandwidth. The solution may also be applied to data recovery in a storage system.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: January 21, 2020
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Chengwei Zhang, Chuanshuai Yu, Zongquan Zhang
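The backup transform described above reduces to a small routine: XOR the written block against the copy recorded in snapshot Sn, then compress the mostly-zero difference. zlib stands in for whatever compressor the system actually uses.

```python
import zlib

def backup_block(written: bytes, snapshot_block: bytes) -> bytes:
    diff = bytes(a ^ b for a, b in zip(written, snapshot_block))  # XOR against Sn
    return zlib.compress(diff)        # the sparse difference compresses well

def restore_block(compressed: bytes, snapshot_block: bytes) -> bytes:
    diff = zlib.decompress(compressed)
    return bytes(a ^ b for a, b in zip(diff, snapshot_block))     # undo the XOR
```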
  • Patent number: 10535417
    Abstract: A memory quality engine can improve the operation of a memory system by setting more effective operating parameters, disabling or removing memory devices unable to meet performance requirements, and providing evaluations between memory populations. These improvements can be accomplished by converting quality measurements of a memory population into CDF-based data, formulating comparisons of the CDF-based data to metrics for quality analysis, and applying the quality analysis. In some implementations, the metrics for quality analysis can use one or more thresholds, such as a system trigger threshold or an uncorrectable error correction condition threshold, which are set based on the error correction capabilities of a memory system. Formulating the comparison to these metrics can include determining a margin between the CDF-based data at a particular codeword frequency and one of the thresholds.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: January 14, 2020
    Assignee: Micron Technology, Inc.
    Inventors: Bruce A. Liikanen, Gerald L. Cadloni, David Miller
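The margin computation described above can be sketched by building an empirical CDF of per-codeword error counts and measuring the distance to a threshold at a chosen frequency; the data layout is an assumption.

```python
def empirical_cdf(error_counts):
    counts = sorted(error_counts)
    n = len(counts)
    return [(counts[i], (i + 1) / n) for i in range(n)]   # (errors, cumulative freq)

def margin_at_frequency(error_counts, frequency, threshold):
    """Headroom between the CDF at the given codeword frequency and the threshold."""
    cdf = empirical_cdf(error_counts)
    for errors, freq in cdf:
        if freq >= frequency:
            return threshold - errors      # positive margin means headroom remains
    return threshold - cdf[-1][0]
```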
  • Patent number: 10528285
    Abstract: A data storage device capable of partially executing a read/write command issued by a host is disclosed. The data storage device uses a controller to perform a partial execution of a first read/write command issued by the host, and returns to the host a breakpoint of the first read/write command along with information that the first read/write command is in a partial-completion status, driving the host to issue a second read/write command. In this manner, fewer computational resources are required in determining read/write command granularity.
    Type: Grant
    Filed: July 2, 2018
    Date of Patent: January 7, 2020
    Assignee: SHANNON SYSTEMS LTD.
    Inventor: Zhen Zhou
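The partial-execution handshake described above can be sketched as a host/device loop; all field names and the sector budget are assumptions.

```python
def device_read(cmd, budget_sectors):
    """Execute as much of the command as the budget allows and report a breakpoint."""
    done = min(cmd["count"], budget_sectors)
    status = "complete" if done == cmd["count"] else "partial"
    return {"status": status, "breakpoint": cmd["lba"] + done,
            "remaining": cmd["count"] - done}

def host_read(lba, count, device, budget=32):
    # The host keeps issuing follow-up commands from the returned breakpoint.
    while count:
        result = device({"lba": lba, "count": count}, budget)
        if result["status"] == "complete":
            return
        lba, count = result["breakpoint"], result["remaining"]
```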
  • Patent number: 10521139
    Abstract: Copy source to target operations may be selectively and preemptively undertaken in advance of source destage operations. In another aspect, logic detects sequential writes including large block writes to point-in-time copy sources. In response, destage tasks on the associated point-in-time copy targets are started which include in one embodiment, stride-aligned copy source to target operations which copy unmodified data from the point-in-time copy sources to the point-in-time copy targets in alignment with the strides of the target. As a result, when write data of write operations is destaged to the point-in-time copy sources, such source destages do not need to wait for copy source to target operations since they have already been performed. In addition, the copy source to target operations may be stride-aligned with respect to the stride boundaries of the point-in-time copy targets. Other features and aspects may be realized, depending upon the particular application.
    Type: Grant
    Filed: December 14, 2017
    Date of Patent: December 31, 2019
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Clint A. Hardy, Karl A. Nielsen
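The stride-aligned, preemptive copy-source-to-target step described above might be sketched as follows; the stride size and track-array representation are assumptions.

```python
STRIDE = 8   # tracks per stride on the point-in-time copy target (assumed)

def preemptive_copy(source, target, first_track, num_tracks):
    """Copy still-unmodified source tracks to the target, aligned to target strides,
    so that the later destage of the new write data does not have to wait."""
    start = (first_track // STRIDE) * STRIDE                  # align down to a stride
    end = -(-(first_track + num_tracks) // STRIDE) * STRIDE   # align up to a stride
    for track in range(start, min(end, len(source))):
        if target[track] is None:            # not yet copied to the target
            target[track] = source[track]    # copy before the new write lands
```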
  • Patent number: 10521135
    Abstract: A data storage system includes a head node and mass storage devices. The head node is configured to flush data stored in a storage of the head node, based at least in part on one or more triggers being met, from the storage of the head node to a set of the mass storage devices of the data storage system. The flushed data is written to a segment of free storage space across the set of the mass storage devices allocated for the given data flush operation. In some embodiments, a head node may flush both current version data and point-in-time version data to the set of mass storage devices. Also, the data storage system maintains an index that indicates storage locations of data for particular portions of a volume before and after the data is flushed to the set of mass storage devices.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: December 31, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Norbert Paul Kusters, Nachiappan Arumugam, Andre Podnozov, Shobha Agrawal, Shreyas Ramalingam, Danny Wei, David R. Richardson, Marc John Brooker, Christopher Nathan Watson, John Luther Guthrie, II, Ravi Nankani
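The flush path described above can be pictured with a simplified head node that buffers writes, flushes them to a segment once a trigger fires, and repoints its index; the structures and the single byte-count trigger are assumptions.

```python
class HeadNode:
    def __init__(self, trigger_bytes=1 << 20):
        self.log = {}            # volume offset -> data held in head node storage
        self.index = {}          # volume offset -> ("head", None) or ("mss", segment id)
        self.segments = []       # segments flushed to the mass storage devices
        self.trigger_bytes = trigger_bytes

    def write(self, offset, data):
        self.log[offset] = data
        self.index[offset] = ("head", None)
        if sum(len(d) for d in self.log.values()) >= self.trigger_bytes:
            self.flush()         # one of the triggers has been met

    def flush(self):
        segment = dict(self.log)                    # one flush writes one segment
        self.segments.append(segment)
        for offset in segment:                      # index now points at mass storage
            self.index[offset] = ("mss", len(self.segments) - 1)
        self.log.clear()
```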
  • Patent number: 10521119
    Abstract: The described technology is generally directed towards a hybrid copying garbage collector in a data storage system that processes low capacity real chunks and virtual chunks (which reference data on other storage systems) into real chunks with a relatively high data capacity utilization. Real and virtual chunks with low capacity utilization are detected and copied into a higher capacity utilization real chunk, after which the low capacity chunks are deleted and their space reclaimed. As a result, much of the virtual chunk data that is to be migrated into a real chunk in the data storage system is migrated during garbage collection instead of as a separate migration process. Only the virtual chunk data that is relatively high capacity needs to be processed into real chunks by a separate migration process.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: December 31, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Mikhail Danilov, Mark A. O'Connell
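The hybrid copying collector described above can be sketched as a pass that copies live data from low-utilization real and virtual chunks into one well-filled real chunk and returns the sources for reclamation; the utilization threshold and chunk representation are assumptions.

```python
LOW_UTILIZATION = 0.3    # assumed cut-off for "low capacity utilization"

def collect(chunks, read_chunk_data, new_chunk):
    """chunks: dicts with 'utilization' and 'virtual' flags; new_chunk: list to fill."""
    reclaimed = []
    for chunk in chunks:
        if chunk["utilization"] < LOW_UTILIZATION:
            new_chunk.extend(read_chunk_data(chunk))  # virtual chunk data is migrated here
            reclaimed.append(chunk)
    return reclaimed       # caller deletes these chunks and reclaims their space
```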
  • Patent number: 10521355
    Abstract: Disclosed is a system, method and/or computer product that includes generating translation requests that are identical but have different expected results, transmitting the translation requests from a MMU tester to a non-core MMU disposed on a processor chip, where the non-core MMU is external to a processing core of the processor chip, and where the MMU tester is disposed on a computing component external to the processor chip. The method also includes receiving memory translation results from the non-core MMU at the MMU tester, and comparing the results to determine if there is a flaw in the non-core MMU.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: December 31, 2019
    Assignee: International Business Machines Corporation
    Inventors: Manoj Dusanapudi, Shakti Kapoor, Nelson Wu
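The external test loop described above can be sketched as follows, with translate() standing in for the non-core MMU under test and expected results supplied by the tester; the interfaces are assumptions.

```python
def test_mmu(translate, request_pairs):
    """request_pairs: (translation request, expected result) tuples."""
    mismatches = []
    for request, expected in request_pairs:
        result = translate(request)          # translation result returned to the tester
        if result != expected:
            mismatches.append((request, result, expected))
    return mismatches                        # a non-empty list suggests a flaw
```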
  • Patent number: 10509571
    Abstract: A storage device includes a flash memory array and a controller. The flash memory array includes a plurality of blocks. A first block among the blocks has the minimal erase count among the blocks. When determining that a difference between an average erase count of the blocks and the minimal erase count exceeds a cold-data threshold, the controller selects the first block to be a source block. When a data migration of a data-moving process is executed, the controller moves the data of the source block to a target block.
    Type: Grant
    Filed: April 18, 2018
    Date of Patent: December 17, 2019
    Assignee: VIA TECHNOLOGIES, INC.
    Inventors: Zhongyi Gao, Xiaoyu Yang
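The cold-data trigger described above amounts to a small check on erase counts; the threshold value is an assumption.

```python
COLD_DATA_THRESHOLD = 50    # assumed erase-count gap

def pick_source_block(erase_counts):
    """Return the index of the least-worn block if its data looks cold, else None."""
    average = sum(erase_counts) / len(erase_counts)
    minimal = min(erase_counts)
    if average - minimal > COLD_DATA_THRESHOLD:
        return erase_counts.index(minimal)   # source block for the data migration
    return None
```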
  • Patent number: 10509579
    Abstract: A memory quality engine can improve the operation of a memory system by setting more effective operating parameters, disabling or removing memory devices unable to meet performance requirements, and providing evaluations between memory populations. These improvements can be accomplished by converting quality measurements of a memory population into CDF-based data, formulating comparisons of the CDF-based data to metrics for quality analysis, and applying the quality analysis. In some implementations, the metrics for quality analysis can use one or more thresholds, such as a system trigger threshold or an uncorrectable error correction condition threshold, which are set based on the error correction capabilities of a memory system. Formulating the comparison to these metrics can include determining an intersection between the CDF-based data and one of the thresholds.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: December 17, 2019
    Assignee: Micron Technology, Inc.
    Inventors: Bruce A. Liikanen, Gerald L. Cadloni, David Miller
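This patent differs from 10535417 above mainly in comparing the CDF-based data to a threshold by intersection rather than by margin; a companion sketch under the same assumed data layout:

```python
def intersection_frequency(error_counts, threshold):
    """Codeword frequency at which the empirical CDF meets the error threshold."""
    counts = sorted(error_counts)
    below = sum(1 for e in counts if e < threshold)
    return below / len(counts)
```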
  • Patent number: 10503427
    Abstract: A pod includes a dataset, a set of managed objects and management operations, a set of access operations to modify or read the dataset, and a plurality of storage systems, where: management operations can modify or query managed objects equivalently through any of the storage systems, access operations to read or modify the dataset operate equivalently through any of the storage systems, each storage system stores a separate copy of the dataset as a proper subset of the datasets stored and advertised for use by the storage system, and operations to modify managed objects or the dataset performed and completed through any one storage system are reflected in subsequent management operations to query the pod or subsequent access operations to read the dataset.
    Type: Grant
    Filed: December 14, 2017
    Date of Patent: December 10, 2019
    Assignee: Pure Storage, Inc.
    Inventors: Par Botes, John Colgrove, Alan Driscoll, David Grunwald, Steven Hodgson, Ronald Karr
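The symmetric behavior described above can be illustrated with a toy pod whose members each hold a copy of the dataset; the classes are assumptions and ignore replication mechanics, consistency, and failure handling entirely.

```python
class StorageSystem:
    def __init__(self):
        self.dataset = {}          # this system's own copy of the pod's dataset

class Pod:
    def __init__(self, systems):
        self.systems = systems

    def write(self, via, key, value):
        assert via in self.systems           # the request arrives at one member...
        for system in self.systems:          # ...but every copy reflects the change
            system.dataset[key] = value

    def read(self, via, key):
        return via.dataset[key]              # reads are equivalent through any member
```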
  • Patent number: 10496549
    Abstract: A memory management method and a storage controller using the same are provided. The memory management method includes: establishing an array; selecting a first block from spare blocks at an initial time point and storing a first index number of the first block to a look-ahead block; adding the first index number in the look-ahead block to the array at a first time point, selecting a second block from the spare blocks and replacing the first index number stored to the look-ahead block with a second index number of the second block, and programming the first block; and adding the second index number in the look-ahead block to the array at a second time point, selecting a third block from the spare blocks and replacing the second index number in the look-ahead block with a third index number of the third block, and programming the second block.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: December 3, 2019
    Assignee: Shenzhen EpoStar Electronics Limited CO.
    Inventors: Shih-Tien Liao, Yu-Hua Hsiao
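The look-ahead scheme described above can be sketched as a small manager that always keeps the next block staged; random spare selection and the class structure are assumptions.

```python
import random

class LookAheadManager:
    def __init__(self, spare_blocks):
        self.spares = list(spare_blocks)
        self.array = []                      # index numbers of programmed blocks
        self.look_ahead = self._pick()       # initial selection (the "first block")

    def _pick(self):
        if not self.spares:
            return None
        return self.spares.pop(random.randrange(len(self.spares)))

    def program_next(self):
        current = self.look_ahead
        self.array.append(current)           # add its index number to the array
        self.look_ahead = self._pick()       # replace the look-ahead with the next spare
        return current                       # caller programs this block now
```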
  • Patent number: 10489081
    Abstract: A method for reducing coordination times in asynchronous data replication environments is disclosed. In one embodiment, such a method includes providing multiple primary storage devices in an asynchronous data replication environment. A command is issued, to each of the primary storage devices, to begin queuing I/O in order to coordinate a consistency group. Each primary storage device receives the command. The method further calculates, for each of the primary storage devices, an amount of time to wait before executing the command with the objective that each primary storage device executes the command at substantially the same time. Each primary storage device is configured to execute the command after receiving and waiting its corresponding amount of time. A corresponding system and computer program product are also disclosed herein.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: November 26, 2019
    Assignee: International Business Machines Corporation
    Inventors: Joshua J. Crawford, Gregory E. McBride, Matthew J. Ward
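The coordination step described above reduces to computing a per-device wait so that every device executes the command at roughly the same instant; the latency estimates are assumed inputs.

```python
def compute_waits(command_latencies_ms):
    """command_latencies_ms: estimated time for the command to reach each device."""
    slowest = max(command_latencies_ms.values())
    return {dev: slowest - lat for dev, lat in command_latencies_ms.items()}
```

For example, with estimated delivery latencies of 5 ms, 20 ms, and 12 ms, the devices would wait 15 ms, 0 ms, and 8 ms respectively, so all three begin queuing I/O about 20 ms after the command is issued.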
  • Patent number: 10489306
    Abstract: A data processing system incorporates a cache system having a cache memory and a cache controller. The cache controller selects for cache entry eviction using a primary eviction policy. This primary eviction policy may identify a plurality of candidates for eviction with an equal preference for eviction. The cache controller provides a further selection among this plurality of candidates based upon content data read from those candidates themselves as part of the cache access operation which resulted in the cache miss leading to the cache replacement requiring the victim selection. The content data used to steer this second stage of victim selection may include transience specifying data and, for example, in the case of a cache memory comprising a translation lookaside buffer, page size data, type of translation data, memory type data, permission data and the like.
    Type: Grant
    Filed: May 17, 2016
    Date of Patent: November 26, 2019
    Assignee: ARM Limited
    Inventors: Guillaume Bolbenes, Jean-Paul Georges Poncelet
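The two-stage victim selection described above can be sketched for a TLB-like cache; the primary policy (entry age) and the content fields used for the tie-break are illustrative assumptions.

```python
def select_victim(entries):
    """entries: dicts with 'age', 'transient', and 'page_size' fields."""
    oldest = max(e["age"] for e in entries)
    candidates = [e for e in entries if e["age"] == oldest]   # primary policy ties
    # Secondary selection steered by content data read from the candidates
    # themselves: prefer transient entries, then smaller page sizes.
    return min(candidates, key=lambda e: (not e["transient"], e["page_size"]))
```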