Patents Examined by Edmund Kwong
  • Patent number: 10558373
    Abstract: A method, system, and computer program product provide, via a provisioning engine, a scalable set of indexed key-value pairs that can store objects in a data storage environment. The data representing the objects can be spread across arrays in the data storage environment; additional arrays can be added to the data storage environment and included in the indexed key-value pairs; and the data stored across the arrays may be balanced.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: February 11, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Shashwat Srivastav, Vishrut Shah, Sriram Sankaran, Jun Luo, Chen Wang, Huapeng Yuan, Subba Gaddamadugu, Qi Zhang, Jie Song, Andrew Robertson, Peter Musial
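A minimal, hypothetical sketch of the kind of key-to-array indexing described in the abstract above: objects are placed across arrays via a hash-based index, new arrays can join the pool, and a naive rebalance pass re-spreads the data. Names such as `ArrayPool` and `rebalance` are illustrative, not taken from the patent.

```python
import hashlib

class ArrayPool:
    """Toy indexed key-value store spread across storage 'arrays'."""

    def __init__(self, array_names):
        self.arrays = {name: {} for name in array_names}

    def _pick(self, key):
        # A stable hash of the key chooses the owning array.
        digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        names = sorted(self.arrays)
        return names[digest % len(names)]

    def put(self, key, value):
        self.arrays[self._pick(key)][key] = value

    def get(self, key):
        return self.arrays[self._pick(key)].get(key)

    def add_array(self, name):
        # New arrays join the pool; existing data is re-spread across it.
        self.arrays[name] = {}
        self._rebalance()

    def _rebalance(self):
        items = [(k, v) for a in self.arrays.values() for k, v in a.items()]
        for a in self.arrays.values():
            a.clear()
        for k, v in items:
            self.put(k, v)

pool = ArrayPool(["array-0", "array-1"])
pool.put("object-42", b"payload")
pool.add_array("array-2")          # pool grows; data is rebalanced
assert pool.get("object-42") == b"payload"
```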
  • Patent number: 9959214
    Abstract: An emulated input/output memory management unit (IOMMU) includes a management processor to perform page table translation in software. The emulated IOMMU can also include a hardware input/output translation lookaside buffer (IOTLB) to store translations between virtual addresses and physical memory addresses. When a translation from a virtual address to a physical address is not found in the IOTLB for an I/O request, the translation can be generated by the management processor using page tables from a memory and can be stored in the IOTLB. Some embodiments can be used to emulate interrupt translation service for message based interrupts for an interrupt controller.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: May 1, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Adi Habusha, Leah Shalev, Nafea Bshara
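A hedged sketch (not Amazon's implementation) of the lookup flow in the abstract above: a small dictionary stands in for the hardware IOTLB, and a software walk of a page-table dictionary stands in for the management processor's translation on a miss.

```python
PAGE_SHIFT = 12
PAGE_MASK = (1 << PAGE_SHIFT) - 1

iotlb = {}                         # hardware IOTLB: virtual page -> physical page
page_table = {0x4000: 0x9000}      # software-walked page table (toy, single level)

def translate(virtual_addr):
    """Return the physical address for an I/O request's virtual address."""
    vpage, offset = virtual_addr & ~PAGE_MASK, virtual_addr & PAGE_MASK
    if vpage not in iotlb:                     # IOTLB miss
        ppage = page_table.get(vpage)          # management processor walks the tables
        if ppage is None:
            raise KeyError("I/O page fault: no mapping for 0x%x" % vpage)
        iotlb[vpage] = ppage                   # fill the IOTLB for later requests
    return iotlb[vpage] | offset

print(hex(translate(0x4ABC)))                  # 0x9abc; a second call hits the IOTLB
```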
  • Patent number: 9928157
    Abstract: A method for filtering multiple in-memory trace buffers for event ranges is provided. The method includes allocating a plurality of main trace buffers, based on the number of central processing units (CPUs) participating in a trace. Each CPU has a dedicated main trace buffer, and each main trace buffer is circular. Each main trace buffer is divided into an equal number of sub-buffers. A plurality of events is written to the current sub-buffer. When the current sub-buffer is filled, events are written to the next sub-buffer. Events are extracted from at least one of the sub-buffers, starting with the sub-buffer that includes a compare time and ending at the end of the main trace buffer.
    Type: Grant
    Filed: November 25, 2014
    Date of Patent: March 27, 2018
    Assignee: International Business Machines Corporation
    Inventors: Mathew Accapadi, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
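A rough sketch of one per-CPU circular trace buffer divided into fixed sub-buffers, as described in the abstract above; the sub-buffer sizes and event layout are invented for illustration.

```python
class CpuTraceBuffer:
    def __init__(self, sub_buffers=4, sub_capacity=8):
        self.subs = [[] for _ in range(sub_buffers)]
        self.sub_capacity = sub_capacity
        self.current = 0

    def write(self, timestamp, event):
        if len(self.subs[self.current]) >= self.sub_capacity:
            # Current sub-buffer is full: advance circularly and overwrite.
            self.current = (self.current + 1) % len(self.subs)
            self.subs[self.current] = []
        self.subs[self.current].append((timestamp, event))

    def extract_since(self, compare_time):
        """Events from the sub-buffer containing compare_time to the buffer end."""
        start = next((i for i, sub in enumerate(self.subs)
                      if sub and sub[0][0] <= compare_time <= sub[-1][0]),
                     self.current)
        out = []
        for i in range(start, len(self.subs)):
            out.extend(e for e in self.subs[i] if e[0] >= compare_time)
        return out

buf = CpuTraceBuffer()
for t in range(20):
    buf.write(t, "event-%d" % t)
print(buf.extract_since(10))      # events 10 through 19
```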
  • Patent number: 9910613
    Abstract: New storage volumes are registered to a data storage environment. Registering new storage volumes is controlled based on the performance requirements of the storage volumes compared to the capacity of the data storage environment.
    Type: Grant
    Filed: March 30, 2015
    Date of Patent: March 6, 2018
    Assignee: eBay Inc.
    Inventors: Vinay Pundalika Rao, Mark S. Lewis, Anna Povzner
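A very small admission-control sketch of the idea in the abstract above: a new volume is registered only if its performance requirement fits within the environment's remaining capacity. The IOPS metric, numbers, and field names are hypothetical.

```python
def can_register(required_iops, registered_volumes, environment_iops_capacity):
    """Admit a new volume only if the total required IOPS stays within capacity."""
    committed = sum(v["iops"] for v in registered_volumes)
    return committed + required_iops <= environment_iops_capacity

volumes = [{"name": "vol-a", "iops": 4000}, {"name": "vol-b", "iops": 3000}]
print(can_register(2000, volumes, environment_iops_capacity=10000))  # True
print(can_register(5000, volumes, environment_iops_capacity=10000))  # False
```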
  • Patent number: 9904480
    Abstract: In one embodiment, a method includes creating a first number of streams for a file system manager of a deduplicating storage system to access concurrently a type of data blocks, where each stream is for one file system and is identified by a stream identifier. The method further includes mapping stream identifiers to each of the type of data blocks passing through the first number of streams. The method further includes accessing the type of data blocks in storage units of the deduplicating storage system through a second number of streams, where the second number of streams are dedicated to the type of data blocks in the deduplicating storage system, where the second number is smaller than the first number, where the data blocks are tracked according to the mapped stream identifiers, and where the data blocks are stored in the storage units after a deduplication process to remove duplication.
    Type: Grant
    Filed: December 18, 2014
    Date of Patent: February 27, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: Pranay Singh, Sai Chivukula
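The many-to-few stream mapping described in the abstract above can be sketched roughly as follows; the modulo assignment and all names are assumptions, not the patented mapping.

```python
class StreamMultiplexer:
    """Funnel many per-file-system streams onto fewer dedicated storage streams."""

    def __init__(self, num_fs_streams, num_storage_streams):
        assert num_storage_streams < num_fs_streams
        self.num_storage_streams = num_storage_streams
        self.blocks_by_storage_stream = {i: [] for i in range(num_storage_streams)}

    def write_block(self, fs_stream_id, block):
        # Tag the block with its originating stream identifier so it stays
        # trackable after passing through the shared storage streams.
        storage_stream = fs_stream_id % self.num_storage_streams
        self.blocks_by_storage_stream[storage_stream].append(
            {"fs_stream_id": fs_stream_id, "block": block})

mux = StreamMultiplexer(num_fs_streams=8, num_storage_streams=2)
mux.write_block(fs_stream_id=5, block=b"metadata-chunk")
print(mux.blocks_by_storage_stream[1])   # 5 % 2 == 1
```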
  • Patent number: 9880928
    Abstract: Improved techniques for storing data involve storing compressed data in blocks of a first AU size and storing uncompressed data in blocks of a second AU size larger than the first AU size. For example, when a storage processor compresses a chunk of data, the storage processor checks whether the compressed chunk fits in the smaller AU size. If the compressed chunk fits, then the storage processor stores a compressed chunk in a block having the smaller AU size. Otherwise, the storage processor stores the uncompressed chunk in a block having the larger AU size. Advantageously, the improved techniques promote better disk and cache utilization, which improves performance without disrupting block mapping.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: January 30, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: Jean-Pierre Bono, Philippe Armangau
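A minimal sketch of the compress-then-choose-allocation-unit decision described in the abstract above, using zlib and two assumed AU sizes (the actual sizes are not given in the abstract).

```python
import zlib

SMALL_AU = 4 * 1024      # assumed smaller allocation-unit size
LARGE_AU = 8 * 1024      # assumed larger allocation-unit size

def store_chunk(chunk: bytes):
    """Return (au_size, payload) for a chunk, preferring the compressed form."""
    compressed = zlib.compress(chunk)
    if len(compressed) <= SMALL_AU:
        return SMALL_AU, compressed          # compressed chunk fits the small AU
    return LARGE_AU, chunk                   # otherwise store it uncompressed

au, payload = store_chunk(b"A" * LARGE_AU)   # highly compressible -> small AU
print(au, len(payload))
```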
  • Patent number: 9857994
    Abstract: A storage controller performs control for storing, in memory areas of a storage device, data that is grouped together with redundant data into blocks each having a given data size. The storage controller includes a memory unit configured to store group information, created by the grouping, in which logical addresses of a writing destination identified from a data writing request are correlated with the blocks; and a control unit configured to count, in response to a data reading request and based on the group information, the number of times of reading from the group that includes the logical addresses of the reading destination identified from the reading request, and to issue either a reading request that includes the logical addresses of the reading destination or a reading request that includes the logical addresses of the memory destination of the redundant data corresponding to the data at those logical addresses.
    Type: Grant
    Filed: November 25, 2014
    Date of Patent: January 2, 2018
    Assignee: FUJITSU LIMITED
    Inventor: Hironori Saito
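A loose sketch of the per-group read counting and primary-versus-redundant read dispatch outlined in the abstract above; the alternation policy and the address layout are invented for illustration.

```python
class GroupReadBalancer:
    def __init__(self, group_info):
        # group_info: group id -> {"primary": [logical addrs], "redundant": [addrs]}
        self.group_info = group_info
        self.read_counts = {g: 0 for g in group_info}

    def _group_of(self, logical_addr):
        return next(g for g, info in self.group_info.items()
                    if logical_addr in info["primary"])

    def issue_read(self, logical_addr):
        group = self._group_of(logical_addr)
        self.read_counts[group] += 1
        info = self.group_info[group]
        # Every other read of a group is redirected to the redundant copy.
        if self.read_counts[group] % 2 == 0:
            index = info["primary"].index(logical_addr)
            return {"target": "redundant", "address": info["redundant"][index]}
        return {"target": "primary", "address": logical_addr}

balancer = GroupReadBalancer({0: {"primary": [0x100, 0x101],
                                  "redundant": [0x900, 0x901]}})
print(balancer.issue_read(0x100))   # primary copy
print(balancer.issue_read(0x101))   # redundant copy at 0x901
```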
  • Patent number: 9858990
    Abstract: An apparatus includes a register memory and circuitry. The register memory is configured to hold a minimal value specified for a performance measure of a given type of memory access command, whose actual performance measures vary among memory devices. The circuitry is configured to receive a memory access command of the given type, to execute the received memory access command in one or more memory devices, and to acknowledge the memory access command no earlier than when the minimal value stored in the register memory has been reached.
    Type: Grant
    Filed: December 18, 2014
    Date of Patent: January 2, 2018
    Assignee: APPLE INC.
    Inventors: Liran Erez, Guy Ben-Yehuda, Avraham (Poza) Meir, Ori Isachar
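The "do not acknowledge before the specified minimum" behavior can be sketched as below, with elapsed time standing in for the performance measure and a plain variable standing in for the register. Holding fast devices to a fixed floor keeps the command's observed latency uniform across devices whose real timings differ.

```python
import time

MIN_LATENCY_S = 0.002     # register-held minimum for this command type (assumed units)

def execute_with_min_latency(command):
    """Execute a command but withhold the acknowledgement until the minimum elapses."""
    start = time.monotonic()
    result = command()                         # the device access may finish early
    remaining = MIN_LATENCY_S - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)                  # pad to the register-specified minimum
    return result                              # acknowledgement happens here

print(execute_with_min_latency(lambda: "read-data"))
```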
  • Patent number: 9836396
    Abstract: A last-level cache controller includes a system state monitor and a cache partitioning module. The system state monitor is configured to obtain a latency sensitivity factor, off-chip latency factors, and cache miss information for each of the processor cores. The cache partitioning module is configured to: obtain a first weighted latency according to the latency sensitivity factor, the off-chip latency factors and a first entry of the cache miss information that corresponds to a first cache partition configuration for each of the processor cores; obtain a first aggregated weighted latency according to the first weighted latency of each of the processor cores; determine whether a partition criterion is satisfied, where the partition criterion takes the first aggregated weighted latency into consideration; and partition the cache ways of the last-level cache using the first partition configuration when determining that the partition criterion is satisfied.
    Type: Grant
    Filed: November 10, 2015
    Date of Patent: December 5, 2017
    Assignees: MEDIATEK INC., NATIONAL TAIWAN UNIVERSITY
    Inventors: Po-Han Wang, Cheng-Hsuan Li, Chia-Lin Yang
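A simplified sketch of the weighted-latency evaluation loop implied by the abstract above; the arithmetic combining the sensitivity factor, off-chip latency, and miss counts is an assumption, since the abstract does not give the exact formula.

```python
def weighted_latency(core, misses_for_config):
    # Assumed combination: sensitivity * misses * off-chip latency.
    return core["sensitivity"] * misses_for_config * core["off_chip_latency"]

def choose_partition(cores, configs, criterion_threshold):
    """Pick the first cache-way partition whose aggregated weighted latency passes."""
    for config in configs:
        aggregate = sum(weighted_latency(core, core["misses"][config["name"]])
                        for core in cores)
        if aggregate <= criterion_threshold:     # partition criterion satisfied
            return config
    return None

cores = [
    {"sensitivity": 1.5, "off_chip_latency": 100,
     "misses": {"even-split": 2_000, "skewed": 1_200}},
    {"sensitivity": 0.5, "off_chip_latency": 100,
     "misses": {"even-split": 1_000, "skewed": 2_500}},
]
configs = [{"name": "even-split", "ways": [8, 8]}, {"name": "skewed", "ways": [12, 4]}]
print(choose_partition(cores, configs, criterion_threshold=320_000))   # skewed wins
```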
  • Patent number: 9817600
    Abstract: According to one configuration, a memory system includes a configuration manager and multiple memory devices. The configuration manager includes status detection logic, retrieval logic, and configuration management logic. The status detection logic receives notification of a failed attempt by a first memory device to be initialized with custom configuration settings stored in the first memory device. In response to the notification, the retrieval logic retrieves a backup copy of configuration settings information from a second memory device in the memory system. The configuration management logic utilizes the backup copy of the configuration settings information retrieved from the second memory device to initialize the first memory device.
    Type: Grant
    Filed: December 13, 2016
    Date of Patent: November 14, 2017
    Assignee: Intel Corporation
    Inventors: Ning Wu, Robert E. Frickey, Hanmant P. Belgal, Xin Guo
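A sketch of the fallback flow in the abstract above, with dictionaries standing in for memory devices and their stored configuration settings; the device names and setting fields are hypothetical.

```python
class InitFailure(Exception):
    pass

def initialize(device, settings):
    if settings is None or settings.get("corrupt"):
        raise InitFailure("device %s: bad configuration" % device["name"])
    device["initialized_with"] = settings

def bring_up(first_device, second_device):
    """Initialize the first device, falling back to the backup copy on failure."""
    try:
        initialize(first_device, first_device["custom_settings"])
    except InitFailure:
        backup = second_device["backup_settings"]      # retrieved from the peer device
        initialize(first_device, backup)

dev0 = {"name": "nvm-0", "custom_settings": {"corrupt": True}}
dev1 = {"name": "nvm-1", "backup_settings": {"trim_voltage": 3}}
bring_up(dev0, dev1)
print(dev0["initialized_with"])
```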
  • Patent number: 9811465
    Abstract: A plurality of nodes includes an I/O (Input/Output) node and a plurality of computation nodes. Each computation node sends an I/O request to the I/O node. The I/O node includes a first storage device which stores data to be written or read according to the I/O request, and a first memory device on which a first cache area is based to temporarily store the data written to or read from the first storage device. The computation node includes a second memory device on which a second cache area is based to temporarily store the data of the I/O request. At least one of the I/O node and the computation node stores management information which contains information on a physical storage area in the cache area of the other node, and information on a virtual storage area which is associated with that physical storage area and forms a part of its own cache area.
    Type: Grant
    Filed: July 2, 2013
    Date of Patent: November 7, 2017
    Assignee: Hitachi, Ltd.
    Inventors: Kazuhide Aikoh, Keisuke Hatasaki
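A bare-bones sketch of the cross-node cache management information described above: each entry ties a virtual area of a node's own cache to a physical area in the other node's cache. All identifiers are illustrative.

```python
# Management information held by a computation node (assumed layout): each virtual
# area of its own cache maps to a physical area in the I/O node's cache.
management_info = {
    "virtual-area-0": {"remote_node": "io-node", "physical_area": "phys-area-7"},
    "virtual-area-1": {"remote_node": "io-node", "physical_area": "phys-area-3"},
}

def resolve(virtual_area):
    """Find where a virtual cache area is physically backed."""
    entry = management_info[virtual_area]
    return entry["remote_node"], entry["physical_area"]

print(resolve("virtual-area-0"))    # ('io-node', 'phys-area-7')
```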
  • Patent number: 9800523
    Abstract: A scheduling method for virtual processors based on the affinity of NUMA high-performance network buffer resources, including: in a NUMA architecture, when the network interface card (NIC) of a virtual machine is started, obtaining the distribution of the NIC buffer across the NUMA nodes; obtaining the affinity of each NUMA node for the NIC buffer on the basis of the affinity relationships between NUMA nodes; determining a target NUMA node based on both the distribution of the NIC buffer across the NUMA nodes and the NUMA node affinities for the NIC buffer; and scheduling the virtual processor onto a CPU of the target NUMA node. This resolves the problem, in the NUMA architecture, of the affinity between the virtual machine's VCPU and the NIC buffer not being optimal, thereby improving the speed at which the VCPU processes network packets.
    Type: Grant
    Filed: August 22, 2014
    Date of Patent: October 24, 2017
    Assignee: Shanghai Jiao Tong University
    Inventors: Haibing Guan, Ruhui Ma, Jian Li, Xiaolong Jia
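A toy version of the node-selection step described in the abstract above: combine, per node, the share of the NIC buffer it holds with its affinity for the buffer, and schedule the VCPU onto a CPU of the best node. The scoring function is an assumption.

```python
def pick_target_node(buffer_distribution, node_affinity):
    """Score each NUMA node by its NIC-buffer share weighted by its affinity."""
    scores = {node: buffer_distribution.get(node, 0) * node_affinity[node]
              for node in node_affinity}
    return max(scores, key=scores.get)

def schedule_vcpu(vcpu, cpus_by_node, buffer_distribution, node_affinity):
    node = pick_target_node(buffer_distribution, node_affinity)
    return {"vcpu": vcpu, "node": node, "cpu": cpus_by_node[node][0]}

buffer_distribution = {"node0": 0.75, "node1": 0.25}   # share of NIC buffer per node
node_affinity = {"node0": 1.0, "node1": 0.6}           # affinity for the NIC buffer
cpus_by_node = {"node0": [0, 1, 2, 3], "node1": [4, 5, 6, 7]}
print(schedule_vcpu("vcpu-2", cpus_by_node, buffer_distribution, node_affinity))
```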
  • Patent number: 9798475
    Abstract: According to one embodiment, a controller writes data stored in a first data group of a plurality of data groups into a first block group of the plurality of block groups and writes data stored in a second data group of the plurality of data groups into a second block group of the plurality of block groups in a case where a first condition is satisfied.
    Type: Grant
    Filed: August 31, 2015
    Date of Patent: October 24, 2017
    Assignee: TOSHIBA MEMORY CORPORATION
    Inventor: Naoya Kamimura
  • Patent number: 9785557
    Abstract: In a multithreaded data processing system including a plurality of processor cores, storage-modifying requests, including a translation invalidation request of an initiating hardware thread, are received in a shared queue. The translation invalidation request is broadcast so that it is received and processed by the plurality of processor cores. In response to confirmation of the broadcast, the address translated by the translation entry is stored in a queue. Once the address is stored, the initiating processor core resumes dispatch of instructions within the initiating hardware thread. In response to a request from one of the plurality of processor cores, an effective address translated by a translation entry being invalidated is accessed in the queue. A synchronization request for the address is broadcast to ensure completion of processing of any translation invalidation request for the address.
    Type: Grant
    Filed: October 25, 2016
    Date of Patent: October 10, 2017
    Assignee: International Business Machines Corporation
    Inventors: Bradly G. Frey, Guy L. Guthrie, Cathy May, Derek E. Williams
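A very loose, single-process sketch of the invalidation and synchronization ordering described in the abstract above, using plain lists as the shared queues; it ignores the hardware broadcast fabric and is meant only to show the ordering of steps.

```python
from collections import deque

store_queue = deque()        # shared queue of storage-modifying requests
pending_addresses = deque()  # addresses whose invalidations await a later sync

def broadcast(request, cores):
    for core in cores:
        core["seen"].append(request)          # every core receives and processes it
    return True                               # confirmation of the broadcast

def issue_tlb_invalidate(address, cores):
    store_queue.append(("tlbie", address))
    if broadcast(("tlbie", address), cores):
        pending_addresses.append(address)     # remember the address once confirmed
    # ...the initiating thread may now resume dispatching instructions

def synchronize(cores):
    while pending_addresses:
        address = pending_addresses.popleft()
        broadcast(("sync", address), cores)   # ensure all cores finished invalidating

cores = [{"seen": []} for _ in range(4)]
issue_tlb_invalidate(0x7F00_0000, cores)
synchronize(cores)
print(cores[0]["seen"])
```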
  • Patent number: 9779044
    Abstract: A data processor system includes a local memory, a processor core, and an extent monitor. The local memory stores a block of data at a task memory location that is exclusive to a particular task during a duration of time. The processor core accesses the task memory location of the local memory during execution of the particular task and modifies the block of data stored in the task memory location. The extent monitor monitors write operations of the processor core to the local memory to determine a first most-extreme address of the task memory location modified by the execution of the particular task during the duration of time. The processor core also executes a write-back instruction to write back to a shared memory location less than the entire block of data, based upon the most-extreme address.
    Type: Grant
    Filed: November 25, 2014
    Date of Patent: October 3, 2017
    Assignee: NXP USA, Inc.
    Inventor: William C. Moyer
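A sketch of the extent-tracking idea described in the abstract above: record the highest modified offset within the task's block, then write back only that prefix. Treating "most extreme" as the highest offset from the block base is an assumption.

```python
class ExtentMonitor:
    """Track the furthest offset written inside a task-exclusive block."""

    def __init__(self, block_size):
        self.block = bytearray(block_size)
        self.max_dirty_offset = -1            # nothing modified yet

    def write(self, offset, data: bytes):
        self.block[offset:offset + len(data)] = data
        self.max_dirty_offset = max(self.max_dirty_offset, offset + len(data) - 1)

    def write_back(self, shared_memory: bytearray):
        # Copy back only the modified prefix, not the entire block.
        if self.max_dirty_offset >= 0:
            end = self.max_dirty_offset + 1
            shared_memory[:end] = self.block[:end]
            return end
        return 0

monitor = ExtentMonitor(block_size=4096)
monitor.write(0, b"header")
monitor.write(100, b"record")
shared = bytearray(4096)
print(monitor.write_back(shared))             # 106 bytes written back, not 4096
```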
  • Patent number: 9772945
    Abstract: In a multithreaded data processing system including a plurality of processor cores, storage-modifying requests, including a translation invalidation request of an initiating hardware thread, are received in a shared queue. The translation invalidation request is broadcast so that it is received and processed by the plurality of processor cores. In response to confirmation of the broadcast, the address translated by the translation entry is stored in a queue. Once the address is stored, the initiating processor core resumes dispatch of instructions within the initiating hardware thread. In response to a request from one of the plurality of processor cores, an effective address translated by a translation entry being invalidated is accessed in the queue. A synchronization request for the address is broadcast to ensure completion of processing of any translation invalidation request for the address.
    Type: Grant
    Filed: October 25, 2016
    Date of Patent: September 26, 2017
    Assignee: International Business Machines Corporation
    Inventors: Bradly G. Frey, Guy L. Guthrie, Cathy May, Derek E. Williams
  • Patent number: 9727475
    Abstract: An apparatus and method are described for distributed snoop filtering. For example, one embodiment of a processor comprises: a plurality of cores to execute instructions and process data; first snoop logic to track a first plurality of cache lines stored in a mid-level cache (“MLC”) accessible by one or more of the cores, the first snoop logic to allocate entries for cache lines stored in the MLC and to deallocate entries for cache lines evicted from the MLC, wherein at least some of the cache lines evicted from the MLC are retained in a level 1 (L1) cache; and second snoop logic to track a second plurality of cache lines stored in a non-inclusive last level cache (NI LLC), the second snoop logic to allocate entries in the NI LLC for cache lines evicted from the MLC and to deallocate entries for cache lines stored in the MLC, wherein the second snoop logic is to store and maintain a first set of core valid bits to identify cores containing copies of the cache lines stored in the NI LLC.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: August 8, 2017
    Assignee: Intel Corporation
    Inventors: Rahul Pal, Ishwar Agarwal, Yen-Cheng Liu, Joseph Nuzman, Ashok Jagannathan, Bahaa Fahim, Nithiyanandan Bashyam
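A coarse sketch of the two snoop filters described in the abstract above, tracking which lines are in the MLC versus the non-inclusive LLC and keeping per-core valid bits for LLC lines; the bookkeeping is simplified to a set, a dictionary, and bitmasks.

```python
mlc_filter = set()     # addresses currently tracked as resident in the MLC
llc_filter = {}        # address -> core-valid bitmask for lines in the NI LLC

def fill_mlc(address):
    mlc_filter.add(address)
    llc_filter.pop(address, None)         # line is no longer tracked in the NI LLC

def evict_from_mlc(address, core_id, retained_in_l1):
    mlc_filter.discard(address)
    # Evicted MLC lines are allocated in the NI LLC; the core-valid bits remember
    # which cores may still hold a copy (e.g. in their L1).
    bits = llc_filter.get(address, 0)
    if retained_in_l1:
        bits |= 1 << core_id
    llc_filter[address] = bits

def cores_to_snoop(address):
    bits = llc_filter.get(address, 0)
    return [core for core in range(64) if bits & (1 << core)]

fill_mlc(0x1000)
evict_from_mlc(0x1000, core_id=3, retained_in_l1=True)
print(cores_to_snoop(0x1000))             # [3]
```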
  • Patent number: 9727485
    Abstract: A system and method for efficiently maintaining metadata stored among a plurality of solid-state storage devices. A data storage subsystem supports multiple mapping tables. Records within a mapping table are arranged in multiple levels. Each level stores at least pairs of a key value and a physical pointer value. The levels are sorted by time. New records are inserted into a newly created highest (youngest) level. No edits are performed in place. A data storage controller determines both that the cost of searching a given table exceeds a threshold and that the amount of memory used to flatten levels exceeds a threshold. In response, the controller incrementally flattens selected levels within the table based on key ranges. After the records in the selected levels within a key range have been flattened, they may be removed from the selected levels. The process then repeats with a different key range.
    Type: Grant
    Filed: November 24, 2014
    Date of Patent: August 8, 2017
    Assignee: Pure Storage, Inc.
    Inventors: Marco Sanvido, Richard Hankins, Mark McAuliffe, Neil Vachharajani
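A compact sketch of the time-ordered levels and the incremental, key-range flatten described in the abstract above; the thresholds, level layout, and merge policy are simplified assumptions.

```python
class LeveledMapping:
    """Mapping table kept as time-ordered levels of key -> physical pointer."""

    def __init__(self):
        self.levels = [{}]                 # levels[-1] is the youngest

    def insert(self, key, pointer):
        self.levels[-1][key] = pointer     # never edit in place

    def new_level(self):
        self.levels.append({})

    def lookup(self, key):
        for level in reversed(self.levels):    # youngest level wins
            if key in level:
                return level[key]
        return None

    def flatten_range(self, level_indices, lo, hi):
        """Merge one key range from the selected levels into the oldest of them."""
        merged = {}
        for i in level_indices:                         # oldest to youngest
            for key in [k for k in self.levels[i] if lo <= k <= hi]:
                merged[key] = self.levels[i].pop(key)   # remove after flattening
        self.levels[level_indices[0]].update(merged)

m = LeveledMapping()
m.insert(10, "ptr-a"); m.new_level(); m.insert(10, "ptr-b"); m.insert(90, "ptr-c")
m.flatten_range([0, 1], lo=0, hi=50)       # incrementally flatten keys 0..50
print(m.lookup(10), m.levels)              # 'ptr-b' still wins after the flatten
```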
  • Patent number: 9721643
    Abstract: Detection logic of a memory subsystem obtains a threshold for a memory device that indicates a number of accesses within a time window that causes risk of data corruption on a physically adjacent row. The detection logic obtains the threshold from a register that stores configuration information for the memory device, and can be a register on the memory device itself and/or can be an entry of a configuration storage device of a memory module to which the memory device belongs. The detection logic determines whether a number of accesses to a row of the memory device exceeds the threshold. In response to detecting the number of accesses exceeds the threshold, the detection logic can generate a trigger to cause the memory device to perform a refresh targeted to a physically adjacent victim row.
    Type: Grant
    Filed: June 1, 2016
    Date of Patent: August 1, 2017
    Assignee: Intel Corporation
    Inventors: Kuljit S Bains, John B Halbert
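A small sketch of the threshold check described in the abstract above: a per-row access counter within a time window, compared against a threshold that would in practice come from the device's or module's configuration storage, with a targeted-refresh trigger for the physically adjacent rows.

```python
import time
from collections import defaultdict

# In hardware the threshold is read from a configuration register on the DRAM
# device or module; real values are in the tens of thousands of accesses.
ROW_HAMMER_THRESHOLD = 5          # tiny demo value
TIME_WINDOW_S = 0.064             # assumed window length

access_times = defaultdict(list)

def refresh_adjacent(row):
    print("targeted refresh of victim rows", row - 1, "and", row + 1)

def record_access(row):
    now = time.monotonic()
    # Keep only the accesses that fall inside the current time window.
    access_times[row] = [t for t in access_times[row] if now - t <= TIME_WINDOW_S]
    access_times[row].append(now)
    if len(access_times[row]) > ROW_HAMMER_THRESHOLD:
        refresh_adjacent(row)             # trigger before adjacent rows can corrupt
        access_times[row].clear()

for _ in range(8):
    record_access(row=128)                # the 6th access in the window triggers
```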
  • Patent number: 9703504
    Abstract: A storage system includes a plurality of storing devices configured to store data; a cache memory configured to hold data; an access control unit configured to access any one of the plurality of storing devices when an access request to read or write target data is made from an information processing terminal, and to store the target data in the cache memory; and a writing unit configured to write the target data held in the cache memory to the storing device, among the plurality of storing devices, that has not yet stored the target data.
    Type: Grant
    Filed: June 13, 2014
    Date of Patent: July 11, 2017
    Assignee: FUJITSU LIMITED
    Inventor: Takashi Kuwayama
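A minimal sketch of the read-through, mirror-later behavior suggested by the abstract above: accessed data is cached, and a separate writing step copies cached data to whichever device does not yet hold it. The data structures are illustrative.

```python
devices = {"disk-0": {"blk-7": b"payload"}, "disk-1": {}}
cache = {}

def access(block_id):
    """Serve a request from whichever device holds the block and cache the data."""
    for data in devices.values():
        if block_id in data:
            cache[block_id] = data[block_id]
            return cache[block_id]
    raise KeyError(block_id)

def write_back_missing():
    """Copy cached blocks to the devices that do not yet store them."""
    for block_id, payload in cache.items():
        for data in devices.values():
            if block_id not in data:
                data[block_id] = payload

access("blk-7")
write_back_missing()
print(devices["disk-1"])       # {'blk-7': b'payload'}
```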