Partitioned Cache Patents (Class 711/129)
  • Patent number: 10592288
    Abstract: A computing system includes a computer in communication with a tiered storage system. The computing system identifies a set of data transferring to a storage tier within the storage system. The computing system identifies a program to which the data set is allocated and determines to increase or reduce resources of the computer allocated to the program, based on the set of data transferring to the storage tier. The computing system discontinues transferring the set of data to the storage tier if a resource allocated to the program cannot be increased.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: March 17, 2020
    Assignee: International Business Machines Corporation
    Inventors: Rahul M. Fiske, Akshat Mithal, Sandeep R. Patil, Subhojit Roy
  • Patent number: 10592420
    Abstract: One embodiment is related to a method for redistributing cache space, comprising: determining a request by a first client of a plurality of clients for additional cache space, each of the plurality of clients being associated with a guaranteed minimum amount (MIN) and a maximum amount (MAX) of cache space; and fulfilling or denying the request based on an amount of cache space the first client currently occupies, an amount of cache space requested by the first client, and the MIN and the MAX cache space associated with the first client.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: March 17, 2020
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Shuang Liang, Philip Shilane, Grant Wallace
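The MIN/MAX admission test in this abstract can be sketched as a small decision function. This is an illustrative sketch, not the patented implementation: the function name, the parameter names, and the best-effort rule applied between MIN and MAX are all assumptions.

```python
def decide_request(current, requested, min_space, max_space, free_space):
    """Fulfill or deny a client's request for additional cache space.

    Deny any request that would push the client past its MAX.
    Grant outright while the client is still within its guaranteed MIN.
    Between MIN and MAX, grant only if free cache space allows (an
    assumed best-effort rule; the patent leaves this policy open).
    """
    new_total = current + requested
    if new_total > max_space:
        return False                  # would exceed the client's MAX
    if new_total <= min_space:
        return True                   # still within the guaranteed minimum
    return requested <= free_space    # best-effort above MIN
```

For example, a client guaranteed 4 units and capped at 10 that already holds 6 can get 3 more units if 5 are free, but never 5 more.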
  • Patent number: 10592418
    Abstract: Shared memory caching resolves latency issues in computing nodes associated with a cluster in a virtual computing environment. A portion of random access memory in one or more of the computing nodes is allocated for shared use by the cluster. Whenever local cache memory is unavailable in one of the computing nodes, a cluster neighbor cache allocated in a different computing node may be utilized as remote cache memory. Neighboring computing nodes may thus share their resources for the benefit of the cluster.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: March 17, 2020
    Assignee: Dell Products, L.P.
    Inventor: John Kelly
  • Patent number: 10585808
    Abstract: Single hypervisor call to perform pin and unpin operations. A hypervisor call relating to the pinning of units of memory is obtained. The hypervisor call specifies an unpin operation for a first memory address and a pin operation for a second memory address. Based on obtaining the hypervisor call, at least one of the unpin operation for the first memory address and the pin operation for the second memory address is performed.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: March 10, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael K. Gschwind, Valentina Salapura
  • Patent number: 10585670
    Abstract: A processor architecture includes a register file hierarchy to implement virtual registers that provide a larger set of registers than those directly supported by an instruction set architecture to facilitate multiple copies of the same architecture register for different processing threads, where the register file hierarchy includes a plurality of hierarchy levels. The processor architecture further includes a plurality of execution units coupled to the register file hierarchy.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: March 10, 2020
    Assignee: Intel Corporation
    Inventor: Mohammad A. Abdallah
  • Patent number: 10565139
    Abstract: Multiple memory devices, such as hard drives, can be combined and logical partitions can be formed between the drives to allow a user to control regions on the drives that will be used for storing content, and also to provide redundancy of stored content in the event that one of the drives fails. Priority levels can be assigned to content recordings such that higher value content can be stored in more locations and easily accessible locations within the utilized drives. Users can control and organize how recorded content is stored between the drives such that an external drive may be removed from a first gateway device and attached to a second gateway device without losing the ability to access the recorded content from the first gateway device at a later time. In this manner, a user is provided with the ability to transport an external drive containing stored content recordings between multiple different gateway devices such that the recordings may be accessed at different locations or user premises.
    Type: Grant
    Filed: September 23, 2014
    Date of Patent: February 18, 2020
    Assignee: Comcast Cable Communications, LLC
    Inventor: Ross Gilson
  • Patent number: 10552329
    Abstract: An SSD caching system for hybrid storages is disclosed. The caching system for hybrid storages includes: a Solid State Drive (SSD) for storing cached data, separated into a Repeated Pattern Cache (RPC) area and a Dynamical Replaceable Cache (DRC) area; and a caching managing module, including: an Input/output (I/O) profiling unit, for detecting I/O requests for accesses of blocks in a Hard Disk Drive (HDD) during a number of continuously detecting time intervals, and storing first data corresponding to first blocks being repeatedly accessed at least twice in individual continuously detecting time intervals to the RPC area sequentially; and a hot data searching unit, for detecting I/O requests for accesses of an HDD during an independently detecting time interval, and storing second data corresponding to second blocks being accessed at least twice in the independently detecting time interval to the DRC area sequentially.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: February 4, 2020
    Assignee: Prophetstor Data Services, Inc.
    Inventors: Wen Shyen Chen, Ming Jen Huang
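The two detection rules above — repeated access across every consecutive interval (RPC candidates) versus repeated access within one interval (DRC hot data) — can be sketched as follows. The function name and trace format are assumptions for illustration.

```python
from collections import Counter

def classify_blocks(interval_accesses):
    """Classify HDD blocks into repeated-pattern (RPC) and dynamic (DRC)
    candidate sets.

    interval_accesses: list of per-interval access traces, each a list of
    block ids.  A block accessed at least twice in each consecutive
    interval is a repeated-pattern candidate; a block accessed at least
    twice within the most recent interval alone is hot data for the DRC.
    """
    per_interval = [Counter(trace) for trace in interval_accesses]
    rpc = set.intersection(*(
        {blk for blk, n in counts.items() if n >= 2} for counts in per_interval
    )) if per_interval else set()
    latest = per_interval[-1] if per_interval else Counter()
    drc = {blk for blk, n in latest.items() if n >= 2} - rpc
    return rpc, drc
```

With traces `[[1, 1, 2, 3, 3], [1, 1, 3, 4, 4]]`, block 1 is repeated in both intervals (RPC), while block 4 is hot only in the latest interval (DRC).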
  • Patent number: 10552327
    Abstract: Systems, methods, and computer readable media to improve the operation of electronic devices that use integrated cache systems are described. In general, techniques are disclosed to manage the leakage power attributable to an integrated cache memory by dynamically resizing the cache during device operations. More particularly, run-time cache operating parameters may be used to dynamically determine if the cache may be resized. If effective use of the cache may be maintained using a smaller cache, a portion of the cache may be power-gated (e.g., turned off). The power loss attributable to that portion of the cache power-gated may thereby be avoided. Such power reduction may extend a mobile device's battery runtime. Cache portions previously turned off may be brought back online as processing needs increase so that device performance does not degrade.
    Type: Grant
    Filed: August 23, 2016
    Date of Patent: February 4, 2020
    Assignee: Apple Inc.
    Inventors: Robert P. Esser, Nikolay N. Stoimenov
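A hedged sketch of the run-time resizing decision described above. The hit-rate and occupancy thresholds, the way-granular shrinking and growing, and all names are assumptions for illustration; the patent does not specify them.

```python
def target_active_ways(hit_rate, occupancy, total_ways, active_ways,
                       min_ways=1, hit_floor=0.90, occ_floor=0.50):
    """Pick how many cache ways should stay powered.

    If the hit rate stays high while occupancy is low, one way can be
    power-gated (turned off) to save leakage power; if the hit rate
    drops, a gated way is brought back online so performance does not
    degrade.  Thresholds are illustrative, not from the patent.
    """
    if hit_rate >= hit_floor and occupancy <= occ_floor and active_ways > min_ways:
        return active_ways - 1   # shrink: power-gate one way
    if hit_rate < hit_floor and active_ways < total_ways:
        return active_ways + 1   # grow: re-enable one way
    return active_ways
```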
  • Patent number: 10541044
    Abstract: Providing efficient handling of memory array failures in processor-based systems is disclosed. In this regard, in one aspect, a memory controller of a processor-based device is configured to detect a defect within a memory element of a plurality of memory elements of a memory array. In response, a disable register of one or more disable registers is set to correspond to the memory element to indicate that the memory element is disabled. The memory controller receives a memory access request to a memory address corresponding to the memory element, and determines, based on one or more disable registers, whether the memory element is disabled. If so, the memory controller disallows the memory access request. Some aspects may provide that the memory controller, in response to detecting the defect, provides a failure indication to an executing process, and subsequently receives, from the executing process, a request to set the disable register.
    Type: Grant
    Filed: July 6, 2017
    Date of Patent: January 21, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Thomas Philip Speier, Viren Ramesh Patel, Michael Phan, Manish Garg, Kevin Magill, Paul Steinmetz, Clint Mumford, Kshitiz Saxena
  • Patent number: 10503655
    Abstract: The described embodiments include a computing device that caches data acquired from a main memory in a high-bandwidth memory (HBM), the computing device including channels for accessing data stored in corresponding portions of the HBM. During operation, the computing device sets each of the channels so that data blocks stored in the corresponding portions of the HBM include corresponding numbers of cache lines. Based on records of accesses of cache lines in the HBM that were acquired from pages in the main memory, the computing device sets a data block size for each of the pages, the data block size being a number of cache lines. The computing device stores, in the HBM, data blocks acquired from each of the pages in the main memory using a channel having a data block size corresponding to the data block size for each of the pages.
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: December 10, 2019
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Mitesh R. Meswani, Jee Ho Ryoo
  • Patent number: 10474486
    Abstract: Various systems, methods, and processes for accelerating data access in application and testing environments are disclosed. A production dataset is received from a storage system, and cached in a consolidated cache. The consolidated cache is implemented by an accelerator virtual machine. A file system client intercepts a request for the production dataset from one or more application virtual machines, and transmits the request to the accelerator virtual machine. The accelerator virtual machine serves the production dataset to the one or more application virtual machines from the consolidated cache.
    Type: Grant
    Filed: August 28, 2015
    Date of Patent: November 12, 2019
    Assignee: Veritas Technologies LLC
    Inventors: Chirag Dalal, Vaijayanti Bharadwaj
  • Patent number: 10437822
    Abstract: In one respect, there is provided a method. The method can include identifying, based on a plurality of queries executed at a distributed database, a disjoint table set. The identifying of the disjoint table set can include: identifying a first table used in executing a first query; identifying a second query also using the first table used in executing the first query; identifying a second table used in executing the second query but not in executing the first query; and including, in the disjoint table set, the first table and the second table. The method can further include allocating, based at least on the first disjoint table set, a storage and/or management of the first disjoint table set such that the first disjoint table set is stored at and/or managed by at least one node in the distributed database. Related systems and articles of manufacture are also disclosed.
    Type: Grant
    Filed: March 6, 2017
    Date of Patent: October 8, 2019
    Assignee: SAP SE
    Inventors: Antje Heinle, Hans-Joerg Leu
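The identification step above — linking tables that appear together in some query, directly or transitively — is naturally expressed with union-find. A sketch of that step only, with assumed names; the allocation of each disjoint set to database nodes is omitted.

```python
def disjoint_table_sets(queries):
    """Group tables into disjoint table sets: two tables land in the same
    set when some query uses both, directly or transitively.

    queries: list of sets of table names used by each executed query.
    """
    parent = {}

    def find(t):
        parent.setdefault(t, t)
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t

    for tables in queries:
        tables = sorted(tables)
        for t in tables:
            find(t)                        # register every table seen
        for other in tables[1:]:
            parent[find(tables[0])] = find(other)

    groups = {}
    for t in parent:
        groups.setdefault(find(t), set()).add(t)
    return sorted(groups.values(), key=lambda s: sorted(s))
```

For instance, queries over `{t1}`, `{t1, t2}`, and `{t3}` yield two disjoint sets: `{t1, t2}` and `{t3}`.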
  • Patent number: 10417139
    Abstract: A list of a first type of tracks in a cache is generated. A list of a second type of tracks in the cache is generated, wherein I/O operations are completed relatively faster to the first type of tracks than to the second type of tracks. A determination is made as to whether to demote a track from the list of the first type of tracks or from the list of the second type of tracks.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: September 17, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta
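The abstract leaves the demotion determination open. One plausible policy, sketched under stated assumptions: demote from the slow list only when its oldest track is markedly older than the fast list's, since fast tracks are cheaper to restage. The age-bias heuristic and names are invented for illustration.

```python
def choose_demote(fast_oldest_ts, slow_oldest_ts, now, age_bias=2.0):
    """Pick which LRU list to demote a track from.

    fast_oldest_ts / slow_oldest_ts: last-access timestamps of the oldest
    track on each list (I/O completes faster to fast-type tracks).
    Demote from the slow list only when its oldest track's age exceeds
    the fast list's oldest by the bias factor.
    """
    fast_age = now - fast_oldest_ts
    slow_age = now - slow_oldest_ts
    return "slow" if slow_age > age_bias * fast_age else "fast"
```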
  • Patent number: 10387329
    Abstract: Profiling cache replacement is a technique for managing data migration between a main memory and a cache memory to improve overall system performance. A profiler maintains counters that count memory requests for access to the pages maintained in both the cache memory and the main memory. Based on this access-request count information, a mover moves pages between the main and cache memories. For example, the mover can swap little-requested pages of the cache memory with highly-requested pages of the main memory. The mover can do so, for instance, when the counters indicate that the number of page access requests for highly-requested pages of the main memory is greater than the number of page access requests for little-requested pages of the cache memory. To avoid impeding the operations of memory users, the mover can perform page swapping in the background at predetermined time intervals, such as once every microsecond (µs).
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: August 20, 2019
    Assignee: Google LLC
    Inventor: Chih-Chung Chang
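The counter-driven swap rule can be sketched like this. The greedy pairing of the coldest cached pages with the hottest main-memory pages, and all names, are illustrative assumptions; the patent only requires that a more-requested main-memory page displace a less-requested cached page.

```python
def plan_swaps(cache_counts, main_counts):
    """Plan page swaps between cache memory and main memory.

    cache_counts / main_counts: dicts mapping page id -> access-request
    count, as maintained by the profiler.  A main-memory page is swapped
    in whenever its request count exceeds that of the least-requested
    cached page it is paired with.
    """
    swaps = []
    cold = sorted(cache_counts.items(), key=lambda kv: kv[1])              # coldest first
    hot = sorted(main_counts.items(), key=lambda kv: kv[1], reverse=True)  # hottest first
    for (cold_page, cold_n), (hot_page, hot_n) in zip(cold, hot):
        if hot_n > cold_n:
            swaps.append((cold_page, hot_page))  # evict cold_page, cache hot_page
        else:
            break  # remaining pairs can only be less favorable
    return swaps
```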
  • Patent number: 10387050
    Abstract: Systems and methods for reducing problems and disadvantages associated with traditional approaches to providing accessibility and redundancy for access controller storage media are provided. A method for providing accessibility for storage media of an access controller in an information handling system may include: (i) emulating the storage media such that the storage media appears to an operating system executing on the information handling system as storage media locally attached to the information handling system; (ii) mounting the storage media such that data may be communicated between the storage media and a processor integral to the access controller; (iii) mounting a portion of a network-attached storage remote to the information handling system such that data may be communicated between the portion of the network-attached storage and the processor; and (iv) maintaining redundancy between the storage media and the portion of network-attached storage in accordance with a redundancy policy.
    Type: Grant
    Filed: June 22, 2015
    Date of Patent: August 20, 2019
    Assignee: Dell Products L.P.
    Inventors: Shawn Joel Dube, Quy N. Hoang, Timothy M. Lambert
  • Patent number: 10379776
    Abstract: An aspect includes interlocking operations in an address-sliced cache system. A computer-implemented method includes determining whether a dynamic memory relocation operation is in process in the address-sliced cache system. Based on determining that the dynamic memory relocation operation is in process, a key operation is serialized to maintain a sequenced order of completion of the key operation across a plurality of slices and pipes in the address-sliced cache system. Based on determining that the dynamic memory relocation operation is not in process, a plurality of key operation requests is allowed to launch across two or more of the slices and pipes in parallel in the address-sliced cache system while ensuring that only one instance of the key operations is in process across all of the slices and pipes at a same time.
    Type: Grant
    Filed: May 24, 2017
    Date of Patent: August 13, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna P. Berger, Michael A. Blake, Ashraf Elsharif, Kenneth D. Klapproth, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy
  • Patent number: 10359933
    Abstract: A memory having a memory controller is configured to operate a hybrid cache including a dynamic cache including x-level cell (XLC) (e.g., multi-level cell (MLC)) blocks and a static cache including single level cell (SLC) blocks. A method of operating the memory includes storing at least a portion of host data into the SLC blocks as static cache; and storing at least another portion of host data into XLC blocks in an SLC mode as dynamic cache responsive to a burst of host data being determined to be greater than the static cache can handle. At least one of the static cache or dynamic cache may be disabled based on monitoring a workload of the hybrid cache relative to a Total Bytes Written (TBW) specification, such as by counting program-erase (PE) cycles of different portions of memory, or responsive to the workload exceeding a predetermined threshold defining one or more switch points.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: July 23, 2019
    Assignee: Micron Technology, Inc.
    Inventors: Kishore K. Muchherla, Ashutosh Malshe, Sampath K. Ratnam, Peter Feeley, Michael G. Miller, Christopher S. Hale, Renato C. Padilla
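The burst-handling policy above — spill host data into XLC blocks operated in SLC mode once the static SLC cache cannot absorb it — can be sketched as a simple split. Names and the byte-level accounting are assumptions for illustration.

```python
def place_host_burst(burst_size, static_free, dynamic_free):
    """Split a burst of host data between the static SLC cache and the
    dynamic cache (XLC blocks written in SLC mode).

    Returns (bytes_to_static, bytes_to_dynamic).  Data beyond both
    capacities would have to wait or bypass the cache (not modeled).
    """
    to_static = min(burst_size, static_free)
    to_dynamic = min(burst_size - to_static, dynamic_free)
    return to_static, to_dynamic
```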
  • Patent number: 10360156
    Abstract: A method of operating a data storage device in which a nonvolatile memory is included and a mapping table defining a mapping relation between a physical address and a logical address of the nonvolatile memory is stored in a host memory buffer of a host memory includes requesting a host for an asynchronous event based on information about a map miss that the mapping relation about the logical address received from the host is not included in the mapping table, receiving information about the host memory buffer adjusted by the host based on the asynchronous event, and updating the mapping table to the adjusted host memory buffer with reference to the information about the host memory buffer. A method of operating a data storage device according to example embodiments of the inventive concept can reduce the number of map misses or improve reliability of a nonvolatile memory.
    Type: Grant
    Filed: July 18, 2017
    Date of Patent: July 23, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Eun-Jin Yun, Sil Wan Chang
  • Patent number: 10346308
    Abstract: Techniques described herein generally include methods and systems related to cache partitioning in a chip multiprocessor. Cache-partitioning for a single thread or application between multiple data sources improves energy or latency efficiency of a chip multiprocessor by exploiting variations in energy cost and latency cost of the multiple data sources. Partition sizes for each data source may be selected using an optimization algorithm that minimizes or otherwise reduces latencies or energy consumption associated with cache misses.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: July 9, 2019
    Assignee: Empire Technology Development LLC
    Inventor: Yan Solihin
  • Patent number: 10331573
    Abstract: Techniques are provided to adjust the behavior of a cache based on a count of cache misses for items recently evicted. In an embodiment, a computer responds to evicting a particular item (PI) from a cache by storing a metadata entry for the PI into memory. In response to a cache miss for the PI, the computer detects whether or not the metadata entry for the PI resides in memory. When the metadata entry for the PI is detected in memory, the computer increments a victim hit counter (VHC) that may be used to calculate how much avoidable thrashing the cache is experiencing, that is, how much thrashing would be reduced if the cache were expanded. Either immediately or arbitrarily later, the computer adjusts a policy of the cache based on the VHC's value. For example, the computer may adjust the capacity of the cache based on the VHC.
    Type: Grant
    Filed: August 25, 2017
    Date of Patent: June 25, 2019
    Assignee: Oracle International Corporation
    Inventors: Justin Matthew Lewis, Zuoyu Tao, Jia Shi, Kothanda Umamageswaran
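The evicted-metadata plus victim-hit-counter mechanism resembles a ghost (shadow) list. A minimal sketch: class and field names are assumptions, and the unbounded `evicted` set is a simplification — a real implementation would bound the metadata it retains.

```python
from collections import OrderedDict

class ShadowedCache:
    """LRU cache that keeps metadata entries for evicted items and counts
    'victim hits': misses that would have been hits in a larger cache."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()   # key -> cached value (LRU order)
        self.evicted = set()         # metadata entries for evicted keys
        self.victim_hits = 0         # the VHC

    def access(self, key):
        """Touch key; return True on a cache hit."""
        if key in self.items:
            self.items.move_to_end(key)
            return True
        if key in self.evicted:      # miss on a recently evicted item
            self.victim_hits += 1
            self.evicted.discard(key)
        self.items[key] = None
        if len(self.items) > self.capacity:
            victim, _ = self.items.popitem(last=False)  # evict LRU item
            self.evicted.add(victim)                    # keep its metadata
        return False
```

A high VHC relative to total misses signals avoidable thrashing, which the policy layer can answer by growing the cache.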
  • Patent number: 10318421
    Abstract: An infrequently used method is selected for eviction from a code cache repository by accessing a memory management data structure from an operating system, using the data structure to identify a first set of pages that are infrequently referenced relative to a second set of pages, determining whether or not a page of the first set of pages is part of a code cache repository and includes at least one method, in response to the page of the first set of pages being part of the code cache repository and including at least one method, flagging the at least one method as a candidate for eviction from the code cache repository, determining whether or not a code cache storage space limit has been reached for the code cache repository, and, in response to the storage space limit being reached, evicting the at least one flagged method from the code cache repository.
    Type: Grant
    Filed: April 12, 2017
    Date of Patent: June 11, 2019
    Assignee: International Business Machines Corporation
    Inventor: Marius Pirvu
  • Patent number: 10318256
    Abstract: Computer code from an application program comprising a plurality of modules that each comprise a separately loadable file is code cached in a shared and persistent caching system. A shared code caching engine receives native code comprising at least a portion of a single module of the application program, and stores runtime data corresponding to the native code in a cache data file in the non-volatile memory. The engine then converts the cache data file into a code cache file and enables the code cache file to be pre-loaded as a runtime code cache. These steps are repeated to store a plurality of separate code cache files at different locations in non-volatile memory.
    Type: Grant
    Filed: November 27, 2012
    Date of Patent: June 11, 2019
    Assignee: VMware, Inc.
    Inventors: Derek Bruening, Vladimir L. Kiriansky
  • Patent number: 10296466
    Abstract: A device includes: a cache memory configured to store a first list and a second list, and a processor. The first list includes one or more entries that include any one of data pieces in a storage device and information indicating a location of the data piece on the storage device, and the second list includes one or more entries that include information indicating a location of an already discarded data piece on the storage device, the already discarded data piece having been included in an entry that has been evicted from the first list. When updating the data piece of an entry in the first list, the processor counts the number of entries that contain updated data pieces and are consecutive from an eviction target entry, and, based on a certain rule, writes the data of a target entry to the storage device and discards the data from the cache memory.
    Type: Grant
    Filed: April 12, 2017
    Date of Patent: May 21, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Jun Kato
  • Patent number: 10282106
    Abstract: An operating method of a memory controller includes steps of: configuring the memory controller to receive a read command and read at least one piece of first data stored in a non-volatile memory according to the received read command; configuring the memory controller to determine whether a read count of the at least one piece of first data is greater than a set value; and configuring the memory controller to copy and store the at least one piece of first data in a data temporary storage device when the read count of the at least one piece of first data is determined to be greater than the set value. A data storage device and another operating method are also provided.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: May 7, 2019
    Assignee: SILICON MOTION, INC.
    Inventors: Yen-Ting Yeh, Teng-Chi Liang
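The read-count promotion rule above fits in a few lines. The threshold value, the dict-based stand-ins for the non-volatile memory and the data temporary storage device, and all names are illustrative assumptions.

```python
def maybe_cache_on_read(key, read_counts, cache, threshold=3):
    """Record one read of `key`; copy its data into the temporary storage
    (here a plain dict) once its read count exceeds the set value.

    Returns True exactly when the copy happens on this read.
    """
    read_counts[key] = read_counts.get(key, 0) + 1
    if read_counts[key] > threshold and key not in cache:
        cache[key] = f"data-for-{key}"   # stand-in for the copied data
        return True
    return False
```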
  • Patent number: 10261859
    Abstract: Methods and apparatus relating to techniques for avoiding cache lookup for cold cache. In an example, an apparatus comprises logic, at least partially comprising hardware logic, to receive metadata from an application, wherein the metadata indicates one or more processing operations which can accommodate a predetermined level of bit errors in read operations from memory, determine, from the metadata, pixel data for which error correction code bypass is acceptable, and generate one or more error correction code bypass hints for subsequent cache access to the pixel data for which error correction code bypass is acceptable, and transmit the one or more error correction code bypass hints to a graphics processing pipeline. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: April 16, 2019
    Assignee: INTEL CORPORATION
    Inventors: Altug Koker, Abhishek R. Appu, Kiran C. Veernapu, Joydeep Ray
  • Patent number: 10223435
    Abstract: A parallel track/sector switching device and associated method is provided. The method includes identifying data replication sources and locating data replication targets associated with the data replication sources. Data replication instances associated with moving data from the data replication sources to the data replication targets are determined. A first data replication instance for moving first data from a first data replication source to a first data replication target is determined and an antenna capacity associated with the first data replication source and the first data replication target is identified. A memory to track ID map associated with a storage device of the first data replication target is identified and it is determined if a last replication slot has been allotted to the first data replication target based on the memory to track ID map.
    Type: Grant
    Filed: July 26, 2018
    Date of Patent: March 5, 2019
    Assignee: International Business Machines Corporation
    Inventors: Faried Abrahams, Gandhi Sivakumar, Lennox E. Thomas
  • Patent number: 10210167
    Abstract: Techniques are described for managing access to data storage in a plurality of bitstore nodes. In some situations, a data storage service uses multiple bitstore nodes to store data accessible via a network, such as the Internet. In some situations, multi-level cache systems are employed by each bitstore node and managed, for example to increase throughput of the data storage service.
    Type: Grant
    Filed: May 7, 2012
    Date of Patent: February 19, 2019
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventor: James C. Sorenson, III
  • Patent number: 10204056
    Abstract: A microprocessor includes a cache memory and a control module. The control module makes the cache size zero and subsequently makes it between zero and the full size of the cache, counts a number of evictions from the cache after making the size between zero and full, and increases the size when the number of evictions reaches a predetermined number of evictions. Alternatively, a microprocessor includes: multiple cores, each having a first cache memory; a second cache memory shared by the cores; and a control module. The control module puts all the cores to sleep and makes the second cache size zero and receives a command to wake up one of the cores. The control module counts a number of evictions from the first cache of the awakened core after receiving the command and makes the second cache size non-zero when the number of evictions reaches a predetermined number of evictions.
    Type: Grant
    Filed: February 25, 2014
    Date of Patent: February 12, 2019
    Assignee: VIA ALLIANCE SEMICONDUCTOR CO., LTD
    Inventors: G. Glenn Henry, Stephan Gaskins
  • Patent number: 10176060
    Abstract: Provided are a memory apparatus for applying fault repair based on a physical region and a virtual region and a control method thereof. That is, the fault repair is applied based on the physical region and the virtual region which use an information storage table of a virtual basic region using a hash function, thereby improving efficiency of the fault repair.
    Type: Grant
    Filed: January 30, 2017
    Date of Patent: January 8, 2019
    Assignee: Korea University Research and Business Foundation
    Inventors: Seon Wook Kim, Ho Kwon Kim, Jae Yung Jun, Kyu Hyun Choi
  • Patent number: 10152318
    Abstract: Systems, methods, and other embodiments associated with introducing a new data structure to an executing application are described. In one embodiment, a method includes executing an application as an executing application to process data of a data structure maintained according to a data model. The example method may also include receiving a new data structure definition of a new data structure to define for the data model. The example method may also include performing impact analysis to determine whether the executing application is capable of processing data of the new data structure. The example method may also include updating the data model to include the new data structure definition to create an updated data model. The example method may also include generating control instructions to instruct the executing application to utilize data from the new data structure according to the updated data model.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: December 11, 2018
    Assignee: ORACLE FINANCIAL SERVICES SOFTWARE LIMITED
    Inventors: Rajaram N. Vadapandeshwara, Seema M. Monteiro, Jesna Jacob, Tara Kant
  • Patent number: 10133673
    Abstract: The embodiments implement file size variance caching optimizations. The optimizations are based on a differentiated caching implementation involving a small size content optimized first cache and a large size content optimized second cache. The first cache reads and writes data using a first block size. The second cache reads and writes data using a different second block size that is larger than the first block size. A request management server controls request distribution across the first and second caches. The request management server differentiates large size content requests from small size content requests. The request management server uses a first request distribution scheme to restrict large size content request distribution across the first cache and a second request distribution scheme to restrict small size content request distribution across the second cache.
    Type: Grant
    Filed: March 9, 2016
    Date of Patent: November 20, 2018
    Assignee: Verizon Digital Media Services Inc.
    Inventors: Harkeerat Singh Bedi, Amir Reza Khakpour, Derek Shiell
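The size-differentiated routing plus per-tier block sizes can be sketched as follows. The cutoff, the block sizes, and the tier names are invented for illustration; the patent specifies only that the two caches use different block sizes and serve different request classes.

```python
SMALL_BLOCK = 4 * 1024        # first cache: small block size
LARGE_BLOCK = 1024 * 1024     # second cache: larger block size

def route_request(content_size, cutoff=8 * 1024 * 1024):
    """Pick the cache tier and its block size for a content request."""
    if content_size < cutoff:
        return ("small-cache", SMALL_BLOCK)
    return ("large-cache", LARGE_BLOCK)

def blocks_needed(content_size, cutoff=8 * 1024 * 1024):
    """Tier plus number of cache blocks the content occupies there."""
    tier, block = route_request(content_size, cutoff)
    return tier, -(-content_size // block)   # ceiling division
```

Matching block size to content size is the point: small objects on the large-block tier would waste most of each block, while large objects on the small-block tier would fragment into thousands of blocks.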
  • Patent number: 10114751
    Abstract: Disclosed is an improved approach to implement memory-efficient cache size estimations. A HyperLogLog is used to efficiently approximate a miss ratio curve (MRC) with sufficient granularity to size caches.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: October 30, 2018
    Assignee: Nutanix, Inc.
    Inventors: Rickard Edward Faith, Peter Scott Wyckoff
  • Patent number: 10114753
    Abstract: Provided are a computer program product, system, and method for using cache lists for multiple processors to cache and demote tracks in a storage system. Tracks from the storage that are stored in the cache are indicated in lists, wherein there is one list for each of a plurality of processors. Each of the processors processes the list for that processor to process the tracks in the cache indicated on the list. A determination is made of one of the lists from which to select one of the tracks in the cache indicated in the determined list to demote. The selected track is demoted from the cache.
    Type: Grant
    Filed: August 21, 2015
    Date of Patent: October 30, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Matthew J. Kalos
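A minimal sketch of per-processor cache lists with a single demotion pick. The oldest-timestamp selection rule is an assumption — the abstract says only that one of the lists is determined; the class and method names are likewise illustrative.

```python
from collections import deque

class MultiListCache:
    """One LRU list per processor; demotion scans the oldest entry of
    each list and demotes the globally oldest track."""

    def __init__(self, n_processors):
        self.lists = [deque() for _ in range(n_processors)]

    def add(self, proc, track, timestamp):
        """Processor `proc` caches `track` at `timestamp` (MRU end)."""
        self.lists[proc].append((track, timestamp))

    def demote(self):
        """Demote and return the oldest track across all lists."""
        candidates = [(lst[0][1], i) for i, lst in enumerate(self.lists) if lst]
        if not candidates:
            return None
        _, i = min(candidates)           # list holding the oldest track
        track, _ = self.lists[i].popleft()
        return track
```

Keeping one list per processor avoids lock contention on a single global LRU list; only the demotion pick has to look across lists.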
  • Patent number: 10089233
    Abstract: A method of partitioning a set-associative cache for a plurality of software components may comprise identifying a cache height equal to a number of sets in the set-associative cache based on hardware specifications of a computing platform. The method may further comprise determining at least one component demand set of the plurality of software components and dedicating a set in the set-associative cache for the at least one component demand set. The method may further comprise assembling a proportional component sequence of the at least one component demand set having a sequence length equal to an integer submultiple of the cache height. The method may further comprise concatenating assembled proportional component sequences to form a template for mapping a RAM to the dedicated sets in the set-associative cache.
    Type: Grant
    Filed: May 11, 2016
    Date of Patent: October 2, 2018
    Assignee: GE Aviation Systems, LLC
    Inventor: Christopher John Goebel
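The sequence-and-concatenate scheme can be illustrated with a small sketch. Here the sequence length is taken to be the total of the integer demand shares, assumed to be a submultiple of the cache height; the function name and interface are hypothetical:

```python
def build_template(cache_sets, demands):
    """Sketch: build a proportional sequence of component ids whose length
    divides the cache height, then concatenate copies of it to map every
    set. `demands` maps component -> integer share of sets."""
    seq_len = sum(demands.values())
    assert cache_sets % seq_len == 0, "demand total must divide cache height"
    # proportional component sequence, e.g. {"A": 2, "B": 1} -> [A, A, B]
    seq = [c for c, share in demands.items() for _ in range(share)]
    # concatenate copies to form the template covering all sets
    return seq * (cache_sets // seq_len)

mapping = build_template(16, {"A": 2, "B": 1, "C": 1})
# each component now owns a fixed, proportional subset of the 16 sets
```

Because the template tiles evenly, a RAM page's set index determines its owning component with a single modulo lookup, and no component's lines can evict another's.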
  • Patent number: 10078678
    Abstract: A parallel track/sector switching device and associated method is provided. The method includes identifying data replication sources and locating data replication targets associated with the data replication sources. Data replication instances associated with moving data from the data replication sources to the data replication targets are determined. A first data replication instance for moving first data from a first data replication source to a first data replication target is determined and an antenna capacity associated with the first data replication source and the first data replication target is identified. A memory to track ID map associated with a storage device of the first data replication target is identified and it is determined if a last replication slot has been allotted to the first data replication target based on the memory to track ID map.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: September 18, 2018
    Assignee: International Business Machines Corporation
    Inventors: Faried Abrahams, Gandhi Sivakumar, Lennox E. Thomas
  • Patent number: 10055161
    Abstract: In one aspect, a method includes splitting empty RAID stripes into sub-stripes and storing pages into the sub-stripes based on a compressibility score. In another aspect, a method includes reading pages from 1-stripes, storing compressed data in a temporary location, reading multiple stripes, determining compressibility score for each stripe and filling stripes based on the compressibility score. In a further aspect, a method includes scanning a dirty queue in a system cache, compressing pages ready for destaging, combining compressed pages in to one aggregated page, writing one aggregated page to one stripe and storing pages with same compressibility score in a stripe.
    Type: Grant
    Filed: January 20, 2016
    Date of Patent: August 21, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: David Meiri, Anton Kucherov, Vladimir Shveidel
  • Patent number: 9983999
    Abstract: A computing system includes: a memory storage unit, having memory blocks, configured as a memory cache to store storable objects; and a device control unit, coupled to the memory storage unit, configured to: calculate an entropy level for the storable objects based on an eviction policy; calculate a block entropy for each of the memory blocks based on the entropy level of the storable objects in the memory blocks; select an erase block from the memory blocks, wherein the erase block is an instance of the memory blocks with the lowest value of the block entropy; and perform an erase operation on the erase block.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: May 29, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Yang Seok Ki
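The erase-block selection step reduces to an argmin over per-block aggregates. A minimal sketch, assuming block entropy is the mean of its objects' entropy levels (the aggregation choice and names are illustrative):

```python
def pick_erase_block(blocks, entropy_of):
    """blocks: block_id -> list of cached objects.
    entropy_of: maps an object to an entropy level derived from the
    eviction policy. Returns the block with the lowest block entropy,
    per the abstract's selection rule."""
    def block_entropy(objs):
        return sum(entropy_of(o) for o in objs) / max(1, len(objs))
    return min(blocks, key=lambda b: block_entropy(blocks[b]))
```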
  • Patent number: 9983792
    Abstract: The present invention discloses a method comprising: sending a cache request; monitoring a power state; comparing said power state; allocating cache resources; filling a cache; updating said power state; and repeating said sending, said monitoring, said comparing, said allocating, said filling, and said updating until the workload is completed.
    Type: Grant
    Filed: March 10, 2016
    Date of Patent: May 29, 2018
    Assignee: Intel Corporation
    Inventors: Ryan D. Wells, Michael J. Muchnick, Chinnakrishnan S. Ballapuram
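The claim's send/monitor/compare/allocate/fill/update loop can be sketched as a power-driven cache resizer. Everything here is an assumption for illustration: `read_power` is a hypothetical sensor callback, and way-halving is just one possible allocation response:

```python
def run_workload(requests, power_budget, read_power, max_ways=8):
    """Toy loop in the spirit of the claim: monitor the power state,
    compare it to a budget, and resize the cache allocation before
    filling the cache for each request."""
    ways = max_ways
    for req in requests:
        power = read_power()                    # monitor power state
        if power > power_budget and ways > 1:   # compare: over budget, shrink
            ways //= 2
        elif power < power_budget and ways < max_ways:
            ways *= 2                           # headroom: grow again
        # ... fill the cache for `req` using `ways` ways (omitted) ...
        yield req, ways                         # update and repeat
```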
  • Patent number: 9965281
    Abstract: A unified architecture for dynamic generation, execution, synchronization and parallelization of complex instruction formats includes a virtual register file, register cache and register file hierarchy. A self-generating and synchronizing dynamic and static threading architecture provides efficient context switching.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: May 8, 2018
    Assignee: INTEL CORPORATION
    Inventor: Mohammad A. Abdallah
  • Patent number: 9952977
    Abstract: A method for managing a parallel cache hierarchy in a processing unit. The method including receiving an instruction that includes a cache operations modifier that identifies a level of the parallel cache hierarchy in which to cache data associated with the instruction; and implementing a cache replacement policy based on the cache operations modifier.
    Type: Grant
    Filed: September 24, 2010
    Date of Patent: April 24, 2018
    Assignee: NVIDIA CORPORATION
    Inventors: Steven James Heinrich, Alexander L. Minkin, Brett W. Coon, Rajeshwaran Selvanesan, Robert Steven Glanville, Charles McCarver, Anjana Rajendran, Stewart Glenn Carlton, John R. Nickolls, Brian Fahs
  • Patent number: 9954971
    Abstract: A cache server is operative as one of a set of cache servers of a distributed cache. The server includes a processor and a memory connected to the processor. The memory stores instructions executed by the processor to receive a cache storage request, establish a cache eviction requirement in response to the cache storage request, and identify an evict entry within a cache in response to the cache eviction requirement. The evict entry is selected from a random sampling of entries within the cache that are subject to an eviction policy that identifies a probabilistically favorable eviction candidate. The evict entry is removed from the cache. Content associated with the storage request is loaded into the cache.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: April 24, 2018
    Assignee: Hazelcast, Inc.
    Inventors: Greg Luck, Christoph Engelbert, Serkan Özal
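The random-sampling eviction described above has a compact general form. As a hedged sketch (a plain dict of last-access times stands in for the server's cache structures; the sample size is illustrative):

```python
import random

def evict_by_sampling(cache, sample_size=15, rng=random):
    """Evict the least-recently-used entry among a random sample of keys:
    a probabilistically favorable candidate found without scanning or
    ordering the whole cache. `cache` maps key -> last-access time."""
    sample = rng.sample(list(cache.keys()), min(sample_size, len(cache)))
    victim = min(sample, key=lambda k: cache[k])
    del cache[victim]
    return victim
```

The appeal is that no global LRU bookkeeping is needed: with a modest sample, the chosen victim is close to the true LRU entry with high probability.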
  • Patent number: 9952969
    Abstract: There are disclosed techniques for use in managing data storage in a data storage system which comprises a data storage device and a cache memory. In one example, a method comprises the following steps. An I/O request is received and a durability requirement of the I/O request data associated with the I/O request is determined. Based on the durability requirement of the I/O request data, the I/O request data is classified. The classified I/O request data is stored in the cache memory.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: April 24, 2018
    Assignee: EMC IP Holding Company LLC
    Inventor: David W. Harvey
  • Patent number: 9906805
    Abstract: In an image processing device, a motion image decoding processing unit extracts a feature amount of a target image to be decoded from an input stream, and changes a read size of a cache fill from an external memory to a cache memory, based on the feature amount. The feature amount represents an intra macro block ratio in, for example, one picture (frames or fields), or a motion vector variation. When the intra macro block ratio is high, the read size of the cache fill is decreased.
    Type: Grant
    Filed: December 16, 2015
    Date of Patent: February 27, 2018
    Assignee: Renesas Electronics Corporation
    Inventors: Keisuke Matsumoto, Katsushige Matsubara, Seiji Mochizuki, Toshiyuki Kaya, Hiroshi Ueda
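The feature-driven fill-size change can be sketched as a threshold rule. All numbers and names here are illustrative assumptions, not the device's actual sizes:

```python
def cache_fill_size(intra_mb_ratio, large=256, small=64, threshold=0.5):
    """Pick the cache-fill read size from the decoded stream's feature
    amount: a high intra-macroblock ratio means reference-frame reads are
    less reusable, so fetch smaller fills from external memory."""
    return small if intra_mb_ratio >= threshold else large
```

A production decoder would likely blend several feature amounts (the abstract also mentions motion-vector variation), but the shape of the decision is the same.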
  • Patent number: 9892180
    Abstract: A parallel track/sector switching device and associated method is provided. The method includes identifying data replication sources and locating data replication targets associated with the data replication sources. Data replication instances associated with moving data from the data replication sources to the data replication targets are determined. A first data replication instance for moving first data from a first data replication source to a first data replication target is determined and an antenna capacity associated with the first data replication source and the first data replication target is identified. A memory to track ID map associated with a storage device of the first data replication target is identified and it is determined if a last replication slot has been allotted to the first data replication target based on the memory to track ID map.
    Type: Grant
    Filed: November 21, 2014
    Date of Patent: February 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Faried Abrahams, Gandhi Sivakumar, Lennox E. Thomas
  • Patent number: 9817759
    Abstract: A multi-core CPU system includes a shared L2 cache, an access control logic circuit, a plurality of cores, each core configured to access the shared L2 cache through the access control logic circuit, and a size adjusting circuit configured to adjust a size of the shared L2 cache in response to an indication signal that indicates a number of operation cores among the plurality of cores.
    Type: Grant
    Filed: July 24, 2014
    Date of Patent: November 14, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Young Min Shin
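The size-adjusting circuit's behavior can be approximated by a proportional rule. A one-line sketch under assumed parameters (16 ways, at least one way always powered; not the patented circuit):

```python
def l2_ways_enabled(active_cores, total_cores, total_ways=16):
    """Scale the shared L2 size with the number of operating cores
    reported by the indication signal; keep at least one way powered."""
    return max(1, total_ways * active_cores // total_cores)
```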
  • Patent number: 9811530
    Abstract: Data from a group of distributed processes is written to a shared file using a parallel log-structured file system. A metadata server of a cluster file system is configured to communicate with a plurality of object storage servers of the cluster file system over a network. The metadata server is further configured to implement a Parallel Log Structured File System (PLFS) library to coordinate storage on one or more of the plurality of object storage servers of a plurality of portions of a shared file generated by a plurality of applications executing on compute nodes of the cluster file system and to store metadata for the plurality of portions of the shared file. Concurrent writes to the shared file are decoupled by writing the plurality of portions of the shared file generated by each of the plurality of applications to independent write streams for each application.
    Type: Grant
    Filed: June 29, 2013
    Date of Patent: November 7, 2017
    Assignee: EMC IP Holding Company LLC
    Inventors: John M. Bent, Sorin Faibish, Uday Gupta
  • Patent number: 9811471
    Abstract: Systems and methods for enabling programmable cache size via Class of Service (COS) cache allocation are described. In some embodiments, a method may include: identifying a resource available to an Information Handling System (IHS) having a cache, where the resource is insufficient to allow the entire cache to be flushed during a power outage event; dividing a cache into at least a first portion and a second portion using a COS cache allocation, where the second portion has a size that is entirely flushable with the resource; and flushing the second portion of the cache during the power outage event.
    Type: Grant
    Filed: March 8, 2016
    Date of Patent: November 7, 2017
    Assignee: Dell Products, L.P.
    Inventors: John Erven Jenne, Stuart A. Berke
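The sizing constraint behind the COS split is simple arithmetic: the flushable portion must not exceed what the available hold-up energy can destage. A sketch with hypothetical units (joules per MB flushed):

```python
def partition_for_flush(cache_mb, holdup_joules, joules_per_mb):
    """Split the cache so the second (write-back) portion is entirely
    flushable with the hold-up energy available during a power outage;
    the remainder forms the first, non-flushed portion."""
    flushable = min(cache_mb, int(holdup_joules // joules_per_mb))
    return cache_mb - flushable, flushable
```

The COS allocation hardware then pins each portion to its own ways, so only the flushable portion ever holds dirty data.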
  • Patent number: 9779027
    Abstract: Aspects of the present disclosure disclose systems and methods for managing a level-two persistent cache. In various aspects, a solid-state drive is employed as a level-two cache to expand the capacity of existing caches. In particular, any data that is scheduled to be evicted or otherwise removed from a level-one cache is stored in the level-two cache with corresponding metadata in a manner that is quickly retrievable. The data contained within the level-two cache is managed using a cache list that maintains data chunk entries added to the level-two cache based on the temporal access of each data chunk.
    Type: Grant
    Filed: October 18, 2012
    Date of Patent: October 3, 2017
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Mark Maybee, Mark J. Musante, Victor Latushkin
  • Patent number: 9740528
    Abstract: A scheduling method whereby a virtualization unit, which has multiple nodes containing physical CPUs and physical memories, and which operates a virtual computer by generating logical partitions from the computer resources of the multiple nodes, allocates a physical CPU to a logical CPU. The multiple nodes are coupled via an interconnect, and the virtualization unit selects the physical CPU to be allocated to the logical CPU and measures performance information related to the performance when the physical memory is accessed from the logical CPU. When the performance information satisfies a prescribed threshold value, the physical CPU allocated to the logical CPU is selected from the same node as that of the previously allocated physical CPU; when it does not, the physical CPU is selected from a different node than that of the previously allocated physical CPU.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: August 22, 2017
    Assignee: Hitachi, Ltd.
    Inventors: Shoichi Takeuchi, Shuhei Matsumoto
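The node-selection rule reduces to a threshold test on measured memory performance. A minimal sketch (round-robin migration is an illustrative placeholder for however the hypervisor actually picks the other node):

```python
def pick_node(prev_node, n_nodes, measured_latency_ns, threshold_ns):
    """Keep the logical CPU on its previous node while measured memory
    access performance meets the threshold; otherwise allocate a
    physical CPU from a different node."""
    if measured_latency_ns <= threshold_ns:
        return prev_node
    return (prev_node + 1) % n_nodes
```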
  • Patent number: 9727239
    Abstract: An electronic system includes: an interface block of a storage device configured to process system information from a system device; a memory block of the storage device, coupled to the interface block, partitioned by the interface block configured to process the system information for partitioning the memory block; and a storage block of a storage device, coupled to the memory block, configured to access a data block of the storage block provided to the system device.
    Type: Grant
    Filed: May 29, 2015
    Date of Patent: August 8, 2017
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dimin Niu, Hongzhong Zheng, Suhas, Krishna Malladi