Hierarchical Caches Patents (Class 711/122)
-
Patent number: 10402328
Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
Type: Grant
Filed: January 2, 2018
Date of Patent: September 3, 2019
Assignee: International Business Machines Corporation
Inventors: Ekaterina M Ambroladze, Deanna P Berger, Michael F Fee, Arthur J O'Neill, Robert J Sonnelitter, III
-
Patent number: 10394712
Abstract: Topology of clusters of processors of a computer configuration, configured to support any of a plurality of cache coherency protocols, is discovered at initialization time to determine which one of the plurality of cache coherency protocols is to be used to handle coherency requests of the configuration.
Type: Grant
Filed: January 2, 2018
Date of Patent: August 27, 2019
Assignee: International Business Machines Corporation
Inventors: Ekaterina M Ambroladze, Deanna P Berger, Michael F Fee, Arthur J O'Neill, Robert J Sonnelitter, III
-
Patent number: 10387299
Abstract: The present disclosure includes apparatuses and methods related to shifting data. An example apparatus comprises a cache coupled to an array of memory cells and a controller. The controller is configured to perform a first operation beginning at a first address to transfer data from the array of memory cells to the cache, and perform a second operation concurrently with the first operation, the second operation beginning at a second address.
Type: Grant
Filed: July 20, 2016
Date of Patent: August 20, 2019
Assignee: Micron Technology, Inc.
Inventors: Daniel B. Penney, Gary L. Howe
-
Patent number: 10379855
Abstract: A processor of an aspect includes a plurality of packed data registers, and a decode unit to decode an instruction. The instruction is to indicate a packed data register of the plurality of packed data registers that is to store a source packed memory address information. The source packed memory address information is to include a plurality of memory address information data elements. An execution unit is coupled with the decode unit and the plurality of packed data registers. The execution unit, in response to the instruction, is to load a plurality of data elements from a plurality of memory addresses that are each to correspond to a different one of the plurality of memory address information data elements, and store the plurality of loaded data elements in a destination storage location. The destination storage location does not include a register of the plurality of packed data registers.
Type: Grant
Filed: September 30, 2016
Date of Patent: August 13, 2019
Assignee: Intel Corporation
Inventors: William C. Hasenplaugh, Chris J. Newburn, Simon C. Steely, Jr., Samantika S. Sury
-
Patent number: 10379905
Abstract: Provided are a computer program product, system, and method for distributing tracks to add to cache to processor cache lists based on counts of processor access requests to the cache. There are a plurality of lists, wherein there is one list for each of the plurality of processors. A determination is made as to whether the counts of processor accesses of tracks are unbalanced. A first caching method is used to select one of the lists to indicate a track to add to the cache in response to determining that the counts are unbalanced. A second caching method is used to select one of the lists to indicate the track to add to the cache in response to determining that the counts are balanced. The first and second caching methods provide different techniques for selecting one of the lists.
Type: Grant
Filed: March 12, 2018
Date of Patent: August 13, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta
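The balanced/unbalanced list selection described in this abstract can be sketched in a few lines. This is a minimal illustration, not IBM's patented implementation: the imbalance tolerance, the least-loaded choice for the unbalanced case, and round-robin for the balanced case are all assumptions.

```python
def counts_unbalanced(access_counts, tolerance=0.25):
    """Treat counts as 'unbalanced' when the busiest processor exceeds
    the average by more than the given tolerance (assumed heuristic)."""
    avg = sum(access_counts) / len(access_counts)
    return max(access_counts) > avg * (1 + tolerance)

def select_list(access_counts, round_robin_state):
    """First method: when counts are unbalanced, pick the least-loaded
    processor's list. Second method: when balanced, rotate round-robin.
    round_robin_state is a one-element list holding the rotation index."""
    if counts_unbalanced(access_counts):
        return access_counts.index(min(access_counts))
    idx = round_robin_state[0] % len(access_counts)
    round_robin_state[0] += 1
    return idx
```

With balanced counts the selector rotates across lists; with one hot processor it steers new tracks toward the coolest list.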
-
Patent number: 10372609
Abstract: An embodiment of a semiconductor package apparatus may include technology to determine if a memory request for a second level memory results in a miss with respect to a first level memory, determine if a range of the second level memory corresponding to the memory request is unwritten if the memory request results in the miss with respect to the first level memory, and blank a corresponding range of the first level memory if the range of the second level memory corresponding to the memory request is determined to be unwritten. Other embodiments are disclosed and claimed.
Type: Grant
Filed: September 14, 2017
Date of Patent: August 6, 2019
Assignee: Intel Corporation
Inventor: Sagi Weiss
-
Patent number: 10365980
Abstract: An apparatus in one embodiment comprises a storage system including a plurality of storage nodes each associated with one or more storage devices. The storage system is configured to provide at least one virtual volume distributed over the storage nodes for utilization by a plurality of host devices. The storage nodes are configured to support selection between multiple operating modes for handling input-output operations directed to the distributed virtual volume by the host devices. The multiple operating modes comprise at least a cached mode of operation, in which consistency across the storage nodes for the distributed virtual volume when accessed by different ones of the host devices is ensured utilizing a distributed cache coherence protocol implemented by cooperative interaction of cache controllers of respective ones of the storage nodes, and a cacheless mode of operation, in which consistency is ensured without utilizing the distributed cache coherence protocol and its associated cache controllers.
Type: Grant
Filed: October 31, 2017
Date of Patent: July 30, 2019
Assignee: EMC IP Holding Company LLC
Inventors: Steven Bromling, Joshua Baergen, Paul A. Shelley
-
Patent number: 10366763
Abstract: Disclosed in some examples are methods, systems, and machine readable mediums which compensate for read-disturb effects by shifting the read voltages used to read the value in a NAND cell based upon a read counter. For example, the NAND memory device may have a read counter that corresponds to a group of NAND cells (e.g., a page, a block, a superblock). Any time a NAND cell in the group is read, the read counter may be incremented. The read voltage, Vread, may be adjusted based on the read counter to account for the read disturb voltage.
Type: Grant
Filed: October 31, 2017
Date of Patent: July 30, 2019
Assignee: Micron Technology, Inc.
Inventors: Harish Singidi, Kishore Kumar Muchherla, Gianni Stephen Alsasua, Ashutosh Malshe, Sampath Ratnam, Gary F. Besinga, Michael G. Miller
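The counter-driven read-voltage shift described in this abstract can be illustrated with a toy model. All constants here (base offset, step size, reads per step) are invented for illustration and do not come from the patent.

```python
BASE_VREAD_MV = 0          # nominal read-voltage offset, in millivolts (assumed)
STEP_MV = -10              # shift applied per interval of reads (assumed)
READS_PER_STEP = 50_000    # reads of the cell group before another shift (assumed)

def adjusted_vread_mv(read_counter):
    """Return the read-voltage offset for a cell group (page, block, or
    superblock) whose shared read counter has the given value: every
    READS_PER_STEP reads, shift Vread by another STEP_MV to compensate
    for accumulated read disturb."""
    return BASE_VREAD_MV + STEP_MV * (read_counter // READS_PER_STEP)
```

The key idea is only that the shift is a monotonic function of the group's read counter; a real device would derive the curve from NAND characterization data.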
-
Patent number: 10360150
Abstract: Techniques for managing memory in a multiprocessor architecture are presented. Each processor of the multiprocessor architecture includes its own local memory. When data is to be removed from a particular local memory or written to storage, that data is transitioned to another local memory associated with a different processor of the multiprocessor architecture. If the data is then requested from the processor which originally had the data, the data is acquired from the local memory of the particular processor that received and now has the data.
Type: Grant
Filed: February 14, 2011
Date of Patent: July 23, 2019
Assignee: Suse LLC
Inventor: Nikanth Karthikesan
-
Patent number: 10353817
Abstract: A simultaneous multithread (SMT) processor having a shared dispatch pipeline includes a first circuit that detects a cache miss thread. A second circuit determines a first cache hierarchy level at which the detected cache miss occurred. A third circuit determines a Next To Complete (NTC) group in the thread and a plurality of additional groups (X) in the thread. The additional groups (X) are dynamically configured based on the detected cache miss. A fourth circuit determines whether any groups in the thread are younger than the determined NTC group and the plurality of additional groups (X), and flushes all the determined younger groups from the cache miss thread.
Type: Grant
Filed: March 7, 2017
Date of Patent: July 16, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Gregory W. Alexander, Brian D. Barrick, Thomas W. Fox, Christian Jacobi, Anthony Saporito, Somin Song, Aaron Tsai
-
Patent number: 10346193
Abstract: The disclosed computer-implemented method for efficient placement of virtual machines may include (1) allocating space in a cache shared by a group of virtual machines to add a new virtual machine, (2) receiving data requests from the new virtual machine for the cache, (3) recording each of the data requests as a cache hit or a cache miss in a list based on availability of the data in the cache, (4) determining a ratio of cache hits to cache misses for the new virtual machine based on the recorded data requests, and (5) placing the new virtual machine in the group of virtual machines when the ratio of cache hits to cache misses exceeds a threshold, such that the data backup device efficiently utilizes the cache for servicing the data requests from the new virtual machine. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: August 22, 2017
Date of Patent: July 9, 2019
Assignee: Veritas Technologies LLC
Inventors: Chirag Dalal, Pradip Kulkarni
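The hit/miss accounting and threshold test in steps (3) through (5) can be sketched as below. This is a minimal illustration, not Veritas's implementation; the threshold value and the set-based cache model are assumptions.

```python
def record_requests(requests, cache_contents):
    """Step (3): classify each requested key as a hit or miss against
    the shared cache, returning the list of outcomes."""
    return ["hit" if key in cache_contents else "miss" for key in requests]

def should_place_vm(outcomes, threshold=2.0):
    """Steps (4)-(5): place the new VM in the group only when the
    hit/miss ratio from its trial period exceeds the threshold
    (threshold=2.0 is an assumed value)."""
    hits = outcomes.count("hit")
    misses = outcomes.count("miss")
    if misses == 0:
        return hits > 0
    return hits / misses > threshold
```

A VM whose working set mostly overlaps the shared cache passes the test; one that would thrash the cache is rejected before placement.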
-
Patent number: 10324848
Abstract: A mechanism is described for facilitating independent and separate entity-based graphics caches at computing devices. A method of embodiments, as described herein, includes facilitating hosting of a plurality of caches at a plurality of entities associated with a graphics processor, wherein each entity hosts at least one cache, and wherein an entity includes a dual sub-slice (DSS) or a streaming multiprocessor (SM).
Type: Grant
Filed: April 10, 2017
Date of Patent: June 18, 2019
Assignee: INTEL CORPORATION
Inventors: Altug Koker, Joydeep Ray, James A. Valerio, Abhishek R. Appu, Vasanth Ranganathan
-
Patent number: 10318424
Abstract: On receiving a Store instruction from a Release side processor, a shared memory transmits a cache invalidation request to an Acquire side processor, increases the value of an execution counter, and transmits the count value to the Release side processor asynchronously with the receiving of the Store instruction. The Release side processor has: a store counter which increases its value when the Store instruction is issued and, when the count value of the execution counter is received, decreases its value by the count value; and a wait counter which, when the store counter has come to indicate 0, sets a value indicating a predetermined time and decreases its value every unit time. The Release side processor issues a Store Fence instruction to request a guarantee of completion of invalidation of the cache of the Acquire side processor when both the counters have come to indicate 0.
Type: Grant
Filed: December 21, 2017
Date of Patent: June 11, 2019
Assignee: NEC CORPORATION
Inventor: Tomohisa Fukuyama
-
Patent number: 10318352
Abstract: Provided are a computer program product, system, and method for distributing tracks to add to cache to processor cache lists based on counts of processor access requests to the cache. There are a plurality of lists, wherein there is one list for each of the plurality of processors. A determination is made as to whether the counts of processor accesses of tracks are unbalanced. A first caching method is used to select one of the lists to indicate a track to add to the cache in response to determining that the counts are unbalanced. A second caching method is used to select one of the lists to indicate the track to add to the cache in response to determining that the counts are balanced. The first and second caching methods provide different techniques for selecting one of the lists.
Type: Grant
Filed: June 13, 2018
Date of Patent: June 11, 2019
Assignee: International Business Machines Corporation
Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta
-
Patent number: 10311542
Abstract: The claimed invention discloses a system comprising a plurality of logical nodes comprised in a single or plurality of stages, with defined properties and resources associated with each node, for reducing compute resources. Said system further comprises: at least a recirculating ring buffer holding only any one of control information, input, and, or output data necessary to stream temporary data between node and, or nodes in an execution graph, thereby reducing the size of said recirculating ring buffer; said recirculating ring buffer being sufficiently reduced in size to reside in an on-chip cache, such that any one of the control information, input, and, or output data between node and, or nodes need not be stored in memory; wherein the control information further comprises a command related to invalidating any one of the input and, or output data held in a recirculating ring data buffer, clearing the buffer of tasked data; and wherein a producer is stalled from writing any more control information into a recirculati
Type: Grant
Filed: March 6, 2017
Date of Patent: June 4, 2019
Assignee: THINCI, Inc.
Inventors: Val G. Cook, Satyaki Koneru, Ke Yin, Dinakar C. Munagala
-
Patent number: 10310978
Abstract: An apparatus and method for multi-level cache request tracking.
Type: Grant
Filed: September 29, 2017
Date of Patent: June 4, 2019
Assignee: Intel Corporation
Inventors: Robert G. Blankenship, Samantika S. Sury
-
Patent number: 10310981
Abstract: A method and apparatus for performing memory prefetching includes determining whether to initiate prefetching. Upon a determination to initiate prefetching, a first memory row is determined as a suitable prefetch candidate, and it is determined whether a particular set of one or more cachelines of the first memory row is to be prefetched.
Type: Grant
Filed: September 19, 2016
Date of Patent: June 4, 2019
Assignee: ADVANCED MICRO DEVICES, INC.
Inventors: Yasuko Eckert, Nuwan Jayasena, Reena Panda, Onur Kayiran, Michael W. Boyer
-
Patent number: 10296335
Abstract: An apparatus and method are described for efficiently processing and reassigning interrupts. For example, one embodiment of an apparatus comprises: a plurality of cores; and an interrupt controller to group interrupts into a plurality of interrupt domains, each interrupt domain to have a set of one or more interrupts assigned thereto and to map the interrupts in the set to one or more of the plurality of cores.
Type: Grant
Filed: September 22, 2015
Date of Patent: May 21, 2019
Assignee: INTEL CORPORATION
Inventors: Yogesh Deshpande, Pandurang V Deshpande
-
Patent number: 10282304
Abstract: Exemplary methods, apparatuses, and systems receive, from a client, a request to access data. Whether metadata for the data is stored in a first caching layer is determined. In response to the metadata for the data not being stored in the first caching layer, it is determined whether the data is stored in the second caching layer. In response to determining that the data is stored in the second caching layer, the data is retrieved from the second caching layer. In response to determining that the data is not stored in the second caching layer, writing of the data to the second caching layer is bypassed. The retrieved data is sent to the client.
Type: Grant
Filed: September 8, 2016
Date of Patent: May 7, 2019
Assignee: VMware, Inc.
Inventors: Sankaran Sivathanu, Sai Inabattini
-
Patent number: 10282296
Abstract: Embodiments of an invention for a processor architecture are disclosed. In an embodiment, a processor includes a decoder, an execution unit, a coherent cache, and an interconnect. The decoder is to decode an instruction to zero a cache line. The execution unit is to issue a write command to initiate a cache line sized write of zeros. The coherent cache is to receive the write command, to determine whether there is a hit in the coherent cache and whether a cache coherency protocol state of the hit cache line is a modified state or an exclusive state, to configure a cache line to indicate all zeros, and to issue the write command toward the interconnect. The interconnect is to, responsive to receipt of the write command, issue a snoop to each of a plurality of other coherent caches for which it must be determined if there is a hit.
Type: Grant
Filed: December 12, 2016
Date of Patent: May 7, 2019
Assignee: Intel Corporation
Inventors: Jason W. Brandt, Robert S. Chappell, Jesus Corbal, Edward T. Grochowski, Stephen H. Gunther, Buford M. Guy, Thomas R. Huff, Elmoustapha Ould-Ahmed-Vall, Bret L. Toll, David Papworth, James D. Allen
-
Patent number: 10261901
Abstract: An apparatus is described. The apparatus includes a last level cache and a memory controller to interface to a multi-level system memory. The multi-level system memory has a caching level. The apparatus includes a first prediction unit to predict unneeded blocks in the last level cache. The apparatus includes a second prediction unit to predict unneeded blocks in the caching level of the multi-level system memory.
Type: Grant
Filed: September 25, 2015
Date of Patent: April 16, 2019
Assignee: Intel Corporation
Inventors: Zhe Wang, Christopher B. Wilkerson, Zeshan A. Chishti, Seth H. Pugsley, Alaa R. Alameldeen, Shih-Lien L. Lu
-
Patent number: 10262721
Abstract: The present disclosure includes apparatuses and methods for cache invalidate. An example apparatus comprises a bit vector capable memory device and a channel controller coupled to the memory device. The channel controller is configured to cause a bulk invalidate command to be sent to a cache memory system responsive to receipt of a bit vector operation request.
Type: Grant
Filed: March 10, 2016
Date of Patent: April 16, 2019
Assignee: Micron Technology, Inc.
Inventor: Richard C. Murphy
-
Patent number: 10255199
Abstract: Secure memory paging technologies are described. Embodiments of the disclosure may include checking attributes of a secure page cache map to determine whether a target page to be evicted is clean and replay protected by a unified version-paging data structure, and checking the unified version-paging data structure to determine whether contents of the unified version-paging data structure match the target page. When the target page to be evicted is clean and replay protected and the contents match, the target page can be removed without encrypting the contents of the target page.
Type: Grant
Filed: September 22, 2017
Date of Patent: April 9, 2019
Assignee: Intel Corporation
Inventors: Krystof C. Zmudzinski, Carlos V. Rozas
-
Patent number: 10255186
Abstract: An approximate cache system is disclosed. The system includes a quality aware cache controller (QACC), a cache, and a quality table configured to receive addresses and a quality specification from the processor associated with each address and further configured to provide the quality specification for each address to the QACC, wherein the QACC controls approximation based on one or more of i) approximation through partial read operations; ii) approximation through lower read currents; iii) approximation through skipped write operations; iv) approximation through partial write operations; v) approximation through lower write duration; vi) approximation through lower write currents; and vii) approximation through skipped refreshes.
Type: Grant
Filed: June 14, 2017
Date of Patent: April 9, 2019
Assignee: Purdue Research Foundation
Inventors: Ashish Ranjan, Swagath Venkataramani, Zoha Pajouhi, Rangharajan Venkatesan, Kaushik Roy, Anand Raghunathan
-
Patent number: 10248572
Abstract: An apparatus and method are provided for operating a virtually indexed, physically tagged cache. The apparatus has processing circuitry for performing data processing operations on data, and a virtually indexed, physically tagged cache for storing data for access by the processing circuitry. The cache is accessed using a virtual address portion of a virtual address in order to identify a number of cache entries, and then physical address portions stored in those cache entries are compared with the physical address derived from the virtual address in order to detect whether a hit condition exists.
Type: Grant
Filed: September 21, 2016
Date of Patent: April 2, 2019
Assignee: ARM Limited
Inventors: Jose Gonzalez Gonzalez, Alex James Waugh, Adnan Khan
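The VIPT lookup described in this abstract (index the set with virtual address bits, then compare physical tags) can be sketched as follows. The cache geometry (64-byte lines, 64 sets), the flat translate callback standing in for the TLB, and the set-of-tags model are all invented for illustration; this is not ARM's circuit.

```python
LINE_BITS = 6    # 64-byte cache lines (assumed geometry)
SET_BITS = 6     # 64 sets (assumed geometry)

def vipt_lookup(vaddr, translate, cache_sets):
    """Return True on a cache hit. cache_sets maps a set index to the
    set of physical tags currently resident in that set."""
    # Set selection uses only virtual address bits, so it can start
    # before (or in parallel with) address translation.
    set_index = (vaddr >> LINE_BITS) & ((1 << SET_BITS) - 1)
    paddr = translate(vaddr)                    # TLB / page-table walk
    phys_tag = paddr >> (LINE_BITS + SET_BITS)  # tag from physical address
    return phys_tag in cache_sets.get(set_index, set())
```

The point of the arrangement is latency: indexing proceeds with virtual bits while the translation that supplies the physical tag runs concurrently.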
-
Patent number: 10235054
Abstract: A method and systems for caching utilize first and second caches, which may include a dynamic random-access memory (DRAM) cache and a next generation non-volatile memory (NGNVM) cache such as NAND flash memory. The methods and systems may be used for memory caching and/or page caching. The second caches are managed in an exclusive fashion, resulting in an aggregate cache having a storage capacity generally equal to the sum of individual cache storage capacities. Cache free lists may be associated with the first and second page caches, and pages within a cache free list may be mapped back to an associated cache without accessing a backing store. Data can be migrated between the first cache and the second caches based upon access heuristics and application hints.
Type: Grant
Filed: December 9, 2014
Date of Patent: March 19, 2019
Assignee: EMC IP Holding Company LLC
Inventors: Roy E. Clark, Adrian Michaud
-
Patent number: 10229064
Abstract: Provided are a computer program product, system, and method for using cache lists for processors to determine tracks in a storage to demote from a cache. Tracks in the storage stored in the cache are indicated in lists. There is one list for each of a plurality of processors. Each of the processors processes the list for that processor to process the tracks in the cache indicated on the list. There is a timestamp for each of the tracks indicated in the lists indicating a time at which the track was added to the cache. Tracks indicated in each of the lists having timestamps that fall within a range of timestamps are demoted.
Type: Grant
Filed: January 30, 2018
Date of Patent: March 12, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Matthew J. Kalos
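The timestamp-range demotion described above can be sketched as below. This is a minimal illustration, not IBM's implementation: the list layout (one list of (track, timestamp) pairs per processor) and the inclusive window semantics are assumptions.

```python
def demote_in_range(processor_lists, start_ts, end_ts):
    """processor_lists holds one list per processor of (track_id,
    timestamp) pairs, where the timestamp records when the track was
    added to the cache. Remove, and return the ids of, every track
    whose timestamp falls within [start_ts, end_ts]."""
    demoted = []
    for plist in processor_lists:
        keep = []
        for track_id, ts in plist:
            if start_ts <= ts <= end_ts:
                demoted.append(track_id)   # demote from cache
            else:
                keep.append((track_id, ts))
        plist[:] = keep                    # update the list in place
    return demoted
```

Because every processor's list is scanned against the same window, the oldest tracks are demoted consistently regardless of which processor's list they sit on.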
-
Patent number: 10229065
Abstract: Unified hardware and software two-level memory mechanisms and associated methods, systems, and software. Data is stored on near and far memory devices, wherein an access latency for a near memory device is less than an access latency for a far memory device. The near memory devices store data in data units having addresses in a near memory virtual address space, while the far memory devices store data in data units having addresses in a far memory address space, with a portion of the data being stored on both near and far memory devices. In response to a memory read access request, a determination is made as to whether data corresponding to the request is located on a near memory device, and if so the data is read from the near memory device; otherwise, the data is read from a far memory device. Memory access patterns are observed, and portions of far memory that are frequently accessed are copied to near memory to reduce access latency for subsequent accesses.
Type: Grant
Filed: December 31, 2016
Date of Patent: March 12, 2019
Assignee: Intel Corporation
Inventors: Mohan J. Kumar, Murugasamy K. Nachimuthu
-
Patent number: 10203935
Abstract: Techniques are disclosed for power conservation. A plurality of processing elements and a plurality of instructions are configured. The plurality of processing elements is controlled by instructions contained in a plurality of circular buffers. The plurality of processing elements can comprise a dataflow processor. A first processing element, from the plurality of interconnected processing elements, is set into a sleep state by a first instruction from the plurality of instructions. The first processing element is woken from the sleep state as a result of valid data being presented to the first processing element. A subsection of the plurality of interconnected processing elements is also set into a sleep state based on the first processing element being set into a sleep state.
Type: Grant
Filed: August 2, 2017
Date of Patent: February 12, 2019
Assignee: Wave Computing, Inc.
Inventor: Christopher John Nicol
-
Patent number: 10204053
Abstract: A method may include assigning a cacheability status to a page. The page may be in a memory of a host computer communicatively connected to a processor core on a field-programmable gate array (FPGA). The FPGA may include one or more caches. The method may further include obtaining an instruction including a reference to the page, determining, based on the cacheability status, whether the page is non-cacheable, and resolving the reference to the page, based on determining that the page is non-cacheable, bypassing the one or more caches of the FPGA.
Type: Grant
Filed: September 30, 2016
Date of Patent: February 12, 2019
Assignee: Oracle International Corporation
Inventors: David Michael Wilkins, James Anthony Quigley
-
Patent number: 10198357
Abstract: A coherent interconnect is provided. The coherent interconnect includes a snoop filter and a circuit that receives a write request, strobe bits, and write data from a central processing unit (CPU); generates a snoop filter request based on the write request; and transmits, at substantially the same time, the snoop filter request to the snoop filter and the write request, the strobe bits, and the write data to a memory controller.
Type: Grant
Filed: August 18, 2016
Date of Patent: February 5, 2019
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jae Young Hur, Sung Min Hong
-
Patent number: 10198354
Abstract: Provided are an apparatus, system, and method to flush modified data from a first memory to a persistent second memory. A first memory controller coupled to the first memory includes at least one RAS controller to read a range of addresses in the first memory. In response to receiving a command from the power control unit, the at least one RAS controller is invoked to read data from at least one range of addresses specified for the RAS controller from the first memory. A second memory controller transfers data read from the first memory determined to be modified to the second memory. The first memory controller sends a signal to the power control unit to indicate that the modified data in the range of addresses specified for the RAS controller was flushed to the second memory in response to the RAS controller completing reading the range of addresses.
Type: Grant
Filed: March 21, 2017
Date of Patent: February 5, 2019
Assignee: INTEL CORPORATION
Inventors: Wei Chen, Rajat Agarwal, Jing Ling, Daniel W. Liu
-
Patent number: 10193814
Abstract: A method for categorizing a downloading of a resource to a user device from a resource server in a data network is provided. Data of one or more Layer 7 protocol requests from the client device is received at an intermediate server in the data network. The intermediate server forwards the data of each of the one or more Layer 7 protocol requests to the resource server and receives data of one or more Layer 7 protocol responses from the resource server, each Layer 7 protocol response corresponding to a respective one of the Layer 7 protocol requests. The intermediate server categorizes the downloading of the resource to the client device as being one of one or more pre-defined download categories, based on a count of the one or more Layer 7 protocol responses and the determined sizes or estimated sizes of the one or more Layer 7 protocol responses.
Type: Grant
Filed: September 8, 2016
Date of Patent: January 29, 2019
Assignee: Openwave Mobility Inc.
Inventors: Fergus M Wills, Matt Halligan, Shaun McGinnity
-
Patent number: 10185668
Abstract: Systems and methods relate to cost-aware cache management policies. In a cost-aware least recently used (LRU) replacement policy, temporal locality as well as miss cost is taken into account in selecting a cache line for replacement, wherein the miss cost is based on an associated operation type including instruction cache read, data cache read, data cache write, prefetch, and write back. In a cost-aware dynamic re-reference interval prediction (DRRIP) based cache management policy, miss costs associated with operation types pertaining to a cache line are considered for assigning re-reference interval prediction values (RRPV) for inserting the cache line, pursuant to a cache miss, and for updating the RRPV upon a hit for the cache line. The operation types comprise instruction cache access, data cache access, prefetch, and write back. These policies improve victim selection, while minimizing cache thrashing and scans.
Type: Grant
Filed: September 20, 2016
Date of Patent: January 22, 2019
Assignee: QUALCOMM Incorporated
Inventors: Rami Mohammad A. Al Sheikh, Shivam Priyadarshi, Harold Wade Cain, III
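One way to picture a cost-aware LRU victim choice like the one described: scale each line's age by a per-operation miss cost, so an old prefetched line is evicted before an equally old instruction-fetch line. The cost table values and the scoring formula here are invented for illustration and are not Qualcomm's policy.

```python
MISS_COST = {             # assumed relative cost of refetching, by operation type
    "icache_read": 4,
    "dcache_read": 3,
    "dcache_write": 2,
    "prefetch": 1,
    "writeback": 2,
}

def pick_victim(lines, now):
    """lines: dicts with 'last_used' (timestamp) and 'op_type'. Choose
    the eviction victim with the best (age / miss cost) score: being
    old makes a line a better victim, a high miss cost a worse one."""
    def score(line):
        age = now - line["last_used"]
        return age / MISS_COST[line["op_type"]]
    return max(lines, key=score)
```

Pure LRU would evict whichever line is oldest; the cost weighting lets a slightly younger cheap-to-refetch line be sacrificed instead of an expensive one.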
-
Patent number: 10185498
Abstract: A memory system includes a write buffer, a main memory having a higher latency than the write buffer, and a memory controller. In response to a write request indicating first data for storing at a write address in the main memory, the memory controller adds a new write entry in the write buffer, where the new write entry includes the write address and the first data, and updates a pointer of a previous write entry in the write buffer to point to the new write entry. In response to a write-back instruction, the memory controller traverses a plurality of write entries stored in the write buffer, and writes into the main memory second data of the previous write entry and the first data of the new write entry.
Type: Grant
Filed: June 16, 2016
Date of Patent: January 22, 2019
Assignee: Advanced Micro Devices, Inc.
Inventor: David A. Roberts
-
Patent number: 10162526
Abstract: Some embodiments include apparatuses and methods including memory cells and a control unit to store information in a portion of the memory cells and to generate an entry associated with the information. The information is associated with a logical address recognized by a host. The entry includes an indicator indicating that the information is to be preserved for a creation of an image of information associated with logical addresses in a logical address space recognized by the host.
Type: Grant
Filed: October 20, 2015
Date of Patent: December 25, 2018
Assignee: Micron Technology, Inc.
Inventors: Cory J Reche, Phil W. Lee
-
Patent number: 10157129
Abstract: A memory system includes multiple levels of cache and an auxiliary storage element for storing a copy of a cache line from one of the levels of cache when the cache line of the one of the levels of cache is determined to have been modified. The system also includes a flag configured to indicate a cache state of the modified cache line. The cache state indicates the modified cache line has been copied to the auxiliary storage element. The system also includes a controller communicatively coupled to each of the multiple levels of cache and the auxiliary storage element. The controller is configured to, in response to determining the cache line of the one of the levels of cache has been modified, copy the modified cache line to the auxiliary storage element and set the flag for the modified cache line to indicate the cache state.
Type: Grant
Filed: December 17, 2014
Date of Patent: December 18, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Timothy J. Dell, Shwetha Janardhan, Sairam Kamaraju, Saravanan Sethuraman
-
Patent number: 10146478
Abstract: Techniques are disclosed for managing access to shared computing resources in a computing system, which include representing resources as objects and managing access to those objects using the construct of a resource instance manager. A set of resource instance managers respond to all commands requesting access to a set of respective shared resources. Access to each shared resource is managed by a unique resource instance manager for that resource, which maintains a consistent state for that shared resource. When commands are designed according to an appropriate model and processed by a set of resource instance managers disclosed herein, multiple processes may execute in parallel without causing deadlocks or introducing data corruption.
Type: Grant
Filed: July 28, 2017
Date of Patent: December 4, 2018
Assignee: EMC IP Holding Company LLC
Inventors: Amitava Roy, Norman Speciner, Rajesh Kumar Gandhi, Hongxin Zhang
-
Patent number: 10140212
Abstract: Updates to nonvolatile memory pages are mirrored so that certain features of a computer system, such as live migration of applications, fault tolerance, and high availability, will be available even when nonvolatile memory is local to the computer system. Mirroring may be carried out when a cache flush instruction is executed to flush contents of the cache into nonvolatile memory. In addition, mirroring may be carried out asynchronously with respect to execution of the cache flush instruction by retrieving content that is to be mirrored from the nonvolatile memory using memory addresses of the nonvolatile memory corresponding to target memory addresses of the cache flush instruction.
Type: Grant
Filed: September 30, 2013
Date of Patent: November 27, 2018
Assignee: VMware, Inc.
Inventors: Pratap Subrahmanyam, Rajesh Venkatasubramanian
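The flush-time mirroring path can be modeled roughly as below. This is a simplified, synchronous sketch with invented names; the patent also covers the asynchronous variant, which is not modeled here.

```python
# Rough model of US 10140212's idea: when a cache flush pushes dirty lines to
# local nonvolatile memory, the same target addresses are also mirrored to a
# remote copy so features like live migration keep working.

nvm = {}          # local nonvolatile memory: address -> value
mirror = {}       # remote mirror of the NVM
dirty_cache = {}  # cached writes not yet flushed: address -> value

def cache_flush():
    """Flush dirty lines to NVM and mirror the same target addresses."""
    for addr, value in dirty_cache.items():
        nvm[addr] = value
        mirror[addr] = value  # mirroring keyed by the flush target address
    dirty_cache.clear()

dirty_cache[0x1000] = b"page-A"
dirty_cache[0x2000] = b"page-B"
cache_flush()
```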
-
Patent number: 10133667
Abstract: Techniques related to efficient data storage and retrieval using a heterogeneous main memory are disclosed. A database includes a set of persistent format (PF) data that is stored on persistent storage in a persistent format. The database is maintained on the persistent storage and is accessible to a database server. The database server converts the set of PF data to sets of mirror format (MF) data and stores the MF data in a hierarchy of random-access memories (RAMs). Each RAM in the hierarchy has an associated latency that is different from a latency associated with any other RAM in the hierarchy. Storing the sets of MF data in the hierarchy of RAMs includes (1) selecting, based on one or more criteria, a respective RAM in the hierarchy to store each set of MF data and (2) storing said each set of MF data in the respective RAM.
Type: Grant
Filed: September 6, 2016
Date of Patent: November 20, 2018
Assignee: Oracle International Corporation
Inventors: Niloy Mukherjee, Tirthankar Lahiri, Juan R. Loaiza, Jesse Kamp, Prashant Gaharwar, Hariharan Lakshmanan, Dhruvil Shah
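The tier-selection step could look something like the sketch below. The access-frequency criterion and the tier names are assumptions for illustration; the patent only says selection is "based on one or more criteria".

```python
# Sketch of the tier-selection step in US 10133667: each mirror-format (MF)
# data set is placed into one RAM of a latency hierarchy based on a
# criterion (here, an assumed access-frequency criterion).

# RAM tiers ordered fastest (lowest latency) to slowest.
tiers = [
    {"name": "DRAM", "latency_ns": 100, "data": []},
    {"name": "NVDIMM", "latency_ns": 300, "data": []},
    {"name": "slow-RAM", "latency_ns": 900, "data": []},
]

def place_mf_data(name, accesses_per_sec):
    """Hotter MF data goes to the lower-latency tier."""
    if accesses_per_sec > 1000:
        tier = tiers[0]
    elif accesses_per_sec > 100:
        tier = tiers[1]
    else:
        tier = tiers[2]
    tier["data"].append(name)
    return tier["name"]

hot = place_mf_data("orders_mf", 5000)
cold = place_mf_data("archive_mf", 10)
```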
-
Patent number: 10133670
Abstract: In an example, a system-on-a-chip comprises a plurality of multi-core processors, such as four dual-core processors for eight total cores. Each of the processors connects to shared resources such as memory and peripherals via a shared uncore fabric. Because each input bus for each core can include hundreds of data lines, the number of lines into the shared uncore fabric can become prohibitive. Thus, inputs from each core are multiplexed, such as in a two-to-one configuration. The multiplexing may be a non-blocking, queued (such as FIFO) multiplexing to ensure that all packets from all cores are delivered to the uncore fabric. In certain embodiments, some smaller input lines may be provided to the uncore fabric non-multiplexed, and returns (outputs) from the uncore fabric to the cores may also be non-multiplexed.
Type: Grant
Filed: December 27, 2014
Date of Patent: November 20, 2018
Assignee: Intel Corporation
Inventors: Ramadass Nagarajan, Jose S. Niell, Michael T. Klinglesmith, Derek T. Bachand, Ganesh Kumar
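The non-blocking, queued two-to-one multiplexing can be modeled as a toy FIFO. This is a behavioral sketch only; the class and method names are invented, and real hardware arbitration is far more involved.

```python
# Toy model of the two-to-one, non-blocking queued multiplexing described in
# US 10133670: packets from two cores share one input into the uncore fabric
# via a FIFO, so no packet is dropped and order is preserved.
from collections import deque

class TwoToOneMux:
    def __init__(self):
        self.fifo = deque()

    def enqueue(self, core_id, packet):
        # Queued rather than blocking: every packet is admitted.
        self.fifo.append((core_id, packet))

    def drain(self):
        """Deliver all queued packets to the uncore fabric in FIFO order."""
        delivered = []
        while self.fifo:
            delivered.append(self.fifo.popleft())
        return delivered

mux = TwoToOneMux()
mux.enqueue(0, "req-A")
mux.enqueue(1, "req-B")
mux.enqueue(0, "req-C")
out = mux.drain()
```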
-
Patent number: 10133785
Abstract: A data storage device includes a first memory device configured to provide first read data in response to a first read command, a controller including a hardware filter configured to generate first hint information based on a result of comparison of the first read data with filtering condition data and a processor configured to determine whether the first read data is to be filtered based on the first hint information, selectively filter the first read data based on the filtering condition data based on the determination result, and generate first filtered data, and a second memory device configured to store the first filtered data. The controller communicates the first filtered data to a host.
Type: Grant
Filed: December 28, 2015
Date of Patent: November 20, 2018
Assignee: Samsung Electronics Co., Ltd.
Inventors: Kwang-Hoon Kim, Man-Keun Seo, Sang-Kyoo Jeong
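The hint-then-filter flow can be sketched as two stages. The substring condition and function names below are assumptions; the patent does not define the comparison operation.

```python
# Sketch of the flow in US 10133785: a fast hardware filter compares read
# data against filtering condition data and emits a hint; the processor then
# decides, based on that hint, whether to run the full filtering step.

def hardware_filter_hint(read_data, condition):
    """Cheap comparison producing a hint (may-match / no-match)."""
    return condition in read_data

def process_read(read_data, condition):
    hint = hardware_filter_hint(read_data, condition)
    if not hint:
        return None  # filtered out without further processor work
    # Full (here trivial) filtering based on the same condition data.
    return read_data if condition in read_data else None

kept = process_read(b"row:alice:admin", b"admin")
dropped = process_read(b"row:bob:user", b"admin")
```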
-
Patent number: 10126981
Abstract: A write command is received to store data in a Data Storage Device (DSD). At least one of a Non-Volatile Random Access Memory (NVRAM) and a Storage Class Memory (SCM) is selected for storing the data of the write command based on a number of previously received write commands indicating an address of the write command or a priority of the write command. The SCM has at least one characteristic of being faster than the NVRAM in storing data, using less power to store data, and providing a greater usable life for repeatedly storing data in a same memory location. In one example, at least a portion of the SCM is allocated for use by a host. Logical addresses assigned to the SCM are mapped to device addresses of the NVRAM. The host is provided with an indication of the logical addresses assigned to the SCM.
Type: Grant
Filed: December 14, 2015
Date of Patent: November 13, 2018
Assignee: Western Digital Technologies, Inc.
Inventors: James N. Malina, Albert H. Chen, Takeaki Kato
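The media-selection rule can be sketched as a simple policy function. The hot-address threshold and the priority encoding are invented here; the patent only says selection depends on the write-count for an address or the command's priority.

```python
# Sketch of the selection rule in US 10126981: writes that repeatedly hit the
# same address (or carry high priority) go to the faster, more durable SCM;
# other writes go to NVRAM.
from collections import Counter

write_counts = Counter()
HOT_THRESHOLD = 3  # assumed threshold for illustration

def select_medium(address, priority):
    """Return 'SCM' or 'NVRAM' for this write command."""
    write_counts[address] += 1
    if priority == "high" or write_counts[address] > HOT_THRESHOLD:
        return "SCM"
    return "NVRAM"

first = select_medium(0x10, "normal")       # cold address -> NVRAM
for _ in range(3):
    select_medium(0x10, "normal")
hot = select_medium(0x10, "normal")         # now a hot address -> SCM
urgent = select_medium(0x99, "high")        # high priority -> SCM
```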
-
Patent number: 10115468
Abstract: A solid state storage device includes a non-volatile memory and a controlling circuit. In a first read retry process, the controlling circuit judges whether information corresponding to a first block of the non-volatile memory is recorded in the cache table. If the information is not recorded in the cache table, the controlling circuit sequentially provides plural predetermined retry read voltage sets to the non-volatile memory according to a sequence of the plural predetermined retry read voltage sets in the retry table and performs a read retry operation. If a read data of the first block is successfully decoded through the read retry operation according to a first predetermined retry read voltage set of the plural predetermined retry read voltage sets in the retry table, the controlling circuit records the first predetermined retry read voltage set into the cache table.
Type: Grant
Filed: April 5, 2017
Date of Patent: October 30, 2018
Assignees: LITE-ON ELECTRONICS (GUANGZHOU) LIMITED, LITE-ON TECHNOLOGY CORPORATION
Inventors: Shih-Jia Zeng, Jen-Chien Fu
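The retry-table/cache-table interplay can be modeled as follows. The decode step is simulated by a parameter; the voltage-set names are placeholders, not values from the patent.

```python
# Model of the read-retry flow in US 10115468: try the predetermined retry
# read voltage sets in table order until one decodes successfully, then
# record the winner in a cache table so later retries on that block start
# with the known-good set.

retry_table = ["V1", "V2", "V3", "V4"]  # predetermined retry voltage sets
cache_table = {}                        # block -> voltage set that worked

def read_retry(block, decodes_with):
    """decodes_with simulates which voltage set successfully decodes."""
    if block in cache_table:
        return cache_table[block], 1    # single try using the cached set
    for tries, vset in enumerate(retry_table, start=1):
        if vset == decodes_with:
            cache_table[block] = vset   # record the successful set
            return vset, tries
    return None, len(retry_table)

vset1, tries1 = read_retry(block=7, decodes_with="V3")  # first retry: 3 tries
vset2, tries2 = read_retry(block=7, decodes_with="V3")  # cached: 1 try
```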
-
Patent number: 10108364
Abstract: An integrated circuit (IC) module comprising at least one memory mapped resource, at least one port arranged to be coupled to a further IC module, and an address decoding component. Upon receipt of a resource access request by the IC module, the address decoding component is arranged to extract at least one position parameter from an address field of the received resource access request, determine if the at least one position parameter indicates a target resource as residing within the IC module, and if it is determined that the at least one position parameter indicates the target resource as not residing within the IC module, modify the at least one position parameter to represent a change of one position and forward the resource access request with the modified position parameter over the port to the further IC module.
Type: Grant
Filed: June 26, 2015
Date of Patent: October 23, 2018
Assignee: NXP USA, Inc.
Inventors: Mark Maiolani, Derek James Beattie, Robert Freddie Moran
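The decrement-and-forward decoding can be sketched as a daisy chain of modules. Class and parameter names are assumptions for the sketch.

```python
# Sketch of the daisy-chained address decoding in US 10108364: a position
# parameter in the address counts hops; zero means the target resource is
# local, otherwise the module changes the parameter by one position and
# forwards the request over its port to the next module.

class ICModule:
    def __init__(self, name, next_module=None):
        self.name = name
        self.next_module = next_module

    def access(self, position, register):
        if position == 0:
            return (self.name, register)  # target resource resides here
        # Modify the position parameter by one and forward on the port.
        return self.next_module.access(position - 1, register)

m2 = ICModule("module2")
m1 = ICModule("module1", next_module=m2)
m0 = ICModule("module0", next_module=m1)

local = m0.access(position=0, register="CTRL")
remote = m0.access(position=2, register="CTRL")
```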
-
Patent number: 10108368
Abstract: The system includes a plurality of storage volumes, a data synchronization module, a space-efficient storage module, and a heat data module. A second storage volume of the plurality of storage volumes includes a backup storage location for a first storage volume. The data synchronization module, coupled to the first storage volume and the second storage volume, provides a backup by synchronizing information from the first storage volume to the second storage volume during a synchronization event. The information includes data chunks, heat map data, and first metadata. The space-efficient storage module receives the information from the data synchronization module and allocates the information to the second storage volume in accordance with a space-efficient storage model. The heat data module reads the first metadata and the heat map data and adjusts a location of the data chunks in the second storage volume based on the heat map data.
Type: Grant
Filed: January 9, 2017
Date of Patent: October 23, 2018
Assignee: International Business Machines Corporation
Inventors: Duo Chen, Min Fang, Da Liu, Jinyi Pu
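The heat-data placement step can be sketched as below. The region names and threshold are assumptions; the patent only says chunk locations on the backup volume are adjusted based on the heat map data.

```python
# Sketch of the heat-data step in US 10108368: after chunks and their heat
# map are synchronized to the backup volume, hot chunks are placed in a
# faster region of that volume and cold chunks in a slower one.

def place_by_heat(chunks, heat_map, hot_threshold=100):
    """Return chunk placement on the backup volume based on heat data."""
    placement = {}
    for chunk in chunks:
        heat = heat_map.get(chunk, 0)  # unrecorded chunks count as cold
        placement[chunk] = "fast-region" if heat >= hot_threshold else "slow-region"
    return placement

chunks = ["c1", "c2", "c3"]
heat_map = {"c1": 500, "c2": 3}  # c3 has no recorded heat
placement = place_by_heat(chunks, heat_map)
```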
-
Patent number: 10101978
Abstract: A system, for use with a compiler architecture framework, includes performing a statically speculative compilation process to extract and use speculative static information, encoding the speculative static information in an instruction set architecture of a processor, and executing a compiled computer program using the speculative static information, wherein executing supports static speculation driven mechanisms and controls.
Type: Grant
Filed: December 9, 2015
Date of Patent: October 16, 2018
Assignee: III HOLDINGS 2, LLC
Inventor: Csaba Andras Moritz
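The abstract is deliberately general, but the core idea of attaching statically derived, speculative hints to instructions can be illustrated loosely. The `likely_hits_cache` hint, the instruction format, and the loop-variable heuristic below are all invented for this sketch and are not from the patent.

```python
# Very loose sketch of the idea in US 10101978: a compilation pass extracts
# speculative static information and encodes it alongside instructions, so
# execution can consult it (e.g. for cache or power decisions).

def compile_with_hints(instructions, loop_vars):
    """Annotate each load with a speculative hint derived statically."""
    hinted = []
    for op, operand in instructions:
        # Crude static guess: loads of loop variables likely hit the cache.
        hint = op == "load" and operand in loop_vars
        hinted.append((op, operand, {"likely_hits_cache": hint}))
    return hinted

program = [("load", "i"), ("load", "x"), ("add", "i")]
hinted = compile_with_hints(program, loop_vars={"i"})
```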
-
Patent number: 10102129
Abstract: A processor includes a first processing core and a first L1 cache comprising a first L1 cache data entry of a plurality of L1 cache data entries to store data. The processor also includes an L2 cache comprising a first L2 cache data entry of a plurality of L2 cache data entries. The first L2 cache data entry corresponds to the first L1 cache data entry and each of the plurality of L2 cache data entries are associated with a corresponding presence bit (pbit) of a plurality of pbits. Each of the plurality of pbits indicates a status of a corresponding one of the plurality of L2 cache data entries. The processor also includes a cache controller, which in response to a first request among a plurality of requests to access the data at the first L1 cache data entry, determines that a copy of the data is stored in the first L2 cache data entry; and retrieves the copy of the data from the L2 cache data entry in view of the status of the pbit.
Type: Grant
Filed: December 21, 2015
Date of Patent: October 16, 2018
Assignee: Intel Corporation
Inventors: Krishna N. Vinod, Avinash Sodani, Zainulabedin J. Aurangabadwala
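The pbit lookup path can be modeled minimally as follows. The status encoding (pbit 1 = valid copy) and the dictionary layout are assumptions for this sketch.

```python
# Sketch of the presence-bit (pbit) mechanism in US 10102129: each L2 entry
# carries a pbit describing its status; on a request for an L1 entry whose
# data is missing, the controller consults the corresponding L2 entry's
# pbit before retrieving the copy from L2.

l1 = {"lineA": None}                          # L1 entry whose data is gone
l2 = {"lineA": {"data": 0xBEEF, "pbit": 1}}   # pbit=1: copy valid in L2

def read(entry):
    """Cache-controller path for a request to an L1 data entry."""
    if l1.get(entry) is not None:
        return l1[entry]                      # L1 hit
    l2_entry = l2.get(entry)
    if l2_entry and l2_entry["pbit"] == 1:    # status says copy is usable
        l1[entry] = l2_entry["data"]          # refill L1 from the L2 copy
        return l2_entry["data"]
    return None                               # would fall through to memory

value = read("lineA")
```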
-
Patent number: 10095622
Abstract: Embodiments of systems, methods, and apparatuses for remote monitoring are described. In some embodiments, an apparatus includes at least one monitoring circuit to monitor for memory accesses to an address space; at least one monitoring table to store an identifier of the address space; and a tag directory per core used by the core to track entities that have access to the address space.
Type: Grant
Filed: December 29, 2015
Date of Patent: October 9, 2018
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Robert G. Blankenship, Raj K. Ramanujan, Thomas Willhalm, Narayan Ranganathan
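The two structures the abstract names can be sketched together. These are simplified software stand-ins with invented names for what the patent describes as hardware tables.

```python
# Sketch of the remote-monitoring structures in US 10095622: a monitoring
# table records identifiers of watched address spaces, and a per-core tag
# directory tracks which entities have accessed them.

monitoring_table = set()   # identifiers of monitored address spaces
tag_directory = {}         # address space -> set of accessing entities

def monitor(address_space):
    monitoring_table.add(address_space)
    tag_directory.setdefault(address_space, set())

def record_access(core_id, address_space):
    """Monitoring-circuit path: note the accessor if the space is watched."""
    if address_space in monitoring_table:
        tag_directory[address_space].add(core_id)
        return True
    return False

monitor("heap:0x7000")
seen = record_access(core_id=2, address_space="heap:0x7000")
ignored = record_access(core_id=3, address_space="stack:0x9000")
```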
-
Patent number: 10089240
Abstract: A computer architecture provides a memory cache that is accessed not by physical addresses but by virtual addresses directly from running processes. Ambiguities that can result from multiple virtual addresses mapping to a single physical address are handled by dynamically tracking synonyms and connecting a limited number of virtual synonyms mapping to the same physical address to a single key virtual address that is used exclusively for cache access.
Type: Grant
Filed: September 28, 2015
Date of Patent: October 2, 2018
Assignee: Wisconsin Alumni Research Foundation
Inventors: Gurindar S. Sohi, Hongil Yoon
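The key-virtual-address redirection can be sketched with two small tables. The table layouts and the "first synonym becomes the key" policy are assumptions made for this sketch.

```python
# Sketch of the synonym handling in US 10089240: the cache is indexed by
# virtual address, and every virtual synonym of a physical address is
# redirected to one "key" virtual address used exclusively for cache access,
# so all synonyms see a single cache entry.

key_va = {}   # physical address -> key virtual address
cache = {}    # key virtual address -> data

def access(va, pa, data=None):
    """Look up (or fill) the cache through the key virtual address."""
    kva = key_va.setdefault(pa, va)   # first VA seen becomes the key
    if data is not None:
        cache[kva] = data
    return cache.get(kva)

access(va=0x1000, pa=0x40, data="payload")   # fill via one synonym
hit = access(va=0x8000, pa=0x40)             # read via a different synonym
```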