With Plurality Of Cache Hierarchy Levels (epo) Patents (Class 711/E12.043)
-
Patent number: 12259820
Abstract: A processor-based system for allocating a higher-level cache line in a higher-level cache memory in response to an eviction request of a lower-level cache line is disclosed. The processor-based system determines whether the cache line is opportunistic, sets an opportunistic indicator to indicate that the lower-level cache line is opportunistic, and communicates the lower-level cache line and the opportunistic indicator. The processor-based system determines, based on the opportunistic indicator of the lower-level cache line, whether a higher-level cache line of a plurality of higher-level cache lines in the higher-level cache memory has less or equal importance than the lower-level cache line. In response, the processor-based system replaces the higher-level cache line in the higher-level cache memory with the lower-level cache line and associates the opportunistic indicator with the lower-level cache line in the higher-level cache memory.
Type: Grant
Filed: April 2, 2024
Date of Patent: March 25, 2025
Assignee: QUALCOMM Incorporated
Inventor: Ramkumar Srinivasan
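The allocation decision this abstract describes can be sketched roughly as follows. All names, the numeric "importance" ordering, and the fallback behavior for non-opportunistic lines are illustrative assumptions, not details taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class CacheLine:
    tag: int
    importance: int            # lower value = less important (assumed ordering)
    opportunistic: bool = False

def allocate_on_eviction(higher_cache, evicted_line):
    """Try to place a lower-level line evicted as 'opportunistic' into the
    higher-level cache by replacing a line of less-or-equal importance."""
    if not evicted_line.opportunistic:
        higher_cache.append(evicted_line)   # normal allocation path (assumed)
        return True
    # Look for a victim whose importance is <= the incoming line's importance.
    for i, line in enumerate(higher_cache):
        if line.importance <= evicted_line.importance:
            higher_cache[i] = evicted_line  # replace; the indicator travels with the line
            return True
    return False                            # no suitable victim: drop the opportunistic line
```

In this toy model an opportunistic line that finds no less-important victim is simply dropped rather than forced into the higher-level cache.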
-
Patent number: 12216574
Abstract: According to one embodiment, a storage system includes a volatile memory with a first area, a second area, and a third area. The first area stores a logical address specified by an external device, a physical address of a nonvolatile storage medium associated with the logical address, and first information related to association between the logical address and the physical address. The second area stores a first number of second information indicating states of the first number of first ranges of the logical address. The third area stores third information indicating a state of a second range of the logical address, which includes the first number of the first ranges.
Type: Grant
Filed: September 9, 2022
Date of Patent: February 4, 2025
Assignee: Kioxia Corporation
Inventors: Hiroyuki Takatsu, Takeshi Tanaka
-
Patent number: 12210463
Abstract: A caching system including a first sub-cache and a second sub-cache in parallel with the first sub-cache, wherein the second sub-cache includes: line type bits configured to store an indication that a corresponding cache line of the second sub-cache is configured to store write-miss data, and an eviction controller configured to evict a cache line of the second sub-cache storing write-miss data based on an indication that the cache line has been fully written.
Type: Grant
Filed: September 9, 2022
Date of Patent: January 28, 2025
Assignee: Texas Instruments Incorporated
Inventors: Naveen Bhoria, Timothy David Anderson, Pete Hippleheuser
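The eviction condition in this abstract — a write-miss line becomes an eviction candidate once every byte has been written — can be modeled minimally. The class layout, the per-byte mask, and the victim-selection loop are assumptions for illustration only.

```python
class WriteMissLine:
    def __init__(self, line_size):
        self.is_write_miss = True           # "line type bit" (assumed encoding)
        self.written = [False] * line_size  # per-byte written mask (assumed granularity)

    def write(self, offset, length):
        for i in range(offset, offset + length):
            self.written[i] = True

    def fully_written(self):
        return all(self.written)

def pick_eviction_victim(lines):
    """Return the index of a write-miss line that has been fully written,
    or None if no line is yet eligible for eviction."""
    for idx, line in enumerate(lines):
        if line.is_write_miss and line.fully_written():
            return idx
    return None
```

Evicting only fully written write-miss lines lets the line be written back without first reading the rest of it from memory.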
-
Patent number: 12124367
Abstract: Methods, systems, and devices for techniques for accessing managed not-AND (NAND) memory are described. An indicator of a first type that indicates whether each physical address in a group of physical addresses stores valid data may be accessed. Indicators of a second type may be used to indicate whether respective physical addresses of the group of physical addresses store valid data. Data stored at the group of physical addresses may be transferred to a different group of physical addresses based on the indicator of the first type. Also, another indicator of the first type that indicates whether each physical address in the different group of physical addresses stores valid data may be updated.
Type: Grant
Filed: December 7, 2020
Date of Patent: October 22, 2024
Assignee: Micron Technology, Inc.
Inventors: Junjun Wang, Yi Heng Sun
-
Patent number: 12073220
Abstract: A microprocessor includes a load queue, a store queue, and a load/store unit that, during execution of a store instruction, records store information to a store queue entry. The store information comprises store address and store size information about store data to be stored by the store instruction. The load/store unit, during execution of a load instruction that is younger in program order than the store instruction, performs forwarding behavior with respect to forwarding or not forwarding the store data from the store instruction to the load instruction and records load information to a load queue entry, which comprises load address and load size information about load data to be loaded by the load instruction, and records the forwarding behavior in the load queue entry. The load/store unit, during commit of the store instruction, uses the recorded store information and the recorded load information and the recorded forwarding behavior to check correctness of the forwarding behavior.
Type: Grant
Filed: May 18, 2022
Date of Patent: August 27, 2024
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Srivatsan Srinivasan
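The commit-time check described here can be sketched as re-deriving the forwarding decision from the recorded addresses and sizes and comparing it with the behavior recorded at execution time. This toy model assumes forwarding is expected exactly when the store fully covers the load — a simplification of real forwarding rules — and all names are hypothetical.

```python
def store_covers_load(store, load):
    """True if the store's byte range fully contains the load's byte range."""
    return (store['addr'] <= load['addr'] and
            store['addr'] + store['size'] >= load['addr'] + load['size'])

def forwarding_was_correct(store, load):
    """At store commit, re-derive whether the store should have forwarded
    to the load and compare against the behavior recorded at execution."""
    return load['forwarded'] == store_covers_load(store, load)
```

A mismatch would typically trigger a pipeline flush and re-execution of the load.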
-
Patent number: 12066940
Abstract: Data reuse cache techniques are described. In one example, a load instruction is generated by an execution unit of a processor unit. In response to the load instruction, data is loaded by a load-store unit for processing by the execution unit and is also stored to a data reuse cache communicatively coupled between the load-store unit and the execution unit. Upon receipt of a subsequent load instruction for the data from the execution unit, the data is loaded from the data reuse cache for processing by the execution unit.
Type: Grant
Filed: September 29, 2022
Date of Patent: August 20, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Alok Garg, Neil N Marketkar, Matthew T. Sobel
-
Patent number: 12061553
Abstract: Systems and methods for microservices prefetching are disclosed. The systems and methods include generating an artificial intelligence model using received user data; prefetching data using the artificial intelligence model; storing the prefetched data in a cache; and using the cache to respond to an information request.
Type: Grant
Filed: March 15, 2022
Date of Patent: August 13, 2024
Assignee: CVS Pharmacy, Inc.
Inventor: Joydeep Bhattacharjee
-
Patent number: 12050538
Abstract: Castout handling in a distributed cache topology, including: detecting, by a first cache of a plurality of caches, a cache miss; providing, by the first cache to each other cache of the plurality of caches, a message comprising: data indicating a cache address corresponding to the cache miss; and data indicating a cache line to be evicted.
Type: Grant
Filed: March 30, 2022
Date of Patent: July 30, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Robert J. Sonnelitter, III, Ekaterina M. Ambroladze, Timothy Bronson, Michael A. Blake, Tu-An T. Nguyen
-
Patent number: 12045619
Abstract: A microprocessor includes a load queue, a store queue, and a load/store unit that, during execution of a store instruction, records store information to a store queue entry. The store information comprises store address and store size information about store data to be stored by the store instruction. The load/store unit, during execution of a load instruction that is younger in program order than the store instruction, performs forwarding behavior with respect to forwarding or not forwarding the store data from the store instruction to the load instruction and records load information to a load queue entry, which comprises load address and load size information about load data to be loaded by the load instruction, and records the forwarding behavior in the load queue entry. The load/store unit, during commit of the store instruction, uses the recorded store information and the recorded load information and the recorded forwarding behavior to check correctness of the forwarding behavior.
Type: Grant
Filed: May 18, 2022
Date of Patent: July 23, 2024
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Srivatsan Srinivasan
-
Patent number: 12032482
Abstract: Systems, apparatuses, and methods related to a memory controller for performing row access tracking to mitigate row hammer attacks. A memory controller comprises a dual cache system including a direct mapped cache and a victim cache. The direct mapped cache functions as the main cache while a fully associative victim cache is used to reduce hammer attacks to targeted rows. The direct mapped cache performs an aliasing operation to map at least a portion of data stored in a memory device to the direct mapped cache. The direct mapped cache also uses a plurality of counters operatively coupled to the direct mapped cache to track and monitor the number of activations of the data stored in the direct mapped cache. The memory controller proactively refreshes all adjacent rows in the memory device when the respective counter of the direct mapped cache exceeds a predetermined threshold.
Type: Grant
Filed: June 17, 2022
Date of Patent: July 9, 2024
Assignee: Micron Technology, Inc.
Inventor: Sandeep Krishna Thirumala
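The counter-and-refresh mechanism in this abstract reduces to a simple loop. The threshold value, the counter reset, and restricting the refresh to the two immediate neighbours are all assumptions for this sketch; real controllers use far larger thresholds and device-specific adjacency maps.

```python
THRESHOLD = 3  # hypothetical activation limit; real devices use much larger values

def record_activation(counters, row, refreshed_rows):
    """Count an activation of `row`; past the threshold, proactively refresh
    the physically adjacent rows and reset the counter (assumed behavior)."""
    counters[row] = counters.get(row, 0) + 1
    if counters[row] > THRESHOLD:
        refreshed_rows.update({row - 1, row + 1})  # neighbours of the hammered row
        counters[row] = 0
```

Tracking counters only for rows cached in the direct-mapped cache (as the abstract describes) bounds the tracking state, with the victim cache catching aliasing conflicts among hot rows.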
-
Patent number: 11972293
Abstract: A data structure for a jointly utilized memory device, in particular, for inter-process communication, in an application system. The memory device includes a memory cell. The data structure includes a management structure, the management structure being configured to hold a pointer object to the memory cell.
Type: Grant
Filed: September 29, 2020
Date of Patent: April 30, 2024
Assignee: ROBERT BOSCH GMBH
Inventors: Christian Eltzschig, Dietrich Kroenke, Mathias Kraus, Matthias Killat, Michael Poehnl
-
Patent number: 11967364
Abstract: Described are memory modules that support different error detection and correction (EDC) schemes in both single- and multiple-module memory systems. The memory modules are width configurable and support the different EDC schemes for relatively wide and narrow module data widths. Data buffers on the modules support the half-width and full-width modes, and also support time-division-multiplexing to access additional memory components on each module in support of enhanced EDC.
Type: Grant
Filed: May 30, 2023
Date of Patent: April 23, 2024
Assignee: Rambus Inc.
Inventors: Frederick A. Ware, John Eric Linstadt, Kenneth L. Wright
-
Patent number: 11954339
Abstract: A memory allocation device includes a storage including at least one memory pool in which a memory piece used to search for a route is previously generated, and a controller that determines whether it is possible to search for the route using the previously generated memory piece and determines an added amount of memory pieces to previously allocate a memory of the storage, when it is impossible to search for the route.
Type: Grant
Filed: May 21, 2021
Date of Patent: April 9, 2024
Assignees: HYUNDAI MOTOR COMPANY, KIA CORPORATION
Inventors: Pyoung Hwa Lee, Jin Woo Kim
-
Patent number: 11934886
Abstract: Methods, systems and computer program products for intra-footprint computing cluster bring-up within a virtual private cloud. A network connection is established between an initiating module and a virtual private cloud (VPC). An initiating module allocates resources of the virtual private cloud including a plurality of nodes that correspond to members of a to-be-configured computing cluster. A cluster management module having coded therein an intended computing cluster configuration is configured into at least one of the plurality of nodes. The members of the to-be-configured computing cluster interoperate from within the VPC to accomplish a set of computing cluster bring-up operations that configure the plurality of members into the intended computing cluster configuration. Execution of bring-up instructions of the management module serves to allocate networking IP addresses of the virtual private cloud.
Type: Grant
Filed: January 9, 2023
Date of Patent: March 19, 2024
Assignee: Nutanix, Inc.
Inventors: Mohan Maturi, Abhishek Arora, Manoj Sudheendra
-
Patent number: 11860786
Abstract: A cache system, having: a first cache; a second cache; a configurable data bit; and a logic circuit coupled to a processor to control the caches based on the configurable bit. When the configurable bit is in a first state, the logic circuit is configured to: implement commands for accessing a memory system via the first cache, when an execution type is a first type; and implement commands for accessing the memory system via the second cache, when the execution type is a second type. When the configurable data bit is in a second state, the logic circuit is configured to: implement commands for accessing the memory system via the second cache, when the execution type is the first type; and implement commands for accessing the memory system via the first cache, when the execution type is the second type.
Type: Grant
Filed: December 13, 2021
Date of Patent: January 2, 2024
Assignee: Micron Technology, Inc.
Inventor: Steven Jeffrey Wallach
-
Patent number: 11847058
Abstract: A request to access data at an address is received from a host system. A tag associated with the address is determined to not be found in first entries in a first content-addressable memory (CAM) or in second entries in a second CAM. Responsive to determining that the tag is not found in the first entries or in the second entries, a particular entry of the first entries that each includes valid data is selected. A determination is made whether the particular entry satisfies a condition indicating that content in the particular entry is to be stored in the second CAM. The content is associated with other data stored in the cache. Responsive to determining that the condition is satisfied, the content of the particular entry is stored in one of the second entries to maintain the data in the cache.
Type: Grant
Filed: August 8, 2022
Date of Patent: December 19, 2023
Assignee: Micron Technology, Inc.
Inventors: Laurent Isenegger, Dhawal Bavishi, Jeffrey Frederiksen
-
Patent number: 11789868
Abstract: An apparatus includes a CPU core and a L1 cache subsystem including a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem including a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller configured to receive a read request from the L1 controller as a single transaction. The read request includes a read address, a first indication of an address and a coherence state of a cache line A to be moved from the L1 main cache to the L1 victim cache to allocate space for data returned in response to the read request, and a second indication of an address and a coherence state of a cache line B to be removed from the L1 victim cache in response to the cache line A being moved to the L1 victim cache.
Type: Grant
Filed: October 12, 2021
Date of Patent: October 17, 2023
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, David Matthew Thompson, Naveen Bhoria, Pete Michael Hippleheuser
-
Patent number: 11734015
Abstract: A cache system, having a first cache, a second cache, and a logic circuit coupled to control the first cache and the second cache according to an execution type of a processor. When an execution type of a processor is a first type indicating non-speculative execution of instructions and the first cache is configured to service commands from a command bus for accessing a memory system, the logic circuit is configured to copy a portion of content cached in the first cache to the second cache. The cache system can include a configurable data bit. The logic circuit can be coupled to control the caches according to the bit. Alternatively, the caches can include cache sets. The caches can also include registers associated with the cache sets respectively. The logic circuit can be coupled to control the cache sets according to the registers.
Type: Grant
Filed: June 13, 2022
Date of Patent: August 22, 2023
Assignee: Micron Technology, Inc.
Inventor: Steven Jeffrey Wallach
-
Patent number: 11709711
Abstract: An electronic device includes a memory; a plurality of clients; at least one arbiter circuit; and a management circuit. A given client of the plurality of clients communicates a request to the management circuit requesting an allocation of memory access bandwidth for accesses of the memory by the given client. The management circuit then determines, based on the request, a set of memory access bandwidths including a respective memory access bandwidth for each of the given client and other clients of the plurality of clients that are allocated memory access bandwidth. The management circuit next configures the at least one arbiter circuit to use respective memory access bandwidths from the set of memory access bandwidths for the given client and the other clients for subsequent accesses of the memory.
Type: Grant
Filed: December 13, 2020
Date of Patent: July 25, 2023
Assignee: Advanced Micro Devices, Inc.
Inventor: Guhan Krishnan
-
Patent number: 11550633
Abstract: Methods, systems and computer program products for intra-footprint computing cluster bring-up within a virtual private cloud. A network connection is established between an initiating module and a virtual private cloud (VPC). An initiating module allocates resources of the virtual private cloud including a plurality of nodes that correspond to members of a to-be-configured computing cluster. A cluster management module having coded therein an intended computing cluster configuration is configured into at least one of the plurality of nodes. The members of the to-be-configured computing cluster interoperate from within the VPC to accomplish a set of computing cluster bring-up operations that configure the plurality of members into the intended computing cluster configuration. Execution of bring-up instructions of the management module serves to allocate networking IP addresses of the virtual private cloud.
Type: Grant
Filed: October 31, 2020
Date of Patent: January 10, 2023
Inventors: Mohan Maturi, Abhishek Arora, Manoj Sudheendra
-
Patent number: 11422947
Abstract: A page directory entry cache (PDEC) can be checked to potentially rule out one or more possible page sizes for a translation lookaside buffer (TLB) lookup. Information gained from the PDEC lookup can reduce the number of TLB checks required to conclusively determine if the TLB lookup is a hit or a miss.
Type: Grant
Filed: August 12, 2020
Date of Patent: August 23, 2022
Assignee: International Business Machines Corporation
Inventors: David Campbell, Jake Truelove, Charles D. Wait, Jon K. Kriegel
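The pruning idea can be illustrated with a toy page-size filter. Here a PDEC hit is assumed to pin the mapping to a single page size, so only one TLB probe remains; the real mechanism may only narrow the candidate set. The directory index granularity, the page sizes, and all names are assumptions.

```python
PAGE_SIZES = [4096, 2 * 1024**2, 1024**3]  # 4 KiB, 2 MiB, 1 GiB (x86-style, assumed)

def candidate_page_sizes(pdec, vaddr):
    """Return the page sizes the TLB must still be probed for after
    consulting the page-directory-entry cache."""
    entry = pdec.get(vaddr >> 21)     # 2 MiB-granule directory index (assumed)
    if entry is None:
        return PAGE_SIZES             # no information: probe every size
    return [entry['page_size']]       # directory entry pins the size: one probe
```

With set-associative TLBs indexed differently per page size, each ruled-out size saves a full lookup, which is the latency win the abstract points at.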
-
Patent number: 10922114
Abstract: A processing system includes a first register to store an invalidation mode flag associated with a virtual processor identifier (VPID) and a processing core, communicatively coupled to the first register, the processing core comprising a logic circuit to execute a virtual machine monitor (VMM) environment, the VMM environment comprising a root mode VMM supporting a non-root mode VMM, the non-root mode VMM to execute a virtual machine (VM) identified by the VPID, the logic circuit further comprising an invalidation circuit to execute a virtual processor invalidation (INVVPID) instruction issued by the non-root mode VMM, the INVVPID instruction comprising a reference to an INVVPID descriptor that specifies a linear address and the VPID, and, responsive to determining that the invalidation mode flag is set, invalidate, without triggering a VM exit event, a memory address mapping associated with the linear address.
Type: Grant
Filed: December 12, 2016
Date of Patent: February 16, 2021
Assignee: Intel Corporation
Inventors: Bing Zhu, Kai Wang, Peng Zou, Fangjian Zhong
-
Patent number: 10853081
Abstract: A processor is disclosed that performs pipelining which processes a plurality of threads and executes instructions in concurrent processing, the instructions corresponding to thread numbers of the threads and including a branch instruction. The processor may include a pipeline processor, which includes a fetch part that fetches the instruction of the thread having an execution right, and a computation execution part that executes the instruction fetched by the fetch part. The processor may include a branch controller that determines whether to drop an instruction subsequent to the branch instruction within the pipeline processor based on the thread number of the thread where the branch instruction is executed and on the thread number of the subsequent instruction.
Type: Grant
Filed: November 27, 2018
Date of Patent: December 1, 2020
Assignee: SANKEN ELECTRIC CO., LTD.
Inventors: Kazuhiro Mima, Hitomi Shishido
-
Patent number: 10475150
Abstract: Graphics processing systems and methods are described. For example, one embodiment of a graphics processing apparatus comprises a graphics processing unit (GPU), the GPU including a high priority command streamer to dispatch high priority commands from an application, a normal priority command streamer to receive normal priority commands through a command path, one or more execution units, and a thread dispatcher. The thread dispatcher is to dispatch normal priority commands to the one or more execution units, determine the high priority command streamer includes at least one command, cause the one or more execution units to save their states, and dispatch at least one command from the high priority queue to the one or more execution units.
Type: Grant
Filed: September 29, 2017
Date of Patent: November 12, 2019
Assignee: Intel Corporation
Inventors: William A. Hux, Girish Ravunnikutty, Adam T. Lake
-
Patent number: 10198260
Abstract: A system for storing program counter values is disclosed. The system may include a program counter, a first memory including a plurality of sectors, and a first circuit configured to retrieve a program instruction from a location in memory dependent upon a value of the program counter, and send the value of the program counter to an array for storage and determination of a predicted outcome of the program instruction in response to a determination that execution of the program instruction changes a program flow. A second circuit may be configured to retrieve the value of the program counter from a given entry in a particular sector of the array, and determine an actual outcome of the program instruction dependent upon the retrieved value of the program counter.
Type: Grant
Filed: January 13, 2016
Date of Patent: February 5, 2019
Assignee: Oracle International Corporation
Inventors: Manish Shah, Christopher Olson
-
Patent number: 10148671
Abstract: A functional program stored in a memory area of an electronic card may be protected against an attack by disturbance of electrical origin intended to modify at least one logic state of at least one code of this program. The method may include: a storage step during which codes of the functional program and codes of a check program intended to check the logical behavior of the functional program are stored in the memory of the card; and a step of executing at least one code of the functional program followed by a step of checking the logic states of the functional program by executing the check program. During the storage step, the codes of the check program are stored in a memory area formed by addresses that are defined so that the attack by disturbance of electrical origin has no influence on the logic states of this program.
Type: Grant
Filed: July 8, 2013
Date of Patent: December 4, 2018
Assignee: IDEMIA IDENTITY & SECURITY FRANCE
Inventors: Thanh Ha Le, Julien Bringer, Louis-Philippe Goncalves, Maël Berthier
-
Patent number: 9990199
Abstract: A method and system are disclosed. The method may include receiving instructions in a hardware accelerator coupled to a computing device. The instructions may describe operations and data dependencies between the operations. The operations and the data dependencies may be predetermined. The method may include performing a splitter operation in the hardware accelerator, performing an operation in each of a plurality of branches, and performing a combiner operation in the hardware accelerator.
Type: Grant
Filed: September 18, 2015
Date of Patent: June 5, 2018
Assignee: Axis AB
Inventors: Niclas Danielsson, Mikael Asker, Hans-Peter Nilsson, Markus Skans, Mikael Pendse
-
Patent number: 9965391
Abstract: A first threshold number of cache lines may be fetched to populate each of the ways of a first cache set of a higher level cache and each of the ways of a first cache set of a lower level cache. A second threshold number of cache lines may be fetched to map to the first cache set of the higher level cache and a second cache set of the lower level cache. The first threshold number of cache lines may then be accessed from the first cache set of the lower level cache.
Type: Grant
Filed: June 30, 2014
Date of Patent: May 8, 2018
Assignee: Hewlett Packard Enterprise Development LP
Inventor: Anys Bacha
-
Patent number: 9921849
Abstract: Embodiments relate to address expansion and contraction in a multithreading computer system. According to one aspect, a computer implemented method for address adjustment in a configuration is provided. The configuration includes a core configurable between an ST mode and an MT mode, where the ST mode addresses a primary thread and the MT mode addresses the primary thread and one or more secondary threads on shared resources of the core. The primary thread is accessed in the ST mode using a core address value. Switching from the ST mode to the MT mode is performed. The primary thread or one of the one or more secondary threads is accessed in the MT mode using an expanded address value. The expanded address value includes the core address value concatenated with a thread address value.
Type: Grant
Filed: August 18, 2015
Date of Patent: March 20, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jonathan D. Bradbury, Fadi Y. Busaba, Mark S. Farrell, Charles W. Gainey, Jr., Dan F. Greiner, Lisa Cranton Heller, Jeffrey P. Kubala, Damian L. Osisek, Donald W. Schmidt, Timothy J. Slegel
-
Patent number: 9892482
Abstract: Technologies may provide for processing video content. A request to process video content may be received at a user mode driver. In response, the user mode driver may insert a command associated with the request into a command buffer. In addition, the user mode driver may enqueue the command buffer to receive a further request to process further video content independent of an execution of the command by platform hardware. Additionally, a command submission process may dequeue the command buffer and call a kernel mode driver. The kernel mode driver may receive the system call independent of the user mode driver and submit the command buffer to the platform hardware to process the video content.
Type: Grant
Filed: December 19, 2012
Date of Patent: February 13, 2018
Assignee: Intel Corporation
Inventors: Hua You, Jiaping Wu
-
Patent number: 9703615
Abstract: In one embodiment, a method includes receiving a request to execute first program code that is configured to perform a step of a computation, wherein the request includes a current state of the computation, determining whether the first program code is to be invoked based on an execution condition, when the execution condition is true, executing the first program code based on the current state of the computation, and returning a response that includes a result of executing the first program code, and when the execution condition is false, returning a response indicating that the result of the executing is invalid. The execution condition may be false when an amount of time that has passed since a previous execution of the first program code is greater than a threshold time limit.
Type: Grant
Filed: July 18, 2014
Date of Patent: July 11, 2017
Assignee: Facebook, Inc.
Inventors: Ari Alexander Grant, Jonanthan P. Dann
-
Patent number: 9558035
Abstract: A system and method can support queue processing in a computing environment such as a distributed data grid. A thread can be associated with a queue in the computing environment, wherein the thread runs on one or more microprocessors that support a central processing unit (CPU). The system can use the thread to process one or more tasks when said one or more tasks arrive at the queue. Furthermore, the system can configure the thread to be in one of a sleep state and an idle state adaptively, when there is no task in the queue.
Type: Grant
Filed: August 1, 2014
Date of Patent: January 31, 2017
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventor: Mark A. Falco
-
Patent number: 9009408
Abstract: This invention handles write request cache misses. The cache controller stores write data, sends a read request to external memory for a corresponding cache line, merges the write data with data returned from the external memory and stores merged data in the cache. The cache controller includes buffers with plural entries storing the write address, the write data, the position of the write data within a cache line and a unique identification number. This stored data enables the cache controller to proceed to servicing other access requests while waiting for response from the external memory.
Type: Grant
Filed: September 26, 2011
Date of Patent: April 14, 2015
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, Raguram Damodaran, David Matthew Thompson
-
Patent number: 8972661
Abstract: The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold.
Type: Grant
Filed: October 31, 2011
Date of Patent: March 3, 2015
Assignee: International Business Machines Corporation
Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Ioannis Koltsidas, Roman A. Pletka
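The admission policy in this abstract maps almost directly onto two small functions. The unit step size and all names are assumptions; the comparison logic follows the abstract's own description.

```python
def adjust_threshold(threshold, inserted_hits, evicted_hits):
    """Move the heat-metric admission threshold based on whether recently
    inserted or recently evicted data is receiving more hits."""
    if inserted_hits > evicted_hits:
        return threshold - 1   # recent admissions are paying off: admit more
    if inserted_hits < evicted_hits:
        return threshold + 1   # evicted data was hotter: be stricter
    return threshold

def admit(candidate_heat, threshold):
    """Admit a candidate into the secondary cache only if it is hot enough."""
    return candidate_heat >= threshold
```

The feedback loop makes the threshold self-tuning: it relaxes while admitted data keeps getting hits and tightens when evicted data turns out to have been the hotter population.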
-
Patent number: 8972662
Abstract: The population of data to be inserted into secondary data storage cache is controlled by determining a heat metric of candidate data; adjusting a heat metric threshold; rejecting candidate data provided to the secondary data storage cache whose heat metric is less than the threshold; and admitting candidate data whose heat metric is equal to or greater than the heat metric threshold. The adjustment of the heat metric threshold is determined by comparing a reference metric related to hits of data most recently inserted into the secondary data storage cache, to a reference metric related to hits of data most recently evicted from the secondary data storage cache; if the most recently inserted reference metric is greater than the most recently evicted reference metric, decrementing the threshold; and if the most recently inserted reference metric is less than the most recently evicted reference metric, incrementing the threshold.
Type: Grant
Filed: April 26, 2012
Date of Patent: March 3, 2015
Assignee: International Business Machines Corporation
Inventors: Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Ioannis Koltsidas, Roman A. Pletka
-
Populating a first stride of tracks from a first cache to write to a second stride in a second cache
Patent number: 8966178
Abstract: Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted.
Type: Grant
Filed: January 17, 2012
Date of Patent: February 24, 2015
Assignee: International Business Machines Corporation
Inventors: Kevin J. Ash, Michael T. Benhase, Lokesh M. Gupta, Matthew J. Kalos, Karl A. Nielsen
-
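The stride-based demotion path described in patent 8966178 can be modeled with plain containers. Representing the first cache as a list of tracks and the second cache as a mapping from stride identifier to track list is an assumption of this sketch, as are all names.

```python
def demote_stride(first_cache, second_cache, tracks_to_demote, stride_id):
    """Form a stride from tracks demoted out of the first cache and append
    it to the chosen stride in the second cache. Returns the formed stride."""
    stride = [t for t in tracks_to_demote if t in first_cache]
    for t in stride:
        first_cache.remove(t)                     # demote from the first cache
    second_cache.setdefault(stride_id, []).extend(stride)
    return stride
```

Grouping demoted tracks into strides lets the second cache (e.g. flash backed by RAID) write them as full-stride sequential writes instead of scattered single-track updates.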
Patent number: 8966221
Abstract: A lookup operation is performed in a translation lookaside buffer based on a first translation request as current translation request, wherein a respective absolute address is returned to a corresponding requestor for the first translation request as translation result in case of a hit. A translation engine is activated to perform at least one translation table fetch in case the current translation request does not hit an entry in the translation lookaside buffer, wherein the translation engine is idle waiting for the at least one translation table fetch to return data, reporting the idle state of the translation engine as lookup under miss condition and accepting a currently pending translation request as second translation request, wherein a lookup under miss sequence is performed in the translation lookaside buffer based on said second translation request.
Type: Grant
Filed: June 21, 2011
Date of Patent: February 24, 2015
Assignee: International Business Machines Corporation
Inventors: Ute Gaertner, Thomas Koehler
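The "lookup under miss" idea above can be modeled with a toy TLB: while the translation engine is idle waiting on a table fetch, the TLB accepts and looks up a second pending request. The class, dictionary model, and flag name are illustrative assumptions, not the patent's structures.

```python
# A toy model of lookup-under-miss: a miss leaves the translation engine idle
# waiting on a table fetch, and during that window a second request can still
# be looked up in the TLB. All names here are assumptions for illustration.

class Tlb:
    def __init__(self):
        self.entries = {}                  # virtual page -> absolute address
        self.engine_idle_on_miss = False   # "lookup under miss" condition

    def translate(self, vpage):
        """Return the absolute address on a hit; report a miss otherwise."""
        if vpage in self.entries:
            return self.entries[vpage]
        self.engine_idle_on_miss = True    # engine now waits on a table fetch
        return None

    def lookup_under_miss(self, vpage):
        """While the engine waits, service a second request from the TLB."""
        if self.engine_idle_on_miss and vpage in self.entries:
            return self.entries[vpage]
        return None
```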
-
Populating a first stride of tracks from a first cache to write to a second stride in a second cache
Patent number: 8959279
Abstract: Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted.
Type: Grant
Filed: May 4, 2012
Date of Patent: February 17, 2015
Assignee: International Business Machines Corporation
Inventors: Kevin J. Ash, Michael T. Benhase, Lokesh M. Gupta, Matthew J. Kalos, Karl A. Nielsen
-
Patent number: 8938585
Abstract: Embodiments of the present disclosure provide a system on a chip (SOC) comprising a processing core including a core bus agent, a bus interface unit (BIU), and a bridge module operatively coupling the processing core to the BIU, the bridge module configured to selectively route information from the core bus agent to a cache or to the BIU by bypassing the cache. Other embodiments are also described and claimed.
Type: Grant
Filed: March 25, 2014
Date of Patent: January 20, 2015
Assignee: Marvell Israel (M.I.S.L) Ltd.
Inventors: Tarek Rohana, Gil Stoler
-
Patent number: 8935478
Abstract: According to one aspect of the present disclosure, a system and technique for variable cache line size management is disclosed. The system includes a processor and a cache hierarchy, where the cache hierarchy includes a sectored upper level cache and an unsectored lower level cache, and wherein the upper level cache includes a plurality of sub-sectors, each sub-sector having a cache line size corresponding to a cache line size of the lower level cache. The system also includes logic executable to, responsive to determining that a cache line from the upper level cache is to be evicted to the lower level cache: identify referenced sub-sectors of the cache line to be evicted; invalidate unreferenced sub-sectors of the cache line to be evicted; and store the referenced sub-sectors in the lower level cache.
Type: Grant
Filed: November 1, 2011
Date of Patent: January 13, 2015
Assignee: International Business Machines Corporation
Inventors: Robert H. Bell, Jr., Wen-Tzer T. Chen, Diane G. Flemming, Hong L. Hua, William A. Maron, Mysore S. Srinivas
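The sectored-eviction step above reduces to a small filter: only sub-sectors that were actually referenced are stored in the unsectored lower level cache. The tuple layout, names, and dictionary model below are assumptions for illustration.

```python
# Hedged sketch of sectored eviction: when a sectored upper-level line is
# evicted, referenced sub-sectors go to the lower-level cache and unreferenced
# sub-sectors are invalidated (dropped). Field names are assumptions.

def evict_sectored_line(sub_sectors, lower_cache):
    """Each sub_sector is (tag, data, referenced). Keep only referenced ones."""
    for tag, data, referenced in sub_sectors:
        if referenced:
            lower_cache[tag] = data   # store in the lower-level cache
        # unreferenced sub-sectors are simply invalidated, not written back

lower = {}
evict_sectored_line([("a", 1, True), ("b", 2, False), ("c", 3, True)], lower)
# lower now holds only the referenced sub-sectors "a" and "c"
```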
-
Patent number: 8904110
Abstract: This invention permits user controlled cache coherence operations with the flexibility to do these operations on all levels of cache together or each level independently. In the case of an all level operation, the user does not have to monitor and sequence each phase of the operation. This invention also provides a way for users to track completion of these operations. This is critical for multi-core/multi-processor devices. Multiple cores may be accessing the end point and the user/application needs to be able to identify when the operation from one core is complete, before permitting other cores to access that data or code.
Type: Grant
Filed: September 22, 2011
Date of Patent: December 2, 2014
Assignee: Texas Instruments Incorporated
Inventors: Raguram Damodaran, Abhijeet A. Chachad
-
Patent number: 8832377
Abstract: Information is maintained on strides configured in a second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks. A determination is made of tracks to demote from a first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache having an occupancy count indicating the stride is empty. A determination is made of a target stride in the second cache based on the occupancy counts of the strides in the second cache. A determination is made of at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache. The target stride is populated with the valid tracks from the source strides.
Type: Grant
Filed: February 27, 2013
Date of Patent: September 9, 2014
Assignee: International Business Machines Corporation
Inventors: Michael T. Benhase, Lokesh M. Gupta
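The consolidation step above, choosing a target stride and at least two source strides by occupancy count, can be sketched as follows. The list-of-dicts stride model, the choice of an empty target, and the stride size of 4 are assumptions for illustration only.

```python
# A rough sketch of occupancy-based stride consolidation: pick a target stride
# by occupancy count, then copy valid tracks from at least two partially
# occupied source strides into it, freeing the sources. Model is an assumption.

def consolidate(strides, stride_size=4):
    """strides: list of dicts mapping track id -> data (valid tracks only)."""
    # Target: a stride whose occupancy count is zero (assumed policy).
    target = next(s for s in strides if len(s) == 0)
    # Sources: partially occupied strides, least occupied first, take two.
    sources = sorted((s for s in strides if 0 < len(s) < stride_size),
                     key=len)[:2]
    for src in sources:
        target.update(src)   # copy valid tracks into the target stride
        src.clear()          # source strides are freed after the copy
    return target
```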
-
Patent number: 8825957
Abstract: Information is maintained on strides configured in a second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks. A determination is made of tracks to demote from a first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache having an occupancy count indicating the stride is empty. A determination is made of a target stride in the second cache based on the occupancy counts of the strides in the second cache. A determination is made of at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache. The target stride is populated with the valid tracks from the source strides.
Type: Grant
Filed: January 17, 2012
Date of Patent: September 2, 2014
Assignee: International Business Machines Corporation
Inventors: Michael T. Benhase, Lokesh M. Gupta
-
Patent number: 8825956
Abstract: Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride.
Type: Grant
Filed: February 27, 2013
Date of Patent: September 2, 2014
Assignee: International Business Machines Corporation
Inventors: Michael T. Benhase, Lokesh M. Gupta
-
Patent number: 8825953
Abstract: Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride.
Type: Grant
Filed: January 17, 2012
Date of Patent: September 2, 2014
Assignee: International Business Machines Corporation
Inventors: Michael T. Benhase, Lokesh M. Gupta
-
Patent number: 8793437
Abstract: A cache memory system using temporal locality information and a data storage method are provided. The cache memory system includes: a main cache which stores data accessed by a central processing unit; an extended cache which stores the data if the data is evicted from the main cache; and a separation cache which stores the data of the extended cache when the data of the extended cache is evicted from the extended cache and temporal locality information corresponding to the data of the extended cache satisfies a predetermined condition.
Type: Grant
Filed: August 8, 2007
Date of Patent: July 29, 2014
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jong Myon Kim, Soojung Ryu, Dong-Hoon Yoo, Dong Kwan Suh, Jeongwook Kim
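The three-cache flow above hinges on the admission test at the extended-cache boundary. The sketch below models only that test; the reuse-count condition and all names are placeholder assumptions, since the abstract does not specify what the "predetermined condition" is.

```python
# Illustrative model of the separation-cache admission step: data evicted from
# the extended cache enters the separation cache only when its temporal
# locality information meets a condition. The reuse-count test is an assumed
# placeholder for that unspecified condition.

def on_extended_cache_evict(entry, separation_cache, min_reuse=2):
    """entry: (data, reuse_count). Admit to the separation cache if reused enough."""
    data, reuse_count = entry
    if reuse_count >= min_reuse:     # placeholder temporal-locality condition
        separation_cache.append(data)
        return True
    return False                     # data falls out of the hierarchy
```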
-
Patent number: 8719507
Abstract: Parallel computing environments, where threads executing in neighboring processors may access the same set of data, may be designed and configured to share one or more levels of cache memory. Before a processor forwards a request for data to a higher level of cache memory following a cache miss, the processor may determine whether a neighboring processor has the data stored in a local cache memory. If so, the processor may forward the request to the neighboring processor to retrieve the data. Because access to the cache memories for the two processors is shared, the effective size of the memory is increased. This may advantageously decrease cache misses for each level of shared cache memory without increasing the individual size of the caches on the processor chip.
Type: Grant
Filed: January 4, 2012
Date of Patent: May 6, 2014
Assignee: International Business Machines Corporation
Inventors: Miguel Comparan, Robert A. Shearer
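The lookup order in the abstract above (local cache, then neighbor's cache, then the next level) can be sketched in a few lines. The dictionary cache model and function name are assumptions for illustration; only the ordering follows the abstract.

```python
# Sketch of neighbor-shared caching: on a local miss, check the neighboring
# processor's cache before forwarding the request to the next cache level.
# All structure names here are assumptions.

def load(addr, local_cache, neighbor_cache, next_level):
    """Resolve a load, preferring local, then neighbor, then the next level."""
    if addr in local_cache:
        return local_cache[addr]
    if addr in neighbor_cache:       # neighbor hit avoids a next-level access
        return neighbor_cache[addr]
    value = next_level[addr]         # forward the miss to the higher level
    local_cache[addr] = value        # fill the local cache on the way back
    return value
```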
-
Patent number: 8719508
Abstract: Parallel computing environments, where threads executing in neighboring processors may access the same set of data, may be designed and configured to share one or more levels of cache memory. Before a processor forwards a request for data to a higher level of cache memory following a cache miss, the processor may determine whether a neighboring processor has the data stored in a local cache memory. If so, the processor may forward the request to the neighboring processor to retrieve the data. Because access to the cache memories for the two processors is shared, the effective size of the memory is increased. This may advantageously decrease cache misses for each level of shared cache memory without increasing the individual size of the caches on the processor chip.
Type: Grant
Filed: December 10, 2012
Date of Patent: May 6, 2014
Assignee: International Business Machines Corporation
Inventors: Miguel Comparan, Robert A. Shearer
-
Publication number: 20140108734
Abstract: A processor includes a first processing unit and a first level cache associated with the first processing unit and operable to store data used by the first processing unit during normal operation. The first processing unit is operable to store first architectural state data for the first processing unit in the first level cache responsive to receiving a power down signal. A method for controlling power to a processor including a hierarchy of cache levels includes storing first architectural state data for a first processing unit of the processor in a first level of the cache hierarchy responsive to receiving a power down signal and flushing contents of the first level including the first architectural state data to a first lower level of the cache hierarchy prior to powering down the first level of the cache hierarchy and the first processing unit.
Type: Application
Filed: October 17, 2012
Publication date: April 17, 2014
Inventors: Paul Edward Kitchin, William L. Walker
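The power-down sequence above is a save-then-flush: architectural state goes into the first-level cache, and the first level's contents (including that state) are flushed to a lower level before power-off. The dictionary cache model and names below are assumptions for illustration.

```python
# Hedged sketch of the power-down flow: save architectural state into L1, then
# flush all of L1 (data plus saved state) to L2 so L1 can be powered down.
# Names and the dict-based cache model are assumptions only.

def power_down(arch_state, l1_cache, l2_cache):
    """Store architectural state in L1, then flush L1 to L2 before power-off."""
    l1_cache["arch_state"] = dict(arch_state)   # save state into L1 on signal
    l2_cache.update(l1_cache)                   # flush L1 contents to L2
    l1_cache.clear()                            # L1 may now lose power safely
```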
-
Patent number: 8688911
Abstract: Embodiments of the present disclosure provide a system on a chip (SOC) comprising a processing core including a core bus agent, a bus interface unit (BIU), and a bridge module operatively coupling the processing core to the BIU, the bridge module configured to selectively route information from the core bus agent to a cache or to the BIU by bypassing the cache. Other embodiments are also described and claimed.
Type: Grant
Filed: November 23, 2009
Date of Patent: April 1, 2014
Assignee: Marvell Israel (M.I.S.L) Ltd.
Inventors: Tarek Rohana, Gil Stoler