Associative Patents (Class 711/128)
-
Patent number: 12242388
Abstract: Row hammer attacks take advantage of an unintended and undesirable side effect of memory devices in which memory cells interact electrically by leaking charge, possibly changing the contents of nearby memory rows that were not addressed in the original memory access. Row hammer attacks are mitigated by using a victim cache. Data is written to cache lines of a cache. A least recently used cache line of the cache is written to the victim cache.
Type: Grant
Filed: September 15, 2022
Date of Patent: March 4, 2025
Assignee: Micron Technology, Inc.
Inventors: Ameen D. Akel, Shivam Swami
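The eviction flow this abstract describes (the least-recently-used line is moved into a victim cache rather than discarded) can be sketched as follows; the class and method names are illustrative, not taken from the patent.

```python
from collections import OrderedDict

class CacheWithVictim:
    """LRU cache that evicts its least-recently-used line into a victim cache."""

    def __init__(self, capacity, victim_capacity):
        self.capacity = capacity
        self.victim_capacity = victim_capacity
        self.lines = OrderedDict()   # main cache: address -> data, LRU order
        self.victim = OrderedDict()  # victim cache, also LRU-ordered

    def write(self, addr, data):
        if addr in self.lines:
            self.lines.move_to_end(addr)  # refresh recency
        elif len(self.lines) >= self.capacity:
            # Evict the LRU line into the victim cache instead of dropping it.
            old_addr, old_data = self.lines.popitem(last=False)
            if len(self.victim) >= self.victim_capacity:
                self.victim.popitem(last=False)
            self.victim[old_addr] = old_data
        self.lines[addr] = data

    def read(self, addr):
        if addr in self.lines:
            self.lines.move_to_end(addr)
            return self.lines[addr]
        return self.victim.get(addr)  # may still hit in the victim cache
```

A recently evicted line remains readable from the victim cache, which is what blunts a row hammer pattern of repeated evict-and-refetch accesses.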
-
Patent number: 12130691
Abstract: Disclosed embodiments relate to a dNap architecture that accurately transitions cache lines to full power state before an access to them. This ensures that there are no additional delays due to waking up drowsy lines. Only cache lines that are determined by the DMC to be accessed in the immediate future are fully powered while others are put in drowsy mode. As a result, we are able to significantly reduce leakage power with no cache performance degradation and minimal hardware overhead, especially at higher associativities. Up to 92% static/leakage power savings are accomplished with minimal hardware overhead and no performance tradeoff.
Type: Grant
Filed: August 15, 2023
Date of Patent: October 29, 2024
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Oluleye Olorode, Mehrdad Nourani
-
Patent number: 12124371
Abstract: An apparatus and method to reduce bandwidth and latency associated with probabilistic caches.
Type: Grant
Filed: March 26, 2021
Date of Patent: October 22, 2024
Assignee: Intel Corporation
Inventors: Ruchira Sasanka, Rajat Agarwal
-
Patent number: 12073119
Abstract: Disclosed are techniques for processing uncommitted writes in a store queue. In an aspect, an apparatus comprises a processor and a dual store queue having an in-order queue (IOQ) for storing uncommitted writes and an uncommitted data gather queue (UGQ) for gathering uncommitted data. The dual store queue receives, from a processor, a first write instruction for writing first data to at least a portion of memory at a first memory address, allocates an IOQ entry corresponding to the first write instruction, and allocates or updates a UGQ entry associated with the first memory address to contain the first data.
Type: Grant
Filed: August 19, 2022
Date of Patent: August 27, 2024
Assignee: QUALCOMM Incorporated
Inventors: Cerine Marguerite Hill, Derek Robert Hower
-
Patent number: 12050702
Abstract: Embodiments are directed to trusted local memory management in a virtualized GPU. An embodiment of an apparatus includes one or more processors including a trusted execution environment (TEE); a GPU including a trusted agent; and a memory, the memory including GPU local memory, the trusted agent to ensure proper allocation/deallocation of the local memory and verify translations between graphics physical addresses (PAs) and PAs for the apparatus, wherein the local memory is partitioned into protection regions including a protected region and an unprotected region, and wherein the protected region is to store a memory permission table maintained by the trusted agent, the memory permission table to include any virtual function assigned to a trusted domain, a per process graphics translation table to translate between graphics virtual address (VA) to graphics guest PA (GPA), and a local memory translation table to translate between graphics GPAs and PAs for the local memory.
Type: Grant
Filed: July 25, 2023
Date of Patent: July 30, 2024
Assignee: INTEL CORPORATION
Inventors: Pradeep M. Pappachan, Luis S. Kida, Reshma Lal
-
Patent number: 12045644
Abstract: A method includes receiving a first request to allocate a line in an N-way set associative cache and, in response to a cache coherence state of a way indicating that a cache line stored in the way is invalid, allocating the way for the first request. The method also includes, in response to no ways in the set having a cache coherence state indicating that the cache line stored in the way is invalid, randomly selecting one of the ways in the set. The method also includes, in response to a cache coherence state of the selected way indicating that another request is not pending for the selected way, allocating the selected way for the first request.
Type: Grant
Filed: May 22, 2020
Date of Patent: July 23, 2024
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, David Matthew Thompson
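The allocation policy in this abstract (prefer an invalid way; otherwise pick a random way and allocate it only if no other request is pending for it) can be sketched in a few lines of Python; the state names and function signature are illustrative.

```python
import random

INVALID, VALID, PENDING = "invalid", "valid", "pending"

def allocate_way(set_states, rng=random):
    """Pick a way within one cache set, following the policy in the abstract:
    an invalid way is allocated immediately; failing that, a random way is
    selected and allocated only if no other request is pending for it."""
    for way, state in enumerate(set_states):
        if state == INVALID:
            return way                        # invalid way: allocate at once
    way = rng.randrange(len(set_states))      # no invalid way: random victim
    if set_states[way] != PENDING:
        return way
    return None                               # selected way is busy; retry later
```

Returning `None` models the "another request is pending" case, where the requester must retry rather than steal a way mid-fill.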
-
Patent number: 12032472
Abstract: Omitting or obfuscating physical memory addresses within an execution trace. A microprocessor identifies a first translation lookaside buffer (TLB) entry mapping a first virtual memory page to a physical memory page, and initiates logging of the first TLB entry by initiating logging of at least a first virtual address of the first virtual memory page and a first identifier. The microprocessor identifies a second TLB entry mapping a second virtual memory page to the physical memory page, and initiates logging of the second TLB entry by initiating logging of at least a second virtual address of the second virtual memory page and a second identifier. The microprocessor determines that the first and second TLB entries are each live, logged into the execution trace, and mapped to the same physical address, and ensures that the execution trace indicates that the first and second TLB entries each map to the physical address.
Type: Grant
Filed: March 21, 2022
Date of Patent: July 9, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventor: Jordi Mola
-
Patent number: 12014178
Abstract: An instruction fetch pipeline includes first, second, and third sub-pipelines that respectively include: a TLB that receives a fetch virtual address, a tag random access memory (RAM) of a physically-indexed physically-tagged set associative instruction cache that receives a predicted set index, and a data RAM that receives the predicted set index and a predicted way number that specifies a way of the entry from which a block of instructions was previously fetched. The predicted set index specifies the instruction cache set that includes the entry. The three sub-pipelines respectively initiate in parallel: a TLB access using the fetch virtual address to obtain a translation thereof into a fetch physical address that includes a tag, a tag RAM access using the predicted set index to read a set of tags, and a data RAM access using the predicted set index and the predicted way number to fetch the block of instructions.
Type: Grant
Filed: June 8, 2022
Date of Patent: June 18, 2024
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Michael N. Michael, Vihar Soneji
-
Patent number: 11977893
Abstract: An instruction fetch pipeline includes first, second, and third sub-pipelines that respectively include: a TLB that receives a fetch virtual address, a tag random access memory (RAM) of a physically-indexed physically-tagged set associative instruction cache that receives a predicted set index, and a data RAM that receives the predicted set index and a predicted way number that specifies a way of the entry from which a block of instructions was previously fetched. The predicted set index specifies the instruction cache set that includes the entry. The three sub-pipelines respectively initiate in parallel: a TLB access using the fetch virtual address to obtain a translation thereof into a fetch physical address that includes a tag, a tag RAM access using the predicted set index to read a set of tags, and a data RAM access using the predicted set index and the predicted way number to fetch the block of instructions.
Type: Grant
Filed: June 8, 2022
Date of Patent: May 7, 2024
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Michael N. Michael, Vihar Soneji
-
Patent number: 11966336
Abstract: Some embodiments provide a program that receives a first set of data and a first greenhouse gas emission value. The program stores, in a cache, the first set of data and the first greenhouse gas emission value. The program receives a second set of data and a second greenhouse gas emission value. The program stores, in the cache, the second set of data and the second greenhouse gas emission value. The program receives a third set of data and a third greenhouse gas emission value. The program determines one of the first and second sets of data to remove from the cache based on the first and second greenhouse gas emission values. The program replaces, in the cache, one of the first and second sets of data and the corresponding first or second greenhouse gas emission value with the third set of data and the third greenhouse gas emission value.
Type: Grant
Filed: November 8, 2021
Date of Patent: April 23, 2024
Assignee: SAP SE
Inventors: Debashis Banerjee, Prateek Agarwal, Kavitha Krishnan
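A minimal sketch of this eviction scheme: each cached entry carries its greenhouse-gas emission value, and insertion into a full cache replaces an existing entry chosen by emission value. The abstract only says the choice is *based on* the emission values; evicting the highest-emission entry is one plausible policy, assumed here, and all names are illustrative.

```python
def insert_with_emission(cache, capacity, key, data, emission):
    """Insert (data, emission) into `cache` (a dict); when the cache is full,
    replace the entry whose stored greenhouse-gas emission value is highest.
    This eviction choice is an assumption; the patent abstract does not fix
    the exact selection rule."""
    if key not in cache and len(cache) >= capacity:
        worst = max(cache, key=lambda k: cache[k][1])  # highest emission value
        del cache[worst]
    cache[key] = (data, emission)
    return cache
```

With capacity 2, inserting a third data set displaces whichever of the first two carries the larger emission value, matching the replace step in the abstract.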
-
Patent number: 11961073
Abstract: To achieve efficient reading of data from a memory including a plurality of banks by specifying different banks and accessing the memory from a plurality of hash computation circuits simultaneously, an information processing device includes a memory 1 including a plurality of banks, a plurality of hash computation circuits 8, and an interconnect 2 respectively connecting the banks in the memory 1 and the hash computation circuits 8 to each other, wherein the hash computation circuits 8 execute control in such a manner that read requests for reading data from the memory 1 respectively include bank numbers for specifying different banks in the same cycle.
Type: Grant
Filed: February 5, 2020
Date of Patent: April 16, 2024
Assignee: AXELL CORPORATION
Inventors: Hirofumi Iwato, Takehiro Ogawa
-
Patent number: 11880685
Abstract: An instruction fetch pipeline includes first, second, and third sub-pipelines that respectively include: a TLB that receives a fetch virtual address, a tag random access memory (RAM) of a physically-indexed physically-tagged set associative instruction cache that receives a predicted set index, and a data RAM that receives the predicted set index and a predicted way number that specifies a way of the entry from which a block of instructions was previously fetched. The predicted set index specifies the instruction cache set that includes the entry. The three sub-pipelines respectively initiate in parallel: a TLB access using the fetch virtual address to obtain a translation thereof into a fetch physical address that includes a tag, a tag RAM access using the predicted set index to read a set of tags, and a data RAM access using the predicted set index and the predicted way number to fetch the block of instructions.
Type: Grant
Filed: June 8, 2022
Date of Patent: January 23, 2024
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Michael N. Michael, Vihar Soneji
-
Patent number: 11836086
Abstract: Aspects of the present disclosure relate to systems and methods for improving performance of a partial cache collapse by a processing device. Certain embodiments provide a method for performing a partial cache collapse procedure, the method including: counting, in each cache way of a group of cache ways, a number of dirty cache lines having dirty bits indicating the cache line has been modified; selecting, from the group, at least one cache way for collapse, based on its corresponding number of dirty cache lines; and performing the partial cache collapse procedure based on the at least one cache way selected from the group for collapse.
Type: Grant
Filed: June 10, 2022
Date of Patent: December 5, 2023
Assignee: QUALCOMM Incorporated
Inventors: Hithesh Hassan Lepaksha, Sharath Kumar Nagilla, Darshan Kumar Nandanwar, Nirav Narendra Desai, Venkata Biswanath Devarasetty
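The selection step above can be sketched as: count dirty lines per way, then collapse the way with the fewest. The "fewest dirty lines" criterion is an assumption (it minimizes write-back cost before powering the way down); the abstract only says selection is based on the count.

```python
def select_way_to_collapse(dirty_bits_per_way):
    """dirty_bits_per_way: one list of dirty bits (1 = modified) per cache way.
    Count dirty lines in each way and pick the way with the fewest, on the
    assumption that it is the cheapest to write back and power down."""
    counts = [sum(bits) for bits in dirty_bits_per_way]
    victim = min(range(len(counts)), key=counts.__getitem__)
    return victim, counts

def collapse(dirty_bits_per_way):
    """Return the chosen way and how many dirty lines must be written back
    before that way can be powered down."""
    way, counts = select_way_to_collapse(dirty_bits_per_way)
    return way, counts[way]
```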
-
Patent number: 11755748
Abstract: Embodiments are directed to trusted local memory management in a virtualized GPU. An embodiment of an apparatus includes one or more processors including a trusted execution environment (TEE); a GPU including a trusted agent; and a memory, the memory including GPU local memory, the trusted agent to ensure proper allocation/deallocation of the local memory and verify translations between graphics physical addresses (PAs) and PAs for the apparatus, wherein the local memory is partitioned into protection regions including a protected region and an unprotected region, and wherein the protected region is to store a memory permission table maintained by the trusted agent, the memory permission table to include any virtual function assigned to a trusted domain, a per process graphics translation table to translate between graphics virtual address (VA) to graphics guest PA (GPA), and a local memory translation table to translate between graphics GPAs and PAs for the local memory.
Type: Grant
Filed: December 19, 2022
Date of Patent: September 12, 2023
Assignee: INTEL CORPORATION
Inventors: Pradeep M. Pappachan, Luis S. Kida, Reshma Lal
-
Patent number: 11736597
Abstract: A data exchange method is applied to a first electronic device and includes receiving a first message sent by a second electronic device, wherein the first message carries a first identifier, the first identifier is configured to identify a first transaction to which the first message belongs, an active transaction indicates a transaction initiated by the first electronic device, and a passive transaction indicates a transaction initiated by the second electronic device; in a case where a set bit indicates that the first transaction is the active transaction, feeding a processing result about the first transaction in the first message back to an application layer; and in a case where the set bit indicates that the first transaction is the passive transaction, requesting the application layer to output the processing result about the first transaction based on the first message.
Type: Grant
Filed: December 29, 2022
Date of Patent: August 22, 2023
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventors: Tao Feng, Chunliang Zeng, Zhigang Yu, Taiyue Wu, Zhaoxuan Zhai
-
Patent number: 11709776
Abstract: N-way associative cache pools can be implemented in an N-way associative cache. Different cache pools can be indicated by pool values. Different processes running on a computer can use different cache pools. An N-way associative cache circuit can be configured to have one or more stripe mode cache pools that are N-way associative. A cache control circuit can receive a physical address for a memory location and can interpret the physical address as fields including a tag field that contains a tag value and a set field that contains a set value. The physical address can also be used to determine a pool value that identifies one of the stripe mode cache pools. A set of N cache entries in the one of the stripe mode cache pools can be concurrently searched for the tag value. The set of N cache entries is determined using the set value.
Type: Grant
Filed: March 29, 2021
Date of Patent: July 25, 2023
Assignee: Pensando Systems Inc.
Inventor: Changqi Yang
-
Patent number: 11620224
Abstract: Techniques for controlling prefetching of instructions into an instruction cache are provided. The techniques include tracking either or both of branch target buffer misses and instruction cache misses, modifying a throttle toggle based on the tracking, and adjusting prefetch activity based on the throttle toggle.
Type: Grant
Filed: December 10, 2019
Date of Patent: April 4, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Aparna Thyagarajan, Ashok Tirupathy Venkatachar, Marius Evers, Angelo Wong, William E. Jones
-
Patent number: 11531770
Abstract: Embodiments are directed to trusted local memory management in a virtualized GPU. An embodiment of an apparatus includes one or more processors including a trusted execution environment (TEE); a GPU including a trusted agent; and a memory, the memory including GPU local memory, the trusted agent to ensure proper allocation/deallocation of the local memory and verify translations between graphics physical addresses (PAs) and PAs for the apparatus, wherein the local memory is partitioned into protection regions including a protected region and an unprotected region, and wherein the protected region is to store a memory permission table maintained by the trusted agent, the memory permission table to include any virtual function assigned to a trusted domain, a per process graphics translation table to translate between graphics virtual address (VA) to graphics guest PA (GPA), and a local memory translation table to translate between graphics GPAs and PAs for the local memory.
Type: Grant
Filed: December 23, 2019
Date of Patent: December 20, 2022
Assignee: Intel Corporation
Inventors: Pradeep M. Pappachan, Luis S. Kida, Reshma Lal
-
Patent number: 11507513
Abstract: Methods, apparatus, systems and articles of manufacture to facilitate an atomic operation and/or a histogram operation in a cache pipeline are disclosed. An example system includes a cache storage coupled to an arithmetic component; and a cache controller coupled to the cache storage, wherein the cache controller is operable to: receive a memory operation that specifies a set of data; retrieve the set of data from the cache storage; utilize the arithmetic component to determine a set of counts of respective values in the set of data; generate a vector representing the set of counts; and provide the vector.
Type: Grant
Filed: May 22, 2020
Date of Patent: November 22, 2022
Assignee: Texas Instruments Incorporated
Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
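The histogram operation described above reduces to: read the data, count occurrences of each value, and return the counts as a vector. A software sketch of that in-pipeline computation (function name and bin convention are illustrative):

```python
def histogram_vector(data, num_bins):
    """Return a vector of counts mirroring the cache-pipeline histogram
    operation: counts[v] is how many times value v occurs in `data`.
    Values are assumed to lie in range(num_bins)."""
    counts = [0] * num_bins
    for value in data:
        counts[value] += 1
    return counts
```

In the patented system this loop runs in the arithmetic component next to the cache storage, so the full data set never has to cross back to the CPU just to be counted.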
-
Patent number: 11507372
Abstract: An apparatus and method are provided for processing instructions fetched from memory. Decode circuitry is used to decode the fetched instructions in order to produce decoded instructions, and downstream circuitry then processes the decoded instructions in order to perform the operations specified by those decoded instructions. Dispatch circuitry is arranged to dispatch to the downstream circuitry up to N decoded instructions per dispatch cycle, and is arranged to determine, based on a given candidate sequence of decoded instructions being considered for dispatch in a given dispatch cycle, whether at least one resource conflict within the downstream circuitry would occur in the event that the given candidate sequence of decoded instructions is dispatched in the given dispatch cycle.
Type: Grant
Filed: October 7, 2020
Date of Patent: November 22, 2022
Assignee: Arm Limited
Inventors: Michael Brian Schinzler, Yasuo Ishii, Muhammad Umar Farooq, Jason Lee Setter
-
Patent number: 11481332
Abstract: A microprocessor includes a physically-indexed-and-tagged second-level set-associative cache. Each cache entry is uniquely identified by a set index and a way number. Each entry of a write-combine buffer (WCB) holds write data to be written to a write physical memory address, a portion of which is a write physical line address. Each WCB entry also holds a write physical address proxy (PAP) for the write physical line address. The write PAP specifies the set index and the way number of the cache entry into which a cache line specified by the write physical line address is allocated. In response to receiving a store instruction that is being committed and that specifies a store PAP, the WCB compares the store PAP with the write PAP of each WCB entry and requires a match as a necessary condition for merging store data of the store instruction into a WCB entry.
Type: Grant
Filed: July 8, 2021
Date of Patent: October 25, 2022
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Srivatsan Srinivasan
-
Patent number: 11436144
Abstract: Described apparatuses and methods order memory address portions advantageously for cache-memory addressing. An address bus can have a smaller width than a memory address. The multiple bits of the memory address can be separated into most-significant bits (MSB) and least-significant bits (LSB) portions. The LSB portion is provided to a cache first. The cache can process the LSB portion before the MSB portion is received. The cache can use index bits of the LSB portion to index into an array of memory cells and identify multiple corresponding tags. The cache can also check the corresponding tags against lower tag bits of the LSB portion. A partial match may be labeled as a predicted hit, and a partial miss may be labeled as an actual miss, which can initiate a data fetch. With the remaining tag bits from the MSB portion, the cache can confirm or refute the predicted hit.
Type: Grant
Filed: April 10, 2020
Date of Patent: September 6, 2022
Assignee: Micron Technology, Inc.
Inventors: Joseph Thomas Pawlowski, Elliott Clifford Cooper-Balis, David Andrew Roberts
-
Patent number: 11416151
Abstract: An efficient mapping information management technology for non-volatile memory is disclosed. When a host requests to access data of a first logical address, a microprocessor of a controller of the non-volatile memory loads a first sub-mapping table from the non-volatile memory to a volatile memory. The microprocessor loads hierarchical pointer tables related to the first logical address into the volatile memory. Among the hierarchical pointer tables, each higher-level pointer table lists non-volatile memory physical addresses of lower-level pointer tables. A non-volatile memory physical address of the first sub-mapping table is obtained from a first pointer table according to a first index, for the microprocessor to load the first sub-mapping table from the non-volatile memory into the volatile memory for mapping information of the first logical address, and the first pointer table is in the lowest level among the hierarchical pointer tables loaded in the volatile memory.
Type: Grant
Filed: December 4, 2020
Date of Patent: August 16, 2022
Assignee: SILICON MOTION, INC.
Inventor: Hsueh-Chun Fu
-
Patent number: 11372768
Abstract: The present disclosure provides methods, systems, and non-transitory computer readable media for fetching data for an accelerator.
Type: Grant
Filed: June 12, 2020
Date of Patent: June 28, 2022
Assignee: Alibaba Group Holding Limited
Inventors: Yongbin Gu, Pengcheng Li, Tao Zhang
-
Patent number: 11321235
Abstract: A cache memory device includes a cache circuit and a way prediction circuit. The cache circuit generates a cache hit signal indicating whether target data corresponding to an access address are stored in cache lines and performs a current cache access operation primarily with respect to candidate ways based on a candidate way signal indicating the candidate ways in a way prediction mode. The way prediction circuit stores accumulation information by accumulating a cache hit result indicating whether the target data are stored in one of the ways and a way prediction hit result indicating whether the target data are stored in one of the candidate ways based on the cache hit signal provided during previous cache access operations. The way prediction circuit generates the candidate way signal by determining the candidate ways for the current cache access operation based on the accumulation information in the way prediction mode.
Type: Grant
Filed: August 24, 2020
Date of Patent: May 3, 2022
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Hyunwook Joo
-
Patent number: 11316945
Abstract: A method for managing a memory of a vehicle multimedia system includes analyzing a use rate per streaming service, allocating a partition of a memory per streaming service according to the analyzed use rate, compressing streaming data when the streaming data is downloaded, and caching the compressed streaming data in a partition allocated for a specific streaming service corresponding to the streaming data.
Type: Grant
Filed: August 7, 2020
Date of Patent: April 26, 2022
Assignees: Hyundai Motor Company, Kia Motors Corporation
Inventors: Hye Won You, Dae Bong An, Hyung Jin Kim
-
Patent number: 11288209
Abstract: An apparatus comprises a cache comprising cache entries, each cache entry storing cached information and an entry usefulness value indicative of usefulness of the cached information. Base usefulness storage circuitry stores a base usefulness value. Cache replacement control circuitry controls, based on a usefulness level determined for a given cache entry, whether the given cache entry is selected for replacement. The cache replacement control circuitry determines the usefulness level for the given cache entry based on a difference between the entry usefulness value specified by the given cache entry and the base usefulness value stored in the base usefulness storage circuitry.
Type: Grant
Filed: September 20, 2019
Date of Patent: March 29, 2022
Assignee: Arm Limited
Inventors: Yasuo Ishii, Thibaut Elie Lanois, Houdhaifa Bouzguarrou
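The replacement metric above is a simple difference: usefulness level = entry usefulness value − base usefulness value. A sketch, with the added assumption (not stated in the abstract) that the entry with the lowest level is the one selected for replacement:

```python
def select_replacement(entry_usefulness, base_usefulness):
    """Compute each entry's usefulness level as the difference between its
    stored usefulness value and the shared base value, and return the index
    of the entry with the lowest level as the assumed replacement victim."""
    levels = [u - base_usefulness for u in entry_usefulness]
    victim = min(range(len(levels)), key=levels.__getitem__)
    return victim, levels
```

Because the base value is subtracted uniformly, raising it over time effectively ages all entries at once without rewriting every per-entry counter.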
-
Patent number: 11289137
Abstract: Methods, systems, and devices for a multi-port storage-class memory interface are described. A memory controller of the storage-class memory subsystem may receive, from a host device, a request associated with host addresses. The memory controller may generate interleaved addresses with a low latency based on the host addresses. The interleaved addresses parallelize processing of the request utilizing a set of memory media ports. Each memory media port of the set may operate independently of the others to obtain a desired aggregated data transfer rate and a memory capacity. The interleaved addresses may leave no gaps in memory space. The memory controller may control a wear-leveling operation to distribute access operations across one or more zones of the memory media port.
Type: Grant
Filed: October 26, 2018
Date of Patent: March 29, 2022
Assignee: Micron Technology, Inc.
Inventor: Joseph Thomas Pawlowski
-
Patent number: 11269516
Abstract: A method, computer program product, and computing system for receiving content on a high-availability storage system. The content is compared to one or more entries in a static database associated with a cache memory system of the high-availability storage system. If the content does not match the one or more entries in the static database, the content is compared to one or more entries in a dynamic database associated with the cache memory system. If the content does not match the one or more entries in the dynamic database: the content is written to the cache memory system and a representation of the content is written to a temporal database associated with the cache memory system and maintained for a defined period of time.
Type: Grant
Filed: October 31, 2017
Date of Patent: March 8, 2022
Assignee: EMC IP HOLDING COMPANY, LLC
Inventors: Philippe Armangau, Pierluca Chiodelli, George Papadopoulos
-
Patent number: 11269771
Abstract: A storage device includes a nonvolatile memory including a main meta data area and a journal area, and a controller. The controller updates an address mapping table including a plurality of page mapping entries divided into a plurality of segments by executing a flash translation layer (FTL) stored in a working memory, stores updated page mapping entries of the plurality of page mapping entries in the journal area as journal data, and stores the plurality of segments, each having a size smaller than a physical page of the nonvolatile memory, in the main meta data area.
Type: Grant
Filed: March 13, 2020
Date of Patent: March 8, 2022
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Junghoon Kim, Seonghun Kim
-
Patent number: 11264114
Abstract: A test pattern generator includes a random command address generator suitable for generating N combinations, each combination of a command and an address, where N is an integer greater than or equal to 2; an address converter suitable for converting the N combinations into an N-dimensional address; a history storage circuit which is accessed based on the N-dimensional address; and a controller suitable for classifying the N combinations as issue targets, when an area in the history storage circuit, which is accessed based on the N-dimensional address, indicates a value of no hit.
Type: Grant
Filed: December 23, 2019
Date of Patent: March 1, 2022
Assignee: SK hynix Inc.
Inventors: Dong-Ho Kang, Jae-Han Park
-
Patent number: 11249908
Abstract: An apparatus and method are disclosed for managing cache coherency. The apparatus has a plurality of agents with cache storage for caching data, and coherency control circuitry for acting as a point of coherency for the data by implementing a cache coherency protocol. In accordance with the cache coherency protocol the coherency control circuitry responds to certain coherency events by issuing coherency messages to one or more of the agents. A given agent is arranged, prior to entering a given state in which its cache storage is unused, to perform a flush operation in respect of its cache storage that may cause one or more evict messages to be issued to the coherency control circuitry. Further, once all evict messages resulting from performance of the flush operation have been issued, the given agent issues an evict barrier message to the coherency control circuitry.
Type: Grant
Filed: September 17, 2020
Date of Patent: February 15, 2022
Assignee: Arm Limited
Inventors: Ole Henrik Jahren, Ian Rudolf Bratt, Sigurd Røed Scheistrøen
-
Patent number: 11243889
Abstract: The present disclosure includes apparatuses and methods for a cache architecture. An example apparatus that includes a cache architecture according to the present disclosure can include an array of memory cells configured to store multiple cache entries per page of memory cells; and sense circuitry configured to determine whether cache data corresponding to a request from a cache controller is located at a location in the array corresponding to the request, and return a response to the cache controller indicating whether cache data is located at the location in the array corresponding to the request.
Type: Grant
Filed: May 24, 2019
Date of Patent: February 8, 2022
Assignee: Micron Technology, Inc.
Inventor: Robert M. Walker
-
Patent number: 11232033
Abstract: Systems, apparatuses, and methods for dynamically partitioning a memory cache among a plurality of agents are described. A system includes a plurality of agents, a communication fabric, a memory cache, and a lower-level memory. The partitioning of the memory cache for the active data streams of the agents is dynamically adjusted to reduce memory bandwidth and increase power savings across a wide range of applications. A memory cache driver monitors activations and characteristics of the data streams of the system. When a change is detected, the memory cache driver dynamically updates the memory cache allocation policy and quotas for the agents. The quotas specify how much of the memory cache each agent is allowed to use. The updates are communicated to the memory cache controller to enforce the new policy and enforce the new quotas for the various agents accessing the memory.
Type: Grant
Filed: August 2, 2019
Date of Patent: January 25, 2022
Assignee: Apple Inc.
Inventors: Wolfgang H. Klingauf, Connie W. Cheung, Rohit K. Gupta, Rohit Natarajan, Vanessa Cristina Heppolette, Varaprasad V. Lingutla, Muditha Kanchana
-
Patent number: 11216382
Abstract: A cache system may maintain size and/or request rate metrics for objects in a lower level cache and for objects in a higher level cache. When an L1 cache does not have an object, it requests the object from an L2 cache and sends to the L2 cache aggregate size and request rate metrics for objects in the L1 cache. The L2 cache may obtain a size metric and a request rate metric for the requested object and then determine, based on the aggregate size and request rate metrics for the objects in the L1 cache and the size metric and the request rate metric for the requested object in the L2 cache, an indication of whether or not the L1 cache should cache the requested object. The L2 cache provides the object and the indication to the L1 cache.
Type: Grant
Filed: March 16, 2020
Date of Patent: January 4, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Karthik Uthaman, Ronil Sudhir Mokashi, Prashant Verma
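The L2-side admission decision above takes four inputs: the L1 cache's aggregate size and request-rate metrics, plus the requested object's size and request rate. The abstract fixes the inputs but not the formula; one plausible policy, sketched here with an illustrative function name, is to recommend caching when the object's requests-per-byte beat the L1 aggregate.

```python
def should_l1_cache(l1_total_size, l1_total_rate, obj_size, obj_rate):
    """L2-side admission hint for the L1 cache: recommend caching the object
    when its request rate per byte exceeds the L1 cache's aggregate rate per
    byte. This comparison rule is an assumption; the patent abstract only
    says the decision is based on these metrics."""
    if obj_size == 0:
        return True
    l1_density = l1_total_rate / max(l1_total_size, 1)
    return (obj_rate / obj_size) > l1_density
```

The L2 cache would return this boolean alongside the object itself, so the L1 cache can skip caching large, rarely requested objects that would displace denser entries.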
-
Patent number: 11210020Abstract: A memory access technology applied to a computer system includes a first-level memory, a second-level memory, and a memory controller. The first-level memory is configured to cache data in the second-level memory. A plurality of access requests for accessing different memory blocks has a mapping relationship with a first cache line in the first-level memory, and the memory controller compares tags of the plurality of access requests with a tag of the first cache line in a centralized manner to determine whether the plurality of access requests hit the first-level memory.Type: GrantFiled: May 16, 2019Date of Patent: December 28, 2021Assignee: HUAWEI TECHNOLOGIES CO., LTD.Inventors: Shihai Xiao, Qiaosha Zou, Wei Yang
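The centralized tag comparison can be pictured as a single pass over all queued requests that map to the same cache line; the 12-bit index/offset split below is an illustrative assumption.

```python
def batch_hit_check(line_tag, request_addrs):
    """Compare the tags of many pending requests against one cached line's
    tag in a single centralized pass, returning (hits, misses).
    Assumes the low 12 bits of an address are index/offset bits."""
    hits, misses = [], []
    for addr in request_addrs:
        tag = addr >> 12
        (hits if tag == line_tag else misses).append(addr)
    return hits, misses
```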
-
Patent number: 11204869Abstract: One embodiment provides a system for facilitating data placement. The system receives a sector of data to be written to a first non-volatile memory and a second non-volatile memory, wherein the first non-volatile memory resides on a first storage device which supports sequential writes, and wherein the second non-volatile memory resides on a second storage device. The system writes the sector and its corresponding logical block address to the first non-volatile memory in a sequential manner. The system writes, at approximately a same time, the sector and its corresponding logical block address to the second non-volatile memory. In response to completing the write to the first non-volatile memory or the second non-volatile memory, the system generates an acknowledgment that the sector is successfully committed for a host from which the sector is received.Type: GrantFiled: December 5, 2019Date of Patent: December 21, 2021Assignee: Alibaba Group Holding LimitedInventor: Shu Li
-
Patent number: 11188468Abstract: A processor includes a prediction table, a prediction logic circuit, and a prediction verification circuit. The prediction table has a plurality of sets, each of the sets has a hot way number, at least one warm way number, and at least one confidence value corresponding to the at least one warm way number. The prediction logic circuit generates a prediction result by predicting if the at least one warm way number is an opened way. The prediction verification circuit generates a correct/incorrect information according to the prediction result, and generates an update information according to the correct/incorrect information. The prediction verification circuit updates the hot way number, the at least one warm way number and the at least one confidence value of the at least one warm way number according to the update information.Type: GrantFiled: June 15, 2020Date of Patent: November 30, 2021Assignee: ANDES TECHNOLOGY CORPORATIONInventors: Kun-Ho Liu, Chieh-Jen Cheng, Chuan-Hua Chang, I-Cheng Kevin Chen
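A minimal sketch of the hot/warm way prediction described above: the hot way is always opened, the warm way only when its confidence clears a threshold, and both the warm way and its confidence are retrained on mispredictions. Counter widths, thresholds, and the promotion rule are illustrative assumptions.

```python
class WayPredictor:
    """Per-set way predictor with one hot way and one warm way guarded by
    a saturating confidence counter. Widths/thresholds are illustrative."""

    def __init__(self, hot, warm, conf=0, threshold=2, conf_max=3):
        self.hot, self.warm, self.conf = hot, warm, conf
        self.threshold, self.conf_max = threshold, conf_max

    def predict(self):
        # Open the hot way, plus the warm way if confidence is high enough.
        ways = {self.hot}
        if self.conf >= self.threshold:
            ways.add(self.warm)
        return ways

    def update(self, actual_way):
        # Verification step: correct/incorrect outcome drives the update.
        if actual_way == self.warm:
            self.conf = min(self.conf + 1, self.conf_max)
            if self.conf == self.conf_max:          # promote warm to hot
                self.hot, self.warm = self.warm, self.hot
                self.conf = 0
        elif actual_way != self.hot:
            self.warm, self.conf = actual_way, 0    # retrain the warm slot
```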
-
Patent number: 11138119Abstract: There is provided an apparatus that includes storage circuitry. The storage circuitry is made up from a plurality of sets, each of the sets having at least one storage location. Receiving circuitry receives an access request that includes an input address. Lookup circuitry obtains a plurality of candidate sets that correspond with an index part of the input address. The lookup circuitry determines a selected storage location from the candidate sets using an access policy. The access policy causes the lookup circuitry to iterate through the candidate sets to attempt to locate an appropriate storage location. The appropriate storage location is accessed in response to the appropriate storage location being found.Type: GrantFiled: January 15, 2019Date of Patent: October 5, 2021Assignee: Arm LimitedInventors: Damien Guillaume Pierre Payet, Natalya Bondarenko, Florent Begon, Lucas Garcia
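The iterative access policy can be sketched as a walk over the candidate sets until a matching or free location turns up; the tag/`None` representation of storage locations is an illustrative assumption.

```python
def find_storage_location(candidate_sets, tag):
    """Iterate through the candidate sets (all selected by the same index
    part of the input address) until an appropriate storage location is
    found: either one holding the tag already or a free slot (None)."""
    for set_id, locations in enumerate(candidate_sets):
        for slot, stored_tag in enumerate(locations):
            if stored_tag == tag or stored_tag is None:
                return set_id, slot
    return None  # no appropriate location: the caller must evict
```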
-
Patent number: 11126714Abstract: A data processing apparatus comprises branch prediction circuitry adapted to store at least one branch prediction state entry in relation to a stream of instructions, input circuitry to receive at least one input to generate a new branch prediction state entry, wherein the at least one input comprises a plurality of bits; and coding circuitry adapted to perform an encoding operation to encode at least some of the plurality of bits based on a value associated with a current execution environment in which the stream of instructions is being executed. This guards against potential attacks which exploit the ability for branch prediction entries trained by one execution environment to be used by another execution environment as a basis for branch predictions.Type: GrantFiled: October 2, 2018Date of Patent: September 21, 2021Assignee: Arm LimitedInventors: Alastair David Reid, Dominic Phillip Mulligan, Milosch Meriac, Matthias Lothar Boettcher, Nathan Yong Seng Chong, Ian Michael Caulfield, Peter Richard Greenhalgh, Frederic Claude Marie Piry, Albin Pierrick Tonnerre, Thomas Christopher Grocutt, Yasuo Ishii
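One way to picture the encoding operation is XOR-ing the entry bits with a key derived from the current execution environment, so an entry trained in one environment decodes to garbage in another; XOR is an illustrative stand-in for the patent's coding circuitry, not its disclosed design.

```python
def encode_entry(entry_bits, context_key):
    """Encode a new branch-prediction state entry with a value associated
    with the current execution environment (illustrative XOR coding)."""
    return entry_bits ^ context_key

def decode_entry(stored_bits, context_key):
    # Decoding with a different environment's key yields unrelated bits,
    # blunting cross-environment branch-predictor training attacks.
    return stored_bits ^ context_key
```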
-
Patent number: 11119780Abstract: A device including a processor configured to access data to execute multiple instructions and a first cache coupled to the processor, are provided. The first cache is configured to hold a first data fetched from a memory by a first instruction that has been retired. The device also includes a side cache coupled to the first cache and to the processor, the side cache configured to hold a second data fetched from the memory by a second instruction, wherein the second instruction has not been retired from the processor. And the device includes a cache management unit configured to move the second data from the side cache to the first cache when the second instruction is retired, the cache management unit further configured to discard the second data when it is determined that the second instruction is abandoned.Type: GrantFiled: April 30, 2018Date of Patent: September 14, 2021Assignee: Hewlett Packard Enterprise Development LPInventor: Gregg B. Lesartre
-
Patent number: 11106599Abstract: A processor includes an associative memory including ways organized in an asymmetric tree structure, a replacement control unit including a decision node indicator whose value determines the side of the tree structure to which a next memory element replacement operation is directed, and circuitry to cause, responsive to a miss in the associative memory while the decision node indicator points to the minority side of the tree structure, the decision node indicator to point to a majority side of the tree structure, and to determine, responsive to a miss while the decision node indicator points to the majority side of the tree structure, whether or not to cause the decision node indicator to point to the minority side of the tree structure, the determination being dependent on a current replacement weight value. The replacement weight value may be counter-based or a probabilistic weight value.Type: GrantFiled: March 30, 2019Date of Patent: August 31, 2021Assignee: Intel CorporationInventors: Chunhui Zhang, Robert S. Chappell, Yury N. Ilin
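The asymmetric-tree replacement policy can be sketched as follows: after a victim is taken from the minority side, the decision indicator always flips to the majority side, and flipping back is gated by a replacement weight so the majority side (which holds more ways) is chosen proportionally more often. The probabilistic-weight variant is shown; the within-side round-robin selection is an illustrative assumption.

```python
import random

class AsymmetricTreeReplacer:
    """Victim selection for ways split unevenly across a decision node.
    Uses the probabilistic-weight option mentioned in the abstract."""

    def __init__(self, minority_ways, majority_ways, weight=0.25,
                 rng=random.random):
        self.minority, self.majority = minority_ways, majority_ways
        self.point_minority = True
        self.weight, self.rng = weight, rng
        self._i = {"min": 0, "maj": 0}  # round-robin cursors (illustrative)

    def victim(self):
        if self.point_minority:
            side, ways = "min", self.minority
            self.point_minority = False        # always move off minority
        else:
            side, ways = "maj", self.majority
            if self.rng() < self.weight:       # weighted chance to flip back
                self.point_minority = True
        way = ways[self._i[side] % len(ways)]
        self._i[side] += 1
        return way
```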
-
Patent number: 11099998Abstract: A computer-implemented method includes caching data from a persistent storage device into a cache. The method also includes caching a physical address and a logical address of the data in the persistent storage device into the cache. The method further includes in response to receiving an access request for the data, accessing the data cached in the cache using at least one of the physical address and the logical address. The embodiments of the present disclosure also provide an electronic apparatus and a computer program product.Type: GrantFiled: February 10, 2020Date of Patent: August 24, 2021Assignee: EMC IP Holding Company LLCInventors: Wei Cui, Denny Dengyu Wang, Jian Gao, Lester Zhang, Chen Gong
-
Patent number: 11074185Abstract: Provided are a computer program product, system, and method for adjusting a number of insertion points used to determine locations in a cache list at which to indicate tracks. Tracks added to the cache are indicated in a cache list. The cache list has a least recently used (LRU) end and a most recently used (MRU) end. In response to an insertion point interval number of tracks in the cache being indicated in the cache list, an insertion point is set to indicate one of the tracks of the insertion point interval number of tracks indicated in the cache list. Insertion points to tracks in the cache list are used to determine locations in the cache list at which to indicate tracks in the cache in the cache list.Type: GrantFiled: August 7, 2019Date of Patent: July 27, 2021Assignee: International Business Machines CorporationInventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
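The insertion-point idea can be sketched with a plain list standing in for the cache list: one insertion point is kept per full interval of tracks, and a track can be indicated at one of those mid-list points instead of always at the MRU end. The on-demand recomputation and point numbering below are simplifying assumptions.

```python
class InsertionPointList:
    """LRU list (index 0 = LRU end, last index = MRU end) with an
    insertion point every `interval` tracks. Simplified sketch: the
    points are recomputed on demand rather than maintained incrementally."""

    def __init__(self, interval):
        self.interval = interval
        self.tracks = []  # LRU ... MRU

    def insertion_points(self):
        # One point per full interval of tracks, counted from the MRU end;
        # point number 1 is nearest the MRU end.
        n = len(self.tracks) // self.interval
        return [len(self.tracks) - k * self.interval for k in range(1, n + 1)]

    def add_at_point(self, track, point_number):
        pts = self.insertion_points()
        pos = pts[point_number - 1] if pts else len(self.tracks)
        self.tracks.insert(pos, track)

    def add_mru(self, track):
        self.tracks.append(track)
```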
-
Patent number: 11073995Abstract: A method and device generates a slab identifier and a hash function identifier in response to a memory allocation request with a request identifier and allocation size from a memory allocation requestor. The slab identifier indicates a memory region associated with a base data size and the hash function identifier indicates a hash function. The method and device provides a bit string including the slab identifier and the hash function identifier to the memory allocation requestor.Type: GrantFiled: April 14, 2020Date of Patent: July 27, 2021Assignee: Advanced Micro Devices, Inc.Inventor: Alexander Dodd Breslow
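The returned bit string can be sketched as a packed pair of fields: a slab identifier naming the size-class region and a hash-function identifier naming which hash maps the request identifier to a slot. The field widths, size classes, and multiplicative hash below are illustrative assumptions, not details from the patent.

```python
def slab_allocate(size, request_id, num_hashes=4,
                  base_sizes=(16, 64, 256, 1024)):
    """Respond to a memory allocation request with a bit string packing a
    slab id (2 upper bits here) and a hash function id (2 lower bits).
    Size classes and hash choice are illustrative."""
    slab_id = next(i for i, b in enumerate(base_sizes) if size <= b)
    # Deterministic multiplicative hash picks one of the slab's hash fns.
    hash_id = (request_id * 2654435761) % num_hashes
    return (slab_id << 2) | hash_id

def unpack(bits):
    """Split the bit string back into (slab_id, hash_id)."""
    return bits >> 2, bits & 0b11
```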
-
Method and apparatus for performing atomic operations on local cache slots of a shared global memory
Patent number: 11074113Abstract: A storage system includes at least two independent storage engines interconnected by a fabric, each storage engine having two compute nodes. A shared global memory is implemented using cache slots of each of the compute nodes. Memory access operations to the slots of shared global memory are managed by a fabric adapter to guarantee that the operations are atomic. To enable local cache operations to be managed independent of the fabric adapter, a cache metadata data structure includes a global flag bit for each cache slot, that is used to designate the cache slot as globally available or temporarily reserved for local IO processing. The cache metadata data structure also includes a mutex (Peterson lock) for each cache slot to enforce a mutual exclusion concurrency control policy on the cache slot between the two compute nodes of the storage engine when the cache slot is used for local IO processing.Type: GrantFiled: May 21, 2020Date of Patent: July 27, 2021Assignee: EMC IP Holding Company LLCInventors: Steven Ivester, Kaustubh Sahasrabudhe
-
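The per-slot mutex named in the abstract above is Peterson's classic two-party lock, which fits exactly because each storage engine has exactly two compute nodes. A minimal sketch, with plain Python state standing in for the cache metadata fields:

```python
class PetersonLock:
    """Peterson's algorithm: mutual exclusion between exactly two parties
    (here, the two compute nodes, ids 0 and 1). In the patented system this
    state lives in the cache metadata data structure per cache slot."""

    def __init__(self):
        self.flag = [False, False]  # flag[i]: node i wants the slot
        self.turn = 0               # tie-breaker: whose turn to wait

    def acquire(self, node):
        other = 1 - node
        self.flag[node] = True
        self.turn = other
        # Spin while the other node both wants the slot and has priority.
        while self.flag[other] and self.turn == other:
            pass

    def release(self, node):
        self.flag[node] = False
```

The separate global flag bit (globally available vs. reserved for local IO) is not modeled here; the lock only arbitrates between the two local nodes once a slot is in local-IO mode.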
Patent number: 11068335Abstract: A memory system may include a first memory device including a first input/output buffer, a second memory device including a second input/output buffer, and a cache memory suitable for selectively and temporarily storing first and second data to be respectively programmed in the first and second memory devices. The first data is programmed to the first memory device in a first program section by being stored in the cache memory only in a first monopoly section of the first program section. The second data is programmed to the second memory device in a second program section by being stored in the cache memory only in a second monopoly section of a second program section. The first monopoly section and the second monopoly section are set not to overlap each other.Type: GrantFiled: June 18, 2015Date of Patent: July 20, 2021Assignee: SK hynix Inc.Inventor: Byoung-Sung You
-
Patent number: 11048636Abstract: A cache system, having: a first cache set; a second cache set; and a logic circuit coupled to a processor to control the caches based on at least respective first and second registers. When a connection to an address bus receives a memory address from the processor, the logic circuit is configured to: generate a set index from at least the address; and determine whether the generated set index matches with a content stored in the first register or with a content stored in the second register. And, the logic circuit is configured to implement a command via the first cache set in response to the generated set index matching with the content stored in the first register and via the second cache set in response to the generated set index matching with the content stored in the second register.Type: GrantFiled: July 31, 2019Date of Patent: June 29, 2021Assignee: Micron Technology, Inc.Inventor: Steven Jeffrey Wallach
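The register-matching dispatch can be sketched as a pure function from address to cache set; the 64-byte block size and modulo index function are illustrative assumptions.

```python
def select_cache_set(address, reg1, reg2, num_sets=64):
    """Generate a set index from the address and route the command to the
    first or second cache set by matching the index against the contents
    of two registers. Index function is an illustrative assumption."""
    set_index = (address >> 6) % num_sets  # assume 64-byte blocks
    if set_index == reg1:
        return 1
    if set_index == reg2:
        return 2
    return None  # neither register matches: command not handled here
```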
-
Patent number: 11024382Abstract: Methods, systems, and devices for fully associative cache management are described. A memory subsystem may receive an access command for storing a first data word in a storage component associated with an address space. The memory subsystem may include a fully associative cache for storing the data words associated with the storage component. The memory subsystem may determine an address within the cache to store the first data word. For example, the memory subsystem may determine an address of the cache indicated by an address pointer (e.g., based on the order of the addresses) and determine a quantity of accesses associated with the data word stored in that cache address. Based on the indicated cache address and the quantity of accesses, the memory subsystem may store the first data word in the indicated cache address or a second cache address sequential to the indicated cache address.Type: GrantFiled: August 29, 2019Date of Patent: June 1, 2021Assignee: Micron Technology, Inc.Inventor: Joseph T. Pawlowski
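The pointer-plus-access-count placement rule can be sketched as a round-robin pointer that skips over a slot whose occupant is still hot; the slot count and hotness threshold are illustrative assumptions.

```python
class PointerCache:
    """Fully associative cache where a new data word goes into the slot
    under a sequential address pointer, unless that slot's occupant has
    been accessed at least `hot_threshold` times, in which case the next
    sequential slot is used instead. Parameters are illustrative."""

    def __init__(self, size=4, hot_threshold=2):
        self.slots = [None] * size  # each slot: [word, access_count]
        self.ptr = 0
        self.hot = hot_threshold

    def store(self, word):
        i = self.ptr
        occupant = self.slots[i]
        if occupant is not None and occupant[1] >= self.hot:
            i = (i + 1) % len(self.slots)  # skip the hot entry
        self.slots[i] = [word, 0]
        self.ptr = (i + 1) % len(self.slots)
        return i

    def access(self, idx):
        self.slots[idx][1] += 1
        return self.slots[idx][0]
```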
-
Patent number: 11010297Abstract: A memory unit includes a data storage to store data, an operation controller to receive operation requests issued by an upstream source, a downstream capabilities storage to store an indication of operations performable by at least one downstream memory unit, and processing circuitry to perform operations on data stored in the data storage under control of the operation controller. When an operation request to perform an operation on target data is received from the upstream request source, the operation controller is arranged to determine when to control the processing circuitry to perform the operation, and when to forward the operation to a downstream memory unit in dependence on whether the target data is stored in the data storage unit and the indication of operations performable by at least one downstream memory unit.Type: GrantFiled: June 26, 2017Date of Patent: May 18, 2021Assignee: ARM LimitedInventor: Andreas Hansson
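The dispatch decision this abstract describes reduces to a small rule: do the operation locally when the target data is present, otherwise forward it only if a downstream unit advertises the capability. A minimal sketch, with the return labels and data structures as illustrative assumptions:

```python
def handle_operation(op, target_addr, local_store, downstream_caps):
    """Decide where an operation request runs: in this memory unit's own
    processing circuitry, in a downstream unit that can perform it, or
    locally after fetching the data. Labels are illustrative."""
    if target_addr in local_store:
        return "performed_locally"
    if op in downstream_caps:
        return "forwarded"
    return "fetch_then_perform"
```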