Cache Status Data Bit Patents (Class 711/144)
-
Patent number: 12124551
Abstract: A memory module includes first and second data storage locations. The first data storage location stores an expansion license. The memory module operates with a base set of functions, and is configurable to operate with an expanded set of functions based on the expansion license. The second data storage location stores an expansion capability certificate that is signed by an information handling system and includes a subset of the expanded set of functions that are enabled by the expansion capability certificate. The memory module determines that the memory module is installed into the information handling system based on the expansion capability certificate, and enables the subset of the expanded set of functions in response to determining that the memory module is installed into the information handling system.
Type: Grant
Filed: July 22, 2022
Date of Patent: October 22, 2024
Assignee: Dell Products L.P.
Inventors: Milton Taveira, Isaac Q. Wang, Jordan Chin
-
Patent number: 12117968
Abstract: A versioned file system comprising network accessible storage is provided. Aspects of the system include globally locking files or groups of files so as to better control the stored files in the file system and to avoid problems associated with simultaneous remote access or conflicting multiple access requests for the same files. A method for operating, creating and using the global locks is also disclosed. A multiprotocol global lock can be provided for filing nodes that have multiple network protocols for generating local lock requests.
Type: Grant
Filed: January 31, 2023
Date of Patent: October 15, 2024
Assignee: Nasuni Corporation
Inventors: Robert M. Mason, Jr., David M. Shaw, Kevin W. Baughman, Christopher S. Lacasse, Matthew M. McDonald, Russell A. Neufeld, Akshay K. Saxena
-
Patent number: 12094522
Abstract: An apparatus that includes: a plurality of first data amplifiers arranged in line in a first direction; a plurality of first read data buses each coupled to a corresponding one of the plurality of first data amplifiers, the plurality of first read data buses having different lengths from one another; and a plurality of first write data buses each coupled to the corresponding one of the plurality of first data amplifiers, the plurality of first write data buses having different lengths from one another. The plurality of first read data buses and the plurality of first write data buses are alternately arranged in parallel in a second direction perpendicular to the first direction. The plurality of first read data buses are arranged in order from longest to shortest and the plurality of first write data buses are arranged in order from shortest to longest.
Type: Grant
Filed: September 29, 2022
Date of Patent: September 17, 2024
Assignee: MICRON TECHNOLOGY, INC.
Inventors: Akeno Ito, Mamoru Nishizaki
-
Patent number: 12073118
Abstract: A method, computer program product, and computing system for processing, using a storage node, one or more updates to one or more metadata pages of a multi-node storage system. The one or more updates may be stored in one or more data containers in a cache memory system of the storage node, thus defining an active working set of data containers. Flushing ownership for each data container of the active working set may be assigned to one of the storage nodes based upon an assigned flushing ownership for each data container of a frozen working set and a number of updates within the frozen working set processed by each storage node, thus defining an assigned flushing storage node for each data container of the active working set. The one or more updates may be flushed, using the assigned flushing storage node, to a storage array.
Type: Grant
Filed: April 20, 2022
Date of Patent: August 27, 2024
Assignee: EMC IP Holding Company, LLC
Inventors: Vladimir Shveidel, Jibing Dong, Geng Han
-
Patent number: 12039168
Abstract: A storage system is configured to accept subsequent versions of write data on a given track to multiple respective slots of shared global memory. A track index table presents metadata at the track level, and can hold up to N slots of data. All slots of shared global memory holding data owed to the source volume and to snapshots of the source volume are bound to the track in the track index table. Each time a write occurs on a track, the track index table is used to determine when a write pending slot for the track is owed to a snapshot copy of the storage volume. When a write pending slot contains data that is owed to a snapshot copy of the source volume, a new slot is allocated to the write IO and bound to the track in the track index table.
Type: Grant
Filed: October 14, 2022
Date of Patent: July 16, 2024
Assignee: Dell Products, L.P.
Inventors: Sandeep Chandrashekhara, Mark Halstead, Michael Ferrari, Rong Yu, Michael Scharland
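The slot-allocation rule in this abstract can be sketched roughly as follows; the class and field names are invented for illustration and are not taken from the patent:

```python
# Hypothetical sketch of the track-index idea: each track binds multiple
# slots, and a write allocates a new slot whenever the current write-pending
# slot is still owed to a snapshot copy.

class Track:
    def __init__(self):
        self.slots = []                # slot ids bound to this track, newest last
        self.owed_to_snapshot = set()  # slot ids owed to a snapshot copy

    def write(self, data, next_slot_id):
        if not self.slots:
            self.slots.append(next_slot_id)
        elif self.slots[-1] in self.owed_to_snapshot:
            # current write-pending slot preserves snapshot data:
            # keep it and bind a fresh slot for the new write
            self.slots.append(next_slot_id)
        # otherwise the newest slot is reused in place
        return self.slots[-1]

    def snapshot(self):
        # a snapshot freezes the newest slot; future writes must not overwrite it
        if self.slots:
            self.owed_to_snapshot.add(self.slots[-1])
```

Here, taking a snapshot between two writes forces the second write into a newly allocated slot, leaving the snapshot's data intact.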
-
Patent number: 12008361
Abstract: A device tracks accesses to pages of code executed by processors and modifies a portion of the code without terminating the execution of the code. The device is connected to the processors via a coherence interconnect and a local memory of the device stores the code pages. As a result, any requests to access cache lines of the code pages made by the processors will be placed on the coherence interconnect, and the device is able to track any cache-line accesses of the code pages by monitoring the coherence interconnect. In response to a request to read a cache line having a particular address, a modified code portion is returned in place of the code portion stored in the code pages.
Type: Grant
Filed: November 19, 2021
Date of Patent: June 11, 2024
Assignee: VMware LLC
Inventors: Irina Calciu, Andreas Nowatzyk, Pratap Subrahmanyam
-
Patent number: 11960436
Abstract: A method of synchronizing system state data is provided. The method includes executing a first processor based on initial state data during an update cycle, wherein the initial state data represents a state of the system prior to initiation of the update cycle, detecting changes in state of the system by the first processor using sensors, the changes in state being added to a record of modified state data until a predefined progress position within the update cycle, designating the modified state data as next state data, based on reaching the predefined progress position within the update cycle, and transitioning from execution of the first processor based on the initial state data to execution of the first processor based on the next state data, based on completion of the update cycle.
Type: Grant
Filed: April 29, 2022
Date of Patent: April 16, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nadav Shlomo Ben-Amram, Netanel Hadad, Liran Biber
-
Patent number: 11954031
Abstract: A method performed by a controller comprising assigning a first status indicator to entries in a first address line in a volatile memory belonging to a first region of an LUT stored in a non-volatile memory, and a second status indicator to entries in the first address line in the volatile memory belonging to a second region of the LUT, setting either the first or second status indicator to a dirty status based on whether a cache updated entry at an address m in the volatile memory belongs to the first or second region of the LUT, and writing, based on the dirty status of the first and second status indicator at the address m, all entries in the volatile memory associated with the first region or the second region containing the updated entry to the non-volatile memory.
Type: Grant
Filed: August 15, 2022
Date of Patent: April 9, 2024
Assignee: KIOXIA CORPORATION
Inventors: David Symons, Ezequiel Alves
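The per-region dirty tracking described here can be sketched as follows; the line size, region split, and names are illustrative assumptions, not the patented implementation:

```python
# One cached address line holds entries from two LUT regions; a per-region
# dirty flag decides which region's entries are written back on a flush.

REGION_SPLIT = 4  # entries 0..3 belong to region 1, entries 4..7 to region 2

class CachedLine:
    def __init__(self, entries):
        self.entries = list(entries)       # 8 LUT entries cached in volatile memory
        self.dirty = {1: False, 2: False}  # one status indicator per region

    def update(self, index, value):
        self.entries[index] = value
        region = 1 if index < REGION_SPLIT else 2
        self.dirty[region] = True          # mark only the owning region dirty

    def flush(self, nonvolatile):
        # write back only the entries of regions whose indicator is dirty
        for region, (lo, hi) in {1: (0, REGION_SPLIT), 2: (REGION_SPLIT, 8)}.items():
            if self.dirty[region]:
                nonvolatile[lo:hi] = self.entries[lo:hi]
                self.dirty[region] = False
```

An update to entry 1 dirties only region 1, so a flush rewrites that region's four entries to non-volatile memory and leaves region 2 untouched.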
-
Patent number: 11947456
Abstract: Techniques for invalidating cache lines are provided. The techniques include issuing, to a first level of a memory hierarchy, a weak exclusive read request for a speculatively executing store instruction; determining whether to invalidate one or more cache lines associated with the store instruction in one or more memories; and issuing a weak invalidation request to additional levels of the memory hierarchy.
Type: Grant
Filed: September 30, 2021
Date of Patent: April 2, 2024
Assignee: Advanced Micro Devices, Inc.
Inventor: Paul J. Moyer
-
Patent number: 11914508
Abstract: A memory system includes nonvolatile physical memory, such as flash memory, that exhibits a wear mechanism asymmetrically associated with write operations. A relatively small cache of volatile memory reduces the number of writes, and wear-leveling memory access methods distribute writes evenly over the nonvolatile memory.
Type: Grant
Filed: October 7, 2020
Date of Patent: February 27, 2024
Assignee: Rambus Inc.
Inventors: Frederick A. Ware, Ely K. Tsern
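The write-reduction half of this idea can be sketched with a small write-back cache in front of the flash; the cache policy and sizes here are assumptions for illustration, not the patent's design:

```python
# A small volatile write-back cache absorbs repeated writes to the same
# block, so the flash below sees far fewer program (wear-inducing) cycles.

from collections import OrderedDict

class WriteCache:
    def __init__(self, flash, capacity=4):
        self.flash = flash            # dict: block number -> data
        self.cache = OrderedDict()    # LRU order, most recently used last
        self.capacity = capacity
        self.flash_writes = 0         # program cycles actually reaching flash

    def write(self, block, data):
        if block in self.cache:
            self.cache.move_to_end(block)
        self.cache[block] = data      # coalesce the write in volatile memory
        if len(self.cache) > self.capacity:
            victim, vdata = self.cache.popitem(last=False)
            self.flash[victim] = vdata
            self.flash_writes += 1    # only evictions wear the flash
```

Ten writes to one hot block cost zero flash programs until the block is eventually evicted; a wear-leveling remap layer (not shown) would then spread those evictions across physical blocks.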
-
Patent number: 11914530
Abstract: Memory having internal processors, and methods of data communication within such a memory are provided. In one embodiment, an internal processor may concurrently access one or more banks on a memory array on a memory device via one or more buffers. The internal processor may be coupled to a buffer capable of accessing more than one bank, or coupled to more than one buffer that may each access a bank, such that data may be retrieved from and stored in different banks concurrently. Further, the memory device may be configured for communication between one or more internal processors through couplings between memory components, such as buffers coupled to each of the internal processors. Therefore, a multi-operation instruction may be performed by different internal processors, and data (such as intermediate results) from one internal processor may be transferred to another internal processor of the memory, enabling parallel execution of an instruction(s).
Type: Grant
Filed: July 14, 2022
Date of Patent: February 27, 2024
Inventors: Robert M. Walker, Dan Skinner, Todd A. Merritt, J. Thomas Pawlowski
-
Patent number: 11893272
Abstract: A memory storage device is capable of improving reliability of a memory system. The memory storage device comprises a memory controller, and a non-volatile memory connected to the memory controller. A method includes receiving, by the memory controller, a command from a host device, the command requesting lost LBA (logical block address) information resulting from a system shutdown of the memory storage device, in response to the command, providing, by the memory controller, the lost LBA information, and receiving, by the memory controller, recovered data corresponding to the lost LBA information, wherein the lost LBA information includes at least one of the number of LBAs lost by system shutdown, an LBA list lost by system shutdown, and deletion of a previous LBA list lost by system shutdown.
Type: Grant
Filed: December 22, 2021
Date of Patent: February 6, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Hye Jeong Jang, Min Cheol Kwon, Eun Joo Oh, Sung Kyun Lee, Sang Won Jung, Young Rae Jo
-
Patent number: 11868244
Abstract: A compressed memory system of a processor-based system includes a memory partitioning circuit for partitioning a memory region into data regions with different priority levels. The system also includes a cache line selection circuit for selecting a first cache line from a high priority data region and a second cache line from a low priority data region. The system also includes a compression circuit for compressing the cache lines to obtain a first and a second compressed cache line. The system also includes a cache line packing circuit for packing the compressed cache lines such that the first compressed cache line is written to a first predetermined portion and the second cache line or a portion of the second compressed cache line is written to a second predetermined portion of the candidate compressed cache line. The first predetermined portion is larger than the second predetermined portion.
Type: Grant
Filed: January 10, 2022
Date of Patent: January 9, 2024
Assignee: QUALCOMM Incorporated
Inventors: Norris Geng, Richard Senior, Gurvinder Singh Chhabra, Kan Wang
-
Patent number: 11799827
Abstract: A first edge server of multiple edge servers of a distributed edge computing network receives a request from a client device regarding a resource hosted at an origin server according to an anycast implementation. The first edge server modifies the request to include identifying information for the first edge server prior to sending the request to the origin server. The origin server responds with a response packet that includes the identifying information of the first edge server. Instead of routing the response packet to the client device directly, one of the multiple edge servers receives the response packet due to the edge servers each having the same anycast address. If the edge server that receives the response packet is not the first edge server, that edge server transmits the response packet to the first edge server, which processes the response packet and transmits it to the client device.
Type: Grant
Filed: September 29, 2022
Date of Patent: October 24, 2023
Assignee: CLOUDFLARE, INC.
Inventors: Marek Przemyslaw Majkowski, Alexander Forster, Maciej Biłas
-
Patent number: 11797450
Abstract: An electronic device includes a cache memory including a memory space for storing a first cache set including a plurality of sector data and a plurality of dirty bits, each of the plurality of dirty bits representing whether corresponding sector data of the plurality of sector data are modified, a memory controller, connected to a plurality of data lines and a data mask line, for receiving the plurality of sector data and the plurality of dirty bits from the cache memory, setting a logic level of a data mask signal based on a logic level of each of the plurality of dirty bits, and outputting the plurality of sector data through the plurality of data lines and the data mask signal through the data mask line, and a memory device, connected to the plurality of data lines and the data mask line, for receiving the plurality of sector data through the plurality of data lines, and receiving the data mask signal through the data mask line.
Type: Grant
Filed: April 16, 2021
Date of Patent: October 24, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Byoungsul Kim, Youngsan Kang, Daehyun Kwon, Myong-Seob Song, Byung Yo Lee, Yejin Jo
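The dirty-bit-to-data-mask mapping can be sketched as below; the function name and mask polarity are illustrative assumptions:

```python
# A data-mask bit is derived from each sector's dirty bit, so the memory
# device skips (masks) writes of sectors the cache never modified.

def write_with_mask(memory, base, sectors, dirty_bits):
    """Write only the sectors whose dirty bit is set; masked sectors stay untouched."""
    mask = [not d for d in dirty_bits]  # mask asserted => suppress the write
    for i, (sector, masked) in enumerate(zip(sectors, mask)):
        if not masked:
            memory[base + i] = sector
    return mask
```

In hardware the mask would travel on a dedicated data mask line alongside the data lines; here it is just returned so the caller can observe which sector writes were suppressed.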
-
Patent number: 11789518
Abstract: Embodiments relate to a system, program product, and method for mitigating voltage overshoot in one or more cores in a multicore processing device including a plurality of cores. The method includes determining, in real-time, an indication of power consumption within each core of the one or more cores. The method also includes determining, through the indication of power consumption, a voltage overshoot condition in the one or more cores. The method further includes increasing, for the one or more cores, a power demand thereof. The method also includes increasing, subject to the increasing the power demand, power delivery to the one or more cores, thereby at least arresting the rate of increase of the voltage overshoot.
Type: Grant
Filed: June 22, 2021
Date of Patent: October 17, 2023
Assignee: International Business Machines Corporation
Inventors: Pradeep Bhadravati Parashurama, Alper Buyuktosunoglu, Ramon Bertran Monfort, Tobias Webel, Srinivas Bangalore Purushotham, Preetham M. Lobo
-
Patent number: 11782919
Abstract: Embodiments are provided for using metadata presence information to determine when to access a higher-level metadata table. It is determined that an incomplete hit occurred for a line of metadata in a lower-level structure of a processor, the lower-level structure being coupled to a higher-level structure in a hierarchy. It is determined that metadata presence information in a metadata presence table is a match to the line of metadata from the lower-level structure. Responsive to determining the match, it is determined to avoid accessing the higher-level structure of the processor.
Type: Grant
Filed: August 19, 2021
Date of Patent: October 10, 2023
Assignee: International Business Machines Corporation
Inventors: Adam Benjamin Collura, James Bonanno, Brian Robert Prasky
-
Patent number: 11747998
Abstract: An application and a plurality of types of storage in a distributed storage system are communicated with. A write instruction that includes a key-value pair that in turn includes a key and value is received from the application. The key-value pair is stored in a selected one of the plurality of types of storage where the selected type of storage is selected based at least in part on a size or access frequency of the key-value pair. A link to the stored key-value pair is stored, including by: generating a key hash based at least in part on the key from the key-value pair and selecting one of a plurality of rows in an extensible primary table in an index based at least in part on the key hash. If it is determined there is sufficient space, the link to the stored key-value pair is stored in the selected row. If it is determined there is insufficient space, the key-value pair is stored in an overflow row in a secondary table.
Type: Grant
Filed: January 10, 2022
Date of Patent: September 5, 2023
Assignee: OmniTier, Inc.
Inventors: Suneel Indupuru, Prasanth Kumar, Derrick Preston Chu, Daryl Ng
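The key-hash row selection with overflow can be sketched as follows; the table sizes, hash choice, and names are invented for illustration:

```python
# The key is hashed to pick a row of a primary table; if the row has no free
# link slot, the pair spills into an overflow row of a secondary table.

import hashlib

ROWS, SLOTS_PER_ROW = 8, 2

primary = [[] for _ in range(ROWS)]   # each row holds up to SLOTS_PER_ROW links
secondary = []                        # overflow rows

def key_hash(key):
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big")

def put(key, link):
    row = primary[key_hash(key) % ROWS]
    if len(row) < SLOTS_PER_ROW:
        row.append((key, link))       # sufficient space: store the link in the row
    else:
        secondary.append((key, link)) # insufficient space: overflow row
```

With 8 rows of 2 slots each, at most 16 links fit in the primary table; any further insertions land in the secondary table, which grows without bound.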
-
Patent number: 11734251
Abstract: Lock table management is provided for a lock manager of a database system, in which lock management is provided in a manner that is fast and efficient, and that conserves processing, memory, and other computational resources. For example, the lock table management can use a hashmap in which keys and values are stored in separate arrays, which can be loaded into separate CPU cache lines.
Type: Grant
Filed: May 17, 2022
Date of Patent: August 22, 2023
Assignee: SAP SE
Inventor: Chang Gyoo Park
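The separate-arrays layout can be sketched as a parallel-array hashmap with linear probing; the size and probing scheme are assumptions for illustration:

```python
# Keys and values live in two separate flat arrays, so a probe scans only the
# key array (one contiguous stream of memory, friendly to CPU cache lines)
# and touches the value array exactly once, on a hit.

SIZE = 16
keys = [None] * SIZE    # contiguous key array
values = [None] * SIZE  # parallel value array

def lock_insert(key, value):
    i = hash(key) % SIZE
    while keys[i] is not None and keys[i] != key:
        i = (i + 1) % SIZE           # linear probing over the key array only
    keys[i], values[i] = key, value

def lock_lookup(key):
    i = hash(key) % SIZE
    while keys[i] is not None:
        if keys[i] == key:
            return values[i]         # single touch of the value array
        i = (i + 1) % SIZE
    return None
```

In Python the arrays hold object references, so the cache-line benefit is only notional; in C or C++ the same layout would place all keys in a few contiguous cache lines, which is the point of the design.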
-
Patent number: 11669458
Abstract: A non-transitory computer-readable recording medium stores an adjustment program for causing a computer to perform a process including: acquiring a computation performance characteristic that indicates a computation performance value that corresponds to each adjustable dimension, through computation in which a cache memory in a processor that includes the cache memory is used; extracting, by using the computation performance characteristic, an adjustment condition for adjusting an adjustable dimension for which a decrease in computation performance due to a cache miss caused by a cache-line conflict in the cache memory occurs; and inserting adjustment processing based on the adjustment condition into a specific program that is executed by the processor and uses the adjustable dimension.
Type: Grant
Filed: March 9, 2022
Date of Patent: June 6, 2023
Assignee: FUJITSU LIMITED
Inventor: Eiji Ohta
-
Patent number: 11640353
Abstract: A data management method of a data storage device having a data management unit different from a data management unit of a user device receives information regarding a storage area of a file to be deleted, from the user device, selects a storage area which matches with the data management unit of the data storage device, from among the storage area of the deleted file, and performs an erasing operation on the selected storage area which matches with the data management unit.
Type: Grant
Filed: January 7, 2022
Date of Patent: May 2, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Hyojin Jeong, Youngjoon Choi, Sunghoon Lee, Jae-Hyeon Ju
-
Patent number: 11636893
Abstract: An example memory sub-system includes: a plurality of bank groups, wherein each bank group comprises a plurality of memory banks; a plurality of row buffers, wherein two or more row buffers of the plurality of row buffers are associated with each bank group; and a processing logic communicatively coupled to the plurality of bank groups and the plurality of row buffers, the processing logic to perform operations comprising: receiving, from a host, a command identifying a row buffer of the plurality of row buffers; and performing an operation with respect to the identified row buffer.
Type: Grant
Filed: October 19, 2020
Date of Patent: April 25, 2023
Assignee: MICRON TECHNOLOGY, INC.
Inventors: Sean S. Eilert, Ameen D. Akel, Shivam Swami
-
Patent number: 11636032
Abstract: A data management method of a data storage device having a data management unit different from a data management unit of a user device receives information regarding a storage area of a file to be deleted, from the user device, selects a storage area which matches with the data management unit of the data storage device, from among the storage area of the deleted file, and performs an erasing operation on the selected storage area which matches with the data management unit.
Type: Grant
Filed: January 7, 2022
Date of Patent: April 25, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Hyojin Jeong, Youngjoon Choi, Sunghoon Lee, Jae-Hyeon Ju
-
Patent number: 11630691
Abstract: Disclosed embodiments relate to an improved memory system architecture for multi-threaded processors. In one example, a system comprises a multi-threaded processor core (MTPC), the MTPC comprising: P pipelines, each to concurrently process T threads; a crossbar to communicatively couple the P pipelines; a memory for use by the P pipelines; a scheduler to optimize reduction operations by assigning multiple threads to generate results of commutative arithmetic operations, and then accumulate the generated results; and a memory controller (MC) to connect with external storage and other MTPCs, the MC further comprising at least one optimization selected from: an instruction set architecture including a dual-memory operation; a direct memory access (DMA) engine; a buffer to store multiple pending instruction cache requests; multiple channels across which to stripe memory requests; and a shadow-tag coherency management unit.
Type: Grant
Filed: August 24, 2021
Date of Patent: April 18, 2023
Assignee: Intel Corporation
Inventors: Robert Pawlowski, Ankit More, Jason M. Howard, Joshua B. Fryman, Tina C. Zhong, Shaden Smith, Sowmya Pitchaimoorthy, Samkit Jain, Vincent Cave, Sriram Aananthakrishnan, Bharadwaj Krishnamurthy
-
Patent number: 11625600
Abstract: A neural network system for predicting a polling time and a neural network model processing method using the neural network system are provided. The neural network system includes a first resource to generate a first calculation result obtained by performing at least one calculation operation corresponding to a first calculation processing graph and a task manager to calculate a first polling time taken for the first resource to perform the at least one calculation operation and to poll the first calculation result from the first resource based on the calculated first polling time.
Type: Grant
Filed: August 6, 2019
Date of Patent: April 11, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventor: Seung-soo Yang
-
Patent number: 11599467
Abstract: The present disclosure advantageously provides a system cache and a method for storing coherent data and non-coherent data in a system cache. A transaction is received from a source in a system, the transaction including at least a memory address, the source having a location in a coherent domain or a non-coherent domain of the system, the coherent domain including shareable data and the non-coherent domain including non-shareable data. Whether the memory address is stored in a cache line is determined, and, when the memory address is not determined to be stored in a cache line, a cache line is allocated to the transaction including setting a state bit of the allocated cache line based on the source location to indicate whether shareable or non-shareable data is stored in the allocated cache line, and the transaction is processed.
Type: Grant
Filed: May 27, 2021
Date of Patent: March 7, 2023
Assignee: Arm Limited
Inventors: Jamshed Jalal, Bruce James Mathewson, Tushar P Ringe, Sean James Salisbury, Antony John Harris
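The allocation rule can be sketched as below; the class shape and domain labels are assumptions made for illustration:

```python
# On a miss, the newly allocated line's state bit records whether the
# requesting source sits in the coherent (shareable) domain or not.

class SystemCache:
    def __init__(self):
        self.lines = {}  # address -> {"data": ..., "shareable": bool}

    def access(self, address, source_domain, data=None):
        line = self.lines.get(address)
        if line is None:
            # miss: allocate and set the state bit from the source's domain
            line = {"data": data, "shareable": source_domain == "coherent"}
            self.lines[address] = line
        return line
```

A snoop filter could then consult the `shareable` bit to decide whether a line ever needs coherence traffic; non-shareable lines can be served and evicted without snooping other agents.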
-
Patent number: 11573719
Abstract: Techniques are described for providing one or more clients with direct access to cached data blocks within a persistent memory cache on a storage server. In an embodiment, a storage server maintains a persistent memory cache comprising a plurality of cache lines, each of which represents an allocation unit of block-based storage. The storage server maintains an RDMA table that includes a plurality of table entries, each of which maps a respective client to one or more cache lines and a remote access key. An RDMA access request to access a particular cache line is received from a storage server client. The storage server identifies access credentials for the client and determines whether the client has permission to perform the RDMA access on the particular cache line. Upon determining that the client has permissions, the cache line is accessed from the persistent memory cache and sent to the storage server client.
Type: Grant
Filed: March 26, 2020
Date of Patent: February 7, 2023
Assignee: Oracle International Corporation
Inventors: Wei Zhang, Jia Shi, Zuoyu Tao, Kothanda Umamageswaran
-
Patent number: 11567791
Abstract: A processor comprises a core, a cache, and a ZCM manager in communication with the core and the cache. In response to an access request from a first software component, wherein the access request involves a memory address within a cache line, the ZCM manager is to (a) compare an OTAG associated with the memory address against a first ITAG for the first software component, (b) if the OTAG matches the first ITAG, complete the access request, and (c) if the OTAG does not match the first ITAG, abort the access request. Also, in response to a send request from the first software component, the ZCM manager is to change the OTAG associated with the memory address to match a second ITAG for a second software component. Other embodiments are described and claimed.
Type: Grant
Filed: June 26, 2020
Date of Patent: January 31, 2023
Assignee: Intel Corporation
Inventors: Vedvyas Shanbhogue, Doddaballapur Jayasimha, Raghu Ram Kondapalli
-
Patent number: 11513945
Abstract: The present disclosure includes apparatuses and methods related to shifting data. An example apparatus comprises a cache coupled to an array of memory cells and a controller. The controller is configured to perform a first operation beginning at a first address to transfer data from the array of memory cells to the cache, and perform a second operation concurrently with the first operation, the second operation beginning at a second address.
Type: Grant
Filed: February 22, 2021
Date of Patent: November 29, 2022
Assignee: Micron Technology, Inc.
Inventors: Daniel B. Penney, Gary L. Howe
-
Patent number: 11507516
Abstract: Described apparatuses and methods partition a cache memory based, at least in part, on a metric indicative of prefetch performance. The amount of cache memory allocated for metadata related to prefetch operations versus cache storage can be adjusted based on operating conditions. Thus, the cache memory can be partitioned into a first portion allocated for metadata pertaining to an address space (prefetch metadata) and a second portion allocated for data associated with the address space (cache data). The amount of cache memory allocated to the first portion can be increased under workloads that are suitable for prefetching and decreased otherwise. The first portion may include one or more cache units, cache lines, cache ways, cache sets, or other resources of the cache memory.
Type: Grant
Filed: August 19, 2020
Date of Patent: November 22, 2022
Assignee: Micron Technology, Inc.
Inventors: David Andrew Roberts, Joseph Thomas Pawlowski
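One simple form of such a repartitioning policy can be sketched as follows; the metric, thresholds, and step size are invented for illustration and are not taken from the patent:

```python
# Grow the prefetch-metadata portion of the cache while prefetches are
# proving useful (high prefetch hit rate), and shrink it back otherwise,
# always leaving at least one way for cache data.

def repartition(metadata_ways, total_ways, prefetch_hit_rate,
                grow_above=0.6, shrink_below=0.2, step=1):
    """Return the new number of cache ways allocated to prefetch metadata."""
    if prefetch_hit_rate > grow_above:
        metadata_ways = min(metadata_ways + step, total_ways - 1)
    elif prefetch_hit_rate < shrink_below:
        metadata_ways = max(metadata_ways - step, 0)
    return metadata_ways
```

The hysteresis band between the two thresholds keeps the partition stable for middling workloads, avoiding costly flushes of repurposed ways on every measurement interval.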
-
Patent number: 11487673
Abstract: A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
Type: Grant
Filed: October 16, 2013
Date of Patent: November 1, 2022
Assignee: NVIDIA Corporation
Inventors: Jerome F. Duluk, Jr., Cameron Buschardt, Sherry Cheung, James Leroy Deming, Samuel H. Duncan, Lucien Dunning, Robert George, Arvind Gopalakrishnan, Mark Hairgrove, Chenghuan Jia, John Mashey
-
Patent number: 11449425
Abstract: A host server in a server cluster has a memory allocator that creates a dedicated host application data cache in storage class memory. A background routine destages host application data from the dedicated cache in accordance with a destaging plan. For example, a newly written extent may be destaged based on aging. All extents may be flushed from the dedicated cache following host server reboot. All extents associated with a particular production volume may be flushed from the dedicated cache in response to a sync message from a storage array.
Type: Grant
Filed: October 10, 2019
Date of Patent: September 20, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Arieh Don, Adnan Sahin, Owen Martin, Peter Blok, Philip Derbeko
-
Patent number: 11449423
Abstract: A method performed by a controller comprising assigning a first status indicator to entries in a first address line in a volatile memory belonging to a first region of an LUT stored in a non-volatile memory, and a second status indicator to entries in the first address line in the volatile memory belonging to a second region of the LUT, setting either the first or second status indicator to a dirty status based on whether a cache updated entry at an address m in the volatile memory belongs to the first or second region of the LUT, and writing, based on the dirty status of the first and second status indicator at the address m, all entries in the volatile memory associated with the first region or the second region containing the updated entry to the non-volatile memory.
Type: Grant
Filed: March 12, 2021
Date of Patent: September 20, 2022
Assignee: Kioxia Corporation
Inventors: David Symons, Ezequiel Alves
-
Patent number: 11442855
Abstract: A cache memory circuit that evicts cache lines based on which cache lines are storing background data patterns is disclosed. The cache memory circuit can store multiple cache lines and, in response to receiving a request to store a new cache line, can select a particular one of previously stored cache lines. The selection may be performed based on data patterns included in the previously stored cache lines. The cache memory circuit can also perform accesses where the internal storage arrays are not activated in response to determining data in the location specified by the requested address is background data. In systems employing virtual addresses, a translation lookaside buffer can track the location of background data in the cache memory circuit.
Type: Grant
Filed: September 25, 2020
Date of Patent: September 13, 2022
Assignee: Apple Inc.
Inventor: Michael R. Seningen
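The eviction preference can be sketched as below; the background pattern (all zero bytes) and fallback policy are illustrative assumptions, not the patented circuit:

```python
# Lines whose contents match a known background pattern are cheap to evict
# (their value can be regenerated without reading any array), so a victim
# search prefers them over lines holding real data.

BACKGROUND = bytes(64)  # one 64-byte cache line of background data (all zeros)

def pick_victim(cache_lines):
    """cache_lines: dict of tag -> 64-byte line data. Returns the tag to evict."""
    for tag, data in cache_lines.items():
        if data == BACKGROUND:
            return tag                  # background line: no information is lost
    return next(iter(cache_lines))      # fall back to any line (e.g. the oldest)
```

The same pattern check explains the power saving mentioned in the abstract: a read that is known to target background data can return the pattern directly without activating the storage arrays.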
-
Patent number: 11386975
Abstract: A three-dimensional stacked memory device includes a buffer die having a plurality of core die memories stacked thereon. The buffer die is configured as a buffer to occupy a first space in the buffer die. The first memory module, disposed in a second space unoccupied by the buffer, is configured to operate as a cache of the core die memories. The controller is configured to detect a fault in a memory area corresponding to a cache line in the core die memories based on a result of a comparison between data stored in the cache line and data stored in the memory area corresponding to the cache line in the core die memories. The second memory module, disposed in a third space unoccupied by the buffer and the first memory module, is configured to replace the memory area when the fault is detected in the memory area.
Type: Grant
Filed: June 28, 2019
Date of Patent: July 12, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Shinhaeng Kang, Joonho Song, Seungwon Lee
-
Patent number: 11372759
Abstract: A directory processing method and apparatus are provided to resolve a problem that a directory occupies a relatively large quantity of caches in an existing directory processing solution. The method includes: receiving, by a first data node, a first request sent by a second data node; searching for, by the first data node, a matched directory entry in a directory of the first data node based on tag information and index information in a first physical address; creating, when no matched directory entry is found, a first directory entry of the directory based on the first request, where the first directory entry includes the tag information, first indication information, first pointer information, and first status information, the first pointer information is used to indicate that data in the memory address corresponding to the indication bit that is set to valid is read by the second data node.
Type: Grant
Filed: July 17, 2020
Date of Patent: June 28, 2022
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Yongbo Cheng, Chenghong He, Tao He
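The miss path described here, splitting a physical address into tag and index, searching the directory, and creating an entry that records the requesting node, can be sketched like this. The field names, index width, and dictionary representation are assumptions.

```python
# Sketch of the directory miss path (field names and index width assumed):
# the physical address is split into tag and index; on a miss a new entry
# is created recording which data node has read the line.

def split_address(phys_addr, index_bits=4):
    index = phys_addr & ((1 << index_bits) - 1)
    tag = phys_addr >> index_bits
    return tag, index

def handle_request(directory, phys_addr, requester_node):
    tag, index = split_address(phys_addr)
    entry = directory.get((index, tag))
    if entry is None:
        # No matched directory entry: create one with the tag and a
        # pointer (sharer marking) for the requesting node.
        entry = {"tag": tag, "sharers": {requester_node}, "state": "shared"}
        directory[(index, tag)] = entry
    else:
        entry["sharers"].add(requester_node)
    return entry
```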
-
Patent number: 11372703
Abstract: A memory controller receives, via a first interface, a first read request requesting a requested data granule. Based on receipt of the first read request, the memory controller transmits, via a second interface, a second read request to initiate access of the requested data granule from a system memory. Based on a determination to schedule accelerated data delivery and receipt by the memory controller of a data scheduling indication that indicates a timing of future delivery of the requested data granule, the memory controller requests, prior to receipt of the requested data granule, permission to transmit the requested data granule on the system interconnect fabric. Based on receipt of the requested data granule at the indicated timing and a grant of the permission to transmit, the memory controller initiates transmission of the requested data granule on the system interconnect fabric and transmits an error indication for the requested data granule.
Type: Grant
Filed: February 19, 2021
Date of Patent: June 28, 2022
Assignee: International Business Machines Corporation
Inventors: John Samuel Liberty, Brad William Michael, Stephen J. Powell, Nicholas Steven Rolfe
-
Patent number: 11360777
Abstract: A cache system, having a first cache, a second cache, and a logic circuit coupled to control the first cache and the second cache according to an execution type of a processor. When an execution type of a processor is a first type indicating non-speculative execution of instructions and the first cache is configured to service commands from a command bus for accessing a memory system, the logic circuit is configured to copy a portion of content cached in the first cache to the second cache. The cache system can include a configurable data bit. The logic circuit can be coupled to control the caches according to the bit. Alternatively, the caches can include cache sets. The caches can also include registers associated with the cache sets respectively. The logic circuit can be coupled to control the cache sets according to the registers.
Type: Grant
Filed: January 29, 2021
Date of Patent: June 14, 2022
Assignee: Micron Technology, Inc.
Inventor: Steven Jeffrey Wallach
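The trigger condition for the copy operation can be modeled in a few lines. The execution-type encoding, the "copy everything" portion, and the dictionary caches are assumptions; the patent leaves the copied portion and control bits unspecified here.

```python
# Sketch of the type-switch behaviour (trigger encoding and copied portion
# assumed): when the processor reports non-speculative execution and the
# first cache is servicing the command bus, cached content is copied from
# the first cache into the second cache.

def on_execution_type(first_cache, second_cache, exec_type, first_services_bus):
    if exec_type == "non-speculative" and first_services_bus:
        for key, val in first_cache.items():
            second_cache[key] = val      # copy cached content across
        return True                      # copy performed
    return False
```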
-
Patent number: 11327793
Abstract: Systems and methods for processing hierarchical tasks in a garbage collection mechanism are provided. The method includes determining chunks in a task queue. Each chunk is a group of child tasks created after processing one task. The method includes popping, by an owner thread, tasks from a top side of the task queue pointed at by a chunk in a first in first out (FIFO) pop. The method also includes stealing, by a thief thread, tasks from a chunk in an opposite side of the task queue.
Type: Grant
Filed: February 18, 2020
Date of Patent: May 10, 2022
Assignee: International Business Machines Corporation
Inventors: Michihiro Horie, Kazunori Ogata, Mikio Takeuchi, Hiroshi Horii
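The chunked task-queue discipline, owner working the top chunk in FIFO order, thief stealing from the opposite end, can be sketched as a toy single-threaded model. The class shape and the exact steal granularity are assumptions drawn only from the abstract; a real implementation would add synchronization.

```python
# Toy model of the chunked task queue (structure assumed from the abstract):
# child tasks produced by one parent form a chunk; the owner thread pops
# from the top chunk in FIFO order, while a thief thread takes tasks from
# a chunk at the opposite end of the queue.

from collections import deque

class ChunkedTaskQueue:
    def __init__(self):
        self.queue = deque()  # each element is a deque of child tasks (a chunk)

    def push_chunk(self, child_tasks):
        self.queue.append(deque(child_tasks))

    def owner_pop(self):
        # Owner: FIFO pop within the top (most recently pushed) chunk.
        while self.queue:
            chunk = self.queue[-1]
            if chunk:
                return chunk.popleft()
            self.queue.pop()        # discard an exhausted chunk
        return None

    def thief_steal(self):
        # Thief: take tasks from the chunk at the opposite (bottom) end.
        while self.queue:
            chunk = self.queue[0]
            if chunk:
                return chunk.popleft()
            self.queue.popleft()
        return None
```

Keeping owner and thief at opposite ends limits contention to the rare case when only one chunk remains.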
-
Patent number: 11301403
Abstract: The present disclosure includes apparatuses and methods related to a command bus in memory. A memory module may be equipped with multiple memory media types that are responsive to perform various operations in response to a common command. The operations may be carried out during the same clock cycle in response to the command. An example apparatus can include a first number of memory devices coupled to a host via a first number of ports and a second number of memory devices each coupled to the first number of memory devices via a second number of ports, wherein the second number of memory devices each include a controller, and wherein the first number of memory devices and the second number of memory devices can receive a command from the host to perform the various (e.g., the same or different) operations, sometimes concurrently.
Type: Grant
Filed: March 1, 2019
Date of Patent: April 12, 2022
Assignee: Micron Technology, Inc.
Inventors: Frank F. Ross, Matthew A. Prather
-
Patent number: 11288008
Abstract: A reflective memory system includes network-connected computing systems including respective memory subsystems. A reflective memory management subsystem in a first computing system receives a processor memory-centric reflective write request associated with a local reflective memory write operation and remote reflective memory write operations, performs the local reflective memory write operation to write data to a memory subsystem in the first computing system, and uses remote memory access hardware to generate remote memory write information for performing the remote reflective memory write operations to write the data at respective second memory subsystems in second computing systems.
Type: Grant
Filed: October 30, 2020
Date of Patent: March 29, 2022
Assignee: Dell Products L.P.
Inventors: Robert W. Hormuth, Jimmy D. Pike, Gaurav Chawla, William Price Dawkins, Elie Jreij, Mukund P. Khatri, Walter A. O'Brien, III, Mark Steven Sanders
-
Patent number: 11281488
Abstract: A solution is proposed for managing a computing environment. A corresponding method comprises detecting critical commands and applying each critical command and possible following commands to the computing environment while maintaining an unchanged image thereof; a command effect of the application of the critical command on the computing environment is verified according to one or more operative parameters thereof, and the application of the critical/following commands is integrated into the computing environment in response to a positive result of the verification. A computer program and a computer program product for performing the method are also proposed. Moreover, a system for implementing the method is proposed.
Type: Grant
Filed: October 8, 2020
Date of Patent: March 22, 2022
Assignee: International Business Machines Corporation
Inventors: Damiano Bassani, Antonio Di Cocco, Alfonso D'Aniello, Catia Mecozzi
-
Patent number: 11263151
Abstract: Translation lookaside buffer (TLB) invalidation using virtual addresses is provided. A cache is searched for a virtual address matching the input virtual address. Based on a matching virtual address in the cache, the corresponding cache entry is invalidated. The load/store queue is searched for a set and a way corresponding to the set and the way of the invalidated cache entry. Based on an entry in the load/store queue matching the set and the way of the invalidated cache entry, the entry in the load/store queue is marked as pending. Indicating a completion of the TLB invalidate instruction is delayed until all pending entries in the load/store queues are complete.
Type: Grant
Filed: July 29, 2020
Date of Patent: March 1, 2022
Assignee: International Business Machines Corporation
Inventors: David Campbell, Bryan Lloyd, David A. Hrusecky, Kimberly M. Fernsler, Jeffrey A. Stuecheli, Guy L. Guthrie, Samuel David Kirchhoff, Robert A. Cordes, Michael J. Mack, Brian Chen
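The delayed-completion rule can be modeled compactly: invalidate the cache entry matched by virtual address, mark load/store queue entries on the same set and way as pending, and report completion only once nothing is pending. The dictionary and list representations below are assumptions for illustration.

```python
# Sketch of the delayed-completion rule (representations assumed):
# cache maps virtual address -> (set, way); the load/store queue is a
# list of entries with set, way, and state fields.

def tlb_invalidate(cache, lsq, vaddr):
    loc = cache.pop(vaddr, None)        # invalidate the matching cache entry
    if loc is None:
        return True                     # nothing matched: complete at once
    s, w = loc
    for entry in lsq:
        if entry["set"] == s and entry["way"] == w:
            entry["state"] = "pending"  # in-flight access on that set/way
    return False                        # completion must be delayed

def invalidate_complete(lsq):
    # The TLB invalidate may be reported complete only when no
    # load/store queue entry remains pending.
    return all(e["state"] != "pending" for e in lsq)
```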
-
Patent number: 11200294
Abstract: The present disclosure describes page updating methods and display devices. The method includes displaying, by a display device in a shopping mall mode, a first presentation page as an overlay on a user interface of the display device. The first presentation page comprises a first presentation file, and the first presentation file corresponds to a first URL. The method includes sending, by the display device, an update message associated with a web page presentation to a web application on the display device; generating, by the display device, a second URL based on the first URL and the update message, the update message indicating that the first presentation file is updated to a second presentation file; obtaining, by the display device, the second presentation file according to the second URL; and updating, by the display device, the first presentation page to a second presentation page based on the second presentation file.
Type: Grant
Filed: June 2, 2020
Date of Patent: December 14, 2021
Assignee: HISENSE VISUAL TECHNOLOGY CO., LTD.
Inventor: Jingbo Qin
-
Patent number: 11182307
Abstract: A method for demoting data elements from a cache is disclosed. The method maintains a heterogeneous cache comprising a higher performance portion and a lower performance portion. The method maintains, within the lower performance portion, a ghost cache containing statistics for data elements that are currently contained in the heterogeneous cache, and data elements that have been demoted from the heterogeneous cache within a specified time interval. The method maintains, for the ghost cache, multiple LRU lists that designate an order in which data elements are demoted from the lower performance portion. The method utilizes the statistics to determine in which LRU lists the data elements are referenced. A corresponding system and computer program product are also disclosed.
Type: Grant
Filed: February 22, 2020
Date of Patent: November 23, 2021
Assignee: International Business Machines Corporation
Inventors: Lokesh M. Gupta, Matthew G. Borlick, Kyler A. Anderson, Kevin J. Ash
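A minimal model of the ghost-cache bookkeeping is sketched below. The two-list hot/cold split, the access-count statistic, and the threshold are assumptions; the patent only says multiple LRU lists exist and statistics decide which list references an element.

```python
# Minimal sketch of ghost-cache bookkeeping (statistics and list choice
# assumed): the ghost cache keeps access counts even for demoted elements,
# and an element's statistics select which LRU list orders its demotion.

from collections import OrderedDict

class GhostCache:
    def __init__(self, hot_threshold=2):
        self.stats = {}                     # element -> access count
        self.lru = {"cold": OrderedDict(), "hot": OrderedDict()}
        self.hot_threshold = hot_threshold

    def access(self, elem):
        self.stats[elem] = self.stats.get(elem, 0) + 1
        # Re-file the element in the LRU list chosen by its statistics.
        for lst in self.lru.values():
            lst.pop(elem, None)
        name = "hot" if self.stats[elem] >= self.hot_threshold else "cold"
        self.lru[name][elem] = True         # most recently used at the end

    def demote(self):
        # Demote from the cold list first, least recently used first;
        # statistics are retained, which is what makes the entry a "ghost".
        for name in ("cold", "hot"):
            if self.lru[name]:
                elem, _ = self.lru[name].popitem(last=False)
                return elem
        return None
```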
-
Patent number: 11132247
Abstract: Aspects of the present disclosure include accessing block data stored in a memory component including memory blocks. The block data identifies bad blocks and reusable bad blocks, the reusable bad blocks having a higher level of reliability than bad blocks. Block selection is performed to select a block based on a block address. Based on the block selection and based on the block data, a tag operation is performed by setting a latch of the selected block to a first state in which access to the selected block is disabled.
Type: Grant
Filed: July 30, 2018
Date of Patent: September 28, 2021
Assignee: Micron Technology, Inc.
Inventors: Fulvio Rori, Chiara Cerafogli, Scott Anthony Stoller
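The tag operation can be sketched as a lookup plus a latch update. The status encoding, and the choice to keep reusable bad blocks accessible, are assumptions for illustration; the abstract only distinguishes the two classes by reliability.

```python
# Sketch of the tag operation (block-data encoding assumed): block data
# classifies blocks as good, bad, or reusable-bad; selecting a bad block
# sets its latch to a first state in which access is disabled.

def tag_block(block_data, latches, block_addr):
    status = block_data.get(block_addr, "good")
    if status == "bad":
        latches[block_addr] = "disabled"   # first state: access disabled
        return False                       # access not permitted
    return True                            # good or reusable-bad: accessible
```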
-
Patent number: 11119770
Abstract: Performing atomic store-and-invalidate operations in processor-based devices is disclosed. In this regard, a processing element (PE) of one or more PEs of a processor-based device includes a store-and-invalidate logic circuit used by a memory access stage of an execution pipeline of the PE to perform an atomic store-and-invalidate operation. Upon receiving an indication to perform a store-and-invalidate operation (e.g., in response to a store-and-invalidate instruction execution) comprising a store address and store data, the memory access stage uses the store-and-invalidate logic circuit to write the store data to a memory location indicated by the store address, and to invalidate an instruction cache line corresponding to the store address in an instruction cache of the PE.
Type: Grant
Filed: July 26, 2019
Date of Patent: September 14, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Thomas Philip Speier, Eric Francis Robinson
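Functionally, the operation pairs a store with an instruction-cache line invalidation, as sketched below. The line size and data structures are assumptions; atomicity in hardware is exactly what this software model cannot show.

```python
# Sketch of the store-and-invalidate step (line size and structures
# assumed): the store data is written at the store address, and the
# instruction cache line covering that address is invalidated.

LINE_SIZE = 64  # assumed instruction cache line size in bytes

def store_and_invalidate(memory, icache_lines, addr, data):
    memory[addr] = data                 # write the store data
    line = addr // LINE_SIZE            # line covering the store address
    icache_lines.discard(line)          # invalidate the now-stale line
    return line
```

This is the classic self-modifying-code hazard: without the paired invalidation, the instruction cache could keep serving the pre-store bytes.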
-
Patent number: 11113197
Abstract: A method for joining an event stream with reference data includes loading a plurality of reference data snapshots from a reference data source into a cache. Punctuation events are supplied that indicate temporal validity for the plurality of reference data snapshots in the cache. A logical barrier is provided that restricts a flow of data events in the event stream to a cache lookup operation based on the punctuation events. The cache lookup operation is performed with respect to the data events in the event stream that are permitted to cross the logical barrier.
Type: Grant
Filed: April 8, 2019
Date of Patent: September 7, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Boris Shulman, Shoupei Li, Alexander Alperovich, Xindi Zhang, Kanstantsyn Zoryn
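The barrier mechanism can be sketched with plain tuples: a data event crosses into the cache lookup only once a punctuation event has declared reference snapshots valid through the data event's timestamp. The event shapes and snapshot representation are assumptions, not the product's API.

```python
# Sketch of the punctuation-gated lookup (event shapes assumed): events
# are ("punct", time) or ("data", time, key); snapshots is a list of
# (valid_until, {key: ref_value}) in arrival order.

def _lookup(data_event, snapshots):
    _, t, key = data_event
    for valid_until, table in snapshots:
        if t <= valid_until:
            return (t, key, table.get(key))
    return (t, key, None)

def join_stream(events, snapshots):
    barrier_time = None      # latest punctuation seen
    joined, held = [], []
    for ev in events:
        if ev[0] == "punct":
            barrier_time = ev[1]
            # The barrier advances: release held events it now covers.
            still_held = []
            for d in held:
                if d[1] <= barrier_time:
                    joined.append(_lookup(d, snapshots))
                else:
                    still_held.append(d)
            held = still_held
        elif barrier_time is not None and ev[1] <= barrier_time:
            joined.append(_lookup(ev, snapshots))
        else:
            held.append(ev)  # blocked at the logical barrier
    return joined
```

Holding events until punctuation guarantees a lookup never runs against a snapshot that might still be superseded for that timestamp.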
-
Patent number: 11093388
Abstract: The present disclosure relates to a method, an apparatus, an electronic device and a computer readable storage medium for accessing static random access memories. The method includes: receiving an access request for data associated with the static random access memories; writing a plurality of sections of the data into a plurality of different static random access memories in an interleaved manner in response to the access request being a write request for the data, each of the plurality of sections having its respective predetermined size; and reading the plurality of sections of the data from the plurality of static random access memories in an interleaved manner in response to the access request being a read request for the data, each of the plurality of sections having its respective predetermined size.
Type: Grant
Filed: November 13, 2019
Date of Patent: August 17, 2021
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Xiaozhang Gong, Jing Wang
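The interleaving itself is a round-robin split across banks, sketched below. The section size and bank count are assumptions (the patent allows each section its own predetermined size; a single fixed size is used here for brevity).

```python
# Sketch of interleaved access (section size and bank count assumed):
# a write splits the data into fixed-size sections and distributes them
# round-robin across the SRAM banks; a read gathers them back in order.

SECTION = 4  # assumed section size in bytes

def write_interleaved(banks, data):
    for i in range(0, len(data), SECTION):
        banks[(i // SECTION) % len(banks)].append(data[i:i + SECTION])

def read_interleaved(banks, num_sections):
    out = b""
    idx = [0] * len(banks)       # next unread section in each bank
    for s in range(num_sections):
        b = s % len(banks)
        out += banks[b][idx[b]]
        idx[b] += 1
    return out
```

Interleaving lets consecutive sections be serviced by different banks, so sequential accesses do not serialize on a single memory.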
-
Patent number: 11042479
Abstract: Techniques are provided for providing a fully active and non-replicated block storage solution in a clustered filesystem that implements cache coherency. In a clustered filesystem where one or more data blocks are stored in a respective cache of each host node of a plurality of host nodes, a request is received at a host node of the plurality of host nodes from a client device to write the one or more data blocks to a shared storage device. In response to the request, the one or more data blocks are stored in the cache of the host node and a particular notification is sent to another host node of the plurality of host nodes that the one or more data blocks have been written to the shared storage device. In response to receiving the notification, the other host node invalidates a cached copy of the one or more data blocks in the respective cache of the other host node.
Type: Grant
Filed: January 8, 2019
Date of Patent: June 22, 2021
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Donald Allan Graves, Jr., Frederick S. Glover, Alan David Brunelle, Pranav Dayananda Bagur, James Bensson
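The write-plus-invalidate protocol can be sketched with toy node objects: the writing node caches the blocks, persists them to shared storage, and notifies peers, which drop their stale copies so later reads must refetch. The class shape and notification transport are assumptions.

```python
# Sketch of the invalidation protocol (node/cache shapes assumed): a write
# is cached locally, persisted to shared storage, and announced to every
# other host node, which invalidates its cached copy of those blocks.

class HostNode:
    def __init__(self, name):
        self.name = name
        self.cache = {}
        self.peers = []          # other host nodes in the cluster

    def write_blocks(self, shared_storage, blocks):
        for addr, data in blocks.items():
            self.cache[addr] = data       # cache on the writing node
            shared_storage[addr] = data   # persist to shared storage
        for peer in self.peers:
            peer.on_write_notification(blocks.keys())

    def on_write_notification(self, addrs):
        for addr in addrs:
            self.cache.pop(addr, None)    # invalidate the stale cached copy
```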