Patents Examined by Pierre-Michel Bataille
-
Patent number: 12287970
Abstract: A method and system are provided for limiting unnecessary data traffic on the data busses connecting the various levels of system memory. Some embodiments may include processing an invalidation command associated with a system or network operation requiring temporary storage of data in a local memory area. The invalidation command may comprise a memory location indicator capable of identifying the physical addresses of the associated data in the local memory area. Some embodiments may preclude the data associated with the system or network operation from being written to a main memory by invalidating the memory locations holding the temporary data once the system or network operation has finished utilizing the local memory area.
Type: Grant
Filed: January 25, 2024
Date of Patent: April 29, 2025
Assignee: Mellanox Technologies, Ltd.
Inventors: Yamin Friedman, Idan Burstein, Hillel Chapman, Gal Yefet
-
Patent number: 12277064
Abstract: A method for operating a memory system relates to the memory field and addresses problems such as the excessive waiting time caused by moving data during the buffer flush process. The method for operating the memory system includes: in response to a first space flush command, configuring part of the free space as available space of the first space when the size of the free space of the memory device is greater than or equal to a first threshold.
Type: Grant
Filed: December 29, 2022
Date of Patent: April 15, 2025
Assignee: Yangtze Memory Technologies Co., Ltd.
Inventor: Tao Zhang
-
Patent number: 12267391
Abstract: ID numbers of multiple connected modules are set automatically. Each of the multiple modules includes an ID number information holding unit that holds its own ID number information; a first command generation unit that generates a first command for notifying its rear stage of its own ID number information; a first command output unit that outputs the first command from a rear-stage output port; a second command generation unit that generates a second command for notifying its front stage of its own ID number information and the ID number information of its rear-stage modules; a second command output unit that outputs the second command from a front-stage output port; and an ID number information update unit that, when the first command is received from a front-stage module, sets as its own ID number information in the ID number information holding unit a new ID number obtained by adding "1" to the ID number of the front-stage module contained in the received first command.
Type: Grant
Filed: April 21, 2021
Date of Patent: April 1, 2025
Assignee: MITUTOYO CORPORATION
Inventor: Takuma Mizunaga
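The ID assignment chain described in this abstract can be sketched as a simple propagation of the "first command" down the module chain, with each module taking the upstream ID plus one. This is an illustrative sketch only; the class and method names below are not from the patent.

```python
# Sketch of daisy-chained ID assignment: each module, on receiving the
# upstream module's ID, sets its own ID to that value plus 1 and forwards.
class Module:
    def __init__(self):
        self.id_number = 0          # own ID number information
        self.downstream = None      # rear-stage module, if any

    def connect(self, downstream):
        self.downstream = downstream

    def receive_first_command(self, upstream_id):
        # ID number information update unit: new ID = upstream ID + 1
        self.id_number = upstream_id + 1
        self.send_first_command()

    def send_first_command(self):
        # First command: notify the rear stage of our own ID number
        if self.downstream is not None:
            self.downstream.receive_first_command(self.id_number)

def assign_ids(head):
    head.id_number = 1
    head.send_first_command()

modules = [Module() for _ in range(4)]
for a, b in zip(modules, modules[1:]):
    a.connect(b)
assign_ids(modules[0])
print([m.id_number for m in modules])  # [1, 2, 3, 4]
```

The patent also describes a second command flowing back toward the front stage; the sketch covers only the forward ID-setting pass.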
-
Patent number: 12265476
Abstract: Provided herein are systems, methods and computer readable media for providing an out-of-band cache mechanism for ensuring availability of data. An example system may include a client device configured to, in response to determining requested data is not available in a cache, access the requested data from a data source and transmit, to a cache mechanism, an indication that the requested data is unavailable in the cache, the indication configured to be placed in a queue as an element pointing to the requested data. The cache mechanism is configured to receive an indication of requested data, determine whether an element indicative of the requested data exists in a queue, and, in an instance in which the element is not present in the queue, place the element in the queue, the queue being a list of elements, each indicative of requested data needing to be placed in the cache.
Type: Grant
Filed: July 20, 2022
Date of Patent: April 1, 2025
Assignee: BYTEDANCE INC.
Inventors: Steven Black, Stuart Siegrist, Gilligan Markham
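The queue behavior in this abstract — enqueue a miss indication only if no element for that data is already queued — amounts to a deduplicating fill queue. A minimal sketch, with hypothetical names not taken from the patent:

```python
from collections import deque

class CacheFillQueue:
    """Queue of elements, each pointing to data that needs to be
    placed in the cache. Illustrative sketch only."""
    def __init__(self):
        self._queue = deque()
        self._pending = set()   # fast membership test for queued elements

    def report_miss(self, key):
        # Place the element in the queue only if it is not already
        # present, so repeated misses for the same data are coalesced.
        if key not in self._pending:
            self._pending.add(key)
            self._queue.append(key)

    def next_fill(self):
        # Pop the next element whose data should be placed in the cache.
        key = self._queue.popleft()
        self._pending.discard(key)
        return key

q = CacheFillQueue()
for miss in ["user:1", "user:2", "user:1"]:
    q.report_miss(miss)
print(len(q._queue))  # 2 — the duplicate miss was coalesced
```

The dedup check is what keeps a hot key that misses many times from flooding the queue with redundant fill work.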
-
Patent number: 12259820
Abstract: A processor-based system for allocating a higher-level cache line in a higher-level cache memory in response to an eviction request of a lower-level cache line is disclosed. The processor-based system determines whether the lower-level cache line is opportunistic, sets an opportunistic indicator to indicate that the lower-level cache line is opportunistic, and communicates the lower-level cache line and the opportunistic indicator. The processor-based system determines, based on the opportunistic indicator of the lower-level cache line, whether a higher-level cache line of a plurality of higher-level cache lines in the higher-level cache memory has less or equal importance than the lower-level cache line. In response, the processor-based system replaces the higher-level cache line in the higher-level cache memory with the lower-level cache line and associates the opportunistic indicator with the lower-level cache line in the higher-level cache memory.
Type: Grant
Filed: April 2, 2024
Date of Patent: March 25, 2025
Assignee: QUALCOMM Incorporated
Inventor: Ramkumar Srinivasan
-
Patent number: 12253948
Abstract: Methods and apparatus for software-defined coherent caching of pooled memory. The pooled memory is implemented in an environment having a disaggregated architecture where compute resources such as compute platforms are connected to disaggregated memory via a network or fabric. Software-defined caching policies are implemented in hardware in a processor SoC or discrete device such as a Network Interface Controller (NIC) by programming logic in an FPGA or accelerator on the SoC or discrete device. The programmed logic is configured to implement software-defined caching policies in hardware for effecting disaggregated memory (DM) caching in an associated DM cache of at least a portion of an address space allocated for the software application in the disaggregated memory.
Type: Grant
Filed: November 9, 2020
Date of Patent: March 18, 2025
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky, Zhongyan Lu, Thomas Willhalm
-
Patent number: 12254213
Abstract: Described apparatuses and methods relate to a write request buffer for a memory system that may support a nondeterministic protocol. A host device and connected memory device may include a controller with a read queue and a write queue. A controller includes a write request buffer to buffer write addresses and write data associated with write requests directed to the memory device. The write request buffer can include a write address buffer that stores unique write addresses and a write data buffer that stores most-recent write data associated with the unique write addresses. Incoming read requests are compared with the write requests stored in the write request buffer. If a match is found, the write request buffer can service the requested data without forwarding the read request downstream to backend memory. Accordingly, the write request buffer can improve the latency and bandwidth in accessing a memory device over an interconnect.
Type: Grant
Filed: December 21, 2021
Date of Patent: March 18, 2025
Assignee: Micron Technology, Inc.
Inventors: Nikesh Agarwal, Laurent Isenegger, Robert Walker
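The read-hit path described in this abstract — compare an incoming read against buffered writes and, on a match, serve the most-recent data without going to backend memory — can be sketched in a few lines. Names and the dict-backed "memory" are illustrative, not from the patent.

```python
class WriteRequestBuffer:
    """Sketch: unique write addresses with most-recent data; reads
    that hit the buffer are serviced without touching the backend."""
    def __init__(self, backend):
        self.backend = backend      # dict standing in for backend memory
        self.write_data = {}        # unique address -> most-recent data

    def write(self, addr, data):
        # Keying by address keeps only the most-recent data per address.
        self.write_data[addr] = data

    def read(self, addr):
        # Compare the incoming read against buffered writes; on a match,
        # service it without forwarding downstream to backend memory.
        if addr in self.write_data:
            return self.write_data[addr]
        return self.backend[addr]

    def flush(self):
        # Drain buffered writes to the backend.
        self.backend.update(self.write_data)
        self.write_data.clear()

backend = {0x20: "cold"}
buf = WriteRequestBuffer(backend)
buf.write(0x10, "hot")
print(buf.read(0x10))  # "hot" — serviced from the write request buffer
print(buf.read(0x20))  # "cold" — forwarded to backend memory
```

The latency win comes from the hit case: a read that matches a pending write never crosses the interconnect.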
-
Patent number: 12253956
Abstract: A hybrid scheme is provided for performing translation lookaside buffer (TLB) shootdowns in a computer system whose processing cores support both inter-processor interrupt (IPI) and broadcast TLB invalidate (TLBI) shootdown mechanisms. In one set of embodiments, this hybrid scheme dynamically determines, for each instance where a TLB shootdown is needed, whether to use the IPI mechanism or the broadcast TLBI mechanism to optimize shootdown performance (or otherwise make the TLB shootdown operation functional/practical).
Type: Grant
Filed: November 7, 2022
Date of Patent: March 18, 2025
Assignee: VMWare LLC
Inventors: Andrei Warkentin, Jared McNeill, Grant Foudree, Anil Veliyankaramadam
-
Patent number: 12248396
Abstract: Embodiments of the present disclosure provide a memory system, and a method for garbage collection of the memory system. The method can include reading N valid data sets in a to-be-collected virtual block (VB) of a memory out to a copy buffer sequentially, where N is an integer greater than or equal to 2, transferring a valid data set in the copy buffer to a corresponding cache, and reading a next valid data set out to the copy buffer, and programming the valid data set in the cache to a corresponding target die group of a target VB. A time period in which a current valid data set is programmed from the cache to the target die group corresponding to the cache overlaps at least partially with a time period in which a next valid data set is read from the to-be-collected VB to the copy buffer.
Type: Grant
Filed: December 29, 2022
Date of Patent: March 11, 2025
Assignee: Yangtze Memory Technologies Co., Ltd.
Inventors: Huadong Huang, Yonggang Chen
-
Patent number: 12242739
Abstract: An apparatus includes an interface circuit and a monitor circuit communicatively coupled to the interface circuit. The monitor circuit is configured to identify a command issued to a memory communicatively coupled to the monitor circuit through the interface circuit, determine whether the command is authorized, and, based on a determination that the command is not authorized, cancel the command.
Type: Grant
Filed: April 30, 2024
Date of Patent: March 4, 2025
Assignee: Microchip Technology Incorporated
Inventors: Brian J. Marley, Richard E. Wahler
-
Patent number: 12242740
Abstract: A data storage device has a controller, a decryption engine, and a memory storing encrypted data. Instead of using the decryption engine to generate a tweak value needed to decrypt the encrypted data, the tweak value is generated by the controller while the controller is waiting for the encrypted data to be read from the memory. This hides the latency to compute the tweak value in the latency to read the encrypted data from the memory.
Type: Grant
Filed: July 19, 2023
Date of Patent: March 4, 2025
Assignee: Sandisk Technologies, Inc.
Inventors: Mark Branstad, Martin Lueker-Boden, Lunkai Zhang
-
Patent number: 12242396
Abstract: A memory permissions model for a processor that is based on the memory address accessed by an instruction as well as the program counter of the instruction. These permissions may be stored in permissions tables and indexed using the memory address of the instruction and the addresses of the memory locations that it accesses. Those indexes may be obtained from a page table in some cases. These memory permissions may be used in conjunction with other permissions, such as execute permissions and secondary execution privileges that are based on whether the instruction belongs to a particular instruction group.
Type: Grant
Filed: June 28, 2023
Date of Patent: March 4, 2025
Assignee: Apple Inc.
Inventors: Jeffry E. Gonion, Bernard J. Semeria
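The two-key lookup described in this abstract — permissions indexed by both the instruction's location and the accessed address, with indexes derived from a page-table-like structure — can be sketched as follows. Every name and structure here is an illustrative assumption; the patent does not specify these tables.

```python
PAGE_SHIFT = 12  # assume 4 KiB pages for illustration

def check_permission(perm_table, page_table, pc, mem_addr):
    """Look up a permission keyed by both the instruction's page
    (program counter) and the accessed memory location's page."""
    pc_index = page_table[pc >> PAGE_SHIFT]          # index for instruction page
    data_index = page_table[mem_addr >> PAGE_SHIFT]  # index for accessed page
    # Default-deny: pairs absent from the table grant no access.
    return perm_table.get((pc_index, data_index), "none")

# Hypothetical setup: code in page 0x1 may read/write data in page 0x2.
page_table = {0x1: "code_region", 0x2: "heap_region"}
perm_table = {("code_region", "heap_region"): "read-write"}
print(check_permission(perm_table, page_table, 0x1234, 0x2abc))  # read-write
```

The point of keying on both addresses is that the same data page can carry different permissions depending on which code is touching it.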
-
Patent number: 12222865
Abstract: The system described herein introduces a cache that a file system uses to determine, for a current object, if the process to merge different types of access control information into merged access control information has already been performed for a previous object. Stated alternatively, the file system uses the cache to determine whether a current object being processed for storage has the same combination of access control information as a previous object that has already been processed for storage. If the current object has the same combination of access control information as the previous object, the file system is able to associate merged access control information for the previous object with the current object via the use of a pointer. Consequently, the file system avoids having to perform the resource-intensive process of merging the different types of access control information for the current object.
Type: Grant
Filed: May 30, 2023
Date of Patent: February 11, 2025
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Neal Robert Christiansen, Neeraj Kumar Singh, Yanran Hao
-
Patent number: 12222867
Abstract: A technology for flushing a hierarchical cache structure based on a designated key identification code and a designated address. A processor includes a first core and a last level cache (LLC). The first core includes a decoder, a memory ordering buffer, and a first in-core cache module. In response to an Instruction Set Architecture (ISA) instruction that requests to flush a hierarchical cache structure according to a designated key identification code and a designated address, the decoder outputs at least one microinstruction. According to the at least one microinstruction, a flushing request with the designated key identification code and the designated address is provided to the first in-core cache module through the memory ordering buffer, and then the first in-core cache module provides the LLC with the flushing request, so that the LLC flushes its matching cache line which matches the designated key identification code and the designated address.
Type: Grant
Filed: October 14, 2022
Date of Patent: February 11, 2025
Assignee: SHANGHAI ZHAOXIN SEMICONDUCTOR CO., LTD.
Inventors: Weilin Wang, Yingbing Guan, Minfang Zhu
-
Patent number: 12216596
Abstract: Systems and methods disclosed herein provide for an improved termination leg unit design and method of trimming impedance thereof, which provides for improved impedance matching for process variations, along with variations in temperature and voltage. Example implementations provide for a leg unit circuit design that includes a first circuit compensating for temperature and voltage variations and a second circuit, connected in series with the first circuit, compensating for process variations. Furthermore, disclosed herein is a ZQ calibration method that provides for calibrating the impedance of each of an on-die termination, a pull-up driver, and a pull-down driver using a single calibration circuit.
Type: Grant
Filed: September 9, 2022
Date of Patent: February 4, 2025
Assignee: SANDISK TECHNOLOGIES LLC
Inventors: Mohammad Reza Mahmoodi, Martin Lueker-Boden
-
Patent number: 12216582
Abstract: Various embodiments for a disk-based merge for combining merged hash maps are described herein. An embodiment operates by identifying a first hash map and a second hash map, and comparing a first hash value from the first hash map with a second hash value from the second hash map, each taken at the lowest unmerged index. A lowest hash value is identified based on the comparison, and an entry corresponding to the lowest hash value is stored in a combined hash map. This process is repeated until all of the hash values from both the first set of hash values and the second set of hash values are stored in the combined hash map. A query is received, and processed based on the combined hash map.
Type: Grant
Filed: July 31, 2023
Date of Patent: February 4, 2025
Assignee: SAP SE
Inventors: Christian Bensberg, Frederik Transier, Kai Stammerjohann
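The repeated compare-lowest-and-store step in this abstract is a classic two-way merge over hash-value-ordered entries. A minimal sketch, assuming each hash map is represented as a list of (hash_value, entry) pairs sorted by hash value; that representation is an assumption, not the patent's.

```python
def merge_hash_maps(first, second):
    """Two-way merge of hash maps whose entries are sorted by hash
    value; each map is a list of (hash_value, entry) pairs."""
    combined, i, j = [], 0, 0
    while i < len(first) and j < len(second):
        # Compare the hash values at the lowest unmerged index of each
        # map and store the entry with the lowest hash value.
        if first[i][0] <= second[j][0]:
            combined.append(first[i])
            i += 1
        else:
            combined.append(second[j])
            j += 1
    # One input is exhausted; the remainder of the other is already sorted.
    combined.extend(first[i:])
    combined.extend(second[j:])
    return combined

first = [(1, "a"), (4, "d")]
second = [(2, "b"), (3, "c")]
print(merge_hash_maps(first, second))  # [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]
```

Because each step touches only the lowest unmerged index of each input, the merge streams sequentially through both maps, which is what makes a disk-based variant practical.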
-
Patent number: 12197342
Abstract: An arithmetic processing device includes: an arithmetic circuit that executes an instruction; a first cache which is coupled to the arithmetic circuit and which has a plurality of first entries each including a first tag region and a first data region that holds cache line data; a second tag region; a processor which controls the first cache based on information held in the second tag region; and a second cache which is coupled to the first cache via the processor and which includes a plurality of second entries each of which includes a third tag region and a second data region that holds cache line data. The second tag region includes a first region that holds first information which specifies whether or not the second data region holds cache line data which has the same address as the address of cache line data held in the first data region.
Type: Grant
Filed: March 6, 2023
Date of Patent: January 14, 2025
Assignee: FUJITSU LIMITED
Inventor: Toru Hikichi
-
Patent number: 12189949
Abstract: In some implementations, a memory device may receive a command to read data in a first format from non-volatile memory, the data being stored in a second format in the non-volatile memory, the second format comprising a plurality of copies of the data in the first format. The memory device may compare, using an error correction circuit, the plurality of copies of the data to determine a dominant bit state for bits of the data. The memory device may store the dominant bit state for bits of the data in the non-volatile memory as error-corrected data in the first format. The memory device may cause the error-corrected data to be read from the non-volatile memory in the first format as a response to the command to read the data in the first format.
Type: Grant
Filed: October 24, 2022
Date of Patent: January 7, 2025
Assignee: Micron Technology, Inc.
Inventors: Jeremy Binfet, Tommaso Vali, Walter Di Francesco, Luigi Pilolli, Angelo Covello, Andrea D'Alessandro, Agostino Macerola, Cristina Lattaro, Claudia Ciaschi
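Determining the "dominant bit state" across multiple stored copies, as this abstract describes, is a per-bit majority vote. A minimal sketch operating on bit lists; the representation is an illustrative assumption (the patent's circuit works on raw memory, not Python lists).

```python
def dominant_bits(copies):
    """Majority-vote the bit at each position across multiple copies
    of the same data, returning the error-corrected bit list."""
    n = len(copies)
    width = len(copies[0])
    corrected = []
    for pos in range(width):
        ones = sum(copy[pos] for copy in copies)
        # The dominant state is the value held by a strict majority
        # of the copies at this bit position.
        corrected.append(1 if ones * 2 > n else 0)
    return corrected

# Three copies of the same word; the middle copy has one flipped bit.
copies = [[1, 0, 1, 1],
          [1, 0, 0, 1],
          [1, 1, 1, 1]]
print(dominant_bits(copies))  # [1, 0, 1, 1]
```

With an odd number of copies every position has a strict majority, so a single corrupted copy can never outvote the others at any bit.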
-
Patent number: 12182620
Abstract: Systems and methods related to integrated memory pooling and direct swap caching are described. A system includes a compute node comprising a local memory and a pooled memory. The system further includes a host operating system (OS) having initial access to: (1) a first swappable range of memory addresses associated with the local memory and a non-swappable range of memory addresses associated with the local memory, and (2) a second swappable range of memory addresses associated with the pooled memory. The system further includes a data-mover offload engine configured to perform a cleanup operation, including: (1) restore a state of any memory content swapped-out from a memory location within the first swappable range of memory addresses to the pooled memory, and (2) move from the local memory any memory content swapped-in from a memory location within the second swappable range of memory addresses back out to the pooled memory.
Type: Grant
Filed: March 8, 2022
Date of Patent: December 31, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventor: Ishwar Agarwal
-
Patent number: 12182033
Abstract: An address translation cache (ATC) is configured to store translation entries indicating mapping information between a virtual address and a physical address of a memory device. The ATC includes a plurality of flexible page group caches, a shared cache and a cache manager. Each flexible page group cache stores translation entries corresponding to a page size allocated to that flexible page group cache. The shared cache stores, regardless of page sizes, translation entries that are not stored in the plurality of flexible page group caches. The cache manager allocates a page size to each flexible page group cache, manages cache page information on the page sizes allocated to the plurality of flexible page group caches, and controls the plurality of flexible page group caches and the shared cache based on the cache page information.
Type: Grant
Filed: October 13, 2022
Date of Patent: December 31, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Youngsuk Moon, Hyunwoo Kang, Jaegeun Park, Sangmuk Hwang