Patents Examined by Hashem Farrokh
-
Patent number: 11144456
Abstract: An apparatus includes a CPU core and a L1 cache subsystem including a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem including a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller configured to receive a read request from the L1 controller as a single transaction. The read request includes a read address, a first indication of an address and a coherence state of a cache line A to be moved from the L1 main cache to the L1 victim cache to allocate space for data returned in response to the read request, and a second indication of an address and a coherence state of a cache line B to be removed from the L1 victim cache in response to the cache line A being moved to the L1 victim cache.
Type: Grant
Filed: May 22, 2020
Date of Patent: October 12, 2021
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, David Matthew Thompson, Naveen Bhoria, Peter Michael Hippleheuser
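The single-transaction read request described above can be sketched as a plain data structure plus the shadow-cache update it implies. All names here (`EvictionHint`, `ReadTransaction`, `apply_to_shadows`) are illustrative assumptions, not terminology from the patent:

```python
from dataclasses import dataclass

@dataclass
class EvictionHint:
    # Address and coherence state of a line leaving a cache level.
    address: int
    state: str  # e.g. "MODIFIED", "SHARED"

@dataclass
class ReadTransaction:
    # Single transaction from L1 to L2: the read itself plus both
    # victim-cache movements it will trigger.
    read_address: int
    line_a: EvictionHint  # moves from L1 main cache to L1 victim cache
    line_b: EvictionHint  # removed from L1 victim cache

def apply_to_shadows(txn, shadow_main, shadow_victim):
    """Update the L2 controller's shadow copies of the L1 caches."""
    shadow_main.pop(txn.line_a.address, None)
    shadow_victim[txn.line_a.address] = txn.line_a.state
    shadow_victim.pop(txn.line_b.address, None)
    shadow_main[txn.read_address] = "SHARED"  # returned data fills L1 main
    return shadow_main, shadow_victim
```

Bundling both eviction hints into the read avoids separate snoop or notification traffic to keep the shadow copies coherent.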
-
Patent number: 11137925
Abstract: A method, computer program product, and computer system for identifying, by a computing device, a current persisted value of a reclamation pool. A default value of the reclamation pool may be identified. The current persisted value may be compared with the default value to determine which is a higher value. The current persisted value may be selected as a minimum memory operating state of the reclamation pool when the current persisted value is higher than the default value. The default value may be selected as the minimum memory operating state of the reclamation pool when the default value is higher than the current persisted value plus a multiplier defining a threshold size.
Type: Grant
Filed: November 6, 2019
Date of Patent: October 5, 2021
Assignee: EMC IP Holding Company, LLC
Inventors: Michael L. Burriss, Bolt Liangliang Liu, Eric Qi Yao, Doris Jia Qian
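The comparison the abstract walks through reduces to a small selection function. This is a minimal sketch assuming the two stated conditions, with the fall-through case (default above persisted but within the threshold) kept on the persisted value as an assumption; the function and parameter names are illustrative:

```python
def select_min_operating_state(persisted, default, threshold=0):
    # Prefer the current persisted value when it exceeds the default.
    if persisted > default:
        return persisted
    # Fall back to the default only when it exceeds the persisted value
    # by more than the threshold size.
    if default > persisted + threshold:
        return default
    # Assumption: within the threshold, keep the persisted value.
    return persisted
```

The threshold acts as hysteresis, preventing the minimum operating state from flapping between the two values on small differences.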
-
Patent number: 11132303
Abstract: A request to perform a program operation at a memory device is received. An entry of a device block record stored at the memory device is determined to be removed based on parameters associated with the program operation and a firmware block record that corresponds to the device block record. The firmware block record tracks the entries of the device block record. The entries of the device block record are associated with blocks of the memory device and identify start voltages that are applied to wordlines of the blocks to program memory cells associated with the wordlines. A command is submitted to the memory device to remove the entry associated with a particular block from the device block record and to make a space available at the device block record for a new entry associated with a new block that is to be written in view of the program operation.
Type: Grant
Filed: February 4, 2020
Date of Patent: September 28, 2021
Assignee: Micron Technology, Inc.
Inventors: Jiangang Wu, Jung Sheng Hoei, Qisong Lin, Mark Ish, Peng Xu
-
Patent number: 11132130
Abstract: In a computer system with a hybrid memory architecture consisting of a volatile main memory and a non-volatile main memory, there are provided a segment cleaning method for a storage file system and a memory management apparatus for implementing the same, the segment cleaning method comprising: selecting a victim segment in storage; copying valid blocks in the victim segment to the volatile main memory; and moving the copied valid blocks to the non-volatile main memory. This can effectively overcome cleaning overhead, thereby improving the I/O performance of applications and increasing the lifetime of storage.
Type: Grant
Filed: May 29, 2020
Date of Patent: September 28, 2021
Assignee: RESEARCH & BUSINESS FOUNDATION SUNGKYUNKWAN UNIVERSITY
Inventors: Young Ik Eom, Jong Gyu Park
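The three-step cleaning path (select victim, stage valid blocks in volatile memory, move them to non-volatile memory) can be sketched as follows. The greedy fewest-valid-blocks victim policy and all names are assumptions for illustration; the patent does not specify this particular policy:

```python
def clean_segment(storage, volatile_mm, nonvolatile_mm):
    # Assumed policy: pick the segment with the fewest valid blocks.
    victim = min(storage, key=lambda seg: len(storage[seg]["valid"]))
    # Step 1: copy valid blocks into volatile main memory.
    staged = dict(storage[victim]["valid"])
    volatile_mm.update(staged)
    # Step 2: move the copied blocks to non-volatile main memory.
    for block, data in staged.items():
        nonvolatile_mm[block] = data
        volatile_mm.pop(block)
    # Step 3: the victim segment can now be reused.
    del storage[victim]
    return victim
```

Staging through volatile memory keeps the slow non-volatile writes off the critical path of the cleaning scan, which is the overhead reduction the abstract claims.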
-
Patent number: 11126366
Abstract: A data erasing method, a memory control circuit unit and a memory storage device are provided. The method includes selecting a first physical erasing unit group from a plurality of physical erasing unit groups, and performing an erase operation on the first physical erasing unit group, wherein the first physical erasing unit group includes a plurality of first physical erasing units, and the number of at least one second physical erasing unit on which the erase operation is performed at the same time point as the plurality of first physical erasing units is different from the number of the plurality of first physical erasing units.
Type: Grant
Filed: August 22, 2019
Date of Patent: September 21, 2021
Assignee: PHISON ELECTRONICS CORP.
Inventor: Chih-Kang Yeh
-
Patent number: 11119690
Abstract: Erasure coding for scaling-out of a geographically diverse data storage system is disclosed. Chunks can be stored according to a first erasure coding scheme in zones of a geographically diverse data storage system. In response to scaling-out the geographically diverse data storage system, chunks can be moved to store data in a more diverse manner. The more diverse chunk storage can facilitate changing storage from the first erasure coding scheme to a second erasure coding scheme. The second erasure coding scheme can have a lower storage overhead than the first erasure coding scheme. In an aspect, the erasure coding scheme change can occur by combining erasure coding code chunks having complementary coding matrices. Combining erasure coding code chunks having complementary coding matrices can consume fewer computing resources than re-encoding data chunks for the second erasure coding scheme in a conventional manner.
Type: Grant
Filed: October 31, 2019
Date of Patent: September 14, 2021
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Mikhail Danilov, Yohannes Altaye
-
Patent number: 11119922
Abstract: In a data processing system having a local first level cache which covers an address range of a backing store, a distributed second level cache has a plurality of distributed cache portions, each assigned as a home cache portion for a corresponding non-overlapping address sub-range of the address range of the backing store. Upon receipt of a read access request to a read-only address location of the backing store, the local first level cache is configured to, when the read-only address location misses in the local first level cache, send the read access request to a most local distributed cache portion of the plurality of distributed cache portions for the local first level cache to determine whether the read-only address location hits or misses in the most local distributed cache portion, in which the most local distributed cache portion is not the home cache portion for the read-only address location.
Type: Grant
Filed: February 21, 2020
Date of Patent: September 14, 2021
Assignee: NXP USA, Inc.
Inventors: Paul Kimelman, Brian Christopher Kahne
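The lookup order the abstract describes (probe the most-local portion first for read-only data, even when it is not the home portion) can be sketched as below. The address-interleaved home assignment and the replicate-on-fill behavior are assumptions for illustration:

```python
def home_portion(address, num_portions):
    # Assumed assignment: interleave non-overlapping sub-ranges by address.
    return address % num_portions

def read_read_only(address, local_idx, portions, backing_store):
    # Probe the most-local distributed cache portion first, even when it
    # is not the home portion for this read-only address.
    if address in portions[local_idx]:
        return portions[local_idx][address]
    # Miss locally: fall back to the home portion, filling from the
    # backing store if needed.
    home = home_portion(address, len(portions))
    data = portions[home].get(address)
    if data is None:
        data = backing_store[address]
        portions[home][address] = data
    # Replicate read-only data into the local portion for future hits.
    portions[local_idx][address] = data
    return data
```

Because the data is read-only, replicating it outside its home portion needs no invalidation protocol, which is what makes the local-first probe safe.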
-
Patent number: 11106392
Abstract: A data processing system includes: a host suitable for generating an initialization command and generating program mode information by selecting a program mode; a memory device including a plurality of memory cells storing single-level data and multiple-level data; and a controller suitable for: receiving the initialization command and the program mode information from the host; controlling the memory device to perform an initialization operation on the memory device in response to the initialization command; and controlling the memory device to perform a program operation on the memory device based on the program mode information after the initialization operation is performed.
Type: Grant
Filed: June 18, 2019
Date of Patent: August 31, 2021
Assignee: SK hynix Inc.
Inventor: Eu-Joon Byun
-
Patent number: 11106585
Abstract: A method, computer program product, and computer system for receiving, by a computing device, an IO request on a first node. It may be determined whether a virtual address for the IO request is in a virtual cache. A read to RAID may be issued using the virtual address when the virtual address for the IO request is not in the virtual cache. A return of a cached page associated with the virtual address may be issued when the virtual address for the IO request is in the virtual cache.
Type: Grant
Filed: October 31, 2019
Date of Patent: August 31, 2021
Assignee: EMC IP Holding Company, LLC
Inventors: Anton Kucherov, Ronen Gazit, Oran Baruch
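The hit/miss path above is a straightforward cache-aside lookup keyed by virtual address. A minimal sketch, with a stand-in `Raid` class and fill-on-miss behavior assumed for illustration:

```python
class Raid:
    # Minimal stand-in for the RAID layer; counts reads for illustration.
    def __init__(self, pages):
        self.pages = pages
        self.reads = 0

    def read(self, virtual_address):
        self.reads += 1
        return self.pages[virtual_address]

def handle_io(virtual_address, virtual_cache, raid):
    # Hit: return the cached page associated with the virtual address.
    if virtual_address in virtual_cache:
        return virtual_cache[virtual_address]
    # Miss: issue a read to RAID using the virtual address, then cache it.
    page = raid.read(virtual_address)
    virtual_cache[virtual_address] = page
    return page
```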
-
Patent number: 11099986
Abstract: A method of operating a storage unit having non-volatile random-access memory (NVRAM) and solid-state memory is provided. The method includes transferring contents of the NVRAM to the solid-state memory in response to detecting a power loss. The method includes, during the transferring, each of a plurality of channels in parallel: reading one or more of a plurality of logical unit numbers (LUNs), each corresponding to a portion of the NVRAM; performing an XOR of data of each of the one or more LUNs with data of a preceding LUN; and writing results of the XOR to the solid-state memory.
Type: Grant
Filed: April 12, 2019
Date of Patent: August 24, 2021
Assignee: Pure Storage, Inc.
Inventors: Yuhong Mao, Russell Sears
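The per-channel XOR chain can be sketched for a single channel as below. Treating the (nonexistent) predecessor of the first LUN as all zeros, so the first LUN is written as-is, is an assumption; the patent does not state how the first LUN is handled:

```python
def flush_channel(luns):
    # For one channel: XOR each LUN's NVRAM data with the preceding
    # LUN's data and write the result to solid-state memory.
    written = []
    prev = bytes(len(luns[0]))  # assumed zero predecessor for the first LUN
    for data in luns:
        xored = bytes(a ^ b for a, b in zip(data, prev))
        written.append(xored)
        prev = data
    return written
```

Since XOR is its own inverse, the original LUN contents can be recovered sequentially from the written chain on power-up.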
-
Patent number: 11099985
Abstract: A storage controller includes: at least one memory storing a set of instructions; and at least one processor configured to execute the set of instructions to: perform cache processing of storing, in a cache storage, data stored in a physical disk; specify a set of data that are adjacent to each other in a logical disk and are not adjacent to each other in the physical disk, among data stored in the cache storage, and set a group including the specified set of data; determine, at an opportunity in which all cached data among data belonging to the group become target data of deletion from the cache storage, a range in the physical disk in which the target data are stored in such a way that all data belonging to the group are continuously arranged in the physical disk; and write the target data into the determined range.
Type: Grant
Filed: December 17, 2018
Date of Patent: August 24, 2021
Assignee: NEC Platforms, Ltd.
Inventor: Toshinori Fukunaga
-
Patent number: 11099998
Abstract: A computer-implemented method includes caching data from a persistent storage device into a cache. The method also includes caching a physical address and a logical address of the data in the persistent storage device into the cache. The method further includes, in response to receiving an access request for the data, accessing the data cached in the cache using at least one of the physical address and the logical address. The embodiments of the present disclosure also provide an electronic apparatus and a computer program product.
Type: Grant
Filed: February 10, 2020
Date of Patent: August 24, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Wei Cui, Denny Dengyu Wang, Jian Gao, Lester Zhang, Chen Gong
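A cache entry that carries both its physical and logical address can be indexed twice so a request may use either key. A minimal sketch; the class and method names are illustrative assumptions:

```python
class DualKeyCache:
    # Caches data together with both its physical and logical address,
    # so an access request can look it up by either one.
    def __init__(self):
        self._by_physical = {}
        self._by_logical = {}

    def put(self, physical, logical, data):
        entry = (physical, logical, data)
        self._by_physical[physical] = entry
        self._by_logical[logical] = entry

    def get(self, physical=None, logical=None):
        if physical is not None and physical in self._by_physical:
            return self._by_physical[physical][2]
        if logical is not None and logical in self._by_logical:
            return self._by_logical[logical][2]
        return None  # miss on both keys
```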
-
Patent number: 11099766
Abstract: An apparatus is configured to initiate a first replication session to replicate data of a first consistency group in a first storage system to a second consistency group in a second storage system, to create an additional consistency group linked to the first consistency group in the first storage system, and to initiate a second replication session to replicate data of the additional consistency group to another consistency group in a third storage system. The additional consistency group linked to the first consistency group in some embodiments is periodically updated against the first consistency group. For example, in one or more embodiments the second consistency group is updated based at least in part on an active snapshot set of the first replication session, and the additional consistency group is updated based at least in part on the first consistency group.
Type: Grant
Filed: June 21, 2019
Date of Patent: August 24, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Xiangping Chen, Aharon Blitzer
-
Patent number: 11093386
Abstract: The technology described herein is directed towards consolidating garbage collection of data stored in data structures such as chunks, to facilitate efficient garbage collection. Low capacity utilization chunks are detected as source chunks, and live data of an object (e.g., in segments) is copied from the source chunks to new destination chunk(s). A source chunk is deleted when it no longer contains live data. By copying the data on an object-determined basis, new chunks contain more coherent object data, which increases the possibility of future chunk deletion without data copying or with a reduced amount of copying. When data segments of an object are adjacent, the consolidating garbage collector may unite them into a united segment, which reduces an amount of system metadata per object. New chunks can be associated with a generation number (e.g., indicating the oldest previous generation) to further facilitate more efficient future chunk deletion.
Type: Grant
Filed: December 18, 2019
Date of Patent: August 17, 2021
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Mikhail Danilov, Konstantin Buinov
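The segment-uniting step mentioned above is a classic interval merge: adjacent (offset, length) segments of one object collapse into a single united segment, shrinking per-object metadata. A minimal sketch under that assumption:

```python
def unite_segments(segments):
    # Merge adjacent (offset, length) segments of one object into
    # united segments; non-adjacent segments are kept separate.
    merged = []
    for offset, length in sorted(segments):
        if merged and merged[-1][0] + merged[-1][1] == offset:
            # Segment starts exactly where the previous one ends: unite.
            merged[-1] = (merged[-1][0], merged[-1][1] + length)
        else:
            merged.append((offset, length))
    return merged
```

After an object-determined copy lands an object's segments side by side in a destination chunk, this merge is what reduces the metadata entry count.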
-
Patent number: 11093169
Abstract: A processing device is configured to obtain an input-output operation that corresponds to a first metadata page and to identify a corresponding binary tree. The binary tree comprises a plurality of nodes each comprising a delta corresponding to a metadata page. The processing device is further configured to perform a read traversal process corresponding to the first metadata page which comprises traversing the binary tree based at least in part on the first metadata page while the binary tree is locked by a node insertion process corresponding to a second metadata page. The node insertion process comprises inserting a first node corresponding to the second metadata page into the binary tree. The read traversal process further comprises locating a second node of the binary tree that corresponds to the first metadata page and obtaining a first delta that corresponds to the first metadata page from the second node.
Type: Grant
Filed: April 29, 2020
Date of Patent: August 17, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Vladimir Shveidel, Ronen Gazit
-
Patent number: 11068392
Abstract: A system and method are disclosed for managing data in a non-volatile memory. The system may include a non-volatile memory having multiple non-volatile memory sub-drives. A controller of the memory system is configured to route incoming host data to a desired sub-drive, keep data within the same sub-drive as its source during a garbage collection operation, and re-map data between sub-drives, separate from any garbage collection operation, when a sub-drive overflows its designated amount of logical address space. The method may include initial data sorting of host writes into sub-drives based on any number of hot/cold sorting functions. In one implementation, the initial host write data sorting may be based on a host list of recently written blocks for each sub-drive, and a second write to a logical address encompassed by the list may trigger routing the host write to a hotter sub-drive than the current sub-drive.
Type: Grant
Filed: September 27, 2019
Date of Patent: July 20, 2021
Assignee: Western Digital Technologies, Inc.
Inventors: Sergey Anatolievich Gorobets, Liam Michael Parker
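The recently-written-list routing rule in the last sentence can be sketched as below. The sub-drive names, the `hotter_of` mapping, and the unbounded recent list are assumptions for illustration (a real controller would bound the list):

```python
def route_host_write(lba, sub_drive, recent_lbas, hotter_of):
    # Each sub-drive keeps a list of recently written logical addresses.
    # A second write to an address on that list is treated as hot data
    # and routed to a hotter sub-drive than the current one.
    if lba in recent_lbas[sub_drive]:
        target = hotter_of.get(sub_drive, sub_drive)
    else:
        target = sub_drive
    recent_lbas[target].append(lba)
    return target
```

Rewrite frequency is a cheap hotness signal: data rewritten while still on the recent list is likely to be rewritten again, so segregating it reduces garbage-collection write amplification in the colder sub-drives.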
-
Patent number: 11061616
Abstract: The present technology relates to a memory device and a method of operating the memory device. The memory device includes a target block manager configured to store a target block address on which a refresh operation is to be performed and output a refresh signal for the target block corresponding to the target block address when an auto refresh command is received, and a data transmission controller configured to output a transmission signal and a buffer control signal for transmitting data between the target block or the buffer block and the temporary buffer circuit in response to the refresh signal.
Type: Grant
Filed: December 27, 2019
Date of Patent: July 13, 2021
Assignee: SK hynix Inc.
Inventors: Won Jae Choi, Ki Chang Gwon
-
Patent number: 11055234
Abstract: Provided are a computer program product, system, and method for managing cache segments between a global queue and a plurality of local queues by training a machine learning module. A machine learning module is provided input comprising cache segment management information related to management of segments in the local queues by the processing units and accesses of the global queue to transfer cache segments between the local queues and the global queue, to output an optimum number parameter comprising an optimum number of segments to maintain in a local queue and a transfer number parameter comprising a number of cache segments to move between a local queue and the global queue. The machine learning module is retrained based on the cache segment management information to output an adjusted transfer number parameter and an adjusted optimum number parameter for the processing units.
Type: Grant
Filed: May 21, 2019
Date of Patent: July 6, 2021
Assignee: International Business Machines Corporation
Inventors: Lokesh M. Gupta, Kevin J. Ash, Beth A. Peterson, Matthew R. Craig
-
Patent number: 11036654
Abstract: The disclosed technology is generally directed to protection against unauthorized code. In one example of the technology, a read request to a restricted region of memory is detected. The read request is associated with a first processor. In response to detecting the read request to the restricted region of memory, a data value that causes an exception in response to execution by the first processor is provided.
Type: Grant
Filed: June 21, 2018
Date of Patent: June 15, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: George Thomas Letey, Felix Stefan Domke, Edmund B. Nightingale
-
Patent number: 11036400
Abstract: A backup storage includes persistent storage and a backup manager. The persistent storage stores backups of entities and an entity list that lists the entities. The backup manager obtains a restoration availability request from a user; filters the entity list based on an identity of the user to obtain an available entity list; identifies, based on user input obtained based on the available entity list, an entity of the entities; and restores the entity using the backups.
Type: Grant
Filed: April 26, 2019
Date of Patent: June 15, 2021
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Sudha Vamanraj Hebsur, Shelesh Chopra, Vipin Kumar Kaushal, Nitin Anand, Krishnendu Bagchi, Matthew Dickey Buchman, Pallavi Prakash, Gajendran Raghunathan, Niketan Narayan Kalaskar, Anand Reddy, Jaishree Balasubramanian
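The filter-then-restore flow above can be sketched in a few lines. The set-based permission model and the "latest backup wins" restore rule are assumptions for illustration, not details from the patent:

```python
def available_entities(entity_list, permissions, user):
    # Filter the stored entity list down to what this user may restore.
    # Assumed model: a per-user set of permitted entity names.
    allowed = permissions.get(user, set())
    return [e for e in entity_list if e in allowed]

def restore(entity, backups):
    # Restore the user-selected entity; assumed rule: use the most
    # recent backup in the entity's backup history.
    return backups[entity][-1]
```

Filtering before presenting the list means a user never sees, and therefore cannot select, an entity they lack permission to restore.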