Patents by Inventor Nicola Colella
Nicola Colella has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11966632
Abstract: Methods, systems, and devices are described to indicate, in an entry of logical-to-physical (L2P) mapping information stored at a host system, whether data associated with the entry is sequential to other data associated with a next entry or a previous entry. Each entry may have a third field, which may indicate whether the data is sequential. Based on the third field, the host system may determine whether data to be read from a memory system is sequential. The host system may transmit one read command to the memory system if the data is sequential, where the read command may include at least a portion of an L2P entry associated with the data. Similarly, based on the third field, the memory system may determine whether the data to be read is sequential, and may read additional, sequential data if the memory system determines that the data is sequential.
Type: Grant
Filed: December 20, 2021
Date of Patent: April 23, 2024
Assignee: Micron Technology, Inc.
Inventors: Roberto Izzi, Nicola Colella, Luca Porzio, Marco Onorato
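The sequential-entry flag described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation; all names (`coalesce_read`, `seq_next`, `ppa`) are hypothetical.

```python
# Hypothetical sketch: each host-side L2P entry carries a third field
# ("seq_next") marking whether its data is physically sequential with the
# next entry's data, letting the host issue one coalesced read command.

def coalesce_read(l2p, start, max_len):
    """Return (physical_address, length) for a single read command,
    extending the read across consecutive entries while the flag is set."""
    lba = start
    length = 1
    # Extend while the current entry says its data is sequential to the next.
    while length < max_len and l2p[lba]["seq_next"] and (lba + 1) in l2p:
        lba += 1
        length += 1
    return l2p[start]["ppa"], length

# Toy mapping: LBAs 0-2 are physically contiguous, LBA 3 is not.
l2p = {
    0: {"ppa": 100, "seq_next": True},
    1: {"ppa": 101, "seq_next": True},
    2: {"ppa": 102, "seq_next": False},
    3: {"ppa": 500, "seq_next": False},
}
print(coalesce_read(l2p, 0, 8))  # one command covering three sequential units
```

A host following this scheme would send one command for LBAs 0-2 instead of three, which is the latency saving the abstract describes.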
-
Patent number: 11928063
Abstract: A method includes: creating L2P tables while programming virtual blocks (VBs) across memory planes; creating an L2P bitmap for each VB, the L2P bitmap identifying logical addresses, within each L2P table, that belong to each VB; creating a VB bitmap for each L2P table, the VB bitmap identifying virtual blocks to which the respective L2P table points; creating an updated VB bitmap for a first L2P table based on changes to the first L2P table; determining that an entry in the VB bitmap is different than the entry in the updated VB bitmap, the entry corresponding to a particular VB; identifying an L2P bitmap corresponding to the particular VB; changing a bit within the identified L2P bitmap for an L2P mapping corresponding to the entry; and employing the identified L2P bitmap to determine L2P table(s) of the respective L2P tables that contain valid logical addresses for the particular VB.
Type: Grant
Filed: August 18, 2022
Date of Patent: March 12, 2024
Assignee: Micron Technology, Inc.
Inventors: Nicola Colella, Antonino Pollio, Gianfranco Ferrante
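The two-level bitmap bookkeeping in this abstract can be sketched roughly as below. The data layout and names (`build_bitmaps`, `apply_update`, slot tuples) are illustrative assumptions, using Python sets in place of packed bitmaps.

```python
# Hypothetical sketch: each L2P table gets a "VB bitmap" (which virtual
# blocks it points to) and each virtual block gets an "L2P bitmap" (which
# table slots still hold logical addresses that belong to it).

def build_bitmaps(tables):
    """tables: {table_id: [vb_id per slot]} -> (vb_bitmaps, l2p_bitmaps)."""
    vb_bitmaps = {t: set(slots) for t, slots in tables.items()}
    l2p_bitmaps = {}
    for t, slots in tables.items():
        for i, vb in enumerate(slots):
            l2p_bitmaps.setdefault(vb, set()).add((t, i))
    return vb_bitmaps, l2p_bitmaps

def apply_update(tables, vb_bitmaps, l2p_bitmaps, table_id, slot, new_vb):
    """Rewrite one slot; clear the bit for the VB that lost the mapping and
    set it for the VB that gained it, then refresh the table's VB bitmap."""
    old_vb = tables[table_id][slot]
    tables[table_id][slot] = new_vb
    l2p_bitmaps.setdefault(old_vb, set()).discard((table_id, slot))
    l2p_bitmaps.setdefault(new_vb, set()).add((table_id, slot))
    vb_bitmaps[table_id] = set(tables[table_id])
```

With this structure, finding the tables that still contain valid logical addresses for a VB is a lookup in its L2P bitmap rather than a scan of every table.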
-
Publication number: 20240061787
Abstract: A method includes: creating L2P tables while programming virtual blocks (VBs) across memory planes; creating an L2P bitmap for each VB, the L2P bitmap identifying logical addresses, within each L2P table, that belong to each VB; creating a VB bitmap for each L2P table, the VB bitmap identifying virtual blocks to which the respective L2P table points; creating an updated VB bitmap for a first L2P table based on changes to the first L2P table; determining that an entry in the VB bitmap is different than the entry in the updated VB bitmap, the entry corresponding to a particular VB; identifying an L2P bitmap corresponding to the particular VB; changing a bit within the identified L2P bitmap for an L2P mapping corresponding to the entry; and employing the identified L2P bitmap to determine L2P table(s) of the respective L2P tables that contain valid logical addresses for the particular VB.
Type: Application
Filed: August 18, 2022
Publication date: February 22, 2024
Inventors: Nicola Colella, Antonino Pollio, Gianfranco Ferrante
-
Publication number: 20240054070
Abstract: A memory device comprises a memory array and a memory controller operatively coupled to the memory array. The memory controller includes a processor configured to initiate read operations to the memory array; compare the number of read operations to a predetermined threshold number of read operations; initiate scanning memory pages of a block of memory cells for errors in response to reaching the threshold number of read operations for the block; and iteratively change the threshold number to a new threshold number, perform the new threshold number of read operations on the block of memory cells, and error scan memory pages associated with the last read operation of the new threshold number of read operations.
Type: Application
Filed: December 23, 2020
Publication date: February 15, 2024
Inventors: Hua Tan, Zhen Shu, Nicola Colella
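The iteratively changing scan threshold can be sketched as a small policy object. This is a simplified illustration under the assumption that the threshold grows by a fixed step each round; the class and method names are hypothetical.

```python
# Hypothetical sketch: count reads per block, trigger an error scan when the
# count reaches the current threshold, then switch to a new threshold for
# the next round of reads.

class BlockScanPolicy:
    def __init__(self, threshold, step):
        self.reads = 0            # reads since the last scan
        self.threshold = threshold
        self.step = step          # how the threshold changes each round

    def record_read(self):
        """Record one read; return True when the block should be scanned."""
        self.reads += 1
        if self.reads >= self.threshold:
            self.reads = 0
            self.threshold += self.step  # iteratively change the threshold
            return True
        return False

policy = BlockScanPolicy(threshold=3, step=2)
events = [policy.record_read() for _ in range(8)]
print(events)  # scans fire at the 3rd read, then at the 5th read after that
```

Raising the threshold over time would spread scan overhead out as a block ages, which is one plausible reading of the iterative scheme described above.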
-
Patent number: 11899577
Abstract: Methods, systems, and devices for selective garbage collection are described. A host system may determine that a battery level is below a threshold or determine whether a power parameter of a memory system that includes a memory device satisfies a criterion. The host system may set a value of a flag. The memory system may perform an access operation and identify the value of the flag. The memory system may determine whether performing a garbage collection procedure is permitted based on identifying the value of the flag.
Type: Grant
Filed: November 24, 2020
Date of Patent: February 13, 2024
Assignee: Micron Technology, Inc.
Inventors: Antonio Mauro, Luigi Costanzo, Nicola Colella
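The host-set flag gating garbage collection can be sketched in a few lines. Both function names and the battery-level criterion are illustrative assumptions; the abstract leaves the power parameter general.

```python
# Hypothetical sketch of the flag handshake: the host evaluates a power
# criterion and sets a flag; the memory system checks the flag before
# running garbage collection.

def set_gc_flag(battery_level, threshold):
    """Host side: permit GC only when the power parameter (here, a battery
    level) satisfies the criterion."""
    return battery_level >= threshold

def maybe_garbage_collect(flag, run_gc):
    """Memory-system side: run GC only if the host-set flag permits it."""
    if flag:
        run_gc()
        return True
    return False
```

The point of the split is that the memory system never has to know why GC is disallowed; it only reads the flag the host maintains.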
-
Patent number: 11886341
Abstract: Methods, systems, and devices for read operations for regions of a memory device are described. In some examples, a memory device may include a first cache for storing mappings between logical addresses and physical addresses of the memory device, and a second cache for storing indices associated with entries removed from the first cache. The memory device may include a controller configured to load mappings to the first cache upon receiving read commands. When the first cache is full, and when the memory device receives a read command, the controller may remove an entry from the first cache and may store an index associated with the removed entry to the second cache. The controller may then transmit a mapping associated with the index to a host device for use in an HPB operation.
Type: Grant
Filed: June 27, 2022
Date of Patent: January 30, 2024
Assignee: Micron Technology, Inc.
Inventors: Nicola Colella, Antonino Pollio, Hua Tan
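The two-cache arrangement can be sketched with an ordered dictionary standing in for the first cache. The eviction policy (oldest-first) and all names are assumptions for illustration; the abstract does not specify the replacement policy.

```python
from collections import OrderedDict

# Hypothetical sketch: the first cache holds LBA -> PPA mappings; when it is
# full, the evicted entry's index goes to a second cache so its mapping can
# later be handed to the host for host-side (HPB-style) lookups.

class L2PCache:
    def __init__(self, capacity):
        self.first = OrderedDict()  # LBA -> PPA mappings
        self.second = []            # indices of evicted entries
        self.capacity = capacity

    def load(self, lba, ppa):
        """Load a mapping on a read command. If the first cache is full,
        evict the oldest entry and record its index in the second cache.
        Returns the evicted index (to be reported to the host) or None."""
        evicted = None
        if lba not in self.first and len(self.first) >= self.capacity:
            evicted, _ = self.first.popitem(last=False)
            self.second.append(evicted)
        self.first[lba] = ppa
        self.first.move_to_end(lba)
        return evicted

cache = L2PCache(capacity=2)
cache.load(1, 10)
cache.load(2, 20)
print(cache.load(3, 30))  # evicts the oldest entry's index
```

Keeping only indices in the second cache keeps it small while still letting the controller reconstruct which mappings to push to the host.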
-
Publication number: 20240028215
Abstract: Methods, systems, and devices for data storage during power state transition of a memory system are described. A memory system may receive a command indicating a transition from a first power state to a second power state or a third power state. Upon receiving the command, the memory system may write a first set of data to a volatile memory of the memory system. For example, the first set of data may be a snapshot or a copy of one or more elements of a second set of data. The memory system may flush the first set of data from the volatile memory to a non-volatile memory of the memory system. The memory system may transition from the first power state to the second power state or the third power state and read the snapshot from the volatile memory or the non-volatile memory upon transitioning back to the first power state.
Type: Application
Filed: July 25, 2022
Publication date: January 25, 2024
Inventors: Nicola Colella, Rakeshkumar Dayabhai Vaghasiya
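The snapshot-and-flush flow can be sketched as a small state machine. This is a loose illustration only; the class name, dictionaries standing in for volatile and non-volatile memory, and the restore preference are all assumptions.

```python
# Hypothetical sketch: on a power-down command, snapshot working state into
# volatile memory, flush it to non-volatile memory, transition, and restore
# the snapshot when returning to the active state.

class MemorySystem:
    def __init__(self, state):
        self.state = dict(state)       # working data (the "second set")
        self.volatile_snapshot = None  # snapshot (the "first set")
        self.nonvolatile = None        # flushed copy surviving power loss
        self.power_state = "active"

    def enter_low_power(self, target):
        self.volatile_snapshot = dict(self.state)        # write snapshot
        self.nonvolatile = dict(self.volatile_snapshot)  # flush to NVM
        self.power_state = target

    def resume(self):
        # Prefer the volatile copy if it survived; fall back to NVM.
        src = self.volatile_snapshot if self.volatile_snapshot is not None \
            else self.nonvolatile
        self.state = dict(src)
        self.power_state = "active"
```

Flushing before the transition is what makes the deeper power state safe: even if volatile contents are lost, the snapshot can be read back from non-volatile memory.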
-
Publication number: 20230409242
Abstract: A processing device of a memory sub-system can monitor a plurality of received commands to identify a forced unit access command. The processing device can identify a metadata area of the memory device based on the forced unit access command. The processing device can also perform an action responsive to identifying a subsequent forced unit access command to the metadata area.
Type: Application
Filed: August 25, 2023
Publication date: December 21, 2023
Inventors: Luca Porzio, Roberto Izzi, Nicola Colella, Danilo Caraccio, Alessandro Orlando
-
Patent number: 11829646
Abstract: A processing device of a memory sub-system can monitor a plurality of received commands to identify a forced unit access command. The processing device can identify a metadata area of the memory device based on the forced unit access command. The processing device can also perform an action responsive to identifying a subsequent forced unit access command to the metadata area.
Type: Grant
Filed: April 25, 2022
Date of Patent: November 28, 2023
Assignee: Micron Technology, Inc.
Inventors: Luca Porzio, Nicola Colella, Dionisio Minopoli
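The monitoring loop from this abstract (which also appears in the related applications above and below) can be sketched as follows. The command dictionary format, the `on_repeat` callback, and all names are assumptions for illustration.

```python
# Hypothetical sketch: watch the command stream for forced unit access (FUA)
# commands; once one FUA has touched the metadata area, a subsequent FUA to
# that area triggers an action (e.g., a log flush or caching decision).

def watch_fua(commands, metadata_lbas, on_repeat):
    """commands: iterable of {"lba": int, "fua": bool} dicts.
    Returns how many times the action fired."""
    seen_metadata_fua = False
    fired = 0
    for cmd in commands:
        if cmd.get("fua") and cmd["lba"] in metadata_lbas:
            if seen_metadata_fua:
                on_repeat(cmd)  # subsequent FUA to the metadata area
                fired += 1
            seen_metadata_fua = True
    return fired
```

The key observation is that FUA commands tend to target filesystem metadata, so repeated FUA traffic to the same area is a usable signal about where metadata lives.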
-
Patent number: 11768627
Abstract: Methods, systems, and devices for using page line filler data are described. In some examples, a memory system may store data within a write buffer of the memory system. The memory system may initiate an operation to transfer the write buffer data to a memory device, for example, due to a command to perform a memory management operation (e.g., cache synchronization, context switching, or the like) from a host system. In some examples, a quantity of write buffer data may fail to satisfy a data size threshold. Thus, the memory system may aggregate the data in the write buffer with valid data from a block of the memory device associated with garbage collection. The memory system may aggregate the write buffer data with the garbage collection data until the aggregated data satisfies the data size threshold. The memory system may then write the aggregated data to the memory device.
Type: Grant
Filed: January 11, 2023
Date of Patent: September 26, 2023
Assignee: Micron Technology, Inc.
Inventors: Nicola Colella, Antonino Pollio, Gianfranco Ferrante
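The aggregation step in this abstract maps directly onto a short padding routine. Treating data as lists of fixed-size units and the function name are illustrative assumptions.

```python
# Hypothetical sketch: when buffered host data is smaller than a full page
# line, pad it with valid data taken from a garbage-collection source block
# until the aggregate reaches the programmable size.

def fill_page_line(write_buffer, gc_valid_data, page_size):
    """Aggregate write-buffer units with GC filler units up to page_size."""
    aggregated = list(write_buffer)
    gc_iter = iter(gc_valid_data)
    while len(aggregated) < page_size:
        try:
            aggregated.append(next(gc_iter))  # borrow valid GC data as filler
        except StopIteration:
            break  # no more filler available; write what we have
    return aggregated

print(fill_page_line([1, 2], [9, 8, 7], page_size=4))
```

Using data that garbage collection must relocate anyway avoids writing dummy padding, so the filler does useful work instead of wasting program cycles.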
-
Patent number: 11740837
Abstract: A processing device of a memory sub-system can monitor a plurality of received commands to identify a forced unit access command. The processing device can identify a metadata area of the memory device based on the forced unit access command. The processing device can also perform an action responsive to identifying a subsequent forced unit access command to the metadata area.
Type: Grant
Filed: July 1, 2022
Date of Patent: August 29, 2023
Assignee: Micron Technology, Inc.
Inventors: Luca Porzio, Roberto Izzi, Nicola Colella, Danilo Caraccio, Alessandro Orlando
-
Publication number: 20230244414
Abstract: Methods, systems, and devices for using page line filler data are described. In some examples, a memory system may store data within a write buffer of the memory system. The memory system may initiate an operation to transfer the write buffer data to a memory device, for example, due to a command to perform a memory management operation (e.g., cache synchronization, context switching, or the like) from a host system. In some examples, a quantity of write buffer data may fail to satisfy a data size threshold. Thus, the memory system may aggregate the data in the write buffer with valid data from a block of the memory device associated with garbage collection. The memory system may aggregate the write buffer data with the garbage collection data until the aggregated data satisfies the data size threshold. The memory system may then write the aggregated data to the memory device.
Type: Application
Filed: January 11, 2023
Publication date: August 3, 2023
Inventors: Nicola Colella, Antonino Pollio, Gianfranco Ferrante
-
Publication number: 20230236762
Abstract: Methods, systems, and devices for data relocation scheme selection for a memory system are described. A system may select, based on a fragmentation characteristic of data associated with a block of addresses, whether to perform a relocation associated with relocating invalid data, or to perform a relocation associated with refraining from relocating invalid data. A relocation associated with relocating invalid data may be selected for relatively more-fragmented data, which may avoid a relatively higher latency or processing load associated with evaluating validity or updating logical-to-physical mapping at a more-granular level.
Type: Application
Filed: January 25, 2022
Publication date: July 27, 2023
Inventors: Rakeshkumar Dayabhai Vaghasiya, Nicola Colella, Mani Raghavendra Aravapalli, Anil Sindhi, Dhruv Chauhan
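The selection rule can be sketched as a simple ratio test. The fragmentation metric (count of valid ranges over block length) and the threshold are assumptions; the abstract leaves the fragmentation characteristic unspecified.

```python
# Hypothetical sketch: choose a relocation scheme from a fragmentation
# characteristic. Heavily fragmented data is moved wholesale, invalid data
# included, to skip fine-grained validity checks and L2P updates; lightly
# fragmented data is relocated valid-only.

def choose_relocation(valid_ranges, block_len, frag_threshold):
    """valid_ranges: list of contiguous valid extents in the block."""
    fragmentation = len(valid_ranges) / max(block_len, 1)
    if fragmentation > frag_threshold:
        return "relocate_with_invalid"   # coarse move, cheaper bookkeeping
    return "relocate_valid_only"         # fine-grained move, less copying
```

The trade-off mirrors the abstract: copying some invalid data is cheaper than per-unit validity evaluation once fragmentation is high enough.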
-
Publication number: 20230195374
Abstract: Methods, systems, and devices are described to indicate, in an entry of logical-to-physical (L2P) mapping information stored at a host system, whether data associated with the entry is sequential to other data associated with a next entry or a previous entry. Each entry may have a third field, which may indicate whether the data is sequential. Based on the third field, the host system may determine whether data to be read from a memory system is sequential. The host system may transmit one read command to the memory system if the data is sequential, where the read command may include at least a portion of an L2P entry associated with the data. Similarly, based on the third field, the memory system may determine whether the data to be read is sequential, and may read additional, sequential data if the memory system determines that the data is sequential.
Type: Application
Filed: December 20, 2021
Publication date: June 22, 2023
Inventors: Roberto Izzi, Nicola Colella, Luca Porzio, Marco Onorato
-
Publication number: 20230015332
Abstract: Methods, systems, and devices for a split cache for address mapping data are described. A memory system may include a cache (e.g., including a first and second portion) for storing data that indicates a mapping between logical addresses associated with a host system and physical addresses of the memory system. The memory system may store data (e.g., the address mapping data) within the first portion of the cache. Additionally, the memory system may store an indication of whether the data is used for any access operations during a duration that the data is stored in the first portion of the cache. The memory system may transfer subsets of the data to the second portion of the cache if they are used for access operations during the duration.
Type: Application
Filed: August 11, 2022
Publication date: January 19, 2023
Inventors: Nicola Colella, Antonino Pollio
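The split-cache promotion logic can be sketched as below. The class shape, the drop-on-expiry behavior for unused entries, and all names are illustrative assumptions.

```python
# Hypothetical sketch: new mappings enter the first portion of the cache
# with a used flag; entries actually used for an access during their stay
# are promoted to the second portion at the end of the duration.

class SplitCache:
    def __init__(self):
        self.first = {}   # lba -> [ppa, used_flag]
        self.second = {}  # lba -> ppa (entries that proved useful)

    def insert(self, lba, ppa):
        self.first[lba] = [ppa, False]

    def access(self, lba):
        if lba in self.second:
            return self.second[lba]
        if lba in self.first:
            self.first[lba][1] = True  # record that the entry was used
            return self.first[lba][0]
        return None

    def end_of_duration(self):
        """Promote used entries to the second portion; drop the rest."""
        for lba, (ppa, used) in list(self.first.items()):
            if used:
                self.second[lba] = ppa
        self.first.clear()
```

The effect is that the second portion accumulates only mappings with demonstrated reuse, so speculative loads cannot crowd out proven ones.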
-
Patent number: 11556275
Abstract: Methods, systems, and devices for using page line filler data are described. In some examples, a memory system may store data within a write buffer of the memory system. The memory system may initiate an operation to transfer the write buffer data to a memory device, for example, due to a command to perform a memory management operation (e.g., cache synchronization, context switching, or the like) from a host system. In some examples, a quantity of write buffer data may fail to satisfy a data size threshold. Thus, the memory system may aggregate the data in the write buffer with valid data from a block of the memory device associated with garbage collection. The memory system may aggregate the write buffer data with the garbage collection data until the aggregated data satisfies the data size threshold. The memory system may then write the aggregated data to the memory device.
Type: Grant
Filed: May 18, 2021
Date of Patent: January 17, 2023
Assignee: Micron Technology, Inc.
Inventors: Nicola Colella, Antonino Pollio, Gianfranco Ferrante
-
Publication number: 20220405205
Abstract: Methods, systems, and devices for read operations for regions of a memory device are described. In some examples, a memory device may include a first cache for storing mappings between logical addresses and physical addresses of the memory device, and a second cache for storing indices associated with entries removed from the first cache. The memory device may include a controller configured to load mappings to the first cache upon receiving read commands. When the first cache is full, and when the memory device receives a read command, the controller may remove an entry from the first cache and may store an index associated with the removed entry to the second cache. The controller may then transmit a mapping associated with the index to a host device for use in an HPB operation.
Type: Application
Filed: June 27, 2022
Publication date: December 22, 2022
Inventors: Nicola Colella, Antonino Pollio, Hua Tan
-
Patent number: 11513952
Abstract: Methods, systems, and devices for data separation for garbage collection are described. A control component coupled to the memory array may identify a source block for a garbage collection procedure. In some cases, a first set of pages of the source block may be identified as a first type associated with a first access frequency and a second set of pages of the source block may be identified as a second type associated with a second access frequency. Once the pages are identified as either the first type or the second type, the first set of pages may be transferred to a first destination block, and the second set of pages may be transferred to a second destination block as part of the garbage collection procedure.
Type: Grant
Filed: July 1, 2020
Date of Patent: November 29, 2022
Assignee: Micron Technology, Inc.
Inventors: Nicola Colella, Antonino Pollio
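The two-type page separation can be sketched as a single pass over the source block. Representing pages as `(page_id, access_count)` pairs and the threshold-based classification are illustrative assumptions.

```python
# Hypothetical sketch: split a source block's valid pages by access
# frequency so frequently accessed ("hot") and rarely accessed ("cold")
# data land in different destination blocks during garbage collection.

def separate_for_gc(pages, hot_threshold):
    """pages: list of (page_id, access_count). Returns (hot, cold) lists of
    page ids destined for the two destination blocks."""
    hot_block, cold_block = [], []
    for page_id, access_count in pages:
        if access_count >= hot_threshold:
            hot_block.append(page_id)
        else:
            cold_block.append(page_id)
    return hot_block, cold_block

print(separate_for_gc([("a", 10), ("b", 1), ("c", 5)], hot_threshold=5))
```

Grouping pages by access frequency means future invalidations cluster in the hot block, which tends to reduce write amplification on later garbage collections.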
-
Publication number: 20220374163
Abstract: Methods, systems, and devices for using page line filler data are described. In some examples, a memory system may store data within a write buffer of the memory system. The memory system may initiate an operation to transfer the write buffer data to a memory device, for example, due to a command to perform a memory management operation (e.g., cache synchronization, context switching, or the like) from a host system. In some examples, a quantity of write buffer data may fail to satisfy a data size threshold. Thus, the memory system may aggregate the data in the write buffer with valid data from a block of the memory device associated with garbage collection. The memory system may aggregate the write buffer data with the garbage collection data until the aggregated data satisfies the data size threshold. The memory system may then write the aggregated data to the memory device.
Type: Application
Filed: May 18, 2021
Publication date: November 24, 2022
Inventors: Nicola Colella, Antonino Pollio, Gianfranco Ferrante
-
Publication number: 20220334773
Abstract: A processing device of a memory sub-system can monitor a plurality of received commands to identify a forced unit access command. The processing device can identify a metadata area of the memory device based on the forced unit access command. The processing device can also perform an action responsive to identifying a subsequent forced unit access command to the metadata area.
Type: Application
Filed: July 1, 2022
Publication date: October 20, 2022
Inventors: Luca Porzio, Roberto Izzi, Nicola Colella, Danilo Caraccio, Alessandro Orlando