Patents by Inventor Antonino Pollio

Antonino Pollio has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11928063
    Abstract: A method includes: creating L2P tables while programming virtual blocks (VBs) across memory planes; creating an L2P bitmap for each VB, the L2P bitmap identifying logical addresses, within each L2P table, that belong to each VB; creating a VB bitmap for each L2P table, the VB bitmap identifying virtual blocks to which the respective L2P table points; creating an updated VB bitmap for a first L2P table based on changes to the first L2P table; determining that an entry in the VB bitmap is different than the entry in the updated VB bitmap, the entry corresponding to a particular VB; identifying an L2P bitmap corresponding to the particular VB; changing a bit within the identified L2P bitmap for an L2P mapping corresponding to the entry; and employing the identified L2P bitmap to determine L2P table(s) of the respective L2P tables that contain valid logical addresses for the particular VB.
    Type: Grant
    Filed: August 18, 2022
    Date of Patent: March 12, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Nicola Colella, Antonino Pollio, Gianfranco Ferrante
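
A minimal Python sketch of the bookkeeping described in the abstract above: each virtual block (VB) keeps an L2P bitmap of the table entries that point into it, each L2P table keeps a VB bitmap of the virtual blocks it references, and XOR-ing the stored and updated VB bitmaps reveals which VBs changed. The class and method names, the bit layout, and the direct per-write update are illustrative assumptions, not details taken from the patent.

```python
class L2PBitmapTracker:
    def __init__(self, num_tables, entries_per_table, num_vbs):
        # One L2P table = a fixed-size array of (vb, offset) mappings (None = unmapped).
        self.tables = [[None] * entries_per_table for _ in range(num_tables)]
        self.vb_bitmap = [0] * num_tables   # per table: bit v set -> table points into VB v
        self.l2p_bitmap = [0] * num_vbs     # per VB: bit set -> (table, entry) slot is valid for that VB
        self.entries_per_table = entries_per_table

    def _slot_bit(self, table, entry):
        return 1 << (table * self.entries_per_table + entry)

    def write(self, table, entry, vb, offset):
        """Program a mapping and keep both bitmaps consistent."""
        old = self.tables[table][entry]
        if old is not None:
            # The old mapping no longer points into its VB at this slot.
            self.l2p_bitmap[old[0]] &= ~self._slot_bit(table, entry)
        self.tables[table][entry] = (vb, offset)
        self.l2p_bitmap[vb] |= self._slot_bit(table, entry)
        # Recompute the VB bitmap for this table from its current entries.
        updated = 0
        for mapping in self.tables[table]:
            if mapping is not None:
                updated |= 1 << mapping[0]
        # Comparing the stored VB bitmap with the updated one reveals which
        # virtual blocks gained or lost references in this table.
        changed_vbs = self.vb_bitmap[table] ^ updated
        self.vb_bitmap[table] = updated
        return changed_vbs

    def tables_with_valid_addresses(self, vb):
        """Which L2P tables still hold valid logical addresses for this VB?"""
        bits, slot, tables = self.l2p_bitmap[vb], 0, set()
        while bits:
            if bits & 1:
                tables.add(slot // self.entries_per_table)
            bits >>= 1
            slot += 1
        return sorted(tables)

tracker = L2PBitmapTracker(num_tables=2, entries_per_table=4, num_vbs=3)
tracker.write(table=0, entry=1, vb=2, offset=5)
tracker.write(table=1, entry=0, vb=2, offset=9)
print(tracker.tables_with_valid_addresses(2))   # [0, 1]
```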
  • Publication number: 20240069784
    Abstract: Methods, systems, and devices for idle mode temperature control for memory systems are described. A memory system may implement the use of one or more dummy access commands to reduce the effects of errors introduced by temperature changes while the memory system is in an idle mode. For example, performing one or more access commands, such as one or more read commands, may increase a temperature of a memory device and support a desired operating temperature for the memory device while the memory system is in the idle mode. The memory system may measure the temperature of the memory device during the idle mode. If the memory system determines that the temperature of the memory device has fallen below a threshold temperature, the memory system may issue a quantity of dummy access commands to the memory device, and the corresponding dummy access operations may result in a temperature increase at the memory device.
    Type: Application
    Filed: August 31, 2022
    Publication date: February 29, 2024
    Inventors: Francesco Basso, Antonino Pollio, Francesco Falanga, Massimo Iaculo
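
A minimal sketch of the idle-mode warm-up loop described in the abstract above, using a toy device model. The temperature threshold, the number of dummy reads per pass, and the device interface (read_temperature, issue_read) are assumptions made for illustration only.

```python
import random

THRESHOLD_C = 25.0          # assumed minimum desired idle-mode temperature
DUMMY_READS_PER_PASS = 16   # assumed quantity of dummy access commands per check

class ToyMemoryDevice:
    """Stand-in for a memory device: accesses dissipate a little heat."""
    def __init__(self, temp_c=20.0):
        self.temp_c = temp_c

    def read_temperature(self):
        self.temp_c -= 0.05           # slow cooling while idle
        return self.temp_c

    def issue_read(self, address):
        self.temp_c += 0.2            # each dummy access warms the die slightly

def idle_temperature_control(device, checks=10):
    for _ in range(checks):           # each iteration stands in for one idle-mode poll
        if device.read_temperature() < THRESHOLD_C:
            # Issue dummy reads purely to raise the die temperature; the data is discarded.
            for _ in range(DUMMY_READS_PER_PASS):
                device.issue_read(random.randrange(1 << 20))

device = ToyMemoryDevice()
idle_temperature_control(device)
print(f"temperature after idle control: {device.temp_c:.1f} C")
```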
  • Publication number: 20240061787
    Abstract: A method includes: creating L2P tables while programming virtual blocks (VBs) across memory planes; creating an L2P bitmap for each VB, the L2P bitmap identifying logical addresses, within each L2P table, that belong to each VB; creating a VB bitmap for each L2P table, the VB bitmap identifying virtual blocks to which the respective L2P table points; creating an updated VB bitmap for a first L2P table based on changes to the first L2P table; determining that an entry in the VB bitmap is different than the entry in the updated VB bitmap, the entry corresponding to a particular VB; identifying an L2P bitmap corresponding to the particular VB; changing a bit within the identified L2P bitmap for an L2P mapping corresponding to the entry; and employing the identified L2P bitmap to determine L2P table(s) of the respective L2P tables that contain valid logical addresses for the particular VB.
    Type: Application
    Filed: August 18, 2022
    Publication date: February 22, 2024
    Inventors: Nicola Colella, Antonino Pollio, Gianfranco Ferrante
  • Patent number: 11886341
    Abstract: Methods, systems, and devices for read operations for regions of a memory device are described. In some examples, a memory device may include a first cache for storing mappings between logical addresses and physical addresses of the memory device, and a second cache for storing indices associated with entries removed from the first cache. The memory device may include a controller configured to load mappings to the first cache upon receiving read commands. When the first cache is full, and when the memory device receives a read command, the controller may remove an entry from the first cache and may store an index associated with the removed entry to the second cache. The controller may then transmit a mapping associated with the index to a host device for use in an HPB (host performance booster) operation.
    Type: Grant
    Filed: June 27, 2022
    Date of Patent: January 30, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Nicola Colella, Antonino Pollio, Hua Tan
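
A rough sketch of the two-cache arrangement described in the abstract above: an LRU-style first cache of L2P mappings and a second cache holding the indices of evicted entries, whose mappings are handed to the host for HPB reads. The capacities, the LRU eviction policy, and all names are assumptions, not details confirmed by the patent.

```python
from collections import OrderedDict

class HpbCaches:
    def __init__(self, first_capacity, second_capacity):
        self.first = OrderedDict()    # logical region index -> physical address
        self.second = []              # indices recently evicted from the first cache
        self.first_capacity = first_capacity
        self.second_capacity = second_capacity

    def on_read(self, index, load_mapping, send_to_host):
        if index in self.first:                    # hit: refresh recency and return
            self.first.move_to_end(index)
            return self.first[index]
        if len(self.first) >= self.first_capacity:
            evicted_index, evicted_mapping = self.first.popitem(last=False)
            self.second.append(evicted_index)      # remember which region was evicted
            if len(self.second) > self.second_capacity:
                self.second.pop(0)
            # Hand the evicted region's mapping to the host so it can issue
            # HPB reads using the physical address directly.
            send_to_host(evicted_index, evicted_mapping)
        mapping = load_mapping(index)              # load the mapping on a read command
        self.first[index] = mapping
        return mapping

caches = HpbCaches(first_capacity=2, second_capacity=4)
for region in (0, 1, 2, 0):
    caches.on_read(region, load_mapping=lambda i: 0x1000 + i, send_to_host=print)
```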
  • Patent number: 11768627
    Abstract: Methods, systems, and devices for using page line filler data are described. In some examples, a memory system may store data within a write buffer of the memory system. The memory system may initiate an operation to transfer the write buffer data to a memory device, for example, due to a command to perform a memory management operation (e.g., cache synchronization, context switching, or the like) from a host system. In some examples, a quantity of write buffer data may fail to satisfy a data size threshold. Thus, the memory system may aggregate the data in the write buffer with valid data from a block of the memory device associated with garbage collection. The memory system may aggregate the write buffer data with the garbage collection data until the aggregated data satisfies the data size threshold. The memory system may then write the aggregated data to the memory device.
    Type: Grant
    Filed: January 11, 2023
    Date of Patent: September 26, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Nicola Colella, Antonino Pollio, Gianfranco Ferrante
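
A small sketch of the page-line filler idea described in the abstract above: host data below the data size threshold is topped up with valid data taken from a garbage-collection source block before being programmed. The 16 KiB threshold, the 4 KiB chunk size, and the helper names are assumptions for the example.

```python
PAGE_LINE_BYTES = 16 * 1024   # assumed programming unit / data size threshold
CHUNK_BYTES = 4 * 1024        # assumed granularity of valid GC data

def flush_with_filler(host_data, gc_valid_chunks, program_page_line):
    """Aggregate buffered host data with GC valid data until it fills a page line."""
    aggregated = bytearray(host_data)
    relocated = []
    while len(aggregated) < PAGE_LINE_BYTES and gc_valid_chunks:
        chunk = gc_valid_chunks.pop(0)        # valid data from the GC source block
        aggregated.extend(chunk)
        relocated.append(chunk)
    # Pad with erased-state bytes if no GC data is left to fill the page line.
    aggregated.extend(b"\xff" * max(0, PAGE_LINE_BYTES - len(aggregated)))
    program_page_line(bytes(aggregated))
    return relocated    # caller would update L2P entries for the relocated GC data

# Example: 8 KiB of buffered host data topped up with two 4 KiB GC chunks.
flush_with_filler(b"\x00" * (8 * 1024),
                  [b"\xaa" * CHUNK_BYTES, b"\xbb" * CHUNK_BYTES, b"\xcc" * CHUNK_BYTES],
                  lambda page: print("programmed", len(page), "bytes"))
```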
  • Publication number: 20230244414
    Abstract: Methods, systems, and devices for using page line filler data are described. In some examples, a memory system may store data within a write buffer of the memory system. The memory system may initiate an operation to transfer the write buffer data to a memory device, for example, due to a command to perform a memory management operation (e.g., cache synchronization, context switching, or the like) from a host system. In some examples, a quantity of write buffer data may fail to satisfy a data size threshold. Thus, the memory system may aggregate the data in the write buffer with valid data from a block of the memory device associated with garbage collection. The memory system may aggregate the write buffer data with the garbage collection data until the aggregated data satisfies the data size threshold. The memory system may then write the aggregated data to the memory device.
    Type: Application
    Filed: January 11, 2023
    Publication date: August 3, 2023
    Inventors: Nicola Colella, Antonino Pollio, Gianfranco Ferrante
  • Patent number: 11657878
    Abstract: Methods, systems, and devices for initialization techniques for memory devices are described. A memory system may include a memory array on a first die and a controller on a second die, where the second die is coupled with the first die. The controller may perform an initialization procedure based on operating instructions stored within the memory system. For example, the controller may read a first set of operating instructions from read-only memory on the second die. The controller may obtain a second set of operating instructions stored at a memory block of the memory array on the first die, with the memory block indicated by the first set of operating instructions. The controller may complete or at least further the initialization procedure based on the second set of operating instructions.
    Type: Grant
    Filed: January 27, 2022
    Date of Patent: May 23, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Antonino Pollio, Giuseppe Vito Portacci, Mauro Luigi Sali, Alessandro Magnavacca
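
A schematic sketch of the two-stage initialization flow described in the abstract above: a first set of operating instructions in controller ROM points at a memory-array block that holds the second set, which then completes the initialization. The data layout, field names, and step names are invented for the example.

```python
CONTROLLER_ROM = {
    # First set of operating instructions, stored in ROM on the controller die.
    "stage1": {
        "boot_block": 7,     # block of the memory array holding the second stage
    },
}

MEMORY_ARRAY = {
    # Second set of operating instructions, stored on the memory-array die.
    7: {"stage2": ["init_l2p_tables", "init_caches", "enable_host_interface"]},
}

def initialize():
    stage1 = CONTROLLER_ROM["stage1"]          # read from on-die read-only memory
    block = stage1["boot_block"]               # block indicated by the first stage
    stage2 = MEMORY_ARRAY[block]["stage2"]     # fetched from the memory-array die
    for step in stage2:                        # complete (or further) the init procedure
        print("running init step:", step)

initialize()
```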
  • Publication number: 20230015332
    Abstract: Methods, systems, and devices for a split cache for address mapping data are described. A memory system may include a cache (e.g., including a first and second portion) for storing data that indicates a mapping between logical addresses associated with a host system and physical addresses of the memory system. The memory system may store data (e.g., the address mapping data) within the first portion of the cache. Additionally, the memory system may store an indication of whether the data is used for any access operations during a duration that the data is stored in the first portion of the cache. The memory system may transfer subsets of the data to the second portion of the cache if they are used for access operations during the duration.
    Type: Application
    Filed: August 11, 2022
    Publication date: January 19, 2023
    Inventors: Nicola Colella, Antonino Pollio
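
An illustrative model of the split cache described in the abstract above: address-mapping entries land in a first portion together with a flag recording whether they were used, and entries that were used during their residency are transferred to a second portion on eviction. Capacities, eviction order, and all names are assumptions.

```python
from collections import OrderedDict

class SplitMappingCache:
    def __init__(self, first_cap=4, second_cap=8):
        self.first = OrderedDict()    # logical region -> [physical address, used flag]
        self.second = OrderedDict()   # promoted (recently useful) mappings
        self.first_cap = first_cap
        self.second_cap = second_cap

    def insert(self, region, phys):
        if len(self.first) >= self.first_cap:
            evicted_region, (evicted_phys, used) = self.first.popitem(last=False)
            if used:
                # Used during its stay in the first portion -> keep it in the second.
                self.second[evicted_region] = evicted_phys
                if len(self.second) > self.second_cap:
                    self.second.popitem(last=False)
        self.first[region] = [phys, False]

    def lookup(self, region):
        if region in self.first:
            self.first[region][1] = True      # record that the mapping was used
            return self.first[region][0]
        return self.second.get(region)

cache = SplitMappingCache(first_cap=2)
cache.insert("region0", 0xA000)
cache.insert("region1", 0xB000)
cache.lookup("region0")              # mark region0 as used while in the first portion
cache.insert("region2", 0xC000)      # evicts region0 -> promoted to the second portion
print(cache.lookup("region0"))       # 40960 (0xA000), served from the second portion
```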
  • Patent number: 11556275
    Abstract: Methods, systems, and devices for using page line filler data are described. In some examples, a memory system may store data within a write buffer of the memory system. The memory system may initiate an operation to transfer the write buffer data to a memory device, for example, due to a command to perform a memory management operation (e.g., cache synchronization, context switching, or the like) from a host system. In some examples, a quantity of write buffer data may fail to satisfy a data size threshold. Thus, the memory system may aggregate the data in the write buffer with valid data from a block of the memory device associated with garbage collection. The memory system may aggregate the write buffer data with the garbage collection data until the aggregated data satisfies the data size threshold. The memory system may then write the aggregated data to the memory device.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: January 17, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Nicola Colella, Antonino Pollio, Gianfranco Ferrante
  • Publication number: 20220406388
    Abstract: Methods, systems, and devices for setting switching for single-level cells (SLCs) are described. A memory system may receive an access command from a host. The access command may correspond to an SLC block or to a multiple-level cell block. If the access command corresponds to an SLC block, the memory system may modify the access command to include one or more bits indicating a setting to use for performing the access operation corresponding to the access command. The setting may define one or more operating parameters for performing the access operation. To indicate the setting, the memory system may use bits that would otherwise indicate a page address for multiple-level cell blocks. The memory system may issue the access command to a memory device, which may perform the access operation using the setting indicated in the one or more bits included by the memory system.
    Type: Application
    Filed: May 4, 2022
    Publication date: December 22, 2022
    Inventors: Umberto Siciliani, Tao Liu, Ting Luo, Dionisio Minopoli, Giuseppe D'Eliseo, Giuseppe Ferrari, Walter Di'Francesco, Antonino Pollio, Luigi Esposito, Anna Scalesse, Allison J. Olson, Anna Chiara Siviero
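
A hedged sketch of the setting-switching idea described in the abstract above: for SLC blocks, the bits that would otherwise carry a multiple-level-cell page address are reused to select an operating-parameter setting. The bit positions, command layout, and setting table are invented for illustration and do not reflect any real command format.

```python
SLC_SETTINGS = {0b00: "default", 0b01: "fast_program",
                0b10: "low_power", 0b11: "high_endurance"}

PAGE_BITS_SHIFT = 16                    # assumed position of the page-address bits
PAGE_BITS_MASK = 0b11 << PAGE_BITS_SHIFT

def build_access_command(block, page, is_slc, slc_setting=0b00):
    command = block << 20               # assumed layout: block number in the upper bits
    if is_slc:
        # SLC blocks store one logical page per cell, so the page-address
        # bits are free to encode the setting to apply for this operation.
        command |= slc_setting << PAGE_BITS_SHIFT
    else:
        command |= (page << PAGE_BITS_SHIFT) & PAGE_BITS_MASK
    return command

def device_decode(command, is_slc):
    field = (command & PAGE_BITS_MASK) >> PAGE_BITS_SHIFT
    return SLC_SETTINGS[field] if is_slc else f"page {field}"

cmd = build_access_command(block=3, page=0, is_slc=True, slc_setting=0b01)
print(device_decode(cmd, is_slc=True))   # fast_program
```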
  • Publication number: 20220405205
    Abstract: Methods, systems, and devices for read operations for regions of a memory device are described. In some examples, a memory device may include a first cache for storing mappings between logical addresses and physical addresses of the memory device, and a second cache for storing indices associated with entries removed from the first cache. The memory device may include a controller configured to load mappings to the first cache upon receiving read commands. When the first cache is full, and when the memory device receives a read command, the controller may remove an entry from the first cache and may store an index associated with the removed entry to the second cache. The controller may then transmit a mapping associated with the index to a host device for use in an HPB operation.
    Type: Application
    Filed: June 27, 2022
    Publication date: December 22, 2022
    Inventors: Nicola Colella, Antonino Pollio, Hua Tan
  • Patent number: 11513952
    Abstract: Methods, systems, and devices for data separation for garbage collection are described. A control component coupled to the memory array may identify a source block for a garbage collection procedure. In some cases, a first set of pages of the source block may be identified as a first type associated with a first access frequency and a second set of pages of the source block may be identified as a second type associated with a second access frequency. Once the pages are identified as either the first type or the second type, the first set of pages may be transferred to a first destination block, and the second set of pages may be transferred to a second destination block as part of the garbage collection procedure.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: November 29, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Nicola Colella, Antonino Pollio
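
A short sketch of the data-separation idea described in the abstract above: during garbage collection, valid pages from the source block are classified by access frequency and relocated to two different destination blocks. The access-count threshold and the data shapes are assumptions made for the example.

```python
HOT_THRESHOLD = 10   # assumed access count separating "hot" from "cold" pages

def garbage_collect(source_block, hot_destination, cold_destination):
    """source_block: list of (page_data, access_count, is_valid) tuples."""
    for page_data, access_count, is_valid in source_block:
        if not is_valid:
            continue                              # invalid pages are simply reclaimed
        if access_count >= HOT_THRESHOLD:
            hot_destination.append(page_data)     # first type -> first destination block
        else:
            cold_destination.append(page_data)    # second type -> second destination block
    source_block.clear()                          # the source block can now be erased

hot, cold = [], []
garbage_collect([("A", 42, True), ("B", 1, True), ("C", 99, False)], hot, cold)
print(hot, cold)   # ['A'] ['B']
```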
  • Publication number: 20220374163
    Abstract: Methods, systems, and devices for using page line filler data are described. In some examples, a memory system may store data within a write buffer of the memory system. The memory system may initiate an operation to transfer the write buffer data to a memory device, for example, due to a command to perform a memory management operation (e.g., cache synchronization, context switching, or the like) from a host system. In some examples, a quantity of write buffer data may fail to satisfy a data size threshold. Thus, the memory system may aggregate the data in the write buffer with valid data from a block of the memory device associated with garbage collection. The memory system may aggregate the write buffer data with the garbage collection data until the aggregated data satisfies the data size threshold. The memory system may then write the aggregated data to the memory device.
    Type: Application
    Filed: May 18, 2021
    Publication date: November 24, 2022
    Inventors: Nicola Colella, Antonino Pollio, Gianfranco Ferrante
  • Patent number: 11429528
    Abstract: Methods, systems, and devices for a split cache for address mapping data are described. A memory system may include a cache (e.g., including a first and second portion) for storing data that indicates a mapping between logical addresses associated with a host system and physical addresses of the memory system. The memory system may store data (e.g., the address mapping data) within the first portion of the cache. Additionally, the memory system may store an indication of whether the data is used for any access operations during a duration that the data is stored in the first portion of the cache. The memory system may transfer subsets of the data to the second portion of the cache if they are used for access operations during the duration.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: August 30, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Nicola Colella, Antonino Pollio
  • Publication number: 20220223211
    Abstract: Methods, systems, and devices for initialization techniques for memory devices are described. A memory system may include a memory array on a first die and a controller on a second die, where the second die is coupled with the first die. The controller may perform an initialization procedure based on operating instructions stored within the memory system. For example, the controller may read a first set of operating instructions from read-only memory on the second die. The controller may obtain a second set of operating instructions stored at a memory block of the memory array on the first die, with the memory block indicated by the first set of operating instructions. The controller may complete or at least further the initialization procedure based on the second set of operating instructions.
    Type: Application
    Filed: January 27, 2022
    Publication date: July 14, 2022
    Inventors: Antonino Pollio, Giuseppe Vito Portacci, Mauro Luigi Sali, Alessandro Magnavacca
  • Patent number: 11379367
    Abstract: Methods, systems, and devices for read operations for regions of a memory device are described. In some examples, a memory device may include a first cache for storing mappings between logical addresses and physical addresses of the memory device, and a second cache for storing indices associated with entries removed from the first cache. The memory device may include a controller configured to load mappings to the first cache upon receiving read commands. When the first cache is full, and when the memory device receives a read command, the controller may remove an entry from the first cache and may store an index associated with the removed entry to the second cache. The controller may then transmit a mapping associated with the index to a host device for use in an HPB operation.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: July 5, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Nicola Colella, Antonino Pollio, Hua Tan
  • Publication number: 20220156196
    Abstract: Methods, systems, and devices for a split cache for address mapping data are described. A memory system may include a cache (e.g., including a first and second portion) for storing data that indicates a mapping between logical addresses associated with a host system and physical addresses of the memory system. The memory system may store data (e.g., the address mapping data) within the first portion of the cache. Additionally, the memory system may store an indication of whether the data is used for any access operations during a duration that the data is stored in the first portion of the cache. The memory system may transfer subsets of the data to the second portion of the cache if they are used for access operations during the duration.
    Type: Application
    Filed: November 19, 2020
    Publication date: May 19, 2022
    Inventors: Nicola Colella, Antonino Pollio
  • Publication number: 20220156185
    Abstract: Methods, systems, and devices for read operations for regions of a memory device are described. In some examples, a memory device may include a first cache for storing mappings between logical addresses and physical addresses of the memory device, and a second cache for storing indices associated with entries removed from the first cache. The memory device may include a controller configured to load mappings to the first cache upon receiving read commands. When the first cache is full, and when the memory device receives a read command, the controller may remove an entry from the first cache and may store an index associated with the removed entry to the second cache. The controller may then transmit a mapping associated with the index to a host device for use in an HPB operation.
    Type: Application
    Filed: November 19, 2020
    Publication date: May 19, 2022
    Inventors: Nicola Colella, Antonino Pollio, Hua Tan
  • Patent number: 11238940
    Abstract: Methods, systems, and devices for initialization techniques for memory devices are described. A memory system may include a memory array on a first die and a controller on a second die, where the second die is coupled with the first die. The controller may perform an initialization procedure based on operating instructions stored within the memory system. For example, the controller may read a first set of operating instructions from read-only memory on the second die. The controller may obtain a second set of operating instructions stored at a memory block of the memory array on the first die, with the memory block indicated by the first set of operating instructions. The controller may complete or at least further the initialization procedure based on the second set of operating instructions.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: February 1, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Antonino Pollio, Giuseppe Vito Portacci, Mauro Luigi Sali, Alessandro Magnavacca
  • Publication number: 20220004493
    Abstract: Methods, systems, and devices for data separation for garbage collection are described. A control component coupled to the memory array may identify a source block for a garbage collection procedure. In some cases, a first set of pages of the source block may be identified as a first type associated with a first access frequency and a second set of pages of the source block may be identified as a second type associated with a second access frequency. Once the pages are identified as either the first type or the second type, the first set of pages may be transferred to a first destination block, and the second set of pages may be transferred to a second destination block as part of the garbage collection procedure.
    Type: Application
    Filed: July 1, 2020
    Publication date: January 6, 2022
    Inventors: Nicola Colella, Antonino Pollio