Patents by Inventor Nicola Del Gatto
Nicola Del Gatto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11977769. Abstract: A memory controller may calculate the sum of a first number of entries stored in a read buffer and a second number of entries stored in a write buffer. If the sum is less than a first threshold and the read/write buffer is not full of entries, then the memory controller can request read/write commands from a host computing device. If the sum is not less than the first threshold, or the read/write buffer is full of entries, then the memory controller can assert backpressure to stop the flow of newly incoming read/write commands from the host computing device. Additionally, or alternatively, the memory controller may dequeue a write command entry only if the number of write command entries stored in a write command FIFO memory is greater than a second threshold. Type: Grant. Filed: September 6, 2022. Date of Patent: May 7, 2024. Assignee: Micron Technology, Inc. Inventor: Nicola Del Gatto.
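The flow-control scheme in the abstract above can be sketched in a few lines. This is a minimal illustrative model, not the patented implementation; all class, method, and threshold names are assumptions chosen for clarity.

```python
# Hypothetical sketch of the buffer-occupancy backpressure scheme:
# request new commands only while combined buffer occupancy is below a
# threshold, and dequeue writes only when the write FIFO is deep enough.

from collections import deque

class MemoryControllerFlowControl:
    def __init__(self, first_threshold, second_threshold, rw_buffer_capacity):
        self.first_threshold = first_threshold      # cap on combined buffer occupancy
        self.second_threshold = second_threshold    # minimum write-FIFO depth before dequeue
        self.rw_buffer_capacity = rw_buffer_capacity
        self.read_buffer = deque()                  # pending read entries
        self.write_buffer = deque()                 # pending write entries
        self.rw_buffer = deque()                    # incoming read/write command buffer
        self.write_fifo = deque()                   # write command FIFO

    def should_request_commands(self):
        """True while the sum of read and write entries is below the first
        threshold and the read/write buffer has free entries; otherwise the
        controller would assert backpressure instead."""
        total = len(self.read_buffer) + len(self.write_buffer)
        return (total < self.first_threshold
                and len(self.rw_buffer) < self.rw_buffer_capacity)

    def try_dequeue_write(self):
        """Dequeue a write entry only when the write FIFO holds more entries
        than the second threshold; returns None otherwise."""
        if len(self.write_fifo) > self.second_threshold:
            return self.write_fifo.popleft()
        return None
```

Batching writes behind the second threshold lets the controller amortize write costs, while the occupancy check keeps the host from overrunning the buffers.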
-
Publication number: 20240126441. Abstract: An apparatus can include a plurality of memory devices and a memory controller coupled to the plurality of memory devices via a plurality of memory channels. The plurality of memory channels can each be organized as a plurality of channel groups that can be operated as independent RAS channels (e.g., channels for independent RAS accesses). Data received at the memory controller via different memory channels of one RAS channel can be aligned at various circuits and/or components of the memory controller. Type: Application. Filed: October 18, 2022. Publication date: April 18, 2024. Inventors: Emanuele Confalonieri, Antonino Caprì, Nicola Del Gatto, Federica Cresci, Massimiliano Turconi.
-
Patent number: 11954035. Abstract: Methods, systems, and devices for cache architectures for memory devices are described. For example, a memory device may include a main array having a first set of memory cells, a cache having a second set of memory cells, and a cache delay register configured to store an indication of cache addresses associated with recently performed access operations. In some examples, the cache delay register may be operated as a first-in-first-out (FIFO) register of cache addresses, where a cache address associated with a performed access operation may be added to the beginning of the FIFO register, and a cache address at the end of the FIFO register may be purged. Information associated with access operations on the main array may be maintained in the cache, and accessed directly (e.g., without another access of the main array), at least as long as the cache address is present in the cache delay register. Type: Grant. Filed: October 18, 2022. Date of Patent: April 9, 2024. Assignee: Micron Technology, Inc. Inventor: Nicola Del Gatto.
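The cache delay register described above maps naturally onto a bounded FIFO. The sketch below is a hedged illustration under assumed names; it is not the patented circuit, only a model of the FIFO behavior the abstract describes.

```python
# Illustrative model of a cache delay register: a fixed-depth FIFO of
# recently accessed cache addresses. While an address is still in the
# FIFO, the associated data may be served from the cache directly.

from collections import deque

class CacheDelayRegister:
    def __init__(self, depth):
        # deque with maxlen purges the oldest address automatically,
        # mirroring "a cache address at the end of the FIFO may be purged".
        self.fifo = deque(maxlen=depth)

    def record_access(self, cache_address):
        """Add the address of a just-performed access operation; when the
        register is full, the oldest entry is dropped."""
        self.fifo.append(cache_address)

    def may_serve_from_cache(self, cache_address):
        """Data may be accessed directly from the cache (without another
        access of the main array) while its address remains present."""
        return cache_address in self.fifo
```

The register depth effectively sets a recency window: accesses that fall within the last `depth` operations avoid another trip to the main array.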
-
Patent number: 11934270. Abstract: One or more data blocks of a write command can be written to memory devices independently of other data blocks that are grouped with them for an error correction operation. Further, data blocks of different write commands can be executed together and simultaneously rather than being executed separately at different times, which can reduce the latencies associated with executing the write commands. Type: Grant. Filed: June 2, 2022. Date of Patent: March 19, 2024. Assignee: Micron Technology, Inc. Inventors: Nicola Del Gatto, Marco Sforzin, Paolo Amato.
-
Publication number: 20240070024. Abstract: Described apparatuses and methods relate to a read data path for a memory system. A memory system can include logic that receives data from a memory. The data may include first data, parity data, and metadata that enables a reliability check of the data. The logic may perform the reliability check of the data to determine an accuracy of the data. If the data is determined not to include an error, the data may be transmitted for accessing by a requestor. If the data is determined to include an error, however, a data recovery process may be initiated to recover the corrupted data along a separate data path. In doing so, the apparatuses and methods described herein may reduce the likelihood that a memory system returns corrupted data to a requestor. Type: Application. Filed: August 30, 2022. Publication date: February 29, 2024. Applicant: Micron Technology, Inc. Inventors: Nicola Del Gatto, Emanuele Confalonieri.
-
Publication number: 20240070015. Abstract: Described apparatuses and methods relate to a read data path for a memory system. The memory system may include logic that receives data and associated metadata from a memory. The logic may perform a reliability check on the data using the associated metadata to determine if the data has an error. If the data is determined not to include an error, the data may be transmitted to a requestor. If the data is determined to include an error, however, a data recovery process may be initiated to recover the data. This may reduce the likelihood that the memory system returns corrupted data to a requestor. The memory system may process a different read request at least partially in parallel with the data recovery process to increase throughput or reduce latency. In some cases, the data recovery process may involve one or more techniques related to redundant array of independent disks (RAID) technology. Type: Application. Filed: August 30, 2022. Publication date: February 29, 2024. Applicant: Micron Technology, Inc. Inventors: Nicola Del Gatto, Emanuele Confalonieri.
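The branch at the heart of this read data path (deliver on a clean reliability check, divert to recovery on failure) can be sketched as follows. The checksum scheme (CRC-32) and all names here are illustrative assumptions; the patent application does not specify this particular check.

```python
# Hedged sketch of the read-path decision: verify data against its stored
# metadata; forward it on success, otherwise initiate a recovery process
# along a separate path. CRC-32 stands in for the unspecified check.

import zlib

def read_path(data: bytes, stored_crc: int):
    """Return ('deliver', data) when the reliability check passes,
    or ('recover', data) to kick off the separate recovery process."""
    if zlib.crc32(data) == stored_crc:
        return ("deliver", data)
    return ("recover", data)
```

Because the recovery path is separate, the controller can keep serving other read requests while a failed block is being reconstructed, which is the throughput benefit the abstract describes.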
-
Patent number: 11914893. Abstract: Methods, systems, and devices for managed memory systems with multiple priority queues are described. Memory access commands may be received from a host and stored in a command queue. First and second subsets of the commands, respectively associated with first and second priorities, may be determined. The first and second subsets may be routed from the command queue to first and second queues, respectively. The first and second subsets may be processed from the first and second queues to third and fourth queues, respectively, at a storage controller, according to first and second processes that may be run concurrently according to parameters for prioritization between the first and second priorities. Data associated with the commands may be received from the host, temporarily stored in a buffer, then moved to a storage memory (for write commands), or retrieved from the storage memory, temporarily stored in the buffer, then transmitted to the host (for read commands). Type: Grant. Filed: November 18, 2020. Date of Patent: February 27, 2024. Assignee: Micron Technology, Inc. Inventors: Nicola Del Gatto, Massimiliano Patriarca, Antonino Caprì, Emanuele Confalonieri, Angelo Alberto Rovelli.
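The first step of the flow above, splitting a shared command queue into per-priority queues, can be sketched briefly. Priority labels and the command representation are assumptions for illustration only.

```python
# Illustrative sketch of priority routing: commands from a shared command
# queue are split into a first (high-priority) and second (low-priority)
# queue, which downstream processes can then drain concurrently.

from collections import deque

def route_by_priority(command_queue):
    """Split commands into first- and second-priority subsets."""
    first, second = deque(), deque()
    for cmd in command_queue:
        target = first if cmd["priority"] == "high" else second
        target.append(cmd)
    return first, second
```

Keeping the subsets in separate queues is what lets the two downstream processes run concurrently and be weighted against each other by prioritization parameters.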
-
Patent number: 11886749. Abstract: Methods, systems, and devices for event management for memory devices are described. A memory system may include a frontend (FE) queue and a backend (BE) queue. Each queue may include an interface that can be operated in an interrupt mode or a polling mode based on certain metrics. For example, the interface associated with the FE queue may be operated in a polling mode or an interrupt mode based on whether a quantity of commands being executed on one or more memory devices of the memory system satisfies a threshold value. Additionally or alternatively, the interface associated with the BE queue may be operated in a polling mode or an interrupt mode based on whether a quantity of active logical block addresses (LBAs) associated with one or more operations being executed on one or more memory devices of the memory system satisfies a threshold value. Type: Grant. Filed: December 27, 2022. Date of Patent: January 30, 2024. Inventors: Federica Cresci, Nicola Del Gatto, Massimiliano Turconi, Massimiliano Patriarca.
-
Publication number: 20240004791. Abstract: An apparatus can include a plurality of memory devices and a memory controller coupled to the plurality of memory devices via a plurality of memory channels. The plurality of memory channels are organized as a plurality of channel groups, and the memory controller comprises respective independent caches corresponding to the plurality of channel groups. Type: Application. Filed: May 26, 2023. Publication date: January 4, 2024. Inventors: Emanuele Confalonieri, Nicola Del Gatto.
-
Publication number: 20230418756. Abstract: Systems, apparatuses, and methods related to a memory controller for cache bypass are described. An example memory controller can be coupled to a memory device. The example memory controller can include a cache including a cache sequence controller configured to determine a quantity of a given type of result of cache look-up operations, determine that the quantity satisfies a bypass threshold, and cause performance of a bypass memory operation that bypasses the cache and accesses the memory device. Type: Application. Filed: June 27, 2023. Publication date: December 28, 2023. Inventors: Emanuele Confalonieri, Patrick Estep, Stephen S. Pawlowski, Nicola Del Gatto.
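The bypass decision described above can be modeled as a simple counter against a threshold. This is a hedged sketch under assumed names; the abstract does not say which look-up result type is counted, so misses are used here purely as an example.

```python
# Illustrative sketch of cache-bypass logic: count a given type of cache
# look-up result (misses, as an assumed example) and signal that memory
# operations should bypass the cache once a threshold is satisfied.

class CacheSequenceController:
    def __init__(self, bypass_threshold):
        self.bypass_threshold = bypass_threshold
        self.miss_count = 0

    def record_lookup(self, hit):
        """Track the quantity of the chosen look-up result type."""
        if not hit:
            self.miss_count += 1

    def should_bypass(self):
        """True once the tracked quantity satisfies the bypass threshold,
        i.e., the cache is not paying off and direct access is cheaper."""
        return self.miss_count >= self.bypass_threshold
```

The intuition: a streak of misses means the workload is not cache-friendly, so skipping the cache avoids pointless fill and eviction traffic.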
-
Publication number: 20230393940. Abstract: One or more data blocks of a write command can be written to memory devices independently of other data blocks that are grouped with them for an error correction operation. Further, data blocks of different write commands can be executed together and simultaneously rather than being executed separately at different times, which can reduce the latencies associated with executing the write commands. Type: Application. Filed: June 2, 2022. Publication date: December 7, 2023. Inventors: Nicola Del Gatto, Marco Sforzin, Paolo Amato.
-
Patent number: 11763887. Abstract: Methods, systems, and devices for cleaning memory blocks using multiple types of write operations are described. A counter may be incremented each time a write command is received. In response to the counter reaching a threshold, the counter may be reset and a flag may be set. Each time a cleaning of a memory block is to take place, the flag may be checked. If the flag is set, the memory block may be cleaned using a second type of cleaning operation, such as one using a force write approach. Otherwise, the memory block may be cleaned using a first type of cleaning operation, such as one using a normal write approach. Once set, the flag may be reset after one or more memory blocks are cleaned using the second type of cleaning operation. Type: Grant. Filed: September 8, 2022. Date of Patent: September 19, 2023. Assignee: Micron Technology, Inc. Inventor: Nicola Del Gatto.
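The counter-and-flag scheme above can be sketched directly. This is a minimal illustrative model; the threshold value and the operation names are assumptions, and the real cleaning operations are hardware-level writes, not strings.

```python
# Illustrative model of the write-counter/flag cleaning scheme: count write
# commands; at the threshold, reset the counter and set a flag; the flag
# selects a force-write cleaning operation and is reset after it is used.

class BlockCleaner:
    def __init__(self, write_threshold):
        self.write_threshold = write_threshold
        self.write_count = 0
        self.force_write_flag = False

    def on_write_command(self):
        """Increment the counter per received write command; on reaching
        the threshold, reset the counter and set the flag."""
        self.write_count += 1
        if self.write_count >= self.write_threshold:
            self.write_count = 0
            self.force_write_flag = True

    def clean_block(self):
        """Check the flag when a block is to be cleaned: flag set selects
        the force-write operation (and resets the flag), otherwise the
        normal-write operation is used."""
        if self.force_write_flag:
            self.force_write_flag = False
            return "force_write_clean"
        return "normal_clean"
```

The effect is that the heavier force-write cleaning runs only periodically, once enough write traffic has accumulated, rather than on every cleaning pass.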
-
Publication number: 20230280940. Abstract: A memory controller can include a front end portion configured to interface with a host, a central controller portion configured to manage data, and a back end portion configured to interface with memory devices. The memory controller can include interface management circuitry coupled to a cache and a memory device. The memory controller can receive, via the interface management circuitry, a first signal indicative of data associated with a memory access request from a host. The memory controller can transmit a second signal indicative of the data to cache the data in a first location in the cache. The memory controller can transmit a third signal indicative of the data to cache the data in a second location in the cache. Type: Application. Filed: March 1, 2022. Publication date: September 7, 2023. Inventors: Nicola Del Gatto, Emanuele Confalonieri, Paolo Amato, Patrick Estep, Stephen S. Pawlowski.
-
Patent number: 11720284. Abstract: Methods, systems, and devices for low latency storage based on data size are described. A memory system may include logic, a processor, a first memory, and a second memory. The logic may be configured to receive commands, or data, or both, from a host system. The first memory and the second memory may be coupled with the processor. The processor may be configured to store, or to cause the storage of, data for commands associated with data smaller than a threshold in the first memory, and to store data for commands associated with data larger than the threshold in the second memory. A first communication path between the logic and the first memory may be associated with a faster transfer speed than a second communication path between the logic and the second memory. Type: Grant. Filed: April 29, 2021. Date of Patent: August 8, 2023. Assignee: Micron Technology, Inc. Inventors: Federica Cresci, Nicola Del Gatto, Massimiliano Patriarca, Maddalena Calzolari, Michela Spagnolo, Massimiliano Turconi.
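The size-based placement rule above reduces to a single comparison. The function and memory names below are illustrative assumptions, sketching only the routing decision, not the patented memory system.

```python
# Hypothetical sketch of size-based data placement: payloads smaller than
# the threshold go to the first memory (on the faster communication path),
# larger payloads go to the second memory.

def select_memory(data_size: int, threshold: int) -> str:
    """Route small payloads to the low-latency first memory."""
    return "first_memory" if data_size < threshold else "second_memory"
```

The payoff is latency: small, latency-sensitive transfers ride the faster path, while bulk data takes the slower but presumably larger second memory.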
-
Publication number: 20230236949. Abstract: Provided is a system and method for storing, via a processor, in a memory of an application specific integrated circuit (ASIC), one or more threshold values responsive to at least one of physical layer and processing layer operating conditions of the ASIC. Also included is monitoring at least one of a physical layer operating condition value and a processing layer performance condition value of the ASIC, the monitoring forming a monitored value; comparing the monitored value with the stored threshold values; and throttling processing layer performance of the ASIC when the monitored value exceeds at least one of the stored threshold values. Type: Application. Filed: September 2, 2022. Publication date: July 27, 2023. Applicant: Micron Technology, Inc. Inventors: Federica Cresci, Nicola Del Gatto, Emanuele Confalonieri.
-
Publication number: 20230236726. Abstract: A memory controller may calculate the sum of a first number of entries stored in a read buffer and a second number of entries stored in a write buffer. If the sum is less than a first threshold and the read/write buffer is not full of entries, then the memory controller can request read/write commands from a host computing device. If the sum is not less than the first threshold, or the read/write buffer is full of entries, then the memory controller can assert backpressure to stop the flow of newly incoming read/write commands from the host computing device. Additionally, or alternatively, the memory controller may dequeue a write command entry only if the number of write command entries stored in a write command FIFO memory is greater than a second threshold. Type: Application. Filed: September 6, 2022. Publication date: July 27, 2023. Applicant: Micron Technology, Inc. Inventor: Nicola Del Gatto.
-
Publication number: 20230236729. Abstract: Provided is a method for regulating, via a hardware performance throttling block (PTB) of a memory module, the performance of a memory system in response to read and write requests from a processing system which hosts the memory system. The host system sends memory service requests to the memory system in the form of memory read requests and memory write requests. The host system may also send requests to throttle, that is, to limit the responses of the memory system to memory requests; the host system may also send to the memory system various parameters indicative of current memory usage. In response to the throttling request, the PTB of the memory module either stops any reception of memory requests, or limits (throttles) the number of memory requests (read requests, write requests, or both) for a specified number of clock/command cycles. The PTB also determines when full, un-throttled performance may be resumed. Type: Application. Filed: September 2, 2022. Publication date: July 27, 2023. Applicant: Micron Technology, Inc. Inventors: Federica Cresci, Nicola Del Gatto, Emanuele Confalonieri.
-
Publication number: 20230236758. Abstract: A memory controller may calculate the sum of a first number of entries stored in a read buffer and a second number of entries stored in a write buffer. If the sum is less than a first threshold and the read/write buffer is not full of entries, then the memory controller can request read/write commands from a host computing device. If the sum is not less than the first threshold, or the read/write buffer is full of entries, then the memory controller can assert backpressure to stop the flow of newly incoming read/write commands from the host computing device. Additionally, or alternatively, the memory controller may dequeue a write command entry only if the number of write command entries stored in a write command FIFO memory is greater than a second threshold. Type: Application. Filed: September 6, 2022. Publication date: July 27, 2023. Applicant: Micron Technology, Inc. Inventor: Nicola Del Gatto.
-
Publication number: 20230236757. Abstract: A memory controller may calculate the sum of a first number of entries stored in a read buffer and a second number of entries stored in a write buffer. If the sum is less than a first threshold and the read/write buffer is not full of entries, then the memory controller can request read/write commands from a host computing device. If the sum is not less than the first threshold, or the read/write buffer is full of entries, then the memory controller can assert backpressure to stop the flow of newly incoming read/write commands from the host computing device. Additionally, or alternatively, the memory controller may dequeue a write command entry only if the number of write command entries stored in a write command FIFO memory is greater than a second threshold. Type: Application. Filed: September 6, 2022. Publication date: July 27, 2023. Applicant: Micron Technology, Inc. Inventor: Nicola Del Gatto.
-
Publication number: 20230229556. Abstract: There are provided methods and systems for improving RAS features of a memory device. For example, there is provided a system that includes a memory and a memory-side cache. The system further includes a processor that is configured to minimize accesses to the memory by executing certain operations. The operations can include computing a new parity based on old data, new data, and an old parity in response to data from the memory-side cache being written to the memory. Type: Application. Filed: August 26, 2022. Publication date: July 20, 2023. Applicant: Micron Technology, Inc. Inventors: Patrick Estep, Steve Pawlowski, Emanuele Confalonieri, Nicola Del Gatto, Paolo Amato.
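The "new parity from old data, new data, and old parity" operation above is the classic RAID-style read-modify-write parity update, which avoids reading the other members of the stripe. A minimal sketch, assuming byte-wise XOR parity (the abstract does not name the parity function):

```python
# Minimal sketch of the read-modify-write parity update:
# new_parity = old_parity XOR old_data XOR new_data, byte by byte.
# Only the block being rewritten and the parity block need to be read.

def updated_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """Compute the new parity without touching the other stripe members:
    XORing out the old data and XORing in the new data."""
    return bytes(p ^ od ^ nd
                 for p, od, nd in zip(old_parity, old_data, new_data))
```

This is why the processor can "minimize accesses to the memory": updating one block costs two reads and two writes regardless of stripe width, instead of re-reading every block in the stripe to recompute parity from scratch.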