Patents by Inventor David Andrew Roberts
David Andrew Roberts has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11797198
Abstract: Various embodiments provide for one or more processor instructions and memory instructions that enable a memory sub-system to copy, move, or swap data across (e.g., between) different memory tiers of the memory sub-system, where each of the memory tiers is associated with different memory locations (e.g., different physical memory locations) on one or more memory devices of the memory sub-system.
Type: Grant
Filed: April 23, 2021
Date of Patent: October 24, 2023
Assignee: Micron Technology, Inc.
Inventor: David Andrew Roberts
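A minimal sketch of what such cross-tier primitives could look like, modeling tiers as Python dicts keyed by physical address; the tier names and function signatures are illustrative, not Micron's instruction set:

```python
# Hypothetical copy/move/swap primitives across memory tiers, each tier
# modeled as a dict mapping physical address -> data.
tiers = {
    "fast": {},   # e.g., a DRAM-backed tier
    "slow": {},   # e.g., an NVM-backed tier
}

def copy_across(src_tier, src_addr, dst_tier, dst_addr):
    """Copy data between tiers; the source location keeps its data."""
    tiers[dst_tier][dst_addr] = tiers[src_tier][src_addr]

def move_across(src_tier, src_addr, dst_tier, dst_addr):
    """Move data between tiers; the source location is invalidated."""
    tiers[dst_tier][dst_addr] = tiers[src_tier].pop(src_addr)

def swap_across(tier_a, addr_a, tier_b, addr_b):
    """Swap the contents of two locations in different tiers."""
    tiers[tier_a][addr_a], tiers[tier_b][addr_b] = (
        tiers[tier_b][addr_b], tiers[tier_a][addr_a])

tiers["fast"][0x100] = b"hot"
tiers["slow"][0x200] = b"cold"
swap_across("fast", 0x100, "slow", 0x200)
assert tiers["fast"][0x100] == b"cold" and tiers["slow"][0x200] == b"hot"
```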
-
Patent number: 11797439
Abstract: Described apparatuses and methods balance memory-portion accessing. Some memory architectures are designed to accelerate memory accesses using schemes that may be at least partially dependent on memory access requests being distributed roughly equally across multiple memory portions of a memory. Examples of such memory portions include cache sets of cache memories and memory banks of multibank memories. Some code, however, may execute in a manner that concentrates memory accesses in a subset of the total memory portions, which can reduce memory responsiveness in these memory types. To account for such behaviors, described techniques can shuffle memory addresses based on a shuffle map to produce shuffled memory addresses. The shuffle map can be determined based on a count of the occurrences of a reference bit value at bit positions of the memory addresses. Using the shuffled memory address for memory requests can substantially balance the accesses across the memory portions.
Type: Grant
Filed: September 12, 2022
Date of Patent: October 24, 2023
Assignee: Micron Technology, Inc.
Inventor: David Andrew Roberts
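The shuffle-map construction can be illustrated with a short sketch: count how often a reference bit value (here, 1) occurs at each bit position of observed addresses, then permute the bits so the most balanced positions select the memory portion. The ranking heuristic and 16-bit address width are assumptions for illustration:

```python
# Build a shuffle map by counting 1-bits per bit position, ranking positions
# by how balanced they are (count nearest n/2), then permuting address bits.
ADDR_BITS = 16

def build_shuffle_map(addresses):
    n = len(addresses)
    ones = [sum((a >> b) & 1 for a in addresses) for b in range(ADDR_BITS)]
    # Most balanced bit positions first; they become the low-order bits.
    return sorted(range(ADDR_BITS), key=lambda b: abs(ones[b] - n / 2))

def shuffle_address(addr, shuffle_map):
    out = 0
    for new_pos, old_pos in enumerate(shuffle_map):
        out |= ((addr >> old_pos) & 1) << new_pos
    return out

addrs = [i * 64 for i in range(256)]              # strided access pattern
smap = build_shuffle_map(addrs)
before = {a % 8 for a in addrs}                    # all land in one bank
after = {shuffle_address(a, smap) % 8 for a in addrs}
print(len(before), "bank ->", len(after), "banks")  # 1 bank -> 8 banks
```

Because the map is a bit permutation, shuffling stays invertible, so each shuffled address still maps to exactly one original address.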
-
Patent number: 11775370
Abstract: Methods, systems, and apparatuses related to a memory fault map for an accelerated neural network. An artificial neural network can be accelerated by operating memory outside of the memory's baseline operating parameters. Doing so, however, often increases the amount of faulty data locations in the memory. Through creation and use of the disclosed fault map, however, artificial neural networks can be trained more quickly and using less bandwidth, which reduces the neural networks' sensitivity to these additional faulty data locations. Hardening a neural network to these memory faults allows the neural network to operate effectively even when using memory outside of that memory's baseline operating parameters.
Type: Grant
Filed: November 9, 2022
Date of Patent: October 3, 2023
Assignee: Micron Technology, Inc.
Inventor: David Andrew Roberts
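A hedged sketch of how a fault map might be consumed: a set of word addresses that tested faulty under out-of-baseline operation, which a trainer consults to place weights only at known-good locations. The names `probe_faults` and `allocate_weights` are hypothetical:

```python
import random

def probe_faults(num_words, fault_rate=0.01, seed=0):
    """Stand-in for a hardware test pass that finds faulty word addresses."""
    rng = random.Random(seed)
    return {w for w in range(num_words) if rng.random() < fault_rate}

def allocate_weights(num_weights, num_words, fault_map):
    """Place weights only at addresses absent from the fault map."""
    good = [w for w in range(num_words) if w not in fault_map]
    if len(good) < num_weights:
        raise MemoryError("not enough fault-free words")
    return good[:num_weights]

fault_map = probe_faults(num_words=4096)
placement = allocate_weights(1024, 4096, fault_map)
assert not (set(placement) & fault_map)   # no weight sits on a faulty word
```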
-
Patent number: 11775458
Abstract: Techniques for implementing and/or operating an apparatus, which includes a host system, a memory system, and a shared memory bus. The memory system includes a first memory type that is subject to a first memory type-specific timing constraint and a second memory type that is subject to a second memory type-specific timing constraint. Additionally, the shared memory bus is shared by the first memory type and the second memory type. Furthermore, the apparatus utilizes a first time period to communicate with the first memory type via the shared memory bus at least in part by enforcing the first memory type-specific timing constraint during the first time period and utilizes a second time period to communicate with the second memory type via the shared memory bus at least in part by enforcing the second memory type-specific timing constraint during the second time period.
Type: Grant
Filed: February 28, 2022
Date of Patent: October 3, 2023
Assignee: Micron Technology, Inc.
Inventors: David Andrew Roberts, Joseph Thomas Pawlowski, Elliott Cooper-Balis
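One way to picture the time-period arbitration is a toy scheduler that alternates bus ownership between two memory types and enforces each type's timing gap only within its own window; the even/odd slot assignment and the nanosecond constants are assumptions, not the patented controller:

```python
from dataclasses import dataclass

@dataclass
class MemoryType:
    name: str
    min_gap_ns: int            # the type-specific timing constraint
    last_cmd_ns: int = -10**9

def run_schedule(requests, period_ns=1000):
    """requests: list of (arrival_ns, mem_type). Each command waits for its
    arrival, its type's bus window, and its type's minimum command gap."""
    issued = []
    for arrival, mtype in sorted(requests, key=lambda r: r[0]):
        t = max(arrival, mtype.last_cmd_ns + mtype.min_gap_ns)
        slot = 0 if mtype.name == "dram" else 1     # even/odd time periods
        while (t // period_ns) % 2 != slot:
            t = (t // period_ns + 1) * period_ns    # wait for own window
        mtype.last_cmd_ns = t
        issued.append((t, mtype.name))
    return issued

dram = MemoryType("dram", min_gap_ns=50)
nvm = MemoryType("nvm", min_gap_ns=300)
print(run_schedule([(0, dram), (10, nvm), (20, dram), (40, nvm)]))
```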
-
Patent number: 11768770
Abstract: Described apparatuses and methods order memory address portions advantageously for cache-memory addressing. An address bus can have a smaller width than a memory address. The multiple bits of the memory address can be separated into most-significant bits (MSB) and least-significant bits (LSB) portions. The LSB portion is provided to a cache first. The cache can process the LSB portion before the MSB portion is received. The cache can use index bits of the LSB portion to index into an array of memory cells and identify multiple corresponding tags. The cache can also check the corresponding tags against lower tag bits of the LSB portion. A partial match may be labeled as a predicted hit, and a partial miss may be labeled as an actual miss, which can initiate a data fetch. With the remaining tag bits from the MSB portion, the cache can confirm or refute the predicted hit.
Type: Grant
Filed: August 30, 2022
Date of Patent: September 26, 2023
Assignee: Micron Technology, Inc.
Inventors: Joseph Thomas Pawlowski, Elliott Clifford Cooper-Balis, David Andrew Roberts
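A sketch of the split-phase tag check with illustrative field widths (not the patented design): phase one indexes and partially matches using the LSB portion, phase two confirms or refutes the predicted hit once the MSB portion arrives:

```python
INDEX_BITS, LOW_TAG_BITS = 6, 4   # assumed widths for illustration

def split(addr):
    index = addr & ((1 << INDEX_BITS) - 1)
    low_tag = (addr >> INDEX_BITS) & ((1 << LOW_TAG_BITS) - 1)
    high_tag = addr >> (INDEX_BITS + LOW_TAG_BITS)
    return index, low_tag, high_tag

cache = {}  # index -> (low_tag, high_tag); direct-mapped for simplicity

def access_lsb_phase(index, low_tag):
    """Phase 1: partial match -> predicted hit; mismatch -> actual miss."""
    entry = cache.get(index)
    if entry is None or entry[0] != low_tag:
        return "actual-miss"      # the data fetch can start immediately
    return "predicted-hit"

def access_msb_phase(index, high_tag):
    """Phase 2: remaining tag bits confirm or refute the predicted hit."""
    return "hit" if cache[index][1] == high_tag else "late-miss"

i, lo, hi = split(0b1010_0111_010110)
cache[i] = (lo, hi)
assert access_lsb_phase(i, lo) == "predicted-hit"
assert access_msb_phase(i, hi) == "hit"
```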
-
Patent number: 11755488
Abstract: Systems, apparatuses, and methods for predictive memory access are described. Memory control circuitry instructs a memory array to read a data block from or write the data block to a location targeted by a memory access request, determines memory access information including a data value correlation parameter determined based on data bits used to indicate a raw data value in the data block and/or an inter-demand delay correlation parameter determined based on a demand time of the memory access request, predicts that read access to another location in the memory array will subsequently be demanded by another memory access request based on the data value correlation parameter and/or the inter-demand delay correlation parameter, and instructs the memory array to output another data block stored at the other location to a different memory level that provides faster data access speed before the other memory access request is received.
Type: Grant
Filed: September 30, 2021
Date of Patent: September 12, 2023
Assignee: Micron Technology, Inc.
Inventor: David Andrew Roberts
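The inter-demand delay side of this can be sketched as a small correlation table that learns which address follows a given (address, delay bucket) pair; the bucketing and table structure are assumptions, and the data-value correlation path is omitted:

```python
class CorrelationPrefetcher:
    """Learns (previous address, inter-demand delay bucket) -> next address."""
    def __init__(self, bucket_ns=100):
        self.bucket_ns = bucket_ns
        self.table = {}
        self.last = None   # (addr, demand_time_ns)

    def on_demand(self, addr, now_ns):
        prefetch = None
        if self.last is not None:
            prev_addr, prev_t = self.last
            b = (now_ns - prev_t) // self.bucket_ns
            self.table[(prev_addr, b)] = addr      # learn the correlation
            prefetch = self.table.get((addr, b))   # predict the next demand
        self.last = (addr, now_ns)
        return prefetch   # address worth staging in a faster memory level

pf = CorrelationPrefetcher()
trace = [(0x1000, 0), (0x2000, 120), (0x1000, 240), (0x2000, 360)]
print([pf.on_demand(a, t) for a, t in trace])
# -> [None, None, 0x2000, 0x1000]: once the rhythm repeats, the next
#    demanded address is predicted before its request arrives.
```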
-
Patent number: 11693775
Abstract: Described apparatuses and methods form adaptive cache lines having a configurable capacity from hardware cache lines having a fixed capacity. The adaptive cache lines can be formed in accordance with a programmable cache-line parameter. The programmable cache-line parameter can specify a capacity for the adaptive cache lines. The adaptive cache lines may be formed by combining respective groups of fixed-capacity hardware cache lines. The quantity of fixed-capacity hardware cache lines included in respective adaptive cache lines may be based on the programmable cache-line parameter. The programmable cache-line parameter can be selected in accordance with characteristics of the cache workload.
Type: Grant
Filed: April 4, 2022
Date of Patent: July 4, 2023
Assignee: Micron Technology, Inc.
Inventors: David Andrew Roberts, Joseph Thomas Pawlowski
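A compact illustration of grouping fixed hardware lines into adaptive lines under a programmable capacity parameter; the 64-byte hardware line and contiguous grouping are assumptions:

```python
HW_LINE_BYTES = 64   # assumed fixed hardware cache-line capacity

def make_adaptive_lines(num_hw_lines, adaptive_line_bytes):
    """Partition hardware lines into groups that form adaptive lines."""
    if adaptive_line_bytes % HW_LINE_BYTES:
        raise ValueError("adaptive capacity must be a multiple of a hardware line")
    group = adaptive_line_bytes // HW_LINE_BYTES
    return [list(range(i, i + group))
            for i in range(0, num_hw_lines - group + 1, group)]

# Random-access workloads favor small lines; streaming favors large ones.
print(make_adaptive_lines(8, 64))    # [[0], [1], ..., [7]]
print(make_adaptive_lines(8, 256))   # [[0, 1, 2, 3], [4, 5, 6, 7]]
```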
-
Patent number: 11693593
Abstract: Various embodiments enable versioning of data stored on a memory device, where the versioning allows the memory device to maintain different versions of data within a set of physical memory locations (e.g., a row) of the memory device. In particular, some embodiments provide for a memory device or a memory sub-system that uses versioning of stored data to facilitate a rollback operation/behavior, a checkpoint operation/behavior, or both. Additionally, some embodiments provide for a transactional memory device or a transactional memory sub-system that uses versioning of stored data to enable rollback of a memory transaction, commitment of a memory transaction, or handling of a read or write command associated with a memory transaction.
Type: Grant
Filed: October 28, 2020
Date of Patent: July 4, 2023
Assignee: Micron Technology, Inc.
Inventors: David Andrew Roberts, Sean Stephen Eilert
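The rollback/commit behavior can be modeled with a row that keeps a committed version alongside a working version; this is a conceptual model, not the memory-device circuitry:

```python
class VersionedRow:
    def __init__(self, data=b""):
        self.committed = data
        self.working = None    # populated by the first in-transaction write

    def write(self, data):
        self.working = data    # the committed copy stays intact

    def read(self):
        return self.working if self.working is not None else self.committed

    def commit(self):
        if self.working is not None:
            self.committed, self.working = self.working, None

    def rollback(self):
        self.working = None    # discard the uncommitted version

row = VersionedRow(b"v0")
row.write(b"v1")
assert row.read() == b"v1"    # transaction sees its own write
row.rollback()
assert row.read() == b"v0"    # committed version survives the rollback
```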
-
Publication number: 20230169011
Abstract: Described apparatuses and methods partition a cache memory based, at least in part, on a metric indicative of prefetch performance. The amount of cache memory allocated for metadata related to prefetch operations versus cache storage can be adjusted based on operating conditions. Thus, the cache memory can be partitioned into a first portion allocated for metadata pertaining to an address space (prefetch metadata) and a second portion allocated for data associated with the address space (cache data). The amount of cache memory allocated to the first portion can be increased under workloads that are suitable for prefetching and decreased otherwise. The first portion may include one or more cache units, cache lines, cache ways, cache sets, or other resources of the cache memory.
Type: Application
Filed: November 21, 2022
Publication date: June 1, 2023
Applicant: Micron Technology, Inc.
Inventors: David Andrew Roberts, Joseph Thomas Pawlowski
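A hypothetical repartitioning policy in the spirit of the abstract: grow the metadata partition while a prefetch-usefulness metric is high, shrink it otherwise. The way counts and thresholds are invented for illustration:

```python
TOTAL_WAYS = 16   # assumed set-associative cache with 16 ways per set

def repartition(metadata_ways, useful_prefetches, issued_prefetches,
                grow_at=0.6, shrink_at=0.2):
    """Return (metadata_ways, data_ways) after one adjustment step."""
    accuracy = useful_prefetches / max(issued_prefetches, 1)
    if accuracy > grow_at and metadata_ways < TOTAL_WAYS // 2:
        metadata_ways += 1       # prefetching pays off: track more metadata
    elif accuracy < shrink_at and metadata_ways > 0:
        metadata_ways -= 1       # prefetching is wasted: reclaim for data
    return metadata_ways, TOTAL_WAYS - metadata_ways

print(repartition(2, useful_prefetches=80, issued_prefetches=100))  # (3, 13)
print(repartition(2, useful_prefetches=5, issued_prefetches=100))   # (1, 15)
```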
-
Publication number: 20230114921
Abstract: Methods, systems, and apparatuses related to a memory fault map for an accelerated neural network. An artificial neural network can be accelerated by operating memory outside of the memory's baseline operating parameters. Doing so, however, often increases the amount of faulty data locations in the memory. Through creation and use of the disclosed fault map, however, artificial neural networks can be trained more quickly and using less bandwidth, which reduces the neural networks' sensitivity to these additional faulty data locations. Hardening a neural network to these memory faults allows the neural network to operate effectively even when using memory outside of that memory's baseline operating parameters.
Type: Application
Filed: November 9, 2022
Publication date: April 13, 2023
Applicant: Micron Technology, Inc.
Inventor: David Andrew Roberts
-
Publication number: 20230100328
Abstract: Disclosed in some examples are improved address prediction and memory preloading that leverage next-delta prediction and/or far-delta prediction for scheduling using a DNN. Previous memory access sequence data that identify one or more memory addresses previously accessed by one or more processors of a system may be processed and then converted into a sequence of delta values. The sequence of delta values is then mapped to one or more classes that are then input to a DNN. The DNN then outputs a predicted future class identifier sequence that represents addresses that the DNN predicts will be accessed by the processor in the future. The predicted future class identifier sequence is then converted back to a predicted delta value sequence and back into a set of one or more predicted addresses.
Type: Application
Filed: July 18, 2022
Publication date: March 30, 2023
Inventors: Aliasger Tayeb Zaidy, David Andrew Roberts, Patrick Michael Sheridan, Lukasz Burzawa
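The pipeline shape (addresses to deltas to classes, through the model, and back to addresses) can be shown end to end with the DNN replaced by a stub so the sketch stays self-contained; the class vocabulary and the stub predictor are assumptions:

```python
def to_deltas(addresses):
    return [b - a for a, b in zip(addresses, addresses[1:])]

def to_classes(deltas, vocab):
    """Map each distinct delta to a class identifier."""
    return [vocab.setdefault(d, len(vocab)) for d in deltas]

def stub_dnn(class_seq):
    """Stand-in for the trained model: repeat the most recent class twice."""
    return [class_seq[-1]] * 2

def predict_addresses(addresses, vocab):
    classes = to_classes(to_deltas(addresses), vocab)
    predicted_classes = stub_dnn(classes)
    inv = {c: d for d, c in vocab.items()}   # class -> delta
    addr, out = addresses[-1], []
    for c in predicted_classes:
        addr += inv[c]                        # delta -> next address
        out.append(addr)
    return out

vocab = {}
print([hex(a) for a in predict_addresses([0x1000, 0x1040, 0x1080, 0x10C0], vocab)])
# -> ['0x1100', '0x1140']: the stride-64 pattern continued two steps ahead
```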
-
Publication number: 20230088638
Abstract: Described apparatuses and methods track access metadata pertaining to activity within respective address ranges. The access metadata can be used to inform prefetch operations within the respective address ranges. The prefetch operations may involve deriving access patterns from access metadata covering the respective ranges. Suitable address range sizes for accurate pattern detection, however, can vary significantly from region to region of the address space based on, inter alia, workloads produced by programs utilizing the regions. Advantageously, the described apparatuses and methods can adapt the address ranges covered by the access metadata for improved prefetch performance. A data structure may be used to manage the address ranges in which access metadata are tracked. The address ranges can be adapted to improve prefetch performance through low-overhead operations implemented within the data structure.
Type: Application
Filed: August 17, 2022
Publication date: March 23, 2023
Applicant: Micron Technology, Inc.
Inventor: David Andrew Roberts
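One low-overhead adaptation policy consistent with the abstract, though not taken from it: split ranges whose access counts run hot and merge adjacent cold ranges, so metadata granularity tracks the workload. The flat-list structure and thresholds are illustrative:

```python
def adapt_ranges(ranges, split_at=1000, merge_at=10):
    """ranges: list of dicts {lo, hi, hits}; returns the adapted list."""
    out = []
    for r in sorted(ranges, key=lambda r: r["lo"]):
        if r["hits"] > split_at and r["hi"] - r["lo"] > 1:
            mid = (r["lo"] + r["hi"]) // 2           # split the hot range
            out.append({"lo": r["lo"], "hi": mid, "hits": 0})
            out.append({"lo": mid, "hi": r["hi"], "hits": 0})
        elif (out and r["hits"] < merge_at and out[-1]["hits"] < merge_at
              and out[-1]["hi"] == r["lo"]):
            out[-1]["hi"] = r["hi"]                   # merge adjacent cold ranges
        else:
            out.append(r)
    return out

ranges = [{"lo": 0, "hi": 4096, "hits": 5000},
          {"lo": 4096, "hi": 8192, "hits": 2},
          {"lo": 8192, "hi": 12288, "hits": 3}]
print(adapt_ranges(ranges))   # hot range split; cold neighbors merged
```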
-
Publication number: 20230060587
Abstract: Disclosed in some examples are methods, systems, memory devices, and machine-readable mediums that allow an application thread to indicate an undo logging operation when calculations begin that may need to be rolled back if a crash or other failure occurs. During the undo logging operation, writes to an identified memory region are directed to a copy and the original value is preserved. If the undo logging operation is committed, the copy becomes the correct value and may then be used in place of the original, or the value stored in the copy is copied to the original. If the undo logging operation is abandoned, the copy is not preserved and the value reverts to the original.
Type: Application
Filed: June 9, 2022
Publication date: March 2, 2023
Inventors: Tony M. Brewer, David Boles, David Andrew Roberts
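The described flow maps naturally onto a small API sketch (names like `begin`, `commit`, and `abandon` are hypothetical): writes after `begin` land in a shadow copy, `commit` adopts the copy, and `abandon` restores the original:

```python
class UndoLog:
    def __init__(self, memory):
        self.memory = memory    # addr -> value
        self.shadow = None

    def begin(self):
        self.shadow = dict(self.memory)   # subsequent writes land here

    def write(self, addr, value):
        target = self.shadow if self.shadow is not None else self.memory
        target[addr] = value              # originals stay untouched

    def commit(self):
        self.memory, self.shadow = self.shadow, None   # copy becomes current

    def abandon(self):
        self.shadow = None                # originals remain authoritative

log = UndoLog({0x10: 1})
log.begin(); log.write(0x10, 99); log.abandon()
assert log.memory[0x10] == 1    # crash-safe: the value reverts
log.begin(); log.write(0x10, 99); log.commit()
assert log.memory[0x10] == 99   # committed: the copy is adopted
```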
-
Publication number: 20230061668
Abstract: Described apparatuses and methods order memory address portions advantageously for cache-memory addressing. An address bus can have a smaller width than a memory address. The multiple bits of the memory address can be separated into most-significant bits (MSB) and least-significant bits (LSB) portions. The LSB portion is provided to a cache first. The cache can process the LSB portion before the MSB portion is received. The cache can use index bits of the LSB portion to index into an array of memory cells and identify multiple corresponding tags. The cache can also check the corresponding tags against lower tag bits of the LSB portion. A partial match may be labeled as a predicted hit, and a partial miss may be labeled as an actual miss, which can initiate a data fetch. With the remaining tag bits from the MSB portion, the cache can confirm or refute the predicted hit.
Type: Application
Filed: August 30, 2022
Publication date: March 2, 2023
Applicant: Micron Technology, Inc.
Inventors: Joseph Thomas Pawlowski, Elliott Clifford Cooper-Balis, David Andrew Roberts
-
Publication number: 20230058668
Abstract: A cache memory can maintain multiple cache lines and each cache line can include a data field, an encryption status attribute, and an encryption key attribute. The encryption status attribute can indicate whether the data field in the corresponding cache line includes encrypted or unencrypted data and the encryption key attribute can include an encryption key identifier for the corresponding cache line. In an example, a cryptographic controller can access keys from a key table to selectively encrypt or unencrypt cache data. Infrequently accessed cache data can be maintained as encrypted data, and more frequently accessed cache data can be maintained as unencrypted data. In some examples, different cache lines in the same cache memory can be maintained as encrypted or unencrypted data, and different cache lines can use respective different encryption keys.
Type: Application
Filed: June 17, 2022
Publication date: February 23, 2023
Inventors: David Andrew Roberts, Tony M. Brewer
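A toy model of the per-line state (an XOR stands in for the real cryptographic engine, and the hot/cold threshold is an assumption): cold lines stay encrypted at rest, while frequently read lines are kept in plaintext:

```python
key_table = {1: 0x5A, 2: 0xC3}   # key identifier -> toy key byte

class CacheLine:
    def __init__(self, data, key_id):
        self.key_id, self.encrypted, self.hits = key_id, False, 0
        self.data = bytes(data)
        self.encrypt()            # new lines start cold: encrypted at rest

    def _xor(self):
        k = key_table[self.key_id]
        self.data = bytes(b ^ k for b in self.data)

    def encrypt(self):
        if not self.encrypted:
            self._xor(); self.encrypted = True

    def decrypt(self):
        if self.encrypted:
            self._xor(); self.encrypted = False

    def read(self, hot_threshold=3):
        self.hits += 1
        if self.encrypted and self.hits >= hot_threshold:
            self.decrypt()        # hot line: keep it in plaintext
        k = key_table[self.key_id]
        return self.data if not self.encrypted else bytes(b ^ k for b in self.data)

line = CacheLine(b"secret", key_id=1)
assert line.read() == b"secret" and line.encrypted   # cold: stays encrypted
line.read(); line.read()
assert not line.encrypted                             # hot: now plaintext
```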
-
Patent number: 11586361
Abstract: Described apparatuses and methods control a voltage or a temperature of a memory domain to balance memory performance and energy use. In some aspects, an adaptive controller monitors memory performance metrics of a host processor that correspond to commands made to a memory domain of a memory system, including one operating at cryogenic temperatures. Based on the memory performance metrics, the adaptive controller can determine memory performance demand of the host processor, such as latency demand or bandwidth demand, for the memory domain. The adaptive controller may alter, using the determined performance demand, a voltage or a temperature of the memory domain to enable memory access performance that is tailored to meet the demand of the host processor. By so doing, the adaptive controller can manage various settings of the memory domain to address short- or long-term changes in memory performance demand.
Type: Grant
Filed: May 16, 2022
Date of Patent: February 21, 2023
Assignee: Micron Technology, Inc.
Inventor: David Andrew Roberts
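The demand-driven adjustment could be pictured as a simple feedback rule (an assumed policy, not the patented controller): raise the domain voltage when observed latency misses the demand, lower it when there is headroom:

```python
def adjust_voltage(voltage_mv, demand_latency_ns, observed_latency_ns,
                   step_mv=25, v_min=700, v_max=1200):
    if observed_latency_ns > demand_latency_ns:        # too slow: speed up
        return min(voltage_mv + step_mv, v_max)
    if observed_latency_ns < 0.8 * demand_latency_ns:  # headroom: save energy
        return max(voltage_mv - step_mv, v_min)
    return voltage_mv

v = 900
for observed in (120, 110, 95, 60, 55):
    v = adjust_voltage(v, demand_latency_ns=100, observed_latency_ns=observed)
    print(v)   # 925, 950, 950, 925, 900
```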
-
Publication number: 20230052043
Abstract: Described apparatuses and methods track access metadata pertaining to activity within respective address ranges. The access metadata can be used to inform prefetch operations within the respective address ranges. The prefetch operations may involve deriving access patterns from access metadata covering the respective ranges. Suitable address range sizes for accurate pattern detection, however, can vary significantly from region to region of the address space based on, inter alia, workloads produced by programs utilizing the regions. Advantageously, the described apparatuses and methods can adapt the address ranges covered by the access metadata for improved prefetch performance. A data structure may be used to manage the address ranges in which access metadata are tracked. The address ranges can be adapted to improve prefetch performance through low-overhead operations implemented within the data structure.
Type: Application
Filed: July 27, 2022
Publication date: February 16, 2023
Applicant: Micron Technology, Inc.
Inventor: David Andrew Roberts
-
Publication number: 20230051103
Abstract: Various embodiments provide for one or more processor instructions and memory instructions that enable a memory sub-system to predict a schedule for migrating data between memory devices, which can be part of a memory sub-system.
Type: Application
Filed: August 16, 2021
Publication date: February 16, 2023
Inventors: David Andrew Roberts, Aliasger Tayeb Zaidy
-
Publication number: 20230031680
Abstract: Described apparatuses and methods balance memory-portion accessing. Some memory architectures are designed to accelerate memory accesses using schemes that may be at least partially dependent on memory access requests being distributed roughly equally across multiple memory portions of a memory. Examples of such memory portions include cache sets of cache memories and memory banks of multibank memories. Some code, however, may execute in a manner that concentrates memory accesses in a subset of the total memory portions, which can reduce memory responsiveness in these memory types. To account for such behaviors, described techniques can shuffle memory addresses based on a shuffle map to produce shuffled memory addresses. The shuffle map can be determined based on a count of the occurrences of a reference bit value at bit positions of the memory addresses. Using the shuffled memory address for memory requests can substantially balance the accesses across the memory portions.
Type: Application
Filed: September 12, 2022
Publication date: February 2, 2023
Applicant: Micron Technology, Inc.
Inventor: David Andrew Roberts
-
Patent number: 11507516
Abstract: Described apparatuses and methods partition a cache memory based, at least in part, on a metric indicative of prefetch performance. The amount of cache memory allocated for metadata related to prefetch operations versus cache storage can be adjusted based on operating conditions. Thus, the cache memory can be partitioned into a first portion allocated for metadata pertaining to an address space (prefetch metadata) and a second portion allocated for data associated with the address space (cache data). The amount of cache memory allocated to the first portion can be increased under workloads that are suitable for prefetching and decreased otherwise. The first portion may include one or more cache units, cache lines, cache ways, cache sets, or other resources of the cache memory.
Type: Grant
Filed: August 19, 2020
Date of Patent: November 22, 2022
Assignee: Micron Technology, Inc.
Inventors: David Andrew Roberts, Joseph Thomas Pawlowski