Patents by Inventor Patrick Estep
Patrick Estep has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 12386656
  Abstract: Devices and techniques for thread scheduling control and memory splitting in a processor are described herein. An apparatus includes a hardware interface configured to receive a first request to execute a first thread, the first request including an indication of a workload; and processing circuitry configured to: determine the workload to produce a metric based at least in part on the indication; compare the metric with a threshold to determine that the metric is beyond the threshold; divide, based at least in part on the comparison, the workload into a set of sub-workloads consisting of a predefined number of equal parts of the workload; create a second request to execute a second thread, the second request including a first member of the set of sub-workloads; and process a second member of the set of sub-workloads in the first thread.
  Type: Grant
  Filed: September 2, 2021
  Date of Patent: August 12, 2025
  Assignee: Micron Technology, Inc.
  Inventors: Skyler Arron Windh, Tony M. Brewer, Patrick Estep
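The splitting behavior described in this abstract can be sketched in software. The threshold, the length-based metric, and the two-way split below are illustrative assumptions; the claim leaves all of them abstract:

```python
import threading

SPLIT_THRESHOLD = 1024   # hypothetical threshold; the claim does not fix the metric
NUM_PARTS = 2            # the "predefined number of equal parts"

def execute(workload, out):
    """Process a workload, splitting it across threads when its metric
    (here simply its length) exceeds the threshold."""
    if len(workload) > SPLIT_THRESHOLD:
        half = len(workload) // NUM_PARTS
        first, second = workload[:half], workload[half:]
        # Second request: a new thread takes one sub-workload...
        t = threading.Thread(target=execute, args=(first, out))
        t.start()
        # ...while the original thread processes the other sub-workload.
        execute(second, out)
        t.join()
    else:
        out.append(sum(workload))   # stand-in for the real per-thread work
```

With a 4096-element workload this recurses twice, so four sub-workloads are processed across threads.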
- Patent number: 12340125
  Abstract: Systems, apparatuses, and methods related to data reconstruction based on queue depth comparison are described. To avoid accessing the “congested” channel, a read command to access the “congested” channel can be executed by accessing the other, relatively “idle” channels and utilizing data read from the “idle” channels to reconstruct the data corresponding to the read command.
  Type: Grant
  Filed: December 6, 2023
  Date of Patent: June 24, 2025
  Assignee: Micron Technology, Inc.
  Inventors: Patrick Estep, Sean S. Eilert, Ameen D. Akel
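The abstract does not fix a reconstruction code; the sketch below assumes a RAID-style XOR parity stripe across channels, so that any one channel's data can be rebuilt from all the others:

```python
def read_with_reconstruction(channels, queue_depths, target, congestion_limit=8):
    """Read the target channel's stripe, reconstructing it from the other
    channels when the target's queue depth marks it as congested.
    Assumes one channel holds the XOR parity of the rest (illustrative)."""
    if queue_depths[target] <= congestion_limit:
        return channels[target]         # direct read: channel is idle enough
    # Congested: XOR every other channel's stripe together to rebuild the data.
    data = 0
    for ch, value in enumerate(channels):
        if ch != target:
            data ^= value
    return data
```

Because the parity channel is the XOR of the data channels, skipping any one channel and XOR-ing the rest recovers its contents.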
- Patent number: 12332803
  Abstract: An apparatus can include a plurality of memory devices and a memory controller coupled to the plurality of memory devices via a plurality of memory channels. The plurality of memory channels are organized as a plurality of channel groups. The memory controller comprises a plurality of memory access request/response buffer sets, and each memory access request/response buffer set of the plurality of memory access request/response buffer sets corresponds to a different one of the plurality of channel groups.
  Type: Grant
  Filed: May 26, 2023
  Date of Patent: June 17, 2025
  Assignee: Micron Technology, Inc.
  Inventors: Emanuele Confalonieri, Stephen S. Pawlowski, Patrick Estep
- Patent number: 12282433
  Abstract: Systems, apparatuses, and methods related to a memory controller for cache bypass are described. An example memory controller can be coupled to a memory device. The example memory controller can include a cache including a cache sequence controller configured to determine a quantity of a given type of result of cache look-up operations, determine the quantity satisfies a bypass threshold, and cause performance of a bypass memory operation that bypasses the cache and accesses the memory device.
  Type: Grant
  Filed: June 27, 2023
  Date of Patent: April 22, 2025
  Assignee: Micron Technology, Inc.
  Inventors: Emanuele Confalonieri, Patrick Estep, Stephen S. Pawlowski, Nicola Del Gatto
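A minimal software sketch of the bypass policy, assuming the counted look-up result is a miss; the class and attribute names are illustrative, not the patent's:

```python
class BypassingCache:
    """Cache front end that counts a given type of look-up result (misses,
    here) and routes requests straight to the memory device once the count
    satisfies a bypass threshold."""

    def __init__(self, backing, bypass_threshold=4):
        self.backing = backing          # stands in for the memory device
        self.lines = {}
        self.miss_count = 0
        self.bypass_threshold = bypass_threshold

    def read(self, addr):
        if self.miss_count >= self.bypass_threshold:
            return self.backing[addr]   # bypass: skip the cache look-up entirely
        if addr in self.lines:
            return self.lines[addr]     # hit
        self.miss_count += 1            # track the given type of result
        value = self.backing[addr]
        self.lines[addr] = value        # fill on miss
        return value
```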
- Publication number: 20250094242
  Abstract: Devices and techniques for chained resource locking are described herein. Threads form a last-in-first-out (LIFO) queue on a resource lock to create a chained lock on the resource. A data store representing the lock for the resource holds the previous thread's identifier, enabling a subsequent thread to wake the previous thread using the identifier when the subsequent thread releases the lock. Generally, the thread releasing the lock need not interact with the data store, reducing contention for the data store among many threads.
  Type: Application
  Filed: November 25, 2024
  Publication date: March 20, 2025
  Inventors: Patrick Estep, Tony M. Brewer
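A much-simplified sketch of the LIFO hand-off: a guard lock and a Python list of wake events stand in for the lock-free data store of thread identifiers described in the abstract:

```python
import threading

class ChainedLock:
    """LIFO-ordered lock: waiters stack up, and a releasing thread wakes
    the most recent waiter. Simplified illustration only."""

    def __init__(self):
        self._guard = threading.Lock()  # stands in for atomic access to the data store
        self._held = False
        self._waiters = []              # LIFO stack of waiting threads' wake events

    def acquire(self):
        with self._guard:
            if not self._held:
                self._held = True
                return
            event = threading.Event()
            self._waiters.append(event)
        event.wait()                    # sleep until a releaser wakes us

    def release(self):
        with self._guard:
            if self._waiters:
                self._waiters.pop().set()   # last-in, first-out hand-off
            else:
                self._held = False
```

With two threads queued behind a holder, release order is the reverse of arrival order.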
- Publication number: 20250068572
  Abstract: Linear interpolation is performed within a memory system. The memory system receives a floating-point index into an integer-indexed memory array. The memory system accesses the two values at the two adjacent integer indices, performs the linear interpolation, and provides the resulting interpolated value. In many system architectures, the critical limitation on system performance is the data transfer rate between memory and processing elements. Accordingly, reducing the amount of data transferred improves overall system performance and reduces power consumption.
  Type: Application
  Filed: November 15, 2024
  Publication date: February 27, 2025
  Inventors: Bryan Hornung, Tony M. Brewer, Douglas Vanesko, Patrick Estep
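The interpolation itself is simple arithmetic; here is a host-side sketch of what the memory system would compute for a floating-point index, so only one value crosses the memory interface instead of two:

```python
import math

def interpolated_read(array, index):
    """Return the linearly interpolated value at a floating-point index
    into an integer-indexed array."""
    lo = math.floor(index)
    frac = index - lo
    if frac == 0.0:
        return float(array[lo])        # exact integer index: single access
    # Blend the two adjacent elements by the fractional distance between them.
    return array[lo] * (1.0 - frac) + array[lo + 1] * frac
```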
- Publication number: 20250036284
  Abstract: Methods, systems, and devices for techniques for data transfer between tiered memory devices are described. A memory system may include a data transfer engine to manage data transfers between different tiers of memory devices within the memory system. The data transfer engine may receive a command that includes a set of source addresses for each of a set of data sets and a set of destination addresses to which the data sets are to be transferred. The data transfer engine may schedule and perform a transfer operation to transfer each of the set of data sets from its respective source address to its respective destination address. The command may further include an indication of an interrupt policy from a set of interrupt policies supported by the data transfer engine. The selected interrupt policy determines how the data transfer engine handles interruptions to the data transfer operation.
  Type: Application
  Filed: July 16, 2024
  Publication date: January 30, 2025
  Inventors: David Andrew Roberts, Patrick Estep
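A toy model of the command shape and transfer loop described above; the field names and the policy name are assumptions, not the publication's:

```python
from dataclasses import dataclass

@dataclass
class TransferCommand:
    sources: list          # source address of each data set
    destinations: list     # destination address for each data set
    interrupt_policy: str = "resume"   # hypothetical policy name

def run_transfer(cmd, source_tier, dest_tier):
    """Move each data set from its source address in one memory tier to its
    destination address in another, one (source, destination) pair at a time."""
    for src, dst in zip(cmd.sources, cmd.destinations):
        dest_tier[dst] = source_tier[src]
```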
- Publication number: 20250028632
  Abstract: Disclosed in some examples are methods, systems, devices, and machine-readable mediums that use a global shared region of memory combining memory segments from multiple CXL devices. Each memory segment is the same size and naturally aligned in its own physical address space. The global shared region is contiguous and naturally aligned in the virtual address space. By organizing the global shared region in this manner, a series of three tables may be used to quickly translate a virtual address in the global shared region to a physical address. This prevents TLB thrashing and improves performance of the computing system.
  Type: Application
  Filed: October 7, 2024
  Publication date: January 23, 2025
  Inventors: Bryan Hornung, Patrick Estep
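The abstract does not spell out the three tables; below is one plausible (hypothetical) arrangement, in which the segment index selects a device, the device selects its physical base, and a per-segment offset finishes the walk. Fixed power-of-two segments make the whole translation three array look-ups plus masking:

```python
SEG_SHIFT = 28                       # 256 MiB segments (illustrative size)
SEG_MASK = (1 << SEG_SHIFT) - 1

def translate(vaddr, seg_to_dev, dev_base, dev_seg_offset):
    """Translate a virtual address in the global shared region to a
    physical address via three table look-ups."""
    seg = vaddr >> SEG_SHIFT         # which segment of the global region
    dev = seg_to_dev[seg]            # table 1: segment -> CXL device
    base = dev_base[dev]             # table 2: device -> physical base
    offset = dev_seg_offset[seg]     # table 3: segment -> offset on that device
    return base + offset + (vaddr & SEG_MASK)
```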
- Publication number: 20250021317
  Abstract: Devices and techniques for parallelizing loops that have loop-dependent variables are described herein. A system includes a processing device and a memory device configured to store instructions, which when executed by the processing device, cause the processing device to perform operations comprising: accessing, by a compiler executing on the processing device, a computer code listing; determining that the computer code listing includes a loop with a loop-carried dependency variable; optimizing the loop for parallel execution by removing the loop-carried dependency variable; and compiling the computer code listing into executable software code with the loop executable in parallel in hardware.
  Type: Application
  Filed: July 10, 2024
  Publication date: January 16, 2025
  Inventors: Bashar Romanous, Skyler Arron Windh, Patrick Estep
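A compiler-style before/after example of removing a loop-carried dependency, assuming the carried update is associative (here, a running sum); the chunking strategy is illustrative, not the publication's:

```python
def serial(values):
    """Original loop: each iteration depends on the previous one via acc."""
    acc = 0
    for v in values:
        acc = acc + v        # loop-carried dependency on acc
    return acc

def parallelizable(values, chunks=4):
    """Rewritten loop: each chunk is computed independently (so the chunks
    could run in parallel), then the partial results are combined."""
    step = -(-len(values) // chunks)            # ceiling division
    partials = [sum(values[i:i + step])
                for i in range(0, len(values), step)]
    return sum(partials)                        # combine after the loop
```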
- Patent number: 12182635
  Abstract: Devices and techniques for chained resource locking are described herein. Threads form a last-in-first-out (LIFO) queue on a resource lock to create a chained lock on the resource. A data store representing the lock for the resource holds the previous thread's identifier, enabling a subsequent thread to wake the previous thread using the identifier when the subsequent thread releases the lock. Generally, the thread releasing the lock need not interact with the data store, reducing contention for the data store among many threads.
  Type: Grant
  Filed: August 18, 2021
  Date of Patent: December 31, 2024
  Assignee: Micron Technology, Inc.
  Inventors: Patrick Estep, Tony M. Brewer
- Publication number: 20240427526
  Abstract: A memory controller can include a front end portion configured to interface with a host, a central controller portion configured to manage data, and a back end portion configured to interface with memory devices. The memory controller can include interface management circuitry coupled to a cache and a memory device. The memory controller can receive, by the interface management circuitry, a first signal indicative of data associated with a memory access request from a host. The memory controller can transmit a second signal indicative of the data to cache the data in a first location in the cache. The memory controller can transmit a third signal indicative of the data to cache the data in a second location in the cache.
  Type: Application
  Filed: September 10, 2024
  Publication date: December 26, 2024
  Inventors: Nicola Del Gatto, Emanuele Confalonieri, Paolo Amato, Patrick Estep, Stephen S. Pawlowski
- Patent number: 12174759
  Abstract: Linear interpolation is performed within a memory system. The memory system receives a floating-point index into an integer-indexed memory array. The memory system accesses the two values at the two adjacent integer indices, performs the linear interpolation, and provides the resulting interpolated value. In many system architectures, the critical limitation on system performance is the data transfer rate between memory and processing elements. Accordingly, reducing the amount of data transferred improves overall system performance and reduces power consumption.
  Type: Grant
  Filed: April 26, 2021
  Date of Patent: December 24, 2024
  Assignee: Micron Technology, Inc.
  Inventors: Bryan Hornung, Tony M. Brewer, Douglas Vanesko, Patrick Estep
- Patent number: 12141055
  Abstract: Disclosed in some examples are methods, systems, devices, and machine-readable mediums that use a global shared region of memory combining memory segments from multiple CXL devices. Each memory segment is the same size and naturally aligned in its own physical address space. The global shared region is contiguous and naturally aligned in the virtual address space. By organizing the global shared region in this manner, a series of three tables may be used to quickly translate a virtual address in the global shared region to a physical address. This prevents TLB thrashing and improves performance of the computing system.
  Type: Grant
  Filed: August 31, 2022
  Date of Patent: November 12, 2024
  Assignee: Micron Technology, Inc.
  Inventors: Bryan Hornung, Patrick Estep
- Patent number: 12093566
  Abstract: A memory controller can include a front end portion configured to interface with a host, a central controller portion configured to manage data, and a back end portion configured to interface with memory devices. The memory controller can include interface management circuitry coupled to a cache and a memory device. The memory controller can receive, by the interface management circuitry, a first signal indicative of data associated with a memory access request from a host. The memory controller can transmit a second signal indicative of the data to cache the data in a first location in the cache. The memory controller can transmit a third signal indicative of the data to cache the data in a second location in the cache.
  Type: Grant
  Filed: March 1, 2022
  Date of Patent: September 17, 2024
  Assignee: Micron Technology, Inc.
  Inventors: Nicola Del Gatto, Emanuele Confalonieri, Paolo Amato, Patrick Estep, Stephen S. Pawlowski
- Publication number: 20240192892
  Abstract: Systems, apparatuses, and methods related to data reconstruction based on queue depth comparison are described. To avoid accessing the “congested” channel, a read command to access the “congested” channel can be executed by accessing the other, relatively “idle” channels and utilizing data read from the “idle” channels to reconstruct the data corresponding to the read command.
  Type: Application
  Filed: December 6, 2023
  Publication date: June 13, 2024
  Inventors: Patrick Estep, Sean S. Eilert, Ameen D. Akel
- Publication number: 20240192955
  Abstract: Various examples are directed to systems and methods for executing a loop in a reconfigurable compute fabric. A first flow controller may initiate a first thread at a first synchronous flow to execute a first portion of a first iteration of the loop. A second flow controller may receive a first asynchronous message instructing the second flow controller to initiate a first thread at a second synchronous flow to execute a second portion of the first iteration. The second flow controller may determine that the first iteration of the loop is the last iteration of the loop to be executed and initiate the first thread at the second synchronous flow with a last iteration flag set.
  Type: Application
  Filed: January 29, 2024
  Publication date: June 13, 2024
  Inventors: Douglas Vanesko, Bryan Hornung, Patrick Estep
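A toy, queue-based model of the two flows and the last-iteration flag. The message structure is assumed for illustration, and the flag is set by the sender here, whereas in the claim it is the second flow controller that determines the last iteration:

```python
from queue import Queue

def first_flow(n_iterations, channel):
    """First synchronous flow: runs the first portion of each iteration and
    posts an asynchronous message for the second flow."""
    for i in range(n_iterations):
        channel.put({"iteration": i, "last": i == n_iterations - 1})

def second_flow(channel, log):
    """Second synchronous flow: runs the second portion of each iteration
    until a message arrives with the last-iteration flag set."""
    while True:
        msg = channel.get()
        log.append(("part2", msg["iteration"]))
        if msg["last"]:              # last-iteration flag ends the loop
            return
```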
- Patent number: 11907718
  Abstract: Various examples are directed to systems and methods for executing a loop in a reconfigurable compute fabric. A first flow controller may initiate a first thread at a first synchronous flow to execute a first portion of a first iteration of the loop. A second flow controller may receive a first asynchronous message instructing the second flow controller to initiate a first thread at a second synchronous flow to execute a second portion of the first iteration. The second flow controller may determine that the first iteration of the loop is the last iteration of the loop to be executed and initiate the first thread at the second synchronous flow with a last iteration flag set.
  Type: Grant
  Filed: August 18, 2021
  Date of Patent: February 20, 2024
  Assignee: Micron Technology, Inc.
  Inventors: Douglas Vanesko, Bryan Hornung, Patrick Estep
- Publication number: 20240004799
  Abstract: An apparatus can include a plurality of memory devices and a memory controller coupled to the plurality of memory devices via a plurality of memory channels. The plurality of memory channels are organized as a plurality of channel groups. The memory controller comprises a plurality of memory access request/response buffer sets, and each memory access request/response buffer set of the plurality of memory access request/response buffer sets corresponds to a different one of the plurality of channel groups.
  Type: Application
  Filed: May 26, 2023
  Publication date: January 4, 2024
  Inventors: Emanuele Confalonieri, Stephen S. Pawlowski, Patrick Estep
- Publication number: 20230418756
  Abstract: Systems, apparatuses, and methods related to a memory controller for cache bypass are described. An example memory controller can be coupled to a memory device. The example memory controller can include a cache including a cache sequence controller configured to determine a quantity of a given type of result of cache look-up operations, determine the quantity satisfies a bypass threshold, and cause performance of a bypass memory operation that bypasses the cache and accesses the memory device.
  Type: Application
  Filed: June 27, 2023
  Publication date: December 28, 2023
  Inventors: Emanuele Confalonieri, Patrick Estep, Stephen S. Pawlowski, Nicola Del Gatto
- Publication number: 20230393970
  Abstract: Disclosed in some examples are methods, systems, devices, and machine-readable mediums that use a global shared region of memory combining memory segments from multiple CXL devices. Each memory segment is the same size and naturally aligned in its own physical address space. The global shared region is contiguous and naturally aligned in the virtual address space. By organizing the global shared region in this manner, a series of three tables may be used to quickly translate a virtual address in the global shared region to a physical address. This prevents TLB thrashing and improves performance of the computing system.
  Type: Application
  Filed: August 31, 2022
  Publication date: December 7, 2023
  Inventors: Bryan Hornung, Patrick Estep