Patents by Inventor Tony M. Brewer

Tony M. Brewer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240143184
    Abstract: A system includes a host device, a memory device, and a command manager configured to reorder respective command responses for corresponding commands between the host device and the memory device. The command manager is further configured to receive a command response associated with a transaction identifier for each command. An index value for the command is written to a reordering queue. In response to a command response write for the command response, the index value from the reordering queue is read. The index value is written in an index update queue. A network write index update message is transmitted.
    Type: Application
    Filed: November 2, 2022
    Publication date: May 2, 2024
    Inventors: Michael Keith Dugan, Tony M. Brewer
  • Patent number: 11966741
    Abstract: Representative apparatus, method, and system embodiments are disclosed for a self-scheduling processor which also provides additional functionality. Representative embodiments include a self-scheduling processor, comprising: a processor core adapted to execute a received instruction; and a core control circuit adapted to automatically schedule an instruction for execution by the processor core in response to a received work descriptor data packet. In another embodiment, the core control circuit is also adapted to schedule a fiber create instruction for execution by the processor core, to reserve a predetermined amount of memory space in a thread control memory to store return arguments, and to generate one or more work descriptor data packets to another processor or hybrid threading fabric circuit for execution of a corresponding plurality of execution threads. Event processing, data path management, system calls, memory requests, and other new instructions are also disclosed.
    Type: Grant
    Filed: June 12, 2021
    Date of Patent: April 23, 2024
    Assignee: Micron Technology, Inc.
    Inventor: Tony M. Brewer
  • Patent number: 11960403
    Abstract: System and techniques for variable execution time atomic operations are described herein. When an atomic operation for a memory device is received, the run length of the operation is measured. If the run length is beyond a threshold, a cache line for the operation is locked while the operation runs. A result of the operation is queued until it can be written to the cache line. At that point, the cache line is unlocked.
    Type: Grant
    Filed: August 30, 2022
    Date of Patent: April 16, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Dean E. Walker, Tony M. Brewer
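The thresholded flow in the abstract above can be illustrated with a minimal Python sketch. The threshold value, class, and function names are assumptions for illustration, not the patented implementation: a long-running atomic operation locks its cache line, the result is queued, and the line is unlocked when the result is written back.

```python
# Sketch of a variable-execution-time atomic operation: if the measured
# run length exceeds a threshold, lock the cache line for the duration,
# queue the result, and unlock the line when the result is written back.

RUN_LENGTH_THRESHOLD = 4  # illustrative cycle-count threshold

class CacheLine:
    def __init__(self, value=0):
        self.value = value
        self.locked = False

def run_atomic(line, operation, run_length, result_queue):
    """Apply `operation` to `line`, locking the line if the run is long."""
    long_running = run_length > RUN_LENGTH_THRESHOLD
    if long_running:
        line.locked = True                 # protect the line during the run
    result = operation(line.value)         # the atomic computation itself
    result_queue.append((line, result))    # queue result for write-back
    return long_running

def drain(result_queue):
    """Write queued results back to their cache lines and unlock them."""
    while result_queue:
        line, result = result_queue.pop(0)
        line.value = result
        line.locked = False
```

A short-running operation (run length at or below the threshold) follows the same path but never sets the lock.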
  • Patent number: 11960768
    Abstract: Systems and techniques for a memory-side cache directory-based request queue are described herein. A memory request is received at an interface of a memory device. One or more fields of the memory request are written into an entry of a directory data structure. The identifier of the entry is pushed onto a queue. To perform the memory request, the identifier is popped off of the queue and a field of the memory request is retrieved from the entry of the directory data structure using the identifier. Then, a process on the memory request can be performed using the field retrieved from the entry of the directory data structure.
    Type: Grant
    Filed: August 30, 2022
    Date of Patent: April 16, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Tony M. Brewer, Dean E. Walker
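The directory-based queue described above keeps the queue itself small by storing only entry identifiers; the request fields live in a directory structure. A minimal Python sketch, with illustrative names, of that enqueue/dequeue indirection:

```python
# Directory-based request queue sketch: request fields are written into
# a directory entry, only the entry's identifier is pushed onto the
# queue, and the fields are retrieved by identifier on dequeue.
from collections import deque

class DirectoryQueue:
    def __init__(self):
        self.directory = {}   # entry identifier -> stored request fields
        self.queue = deque()  # holds only entry identifiers
        self._next_id = 0

    def enqueue(self, **fields):
        entry_id = self._next_id
        self._next_id += 1
        self.directory[entry_id] = fields  # write fields into the entry
        self.queue.append(entry_id)        # push the identifier
        return entry_id

    def dequeue(self):
        entry_id = self.queue.popleft()        # pop the identifier
        fields = self.directory.pop(entry_id)  # retrieve fields by id
        return entry_id, fields
```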
  • Publication number: 20240111538
    Abstract: Disclosed in some examples are methods, systems, devices, and machine-readable mediums which provide for more efficient CGRA execution by assigning different initiation intervals to different PEs executing the same code base. The initiation intervals may be multiples of each other, and the PE with the lowest initiation interval may be used to execute the instructions of the code that are to be executed at a greater frequency than other instructions, which may be assigned to PEs with higher initiation intervals.
    Type: Application
    Filed: November 30, 2023
    Publication date: April 4, 2024
    Inventors: Douglas Vanesko, Tony M. Brewer
  • Patent number: 11940919
    Abstract: System and techniques for recall pending cache line eviction are described herein. A queue that includes a deferred memory request is kept for a cache line. Metadata for the queue is stored in a cache line tag. When a recall is needed, the metadata is written from the tag to a first recall storage, referenced by a memory request ID. After the recall request is transmitted, the memory request ID is written to a second recall storage referenced by the message ID of the recall request. Upon receipt of a response to the recall request, the queue for the cache line can be restored by using the message ID in the response to look up the memory request ID from the second recall storage, then using the memory request ID to look up the metadata from the first recall storage, and then writing the metadata into the tag for the cache line.
    Type: Grant
    Filed: August 30, 2022
    Date of Patent: March 26, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Dean E. Walker, Tony M. Brewer
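The two-level recall bookkeeping in the abstract above can be sketched in a few lines of Python. The structure and names are illustrative assumptions: metadata is saved keyed by memory request ID, the request ID is then saved keyed by the recall's message ID, and the response's message ID recovers the metadata through both stores.

```python
# Two-level recall storage sketch: message ID -> memory request ID ->
# queue metadata, restoring the cache line tag after a recall completes.

first_recall = {}   # memory request ID -> queue metadata from the tag
second_recall = {}  # recall message ID -> memory request ID

def start_recall(tag_metadata, request_id, message_id):
    """Save queue metadata before eviction and map the recall message."""
    first_recall[request_id] = tag_metadata   # metadata leaves the tag
    second_recall[message_id] = request_id    # recall message -> request

def complete_recall(message_id):
    """Recover metadata from a recall response's message ID."""
    request_id = second_recall.pop(message_id)  # message ID -> request ID
    metadata = first_recall.pop(request_id)     # request ID -> metadata
    return metadata                             # written back into the tag
```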
  • Patent number: 11935869
    Abstract: A three-dimensional stacked integrated circuit (3D SIC) having a non-volatile memory die, a volatile memory die, a logic die, and a thermal management component. The non-volatile memory die, the volatile memory die, the logic die, and the thermal management component are stacked. The thermal management component can be stacked in between the non-volatile memory die and the logic die, stacked in between the volatile memory die and the logic die, or both.
    Type: Grant
    Filed: August 20, 2021
    Date of Patent: March 19, 2024
    Assignee: Micron Technology, Inc.
    Inventor: Tony M. Brewer
  • Publication number: 20240086355
    Abstract: A reconfigurable compute fabric can include multiple nodes, and each node can include multiple tiles with respective processing and storage elements. The tiles can be arranged in an array or grid and can be communicatively coupled. In an example, the tiles can be arranged in a one-dimensional array and each tile can be coupled to its respective adjacent neighbor tiles using a direct bus coupling. Each tile can be further coupled to at least one non-adjacent neighbor tile that is one tile, or device space, away using a passthrough bus. The passthrough bus can extend through intervening tiles.
    Type: Application
    Filed: November 10, 2023
    Publication date: March 14, 2024
    Inventors: Bryan Hornung, Tony M. Brewer
  • Patent number: 11928025
    Abstract: Systems, apparatuses, and methods related to memory device protection are described. A quantity of errors within a memory device can be determined and the determined quantity can be used to further determine whether to utilize single or multiple memory devices for an error correction and/or detection operation. Multiple memory devices need not be utilized for the error correction and/or detection operation unless a quantity of errors within the memory device exceeds a threshold quantity.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: March 12, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Tony M. Brewer, Brent Keeth
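The protection policy above is a simple threshold decision, sketched below in Python. The threshold value and function name are illustrative assumptions:

```python
# Thresholded device-protection policy sketch: a single memory device
# handles error correction/detection until its observed error count
# exceeds a threshold, after which multiple devices are used.

ERROR_THRESHOLD = 3  # illustrative threshold quantity

def devices_for_correction(error_count):
    """Return whether an error operation should use one device or many."""
    return "multiple" if error_count > ERROR_THRESHOLD else "single"
```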
  • Publication number: 20240069769
    Abstract: Disclosed in some examples are methods, systems, memory devices, and machine-readable mediums that allow a memory device to efficiently mark memory extents involved in an enhanced memory operation. An extent is marked if a meta state associated with the extent indicates that the extent is included in the enhanced memory operation. The largest memory extents of the operation are maintained in the memory device as a list of unmarked extents. When a primitive memory operation is received, the memory address is compared to the unmarked extents in the list and to the meta state for that memory line. If the address is covered by the list of extents, or that line's meta state is marked, then the memory operation is performed, including the enhanced memory operation.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Inventor: Tony M. Brewer
  • Publication number: 20240069984
    Abstract: Devices and techniques for asynchronous event message handling in a processor are described herein. A barrel multithreaded processor may include an asynchronous event handler to receive an indication of a thread create instruction from a parent thread, determine a return value size of return values from the indication of the thread create instruction, determine whether sufficient space exists in the memory to store the return values, allocate space in the memory to store the return values in response to determining that there is sufficient space in the memory to store the return values, and provide access to the return values from the allocated space to the parent thread based at least in part on a thread return instruction from the child thread.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Inventors: Christopher Baronne, Tony M. Brewer
  • Publication number: 20240069800
    Abstract: System and techniques for host-preferred memory operation are described herein. At a memory-side cache of a memory device that includes accelerator hardware, a first memory operation can be received from a host. A determination that the first memory operation corresponds to a cache set based on an address of the first memory operation is made. A second memory operation can be received from the accelerator hardware. Another determination can be made that the second memory operation corresponds to the cache set. Here, the first memory operation can be enqueued in a host queue of the cache set and the second memory operation can be enqueued in an internal request queue of the cache set. The first memory operation and the second memory operation can be executed as each is dequeued.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Inventors: Tony M. Brewer, Dean E. Walker
  • Publication number: 20240069802
    Abstract: A method performed by a distributed computing system includes receiving a work packet from a separate computing device via a fabric interconnect at a command manager (CM) of a memory controller of a fabric attached memory (FAM) device, wherein the work packet includes a memory access to be performed by a FAM computing resource local to the FAM device; determining a work class of the work packet; placing the work packet in a CM work queue local to the CM for the work class when space is available in the CM work queue for the work class; and when the CM work queue for the work class is full, placing the work packet in a destination work queue according to an address included in the work packet, wherein the destination queue is implemented in a memory array of the FAM device external to the memory controller.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Inventor: Tony M. Brewer
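The placement rule in the abstract above can be sketched as a short Python routine. The queue capacity and all names are illustrative assumptions: a work packet goes to the command manager's local queue for its work class when there is room, and otherwise spills to a destination queue selected by the packet's address.

```python
# Work-packet placement sketch: prefer the command manager's bounded
# local queue for the packet's work class; when it is full, place the
# packet in a destination queue (modeling a queue in the FAM memory
# array) chosen by the packet's address.

CM_QUEUE_CAPACITY = 2  # illustrative local queue bound

cm_queues = {}           # work class -> bounded local CM queue
destination_queues = {}  # address -> destination queue in memory

def place_work_packet(packet):
    work_class = packet["work_class"]
    queue = cm_queues.setdefault(work_class, [])
    if len(queue) < CM_QUEUE_CAPACITY:
        queue.append(packet)           # local CM queue has space
        return "cm"
    destination_queues.setdefault(packet["address"], []).append(packet)
    return "destination"               # CM queue full: spill to memory
```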
  • Publication number: 20240069795
    Abstract: A system includes a host device having a first buffer with ordered data. An accelerator device has a data movement processor and a reordering buffer. A multiple-channel interface couples the host device and the data movement processor of the accelerator device. The data movement processor is configured to issue a read command for a portion of the ordered data. In coordination with issuing the read command, an entry of the reordering buffer is allocated. A transaction identifier for the read command is allocated. Unordered responses are received from the host device via the multiple-channel interface. The responses include respective portions of the ordered data and a respective transaction identifier. The responses are reordered in the reordering buffer based on the respective transaction identifiers and the allocated entry of the reordering buffer.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Inventors: Michael Keith Dugan, Tony M. Brewer
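The reordering mechanism above pairs each issued read with a buffer slot and a transaction identifier, so responses arriving in any order land in their allocated slots. A minimal Python sketch, with illustrative names:

```python
# Reordering-buffer sketch: allocate a slot and transaction ID per
# issued read; place each unordered response by its transaction ID,
# recovering the original issue order.

class ReorderingBuffer:
    def __init__(self):
        self.slots = []        # ordered slots, one per issued read
        self.tid_to_slot = {}  # transaction ID -> slot index

    def issue_read(self, tid):
        self.tid_to_slot[tid] = len(self.slots)  # allocate the next slot
        self.slots.append(None)

    def receive(self, tid, data):
        self.slots[self.tid_to_slot[tid]] = data  # place by transaction ID

    def ordered_data(self):
        return self.slots
```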
  • Publication number: 20240070082
    Abstract: System and techniques for evicting a cache line with a pending control request are described herein. A memory request that includes an address corresponding to a set of cache lines can be received. A determination can be made that a cache line of the set of cache lines will be evicted to process the memory request. Another determination can be made that a control request has been made to a host from the memory device and that the control request is pending when it is determined that the cache line will be evicted. Here, a counter corresponding to the set of cache lines can be incremented (e.g., by one) to track the pending control request in the face of eviction. Then, the cache line can be evicted.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Inventors: Tony M. Brewer, Dean E. Walker
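The per-set counter described above can be sketched in Python; the class and field names are illustrative assumptions. The counter lets a control-request response be accounted for even after the cache line it targeted has been evicted:

```python
# Sketch of eviction with a pending control request: the cache-line set
# counts control requests that were still pending when a line was
# evicted, and decrements the count when each response arrives.

class CacheSet:
    def __init__(self, lines):
        self.lines = set(lines)
        self.pending_recalls = 0  # control requests outliving evicted lines

    def evict(self, line, control_request_pending):
        if control_request_pending:
            self.pending_recalls += 1  # remember the in-flight request
        self.lines.discard(line)       # then evict the line

    def control_response(self):
        self.pending_recalls -= 1      # response for an evicted line
```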
  • Publication number: 20240069805
    Abstract: A system includes a memory device and a command manager configured to reorder read requests. The command manager is configured to receive a read response associated with a transaction identifier for the read request. A free list entry for the read request is allocated from a free list. The free list entry is associated with a transaction identifier of the read request. A tail index of a reordering queue is written to a remapping queue based on the free list entry. The tail index is configured to provide a write address of the reordering queue that is allocated for the read request. A read response associated with the transaction identifier for the read request is received. The read response is written to the allocated entry of the reordering queue.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Inventors: Michael Keith Dugan, Tony M. Brewer
  • Publication number: 20240070078
    Abstract: System and techniques for recall pending cache line eviction are described herein. A queue that includes a deferred memory request is kept for a cache line. Metadata for the queue is stored in a cache line tag. When a recall is needed, the metadata is written from the tag to a first recall storage, referenced by a memory request ID. After the recall request is transmitted, the memory request ID is written to a second recall storage referenced by the message ID of the recall request. Upon receipt of a response to the recall request, the queue for the cache line can be restored by using the message ID in the response to look up the memory request ID from the second recall storage, then using the memory request ID to look up the metadata from the first recall storage, and then writing the metadata into the tag for the cache line.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Inventors: Dean E. Walker, Tony M. Brewer
  • Publication number: 20240070083
    Abstract: System and techniques for silent cache line eviction are described herein. A memory device receives a memory operation from a host. The memory operation establishes data and metadata in a cache line of the memory device upon receipt. The metadata is stored in a memory element that corresponds to the cache line. Later, an eviction trigger to evict the cache line is identified. Then, in response to the eviction trigger, current metadata of the cache line is compared with the metadata in the memory element to determine whether the metadata has changed. The cache line can be evicted without writing to backing memory in response to the metadata being unchanged.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Inventors: Tony M. Brewer, Dean E. Walker
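The silent-eviction check above is a compare-before-writeback. A minimal Python sketch, where the metadata representation and function names are illustrative assumptions:

```python
# Silent cache line eviction sketch: compare metadata captured when the
# line was established against its current metadata at eviction time;
# skip the write to backing memory if nothing changed.

saved_metadata = {}  # cache line id -> metadata at establishment

def establish(line_id, metadata):
    """Record the line's metadata when the memory operation establishes it."""
    saved_metadata[line_id] = dict(metadata)

def evict(line_id, current_metadata, backing_store, data):
    """Evict a line, writing back only if the metadata changed."""
    unchanged = saved_metadata.pop(line_id) == current_metadata
    if not unchanged:
        backing_store[line_id] = data  # metadata changed: write back
    return unchanged                   # True means a silent eviction
```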
  • Publication number: 20240070060
    Abstract: System and techniques for synchronized request handling at a memory device are described herein. A request is received at the memory device. Here, the request indicates a memory address corresponding to a set of cache lines and a single cache line in the set of cache lines. The memory device maintains a deferred list for the set of cache lines and a set of lists with each member of the set of lists corresponding to one cache line in the set of cache lines. The memory device tests the deferred list to determine that the deferred list is not empty and places the request in the deferred list.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Inventors: Tony M. Brewer, Dean E. Walker
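The deferred-list behavior above can be sketched in Python. The class shape and the `line_busy` flag are illustrative assumptions: once the set-wide deferred list is non-empty, later requests for that set also defer, which preserves arrival order:

```python
# Synchronized request handling sketch: each cache-line set keeps one
# shared deferred list plus a per-line list; a new request joins the
# deferred list whenever that list is non-empty.
from collections import deque

class CacheLineSet:
    def __init__(self, num_lines):
        self.deferred = deque()  # set-wide deferred list
        self.per_line = [deque() for _ in range(num_lines)]

    def handle(self, request, line_index, line_busy=False):
        # A request for a busy line is deferred; once the deferred list
        # is non-empty, later requests also defer to keep ordering.
        if line_busy or self.deferred:
            self.deferred.append(request)
        else:
            self.per_line[line_index].append(request)
```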
  • Publication number: 20240069801
    Abstract: Systems and techniques for a memory-side cache directory-based request queue are described herein. A memory request is received at an interface of a memory device. One or more fields of the memory request are written into an entry of a directory data structure. The identifier of the entry is pushed onto a queue. To perform the memory request, the identifier is popped off of the queue and a field of the memory request is retrieved from the entry of the directory data structure using the identifier. Then, a process on the memory request can be performed using the field retrieved from the entry of the directory data structure.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Inventors: Tony M. Brewer, Dean E. Walker