Patents by Inventor Ameen D. Akel

Ameen D. Akel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11740899
    Abstract: Methods, systems, and devices for in-memory associative processing are described. An apparatus may receive a set of instructions that indicate a first vector and a second vector as operands for a computational operation. The apparatus may select, from a set of vector mapping schemes, a vector mapping scheme for performing the computational operation using associative processing. The apparatus may write the first vector and the second vector to a set of planes each comprising an array of content-addressable memory cells based on the selected vector mapping scheme.
    Type: Grant
    Filed: January 18, 2022
    Date of Patent: August 29, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Sean S. Eilert, Ameen D. Akel, Justin Eno, Brian Hirano
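    Illustrative sketch (not part of the filing): the mapping-scheme selection described in the abstract above can be pictured roughly as below, in Python. The scheme names, the plane count, and the rule for picking a scheme per operation are hypothetical choices of mine.
      # Hypothetical sketch of choosing a vector mapping scheme and writing two
      # operand vectors across content-addressable-memory (CAM) planes.

      def split_contiguous(vec, num_planes):
          """Place contiguous chunks of the vector in successive planes."""
          chunk = (len(vec) + num_planes - 1) // num_planes
          return [vec[i * chunk:(i + 1) * chunk] for i in range(num_planes)]

      def split_interleaved(vec, num_planes):
          """Stripe the vector bit-by-bit across planes."""
          return [vec[p::num_planes] for p in range(num_planes)]

      # The selection rule below is invented for illustration: element-wise ops
      # get contiguous chunks, reductions get interleaving.
      MAPPING_SCHEMES = {"add": split_contiguous, "popcount": split_interleaved}

      def write_operands(op, vec_a, vec_b, num_planes=4):
          scheme = MAPPING_SCHEMES[op]          # select a vector mapping scheme
          return [{"A": a, "B": b}              # one CAM plane per entry
                  for a, b in zip(scheme(vec_a, num_planes), scheme(vec_b, num_planes))]

      planes = write_operands("add", [1, 0, 1, 1, 0, 0, 1, 0], [0, 1, 1, 0, 1, 0, 0, 1])
      print(planes[0])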
  • Publication number: 20230267043
    Abstract: Methods, systems, and devices for parity-based error management are described. A processing system that performs a computational operation on a set of operands may perform a computational operation (e.g., the same computational operation) on parity bits for the operands. The processing system may then use the parity bits that result from the computational operation on the parity bits to detect, and optionally correct, one or more errors in the output that results from the computational operation on the operands.
    Type: Application
    Filed: February 23, 2022
    Publication date: August 24, 2023
    Inventors: Ameen D. Akel, Helena Caminal, Sean S. Eilert
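    Illustrative sketch (not part of the publication): one way to read the abstract above, using XOR parity and a bitwise XOR as the computational operation (both choices are mine): the parity of the result can be predicted from the parities of the operands, so a mismatch flags an error.
      # Hypothetical model: compute an op on the operands and the same op on
      # their parity bits, then compare predicted vs. actual parity of the result.

      def parity(word):
          return bin(word).count("1") & 1   # even/odd parity of a machine word

      def xor_op(a, b):
          return a ^ b

      a, b = 0b1011_0010, 0b0101_1100
      result = xor_op(a, b)

      predicted_parity = xor_op(parity(a), parity(b))   # op applied to parity bits
      if parity(result) != predicted_parity:
          print("error detected in result")
      else:
          print("result parity checks out:", bin(result))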
  • Publication number: 20230236735
    Abstract: Systems and methods for area-efficient mitigation of errors that are caused by row hammer attacks and the like in a memory media device are described. The counters for counting row accesses are maintained in a content addressable memory (CAM) that provides fast access times. The detection of errors is deterministically performed while maintaining a number of row access counters that is smaller than the total number of rows protected in the memory media device. The circuitry for the detection and mitigation may be in the memory media device or in a memory controller to which the memory media device attaches. The memory media device may be dynamic random access memory (DRAM).
    Type: Application
    Filed: August 29, 2022
    Publication date: July 27, 2023
    Applicant: Micron Technology, Inc.
    Inventors: Sujeet Ayyapureddi, Yang Lu, Edmund Gieske, Cagdas Dirik, Ameen D. Akel, Elliott C. Cooper-Balis, Amitava Majumdar, Danilo Caraccio, Robert M. Walker
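    Illustrative sketch (not part of the publication): a toy model of keeping fewer row counters than rows. The threshold and the eviction policy below are simplified placeholders of mine; the publication claims deterministic detection, which this toy does not guarantee.
      # Toy row-hammer tracker: a small dict stands in for the CAM that holds
      # per-row access counters; far fewer counters than protected rows.

      HAMMER_THRESHOLD = 5      # hypothetical activation count that triggers mitigation
      MAX_COUNTERS = 4          # far fewer counters than DRAM rows

      counters = {}             # row -> access count (the "CAM")

      def on_row_activate(row):
          if row not in counters and len(counters) >= MAX_COUNTERS:
              # Evict the least-accessed tracked row to make room (one simple policy).
              evict = min(counters, key=counters.get)
              del counters[evict]
          counters[row] = counters.get(row, 0) + 1
          if counters[row] >= HAMMER_THRESHOLD:
              print(f"row {row}: refresh neighboring victim rows")
              counters[row] = 0

      for r in [7, 7, 7, 3, 9, 7, 7]:
          on_row_activate(r)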
  • Publication number: 20230236747
    Abstract: A computer system stores metadata that is used to identify physical memory devices that store randomly-accessible data for memory of the computer system. In one approach, access to memory in an address space is maintained by an operating system of the computer system. Stored metadata associates a first address range of the address space with a first memory device, and a second address range of the address space with a second memory device. The operating system manages processes running on the computer system by accessing the stored metadata. This management includes allocating memory based on the stored metadata so that data for a first process is stored in the first memory device, and data for a second process is stored in the second memory device.
    Type: Application
    Filed: March 27, 2023
    Publication date: July 27, 2023
    Inventors: Kenneth Marion Curewitz, Shivasankar Gunasekaran, Ameen D. Akel, Hongyu Wang, Justin M. Eno, Shivam Swami, Samuel E. Bradshaw
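    Illustrative sketch (not part of the publication): a minimal model of the metadata described above, assuming a simple range table and a per-process device preference. Names such as RANGE_TABLE and preferred_device are illustrative, not from the application.
      # Illustrative metadata: address ranges of one address space mapped to
      # different physical memory devices, consulted when allocating for a process.

      RANGE_TABLE = [
          {"start": 0x0000_0000, "end": 0x3FFF_FFFF, "device": "DRAM"},
          {"start": 0x4000_0000, "end": 0x7FFF_FFFF, "device": "NVRAM"},
      ]

      next_free = {"DRAM": 0x0000_0000, "NVRAM": 0x4000_0000}

      def device_for(addr):
          for entry in RANGE_TABLE:
              if entry["start"] <= addr <= entry["end"]:
                  return entry["device"]
          raise ValueError("unmapped address")

      def allocate(process, size, preferred_device):
          """Hand the process an address inside the range backed by its preferred device."""
          addr = next_free[preferred_device]
          next_free[preferred_device] += size
          print(f"{process}: {size} bytes at {addr:#x} on {device_for(addr)}")
          return addr

      allocate("process-A", 4096, "DRAM")    # latency-sensitive data on DRAM
      allocate("process-B", 4096, "NVRAM")   # capacity-oriented data on NVRAM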
  • Publication number: 20230214148
    Abstract: Methods, systems, and devices for redundant computing across planes are described. A device may perform a computational operation on first data that is stored in a first plane that includes content-addressable memory cells. The first data may be representative of a set of contiguous bits of a vector. The device may perform, concurrent with performing the computational operation on the first data, the computational operation on second data that is stored in a second plane. The second data may be representative of the set of contiguous bits of the vector. The device may read from the first plane and write to the second plane, third data representative of a result of the computational operation on the first data.
    Type: Application
    Filed: February 23, 2022
    Publication date: July 6, 2023
    Inventors: Sean S. Eilert, Kenneth M. Curewitz, Helena Caminal, Ameen D. Akel
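    Illustrative sketch (not part of the publication): a tiny simulation of the redundancy described above; the stand-in operation and plane layout are my own. The same bits live in two planes, the operation runs on both copies, and the result computed from plane 0 is written into plane 1.
      # Hypothetical simulation of redundant computing across two planes that
      # hold the same contiguous slice of a vector.

      plane0 = [1, 0, 1, 1]          # slice of the vector stored in plane 0
      plane1 = list(plane0)          # the same slice, stored redundantly in plane 1

      def complement(bits):
          """Stand-in computational operation (bitwise NOT of the slice)."""
          return [b ^ 1 for b in bits]

      # "Concurrently" perform the operation on both copies (sequential here).
      result0 = complement(plane0)
      result1 = complement(plane1)

      assert result0 == result1      # redundancy lets mismatches be detected

      # Read the result derived from plane 0 and write it into plane 1.
      plane1[:] = result0
      print("plane 1 now holds:", plane1)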
  • Patent number: 11693657
    Abstract: Methods, apparatuses, and systems for in- or near-memory processing are described. Strings of bits (e.g., vectors) may be fetched and processed in logic of a memory device without involving a separate processing unit. Operations (e.g., arithmetic operations) may be performed on numbers stored in a bit-serial way during a single sequence of clock cycles. Arithmetic may thus be performed in a single pass as bits of two or more strings of bits are fetched and without intermediate storage of the numbers. Vectors may be fetched (e.g., identified, transmitted, received) from one or more bit lines. Registers of the memory array may be used to write (e.g., store or temporarily store) results or ancillary bits (e.g., carry bits or carry flags) that facilitate arithmetic operations. Circuitry near, adjacent, or under the memory array may employ XOR or AND (or other) logic to fetch, organize, or operate on the data.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: July 4, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Dmitri Yudanov, Sean S. Eilert, Sivagnanam Parthasarathy, Shivasankar Gunasekaran, Ameen D. Akel
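    Illustrative sketch (not part of the patent): the bit-serial arithmetic alluded to above can be pictured as one XOR/AND step per fetched bit pair, with a register holding only the carry between cycles. This is a generic full-adder sketch, not the circuit claimed in the patent.
      # Bit-serial addition: process one bit from each operand per "cycle",
      # keeping only a carry flag between cycles (no intermediate number storage).

      def bit_serial_add(a_bits, b_bits):
          """a_bits/b_bits are least-significant-bit-first lists of 0/1."""
          carry = 0                       # register holding the carry flag
          out = []
          for a, b in zip(a_bits, b_bits):
              s = a ^ b ^ carry                       # XOR logic forms the sum bit
              carry = (a & b) | (carry & (a ^ b))     # AND/OR logic forms the carry
              out.append(s)
          out.append(carry)
          return out

      # 6 (LSB-first: 0,1,1) + 3 (LSB-first: 1,1,0)
      print(bit_serial_add([0, 1, 1], [1, 1, 0]))     # -> [1, 0, 0, 1] == 9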
  • Publication number: 20230208444
    Abstract: Methods, systems, and devices for associative computing for error correction are described. A device may receive first data representative of a first codeword of a size for error correction. The device may identify a set of content-addressable memory cells that stores data representative of a set of codewords each of which is the size of the first codeword. The device may identify second data representative of the first codeword in the set of content-addressable memory cells. Based on identifying the second data, the device may transmit an indication of a valid codeword that is mapped to the second data.
    Type: Application
    Filed: February 22, 2022
    Publication date: June 29, 2023
    Inventors: Ameen D. Akel, Helena Caminal, Sean S. Eilert
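    Illustrative sketch (not part of the publication): a small model of the lookup above. The codeword set, the single-bit-error precomputation, and the dict standing in for the CAM are all my illustration; the actual code and match semantics are not specified in the abstract.
      # Toy associative error correction: a dict stands in for the CAM. Every
      # 1-bit corruption of each valid codeword is stored and mapped back to it.

      VALID_CODEWORDS = [0b0000000, 0b1110100, 0b0111010, 0b1001110]   # made-up set
      WIDTH = 7

      cam = {}
      for cw in VALID_CODEWORDS:
          cam[cw] = cw                                  # exact copies match too
          for bit in range(WIDTH):
              cam[cw ^ (1 << bit)] = cw                 # single-bit-error variants

      def correct(received):
          if received in cam:                           # "associative" exact match
              return cam[received]                      # indication of the valid codeword
          raise ValueError("uncorrectable word")

      print(bin(correct(0b1110100 ^ 0b0000100)))        # one flipped bit, codeword recovered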
  • Patent number: 11687282
    Abstract: A memory sub-system configured to be responsive to a time to live requirement for load commands from a processor. For example, a load command issued by the processor (e.g., SoC) can include, or be associated with, an optional time to live parameter. The parameter requires that the data at the memory address be available within the time specified by the time to live parameter. When the requested data is currently in the lower speed memory (e.g., NAND flash) and not available in the higher speed memory (e.g., DRAM, NVRAM), the memory sub-system can determine that the data cannot be made available within the specified time and optionally skip the operations and return an error response immediately.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: June 27, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Shivasankar Gunasekaran, Samuel E. Bradshaw, Justin M. Eno, Ameen D. Akel
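    Illustrative sketch (not part of the patent): the time-to-live check above, with tier latencies and the command shape chosen by me for illustration. If the address only lives in the slow tier and that tier cannot meet the deadline, the sub-system returns an error instead of starting the access.
      # Hypothetical TTL-aware load: skip the access and return an error when the
      # backing tier cannot deliver the data within the requested time-to-live.

      LATENCY_NS = {"DRAM": 100, "NAND": 50_000}        # illustrative access latencies

      fast_tier = {0x1000: b"hot"}                      # addresses currently in DRAM
      slow_tier = {0x2000: b"cold"}                     # addresses only in NAND flash

      def load(addr, ttl_ns=None):
          tier = "DRAM" if addr in fast_tier else "NAND"
          if ttl_ns is not None and LATENCY_NS[tier] > ttl_ns:
              return ("ERROR", "cannot satisfy time-to-live")   # fail fast, no access
          data = fast_tier.get(addr) or slow_tier[addr]
          return ("OK", data)

      print(load(0x1000, ttl_ns=500))    # served from DRAM within the deadline
      print(load(0x2000, ttl_ns=500))    # NAND-only data misses the deadline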
  • Patent number: 11657002
    Abstract: Systems, methods and apparatuses to accelerate access to borrowed memory over a network connection are described. For example, a memory management unit (MMU) of a computing device can be configured to be connected both to the random access memory over a memory bus and to a computer network via a communication device. The computing device can borrow an amount of memory from a remote device over a network connection using the communication device; and applications running in the computing device can use virtual memory addresses mapped to the borrowed memory. When a virtual address mapped to the borrowed memory is used, the MMU translates the virtual address into a physical address and instructs the communication device to access the borrowed memory.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: May 23, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Samuel E. Bradshaw, Ameen D. Akel, Kenneth Marion Curewitz, Sean Stephen Eilert, Dmitri Yudanov
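    Illustrative sketch (not part of the patent): the translation path above, pictured roughly in Python. The page size, the range split, and the network call are placeholders of mine, not details from the patent.
      # Rough model of an MMU that maps some virtual pages to local RAM and others
      # to memory borrowed from a lender device over the network.

      PAGE = 4096
      page_table = {
          0: ("local", 0x0000),       # virtual page 0 -> local physical frame
          1: ("remote", 0x8000),      # virtual page 1 -> offset in borrowed memory
      }

      def network_read(remote_offset, length):
          """Placeholder for the communication device fetching borrowed memory."""
          return bytes(length)        # pretend the lender returned zeroed bytes

      def read(virtual_addr, length):
          page, offset = divmod(virtual_addr, PAGE)
          kind, base = page_table[page]
          if kind == "local":
              return f"local read at physical {base + offset:#x}"
          return network_read(base + offset, length)    # MMU directs the NIC instead

      print(read(0x0010, 8))          # hits local memory
      print(read(PAGE + 0x0010, 8))   # hits borrowed memory over the network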
  • Patent number: 11650742
    Abstract: A computer system stores metadata that is used to identify physical memory devices that store randomly-accessible data for memory of the computer system. In one approach, access to memory in an address space is maintained by an operating system of the computer system. Stored metadata associates a first address range of the address space with a first memory device, and a second address range of the address space with a second memory device. The operating system manages processes running on the computer system by accessing the stored metadata. This management includes allocating memory based on the stored metadata so that data for a first process is stored in the first memory device, and data for a second process is stored in the second memory device.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: May 16, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Kenneth Marion Curewitz, Shivasankar Gunasekaran, Ameen D. Akel, Hongyu Wang, Justin M. Eno, Shivam Swami, Samuel E. Bradshaw
  • Patent number: 11636893
    Abstract: An example memory sub-system includes: a plurality of bank groups, wherein each bank group comprises a plurality of memory banks; a plurality of row buffers, wherein two or more row buffers of the plurality of row buffers are associated with each bank group; and a processing logic communicatively coupled to the plurality of bank groups and the plurality of row buffers, the processing logic to perform operations comprising: receiving, from a host, a command identifying a row buffer of the plurality of row buffers; and performing an operation with respect to the identified row buffer.
    Type: Grant
    Filed: October 19, 2020
    Date of Patent: April 25, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Sean S. Eilert, Ameen D. Akel, Shivam Swami
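    Illustrative sketch (not part of the patent): one way to picture a command that names a row buffer. The buffer count, bank count, and command format here are invented for illustration.
      # Toy bank group with two row buffers; host commands name the buffer to use.

      class BankGroup:
          def __init__(self, banks=4, row_buffers=2):
              self.banks = [dict() for _ in range(banks)]       # bank -> {row: data}
              self.row_buffers = [None] * row_buffers           # cached open rows

          def activate(self, buffer_id, bank, row):
              """ACT-like command that also says which row buffer to fill."""
              data = self.banks[bank].setdefault(row, f"row{row}-data")
              self.row_buffers[buffer_id] = (bank, row, data)

          def read(self, buffer_id):
              """Read served from whichever row buffer the command identifies."""
              return self.row_buffers[buffer_id]

      bg = BankGroup()
      bg.activate(buffer_id=0, bank=1, row=42)
      bg.activate(buffer_id=1, bank=1, row=7)     # a second row stays open in parallel
      print(bg.read(0), bg.read(1))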
  • Patent number: 11636334
    Abstract: A system having multiple devices that can host different versions of an artificial neural network (ANN). In the system, inputs for the ANN can be obfuscated for centralized training of a master version of the ANN at a first computing device. A second computing device in the system includes memory that stores a local version of the ANN and user data for inputting into the local version. The second computing device includes a processor that extracts features from the user data and obfuscates the extracted features to generate obfuscated user data. The second device includes a transceiver that transmits the obfuscated user data. The first computing device includes a memory that stores the master version of the ANN, a transceiver that receives obfuscated user data transmitted from the second computing device, and a processor that trains the master version based on the received obfuscated user data using machine learning.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: April 25, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Samuel E. Bradshaw, Shivasankar Gunasekaran, Sean Stephen Eilert, Ameen D. Akel, Kenneth Marion Curewitz
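    Illustrative sketch (not part of the patent): the flow above, with a stand-in feature extractor and noise-based obfuscation that I chose for illustration; the abstract does not say how features are obfuscated or how the master version is trained.
      # Sketch: edge device extracts features, obfuscates them, and ships only the
      # obfuscated features to the device that trains the master ANN.
      import random

      def extract_features(user_data):
          """Stand-in feature extractor: per-record averages."""
          return [sum(rec) / len(rec) for rec in user_data]

      def obfuscate(features, noise=0.1):
          """Stand-in obfuscation: additive random noise."""
          return [f + random.uniform(-noise, noise) for f in features]

      def train_master(model, obfuscated_batches):
          """Stand-in training step: the master never sees raw user data."""
          flat = [x for batch in obfuscated_batches for x in batch]
          model["bias"] = sum(flat) / len(flat)
          return model

      user_data = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]       # stays on the edge device
      sent = obfuscate(extract_features(user_data))         # only this is transmitted
      print(train_master({"bias": 0.0}, [sent]))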
  • Patent number: 11621029
    Abstract: Several embodiments of memory devices and systems with selective page-based refresh are disclosed herein. In one embodiment, a memory device includes a controller operably coupled to a main memory having at least one memory region comprising a plurality of memory pages. The controller is configured to track, in one or more refresh schedule tables stored on the memory device and/or on a host device, a subset of memory pages in the plurality of memory pages configured to be refreshed according to a refresh schedule. In some embodiments, the controller is further configured to refresh the subset of memory pages in accordance with the refresh schedule.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: April 4, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Ameen D. Akel
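    Illustrative sketch (not part of the patent): a compact model of a refresh schedule table; the table layout, the intervals, and the tick-based loop are assumptions for illustration.
      # Toy refresh schedule table: only pages listed in the table are refreshed,
      # each on its own interval, instead of blanket-refreshing the whole region.

      refresh_table = {          # page -> refresh interval in ticks
          3: 2,                  # page 3 is leaky: refresh every 2 ticks
          7: 4,                  # page 7 can go longer between refreshes
      }                          # pages not listed are not refreshed at all

      def run(ticks):
          for t in range(1, ticks + 1):
              due = [page for page, interval in refresh_table.items() if t % interval == 0]
              if due:
                  print(f"tick {t}: refresh pages {due}")

      run(4)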
  • Publication number: 20230065783
    Abstract: Methods, systems, and devices for in-memory associative processing for vectors are described. A device may perform a computational operation on a first set of contiguous bits of a first vector and a first set of contiguous bits of a second vector. The first sets of contiguous bits may be stored in a first plane of a memory die and the computational operation may be based on a truth table for the computational operation. The device may perform a second computational operation on a second set of contiguous bits of the first vector and a second set of contiguous bits of the second vector. The second sets of contiguous bits may be stored in a second plane of the memory die, and the second computational operation may be based on the truth table for the computational operation.
    Type: Application
    Filed: January 13, 2022
    Publication date: March 2, 2023
    Inventors: Sean S. Eilert, Ameen D. Akel, Justin Eno, Brian Hirano
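    Illustrative sketch (not part of the publication): the per-plane, truth-table-driven step above might look like this; the AND truth table and the two-plane split are my example, not the filing's.
      # Sketch: a computational operation is defined by its truth table and applied
      # to corresponding chunks of two vectors, one chunk pair per memory plane.

      AND_TABLE = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}   # truth table for AND

      vector_a = [1, 0, 1, 1, 0, 1, 0, 0]
      vector_b = [1, 1, 0, 1, 0, 1, 1, 0]

      # First halves live in plane 0, second halves in plane 1.
      planes = [
          (vector_a[:4], vector_b[:4]),
          (vector_a[4:], vector_b[4:]),
      ]

      results = []
      for a_chunk, b_chunk in planes:                 # one pass per plane
          results.append([AND_TABLE[(a, b)] for a, b in zip(a_chunk, b_chunk)])

      print(results)   # [[1, 0, 0, 1], [0, 1, 0, 0]]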
  • Publication number: 20230069790
    Abstract: Methods, systems, and devices for in-memory associative processing are described. An apparatus may receive a set of instructions that indicate a first vector and a second vector as operands for a computational operation. The apparatus may select, from a set of vector mapping schemes, a vector mapping scheme for performing the computational operation using associative processing. The apparatus may write the first vector and the second vector to a set of planes each comprising an array of content-addressable memory cells based on the selected vector mapping scheme.
    Type: Application
    Filed: January 18, 2022
    Publication date: March 2, 2023
    Inventors: Sean S. Eilert, Ameen D. Akel, Justin Eno, Brian Hirano
  • Patent number: 11556656
    Abstract: Methods and apparatus of an Exclusive OR (XOR) engine in a random access memory device to accelerate cryptographic operations in processors. For example, an integrated circuit memory device enclosed within a single integrated circuit package can include an XOR engine that is coupled with memory units in the random access memory device (e.g., having dynamic random access memory (DRAM) or non-volatile random access memory (NVRAM)). A processor (e.g., System-on-Chip (SoC) or Central Processing Unit (CPU)) can have encryption logic that performs cryptographic operations using XOR operations that are performed by the XOR engine in the random access memory device using the data in the random access memory device.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: January 17, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Shivam Swami, Sean S. Eilert, Ameen D. Akel, Kenneth Marion Curewitz, Hongyu Wang
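    Illustrative sketch (not part of the patent): the division of labor above, modeled with a keystream XOR; the cipher use and the interface are illustrative, since the patent does not specify them.
      # Sketch: the memory device exposes an XOR engine that combines data already
      # in RAM with a keystream, so the processor's crypto path offloads the XOR.

      class MemoryWithXorEngine:
          def __init__(self):
              self.cells = {}                       # address -> byte value

          def write(self, addr, data):
              for i, b in enumerate(data):
                  self.cells[addr + i] = b

          def xor_engine(self, addr, operand):
              """XOR performed inside the memory device, near the stored data."""
              return bytes(self.cells[addr + i] ^ b for i, b in enumerate(operand))

      mem = MemoryWithXorEngine()
      mem.write(0x100, b"secret!!")
      keystream = bytes([0x5A] * 8)                 # produced by the SoC's cipher logic
      ciphertext = mem.xor_engine(0x100, keystream) # XOR happens in the RAM device
      plaintext = bytes(c ^ k for c, k in zip(ciphertext, keystream))
      print(plaintext)                              # b'secret!!'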
  • Publication number: 20230009642
    Abstract: Methods, systems, and devices for programmable metadata and related operations are described. A method may include receiving signaling that indicates a set of rules for transitions of states of metadata at a memory device storing the metadata. The memory device may receive a command from a host device associated with a set of data after receiving the set of rules. The memory device may transition metadata associated with the set of data stored at the memory device from a first state to a second state based in part on the set of rules and the command. The memory device may execute the command received from the host device.
    Type: Application
    Filed: July 7, 2021
    Publication date: January 12, 2023
    Inventors: Sean S. Eilert, Justin Eno, Ameen D. Akel
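    Illustrative sketch (not part of the publication): a minimal model of rule-driven metadata transitions; the states, commands, and rule encoding are invented for illustration.
      # Sketch: the host programs a rule table; each incoming command moves the
      # metadata of the touched data from one state to another per those rules.

      rules = {                      # (current_state, command) -> next_state
          ("clean", "WRITE"): "dirty",
          ("dirty", "FLUSH"): "clean",
          ("clean", "READ"):  "clean",
          ("dirty", "READ"):  "dirty",
      }

      metadata = {"blockA": "clean"}     # per-data metadata state

      def handle(command, block):
          state = metadata[block]
          metadata[block] = rules.get((state, command), state)   # transition per rules
          print(f"{command} {block}: {state} -> {metadata[block]}")
          # ... then execute the command against the stored data ...

      handle("WRITE", "blockA")
      handle("FLUSH", "blockA")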
  • Publication number: 20230004502
    Abstract: Systems, methods and apparatuses of distributed computing based on memory as a service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to the local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to a local memory of a remote device in the group for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time.
    Type: Application
    Filed: September 13, 2022
    Publication date: January 5, 2023
    Inventors: Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert, Dmitri Yudanov
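    Illustrative sketch (not part of the publication): the lifecycle above, with placeholder API names and timing of my own. A virtual region is backed locally while the application runs here, is later re-mapped to a peer's memory, and work on that region is then delegated to the peer.
      # Sketch: a virtual memory region backed locally in a first period, then
      # mapped to a remote peer's memory, after which processing is requested there.

      class Region:
          def __init__(self, name):
              self.name = name
              self.backing = "local"            # where the region currently lives

          def map_remote(self, peer):
              self.backing = f"remote:{peer}"   # second period: peer's local memory

          def process(self, task):
              if self.backing == "local":
                  return f"{task} ran locally on {self.name}"
              peer = self.backing.split(":", 1)[1]
              return f"asked {peer} to run {task} on {self.name}"   # remote request

      region = Region("vaddr[0x4000_0000..0x4fff_ffff]")
      print(region.process("reduce"))       # first period: local execution
      region.map_remote("device-B")
      print(region.process("reduce"))       # second period: processed where the data is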
  • Publication number: 20220417326
    Abstract: Systems, methods and apparatuses to provide memory as a service are described. For example, a borrower device is configured to: communicate with a lender device; borrow an amount of memory from the lender device; expand memory capacity of the borrower device for applications running on the borrower device, using at least the local memory of the borrower device and the amount of memory borrowed from the lender device; and service accesses by the applications to memory via a communication link between the borrower device and the lender device.
    Type: Application
    Filed: August 30, 2022
    Publication date: December 29, 2022
    Inventors: Dmitri Yudanov, Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert
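    Illustrative sketch (not part of the publication): a borrower-side view of the capacity expansion above; the sizes and the lender interface are illustrative.
      # Sketch: a borrower device presents applications with one memory pool made of
      # its own local memory plus memory borrowed from a lender over a link.

      LOCAL_MB = 512

      class Borrower:
          def __init__(self):
              self.borrowed_mb = 0
              self.used_mb = 0

          def borrow(self, lender, amount_mb):
              # Placeholder for negotiating the loan over the communication link.
              self.borrowed_mb += amount_mb
              print(f"borrowed {amount_mb} MB from {lender}")

          def capacity_mb(self):
              return LOCAL_MB + self.borrowed_mb      # what applications can allocate

          def allocate(self, mb):
              if self.used_mb + mb > self.capacity_mb():
                  raise MemoryError("exceeds local + borrowed capacity")
              self.used_mb += mb
              tier = "local" if self.used_mb <= LOCAL_MB else "borrowed (via link)"
              print(f"allocated {mb} MB, served from {tier}")

      b = Borrower()
      b.borrow("lender-device", 1024)
      b.allocate(256)      # fits in local memory
      b.allocate(512)      # spills into the borrowed range, accessed over the link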
  • Publication number: 20220392509
    Abstract: Methods, systems, and devices for memory accessing with auto-precharge are described. For example, a memory system may be configured to support an activate with auto-precharge command, which may be associated with a memory device opening a page of memory cells, latching respective logic states stored by the memory cells at a row buffer, writing logic states back to the page of memory cells, and maintaining the latched logic states at the row buffer (e.g., while maintaining power to latches of the row buffer, after closing the page of memory cells, while the page of memory cells is closed).
    Type: Application
    Filed: June 22, 2022
    Publication date: December 8, 2022
    Inventors: Shivam Swami, Sean S. Eilert, Ameen D. Akel
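    Illustrative sketch (not part of the publication): the effect of the command described above, modeled as a tiny simulation; command naming beyond the abstract's description and the data values are illustrative.
      # Sketch of "activate with auto-precharge": open a row, latch it in the row
      # buffer, write it back and close the row, but keep the latched copy usable.

      class Bank:
          def __init__(self):
              self.rows = {5: [1, 0, 1, 0]}     # row -> cell contents
              self.open_row = None
              self.row_buffer = None            # latches keep this after precharge

          def activate_with_autoprecharge(self, row):
              self.open_row = row
              self.row_buffer = list(self.rows[row])    # latch the row's logic states
              self.rows[row] = list(self.row_buffer)    # write back (restore) the cells
              self.open_row = None                      # auto-precharge closes the page
              # The latched copy intentionally stays in the row buffer.

          def read_buffer(self):
              return self.row_buffer                    # served without reopening the row

      bank = Bank()
      bank.activate_with_autoprecharge(5)
      print(bank.open_row, bank.read_buffer())          # None [1, 0, 1, 0]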