Patents by Inventor Kenneth Marion Curewitz

Kenneth Marion Curewitz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960738
    Abstract: Systems, methods, and apparatus related to a memory system that manages an interface for a volatile memory device and a non-volatile memory device to control memory system power. In one approach, a controller evaluates the demand on memory performance. If the demand of the host's current computation task is high, a DRAM device is powered up to meet it. Otherwise, if the non-volatile memory device is adequate to meet the demand, the DRAM device is partially or fully powered down to save power. In another approach, a task performed for a host device uses one or more resources of a first memory device (e.g., DRAM). A performance capability of a second memory device (e.g., NVRAM) is determined. A controller of the memory system determines whether the performance capability of the second memory device is adequate to service the task.
    Type: Grant
    Filed: October 27, 2022
    Date of Patent: April 16, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Shivam Swami, Kenneth Marion Curewitz
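
As a rough, hypothetical illustration of the power-management approach in the abstract above (the class names, bandwidth figures, and decision rule below are invented for this sketch and are not taken from the patent), a controller that keeps DRAM powered only when the non-volatile device cannot meet the current demand might look like this:

```python
# Minimal sketch: DRAM is powered up only when the task's demand exceeds what
# the non-volatile device can deliver. All values here are placeholders.
from dataclasses import dataclass

@dataclass
class MemoryDevice:
    name: str
    bandwidth_gbs: float      # sustained bandwidth the device can deliver
    powered: bool = True

class MemorySystemController:
    def __init__(self, dram: MemoryDevice, nvram: MemoryDevice):
        self.dram = dram
        self.nvram = nvram

    def service_task(self, demand_gbs: float) -> MemoryDevice:
        """Pick the device for a task and adjust DRAM power accordingly."""
        if demand_gbs <= self.nvram.bandwidth_gbs:
            # Non-volatile memory is adequate: power DRAM down to save energy.
            self.dram.powered = False
            return self.nvram
        # Demand is high: power DRAM up to meet it.
        self.dram.powered = True
        return self.dram

controller = MemorySystemController(
    dram=MemoryDevice("DRAM", bandwidth_gbs=25.0),
    nvram=MemoryDevice("NVRAM", bandwidth_gbs=3.0),
)
print(controller.service_task(demand_gbs=1.5).name)   # NVRAM; DRAM powered down
print(controller.service_task(demand_gbs=10.0).name)  # DRAM; powered back up
```
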
  • Patent number: 11954042
    Abstract: Systems, methods and apparatuses of distributed computing based on memory as a service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to its local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to the local memory of a remote device in the set for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: April 9, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert, Dmitri Yudanov
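
The remapping described above can be illustrated with a small, hypothetical Python sketch (the Peer/Device classes, region names, and in-memory dictionaries are stand-ins invented here, not the patent's design): one virtual region is first backed by local memory, then remapped to a peer's memory, and the peer is asked to process the data in place.

```python
# Toy model of mapping a virtual memory region to local or remote memory
# and requesting remote processing. Names and structures are illustrative.
class Peer:
    def __init__(self, name):
        self.name = name
        self.memory = {}          # simplistic local memory: region -> bytes

    def process(self, region):
        data = self.memory.get(region, b"")
        return f"{self.name} processed {len(data)} bytes in {region}"

class Device(Peer):
    def __init__(self, name):
        super().__init__(name)
        self.mapping = {}         # virtual region -> Peer hosting it

    def map_region(self, region, host):
        self.mapping[region] = host

    def request_processing(self, region):
        return self.mapping[region].process(region)

local = Device("device-A")
remote = Peer("device-B")

# First period: the region is hosted locally while the application runs on device-A.
local.memory["region-0"] = b"x" * 64
local.map_region("region-0", local)

# Second period: the same virtual region is remapped to device-B's memory,
# and device-B is asked to process the data where it lives.
remote.memory["region-0"] = b"x" * 64
local.map_region("region-0", remote)
print(local.request_processing("region-0"))
```
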
  • Patent number: 11868268
    Abstract: A computer system includes physical memory devices of different types that store randomly-accessible data in a main memory of the computer system. In one approach, data is stored in memory at one or more logical addresses allocated to an application by an operating system. The data is physically stored in a first memory device of a first memory type (e.g., NVRAM). The operating system determines an access pattern for the stored data. In response to determining the access pattern, the data is moved from the first memory device to a second memory device of a different memory type (e.g., DRAM).
    Type: Grant
    Filed: February 7, 2022
    Date of Patent: January 9, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Kenneth Marion Curewitz, Sean S. Eilert, Hongyu Wang, Samuel E. Bradshaw, Shivasankar Gunasekaran, Justin M. Eno, Shivam Swami
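
A minimal sketch of the access-pattern-driven migration described above (the promotion threshold and data structures are invented for illustration, not taken from the patent): data starts in the slower memory type and is promoted to the faster one once it has been accessed often enough.

```python
# Hypothetical tiering example: repeated reads promote data from "NVRAM" to "DRAM".
class TieredMemory:
    HOT_THRESHOLD = 3                     # accesses before promotion (made up)

    def __init__(self):
        self.nvram = {}                   # logical address -> data
        self.dram = {}
        self.access_count = {}

    def write(self, addr, data):
        self.nvram[addr] = data           # new allocations land in NVRAM first

    def read(self, addr):
        if addr in self.dram:
            return self.dram[addr]
        self.access_count[addr] = self.access_count.get(addr, 0) + 1
        if self.access_count[addr] >= self.HOT_THRESHOLD:
            # Observed access pattern is "hot": move the data to DRAM.
            self.dram[addr] = self.nvram.pop(addr)
            return self.dram[addr]
        return self.nvram[addr]

mem = TieredMemory()
mem.write(0x1000, "payload")
for _ in range(4):
    mem.read(0x1000)
print("in DRAM:", 0x1000 in mem.dram)    # True after repeated accesses
```
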
  • Patent number: 11755884
    Abstract: A system having multiple devices that can host different versions of an artificial neural network (ANN). In the system, changes to local versions of the ANN can be combined with a master version of the ANN. In the system, a first device can include memory that can store the master version, a second device can include memory that can store a local version of the ANN, and there can be many devices that store local versions of the ANN. The second device (or any other device of the system hosting a local version) can include a processor that can train the local version, and a transceiver that can transmit changes to the local version generated from the training. The first device can include a transceiver that can receive the changes to a local version, and a processing device that can combine the received changes with the master version.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: September 12, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Sean Stephen Eilert, Shivasankar Gunasekaran, Ameen D. Akel, Kenneth Marion Curewitz, Hongyu Wang
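
The combine step in the abstract above can be sketched in a few lines of Python; the averaging merge rule below is a common federated-learning convention chosen for illustration, and the function names and numbers are hypothetical rather than the patent's method.

```python
# Illustrative merge of local ANN changes into a master version.
def local_update(gradient, lr=0.1):
    """Return the *change* (delta) a device would transmit after local training."""
    return [-lr * g for g in gradient]

def combine(master, deltas):
    """Average the received changes and apply them to the master weights."""
    n = len(deltas)
    avg = [sum(d[i] for d in deltas) / n for i in range(len(master))]
    return [w + a for w, a in zip(master, avg)]

master_weights = [0.5, -0.2, 0.1]

# Two devices train their local versions and transmit only their changes.
delta_a = local_update(gradient=[0.3, -0.1, 0.2])
delta_b = local_update(gradient=[0.1, 0.4, -0.2])

master_weights = combine(master_weights, [delta_a, delta_b])
print(master_weights)
```
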
  • Publication number: 20230236747
    Abstract: A computer system stores metadata that is used to identify physical memory devices that store randomly-accessible data for memory of the computer system. In one approach, access to memory in an address space is maintained by an operating system of the computer system. Stored metadata associates a first address range of the address space with a first memory device, and a second address range of the address space with a second memory device. The operating system manages processes running on the computer system by accessing the stored metadata. This management includes allocating memory based on the stored metadata so that data for a first process is stored in the first memory device, and data for a second process is stored in the second memory device.
    Type: Application
    Filed: March 27, 2023
    Publication date: July 27, 2023
    Inventors: Kenneth Marion Curewitz, Shivasankar Gunasekaran, Ameen D. Akel, Hongyu Wang, Justin M. Eno, Shivam Swami, Samuel E. Bradshaw
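
A small sketch of the metadata idea above (the table layout, address ranges, and per-process policy are invented for this example): stored metadata ties address ranges to physical devices, and the allocator consults it so each process's data lands on its designated device.

```python
# Hypothetical metadata and allocator; none of these values come from the patent.
RANGE_METADATA = [
    {"start": 0x0000_0000, "end": 0x3FFF_FFFF, "device": "DRAM"},
    {"start": 0x4000_0000, "end": 0x7FFF_FFFF, "device": "NVRAM"},
]

PROCESS_POLICY = {"proc-A": "DRAM", "proc-B": "NVRAM"}   # per-process placement

_CURSORS = {"DRAM": 0x0000_0000, "NVRAM": 0x4000_0000}   # next free address per device

def device_for(addr):
    """Look up which physical device backs a given address, per the metadata."""
    for entry in RANGE_METADATA:
        if entry["start"] <= addr <= entry["end"]:
            return entry["device"]
    raise ValueError("address outside managed ranges")

def allocate(process, size):
    """Allocate memory for a process on the device its policy names."""
    device = PROCESS_POLICY[process]
    addr = _CURSORS[device]
    _CURSORS[device] += size
    assert device_for(addr) == device
    return addr

print(hex(allocate("proc-A", 4096)))   # falls in the DRAM range
print(hex(allocate("proc-B", 4096)))   # falls in the NVRAM range
```
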
  • Patent number: 11657002
    Abstract: Systems, methods and apparatuses to accelerate accessing of borrowed memory over a network connection are described. For example, a memory management unit (MMU) of a computing device can be configured to be connected both to local random access memory over a memory bus and to a computer network via a communication device. The computing device can borrow an amount of memory from a remote device over a network connection using the communication device; and applications running in the computing device can use virtual memory addresses mapped to the borrowed memory. When a virtual address mapped to the borrowed memory is used, the MMU translates the virtual address into a physical address and instructs the communication device to access the borrowed memory.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: May 23, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Samuel E. Bradshaw, Ameen D. Akel, Kenneth Marion Curewitz, Sean Stephen Eilert, Dmitri Yudanov
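
As a hypothetical illustration of the address translation above (the page-table layout, page size, and "NIC" wording are placeholders invented for this sketch), an MMU-style lookup resolves a virtual address either to local RAM or to memory borrowed over the network:

```python
# Toy MMU: local pages are read directly; borrowed pages go through the
# communication device. All structures here are illustrative.
PAGE_SIZE = 4096

class BorrowedMemoryMMU:
    def __init__(self):
        # virtual page number -> ("local", physical page) or ("remote", host, page)
        self.page_table = {}

    def map_local(self, vpn, ppn):
        self.page_table[vpn] = ("local", ppn)

    def map_borrowed(self, vpn, host, remote_page):
        self.page_table[vpn] = ("remote", host, remote_page)

    def load(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        entry = self.page_table[vpn]
        if entry[0] == "local":
            return f"read local page {entry[1]}, offset {offset}"
        # Borrowed memory: instruct the communication device to fetch it.
        _, host, page = entry
        return f"ask NIC to read page {page} at offset {offset} from {host}"

mmu = BorrowedMemoryMMU()
mmu.map_local(0, ppn=7)
mmu.map_borrowed(1, host="lender-01", remote_page=42)
print(mmu.load(0x0040))   # local access
print(mmu.load(0x1010))   # goes over the network connection
```
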
  • Patent number: 11650742
    Abstract: A computer system stores metadata that is used to identify physical memory devices that store randomly-accessible data for memory of the computer system. In one approach, access to memory in an address space is maintained by an operating system of the computer system. Stored metadata associates a first address range of the address space with a first memory device, and a second address range of the address space with a second memory device. The operating system manages processes running on the computer system by accessing the stored metadata. This management includes allocating memory based on the stored metadata so that data for a first process is stored in the first memory device, and data for a second process is stored in the second memory device.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: May 16, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Kenneth Marion Curewitz, Shivasankar Gunasekaran, Ameen D. Akel, Hongyu Wang, Justin M. Eno, Shivam Swami, Samuel E. Bradshaw
  • Patent number: 11636334
    Abstract: A system having multiple devices that can host different versions of an artificial neural network (ANN). In the system, inputs for the ANN can be obfuscated for centralized training of a master version of the ANN at a first computing device. A second computing device in the system includes memory that stores a local version of the ANN and user data for inputting into the local version. The second computing device includes a processor that extracts features from the user data and obfuscates the extracted features to generate obfuscated user data. The second device includes a transceiver that transmits the obfuscated user data. The first computing device includes a memory that stores the master version of the ANN, a transceiver that receives obfuscated user data transmitted from the second computing device, and a processor that trains the master version based on the received obfuscated user data using machine learning.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: April 25, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Samuel E. Bradshaw, Shivasankar Gunasekaran, Sean Stephen Eilert, Ameen D. Akel, Kenneth Marion Curewitz
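
A toy sketch of the privacy flow described above; the feature extraction and the noise-based obfuscation below are placeholders chosen for illustration, not the patent's method. The point is simply that the edge device transmits obfuscated features rather than raw user data, and the master trains on what it receives.

```python
# Hypothetical obfuscated-feature pipeline for centralized training.
import random

def extract_features(user_text):
    # Placeholder feature extraction: simple length/vowel statistics.
    return [len(user_text), sum(c in "aeiou" for c in user_text.lower())]

def obfuscate(features, noise=0.5, seed=0):
    # Placeholder obfuscation: additive random noise so raw values are not exposed.
    rng = random.Random(seed)
    return [f + rng.uniform(-noise, noise) for f in features]

class MasterModel:
    def __init__(self):
        self.samples = []

    def train(self, obfuscated_features):
        # Stand-in for a machine-learning update on the received features.
        self.samples.append(obfuscated_features)

edge_data = "example user sentence"
payload = obfuscate(extract_features(edge_data))   # what the transceiver sends

master = MasterModel()
master.train(payload)
print(payload)
```
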
  • Publication number: 20230046808
    Abstract: Systems, methods, and apparatus related to a memory system that manages an interface for a volatile memory device and a non-volatile memory device to control memory system power. In one approach, a controller evaluates the demand on memory performance. If the demand of the host's current computation task is high, a DRAM device is powered up to meet it. Otherwise, if the non-volatile memory device is adequate to meet the demand, the DRAM device is partially or fully powered down to save power. In another approach, a task performed for a host device uses one or more resources of a first memory device (e.g., DRAM). A performance capability of a second memory device (e.g., NVRAM) is determined. A controller of the memory system determines whether the performance capability of the second memory device is adequate to service the task.
    Type: Application
    Filed: October 27, 2022
    Publication date: February 16, 2023
    Inventors: Shivam Swami, Kenneth Marion Curewitz
  • Patent number: 11556656
    Abstract: Methods and apparatus of an Exclusive OR (XOR) engine in a random access memory device to accelerate cryptographic operations in processors. For example, an integrated circuit memory device enclosed within a single integrated circuit package can include an XOR engine that is coupled with memory units in the random access memory device (e.g., having dynamic random access memory (DRAM) or non-volatile random access memory (NVRAM)). A processor (e.g., System-on-Chip (SoC) or Central Processing Unit (CPU)) can have encryption logic that performs cryptographic operations using XOR operations carried out by the XOR engine on the data held in the random access memory device.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: January 17, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Shivam Swami, Sean S. Eilert, Ameen D. Akel, Kenneth Marion Curewitz, Hongyu Wang
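
A minimal sketch of offloading the XOR step to the memory device, assuming a made-up device interface (the addresses, method names, and the keystream-style usage below are illustrative, not the patent's interface):

```python
# Hypothetical memory device exposing an XOR operation over data it already holds,
# so the processor's encryption logic can avoid moving both buffers to the CPU.
class MemoryDeviceWithXorEngine:
    def __init__(self):
        self.cells = {}                       # address -> bytes

    def write(self, addr, data: bytes):
        self.cells[addr] = data

    def read(self, addr) -> bytes:
        return self.cells[addr]

    def xor(self, dst, src_a, src_b):
        """XOR two buffers already in memory without moving them to the CPU."""
        a, b = self.cells[src_a], self.cells[src_b]
        self.cells[dst] = bytes(x ^ y for x, y in zip(a, b))

mem = MemoryDeviceWithXorEngine()
mem.write(0x00, b"plaintext")
mem.write(0x10, b"keystream")          # keystream placed in memory by the CPU
mem.xor(0x20, 0x00, 0x10)              # ciphertext produced inside the device
mem.xor(0x30, 0x20, 0x10)              # XOR again with the keystream to recover
print(mem.read(0x30))                  # b'plaintext'
```
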
  • Publication number: 20230004502
    Abstract: Systems, methods and apparatuses of distributed computing based on memory as a service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to its local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to the local memory of a remote device in the set for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time.
    Type: Application
    Filed: September 13, 2022
    Publication date: January 5, 2023
    Inventors: Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert, Dmitri Yudanov
  • Publication number: 20220417326
    Abstract: Systems, methods and apparatuses to provide memory as a service are described. For example, a borrower device is configured to: communicate with a lender device; borrow an amount of memory from the lender device; expand memory capacity of the borrower device for applications running on the borrower device, using at least the local memory of the borrower device and the amount of memory borrowed from the lender device; and service accesses by the applications to memory via a communication link between the borrower device and the lender device.
    Type: Application
    Filed: August 30, 2022
    Publication date: December 29, 2022
    Inventors: Dmitri Yudanov, Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert
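
The borrower/lender arrangement above can be sketched with a hypothetical address-routing rule (the class names, sizes, and the "addresses above local capacity go to the lender" convention are invented for this example):

```python
# Toy borrower/lender memory-as-a-service model. All interfaces are illustrative.
class Lender:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}

    def lend(self, amount):
        assert amount <= self.capacity
        return amount

    def write(self, offset, value):
        self.store[offset] = value

    def read(self, offset):
        return self.store.get(offset)

class Borrower:
    def __init__(self, local_size, lender, borrow):
        self.local = {}
        self.local_size = local_size
        self.lender = lender
        self.borrowed = lender.lend(borrow)     # total capacity = local + borrowed

    def write(self, addr, value):
        if addr < self.local_size:
            self.local[addr] = value
        else:
            self.lender.write(addr - self.local_size, value)   # over the link

    def read(self, addr):
        if addr < self.local_size:
            return self.local.get(addr)
        return self.lender.read(addr - self.local_size)

b = Borrower(local_size=1024, lender=Lender(capacity=4096), borrow=2048)
b.write(100, "local value")
b.write(2000, "value held by the lender")
print(b.read(100), "|", b.read(2000))
```
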
  • Patent number: 11500555
    Abstract: Systems, methods, and apparatus related to a memory system that manages an interface for a volatile memory device and a non-volatile memory device to control memory system power. In one approach, a controller evaluates the demand on memory performance. If the demand of the host's current computation task is high, a DRAM device is powered up to meet it. Otherwise, if the non-volatile memory device is adequate to meet the demand, the DRAM device is partially or fully powered down to save power. In another approach, a task performed for a host device uses one or more resources of a first memory device (e.g., DRAM). A performance capability of a second memory device (e.g., NVRAM) is determined. A controller of the memory system determines whether the performance capability of the second memory device is adequate to service the task.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: November 15, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Shivam Swami, Kenneth Marion Curewitz
  • Patent number: 11481334
    Abstract: Systems, methods and apparatuses of distributed computing based on Memory as a Service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to its local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to the local memory of a remote device in the set for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time.
    Type: Grant
    Filed: May 12, 2021
    Date of Patent: October 25, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert, Dmitri Yudanov
  • Publication number: 20220309291
    Abstract: A system having multiple devices that can host different versions of an artificial neural network (ANN) as well as different versions of a feature dictionary. In the system, encoded inputs for the ANN can be decoded by the feature dictionary, which allows encoded input to be sent to a master version of the ANN over a network instead of the original version of the input, which usually includes more data than the encoded input. Thus, by using the feature dictionary for training of a master ANN, data transmission can be reduced.
    Type: Application
    Filed: June 15, 2022
    Publication date: September 29, 2022
    Inventors: Kenneth Marion Curewitz, Ameen D. Akel, Hongyu Wang, Sean Stephen Eilert
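
A toy sketch of the feature-dictionary idea above; the dictionary contents and the nearest-entry encoding are invented for illustration. The device transmits a compact code, and the master uses a matching dictionary to decode it back into a feature vector for training, which is what reduces data transmission.

```python
# Hypothetical shared feature dictionary: code -> representative feature vector.
FEATURE_DICTIONARY = {
    0: [0.0, 0.1, 0.9],
    1: [0.8, 0.1, 0.1],
    2: [0.3, 0.3, 0.4],
}

def encode(features):
    """Pick the dictionary code whose entry is closest to the raw features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(FEATURE_DICTIONARY, key=lambda c: dist(FEATURE_DICTIONARY[c], features))

def decode(code):
    return FEATURE_DICTIONARY[code]

raw_input = [0.75, 0.15, 0.05]          # what the local device observes
code = encode(raw_input)                # a single small integer is transmitted
print("sent over network:", code)
print("decoded at master:", decode(code))
```
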
  • Publication number: 20220300437
    Abstract: A memory chip (e.g., DRAM) connecting a SoC and an accelerator chip (e.g., an AI accelerator chip). A system including the memory chip and the accelerator chip. The system can include the SoC. The memory chip can include first memory cells to store and provide computation input data (e.g., AI computation input data) received from the SoC to be used by the accelerator chip as computation input (e.g., AI computation input). The memory chip can include second memory cells to store and provide first computation output data (e.g., AI computation output data) received from the accelerator chip to be retrieved by the SoC or reused by the accelerator chip as computation input. The memory chip can also include third memory cells to store second computation output data (e.g., non-AI computation output data) related to non-AI tasks received from the SoC to be retrieved by the SoC for non-AI tasks.
    Type: Application
    Filed: June 10, 2022
    Publication date: September 22, 2022
    Inventors: Sean S. Eilert, Kenneth Marion Curewitz, Justin M. Eno
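
The three-region arrangement in the abstract above can be sketched schematically; the region names, the SoC/accelerator classes, and the doubling "computation" below are placeholders invented for this example, not the patent's design.

```python
# Schematic model of a memory chip shared by a SoC and an accelerator:
# one region for accelerator input written by the SoC, one for accelerator
# output, and one reserved for the SoC's non-AI data.
class SharedMemoryChip:
    def __init__(self):
        self.ai_input = None        # first cells: SoC -> accelerator input
        self.ai_output = None       # second cells: accelerator -> SoC output
        self.non_ai = None          # third cells: SoC's non-AI data

class SoC:
    def submit_ai_job(self, chip, data):
        chip.ai_input = data

    def collect_result(self, chip):
        return chip.ai_output

    def stash_non_ai(self, chip, data):
        chip.non_ai = data

class Accelerator:
    def run(self, chip):
        # Reads input from the first region and writes results to the second;
        # results could also be re-read later as input for another pass.
        chip.ai_output = [x * 2 for x in chip.ai_input]

chip, soc, accel = SharedMemoryChip(), SoC(), Accelerator()
soc.stash_non_ai(chip, {"frame": 17})
soc.submit_ai_job(chip, [1, 2, 3])
accel.run(chip)
print(soc.collect_result(chip))     # [2, 4, 6]
```
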
  • Patent number: 11438414
    Abstract: Systems, methods and apparatuses to provide memory as a service are described. For example, a borrower device is configured to: communicate with a lender device; borrow an amount of memory from the lender device; expand memory capacity of the borrower device for applications running on the borrower device, using at least the local memory of the borrower device and the amount of memory borrowed from the lender device; and service accesses by the applications to memory via a communication link between the borrower device and the lender device.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: September 6, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Dmitri Yudanov, Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert
  • Publication number: 20220237039
    Abstract: Systems, methods and apparatuses to throttle network communications for memory as a service are described. For example, a computing device can borrow an amount of random access memory from a lender device over a communication connection between the lender device and the computing device. The computing device can allocate virtual memory to applications running in the computing device, and configure at least a portion of the virtual memory to be hosted on the amount of memory loaned by the lender device to the computing device. The computing device can throttle the data communications that memory regions use to access the borrowed memory over the communication connection, according to the criticality levels of the contents stored in those memory regions.
    Type: Application
    Filed: April 19, 2022
    Publication date: July 28, 2022
    Inventors: Sean Stephen Eilert, Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Dmitri Yudanov
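
A simplified sketch of criticality-based throttling; the criticality levels, region names, and the budget-based scheduling rule below are invented for illustration and are not the patent's algorithm. Higher-criticality regions are served first, and lower-criticality traffic is deferred when the link budget runs out.

```python
# Hypothetical throttling of transfers to borrowed memory by region criticality.
def schedule_transfers(requests, link_budget_kb):
    """Serve higher-criticality regions first; the rest wait for leftover budget.

    requests: list of (criticality, region, size_kb); higher criticality = more urgent.
    """
    served, deferred = [], []
    for crit, region, size in sorted(requests, key=lambda r: -r[0]):
        if size <= link_budget_kb:
            link_budget_kb -= size
            served.append(region)
        else:
            deferred.append(region)       # throttled until the link frees up
    return served, deferred

reqs = [(3, "page-cache", 64), (1, "prefetch", 256), (2, "heap", 128)]
print(schedule_transfers(reqs, link_budget_kb=200))
# (['page-cache', 'heap'], ['prefetch'])
```
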
  • Patent number: 11397694
    Abstract: A memory chip (e.g., DRAM) connecting a SoC and an accelerator chip (e.g., an AI accelerator chip). A system including the memory chip and the accelerator chip. The system can include the SoC. The memory chip can include first memory cells to store and provide computation input data (e.g., AI computation input data) received from the SoC to be used by the accelerator chip as computation input (e.g., AI computation input). The memory chip can include second memory cells to store and provide first computation output data (e.g., AI computation output data) received from the accelerator chip to be retrieved by the SoC or reused by the accelerator chip as computation input. The memory chip can also include third memory cells to store second computation output data (e.g., non-AI computation output data) related to non-AI tasks received from the SoC to be retrieved by the SoC for non-AI tasks.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: July 26, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Sean S. Eilert, Kenneth Marion Curewitz, Justin M. Eno
  • Patent number: 11392796
    Abstract: A system having multiple devices that can host different versions of an artificial neural network (ANN) as well as different versions of a feature dictionary. In the system, encoded inputs for the ANN can be decoded by the feature dictionary, which allows encoded input to be sent to a master version of the ANN over a network instead of the original version of the input, which usually includes more data than the encoded input. Thus, by using the feature dictionary for training of a master ANN, data transmission can be reduced.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: July 19, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Kenneth Marion Curewitz, Ameen D. Akel, Hongyu Wang, Sean Stephen Eilert