Patents by Inventor Kenneth Marion Curewitz

Kenneth Marion Curewitz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210081121
    Abstract: A computer system stores metadata that is used to identify physical memory devices that store randomly-accessible data for memory of the computer system. In one approach, access to memory in an address space is maintained by an operating system of the computer system. Stored metadata associates a first address range of the address space with a first memory device, and a second address range of the address space with a second memory device. The operating system manages processes running on the computer system by accessing the stored metadata. This management includes allocating memory based on the stored metadata so that data for a first process is stored in the first memory device, and data for a second process is stored in the second memory device.
    Type: Application
    Filed: September 17, 2019
    Publication date: March 18, 2021
    Inventors: Kenneth Marion Curewitz, Shivasankar Gunasekaran, Ameen D. Akel, Hongyu Wang, Justin M. Eno, Shivam Swami, Samuel E. Bradshaw
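    Illustrative sketch (hypothetical Python, not from the application): stored metadata associates address ranges with physical memory devices, and an allocator consults that metadata so each process's data lands on the intended device; the names, ranges, and bump-pointer policy below are assumptions.

      from dataclasses import dataclass

      @dataclass
      class AddressRange:
          start: int
          end: int          # exclusive
          device: str       # e.g. "DRAM" or "NVDIMM" (assumed device types)

      # Stored metadata: a first address range backed by a first device,
      # a second range backed by a second device.
      METADATA = [
          AddressRange(0x0000_0000, 0x4000_0000, "DRAM"),
          AddressRange(0x4000_0000, 0x8000_0000, "NVDIMM"),
      ]

      # Hypothetical per-process placement policy kept by the operating system.
      PROCESS_DEVICE = {"process_a": "DRAM", "process_b": "NVDIMM"}

      def allocate(process: str, size: int, cursors: dict) -> int:
          """Return an address inside the range whose device matches the process."""
          wanted = PROCESS_DEVICE[process]
          for r in METADATA:
              if r.device == wanted:
                  addr = cursors.get(r.device, r.start)
                  if addr + size > r.end:
                      raise MemoryError("range exhausted")
                  cursors[r.device] = addr + size   # simple bump-pointer allocation
                  return addr
          raise LookupError("no range for device " + wanted)

      cursors = {}
      print(hex(allocate("process_a", 4096, cursors)))   # falls in the DRAM-backed range
      print(hex(allocate("process_b", 4096, cursors)))   # falls in the NVDIMM-backed range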
  • Publication number: 20210056387
    Abstract: A system having multiple devices that can host different versions of an artificial neural network (ANN). In the system, changes to local versions of the ANN can be combined with a master version of the ANN. In the system, a first device can include memory that can store the master version, a second device can include memory that can store a local version of the ANN, and there can be many devices that store local versions of the ANN. The second device (or any other device of the system hosting a local version) can include a processor that can train the local version, and a transceiver that can transmit changes to the local version generated from the training. The first device can include a transceiver that can receive the changes to a local version, and a processing device that can combine the received changes with the master version.
    Type: Application
    Filed: August 20, 2019
    Publication date: February 25, 2021
    Inventors: Sean Stephen Eilert, Shivasankar Gunasekaran, Ameen D. Akel, Kenneth Marion Curewitz, Hongyu Wang
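    Illustrative sketch (hypothetical Python/NumPy): local devices train copies of the ANN and transmit only their parameter changes, and the hosting device combines the received changes with the master version; simple averaging stands in here for whatever combination scheme the application covers.

      import numpy as np

      master = {"w": np.zeros((4, 4)), "b": np.zeros(4)}   # master version of the ANN

      def local_training_delta(local):
          """Stand-in for on-device training: returns parameter changes, not user data."""
          trained = {k: v + 0.01 * np.random.randn(*v.shape) for k, v in local.items()}
          return {k: trained[k] - local[k] for k in local}

      def combine(master, deltas):
          """Fold the transmitted local changes into the master version."""
          for name in master:
              master[name] += np.mean([d[name] for d in deltas], axis=0)
          return master

      # Three devices hosting local versions send their changes to the first device.
      deltas = [local_training_delta(dict(master)) for _ in range(3)]
      master = combine(master, deltas)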
  • Publication number: 20210056405
    Abstract: A system having multiple devices that can host different versions of an artificial neural network (ANN). In the system, inputs for the ANN can be obfuscated for centralized training of a master version of the ANN at a first computing device. A second computing device in the system includes memory that stores a local version of the ANN and user data for inputting into the local version. The second computing device includes a processor that extracts features from the user data and obfuscates the extracted features to generate obfuscated user data. The second device includes a transceiver that transmits the obfuscated user data. The first computing device includes a memory that stores the master version of the ANN, a transceiver that receives obfuscated user data transmitted from the second computing device, and a processor that trains the master version based on the received obfuscated user data using machine learning.
    Type: Application
    Filed: August 20, 2019
    Publication date: February 25, 2021
    Inventors: Samuel E. Bradshaw, Shivasankar Gunasekaran, Sean Stephen Eilert, Ameen D. Akel, Kenneth Marion Curewitz
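    Illustrative sketch (hypothetical Python/NumPy): the second device extracts features from user data and obfuscates them before transmission, and the first device trains the master ANN on the obfuscated features rather than the raw data; the random-projection-plus-noise obfuscation and the toy update step are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      PROJECTION = rng.standard_normal((16, 8))   # assumed obfuscation transform

      def extract_features(user_data):
          return user_data.reshape(-1, 16)        # stand-in feature extractor

      def obfuscate(features):
          # Project to a lower dimension and add noise so the raw features are
          # not exactly recoverable on the receiving side.
          return features @ PROJECTION + 0.05 * rng.standard_normal((features.shape[0], 8))

      def train_master(weights, obfuscated_batch, labels):
          # Stand-in for one machine-learning step on the obfuscated inputs.
          grad = obfuscated_batch.T @ (obfuscated_batch @ weights - labels)
          return weights - 1e-3 * grad

      user_data = rng.standard_normal((4, 16))
      sent = obfuscate(extract_features(user_data))                    # second device transmits this
      master = train_master(np.zeros((8, 1)), sent, np.ones((4, 1)))   # first device trains on it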
  • Publication number: 20210056350
    Abstract: A system having multiple devices that can host different versions of an artificial neural network (ANN) as well as different versions of a feature dictionary. In the system, encoded inputs for the ANN can be decoded using the feature dictionary, which allows encoded input to be sent over a network to a master version of the ANN instead of the original input, which usually includes more data than the encoded input. Thus, using the feature dictionary for training of a master ANN can reduce data transmission.
    Type: Application
    Filed: August 20, 2019
    Publication date: February 25, 2021
    Inventors: Kenneth Marion Curewitz, Ameen D. Akel, Hongyu Wang, Sean Stephen Eilert
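    Illustrative sketch (hypothetical Python): a shared feature dictionary lets a device send compact codes instead of the larger original inputs, and the device hosting the master ANN decodes those codes back into features before training; the dictionary contents are made up for the example.

      # Both sides hold the same dictionary version: code -> feature vector.
      FEATURE_DICTIONARY = {
          0: [0.0, 0.1, 0.9, 0.0],
          1: [0.8, 0.0, 0.0, 0.2],
          2: [0.1, 0.7, 0.1, 0.1],
      }

      def encode(features, dictionary):
          """Replace each feature vector with the code of its nearest dictionary entry."""
          def nearest(f):
              return min(dictionary,
                         key=lambda c: sum((a - b) ** 2 for a, b in zip(f, dictionary[c])))
          return [nearest(f) for f in features]      # far fewer bytes than the raw vectors

      def decode(codes, dictionary):
          """Recover approximate features on the master side before training."""
          return [dictionary[c] for c in codes]

      codes = encode([[0.05, 0.12, 0.88, 0.01], [0.79, 0.02, 0.01, 0.18]], FEATURE_DICTIONARY)
      decoded = decode(codes, FEATURE_DICTIONARY)    # fed to the master ANN for training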
  • Publication number: 20200379809
    Abstract: Systems, methods and apparatuses of Artificial Neural Network (ANN) applications implemented via Memory as a Service (MaaS) are described. For example, a computing system can include a computing device and a remote device. The computing device can borrow memory from the remote device over a wired or wireless network. Through the borrowed memory, the computing device and the remote device can collaborate with each other in storing an artificial neural network and in processing based on the artificial neural network. Some layers of the artificial neural network can be stored in the memory loaned by the remote device to the computing device. The remote device can perform the computation of the layers stored in the borrowed memory on behalf of the computing device. When the network connection degrades, the computing device can use an alternative module to function as a substitute of the layers stored in the borrowed memory.
    Type: Application
    Filed: May 28, 2019
    Publication date: December 3, 2020
    Inventors: Dmitri Yudanov, Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert
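    Illustrative sketch (hypothetical Python/NumPy): some layers of the ANN live in memory borrowed from the remote device, which also evaluates those layers on the computing device's behalf, and a lighter substitute module stands in when the connection degrades; the layer shapes and the substitute are invented for the example.

      import numpy as np

      def local_layers(x):                 # layers kept in local memory
          return np.maximum(x @ np.ones((8, 8)), 0)

      def remote_layers(x):                # layers stored in borrowed remote memory,
          return x @ np.ones((8, 4))       # computed by the remote device

      def substitute_module(x):            # cheaper stand-in used while the link is degraded
          return x[:, :4]

      def infer(x, connection_ok):
          hidden = local_layers(x)
          return remote_layers(hidden) if connection_ok else substitute_module(hidden)

      x = np.ones((1, 8))
      print(infer(x, connection_ok=True))   # remote device computes the borrowed layers
      print(infer(x, connection_ok=False))  # substitute module answers locally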
  • Publication number: 20200379919
    Abstract: Systems, methods and apparatuses to accelerate accessing of borrowed memory over a network connection are described. For example, a memory management unit (MMU) of a computing device can be configured to be connected both to the random access memory over a memory bus and to a computer network via a communication device. The computing device can borrow an amount of memory from a remote device over a network connection using the communication device; and applications running in the computing device can use virtual memory addresses mapped to the borrowed memory. When a virtual address mapped to the borrowed memory is used, the MMU translates the virtual address into a physical address and instructs the communication device to access the borrowed memory.
    Type: Application
    Filed: May 28, 2019
    Publication date: December 3, 2020
    Inventors: Samuel E. Bradshaw, Ameen D. Akel, Kenneth Marion Curewitz, Sean Stephen Eilert, Dmitri Yudanov
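    Illustrative sketch (hypothetical Python): an MMU-like translation step maps virtual pages either to local RAM reached over the memory bus or to borrowed remote memory reached through the communication device; the page-table layout and FakeCommDevice are assumptions.

      PAGE = 4096

      # Page table: virtual page -> (location, physical page). "remote" pages are
      # backed by memory borrowed from the lender device.
      page_table = {
          0: ("local", 7),
          1: ("remote", 3),
      }

      class FakeCommDevice:
          def fetch(self, physical):
              return f"remote[{physical}]"             # stand-in for a network read

      def read(virtual_addr, local_ram, comm_device):
          vpage, offset = divmod(virtual_addr, PAGE)
          location, ppage = page_table[vpage]
          physical = ppage * PAGE + offset
          if location == "local":
              return local_ram[physical]               # ordinary memory-bus access
          return comm_device.fetch(physical)           # MMU instructs the communication device

      print(read(1 * PAGE + 16, local_ram={}, comm_device=FakeCommDevice()))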
  • Publication number: 20200382590
    Abstract: Systems, methods and apparatuses to provide memory as a service are described. For example, a borrower device is configured to: communicate with a lender device; borrow an amount of memory from the lender device; expand the memory capacity of the borrower device for applications running on the borrower device, using at least the local memory of the borrower device and the amount of memory borrowed from the lender device; and service accesses by the applications to memory via a communication link between the borrower device and the lender device.
    Type: Application
    Filed: May 28, 2019
    Publication date: December 3, 2020
    Inventors: Dmitri Yudanov, Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert
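    Illustrative sketch (hypothetical Python): a borrower device asks a lender for an amount of memory, counts it toward the capacity it exposes to applications, and services accesses either locally or over the link to the lender; the classes and sizes are invented for the example.

      class Lender:
          def __init__(self):
              self.store = {}
          def grant(self, amount):            # lender agrees to loan `amount` bytes
              return amount
          def read(self, addr):               # access serviced over the communication link
              return self.store.get(addr, 0)

      class Borrower:
          def __init__(self, local_bytes, lender):
              self.local_bytes = local_bytes
              self.lender = lender
              self.borrowed = 0
          def borrow(self, amount):
              self.borrowed += self.lender.grant(amount)
          @property
          def capacity(self):                 # expanded capacity seen by applications
              return self.local_bytes + self.borrowed
          def read(self, addr):
              if addr < self.local_bytes:
                  return 0                    # stand-in for a local memory read
              return self.lender.read(addr)   # forwarded over the communication link

      b = Borrower(local_bytes=1 << 30, lender=Lender())
      b.borrow(1 << 30)
      print(b.capacity)                       # 2 GiB now visible to applications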
  • Publication number: 20200379913
    Abstract: Systems, methods and apparatuses of distributed computing based on Memory as a Service are described. For example, a set of networked computing devices can each be configured to execute an application that accesses memory using a virtual memory address region. Each respective device can map the virtual memory address region to the local memory for a first period of time during which the application is being executed in the respective device, map the virtual memory address region to a local memory of a remote device in the group for a second period of time after starting the application in the respective device and before terminating the application in the respective device, and request the remote device to process data in the virtual memory address region during at least the second period of time.
    Type: Application
    Filed: May 28, 2019
    Publication date: December 3, 2020
    Inventors: Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert, Dmitri Yudanov
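    Illustrative sketch (hypothetical Python): a virtual memory address region is backed by a device's own memory while it runs the application, then remapped to a peer's memory, after which the peer is asked to process the data in that region; the Device class is invented for the example.

      class Device:
          def __init__(self, name):
              self.name = name
              self.memory = {}                         # backing store for the region
              self.region_backing = None               # which device currently backs it

          def map_region_locally(self):                # first period: local execution
              self.region_backing = self

          def map_region_to(self, remote):             # second period: region lives on the peer
              remote.memory.update(self.memory)
              self.memory.clear()
              self.region_backing = remote

          def request_remote_processing(self, fn):     # ask the backing device to compute
              backing = self.region_backing
              return {k: fn(v) for k, v in backing.memory.items()}

      a, b = Device("a"), Device("b")
      a.map_region_locally()
      a.memory[0x1000] = 21
      a.map_region_to(b)
      print(a.request_remote_processing(lambda v: v * 2))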
  • Publication number: 20200379908
    Abstract: Systems, methods and apparatuses to intelligently migrate content involving borrowed memory are described. For example, after predicting a time period during which a network connection between computing devices having borrowed memory will degrade, the computing devices can make a migration decision for the content of a virtual memory address region, based at least in part on a predicted usage of the content, a scheduled operation, a predicted operation, a battery level, etc. The migration decision can be made based on a memory usage history, a battery usage history, a location history, etc., using an artificial neural network; and the content migration can be performed by remapping virtual memory regions in the memory maps of the computing devices.
    Type: Application
    Filed: May 28, 2019
    Publication date: December 3, 2020
    Inventors: Kenneth Marion Curewitz, Ameen D. Akel, Samuel E. Bradshaw, Sean Stephen Eilert, Dmitri Yudanov
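    Illustrative sketch (hypothetical Python): once a degradation window is predicted, each virtual memory region is scored from simple signals (predicted use, scheduled work, battery cost) and the highest-scoring regions are chosen for migration by remapping; a weighted sum stands in for the artificial neural network the abstract mentions, and all weights are invented.

      def migration_score(region):
          return (2.0 * region["predicted_use"]        # likely to be touched during the window
                  + 1.0 * region["scheduled_op"]       # a scheduled operation needs it
                  - 0.5 * region["battery_cost"])      # migrating costs energy

      def plan_migration(regions, budget_pages):
          ranked = sorted(regions, key=migration_score, reverse=True)
          plan, used = [], 0
          for r in ranked:
              if used + r["pages"] <= budget_pages:
                  plan.append(r["name"])               # remap this region to local memory
                  used += r["pages"]
          return plan

      regions = [
          {"name": "model_weights", "pages": 64, "predicted_use": 0.9,
           "scheduled_op": 1, "battery_cost": 0.2},
          {"name": "old_logs", "pages": 256, "predicted_use": 0.1,
           "scheduled_op": 0, "battery_cost": 0.8},
      ]
      print(plan_migration(regions, budget_pages=128))   # -> ['model_weights']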
  • Publication number: 20200379808
    Abstract: Systems, methods and apparatuses to throttle network communications for memory as a service are described. For example, a computing device can borrow an amount of random access memory from a lender device over a communication connection between the lender device and the computing device. The computing device can allocate virtual memory to applications running in the computing device, and configure at least a portion of the virtual memory to be hosted on the amount of memory loaned by the lender device to the computing device. The computing device can throttle the data communications used by memory regions to access the borrowed memory over the communication connection, according to the criticality levels of the contents stored in the memory regions.
    Type: Application
    Filed: May 28, 2019
    Publication date: December 3, 2020
    Inventors: Sean Stephen Eilert, Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Dmitri Yudanov
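    Illustrative sketch (hypothetical Python): accesses to borrowed memory are queued per criticality level and the per-tick link budget is spent on the most critical regions first, so the rest are throttled; the queue discipline and byte budget are assumptions.

      import heapq

      class ThrottledLink:
          def __init__(self, bytes_per_tick):
              self.bytes_per_tick = bytes_per_tick
              self.queue = []                           # (criticality, seq, (nbytes, tag))
              self.seq = 0

          def submit(self, criticality, nbytes, tag):
              # Lower number = more critical content in the memory region.
              heapq.heappush(self.queue, (criticality, self.seq, (nbytes, tag)))
              self.seq += 1

          def tick(self):
              budget, sent = self.bytes_per_tick, []
              while self.queue and self.queue[0][2][0] <= budget:
                  _, _, (nbytes, tag) = heapq.heappop(self.queue)
                  budget -= nbytes
                  sent.append(tag)                      # transmitted this tick
              return sent                               # everything else waits (throttled)

      link = ThrottledLink(bytes_per_tick=8192)
      link.submit(criticality=0, nbytes=4096, tag="page-table page")
      link.submit(criticality=2, nbytes=4096, tag="cached media")
      link.submit(criticality=1, nbytes=4096, tag="heap page")
      print(link.tick())   # the two most critical fit the budget; the least critical waits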
  • Publication number: 20200379914
    Abstract: Systems, methods and apparatuses of fine-grain data migration when using Memory as a Service (MaaS) are described. For example, a memory status map can be used to identify the cache availability of sub-regions (e.g., cache lines) of a borrowed memory region (e.g., a borrowed remote memory page). Before a virtual memory address in a sub-region is accessed, the memory status map is checked. If the sub-region has cache availability in the local memory, the memory management unit uses a physical memory address converted from the virtual memory address to perform the memory access. Otherwise, the sub-region is cached from the borrowed memory region into the local memory before the physical memory address is used.
    Type: Application
    Filed: May 28, 2019
    Publication date: December 3, 2020
    Inventors: Dmitri Yudanov, Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert
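    Illustrative sketch (hypothetical Python): a per-cache-line status map is checked before a borrowed page is touched, so lines already cached locally are read directly and missing lines are fetched from the borrowed memory first; the line size and data structures are invented for the example.

      LINE = 64

      status_map = [False] * 64          # one flag per cache line of a borrowed page
      local_cache = {}                   # line index -> bytes cached in local memory

      def fetch_from_borrowed_memory(line_index):
          return bytes(LINE)             # stand-in for a network transfer of one line

      def read(virtual_offset):
          line = virtual_offset // LINE
          if not status_map[line]:                     # status map: not cached locally yet
              local_cache[line] = fetch_from_borrowed_memory(line)
              status_map[line] = True                  # now available in local memory
          data = local_cache[line]                     # local (physical) access path
          return data[virtual_offset % LINE]

      print(read(200))   # first access to line 3 caches it, then reads locally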