Patents by Inventor Shivasankar Gunasekaran

Shivasankar Gunasekaran has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210149595
    Abstract: A memory sub-system configured to be responsive to a time to live requirement for load commands from a processor. For example, a load command issued by the processor (e.g., an SoC) can include, or be associated with, an optional time to live parameter. The parameter requires that the data at the memory address be available within the time specified by the time to live parameter. When the requested data is currently in the lower speed memory (e.g., NAND flash) and not available in the higher speed memory (e.g., DRAM, NVRAM), the memory sub-system can determine that the data cannot be made available within the specified time and optionally skip the operations and return an error response immediately.
    Type: Application
    Filed: November 19, 2019
    Publication date: May 20, 2021
    Inventors: Shivasankar Gunasekaran, Samuel E. Bradshaw, Justin M. Eno, Ameen D. Akel
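The abstract above describes a load command carrying an optional time-to-live (TTL) parameter. The following Python sketch illustrates the general idea only; the names and latency figures (MemorySubsystem, FAST_LATENCY_NS, SLOW_LATENCY_NS) are illustrative assumptions, not details from the application. If the requested data is only in the slower tier and the estimated access latency exceeds the TTL, the request fails fast instead of blocking the processor.

```python
# Minimal sketch (not the patented implementation) of a TTL-aware load command.
# Latency values and names are illustrative assumptions.

FAST_LATENCY_NS = 100        # e.g., DRAM/NVRAM access
SLOW_LATENCY_NS = 100_000    # e.g., NAND flash access


class MemorySubsystem:
    def __init__(self):
        self.fast = {}   # addresses currently held in higher-speed memory
        self.slow = {}   # addresses resident only in lower-speed memory

    def load(self, address, ttl_ns=None):
        """Return (data, None) on success or (None, error) if the TTL cannot be met."""
        if address in self.fast:
            return self.fast[address], None
        # Data is only in the slower tier: check whether it can be fetched in time.
        if ttl_ns is not None and SLOW_LATENCY_NS > ttl_ns:
            # Skip the operation and return an error immediately.
            return None, "TTL_VIOLATION"
        data = self.slow[address]
        self.fast[address] = data   # promote on access
        return data, None


mem = MemorySubsystem()
mem.slow[0x1000] = b"payload"
print(mem.load(0x1000, ttl_ns=1_000))   # (None, 'TTL_VIOLATION')
print(mem.load(0x1000))                 # (b'payload', None)
```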
  • Patent number: 10963396
    Abstract: A computer system includes physical memory devices of different types that store randomly-accessible data in a main memory of the computer system. In one approach, an operating system allocates memory from a namespace for use by an application. The namespace is a logical reference to physical memory devices in which physical addresses are defined. The namespace is bound to a memory type. In response to binding the namespace to the memory type, the operating system adjusts a page table to map a logical memory address in the namespace to a memory device of the memory type.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: March 30, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Samuel E. Bradshaw, Shivasankar Gunasekaran, Hongyu Wang, Justin M. Eno
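The abstract above describes a namespace bound to a memory type, with the operating system mapping logical addresses in that namespace to devices of the bound type. Below is a toy sketch of the concept under assumed names (Namespace, devices, page_table); it is not the claimed OS implementation.

```python
# Sketch only: a namespace bound to one memory type, with a toy page table
# mapping logical pages in the namespace to a device of that type.

devices = {"DRAM": ["dram0"], "NVRAM": ["nvram0"]}   # physical devices by type
page_table = {}                                      # logical page -> (device, frame)


class Namespace:
    def __init__(self, name, memory_type):
        self.name = name
        self.memory_type = memory_type   # the namespace is bound to one memory type
        self.next_page = 0
        self.next_frame = 0

    def allocate(self, num_pages):
        """Allocate pages and map them to a device of the bound memory type."""
        device = devices[self.memory_type][0]
        pages = []
        for _ in range(num_pages):
            logical_page = (self.name, self.next_page)
            page_table[logical_page] = (device, self.next_frame)
            pages.append(logical_page)
            self.next_page += 1
            self.next_frame += 1
        return pages


ns = Namespace("app-heap", "NVRAM")
for page in ns.allocate(2):
    print(page, "->", page_table[page])
```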
  • Publication number: 20210081121
    Abstract: A computer system stores metadata that is used to identify physical memory devices that store randomly-accessible data for memory of the computer system. In one approach, access to memory in an address space is maintained by an operating system of the computer system. Stored metadata associates a first address range of the address space with a first memory device, and a second address range of the address space with a second memory device. The operating system manages processes running on the computer system by accessing the stored metadata. This management includes allocating memory based on the stored metadata so that data for a first process is stored in the first memory device, and data for a second process is stored in the second memory device.
    Type: Application
    Filed: September 17, 2019
    Publication date: March 18, 2021
    Inventors: Kenneth Marion Curewitz, Shivasankar Gunasekaran, Ameen D. Akel, Hongyu Wang, Justin M. Eno, Shivam Swami, Samuel E. Bradshaw
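The abstract above describes stored metadata that associates address ranges with particular memory devices, consulted when allocating memory for different processes. The sketch below uses assumed data structures and names (metadata, allocate) purely to illustrate the idea.

```python
# Illustrative sketch: metadata associating address ranges with memory devices,
# used when placing data for different processes. Names and ranges are assumptions.

metadata = [
    {"range": range(0x0000_0000, 0x4000_0000), "device": "DRAM"},
    {"range": range(0x4000_0000, 0x8000_0000), "device": "NVRAM"},
]

allocations = {}   # process -> list of (address, device)
cursors = {entry["device"]: entry["range"].start for entry in metadata}


def allocate(process, device, size):
    """Place a process's data in the address range associated with `device`."""
    entry = next(e for e in metadata if e["device"] == device)
    address = cursors[device]
    assert address + size <= entry["range"].stop, "region exhausted"
    cursors[device] += size
    allocations.setdefault(process, []).append((hex(address), device))
    return address


allocate("latency-critical-proc", "DRAM", 4096)
allocate("background-proc", "NVRAM", 4096)
print(allocations)
```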
  • Publication number: 20210081324
    Abstract: A computer system includes physical memory devices of different types that store randomly-accessible data in memory of the computer system. In one approach, access to memory in an address space is maintained by an operating system of the computer system. A virtual page is associated with a first memory type. A page table entry is generated to map a virtual address of the virtual page to a physical address in a first memory device of the first memory type. The page table entry is used by a memory management unit to store the virtual page at the physical address.
    Type: Application
    Filed: September 17, 2019
    Publication date: March 18, 2021
    Inventors: Samuel E. Bradshaw, Justin M. Eno, Sean S. Eilert, Shivasankar Gunasekaran, Hongyu Wang, Shivam Swami
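The abstract above describes tagging a virtual page with a memory type and generating a page table entry that maps it to a physical address in a device of that type. The toy translation below is a sketch under assumed names (map_page, translate, frames), not the claimed MMU behavior.

```python
# Sketch only: a virtual page associated with a memory type, and a page-table
# entry mapping its virtual address to a physical frame in a device of that type.

frames = {"DRAM": iter(range(0, 1024)), "NVRAM": iter(range(1024, 2048))}
page_table = {}   # virtual page number -> {"device", "frame"}


def map_page(vpn, memory_type):
    """Create a page-table entry placing the virtual page in `memory_type`."""
    frame = next(frames[memory_type])
    page_table[vpn] = {"device": memory_type, "frame": frame}
    return page_table[vpn]


def translate(virtual_address, page_size=4096):
    """What an MMU would do with the entry: virtual -> (device, physical address)."""
    entry = page_table[virtual_address // page_size]
    return entry["device"], entry["frame"] * page_size + virtual_address % page_size


map_page(0, "DRAM")
map_page(1, "NVRAM")
print(translate(0x0123))   # ('DRAM', 291)
print(translate(0x1123))   # ('NVRAM', 4194595)
```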
  • Publication number: 20210081325
    Abstract: A computer system includes physical memory devices of different types that store randomly-accessible data in a main memory of the computer system. In one approach, an operating system allocates memory from a namespace for use by an application. The namespace is a logical reference to physical memory devices in which physical addresses are defined. The namespace is bound to a memory type. In response to binding the namespace to the memory type, the operating system adjusts a page table to map a logical memory address in the namespace to a memory device of the memory type.
    Type: Application
    Filed: September 17, 2019
    Publication date: March 18, 2021
    Inventors: Samuel E. Bradshaw, Shivasankar Gunasekaran, Hongyu Wang, Justin M. Eno
  • Publication number: 20210081326
    Abstract: A computer system includes physical memory devices of different types that store randomly-accessible data in a main memory of the computer system. In one approach, data is stored in memory at one or more logical addresses allocated to an application by an operating system. The data is physically stored in a first memory device of a first memory type (e.g., NVRAM). The operating system determines an access pattern for the stored data. In response to determining the access pattern, the data is moved from the first memory device to a second memory device of a different memory type (e.g., DRAM).
    Type: Application
    Filed: September 17, 2019
    Publication date: March 18, 2021
    Inventors: Kenneth Marion Curewitz, Sean S. Eilert, Hongyu Wang, Samuel E. Bradshaw, Shivasankar Gunasekaran, Justin M. Eno, Shivam Swami
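The abstract above describes moving data between memory types based on an observed access pattern. The following sketch uses an assumed hot-page threshold and assumed names (store, access, HOT_THRESHOLD); the application does not prescribe this particular policy.

```python
# Sketch of access-pattern-driven migration from a slower memory type to a
# faster one. Threshold and policy are illustrative assumptions.

HOT_THRESHOLD = 3   # accesses before promotion (illustrative value)

placement = {}      # page -> "NVRAM" or "DRAM"
access_count = {}   # page -> number of recent accesses


def store(page):
    placement[page] = "NVRAM"       # new data lands in the larger, slower tier
    access_count[page] = 0


def access(page):
    access_count[page] += 1
    # When the access pattern shows the page is hot, move it to DRAM.
    if placement[page] == "NVRAM" and access_count[page] >= HOT_THRESHOLD:
        placement[page] = "DRAM"
    return placement[page]


store("page-A")
for _ in range(4):
    tier = access("page-A")
print(tier)   # 'DRAM' after enough accesses
```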
  • Publication number: 20210072986
    Abstract: Methods, apparatuses, and systems for in- or near-memory processing are described. Strings of bits (e.g., vectors) may be fetched and processed in logic of a memory device without involving a separate processing unit. Operations (e.g., arithmetic operations) may be performed on numbers stored in a bit-serial way during a single sequence of clock cycles. Arithmetic may thus be performed in a single pass, as bits of two or more strings of bits are fetched, without intermediate storage of the numbers. Vectors may be fetched (e.g., identified, transmitted, received) from one or more bit lines. Registers of the memory array may be used to write (e.g., store or temporarily store) results or ancillary bits (e.g., carry bits or carry flags) that facilitate arithmetic operations. Circuitry near, adjacent, or under the memory array may employ XOR or AND (or other) logic to fetch, organize, or operate on the data.
    Type: Application
    Filed: December 17, 2019
    Publication date: March 11, 2021
    Inventors: Dmitri Yudanov, Sean S. Eilert, Sivagnanam Parthasarathy, Shivasankar Gunasekaran, Ameen D. Akel
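The abstract above describes bit-serial arithmetic performed with XOR/AND logic and a carry register as bits are fetched. The Python sketch below mimics that flow in software as an illustration; it is not the memory-array circuitry described in the application.

```python
# Illustrative bit-serial addition of two numbers stored as bit strings, using
# only XOR/AND-style logic plus a one-bit carry register.

def bit_serial_add(a_bits, b_bits):
    """Add two little-endian bit lists in a single pass, one bit per 'cycle'."""
    carry = 0                    # ancillary carry bit held in a register
    result = []
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                      # XOR logic produces the sum bit
        carry = (a & b) | (carry & (a ^ b))    # AND/XOR logic produces the carry
        result.append(s)
    result.append(carry)
    return result


def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]


def from_bits(bits):
    return sum(bit << i for i, bit in enumerate(bits))


print(from_bits(bit_serial_add(to_bits(13, 8), to_bits(29, 8))))   # 42
```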
  • Publication number: 20210072957
    Abstract: Systems, apparatuses, and methods of operating memory systems are described. Processing-in-memory capable memory devices are also described, along with methods of performing fused-multiply-add operations within them. Bit positions of bits stored at one or more portions of one or more memory arrays may be accessed via data lines by activating the same or different access lines. A sensing circuit operatively coupled to a data line may be temporarily formed and measured to determine a state (e.g., a count of the number of bits that are a logic “1”) of the accessed bit positions of that data line, and the state information may be used to determine a computational result.
    Type: Application
    Filed: May 29, 2020
    Publication date: March 11, 2021
    Inventors: Sean S. Eilert, Shivasankar Gunasekaran, Ameen D. Akel, Dmitri Yudanov, Sivagnanam Parthasarathy
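The abstract above describes deriving a computational result from a count of accessed bits that read as logic "1". The sketch below shows the 1-bit core of that idea in software: a dot product of two binary vectors computed by AND-ing them and counting the 1s. It is an illustration under assumed names (popcount_dot), not the sensing-circuit mechanism itself.

```python
# Sketch: a binary dot product via AND + population count, the 1-bit core of a
# fused multiply-add computed by counting logic-1 states.

def popcount_dot(activations, weights):
    """Dot product of two binary vectors via AND plus a count of 1s."""
    assert len(activations) == len(weights)
    conducting = [a & w for a, w in zip(activations, weights)]   # AND per bit line
    return sum(conducting)                                       # count of logic-1 states


acts = [1, 0, 1, 1, 0, 1]
wts  = [1, 1, 1, 0, 0, 1]
print(popcount_dot(acts, wts))   # 3, i.e. sum(a * w)
```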
  • Publication number: 20210072987
    Abstract: Methods, apparatuses, and systems for in- or near-memory processing are described. Strings of bits (e.g., vectors) may be fetched and processed in logic of a memory device without involving a separate processing unit. Operations (e.g., arithmetic operations) may be performed on numbers stored in a bit-parallel way during a single sequence of clock cycles. Arithmetic may thus be performed in a single pass, as bits of two or more strings of bits are fetched, without intermediate storage of the numbers. Vectors may be fetched (e.g., identified, transmitted, received) from one or more bit lines. Registers of a memory array may be used to write (e.g., store or temporarily store) results or ancillary bits (e.g., carry bits or carry flags) that facilitate arithmetic operations. Circuitry near, adjacent, or under the memory array may employ XOR or AND (or other) logic to fetch, organize, or operate on the data.
    Type: Application
    Filed: April 6, 2020
    Publication date: March 11, 2021
    Inventors: Dmitri Yudanov, Sean S. Eilert, Sivagnanam Parthasarathy, Shivasankar Gunasekaran, Ameen D. Akel
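This entry covers the bit-parallel counterpart of the bit-serial example shown earlier. The sketch below processes all bit positions of a number per step using word-wide XOR/AND/shift operations; it is a software illustration only, with assumed names (bit_parallel_add, width).

```python
# Companion sketch for the bit-parallel variant: whole numbers are processed per
# step with word-wide XOR/AND operations rather than one bit per cycle.

def bit_parallel_add(a, b, width=8):
    """Add two integers using only XOR/AND/shift, operating on all bits at once."""
    mask = (1 << width) - 1
    while b:
        carry = (a & b) << 1      # AND finds positions that generate a carry
        a = (a ^ b) & mask        # XOR adds without carries, across all bits
        b = carry & mask
    return a


print(bit_parallel_add(13, 29))   # 42
```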
  • Publication number: 20210056387
    Abstract: A system having multiple devices that can host different versions of an artificial neural network (ANN). In the system, changes to local versions of the ANN can be combined with a master version of the ANN. In the system, a first device can include memory that can store the master version, a second device can include memory that can store a local version of the ANN, and there can be many devices that store local versions of the ANN. The second device (or any other device of the system hosting a local version) can include a processor that can train the local version, and a transceiver that can transmit changes to the local version generated from the training. The first device can include a transceiver that can receive the changes to a local version, and a processing device that can combine the received changes with the master version.
    Type: Application
    Filed: August 20, 2019
    Publication date: February 25, 2021
    Inventors: Sean Stephen Eilert, Shivasankar Gunasekaran, Ameen D. Akel, Kenneth Marion Curewitz, Hongyu Wang
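The abstract above describes devices that train local versions of an ANN and transmit only the changes, which a master device then combines. The sketch below uses NumPy arrays as stand-ins for ANN weights and simple averaging as the combining step; these are illustrative assumptions, not the claimed method.

```python
# Sketch: local devices compute changes to their local ANN versions; the device
# holding the master version combines the received changes (simple averaging here).

import numpy as np

master = np.zeros(4)                      # master version of the ANN (toy weights)


def local_training(local_weights, gradient, lr=0.1):
    """Pretend training step on a device; returns the change to the local version."""
    updated = local_weights - lr * gradient
    return updated - local_weights        # only the delta gets transmitted


def combine(master_weights, deltas):
    """Master device combines received local changes."""
    return master_weights + np.mean(deltas, axis=0)


# Two devices start from the master version and train on different data.
deltas = [
    local_training(master.copy(), np.array([1.0, 0.0, 2.0, 0.0])),
    local_training(master.copy(), np.array([0.0, 1.0, 0.0, 2.0])),
]
master = combine(master, deltas)
print(master)   # averaged update applied to the master version
```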
  • Publication number: 20210056405
    Abstract: A system having multiple devices that can host different versions of an artificial neural network (ANN). In the system, inputs for the ANN can be obfuscated for centralized training of a master version of the ANN at a first computing device. A second computing device in the system includes memory that stores a local version of the ANN and user data for inputting into the local version. The second computing device includes a processor that extracts features from the user data and obfuscates the extracted features to generate obfuscated user data. The second device includes a transceiver that transmits the obfuscated user data. The first computing device includes a memory that stores the master version of the ANN, a transceiver that receives obfuscated user data transmitted from the second computing device, and a processor that trains the master version based on the received obfuscated user data using machine learning.
    Type: Application
    Filed: August 20, 2019
    Publication date: February 25, 2021
    Inventors: Samuel E. Bradshaw, Shivasankar Gunasekaran, Sean Stephen Eilert, Ameen D. Akel, Kenneth Marion Curewitz
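The abstract above describes extracting features from user data and obfuscating them before transmission for centralized training. The sketch below uses Gaussian noise purely as a stand-in obfuscation step and assumed function names (extract_features, obfuscate); the application does not prescribe this particular technique.

```python
# Sketch only: feature extraction followed by obfuscation, so raw user data
# never leaves the device. Noise injection here is an illustrative assumption.

import random


def extract_features(user_data):
    """Toy feature extractor: mean and range of a numeric sample."""
    return [sum(user_data) / len(user_data), max(user_data) - min(user_data)]


def obfuscate(features, noise_scale=0.5):
    """Perturb features so the raw user data is not transmitted."""
    return [f + random.gauss(0.0, noise_scale) for f in features]


user_data = [3.2, 4.1, 2.8, 5.0]                 # stays on the second device
obfuscated = obfuscate(extract_features(user_data))
print(obfuscated)                                 # only this is sent for training
```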