Patents by Inventor Parag R. Maharana

Parag R. Maharana has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10691611
    Abstract: A computing system having memory components, including first memory and second memory. The computing system further includes a processing device, operatively coupled with the memory components, to: store a memory ratio in association with a context of executing instructions; execute a set of instructions in the context; allocate, for execution of the set of instructions in the context, an amount of memory, including an amount of the first memory and an amount of the second memory; and access the amount of the second memory via the amount of the first memory during the execution of the set of instructions in the context. A ratio between the amount of the first memory and the amount of the second memory allocated for the execution of the set of instructions in the context is in accordance with the memory ratio.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: June 23, 2020
    Assignee: Micron Technology, Inc.
    Inventors: Anirban Ray, Parag R. Maharana
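
The entry above (patent 10691611) describes allocating memory for an execution context as a fixed ratio of a first (faster) and second (slower) memory. Below is a minimal Python sketch of that idea; the class name, the default 50/50 split, and the dictionary return value are illustrative assumptions, not details from the patent.

```python
class ContextAllocator:
    """Split each allocation between two memory tiers by a per-context ratio."""

    def __init__(self):
        self.ratios = {}  # context name -> fraction placed in first (faster) memory

    def set_ratio(self, context, first_fraction):
        # Store the memory ratio in association with the execution context.
        self.ratios[context] = first_fraction

    def allocate(self, context, total_bytes):
        # Allocate first and second memory so their sizes follow the stored ratio.
        first_fraction = self.ratios.get(context, 0.5)   # assumed default split
        first_bytes = int(total_bytes * first_fraction)
        second_bytes = total_bytes - first_bytes
        # The second-memory portion would be accessed via the first-memory portion
        # (e.g. the first memory acting as a staging buffer), per the abstract.
        return {"first_memory": first_bytes, "second_memory": second_bytes}


alloc = ContextAllocator()
alloc.set_ratio("database_worker", 0.25)           # 1:3 split of fast to slow memory
print(alloc.allocate("database_worker", 4 << 20))  # {'first_memory': 1048576, 'second_memory': 3145728}
```
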
  • Patent number: 10671460
    Abstract: A memory system having a plurality of memory components and a controller, operatively coupled to the plurality of memory components to: store data in the memory components; communicate with a host system via a bus; service the data to the host system via communications over the bus; communicate with a processing device that is separate from the host system using a message passing interface over the bus; and provide data access to the processing device through communications made using the message passing interface over the bus.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: June 2, 2020
    Assignee: Micron Technology, Inc.
    Inventors: Samir Mittal, Gurpreet Anand, Anirban Ray, Parag R. Maharana
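
Patent 10671460 above pairs a normal host I/O path with a message passing interface over the same bus, through which a processing device separate from the host can request data access. The sketch below is a toy model of that split, assuming in-memory queues as a stand-in for the bus-level message channel.

```python
from collections import deque

class MessagePassingInterface:
    """Toy in-memory stand-in for a message channel shared over the bus."""
    def __init__(self):
        self.to_device = deque()
        self.to_controller = deque()

class MemorySystemController:
    def __init__(self, mpi):
        self.store = {}      # logical address -> data stored in the memory components
        self.mpi = mpi

    def host_write(self, addr, data):
        self.store[addr] = data            # normal host I/O path over the bus

    def host_read(self, addr):
        return self.store.get(addr)

    def service_device_messages(self):
        # Serve data-access requests from a processing device that is separate
        # from the host, received through the message passing interface.
        while self.mpi.to_controller:
            addr = self.mpi.to_controller.popleft()
            self.mpi.to_device.append((addr, self.store.get(addr)))

mpi = MessagePassingInterface()
ctrl = MemorySystemController(mpi)
ctrl.host_write(0x10, b"payload")
mpi.to_controller.append(0x10)             # processing device asks for address 0x10
ctrl.service_device_messages()
print(mpi.to_device.popleft())             # (16, b'payload')
```
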
  • Publication number: 20200042246
    Abstract: A system controller, operatively coupled with one or more memory devices, is configured to provide a plurality of virtual memory controllers, wherein each of the plurality of virtual memory controllers is associated with a different portion of the one or more memory devices, and provide a plurality of physical functions, wherein each of the plurality of physical functions corresponds to a different one of the plurality of virtual memory controllers. The system controller further presents the plurality of physical functions to a host computing system over a peripheral component interconnect express (PCIe) interface, the host computing system to assign each of the plurality of physical functions to a different virtual machine running on the host computing system.
    Type: Application
    Filed: March 15, 2019
    Publication date: February 6, 2020
    Inventors: Parag R. Maharana, Anirban Ray, Gurpreet Anand, Samir Rajadnya, Paul Stonelake, Samir Mittal
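
Publication 20200042246 above describes carving the memory devices into virtual memory controllers, exposing each as a PCIe physical function, and letting the host assign each function to a different virtual machine. The sketch below models only that bookkeeping; the class and field names are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualMemoryController:
    vmc_id: int
    device_range: tuple          # (start, end) slice of the underlying memory devices

@dataclass
class PhysicalFunction:
    pf_number: int
    vmc: VirtualMemoryController
    assigned_vm: Optional[str] = None

def build_physical_functions(total_capacity, count):
    """Partition capacity across virtual memory controllers, one PCIe PF each."""
    share = total_capacity // count
    return [
        PhysicalFunction(i, VirtualMemoryController(i, (i * share, (i + 1) * share)))
        for i in range(count)
    ]

pfs = build_physical_functions(total_capacity=1 << 30, count=4)
for pf, vm in zip(pfs, ["vm-a", "vm-b", "vm-c", "vm-d"]):
    pf.assigned_vm = vm          # host assigns each physical function to a different VM
print(pfs[1].assigned_vm, pfs[1].vmc.device_range)   # vm-b (268435456, 536870912)
```
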
  • Publication number: 20200019506
    Abstract: A computing system having memory components, including first memory and second memory. The computing system further includes a processing device, operatively coupled with the memory components, to: receive, in a prediction engine, usage history of pages in the second memory; train a prediction model based on the usage history; predict, by the prediction engine using the prediction model, likelihood of the pages being used in a subsequent period of time; and responsive to the likelihood predicted by the prediction engine, copy, by a controller, data in a page in the second memory to the first memory.
    Type: Application
    Filed: July 11, 2018
    Publication date: January 16, 2020
    Inventors: Anirban Ray, Samir Mittal, Gurpreet Anand, Parag R. Maharana
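
Publication 20200019506 above trains a prediction model on page-usage history and promotes pages that are likely to be used soon from the second (slower) memory to the first (faster) memory. In the sketch below a simple frequency counter stands in for the trained model; that counter is purely an assumption, since the abstract does not specify the model.

```python
from collections import Counter

class PagePredictor:
    """Train on page-usage history; predict which slow-memory pages to promote."""
    def __init__(self):
        self.model = Counter()

    def train(self, usage_history):
        # Usage history: sequence of page numbers accessed in the second memory.
        self.model.update(usage_history)

    def likely_pages(self, top_n):
        # Pages with the highest historical use are predicted as likely to be
        # used again in the subsequent period of time.
        return [page for page, _ in self.model.most_common(top_n)]

def promote_pages(predictor, second_memory, first_memory, top_n=2):
    # Responsive to the predicted likelihood, copy data from second to first memory.
    for page in predictor.likely_pages(top_n):
        if page in second_memory:
            first_memory[page] = second_memory[page]

predictor = PagePredictor()
predictor.train([7, 7, 7, 3, 3, 9])
slow = {7: b"hot", 3: b"warm", 9: b"cold"}
fast = {}
promote_pages(predictor, slow, fast)
print(fast)   # {7: b'hot', 3: b'warm'}
```
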
  • Publication number: 20200019510
    Abstract: A computing system having memory components, including first memory and second memory. The computing system further includes a processing device, operatively coupled with the memory components, to: store a memory ratio in association with a context of executing instructions; execute a set of instructions in the context; allocate, for execution of the set of instructions in the context, an amount of memory, including an amount of the first memory and an amount of the second memory; and access the amount of the second memory via the amount of the first memory during the execution of the set of instructions in the context. A ratio between the amount of the first memory and the amount of the second memory allocated for the execution of the set of instructions in the context is in accordance with the memory ratio.
    Type: Application
    Filed: July 13, 2018
    Publication date: January 16, 2020
    Inventors: Anirban Ray, Parag R. Maharana
  • Publication number: 20190253520
    Abstract: A memory system having one or more memory components and a controller. The controller can receive access requests from a communication connection. The access requests can identify data items associated with the access requests, addresses of the data items, and contexts of the data items in which the data items are used for the access requests. The controller can identify separate memory regions for separate contexts respectively, determine placements of the data items in the separate memory regions based on the contexts of the data items, and determine a mapping between the addresses of the data items and memory locations that are within the separate memory regions corresponding to the contexts of the data items. The memory system stores the data items at the memory locations, separated by different memory regions according to different contexts.
    Type: Application
    Filed: November 7, 2018
    Publication date: August 15, 2019
    Inventors: Parag R. Maharana, Anirban Ray, Gurpreet Anand
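
Publication 20190253520 above places data items into separate memory regions according to the context attached to each access request and records an address-to-location mapping within those regions. The sketch below shows one assumed way to express that placement; the fixed region size and data-structure layout are not from the patent.

```python
class ContextPlacementController:
    """Place data items in per-context memory regions and record the mapping."""

    def __init__(self, region_size=1024):
        self.region_size = region_size
        self.regions = {}          # context -> base offset of its region
        self.next_free = {}        # context -> next free slot within its region
        self.address_map = {}      # logical address -> physical memory location

    def _region_for(self, context):
        if context not in self.regions:
            self.regions[context] = len(self.regions) * self.region_size
            self.next_free[context] = 0
        return self.regions[context]

    def place(self, address, context):
        # Determine a placement inside the region that corresponds to the
        # context of this data item, and remember the address mapping.
        base = self._region_for(context)
        location = base + self.next_free[context]
        self.next_free[context] += 1
        self.address_map[address] = location
        return location

ctrl = ContextPlacementController()
print(ctrl.place(0x1000, "logging"))    # 0
print(ctrl.place(0x2000, "database"))   # 1024
print(ctrl.place(0x1004, "logging"))    # 1
```
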
  • Publication number: 20190243756
    Abstract: A computing system having at least one bus, a plurality of different memory components, and a processing device operatively coupled with the plurality of memory components through the at least one bus. The different memory components include first memory and second memory having different memory access speeds. The computing system further includes a memory virtualizer operative to: store an address map between first addresses used by the processing device to access memory and second addresses used to access the first memory and the second memory; monitor usages of the first memory and the second memory; adjust the address map based on the usages to improve speed of the processing device in memory access involving the first memory and the second memory; and swap data content in the first memory and the second memory according to adjustments to the address map.
    Type: Application
    Filed: August 3, 2018
    Publication date: August 8, 2019
    Inventors: Anirban Ray, Parag R. Maharana, Gurpreet Anand
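
Publication 20190243756 above has a memory virtualizer that keeps an address map spanning a fast and a slow memory, monitors usage, adjusts the map, and swaps data so hot pages end up in the faster memory. The sketch below is one plausible, assumed version of that adjust-and-swap step, with simple hit counters standing in for usage monitoring.

```python
class MemoryVirtualizer:
    """Remap hot pages to fast memory and swap data to follow the address map."""

    def __init__(self, fast, slow):
        self.fast, self.slow = fast, slow        # {slot: data} per memory tier
        self.map = {p: ("fast", p) for p in fast} | {p: ("slow", p) for p in slow}
        self.hits = {p: 0 for p in self.map}     # usage monitoring

    def access(self, page):
        self.hits[page] += 1
        tier, slot = self.map[page]
        return (self.fast if tier == "fast" else self.slow)[slot]

    def rebalance(self):
        # Adjust the address map so the most-used page lives in fast memory,
        # then swap data content between the tiers to match the new map.
        hottest = max(self.hits, key=self.hits.get)
        tier, slot = self.map[hottest]
        if tier == "slow" and self.fast:
            victim = next(p for p, (t, _) in self.map.items() if t == "fast")
            v_slot = self.map[victim][1]
            self.fast[v_slot], self.slow[slot] = self.slow[slot], self.fast[v_slot]
            self.map[hottest], self.map[victim] = ("fast", v_slot), ("slow", slot)

vz = MemoryVirtualizer(fast={0: "a"}, slow={1: "b"})
for _ in range(3):
    vz.access(1)                  # page 1 becomes the hottest page
vz.rebalance()
print(vz.access(1), vz.map[1])    # b ('fast', 0)
```
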
  • Publication number: 20190243695
    Abstract: A memory system having a plurality of memory components and a controller, operatively coupled to the plurality of memory components to: store data in the memory components; communicate with a host system via a bus; service the data to the host system via communications over the bus; communicate with a processing device that is separate from the host system using a message passing interface over the bus; and provide data access to the processing device through communications made using the message passing interface over the bus.
    Type: Application
    Filed: August 3, 2018
    Publication date: August 8, 2019
    Inventors: Samir Mittal, Gurpreet Anand, Anirban Ray, Parag R. Maharana
  • Publication number: 20190243552
    Abstract: A memory system having memory components, a remote direct memory access (RDMA) network interface card (RNIC), and a host system, and configured to: allocate a page of virtual memory for an application; map the page of virtual memory to a page of physical memory in the memory components; instruct the RNIC to perform an RDMA operation; perform, during the RDMA operation, a data transfer between the page of physical memory in the memory components and a remote device that is connected via a computer network to the RNIC; and at least for a duration of the data transfer, lock a mapping between the page of virtual memory and the page of physical memory in the memory components.
    Type: Application
    Filed: August 21, 2018
    Publication date: August 8, 2019
    Inventors: Parag R. Maharana, Anirban Ray, Gurpreet Anand, Samir Mittal
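
Publication 20190243552 above locks the mapping between a virtual page and its physical page for at least the duration of an RDMA transfer so the transfer target cannot move underneath the RNIC. The sketch below mocks the RNIC as a plain function and uses a context manager for the lock; both are assumptions made for illustration.

```python
from contextlib import contextmanager

class VirtualMemory:
    def __init__(self):
        self.page_table = {}   # virtual page -> physical page in the memory components
        self.locked = set()    # mappings that must not change during RDMA

    def map_page(self, vpage, ppage):
        if vpage in self.locked:
            raise RuntimeError("mapping is locked by an in-flight RDMA transfer")
        self.page_table[vpage] = ppage

    @contextmanager
    def pinned(self, vpage):
        # Lock the virtual->physical mapping at least for the duration of the transfer.
        self.locked.add(vpage)
        try:
            yield self.page_table[vpage]
        finally:
            self.locked.discard(vpage)

def rdma_transfer(vm, vpage, remote_device):
    with vm.pinned(vpage) as ppage:
        # A real RNIC would move data between this physical page and the remote
        # device over the network; here we only report what would happen.
        return f"transfer physical page {ppage} <-> {remote_device}"

vm = VirtualMemory()
vm.map_page(vpage=42, ppage=7)
print(rdma_transfer(vm, 42, "remote-host"))   # transfer physical page 7 <-> remote-host
```
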
  • Patent number: 9977626
    Abstract: Methods, systems, and computer-readable storage media for performing scattered atomic I/O writes in a storage device. A list of block I/O write requests to be completed as an atomic unit is received from a requester with at least two of the block I/O write requests specifying non-contiguous data locations on a storage media. The plurality of block I/O write requests are buffered in a write buffer with each buffer entry marked as having an invalid state, wherein marking a buffer entry as having an invalid state prevents it from being flushed to the storage media. Upon buffering all of the plurality of block I/O writes, all of the buffer entries are marked as having a valid state at the same time. Upon marking all of the buffer entries as having a valid state, successful completion of the list of block I/O write requests is acknowledged to the requester.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: May 22, 2018
    Assignee: Seagate Technology LLC
    Inventors: Kishore Sampathkumar, Penchala Narasimha Reddy Chilakala, Parag R. Maharana, Durga Prasad Bhattarai
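
Patent 9977626 above buffers a list of scattered block writes with every entry marked invalid, which keeps them from being flushed, and only flips all entries to valid together before acknowledging the requester. The sketch below mirrors that invalid-then-valid commit at a purely illustrative level; the names and the in-memory "media" are assumptions.

```python
class AtomicScatterWriteBuffer:
    """Buffer a list of scattered block writes and commit them as one atomic unit."""

    def __init__(self):
        self.entries = []     # buffered write-buffer entries

    def submit_atomic(self, write_list):
        # Buffer every block I/O write with its entry marked invalid, which
        # prevents it from being flushed to the storage media early.
        start = len(self.entries)
        for lba, data in write_list:
            self.entries.append({"lba": lba, "data": data, "valid": False})
        # Only after all writes are buffered, mark them valid at the same time.
        for entry in self.entries[start:]:
            entry["valid"] = True
        return "acknowledged"   # completion reported to the requester

    def flush(self, media):
        for entry in self.entries:
            if entry["valid"]:
                media[entry["lba"]] = entry["data"]

buf, media = AtomicScatterWriteBuffer(), {}
print(buf.submit_atomic([(10, b"A"), (900, b"B")]))  # non-contiguous LBAs, one atomic unit
buf.flush(media)
print(media)   # {10: b'A', 900: b'B'}
```
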
  • Publication number: 20180004454
    Abstract: Methods, systems, and computer-readable storage media for performing scattered atomic I/O writes in a storage device. A list of block I/O write requests to be completed as an atomic unit is received from a requester with at least two of the block I/O write requests specifying non-contiguous data locations on a storage media. The plurality of block I/O write requests are buffered in a write buffer with each buffer entry marked as having an invalid state, wherein marking a buffer entry as having an invalid state prevents it from being flushed to the storage media. Upon buffering all of the plurality of block I/O writes, all of the buffer entries are marked as having a valid state at the same time. Upon marking all of the buffer entries as having a valid state, successful completion of the list of block I/O write requests is acknowledged to the requester.
    Type: Application
    Filed: June 30, 2016
    Publication date: January 4, 2018
    Inventors: Kishore Sampathkumar, Penchala Narasimha Reddy Chilakala, Parag R. Maharana, Durga Prasad Bhattarai
  • Patent number: 9400759
    Abstract: Methods and structure are provided for cache load balancing in storage controllers that utilize Solid State Drive (SSD) caches. One embodiment is a storage controller of a storage system. The storage controller includes a host interface operable to receive Input and Output (I/O) operations from a host computer. The storage controller also includes a cache memory that includes an SSD. Further, the storage controller includes a cache manager that is distinct from the cache memory. The cache manager is able to determine physical locations in the multiple SSDs that are unused, to identify an unused location that was written to a longer period of time ago than other unused locations, and to store a received I/O operation in the identified physical location. Further, the cache manager is able to trigger transmission of the stored I/O operations to storage devices of the storage system for processing.
    Type: Grant
    Filed: July 9, 2015
    Date of Patent: July 26, 2016
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Parag R. Maharana, Kishore K. Sampathkumar
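
Patent 9400759 above balances SSD cache load by writing each incoming I/O to the unused cache location whose last write is the oldest, then later forwarding the stored I/O to the backing storage devices. The timestamp bookkeeping in the sketch below is an assumption about how that "written longest ago" choice might be tracked.

```python
class SsdCacheManager:
    """Place incoming I/O in the least-recently-written unused SSD cache slot."""

    def __init__(self, slots):
        self.last_written = {slot: 0 for slot in slots}   # slot -> last write time
        self.in_use = {slot: None for slot in slots}      # slot -> buffered I/O
        self.clock = 0

    def store(self, io):
        unused = [s for s, held in self.in_use.items() if held is None]
        # Identify the unused location written to the longest time ago.
        target = min(unused, key=lambda s: self.last_written[s])
        self.clock += 1
        self.in_use[target] = io
        self.last_written[target] = self.clock
        return target

    def drain(self, backing_store):
        # Trigger transmission of stored I/O to the storage devices for processing.
        for slot, io in self.in_use.items():
            if io is not None:
                backing_store.append(io)
                self.in_use[slot] = None

mgr = SsdCacheManager(slots=["ssd0:0", "ssd0:1", "ssd1:0"])
print(mgr.store({"lba": 5, "data": b"x"}))   # ssd0:0 (all equally old, first wins)
print(mgr.store({"lba": 6, "data": b"y"}))   # ssd0:1
```
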
  • Publication number: 20160124754
    Abstract: A method for virtual function boot in a system including a single-root I/O virtualization (SR-IOV) enabled server includes loading a PF driver of the PF of a storage adapter onto the server utilizing the virtual machine manager of the server; creating a plurality of virtual functions utilizing the PF driver; detecting each of the virtual functions on an interconnection bus; maintaining a boot list associated with the plurality of virtual functions; querying the storage adapter for the boot list utilizing a VMBIOS associated with the plurality of VMs; presenting the detected boot list to a VM boot manager of the VMM; and booting each of the plurality of virtual machines utilizing each of the virtual functions, wherein each VF of the plurality of VFs is assigned to a VM of the plurality of VMs via an interconnect passthrough between the VMM and the plurality of VMs.
    Type: Application
    Filed: August 3, 2015
    Publication date: May 5, 2016
    Inventor: Parag R. Maharana
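
Publication 20160124754 above walks through an SR-IOV boot flow: the virtual machine manager loads the physical-function (PF) driver, the driver creates virtual functions, a boot list is maintained and queried through the VMBIOS, and each virtual function is passed through to its own virtual machine for boot. The sketch below mirrors those steps at a purely illustrative level; every name in it is an assumption.

```python
def sriov_vf_boot(vm_names):
    """Illustrative SR-IOV flow: one virtual function booted per virtual machine."""
    pf_driver = {"name": "storage_adapter_pf", "loaded_by": "VMM"}   # load the PF driver
    # Create one virtual function per VM using the PF driver.
    virtual_functions = [{"vf": i, "created_by": pf_driver["name"]}
                         for i, _ in enumerate(vm_names)]
    # Maintain a boot list associated with the virtual functions (queried via VMBIOS).
    boot_list = {vf["vf"]: f"boot-volume-{vf['vf']}" for vf in virtual_functions}
    # Assign each VF to a VM via an interconnect passthrough and boot the VM.
    assignments = {}
    for vf, vm in zip(virtual_functions, vm_names):
        assignments[vm] = {"virtual_function": vf["vf"], "boot_from": boot_list[vf["vf"]]}
    return assignments

print(sriov_vf_boot(["vm-alpha", "vm-beta"]))
```
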
  • Patent number: 9239679
    Abstract: An apparatus comprising a memory and a controller. The memory may be configured to (i) implement a cache and (ii) store meta-data. The cache may comprise one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. The controller is connected to the memory and configured to (A) process normal read/write operations in a first mode and (B) process special read/write operations in a second mode by (i) tracking a write followed by read condition on each of said cache windows and (ii) discarding data on the cache-lines associated with the cache windows after completion of the write followed by a read condition on the cache-lines.
    Type: Grant
    Filed: January 1, 2014
    Date of Patent: January 19, 2016
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Kishore Kaniyar Sampathkumar, Saugata Das Purkayastha, Parag R. Maharana
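
Patent 9239679 above tracks, in a special mode, a write followed by a read on each cache-line and discards the cached data once that sequence completes, which suits data that is written once and read back once. The sketch below collapses cache windows to single lines and invents the flag names, so treat it as an assumed simplification.

```python
class WriteThenReadCache:
    """Discard cache-lines once a write followed by a read has completed on them."""

    def __init__(self, special_mode=True):
        self.special_mode = special_mode
        self.lines = {}   # line id -> {"data": ..., "written": bool}

    def write(self, line, data):
        self.lines[line] = {"data": data, "written": True}

    def read(self, line):
        entry = self.lines.get(line)
        if entry is None:
            return None
        data = entry["data"]
        if self.special_mode and entry["written"]:
            # Write-followed-by-read condition is complete: discard the line.
            del self.lines[line]
        return data

cache = WriteThenReadCache()
cache.write(3, b"once-only")
print(cache.read(3))        # b'once-only'
print(cache.read(3))        # None, the line was discarded after its single read
```
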
  • Publication number: 20150378947
    Abstract: Methods and structure are provided for cache load balancing in storage controllers that utilize Solid State Drive (SSD) caches. One embodiment is a storage controller of a storage system. The storage controller includes a host interface operable to receive Input and Output (I/O) operations from a host computer. The storage controller also includes a cache memory that includes an SSD. Further, the storage controller includes a cache manager that is distinct from the cache memory. The cache manager is able to determine physical locations in the multiple SSDs that are unused, to identify an unused location that was written to a longer period of time ago than other unused locations, and to store a received I/O operation in the identified physical location. Further, the cache manager is able to trigger transmission of the stored I/O operations to storage devices of the storage system for processing.
    Type: Application
    Filed: July 9, 2015
    Publication date: December 31, 2015
    Inventors: Parag R. Maharana, Kishore K. Sampathkumar
  • Patent number: 9201681
    Abstract: A method and controller device for sharing computing resources in a virtualized environment having a plurality of virtual machines. The method includes assigning a portion of the computing resources to the plurality of virtual machines. The method also includes leasing by a first virtual machine at least a portion of the assigned computing resources of at least one second virtual machine. The first virtual machine leases computing resources from the at least one second virtual machine when the first virtual machine needs additional computing resources and at least a portion of the assigned computing resources of the at least one second virtual machine are not being used by the at least one second virtual machine.
    Type: Grant
    Filed: February 14, 2013
    Date of Patent: December 1, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Luca Bert, Parag R. Maharana
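
Patent 9201681 above assigns each virtual machine a share of the computing resources and lets a first virtual machine lease the portion of another machine's share that is currently unused. The accounting below is one assumed way to express that leasing; the unit-based quotas are not from the patent.

```python
class ResourceLeaser:
    """Let one VM lease the unused portion of another VM's assigned resources."""

    def __init__(self, assignments):
        self.assigned = dict(assignments)   # vm -> assigned units of the resource
        self.used = {vm: 0 for vm in assignments}
        self.leases = []                    # (lessee, lessor, units)

    def use(self, vm, units):
        self.used[vm] += units

    def lease(self, lessee, units):
        # Lease only resources another VM was assigned but is not currently using.
        for lessor, quota in self.assigned.items():
            spare = quota - self.used[lessor]
            if lessor != lessee and spare >= units:
                self.used[lessor] += units          # reserve the spare capacity
                self.leases.append((lessee, lessor, units))
                return lessor
        return None

pool = ResourceLeaser({"vm1": 4, "vm2": 4})
pool.use("vm1", 4)                 # vm1 has exhausted its own assignment
print(pool.lease("vm1", 2))        # vm2 (it has 4 unused units to lend)
```
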
  • Patent number: 9189409
    Abstract: Methods and structure are provided for reducing the number of writes to a cache of a storage controller. One exemplary embodiment includes a storage controller that has a non-volatile flash cache memory, a primary memory that is distinct from the cache memory, and a memory manager. The memory manager is able to receive data for storage in the cache memory, to generate a hash key from the received data, and to compare the hash key to hash values for entries in the cache memory. The memory manager can write the received data to the cache memory if the hash key does not match one of the hash values. Also, the memory manager can modify the primary memory instead of writing to the cache if the hash key matches a hash value, in order to reduce the amount of data written to the cache memory.
    Type: Grant
    Filed: February 19, 2013
    Date of Patent: November 17, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventor: Parag R. Maharana
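
Patent 9189409 above reduces flash-cache wear by hashing incoming data and, when the hash matches an existing cache entry, updating primary memory instead of writing the data to the flash cache again. The sketch below assumes SHA-256 and a reference count kept in primary memory; the patent abstract names neither.

```python
import hashlib

class DedupFlashCache:
    """Skip flash-cache writes when identical data is already cached."""

    def __init__(self):
        self.flash_cache = {}      # hash -> data actually written to the flash cache
        self.primary_refs = {}     # hash -> reference count kept in primary memory

    def write(self, data):
        key = hashlib.sha256(data).hexdigest()   # hash key generated from the data
        if key in self.flash_cache:
            # Hash matches an existing entry: modify primary memory only,
            # reducing the amount of data written to the flash cache.
            self.primary_refs[key] += 1
            return "deduplicated"
        self.flash_cache[key] = data
        self.primary_refs[key] = 1
        return "written to cache"

cache = DedupFlashCache()
print(cache.write(b"block-0"))   # written to cache
print(cache.write(b"block-0"))   # deduplicated
```
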
  • Patent number: 9135044
    Abstract: A method for virtual function boot in a system including a single-root I/O virtualization (SR-IOV) enabled server includes loading a PF driver of the PF of a storage adapter onto the server utilizing the virtual machine manager of the server; creating a plurality of virtual functions utilizing the PF driver; detecting each of the virtual functions on an interconnection bus; maintaining a boot list associated with the plurality of virtual functions; querying the storage adapter for the boot list utilizing a VMBIOS associated with the plurality of VMs; presenting the detected boot list to a VM boot manager of the VMM; and booting each of the plurality of virtual machines utilizing each of the virtual functions, wherein each VF of the plurality of VFs is assigned to a VM of the plurality of VMs via an interconnect passthrough between the VMM and the plurality of VMs.
    Type: Grant
    Filed: October 6, 2011
    Date of Patent: September 15, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventor: Parag R. Maharana
  • Patent number: 9110813
    Abstract: Methods and structure are provided for cache load balancing in storage controllers that utilize Solid State Drive (SSD) caches. One embodiment is a storage controller of a storage system. The storage controller includes a host interface operable to receive Input and Output (I/O) operations from a host computer. The storage controller also includes a cache memory that includes an SSD. Further, the storage controller includes a cache manager that is distinct from the cache memory. The cache manager is able to determine physical locations in the multiple SSDs that are unused, to identify an unused location that was written to a longer period of time ago than other unused locations, and to store a received I/O operation in the identified physical location. Further, the cache manager is able to trigger transmission of the stored I/O operations to storage devices of the storage system for processing.
    Type: Grant
    Filed: February 14, 2013
    Date of Patent: August 18, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Parag R. Maharana, Kishore K. Sampathkumar
  • Publication number: 20150178201
    Abstract: An apparatus comprising a memory and a controller. The memory may be configured to (i) implement a cache and (ii) store meta-data. The cache may comprise one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. The controller is connected to the memory and configured to (A) process normal read/write operations in a first mode and (B) process special read/write operations in a second mode by (i) tracking a write followed by read condition on each of said cache windows and (ii) discarding data on the cache-lines associated with the cache windows after completion of the write followed by a read condition on the cache-lines.
    Type: Application
    Filed: January 1, 2014
    Publication date: June 25, 2015
    Applicant: LSI Corporation
    Inventors: Kishore Kaniyar Sampathkumar, Saugata Das Purkayastha, Parag R. Maharana