Patents by Inventor Parthasarathy Ranganathan

Parthasarathy Ranganathan has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10474584
    Abstract: A technique includes using a cache controller of an integrated circuit to control a cache including cached data content and associated cache metadata. The technique includes storing the metadata and the cached data content off of the integrated circuit and organizing the storage of the metadata relative to the cached data content such that a bus operation initiated by the cache controller to target the cached data content also targets the associated metadata.
    Type: Grant
    Filed: April 30, 2012
    Date of Patent: November 12, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Jichuan Chang, Justin James Meza, Parthasarathy Ranganathan
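The co-location idea in patent 10474584 can be illustrated with a small data-layout sketch: if each cache block's tag and state bits are packed next to the line data in the off-chip store, a single transfer initiated by the on-chip controller returns both. This is only a toy model under assumed names (CacheBlock, OffChipStore), not the patented controller design.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

struct CacheBlock {
    // Metadata packed next to the data it describes; reading the block from
    // the external memory also brings the metadata along.
    uint64_t tag   = 0;
    uint8_t  valid = 0;
    uint8_t  dirty = 0;
    std::array<uint8_t, 64> data{};  // one 64-byte cache line
};

class OffChipStore {                 // stands in for DRAM behind the memory bus
public:
    explicit OffChipStore(size_t sets) : blocks_(sets) {}
    CacheBlock read(size_t set) const { return blocks_[set]; }   // one "bus op"
    void write(size_t set, const CacheBlock& b) { blocks_[set] = b; }
private:
    std::vector<CacheBlock> blocks_;
};

int main() {
    OffChipStore store(1024);
    CacheBlock b;
    b.tag = 0xBEEF; b.valid = 1; b.data[0] = 42;
    store.write(7, b);

    // A single read targets the cached data and, by construction, its metadata.
    CacheBlock fetched = store.read(7);
    std::printf("tag=%llx valid=%u data[0]=%u\n",
                (unsigned long long)fetched.tag,
                (unsigned)fetched.valid, (unsigned)fetched.data[0]);
}
```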
  • Patent number: 10402324
    Abstract: According to an example, a processor generates a memory access request and sends the memory access request to a memory module. The processor receives data from the memory module in response to the memory access request when a memory device in the memory module for the memory access request is busy and unable to execute the memory access request.
    Type: Grant
    Filed: October 31, 2013
    Date of Patent: September 3, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Kevin T. Lim, Sheng Li, Parthasarathy Ranganathan, William C. Hallowell
  • Publication number: 20190236010
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for caching data not frequently accessed. One of the methods includes receiving a request for data from a component of a device, determining that the data satisfies an infrequency condition, in response to determining that the data satisfies the infrequency condition: determining a target cache level which defines a cache level within a cache level hierarchy of a particular cache at which to store infrequently accessed data, the target cache level being lower than a highest cache level in the cache level hierarchy, requesting and receiving the data from a memory that is not a cache of the device, and storing the data in a level of the particular cache that is at or below the target cache level in the cache level hierarchy, and providing the data to the component.
    Type: Application
    Filed: April 9, 2019
    Publication date: August 1, 2019
    Inventors: Richard Yoo, Liqun Cheng, Benjamin C. Serebrin, Parthasarathy Ranganathan, Rama Krishna Govindaraju
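A minimal sketch of the placement policy described in publication 20190236010 (granted as patent 10303604 below): data that satisfies an infrequency condition is inserted only at or below a target level of the cache hierarchy instead of filling it from the top. The three-level hierarchy and the access-count threshold are assumptions for illustration, not the patented parameters.

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_map>

class TieredCache {
public:
    explicit TieredCache(int target_level) : target_level_(target_level) {}

    // Returns the cache level the data was placed at (1 = closest to the CPU).
    int access(uint64_t addr) {
        int count = ++access_count_[addr];
        bool infrequent = count <= kInfrequencyThreshold;
        // Infrequently accessed data bypasses the upper levels and is stored
        // at (or below) the target level; hot data fills the hierarchy from L1.
        int placed_at = infrequent ? target_level_ : 1;
        placement_[addr] = placed_at;
        return placed_at;
    }

private:
    static constexpr int kInfrequencyThreshold = 2;  // illustrative only
    int target_level_;
    std::unordered_map<uint64_t, int> access_count_;
    std::unordered_map<uint64_t, int> placement_;
};

int main() {
    TieredCache cache(/*target_level=*/3);           // assume a 3-level hierarchy
    std::printf("first touch  -> L%d\n", cache.access(0x1000));  // infrequent: L3
    std::printf("second touch -> L%d\n", cache.access(0x1000));  // still infrequent
    std::printf("third touch  -> L%d\n", cache.access(0x1000));  // now hot: L1
}
```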
  • Publication number: 20190163381
    Abstract: An example method includes during execution of a software application by a processor, receiving, by a copy processor separate from the processor, a request for an asynchronous data copy operation to copy data within a memory accessible by the copy processor, wherein the request is received from a copy manager accessible by the software application in a user space of an operating system managing execution of the software application; in response to the request, initiating, by the copy processor, the asynchronous data copy operation; continuing execution of the software application by the processor; determining, by the copy processor, that the asynchronous data copy operation has completed; and in response to determining that the asynchronous copy operation has completed, selectively notifying, by the copy processor, the software application that the asynchronous copy operation has completed.
    Type: Application
    Filed: January 8, 2019
    Publication date: May 30, 2019
    Inventors: Rama Krishna Govindaraju, Liqun Cheng, Parthasarathy Ranganathan, Michael R. Marty, Andrew Gallatin
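Publication 20190163381 (granted as patent 10191672 below) describes a copy processor that performs copies asynchronously while the application keeps running. The sketch below mimics that flow in software only: a worker thread stands in for the dedicated copy hardware, and a future stands in for the completion notification. CopyManager and its interface are assumptions.

```cpp
#include <cstring>
#include <cstdio>
#include <future>
#include <vector>

struct CopyManager {
    // Submits a copy request and returns a handle the application can poll
    // or wait on; the application thread is free to continue meanwhile.
    std::future<void> submit(void* dst, const void* src, size_t n) {
        return std::async(std::launch::async, [=] { std::memcpy(dst, src, n); });
    }
};

int main() {
    std::vector<char> src(1 << 20, 'x'), dst(1 << 20);
    CopyManager mgr;

    auto done = mgr.submit(dst.data(), src.data(), src.size());

    // Application continues doing unrelated work while the copy proceeds.
    long sum = 0;
    for (int i = 0; i < 1000; ++i) sum += i;

    done.wait();                           // stand-in for selective notification
    std::printf("sum=%ld dst[0]=%c\n", sum, dst[0]);
}
```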
  • Patent number: 10303604
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for caching data not frequently accessed. One of the methods includes receiving a request for data from a component of a device, determining that the data satisfies an infrequency condition, in response to determining that the data satisfies the infrequency condition: determining a target cache level which defines a cache level within a cache level hierarchy of a particular cache at which to store infrequently accessed data, the target cache level being lower than a highest cache level in the cache level hierarchy, requesting and receiving the data from a memory that is not a cache of the device, and storing the data in a level of the particular cache that is at or below the target cache level in the cache level hierarchy, and providing the data to the component.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: May 28, 2019
    Assignee: Google LLC
    Inventors: Richard Yoo, Liqun Cheng, Benjamin C. Serebrin, Parthasarathy Ranganathan, Rama Krishna Govindaraju
  • Publication number: 20190155658
    Abstract: Methods, systems, and computer storage media storing instructions for managing processing system efficiency. One of the methods includes obtaining data splitting a plurality of general-purpose processing units in a processing system into a high-priority domain and a low-priority domain, wherein the general-purpose processing units in the high-priority domain are assigned to perform one or more tasks comprising one or more high-priority tasks, and the general-purpose processing units in the low-priority domain are assigned to perform one or more low-priority tasks; and during runtime of the processing system, obtaining memory usage measurements that characterize usage of system memory by the high-priority domain and the low-priority domain; and adjusting, based on the memory usage measurements, a configuration of (i) the high-priority domain, (ii) the low-priority domain, or (iii) both to adjust utilization of the system memory by the general-purpose processing units.
    Type: Application
    Filed: November 21, 2018
    Publication date: May 23, 2019
    Inventors: Liqun Cheng, Rama Krishna Govindaraju, Haishan Zhu, David Lo, Parthasarathy Ranganathan, Nishant Patil
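A toy control loop for the split-domain idea in publication 20190155658: cores are divided into a high-priority and a low-priority domain, memory pressure is sampled at runtime, and the low-priority domain is shrunk or grown to protect high-priority tasks. The sampling function, thresholds, and 16-core total are assumptions, not the patented policy.

```cpp
#include <algorithm>
#include <cstdio>

struct DomainConfig {
    int high_priority_cores;
    int low_priority_cores;
};

// Stand-in for a runtime measurement of memory pressure caused by the
// high-priority domain (0.0 = idle, 1.0 = saturated).
double sample_high_priority_memory_pressure(int step) {
    return step < 3 ? 0.9 : 0.4;   // pretend pressure subsides after a while
}

void adjust(DomainConfig& cfg, double pressure) {
    if (pressure > 0.8 && cfg.low_priority_cores > 0) {
        // Throttle best-effort work so it stops competing for memory.
        --cfg.low_priority_cores;
        ++cfg.high_priority_cores;
    } else if (pressure < 0.5) {
        // Give cores back to the low-priority domain when memory is calm.
        cfg.low_priority_cores = std::min(cfg.low_priority_cores + 1, 8);
        cfg.high_priority_cores = 16 - cfg.low_priority_cores;
    }
}

int main() {
    DomainConfig cfg{8, 8};
    for (int step = 0; step < 6; ++step) {
        adjust(cfg, sample_high_priority_memory_pressure(step));
        std::printf("step %d: high=%d low=%d\n",
                    step, cfg.high_priority_cores, cfg.low_priority_cores);
    }
}
```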
  • Patent number: 10270652
    Abstract: A system and method for network management are described herein. The system includes a number of servers and a first network coupling the servers to each other and configured to connect the servers to one or more client computing devices. The system also includes a second network coupling the servers to each other, wherein data transferred between the servers is transferred through the second network. Network management requests for configuring the second network are communicated to the servers through the first network.
    Type: Grant
    Filed: April 25, 2012
    Date of Patent: April 23, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Jichuan Chang, Parthasarathy Ranganathan
  • Publication number: 20190108261
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for disaggregating latent causes for computer system optimization. In one aspect, a method includes accessing a data stream for data values resulting from operations performed by a computer system; providing the data values as input to a data disaggregation machine learning model that generates descriptors of latent causes of the data values; providing the data values and the descriptors of the latent causes of the data values as inputs to a control system model that generates embedded representations of commands to modify the operations performed by the computer system; determining commands to modify the operations performed by the computer system based on the embedded representations of commands to modify the operations performed by the computer system; and providing the commands to the computer system.
    Type: Application
    Filed: October 5, 2017
    Publication date: April 11, 2019
    Inventors: Milad Olia Hashemi, Parthasarathy Ranganathan, Harsh Satija
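A highly simplified sketch of the pipeline in publication 20190108261: values from a data stream go to a "disaggregation model" that labels a latent cause, the label feeds a "control model" that emits a command, and the command would be applied to the system. The patent describes learned machine-learning models; the rule-based stand-ins below, and the latency/queueing scenario, are purely illustrative assumptions.

```cpp
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

// Stand-in "disaggregation model": label the latent cause behind a window of
// latency samples. A real system would learn this, per the abstract.
std::string disaggregate(const std::vector<double>& latencies) {
    double mx = 0, sum = 0;
    for (double v : latencies) { mx = std::max(mx, v); sum += v; }
    double mean = sum / latencies.size();
    return (mx > 3 * mean) ? "queueing-spike" : "steady-overload";
}

// Stand-in "control model": map the latent cause (plus, in a real system,
// the raw data values) to a command that modifies the computer system.
std::string control(const std::string& cause) {
    return cause == "queueing-spike" ? "increase-queue-parallelism"
                                     : "reduce-offered-load";
}

int main() {
    std::vector<double> stream = {1.0, 1.1, 0.9, 9.5, 1.0};
    std::string cause = disaggregate(stream);
    std::printf("latent cause: %s -> command: %s\n",
                cause.c_str(), control(cause).c_str());
}
```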
  • Patent number: 10218779
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for machine level resource distribution are disclosed. In one aspect, a method is implemented in a data processing apparatus, which includes, for each server computer in a set of two or more server computers within a data center, wherein each server computer includes a plurality of processing cores, receiving wear data describing, for each processing core of the server computer, a wear level for the processing core that is indicative of accumulated wear of the processing core, and moderating accumulation of wear in the processor cores based on the wear level of the processing cores from at least two different server computers.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: February 26, 2019
    Assignee: Google LLC
    Inventors: Liqun Cheng, Rama Krishna Govindaraju, Parthasarathy Ranganathan
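A small sketch of the wear-moderated placement in patent 10218779: given per-core wear levels reported by servers in a data center, new work is steered toward the least-worn core so wear accumulates evenly across machines. The data layout and the greedy pick are assumptions for illustration.

```cpp
#include <cstdio>
#include <vector>

struct Core { int server; int core; double wear; };  // wear: 0.0 (new) .. 1.0

// Pick the processing core with the least accumulated wear across the fleet.
Core pick_least_worn(const std::vector<Core>& fleet) {
    Core best = fleet.front();
    for (const Core& c : fleet)
        if (c.wear < best.wear) best = c;
    return best;
}

int main() {
    std::vector<Core> fleet = {
        {0, 0, 0.72}, {0, 1, 0.55}, {1, 0, 0.30}, {1, 1, 0.61},
    };
    Core target = pick_least_worn(fleet);
    std::printf("schedule next task on server %d, core %d (wear %.2f)\n",
                target.server, target.core, target.wear);
}
```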
  • Patent number: 10216636
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for pre-fetching data. The methods, systems, and apparatus include actions of providing a request for data to an input-output device and receiving a set of memory addresses for the requested data. Additional actions include determining a subset of the memory addresses, providing a request for a processor to pre-fetch or inject data corresponding to the subset of the memory addresses, and receiving the requested data and the set of memory addresses. Additional actions include determining that the received data includes data for the subset of memory addresses that has been requested to be pre-fetched or injected, storing the data for the subset of memory addresses in a cache of the processor, and storing remaining data of the received data for the memory addresses in a main memory.
    Type: Grant
    Filed: July 26, 2018
    Date of Patent: February 26, 2019
    Assignee: Google LLC
    Inventors: Rama Krishna Govindaraju, Liqun Cheng, Parthasarathy Ranganathan
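A rough software analogue of patent 10216636: of the buffers an I/O device will fill, only a chosen subset is pulled toward the processor cache while the rest simply lands in main memory. Real cache injection is a hardware mechanism; this sketch uses the GCC/Clang `__builtin_prefetch` builtin as a stand-in, and the buffer counts are arbitrary.

```cpp
#include <cstdio>
#include <vector>

int main() {
    constexpr size_t kBuffers = 8;
    constexpr size_t kHotSubset = 2;                // subset chosen for the cache
    std::vector<std::vector<char>> buffers(kBuffers, std::vector<char>(4096));

    // Pretend the I/O device reported these addresses for the incoming data;
    // request that only the hot subset be brought cache-ward.
    for (size_t i = 0; i < kHotSubset; ++i)
        __builtin_prefetch(buffers[i].data());

    // Device "writes" the payload; the non-prefetched remainder stays in main
    // memory until the processor actually touches it.
    for (auto& b : buffers) b[0] = 1;

    std::printf("prefetched %zu of %zu buffers\n", kHotSubset, kBuffers);
}
```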
  • Patent number: 10191672
    Abstract: An example method includes during execution of a software application by a processor, receiving, by a copy processor separate from the processor, a request for an asynchronous data copy operation to copy data within a memory accessible by the copy processor, wherein the request is received from a copy manager accessible by the software application in a user space of an operating system managing execution of the software application; in response to the request, initiating, by the copy processor, the asynchronous data copy operation; continuing execution of the software application by the processor; determining, by the copy processor, that the asynchronous data copy operation has completed; and in response to determining that the asynchronous copy operation has completed, selectively notifying, by the copy processor, the software application that the asynchronous copy operation has completed.
    Type: Grant
    Filed: October 16, 2015
    Date of Patent: January 29, 2019
    Assignee: Google LLC
    Inventors: Rama Krishna Govindaraju, Liqun Cheng, Parthasarathy Ranganathan, Michael R. Marty, Andrew Gallatin
  • Patent number: 10152247
    Abstract: A technique includes acquiring a plurality of write requests from at least one memory controller and logging information associated with the plurality of write requests in persistent storage. The technique includes applying the plurality of write requests atomically as a group to persistent storage.
    Type: Grant
    Filed: January 23, 2014
    Date of Patent: December 11, 2018
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Sheng Li, Jishen Zhao, Jichuan Chang, Parthasarathy Ranganathan, Alistair Veitch, Kevin T. Lim, Mark Lillibridge
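A minimal redo-log sketch of the "log, then apply atomically as a group" technique in patent 10152247: write requests are first appended to a log, and only after a commit mark is set are they applied to the backing store, so replay after a crash is all-or-nothing. Durability details (flushes, NVM barriers) are omitted and the data structures are assumptions.

```cpp
#include <cstdint>
#include <cstdio>
#include <map>
#include <vector>

struct WriteReq { uint64_t addr; uint64_t value; };

struct GroupLog {
    std::vector<WriteReq> entries;
    bool committed = false;

    void append(WriteReq w) { entries.push_back(w); }

    // Atomicity hinges on the single commit flag: either the whole group is
    // replayed into the store, or none of it is.
    void commit_and_apply(std::map<uint64_t, uint64_t>& store) {
        committed = true;                  // would be a durable commit record
        for (const WriteReq& w : entries) store[w.addr] = w.value;
        entries.clear();
    }
};

int main() {
    std::map<uint64_t, uint64_t> persistent_store;
    GroupLog log;
    log.append({0x10, 111});
    log.append({0x20, 222});
    log.commit_and_apply(persistent_store);
    std::printf("0x10=%llu 0x20=%llu\n",
                (unsigned long long)persistent_store[0x10],
                (unsigned long long)persistent_store[0x20]);
}
```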
  • Publication number: 20180336137
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for pre-fetching data. The methods, systems, and apparatus include actions of providing a request for data to an input-output device and receiving a set of memory addresses for the requested data. Additional actions include determining a subset of the memory addresses, providing a request for a processor to pre-fetch or inject data corresponding to the subset of the memory addresses, and receiving the requested data and the set of memory addresses. Additional actions include determining that the received data includes data for the subset of memory addresses that has been requested to be pre-fetched or injected, storing the data for the subset of memory addresses in a cache of the processor, and storing remaining data of the received data for the memory addresses in a main memory.
    Type: Application
    Filed: July 26, 2018
    Publication date: November 22, 2018
    Inventors: Rama Krishna Govindaraju, Liqun Cheng, Parthasarathy Ranganathan
  • Publication number: 20180307420
    Abstract: An example method involves receiving, at a first memory node, data to be written at a memory location in the first memory node. The data is received from a device. At the first memory node, old data is read from the memory location, without sending the old data to the device. The data is written to the memory location. The data and the old data are sent from the first memory node to a second memory node to store parity information in the second memory node without the device determining the parity information. The parity information is based on the data stored in the first memory node.
    Type: Application
    Filed: June 18, 2018
    Publication date: October 25, 2018
    Inventors: Doe Hyun Yoon, Naveen Muralimanohar, Jichuan Chang, Parthasarathy Ranganathan
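A toy model of the parity offload described in publication 20180307420 (granted as patent 10019176 below): the data node reads its old value locally, writes the new value, and forwards both to the parity node, which updates parity as parity ^= old ^ new, so the requesting device never computes parity. Modeling each node as a single word is an illustrative simplification.

```cpp
#include <cstdint>
#include <cstdio>

struct ParityNode {
    uint64_t parity = 0;
    void update(uint64_t old_data, uint64_t new_data) {
        parity ^= old_data ^ new_data;     // RAID-5 style incremental update
    }
};

struct DataNode {
    uint64_t word = 0;
    void write(uint64_t new_data, ParityNode& p) {
        uint64_t old_data = word;          // read old data locally, not at the device
        word = new_data;                   // write new data to the memory location
        p.update(old_data, new_data);      // send (old, new) to the parity node
    }
};

int main() {
    DataNode d; ParityNode p;
    d.write(0xAB, p);
    d.write(0xCD, p);
    // With a single data node, parity should now equal the current data word.
    std::printf("data=%llx parity=%llx\n",
                (unsigned long long)d.word, (unsigned long long)p.parity);
}
```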
  • Patent number: 10055350
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for pre-fetching data. The methods, systems, and apparatus include actions of providing a request for data to an input-output device and receiving a set of memory addresses for the requested data. Additional actions include determining a subset of the memory addresses, providing a request for a processor to pre-fetch or inject data corresponding to the subset of the memory addresses, and receiving the requested data and the set of memory addresses. Additional actions include determining that the received data includes data for the subset of memory addresses that has been requested to be pre-fetched or injected, storing the data for the subset of memory addresses in a cache of the processor, and storing remaining data of the received data for the memory addresses in a main memory.
    Type: Grant
    Filed: November 5, 2014
    Date of Patent: August 21, 2018
    Assignee: Google LLC
    Inventors: Rama Krishna Govindaraju, Liqun Cheng, Parthasarathy Ranganathan
  • Patent number: 10025663
    Abstract: Local checkpointing using a multi-level cell is described herein. An example method includes storing a first datum in a first level of a multi-level cell. A second datum is stored in a second level of the multi-level cell, the second datum representing a checkpoint of the first datum. The first datum is copied from the first level to the second level of the multi-level cell to create the checkpoint.
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: July 17, 2018
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Doe Hyun Yoon, Robert Schreiber, Paolo Faraboschi, Jichuan Chang, Naveen Muralimanohar, Parthasarathy Ranganathan
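A sketch of the checkpoint-in-cell idea from patent 10025663: each multi-level cell holds a working bit in its first level and a checkpoint bit in its second level, and taking a checkpoint copies level 1 into level 2 in place. Modeling a cell as a pair of booleans, and the rollback operation, are illustrative assumptions beyond the abstract.

```cpp
#include <cstdio>
#include <vector>

struct MultiLevelCell {
    bool level1 = false;   // working datum
    bool level2 = false;   // checkpoint of the datum
};

struct MlcArray {
    std::vector<MultiLevelCell> cells;
    explicit MlcArray(size_t n) : cells(n) {}

    void write(size_t i, bool v) { cells[i].level1 = v; }
    void checkpoint()            { for (auto& c : cells) c.level2 = c.level1; }
    void rollback()              { for (auto& c : cells) c.level1 = c.level2; }
};

int main() {
    MlcArray mem(4);
    mem.write(0, true);
    mem.checkpoint();        // local checkpoint: copy level 1 -> level 2
    mem.write(0, false);     // further updates only touch level 1
    mem.rollback();          // restore the checkpointed state
    std::printf("cell0 after rollback = %d\n", mem.cells[0].level1 ? 1 : 0);
}
```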
  • Patent number: 10019176
    Abstract: An example method involves receiving, at a first memory node, data to be written at a memory location in the first memory node. The data is received from a device. At the first memory node, old data is read from the memory location, without sending the old data to the device. The data is written to the memory location. The data and the old data are sent from the first memory node to a second memory node to store parity information in the second memory node without the device determining the parity information. The parity information is based on the data stored in the first memory node.
    Type: Grant
    Filed: October 30, 2012
    Date of Patent: July 10, 2018
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Doe Hyun Yoon, Naveen Muralimanohar, Jichuan Chang, Parthasarathy Ranganathan
  • Patent number: 9935899
    Abstract: A switch, a system and operational method for packet switching between virtual machines running in a server and a network. The server comprises a switch with swappable, virtual ports. The switch routes packets to and from the various virtual machines resident in the server memory.
    Type: Grant
    Filed: February 10, 2015
    Date of Patent: April 3, 2018
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Jayaram Mudigonda, Parthasarathy Ranganathan
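A minimal sketch of a software switch in the spirit of patent 9935899: each virtual machine is bound to a virtual port, frames are forwarded by destination MAC, and ports can be attached or detached (swapped) at runtime. Class and field names are assumptions; the patented switch operates on server-resident VMs and a physical network, which this toy omits.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <unordered_map>

struct Frame { uint64_t dst_mac; std::string payload; };

class VirtualSwitch {
public:
    void attach_port(uint64_t mac, int vport) { table_[mac] = vport; }
    void detach_port(uint64_t mac)            { table_.erase(mac); }

    // Returns the virtual port the frame is delivered to, or -1 to drop/uplink.
    int forward(const Frame& f) const {
        auto it = table_.find(f.dst_mac);
        return it == table_.end() ? -1 : it->second;
    }
private:
    std::unordered_map<uint64_t, int> table_;   // MAC -> virtual port
};

int main() {
    VirtualSwitch sw;
    sw.attach_port(0xAA, 1);                    // VM A on vport 1
    sw.attach_port(0xBB, 2);                    // VM B on vport 2
    std::printf("to 0xBB -> vport %d\n", sw.forward({0xBB, "hello"}));
    sw.detach_port(0xBB);                       // swap out VM B's port
    std::printf("to 0xBB -> vport %d\n", sw.forward({0xBB, "hello"}));
}
```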
  • Patent number: 9934085
    Abstract: A detector detects, using an error code, an error in data stored in a memory. The detector determines whether the error is uncorrectable using the error code. In response to determining that the error is uncorrectable, an error handler associated with an application is invoked to handle the error in the data by recovering the data to an application-wide consistent state.
    Type: Grant
    Filed: May 29, 2013
    Date of Patent: April 3, 2018
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Doe-Hyun Yoon, Jichuan Chang, Naveen Muralimanohar, Parthasarathy Ranganathan, Robert Schreiber, Norman Paul Jouppi
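A sketch of the error-handling flow in patent 9934085: a detector checks stored data against an error code, and when the error cannot be corrected it invokes an application-registered handler that restores an application-wide consistent state. The single-parity "code" and snapshot-based recovery below are illustrative assumptions, not the patented mechanism.

```cpp
#include <cstdio>
#include <functional>
#include <vector>

struct Memory {
    std::vector<unsigned char> data;
    unsigned char parity = 0;                  // toy error-detecting code
    void store(std::vector<unsigned char> d) {
        data = std::move(d);
        parity = 0;
        for (unsigned char b : data) parity ^= b;
    }
    bool check() const {
        unsigned char p = 0;
        for (unsigned char b : data) p ^= b;
        return p == parity;                    // parity detects but cannot correct
    }
};

int main() {
    Memory mem;
    std::vector<unsigned char> snapshot = {1, 2, 3, 4};
    mem.store(snapshot);

    // Application registers a handler that recovers to a consistent state.
    std::function<void(Memory&)> on_uncorrectable =
        [&](Memory& m) { m.store(snapshot); std::puts("recovered from snapshot"); };

    mem.data[2] ^= 0xFF;                       // inject a fault
    if (!mem.check()) on_uncorrectable(mem);   // uncorrectable -> invoke handler
    std::printf("check after recovery: %s\n", mem.check() ? "ok" : "bad");
}
```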
  • Patent number: 9785472
    Abstract: Illustrated is a system and method that includes identifying a search space based upon available resources, the search space to be used to satisfy a resource request. The system and method also includes selecting from the search space an initial candidate set, each candidate of the candidate set representing a potential resource allocation to satisfy the resource request. The system and method further includes assigning a fitness score, based upon a predicted performance, to each member of the candidate set. The system and method also includes transforming the candidate set into a fittest candidate set, the fittest candidate set having a best predicted performance to satisfy the resource request.
    Type: Grant
    Filed: June 11, 2010
    Date of Patent: October 10, 2017
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Gunho Lee, Niraj Tolia, Parthasarathy Ranganathan
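A sketch of the search loop described in patent 9785472: candidate resource allocations drawn from the available-resource search space are scored with a predicted-performance fitness function and repeatedly transformed (keep the fittest, perturb them) toward a best candidate. The fitness function, mutation rule, and resource dimensions below are illustrative assumptions.

```cpp
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

struct Candidate { int cpus; int gib_ram; };

// Assumed performance predictor: more resources score higher, RAM less so.
double fitness(const Candidate& c) { return c.cpus * 1.0 + c.gib_ram * 0.1; }

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> cpu_d(1, 16), ram_d(1, 64), jitter(-2, 2);
    auto by_fitness = [](const Candidate& a, const Candidate& b) {
        return fitness(a) > fitness(b);
    };

    std::vector<Candidate> pop(8);
    for (auto& c : pop) c = {cpu_d(rng), ram_d(rng)};    // initial candidate set

    for (int gen = 0; gen < 5; ++gen) {
        std::sort(pop.begin(), pop.end(), by_fitness);   // score by predicted perf.
        pop.resize(4);                                   // keep the fittest set
        for (int i = 0; i < 4; ++i) {                    // refill by perturbation
            Candidate child = pop[i];
            child.cpus    = std::clamp(child.cpus + jitter(rng), 1, 16);
            child.gib_ram = std::clamp(child.gib_ram + jitter(rng), 1, 64);
            pop.push_back(child);
        }
    }
    std::sort(pop.begin(), pop.end(), by_fitness);
    std::printf("best: %d cpus, %d GiB (fitness %.1f)\n",
                pop[0].cpus, pop[0].gib_ram, fitness(pop[0]));
}
```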