Patents by Inventor Naveen Muralimanohar

Naveen Muralimanohar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11853301
    Abstract: Compiled portions of code generated to perform a query plan at a query engine may be shared with other query engines. A data store, separate from the query engines, may store compiled portions of query code generated for different queries. If a query engine does not have a locally stored compiled portion of query code, then the separate data store may be accessed in order to obtain a compiled portion of query code, allowing reuse of compiled query code across different query engines for queries directed to different databases.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: December 26, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Ippokratis Pandis, Naresh Chainani, Kiran Kumar Chinta, Venkatraman Govindaraju, Andrew Edward Caldwell, Naveen Muralimanohar, Martin Grund, Fabian Oliver Nagel, Nikolaos Armenatzoglou
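    A minimal Python sketch of the shared compiled-code cache described in the entry above. The class names, the SHA-256 fingerprint, and the in-memory dict standing in for the external data store are assumptions for illustration, not Amazon's implementation.

      import hashlib

      class SharedCompilationCache:
          """Stands in for the external data store shared by every query engine."""
          def __init__(self):
              self._store = {}                      # plan fingerprint -> compiled code

          def get(self, key):
              return self._store.get(key)

          def put(self, key, compiled):
              self._store[key] = compiled

      class QueryEngine:
          def __init__(self, shared_cache):
              self._local = {}                      # engine-local cache
              self._shared = shared_cache

          def _fingerprint(self, plan_fragment: str) -> str:
              # Equivalent plan fragments from different queries (or different
              # engines) hash to the same key.
              return hashlib.sha256(plan_fragment.encode()).hexdigest()

          def _compile(self, plan_fragment: str) -> bytes:
              return f"compiled({plan_fragment})".encode()   # stand-in for codegen

          def get_compiled(self, plan_fragment: str) -> bytes:
              key = self._fingerprint(plan_fragment)
              if key in self._local:                    # 1. local hit
                  return self._local[key]
              compiled = self._shared.get(key)          # 2. shared-store hit
              if compiled is None:                      # 3. compile and publish
                  compiled = self._compile(plan_fragment)
                  self._shared.put(key, compiled)
              self._local[key] = compiled
              return compiled

      shared = SharedCompilationCache()
      engine_a, engine_b = QueryEngine(shared), QueryEngine(shared)
      engine_a.get_compiled("scan(t1); filter(x > 5)")   # compiles and publishes
      engine_b.get_compiled("scan(t1); filter(x > 5)")   # reuses the shared copy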
  • Publication number: 20230359627
    Abstract: Compiled portions of code generated to perform a query plan at a query engine may be shared with other query engines. A data store, separate from the query engines, may store compiled portions of query code generated for different queries. If a query engine does not have a locally stored compiled portion of query code, then the separate data store may be accessed in order to obtain a compiled portion of query code, allowing reuse of compiled query code across different query engines for queries directed to different databases.
    Type: Application
    Filed: July 12, 2023
    Publication date: November 9, 2023
    Applicant: Amazon Technologies, Inc.
    Inventors: Ippokratis Pandis, Naresh Chainani, Kiran Kumar Chinta, Venkatraman Govindaraju, Andrew Edward Caldwell, Naveen Muralimanohar, Martin Grund, Fabian Oliver Nagel, Nikolaos Armenatzoglou
  • Patent number: 11556438
    Abstract: While scheduled checkpoints are being taken of a cluster of active compute nodes distributively executing an application in parallel, a likelihood of failure of the active compute nodes is periodically and independently predicted. Responsive to the likelihood of failure of a given active compute node exceeding a threshold, the given active compute node is proactively migrated to a spare compute node of the cluster at a next scheduled checkpoint. Another spare compute node of the cluster can perform prediction and migration. Prediction can be based on both hardware events and software events regarding the active compute nodes.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: January 17, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Cong Xu, Naveen Muralimanohar, Harumi Kuno
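    A hedged Python sketch of the prediction-and-migration loop in the entry above. The event features, weights, threshold, and exponential scoring function are invented for illustration; only the overall flow (predict between checkpoints, migrate at the next scheduled checkpoint) follows the abstract.

      import math

      FAILURE_THRESHOLD = 0.8

      def failure_likelihood(hw_events: dict, sw_events: dict) -> float:
          # Toy scoring over hardware and software event counts.
          score = 0.02 * hw_events.get("ecc_corrected", 0) \
                + 0.10 * hw_events.get("thermal_alarms", 0) \
                + 0.05 * sw_events.get("kernel_errors", 0)
          return 1.0 - math.exp(-score)        # squash into [0, 1)

      def take_checkpoint(nodes):
          print("checkpointing", nodes)

      def restore_from_checkpoint(spare, failing_node):
          print(f"migrating {failing_node} -> {spare}")

      def run_interval(active_nodes, spare_nodes, events):
          """One interval: predict between checkpoints, migrate at the next one."""
          to_migrate = [n for n in active_nodes
                        if failure_likelihood(*events[n]) > FAILURE_THRESHOLD]
          take_checkpoint(active_nodes)
          for node in to_migrate:
              if spare_nodes:
                  spare = spare_nodes.pop()
                  restore_from_checkpoint(spare, node)   # proactive migration
                  active_nodes[active_nodes.index(node)] = spare

      nodes = ["n0", "n1", "n2"]
      spares = ["s0"]
      events = {"n0": ({"ecc_corrected": 0}, {"kernel_errors": 0}),
                "n1": ({"ecc_corrected": 200, "thermal_alarms": 3}, {"kernel_errors": 12}),
                "n2": ({}, {})}
      run_interval(nodes, spares, events)      # n1 is flagged and migrated to s0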
  • Patent number: 11308106
    Abstract: Caching results of sub-queries to different locations in a data store may be performed. A database query may be received that causes different storage engines to perform sub-queries to different locations in a data store that stores data for a database. The results of the sub-queries may be stored in a cache. When another database query is received, sub-queries generated to perform the other database query that are the same as one or more of the previously performed sub-queries may obtain the results of the sub-queries from the cache instead of performing the sub-queries again.
    Type: Grant
    Filed: May 21, 2018
    Date of Patent: April 19, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Naveen Muralimanohar, Bhaven Avalani, Martin Grund, William Michael McCreedy, Ippokratis Pandis, Michalis Petropoulos
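    A small Python sketch of sub-query result caching as described above. The cache key, the storage-engine stub, and the example sub-queries are assumptions for illustration.

      class SubQueryCache:
          def __init__(self):
              self._results = {}                       # (location, subquery) -> rows

          def get_or_run(self, location, subquery, run_fn):
              key = (location, subquery)
              if key not in self._results:             # cache miss: execute sub-query
                  self._results[key] = run_fn(location, subquery)
              return self._results[key]                # cache hit: reuse prior result

      def storage_engine_scan(location, subquery):
          print(f"executing {subquery!r} against {location}")
          return [("row", location, subquery)]

      cache = SubQueryCache()
      # A first database query fans out sub-queries to two storage locations.
      q1 = [cache.get_or_run(loc, "SELECT * FROM t WHERE d='2023-01'", storage_engine_scan)
            for loc in ("cold_storage", "local_disk")]
      # A later query that generates an identical sub-query reuses the cached rows
      # instead of performing the sub-query again (nothing is printed here).
      q2 = cache.get_or_run("cold_storage", "SELECT * FROM t WHERE d='2023-01'",
                            storage_engine_scan)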
  • Patent number: 11157237
    Abstract: In some examples, memristive dot product circuit based floating point computations may include ascertaining a matrix and a vector including floating point values, and partitioning the matrix into a plurality of sub-matrices according to a size of a plurality of memristive dot product circuits. For each sub-matrix of the plurality of sub-matrices, the floating point values may be converted to fixed point values. Based on the conversion and selected ones of the plurality of memristive dot product circuits, a dot product operation may be performed with respect to a sub-matrix and the vector. Each of the plurality of memristive dot product circuits may include rows including word line voltages corresponding to the floating point values of the vector, conductances corresponding to the floating point values of an associated sub-matrix, and columns that include bitline currents corresponding to dot products of the voltages and conductances.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: October 26, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Naveen Muralimanohar, Benjamin Feinberg
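    An illustrative numpy model of the flow described above: the matrix is partitioned into crossbar-sized tiles, each tile (and the matching vector slice) is converted to fixed point, the tile's matrix-vector product stands in for the analog bitline-current summation, and the quantization scales are undone when partial results are accumulated. The tile size, bit width, and scaling scheme are assumptions.

      import numpy as np

      TILE = 4          # crossbar rows/cols per memristive dot product circuit
      BITS = 8          # fixed point resolution assumed for this sketch

      def to_fixed(block):
          """Quantize a float block to signed fixed point; return ints and scale."""
          scale = np.max(np.abs(block)) or 1.0
          q = np.round(block / scale * (2**(BITS - 1) - 1)).astype(np.int32)
          return q, scale / (2**(BITS - 1) - 1)

      def memristive_matvec(matrix, vector):
          m, n = matrix.shape
          result = np.zeros(m)
          for r in range(0, m, TILE):
              for c in range(0, n, TILE):
                  q_tile, s_m = to_fixed(matrix[r:r+TILE, c:c+TILE])
                  q_vec, s_v = to_fixed(vector[c:c+TILE])
                  # Analog crossbar: word-line voltages times conductances,
                  # summed as bitline currents per column.
                  partial = q_tile @ q_vec
                  result[r:r+TILE] += partial * s_m * s_v   # undo fixed point scales
          return result

      A = np.random.randn(8, 8)
      x = np.random.randn(8)
      print(np.max(np.abs(memristive_matvec(A, x) - A @ x)))   # small quantization error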
  • Patent number: 11126549
    Abstract: In an example, a method includes identifying, using at least one processor, data portions of a plurality of distinct data objects stored in at least one memory which are to be processed using the same logical operation. The method may further include identifying a representation of an operand stored in at least one memory, the operand being to provide the logical operation, and providing a logical engine with the operand. The data portions may be stored in a plurality of input data buffers, wherein each of the input data buffers comprises a data portion of a different data object. The logical operation may be carried out on each of the data portions using the logical engine, and the outputs for each data portion may be stored in a plurality of output data buffers, wherein each of the outputs comprises data derived from a different data object.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: September 21, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Naveen Muralimanohar, Ali Shafiee Ardestani
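    A minimal Python sketch of the scheme above, assuming a bitwise AND as the logical operation: one shared operand is applied by a single logical engine to data portions drawn from several distinct data objects, with per-object input and output buffers. The object names and mask are illustrative only.

      def logical_engine(operand: bytes, input_buffers: list[bytes]) -> list[bytes]:
          outputs = []
          for portion in input_buffers:                        # one buffer per object
              outputs.append(bytes(b & m for b, m in zip(portion, operand)))
          return outputs

      objects = {
          "object_a": b"\xff\x0f\xf0\xaa" * 2,
          "object_b": b"\x12\x34\x56\x78" * 2,
          "object_c": b"\x00\xff\x00\xff" * 2,
      }
      mask = b"\x0f" * 8                          # shared operand for the logical AND
      input_buffers = [objects[name][:8] for name in objects]   # a portion of each object
      output_buffers = logical_engine(mask, input_buffers)
      for name, out in zip(objects, output_buffers):
          print(name, out.hex())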
  • Patent number: 11018692
    Abstract: Computer-implemented methods, systems, and devices to perform lossless compression of floating point format time-series data are disclosed. A first data value may be obtained in floating point format representative of an initial time-series parameter. For example, an output checkpoint of a computer simulation of a real-world event such as weather prediction or nuclear reaction simulation. A first predicted value may be determined representing the parameter at a first checkpoint time. A second data value may be obtained from the simulation. A prediction error may be calculated. Another predicted value may be generated for a next point in time and may be adjusted by the previously determined prediction error (e.g., to increase accuracy of the subsequent prediction). When a third data value is obtained, the adjusted prediction value may be used to generate a difference (e.g., XOR) for storing in a compressed data store to represent the third data value.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: May 25, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Anirban Nag, Naveen Muralimanohar, Paolo Faraboschi
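    A minimal Python sketch of the predict-then-XOR idea described above. The naive predictor and the error-feedback step are assumptions; the point is that the XOR of a good, error-adjusted prediction with the true value is a small residual that a back-end coder can store compactly, and the round trip is lossless.

      import struct

      def bits(x: float) -> int:
          return struct.unpack("<Q", struct.pack("<d", x))[0]

      def from_bits(b: int) -> float:
          return struct.unpack("<d", struct.pack("<Q", b))[0]

      def compress(series):
          out = [bits(series[0])]                   # first value stored verbatim
          prev, prev_err = series[0], 0.0
          for value in series[1:]:
              predicted = prev + prev_err           # naive predictor + error feedback
              out.append(bits(value) ^ bits(predicted))   # residual to be entropy-coded
              prev_err = value - predicted
              prev = value
          return out

      def decompress(residuals):
          first = from_bits(residuals[0])
          series, prev, prev_err = [first], first, 0.0
          for r in residuals[1:]:
              predicted = prev + prev_err           # same deterministic prediction
              value = from_bits(r ^ bits(predicted))
              series.append(value)
              prev_err = value - predicted
              prev = value
          return series

      data = [20.0, 20.5, 21.1, 21.6, 22.0]         # e.g. checkpointed simulation values
      residuals = compress(data)
      assert decompress(residuals) == data           # lossless round trip
      print([hex(r) for r in residuals[1:]])         # residuals with many leading zeros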
  • Patent number: 10942673
    Abstract: In an example, a method includes receiving, in a memory, input data to be processed in a first and a second processing layer. A processing operation of the second layer may be carried out on an output of a processing operation of the first processing layer. The method may further include assigning the input data to be processed according to at least one processing operation of the first layer, which may comprise using a resistive memory array, and buffering output data. It may be determined whether the buffered output data exceeds a threshold data amount to carry out at least one processing operation of the second layer and when it is determined that the buffered output data exceeds the threshold data amount, at least a portion of the buffered output data may be assigned to be processed according to a processing operation of the second layer.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: March 9, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Ali Shafiee Ardestani, Naveen Muralimanohar
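    A toy Python sketch of the buffering rule described above: layer-1 results accumulate in a buffer, and a layer-2 operation is dispatched as soon as the buffer holds the threshold amount of data it needs. The window size and the stand-in "layers" are assumptions.

      LAYER2_WINDOW = 3        # threshold amount of buffered layer-1 output

      def layer1(x):
          return 2 * x          # stands in for a resistive-memory-array operation

      def layer2(window):
          return sum(window)    # consumes a window of layer-1 outputs

      def pipeline(inputs):
          buffer, results = [], []
          for x in inputs:
              buffer.append(layer1(x))                  # assign input to layer 1
              if len(buffer) >= LAYER2_WINDOW:          # threshold reached?
                  results.append(layer2(buffer[:LAYER2_WINDOW]))
                  buffer.pop(0)                         # slide the window forward
          return results

      # Layer 2 starts processing before layer 1 has finished all inputs.
      print(pipeline([1, 2, 3, 4, 5]))   # [12, 18, 24]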
  • Publication number: 20200379858
    Abstract: While scheduled checkpoints are being taken of a cluster of active compute nodes distributively executing an application in parallel, a likelihood of failure of the active compute nodes is periodically and independently predicted. Responsive to the likelihood of failure of a given active compute node exceeding a threshold, the given active compute node is proactively migrated to a spare compute node of the cluster at a next scheduled checkpoint. Another spare compute node of the cluster can perform prediction and migration. Prediction can be based on both hardware events and software events regarding the active compute nodes.
    Type: Application
    Filed: August 17, 2020
    Publication date: December 3, 2020
    Inventors: Cong Xu, Naveen Muralimanohar, Harumi Kuno
  • Publication number: 20200358455
    Abstract: Computer-implemented methods, systems, and devices to perform lossless compression of floating point format time-series data are disclosed. A first data value may be obtained in floating point format representative of an initial time-series parameter. For example, an output checkpoint of a computer simulation of a real-world event such as weather prediction or nuclear reaction simulation. A first predicted value may be determined representing the parameter at a first checkpoint time. A second data value may be obtained from the simulation. A prediction error may be calculated. Another predicted value may be generated for a next point in time and may be adjusted by the previously determined prediction error (e.g., to increase accuracy of the subsequent prediction). When a third data value is obtained, the adjusted prediction value may be used to generate a difference (e.g., XOR) for storing in a compressed data store to represent the third data value.
    Type: Application
    Filed: July 29, 2020
    Publication date: November 12, 2020
    Inventors: Anirban Nag, Naveen Muralimanohar, Paolo Faraboschi
  • Patent number: 10776225
    Abstract: While scheduled checkpoints are being taken of a cluster of active compute nodes distributively executing an application in parallel, a likelihood of failure of the active compute nodes is periodically and independently predicted. Responsive to the likelihood of failure of a given active compute node exceeding a threshold, the given active compute node is proactively migrated to a spare compute node of the cluster at a next scheduled checkpoint. Another spare compute node of the cluster can perform prediction and migration. Prediction can be based on both hardware events and software events regarding the active compute nodes.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: September 15, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Cong Xu, Naveen Muralimanohar, Harumi Kuno
  • Patent number: 10754581
    Abstract: In an example, a method comprises receiving a first matrix of values to be mapped to a resistive memory array, wherein each value in the matrix is to be represented as a resistance of a resistive memory element. An outlying value may be identified in the first matrix. At least one value of a portion of the first matrix containing the outlying value may be substituted with at least one substitute value to form a substituted first matrix.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: August 25, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Naveen Muralimanohar, Ali Shafiee Ardestani
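    An illustrative numpy sketch of the substitution step above. The three-sigma clipping rule and the correction list are assumptions; the abstract only requires that at least one value in the portion containing the outlier be replaced with a substitute value before mapping.

      import numpy as np

      def substitute_outliers(matrix, k=3.0):
          mean, std = matrix.mean(), matrix.std()
          low, high = mean - k * std, mean + k * std
          substituted = np.clip(matrix, low, high)              # substitute values
          corrections = [(idx, matrix[idx] - substituted[idx])   # what was removed
                         for idx in zip(*np.nonzero(matrix != substituted))]
          return substituted, corrections

      rng = np.random.default_rng(0)
      m = rng.normal(0.0, 1.0, size=(4, 4))
      m[2, 3] = 50.0                                # an outlying value
      mapped, corrections = substitute_outliers(m)
      print(corrections)                            # roughly [((2, 3), ...)]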
  • Patent number: 10756756
    Abstract: Computer-implemented methods, systems, and devices to perform lossless compression of floating point format time-series data are disclosed. A first data value may be obtained in floating point format representative of an initial time-series parameter. For example, an output checkpoint of a computer simulation of a real-world event such as weather prediction or nuclear reaction simulation. A first predicted value may be determined representing the parameter at a first checkpoint time. A second data value may be obtained from the simulation. A prediction error may be calculated. Another predicted value may be generated for a next point in time and may be adjusted by the previously determined prediction error (e.g., to increase accuracy of the subsequent prediction). When a third data value is obtained, the adjusted prediction value may be used to generate a difference (e.g., XOR) for storing in a compressed data store to represent the third data value.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: August 25, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Anirban Nag, Naveen Muralimanohar, Paolo Faraboschi
  • Patent number: 10754582
    Abstract: In an example, a method includes receiving input data and dividing the input data into a plurality of data portions, wherein the size of each data portion is based on a significance level. The input data may be assigned to at least one resistive memory array. Assigning the input data to at least one resistive memory array may comprise at least one of (i) assigning at least one data portion of the input data to be represented by a resistive memory array representing a number of bits, wherein the number of bits represented within the resistive memory array is based on the size of the at least one data portion; and (ii) processing each data portion of the input data with at least one resistive memory array.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: August 25, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Naveen Muralimanohar, Ali Shafiee Ardestani, Ben Feinberg
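    A small Python sketch of dividing an input value by significance, under stated assumptions (16-bit values split into a 6-bit high-significance portion and a 10-bit low-significance portion): each portion would be assigned to a resistive memory array representing that many bits.

      HIGH_BITS = 6          # most significant bits: small, high-significance portion
      LOW_BITS = 10          # least significant bits: larger, lower-significance portion

      def split_by_significance(value: int):
          assert 0 <= value < 2**(HIGH_BITS + LOW_BITS)
          return value >> LOW_BITS, value & ((1 << LOW_BITS) - 1)

      def recombine(high: int, low: int) -> int:
          return (high << LOW_BITS) | low

      value = 0b1011_0110_1110_0101                  # a 16-bit input value
      high_portion, low_portion = split_by_significance(value)
      # Each portion would map to a resistive array sized to that many bits.
      assert recombine(high_portion, low_portion) == value
      print(f"high ({HIGH_BITS} bits): {high_portion:06b}, low ({LOW_BITS} bits): {low_portion:010b}")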
  • Patent number: 10698975
    Abstract: Example implementations of the present disclosure relate to in situ transposition of the data values in a memory array. An example system may include a non-volatile memory (NVM) array, including a plurality of NVM elements, usable in performance of computations. The example system may include an input engine to input a plurality of data values for storage by a corresponding plurality of original NVM elements. The example system may further include a transposition engine to direct performance of the in situ transposition such that the plurality of data values remains stored by the original NVM elements.
    Type: Grant
    Filed: January 27, 2016
    Date of Patent: June 30, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Naveen Muralimanohar, Benjamin Feinberg, John Paul Strachan
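    A hedged Python sketch of the in situ idea above: the stored values never move, and "transposition" amounts to driving the same array along the other dimension, so both M·x and Mᵀ·y are available from the original elements. The list-of-lists array and the access methods are stand-ins, not the patented circuit.

      class NVMCrossbar:
          def __init__(self, values):
              self._cells = [row[:] for row in values]   # values stay in place

          def matvec(self, x):
              # Drive word lines (rows), sum along bit lines: y = M . x
              return [sum(c * xi for c, xi in zip(row, x)) for row in self._cells]

          def matvec_transposed(self, y):
              # Drive the array along the other dimension: z = M^T . y,
              # computed in situ without rewriting any cell.
              cols = range(len(self._cells[0]))
              return [sum(self._cells[r][c] * y[r] for r in range(len(self._cells)))
                      for c in cols]

      xbar = NVMCrossbar([[1, 2], [3, 4]])
      print(xbar.matvec([1, 1]))              # [3, 7]  = M . x
      print(xbar.matvec_transposed([1, 1]))   # [4, 6]  = M^T . y, same stored values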
  • Patent number: 10664271
    Abstract: Examples disclosed herein include a dot product engine, which includes a resistive memory array to receive an input vector, perform a dot product operation on the input vector and a stored vector stored in the memory array, and output an analog signal representing a result of the dot product operation. The dot product engine includes a stored negation indicator to indicate whether elements of the stored vector have been negated, and a digital circuit to generate a digital dot product result value based on the analog signal and the stored negation indicator.
    Type: Grant
    Filed: January 30, 2016
    Date of Patent: May 26, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Naveen Muralimanohar, Ali Shafiee Ardestani
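    An illustrative Python sketch of the negation indicator described above. The heuristic for deciding when to store a vector negated is invented; the relevant part is that the digital back end combines the analog result with the stored indicator to recover the true dot product.

      def program_row(stored_vector):
          """Decide whether to store the vector negated; return (weights, negated?)."""
          negate = sum(stored_vector) < 0            # toy heuristic, an assumption
          weights = [-v for v in stored_vector] if negate else list(stored_vector)
          return weights, negate

      def analog_dot(weights, x):
          return sum(w * xi for w, xi in zip(weights, x))   # bitline current analogue

      def digital_result(analog_value, negation_indicator):
          return -analog_value if negation_indicator else analog_value

      stored = [-1.0, -2.0, 1.0]
      weights, indicator = program_row(stored)       # stored negated, indicator=True
      x = [1.0, 2.0, 3.0]
      print(digital_result(analog_dot(weights, x), indicator))   # -2.0 == dot(stored, x)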
  • Publication number: 20200150923
    Abstract: In some examples, memristive dot product circuit based floating point computations may include ascertaining a matrix and a vector including floating point values, and partitioning the matrix into a plurality of sub-matrices according to a size of a plurality of memristive dot product circuits. For each sub-matrix of the plurality of sub-matrices, the floating point values may be converted to fixed point values. Based on the conversion and selected ones of the plurality of memristive dot product circuits, a dot product operation may be performed with respect to a sub-matrix and the vector. Each of the plurality of memristive dot product circuits may include rows including word line voltages corresponding to the floating point values of the vector, conductances corresponding to the floating point values of an associated sub-matrix, and columns that include bitline currents corresponding to dot products of the voltages and conductances.
    Type: Application
    Filed: November 13, 2018
    Publication date: May 14, 2020
    Applicant: Hewlett Packard Enterprise Development LP
    Inventors: Naveen Muralimanohar, Benjamin Feinberg
  • Patent number: 10620861
    Abstract: Techniques for retrieving data blocks from memory devices are provided. In one aspect, a request to retrieve a block of data may be received. The block of data may be in a line in a rank of memory. The rank of memory may include multiple devices. The devices used to store the line in the rank of memory may be determined. The determined devices may be read.
    Type: Grant
    Filed: April 30, 2015
    Date of Patent: April 14, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Rajeev Balasubramonian, Paolo Faraboschi, Gregg B. Lesartre, Naveen Muralimanohar
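    A toy Python sketch of the retrieval path above, with an invented address-to-device mapping: rather than activating every chip in the rank, the controller determines which subset of devices holds the requested line and reads only those.

      DEVICES_PER_RANK = 8
      DEVICES_PER_LINE = 2        # a line is striped over only two devices here

      def devices_for_line(line_address: int):
          """Toy mapping from a line address to the devices storing that line."""
          start = (line_address * DEVICES_PER_LINE) % DEVICES_PER_RANK
          return [(start + i) % DEVICES_PER_RANK for i in range(DEVICES_PER_LINE)]

      def read_block(line_address: int, rank):
          selected = devices_for_line(line_address)        # determine the devices
          return b"".join(rank[d][line_address] for d in selected)   # read only those

      # A rank modeled as 8 devices, each holding a byte slice of every line.
      rank = [{addr: bytes([dev]) * 4 for addr in range(16)}
              for dev in range(DEVICES_PER_RANK)]
      print(devices_for_line(5))          # [2, 3]
      print(read_block(5, rank).hex())    # data from devices 2 and 3 only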
  • Publication number: 20200091930
    Abstract: Computer-implemented methods, systems, and devices to perform lossless compression of floating point format time-series data are disclosed. A first data value may be obtained in floating point format representative of an initial time-series parameter. For example, an output checkpoint of a computer simulation of a real-world event such as weather prediction or nuclear reaction simulation. A first predicted value may be determined representing the parameter at a first checkpoint time. A second data value may be obtained from the simulation. A prediction error may be calculated. Another predicted value may be generated for a next point in time and may be adjusted by the previously determined prediction error (e.g., to increase accuracy of the subsequent prediction). When a third data value is obtained, the adjusted prediction value may be used to generate a difference (e.g., XOR) for storing in a compressed data store to represent the third data value.
    Type: Application
    Filed: September 14, 2018
    Publication date: March 19, 2020
    Inventors: Anirban Nag, Naveen Muralimanohar, Paolo Faraboschi
  • Patent number: 10585602
    Abstract: An example method involves receiving, at a first memory node, data to be written at a memory location in the first memory node. The data is received from a device. At the first memory node, old data is read from the memory location, without sending the old data to the device. The data is written to the memory location. The data and the old data are sent from the first memory node to a second memory node to store parity information in the second memory node without the device determining the parity information. The parity information is based on the data stored in the first memory node.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: March 10, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Doe Hyun Yoon, Naveen Muralimanohar, Jichuan Chang, Parthasarathy Ranganathan
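    A minimal Python sketch of the write path above (the message format and XOR parity update are assumptions): the data node reads the old value locally, applies the write, and forwards the new and old data to the parity node, which updates the parity without the requesting device ever seeing the old data or computing the parity.

      class ParityNode:
          def __init__(self, size):
              self.parity = bytearray(size)

          def update(self, offset, new, old):
              # parity ^= old ^ new, the standard incremental parity update
              for i, (n, o) in enumerate(zip(new, old)):
                  self.parity[offset + i] ^= n ^ o

      class DataNode:
          def __init__(self, size, parity_node):
              self.mem = bytearray(size)
              self.parity_node = parity_node

          def write(self, offset, data: bytes):
              old = bytes(self.mem[offset:offset + len(data)])   # read old data locally
              self.mem[offset:offset + len(data)] = data          # write new data
              # Forward new and old data to the parity node; the writing device
              # neither receives the old data nor computes the parity itself.
              self.parity_node.update(offset, data, old)

      parity = ParityNode(16)
      node = DataNode(16, parity)
      node.write(0, b"\xaa\xbb")
      node.write(0, b"\x11\x22")
      print(parity.parity[:2].hex())   # matches the XOR of the data nodes' contents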