Patents by Inventor Vanish Talwar

Vanish Talwar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10691375
    Abstract: In one example, a memory network may control access to a shared memory that is accessible by multiple compute nodes. The memory network may control the access to the shared memory by receiving a memory access request originating from an application executing on the multiple compute nodes and determining a priority for processing the memory access request. The priority determined by the memory network may correspond to a memory address range in the memory that is specifically used by the application.
    Type: Grant
    Filed: January 30, 2015
    Date of Patent: June 23, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Vanish Talwar, Paolo Faraboschi, Daniel Gmach, Yuan Chen, Al Davis, Adit Madan
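As a rough illustration of the priority scheme described in the entry for patent 10691375 above, the Python sketch below queues memory access requests by a priority looked up from the address range an application has reserved. The range-to-priority table, request fields, and queue structure are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch: a memory network assigns a processing priority to each
# incoming access request based on the address range the request falls in.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class AccessRequest:
    priority: int                 # lower value = served first
    address: int = field(compare=False)
    payload: str = field(compare=False, default="")

class MemoryNetwork:
    def __init__(self, range_priorities):
        # range_priorities: list of (start, end, priority) for address ranges
        # reserved by specific applications.
        self.range_priorities = sorted(range_priorities)
        self.queue = []

    def priority_for(self, address, default=10):
        for start, end, prio in self.range_priorities:
            if start <= address < end:
                return prio
        return default

    def submit(self, address, payload=""):
        heapq.heappush(self.queue, AccessRequest(self.priority_for(address), address, payload))

    def service_next(self):
        return heapq.heappop(self.queue) if self.queue else None

net = MemoryNetwork([(0x0000, 0x4000, 1),    # latency-sensitive application range
                     (0x4000, 0x8000, 5)])   # best-effort application range
net.submit(0x4100, "read")
net.submit(0x0100, "write")
print(net.service_next())   # the 0x0100 request is served first (priority 1)
```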
  • Patent number: 10545909
    Abstract: A system management command is stored in a management partition of a global memory by a first node of a multi-node computing system. The global memory is shared by each node of the multi-node computing system. In response to an indication to access the management partition, the system management command is accessed from the management partition by a second node of the multi-node computing system. The system management command is executed by the second node. Executing the system management command includes managing the second node.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: January 28, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Yuan Chen, Daniel Juergen Gmach, Dejan S. Milojicic, Vanish Talwar, Zhikui Wang
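The command flow in patent 10545909 above can be sketched in a few lines of Python. Here a dictionary stands in for the global memory shared by all nodes, and the command format and the power-cap operation are illustrative assumptions rather than anything specified in the patent.

```python
# Minimal sketch: node 1 writes a system management command into a
# "management partition" of a globally shared memory, and node 2 later
# reads and executes it against itself.
GLOBAL_MEMORY = {"management_partition": []}   # shared by every node

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.power_cap_watts = None

    def post_command(self, command):
        # First node: store the command in the management partition.
        GLOBAL_MEMORY["management_partition"].append(command)

    def drain_commands(self):
        # Second node: on an indication that the partition has work, fetch
        # and execute commands, which manage this node itself.
        while GLOBAL_MEMORY["management_partition"]:
            command = GLOBAL_MEMORY["management_partition"].pop(0)
            self.execute(command)

    def execute(self, command):
        if command["op"] == "set_power_cap":
            self.power_cap_watts = command["watts"]
            print(f"node {self.node_id}: power cap set to {self.power_cap_watts} W")

node1, node2 = Node(1), Node(2)
node1.post_command({"op": "set_power_cap", "watts": 250})
node2.drain_commands()   # node 2 executes the command and manages itself
```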
  • Patent number: 10417052
    Abstract: According to an example, an instruction to run a kernel of an application on an apparatus having a first processing unit integrated with a second processing unit may be received. In addition, an application profile for the application at a runtime of the application kernel on the second processing unit may be created, in which the application profile identifies an affinity of the application kernel to be run on either the first processing unit or the second processing unit, and identifies a characterization of an input data set of the application. The application profile may also be stored in a data store.
    Type: Grant
    Filed: October 31, 2014
    Date of Patent: September 17, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Yuan Chen, Vanish Talwar, Naila Farooqui, Indrajit Roy
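The application profile described in patent 10417052 above can be illustrated with a small Python sketch. The profile fields, the dictionary standing in for the data store, and the scheduling helper are assumptions made for illustration only.

```python
# Minimal sketch: an application profile built at runtime records which
# processing unit (e.g. CPU vs. integrated GPU) the kernel is best suited to
# and a characterization of the input data set, then is stored for later use.
from dataclasses import dataclass, asdict

@dataclass
class ApplicationProfile:
    app_name: str
    kernel: str
    affinity: str              # "cpu" or "gpu": the preferred processing unit
    input_size_mb: float       # characterization of the input data set
    input_regularity: str      # e.g. "dense", "sparse"

PROFILE_STORE = {}             # stands in for the persistent data store

def record_profile(profile: ApplicationProfile):
    PROFILE_STORE[(profile.app_name, profile.kernel)] = asdict(profile)

def choose_unit(app_name, kernel, default="cpu"):
    # Later runs consult the stored profile to pick a processing unit.
    entry = PROFILE_STORE.get((app_name, kernel))
    return entry["affinity"] if entry else default

record_profile(ApplicationProfile("imgpipe", "convolve", "gpu", 512.0, "dense"))
print(choose_unit("imgpipe", "convolve"))   # -> "gpu"
```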
  • Publication number: 20190213096
    Abstract: A system comprises a plurality of functional units powered via a power source. The system further comprises a first functional unit and a second functional unit, wherein the second functional unit is to promote the first functional unit to a management unit based on a management requirement of the system. The management unit is to administrate operations of the system. Once the first functional unit is promoted, the management unit is isolated, via a virtual network path and a power management unit, from the functional units that were not promoted.
    Type: Application
    Filed: March 14, 2019
    Publication date: July 11, 2019
    Inventors: Dejan S. Milojicic, Yuan Chen, Daniel J. Gmach, Vanish Talwar, Zhikui Wang
  • Patent number: 10261882
    Abstract: A system comprises a plurality of functional units powered via a power source. The system further comprises a first functional unit and a second functional unit, wherein the second functional unit is to promote the first functional unit to a management unit based on a management requirement of the system. The management unit is to administrate operations of the system. Once the first functional unit is promoted, the management unit is isolated, via a virtual network path and a power management unit, from the functional units that were not promoted.
    Type: Grant
    Filed: January 31, 2014
    Date of Patent: April 16, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Dejan S. Milojicic, Yuan Chen, Daniel J. Gmach, Vanish Talwar, Zhikui Wang
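The promotion step shared by publication 20190213096 and patent 10261882 above can be sketched as follows. The isolation is modeled crudely by moving the promoted unit onto a separate virtual network; the class, field names, and the "management_vlan" label are illustrative assumptions.

```python
# Minimal sketch: one functional unit promotes another to "management unit"
# when the system needs one, and the promoted unit is then isolated from the
# unpromoted units.
class FunctionalUnit:
    def __init__(self, name):
        self.name = name
        self.role = "functional"
        self.network = "shared"

    def promote(self, other, management_needed):
        # The "second functional unit" promotes the "first" based on a
        # management requirement of the system.
        if management_needed and other.role == "functional":
            other.role = "management"
            other.network = "management_vlan"   # isolation via a virtual network path
            return True
        return False

units = [FunctionalUnit(f"fu{i}") for i in range(4)]
units[1].promote(units[0], management_needed=True)
print(units[0].role, units[0].network)          # management management_vlan
print([u.network for u in units[1:]])           # unpromoted units stay on "shared"
```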
  • Publication number: 20180300065
    Abstract: Performance of a computing system is improved by identifying a bottleneck along a path that spans a storage system and a virtual machine, identifying the virtual machine causing the bottleneck, and mitigating it. A mitigation action is selected and performed according to the bottleneck location. To identify a virtual machine involved in the bottleneck, end-to-end latency values associated with individual virtual machines are used, some of which are estimated using the presently disclosed techniques. Specifically, the backend storage latency attributable to a specific virtual machine, and the flash virtualization platform, network, and queuing latencies for that virtual machine, are not conventionally observable; they are instead estimated from other readily available usage statistics.
    Type: Application
    Filed: April 16, 2017
    Publication date: October 18, 2018
    Inventors: Vanish Talwar, Gokul Nadathur
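Publication 20180300065 above turns on composing an end-to-end latency from observable pieces plus estimated ones. The Python sketch below illustrates that kind of bookkeeping; the M/M/1-style queuing estimate and the component breakdown are assumptions chosen for illustration, not the estimation rules disclosed in the application.

```python
# Minimal sketch: decompose a VM's end-to-end latency into observed components
# plus components estimated from readily available statistics.
def estimate_queuing_latency_ms(service_time_ms, utilization):
    # Simple M/M/1-style estimate: waiting time grows as utilization -> 1.
    utilization = min(utilization, 0.99)
    return service_time_ms * utilization / (1.0 - utilization)

def end_to_end_latency_ms(observed_frontend_ms, backend_service_ms,
                          backend_utilization, network_rtt_ms):
    queuing_ms = estimate_queuing_latency_ms(backend_service_ms, backend_utilization)
    total = observed_frontend_ms + backend_service_ms + queuing_ms + network_rtt_ms
    return total, queuing_ms

total, queuing = end_to_end_latency_ms(observed_frontend_ms=1.2,
                                       backend_service_ms=0.8,
                                       backend_utilization=0.9,
                                       network_rtt_ms=0.3)
print(f"estimated end-to-end: {total:.2f} ms (queuing share {queuing:.2f} ms)")
# A VM whose estimated end-to-end latency dominates the others is flagged as
# the bottleneck, and a mitigation action is chosen for that location.
```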
  • Publication number: 20180293023
    Abstract: Performance of a computing system is improved by identifying storage clients that generate relatively large workloads and mitigating the overall impact of the identified storage clients. Latency is monitored for different write and read I/O request block sizes. When the latency for any request block size rises above a predefined threshold for that block size, diagnostics are performed to identify a storage client that generated an excessive workload, or workloads comprising large blocks, that caused the increased latency. A mitigation action can then be performed.
    Type: Application
    Filed: April 6, 2017
    Publication date: October 11, 2018
    Inventors: Vanish Talwar, Gokul Nadathur
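The monitoring loop in publication 20180293023 above is easy to sketch: track latency per I/O block size, flag block sizes that exceed their thresholds, and trace the offending traffic back to a client. The threshold values, sample data, and "largest traffic at the offending sizes" heuristic below are illustrative assumptions.

```python
# Minimal sketch: per-block-size latency thresholds plus a simple diagnosis
# step that names the storage client to mitigate.
LATENCY_THRESHOLDS_MS = {4: 1.0, 64: 2.0, 1024: 8.0}   # per block size (KiB)

def find_violations(latency_samples_ms):
    # latency_samples_ms: {block_size_kib: observed latency in ms}
    return [size for size, latency in latency_samples_ms.items()
            if latency > LATENCY_THRESHOLDS_MS.get(size, float("inf"))]

def diagnose(per_client_io, offending_sizes):
    # per_client_io: {client: {block_size_kib: bytes issued}}; pick the client
    # issuing the most traffic at the offending block sizes.
    def traffic(client):
        return sum(per_client_io[client].get(size, 0) for size in offending_sizes)
    return max(per_client_io, key=traffic) if offending_sizes else None

violations = find_violations({4: 0.7, 1024: 11.5})
culprit = diagnose({"vm-a": {1024: 900_000_000}, "vm-b": {4: 40_000_000}}, violations)
print(violations, culprit)   # [1024] vm-a -> apply a mitigation action (e.g. throttle)
```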
  • Patent number: 10062137
    Abstract: The communication between integrated graphics processing units (GPUs) is disclosed. A first integrated GPU of a first computing device obtains a tuple pertaining to data to be transmitted to a second integrated GPU of a second computing device. The tuple comprises at least a length of the data. The first integrated GPU allocates a virtual address space to the data based on the length of the data, where the virtual address space has a plurality of virtual addresses. Further, a mapping table of a mapping between the plurality of virtual addresses and a plurality of bus addresses is provided by the first integrated GPU to a communication module of the first computing device to transmit the data, where the plurality of bus addresses indicate physical locations of the data.
    Type: Grant
    Filed: February 27, 2014
    Date of Patent: August 28, 2018
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Indrajit Roy, Sangman Kim, Vanish Talwar
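The send path in patent 10062137 above can be illustrated in Python. The page size, the stand-in bus addresses, and the function names are assumptions; the point of the sketch is only the sequence of steps: read the length from the tuple, allocate a virtual address range for it, and hand a virtual-to-bus-address mapping table to the communication module.

```python
# Minimal sketch: length-driven virtual address allocation and the mapping
# table handed to the communication module for transmission.
PAGE_SIZE = 4096

def allocate_virtual_range(next_free_va, length):
    pages = (length + PAGE_SIZE - 1) // PAGE_SIZE
    return [next_free_va + i * PAGE_SIZE for i in range(pages)]

def build_mapping_table(virtual_addresses, physical_pages):
    # One bus address per virtual page; the bus addresses here are made up.
    return dict(zip(virtual_addresses, physical_pages))

def send(tuple_desc, next_free_va=0x7000_0000):
    length = tuple_desc["length"]
    vas = allocate_virtual_range(next_free_va, length)
    bus = [0x9000_0000 + i * PAGE_SIZE for i in range(len(vas))]   # stand-in bus addresses
    mapping = build_mapping_table(vas, bus)
    # The communication module would walk this table to fetch the data from
    # its physical locations and transmit it.
    return mapping

print(send({"dest": "gpu-node-2", "length": 10_000}))   # 3 pages mapped
```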
  • Patent number: 9971548
    Abstract: Performance of a computing system is improved by avoiding and/or eliminating overload conditions in storage systems. Performance utilization calculations are used to predict that a candidate placement configuration of storage resources within the storage systems will avoid overload conditions. The performance utilization calculations use performance profiles in conjunction with workload profiles to account for both storage system performance and actual workload traffic. Performance utilization calculations can also be used to report storage controller utilization information.
    Type: Grant
    Filed: March 6, 2017
    Date of Patent: May 15, 2018
    Assignee: Nutanix, Inc.
    Inventors: Vanish Talwar, Gokul Nadathur
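The performance utilization check in patent 9971548 above can be sketched as combining a performance profile (what a controller can sustain) with workload profiles (what the placed volumes demand). The profile fields and the 80% overload threshold below are illustrative assumptions, not values from the patent.

```python
# Minimal sketch: accept a candidate placement only if every storage
# controller's predicted utilization stays below an overload threshold.
OVERLOAD_THRESHOLD = 0.8

def controller_utilization(perf_profile, workloads):
    # perf_profile: {"max_iops": ..., "max_mbps": ...}
    # workloads: list of {"iops": ..., "mbps": ...} for volumes placed on it
    iops = sum(w["iops"] for w in workloads) / perf_profile["max_iops"]
    mbps = sum(w["mbps"] for w in workloads) / perf_profile["max_mbps"]
    return max(iops, mbps)        # the tighter of the two dimensions

def placement_avoids_overload(placement, profiles):
    # placement: {controller: [workload, ...]}
    return all(controller_utilization(profiles[c], ws) < OVERLOAD_THRESHOLD
               for c, ws in placement.items())

profiles = {"ctrl-1": {"max_iops": 50_000, "max_mbps": 1_000}}
placement = {"ctrl-1": [{"iops": 20_000, "mbps": 300}, {"iops": 15_000, "mbps": 200}]}
print(placement_avoids_overload(placement, profiles))   # True -> 70% utilization
```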
  • Publication number: 20180004456
    Abstract: In one example, a memory network may control access to a shared memory that is accessible by multiple compute nodes. The memory network may control the access to the shared memory by receiving a memory access request originating from an application executing on the multiple compute nodes and determining a priority for processing the memory access request. The priority determined by the memory network may correspond to a memory address range in the memory that is specifically used by the application.
    Type: Application
    Filed: January 30, 2015
    Publication date: January 4, 2018
    Inventors: Vanish Talwar, Paolo Faraboschi, Daniel Gmach, Yuan Chen, Al Davis, Adit Madan
  • Publication number: 20170315847
    Abstract: According to an example, an instruction to run a kernel of an application on an apparatus having a first processing unit integrated with a second processing unit may be received. In addition, an application profile for the application at a runtime of the application kernel on the second processing unit may be created, in which the application profile identifies an affinity of the application kernel to be run on either the first processing unit or the second processing unit, and identifies a characterization of an input data set of the application. The application profile may also be stored in a data store.
    Type: Application
    Filed: October 31, 2014
    Publication date: November 2, 2017
    Inventors: Yuan Chen, Vanish Talwar, Naila Farooqui, Indrajit Roy
  • Patent number: 9619430
    Abstract: A computing node includes an active Non-Volatile Random Access Memory (NVRAM) component which includes memory and a sub-processor component. The memory is to store data chunks received from a processor core, the data chunks comprising metadata indicating a type of post-processing to be performed on data within the data chunks. The sub-processor component is to perform post-processing of said data chunks based on said metadata.
    Type: Grant
    Filed: February 24, 2012
    Date of Patent: April 11, 2017
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Sudarsun Kannan, Dejan S. Milojicic, Vanish Talwar
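The active-NVRAM flow in patent 9619430 above is summarized in the Python sketch below: chunks carry metadata naming their post-processing, and the memory-side sub-processor applies it. The two post-processing operations and the chunk layout are illustrative assumptions.

```python
# Minimal sketch: a processor core stores chunks with metadata naming a
# post-processing step; the NVRAM's sub-processor later runs that step.
POST_PROCESSORS = {
    "checksum": lambda data: sum(data) & 0xFFFFFFFF,
    "compress_rle": lambda data: [(b, data.count(b)) for b in sorted(set(data))],
}

class ActiveNVRAM:
    def __init__(self):
        self.chunks = []                    # persistent region (simplified)

    def store(self, data, post_process):
        # Core side: the chunk's metadata names its post-processing.
        self.chunks.append({"data": bytearray(data), "post_process": post_process,
                            "result": None})

    def run_subprocessor(self):
        # Sub-processor side: apply the operation named by each chunk's metadata.
        for chunk in self.chunks:
            op = POST_PROCESSORS[chunk["post_process"]]
            chunk["result"] = op(chunk["data"])

nvram = ActiveNVRAM()
nvram.store(b"aaabbbccc", "compress_rle")
nvram.store(b"\x01\x02\x03", "checksum")
nvram.run_subprocessor()
print([c["result"] for c in nvram.chunks])
```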
  • Patent number: 9600791
    Abstract: Managing a network system includes determining metrics for a plurality of nodes in the network system, determining a plurality of zones including the plurality of nodes based on the metrics for the network system, and, for each zone of the plurality of zones, determining a computational architecture to be implemented for the zone based on the metrics for each node of the plurality of nodes in the zone.
    Type: Grant
    Filed: June 7, 2010
    Date of Patent: March 21, 2017
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Vanish Talwar, Susanta Adhikary, Jeffrey R. Hilland, Kannan Vidhya, V Prashanth, KS Sandeep
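Patent 9600791 above describes two steps: forming zones from node metrics and picking a management architecture per zone. The sketch below illustrates that shape; grouping by rack and the centralized/hierarchical cutoff are assumptions made purely for illustration.

```python
# Minimal sketch: zone nodes by a metric, then choose a computational
# architecture for each zone from its nodes' metrics.
def make_zones(nodes):
    zones = {}
    for node in nodes:
        zones.setdefault(node["rack"], []).append(node)   # one zone per rack
    return zones

def choose_architecture(zone_nodes):
    avg_load = sum(n["cpu_load"] for n in zone_nodes) / len(zone_nodes)
    # Lightly loaded, small zones can afford a centralized manager; busy zones
    # get a hierarchical architecture so no single node is overloaded.
    return "centralized" if avg_load < 0.5 and len(zone_nodes) <= 16 else "hierarchical"

nodes = [{"name": "n1", "rack": "r1", "cpu_load": 0.2},
         {"name": "n2", "rack": "r1", "cpu_load": 0.3},
         {"name": "n3", "rack": "r2", "cpu_load": 0.9}]
for zone, members in make_zones(nodes).items():
    print(zone, choose_architecture(members))   # r1 centralized, r2 hierarchical
```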
  • Publication number: 20170046240
    Abstract: A system comprises a plurality of functional units powered via a power source. The system further comprises a first functional unit and a second functional unit, wherein the second functional unit is to promote the first functional unit to a management unit based on a management requirement of the system. The management unit is to administrate operations of the system. Once the first functional unit is promoted, the management unit is isolated, via a virtual network path and a power management unit, from the functional units that were not promoted.
    Type: Application
    Filed: January 31, 2014
    Publication date: February 16, 2017
    Inventors: Dejan S. Milojicic, Yuan Chen, Daniel J. Gmach, Vanish Talwar, Zhikui Wang
  • Publication number: 20170046304
    Abstract: A system management command is stored in a management partition of a global memory by a first node of a multi-node computing system. The global memory is shared by each node of the multi-node computing system. In response to an indication to access the management partition, the system management command is accessed from the management partition by a second node of the multi-node computing system. The system management command is executed by the second node. Executing the system management command includes managing the second node.
    Type: Application
    Filed: April 29, 2014
    Publication date: February 16, 2017
    Inventors: Yuan Chen, Daniel Juergen Gmach, Dejan S. Milojicic, Vanish Talwar, Zhikui Wang
  • Publication number: 20170018050
    Abstract: The communication between integrated graphics processing units (GPUs) is disclosed. A first integrated GPU of a first computing device obtains a tuple pertaining to data to be transmitted to a second integrated GPU of a second computing device. The tuple comprises at least a length of the data. The first integrated GPU allocates a virtual address space to the data based on the length of the data, where the virtual address space has a plurality of virtual addresses. Further, a mapping table of a mapping between the plurality of virtual addresses and a plurality of bus addresses is provided by the first integrated GPU to a communication module of the first computing device to transmit the data, where the plurality of bus addresses indicate physical locations of the data.
    Type: Application
    Filed: February 27, 2014
    Publication date: January 19, 2017
    Inventors: Indrajit Roy, Sangman Kim, Vanish Talwar
  • Publication number: 20170010915
    Abstract: Examples for performing processing tasks using an auxiliary processing unit are described. In an example, a computing system may include a processor to perform a plurality of processing tasks for each of a plurality of applications hosted by the computing system. An auxiliary processing task may be determined for an active application from the plurality of applications. The auxiliary processing task may be from among the plurality of processing tasks performed for the active application. Further, the processing code corresponding to the auxiliary processing task may be provided to the auxiliary processing unit of the computing system. The auxiliary processing unit may execute the processing code to perform the corresponding auxiliary processing task and share a processing load with the processor.
    Type: Application
    Filed: January 31, 2014
    Publication date: January 12, 2017
    Inventors: Indrajit Roy, Vanish Talwar, Pravin Bhanudas Shinde
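The offload described in publication 20170010915 above can be roughed out in Python. A worker thread stands in for the auxiliary processing unit, and the "offload the costliest task" selection rule is an illustrative assumption.

```python
# Minimal sketch: pick one of the active application's tasks as the auxiliary
# task and run it on an auxiliary unit while the processor handles the rest.
import threading

def pick_auxiliary_task(tasks):
    # tasks: {name: (callable, estimated_cost)}; offload the costliest one.
    return max(tasks, key=lambda name: tasks[name][1])

def run_with_auxiliary_unit(tasks):
    aux_name = pick_auxiliary_task(tasks)
    aux_fn = tasks[aux_name][0]
    worker = threading.Thread(target=aux_fn)    # stands in for the auxiliary unit
    worker.start()                              # auxiliary task runs concurrently
    for name, (fn, _) in tasks.items():
        if name != aux_name:
            fn()                                # remaining tasks stay on the processor
    worker.join()

tasks = {"parse": (lambda: print("parse on CPU"), 1),
         "index": (lambda: print("index on auxiliary unit"), 5)}
run_with_auxiliary_unit(tasks)
```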
  • Patent number: 9395786
    Abstract: A method for cross-layer power management in a multi-layer system includes determining whether there is a service level violation for an application running on a hardware platform. Power consumption of the hardware platform is controlled in response to the service level violation.
    Type: Grant
    Filed: October 9, 2008
    Date of Patent: July 19, 2016
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Vanish Talwar, Jeffrey S. Autor, Sanjay Kumar, Parthasarathy Ranganathan
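Patent 9395786 above describes a simple control loop: detect a service level violation for the application, then adjust the hardware platform's power consumption. The sketch below shows that loop; the latency target, the discrete power states, and the step-up/step-down policy are illustrative assumptions.

```python
# Minimal sketch: raise the platform's power state while the service level is
# violated, and back off to save power once the target is met.
POWER_STATES = ["low", "medium", "high"]      # increasing power / performance

def service_level_violated(observed_latency_ms, target_latency_ms):
    return observed_latency_ms > target_latency_ms

def control_power(current_state, observed_latency_ms, target_latency_ms):
    i = POWER_STATES.index(current_state)
    if service_level_violated(observed_latency_ms, target_latency_ms):
        i = min(i + 1, len(POWER_STATES) - 1)   # raise power to recover the target
    else:
        i = max(i - 1, 0)                       # otherwise save power
    return POWER_STATES[i]

state = "low"
for latency in [12.0, 11.0, 7.0]:             # observed latencies vs. a 10 ms target
    state = control_power(state, latency, target_latency_ms=10.0)
    print(latency, "->", state)               # low -> medium -> high -> medium
```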
  • Publication number: 20160117196
    Abstract: Log analysis can include transferring compiled log analysis code, executing the log analysis code, and performing a log analysis using the executed log analysis code.
    Type: Application
    Filed: July 31, 2013
    Publication date: April 28, 2016
    Inventors: Vanish Talwar, Indrajit Roy, Kevin T. Lim, Jichuan Chang, Parthasarathy Ranganathan
  • Publication number: 20160034528
    Abstract: A technique includes receiving a user input in an array-oriented database, where the user input indicates a database operation, and processing a plurality of chunks of data stored by the database to perform the operation. The processing includes selectively distributing the processing of the plurality of chunks between a first group of at least one central processing unit and a second group of at least one co-processor.
    Type: Application
    Filed: March 15, 2013
    Publication date: February 4, 2016
    Inventors: Indrajit Roy, Feng Liu, Vanish Talwar, Shimin Chen, Jichuan Chang, Parthasarathy Ranganathan
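The chunk distribution in publication 20160034528 above is illustrated by the Python sketch below. The routing rule (send large chunks to the co-processor group) is an assumption chosen only to show the shape of the split, not the policy disclosed in the application.

```python
# Minimal sketch: split the chunks touched by a database operation between a
# group of CPUs and a group of co-processors.
def distribute_chunks(chunks, coprocessor_min_rows=100_000):
    cpu_work, coprocessor_work = [], []
    for chunk in chunks:
        target = coprocessor_work if chunk["rows"] >= coprocessor_min_rows else cpu_work
        target.append(chunk["id"])
    return cpu_work, coprocessor_work

chunks = [{"id": 0, "rows": 10_000}, {"id": 1, "rows": 500_000}, {"id": 2, "rows": 250_000}]
cpu, coproc = distribute_chunks(chunks)
print("CPU group:", cpu, "co-processor group:", coproc)   # CPU: [0], co-processor: [1, 2]
```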