Patents by Inventor Keerthi Kumar
Keerthi Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230017804
Abstract: Disclosed are various embodiments for improving the resiliency and performance of cluster memory. First, a computing device can submit a write request to a byte-addressable chunk of memory stored by a memory host, wherein the byte-addressable chunk of memory is read-only. Then, the computing device can determine that a page-fault occurred in response to the write request. Next, the computing device can copy a page associated with the write request from the byte-addressable chunk of memory to the memory of the computing device. Subsequently, the computing device can free the page from the memory host. Then, the computing device can update a page table entry for the page to refer to a location of the page in the memory of the computing device.
Type: Application
Filed: September 22, 2021
Publication date: January 19, 2023
Inventors: Marcos K. Aguilera, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
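The flow the abstract walks through (write to a read-only remote chunk, take the fault, migrate the page locally, free it on the memory host, repoint the page table entry) can be illustrated with a minimal Python sketch. The MemoryHost/ComputeNode names and the dictionary-based page table are assumptions made for the sketch, not the patent's implementation.
```python
# Minimal sketch of the write-fault handling described in the abstract; illustrative only.

class MemoryHost:
    """Remote host exposing byte-addressable, read-only chunks of memory."""
    def __init__(self):
        self.pages = {}          # page number -> bytes

    def read_page(self, pfn):
        return self.pages[pfn]

    def free_page(self, pfn):
        # Release the remote copy once the compute node owns the page locally.
        del self.pages[pfn]


class ComputeNode:
    def __init__(self, memory_host):
        self.host = memory_host
        self.local_memory = {}   # page number -> bytearray
        # Page table: page number -> ("remote", host) or ("local", None)
        self.page_table = {pfn: ("remote", memory_host) for pfn in memory_host.pages}

    def write(self, pfn, offset, value):
        location, _ = self.page_table[pfn]
        if location == "remote":
            # The remote chunk is read-only, so a write triggers a page fault.
            self._handle_write_fault(pfn)
        self.local_memory[pfn][offset] = value

    def _handle_write_fault(self, pfn):
        # 1. Copy the faulting page from the memory host into local memory.
        self.local_memory[pfn] = bytearray(self.host.read_page(pfn))
        # 2. Free the page on the memory host.
        self.host.free_page(pfn)
        # 3. Update the page table entry to refer to the local copy.
        self.page_table[pfn] = ("local", None)


host = MemoryHost()
host.pages[0] = bytes(4096)
node = ComputeNode(host)
node.write(0, 10, 0xFF)          # faults, migrates the page, then writes locally
assert node.page_table[0][0] == "local"
```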
-
Publication number: 20230012693
Abstract: Disclosed are various embodiments for optimizing hypervisor paging. A hypervisor can save a machine page to a swap device, the machine page comprising data for a physical page of a virtual machine allocated to a virtual page for a process executing within the virtual machine. The hypervisor can then catch a page fault for a subsequent access of the machine page by the virtual machine. Next, the hypervisor can determine that the physical page is currently unallocated by the virtual machine in response to the page fault. Subsequently, the hypervisor can send a command to the swap device to discard the machine page saved to the swap device in response to a determination that the physical page is currently unallocated by the virtual machine.
Type: Application
Filed: October 4, 2021
Publication date: January 19, 2023
Inventors: Marcos K. Aguilera, Dhantu Buragohain, Keerthi Kumar, Pramod Kumar, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian
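As a rough illustration of the decision the abstract describes (discarding a swapped-out machine page once the guest has freed the physical page behind it), here is a toy Python model. The SwapDevice/Hypervisor classes and the way guest allocations are tracked are assumptions for the sketch, not VMware's implementation.
```python
# Toy model of the swap-discard optimization described in the abstract; illustrative only.

class SwapDevice:
    def __init__(self):
        self.slots = {}                     # machine page number -> saved data

    def save(self, mpn, data):
        self.slots[mpn] = data

    def discard(self, mpn):
        # Drop the stale copy instead of ever paging it back in.
        self.slots.pop(mpn, None)

    def restore(self, mpn):
        return self.slots[mpn]


class Hypervisor:
    def __init__(self, swap_device):
        self.swap = swap_device
        self.machine_pages = {}             # machine page number -> data
        self.guest_allocated = set()        # physical pages the guest still uses

    def swap_out(self, mpn):
        self.swap.save(mpn, self.machine_pages.pop(mpn))

    def on_page_fault(self, mpn, backing_ppn):
        """Fault on a swapped-out machine page backing guest physical page backing_ppn."""
        if backing_ppn not in self.guest_allocated:
            # The guest freed the physical page; its old contents are worthless, so
            # tell the swap device to discard the saved copy and hand back a fresh page.
            self.swap.discard(mpn)
            self.machine_pages[mpn] = bytes(4096)
        else:
            self.machine_pages[mpn] = self.swap.restore(mpn)
        return self.machine_pages[mpn]


swap = SwapDevice()
hv = Hypervisor(swap)
hv.machine_pages[7] = b"dirty guest data" + bytes(4080)
hv.swap_out(7)
# The guest later frees the backing physical page, so the fault discards the swap slot.
hv.on_page_fault(mpn=7, backing_ppn=42)
assert 7 not in swap.slots
```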
-
Publication number: 20220387690
Abstract: Various implementations include an arteriovenous graft device. The device includes an outer conduit and an inner conduit. The outer conduit has a first outer conduit end and a second outer conduit end opposite the first outer conduit end. The first outer conduit end defines an outer conduit opening extending from the first outer conduit end to the second outer conduit end. The first outer conduit end is configured to be in fluid communication with an artery and the second outer conduit end is configured to be in fluid communication with a vein. The inner conduit defines an inner conduit opening. The inner conduit is sized to be disposed within the outer conduit opening. The inner conduit is movable between a collapsed position and an expandable position.
Type: Application
Filed: April 25, 2022
Publication date: December 8, 2022
Inventors: Mark Ruegsegger, Khaled Boubes, Sheila Colbert, Gurleen Vilkhu, Jacob Miller, Kellen Biesbrock, Ana Minyayev, Kenzington Kottenbrock, Jordan Rosales, Tasneem Mohammad, Keerthi Kumar
-
Publication number: 20220365855
Abstract: Techniques for implementing RDMA-based recovery of dirty data in remote memory are provided. In one set of embodiments, upon occurrence of a failure at a first (i.e., source) host system, a second (i.e., failover) host system can allocate a new memory region corresponding to a memory region of the source host system and retrieve a baseline copy of the memory region from a storage backend shared by the source and failover host systems. The failover host system can further populate the new memory region with the baseline copy and retrieve one or more dirty page lists for the memory region from the source host system via RDMA, where the one or more dirty page lists identify memory pages in the memory region that include data updates not present in the baseline copy. For each memory page identified in the one or more dirty page lists, the failover host system can then copy the content of that memory page from the memory region of the source host system to the new memory region via RDMA.
Type: Application
Filed: July 28, 2022
Publication date: November 17, 2022
Inventors: Keerthi Kumar, Halesh Sadashiv, Sairam Veeraswamy, Rajesh Venkatasubramanian, Kiran Dikshit, Kiran Tati
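The recovery sequence in the abstract (allocate a region, seed it from the shared baseline, pull the dirty page lists over RDMA, then copy only the dirty pages) can be sketched in Python as below; the same abstract also appears in the granted patent 11436112 that follows. RDMA reads are modeled as ordinary method calls, and the SharedStorage/RdmaLink classes are stand-ins invented for the sketch, not VMware's actual interfaces.
```python
# Illustrative model of the failover-side recovery flow; not the patented implementation.
PAGE_SIZE = 4096

class SharedStorage:
    """Storage backend shared by the source and failover hosts (holds baseline copies)."""
    def __init__(self, baselines):
        self.baselines = baselines                  # region_id -> bytes

    def read_baseline(self, region_id):
        return self.baselines[region_id]

class RdmaLink:
    """Stand-in for one-sided RDMA reads against the failed source host's memory."""
    def __init__(self, source_memory, dirty_lists):
        self.source_memory = source_memory          # region_id -> current bytes
        self.dirty_lists = dirty_lists              # region_id -> dirty page numbers

    def read_dirty_page_lists(self, region_id):
        return self.dirty_lists[region_id]

    def read_page(self, region_id, pfn):
        start = pfn * PAGE_SIZE
        return self.source_memory[region_id][start:start + PAGE_SIZE]

class FailoverHost:
    def __init__(self, shared_storage, rdma_link):
        self.storage = shared_storage
        self.rdma = rdma_link

    def recover_region(self, region_id):
        # 1. Allocate a new region and populate it with the baseline copy.
        region = bytearray(self.storage.read_baseline(region_id))
        # 2. Retrieve the dirty page lists from the source host via RDMA.
        dirty = self.rdma.read_dirty_page_lists(region_id)
        # 3. Copy only the dirty pages' current contents from the source host.
        for pfn in dirty:
            start = pfn * PAGE_SIZE
            region[start:start + PAGE_SIZE] = self.rdma.read_page(region_id, pfn)
        return region

# Two-page region: page 1 was dirtied on the source host after the baseline was taken.
baseline = bytes(2 * PAGE_SIZE)
current = bytes(PAGE_SIZE) + b"\x01" * PAGE_SIZE
failover = FailoverHost(SharedStorage({"r1": baseline}),
                        RdmaLink({"r1": current}, {"r1": [1]}))
recovered = failover.recover_region("r1")
assert recovered[PAGE_SIZE:] == b"\x01" * PAGE_SIZE
```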
-
Patent number: 11436112
Abstract: Techniques for implementing RDMA-based recovery of dirty data in remote memory are provided. In one set of embodiments, upon occurrence of a failure at a first (i.e., source) host system, a second (i.e., failover) host system can allocate a new memory region corresponding to a memory region of the source host system and retrieve a baseline copy of the memory region from a storage backend shared by the source and failover host systems. The failover host system can further populate the new memory region with the baseline copy and retrieve one or more dirty page lists for the memory region from the source host system via RDMA, where the one or more dirty page lists identify memory pages in the memory region that include data updates not present in the baseline copy. For each memory page identified in the one or more dirty page lists, the failover host system can then copy the content of that memory page from the memory region of the source host system to the new memory region via RDMA.
Type: Grant
Filed: May 17, 2021
Date of Patent: September 6, 2022
Assignee: VMware, Inc.
Inventors: Keerthi Kumar, Halesh Sadashiv, Sairam Veeraswamy, Rajesh Venkatasubramanian, Kiran Dikshit, Kiran Tati
-
Patent number: 11403084
Abstract: Performing Splunk code deployment in existing environments has been a challenge for support teams due to the large infrastructure footprint and the number of moving parts. An embodiment of the present invention is directed to an Orchestration Engine to automatically execute the Splunk Deployment releases with reduced downtime and enhanced logging and traceability. This automation will not only help eliminate inefficient and resource-intensive manual processes involved in promoting changes to production, but also carry out validations and reduce human errors, thereby providing a more stable and reliable platform for end users.
Type: Grant
Filed: January 11, 2021
Date of Patent: August 2, 2022
Assignee: JPMORGAN CHASE BANK, N.A.
Inventors: Jijo Vincent, C. G. Jayesh, Ruchir Srivastava, Arut Prakash Thanushkodi Ravindran, Joseph Oddo, Anthony Byers, Mathew Benwell, Keerthi Kumar Gunda
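The abstract names an Orchestration Engine but does not describe its internals, so the following is only a generic Python sketch of a release runner that executes steps with validation and logging, the qualities the abstract emphasizes. Every name and step in it is made up and is not the patented engine.
```python
# Generic release-runner sketch; illustrative only, not the patented Orchestration Engine.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("orchestrator")

class ReleaseStep:
    def __init__(self, name, action, validate):
        self.name = name
        self.action = action        # callable that performs the deployment step
        self.validate = validate    # callable that returns True if the step succeeded

def run_release(steps):
    """Execute deployment steps in order, validating and logging each one."""
    for step in steps:
        log.info("starting step: %s", step.name)
        step.action()
        if not step.validate():
            log.error("validation failed for step: %s; halting release", step.name)
            return False
        log.info("completed step: %s", step.name)
    return True

# Example with a stand-in step:
deployed = {"configs": False}
steps = [
    ReleaseStep("push configs",
                action=lambda: deployed.update(configs=True),
                validate=lambda: deployed["configs"]),
]
run_release(steps)
```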
-
Patent number: 11334380
Abstract: The disclosure provides an approach for creating a pool of memory out of local memories of host machines, and providing that pool for the hosts to use. The pool is managed by a controller that keeps track of memory usage and allocated memory among hosts. The controller allocates or reclaims memory between hosts, as needed by the hosts. Memory allocated from a second host to a first host may then be divided into smaller portions by the first host, and further allocated to virtual machines executing within the first host.
Type: Grant
Filed: November 28, 2019
Date of Patent: May 17, 2022
Assignee: VMware, Inc.
Inventors: Marcos Aguilera, Keerthi Kumar, Pramod Kumar, Arun Ramanathan, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian, Manish Mishra
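A toy Python model of the pooling arrangement in the abstract: a controller tracks free memory on each host, lends it to other hosts, and reclaims it, while the borrowing host carves its grant into smaller portions for local VMs. The class names and the MB-granularity bookkeeping are assumptions made for the sketch, not the patented design.
```python
# Toy sketch of the pooled-memory controller described in the abstract; illustrative only.

class Host:
    def __init__(self, name, local_mb):
        self.name = name
        self.local_mb = local_mb
        self.borrowed_mb = 0                # memory granted to this host from the pool
        self.vm_allocations = {}            # vm name -> MB carved out of borrowed memory

    def allocate_to_vm(self, vm, mb):
        # Subdivide the borrowed grant into smaller portions for local VMs.
        assert mb <= self.borrowed_mb - sum(self.vm_allocations.values())
        self.vm_allocations[vm] = self.vm_allocations.get(vm, 0) + mb


class PoolController:
    """Tracks usage across hosts and moves memory between them on demand."""
    def __init__(self, hosts):
        self.hosts = {h.name: h for h in hosts}
        self.free_mb = {h.name: h.local_mb for h in hosts}   # unlent local memory

    def allocate(self, donor, borrower, mb):
        assert self.free_mb[donor] >= mb
        self.free_mb[donor] -= mb
        self.hosts[borrower].borrowed_mb += mb

    def reclaim(self, donor, borrower, mb):
        self.hosts[borrower].borrowed_mb -= mb
        self.free_mb[donor] += mb


h1, h2 = Host("host-1", 1024), Host("host-2", 1024)
ctrl = PoolController([h1, h2])
ctrl.allocate(donor="host-2", borrower="host-1", mb=256)   # grow host-1 from the pool
h1.allocate_to_vm("vm-a", 128)                             # host-1 carves it up for a VM
```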
-
Publication number: 20210216295
Abstract: Performing Splunk code deployment in existing environments has been a challenge for support teams due to the large infrastructure footprint and the number of moving parts. An embodiment of the present invention is directed to an Orchestration Engine to automatically execute the Splunk Deployment releases with reduced downtime and enhanced logging and traceability. This automation will not only help eliminate inefficient and resource-intensive manual processes involved in promoting changes to production, but also carry out validations and reduce human errors, thereby providing a more stable and reliable platform for end users.
Type: Application
Filed: January 11, 2021
Publication date: July 15, 2021
Inventors: Jijo Vincent, C. G. Jayesh, Ruchir Srivastava, Arut Prakash Thanushko Ravindran, Joseph Oddo, Anthony Byers, Mathew Benwell, Keerthi Kumar Gunda
-
Patent number: 10984484
Abstract: A method of accounting workflow integration includes receiving, by a workflow user interface, a first request from a first worker to generate a project including multiple accounting tasks. The first request includes an assignment of an accounting task in the accounting tasks to a second worker. The method further includes generating the project in response to the request, and providing, by the workflow user interface to the second worker, the accounting task and a deadline to complete the accounting task. The method further includes accounting software of the second worker completing the accounting task, and updating a status of the accounting task in response to completing the accounting task. The method further includes receiving a second request from the first worker, the second request to display a status of the project, and providing, to the first worker in the workflow user interface, an updated status of the project.
Type: Grant
Filed: July 28, 2017
Date of Patent: April 20, 2021
Assignee: Intuit Inc.
Inventors: Priscilla Jane Nidecker, Michael D. Rundle, Thomas Alan Lee, Harpreet Hira, Shailesh Mishra, Harsha Jagadish, Keerthi Kumar Arutla, Mohan Naik, Enrique Barragan, Brad Sinclair
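The request/assign/complete/report cycle in the abstract can be modeled with a few plain data structures. The dataclasses and WorkflowService methods below are illustrative assumptions, not Intuit's actual interfaces, and the project and task names are made-up example values.
```python
# Simplified model of the accounting workflow described in the abstract; illustrative only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AccountingTask:
    name: str
    assignee: str
    deadline: date
    status: str = "open"

@dataclass
class Project:
    name: str
    tasks: list = field(default_factory=list)

    def status(self):
        done = sum(1 for t in self.tasks if t.status == "complete")
        return f"{done}/{len(self.tasks)} tasks complete"


class WorkflowService:
    def __init__(self):
        self.projects = {}

    def create_project(self, requester, name, tasks):
        # First worker's request: generate the project with its task assignments.
        project = Project(name, tasks)
        self.projects[name] = project
        return project

    def tasks_for(self, worker):
        # What the workflow UI shows the second worker: assigned tasks and deadlines.
        return [t for p in self.projects.values() for t in p.tasks if t.assignee == worker]

    def complete_task(self, worker, task_name):
        # Called when the worker's accounting software finishes the task.
        for task in self.tasks_for(worker):
            if task.name == task_name:
                task.status = "complete"

    def project_status(self, name):
        return self.projects[name].status()


svc = WorkflowService()
svc.create_project("alice", "Q3 close",
                   [AccountingTask("reconcile ledger", "bob", date(2025, 10, 15))])
svc.complete_task("bob", "reconcile ledger")
print(svc.project_status("Q3 close"))      # -> 1/1 tasks complete
```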
-
Publication number: 20210019168
Abstract: The disclosure provides an approach for creating a pool of memory out of local memories of host machines, and providing that pool for the hosts to use. The pool is managed by a controller that keeps track of memory usage and allocated memory among hosts. The controller allocates or reclaims memory between hosts, as needed by the hosts. Memory allocated from a second host to a first host may then be divided into smaller portions by the first host, and further allocated to virtual machines executing within the first host.
Type: Application
Filed: November 28, 2019
Publication date: January 21, 2021
Inventors: Marcos Aguilera, Keerthi Kumar, Pramod Kumar, Arun Ramanathan, Pratap Subrahmanyam, Sairam Veeraswamy, Rajesh Venkatasubramanian, Manish Mishra
-
Publication number: 20190030437
Abstract: A computer implemented method comprising: generating, by a first client device executing a game in an off-line mode, a request for a virtual object from a second client device; sending the request for the virtual object, by the first client device, using an off-line communication channel; based on receiving a response including the requested virtual object from a second client device over the off-line communication channel, updating a game state for the game on the first client device to indicate receipt of the virtual object from the second client device, and updating a social network state between a first user associated with the first client device and a second user associated with the second client device.
Type: Application
Filed: July 24, 2018
Publication date: January 31, 2019
Inventors: Muthukaleeeshwaran Subbiah, Sudhaharan Sam, Rajiv Golay, Rongsentemshi Jamir, Bhavna Padmanabhan, Kopparam Rama Keerthi Kumar, Rahul Daga, Anuj Khandelwal
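A rough Python sketch of the exchange the abstract describes: one client requests a virtual object over an off-line channel, the peer responds with it, and the requester updates both its game state and the social-network state between the two users. The shared queue standing in for the off-line channel and all class, user, and object names are assumptions invented for the sketch.
```python
# Rough model of the offline virtual-object exchange described in the abstract.
from collections import deque

class OfflineChannel:
    """Stand-in for a peer-to-peer channel that works without the game server."""
    def __init__(self):
        self.messages = deque()

    def send(self, message):
        self.messages.append(message)

    def receive(self):
        return self.messages.popleft() if self.messages else None


class GameClient:
    def __init__(self, user, channel):
        self.user = user
        self.channel = channel
        self.inventory = set()
        self.game_state = []                # local log of received objects
        self.social_links = {}              # other user -> interaction count

    def request_object(self, obj, from_user):
        # Generate and send the request over the off-line channel.
        self.channel.send({"type": "request", "object": obj,
                           "from": self.user, "to": from_user})

    def serve_request(self):
        msg = self.channel.receive()
        if (msg and msg["type"] == "request" and msg["to"] == self.user
                and msg["object"] in self.inventory):
            self.channel.send({"type": "response", "object": msg["object"],
                               "from": self.user, "to": msg["from"]})

    def handle_response(self):
        msg = self.channel.receive()
        if msg and msg["type"] == "response" and msg["to"] == self.user:
            # Update the game state to record receipt of the virtual object...
            self.inventory.add(msg["object"])
            self.game_state.append(f"received {msg['object']} from {msg['from']}")
            # ...and the social-network state between the two users.
            self.social_links[msg["from"]] = self.social_links.get(msg["from"], 0) + 1


channel = OfflineChannel()
alice, bob = GameClient("alice", channel), GameClient("bob", channel)
bob.inventory.add("gold sword")
alice.request_object("gold sword", from_user="bob")
bob.serve_request()
alice.handle_response()
```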
-
Patent number: 8972648
Abstract: Provided are techniques for allocating logical memory corresponding to a logical partition in a computing system; generating a S/W PFT data structure corresponding to a first page of the logical memory, wherein the S/W PFT data structure comprises a field indicating that the corresponding first page of logical memory is a klock page; transmitting a request for a page of physical memory and the corresponding S/W PFT data structure to a hypervisor; allocating physical memory corresponding to the request; and, in response to a pageout request, paging out available logical memory corresponding to the logical partition that does not indicate that the corresponding page is a klock page prior to paging out the first page.
Type: Grant
Filed: December 13, 2013
Date of Patent: March 3, 2015
Assignee: International Business Machines Corporation
Inventors: Keerthi Kumar, Shailaja Mallya
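The pageout preference at the heart of the abstract, evicting pages whose S/W PFT entry is not flagged as a klock page before touching the klock page itself, is easy to show in miniature. The SwPft and Hypervisor structures below are invented for the sketch and are not IBM's actual data layouts.
```python
# Minimal sketch of the klock-page pageout preference described in the abstract.
from dataclasses import dataclass

@dataclass
class SwPft:
    """Software page-frame-table entry for one logical page."""
    page_id: int
    is_klock: bool = False      # kernel-locked: keep resident as long as possible


class Hypervisor:
    def __init__(self):
        self.resident = []       # (partition, SwPft) pairs backed by physical memory

    def allocate(self, partition, pft):
        # The partition passes its S/W PFT entry along with the allocation request,
        # so the hypervisor knows which pages are klock pages.
        self.resident.append((partition, pft))

    def pageout(self, partition, count):
        """Evict `count` pages, preferring non-klock pages over klock pages."""
        candidates = [e for e in self.resident if e[0] == partition]
        candidates.sort(key=lambda e: e[1].is_klock)   # non-klock pages first
        victims = candidates[:count]
        for entry in victims:
            self.resident.remove(entry)
        return [e[1].page_id for e in victims]


hv = Hypervisor()
hv.allocate("lpar1", SwPft(page_id=1, is_klock=True))
hv.allocate("lpar1", SwPft(page_id=2))
hv.allocate("lpar1", SwPft(page_id=3))
print(hv.pageout("lpar1", 2))    # -> [2, 3]: the klock page 1 is paged out last
```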
-
Publication number: 20140122772
Abstract: Provided are techniques for allocating logical memory corresponding to a logical partition in a computing system; generating a S/W PFT data structure corresponding to a first page of the logical memory, wherein the S/W PFT data structure comprises a field indicating that the corresponding first page of logical memory is a klock page; transmitting a request for a page of physical memory and the corresponding S/W PFT data structure to a hypervisor; allocating physical memory corresponding to the request; and, in response to a pageout request, paging out available logical memory corresponding to the logical partition that does not indicate that the corresponding page is a klock page prior to paging out the first page.
Type: Application
Filed: December 13, 2013
Publication date: May 1, 2014
Applicant: International Business Machines Corporation
Inventors: Keerthi Kumar, Shailaja Mallya