Patents by Inventor Johnu George

Johnu George has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11847500
    Abstract: A method can include receiving, at a workflow controller, a machine learning workflow, the machine learning workflow associated with a first task and a second task. The first task is training a machine learning model and the second task is deploying the model. The method can include segmenting, by the workflow controller, the machine learning workflow into a first sub-workflow associated with the first task and a second sub-workflow associated with the second task, assigning a first workflow agent to the first sub-workflow and assigning a second workflow agent to the second sub-workflow, selecting, by the first workflow agent and based on first resources needed to perform the first task, a first cluster for performing the first task and selecting, by the second workflow agent and based on second resources needed to perform the second task, a second cluster for performing the second task.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: December 19, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Johnu George, Sourav Chakraborty, Amit Kumar Saha, Debojyoti Dutta, Xinyuan Huang, Adhita Selvaraj
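    The entry above describes segmenting a machine learning workflow into per-task sub-workflows, assigning an agent to each, and letting each agent pick a cluster that can satisfy its task's resource needs. Below is a minimal, hypothetical sketch of that idea; the class names, fields, and first-fit selection rule are assumptions for illustration, not the patented implementation.
    ```python
    # Hypothetical sketch of the workflow segmentation in patent 11847500.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str   # e.g. "train" or "deploy"
        cpus: int   # resources the task needs
        gpus: int

    @dataclass
    class Cluster:
        name: str
        free_cpus: int
        free_gpus: int

    class WorkflowAgent:
        """One agent per sub-workflow; picks a cluster that can satisfy the task's resources."""
        def __init__(self, task: Task):
            self.task = task

        def select_cluster(self, clusters: list[Cluster]) -> Cluster:
            for c in clusters:
                if c.free_cpus >= self.task.cpus and c.free_gpus >= self.task.gpus:
                    return c
            raise RuntimeError(f"no cluster can run {self.task.name}")

    class WorkflowController:
        """Segments a workflow into per-task sub-workflows and assigns an agent to each."""
        def segment(self, workflow: list[Task]) -> list[WorkflowAgent]:
            return [WorkflowAgent(task) for task in workflow]

    # Usage: the training task and the deployment task can land on different clusters.
    clusters = [Cluster("cpu-cluster", free_cpus=64, free_gpus=0),
                Cluster("gpu-cluster", free_cpus=32, free_gpus=4)]
    workflow = [Task("train", cpus=16, gpus=2), Task("deploy", cpus=4, gpus=0)]
    for agent in WorkflowController().segment(workflow):
        print(agent.task.name, "->", agent.select_cluster(clusters).name)
    ```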
  • Publication number: 20230385301
    Abstract: An illustrative embodiment disclosed herein is a computer-implemented method. In some embodiments, the method includes uploading, by a processor, an object to a source bucket in an object store and creating, by the processor, a lambda bucket in the object store that is symlinked to the source bucket. In some embodiments, the lambda bucket is associated with a transformation function. In some embodiments, the method includes associating, by the processor, a lambda function with the object in the source bucket, receiving, by the processor, a request to download the object from the lambda bucket, detecting, by the processor, that the object is in the source bucket and associated with the lambda function, fetching, by the processor, the object from the source bucket, generating, by the processor, a transformed object by invoking the lambda function and the transformation function on the object, and downloading, by the processor, the transformed object.
    Type: Application
    Filed: August 23, 2022
    Publication date: November 30, 2023
    Applicant: Nutanix, Inc.
    Inventors: Johnu George, Manik Taneja, Manosiz Bhattacharyya, Naveen Reddy Gundlagutta
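    The abstract above (publication 20230385301) describes a "lambda bucket" that is symlinked to a source bucket and transforms objects on download. The toy, in-memory sketch below mirrors that flow under stated assumptions: the ObjectStore class and its method names are invented for illustration and are not the API of any real object store.
    ```python
    # Toy illustration of the lambda-bucket download path in publication 20230385301.
    class ObjectStore:
        def __init__(self):
            self.buckets = {}         # bucket name -> {object key: bytes}
            self.symlinks = {}        # lambda bucket -> (source bucket, transformation fn)
            self.object_lambdas = {}  # (bucket, key) -> per-object lambda fn

        def upload(self, bucket, key, data):
            self.buckets.setdefault(bucket, {})[key] = data

        def create_lambda_bucket(self, lambda_bucket, source_bucket, transformation):
            # The lambda bucket stores no data of its own; it is "symlinked" to the source.
            self.symlinks[lambda_bucket] = (source_bucket, transformation)

        def associate_lambda(self, bucket, key, fn):
            self.object_lambdas[(bucket, key)] = fn

        def download(self, bucket, key):
            if bucket in self.symlinks:
                source, transformation = self.symlinks[bucket]
                data = self.buckets[source][key]                # fetch from the source bucket
                per_object = self.object_lambdas.get((source, key), lambda d: d)
                return transformation(per_object(data))         # transform before returning
            return self.buckets[bucket][key]

    store = ObjectStore()
    store.upload("source", "report.txt", b"hello world")
    store.create_lambda_bucket("source-upper", "source", lambda d: d.upper())
    store.associate_lambda("source", "report.txt", lambda d: d + b"!")
    print(store.download("source-upper", "report.txt"))  # b'HELLO WORLD!'
    ```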
  • Publication number: 20230384958
    Abstract: An illustrative embodiment disclosed herein is an apparatus including a processor and a memory. In some embodiments, the memory includes programmed instructions that, when executed by the processor, cause the apparatus to upload an object to a source bucket in an object store and create a lambda bucket in the object store that is symlinked to the source bucket. In some embodiments, the lambda bucket is associated with a predefined transformation. In some embodiments, the memory includes the programmed instructions that, when executed by the processor, cause the apparatus to receive a request to download the object from the lambda bucket, detect that the object is in the source bucket, fetch the object from the source bucket, transform the object, by compute resources of the object store, using the predefined transformation, and download the transformed object.
    Type: Application
    Filed: July 25, 2022
    Publication date: November 30, 2023
    Applicant: Nutanix, Inc.
    Inventors: Johnu George, Manik Taneja, Naveen Reddy Gundlagutta, Nikhil Mundra, Satyendra Singh Naruka, Sirvisetti Venkat Sri Sai Ram
  • Patent number: 11816125
    Abstract: An illustrative embodiment disclosed herein is a computer-implemented method. In some embodiments, the method includes uploading, by a processor, an object to a source bucket in an object store and creating, by the processor, a lambda bucket in the object store that is symlinked to the source bucket. In some embodiments, the lambda bucket is associated with a transformation function. In some embodiments, the method includes associating, by the processor, a lambda function with the object in the source bucket, receiving, by the processor, a request to download the object from the lambda bucket, detecting, by the processor, that the object is in the source bucket and associated with the lambda function, fetching, by the processor, the object from the source bucket, generating, by the processor, a transformed object by invoking the lambda function and the transformation function on the object, and downloading, by the processor, the transformed object.
    Type: Grant
    Filed: August 23, 2022
    Date of Patent: November 14, 2023
    Assignee: Nutanix, Inc.
    Inventors: Johnu George, Manik Taneja, Manosiz Bhattacharyya, Naveen Reddy Gundlagutta
  • Publication number: 20230156083
    Abstract: An illustrative embodiment disclosed herein is an apparatus including a processor having programmed instructions to place a first compute resource in a storage node of an object storage platform and to place a second compute resource in a compute node in a client coupled to the object storage platform via a public network. In some embodiments, unstructured data is stored in the storage node. In some embodiments, the first compute resource of the storage node preprocesses the unstructured data. In some embodiments, the preprocessed unstructured data is sent to the compute node. In some embodiments, the second compute resource trains a machine learning (ML) model using the preprocessed unstructured data.
    Type: Application
    Filed: November 4, 2022
    Publication date: May 18, 2023
    Applicant: Nutanix, Inc.
    Inventors: Debojyoti Dutta, Johnu George, Manosiz Bhattacharyya, Roger Liao
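    The entry above splits work between a compute resource placed in the storage node (preprocessing unstructured data) and one placed in the client's compute node (training the model). The sketch below is a rough stand-in for that split, assuming trivial tokenization and a word-count "model"; it illustrates the data flow only, not the patented system.
    ```python
    # Rough sketch of the storage-side/compute-side split in publication 20230156083.
    def storage_node_preprocess(raw_documents):
        """Runs next to the object storage platform: clean and tokenize before shipping."""
        return [doc.lower().split() for doc in raw_documents]

    def compute_node_train(preprocessed):
        """Runs in the client's compute node: trains a trivial word-frequency 'model'."""
        model = {}
        for tokens in preprocessed:
            for token in tokens:
                model[token] = model.get(token, 0) + 1
        return model

    raw = ["The Quick Brown Fox", "the lazy dog"]
    features = storage_node_preprocess(raw)   # happens on the storage side
    model = compute_node_train(features)      # only preprocessed data crosses the public network
    print(model)
    ```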
  • Patent number: 11595474
    Abstract: A method for accelerating data operations across a plurality of nodes of one or more clusters of a distributed computing environment. Rack awareness information characterizing the plurality of nodes is retrieved and a non-volatile memory (NVM) capability of each node is determined. A write operation is received at a management node of the plurality of nodes and one or more of the rack awareness information and the NVM capability of the plurality of nodes are analyzed to select one or more nodes to receive at least a portion of the write operation, wherein at least one of the selected nodes has an NVM capability. A multicast group for the write operation is then generated wherein the selected nodes are subscribers of the multicast group, and the multicast group is used to perform hardware accelerated read or write operations at one or more of the selected nodes.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: February 28, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Debojyoti Dutta, Amit Kumar Saha, Johnu George, Ramdoot Kumar Pydipaty, Marc Solanas Tarre
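    Patent 11595474 above describes selecting write targets from rack-awareness information and per-node NVM capability, then forming a multicast group of the selected nodes. The sketch below is one plausible selection heuristic under assumptions (prefer NVM-capable nodes, spread across racks); the data structures and heuristic are invented for illustration.
    ```python
    # Simplified node-selection step modeled on patent 11595474 (heuristic is an assumption).
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        rack: str
        has_nvm: bool

    def select_write_targets(nodes, replicas=3):
        # Prefer NVM-capable nodes, then spread replicas across distinct racks.
        ranked = sorted(nodes, key=lambda n: not n.has_nvm)
        chosen, racks = [], set()
        for node in ranked:
            if node.rack not in racks or len(chosen) >= len({n.rack for n in nodes}):
                chosen.append(node)
                racks.add(node.rack)
            if len(chosen) == replicas:
                break
        assert any(n.has_nvm for n in chosen), "at least one target must be NVM-capable"
        return chosen

    nodes = [Node("n1", "rack-a", True), Node("n2", "rack-a", False),
             Node("n3", "rack-b", False), Node("n4", "rack-c", True)]
    multicast_group = {n.name for n in select_write_targets(nodes)}
    print("multicast subscribers:", multicast_group)  # {'n1', 'n4', 'n3'}
    ```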
  • Publication number: 20220414065
Abstract: Systems, methods, and computer-readable media for managing the storage of data in a data storage system using a client tag. In some examples, a first portion of a data load that is part of a transaction and a client identifier that uniquely identifies a client are received from the client at a data storage system. The transaction can be tagged with a client tag including the client identifier, and the first portion of the data load can be stored in storage at the data storage system. A first log entry including the client tag is added to a data storage log in response to storing the first portion of the data load in the storage. The first log entry is then written from the data storage log to a persistent storage log in persistent memory, which is used to track progress of storing the data load in the storage.
    Type: Application
    Filed: August 30, 2022
    Publication date: December 29, 2022
    Inventors: Ralf Rantzau, Madhu S. Kumar, Johnu George, Amit Kumar Saha, Debojyoti Dutta
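    The abstract above tags each transaction with a client tag, logs the stored portion in a data storage log, and then writes the entry through to a persistent storage log that tracks progress. The sketch below walks those steps with invented class and field names; it is an illustration of the flow, not the patented design.
    ```python
    # Minimal sketch of the client-tag logging flow in publication 20220414065.
    import uuid

    class DataStorageSystem:
        def __init__(self):
            self.storage = {}                 # transaction id -> list of stored portions
            self.data_storage_log = []        # fast, volatile log
            self.persistent_storage_log = []  # stand-in for a log kept in persistent memory

        def store_portion(self, client_id, transaction_id, portion):
            # Tag the transaction with a client tag containing the client identifier.
            client_tag = {"client_id": client_id, "transaction_id": transaction_id}
            self.storage.setdefault(transaction_id, []).append(portion)
            # Add a log entry that carries the client tag ...
            entry = {"tag": client_tag, "portion_len": len(portion)}
            self.data_storage_log.append(entry)
            # ... then write it through to the persistent log used to track progress.
            self.persistent_storage_log.append(entry)

    system = DataStorageSystem()
    txn = str(uuid.uuid4())
    system.store_portion(client_id="client-42", transaction_id=txn, portion=b"first chunk")
    print(system.persistent_storage_log)
    ```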
  • Patent number: 11481362
Abstract: Systems, methods, and computer-readable media for managing the storage of data in a data storage system using a client tag. In some examples, a first portion of a data load that is part of a transaction and a client identifier that uniquely identifies a client are received from the client at a data storage system. The transaction can be tagged with a client tag including the client identifier, and the first portion of the data load can be stored in storage at the data storage system. A first log entry including the client tag is added to a data storage log in response to storing the first portion of the data load in the storage. The first log entry is then written from the data storage log to a persistent storage log in persistent memory, which is used to track progress of storing the data load in the storage.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: October 25, 2022
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Ralf Rantzau, Madhu S. Kumar, Johnu George, Amit Kumar Saha, Debojyoti Dutta
  • Patent number: 11354039
    Abstract: Embodiments include receiving an indication of a data storage module to be associated with a tenant of a distributed storage system, allocating a partition of a disk for data of the tenant, creating a first association between the data storage module and the disk partition, creating a second association between the data storage module and the tenant, and creating rules for the data storage module based on one or more policies configured for the tenant. Embodiments further include receiving an indication of a type of subscription model selected for the tenant, and selecting the disk partition to be allocated based, at least in part, on the subscription model selected for the tenant. More specific embodiments include generating a storage map indicating the first association between the data storage module and the disk partition and indicating the second association between the data storage module and the tenant.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: June 7, 2022
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Johnu George, Kai Zhang, Yathiraj B. Udupi, Debojyoti Dutta
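    Patent 11354039 above provisions a tenant by allocating a partition sized by the tenant's subscription model, associating the data storage module with both the partition and the tenant, deriving rules from tenant policies, and recording the associations in a storage map. The sketch below strings those steps together; the subscription tiers, sizes, and map layout are assumptions made only for illustration.
    ```python
    # Illustrative tenant-provisioning steps modeled on patent 11354039.
    SUBSCRIPTION_SIZES_GB = {"basic": 100, "standard": 500, "premium": 2000}  # assumed tiers

    def provision_tenant(tenant_id, storage_module, subscription, policies):
        # Select and allocate a disk partition based on the tenant's subscription model.
        partition = {"id": f"{tenant_id}-part-0",
                     "size_gb": SUBSCRIPTION_SIZES_GB[subscription]}
        # Derive rules for the data storage module from the tenant's policies.
        rules = [f"{storage_module}: enforce {p}" for p in policies]
        # Record both associations in a storage map.
        storage_map = {
            "module_to_partition": {storage_module: partition["id"]},  # first association
            "module_to_tenant": {storage_module: tenant_id},           # second association
            "rules": rules,
        }
        return storage_map

    print(provision_tenant("tenant-7", "dsm-1", "standard",
                           policies=["encrypt-at-rest", "max-iops=5000"]))
    ```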
  • Publication number: 20210182729
    Abstract: A method can include receiving, at a workflow controller, a machine learning workflow, the machine learning workflow associated with a first task and a second task. The first task is training a machine learning model and the second task is deploying the model. The method can include segmenting, by the workflow controller, the machine learning workflow into a first sub-workflow associated with the first task and a second sub-workflow associated with the second task, assigning a first workflow agent to the first sub-workflow and assigning a second workflow agent to the second sub-workflow, selecting, by the first workflow agent and based on first resources needed to perform the first task, a first cluster for performing the first task and selecting, by the second workflow agent and based on second resources needed to perform the second task, a second cluster for performing the second task.
    Type: Application
    Filed: December 11, 2019
    Publication date: June 17, 2021
    Inventors: Johnu George, Sourav Chakraborty, Amit Kumar Saha, Debojyoti Dutta, Xinyuan Huang, Adhita Selvaraj
  • Patent number: 11016673
Abstract: Aspects of the technology provide improvements to a Serverless Computing (SLC) workflow by determining when and how to optimize SLC jobs for computing in a Distributed Computing Framework (DCF). DCF optimization can be performed by abstracting SLC tasks into different workflow configurations to determine optimal arrangements for execution in a DCF environment. A process of the technology can include steps for receiving an SLC job including one or more SLC tasks, executing one or more of the tasks to determine a latency metric and a throughput metric for the SLC tasks, and determining whether the SLC tasks should be converted to a DCF format based on the latency metric and the throughput metric. Systems and machine-readable media are also provided.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: May 25, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Xinyuan Huang, Johnu George, Marc Solanas Tarre, Komei Shimamura, Purushotham Kamath, Debojyoti Dutta
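    Patent 11016673 above decides whether to convert serverless (SLC) tasks to a Distributed Computing Framework format based on measured latency and throughput. The toy decision sketch below assumes simple thresholds and a single measurement pass; the thresholds and measurement code are illustrative assumptions, not values from the patent.
    ```python
    # Toy SLC-to-DCF conversion decision modeled on patent 11016673.
    import time

    def measure(task, payload):
        start = time.perf_counter()
        results = [task(item) for item in payload]
        latency = time.perf_counter() - start
        throughput = len(results) / latency if latency else float("inf")
        return latency, throughput

    def should_convert_to_dcf(tasks, payload, max_latency_s=0.5, min_throughput=1000):
        for task in tasks:
            latency, throughput = measure(task, payload)
            if latency > max_latency_s or throughput < min_throughput:
                return True   # the serverless form is too slow; prefer the DCF format
        return False

    slc_tasks = [lambda x: x * x, lambda x: sum(range(x % 100))]
    print(should_convert_to_dcf(slc_tasks, payload=list(range(10_000))))
    ```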
  • Patent number: 10972364
Abstract: Systems, methods, and computer-readable storage media are provided for storing machine learned models in tiered storage. The model serving network evaluates where the models should be stored based on each model's corresponding service level agreement. A model is generally stored at the lowest tiered storage device that is still capable of satisfying the model's service level agreement. In this way, the model serving network aims to store each model at the lowest possible cost.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: April 6, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Johnu George, Amit Kumar Saha
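    The abstract above stores each model at the lowest (cheapest) storage tier that still satisfies its service level agreement. The small sketch below expresses that rule directly; the tier names, latencies, and costs are made-up figures for illustration.
    ```python
    # Tier-selection rule modeled on patent 10972364 (tier values are assumptions).
    TIERS = [  # ordered cheapest first
        {"name": "object-store", "serve_latency_ms": 500, "cost_per_gb": 0.02},
        {"name": "ssd",          "serve_latency_ms": 50,  "cost_per_gb": 0.10},
        {"name": "memory",       "serve_latency_ms": 5,   "cost_per_gb": 1.00},
    ]

    def place_model(model_name, sla_latency_ms):
        for tier in TIERS:  # the cheapest tier that can still meet the SLA wins
            if tier["serve_latency_ms"] <= sla_latency_ms:
                return tier["name"]
        raise ValueError(f"no tier satisfies the {sla_latency_ms} ms SLA for {model_name}")

    print(place_model("fraud-detector", sla_latency_ms=100))      # -> 'ssd'
    print(place_model("batch-recommender", sla_latency_ms=1000))  # -> 'object-store'
    ```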
  • Patent number: 10938581
Abstract: Aspects of the disclosed technology relate to ways to determine the optimal storage of data structures across different memory devices associated with physically disparate network nodes. In some aspects, a process of the technology can include steps for receiving a first retrieval request for a first object, searching a local PMEM device for the first object based on the first retrieval request, and, in response to a failure to find the first object on the local PMEM device, transmitting a second retrieval request to a remote node, wherein the second retrieval request is configured to cause the remote node to retrieve the first object from a remote PMEM device. Systems and machine-readable media are also provided.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: March 2, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Johnu George, Amit Kumar Saha, Arun Saha, Debojyoti Dutta
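    Patent 10938581 above describes a lookup path that checks a local persistent-memory (PMEM) device first and, on a miss, forwards a second retrieval request to a remote node holding the object in its own PMEM device. The schematic below uses plain dicts as stand-ins for PMEM devices and a method call in place of the network hop; it is an assumption-laden illustration, not the patented mechanism.
    ```python
    # Schematic local-then-remote PMEM lookup modeled on patent 10938581.
    class PmemNode:
        def __init__(self, name, objects=None, remote=None):
            self.name = name
            self.pmem = dict(objects or {})  # stand-in for a local PMEM device
            self.remote = remote             # another PmemNode reachable over the network

        def retrieve(self, key):
            if key in self.pmem:                 # first retrieval request: local hit
                return self.pmem[key], self.name
            if self.remote is not None:          # local miss: issue a second retrieval request
                return self.remote.retrieve(key) # the remote node reads its own PMEM device
            raise KeyError(key)

    remote = PmemNode("node-b", objects={"obj-1": b"payload"})
    local = PmemNode("node-a", remote=remote)
    value, served_by = local.retrieve("obj-1")
    print(value, "served by", served_by)  # b'payload' served by node-b
    ```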
  • Patent number: 10922287
    Abstract: Aspects of the subject technology relate to ways to determine the optimal storage of data structures in a hierarchy of memory types. In some aspects, a process of the technology can include steps for determining a latency cost for each of a plurality of fields in an object, identifying at least one field having a latency cost that exceeds a predetermined threshold, and determining whether to store the at least one field to a first memory device or a second memory device based on the latency cost. Systems and machine-readable media are also provided.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: February 16, 2021
    Assignee: Cisco Technology, Inc.
    Inventors: Johnu George, Amit Kumar Saha, Arun Saha, Debojyoti Dutta
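    The abstract above assigns each field of an object a latency cost and places fields whose cost exceeds a threshold on the first memory device, with the rest going to the second. The compact sketch below assumes a simple cost model (field size times access frequency), which is an illustrative assumption; the patent does not specify the cost function.
    ```python
    # Per-field placement rule modeled on patent 10922287 (cost model is an assumption).
    def latency_cost(field_size_bytes, accesses_per_sec):
        return field_size_bytes * accesses_per_sec

    def place_fields(object_fields, threshold):
        placement = {}
        for name, (size, freq) in object_fields.items():
            cost = latency_cost(size, freq)
            placement[name] = "first_memory" if cost > threshold else "second_memory"
        return placement

    fields = {"header": (64, 10_000), "thumbnail": (4_096, 50), "blob": (1_048_576, 1)}
    print(place_fields(fields, threshold=500_000))
    # header -> first_memory, thumbnail -> second_memory, blob -> first_memory
    ```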
  • Patent number: 10915516
    Abstract: Systems, methods, and computer-readable media for storing data in a data storage system using a child table. In some examples, a trickle update to first data in a parent table is received at a data storage system storing the first data in the parent table. A child table storing second data can be created in persistent memory for the parent table. Subsequently the trickle update can be stored in the child table as part of the second data stored in the child table. The second data including the trickle update stored in the child table can be used to satisfy, at least in part, one or more data queries for the parent table using the child table.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: February 9, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Johnu George, Amit Kumar Saha, Debojyoti Dutta, Madhu S. Kumar, Ralf Rantzau
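    Patent 10915516 above routes trickle updates into a child table kept in persistent memory and answers queries by combining the child table with the parent table. The in-memory sketch below overlays a child dict on a parent dict to show that read path; the dict-based layout is an assumption, not a schema from the patent.
    ```python
    # Parent/child-table read path modeled on patent 10915516.
    class TrickleStore:
        def __init__(self, parent_rows):
            self.parent = dict(parent_rows)   # bulk-loaded parent table
            self.child = {}                   # child table holding trickle updates

        def trickle_update(self, key, row):
            self.child[key] = row             # small updates go to the child table only

        def query(self, key):
            # Child rows take precedence, so queries see the freshest data.
            return self.child.get(key, self.parent.get(key))

    store = TrickleStore({"user-1": {"plan": "basic"}, "user-2": {"plan": "premium"}})
    store.trickle_update("user-1", {"plan": "premium"})
    print(store.query("user-1"))  # {'plan': 'premium'} -- served from the child table
    print(store.query("user-2"))  # {'plan': 'premium'} -- still served from the parent
    ```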
  • Publication number: 20210011888
    Abstract: Aspects of the subject technology relate to ways to determine the optimal storage of data structures in a hierarchy of memory types. In some aspects, a process of the technology can include steps for identifying a retrieval cost associated with retrieving a field in an object from data storage, comparing the retrieval cost for the field to a cost threshold for storing data in persistent memory, and selectively storing the field in either a persistent memory device or a non-persistent memory device based on a comparison of the retrieval cost for the field to the cost threshold. Systems and machine-readable media are also provided.
    Type: Application
    Filed: September 28, 2020
    Publication date: January 14, 2021
    Inventors: Johnu George, Amit Kumar Saha, Arun Saha, Debojyoti Dutta
  • Publication number: 20200366568
Abstract: Systems, methods, and computer-readable storage media are provided for storing machine learned models in tiered storage. The model serving network evaluates where the models should be stored based on each model's corresponding service level agreement. A model is generally stored at the lowest tiered storage device that is still capable of satisfying the model's service level agreement. In this way, the model serving network aims to store each model at the lowest possible cost.
    Type: Application
    Filed: May 15, 2019
    Publication date: November 19, 2020
    Inventors: Johnu George, Amit Kumar Saha
  • Patent number: 10797892
Abstract: Aspects of the disclosed technology relate to ways to determine the optimal storage of data structures across different memory devices associated with physically disparate network nodes. In some aspects, a process of the technology can include steps for receiving a first retrieval request for a first object, searching a local PMEM device for the first object based on the first retrieval request, and, in response to a failure to find the first object on the local PMEM device, transmitting a second retrieval request to a remote node, wherein the second retrieval request is configured to cause the remote node to retrieve the first object from a remote PMEM device. Systems and machine-readable media are also provided.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: October 6, 2020
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Johnu George, Amit Kumar Saha, Arun Saha, Debojyoti Dutta
  • Publication number: 20200285396
    Abstract: Embodiments include receiving an indication of a data storage module to be associated with a tenant of a distributed storage system, allocating a partition of a disk for data of the tenant, creating a first association between the data storage module and the disk partition, creating a second association between the data storage module and the tenant, and creating rules for the data storage module based on one or more policies configured for the tenant. Embodiments further include receiving an indication of a type of subscription model selected for the tenant, and selecting the disk partition to be allocated based, at least in part, on the subscription model selected for the tenant. More specific embodiments include generating a storage map indicating the first association between the data storage module and the disk partition and indicating the second association between the data storage module and the tenant.
    Type: Application
    Filed: May 20, 2020
    Publication date: September 10, 2020
    Inventors: Johnu George, Kai Zhang, Yathiraj B. Udupi, Debojyoti Dutta
  • Publication number: 20200272338
Abstract: Aspects of the technology provide improvements to a Serverless Computing (SLC) workflow by determining when and how to optimize SLC jobs for computing in a Distributed Computing Framework (DCF). DCF optimization can be performed by abstracting SLC tasks into different workflow configurations to determine optimal arrangements for execution in a DCF environment. A process of the technology can include steps for receiving an SLC job including one or more SLC tasks, executing one or more of the tasks to determine a latency metric and a throughput metric for the SLC tasks, and determining whether the SLC tasks should be converted to a DCF format based on the latency metric and the throughput metric. Systems and machine-readable media are also provided.
    Type: Application
    Filed: May 13, 2020
    Publication date: August 27, 2020
    Inventors: Xinyuan Huang, Johnu George, Marc Solanas Tarre, Komei Shimamura, Purushotham Kamath, Debojyoti Dutta