Patents by Inventor Debojyoti Dutta

Debojyoti Dutta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11016673
    Abstract: Aspects of the technology provide improvements to a Serverless Computing (SLC) workflow by determining when and how to optimize SLC jobs for computing in a Distributed Computing Framework (DCF). DCF optimization can be performed by abstracting SLC tasks into different workflow configurations to determine optimal arrangements for execution in a DCF environment. A process of the technology can include steps for receiving an SLC job including one or more SLC tasks, executing one or more of the tasks to determine a latency metric and a throughput metric for the SLC tasks, and determining whether the SLC tasks should be converted to a DCF format based on the latency metric and the throughput metric. Systems and machine-readable media are also provided.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: May 25, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Xinyuan Huang, Johnu George, Marc Solanas Tarre, Komei Shimamura, Purushotham Kamath, Debojyoti Dutta
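A minimal sketch of the conversion decision described in the entry above, assuming hypothetical names and thresholds (SLCTask, should_convert_to_dcf, 200 ms, 50 requests/s); it is illustrative only and not drawn from the patent itself.

    from dataclasses import dataclass

    @dataclass
    class SLCTask:
        name: str
        latency_ms: float       # latency measured by executing the task once
        throughput_rps: float   # measured requests handled per second

    def should_convert_to_dcf(tasks, max_latency_ms=200.0, min_throughput_rps=50.0):
        """Return True when measured metrics suggest the SLC job would run
        better after conversion to a distributed computing framework format."""
        avg_latency = sum(t.latency_ms for t in tasks) / len(tasks)
        avg_throughput = sum(t.throughput_rps for t in tasks) / len(tasks)
        return avg_latency > max_latency_ms or avg_throughput < min_throughput_rps

    job = [SLCTask("extract", 350.0, 20.0), SLCTask("transform", 120.0, 80.0)]
    print(should_convert_to_dcf(job))   # True: average latency exceeds the threshold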
  • Patent number: 11005731
    Abstract: One aspect of the disclosure relates to, among other things, a method for optimizing and provisioning a software-as-a-service (SaaS). The method includes determining a graph comprising interconnected stages for the SaaS, wherein each stage has a replication factor and one or more metrics that are associated with one or more service level objectives of the SaaS, determining a first replication factor associated with a first one of the stages which meets a first service level objective of the SaaS, adjusting the replication factor associated with the first one of the stages based on the determined first replication factor, and provisioning the SaaS onto networked computing resources based on the graph and the replication factors associated with each stage.
    Type: Grant
    Filed: April 5, 2017
    Date of Patent: May 11, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Amit Kumar Saha, Debojyoti Dutta
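A brief illustration of the per-stage replication adjustment described above, under the assumption that a stage's SLO is expressed as a throughput target and that per-replica throughput is known; the stage names, numbers, and required_replicas helper are hypothetical.

    import math

    # Hypothetical stage graph: each stage carries a replication factor and a
    # measured per-replica throughput tied to a service level objective (SLO).
    stages = {
        "ingest":  {"replicas": 2, "throughput_per_replica": 100, "slo_throughput": 450},
        "process": {"replicas": 3, "throughput_per_replica": 200, "slo_throughput": 500},
    }
    edges = [("ingest", "process")]   # interconnected stages of the SaaS graph

    def required_replicas(stage):
        """Smallest replication factor that satisfies the stage's SLO."""
        return math.ceil(stage["slo_throughput"] / stage["throughput_per_replica"])

    for name, stage in stages.items():
        stage["replicas"] = max(stage["replicas"], required_replicas(stage))

    print({name: s["replicas"] for name, s in stages.items()})
    # {'ingest': 5, 'process': 3} -- 'ingest' is scaled up to meet its SLO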
  • Patent number: 10938677
    Abstract: In one embodiment, a method implements virtualized network functions in a serverless computing system having networked hardware resources. An interface of the serverless computing system receives a specification for a network service including a virtualized network function (VNF) forwarding graph (FG). A mapper of the serverless computing system determines an implementation graph comprising edges and vertices based on the specification. A provisioner of the serverless computing system provisions a queue in the serverless computing system for each edge and a function for each vertex, wherein at least one of the functions reads incoming messages from at least one queue. The queues and functions process data packets in accordance with the VNF FG.
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: March 2, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Komei Shimamura, Amit Kumar Saha, Debojyoti Dutta
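A small sketch of the edge-to-queue and vertex-to-function mapping described in the entry above; the example forwarding graph and the provision helper are illustrative assumptions, not the patented implementation.

    # Hypothetical VNF forwarding graph: vertices are network functions,
    # edges describe the flow of packets between them.
    vnf_fg = {
        "vertices": ["firewall", "nat", "load_balancer"],
        "edges": [("firewall", "nat"), ("nat", "load_balancer")],
    }

    def provision(graph):
        # One queue per edge of the forwarding graph.
        queues = {(src, dst): f"queue:{src}->{dst}" for src, dst in graph["edges"]}
        # One function per vertex; each function reads from the queues feeding it.
        functions = {}
        for vertex in graph["vertices"]:
            inputs = [q for (src, dst), q in queues.items() if dst == vertex]
            functions[vertex] = {"reads_from": inputs}
        return queues, functions

    queues, functions = provision(vnf_fg)
    print(functions["nat"])   # {'reads_from': ['queue:firewall->nat']}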
  • Patent number: 10938937
    Abstract: Approaches are disclosed for distributing messages across multiple data centers where the data centers do not store messages using a same message queue protocol. In some embodiments, a network element translates messages from a message queue protocol (e.g., Kestrel, RABBITMQ, APACHE Kafka, or ACTIVEMQ) to an application layer messaging protocol (e.g., XMPP, MQTT, the WebSocket protocol, or other application layer messaging protocols). In other embodiments, a network element translates messages from an application layer messaging protocol to a message queue protocol. Using the approaches disclosed herein, data centers communicate using, at least in part, application layer messaging protocols to decouple the message queue protocols used by the data centers and enable messages to be shared between message queues in the data centers.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: March 2, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Marc Solanas Tarre, Ralf Rantzau, Debojyoti Dutta, Manoj Sharma
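A hedged sketch of the queue-to-application-layer translation described above, wrapping a queue record in an MQTT-style publish payload; the function name, topic scheme, and fields are assumptions for illustration only.

    import json

    def queue_to_app_layer(queue_name, raw_message):
        """Wrap a message taken from one data center's message queue so it can be
        published over an application layer messaging protocol to another data
        center that may use a different queueing system."""
        return {
            "topic": f"datacenter/{queue_name}",    # destination topic
            "payload": json.dumps({"body": raw_message}),
            "qos": 1,                               # at-least-once delivery
        }

    print(queue_to_app_layer("orders", "order #42 created"))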
  • Patent number: 10938581
    Abstract: Aspects of the disclosed technology relate to ways to determine the optimal storage of data structures across different memory devices associated with physically disparate network nodes. In some aspects, a process of the technology can include steps for receiving a first retrieval request for a first object, searching a local PMEM device for the first object based on the first retrieval request, and, in response to a failure to find the first object on the local PMEM device, transmitting a second retrieval request to a remote node, wherein the second retrieval request is configured to cause the remote node to retrieve the first object from a remote PMEM device. Systems and machine-readable media are also provided.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: March 2, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Johnu George, Amit Kumar Saha, Arun Saha, Debojyoti Dutta
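A minimal two-tier lookup sketch for the local-then-remote PMEM retrieval described in the entry above; the in-memory dictionaries and fetch_from_remote_node stand-in are hypothetical.

    # Hypothetical stand-ins for a local and a remote persistent memory (PMEM) store.
    local_pmem = {"obj-1": b"local payload"}
    remote_pmem = {"obj-2": b"remote payload"}

    def fetch_from_remote_node(key):
        """Stand-in for a second retrieval request sent to a remote node."""
        return remote_pmem.get(key)

    def get_object(key):
        value = local_pmem.get(key)       # search the local PMEM device first
        if value is None:                 # on a miss, ask a remote node
            value = fetch_from_remote_node(key)
        return value

    print(get_object("obj-2"))   # b'remote payload'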
  • Patent number: 10922287
    Abstract: Aspects of the subject technology relate to ways to determine the optimal storage of data structures in a hierarchy of memory types. In some aspects, a process of the technology can include steps for determining a latency cost for each of a plurality of fields in an object, identifying at least one field having a latency cost that exceeds a predetermined threshold, and determining whether to store the at least one field to a first memory device or a second memory device based on the latency cost. Systems and machine-readable media are also provided.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: February 16, 2021
    Assignee: Cisco Technology, Inc.
    Inventors: Johnu George, Amit Kumar Saha, Arun Saha, Debojyoti Dutta
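A short sketch of the field-level placement decision described in the entry above, assuming that fields whose latency cost exceeds a threshold are placed in the faster device; the costs, threshold, and device names are illustrative guesses.

    FIELD_LATENCY_COST = {"header": 0.2, "payload": 5.0, "checksum": 0.1}
    COST_THRESHOLD = 1.0

    def place_fields(costs, threshold=COST_THRESHOLD):
        placement = {}
        for field, cost in costs.items():
            # Fields that are expensive to retrieve go to the faster memory device.
            placement[field] = "first_memory_device" if cost > threshold else "second_memory_device"
        return placement

    print(place_fields(FIELD_LATENCY_COST))
    # {'header': 'second_memory_device', 'payload': 'first_memory_device',
    #  'checksum': 'second_memory_device'}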
  • Patent number: 10915516
    Abstract: Systems, methods, and computer-readable media for storing data in a data storage system using a child table. In some examples, a trickle update to first data in a parent table is received at a data storage system storing the first data in the parent table. A child table storing second data can be created in persistent memory for the parent table. Subsequently, the trickle update can be stored in the child table as part of the second data. The second data, including the trickle update stored in the child table, can then be used to satisfy, at least in part, one or more data queries for the parent table.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: February 9, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Johnu George, Amit Kumar Saha, Debojyoti Dutta, Madhu S. Kumar, Ralf Rantzau
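A minimal parent/child table sketch of the query path described above: trickle updates land in the child table and are merged with parent rows at query time. The dictionaries and helper names are hypothetical.

    parent_table = {"row-1": {"qty": 10}, "row-2": {"qty": 7}}
    child_table = {}   # created in persistent memory for the parent table

    def apply_trickle_update(key, update):
        child_table.setdefault(key, {}).update(update)   # store the update in the child

    def query(key):
        row = dict(parent_table.get(key, {}))
        row.update(child_table.get(key, {}))             # child data overrides the parent
        return row

    apply_trickle_update("row-1", {"qty": 12})
    print(query("row-1"))   # {'qty': 12} -- served in part from the child table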
  • Patent number: 10902293
    Abstract: In one embodiment, a device forms a neural network envelope cell that comprises a plurality of convolution-based filters in series or parallel. The device constructs a convolutional neural network by stacking copies of the envelope cell in series. The device trains, using a training dataset of images, the convolutional neural network to perform image classification by iteratively collecting variance metrics for each filter in each envelope cell, pruning filters with low variance metrics from the convolutional neural network, and appending a new copy of the envelope cell into the convolutional neural network.
    Type: Grant
    Filed: November 1, 2018
    Date of Patent: January 26, 2021
    Assignee: Cisco Technology, Inc.
    Inventors: Purushotham Kamath, Abhishek Singh, Debojyoti Dutta
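A toy sketch of the prune-and-grow loop described in the entry above: each envelope cell is a list of filters tracked with a variance metric, low-variance filters are pruned, and a fresh cell is appended. The variance values are synthetic and the training pass is omitted.

    import random

    def new_envelope_cell():
        return [{"filter": f"conv{i}", "variance": random.random()} for i in range(4)]

    network = [new_envelope_cell()]       # stacked copies of the envelope cell
    VARIANCE_THRESHOLD = 0.3

    for iteration in range(3):
        # ... a training pass would update the per-filter variance metrics here ...
        for cell in network:
            cell[:] = [f for f in cell if f["variance"] >= VARIANCE_THRESHOLD]   # prune
        network.append(new_envelope_cell())   # append a new copy of the envelope cell

    print([len(cell) for cell in network])    # surviving filters per stacked cell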
  • Publication number: 20210011888
    Abstract: Aspects of the subject technology relate to ways to determine the optimal storage of data structures in a hierarchy of memory types. In some aspects, a process of the technology can include steps for identifying a retrieval cost associated with retrieving a field in an object from data storage, comparing the retrieval cost for the field to a cost threshold for storing data in persistent memory, and selectively storing the field in either a persistent memory device or a non-persistent memory device based on a comparison of the retrieval cost for the field to the cost threshold. Systems and machine-readable media are also provided.
    Type: Application
    Filed: September 28, 2020
    Publication date: January 14, 2021
    Inventors: Johnu George, Amit Kumar Saha, Arun Saha, Debojyoti Dutta
  • Patent number: 10884807
    Abstract: In one embodiment, a method for serverless computing comprises: receiving a task definition, wherein the task definition comprises a first task and a second task chained to the first task; adding the first task and the second task to a task queue; executing the first task from the task queue using hardware computing resources in a first serverless environment associated with a first serverless environment provider; and executing the second task from the task queue using hardware computing resources in a second serverless environment selected based on a condition on an output of the first task.
    Type: Grant
    Filed: April 12, 2017
    Date of Patent: January 5, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Komei Shimamura, Timothy Okwii, Debojyoti Dutta, Yathiraj B. Udupi, Rahul Ramakrishna, Xinyuan Huang
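A compact sketch of the chained execution described in the entry above: the first task runs in one serverless environment, and the environment for the second task is chosen by a condition on the first task's output. The provider stand-ins and the condition are hypothetical.

    from collections import deque

    def run_on_provider_a(task):
        return {"status": "ok", "value": len(task)}

    def run_on_provider_b(task):
        return {"status": "ok", "value": task.upper()}

    task_queue = deque(["resize-image", "notify-user"])   # second task chained to the first

    first_output = run_on_provider_a(task_queue.popleft())
    # The second task's environment is selected based on a condition on the output.
    second_runner = run_on_provider_b if first_output["value"] > 5 else run_on_provider_a
    print(second_runner(task_queue.popleft()))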
  • Publication number: 20200396311
    Abstract: A method for data provisioning a serverless computing cluster. A plurality of user defined functions (UDFs) are received for execution on worker nodes of the serverless computing cluster. For a first UDF, one or more data locations of UDF data needed to execute the first UDF are determined. At a master node of the serverless computing cluster, a plurality of worker node tickets are received, each ticket indicating a resource availability of a corresponding worker node. The one or more data locations and the plurality of worker node tickets are analyzed to determine eligible worker nodes capable of executing the first UDF. The master node transmits a pre-fetch command to one or more of the eligible worker nodes, causing each of those worker nodes to become a provisioned worker node for the first UDF by storing pre-fetched first UDF data before the first UDF is assigned for execution.
    Type: Application
    Filed: August 31, 2020
    Publication date: December 17, 2020
    Inventors: Komei Shimamura, Amit Kumar Saha, Debojyoti Dutta
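A small master-node scheduling sketch of the provisioning step described above: worker tickets advertise free resources, and eligible workers are marked as having pre-fetched the UDF's data. The ticket fields, eligibility rule, and data locations are illustrative assumptions.

    udf = {"name": "aggregate_logs",
           "data_locations": ["s3://bucket/day1", "s3://bucket/day2"]}
    worker_tickets = [
        {"worker": "w1", "free_cpu": 4, "free_mem_gb": 8},
        {"worker": "w2", "free_cpu": 1, "free_mem_gb": 2},
    ]

    def eligible(ticket, min_cpu=2, min_mem_gb=4):
        return ticket["free_cpu"] >= min_cpu and ticket["free_mem_gb"] >= min_mem_gb

    provisioned = []
    for ticket in worker_tickets:
        if eligible(ticket):
            # A real system would transmit a pre-fetch command here; we record it.
            provisioned.append({"worker": ticket["worker"],
                                "prefetched": udf["data_locations"]})

    print(provisioned)   # only 'w1' is provisioned for the UDF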
  • Patent number: 10866879
    Abstract: A controller can receive first and second metrics respectively indicating distributed computing system servers' CPU, memory, or disk utilization, throughput, or latency for a first time. The controller can receive third and fourth metrics for a second time. The controller can determine a first graph including vertices corresponding to the servers and edges indicating data flow between the servers, a second graph including edges indicating the first metrics satisfy a first threshold, a third graph including edges indicating the second metrics satisfy a second threshold, a fourth graph including edges indicating the third metrics fail to satisfy the first threshold, and a fifth graph including edges indicating the fourth metrics fail to satisfy the second threshold. The controller can display a sixth graph indicating at least one of first changes between the second graph and the fourth graph or second changes between the third graph and the fifth graph.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: December 15, 2020
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Mingye Chen, Xinyuan Huang, Debojyoti Dutta
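A simplified sketch of the graph comparison described in the entry above, using CPU utilization as the metric: edges that satisfied the threshold at the first time but fail it at the second time are surfaced as changes. The topology, values, and threshold are hypothetical.

    flows = [("web", "app"), ("app", "db")]               # data flow between servers
    cpu_t1 = {("web", "app"): 40, ("app", "db"): 80}      # metrics at the first time (%)
    cpu_t2 = {("web", "app"): 75, ("app", "db"): 85}      # metrics at the second time (%)
    THRESHOLD = 70

    satisfied_t1 = {e for e in flows if cpu_t1[e] <= THRESHOLD}   # edges meeting the threshold
    failing_t2 = {e for e in flows if cpu_t2[e] > THRESHOLD}      # edges failing it later
    changes = satisfied_t1 & failing_t2                           # edges that degraded

    print(changes)   # {('web', 'app')} -- this link newly fails the CPU threshold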
  • Patent number: 10797892
    Abstract: Aspects of the disclosed technology relate to ways to determine the optimal storage of data structures across different memory devices associated with physically disparate network nodes. In some aspects, a process of the technology can include steps for receiving a first retrieval request for a first object, searching a local PMEM device for the first object based on the first retrieval request, and, in response to a failure to find the first object on the local PMEM device, transmitting a second retrieval request to a remote node, wherein the second retrieval request is configured to cause the remote node to retrieve the first object from a remote PMEM device. Systems and machine-readable media are also provided.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: October 6, 2020
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Johnu George, Amit Kumar Saha, Arun Saha, Debojyoti Dutta
  • Publication number: 20200302270
    Abstract: A neural network architecture search may be conducted by a controller to generate a neural network. The controller may perform the search by generating a directed acyclic graph across nodes in a search space, the nodes representing compute operations for a neural network. As the search is performed, the controller may retrieve resource availability information to modify the likelihood of a generated neural network architecture including previously unused nodes.
    Type: Application
    Filed: March 19, 2019
    Publication date: September 24, 2020
    Inventors: Abhishek Singh, Debojyoti Dutta
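A toy sketch of the resource-aware sampling step described in the entry above: when spare resources are available, the controller raises the likelihood of sampling operations not yet used in the graph. The search space, weights, and utilization figure are assumptions for illustration.

    import random

    search_space = ["conv3x3", "conv5x5", "maxpool", "identity"]   # candidate compute operations
    used_nodes = {"conv3x3"}

    def sample_next_node(resource_utilization):
        weights = []
        for node in search_space:
            w = 1.0
            if node not in used_nodes and resource_utilization < 0.5:
                w *= 2.0    # idle resources favour exploring previously unused operations
            weights.append(w)
        return random.choices(search_space, weights=weights, k=1)[0]

    print(sample_next_node(resource_utilization=0.3))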
  • Publication number: 20200302272
    Abstract: The present disclosure provides systems, methods and computer-readable media for optimizing neural architecture search for an automated machine learning process. In one aspect, a neural architecture search method includes selecting a neural architecture for training as part of an automated machine learning process; collecting statistical parameters on individual nodes of the neural architecture during the training; determining, based on the statistical parameters, active nodes of the neural architecture to form a candidate neural architecture; and validating the candidate neural architecture to produce a trained neural architecture to be used in implementing an application or a service.
    Type: Application
    Filed: March 19, 2019
    Publication date: September 24, 2020
    Inventors: Abhishek Singh, Debojyoti Dutta
  • Publication number: 20200285396
    Abstract: Embodiments include receiving an indication of a data storage module to be associated with a tenant of a distributed storage system, allocating a partition of a disk for data of the tenant, creating a first association between the data storage module and the disk partition, creating a second association between the data storage module and the tenant, and creating rules for the data storage module based on one or more policies configured for the tenant. Embodiments further include receiving an indication of a type of subscription model selected for the tenant, and selecting the disk partition to be allocated based, at least in part, on the subscription model selected for the tenant. More specific embodiments include generating a storage map indicating the first association between the data storage module and the disk partition and indicating the second association between the data storage module and the tenant.
    Type: Application
    Filed: May 20, 2020
    Publication date: September 10, 2020
    Inventors: Johnu George, Kai Zhang, Yathiraj B. Udupi, Debojyoti Dutta
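A brief sketch of the associations described in the entry above, where a data storage module is tied to a disk partition sized by the tenant's subscription model and to the tenant's policy rules; the sizes, names, and onboard_tenant helper are hypothetical.

    SUBSCRIPTION_PARTITION_GB = {"basic": 100, "premium": 500}

    def onboard_tenant(tenant, subscription, policies):
        partition = {"disk": "disk-0",
                     "size_gb": SUBSCRIPTION_PARTITION_GB[subscription]}
        module = f"storage-module-{tenant}"
        storage_map = {
            "module_to_partition": {module: partition},   # first association
            "module_to_tenant": {module: tenant},         # second association
            "rules": {module: policies},                  # rules from tenant policies
        }
        return storage_map

    print(onboard_tenant("acme", "premium", {"replication": 3, "encryption": True}))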
  • Patent number: 10771584
    Abstract: A method for data provisioning a serverless computing cluster. A plurality of user defined functions (UDFs) are received for execution on worker nodes of the serverless computing cluster. For a first UDF, one or more data locations of UDF data needed to execute the first UDF are determined. At a master node of the serverless computing cluster, a plurality of worker node tickets are received, each ticket indicating a resource availability of a corresponding worker node. The one or more data locations and the plurality of worker node tickets are analyzed to determine eligible worker nodes capable of executing the first UDF. The master node transmits a pre-fetch command to one or more of the eligible worker nodes, causing each of those worker nodes to become a provisioned worker node for the first UDF by storing pre-fetched first UDF data before the first UDF is assigned for execution.
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: September 8, 2020
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Komei Shimamura, Amit Kumar Saha, Debojyoti Dutta
  • Patent number: 10769152
    Abstract: There is disclosed in an example a computer-implemented method of providing automated log analysis, including: receiving a log stream comprising a plurality of transaction log entries, each log entry comprising a time stamp, a component identification (ID), and a name-value pair identifying a transaction; creating an index that maps a key ID to a name-value pair of a log entry; and selecting from the index a key ID having a relatively large number of repetitions. There is also disclosed an apparatus and a computer-readable medium for performing the method.
    Type: Grant
    Filed: December 2, 2016
    Date of Patent: September 8, 2020
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Xinyuan Huang, Manoj Sharma, Debojyoti Dutta
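A minimal sketch of the indexing and selection steps described in the entry above, assuming a toy log format of "timestamp component name=value"; the log lines and the choice of the name=value pair as the key ID are illustrative assumptions.

    from collections import Counter

    log_stream = [
        "2020-09-08T10:00:01 comp=web txn_id=42",
        "2020-09-08T10:00:02 comp=db  txn_id=42",
        "2020-09-08T10:00:03 comp=web txn_id=43",
    ]

    index = {}
    counts = Counter()
    for entry in log_stream:
        timestamp, component, pair = entry.split()
        key_id = pair                  # key ID derived from the name=value pair
        index[key_id] = pair           # index maps the key ID to the name=value pair
        counts[key_id] += 1

    print(counts.most_common(1))   # [('txn_id=42', 2)] -- the most repeated key ID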
  • Publication number: 20200279187
    Abstract: Joint hyper-parameter optimizations and infrastructure configurations for deploying a machine learning model can be generated based upon each other and output as a recommendation. A model hyper-parameter optimization may tune model hyper-parameters based on an initial set of hyper-parameters and resource configurations. The resource configurations may then be adjusted or generated based on the tuned model hyper-parameters. Further model hyper-parameter optimizations and resource configuration adjustments can be performed sequentially in a loop until a threshold performance for training the model based on the model hyper-parameters or a threshold improvement between loops is detected.
    Type: Application
    Filed: February 28, 2019
    Publication date: September 3, 2020
    Inventors: Xinyuan Huang, Debojyoti Dutta
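A toy sketch of the alternating loop described in the entry above: hyper-parameters are tuned against the current resource configuration, the resources are then adjusted for the tuned hyper-parameters, and the loop stops when the improvement between iterations falls below a threshold. The tuning rules and cost model are stand-ins, not the patented method.

    def tune_hyperparams(resources):
        # Stand-in rule: larger batch sizes when more GPUs are available.
        return {"batch_size": 32 * resources["gpus"], "lr": 0.001}

    def tune_resources(hyperparams):
        # Stand-in rule: one GPU per 64 samples of batch size, at least one.
        return {"gpus": max(1, hyperparams["batch_size"] // 64)}

    def estimate_training_time(hyperparams, resources):
        return 1000.0 / (resources["gpus"] * hyperparams["batch_size"])

    resources = {"gpus": 1}
    previous = float("inf")
    for _ in range(10):
        hyperparams = tune_hyperparams(resources)
        resources = tune_resources(hyperparams)
        current = estimate_training_time(hyperparams, resources)
        if previous - current < 0.01:   # stop when the improvement is marginal
            break
        previous = current

    print(hyperparams, resources)   # recommended joint configuration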
  • Publication number: 20200272338
    Abstract: Aspects of the technology provide improvements to a Serverless Computing (SLC) workflow by determining when and how to optimize SLC jobs for computing in a Distributed Computing Framework (DCF). DCF optimization can be performed by abstracting SLC tasks into different workflow configurations to determine optimal arrangements for execution in a DCF environment. A process of the technology can include steps for receiving an SLC job including one or more SLC tasks, executing one or more of the tasks to determine a latency metric and a throughput metric for the SLC tasks, and determining whether the SLC tasks should be converted to a DCF format based on the latency metric and the throughput metric. Systems and machine-readable media are also provided.
    Type: Application
    Filed: May 13, 2020
    Publication date: August 27, 2020
    Inventors: Xinyuan Huang, Johnu George, Marc Solanas Tarre, Komei Shimamura, Purushotham Kamath, Debojyoti Dutta