Patents by Inventor Amit Kumar Saha
Amit Kumar Saha has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10915516
Abstract: Systems, methods, and computer-readable media for storing data in a data storage system using a child table. In some examples, a trickle update to first data in a parent table is received at a data storage system storing the first data in the parent table. A child table storing second data can be created in persistent memory for the parent table. Subsequently, the trickle update can be stored in the child table as part of the second data stored in the child table. The second data including the trickle update stored in the child table can be used to satisfy, at least in part, one or more data queries for the parent table using the child table.
Type: Grant
Filed: October 18, 2017
Date of Patent: February 9, 2021
Assignee: CISCO TECHNOLOGY, INC.
Inventors: Johnu George, Amit Kumar Saha, Debojyoti Dutta, Madhu S. Kumar, Ralf Rantzau
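The parent/child-table scheme above can be sketched in a few lines: trickle updates land only in a small child table, and reads merge the child over the parent snapshot. The class and method names below are illustrative, not from the patent.

```python
# Minimal sketch of the child-table idea: trickle updates go to a child
# table (which the abstract places in persistent memory), and queries
# consult the child table before falling back to the parent.

class ParentWithChild:
    def __init__(self, parent_rows):
        self.parent = dict(parent_rows)  # bulk-loaded base data
        self.child = {}                  # trickle updates

    def trickle_update(self, key, value):
        # Small incremental writes touch only the child table.
        self.child[key] = value

    def query(self, key):
        # Child data takes precedence over the parent snapshot.
        if key in self.child:
            return self.child[key]
        return self.parent.get(key)

store = ParentWithChild({"a": 1, "b": 2})
store.trickle_update("b", 20)  # update without rewriting the parent
store.trickle_update("c", 3)   # insert lands in the child table
print(store.query("a"), store.query("b"), store.query("c"))  # 1 20 3
```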
-
Publication number: 20210011888
Abstract: Aspects of the subject technology relate to ways to determine the optimal storage of data structures in a hierarchy of memory types. In some aspects, a process of the technology can include steps for identifying a retrieval cost associated with retrieving a field in an object from data storage, comparing the retrieval cost for the field to a cost threshold for storing data in persistent memory, and selectively storing the field in either a persistent memory device or a non-persistent memory device based on a comparison of the retrieval cost for the field to the cost threshold. Systems and machine-readable media are also provided.
Type: Application
Filed: September 28, 2020
Publication date: January 14, 2021
Inventors: Johnu George, Amit Kumar Saha, Arun Saha, Debojyoti Dutta
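The threshold comparison the abstract describes reduces to a simple placement rule. The cost values and field names below are assumptions for illustration; the actual cost model is not specified here.

```python
# Illustrative sketch of threshold-based field placement: each field's
# retrieval cost is compared against a threshold to choose persistent
# vs non-persistent memory.

def place_fields(field_costs, cost_threshold):
    """Return a mapping of field -> 'persistent' or 'non-persistent'."""
    placement = {}
    for field, cost in field_costs.items():
        # Fields that are expensive to retrieve from data storage are
        # kept in persistent memory so they survive restarts.
        placement[field] = ("persistent" if cost >= cost_threshold
                            else "non-persistent")
    return placement

costs = {"user_id": 1.0, "embedding": 9.5, "session_token": 0.2}
print(place_fields(costs, cost_threshold=5.0))
```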
-
Publication number: 20200396311
Abstract: A method for data provisioning a serverless computing cluster. A plurality of user defined functions (UDFs) are received for execution on worker nodes of the serverless computing cluster. For a first UDF, one or more data locations of UDF data needed to execute the first UDF are determined. At a master node of the serverless computing cluster, a plurality of worker node tickets are received, each ticket indicating a resource availability of a corresponding worker node. The one or more data locations and the plurality of worker node tickets are analyzed to determine eligible worker nodes capable of executing the first UDF. The master node transmits a pre-fetch command to one or more of the eligible worker nodes, causing each eligible worker node to become a provisioned worker node for the first UDF by storing pre-fetched first UDF data before the first UDF is assigned for execution.
Type: Application
Filed: August 31, 2020
Publication date: December 17, 2020
Inventors: Komei Shimamura, Amit Kumar Saha, Debojyoti Dutta
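The provisioning flow in the abstract can be approximated in a toy form: the master node filters worker tickets, then issues pre-fetch commands so data is local before assignment. All field names and the eligibility rule below are hypothetical stand-ins.

```python
# Toy version of the master node's matching step: tickets advertise
# resource availability; eligible workers receive a pre-fetch command
# carrying the UDF's data locations.

def eligible_workers(tickets, needed_cpu):
    """Workers whose ticket shows enough free CPU to run the UDF."""
    return [t["worker"] for t in tickets if t["free_cpu"] >= needed_cpu]

def provision(udf, tickets):
    workers = eligible_workers(tickets, udf["cpu"])
    # Pre-fetch command: the worker stores the UDF's data before the
    # UDF itself is assigned for execution.
    return {w: {"prefetch": udf["data_locations"]} for w in workers}

udf = {"name": "udf1", "cpu": 2, "data_locations": ["s3://bucket/part-0"]}
tickets = [{"worker": "w1", "free_cpu": 4},
           {"worker": "w2", "free_cpu": 1},
           {"worker": "w3", "free_cpu": 2}]
commands = provision(udf, tickets)
print(sorted(commands))  # ['w1', 'w3']
```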
-
Publication number: 20200366568
Abstract: Systems, methods, and computer-readable storage media are provided for storing machine-learned models in tiered storage. The model serving network evaluates where each model should be stored based on the model's corresponding service level agreement. A model is generally stored on the lowest-tiered storage device that is still capable of satisfying the model's service level agreement. In this way, the model serving network stores the data at the lowest possible cost.
Type: Application
Filed: May 15, 2019
Publication date: November 19, 2020
Inventors: Johnu George, Amit Kumar Saha
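The tier-selection rule is essentially "cheapest tier that still meets the SLA." The tier names, latencies, and costs below are invented for illustration.

```python
# Sketch of SLA-driven tier selection: walk tiers from cheapest to most
# expensive and pick the first one whose latency satisfies the model's SLA.

TIERS = [  # ordered cheapest -> most expensive
    {"name": "object-store", "latency_ms": 200, "cost": 1},
    {"name": "ssd",          "latency_ms": 20,  "cost": 5},
    {"name": "memory",       "latency_ms": 1,   "cost": 20},
]

def choose_tier(sla_latency_ms):
    for tier in TIERS:
        if tier["latency_ms"] <= sla_latency_ms:
            return tier["name"]
    return TIERS[-1]["name"]  # no tier meets the SLA: fall back to fastest

print(choose_tier(sla_latency_ms=50))   # ssd
print(choose_tier(sla_latency_ms=500))  # object-store
```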
-
Patent number: 10797892
Abstract: Aspects of the disclosed technology relate to ways to determine the optimal storage of data structures across different memory devices associated with physically disparate network nodes. In some aspects, a process of the technology can include steps for receiving a first retrieval request for a first object, searching a local PMEM device for the first object based on the first retrieval request, and, in response to a failure to find the first object on the local PMEM device, transmitting a second retrieval request to a remote node, wherein the second retrieval request is configured to cause the remote node to retrieve the first object from a remote PMEM device. Systems and machine-readable media are also provided.
Type: Grant
Filed: February 27, 2018
Date of Patent: October 6, 2020
Assignee: CISCO TECHNOLOGY, INC.
Inventors: Johnu George, Amit Kumar Saha, Arun Saha, Debojyoti Dutta
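The local-then-remote lookup described above is a two-level read path. The node and device classes below are illustrative stand-ins, with plain dicts in place of PMEM devices.

```python
# Toy two-level retrieval: try the local PMEM device first; on a miss,
# forward a second retrieval request to a remote node.

class Node:
    def __init__(self, pmem):
        self.pmem = pmem  # dict standing in for a PMEM device

    def retrieve_local(self, key):
        return self.pmem.get(key)

def retrieve(key, local, remote):
    obj = local.retrieve_local(key)       # first retrieval request
    if obj is None:
        obj = remote.retrieve_local(key)  # second request, remote node
    return obj

local = Node({"x": "local-value"})
remote = Node({"y": "remote-value"})
print(retrieve("x", local, remote))  # local-value
print(retrieve("y", local, remote))  # remote-value
```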
-
Patent number: 10779248
Abstract: Incorporation of a mesh base station in a wireless network is presented herein. The mesh base station can utilize common wireless resource allocations as a corresponding wireless base station while transmitting to wireless subscriber stations during the same time period. The mesh base station obtains a data packet from the wireless base station over a backhaul link during a scheduled time period and transmits the data packet to the designated wireless subscriber station during another scheduled time period. The wireless base station and the mesh base station can also receive data packets from wireless subscriber stations during a same time period. A wireless network can be configured with two mesh base stations at an approximate boundary of two adjacent sector coverage areas, where a coverage area is supported by a wireless base station and each mesh base station supports wireless subscriber stations within a coverage radius.
Type: Grant
Filed: September 11, 2018
Date of Patent: September 15, 2020
Assignee: AT&T INTELLECTUAL PROPERTY II, L.P.
Inventors: Byoung-Jo J. Kim, Nemmara K. Shankaranarayanan, Amit Kumar Saha
-
Patent number: 10771584
Abstract: A method for data provisioning a serverless computing cluster. A plurality of user defined functions (UDFs) are received for execution on worker nodes of the serverless computing cluster. For a first UDF, one or more data locations of UDF data needed to execute the first UDF are determined. At a master node of the serverless computing cluster, a plurality of worker node tickets are received, each ticket indicating a resource availability of a corresponding worker node. The one or more data locations and the plurality of worker node tickets are analyzed to determine eligible worker nodes capable of executing the first UDF. The master node transmits a pre-fetch command to one or more of the eligible worker nodes, causing each eligible worker node to become a provisioned worker node for the first UDF by storing pre-fetched first UDF data before the first UDF is assigned for execution.
Type: Grant
Filed: November 30, 2017
Date of Patent: September 8, 2020
Assignee: CISCO TECHNOLOGY, INC.
Inventors: Komei Shimamura, Amit Kumar Saha, Debojyoti Dutta
-
Publication number: 20200249065
Abstract: Flow sensors are provided that can provide both leak detection and flow monitoring. The flow monitoring enables a determination whether there are blockages or leaks in a fluid system during normal operation of the system. The leak detection enables detection of leaks when the system is shut off. The flow sensors can use a frusto-conical flow guide to provide a more compact flow sensor.
Type: Application
Filed: April 24, 2020
Publication date: August 6, 2020
Inventors: Kirk Andrew Allen, James Richard Parks, Amit Kumar Saha, Fei Liu, Mark W. Emory, Samuel R. Rulli
-
Patent number: 10691671
Abstract: Systems, methods, and computer-readable media are provided for consistent data to be used for streaming and batch processing. The system includes one or more devices; a processor coupled to the one or more devices; and a non-volatile memory coupled to the processor and the one or more devices, wherein the non-volatile memory stores instructions that are configured to cause the processor to perform operations including receiving data from the one or more devices; validating the data to yield validated data; storing the validated data in a database on the non-volatile memory, the validated data being used for streaming processing and batch processing; and sending the validated data to a remote disk for batch processing.
Type: Grant
Filed: December 21, 2017
Date of Patent: June 23, 2020
Assignee: CISCO TECHNOLOGY, INC.
Inventors: Johnu George, Amit Kumar Saha, Debojyoti Dutta, Madhu S. Kumar, Ralf Rantzau
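The validate-once, fan-out-twice pipeline in this abstract can be sketched directly. The validation rule and the list-based stand-ins for the non-volatile database and remote disk are assumptions for illustration.

```python
# Sketch of the dual-use pipeline: records are validated once, stored in
# a non-volatile database used for streaming processing, and the same
# validated data is forwarded to a remote disk for batch processing.

def validate(record):
    # Stand-in validation: records need an integer id and a numeric value.
    return (isinstance(record.get("id"), int)
            and isinstance(record.get("value"), (int, float)))

def ingest(records, nvm_db, batch_disk):
    for record in records:
        if not validate(record):
            continue  # invalid data never reaches either consumer
        nvm_db.append(record)      # consistent copy for streaming queries
        batch_disk.append(record)  # same validated data for batch jobs

nvm_db, batch_disk = [], []
ingest([{"id": 1, "value": 3.5}, {"bad": True}, {"id": 2, "value": 7}],
       nvm_db, batch_disk)
print(len(nvm_db), len(batch_disk))  # 2 2
```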
-
Patent number: 10634538
Abstract: Flow sensors are provided that can provide both leak detection and flow monitoring. The flow monitoring enables a determination whether there are blockages or leaks in a fluid system during normal operation of the system. The leak detection enables detection of leaks when the system is shut off. The flow sensors can use a frusto-conical flow guide to provide a more compact flow sensor.
Type: Grant
Filed: July 13, 2017
Date of Patent: April 28, 2020
Assignee: Rain Bird Corporation
Inventors: Kirk Andrew Allen, James Richard Parks, Amit Kumar Saha, Fei Liu, Mark W. Emory, Samuel R. Rulli
-
Publication number: 20200089532
Abstract: In one embodiment, a method for FPGA-accelerated serverless computing comprises receiving, from a user, a definition of a serverless computing task comprising one or more functions to be executed. A task scheduler performs an initial placement of the serverless computing task to a first host determined to be a first optimal host for executing the serverless computing task. The task scheduler determines a supplemental placement of a first function to a second host determined to be a second optimal host for accelerating execution of the first function, wherein the first function is not able to be accelerated by one or more FPGAs in the first host. The serverless computing task is executed on the first host and the second host according to the initial placement and the supplemental placement.
Type: Application
Filed: November 25, 2019
Publication date: March 19, 2020
Inventors: Komei Shimamura, Xinyuan Huang, Amit Kumar Saha, Debojyoti Dutta
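The two-step placement above can be approximated with a toy scheduler: every function starts on the initially placed host, and any function that host's FPGAs cannot accelerate gets a supplemental placement. Host records and capability flags are hypothetical.

```python
# Toy scheduler for initial + supplemental placement. Hosts are assumed
# pre-ranked (best first); "fpga_accel" lists the workloads each host's
# FPGAs can accelerate.

def place(task_functions, hosts):
    initial = hosts[0]["name"]
    placement = {fn: initial for fn in task_functions}
    for fn, needs in task_functions.items():
        if needs and needs not in hosts[0]["fpga_accel"]:
            # Supplemental placement: find a host whose FPGAs support it.
            for host in hosts[1:]:
                if needs in host["fpga_accel"]:
                    placement[fn] = host["name"]
                    break
    return placement

hosts = [{"name": "h1", "fpga_accel": {"crypto"}},
         {"name": "h2", "fpga_accel": {"crypto", "video"}}]
task = {"encrypt": "crypto", "transcode": "video", "log": None}
print(place(task, hosts))  # transcode moves to h2; the rest stay on h1
```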
-
Patent number: 10545823
Abstract: The present disclosure involves systems and methods for managing data in a distributed storage system. The distributed storage system may include non-volatile memory (NVM) storage devices and utilize erasure code replication, in which a data object is encoded into K data chunks plus additional coding chunks, for storage of data. A controller may first store at least some of the K data chunks in NVM devices before storing the coding chunks in other storage devices. In addition, the controller may transmit read requests to the NVM devices of the system first to begin receiving data chunks or coding chunks related to the data object. By writing to and reading from NVM devices first, storage and reading of the data object may occur faster than in conventional storage systems.
Type: Grant
Filed: October 13, 2017
Date of Patent: January 28, 2020
Assignee: CISCO TECHNOLOGY, INC.
Inventors: Ramdoot Kumar Pydipaty, Amit Kumar Saha
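The NVM-first write ordering can be sketched with a trivial erasure code. The chunking is simplistic, XOR parity stands in for real coding chunks, and plain lists stand in for devices; none of this is the patented implementation.

```python
# Sketch of NVM-first ordering: the K data chunks are written to fast
# NVM devices before the coding chunk goes to a slower device.

def split_chunks(data, k):
    """Split data into k roughly equal data chunks."""
    size = -(-len(data) // k)  # ceiling division
    return [data[i * size:(i + 1) * size] for i in range(k)]

def xor_parity(chunks):
    """XOR parity as a stand-in for a real coding chunk."""
    parity = bytearray(max(len(c) for c in chunks))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def write_object(data, k, nvm_devices, slow_device):
    order = []
    chunks = split_chunks(data, k)
    for dev, chunk in zip(nvm_devices, chunks):
        dev.append(chunk)                       # data chunks hit NVM first...
        order.append("nvm")
    slow_device.append(xor_parity(chunks))      # ...then the coding chunk
    order.append("slow")
    return order

nvm, slow = [[], [], []], []
print(write_object(b"abcdef", 3, nvm, slow))  # ['nvm', 'nvm', 'nvm', 'slow']
```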
-
Patent number: 10547524
Abstract: In one embodiment, a server determines a trigger to diagnose a software as a service (SaaS) pipeline for a SaaS client, and sends a notification to a plurality of SaaS nodes in the pipeline that the client is in a diagnostic mode, the notification causing the plurality of SaaS nodes to establish taps to collect diagnostic information for the client. The server may then send client-specific diagnostic messages into the SaaS pipeline for the client, the client-specific diagnostic messages causing the taps on the plurality of SaaS nodes to collect client-specific diagnostic information and send the client-specific diagnostic information to the server. The server then receives the client-specific diagnostic information from the plurality of SaaS nodes, and creates a client-specific diagnostic report based on the client-specific diagnostic information.
Type: Grant
Filed: April 27, 2017
Date of Patent: January 28, 2020
Assignee: Cisco Technology, Inc.
Inventors: Timothy Okwii, Amit Kumar Saha, Debojyoti Dutta
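The tap-based diagnosis flow can be mimicked in miniature: enable per-client taps on each node, inject a client-specific probe, and collect the tap output into a report. All class and field names are invented for this sketch.

```python
# Toy tap-based diagnosis: nodes only emit diagnostic records for clients
# that have been placed in diagnostic mode.

class SaaSNode:
    def __init__(self, name):
        self.name = name
        self.tapped_clients = set()

    def enable_tap(self, client):
        self.tapped_clients.add(client)

    def process(self, message):
        # The tap fires only for clients in diagnostic mode.
        if message["client"] in self.tapped_clients:
            return {"node": self.name, "client": message["client"]}
        return None

def diagnose(client, pipeline):
    for node in pipeline:
        node.enable_tap(client)  # notification: client enters diagnostic mode
    probe = {"client": client, "diagnostic": True}
    # Collect tap output from every node into a client-specific report.
    return [r for node in pipeline if (r := node.process(probe))]

pipeline = [SaaSNode("ingest"), SaaSNode("transform"), SaaSNode("store")]
report = diagnose("client-42", pipeline)
print([entry["node"] for entry in report])  # ['ingest', 'transform', 'store']
```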
-
Patent number: 10489195
Abstract: In one embodiment, a method for FPGA-accelerated serverless computing comprises receiving, from a user, a definition of a serverless computing task comprising one or more functions to be executed. A task scheduler performs an initial placement of the serverless computing task to a first host determined to be a first optimal host for executing the serverless computing task. The task scheduler determines a supplemental placement of a first function to a second host determined to be a second optimal host for accelerating execution of the first function, wherein the first function is not able to be accelerated by one or more FPGAs in the first host. The serverless computing task is executed on the first host and the second host according to the initial placement and the supplemental placement.
Type: Grant
Filed: July 20, 2017
Date of Patent: November 26, 2019
Assignee: CISCO TECHNOLOGY, INC.
Inventors: Komei Shimamura, Xinyuan Huang, Amit Kumar Saha, Debojyoti Dutta
-
Publication number: 20190340310
Abstract: An automatic water distribution design generating system enables those with minimal technical training to obtain complex water distribution designs after receiving the requisite inputs. A portal allows a consumer or contractor to enter data relevant to the area to be watered, or other data as needed, to properly define the water distribution needs of the area. The portal accesses mathematical or technical equations to generate a design for the system and conveys the design, with the required materials, to a contractor.
Type: Application
Filed: October 23, 2018
Publication date: November 7, 2019
Inventor: Amit Kumar Saha
-
Publication number: 20190208011
Abstract: A method for accelerating data operations across a plurality of nodes of one or more clusters of a distributed computing environment. Rack awareness information characterizing the plurality of nodes is retrieved and a non-volatile memory (NVM) capability of each node is determined. A write operation is received at a management node of the plurality of nodes and one or more of the rack awareness information and the NVM capability of the plurality of nodes are analyzed to select one or more nodes to receive at least a portion of the write operation, wherein at least one of the selected nodes has an NVM capability. A multicast group for the write operation is then generated wherein the selected nodes are subscribers of the multicast group, and the multicast group is used to perform hardware accelerated read or write operations at one or more of the selected nodes.
Type: Application
Filed: December 28, 2017
Publication date: July 4, 2019
Inventors: Debojyoti Dutta, Amit Kumar Saha, Johnu George, Ramdoot Kumar Pydipaty, Marc Solanas Tarre
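The node-selection step can be sketched as combining the two signals the abstract names: rack awareness and NVM capability. The specific policy below (prefer NVM nodes, spread across distinct racks) is a guess at the general idea, not the patented algorithm.

```python
# Sketch of subscriber selection for the write's multicast group:
# rank NVM-capable nodes first, then spread replicas across racks.

def select_subscribers(nodes, replicas):
    ranked = sorted(nodes, key=lambda n: not n["nvm"])  # NVM nodes first
    chosen, racks = [], set()
    for node in ranked:
        if node["rack"] not in racks:  # rack awareness: one per rack
            chosen.append(node["name"])
            racks.add(node["rack"])
        if len(chosen) == replicas:
            break
    return chosen  # these become subscribers of the multicast group

nodes = [{"name": "n1", "rack": "r1", "nvm": False},
         {"name": "n2", "rack": "r1", "nvm": True},
         {"name": "n3", "rack": "r2", "nvm": True},
         {"name": "n4", "rack": "r3", "nvm": False}]
print(select_subscribers(nodes, replicas=3))  # ['n2', 'n3', 'n4']
```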
-
Publication number: 20190197146
Abstract: Systems, methods, and computer-readable media are provided for consistent data to be used for streaming and batch processing. The system includes one or more devices; a processor coupled to the one or more devices; and a non-volatile memory coupled to the processor and the one or more devices, wherein the non-volatile memory stores instructions that are configured to cause the processor to perform operations including receiving data from the one or more devices; validating the data to yield validated data; storing the validated data in a database on the non-volatile memory, the validated data being used for streaming processing and batch processing; and sending the validated data to a remote disk for batch processing.
Type: Application
Filed: December 21, 2017
Publication date: June 27, 2019
Inventors: Johnu George, Amit Kumar Saha, Debojyoti Dutta, Madhu S. Kumar, Ralf Rantzau
-
Publication number: 20190182128
Abstract: In one embodiment, a method implements virtualized network functions in a serverless computing system having networked hardware resources. An interface of the serverless computing system receives a specification for a network service including a virtualized network function (VNF) forwarding graph (FG). A mapper of the serverless computing system determines an implementation graph comprising edges and vertices based on the specification. A provisioner of the serverless computing system provisions a queue in the serverless computing system for each edge. The provisioner further provisions a function in the serverless computing system for each vertex, wherein, for at least one or more functions, each such function reads incoming messages from at least one queue. The serverless computing system then processes data packets through the queues and functions in accordance with the VNF FG.
Type: Application
Filed: February 20, 2019
Publication date: June 13, 2019
Inventors: Komei Shimamura, Amit Kumar Saha, Debojyoti Dutta
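The edge-to-queue, vertex-to-function mapping can be shown with a tiny two-function chain. The chain representation and function names are deliberately simplified assumptions; real VNF forwarding graphs are richer than a linear chain.

```python
# Toy mapping from a VNF forwarding graph to queues and functions: each
# edge becomes a queue, each vertex a function that reads from its
# inbound queue and writes to its outbound one.

from collections import deque

def build_queues(edges):
    return {edge: deque() for edge in edges}  # one queue per FG edge

def run_packet(packet, chain, queues, functions):
    # chain is an ordered list of (in_edge, vertex, out_edge) triples.
    for in_edge, vertex, out_edge in chain:
        if in_edge:
            packet = queues[in_edge].popleft()  # read incoming message
        packet = functions[vertex](packet)      # apply the VNF
        if out_edge:
            queues[out_edge].append(packet)     # forward along the FG
    return packet

functions = {"firewall": lambda p: p | {"fw": "pass"},
             "nat": lambda p: p | {"src": "10.0.0.1"}}
queues = build_queues(["fw->nat"])
chain = [(None, "firewall", "fw->nat"), ("fw->nat", "nat", None)]
print(run_packet({"src": "192.168.1.5"}, chain, queues, functions))
```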
-
Publication number: 20190166221
Abstract: A method for data provisioning a serverless computing cluster. A plurality of user defined functions (UDFs) are received for execution on worker nodes of the serverless computing cluster. For a first UDF, one or more data locations of UDF data needed to execute the first UDF are determined. At a master node of the serverless computing cluster, a plurality of worker node tickets are received, each ticket indicating a resource availability of a corresponding worker node. The one or more data locations and the plurality of worker node tickets are analyzed to determine eligible worker nodes capable of executing the first UDF. The master node transmits a pre-fetch command to one or more of the eligible worker nodes, causing each eligible worker node to become a provisioned worker node for the first UDF by storing pre-fetched first UDF data before the first UDF is assigned for execution.
Type: Application
Filed: November 30, 2017
Publication date: May 30, 2019
Inventors: Komei Shimamura, Amit Kumar Saha, Debojyoti Dutta
-
Publication number: 20190147070
Abstract: Systems, methods, and computer-readable media for managing the storage of data in a data storage system using a client tag. In some examples, a first portion of a data load, sent as part of a transaction, and a client identifier that uniquely identifies a client are received from the client at a data storage system. The transaction can be tagged with a client tag including the client identifier, and the first portion of the data load can be stored in storage at the data storage system. A first log entry including the client tag is added to a data storage log in response to storing the first portion of the data load in the storage. The first log entry is then written from the data storage log to a persistent storage log in persistent memory, which is used to track the progress of storing the data load in the storage.
Type: Application
Filed: November 13, 2017
Publication date: May 16, 2019
Inventors: Ralf Rantzau, Madhu S. Kumar, Johnu George, Amit Kumar Saha, Debojyoti Dutta
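The client-tag logging scheme reduces to: store a portion, append a tagged entry to the data storage log, and write that entry through to a persistent progress log. The record layout below is illustrative, with lists standing in for storage and the two logs.

```python
# Minimal sketch of client-tagged write logging: every stored portion of
# a data load adds a tagged entry to the data storage log, which is then
# written through to a persistent log that tracks progress.

def store_portion(client_id, txn_id, portion,
                  storage, data_log, persistent_log):
    storage.append(portion)                       # store the data portion
    entry = {"client": client_id, "txn": txn_id,  # client tag on the entry
             "portion": portion["seq"]}
    data_log.append(entry)                        # data storage log
    persistent_log.append(entry)                  # persistent progress log

storage, data_log, persistent_log = [], [], []
for seq in range(3):
    store_portion("client-7", "txn-1", {"seq": seq, "data": b"..."},
                  storage, data_log, persistent_log)

# The persistent log can now answer how far txn-1 has progressed.
progress = max(e["portion"] for e in persistent_log if e["txn"] == "txn-1")
print(progress)  # 2
```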