Patents by Inventor Puneet Agarwal

Puneet Agarwal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11159455
    Abstract: Ingress packet processors in a device receive network packets from ingress ports. A crossbar in the device receives, from the ingress packet processors, packet data of the packets and transmits information about the packet data to a plurality of traffic managers in the device. Each traffic manager computes a total amount of packet data to be written to buffers across the plurality of traffic managers, where each traffic manager manages one or more buffers that store packet data. Each traffic manager compares the total amount of packet data to one or more threshold values. Upon determining that the total amount of packet data is equal to or greater than a threshold value, each traffic manager drops a portion of the packet data, and writes a remaining portion of the packet data to the buffers managed by the traffic manager.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: October 26, 2021
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Mohammad Kamel Issa, Ajit K. Jain
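The aggregate-threshold admission check this abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation: the `TrafficManager` class, the single `THRESHOLD` constant, and the use of Python lists for hardware buffers are all assumptions for readability.

```python
# Illustrative sketch: each traffic manager compares the device-wide total of
# buffered packet data against a threshold and drops the overflowing portion.
# THRESHOLD and the list-based buffer are stand-ins for hardware state.

THRESHOLD = 100  # total cells allowed across all traffic managers (illustrative)

class TrafficManager:
    def __init__(self):
        self.buffer = []

    def admit(self, packet_cells, peer_totals):
        """Admit incoming cells; once the aggregate total meets the threshold,
        drop the overflow and write only the remaining portion."""
        total = sum(peer_totals) + len(self.buffer) + len(packet_cells)
        if total >= THRESHOLD:
            keep = max(0, THRESHOLD - sum(peer_totals) - len(self.buffer))
            packet_cells = packet_cells[:keep]  # drop the portion beyond the cap
        self.buffer.extend(packet_cells)
        return len(packet_cells)  # number of cells actually written
```

A real device would also distinguish multiple threshold values and per-queue accounting; this sketch shows only the aggregate comparison.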
  • Patent number: 11128561
    Abstract: Automatic load-balancing techniques in a network device are used to select, from a multipath group, a path to assign to a flow based on observed state attributes such as path state(s), device state(s), port state(s), or queue state(s) of the paths. A mapping of the path previously assigned to a flow or group of flows (e.g., on account of having then been optimal in view of the observed state attributes) is maintained, for example, in a table. So long as the flow(s) are active and the path is still valid, the mapped path is selected for subsequent data units belonging to the flow(s), which may, among other effects, avoid or reduce packet re-ordering. However, if the flow(s) go idle, or if the mapped path fails, a new optimal path may be assigned to the flow(s) from the multipath group.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: September 21, 2021
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Rupa Budhia
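The sticky flow-to-path mapping described above can be sketched with a small table keyed by flow. The timeout constant, the `select_best` callback, and the tuple layout are illustrative assumptions; the patent's observed state attributes are richer than this.

```python
# Illustrative sketch: a flow keeps its previously assigned path while it stays
# active and the path remains valid; otherwise a new optimal path is assigned.

IDLE_TIMEOUT = 5.0  # seconds of inactivity before a flow's mapping expires

class PathTable:
    def __init__(self, select_best):
        self.select_best = select_best   # callable returning the current optimal path
        self.table = {}                  # flow_id -> (path, last_seen)
        self.valid_paths = set()

    def assign(self, flow_id, now):
        entry = self.table.get(flow_id)
        if entry:
            path, last_seen = entry
            # reuse the mapped path while the flow is active and the path is valid,
            # which avoids packet re-ordering for the flow
            if now - last_seen < IDLE_TIMEOUT and path in self.valid_paths:
                self.table[flow_id] = (path, now)
                return path
        # flow went idle, or the mapped path failed: assign a new optimal path
        path = self.select_best()
        self.table[flow_id] = (path, now)
        return path
```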
  • Patent number: 11099902
    Abstract: Distributed machine learning systems and other distributed computing systems are improved by embedding compute logic at the network switch level to perform collective actions, such as reduction operations, on gradients or other data processed by the nodes of the system. The switch is configured to recognize data units that carry data associated with a collective action that needs to be performed by the distributed system, referred to herein as “compute data,” and process that data using a compute subsystem within the switch. The compute subsystem includes a compute engine that is configured to perform various operations on the compute data, such as “reduction” operations, and forward the results back to the compute nodes. The reduction operations may include, for instance, summation, averaging, bitwise operations, and so forth. In this manner, the network switch may take over some or all of the processing of the distributed system during the collective phase.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: August 24, 2021
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
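The reduction operation at the heart of this abstract (and of the related patents 11057318, 10931588, and 10931602 below) can be sketched in a few lines. The function name and pure-Python vectors are illustrative; a switch compute engine would operate on packetized tensors in hardware.

```python
# Illustrative sketch: element-wise reduction over per-worker gradient vectors,
# as a switch compute subsystem might perform during the collective phase.

def reduce_gradients(contributions, op="sum"):
    """Combine equal-length gradient vectors from each worker.
    op="sum" adds them; op="average" divides the sum by the worker count."""
    n = len(contributions)
    result = [0.0] * len(contributions[0])
    for grad in contributions:
        for i, g in enumerate(grad):
            result[i] += g
    if op == "average":
        result = [r / n for r in result]
    return result  # forwarded back to the compute nodes
```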
  • Patent number: 11075847
    Abstract: In response to certain events in a network device, visibility packets may be generated. A visibility packet may be or comprise at least a portion of a packet that is in some way associated with the event, such as a packet that was dropped as a result of an event, or that was in a queue at the time of an event related to that queue. A visibility packet may be tagged or otherwise indicated as a visibility packet. The network device may include one or more visibility samplers through which visibility packets are routed on their way to a visibility queue, visibility subsystem, and/or out of the network device. The samplers allow only a limited sample of the visibility packets that they receive to pass through, essentially acting as filters that reduce the number of visibility packets to be processed.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: July 27, 2021
    Assignee: Innovium, Inc.
    Inventors: Bruce Hui Kwan, William Brad Matthews, Puneet Agarwal
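A visibility sampler of the kind described above can be sketched as a simple 1-in-N filter. The counter-based policy is an assumption here; the patent covers sampling more generally.

```python
# Illustrative sketch: pass only one of every N visibility packets, filtering
# the rest so downstream visibility logic is not overwhelmed.

class VisibilitySampler:
    def __init__(self, rate_n):
        self.rate_n = rate_n  # pass 1 of every rate_n visibility packets
        self.count = 0

    def sample(self, packet):
        """Return the packet if it passes the sampler, else None (filtered out)."""
        self.count += 1
        if self.count % self.rate_n == 0:
            return packet
        return None
```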
  • Patent number: 11057318
    Abstract: Distributed machine learning systems and other distributed computing systems are improved by compute logic embedded in extension modules coupled directly to network switches. The compute logic performs collective actions, such as reduction operations, on gradients or other compute data processed by the nodes of the system. The reduction operations may include, for instance, summation, averaging, bitwise operations, and so forth. In this manner, the extension modules may take over some or all of the processing of the distributed system during the collective phase. An inline version of the module sits between a switch and the network. Data units carrying compute data are intercepted and processed using the compute logic, while other data units pass through the module transparently to or from the switch. Multiple modules may be connected to the switch, each coupled to a different group of nodes, and sharing intermediate results. A sidecar version is also described.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: July 6, 2021
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
  • Patent number: 11057307
    Abstract: Approaches, techniques, and mechanisms are disclosed for assigning paths to network packets. The path assignment techniques utilize path state information and/or other criteria to determine whether to route a packet along a primary candidate path selected for the packet, or one or more alternative candidate paths selected for the packet. According to an embodiment, network traffic is at least partially balanced by redistributing only a portion of the traffic that would have been assigned to a given primary path. Move-eligibility criteria are applied to traffic to determine whether a given packet is eligible for reassignment from a primary path to an alternative path. The move-eligibility criteria determine which portion of the network traffic to move and which portion to allow to proceed as normal. In an embodiment, the criteria and functions used to determine whether a packet is redistributable are adjusted over time based on path state information.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: July 6, 2021
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Meg Pei Lin, Rupa Budhia
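The move-eligibility idea above can be sketched as follows. The hash-slice criterion is an illustrative stand-in for the patent's criteria, and all names are hypothetical.

```python
# Illustrative sketch: route on the primary path unless it is congested AND the
# flow falls in the move-eligible slice of traffic; only that slice may be
# redistributed to an alternative path.

def pick_path(flow_id, primary, alternate, primary_congested,
              move_fraction=0.25, buckets=100):
    """Return the path for this flow's packets."""
    if not primary_congested:
        return primary
    # only a fraction of flows (here, by hash bucket) is eligible to move
    eligible = (hash(flow_id) % buckets) < move_fraction * buckets
    return alternate if eligible else primary
```

Adjusting `move_fraction` over time based on path state corresponds to the abstract's note that the redistribution criteria are tuned dynamically.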
  • Patent number: 11044204
    Abstract: Nodes within a network are configured to adapt to changing path states, due to congestion, node failures, and/or other factors. A node may selectively convey path information and/or other state information to another node by annotating the information into packets it receives from the other node. A node may selectively reflect these annotated packets back to the other node, or other nodes that subsequently receive these annotated packets may reflect them. A weighted cost multipathing selection technique is improved by dynamically adjusting weights of paths in response to feedback indicating the current state of the network topology, such as collected through these reflected packets. In an embodiment, certain packets that would have been dropped may instead be transformed into “special visibility” packets that may be stored and/or sent for analysis. In an embodiment, insight into the performance of a network device is enhanced through the use of programmable visibility engines.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: June 22, 2021
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
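The weighted-cost multipathing adjustment described above can be sketched with two small functions. The inverse-delay weighting rule and the hash-to-weight mapping are illustrative assumptions, not the patented formulas.

```python
# Illustrative sketch: shrink a path's weight as reflected-packet feedback
# reports growing delay, then map a packet's hash key onto a path in
# proportion to the current weights.

def update_weights(weights, feedback):
    """feedback maps path -> observed delay; heavier delay lowers the weight."""
    return {p: w / (1.0 + feedback.get(p, 0.0)) for p, w in weights.items()}

def pick_weighted(weights, key):
    """Deterministically map a hash key onto a path, proportionally to weight."""
    total = sum(weights.values())
    point = (key % 1000) / 1000.0 * total
    for path, w in sorted(weights.items()):
        if point < w:
            return path
        point -= w
    return max(sorted(weights))  # fallback for rounding at the top edge
```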
  • Patent number: 11023686
    Abstract: Conversational systems must be capable of handling more sophisticated interactions than merely providing factual answers. Such interactions are handled by resolving abstract anaphoric references, which include antecedent fact references and posterior fact references. The present disclosure resolves abstract anaphoric references in conversational systems using hierarchically stacked neural networks. A deep hierarchical maxpool network based model is used to obtain a representation of each utterance received from users and a representation of one or more generated sequences of utterances. The obtained representations are further used to identify contextual dependencies within the one or more generated sequences, which helps in resolving abstract anaphoric references in conversational systems.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: June 1, 2021
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Puneet Agarwal, Prerna Khurana, Gautam Shroff, Lovekesh Vig
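The maxpool representation step in this abstract can be illustrated in miniature: pool word vectors into an utterance vector, then pool utterance vectors into a sequence vector. The tiny integer vectors are toy assumptions; the patented model is a trained hierarchically stacked neural network, not this pooling alone.

```python
# Toy sketch of hierarchical maxpooling: an utterance is the element-wise max
# over its word vectors, and a sequence of utterances is the element-wise max
# over the utterance vectors.

def maxpool(vectors):
    """Element-wise max over a list of equal-length vectors."""
    return [max(col) for col in zip(*vectors)]

def utterance_repr(word_vectors):
    return maxpool(word_vectors)

def context_repr(utterances):
    # pool each utterance first, then pool across the sequence
    return maxpool([utterance_repr(u) for u in utterances])
```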
  • Patent number: 11006848
    Abstract: The present invention relates to a novel system and method to diagnose and predict systemic disorders, including brain disorders and mental states, at an early stage and with greater accuracy. More particularly, this invention relates to a novel method of EEG recording and processing in which multiple output data streams from a system such as the brain are taken together and the structure of their correlation matrix is studied through its eigenvectors, eigendirections, and eigenspaces, along with other signal processing techniques including compressed sensing, wavelet transform, and fast Fourier transform.
    Type: Grant
    Filed: January 18, 2016
    Date of Patent: May 18, 2021
    Inventors: Puneet Agarwal, Siddharth Panwar
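The correlation-matrix step this abstract describes can be illustrated for the two-channel case, where the eigenvalues of the correlation matrix [[1, r], [r, 1]] are available in closed form. This pure-Python toy is an assumption for illustration; a real pipeline would use many channels and numerical linear algebra libraries.

```python
# Toy sketch: Pearson correlation between two EEG channels, and the analytic
# eigenvalues of the resulting 2x2 correlation matrix [[1, r], [r, 1]].

def correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def eig2x2_symmetric(r):
    """Eigenvalues of [[1, r], [r, 1]]: 1 + |r| and 1 - |r|."""
    return 1.0 + abs(r), 1.0 - abs(r)
```

Perfectly correlated channels collapse onto one eigendirection (eigenvalues 2 and 0), which is the kind of structure the abstract's eigenspace analysis examines.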
  • Patent number: 10931588
    Abstract: Distributed machine learning systems and other distributed computing systems are improved by embedding compute logic at the network switch level to perform collective actions, such as reduction operations, on gradients or other data processed by the nodes of the system. The switch is configured to recognize data units that carry data associated with a collective action that needs to be performed by the distributed system, referred to herein as “compute data,” and process that data using a compute subsystem within the switch. The compute subsystem includes a compute engine that is configured to perform various operations on the compute data, such as “reduction” operations, and forward the results back to the compute nodes. The reduction operations may include, for instance, summation, averaging, bitwise operations, and so forth. In this manner, the network switch may take over some or all of the processing of the distributed system during the collective phase.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: February 23, 2021
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
  • Patent number: 10931602
    Abstract: Distributed machine learning systems and other distributed computing systems are improved by embedding compute logic at the network switch level to perform collective actions, such as reduction operations, on gradients or other data processed by the nodes of the system. The switch is configured to recognize data units that carry data associated with a collective action that needs to be performed by the distributed system, referred to herein as “compute data,” and process that data using a compute subsystem within the switch. The compute subsystem includes a compute engine that is configured to perform various operations on the compute data, such as “reduction” operations, and forward the results back to the compute nodes. The reduction operations may include, for instance, summation, averaging, bitwise operations, and so forth. In this manner, the network switch may take over some or all of the processing of the distributed system during the collective phase.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: February 23, 2021
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
  • Patent number: 10891438
    Abstract: Systems and methods for Deep Learning techniques based multi-purpose conversational agents for processing natural language queries. Traditional systems and methods provide conversational systems for processing natural language queries but do not employ Deep Learning techniques, and thus are unable to process a large number of intents. Embodiments of the present disclosure provide Deep Learning techniques based multi-purpose conversational agents for processing the natural language queries by defining and logically integrating a plurality of components comprising multi-purpose conversational agents, identifying an appropriate agent to process one or more natural language queries via a High Level Intent Identification technique, predicting a probable user intent, classifying the query, and generating a set of responses by querying or updating one or more knowledge graphs.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: January 12, 2021
    Assignee: Tata Consultancy Services Limited
    Inventors: Mahesh Prasad Singh, Puneet Agarwal, Ashish Chaudhary, Gautam Shroff, Prerna Khurana, Mayur Patidar, Vivek Bisht, Rachit Bansal, Prateek Sachan, Rohit Kumar
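The High Level Intent Identification routing step above can be illustrated with a toy dispatcher. The keyword scorer, the agent names, and their vocabularies are all stand-in assumptions; the patent uses trained Deep Learning classifiers, not keyword overlap.

```python
# Toy sketch: route a query to the conversational agent whose domain best
# matches it, before that agent classifies the fine-grained intent.

AGENTS = {
    "hr": {"leave", "salary", "holiday"},
    "it": {"laptop", "password", "network"},
}

def route_query(query):
    """Return the best-matching agent name, or None if no agent matches."""
    words = set(query.lower().split())
    scores = {name: len(words & vocab) for name, vocab in AGENTS.items()}
    best = max(sorted(scores), key=lambda n: scores[n])
    return best if scores[best] > 0 else None
```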
  • Patent number: 10868769
    Abstract: To more efficiently utilize buffer resources, schedulers within a traffic manager may generate and queue read instructions for reading buffered portions of data units that are ready to be sent to the egress blocks. The traffic manager may be configured to select a read instruction for a given buffer bank from the read instruction queues based on a scoring mechanism or other selection logic. To avoid sending too much data to an egress block during a given time slot, once a data unit portion has been read from the buffer, it may be temporarily stored in a shallow read data cache. Alternatively, a single, non-bank specific controller may determine all of the read instructions and write operations that should be executed in a given time slot. The read instruction queue architecture may be duplicated for link memories and other memories in addition to the buffer memory.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: December 15, 2020
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Bruce Hui Kwan
  • Patent number: 10868768
    Abstract: When a measure of buffer space queued for garbage collection in a network device grows beyond a certain threshold, one or more actions are taken to decrease the enqueue rate of certain classes of traffic, such as multicast traffic, whose reception may have caused and/or be likely to exacerbate garbage-collection-related performance issues. When the amount of buffer space queued for garbage collection shrinks to an acceptable level, these one or more actions may be reversed. In an embodiment, to more optimally handle multi-destination traffic, queue admission control logic for high-priority multi-destination data units, such as mirrored traffic, may be performed for each destination of the data units prior to linking the data units to a replication queue. If a high-priority multi-destination data unit is admitted to any queue, it can no longer be dropped, and is linked to a replication queue for replication.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: December 15, 2020
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Bruce Hui Kwan, Ajit Kumar Jain
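The threshold-driven throttling in this abstract can be sketched with a simple hysteresis loop. The watermark values and the single "throttle multicast" action are illustrative assumptions.

```python
# Illustrative sketch: throttle multicast enqueues while GC-pending buffer
# space is above a high watermark; resume once it drains below a low one.

HIGH_WATER = 80   # start throttling above this much GC-pending buffer
LOW_WATER = 40    # stop throttling once back below this level

class GcThrottle:
    def __init__(self):
        self.throttling = False

    def update(self, gc_pending):
        if gc_pending > HIGH_WATER:
            self.throttling = True
        elif gc_pending < LOW_WATER:
            self.throttling = False   # action reversed when space recovers
        return self.throttling

    def admit(self, traffic_class, gc_pending):
        """Admit unicast always; admit multicast only when not throttling."""
        throttling = self.update(gc_pending)
        return traffic_class != "multicast" or not throttling
```

The two watermarks give hysteresis, so the device does not oscillate between throttled and unthrottled states around a single threshold.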
  • Patent number: 10846225
    Abstract: A logical bank may comprise multiple physical banks across which logical blocks, such as buffer entries, are striped. Data structures are stored in the logical blocks, with each logical block storing no more than one data structure. When writing a data structure, if the data structure is less than half the logical block size, one or more duplicate copies of the data structure may be stored in the otherwise unused physical blocks of the logical block. Before executing a first read instruction to read a first data structure from a first logical block, if the first data structure can be read without accessing one or more of the physical banks, a second read instruction for a second data structure may be analyzed to determine if there is a copy of a second data structure within the one or more unneeded physical banks. If so, the two read instructions are consolidated.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: November 24, 2020
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Bruce Hui Kwan
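The duplicate-copy trick above can be sketched in miniature: a short data structure striped across a logical block is replicated into the otherwise unused physical banks, so a later read can be served from whichever banks a concurrent read leaves free. The bank count and function names are illustrative assumptions.

```python
# Illustrative sketch: write a structure with duplicate copies across the
# physical banks of one logical block, then serve a read from any copy whose
# banks are not busy with another read this cycle.

NUM_BANKS = 4

def write_striped(structure):
    """Stripe a structure across banks, filling leftover banks with copies
    when the structure occupies at most half the logical block."""
    banks = [None] * NUM_BANKS
    width = len(structure)
    copies = NUM_BANKS // width if width <= NUM_BANKS // 2 else 1
    for c in range(copies):
        for i, word in enumerate(structure):
            banks[c * width + i] = word
    return banks

def read_from_free_banks(banks, width, busy):
    """Serve a read using any copy that avoids the busy banks."""
    for start in range(0, NUM_BANKS - width + 1, width):
        span = range(start, start + width)
        if all(b not in busy for b in span) and banks[start] is not None:
            return [banks[i] for i in span]
    return None  # no conflict-free copy available this cycle
```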
  • Patent number: 10795873
    Abstract: Certain hash-based operations in network devices and other devices, such as mapping and/or lookup operations, are improved by manipulating a hash key prior to executing a hash function on the hash key and/or by manipulating outputs of a hash function. A device may be configured to manipulate hash keys and/or outputs using manipulation logic based on one or more predefined manipulation values. A similar hash-based operation may be performed by multiple devices within a network of computing devices. Different devices may utilize different predefined manipulation values for their respective implementations of the manipulation logic. For instance, each device may assign itself a random mask value for key transformation logic as part of an initialization process when the device powers up and/or each time the device reboots. In an embodiment, described techniques may increase the entropy of hashing function outputs in certain contexts, thereby increasing the effectiveness of certain hashing functions.
    Type: Grant
    Filed: November 22, 2016
    Date of Patent: October 6, 2020
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal
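The per-device key manipulation above can be sketched with an XOR mask applied before hashing. The 32-bit mask width and the FNV-1a-style hash are illustrative choices, not the patent's specified functions.

```python
# Illustrative sketch: each device XORs hash keys with its own random mask
# (chosen at power-up), so identical keys hash differently across devices,
# increasing the entropy of hash outputs network-wide.

import random

class MaskedHasher:
    def __init__(self, seed=None):
        # per-device random mask, assigned at initialization / reboot
        self.mask = random.Random(seed).getrandbits(32)

    def hash_key(self, key):
        key ^= self.mask           # manipulate the key before hashing
        h = 2166136261             # 32-bit FNV-1a over the key's 4 bytes
        for _ in range(4):
            h = ((h ^ (key & 0xFF)) * 16777619) & 0xFFFFFFFF
            key >>= 8
        return h
```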
  • Patent number: 10769157
    Abstract: This disclosure relates generally to data processing, and more particularly to a system and a method for mapping heterogeneous data sources. For a product being sold globally, there might be one global database listing characteristics of the product alongside various local databases; systems and methods for mapping attributes of entities across such sources are disclosed. In an embodiment, the system uses a combination of a Supervised Bayesian Model (SBM) and an Unsupervised Textual Similarity (UTS) model for data analysis. A weighted ensemble of the SBM and the UTS is used, wherein the ensemble is weighted based on a confidence measure. The system, by performing data processing, identifies data matches between the different data sources being compared (local databases and a corresponding global database) and, based on the matching data found, performs mapping between the local databases and the global database.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: September 8, 2020
    Assignee: Tata Consultancy Services Limited
    Inventors: Karamjit Singh, Garima Gupta, Gautam Shroff, Puneet Agarwal
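The confidence-weighted ensemble above can be sketched as a convex blend of the two model scores. The linear blend and the 0.5 match threshold are illustrative assumptions; the patent specifies only that the ensemble is weighted by a confidence measure.

```python
# Illustrative sketch: blend the Supervised Bayesian Model (SBM) score with
# the Unsupervised Textual Similarity (UTS) score, weighted by how confident
# the supervised model is.

def ensemble_score(sbm_score, uts_score, sbm_confidence):
    """Give the SBM a weight equal to its (clamped) confidence; the UTS score
    receives the remaining weight."""
    w = max(0.0, min(1.0, sbm_confidence))
    return w * sbm_score + (1.0 - w) * uts_score

def is_match(sbm_score, uts_score, sbm_confidence, threshold=0.5):
    """Declare an attribute match when the blended score clears a threshold."""
    return ensemble_score(sbm_score, uts_score, sbm_confidence) >= threshold
```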
  • Patent number: 10764208
    Abstract: A distributed switch architecture supports very high bandwidth applications. For instance, the distributed switch architecture may be implemented for cloud networks. The architecture scales by organizing traffic management components into tiled structures with distributed buffering. The tile structures are replicated and interconnected to perform transfers from ingress to egress using an interconnect bandwidth scheduling algorithm. Bandwidth scaling may be achieved by adding more tiles to achieve higher bandwidth. The interconnect in the architecture may be swapped out depending on implementation parameters, e.g., physical efficiency.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: September 1, 2020
    Assignee: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED
    Inventors: Amit Kumar, William Brad Matthews, Bruce Hui Kwan, Puneet Agarwal
  • Patent number: 10742558
    Abstract: A traffic manager is shared amongst two or more egress blocks of a network device, thereby allowing traffic management resources to be shared between the egress blocks. Among other aspects, this may reduce power demands and allow a larger amount of buffer memory to be available to a given egress block that may be experiencing high traffic loads. Optionally, the shared traffic manager may be leveraged to reduce the resources required to handle data units on ingress. Rather than buffer the entire unit in the ingress buffers, an arbiter may be configured to buffer only the control portion of the data unit. The payload of the data unit, by contrast, is forwarded directly to the shared traffic manager, where it is placed in the egress buffers. Because the payload is not being buffered in the ingress buffers, the ingress buffer memory may be greatly reduced.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: August 11, 2020
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Bruce Hui Kwan
  • Patent number: 10735337
    Abstract: A network traffic manager receives, from an ingress port, a cell of a packet destined for an egress port. Upon determining that a number of cells of the packet stored in a buffer queue meets a threshold value, the manager checks whether the ingress port has been assigned a token corresponding to the queue. Upon determining that the ingress port has been assigned the token, the manager determines whether other cells of the packet are stored in the buffer, in response to which the manager stores the received cell in the buffer, and stores linking information for the received cell in a receive context for the packet. When all cells of the packet have been received, the manager copies linking information for the packet cells from the receive context to the buffer queue or a copy generator queue, and releases the token from the ingress port.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: August 4, 2020
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Puneet Agarwal, Bruce H. Kwan, Ajit K. Jain
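The token mechanism this last abstract describes can be sketched as follows. The class, the depth threshold, and the release rule are illustrative simplifications; the patent also covers the receive context and copy-generator queue details omitted here.

```python
# Illustrative sketch: once a buffer queue's depth meets a threshold, only the
# ingress port holding that queue's token may keep buffering cells for its
# in-flight packet; the token is released when the packet completes.

class QueueTokens:
    def __init__(self, threshold):
        self.threshold = threshold
        self.depth = {}    # queue -> cells buffered
        self.token = {}    # queue -> ingress port holding the token

    def try_admit(self, queue, port):
        depth = self.depth.get(queue, 0)
        if depth >= self.threshold:
            holder = self.token.setdefault(queue, port)
            if holder != port:
                return False        # another port holds this queue's token
        self.depth[queue] = depth + 1
        return True

    def release(self, queue, port):
        """Called when the packet's last cell has been linked to the queue."""
        if self.token.get(queue) == port:
            del self.token[queue]
```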