Patents by Inventor Prashant Anand

Prashant Anand has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150222550
    Abstract: A method of packet prioritization at a data network is disclosed. The data network contains a number of data plane nodes carrying user traffic and a control node managing the data plane nodes. The method starts with receiving a packet at a data plane node. The data plane node determines that it needs help from the control node to process the received packet. It then quantizes a congestion level at the data plane node and encodes the quantized congestion level in the packet, where a number of bits in the packet indicates the quantized congestion level. It sends a portion of the packet from the data plane node to the control node, where the portion of the packet includes the number of bits encoding the quantized congestion level.
    Type: Application
    Filed: February 6, 2014
    Publication date: August 6, 2015
    Applicant: Telefonaktiebolaget L M Ericsson (publ)
    Inventor: Prashant Anand
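The quantize-and-encode step described in this abstract can be sketched in a few lines of Python. This is an illustrative reading only: the 2-bit field width, the queue-depth congestion measure, and the header layout are assumptions, not details from the filing.

```python
def quantize_congestion(queue_depth, queue_capacity, bits=2):
    """Map a raw congestion measure onto 2**bits discrete levels.
    Queue depth as the congestion measure is an assumption."""
    levels = (1 << bits) - 1
    ratio = min(queue_depth / queue_capacity, 1.0)
    return round(ratio * levels)

def encode_congestion(header_flags, level, bits=2, shift=0):
    """Encode the quantized level into `bits` bits of a header field
    (field position is hypothetical)."""
    mask = ((1 << bits) - 1) << shift
    return (header_flags & ~mask) | ((level & ((1 << bits) - 1)) << shift)

# A data plane node marking the packet portion it forwards to the control node:
level = quantize_congestion(queue_depth=750, queue_capacity=1000)
flags = encode_congestion(0b0000, level)
```

The control node can then prioritize punted packets by reading those few bits rather than maintaining per-node congestion state.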
  • Publication number: 20150149812
    Abstract: Exemplary methods for network debugging include a control plane of a first network device generating and injecting debug traffic into a data plane of the first network device such that the debug traffic appears to the data plane as if it originated from an external network device. The methods include the data plane transmitting the debug traffic to a network. In one embodiment, the control plane collects debug information of the debug traffic as it is processed by the data plane and the network. In one embodiment, the first network device is configured to exchange debug information of the debug traffic with a second network device, and to provide the debug information to an operator.
    Type: Application
    Filed: November 22, 2013
    Publication date: May 28, 2015
    Inventors: Mustafa Arisoylu, Ramanathan Lakshmikanthan, Joon Ahn, Prashant Anand
  • Publication number: 20150117216
    Abstract: A method of load balancing implemented at a data network is disclosed. The data network contains a number of data plane nodes and a number of clusters of a control node. The method starts with deriving a graph from a topology of the data plane nodes, where the graph contains vertices, each representing one of the data plane nodes, and edges, each representing a connection between a pair of data plane nodes. The method continues with partitioning the graph into a number of sub-graphs, where the partitioning aims to minimize connectivity among the sub-graphs, and where the number of sub-graphs equals the number of clusters. The control node then assigns each cluster to the data plane nodes, where each cluster is assigned to one or more data plane nodes partitioned into the same sub-graph.
    Type: Application
    Filed: October 31, 2013
    Publication date: April 30, 2015
    Applicant: Telefonaktiebolaget L M Ericsson (publ)
    Inventors: Prashant Anand, Srikar Rajamani
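The partitioning step can be sketched with a simple pairwise-swap heuristic (the patent does not specify an algorithm; the swap heuristic, which preserves the balance of the initial assignment, is an assumption chosen for brevity):

```python
from itertools import combinations

def cut_size(edges, assign):
    """Count edges whose endpoints fall in different sub-graphs."""
    return sum(1 for u, v in edges if assign[u] != assign[v])

def partition(nodes, edges, k):
    """Seed a balanced round-robin assignment of vertices to k sub-graphs,
    then swap vertex pairs across sub-graphs while that lowers the cut."""
    assign = {n: i % k for i, n in enumerate(nodes)}
    improved = True
    while improved:
        improved = False
        for u, v in combinations(nodes, 2):
            if assign[u] == assign[v]:
                continue
            trial = {**assign, u: assign[v], v: assign[u]}
            if cut_size(edges, trial) < cut_size(edges, assign):
                assign, improved = trial, True
    return assign

# Each control-node cluster i then serves the data plane nodes whose
# vertices were assigned to sub-graph i.
```

Minimizing the cut keeps most data-plane-to-control traffic within a single cluster's set of nodes.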
  • Patent number: 9009165
    Abstract: The present invention relates to methods and apparatus for performing a lookup on a hash table stored in external memory. An index table stored in local memory is used to perform an enhanced lookup on the hash table stored in external memory. The index table stores signature patterns that are derived from the hash keys stored in the hash entries. Using the stored signature patterns, the packet processing node predicts which hash key is likely to store the desired data. The prediction may yield a false positive, but will never yield a false negative. Thus, the hash table is accessed only once during a data lookup.
    Type: Grant
    Filed: January 10, 2013
    Date of Patent: April 14, 2015
    Assignee: Telefonaktiebolaget L M Ericsson (publ)
    Inventors: Prashant Anand, Ashish Anand
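The signature-index lookup can be sketched as below. The hash function (FNV-1a), table size, and 8-bit signature width are illustrative assumptions; the point is that the signature table stands in for local memory and the full table for external memory.

```python
class IndexedHashTable:
    """Lookup via a signature index. The index table ('local memory')
    stores a short signature per entry, derived from the entry's hash key;
    the hash table itself ('external memory') holds the full entries and
    is normally read at most once per lookup."""

    def __init__(self, size=16):
        self.size = size
        self.sig_table = [[] for _ in range(size)]   # local memory
        self.hash_table = [[] for _ in range(size)]  # external memory
        self.external_reads = 0

    def _slot_and_sig(self, key: bytes):
        # FNV-1a stands in for the node's hash function (an assumption).
        h = 0x811C9DC5
        for b in key:
            h = ((h ^ b) * 0x01000193) & 0xFFFFFFFF
        return h % self.size, (h >> 24) & 0xFF  # slot, 8-bit signature

    def insert(self, key, value):
        slot, sig = self._slot_and_sig(key)
        self.sig_table[slot].append(sig)
        self.hash_table[slot].append((key, value))

    def lookup(self, key):
        slot, sig = self._slot_and_sig(key)
        for i, s in enumerate(self.sig_table[slot]):
            if s != sig:
                continue                 # signature mismatch: cannot be our key
            self.external_reads += 1     # the (usually single) external read
            k, v = self.hash_table[slot][i]
            if k == key:                 # a signature match may be a false positive
                return v
        return None  # no signature match is never a false negative
```

In the common case exactly one signature matches, so external memory is touched once; a rare intra-bucket signature collision costs one extra read.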
  • Publication number: 20150092549
    Abstract: A method of generating network traffic in a network device of a data communication network includes providing traffic generation parameters in the network device that describe a desired traffic pattern to be generated by the network device, generating a trigger packet in the network device, the trigger packet specifying a drop precedence for packets generated by the network device in a state defined by the trigger packet, replicating the trigger packet to provide a packet train, selectively dropping one or more packets in the packet train based on the drop precedence specified in the trigger packet, and transmitting the packet train from the network device.
    Type: Application
    Filed: September 27, 2013
    Publication date: April 2, 2015
    Applicant: TELEFONAKTIEBOLAGET L M ERICSSON (PUBL)
    Inventors: Prashant Anand, Arun Balachandran, Vinayak Joshi
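A minimal sketch of the replicate-then-drop flow is below. Interpreting the drop precedence as a drop probability is an assumption made for this sketch; in the filing it is a field carried by the trigger packet that governs selective dropping.

```python
import random
from dataclasses import dataclass

@dataclass
class TriggerPacket:
    payload: bytes
    drop_precedence: float  # interpreted here as per-packet drop probability

def generate_train(trigger: TriggerPacket, count: int, rng=None):
    """Replicate the trigger packet into a train, then selectively drop
    packets according to the drop precedence the trigger carries."""
    rng = rng or random.Random()
    train = [trigger.payload for _ in range(count)]
    return [p for p in train if rng.random() >= trigger.drop_precedence]
```

Varying the drop precedence lets the device synthesize traffic patterns of different densities from a single trigger packet.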
  • Patent number: 8995277
    Abstract: A method is implemented by a network element to improve load sharing for a link aggregation group by redistributing data flows to less congested ports in a set of ports associated with the link aggregation group. The network element receives a data packet in a data flow at an ingress port of the network element. A load sharing process is performed to select an egress port of the network element. A check is made whether the selected egress port is congested. A check is made whether the time since a previous data packet in the data flow was received exceeds a threshold value. A less congested egress port is identified in the set of ports. A flow table is updated to bind the data flow to the less congested egress port, and the data packet is forwarded to the less congested egress port.
    Type: Grant
    Filed: October 30, 2012
    Date of Patent: March 31, 2015
    Assignee: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Prashant Anand, Arun Balachandran
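The rebinding logic can be sketched as follows. The occupancy representation, the 0.8 congestion limit, and hashing as the load sharing process are illustrative assumptions; the inter-packet-gap check is the abstract's mechanism for rebinding a flow only when reordering is unlikely.

```python
class LagLoadSharer:
    """Rebind a flow to a less congested LAG member port, but only when
    the gap since the flow's previous packet exceeds a threshold, so
    in-flight packets are unlikely to be reordered."""

    def __init__(self, occupancy, gap_threshold, congestion_limit=0.8):
        self.occupancy = occupancy          # egress queue fill per port, 0..1
        self.gap_threshold = gap_threshold  # seconds of flow quiet time
        self.congestion_limit = congestion_limit
        self.flow_table = {}                # flow_id -> (egress_port, last_seen)

    def select_egress(self, flow_id, now):
        n = len(self.occupancy)
        # Hashing stands in for the element's load sharing process.
        port, last = self.flow_table.get(flow_id, (hash(flow_id) % n, None))
        gap_ok = last is None or (now - last) > self.gap_threshold
        if self.occupancy[port] > self.congestion_limit and gap_ok:
            # Rebind the flow to the least congested member port.
            port = min(range(n), key=self.occupancy.__getitem__)
        self.flow_table[flow_id] = (port, now)
        return port
```

Subsequent packets of the flow hit the flow-table binding, so the flow stays on its new port until it is rebound again.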
  • Publication number: 20150078159
    Abstract: Aspects of a high-precision packet train generation process are distributed among several distinct processing elements. In some embodiments a control processor configures a packet-processing unit with a packet train context that includes details such as the number of packets to be generated and the headers to be included in the packets. The packet-processing unit takes a packet to be used in the packet train and recirculates it a number of times, as specified by the packet train context. The recirculated packets, with the appropriate headers inserted, are forwarded to a traffic-shaping queue in queuing hardware. The traffic-shaping queue is configured to output the forwarded packets with a constant inter-packet gap. Thus, the generation of the multiple packets in the packet train is handled by the packet-processing unit, while the precise inter-packet timing is provided by the traffic-shaping queue in the queuing hardware.
    Type: Application
    Filed: November 20, 2014
    Publication date: March 19, 2015
    Inventors: Prashant Anand, Vinayak Joshi, Ashish Anand
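The division of labor described here, replication in the packet-processing unit and timing in the traffic-shaping queue, can be sketched as two stages. The context fields and the timestamp representation are illustrative assumptions:

```python
from collections import deque

def build_packet_train(context, packet):
    """Packet-processing unit: recirculate one packet `count` times,
    stamping the headers from the packet-train context on each pass."""
    train = deque()
    for seq in range(context["count"]):
        header = dict(context["headers"], seq=seq)
        train.append((header, packet))
    return train

def shape(train, gap, start=0.0):
    """Traffic-shaping queue: emit the forwarded packets with a constant
    inter-packet gap (modeled here as departure timestamps)."""
    return [(start + i * gap, pkt) for i, pkt in enumerate(train)]
```

The control processor would only write the context; neither precise timing nor per-packet work lands on it.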
  • Publication number: 20150016255
    Abstract: A network device to detect large flows includes a card to receive packets of flows. The device includes a large flow detection module including a serial multiple-stage filter module with a series of filter modules, including a lead filter module and a tail filter module. Each filter module includes counters. The serial filter module is to serially increment the counters to reflect the flows, and is to increment counters that correspond to flows of subsequent filter modules only after all counters that correspond to the flows of all prior filter modules have been incremented serially up to maximum values. The serial filter module is to detect flows that correspond to counters of the tail filter module that have been incremented up to maximum values as the large flows. The large flow detection module includes a lead filter removal module to remove the lead filter module from the start of the series.
    Type: Application
    Filed: July 15, 2013
    Publication date: January 15, 2015
    Inventors: Ashutosh Bisht, Prashant Anand
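The serial saturation behavior can be sketched as below. Per-flow dictionaries stand in for the hashed, shared counter arrays a hardware filter stage would use; stage count and maximum value are illustrative.

```python
class SerialMultiStageFilter:
    """Counters advance stage by stage: a flow's counter in stage i+1 is
    incremented only once its counters in all prior stages have saturated.
    Flows that saturate the tail stage are reported as large flows."""

    def __init__(self, num_stages=3, max_count=4):
        # Simplification: per-flow dicts instead of hashed counter arrays.
        self.stages = [{} for _ in range(num_stages)]
        self.max_count = max_count

    def observe(self, flow_id):
        for stage in self.stages:
            count = stage.get(flow_id, 0)
            if count < self.max_count:
                stage[flow_id] = count + 1
                return False
        return True  # tail stage already saturated: a large flow

    def is_large(self, flow_id):
        return self.stages[-1].get(flow_id, 0) >= self.max_count

    def remove_lead_filter(self):
        """Lead filter removal module: drop the lead filter from the
        start of the series."""
        self.stages.pop(0)
```

A flow must contribute num_stages * max_count packets before it is flagged, which is what screens out small flows.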
  • Publication number: 20150009830
    Abstract: A load balancing system may include a switch coupled with a plurality of servers and a controller. A flow table may include default flow entries with each default flow entry including a different match pattern. At least one of the default flow entries may include a match pattern with an unrestricted character so that the match pattern having the unrestricted character is satisfied by a plurality of data flow identifications. Each of the default flow entries may include an action to be performed for data packets having data flow identifications that satisfy its match pattern. A data packet including a data flow identification for a data flow may be received from a client device. A default flow entry having a match pattern that is satisfied by the data flow identification is identified, and the data packet is processed in accordance with the action for the identified default flow entry.
    Type: Application
    Filed: July 8, 2013
    Publication date: January 8, 2015
    Inventors: Ashutosh Bisht, Prashant Anand
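The default-entry matching can be sketched directly. The 4-bit flow-ID strings, the `*` spelling of the unrestricted character, and the drop default are illustrative assumptions:

```python
def matches(pattern, flow_id):
    """A match pattern over a fixed-width flow ID; '*' is the unrestricted
    character and is satisfied by any value in that position."""
    return len(pattern) == len(flow_id) and all(
        p == "*" or p == c for p, c in zip(pattern, flow_id))

def process(flow_table, flow_id):
    """Return the action of the first default flow entry whose match
    pattern is satisfied by the packet's flow identification."""
    for pattern, action in flow_table:
        if matches(pattern, flow_id):
            return action
    return "drop"  # assumed behavior when no entry matches

# Default entries splitting a 4-bit flow-ID space between two servers:
table = [("0***", "to-server-1"), ("1***", "to-server-2")]
```

Because one wildcarded entry covers many flow IDs, the switch needs only a small set of default entries rather than one entry per flow.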
  • Patent number: 8923122
    Abstract: Aspects of a high-precision packet train generation process are distributed among several distinct processing elements. In some embodiments a control processor configures a packet-processing unit with a packet train context that includes details such as the number of packets to be generated and the headers to be included in the packets. The packet-processing unit takes a packet to be used in the packet train and recirculates it a number of times, as specified by the packet train context. The recirculated packets, with the appropriate headers inserted, are forwarded to a traffic-shaping queue in queuing hardware. The traffic-shaping queue is configured to output the forwarded packets with a constant inter-packet gap. Thus, the generation of the multiple packets in the packet train is handled by the packet-processing unit, while the precise inter-packet timing is provided by the traffic-shaping queue in the queuing hardware.
    Type: Grant
    Filed: December 19, 2012
    Date of Patent: December 30, 2014
    Assignee: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Prashant Anand, Vinayak Joshi, Ashish Anand
  • Publication number: 20140369204
    Abstract: A first data packet of a data flow may be addressed to a primary address and include information for the data flow and a bucket ID may be computed based on the information. Responsive to the bucket ID mapping to first and second servers and the first data packet being addressed to the primary address, the first data packet may be transmitted to the first server. A second data packet may be received addressed to a stand-by address and including the information for the data flow, and a bucket ID may be computed based on the information with the bucket IDs for the first and second packets being the same. Responsive to the bucket ID for the second data packet mapping to first and second servers and the second data packet being addressed to the stand-by address, the second data packet may be transmitted to the second server.
    Type: Application
    Filed: June 17, 2013
    Publication date: December 18, 2014
    Inventors: Prashant Anand, Mustafa Arisoylu, Jayasenan Sundara Ganesh, Nandan Mahadeo Sawant
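The primary/stand-by addressing can be sketched as follows. The SHA-256 bucket computation, the bucket count, and the two addresses are illustrative assumptions; the essential property is that both packets of a flow compute the same bucket ID, and the destination address alone selects the first or second server of that bucket.

```python
import hashlib

def bucket_id(flow_info: str, num_buckets=8):
    """Stable bucket computation from the flow's identifying fields."""
    digest = hashlib.sha256(flow_info.encode()).digest()
    return digest[0] % num_buckets

def route(packet_addr, flow_info, bucket_map,
          primary="10.0.0.1", standby="10.0.0.2"):
    """bucket_map: bucket ID -> (first server, second server). Packets
    addressed to the primary address go to the first server; packets
    addressed to the stand-by address go to the second server."""
    first, second = bucket_map[bucket_id(flow_info)]
    return first if packet_addr == primary else second
```

This lets a sender steer a flow's packets to a bucket's backup server simply by addressing the stand-by address, with no per-flow signaling.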
  • Publication number: 20140372567
    Abstract: Methods may be provided to forward data packets to a plurality of servers with each server being identified by a respective server identification (ID). A non-initial data packet of a data flow may be received, with the non-initial data packet including information for the data flow, and a bucket ID for the non-initial data packet may be computed as a function of the information for the data flow. Responsive to the bucket ID for the data packet mapping to first and second server identifications (IDs) of respective first and second servers and responsive to the non-initial data packet being a non-initial data packet for the data flow, the non-initial data packet may be transmitted to one of the first and second servers using one of the first and second server IDs based on a flow identification of the data flow being included in a transient table for the bucket ID.
    Type: Application
    Filed: June 17, 2013
    Publication date: December 18, 2014
    Inventors: Jayasenan Sundara Ganesh, Mustafa Arisoylu, Prashant Anand, Nandan Mahadeo Sawant
  • Publication number: 20140372616
    Abstract: Data packets may be forwarded to servers identified by respective server IDs. A mapping table includes bucket IDs identifying respective buckets. The mapping table maps: a first bucket ID to a first server ID as a current server ID; a second bucket ID to a second server ID as a current server ID; and the first bucket ID to a third server ID as an old server ID. A data packet of a data flow may be received, and a bucket ID may be computed for the data packet. Responsive to computing the first bucket ID as the bucket ID for the data flow and responsive to the mapping table mapping the first bucket ID to the first server ID as the current server ID and to the third server ID as the old server ID, the data packet may be transmitted to the first server and/or to the third server.
    Type: Application
    Filed: June 17, 2013
    Publication date: December 18, 2014
    Inventors: Mustafa Arisoylu, Jayasenan Sundara Ganesh, Prashant Anand, Nandan Mahadeo Sawant
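One plausible use of the current/old mapping, keeping established flows on the old server while a bucket migrates, can be sketched as below. The integer flow IDs, modulo bucketing, and the "flows known at the old server" set are illustrative assumptions.

```python
def forward(mapping, flow_id, flows_at_old, num_buckets=8):
    """mapping: bucket ID -> {'current': server, 'old': server or None}.
    During a bucket move, flows the old server already owns stay with it;
    other flows go to the current server."""
    bucket = flow_id % num_buckets  # stands in for hashing the flow's fields
    entry = mapping[bucket]
    old = entry.get("old")
    if old is not None and flow_id in flows_at_old:
        return old              # existing flow stays with the old server
    return entry["current"]     # new flows go to the current server
```

Once the old server holds no more flows for the bucket, the old server ID can be cleared and the bucket maps purely to the current server.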
  • Patent number: 8879550
    Abstract: In one aspect, the present invention reduces the amount of low-latency memory needed for rules-based packet classification by representing a packet classification rules database in compressed form. A packet processing rules database, e.g., an ACL database comprising multiple ACEs, is preprocessed to obtain corresponding rule fingerprints. These rule fingerprints are much smaller than the rules and are easily accommodated in on-chip or other low-latency memory that is generally available to the classification engine in limited amounts. The rules database in turn can be stored in off-chip or other higher-latency memory, as initial matching operations involve only the packet key of the subject packet and the fingerprint database. The rules database is accessed for full packet classification only if a tentative match is found between the packet key and an entry in the fingerprint database. Thus, the present invention also advantageously minimizes accesses to the rules database.
    Type: Grant
    Filed: May 8, 2012
    Date of Patent: November 4, 2014
    Assignee: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Prashant Anand, Ramanathan Lakshmikanthan, Sun Den Chen, Ning Xu
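The fingerprint-first classification can be sketched as below. The SHA-256-derived 16-bit fingerprint and the exact-match ACEs are assumptions made so that a packet key and a rule reduce to comparable fingerprints; real ACLs also have ranged and wildcarded fields.

```python
import hashlib

def fp(key_fields: dict, bits=16):
    """Short fingerprint over packet-key fields (exact-match ACEs assumed)."""
    h = hashlib.sha256(repr(sorted(key_fields.items())).encode()).digest()
    return int.from_bytes(h[:2], "big") & ((1 << bits) - 1)

class CompressedClassifier:
    def __init__(self, rules):
        # Fingerprint database in low-latency ('on-chip') memory ...
        self.fp_db = [fp(r["match"]) for r in rules]
        # ... full rules database in higher-latency memory.
        self.rules = rules
        self.rule_db_accesses = 0

    def classify(self, packet_key):
        key_fp = fp(packet_key)
        for i, f in enumerate(self.fp_db):
            if f == key_fp:                 # tentative match only
                self.rule_db_accesses += 1  # full rule consulted
                if self.rules[i]["match"] == packet_key:
                    return self.rules[i]["action"]
        return "deny"                       # assumed implicit default
```

Most packets miss every fingerprint and never touch the rules database, which is what keeps the high-latency accesses rare.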
  • Patent number: 8825867
    Abstract: A method, in one or more network elements that are in communication between clients that transmit packets and servers, of distributing the packets among the servers which are to process the packets. Stickiness of flows to servers assigned to process them is provided. A packet of a flow is received at a static first level packet distribution module. A group of servers is statically selected for the packet of the flow with the first level module. State that assigns the packet of the flow to the selected group of servers is not used. The packet of the flow is distributed to a distributed stateful second level packet distribution system. A server of the selected group is statefully selected with the second level system by accessing state that assigns processing of packets of the flow to the selected server. The packet of the flow is distributed to the selected server.
    Type: Grant
    Filed: May 4, 2012
    Date of Patent: September 2, 2014
    Assignee: Telefonaktiebolaget L M Ericsson (Publ)
    Inventors: Mustafa Arisoylu, Abhishek Arora, Prashant Anand
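The two-level split can be sketched as below. Modulo hashing for the first level and least-loaded selection for the second are illustrative assumptions; what matters is that only the second level keeps per-flow state.

```python
class TwoLevelDistributor:
    """First level: static, stateless selection of a server group by
    hashing the flow. Second level: stateful, sticky selection of one
    server within the group."""

    def __init__(self, groups):
        self.groups = groups                          # list of server groups
        self.load = {s: 0 for g in groups for s in g}
        self.flow_state = {}                          # flow -> server (level 2)

    def distribute(self, flow_id: int):
        group = self.groups[flow_id % len(self.groups)]  # level 1: no state
        server = self.flow_state.get(flow_id)
        if server is None:                               # level 2: pin new flows
            server = min(group, key=self.load.get)       # least-loaded (assumed)
            self.load[server] += 1
            self.flow_state[flow_id] = server
        return server
```

Because the first level is stateless, it can be replicated across network elements; per-flow stickiness is confined to the distributed second level.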
  • Patent number: 8811183
    Abstract: In some embodiments, an apparatus comprises a switch from a set of switches associated with a stage of a multi-stage switch fabric. The switch is configured to receive a data packet having a destination address of a destination device from a source device, and then store the data packet in a queue of the switch. The switch is configured to define a message based on the queue having an available capacity less than a threshold, and include a congestion root indicator in the message if the switch is a congestion root. The switch is then configured to send the message to the source device such that the source device sends another data packet having the destination address of the destination device to another switch from the set of switches and not to the previous switch if the message includes the congestion root indicator.
    Type: Grant
    Filed: October 4, 2011
    Date of Patent: August 19, 2014
    Assignee: Juniper Networks, Inc.
    Inventors: Prashant Anand, Hardik Bhalala
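The message-and-reroute behavior can be sketched in two small functions. The dict message format and the threshold value are illustrative; the key idea from the abstract is that the source only steers away from a switch flagged as a congestion root.

```python
def make_congestion_message(switch_id, queue_free, threshold, is_root):
    """A switch whose queue's available capacity falls below the threshold
    notifies the source; the congestion-root indicator is included only
    when this switch is itself the root of the congestion."""
    if queue_free < threshold:
        return {"switch": switch_id, "congestion_root": is_root}
    return None  # queue healthy: no message

def pick_next_switch(current, candidates, message):
    """Source device: send the next packet for the destination via another
    switch only when the message flags the current one as a congestion root."""
    if message and message["congestion_root"]:
        return next(s for s in candidates if s != current)
    return current
```

Switches congested only by downstream backpressure are not avoided, so the reroute targets the actual root of the congestion tree.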
  • Publication number: 20140119193
    Abstract: A method is implemented by a network element to improve load sharing for a link aggregation group by redistributing data flows to less congested ports in a set of ports associated with the link aggregation group. The network element receives a data packet in a data flow at an ingress port of the network element. A load sharing process is performed to select an egress port of the network element. A check is made whether the selected egress port is congested. A check is made whether the time since a previous data packet in the data flow was received exceeds a threshold value. A less congested egress port is identified in the set of ports. A flow table is updated to bind the data flow to the less congested egress port, and the data packet is forwarded to the less congested egress port.
    Type: Application
    Filed: October 30, 2012
    Publication date: May 1, 2014
    Applicant: TELEFONAKTIEBOLAGET L M ERICSSON (PUBL)
    Inventors: Prashant Anand, Arun Balachandran
  • Publication number: 20130301641
    Abstract: In one aspect, the present invention reduces the amount of low-latency memory needed for rules-based packet classification by representing a packet classification rules database in compressed form. A packet processing rules database, e.g., an ACL database comprising multiple ACEs, is preprocessed to obtain corresponding rule fingerprints. These rule fingerprints are much smaller than the rules and are easily accommodated in on-chip or other low-latency memory that is generally available to the classification engine in limited amounts. The rules database in turn can be stored in off-chip or other higher-latency memory, as initial matching operations involve only the packet key of the subject packet and the fingerprint database. The rules database is accessed for full packet classification only if a tentative match is found between the packet key and an entry in the fingerprint database. Thus, the present invention also advantageously minimizes accesses to the rules database.
    Type: Application
    Filed: May 8, 2012
    Publication date: November 14, 2013
    Inventors: Prashant Anand, Ramanathan Lakshmikanthan, Sun Den Chen, Ning Xu
  • Publication number: 20130297798
    Abstract: A method, in one or more network elements that are in communication between clients that transmit packets and servers, of distributing the packets among the servers which are to process the packets. Stickiness of flows to servers assigned to process them is provided. A packet of a flow is received at a static first level packet distribution module. A group of servers is statically selected for the packet of the flow with the first level module. State that assigns the packet of the flow to the selected group of servers is not used. The packet of the flow is distributed to a distributed stateful second level packet distribution system. A server of the selected group is statefully selected with the second level system by accessing state that assigns processing of packets of the flow to the selected server. The packet of the flow is distributed to the selected server.
    Type: Application
    Filed: May 4, 2012
    Publication date: November 7, 2013
    Inventors: Mustafa Arisoylu, Abhishek Arora, Prashant Anand
  • Publication number: 20120140626
    Abstract: In some embodiments, an apparatus includes a flow control module configured to receive a first data packet from an output queue of a stage of a multi-stage switch at a first rate when an available capacity of the output queue crosses a first threshold. The flow control module is configured to receive a second data packet from the output queue of the stage of the multi-stage switch at a second rate when the available capacity of the output queue crosses a second threshold. The flow control module is configured to send a flow control signal to an edge device of the multi-stage switch from which the first data packet or the second data packet entered the multi-stage switch.
    Type: Application
    Filed: December 1, 2010
    Publication date: June 7, 2012
    Applicant: Juniper Networks, Inc.
    Inventors: Prashant Anand, Gunes Aybay, Arijit Sarcar, Hardik Bhalala
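The two-threshold behavior can be sketched as a mapping from available queue capacity to a drain rate plus a decision to signal the originating edge device. The threshold fractions and rate values are illustrative assumptions:

```python
def control(queue_free, capacity, t1=0.5, t2=0.2):
    """Map the output queue's available capacity to a drain rate
    (packets/s, values assumed) and decide whether to send a flow control
    signal to the edge device that injected the traffic."""
    frac = queue_free / capacity
    if frac < t2:
        return 25.0, True    # second threshold crossed: lowest rate
    if frac < t1:
        return 100.0, True   # first threshold crossed: reduced rate
    return 1000.0, False     # uncongested: full rate, no signal
```

Stepping the rate down in stages, rather than stopping outright, lets the multi-stage switch absorb bursts while backpressure propagates to the edge.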