Patents by Inventor Naveen K. Jain

Naveen K. Jain has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11496398
    Abstract: An ingress fabric endpoint coupled to a switch fabric within a network device reorders packet flows based on congestion status. In one example, the ingress fabric endpoint receives packet flows for switching across the switch fabric. The ingress fabric endpoint assigns each packet for each packet flow to a fast path or a slow path for packet switching. The ingress fabric endpoint processes, to generate a stream of cells for switching across the switch fabric, packets from the fast path and the slow path to maintain a first-in-first-out ordering of the packets within each packet flow. The ingress fabric endpoint switches a packet of a first packet flow after switching a packet of a second packet flow despite receiving the packet of the first packet flow before the packet of the second packet flow.
    Type: Grant
    Filed: March 10, 2021
    Date of Patent: November 8, 2022
    Assignee: Juniper Networks, Inc.
    Inventors: Anuj Kumar Srivastava, Gary Goldman, Harshad B. Agashe, Dinesh Jaiswal, Piyush Jain, Naveen K. Jain
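The fast-path/slow-path reordering described in the abstract above can be sketched as a toy model; the class and method names, and the policy of draining the fast queue before the slow queue, are illustrative assumptions rather than details from the patent:

```python
from collections import deque

class IngressEndpoint:
    """Toy model: FIFO order within each flow is preserved, but flows may be
    reordered relative to each other when one flow's packets take the slow
    path (names and the drain policy are illustrative, not from the patent)."""
    def __init__(self, slow_flows):
        self.slow_flows = set(slow_flows)   # flows marked congested
        self.fast, self.slow = deque(), deque()

    def receive(self, flow, seq):
        path = self.slow if flow in self.slow_flows else self.fast
        path.append((flow, seq))

    def drain(self):
        # Fast-path packets are switched first; slow-path packets follow.
        # Within each queue, arrival (FIFO) order is kept, so the ordering
        # inside any single flow is never violated.
        return list(self.fast) + list(self.slow)
```

With flow "B" congested, a "B" packet that arrived first can be switched after a later-arriving "A" packet, while "B"'s own packets still leave in order.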
  • Patent number: 11070474
    Abstract: A network device includes a memory, a plurality of packet processors, a switch fabric coupling the plurality of packet processors, and processing circuitry. The processing circuitry is configured to receive a data stream to be transmitted on the switch fabric and determine a plurality of credit counts, each credit count being assigned to a respective subchannel of a plurality of subchannels. The processing circuitry is further configured to determine per-subchannel occupancy of the memory for the plurality of subchannels, select, based on the plurality of credit counts and the per-subchannel occupancy of the memory, a subchannel of the plurality of subchannels for transmitting a cell of a plurality of cells for the data stream, and output data for the cell to the memory for output by the selected subchannel.
    Type: Grant
    Filed: October 22, 2018
    Date of Patent: July 20, 2021
    Assignee: Juniper Networks, Inc.
    Inventors: Piyush Jain, Anuj Kumar Srivastava, Naveen K. Jain, Dinesh Jaiswal, Harshad B. Agashe
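The subchannel-selection step above combines two inputs: credit counts and per-subchannel buffer occupancy. A minimal sketch of one possible policy (prefer the least-occupied subchannel among those with credit; the patent claims only that both inputs drive the choice, not this particular tie-break):

```python
def select_subchannel(credits, occupancy):
    """Pick a subchannel that still has credit, preferring the one whose
    share of the buffer memory is lowest. Illustrative policy only."""
    eligible = [ch for ch, c in enumerate(credits) if c > 0]
    if not eligible:
        return None                      # no credit anywhere: stall the cell
    return min(eligible, key=lambda ch: occupancy[ch])
```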
  • Publication number: 20210194809
    Abstract: An ingress fabric endpoint coupled to a switch fabric within a network device reorders packet flows based on congestion status. In one example, the ingress fabric endpoint receives packet flows for switching across the switch fabric. The ingress fabric endpoint assigns each packet for each packet flow to a fast path or a slow path for packet switching. The ingress fabric endpoint processes, to generate a stream of cells for switching across the switch fabric, packets from the fast path and the slow path to maintain a first-in-first-out ordering of the packets within each packet flow. The ingress fabric endpoint switches a packet of a first packet flow after switching a packet of a second packet flow despite receiving the packet of the first packet flow before the packet of the second packet flow.
    Type: Application
    Filed: March 10, 2021
    Publication date: June 24, 2021
    Inventors: Anuj Kumar Srivastava, Gary Goldman, Harshad B. Agashe, Dinesh Jaiswal, Piyush Jain, Naveen K. Jain
  • Patent number: 10951527
    Abstract: An ingress fabric endpoint coupled to a switch fabric within a network device reorders packet flows based on congestion status. In one example, the ingress fabric endpoint receives packet flows for switching across the switch fabric. The ingress fabric endpoint assigns each packet for each packet flow to a fast path or a slow path for packet switching. The ingress fabric endpoint processes, to generate a stream of cells for switching across the switch fabric, packets from the fast path and the slow path to maintain a first-in-first-out ordering of the packets within each packet flow. The ingress fabric endpoint switches a packet of a first packet flow after switching a packet of a second packet flow despite receiving the packet of the first packet flow before the packet of the second packet flow.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: March 16, 2021
    Assignee: Juniper Networks, Inc.
    Inventors: Anuj Kumar Srivastava, Gary Goldman, Harshad B. Agashe, Dinesh Jaiswal, Piyush Jain, Naveen K. Jain
  • Publication number: 20200213232
    Abstract: An ingress fabric endpoint coupled to a switch fabric within a network device reorders packet flows based on congestion status. In one example, the ingress fabric endpoint receives packet flows for switching across the switch fabric. The ingress fabric endpoint assigns each packet for each packet flow to a fast path or a slow path for packet switching. The ingress fabric endpoint processes, to generate a stream of cells for switching across the switch fabric, packets from the fast path and the slow path to maintain a first-in-first-out ordering of the packets within each packet flow. The ingress fabric endpoint switches a packet of a first packet flow after switching a packet of a second packet flow despite receiving the packet of the first packet flow before the packet of the second packet flow.
    Type: Application
    Filed: December 28, 2018
    Publication date: July 2, 2020
    Inventors: Anuj Kumar Srivastava, Gary Goldman, Harshad B. Agashe, Dinesh Jaiswal, Piyush Jain, Naveen K. Jain
  • Patent number: 10649946
    Abstract: A system, method, and apparatus are provided for operating a device that receives a first sequence of signaling states on a multi-wire interface within a first voltage range, causing the device to transition to a high-speed communication mode for receiving high-speed data on the multi-wire interface within a second, smaller voltage range. The device returns to a low-power communication mode when it receives on the multi-wire interface a second sequence of two signaling states within the first voltage range, which signals a turnaround command without requiring any additional signaling state within the first voltage range. The turnaround command enables the device to transmit data over the multi-wire interface: by transmitting the first sequence of signaling states within the first voltage range, the device transitions to a high-speed communication mode for transmitting data over the multi-wire interface.
    Type: Grant
    Filed: January 15, 2019
    Date of Patent: May 12, 2020
    Assignee: NXP USA, Inc.
    Inventors: Maik Brett, Naveen K. Jain, Shreya Singh
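The mode transitions in this abstract can be modeled as a small state machine. The state names and the specific signaling sequences below are invented for illustration; the patent specifies only that an entry sequence in the full-swing voltage range enters high-speed mode and that a two-state sequence suffices for turnaround:

```python
class MultiWireLink:
    """Sketch of the mode transitions described in the abstract: a
    full-swing entry sequence moves the link into high-speed receive,
    and a two-state turnaround sequence returns it to low power and
    grants the device the right to transmit. Sequences are hypothetical."""
    ENTRY = ("LP-00", "LP-01")       # hypothetical entry sequence
    TURNAROUND = ("LP-10", "LP-00")  # hypothetical two-state turnaround

    def __init__(self):
        self.mode = "low-power"
        self.direction = "receive"

    def signal(self, states):
        if states == self.ENTRY:
            self.mode = "high-speed"
        elif states == self.TURNAROUND and self.mode == "high-speed":
            # Two states suffice: no additional full-swing state is required.
            self.mode = "low-power"
            self.direction = "transmit"
```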
  • Patent number: 9973437
    Abstract: A device may store a credit value for each of multiple output components. The device may receive packets from a network device via an input component. The device may cause the input component to queue the packets. The device may selectively dequeue a packet from the input component, to be sent to an output component, based on whether the credit value for the output component satisfies a credit threshold. The device may send the packet to the output component based on a destination of the packet when the packet is dequeued from the input component. The device may determine a size of the packet after the packet is dequeued. The device may update the credit value for the output component based on the size of the packet. The device may output the packet to another network device via the output component.
    Type: Grant
    Filed: June 10, 2016
    Date of Patent: May 15, 2018
    Assignee: Juniper Networks, Inc.
    Inventors: Ravi Pathakota, Sarin Thomas, Sudipta Kundu, Srihari R. Vegesna, Firdaus Mahiar Irani, Kalpataru Maji, Naveen K. Jain
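The credit check described in the abstract above can be sketched as follows. Note the order of operations the abstract calls out: the credit is compared against a threshold before dequeue, but the packet's size is learned and charged only after dequeue. All names are illustrative:

```python
class CreditScheduler:
    """Minimal model of credit-based dequeue: a packet leaves the input
    queue toward an output only while that output's credit meets the
    threshold; the credit is charged by packet size after the dequeue."""
    def __init__(self, credits, threshold=0):
        self.credits = dict(credits)    # per-output credit values
        self.threshold = threshold

    def try_dequeue(self, queue, output):
        if not queue or self.credits[output] < self.threshold:
            return None                 # hold the packet at the input
        packet = queue.pop(0)
        # Size is determined only after the dequeue, then charged.
        self.credits[output] -= len(packet)
        return packet
```

Because size is charged after dequeue, the credit can briefly go negative; the next dequeue is then blocked until credits are replenished.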
  • Publication number: 20160285777
    Abstract: A device may store a credit value for each of multiple output components. The device may receive packets from a network device via an input component. The device may cause the input component to queue the packets. The device may selectively dequeue a packet from the input component, to be sent to an output component, based on whether the credit value for the output component satisfies a credit threshold. The device may send the packet to the output component based on a destination of the packet when the packet is dequeued from the input component. The device may determine a size of the packet after the packet is dequeued. The device may update the credit value for the output component based on the size of the packet. The device may output the packet to another network device via the output component.
    Type: Application
    Filed: June 10, 2016
    Publication date: September 29, 2016
    Inventors: Ravi Pathakota, Sarin Thomas, Sudipta Kundu, Srihari R. Vegesna, Firdaus Mahiar Irani, Kalpataru Maji, Naveen K. Jain
  • Patent number: 9369397
    Abstract: A device may store a credit value for each of multiple output components. The device may receive packets from a network device via an input component. The device may cause the input component to queue the packets. The device may selectively dequeue a packet from the input component, to be sent to an output component, based on whether the credit value for the output component satisfies a credit threshold. The device may send the packet to the output component based on a destination of the packet when the packet is dequeued from the input component. The device may determine a size of the packet after the packet is dequeued. The device may update the credit value for the output component based on the size of the packet. The device may output the packet to another network device via the output component.
    Type: Grant
    Filed: July 16, 2014
    Date of Patent: June 14, 2016
    Assignee: Juniper Networks, Inc.
    Inventors: Ravi Pathakota, Sarin Thomas, Sudipta Kundu, Srihari R. Vegesna, Firdaus Mahiar Irani, Kalpataru Maji, Naveen K. Jain
  • Patent number: 9083641
    Abstract: A network processing device having multiple processing engines capable of providing multi-context parallel processing is disclosed. The device includes a receiver and a packet processor, wherein the receiver is capable of receiving packets at a predefined packet flow rate. The packet processor, in one embodiment, includes multiple processing engines, wherein each processing engine is further configured to include multiple context processing components. The context processing components are used to provide multi-context parallel processing to increase throughput.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: July 14, 2015
    Assignee: Tellabs Operations, Inc.
    Inventors: Naveen K. Jain, Venkata Rangavajjhala
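The multi-context idea above — each engine holding several independent packet contexts so a stalled context does not idle the engine — can be sketched with generators standing in for per-packet work. The round-robin scheduling policy is an illustrative assumption:

```python
class ProcessingEngine:
    """Toy multi-context engine: each context slot holds an independent
    packet job; the engine round-robins across slots each cycle, so work
    from several packets is interleaved to raise throughput."""
    def __init__(self, n_contexts):
        self.contexts = [None] * n_contexts

    def load(self, slot, packet_steps):
        self.contexts[slot] = iter(packet_steps)

    def tick(self):
        """Advance each occupied context one step; return completed steps."""
        done = []
        for i, ctx in enumerate(self.contexts):
            if ctx is None:
                continue
            try:
                done.append(next(ctx))
            except StopIteration:
                self.contexts[i] = None   # context finished; slot freed
        return done
```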
  • Patent number: 8938434
    Abstract: Records are obtained from various sources, including public records, and are then augmented and grouped. At least one augment key is applied to each of the records resulting in an augment key value for each record. The records are augmented based on the key value. Augmenting the records includes combining the fields of the common records into a single record and then removing the duplicate records. The augmented records are then grouped according to household. At least one household grouping key is applied to each of the augmented records resulting in a household grouping key value for each record. The records having the same household grouping key value are displayed as a household grouping.
    Type: Grant
    Filed: November 22, 2005
    Date of Patent: January 20, 2015
    Assignee: Intelius, Inc.
    Inventors: Naveen K. Jain, John K. Arnold, Kevin R. Marcus, Niraj Anil Shah
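The two phases in the abstract — augmenting records by key (merging fields, dropping duplicates) and then grouping the augmented records by a household key — can be sketched like this. Field-conflict handling is simplified to last-write-wins, and the key functions are caller-supplied assumptions:

```python
def augment(records, augment_key):
    """Merge records sharing an augment-key value into one record and
    drop the duplicates (field union; conflicts resolved last-write-wins)."""
    merged = {}
    for rec in records:
        merged.setdefault(augment_key(rec), {}).update(rec)
    return list(merged.values())

def group_households(records, household_key):
    """Group augmented records by their household-grouping key value."""
    households = {}
    for rec in records:
        households.setdefault(household_key(rec), []).append(rec)
    return households
```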
  • Patent number: 8811410
    Abstract: A network device having a system performance measurement unit employing one or more global time stamps for measuring the device performance is disclosed. The device includes an ingress circuit, a global time counter, an egress circuit, and a processor. The ingress circuit is configured to receive a packet from an input port while the global time counter generates an arrival time stamp in accordance with the arrival time of the packet. The egress circuit is capable of forwarding the packet to other network devices via an output port. The processor, in one embodiment, is configured to calculate packet latency in response to the arrival time stamp.
    Type: Grant
    Filed: June 22, 2012
    Date of Patent: August 19, 2014
    Assignee: Tellabs, Inc.
    Inventors: Naveen K. Jain, Venkata Rangavajjhala
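The latency calculation above reduces to stamping arrival from a counter shared by ingress and egress, then differencing at departure. A minimal sketch with the global counter simulated as a plain integer (all names illustrative):

```python
class LatencyMeter:
    """Sketch of latency measurement with a shared (global) time counter:
    ingress stamps arrival, egress reads the same counter on departure,
    and latency is the difference. The counter is simulated here."""
    def __init__(self):
        self.global_time = 0      # stands in for the global time counter
        self.arrival = {}

    def ingress(self, packet_id):
        self.arrival[packet_id] = self.global_time

    def egress(self, packet_id):
        return self.global_time - self.arrival.pop(packet_id)
```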
  • Publication number: 20120290353
    Abstract: Systems and methods are described for determining an allocation of resources among a plurality of channels. In one embodiment, a business objective corresponding to a participant in a market or business sector is received as user input. A business segment corresponding to the market participant's products or interests, one or more products within the business segment, and a market channel of the plurality of channels are selected by the user, also as user input. Once these inputs have been established, a high-performing allocation of resources for achieving the business objective is generated for that market channel.
    Type: Application
    Filed: May 13, 2011
    Publication date: November 15, 2012
    Inventors: Naveen K. Jain, Amit Khanna, Vivek Ohri, Piyush Shandilya, Jasleen Kaur Sindhu
  • Patent number: 8228923
    Abstract: A network device having a system performance measurement unit employing one or more global time stamps for measuring the device performance is disclosed. The device includes an ingress circuit, a global time counter, an egress circuit, and a processor. The ingress circuit is configured to receive a packet from an input port while the global time counter generates an arrival time stamp in accordance with the arrival time of the packet. The egress circuit is capable of forwarding the packet to other network devices via an output port. The processor, in one embodiment, is configured to calculate packet latency in response to the arrival time stamp.
    Type: Grant
    Filed: January 9, 2008
    Date of Patent: July 24, 2012
    Assignee: Tellabs Operations, Inc.
    Inventors: Naveen K. Jain, Venkata Rangavajjhala
  • Patent number: 8179887
    Abstract: A network system, having an array of processing engines ("PEs") and a delay line, improves packet processing performance for time division multiplexing ("TDM") sequencing of PEs. The system includes an ingress circuit, a delay line, a demultiplexer, a tag memory, and a multiplexer. After the ingress circuit receives a packet from an input port, the delay line stores the packet together with a unique tag value. The delay line, in one embodiment, provides a predefined time delay for the packet. Once the demultiplexer forwards the packet to the array of PEs for packet processing, the tag memory stores the tag value indexed by PE number. The PE number identifies the PE in the array that was assigned to process the packet. The multiplexer is capable of multiplexing packets from the PE array and replacing the packet in the delay line with the processed packet in response to the tag value.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: May 15, 2012
    Assignee: Tellabs Operations, Inc.
    Inventors: Naveen K. Jain, Venkata Rangavajjhala
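The delay-line mechanism above — packets entering with a unique tag, aging through fixed slots, and being replaced in place by their processed version before they exit — can be sketched as follows (structure and names are illustrative):

```python
class DelayLine:
    """Toy delay line: packets enter with a unique tag and exit after a
    fixed number of slots; a processed result can replace the stored
    packet in place, located by tag, before it exits."""
    def __init__(self, delay):
        self.slots = [None] * delay     # each slot holds (tag, packet)

    def push(self, tag, packet):
        out = self.slots.pop(0)         # oldest entry leaves the line
        self.slots.append((tag, packet))
        return out

    def replace(self, tag, processed):
        for i, entry in enumerate(self.slots):
            if entry and entry[0] == tag:
                self.slots[i] = (tag, processed)
                return True
        return False                    # packet already left the line
```

If the PE finishes within the line's delay, the packet that eventually exits is the processed one.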
  • Publication number: 20120076140
    Abstract: A network processing device having multiple processing engines capable of providing multi-context parallel processing is disclosed. The device includes a receiver and a packet processor, wherein the receiver is capable of receiving packets at a predefined packet flow rate. The packet processor, in one embodiment, includes multiple processing engines, wherein each processing engine is further configured to include multiple context processing components. The context processing components are used to provide multi-context parallel processing to increase throughput.
    Type: Application
    Filed: November 4, 2011
    Publication date: March 29, 2012
    Applicant: Tellabs San Jose, Inc.
    Inventors: Naveen K. Jain, Venkata Rangavajjhala
  • Patent number: 8135878
    Abstract: A bus scheduling device having a group of direct memory access ("DMA") engines, a group of target modules ("TMs"), a read pending memory, and a bus arbiter is disclosed. A common bus, coupled with the DMA engines, TMs, and the read pending memory, is employed in the device for data transmission. The DMA engines are capable of transmitting and receiving data to and from the TMs via the common bus. The read pending memory is capable of storing information indicating the read status of the DMA engines. The bus arbiter arbitrates bus access in response to a bus allocation scheme and the information stored in the read pending memory.
    Type: Grant
    Filed: April 7, 2008
    Date of Patent: March 13, 2012
    Assignee: Tellabs San Jose, Inc.
    Inventor: Naveen K. Jain
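The arbitration described above combines a bus allocation scheme with the read-pending information. One possible concrete policy is round-robin that skips engines with an outstanding read; the patent does not mandate this exact scheme, so treat it as a sketch:

```python
def arbitrate(requesting, read_pending, last_grant):
    """Round-robin bus grant that also consults a read-pending table:
    an engine with an outstanding read is skipped so the bus is not
    granted while its data is still in flight. Policy is illustrative."""
    n = len(requesting)
    for step in range(1, n + 1):
        candidate = (last_grant + step) % n
        if requesting[candidate] and not read_pending[candidate]:
            return candidate
    return None      # no eligible engine this cycle
```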
  • Patent number: 8074054
    Abstract: A processing system includes a group of processing units ("PUs") arranged in a daisy chain configuration or a sequence capable of parallel processing. The processing system, in one embodiment, includes the PUs, a demultiplexer ("demux"), and a multiplexer ("mux"). The PUs are connected or linked in a sequence or daisy chain configuration wherein a first PU is located at the beginning of the sequence and the last PU is located at the end of the sequence. Each PU is configured to read an input data packet from a packet stream during a designated reading time frame. Outside of its designated reading time frame, a PU allows the packet stream to pass through. The demux forwards a packet stream to the first PU. The mux receives a packet stream from the last PU.
    Type: Grant
    Filed: December 12, 2007
    Date of Patent: December 6, 2011
    Assignee: Tellabs San Jose, Inc.
    Inventors: Venkata Rangavajjhala, Naveen K. Jain
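The daisy-chain behavior above — each PU consuming a packet only in its designated time slot and passing everything else downstream — can be sketched like this. The slot assignment and the modulo scheduling are illustrative assumptions:

```python
class ChainedPU:
    """Daisy-chained processing unit: it reads a packet only during its
    designated time slot and lets all other traffic flow to the next PU."""
    def __init__(self, slot, n_slots):
        self.slot, self.n_slots = slot, n_slots
        self.taken = []

    def forward(self, time, packet):
        if time % self.n_slots == self.slot:
            self.taken.append(packet)   # read it; nothing passes on
            return None
        return packet                   # outside our slot: pass through

def run_chain(pus, packets):
    """Send one packet per time step through the chain of PUs."""
    for time, pkt in enumerate(packets):
        for pu in pus:
            if pkt is None:
                break
            pkt = pu.forward(time, pkt)
```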
  • Patent number: 8072974
    Abstract: A network processing device having multiple processing engines capable of providing multi-context parallel processing is disclosed. The device includes a receiver and a packet processor, wherein the receiver is capable of receiving packets at a predefined packet flow rate. The packet processor, in one embodiment, includes multiple processing engines, wherein each processing engine is further configured to include multiple context processing components. The context processing components are used to provide multi-context parallel processing to increase throughput.
    Type: Grant
    Filed: July 18, 2008
    Date of Patent: December 6, 2011
    Assignee: Tellabs San Jose, Inc.
    Inventors: Naveen K. Jain, Venkata Rangavajjhala
  • Patent number: 8072882
    Abstract: A method and apparatus for improving packet processing employing a network flow control mechanism are disclosed. A network process, in one embodiment, suspends distribution of incoming packets to one or more packet processing engines ("PEs") upon detecting a stalling request. After identifying currently executing operations initiated by one or more kicking circuits before the issuance of the stalling request, the process allows the currently executing operations to complete despite the activation of the stalling request.
    Type: Grant
    Filed: January 23, 2009
    Date of Patent: December 6, 2011
    Assignee: Tellabs San Jose, Inc.
    Inventors: Naveen K. Jain, Venkata Rangavajjhala
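The stall semantics in the last abstract — new dispatch is suspended immediately, but operations already in flight before the request are allowed to finish — can be sketched as follows (names are illustrative):

```python
class FlowController:
    """Sketch of the stall behavior: once a stall is requested, no new
    packet is dispatched to the engines, but operations already in
    flight before the request still run to completion."""
    def __init__(self):
        self.stalled = False
        self.in_flight = []
        self.completed = []

    def dispatch(self, op):
        if self.stalled:
            return False        # distribution is suspended
        self.in_flight.append(op)
        return True

    def stall(self):
        self.stalled = True

    def drain(self):
        # In-flight operations complete despite the active stall.
        self.completed.extend(self.in_flight)
        self.in_flight.clear()
        return self.completed
```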