Patents by Inventor Nayan Amrutlal Suthar

Nayan Amrutlal Suthar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240106750
    Abstract: Examples include techniques for multipathing over reliable paths and for completion reporting. The techniques provide reliability over multiple paths routed through a network between the source and the target of a message, and provide completion reporting for messages sent via packets routed through a network over multiple paths.
    Type: Application
    Filed: December 12, 2023
    Publication date: March 28, 2024
    Inventors: Nayan Amrutlal SUTHAR, Uri ELZUR, Josh D. COLLIER
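The completion-reporting idea in the abstract above can be sketched as a tracker that reports a message complete only once every packet has been acknowledged, no matter which path each ack arrived on. This is an invented illustration, not the patented mechanism; the class and method names are assumptions.

```python
# Hypothetical sketch of per-message completion tracking when the packets
# of one message are sprayed across multiple network paths. Acks may
# arrive out of order because paths have different latencies.

class MultipathMessage:
    """Tracks acknowledgements for one message split across several paths."""

    def __init__(self, msg_id, num_packets):
        self.msg_id = msg_id
        self.pending = set(range(num_packets))  # sequence numbers not yet acked

    def ack(self, seq):
        """Record an ack arriving on any path, in any order."""
        self.pending.discard(seq)

    @property
    def complete(self):
        # Completion is reported only once every packet has been acked.
        return not self.pending

msg = MultipathMessage(msg_id=1, num_packets=4)
for seq in (2, 0, 3):        # acks arrive out of order across paths
    msg.ack(seq)
print(msg.complete)          # False: still waiting on packet 1
msg.ack(1)
print(msg.complete)          # True: completion can now be reported
```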
  • Publication number: 20220351326
    Abstract: Examples described herein relate to a first graphics processing unit (GPU) with at least one integrated communications system. The integrated communications system applies a reliability protocol to communicate with at least one integrated communications system associated with a second GPU in order to copy data from a first memory region, associated with the first GPU, to a second memory region associated with the second GPU.
    Type: Application
    Filed: June 29, 2022
    Publication date: November 3, 2022
    Inventors: Todd RIMMER, Mark DEBBAGE, Bruce G. WARREN, Sayantan SUR, Nayan Amrutlal SUTHAR, Ajaya DURG
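As a loose illustration of the reliability-protocol idea in the abstract above (not the patented design), a region-to-region copy can be sketched as resending each chunk until it is acknowledged, so a dropped transfer is retried rather than lost. The `send` callback, the lossy link, and all names here are invented.

```python
# Hypothetical sketch: copy one memory region to another over an
# unreliable link, retransmitting any chunk whose ack never arrives.

def reliable_copy(src, dst, send):
    """Copy src into dst chunk by chunk, retrying unacknowledged sends."""
    for i, chunk in enumerate(src):
        while not send(i, chunk):    # send returns True once acked
            pass                     # retransmit until the ack arrives
        dst[i] = chunk
    return dst

drops = {1}                          # simulate one dropped first attempt
def lossy_send(i, chunk):
    if i in drops:
        drops.discard(i)             # drop once, then succeed on retry
        return False
    return True

print(reliable_copy([10, 20, 30], [0, 0, 0], lossy_send))  # [10, 20, 30]
```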
  • Publication number: 20220103484
    Abstract: Examples described herein relate to a network interface device that is to adjust a transmission rate of packets based on a number of flows contributing to congestion and/or based on whether latency is increasing or decreasing. In some examples, adjusting the transmission rate of packets based on a number of flows contributing to congestion comprises adjusting an additive increase (AI) parameter based on the number of flows contributing to congestion. In some examples, latency is based on a measured roundtrip time and a baseline roundtrip time.
    Type: Application
    Filed: December 8, 2021
    Publication date: March 31, 2022
    Inventors: Roberto PENARANDA CEBRIAN, Robert SOUTHWORTH, Pedro YEBENES SEGURA, Rong PAN, Allister ALEMANIA, Nayan Amrutlal SUTHAR, Malek MUSLEH
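The rate-adjustment behavior described in the abstract above can be sketched as scaling the additive-increase step by the number of flows sharing the congested link, with a latency signal derived from measured versus baseline round-trip time. This is an illustrative sketch under invented names and constants, not the patented algorithm.

```python
# Illustrative congestion-control step: back off when latency is rising
# (measured RTT above baseline), otherwise increase additively with the
# AI step divided among the flows contributing to congestion.

def next_rate(rate, ai_step, num_flows, rtt_measured, rtt_baseline,
              decrease_factor=0.5):
    """Return the next transmission rate for one flow."""
    if rtt_measured > rtt_baseline:
        return rate * decrease_factor      # latency increasing: back off
    return rate + ai_step / max(num_flows, 1)  # share the AI step across flows

rate = next_rate(rate=100.0, ai_step=10.0, num_flows=4,
                 rtt_measured=1.0, rtt_baseline=1.0)
print(rate)  # 102.5: the 10-unit AI step is shared by 4 flows
```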
  • Publication number: 20210318980
    Abstract: A processor unit comprising a first controller to couple to a host processing unit over a first link; a second controller to couple to a second processor unit over a second link, wherein the second processor unit is to couple to the host processing unit via a third link; and circuitry to determine whether to send a cache coherent request to the host processing unit over the first link or over the second link via the second processor unit.
    Type: Application
    Filed: June 25, 2021
    Publication date: October 14, 2021
    Applicant: Intel Corporation
    Inventors: Rahul Pal, Nayan Amrutlal Suthar, David M. Puffer, Ashok Jagannathan
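The routing decision in the abstract above, where a cache-coherent request can reach the host either directly or through a peer processor unit, can be sketched as a simple path choice. The load metric and all names here are invented for illustration; the patent does not specify this policy.

```python
# Hypothetical sketch: pick the less loaded of two paths for a
# cache-coherent request, either the direct link to the host or the
# indirect route through a peer processor unit.

def choose_link(direct_busy, via_peer_busy):
    """Return which path a cache-coherent request should take."""
    return "direct" if direct_busy <= via_peer_busy else "via_peer"

print(choose_link(direct_busy=3, via_peer_busy=7))  # direct
print(choose_link(direct_busy=9, via_peer_busy=2))  # via_peer
```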
  • Patent number: 9009448
    Abstract: Disclosed is an architecture, system and method for performing multi-thread DFA descents on a single input stream. An executer performs DFA transitions from a plurality of threads, each starting at a different point in the input stream. A plurality of executers may operate in parallel with each other, and a plurality of thread contexts operate concurrently within each executer to maintain the context of each thread that is state transitioning. A scheduler in each executer arbitrates instructions for the threads into at least one pipeline where the instructions are executed. Tokens may be output from each of the plurality of executers to a token processor, which sorts and filters the tokens into dispatch order.
    Type: Grant
    Filed: January 18, 2012
    Date of Patent: April 14, 2015
    Assignee: Intel Corporation
    Inventors: Michael Ruehle, Umesh Ramkrishnarao Kasture, Vinay Janardan Naik, Nayan Amrutlal Suthar, Robert J. McMillen
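The multi-thread descent idea above can be sketched in miniature: several scan "threads" start at different offsets in one input and walk the same DFA, and their matches are sorted back into input order, loosely echoing the token processor's dispatch ordering. The toy DFA here matches the string "ab"; everything about this sketch is illustrative, not the patented hardware design.

```python
# Toy multi-thread DFA descent over a single input stream. Each descent
# starts at a different offset; tokens are (offset, matched text).

DFA = {(0, "a"): 1, (1, "b"): 2}   # transition table; state 2 accepts
ACCEPT = 2

def descend(data, start):
    """Run one DFA descent from a starting offset; return a token on match."""
    state = 0
    for i in range(start, len(data)):
        state = DFA.get((state, data[i]))
        if state is None:
            return None             # dead state: this thread terminates
        if state == ACCEPT:
            return (start, data[start:i + 1])
    return None

data = "xabab"
# One descent per offset (conceptually concurrent threads), then sort
# the emitted tokens into dispatch (input) order.
tokens = [t for s in range(len(data)) if (t := descend(data, s))]
tokens.sort()
print(tokens)  # [(1, 'ab'), (3, 'ab')]
```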
  • Publication number: 20140215090
    Abstract: In a DFA, a sub-scan is executed during a DFA scan. The sub-scan consumes input symbols out of sequence relative to the DFA scan, either forward or in reverse. An input symbol in the DFA scan is matched. A sub-scan command is supplied to the DFA. The sub-scan command is executed and at least one symbol is consumed in the sub-scan.
    Type: Application
    Filed: January 31, 2013
    Publication date: July 31, 2014
    Applicant: LSI CORPORATION
    Inventors: Michael Ruehle, Adam Scislowicz, Nayan Amrutlal Suthar, Umesh Ramkrishnarao Kasture
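The sub-scan behavior in the abstract above, consuming symbols out of sequence relative to the main scan, can be sketched as a forward scan that, on matching a trigger symbol, launches a reverse sub-scan from that position. All names and the trigger/target framing are invented for illustration; the patent covers both forward and reverse sub-scans.

```python
# Illustrative sketch: a forward scan consumes symbols left to right;
# matching a trigger symbol launches a sub-scan that consumes symbols
# in reverse from that point until a target symbol is found.

def reverse_subscan(data, pos, target):
    """Consume symbols backward from pos; return the target's index, if any."""
    for i in range(pos, -1, -1):
        if data[i] == target:
            return i
    return None

def scan(data, trigger, target):
    """Forward scan; at each trigger, record where the reverse sub-scan lands."""
    hits = []
    for pos, sym in enumerate(data):
        if sym == trigger:
            hits.append((pos, reverse_subscan(data, pos, target)))
    return hits

print(scan("a..b..b", trigger="b", target="a"))  # [(3, 0), (6, 0)]
```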
  • Publication number: 20130046954
    Abstract: Disclosed is an architecture, system and method for performing multi-thread DFA descents on a single input stream. An executer performs DFA transitions from a plurality of threads, each starting at a different point in the input stream. A plurality of executers may operate in parallel with each other, and a plurality of thread contexts operate concurrently within each executer to maintain the context of each thread that is state transitioning. A scheduler in each executer arbitrates instructions for the threads into at least one pipeline where the instructions are executed. Tokens may be output from each of the plurality of executers to a token processor, which sorts and filters the tokens into dispatch order.
    Type: Application
    Filed: January 18, 2012
    Publication date: February 21, 2013
    Inventors: Michael Ruehle, Umesh Ramkrishnarao Kasture, Vinay Janardan Naik, Nayan Amrutlal Suthar, Robert J. McMillen
  • Publication number: 20100080231
    Abstract: A system and method are provided for restoring the arrival order of a plurality of packets after receipt and prior to retransmission. The system is configured to process a first number of packets through a high latency path and all remaining packets through a lower latency path. The received packets are stored after processing in a queue memory until either (a) all of the packets routed through the high latency path are fully processed, or (b) a packet-processing time period has expired. The packets stored in the queue are transmitted from the system in the order in which they were received, and the additional data packets are retransmitted without storage in the queue memory.
    Type: Application
    Filed: September 26, 2008
    Publication date: April 1, 2010
    Inventors: Deepak Lala, Nayan Amrutlal Suthar, Umesh Ramkrishnarao Kasture
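The queue-based reordering described in the abstract above can be sketched as follows: fast-path results are buffered until the slow path drains, then everything is released in original arrival order. A real device would also release on the timeout the abstract mentions; the function and its completion model are invented for illustration.

```python
# Simplified sketch of restoring arrival order when an initial burst of
# packets takes a high-latency processing path and later packets take a
# lower-latency one.

def restore_order(arrivals, slow_count):
    """Return packet ids in arrival order despite slow/fast path reordering."""
    slow = arrivals[:slow_count]     # first packets: high-latency path
    fast = arrivals[slow_count:]     # remaining packets: lower-latency path
    queue = []
    for pkt in fast + slow:          # completion order: fast path finishes first
        queue.append(pkt)            # buffer until the slow path has drained
    queue.sort()                     # release in original arrival order
    return queue

print(restore_order([0, 1, 2, 3, 4], slow_count=2))  # [0, 1, 2, 3, 4]
```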