Patents by Inventor Paul J. Gyugyi

Paul J. Gyugyi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9594675
    Abstract: Virtual chip enable techniques perform memory access operations on virtual chip enables rather than physical chip enables. Each virtual chip enable is a construct that includes attributes that correspond to a unique physical or logical memory device.
    Type: Grant
    Filed: December 31, 2009
    Date of Patent: March 14, 2017
    Assignee: NVIDIA Corporation
    Inventors: Howard Tsai, Dmitry Vyshetsky, Neal Meininger, Paul J. Gyugyi
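
The abstract for patent 9594675 describes issuing memory accesses against virtual chip enables, each of which carries attributes mapping it to a unique physical or logical memory device. The following is a minimal, hypothetical C sketch of that mapping idea; the structure fields, table contents, and function names are illustrative assumptions, not the patented implementation.

```c
/*
 * Minimal sketch of the virtual chip enable idea from the abstract above.
 * All names and fields are hypothetical illustrations, not the patented design.
 */
#include <stdint.h>
#include <stdio.h>

/* Attributes that tie one virtual chip enable to a physical or logical device. */
typedef struct {
    uint8_t  phys_ce;       /* physical chip-enable line actually asserted   */
    uint64_t base_addr;     /* base address of the region behind this CE     */
    uint64_t size;          /* size of the device or logical partition       */
    uint8_t  timing_class;  /* per-device timing attributes (illustrative)   */
} virtual_ce_t;

/* Controller-wide table of virtual chip enables. */
static const virtual_ce_t vce_table[] = {
    { .phys_ce = 0, .base_addr = 0x00000000, .size = 0x20000000, .timing_class = 1 },
    { .phys_ce = 0, .base_addr = 0x20000000, .size = 0x20000000, .timing_class = 1 },
    { .phys_ce = 1, .base_addr = 0x40000000, .size = 0x40000000, .timing_class = 2 },
};

/* Resolve a memory access issued against a virtual CE to a physical CE + offset. */
static int resolve_access(unsigned vce, uint64_t addr,
                          uint8_t *phys_ce, uint64_t *dev_offset)
{
    if (vce >= sizeof(vce_table) / sizeof(vce_table[0]))
        return -1;                                 /* no such virtual CE      */
    const virtual_ce_t *v = &vce_table[vce];
    if (addr >= v->size)
        return -1;                                 /* outside this device     */
    *phys_ce    = v->phys_ce;
    *dev_offset = v->base_addr + addr;
    return 0;
}

int main(void)
{
    uint8_t ce; uint64_t off;
    if (resolve_access(1, 0x1000, &ce, &off) == 0)
        printf("virtual CE 1 -> physical CE %u, offset 0x%llx\n",
               ce, (unsigned long long)off);
    return 0;
}
```
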
  • Patent number: 8738800
    Abstract: Described are data structures, and methodology for forming same, for network protocol processing. A method for creating data structures for firewalling and network address translating is described. A method for creating data structures for physical layer addressing is described. A method for security protocol support using a data structure is described. A method for creating at least one data structure sized responsive to whether a firewall is activated is described. A data structure for routing packets is described. A method of forming hashing table chains is described. Additionally, method and apparatus for tracking packet states is described. More particularly, Transmission Control Protocol (“TCP”) tracking of states for packets is described. In an embodiment, a division between software states and hardware states is made as a packet is processed by both software and hardware. Additionally, method and apparatus for network protocol processing are described.
    Type: Grant
    Filed: December 3, 2007
    Date of Patent: May 27, 2014
    Assignee: NVIDIA Corporation
    Inventors: Thomas A. Maufer, Paul J. Gyugyi, Sameer Nanda, Paul J. Sidenblad
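
Patent 8738800's abstract mentions dividing TCP state tracking between software and hardware as a packet is processed by both. The toy model below assumes a per-connection record with an "owner" field to suggest how such a split might be represented; the states, policy, and names are invented for illustration only.

```c
/*
 * Illustrative sketch only: a per-connection record whose state tracking can be
 * handed from the software stack to offload hardware. Not the patented design.
 */
#include <stdint.h>
#include <stdio.h>

typedef enum { TCP_SYN_SENT, TCP_ESTABLISHED, TCP_FIN_WAIT, TCP_CLOSED } tcp_state_t;
typedef enum { OWNER_SOFTWARE, OWNER_HARDWARE } owner_t;

typedef struct {
    uint32_t    conn_id;
    tcp_state_t sw_state;  /* state as last seen by the software stack   */
    tcp_state_t hw_state;  /* state as tracked by the offload hardware   */
    owner_t     owner;     /* which side currently advances the state    */
} conn_track_t;

/* Hand tracking of an established connection to hardware (illustrative policy). */
static void maybe_offload(conn_track_t *c)
{
    if (c->owner == OWNER_SOFTWARE && c->sw_state == TCP_ESTABLISHED) {
        c->hw_state = c->sw_state;   /* hardware picks up where software left off */
        c->owner    = OWNER_HARDWARE;
    }
}

int main(void)
{
    conn_track_t c = { .conn_id = 7, .sw_state = TCP_ESTABLISHED,
                       .hw_state = TCP_CLOSED, .owner = OWNER_SOFTWARE };
    maybe_offload(&c);
    printf("conn %u owner=%s\n", c.conn_id,
           c.owner == OWNER_HARDWARE ? "hardware" : "software");
    return 0;
}
```
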
  • Patent number: 8732350
    Abstract: A system for improving direct memory access (DMA) offload. The system includes a processor, a data DMA engine and memory components. The processor selects an executable command comprising subcommands. The DDMA engine executes DMA operations related to a subcommand to perform memory transfer operations. The memory components store the plurality of subcommands and status data resulting from DMA operations. Each of the memory components has a corresponding token associated therewith. Possession of a token allocates its associated memory component to the processor or the DDMA engine possessing the token, making it inaccessible to the other. A first memory component and a second memory component of the plurality of memory components are used by the processor and the DDMA engine respectively and simultaneously. Tokens, e.g., the first and/or the second, are exchanged between the DDMA engine and the processor when the DDMA engine and/or the microcontroller complete accessing associated memory components.
    Type: Grant
    Filed: December 19, 2008
    Date of Patent: May 20, 2014
    Assignee: NVIDIA Corporation
    Inventors: Dmitry Vyshetski, Howard Tsai, Paul J. Gyugyi
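
Patent 8732350's abstract describes memory components that are each guarded by a token, with the processor and DMA engine exchanging tokens when they finish with a component. The sketch below is a minimal model of that handoff under assumed names; the two-buffer arrangement and the actors are illustrative stand-ins.

```c
/*
 * A minimal sketch of a token-based handoff: each memory component is guarded
 * by a token, and whoever holds the token may access it. Names are hypothetical.
 */
#include <stdio.h>

typedef enum { HELD_BY_PROCESSOR, HELD_BY_DMA_ENGINE } token_holder_t;

typedef struct {
    const char    *name;    /* e.g. subcommand buffer or status buffer  */
    token_holder_t token;   /* current owner; the other side must wait  */
} mem_component_t;

/* Pass the token to the other side once the current owner is done with it. */
static void release_token(mem_component_t *m)
{
    m->token = (m->token == HELD_BY_PROCESSOR) ? HELD_BY_DMA_ENGINE
                                               : HELD_BY_PROCESSOR;
}

int main(void)
{
    /* Processor fills subcommands in one component while the DMA engine
       writes status into the other, so both work simultaneously. */
    mem_component_t subcmd = { "subcommand buffer", HELD_BY_PROCESSOR  };
    mem_component_t status = { "status buffer",     HELD_BY_DMA_ENGINE };

    /* Processor finishes writing subcommands; DMA engine finishes status. */
    release_token(&subcmd);   /* DMA engine may now consume the subcommands */
    release_token(&status);   /* processor may now read the status data     */

    printf("%s now held by %s\n", subcmd.name,
           subcmd.token == HELD_BY_DMA_ENGINE ? "DMA engine" : "processor");
    return 0;
}
```
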
  • Patent number: 8417852
    Abstract: A system and methods of uploading payload data to user buffers in system memory and of uploading partially processed frame data to legacy buffers allocated in Operating System memory space are described. User buffers are stored in a portion of system memory allocated to an application program, therefore data stored in user buffers does not need to be copied from another portion of system memory to the portion of system memory allocated to the application program. When partially processed frame data is uploaded by hardware to a legacy buffer in system memory, a tag, uniquely identifying the legacy buffer location is transferred by the hardware to a TCP stack, enabling the TCP stack to locate the legacy buffer.
    Type: Grant
    Filed: December 9, 2003
    Date of Patent: April 9, 2013
    Assignee: NVIDIA Corporation
    Inventors: Anand Rajagopalan, Radoslav Danilak, Paul J. Gyugyi, Ashutosh K. Jha, Thomas A. Maufer, Sameer Nanda, Paul J. Sidenblad
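
Patent 8417852's abstract describes hardware uploading partially processed frame data to a legacy buffer and passing a tag that lets the TCP stack locate that buffer. The following rough C sketch illustrates the tagging idea only; the table size, buffer layout, and function names are assumptions for the example.

```c
/*
 * Rough sketch of the tagging idea: hardware stores a partially processed frame
 * in a legacy buffer and hands the TCP stack a tag that identifies the buffer.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_LEGACY_BUFFERS 4
#define LEGACY_BUF_SIZE    2048

/* Legacy buffers allocated in operating-system memory space. */
static uint8_t legacy_buffers[NUM_LEGACY_BUFFERS][LEGACY_BUF_SIZE];

/* "Hardware" side: copy a partially processed frame and return its tag. */
static uint32_t hw_upload_to_legacy(const uint8_t *frame, size_t len, uint32_t next_tag)
{
    uint32_t tag = next_tag % NUM_LEGACY_BUFFERS;   /* tag identifies the buffer */
    memcpy(legacy_buffers[tag], frame, len < LEGACY_BUF_SIZE ? len : LEGACY_BUF_SIZE);
    return tag;
}

/* "TCP stack" side: use the tag to locate the legacy buffer for final processing. */
static uint8_t *stack_locate_by_tag(uint32_t tag)
{
    return legacy_buffers[tag % NUM_LEGACY_BUFFERS];
}

int main(void)
{
    uint8_t frame[] = { 0x45, 0x00, 0x00, 0x28 };   /* start of an IPv4 header */
    uint32_t tag = hw_upload_to_legacy(frame, sizeof frame, 0);
    uint8_t *buf = stack_locate_by_tag(tag);
    printf("tag %u -> first byte 0x%02x\n", tag, buf[0]);
    return 0;
}
```
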
  • Patent number: 8190767
    Abstract: Described are data structures, and methodology for forming same, for network protocol processing. A method for creating data structures for firewalling and network address translating is described. A method for creating data structures for physical layer addressing is described. A method for security protocol support using a data structure is described. A method for creating at least one data structure sized responsive to whether a firewall is activated is described. A data structure for routing packets is described. A method of forming hashing table chains is described. Additionally, method and apparatus for tracking packet states is described. More particularly, Transmission Control Protocol (“TCP”) tracking of states for packets is described. In an embodiment, a division between software states and hardware states is made as a packet is processed by both software and hardware. Additionally, method and apparatus for network protocol processing are described.
    Type: Grant
    Filed: December 3, 2007
    Date of Patent: May 29, 2012
    Assignee: NVIDIA Corporation
    Inventors: Thomas A. Maufer, Paul J. Gyugyi, Sameer Nanda, Paul J. Sidenblad
  • Patent number: 7991918
    Abstract: A method and apparatus for transmitting commands between a TCP stack and an offload unit and for communicating receive and transmit data buffer locations is described. A command ring buffer stored in system memory is used to transmit commands from the TCP stack to the offload unit and to transmit command status from the offload unit to the TCP stack. A notification ring buffer is used to transmit connection information from the offload unit to the TCP stack. Other ring buffers are used to transmit locations of transmit buffers or receive buffers stored in system memory from the TCP stack to the offload unit.
    Type: Grant
    Filed: December 9, 2003
    Date of Patent: August 2, 2011
    Assignee: NVIDIA Corporation
    Inventors: Ashutosh K. Jha, Radoslav Danilak, Paul J. Gyugyi, Thomas A. Maufer, Sameer Nanda, Anand Rajagopalan, Paul J. Sidenblad
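
Patent 7991918's abstract centers on ring buffers in system memory that carry commands from the TCP stack to the offload unit and status back again. Below is a hedged, self-contained sketch of one such command ring; the fixed ring size, entry fields, and producer/consumer roles are assumptions made for the example, not the actual interface.

```c
/*
 * Sketch of a command ring in system memory: the TCP stack produces commands,
 * the offload unit consumes them and writes back status. Names are illustrative.
 */
#include <stdint.h>
#include <stdio.h>

#define RING_ENTRIES 8   /* must be a power of two for the mask trick below */

typedef struct {
    uint32_t opcode;     /* command from the TCP stack to the offload unit  */
    uint32_t status;     /* completion status written back by the offload   */
} ring_entry_t;

typedef struct {
    ring_entry_t entries[RING_ENTRIES];
    uint32_t     head;   /* next slot the producer (TCP stack) writes       */
    uint32_t     tail;   /* next slot the consumer (offload unit) reads     */
} cmd_ring_t;

static int ring_post(cmd_ring_t *r, uint32_t opcode)
{
    if (r->head - r->tail == RING_ENTRIES)
        return -1;                                   /* ring full            */
    r->entries[r->head & (RING_ENTRIES - 1)] = (ring_entry_t){ opcode, 0 };
    r->head++;
    return 0;
}

static int ring_consume(cmd_ring_t *r)
{
    if (r->tail == r->head)
        return -1;                                   /* ring empty           */
    ring_entry_t *e = &r->entries[r->tail & (RING_ENTRIES - 1)];
    e->status = 1;                                   /* offload marks it done */
    r->tail++;
    return 0;
}

int main(void)
{
    cmd_ring_t ring = { 0 };
    ring_post(&ring, 0x10);          /* TCP stack posts a command            */
    ring_consume(&ring);             /* offload unit processes and completes */
    printf("status of slot 0: %u\n", ring.entries[0].status);
    return 0;
}
```
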
  • Patent number: 7974209
    Abstract: Method and apparatus for packet processing by re-insertion into network interface circuitry. A method for handling a burst of packets sent to network interface circuitry includes checking for a connection table entry for received packets, and responsive to non-existence of the connection table entry for the received packets, sending the packets to network interface software for processing. The network interface software processing includes: building the connection table entry; processing the packets; and sending the packets as processed to the network interface circuitry. Additionally, a method for re-inserting a packet responsive to an active audit mode is described.
    Type: Grant
    Filed: December 13, 2007
    Date of Patent: July 5, 2011
    Assignee: NVIDIA Corporation
    Inventors: Thomas A. Maufer, Paul J. Gyugyi, Sameer Nanda, Paul J. Sidenblad
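
Patent 7974209's abstract describes punting packets that miss the connection table to software, which builds the entry and re-inserts the packets into the hardware path. The toy program below walks that flow with an invented hash table and flow identifier; it is a simplified stand-in, not the actual circuitry or driver code.

```c
/*
 * Toy illustration: packets without a connection table entry go to software,
 * which builds the entry and re-inserts the packets into the hardware path.
 */
#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE 16

typedef struct { uint32_t flow_id; int valid; } conn_entry_t;
static conn_entry_t conn_table[TABLE_SIZE];

static conn_entry_t *lookup(uint32_t flow_id)
{
    conn_entry_t *e = &conn_table[flow_id % TABLE_SIZE];
    return (e->valid && e->flow_id == flow_id) ? e : NULL;
}

/* Software slow path: build the entry, then hand the packet back to hardware. */
static void software_process(uint32_t flow_id)
{
    conn_entry_t *e = &conn_table[flow_id % TABLE_SIZE];
    e->flow_id = flow_id;
    e->valid   = 1;
    printf("flow %u: entry built, packet re-inserted into hardware\n", flow_id);
}

/* Hardware fast path: handle the packet directly when an entry exists. */
static void receive_packet(uint32_t flow_id)
{
    if (lookup(flow_id))
        printf("flow %u: handled in hardware\n", flow_id);
    else
        software_process(flow_id);
}

int main(void)
{
    receive_packet(42);   /* first packet of a burst: no entry yet */
    receive_packet(42);   /* subsequent packets hit the fast path  */
    return 0;
}
```
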
  • Publication number: 20110161561
    Abstract: Virtual chip enable techniques perform memory access operations on virtual chip enables rather than physical chip enables. Each virtual chip enable is a construct that includes attributes that correspond to a unique physical or logical memory device.
    Type: Application
    Filed: December 31, 2009
    Publication date: June 30, 2011
    Applicant: NVIDIA CORPORATION
    Inventors: Howard Tsai, Dmitry Vyshetsky, Neal Meininger, Paul J. Gyugyi
  • Patent number: 7934255
    Abstract: A computing system offloads packet classification from a central processing unit to a graphics processing unit. In one implementation input data packets to be classified are represented as a first texture, classification rules are represented as a second texture, and a shading operation is performed to classify packets.
    Type: Grant
    Filed: December 11, 2006
    Date of Patent: April 26, 2011
    Assignee: NVIDIA Corporation
    Inventor: Paul J. Gyugyi
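
Patent 7934255's abstract represents packets as one texture and classification rules as another, then classifies via a shading operation. The conceptual C sketch below models the textures as plain 2D arrays and the shader as a per-packet loop to show the data layout idea; it uses no real GPU API, and the fields, rules, and actions are invented for illustration.

```c
/*
 * Conceptual sketch: "textures" are plain 2D arrays here and the "shader" is a
 * per-packet loop, standing in for a real GPU shading pass.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_PACKETS 3
#define NUM_RULES   2

/* "Packet texture": one row per packet, columns = header fields (illustrative). */
static const uint32_t packet_tex[NUM_PACKETS][2] = {
    { 0x0A000001, 80 },   /* src 10.0.0.1, dst port 80   */
    { 0x0A000002, 22 },   /* src 10.0.0.2, dst port 22   */
    { 0xC0A80001, 443 },  /* src 192.168.0.1, dst port 443 */
};

/* "Rule texture": one row per rule, columns = match field and action. */
static const uint32_t rule_tex[NUM_RULES][2] = {
    { 22,  0 },           /* drop anything to port 22    */
    { 80,  1 },           /* accept anything to port 80  */
};

int main(void)
{
    /* A fragment shader would run this body once per packet in parallel. */
    for (int p = 0; p < NUM_PACKETS; p++) {
        int action = 1;                          /* default: accept           */
        for (int r = 0; r < NUM_RULES; r++)
            if (packet_tex[p][1] == rule_tex[r][0])
                action = (int)rule_tex[r][1];
        printf("packet %d -> %s\n", p, action ? "accept" : "drop");
    }
    return 0;
}
```
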
  • Patent number: 7913294
    Abstract: Method and apparatus for network protocol filtering of a packet is described. An index to a table is obtained and stored to travel with the packet. The index is obtainable to access the table to obtain packet information. In particular, a method for inbound network address translation packet filtering and a method for outbound packet filtering are described.
    Type: Grant
    Filed: June 24, 2003
    Date of Patent: March 22, 2011
    Assignee: NVIDIA Corporation
    Inventors: Thomas A. Maufer, Paul J. Gyugyi, Sameer Nanda, Paul J. Sidenblad
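
Patent 7913294's abstract describes storing a table index that travels with the packet so later filtering stages can use it to fetch packet information. The short sketch below shows one way such an index might be carried and consumed; the table contents, structure fields, and verdict logic are illustrative assumptions only.

```c
/*
 * Minimal sketch: an index into a lookup table is attached to the packet so
 * later stages can fetch packet information without repeating the lookup.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t inside_ip;    /* pre-NAT address          */
    uint32_t outside_ip;   /* post-NAT address         */
    int      allow;        /* filtering verdict cached */
} table_entry_t;

static const table_entry_t nat_table[] = {
    { 0x0A000005, 0xC6336401, 1 },
    { 0x0A000006, 0xC6336402, 0 },
};

typedef struct {
    uint32_t payload_len;
    uint16_t table_index;  /* stored once, then travels with the packet */
} packet_t;

/* A later stage uses the carried index instead of repeating the lookup. */
static int filter(const packet_t *pkt)
{
    return nat_table[pkt->table_index].allow;
}

int main(void)
{
    packet_t pkt = { .payload_len = 512, .table_index = 1 };
    printf("packet via entry %u: %s\n", pkt.table_index,
           filter(&pkt) ? "pass" : "drop");
    return 0;
}
```
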
  • Patent number: 7860912
    Abstract: An embodiment of the invention includes a first pseudo-random number generator that is configured to produce a first sequence of values at a first clock rate. Also, a second pseudo-random number generator is configured to produce a second sequence of values at a second clock rate. The second clock rate is based on the first sequence of values and the first clock rate. A logical module is connected to the first pseudo-random number generator and the second pseudo-random number generator. The logical module is configured to produce an output value based on at least a portion of a value from the first sequence of values and at least a portion of a value from the second sequence of values.
    Type: Grant
    Filed: December 8, 2006
    Date of Patent: December 28, 2010
    Assignee: NVIDIA Corporation
    Inventors: Paul J. Gyugyi, Tony C. Tam
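
Patent 7860912's abstract describes a second pseudo-random generator whose clock rate depends on the first generator's output, with a logical module combining portions of both values. The model below approximates that with two 16-bit LFSRs, where the first value decides how many steps the second takes per output; the LFSR choice, step policy, and XOR combiner are assumptions made only to keep the example small.

```c
/*
 * Illustrative model: a first generator runs at a fixed rate, the second
 * advances a variable number of steps driven by the first's output, and a
 * logical stage combines portions of both values.
 */
#include <stdint.h>
#include <stdio.h>

/* Simple 16-bit Fibonacci LFSR step (taps 16, 14, 13, 11). */
static uint16_t lfsr_step(uint16_t s)
{
    uint16_t bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1u;
    return (uint16_t)((s >> 1) | (bit << 15));
}

int main(void)
{
    uint16_t gen1 = 0xACE1;   /* first generator, stepped once per output    */
    uint16_t gen2 = 0x1D2C;   /* second generator, stepped a variable count  */

    for (int i = 0; i < 5; i++) {
        gen1 = lfsr_step(gen1);

        /* The first sequence controls how fast the second one advances,
           standing in for the derived second clock rate. */
        int steps = 1 + (gen1 & 0x3);
        for (int k = 0; k < steps; k++)
            gen2 = lfsr_step(gen2);

        /* Logical module: combine portions of both values into the output. */
        uint16_t out = (uint16_t)((gen1 & 0xFF00) ^ (gen2 & 0x00FF));
        printf("output %d: 0x%04x\n", i, out);
    }
    return 0;
}
```
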
  • Patent number: 7818806
    Abstract: Diagnostic software often requires pattern matching scanning to be performed to detect problems such as computer viruses or unwanted intruders. A computing system offloads pattern matching scanning from a central processing unit to a graphics processing unit.
    Type: Grant
    Filed: November 8, 2005
    Date of Patent: October 19, 2010
    Assignee: NVIDIA Corporation
    Inventors: Paul J. Gyugyi, Radoslav Danilak
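
Patent 7818806's abstract is about offloading pattern-matching scans (for example, virus signatures) from the CPU to the GPU. The small sketch below shows only the per-offset comparison work that such a scan performs, with a comment noting where data parallelism would apply; the naive loop, sample data, and signature are placeholders, not the patented method.

```c
/*
 * Sketch of a signature scan. The per-offset comparison is the work that the
 * abstract proposes offloading to the GPU; this CPU loop is a placeholder.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *data      = "GET /index.html HTTP/1.1\r\nEVIL_SIGNATURE\r\n";
    const char *signature = "EVIL_SIGNATURE";
    size_t dlen = strlen(data), slen = strlen(signature);

    /* On a GPU, each candidate offset i could be checked by its own thread. */
    for (size_t i = 0; i + slen <= dlen; i++) {
        if (memcmp(data + i, signature, slen) == 0) {
            printf("match at offset %zu\n", i);
            break;
        }
    }
    return 0;
}
```
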
  • Publication number: 20100161845
    Abstract: A system for improving direct memory access (DMA) offload. The system includes a processor, a data DMA engine and memory components. The processor selects an executable command comprising subcommands. The DDMA engine executes DMA operations related to a subcommand to perform memory transfer operations. The memory components store the plurality of subcommands and status data resulting from DMA operations. Each of the memory components has a corresponding token associated therewith. Possession of a token allocates its associated memory component to the processor or the DDMA engine possessing the token, making it inaccessible to the other. A first memory component and a second memory component of the plurality of memory components are used by the processor and the DDMA engine respectively and simultaneously. Tokens, e.g., the first and/or the second, are exchanged between the DDMA engine and the processor when the DDMA engine and/or the microcontroller complete accessing associated memory components.
    Type: Application
    Filed: December 19, 2008
    Publication date: June 24, 2010
    Applicant: NVIDIA CORPORATION
    Inventors: Dmitry Vyshetski, Howard Tsai, Paul J. Gyugyi
  • Patent number: 7716506
    Abstract: A system has a plurality of different clients. Each client generates a report signal indicative of a current latency tolerance associated with a performance state. A controller dynamically determines a power down level having a minimum power consumption capable of supporting the system latency of the configuration state of the clients.
    Type: Grant
    Filed: December 14, 2006
    Date of Patent: May 11, 2010
    Assignee: NVIDIA Corporation
    Inventors: Roman Surgutchik, Robert William Chapman, Edward L. Riegelsberger, Brad W. Simeral, Paul J. Gyugyi
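
Patent 7716506's abstract has clients report latency tolerances and a controller pick the deepest power-down level that still meets them. The sketch below implements one plausible version of that selection, choosing the lowest-power level whose exit latency fits the tightest client tolerance; the level names, latencies, and tolerances are invented for the example.

```c
/*
 * Small sketch: pick the deepest power-down level whose exit latency still
 * satisfies the tightest latency tolerance reported by any client.
 */
#include <stdio.h>

typedef struct { const char *name; unsigned exit_latency_us; } power_level_t;

/* Ordered from least to most power savings. */
static const power_level_t levels[] = {
    { "clock-gated",   5 },
    { "self-refresh", 50 },
    { "deep-sleep",  500 },
};

int main(void)
{
    /* Latency tolerances reported by the clients for their current state. */
    unsigned tolerances[] = { 800, 120, 60 };
    unsigned min_tol = tolerances[0];
    for (unsigned i = 1; i < sizeof tolerances / sizeof *tolerances; i++)
        if (tolerances[i] < min_tol)
            min_tol = tolerances[i];

    /* Choose the deepest level we can still wake from in time. */
    const power_level_t *chosen = NULL;
    for (unsigned i = 0; i < sizeof levels / sizeof *levels; i++)
        if (levels[i].exit_latency_us <= min_tol)
            chosen = &levels[i];

    printf("minimum tolerance %u us -> %s\n", min_tol,
           chosen ? chosen->name : "stay active");
    return 0;
}
```
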
  • Patent number: 7620070
    Abstract: Method and apparatus for packet processing by re-insertion into network interface circuitry. A method for handling a burst of packets sent to network interface circuitry includes checking for a connection table entry for received packets, and responsive to non-existence of the connection table entry for the received packets, sending the packets to network interface software for processing. The network interface software processing includes: building the connection table entry; processing the packets; and sending the packets as processed to the network interface circuitry. Additionally, a method for re-inserting a packet responsive to an active audit mode is described.
    Type: Grant
    Filed: June 24, 2003
    Date of Patent: November 17, 2009
    Assignee: NVIDIA Corporation
    Inventors: Thomas A. Maufer, Paul J. Gyugyi, Sameer Nanda, Paul J. Sidenblad
  • Patent number: 7613109
    Abstract: A method and apparatus for processing data received and transmitted on a TCP connection is described. An offload unit processes received data for which a special case does not exist, to produce payload data, which is uploaded directly to application memory. The offload unit partially processes received data for which a special case does exist and uploads the partially processed received data to a buffer stored in system memory. The partially processed received data is then further processed by a TCP stack to produce payload data, which is copied to application memory.
    Type: Grant
    Filed: December 9, 2003
    Date of Patent: November 3, 2009
    Assignee: NVIDIA Corporation
    Inventors: Ashutosh K. Jha, Radoslav Danilak, Paul J. Gyugyi, Thomas A. Maufer, Sameer Nanda, Anand Rajagopalan, Paul J. Sidenblad
  • Patent number: 7603574
    Abstract: A system is coupled to a network by a network interface. In a power savings mode the speed setting of the network interface is reduced to accommodate increased system latency.
    Type: Grant
    Filed: December 14, 2006
    Date of Patent: October 13, 2009
    Assignee: NVIDIA Corporation
    Inventors: Paul J. Gyugyi, Roman Surgutchik, Raymond A. Lui
  • Patent number: 7420931
    Abstract: A method and apparatus for filtering a packet on a connection within a computing system. In one embodiment, the method includes: receiving the packet; delegating the packet to an offload unit for filtering the packet; and determining, by the offload unit, whether the connection is a delegated connection.
    Type: Grant
    Filed: June 23, 2004
    Date of Patent: September 2, 2008
    Assignee: NVIDIA Corporation
    Inventors: Sameer Nanda, Radoslav Danilak, Paul J. Gyugyi, Thomas A. Maufer, Paul J. Sidenblad, Ashutosh K. Jha, Anand Rajagopalan
  • Patent number: 7412488
    Abstract: A method of setting up a delegated connection for processing by an offload unit is described. The method comprises establishing a TCP connection and determining whether or not to delegate the TCP connection for processing by the offload unit, producing a delegated connection, and setting up the delegated connection by creating a delegated connection table entry. When frames are received on the delegated connection by the offload unit, the offload unit determines if user buffers are available. When user buffers are available, the offload unit uploads payload data to the user buffers. When user buffers are not available, the offload unit uploads a portion of the payload data to a buffer allocated in Operating System memory space.
    Type: Grant
    Filed: December 9, 2003
    Date of Patent: August 12, 2008
    Assignee: NVIDIA Corporation
    Inventors: Ashutosh K. Jha, Radoslav Danilak, Paul J. Gyugyi, Thomas A. Maufer, Sameer Nanda, Anand Rajagopalan, Paul J. Sidenblad
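
Patent 7412488's abstract describes delegating a TCP connection by creating a delegated connection table entry, then uploading payload to user buffers when available and to an Operating System buffer otherwise. The simplified sketch below mirrors that decision; the structure layout, buffer sizes, and delegation policy are assumptions for illustration only.

```c
/*
 * Simplified sketch: a connection is delegated by creating a table entry, and
 * incoming payload goes to a posted user buffer when one exists, else to an
 * OS-memory buffer.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    uint32_t conn_id;
    int      delegated;
    uint8_t *user_buf;      /* posted by the application, may be NULL  */
    uint8_t  os_buf[256];   /* fallback buffer in OS memory space      */
} delegated_conn_t;

/* Set up the delegated connection by creating its table entry. */
static void delegate(delegated_conn_t *c, uint32_t conn_id)
{
    memset(c, 0, sizeof *c);
    c->conn_id   = conn_id;
    c->delegated = 1;
}

/* Offload unit receive path: prefer the user buffer when it is available. */
static const char *receive(delegated_conn_t *c, const uint8_t *payload, size_t len)
{
    uint8_t *dst = c->user_buf ? c->user_buf : c->os_buf;
    memcpy(dst, payload, len);
    return c->user_buf ? "user buffer" : "OS buffer";
}

int main(void)
{
    delegated_conn_t conn;
    delegate(&conn, 1234);

    uint8_t payload[] = "hello";
    printf("uploaded to %s\n", receive(&conn, payload, sizeof payload));

    uint8_t user_buffer[256];
    conn.user_buf = user_buffer;              /* application posts a buffer */
    printf("uploaded to %s\n", receive(&conn, payload, sizeof payload));
    return 0;
}
```
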
  • Patent number: 7363572
    Abstract: A method and apparatus for editing outbound frames and generating acknowledgements for a TCP connection is described. Acknowledgements are automatically generated and included in outbound frames during data transmissions with minimal processor intervention.
    Type: Grant
    Filed: December 9, 2003
    Date of Patent: April 22, 2008
    Assignee: NVIDIA Corporation
    Inventors: Paul J. Sidenblad, Radoslav Danilak, Paul J. Gyugyi, Ashutosh K. Jha, Thomas A. Maufer, Sameer Nanda, Anand Rajagopalan
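
Patent 7363572's abstract covers editing outbound frames so that acknowledgements are generated and included automatically during transmission. The minimal sketch below shows the piggybacking idea: when an outbound frame is built, the current acknowledgement number is inserted and the ACK flag set. The trimmed header layout and connection-state fields are illustrative assumptions, not the patented hardware.

```c
/*
 * Minimal illustration of piggybacked acknowledgements: an outbound frame for a
 * connection also acknowledges received data without a separate processor step.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t seq;
    uint32_t ack;
    uint8_t  flags;          /* bit 4 = ACK, as in a real TCP header */
} frame_hdr_t;

typedef struct {
    uint32_t snd_nxt;        /* next sequence number to send          */
    uint32_t rcv_nxt;        /* next byte expected from the peer      */
} conn_state_t;

/* Edit the outbound frame so it also acknowledges received data. */
static void build_outbound(frame_hdr_t *h, conn_state_t *s, uint32_t payload_len)
{
    h->seq    = s->snd_nxt;
    h->ack    = s->rcv_nxt;   /* acknowledgement generated automatically */
    h->flags |= 0x10;         /* set the ACK flag                        */
    s->snd_nxt += payload_len;
}

int main(void)
{
    conn_state_t conn = { .snd_nxt = 1000, .rcv_nxt = 5000 };
    frame_hdr_t  hdr  = { 0 };
    build_outbound(&hdr, &conn, 1460);
    printf("seq=%u ack=%u flags=0x%02x\n", hdr.seq, hdr.ack, hdr.flags);
    return 0;
}
```
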