Patents by Inventor Gregory L. Chesson

Gregory L. Chesson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8730808
Abstract: Example embodiments of methods and apparatus for data communication are disclosed. An example method includes establishing, at a data network communication device, respective data communication channels with a plurality of client network devices. The example method also includes allocating default size data transmission windows to the plurality of client network devices, monitoring use of the default size data transmission windows by the client network devices based on received data queued in a shared data buffer, and allocating fixed size data transmission windows to client network devices of the plurality that are communicating data at a rate greater than a threshold data rate, the fixed size windows being larger than the default size windows. The example method also includes receiving data from the client network devices in accordance with at least one of the default size data transmission windows and the fixed size data transmission windows.
    Type: Grant
    Filed: November 14, 2012
    Date of Patent: May 20, 2014
    Assignee: Google Inc.
    Inventor: Gregory L. Chesson
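
The rate-threshold promotion described in this abstract can be sketched as a small allocation function. This is an illustrative reading of the claim language, not the patented implementation; the window sizes, the threshold value, and the client identifiers are all assumptions.

```python
# Hypothetical sketch: clients start on a default-size transmission window
# and are promoted to a larger fixed-size window once their measured data
# rate exceeds a threshold. All constants are illustrative.

DEFAULT_WINDOW = 4 * 1024     # bytes (assumed)
FIXED_WINDOW = 64 * 1024      # larger fixed-size window (assumed)
RATE_THRESHOLD = 1_000_000    # bytes/sec (assumed)

def allocate_windows(client_rates):
    """Map each client id to a window size based on its observed rate."""
    return {
        client: (FIXED_WINDOW if rate > RATE_THRESHOLD else DEFAULT_WINDOW)
        for client, rate in client_rates.items()
    }

windows = allocate_windows({"slow": 500_000, "fast": 2_000_000})
# "slow" keeps the default window; "fast" gets the larger fixed window
```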
  • Patent number: 8339957
    Abstract: Example embodiments of methods and apparatus for data communication are disclosed. An example method includes receiving, at a data network communication device having a shared data buffer for queuing received data, respective data backlog information for a plurality of sending network devices operationally coupled with the data network communication device. The example method also includes determining an amount of aggregate data backlog for the data network communication device based on the respective data backlog information. The example method further includes comparing the aggregate data backlog amount with a threshold, and, in the event the aggregate data backlog amount is less than or equal to the threshold, allocating, at the data network communication device, respective data transmission windows to the plurality of sending network devices. In this example, respective sizes of the respective data transmission windows are based on the respective data backlog information for each sender.
    Type: Grant
    Filed: June 26, 2009
    Date of Patent: December 25, 2012
    Assignee: Google Inc.
    Inventor: Gregory L. Chesson
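
The backlog-driven allocation in this abstract can be illustrated with a toy function: if the aggregate backlog is at or below the threshold, each sender is granted a window sized from its reported backlog. Proportional sizing is an assumption here; the patent only says window sizes are "based on" the per-sender backlog.

```python
def allocate_backlog_windows(backlogs, threshold, total_window):
    """Grant each sender a window proportional to its reported backlog,
    but only when the aggregate backlog is within the threshold.
    (Proportional split is an illustrative choice, not the claimed one.)"""
    aggregate = sum(backlogs.values())
    if aggregate == 0 or aggregate > threshold:
        return {}
    return {
        sender: total_window * backlog // aggregate
        for sender, backlog in backlogs.items()
    }
```

For example, senders reporting backlogs of 100 and 300 bytes against a 1000-byte threshold split a 4000-byte budget 1:3, while an aggregate above the threshold yields no grants.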
  • Publication number: 20100329114
    Abstract: Example embodiments of methods and apparatus for data communication are disclosed. An example method includes receiving, at a data network communication device having a shared data buffer for queuing received data, respective data backlog information for a plurality of sending network devices operationally coupled with the data network communication device. The example method also includes determining an amount of aggregate data backlog for the data network communication device based on the respective data backlog information. The example method further includes comparing the aggregate data backlog amount with a threshold, and, in the event the aggregate data backlog amount is less than or equal to the threshold, allocating, at the data network communication device, respective data transmission windows to the plurality of sending network devices. In this example, respective sizes of the respective data transmission windows are based on the respective data backlog information for each sender.
    Type: Application
    Filed: June 26, 2009
    Publication date: December 30, 2010
    Inventor: Gregory L. Chesson
  • Patent number: 7675857
    Abstract: One embodiment of the present invention provides a system that avoids network congestion. During operation, the system can detect an onset of congestion in a first queue at a first node. Next, the first node can generate a first control-message, wherein the first control-message contains a congestion-point identifier which is associated with the first queue. The first node can then send the first control-message to a second node, which can cause the second node to delay sending a second message to the first node, wherein the second message is expected to be routed through the first queue at the first node. Next, the second node may propagate the control-message to a third node which may cause the third node to delay sending a third message to the second node, wherein the third message is expected to be routed through the first queue at the first node.
    Type: Grant
    Filed: May 2, 2007
    Date of Patent: March 9, 2010
    Assignee: Google Inc.
    Inventor: Gregory L. Chesson
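
The hop-by-hop propagation in this abstract can be modeled minimally: a control message tagged with a congestion-point identifier travels upstream, and each node that receives it delays traffic destined for that queue. The class names and identifier format are illustrative assumptions.

```python
from collections import namedtuple

# The control message carries only the congestion-point identifier,
# as the abstract describes.
ControlMessage = namedtuple("ControlMessage", ["congestion_point_id"])

class Node:
    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream   # next node toward the senders
        self.delayed = set()       # congestion points being avoided

    def receive_control(self, msg):
        """Delay traffic routed through the congested queue, then
        propagate the message one hop further upstream."""
        self.delayed.add(msg.congestion_point_id)
        if self.upstream is not None:
            self.upstream.receive_control(msg)

third = Node("third")
second = Node("second", upstream=third)
second.receive_control(ControlMessage("first:queue0"))
# both "second" and "third" now delay messages bound for that queue
```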
  • Patent number: 7526092
    Abstract: A method of providing a protocol for rekeying between two stations is disclosed. The method can include providing a first set of messages for computing a new key and reserving an auxiliary storage area for the new key. The first set of messages comprises an enable exchange. The method also includes providing a second set of messages to obsolete an old key and switch to the new key. The second set of messages comprises a transition exchange. In one embodiment, the protocol includes rekeying between multiple stations, and the rekey coordinator sends the first set of messages to a plurality of rekey participants. The auxiliary storage area allows multiplexing in both the enable and transition exchanges, thereby facilitating an efficient and safe rekey operation.
    Type: Grant
    Filed: June 15, 2007
    Date of Patent: April 28, 2009
    Assignee: Atheros Communications, Inc.
    Inventors: Gregory L. Chesson, Nancy Cam-Winget
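
The two-phase structure of this rekey protocol, an enable exchange that stages the new key in auxiliary storage, followed by a transition exchange that retires the old key, can be sketched as a toy state machine. This models only the phase ordering, not the actual message formats or cryptography.

```python
class Station:
    """Toy model of the two-phase rekey: enable() stages the new key in
    an auxiliary slot; transition() obsoletes the old key and switches.
    Key values and method names are illustrative."""
    def __init__(self, key):
        self.active_key = key
        self.aux_key = None          # auxiliary storage for the new key

    def enable(self, new_key):
        """Phase 1 (enable exchange): compute and stage the new key."""
        self.aux_key = new_key

    def transition(self):
        """Phase 2 (transition exchange): switch to the staged key."""
        if self.aux_key is None:
            raise RuntimeError("enable exchange must precede transition")
        self.active_key, self.aux_key = self.aux_key, None

s = Station("old-key")
s.enable("new-key")
s.transition()
# s.active_key is now "new-key" and the auxiliary slot is free again,
# which is what lets exchanges be multiplexed safely
```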
  • Patent number: 6766358
    Abstract: A method for exchanging messages between computer systems communicatively coupled in a computer system network. A message (e.g., a read or write command) is sent from a software element of a first computer system (e.g., a client computer system) to a second computer system (e.g., a server computer system). A shared memory unit is accessible by the software element of the first computer system and a software element of the second computer system. The shared memory unit of the second computer system is directly accessed, bypassing the processor of the second computer system, and the data of interest is read or written from/to the shared memory unit. In one embodiment, the method pertains to acknowledgments between software elements. A plurality of messages is sent from one software element to another software element. A count of each of the plurality of messages is maintained.
    Type: Grant
    Filed: October 25, 1999
    Date of Patent: July 20, 2004
    Assignee: Silicon Graphics, Inc.
    Inventors: Gregory L. Chesson, James T. Pinkerton, Eric Salo
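
The shared-memory exchange and the acknowledgment counting in this abstract can be sketched together. A Python list stands in for the shared memory unit that both software elements access directly; the counters model the per-message acknowledgment count. Everything here is an illustrative stand-in for the hardware mechanism.

```python
class SharedMemoryChannel:
    """Toy model: the sender writes messages straight into a shared
    region (bypassing the receiving host's processor in the real system),
    and running counts stand in for message acknowledgments."""
    def __init__(self):
        self.buffer = []   # stands in for the shared memory unit
        self.sent = 0      # messages written by the sending element
        self.acked = 0     # messages acknowledged by the receiver

    def write(self, msg):
        self.buffer.append(msg)
        self.sent += 1

    def read_all(self):
        """Receiver drains the shared region and acknowledges by count."""
        msgs, self.buffer = self.buffer, []
        self.acked += len(msgs)
        return msgs
```

A sender can then compare `sent` against `acked` to know how many messages are still outstanding, the bookkeeping the abstract's final sentences describe.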
  • Publication number: 20030225739
Abstract: In a preferred embodiment, a scheduling architecture is described that includes a plurality of queues, each within an associated queue control unit, and a plurality of data control units. The queue control units are directed to operations that obtain data for transmission of a stream from a host and ensure that it is available for transmission, preferably as a single stream. The data control units are each directed to operations that format the data from the queue control units in dependence upon the transmission (or channel) characteristics that are to be associated with that data. Further, each queue control unit can configurably be input to any of the data control units. In one embodiment the output of each of the data control units is controlled by a data arbiter, so that a single stream of data is obtained.
    Type: Application
    Filed: May 2, 2003
    Publication date: December 4, 2003
    Inventors: Gregory L. Chesson, Jeffrey S. Kuskin
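
The pipeline in this abstract, queue control units holding per-stream data, each configurably wired to a data control unit that formats for its channel, with an arbiter merging the outputs, can be sketched in a few lines. The round-robin arbitration policy and all names are illustrative assumptions.

```python
class QueueControlUnit:
    """Holds data obtained from the host for one stream."""
    def __init__(self, items):
        self.items = list(items)

    def pop(self):
        return self.items.pop(0) if self.items else None

def arbiter(attachments):
    """attachments: (qcu, format_fn) pairs, i.e. a queue control unit
    wired to a data control unit's channel-specific formatter. Merge
    everything into a single output stream, round-robin (an assumed
    arbitration policy)."""
    out = []
    while any(q.items for q, _ in attachments):
        for qcu, fmt in attachments:
            item = qcu.pop()
            if item is not None:
                out.append(fmt(item))
    return out

stream = arbiter([
    (QueueControlUnit(["a1", "a2"]), str.upper),    # channel A formatting
    (QueueControlUnit(["b1"]), lambda s: s + "!"),  # channel B formatting
])
```

Because the wiring is just a list of pairs, any queue control unit can be attached to any formatter, mirroring the configurable routing the abstract claims.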
  • Patent number: 6594787
Abstract: A system and method for monitoring elapsed time for a transaction. A computer system executes an application to initiate a transaction. An input/output device communicatively coupled to the computer system receives the transaction from the computer system. The input/output device is adapted to have a timer for measuring time until, for example, a response to the transaction is generated. The input/output device monitors the timer to identify when a time period allotted for the response to the transaction is exceeded (e.g., a timeout condition). The input/output device generates a signal to indicate the timeout condition.
    Type: Grant
    Filed: September 17, 1999
    Date of Patent: July 15, 2003
    Assignee: Silicon Graphics, Inc.
    Inventor: Gregory L. Chesson
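
The timer behavior in this abstract, a per-transaction timer on the I/O device that raises a signal when the allotted response period is exceeded, can be modeled as a small state check. Time is passed in explicitly for clarity; in the patented system the device hardware keeps the clock.

```python
class IoDeviceTimer:
    """Toy model of the device-side transaction timer. The allotted
    period, time units, and return values are illustrative."""
    def __init__(self, allotted):
        self.allotted = allotted
        self.start = None

    def begin(self, now):
        """Start timing when the transaction arrives at the device."""
        self.start = now

    def check(self, now, responded):
        """Report 'ok' once a response exists, 'timeout' when the
        allotted period is exceeded, otherwise 'pending'."""
        if responded:
            return "ok"
        return "timeout" if now - self.start > self.allotted else "pending"

t = IoDeviceTimer(allotted=5)
t.begin(now=0)
```

The "timeout" result corresponds to the signal the abstract says the device generates, letting the host learn of a stalled transaction without polling on the CPU.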
  • Patent number: 6223270
    Abstract: A method and system for efficient translation of memory addresses in computer systems. The present invention enables address translations between different address spaces to be performed without using the table lookup step typically required in the prior art. Thus, the present invention provides significant improvements in both time and space efficiency over prior art implementations of address translation. In modern computer systems where direct memory access (DMA) operations are used extensively, especially in the emerging field of operating system (OS) bypass technology, the performance improvements afforded by the present invention are particularly critical to the realization of an efficient and high performance system. A method and system for efficiently translating memory addresses in computer systems and the address representation used are described herein.
    Type: Grant
    Filed: April 19, 1999
    Date of Patent: April 24, 2001
    Assignee: Silicon Graphics, Inc.
    Inventors: Gregory L. Chesson, James T. Pinkerton, Eric Salo
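
The abstract claims translation without the usual table lookup but does not spell out the address representation, so the following is only one plausible illustration: if the handle given to a device already carries the aligned physical base in its upper bits, translation reduces to arithmetic on the address itself. The bit layout is entirely assumed.

```python
OFFSET_BITS = 12                     # assume 4 KiB regions
OFFSET_MASK = (1 << OFFSET_BITS) - 1

def make_handle(phys_base):
    """Issue a handle whose upper bits are the physical base itself
    (base must be region-aligned so the low bits are free)."""
    if phys_base & OFFSET_MASK:
        raise ValueError("base must be aligned")
    return phys_base

def translate(handle, offset):
    """Translate by arithmetic on the handle - no table walk. This is
    an illustrative scheme, not the patent's actual representation."""
    if not 0 <= offset <= OFFSET_MASK:
        raise ValueError("offset out of range")
    return handle | offset
```

The point of the sketch is the claimed property: each DMA translation costs a mask and an OR, with no per-access lookup structure, which is why the technique matters for OS-bypass paths.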
  • Patent number: 5875468
Abstract: In a computer system having a number of nodes, wherein one of the nodes has a number of processors which share a single cache, a method of providing release consistent memory coherency. Initially, a write stream is divided into separate intervals or epochs at each cache, delineated by processor synch operations. When a write miss is detected, a counter corresponding to the current epoch is incremented. When the write miss globally completes, the same epoch counter is decremented. Synch operations issued to the cache stall the issuing processor until all epochs up to and including the epoch that the synch ended have no misses outstanding. Write cache misses complete from the standpoint of the cache when ownership and data are present. This allows the latency of write operations to be partially hidden in any combination of shared cache (both hardware and software controlled), and multiple context processors.
    Type: Grant
    Filed: September 4, 1996
    Date of Patent: February 23, 1999
    Assignee: Silicon Graphics, Inc.
    Inventors: Andrew Erlichson, Neal T. Nuckolls, Gregory L. Chesson
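
The epoch counting scheme in this abstract maps cleanly onto a small model: write misses increment the current epoch's counter, global completion decrements it, and a synch may proceed only once every epoch up to and including the one it closed has drained to zero. The class and method names are illustrative.

```python
class EpochCache:
    """Toy model of the per-epoch outstanding-write-miss counters."""
    def __init__(self):
        self.outstanding = [0]   # one counter per epoch; last is current

    def write_miss(self):
        """A write miss charges the current epoch; returns that epoch."""
        self.outstanding[-1] += 1
        return len(self.outstanding) - 1

    def complete(self, epoch):
        """Global completion decrements the miss's own epoch counter."""
        self.outstanding[epoch] -= 1

    def synch(self):
        """A synch ends the current epoch and opens a new one; the
        issuing processor must stall until can_proceed() is true."""
        ended = len(self.outstanding) - 1
        self.outstanding.append(0)
        return ended

    def can_proceed(self, ended_epoch):
        """True once epochs 0..ended_epoch have no misses outstanding."""
        return all(c == 0 for c in self.outstanding[:ended_epoch + 1])
```

Because only the counters up to the synch's epoch must drain, misses issued in later epochs never block an earlier synch, which is how the scheme hides part of the write latency.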