Patents by Inventor James T. Pinkerton

James T. Pinkerton has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7765405
    Abstract: A new method and framework for scheduling receive-side processing of data streams received from a remote requesting client by a multiprocessor computer system is disclosed. The method receives data packets from the remote requesting client via a network and, for each data packet, applies a cryptographically secure hashing function to portions of the received data packet yielding a hash value. The method further applies the hash value to a processor selection policy to identify a processor in the multiprocessor system as a selected processor to perform receive-side processing of the data packet. The method queues the received data packet for processing by the selected processor and invokes a procedure call to initiate processing of the data packet.
    Type: Grant
    Filed: February 25, 2005
    Date of Patent: July 27, 2010
    Assignee: Microsoft Corporation
    Inventors: James T. Pinkerton, Sanjay N. Kaniyar, Bhupinder S. Sethi
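The scheduling scheme described in patent 7765405 above resembles receive-side scaling: hash selected packet header fields, then map the hash through a processor selection policy onto a CPU. The following is a minimal sketch of that idea; the use of SHA-256 over the connection 4-tuple and the modulo-based indirection table are illustrative assumptions, not the patented mechanism.

```python
import hashlib
from collections import defaultdict

def packet_hash(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    """Apply a cryptographically secure hash to selected header fields."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big")

def select_processor(hash_value: int, indirection_table: list[int]) -> int:
    """Processor selection policy: map the hash onto a table of CPU ids."""
    return indirection_table[hash_value % len(indirection_table)]

# Four CPUs; packets from the same flow always hash to the same CPU,
# which keeps receive-side processing for a connection on one processor.
table = [0, 1, 2, 3]
queues = defaultdict(list)
for pkt in [("10.0.0.1", 5000, "10.0.0.9", 80, b"a"),
            ("10.0.0.2", 6000, "10.0.0.9", 80, b"b"),
            ("10.0.0.1", 5000, "10.0.0.9", 80, b"c")]:
    cpu = select_processor(packet_hash(*pkt[:4]), table)
    queues[cpu].append(pkt[4])   # queue the packet for the selected processor
print(dict(queues))
```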
  • Publication number: 20100185704
    Abstract: A lease system is described herein that allows clients to request a lease to a remote file, wherein the lease permits access to the file across multiple applications using multiple handles without extra round trips to a server. When multiple applications on the same client (or multiple components of the same application) request access to the same file, the client specifies the same lease identifier to the server for each open request or may handle the request from the cache based on the existing lease. Because the server identifies the client's cache at the client level rather than the individual file request level, the client receives fewer break notifications and is able to cache remote files in more circumstances. Thus, by providing the ability to cache data in more circumstances common with modern applications, the lease system reduces bandwidth, improves server scalability, and provides faster access to data.
    Type: Application
    Filed: January 15, 2009
    Publication date: July 22, 2010
    Applicant: Microsoft Corporation
    Inventors: Mathew George, David M. Kruse, James T. Pinkerton, Thomas E. Jolly
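The key idea in publication 20100185704 above is that the client presents one lease identifier for every open of the same remote file, so the server tracks caching state per client rather than per handle. A rough sketch follows; the class and method names (Server, Client, open_file) are invented for illustration and only show how a second open on the same lease can be served from the local cache without another round trip.

```python
import uuid

class Server:
    """Tracks leases at the client level, keyed by lease identifier."""
    def __init__(self):
        self.leases = {}          # lease_id -> set of open file paths
        self.round_trips = 0

    def open_with_lease(self, path: str, lease_id: uuid.UUID) -> str:
        self.round_trips += 1
        self.leases.setdefault(lease_id, set()).add(path)
        return f"contents of {path}"

class Client:
    def __init__(self, server: Server):
        self.server = server
        self.lease_id = uuid.uuid4()   # one identifier for this client's cache
        self.cache = {}                # path -> cached data

    def open_file(self, path: str) -> str:
        # If the lease already covers this file, serve it from the cache:
        # no extra round trip even though a different application asked.
        if path in self.cache:
            return self.cache[path]
        data = self.server.open_with_lease(path, self.lease_id)
        self.cache[path] = data
        return data

srv = Server()
cli = Client(srv)
cli.open_file("\\\\share\\doc.txt")   # first application: goes to the server
cli.open_file("\\\\share\\doc.txt")   # second application, same client: cache hit
print(srv.round_trips)                # 1
```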
  • Patent number: 7761619
    Abstract: Disclosed are methods for handling RDMA connections carried over packet stream connections. In one aspect, I/O completion events are distributed among a number of processors in a multi-processor computing device, eliminating processing bottlenecks. For each processor that will accept I/O completion events, at least one completion queue is created. When an I/O completion event is received on one of the completion queues, the processor associated with that queue processes the event. In a second aspect, semantics of the interactions among a packet stream handler, an RDMA layer, and an RNIC are defined to control RDMA closures and thus to avoid implementation errors. In a third aspect, semantics are defined for transferring an existing packet stream connection into RDMA mode while avoiding possible race conditions. The resulting RNIC architecture is simpler than is traditional because the RNIC never needs to process both streaming messages and RDMA-mode traffic at the same time.
    Type: Grant
    Filed: May 13, 2005
    Date of Patent: July 20, 2010
    Assignee: Microsoft Corporation
    Inventors: Shuangtong Feng, James T. Pinkerton
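The first aspect of patent 7761619 above distributes I/O completion events by giving each participating processor its own completion queue. A small sketch of that arrangement follows; the round-robin assignment of connections to queues is a hypothetical placement policy, since the abstract does not specify one.

```python
from collections import deque
from itertools import cycle

class CompletionQueue:
    """One completion queue per processor that accepts I/O completion events."""
    def __init__(self, cpu: int):
        self.cpu = cpu
        self.events = deque()

    def post(self, event: str):
        self.events.append(event)

    def drain(self):
        # The processor associated with this queue handles its own events,
        # so no single CPU becomes a completion-processing bottleneck.
        while self.events:
            print(f"cpu{self.cpu}: completed {self.events.popleft()}")

queues = [CompletionQueue(cpu) for cpu in range(4)]
assign = cycle(queues)                       # hypothetical placement policy
connections = {c: next(assign) for c in ("connA", "connB", "connC")}

for conn, payload in [("connA", "recv 4KB"), ("connB", "send 8KB"), ("connA", "recv 1KB")]:
    connections[conn].post(f"{conn} {payload}")
for q in queues:
    q.drain()
```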
  • Publication number: 20100030871
    Abstract: Aspects of the subject matter described herein relate to client-side caching. In aspects, when a client receives a request for data that is located on a remote server, the client first checks a local cache to see if the data is stored in the local cache. If the data is not stored in the local cache, the client may check a peer cache to see if the data is stored in the peer cache. If the data is not stored in the peer cache, the client obtains the data from the remote server, caches it locally, and publishes to the peer cache that the client has a copy of the data.
    Type: Application
    Filed: November 28, 2008
    Publication date: February 4, 2010
    Applicant: Microsoft Corporation
    Inventors: Thomas Ewan Jolly, James T. Pinkerton, Eileen C. Brown, David Matthew Kruse, Prashanth Prahalad, Vikrant H. Desai
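The lookup order in publication 20100030871 above (local cache, then peer cache, then the remote server, then publish) can be expressed directly. A minimal sketch under invented names (Node, fetch) follows; real implementations would also handle consistency and expiry, which the abstract does not cover.

```python
class Node:
    def __init__(self, name: str, peer_cache: dict, server: dict):
        self.name = name
        self.local = {}                 # this client's local cache
        self.peer_cache = peer_cache    # shared index: key -> (owner, data)
        self.server = server            # stands in for the remote file server

    def fetch(self, key: str) -> str:
        if key in self.local:                       # 1. check the local cache
            return self.local[key]
        if key in self.peer_cache:                  # 2. check the peer cache
            _owner, data = self.peer_cache[key]
            self.local[key] = data
            return data
        data = self.server[key]                     # 3. fall back to the remote server
        self.local[key] = data
        self.peer_cache[key] = (self.name, data)    # 4. publish the copy to peers
        return data

server = {"report.docx": "Q3 numbers"}
peers = {}
a, b = Node("clientA", peers, server), Node("clientB", peers, server)
a.fetch("report.docx")        # misses everywhere, hits the server, publishes
print(b.fetch("report.docx")) # served from the peer cache, not the server
```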
  • Patent number: 7634655
    Abstract: The present invention protects against denial of service attacks on lookup or hash tables used to store state information for data transfer protocols used to transfer data between two host computers. Two hash tables are provided for holding state information, one for verified remote entities (i.e., those where the remote address can be traced to a host), and one for unverified entities. A cryptographically secure hash is applied to packets from unverified remote entities, since these are the most likely to attempt attacks on the hash tables. The performance of the local server for packets from verified remote entities, however, is maintained.
    Type: Grant
    Filed: February 13, 2004
    Date of Patent: December 15, 2009
    Assignee: Microsoft Corporation
    Inventors: Sanjay Kaniyar, James T. Pinkerton, Bhupinder S. Sethi
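The defense described in patent 7634655 above keeps two tables: unverified remote endpoints are indexed with a keyed, cryptographically secure hash so an attacker cannot force collisions, while verified endpoints keep a cheap hash for speed. The sketch below illustrates that split under stated assumptions; the choice of HMAC-SHA-256 and Python's built-in hash are illustrative stand-ins.

```python
import hashlib
import hmac
import os

SECRET = os.urandom(16)   # per-boot key so attackers cannot predict bucket placement
N_BUCKETS = 1024

def unverified_bucket(remote_addr: str) -> int:
    """Collision-resistant placement for endpoints that are not yet verified."""
    digest = hmac.new(SECRET, remote_addr.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % N_BUCKETS

def verified_bucket(remote_addr: str) -> int:
    """Cheap placement for endpoints already traced to a real host."""
    return hash(remote_addr) % N_BUCKETS

unverified_table = [[] for _ in range(N_BUCKETS)]
verified_table = [[] for _ in range(N_BUCKETS)]

def on_syn(remote_addr: str, state: dict):
    unverified_table[unverified_bucket(remote_addr)].append((remote_addr, state))

def on_handshake_complete(remote_addr: str, state: dict):
    # Promotion keeps the fast path fast for legitimate, verified peers.
    verified_table[verified_bucket(remote_addr)].append((remote_addr, state))

on_syn("203.0.113.7:443", {"state": "SYN_RECEIVED"})
on_handshake_complete("203.0.113.7:443", {"state": "ESTABLISHED"})
```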
  • Patent number: 7554976
    Abstract: Disclosed are methods for handling RDMA connections carried over packet stream connections. In one aspect, I/O completion events are distributed among a number of processors in a multi-processor computing device, eliminating processing bottlenecks. For each processor that will accept I/O completion events, at least one completion queue is created. When an I/O completion event is received on one of the completion queues, the processor associated with that queue processes the event. In a second aspect, semantics of the interactions among a packet stream handler, an RDMA layer, and an RNIC are defined to control RDMA closures and thus to avoid implementation errors. In a third aspect, semantics are defined for transferring an existing packet stream connection into RDMA mode while avoiding possible race conditions. The resulting RNIC architecture is simpler than is traditional because the RNIC never needs to process both streaming messages and RDMA-mode traffic at the same time.
    Type: Grant
    Filed: May 13, 2005
    Date of Patent: June 30, 2009
    Assignee: Microsoft Corporation
    Inventors: Shuangtong Feng, James T. Pinkerton
  • Patent number: 7526577
    Abstract: The present invention provides mechanisms for transferring processor control of multiple network connections between two component devices of a computerized system, such as between a host CPU and a NIC. In one aspect of the invention, two or more network communications may each have a different state object in the upper layers of a network protocol stack, and have a common state object in the lower layers (e.g., the Framing Layer) of the network protocol stack. In part due to the commonalities in the lower software layer states, the invention provides for offloading processor control of multiple network communications at once, including long and short-lived connections. In addition, the invention can negotiate with an alternative peripheral device to offload the network communication to the alternative peripheral device in the event of a failover event, and provides a solution to incoming data packets destined for one or more VLANs.
    Type: Grant
    Filed: September 19, 2003
    Date of Patent: April 28, 2009
    Assignee: Microsoft Corporation
    Inventors: James T. Pinkerton, Sanjay N. Kaniyar
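The point of the common lower-layer state object in patent 7526577 above is that many connections can be offloaded in one operation because they differ only in their upper-layer state. A schematic sketch follows, with assumed layer and field names; it shows only the grouping idea, not the actual NDIS chimney interfaces.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class FramingState:            # common lower-layer (Framing Layer) state
    local_mac: str
    vlan_id: int

@dataclass
class ConnectionState:         # per-connection upper-layer state (TCP, IP)
    conn_id: int
    tcp_state: dict
    ip_state: dict

@dataclass
class OffloadBlock:
    """One offload request covering every connection that shares framing state."""
    framing: FramingState
    connections: list = field(default_factory=list)

def build_offload_blocks(connections):
    blocks = {}
    for framing, conn in connections:
        blocks.setdefault(framing, OffloadBlock(framing)).connections.append(conn)
    return list(blocks.values())

framing = FramingState(local_mac="00:11:22:33:44:55", vlan_id=10)
conns = [(framing, ConnectionState(i, {"snd_nxt": i * 100}, {"ttl": 64})) for i in range(3)]
for block in build_offload_blocks(conns):
    # All three connections are handed to the peripheral device in one offload block.
    print(block.framing.vlan_id, [c.conn_id for c in block.connections])
```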
  • Patent number: 7370082
    Abstract: Methods, systems, and computer program products for reducing communication overhead to make remote direct memory access more efficient for smaller data transfers. An upper layer protocol or other software creates a receive buffer and a corresponding lookup key for remotely accessing the receive buffer. In response to receiving a data message, the remote direct memory access protocol places a data portion of the data message into the receive buffer and prevents further changes. The upper layer protocol or software confirms that further changes to the receive buffer have been prevented. A lower layer transport protocol may be used to deliver data received from a remote system to the remote direct memory access protocol. Data transfers may occur through buffer copies with relatively lower overhead but also relatively lower throughput, or may occur through remote direct memory access to offer relatively higher throughput, but also imposing relatively higher overhead.
    Type: Grant
    Filed: May 9, 2003
    Date of Patent: May 6, 2008
    Assignee: Microsoft Corporation
    Inventor: James T. Pinkerton
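Patent 7370082 above describes advertising a receive buffer with a lookup key, letting the peer place data into it directly, and then preventing further changes before the upper layer trusts the contents. The toy sketch below models that sequence with invented names (FakeRdmaAdapter, remote_write_and_invalidate); real RNIC verbs and semantics differ.

```python
class ReceiveBuffer:
    """A registered receive buffer addressable by a lookup key."""
    def __init__(self, size: int):
        self.data = bytearray(size)
        self.valid = True          # key is valid until the adapter invalidates it

class FakeRdmaAdapter:
    def __init__(self):
        self._buffers = {}
        self._next_key = 1

    def register(self, buf: ReceiveBuffer) -> int:
        key = self._next_key
        self._next_key += 1
        self._buffers[key] = buf
        return key                 # lookup key the peer uses for direct placement

    def remote_write_and_invalidate(self, key: int, payload: bytes):
        buf = self._buffers[key]
        assert buf.valid, "key already invalidated"
        buf.data[: len(payload)] = payload   # direct data placement
        buf.valid = False                    # prevent further remote changes

adapter = FakeRdmaAdapter()
buf = ReceiveBuffer(64)
key = adapter.register(buf)                      # upper layer advertises the key
adapter.remote_write_and_invalidate(key, b"small payload")
assert not buf.valid                             # upper layer confirms no further changes
print(bytes(buf.data[:13]))
```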
  • Patent number: 7254637
    Abstract: A method to offload a network stack connection is presented. A request, which includes a list of resource requirements from each software layer in the stack, to offload the network stack connection is sent through the stack to the peripheral device. The device allocates resources for the list and sends a handle to each of the software layers for communication with the device. The state for each layer is sent to the device that includes state variables that are classified as a constant, a cached variable handled by the CPU, or a delegated variable handled by the device.
    Type: Grant
    Filed: November 10, 2005
    Date of Patent: August 7, 2007
    Assignee: Microsoft Corporation
    Inventors: James T. Pinkerton, Abolade Gbadegesin, Sanjay N. Kaniyar, NK Srinivas
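Central to the offload handshake in patent 7254637 above is classifying each layer's state variables as constant, cached (handled by the CPU), or delegated (handled by the device). A compact sketch of that classification follows; the variable names are hypothetical, not the patent's actual schema.

```python
from enum import Enum

class Ownership(Enum):
    CONST = "constant"        # never changes after offload
    CACHED = "cached"         # host CPU remains the owner; device holds a copy
    DELEGATED = "delegated"   # device owns and updates it while offloaded

# Per-layer state sent to the peripheral device in the offload request.
tcp_layer_state = {
    "local_port":  (Ownership.CONST,     445),
    "remote_port": (Ownership.CONST,     52110),
    "window_size": (Ownership.CACHED,    65535),
    "snd_nxt":     (Ownership.DELEGATED, 1_000_001),
    "rcv_nxt":     (Ownership.DELEGATED, 9_000_101),
}

def variables_owned_by_device(layer_state):
    return [name for name, (own, _) in layer_state.items()
            if own is Ownership.DELEGATED]

print(variables_owned_by_device(tcp_layer_state))   # ['snd_nxt', 'rcv_nxt']
```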
  • Patent number: 7181531
    Abstract: A method to synchronize and upload an offloaded network stack connection between a host network stack and peripheral device is presented. A state object for each layer in the stack is sent to the device that includes state variables that are classified as a constant, a cached variable handled by the host, or a delegated variable handled by the device. State that must be updated by the network stack and the peripheral device is cleanly divided. For example, statistics are tracked by the host, the device, or the host and the device. A statistic tracked by both the host and peripheral device is divided into non-overlapping portions and combined to produce the statistic. Once an upload is initiated, the device achieves a consistent state and hands delegated states to the stack. Each layer in the stack takes control of its delegated state and resources at the device are freed.
    Type: Grant
    Filed: April 30, 2002
    Date of Patent: February 20, 2007
    Assignee: Microsoft Corporation
    Inventors: James T. Pinkerton, Abolade Gbadegesin, Sanjay Kaniyar, Nelamangala Krishnaswamy Srinivas
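One concrete detail in patent 7181531 above is that a statistic tracked by both the host and the peripheral device is split into non-overlapping portions and combined when reported. The tiny sketch below illustrates that division with hypothetical counter names; it is not the patent's interface.

```python
class HostStats:
    """Counts only packets the host stack processed itself."""
    def __init__(self):
        self.packets_received = 0

class DeviceStats:
    """Counts only packets the peripheral device processed while offloaded."""
    def __init__(self):
        self.packets_received = 0

def total_packets_received(host: HostStats, device: DeviceStats) -> int:
    # Non-overlapping portions: each packet is counted by exactly one side,
    # so the combined statistic is a simple sum.
    return host.packets_received + device.packets_received

host, device = HostStats(), DeviceStats()
host.packets_received += 120      # before offload
device.packets_received += 4500   # while the connection was offloaded
host.packets_received += 75       # after upload returns control to the host
print(total_packets_received(host, device))   # 4695
```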
  • Publication number: 20040225720
    Abstract: Methods, systems, and computer program products for reducing communication overhead to make remote direct memory access more efficient for smaller data transfers. An upper layer protocol or other software creates a receive buffer and a corresponding lookup key for remotely accessing the receive buffer. In response to receiving a data message, the remote direct memory access protocol places a data portion of the data message into the receive buffer and prevents further changes. The upper layer protocol or software confirms that further changes to the receive buffer have been prevented. A lower layer transport protocol may be used to deliver data received from a remote system to the remote direct memory access protocol. Data transfers may occur through buffer copies with relatively lower overhead but also relatively lower throughput, or may occur through remote direct memory access to offer relatively higher throughput, but also imposing relatively higher overhead.
    Type: Application
    Filed: May 9, 2003
    Publication date: November 11, 2004
    Inventor: James T. Pinkerton
  • Patent number: 6766358
    Abstract: A method for exchanging messages between computer systems communicatively coupled in a computer system network. A message (e.g., a read or write command) is sent from a software element of a first computer system (e.g., a client computer system) to a second computer system (e.g., a server computer system). A shared memory unit is accessible by the software element of the first computer system and a software element of the second computer system. The shared memory unit of the second computer system is directly accessed, bypassing the processor of the second computer system, and the data of interest is read or written from/to the shared memory unit. In one embodiment, the method pertains to acknowledgments between software elements. A plurality of messages is sent from one software element to another software element. A count of each of the plurality of messages is maintained.
    Type: Grant
    Filed: October 25, 1999
    Date of Patent: July 20, 2004
    Assignee: Silicon Graphics, Inc.
    Inventors: Gregory L. Chesson, James T. Pinkerton, Eric Salo
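The acknowledgment scheme in patent 6766358 above amounts to counting messages on both sides rather than acknowledging each message individually. A rough sketch of that counting idea follows; the direct-memory bypass of the remote processor is hardware behavior and is not modeled here, and all names are invented.

```python
class SharedMemoryUnit:
    """Stands in for memory that both software elements can address directly."""
    def __init__(self):
        self.slots = []            # messages written by the sender
        self.messages_written = 0  # sender-maintained count
        self.messages_read = 0     # receiver-maintained count

def send(shm: SharedMemoryUnit, msg: bytes):
    shm.slots.append(msg)
    shm.messages_written += 1      # one running count covers many messages

def receive_all(shm: SharedMemoryUnit):
    msgs, shm.slots = shm.slots, []
    shm.messages_read += len(msgs)
    return msgs

def outstanding(shm: SharedMemoryUnit) -> int:
    # Comparing the two counts acknowledges everything read so far,
    # without a per-message acknowledgment round trip.
    return shm.messages_written - shm.messages_read

shm = SharedMemoryUnit()
for i in range(3):
    send(shm, f"write block {i}".encode())
receive_all(shm)
print(outstanding(shm))   # 0: all sent messages are implicitly acknowledged
```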
  • Patent number: 6223270
    Abstract: A method and system for efficient translation of memory addresses in computer systems. The present invention enables address translations between different address spaces to be performed without using the table lookup step typically required in the prior art. Thus, the present invention provides significant improvements in both time and space efficiency over prior art implementations of address translation. In modern computer systems where direct memory access (DMA) operations are used extensively, especially in the emerging field of operating system (OS) bypass technology, the performance improvements afforded by the present invention are particularly critical to the realization of an efficient and high performance system. A method and system for efficiently translating memory addresses in computer systems and the address representation used are described herein.
    Type: Grant
    Filed: April 19, 1999
    Date of Patent: April 24, 2001
    Assignee: Silicon Graphics, Inc.
    Inventors: Gregory L. Chesson, James T. Pinkerton, Eric Salo
  • Patent number: 5123095
    Abstract: A vector processor is closely integrated with a scalar processor. The scalar processor provides virtual-to-physical memory translation for both scalar and vector operations. In vector operations, a block move operation performed by the scalar processor is intercepted, the write command in the operation is converted to a read, and data resulting from a vector operation is returned to the address specified by the block move write command. Writing of the data may be masked by a prior vector operation. Prefetch queues and write queues are provided between main memory and the vector processor. A microinstruction interface is supported for the vector processor.
    Type: Grant
    Filed: January 17, 1989
    Date of Patent: June 16, 1992
    Assignee: Ergo Computing, Inc.
    Inventors: Gregory M. Papadopoulos, David E. Culler, James T. Pinkerton