Patents by Inventor Jeffrey Bradford

Jeffrey Bradford has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10572401
    Abstract: Hardware accelerated synchronization of data movement across multiple direct memory access (DMA) engines is provided using techniques in which the order of descriptor processing is guaranteed for scenarios involving a single CPU and multiple DMA engines as well as those involving multiple CPUs and multiple DMA engines.
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: February 25, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Chad McBride, Jeffrey Bradford, Steven Wheeler, Christopher Johnson, Boris Bobrov, Andras Tantos
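The descriptor-ordering guarantee described in the abstract above can be illustrated with a small sketch. This is not the patented implementation; the class and method names are purely illustrative. The idea shown is an in-order retirement gate: engines may finish transfers in any order, but completions are only released in issue order.

```python
import heapq

class OrderedCompletion:
    """Release DMA descriptor completions in issue order, even when
    multiple engines finish their transfers out of order."""
    def __init__(self):
        self.next_seq = 0      # next sequence number to hand out
        self.expected = 0      # next sequence number allowed to retire
        self.pending = []      # min-heap of finished-but-blocked descriptors

    def issue(self):
        """Tag a new descriptor with the next sequence number."""
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def complete(self, seq):
        """Called by an engine when its transfer finishes; returns the
        descriptors that may now retire, in issue order."""
        heapq.heappush(self.pending, seq)
        released = []
        while self.pending and self.pending[0] == self.expected:
            released.append(heapq.heappop(self.pending))
            self.expected += 1
        return released
```

For example, if descriptors 0, 1, and 2 are issued and engine completions arrive as 2, 0, 1, the gate releases nothing for 2, then 0, then 1 and 2 together, preserving program order.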
  • Patent number: 10528494
    Abstract: Hardware accelerated synchronization of data movement across multiple direct memory access (DMA) engines is provided using techniques in which the order of descriptor processing is guaranteed for scenarios involving a single CPU and multiple DMA engines as well as those involving multiple CPUs and multiple DMA engines.
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: January 7, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Chad McBride, Jeffrey Bradford, Steven Wheeler, Christopher Johnson, Boris Bobrov, Andras Tantos
  • Publication number: 20170315940
    Abstract: Hardware accelerated synchronization of data movement across multiple direct memory access (DMA) engines is provided using techniques in which the order of descriptor processing is guaranteed for scenarios involving a single CPU and multiple DMA engines as well as those involving multiple CPUs and multiple DMA engines.
    Type: Application
    Filed: July 17, 2017
    Publication date: November 2, 2017
    Inventors: Chad McBride, Jeffrey Bradford, Steven Wheeler, Christopher Johnson, Boris Bobrov, Andras Tantos
  • Publication number: 20170315939
    Abstract: Hardware accelerated synchronization of data movement across multiple direct memory access (DMA) engines is provided using techniques in which the order of descriptor processing is guaranteed for scenarios involving a single CPU and multiple DMA engines as well as those involving multiple CPUs and multiple DMA engines.
    Type: Application
    Filed: July 17, 2017
    Publication date: November 2, 2017
    Inventors: Chad McBride, Jeffrey Bradford, Steven Wheeler, Christopher Johnson, Boris Bobrov, Andras Tantos
  • Patent number: 9715464
    Abstract: Hardware accelerated synchronization of data movement across multiple direct memory access (DMA) engines is provided using techniques in which the order of descriptor processing is guaranteed for scenarios involving a single CPU and multiple DMA engines as well as those involving multiple CPUs and multiple DMA engines.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: July 25, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Chad McBride, Jeffrey Bradford, Steven Wheeler, Christopher Johnson, Boris Bobrov, Andras Tantos
  • Patent number: 9498396
    Abstract: The present invention describes a medical, supply or transfer lock (120) for a pressure vessel (100) for human occupancy. The medical lock (120) includes a body tube (122), an outer door flange (124) and an outer door (140) in sealing face relation with an inside face (125) of the outer door flange (124). In an embodiment, the outer door (140) has upper rollers (146a) and lower rollers (146b). The upper rollers run on a horizontal rail (160) while the lower rollers run on a slant rail (170), which is mounted below the horizontal rail (160). The ends of the horizontal and slant rails (160, 170) are curved (163, 172) so that a weight component of the outer door is generated to assist the outer door in closing. When pressure in the medical lock (120) is increased, the outer door (140) of this medical lock (120) becomes self-locking and self-sealing.
    Type: Grant
    Filed: April 14, 2011
    Date of Patent: November 22, 2016
    Assignee: Advanced Marine Pte Ltd
    Inventor: Jeffrey Bradford
  • Publication number: 20160283415
    Abstract: Hardware accelerated synchronization of data movement across multiple direct memory access (DMA) engines is provided using techniques in which the order of descriptor processing is guaranteed for scenarios involving a single CPU and multiple DMA engines as well as those involving multiple CPUs and multiple DMA engines.
    Type: Application
    Filed: March 27, 2015
    Publication date: September 29, 2016
    Inventors: Chad McBride, Jeffrey Bradford, Steven Wheeler, Christopher Johnson, Boris Bobrov, Andras Tantos
  • Publication number: 20140083015
    Abstract: The present invention describes a medical, supply or transfer lock (120) for a pressure vessel (100) for human occupancy. The medical lock (120) includes a body tube (122), an outer door flange (124) and an outer door (140) in sealing face relation with an inside face (125) of the outer door flange (124). In an embodiment, the outer door (140) has upper rollers (146a) and lower rollers (146b). The upper rollers run on a horizontal rail (160) while the lower rollers run on a slant rail (170), which is mounted below the horizontal rail (160). The ends of the horizontal and slant rails (160, 170) are curved (163, 172) so that a weight component of the outer door is generated to assist the outer door in closing. When pressure in the medical lock (120) is increased, the outer door of this medical lock (120) becomes self-locking and self-sealing.
    Type: Application
    Filed: April 14, 2011
    Publication date: March 27, 2014
    Applicant: Advanced Marine Pte Ltd.
    Inventor: Jeffrey Bradford
  • Publication number: 20070186074
    Abstract: Page size prediction is used to predict a page size for a page of memory being accessed by a memory access instruction such that the predicted page size can be used to access an address translation data structure. By doing so, an address translation data structure may support multiple page sizes in an efficient manner and with little additional circuitry disposed in the critical path for address translation, thereby increasing performance.
    Type: Application
    Filed: April 10, 2007
    Publication date: August 9, 2007
    Inventors: Jeffrey Bradford, Jason Dale, Kimberly Fernsler, Timothy Heil, James Rose
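The page-size-prediction idea in the abstract above can be sketched briefly. This is a minimal illustrative model, not the patented circuit; the table size, the two page sizes, and all names are assumptions. A small table indexed by the accessing instruction's address predicts the page size, which determines how the virtual address is split into a page number before the translation structure is consulted; the prediction is trained once the true size is known.

```python
# Hypothetical two-page-size predictor (4 KiB vs. 2 MiB), indexed by
# the low bits of the memory-access instruction's address.
SMALL, LARGE = 4 * 1024, 2 * 1024 * 1024

class PageSizePredictor:
    def __init__(self, entries=64):
        self.table = [SMALL] * entries   # start by predicting small pages
        self.mask = entries - 1          # entries assumed to be a power of 2

    def predict(self, pc):
        """Predicted page size for the instruction at address pc."""
        return self.table[pc & self.mask]

    def update(self, pc, actual_size):
        """Train on the size the translation actually resolved to."""
        self.table[pc & self.mask] = actual_size

def vpn(addr, page_size):
    """Virtual page number under the predicted size; this is what would
    index the address translation data structure."""
    return addr // page_size
```

A misprediction here only costs a retry with the correct size, which is why the predictor can sit off the critical translation path.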
  • Publication number: 20070180221
    Abstract: An apparatus and method for handling data cache misses out-of-order for asynchronous pipelines are provided. The apparatus and method associates load tag (LTAG) identifiers with the load instructions and uses them to track the load instruction across multiple pipelines as an index into a load table data structure of a load target buffer. The load table is used to manage cache “hits” and “misses” and to aid in the recycling of data from the L2 cache. With cache misses, the LTAG indexed load table permits load data to recycle from the L2 cache in any order. When the load instruction issues and sees its corresponding entry in the load table marked as a “miss,” the effects of issuance of the load instruction are canceled and the load instruction is stored in the load table for future reissuing to the instruction pipeline when the required data is recycled.
    Type: Application
    Filed: February 2, 2006
    Publication date: August 2, 2007
    Inventors: Christopher Abernathy, Jeffrey Bradford, Ronald Hall, Timothy Heil, David Shippy
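The LTAG-indexed load table described above can be sketched in miniature. This is an illustrative data-structure model only, not the patented pipeline; the dictionary representation and all names are assumptions. Each load gets a tag at issue; a load that misses parks in the table, and L2 data may "recycle" in any order, marking exactly the matching load ready for reissue.

```python
# Hypothetical sketch of an LTAG-indexed load table: loads receive a tag
# at issue; on a miss the load waits in the table, and L2 data may
# return out of order, waking only the load whose tag matches.

class LoadTable:
    def __init__(self):
        self.entries = {}          # ltag -> {"addr", "state", ...}
        self.next_ltag = 0

    def issue(self, addr, l1_hit):
        """Allocate an LTAG for a load and record hit/miss status."""
        ltag = self.next_ltag
        self.next_ltag += 1
        self.entries[ltag] = {"addr": addr,
                              "state": "hit" if l1_hit else "miss"}
        return ltag

    def recycle(self, ltag, data):
        """L2 returns data for a missed load, in any order; mark that
        entry ready so the scheduler can reissue just that load."""
        entry = self.entries[ltag]
        assert entry["state"] == "miss"
        entry["state"] = "ready"
        entry["data"] = data
        return entry
```

Because the table is indexed by tag rather than by arrival order, a later miss can be satisfied before an earlier one without blocking it.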
  • Publication number: 20070083740
    Abstract: A method, apparatus and computer program product are provided for implementing polymorphic branch history table (BHT) reconfiguration. A BHT includes a plurality of predetermined configurations corresponding to predetermined operational modes. A first BHT configuration is provided. Checking is provided to identify improved performance with another BHT configuration. The BHT is reconfigured to provide improved performance based upon the current workload.
    Type: Application
    Filed: October 7, 2005
    Publication date: April 12, 2007
    Applicant: International Business Machines Corporation
    Inventors: Jeffrey Bradford, Richard Eickemeyer, Timothy Heil, Harold Kossman, Timothy Mullins
  • Publication number: 20070083711
    Abstract: In a method of using a cache in a computer, the computer is monitored to detect an event that indicates that the cache is to be reconfigured into a metadata state. When the event is detected, the cache is reconfigured so that a predetermined portion of the cache stores metadata. A computational circuit employed in association with a computer includes a cache, a cache event detector circuit, and a cache reconfiguration circuit. The cache event detector circuit detects an event relative to the cache. The cache reconfiguration circuit reconfigures the cache so that a predetermined portion of the cache stores metadata when the cache event detector circuit detects the event.
    Type: Application
    Filed: October 7, 2005
    Publication date: April 12, 2007
    Applicant: International Business Machines Corporation
    Inventors: Jeffrey Bradford, Richard Eickemeyer, Timothy Heil, Harold Kossman, Timothy Mullins
  • Publication number: 20070083712
    Abstract: A method, apparatus and computer program product are provided for implementing polymorphic reconfiguration of a cache size. A cache includes a plurality of physical sub-banks. A first cache configuration is provided. Then checking is provided to identify improved performance with another cache configuration. The cache size is reconfigured to provide improved performance based upon the current workload.
    Type: Application
    Filed: October 7, 2005
    Publication date: April 12, 2007
    Applicant: International Business Machines Corporation
    Inventors: Jeffrey Bradford, Todd Christensen, Richard Eickemeyer, Timothy Heil, Harold Kossman, Timothy Mullins
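The sub-bank-based cache resizing described above can be sketched as follows. This is a hedged illustration under assumed parameters (bank count, line count, and the 1% miss-rate margin are all invented for the example), not the patented mechanism. Capacity grows or shrinks in whole physical sub-banks based on whether the larger configuration measurably helps the current workload.

```python
# Illustrative model of a cache built from fixed-size physical sub-banks
# whose effective capacity is reconfigured from workload feedback.
class SubBankedCache:
    def __init__(self, sub_banks=8, lines_per_bank=128):
        self.sub_banks = sub_banks
        self.lines_per_bank = lines_per_bank
        self.active = sub_banks          # all banks enabled initially

    @property
    def capacity_lines(self):
        return self.active * self.lines_per_bank

    def reconfigure(self, miss_rate_half, miss_rate_full):
        """Compare measured miss rates of a half-size and full-size
        configuration; if the extra capacity barely helps, power down
        half the banks, otherwise keep (or restore) the full size."""
        if miss_rate_half - miss_rate_full < 0.01:   # assumed margin
            self.active = max(1, self.sub_banks // 2)
        else:
            self.active = self.sub_banks
```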
  • Publication number: 20060161758
    Abstract: Page size prediction is used to predict a page size for a page of memory being accessed by a memory access instruction such that the predicted page size can be used to access an address translation data structure. By doing so, an address translation data structure may support multiple page sizes in an efficient manner and with little additional circuitry disposed in the critical path for address translation, thereby increasing performance.
    Type: Application
    Filed: January 14, 2005
    Publication date: July 20, 2006
    Applicant: International Business Machines Corporation
    Inventors: Jeffrey Bradford, Jason Dale, Kimberly Fernsler, Timothy Heil, James Rose
  • Publication number: 20060149951
    Abstract: A method and apparatus for updating global branch history information are disclosed. A dynamic branch predictor within a data processing system includes a global branch history (GBH) buffer and a branch history table. The GBH buffer contains GBH information for a group of the most recent branch instructions. The branch history table includes multiple entries, each of which is associated with one or more branch instructions. The GBH information from the GBH buffer can be used to index into the branch history table to obtain a branch prediction signal. In response to a fetch group of instructions, a fixed number of GBH bits is shifted into the GBH buffer. The number of GBH bits is the same regardless of the number of branch instructions within the fetch group of instructions. In addition, there is a unique bit pattern associated with the case of no taken branch in the fetch group, regardless of the number of not-taken branches, or even whether there are any branches, in the fetch group.
    Type: Application
    Filed: December 15, 2004
    Publication date: July 6, 2006
    Applicant: International Business Machines Corporation
    Inventors: Chris Abernathy, Jeffrey Bradford, Jason Dale, Timothy Heil
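The fixed-shift history update in the abstract above can be sketched as follows. This is an illustrative model only; the two-bit encoding, history length, and slot numbering are assumptions, not the patented design. Each fetch group shifts the same number of bits into the global history register, with a reserved pattern for "no taken branch in this group."

```python
# Sketch: per fetch group, shift a FIXED number of global-history bits,
# with a unique reserved pattern for "no taken branch in this group".
HIST_BITS_PER_GROUP = 2
NO_TAKEN = 0b00           # reserved pattern when nothing was taken

class GlobalHistory:
    def __init__(self, length=8):
        self.length = length   # history register width in bits
        self.ghr = 0

    def update(self, taken_slot):
        """taken_slot: None if no branch in the group was taken, else
        1..3 identifying which slot held the taken branch."""
        bits = NO_TAKEN if taken_slot is None else taken_slot
        mask = (1 << self.length) - 1
        self.ghr = ((self.ghr << HIST_BITS_PER_GROUP) | bits) & mask

    def index(self, table_size):
        """Fold the history into a branch history table index."""
        return self.ghr % table_size
```

Because the shift amount never depends on how many branches the group contained, the history register stays aligned across differently shaped fetch groups.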
  • Publication number: 20050185581
    Abstract: The present invention provides for a computer network method and system that applies “hysteresis” to an active queue management algorithm. If a queue is at a level below a certain low threshold and a burst of packets arrives at a network node, then the probability of dropping the initial packets in the burst is recalculated, but the packets are not dropped. However, if the queue level crosses beyond a hysteresis threshold, then packets are discarded pursuant to a drop probability. Also, according to the present invention, queue level may be decreased until it becomes less than the hysteresis threshold, with packets dropped per the drop probability until the queue level decreases to at least a low threshold. In one embodiment, an adaptive algorithm is also provided to adjust the transmit probability for each flow together with hysteresis to increase the packet transmit rates to absorb bursty traffic.
    Type: Application
    Filed: February 19, 2004
    Publication date: August 25, 2005
    Applicant: International Business Machines Corporation
    Inventors: Jeffrey Bradford, Gordon Davis, Dongming Hwang, Clark Jeffries, Srinivasan Ramani, Kartik Sudeep, Ken Vu
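The hysteresis behavior described in the abstract above can be sketched with a RED-style drop function. This is a minimal illustration under assumed thresholds (the packet counts and maximum drop probability are invented for the example), not the patented algorithm. The drop probability is always recalculated, but it is only applied once the queue level crosses the hysteresis threshold, so short bursts arriving at a shallow queue pass through intact.

```python
import random

# Assumed queue thresholds, in packets.
LOW, HYST, HIGH = 20, 40, 100

def drop_probability(qlen, max_p=0.1):
    """RED-style linear ramp between LOW and HIGH, capped at max_p."""
    if qlen <= LOW:
        return 0.0
    return min(max_p, max_p * (qlen - LOW) / (HIGH - LOW))

def enqueue(queue, packet, rng=random.random):
    p = drop_probability(len(queue))          # always recalculated...
    if len(queue) > HYST and rng() < p:       # ...but applied only past HYST
        return False                          # packet dropped
    queue.append(packet)
    return True
```

With a queue of 30 packets (above LOW but below HYST), every packet in a burst is admitted even though the drop probability is nonzero; past 40 packets, drops begin according to the ramp.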
  • Publication number: 20050138627
    Abstract: An apparatus, program product and method initiate, in connection with a context switch operation, a prefetch of data likely to be used by a thread prior to resuming execution of that thread. As a result, once it is known that a context switch will be performed to a particular thread, data may be prefetched on behalf of that thread so that when execution of the thread is resumed, more of the working state for the thread is likely to be cached, or at least in the process of being retrieved into cache memory, thus reducing cache-related performance penalties associated with context switching.
    Type: Application
    Filed: December 18, 2003
    Publication date: June 23, 2005
    Applicant: International Business Machines Corporation
    Inventors: Jeffrey Bradford, Harold Kossman, Timothy Mullins
  • Publication number: 20050138628
    Abstract: An apparatus, program product and method initiate, in connection with a context switch operation, a prefetch of at least one instruction likely to be executed by a thread prior to resuming execution of that thread. As a result, once it is known that a context switch will be performed to a particular thread, one or more instructions may be prefetched on behalf of that thread so that when execution of the thread is resumed, those instructions are more likely to be cached, or at least in the process of being retrieved into cache memory, thus enabling a thread to begin executing instructions more quickly than if the thread was required to fetch those instructions upon resumption of its execution.
    Type: Application
    Filed: December 18, 2003
    Publication date: June 23, 2005
    Applicant: International Business Machines Corporation
    Inventors: Jeffrey Bradford, Harold Kossman, Timothy Mullins
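The context-switch prefetch idea shared by the two applications above (one for data, one for instructions) can be sketched as follows. This is an illustrative model only; the tracked-footprint size, 64-byte line, and all names are assumptions, not the patented mechanism. Once the scheduler knows which thread will run next, it issues non-blocking prefetches for that thread's recent footprint so the working set is refilling the cache while the switch itself completes.

```python
class Cache:
    """Toy cache tracking which 64-byte lines are resident (assumed)."""
    def __init__(self):
        self.lines = set()

    def prefetch(self, addr):
        self.lines.add(addr // 64)   # non-blocking fill, modeled as a set add

class Thread:
    def __init__(self, name):
        self.name = name
        self.recent_addrs = []       # footprint tracked while the thread runs

def record_access(thread, addr, keep=8):
    """Remember the thread's most recent memory addresses."""
    thread.recent_addrs = (thread.recent_addrs + [addr])[-keep:]

def context_switch(cache, next_thread):
    # Prefetch the incoming thread's footprint before resuming it...
    for addr in next_thread.recent_addrs:
        cache.prefetch(addr)
    # ...then perform the normal register/state restore for next_thread.
```

The same skeleton covers the instruction-side variant by recording recently executed instruction addresses instead of data addresses.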