Patents by Inventor Jeffrey Bradford
Jeffrey Bradford has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10572401
Abstract: Hardware accelerated synchronization of data movement across multiple direct memory access (DMA) engines is provided using techniques in which the order of descriptor processing is guaranteed for scenarios involving a single CPU and multiple DMA engines as well as those involving multiple CPUs and multiple DMA engines.
Type: Grant
Filed: July 17, 2017
Date of Patent: February 25, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chad McBride, Jeffrey Bradford, Steven Wheeler, Christopher Johnson, Boris Bobrov, Andras Tantos
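The ordering guarantee described in the abstract can be illustrated with a toy software model: a descriptor on one engine carries a dependency on a descriptor queued to another engine, and is not processed until that predecessor completes. This is a sketch under assumed names (`Descriptor`, `wait_for`, `run_engines`), not the patented hardware mechanism.

```python
# Toy model of ordered descriptor processing across multiple DMA engines.
# The fence/dependency mechanism here is illustrative only.

class Descriptor:
    def __init__(self, name, wait_for=None):
        self.name = name
        self.wait_for = wait_for  # name of a descriptor this one waits on

def run_engines(engines):
    """Round-robin the engines, processing each queue head only once any
    descriptor it fences on has completed. Returns the global order."""
    completed = set()
    order = []
    queues = {e: list(descs) for e, descs in engines.items()}
    progress = True
    while progress:
        progress = False
        for q in queues.values():
            if q and (q[0].wait_for is None or q[0].wait_for in completed):
                d = q.pop(0)
                completed.add(d.name)
                order.append(d.name)
                progress = True
    return order

# Engine B's copy must not start before engine A's fill has completed,
# regardless of which engine the scheduler visits first.
engines = {
    "A": [Descriptor("fill")],
    "B": [Descriptor("copy", wait_for="fill")],
}
assert run_engines(engines) == ["fill", "copy"]
```

In hardware the synchronization is accelerated rather than polled in software as above; the point of the sketch is only the invariant that dependent descriptors observe their predecessors' completion across engines.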
-
Patent number: 10528494
Abstract: Hardware accelerated synchronization of data movement across multiple direct memory access (DMA) engines is provided using techniques in which the order of descriptor processing is guaranteed for scenarios involving a single CPU and multiple DMA engines as well as those involving multiple CPUs and multiple DMA engines.
Type: Grant
Filed: July 17, 2017
Date of Patent: January 7, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chad McBride, Jeffrey Bradford, Steven Wheeler, Christopher Johnson, Boris Bobrov, Andras Tantos
-
Publication number: 20170315940
Abstract: Hardware accelerated synchronization of data movement across multiple direct memory access (DMA) engines is provided using techniques in which the order of descriptor processing is guaranteed for scenarios involving a single CPU and multiple DMA engines as well as those involving multiple CPUs and multiple DMA engines.
Type: Application
Filed: July 17, 2017
Publication date: November 2, 2017
Inventors: Chad McBride, Jeffrey Bradford, Steven Wheeler, Christopher Johnson, Boris Bobrov, Andras Tantos
-
Publication number: 20170315939
Abstract: Hardware accelerated synchronization of data movement across multiple direct memory access (DMA) engines is provided using techniques in which the order of descriptor processing is guaranteed for scenarios involving a single CPU and multiple DMA engines as well as those involving multiple CPUs and multiple DMA engines.
Type: Application
Filed: July 17, 2017
Publication date: November 2, 2017
Inventors: Chad McBride, Jeffrey Bradford, Steven Wheeler, Christopher Johnson, Boris Bobrov, Andras Tantos
-
Patent number: 9715464
Abstract: Hardware accelerated synchronization of data movement across multiple direct memory access (DMA) engines is provided using techniques in which the order of descriptor processing is guaranteed for scenarios involving a single CPU and multiple DMA engines as well as those involving multiple CPUs and multiple DMA engines.
Type: Grant
Filed: March 27, 2015
Date of Patent: July 25, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chad McBride, Jeffrey Bradford, Steven Wheeler, Christopher Johnson, Boris Bobrov, Andras Tantos
-
Patent number: 9498396
Abstract: The present invention describes a medical, supply or transfer lock (120) for a pressure vessel (100) for human occupancy. The medical lock (120) includes a body tube (122), an outer door flange (124) and an outer door (140) in sealing face relation with an inside face (125) of the outer door flange (124). In an embodiment, the outer door (140) has upper rollers (146a) and lower rollers (146b). The upper rollers run on a horizontal rail (160) while the lower rollers run on a slant rail (170), which is mounted below the horizontal rail (160). The ends of the horizontal and slant rails (160, 170) are curved (163, 172) so that a weight component of the outer door is generated to assist the outer door to close. When pressure in the medical lock (120) is increased, the outer door (140) of this medical lock (120) becomes self-locking and self-sealing.
Type: Grant
Filed: April 14, 2011
Date of Patent: November 22, 2016
Assignee: Advanced Marine Pte Ltd
Inventor: Jeffrey Bradford
-
Publication number: 20160283415
Abstract: Hardware accelerated synchronization of data movement across multiple direct memory access (DMA) engines is provided using techniques in which the order of descriptor processing is guaranteed for scenarios involving a single CPU and multiple DMA engines as well as those involving multiple CPUs and multiple DMA engines.
Type: Application
Filed: March 27, 2015
Publication date: September 29, 2016
Inventors: Chad McBride, Jeffrey Bradford, Steven Wheeler, Christopher Johnson, Boris Bobrov, Andras Tantos
-
Publication number: 20140083015
Abstract: The present invention describes a medical, supply or transfer lock (120) for a pressure vessel (100) for human occupancy. The medical lock (120) includes a body tube (122), an outer door flange (124) and an outer door (140) in sealing face relation with an inside face (125) of the outer door flange (124). In an embodiment, the outer door (140) has upper rollers (146a) and lower rollers (146b). The upper rollers run on a horizontal rail (160) whilst the lower rollers run on a slant rail (170), which is mounted below the horizontal rail (160). The ends of the horizontal and slant rails (160, 170) are curved (163, 172) so that a weight component of the outer door is generated to assist the outer door to close. When pressure in the medical lock (120) is increased, the outer door (140) of this medical lock (120) becomes self-locking and self-sealing.
Type: Application
Filed: April 14, 2011
Publication date: March 27, 2014
Applicant: Advanced Marine Pte Ltd.
Inventor: Jeffrey Bradford
-
Publication number: 20070186074
Abstract: Page size prediction is used to predict a page size for a page of memory being accessed by a memory access instruction such that the predicted page size can be used to access an address translation data structure. By doing so, an address translation data structure may support multiple page sizes in an efficient manner and with little additional circuitry disposed in the critical path for address translation, thereby increasing performance.
Type: Application
Filed: April 10, 2007
Publication date: August 9, 2007
Inventors: Jeffrey Bradford, Jason Dale, Kimberly Fernsler, Timothy Heil, James Rose
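The idea in the abstract can be sketched as a small prediction table: a hash of the access address selects a predicted page size, which is used to form the translation lookup before the true size is known, with retraining on a mispredict. Table size, page sizes, and the hash are illustrative assumptions, not details from the publication.

```python
# Toy sketch of page-size prediction for address translation.
# Sizes, table depth, and indexing are illustrative only.

PAGE_SIZES = [4096, 65536]  # e.g. 4 KiB and 64 KiB pages

class PageSizePredictor:
    def __init__(self, entries=16):
        self.table = [0] * entries  # each entry holds an index into PAGE_SIZES

    def _index(self, pc):
        return (pc >> 2) % len(self.table)

    def predict(self, pc):
        # Prediction is available immediately, off the critical path of
        # resolving the actual page size.
        return PAGE_SIZES[self.table[self._index(pc)]]

    def update(self, pc, actual_size):
        # On a mispredict, retrain the entry to the size the translation
        # actually resolved to.
        self.table[self._index(pc)] = PAGE_SIZES.index(actual_size)

p = PageSizePredictor()
pc = 0x1000
assert p.predict(pc) == 4096   # default prediction
p.update(pc, 65536)            # translation found a 64 KiB page
assert p.predict(pc) == 65536  # later accesses from this PC predict correctly
```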
-
Publication number: 20070180221
Abstract: An apparatus and method for handling data cache misses out-of-order for asynchronous pipelines are provided. The apparatus and method associate load tag (LTAG) identifiers with load instructions and use them to track each load instruction across multiple pipelines, serving as an index into a load table data structure of a load target buffer. The load table is used to manage cache “hits” and “misses” and to aid in the recycling of data from the L2 cache. With cache misses, the LTAG-indexed load table permits load data to recycle from the L2 cache in any order. When the load instruction issues and sees its corresponding entry in the load table marked as a “miss,” the effects of issuance of the load instruction are canceled and the load instruction is stored in the load table for future reissuing to the instruction pipeline when the required data is recycled.
Type: Application
Filed: February 2, 2006
Publication date: August 2, 2007
Inventors: Christopher Abernathy, Jeffrey Bradford, Ronald Hall, Timothy Heil, David Shippy
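The LTAG mechanism above can be modeled as a tag-indexed table in which misses park until L2 data returns, in any order. Class and field names here are illustrative assumptions; the sketch only demonstrates the out-of-order recycle property.

```python
# Toy model of LTAG-indexed load tracking: each load gets a tag that
# indexes a load table; misses wait in the table and complete whenever
# their L2 data returns, which may be out of order.

class LoadTable:
    def __init__(self):
        self.entries = {}  # ltag -> {"addr", "status": "miss"/"done", "data"}

    def issue(self, ltag, addr, l1):
        if addr in l1:
            self.entries[ltag] = {"addr": addr, "status": "done", "data": l1[addr]}
        else:
            # Cancel the load's effects and park it, indexed by its LTAG,
            # for reissue when the data is recycled from L2.
            self.entries[ltag] = {"addr": addr, "status": "miss", "data": None}

    def recycle(self, ltag, data):
        # L2 data may return in any order; the LTAG locates the entry.
        e = self.entries[ltag]
        e["status"], e["data"] = "done", data

lt = LoadTable()
l1 = {0x10: "A"}
lt.issue(1, 0x10, l1)  # L1 hit
lt.issue(2, 0x20, l1)  # miss
lt.issue(3, 0x30, l1)  # miss
lt.recycle(3, "C")     # the younger miss returns from L2 first
lt.recycle(2, "B")
assert [lt.entries[t]["data"] for t in (1, 2, 3)] == ["A", "B", "C"]
```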
-
Publication number: 20070083740
Abstract: A method, apparatus and computer program product are provided for implementing polymorphic branch history table (BHT) reconfiguration. A BHT includes a plurality of predetermined configurations corresponding to predetermined operational modes. A first BHT configuration is provided. Checking is provided to identify improved performance with another BHT configuration. The BHT is reconfigured to provide improved performance based upon the current workload.
Type: Application
Filed: October 7, 2005
Publication date: April 12, 2007
Applicant: International Business Machines Corporation
Inventors: Jeffrey Bradford, Richard Eickemeyer, Timothy Heil, Harold Kossman, Timothy Mullins
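The check-and-reconfigure loop described above can be sketched as comparing measured prediction accuracy across candidate configurations and switching only on a clear win. Configuration names, the accuracy model, and the switching margin are illustrative assumptions, not the publication's mechanism.

```python
# Toy sketch of polymorphic reconfiguration: compare the current BHT
# configuration against alternatives and switch when another would
# predict the current workload better. Accuracies stand in for real
# profiling counters.

def best_config(accuracy_by_config, current, margin=0.02):
    """Switch only when a rival beats the current configuration by a
    margin, so measurement noise does not cause thrashing."""
    best = max(accuracy_by_config, key=accuracy_by_config.get)
    if accuracy_by_config[best] > accuracy_by_config[current] + margin:
        return best
    return current

measured = {"bimodal": 0.91, "gshare": 0.95, "hybrid": 0.94}
assert best_config(measured, "bimodal") == "gshare"   # clear win: reconfigure
measured = {"bimodal": 0.94, "gshare": 0.95, "hybrid": 0.94}
assert best_config(measured, "bimodal") == "bimodal"  # within margin: keep
```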
-
Publication number: 20070083711
Abstract: In a method of using a cache in a computer, the computer is monitored to detect an event that indicates that the cache is to be reconfigured into a metadata state. When the event is detected, the cache is reconfigured so that a predetermined portion of the cache stores metadata. A computational circuit employed in association with a computer includes a cache, a cache event detector circuit, and a cache reconfiguration circuit. The cache event detector circuit detects an event relative to the cache. The cache reconfiguration circuit reconfigures the cache so that a predetermined portion of the cache stores metadata when the cache event detector circuit detects the event.
Type: Application
Filed: October 7, 2005
Publication date: April 12, 2007
Applicant: International Business Machines Corporation
Inventors: Jeffrey Bradford, Richard Eickemeyer, Timothy Heil, Harold Kossman, Timothy Mullins
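A minimal software model of the detect-then-reconfigure behavior: a monitored event (here, an assumed miss-rate threshold) triggers repurposing a predetermined portion of the cache's ways to hold metadata. The trigger condition and way counts are invented for illustration.

```python
# Toy model of event-driven cache reconfiguration into a metadata state.
# The threshold event and the 2-way metadata share are illustrative.

class ReconfigurableCache:
    def __init__(self, ways=8):
        self.ways = ways
        self.data_ways = ways
        self.metadata_ways = 0

    def on_sample(self, miss_rate, threshold=0.3, metadata_share=2):
        # Cache event detector: when the event fires, a predetermined
        # portion of the cache is switched to storing metadata.
        if miss_rate > threshold and self.metadata_ways == 0:
            self.metadata_ways = metadata_share
            self.data_ways = self.ways - metadata_share

c = ReconfigurableCache()
c.on_sample(miss_rate=0.1)
assert (c.data_ways, c.metadata_ways) == (8, 0)  # no event: unchanged
c.on_sample(miss_rate=0.5)
assert (c.data_ways, c.metadata_ways) == (6, 2)  # event: ways repurposed
```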
-
Publication number: 20070083712
Abstract: A method, apparatus and computer program product are provided for implementing polymorphic reconfiguration of a cache size. A cache includes a plurality of physical sub-banks. A first cache configuration is provided. Checking is provided to identify improved performance with another cache configuration. The cache size is reconfigured to provide improved performance based upon the current workload.
Type: Application
Filed: October 7, 2005
Publication date: April 12, 2007
Applicant: International Business Machines Corporation
Inventors: Jeffrey Bradford, Todd Christensen, Richard Eickemeyer, Timothy Heil, Harold Kossman, Timothy Mullins
-
Publication number: 20060161758
Abstract: Page size prediction is used to predict a page size for a page of memory being accessed by a memory access instruction such that the predicted page size can be used to access an address translation data structure. By doing so, an address translation data structure may support multiple page sizes in an efficient manner and with little additional circuitry disposed in the critical path for address translation, thereby increasing performance.
Type: Application
Filed: January 14, 2005
Publication date: July 20, 2006
Applicant: International Business Machines Corporation
Inventors: Jeffrey Bradford, Jason Dale, Kimberly Fernsler, Timothy Heil, James Rose
-
Publication number: 20060149951
Abstract: A method and apparatus for updating global branch history information are disclosed. A dynamic branch predictor within a data processing system includes a global branch history (GBH) buffer and a branch history table. The GBH buffer contains GBH information of a group of the most recent branch instructions. The branch history table includes multiple entries, each of which is associated with one or more branch instructions. The GBH information from the GBH buffer can be used to index into the branch history table to obtain a branch prediction signal. In response to a fetch group of instructions, a fixed number of GBH bits is shifted into the GBH buffer. The number of GBH bits is the same regardless of the number of branch instructions within the fetch group of instructions. In addition, there is a unique bit pattern associated with the case of no taken branch in the fetch group, regardless of the number of not-taken branches or even whether there are any branches in the fetch group at all.
Type: Application
Filed: December 15, 2004
Publication date: July 6, 2006
Applicant: International Business Machines Corporation
Inventors: Chris Abernathy, Jeffrey Bradford, Jason Dale, Timothy Heil
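The fixed-shift update above can be sketched directly: every fetch group shifts the same number of bits into the history, with a reserved pattern for "no taken branch". The two-bit encoding and slot numbering are illustrative assumptions.

```python
# Toy sketch of the fixed-shift global branch history update: every
# fetch group contributes the same number of bits, and a unique pattern
# encodes "no taken branch", regardless of how many (or whether any)
# branches the group contained. The encoding is illustrative.

BITS_PER_GROUP = 2
NO_TAKEN = 0b00  # reserved pattern: no taken branch in the group

def update_gbh(gbh, taken_slot, width=16):
    """taken_slot: 1-based slot of the first taken branch, or None."""
    pattern = NO_TAKEN if taken_slot is None else taken_slot  # slots 1..3
    return ((gbh << BITS_PER_GROUP) | pattern) & ((1 << width) - 1)

h = 0
h = update_gbh(h, None)  # group with no branches, or none taken
h = update_gbh(h, 2)     # first taken branch in slot 2
assert h == 0b0010       # two fixed-width shifts, whatever the branch count
```

Because the shift amount is constant per fetch group, the history can be updated speculatively at fetch time without first counting the group's branches.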
-
Publication number: 20050185581
Abstract: The present invention provides for a computer network method and system that applies “hysteresis” to an active queue management algorithm. If a queue is at a level below a certain low threshold and a burst of packets arrives at a network node, then the probability of dropping the initial packets in the burst is recalculated, but the packets are not dropped. However, if the queue level crosses beyond a hysteresis threshold, then packets are discarded pursuant to a drop probability. Also, according to the present invention, the queue level may be decreased until it becomes less than the hysteresis threshold, with packets dropped per the drop probability until the queue level decreases to at least a low threshold. In one embodiment, an adaptive algorithm is also provided to adjust the transmit probability for each flow together with hysteresis to increase the packet transmit rates to absorb bursty traffic.
Type: Application
Filed: February 19, 2004
Publication date: August 25, 2005
Applicant: International Business Machines Corporation
Inventors: Jeffrey Bradford, Gordon Davis, Dongming Hwang, Clark Jeffries, Srinivasan Ramani, Kartik Sudeep, Ken Vu
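The hysteresis behavior described above can be sketched as follows: between the low and hysteresis thresholds the drop probability is recalculated but packets are still admitted, and only beyond the hysteresis threshold are packets dropped according to that probability. The thresholds and the linear probability curve are illustrative assumptions, not the patented algorithm.

```python
# Toy model of hysteresis in active queue management.
# LOW/HYST/MAX thresholds and the linear drop curve are illustrative.

import random

LOW, HYST, MAX = 20, 40, 100  # queue-length thresholds (packets)

def drop_probability(qlen):
    if qlen <= LOW:
        return 0.0
    return min(1.0, (qlen - LOW) / (MAX - LOW))

def admit(qlen, rng=random.random):
    p = drop_probability(qlen)  # recalculated even in the burst region
    if qlen <= HYST:
        return True             # absorb the burst: no drops yet
    return rng() >= p           # past hysteresis: drop with probability p

assert admit(10) is True                    # below the low threshold
assert admit(35) is True                    # burst region: p > 0, no drop
assert admit(80, rng=lambda: 0.0) is False  # past hysteresis: dropped
```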
-
Publication number: 20050138627
Abstract: An apparatus, program product and method initiate, in connection with a context switch operation, a prefetch of data likely to be used by a thread prior to resuming execution of that thread. As a result, once it is known that a context switch will be performed to a particular thread, data may be prefetched on behalf of that thread so that when execution of the thread is resumed, more of the working state for the thread is likely to be cached, or at least in the process of being retrieved into cache memory, thus reducing cache-related performance penalties associated with context switching.
Type: Application
Filed: December 18, 2003
Publication date: June 23, 2005
Applicant: International Business Machines Corporation
Inventors: Jeffrey Bradford, Harold Kossman, Timothy Mullins
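The idea above can be sketched as a scheduler that, once it has chosen the next thread, prefetches that thread's recorded working set before dispatching it. The recorded-address bookkeeping (`record`, `working_set`) is an assumption for illustration; the publication does not prescribe this interface.

```python
# Toy sketch of context-switch-directed prefetch: after the next thread
# is selected, its working-set addresses from its previous run are
# prefetched so its data is warm (or in flight) when it resumes.

class Cache:
    def __init__(self):
        self.lines = set()

    def prefetch(self, addrs):
        self.lines.update(addrs)

class Scheduler:
    def __init__(self, cache):
        self.cache = cache
        self.working_set = {}  # thread id -> addresses touched last run

    def record(self, tid, addrs):
        self.working_set[tid] = set(addrs)

    def context_switch(self, next_tid):
        # Launch the prefetch as part of the switch, before dispatch,
        # hiding refill latency behind the switch overhead itself.
        self.cache.prefetch(self.working_set.get(next_tid, set()))
        return next_tid

cache = Cache()
sched = Scheduler(cache)
sched.record(7, [0x100, 0x140])
sched.context_switch(7)
assert {0x100, 0x140} <= cache.lines  # thread 7's data is warm on resume
```

The companion publication 20050138628 below applies the same idea to instructions rather than data.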
-
Publication number: 20050138628
Abstract: An apparatus, program product and method initiate, in connection with a context switch operation, a prefetch of at least one instruction likely to be executed by a thread prior to resuming execution of that thread. As a result, once it is known that a context switch will be performed to a particular thread, one or more instructions may be prefetched on behalf of that thread so that when execution of the thread is resumed, those instructions are more likely to be cached, or at least in the process of being retrieved into cache memory, thus enabling a thread to begin executing instructions more quickly than if the thread was required to fetch those instructions upon resumption of its execution.
Type: Application
Filed: December 18, 2003
Publication date: June 23, 2005
Applicant: International Business Machines Corporation
Inventors: Jeffrey Bradford, Harold Kossman, Timothy Mullins