Patents by Inventor Kevin R. Wadleigh

Kevin R. Wadleigh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10430190
    Abstract: Systems and methods which provide a modular processor framework and instruction set architecture designed to efficiently execute applications whose memory access patterns are irregular or non-unit stride are disclosed. A hybrid multithreading framework (HMTF) of embodiments provides a framework for constructing tightly coupled, chip-multithreading (CMT) processors that contain specific features well-suited to hiding latency to main memory and executing highly concurrent applications. The HMTF of embodiments includes an instruction set designed specifically to exploit the high degree of parallelism and concurrency control mechanisms present in the HMTF hardware modules. The instruction format implemented by a HMTF of embodiments is designed to give the architecture, the runtime libraries, and/or the application ultimate control over how and when concurrency between thread cache units is initiated. (A hypothetical software sketch of this latency-hiding approach appears after this listing.)
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: October 1, 2019
    Assignee: Micron Technology, Inc.
    Inventors: John D. Leidel, Kevin R. Wadleigh, Joe Bolding, Tony Brewer, Dean E. Walker
  • Publication number: 20130332711
    Abstract: Systems and methods which provide a modular processor framework and instruction set architecture designed to efficiently execute applications whose memory access patterns are irregular or non-unit stride are disclosed. A hybrid multithreading framework (HMTF) of embodiments provides a framework for constructing tightly coupled, chip-multithreading (CMT) processors that contain specific features well-suited to hiding latency to main memory and executing highly concurrent applications. The HMTF of embodiments includes an instruction set designed specifically to exploit the high degree of parallelism and concurrency control mechanisms present in the HMTF hardware modules. The instruction format implemented by a HMTF of embodiments is designed to give the architecture, the runtime libraries, and/or the application ultimate control over how and when concurrency between thread cache units is initiated.
    Type: Application
    Filed: March 15, 2013
    Publication date: December 12, 2013
    Applicant: Convey Computer
    Inventors: John D. Leidel, Kevin R. Wadleigh, Joe Bolding, Tony Brewer, Dean E. Walker
  • Patent number: 7028168
    Abstract: A system for performing matrix operations utilizes a processor, memory, and a matrix operation manager. The processor has a memory cache. The memory is external to the processor and stores first and second matrices. The matrix operation manager is configured to mathematically combine the first matrix with the second matrix utilizing a hoisted matrix algorithm for hoisting values of the first matrix, and the hoisted matrix algorithm has an outer loop and an inner loop that is performed to completion for each iteration of the outer loop. The matrix operation manager, for each iteration of the outer loop, is configured to load to the cache and to write to a contiguous portion of the memory, before performing the inner loop, values from the first matrix that are to be combined, via performance of the inner loop, with values from the second matrix. (A minimal sketch of this hoisting step appears after this listing.)
    Type: Grant
    Filed: December 5, 2002
    Date of Patent: April 11, 2006
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Kevin R. Wadleigh
  • Patent number: 6446157
    Abstract: The inventive mechanism determines whether memory source and destination addresses map to the same or nearly the same cache address. If they map to different addresses, then loads and stores are ordered so that loads to one cache bank are performed on the same clock cycles as the stores to another cache bank. After a group of loads and stores are completed, then load and store operations for each bank are switched. If the source and destination addresses map to nearly the same cache address and if the source address is prior to the destination address, then a group of cache lines is loaded into registers and stored to memory without any interleaving of other loads and stores. If the source and destination addresses map to the same cache location, then an initial load of data into registers is performed. After that, additional loads are interleaved with non-cache conflicting stores to move new values into memory. Thus, loads and stores to matching cache addresses are separated by time. (A sketch of this address classification appears after this listing.)
    Type: Grant
    Filed: September 20, 1999
    Date of Patent: September 3, 2002
    Assignee: Hewlett-Packard Company
    Inventors: Patrick McGehearty, Kevin R. Wadleigh, Aaron Potler
  • Patent number: 6088714
    Abstract: The inventive mechanism uses seven steps to perform the mathematical equivalent of one large FFT on the input data. The input data array is decomposed into a plurality of squares. In the first step, each square has its points swapped across its main diagonal. In the second step, small FFTs are calculated for each of the squares. In the third step, the data in each square is transposed as in the first step. In the fourth step, the data is oriented into a column format and multiplied by the twiddle coefficients. In the fifth step, small column-oriented FFTs are calculated. The results of steps four and five are held in a work array small enough to remain in cache. In the sixth step, the column data are transposed and stored back into columns of the squares. In the seventh step, the data in each square is transposed as in the first and third steps. This mechanism reduces cache misses and allows for parallel processing. (A compact sketch of the underlying FFT decomposition appears after this listing.)
    Type: Grant
    Filed: July 27, 1998
    Date of Patent: July 11, 2000
    Assignee: Agilent Technologies
    Inventor: Kevin R. Wadleigh
  • Patent number: 6029225
    Abstract: The inventive mechanism determines whether memory source and destination addresses map to the same or nearly the same cache address. If they map to different addresses, then loads and stores are ordered so that loads to one cache bank are performed on the same clock cycles as the stores to another cache bank. After a group of loads and stores are completed, then load and store operations for each bank are switched. If the source and destination addresses map to nearly the same cache address and if the source address is prior to the destination address, then a group of cache lines is loaded into registers and stored to memory without any interleaving of other loads and stores. If the source and destination addresses map to the same cache location, then an initial load of data into registers is performed. After that, additional loads are interleaved with non-cache conflicting stores to move new values into memory. Thus, loads and stores to matching cache addresses are separated by time.
    Type: Grant
    Filed: December 16, 1997
    Date of Patent: February 22, 2000
    Assignee: Hewlett-Packard Company
    Inventors: Patrick McGehearty, Kevin R. Wadleigh, Aaron Potler
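
The HMTF disclosure above (patent 10430190 and publication 20130332711) describes hardware, but its central latency-hiding idea, keeping many thread contexts in flight and switching away from any context that is stalled on main memory, can be illustrated with a small software simulation. The following sketch is a loose, hypothetical Python analogue; the names (ThreadUnit, gather_load), the round-robin scheduler, and the latency numbers are invented for illustration and are not taken from the patented HMTF design.

# Hypothetical illustration only: hide simulated memory latency by
# round-robin switching among thread contexts, loosely analogous to the
# chip-multithreading idea in the HMTF abstract.  All names and numbers
# are invented; this is not the patented design.
from dataclasses import dataclass

NUM_UNITS = 4      # thread contexts the "scheduler" rotates among
WORK_ITEMS = 8     # gather-style accesses each context performs

@dataclass
class ThreadUnit:
    uid: int
    next_item: int = 0     # next index this context will process
    stall_cycles: int = 0  # simulated cycles left on an outstanding load
    total: int = 0         # accumulated result

def gather_load(index):
    """Simulated irregular access: returns (value, latency in cycles)."""
    latency = 5 if index % 3 == 0 else 1   # pretend some accesses miss
    return index * index, latency

units = [ThreadUnit(uid=i) for i in range(NUM_UNITS)]
while any(u.next_item < WORK_ITEMS for u in units):
    for u in units:                        # round-robin context switch
        if u.next_item >= WORK_ITEMS:
            continue                       # this context is finished
        if u.stall_cycles > 0:
            u.stall_cycles -= 1            # still waiting on "memory":
            continue                       # run another context instead
        value, latency = gather_load(u.uid * WORK_ITEMS + u.next_item)
        u.total += value
        u.stall_cycles = latency - 1
        u.next_item += 1

for u in units:
    print(f"unit {u.uid}: total = {u.total}")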
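
For patent 7028168, the hoisting idea, copying the first matrix's strided values into a contiguous buffer before each outer-loop iteration so that the inner loop reads unit-stride, cache-resident data, can be sketched briefly. The following is a minimal, hypothetical Python rendering; the names (hoisted_matmul, BLK) and the blocking factor are invented, and the loops are spelled out explicitly so the packing step is visible. It is not the patented implementation.

# Minimal, hypothetical sketch of "hoisting": before each outer-loop pass,
# the strided values of matrix A needed by the inner loop are copied into a
# contiguous buffer so the inner loop reads unit-stride, cache-friendly data.
# Names and the blocking factor are invented; this is not the patented code.
N = 8     # square, row-major matrices flattened into 1-D lists
BLK = 4   # columns of A hoisted per outer-loop iteration

def hoisted_matmul(A, B):
    C = [0.0] * (N * N)
    for kb in range(0, N, BLK):                    # outer loop
        # Hoist: gather the strided slice A[:, kb:kb+BLK] into a contiguous
        # buffer before performing the inner loop to completion.
        panel = [A[i * N + kb + k] for i in range(N) for k in range(BLK)]
        for i in range(N):                         # inner loop (runs fully)
            for k in range(BLK):
                a = panel[i * BLK + k]
                for j in range(N):
                    C[i * N + j] += a * B[(kb + k) * N + j]
    return C

A = [float(i % 7) for i in range(N * N)]
B = [float(i % 5 - 2) for i in range(N * N)]
C = hoisted_matmul(A, B)
print("C[0][0] =", C[0], " C[N-1][N-1] =", C[-1])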
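
Patents 6446157 and 6029225 share the same disclosure: the copy routine first classifies how the source and destination addresses map into the cache and then selects a load/store ordering. The bank-level instruction scheduling is hardware-specific, but the classification step can be sketched. The following is a hypothetical Python version; the cache size, line size, and the threshold for "nearly the same" cache address are assumptions, and cases the abstract does not spell out simply fall through to the interleaved path.

# Hypothetical sketch of the address-classification step described in
# patents 6446157 / 6029225: pick a copy strategy from how the source and
# destination byte addresses map into a direct-mapped cache.  The cache
# geometry and the "nearly the same" threshold below are assumptions.
CACHE_SIZE = 1 << 20   # assumed 1 MiB direct-mapped cache
LINE_SIZE = 64         # assumed 64-byte cache lines
NEAR_LINES = 8         # assumed threshold for "nearly the same" index

def choose_copy_strategy(src_addr, dst_addr):
    """Classify src/dst byte addresses and return a strategy name."""
    src_index = src_addr % CACHE_SIZE
    dst_index = dst_addr % CACHE_SIZE
    gap = abs(src_index - dst_index)
    if gap == 0:
        # Same cache index: load a block into registers first, then
        # interleave further loads with non-conflicting stores.
        return "separate-in-time"
    if gap < NEAR_LINES * LINE_SIZE and src_addr < dst_addr:
        # Nearly the same index, source ahead of destination: load a group
        # of cache lines, then store them, with no interleaving.
        return "load-group-then-store"
    # Different cache indices (or cases the abstract does not spell out):
    # interleave loads to one bank with stores to the other, switching
    # banks after each group.
    return "bank-interleave"

print(choose_copy_strategy(0x10000, 0x90000))    # -> bank-interleave
print(choose_copy_strategy(0x10000, 0x10040))    # -> load-group-then-store
print(choose_copy_strategy(0x10000, 0x110000))   # -> separate-in-time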
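
Patent 6088714 breaks one large FFT into small, cache-resident FFTs by transposing within squares and applying twiddle coefficients. The full seven-step square-transpose scheme is too long for a short example, but the underlying identity, that an N = n1 x n2 point FFT can be computed from small row and column FFTs plus a twiddle multiplication, can be shown compactly. The following NumPy sketch uses invented names and checks the decomposition against a direct FFT; it is not the patented seven-step implementation.

# Hypothetical sketch of the row/column (Cooley-Tukey) decomposition behind
# patent 6088714: an N = n1*n2 point FFT computed from small FFTs plus a
# twiddle multiplication.  This is not the patented seven-step scheme,
# which additionally transposes within cache-sized squares.
import numpy as np

def decomposed_fft(x, n1, n2):
    N = n1 * n2
    A = x.reshape(n2, n1)                 # A[j2, j1] = x[j1 + n1*j2]
    Y = np.fft.fft(A, axis=0)             # small FFTs over j2 -> Y[k2, j1]
    k2 = np.arange(n2).reshape(n2, 1)
    j1 = np.arange(n1).reshape(1, n1)
    Y = Y * np.exp(-2j * np.pi * k2 * j1 / N)   # twiddle multiplication
    Z = np.fft.fft(Y, axis=1)             # small FFTs over j1 -> Z[k2, k1]
    return Z.T.reshape(N)                 # X[n2*k1 + k2] = Z[k2, k1]

x = np.random.rand(64) + 1j * np.random.rand(64)
print(np.allclose(decomposed_fft(x, 8, 8), np.fft.fft(x)))   # True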