Patents by Inventor Gregory S. Mathews

Gregory S. Mathews has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20020073284
    Abstract: A method is provided for requesting data from a memory. The method includes issuing a plurality of data requests to a data request port for the memory. The plurality of data requests includes at least two ordered data requests. The method includes determining if an earlier one of the ordered data requests corresponds to a miss in the memory, and converting a later one of the ordered data requests to a prefetch in response to the earlier one of the ordered data requests corresponding to a miss in the memory. An apparatus includes a memory having at least one pipelined port for receiving data requests. The port is adapted to determine whether an earlier ordered one of the data requests corresponds to a miss in the memory. The port converts a later ordered one of the data requests to a prefetch in response to determining that the earlier ordered one of the data requests corresponds to a miss in the memory.
    Type: Application
    Filed: January 31, 2002
    Publication date: June 13, 2002
    Inventors: John Wai Cheong Fu, Dean Ahmad Mulla, Gregory S. Mathews, Stuart E. Sailer, Jeng-Jye Shaw
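
The abstract above (shared with granted patent 6381678 and publication 20010044881 further down this list) describes demoting a later ordered request to a prefetch once an earlier ordered request has missed. The Python sketch below is only a behavioral illustration of that conversion rule, not the patented pipeline hardware; the Request and PipelinedPort names and the set-of-addresses stand-in for the memory are assumptions made for this example.

    from dataclasses import dataclass

    @dataclass
    class Request:
        address: int
        ordered: bool = True          # part of an ordered group of requests
        is_prefetch: bool = False     # set when the port converts the request

    class PipelinedPort:
        """Toy model of a pipelined data-request port; a set of resident
        addresses stands in for the memory's contents."""

        def __init__(self, resident_addresses):
            self.resident = set(resident_addresses)

        def issue(self, requests):
            earlier_ordered_miss = False
            results = []
            for req in requests:                    # requests arrive in program order
                hit = req.address in self.resident
                if req.ordered and earlier_ordered_miss:
                    # An earlier ordered request already missed, so this later
                    # ordered request is converted into a prefetch.
                    req.is_prefetch = True
                if req.ordered and not hit:
                    earlier_ordered_miss = True     # remember the ordered miss
                results.append((req.address, hit, req.is_prefetch))
            return results

    if __name__ == "__main__":
        port = PipelinedPort(resident_addresses={0x100})
        for addr, hit, pf in port.issue([Request(0x200), Request(0x100), Request(0x300)]):
            print(f"addr={addr:#x} hit={hit} converted_to_prefetch={pf}")
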
  • Patent number: 6405233
    Abstract: A technique for receiving a first data from a storage location in which the first data is not stored fully aligned within processor data boundaries for data retrieval. An adder receives the first data together with a second data whose alignment has been adjusted to correspond to the first data, and adds the first data and the second data in CPU unaligned format. A carry control circuit coupled to the adder determines which carries are selected for transfer to the next stage for calculating a sum of the two data.
    Type: Grant
    Filed: June 30, 1999
    Date of Patent: June 11, 2002
    Assignee: Intel Corporation
    Inventors: Gregory S. Mathews, Jeng-Jye Shaw
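
The abstract above pairs an adder for operands held in an unaligned format with a carry control circuit that decides which carries are passed to the next stage. The sketch below illustrates that idea at byte granularity; the byte-wise decomposition and the propagate_mask argument are assumptions for illustration, not the patented circuit.

    def add_with_carry_control(a_bytes, b_bytes, propagate_mask):
        """Add two little-endian byte sequences chunk by chunk.

        propagate_mask[i] says whether the carry out of byte i may propagate
        into byte i + 1 (e.g. it would be False at a boundary between two
        independent fields packed into the same unaligned container).
        """
        assert len(a_bytes) == len(b_bytes) == len(propagate_mask)
        out, carry = [], 0
        for i, (a, b) in enumerate(zip(a_bytes, b_bytes)):
            total = a + b + carry
            out.append(total & 0xFF)
            carry = (total >> 8) if propagate_mask[i] else 0   # carry control
        return bytes(out)

    if __name__ == "__main__":
        # 0x01FF + 0x0001, little-endian bytes, carry allowed everywhere -> 0x0200.
        print(add_with_carry_control(b"\xff\x01", b"\x01\x00", [True, True]).hex())   # 0002
        # Same inputs, but the carry out of byte 0 is suppressed at a field boundary.
        print(add_with_carry_control(b"\xff\x01", b"\x01\x00", [False, True]).hex())  # 0001
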
  • Patent number: 6381678
    Abstract: A method is provided for requesting data from a memory. The method includes issuing a plurality of data requests to a data request port for the memory. The plurality of data requests includes at least two ordered data requests. The method includes determining if an earlier one of the ordered data requests corresponds to a miss in the memory, and converting a later one of the ordered data requests to a prefetch in response to the earlier one of the ordered data requests corresponding to a miss in the memory. An apparatus includes a memory having at least one pipelined port for receiving data requests. The port is adapted to determine whether an earlier ordered one of the data requests corresponds to a miss in the memory. The port converts a later ordered one of the data requests to a prefetch in response to determining that the earlier ordered one of the data requests corresponds to a miss in the memory.
    Type: Grant
    Filed: October 30, 1998
    Date of Patent: April 30, 2002
    Assignee: Intel Corporation
    Inventors: John Wai Cheong Fu, Dean Ahmad Mulla, Gregory S. Mathews, Stuart E. Sailer, Jeng-Jye Shaw
  • Publication number: 20010044881
    Abstract: A method is provided for requesting data from a memory. The method includes issuing a plurality of data requests to a data request port for the memory. The plurality of data requests includes at least two ordered data requests. The method includes determining if an earlier one of the ordered data requests corresponds to a miss in the memory, and converting a later one of the ordered data requests to a prefetch in response to the earlier one of the ordered data requests corresponding to a miss in the memory. An apparatus includes a memory having at least one pipelined port for receiving data requests. The port is adapted to determine whether an earlier ordered one of the data requests corresponds to a miss in the memory. The port converts a later ordered one of the data requests to a prefetch in response to determining that the earlier ordered one of the data requests corresponds to a miss in the memory.
    Type: Application
    Filed: October 30, 1998
    Publication date: November 22, 2001
    Inventors: John Wai Cheong Fu, Dean Ahmad Mulla, Gregory S. Mathews, Stuart E. Sailer, Jeng-Jye Shaw
  • Patent number: 6272597
    Abstract: A novel on-chip cache memory and method of operation are provided that increase microprocessor performance. The on-chip cache memory has two levels. The first level is optimized for low latency and the second level is optimized for capacity. Both levels of cache are pipelined and can support simultaneous dual port accesses. A queuing structure is provided between the first and second levels of cache and is used to decouple the faster first level cache from the slower second level cache. The queuing structure is also dual ported. Both levels of cache support non-blocking behavior: when there is a cache miss at one level, both caches can continue to process other cache hits and misses. The first level cache is optimized for integer data, while the second level cache can store any data type, including floating point. The novel two-level cache system of the present invention provides high performance with an emphasis on throughput.
    Type: Grant
    Filed: December 31, 1998
    Date of Patent: August 7, 2001
    Assignee: Intel Corporation
    Inventors: John Wai Cheong Fu, Dean A. Mulla, Gregory S. Mathews, Stuart E. Sailer
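
As a rough illustration of the structure described above, the following sketch models a small, fast first-level cache, a larger second-level cache, and a queue between them so that a first-level miss does not block later accesses. The capacities, the deque-based queue, and the dict-backed caches are simplifying assumptions; the patented design is additionally pipelined and dual ported, which this sketch does not model.

    from collections import deque

    class TwoLevelCache:
        def __init__(self, l1_capacity=4, l2_capacity=16):
            self.l1, self.l1_cap = {}, l1_capacity   # optimized for latency (small)
            self.l2, self.l2_cap = {}, l2_capacity   # optimized for capacity (large)
            self.miss_queue = deque()                # decouples L1 from the slower L2

        def access(self, addr):
            if addr in self.l1:                      # L1 hit: served immediately
                return "L1 hit"
            self.miss_queue.append(addr)             # non-blocking: queue the miss
            return "L1 miss queued"

        def drain_one_miss(self):
            """Process one queued miss against L2 (would happen in a later cycle)."""
            if not self.miss_queue:
                return None
            addr = self.miss_queue.popleft()
            result = "L2 hit" if addr in self.l2 else "L2 miss -> memory"
            self.l2.setdefault(addr, f"line:{addr}")
            if len(self.l2) > self.l2_cap:
                self.l2.pop(next(iter(self.l2)))     # evict oldest L2 line
            self.l1[addr] = self.l2[addr]            # fill L1 with the returned line
            if len(self.l1) > self.l1_cap:
                self.l1.pop(next(iter(self.l1)))     # evict oldest L1 line
            return result

    if __name__ == "__main__":
        cache = TwoLevelCache()
        print(cache.access(0x40))        # L1 miss queued
        print(cache.access(0x80))        # later access still accepted, not blocked
        print(cache.drain_one_miss())    # L2 miss -> memory
        print(cache.access(0x40))        # L1 hit
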
  • Patent number: 6105115
    Abstract: An NRU (not-recently-used) algorithm is used to track lines in each region of a memory array such that the corresponding NRU bits are reset on a region-by-region basis. That is, the NRU bits of one region are reset when all of the bits in that region indicate that their corresponding lines have recently been used. Similarly, the NRU bits of another region are reset when all of the bits in that region indicate that their corresponding lines have recently been used. Resetting the NRU bits in one region, however, does not affect the NRU bits in another region. An LRU (least-recently-used) algorithm is used to track the regions of the array such that each region has a single corresponding entry in an LRU table. That is, all the lines in a single region collectively correspond to a single LRU entry. A region is elevated to most-recently-used status in the LRU table once the NRU bits of the region are reset.
    Type: Grant
    Filed: December 31, 1997
    Date of Patent: August 15, 2000
    Assignee: Intel Corporation
    Inventors: Gregory S. Mathews, Dean A. Mulla
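
The replacement bookkeeping described above can be illustrated with a short sketch: one NRU bit per line within each region, reset region by region, plus an LRU ordering over the regions themselves. The list-based structures below are assumptions for clarity, not the patented hardware tables.

    class RegionNRU:
        def __init__(self, num_regions, lines_per_region):
            # One NRU bit per line: True means "recently used".
            self.nru = [[False] * lines_per_region for _ in range(num_regions)]
            # LRU order over regions: index 0 is the least recently used region.
            self.region_lru = list(range(num_regions))

        def touch(self, region, line):
            """Mark a line as recently used; reset only this region when it fills."""
            bits = self.nru[region]
            bits[line] = True
            if all(bits):
                # Every line in this region was recently used: reset this region's
                # bits only (other regions are unaffected) ...
                self.nru[region] = [False] * len(bits)
                # ... and promote the region to most recently used in the LRU table.
                self.region_lru.remove(region)
                self.region_lru.append(region)

        def victim_region(self):
            return self.region_lru[0]      # least recently used region

    if __name__ == "__main__":
        nru = RegionNRU(num_regions=2, lines_per_region=2)
        nru.touch(1, 0)
        nru.touch(1, 1)                    # region 1 fills, resets, becomes MRU
        print(nru.victim_region())         # 0
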
  • Patent number: 5956752
    Abstract: Index prediction is used to access data in a memory array. A virtual address is received at an input and translated to a physical address. The memory array is accessed at a predicted address, and a portion of the predicted address is compared to a portion of the physical address. If the portion of the predicted address differs from the portion of the physical address, the prediction was incorrect and the memory array is accessed again at the physical address.
    Type: Grant
    Filed: December 16, 1996
    Date of Patent: September 21, 1999
    Assignee: Intel Corporation
    Inventor: Gregory S. Mathews
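
A minimal sketch of the index-prediction flow described above: the array is accessed speculatively at an index predicted from the virtual address, a portion of the predicted address is compared against the translated physical address, and the access is replayed at the physical address on a misprediction. The page size, index width, and toy page table are assumptions for this example.

    PAGE_BITS = 12          # assumed 4 KiB pages
    INDEX_BITS = 14         # assumed index wider than the page offset, so it can mispredict

    def lookup(memory_array, page_table, virtual_addr):
        predicted_index = virtual_addr & ((1 << INDEX_BITS) - 1)   # predict from the VA
        speculative = memory_array.get(predicted_index)            # early array access

        # Translation proceeds in parallel in hardware; modelled sequentially here.
        frame = page_table[virtual_addr >> PAGE_BITS]
        physical_addr = (frame << PAGE_BITS) | (virtual_addr & ((1 << PAGE_BITS) - 1))
        physical_index = physical_addr & ((1 << INDEX_BITS) - 1)

        if predicted_index == physical_index:        # compare the predicted portion
            return speculative                       # prediction was correct
        return memory_array.get(physical_index)      # mispredicted: access again

    if __name__ == "__main__":
        page_table = {0x1: 0x6}                                    # VPN 0x1 -> frame 0x6
        memory_array = {0x1abc: "speculative slot", 0x2abc: "correct slot"}
        print(lookup(memory_array, page_table, 0x1abc))            # prints "correct slot"
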
  • Patent number: 5802577
    Abstract: A computer system maintaining cache coherency among a plurality of caching devices coupled across a local bus includes a bus master, a memory, and a plurality of cache complexes, all coupled to the local bus. When the bus master requests a read or write with the memory, the cache complexes snoop the transaction. Each cache complex asserts a busy signal during the snooping process. A detection circuit detects when the busy signals have been de-asserted and asserts a done signal. If one of the snoops results in a cache hit to a dirty line, the respective cache complex asserts a dirty signal. If one of the snoops results in a cache hit to a clean line, the respective cache complex asserts a clean signal. If the memory detects a simultaneous assertion of the dirty signal and the done signal, it halts the transaction request from the bus master.
    Type: Grant
    Filed: May 14, 1997
    Date of Patent: September 1, 1998
    Assignee: Intel Corporation
    Inventors: Ketan S. Bhat, Gregory S. Mathews
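
The snoop handshake described above can be sketched as follows: every cache complex raises a busy signal while it snoops, a done signal is derived once all busy signals drop, and a simultaneous done and dirty assertion tells the memory to halt the bus master's request. The CacheComplex class and its dict of line states are assumptions for illustration only.

    class CacheComplex:
        def __init__(self, name, lines):
            self.name = name
            self.lines = lines          # addr -> "clean" or "dirty"

        def snoop(self, addr):
            """Return (dirty, clean) signal values for this snoop."""
            state = self.lines.get(addr)
            return state == "dirty", state == "clean"

    def run_bus_transaction(complexes, addr):
        busy = {c.name: True for c in complexes}     # all complexes snoop (busy asserted)
        dirty_signal = clean_signal = False
        for c in complexes:                          # parallel in hardware, sequential here
            d, cl = c.snoop(addr)
            dirty_signal |= d
            clean_signal |= cl
            busy[c.name] = False                     # this complex finished snooping
        done_signal = not any(busy.values())         # detection-circuit output
        if done_signal and dirty_signal:
            return "memory halts request: dirty line must be supplied first"
        return "memory services request" + (" (line also cached clean)" if clean_signal else "")

    if __name__ == "__main__":
        caches = [CacheComplex("c0", {0x40: "dirty"}), CacheComplex("c1", {0x80: "clean"})]
        print(run_bus_transaction(caches, 0x40))     # halted: snoop hit a dirty line
        print(run_bus_transaction(caches, 0x80))     # serviced: snoop hit a clean line
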
  • Patent number: 5634131
    Abstract: A mechanism and means for powering down a functional unit on an integrated circuit having multiple functional units. Some of the functional units are clocked independently of each other. The present invention includes a method and mechanism for indicating to the functional unit whether it is required for use. The present invention also includes a method and mechanism for powering down the functional unit transparently and independently of the rest of the functional units when the functional unit is not required for use.
    Type: Grant
    Filed: October 26, 1994
    Date of Patent: May 27, 1997
    Assignee: Intel Corporation
    Inventors: Eugene P. Matter, Yahya S. Sotoudeh, Gregory S. Mathews
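
As a small behavioral sketch of the mechanism described above (and in the related patent 5392437 below), each functional unit receives an indication of whether it is required and gates its own clock accordingly, without affecting the other units. The FunctionalUnit class and the clock-enable flag are illustrative assumptions, not the patented circuit.

    class FunctionalUnit:
        def __init__(self, name):
            self.name = name
            self.clock_enabled = True

        def indicate_required(self, required):
            """Power the unit down (or back up) based on the required indication."""
            self.clock_enabled = required

        def tick(self):
            # A powered-down unit simply does no work on this clock; the rest of
            # the chip runs on unaffected.
            return f"{self.name}: {'active' if self.clock_enabled else 'powered down'}"

    if __name__ == "__main__":
        fpu, alu = FunctionalUnit("fpu"), FunctionalUnit("alu")
        fpu.indicate_required(False)          # no floating-point work pending
        print(fpu.tick(), "|", alu.tick())    # fpu: powered down | alu: active
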
  • Patent number: 5469544
    Abstract: A microprocessor, for use in a computer system, that pipelines addresses for both burst and non-burst mode data transfers. By pipelining addresses, the microprocessor is able to increase the throughput of data transfers in the system. In the present invention, programmable bits are used to enable or disable address pipelining for non-burst mode and burst mode transfers.
    Type: Grant
    Filed: November 9, 1992
    Date of Patent: November 21, 1995
    Assignee: Intel Corporation
    Inventors: Deepak J. Aatresh, Tosaku Nakanishi, Gregory S. Mathews
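
The programmable enable bits mentioned above might be modeled as in the sketch below, with one bit controlling address pipelining for burst transfers and another for non-burst transfers. The register layout and bit positions are assumptions, not taken from the patent.

    BURST_PIPELINE_EN = 1 << 0       # assumed bit 0: pipeline burst-mode addresses
    NONBURST_PIPELINE_EN = 1 << 1    # assumed bit 1: pipeline non-burst addresses

    def pipelining_enabled(control_reg, burst_mode):
        mask = BURST_PIPELINE_EN if burst_mode else NONBURST_PIPELINE_EN
        return bool(control_reg & mask)

    if __name__ == "__main__":
        control_reg = BURST_PIPELINE_EN          # burst pipelining on, non-burst off
        print(pipelining_enabled(control_reg, burst_mode=True))    # True
        print(pipelining_enabled(control_reg, burst_mode=False))   # False
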
  • Patent number: 5398244
    Abstract: An innovative protocol, and a system for implementing it, enables quick release of the bus by the master device, such as a CPU, to permit slave devices access to the bus. In one embodiment, the arbiter can select between the original hold protocol and the quick hold protocol according to predetermined criteria indicating that a low-latency response is requested. Upon assertion of a QHOLD signal, the CPU issues a burst-last signal to prematurely terminate outstanding burst transactions on the bus in a manner transparent to the slave devices. Once the outstanding bus cycles are complete, the CPU performs an internal backoff to immediately release the bus for access by the requesting slave device. Any pending burst cycles that were terminated prematurely by the QHOLD signal are subsequently restarted, after the slave device completes its access to the bus, for the data not yet transacted by the CPU.
    Type: Grant
    Filed: July 16, 1993
    Date of Patent: March 14, 1995
    Assignee: Intel Corporation
    Inventors: Gregory S. Mathews, Deepak J. Aatresh, Sanjay Jain
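
The quick-release sequence described above can be summarized in a short behavioral sketch: on QHOLD, an outstanding burst is cut short with a burst-last indication, the CPU backs off the bus so the requesting slave can run, and the untransferred remainder of the burst is restarted afterwards. The Burst dataclass and the cpu_quick_release function are assumptions for illustration, not the patented logic.

    from dataclasses import dataclass

    @dataclass
    class Burst:
        total_beats: int
        completed_beats: int = 0

    def cpu_quick_release(burst, qhold_asserted, beats_before_qhold):
        log = []
        burst.completed_beats += beats_before_qhold
        if qhold_asserted and burst.completed_beats < burst.total_beats:
            log.append("QHOLD asserted: issue burst-last, terminate burst early")
            log.append("internal backoff: release the bus to the requesting slave")
            log.append("slave completes its low-latency access")
            remaining = burst.total_beats - burst.completed_beats
            log.append(f"restart burst for the remaining {remaining} beat(s)")
            burst.completed_beats = burst.total_beats
        else:
            log.append("burst completes normally")
        return log

    if __name__ == "__main__":
        for step in cpu_quick_release(Burst(total_beats=4), True, beats_before_qhold=1):
            print(step)
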
  • Patent number: 5392437
    Abstract: A mechanism for powering down a functional unit on an integrated circuit having multiple functional units. Some of the functional units are clocked independently of each other. Included are a method and mechanism for indicating to the functional unit whether it is required for use, and a method and mechanism for powering down the functional unit transparently and independently of the rest of the functional units when the functional unit is not required for use.
    Type: Grant
    Filed: November 6, 1992
    Date of Patent: February 21, 1995
    Assignee: Intel Corporation
    Inventors: Eugene P. Matter, Yahya S. Sotoudeh, Gregory S. Mathews
  • Patent number: 5359723
    Abstract: A cache memory hierarchy having a first level write through cache memory and a second level write back cache memory is provided to a computer system having a CPU, a main memory, and a number of DMA devices. The first level write through cache memory responds to read and write accesses by the CPU, and snoop accesses by the DMA devices, whereas the second level write back cache memory responds to read and write accesses by the CPU as well as the DMA devices. Additionally, the first level write through cache memory is a relatively large cache memory designed to provide a high cache hit rate, whereas the second level write back cache memory is a relatively small cache memory designed to reduce accesses to the main memory. Furthermore, the first level write through cache memory reallocates its cache lines in response to CPU read misses only, whereas the second level write back cache memory reallocates its cache lines in response to CPU write misses only.
    Type: Grant
    Filed: December 16, 1991
    Date of Patent: October 25, 1994
    Assignee: Intel Corporation
    Inventors: Gregory S. Mathews, Edward S. Zager
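
To make the allocation split described above concrete, the sketch below allocates lines in the write-through first-level cache only on CPU read misses and in the write-back second-level cache on CPU writes (covering the write-miss case), with DMA accesses checking the second level for data newer than memory. The dict-backed caches and method names are assumptions for illustration, not the patented hierarchy.

    class WriteThroughWriteBackHierarchy:
        def __init__(self):
            self.l1 = {}          # write-through first level
            self.l2 = {}          # write-back second level
            self.memory = {}      # main memory

        def cpu_read(self, addr):
            if addr in self.l1:
                return self.l1[addr]
            value = self.l2.get(addr, self.memory.get(addr, 0))
            self.l1[addr] = value          # L1 reallocates on CPU read misses only
            return value

        def cpu_write(self, addr, value):
            if addr in self.l1:
                self.l1[addr] = value      # update an existing L1 copy; no allocation
            # Write-through: the write always passes below L1 ...
            self.l2[addr] = value          # ... and the write-back L2 allocates on CPU
                                           # write misses, absorbing main-memory traffic.

        def dma_access(self, addr):
            # Only the write-back L2 can hold data newer than main memory.
            return self.l2.get(addr, self.memory.get(addr, 0))

    if __name__ == "__main__":
        h = WriteThroughWriteBackHierarchy()
        h.cpu_write(0x10, 7)                     # write miss: allocated in L2, not L1
        print(0x10 in h.l1, 0x10 in h.l2)        # False True
        print(h.cpu_read(0x20), 0x20 in h.l1)    # read miss: 0 True (now resident in L1)
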