Patents by Inventor Matthew M. Gilbert

Matthew M. Gilbert has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9519586
    Abstract: Efficient techniques are described for reducing cache pollution by using prefetch logic that recognizes exits from software loops or returns from called functions and cancels any pending prefetch request operations. The prefetch logic includes a loop data address monitor that determines a data access stride from repeated execution of a memory access instruction in a program loop. Data prefetch logic then speculatively issues prefetch requests according to the data access stride. A stop prefetch circuit cancels pending prefetch requests in response to an identified loop exit. The prefetch logic may also recognize a return from a called function and cancel any pending prefetch request operations associated with the called function. When prefetch requests are canceled, demand requests, such as those based on load instructions, are not canceled. This approach to reducing cache pollution uses program flow information to throttle data cache prefetching.
    Type: Grant
    Filed: January 21, 2013
    Date of Patent: December 13, 2016
    Assignee: QUALCOMM Incorporated
    Inventor: Matthew M. Gilbert
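
The grant above describes three cooperating pieces: a loop data address monitor that learns a stride from repeated loads, prefetch logic that speculatively issues requests along that stride, and a stop prefetch circuit that cancels pending prefetches on a loop exit or function return while leaving demand requests untouched. The C sketch below is only an illustrative software model of that behavior; every name in it (prefetch_unit, on_loop_load, on_loop_exit) is invented for illustration and is not the patented hardware design.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define QUEUE_LEN 8

/* One outstanding memory request: either a demand load or a speculative prefetch. */
typedef struct {
    uint64_t addr;
    int is_prefetch;   /* 1 = speculative prefetch, 0 = demand request (never canceled) */
    int valid;
} mem_request;

/* Illustrative model of the prefetch unit: a loop data address monitor
 * (last_addr/stride) plus a small queue of pending requests. */
typedef struct {
    uint64_t last_addr;
    int64_t  stride;        /* learned data access stride */
    int      stride_valid;
    mem_request queue[QUEUE_LEN];
} prefetch_unit;

/* Called for each execution of the monitored memory access in the program loop. */
static void on_loop_load(prefetch_unit *pu, uint64_t addr) {
    if (pu->last_addr != 0) {
        int64_t new_stride = (int64_t)(addr - pu->last_addr);
        pu->stride_valid = (pu->stride == new_stride);  /* stride stable across repeats */
        pu->stride = new_stride;
    }
    pu->last_addr = addr;

    /* The demand request itself always goes out. */
    for (int i = 0; i < QUEUE_LEN; i++) {
        if (!pu->queue[i].valid) {
            pu->queue[i] = (mem_request){ addr, 0, 1 };
            break;
        }
    }
    /* Speculatively prefetch the next address along the learned stride. */
    if (pu->stride_valid) {
        for (int i = 0; i < QUEUE_LEN; i++) {
            if (!pu->queue[i].valid) {
                pu->queue[i] = (mem_request){ addr + pu->stride, 1, 1 };
                break;
            }
        }
    }
}

/* Stop-prefetch behavior: on a loop exit (or a return from the called
 * function that owned the monitor), cancel pending prefetches only. */
static void on_loop_exit(prefetch_unit *pu) {
    for (int i = 0; i < QUEUE_LEN; i++)
        if (pu->queue[i].valid && pu->queue[i].is_prefetch)
            pu->queue[i].valid = 0;    /* demand requests are left untouched */
    pu->stride_valid = 0;
}

int main(void) {
    prefetch_unit pu;
    memset(&pu, 0, sizeof pu);
    for (uint64_t a = 0x1000; a < 0x1040; a += 16)
        on_loop_load(&pu, a);          /* loop body: constant 16-byte stride */
    on_loop_exit(&pu);                 /* loop exit cancels the speculative requests */
    for (int i = 0; i < QUEUE_LEN; i++)
        if (pu.queue[i].valid)
            printf("pending demand request: 0x%llx\n",
                   (unsigned long long)pu.queue[i].addr);
    return 0;
}
```

Running the model shows only demand addresses surviving the loop exit, which is the cache-pollution point the abstract makes: program flow information throttles speculation without delaying real loads.
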
  • Publication number: 20140208039
    Abstract: Efficient techniques are described for reducing cache pollution by using prefetch logic that recognizes exits from software loops or returns from called functions and cancels any pending prefetch request operations. The prefetch logic includes a loop data address monitor that determines a data access stride from repeated execution of a memory access instruction in a program loop. Data prefetch logic then speculatively issues prefetch requests according to the data access stride. A stop prefetch circuit cancels pending prefetch requests in response to an identified loop exit. The prefetch logic may also recognize a return from a called function and cancel any pending prefetch request operations associated with the called function. When prefetch requests are canceled, demand requests, such as those based on load instructions, are not canceled. This approach to reducing cache pollution uses program flow information to throttle data cache prefetching.
    Type: Application
    Filed: January 21, 2013
    Publication date: July 24, 2014
    Applicant: QUALCOMM INCORPORATED
    Inventor: Matthew M. Gilbert
  • Patent number: 7213137
    Abstract: The method and apparatus feature detecting an interrupt service request; storing interrupt service instructions into an instruction cache in response to detecting the interrupt service request; fetching instructions from the instruction cache into an instruction stream sequence that includes both mainline program instructions and the interrupt service instructions, so that core processor bandwidth is allocated between interrupt servicing and the mainline program, based on interrupt priority, while the instruction stream sequence executes; and processing the instructions within the instruction stream sequence, including the mainline program instructions and the inserted interrupt servicing instructions. The method and apparatus further feature recycling of executed micro-ops and detecting an imminent context switch to prepare interrupt service instructions.
    Type: Grant
    Filed: October 31, 2003
    Date of Patent: May 1, 2007
    Assignee: Intel Corporation
    Inventors: Douglas D. Boom, Matthew M. Gilbert
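
As a rough way to picture the claimed interleaving, the sketch below lets an interrupt priority decide how many of a core's per-cycle fetch slots go to interrupt service instructions versus mainline instructions. It is a software analogy under assumed names (run_cycle, SLOTS_PER_CYCLE, irq_priority), not the patent's implementation.

```c
#include <stdio.h>

#define SLOTS_PER_CYCLE 4   /* modeled fetch/issue bandwidth of the core */

/* Interleave mainline and interrupt-service instructions for one cycle.
 * A higher interrupt priority claims more of the per-cycle bandwidth,
 * so urgent interrupts take slots while low-priority ones trickle in. */
static void run_cycle(int cycle, int *mainline_pc, int *isr_pc,
                      int isr_remaining, int irq_priority) {
    int isr_slots = 0;
    if (isr_remaining > 0) {
        /* assumption: priority 0..SLOTS_PER_CYCLE maps directly to reserved slots */
        isr_slots = irq_priority;
        if (isr_slots > SLOTS_PER_CYCLE) isr_slots = SLOTS_PER_CYCLE;
        if (isr_slots > isr_remaining)   isr_slots = isr_remaining;
    }
    int main_slots = SLOTS_PER_CYCLE - isr_slots;

    for (int i = 0; i < isr_slots; i++)
        printf("cycle %d: ISR  instruction %d\n", cycle, (*isr_pc)++);
    for (int i = 0; i < main_slots; i++)
        printf("cycle %d: main instruction %d\n", cycle, (*mainline_pc)++);
}

int main(void) {
    int mainline_pc = 0, isr_pc = 0;
    int isr_len = 6;          /* pending interrupt-service instructions */
    int irq_priority = 2;     /* medium priority: 2 of 4 slots per cycle */

    for (int cycle = 0; cycle < 4; cycle++)
        run_cycle(cycle, &mainline_pc, &isr_pc, isr_len - isr_pc, irq_priority);
    return 0;
}
```

With a medium priority, the handler drains over several cycles while the mainline program keeps part of the bandwidth; a maximum priority would claim every slot until the handler completes.
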
  • Patent number: 6771595
    Abstract: A resource controller allocates a portion of network memory to a receive path for receiving data and to a transmit path for transmitting data. Network traffic patterns are monitored, including the amount of data received and transmitted by the network processing device. Based on the monitored traffic patterns, the resource controller determines whether the transmit path or the receive path has been allocated the desired amount of network memory. The resource controller removes underutilized resources from the receive or transmit path. Removed network memory is returned to a resource pool and made available for allocation to another receive path or transmit path that needs additional network memory. An artificial intelligence system predicts future network resource allocations to further increase the efficiency of the resource controller's network resource allocation.
    Type: Grant
    Filed: August 31, 1999
    Date of Patent: August 3, 2004
    Assignee: Intel Corporation
    Inventors: Matthew M. Gilbert, Douglas D. Boom
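
The core idea above is a controller that watches receive and transmit traffic, reclaims memory from whichever path is underusing its share, and returns it to a pool for later allocation. The C sketch below models just that rebalancing step; the names (resource_ctrl, rebalance) are invented, and the simple traffic-ratio target is a stand-in for the patent's monitoring and artificial-intelligence prediction logic, not a description of it.

```c
#include <stdio.h>

/* Illustrative model of the resource controller: a shared pool of buffer
 * units plus per-path allocations and monitored traffic counters. */
typedef struct {
    int pool;        /* unallocated network memory units */
    int rx_alloc;    /* units currently given to the receive path */
    int tx_alloc;    /* units currently given to the transmit path */
    long rx_bytes;   /* traffic observed since the last rebalance */
    long tx_bytes;
} resource_ctrl;

/* Reclaim memory from whichever path is using well under its share and
 * return it to the pool, where the busier path can claim it on demand. */
static void rebalance(resource_ctrl *rc) {
    long total = rc->rx_bytes + rc->tx_bytes;
    if (total == 0)
        return;
    /* Desired split follows the observed traffic ratio. */
    int usable = rc->rx_alloc + rc->tx_alloc + rc->pool;
    int rx_target = (int)((long)usable * rc->rx_bytes / total);
    int tx_target = usable - rx_target;

    if (rc->rx_alloc > rx_target) {          /* receive path is underutilized */
        rc->pool += rc->rx_alloc - rx_target;
        rc->rx_alloc = rx_target;
    }
    if (rc->tx_alloc > tx_target) {          /* transmit path is underutilized */
        rc->pool += rc->tx_alloc - tx_target;
        rc->tx_alloc = tx_target;
    }
    rc->rx_bytes = rc->tx_bytes = 0;         /* start a new monitoring window */
}

int main(void) {
    resource_ctrl rc = { .pool = 0, .rx_alloc = 64, .tx_alloc = 64 };
    rc.rx_bytes = 900000;   /* receive-heavy workload in this window */
    rc.tx_bytes = 100000;
    rebalance(&rc);
    printf("rx=%d tx=%d pool=%d\n", rc.rx_alloc, rc.tx_alloc, rc.pool);
    return 0;
}
```

In this receive-heavy example the transmit path gives most of its memory back to the pool, where a busier path can claim it later.
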
  • Publication number: 20040073735
    Abstract: The method includes detecting and prioritizing one or more interrupt service requests; inserting interrupt servicing instructions responsive to the interrupt service requests into an instruction queue mechanism; and processing the instructions within the instruction queue mechanism, including the inserted interrupt servicing instructions. The instruction queue mechanism may include an instruction cache and an instruction fetch unit for fetching instructions from the instruction cache, wherein the processing includes decoding the instructions into micro-opcodes and executing the micro-opcodes in one or more out-of-order execution units. The method further includes retiring the executed micro-opcodes, including those micro-opcodes representing the inserted interrupt servicing instructions, to the instruction cache. Preferably, the criteria for interrupting the core processor include the priority of the interrupts and the capacity of the processor to allocate bandwidth to interrupt servicing.
    Type: Application
    Filed: October 31, 2003
    Publication date: April 15, 2004
    Applicant: Intel Corporation (a Delaware Corporation)
    Inventors: Douglas D. Boom, Matthew M. Gilbert
  • Patent number: 6662297
    Abstract: The method and apparatus feature detecting and prioritizing one or more interrupt service requests; inserting interrupt servicing instructions responsive to the interrupt service requests into an instruction queue mechanism; and processing the instructions within the instruction queue mechanism, including the inserted interrupt servicing instructions. The instruction queue mechanism may include an instruction cache and an instruction fetch unit for fetching instructions from the instruction cache, wherein the processing includes decoding the instructions into micro-opcodes and executing the micro-opcodes in one or more out-of-order execution units. Further features include retiring the executed micro-opcodes, including those micro-opcodes representing the inserted interrupt servicing instructions, to the instruction cache. Preferably, the criteria for interrupting the core processor include the priority of the interrupts and the capacity of the processor to allocate bandwidth to interrupt servicing.
    Type: Grant
    Filed: December 30, 1999
    Date of Patent: December 9, 2003
    Assignee: Intel Corporation
    Inventors: Douglas D. Boom, Matthew M. Gilbert
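
A distinctive step in this interrupt-servicing family is retiring executed micro-opcodes, including the interrupt-service ones, back to the instruction cache so a later occurrence of the same interrupt can reuse them. The sketch below models only that recycling idea under invented names (isr_cache, retire_isr_uops, service_interrupt); it is an analogy in software, not the patented circuit.

```c
#include <stdio.h>
#include <string.h>

#define MAX_VECTORS 8
#define MAX_UOPS    16

/* Cached, already-decoded interrupt-service micro-ops, keyed by vector. */
typedef struct {
    int valid;
    int uop_count;
    int uops[MAX_UOPS];      /* stand-in for decoded micro-opcodes */
} isr_cache_line;

static isr_cache_line isr_cache[MAX_VECTORS];

/* On retirement, write the executed micro-ops back so a later instance of
 * the same interrupt can skip fetch and decode ("recycling"). */
static void retire_isr_uops(int vector, const int *uops, int count) {
    isr_cache_line *line = &isr_cache[vector % MAX_VECTORS];
    line->uop_count = count > MAX_UOPS ? MAX_UOPS : count;
    memcpy(line->uops, uops, line->uop_count * sizeof(int));
    line->valid = 1;
}

/* On a new interrupt, prefer the recycled micro-ops if they are cached. */
static void service_interrupt(int vector) {
    isr_cache_line *line = &isr_cache[vector % MAX_VECTORS];
    if (line->valid) {
        printf("vector %d: reusing %d cached micro-ops\n", vector, line->uop_count);
    } else {
        printf("vector %d: fetch + decode handler, then cache it on retire\n", vector);
        int decoded[3] = { 0xA0, 0xA1, 0xA2 };   /* pretend decode result */
        retire_isr_uops(vector, decoded, 3);
    }
}

int main(void) {
    service_interrupt(5);   /* first occurrence: decoded, then cached at retirement */
    service_interrupt(5);   /* second occurrence: recycled from the cache */
    return 0;
}
```
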