Patents by Inventor Matthias Knoth
Matthias Knoth has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8051320
Abstract: The present invention provides a clock ratio controller for dynamic voltage and frequency scaled digital systems, and applications thereof. In an embodiment, a digital system is provided that includes a first digital circuit that operates at a first rate determined by a first clock signal and a second digital circuit that operates at a second rate determined by a second clock signal. The first digital circuit is coupled to the second digital circuit by a bus that is used for communications between the first digital circuit and the second digital circuit. A clock ratio controller is used to adjust the frequency of the first clock signal and/or the second clock signal in response to a power management signal without causing a loss of synchronization between the first digital circuit and the second digital circuit.
Type: Grant
Filed: December 12, 2007
Date of Patent: November 1, 2011
Assignee: MIPS Technologies, Inc.
Inventor: Matthias Knoth
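A rough way to picture the ratio-switching behavior described in this abstract is the C sketch below. It is illustrative only, not the patented circuit; the names (clock_ratio_ctrl, count_to_bus_edge) are invented, and it assumes a simplified model in which the first clock runs at an integer multiple of the second and a ratio change requested by power management is deferred to the next edge shared by both clocks, so bus traffic between the two circuits stays synchronized.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model (not the patented circuit): the first clock runs at
 * an integer multiple of the second, and a ratio change requested by
 * power management is only committed on a cycle where both clocks rise
 * together, so no bus transfer straddles two different ratios. */
typedef struct {
    uint32_t ratio;             /* current first:second clock ratio      */
    uint32_t pending_ratio;     /* ratio requested by power management   */
    uint32_t count_to_bus_edge; /* first-clock cycles until shared edge  */
} clock_ratio_ctrl;

/* Power management requests a new ratio (e.g. 1, 2, or 4). */
static void request_ratio(clock_ratio_ctrl *c, uint32_t new_ratio)
{
    c->pending_ratio = new_ratio;
}

/* Called once per first-clock cycle; returns true on cycles where the
 * second (bus-side) clock also rises.  The ratio only changes there. */
static bool first_clock_tick(clock_ratio_ctrl *c)
{
    bool shared_edge = (c->count_to_bus_edge == 0);
    if (shared_edge) {
        if (c->pending_ratio != c->ratio)
            c->ratio = c->pending_ratio;      /* safe switch point */
        c->count_to_bus_edge = c->ratio - 1;
    } else {
        c->count_to_bus_edge--;
    }
    return shared_edge;
}
```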
-
Patent number: 7899993
Abstract: Microprocessor having a power-saving instruction cache way predictor and instruction replacement scheme. In one embodiment, the processor includes a multi-way set associative cache, a way predictor, a policy counter, and a cache refill circuit. The policy counter provides a signal to the way predictor that determines whether the way predictor operates in a first mode or a second mode. Following a cache miss, the cache refill circuit selects a way of the cache and compares a layer number associated with a dataram field of the way to a way set layer number. The cache refill circuit writes a block of data to the field if the layer number is not equal to the way set layer number. If the layer number is equal to the way set layer number, the cache refill circuit repeats the above steps for additional ways until the block of memory is written to the cache.
Type: Grant
Filed: April 9, 2009
Date of Patent: March 1, 2011
Assignee: MIPS Technologies, Inc.
Inventor: Matthias Knoth
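The refill step in this abstract can be pictured with the small C sketch below. It is not the actual circuit; the per-way layer field, the set-wide layer number, and the function refill are assumptions made to mirror the described comparison: a candidate way is skipped when its layer number already equals the way set layer number, and the block is written to the first way where the numbers differ.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_WAYS 4

/* Hypothetical per-way state: each dataram field carries a layer number. */
typedef struct {
    uint8_t  layer;            /* layer number stored with the field   */
    uint32_t data[8];          /* cache line payload (illustrative)    */
} way_field;

typedef struct {
    way_field way[NUM_WAYS];
    uint8_t   set_layer;       /* the "way set layer number"           */
} cache_set;

/* On a miss, walk the ways starting from a selected way and write the
 * refill block into the first field whose layer number differs from
 * the set layer number, as the abstract describes. */
static bool refill(cache_set *s, unsigned start_way,
                   const uint32_t block[8])
{
    for (unsigned i = 0; i < NUM_WAYS; i++) {
        unsigned w = (start_way + i) % NUM_WAYS;
        if (s->way[w].layer != s->set_layer) {
            for (unsigned j = 0; j < 8; j++)
                s->way[w].data[j] = block[j];
            s->way[w].layer = s->set_layer;  /* assumed: mark as filled */
            return true;
        }
    }
    return false;   /* all ways already at the set layer number */
}
```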
-
Patent number: 7873820
Abstract: The present invention provides processing systems, apparatuses, and methods that reduce power consumption with the use of a loop buffer. In an embodiment, an instruction fetch unit of a processor initially provides instructions from an instruction cache to an execution unit of the processor. While instructions are provided from the instruction cache to the execution unit, instructions forming a loop are stored in a loop buffer. When a loop stored in the loop buffer is being iterated, the instruction cache is disabled to reduce power consumption and instructions are provided to the execution unit from the loop buffer. When the loop is exited, the instruction cache is re-enabled and instructions are provided to the execution unit from the instruction cache.
Type: Grant
Filed: November 15, 2005
Date of Patent: January 18, 2011
Assignee: MIPS Technologies, Inc.
Inventor: Matthias Knoth
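As a rough illustration of the fetch-path switching this abstract describes, the C sketch below models a fetch unit that serves iterations of a captured loop from a loop buffer with the instruction cache disabled, then re-enables the cache on loop exit. All names (loop_buffer, icache_set_enabled, and so on) are hypothetical; this is a sketch of the idea, not the patented design, and it assumes fixed 4-byte instructions.

```c
#include <stdbool.h>
#include <stdint.h>

#define LOOP_BUF_SIZE 64

/* Hypothetical fetch-unit model: while a captured loop is being
 * iterated, the instruction cache is disabled and fetch is served
 * from the loop buffer instead. */
typedef struct {
    uint32_t insn[LOOP_BUF_SIZE];
    uint32_t loop_start;       /* address of first instruction in loop */
    uint32_t loop_len;         /* number of instructions captured      */
    bool     active;           /* currently iterating the cached loop  */
} loop_buffer;

extern uint32_t icache_fetch(uint32_t pc);       /* normal fetch path  */
extern void     icache_set_enabled(bool on);     /* power gate (model) */

static uint32_t fetch(loop_buffer *lb, uint32_t pc)
{
    bool in_loop = lb->loop_len != 0 &&
                   pc >= lb->loop_start &&
                   pc <  lb->loop_start + 4 * lb->loop_len;

    if (in_loop) {
        if (!lb->active) {               /* entering a buffered loop   */
            icache_set_enabled(false);   /* save power while iterating */
            lb->active = true;
        }
        return lb->insn[(pc - lb->loop_start) / 4];
    }
    if (lb->active) {                    /* loop exited                */
        icache_set_enabled(true);
        lb->active = false;
    }
    return icache_fetch(pc);
}
```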
-
Patent number: 7657708
Abstract: Methods for reducing data cache access power in a processor. In an embodiment, a micro tag array is used to store base address or base register data bits, offset data bits, a carry bit, and way selection data bits associated with cache accesses. When a LOAD or a STORE instruction is fetched, at least a portion of the base address and at least a portion of the offset of the instruction are compared to data stored in the micro tag array. If a micro tag array hit occurs, the micro tag array generates a cache dataram enable signal. This signal activates only the cache dataram that stores the needed data.
Type: Grant
Filed: August 18, 2006
Date of Patent: February 2, 2010
Assignee: MIPS Technologies, Inc.
Inventors: Matthias Knoth, Ryan C. Kinter
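The micro tag lookup can be pictured with the short C sketch below. The entry layout and the function micro_tag_lookup are invented for illustration; the point it mirrors from the abstract (and from the related entries 7650465, 20080046653, and 20080046652 below) is that a hit on the stored base-address bits, offset bits, and carry bit yields a way selection, so that only the single dataram holding the needed data is enabled.

```c
#include <stdbool.h>
#include <stdint.h>

#define MICRO_TAG_ENTRIES 4

/* Hypothetical micro tag entry: low-order base and offset bits plus a
 * carry bit and the way selection recorded by an earlier access. */
typedef struct {
    bool     valid;
    uint16_t base_bits;     /* selected bits of the base address/register */
    uint16_t offset_bits;   /* selected bits of the offset                */
    bool     carry;         /* carry out of the partial base+offset sum   */
    uint8_t  way_select;    /* which dataram held the data last time      */
} micro_tag_entry;

/* On a LOAD/STORE, compare the instruction's base and offset bits to the
 * stored entries.  A hit returns the way to enable, so only that single
 * dataram is activated; a miss returns -1 and the full lookup proceeds. */
static int micro_tag_lookup(const micro_tag_entry tags[MICRO_TAG_ENTRIES],
                            uint16_t base_bits, uint16_t offset_bits,
                            bool carry)
{
    for (int i = 0; i < MICRO_TAG_ENTRIES; i++) {
        if (tags[i].valid &&
            tags[i].base_bits   == base_bits &&
            tags[i].offset_bits == offset_bits &&
            tags[i].carry       == carry)
            return tags[i].way_select;   /* enable only this dataram */
    }
    return -1;  /* micro tag miss: fall back to the normal tag compare */
}
```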
-
Patent number: 7650465
Abstract: Processors and systems having a micro tag array that reduces data cache access power. The processors and systems include a cache that has a plurality of datarams, a processor pipeline register, and a micro tag array. The micro tag array is coupled to the cache and the processor pipeline register. The micro tag array stores base address data bits or base register data bits, offset data bits, a carry bit, and way selection data bits. When a LOAD or a STORE instruction is fetched, at least a portion of the base address and at least a portion of the offset of the instruction are compared to data stored in the micro tag array. If a micro tag array hit occurs, the micro tag array generates a cache dataram enable signal. This signal enables only a single dataram of the cache.
Type: Grant
Filed: August 18, 2006
Date of Patent: January 19, 2010
Assignee: MIPS Technologies, Inc.
Inventors: Matthias Knoth, Ryan C. Kinter
-
Publication number: 20090198900
Abstract: Microprocessor having a power-saving instruction cache way predictor and instruction replacement scheme. In one embodiment, the processor includes a multi-way set associative cache, a way predictor, a policy counter, and a cache refill circuit. The policy counter provides a signal to the way predictor that determines whether the way predictor operates in a first mode or a second mode. Following a cache miss, the cache refill circuit selects a way of the cache and compares a layer number associated with a dataram field of the way to a way set layer number. The cache refill circuit writes a block of data to the field if the layer number is not equal to the way set layer number. If the layer number is equal to the way set layer number, the cache refill circuit repeats the above steps for additional ways until the block of memory is written to the cache.
Type: Application
Filed: April 9, 2009
Publication date: August 6, 2009
Inventor: Matthias Knoth
-
Patent number: 7562191
Abstract: Microprocessor having a power-saving instruction cache way predictor and instruction replacement scheme. In one embodiment, the processor includes a multi-way set associative cache, a way predictor, a policy counter, and a cache refill circuit. The policy counter provides a signal to the way predictor that determines whether the way predictor operates in a first mode or a second mode. Following a cache miss, the cache refill circuit selects a way of the cache and compares a layer number associated with a dataram field of the way to a way set layer number. The cache refill circuit writes a block of data to the field if the layer number is not equal to the way set layer number. If the layer number is equal to the way set layer number, the cache refill circuit repeats the above steps for additional ways until the block of memory is written to the cache.
Type: Grant
Filed: November 15, 2005
Date of Patent: July 14, 2009
Assignee: MIPS Technologies, Inc.
Inventor: Matthias Knoth
-
Publication number: 20090158078
Abstract: The present invention provides a clock ratio controller for dynamic voltage and frequency scaled digital systems, and applications thereof. In an embodiment, a digital system is provided that includes a first digital circuit that operates at a first rate determined by a first clock signal and a second digital circuit that operates at a second rate determined by a second clock signal. The first digital circuit is coupled to the second digital circuit by a bus that is used for communications between the first digital circuit and the second digital circuit. A clock ratio controller is used to adjust the frequency of the first clock signal and/or the second clock signal in response to a power management signal without causing a loss of synchronization between the first digital circuit and the second digital circuit.
Type: Application
Filed: December 12, 2007
Publication date: June 18, 2009
Applicant: MIPS Technologies, Inc.
Inventor: Matthias Knoth
-
Publication number: 20090157981
Abstract: A multiprocessor system maintains cache coherence among processors in a coherent domain. Within the coherent domain, a first processor can receive a command to perform a cache maintenance operation. The first processor can determine whether the cache maintenance operation is a coherent operation. For coherent operations, the first processor sends a coherent request message for distribution to other processors in the coherent domain and can cancel execution of the cache maintenance operation pending receipt of intervention messages corresponding to the coherent request. The intervention messages can reflect a global ordering of coherence traffic in the multiprocessor system and can include instructions for maintaining a data cache and an instruction cache of the first processor. Cache maintenance operations that are determined to be non-coherent can be executed at the first processor without sending the coherent request.
Type: Application
Filed: December 10, 2008
Publication date: June 18, 2009
Applicant: MIPS Technologies, Inc.
Inventors: Ryan C. Kinter, Darren M. Jones, Matthias Knoth
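A simplified control flow for the coherent/non-coherent split described here is sketched below in C. The helper names (is_coherent_op, send_coherent_request, execute_locally) are assumptions, and the sketch omits message formats and cancellation details: coherent maintenance operations are deferred until the corresponding globally ordered intervention message arrives, while non-coherent ones execute immediately on the first processor.

```c
#include <stdbool.h>

/* Hypothetical flow: coherent cache maintenance operations are broadcast
 * as a coherent request and performed when the intervention comes back;
 * non-coherent operations run locally right away. */
typedef enum { CACHE_OP_INVALIDATE, CACHE_OP_WRITEBACK } cache_op;

extern bool is_coherent_op(cache_op op);          /* policy decision     */
extern void send_coherent_request(cache_op op);   /* to coherence fabric */
extern void execute_locally(cache_op op);         /* act on own caches   */

static void handle_cache_maintenance(cache_op op)
{
    if (is_coherent_op(op)) {
        /* Defer: the operation is performed when the corresponding
         * intervention message arrives, reflecting the global ordering
         * of coherence traffic in the system. */
        send_coherent_request(op);
    } else {
        execute_locally(op);
    }
}

/* Called when the coherence fabric delivers an intervention message,
 * whether it stems from this processor's own request or another's. */
static void on_intervention(cache_op op)
{
    execute_locally(op);   /* maintain the data and instruction caches */
}
```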
-
Publication number: 20090132841
Abstract: The present invention provides processing systems, apparatuses, and methods that access a scratch pad on-demand to reduce power consumption. In an embodiment, an instruction fetch unit initiates an instruction fetch. When a scratch pad is enabled, an instruction is retrieved from the scratch pad in parallel with a translation of a virtual address to a physical address. If the physical address is associated with the scratch pad, the retrieved instruction is provided to an execution unit. Otherwise, the scratch pad is disabled to reduce power consumption and the instruction fetch is re-initiated. When the scratch pad is disabled, an instruction is retrieved from another instruction source, such as an instruction cache, in parallel with the translation of the virtual address to the physical address. If the physical address is associated with the scratch pad, the scratch pad is enabled and the instruction fetch is re-initiated.
Type: Application
Filed: January 22, 2009
Publication date: May 21, 2009
Applicant: MIPS Technologies, Inc.
Inventor: Matthias Knoth
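The on-demand enable/disable loop in this abstract can be pictured with the C sketch below. It is a sequential model of what the hardware does in parallel, and every name in it (translate, is_scratchpad_paddr, scratchpad_fetch, icache_fetch) is hypothetical: the fetch is re-initiated from the other source whenever the translated physical address shows the scratch pad was in the wrong state.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of the on-demand scratch pad fetch path.  Names
 * and signatures are illustrative only. */
extern uint32_t translate(uint32_t vaddr);              /* TLB lookup   */
extern bool     is_scratchpad_paddr(uint32_t paddr);
extern uint32_t scratchpad_fetch(uint32_t vaddr);
extern uint32_t icache_fetch(uint32_t vaddr);

static bool scratchpad_enabled = true;

static uint32_t fetch_instruction(uint32_t vaddr)
{
    for (;;) {
        /* In hardware the fetch from the enabled source proceeds in
         * parallel with address translation; this sequential model
         * simply checks the translation afterwards. */
        uint32_t insn  = scratchpad_enabled ? scratchpad_fetch(vaddr)
                                            : icache_fetch(vaddr);
        uint32_t paddr = translate(vaddr);

        if (is_scratchpad_paddr(paddr) == scratchpad_enabled)
            return insn;                /* fetched from the right source */

        /* Wrong source: toggle the scratch pad and re-initiate fetch. */
        scratchpad_enabled = is_scratchpad_paddr(paddr);
    }
}
```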
-
Patent number: 7496771
Abstract: The present invention provides processing systems, apparatuses, and methods that access a scratch pad on-demand to reduce power consumption. In an embodiment, an instruction fetch unit initiates an instruction fetch. When a scratch pad is enabled, an instruction is retrieved from the scratch pad in parallel with a translation of a virtual address to a physical address. If the physical address is associated with the scratch pad, the retrieved instruction is provided to an execution unit. Otherwise, the scratch pad is disabled to reduce power consumption and the instruction fetch is re-initiated. When the scratch pad is disabled, an instruction is retrieved from another instruction source, such as an instruction cache, in parallel with the translation of the virtual address to the physical address. If the physical address is associated with the scratch pad, the scratch pad is enabled and the instruction fetch is re-initiated.
Type: Grant
Filed: November 15, 2005
Date of Patent: February 24, 2009
Assignee: MIPS Technologies, Inc.
Inventor: Matthias Knoth
-
Publication number: 20080046653
Abstract: Methods for reducing data cache access power in a processor. In an embodiment, a micro tag array is used to store base address or base register data bits, offset data bits, a carry bit, and way selection data bits associated with cache accesses. When a LOAD or a STORE instruction is fetched, at least a portion of the base address and at least a portion of the offset of the instruction are compared to data stored in the micro tag array. If a micro tag array hit occurs, the micro tag array generates a cache dataram enable signal. This signal activates only the cache dataram that stores the needed data.
Type: Application
Filed: August 18, 2006
Publication date: February 21, 2008
Applicant: MIPS Technologies, Inc.
Inventors: Matthias Knoth, Ryan C. Kinter
-
Publication number: 20080046652
Abstract: Processors and systems having a micro tag array that reduces data cache access power. The processors and systems include a cache that has a plurality of datarams, a processor pipeline register, and a micro tag array. The micro tag array is coupled to the cache and the processor pipeline register. The micro tag array stores base address data bits or base register data bits, offset data bits, a carry bit, and way selection data bits. When a LOAD or a STORE instruction is fetched, at least a portion of the base address and at least a portion of the offset of the instruction are compared to data stored in the micro tag array. If a micro tag array hit occurs, the micro tag array generates a cache dataram enable signal. This signal enables only a single dataram of the cache.
Type: Application
Filed: August 18, 2006
Publication date: February 21, 2008
Applicant: MIPS Technologies, Inc.
Inventors: Matthias Knoth, Ryan C. Kinter
-
Publication number: 20070113057
Abstract: The present invention provides processing systems, apparatuses, and methods that reduce power consumption with the use of a loop buffer. In an embodiment, an instruction fetch unit of a processor initially provides instructions from an instruction cache to an execution unit of the processor. While instructions are provided from the instruction cache to the execution unit, instructions forming a loop are stored in a loop buffer. When a loop stored in the loop buffer is being iterated, the instruction cache is disabled to reduce power consumption and instructions are provided to the execution unit from the loop buffer. When the loop is exited, the instruction cache is re-enabled and instructions are provided to the execution unit from the instruction cache.
Type: Application
Filed: November 15, 2005
Publication date: May 17, 2007
Applicant: MIPS Technologies, Inc.
Inventor: Matthias Knoth
-
Publication number: 20070113050
Abstract: The present invention provides processing systems, apparatuses, and methods that access a scratch pad on-demand to reduce power consumption. In an embodiment, an instruction fetch unit initiates an instruction fetch. When a scratch pad is enabled, an instruction is retrieved from the scratch pad in parallel with a translation of a virtual address to a physical address. If the physical address is associated with the scratch pad, the retrieved instruction is provided to an execution unit. Otherwise, the scratch pad is disabled to reduce power consumption and the instruction fetch is re-initiated. When the scratch pad is disabled, an instruction is retrieved from another instruction source, such as an instruction cache, in parallel with the translation of the virtual address to the physical address. If the physical address is associated with the scratch pad, the scratch pad is enabled and the instruction fetch is re-initiated.
Type: Application
Filed: November 15, 2005
Publication date: May 17, 2007
Applicant: MIPS Technologies, Inc.
Inventor: Matthias Knoth
-
Publication number: 20070113013
Abstract: Microprocessor having a power-saving instruction cache way predictor and instruction replacement scheme. In one embodiment, the processor includes a multi-way set associative cache, a way predictor, a policy counter, and a cache refill circuit. The policy counter provides a signal to the way predictor that determines whether the way predictor operates in a first mode or a second mode. Following a cache miss, the cache refill circuit selects a way of the cache and compares a layer number associated with a dataram field of the way to a way set layer number. The cache refill circuit writes a block of data to the field if the layer number is not equal to the way set layer number. If the layer number is equal to the way set layer number, the cache refill circuit repeats the above steps for additional ways until the block of memory is written to the cache.
Type: Application
Filed: November 15, 2005
Publication date: May 17, 2007
Applicant: MIPS Technologies, Inc.
Inventor: Matthias Knoth
-
Patent number: 7159103
Abstract: A loop instruction, at least one target instruction, and an associated trigger address are cached during loop entry. During each loop iteration, the processor predicts whether the loop will be taken or not-taken in a subsequent iteration. When pre-fetch of the cached loop instruction is subsequently detected (i.e., by comparing the trigger address with the current program counter value), the loop taken/not-taken prediction is used to fetch either loop body instructions (when predicted taken) or fall-through instructions (when predicted not-taken). The cached loop instruction is then executed and the loop taken/not-taken prediction is verified using a dedicated loop execution circuit while a penultimate loop body instruction is executed in the processor execution stage (pipeline). When a previous loop taken prediction is verified, the cached target instruction is executed, and then the fetched loop body instructions are executed.
Type: Grant
Filed: March 24, 2003
Date of Patent: January 2, 2007
Assignee: Infineon Technologies AG
Inventors: Sagheer Ahmad, Matthias Knoth, Roger D. Arnold
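The trigger-address check at the heart of this abstract can be pictured with the brief C sketch below. The entry layout and prefetch_redirect are invented for illustration and cover only the pre-fetch redirection step: a program-counter match against the cached trigger address steers fetch toward the loop body or the fall-through path according to the taken/not-taken prediction, with verification by the dedicated loop execution circuit left out of the model.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical loop cache entry: the loop instruction, its target
 * instruction, and the trigger address at which pre-fetch of the loop
 * instruction is detected. */
typedef struct {
    bool     valid;
    uint32_t trigger_addr;   /* address compared against the PC        */
    uint32_t loop_insn;      /* cached loop instruction                */
    uint32_t target_insn;    /* cached first instruction of loop body  */
    bool     predict_taken;  /* taken/not-taken prediction             */
} loop_cache_entry;

/* Called with the current program counter during pre-fetch.  When the
 * trigger address matches, steer fetch toward the loop body (predicted
 * taken) or the fall-through path (predicted not-taken). */
static uint32_t prefetch_redirect(const loop_cache_entry *e,
                                  uint32_t pc,
                                  uint32_t loop_body_addr,
                                  uint32_t fall_through_addr)
{
    if (e->valid && pc == e->trigger_addr)
        return e->predict_taken ? loop_body_addr : fall_through_addr;
    return pc + 4;   /* default: sequential fetch (4-byte instructions) */
}
```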
-
Publication number: 20040193858
Abstract: A loop instruction, at least one target instruction, and an associated trigger address are cached during loop entry. During each loop iteration, the processor predicts whether the loop will be taken or not-taken in a subsequent iteration. When pre-fetch of the cached loop instruction is subsequently detected (i.e., by comparing the trigger address with the current program counter value), the loop taken/not-taken prediction is used to fetch either loop body instructions (when predicted taken) or fall-through instructions (when predicted not-taken). The cached loop instruction is then executed and the loop taken/not-taken prediction is verified using a dedicated loop execution circuit while a penultimate loop body instruction is executed in the processor execution stage (pipeline). When a previous loop taken prediction is verified, the cached target instruction is executed, and then the fetched loop body instructions are executed.
Type: Application
Filed: March 24, 2003
Publication date: September 30, 2004
Applicant: Infineon Technologies North America Corp.
Inventors: Sagheer Ahmad, Matthias Knoth, Roger D. Arnold