System and method for fetching multiple groups of instructions from an instruction cache in a RISC processor system for execution during separate cycles

- Silicon Graphics, Inc.

A system and method for fetching instructions for use in a RISC processor having an on-chip instruction cache is disclosed. The system accesses a first group of instructions having a first set of ordered addresses and a second group of instructions having a second set of ordered addresses, simultaneously, from an instruction cache. The first group of instructions is to be executed during a first cycle and the second group of instructions is to be executed during a second cycle. The technique transfers the first group of instructions to an instruction decoder for execution during the first cycle and transfers the second group of instructions to the instruction decoder for execution during the second cycle. The technique reduces the power consumed by memory modules and support circuitry of the instruction cache by requiring instruction cache accesses only every other cycle.
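The fetch scheme described in the abstract can be modeled in software. The following Python sketch (all class and variable names are hypothetical; the patent discloses no code) simulates a cache that returns two instruction groups per access, with a staging register holding the second group for the following cycle, so that the cache is accessed only on every other cycle:

```python
class TwoCycleFetcher:
    """Model of a fetch unit that reads two instruction groups in one
    cache access and issues them over two consecutive cycles."""

    def __init__(self, cache, group_size=2):
        self.cache = cache          # list of instructions, indexed by address
        self.group_size = group_size
        self.staging = None         # staging register for the second group
        self.accesses = 0           # count of cache accesses (power proxy)

    def fetch(self, pc):
        """Return the instruction group to decode during this cycle."""
        if self.staging is not None:
            # Second cycle: drain the staging register, no cache access.
            group, self.staging = self.staging, None
            return group
        # First cycle: one wide access retrieves both groups at once.
        self.accesses += 1
        wide = self.cache[pc : pc + 2 * self.group_size]
        self.staging = wide[self.group_size :]
        return wide[: self.group_size]
```

Under this model, issuing eight sequential instructions requires only two cache accesses rather than four, which is the source of the claimed power reduction in the memory modules and support circuitry.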


Claims

1. An instruction fetch system for use in a RISC processor that executes fixed length instructions every cycle and has an on-chip instruction cache, the instruction fetch system being configured to reduce power consumption of the on-chip instruction cache, the instruction fetch system comprising:

means for accessing a first group of instructions having a first set of ordered addresses and a second group of instructions having a second set of ordered addresses, simultaneously, from an instruction cache, wherein said first group of instructions is to be executed during a first cycle and said second group of instructions is to be executed during a second cycle, wherein said second set of ordered addresses is addressed in sequence from said first set of ordered addresses;
means for transferring said first group of instructions to an instruction decoder for execution during said first cycle; and
means for transferring said second group of instructions to said instruction decoder for execution during said second cycle,
whereby power consumed by memory modules and support circuitry of said instruction cache is reduced by requiring instruction cache accesses only every other cycle.

2. The instruction fetch system of claim 1, further comprising:

a program counter for generating said sequence of ordered instruction addresses during successive cycles.

3. The instruction fetch system of claim 2, wherein said instruction decoder comprises:

means for generating target address information when a program flow changing instruction is decoded.

4. The instruction fetch system of claim 3, wherein said program counter further comprises:

means for generating a target address during a cycle when said target address information is generated.

5. The instruction fetch system of claim 4, further comprising:

means for accessing said instruction group when a generated target address is either said first address or one of said following addresses; and
means for transferring the instruction addressed by said generated target address to said instruction decoder during the cycle when said target address is generated.
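Claims 3 through 5 describe generating a target address when a program flow changing instruction is decoded, and reusing the already-accessed group when that target falls within it. A minimal sketch of that check (a hypothetical helper, not taken from the patent) could look like:

```python
def redirect(target, group_base, group):
    """Handle a branch target address.

    If the target falls inside the instruction group already read from
    the cache, select that instruction directly, avoiding a new cache
    access; otherwise return None, signaling that the caller must
    perform a fresh cache access at the target address."""
    if group_base <= target < group_base + len(group):
        return group[target - group_base]   # hit within the fetched group
    return None                             # target outside the group
```

This models the power-saving case in claim 5: a taken branch whose target is already resident in the fetched group does not trigger an extra cache access.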

6. The instruction fetch system of claim 1, further comprising:

a staging register for storing said first instruction and said following instructions during said first cycle.

7. The instruction fetch system of claim 1, wherein said means for transferring comprises:

means for selecting one of said instructions from said instruction group based on one of said instruction addresses when said one of said instruction addresses is generated.
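The selection means of claim 7 can be read as a multiplexer steered by the low-order bits of the generated instruction address. A hypothetical sketch (the group size of 4 is an assumption for illustration, not stated in the claims):

```python
def select_instruction(group, address, group_size=4):
    """Multiplexer model: pick one instruction from a fetched group
    using the low-order bits of the instruction address.

    group_size must be a power of two so the low-order bits form a
    valid index into the group."""
    index = address & (group_size - 1)   # low-order address bits
    return group[index]
```

For example, with a group of four instructions, address 6 selects the instruction at offset 2 within the group.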

8. In a RISC processor that executes fixed length instructions every cycle and has an on-chip instruction cache, a method for fetching instructions so as to reduce power consumption of the on-chip instruction cache, the method comprising the steps of:

accessing a first group of instructions having a first set of ordered addresses and a second group of instructions having a second set of ordered addresses, simultaneously, from an instruction cache, wherein said first group of instructions is to be executed during a first cycle and said second group of instructions is to be executed during a second cycle, wherein said second set of ordered addresses is addressed in sequence from said first set of ordered addresses;
transferring said first group of instructions to an instruction decoder for execution during said first cycle; and
transferring said second group of instructions to said instruction decoder for execution during said second cycle when a corresponding one of said predetermined number of following addresses is generated,
whereby power consumed by memory modules and support circuitry of said instruction cache is reduced by requiring instruction cache accesses only every other cycle.

9. The method of claim 8, further comprising the step of:

generating said sequence of ordered instruction addresses during successive cycles.

10. The method of claim 9, further comprising the step of:

generating target address information when a program flow changing instruction is decoded.

11. The method of claim 10, further comprising the step of:

generating a target address during a cycle when said target address information is generated.

12. The method of claim 11, further comprising the steps of:

accessing said instruction group when a generated target address is either said first address or one of said following addresses; and
transferring the instruction addressed by said generated target address to said instruction decoder during the cycle when said target address is generated.

13. The method of claim 8, further comprising the step of:

storing said first instruction and said following instructions during said first cycle.

14. The method of claim 8, further comprising the step of:

selecting one of said instructions from said instruction group based on a corresponding one of said instruction addresses when said one of said instruction addresses is generated.
References Cited
U.S. Patent Documents
3949376 April 6, 1976 Ball et al.
4695981 September 22, 1987 Sikich et al.
4783767 November 8, 1988 Hamada
4817057 March 28, 1989 Kondo et al.
4918662 April 17, 1990 Kondo
4926354 May 15, 1990 Roy
4931994 June 5, 1990 Matsui et al.
5150330 September 22, 1992 Hag
5170375 December 8, 1992 Mattausch et al.
5263002 November 16, 1993 Suzuki et al.
5285323 February 8, 1994 Hetherington et al.
5293332 March 8, 1994 Shirai
5293343 March 8, 1994 Raab et al.
5329492 July 12, 1994 Mochizuki
5335330 August 2, 1994 Inoue
5367655 November 22, 1994 Grossman et al.
5388072 February 7, 1995 Matick et al.
Patent History
Patent number: 5870574
Type: Grant
Filed: Jul 24, 1996
Date of Patent: Feb 9, 1999
Assignee: Silicon Graphics, Inc. (Mountain View, CA)
Inventors: Andre Kowalczyk (San Jose, CA), Givargis G. Kaldani (Los Gatos, CA)
Primary Examiner: Jack A. Lane
Law Firm: Sterne, Kessler, Goldstein & Fox P.L.L.C.
Application Number: 8/686,363
Classifications
Current U.S. Class: 395/382; Instruction Data Cache (711/125); Addressing Cache Memories (711/3)
International Classification: G06F 9/30; G06F 12/00