Method and device for synchronizing in a multiprocessor system

A method and a device for synchronization in a multiprocessor system having at least two processors, switchover means being provided which make it possible to switch between at least two operating modes, the device being designed in such a way that a synchronization is performed using a stop signal which stops a processor running ahead in order to synchronize it with the at least one second processor.

Description
FIELD OF THE INVENTION

In technical applications such as in the automobile industry or in the industrial goods industry in particular, i.e., in mechanical engineering and automation, more and more microprocessor-based or computer-based control and regulating systems are being used for applications critical with regard to safety. Dual computer systems or dual processor systems (dual cores) are nowadays widely used computer systems for applications critical with regard to safety, in particular in vehicles, such as antilock systems, electronic stability programs (ESP), X-by-wire systems such as drive-by-wire or steer-by-wire or brake-by-wire, etc., or also in other networked systems. To satisfy these high safety requirements in future applications, powerful error detection mechanisms and error handling mechanisms are needed, in particular to counteract transient errors arising, for example, when the size of semiconductor structures of computer systems is reduced. It is relatively difficult to protect the core itself, i.e., the processor. One approach, as mentioned above, is the use of a dual core system for error detection.

BACKGROUND INFORMATION

Such processor units having at least two integrated execution units are known as dual core or multicore architectures. Such dual core or multicore architectures are proposed according to the related art mainly for two reasons.

First, they may contribute to an enhanced performance in that the two execution units or cores are considered and treated as two processing units on a single semiconductor module. In this configuration, the two execution units or cores process different programs or tasks. This allows enhanced performance; for this reason, this configuration is referred to as performance mode.

The second reason for implementing a dual-core or multicore architecture is enhanced reliability in that the two execution units redundantly process the same program. The results of the two execution units or CPUs, i.e., cores, are compared, and an error may be detected from the comparison for agreement. In the following, this configuration is referred to as safety mode or error detection mode.

Thus, currently there are both dual processor and multiprocessor systems that work redundantly to recognize hardware errors (see dual core or master checker systems), and dual processor and multiprocessor systems that process different data on their processors. If these two operating modes are combined in a dual processor or multiprocessor system (for the sake of simplicity we shall only refer to dual processor systems; however, the present invention is also applicable to multiprocessor systems), both processors must contain different data in performance mode and the same data in error detection mode.

The object of the present invention is to provide a unit and a method which delivers the instructions/data to the at least two processors or cores redundantly or differently, depending on the mode, and divides up the memory access rights which enable the synchronization and/or desynchronization of both processors or cores in the event of a mode change, in particular in the performance mode.

Such a unit is not yet known. It makes it possible to operate a dual processor system effectively in such a way that switchover during operation is possible in both safety and performance modes. We shall therefore refer to processors, which, however, also includes the concept of cores or execution units.

Furthermore, the object of the present invention is to make it possible to synchronize the multiprocessor system. No such method or implementation is known so far. There are multiprocessor systems capable of only one of the two modes, but none that works synchronously, is capable of being switched over, and can compare the data synchronously. Synchronization may take place in an accurately clocked and synchronous manner by the method presented here, but there may also be applications in which such an accurate synchronization is not required. In those cases this method may still be used to achieve “loose” synchronization. Loose synchronization is synchronization in which the two processors process the same data, but the time interval of the processing may fluctuate within a range predefined by the comparator.

SUMMARY OF THE INVENTION

A dual core system has two processors which may process the same task or different tasks. These two processors of the dual-core system may process these tasks synchronously or with a clock pulse offset.

In order to enable this switchover between the two modes in a synchronous system in which output data are to be compared synchronously, these two processors must be synchronizable. In other words, when the processor changes from performance mode (=mode in which they process different tasks and the output data are not compared) to a safety mode (=mode in which both processors process the same task and their outputs are compared in each cycle), it must be possible to synchronize the program runs of the two processors.

The present invention thus also presents a method and a device in which the switchover intent is triggered by a signal. This signal may be generated, for example, by monitoring the instruction bus (checking whether the switchover instruction is being executed), or it may be a control signal of the decoder (for example, triggering an interrupt or writing into a register in the other processor), whereupon the processor jumps to the predefined program address.

In addition, the two processors may advantageously jump to different program points on the basis of an identifier, which is unique to each processor in the multiprocessor system, and thus be desynchronized (important: processor identification via a processor ID bit, conditional jump, reading the processor ID bit from a memory area that is separate for each processor, but with the same address, processor ID bit stored in the internal processor register).

The present invention teaches a method and a device for synchronizing in a multiprocessor system having at least two processors, switchover means being provided through which switchover between at least two modes is possible, the device being designed in such a way that synchronization is performed via a stop signal, which stops a processor running ahead in order to synchronize it with the at least one second processor.

The synchronization may advantageously take place by communicating a switchover intent of at least one processor (for example, triggering an interrupt, writing into a register, . . . in the other processor) and as a result the processor jumping to a predefined program address.

The present invention also advantageously presents a unit for data distribution from at least one data source in a system having at least two processing units, switchover means (ModeSwitch) being provided which make switchover between at least two operating modes of the system possible, the unit being designed in such a way that the data distribution and/or the data source (in particular the instruction memory, data memory, cache) depend(s) on the operating mode. A system having such a unit is also presented.

The first operating mode corresponds to a safety mode in which the two processing units process the same programs and/or data, and comparison means are provided, which compare the states resulting from the processing of the same programs for agreement.

The unit according to the present invention and the method according to the present invention make implementation of both modes possible in a dual-processor system.

If the two processors operate in error detection mode (F mode), the two processors receive the same data/instructions; if they operate in performance mode (P mode), each processor may access the memory. In that case, this unit manages the accesses to the single memory or peripheral present.

In the F mode, the unit receives the data/addresses of a processor (here referred to as “master”) and relays them to the components such as memories, bus, etc. The second processor (here “slave”) intends to access the same device. The data distribution unit receives this request at a second port, but does not relay it to the other components. The data distribution unit transmits the same data to both slave and master and compares the data of the two processors. If they are different, the data distribution unit (here DDU) indicates this via an error signal. Therefore, only the master operates the bus/memory and the slave receives the same data (operating mode as in the case of a dual-core system).
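
By way of illustration, the F-mode behavior of the data distribution unit described above may be sketched as follows. This is a minimal model, not the implementation; the function and signal names are assumptions of this sketch.

```python
# Illustrative model of the data distribution unit (DDU) in error
# detection (F) mode: only the master's request is relayed to memory,
# the slave's request is absorbed at the second port, both processors
# receive the same data, and the two requests are compared for agreement.

def ddu_f_mode(master_request, slave_request, memory):
    """Relay the master's read request, mirror the data to the slave,
    and raise the error signal if master and slave disagree."""
    error = master_request != slave_request   # compare the two accesses
    data = memory.get(master_request)         # only the master reaches memory
    return {"master_data": data, "slave_data": data, "error": error}
```

For example, with a memory `{0x10: 42}`, identical requests to address 0x10 return 42 to both processors with the error signal cleared, while diverging requests raise the error signal.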

In the P mode both processors execute different program portions. The memory accesses are therefore also different.

The DDU therefore receives the request of the processors and returns the results/requested data to the processor that requested them. If both processors intend to access the component at the same time, one processor is set to a wait state until the other one has been served.
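
The P-mode arbitration just described may be sketched as follows, assuming a simple fixed priority for processor 1 on a simultaneous conflict; the names and the single-cycle granularity are assumptions of this sketch.

```python
# Illustrative model of DDU arbitration in performance (P) mode: requests
# to different components are served in parallel; on a simultaneous access
# to the same component, processor 2 is held in a wait state for this turn.

def arbitrate(req1, req2):
    """Return (serve1, serve2, wait2). On a conflict, processor 1 is
    served first and processor 2 receives the wait signal."""
    if req1 is not None and req2 is not None and req1 == req2:
        return True, False, True      # conflict: processor 2 waits
    return req1 is not None, req2 is not None, False
```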

Switchover between the two modes and thus between the different types of operation of the data distribution unit takes place via a control signal, which may be generated by one of the two processors or externally.

If the dual-processor system is operated with a clock pulse offset in the F mode, but not in the P mode, the DDU unit delays the data for the slave accordingly, i.e., stores the master's output data until they may be compared to the slave's output data for error detection.
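
The delay-and-compare behavior may be sketched as a buffer of the master's output stream. For simplicity the sketch uses an integer number of cycles for the offset (the 1.5-clock-pulse case of the description would need half-cycle modeling); all names are assumptions of this sketch.

```python
# Sketch of the DDU delay in F mode with a clock pulse offset: the
# master's output is buffered for `offset` cycles so that it can be
# compared against the slave's output produced that many cycles later.

from collections import deque

def compare_with_offset(master_outputs, slave_outputs, offset):
    """Buffer master outputs by `offset` cycles and compare each with the
    slave output of the same logical step. Returns per-comparison error
    flags (True = mismatch detected)."""
    buffer, errors = deque(), []
    for cycle in range(len(master_outputs)):
        buffer.append(master_outputs[cycle])   # store master output
        if cycle >= offset:                    # slave value now available
            errors.append(buffer.popleft() != slave_outputs[cycle])
    return errors
```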

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a block diagram of a data processing system according to a first embodiment of the present invention.

FIG. 2 shows a flow chart of an operating method executed by the data processing system in FIG. 1.

FIG. 3 shows a second embodiment of a data processing system according to the present invention.

DETAILED DESCRIPTION

The clock pulse offset is elucidated in more detail with reference to FIG. 1.

FIG. 1 shows a dual-core system having a first computer 100, in particular a master computer, and a second computer 101, in particular a slave computer. The entire system is operated at a predefinable clock pulse, i.e., in predefinable clock cycles CLK. The clock pulse is supplied to the computers via clock input CLK1 of computer 100 and clock input CLK2 of computer 101. In this dual-core system, there is also a special feature for error detection in that first computer 100 and second computer 101 operate at a predefinable time offset, in particular a predefinable clock pulse offset. Any desired time period may be defined as the time offset, and likewise any desired number of clock pulses as the clock pulse offset. This may be an offset by an integral number of clock pulses, but also, as shown in this example, an offset by 1.5 clock pulses, first computer 100 working, i.e., being operated here 1.5 clock pulses ahead of second computer 101. This offset may prevent common mode failures from interfering with the computers or processors, i.e., the cores of the dual-core system, in the same way and thus from remaining undetected. In other words, due to the offset, such common mode failures affect the computers at different points in time during the program run and thus have different effects for the two computers, which makes errors detectable. Under certain circumstances, effects of errors of the same type would not be detectable in a comparison without a clock pulse offset; this is avoided by the method according to the present invention. To implement this time or clock pulse offset, 1.5 clock pulses in this particular case of a dual-core system, offset modules 112 through 115 are provided.

To detect the above-mentioned common mode errors, this system is designed to operate at a predefined time offset or clock pulse offset, here of 1.5 clock pulses, i.e., while one of the computers, e.g., computer 100, is directly addressing external components 103 and 104 in particular, second computer 101 is running with a delay of exactly 1.5 clock pulses. To generate the desired 1.5-pulse delay in this case, computer 101 is supplied with the inverted clock signal at clock input CLK2. However, the above-mentioned terminals of the computer, i.e., its data and/or instructions, must therefore also be delayed by the above-mentioned clock pulses, here 1.5 clock pulses in particular; as mentioned previously, offset or delay modules 112 through 115 are provided for this purpose. In addition to the two computers or processors 100 and 101, components 103 and 104 are provided, which are connected to the two computers 100 and 101 via bus 116, having bus lines 116A, 116B, and 116C, and bus 117, having bus lines 117A and 117B. Bus 117 is an instruction bus, 117A being an instruction address bus and 117B being the partial instruction (data) bus. Address bus 117A is connected to computer 100 via an instruction address terminal IA1 (instruction address 1) and to computer 101 via an instruction address terminal IA2 (instruction address 2). The instructions proper are transmitted via partial instruction bus 117B, which is connected to computer 100 via an instruction terminal I1 (instruction 1) and to computer 101 via an instruction terminal I2 (instruction 2). In this instruction bus 117 having 117A and 117B, one component 103, an instruction memory, for example, a safe instruction memory in particular or the like, is connected in between. This component, in particular as an instruction memory, is also operated at clock cycle CLK in this example. In addition, a data bus 116 has a data address bus or data address line 116A and a data bus or data line 116B. 
Data address bus or data address line 116A is connected to computer 100 via a data address terminal DA1 (data address 1) and to computer 101 via a data address terminal DA2 (data address 2). Also data bus or data line 116B is connected to computer 100 via a data terminal DO1 (data out 1) and to computer 101 via a data terminal DO2 (data out 2). Furthermore, data bus 116 has data bus line 116C, which is connected to computer 100 via a data terminal DI1 (data in 1) and to computer 101 via a data terminal DI2 (data in 2). In this data bus 116 having lines 116A, 116B, and 116C, a component 104, a data memory for example, a safe data memory in particular or the like, is connected in between. This component 104 is also supplied with clock cycle CLK.

Components 103 and 104 represent any components that are connected to the computers of the dual-core system via a data bus and/or instruction bus and are able to receive or output erroneous data and/or instructions corresponding to accesses via data and/or instructions of the dual-core system for read and/or write operations. Error identifier generators 105, 106, and 107, which generate an error identifier such as a parity bit, or another error code such as an error correction code (ECC), or the like, are provided for error prevention. For this purpose, appropriate error identifier checking devices 108 and 109 are also provided for checking the particular error identifier, i.e., the parity bit or another error code such as ECC, for example.

In the redundant design in the dual-core system, the data and/or instructions are compared in comparators 110 and 111 as depicted in FIG. 1. However, if there is a time offset, a clock pulse offset in particular, between computers 100 and 101 (caused either by a non-synchronous dual-core system, by synchronization errors in the case of a synchronous dual-core system, or, as in this special example, by a time or clock pulse offset of 1.5 clock pulses provided for error detection), a computer, in this case in particular computer 100, may during this time or clock pulse offset write or read erroneous data and/or instructions to or from components, in particular external components such as memories 103 or 104 in this case, but also with regard to other users, actuators, or sensors. It may thus erroneously perform a write access instead of an intended read access due to this clock pulse offset. These scenarios result, of course, in errors in the entire system, in particular without a clear possibility of identifying which data and/or instructions exactly have been erroneously changed, which also causes recovery problems.

In order to eliminate this problem, a delay unit 102, as shown, is connected into the lines of the data bus and/or into the instruction bus. For the sake of clarity, only connection into the data bus is depicted. Of course, connection into the instruction bus is also possible and conceivable. This delay unit 102 delays the accesses, the memory accesses in particular in this case, so that a possible time offset or clock pulse offset is compensated, in particular in the case of an error detection, for example, via comparators 110 and 111, at least until the error signal is generated in the dual-core system, i.e., the error is detected in the dual-core system. Different variants may be implemented:

Delay of the write and read operations, delay of the write operations only, or, although not preferably, delay of the read operations. A delayed write operation may then be converted into a read operation via a change signal, the error signal in particular, in order to avoid erroneous writing.
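
The variant of converting a delayed write into a read on an error signal may be sketched as follows; the function name and the single-access granularity are assumptions of this sketch.

```python
# Sketch of the delay unit's write handling: a delayed write is committed
# only if the error signal has not fired in the meantime; otherwise the
# access is demoted to a read so that no erroneous value reaches memory.

def commit_access(op, addr, value, memory, error_signal):
    """Commit a delayed access. Returns read data, or None for a
    completed write."""
    if op == "write" and not error_signal:
        memory[addr] = value          # error-free write is committed
        return None
    return memory.get(addr)           # read, or write demoted to read
```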

An exemplary implementation of the data distribution unit (DDU), which preferably has a device for detecting the switchover intent (via I11OpDetect), the mode switch unit, and the Iram and Dram control modules, is explained below with reference to FIG. 2.

I11OpDetect: Switchover between the two modes is detected by the “switch detect” units. The unit is situated between the cache and the processor on the instruction bus and shows whether the I11Op instruction is loaded into the processor. If the instruction is detected, this event is communicated to the mode switch unit. The switch detect unit is provided separately for each processor. The switch detect unit does not have to have an error-tolerant design, since it is present in duplicate, i.e., redundantly. It is also conceivable to design this unit to be error-tolerant and thus without redundancy; however, the redundant design is preferable.

ModeSwitch: Switchover between the two modes is triggered by the “switch detect” unit. If a switchover is to be performed from lock mode to split mode, both switch detect units detect the switchover, since both processors are processing the same program code in the lock mode. The switch detect unit of processor 1 detects this 1.5 clock pulses before the switch detect unit of processor 2. The mode switch unit stops processor 1 for two clock pulses with the aid of the wait signal. Processor 2 is also stopped 1.5 clock pulses later, but only for one-half of a clock pulse, thus being synchronized to the system clock. The status signal is subsequently switched to split for the other components and the two processors continue to operate. For the two processors to execute different tasks, they must diverge in the program code. This takes place via a read access to the processor ID directly after switching over into the split mode. The read processor ID is different for each of the two processors. If a comparison is to be made with a reference processor ID, the corresponding processor may be brought to another program point using a conditional jump instruction. When switching over from split mode to lock mode, this is noticed by one processor before the other. This processor will execute program code containing the switchover instruction. This is now registered by the switch detect unit, which informs the mode switch unit accordingly. The mode switch unit stops the corresponding processor and informs the second one of the synchronization intent via an interrupt. The second processor receives an interrupt and may now execute a software routine to terminate its task. It then jumps to the program point where the switchover instruction is located. Its switch detect unit now also signals the intent to change modes to the mode switch unit. At the next rising system clock edge, the wait signal is deactivated for processor 1 and, 1.5 clock pulses later, for processor 2. Now both processors work synchronously with a clock pulse offset of 1.5 clock pulses.

If the system is in lock mode, both switch detect units must inform the mode switch unit that they intend to switch to the split mode. If the switchover intent is only communicated by one unit, the error is detected by the comparator units, since these continue to receive data from one of the two processors, and these data are different from those of the stopped processor.

If both processors are in the split mode and one does not switch back to the lock mode, this may be detected by an external watchdog. If there is a trigger signal for each processor, the watchdog notices that the waiting processor is no longer sending messages. If there is only one watchdog signal for the processor system, the watchdog may only be triggered in the lock mode. The watchdog would thus detect that no mode switchover has taken place. The mode signal is in the form of a dual-rail signal, where “10” stands for the lock mode and “01” for the split mode. “00” and “11” indicate errors.
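
The dual-rail encoding of the mode signal described above admits a very small decoder; the function name is an assumption of this sketch.

```python
# Decoder for the dual-rail mode signal: "10" = lock mode, "01" = split
# mode, "00" and "11" indicate errors (both rails stuck or inconsistent).

def decode_mode(rail_high, rail_low):
    """Decode a dual-rail mode signal into 'lock', 'split', or 'error'."""
    if rail_high == 1 and rail_low == 0:
        return "lock"
    if rail_high == 0 and rail_low == 1:
        return "split"
    return "error"        # "00" and "11" are invalid code words
```

The value of the dual-rail form is that a single stuck-at fault on either rail always produces one of the two invalid code words and is therefore detectable.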

IramControl: Access to the instruction memory of both processors is controlled via the IRAM control, which must have a reliable design, since it is a single point of failure. It has two finite state machines for each processor: a synchronous finite state machine iram1clkreset and an asynchronous finite state machine readiram1. In the safety-critical mode, the finite state machines of the two processors monitor one another, and in the performance mode they operate separately.

Reloading of the two caches of the processors is controlled by two finite state machines, one synchronous finite state machine iramclkreset and an asynchronous finite state machine readiram. These two finite state machines divide the memory accesses in the split mode. Processor 1 has the higher priority. After an access to the main memory by processor 1, if both processors now intend to access the main memory, processor 2 receives the memory access permission. These two finite state machines are implemented for each processor. In the lock mode, the output signals of the finite state machines are compared in order to detect the occurrence of any error.

The data for updating cache 2 in the lock mode are delayed by 1.5 clock pulses in the IRAM control unit.

In bit 5 of register 0 of the SysControl, the identity of the corresponding core is encoded. In the case of core 1 the bit is 0, and in the case of core 2 it is 1. This register is mirrored in the memory area having the address 65528.
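
A minimal sketch of this identity test (bit 5 of register 0); the function name is an assumption of this sketch.

```python
# Core identity from bit 5 of status register 0: bit value 0 selects
# core 1, bit value 1 selects core 2.

def core_from_register(reg0):
    """Return the core number encoded in bit 5 of register 0."""
    return 2 if (reg0 >> 5) & 1 else 1
```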

In the event of a memory access by core 2, a check is first made to determine in what mode the core is operating. If it is in the lock mode, its memory access is suppressed. This signal is in the form of a common rail signal, since it is critical with regard to safety.

The program counter of processor 1 is delayed by 1.5 clock pulses to enable a comparison with the program counter of processor 2 in the lock mode.

In the split mode, the caches of both processors may be reloaded separately. If a switchover into the lock mode is performed, the two caches are not coherent with respect to one another. This may cause the two processors to diverge and the comparators to thus signal an error. To avoid this, a flag table is constructed in the IRAM control, where it is noted whether a cache line has been written in the lock mode or in the split mode. When the cache line is reloaded in the lock mode, the entry corresponding to the cache line is set at 0, and when it is reloaded in the split mode or when the cache line of a single cache is updated, it is set at 1. If the processor now accesses the memory in the lock mode, a check is performed of whether this cache line was updated in the lock mode, i.e., whether it is identical in the two caches. In the split mode, the processor may always access the cache line, regardless of the status of the Flag_Vector. This table must be present only once, since in the event of an error, the two processors diverge and thus this error is reliably detected by the comparators. Since the access times to the central table are relatively long, this table may also be copied to each cache.
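
The flag table just described may be sketched as follows; class and method names are assumptions of this sketch, and a single central table is modeled.

```python
# Sketch of the Flag_Vector: one flag per cache line, 0 when the line was
# reloaded in lock mode (both caches identical), 1 when it was reloaded in
# split mode or only one cache was updated. In lock mode a line may only
# be used if its flag is 0; in split mode access is always permitted.

class FlagVector:
    def __init__(self, lines):
        self.flags = [1] * lines           # pessimistic initial state

    def reload(self, line, mode):
        """Record the mode in which a cache line was reloaded."""
        self.flags[line] = 0 if mode == "lock" else 1

    def may_access(self, line, mode):
        """In lock mode the line must have been filled in lock mode."""
        return mode == "split" or self.flags[line] == 0
```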

DramControl: The parity is formed in this component for the address, data, and memory control signals of each processor.

There is a process for both processors for locking the memory. This process does not have to have a fail-safe design, since in the lock mode erroneous memory accesses are detected by the comparator and in the split mode no safety-relevant applications are executed. A check is performed here of whether the processor intends to lock the memory for the other processor. The data memory is locked via an access to the memory address $FBFF (=64511). This signal must be applied for one cycle even if a wait instruction is being applied to the processor at the time of the call. The finite state machine for managing the data memory accesses has two main states:

    • processor status lock: Both processors operate in the lock mode. This means that the data memory locking function is not needed. Processor 1 coordinates the memory accesses.
    • processor status split: A data memory access conflict resolution is now necessary, and memory lock must be able to occur.

The split mode state is in turn subdivided into seven states which resolve the access conflicts and are able to lock the data memory for the other processor. When both processors intend to access the memory at the same time, the order in which the states are listed below also defines the priorities.

    • Core1_Lock: Processor 1 has locked the data memory. If processor 2 intends to access the memory in this state, it is stopped by a wait signal until processor 1 releases the data memory again.
    • Core2_Lock: This is the same state as the previous one, except that now processor 2 has locked the data memory and processor 1 is stopped for data memory operations.
    • lock1_wait: The data memory was locked by processor 2 while processor 1 also intended to reserve it for itself. Processor 1 is thus pre-marked for the next memory lock.
    • nex: The same for processor 2. The data memory was locked by processor 1 during the locking attempt. The memory is pre-reserved for processor 2. In the event of a normal memory access without locking, processor 2 may have access before processor 1 if processor 1 had its turn previously.
    • Memory access by processor 1: The memory is not locked in this case. Processor 1 is allowed to access the data memory. If it intends to lock it, it may do so in this state.
    • Memory access by processor 2: Processor 1 did not intend to access the memory in the same clock pulse; therefore, the memory is free for processor 2.
    • No processor intends to access the data memory.
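
The locking and pre-reservation behavior of the states above may be sketched in reduced form as follows. This collapses the seven states into a lock owner plus a reservation; state names, method names, and the exact priority rules are assumptions of this sketch.

```python
# Reduced model of split-mode data memory locking: a core may lock the
# memory; while it is locked the other core's lock attempt fails (it is
# stalled by a wait signal) and the memory is pre-reserved for it, so
# that it obtains the next lock, as in the lock1_wait/nex states.

class MemoryLock:
    def __init__(self):
        self.owner = None          # core currently holding the lock
        self.reserved = None       # core pre-marked for the next lock

    def try_lock(self, core):
        """Attempt to lock the memory; False means the caller waits."""
        if self.owner is None and self.reserved in (None, core):
            self.owner, self.reserved = core, None
            return True
        if self.owner not in (None, core):
            self.reserved = core   # pre-reserve for the losing core
        return False

    def unlock(self, core):
        if self.owner == core:
            self.owner = None
```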

As mentioned previously, the DDU has the switchover intent detector (I11OPDetect), the mode switch unit, and the Iram and Dram control.

The mode switch operation is elucidated below again with reference to FIG. 3.

By way of example, switchover of the two processors is triggered by the I11Op instruction in the program. One precondition is that each processor may be identified unambiguously. Each processor is assigned a digit for this purpose. In this example, one core is assigned the digit 1 and the other the digit 0. This is encoded in the processor status register.

Both processors are stopped here for synchronization by the wait instruction. The clock pulse for the processor to be stopped may also be temporarily stopped (for example, by gating the clock with a control signal, an AND gating with 0 for stop and 1 for continue).

“Switch detect” unit: Switchover between the two modes is detected by the “switch detect” units. The unit is situated between the cache and the processor on the instruction bus and shows whether the I11Op instruction is loaded into the processor. If the instruction is detected, this event is communicated to the mode switch unit. This detection is communicated to the “mode switch” unit via the “core 1 signal” or the “core 2 signal” (see FIG. 2). The “switch detect” unit is provided separately for each processor. The “switch detect” unit does not have to have an error-tolerant design, since it is present in duplicate, i.e., redundantly.

“Mode switch” unit: Switchover between the two modes is triggered by the “switch detect” unit. If a switchover is to be performed from lock mode to split mode, both “switch detect” units detect the switchover, since both processors are processing the same program code in the lock mode. The “switch detect” unit of processor 1 detects this 1.5 clock pulses before the “switch detect” unit of processor 2. The “mode switch” unit stops processor 1 for two clock pulses with the aid of the wait signal. Processor 2 is also stopped 1.5 clock pulses later, but only for one-half of a clock pulse, thus being synchronized to the system clock. The status signal is subsequently switched to split for the other components, and the two processors continue to operate. For the two processors to execute different tasks, they must diverge in the program code. This takes place in that a read access to the processor ID is carried out directly after the switchover into the split mode. This read processor ID is different for each of the two processors. If a comparison is to be made with a reference processor ID, the corresponding processor may be brought to another program point using a conditional jump instruction. An example of the separation of the two processors looks as follows:

In the split process, the address of the status register in which the processor ID is stored is first written into r1:
LDL r1,248
LDH r1,255
Subsequent separation of the two processors via the I11Op instruction:
I11Op
Load the contents of the status register from address r1 into r2:
LDW r2,r1
Test the bit at the fifth position of the status register; if it is 0, it is processor 1, and if it is 1, it is processor 2:
BTEST r2,5
Processor 2 is now brought to another program point via a conditional jump:
JMPI_CT 10

When switching over from split mode to lock mode, this is noticed by one processor, i.e., by one before the other. This processor will execute program code containing the switchover instruction. This is now registered by the “switch detect” unit, which informs the “mode switch” unit accordingly. The mode switch unit stops the corresponding processor and informs the second one of the synchronization intent via an interrupt (“message 1” or “message 2” in FIG. 2). The second processor receives an interrupt and may now execute a software routine to terminate its task. It then also jumps to the program point where the switchover instruction is located. Its “switch detect” unit now also signals the mode change intent to the mode switch unit. At the next rising system clock edge, the wait signal is deactivated for processor 1 and, 1.5 clock pulses later, for processor 2. Now both processors work synchronously with a clock pulse offset of 1.5 clock pulses.

If the system is in lock mode, both “switch detect” units must inform the mode switch unit that they intend to switch to the split mode. If the switchover intent is only communicated by one unit, the error is detected by the comparator units, since these continue to receive data from one of the two processors, and these data are different from those of the stopped processor.

If both processors are in split mode and one does not switch back to lock mode, this may be detected by an external watchdog. If there is a separate trigger signal for each processor, the watchdog notices that the waiting processor is no longer sending messages. If there is only one watchdog signal for the processor system, the watchdog may only be triggered in lock mode; the watchdog would thus detect that no mode switchover has taken place. The mode signal is in the form of a dual-rail signal (referred to as “status” in FIG. 2), where “10” stands for lock mode and “01” for split mode; “00” and “11” indicate errors.
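The dual-rail encoding of the mode signal given above can be captured in a short decoder sketch (a software illustration of the encoding, not the dual-rail logic itself):

```python
# Dual-rail mode signal: "10" = lock mode, "01" = split mode,
# "00" and "11" = error, per the encoding described in the text.

def decode_mode(rail_hi, rail_lo):
    if (rail_hi, rail_lo) == (1, 0):
        return "lock"
    if (rail_hi, rail_lo) == (0, 1):
        return "split"
    return "error"   # "00" and "11" are invalid codewords

assert decode_mode(1, 0) == "lock"
assert decode_mode(0, 1) == "split"
assert decode_mode(0, 0) == "error"
assert decode_mode(1, 1) == "error"
```

The benefit of the dual-rail form is that a single stuck-at fault flips exactly one rail, turning a valid codeword into "00" or "11" and thus making the fault visible.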

Since the switchover instruction is detected at the beginning of the processor pipeline, no jumps are allowed in the pipeline stages after detection. The simplest way to prevent this is to insert two NOP instructions before the IllOp instruction.

As explained previously, the core of the present invention is the general mode of operation of the mode switch procedure (different data attribution and thus also selection of operating mode depending on the mode) and especially the synchronization of the processors.

The object recited in the preamble is also achieved by the special implementation shown.

Claims

1-32. (canceled)

33. A method for synchronization in a multiprocessor system that includes at least two processors, comprising:

providing a switchover arrangement that makes a switchover between at least two operating modes possible; and
performing a synchronization using a stop signal which stops a processor running ahead in order to synchronize it with the at least one second processor.

34. The method for synchronization as recited in claim 33, wherein the synchronization is triggered by a synchronization intent, the synchronization intent being able to originate from one or more processors.

35. The method for synchronization as recited in claim 33, wherein the wait signal of one processor is used as the stop signal.

36. The method for synchronization as recited in claim 33, wherein an interrupt signal is triggered as the stop signal.

37. The method for synchronization as recited in claim 33, wherein the processor is stopped for synchronization by skipping clock cycles.

38. The method for synchronization as recited in claim 33, wherein the processor is stopped for synchronization by switching off a clock signal.

39. The method for synchronization as recited in claim 33, wherein the switchover is represented by a switchover intent, and the switchover intent is triggered by a signal.

40. The method for synchronization as recited in claim 33, wherein the switchover is represented by a switchover intent, the switchover only taking place if the switchover intent is requested by two or more processors.

41. The method for synchronization as recited in claim 33, wherein the switchover is triggered by a switchover intent, the operating mode of the multiprocessor system being changed after a switchover intent and the switchover intent is indicated by a signal.

42. The method for synchronization as recited in claim 33, wherein the instantaneous operating mode is indicated by a mode signal.

43. The method for synchronization as recited in claim 42, wherein the mode signal has the form of a coded signal corresponding to a dual rail signal.

44. The method for synchronization as recited in claim 42, wherein the mode signal is generated redundantly by two finite state machines or by a dual rail logic.

45. The method for synchronization as recited in claim 33, wherein a synchronization intent is supplied to a central unit, which relays the synchronization intent to the at least one other processor.

46. The method for synchronization as recited in claim 33, wherein synchronization takes place by communicating the synchronization intent, which causes the processor to jump to a predefined program address.

47. The method for synchronization as recited in claim 33, wherein the one processor is stopped until the other processor has processed a task and has also arrived at the same program point.

48. The method for synchronization as recited in claim 33, wherein after a synchronization, both processors jump, on the basis of an identifier (ID) which is unique to each processor in the multiprocessor system, to different program points and are thus desynchronized.

49. A device for synchronization in a multiprocessor system including at least two processors, comprising:

an arrangement for switching between at least two operating modes, a synchronization being performed using a stop signal which stops a processor running ahead in order to synchronize it with the at least one second processor.

50. The device for synchronization as recited in claim 49, wherein the first operating mode corresponds to a safety mode in which the two processors execute the same programs, and comparison means are provided which compare the states resulting from the execution of the same programs for agreement.

51. The device for synchronization as recited in claim 49, wherein the device is designed in such a way that the wait signal of one processor is used as the stop signal.

52. The device for synchronization as recited in claim 49, wherein the device is designed in such a way that an interrupt signal is triggered as the stop signal.

53. The device for synchronization as recited in claim 49, wherein the device is designed in such a way that the processor is stopped for synchronization by skipping clock cycles.

54. The device for synchronization as recited in claim 49, wherein the device is designed in such a way that the processor is stopped for synchronization by switching off a clock signal.

55. The device for synchronization as recited in claim 49, wherein the device is designed in such a way that the instantaneous operating mode is indicated by a mode signal.

56. The device for synchronization as recited in claim 55, wherein the device is designed in such a way that the mode signal has the form of a coded signal corresponding to a dual rail signal.

57. The device for synchronization as recited in claim 55, wherein the device is designed in such a way that the mode signal is generated redundantly by two finite state machines or by a dual rail logic.

58. The device for synchronization as recited in claim 49, further comprising:

a central unit, wherein the device is designed in such a way that a synchronization intent is supplied to a central unit, which relays the synchronization intent to the at least one other processor.

59. The device for synchronization as recited in claim 49, wherein the device is designed in such a way that the one processor is stopped until the at least one second processor has processed a task and has also arrived at the same program point.

60. The device for synchronization as recited in claim 49, wherein the device is designed in such a way that after a synchronization, both processors jump, on the basis of an identifier which is unique to each processor in the multiprocessor system, to different program points and are thus desynchronized.

61. The device for synchronization as recited in claim 60, further comprising a processor register, wherein the identifier is stored in the processor register.

62. The device for synchronization as recited in claim 60, wherein the identifier is stored externally to the processors in a central unit.

63. The device for synchronization as recited in claim 49, wherein the arrangement for switching has an error-tolerant design by duplicating the finite state machines or by implementation in a dual rail logic.

64. The device for synchronization as recited in claim 49, wherein the device is in a multiprocessor system.

Patent History
Publication number: 20090164826
Type: Application
Filed: Oct 25, 2005
Publication Date: Jun 25, 2009
Inventor: Thomas Kottke (Ehningen)
Application Number: 11/666,413
Classifications
Current U.S. Class: Synchronization Of Plural Processors (713/375)
International Classification: G06F 1/12 (20060101);