/* ** Text version of NASA/GSFC memo describing RTEMS Mongoose V/MIPS R3000 BSP error and fix */

Memorandum

To:   Dave Leucht, ST5 Software Lead
CC:   Kequan Luu, Asst. Branch Chief, Code 582
From: Art Ferrer, Code 582
Date: 3/29/2004
Re:   ST5 Floating Point Error Explanation and Fix

Purpose

The purpose of this memo is to report on the ST5 floating point error investigation, briefly explain the error, and propose a resolution.

Background

Intermittent Invalid Floating Point Operation exceptions have been observed in the ST5 flight software. These errors were reported in Build 1.4 and presumably fixed with modifications to time archiving and processing. However, the errors persisted and were characterized by occurring at the same instruction in the same task. This led to the conclusion that the error was limited to a single software task. A modification to the suspected software task was released in Build 1.8. An extended 48-hour test was performed on the Build 1.8 flight software, duplicating the conditions under which the error was observed to occur. The test completed without error. I was assigned the task of characterizing the error and determining its cause.

Error Characterization

From one logic analyzer trace of the floating point error, we can determine that the MP task is executing and is in the Check_Time_Gap routine. Lines 16-17 below were executed without error, and in the trace we are in the process of executing Lines 19-21 when the DSS interrupt occurs and the processor context is changed.

 1  procedure Check_Time_Gap
 2    (State             : in out State_Type;
 3     Vector            : in     Mag.Hardware.Sample_Type;
 4     Packet_Label      : in     Mag.Telemetry.Packet_Label_Type;
 5     Packet_Terminated :    out Boolean)
 6  -- Check if the time stamp on Vector is within tolerance from
 7  -- the last time stamp. If not, terminate the current packet.
 8  is
 9     use Midex.OS_Utils, Interfaces;
10     use type Mag.Telemetry.Packet_Label_Type;
11
12     -- The hardware clock is bizarre; there is no easy way to get an
13     -- absolute flat time from it. So we just use the subseconds,
14     -- which can be fooled by just the right time gap.
15
16     Flat_Delta_Time : constant Flat_Time_Type :=
17       Flat_Time_Type (Table.Misc_Param.Vector_Delta_Time * IEEE_Float_64 (Flat_Ticks_Per_Second));
18
19     Flat_Delta_Time_Threshold : constant Flat_Time_Type :=
20       Flat_Time_Type (Table.Misc_Param.Vector_Delta_Time_Threshold *
21         Interfaces.IEEE_Float_64 (Flat_Ticks_Per_Second));
22
23     Computed_Period : Flat_Time_Type;
24     Jitter          : Flat_Time_Type;

A partial assembly language listing is given below, indicating the flow of control prior to the DSS ISR interrupt and the subsequent task context switches.

    Address     Data
 1) 80722400:   00000000   nop
 2) 80722404:   4620103e   c.le.d $f2,$f0
    (The c.le.d instruction compares $f2 to $f0 and sets the condition bit in the
    FP status register if $f2 <= $f0.)
 3) 80722408:   00000000   nop
 4) 8072240c:   4501000e   bc1t ffffffff80722448 <__fixunsdfdi+0xa8>
    (The bc1t instruction branches to address 0x80722448 if the condition bit in
    the FP status register is set.)
 5) 80722410:   00000000   nop
 6) 80722414:   4442f800   cfc1 $v0,$31

The branch does NOT occur, so we can assume that the condition bit is zero. At this point in the logic analyzer trace, we get a DSS interrupt and perform a number of context switches, which include:

1) DSS ISR
2) DSS high priority task
3) AC task
4) DS task
5) Clock ISR

When we return to the MP context, execution starts at address 0x80722414. Due to the pipeline architecture, the bc1t instruction is executed again.
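For reference, the condition bit tested by bc1t is held in the floating point control/status register (FCSR), which is coprocessor 1 control register $31 and is read and written with cfc1/ctc1 (on the R3000-class FPU the condition bit is bit 23). The sketch below is illustrative only and is not part of the memo or of the RTEMS sources reproduced later: under those assumptions, it shows how the register can be captured with the rest of the FP context and reinstated on resume so that a re-executed bc1t sees the same condition bit. The FPCS_SAVE offset and the use of t0/a1 are hypothetical placeholders; the actual fix appears in _CPU_Context_save_fp/_CPU_Context_restore_fp in the corrected cpu_asm.S included below.

    /* Illustrative only: preserve the FP control/status register (FCSR)       */
    /* across a preemption. a1 is assumed to point at the task's saved FP      */
    /* context, and FPCS_SAVE is a hypothetical offset within it.              */

    cfc1  t0, $31               /* read FCSR, including the condition bit      */
    nop                         /* let the coprocessor move complete           */
    sw    t0, FPCS_SAVE(a1)     /* save it alongside the FP data registers     */

    /* ... preempting ISRs and higher priority tasks run here ...              */

    lw    t0, FPCS_SAVE(a1)     /* reload the saved FCSR value                 */
    nop
    ctc1  t0, $31               /* restore FCSR so a re-executed bc1t tests    */
                                /* the same condition bit as before preemption */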
This time, however, we get a different result.

    Address     Data
 1) 8072240c:   4501000e   bc1t ffffffff80722448 <__fixunsdfdi+0xa8>
 2) 80722410:   00000000   nop
 3) 80722448:   46220001   sub.d $f0,$f0,$f2

The program branches to 0x80722448, "sub.d $f0,$f0,$f2", and continues to perform calculations that result in a genuine FP Invalid Operation error and initiate a warm reset. The reason we branch to 0x80722448 after the interrupt is that the bc1t instruction relies on the condition bit in the FP status register, which has NOT BEEN RESTORED to its previous MP task context.

Error Resolution

A resolution to the error will require making the following modifications to the RTEMS Board Support Package and performing the tests recommended below:

1) Add the FP status register to the Context_Control_fp structure in cpu.h. (pathname: rtems-ss/c/src/exec/score/cpu/mips/rtems/score/cpu.h)
2) Modify the _CPU_Context_save_fp function in cpu_asm.S to save the FP status register. (pathname: rtems/rtems-ss/c/src/exec/score/cpu/mips/cpu_asm.S)
3) Modify the _CPU_Context_restore_fp function in cpu_asm.S to restore the FP status register. (pathname: rtems-ss/c/src/exec/score/cpu/mips/cpu_asm.S)
4) Rebuild the RTEMS library.
5) Perform a logic analyzer trace on the ST5 breadboard running a flight software version with the new RTEMS library, FSW Build 1.6, and processor caches disabled, duplicating the conditions where the FP error was observed to occur.
6) Verify from the logic analyzer trace that the FP status register is saved and restored on task context switches.
7) Perform an extended test for a minimum of 48 hours under the same conditions as in 5) above, but with processor caches enabled. (The FP error was observed to occur at least once every 24 hours.)

Summary

The FP Invalid Operation exception has been occurring because of an error in the RTEMS Board Support Package (BSP) for the Mongoose V. Specifically, the Floating Point Status register is neither saved nor restored with the floating point context during task context switches. The error is infrequent because the trigger condition requires that a low priority task (while executing a vulnerable section of code) be preempted by a higher priority task. When the low priority task resumes execution under these conditions, the error will occur. The error in the RTEMS BSP was not found during BSP development because a rigorous task preemption and interrupt environment was not available; otherwise, the RTEMS BSP did pass all floating point performance tests. The RTEMS custodian company has been informed of this error and will be sent BSP upgrades after the fix has been verified.

Conclusions

From the nature of the conditions that cause the FP error, and the fact that full FP context save/restore is not implemented correctly in the RTEMS BSP, there is a high probability that the FP Invalid Operation exception will occur again in the Build 1.8 release. I recommend a new ST5 flight software build release which includes modifications to the RTEMS BSP.

/* -------------------------------- Start of file cpu_asm.S with bug ---------------------------------------------- */ /* * This file contains the basic algorithms for all assembly code used * in an specific CPU port of RTEMS. These algorithms must be implemented * in assembly language * * History: * Baseline: no_cpu * 1996: Ported to MIPS64ORION by Craig Lebakken * COPYRIGHT (c) 1996 by Transition Networks Inc.
* To anyone who acknowledges that the modifications to this file to * port it to the MIPS64ORION are provided "AS IS" without any * express or implied warranty: * permission to use, copy, modify, and distribute this file * for any purpose is hereby granted without fee, provided that * the above copyright notice and this notice appears in all * copies, and that the name of Transition Networks not be used in * advertising or publicity pertaining to distribution of the * software without specific, written prior permission. Transition * Networks makes no representations about the suitability * of this software for any purpose. * 2000: Reworked by Alan Cudmore to become * the baseline of the more general MIPS port. * 2001: Joel Sherrill continued this rework, * rewriting as much as possible in C and added the JMR3904 BSP * so testing could be performed on a simulator. * * COPYRIGHT (c) 1989-2000. * On-Line Applications Research Corporation (OAR). * * The license and distribution terms for this file may be * found in the file LICENSE in this distribution or at * http://www.OARcorp.com/rtems/license.html. * * $Id: cpu_asm.S,v 1.1 2002/03/20 17:27:41 st5 Exp $ */ #include #include "iregdef.h" #include "idtcpu.h" /* enable debugging shadow writes to misc ram, this is a vestigal * Mongoose-ism debug tool- but may be handy in the future so we * left it in... */ #define INSTRUMENT #define SAVE_ALL_REGISTERS /* Ifdefs prevent the duplication of code for MIPS ISA Level 3 ( R4xxx ) * and MIPS ISA Level 1 (R3xxx). */ #if __mips == 3 /* 64 bit register operations */ #define NOP #define ADD dadd #define STREG sd #define LDREG ld #define MFCO dmfc0 #define MTCO dmtc0 #define ADDU addu #define ADDIU addiu #define R_SZ 8 #define F_SZ 8 #define SZ_INT 8 #define SZ_INT_POW2 3 /* XXX if we don't always want 64 bit register ops, then another ifdef */ #elif __mips == 1 /* 32 bit register operations*/ #define NOP nop #define ADD add #define STREG sw #define LDREG lw #define MFCO mfc0 #define MTCO mtc0 #define ADDU add #define ADDIU addi #define R_SZ 4 #define F_SZ 4 #define SZ_INT 4 #define SZ_INT_POW2 2 #else #error "mips assembly: what size registers do I deal with?" 
#endif #define ISR_VEC_SIZE 4 #define EXCP_STACK_SIZE (NREGS*R_SZ) #ifdef __GNUC__ #define ASM_EXTERN(x,size) .extern x,size #else #define ASM_EXTERN(x,size) #endif /* NOTE: these constants must match the Context_Control structure in cpu.h */ #define S0_OFFSET 0 #define S1_OFFSET 1 #define S2_OFFSET 2 #define S3_OFFSET 3 #define S4_OFFSET 4 #define S5_OFFSET 5 #define S6_OFFSET 6 #define S7_OFFSET 7 #define SP_OFFSET 8 #define FP_OFFSET 9 #define RA_OFFSET 10 #define C0_SR_OFFSET 11 /* #define C0_EPC_OFFSET 12 */ /* NOTE: these constants must match the Context_Control_fp structure in cpu.h */ #define FP0_OFFSET 0 #define FP1_OFFSET 1 #define FP2_OFFSET 2 #define FP3_OFFSET 3 #define FP4_OFFSET 4 #define FP5_OFFSET 5 #define FP6_OFFSET 6 #define FP7_OFFSET 7 #define FP8_OFFSET 8 #define FP9_OFFSET 9 #define FP10_OFFSET 10 #define FP11_OFFSET 11 #define FP12_OFFSET 12 #define FP13_OFFSET 13 #define FP14_OFFSET 14 #define FP15_OFFSET 15 #define FP16_OFFSET 16 #define FP17_OFFSET 17 #define FP18_OFFSET 18 #define FP19_OFFSET 19 #define FP20_OFFSET 20 #define FP21_OFFSET 21 #define FP22_OFFSET 22 #define FP23_OFFSET 23 #define FP24_OFFSET 24 #define FP25_OFFSET 25 #define FP26_OFFSET 26 #define FP27_OFFSET 27 #define FP28_OFFSET 28 #define FP29_OFFSET 29 #define FP30_OFFSET 30 #define FP31_OFFSET 31 /* * _CPU_Context_save_fp_context * * This routine is responsible for saving the FP context * at *fp_context_ptr. If the point to load the FP context * from is changed then the pointer is modified by this routine. * * Sometimes a macro implementation of this is in cpu.h which dereferences * the ** and a similarly named routine in this file is passed something * like a (Context_Control_fp *). The general rule on making this decision * is to avoid writing assembly language. */ /* void _CPU_Context_save_fp( * void **fp_context_ptr * ); */ #if ( CPU_HARDWARE_FP == FALSE ) FRAME(_CPU_Context_save_fp,sp,0,ra) .set noat ld a1,(a0) NOP swc1 $f0,FP0_OFFSET*F_SZ(a1) swc1 $f1,FP1_OFFSET*F_SZ(a1) swc1 $f2,FP2_OFFSET*F_SZ(a1) swc1 $f3,FP3_OFFSET*F_SZ(a1) swc1 $f4,FP4_OFFSET*F_SZ(a1) swc1 $f5,FP5_OFFSET*F_SZ(a1) swc1 $f6,FP6_OFFSET*F_SZ(a1) swc1 $f7,FP7_OFFSET*F_SZ(a1) swc1 $f8,FP8_OFFSET*F_SZ(a1) swc1 $f9,FP9_OFFSET*F_SZ(a1) swc1 $f10,FP10_OFFSET*F_SZ(a1) swc1 $f11,FP11_OFFSET*F_SZ(a1) swc1 $f12,FP12_OFFSET*F_SZ(a1) swc1 $f13,FP13_OFFSET*F_SZ(a1) swc1 $f14,FP14_OFFSET*F_SZ(a1) swc1 $f15,FP15_OFFSET*F_SZ(a1) swc1 $f16,FP16_OFFSET*F_SZ(a1) swc1 $f17,FP17_OFFSET*F_SZ(a1) swc1 $f18,FP18_OFFSET*F_SZ(a1) swc1 $f19,FP19_OFFSET*F_SZ(a1) swc1 $f20,FP20_OFFSET*F_SZ(a1) swc1 $f21,FP21_OFFSET*F_SZ(a1) swc1 $f22,FP22_OFFSET*F_SZ(a1) swc1 $f23,FP23_OFFSET*F_SZ(a1) swc1 $f24,FP24_OFFSET*F_SZ(a1) swc1 $f25,FP25_OFFSET*F_SZ(a1) swc1 $f26,FP26_OFFSET*F_SZ(a1) swc1 $f27,FP27_OFFSET*F_SZ(a1) swc1 $f28,FP28_OFFSET*F_SZ(a1) swc1 $f29,FP29_OFFSET*F_SZ(a1) swc1 $f30,FP30_OFFSET*F_SZ(a1) swc1 $f31,FP31_OFFSET*F_SZ(a1) j ra nop .set at ENDFRAME(_CPU_Context_save_fp) #endif /* * _CPU_Context_restore_fp_context * * This routine is responsible for restoring the FP context * at *fp_context_ptr. If the point to load the FP context * from is changed then the pointer is modified by this routine. * * Sometimes a macro implementation of this is in cpu.h which dereferences * the ** and a similarly named routine in this file is passed something * like a (Context_Control_fp *). The general rule on making this decision * is to avoid writing assembly language. 
*/ /* void _CPU_Context_restore_fp( * void **fp_context_ptr * ) */ #if ( CPU_HARDWARE_FP == FALSE ) FRAME(_CPU_Context_restore_fp,sp,0,ra) .set noat ld a1,(a0) NOP lwc1 $f0,FP0_OFFSET*4(a1) lwc1 $f1,FP1_OFFSET*4(a1) lwc1 $f2,FP2_OFFSET*4(a1) lwc1 $f3,FP3_OFFSET*4(a1) lwc1 $f4,FP4_OFFSET*4(a1) lwc1 $f5,FP5_OFFSET*4(a1) lwc1 $f6,FP6_OFFSET*4(a1) lwc1 $f7,FP7_OFFSET*4(a1) lwc1 $f8,FP8_OFFSET*4(a1) lwc1 $f9,FP9_OFFSET*4(a1) lwc1 $f10,FP10_OFFSET*4(a1) lwc1 $f11,FP11_OFFSET*4(a1) lwc1 $f12,FP12_OFFSET*4(a1) lwc1 $f13,FP13_OFFSET*4(a1) lwc1 $f14,FP14_OFFSET*4(a1) lwc1 $f15,FP15_OFFSET*4(a1) lwc1 $f16,FP16_OFFSET*4(a1) lwc1 $f17,FP17_OFFSET*4(a1) lwc1 $f18,FP18_OFFSET*4(a1) lwc1 $f19,FP19_OFFSET*4(a1) lwc1 $f20,FP20_OFFSET*4(a1) lwc1 $f21,FP21_OFFSET*4(a1) lwc1 $f22,FP22_OFFSET*4(a1) lwc1 $f23,FP23_OFFSET*4(a1) lwc1 $f24,FP24_OFFSET*4(a1) lwc1 $f25,FP25_OFFSET*4(a1) lwc1 $f26,FP26_OFFSET*4(a1) lwc1 $f27,FP27_OFFSET*4(a1) lwc1 $f28,FP28_OFFSET*4(a1) lwc1 $f29,FP29_OFFSET*4(a1) lwc1 $f30,FP30_OFFSET*4(a1) lwc1 $f31,FP31_OFFSET*4(a1) j ra nop .set at ENDFRAME(_CPU_Context_restore_fp) #endif /* _CPU_Context_switch * * This routine performs a normal non-FP context switch. */ /* void _CPU_Context_switch( * Context_Control *run, * Context_Control *heir * ) */ FRAME(_CPU_Context_switch,sp,0,ra) MFC0 t0,C0_SR li t1,~(SR_INTERRUPT_ENABLE_BITS) STREG t0,C0_SR_OFFSET*4(a0) /* save status register */ and t0,t1 MTC0 t0,C0_SR /* first disable ie bit (recommended) */ #if __mips == 3 ori t0,SR_EXL|SR_IE /* enable exception level to disable interrupts */ MTC0 t0,C0_SR #endif STREG ra,RA_OFFSET*R_SZ(a0) /* save current context */ STREG sp,SP_OFFSET*R_SZ(a0) STREG fp,FP_OFFSET*R_SZ(a0) STREG s0,S0_OFFSET*R_SZ(a0) STREG s1,S1_OFFSET*R_SZ(a0) STREG s2,S2_OFFSET*R_SZ(a0) STREG s3,S3_OFFSET*R_SZ(a0) STREG s4,S4_OFFSET*R_SZ(a0) STREG s5,S5_OFFSET*R_SZ(a0) STREG s6,S6_OFFSET*R_SZ(a0) STREG s7,S7_OFFSET*R_SZ(a0) /* MFC0 t0,C0_EPC NOP STREG t0,C0_EPC_OFFSET*R_SZ(a0) */ _CPU_Context_switch_restore: LDREG ra,RA_OFFSET*R_SZ(a1) /* restore context */ LDREG sp,SP_OFFSET*R_SZ(a1) LDREG fp,FP_OFFSET*R_SZ(a1) LDREG s0,S0_OFFSET*R_SZ(a1) LDREG s1,S1_OFFSET*R_SZ(a1) LDREG s2,S2_OFFSET*R_SZ(a1) LDREG s3,S3_OFFSET*R_SZ(a1) LDREG s4,S4_OFFSET*R_SZ(a1) LDREG s5,S5_OFFSET*R_SZ(a1) LDREG s6,S6_OFFSET*R_SZ(a1) LDREG s7,S7_OFFSET*R_SZ(a1) /* LDREG t0,C0_EPC_OFFSET*R_SZ(a1) NOP MTC0 t0,C0_EPC */ LDREG t0, C0_SR_OFFSET*R_SZ(a1) NOP #if __mips == 3 andi t0,SR_EXL bnez t0,_CPU_Context_1 /* set exception level from restore context */ li t0,~SR_EXL MFC0 t1,C0_SR NOP and t1,t0 MTC0 t1,C0_SR #elif __mips == 1 andi t0,(SR_INTERRUPT_ENABLE_BITS) /* we know 0 disabled */ beq t0,$0,_CPU_Context_1 /* set level from restore context */ MFC0 t0,C0_SR NOP or t0,(SR_INTERRUPT_ENABLE_BITS) /* new_sr = old sr with enabled */ MTC0 t0,C0_SR /* set with enabled */ #endif _CPU_Context_1: j ra NOP ENDFRAME(_CPU_Context_switch) /* * _CPU_Context_restore * * This routine is generally used only to restart self in an * efficient manner. It may simply be a label in _CPU_Context_switch. * * NOTE: May be unnecessary to reload some registers. 
* * void _CPU_Context_restore( * Context_Control *new_context * ); */ FRAME(_CPU_Context_restore,sp,0,ra) ADD a1,a0,zero j _CPU_Context_switch_restore NOP ENDFRAME(_CPU_Context_restore) ASM_EXTERN(_ISR_Nest_level, SZ_INT) ASM_EXTERN(_Thread_Dispatch_disable_level,SZ_INT) ASM_EXTERN(_Context_Switch_necessary,SZ_INT) ASM_EXTERN(_ISR_Signals_to_thread_executing,SZ_INT) ASM_EXTERN(_Thread_Executing,SZ_INT) .extern _Thread_Dispatch .extern _ISR_Vector_table /* void __ISR_Handler() * * This routine provides the RTEMS interrupt management. * * void _ISR_Handler() * * * This discussion ignores a lot of the ugly details in a real * implementation such as saving enough registers/state to be * able to do something real. Keep in mind that the goal is * to invoke a user's ISR handler which is written in C and * uses a certain set of registers. * * Also note that the exact order is to a large extent flexible. * Hardware will dictate a sequence for a certain subset of * _ISR_Handler while requirements for setting * * At entry to "common" _ISR_Handler, the vector number must be * available. On some CPUs the hardware puts either the vector * number or the offset into the vector table for this ISR in a * known place. If the hardware does not give us this information, * then the assembly portion of RTEMS for this port will contain * a set of distinct interrupt entry points which somehow place * the vector number in a known place (which is safe if another * interrupt nests this one) and branches to _ISR_Handler. * */ FRAME(_ISR_Handler,sp,0,ra) .set noreorder /* Q: _ISR_Handler, not using IDT/SIM ...save extra regs? */ /* wastes a lot of stack space for context?? */ ADDIU sp,sp,-EXCP_STACK_SIZE STREG ra, R_RA*R_SZ(sp) /* store ra on the stack */ STREG v0, R_V0*R_SZ(sp) STREG v1, R_V1*R_SZ(sp) STREG a0, R_A0*R_SZ(sp) STREG a1, R_A1*R_SZ(sp) STREG a2, R_A2*R_SZ(sp) STREG a3, R_A3*R_SZ(sp) STREG t0, R_T0*R_SZ(sp) STREG t1, R_T1*R_SZ(sp) STREG t2, R_T2*R_SZ(sp) STREG t3, R_T3*R_SZ(sp) STREG t4, R_T4*R_SZ(sp) STREG t5, R_T5*R_SZ(sp) STREG t6, R_T6*R_SZ(sp) STREG t7, R_T7*R_SZ(sp) mflo t0 STREG t8, R_T8*R_SZ(sp) STREG t0, R_MDLO*R_SZ(sp) STREG t9, R_T9*R_SZ(sp) mfhi t0 STREG gp, R_GP*R_SZ(sp) STREG t0, R_MDHI*R_SZ(sp) STREG fp, R_FP*R_SZ(sp) .set noat STREG AT, R_AT*R_SZ(sp) .set at MFC0 t0,C0_SR MFC0 t1,C0_EPC STREG t0,R_SR*R_SZ(sp) STREG t1,R_EPC*R_SZ(sp) #ifdef INSTRUMENT lw t2, _Thread_Executing nop sw t2, 0x8001FFF0 #ifdef SAVE_ALL_REGISTERS sw t0, 0x8001F050 sw t1, 0x8001F054 li t0, 0xdeadbeef li t1, 0xdeadbeef li t2, 0xdeadbeef sw ra, 0x8001F000 sw v0, 0x8001F004 sw v1, 0x8001F008 sw a0, 0x8001F00c sw a1, 0x8001F010 sw a2, 0x8001F014 sw a3, 0x8001F018 sw t0, 0x8001F01c sw t1, 0x8001F020 sw t2, 0x8001F024 sw t3, 0x8001F028 sw t4, 0x8001F02c sw t5, 0x8001F030 sw t6, 0x8001F034 sw t7, 0x8001F038 sw t8, 0x8001F03c sw t9, 0x8001F040 sw gp, 0x8001F044 sw fp, 0x8001F048 #endif #endif /* determine if an interrupt generated this exception */ MFC0 k0,C0_CAUSE NOP and k1,k0,CAUSE_EXCMASK beq k1, 0, _ISR_Handler_1 _ISR_Handler_Exception: /* if we return from the exception, it is assumed nothing */ /* bad is going on and we can continue to run normally */ move a0,sp jal mips_vector_exceptions nop j _ISR_Handler_exit nop _ISR_Handler_1: MFC0 k1,C0_SR and k0,CAUSE_IPMASK and k0,k1 /* external interrupt not enabled, ignore */ /* but if it's not an exception or an interrupt, */ /* Then where did it come from??? 
*/ beq k0,zero,_ISR_Handler_exit li t2,1 /* set a flag so we process interrupts */ /* * save some or all context on stack * may need to save some special interrupt information for exit * * #if ( CPU_HAS_SOFTWARE_INTERRUPT_STACK == TRUE ) * if ( _ISR_Nest_level == 0 ) * switch to software interrupt stack * #endif */ /* * _ISR_Nest_level++; */ LDREG t0,_ISR_Nest_level NOP ADD t0,t0,1 STREG t0,_ISR_Nest_level /* * _Thread_Dispatch_disable_level++; */ LDREG t1,_Thread_Dispatch_disable_level NOP ADD t1,t1,1 STREG t1,_Thread_Dispatch_disable_level /* * Call the CPU model or BSP specific routine to decode the * interrupt source and actually vector to device ISR handlers. */ move a0,sp jal mips_vector_isr_handlers nop /* * --_ISR_Nest_level; */ LDREG t2,_ISR_Nest_level NOP ADD t2,t2,-1 STREG t2,_ISR_Nest_level /* * --_Thread_Dispatch_disable_level; */ LDREG t1,_Thread_Dispatch_disable_level NOP ADD t1,t1,-1 STREG t1,_Thread_Dispatch_disable_level /* * if ( _Thread_Dispatch_disable_level || _ISR_Nest_level ) * goto the label "exit interrupt (simple case)" */ or t0,t2,t1 bne t0,zero,_ISR_Handler_exit nop /* * #if ( CPU_HAS_SOFTWARE_INTERRUPT_STACK == TRUE ) * restore stack * #endif * * if ( !_Context_Switch_necessary && !_ISR_Signals_to_thread_executing ) * goto the label "exit interrupt (simple case)" */ LDREG t0,_Context_Switch_necessary LDREG t1,_ISR_Signals_to_thread_executing NOP or t0,t0,t1 beq t0,zero,_ISR_Handler_exit nop #ifdef INSTRUMENT lw t0,_Thread_Executing nop sw t0,0x8001F100 #endif /* restore interrupt state from the saved status register, * if the isr vectoring didn't so we allow nested interrupts to * occur LDREG t0,R_SR*R_SZ(sp) NOP MTC0 t0,C0_SR rfe */ jal _Thread_Dispatch nop #ifdef INSTRUMENT lw t0,_Thread_Executing nop sw t0,0x8001F104 #endif /* * prepare to get out of interrupt * return from interrupt (maybe to _ISR_Dispatch) * * LABEL "exit interrupt (simple case):" * prepare to get out of interrupt * return from interrupt */ _ISR_Handler_exit: LDREG t0, R_SR*R_SZ(sp) NOP MTC0 t0, C0_SR /* restore context from stack */ #ifdef INSTRUMENT lw t0,_Thread_Executing nop sw t0, 0x8001FFF4 #endif LDREG k0, R_MDLO*R_SZ(sp) LDREG t0, R_T0*R_SZ(sp) mtlo k0 LDREG k0, R_MDHI*R_SZ(sp) LDREG t1, R_T1*R_SZ(sp) mthi k0 LDREG t2, R_T2*R_SZ(sp) LDREG t3, R_T3*R_SZ(sp) LDREG t4, R_T4*R_SZ(sp) LDREG t5, R_T5*R_SZ(sp) LDREG t6, R_T6*R_SZ(sp) LDREG t7, R_T7*R_SZ(sp) LDREG t8, R_T8*R_SZ(sp) LDREG t9, R_T9*R_SZ(sp) LDREG gp, R_GP*R_SZ(sp) LDREG fp, R_FP*R_SZ(sp) LDREG ra, R_RA*R_SZ(sp) LDREG a0, R_A0*R_SZ(sp) LDREG a1, R_A1*R_SZ(sp) LDREG a2, R_A2*R_SZ(sp) LDREG a3, R_A3*R_SZ(sp) LDREG v1, R_V1*R_SZ(sp) LDREG v0, R_V0*R_SZ(sp) #ifdef INSTRUMENT #ifdef SAVE_ALL_REGISTERS sw ra, 0x8001F000 sw v0, 0x8001F004 sw v1, 0x8001F008 sw a0, 0x8001F00c sw a1, 0x8001F010 sw a2, 0x8001F014 sw a3, 0x8001F018 sw t0, 0x8001F01c sw t1, 0x8001F020 sw t2, 0x8001F024 sw t3, 0x8001F028 sw t4, 0x8001F02c sw t5, 0x8001F030 sw t6, 0x8001F034 sw t7, 0x8001F038 sw t8, 0x8001F03c sw t9, 0x8001F040 sw gp, 0x8001F044 sw fp, 0x8001F048 #endif #endif LDREG k0, R_EPC*R_SZ(sp) .set noat LDREG AT, R_AT*R_SZ(sp) .set at ADDIU sp,sp,EXCP_STACK_SIZE j k0 rfe nop .set reorder ENDFRAME(_ISR_Handler) FRAME(mips_break,sp,0,ra) #if 1 break 0x0 j mips_break #else j ra #endif nop ENDFRAME(mips_break) /* ---------------------------------- End of file cpu_asm.S with bug -----------------------------------------------*/ /* ---------------------------------- Start of file cpu_asm.S with fix ---------------------------------------------*/ /* * 
This file contains the basic algorithms for all assembly code used * in an specific CPU port of RTEMS. These algorithms must be implemented * in assembly language * * History: * Baseline: no_cpu * 1996: Ported to MIPS64ORION by Craig Lebakken * COPYRIGHT (c) 1996 by Transition Networks Inc. * To anyone who acknowledges that the modifications to this file to * port it to the MIPS64ORION are provided "AS IS" without any * express or implied warranty: * permission to use, copy, modify, and distribute this file * for any purpose is hereby granted without fee, provided that * the above copyright notice and this notice appears in all * copies, and that the name of Transition Networks not be used in * advertising or publicity pertaining to distribution of the * software without specific, written prior permission. Transition * Networks makes no representations about the suitability * of this software for any purpose. * 2000: Reworked by Alan Cudmore to become * the baseline of the more general MIPS port. * 2001: Joel Sherrill continued this rework, * rewriting as much as possible in C and added the JMR3904 BSP * so testing could be performed on a simulator. * 2004: 24March, Art Ferrer, NASA/GSFC, added save of FP status/control * register to fix intermittent FP error encountered on ST5 mission * implementation on Mongoose V processor. * * COPYRIGHT (c) 1989-2000. * On-Line Applications Research Corporation (OAR). * * The license and distribution terms for this file may be * found in the file LICENSE in this distribution or at * http://www.OARcorp.com/rtems/license.html. * * $Id: cpu_asm.S,v 1.1 2002/03/20 17:27:41 st5 Exp $ */ #include #include "iregdef.h" #include "idtcpu.h" /* enable debugging shadow writes to misc ram, this is a vestigal * Mongoose-ism debug tool- but may be handy in the future so we * left it in... */ #define INSTRUMENT #define SAVE_ALL_REGISTERS /* Ifdefs prevent the duplication of code for MIPS ISA Level 3 ( R4xxx ) * and MIPS ISA Level 1 (R3xxx). */ #if __mips == 3 /* 64 bit register operations */ #define NOP #define ADD dadd #define STREG sd #define LDREG ld #define MFCO dmfc0 #define MTCO dmtc0 #define ADDU addu #define ADDIU addiu #define R_SZ 8 #define F_SZ 8 #define SZ_INT 8 #define SZ_INT_POW2 3 /* XXX if we don't always want 64 bit register ops, then another ifdef */ #elif __mips == 1 /* 32 bit register operations*/ #define NOP nop #define ADD add #define STREG sw #define LDREG lw #define MFCO mfc0 #define MTCO mtc0 #define ADDU add #define ADDIU addi #define R_SZ 4 #define F_SZ 4 #define SZ_INT 4 #define SZ_INT_POW2 2 #else #error "mips assembly: what size registers do I deal with?" 
#endif #define ISR_VEC_SIZE 4 #define EXCP_STACK_SIZE (NREGS*R_SZ) #ifdef __GNUC__ #define ASM_EXTERN(x,size) .extern x,size #else #define ASM_EXTERN(x,size) #endif /* NOTE: these constants must match the Context_Control structure in cpu.h */ #define S0_OFFSET 0 #define S1_OFFSET 1 #define S2_OFFSET 2 #define S3_OFFSET 3 #define S4_OFFSET 4 #define S5_OFFSET 5 #define S6_OFFSET 6 #define S7_OFFSET 7 #define SP_OFFSET 8 #define FP_OFFSET 9 #define RA_OFFSET 10 #define C0_SR_OFFSET 11 /* #define C0_EPC_OFFSET 12 */ /* NOTE: these constants must match the Context_Control_fp structure in cpu.h */ #define FP0_OFFSET 0 #define FP1_OFFSET 1 #define FP2_OFFSET 2 #define FP3_OFFSET 3 #define FP4_OFFSET 4 #define FP5_OFFSET 5 #define FP6_OFFSET 6 #define FP7_OFFSET 7 #define FP8_OFFSET 8 #define FP9_OFFSET 9 #define FP10_OFFSET 10 #define FP11_OFFSET 11 #define FP12_OFFSET 12 #define FP13_OFFSET 13 #define FP14_OFFSET 14 #define FP15_OFFSET 15 #define FP16_OFFSET 16 #define FP17_OFFSET 17 #define FP18_OFFSET 18 #define FP19_OFFSET 19 #define FP20_OFFSET 20 #define FP21_OFFSET 21 #define FP22_OFFSET 22 #define FP23_OFFSET 23 #define FP24_OFFSET 24 #define FP25_OFFSET 25 #define FP26_OFFSET 26 #define FP27_OFFSET 27 #define FP28_OFFSET 28 #define FP29_OFFSET 29 #define FP30_OFFSET 30 #define FP31_OFFSET 31 #define FPCS_OFFSET 32 /* * _CPU_Context_save_fp_context * * This routine is responsible for saving the FP context * at *fp_context_ptr. If the point to load the FP context * from is changed then the pointer is modified by this routine. * * Sometimes a macro implementation of this is in cpu.h which dereferences * the ** and a similarly named routine in this file is passed something * like a (Context_Control_fp *). The general rule on making this decision * is to avoid writing assembly language. 
*/ /* void _CPU_Context_save_fp( * void **fp_context_ptr * ); */ #if ( CPU_HARDWARE_FP == FALSE ) FRAME(_CPU_Context_save_fp,sp,0,ra) .set noat addiu $R_SP,$R_SP,-24 /* Reserve some stack space */ sw $R_T0, 16($R_SP) /* Save $t0 contents */ ld a1,(a0) /* Load pointer to pointer */ nop swc1 $f0,FP0_OFFSET*F_SZ(a1) /* Save FP registers */ swc1 $f1,FP1_OFFSET*F_SZ(a1) swc1 $f2,FP2_OFFSET*F_SZ(a1) swc1 $f3,FP3_OFFSET*F_SZ(a1) swc1 $f4,FP4_OFFSET*F_SZ(a1) swc1 $f5,FP5_OFFSET*F_SZ(a1) swc1 $f6,FP6_OFFSET*F_SZ(a1) swc1 $f7,FP7_OFFSET*F_SZ(a1) swc1 $f8,FP8_OFFSET*F_SZ(a1) swc1 $f9,FP9_OFFSET*F_SZ(a1) swc1 $f10,FP10_OFFSET*F_SZ(a1) swc1 $f11,FP11_OFFSET*F_SZ(a1) swc1 $f12,FP12_OFFSET*F_SZ(a1) swc1 $f13,FP13_OFFSET*F_SZ(a1) swc1 $f14,FP14_OFFSET*F_SZ(a1) swc1 $f15,FP15_OFFSET*F_SZ(a1) swc1 $f16,FP16_OFFSET*F_SZ(a1) swc1 $f17,FP17_OFFSET*F_SZ(a1) swc1 $f18,FP18_OFFSET*F_SZ(a1) swc1 $f19,FP19_OFFSET*F_SZ(a1) swc1 $f20,FP20_OFFSET*F_SZ(a1) swc1 $f21,FP21_OFFSET*F_SZ(a1) swc1 $f22,FP22_OFFSET*F_SZ(a1) swc1 $f23,FP23_OFFSET*F_SZ(a1) swc1 $f24,FP24_OFFSET*F_SZ(a1) swc1 $f25,FP25_OFFSET*F_SZ(a1) swc1 $f26,FP26_OFFSET*F_SZ(a1) swc1 $f27,FP27_OFFSET*F_SZ(a1) swc1 $f28,FP28_OFFSET*F_SZ(a1) swc1 $f29,FP29_OFFSET*F_SZ(a1) swc1 $f30,FP30_OFFSET*F_SZ(a1) swc1 $f31,FP31_OFFSET*F_SZ(a1) cfc1 $R_T0,$31 /* Read FP status/conrol reg */ cfc1 $R_T0,$31 /* Two reads clear pipeline */ nop /* Nops to ensure execution */ nop sw $R_T0,FPCS_OFFSET*F_SZ(a1) /* Store value to FPCS location */ lw $R_T0,16($R_SP) /* Restore $t0 value from stack */ nop addiu $R_SP,$R_SP,24 /* Deallocate stack space */ j ra nop .set at ENDFRAME(_CPU_Context_save_fp) #endif /* * _CPU_Context_restore_fp_context * * This routine is responsible for restoring the FP context * at *fp_context_ptr. If the point to load the FP context * from is changed then the pointer is modified by this routine. * * Sometimes a macro implementation of this is in cpu.h which dereferences * the ** and a similarly named routine in this file is passed something * like a (Context_Control_fp *). The general rule on making this decision * is to avoid writing assembly language. 
*/ /* void _CPU_Context_restore_fp( * void **fp_context_ptr * ) */ #if ( CPU_HARDWARE_FP == FALSE ) FRAME(_CPU_Context_restore_fp,sp,0,ra) .set noat ADDIU $R_SP,$R_SP,-24 /* Reserve some stack space */ sw $R_T0,16($R_SP) /* Store $t0 value to stack */ ld a1,(a0) /* Load pointer to pointer */ NOP lwc1 $f0,FP0_OFFSET*4(a1) /* Load FP registers */ lwc1 $f1,FP1_OFFSET*4(a1) lwc1 $f2,FP2_OFFSET*4(a1) lwc1 $f3,FP3_OFFSET*4(a1) lwc1 $f4,FP4_OFFSET*4(a1) lwc1 $f5,FP5_OFFSET*4(a1) lwc1 $f6,FP6_OFFSET*4(a1) lwc1 $f7,FP7_OFFSET*4(a1) lwc1 $f8,FP8_OFFSET*4(a1) lwc1 $f9,FP9_OFFSET*4(a1) lwc1 $f10,FP10_OFFSET*4(a1) lwc1 $f11,FP11_OFFSET*4(a1) lwc1 $f12,FP12_OFFSET*4(a1) lwc1 $f13,FP13_OFFSET*4(a1) lwc1 $f14,FP14_OFFSET*4(a1) lwc1 $f15,FP15_OFFSET*4(a1) lwc1 $f16,FP16_OFFSET*4(a1) lwc1 $f17,FP17_OFFSET*4(a1) lwc1 $f18,FP18_OFFSET*4(a1) lwc1 $f19,FP19_OFFSET*4(a1) lwc1 $f20,FP20_OFFSET*4(a1) lwc1 $f21,FP21_OFFSET*4(a1) lwc1 $f22,FP22_OFFSET*4(a1) lwc1 $f23,FP23_OFFSET*4(a1) lwc1 $f24,FP24_OFFSET*4(a1) lwc1 $f25,FP25_OFFSET*4(a1) lwc1 $f26,FP26_OFFSET*4(a1) lwc1 $f27,FP27_OFFSET*4(a1) lwc1 $f28,FP28_OFFSET*4(a1) lwc1 $f29,FP29_OFFSET*4(a1) lwc1 $f30,FP30_OFFSET*4(a1) lwc1 $f31,FP31_OFFSET*4(a1) cfc1 $R_T0,$31 /* Read from FP status/control reg */ cfc1 $R_T0,$31 /* Two reads clear pipeline */ nop /* NOPs ensure execution */ nop lw $R_T0,FPCS_OFFSET*4(a1) /* Load saved FPCS value */ nop ctc1 $R_T0,$31 /* Restore FPCS register */ lw $R_T0,16($R_SP) /* Restore $t0 value */ nop addiu $R_SP,$R_SP,24 /* Deallocate stack space */ j ra nop .set at ENDFRAME(_CPU_Context_restore_fp) #endif /* _CPU_Context_switch * * This routine performs a normal non-FP context switch. */ /* void _CPU_Context_switch( * Context_Control *run, * Context_Control *heir * ) */ FRAME(_CPU_Context_switch,sp,0,ra) MFC0 t0,C0_SR li t1,~(SR_INTERRUPT_ENABLE_BITS) STREG t0,C0_SR_OFFSET*4(a0) /* save status register */ and t0,t1 MTC0 t0,C0_SR /* first disable ie bit (recommended) */ #if __mips == 3 ori t0,SR_EXL|SR_IE /* enable exception level to disable interrupts */ MTC0 t0,C0_SR #endif STREG ra,RA_OFFSET*R_SZ(a0) /* save current context */ STREG sp,SP_OFFSET*R_SZ(a0) STREG fp,FP_OFFSET*R_SZ(a0) STREG s0,S0_OFFSET*R_SZ(a0) STREG s1,S1_OFFSET*R_SZ(a0) STREG s2,S2_OFFSET*R_SZ(a0) STREG s3,S3_OFFSET*R_SZ(a0) STREG s4,S4_OFFSET*R_SZ(a0) STREG s5,S5_OFFSET*R_SZ(a0) STREG s6,S6_OFFSET*R_SZ(a0) STREG s7,S7_OFFSET*R_SZ(a0) /* MFC0 t0,C0_EPC NOP STREG t0,C0_EPC_OFFSET*R_SZ(a0) */ _CPU_Context_switch_restore: LDREG ra,RA_OFFSET*R_SZ(a1) /* restore context */ LDREG sp,SP_OFFSET*R_SZ(a1) LDREG fp,FP_OFFSET*R_SZ(a1) LDREG s0,S0_OFFSET*R_SZ(a1) LDREG s1,S1_OFFSET*R_SZ(a1) LDREG s2,S2_OFFSET*R_SZ(a1) LDREG s3,S3_OFFSET*R_SZ(a1) LDREG s4,S4_OFFSET*R_SZ(a1) LDREG s5,S5_OFFSET*R_SZ(a1) LDREG s6,S6_OFFSET*R_SZ(a1) LDREG s7,S7_OFFSET*R_SZ(a1) /* LDREG t0,C0_EPC_OFFSET*R_SZ(a1) NOP MTC0 t0,C0_EPC */ LDREG t0, C0_SR_OFFSET*R_SZ(a1) NOP #if __mips == 3 andi t0,SR_EXL bnez t0,_CPU_Context_1 /* set exception level from restore context */ li t0,~SR_EXL MFC0 t1,C0_SR NOP and t1,t0 MTC0 t1,C0_SR #elif __mips == 1 andi t0,(SR_INTERRUPT_ENABLE_BITS) /* we know 0 disabled */ beq t0,$0,_CPU_Context_1 /* set level from restore context */ MFC0 t0,C0_SR NOP or t0,(SR_INTERRUPT_ENABLE_BITS) /* new_sr = old sr with enabled */ MTC0 t0,C0_SR /* set with enabled */ #endif _CPU_Context_1: j ra NOP ENDFRAME(_CPU_Context_switch) /* * _CPU_Context_restore * * This routine is generally used only to restart self in an * efficient manner. It may simply be a label in _CPU_Context_switch. 
* * NOTE: May be unnecessary to reload some registers. * * void _CPU_Context_restore( * Context_Control *new_context * ); */ FRAME(_CPU_Context_restore,sp,0,ra) ADD a1,a0,zero j _CPU_Context_switch_restore NOP ENDFRAME(_CPU_Context_restore) ASM_EXTERN(_ISR_Nest_level, SZ_INT) ASM_EXTERN(_Thread_Dispatch_disable_level,SZ_INT) ASM_EXTERN(_Context_Switch_necessary,SZ_INT) ASM_EXTERN(_ISR_Signals_to_thread_executing,SZ_INT) ASM_EXTERN(_Thread_Executing,SZ_INT) .extern _Thread_Dispatch .extern _ISR_Vector_table /* void __ISR_Handler() * * This routine provides the RTEMS interrupt management. * * void _ISR_Handler() * * * This discussion ignores a lot of the ugly details in a real * implementation such as saving enough registers/state to be * able to do something real. Keep in mind that the goal is * to invoke a user's ISR handler which is written in C and * uses a certain set of registers. * * Also note that the exact order is to a large extent flexible. * Hardware will dictate a sequence for a certain subset of * _ISR_Handler while requirements for setting * * At entry to "common" _ISR_Handler, the vector number must be * available. On some CPUs the hardware puts either the vector * number or the offset into the vector table for this ISR in a * known place. If the hardware does not give us this information, * then the assembly portion of RTEMS for this port will contain * a set of distinct interrupt entry points which somehow place * the vector number in a known place (which is safe if another * interrupt nests this one) and branches to _ISR_Handler. * */ FRAME(_ISR_Handler,sp,0,ra) .set noreorder /* Q: _ISR_Handler, not using IDT/SIM ...save extra regs? */ /* wastes a lot of stack space for context?? */ ADDIU sp,sp,-EXCP_STACK_SIZE STREG ra, R_RA*R_SZ(sp) /* store ra on the stack */ STREG v0, R_V0*R_SZ(sp) STREG v1, R_V1*R_SZ(sp) STREG a0, R_A0*R_SZ(sp) STREG a1, R_A1*R_SZ(sp) STREG a2, R_A2*R_SZ(sp) STREG a3, R_A3*R_SZ(sp) STREG t0, R_T0*R_SZ(sp) STREG t1, R_T1*R_SZ(sp) STREG t2, R_T2*R_SZ(sp) STREG t3, R_T3*R_SZ(sp) STREG t4, R_T4*R_SZ(sp) STREG t5, R_T5*R_SZ(sp) STREG t6, R_T6*R_SZ(sp) STREG t7, R_T7*R_SZ(sp) mflo t0 STREG t8, R_T8*R_SZ(sp) STREG t0, R_MDLO*R_SZ(sp) STREG t9, R_T9*R_SZ(sp) mfhi t0 STREG gp, R_GP*R_SZ(sp) STREG t0, R_MDHI*R_SZ(sp) STREG fp, R_FP*R_SZ(sp) .set noat STREG AT, R_AT*R_SZ(sp) .set at MFC0 t0,C0_SR MFC0 t1,C0_EPC STREG t0,R_SR*R_SZ(sp) STREG t1,R_EPC*R_SZ(sp) #ifdef INSTRUMENT lw t2, _Thread_Executing nop sw t2, 0x8001FFF0 #ifdef SAVE_ALL_REGISTERS sw t0, 0x8001F050 sw t1, 0x8001F054 li t0, 0xdeadbeef li t1, 0xdeadbeef li t2, 0xdeadbeef sw ra, 0x8001F000 sw v0, 0x8001F004 sw v1, 0x8001F008 sw a0, 0x8001F00c sw a1, 0x8001F010 sw a2, 0x8001F014 sw a3, 0x8001F018 sw t0, 0x8001F01c sw t1, 0x8001F020 sw t2, 0x8001F024 sw t3, 0x8001F028 sw t4, 0x8001F02c sw t5, 0x8001F030 sw t6, 0x8001F034 sw t7, 0x8001F038 sw t8, 0x8001F03c sw t9, 0x8001F040 sw gp, 0x8001F044 sw fp, 0x8001F048 #endif #endif /* determine if an interrupt generated this exception */ MFC0 k0,C0_CAUSE NOP and k1,k0,CAUSE_EXCMASK beq k1, 0, _ISR_Handler_1 _ISR_Handler_Exception: /* if we return from the exception, it is assumed nothing */ /* bad is going on and we can continue to run normally */ move a0,sp jal mips_vector_exceptions nop j _ISR_Handler_exit nop _ISR_Handler_1: MFC0 k1,C0_SR and k0,CAUSE_IPMASK and k0,k1 /* external interrupt not enabled, ignore */ /* but if it's not an exception or an interrupt, */ /* Then where did it come from??? 
*/ beq k0,zero,_ISR_Handler_exit li t2,1 /* set a flag so we process interrupts */ /* * save some or all context on stack * may need to save some special interrupt information for exit * * #if ( CPU_HAS_SOFTWARE_INTERRUPT_STACK == TRUE ) * if ( _ISR_Nest_level == 0 ) * switch to software interrupt stack * #endif */ /* * _ISR_Nest_level++; */ LDREG t0,_ISR_Nest_level NOP ADD t0,t0,1 STREG t0,_ISR_Nest_level /* * _Thread_Dispatch_disable_level++; */ LDREG t1,_Thread_Dispatch_disable_level NOP ADD t1,t1,1 STREG t1,_Thread_Dispatch_disable_level /* * Call the CPU model or BSP specific routine to decode the * interrupt source and actually vector to device ISR handlers. */ move a0,sp jal mips_vector_isr_handlers nop /* * --_ISR_Nest_level; */ LDREG t2,_ISR_Nest_level NOP ADD t2,t2,-1 STREG t2,_ISR_Nest_level /* * --_Thread_Dispatch_disable_level; */ LDREG t1,_Thread_Dispatch_disable_level NOP ADD t1,t1,-1 STREG t1,_Thread_Dispatch_disable_level /* * if ( _Thread_Dispatch_disable_level || _ISR_Nest_level ) * goto the label "exit interrupt (simple case)" */ or t0,t2,t1 bne t0,zero,_ISR_Handler_exit nop /* * #if ( CPU_HAS_SOFTWARE_INTERRUPT_STACK == TRUE ) * restore stack * #endif * * if ( !_Context_Switch_necessary && !_ISR_Signals_to_thread_executing ) * goto the label "exit interrupt (simple case)" */ LDREG t0,_Context_Switch_necessary LDREG t1,_ISR_Signals_to_thread_executing NOP or t0,t0,t1 beq t0,zero,_ISR_Handler_exit nop #ifdef INSTRUMENT lw t0,_Thread_Executing nop sw t0,0x8001F100 #endif /* restore interrupt state from the saved status register, * if the isr vectoring didn't so we allow nested interrupts to * occur LDREG t0,R_SR*R_SZ(sp) NOP MTC0 t0,C0_SR rfe */ jal _Thread_Dispatch nop #ifdef INSTRUMENT lw t0,_Thread_Executing nop sw t0,0x8001F104 #endif /* * prepare to get out of interrupt * return from interrupt (maybe to _ISR_Dispatch) * * LABEL "exit interrupt (simple case):" * prepare to get out of interrupt * return from interrupt */ _ISR_Handler_exit: LDREG t0, R_SR*R_SZ(sp) NOP MTC0 t0, C0_SR /* restore context from stack */ #ifdef INSTRUMENT lw t0,_Thread_Executing nop sw t0, 0x8001FFF4 #endif LDREG k0, R_MDLO*R_SZ(sp) LDREG t0, R_T0*R_SZ(sp) mtlo k0 LDREG k0, R_MDHI*R_SZ(sp) LDREG t1, R_T1*R_SZ(sp) mthi k0 LDREG t2, R_T2*R_SZ(sp) LDREG t3, R_T3*R_SZ(sp) LDREG t4, R_T4*R_SZ(sp) LDREG t5, R_T5*R_SZ(sp) LDREG t6, R_T6*R_SZ(sp) LDREG t7, R_T7*R_SZ(sp) LDREG t8, R_T8*R_SZ(sp) LDREG t9, R_T9*R_SZ(sp) LDREG gp, R_GP*R_SZ(sp) LDREG fp, R_FP*R_SZ(sp) LDREG ra, R_RA*R_SZ(sp) LDREG a0, R_A0*R_SZ(sp) LDREG a1, R_A1*R_SZ(sp) LDREG a2, R_A2*R_SZ(sp) LDREG a3, R_A3*R_SZ(sp) LDREG v1, R_V1*R_SZ(sp) LDREG v0, R_V0*R_SZ(sp) #ifdef INSTRUMENT #ifdef SAVE_ALL_REGISTERS sw ra, 0x8001F000 sw v0, 0x8001F004 sw v1, 0x8001F008 sw a0, 0x8001F00c sw a1, 0x8001F010 sw a2, 0x8001F014 sw a3, 0x8001F018 sw t0, 0x8001F01c sw t1, 0x8001F020 sw t2, 0x8001F024 sw t3, 0x8001F028 sw t4, 0x8001F02c sw t5, 0x8001F030 sw t6, 0x8001F034 sw t7, 0x8001F038 sw t8, 0x8001F03c sw t9, 0x8001F040 sw gp, 0x8001F044 sw fp, 0x8001F048 #endif #endif LDREG k0, R_EPC*R_SZ(sp) .set noat LDREG AT, R_AT*R_SZ(sp) .set at ADDIU sp,sp,EXCP_STACK_SIZE j k0 rfe nop .set reorder ENDFRAME(_ISR_Handler) FRAME(mips_break,sp,0,ra) #if 1 break 0x0 j mips_break #else j ra #endif nop ENDFRAME(mips_break) /* ---------------------------------- End of file cpu_asm.S with fix -----------------------------------------------*/ /* ---------------------------------- Start of file cpu.h with bug -------------------------------------------------*/ /* * 
Mips CPU Dependent Header File * * Conversion to MIPS port by Alan Cudmore and * Joel Sherrill . * * These changes made the code conditional on standard cpp predefines, * merged the mips1 and mips3 code sequences as much as possible, * and moved some of the assembly code to C. Alan did much of the * initial analysis and rework. Joel took over from there and * wrote the JMR3904 BSP so this could be tested. Joel also * added the new interrupt vectoring support in libcpu and * tried to better support the various interrupt controllers. * * Original MIP64ORION port by Craig Lebakken * COPYRIGHT (c) 1996 by Transition Networks Inc. * * To anyone who acknowledges that this file is provided "AS IS" * without any express or implied warranty: * permission to use, copy, modify, and distribute this file * for any purpose is hereby granted without fee, provided that * the above copyright notice and this notice appears in all * copies, and that the name of Transition Networks not be used in * advertising or publicity pertaining to distribution of the * software without specific, written prior permission. * Transition Networks makes no representations about the suitability * of this software for any purpose. * * COPYRIGHT (c) 1989-2001. * On-Line Applications Research Corporation (OAR). * * The license and distribution terms for this file may be * found in the file LICENSE in this distribution or at * http://www.OARcorp.com/rtems/license.html. * * $Id: cpu.h,v 1.1 2002/03/20 17:27:41 st5 Exp $ */ #ifndef __CPU_h #define __CPU_h #ifdef __cplusplus extern "C" { #endif #include /* pick up machine definitions */ #ifndef ASM #include #endif /* conditional compilation parameters */ /* * Should the calls to _Thread_Enable_dispatch be inlined? * * If TRUE, then they are inlined. * If FALSE, then a subroutine call is made. * * Basically this is an example of the classic trade-off of size * versus speed. Inlining the call (TRUE) typically increases the * size of RTEMS while speeding up the enabling of dispatching. * [NOTE: In general, the _Thread_Dispatch_disable_level will * only be 0 or 1 unless you are in an interrupt handler and that * interrupt handler invokes the executive.] When not inlined * something calls _Thread_Enable_dispatch which in turns calls * _Thread_Dispatch. If the enable dispatch is inlined, then * one subroutine call is avoided entirely.] */ #define CPU_INLINE_ENABLE_DISPATCH TRUE /* * Should the body of the search loops in _Thread_queue_Enqueue_priority * be unrolled one time? In unrolled each iteration of the loop examines * two "nodes" on the chain being searched. Otherwise, only one node * is examined per iteration. * * If TRUE, then the loops are unrolled. * If FALSE, then the loops are not unrolled. * * The primary factor in making this decision is the cost of disabling * and enabling interrupts (_ISR_Flash) versus the cost of rest of the * body of the loop. On some CPUs, the flash is more expensive than * one iteration of the loop body. In this case, it might be desirable * to unroll the loop. It is important to note that on some CPUs, this * code is the longest interrupt disable period in RTEMS. So it is * necessary to strike a balance when setting this parameter. */ #define CPU_UNROLL_ENQUEUE_PRIORITY TRUE /* * Does RTEMS manage a dedicated interrupt stack in software? * * If TRUE, then a stack is allocated in _Interrupt_Manager_initialization. * If FALSE, nothing is done. 
* * If the CPU supports a dedicated interrupt stack in hardware, * then it is generally the responsibility of the BSP to allocate it * and set it up. * * If the CPU does not support a dedicated interrupt stack, then * the porter has two options: (1) execute interrupts on the * stack of the interrupted task, and (2) have RTEMS manage a dedicated * interrupt stack. * * If this is TRUE, CPU_ALLOCATE_INTERRUPT_STACK should also be TRUE. * * Only one of CPU_HAS_SOFTWARE_INTERRUPT_STACK and * CPU_HAS_HARDWARE_INTERRUPT_STACK should be set to TRUE. It is * possible that both are FALSE for a particular CPU. Although it * is unclear what that would imply about the interrupt processing * procedure on that CPU. */ #define CPU_HAS_SOFTWARE_INTERRUPT_STACK FALSE /* * Does this CPU have hardware support for a dedicated interrupt stack? * * If TRUE, then it must be installed during initialization. * If FALSE, then no installation is performed. * * If this is TRUE, CPU_ALLOCATE_INTERRUPT_STACK should also be TRUE. * * Only one of CPU_HAS_SOFTWARE_INTERRUPT_STACK and * CPU_HAS_HARDWARE_INTERRUPT_STACK should be set to TRUE. It is * possible that both are FALSE for a particular CPU. Although it * is unclear what that would imply about the interrupt processing * procedure on that CPU. */ #define CPU_HAS_HARDWARE_INTERRUPT_STACK FALSE /* * Does RTEMS allocate a dedicated interrupt stack in the Interrupt Manager? * * If TRUE, then the memory is allocated during initialization. * If FALSE, then the memory is allocated during initialization. * * This should be TRUE is CPU_HAS_SOFTWARE_INTERRUPT_STACK is TRUE * or CPU_INSTALL_HARDWARE_INTERRUPT_STACK is TRUE. */ #define CPU_ALLOCATE_INTERRUPT_STACK FALSE /* * Does the RTEMS invoke the user's ISR with the vector number and * a pointer to the saved interrupt frame (1) or just the vector * number (0)? * */ #define CPU_ISR_PASSES_FRAME_POINTER 1 /* * Does the CPU have hardware floating point? * * If TRUE, then the RTEMS_FLOATING_POINT task attribute is supported. * If FALSE, then the RTEMS_FLOATING_POINT task attribute is ignored. * * If there is a FP coprocessor such as the i387 or mc68881, then * the answer is TRUE. * * The macro name "MIPS_HAS_FPU" should be made CPU specific. * It indicates whether or not this CPU model has FP support. For * example, it would be possible to have an i386_nofp CPU model * which set this to false to indicate that you have an i386 without * an i387 and wish to leave floating point support out of RTEMS. */ #if ( MIPS_HAS_FPU == 1 ) #define CPU_HARDWARE_FP TRUE #else #define CPU_HARDWARE_FP FALSE #endif /* * Are all tasks RTEMS_FLOATING_POINT tasks implicitly? * * If TRUE, then the RTEMS_FLOATING_POINT task attribute is assumed. * If FALSE, then the RTEMS_FLOATING_POINT task attribute is followed. * * So far, the only CPU in which this option has been used is the * HP PA-RISC. The HP C compiler and gcc both implicitly use the * floating point registers to perform integer multiplies. If * a function which you would not think utilize the FP unit DOES, * then one can not easily predict which tasks will use the FP hardware. * In this case, this option should be TRUE. * * If CPU_HARDWARE_FP is FALSE, then this should be FALSE as well. */ #define CPU_ALL_TASKS_ARE_FP FALSE /* * Should the IDLE task have a floating point context? * * If TRUE, then the IDLE task is created as a RTEMS_FLOATING_POINT task * and it has a floating point context which is switched in and out. * If FALSE, then the IDLE task does not have a floating point context. 
* * Setting this to TRUE negatively impacts the time required to preempt * the IDLE task from an interrupt because the floating point context * must be saved as part of the preemption. */ #define CPU_IDLE_TASK_IS_FP FALSE /* * Should the saving of the floating point registers be deferred * until a context switch is made to another different floating point * task? * * If TRUE, then the floating point context will not be stored until * necessary. It will remain in the floating point registers and not * disturned until another floating point task is switched to. * * If FALSE, then the floating point context is saved when a floating * point task is switched out and restored when the next floating point * task is restored. The state of the floating point registers between * those two operations is not specified. * * If the floating point context does NOT have to be saved as part of * interrupt dispatching, then it should be safe to set this to TRUE. * * Setting this flag to TRUE results in using a different algorithm * for deciding when to save and restore the floating point context. * The deferred FP switch algorithm minimizes the number of times * the FP context is saved and restored. The FP context is not saved * until a context switch is made to another, different FP task. * Thus in a system with only one FP task, the FP context will never * be saved or restored. */ #define CPU_USE_DEFERRED_FP_SWITCH TRUE /* * Does this port provide a CPU dependent IDLE task implementation? * * If TRUE, then the routine _CPU_Internal_threads_Idle_thread_body * must be provided and is the default IDLE thread body instead of * _Internal_threads_Idle_thread_body. * * If FALSE, then use the generic IDLE thread body if the BSP does * not provide one. * * This is intended to allow for supporting processors which have * a low power or idle mode. When the IDLE thread is executed, then * the CPU can be powered down. * * The order of precedence for selecting the IDLE thread body is: * * 1. BSP provided * 2. CPU dependent (if provided) * 3. generic (if no BSP and no CPU dependent) */ /* we can use the low power wait instruction for the IDLE thread */ #define CPU_PROVIDES_IDLE_THREAD_BODY TRUE /* * Does the stack grow up (toward higher addresses) or down * (toward lower addresses)? * * If TRUE, then the grows upward. * If FALSE, then the grows toward smaller addresses. */ /* our stack grows down */ #define CPU_STACK_GROWS_UP FALSE /* * The following is the variable attribute used to force alignment * of critical RTEMS structures. On some processors it may make * sense to have these aligned on tighter boundaries than * the minimum requirements of the compiler in order to have as * much of the critical data area as possible in a cache line. * * The placement of this macro in the declaration of the variables * is based on the syntactically requirements of the GNU C * "__attribute__" extension. For example with GNU C, use * the following to force a structures to a 32 byte boundary. * * __attribute__ ((aligned (32))) * * NOTE: Currently only the Priority Bit Map table uses this feature. * To benefit from using this, the data must be heavily * used so it will stay in the cache and used frequently enough * in the executive to justify turning this on. */ /* our cache line size is 16 bytes */ #if __GNUC__ #define CPU_STRUCTURE_ALIGNMENT __attribute__ ((aligned (16))) #else #define CPU_STRUCTURE_ALIGNMENT #endif /* * Define what is required to specify how the network to host conversion * routines are handled. 
*/ #define CPU_HAS_OWN_HOST_TO_NETWORK_ROUTINES FALSE #define CPU_BIG_ENDIAN TRUE #define CPU_LITTLE_ENDIAN FALSE /* * The following defines the number of bits actually used in the * interrupt field of the task mode. How those bits map to the * CPU interrupt levels is defined by the routine _CPU_ISR_Set_level(). */ #define CPU_MODES_INTERRUPT_MASK 0x00000001 /* * Processor defined structures * * Examples structures include the descriptor tables from the i386 * and the processor control structure on the i960ca. */ /* may need to put some structures here. */ /* * Contexts * * Generally there are 2 types of context to save. * 1. Interrupt registers to save * 2. Task level registers to save * * This means we have the following 3 context items: * 1. task level context stuff:: Context_Control * 2. floating point task stuff:: Context_Control_fp * 3. special interrupt level context :: Context_Control_interrupt * * On some processors, it is cost-effective to save only the callee * preserved registers during a task context switch. This means * that the ISR code needs to save those registers which do not * persist across function calls. It is not mandatory to make this * distinctions between the caller/callee saves registers for the * purpose of minimizing context saved during task switch and on interrupts. * If the cost of saving extra registers is minimal, simplicity is the * choice. Save the same context on interrupt entry as for tasks in * this case. * * Additionally, if gdb is to be made aware of RTEMS tasks for this CPU, then * care should be used in designing the context area. * * On some CPUs with hardware floating point support, the Context_Control_fp * structure will not be used or it simply consist of an array of a * fixed number of bytes. This is done when the floating point context * is dumped by a "FP save context" type instruction and the format * is not really defined by the CPU. In this case, there is no need * to figure out the exact format -- only the size. Of course, although * this is enough information for RTEMS, it is probably not enough for * a debugger such as gdb. But that is another problem. */ /* WARNING: If this structure is modified, the constants in cpu.h must be updated. */ #if __mips == 1 #define __MIPS_REGISTER_TYPE unsigned32 #define __MIPS_FPU_REGISTER_TYPE unsigned32 #elif __mips == 3 #define __MIPS_REGISTER_TYPE unsigned64 #define __MIPS_FPU_REGISTER_TYPE unsigned64 #else #error "mips register size: unknown architecture level!!" #endif typedef struct { __MIPS_REGISTER_TYPE s0; __MIPS_REGISTER_TYPE s1; __MIPS_REGISTER_TYPE s2; __MIPS_REGISTER_TYPE s3; __MIPS_REGISTER_TYPE s4; __MIPS_REGISTER_TYPE s5; __MIPS_REGISTER_TYPE s6; __MIPS_REGISTER_TYPE s7; __MIPS_REGISTER_TYPE sp; __MIPS_REGISTER_TYPE fp; __MIPS_REGISTER_TYPE ra; __MIPS_REGISTER_TYPE c0_sr; /* __MIPS_REGISTER_TYPE c0_epc; */ } Context_Control; /* WARNING: If this structure is modified, the constants in cpu.h * must also be updated. 
*/ typedef struct { #if ( CPU_HARDWARE_FP == TRUE ) __MIPS_FPU_REGISTER_TYPE fp0; __MIPS_FPU_REGISTER_TYPE fp1; __MIPS_FPU_REGISTER_TYPE fp2; __MIPS_FPU_REGISTER_TYPE fp3; __MIPS_FPU_REGISTER_TYPE fp4; __MIPS_FPU_REGISTER_TYPE fp5; __MIPS_FPU_REGISTER_TYPE fp6; __MIPS_FPU_REGISTER_TYPE fp7; __MIPS_FPU_REGISTER_TYPE fp8; __MIPS_FPU_REGISTER_TYPE fp9; __MIPS_FPU_REGISTER_TYPE fp10; __MIPS_FPU_REGISTER_TYPE fp11; __MIPS_FPU_REGISTER_TYPE fp12; __MIPS_FPU_REGISTER_TYPE fp13; __MIPS_FPU_REGISTER_TYPE fp14; __MIPS_FPU_REGISTER_TYPE fp15; __MIPS_FPU_REGISTER_TYPE fp16; __MIPS_FPU_REGISTER_TYPE fp17; __MIPS_FPU_REGISTER_TYPE fp18; __MIPS_FPU_REGISTER_TYPE fp19; __MIPS_FPU_REGISTER_TYPE fp20; __MIPS_FPU_REGISTER_TYPE fp21; __MIPS_FPU_REGISTER_TYPE fp22; __MIPS_FPU_REGISTER_TYPE fp23; __MIPS_FPU_REGISTER_TYPE fp24; __MIPS_FPU_REGISTER_TYPE fp25; __MIPS_FPU_REGISTER_TYPE fp26; __MIPS_FPU_REGISTER_TYPE fp27; __MIPS_FPU_REGISTER_TYPE fp28; __MIPS_FPU_REGISTER_TYPE fp29; __MIPS_FPU_REGISTER_TYPE fp30; __MIPS_FPU_REGISTER_TYPE fp31; #endif } Context_Control_fp; /* This struct reflects the stack frame employed in ISR_Handler. Note that the ISR routine doesn't save all registers to this frame, so cpu_asm.S should be consulted to see if the registers you're interested in are actually there. */ typedef struct { #if __mips == 1 unsigned int regs[80]; #endif #if __mips == 3 unsigned int regs[94]; #endif } CPU_Interrupt_frame; /* * The following table contains the information required to configure * the mips processor specific parameters. */ typedef struct { void (*pretasking_hook)( void ); void (*predriver_hook)( void ); void (*postdriver_hook)( void ); void (*idle_task)( void ); boolean do_zero_of_workspace; unsigned32 idle_task_stack_size; unsigned32 interrupt_stack_size; unsigned32 extra_mpci_receive_server_stack; void * (*stack_allocate_hook)( unsigned32 ); void (*stack_free_hook)( void* ); /* end of fields required on all CPUs */ unsigned32 clicks_per_microsecond; } rtems_cpu_table; /* * Macros to access required entires in the CPU Table are in * the file rtems/system.h. */ /* * Macros to access MIPS specific additions to the CPU Table */ #define rtems_cpu_configuration_get_clicks_per_microsecond() \ (_CPU_Table.clicks_per_microsecond) /* * This variable is optional. It is used on CPUs on which it is difficult * to generate an "uninitialized" FP context. It is filled in by * _CPU_Initialize and copied into the task's FP context area during * _CPU_Context_Initialize. */ SCORE_EXTERN Context_Control_fp _CPU_Null_fp_context; /* * On some CPUs, RTEMS supports a software managed interrupt stack. * This stack is allocated by the Interrupt Manager and the switch * is performed in _ISR_Handler. These variables contain pointers * to the lowest and highest addresses in the chunk of memory allocated * for the interrupt stack. Since it is unknown whether the stack * grows up or down (in general), this give the CPU dependent * code the option of picking the version it wants to use. * * NOTE: These two variables are required if the macro * CPU_HAS_SOFTWARE_INTERRUPT_STACK is defined as TRUE. */ SCORE_EXTERN void *_CPU_Interrupt_stack_low; SCORE_EXTERN void *_CPU_Interrupt_stack_high; /* * With some compilation systems, it is difficult if not impossible to * call a high-level language routine from assembly language. This * is especially true of commercial Ada compilers and name mangling * C++ ones. This variable can be optionally defined by the CPU porter * and contains the address of the routine _Thread_Dispatch. 
This * can make it easier to invoke that routine at the end of the interrupt * sequence (if a dispatch is necessary). * SCORE_EXTERN void (*_CPU_Thread_dispatch_pointer)(); * * NOTE: Not needed on this port. */ /* * Nothing prevents the porter from declaring more CPU specific variables. */ /* XXX: if needed, put more variables here */ /* * The size of the floating point context area. On some CPUs this * will not be a "sizeof" because the format of the floating point * area is not defined -- only the size is. This is usually on * CPUs with a "floating point save context" instruction. */ #define CPU_CONTEXT_FP_SIZE sizeof( Context_Control_fp ) /* * Amount of extra stack (above minimum stack size) required by * system initialization thread. Remember that in a multiprocessor * system the system intialization thread becomes the MP server thread. */ #define CPU_MPCI_RECEIVE_SERVER_EXTRA_STACK 0 /* * This defines the number of entries in the ISR_Vector_table managed * by RTEMS. */ extern unsigned int mips_interrupt_number_of_vectors; #define CPU_INTERRUPT_NUMBER_OF_VECTORS (mips_interrupt_number_of_vectors) #define CPU_INTERRUPT_MAXIMUM_VECTOR_NUMBER (CPU_INTERRUPT_NUMBER_OF_VECTORS - 1) /* * Should be large enough to run all RTEMS tests. This insures * that a "reasonable" small application should not have any problems. */ #define CPU_STACK_MINIMUM_SIZE (2048*sizeof(unsigned32)) /* * CPU's worst alignment requirement for data types on a byte boundary. This * alignment does not take into account the requirements for the stack. */ #define CPU_ALIGNMENT 8 /* * This number corresponds to the byte alignment requirement for the * heap handler. This alignment requirement may be stricter than that * for the data types alignment specified by CPU_ALIGNMENT. It is * common for the heap to follow the same alignment requirement as * CPU_ALIGNMENT. If the CPU_ALIGNMENT is strict enough for the heap, * then this should be set to CPU_ALIGNMENT. * * NOTE: This does not have to be a power of 2. It does have to * be greater or equal to than CPU_ALIGNMENT. */ #define CPU_HEAP_ALIGNMENT CPU_ALIGNMENT /* * This number corresponds to the byte alignment requirement for memory * buffers allocated by the partition manager. This alignment requirement * may be stricter than that for the data types alignment specified by * CPU_ALIGNMENT. It is common for the partition to follow the same * alignment requirement as CPU_ALIGNMENT. If the CPU_ALIGNMENT is strict * enough for the partition, then this should be set to CPU_ALIGNMENT. * * NOTE: This does not have to be a power of 2. It does have to * be greater or equal to than CPU_ALIGNMENT. */ #define CPU_PARTITION_ALIGNMENT CPU_ALIGNMENT /* * This number corresponds to the byte alignment requirement for the * stack. This alignment requirement may be stricter than that for the * data types alignment specified by CPU_ALIGNMENT. If the CPU_ALIGNMENT * is strict enough for the stack, then this should be set to 0. * * NOTE: This must be a power of 2 either 0 or greater than CPU_ALIGNMENT. */ #define CPU_STACK_ALIGNMENT CPU_ALIGNMENT /* * ISR handler macros */ /* * Support routine to initialize the RTEMS vector table after it is allocated. */ #define _CPU_Initialize_vectors() /* * Disable all interrupts for an RTEMS critical section. The previous * level is returned in _level. 
*/ #define _CPU_ISR_Disable( _level ) \ do { \ mips_get_sr( _level ); \ mips_set_sr( (_level) & ~SR_INTERRUPT_ENABLE_BITS ); \ } while(0) /* * Enable interrupts to the previous level (returned by _CPU_ISR_Disable). * This indicates the end of an RTEMS critical section. The parameter * _level is not modified. */ #define _CPU_ISR_Enable( _level ) \ do { \ mips_set_sr(_level); \ } while(0) /* * This temporarily restores the interrupt to _level before immediately * disabling them again. This is used to divide long RTEMS critical * sections into two or more parts. The parameter _level is not * modified. */ #define _CPU_ISR_Flash( _xlevel ) \ do { \ unsigned int _scratch; \ _CPU_ISR_Enable( _xlevel ); \ _CPU_ISR_Disable( _scratch ); \ } while(0) /* * Map interrupt level in task mode onto the hardware that the CPU * actually provides. Currently, interrupt levels which do not * map onto the CPU in a generic fashion are undefined. Someday, * it would be nice if these were "mapped" by the application * via a callout. For example, m68k has 8 levels 0 - 7, levels * 8 - 255 would be available for bsp/application specific meaning. * This could be used to manage a programmable interrupt controller * via the rtems_task_mode directive. * * On the MIPS, 0 is all on. Non-zero is all off. This only * manipulates the IEC. */ unsigned32 _CPU_ISR_Get_level( void ); /* in cpu.c */ void _CPU_ISR_Set_level( unsigned32 ); /* in cpu.c */ /* end of ISR handler macros */ /* Context handler macros */ /* * Initialize the context to a state suitable for starting a * task after a context restore operation. Generally, this * involves: * * - setting a starting address * - preparing the stack * - preparing the stack and frame pointers * - setting the proper interrupt level in the context * - initializing the floating point context * * This routine generally does not set any unnecessary register * in the context. The state of the "general data" registers is * undefined at task start time. * * NOTE: This is_fp parameter is TRUE if the thread is to be a floating * point thread. This is typically only used on CPUs where the * FPU may be easily disabled by software such as on the SPARC * where the PSR contains an enable FPU bit. */ #define _CPU_Context_Initialize( _the_context, _stack_base, _size, \ _isr, _entry_point, _is_fp ) \ { \ unsigned32 _stack_tmp = \ (unsigned32)(_stack_base) + (_size) - CPU_STACK_ALIGNMENT; \ _stack_tmp &= ~(CPU_STACK_ALIGNMENT - 1); \ (_the_context)->sp = _stack_tmp; \ (_the_context)->fp = _stack_tmp; \ (_the_context)->ra = (unsigned64)_entry_point; \ (_the_context)->c0_sr = ((_the_context)->c0_sr & 0x0fff0000) | \ ((_isr)?0xff00:0xff01) | \ ((_is_fp)?0x30000000:0x10000000); \ } /* * This routine is responsible for somehow restarting the currently * executing task. If you are lucky, then all that is necessary * is restoring the context. Otherwise, there will need to be * a special assembly routine which does something special in this * case. Context_Restore should work most of the time. It will * not work if restarting self conflicts with the stack frame * assumptions of restoring a context. */ #define _CPU_Context_Restart_self( _the_context ) \ _CPU_Context_restore( (_the_context) ); /* * The purpose of this macro is to allow the initial pointer into * A floating point context area (used to save the floating point * context) to be at an arbitrary place in the floating point * context area. 
* * This is necessary because some FP units are designed to have * their context saved as a stack which grows into lower addresses. * Other FP units can be saved by simply moving registers into offsets * from the base of the context area. Finally some FP units provide * a "dump context" instruction which could fill in from high to low * or low to high based on the whim of the CPU designers. */ #define _CPU_Context_Fp_start( _base, _offset ) \ ( (void *) _Addresses_Add_offset( (_base), (_offset) ) ) /* * This routine initializes the FP context area passed to it to. * There are a few standard ways in which to initialize the * floating point context. The code included for this macro assumes * that this is a CPU in which a "initial" FP context was saved into * _CPU_Null_fp_context and it simply copies it to the destination * context passed to it. * * Other models include (1) not doing anything, and (2) putting * a "null FP status word" in the correct place in the FP context. */ #if ( CPU_HARDWARE_FP == TRUE ) #define _CPU_Context_Initialize_fp( _destination ) \ { \ *((Context_Control_fp *) *((void **) _destination)) = _CPU_Null_fp_context; \ } #endif /* end of Context handler macros */ /* Fatal Error manager macros */ /* * This routine copies _error into a known place -- typically a stack * location or a register, optionally disables interrupts, and * halts/stops the CPU. */ #define _CPU_Fatal_halt( _error ) \ do { \ unsigned int _level; \ _CPU_ISR_Disable(_level); \ loop: goto loop; \ } while (0) extern void mips_break( int error ); /* Bitfield handler macros */ /* * This routine sets _output to the bit number of the first bit * set in _value. _value is of CPU dependent type Priority_Bit_map_control. * This type may be either 16 or 32 bits wide although only the 16 * least significant bits will be used. * * There are a number of variables in using a "find first bit" type * instruction. * * (1) What happens when run on a value of zero? * (2) Bits may be numbered from MSB to LSB or vice-versa. * (3) The numbering may be zero or one based. * (4) The "find first bit" instruction may search from MSB or LSB. * * RTEMS guarantees that (1) will never happen so it is not a concern. * (2),(3), (4) are handled by the macros _CPU_Priority_mask() and * _CPU_Priority_bits_index(). These three form a set of routines * which must logically operate together. Bits in the _value are * set and cleared based on masks built by _CPU_Priority_mask(). * The basic major and minor values calculated by _Priority_Major() * and _Priority_Minor() are "massaged" by _CPU_Priority_bits_index() * to properly range between the values returned by the "find first bit" * instruction. This makes it possible for _Priority_Get_highest() to * calculate the major and directly index into the minor table. * This mapping is necessary to ensure that 0 (a high priority major/minor) * is the first bit found. * * This entire "find first bit" and mapping process depends heavily * on the manner in which a priority is broken into a major and minor * components with the major being the 4 MSB of a priority and minor * the 4 LSB. Thus (0 << 4) + 0 corresponds to priority 0 -- the highest * priority. And (15 << 4) + 14 corresponds to priority 254 -- the next * to the lowest priority. * * If your CPU does not have a "find first bit" instruction, then * there are ways to make do without it. 
Here are a handful of ways * to implement this in software: * * - a series of 16 bit test instructions * - a "binary search using if's" * - _number = 0 * if _value > 0x00ff * _value >>=8 * _number = 8; * * if _value > 0x0000f * _value >=8 * _number += 4 * * _number += bit_set_table[ _value ] * * where bit_set_table[ 16 ] has values which indicate the first * bit set */ #define CPU_USE_GENERIC_BITFIELD_CODE TRUE #define CPU_USE_GENERIC_BITFIELD_DATA TRUE #if (CPU_USE_GENERIC_BITFIELD_CODE == FALSE) #define _CPU_Bitfield_Find_first_bit( _value, _output ) \ { \ (_output) = 0; /* do something to prevent warnings */ \ } #endif /* end of Bitfield handler macros */ /* * This routine builds the mask which corresponds to the bit fields * as searched by _CPU_Bitfield_Find_first_bit(). See the discussion * for that routine. */ #if (CPU_USE_GENERIC_BITFIELD_CODE == FALSE) #define _CPU_Priority_Mask( _bit_number ) \ ( 1 << (_bit_number) ) #endif /* * This routine translates the bit numbers returned by * _CPU_Bitfield_Find_first_bit() into something suitable for use as * a major or minor component of a priority. See the discussion * for that routine. */ #if (CPU_USE_GENERIC_BITFIELD_CODE == FALSE) #define _CPU_Priority_bits_index( _priority ) \ (_priority) #endif /* end of Priority handler macros */ /* functions */ /* * _CPU_Initialize * * This routine performs CPU dependent initialization. */ void _CPU_Initialize( rtems_cpu_table *cpu_table, void (*thread_dispatch) ); /* * _CPU_ISR_install_raw_handler * * This routine installs a "raw" interrupt handler directly into the * processor's vector table. */ void _CPU_ISR_install_raw_handler( unsigned32 vector, proc_ptr new_handler, proc_ptr *old_handler ); /* * _CPU_ISR_install_vector * * This routine installs an interrupt vector. */ void _CPU_ISR_install_vector( unsigned32 vector, proc_ptr new_handler, proc_ptr *old_handler ); /* * _CPU_Install_interrupt_stack * * This routine installs the hardware interrupt stack pointer. * * NOTE: It need only be provided if CPU_HAS_HARDWARE_INTERRUPT_STACK * is TRUE. */ void _CPU_Install_interrupt_stack( void ); /* * _CPU_Internal_threads_Idle_thread_body * * This routine is the CPU dependent IDLE thread body. * * NOTE: It need only be provided if CPU_PROVIDES_IDLE_THREAD_BODY * is TRUE. */ void _CPU_Thread_Idle_body( void ); /* * _CPU_Context_switch * * This routine switches from the run context to the heir context. */ void _CPU_Context_switch( Context_Control *run, Context_Control *heir ); /* * _CPU_Context_restore * * This routine is generally used only to restart self in an * efficient manner. It may simply be a label in _CPU_Context_switch. * * NOTE: May be unnecessary to reload some registers. */ void _CPU_Context_restore( Context_Control *new_context ); /* * _CPU_Context_save_fp * * This routine saves the floating point context passed to it. */ void _CPU_Context_save_fp( void **fp_context_ptr ); /* * _CPU_Context_restore_fp * * This routine restores the floating point context passed to it. */ void _CPU_Context_restore_fp( void **fp_context_ptr ); /* The following routine swaps the endian format of an unsigned int. * It must be static because it is referenced indirectly. * * This version will work on any processor, but if there is a better * way for your CPU PLEASE use it. 
The most common way to do this is to: * * swap least significant two bytes with 16-bit rotate * swap upper and lower 16-bits * swap most significant two bytes with 16-bit rotate * * Some CPUs have special instructions which swap a 32-bit quantity in * a single instruction (e.g. i486). It is probably best to avoid * an "endian swapping control bit" in the CPU. One good reason is * that interrupts would probably have to be disabled to insure that * an interrupt does not try to access the same "chunk" with the wrong * endian. Another good reason is that on some CPUs, the endian bit * endianness for ALL fetches -- both code and data -- so the code * will be fetched incorrectly. */ static inline unsigned int CPU_swap_u32( unsigned int value ) { unsigned32 byte1, byte2, byte3, byte4, swapped; byte4 = (value >> 24) & 0xff; byte3 = (value >> 16) & 0xff; byte2 = (value >> 8) & 0xff; byte1 = value & 0xff; swapped = (byte1 << 24) | (byte2 << 16) | (byte3 << 8) | byte4; return( swapped ); } #define CPU_swap_u16( value ) \ (((value&0xff) << 8) | ((value >> 8)&0xff)) #ifdef __cplusplus } #endif #endif /* ---------------------------------- End of file cpu.h with bug ---------------------------------------------------*/ /* ---------------------------------- Start of file cpu.h with fix -------------------------------------------------*/ /* * Mips CPU Dependent Header File * * Conversion to MIPS port by Alan Cudmore and * Joel Sherrill . * * These changes made the code conditional on standard cpp predefines, * merged the mips1 and mips3 code sequences as much as possible, * and moved some of the assembly code to C. Alan did much of the * initial analysis and rework. Joel took over from there and * wrote the JMR3904 BSP so this could be tested. Joel also * added the new interrupt vectoring support in libcpu and * tried to better support the various interrupt controllers. * * Original MIP64ORION port by Craig Lebakken * COPYRIGHT (c) 1996 by Transition Networks Inc. * * To anyone who acknowledges that this file is provided "AS IS" * without any express or implied warranty: * permission to use, copy, modify, and distribute this file * for any purpose is hereby granted without fee, provided that * the above copyright notice and this notice appears in all * copies, and that the name of Transition Networks not be used in * advertising or publicity pertaining to distribution of the * software without specific, written prior permission. * Transition Networks makes no representations about the suitability * of this software for any purpose. * * COPYRIGHT (c) 1989-2001. * On-Line Applications Research Corporation (OAR). * * The license and distribution terms for this file may be * found in the file LICENSE in this distribution or at * http://www.OARcorp.com/rtems/license.html. * * $Id: cpu.h,v 1.1 2002/03/20 17:27:41 st5 Exp $ * * 03/23/04 Modified by Art Ferrer, NASA/GSFC, Code 582 * Added FP status register to Context_Control_fp. */ #ifndef __CPU_h #define __CPU_h #ifdef __cplusplus extern "C" { #endif #include /* pick up machine definitions */ #ifndef ASM #include #endif /* conditional compilation parameters */ /* * Should the calls to _Thread_Enable_dispatch be inlined? * * If TRUE, then they are inlined. * If FALSE, then a subroutine call is made. * * Basically this is an example of the classic trade-off of size * versus speed. Inlining the call (TRUE) typically increases the * size of RTEMS while speeding up the enabling of dispatching. 
* [NOTE: In general, the _Thread_Dispatch_disable_level will * only be 0 or 1 unless you are in an interrupt handler and that * interrupt handler invokes the executive.] When not inlined * something calls _Thread_Enable_dispatch which in turns calls * _Thread_Dispatch. If the enable dispatch is inlined, then * one subroutine call is avoided entirely.] */ #define CPU_INLINE_ENABLE_DISPATCH TRUE /* * Should the body of the search loops in _Thread_queue_Enqueue_priority * be unrolled one time? In unrolled each iteration of the loop examines * two "nodes" on the chain being searched. Otherwise, only one node * is examined per iteration. * * If TRUE, then the loops are unrolled. * If FALSE, then the loops are not unrolled. * * The primary factor in making this decision is the cost of disabling * and enabling interrupts (_ISR_Flash) versus the cost of rest of the * body of the loop. On some CPUs, the flash is more expensive than * one iteration of the loop body. In this case, it might be desirable * to unroll the loop. It is important to note that on some CPUs, this * code is the longest interrupt disable period in RTEMS. So it is * necessary to strike a balance when setting this parameter. */ #define CPU_UNROLL_ENQUEUE_PRIORITY TRUE /* * Does RTEMS manage a dedicated interrupt stack in software? * * If TRUE, then a stack is allocated in _Interrupt_Manager_initialization. * If FALSE, nothing is done. * * If the CPU supports a dedicated interrupt stack in hardware, * then it is generally the responsibility of the BSP to allocate it * and set it up. * * If the CPU does not support a dedicated interrupt stack, then * the porter has two options: (1) execute interrupts on the * stack of the interrupted task, and (2) have RTEMS manage a dedicated * interrupt stack. * * If this is TRUE, CPU_ALLOCATE_INTERRUPT_STACK should also be TRUE. * * Only one of CPU_HAS_SOFTWARE_INTERRUPT_STACK and * CPU_HAS_HARDWARE_INTERRUPT_STACK should be set to TRUE. It is * possible that both are FALSE for a particular CPU. Although it * is unclear what that would imply about the interrupt processing * procedure on that CPU. */ #define CPU_HAS_SOFTWARE_INTERRUPT_STACK FALSE /* * Does this CPU have hardware support for a dedicated interrupt stack? * * If TRUE, then it must be installed during initialization. * If FALSE, then no installation is performed. * * If this is TRUE, CPU_ALLOCATE_INTERRUPT_STACK should also be TRUE. * * Only one of CPU_HAS_SOFTWARE_INTERRUPT_STACK and * CPU_HAS_HARDWARE_INTERRUPT_STACK should be set to TRUE. It is * possible that both are FALSE for a particular CPU. Although it * is unclear what that would imply about the interrupt processing * procedure on that CPU. */ #define CPU_HAS_HARDWARE_INTERRUPT_STACK FALSE /* * Does RTEMS allocate a dedicated interrupt stack in the Interrupt Manager? * * If TRUE, then the memory is allocated during initialization. * If FALSE, then the memory is allocated during initialization. * * This should be TRUE is CPU_HAS_SOFTWARE_INTERRUPT_STACK is TRUE * or CPU_INSTALL_HARDWARE_INTERRUPT_STACK is TRUE. */ #define CPU_ALLOCATE_INTERRUPT_STACK FALSE /* * Does the RTEMS invoke the user's ISR with the vector number and * a pointer to the saved interrupt frame (1) or just the vector * number (0)? * */ #define CPU_ISR_PASSES_FRAME_POINTER 1 /* * Does the CPU have hardware floating point? * * If TRUE, then the RTEMS_FLOATING_POINT task attribute is supported. * If FALSE, then the RTEMS_FLOATING_POINT task attribute is ignored. 
* * If there is a FP coprocessor such as the i387 or mc68881, then * the answer is TRUE. * * The macro name "MIPS_HAS_FPU" should be made CPU specific. * It indicates whether or not this CPU model has FP support. For * example, it would be possible to have an i386_nofp CPU model * which set this to false to indicate that you have an i386 without * an i387 and wish to leave floating point support out of RTEMS. */ #if ( MIPS_HAS_FPU == 1 ) #define CPU_HARDWARE_FP TRUE #else #define CPU_HARDWARE_FP FALSE #endif /* * Are all tasks RTEMS_FLOATING_POINT tasks implicitly? * * If TRUE, then the RTEMS_FLOATING_POINT task attribute is assumed. * If FALSE, then the RTEMS_FLOATING_POINT task attribute is followed. * * So far, the only CPU in which this option has been used is the * HP PA-RISC. The HP C compiler and gcc both implicitly use the * floating point registers to perform integer multiplies. If * a function which you would not think utilize the FP unit DOES, * then one can not easily predict which tasks will use the FP hardware. * In this case, this option should be TRUE. * * If CPU_HARDWARE_FP is FALSE, then this should be FALSE as well. */ #define CPU_ALL_TASKS_ARE_FP FALSE /* * Should the IDLE task have a floating point context? * * If TRUE, then the IDLE task is created as a RTEMS_FLOATING_POINT task * and it has a floating point context which is switched in and out. * If FALSE, then the IDLE task does not have a floating point context. * * Setting this to TRUE negatively impacts the time required to preempt * the IDLE task from an interrupt because the floating point context * must be saved as part of the preemption. */ #define CPU_IDLE_TASK_IS_FP FALSE /* * Should the saving of the floating point registers be deferred * until a context switch is made to another different floating point * task? * * If TRUE, then the floating point context will not be stored until * necessary. It will remain in the floating point registers and not * disturned until another floating point task is switched to. * * If FALSE, then the floating point context is saved when a floating * point task is switched out and restored when the next floating point * task is restored. The state of the floating point registers between * those two operations is not specified. * * If the floating point context does NOT have to be saved as part of * interrupt dispatching, then it should be safe to set this to TRUE. * * Setting this flag to TRUE results in using a different algorithm * for deciding when to save and restore the floating point context. * The deferred FP switch algorithm minimizes the number of times * the FP context is saved and restored. The FP context is not saved * until a context switch is made to another, different FP task. * Thus in a system with only one FP task, the FP context will never * be saved or restored. */ #define CPU_USE_DEFERRED_FP_SWITCH TRUE /* * Does this port provide a CPU dependent IDLE task implementation? * * If TRUE, then the routine _CPU_Internal_threads_Idle_thread_body * must be provided and is the default IDLE thread body instead of * _Internal_threads_Idle_thread_body. * * If FALSE, then use the generic IDLE thread body if the BSP does * not provide one. * * This is intended to allow for supporting processors which have * a low power or idle mode. When the IDLE thread is executed, then * the CPU can be powered down. * * The order of precedence for selecting the IDLE thread body is: * * 1. BSP provided * 2. CPU dependent (if provided) * 3. 
generic (if no BSP and no CPU dependent) */ /* we can use the low power wait instruction for the IDLE thread */ #define CPU_PROVIDES_IDLE_THREAD_BODY TRUE /* * Does the stack grow up (toward higher addresses) or down * (toward lower addresses)? * * If TRUE, then the grows upward. * If FALSE, then the grows toward smaller addresses. */ /* our stack grows down */ #define CPU_STACK_GROWS_UP FALSE /* * The following is the variable attribute used to force alignment * of critical RTEMS structures. On some processors it may make * sense to have these aligned on tighter boundaries than * the minimum requirements of the compiler in order to have as * much of the critical data area as possible in a cache line. * * The placement of this macro in the declaration of the variables * is based on the syntactically requirements of the GNU C * "__attribute__" extension. For example with GNU C, use * the following to force a structures to a 32 byte boundary. * * __attribute__ ((aligned (32))) * * NOTE: Currently only the Priority Bit Map table uses this feature. * To benefit from using this, the data must be heavily * used so it will stay in the cache and used frequently enough * in the executive to justify turning this on. */ /* our cache line size is 16 bytes */ #if __GNUC__ #define CPU_STRUCTURE_ALIGNMENT __attribute__ ((aligned (16))) #else #define CPU_STRUCTURE_ALIGNMENT #endif /* * Define what is required to specify how the network to host conversion * routines are handled. */ #define CPU_HAS_OWN_HOST_TO_NETWORK_ROUTINES FALSE #define CPU_BIG_ENDIAN TRUE #define CPU_LITTLE_ENDIAN FALSE /* * The following defines the number of bits actually used in the * interrupt field of the task mode. How those bits map to the * CPU interrupt levels is defined by the routine _CPU_ISR_Set_level(). */ #define CPU_MODES_INTERRUPT_MASK 0x00000001 /* * Processor defined structures * * Examples structures include the descriptor tables from the i386 * and the processor control structure on the i960ca. */ /* may need to put some structures here. */ /* * Contexts * * Generally there are 2 types of context to save. * 1. Interrupt registers to save * 2. Task level registers to save * * This means we have the following 3 context items: * 1. task level context stuff:: Context_Control * 2. floating point task stuff:: Context_Control_fp * 3. special interrupt level context :: Context_Control_interrupt * * On some processors, it is cost-effective to save only the callee * preserved registers during a task context switch. This means * that the ISR code needs to save those registers which do not * persist across function calls. It is not mandatory to make this * distinctions between the caller/callee saves registers for the * purpose of minimizing context saved during task switch and on interrupts. * If the cost of saving extra registers is minimal, simplicity is the * choice. Save the same context on interrupt entry as for tasks in * this case. * * Additionally, if gdb is to be made aware of RTEMS tasks for this CPU, then * care should be used in designing the context area. * * On some CPUs with hardware floating point support, the Context_Control_fp * structure will not be used or it simply consist of an array of a * fixed number of bytes. This is done when the floating point context * is dumped by a "FP save context" type instruction and the format * is not really defined by the CPU. In this case, there is no need * to figure out the exact format -- only the size. 
Of course, although * this is enough information for RTEMS, it is probably not enough for * a debugger such as gdb. But that is another problem. */ /* WARNING: If this structure is modified, the constants in cpu.h must be updated. */ #if __mips == 1 #define __MIPS_REGISTER_TYPE unsigned32 #define __MIPS_FPU_REGISTER_TYPE unsigned32 #elif __mips == 3 #define __MIPS_REGISTER_TYPE unsigned64 #define __MIPS_FPU_REGISTER_TYPE unsigned64 #else #error "mips register size: unknown architecture level!!" #endif typedef struct { __MIPS_REGISTER_TYPE s0; __MIPS_REGISTER_TYPE s1; __MIPS_REGISTER_TYPE s2; __MIPS_REGISTER_TYPE s3; __MIPS_REGISTER_TYPE s4; __MIPS_REGISTER_TYPE s5; __MIPS_REGISTER_TYPE s6; __MIPS_REGISTER_TYPE s7; __MIPS_REGISTER_TYPE sp; __MIPS_REGISTER_TYPE fp; __MIPS_REGISTER_TYPE ra; __MIPS_REGISTER_TYPE c0_sr; /* __MIPS_REGISTER_TYPE c0_epc; */ } Context_Control; /* WARNING: If this structure is modified, the constants in cpu.h * must also be updated. */ typedef struct { #if ( CPU_HARDWARE_FP == TRUE ) __MIPS_FPU_REGISTER_TYPE fp0; __MIPS_FPU_REGISTER_TYPE fp1; __MIPS_FPU_REGISTER_TYPE fp2; __MIPS_FPU_REGISTER_TYPE fp3; __MIPS_FPU_REGISTER_TYPE fp4; __MIPS_FPU_REGISTER_TYPE fp5; __MIPS_FPU_REGISTER_TYPE fp6; __MIPS_FPU_REGISTER_TYPE fp7; __MIPS_FPU_REGISTER_TYPE fp8; __MIPS_FPU_REGISTER_TYPE fp9; __MIPS_FPU_REGISTER_TYPE fp10; __MIPS_FPU_REGISTER_TYPE fp11; __MIPS_FPU_REGISTER_TYPE fp12; __MIPS_FPU_REGISTER_TYPE fp13; __MIPS_FPU_REGISTER_TYPE fp14; __MIPS_FPU_REGISTER_TYPE fp15; __MIPS_FPU_REGISTER_TYPE fp16; __MIPS_FPU_REGISTER_TYPE fp17; __MIPS_FPU_REGISTER_TYPE fp18; __MIPS_FPU_REGISTER_TYPE fp19; __MIPS_FPU_REGISTER_TYPE fp20; __MIPS_FPU_REGISTER_TYPE fp21; __MIPS_FPU_REGISTER_TYPE fp22; __MIPS_FPU_REGISTER_TYPE fp23; __MIPS_FPU_REGISTER_TYPE fp24; __MIPS_FPU_REGISTER_TYPE fp25; __MIPS_FPU_REGISTER_TYPE fp26; __MIPS_FPU_REGISTER_TYPE fp27; __MIPS_FPU_REGISTER_TYPE fp28; __MIPS_FPU_REGISTER_TYPE fp29; __MIPS_FPU_REGISTER_TYPE fp30; __MIPS_FPU_REGISTER_TYPE fp31; __MIPS_FPU_REGISTER_TYPE fpcs; #endif } Context_Control_fp; /* This struct reflects the stack frame employed in ISR_Handler. Note that the ISR routine doesn't save all registers to this frame, so cpu_asm.S should be consulted to see if the registers you're interested in are actually there. */ typedef struct { #if __mips == 1 unsigned int regs[80]; #endif #if __mips == 3 unsigned int regs[94]; #endif } CPU_Interrupt_frame; /* * The following table contains the information required to configure * the mips processor specific parameters. */ typedef struct { void (*pretasking_hook)( void ); void (*predriver_hook)( void ); void (*postdriver_hook)( void ); void (*idle_task)( void ); boolean do_zero_of_workspace; unsigned32 idle_task_stack_size; unsigned32 interrupt_stack_size; unsigned32 extra_mpci_receive_server_stack; void * (*stack_allocate_hook)( unsigned32 ); void (*stack_free_hook)( void* ); /* end of fields required on all CPUs */ unsigned32 clicks_per_microsecond; } rtems_cpu_table; /* * Macros to access required entires in the CPU Table are in * the file rtems/system.h. */ /* * Macros to access MIPS specific additions to the CPU Table */ #define rtems_cpu_configuration_get_clicks_per_microsecond() \ (_CPU_Table.clicks_per_microsecond) /* * This variable is optional. It is used on CPUs on which it is difficult * to generate an "uninitialized" FP context. It is filled in by * _CPU_Initialize and copied into the task's FP context area during * _CPU_Context_Initialize. 
*/ SCORE_EXTERN Context_Control_fp _CPU_Null_fp_context; /* * On some CPUs, RTEMS supports a software managed interrupt stack. * This stack is allocated by the Interrupt Manager and the switch * is performed in _ISR_Handler. These variables contain pointers * to the lowest and highest addresses in the chunk of memory allocated * for the interrupt stack. Since it is unknown whether the stack * grows up or down (in general), this give the CPU dependent * code the option of picking the version it wants to use. * * NOTE: These two variables are required if the macro * CPU_HAS_SOFTWARE_INTERRUPT_STACK is defined as TRUE. */ SCORE_EXTERN void *_CPU_Interrupt_stack_low; SCORE_EXTERN void *_CPU_Interrupt_stack_high; /* * With some compilation systems, it is difficult if not impossible to * call a high-level language routine from assembly language. This * is especially true of commercial Ada compilers and name mangling * C++ ones. This variable can be optionally defined by the CPU porter * and contains the address of the routine _Thread_Dispatch. This * can make it easier to invoke that routine at the end of the interrupt * sequence (if a dispatch is necessary). * SCORE_EXTERN void (*_CPU_Thread_dispatch_pointer)(); * * NOTE: Not needed on this port. */ /* * Nothing prevents the porter from declaring more CPU specific variables. */ /* XXX: if needed, put more variables here */ /* * The size of the floating point context area. On some CPUs this * will not be a "sizeof" because the format of the floating point * area is not defined -- only the size is. This is usually on * CPUs with a "floating point save context" instruction. */ #define CPU_CONTEXT_FP_SIZE sizeof( Context_Control_fp ) /* * Amount of extra stack (above minimum stack size) required by * system initialization thread. Remember that in a multiprocessor * system the system intialization thread becomes the MP server thread. */ #define CPU_MPCI_RECEIVE_SERVER_EXTRA_STACK 0 /* * This defines the number of entries in the ISR_Vector_table managed * by RTEMS. */ extern unsigned int mips_interrupt_number_of_vectors; #define CPU_INTERRUPT_NUMBER_OF_VECTORS (mips_interrupt_number_of_vectors) #define CPU_INTERRUPT_MAXIMUM_VECTOR_NUMBER (CPU_INTERRUPT_NUMBER_OF_VECTORS - 1) /* * Should be large enough to run all RTEMS tests. This insures * that a "reasonable" small application should not have any problems. */ #define CPU_STACK_MINIMUM_SIZE (2048*sizeof(unsigned32)) /* * CPU's worst alignment requirement for data types on a byte boundary. This * alignment does not take into account the requirements for the stack. */ #define CPU_ALIGNMENT 8 /* * This number corresponds to the byte alignment requirement for the * heap handler. This alignment requirement may be stricter than that * for the data types alignment specified by CPU_ALIGNMENT. It is * common for the heap to follow the same alignment requirement as * CPU_ALIGNMENT. If the CPU_ALIGNMENT is strict enough for the heap, * then this should be set to CPU_ALIGNMENT. * * NOTE: This does not have to be a power of 2. It does have to * be greater or equal to than CPU_ALIGNMENT. */ #define CPU_HEAP_ALIGNMENT CPU_ALIGNMENT /* * This number corresponds to the byte alignment requirement for memory * buffers allocated by the partition manager. This alignment requirement * may be stricter than that for the data types alignment specified by * CPU_ALIGNMENT. It is common for the partition to follow the same * alignment requirement as CPU_ALIGNMENT. 
If the CPU_ALIGNMENT is strict * enough for the partition, then this should be set to CPU_ALIGNMENT. * * NOTE: This does not have to be a power of 2. It does have to * be greater or equal to than CPU_ALIGNMENT. */ #define CPU_PARTITION_ALIGNMENT CPU_ALIGNMENT /* * This number corresponds to the byte alignment requirement for the * stack. This alignment requirement may be stricter than that for the * data types alignment specified by CPU_ALIGNMENT. If the CPU_ALIGNMENT * is strict enough for the stack, then this should be set to 0. * * NOTE: This must be a power of 2 either 0 or greater than CPU_ALIGNMENT. */ #define CPU_STACK_ALIGNMENT CPU_ALIGNMENT /* * ISR handler macros */ /* * Support routine to initialize the RTEMS vector table after it is allocated. */ #define _CPU_Initialize_vectors() /* * Disable all interrupts for an RTEMS critical section. The previous * level is returned in _level. */ #define _CPU_ISR_Disable( _level ) \ do { \ mips_get_sr( _level ); \ mips_set_sr( (_level) & ~SR_INTERRUPT_ENABLE_BITS ); \ } while(0) /* * Enable interrupts to the previous level (returned by _CPU_ISR_Disable). * This indicates the end of an RTEMS critical section. The parameter * _level is not modified. */ #define _CPU_ISR_Enable( _level ) \ do { \ mips_set_sr(_level); \ } while(0) /* * This temporarily restores the interrupt to _level before immediately * disabling them again. This is used to divide long RTEMS critical * sections into two or more parts. The parameter _level is not * modified. */ #define _CPU_ISR_Flash( _xlevel ) \ do { \ unsigned int _scratch; \ _CPU_ISR_Enable( _xlevel ); \ _CPU_ISR_Disable( _scratch ); \ } while(0) /* * Map interrupt level in task mode onto the hardware that the CPU * actually provides. Currently, interrupt levels which do not * map onto the CPU in a generic fashion are undefined. Someday, * it would be nice if these were "mapped" by the application * via a callout. For example, m68k has 8 levels 0 - 7, levels * 8 - 255 would be available for bsp/application specific meaning. * This could be used to manage a programmable interrupt controller * via the rtems_task_mode directive. * * On the MIPS, 0 is all on. Non-zero is all off. This only * manipulates the IEC. */ unsigned32 _CPU_ISR_Get_level( void ); /* in cpu.c */ void _CPU_ISR_Set_level( unsigned32 ); /* in cpu.c */ /* end of ISR handler macros */ /* Context handler macros */ /* * Initialize the context to a state suitable for starting a * task after a context restore operation. Generally, this * involves: * * - setting a starting address * - preparing the stack * - preparing the stack and frame pointers * - setting the proper interrupt level in the context * - initializing the floating point context * * This routine generally does not set any unnecessary register * in the context. The state of the "general data" registers is * undefined at task start time. * * NOTE: This is_fp parameter is TRUE if the thread is to be a floating * point thread. This is typically only used on CPUs where the * FPU may be easily disabled by software such as on the SPARC * where the PSR contains an enable FPU bit. 
*/ #define _CPU_Context_Initialize( _the_context, _stack_base, _size, \ _isr, _entry_point, _is_fp ) \ { \ unsigned32 _stack_tmp = \ (unsigned32)(_stack_base) + (_size) - CPU_STACK_ALIGNMENT; \ _stack_tmp &= ~(CPU_STACK_ALIGNMENT - 1); \ (_the_context)->sp = _stack_tmp; \ (_the_context)->fp = _stack_tmp; \ (_the_context)->ra = (unsigned64)_entry_point; \ (_the_context)->c0_sr = ((_the_context)->c0_sr & 0x0fff0000) | \ ((_isr)?0xff00:0xff01) | \ ((_is_fp)?0x30000000:0x10000000); \ } /* * This routine is responsible for somehow restarting the currently * executing task. If you are lucky, then all that is necessary * is restoring the context. Otherwise, there will need to be * a special assembly routine which does something special in this * case. Context_Restore should work most of the time. It will * not work if restarting self conflicts with the stack frame * assumptions of restoring a context. */ #define _CPU_Context_Restart_self( _the_context ) \ _CPU_Context_restore( (_the_context) ); /* * The purpose of this macro is to allow the initial pointer into * A floating point context area (used to save the floating point * context) to be at an arbitrary place in the floating point * context area. * * This is necessary because some FP units are designed to have * their context saved as a stack which grows into lower addresses. * Other FP units can be saved by simply moving registers into offsets * from the base of the context area. Finally some FP units provide * a "dump context" instruction which could fill in from high to low * or low to high based on the whim of the CPU designers. */ #define _CPU_Context_Fp_start( _base, _offset ) \ ( (void *) _Addresses_Add_offset( (_base), (_offset) ) ) /* * This routine initializes the FP context area passed to it to. * There are a few standard ways in which to initialize the * floating point context. The code included for this macro assumes * that this is a CPU in which a "initial" FP context was saved into * _CPU_Null_fp_context and it simply copies it to the destination * context passed to it. * * Other models include (1) not doing anything, and (2) putting * a "null FP status word" in the correct place in the FP context. */ #if ( CPU_HARDWARE_FP == TRUE ) #define _CPU_Context_Initialize_fp( _destination ) \ { \ *((Context_Control_fp *) *((void **) _destination)) = _CPU_Null_fp_context; \ } #endif /* end of Context handler macros */ /* Fatal Error manager macros */ /* * This routine copies _error into a known place -- typically a stack * location or a register, optionally disables interrupts, and * halts/stops the CPU. */ #define _CPU_Fatal_halt( _error ) \ do { \ unsigned int _level; \ _CPU_ISR_Disable(_level); \ loop: goto loop; \ } while (0) extern void mips_break( int error ); /* Bitfield handler macros */ /* * This routine sets _output to the bit number of the first bit * set in _value. _value is of CPU dependent type Priority_Bit_map_control. * This type may be either 16 or 32 bits wide although only the 16 * least significant bits will be used. * * There are a number of variables in using a "find first bit" type * instruction. * * (1) What happens when run on a value of zero? * (2) Bits may be numbered from MSB to LSB or vice-versa. * (3) The numbering may be zero or one based. * (4) The "find first bit" instruction may search from MSB or LSB. * * RTEMS guarantees that (1) will never happen so it is not a concern. * (2),(3), (4) are handled by the macros _CPU_Priority_mask() and * _CPU_Priority_bits_index(). 
These three form a set of routines * which must logically operate together. Bits in the _value are * set and cleared based on masks built by _CPU_Priority_mask(). * The basic major and minor values calculated by _Priority_Major() * and _Priority_Minor() are "massaged" by _CPU_Priority_bits_index() * to properly range between the values returned by the "find first bit" * instruction. This makes it possible for _Priority_Get_highest() to * calculate the major and directly index into the minor table. * This mapping is necessary to ensure that 0 (a high priority major/minor) * is the first bit found. * * This entire "find first bit" and mapping process depends heavily * on the manner in which a priority is broken into a major and minor * components with the major being the 4 MSB of a priority and minor * the 4 LSB. Thus (0 << 4) + 0 corresponds to priority 0 -- the highest * priority. And (15 << 4) + 14 corresponds to priority 254 -- the next * to the lowest priority. * * If your CPU does not have a "find first bit" instruction, then * there are ways to make do without it. Here are a handful of ways * to implement this in software: * * - a series of 16 bit test instructions * - a "binary search using if's" * - _number = 0 * if _value > 0x00ff * _value >>=8 * _number = 8; * * if _value > 0x0000f * _value >=8 * _number += 4 * * _number += bit_set_table[ _value ] * * where bit_set_table[ 16 ] has values which indicate the first * bit set */ #define CPU_USE_GENERIC_BITFIELD_CODE TRUE #define CPU_USE_GENERIC_BITFIELD_DATA TRUE #if (CPU_USE_GENERIC_BITFIELD_CODE == FALSE) #define _CPU_Bitfield_Find_first_bit( _value, _output ) \ { \ (_output) = 0; /* do something to prevent warnings */ \ } #endif /* end of Bitfield handler macros */ /* * This routine builds the mask which corresponds to the bit fields * as searched by _CPU_Bitfield_Find_first_bit(). See the discussion * for that routine. */ #if (CPU_USE_GENERIC_BITFIELD_CODE == FALSE) #define _CPU_Priority_Mask( _bit_number ) \ ( 1 << (_bit_number) ) #endif /* * This routine translates the bit numbers returned by * _CPU_Bitfield_Find_first_bit() into something suitable for use as * a major or minor component of a priority. See the discussion * for that routine. */ #if (CPU_USE_GENERIC_BITFIELD_CODE == FALSE) #define _CPU_Priority_bits_index( _priority ) \ (_priority) #endif /* end of Priority handler macros */ /* functions */ /* * _CPU_Initialize * * This routine performs CPU dependent initialization. */ void _CPU_Initialize( rtems_cpu_table *cpu_table, void (*thread_dispatch) ); /* * _CPU_ISR_install_raw_handler * * This routine installs a "raw" interrupt handler directly into the * processor's vector table. */ void _CPU_ISR_install_raw_handler( unsigned32 vector, proc_ptr new_handler, proc_ptr *old_handler ); /* * _CPU_ISR_install_vector * * This routine installs an interrupt vector. */ void _CPU_ISR_install_vector( unsigned32 vector, proc_ptr new_handler, proc_ptr *old_handler ); /* * _CPU_Install_interrupt_stack * * This routine installs the hardware interrupt stack pointer. * * NOTE: It need only be provided if CPU_HAS_HARDWARE_INTERRUPT_STACK * is TRUE. */ void _CPU_Install_interrupt_stack( void ); /* * _CPU_Internal_threads_Idle_thread_body * * This routine is the CPU dependent IDLE thread body. * * NOTE: It need only be provided if CPU_PROVIDES_IDLE_THREAD_BODY * is TRUE. */ void _CPU_Thread_Idle_body( void ); /* * _CPU_Context_switch * * This routine switches from the run context to the heir context. 
 */
void _CPU_Context_switch( Context_Control *run, Context_Control *heir );

/*
 *  _CPU_Context_restore
 *
 *  This routine is generally used only to restart self in an
 *  efficient manner.  It may simply be a label in _CPU_Context_switch.
 *
 *  NOTE: May be unnecessary to reload some registers.
 */
void _CPU_Context_restore( Context_Control *new_context );

/*
 *  _CPU_Context_save_fp
 *
 *  This routine saves the floating point context passed to it.
 */
void _CPU_Context_save_fp( void **fp_context_ptr );

/*
 *  _CPU_Context_restore_fp
 *
 *  This routine restores the floating point context passed to it.
 */
void _CPU_Context_restore_fp( void **fp_context_ptr );

/*  The following routine swaps the endian format of an unsigned int.
 *  It must be static because it is referenced indirectly.
 *
 *  This version will work on any processor, but if there is a better
 *  way for your CPU PLEASE use it.  The most common way to do this is to:
 *
 *     swap least significant two bytes with 16-bit rotate
 *     swap upper and lower 16-bits
 *     swap most significant two bytes with 16-bit rotate
 *
 *  Some CPUs have special instructions which swap a 32-bit quantity in
 *  a single instruction (e.g. i486).  It is probably best to avoid
 *  an "endian swapping control bit" in the CPU.  One good reason is
 *  that interrupts would probably have to be disabled to insure that
 *  an interrupt does not try to access the same "chunk" with the wrong
 *  endian.  Another good reason is that on some CPUs, the endian bit
 *  endianness for ALL fetches -- both code and data -- so the code
 *  will be fetched incorrectly.
 */

static inline unsigned int CPU_swap_u32( unsigned int value )
{
  unsigned32 byte1, byte2, byte3, byte4, swapped;

  byte4 = (value >> 24) & 0xff;
  byte3 = (value >> 16) & 0xff;
  byte2 = (value >> 8)  & 0xff;
  byte1 =  value        & 0xff;

  swapped = (byte1 << 24) | (byte2 << 16) | (byte3 << 8) | byte4;
  return( swapped );
}

#define CPU_swap_u16( value ) \
  (((value&0xff) << 8) | ((value >> 8)&0xff))

#ifdef __cplusplus
}
#endif

#endif
/* ---------------------------------- End of file cpu.h with fix ---------------------------------------------------*/
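
For readers comparing the two cpu.h listings, the only functional change is the fpcs member added at the end of Context_Control_fp; the companion save and restore of that word belongs in the _CPU_Context_save_fp and _CPU_Context_restore_fp routines of cpu_asm.S. The sketch below is illustrative only and is not part of the flight build: the names fp_context_sketch, read_fcsr, write_fcsr, save_fp_status, and restore_fp_status are invented for this memo, and GCC inline assembly stands in for the hand-coded MIPS routines. It shows the intent of the fix: capture the FP Control/Status register (coprocessor 1 control register 31, which holds the FP condition bit) when a task's FP context is saved, and write it back when that context is restored, so a preempted task resumes with its own FP compare result.

/* ----------------------------- Start of illustrative sketch (not flight code) ---------------------------------- */
#include <stdint.h>

/* Memo-only stand-in for Context_Control_fp; only the new member matters here. */
typedef struct {
  /* fp0 .. fp31 would precede this member, as in Context_Control_fp above */
  uint32_t fpcs;   /* FP Control/Status register word added by the fix */
} fp_context_sketch;

/* Copy the FP Control/Status register into a general purpose register (MIPS cfc1). */
static inline uint32_t read_fcsr( void )
{
  uint32_t v;
  __asm__ volatile( "cfc1 %0, $31" : "=r"( v ) );
  return v;
}

/* Copy a general purpose register back into the FP Control/Status register (MIPS ctc1). */
static inline void write_fcsr( uint32_t v )
{
  __asm__ volatile( "ctc1 %0, $31" : : "r"( v ) );
}

/* Outgoing task: capture FCSR, including the condition bit set by an FP compare. */
static void save_fp_status( fp_context_sketch *ctx )
{
  ctx->fpcs = read_fcsr();
}

/* Incoming task: restore FCSR so a later FP conditional branch sees this task's own condition bit. */
static void restore_fp_status( const fp_context_sketch *ctx )
{
  write_fcsr( ctx->fpcs );
}
/* ----------------------------- End of illustrative sketch (not flight code) ------------------------------------ */

In the BSP itself, the equivalent cfc1/ctc1 pair would be added to the floating point save and restore sequences in cpu_asm.S, storing the word at the offset of the new fpcs field; the logic analyzer verification called out in the resolution is what confirms that both halves of that pair execute on every FP context switch.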