General PIC capabilities
The PIC is very flexible in its usage. As an example, I once programmed a system that had an 80188 CPU, an 8259 PIC, and eight 16450 UART serial chips. I configured the PIC to "rotate" the priority of the interrupts: after one of the UARTs interrupted the CPU, that UART was demoted to the bottom of the list, allowing the others a fairer share of service.
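The code for that system is long gone, but priority rotation is part of the standard 8259A command set (its OCW2 commands). Below is a minimal sketch in C of the idea, written against a PC-style PIC at I/O port 20h rather than the original 80188 design; the port number and helper names are illustrative only. (The later sketches on this page reuse the same two port-I/O helpers.)

    #include <stdint.h>

    /* Port-I/O helpers: the usual freestanding-kernel inline-assembly wrappers. */
    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }
    static inline uint8_t inb(uint16_t port)
    {
        uint8_t v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    #define PIC1_CMD 0x20   /* Command port of a PC's first ("Master") 8259 */

    /* At the end of an IRQ handler, acknowledge the interrupt with a
       "Rotate on Specific EOI" command (OCW2 = E0h + level). This both ends
       the interrupt and demotes that IRQ to the lowest priority, giving the
       other devices a fairer turn next time. */
    static inline void eoi_and_demote(uint8_t irq_level)
    {
        outb(PIC1_CMD, 0xE0 | (irq_level & 0x07));
    }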
But it would be true to say that well over 99% of all 8259 PICs (or their equivalents) are installed in PCs, which use the PIC in only one of its many modes. Today the PIC's functionality, along with that of many other previously separate chips, has been moved into a single all-in-one support chip called the Southbridge.
The PC: IBM made a Whoops! (or two... or...)
IBM's decision in 1980 to base its new Personal Computer on Intel's architecture arguably made Intel what it is today: the world's largest chip manufacturer. Not only the processor but also the support chips (the PIC was just one of the many Intel chips used) all worked together, were readily available, and were cheap. IBM just had to put them all together and make some design decisions: but at the time they weren't aware of how fundamentally some of those decisions were going to affect the future.
Intel specification
Intel's 8086/8088 processor supported up to 256 interrupts, up to 64 of which could be external interrupts (called Interrupt ReQuests or IRQs) using the 8259 PIC (see below). Intel predefined the first few interrupts (INTs) of the CPU to signal internal exceptions, such as Division by Zero or Debug, and quietly documented that they reserved the first 32 interrupts for future use - even though only the first 5 were currently defined.
IBM either missed the documented reservation, or ignored it since those interrupts weren't actually in use. They promptly and arbitrarily assigned various interrupts from 5 upwards for their own use: system calls, hardware interrupts (IRQs), and even simple pointers to data tables. (Tip: don't ever make an INT call to one of the latter - unless you want to crash your computer!) For example, INT 5, the first one not in use, was adopted to perform a Screen Print (that smacks of an early debugging requirement to me...). At least when Microsoft added their own interrupts in MS-DOS they started from 32 (20h) - but that may be because IBM had already used most of the lower ones!
The effect of this decision became apparent when the PC was upgraded to use the 80286 and 80386. These newer processors used more of those Intel-reserved interrupts, which meant that executing a simple BOUND instruction on a PC could now cause the printer to burst into life! Worse, the IRQs (assigned to INTs 8 through 15 on the PC) overlapped with more of the CPU's internal exceptions, complicating their handlers. Was that INT 13 (0Dh) caused by an internal General Protection Fault, or by the Ethernet card?
This is why one of the first things that an OS writer on the modern PC platform needs to do is reprogram the PIC to get the IRQs away from the Intel-reserved exceptions! Blame it on IBM...
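That reprogramming is done by re-running the PICs' initialisation sequence with a new vector offset. A minimal sketch in C, using the conventional PC I/O port addresses (a production version would also preserve each PIC's interrupt mask across the re-initialisation):

    #include <stdint.h>

    /* outb() is the port-I/O helper from the first sketch. */
    extern void outb(uint16_t port, uint8_t val);

    #define PIC1_CMD  0x20
    #define PIC1_DATA 0x21
    #define PIC2_CMD  0xA0
    #define PIC2_DATA 0xA1

    void pic_remap(uint8_t master_offset, uint8_t slave_offset)
    {
        outb(PIC1_CMD,  0x11);          /* ICW1: begin initialisation, cascade mode, ICW4 follows */
        outb(PIC2_CMD,  0x11);
        outb(PIC1_DATA, master_offset); /* ICW2: Master's IRQs 0-7 use vectors offset..offset+7 */
        outb(PIC2_DATA, slave_offset);  /* ICW2: Slave's IRQs 8-15 likewise */
        outb(PIC1_DATA, 0x04);          /* ICW3: Master has a Slave on its IRQ 2 input */
        outb(PIC2_DATA, 0x02);          /* ICW3: Slave's cascade identity is 2 */
        outb(PIC1_DATA, 0x01);          /* ICW4: 8086 mode, normal (explicit) EOI */
        outb(PIC2_DATA, 0x01);
    }

    /* For example, pic_remap(0x20, 0x28) moves IRQs 0-15 to vectors 20h-2Fh,
       safely clear of the 32 Intel-reserved exceptions. */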
IRQ sharing
Another design decision - or at least a decision not to be more prescriptive in their documentation - also had an effect on the future of the new PC. The PC had an expansion bus where extra circuit boards could be installed, adding features such as communications or storage to the basic PC. Some of these expansion cards might need to interrupt the CPU, so the bus carried IRQs 2 through 7 (IRQs 0 and 1 were already dedicated to the timer and keyboard on the motherboard). Properly designed cards could use these IRQs in a cooperative manner, sharing the same interrupt with each other.
Unfortunately, most cards weren't properly designed in this respect - they assumed that they were the only one using a particular IRQ. That very quickly used up all the available IRQs, making adding new cards a tricky proposition for unskilled users. Movable jumpers on the expansion cards were needed to select between the available IRQs - and then the drivers for those cards needed to be told of the jumper position. The invention of Plug-and-Play (PnP) could arguably be ascribed to this situation.
Level- versus Edge-triggered Interrupts
The final design compromise - with respect to IRQs, anyway - was actually one without a correct answer. The PIC supports two kinds of Interrupt modes: level-triggered and edge-triggered. Think of a pupil sitting in a classroom wanting to attract the attention of the teacher. She could raise her hand and keep it up until the teacher acknowledged her, or she could raise it and quickly lower it again, hoping to catch the teacher's eye. If the teacher had his back turned (busy doing something else) he may completely miss the second type, but with the first type the raised hand could obscure someone sitting behind the girl.
A level-triggered interrupt is raised for as long as the hardware is requesting service for that interrupt. Until the CPU services the interrupt - which is what allows the hardware to lower the interrupt line again - the CPU cannot be told about other interrupts that may be pending on the same line.
However, an edge-triggered interrupt merely pulses the interrupt line, and if the CPU misses the pulse the device may go unserviced. Also, if the device decides it doesn't need an interrupt after all, there's no way for it to "take back" its interrupt pulse.
The designers of the original IBM PC decided to go for edge-triggered interrupts. Later designers of the PCI bus decided to go for level-triggered interrupts. Either way, the programmer who is writing the interrupt handler (you!) needs to carefully handle the device(s) assigned to an interrupt to discover if it actually needs servicing: never assume that the interrupt came from 'your' device, and always assume that there may be other devices hanging on the same interrupt.
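In practical terms, a handler for a (potentially shared) interrupt ends up looking something like the following sketch. The needs_service and service hooks are hypothetical placeholders - how a device reports "that was me" and how it is then serviced depends entirely on the hardware in question.

    #include <stdbool.h>

    /* Hypothetical per-device hooks for one (possibly shared) interrupt line. */
    typedef struct {
        bool (*needs_service)(void *ctx);  /* e.g. read the device's status register */
        void (*service)(void *ctx);        /* handle and clear the request */
        void *ctx;
    } shared_irq_device;

    static shared_irq_device devices_on_irq5[4]; /* everything wired to this IRQ */
    static int device_count_irq5;

    void irq5_handler(void)
    {
        /* Never assume the interrupt came from 'your' device: ask every device
           hanging on this line, and service whichever ones actually raised it. */
        for (int i = 0; i < device_count_irq5; i++) {
            shared_irq_device *d = &devices_on_irq5[i];
            if (d->needs_service(d->ctx))
                d->service(d->ctx);
        }
        /* ...then acknowledge the PIC, as described under "Interrupt Prioritisation" below. */
    }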
One more issue to do with interrupts is the concept of a "spurious" interrupt. If a PIC sees an interrupt, it will immediately pass it on to the CPU - which may have interrupts disabled. By the time the CPU gets back to the PIC, the interrupt may have gone away (maybe the edge-triggered pulse was too short, or maybe the level-triggered interrupt simply stopped, or maybe electrical noise made a phantom spike). But the PIC is committed: it has to tell the CPU something. What it does is signal IRQ 7. The IRQ 7 handler then has to allow for the possibility that 'its' associated device may not have been the cause of the interrupt: it must examine the PIC to see whether there really was an interrupt, and simply ignore it if there wasn't. (Also, see "Working with the Slave" below.)
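The standard defence is to ask the PIC which interrupts it believes are actually in service before doing anything else. A sketch, again assuming the PC's Master PIC at port 20h:

    #include <stdint.h>

    /* outb()/inb() are the port-I/O helpers from the first sketch. */
    extern void outb(uint16_t port, uint8_t val);
    extern uint8_t inb(uint16_t port);

    #define PIC1_CMD 0x20

    void irq7_handler(void)
    {
        outb(PIC1_CMD, 0x0B);            /* OCW3: next read returns the In-Service Register */
        uint8_t isr = inb(PIC1_CMD);

        if (!(isr & 0x80)) {
            /* Bit 7 clear: IRQ 7 is not really in service, so this was a
               spurious interrupt. Ignore it - and do NOT send an EOI. */
            return;
        }

        /* ...otherwise service the real IRQ 7 device as normal... */

        outb(PIC1_CMD, 0x20);            /* Non-specific EOI */
    }

A spurious interrupt from the Slave PIC appears as IRQ 15 and needs the same check against the Slave's In-Service Register; in that case the Master still needs its EOI, since it genuinely passed an interrupt on.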
Adding more Interrupts - the IBM PC/AT
When IBM started work on the PC/AT, they had to increase the size of the expansion bus to accommodate the new 80286's 24-bit address space, so that gave them the opportunity to add more interrupts to the system. Luckily, the PIC had been designed to support extra interrupts: one or more of its interrupt lines could be sacrificed so that a new PIC could be wired on in a Master/Slave cascade arrangement, each Slave adding eight more interrupt inputs in place of the one line it occupied!
This immediately gave the designers a problem though. In the few years that the original IBM PC had been sold, various board manufacturers had already utilised all of the available IRQ lines, and removing one of the used lines would orphan a section of the market. They wanted to support all of the existing boards - so they devised a clever trick:
They decided to sacrifice IRQ 2, but then designate IRQ 9 as its replacement. They wired IRQ 9 where IRQ 2 used to be on the expansion bus (renaming it IRQ 2/9 in the process), then wrote a default IRQ 9 handler that would do the housekeeping necessary for the second PIC, and then simply jump to the original IRQ 2 handler. This meant that old IRQ 2 devices could work transparently on the new PC/AT, without any changes to the device driver, and without the device realising that it was actually using a new IRQ (let alone a different PIC).
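That default IRQ 9 handler amounts to only a few instructions. The original was BIOS assembly; a rough C equivalent of the idea might look like the following, with old_irq2_handler standing in for whatever the original IRQ 2 vector pointed at:

    #include <stdint.h>

    /* outb() is the port-I/O helper from the first sketch. */
    extern void outb(uint16_t port, uint8_t val);

    #define PIC2_CMD 0xA0

    extern void old_irq2_handler(void);   /* the legacy device driver's handler */

    void irq9_redirect_handler(void)
    {
        outb(PIC2_CMD, 0x20);   /* Housekeeping: the Slave PIC has now been dealt with */
        old_irq2_handler();     /* The legacy handler still EOIs the (Master) PIC itself,
                                   just as it did on the original PC */
    }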
Their other decision was which interrupt vectors to assign the new IRQs to. Many of the interrupts had already been given well-known functions by various companies, and IBM needed a contiguous block of eight! They ended up choosing 70h-77h - at least it was out of the Intel-reserved space!
Working with the Master
Adding a new PIC didn't require much of a change to the code that handled the existing PIC. The initialisation code during system startup had to change, telling the old PIC that it was now a Master and where the Slave was, but all of the rest of the code could stay the same - none of the existing code for any of the current boards would have to change to work with the new PC/AT.
Working with the Slave
New code would have to be added to the system startup sequence to prepare the new PIC, telling it that it was a Slave and where it was connected to the Master.
And of course, the device drivers for any new boards that wanted to use the new IRQs would have to handle the new PIC. If they were on one of the original IRQs, they would have to perform housekeeping on just the old PIC as before. If they were on one of the new IRQs, they would have to perform housekeeping on both the old and the new PICs at startup, as well as during the interrupt handler.
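A sketch of that housekeeping, covering both startup (unmasking the line) and the end of the interrupt handler (acknowledging the right PICs); the port addresses are the conventional PC ones:

    #include <stdint.h>

    /* outb()/inb() are the port-I/O helpers from the first sketch. */
    extern void outb(uint16_t port, uint8_t val);
    extern uint8_t inb(uint16_t port);

    #define PIC1_CMD  0x20
    #define PIC1_DATA 0x21
    #define PIC2_CMD  0xA0
    #define PIC2_DATA 0xA1

    /* Startup: unmask the device's line on its own PIC; a Slave IRQ also needs
       the cascade line (Master IRQ 2) left unmasked. */
    void pic_unmask(uint8_t irq)
    {
        if (irq >= 8) {
            outb(PIC2_DATA, inb(PIC2_DATA) & (uint8_t)~(1 << (irq - 8)));
            irq = 2;                      /* keep the cascade open on the Master too */
        }
        outb(PIC1_DATA, inb(PIC1_DATA) & (uint8_t)~(1 << irq));
    }

    /* Interrupt handler: a Slave IRQ (8-15) needs an EOI sent to both PICs,
       a Master IRQ (0-7) to only one. */
    void pic_send_eoi(uint8_t irq)
    {
        if (irq >= 8)
            outb(PIC2_CMD, 0x20);         /* Non-specific EOI to the Slave */
        outb(PIC1_CMD, 0x20);             /* Non-specific EOI to the Master */
    }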
Interrupt Prioritisation
As described above, the PIC arbitrates which interrupt to pass on to the CPU in priority order - by default it designates the first input (IRQ0) as the highest priority. If a middle-priority interrupt occurs, the PIC will interrupt the CPU which will begin to execute the relevant interrupt handler. If that handler hasn't finished when a lower-priority interrupt comes in, the PIC will 'save' that interrupt until the CPU has finished with the earlier one. But if a higher-priority one comes in, the PIC will immediately attempt to interrupt the CPU.
That of course raises the question: how does the PIC know that the CPU has finished with the current interrupt so that it can pass on a lower-priority one? To resolve this, the PIC has two different modes:
- It can be set to deem that the interrupt is handled as soon as it successfully interrupts the CPU. Known as the Automatic End Of Interrupt (AEOI) mode, it means that the PIC can immediately signal the CPU on the next interrupt.
- Or it can be set to require the interrupt handler to send a command to the PIC when it has reached a suitable point in its processing - typically right at the end. This command is generically known as the End Of Interrupt (EOI) command, although the commonly used one is the Non-specific EOI: the PIC knows which interrupt it last gave the CPU, so it knows which one to complete when a (non-specific) EOI command is given, and the interrupt handler doesn't have to issue a Specific EOI.
The IBM PC designers decided that the AEOI mode was too dangerous and that they wanted explicit control over interrupts, so they only used the standard EOI mode.
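Both choices in the list above are made in ICW4, the last byte of the initialisation sequence shown in the remap sketch earlier: bit 1 selects Automatic EOI. A small sketch of that one decision:

    #include <stdint.h>

    #define ICW4_8086 0x01   /* 8086/8088 mode */
    #define ICW4_AEOI 0x02   /* Automatic End Of Interrupt */

    /* Returns the ICW4 byte to write as the last step of initialisation. */
    uint8_t pic_icw4(int automatic_eoi)
    {
        return automatic_eoi ? (ICW4_8086 | ICW4_AEOI)  /* AEOI: no EOI commands needed */
                             : ICW4_8086;               /* the PC's choice: explicit EOIs */
    }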
But there is one independent issue that affects the PIC's prioritisation mechanism. When the CPU receives an interrupt, it saves its current state (represented by a few registers) onto the stack, clears the "Interrupt Enabled" flag (disabling future interrupts), and starts to execute the interrupt handler. When that handler is finished, the previous CPU state is restored (including the Interrupt flag state), and the normal processing resumes. However, for the period of the interrupt's execution, by default all future interrupts are disabled, defeating the PIC's priority handling mechanism. Unless the interrupt handler explicitly re-enabled interrupts, no new interrupts could be handled regardless of their priority. (Note that as of the 80286, the default "disable interrupts" mechanism can be changed on a per-interrupt basis.)
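On the 80286 and later, that per-interrupt choice is made by the type of gate placed in the Interrupt Descriptor Table: an Interrupt Gate clears the Interrupt flag on entry, while a Trap Gate leaves it alone. A minimal sketch using the 32-bit (80386-style) descriptor layout:

    #include <stdint.h>

    /* One 32-bit IDT entry. The type/attribute byte decides what happens to
       the Interrupt flag when the handler is entered. */
    struct idt_entry {
        uint16_t offset_low;
        uint16_t selector;      /* code-segment selector of the handler */
        uint8_t  zero;
        uint8_t  type_attr;     /* 8Eh = present, ring 0, 32-bit Interrupt Gate (clears IF)
                                   8Fh = present, ring 0, 32-bit Trap Gate (leaves IF alone) */
        uint16_t offset_high;
    } __attribute__((packed));

    void idt_set_gate(struct idt_entry *e, void (*handler)(void),
                      uint16_t selector, int keep_interrupts_enabled)
    {
        uint32_t addr  = (uint32_t)handler;
        e->offset_low  = addr & 0xFFFF;
        e->selector    = selector;
        e->zero        = 0;
        e->type_attr   = keep_interrupts_enabled ? 0x8F : 0x8E;
        e->offset_high = (addr >> 16) & 0xFFFF;
    }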
There's also a second issue that affects the prioritisation of interrupts, but this time only for the Slave PIC. If a low-priority interrupt occurs on the Slave PIC, it signals the Master PIC, which in turn interrupts the CPU. If a higher-priority interrupt were to then occur on the Slave PIC before the previous one was acknowledged, the Slave would again signal the Master PIC - which would then postpone it. The new interrupt wouldn't be of a higher priority than the one currently being serviced ("equal" is not "higher"), so the Master PIC would postpone it until the CPU was ready, again defeating the Slave PIC's priority system.
To combat this, the PIC designers provided another mode that could be enabled on the Master PIC:
- The standard cascade mode is named the "Fully Nested Mode" (FNM). When a Slave interrupt handler needed to acknowledge its interrupt, it would always send an EOI command to both the Master and Slave PICs.
- A second cascade mode was named the "Special Fully Nested Mode" (SFNM). In this mode, the Master PIC will allow interrupts of the same priority through, but only if they come from a Slave. The trade-off is that a Slave interrupt handler must not acknowledge the Slave's interrupt on the Master unless the Slave has finished all of its interrupts. This requirement made every Slave interrupt handler more complicated than before, but worse, it introduced an uncertainty: although the Slave interrupt handler could determine that the Slave PIC was indeed finished, another Slave interrupt could arrive before the handler got around to acknowledging the interrupt on the Master.
The uncertainty of SFNM versus the lack of prioritisation of FNM meant that the IBM PC/AT designers decided to use the standard Fully Nested Mode to initialise their Master PIC.
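For completeness, here is a sketch of what a Slave interrupt handler's acknowledgement looks like under SFNM, including the In-Service Register check that creates the uncertainty described above:

    #include <stdint.h>

    /* outb()/inb() are the port-I/O helpers from the first sketch. */
    extern void outb(uint16_t port, uint8_t val);
    extern uint8_t inb(uint16_t port);

    #define PIC1_CMD 0x20
    #define PIC2_CMD 0xA0

    /* End-of-interrupt for a Slave IRQ under the Special Fully Nested Mode. */
    void sfnm_slave_eoi(void)
    {
        outb(PIC2_CMD, 0x20);        /* Non-specific EOI to the Slave */

        outb(PIC2_CMD, 0x0B);        /* OCW3: next read returns the Slave's In-Service Register */
        if (inb(PIC2_CMD) == 0)      /* nothing else still in service on the Slave? */
            outb(PIC1_CMD, 0x20);    /* ...only then may the Master be acknowledged.
                                        (Another Slave interrupt could arrive right here -
                                        that window is the uncertainty described above.) */
    }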