IBM was already investigating the use of RISC processors in desktop machines, and could, in theory, save considerable money if a single well-documented bus could be used across its entire computer lineup.[3] The Micro Channel was primarily a 32-bit bus, but the system also supported a 16-bit mode designed to lower the cost of connectors and logic in Intel-based machines like the IBM PS/2.
Micro Channel cards also featured a unique, 16-bit software-readable ID, which formed the basis of an early plug and play system.
The BIOS and/or OS could read a card's ID, compare it against a list of known cards, and configure the system automatically to suit.
In turn, this required IBM to release updated Reference Disks (the CMOS setup utility) on a regular basis.
The reference disks were accompanied by ADF files, which the setup utility read to obtain configuration information for each card.
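The configuration flow described above can be sketched roughly as follows. This is a minimal illustration of ID-based lookup, not the actual reference-disk or ADF file format; the card IDs, names, and IRQ lists here are entirely hypothetical.

```python
# Hypothetical table standing in for ADF files on a reference disk:
# it maps a card's 16-bit software-readable ID to its description and
# the resources it is allowed to use.
ADF_DATABASE = {
    0x8EFC: {"name": "Example Token-Ring Adapter", "irq_choices": [2, 3]},
    0x6042: {"name": "Example SCSI Adapter", "irq_choices": [10, 11, 14]},
}

def configure_card(card_id: int) -> dict:
    """Look up a card by its 16-bit ID and return a configuration,
    mimicking the automatic setup performed from the reference disk."""
    entry = ADF_DATABASE.get(card_id)
    if entry is None:
        # No matching ADF on the reference disk: the user would have to
        # supply an updated disk or the card's own ADF file.
        raise KeyError(f"Unknown card ID {card_id:#06x}: ADF file required")
    # A real setup utility would also arbitrate conflicts between cards;
    # here we simply take the card's first allowed IRQ.
    return {"name": entry["name"], "irq": entry["irq_choices"][0]}

print(configure_card(0x8EFC))
```

The key point is the dependency this creates: configuration is only automatic for cards whose ADF data is present, which is why the reference disks had to be kept up to date.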
In this critical area, Micro Channel architecture's biggest advantage was also its greatest disadvantage, and one of the major reasons for its demise.
Software configuration required the reference disk matched to each machine; for an individual user this was a minor inconvenience, but for large organizations with hundreds or even thousands of PCs, permanently matching each PC with its own floppy disk was logistically impractical or impossible.
After this experience repeated itself thousands of times, business leaders realized their dream scenario for upgrade simplicity did not work in the corporate world, and they sought a better process.
In theory, Micro Channel architecture systems could be expanded, like mainframes, with only the addition of intelligent masters, without periodic need to upgrade the central processor.
The final major Micro Channel architecture improvement was POS, the Programmable Option Select, which allowed all setup to take place in software.
The feature did not fully live up to its promise. Automatic configuration was fine when it worked, but it frequently did not, leaving the computer unbootable, and resolving the problem by manual intervention was much more difficult than configuring an ISA system. Documentation for an MCA device tended to assume that automatic configuration would succeed, and so omitted the information needed to set the card up by hand, whereas ISA device documentation by necessity provided full details. Even so, having to physically remove cards and check all their IRQ settings, then find and set a free IRQ for a new device (if a suitable one was available), was no fun at all and beyond many users. It is obvious why the attempt was made to move to software-arbitrated configuration, and why this was later to succeed in the form of PnP.
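The core problem that software arbitration tries to solve can be sketched as a small assignment algorithm: give every card an IRQ from its allowed set without conflicts, and report failure when that is impossible (the case that left an MCA machine unbootable and needing manual intervention). The card names and IRQ sets below are hypothetical, and this greedy scheme is only an illustration of the idea behind POS and later PnP, not IBM's actual algorithm.

```python
def assign_irqs(cards: dict[str, list[int]]) -> dict[str, int]:
    """Assign each card an IRQ from its allowed set, avoiding conflicts.

    Raises RuntimeError when no conflict-free choice remains, which is
    the situation a human would otherwise resolve with jumpers on ISA.
    """
    assigned: dict[str, int] = {}
    in_use: set[int] = set()
    # Handle the most-constrained cards first (fewest allowed IRQs),
    # so flexible cards don't grab the only IRQ an inflexible card needs.
    for name, choices in sorted(cards.items(), key=lambda kv: len(kv[1])):
        free = [irq for irq in choices if irq not in in_use]
        if not free:
            raise RuntimeError(f"No free IRQ available for {name}")
        assigned[name] = free[0]
        in_use.add(free[0])
    return assigned

# "disk" can only use IRQ 5, so it must be placed before "net" is.
print(assign_irqs({"net": [3, 5], "disk": [5], "serial": [3, 4]}))
```

When every card's allowed set overlaps on the same scarce line, the function raises instead of silently producing a conflict, which is exactly where the automatic schemes of the era fell back on the user.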
In November 1983 The Economist stated that the IBM PC standard's dominance of the personal computer market was not a problem because "it can help competition to flourish".
A small number of other manufacturers, including Apricot, Dell, Research Machines, and Olivetti, adopted it, but only for part of their PC ranges.
Although MCA was a significant technical improvement over ISA, it soon became clear that IBM had handled its introduction and marketing poorly.
The PC clone market did not want to pay royalties to IBM in order to use this new technology, and stayed largely with the 16-bit AT bus (embraced and renamed ISA to avoid IBM's "AT" trademark) and manual configuration, although the VESA Local Bus (VLB) was briefly popular for Intel 486 machines.
For servers the technical limitations of the old ISA were too great, and, in late 1988, the "Gang of Nine", led by Compaq, announced a rival high-performance bus, the Extended Industry Standard Architecture (EISA).
This offered performance benefits similar to Micro Channel's, but with the twin advantages of accepting older ISA boards and being free from IBM's control.