The single-level store was implemented in the System/38 and moved to other systems in the lineup after that, but the concept of a machine that directly ran high-level languages has never appeared in an IBM product.
Only six months later, IBM began a study project on what trends were taking place in the market and how these should be used in a series of machines that would replace the 360 in the future.
When the machines, now known as the System/370, were announced in 1970, they were essentially 360s using small-scale integrated circuits for logic, with much larger amounts of internal memory and other relatively minor changes.[3]
The recession of 1969–1970 led to slowing sales in the 1970–71 period and much smaller orders for the 370 compared to the rapid uptake of the 360 five years earlier.
While some in the company began efforts to introduce useful improvements to the 370 as soon as possible to make it more attractive, others felt that nothing short of a complete reimagining of the system would work in the long term.
IBM's Jerrier A. Haddad wrote a memo on the topic, suggesting that the cost of logic and memory was falling toward zero faster than it could be measured.[3]
An internal Corporate Technology Committee (CTC) study concluded that a 30-fold reduction in the price of memory would take place in the next five years, and another 30-fold reduction in the five years after that.[4]
In terms of the computer itself, if one followed the progression from the 360 to the 370 and on to some hypothetical System/380, the new machines would be based on large-scale integration and would be dramatically reduced in complexity and cost.
Therefore, the part of the customer's IT budget available for hardware vendors would be significantly reduced in the coming years, and with it the base for IBM revenue.
At the time, the first systems with virtual memory (VM) were emerging, and the seminal Multics project had expanded on this concept as the basis for a single-level store.[7]
Physically, systems would be single-level stores, so the idea of a separate layer of "files" representing distinct storage made no sense; pointers into a single large memory would not only let one refer to any data as if it were local, but would also eliminate the need for separate application programming interfaces (APIs) for the same data depending on whether or not it was loaded.
A group of twelve participants spread across three divisions produced the "Higher Level System Report", or HLS, which was delivered on 25 February 1970.
This proved enormously successful, as a customer could buy a low-end machine and always upgrade to a faster one in the future, knowing all their applications would continue to run.
If a program called such an instruction, there was no need to convert it into lower-level code; the machine would carry it out internally, in microcode or even a direct hardware implementation.[7]
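As a rough illustration of the difference, consider a toy instruction dispatcher; this is only a sketch, and the opcode FIND_MAX and the structures here are hypothetical, not part of the FS or 370 instruction sets. A whole algorithm runs inside one opcode's handler, the way microcode would execute a complex instruction, rather than the compiler emitting a loop of many simple instructions:

    /* Toy sketch of "high-level" instruction execution. The hypothetical
     * FIND_MAX opcode is carried out entirely inside the dispatch loop,
     * analogous to a complex instruction implemented in microcode. */
    #include <stdio.h>

    enum opcode { OP_FIND_MAX, OP_HALT };      /* hypothetical opcodes */

    struct insn { enum opcode op; int *data; int len; };

    static int run(const struct insn *ip) {
        int result = 0;
        for (;; ip++) {
            switch (ip->op) {
            case OP_FIND_MAX: {                /* whole algorithm "in microcode" */
                int max = ip->data[0];
                for (int i = 1; i < ip->len; i++)
                    if (ip->data[i] > max)
                        max = ip->data[i];
                result = max;
                break;
            }
            case OP_HALT:
                return result;
            }
        }
    }

    int main(void) {
        int values[] = { 3, 9, 4, 1 };
        struct insn program[] = {
            { OP_FIND_MAX, values, 4 },
            { OP_HALT, NULL, 0 },
        };
        printf("max = %d\n", run(program));    /* prints: max = 9 */
        return 0;
    }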
Other manufacturers had started to market compatible hardware, mainly peripherals such as tape and disk drives, at prices significantly lower than IBM's, thus shrinking the possible base for recovering the cost of software and services.
IBM responded by refusing to service machines with these third-party add-ons, which led almost immediately to sweeping antitrust investigations and many subsequent legal remedies.
They concluded that the compatible mainframe business was indeed viable and that the basis for charging for software and services as part of the hardware price would quickly vanish.
These events created a desire within the company to find some solution that would once again force customers to purchase everything from IBM, but in a way that would not violate antitrust laws.[7]
If IBM followed the suggestions of the HLS report, this would mean that other vendors would have to copy the microcode implementing the huge number of instructions.
The task force concluded that the project was worth pursuing, but that the key to acceptance in the marketplace was an order-of-magnitude reduction in the costs of developing, operating and maintaining application software.
One design principle of FS was a "single-level store", which extended the idea of virtual memory to cover persistent data.
This means there is no need to explicitly save and load data; simply allocating it in memory has that effect, as the VM system writes it out to backing storage.
This concept had been explored in the Multics system, where it proved to be very slow; that slowness, however, was a side-effect of the available hardware, in which main memory was implemented in core with a far slower backing store in the form of a hard drive or drum.
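A rough modern analogue of this idea, on a much smaller scale, is memory-mapped file I/O: the operating system's pager makes part of a file addressable as ordinary memory, so plain stores through a pointer persist without explicit file reads and writes. The sketch below uses POSIX mmap() and a hypothetical file name; it only illustrates the flavor of a single-level store, not the FS design itself:

    /* Illustration only: persistence through the virtual-memory system.
     * The file name is hypothetical. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("persistent.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

        /* Map the file so it can be addressed like ordinary memory. */
        char *store = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (store == MAP_FAILED) { perror("mmap"); return 1; }

        /* A plain memory write; no separate "file" API is needed. */
        strcpy(store, "hello, single-level store");

        msync(store, 4096, MS_SYNC);   /* force write-back to the file */
        munmap(store, 4096);
        close(fd);
        return 0;
    }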
The FS project was officially started in September 1971, following the recommendations of a special task force assembled in the second quarter of 1971.
In Sowa's memo (see External links, below), he noted: "The avowed aim of all this red tape is to prevent anyone from understanding the whole system; this goal has certainly been achieved."
As a consequence, most people working on the project had an extremely limited view of it, restricted to what they needed to know in order to produce their expected contribution.
In early 1973, overall project management and the teams responsible for the more "outside" layers common to all implementations were consolidated at the Mohansic ASDD laboratory (halfway between the Armonk/White Plains headquarters and Poughkeepsie).
The reasons given for terminating the project depend on the person asked, each of whom emphasizes the issues in the domain with which they were most familiar.
In reality, the success of the project was dependent on a large number of breakthroughs in all areas from circuit design and manufacturing to marketing and maintenance.
Besides System/38 and the AS/400, which inherited much of the FS architecture, bits and pieces of Future Systems technology were incorporated in the following parts of IBM's product line: