Rolls-Royce/Snecma Olympus 593

The overall efficiency of the engine in supersonic cruising flight (supercruise) was about 43%, which at the time was the highest figure recorded for any normal thermodynamic machine.
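For reference, overall efficiency here is the conventional figure of merit for a jet engine in cruise: the ratio of useful thrust power delivered to the rate of chemical energy released by the fuel,

$$\eta_{\mathrm{overall}} = \frac{F\,V}{\dot{m}_f\,\mathrm{LHV}},$$

where $F$ is net thrust, $V$ is true airspeed, $\dot{m}_f$ is the fuel mass flow and LHV is the fuel's lower heating value.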

The early development stages validated the basic design concept, but many studies were required to meet the specifications, which included thrust-specific fuel consumption (TSFC, or simply SFC), engine pressure ratio, size and weight, and turbine entry temperature.
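Of these, thrust-specific fuel consumption is simply the fuel flow needed per unit of thrust,

$$\mathrm{TSFC} = \frac{\dot{m}_f}{F},$$

conventionally quoted in lb/(lbf·h) or mg/(N·s); a lower figure indicates a more fuel-efficient engine at the flight condition in question.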

Ground test running of the engines was co-ordinated between Bristol Siddeley, Patchway; the National Gas Turbine Establishment (NGTE), Pyestock, UK; and the Centre d'Essais des Propulseurs (CEPr) at Saclay, France.

Increases in aircraft weight during the design phase led to a take-off thrust requirement which could not be met by the engine.

These two derivative engines were built to validate design concepts such as turbine stator and rotor cooling, and to test the system at high environmental temperatures.

At Bristol, flight tests began using an RAF Avro Vulcan bomber with the engine and its nacelle attached below the bomb bay.

Due to the high inlet air temperatures at Mach 2 cruise, in excess of 120 °C (393 K; 248 °F),[5] the compressor drums and blades were made from titanium except for the last four HP stages, which were Nimonic 90[13] nickel alloy.
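The quoted inlet temperature follows from the standard relation for adiabatic (stagnation) heating of the free stream; assuming an ISA stratospheric ambient temperature of 216.65 K and $\gamma = 1.4$, at Mach 2.02:

$$T_0 = T_\infty\left(1 + \frac{\gamma - 1}{2}M^2\right) \approx 216.65\,\mathrm{K}\times\left(1 + 0.2\times 2.02^2\right) \approx 393\,\mathrm{K} \approx 120\,^\circ\mathrm{C}.$$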

They were met with a variable geometry intake and an intake-control system that compromised neither the operation of the engine nor the control of the aircraft.

Since the ramp bleed slot was in the subsonic diffuser, downstream of the shock system, changes in the flow demanded by the engine were accommodated by corresponding changes in the bleed-slot flow without significantly affecting the external shock pattern.

The dump doors were closed at cruise to prevent loss of thrust, since air leaking from the duct does not contribute to the pressure recovery in the intake.

Since the intake area was optimal for cruise, an auxiliary inlet was provided to meet the higher engine air flow needed for take-off.

This system was developed around 1972, relatively late in the programme, by the Electronics and Space Systems division of the British Aircraft Corporation at Filton.

Concorde's Air Intake Control System also pioneered the use of digital data highways (multiplexed serial data buses). These connected the Air Intake Sensor Units, which collected aerodynamic data at the nose of the aircraft (total pressure, static pressure, angle of attack and sideslip), to the Air Intake Control Units located nearer the air intakes, a distance of around 190 ft (58 m). Screened twisted-pair cables replaced what would have been a much greater weight of aircraft wiring had only analogue signal wiring and pneumatic piping been used.
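As an illustration of why multiplexing saves wiring, the sketch below packs the four air-data quantities mentioned above into a single serial frame suitable for one twisted-pair bus, instead of each quantity needing its own dedicated run. The frame layout, field names, values and checksum are hypothetical, chosen only to show the principle; they do not represent Concorde's actual AICS protocol.

```python
# Hypothetical illustration of multiplexing several air-data channels onto one
# serial data bus. The sync byte, field order and checksum are invented for
# this sketch and do not represent Concorde's actual AICS frame format.
import struct

FRAME_FORMAT = ">B4fB"   # sync byte, four 32-bit floats, 8-bit checksum
SYNC = 0x7E
CHANNELS = ["total_pressure_kpa", "static_pressure_kpa",
            "angle_of_attack_deg", "sideslip_deg"]

def encode_frame(sample: dict) -> bytes:
    """Pack one multiplexed frame: sync byte, four channels, mod-256 checksum."""
    payload = struct.pack(">4f", *(sample[name] for name in CHANNELS))
    checksum = sum(payload) % 256
    return struct.pack(">B", SYNC) + payload + struct.pack(">B", checksum)

def decode_frame(frame: bytes) -> dict:
    """Unpack a frame produced by encode_frame, verifying sync and checksum."""
    sync, *values, checksum = struct.unpack(FRAME_FORMAT, frame)
    if sync != SYNC or sum(frame[1:-1]) % 256 != checksum:
        raise ValueError("corrupt frame")
    return dict(zip(CHANNELS, values))

# One sample of nose-mounted air data (example values only).
sample = {"total_pressure_kpa": 38.2, "static_pressure_kpa": 11.0,
          "angle_of_attack_deg": 3.4, "sideslip_deg": -0.2}
frame = encode_frame(sample)
print(len(frame), decode_frame(frame))   # 18 bytes carry all four channels
```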

The intake control system had the unique ability to keep the powerplants operating correctly and to aid recovery whatever the pilots, the aircraft, and the atmosphere were doing in combination at the time.

The overall pressure ratio for the powerplant at Mach 2.0 cruise at 51,000 ft (15,500 m) was about 82:1, with 7.3:1 from the intake and 11.3:1 from the two engine compressors, far higher than that of any subsonic aircraft of the day, giving a correspondingly high overall efficiency of about 43%.
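The quoted figure is simply the product of the two stages of compression: $7.3 \times 11.3 \approx 82.5$, i.e. roughly 82:1 overall.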

The eyelids formed the divergent passage while the engine exhaust ejected or pumped the secondary flow from the intake ramp bleed slot.

During cruise at Mach 2.02 each Olympus 593 produced around 10,000 lbf (44 kN) of thrust, equivalent to about 36,000 hp (27,000 kW) per engine.
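The horsepower figure follows from thrust power being thrust multiplied by true airspeed. Assuming an ISA stratospheric speed of sound of about 295 m/s, Mach 2.02 corresponds to roughly 596 m/s, so

$$P = F\,V \approx 44{,}500\,\mathrm{N} \times 596\,\mathrm{m/s} \approx 26.5\,\mathrm{MW} \approx 36{,}000\,\mathrm{hp},$$

consistent with the quoted values to within rounding.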

[Image: Concorde intake system schematics]
[Image: The initial converging section of the intake, which produces drag. With the front ramps deflected down to the Mach 2 position, decelerating supersonic flow passed through three shocks, the rising static pressure on the forward-facing surface producing a rearward drag force.]
[Image: Turbine and reheat gutter section of an Olympus 593 on display at the Fleet Air Arm Museum]