CICS

Recent CICS TS releases have provided support for Web services and Java, event processing, Atom feeds, and RESTful interfaces.

It immediately became clear that the system had applicability to many other industries, so the Public Utility prefix was dropped with the introduction of the first release of the CICS Program Product on July 8, 1969, not long after the IMS database management system.

The core of the development work continues in Hursley today alongside contributions from labs in India, China, Russia, Australia, and the United States.

When CICS was delivered to Amoco with Teletype Model 33 ASR support, it caused the entire OS/360 operating system to crash (including non-CICS application programs).

Part of CICS was formalized using the Z notation in the 1980s and 1990s in collaboration with the Oxford University Computing Laboratory, under the leadership of Tony Hoare.[6]

In 1986, IBM announced CICS support for the record-oriented file services defined by Distributed Data Management Architecture (DDM).

This enabled programs on remote, network-connected computers to create, manage, and access files that had previously been available only within the CICS/MVS and CICS/VSE transaction processing environments.[10]

CICS Transaction Server first introduced a native HTTP interface in version 1.2, together with a Web Bridge technology for wrapping green-screen terminal-based programs with an HTML facade.

Modern versions of CICS provide many ways for both existing and new software assets to be integrated into distributed application flows.

By January 1985, a consulting company founded in 1969, which had done "massive on-line systems" for Hilton Hotels, FTD Florists, Amtrak, and Budget Rent-a-Car, announced what became MicroCICS.

Unfortunately, many of the "rules" were frequently broken, especially by COBOL programmers who did not understand the internals of their programs or who failed to use the necessary restrictive compile-time options.

Originally, the entire partition, or Multiple Virtual Storage (MVS) region, operated with the same memory protection key, including the CICS kernel code.

Locating the offending application code for complex transient timing errors could be a very difficult problem for operating-system analysts.

CICS application transactions remain mission-critical for many public utility companies, large banks and other multibillion-dollar financial institutions.

To allow COBOL programmers to access CICS control blocks and dynamic storage, the designers resorted to what was essentially a hack.

Source code containing embedded commands (EXECs) is pre-processed by a pre-compile batch translation stage, which converts those commands into call statements to a stub subroutine.

A significant number of users ran CICS V2 application-owning regions (AORs) in order to continue to run macro code for many years after the change to V3.

For example, the new CICS Java API (JCICSX) allows easier unit testing using mocking and stubbing approaches, and can be run remotely on the developer's local workstation.
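
As an illustration of the mocking and stubbing style this makes practical, the following sketch (assuming JUnit 5 and Mockito on the classpath) tests plain business logic against a hypothetical TsqWriter interface standing in for a CICS temporary-storage resource; the interface, class, and queue names are assumptions for the example rather than the actual JCICSX API.

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Hypothetical abstraction over a CICS temporary storage queue,
    // introduced so the business logic can be tested off-host.
    interface TsqWriter {
        void writeItem(String queueName, String data);
    }

    // Logic under test: formats an audit record and hands it to whatever
    // TsqWriter is injected (a CICS-backed one in production, a mock in tests).
    class AuditLogger {
        private final TsqWriter writer;

        AuditLogger(TsqWriter writer) {
            this.writer = writer;
        }

        String log(String user, String action) {
            String record = user + ":" + action;
            writer.writeItem("AUDITQ", record);
            return record;
        }
    }

    class AuditLoggerTest {
        @Test
        void writesFormattedRecordToAuditQueue() {
            TsqWriter mockWriter = mock(TsqWriter.class);
            AuditLogger logger = new AuditLogger(mockWriter);

            String record = logger.log("USER01", "TRANSFER");

            assertEquals("USER01:TRANSFER", record);
            verify(mockWriter).writeItem("AUDITQ", "USER01:TRANSFER");
        }
    }

Because the CICS dependency sits behind a small interface, the same test runs on a developer's workstation with no CICS region available.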

Plug-ins for Maven (cics-bundle-maven) and Gradle (cics-bundle-gradle) are also provided to simplify automated building of CICS bundles, using familiar IDEs like Eclipse, IntelliJ IDEA, and Visual Studio Code.

In addition, z/OS support for Node.js version 12 is enhanced, providing faster startup, better default heap limits, and updates to the V8 JavaScript engine.

These include WSDL, SOAP and JSON interfaces that wrap legacy code so that a web or mobile application can obtain and update the core business objects without requiring a major rewrite of the back-end functions.
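
A minimal sketch of that wrapping idea, assuming a hypothetical LegacyAccountService facade over an existing back-end program: a thin JAX-RS resource exposes the result as JSON, so the web or mobile client never deals with the underlying data structures. All names here are illustrative, not part of any CICS API.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // Hypothetical facade over an existing back-end program that reads a
    // customer account; the real call could go through a CICS web service,
    // a LINK from a Java program, or a gateway product.
    interface LegacyAccountService {
        Account getAccount(String accountId);
    }

    // Plain data object; a JSON provider (JSON-B or Jackson) serializes it.
    class Account {
        public String accountId;
        public String ownerName;
        public long balanceInCents;
    }

    // Thin JSON wrapper: clients see a REST resource, while the core
    // business logic remains in the unchanged back end.
    @Path("/accounts")
    public class AccountResource {
        private final LegacyAccountService legacy;

        public AccountResource(LegacyAccountService legacy) {
            this.legacy = legacy;
        }

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public Account get(@PathParam("id") String id) {
            return legacy.getAccount(id);
        }
    }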

Although it processes interactive transactions, each CICS region is usually started as a batch address space with standard JCL statements: it is a job that runs indefinitely until shutdown.

Not all CICS applications use VSAM (or, historically, other single-address-space-at-a-time datastores such as CA Datacom) as the primary data source: many use IMS/DB or Db2 as the database, and/or MQ as a queue manager.

CICS supports XA two-phase commit between data stores, so transactions that span MQ, VSAM/RLS, and Db2, for example, are possible with ACID properties.
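
Within a CICS program this coordination happens implicitly at a syncpoint; the sketch below uses standard JTA outside CICS purely to illustrate what a two-phase commit across two XA resources looks like, with JNDI names and SQL invented for the example.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    // One global (XA) transaction spanning two resource managers: the
    // transaction manager prepares and commits both, so either both
    // updates take effect or neither does.
    public class TwoPhaseCommitSketch {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            DataSource accountsDb = (DataSource) ctx.lookup("jdbc/accountsXA"); // assumed JNDI name
            DataSource auditDb = (DataSource) ctx.lookup("jdbc/auditXA");       // assumed JNDI name

            utx.begin();
            try {
                try (Connection c1 = accountsDb.getConnection();
                     Connection c2 = auditDb.getConnection();
                     PreparedStatement debit = c1.prepareStatement(
                             "UPDATE ACCOUNTS SET BAL = BAL - ? WHERE ID = ?");
                     PreparedStatement audit = c2.prepareStatement(
                             "INSERT INTO AUDIT (ACCT, AMT) VALUES (?, ?)")) {
                    debit.setLong(1, 100);
                    debit.setString(2, "A1");
                    debit.executeUpdate();

                    audit.setString(1, "A1");
                    audit.setLong(2, 100);
                    audit.executeUpdate();
                }
                utx.commit();   // drives prepare/commit on both resources
            } catch (Exception e) {
                utx.rollback(); // backs out both updates
                throw e;
            }
        }
    }

The equivalent inside CICS is a unit of work ended by a syncpoint, with CICS coordinating the attached resource managers.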

The Sysplex was to be based on CMOS (Complementary Metal Oxide Semiconductor) rather than the existing ECL (Emitter Coupled Logic) hardware.

However, the CPU speed of the air-cooled CMOS technology was initially much slower than that of ECL (notably the machines available from the mainframe-clone makers Amdahl and Hitachi).

Moreover, a CICS address space, due to its quasi-reentrant application programming model, could not exploit more than about 1.5 processors on one machine at the time, even with the use of MVS sub-tasks.

The community vehemently opposed breaking upward compatibility: they had the prospect of Y2K to contend with at the time and did not see the value in rewriting and testing millions of lines of mainly COBOL, PL/I, or assembler code.

For example, by 2002, Charles Schwab was running a "MetroPlex" consisting of a redundant pair of mainframe Sysplexes in two locations in Phoenix, AZ, each with 32 nodes driven by one shared CICS/DB2 workload to support the vast volume of pre-dotcom-bubble web client inquiry requests.

The CICS region can also be forced to perform a "Cold" start, which reloads all definitions and wipes out the log, leaving the resources in whatever state they are in.

Image captions: chart depicting the high-level architecture of CICS (in French); IBM Hursley, where much of the CICS development has been done (2008); beginning of a CICSGEN stage one module (1982); advertisement for a CICS debugging product (1978); chart showing a particular task invocation of CICS (2010); diagram showing one site's relationship between z/OS and CICS (2010).