While there are other industrial-strength transaction processing systems, notably IBM's own CICS and IMS, TPF's specialty is extreme volume, large numbers of concurrent users, and very fast response times.
For example, it handles VISA credit card transaction processing during the peak holiday shopping season.
The depth of the CPU ready list is measured as each incoming transaction is received, and the transaction is queued to the I-stream with the lowest demand, thus maintaining continuous load balancing among available processors.
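The following sketch outlines that dispatch decision; the `ready_depth` array and function name are hypothetical and do not correspond to TPF's actual dispatcher structures.

```c
#include <stddef.h>

/* Illustrative only: pick the I-stream whose ready list is currently
 * shallowest, so each new transaction is queued to the least-loaded
 * processor.  (Hypothetical data; not TPF's real dispatcher.) */
size_t pick_istream(const unsigned ready_depth[], size_t n_istreams)
{
    size_t best = 0;
    for (size_t i = 1; i < n_istreams; i++) {
        if (ready_depth[i] < ready_depth[best])
            best = i;          /* lower depth means lower demand */
    }
    return best;               /* queue the new transaction here */
}
```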
In cases where loosely coupled configurations are populated by multiprocessor CPCs (Central Processing Complexes, i.e. the physical machines each packaged in one system cabinet), SMP takes place within each CPC as described here, whereas sharing of inter-CPC resources takes place as described under Loosely coupled, below.
Currently, 32 IBM mainframes may share the TPF database; if such a system were in operation, it would be called 32-way loosely coupled.
The simplest loosely coupled system would be two IBM mainframes sharing one DASD (Direct Access Storage Device).
All processor-shared records on a TPF system are accessed via the same file address, which resolves to the same location.
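The sketch below illustrates why that resolution is consistent across processors: decoding is a pure function of the file address, so every processor arrives at the same location. The field layout shown is invented for the example and is not TPF's actual file-address format.

```c
#include <stdint.h>

/* Hypothetical layout, for illustration only: because decoding depends
 * only on the 4-byte file address, every processor in the complex
 * resolves a shared record to the same device location. */
struct device_location {
    unsigned module;   /* which DASD module */
    unsigned record;   /* record number within that module */
};

struct device_location resolve(uint32_t file_address)
{
    struct device_location loc;
    loc.module = (file_address >> 24) & 0xFF;      /* invented field split */
    loc.record =  file_address        & 0xFFFFFF;
    return loc;
}
```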
TPF's user interface is command-line driven with simple text display terminals that scroll upward, and there are no mouse-driven cursors, windows, or icons on a TPF Prime CRAS[12] (Computer room agent set — which is best thought of as the "operator's console").
Systems that present a graphical front end to TPF perform analysis on the character content (see Screen scrape) and convert the message to or from the desired graphical form, depending on its context.
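A minimal sketch of that kind of character-content analysis appears below; the screen layout, field positions, and function names are invented for the example and do not reflect any particular middleware product.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative screen-scraping step: copy a fixed-position field out of a
 * line of character output.  The offsets are invented; a real front end
 * would key off the known layout of each TPF response. */
static void extract_field(const char *line, size_t start, size_t len,
                          char *out, size_t outsize)
{
    size_t n = len < outsize - 1 ? len : outsize - 1;
    if (strlen(line) > start)
        strncpy(out, line + start, n);
    else
        n = 0;
    out[n] = '\0';
}

int main(void)
{
    char field[16];
    extract_field("SYSTEM STATUS: ACTIVE", 15, 6, field, sizeof field);
    printf("status field: '%s'\n", field);   /* prints 'ACTIVE' */
    return 0;
}
```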
TPF application source code is commonly stored in external systems, and likewise built "offline".
Starting with z/TPF 1.1, Linux is the supported build platform; executable programs intended for z/TPF operation must conform to the ELF format for s390x-ibm-linux.
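As a rough illustration of the target format, rather than a description of the actual z/TPF build procedure, a C module can be cross-compiled for s390x on a Linux host and its ELF header inspected; the compiler name and commands in the comments assume a generic s390x cross-toolchain.

```c
/* hello.c - trivial module used only to illustrate the build target.
 *
 * On a Linux build host with a generic s390x cross-compiler installed,
 * commands along these lines produce an s390x ELF object (illustrative
 * only, not the z/TPF build procedure):
 *
 *   s390x-linux-gnu-gcc -c -fPIC hello.c -o hello.o
 *   readelf -h hello.o        # Class: ELF64, Machine: IBM S/390
 */
int hello(void)
{
    return 42;
}
```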
Commands created and shipped by IBM for the system administration of TPF are called "functional messages"—commonly referred to as "Z-messages", as they are all prefixed with the letter "Z".
Historically, all data on the TPF system had to fit in fixed record (and memory block) sizes of 381, 1055 and 4K bytes.
Because the early days also placed a premium on the size of storage media, whether memory or disk, TPF applications evolved to do very powerful things while using very few resources.
The same advances have increased the capacity of each device, so there is no longer a premium placed on packing data into the smallest possible space.
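A minimal sketch of what working within those fixed block sizes implied, namely choosing the smallest standard block that can hold a given record, is shown below; the helper function is hypothetical and not a TPF service.

```c
#include <stddef.h>

/* Historical TPF fixed block sizes, in bytes (381, 1055, 4K). */
static const size_t block_sizes[] = { 381, 1055, 4096 };

/* Hypothetical helper: return the smallest standard block size that can
 * hold `bytes` of data, or 0 if the record will not fit in one block. */
size_t smallest_block_for(size_t bytes)
{
    for (size_t i = 0; i < sizeof block_sizes / sizeof block_sizes[0]; i++)
        if (bytes <= block_sizes[i])
            return block_sizes[i];
    return 0;   /* caller would have to chain multiple blocks */
}
```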
This created a challenging programming environment in which segments related to one another could not directly address each other, with control transfer between them implemented as the ENTER/BACK system service.
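To illustrate the flavor of such name-based control transfer, rather than TPF's actual ENTER/BACK implementation, the following sketch routes calls through a dispatch table; the structures and segment name are invented.

```c
#include <string.h>

/* Illustrative only: segments cannot address each other directly, so
 * control transfer goes through a name-based dispatch service, in the
 * spirit of ENTER/BACK.  The table and names are invented. */
typedef void (*segment_entry)(void *parm);

struct segment {
    char          name[5];   /* 4-character segment name + NUL */
    segment_entry entry;
};

static void seg_abcd(void *parm) { (void)parm; /* ... application code ... */ }

static struct segment segment_table[] = {
    { "ABCD", seg_abcd },
};

/* "ENTER"-style service: look up the target segment by name and call it.
 * Returning from the callee plays the role of "BACK". */
int enter_segment(const char *name, void *parm)
{
    for (size_t i = 0; i < sizeof segment_table / sizeof segment_table[0]; i++) {
        if (strcmp(segment_table[i].name, name) == 0) {
            segment_table[i].entry(parm);
            return 0;
        }
    }
    return -1;   /* no such segment */
}
```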
C language support, introduced to TPF at version 3.0, was first implemented in conformance with segment conventions, including the absence of linkage editing.
The TPF loader was extended to read the z/OS-unique load module file format and then lay out the sections of file-resident load modules into memory; meanwhile, assembly language programs remained confined to TPF's segment model, creating an obvious disparity between applications written in assembler and those written in higher-level languages (HLLs).
Furthermore, external references became possible, and separate source code programs that had once been segments could now be directly linked together into a shared object.
One benefit is that critical legacy applications can gain efficiency through simple repackaging: calls made between members of a single shared object module have a much shorter pathlength at run time than calls through the system's ENTER/BACK service.
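A hedged sketch of that repackaging point follows: once two former segments are linked into one shared object, a call between them is an ordinary external reference resolved at link time instead of a run-time name lookup. The function names are invented for illustration.

```c
/* Two former segments packaged in the same shared object (names invented).
 * A direct call compiles to a plain branch, so the run-time pathlength is
 * far shorter than routing through an ENTER/BACK-style name lookup as in
 * the earlier dispatch-table sketch. */

int sub_routine(int x);            /* formerly its own segment */

int main_routine(int x)            /* formerly another segment */
{
    return sub_routine(x);         /* direct, link-time-resolved call */
}

int sub_routine(int x)
{
    return x + 1;
}
```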