This post was originally written in the 1990s when I was thinking about attempting a book describing IBM’s TPF to an unsuspecting world. However, real life intervened and that never happened. It may still be of interest, though: the ‘need for speed’ that ACP/TPF brought with it encouraged some significant computing firsts, like loosely coupling mainframes together (clustering) to process the same workload, and tightly coupling multi-core processors (with all the assembly-level synchronisation and parallelism problems that shared memory brings with it). So I hope the following will shine a light on what happened next in the life of TPF, with more technical details to come for the determinedly curious… (Patrick O’Connor, 2019)
It will soon become clear as we explore the development of TPF that an extraordinary amount of cooperation has taken place between the users of the software, both amongst themselves and in concert with IBM. We have already seen how the three pioneering systems at American Airlines, Pan-Am and Delta were the product of a close working relationship with IBM, and this relationship between the users and IBM has continued throughout the lifetime of TPF up to the present day. Pan-Am, with its financial problems, disappeared, but American and Delta remained, along with some other airlines, notably United (or rather its computing off-shoot COVIA Corp) and Eastern (or rather its computing off-shoot System One). In fact the habit of cutting the computer division free as a separate company will be visited again as we approach the present day in our historical survey.

Many people who were only introduced to computers after the advent of the Personal Computer will probably have little appreciation for the industry as it was when those first PARS-like systems were created. Computer engineers were still struggling with operating system code that was too closely associated with the hardware it ran on to be easily moved to different machines. The quest for a solution to this problem was one of the driving forces behind the System 360 concept. For the first time, software that operated on one machine in the 'family' would work, largely unmodified, on any other machine in the same 'family'. The effect of this simple standardization on the industry is hard to overemphasize. With experienced computer engineers now free to spend their time developing better systems rather than rewriting the existing ones to work on the latest machines, progress was much more rapid. However, even today, there is a place for system code that is sympathetic to the hardware and is able to take advantage of every design feature for improved performance.
Modern software companies, particularly in the highly competitive PC marketplace, employ experts on each type of computer that they want their software to run on. These experts are charged with rewriting certain routines that interface with the hardware to 'fit' the actual target machine and obtain every last ounce of performance, at the expense of the portability of the code. This was the situation in the early days (or rather years...) of ACP/TPF. IBM supplied the source for the operating system to its customers, a practice which probably goes back to those days when ACP was just a part of the PARS system that included the applications that would need to be customized for the user's requirements. Nowadays the idea that an operating system from IBM would arrive, in its shrink wrap, with libraries of source code included would have many an MVS programmer rolling up his sleeves. This had two major effects on the development of the product as a whole. First, it helped maintain the close working relationship between the users and IBM, since these users were doing a splendid job of correcting the many mistakes to be found in the early releases of the software. Second, it had the negative effect of encouraging customization to a degree which meant that it would be highly unlikely that IBM would ever be able to ship the system without the source in the future. Another problem that would have faced IBM if they had shipped ACP/TPF OCO (Object Code Only) would have been support in the event of system problems. It is hard to imagine an airline, dependent on its 24-hour reservation system for its very existence, waiting for the local IBM System Engineer to resolve a problem that has caused their system to crater at peak time on a Monday morning. In fact the availability of the source has proved mutually beneficial, as I mentioned before, and it has certainly made the job of an ACP/TPF systems programmer one of the more interesting to be had in the programming world.
The close observance of the characteristics of the hardware that I mentioned earlier dogged ACP/TPF for more than 20 years. Only with the announcement of TPF V2 R4 did IBM solidly declare that TPF was a strategic operating system in their portfolio and that, since it was now able to use the XA Common I/O technique, support for new hardware from IBM would always be available in TPF at the same time as in other mainline operating environments. This announcement brought to an end more than two decades of the leading TPF users often having to write their own software to handle new devices that they wished to put on their systems to cope with the explosive growth they were experiencing as the world discovered air travel.

Another oddity littered through the history of ACP/TPF is RPQ numbers. The term 'RPQ' belongs to IBM and stands for Request for Price Quotation. What it actually means is that a (usually hardware) feature has been added, often at the request of one or two customers. This technique was common in the TPF world. In fact it even reached the stage of having a special set of model numbers for the IBM mainframes that included a whole host of special hardware features created for the TPF environment. As you can see, this reliance on an empathy between hardware and software was not ideal for the TPF customer. It meant that it was hard to choose a hardware vendor other than IBM, since it was not clear whether the other vendors could offer the special add-ons for the TPF peculiarities, or whether they would want to. It also became so much of a nuisance to IBM that they eventually scrapped the idea of differentiating between the higher performance machines built for the TPF user and the standard mainframes intended for the vast majority of its customers. Nowadays it is more unusual to see an RPQ applied to a modification necessary for TPF only, as IBM has made these modifications part of the standard product.
It is still IBM practice, I believe, to use an RPQ for a hardware modification necessary to support some of its own new software, for example. In its lifetime ACP/TPF has flirted with almost any technology that promised better performance. For a time the IBM fixed-head disks, the 2305s, were supported, but other advances, both in hardware and software, have made these devices an unnecessary complication to an already complex system. Intelligent DASD controllers with fast memory to provide caching have also been in use for some time. Unfortunately, for a long time the type of buffering used for TPF was incompatible with other operating systems, meaning DASD could not be shared in a single string between TPF and anything else.

Having said up to now that ACP/TPF craved anything that would give it more performance, perhaps one of the most interesting software developments during this time was the Hypervisor. The situation faced by most of the airlines, particularly those in North America that did not at that time fly to many other parts of the world, was that they needed the biggest, fastest machine that IBM could provide for the transaction load during the day, but often found that off-peak traffic used only a fraction of that expensive hardware. So the idea of the Hypervisor was born: a software package that would sit above the ACP system in the box and allow it to coexist within the machine with OS/VS. The constraints on the package were severe: it must not impact ACP by more than 5%, and it must have no effect on the system's overall reliability. It also had to be totally invisible to ACP, to the other system, OS/VS, and to all applications running under these two systems. This was achieved by using the interrupt handlers in ACP. These contain logic capable of recognizing whether any interrupt belonged to ACP or not, so it was relatively easy to modify them to process OS/VS interrupts as well.
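To put that interrupt-handler trick in modern terms: the sketch below is purely illustrative Python (the real logic was S/370 assembler inside ACP's interrupt handlers, and every name here is invented), showing a handler that decides whether an incoming interrupt belongs to the native system or to the hosted guest.

```python
# Illustrative sketch only: the real code lived in ACP's S/370 assembler
# interrupt handlers; all names below are invented for this example.

class InterruptRouter:
    """Routes each device interrupt to the system that owns the device."""

    def __init__(self, acp_devices, osvs_devices):
        self.acp_devices = set(acp_devices)    # devices belonging to ACP
        self.osvs_devices = set(osvs_devices)  # devices belonging to OS/VS
        self.trace = []                        # record of routing decisions

    def route(self, device):
        # ACP's handlers already distinguished "ours" from "not ours";
        # the Hypervisor change turned "not ours" into "give it to OS/VS".
        if device in self.acp_devices:
            owner = "ACP"
        elif device in self.osvs_devices:
            owner = "OS/VS"
        else:
            owner = "unknown"
        self.trace.append((device, owner))
        return owner
```

The point of the design is that the ownership test was already present in ACP, so hosting OS/VS needed only one extra branch rather than a full virtual machine monitor.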
OS/VS2 (MVS) was, apart from a test ACP system, the only guest that the Hypervisor was designed to host. This simplified the job of providing this 'virtual machine'; anything more general would have produced something akin to IBM’s VM, which would have been too much of an overhead. The guest system runs in problem mode, that is to say in user mode as opposed to system or supervisor mode. If guest code (OS/VS) is in control of the processor when an interrupt occurs, the native system (ACP with the Hypervisor) gets control, but the reverse is not true: if the guest system is not in control but receives an interrupt, the interrupt is queued until the guest once again gets control from the Hypervisor. Even privileged instructions from the OS/VS guest are simulated by the Hypervisor, to prevent the guest from obtaining control over the system and excluding ACP. This also has the effect of shielding ACP's reliability from faults in the OS/VS system, though it did degrade the perceived performance of OS/VS. This software was introduced in 1973 and was dropped with the release of the XA version of TPF, V2 R4.

The next major event in the history of ACP/TPF we are going to visit was known as the JADE project (Joint Airlines DEvelopment), which produced as a result the first ACP/TPF systems to operate with multiple mainframe machines linked together and sharing the same database. This was another example of the close working relationship between IBM and the major users of ACP.
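Read with modern eyes, the Hypervisor's treatment of its guest is trap-and-emulate virtualization. The toy Python below is illustrative only (the real Hypervisor was mainframe system code; all names are invented) and models the two guest-facing rules described above: interrupts for a guest that is not currently dispatched are queued until it runs again, and privileged instructions from the problem-state guest trap to the Hypervisor, which simulates them rather than letting them execute.

```python
from collections import deque

# Illustrative model only; the real Hypervisor was S/370 system code.

class GuestMachine:
    """Toy model of the OS/VS guest as seen by the Hypervisor."""

    def __init__(self):
        self.dispatched = False   # guest runs only when the Hypervisor says so
        self.pending = deque()    # interrupts held while the guest is off-CPU
        self.log = []             # what the guest eventually observes

    def interrupt(self, irq):
        if self.dispatched:
            self.log.append(("delivered", irq))   # guest handles it at once
        else:
            self.pending.append(irq)              # queued until redispatch

    def dispatch(self):
        # When the Hypervisor next runs the guest, drain the queued interrupts.
        self.dispatched = True
        while self.pending:
            self.log.append(("delivered", self.pending.popleft()))

    def privileged(self, op):
        # The guest runs in problem state, so a privileged instruction traps;
        # the Hypervisor simulates its effect instead of executing it.
        self.log.append(("simulated", op))
```

The queuing explains the degraded perceived performance of OS/VS: its interrupts could sit waiting whenever ACP held the processor, while ACP's own interrupts were never delayed by the guest.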