The Benefits of ODR in Complex Software Contract Disputes
Using the Forensic Systems Analysis methodology to arrive at and present
an expert opinion for Online Dispute Resolution purposes

By: Dr. Stephen Castell
Paper presented at the Third Annual Forum on Online Dispute Resolution, hosted by the International Conflict Resolution Centre at the University of Melbourne, Australia, 5-6 July 2004, in collaboration with the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP); Day 2, Tuesday 6 July 2004, Workshop Streams 14.00-15.30, 'Applications of ODR - Commercial Disputes'.

Summary
Software implementation contracts are frequently terminated with the software rejected amidst allegations from both supplier and customer, e.g. software/database errors/deficiencies, faulty design, shifting user/business requirements. An important technical issue on which the IT Expert appointed in such disputes is asked to give an expert opinion is: what was the quality of the delivered software and was it fit for purpose? ODR can bring benefits in presenting the objective findings of the IT Expert through online demonstration of the results of Forensic Systems Analysis, providing dramatic insight into the state of the software, saving time and costs, and facilitating settlements.

Introduction - Features of IT Disputes
Those who have been involved in litigious disputes over failed computer software projects would readily agree that, whatever the financial amounts at stake (and whatever the facts and circumstances of the contract between the parties, and the conduct of the subsequent software development), software construction and implementation cases present interwoven technical and legal issues which can be both arcane and complex - and which therefore prove costly and time-consuming to unravel.

CASTELL Consulting, an independent professional IT consultancy, founded in 1978, has been involved in a wide variety of such complex computer software litigation [1]. We have in particular been instructed as expert witnesses (in e.g. the UK, Europe, the Arabian Gulf, Australasia, the USA) in many legal actions concerning major software development contracts which have been terminated, with the software rejected amidst allegations of incomplete or inadequate delivery, software errors, shifting user specifications, poor project management, delays and cost over-runs. This work has been on behalf of Claimants and Defendants, software customers and suppliers, in the High Court (or equivalent), Arbitration, Mediation and other forms of ADR. In addition, I have personally acted as an ICC Arbitrator, and CEDR-trained Mediator; and as Technical Assessor to an International Arbitrator (in a Hong Kong case for tens of millions of US$), in a role similar to that which, under the recently introduced English High Court Civil Procedure Rules, has become increasingly familiar to many UK IT expert witnesses - being appointed Single Joint Expert ("SJE").

Forensic Systems Analysis
Over the years of examining mixed and varied software development disputes as appointed expert, CASTELL Consulting has developed a range of techniques for assessing and reading the "technical entrails" of failed, stalled, delayed or generally troublesome software development projects. It should be noted that such projects can these days often be a contractually uncertain mixture of "customised" software packages and "bespoke" construction. Many articles and papers have been written as a result of these experiences [2].

This CASTELL inquisitorial method, Forensic Systems Analysis [3], focusing as it does on testing of the software in dispute, is, I believe, capable of being developed into a protocol for presenting the objective findings of the IT Expert in the context of Online Dispute Resolution. I believe that, through online demonstration of the results of Forensic Systems Analysis, the IT Expert can provide illuminating and dramatic insights into the state of the software in dispute, and contribute mightily to saving time and costs in, and facilitating rapid settlement of, such technically complex disputes.

Furthermore, such techniques of Online Dispute Resolution and Forensic Systems Analysis could, I suggest, be used not merely in a dispute context, but also to assess the fragile status and troublesome characteristics of specific "problem software projects/contracts" before they stall, fail, or sink into litigation; and, more generally, as a positive and rigorous "litigation sensitive" Software Quality Assurance and Project Management Audit Method for large software construction and implementation projects, throughout their conduct. I do not, however, explore these further aspects and benefits in this short paper.

I here outline just some of the Forensic Systems Analysis components relevant to expert investigation in typical civil litigation over failed software development contracts, the application and findings of which could, I believe, with great benefit be presented online for Online Dispute Resolution purposes. [Further details will be available on a future website www.ForensicSystemsAnalysis.com].

Software "quality"
The most common, and arguably most important, issue on which the computer expert is inevitably asked to give a view in software development or implementation cases is: what was the quality of the delivered software and was it fit for purpose? This raises the question of just what is meant by "software quality". The ready answer from the experienced IT expert is that "quality" can only mean "fitness for purpose", in the sense of "does the delivered software meet its stated requirements?". Thus:

  1. "Quality" of software is a concept which is essentially dependent on the specification of what the software is expected to do and how the software is expected to perform in its defined environments. In other words, the yardstick for measuring and judging whether software is of appropriate quality and fit for its intended purpose is the Statement of Requirements defining what is required or expected of it; and

  2. Testing software against its Statement of Requirements is the only practical and universally accepted method of judging the quality of the software and whether or not the software is fit for its intended purpose.

This critical focus on testing the software in dispute against its Statement of Requirements has a different emphasis for different specific cases.
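
To make concrete what "testing against the Statement of Requirements" means in practice, the following minimal sketch (in Python) shows the shape of an acceptance test tied directly to a numbered requirement. The requirement number, function name and VAT rate are invented for illustration, and are not taken from any real Statement of Requirements.

    # Minimal sketch: an acceptance test tied to a specific numbered
    # requirement in the Statement of Requirements. The requirement ID,
    # function and VAT rate are all hypothetical.

    def invoice_total(net_amounts, vat_rate=0.175):
        """System under test: total an invoice including VAT."""
        net = sum(net_amounts)
        return round(net * (1 + vat_rate), 2)

    def test_requirement_R014_vat_on_invoice_total():
        """SoR R-014 (hypothetical): invoice totals shall include VAT."""
        assert invoice_total([100.00, 50.00]) == 176.25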

For example, in a case concerning an in-store EFTPOS system for a major national retailer, the crucial issue was whether or not the software supplier was likely to have fixed many outstanding errors and have had the system ready to roll out in time for the pre-Christmas sales rush. What was the objective technical evidence of the software house's "bug find and fix" performance? Were the bugs escalating, or was the software converging onto a stable, performant system? Were, rather, the constant changes in customer specification - as alleged by the supplier - perhaps to blame for the delays and the inability of the software to pass a critical acceptance test?

A case concerning a large University Consortium similarly focused on the apparent inability of the software developer to present a main module of the software system in a state capable of passing formal Repeat Acceptance Tests, with a number of faults appearing at each attempt at "final" testing (even though three earlier main modules had been successfully developed and accepted). How serious were these faults, and were earlier faults that had been thought fixed constantly re-appearing? Was the customer justified in terminating the contract on the grounds of a "reasonable opinion" that the software supplier would not resolve all the alleged faults in a "timely and satisfactory manner"? Was the supplier's counter-claim for a large financial amount for "software extras" valid, and could that explain the inability of the software to converge onto an "acceptable" system?

In another case - that of a real-time computer-aided mobilising system for a large ambulance brigade - the focus was on the response times of the software in a clearly life-or-death application. How well were the desired response, availability, reliability and recovery targets for the software contractually defined, and what was the evidence of the system's actual performance under varying load conditions?
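
Evidence of that kind lends itself to straightforward quantitative assessment. By way of illustration only, the following Python sketch checks invented response-time measurements against a hypothetical contractual target; neither the target nor the timings are drawn from the actual case.

    import math

    # Sketch: assessing response-time evidence against a contractual target,
    # e.g. "95% of responses within 2 seconds" - a hypothetical target and
    # invented timings, for illustration only.

    def percentile(samples, p):
        """Nearest-rank percentile: smallest value with at least p% of
        samples at or below it."""
        s = sorted(samples)
        rank = math.ceil(p / 100 * len(s))
        return s[rank - 1]

    response_secs = [0.8, 1.1, 1.3, 1.9, 2.4, 1.0, 1.7, 3.2, 1.2, 1.5]
    meets_target = percentile(response_secs, 95) <= 2.0  # False here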

Testing Incident Reports
The computer expert witness - often coming onto the scene of the failed project many months, sometimes years, after it has all collapsed - is usually presented with large volumes of project documentation, an important element of which is the set of software testing records. Typically, these are in the form of Testing Incident Reports ("TIRs"), and they can run into many hundreds, if not thousands or tens of thousands, for large-scale bespoke software development contracts.
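
To give a concrete flavour of that raw material, a TIR can be thought of as a structured record along the following lines. This is a hypothetical Python sketch: the field names and severity scale are my assumptions for illustration, not any industry standard.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class TIR:
        tir_id: str             # e.g. "TIR-0482"
        raised: date            # date the incident was logged
        closed: Optional[date]  # date fixed/closed; None if still open
        severity: int           # 1 (cosmetic) .. 5 (showstopper), assumed scale
        module: str             # software module in which the fault arose
        description: str        # tester's account of the observed behaviour

    def open_tirs(tirs: list[TIR]) -> list[TIR]:
        """TIRs still unresolved - the population the expert must assess."""
        return [t for t in tirs if t.closed is None]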

To simplify, the dispute may then come down to this. The customer alleges that the TIRs represented errors in the software which were critical, serious, incapable of being remedied, too numerous, or in some other way, or ways, either were, or summed to, a material breach of the contract by the software supplier, entitling the customer to reject the software and terminate the contract. The software vendor/developer, on the other hand, retorts that the TIRs did not constitute "showstopper" faults, that they were readily technically rectifiable, and that they anyway principally arose from the many and continuous changes in specification made by the customer - the customer was not entitled to terminate and had himself repudiated the contract in so doing.

The Forensic Systems Analysis Methodology: EFLAT, EAT and FORBAT
The expert hired by either of the parties in the dispute (or as an SJE) may address these issues using a number of Forensic Systems Analysis components, the most important of which, for the purposes of realising the presentational benefits of ODR, are likely to be EFLAT, EAT and FORBAT, outlined as follows.

EFLAT - Expert's Fault Log Analysis Task - Material Defect
During software development defects are routinely encountered, and routinely fixed, and there is generally nothing alarming about their occurrence. For the purposes of rejection of software and termination of a software development contract, any alleged defect must therefore be assessed using a strict test as to whether or not it is truly a material defect, that is, whether or not "the contract cannot be considered to have been performed while this defect persists". EFLAT, developed over the years through careful debate with many firms of instructing solicitors, and learned Counsel, uses what I believe is a sound protocol for testing whether or not any given software fault, in terms of its relevance to a breach and termination of a contract, is truly a material defect. This protocol is essentially that, to be a material defect, an alleged software fault must be:

  1. of large consequential (business) effect; and
  2. impossible, or take (or have taken) a long time, to fix; and
  3. incapable of any practical workaround.

The customer is quite properly entitled to define what is a "large" consequential business effect; and the supplier, equally, may put forward an appropriate sizing for a "long" time to fix - each from the standpoint of his own business/technical knowledge and experience, and in the context of the particular contract/project. Both views ought to be evidentially supportable. Both views - and, also, whether or not there is indeed a practical workaround - would be the subject of expert scrutiny and opinion.
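
The three-limbed rule lends itself to a simple mechanical expression. The following Python sketch is illustrative only: the monetary and time thresholds are placeholders of the kind the customer and supplier would respectively put forward, and the expert would scrutinise.

    from dataclasses import dataclass

    # Sketch of the EFLAT material-defect rule as stated above: all three
    # limbs must hold. The threshold figures are placeholders, not findings.

    @dataclass
    class FaultAssessment:
        business_effect_gbp: float  # estimated consequential business effect
        days_to_fix: float          # actual or estimated technical time to fix
        has_workaround: bool        # is any practical workaround available?

    def is_material_defect(f: FaultAssessment,
                           large_effect_gbp: float = 100_000,  # customer-defined
                           long_fix_days: float = 20) -> bool:  # supplier-defined
        """Material only if large effect AND long/impossible fix AND no
        practical workaround."""
        return (f.business_effect_gbp >= large_effect_gbp
                and f.days_to_fix >= long_fix_days
                and not f.has_workaround)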

EFLAT constitutes a careful re-running of the appropriate Acceptance Tests, under expert observation, with each TIR (or "Fault Log") raised during the test rigorously and dispassionately assessed according to the material defect rule. The outcome is a Scott Schedule (of Software Defects) with each fault particularised, stating why it was considered a breach of contract (by reference to specific contractually defined requirements), what its consequential effect was estimated to be, what the technical time to fix was (or was estimated to be), and whether or not there was any practical workaround available; giving the expert's independent view on all these individual elements, with, finally, an opinion as to whether or not the specific fault, in total, was a material defect. A pro-forma for the Scott Schedule (of Software Defects) is given at Annex A hereof, and such a pro-forma is, I believe, readily adaptable to being presented, for example via (updatable) web pages, in the context of Online Dispute Resolution.
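
As one way such a web presentation might be realised, the following Python sketch renders a Scott Schedule row per fault as a simple HTML table suitable for controlled web publication. The column headings are drawn from the elements just described and are illustrative; they are not a reproduction of the Annex A pro-forma.

    import html

    COLUMNS = ["TIR", "Contractual requirement breached",
               "Consequential effect", "Time to fix",
               "Practical workaround?", "Expert's view",
               "Material defect?"]

    def scott_schedule_html(rows: list[dict]) -> str:
        """Render one HTML table row per assessed fault."""
        head = "".join(f"<th>{html.escape(c)}</th>" for c in COLUMNS)
        body = "".join(
            "<tr>" + "".join(f"<td>{html.escape(str(r.get(c, '')))}</td>"
                             for c in COLUMNS) + "</tr>"
            for r in rows)
        return f"<table><tr>{head}</tr>{body}</table>"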

EFLAT is undertaken with a "prototyping" orientation, assessing first only a limited proportion of the TIRs, so that experience may be gained as to how difficult the full task is likely to be, how long all the TIRs will take to assess, and whether there may be technical obstacles in, say, reproducing the exact conditions of the Acceptance Test corresponding to those which obtained when the alleged faults were originally found. Not the least of these obstacles can be the basic evidential uncertainty over whether or not the version of the applications software and/or the database configuration and/or the hardware and systems software environment available to the expert (months or years later) precisely correspond to the system being litigated over.

Obstacles apart, early prototyping of EFLAT enables estimates of how much time (and therefore cost) is likely to be needed to complete the Scott Schedule (of Defects) in its entirety, enabling clients and instructing solicitors to take a considered view as to the full extent of expert investigation to be commissioned. Once again, such estimates, properly presented and shared through e.g. controlled web "publication", should be of great benefit in the conduct and costing of an Online Dispute Resolution.
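
The arithmetic of such an estimate is elementary; a Python sketch, with figures invented purely for illustration:

    # Back-of-envelope extrapolation from the EFLAT prototype: if a sample
    # of TIRs took a known effort, scale up to the full population so that
    # clients can take a costed view. All figures are invented.

    def estimate_full_effort(sampled_tirs: int, sample_days: float,
                             total_tirs: int,
                             day_rate_gbp: float) -> tuple[float, float]:
        days = sample_days / sampled_tirs * total_tirs
        return days, days * day_rate_gbp

    days, cost = estimate_full_effort(sampled_tirs=50, sample_days=10,
                                      total_tirs=1200, day_rate_gbp=1500)
    # 10/50 = 0.2 expert-days per TIR -> 240 days, 360,000 GBP for 1,200 TIRs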

EAT - "Extras" Analysis Task
It can be that, in software development projects of any significant size, there are many "contract variations" caused by the inevitable shift in the customer's or users' perceptions of what they require as, for example, they see the software actually being built and tested. This "specification drift" or "constant changes to requirements" is a well-known phenomenon in almost all engineering construction disciplines and presents a particular challenge to well-ordered project management, to ensure that such variations are at all times properly documented and controlled, and that both parties understand and agree the impact on project scope, timetable and costs which implementing all requested software changes could have.

Typically, for the software systems project which collapses and ends in dispute or litigation, the computer expert witness is asked to give opinion on whether or not there were indeed changes from the originally contracted software; and, if so, what was the quality of the additional software built; and to what financial remuneration (e.g. on a quantum meruit basis) the supplier may be entitled for providing such software "extras".

EAT comprises a methodical analysis of (1) the contractual documentation (in particular the Statement of Requirements, including any amendments or re-issues thereof during the project); (2) the work records of the software engineers who did the construction of the "extras"; (3) the items of software design, source code, functionality, execution and performance which it is alleged have been produced as a result of all this extra work; and (4) the financial amount claimed, and whether it is consistent with (1)-(3) and passes a "sanity cross-check" such as that provided by assessing the "£ or $ per delivered line of source code" standard software metric.
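
A Python sketch of that sanity cross-check, with invented figures; the benchmark band is an assumption, and any real benchmark would need to suit the particular project, technology and era.

    # Sketch of the "price per delivered line of source code" cross-check
    # for a claim for software extras. Figures and band are placeholders.

    def price_per_loc(claim_gbp: float, delivered_loc: int) -> float:
        return claim_gbp / delivered_loc

    rate = price_per_loc(claim_gbp=250_000, delivered_loc=8_000)  # 31.25 GBP/LOC
    plausible = 5 <= rate <= 100  # assumed benchmark band for bespoke work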

A pro-forma for the resulting Scott Schedule (of Software Extras) is given at Annex B hereof. Once again, a "prototyping" methodology is used to give client and instructing solicitors an early reading as to the likely time and costs needed to reach a complete opinion on all items of software "extras" claimed. And, once again, such a pro-forma and its associated insights are, I believe, readily adaptable to being presented with great benefit in the context of Online Dispute Resolution.

FORBAT - FORensic Bug Analysis Task
Always recognising that during software development defects are routinely encountered, and routinely fixed, and there is generally nothing alarming about their occurrence, the overall numbers of such "bugs" (as logged by the TIRs), and the pattern of their build-up and resolution, are nevertheless important indicators of the progress of software construction and testing.
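
One simple presentation of that pattern - with data invented purely for illustration, not drawn from any real case - is a plot of cumulative TIRs raised against cumulative TIRs closed, week by week: converging curves suggest a stabilising system, a widening gap the opposite. A Python sketch:

    import matplotlib.pyplot as plt

    # Cumulative TIRs raised vs closed over the testing period (invented).
    weeks  = list(range(1, 11))
    raised = [12, 30, 55, 85, 110, 128, 140, 147, 151, 153]  # cumulative found
    closed = [2, 10, 28, 55, 84, 108, 126, 138, 147, 151]    # cumulative fixed

    plt.plot(weeks, raised, label="TIRs raised (cumulative)")
    plt.plot(weeks, closed, label="TIRs closed (cumulative)")
    plt.xlabel("Week of systems testing")
    plt.ylabel("Number of TIRs")
    plt.title("Bug find-and-fix convergence")
    plt.legend()
    plt.show()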

Such indicators are unfortunately often misread by both the software customer and the software developer: in particular, the dramatic increase in TIRs and apparent "never-ending increase in bugs" during systems testing can be badly misinterpreted. The point is that systems testing (usually the responsibility of the software developer) is meant to find bugs and fix them - it is not being done properly if there is not a large build-up in recorded TIRs. This contrasts with acceptance testing (usually the responsibility of the customer), where "zero", or only a small number of non-serious bugs, is a not unreasonable expectation, particularly as acceptance testing should be undertaken with the appropriate attitude - for acceptance, not rejection, of the software proffered for testing.

FORBAT uses a number of standard quantitative analysis techniques to give an objective graphical presentation of the true "bug find and fix" performance of the software house, readily understandable, with a little explanation, to non-technical clients, lawyers or judges. The insights which spring out of these presentations are usually vivid (and incidentally can come as something of a surprise to the parties themselves). These are best explained by two examples, both taken from a real software project.



Dr. Stephen Castell CITP CPhys FIMA MEWI MIoD, Chartered IT Professional, is Chairman of CASTELL Consulting. He is an internationally acknowledged independent computer expert who has been involved in a wide range of computer litigation over many years. He is an Accredited Member, Forensic Expert Witness Association, Los Angeles Chapter.

