    QUALITY CONTROL PROCEDURES FOR ACCIDENT INVESTIGATORS

    First posted 26 Aug 1996


    BY LUDWIG BENNER, JR. AND IRA J. RIMSON

    An expanded version of this paper, with an example demonstrating its application, was published in forum, October 1991 (24:3) and March 1992 (25:2), available from the International Society of Air Safety Investigators, 5 Export Drive, Sterling, VA 20164, USA.

    ABSTRACT

    This paper describes the results of an inquiry into technical quality control needs and criteria for accident investigation outputs that purport to describe what happened and why it happened. To satisfy the needs and criteria identified, a quality control procedure that has been used to manage effectively the quality of reported descriptions of accidents produced by accident investigators is presented. An example of its application is appended.

    FOREWORD

    In a paper in the February 1991 ISASI forum, "Standards for the Conduct of Air Safety Investigation," Ira Rimson (MO0851) made an eloquent appeal for the development of quality controls for investigations. The author's experience in safety research and investigations fully substantiates this most recent appeal. This report of the author's efforts to introduce quality control over a 20-year period in government and the private sector was prepared to add momentum to a movement to develop and institute quality controls for investigations. Initially, a significant stimulus for the effort to develop and apply quality controls was the need to ensure that reports being developed for the National Transportation Safety Board satisfied the quality needs of the NTSB while the author was employed there (Benner, 1975). The publication in the ISASI forum of an article titled "Are These the Same Accident?" (Rimson, 1983), containing a remarkable comparison of two NTSB investigation outputs, provided more compelling evidence of the need for investigation quality control. The need was also reinforced in other research conducted about that time, involving the identification of investigation models and objectives and how their accomplishment was being assessed, during a review of most major United States Government accident investigation processes (Benner, 1983). That work involved a review of 17 major U.S. governmental agency investigation program documents, and disclosed little guidance for quality control.

    The basic concepts of understanding, prediction and control are fundamental to scientific inquiry. The scope of this paper is limited to the "understanding" part of the inquiry process - the part that purports to describe what happened, and why it happened. That output constitutes the foundation for almost everything else that flows from an investigation, so this paper addresses quality control of that investigation work product first. It does not address quality control of subsequent uses of the understanding - the judgments, conclusions, opinions or decisions based on the report of what happened - which is also needed, but is better discussed after we reach a consensus on how to control the quality of our understanding of an accident.

    INVESTIGATION QUALITY CONTROL NEEDS

    There has been little research reported in the literature to define quality control needs and criteria for accident investigation outputs in the context of understanding an accident. One of the few documents to address the issue was MORT SAFETY ASSURANCE SYSTEMS (Johnson, 1980), but its criteria are not concrete. Such guidance as exists is found in accident investigation manuals and is typically generalized and abstract rather than concrete. In these circumstances, during the early efforts to develop quality standards and controls, it was necessary to identify the needs by observation. The method selected was direct observation of quality control efforts at the National Transportation Safety Board and in other organizations.

    The general systems model provides a useful framework for guiding observations of quality control efforts in an ordered manner. Investigations are processes, and so is quality control. Use of the general systems model elements provided insights into the outputs desired of the investigation process, and what is required to produce those outputs. This familiar model proved to be very helpful, by forcing examination of the investigation process elements in an orderly way.

    Fig. 1 General Systems Model

    input-operations-output-feedback

    Let's look at what is involved in the processes, to see what we should look for to define quality control needs.

    Investigation Inputs:

    The occurrence. To make bean soup, you have to have some beans. To produce an accident investigation you have to start with an accident. For the purposes of this paper, it will be assumed that an accident has occurred, and the investigation of that accident will occur.

    Investigation knowledge and skill. The second input is the knowledge and skill of the individual who will be investigating the accident. That person has to know what an investigation is and the process required to produce investigation outputs, and then exercise the skill to produce the desired outputs. While that knowledge and skill are highly variable, and influence the quality of the outputs, it is assumed that the knowledge and skill will adapt to the quality criteria established for the work.

    Data created during accident. The third essential input to an investigation is data created by the accident. Data is the putty from which an investigator molds the investigation outputs. If an accident generates no data, you have no basis for determining what happened.

    Observations. A fourth essential input is observations of the data from an accident. When an accident generates data, the investigator has to know where to look for it, how to recognize it when he or she sees it, and how to use it. The investigator must make and document observations. Observation is used in the sense of a systematic noting and recording in a descriptive manner akin to scientific notation, of some object, property or condition.

    Accident Investigation Operation

    In the systems context, you have to do something with the inputs to produce an output. What is the investigation operation when viewed this way? Upon analysis, it consists of several minimal procedures.

    Acquire data. The first step in the operation of an investigation is to acquire data surviving the accident. The procedures for acquiring data will affect the quality of the outputs in several ways. First, if the data acquired is screened for use to prove an existing hypothesis, the output will be biased. To avoid bias, an investigator must search for and be prepared to accommodate or discard every item of surviving data. This can involve a wide range in the amount of data available, from a prodigious quantity to a meager quantity, depending on the nature of the accident. The investigation and the QC processes must recognize the need to filter some data from other data during an investigation, and provide criteria or procedures for doing so, in a way that will minimize differences about these decisions among investigators.

    Transform and organize the data. An investigator can't take a smashed aircraft or component and append it to a report. Rather, the objects, conditions or properties have to be viewed, and those observations of the smashed aircraft or component documented and transformed into data the investigation can use.

    This data transformation is required to permit organization of the data into a structure which will permit integration of data from numerous sources. Organization of the data is required as it is assembled to permit its use in the synthesis and production of the investigation outputs. When and how data are organized varies widely, accounting in part for the replicability problems we all recognize.

    Integrate data to synthesize a description of what happened. By transforming the data into appropriately designed data elements, and organizing it properly, the investigator should be able to "reconstruct" or form a description of what happened by systematically piecing all the data together into a valid scenario that can be tested and verified. The integration of the data is that step in the process when all the "pertinent" data has to be arrayed to lead to logically sound hypotheses about what happened. Typically, this data integration process leads to the synthesis of a description of what happened during the accident being investigated.

    Validate the description. Ideally, the description of the accident should enable an investigator to reproduce the accident process and achieve identical outcomes. That would, of course, be the ultimate demonstration that the investigator knows what happened and why it happened. However, that is not a practical proof for quality control purposes, if for no other reason than that the loss would be socially unacceptable. Typically, experienced investigators will use what might be termed "mental movies" to reproduce the accident in their minds. When they feel they can visualize what happened in their own minds, they generally are satisfied that they have achieved a proper description of what happened.

    Accident Investigation Outputs:

    Once the accident is understood, it is relatively easy to produce the outputs. Several formats are available for reporting the investigation results.

    Forms. The most widely used method for reporting what happened is the accident reporting form. Forms provide investigators with blanks to fill in and thus should be highly replicable in the data that they call for, and the order in which it is presented. Unfortunately, because they call for judgments, they often do not offer a high degree of replicability or validity, as can be seen in the example which will follow. Beyond that, they pose major problems for investigators - primarily deciding how to fit the investigator's observations into the blanks on the forms. In one organization, 92% of the information in one category of a form was reported as "other" when accident factors were listed on a form. It is not difficult to imagine how hard it would be to reverse that experience with a QC program aimed solely at making the investigators better form preparers.

    Written descriptions. The second most widely used form of investigation output is the written narrative description of an accident. Generally, these outputs are much more complete than information provided on forms. Again generally, written descriptions pose one significant dilemma: the written word is by its nature linear, while during accidents many things are usually happening at once.

    A derivation of this written format is the graphic or flow charted output format. Events and causal factors charts, logic trees and similar kinds of flow charts are used to show the flow of events that occurred.

    Verbal descriptions. A third output category is the verbal description of what happened. Typically, an investigator will make several verbal reports about what happened before the final work product is documented. While transient and thus not readily available for study, it is instructive for the purposes of this paper, as will be seen.

    Description of data that supports the description of what happened. Another investigation output is the data collected during the investigation, which flows into one or more outputs in addition to the description of the accident. Such data include, for example, a description of the system within which the accident occurred, procedures used, a photo or sketch of the site, and similar visual aids to help someone visualize and understand the setting of the accident.

    Other outputs. After the "what happened and why did it happen" are understood, many uses flow from the accident description. The description is used to identify and define problems demonstrated by the accident, to develop estimates of risks, to identify possible options to change interactions and relationships among events during future operations and assess their comparative effectiveness in reducing future risks, and to resolve questions about responsibility, costs and other social, legal and technical issues. The quality of the subsequent outputs is directly dependent on the quality of the accident description. GIGO (garbage in/garbage out) is especially apt in investigations.

    Feedback after the investigation

    On occasion, feedback about what happened may be introduced explicitly into the investigation process. More typically, the feedback is implicit, in the form of questions about the output, requested changes to the outputs, controversy, litigation, hard feelings, etc. The questions that are raised about the description of what happened, for example, can provide indications of investigation quality control problems, but only if someone is alert to this issue. Unfortunately, a large quantity of feedback about investigations is introduced in litigation, where differences are commonly resolved by committee judgments under legal rules rather than technical tests with technical rules.

    What this analysis makes clear is that all four aspects of the investigation process should be made candidates for quality control purposes. Outputs, however, provide a convenient point to try to enforce quality standards, because if your inputs and operations are inadequate, your outputs will be inadequate. Thus controlling output quality should lead to better quality inputs and investigation operations, as well as improved feedback.

    Direct observation of quality control efforts.

    Some form of QC program for investigation outputs has been observed in most organizations. These QC efforts are typically driven by abstract or ambiguous criteria expressed in terms such as "clear," "concise," "complete" and similar adjectives. Because these criteria consist of words high on Hayakawa's ladder of abstraction, the criteria left room for a broad range of individual interpretations, and outcomes were dependent on who signed the report last.

    Another common quality control process is a "peer review." In most instances where this procedure was used, it masked the absence of an agreed set of concrete or measurable criteria that could be applied reasonably uniformly by all.

    It was observed that the results of these processes are highly variable outputs. The forum article (Rimson, 1983) illustrates this point convincingly.

    A third informal but relatively effective process was the informal, one-on-one sharing of verbal descriptions of the accident among investigators. The process as it was observed to operate was found to be the most effective quality-improving procedure. As one investigator told another what happened, the listener tried to visualize what was being described, and put it into the proper sequential order. This resulted in continual refinement of the description until the listening investigator could visualize the accident successfully from beginning to end. Observations of this process provided the key insights into the process that is described below [1] .

    Without a rigorous set of quality standards and a quality control process, the replicability and quality of the description of what happened and why it happened continue to suffer, and usually little progress toward improving quality is achieved. For many investigators, the problem quickly becomes one of shifting criteria of acceptability.

    Investigation quality goals

    At this point, it is appropriate to report some fundamental goals for accident investigation outputs, flowing from the above discussion. First, there must be recognition that an accident is a process, and that a description of a process can be systematized. Secondly, investigators should be working toward a certainty of 1 for all the data used to produce a description of the accident process. Finally, investigators should be striving toward a certainty of 1 for the description of what happened, to raise the level of replicability.

    Accident as a process

    Describing a process is substantially different from describing a cause or causal factors, and much more demanding. If we insist on establishing quality standards for the process description, we will be much further ahead than we would be by focusing on causes.

    Data: is it valid?

    Beware of GIGO. The answer to this question depends in part on the integrity of the investigator and the investigator's ability to make and document actual observations, and not distort or misrepresent those observations in the documentation. It also depends in part on the methods used to acquire the data and how the data are transformed for use and integration. Missing a relevant input, or distorting an event by the way it is documented, has the same result: a distorted understanding of what happened. Finally, the methods used to record and organize the data to discover what happened can be critical. A frequently observed failure is the recording of conclusions rather than observations, for example, which deprives the user of the basis for testing the reported description.

    Description: is it valid?

    The bottom line for the investigator is to develop a valid description of the accident, from which action can be proposed, planned and implemented to reduce future occurrences and risks. Invalid descriptions have been observed to produce unacceptable consequences, including serious injustices to individuals and organizations; misdirected remedial efforts; needless controversy; and myriad other problems. Current criteria such as "clear" or "complete" are not helpful, and produce results such as those demonstrated by the 1983 forum article. How can we QC this work?

    Proposed investigation QC criteria

    How do we describe what happened?

    Listening to verbal reports of accidents and the exchanges that followed suggested a useful approach to defining criteria for describing what happened. The content of these verbal reports by investigators varied significantly. It was noted that when the accident was described in concrete terms of where it happened, who did what, and when, and the actions were sequenced properly, relatively few questions had to be raised about what happened. It became apparent that as the listener had this word picture painted by the investigator, the listener's mind could easily follow the scenario, visualizing what happened in the form of a "mental movie." As long as no blank or double-exposed frames of this mental movie pop onto the mental screen, we can "picture" what happened fairly easily. The key was stating who did what, when and where.

    Consider now what creates confusion as one listens to an accident being described. When the narrator states the events out of order, either with respect to time or location, confusion begins almost immediately, because one cannot reconstruct an orderly movie. When the narrator uses ambiguous names (he, they), one has difficulty picturing what was happening. When the narrator uses the passive voice ("he was struck on the arm"), one can't visualize who did the striking, and the picture gets confused. When the narrator attributes an action to two people, or to the wrong person, confusion arises quickly. By looking for the seeds of confusion, a number of useful quality control criteria were identified. The main criteria follow.

    Kipling's faithful servants: who, what, when, where, why

    Who did what

    As a first step, a quality investigative output will give every person or object one name and use only that name throughout the rest of the accident description or discussion.

    Secondly, the output will present the accident data and accident description in the "active voice," where the name of the actor who does something comes before any words describing what the actor did. An abbreviated way to state this is to ensure that data is transformed into an actor + action format. This is the basic "event" with which descriptions of accidents are developed. Think of the actor + action format as the basic event building block for accident descriptions.
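    The actor + action building block lends itself to a simple record structure. The following sketch is illustrative only; the field names and example events are assumptions, not part of the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """One building block of an accident description: who did what, when."""
    actor: str    # one consistent name per person or object
    action: str   # concrete action, stated in the active voice
    time: float   # relative time during the accident (arbitrary units)

# Two hypothetical events, each naming a single actor and a concrete action:
e1 = Event(actor="pilot", action="retarded throttle", time=1.0)
e2 = Event(actor="engine", action="lost power", time=2.0)
```

    Keeping each event to one named actor and one concrete action is what makes the later sequencing and causal-link checks mechanical rather than interpretive.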

    When

    When you describe what someone or something did, you have to describe when that occurred, relative to some other event as a reference point. A way to do that is to graphically display the basic events in their relative time sequence during the accident, if this can be determined.

    Where

    In addition to timing the events in their proper order, the events must be ordered in their spatial sequence to make sense. You can't claim someone fell up two long flights of stairs, for example.

    Why

    This is probably the most subtle and the most abused quality control criterion. For one event to interact with another, there must be a causal relationship or linkage between the events. This causal linkage must be established for the events in the description, because the causal links define why the accident process continued to its conclusion - the harmful outcome. If there is no causal linkage, the event may be nice to know, but it is not needed to understand the accident! Cause connotes an if/then relationship: if A occurs, then B follows. If B does not always follow A, then A alone is not the cause of B; rather, A plus some additional event(s) An are the cause of B, or A causes B plus some additional outcome(s) Bn.

    In an accident description, this kind of critical logic testing can be used to select the events that must be reported to describe what happened and why it happened. Tests for causal linkages are well known to critical thinkers. The tests require application of necessary-and-sufficient reasoning from logic. They provide a basis for establishing investigation quality control to determine and ensure that precede/follow logic governs the accident description.
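    The necessary-and-sufficient test can be sketched as two small checks over a set of observed cases, where each case records which events occurred in it. This representation is an assumption for illustration, not the paper's notation:

```python
def necessary(a, b, cases):
    """A is necessary for B if B never occurs without A."""
    return all(a in case for case in cases if b in case)

def sufficient(a, b, cases):
    """A is sufficient for B if B always follows when A occurs."""
    return all(b in case for case in cases if a in case)

# Hypothetical observations: each set lists the events seen in one case.
cases = [
    {"A", "B"},   # A occurred and B followed
    {"A", "B"},
    {"C"},        # neither A nor B occurred
]
print(necessary("A", "B", cases))   # B never occurred without A

# If B is later observed without A, A is no longer necessary for B:
cases.append({"B"})
print(necessary("A", "B", cases))
```

    This mirrors the if/then reasoning above: when B sometimes occurs without A, A alone does not explain B, and additional events must be sought.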

    These ideas provide the essential criteria to permit a check of the quality of accident descriptions and data.

    Suggested quality control procedure

    The following technical procedure provides a way to do a quality control check of an accident description, using these criteria to ensure that the description is valid and complete, explains why the accident occurred, eliminates unnecessary data from the description, and identifies uncertainties. Although relevant, this quality control procedure does not address subsequent uses of the accident description, judgments or opinions about the accident or its occurrence, recommendations, or other aspects of the investigation quality control issue. It is based on the premise that the quality of other aspects of the investigation depends on the quality of the accident description, because if the quality of the description is unsatisfactory, any subsequent uses of the accident description are tainted. Quality criteria and procedures for those aspects of an investigation require separate consideration. [2]

    The QC procedure

    This procedure is designed to ensure that the who, what, when, where and why are properly reported, using the concepts just described. To apply the procedure, start with any description of an accident and take the following steps.

    1. Find the reported events.

    Simply go through the information provided and highlight each actor and concrete action set reported. Each actor + action set is called an event. The reported events are indicated by underlining. This forms the basis for your assessment of the data quality. If the data are not properly transformed into the actor + action format, they will give you problems when you go to use them. You may have to dig hard to find all these actor + action (event) sets, because the actor and the action may be separated by many words or, in bad cases, by sentences. For whatever reasons, sometimes the name of the actor is not stated when an action is reported. "Passive voice" sentences ("was" or "were"), pronouns or other ambiguous actor names, or abstract action words like "failed" or "erred" obstruct formulation of your mental movie of what happened. Do not assume or infer complete events from these entries; let the report preparer do that. Rather, use the names of people or objects in the report, or the action words, by recording the given part of the event only, leaving the other part of the event either blank or represented by a question mark.

    After you have underlined or circled all the "event building blocks" (highlighting also is ok) go to the next step.

    2. Organize the highlighted events.

    Next, record all the event building blocks (actor + action + any descriptive words needed to describe the action) on a medium like "Post-it"™ sticky note pads. As you record them, put each new event on a worksheet laid out to provide time and actor coordinates. Use one row for each actor, and a time line across the top with tick marks to indicate approximate times under which the events are aligned. Arrange the events so their order from left to right represents the relative time when they occurred. As this display progresses, look at each event relative to each other event to be sure each is in the right sequence. If two actors were doing something at the same time, the events should be placed one above or below the other, to indicate that the actors represented by the rows did something at the same time - in parallel, rather than in series.
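    The worksheet layout - one row per actor, events ordered left to right by relative time - can be sketched as follows. The data structures and example events are illustrative assumptions:

```python
from collections import defaultdict

# Events as (actor, action, time) tuples, recorded in no particular order:
events = [
    ("driver", "applied brakes", 3.0),
    ("driver", "saw obstacle", 2.0),
    ("truck", "entered intersection", 1.0),
    ("truck", "skidded", 4.0),
]

# One worksheet row per actor; events in each row sorted by relative time:
worksheet = defaultdict(list)
for actor, action, time in events:
    worksheet[actor].append((time, action))
for row in worksheet.values():
    row.sort()

for actor, row in worksheet.items():
    print(actor, "|", " -> ".join(action for _, action in row))
```

    Rows sharing a time coordinate show parallel activity by different actors, just as vertically aligned sticky notes do on the paper worksheet.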

    3. Apply sequencing tests

    When all the highlighted event building blocks have been placed in their proper rows and in their proper time "columns" along the rows, review the worksheet to see if they are aligned properly according to their spatial location during the accident. Now add events for which you have only the actor or the action - not both - and place those events on your worksheet. At this time, it is also convenient to review again the source of your events, and add events that can be inferred reasonably. When all the events are entered, review each pair to ascertain whether the order in which they are displayed is logical with respect to time and space. If not, rearrange the events to get them into their proper order. When both time and spatial relationships have been tested and are satisfactory, proceed to the next step.
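    The pairwise time-order review of one actor's row can be sketched mechanically; events with an unknown half are carried with a `None` time, matching the question-mark convention above. The function name and representation are assumptions for illustration:

```python
def sequencing_problems(row):
    """Return adjacent event pairs displayed out of time order in one
    actor's row. `row` is a list of (time, action); time is None when
    the event is incomplete (only the actor or the action is known)."""
    problems = []
    known = [(t, a) for t, a in row if t is not None]
    for (t1, a1), (t2, a2) in zip(known, known[1:]):
        if t1 > t2:  # displayed order contradicts relative time
            problems.append((a1, a2))
    return problems

# Hypothetical row with the last two events out of order:
row = [(1.0, "opened valve"), (3.0, "read gauge"), (2.0, "closed valve")]
print(sequencing_problems(row))
```

    A non-empty result tells the reviewer exactly which sticky notes to rearrange before proceeding to the causal-link step.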

    4. Add causal links

    This next step is the critical quality control test step. What you try to do is add linking arrows to describe the sequences or flow of the changes that produced the outcome you are investigating, from the first event in the accident process through the last. An inability to add causal links shows where you have a quality control problem - in the form of an investigation data, data processing or reporting failure - and thus where the accident description or the reported data need work.

    The procedure is based on working with event pairs or sets on your worksheet. You begin with the left-most (earliest in time) event pair on the worksheet and examine the event pair for a causal relationship. If the left event had to occur to make the right event occur, you have established an initial "necessary" causal relationship between the events. In other words, the right event could not have occurred until the left event occurred in this accident. (Do not introduce expected relationships yet; forget about what usually happens and focus on what the report says did happen in this accident.)

    The next step is to examine the same events to establish a "sufficient" logical relationship between the events. If the left event was the only event that was required for the right event to occur, you have established an initial "sufficient" causal relationship between the events. In other words, the left event was both necessary and sufficient to produce the right event of the pair, and in this accident did so.

    At that point, you draw a linking arrow from the left event to the right event, signifying a causal link between the two events. Each linking arrow on your work sheet shows you that you have a causal relationship between two events.

    Often you will find that more than one event must occur before another event will occur. This is similar to an "and" logic gate in a fault tree (see Figure 2). "Or" logic gates are not permitted in a final accident description, because they indicate uncertainty about what happened. It is better to use a question mark if the causal links cannot be established, to indicate the uncertainty in the description. If you find that the left event was not sufficient by itself to produce the right event, look for the other event(s) that had to occur to produce the right event. For each causally linked event pair, draw a linking arrow between them.

    Figure 2. Causally-linked event sets


    Necessary and sufficient tests may produce causally-linked pairs (1), converging event sets (2) or diverging event sets (3). Uncertainties (4) are indicated by a question mark between the events, indicating an unmet data need. Uncertainties are not objectionable if they are faithfully represented in the text of the report.

    After you establish all the necessary and sufficient causal relationships for the initial event pair, repeat the same logical reasoning for each pair of events on the worksheet. If any events are not linked to the other events, those events identify problems with the accident description that you should resolve. At the conclusion of this process, you know what the accident description contains, and how well it enables users to visualize and understand the accident process. If the worksheet display has gaps or flawed logic, they are apparent from your QC worksheet. The output can be returned to the preparer with concrete shortcomings very visible for all to see (and correct).
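    The final check - that every event is tied into the causal chain - can be sketched as a simple scan of the linking arrows. Representing links as event-name pairs is an assumption for illustration, not the paper's notation:

```python
def unlinked_events(events, links):
    """Return events that no causal link touches. Each unlinked event
    signals a quality control problem: either it is extraneous to the
    description, or a link (and its supporting data) is missing."""
    linked = {e for pair in links for e in pair}
    return [e for e in events if e not in linked]

# Hypothetical worksheet contents: four events, two linking arrows.
events = ["E1", "E2", "E3", "E4"]
links = [("E1", "E2"), ("E2", "E4")]   # arrows drawn on the worksheet
print(unlinked_events(events, links))  # E3 is not tied into the chain
```

    An unlinked event is exactly the kind of concrete, visible shortcoming the worksheet is meant to expose for the report preparer to resolve.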

    While a discussion of the methods is beyond the scope of this paper, the causally-linked events are also utilized to search for options to control future risks, to control the quality of recommendations flowing from the accident, and for other purposes.

    Handling problems disclosed by the QC task

    You will discover several kinds of problems when you do the causal links for the accident you are checking. A typical problem is gaps in the links between the first and last events on the worksheet. You can either seek more data through investigation, acknowledge the gaps in your understanding of the accident, or develop hypotheses to provide potential descriptions to fill the gaps, using systematic methods like backstep, fault trees, etc. Alternatively, you can reduce the scope of the report. If you leave the gaps, recognize that you are depriving yourself of potential corrective action choices and may be induced to promote less effective safety recommendations, or misdirect safety efforts.

    A second kind of problem is extraneous events that are left over after you have completed your causal links. What do you do with them? The best course of action is to remove them from the description, because they mislead rather than illuminate users of your reports, and provide "hooks" for others to grasp to raise irrelevant, unnecessary and invalid questions about the accident. At their worst, they divert efforts to control future risks from bona fide needs demonstrated by the accident.

    A third kind of problem arises when the report does not transform the accident data into useful building blocks, so you can't establish the causal links or event displays. This is a clear warning of a report of unacceptable quality, and you should return it to the preparer until it passes your QC check.

    A fourth kind of problem is that some managers will want a "simple" description of a complex accident. The worksheet provides a way for you to present the complexity of the accident that most managers will accept once it is demonstrated.

    A fifth kind of problem is that individuals keeping accident statistics will want their forms completed whether or not the accident data fits the blanks. This can be overcome by providing the QC worksheets with the forms, and filling in the forms on a best-efforts basis. Most forms will not fare well with this quality control method, but be assured that the problem lies with the forms rather than the method. The narrative sections of forms seldom fare much better when QC'd this way, but that problem can be ameliorated by providing the worksheet to the form's preparer and user.

    Another problem is that individuals who want to make unwarranted judgments, draw unjustified conclusions, propose recommendations to solve last year's problems instead of those demonstrated by this accident, or insist on a single cause of the accident will be quite upset with your QC results. Your QC outputs, with their multiple causal links, expose sloppy logic, loose interpretations of data and unsubstantiated hypotheses, and are very frustrating to hip-shooters accustomed to using poor outputs and abstractions. You will just have to endure their frustrations and hope they will begin to understand soon.

    EXAMPLE APPLICATIONS

    To illustrate the application of this accident investigation report quality control method, the reports from the 1983 forum article were analyzed with the QC procedure. The worksheet produced from that analysis is shown in the appended example. The results of the QC check using this method can be seen on the worksheets. (Contact ISASI or the Webmaster for a copy via fax.)

    When the method has been applied to other accidents, the results ranged from a report that contained NO events during the accident to reports that produced a relatively complete set of causal links from beginning to end. In all trials, gaps in the investigation findings (and in the understanding of the accident process) were identified, indicating quality problems.

    CONCLUSIONS

    In the absence of a generally accepted technical quality control procedure for accident investigation outputs, investigators' descriptions of accidents are often accepted uncritically. A candidate quality control procedure to identify quality problems with accident descriptions is presented. This procedure has been used successfully for several years in diverse investigations. The results of its application to two reports are described in the appended example, which demonstrates what the quality control procedure can accomplish.

    The criteria and method presented here can be utilized immediately with useful results. Work to develop additional or new criteria and perhaps alternative procedures should continue until even more effective methods can be devised. It would be helpful for investigators to share their QC development experience freely with other investigators.

    The quality control process presented here can be applied during an investigation rather than at the end of the investigation. When it is, it can provide real-time guidance to investigators by pointing to gaps in the accident description while they are in the field and have access to additional data. This is not a trivial advantage to investigators.

    References

    1. Benner, L., ACCIDENT THEORY AND ACCIDENT INVESTIGATION, Proceedings of the Society of Air Safety Investigators Annual Seminar, 1975, p. 149.

    2. Benner, L., ACCIDENT MODELS AND INVESTIGATION METHODOLOGIES EMPLOYED BY SELECTED U. S. GOVERNMENT AGENCIES, report to U. S. Dept. of Labor, Occupational Safety and Health Administration, Washington, DC, February 21, 1983.

    3. Hendrick and Benner, INVESTIGATING ACCIDENTS WITH STEP, Marcel Dekker, New York, 1987, p. 197.

    4. Johnson, W., MORT SAFETY ASSURANCE SYSTEMS, Marcel Dekker, New York, 1980, p. 373.

    5. Rimson, I., ARE THESE THE SAME ACCIDENT? ISASI forum, 1983, No. 3, pp. 12-13.

    6. Rimson, I., STANDARDS FOR THE CONDUCT OF AIR SAFETY INVESTIGATION, ISASI forum, Vol. 23, No. 4, February 1991, p. 51.

    7. See the report of the use of computer-generated video graphics in the Delta 191 air crash trial in the ISASI forum, Vol. 23, No. 4, February 1991, p. 81, which describes efforts to reconstruct a "movie" to aid visualization of the accident events.

    8. See Hendrick and Benner 1987 for discussions of these aspects of investigations and control of their quality.