The Investigation Process Research Resource Site
Ludwig Benner, Jr.
National Transportation Safety Board



At least 5 different perceptions of the accident phenomenon, 44 different reasons for investigating accidents, 7 investigative processes, 6 general methodologies, and 3 differing types of output requirements affect accident investigations. These differences reflect the lack of a unifying conceptual framework for accident investigation and safety; corrupt data search, selection, organization and reporting decisions during investigations; and result in spurious accident data.

A tentative unifying perception of accident phenomena, with supporting assumptions, principles and rules of procedure for their investigation was synthesized from the findings. Four games simulating the resultant accident investigation process have been developed. The process has been applied to improve hazardous materials emergency response decision making, and evaluation of safety countermeasures.

N.B. 2007. Though it may not be clearly stated, this report is based on direct observations of investigation processes, interviews with investigators and others, and work products during investigations, and actual participation in investigations. The significance of this use of primary data was not recognized until the results of alternative research tools became available subsequent to this study. Awareness of the issues reported here began to permeate the investigation community, attracting researchers from other disciplines to undertake studies using the tools of their disciplines, usually relying on secondary data. The differences in the results led to my awareness of the critical significance of primary data for studying investigation processes. LB

ACCIDENT INVESTIGATORS HAVE PROBLEMS that affect a lot of people besides the investigators. I am an investigator. Let me tell you about those problems, and how they might be overcome.


Accident investigators investigate accidents. What is this thing called “accident” that investigators investigate? When they begin an investigation what should be the scope of their investigative efforts? What data should they seek during an investigation? How will the “right” data be recognized? When will the investigator have enough data? How should this be determined? How should the data be organized, summarized or reported? How does the investigator determine if the outputs or work products from the investigation are satisfactory, and whom the data and outputs must satisfy? At what point does the investigative task end and the analytical task or use of the outputs begin? What conclusions can be drawn from the outputs, and how should their validity be tested? Equally importantly, where should one turn for the answers to these questions, and how does one assess the quality of the answers? And finally, how do accident investigations contribute to safety in the most effective way?

These are not rhetorical questions. Every investigator is faced with these questions each time an accident investigation is begun, because every accident is different from previous accidents in one or more ways. Presently each investigator answers such questions with “common sense” and with “good judgment” seemingly predicated on experience and academic background, rather than on generally accepted decision rules that lead to reproducible outputs. In these circumstances, were the investigators’ tasks of such a nature that final work products were theoretically consistent and replicable anyway? Observed differences in the purposes of investigations, in the scope of matters investigated and in the nature of data reported indicated a negative answer. Observations of uses of the work products suggested that it is these differences that create problems for users of investigative work products, and persons involved in the accidents. For the users, unreproducible, inconsistent and incomplete work products can impede discovery of safety problems, misdirect corrective efforts, generate controversy, undermine credibility, and confuse the users’ audience. For the persons involved in the accidents, work products can overly simplify complex relationships and result in unjust blame for the accident, or even worse effects. For all concerned, investigators’ problems get magnified when the outputs are used.

For these reasons, investigators’ problems need to be addressed and resolved. This paper is addressed to that need. As the questions were pursued with other investigators, early ill-defined uncertainties accumulated rapidly. Upon analysis, they have been categorized in terms of accident investigation objectives, scope, methods, outputs and uses for study purposes.


Why bother to investigate a specific accident? Investigators may be told to do an investigation by their employers, or they may elect to conduct an investigation on their own initiative. Of over 200 investigators queried informally, almost all had difficulty articulating the objectives for the investigation they were conducting. Most replied in terms of reasons for investigative programs, rather than objectives for a specific case. General answers most frequently included “prevent accidents,” “complete the forms,” “save lives,” and “find the cause.” Reasons reported since 1974 have been summarized in Appendix A; 44 reasons have been reported. Note the substantive differences in reasons. Be aware that only once was “understanding the accident phenomenon” stated as a reason. Because of the nature of the reasons expressed, attempts to transform these reasons into objectives that could be used to measure the quality or success of the investigation were fruitless.


During an accident investigation, every investigator makes an implicit determination of the beginning and end of the accident by deciding what data will be reported. Attempts to find out how this decision was reached by practicing investigators remain unsuccessful. Very few investigators are even willing to discuss the question. Most evade the question or begin to talk about examples which defied generalization. Not one of the investigators queried articulated a rule of procedure for identifying the beginning or end of an accident. In practice, this difficulty resulted in frequent disagreement among investigative team members about matters that should be reported, and how far backward in time to track “the accident.”


Informal inquiries among investigators showed essentially no agreement about the “best” accident investigation methods to be used. The choice of methods was usually intuitive, and was not addressed explicitly by any investigators. Additionally, no pressure from users of the work products was exerted on investigators to address this issue. Investigators became very uncomfortable when the issue was raised, and were unable to discuss the question of criteria for selecting methodologies for their investigations, probably because they had not previously attempted to articulate them.


Specifications for work products from accident investigations were, for practical purposes, non-existent. The closest approaches to specifications were scattered instructions for the use of forms for reporting accidents. However, even the simplest forms required interpretations by investigators, as will be shown later, and none of the instructions provided a basis for determining the quality of the data reported with accident-based criteria. The numerous outputs often generated from a single accident further demonstrated the problem confronting an investigator seeking criteria by which to judge the acceptability of the work products--both with respect to one’s purpose and the realities of the accident. The same dilemma confronts users and evaluators of the investigators’ outputs.


When one attempts to use investigative outputs for evaluation purposes, or to reach a better understanding of accidents, problems with the scope, methods and outputs are accentuated. The almost universal lament about the “accident data” available for subsequent uses was experienced personally by the author in the evaluation of safety program efforts in his field. It is discussed in detail in a 1971 report of the National Transportation Safety Board. (1) The problem cited remains unresolved.

As the author’s informal inquiries continued, these practical problems were discovered to reflect even more basic theoretical problems. As the research progressed, they became more clearly understandable. They can now be characterized in terms of perceptions or concepts, assumptions, principles and rules of procedure. The clarity arose from attempts to identify and understand the theoretical underpinnings of the safety and accident investigation fields. Candidate concepts, perceptions and unifying frameworks were actively sought, analyzed, tried by the author in investigations, and found to be deficient, usually in more than one respect. Available theory did not provide workable criteria needed to answer investigators’ questions cited above. Additionally, symptoms of this theoretical problem were discerned in the works of others. (2) Perhaps the most vivid recent symptom was the publication of safety professionals’ views about why accidents happen. (3) The reasons varied widely and significantly, when judged by the criteria that will be discussed shortly. Other visible symptoms are the disputes which often arise about “probable cause” and its determination. Many other symptoms could be cited. They all suggest that the lack of a theoretical “glue” to resolve the investigators’ dilemmas is a real and continuing problem.

To sum up the problems, accident investigators have been resolving their investigative dilemmas in every accident--each in his or her own way, using personally developed decision criteria. The results are not hard to anticipate. Replicability, verification, prediction, communication, utility, consensus and credibility suffer. Users never have the “right,” “good” or “reliable” (4) data, or enough of it. The more one delves into these problems, the greater the enormity of their consequences seems. Ponder the effects on safety policy, public opinion, safety programs and objectives, research outputs or on the assignment of personal fault or culpability, and the need for their resolution can be seen.


A word about the research is in order, because it does not meet traditional research design standards. It has, however, been tested. It grew out of personal needs that were informally addressed, into a determined pursuit of rational understanding that would form the basis of investigative decision and evaluation criteria. The research was driven by personal concern, practical needs, process realities, and predictive validation necessities.

The personal concerns initially related to friends in the fire service whose lives might depend on the quality of the lessons learned and reported during the author’s accident investigations. Firefighters and other emergency response personnel bore the highest risks from accidents involving hazardous materials. Until the reasons for these high risks could be understood, they could not be reliably controlled. Accidents did not occur frequently enough to develop a traditional statistical base for analysis of the reasons, so an alternative methodology had to be found.

The practical needs reflected job performance requirements for the author and fellow investigators of the phenomenon called “accidents.” Principles and rules of procedure for the investigative process that would produce consistent and defensible results would provide great comfort to any investigator who was involved in some of the controversies which arise after investigations are concluded.

Process realities involved both investigative and accident processes. The investigative process realities included the diverse interests of parties in large investigations and the diversity of views, skills and ideas brought into an investigation by individual specialists--all of which had to be reconciled in a convincing way by the lead investigators. The accident process realities included the continuing need to reconcile existing concepts and views about the phenomenon with the realities of the accident observed during the investigations.

Prediction validations demanded the research. Evaluations require norms. Ways to identify these norms had to be related to ways accident data could be reported, so the evaluator could “close the loop” between the predictions and the accident experience. Traditional concepts and methods demonstrated little promise in actual investigations. Did this mean that the task was impossible? If so, why bother with accident investigations? Until this question could be resolved, the author had to continue to guess at the data which would serve this need.

As the need for investigative criteria became apparent through the informal inquiries, study of the accident processes, the frameworks within which they functioned, the process outputs, process participants, interactions within the processes, and process results was initiated. That study, in turn, required methodological choices by the author. The traditional comparative approaches failed to provide working insights, so an attempt was made to formalize the investigative method itself to support the study effort. The success of the method developed led to an attempt to develop a coherent unifying framework and set of assumptions, principles, rules of procedure and predictive methodology. Since the research was primarily part-time outside of working hours, record-keeping was kept to a bare minimum, and no nice neat records of early observations are available.

While not formally tested in classical ways, the research findings have been tested during accident investigations informally. As new insights were suspected, they were tested against customary procedures during investigations, and also with simulations in the classroom. These tests clarified investigative task criteria, facilitated discovery of safety problems, improved the efficiency of the investigations, and enhanced credibility of the investigation outputs.

Experimental applications of the findings during classroom simulations triggered deeper insights into both the problems and ways to resolve them, including the “mental movies” and “advancing time” concepts discussed below. These interactions with experienced, practicing investigators and students are acknowledged with gratitude.


Because of the scope of this research, and its initial informality, a full report of all the findings is too lengthy for a paper. Further detail is contained in the works referenced. Of special significance are the findings about accident perceptions, investigation processes, methodologies, outputs and relationships among these findings.


Accident investigators investigate accidents. What is this thing called “accident?” The question is not new. The author’s initial approach was to examine models of an accident developed by previous researchers, and test them in practice. A list of the models tested and the deficiencies found in them is shown in Appendix B. Each model was tested for its potential value in providing investigative criteria during investigations. No single model fulfilled the need.

The next step was to try to answer the questions using “definitions” of accidents found in the regulations, literature, safety publications, investigation manuals, and periodicals. Review of over 200 different definitions disclosed that the diversity of opinions about the nature of the accident phenomenon was even greater than suspected. The findings from the accident definitions are tabulated in Appendix C.

The third step was to observe the methods used by individual investigators during investigations, and to try to discern from their actions, comments and decisions what their perceptions of the accident phenomenon were. These observations and the work described above led to the identification of five general and differing perceptions of the accident phenomenon. (5) Each perception was found to be accompanied by a set of implicit assumptions, rules for investigative procedures, and “principles.”

The five perceptions, and their implicit assumptions, principles and rules of procedure (theories?) include 1) the single event perception and related “cause” theory; 2) the chain-of-events perception and “domino” theory; 3) the determinant variable(s) perception and “factorial” theory; 4) the branched events chain perception and “logic tree” theory; and 5) the multilinear events sequences perception and “process” or “p-theory.” A brief description of each is found in Appendix D.
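The multilinear events sequences perception can be illustrated with a minimal sketch. The actors, actions and times below are hypothetical; the sketch assumes only that each event is one actor performing one action at a known time, and that events are laid out on parallel, per-actor timelines:

```python
from dataclasses import dataclass

@dataclass
class Event:
    actor: str      # who or what acted
    action: str     # what the actor did
    start: float    # time the action began (arbitrary units)

# Hypothetical events from a single accident.
events = [
    Event("driver", "applies brakes", start=0.0),
    Event("vehicle", "begins to skid", start=0.4),
    Event("driver", "steers left", start=0.9),
    Event("vehicle", "leaves roadway", start=1.6),
]

# Group events by actor to lay out parallel timelines;
# the time values keep the columns aligned across actors.
timelines = {}
for ev in sorted(events, key=lambda e: e.start):
    timelines.setdefault(ev.actor, []).append(ev)

for actor, evs in timelines.items():
    print(actor, "->", [(e.action, e.start) for e in evs])
```

Because each event carries its own time, interactions between actors remain visible, which is the property the text attributes to this perception.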

After these perceptions were identified, an attempt was made to link them to commonly held safety concepts and philosophies, in order to attack the “close the loop” problem. Relationships between these perceptions and numerous safety concepts became visible. For example, the single event view is clearly linked to the concept of “cause” of accidents, and the body of investigative procedures related to that determination. The idea of unsafe acts and unsafe conditions appears to be linked inextricably to the domino view, as is the idea of “causes” and safety actions to “break the chain of events.” The branched chain and multilinear events sequences perceptions, dealing with interactions and probabilities of events sets, compel a risk perception of safety, and the view of a non-zero probability of an accident with any activity. The determinant variable perception relates clearly to the “causal factor” view of safety problems, and data sampling for safety analysis. Each view has the effect of influencing a different philosophy of safety, and the safety programs which ensue. These perceptions also help to interpret the meaning of safety concepts in concrete terms, as will be shown in the next section.


The early dialogues quickly revealed that there is no single accident investigation process. As individual investigators were observed, differences in the processes used by the investigators were observed. The processes ranged from the simple “investigation” of a few minutes duration by an employee’s supervisor in a “minor” accident to a multi-million dollar effort for a major accident. Among the differences observed were the sizes of technical staffs, and their technical capabilities; authorities; funds available; methods employed; work products; and the effectiveness of the investigative efforts.

One way to categorize the processes identified is in terms of their staffing. Each category of investigative processes is briefly described in Appendix E. The categories are the 1) one-person process; 2) intra-organizational ad hoc team process; 3) intra-organizational standing team process; 4) multi-organizational ad hoc team process; 5) multi-organizational standing team process; 6) judicial investigation process; and 7) legislative investigation process.

When examined for their underlying perceptions, purposes, objectives, scope, methods, outputs and applications, process differences and commonalities were observed. The processes incorporated at least one of the five perceptions cited above, and sometimes more than one. Their purposes ranged from getting forms filled out to restoration of national confidence in a large system accident. Methods ranged from intuitive methods in one-person cases to the use of sophisticated analyses and calculations in others. One-person methods included the use of multiple choice codes for completing forms, while other processes attempted to identify sequences of events, determine “cause” and report all the facts and circumstances of the accident. One-person, judicial and legislative processes relied most heavily on witnesses’ statements. The standing team processes relied on a more balanced split between data from people and things. Some scenario modeling was used by standing teams or in teams with trained investigators. Uses varied widely, forming no readily discernible patterns within a process.

Common to almost all process observations were attempts by investigators to determine “cause(s)” and usually also a “sequence of events.” As a rough outline of a course of events evolved, most investigators tended to disaggregate events for a more detailed understanding of what happened. Within team processes, common time and spatial events testing procedures were noted.

Investigators felt impelled to “get all the facts” about an accident so the “facts” could be analyzed and reported. The distinction between fact gathering and analysis of the facts was sharply drawn almost universally, and is encouraged by the format of most narrative reports. This may be a carryover from legal concepts related to the weighing of evidence under adversary proceedings. Criteria for identifying “facts” during an investigation, however, were either absent or were stated in equivocal terms; the determination that data constituted a “fact” was a judgment call which sometimes generated heated controversy. The “facts” sought seemed to be most heavily influenced by the investigators’ personal background and experience, and by the assumed hypotheses formed early in the investigation. (6) Interestingly, organizations using accident reporting forms customarily provided at least one additional level of persons to verify or evaluate the completed forms from the investigator before processing them for accident “analysis.”

The observed uses of the data reported on forms suggested that a distinction should be made between “primary” and “secondary” accident investigations. The primary investigation incorporated direct observations of debris or physical evidence and witness’ information collected by the investigators. The secondary investigations were conducted solely with data reported by others after primary investigation efforts, to draw conclusions about the accidents. For example, field investigators usually completed the report forms used by the secondary investigator.


At least six different basic methodological approaches were observed in accident investigations. The approaches included events reconstruction, statistical, adversary, modeling, simulation and “hunt and peck.”

Events reconstruction methods used “events” deduced from physical evidence remaining after the accident, witness interviews and speculations by the investigator to “reconstruct” the sequence of events viewed as the accident. The methodology drew heavily on physical science disciplines and examination techniques. The degree to which events were decomposed or broken down into sub-events was left to the investigators’ judgments. Frequently, one or several events were selected as the “cause” or “causes” or “probable cause” or “proximate cause” of the accident. Events were undefined, and therefore the nature of the events described varied widely. Reported events sequences often stopped with a crash or collision. Reports often included recommendations designed to “break the chain” of events.

Events reconstruction methods were augmented with logic trees, which provided for treatment of concurrent as well as sequential events. Logic trees culminated in a single undesired event, selected by the investigator. However, these logic trees did not show actual time relationships among interacting events connected with “and” gates.
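The limitation described above can be seen in a minimal logic-tree sketch (all event names hypothetical): the structure records which contributing events combine through gates into the single top event, but nothing in it records when the events joined by an “and” gate actually occurred relative to one another.

```python
# A logic tree: gates combine contributing events into one top event.
tree = {
    "event": "tank rupture",             # the single undesired top event
    "gate": "AND",                       # both branches required
    "children": [
        {"event": "overpressure", "gate": "OR",
         "children": [{"event": "relief valve stuck"},
                      {"event": "overfilling"}]},
        {"event": "weakened shell"},
    ],
}

def leaf_events(node):
    """Collect the basic (leaf) events of the tree, left to right."""
    kids = node.get("children", [])
    if not kids:
        return [node["event"]]
    return [e for child in kids for e in leaf_events(child)]

print(leaf_events(tree))
# Note: no field in this structure carries timing, so the actual time
# relationships among the AND-gated events are lost -- the gap the
# text describes.
```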

Statistical methodologies were used for secondary investigations, usually involving several accidents. They included manipulation of data using statistical rules of procedure to identify determinant variables in an accident occurrence. Both static and dynamic descriptors were used indiscriminately, and the accident scope issue was not addressed. A fatal defect with these methods from an investigator's perspective was the failure to treat time relationships among events, although sequences were often addressed. Observation of conditions being represented as events was common. A frequent complaint about the outputs was the inability to identify corrective actions after “problems” were identified. (7) For example, the conclusion that drinking is involved in half the highway accidents does not provide a basis for corrective actions.

Adversary methods were used frequently in the judicial and legislative investigative processes. The premise that opposing interests would bring out the truth about the accident forms the basis for the formation of investigative teams with representatives of several parties involved in the accident. Information was often gathered in an adversary setting, particularly during witnesses’ “interviews.” The “facts” gathered were tested informally by discussing their apparent logic and consistency against the summary hypotheses that incorporated these “facts.” The method was disciplined by rules of evidence and other procedures drawn from the judicial processes, but the outputs were usually incomplete, and not subjected to any rigorous investigative disciplining criteria such as “beginning” and “end” tests. Issues reported were largely judgment calls by investigators, and no methodology for the calls could be detected.

Modeling took several forms. One observed form was a “mental movie” procedure, in which the investigator tried to form a movie of the accident in his or her mind as the accident data was being gathered. The mental movie provided a model into which data would be fitted as it comes to the investigator’s attention. As the model grew, the unknowns or gaps in the movie could be identified and overcome before the investigation was finished. The movies were developed intuitively, and beginning and end points were not consciously selected. The finished movie constituted an accident model, complete with settings and actors, actions and outcomes. This kind of modeling seemed to explain the events reconstruction methodology origins.
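The “mental movie” procedure can be sketched as an ordered sequence of frames, in which spans the investigator cannot yet fill appear as explicit gaps to be closed before the investigation ends. The times and scenes below are hypothetical:

```python
# A "mental movie" as an ordered sequence of (time, scene) frames;
# a scene of None marks a span the investigator cannot yet explain.
frames = [
    (0.0, "truck enters curve"),
    (1.2, None),                    # gap: what happened here is unknown
    (2.5, "truck crosses centerline"),
    (3.0, "collision"),
]

# Listing the gaps tells the investigator what data is still missing.
gaps = [t for t, scene in frames if scene is None]
print("unresolved gaps at t =", gaps)
```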

Flow charting was another kind of modeling observed. Events flow charts describing the accident mechanism or scenario in its sequential order were developed and recorded on paper in chart form. The charts, in some cases, implied or presented the timing and duration of events. (8) Logic trees were a form of this kind of modeling during the investigations, and sometimes were included in the outputs. However, charts rarely constituted the sole outputs.

Scale modeling was another technique used in accident investigations. Use of scale models with witnesses during interviews helped them explain their observations to investigators. The attitude of aircraft, their flight paths and other data were sometimes conveyed to investigators with such models.

Two other observations about models were noteworthy. First, events flow charts or models did not have to deal with variables in an accident investigation, because the events had a certainty of one, i.e., the accident happened. Secondly, inadequately disciplined models were often observed. Curious mixtures of actions and conditions within supposed events sequence models were not uncommon. Another defect in most models was the violation of the “advancing time” discipline. Time was represented by the order in which events were presented and linked with arrows. However, arrows often pointed in opposite directions, creating the suggestion that time flowed forward and backward. In accidents, time was never observed to have flowed backward; participants in accidents never got to pass through a given time segment a second time! Undisciplined use of arrows without regard to the implications about time flows, and the mixing of static and dynamic descriptors in models, do not represent the accident phenomena investigated by this author.
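The “advancing time” discipline lends itself to a mechanical check. The sketch below (hypothetical events and times) flags any arrow in an events model whose head does not occur later than its tail, i.e., any link implying that time flowed backward:

```python
# Events with assumed occurrence times, plus arrows (links) between them.
times = {"A": 0.0, "B": 1.0, "C": 2.0}
links = [("A", "B"), ("B", "C"), ("C", "A")]   # last arrow points backward

def violations(times, links):
    """Return the links whose head event does not occur later than its tail."""
    return [(a, b) for a, b in links if times[b] <= times[a]]

print(violations(times, links))   # -> [('C', 'A')]
```

A model that passes this check still has to be screened for the other defect the text names, the mixing of static conditions with dynamic events.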

Simulations constituted another methodology. Crash simulations intended to reproduce automotive accidents were widely used to form the basis for safety actions. Flight simulations, explosives tests and similar reenactments of hypothesized accident scenarios were observed too. Simulations were usually thought of as reenactments by the investigators. The reenactments were used both to develop data about gaps in their understanding of what happened, and to formulate hypotheses by seeing if the simulations resulted in the same outcomes. Simulations also permitted investigators to vary the assumed events hypothesized during investigations, to assess the effects of the changes in the accident scenario. Simulations constitute one way of demonstrating that the investigator fully understands an accident mechanism.

“Hunt and peck” is the catch-all term used to describe unstructured investigative approaches observed. Some investigators--especially inexperienced ones--have no methodology in mind when they begin an investigation. This methodology is indicated when an investigator must visit the scene of the accident before anything else can be done, in the hope that some insights will come from just looking at the accident. No useful results have been observed with this methodology.

While not distinguished separately, comparative analytical methods were employed in each of the investigative methodologies described above. In investigations, comparisons of the expected versus the actual actions, events or conditions were frequently addressed, and the determination of the expectations was an important task for many investigators. Comparisons also form the basis for the use of checklists and accident report forms; the comparison in the one case is between the expected and actual investigative tasks, and in the second case is between the form designers’ view of the accident, the investigator’s view and the realities of the accident.
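The expected-versus-actual comparison can be sketched in a few lines; the behavioral norms below are hypothetical:

```python
# Norms an investigator expects, versus what the investigation found.
expected = {"brakes applied": True, "speed under limit": True, "lookout kept": True}
actual   = {"brakes applied": True, "speed under limit": False, "lookout kept": False}

# Each discrepancy between expected and observed behavior flags a
# line of inquiry for the investigation.
discrepancies = [k for k in expected if expected[k] != actual.get(k)]
print(discrepancies)
```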

Supporting these general methodologies are at least 22 investigative examination techniques that the author observed. These techniques are listed in Appendix F. The quality criteria for these examination techniques usually reside within the disciplines represented, and are not published in useful forms for accident investigators’ use. Examinations were usually performed without substantive test plans. They did not address the entire accident phenomenon when they were used, and they usually focused on a narrow set of events. On a few occasions, these examinations constituted the principal investigative effort.


Three types of investigative work products were observed. They were, in the estimated order of frequency, 1) completed accident report forms; 2) narrative accident reports; and 3) models. Sometimes, more than one kind of work product was observed from a single investigation. Supporting materials for these work products included documentation such as photos, test reports, maps, sketches, diagrams, etc.

Completed forms usually were designed by someone other than the investigator; often they were designed by insurers, researchers, or regulators, seeking specific information to serve their needs. Entries were specified in what appeared at first to be concrete terms, but which upon analysis were found to require extensive interpretation by the investigators. Typically, forms were not fully completed by investigators in all detail, or else non-specific entries were used (“other”) when all the blanks had to have some kind of entry. Every form required investigators to make judgment calls about the entries, for reasons that included the inapplicability of the form to their specific accident, lack of investigative data, misperceptions about the accident phenomenon that found their way into the forms, and conflicts with entry specifications. Using the five accident perceptions described above, the form content and entry specifications were found to be internally consistent with one of the perceptions, or in some cases two of the perceptions, when they were analyzed.

Observed difficulties with the use of forms in accident investigations included limitations on the opportunities to discover new insights into the accident phenomenon being investigated; distortion of data about an accident because of the need to fit actual observations into predetermined specifications that didn’t fit the actual observations; and the tremendous variations in the scope of the data about an accident that were reported.

Narrative reports were observed in two forms: written and verbal. Report length (reflecting the scope of the data about an accident) varied from a few minutes’ duration to over 300 pages plus appendices. No universal pattern was discerned for narrative reports. However, a general format used in National Transportation Safety Board reports was observed frequently in other reports. That format provides for presentation of facts, analysis, conclusions, and recommendations sections, in that order, in written reports about accidents. (9) Non-governmental reports had no discernible pattern beyond a narrative description of what happened, followed by a discussion of investigative, cause or related information. In narrative reports of accident investigations, the accident was described, without exception, as an events sequence. Analysis sections had no common pattern.

Most organizations prescribing narrative reports provided some form of investigative manual. These supporting manuals contained varying levels of detail about investigative purposes, procedures, examination methods, reporting formats, and, on occasion, safety philosophy. Manuals often prescribed matter to be reported, but did not provide criteria for determining whether the quality of the matter covered was adequate. The International Civil Aviation Organization and the Energy Research and Development Administration investigation manuals are typical of the better manuals available, but neither treats the quality criteria question explicitly. (10, 42)

Models are listed as a type of deliverable, but they are rarely the sole deliverable. At the present time, models are usually appended to reports. The ANSI reporting format calls for a crude model of accidents, and provides limited specifications for the outputs. (11) However, the ANSI model is too abbreviated to be useful for understanding accidents. (12)

Recommendations are often considered an output from an accident investigation. However, the preparation of recommendations from information developed during an accident investigation seems more properly to be an end use of the investigative output, rather than an integral function of the investigation. The view that an investigation should produce recommendations is apparently tied to the accident prevention purposes of accident investigations, rather than the risk-based perception of safety reflected in the events sequences perceptions. Of all functions related to accident investigations, the development of recommendations was the least structured, and the most dependent on the investigators’ common sense and good judgment. Research into countermeasure development and evaluation has disclosed only one major published effort to present countermeasure theory and principles for the investigator. (13)

Supporting documentation should not be confused with total outputs, but their contribution to understanding accidents can be significant. An aerial photo of an accident site, for example, contains data for both the investigators and the users of outputs. However, specifications for supporting documents and for their role in the outputs were skimpy. Only the judicial processes provided criteria for evaluating data, in that rules of evidence were used. (14)


These findings were interrelated. An individual’s perceptions of the nature of the accident phenomenon influenced the individual’s reasons for investigating an accident. These reasons determined the process selected, the scope of the investigation, and the methods used. These in turn influenced the work products. Both the perceptions of an accident and the deliverables from the investigation influenced the eventual applications of the work products. These relationships are shown in the following model.

Fig. 1 — Influence of accident perceptions on accident investigations

When one considers that there are at least 5 differing accident perceptions, 44 reasons for an accident investigation, at least 7 different investigation processes, at least 6 basic and 22 supporting methodologies, and at least 3 types of work products, one gets insights into the range of choices available to investigators. Add to these choices the undefined scope of investigations and the resulting additional variations in investigators’ decisions, and the impossibility of achieving reproducible investigative results becomes unmistakably clear.

Sometimes more than one set of choices was observed in the investigations of a single accident. For example, following one accident, nine separate investigations were conducted. An obvious question arises: why didn’t one investigation with one investigative output serve everyone’s need for an explanation of the accident? The linking of several investigations of one accident was not addressed in any accident manuals reviewed. Witnesses have been observed to be questioned as often as five times: what is the likelihood of getting identical statements each time? Did all the investigators have access to all the residues or debris that survived the accident, even after destructive testing had occurred? In multiple investigations, who did what testing? These kinds of questions suggest a need to consider the larger question: why not one investigation? The answer seems to lie in the differences in perceptions of the accident phenomenon and resultant perceptions of “safety,” which drive different persons to seek different information from the accident to serve their narrow purposes.

Another relationship concerns applications of investigative findings for evaluation purposes. Not one of fourteen major accident investigation manuals treats the relationship between investigations and “safety” predictions. Validation of “safety” performance assumptions on which countermeasures are instituted is still apparently considered the domain of “researchers.” While this linkup has been examined in the highway and hazardous materials safety fields, (15,16,37) actual use of the predictive analyses and logic by investigators during the planning and initial stages of an accident investigation was observed in only two actual cases.


The findings have spawned development of a new, potentially unifying framework for accident investigations and safety, with an accompanying new methodology for investigators. The framework and the use of the methodology have been taught with games that simulate the investigative procedure. The principles and methodology have been applied to achieve improvements in hazardous materials emergency response decision making.


The tentative new framework can be described most clearly by referring to an orchestra and a musical score for a symphony. (38) It is based on the process perception of the accident phenomenon, because that perception most nearly seems to reflect the realities observed in accident investigations.

An orchestra consists of many musicians and instruments which must work together to produce a melodious output. When these “actors” play a symphony, they work from a plan (the score) that specifies what each musician and each instrument must do during the entire symphony. If all adhere to the plan, within close tolerances, the music envisioned by the composer will be reproduced successfully, and the purpose of the activity will be achieved. But if something occurs to disturb the musicians and keep them from achieving their intended performance, some sour notes, or possibly even disruption of the symphony can occur. For example, a perturbation like the collapse of the conductor could stop the music. The disturbance can be viewed as being similar to an “accident.”

Sometimes, there is no score or plan, as when musicians get together for a “jam session.” During such sessions, the musicians improvise the music they play. If they are skilled and experienced, their music will be melodious and pleasing most of the time. They may have to stop occasionally when they lose the beat, or misplay notes. If unskilled musicians try to engage in “jam sessions,” the results are more erratic, and the sessions will be disrupted frequently because of coordination and timing problems, as well as sour notes. These interruptions are unintended, but they occur. They too can be viewed as “accidents.”

Conceptually, these analogies parallel activities in which accidents occur. Activities are conducted in a way that achieves a dynamic equilibrium (melodious music) among the interacting actors (musicians) engaged in the activity (performance). This dynamic equilibrium, or homeostasis (17), among the actors requires constant adjustive (18) interactions within certain tolerances (19) to proceed in the temporal and sequential order demanded by the activity to achieve a successful outcome. While the activity is in the homeostatic (or “at risk”) state, the actors interact within the required time and spatial boundaries, undergoing continuing changes of state. These changes of state occur in response to actions designed into the activity, or in response to changing relationships among the actors, as when a sour note occurs. When an actor does not adjust to a perturbation and attain the new state needed to sustain homeostasis, the activity is disrupted with some harm: the music stops. The transformation from the homeostatic or “at risk” state to the unintended harmed state is analogous to the “accident process.” The beginning of the “accident” process is the perturbation to which an actor had to adapt within the required time and spatial constraints, but did not. The end of the accident can be viewed as the newly harmed state of the involved actors. A harmed state is one in which an actor can no longer continue the function required by the activity to sustain homeostasis, or which requires ameliorative treatment.
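The homeostasis-perturbation-harm transformation described above can be expressed as a small state model. The sketch below is purely illustrative: the class, the actor names, and the numeric tolerances are assumptions introduced here, not part of the methodology itself.

```python
# Illustrative sketch (hypothetical names and tolerances): an activity
# stays in a homeostatic ("at risk") state as long as each actor adapts
# to perturbations within its tolerance; an unabsorbed perturbation
# transforms the actor into a harmed state -- the "accident process."

from dataclasses import dataclass


@dataclass
class Actor:
    name: str
    tolerance: float          # largest perturbation this actor can absorb
    state: str = "homeostatic"

    def adapt(self, perturbation: float) -> str:
        """Attempt the adjustive interaction; return the resulting state."""
        if self.state == "homeostatic" and perturbation > self.tolerance:
            # The actor failed to attain the new state needed to
            # sustain homeostasis: the activity is disrupted with harm.
            self.state = "harmed"
        return self.state


musician = Actor("first violin", tolerance=0.5)
assert musician.adapt(0.3) == "homeostatic"   # perturbation absorbed
assert musician.adapt(0.8) == "harmed"        # beginning of the "accident"
```

The point of the sketch is only that “accident” here names a transformation between states, not a single event, which is what the investigative methodology that follows is built to reconstruct.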

The analogy provides insights into relationships between safety and accident investigation. The role of managers or “activity designers” is to devise the “score” that actors (animate and inanimate) engaged in the activity can follow to achieve the desired outcome, and to staff the activity with skilled actors that can follow the “score.” Safety analysts’ predictive role is to identify passages in the score that are likely to give the actors problems, and either get the score changed, provide better instruments, or give the musicians plenty of practice so they can play the passages successfully. Accident investigators monitor the activity’s performance; they need to find out what the score was, and explain the accident (transformation) process which occurred in terms of the homeostatic state, the perturbation(s) that occurred, the adaptive reactions, the successive changes of state that ended in the harmed state(s), and the time or spatial constraints on the actors. Another logical safety function would seem to be the monitoring of continuing performance to determine how well the safety controls are working, and whether other difficult passages are indicated by some sour notes that did not end in accidents, i.e., “near misses.” In any performance of the score, there is a nonzero probability that the musicians will have an “accident.” However, these accident scenarios must be understood before their probabilities can be estimated or reduced with adequate certainty. This means that the accident “scores” should be available to the activity “composers” and analysts if the safety efforts are to be adequate. It also means that the “accident scores”--the investigative outputs--should relate to the composers’ and the analysts’ scores.


Techniques used by the composer of the symphony can be adapted to accident investigation. A composer predictively specifies the timing, duration and nature of the actions for each actor in the orchestra, so that when they interact, their efforts will produce the desired musical sounds. The format used for these specifications is a graphic display of concurrent actions by each actor in the orchestra, using symbols arrayed in what might be described as a multilinear events sequences flow chart. This format provides for all the ingredients needed to display an accident process flow chart--including the actors, their actions relative to a time reference and relative to other actors’ actions, and a sequential order for displaying these actions. By tracking the actions (changes of state) of each actor involved in an accident, and establishing time relationships among these actions, investigators reconstruct the accident process. A “score” for the accident process can be developed by recording these retrospectively derived actions on a comparable multilinear events sequences flow chart.

This concept has led to an investigative methodology based on principles related to the above discussion. The principles and methodology help to discipline accident investigations by providing generally applicable criteria for investigators to self-evaluate their investigative tasks and outputs during an investigation. The most significant principles follow. (20)

Think events: an event is one action by an actor, linked to a change of state. This principle forces the investigator to structure accident data into events “building blocks” used in graphic displays of the accident sequences; focuses the data search on actors involved in the accident; and concentrates attention on what each actor did from the beginning of the accident to its end.

Think event sequences: this principle forces an investigator to fit the events derived from the data into their temporal and spatial order. The sequential tracking of actions by witnesses is another application of the principle, because it helps the investigator structure the interview in a way that assures access to all the witness’ observations or concerns.

Make mental movies: this principle provides a framework for arranging the events sequences of several actors into a visual referent in which the continuity of sequences can be tested by the investigator. Gaps in the movie indicate unknowns that the investigator needs to resolve to fully understand or explain the accident. The technique also screens out data that do not fit into the movie, and thus minimizes the non-sequiturs that might be reported. Movies provide for the practical implementation of an important investigative Law:

“Everyone and everything always have to be someplace doing something.”

The Law demands that a complete investigation account for each actor involved in the accident, even if the actor was passive or “just resting”; however, it does not require that every actor be reported. The controlling criterion is the “change of state” for events recorded on events flow charts.

Advancing time: This principle requires investigators to establish the times for each event recorded, and is the basis for the time logic testing used to qualify entries on events flow charts. Every entry on a chart must be tested for the validity of its placement relative to other events. This is done by establishing the relative timing of events pairs known to the investigator, based on where and when an actor did something during the accident or on physical laws that govern sequential behavior. The positioning of an event on the events flow chart is disciplined by an advancing time scale, so precede/follow relationships among events pairs are not violated. The time testing and positioning of each event as it is acquired constitutes a real-time quality control test for ordering either observed or inferred events discovered by the investigators. The constant application of this principle as data are acquired is of special value in that data validity is quickly tested, and early delineation of gaps in the understanding of the accident process remains visible until the gaps are resolved. This further focuses the data search efforts, facilitates data organization, and guides the recording of data during the investigation. With an arrow convention to show the flow of events with time, a crucial defect in other investigative processes is overcome.
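The time-logic test lends itself to a simple mechanical check. The sketch below is a hypothetical minimal illustration (the event structure, actor names, and times are assumptions, not the author's actual charting tools): an event known to come earlier must never be positioned after an event it is known to precede.

```python
# Sketch of the time-logic test described above: each event carries an
# actor, one action (linked to a change of state), and a position on an
# advancing time scale. Events and times here are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class Event:
    actor: str       # who or what acted
    action: str      # one action by that actor
    time: float      # position on the advancing time scale


def check_pair(earlier: Event, later: Event) -> bool:
    """Test a known precede/follow pair against its charted positions."""
    return earlier.time <= later.time


brakes = Event("driver", "applies brakes", time=11.0)
impact = Event("vehicle", "strikes abutment", time=12.0)
assert check_pair(brakes, impact)            # placement is valid

misplaced = Event("vehicle", "strikes abutment", time=10.0)
assert not check_pair(brakes, misplaced)     # flags a gap to resolve
```

A failed check does not prove which event is misplaced; it only makes the inconsistency visible so the investigator can resolve it with further data, which is the quality-control role the principle assigns to the test.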

These principles have been organized into an investigative system that is based on graphic representations of the accident phenomenon. The system incorporates a TIME LINE to discipline the positioning of events sequences in these displays; an EVENTS MATRIX for the orderly array of each actor’s actions; an ARROW CONVENTION to show the flow of valid precede/follow interactions among the actors; and a COUNTERMEASURE TAB technique to identify candidate countermeasures that would change the accident “score.” The system is illustrated in Appendix G. (22)
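As a rough illustration of the EVENTS MATRIX idea (one row per actor, actions arrayed along a shared time line), the sketch below builds a crude text stand-in for the graphic display. The events, actors, and times are hypothetical examples, not data from any investigation.

```python
# Hypothetical sketch: group events by actor and order each row along
# an advancing time scale, approximating a multilinear events display.

events = [
    ("driver",  2, "sees obstruction"),
    ("driver",  3, "applies brakes"),
    ("vehicle", 4, "begins to skid"),
    ("vehicle", 6, "strikes abutment"),
]


def events_matrix(events):
    """Return {actor: [(time, action), ...]}, rows ordered by time."""
    matrix = {}
    for actor, t, action in sorted(events, key=lambda e: e[1]):
        matrix.setdefault(actor, []).append((t, action))
    return matrix


# Render each actor's row; "->" stands in for the arrow convention.
for actor, row in events_matrix(events).items():
    line = " -> ".join(f"t={t}: {action}" for t, action in row)
    print(f"{actor:8} | {line}")
```

Even this crude text form preserves the two properties the system depends on: each actor's actions stay in their own row, and the shared time scale keeps the rows comparable so gaps and interactions between actors remain visible.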


The investigative process has been incorporated into four accident investigation games that simulate the accident investigation process. (22) These games replicate the development of an accident investigation plan in a team investigation process; the acquisition of a witness’ complete story during an interview; the development of a debris testing plan; and the organization, testing and reporting of accident data during a report evaluation exercise. These simulations have enabled students to identify and articulate criteria to guide investigators during accident investigations, and to develop a common basis for evaluation of investigative programs and deliverables.

Another result of the simulations has been the clarification of specific deficiencies in present investigative methods, which will probably have the long term effect of bringing about changes to overcome these deficiencies in the students’ organizations.


The principles and methods described have been applied to identify safety problems with emergency responses to hazardous material accidents, and to improve performance of the firefighters involved in those emergencies. In a 1971 hazardous materials accident investigation (21), firefighters’ actions and decisions were tracked to find out what happened, using some of the events display methods that were in early stages of development. The tracking disclosed flaws in the decision making process that were linked to the way the firefighters had been programmed to deal with such emergencies. After discovering these flaws, ways to overcome them were developed by predictively tracking, step by step, the actions of hazardous materials cargoes and containers, the firefighters’ actions and decisions, and their interactions with potential victims. When these actions were displayed graphically, gaps in the emergency response process became visible. An orderly process model was “composed” for the emergency, for the hazardous materials behavior in emergencies, and for the resultant decision making process required of firefighters. (20) The D.E.C.I.D.E. decision model is gaining increased acceptance, and is becoming a basis for teaching hazardous materials emergency decision making to firefighters. (39) The payoff for the effort has been a substantive change in attitudes and tactics in the firefighting community, and an apparent decline in the casualty rate where these methods have been applied.

The models have provided another benefit for investigators. They have helped investigators organize their investigative tasks in team investigations and have helped focus the data search during investigations on data related to the model. This has been accomplished without sacrificing the discovery potential gained from the use of the events charting-based methodology.

These results occurred, in large measure, because the author explicitly identified his perception of the accident phenomenon on which he based his investigations and work products. This experience suggests that articulation of one’s perceptions of the accident phenomenon will provide useful criteria for self-evaluation by accident investigators. This experience also suggests that if these perceptions of the accident phenomenon are not made explicit, attempts to reconcile the widely divergent views about safety and accident investigations hold little promise.


It is clear to the author from this research and its applications that there is an urgent need for unifying perceptions of the accident phenomenon and for disciplined accident investigation methodologies that will overcome practical and theoretical uncertainties facing accident investigators. Unless present diverse perceptions are made visible, attempts to gain acceptance of the need for reconciliation of diverse views are not likely to be supported. To get these debilitating differences resolved, certain actions seem to be needed.

1. Individuals practicing safety activities should reexamine their perceptions of the accident phenomenon, and then explicitly report that perception in the work products that it influences. This would include investigative reports, program evaluations, safety analyses reports, and similar work products. This action would have the initial effect of forcing articulation and disclosure of these perceptions. It should have a collateral effect of driving divergent perceptions toward a consensus as the effectiveness of work products based on different perceptions becomes visible. In addition, it would have the immediate effect of providing disciplining criteria for the work products produced.

2. Accident investigators must reexamine their investigative methodologies, and then explicitly specify both the accident perception and the methodology used in each reported investigation to produce the deliverables offered. This action would have the effect of making visible implicit criteria for investigators’ decisions about accident scope, data sought, analysis methods and information reported. This should also have the longer term effect of demonstrating the benefits of each methodology, based on the utility of the outputs produced with each methodology. In addition, the perceptions and methods, if reported, would put users on notice about the known difficulties with each, identified in this research.

3. Persons performing secondary investigations, using primary data reported by accident investigators from field observations, should report both the accident perceptions on which their work is predicated, and also the perceptions on which the primary data was predicated. This would reduce the potential for misuse of primary investigative data in secondary investigations. It would also help secondary investigators self-evaluate their work products.

4. Research should be initiated to develop a “best” accident investigation methodology based on unifying safety, accident and methodological perceptions and concepts. The methodology should not be an adaptation of methodologies from other disciplines, but rather should be derived from the unifying safety, accident and related concepts. This research would transcend any single public or private agency’s interests, so it is not clear who should sponsor such research. The “best” methodology should serve the greatest number of users, whose perceptions of their needs may be reshaped if the first three actions are implemented.

5. Accident investigation program managers should reexamine their personal perceptions of the accident phenomenon, and then explicitly specify the perceptions on which their programs are predicated. This should have the effect of providing criteria for operating and assessing all the elements of an investigative program, as well as rationalizing the outputs. It could affect the nature of accidents investigated, as well as the methods specified for the programs. This action would probably provide a strong incentive to strive toward a unifying perception of the phenomenon shown in Fig. 1.

These actions can all be taken by individuals, except for number 4. This means there is no valid reason for not getting started.

The views expressed are those of the author and do not necessarily represent the views of the National Transportation Safety Board.


1. US National Transportation Safety Board, “Risk Concepts in Dangerous Goods Transportation Regulations” Report NTSB STS 71-1, 1971

2. Haddon, W. Jr., Suchman, E.A. and Klein, D., “Accident Research.” Harper and Row, New York 1964 (Section 1)

3. Readers’ Forum: “Why do accidents happen?” Job Safety and Health, Aug. 1977

4. National Highway Safety Advisory Committee “Highway Safety Data.” A report to the Secretary of Transportation, June 19, 1979

5. Benner, L., “Crash Theories and Their Implications for Research.” American Association for Automotive Medicine Quarterly Journal, Jan. 1979

6. Jacobs, H.H., “Conceptual and Methodological Problems in Accident Research.” in Behavioral Approaches to Accident Research, Association for the Aid of Crippled Children, New York, 1961

7. Fontenot, B.P., personal communication.

8. Benner, L. “D.E.C.I.D.E. in Hazardous Materials Emergencies.” Fire Journal, 69:4, July 1975

9. US National Transportation Safety Board, “Inquiry Manual-Aircraft Accidents and Incidents.” NTSB Order 6200.1, Washington, DC 1975

10. Johnson, W.G., “Accident/Incident Investigation Manual.” ERDA 76-20, US Government Printing Office, Washington, DC 1976

11. American National Standards Institute, US Standard Method of Recording Basic Facts Relating to the Nature and Occurrence of Work Injuries, ANSI Z16.2-1962 (Rev. 1969), New York 1962

12. “Feasibility of Securing Research-Defining Accident Statistics.” Safety Sciences, Dept. of Health, Education and Welfare, National Institute for Occupational Safety and Health Publication No. 78-180, Sept. 1978

13. Haddon, W. Jr., “Reducing the Damage of Motor Vehicle Use.” Technology Review 77:8, Aug. 1975

14. McGrew, D.R., “Traffic Accident Investigation and Physical Evidence.” Thomas, Springfield, IL 1975

15. Hall, W.K. and O’Day, J., “Causal Chain Approaches to the Evaluation of Highway Safety Countermeasures.” J. Saf. Res. 3:1, 1971

16. US National Transportation Safety Board, Letter to Secretary of Transportation transmitting Safety Recommendations 1769 through 17611, October 20, 1976

17. Pask, G., “Interaction between Individuals: Its Stability and Style.” Mathematical Biosciences 11, 1971

18. McGlade, F.S., “Adjustive Behavior and Safe Performance.” in Ferry, T.S. and Weaver, D.A., “Directions in Safety”, Thomas, Springfield, IL 1976

19. Blumenthal, M., “Problem Definition: The Driving Task in the System Context.” Behavioral Research in Highway Safety, 2:1, Spring 1971

20. Benner, L., “Hazardous Materials Emergencies” Lufred Industries, Inc., Oakton, VA 1976

21. US National Transportation Safety Board, “Derailment of Missouri Pacific Railroad Company Train 94 at Houston, Texas, October 19, 1971” NTSB RAR—72—6, 1972

22. Benner, L., “Four Accident Investigation Games: Simulations of the Accident Investigation Process.” Lufred Industries, Inc., Oakton, VA 1979

23. Ames, J.S., “Aircraft Accidents: Method of Analysis.” Proceedings of National Safety Council, 17th Safety Congress, 1928

24. Heinrich, H.W., “Industrial Accident Prevention.” McGraw-Hill, New York 1936

25. Thorndyke, R.L., “The Human Factor in Accidents.” US Air Force School of Aviation Medicine, Project Report, Project No. 21-30-001, 1951

26. US Dept. of Health, Education and Welfare, Public Health Service, “Uniform Definitions of Home Accidents.” Washington, DC 1958

27. “Fault Tree Analysis as an aid to Improved Performance” AMC Safety Digest, US Army Materiel Command, May 1971

28. “Fault Tree for Safety” D57133, The Boeing Company, Seattle, WA 1966

29. Suchman, E.A., “A Conceptual Analysis of the Accident Phenomenon” in Behavioral Approaches to Accident Research, Association for the Aid of Crippled Children, New York 1961

30. Baker, J.S., “Traffic Accident Investigator’s Manual for Police” Northwestern University, Evanston, IL 1963 (Revised 1971)

31. Haddon, W. Jr., “The changing Approach to the Epidemiology, Prevention and Amelioration of Trauma: The Transition to Approaches Etiologically rather than Descriptively Based.” American Journal of Public Health 58:8, Aug. 1968

32. Surry, J. “Industrial Accident Research” University of Toronto, Toronto, Ontario, Canada, 1969

33. Johnson, W.G., “The Management Oversight and Risk Tree” prepared for the US Atomic Energy Commission under Contract AT (043)821, Feb.1973

34. Benner, L. “Safety Risk and Regulation” Proceedings, Transportation Research Forum, Vol. XIII, No. 1, Chicago, IL 1972

35. Wigglesworth, E.C., “A Teaching Model of Injury Causation and a Guide for Selecting Countermeasures” Occupational Psychology 48:2, 1972

36. Leplat, J., “Origin of Accidents and Risk Factors” paper presented on a Seminar sponsored by the Swedish Work Environment Fund, Stockholm, 1975

37. Joksch, H.C., Reidy, J.C. Jr., and Ball, J.T., “Construction of a Comprehensive Causal Network, Phase III, Final Report, Vol. I.” Center for the Environment and Man, Inc., Windsor, CT 1977

38. Benner, L., “Risk Responsibility and Research.” Paper presented to the Symposium sponsored by the American Chemical Society Council Committee on Chemical Safety, Chicago, IL, August 26, 1975

39. Wright, C., “Railroad and Emergency Response Personnel: A Cooperative Effort.” Presented to Hazardous Materials Workshop, 106th Annual Conference, International Association of Fire Chiefs, Kansas City, MO, September 17, 1979

40. US National Transportation Safety Board, “Improving Survivability in Hazardous Material Accidents” Report HZM 795

41. Dieterly, D.L., “Accident Analysis: Application of the Decision/Problem State Analysis Methodology.” AFHRL Technology Office, NASA-Ames Research Center, Moffett Field, CA 1978 (In publication)

42. Manual of Aircraft Accident Investigation, Fourth Edition, International Civil Aviation Organization Document 6920-AN/855/4, Montreal, Canada 1970



determine cause

 prevent accidents

 prevent similar accidents

 required by safety department

 generate statistics

 determine fault

 find violations

 establish liability

 settle workmen’s compensation claims

 verify hypothesis

 grind an axe

satisfy public curiosity

evaluate a regulation

 satisfy a boss

 satisfy the employees

 find out who is to blame

 find out what went wrong

 have to fill in report

 determine subrogation chances

 settle insurance claim

 assess losses

 train students

 determine causes

 determine probable cause

 find causal factors

 identify recommendations

 improve system

 upgrade operations

 develop training materials

 improve training

 assign damages

 to do cost/benefit analysis

 understand phenomenon

 support legislation

 confirm predictions

 restore confidence in system

 fulfill research contract

 support civil litigation

 bring about changes

 restore reputation

 defend civil suit

 prosecute violation

 earn a living

sell ideas



Problems encountered:



The following lists show the wide ranging views about the nature of the accident phenomenon. Each entry on each list was taken from one or more of over 200 different definitions of the word “accident” reviewed and analyzed during a three year period (1975-78). Definitions contained four common elements in addition to numerous descriptors that reflected special interests of the lexicographers. These common elements are precursors, occurrence(s), involvement, and result.

Appendix C table



1. Single event perception and “cause” theory.

Assumes that an accident is a single event which has a “cause.” Investigator identifies cause to understand phenomenon. Investigative task is to find cause, correct it and the accident will be prevented in the future. Assumes replicability of phenomenon. Also assumes someone/something failed, was at fault or to blame; otherwise accident is “act of God” or unexplainable. May be related to historic need for “scapegoat” for inexplicable events. Singular “event” still widely used in literature.

2. Chain-of-events perception and “domino” theory.

Assumes “unsafe” conditions create vulnerable relationships in which “unsafe act” can trigger “chain-of-events” called accident. Investigative task is to identify “unsafe conditions” and “unsafe acts” that “caused” events sequence. Criteria for unsafe acts and conditions unspecific; conclusions rely on investigators’ judgment. Criteria for beginning and end of chain unspecified. Conclusions usually symptomatic and descriptive, rather than etiologic.

3. Determinant variable perception and “factorial” theory.

Best described by Thorndyke (25) as “the search for the experimental ideal of the single independent variable” which set “the goal and ideal of an accident investigation as the gathering of data in such a way that statistical comparisons will permit fair estimates of the influence of the variables in a particular factor on the probability of an accident.” Assumes common factors are present in accidents and that they can be discerned with statistical analysis of the “right” data from accident investigations. Assumes hypotheses about determinant variables can only be identified by secondary examination of facts. Criteria for scope, data, outputs dictated by hypothesis, rather than direct observations from accidents. Requires extensive exercise of investigator’s judgments; often uses data reporting forms. Requires occurrence of sufficient accidents to build data base. In practice, results in differentiation between fact gathering during field investigation and secondary data analysis function.

4. Branched events chains perception and “logic tree” theory.

Assumes accidental events are predictable, and structures predictive search for alternative events pathways leading to selected “undesired event,” through speculations by knowledgeable systems analysts. Follows rules of procedure for structuring speculations and assigning probabilities in a branched events chains display. Demands ordering of events into accident sequences. Displays facilitate communication, discovery, constructive criticisms and technical inputs. Provides basis for identifying data needed during operations to update probability estimates. Displays can provide guidance during investigation of actual accidents, and accidents can be used to upgrade predictions. Does not provide for incorporation of events time relationships and durations; criteria for undesired event choices are unspecified.
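The probability arithmetic behind such a branched events chains display can be sketched briefly. In this illustration the gate structure, event names and probability values are all hypothetical, and basic events are assumed independent:

```python
# Minimal logic-tree sketch: OR and AND gates over basic events,
# assuming independent probabilities (illustrative values only).

def or_gate(probs):
    # P(at least one event occurs) = 1 - product of (1 - p)
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    # P(all events occur together) = product of p
    p = 1.0
    for q in probs:
        p *= q
    return p

# Hypothetical tree: the undesired event requires an ignition source
# AND a fuel release; each has alternative pathways (OR branches).
ignition = or_gate([0.01, 0.005])   # e.g. spark, hot surface
release = or_gate([0.02, 0.001])    # e.g. seal failure, overfill
undesired_event = and_gate([ignition, release])
print(undesired_event)
```

The same gate functions extend to deeper trees by nesting: each intermediate event's probability feeds the gate above it.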

5. Multilinear events sequences perception and “process” or “P-theory.”

Assumes accident is transient segment of continuum of activities; views accident as a transformation process by which a homeostatic activity is interrupted with accompanying unintended harmed state. Process is described with actions by interacting actors, each acting in a sequential order with each sequence related to each other interacting sequence in a specific temporal and spatial proceed/follow logic. Investigative tasks call for identification of the actors, their actions and interactions and resultant changes of state from the initiating perturbation through the last sequential harm to the actors. Prescribes criteria for beginning, end of accident; for data search; selection; recording; organization and testing. Display provides “time coordinate” to discipline events timing relationships, and hypothesis generation method, in addition to benefits of “logic tree” displays described above.
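The actor/action structure on a common time coordinate can be sketched as a small data structure. The actors, actions and times below are hypothetical, chosen only to show how events sort into one row per actor in temporal order:

```python
# Sketch of a multilinear events sequence: each event is one actor
# performing one action at a point in time (hypothetical example).
from collections import namedtuple

Event = namedtuple("Event", ["actor", "action", "t"])

events = [
    Event("driver", "applies brakes", 3.0),
    Event("vehicle", "begins skid", 3.5),
    Event("driver", "perceives obstacle", 2.0),
    Event("vehicle", "strikes barrier", 5.0),
]

# One row ("channel") per actor, ordered on the common time coordinate.
timeline = {}
for ev in sorted(events, key=lambda e: e.t):
    timeline.setdefault(ev.actor, []).append((ev.t, ev.action))

for actor, row in timeline.items():
    print(actor, row)
```

Reading across rows recovers each actor's sequence; reading down the time coordinate exposes the proceed/follow logic between interacting sequences.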


1. One-person process.

One investigator performs all investigative functions, from data gathering, interpretation and analysis, to reporting of findings. Also answers questions arising during investigation. Tasks usually governed by reporting forms or check lists which investigators must interpret and satisfy. May involve coding of entries. Outputs usually brief verbal or narrative reports, or completed accident report forms. Recommendations usually not made, or if made, are shallow. Investigative procedures usually insubstantial.

2. Intraorganizational ad hoc team process.

When accident happens, investigative team is formed within organization, staffed by regular employees without substantive investigative training. Usually teams search for chain of events, unsafe acts or conditions. Investigative duties often subordinated to other duties after initial cursory inquiries. Outputs usually internal reports of “cause(s)” with shallow recommendations because of self interests of team members. Team disbands after investigation is completed. Investigative procedures usually meager.

3. Intraorganizational standing team process.

Predesignated team performs full range of investigative tasks, often involving testing or examinations. Team usually includes one or more extensively trained investigators with investigative experiences. Outputs usually are narrative reports of “cause(s)” with reasoned, objective recommendations for improvements in non-management areas of the organization’s activities. Team members often able to implement improvements before report is issued. Investigative procedures usually provided.

4. Multi-organizational ad hoc team process.

Participants designated after accident occurs; disbanded after investigation is completed. Used by both public and private sector organizations. Investigators usually detailed to team for duration of investigative effort. Procedures governing tasks range from sketchy to comprehensive. Draws on mixed talents of investigators to achieve adequate investigation. Investigators usually trained. Outputs usually are narrative reports of “causes” with mixed quality recommendations that depend largely on project manager and team members, rather than procedures.

5. Multi-organizational standing team process.

One organization organizes team formation, using representatives from several other organizations or groups to investigate a series of accidents. Typified by multi-disciplinary investigation teams from several cooperating universities under contract to government, or teams from several companies investigating a kind of accident plaguing an industry. Investigators often trained on the job; often assisted by experienced investigators on teams. Teams usually focus on specific kinds of accidents to understand them better. Methodologies usually reflect academic disciplines of investigators. Outputs are narrative reports, usually comprehensive, often supplemented with substantial tabulated or other data. Team dismissed at end of project.

6. Judicial investigation process.

Special task force assembled and operated under the direction of a distinguished jurist, often of national prominence. May be directed by jurist in jurisdiction within which accident occurred. Investigation relies primarily on witnesses’ testimony; may involve some technical testing or analyses. Process governed by judicial procedures, rules of evidence and adversary methods. Usually associated with an accident of national concern. Outputs are usually narrative reports of “cause(s)” with recommendations, but may also include criminal or other legal proceedings against parties at fault.

7. Legislative investigation process.

Investigation initiated by legislative entity acting in legislative or oversight capacity. Usually involves investigation by legislative staff, consultants. Investigation relies principally on witnesses’ testimony before legislative subcommittee, and hearing records. Outputs are hearing records, narrative committee report, and sometimes new legislation. Procedures of legislative body govern investigation.



1. Visual inspection for residues, deviations from norms, deformations, etc.

2. Chemical analysis methods, including chromatographic, infrared, wet chemical analyses, pH tests, miscibility tests.

3. Thermal analysis methods, including “temperature bars,” differential thermal analyses, decomposition or polymerization temperature analyses.

4. Radiographic methods, including x-ray, gamma ray scans, carbon dating, emission measurements.

5. Structural analysis methods, including engineering calculations of force distribution or changes; rotational analyses.

6. Force vector analyses, based on Newton’s law of equal and opposite forces and directions of deformation.

7. Dimensional analyses, including comparative analyses of new vs. present dimensions.

8. Energy traces, including energy flow, stressor/stressee analyses, thermodynamic analyses.

9. Electrical analyses, including circuit, conductivity, static charge, sneak circuit analyses.

10. Metallurgical testing, including grain microphotographs, yield mode patterns, strength and hardness tests, bend tests, ductility tests.

11. Crystallographic analyses, including x-ray diffraction, formation conditions, types of crystals present.

12. Reconstruction of surviving parts, as with mockups, sequential break-up analyses.

13. Char analyses, such as determination of char depth, char patterns, and char composition versus known standard specimens and exposures.

14. Fault tree analysis, for speculation on how the condition observed might have come about.

15. Pressure analyses, including vapor equilibrium, reaction pressure and velocity, rate-of-pressure-rise-and-effects experiments.

16. Scenario modeling techniques, such as events charting.

17. Flash point tests for flammable liquids, dusts, powders.

18. Incubation tests for etiologic and infectious agents, carcinogen, etc.

19. Buoyancy tests for density determinations, mixing rate estimates.

20. Flow tests, for viscosity, angle of repose, air entrainment effects.

21. Toxicity tests, such as LD50 and LC50 animal tests, skin corrosivity tests, asphyxiation concentrations, blood tests.

22. Corrosion tests, such as inches per year (ipy) rates, stress corrosion cracking tests.
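As a worked example of item 22, inches-per-year corrosion rates are commonly computed from coupon weight loss. The sketch below uses the standard weight-loss conversion (constant 0.534 for ipy, with mass loss in milligrams, density in g/cm³, exposed area in square inches, and time in hours); the sample coupon values are hypothetical:

```python
# Weight-loss corrosion rate in inches per year (ipy):
#   ipy = 0.534 * W / (D * A * T)
# W = mass loss (mg), D = density (g/cm^3),
# A = exposed area (in^2), T = exposure time (hours).

def corrosion_rate_ipy(mass_loss_mg, density_g_cm3, area_in2, hours):
    return 0.534 * mass_loss_mg / (density_g_cm3 * area_in2 * hours)

# Hypothetical mild steel coupon (density ~7.86 g/cm^3): 50 mg lost
# over 10 in^2 of exposed surface in 720 hours (30 days).
rate = corrosion_rate_ipy(50.0, 7.86, 10.0, 720.0)
print(rate)
```

Rates computed this way from short exposures assume uniform attack; localized mechanisms such as the stress corrosion cracking named in item 22 require the separate tests listed there.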