Re: Back to basics testing your hypothesis.
Posted by: SJAvato
Date: January 22, 2007 12:51PM
This type of debate is exactly what fire investigation needs. It brings to light the complex "philosophical" issues that are often more important (and more difficult) than even pattern analysis (plus, I love a good philosophical discussion). So here are just a few of my thoughts:
1) Daubert specifically states that the decision was not meant to exclude opposing or alternative interpretations, provided that a proper methodology was applied.
2) Peer review is clearly an important technique to use to ensure that our ideas, conclusions and methodologies are, at least, acceptable and plausible (if not correct). Peer review is used as a "test" by every reputable scientific journal prior to publication. The Daubert evaluation methods include an evaluation of whether the ideas are generally accepted and/or peer reviewed. (I realize that the Court may be referring to a more stringent peer review than a fire investigator may subject his/her report to, but it is still recognized as a legitimate test of general validity.) In presenting my report to peers, am I not opening it up to a cognitive evaluation based not only on my own experience, training and knowledge of current science and literature, but on my peers' as well? (In effect, a cognitive-evaluation force multiplier.) If done correctly, this keeps my opinions from becoming too inbred, exposes weaknesses or inaccuracies in my data interpretation or methodologies, and prepares me for the rigors of cross-examination on my report. In my experience, my toughest critics have been my peers. It should also be noted that history is full of wrong but generally accepted theories and ideas, and that some great scientific concepts were rejected for publication after initial peer review. (I read that Hans Krebs's description of the citric acid cycle was initially rejected for publication. That was before he won the Nobel Prize for the same work his peers had initially rejected. Frankly, I wish it had stayed rejected; I could never get the damn thing right!)
3) Some of this discussion has been raging in the "real science" community for years. It is a demarcation issue: how do you separate what is science and scientific from what is pseudoscience or merely has a scientific patina? How do you make the rules loose enough to allow new ideas and concepts, but tight enough to exclude implausible and unrealistic thinking? Karl Popper believed that the hallmark of science was its falsifiability, or at least that a hypothesis can be set up to be falsified. But failed experiments do not necessarily equate to a non-scientific or implausible hypothesis. We could discuss the ideas of deterministic versus probabilistic evaluations, but I'll save that. The point (if, in fact, I have one) is that questions such as how hypotheses are adequately tested, what constitutes proof of a concept, and how data is properly interpreted are all issues that science grapples with. It is not just fire investigation that asks these questions.
4) The selective evaluation of certain data is a double-edged sword. As has been discussed previously, one piece of data can be the key to an entire analysis or investigation. In other cases, the focus on a single item, to the exclusion of all others, can lead to serious errors. What you think is important at a scene may be different from what I think is important. This (I think) is the basis of our adversarial system: I base my conclusion on the same data as you, but we come up with different answers because there is no single way to interpret the data. This is not just my opinion. Thomas Kuhn, in "The Structure of Scientific Revolutions," said that "Philosophers of science have repeatedly demonstrated that more than one theoretical construction can always be placed upon a given collection of data." If we think that following NFPA 921 or a "scientific method" will always result in investigators coming to the same conclusion (presumably the "TRUE" one), we are at best being overly optimistic and at worst deluding ourselves. (Just as an academic aside, it is interesting to see the contrast between Popper's and Kuhn's ideas of the "is" versus the "ought" of science.) Bottom line: the interpretation of available data will always be a potential source of investigator dissonance. Who's right and who's wrong will depend on the totality of circumstances and/or be decided by a "fact finder" (who could also be wrong).
Steve