(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)
Following up on our earlier discussion, Daniel Rubenson from Ryerson University in Toronto writes:
The course went really well (it was a couple of years ago now). The course was run through a partnership my department has with the Ontario Fire College. Basically, firefighters can do a certificate and sometimes a degree in public administration and part of that is a course on methods. It was a small group — about 8 or so — very motivated guys (all guys). Some of them were chiefs or deputy chiefs from small towns, others captains who were doing the certificate in order to improve their chances for promotion or as a step into a broader public admin career.
I had asked them ahead of time to bring with them whatever data they could get their hands on and that they thought would be interesting. This included response times, data on professional vs. volunteer firefighters, some insurance data, and the like.
I should mention that it was an intensive-mode course. So we had 4.5 days together, none of the students had any background in statistics whatsoever, and most had been out of university for several years.
With this in mind, I started by doing a few exercises to get them thinking about data and numbers. These came mostly from your book, Andrew (Teaching Statistics: A Bag of Tricks). We spent a morning playing around with these sorts of exercises and then transitioned into some lecture time going through the ideas of taking a concept and moving toward a variable, thinking about measurement, etc. In other words, some of the background building blocks to getting to the point where we can start to do some analysis of data and answer questions.
We did an intro to very basic descriptive statistics, and the students worked in pairs with the data they’d brought as well as other data I had brought or simulated. I found with this type of group in particular that working in pairs or groups was very useful. We did exercises teaching concepts of probability and different distributions. We did the candy weighing exercise (always a big hit).
We also spent a fair amount of time talking and thinking about what for lack of a better term would be called research design. This included a discussion about confounding. Lots of nice examples related to the firefighting world here: e.g. why do fires/fire damage/whatever seem to increase with the increase in smoke detectors?
Along the way we worked our way up to bivariate tests and ended with a gentle introduction to regression.
To be honest, the lesson plan kind of flew out the window and we largely played it by ear during the week we had. But I kept a few “big picture” ideas in mind as we went along. The main goal was to get them up to the point where they could think about (and usually do) some simple but useful analysis in their everyday work, and also to be able to understand and react to reports and the like coming from city governments and so on. So one of the other things we did was work on statistical literacy by reading, critiquing, and interpreting reports.
I think one of the main takeaways for me was that it’s important with this kind of group — and also because of the intensive format — to really mix it up in terms of lecturing, exercises, games, group work, short assignments, quizzes, etc.
And Brent Van Scoy from Omaha Fire in Nebraska writes:
I am not certain what fire code Canada follows, but most of the local departments are trying to become compliant with NFPA (National Fire Protection Association) 1710, which covers fractile response times. There are many different levels, but basically it measures different periods of time in a fire department response: for example, the time from dispatch until the unit leaves the engine house, and the time it takes for the unit to “travel” to the call. It includes many other variables depending on the type of incident, but as a general rule it is a fairly straightforward measurement of time. The code requires that 90% of the time a unit should arrive within 5 minutes, which would seem pretty straightforward, but the controversy comes with how it is interpreted. Strictly, the code is a Boolean test: either you meet the objective or you don’t. Others, though, look at the average time of all the calls and consider the code met if the average is below 5 minutes. Part of my job is presenting that information in a way that will help people understand the range of calls that took longer than 5 minutes.
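The gap between the two readings is easy to see with a small calculation. A minimal sketch, using simulated response times (no actual department data appear in the letter), of the fractile test versus the average test:

```python
import random

# Simulated response times in minutes -- a skewed (lognormal)
# distribution, as response times typically have a long right tail.
# These numbers are illustrative only.
random.seed(0)
times = [random.lognormvariate(1.3, 0.4) for _ in range(500)]

# Fractile-style reading: did at least 90% of responses arrive
# within 5 minutes?
share_within_5 = sum(t <= 5 for t in times) / len(times)
fractile_compliant = share_within_5 >= 0.90

# Average-style reading: is the mean response time below 5 minutes?
mean_time = sum(times) / len(times)
average_compliant = mean_time < 5

print(f"share within 5 min: {share_within_5:.1%}")
print(f"mean response time: {mean_time:.2f} min")
print(f"fractile test passed: {fractile_compliant}")
print(f"average test passed:  {average_compliant}")
```

With a right-skewed distribution like this, the average can sit comfortably under 5 minutes while well over 10% of calls still miss the target — which is exactly why the interpretation is controversial.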
If you have a few minutes, this article is worth your time (Los Angeles Fire). I feel sorry for the guy/gal who has to deal with that mess! Luckily my job is not nearly that complicated. I simply create histograms and other charts in Excel to graph the distribution of our incidents and response times, then present it to the Fire Chief every quarter. It is pretty low-level stuff, but I am trying to become more efficient in my job, so I take a great interest in any article I can find on fire department statistics. We use our data to help us place units/rigs in different parts of the city to help improve our NFPA 1710 compliance.
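The charting Van Scoy describes is done in Excel; as a rough sketch of the same idea in code, here is one way to bin the calls that miss the 5-minute target so a chief can see not just how many missed but by how much. The response times are made up for illustration:

```python
from collections import Counter

# Hypothetical response times in minutes for one quarter
# (invented for illustration; real numbers would come from
# dispatch/CAD logs).
times = [3.2, 4.8, 5.5, 6.1, 4.4, 7.9, 5.2, 4.9, 6.7, 5.8,
         3.9, 4.1, 8.4, 5.1, 4.6, 6.3, 4.2, 5.9, 7.1, 4.7]

# Keep only the calls that exceeded the 5-minute target, then bin
# them into one-minute buckets: 5 -> 5-6 min over, 6 -> 6-7 min, ...
slow = [t for t in times if t > 5]
buckets = Counter(int(t) for t in slow)

# Simple text histogram of the slow calls.
for minute in sorted(buckets):
    bar = "#" * buckets[minute]
    print(f"{minute}-{minute + 1} min: {bar} ({buckets[minute]})")
```

Grouping only the misses this way answers the practical question behind the fractile rule: when we fail, are we failing by seconds or by minutes?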
Please comment on the article here: Statistical Modeling, Causal Inference, and Social Science