Uncertain Information

By Eric DuBois

Back in August 2011, I was in Massachusetts preparing for my first season as a teacher at Nature’s Classroom in Colebrook, CT. Colebrook is located in the Berkshire Mountains in the rural northwest corner of Connecticut, and getting there requires driving for about an hour on small state and local roads. Unfortunately, just as I was preparing to leave, Hurricane Irene hit southern New England, and by the time it was over, the region was a mess.

Rivers across the region were flooded, and since most roads in the area either follow, or at a minimum cross, a river, many had become impassable in the aftermath of the storm. To further complicate matters, the huge number of downed trees and power lines added additional roadblocks to my journey. From the outside this looks like a fairly straightforward OR problem: all we have to do is find the new shortest path. Right?

Unfortunately, in this instance, as in most real disasters, reality does not fit the model very well. Most shortest-path models assume perfect knowledge of the network, or at the very least a good working knowledge of the probabilities that the roads will be open (a small sketch of that kind of model follows the list below). Two things worked against that in Connecticut.

  1. The ‘interdicted’ locations change over time. As the water moves down the mountains it floods progressively larger watercourses. So when I first set out, the major mountain streams may have already flooded the local roads while the smaller rivers are just beginning to rise. By the time I have reached the mountains, those small rivers have hit flood stage. Worse yet, there is no guarantee that the local roads are passable again, since they may have washed out or become impassable with debris left by the receding water.
  2. More importantly, information on where the blockages are is quite scarce. In the actual event, the State Police could provide little more than a rough sketch of which roads outside of the suburbs were underwater, and almost no information on where debris had blocked transit. It took me over two and a half hours to find a navigable route to Colebrook, and that was after spending an hour checking with the police and news sources. Other teachers were not so lucky: having not checked the news properly, they didn’t arrive until the day after.
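
As promised above, here is a minimal sketch of the kind of calculation the "just find the new shortest path" view presumes: a toy road network in which every road carries a guessed probability of being open, and Dijkstra's algorithm run on -log(p) picks the route most likely to be passable end to end. The towns, travel times, and probabilities are invented purely for illustration.

```python
import heapq
import math

# Hypothetical road network: each road is (neighbor, travel_minutes, p_open),
# where p_open is a guess at the chance the road is still passable.
# Towns, times, and probabilities are made up for illustration.
roads = {
    "Springfield": [("Westfield", 25, 0.9), ("Granville", 40, 0.6)],
    "Westfield":   [("Granville", 30, 0.5), ("Winsted", 55, 0.7)],
    "Granville":   [("Colebrook", 35, 0.4)],
    "Winsted":     [("Colebrook", 15, 0.8)],
    "Colebrook":   [],
}

def most_reliable_route(graph, start, goal):
    """Dijkstra on edge weights -log(p_open): the 'shortest' path is the
    route most likely to be open end to end (treating closures as independent)."""
    best = {start: 0.0}
    queue = [(0.0, start, [start])]
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path, math.exp(-cost)   # route and its end-to-end open probability
        if cost > best.get(node, float("inf")):
            continue
        for nbr, _minutes, p_open in graph[node]:
            new_cost = cost - math.log(p_open)
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(queue, (new_cost, nbr, path + [nbr]))
    return None, 0.0

route, p = most_reliable_route(roads, "Springfield", "Colebrook")
print(" -> ".join(route), f"(chance every road is open: {p:.0%})")
```

Of course, even this toy version assumes we know those probabilities, which, as the second point above makes clear, is exactly the information I did not have.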

The moral of this tale is how important, and how difficult, it is to maintain an accurate picture of the state of the system during a disaster. A significant body of literature now looks at using social media and crowd-sourced information to flesh out the details of a disaster. Unfortunately, this information is often inaccurate and misleading. The problems range from something as simple as the character limit on a tweet to a complete misunderstanding of the situation by the individuals on the ground. Unlike state police or disaster responders, these individuals have likely had no formal training in communicating accurately and no previous disaster experience to gauge the conditions by. So while the information provided may be better than nothing, it is certainly no panacea.

I can certainly see using this information as a gauge of where infrastructure restoration is most likely needed. However, if we need to route emergency vehicles around these roadblocks, probabilities and guesswork are not, in my opinion, a terribly great way to go about finding the optimal choice. As operations researchers, how do you feel we can make use of this new source of data to inform our disaster response planning?

3 thoughts on “Uncertain Information”

  1. I think data from these sources could be extremely useful in disaster response planning in the future. The crowd-sourcing example you cited seems to have achieved pretty amazing results despite the fact that it used data from Haiti, which is an extremely poor country. Since technology is only going to become more widespread, it’s not hard to imagine how some of the data collection techniques could become even better. Facebook now has a feature that prompts users to indicate “I’m safe” during a disaster, which could be used to identify the areas that were least affected and prioritize a response. Similarly, things like Google Maps are already starting to provide live traffic updates, so I wouldn’t be surprised if they incorporated a feature to detect a blocked roadway and automatically reroute traffic (which I’m sure would’ve been useful for you during Hurricane Irene). If the government, private companies, and nonprofits start collaborating more in this area, I imagine the data could be aggregated and made available for operations researchers to use in some way.

    1. I certainly agree that it is better than the currently available information supply and, with collaboration, could make for a fantastic tool. However, I also feel, especially if it were developed as an automated tool, that there is some value in vetting the information that comes in.

      I can certainly see Google Maps being useful in something like evacuation planning, but I am not so sure it would’ve helped in a situation like the one I went through. Google determines traffic primarily by tracking smartphones, so I can see this failing in two cases:
      1) When communications networks go down, which is not an unrealistic possibility in a disaster. Further, in rural locations, especially in the hilly East, cell service can go out for many miles even during normal operations.
      2) When there simply aren’t enough cars to report that information. Where I was in Connecticut, traffic is very light, so even if there is information, there may not be enough data points to determine whether there is a roadblock.

      I agree that it is certainly a useful service, but more so in an urban or suburban setting than a rural one.

  2. As a Millennial, my first instinct in a disaster would be to check social media for guidance on good routes to take, even though I’m aware that most of the information is incomplete at best, and flat-out wrong at worst. I’ve done some work in data collection and synthesis, and I can say from experience that even though one source of data might be flawed, if you can corroborate that data with at least two other sources, there’s a high likelihood that the data is correct. I honestly don’t think it’s that far-fetched to say it’s only a matter of time before we have a system that’s able to comb through disaster-related data and piece together relatively accurate probability assessments.
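
    To illustrate the corroboration idea, here is a toy sketch that accepts a crowd-sourced claim only when at least three distinct sources (the original report plus two others) agree; the roads, sources, and threshold are all invented for the example.

```python
from collections import Counter

# Toy crowd-sourced reports of road status: (road, claimed status, source).
# Roads, sources, and the three-source threshold are assumptions made up
# purely for this example.
reports = [
    ("Route 8 at Winsted",  "closed", "twitter_user_a"),
    ("Route 8 at Winsted",  "closed", "local_news"),
    ("Route 8 at Winsted",  "closed", "state_dot_feed"),
    ("Route 8 at Winsted",  "open",   "twitter_user_b"),
    ("Route 44 at Norfolk", "closed", "twitter_user_c"),
    ("Route 44 at Norfolk", "closed", "twitter_user_d"),
]

def corroborated(reports, min_sources=3):
    """Keep only the (road, status) claims backed by at least
    min_sources distinct sources."""
    seen = set()          # so one source repeating itself doesn't count twice
    backing = Counter()
    for road, status, source in reports:
        if (road, status, source) not in seen:
            seen.add((road, status, source))
            backing[(road, status)] += 1
    return {claim for claim, count in backing.items() if count >= min_sources}

print(corroborated(reports))   # -> {('Route 8 at Winsted', 'closed')}
```

    The obvious trade-off is that the under-reported Route 44 claim gets discarded even though it might be true; demanding corroboration trades coverage for confidence.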

    Technology will only improve and I believe crowd-sourcing is the way of the future, whether we like it or not. “Big data” is an overused buzzword, but as operations researchers we need to embrace this influx of information, not shy away from it. Of course, the caveat is that we have to be very careful how we use the data, but we’re well-equipped to figure out novel ways to do just that.
