Monitoring and Evaluation for Digital Humanitarian Response – A Novel Function


By: Lillian Pierson

I’ve recently been tasked with building and managing a Monitoring and Evaluation Team for an organization within the digital humanitarian response community. Although many groups have produced after-action reports since at least 2010, Monitoring and Evaluation is a relatively novel and absolutely vital function in the digital humanitarian response space. We need to monitor and evaluate how our work products are performing on the ground so that we can optimize our workflows and increase the effectiveness of our products. In digital humanitarian response, the main goal is to provide timely and accurate information to the field humanitarians (or affected populations) who are responding to save lives or aid crisis-affected communities during an on-the-ground emergency. Within this core function, we must also ensure that any information we release adheres to the “do no harm” principle.

To monitor and evaluate the effectiveness of our workflows and product utilization, we must start by taking a broad view of the humanitarian response ecosystem and probe to discover our relative position in that ecosystem. To get a visual idea, please see the infographic posted below. We need to ask questions about on-the-ground information product usage (by field response organizations), information product utilization and tracking (by other digital humanitarian response organizations), information product monitoring (i.e., quality assurance and quality control), and M&E team function and efficiency (a project management and implementation task).

Beyond asking the right questions and evaluating responses, as part of building an M&E team, I have also researched the categorical functions of sub-teams within a typical monitoring and evaluation implementation. As stated above, M&E is a novel function in the digital humanitarian space, and the technical nature of digital humanitarian response must be taken into account when building a team. Borrowing from and adapting UNDP’s Handbook on Monitoring and Evaluation for Results, my initial recommendation is that an M&E Team in digital humanitarian response should have the following four functional groups:

  • Outcome Monitoring Team (An Implementation Role)
  • Outcome Evaluation Team (An Implementation Role)
  • Partner Engagement Team (An Implementation Role)
  • Monitoring Tools and Mechanisms (A Technical Team – Research and Development Role)

From there, I have formulated a basic set of questions whose answers can inform plans for how an M&E Team might best operate within the disaster response framework.

M&E Team Mission: To monitor, evaluate, and increase the effective utilization of digital humanitarian-derived information products during on-the-ground deployments.

[Infographic: crisis information flows]

On-the-ground Information Product Usage

  1. How are on-the-ground organizations using digital humanitarian-derived information to support them in the mechanisms/functions that they utilize to save lives?
  2. What on-the-ground mechanisms/functions are we best supporting at present and how can we improve, augment, and optimize that support?
  3. Are there additional information gaps that we can fill?
  4. Who can help us fill those gaps?
  5. How can we expand/develop existing systems to fill those gaps?
  6. How can we increase the utilization of digital humanitarian-derived information products on-the-ground?
  7. Are there particular critical mechanisms/functions that we are more readily able to support? If so, what are they, and how can we design our systems to produce information products that best meet those needs?
  8. Who can get us the information we need about on-the-ground operations, mechanisms, functions, and information needs? (We need *at least* 3 people who have solid experience working on-the-ground during disasters and trying to use digital humanitarian-derived information resources.)
  9. How are information products being accessed by, or delivered to, on-the-ground organizations?

Information Product Utilization and Tracking

  1. How can we track and monitor utilization of digital humanitarian-derived information products?
  2. How can we use technology and automation to track the outflow of digital humanitarian-derived information products and how they are being used? (One possible event-logging approach is sketched after this list.)
  3. How can we reduce the diffusion of digital humanitarian-derived work products among and within the work products of overlapping digital humanitarian organizations?
  4. Do we need to strictly brand each information product to increase the chances for proper attribution?
  5. How can we increase distribution of digital humanitarian-derived information products to on-the-ground organizations during an event?
  6. What organizations can help us increase distribution of digital humanitarian-derived information products to on-the-ground organizations?
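
As a rough illustration of questions 1 and 2 above, here is a minimal sketch, in Python, of how distribution and usage events for an information product might be logged in a structured way and then summarized. All identifiers, event types, and organization names here are hypothetical assumptions for illustration, not an existing tracking system.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical record of a single distribution or usage event for one
# digital humanitarian-derived information product.
@dataclass
class ProductEvent:
    product_id: str      # e.g. "flood-extent-map-v2" (made-up identifier)
    event_type: str      # e.g. "released", "downloaded", "field-used"
    organization: str    # the digital or on-the-ground organization involved
    timestamp: datetime

def summarize_events(events):
    """Count events per (product, event type) so the M&E team can see
    which products are actually reaching and being used by responders."""
    return dict(Counter((e.product_id, e.event_type) for e in events))

# Example with made-up data:
now = datetime.now(timezone.utc)
log = [
    ProductEvent("flood-extent-map-v2", "released", "digital-team", now),
    ProductEvent("flood-extent-map-v2", "downloaded", "field-ngo-a", now),
    ProductEvent("flood-extent-map-v2", "field-used", "field-ngo-a", now),
]
print(summarize_events(log))
```

Even a log this simple, if kept consistently across deployments, would give an M&E team something concrete to aggregate when asking where products are and are not being picked up.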

Information Product Monitoring

  1. What metrics best represent the effectiveness of how information products are being used? (A sketch of a few candidate metrics follows this list.)
  2. Who is handling Quality Assurance functions?
  3. Are our products and their use adhering to the “do no harm” principle?
  4. What is the quality of the information products that we are providing?
  5. How can we improve Quality Assurance practices and the quality of the end products we produce?
  6. What data visualization types are most appropriate for monitoring and reporting:
    • Information product utilization on-the-ground
    • Impact of digital humanitarian-derived information product utilization
    • Information Product Quality
    • Effectiveness of utilization of digital humanitarian-derived information products on-the-ground
    • Internal operations efficiencies
  7. What opportunities are there for us to receive feedback directly from the affected communities?
  8. How can we best acknowledge, address, and incorporate feedback that is provided directly from affected communities?
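
To make question 1 above more concrete, here is one hedged sketch of candidate effectiveness metrics that could be computed from an event log like the one sketched earlier. The metric definitions are illustrative assumptions, not established M&E standards.

```python
def utilization_rate(products_released, products_field_used):
    """Share of released information products with at least one confirmed
    on-the-ground use. Both arguments are counts; the definition is illustrative."""
    if products_released == 0:
        return 0.0
    return products_field_used / products_released

def feedback_incorporation_rate(feedback_received, feedback_addressed):
    """Share of affected-community feedback items that were acknowledged and
    addressed in a later product revision (hypothetical metric)."""
    if feedback_received == 0:
        return 0.0
    return feedback_addressed / feedback_received

# Example with made-up numbers:
print(utilization_rate(40, 28))            # 0.7
print(feedback_incorporation_rate(15, 9))  # 0.6
```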

M&E Team Function and Efficiency

  1. How can we pre-package our data visualization and reporting functions for easier roll-out during a deployment? (One possible reporting template is sketched after this list.)
  2. Do we need to be producing post-deployment written reports with alternatives, analyses, and recommendations?
  3. What open source solutions (platforms, packages, programming libraries, methodologies) can we use to:
    • Improve internal team functions
    • Track data
    • Visualize and monitor effectiveness of digital humanitarian-derived information product utilization
    • Visualize and monitor quality
  4. Who can help us with data science tasks?
  5. Who can help us with research tasks?
  6. Who can help us with data visualization tasks?
  7. Who can help us with data engineering tasks?
  8. How can we structure our workflows and reporting structures?
  9. How can we streamline our internal operations for greatest efficiency?
  10. Who will QA/QC our final M&E Team reports and data visualizations?
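
On questions 1 and 3 above, one way to pre-package reporting would be a small template built on open-source libraries such as pandas and matplotlib. The sketch below assumes a hypothetical CSV event log with product_id and event_type columns; it is only one possible starting point, not a tooling recommendation.

```python
import pandas as pd
import matplotlib.pyplot as plt

def deployment_report(events_csv, output_png):
    """Turn a simple event log (CSV with 'product_id' and 'event_type'
    columns, a hypothetical format) into a standard bar chart that can be
    regenerated the same way for every deployment."""
    events = pd.read_csv(events_csv)
    counts = events.groupby(["product_id", "event_type"]).size().unstack(fill_value=0)
    ax = counts.plot(kind="bar", figsize=(8, 4))
    ax.set_ylabel("Number of events")
    ax.set_title("Information product distribution and use")
    plt.tight_layout()
    plt.savefig(output_png)

# Example usage once an event log exists:
# deployment_report("deployment_events.csv", "deployment_report.png")
```

Having a standard, scripted report like this would let a resource-limited volunteer team regenerate the same visuals for every deployment instead of rebuilding them by hand each time.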

Beyond considering answers to the above questions, we also need to consider how we will tailor M&E assessments to the scale of each event. Smaller events and responses may have only one or two critical pathways or processes that demand post-deployment investigation, while post-deployment studies of larger events may be much more involved. Lastly, we need to consider resources and ensure that all M&E assessments can be performed within the resource-limited environments that are inherent to not-for-profit and volunteer digital humanitarian response communities.

Do you have experience in digital humanitarian response or disaster field response? Do you have any ideas to add to the list of questions? If you were putting together an M&E team for this type of work, how would you get started?