Assignment 1: Evaluation and extension of Hegre et al.’s predictions of armed conflict


Description

Hegre et al. (2013) predict “changes in global and regional incidences of armed conflict for the 2010-2050 period.” Hegre et al. (2021) evaluate these predictions over the 2010-2018 period. Both papers provide their code in their supplemental materials, and the UCDP/PRIO Armed Conflict Data Set is publicly available.

You have two goals for this assignment:

  1. Evaluation. Perform your own evaluation of the quality and value of the 2013 paper’s predictions over the 2010-2018 period. You have complete freedom in how you evaluate these predictions, as long as your decisions are justified. You may use the 2021 paper for inspiration, but your goal is not simply to replicate their evaluation procedure or results. (A minimal code sketch of one possible starting point appears after this list.)

  2. Extension. Extend the 2013 and/or the 2021 paper(s) in some way. Here are some possible directions, but please do not feel limited by these ideas:

    • Calibration. How calibrated are the 2013 paper’s risk scores? Table 1 suggests that reducing the classification threshold can substantially increase the true positive rate without a large increase in the false positive rate; the sketch after this list includes a simple calibration check and threshold sweep.
    • Longer-run predictions. How well do longer-run predictions from the 2013 paper’s models hold up? You could consider generating additional predictions from the 1970-2000 model and/or using additional data from the 2022 update of the UCDP/PRIO Armed Conflict Data Set.
    • Time-varying drivers. Relatedly, the 2021 paper states that “the results indicate that the drivers of armed conflict are fairly stable over time” because the model did not perform much worse over the 2010-2018 period than during the 2001-2009 period of the original study. Note, however, that both periods cover the first nine years after the cutoff dates of their respective models, and drivers may simply not shift over such a short horizon. Is there another way to get at the question of whether the drivers of conflict vary over time?
    • Extending the original models. Does training the original models on additional available data improve their predictive performance? The 2021 paper suggests that adding data about political institutions could increase the predictive power of their models, and it cites several recent papers with more comprehensive democracy data or forecasts of changes to political institutions.
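
To make the evaluation and calibration ideas above concrete, here is a minimal sketch of one possible starting point in Python. It assumes you have already merged the 2013 paper’s country-year risk predictions with observed conflict incidence from the UCDP/PRIO Armed Conflict Data Set for 2010-2018; the file name and the columns pred_prob and conflict are placeholders rather than the papers’ actual variable names, and the metrics shown (AUC, Brier score, a binned calibration curve, and a true/false positive rate sweep over thresholds) are examples of possible choices, not required ones.

```python
# A minimal sketch, not the papers' replication code. Assumes a hypothetical
# CSV "merged_predictions_2010_2018.csv" with placeholder columns:
#   "pred_prob" - the 2013 paper's predicted probability of conflict (country-year)
#   "conflict"  - observed UCDP/PRIO incidence of armed conflict (0/1)
import numpy as np
import pandas as pd
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss, roc_auc_score

df = pd.read_csv("merged_predictions_2010_2018.csv")
y_true = df["conflict"].to_numpy()
y_prob = df["pred_prob"].to_numpy()

# Discrimination and overall probabilistic accuracy.
print(f"AUC:         {roc_auc_score(y_true, y_prob):.3f}")
print(f"Brier score: {brier_score_loss(y_true, y_prob):.3f}")

# Calibration: compare mean predicted risk with observed incidence in bins.
frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=10)
for p, o in zip(mean_pred, frac_pos):
    print(f"predicted ~{p:.2f}  observed {o:.2f}")

# Threshold sweep: how the TPR/FPR trade-off changes as the classification
# threshold is lowered (cf. the calibration bullet above).
n_pos = (y_true == 1).sum()
n_neg = (y_true == 0).sum()
for t in np.arange(0.1, 0.6, 0.1):
    pred = (y_prob >= t).astype(int)
    tpr = ((pred == 1) & (y_true == 1)).sum() / n_pos
    fpr = ((pred == 1) & (y_true == 0)).sum() / n_neg
    print(f"threshold {t:.1f}: TPR = {tpr:.2f}, FPR = {fpr:.2f}")
```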

Groups

You should work on this assignment in groups of 2-3. Interdisciplinary groups are strongly recommended, but not required. If you would like to work in a group of 1 or 4, please email us for permission (we will only permit this if there is a compelling reason).

Schedule

We recognize that you only have 3 weeks to complete this assignment, so please be realistic about what is possible. If the assignment seems interesting and generative, you are welcome to keep working on it for the final project.

This assignment is worth 25% of your course grade. We will evaluate you on problem selection, creativity, correctness, thoroughness, quality of writing, and engagement with related work. See here for details.

Please submit on time. This being a grad seminar, adjudicating lateness penalties and such is not a good use of instructor time. If you are unable to submit on time due to unforeseeable circumstances, reach out to us.

Assignment 2: Participate in the Predicting Fertility data challenge


Participate in the Predicting Fertility data challenge (PreFer).

Description

Your assignment has three main parts:

Rubric

Title and abstract (5 points)

State and test one hypothesis about predictability (20 points)

Submit a model to PreFer by April 26, 2024 (20 points)

Quality of writing and presentation (25 points)

This includes linguistic clarity, exposition of technical concepts, logical structure, justification of claims, explanation of background concepts, quality of figures, and discussion of results.