Applications for this vacancy are now closed
Description: The distribution of economic opportunities lies at the heart of many of the world’s biggest problems. The growth of people’s living standards is one of the major ways in which living conditions have improved – and can improve further.
Presenting research and data to help people understand the nature, causes, and consequences of global poverty, inequality, and economic growth is thus a central part of Our World in Data’s mission.
We are looking for a data analyst to take a lead on managing the whole chain of collection, transformation, documentation, and dissemination of the data that underpins our work in this area. The role requires familiarity with the relevant academic research and, in particular, excellent knowledge of the data and key measurement issues in this field. You will be working closely with our research team to help develop our content on these topics.
For more background on the role, see this post written by one of our researchers, Joe Hasell, on why we’re looking to build our team in this way.
Contract type: Full-time, flexible hours
Location: Remote (US East & EU/African timezones preferred, but not essential)
Deadline: Monday 7th March 2022, midday (GMT)
Application & interview process: We will review applications as they come in and contact candidates meeting the job requirements (below) for intro calls. Shortlisted candidates will then be contacted for interviews, which will involve an in-depth discussion of a past project of yours. We aim to respond to applications within 14 days and conclude all interviews within 30 days, subject to your availability. You will have the opportunity to ask questions and assess us too as you go.
See this section below for more details about how to apply.
Compensation: We will consider candidates at different experience levels. Compensation will be discussed early in the selection process and will depend on your profile and experience.
Our World in Data is produced through a close collaboration between academic researchers, data managers, and web developers.
In this role you will be situated as part of our data team, but will work very closely with our researchers to analyze and explain key aspects of our economic world and how it is changing – on topics like extreme poverty, economic growth, income inequality, working hours, government spending, or taxation.
As we discuss in this article, our strength as a team lies in us dividing our labor between the specializations of research, data management, and web development, whilst making sure that each team understands very well the language and objectives of the others.
And for this role, we are looking for someone who has both excellent data management skills and excellent knowledge of the key data, concepts, and research relating to these topics.
As such, you might have a background in academic research in a related field, but with particular experience and strengths in managing and presenting data using robust, replicable pipelines. Or your background may lie more in data management or data science, with past work experience or personal projects that have equipped you with a solid understanding of the data and research in this area.
Your primary responsibilities will lie in the careful preparation, management, and documentation of relevant data. As such, excellent written communication skills (in English) are essential – in particular the ability to write clear and concise explanations of technical ideas for non-specialist audiences. Your knowledge of the research that underlies this data will be integral to ensuring it is correctly and meaningfully described, and to bringing out the key insights it contains.
Ultimately we are looking for someone who is able and motivated to build the best possible database of the key economic measures that shape people’s lives. This is the ambitious big-picture goal that you will be working on with us in this role.
Duties for this role include:
- Writing scripts to import, clean, and collate data from many sources and in many formats – including secondary databases of poverty and inequality statistics, detailed national accounts data, and, when necessary, household survey microdata.
- Designing and implementing data pipelines to facilitate or automate regular updates of our datasets.
- Writing metadata (title, subtitles, descriptions) for our variables and charts that is understandable, perfectly accurate, and consistent across sources. This includes writing detailed explanations of the data in clear language that is understandable to non-specialists.
- Implementing and maintaining transparent and clear documentation of sources, including original and transformed data.
- Contributing to the development and management of our public datasets, made available through GitHub and a future data API.
- Identifying new datasets of potential interest, and assessing their relevance based on documentation, variable catalogs, and exploratory data analysis.
- Calculating derived variables in or across datasets, such as per capita measures, averages, and indices of poverty and inequality.
- Thoroughly testing newly-implemented features and variables to ensure their reliability.
- Working with our development team to design, pilot and test improvements of our data exploration tools.
- Understanding, prioritizing and replying to user feedback on our data and its presentation.
- Engaging with external data providers, journalists and policymakers on data availability, quality, and what it tells us across the many topics we cover.
- Constantly improving our public datasets and charts based on suggestions received via email, communication with experts, GitHub, or social media.
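As a purely illustrative sketch of the kind of derived measure mentioned in these duties (this snippet is not part of the role description, and in practice the calculation would run over real survey or secondary data with pandas), a Gini coefficient can be computed from a list of incomes in plain Python:

```python
def gini(incomes):
    """Gini coefficient via the rank-based closed form:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,
    where x is sorted ascending and i runs from 1 to n."""
    x = sorted(incomes)
    n = len(x)
    total = sum(x)
    # Weight each income by its rank in the sorted distribution.
    weighted = sum(rank * value for rank, value in enumerate(x, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

# Perfect equality gives 0; concentration pushes the index toward 1.
print(gini([1, 1, 1, 1]))    # 0.0
print(gini([0, 0, 0, 100]))  # 0.75
```

The same formula generalizes to a pandas column via `sort_values` and a cumulative rank, which is how it would typically appear in a data pipeline.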
Requirements for this role include:
- Commitment to accuracy and attention to detail.
- Ability to think systematically about problems and provide solutions.
- Good judgement in assessing which data is reliable and insightful and which is not.
- Ability and confidence to explain your reasoning, alongside a willingness to learn from new evidence and to respond positively to feedback.
- Ability and drive to work without supervision, alongside good judgement about when to seek appropriate input and guidance.
- Ability to communicate effectively, recognize shared behaviors and collaborate as part of a team, including within remote/online working environments.
- Minimum of a Master’s degree in economics or a related quantitative social science.
- Excellent skills in data wrangling, preferably in Python (pandas) but we also welcome applications from proficient R users (tidyverse, data.table).
- Extensive experience (approx. 2 years+) working with global datasets on poverty, inequality, economic growth, or other aspects of economic development we cover.
- Ability to write clear and concise explanations of technical ideas for non-specialist audiences.
- Knowledge of good practice in data visualization.
- Good understanding of our work at Our World in Data.
- Fluency in English (all our work, internally and externally, is conducted in English).
The following would be an advantage, but are not essential:
- An empirically focused PhD in economics or a related quantitative social science on a relevant topic.
- Basic knowledge of bash scripts, SQL, GitHub.
- Experience with science communication.
We’re interested in team members from diverse backgrounds, and strive to use fair criteria in hiring. Our team hours are also flexible enough to ensure that those of us with children can manage pick-up, drop-off, sicknesses, and the regular responsibilities that come with everyday life. Come join us!
Email us at email@example.com (subject line ‘Data Analyst (Poverty and Economic Development)’) with:
- Your CV, resume, or LinkedIn profile.
- A short cover letter (max 500 words) describing why working at Our World in Data is appealing to you and how you can contribute.
- A completed data survey exercise – as described below – which forms a required part of the application.
Likewise, feel free to email us if you have any questions about this role, the data survey exercise, or the application process.
We have designed the following two-part exercise to help us gauge candidates’ skills and experience.
For this position we are looking for quite a specific profile: someone who knows the key data very well already, is confident at coding and writing clearly, and ideally knows our work and understands our mission well too.
For such a candidate, we’d expect this exercise to take a few hours to complete at most.
That is an investment of time, and we recognize that it’s unusual to set this high bar early in the recruitment process. Our aim is to save people’s time overall: the exercise allows candidates to signal this quite particular combination of skills and experience from the outset, rather than much later in the recruitment process.
We will be happy to give unsuccessful candidates who take the time to submit this exercise detailed feedback on their application, should they wish.
Take a look at the work listed on our home page under the topic of Poverty and Economic Development. This includes our entries on extreme poverty, economic growth, income inequality, working hours, government spending, and taxation.
What are your top recommendations for additional publicly available datasets we should include, or other ways in which we could improve our data in this area?
In thinking about this, you should keep in mind our overall mission: to publish the research and data to make progress against the world’s largest problems.
Examples of improvement might be additional important metrics that are currently missing from our work; new data to increase the coverage or quality of existing metrics; or alternative derived variables that would help us illustrate important insights.
Write a short summary of each recommendation and your reasons for making it.
Recommendations for data currently missing from our work should include:
- The name of the dataset involved and a link or reference to the data.
- A very brief, high-level description of the data you are recommending: what does it measure, what is its scope, (very briefly) what underlying data or methodologies is this data based on?
- Its particular strengths – for instance in terms of coverage, reliability, or the insights that it is able to convey – that made you recommend it.
Aim for clear and concise explanations of the key information, written to be understandable to a non-specialist reader. There is no hard word limit but, as a very rough guide to the level of detail we are looking for, we expect summaries to be around 100–400 words per recommendation. Feel free to use prose, bullet points, tables, or illustrations as appropriate.
Likewise, you should limit your recommendations to those you see as being the most important additions we could make. There is no set maximum number, but we expect three or four recommendations should be sufficient for us to gauge the key skills and aptitudes we are looking for in this exercise. In particular, this part of the exercise will help us assess:
- Your knowledge of the data and research in this area.
- Your ability to summarize technical concepts concisely and in plain English.
- Your ability to judge the relevance and value of data in the context of Our World in Data’s mission.
Choose one of the datasets you discussed in part one of the exercise.
Write a script in Python or R that:
- Loads the data (from an API, an online file, or a locally-downloaded file if necessary).
- Provides a high-level summary of the structure of the data.
- Conducts any basic sanity checks as appropriate.
- Cleans and transforms the data as needed.
- Outputs a data visualization that you think communicates an important insight that this data can show.
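As a rough, purely illustrative skeleton of the steps above (not a model answer – the column names and the inline sample below are hypothetical stand-ins for a real download), such a script might be structured like this in pandas:

```python
import pandas as pd

# Hypothetical example: a small inline sample stands in for a real
# source, e.g. raw = pd.read_csv("https://example.org/poverty.csv").
raw = pd.DataFrame({
    "country": ["A", "A", "B", "B"],
    "year": [2000, 2010, 2000, 2010],
    "headcount_ratio": ["12.5", "8.1", "30.0", None],
})

# 1. High-level summary of the structure of the data.
print(raw.shape, list(raw.columns))

# 2. Basic sanity checks: expected columns present, years plausible.
assert {"country", "year", "headcount_ratio"} <= set(raw.columns)
assert raw["year"].between(1800, 2030).all()

# 3. Clean and transform: coerce strings to numeric, drop missing rows.
clean = raw.assign(
    headcount_ratio=pd.to_numeric(raw["headcount_ratio"], errors="coerce")
).dropna(subset=["headcount_ratio"])

# 4. Reshape into the country-by-year table a line chart needs;
# with matplotlib installed, chart_data.plot() would draw one line
# per country.
chart_data = clean.pivot(index="year", columns="country",
                         values="headcount_ratio")
print(chart_data)
```

The point of the sketch is the shape of the workflow – load, summarize, check, clean, visualize – rather than any particular dataset or chart.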
Write your script clearly, including any comments needed to follow your work.
Insert the visualization alongside the description of the dataset (discussed in Part one, above) and include a brief description of what the final visualization shows (again intended for a non-specialist audience).
In this part of the exercise, we are looking to gauge:
- Your ability to write well-structured code that’s easy to read.
- Your familiarity with basic principles of good data visualization and your intuition as to how to communicate important insights with data.
- Your ability to write good metadata explaining the visualization (title, subtitle, legend, any annotations etc.).
As such, the visualization need not be interactive or especially complex.
If you feel you need to make more than one visualization, or to work with more than one dataset, in order to demonstrate your abilities in these areas, feel free to do so – but prepare no more than three visualizations.
Along with your CV and short cover letter (see above), please email us the two parts of the exercise in a single email:
- Your written recommendations and data descriptions should be sent as a DOC/DOCX/ODT or PDF document, or as a link to an online version on Google Docs. Also include exports of your data visualization(s) in the relevant sections of this document.
- The script producing the visualizations should be submitted as an R or Python script, or a Jupyter notebook (ipynb file).