CommonLit Readability Prize: Resources

Competition Information

Journals

Related Competitions

Conventional Tools

External Data

DeepDR Diabetic Retinopathy Image Dataset (DeepDRiD) Challenge

URL: https://isbi.deepdr.org/

Aim

The aim of this challenge is to evaluate algorithms for automated fundus image quality estimation and grading of diabetic retinopathy.

Abstract

Diabetic Retinopathy (DR) is the most prevalent cause of avoidable vision impairment, mainly affecting the working-age population of the world. Early diagnosis and timely treatment of diabetic retinopathy help prevent blindness. This objective can be accomplished through a coordinated effort to organize large, regular screening programs, particularly computer-assisted ones. Well-acquired, large, and diverse retinal image datasets are essential for developing and testing digital screening programs and the automated algorithms at their core. We therefore provide a large retinal image dataset, DeepDR (Deep Diabetic Retinopathy), to facilitate the following investigations in the community.

First, unlike previous studies, and in order to further improve the precision and robustness of early diagnosis in practice, we provide dual-view fundus images of the same eyes, i.e. one centered on the optic disc and one centered on the fovea, for classifying and grading DR lesions. The expected results should outperform state-of-the-art models built with single-view fundus images. Second, the DeepDR dataset includes fundus images of varying quality, reflecting real-world practice. We expect to build a model that can estimate the image quality level and thereby provide supportive guidance for fundus-image-based diagnosis. Lastly, to explore the limits of generalizability of a DR grading system, we aim to build a model that transfers the DR-diagnosis capability learned from a large number of regular fundus images to ultra-widefield retinal images. Regular fundus images are typically used for initial screening, while wide-field scanning serves as a further screening step because it captures the complete eye.

To the best of our knowledge, DeepDR is the largest database of a DR patient population, providing data on more than 1,000 patients. In addition, it is the only dataset comprising dual-view fundus images of the same eyes together with images at various distinguishable quality levels. The dataset provides the diabetic retinopathy severity grade and the image quality level for each image.

Moreover, we provide the first ultra-widefield retinal image dataset to facilitate the study of model generalizability and, at the same time, to extend the means of DR diagnosis from traditional fundus imaging to wide-field retinal photography. This makes the dataset well suited for the development and evaluation of image analysis algorithms for the early detection of diabetic retinopathy.

Challenge

The challenge is subdivided into the following three tasks (participants may submit results for one or more of them):

● Disease Grading: Classification of fundus images according to the severity level of diabetic retinopathy, using dual-view retinal fundus images (a minimal fusion sketch follows this list). For more details please refer to Sub-challenge 1.

● Image Quality Estimation: Fundus quality assessment for overall image quality, artifacts, clarity, and field definition. For more details please refer to Sub-challenge 2.

● Transfer Learning: Explore the generalizability of a Diabetic Retinopathy (DR) grading system. For more details please refer to Sub-challenge 3.
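The dual-view setup pairs two photographs of the same eye. As a rough illustration of how such pairs might be fused for grading, here is a minimal PyTorch sketch; the shared ResNet-18 backbone, the input size, and the five-grade output are assumptions for illustration, not the organizers' baseline:

```python
# Illustrative dual-view fusion sketch (NOT the official baseline):
# one shared-weight ResNet-18 backbone encodes the disc-centered and
# fovea-centered views; their features are concatenated for DR grading.
import torch
import torch.nn as nn
import torchvision.models as models

class DualViewDRGrader(nn.Module):
    def __init__(self, num_grades=5):       # 5 severity grades is an assumption
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d pooled features
        self.backbone = backbone             # shared weights for both views
        self.head = nn.Linear(512 * 2, num_grades)

    def forward(self, disc_view, fovea_view):
        f1 = self.backbone(disc_view)        # (B, 512)
        f2 = self.backbone(fovea_view)       # (B, 512)
        return self.head(torch.cat([f1, f2], dim=1))

model = DualViewDRGrader()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 5])
```

Sharing the backbone weights across the two views is one common design choice; separate per-view encoders would be the obvious alternative.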

AMLD 2020 – Transfer Learning for International Crisis Response

URL: https://www.aicrowd.com/challenges/amld-2020-transfer-learning-for-international-crisis-response

What’s the Challenge?

Background

Over the past three years, humanitarian information analysts have been using an open-source platform called DEEP to facilitate collaborative, joint analysis of unstructured data. The aim of the platform is to provide insights from years of historical and in-crisis humanitarian text data. The platform allows users to upload documents and classify text snippets according to predefined humanitarian target labels, grouped into what are referred to as analytical frameworks. DEEP is now in active use at several international humanitarian organizations and United Nations agencies across the globe.

While DEEP comes with a generic analytical framework, each organization may also create its own custom framework based on the specific needs of its domain. In fact, while there is a large conceptual overlap among humanitarian organizations, different domains define slightly different analytical frameworks to describe their specific concepts. Despite these differences, the analytical frameworks of different domains still exhibit various degrees of conceptual (semantic) linkage, for instance in sectors such as Food Security and Livelihoods, Health, Nutrition, and Protection.

Challenge

Currently, the ML/NLP components of DEEP are trained separately for each organization, using the annotated data that organization provides. For organizations that are just starting to work with DEEP, especially those with their own custom frameworks, the text classifier performs poorly due to the lack of sufficient tagged data. For these organizations, DEEP faces a cold-start problem.

This challenge is a unique opportunity to address this issue with wide impact. It enables not only better text classification but also showcases the conceptual semantic linkages between the sectors of various organizations, ultimately resulting in improved analysis of the humanitarian situation across domains. You will be provided with the data of four organizations, consisting of text snippets and their corresponding target sectors, where three of the organizations share the same analytical framework (target labels) and one has a slightly different one.

The aim is to learn novel text classification models that are able to transfer knowledge across organizations and, specifically, to improve classification effectiveness for the organizations with smaller amounts of available training data. Ideally, transfer and joint learning methods provide a robust solution to the lack of data in data-sparse scenarios.
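As a point of reference, here is a hedged sketch of the simplest pooling approach: train a single classifier on the combined data of the three organizations that share labels, so the low-resource ones benefit from the others' annotations. The CSV file names are assumptions; the column names follow the data description under Resources below:

```python
# Hypothetical pooled baseline (not an official starter): one TF-IDF +
# logistic regression model trained on the combined data of org1-org3,
# which share the same label set. File names are assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

frames = [pd.read_csv(f"{org}_dev.csv") for org in ["org1", "org2", "org3"]]
pooled = pd.concat(frames, ignore_index=True)
# Entries can carry several semicolon-separated labels; train on the first.
y = pooled["labels"].astype(str).str.split(";").str[0]
clf = make_pipeline(TfidfVectorizer(max_features=50_000),
                    LogisticRegression(max_iter=1000))
clf.fit(pooled["entry_translated"], y)
```

Handling org4, whose labels differ, is where the actual transfer problem starts; see the guide through the sectors below.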

Societal Impact

The DEEP project provides effective solutions to analyze and harvest data from secondary sources such as news articles, social media, and reports that are used by responders and analysts in humanitarian crises. During crises, rapidly identifying important information in the constantly growing body of data is crucial for understanding the needs of affected populations and improving evidence-based decision making. Despite the general effectiveness of DEEP, its ML-based features (in particular the text classifier) lack sufficient accuracy, especially in domains with little or no training data.

The benefits of the challenge would be immediately seen in an increase in the quality of the humanitarian community's secondary data analysis. Humanitarian analysts would then be able to spend their time doing what the human mind does best: subjective analysis of information. The legwork of easier-to-automate tasks, such as the initial sourcing of data and the extraction of potentially relevant information, can be left to their machine counterparts. With these improvements, the time required to gain key insights into humanitarian situations will be greatly decreased, and valuable aid and assistance can be distributed in a more efficient and targeted manner, bringing together in-crisis information and crucial contextual information on socio-economic issues, human rights, peace missions, etc. that are currently disjointed.

What should I know to get involved?

The challenge is the classification of multilingual text snippets from 4 organizations into 12 sectors (labels). The data is provided in 4 sets, each belonging to one humanitarian organization. The amount of available data differs greatly across the organizations. The first 3 organizations use the same set of sectors; the 4th is tagged with a different set of sectors, which, however, have many semantic overlaps with those of the first three organizations. The success of the final classifiers is measured based on the average of the prediction accuracies across the organizations.

Resources

The data consists of 4 sets, belonging to 4 organizations (org1 to org4), and each comes with a development set (orgX_dev), and a test set (orgX_test).

The development sets contain the following fields (a small loading sketch follows the list):

  • id: the unique identifier of the text snippet; a string value created by concatenating the organization name with a distinct number, for example org1_13005.
  • entry_original: the original text of the snippet, provided in various languages such as English and Spanish.
  • language: the language of the text snippet.
  • entry_translated: the translation of the text snippet into English, produced with Google Translate.
  • labels: the label identifiers of the sectors. Each entry can have several labels, separated by semicolons (;).
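Here is a loading sketch for these fields, assuming the dev sets ship as CSV files named after the scheme above (adjust paths and format to the actual release):

```python
# Hypothetical loading sketch; file names/format are assumptions based on
# the naming scheme above (orgX_dev). Adjust to the actual data release.
import pandas as pd

def load_dev(org):
    df = pd.read_csv(f"{org}_dev.csv")
    # "labels" holds one or more sector ids separated by semicolons, e.g. "4;9"
    df["labels"] = df["labels"].astype(str).str.split(";")
    return df

dev = {org: load_dev(org) for org in ["org1", "org2", "org3", "org4"]}
print(dev["org1"][["id", "language", "labels"]].head())
```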

The test sets contain the following fields:

  • id: the unique identifier of the text snippet.
  • entry_original: the original text of the snippet.
  • language: the language of the text snippet.
  • entry_translated: the translation of the text snippet into English, produced with Google Translate.

Important: As mentioned before, the first three organizations have the same labels, while the fourth has a different set. The sector corresponding to each label identifier is provided in the label_captions file. Later in this section, you can find a detailed explanation of the meaning of these sectors and their potential semantic relations.

Submissions

As mentioned above, each entry in the training data can have one or more labels (sectors). For submission, however, you should provide the prediction of only one label, namely the most probable one.

Given the test sets of the 4 organizations, submissions should be provided in comma-separated (CSV) format, containing the following two fields:

  • id: the unique identifier of text snippets in the test sets
  • predicted_label: the unique identifier of ONE predicted label

The submission file contains the predictions for all 4 organizations together. Here is an example of a submission file:

```
id,predicted_label
org1_8186,1
org1_11018,10
org2_3828,5
org2_5340,9
org3_2206,8
org3_1875,4
org4_75,107
org4_158,104
```
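Assembling such a file from per-organization predictions is straightforward with pandas; the ids and labels below are placeholders, not real predictions:

```python
# Write a submission in the required two-column CSV format.
# The ids/labels here are placeholders; real values come from your model.
import pandas as pd

predictions = {"org1_8186": 1, "org2_3828": 5, "org4_75": 107}
sub = pd.DataFrame({"id": list(predictions),
                    "predicted_label": list(predictions.values())})
sub.to_csv("submission.csv", index=False)
```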

Evaluation

The evaluation is based on the mean of the accuracy values over the organizations: we first calculate the accuracy of the predictions on each organization's test data, and then report the average of these 4 accuracy values. This measure is referred to as the Mean of Accuracies.

Since the reference data, like the training data, can assign one or more labels to each entry, we consider a prediction correct when at least one of the reference labels is predicted.

This evaluation measure gives the same weight to each organization, although the organizations have different amounts of test data. It incentivizes good performance on the organizations with less available training (and test) data, as they carry the same importance as the others.

To facilitate the development and testing of systems, we provide the evaluation script (deep_evaluator.py), available under Resources.
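The provided script is authoritative; as a rough sketch of the metric as described above (one predicted label counts as correct if it appears among the entry's reference labels, and per-organization accuracies are then averaged), under assumed input structures:

```python
# Sketch of the Mean of Accuracies metric described above; deep_evaluator.py
# remains the authoritative implementation. Assumed structures:
# refs[org][entry_id] is a set of gold label ids, preds[org][entry_id] one id.
def mean_of_accuracies(refs, preds):
    per_org = []
    for org, gold in refs.items():
        correct = sum(1 for eid, labels in gold.items()
                      if preds[org].get(eid) in labels)
        per_org.append(correct / len(gold))
    return sum(per_org) / len(per_org)

refs = {"org1": {"org1_1": {4, 9}}, "org2": {"org2_1": {5}}}
preds = {"org1": {"org1_1": 9}, "org2": {"org2_1": 7}}
print(mean_of_accuracies(refs, preds))  # (1.0 + 0.0) / 2 = 0.5
```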

A Guidance through the Sectors

Humanitarian response is organized in thematic clusters. Clusters are groups of humanitarian organizations, both UN and non-UN, in each of the main sectors of humanitarian action, e.g. water, health, and logistics. They serve as the global organizing principle for coordinating humanitarian response.

Sectors for the first, second, and third organization:

  • (1) Agriculture
  • (2) Cross: short form of Cross-sectoral; areas of humanitarian response that require action in more than one sector. For example malnutrition requires humanitarian interventions in health, access to food, access to basic hygiene items and clean water, and access to non-food items such as bottles to feed infants.
  • (3) Education
  • (4) Food
  • (5) Health
  • (6) Livelihood: Access to employment and income
  • (7) Logistics: Any logistical support needed to carry out humanitarian activities, e.g. air transport, satellite phone connections, etc.
  • (8) NFI: Non-food items needed in daily life, such as bedding, mattresses, jerrycans, and coal or oil for heating
  • (9) Nutrition
  • (10) Protection
  • (11) Shelter
  • (12) WASH (Water, Sanitation and Hygiene)

Sectors for the fourth organization:

  • (101) Child Protection
  • (102) Early Recovery and Livelihoods
  • (103) Education
  • (104) Food
  • (105) GBV: Gender Based Violence
  • (106) Health
  • (107) Logistics
  • (108) Mine Action
  • (109) Nutrition
  • (110) Protection
  • (111) Shelter and NFIs
  • (112) WASH
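To make the semantic overlap concrete, one plausible alignment of the fourth organization's sectors onto the shared ones, inferred purely from the sector names (an assumption, not an official mapping), could serve as a starting point for transfer:

```python
# Hypothetical alignment of org4 sector ids to the shared org1-3 ids,
# inferred from the sector names only; NOT an official mapping.
ORG4_TO_SHARED = {
    101: 10,  # Child Protection -> Protection
    102: 6,   # Early Recovery and Livelihoods -> Livelihood
    103: 3,   # Education -> Education
    104: 4,   # Food -> Food
    105: 10,  # GBV -> Protection
    106: 5,   # Health -> Health
    107: 7,   # Logistics -> Logistics
    108: 10,  # Mine Action -> Protection
    109: 9,   # Nutrition -> Nutrition
    110: 10,  # Protection -> Protection
    111: 11,  # Shelter and NFIs -> Shelter (NFI, id 8, is also plausible)
    112: 12,  # WASH -> WASH
}
```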

Collaborative Challenge: Detecting Drought from Space

Deep Learning for Climate Adaptation: Detecting Drought from Space

The challenge

The dataset contains about 100,000 satellite images of Northern Kenya in 10 frequency bands, collected by the International Livestock Research Institute. Local experts (pastoralists, i.e. nomadic herders) manually labeled the forage quality at the corresponding geolocations; specifically, the number of cows from {0, 1, 2, 3+} that the location at the center of the satellite image can feed. Each satellite image is 1.95 km across, and each pixel in it represents a 30-meter square. Standing on location, pastoralists estimate the forage quality within about 20 meters, an area slightly larger than a pixel in the full 65×65-pixel satellite image. The satellite images thus provide a lot of extra context, which may prove useful since forage quality is correlated across space.

The challenge is to learn a mapping from a satellite image to forage quality so that drought conditions can be predicted more accurately. Furthermore, the current labeling is very sparse, and we want dense predictions of forage quality at any pixel in a satellite image, not just at the center.
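As a starting point, the task reduces to predicting a four-way forage label from a 65×65 patch with 10 bands. A minimal PyTorch sketch, assuming channels-first tensors (layer sizes are illustrative, not a tuned architecture):

```python
# Minimal sketch of a center-pixel forage classifier for 65x65 patches with
# 10 spectral bands; sizes are illustrative, not a tuned architecture.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(10, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                     # 65x65 -> 32x32
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),             # aggregate context across the patch
    nn.Flatten(),
    nn.Linear(64, 4),                    # forage classes {0, 1, 2, 3+}
)
logits = model(torch.randn(8, 10, 65, 65))
print(logits.shape)  # torch.Size([8, 4])
```

Dropping the global pooling in favor of a fully convolutional head would be one route to the dense per-pixel predictions the challenge ultimately asks for.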