AIcrowd: Food Recognition Challenge

Link: https://www.aicrowd.com/challenges/food-recognition-challenge

Overview

Recognizing food from images is extremely useful for a variety of use cases. In particular, it would allow people to track their food intake by simply taking a picture of what they consume. Food tracking can be of personal interest, and can often be of medical relevance as well. Medical studies have for some time been interested in the food intake of study participants, but have had to rely on food frequency questionnaires that are known to be imprecise.

Image-based food recognition has made substantial progress in the past few years thanks to advances in deep learning, but food recognition remains a difficult problem for a variety of reasons.

Problem Statement

The goal of this challenge is to train models that can look at images of food and detect the individual food items present in them. We use a novel dataset of food images collected through the MyFoodRepo app, where numerous volunteer Swiss users provide images of their daily food intake in the context of a digital cohort called Food & You. This growing dataset has been annotated (or its automatic annotations verified) with respect to segmentation, classification (mapping the individual food items onto an ontology of Swiss food items), and weight/volume estimation.

This is an evolving dataset, where we will release more data as the dataset grows over time.

DeepDR Diabetic Retinopathy Image Dataset (DeepDRiD) Challenge

URL: https://isbi.deepdr.org/

Aim

The aim of this challenge is to evaluate algorithms for automated fundus image quality estimation and grading of diabetic retinopathy.

Abstract

Diabetic Retinopathy (DR) is the most prevalent cause of avoidable vision impairment, mainly affecting the working-age population worldwide. Early diagnosis and timely treatment of diabetic retinopathy help prevent blindness. This can be accomplished through coordinated, large, regular screening programs, particularly computer-assisted ones. Well-acquired, large, and diverse retinal image datasets are essential for developing and testing digital screening programs and the automated algorithms at their core. We therefore provide a large retinal image dataset, DeepDR (Deep Diabetic Retinopathy), to facilitate the following investigations in the community. First, unlike previous studies, and in order to further improve early-diagnosis precision and robustness in practice, we provide dual-view fundus images of the same eyes (optic-disc-centered and fovea-centered) for classifying and grading DR lesions. The expected results should outperform state-of-the-art models built with single-view fundus images. Second, the DeepDR dataset includes fundus images of varying quality to reflect real clinical scenarios. We expect models that can estimate the image quality level, providing supportive guidance for fundus-image-based diagnosis. Lastly, to explore the generalizability of a DR grading system, we aim to build a model that transfers the DR-diagnosis capability learned from a large number of regular fundus images to ultra-widefield retinal images. Regular fundus images are typically used for initial screening; widefield scanning serves as a further screening step because it captures more complete eye information. To the best of our knowledge, DeepDR is the largest database of a DR patient population, providing data from more than 1,000 patients.
In addition, it is the only dataset containing dual-view fundus images of the same eyes together with images at various distinguishable quality levels. The dataset provides, for each image, the severity level of diabetic retinopathy and the image quality level.

Moreover, we provide the first ultra-widefield retinal image dataset to facilitate the study of model generalizability and to extend DR diagnosis from traditional fundus imaging to widefield retinal photography. This makes the dataset well suited for developing and evaluating image analysis algorithms for early detection of diabetic retinopathy.

Challenge

The challenge is subdivided into three tasks as follows (participants may submit results for one or more of the sub-challenges):

● Disease Grading: Classification of fundus images according to the severity level of diabetic retinopathy, using dual-view retinal fundus images. For more details please refer to Sub-challenge 1.

● Image Quality Estimation: Fundus quality assessment for overall image quality, artifacts, clarity, and field definition. For more details please refer to Sub-challenge 2.

● Transfer Learning: Explore the generalizability of a Diabetic Retinopathy (DR) grading system. For more details please refer to Sub-challenge 3.

Installing Apereo CAS for Development

The following is a procedure for installing CAS (https://www.apereo.org/projects/cas) for development purposes only, for example to test Single Sign-On (SSO) functionality in an application. It is not intended for production.

Limitations:

  • The web server runs the standalone WAR; for production, a Java servlet container such as Tomcat should be used.
  • User credentials are stored in cleartext; for production, a backend such as LDAP should be used.
  • The SSL certificate is self-signed; for production, a CA-signed certificate is required.

Procedure:
Create a VM, for example in VirtualBox; a 10 GB disk is sufficient. After installation, CAS will use about 4.05 GB of disk space.
Install Ubuntu 19.10, preferably the server edition (http://releases.ubuntu.com/19.10/ubuntu-19.10-live-server-amd64.iso) since it is smaller.

CAS runs on Java, so install the Java Development Kit (288 MB download, about 800 MB of disk space):

apt install default-jdk

Install git:

apt install git

Clone the CAS overlay template:

cd /opt
git clone https://github.com/apereo/cas-overlay-template
cd cas-overlay-template

Check out CAS version 6.1, then build:

git checkout 6.1
./gradlew clean build

Create the keystore:

./gradlew createKeystore

Copy the CAS configuration from /opt/cas-overlay-template/etc/cas to /etc/cas:

./gradlew copyCasConfiguration

Run CAS as an executable WAR:

./gradlew run

Open the site in a browser: https://192.168.0.202:8443/cas. A warning will appear because the certificate is self-signed; click “Accept the Risk and Continue”.

Next, try logging in to CAS with username casuser and password Mellon.

If the login succeeds, a success page is shown.

If the password is wrong, an error page is shown.

Adding new users and passwords:

Edit the file /etc/cas/config/cas.properties; users and passwords can be added with the following line:

cas.authn.accept.users=casuserz::Mellon,abcd::efgh, user1::123456, user2::abcdefg

Each username and its password are separated by ‘::’, and user entries are separated by commas.
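As a sanity check of the format, this Python sketch (illustrative only, not part of CAS) parses the property value above into username/password pairs:

```python
# Value of cas.authn.accept.users from the example above.
users_value = "casuserz::Mellon,abcd::efgh, user1::123456, user2::abcdefg"

# Split on commas to get user entries, then on '::' to separate
# username from password; strip whitespace around each entry.
users = dict(entry.strip().split("::") for entry in users_value.split(","))
print(users["user1"])  # 123456
```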

To make CAS usable from a CAS client, the following steps are needed:

  • Add a JSON service registry: enable the JSON service registry dependency in build.gradle, then rebuild CAS.
  • Create the directory /etc/cas/services containing the JSON service registry files.
  • Add the location of the JSON service registry files to /etc/cas/config/cas.properties.

nano /opt/cas-overlay-template/build.gradle

Edit /opt/cas-overlay-template/build.gradle so that it contains the following section:

dependencies {
    compile "org.apereo.cas:cas-server-support-json-service-registry:${casServerVersion}"
}

Then rebuild CAS:

./gradlew clean build

Create the file /etc/cas/services/wildcard-1000.json with the following contents:

{
  "@class" : "org.apereo.cas.services.RegexRegisteredService",
  "serviceId" : "^(https|imaps)://.*",
  "name" : "wildcard",
  "id" : 1000,
  "evaluationOrder" : 99999
}
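The serviceId field is a regular expression. A quick Python check (illustrative only; the client URLs below are made up) shows which service URLs this wildcard entry would match:

```python
import re

# The serviceId pattern from the wildcard service definition above.
service_id = r"^(https|imaps)://.*"

# Any HTTPS (or IMAPS) client URL is matched; plain HTTP is not.
print(re.match(service_id, "https://drupal.example.org/casservice") is not None)  # True
print(re.match(service_id, "http://drupal.example.org/casservice") is not None)   # False
```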

Add the following lines to /etc/cas/config/cas.properties:

cas.serviceRegistry.initFromJson=false
cas.serviceRegistry.json.location=file:/etc/cas/services

Testing with a CAS client

Example CAS client configuration in Drupal 8.8.1


AMLD 2020 – Transfer Learning for International Crisis Response

URL: https://www.aicrowd.com/challenges/amld-2020-transfer-learning-for-international-crisis-response

What’s the Challenge?

Background

Over the past 3 years, humanitarian information analysts have been using an open source platform called DEEP to facilitate collaborative, joint analysis of unstructured data. The aim of the platform is to provide insights from years of historical and in-crisis humanitarian text data. The platform allows users to upload documents and classify text snippets according to predefined humanitarian target labels, grouped into what are referred to as analytical frameworks. DEEP is now in active use at several international humanitarian organizations and United Nations agencies across the globe.

While DEEP comes with a generic analytical framework, each organization may also create its own custom framework based on the specific needs of its domain. In fact, while there is a large conceptual overlap for humanitarian organizations, various domains define slightly different analytical frameworks to describe their specific concepts. These differences between the analytical frameworks in different domains can still contain various degrees of conceptual (semantic) linkages, for instance on sectors such as Food Security and Livelihoods, Health, Nutrition, and Protection.

Challenge

Currently, the ML/NLP components of DEEP are trained separately for each organization, using the annotated data that the organization provides. For organizations that are just starting to work with DEEP, especially those with their own custom frameworks, the text classifier performs poorly due to the lack of sufficient tagged data. For these organizations, DEEP faces a cold-start challenge.

This challenge is a unique opportunity to address this issue with a wide impact. It enables not only better text classification, but also showcases the conceptual semantic linkages between the sectors of various organizations, ultimately resulting in improved analysis of the humanitarian situation across domains. You will be provided with the data of four organizations, consisting of text snippets and their corresponding target sectors; three of the organizations share the same analytical framework (target labels), and one has a slightly different one.

The aim is to learn novel text classification models that can transfer knowledge across organizations, and specifically to improve classification effectiveness for the organizations with smaller amounts of available training data. Ideally, transfer and joint learning methods provide a robust solution to the lack of data in data-sparse scenarios.

Societal Impact

The DEEP project provides effective solutions to analyze and harvest data from secondary sources such as news articles, social media, and reports that are used by responders and analysts in humanitarian crises. During crises, rapidly identifying important information within the constantly increasing data is crucial for understanding the needs of affected populations and improving evidence-based decision making. Despite the general effectiveness of DEEP, its ML-based features (in particular the text classifier) lack sufficient accuracy, especially in domains with little or no training data.

The benefits of the challenge would be seen immediately in the increased quality of the humanitarian community’s secondary data analysis. Humanitarian analysts would be able to spend their time doing what the human mind does best: subjective analysis of information. The legwork of the easier-to-automate tasks, such as initial sourcing of data and extraction of potentially relevant information, can be left to their android counterparts. With these improvements, the time required to gain key insights into humanitarian situations will decrease greatly, and valuable aid and assistance can be distributed in a more efficient and targeted manner, while bringing together in-crisis information and crucial contextual information on socio-economic issues, human rights, peace missions, etc. that are currently disjointed.

What should I know to get involved?

The challenge is the classification of multilingual text snippets from 4 organizations into 12 sectors (labels). The data is provided in 4 sets, each belonging to a humanitarian organization. The amount of available data differs greatly across the organizations. The first 3 organizations use the same set of sectors; the 4th is tagged with a different set of sectors, which nevertheless has many semantic overlaps with that of the first three organizations. The success of the final classifiers is measured based on the average of the prediction accuracies over the organizations.

Resources

The data consists of 4 sets, belonging to 4 organizations (org1 to org4), and each comes with a development set (orgX_dev), and a test set (orgX_test).

The development sets contain the following fields:

  • id: the unique identifier of the text snippet; a string value created by concatenating the name of the organization with a distinct number, for example org1_13005.
  • entry_original: the original text of the snippet, provided in various languages, such as English and Spanish.
  • language: the language of the text snippet.
  • entry_translated: the translation of the text snippet into English, done using Google Translate.
  • labels: the label identifiers of the sectors. Each entry can have several labels, separated by semicolons (;).
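For example, an entry's labels field can be split into individual sector identifiers like this (a minimal sketch; the row values below are invented for illustration):

```python
# A hypothetical dev-set row; all field values are made up.
row = {
    "id": "org1_13005",
    "language": "es",
    "entry_translated": "Families report a lack of clean water and food.",
    "labels": "4;12",
}

# Multi-label entries use ';' as the separator between label identifiers.
labels = [int(x) for x in row["labels"].split(";")]
print(labels)  # [4, 12]
```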

The test sets contain the following fields:

  • id: the unique identifier of the text snippet.
  • entry_original: the original text of the snippet.
  • language: the language of the text snippet.
  • entry_translated: the translation of the text snippet into English, done using Google Translate.

Important: As mentioned before, the first three organizations have the same labels, but the fourth has a different set. The sector corresponding to each label identifier is provided in the label_captions file. Later in this section, you can find a detailed explanation of the meaning of these sectors and their potential semantic relations.

Submissions

As mentioned above, each entry in the training data can have one or more labels (sectors). However, for submission you should provide the prediction of only one label, namely the most probable one.

Given the test sets of the 4 organizations, submissions should be provided in comma-separated (CSV) format, containing the following two fields:

  • id: the unique identifier of text snippets in the test sets
  • predicted_label: the unique identifier of ONE predicted label

The submission file contains the predictions for all 4 organizations together. Here is an example of a submission file:

id,predicted_label
org1_8186,1
org1_11018,10
org2_3828,5
org2_5340,9
org3_2206,8
org3_1875,4
org4_75,107
org4_158,104
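A file in this format can be produced with the standard csv module; a minimal sketch, assuming the predictions are already available as (id, most-probable-label) pairs (the values below are invented):

```python
import csv
import io

# Hypothetical predictions: one (id, most-probable-label) pair per test entry.
predictions = [("org1_8186", 1), ("org2_3828", 5), ("org4_75", 107)]

# Write the two-column CSV expected for submission
# (an in-memory buffer here; use open("submission.csv", "w") in practice).
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["id", "predicted_label"])
writer.writerows(predictions)
print(buf.getvalue())
```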

Evaluation

The evaluation is based on the mean of accuracy values over the organizations: we first calculate the accuracy of the predictions on each organization's test data, and then report the average of these 4 accuracy values. This measure is referred to as Mean of Accuracies.

Since the reference data, like the training data, can assign one or more labels to each entry, we consider a prediction correct when at least one of the reference labels is predicted.

This evaluation measure gives the same weight to each organization, although each organization has a different amount of test data. It incentivizes good performance on the organizations with less available training (and test) data, as they have the same importance as the others.

To facilitate the development and test of the systems, we provide the evaluation script (deep_evaluator.py), available in Resources.
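The metric can be sketched as follows (an illustrative re-implementation, not the official deep_evaluator.py): a prediction counts as correct if it appears among the entry's reference labels, and accuracies are averaged over organizations rather than over entries:

```python
def mean_of_accuracies(references, predictions):
    """references: {org: {entry_id: set of gold labels}}; predictions: {entry_id: label}."""
    accuracies = []
    for org, gold in references.items():
        # A prediction is correct if it matches any of the reference labels.
        correct = sum(1 for entry_id, labels in gold.items()
                      if predictions.get(entry_id) in labels)
        accuracies.append(correct / len(gold))
    # Each organization contributes equally, regardless of test-set size.
    return sum(accuracies) / len(accuracies)

# Toy data: org1 has two entries, org4 has one; both orgs weigh the same.
refs = {
    "org1": {"org1_1": {1, 4}, "org1_2": {5}},
    "org4": {"org4_1": {107}},
}
preds = {"org1_1": 4, "org1_2": 9, "org4_1": 107}
print(mean_of_accuracies(refs, preds))  # 0.75
```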

A Guidance through the Sectors

Humanitarian response is organised in thematic clusters. Clusters are groups of humanitarian organizations, both UN and non-UN, in each of the main sectors of humanitarian action, e.g. water, health, and logistics. These serve as a global organizing principle to coordinate humanitarian response.

Sectors for the first, second, and third organization:

  • (1) Agriculture
  • (2) Cross: short form of Cross-sectoral; areas of humanitarian response that require action in more than one sector. For example malnutrition requires humanitarian interventions in health, access to food, access to basic hygiene items and clean water, and access to non-food items such as bottles to feed infants.
  • (3) Education
  • (4) Food
  • (5) Health
  • (6) Livelihood: Access to employment and income
  • (7) Logistics: Any logistical support needed to carry out humanitarian activities e.g. air transport, satellite phone connection etc.
  • (8) NFI: Non-food items needed in daily life, such as bedding, mattresses, jerrycans, and coal or oil for heating
  • (9) Nutrition
  • (10) Protection
  • (11) Shelter
  • (12) WASH (Water, Sanitation and Hygiene)

Sectors for the fourth organization:

  • (101) Child Protection
  • (102) Early Recovery and Livelihoods
  • (103) Education
  • (104) Food
  • (105) GBV: Gender Based Violence
  • (106) Health
  • (107) Logistics
  • (108) Mine Action
  • (109) Nutrition
  • (110) Protection
  • (111) Shelter and NFIs
  • (112) WASH
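Several of the fourth organization's sectors line up with the shared sectors by name alone; a name-based alignment like the following (purely illustrative, and only a starting point for transfer learning, since the challenge notes richer semantic overlaps, e.g. GBV and Child Protection both relating to Protection) could seed a cross-framework mapping:

```python
# Illustrative name-based alignment from the fourth organization's label
# identifiers (101-112) to the shared label identifiers (1-12).
# This is an assumption drawn only from the sector names listed above.
org4_to_shared = {
    102: 6,    # Early Recovery and Livelihoods -> Livelihood
    103: 3,    # Education
    104: 4,    # Food
    106: 5,    # Health
    107: 7,    # Logistics
    109: 9,    # Nutrition
    110: 10,   # Protection
    112: 12,   # WASH
}
print(org4_to_shared[104])  # 4
```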