Why Is Digitalizing Agricultural Experimentation Important?

Experimentation on wheat requires expensive manual measurements to monitor yield components, such as wheat head density after earing. This is tedious work, and it is usually done manually on a restricted portion of the microplots. In response to this wheat phenotyping challenge, research institutes across the world are already equipped with high-throughput vectors such as gantries or robots, but they still lack the algorithms to make sense of the data these vectors collect.

The Global Wheat Head Dataset

The Global Wheat Head Dataset is a first collaborative answer to this global issue. Nine institutions from seven countries (France, the UK, Switzerland, Japan, China, Australia, and Canada) have pooled their data and knowledge to tackle the wheat head density problem. Thanks to the sponsorship of the Global Institute for Food Security (Saskatchewan), Kubota, Hiphen, Plant Phenomics, and Digitag, a data science challenge with 2,245 competitors was organized from 4 May to 4 August. The winners are Dung from Vietnam, Alexandre Liao from the US, and Javi from Slovenia. Congrats to them!

The goal of organizing such a challenge is to get the most value out of the Global Wheat Head Dataset by obtaining a state-of-the-art detector, and Kaggle is the best platform for the task. A few years ago, Netflix improved the quality of its recommendation algorithm by putting up a $1 million cash prize. More than a simple detection challenge, competitors were asked to produce a generic method robust to biases such as changes in environment, illumination, and genotype. This is known as a “domain shift” problem in computer vision. To meet this objective, the training dataset was limited to images from France, the UK, Switzerland, and Canada, while the candidate algorithms were evaluated on images from Japan, China, and Australia. This design ensured the generalization power of the proposed approaches.
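To make the evaluation design concrete, here is a minimal sketch of such a domain-based split in Python. The metadata table and its `domain` column are hypothetical stand-ins for illustration, not the actual dataset files:

```python
import pandas as pd

# Hypothetical per-image metadata; column names are illustrative only.
metadata = pd.DataFrame({
    "image_id": ["img_001", "img_002", "img_003", "img_004"],
    "domain":   ["France", "UK", "Japan", "Australia"],
})

# Train only on the four "seen" countries; hold out the others entirely.
TRAIN_DOMAINS = {"France", "UK", "Switzerland", "Canada"}
TEST_DOMAINS = {"Japan", "China", "Australia"}

train = metadata[metadata["domain"].isin(TRAIN_DOMAINS)]
test = metadata[metadata["domain"].isin(TEST_DOMAINS)]

# Because no test domain appears during training, a good score on `test`
# reflects robustness to domain shift rather than memorization.
```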

Post-mortem of Organizing a Kaggle Challenge!

Although the initial plan looked good, we encountered two issues with our competition design. First, we had a problem with the quality of our data, resulting from two combined causes. The first was that the definition of the labels differed slightly from one dataset to another, depending on the reviewer. Despite having defined it in the paper, some mismatches were found. The second came from the export tool of the platform we used. Despite having conducted thorough audits, nothing beats a Kaggle data audit! The main risk posed by the dataset quality was the fairness of the competition, so we audited the test dataset before the end of the competition.

The second issue we encountered was a tricky communication problem with the community regarding licensing. We asked for solutions to be MIT compliant, as that was, in our minds, the most permissive license. It turned out that YOLOv5 was a compelling solution for the challenge. Unfortunately, its main implementation is released under a GPL license, which is… not compatible with MIT! The main challenge was to detect the issue in the forum and to provide a clarification quickly. We recommend that future challenge organizers prepare a real communication strategy and read the forum every day!
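As an illustration of what such a data audit can automate, here is a hedged sketch of a bounding-box sanity check. The `bbox`/`width`/`height` columns and the `[x, y, w, h]` box format are assumptions for the example, not the platform's actual export schema:

```python
import ast
import pandas as pd

def audit_boxes(df: pd.DataFrame) -> pd.DataFrame:
    """Return the rows whose boxes are degenerate or fall outside the image.

    Assumes one row per box, with `bbox` stored as an "[x, y, w, h]" string
    and the image size in `width` and `height` (a hypothetical schema).
    """
    boxes = df["bbox"].apply(ast.literal_eval)
    x, y = boxes.str[0], boxes.str[1]
    w, h = boxes.str[2], boxes.str[3]
    bad = (
        (w <= 0) | (h <= 0)            # zero or negative area
        | (x < 0) | (y < 0)            # box starts outside the image
        | (x + w > df["width"])        # spills past the right edge
        | (y + h > df["height"])       # spills past the bottom edge
    )
    return df[bad]
```

Checks like this, run on every export, are cheap insurance against the kind of tooling errors described above.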

We also want to share a key lesson from running a domain shift competition:

The innovative and unique aspect of the GWHD dataset is the significant number of contributors from around the world, resulting in a large diversity across images. However, the diversity within each continent and across environmental conditions is not well covered by the current dataset: more than 68% of the images in the GWHD dataset come from Europe, and 43% come from France alone. Furthermore, some regions are currently missing, including Africa, Latin America, the Middle East, and the United States. An expansion to these missing areas would therefore be very welcome. The guidelines proposed in the Global Wheat Head Dataset paper and on our website can help people generate new data and increase the robustness of all models.
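If you want to measure this kind of geographic imbalance yourself, a short pass over the image metadata is enough. Here, `metadata.csv` and its `country` column are hypothetical names used for illustration:

```python
import pandas as pd

metadata = pd.read_csv("metadata.csv")  # hypothetical per-image metadata

# Share of images contributed by each country, as percentages.
distribution = (
    metadata["country"]
    .value_counts(normalize=True)
    .mul(100)
    .round(1)
)
print(distribution)
```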

And Now?

As many competitors noticed, the quality of the initial dataset was not perfect, for reasons ranging from technical errors to the difficulty of the task. The whole dataset has since been entirely corrected. The challenge is now live on Codalab to help researchers benchmark their solutions for their papers! This corrected version will be known as the “official” dataset.

Everything is available here: https://competitions.codalab.org/competitions/27449

The official dataset is on Zenodo: https://zenodo.org/record/4252246#.X6l7HVqSmUl

We also hope to update the challenge and organize a 2021 edition with more test data!
