Insect pollinators are crucial to the pollination of both wild and crop plants, but their populations are declining rapidly. Limited research has been conducted to assess the impact of habitat changes on pollinator abundance and diversity, particularly in tropical regions. Image-based technologies provide a cost-effective and non-invasive method for insect monitoring, but data extraction can be time-consuming, especially for large datasets. Our study proposes using deep learning techniques, specifically the You Only Look Once (YOLO) algorithm, to develop a tool for bee monitoring in single- and multiple-batch field analyses, and to investigate whether bee counts differ between organic and conventional fields. We trained our YOLOv5-based models on annotated images from organic and conventional guava farms in central Thailand. Our best YOLOv5 model achieved an average precision of 88.7% and recall of 84.9%, classifying eight important bee species. Our results showed no significant difference between the two field types.
- Install Miniconda/Anaconda (if you don't already have it).
- Open a terminal and run the following commands:
conda create -n GUI-env python=3.9.13 # create new virtual env
conda activate GUI-env # activate environment in terminal
pip install -r requirements.txt # install requirements
- Run the main.py script from the BPT_Cranfield/GUI folder using this command:
python src/main.py
The GUI then opens and you can follow the steps in the next section.
Before opening the GUI, you should download the weights for the provided models into the correct folders. Indeed, the '.pt' files are too large to be shared on GitHub! In the 'models' folder you will find a '.txt' file for each model (named with the convention 'Weights_Name_of_the_Model') containing a link to download the corresponding weights. This step has to be done before carrying on with the BeeTect app!
You can also download all four provided weights from the following folder: https://www.dropbox.com/sh/0gsjrr63c385us4/AABUp3cplmssdUmVpY3SzwjJa?dl=0
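If you want a quick sanity check that a downloaded '.pt' file loads correctly outside the GUI, a minimal sketch along the following lines can be used (this is not part of the app; the weights path and image name are placeholders to adjust, and it assumes the weights were trained with YOLOv5):

```python
import torch

# Load the custom YOLOv5 weights (placeholder path; adjust to where you saved the '.pt' file).
# torch.hub fetches the ultralytics/yolov5 repository the first time it is called.
model = torch.hub.load("ultralytics/yolov5", "custom", path="models/your_model.pt")

# Run inference on a sample image (placeholder file name).
results = model("sample_bee_image.jpg")

# Print detected classes, bounding boxes and confidence scores as a DataFrame.
print(results.pandas().xyxy[0])
```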
Once the BeeTect app (GUI) is launched for the first time, a form asking for user information pops up:
Enter the required (and optional) information and click the 'Submit' button. The next time you open the BeeTect app, your information will already be saved and you will not need to fill in this form again.
If several people are working on the same project, it is possible to switch users in the BeeTect app:
Then, you will see the following window:
There are different options for you to use here:
- To create a new project, open the 'File' menu item, and select 'New Project':
All the projects will be saved in the 'projects' folder.
- To open an existing project, use the 'Open Project' option in the 'File' menu:
- You can also access the five most recently opened projects from the 'File' menu:
- Once a project is opened, you can open either a single image ('Open Image') or a folder containing several images ('Open Image Folder'). You can do this from the 'File' menu or directly from the 'Visualisation Pane' by clicking the corresponding buttons:
- To select the YOLO model you want to use, choose an existing one in the dropdown menu. If you want to add your own YOLO model (trained beforehand), you can do that by selecting the 'Add New Model' button. A dialog will open where you can choose the name of the model and browse your computer to select the weights corresponding to the model:
You can also select supplementary files to make the model summary in the exported HTML report more complete: the confusion matrix .png file, the F1 curve .png file, the results .png file and the opt.yaml file containing the YOLO model parameters. To add these files, check the corresponding box and browse your files via the file dialogs.
When a new model is added, the dropdown menu showing the available models automatically updates itself.
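For reference, the opt.yaml file produced by a YOLOv5 training run is a plain YAML file of training options. The short sketch below (not part of the app; the file path is a placeholder and the exact keys depend on how the model was trained) inspects it with PyYAML:

```python
import yaml

# Read the YOLOv5 training options file attached to a model (placeholder path).
with open("models/opt.yaml", "r") as f:
    opt = yaml.safe_load(f)

# Print every recorded training parameter; typical entries include epochs,
# batch_size and img_size, but the exact keys depend on the training run.
for key, value in opt.items():
    print(f"{key}: {value}")
```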
- It is also mandatory to select a 'Batch folder' before testing the selected model on the selected images (you will not be able to start the detection if a folder has not been selected first). To do that, click the 'Select Batch Folder' button in the main window:
Once all of these steps are done, you can click on the 'Start Detection' button.
The results will be saved in the selected folder, along with a 'results.txt' file containing the data for each image and an HTML report with statistics, graphs and details on the model and on the batch of images.
The images with bounding boxes for the detected pollinators (along with a .json file of the same name) are also saved in different subfolders within the selected folder:
- 'No-Pollinator' folder: the images that don't contain any pollinators.
- 'Pollinator' folder: the images containing at least one pollinator.
- A subfolder with the name of the detected species is created for each pollinator species found in the images: if there is only one pollinator in an image, the image is saved in that species' subfolder.
- If there is more than one pollinator in an image, the image is saved in the 'Multiple-Pollinators' subfolder.
The HTML report, named after the batch, is saved in the selected folder.
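If you want to post-process these outputs outside the GUI, a minimal sketch (not part of the app) that tallies the number of saved images per subfolder, assuming the folder layout described above and a placeholder batch path, could look like this:

```python
from pathlib import Path

# Placeholder path to a batch output folder produced by the detection step.
batch_folder = Path("projects/my_project/batch_01")

# Walk every subfolder ('No-Pollinator', 'Pollinator', the per-species
# subfolders and 'Multiple-Pollinators') and count the saved images in each.
for subfolder in sorted(p for p in batch_folder.rglob("*") if p.is_dir()):
    n_images = len(list(subfolder.glob("*.jpg"))) + len(list(subfolder.glob("*.png")))
    print(f"{subfolder.name}: {n_images} image(s)")
```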
To have a quick summary of the results, you can check the 'Statistics' pane in the GUI:
There is a summary of the statistics calculated for the current batch, a summary of the model and two representative graphs.
To compare different batches from the same project, you can select the 'Export Batch Report' button in the menu bar:
It will open a file dialog for you to select multiple batches to be compared:
Once you submit the batches you want to compare, an HTML report will be generated with a statistical comparison of the selected batches.
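As an illustration of the kind of statistics such a batch comparison might involve, the sketch below uses scipy and scikit-posthocs (both listed in the requirements); it is not the app's own code, and the per-image pollinator counts and batch names are made up:

```python
import pandas as pd
import scipy.stats as stats
import scikit_posthocs as sp

# Hypothetical per-image pollinator counts for three batches (made-up numbers).
counts = {
    "batch_organic_1": [2, 0, 1, 3, 1],
    "batch_organic_2": [1, 1, 0, 2, 2],
    "batch_conventional_1": [0, 1, 0, 1, 2],
}

# Kruskal-Wallis test: do the count distributions differ between batches?
h_stat, p_value = stats.kruskal(*counts.values())
print(f"Kruskal-Wallis H={h_stat:.3f}, p={p_value:.3f}")

# If the global test is significant, a Dunn post-hoc test shows which pairs of batches differ.
long = pd.DataFrame(
    [(batch, c) for batch, values in counts.items() for c in values],
    columns=["batch", "count"],
)
print(sp.posthoc_dunn(long, val_col="count", group_col="batch", p_adjust="bonferroni"))
```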
- python 3.9.13
- opencv-python 4.7.0.72
- PySide2 5.15.2.1
- torch 2.0.0
- torchvision 0.15.1
- matplotlib 3.7.1
- numpy 1.24.2
- pandas 1.5.2 (not the latest version)
- PyYAML 6.0
- scipy 1.10.1
- seaborn 0.12.2
- psutil 5.9.4
- tqdm 4.65.0
- gitdb 4.0.10
- gitpython 3.1.31
- smmap 5.0.0
- scikit-posthocs 0.7.0