A dataset for computer-vision-based fig fruit detection in the wild with benchmarking you only look once model detector

Adi Izhar Che Ani, Mohammad Afiq Hamdani Mohammad Farid, Ahmad Shukri Firdhaus Kamaruzaman, Sharaf Ahmad, Mokh Sholihul Hadi


Most image datasets used for training deep learning models are developed for specific applications. This study introduces a new dataset intended to augment the existing data for identifying figs in their natural habitat, i.e., in the wild. Although researchers have produced numerous image datasets for object detection in agriculture, a specialized dataset for fig detection remains difficult to obtain. To address this gap, a total of 462 photographs of fig fruits were collected, and data augmentation was applied to substantially enlarge the dataset. Finally, we examine the dataset by establishing a baseline for bounding-box detection with well-known object detectors, namely you only look once (YOLO) version 3 (YOLOv3) and YOLOv4. The performance obtained on the test images of our dataset is satisfactory. The ability to detect and monitor fig fruits in natural or cultivated environments can be highly beneficial for farmers: the detector provides real-time information on the number of ripe figs, supporting decision-making.
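The abstract mentions two technical steps: enlarging the dataset through augmentation and evaluating bounding-box detection. As a minimal, hedged sketch (the paper does not specify its exact augmentation operations or evaluation code; the function names and transforms below are illustrative assumptions), annotation-preserving augmentation and box-overlap scoring might look like this:

```python
import numpy as np

def hflip_with_boxes(image, boxes):
    """Horizontally flip an image and remap its bounding boxes.

    boxes: array of [x_min, y_min, x_max, y_max] in pixel coordinates.
    (Illustrative example; the paper's actual augmentation pipeline
    is not specified in the abstract.)
    """
    flipped = image[:, ::-1, :]
    w = image.shape[1]
    new_boxes = boxes.astype(float).copy()
    new_boxes[:, 0] = w - boxes[:, 2]  # mirrored x_min comes from old x_max
    new_boxes[:, 2] = w - boxes[:, 0]  # mirrored x_max comes from old x_min
    return flipped, new_boxes

def jitter_brightness(image, factor):
    """Scale pixel intensities, clipping to the valid 8-bit range."""
    return np.clip(image.astype(float) * factor, 0, 255).astype(np.uint8)

def iou(a, b):
    """Intersection-over-union of two [x_min, y_min, x_max, y_max] boxes,
    the standard overlap measure behind bounding-box detection metrics."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union
```

Keeping box coordinates consistent with each geometric transform, as in `hflip_with_boxes`, is what allows augmented images to be used directly as extra YOLO training samples.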


Agricultural automation; Deep learning; Fig fruits; Image dataset; Object detection


DOI: https://doi.org/10.11591/eei.v13i4.5705



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Bulletin of Electrical Engineering and Informatics (BEEI)
ISSN: 2089-3191, e-ISSN: 2302-9285
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).