
Improving crop production using an agro-deep learning framework in precision agriculture

Abstract

Background

The study focuses on enhancing the effectiveness of precision agriculture through the application of deep learning technologies. Precision agriculture, which aims to optimize farming practices by monitoring and adjusting various factors influencing crop growth, can greatly benefit from artificial intelligence (AI) methods like deep learning. The Agro Deep Learning Framework (ADLF) was developed to tackle critical issues in crop cultivation by processing vast datasets. These datasets include variables such as soil moisture, temperature, and humidity, all of which are essential to understanding and predicting crop behavior. By leveraging deep learning models, the framework seeks to improve decision-making processes, detect potential crop problems early, and boost agricultural productivity.

Results

The study found that the Agro Deep Learning Framework (ADLF) achieved an accuracy of 85.41%, precision of 84.87%, recall of 84.24%, and an F1-Score of 88.91%, indicating strong predictive capabilities for improving crop management. The false negative rate was 91.17% and the false positive rate was 89.82%, highlighting the framework's ability to correctly detect issues while minimizing errors. These results suggest that ADLF can significantly enhance decision-making in precision agriculture, leading to improved crop yield and reduced agricultural losses.

Conclusions

The ADLF can significantly improve precision agriculture by leveraging deep learning to process complex datasets and provide valuable insights into crop management. The framework allows farmers to detect issues early, optimize resource use, and improve yields. The study demonstrates that AI-driven agriculture has the potential to revolutionize farming, making it more efficient and sustainable. Future research could focus on further refining the model and exploring its applicability across different types of crops and farming environments.


Introduction

Deep learning is an advanced technology that utilizes artificial intelligence and deep learning techniques to enhance the productivity of agricultural cultivation [1]. The process combines sophisticated algorithms with agricultural data to provide more precise forecasts and decisions to improve crop productivity. The method commences by gathering extensive agricultural data, encompassing weather patterns, soil attributes, crop genetics, and farming techniques [2]. The data is inputted into the deep learning models, which employ sophisticated algorithms to analyze and acquire knowledge from the data. This technique enables the model to discern patterns and correlations among variables that impact crop growth [3]. Subsequently, these observations are utilized to construct predictive models for predicting agricultural productivity in diverse circumstances. It employs real-time data gathered from sensors and drones to oversee the condition and development of crops [4]. The data is integrated with past data to enhance the precision of projections and identify prospective crop yield problems. An essential benefit of deep learning is its capacity to adjust and improve performance as time progresses dynamically [5]. With the accumulation of additional data, deep learning algorithms can constantly acquire knowledge and refine their predictions, resulting in outcomes that are more accurate and dependable [6]. Deep learning can also aid farmers in making decisions. By analyzing data and offering advice, farmers can make well-informed decisions about crop selection, nutrient management, and pest control [7]. Through its capacity to evaluate extensive data, make forecasts, and offer recommendations for optimizing crop development, it is a potent instrument for improving agricultural production [8]. By integrating cutting-edge artificial intelligence and developments in agricultural technology, this system assists farmers in making more informed decisions and enhancing their total crop production [9]. Utilizing deep learning to improve crop production entails employing artificial intelligence algorithms to analyze and forecast agricultural outcomes and offering suggestions for optimizing crop production. For this method to successfully transform agriculture and enhance food production, it is imperative to address several technological challenges [10, 11]. A vital obstacle lies in the accessibility and caliber of data. To make precise predictions about crop outcomes, it is necessary to train algorithms using extensive and varied datasets [12]. In numerous developing nations, particularly those where it would be most advantageous, there is a pressing need to enhance the data collecting and management systems, which poses challenges in acquiring the requisite data [13, 14]. A further concern is the intricate and fluctuating nature of agricultural systems.

Crop output is influenced by various factors, including climatic conditions, soil quality, pest infestations, and plant diseases [15]. These variables challenge algorithms to reliably forecast outcomes, as they may have yet to be trained on all potential scenarios [16]. The conditions and outcomes of each crop may differ from season to season, adding complexity to predictions. The concern lies in the precision and comprehensibility of the deep learning models themselves [17, 18]. Deep learning algorithms have demonstrated favorable outcomes in several applications. However, they are commonly perceived as “black boxes” because of their intricate architecture and inability to explain their judgments [19, 20]. The absence of transparency and interpretability in algorithms can result in farmers harboring suspicion and resistance since they may require a comprehensive understanding and trust in the advice provided by these algorithms. The practical factors should be considered while building an agricultural deep-learning architecture [21]. The exorbitant expenses associated with technology and infrastructure, together with the requirement for proficient workers to create and sustain the system, may render it unattainable for small-scale farmers in dire need [22, 23]. Internet connectivity and the availability of technology in rural locations may be limited, posing challenges in collecting and analyzing real-time data [24, 25]. Although the potential advantages of using an agro-deep learning framework to improve crop production are significant, various technological obstacles must be overcome to ensure its success and long-term viability [26]. To achieve the transformation of agriculture, it is imperative to foster collaboration among diverse stakeholders, such as government officials, farmers, and technology specialists. The research's primary contribution encompasses the following:

  • Creation of an innovative framework: The research aims to create a novel framework that combines agro ecosystem data and deep learning techniques to improve crop yield. Farmers, agronomists, and researchers can utilize this framework to enhance the efficiency of crop management practices.

  • Enhanced predictive precision: Through the utilization of deep learning algorithms, the framework is capable of precisely forecasting crop yield, disease outbreaks, and nutritional deficits. It can assist farmers in making well-informed decisions regarding crop management strategies, enhancing yield.

  • The framework integrates various data sources, such as satellite images, weather data, soil properties, and crop growth metrics, to develop a complete and all-encompassing approach to crop monitoring and management.

  • Enhanced crop management plans: This study aids in the creation of more effective crop management strategies by employing deep learning methods to examine extensive datasets and detect patterns and trends. It can assist farmers in maximizing water and fertilizer utilization, minimizing insect and disease harm, and enhancing overall crop health and productivity.

  • The framework can mitigate the adverse environmental effects of agriculture by offering precise forecasts and suggestions, promoting sustainable agriculture. The process involves enhancing resource utilization efficiency, minimizing chemical inputs, and advocating for environmentally sustainable farming methods.

  • Progress in precision agriculture: The use of deep learning algorithms inside the framework enables accurate and prompt decision-making, making it a significant addition to precision agriculture. It has the potential to assist farmers in lowering expenses, enhancing effectiveness, and promoting sustainable and productive farming methods.

The remainder of the paper is organized as follows. Section “Foundations and prior research” presents a thorough examination of the relevant literature. Section “Methodology” describes the proposed framework and the roles of its components. Section “Proposed algorithm” details the algorithm and its formulation, and Section “Results and discussion” presents the comparative outcomes. Section “Conclusion” summarizes the findings and outlines potential improvements for future research.

Foundations and prior research

This section provides an overview of the existing research and technologies that have contributed to the development of the Agro Deep Learning Framework (ADLF). It highlights key advancements in precision agriculture, deep learning applications, and data-driven farming methods, offering a comprehensive understanding of the groundwork that this study builds upon. By examining these contributions, we contextualize the current approach and identify the gaps that the proposed framework aims to address.

In their study, Sakthipriya et al. [27] examined the concept of precision agriculture, which entails the utilization of cutting-edge technologies like convolutional neural networks and ML methods such as genetic algorithms to enhance rice production through efficient nutrient management. This strategy employs data and algorithms to make informed decisions, enhancing resource utilization and increasing agricultural productivity. Choudhari et al. [28] have examined Precision Agriculture as a method that employs modern technologies such as Multidomain Feature Engineering and Multimodal Deep Learning to gather and evaluate data from several agricultural domains, including soil, weather, and plant health. Subsequently, this data is utilized to provide precise forecasts of agricultural output, enabling farmers to make well-informed choices and enhance overall efficiency. Kumari et al. [29] have examined the growing utilization of ML techniques in agriculture for enhancing agricultural yield. These methodologies encompass data analysis, predictive modelling, and automated decision-making. By utilizing these methodologies, farmers can discern patterns and make decisions based on empirical evidence to maximize crop productivity, minimize expenses, and enhance profitability. This assessment assesses the efficacy of certain strategies in agriculture.

Ahmed et al. [30] have examined the potential of utilizing ML to analyze data from sensors and photographs to monitor and track the growth phases of agricultural crops. It enables farmers to make well-informed choices about irrigation, fertilization, and pest control in real time, resulting in enhanced crop yields and improved resource management. Rahu et al. [31] have stated that combining IoT and ML framework is a comprehensive solution for monitoring and analyzing water quality in agriculture. The system employs sensors and data analytics to constantly analyze water parameters, anticipate future problems, and offer suggestions. Using this resilient framework can assist farmers in making well-informed decisions and guarantee the optimal utilization of water resources. Shwetabh et al. [32] have examined the Smart Health Monitoring System of Agricultural Machines as an advanced technology that employs deep learning, IoT, and AI to enhance the performance and efficiency of agricultural machines. The system constantly monitors and analyzes various data, including soil moisture, weather conditions, and machine performance, to make immediate modifications and enhance production in agricultural activities.

Dixit et al. [33] examined the detection and classification of wheat crop illnesses in their study. They employed ML techniques to identify and categorize diseases that impact wheat plants, utilizing sophisticated algorithms and models. This technology aids farmers and researchers in efficiently and precisely identifying illnesses in crops, resulting in prompt treatments and enhanced agricultural productivity. Mujawar et al. [34] examined a technologically advanced IoT-Enabled Intelligent Irrigation System with ML-Based Monitoring. This system aims to enhance the effectiveness and output of rice farming. The system employs a blend of Internet of Things technology, ML algorithms, and remote monitoring to enhance irrigation and water use in rice fields, leading to improved and sustainable rice agriculture. Pandey et al. [35] examined a case study investigating the utilization of Internet of Things (IoT) technology in cultivating turmeric. The study emphasizes the utilization of diverse IoT sensors, devices, and data analysis to enhance the efficiency, productivity, and quality of turmeric production in agriculture.

Sonali et al. [36] have examined an enhanced deep learning-based classifier, a computer model designed to identify and categorize a particular plant disease known as Aloe Barbadensis Miller disease. This classifier employs sophisticated algorithms to scan photos of the plant's symptoms and precisely detect the disease, enabling more streamlined and potent disease management and prevention. Hamouda et al. [37] have examined a paper that presents a technique for choosing the most appropriate sensor nodes in wireless sensor networks for precision agriculture. This method utilizes a combination of a genetic algorithm and an extended Kalman filter. This technique aims to enhance the efficiency of the network and the accuracy of data collection and analysis for precision agriculture. The authors Gupta et al. [38] have examined how utilizing artificial intelligence in sustainable agriculture supply chains entails employing technology to enhance productivity, minimize inefficiencies, and enhance decision-making processes. It encompasses using AI algorithms to evaluate data and generate predictions to enhance agricultural management, anticipate supply and demand, and allocate resources more effectively. Ultimately, this results in a more sustainable and efficient farm sector.

Gyamfi et al. have explored Agricultural 4.0 as a concept that leverages sophisticated technologies, including the Internet of Things (IoT), big data analytics, and artificial intelligence, to enhance the efficiency of the agricultural industry. This study investigates the capacity of these technology solutions to empower farmers, enhance production efficiency, and attain sustainable agriculture in the digital transformation era. Dashand et al. [39] have examined the Distributed and Analogous simulation framework as a technology-driven method that employs IoT (Internet of Things) to manage pests and plant diseases. The process entails generating virtual simulations of several scenarios and utilizing data from sensors, drones, and other devices to observe and address potential risks, ultimately enhancing the well-being and productivity of plants. In their paper, Unhelkar et al. [40] have presented innovative deep-learning models aimed at effectively identifying insect pests in agriculture. Additionally, it advocates for utilizing organic pesticides and encouraging the adoption of sustainable and intelligent farming methods. Gryshova et al. [41] examined the application of artificial intelligence (AI) in climate-smart agriculture. This involves utilizing sophisticated technologies, such as ML and predictive modelling, to enhance the efficiency and sustainability of farming in response to climate change. Implementing this strategy can enhance agricultural productivity, minimize inefficiencies, and enable farmers to adjust to evolving environmental circumstances, ultimately fostering a more ecologically sound future for the farming industry.

Parmar et al. [42] have examined the utilization of deep learning frameworks to identify the severity of diseases in fruits. Their study focuses on applying several deep-learning techniques to estimate the severity of these diseases accurately. The process entails instructing algorithms with extensive datasets to detect patterns in photos of fruits and categorize them according to their degree of harm or illness. The Venkatasaichandrakanth et al. [43] study examines the efficacy of deep learning and ML techniques in accurately categorizing crop pests. The investigation examines each methodology's merits and drawbacks, scrutinizing precision, effectiveness, and flexibility to ascertain the most appropriate technique for categorizing agricultural environments. Gerber et al. [44] have examined the temporal patterns of global spatially explicit yield gaps, which involves examining crop yields in various locations to find areas where yield increases may be decelerating or reaching a plateau. It aids in identifying areas that are susceptible to future stagnation in crop yields, enabling focused actions to enhance and sustain agricultural productivity. The article by Olisah C. et al. explores the utilization of deep neural networks in a corn yield prediction model. This decision support system employs sophisticated methodologies to forecast corn crop production for smallholder farmers. The system employs data from diverse sources and deep neural networks to deliver dependable and prompt information to farmers, enabling them to make well-informed decisions on their crops.

Table 1 provides a detailed comparative analysis of various deep learning models employed in precision agriculture. The table highlights key performance metrics, such as accuracy, precision, recall, F1-score, false negative rate (FNR), and false positive rate (FPR), for each model. Additionally, it outlines the specific applications of each model, including image analysis for plant segmentation and disease detection, predictive modeling for crop yields, automated data collection through sensors and drones, precision irrigation scheduling, and disease and pest detection. The comparison aims to showcase the strengths and limitations of each model in enhancing crop monitoring, decision-making, and overall agricultural productivity. The data presented in this table underscores the importance of selecting appropriate deep learning models to address specific challenges in precision agriculture and highlights the potential improvements in crop yield and sustainability that can be achieved through advanced technological integration.

Table 1 Comparative analysis of deep learning models in precision agriculture

Identified issues

  • Selecting suitable crops: Precision agriculture requires choosing the right crops and attributes based on climate, soil, water availability, disease resistance, and market demand.

  • Data collection and integration: Precision agriculture relies on accurate, timely data from various sources like sensors, weather stations, and management software. Integrating this data while ensuring its accuracy and consistency is challenging.

  • Developing precise crop models: Crop models must accurately simulate plant growth and response to water, nutrients, and pesticides, considering soil and climate variability. Building and validating these models require extensive research and data.

  • Personalized decision support systems: Location-specific, data-driven decisions necessitate user-friendly decision support systems that provide real-time input recommendations, such as fertilizers and irrigation, accessible to farmers of varying expertise.

  • Integration with equipment and automation: Linking computational models with farming equipment and automation is crucial for real-time data collection and efficient operation of agricultural machinery, enhancing precision and accuracy.

  • Scalability and adaptability: Precision agriculture must adapt to new technologies and methods, incorporating novel data sources and emerging technologies like drones and AI to stay current in the evolving agricultural landscape.

Research objectives

  • Innovative image analysis: Implement advanced algorithms for plant segmentation, leaf counting, and disease detection to improve the accuracy and efficiency of crop monitoring.

  • Predictive crop modeling: Develop predictive models using historical data on weather, soil conditions, and crop growth patterns to aid farmers in making informed decisions about planting, irrigation, and harvesting.

  • Automated data collection: Utilize sensors and drones to automate the collection of real-time data on soil moisture, temperature, and plant health, reducing the need for manual labor and enhancing decision-making accuracy.

  • Precision irrigation scheduling: Create precise irrigation schedules by analyzing data on soil moisture levels and plant water needs, conserving water and improving crop health and yield.

  • Proactive disease and pest detection: Design systems to accurately detect crop diseases and pests using vast amounts of data from images and sensors, allowing for timely intervention and yield loss prevention.

  • Optimized farm management: Enhance farm management practices by analyzing data on crop growth and environmental factors, optimizing decisions on crop rotation, pest control, and soil management for better yields and cost savings.

Technical novelty

The novelty of the proposed system resides in its capacity to precisely and effectively evaluate extensive and intricate datasets to enhance crop management. This strategy integrates conventional agricultural methods with state-of-the-art deep learning algorithms to comprehensively comprehend plant development and health. Agro-deep learning entails training artificial neural networks using extensive agricultural data, encompassing soil and meteorological conditions, crop genetics, and plant imagery. Subsequently, these networks employ ML algorithms to generate forecasts and suggestions for crop management, utilizing up-to-date data. This approach enables the detection of intricate patterns and trends that may not be readily discernible to humans, resulting in more accurate and data-driven decision-making in crop cultivation. It can diminish the dependence on human labor and enhance the effectiveness of agricultural procedures. Deep learning integration in precision agriculture facilitates the creation of predictive models for optimizing crop conditions, detecting diseases, and forecasting yields. This offers farmers vital knowledge to optimize their resources and enhance overall crop output. Agro-deep learning in precision agriculture is a notable progression that improves crop production and management accuracy and efficiency.

Methodology

Farmers can rely on the ADLF, which works like an intelligent assistant by analyzing large datasets drawn from sources such as soil moisture, weather patterns, and crop health. This information enables the framework to predict crop production accurately, detect potential diseases in their early stages, and suggest adjustments to agricultural methods. As a result, farmers can increase their production, efficiency, and decision-making capacity.

Once data collection is complete, deep learning techniques such as neural networks are applied to the data. By learning from past data and predicting future states, these networks allow the system to refine its understanding over time. The technical details of how these predictions are generated are explored in the following sections.

Agricultural crop production influences the food supply, economic progress, and ecological equilibrium. Nevertheless, conventional crop production techniques frequently require a significant amount of manual effort, consume considerable time, and depend on subjective judgment. The focus of technological progress has switched towards precision agriculture. This method seeks to enhance crop output by utilizing remote sensing, data analytics, and ML. Deep learning frameworks are being employed in crop production as a revolutionary method in precision agriculture. Deep learning is a branch of ML that uses artificial neural networks to acquire knowledge and make determinations based on extensive datasets. In agriculture, deep learning can be utilized to analyze data obtained from diverse sources, including sensors, satellites, and drones, to make accurate and efficient determinations for crop production. The crop production process using an agro-deep learning framework comprises multiple sequential stages. The building of the proposed methodology is illustrated in Fig. 1.

Fig. 1 Construction of proposed methodology

Data collection

In this study, we used a dataset from Kaggle, a popular site for data science and ML competitions, comprising 56,717 entries. The dataset covers the top 10 most consumed crops globally: Cassava, Maize, Plantains, Potatoes, Rice (paddy), Sorghum, Soybeans, Sweet potatoes, Wheat, and Yams. Each entry includes key weather conditions (temperature, humidity, etc.), crop health indicators, and soil properties, which are used to predict crop yield. To ensure the robustness and generalizability of the model, the dataset was split into training and test sets: 80% of the data (45,373 entries) was used to train the model, while the remaining 20% was held out to evaluate the model's performance. To further reduce the likelihood of overfitting, cross-validation techniques were employed to verify that the model performs consistently across different subsets of the data. No external datasets were used to validate the model; however, the large dataset and the wide variety of conditions it covers allow us to train a model that generalizes well to real-world conditions.
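
To make this concrete, the sketch below shows how the 80/20 partition and cross-validation described above could be set up with pandas and scikit-learn. The file name, target column, and fold count are illustrative assumptions rather than the exact Kaggle schema.

```python
# Minimal sketch of the train/test split and cross-validation described above.
# "crop_yield.csv" and the "yield" column are placeholder names, not the exact schema.
import pandas as pd
from sklearn.model_selection import train_test_split, KFold

df = pd.read_csv("crop_yield.csv")            # the 56,717-entry dataset
X = df.drop(columns=["yield"])                # weather, soil, and crop-health features
y = df["yield"]

# 80% training (about 45,373 rows) and a 20% held-out test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 5-fold cross-validation on the training portion to check generalization
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
for tr_idx, val_idx in kfold.split(X_train):
    X_tr, X_val = X_train.iloc[tr_idx], X_train.iloc[val_idx]
    y_tr, y_val = y_train.iloc[tr_idx], y_train.iloc[val_idx]
    # ... fit the model on (X_tr, y_tr) and validate on (X_val, y_val)
```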

Data preprocessing

In this research, data preprocessing is a critical step to ensure the quality and suitability of the dataset for ML models. The preprocessing begins with data cleaning, where missing values and inconsistencies are addressed. Missing values are imputed using statistical methods or domain-specific knowledge to ensure the dataset is complete. Outliers are identified and either removed or transformed to prevent them from skewing the model's performance. Next, data normalization is performed to scale the features to a standard range, typically between 0 and 1, which helps in improving the convergence of gradient-based algorithms. This step is particularly important for features with different units or scales. Categorical variables, such as crop types, are encoded using techniques like one-hot encoding to convert them into a numerical format that can be processed by ML algorithms. Additionally, feature engineering is applied to create new relevant features or transform existing ones to enhance the model's predictive power. For instance, interactions between certain variables are considered, and new features like moving averages or growth rates are derived. Finally, the dataset is split into training, validation, and test sets to enable the evaluation of model performance and prevent overfitting. This systematic preprocessing ensures that the dataset is well-prepared for the subsequent stages of model development and training, ultimately leading to more accurate and reliable crop yield predictions [45, 46].
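
A minimal sketch of this preprocessing pipeline, assuming illustrative column names and using scikit-learn utilities, is shown below.

```python
# Sketch of the preprocessing steps described above: imputation of missing values,
# scaling to the 0-1 range, and one-hot encoding of categorical variables.
# The column names are assumptions for illustration.
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

numeric_cols = ["soil_moisture", "temperature", "humidity"]   # assumed feature names
categorical_cols = ["crop_type"]

numeric_pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # fill missing sensor readings
    ("scale", MinMaxScaler()),                      # normalize features to 0-1
])

preprocessor = ColumnTransformer([
    ("num", numeric_pipe, numeric_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])

# The transformer is fitted on the training split only and reused on the test split:
# X_train_prepared = preprocessor.fit_transform(X_train)
# X_test_prepared = preprocessor.transform(X_test)
```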

Imagine a small farm starting wheat production. Data for the deep learning model is collected from multiple sources: satellite images to monitor crop growth, IoT sensors that measure soil moisture, and local weather stations that record rainfall and temperature. On this farm, soil sensors sample moisture every 30 min, and drones capture aerial photos of the fields once a week. The model's input comes from these images and sensor data, which are used to predict which parts of the farm need more water or pest control measures. Now consider feature extraction on a maize farm. The field's condition is captured as input data in a high-resolution image from a drone. At this stage, a convolutional neural network (CNN) model identifies the essential features: soil moisture patterns, plant stress areas, weed growth, etc. For example, the model uses the drone's image to recognize patterns of discoloration in particular regions of the field that might indicate nutrient deficiencies. By feeding this data into the model, the system automatically extracts these critical features, saving the farmer a tremendous amount of time spent diagnosing possible issues in the field.

Feature extraction

Feature maps play a vital role in extracting essential visual characteristics from images in computer vision. These matrices reflect the activations of distinct features within an image, such as edges, forms, or textures. In this stage, the model consumes the extracted features and predicts the crop yield of a large farm in a drought-prone zone. According to the model, given soil moisture levels and weather forecasts, the wheat yield of one section of the farm will be lower than expected. In this case, the system recommends increasing irrigation because crop health is affected in this region.

Feature maps are produced by applying different convolutional filters on the input image. The channel attention unit is a layer that improves the quality and relevance of the features in the feature maps through specific procedures. The process comprises four primary stages: pooling, convolutional layers (conv 1 × 1), rectified linear unit (ReLU), and sigmoid. In the pooling step, the input feature map undergoes sampling, which decreases its spatial dimensions while retaining the crucial information. Next, the convolutional layer with a 1 × 1 filter is employed to reduce the channel dimension of the feature map, reducing both size and complexity. The Rectified Linear Unit (ReLU) function introduces non-linearity and amplifies the fundamental features. The sigmoid function normalizes the values, ensuring they fall within the range of 0–1, and assigns greater importance to the essential properties. The symbol C × 1 × 1 denotes the dimensions of the feature map after its passage through the channel attention unit, where 'C' is the number of channels, decreased from the initial input feature map. The functions of feature extraction, together with the dimensions of the feature map after processing through the channel attention unit, are illustrated in Fig. 2.

Fig. 2 Feature extraction
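
The channel attention unit described above can be sketched in PyTorch as follows. The reduction factor and the second 1 × 1 convolution that restores the channel count follow the common squeeze-and-excitation pattern and are assumptions; the paper does not specify the exact layer sizes.

```python
# Hedged PyTorch sketch of the channel attention unit:
# pooling -> conv 1x1 -> ReLU -> conv 1x1 -> sigmoid, giving C x 1 x 1 channel weights.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # pooling: C x H x W -> C x 1 x 1
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),  # conv 1x1 (channel reduction)
            nn.ReLU(inplace=True),                   # non-linearity
            nn.Conv2d(channels // reduction, channels, kernel_size=1),  # restore channel count (assumed)
            nn.Sigmoid(),                            # normalize weights to the 0-1 range
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.fc(self.pool(x))              # C x 1 x 1 attention weights
        return x * weights                           # emphasize the important channels
```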

The dimensions of 1 × 1 indicate the spatial dimensions, which undergo down-sampling during the pooling step. The spatial attention unit is an additional layer that operates on the feature maps to emphasize the relevant features according to their spatial location. It comprises two convolutional layers, specifically conv 1 × 1, ReLU, and sigmoid activation functions. However, in contrast to the channel attention unit, the spatial attention unit does not pool and operates on the full feature map. The convolutional layers with 1 × 1 filters reduce the channel dimension to a single channel, while the ReLU function reintroduces non-linearity. After that, the sigmoid function normalizes and prioritizes the crucial spatial characteristics. The notation 1 × H × W represents the dimensions of the feature map after the spatial attention unit has processed it. The channel dimension is reduced to one, whereas the height and width of the feature map are represented by H and W, respectively. The concatenate layer combines the results of the channel and spatial attention units, resulting in a feature map that integrates the improved channel and spatial characteristics. The Conv 1 × 1 layer is a convolutional layer that utilizes a filter size of 1 × 1. It merges the feature maps produced by the attention units with the initial input feature map, which aids in capturing both local and global characteristics. The adder conducts element-wise addition of the feature maps to amplify the crucial features further. The output layer produces the ultimate feature map for diverse applications, including image classification, object detection, and semantic segmentation. The output feature map is an enhanced representation of the original input image, containing essential features that assist decision-making for different computer vision tasks. Integrating feature maps, channel and spatial attention units, and convolutional and adder layers collaboratively enhances the quality and significance of the feature maps, resulting in improved performance across various computer vision tasks.
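
A companion sketch of the spatial attention unit and the concatenate/adder fusion is given below, again with assumed channel counts; the channel and spatial attention modules are passed in so the fusion block composes with the channel-attention sketch above.

```python
# Hedged PyTorch sketch of the spatial attention unit (two 1x1 convolutions with ReLU
# and sigmoid, no pooling, producing a 1 x H x W weight map) and the fusion step
# (concatenation, 1x1 convolution, element-wise addition with the input).
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1),  # conv 1x1
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),         # conv 1x1 -> single channel
            nn.Sigmoid(),                                       # 1 x H x W spatial weights
        )

    def forward(self, x):
        return x * self.attn(x)

class AttentionFusion(nn.Module):
    """Concatenate two attended maps, merge with a 1x1 conv, and add the input (adder)."""
    def __init__(self, channels: int, channel_att: nn.Module, spatial_att: nn.Module):
        super().__init__()
        self.channel_att = channel_att          # e.g. the ChannelAttention sketch above
        self.spatial_att = spatial_att
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        fused = self.merge(torch.cat([self.channel_att(x), self.spatial_att(x)], dim=1))
        return fused + x                        # element-wise addition with the input map
```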

Proposed model

The input image serves as the initial input to the network and often consists of a digital image depicting a field or farmland that requires segmentation. The image is subsequently fed into the encoder module, which has multiple layers of neural networks. The initial layer consists of a compact inception network that employs diverse filter sizes to capture characteristics from the input image at many scales. Next is a convolutional layer (Conv_1), which performs feature extraction by utilizing a collection of adaptable filters. Following the Conv_1 layer, a max-pooling layer is used to decrease the resolution of the feature maps and diminish the spatial dimensions of the image representation. Subsequently, a dense-inception-block-1 is employed, comprising several dense-inception units that feature extraction utilizing diverse filter sizes and merging the results. The results obtained from the dense-inception-block-1 are passed via a dense block-1, which comprises a series of consecutive convolutional layers, followed by a transition layer-1 that decreases the size of the feature maps. Subsequently, the output generated by the transition layer-1 is sent into an inception unit-1, which employs parallel convolutions with varying kernel sizes to extract features across numerous scales. Subsequently, a dense-inception block-2 and a dense block-2 execute analogous operations to their initial blocks. The results generated by these blocks are later passed through a transition layer-2 and an inception unit-2 for further processing. The procedure above is iterated twice, utilizing the dense-inception block-3 and dense block-3 consecutively, followed by a transition layer-3 and an inception unit-3. The purpose of these layers is to extract additional information from the input image at various scales and decrease the spatial size of the image representation. Following the dense-inception-block-3 and dense block-3, a spatial pyramid module is employed. This module conducts pooling operations at various sub-sampling rates (Rate 2, 4, 8, 16) to extract features from different sizes of the feature maps. This process enables the extraction of global and local contextual information from the input image. The construction of the proposed block diagram is illustrated in Fig. 3.

Fig. 3 Proposed block diagram

The spatial pyramid module's output is subsequently transmitted to the decoder module, comprising multiple layers that execute sampling and concatenation operations. The spatial pyramid module is initially replicated, followed by a deconvolutional layer (Deconv 2 × 2) and a concatenation operation. This concatenation operation merges the feature maps from the spatial pyramid module with the equivalent feature maps from the encoder module. This procedure is iterated twice, wherein the identical actions are executed on the outputs derived from the preceding decoder layers. The CnSAU layers, which consist of convolution, sub-pixel shuffling, activation, and up-sampling, contribute to the further enhancement of the feature maps. The output from the previous CnSAU layer undergoes processing through a convolutional layer with a 3 × 3 kernel and a softmax layer. The softmax layer applies a normalization process to the output data, yielding a probability distribution representing the distinct classes. The map subsequently produces the ultimate output mask, comprising particular areas of weeds, crops, and other entities inside the input image. The primary objective of this network is to divide the input image precisely into several categories, enabling precision farming methods and enhancing total agricultural output.
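
The spatial pyramid module and a single decoder step can be sketched as follows. The pooling rates (2, 4, 8, 16) and the 2 × 2 deconvolution follow the description above, while the channel counts and the bilinear up-sampling used to realign the pooled maps are assumptions for illustration.

```python
# Hedged sketch of the spatial pyramid module and one decoder step
# (Deconv 2x2 plus concatenation with the matching encoder feature map).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPyramid(nn.Module):
    def __init__(self, channels: int, rates=(2, 4, 8, 16)):
        super().__init__()
        self.rates = rates
        self.project = nn.Conv2d(channels * (len(rates) + 1), channels, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [x]
        for r in self.rates:
            p = F.avg_pool2d(x, kernel_size=r, stride=r)        # sub-sample at rate r
            pooled.append(F.interpolate(p, size=(h, w), mode="bilinear", align_corners=False))
        return self.project(torch.cat(pooled, dim=1))           # merge multi-scale context

class DecoderStep(nn.Module):
    def __init__(self, in_channels: int, skip_channels: int, out_channels: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_channels, out_channels, kernel_size=2, stride=2)  # Deconv 2x2
        self.conv = nn.Conv2d(out_channels + skip_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([x, skip], dim=1)        # concatenate with the encoder feature map
        return F.relu(self.conv(x))
```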

Prediction

The role of the Input Embedding function is to transform the one-hot encoded input of the model into a compact, lower-dimensional vector representation. This is essential because it enables the model to acquire knowledge about the meaning and connections between words and represent them as numerical values. Utilizing this vector format allows the model to excel in text classification, machine translation, and language synthesis tasks. Positional Encoding is a vital feature in the suggested paradigm. Due to the absence of recurrence or convolution in the model, it lacks an innate understanding of word order. Positional Encoding addresses this issue by assigning a distinct positional embedding to each word in the input sequence. The embedding provides the model with information regarding the position of each word in the sequence, enabling it to collect contextual information effectively. The proposed model relies on multi-head attention as a fundamental component for capturing the connections among various words in the input sequence. This is achieved by calculating weighted combinations of the input embeddings and utilizing them to produce context-aware representations. The multi-head feature enables the model to focus on many segments of the input sequence, enhancing its capacity to comprehend interdependence and associations among words. The Add & Norm function is employed in the proposed model after each sub-layer, including the Multi-Head Attention and Feed Forward layers. This aims to incorporate the remaining connections from the input to the output of the sub-layer and standardize the resulting values. This feature aids in stabilizing the training process by mitigating the issue of gradient explosion or vanishing that may occur during the back propagation phase. The feed-forward function is a straightforward yet potent element of the proposed model, comprising two linear transformations separated by a ReLU activation function. This structure enables the model to learn intricate non-linear associations among words in the input sequence, leading to more precise forecasts. The Output Embedding function transforms the model's output back into the initial representation of the input sequence, ensuring output format consistency. The decoder section utilizes Positional Encoding and Masked Multi-Head Attention, selectively concealing specific locations in the input sequence to prevent focusing on future words. Figure 4 illustrates this prediction process, showcasing the various functions involved in generating the final output.

Fig. 4 Prediction process of proposed model

As noted above, the feed-forward blocks consist of two linear transformations separated by a ReLU activation, and the Output Embedding function is the inverse of the Input Embedding function, producing output in the same format as the input. The decoder component of the proposed model incorporates Multi-Head Attention over the encoder output, together with the Add & Norm and Feed Forward functions. This enables the model to leverage the information from the encoder and produce an output sequence that is contextually pertinent to the input sequence. The Linear function converts the decoder output into a vector whose size matches the vocabulary of the output sequence. This vector is then passed to the Softmax function, which transforms it into a probability distribution over the vocabulary, enabling the model to anticipate the words in the output sequence. Finally, the Output Probabilities function uses the probability distribution generated by the Softmax function to determine the most probable words in the output sequence, concluding the prediction process and yielding the final output of the proposed model.
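
The prediction components described above (embedding, positional encoding, multi-head attention with Add & Norm and feed-forward blocks, masked decoder attention, and the final linear/softmax stage) can be sketched with PyTorch's built-in Transformer module. The vocabulary size, model dimension, and layer counts below are illustrative assumptions.

```python
# Hedged sketch of the prediction head using a standard encoder-decoder Transformer.
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    def __init__(self, d_model: int, max_len: int = 512):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)     # sinusoidal position information
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        return x + self.pe[: x.size(1)]

class SequencePredictor(nn.Module):
    def __init__(self, vocab_size: int = 1000, d_model: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)          # input/output embedding
        self.pos = PositionalEncoding(d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)               # Linear layer before softmax

    def forward(self, src, tgt):
        # causal mask so the decoder cannot attend to future positions
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1)).to(tgt.device)
        h = self.transformer(self.pos(self.embed(src)), self.pos(self.embed(tgt)), tgt_mask=tgt_mask)
        return self.out(h)                     # logits; softmax yields output probabilities
```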

Classification

The suggested architecture takes a set of data points as input, which will be utilized for training the model. For the classification problem, the input usually consists of a collection of photos, where each image corresponds to a distinct class or category. In image classification, the input data consists of photos assigned specific labels corresponding to their respective categories, such as weeds, crops, and other classifications. The classification block serves as the fundamental structure of the model and is tasked with extracting significant features from the input data. The initial element of the classification block is the convolutional block, comprising convolutional layers that are subsequently followed by non-linear activation functions. These layers can acquire intricate patterns and characteristics from the input images. The results of the convolutional block are then fed into an adder, which conducts element-wise addition on the feature maps. Figure 5 illustrates the functions within this classification block, detailing the steps from the initial input to the final feature extraction.

Fig. 5 Classification

The adder's result is input into the subsequent convolutional block, where the procedure is reiterated, extracting additional features from the input data. This process iterates across numerous blocks, each acquiring progressively intricate features and patterns. After the input data has undergone all the convolutional blocks, it is fed into a dense layer. The purpose of this layer is to utilize the characteristics obtained from the convolutional blocks and assign them to the corresponding output classes using a set of adjustable weights. Ultimately, the output of the dense layer undergoes the softmax function, which standardizes the output values into probabilities. These probabilities then indicate the anticipated class for each input image. The Medusa attention block is an essential component of the model that integrates attention mechanisms into the categorization process. The attention block comprises convolutional layers, followed by a scale layer. The scale layer modifies the values of the feature maps obtained from the convolutional layer by either increasing or decreasing them, using a weight that has been learned. This enables the model to concentrate on particular portions of the image, assigning greater importance to pertinent characteristics for classification. The outputs generated by the scale layer are subsequently transmitted to the encoder-decoder, which integrates the characteristics acquired from the convolutional blocks with the attention mechanisms from the scale layers. This enables the model to develop more intricate and subtle representations of the input data. Ultimately, the output layer utilizes the amalgamated characteristics from the encoder-decoder to forecast the ultimate category for the input image. By integrating attention mechanisms, the model enhances its ability to differentiate between similar classes and generate more precise predictions (Fig. 6).

Fig. 6 Flowchart
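
A condensed PyTorch sketch of the classification block described above is given below: stacked convolutional blocks with an element-wise adder, a learned scale layer standing in for the attention weighting, and a dense layer with softmax over the assumed classes (weed, crop, other). Channel counts and class names are illustrative.

```python
# Hedged sketch of the classification block with residual addition and a learned scale.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)      # adder: element-wise addition

class ScaleLayer(nn.Module):
    """Learned per-channel scale that emphasizes relevant features."""
    def __init__(self, channels: int):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x):
        return x * self.weight

class Classifier(nn.Module):
    def __init__(self, in_channels: int = 3, channels: int = 32, num_classes: int = 3):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, channels, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(ConvBlock(channels), ConvBlock(channels))
        self.scale = ScaleLayer(channels)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, num_classes))   # dense layer
    def forward(self, x):
        logits = self.head(self.scale(self.blocks(self.stem(x))))
        return torch.softmax(logits, dim=1)      # class probabilities (weed, crop, other)
```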

For a rice farm, classification is crucial. Using data from sensors, the model classifies parts of the field into categories: healthy crops, crops under water stress, and regions with potential pest infestations. This segmentation helps the farmer understand which parts of the farm require different treatments. For example, one region may be flagged for potential pest treatment while another might need water adjustments, based on the model’s classification output.

Proposed algorithm

The agro-deep learning method is a sophisticated ML approach specifically developed to efficiently process and evaluate vast quantities of agricultural data. By leveraging deep learning and data analytics capabilities, this technology offers significant insights to farmers and agronomists. Data processing is a crucial aspect of agro-deep learning algorithms. The agricultural data is extensive and intricate, necessitating preprocessing before it can be analyzed. This algorithm can effectively process vast quantities of data, encompassing satellite imagery, weather data, soil data, and agricultural patterns. It can cleanse, arrange, and convert the data into an appropriate format for analysis. The agro-deep learning algorithm performs feature extraction as one of its functions. This procedure entails the identification of the essential characteristics within the data that can be utilized for making predictions. For instance, the algorithm can identify various crop varieties, assess their health, and determine the condition of the soil by analyzing satellite photographs. Additionally, it can retrieve temperature, precipitation, and humidity data from weather records. This facilitates the development of a broad array of characteristics that may be utilized for forecasting and decision-making. Following feature extraction, the subsequent role of the agro-deep learning algorithm is to train the model. The algorithm employs sophisticated neural networks to acquire knowledge from the data and discern patterns and correlations among the characteristics. The system undergoes a series of training stages to enhance its precision and effectiveness. As it acquires more data, its predictive abilities and ability to offer insights improve. The primary role of agro-deep learning algorithms is to predict and make decisions. After undergoing training, the model demonstrates a high level of accuracy in forecasting crop production, soil health, and the likelihood of insect infestations. It can offer valuable information regarding the timing and quantity of crop irrigation, strategies to enhance fertilization, and optimal timing for harvesting. These predictions are derived from the patterns and relationships identified during training. The final purpose of the agricultural deep learning method is to evaluate and enhance the model. The construction of the proposed algorithm is illustrated below,

figure a
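
The algorithm listing is provided as a figure above; as a rough illustration, the iterative loop it describes can also be sketched in plain Python. The helper routines below are placeholders for the paper's Initialize_P, SORT_CROP, and Analyze_Production steps, whose internals appear in the listing rather than in the text, so they are passed in as callables.

```python
# Hedged Python sketch of the iterative crop-optimization loop described in this section.
def agro_optimize(max_iterations, sort_interval, features,
                  initialize_population, fitness, analyze_production_level,
                  update_weed, update_crop, update_other):
    population = initialize_population()            # crops, weeds, other factors (P)
    scores = [fitness(ind) for ind in population]
    best_score = max(scores)

    a = 0
    while a < max_iterations:
        if a % sort_interval == 0:                  # every A iterations: sort by fitness
            order = sorted(range(len(population)), key=lambda i: scores[i], reverse=True)
            population = [population[i] for i in order]
            scores = [scores[i] for i in order]

        analyze_production_level(population)        # refresh production-level estimates

        for i, feat in enumerate(features):         # adjust each characteristic by category
            if feat["category"] == "WEED":
                features[i] = update_weed(feat)
            elif feat["category"] == "CROP":
                features[i] = update_crop(feat)
            else:
                features[i] = update_other(feat)

        scores = [fitness(ind) for ind in population]
        best_score = max(best_score, max(scores))   # keep the best fitness seen so far
        a += 1

    return best_score
```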

The function Initialize_P() is responsible for initializing the population of crops, weeds, and other features. The variable P represents this population and encompasses all elements of crop production, including weeds and other contributing factors. The Access_X function retrieves the crop group's fitness value, and the counter variable "a" is initialized to 0. The algorithm then employs a while loop that runs for the given maximum number of iterations. Inside the loop, it checks whether "a" is divisible by A and, if so, calls SORT_CROP on the fitness scores and on the population. This arranges the fitness ratings and the crop population according to their performance and production efficacy, enabling the most high-performing crops to be identified and prioritized. The Analyze_Production_Level function calculates the production levels of the current crop population; this data provides the updated fitness ratings and population for the next iteration. The algorithm then examines each characteristic in the crop population: it iterates over each characteristic (indexed by the variable i) from 1 to X and tests whether it falls under the WEED category. If it does, the corresponding value is adjusted accordingly, and the same procedure is repeated for the CROP and other categories. Once all features have been examined and updated, the algorithm retrieves the new fitness score and compares it to the previous one. If the new score surpasses the previous one, it replaces the final fitness score; otherwise, the previous fitness score is returned. The primary objective of this procedure is to enhance the crop population by repeatedly evaluating and adjusting the fitness scores and population according to performance, selecting and promoting the most productive crops while recognizing and mitigating issues such as weeds that could impede production. The conditional probability of an event given another event differs from the reverse conditional probability, as stated in Eq. 1.

$$ E\left( {{B \mathord{\left/ {\vphantom {B N}} \right. \kern-0pt} N}} \right) \ne E\left( {{N \mathord{\left/ {\vphantom {N B}} \right. \kern-0pt} B}} \right) $$
(1)
$$ E\left( {V\mid B} \right) = \frac{{E\left( V \right) \times E\left( {B\mid V} \right)}}{{E\left( B \right)}} $$
(2)
$$ d\left( {m,u} \right) = \sum\limits_{j = 1}^{y} {\left[ {u_{j} g_{j} \left( m \right) + o} \right]} $$
(3)

The equation can be stated as follows for the feature vectors and the crop prediction dataset class, respectively. The classification of image quality levels is a challenging scientific subject that falls under the umbrella of objective image quality evaluation. The following quality equations are mostly used to achieve automatic evaluation.

$$ G_{y,c} = \sum\limits_{i = 2}^{o} {\sum\limits_{g = 1}^{i - 1} {H_{ig} } \left( {e_{i} a_{g} + e_{g} a_{i} } \right)} F_{ig} $$
(4)

where G stands for the equation's coefficient and F represents variations of the assignment. RER denotes the relative edge response; the image must have the proper edge-shape properties in order to calculate RER, and there is no one-size-fits-all method for calculating the SNR. Crop disease and insect pest photos are assigned quality levels using this picture quality equation.

$$ o\left( m \right) = \frac{1}{{\sqrt {2\pi } }}\exp \left( { - \frac{{m^{2} }}{2}} \right) $$
(5)
$$ h_{t} = z_{t} \Theta h_{t - 1} + \left( {1 - v_{q} } \right)\Theta g_{q} $$
(6)
$$ w_{{\left( {{i \mathord{\left/ {\vphantom {i j}} \right. \kern-0pt} j}} \right)}} = u_{ji} S_{j} ,a_{i} = \sum\limits_{j} {b_{ji} w_{{\left( {{i \mathord{\left/ {\vphantom {i j}} \right. \kern-0pt} j}} \right)}} } $$
(7)

This method's deep feature extraction network topology and low computation efficiency make it unsuitable for the coarse quality level classification problem. It also makes it susceptible to overfitting because there is no random loss of connections.

$$ \ln \left( {\frac{{FI_{it} }}{{FI_{i,t - 1} }}} \right) = \alpha + \beta \ln FI_{i,t - 1} + v_{j} + \Im_{j} $$
(8)

Simultaneously, there are significant mistakes and challenges in the subjective quality label computation process. When employing this kind of network structure for quality level classification, the feed forward convolutional neural network structure based on supervised learning has a strong classification function.

$$ c_{ij} = \frac{{p^{{c_{ji} }} }}{{\sum\nolimits_{o} {p^{{c_{jo} }} } }} $$
(9)
$$ r = \frac{\alpha }{1 - \beta } $$
(10)

With data scale compression, the maximum pooling (Max_Pooling) method preserves the image's detailed properties as the data is reduced from large to small scales.

$$ o_{q1} \left[ j \right] = \sum\limits_{i} {\cos \left( {u_{j}^{1} ,u_{i}^{2} } \right)} $$
(11)

The quality classification DCNN architecture is represented by the formula above. The Conv layer, ReLU layer, and max-pooling layer are all shortened. Following conversion, the following formula can be used to express it.

$$ Gw = \sum\limits_{i = 1}^{k} {G_{jj} p_{j} s_{j} } $$
(12)

A suitable loss function is obtained as follows:

$$ d_{jh} = \int_{0}^{\infty } {dF_{h} \left( y \right)} \int_{0}^{y} {\left( {y - x} \right)dF_{j} \left( y \right)} $$
(13)

These are the conversion processes from the physical length unit to the pixel length unit and the mapping from three dimensions to two dimensions. The following is the transformation of the insect camera's internal parameters:

$$ G = \frac{{\sum\nolimits_{i = 1}^{k} {\sum\nolimits_{h = 1}^{k} {\sum\nolimits_{t = 1}^{n} {\sum\nolimits_{r = 1}^{{n_{th} }} {\left| {y_{ij} - y_{hr} } \right|} } } } }}{{2n^{2} u}} $$
(14)
$$ x^{1} \to \left( {w^{1} } \right) \to x^{2} \to x^{L - 1} \to \left( {w^{L - 1} } \right) $$
(15)

The first layer, designated as (w^1), is where the pre-processing parameters are represented as w^1. After convolution, x^2 represents the first layer's output, which serves as input for the layer that follows.

$$ conv\left( {I,K} \right)_{x,y} = \sum\limits_{i = 1}^{nH} {\sum\limits_{j = 1}^{nw} {\sum\limits_{k = 1}^{nc} {K_{i,j,k} I_{x + i - 1,y + j - 1,k} } } } $$
(16)
$$ \dim \left( {conv\left( {I,K} \right)} \right) = \left[ {\frac{{n_{H} + 2p - f}}{s} + 1} \right]\left[ {\frac{{n_{W} + 2p - f}}{s} + 1} \right];\quad s > 0 $$
(17)

After the convolution layer, pooling is applied:

$$ \dim \left( {conv\left( {I,K} \right)} \right) = \left( {n_{H} + 2p - f,\,n_{w} + 2p - f,\,n_{c} } \right);\quad s = 0 $$
(18)
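
As a quick worked check of the output-size formula in Eqs. (16)–(17), the helper below computes the spatial dimension for a given input size, padding, filter size, and stride.

```python
# Output spatial size of a convolution: floor((n + 2p - f) / s) + 1, as in Eq. (17).
def conv_output_size(n: int, p: int, f: int, s: int) -> int:
    return (n + 2 * p - f) // s + 1

# e.g. a 128x128 image with a 3x3 filter, padding 1, stride 1 -> 128x128 output;
# the same filter with stride 2 -> 64x64.
assert conv_output_size(128, 1, 3, 1) == 128
assert conv_output_size(128, 1, 3, 2) == 64
```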

A fully connected layer takes a vector a[i − 1] as input and outputs a vector a[i].

$$ z_{j}^{\left[ i \right]} = \sum\limits_{l = 1}^{{n_{i - 1} }} {w_{j,i}^{\left[ i \right]} a_{l}^{{\left[ {i - 1} \right]}} + b_{j}^{\left[ i \right]} \to a_{j}^{\left[ i \right]} = \varphi^{\left[ i \right]} \left( {z_{j}^{\left[ i \right]} } \right)} $$
(19)
$$ n_{i - 1} = n_{H}^{{\left[ {i - 1} \right]}} ,n_{W}^{{\left[ {i - 1} \right]}} ,n_{C}^{{\left[ {i - 1} \right]}} $$
(20)

Given the matching value for the input x^1, we compute the difference between the target t and the forecasted value x^L, which is mathematically stated as

$$ z = \frac{1}{2}\left\| {t - x^{L} } \right\|^{2} $$
(21)

After receiving the input volume, an activation layer applies the specified activation function (AF).

$$ \frac{1}{{1 + e^{ - x} }} $$
(22)
$$ \tanh \left( x \right) = \frac{{e^{x} - e^{ - x} }}{{e^{x} + e^{ - x} }} = 2 \cdot sigmoid\left( {2x} \right) - 1 $$
(23)
$$ A\left( x \right) = x\quad {\text{if}}\;x \ge 0,\;{\text{otherwise}}\;0 $$
(24)
$$ f\left( x \right) = x;\quad {\text{if}}\;x \ge 0 $$
(25)
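
For reference, the activation functions in Eqs. (22)–(25) can be written directly in NumPy, with the tanh in Eq. (23) expressed through the sigmoid identity.

```python
# Direct NumPy implementations of the activation functions in Eqs. (22)-(25).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))              # Eq. (22)

def tanh(x):
    return 2.0 * sigmoid(2.0 * x) - 1.0          # Eq. (23): tanh(x) = 2*sigmoid(2x) - 1

def relu(x):
    return np.where(x >= 0, x, 0.0)              # Eqs. (24)-(25): x if x >= 0, else 0

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x), tanh(x), relu(x))
```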

By reducing the number of network parameters, this step reduces the spatial dimension and the number of convolutional outputs, giving more control over fitting. Dropout is employed because we want to directly modify the network architecture during training in order to reduce overfitting.

$$ E\left[ {\frac{{\partial E_{D} }}{{\partial w_{i} }}} \right] = \frac{{\partial E_{N} }}{{\partial w_{i} }} + w_{i} p_{i} \left( {1 - p_{i} } \right)I_{i}^{2} $$
(26)

When establishing a neural network, hyperparameters are extremely important. These are particular values that regulate the network's learning process.

$$ \eta_{n + 1} = \frac{{\eta_{n} }}{{1 + d_{n} }} $$
(27)
$$ \eta_{n} = \eta_{0} d^{{\left[ {1 + n/r} \right]}} $$
(28)

This indicates that the learning rate decreases every r episodes (for example, every ten). The formula for an exponential learning-rate scheduler is

$$ \eta_{n} = \eta_{0} e^{ - dn} $$
(29)
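
The three scheduler types in Eqs. (27)-(29) can be sketched as simple Python functions; the initial rate, decay factor, and drop interval below are example values only.

import numpy as np

def time_based_decay(eta0, d, n):
    return eta0 / (1.0 + d * n)               # cf. Eq. (27)

def step_decay(eta0, d, r, n):
    return eta0 * d ** np.floor(1 + n / r)    # cf. Eq. (28), drop every r episodes

def exponential_decay(eta0, d, n):
    return eta0 * np.exp(-d * n)              # Eq. (29)

for n in range(0, 31, 10):
    print(n, step_decay(eta0=0.01, d=0.5, r=10, n=n))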

This algorithm continuously learns and adapts to new data, which helps improve its accuracy over time. It can also evaluate its performance and adjust its parameters to improve its predictions.

We have described the technical methodology used to construct the ADLF from start to finish: data collection, preprocessing, and model development. These steps ensure precise crop predictions and optimized agricultural practice. We now proceed with the evaluation of the proposed framework. The next section presents the results of applying the model to real-world data and discusses the key performance metrics used (accuracy, precision, and recall) and how they help improve crop yield prediction.

Novelty of the work

In this work, we propose several key features that establish a novel ADLF that differs significantly from conventional precision-agriculture models. First, it integrates multi-source data, such as satellite imagery, IoT sensor readings, and environmental data, which provides a comprehensive and accurate analysis of farmland. Second, unlike other models, it uses a spatial pyramid module to capture global and local contextual information, providing accurate segmentation of images in highly complex settings such as agricultural environments. Dense-inception blocks are added to the framework to improve feature extraction with multiple filter sizes, making the model better at identifying crop health and field conditions in detail. Additionally, the use of CnSAU layers (convolution, sub-pixel shuffling, activation, up-sampling) enables high-resolution outputs with computational efficiency, allowing the model to scale and adapt to both small and large farms. Built on a hybrid architecture in which deep learning is combined with optimization approaches, the system not only predicts crop yields but also makes real-time resource-optimization suggestions (e.g., water or fertilizer use). Finally, the model is designed for edge-computing compatibility so that it can be deployed in environments where continuous internet connectivity is not available, allowing it to be used by farmers in diverse geographic and economic contexts. These innovations represent significant advances in precision agriculture through increased accuracy, flexibility, and real-world applicability.
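
Since the CnSAU layers are described here only at the architectural level, the following PyTorch sketch shows one plausible form such a block could take; the channel counts, kernel size, and upscale factor are illustrative assumptions rather than the exact configuration used in the ADLF.

import torch
import torch.nn as nn

class CnSAUBlock(nn.Module):
    # Hypothetical convolution -> sub-pixel shuffling -> activation block; the
    # pixel shuffle performs the up-sampling step of the CnSAU description.
    def __init__(self, in_channels=64, out_channels=64, upscale=2):
        super().__init__()
        # The convolution emits upscale**2 times the target channels so that
        # PixelShuffle can rearrange them into a (H*upscale, W*upscale) map.
        self.conv = nn.Conv2d(in_channels, out_channels * upscale ** 2,
                              kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(upscale)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.shuffle(self.conv(x)))

x = torch.randn(1, 64, 32, 32)        # e.g. a 32 x 32 feature map
print(CnSAUBlock()(x).shape)          # torch.Size([1, 64, 64, 64])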

Results and discussion

Real-world agricultural data were used to test the ADLF's effectiveness at predicting crop yield and improving farming practices. The results in this section show the model's performance across major metrics such as accuracy, precision, recall, and F1-score. These metrics provide insight into the model's capability to assist farmers in making informed decisions, reducing resource wastage, and improving crop management.

The proposed ADLF is implemented using Jupyter Notebook, a popular tool for interactive computing and data analysis. The choice of Jupyter Notebook allows for an iterative development process, where code can be executed in cells, facilitating real-time experimentation and visualization of results. The programming language used for this implementation is Python, which is widely recognized for its extensive libraries and frameworks that are well-suited for ML and deep learning tasks. Python’s libraries such as TensorFlow, PyTorch, Pandas, and NumPy play a crucial role in developing, training, and evaluating the deep learning models within the ADLF framework.

The development environment is configured with a Windows operating system, running on an Intel® Core™ Ultra 7 Processor 155UL. This processor, with its 12 M Cache and ability to reach speeds up to 4.80 GHz, provides the necessary computational power to handle intensive data processing and model training tasks efficiently. The system is equipped with 8 GB of RAM, which supports multitasking and smooth operation of the Jupyter Notebook environment and associated libraries. Overall, the combination of Jupyter Notebook, Python, and the specified hardware configuration provides a robust environment for developing and testing the ADLF model, ensuring efficient processing and analysis of agricultural data.

The performance of the proposed ADLF has been compared with the existing Deep Learning-based Computer Vision Approach (DLCVA), Improved Agro Deep Learning Model (IADLM), Deep Learning-Based Optimization (DLBO), and Improved Deep Learning-Based Classifier (IDLBC). The crop yield prediction dataset [47] is used to simulate the results, and a Python simulator is the tool used to execute them. The dataset used in this study consists of a total of 56,717 data points, which have been split into training and testing sets using an 80:20 ratio. Thus, 80% of the data points (45,373) are used for training the model and 20% of the data points (11,344) are used to test its performance. Such a split provides ample data for training while keeping a separate testing set to assess the model's accuracy and generalization capability. Finally, to assess the robustness and generalizability of the proposed model, we used a fivefold cross-validation technique: the dataset was randomly partitioned into five equal subsets, and in each iteration a model was trained on four subsets and validated on the remaining one. We repeated this five times, using a different subset for validation in each iteration, and averaged the results across the folds to obtain a solid estimate of model performance. This cross-validation helps prevent overfitting, ensuring the model is not unduly reliant on any one segment of the data and will generalize well to other datasets.
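
The 80:20 split and fivefold cross-validation described above can be reproduced with scikit-learn as sketched below; the file name and target column are placeholders for the Kaggle crop yield prediction dataset [47].

import pandas as pd
from sklearn.model_selection import train_test_split, KFold

df = pd.read_csv("yield_df.csv")                        # hypothetical file name
X = df.drop(columns=["yield"])                          # hypothetical target column
y = df["yield"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42)              # roughly 45,373 / 11,344 points

kf = KFold(n_splits=5, shuffle=True, random_state=42)   # fivefold cross-validation
for fold, (train_idx, val_idx) in enumerate(kf.split(X_train), start=1):
    X_tr, X_val = X_train.iloc[train_idx], X_train.iloc[val_idx]
    y_tr, y_val = y_train.iloc[train_idx], y_train.iloc[val_idx]
    # train the model on (X_tr, y_tr) and validate on (X_val, y_val) here
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} validation rows")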

Computation of accuracy

Accuracy in Crop Production is computed by comparing the predicted yield generated by the ADLF to the actual yield data collected from the field. This comparison is done using a percentage calculation:

$$ {\text{Accuracy}} = \left( {{\text{Actual Yield}} - {\text{Predicted Yield}}} \right)/{\text{Actual Yield}} \times 100 $$
(30)

This expresses the deviation of the predicted yield from the actual yield as a percentage, giving an estimate of prediction accuracy for crop production. The ADLF leverages various techniques such as remote sensing, weather data, and crop health monitoring systems to generate predictions based on historical and real-time data. This enables more accurate and timely predictions, leading to improved decision-making for farmers and ultimately better crop production. Continuous evaluation and refinement of the framework further enhance the accuracy of forecasts over time. Table 2 shows the comparison of accuracy for various models on different inputs.
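
Before turning to the comparison, the percentage computation of Eq. (30) can be written as a short function; the yield values in the example are hypothetical.

def yield_accuracy(actual_yield: float, predicted_yield: float) -> float:
    # Eq. (30) exactly as stated: deviation relative to the actual yield, in percent.
    return (actual_yield - predicted_yield) / actual_yield * 100

# Hypothetical example: 3,800 kg/ha predicted against 4,000 kg/ha observed.
print(yield_accuracy(actual_yield=4000, predicted_yield=3800))  # 5.0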

Table 2 Comparison of accuracy for various models on different inputs

We compare the accuracy, as shown in Fig. 7. We chose the baseline models based on their technical strengths in crop yield prediction, farm management optimization, and relevance in precision agriculture. For this, we selected convolutional neural networks (CNNs) for tasks such as crop health monitoring, which DLCVA performs. Among others, IADLM demonstrates how multi-source data and optimization can enhance accuracy. The DLBO method integrates deep learning into optimization, making it suitable for evaluating performance in agricultural applications. Since this benchmark compares classification accuracy, IDLBC is well-suited for classification tasks. We included non-deep learning models such as Random Forest (RF) and Support Vector Machine (SVM) to provide a broader comparison. By comparing the models across varying input sizes, we show that the proposed ADLF model consistently outperforms all others, especially with larger input sizes. For instance, the proposed ADLF model achieved the highest performance with an accuracy of 92.44, which significantly exceeds that of other models, including DLCVA (87.02), DLBO (77.60), and traditional ML models like RF and SVM, which have accuracies of 84.21 and 86.39, respectively. The same trend occurs as the number of inputs increases. The ADLF model remains dominant at 200 inputs, with a score of 90.45, compared to DLCVA's 85.53 and RF's 83.78. SVM and RF show consistency but are far less successful than deep learning models across all input sizes. When the number of inputs exceeds 700, the ADLF model still leads with 85.41, while other models, including DLCVA (80.09), DLBO (66.87), and IADLM (61.32), perform progressively worse.

Fig. 7 No. of inputs vs accuracy comparison for various models

We conducted a t-test to evaluate the statistical significance of the proposed model's performance against the baseline models' results. These results show statistically significant differences in performance (p < 0.05), indicating that the improvements we observed with our model were not due to random chance. We found p-values below 0.05 for both the ADLF (proposed model) and DLCVA, confirming that the performance improvements are statistically significant and unlikely to be due to chance. This suggests that the proposed ADLF model provides a significant increase in prediction accuracy, precision, recall, and F1-score over the other models. Conversely, the p-values for the IADLM, DLBO, and IDLBC models are larger than 0.05, implying that their differences in performance are not statistically significant.
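
As an illustration of this significance test, the sketch below applies an independent two-sample t-test to per-fold accuracy scores using SciPy; the scores shown are hypothetical, not the values reported in the tables.

import numpy as np
from scipy import stats

adlf_scores = np.array([85.1, 85.6, 85.4, 85.9, 85.0])    # proposed model, 5 folds (illustrative)
dlcva_scores = np.array([80.2, 79.8, 80.5, 79.9, 80.1])   # a baseline, 5 folds (illustrative)

t_stat, p_value = stats.ttest_ind(adlf_scores, dlcva_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference is statistically significant at the 5% level")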

Computation of precision

Precision for crop production in an agro-deep learning framework refers to the accuracy of the crop yield prediction model. It is computed by dividing the number of correctly predicted crop yield values by the total number of predicted values. This metric evaluates how precise the model is in identifying the correct yield values, which directly impacts the accuracy of crop production estimates. To compute precision, the model compares the predicted values to the actual crop yield values from historical data. A higher precision score indicates a more accurate model, which can be used to make informed decisions and optimize crop production strategies. It is a critical evaluation metric for monitoring and improving the performance of crop production models. Table 3 shows the comparison of precision for various models on different inputs.

Table 3 Comparison of precision for various models on different inputs

Figure 8 shows the comparison of precision. In this comparison, the existing DLCVA obtained 68.32%, IADLM reached 61.29%, DLBO reached 67.86%, and IDLBC obtained 63.84% precision, whereas the proposed ADLF reached 84.87%. The proposed framework uses a CNN to extract meaningful features from the input data, such as satellite images and crop yield data, and sequentially models crop growth patterns to capture temporal dependencies effectively. The pre-trained models were fine-tuned for the specific task of crop production, and combining these techniques leads to a more accurate and robust prediction.

Fig. 8 No. of inputs vs precision comparison for various models

Computation of recall

Recall is a performance metric used to evaluate the effectiveness of the agro-deep learning framework for crop production. It measures the proportion of relevant data points the model correctly identifies. In crop production, recall is computed by dividing the number of correctly predicted crop yield values by the total number of actual crop yield values in the dataset. This computation considers both true positive and false negative predictions, where a true positive is a correctly identified relevant data point and a false negative is a missed relevant data point. This provides a comprehensive understanding of the model's ability to accurately predict crop yields, which is crucial for efficient and successful crop production. Table 4 shows the comparison of recall for various models on different inputs.

Table 4 Comparison of recall for various models on different inputs

Figure 9 shows the comparison of recall. In this comparison, the existing DLCVA obtained 66.52%, IADLM reached 60.35%, DLBO reached 68.17%, and IDLBC obtained 64.91% recall, whereas the proposed ADLF reached 84.24%. The framework integrates CNNs with a clustering algorithm to identify regions of interest in the images, which are then used to generate training data. This data is used to train the deep learning model, which can recognize and classify different crops accurately. A pre-trained model is also fine-tuned for specific crop types, further improving recall performance.

Fig. 9 No. of inputs vs recall comparison for various models

Computation of F1-score

The F1-score is a metric commonly used to evaluate the performance of classification models in ML. It is calculated from precision and recall values, which measure the accuracy and completeness of the model's predictions. Precision refers to the percentage of correctly predicted crop types out of all predicted ones, whereas recall refers to the rate of correctly predicted crop types out of all actual crop types. The F1-score is then calculated as the harmonic mean of precision and recall, providing a combined and balanced measure of the model's performance in accurately predicting crop types. Table 5 shows the comparison of F1-score for various models on different inputs.
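
For completeness, the precision, recall, and F1-score used throughout this section can be computed with scikit-learn as sketched below; the crop-type labels are hypothetical.

from sklearn.metrics import precision_score, recall_score, f1_score

y_true = ["wheat", "rice", "maize", "wheat", "rice", "maize", "wheat", "rice"]
y_pred = ["wheat", "rice", "wheat", "wheat", "maize", "maize", "wheat", "rice"]

precision = precision_score(y_true, y_pred, average="macro")
recall = recall_score(y_true, y_pred, average="macro")
f1 = f1_score(y_true, y_pred, average="macro")   # per-class harmonic mean, averaged
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")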

Table 5 Comparison of F1-score for various models on different inputs

Figure 10 shows the comparison of F1-score. In this comparison, the existing DLCVA obtained 84.32%, IADLM reached 72.50%, DLBO reached 79.03%, and IDLBC obtained 78.45% F1-score, whereas the proposed ADLF reached 88.91%. The proposed framework collects a large amount of data related to crop production, such as weather patterns, soil conditions, and historical yields. This data is pre-processed and fed into a deep learning model, which uses various algorithms to learn patterns and make accurate predictions. The framework also utilizes transfer learning and data augmentation techniques to improve its performance and employs feedback mechanisms to continuously refine and update its predictions. This constant improvement results in a high F1-score for crop production prediction.

Fig. 10 No. of inputs vs F1-score comparison for various models

Computation of false negative rate

The false negative rate measures the percentage of incorrect classifications made by the model, specifically when a crop is present but is predicted as absent. It is calculated by dividing the total number of false negative predictions by the total number of positive cases in the dataset. To reduce this rate, the deep learning framework uses backpropagation, in which the network weights are adjusted based on the difference between the predicted and actual outputs. This allows the model to continuously improve and reduce the false negative rate, leading to more accurate predictions and better crop production results.

Table 6 and Fig. 11 present a comparison of the False Negative Rate (FNR) across different models with varying numbers of inputs. The models evaluated include DLCVA, IADLM, DLBO, IDLBC, and ADLF. As the number of inputs increases from 100 to 700, the FNR generally increases for all models, indicating a trend towards higher false negative rates with more data. Among the models, ADLF consistently demonstrates the lowest FNR across all input sizes, suggesting superior performance in minimizing false negatives compared to the others. In contrast, DLCVA exhibits the highest FNR, highlighting potential limitations in its ability to handle larger datasets effectively.

Table 6 Comparison of false negative rate for various models on different inputs
Fig. 11 No. of inputs vs false negative rate comparison for various models

Computation of false positive rate

The false positive rate is calculated by comparing the number of incorrect positive predictions made by the model to the total number of negative cases. This is done by dividing the number of false positives (cases predicted as positive that are actually negative) by the sum of false positives and true negatives (cases correctly predicted as negative). This calculation measures the model's ability to accurately identify negative cases, which is essential for crop production because it helps reduce unnecessary interventions in regions where they are not needed. The lower the false positive rate, the more reliable and efficient the ADLF is at predicting crop production accurately.
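
Both error rates follow directly from a confusion matrix, as the short sketch below illustrates for a hypothetical binary issue-present/issue-absent labelling.

from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]   # 1 = crop issue actually present (illustrative)
y_pred = [1, 0, 0, 1, 1, 0, 1, 0, 1, 0]   # model predictions (illustrative)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
fnr = fn / (fn + tp)   # missed issues among all actual positives
fpr = fp / (fp + tn)   # false alarms among all actual negatives
print(f"FNR={fnr:.2f} FPR={fpr:.2f}")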

Table 7 and Fig. 12 compare the False Positive Rate (FPR) across various models with different input sizes. The models include DLCVA, IADLM, DLBO, IDLBC, and ADLF. As the number of inputs increases from 100 to 700, all models exhibit a rising trend in FPR. Among the models, ADLF consistently shows the lowest FPR, indicating its superior performance in reducing false positives compared to the others. Conversely, DLCVA demonstrates the highest FPR across all input sizes, suggesting it may be less effective in managing false positives as the dataset size grows. This variation highlights the differences in model robustness and accuracy. Table 8 shows the overall performance comparison of various models.

Table 7 Comparison of false positive rate for various models on different inputs
Fig. 12 No. of inputs vs false positive rate comparison for various models

Table 8 Overall performance comparison of various models

Figure 13 shows the overall performance comparison between the existing models and the proposed model. Overall, the proposed model achieved 85.41% accuracy, 84.87% precision, 84.24% recall, 88.91% F1-score, a 91.17% false negative rate, and an 89.82% false positive rate. The proposed framework analyzes large agricultural datasets covering soil composition, climate, and crop yield. By accurately extracting high-level features from the data, the framework can build a predictive model to forecast crop production based on various factors. Additionally, the framework incorporates transfer learning, in which models pre-trained on other domains are fine-tuned on the specific agricultural data, leading to improved results. Deep learning allows for a more comprehensive data analysis, capturing complex relationships and patterns that may not be apparent with traditional statistical methods. The result is a more accurate and robust predictive model for crop production. Table 9 shows the comparison of the proposed system across different aspects.

Fig. 13 Overall performance comparison of various models

Table 9 Comparison across various aspects

To address the practical challenges of integrating the proposed ADLF with existing farm management systems, it is essential to explore specific integration strategies. Integrating the ADLF with current systems can significantly enhance its practical relevance and adoption by farmers. Data exchange and interoperability are crucial, involving the development of APIs to facilitate seamless communication between the ADLF and existing farm management systems. Employing standardized data formats such as CSV or JSON will ensure compatibility and smooth integration. Real-time data integration can be achieved by connecting the ADLF with IoT sensors and data-streaming platforms, enabling continuous analysis and timely insights. Improving user accessibility involves creating a unified dashboard that combines data from the ADLF and existing systems into a comprehensive view. This dashboard should be accessible through mobile and web applications, allowing farmers to interact with the system from various devices. Automated alerts and recommendations based on the ADLF's analysis should be incorporated into existing workflows to enhance decision-making efficiency. Additionally, training programs and technical support are essential to help users understand and utilize the integrated system effectively.
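
As a hedged illustration of the CSV/JSON data exchange described above, the sketch below reads sensor readings exported by a farm management system and writes predictions as JSON for a dashboard; the file names, columns, and predict() helper are placeholders, not an existing ADLF API.

import json
import pandas as pd

readings = pd.read_csv("field_sensor_readings.csv")   # e.g. soil_moisture, temperature, humidity

def predict(row):
    # Placeholder for the trained model's inference call on one field's readings.
    return {"predicted_yield": 0.0, "irrigation_advice": "none"}

payload = [{"field_id": r["field_id"], **predict(r)} for _, r in readings.iterrows()]
with open("adlf_predictions.json", "w") as fh:
    json.dump(payload, fh, indent=2)                   # consumed by the dashboard or API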

We also considered the practical challenges associated with deploying the proposed ADLF. Dependence on uninterrupted network connectivity in rural and remote agricultural zones is a major issue. This can be mitigated by implementing offline data collection and processing through edge computing devices, so that the system performs well even without internet access or good network coverage. Another major challenge is the large upfront investment required to deploy deep learning models and the associated hardware. We propose a phased approach to make these costs more manageable: starting with a small-scale pilot and expanding as confidence grows and further resources become available. Government subsidies and collaboration with technology providers can also help reduce the cost. Ongoing maintenance and updates to the system are laborious; to ease that burden, we suggest establishing regular maintenance schedules and using automated resolution tools. Training local staff in the system and its management will also support long-term sustainability and reduce reliance on outside experts. By tackling these real-world environmental and practical implementation constraints, we aim to present a holistic solution for applying the proposed framework in agricultural practice and to provide useful guidance for practitioners in this sector.

We incorporate feature importance analysis and visual explanations so that the model's predictions become more interpretable. For example, farmers may receive a ranked list of the most influential factors (such as soil moisture levels, temperature, or pest prevalence) for a predicted crop yield or health status. Farmers can then see how these variables are affecting their crops and take appropriate action. Beyond this, the model's output is delivered through a user-friendly dashboard that translates complex data into clear visuals and useful insights. Predictions are presented as easy-to-read charts and graphs showing trends in crop yield, areas of potential risk (e.g., likelihood of pest infestation), and recommendations for actions such as irrigation or fertilization. These visuals give farmers an immediate understanding of the outcomes so they can decide how to respond without having to rely heavily on deep technical knowledge. To strengthen practical applicability, we illustrate how model predictions are communicated to farmers: a farmer might receive an alert that, given the current moisture level and projected weather, the crop will begin to suffer water stress in the next week, together with a suggestion to irrigate. Such actionable information bridges the gap between model predictions and real-world farming decisions. By integrating these interpretability measures, we present the model's outputs in a form that is not only accurate but also meaningful and practical, allowing farmers to make well-timed and effective decisions.

Scalability across different farming contexts

The ADLF is flexible and scalable, allowing it to be applied to small and large farming operations as well as to different crop types. The model can be fine-tuned for small-scale farms where farmers may grow only one or two crop types; in these scenarios it can inform optimized use of resources such as water or fertilizer based on local data. For large-scale farms, the framework can leverage data from multiple sources, such as satellite images, IoT sensors, and drones, to monitor a large area and provide high-quality insights across different crop types and zones. Advanced analytics can help large farms detect patterns in pest and disease outbreaks and put preventative measures in place. Case studies highlight the promise of deep learning frameworks to significantly improve crop management, yield prediction, and resource optimization in precision-agriculture deployments such as large maize farms in the United States or small rice farms in Southeast Asia.

Computational resources and challenges in resource-constrained areas

In implementing this framework, the required computational resources are a key consideration in areas with limited access to high-end technology and internet infrastructure. Although the model is efficient, it still requires substantial processing power for training and inference, which can hinder adoption by farmers in remote or resource-constrained areas. Where sufficient computational resources are available, cloud-based platforms can support real-time processing and data analysis; however, in areas lacking connectivity or access to technology, this may not be possible. To address these challenges, we propose the following strategies to make the framework more accessible to farmers with limited resources:

  • Edge computing solutions The framework is designed so that it does not have to rely on cloud-based platforms; instead, it can be deployed on edge computing devices, such as low-cost microprocessors or local servers, that process the data locally without a continuous internet connection. This lets farmers take advantage of the model's predictions in real time even without internet access (see the sketch after this list).

  • Simplified mobile applications The framework can be incorporated into lightweight mobile applications running on smallholder farmers' mobile phones, delivering simplified outputs and recommendations. These apps could run offline using preloaded data and simple decision-making models derived from the full model, and sync with cloud-based systems for updates and enhanced functionality when internet access is available.

  • Collaborations with local agriculture centers Partnerships with local agricultural extension centers or cooperatives can serve as hubs that run the model on behalf of farmers, so that farmers with limited access to technology can also benefit. The deep learning model can process the data collected by these centers and provide actionable insights to the farmers. This approach has been successfully implemented in precision agriculture projects in regions of Africa and South Asia.
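
As one concrete option for the edge computing solution above, a trained model exported to TensorFlow Lite can run offline on a low-cost device; the model file and input shape in this sketch are placeholders.

import numpy as np
import tensorflow as tf

# Hypothetical offline inference on an edge device using a TensorFlow Lite
# export of the trained model; the file name is a placeholder.
interpreter = tf.lite.Interpreter(model_path="adlf_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# One locally collected sensor or image sample, shaped to match the model input.
sample = np.zeros(input_details[0]["shape"], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()

prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)   # used locally, no internet connection required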

Model limitations

The main limitation of the proposed model is its dependence on large volumes of high-quality data for training and validation. Such data are often unavailable or inconsistent in real-world applications, especially in developing regions. For instance, poorly maintained soil records in a remote area or inconsistent weather data can result in inaccurate model predictions. Another limitation is that training the model on a small or very specific dataset risks overfitting, where the model performs well on the training data but poorly on new, unseen data. We mitigated this with techniques such as cross-validation and regularization, although this remains an area that could be further optimized. The model also requires real-time data updates to stay accurate over time: since crop conditions and environmental factors can vary radically among regions and over time, the model must be frequently updated with new data or it risks becoming outdated and unreliable. Future enhancements could explore unsupervised learning or transfer learning to reduce the model's dependence on large, high-quality datasets.

Although the model has relatively low false positive and false negative rates, the impact of such errors in a real-world agricultural environment deserves further discussion. A false negative occurs when the model underestimates the crop yield or fails to detect a disease or pest infestation; a farmer could then miss important intervention windows, leading to unexpected yield loss or damage. In these cases, farmers might need more frequent manual crop inspections or complementary decision-support tools to make sure problems are spotted in time. Conversely, a false positive, where the model overestimates crop health or yield, could trigger unnecessary actions: overly optimistic predictions can lead to overwatering, over-fertilization, or excessive pesticide use, wasting resources and potentially harming the environment. To address this, we suggest building a system of alerts with confidence levels for each prediction, allowing farmers to weigh the risk before acting. Integrating additional real-time data sources, such as sensor-based field monitoring, could also verify predictions before action is taken.
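
The confidence-gated alerting idea can be sketched as follows; the thresholds, messages, and prediction fields are illustrative assumptions rather than part of the implemented system.

def build_alert(prediction: dict, confidence_threshold: float = 0.8) -> str:
    # Only issue a firm alert when the model's confidence clears the threshold;
    # otherwise ask the farmer to verify with field sensors or an inspection.
    risk, confidence = prediction["risk"], prediction["confidence"]
    if confidence < confidence_threshold:
        return f"Low-confidence signal ({confidence:.0%}): verify '{risk}' with field sensors."
    return f"Alert ({confidence:.0%} confidence): {risk} expected - action recommended."

print(build_alert({"risk": "water stress within 7 days", "confidence": 0.91}))
print(build_alert({"risk": "pest infestation", "confidence": 0.55}))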

Conclusion

The proposed framework integrates deep learning with precision agriculture to enhance crop production by providing a more accurate understanding of crop growth patterns, nutrient deficiencies, and potential threats such as pests and diseases. This approach leverages advanced algorithms and techniques to analyze large volumes of data, which are then used to create predictive models for forecasting crop yields and detecting potential crop issues. The ADLF demonstrated promising results, achieving an accuracy of 85.41%, precision of 84.87%, recall of 84.24%, F1-Score of 88.91%, a false negative rate of 91.17%, and a false positive rate of 89.82%. By adopting this data-driven approach, the ADLF facilitates more efficient and sustainable farming practices. It allows farmers to make informed decisions, optimize resource use, and improve overall productivity. Future work should focus on improving the model's robustness to missing or incomplete data, a common challenge in real-world agricultural settings. One potential solution is to develop techniques such as data imputation or generative adversarial networks (GANs) to estimate and fill in missing values. Additionally, incorporating transfer learning would allow models pre-trained on larger datasets to adapt to smaller or incomplete datasets, making them more effective in regions with limited data availability. Future efforts should also explore real-time data integration from IoT devices and ensemble learning techniques to improve the model's adaptability and performance across varying data conditions.

Availability of data and materials

The real data we used in this paper are public data downloaded from this link: https://www.kaggle.com/datasets/patelris/crop-yield-prediction-dataset

References

  1. Lu B, Dao PD, Liu J, He Y, Shang J. Recent advances of hyperspectral imaging technology and applications in agriculture. Remote Sens. 2020;12(16):2659.


  2. Pathmudi VR, Khatri N, Kumar S, Abdul-Qawy ASH, Vyas AK. A systematic review of IoT technologies and their constituents for smart and sustainable agriculture applications. Sci Afr. 2023;19:e01577.


  3. Rao EP, Rakesh V, Ramesh K. Big Data analytics and Artificial Intelligence methods for decision making in agriculture. Indian J Agron. 2021;66(5):279–87.


  4. Javaid M, Haleem A, Khan IH, Suman R. Understanding the potential applications of Artificial Intelligence in agriculture sector. Adv Agrochem. 2023;2(1):15–30.


  5. Shah SA, Lakho GM, Keerio HA, Sattar MN, Hussain G, Mehdi M, et al. Application of drone surveillance for advance agriculture monitoring by Android application using convolution neural network. Agronomy. 2023;13(7):1764.


  6. Kong J, Wang H, Yang C, Jin X, Zuo M, Zhang X. A spatial feature-enhanced attention neural network with high-order pooling representation for application in pest and disease recognition. Agriculture. 2022;12(4):500.


  7. Arrubla-Hoyos W, Ojeda-Beltrán A, Solano-Barliza A, Rambauth-Ibarra G, Barrios-Ulloa A, Cama-Pinto D, Manzano-Agugliaro F. Precision agriculture and sensor systems applications in Colombia through 5G networks. Sensors. 2022;22(19):7295.


  8. Sharma A, Jain A, Gupta P, Chowdary V. Machine learning applications for precision agriculture: a comprehensive review. IEEE Access. 2020;9:4843–73.


  9. Dhanya VG, Subeesh A, Kushwaha NL, Vishwakarma DK, Kumar TN, Ritika G, Singh AN. Deep learning based computer vision approaches for smart agricultural applications. Artif Intell Agric. 2022;6:211–29.


  10. Shafi U, Mumtaz R, García-Nieto J, Hassan SA, Zaidi SAR, Iqbal N. Precision agriculture techniques and practices: from considerations to applications. Sensors. 2019;19(17):3796.


  11. Javaid M, Haleem A, Singh RP, Suman R. Enhancing smart farming through the applications of agriculture 4.0 technologies. Int J Intell Netw. 2022;3:150–64.


  12. Segarra J, Buchaillot ML, Araus JL, Kefauver SC. Remote sensing for precision agriculture: Sentinel-2 improved features and applications. Agronomy. 2020;10(5):641.


  13. Khanal S, Fulton J, Shearer S. An overview of current and potential applications of thermal remote sensing in precision agriculture. Comput Electron Agric. 2017;139:22–32.


  14. Huang Y, Chen ZX, Tao YU, Huang XZ, Gu XF. Agricultural remote sensing big data: management and applications. J Integr Agric. 2018;17(9):1915–31.


  15. Sangeetha R, Logeshwaran J, Rocher J, Lloret J. An improved agro deep learning model for detection of panama wilts disease in banana leaves. AgriEngineering. 2023;5(2):660–79.


  16. Dutta PK, Mitra S. Application of agricultural drones and IoT to understand food supply chain during post COVID-19. In: Agricultural informatics: automation using the IoT and machine learning. 2021. p. 67–87.

  17. Khanh PT, Ngoc TTH, Pramanik S. Future of smart agriculture techniques and applications. In: Khang A, editor. Handbook of research on AI-equipped IoT applications in high-tech agriculture. Hershey: IGI Global; 2023. p. 365–78.


  18. Fahad M, Javid T, Beenish H, Siddiqui AA, Ahmed G. Extending ONTAgri with service-oriented architecture towards precision farming application. Sustainability. 2021;13(17):9801.


  19. Jouini O, Sethom K, Bouallegue R. The impact of the application of deep learning techniques with IoT in smart agriculture. In: 2023 international wireless communications and mobile computing (IWCMC). IEEE; 2023. p. 977–82.

  20. Tahir MN, Lan Y, Zhang Y, Wenjiang H, Wang Y, Naqvi SMZA. Application of unmanned aerial vehicles in precision agriculture. In: Precision agriculture. Academic Press; 2023. p. 55–70.

  21. Gopal SK, Mohammed AS, Saddi VR, Dhanasekaran S, Naruka MS. Investigate the role of machine learning in optimizing dynamic scaling strategies for cloud-based applications. In: 2024 2nd international conference on disruptive technologies (ICDT). IEEE; 2024. p. 543–8.

  22. Rufus NHA, Anand D, Rama RS, Kumar A, Vigneshwar AS. Evolutionary optimization with deep transfer learning for content based image retrieval in cloud environment. In: 2022 international conference on augmented intelligence and sustainable systems (ICAISS). IEEE; 2022. p. 826–831.

  23. Thamaraimanalan T, Mohankumar M, Dhanasekaran S. Experimental analysis of intelligent vehicle monitoring system using Internet of Things (IoT). EAI Endorsed Trans Energy Web. 2018; 169336.

  24. Mandala V, Senthilnathan T, Suganyadevi S, Gobhinat S, Selvaraj D, Dhanapal R. An optimized back propagation neural network for automated evaluation of health condition using sensor data. Meas Sens. 2023;29:100846.


  25. Dhanasekaran S, Mathiyalagan P, Rajeshwaran AM. Automatic segmentation of lung tumors using adaptive neuron-fuzzy inference system. Ann RSCB 202; 17468–83.

  26. Karthick Perumal V, Supriyaa T, Santhosh PR, Dhanasekaran S. CNN based plant disease identification using PYNQ FPGA. Syst Soft Comput. 2024;6:200088.


  27. Sakthipriya S, Naresh R. Precision agriculture based on convolutional neural network in rice production nutrient management using machine learning genetic algorithm. Eng Appl Artif Intell. 2024;130:107682.


  28. Choudhari A, Bhoyar DB, Badole WP. MFMDLYP: precision agriculture through multidomain feature engineering and multimodal deep learning for enhanced yield predictions. Int J Intell Syst Appl Eng. 2024;12(7s):589–602.


  29. Kumari J, Kumari K, Sinha A. Assessment of machine learning techniques for improving agriculture crop production. In: Holland B, Sinha K, editors. Handbook of research on innovative approaches to information technology in library and information science. Hershey: IGI Global; 2024. p. 303–22.


  30. Ahmed S, Basu N, Nicholson CE, Rutter SR, Marshall JR, Perry JJ, Dean JR. Use of machine learning for monitoring the growth stages of an agricultural crop. Sustain Food Technol. 2024;2(1):104–25.


  31. Rahu MA, Shaikh MM, Karim S, Chandio AF, Dahri SA, Soomro SA, Ali SM. An IoT and machine learning solutions for monitoring agricultural water quality: a robust framework. Mehran Univ Res J Eng Technol. 2024;43(1):192–205.


  32. Shwetabh K, Ambhaikar A. Smart health monitoring system of agricultural machines: deep learning-based optimization with IoT and AI. In: BIO web of conferences, vol 82. EDP Sciences; 2024. p. 05007.

  33. Dixit N, Arora R, Gupta D. Wheat crop disease detection and classification using machine learning. In: Infrastructure possibilities and human-centered approaches with industry 5.0. IGI Global; 2024. p. 267–80.

  34. Mujawar RY, Lakshminarayanan R, Jyothi AP, Patnaik S, Dhayalini K. IoT-enabled intelligent irrigation system with machine learning-based monitoring, for effective rice cultivation. Int J Intell Syst Appl Eng. 2024;12(11s):557–65.


  35. Pandey DR, Mishra N. IoT integration for enhanced turmeric cultivation: a case study in smart agriculture. In: BIO web of conferences, vol 82. EDP Sciences; 2024. p. 05008.

  36. Sonali S, Dhotre SS. Improved deep learning-based classifier for detection and classification of aloe barbadensis miller disease. Int J Intell Syst Appl Eng. 2024;12(2s):239–54.


  37. Hamouda YE. Optimally sensors nodes selection for adaptive heterogeneous precision agriculture using wireless sensor networks based on genetic algorithm and extended Kalman filter. Phys Commun. 2024;63:102290.


  38. Gupta C, Khang A. Cultivating efficiency-harnessing Artificial Intelligence (AI) for sustainable agriculture supply chains. In: Agriculture and aquaculture applications of biosensors and bioelectronics. IGI Global; 2024. p. 372–88

  39. Dashand SS, Kumar P. Distributed and Analogous simulation framework for the control of pests and diseases in plants using IoT Technology. In: BIO web of conferences, vol 82. EDP Sciences; 2024. p. 05017.

  40. Unhelkar B, Chakrabarti P. A novel deep learning models for efficient insect pest detection and recommending an organic pesticide for smart farming. Int J Intell Syst Appl Eng. 2024;12(9s):15–31.


  41. Gryshova I, Balian A, Antonik I, Miniailo V, Nehodenko V, Nyzhnychenko Y. Artificial intelligence in climate smart in agricultural: toward a sustainable farming future. Access J. 2024;5(1):125–40.


  42. Parmar PJ, Shrimali M. Identification of fruit severity and disease detection using deep learning frameworks. Int J Intell Syst Appl Eng. 2024;12(12s):288–95.


  43. Venkatasaichandrakanth P, Iyapparaja M. A detailed study on deep learning versus machine learning approaches for pest classification in field crops. In: Artificial intelligence and machine learning for smart community. CRC Press; 2024. p. 1–25.

  44. Gerber JS, Ray DK, Makowski D, Butler EE, Mueller ND, West PC, Sloat L. Global spatially explicit yield gap time trends reveal regions at risk of future crop yield stagnation. Nat Food. 2024;5:125–35.


  45. Mishra AM, et al. Weed density estimation in soya bean crop using deep convolutional neural networks in smart agriculture. J Plant Dis Prot. 2022;129(3):593–604.


  46. Mishra AM, Harnal S, Mohiuddin K, Gautam V, Nasr OA, Goyal N, et al. A deep learning-based novel approach for weed growth estimation. Intell Autom Soft Comput. 2022;31:1157–72.


  47. https://www.kaggle.com/datasets/patelris/crop-yield-prediction-dataset. Accessed 15th July 2024.

  48. Maraveas C, Konar D, Michopoulos DK, Arvanitis KG, Peppas KP. Harnessing quantum computing for smart agriculture: empowering sustainable crop management and yield optimization. Comput Electron Agric. 2024;218:108680.


  49. Li J, Mingle Xu, Xiang L, Chen D, Zhuang W, Yin X, Li Z. Foundation models in smart agriculture: basics, opportunities, and challenges. Comput Electron Agric. 2024;222:109032.


  50. Hasan HR, Musamih A, Salah K, Jayaraman R, Omar M, Arshad J, Boscovic D. Smart agriculture assurance: IoT and blockchain for trusted sustainable produce. Comput Electron Agric. 2024;224:109184.


  51. Pranaswi D, Jagtap MP, Shinde GU, Khatri N, Shetty S, Pare S. Analyzing the synergistic impact of UAV-based technology and knapsack sprayer on weed management, yield-contributing traits, and yield in wheat (Triticum aestivum L.) for enhanced agricultural operations. Comput Electron Agric. 2024;219:108796.



Acknowledgements

This work was supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R235), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Funding

This research was financially supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R235), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Author information

Authors and Affiliations

Authors

Contributions

JL: contributed to the conceptualization and design of the study, as well as the acquisition and analysis of data. DS: participated in the methodology development and data interpretation. KSK: was involved in the drafting and critical revision of the manuscript. MJR: contributed to the software development and implementation of the deep learning framework. AAR: assisted with data collection and pre-processing. MG (corresponding author): supervised the entire project and provided significant intellectual input throughout the study. BOS: contributed to the validation and final approval of the manuscript. All authors read and approved of the final manuscript.

Corresponding author

Correspondence to Masresha Getahun.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Logeshwaran, J., Srivastava, D., Kumar, K.S. et al. Improving crop production using an agro-deep learning framework in precision agriculture. BMC Bioinformatics 25, 341 (2024). https://doiorg.publicaciones.saludcastillayleon.es/10.1186/s12859-024-05970-9


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doiorg.publicaciones.saludcastillayleon.es/10.1186/s12859-024-05970-9

Keywords