iPathology: Robotic Applications and Management of Plants and Plant Diseases

Yiannis Ampatzidis, Luigi De Bellis and Andrea Luvisi

Department of Agricultural and Biological Engineering, University of Florida, Southwest Florida Research and Education Center, 2685 FL-29, Immokalee, FL 34142, USA
Department of Physics and Engineering, California State University, 9001 Stockdale Highway, Bakersfield, CA 93311, USA
Department of Biological and Environmental Sciences and Technologies, University of Salento, via Prov.le Monteroni, 73100 Lecce, Italy

Authors to whom correspondence should be addressed.
Sustainability 2017, 9(6), 1010; https://doi.org/10.3390/su9061010
Submission received: 13 March 2017 / Revised: 8 June 2017 / Accepted: 9 June 2017 / Published: 12 June 2017
The rapid development of new technologies and the changing landscape of the online world (e.g., the Internet of Things (IoT), Internet of All, cloud-based solutions) provide a unique opportunity for developing automated and robotic systems for urban farming, agriculture, and forestry. Technological advances in machine vision, global positioning systems, laser technologies, actuators, and mechatronics have enabled the development and implementation of robotic systems and intelligent technologies for precision agriculture. Herein, we present and review robotic applications in plant pathology and management, and emerging agricultural technologies for intra-urban agriculture. Advanced greenhouse management systems and technologies have developed greatly in recent years, integrating the IoT and Wireless Sensor Networks (WSNs). Machine learning, machine vision, and artificial intelligence (AI) have been utilized and applied in agriculture for automated and robotic farming. Intelligent technologies, using machine vision and machine learning, have been developed not only for planting, irrigation, weeding (to some extent), pruning, and harvesting, but also for plant disease detection and identification. However, plant disease detection still represents an intriguing challenge, for both abiotic and biotic stress. Many recognition methods and technologies for identifying plant disease symptoms have been successfully developed; still, the majority of them require a controlled environment for data acquisition to avoid false positives. Machine learning methods (e.g., deep and transfer learning) show promising results for improving image processing and plant symptom identification. Nevertheless, diagnostic specificity is a challenge for microorganism control and should drive the development of mechatronic and robotic solutions for disease management.
Research on agricultural robots has been growing in recent years, thanks to potential applications and industry efforts in robot development [1]. Their role has been investigated for many agricultural tasks, mainly focused on increasing the automation of conventional agricultural machines and covering processes such as ground preparation, seeding, fertilization, and harvesting. Systematic, repetitive, and time-dependent tasks seem to represent the best fields of application for robots, especially in an arable farming context with temporary crops [2]. Besides agronomic practices, robotic plant protection has also been investigated, but it may represent the most complex challenge for researchers and developers, because questions related to pathogen diagnosis have to be considered along with common robot-related issues.
Recently, research in the automatic recognition of diseases has been growing rapidly, with potential applications for developing robots able to recognize single plants, locate and identify diseases, and start routines for disease management. This paper aims to provide details of this new generation of robots that could support plant pathologists.
The rapid development of new technologies and the changing landscape of the online world (e.g., the Internet of Things (IoT), Internet of All, cloud-based solutions) provide a unique opportunity for developing automated and robotic systems for urban farming, agriculture, and forestry. Technological advances in machine vision, global positioning systems (GPS), laser technologies, actuators, and mechatronics (embedded computers, micro-sensors, electrical motors, etc.) have enabled the development and implementation of robotic systems [3,4,5] and “smart” (intelligent) technologies [6,7,8] for precision agriculture.
Machine vision technologies have been applied in agriculture to identify and locate individual plants [9], with prospective use in highly automated contexts such as greenhouses. As reported by Correll et al. [10], object recognition is considered one of the hardest problems in robotics and computer science. In an environment such as a garden or open field, there are no guarantees that an object has an exact size or shape, due to growth conditions and interaction factors among plants, organisms, and the environment. Lighting also plays a far more significant role than in a controlled environment; objects such as plants, leaves, weeds, and disease spots might be obscured or show specular highlights due to the reflection of light [10]. Additionally, machine vision navigation systems have been developed to guide autonomous vehicles in agricultural fields [11,12].
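To make the color-based plant localization idea concrete, the following minimal sketch segments green vegetation from soil using the classic excess-green (ExG) index. It is an illustration of the general technique, not the method of any cited paper; the threshold and synthetic colors are our assumptions.

```python
# Minimal sketch: segmenting green vegetation from soil/background with the
# classic excess-green (ExG) index. Illustrative only; threshold is assumed.
import numpy as np

def excess_green_mask(rgb: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """rgb: H x W x 3 float image in [0, 1]. Returns a boolean vegetation mask."""
    total = rgb.sum(axis=2) + 1e-8          # normalize to damp illumination changes
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2.0 * g - r - b                   # high for green vegetation, low for soil
    return exg > threshold

# Usage with a synthetic image: a green plant patch on soil-colored background.
img = np.full((100, 100, 3), [0.4, 0.3, 0.2])   # soil-like color
img[30:70, 30:70] = [0.2, 0.6, 0.2]             # plant-like patch
mask = excess_green_mask(img)
print(f"vegetation pixels: {mask.sum()} of {mask.size}")
```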
Important features for plant management have been implemented in robotic farming, especially for conventional tasks such as tilling, sowing, planting, watering, and the harvesting of grains and fruit; these activities can also be managed by Farm Management Information Systems [13]. Seed planting represents an important task in plant management; thus, autonomous agricultural robot prototypes have been specifically designed and developed for seed sowing. Naik et al. [14] developed an agricultural robot in which the distances between crop rows and between crops represent the only human input; plant row detection was achieved utilizing IR sensors. Automation of this activity is particularly important in greenhouses, because the high plant density and short life cycle of many cultivated plants impose frequent and massive planting procedures. Automatic vehicles can use the basic guidance structure provided by the iRobot Create platform to navigate inside a greenhouse and detect specific plants [10,15]. The iPlant robot has one arm implemented for watering and a second arm, consisting of three main components (a plowing tool, a seed container, and an excavation tool), to perform seed planting tasks [15].
Robotic fertilization can be carried out using solutions such as the Ladybird robot [16], which has an electric drivetrain supported by solar panels and is able to capture features of its surroundings using a Light Detection and Ranging (LIDAR) laser system. A stereo camera creates RGB (red, green, blue) images of the crops, while IR and UV data are collected using a hyperspectral imaging camera (400–900 nm). This robotic system can evaluate plant status using machine learning algorithms. It is equipped with a spraying system, attached to a Universal Robots UR5 six-axis robot arm, for plant fertilizing. A smaller version of Ladybird was developed as RIPPA (Robot for Intelligent Perception and Precision Application), the production phase of which is underway [16]. Further research has been carried out on robotic fertilizing; Zavala-Yoe et al. [17] designed a robotic manipulator utilizing laser sensors that is able to detect the content of water and nutrients in the plant and/or soil, as well as to decide the optimal dose of fertilizer for the plant. This site-specific approach can optimize nutrient and water usage and efficiency.
Semi-automated and automated technologies have been designed and developed for selective plant thinning. Siemens et al. [18] designed and evaluated an automatic multi-purpose machine for the thinning, weeding, and variable fertilizing of lettuce using machine vision. Moghaddam et al. [19] developed a selective thinning algorithm and compared two methods for recognizing sugar beet plants; they reported that the average width (AW) algorithm performed better than the mass center (MC) algorithm, detecting plants with 88% accuracy, but required a longer processing time (see the sketch below). Additionally, the thinning of pome and stone fruit is a very challenging, labor-intensive, and time-consuming operation. Lyons et al. [20] presented a selective and automated thinning system for peaches, and a prototype robotic manipulator with a clamp-like end effector for brushing off peach blossoms has been developed and investigated as well [21,22].
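The toy sketch below illustrates the two kinds of descriptor mentioned above on a binary plant mask. It does not reproduce the algorithms of Moghaddam et al. [19]; we assume "average width" means the mean blob width along the row axis and "mass center" means the blob centroid, and we classify blobs with an arbitrary size threshold (SciPy is assumed to be available).

```python
# Toy illustration of AW-style and MC-style plant descriptors for thinning.
# Definitions and the keep/thin threshold are our own assumptions.
import numpy as np
from scipy import ndimage

def describe_blobs(mask: np.ndarray):
    """mask: binary image of a crop row. Yields (avg_width, centroid) per blob."""
    labels, n = ndimage.label(mask)
    for i in range(1, n + 1):
        blob = labels == i
        widths = blob.sum(axis=0)                 # column-wise widths
        avg_width = widths[widths > 0].mean()     # average width (AW-style)
        centroid = ndimage.center_of_mass(blob)   # mass center (MC-style)
        yield avg_width, centroid

mask = np.zeros((20, 60), dtype=bool)
mask[5:15, 10:18] = True   # a large plant to keep
mask[8:11, 40:43] = True   # a small seedling to thin
for aw, mc in describe_blobs(mask):
    action = "keep" if aw > 5 else "thin"
    print(f"avg width {aw:.1f}, center {mc}: {action}")
```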
The harvesting of fruit from woody plants is a frequent challenge for robotic applications [23,24], as is harvesting from shrubs or herbaceous plants [25,26,27]. Several harvesting robotic systems have been developed and evaluated for cucumber [28], strawberry [29], tomato [30], asparagus [31], grain [32], lettuce [33], mushroom [34], and orange crops [35], among others. Fruit detection and localization is critical for developing robotic harvest technologies. Gongal et al. [36] provided a review on machine vision for fruit load estimation, suggesting the utilization of integrated color- and shape-based feature classification. Most supervised learning methods require large numbers of accurate training samples, but deliver better results than simple image processing techniques. Modeling and analyzing 3D plant and tree structures for robotic applications is a computationally intensive and time-consuming procedure [37,38]. In general, there are two types of 3D modeling systems: active sensors (e.g., laser systems such as LIDAR, the LIDAR-based LemnaTec Scanalyzer HTS (High Throughput Screening), the Microsoft Kinect, etc.) and passive sensors (e.g., digital cameras and photogrammetric techniques). The main drawback of the active techniques is their high cost. The passive techniques are less expensive than the laser-based active techniques and can produce more detailed information, in addition to offering faster data acquisition and processing rates.
However, a critical issue for robotic harvesting is identifying crops that may overlap or be subject to varying lighting conditions. Recently, tomato [39] and apple [40] detection in highly variable lighting conditions was performed by applying an algorithm to multiple feature images (image fusion). A supplemental application may involve the detection of fruit disease as well (see Section 3.2.3). The application of deep neural networks has also led to the development of new object detectors, such as the Faster Region-based Convolutional Neural Network (Faster R-CNN), which was adapted for fruit detection using color (RGB) and near-infrared (NIR) images [41]. The combination of multi-modal image information via early and late fusion methods leads to an enhanced Faster R-CNN model with higher detection accuracy.
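The following PyTorch sketch illustrates the "early fusion" idea: the RGB and NIR channels are concatenated before the first convolution, so a single backbone sees both modalities. It is an illustration of the fusion concept, not the architecture of [41]; layer sizes are arbitrary assumptions.

```python
# Hedged sketch of early fusion of RGB and NIR inputs for a detector backbone.
import torch
import torch.nn as nn

class EarlyFusionStem(nn.Module):
    def __init__(self, out_channels: int = 64):
        super().__init__()
        # 4 input channels: R, G, B + NIR, fused at the very first layer.
        self.conv = nn.Conv2d(4, out_channels, kernel_size=7, stride=2, padding=3)
        self.act = nn.ReLU(inplace=True)

    def forward(self, rgb: torch.Tensor, nir: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, nir], dim=1)  # early fusion: stack modalities
        return self.act(self.conv(x))

stem = EarlyFusionStem()
rgb = torch.rand(1, 3, 224, 224)   # RGB image batch
nir = torch.rand(1, 1, 224, 224)   # co-registered NIR image
features = stem(rgb, nir)
print(features.shape)              # torch.Size([1, 64, 112, 112])
```

Late fusion would instead run separate backbones per modality and merge their detections or feature maps afterwards; the trade-off is extra computation for more modality-specific features.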
Furthermore, distributed autonomous gardening systems have been utilized for urban/indoor precision agriculture applications, where multi-robot teams are networked with the environment [10]. Post-harvest robotic technologies have been developed based on machine vision for disease detection as well as fruit and product sorting [42,43,44].
As Correll et al. [10] suggested, systems for automated plant management may rely on the vision of an Ecology of Physically Embedded Intelligent Systems, or PEIS-Ecology [45]. In this vision, a network of cooperating robotic devices embedded in the environment is preferred over an extremely competent robot in a passive environment [46]. This represents an ecological view of the robot-environment relationship, in which the robot and the environment are seen as parts of the same system, working to achieve a common goal; a vision particularly suitable for greenhouses or well-defined environments such as gardens. In this case, a robot (or PEIS) is intended as any device incorporating some computational and communication resources and able to interact with the environment via sensors and/or actuators [45]. PEIS are connected by a uniform communication model for exchanging information, and each component can use functionalities from other PEIS. Such a model [45] could be applied in the agricultural sector as well, particularly under the highly controlled conditions that can be obtained in greenhouse cultivation. In a traditional approach, the robot would use its sensors to acquire information from the environment, i.e., to self-localize, recognize and identify plants, evaluate soil moisture, and water the plants. In the PEIS approach, by contrast, the robot recovers information from the environment: it calculates its relative position in the garden or greenhouse (e.g., using cameras), identifies plants by RFID (Radio Frequency Identification) tags attached to or inserted in the plants, monitors soil moisture, and transmits data wirelessly to an automated irrigation system. The lower complexity of the robotic devices used in this vision seems to fit well in the agricultural context, where “objects” are stressed by many physical factors (heat and light, soil and sand, water, etc.) that increase the need for maintenance.
Open-field robotic management is quite a difficult task, mainly due to the limited control of environmental conditions, the increased maintenance workload of high-tech devices, and the lack of complete knowledge of objects within the operative range of the robots. Thus, robots for open-field application are mainly involved in agronomic tasks or weed control. However, an emerging challenge in precision agriculture is the use of Unmanned Aerial Vehicles (UAVs), for which applications are countless and very promising for plant health monitoring. UAVs can be used for plant disease and abiotic crop disorder detection [47], such as rice lodging or crop nutrient diagnosis via spectral analysis [48]. UAV systems have also been tested for pathogen detection in crops. Pathogen detection can be achieved by analyzing the crop canopy [49,50] or by detecting declines in chlorophyll levels (commonly expressed as a reflectance decrease in the near-infrared band) [48]. As with monitoring applications for abiotic stress, this approach seems effective when the pathogens lead to a widespread attack on the crop. At this stage of development, this approach should be considered for remote sensing applications [51], rather than for robotic disease management in which the robots recognize and control the disease. Disease detection and control in open fields (not only in well-defined environments) could be achieved by utilizing aerial (UAV) and ground robots (Unmanned Ground Vehicles, UGVs) [52]. In that study [52], UAVs were used to collect spectral images and analyze them to identify “critical” regions. This information was transferred wirelessly to the UGV, which navigated to the “critical” areas to analyze and collect leaf samples. The UGV first examined the plant (strawberry) using a spectral camera and, if it was classified as “diseased,” collected leaf samples using a manipulator and web cameras for leaf localization.
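To make the spectral flagging step concrete, here is a minimal sketch that computes NDVI, the standard red/near-infrared vegetation index, and flags low-NDVI pixels as "critical" regions; declining chlorophyll shows up as reduced near-infrared reflectance, as noted above. The band values and threshold are illustrative assumptions, not parameters from [52].

```python
# Minimal sketch: flag "critical" canopy regions from UAV imagery using NDVI.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red + 1e-8)

def critical_regions(nir, red, threshold=0.4):
    """Return a boolean map of pixels whose NDVI falls below `threshold`."""
    return ndvi(nir, red) < threshold

# Synthetic scene: healthy canopy with one stressed patch.
nir = np.full((50, 50), 0.6); red = np.full((50, 50), 0.1)
nir[20:30, 20:30] = 0.3; red[20:30, 20:30] = 0.2   # stressed: NIR down, red up
flags = critical_regions(nir, red)
print(f"flagged pixels: {flags.sum()}")  # coordinates could be relayed to a UGV
```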
Precision plant protection can be considered part of precision agriculture [49], in which the site-specific application of pesticides plays a significant role in agricultural sustainability [53]. In this field, the monitoring and control of environmental parameters represents an essential feature for automating machinery and robots. Moreover, precision agriculture is a cyclic system whose steps can be divided into data collection and localization, data analysis, management decisions on applications, and the evaluation of management decisions; data are stored in a database and can be used as historical data for future decision-making [54], a very useful feature for plant protection purposes. While environmental control in open field conditions is very limited, greenhouses are the best environment in which to implement precision farming protection systems. Even if further investigation and development is needed, many parameters can be monitored or even controlled in well-defined spaces. Pre-determined physical and biological objects and the availability of infrastructure (i.e., wires, pipes, roofs) can support the implementation and mobility of smart machinery.
Smart systems for greenhouse management have developed greatly in recent years, mainly through rapid advances in information technology and Wireless Sensor Networks [8,55]. Greenhouse indoor environmental parameters, such as temperature or humidity, can be measured by sensors. Disease occurrence and development are also linked to plant parameters such as leaf wetness duration and leaf temperature. Systems able to collect information and effectively and automatically control greenhouses on site, or remotely through a web browser, have been developed [56]. Further difficulties, linked to the highly nonlinear interaction between the physical and the biological subsystems, have suggested the use of the filtered Smith predictor to improve disturbance rejection dynamics while maintaining the robust stability of the greenhouse environment [57] (see the sketch below). The high level of technology involved in greenhouse environmental control has been underlined by the identification of optimal vent configurations and opening management procedures for indoor environment control using CFD (Computational Fluid Dynamics) software [58].
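The sketch below simulates the core idea of a Smith predictor (in its classic, unfiltered form) on a toy first-order greenhouse temperature loop with transport delay: an internal delay-free model lets the controller act as if the dead time were absent. All plant parameters and controller gains are illustrative assumptions, not values from [57].

```python
# Toy simulation of a classic Smith predictor compensating for dead time in a
# first-order greenhouse temperature loop. Parameters are assumed, not from [57].
a, b, delay = 0.95, 0.05, 10          # plant: y[k+1] = a*y[k] + b*u[k-delay]
kp, ki = 2.0, 0.08                    # PI controller gains
steps, setpoint = 200, 1.0

y = ym = ymd = integ = 0.0            # plant, model, delayed-model states
u_hist = [0.0] * delay                # actuation buffer (transport delay)
ym_hist = [0.0] * delay               # delayed copy of the internal model
for k in range(steps):
    # Smith predictor: feed back the *undelayed* model plus the modeling error.
    feedback = ym + (y - ymd)
    e = setpoint - feedback
    integ += e
    u = kp * e + ki * integ

    # Advance the delay-free internal model and its delayed copy.
    ym = a * ym + b * u
    ym_hist.append(ym); ymd = ym_hist.pop(0)

    # Advance the true plant, which only sees the delayed input.
    u_hist.append(u); u_delayed = u_hist.pop(0)
    y = a * y + b * u_delayed

print(f"temperature after {steps} steps: {y:.3f} (setpoint {setpoint})")
```

The filtered variant cited above adds a filter to this error path to improve disturbance rejection while preserving robustness when the model is imperfect.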
While the control of environmental parameters can help farmers control diseases, protect plants, and optimize production, humans (and machinery) still need to intervene on single plants or in their immediate proximity. On the other hand, plant identification devices do not directly support automation [59,60].
Among abiotic stresses, water management represents a core topic in robotic management, due to its importance in greenhouses, gardens, and urban greening. These are environments in which robots and automated systems fit well, due to the limited extension of the surfaces or items to be monitored, environmental conditions that are partially controlled, and the high value of the plants. Automated and smart irrigation systems are valuable in greenhouses, because growth performance strongly relies on highly efficient water management. Pathogen development is also dependent on water availability, while a water surplus may lead to increased costs or cause issues for water discharge in urban greening.
The robot iPlant utilizes one arm, implemented with a soil moisture sensor that can control a water pump for precision and target-based irrigation [15].
An interesting approach in plant management, with potential effects on protection from abiotic stress, is suggested by symbiotic robot-plant bio-hybrids. Hamann et al. [61] investigated the symbiotic relationship between robots and plants in order to develop plant-robot societies able to create meaningful architectural structures; the robots can provide artificial stimuli in addition to the natural stimuli of the environment. Thus, a robot-plant bio-hybrid could serve as a green wall able to adapt to environmental stimuli, i.e., adapting plant growth and foliage distribution according to solar radiation and temperature, establishing a system that could be used for urban mitigation. Solar access is an important factor and may represent a constraint on urban agriculture [62], since most edible plants have a relatively high sunlight requirement; thus, access to sunlight must be considered [63]. However, symbiotic plants seem to have the potential to manage the multiple requirements of future architectures. Symbiotic plants may also act as bio-sensors, because the bio-hybrid part is fitted with environmental, electrophysiological, photosynthetic, and fluidic sensors. This system monitors the status of the plants and their environment. Plants represent a primary, sensitive biological sensor of different environmental parameters, and technological sensors convert the plants’ response into digitally represented information, creating a phyto-sensing system [61]. This advances the current use of bioindicators for air pollution assessment [64,65,66]. Moreover, a set of actuators can maintain the plants’ homeostasis in terms of watering, chemical additives, spectral light, or temperature, counteracting abiotic stress and minimizing inputs. These features are very attractive for future urban greening maintenance and protection.
Machine learning techniques can aid in the recognition of abiotic stress, since it is very challenging to distinguish between stresses with similar symptoms (e.g., biotic versus abiotic stress). Self-organizing classifiers (counter-propagation artificial neural networks, supervised Kohonen networks, and XY-fusion) have been able to distinguish healthy from diseased canopy (using hyperspectral images), and to distinguish between nitrogen-stressed and yellow rust-infected wheat [67].
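The sketch below illustrates the underlying task: classifying per-pixel hyperspectral spectra into healthy, nitrogen-stressed, and rust-infected classes. The self-organizing networks cited above have no off-the-shelf scikit-learn implementation, so a standard multilayer perceptron stands in; the synthetic spectra and class labels are assumptions for illustration only.

```python
# Sketch of classifying hyperspectral pixel spectra into stress classes.
# An MLP substitutes for the self-organizing classifiers of [67].
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
bands = 60
classes = ["healthy", "nitrogen-stressed", "yellow-rust"]

# Fake per-class mean spectra with small, class-specific deviations.
base = np.linspace(0.2, 0.6, bands)
means = [base, base - 0.05, base.copy()]
means[2][40:] -= 0.1                      # rust depresses NIR-side reflectance
X = np.vstack([m + rng.normal(0, 0.02, (100, bands)) for m in means])
y = np.repeat(classes, 100)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:1]), clf.predict(X[250:251]))  # healthy vs yellow-rust pixel
```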
Besides the development of robots for auto-navigation purposes, weed control represents the main field of research, mainly due to (a) the economic relevance of the topic and (b) the feasibility of distinguishing target (weeds) from non-target (cultivated plants) items by machine vision technology.
Distinguishing between weeds and cultivated plants represents the first step towards effective weed control. The use of agrochemicals for weeding offers an effective solution to growers, but some concern may arise when considering the environmental impact of this approach. In order to avoid environmental pollution, chemicals could be distributed only towards weeds (target-based applications), or mechanical methods may be applied. Both of these approaches are difficult to implement after seeding, after transplanting, or during vegetative growth, due to technical difficulties and the high costs of target-based systems. Currently, the identification of weeds relies on the human capability of discerning (identifying and distinguishing) plants visually. This operation (human supervision of weed control) is time-consuming and expensive. The “screening for weeds” procedure is similar to the “screening for anomalies” in an industrial production chain, an activity widely managed by automated machinery.
Tillett et al. [68] used a machine vision system to detect and remove weeds within rows of vegetables, utilizing conventional inter-row cultivation blades. This system can remove up to 80% of the weeds with low damage to the crops; however, weed re-growth and new germination were detected. Astrand and Baerveldt [69] developed an autonomous mechanical weeding system using two vision systems: one for navigation along crop rows, and another for identifying crop plants using color and shape parameters. A rotating weeding tool was used for weed removal; when a crop plant was encountered, the rotating tool was lifted out of engagement to avoid plant damage. To avoid crop plants, Griepentrog et al. [70] devised a continuously rotating ring of rigid steel tines that are moved individually.
Lee et al. [9] designed one of the first robotic spraying systems able to minimize the use of pesticides. The proposed machine vision technology distinguished tomatoes from weeds, dividing the acquired image into cells to obtain precise spraying application, with a spray cell size of 1.27 cm by 0.64 cm. The weed leaf locations were then sent to the spray controller, and the proper amount of herbicide was sprayed onto the image cell in which the weed was detected (see the sketch below). Plant recognition performance varied greatly, owing to the high level of variability in the shape patterns. The limits seem to be linked to a single, two-dimensional, top view of plants and to uncontrolled environmental growth parameters. A similar approach was carried out by Astrand and Baerveldt [69], employing two cameras and encapsulating the vision system to limit issues due to non-uniform illumination. Using gray-level vision to navigate in a structured outdoor environment and color vision to distinguish crops from weeds, very high plant classification rates (more than 90%) were achieved. The previously described Ladybird and RIPPA robots [16] are also able to control weeds in the same way that they fertilize plants, but their efficacy could be improved by using the method developed by Lee et al. [9] for precise spraying applications.
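The following minimal sketch shows the cell-mapping step in the spirit of Lee et al. [9]: detected weed pixel coordinates are binned into the grid of spray cells that the controller actuates. The cell size comes from the text above; the pixel-to-ground scale is an assumed camera calibration, not a published value.

```python
# Sketch: map detected weed pixels onto a grid of spray cells.
CELL_W_CM, CELL_H_CM = 1.27, 0.64      # spray cell size reported in [9]
CM_PER_PIXEL = 0.04                    # assumed camera calibration

def cells_to_spray(weed_pixels):
    """weed_pixels: iterable of (row, col). Returns set of (cell_row, cell_col)."""
    cells = set()
    for r, c in weed_pixels:
        cell_r = int(r * CM_PER_PIXEL / CELL_H_CM)
        cell_c = int(c * CM_PER_PIXEL / CELL_W_CM)
        cells.add((cell_r, cell_c))
    return cells

# Two weed detections falling into different spray cells.
print(cells_to_spray([(10, 12), (300, 450)]))
```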
Deploying multiple robots could reduce the cost of weed management in terms of energy, labor, and agrochemical usage. In the RHEA project (Robot Fleets for Highly Effective Agriculture and Forestry Management) [71], robots are integrated in tractor units and equipped with machine vision-based weed and crop row detection systems. This approach seems able to detect up to 90% of weed patches and eliminate most of them by using a combination of physical and chemical means. However, the application of multiple robots has to be evaluated considering the cost of the individual units and the issues related to collisions, communication, and safety [16].
Real-time diagnostic tools for weed detection are limited in their commercial availability. For mechanical (target-based) weeding systems, the robotic system needs only to identify and distinguish the cultivated plants; all other plants (weeds) will be mechanically removed. Conversely, some issues may be linked to chemical protection, because the use of broad-spectrum compounds may be necessary, or highly precise spraying systems need to be implemented.
Categorizing plants according to their morphological parameters is a very difficult task, especially through visual (human) observation. Thus, distinguishing diseased plants from symptomless ones is extremely challenging. The difficulty is intrinsic, owing to the non-specificity of many of the symptoms that can be present. Moreover, the degree of alteration of morphological parameters (such as dimension, shape, or color) due to biotic or abiotic stress may be very low compared to the variability of appearance in healthy plants, particularly during the first stage of infection. To be effective, control strategies against microorganisms have to be carried out before a disease outbreak occurs, and diagnostic specificity is mandatory for effective plant protection.
In the last fifty years, disease diagnosis has shifted from the observation of symptoms or symptom-based diagnostic procedures (i.e., indexing with sensitive hosts) towards protein-based or molecular-based tests. Even if the human eye and the skills of the pathologist may still play a significant role in some cases, such as yellows caused by phytoplasmas or the recognition of cultured pathogens in vitro, tests such as ELISA or PCR are considered almost the only answer to biotic infections. Therefore, the advancement of techniques for pathogen detection (via sensors) relies on a more complex concept of vision.
Non-destructive methods for plant disease detection have been developed in the last few decades [50], belonging to two groups: spectroscopic and imaging techniques, and volatile organic compound-based techniques. In particular, sensors may assess the optical properties of plants within different regions of the electromagnetic spectrum, and they could detect early changes in plant physiology due to biotic stresses that lead to changes in tissue color, leaf shape, transpiration rate, canopy morphology, and plant density. Plant disease detection by imaging sensors was thoroughly reviewed by Mahlein [72]. Thermography, chlorophyll fluorescence, and hyperspectral sensors are the most promising technologies [49]. Plant phenotyping is also critical to understanding plant phenotype behavior in response to genetic and environmental factors [73,74]. Nevertheless, the use of spectral analysis is a matter of debate in plant pathology, due to the similar symptoms or physiological alterations caused by different diseases [75]. The number of pathogens detected by image analysis is rapidly increasing [72], mostly involving widespread pathogens frequently recurrent in arable agriculture systems [76]. This represents the main field of application, because this approach may greatly reduce the workload of routine pest management, but quarantine pests are also the subject of research. In these cases, the requirements for diagnostic tools may be different, because of the need for large-scale monitoring techniques (with correspondingly fast and low-cost execution) and high diagnostic specificity, due to the significant workload and social impact that may originate from the recognition of positive plants [77]. Deng et al. [78] developed a recognition method based on visible-spectrum image processing to detect symptoms of citrus greening disease (also named citrus Huanglongbing, HLB, caused by Candidatus Liberibacter spp.) on leaves; the experimental results showed a high detection accuracy of 91.93% (see the sketch below). Pourreza et al. [79] presented an HLB detection system for the pre-symptomatic stage utilizing polarized imaging. Foliar symptoms of citrus canker, caused by Xanthomonas axonopodis, were analyzed to evaluate the efficacy of image analysis [80]. The image analysis was more accurate than visual raters for various symptom types (lesion numbers, % area necrotic, and % area necrotic and chlorotic), but symptom heterogeneity or the coalescence of lesions could lead to discrepancies in results, making full automation of the system not yet affordable. Cruz et al. [81] developed a novel vision-based transfer and deep learning technique for detecting symptoms of leaf scorch on leaves of Olea europaea infected by Xylella fastidiosa, with a true positive rate of 95.8 ± 8%. These vision-based systems can be mounted on mobile vehicles (e.g., autonomous vehicles) to optimize disease detection procedures and disease management.
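As a toy illustration of visible-spectrum symptom screening, the sketch below flags leaf pixels whose hue has drifted from green toward yellow (chlorosis), loosely inspired by the HLB leaf-symptom work cited above. The hue thresholds and synthetic leaf are our assumptions, not the published method of Deng et al. [78].

```python
# Illustrative sketch: fraction of leaf pixels showing yellow (chlorotic) hue.
import colorsys
import numpy as np

def chlorotic_fraction(rgb: np.ndarray) -> float:
    """rgb: H x W x 3 floats in [0, 1]. Fraction of leaf pixels that look yellow."""
    flat = rgb.reshape(-1, 3)
    hues = np.array([colorsys.rgb_to_hsv(*px)[0] for px in flat])  # hue in 0..1
    green = (hues > 0.22) & (hues < 0.45)     # assumed healthy-leaf hue band
    yellow = (hues > 0.10) & (hues <= 0.22)   # assumed chlorotic hue band
    leaf = green | yellow
    return yellow.sum() / max(leaf.sum(), 1)

leaf = np.full((40, 40, 3), [0.2, 0.6, 0.2])    # healthy green leaf
leaf[10:20, 10:20] = [0.8, 0.75, 0.2]           # yellow mottled patch
print(f"chlorotic fraction: {chlorotic_fraction(leaf):.2f}")  # flag if high
```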
Robotic fruit recognition is one of the most relevant topics for agricultural robot development, due to the economic impact of full automation and mechanization in harvesting. Color represents one of the simplest identifiers for distinguishing fruit from wood and foliage. Infrared light at wavelengths of about 700–1000 nm is reflected well by all parts of the plant, while red light (at about 690 nm) is not reflected well by unripe fruit, leaves, and stalks, but is reflected well by red ripe fruit. This feature can be used to recognize even small red fruits, such as cherries, using different light beams [24] (see the sketch below). Red and infrared signals were detected separately using lock-in amplifiers, with the aim of eliminating the interference effects caused by environmental light. Further refinements were necessary due to red light reflections on wood, but the reflection off the fruit causes a specular phenomenon that can be used to recognize the center of the cherry. Thus, fruit and obstacles, such as wood or leaves, can be properly recognized to accomplish an efficient harvest. This system, however, is likely to be susceptible to plant disease. Most cherry diseases are related to twigs, branches, or wood, causing a reduction in foliage biomass that may alter light reflection, while diseases such as necrotic ring spot (Necrotic ring spot virus) or leaf spot (Blumeriella jaapii) may reduce the reflection of the infrared laser beams due to spots on the leaves. Cherry recognition could be compromised by brown rot (Monilinia fructicola), which could strongly reduce the reflection of the red laser beams due to browning on ripening fruit and coverage with gray masses of spores. Even if biotic stress interferes with the recognition of plant organs, alterations in light reflection parameters should represent an interesting starting point for adapting harvesting-based solutions to disease detection. Moreover, the use of 3D vision sensors, essential for recognizing fruit covered by foliage [24], will be useful even for evaluating symptoms caused by pathogens that are erratically distributed.
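The toy classifier below captures the red/infrared scheme described above: ripe red fruit reflects both beams, foliage reflects only infrared, and surfaces returning neither are background. The reflectance values and ratio threshold are illustrative assumptions, not calibrated values from [24].

```python
# Toy red/IR reflectance classifier in the spirit of the cherry scheme in [24].
def classify_surface(red_reflect: float, ir_reflect: float) -> str:
    if ir_reflect < 0.2:
        return "background"            # neither beam returned: no plant tissue
    # Ripe red fruit reflects red strongly relative to IR; foliage does not.
    return "ripe fruit" if red_reflect / ir_reflect > 0.6 else "foliage/wood"

samples = {
    "cherry":  (0.55, 0.70),   # red fruit: reflects both beams
    "leaf":    (0.08, 0.75),   # leaf: absorbs red, reflects IR
    "sky gap": (0.02, 0.05),   # no tissue in view
}
for name, (r, ir) in samples.items():
    print(f"{name}: {classify_surface(r, ir)}")
```

Disease-induced changes such as leaf spots (lower IR return) or fruit browning (lower red return) would shift these ratios, which is precisely why the paragraph above treats reflectance alterations as a potential disease cue.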
Fruit detection via color can be a difficult task when the fruit has the same color as the vegetation, as in the case of cucumber [28]. The reflection of leaves at 850 and 970 nm is nearly the same, while fruit shows a significantly higher reflection at 850 nm; hence, the use of different filters allowed the distinction between the reflectance of leaves and that of fruit, achieving high efficiency in fruit recognition. It is likely that a leaf disorder, such as fungal or bacterial leaf blight (Alternaria cucumerina, Xanthomonas campestris) or anthracnose (Colletotrichum orbiculare), may disrupt the consistency of the reflection. Rot caused by Rhizoctonia solani may alter the ratios of the 850–970 nm reflectance, but these disorders could also compromise the whole recognition method, rendering the values useless for pathology purposes.
Other approaches to fruit and leaf recognition for harvesting purposes, not based on color, could lead to interesting applications in disease detection. Correll et al. [10] indicated that recognizing plants and tomatoes was a difficult challenge due to the complex geometry of the leaves; the foliage interferes significantly with the lighting in unpredictable ways. Tomatoes are generally round, but the variations in size and shape may be numerous. Lighting may also complicate their detection due to the obscuration of other fruit or leaves, and their skin may reflect specular highlights. To solve the identification of red or green tomatoes on plants, filter-based object recognition can be carried out [10]. Possible tomato locations are derived from a collective estimate of filters based on the shape, color, size, and specular highlights of the tomato skin. Some filters, such as specular highlights, are not robust, and their performance is affected by changing light conditions and by the object’s shape, leading to numerous false positives. In Correll et al. [10], two parameters were selected for fruit classification and identification (e.g., red or green tomatoes): color and fruit smoothness. Fruit smoothness was the most critical factor for identification; fruit identification using only color features could lead to false positives (between green tomatoes and leaves) (see the sketch below). This approach may lead to interesting applications for symptom recognition. A machine vision technology able to identify fruit or leaves independently of color could isolate the specific part of an image or a plant to which other detection tools (i.e., hyperspectral imaging) or further filters can be applied, limiting the interference of foliage or background. Moreover, if smoothness and color are the key factors in identifying fruit, breaks in homogeneous areas could be used as symptom alerts. Early blight, caused by Alternaria solani, produces sunken areas with a black and velvety appearance, while anthracnose, revealed by small, indented spots on the tomato skin and caused by Colletotrichum coccodes, results in a decay of the tomato flesh. Bacterial speck (Pseudomonas syringae pv. tomato) and bacterial spot (Xanthomonas campestris pv. vesicatoria) are characterized by raised brown specks or pimple-like dots, while bacterial canker (Clavibacter michiganensis subsp. michiganensis) causes raised black and white spots. Alterations in skin smoothness and color may also be caused by physiological disorders such as fruit cracking, catfaced fruit, or sunscalding. The combined alterations of smoothness and color caused by these fungi, bacteria, or disorders should be properly recognized within the area that, from the same features, was identified as fruit. Thus, the robot could harvest only symptomless tomatoes, reducing post-harvest work. Other biotic stresses, such as late blight (Phytophthora infestans), which causes mainly color alterations on fruit (i.e., brown blotches), or physiological disorders such as blotchy ripening, should be more difficult to distinguish, due to their low impact on the smoothness of the fruit skin. Further sensors should be implemented/developed in order to extend robot potential, such as hyperspectral techniques that provide rich information over the 350–2500 nm spectral range, with a resolution of less than 1 nm [49].
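The following hedged sketch illustrates the combination of weak filters discussed above: a candidate region is accepted as fruit only if it passes both a color filter and a smoothness filter (here approximated by low local variance). It illustrates the filter-combination idea in [10], not their implementation; thresholds and the variance proxy are our assumptions.

```python
# Sketch: combine a color filter and a smoothness filter for fruit candidates.
import numpy as np

def is_fruit(region_rgb: np.ndarray, red_min=0.45, var_max=0.003) -> bool:
    """region_rgb: H x W x 3 floats for one candidate region."""
    redness = region_rgb[..., 0].mean()           # color filter
    smooth = region_rgb.var(axis=(0, 1)).mean()   # smoothness filter (low variance)
    return redness > red_min and smooth < var_max

smooth_red = np.full((20, 20, 3), [0.7, 0.2, 0.2])           # ripe tomato skin
smooth_red += np.random.default_rng(1).normal(0, 0.01, smooth_red.shape)
mottled = smooth_red.copy()
mottled[5:12, 5:12] = [0.25, 0.2, 0.15]                      # dark sunken lesion
print(is_fruit(smooth_red), is_fruit(mottled))               # True False
```

Note how the lesioned region fails the smoothness filter while still passing the color filter: exactly the kind of "break in a homogeneous area" that the paragraph above proposes as a symptom alert.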
These techniques have shown high potential in plant pathology because the information retrieved from a hyperspectral image is based on the spatial X- and Y-axes and a spectral Z-axis, generating a detailed and spatially allocated interpretation of the signal-object interaction [82] (see the sketch below). Zhang et al. [83] were able to identify tomatoes infected by P. infestans, but this was related to canopy disorder rather than fruit. Applications to fruit disease are quite rare, but Qin et al. [84] developed a hyperspectral method able to recognize citrus canker. Thus, the data provided by this class of techniques may be able to complement the performance of robotic fruit recognition, creating an effective robot pathologist.
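To make the X/Y/Z structure concrete, the minimal sketch below shows how a hyperspectral cube is indexed: each spatial pixel carries a full spectrum, and each wavelength yields an image slice. The cube here is synthetic and the sensor range is an assumption; real cubes come from instrument files or SDKs.

```python
# Minimal sketch of hyperspectral cube organization: spatial X, Y + spectral Z.
import numpy as np

height, width, n_bands = 64, 64, 120
wavelengths = np.linspace(400, 1000, n_bands)       # nm, assumed sensor range
cube = np.random.default_rng(2).random((height, width, n_bands))

# Full spectrum of one pixel (say, a candidate lesion location).
spectrum = cube[32, 40, :]                           # shape: (n_bands,)

# A single-band image slice, e.g. near 680 nm (chlorophyll absorption).
band = np.argmin(np.abs(wavelengths - 680))
band_image = cube[:, :, band]                        # shape: (height, width)
print(spectrum.shape, band_image.shape, wavelengths[band])
```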
The progressive interaction between robotics and plants, which is increasing rapidly through the development of sensors, actuators, and mechatronics, must also be evaluated in terms of social and environmental sustainability. Agricultural robotic systems and tools may utilize hazardous materials and chemicals, which could increase the costs and environmental impact of their disposal [85,86,87]. Further challenges for the introduction and widespread use of efficient robotic solutions also relate to the social dimensions of labor automation in a post-industrial society. Beyond sensational headlines [88] and differences in the historical influence of technology among countries [89], the potential impact of robots, artificial intelligence, cloud-based computing, and big data on unemployment should be examined.
Human-robot collaboration (co-robots) has the potential to transform the division of labor between workers and machines as it has existed for many years. Robots have collaborated with human workers in factories for the last few years; robots and robotic systems, once isolated behind safety fences, have become smart enough to collaborate with people on manufacturing production lines, offering greater efficiency and flexibility. In agriculture and in open field environments, the development of efficient co-robot systems is more challenging and demanding, and needs further investigation.
The hardware and software of platforms (UAVs, UGVs, or sensors installed on vehicles and structures) and manipulators, even if originally intended for agronomic tasks such as monitoring or harvesting, are developing fast. The autonomous navigation of robots within (or over) crops and orchards has achieved high levels of complexity, and plants or parts of them can be successfully manipulated for seeding, thinning, or harvesting. Obviously, there is still room for improvement, particularly in the management of fleets of robots and the coordination between robots specialized in different tasks. It is also desirable that robot development be closely associated with the development of diagnostic systems, because many of the most advanced image analysis techniques for plant pathogen recognition are developed and tested independently of robotic application and integration.
The role of image analysis in robotic management should also be investigated. Image analysis and recognition, regardless of the object of the image (disease symptoms, weeds, abiotic stress), is the central topic of research for automatic crop protection. However, as reported by Mahlein [72], methods for disease diagnostics are still in the developmental stage, and the availability of standardized applications is not so predictable. Beyond the routine monitoring of crops such as cereals, in which robotic management may not really be plant- or organ-specific, vegetables and fruit plants require single-plant observation, diagnosis, and management. Is image analysis the best tool to achieve this goal? This field of research is recent, and huge steps forward are constantly being achieved. Diagnostic specificity may match conventional diagnostic tools for some diseases, but not for all, or not for all stages of disease development. We need to further investigate the innate limitations of image processing techniques and technologies for plant disease detection, based on the application and the plant protection requirements. Robots could act as very experienced pathologists that recall thousands of images to deliver a diagnosis. A robot’s “eye” is far better than the human one, and it can collect a large amount of data that is invisible to us. However, we have to consider the possibility that the data mining/processing of images may not be enough for every pathogen/host/environment combination. A parallel field of investigation for enhancing “robotic pathologists” may involve the further development of manipulators and sampling hardware, shifting the research focus from “experienced robotic observer” toward “robotic observer and analyst.” In recent years, research on on-site detection assays has been growing fast, and methods such as Loop-Mediated Isothermal Amplification (LAMP) [90] or lab-on-chip devices based on Electrical Impedance Spectroscopy (EIS) [91] may permit fast molecular diagnosis in the field. Because of their speed, robustness, and simplicity, these tests could be integrated into traditionally vision-based robots, enhancing their diagnostic specificity when required.
This material is based upon work that is supported by the National Institute of Food and Agriculture, U.S. Department of Agriculture, under award numbers 2016-38422-25544 and 2016-67032-25008.
A.L. and Y.A. conceived the review; A.L. analyzed plant pathology literature; Y.A. analyzed engineering and precision agriculture literature; A.L., L.D.B. and Y.A. prepared the manuscript; L.D.B. edited the manuscript.