By Arnold Schumann, Laura Waldo, William Holmes, Gary Test and Tim Ebert
Artificial intelligence (AI) is increasingly common in electronic devices at home or work, in social media, video streaming services, electronic commerce, and in internet search engines. Now, AI is rapidly entering the farming scene.
Growers using modern precision agriculture tools and techniques often face a barrage of data created by increasingly prolific, data-hungry electronic devices and services. Compare a smartphone’s data needs with those of an old desktop phone. Or contrast an old-style paper map of your farm with today’s digital geographic information system maps, showing multiple layers of every square inch of your fields, updated every week or month by automated aerial drone surveys.
Successful use of precision agriculture requires improved, smarter methods for managing, understanding and integrating all the “big data” being generated. Fortunately, AI implemented as machine learning may be able to help.
Machine learning uses computer algorithms to parse data, learn from it and make determinations without human intervention. Since about 2012, new machine vision techniques using deep-learning convolutional neural networks (DL-CNN) have excelled in image recognition, especially in the detection (identification and localization) of objects within images (Figure 1).
DL-CNN models can be trained to recognize virtually anything that the human eye can perceive, including objects that are partially obscured. In recent global competitions, the accuracy of DL-CNN machine vision in tests with hundreds of thousands of images has matched or exceeded the accuracy of human viewers. The speed at which machine vision can process thousands of images greatly exceeds the capacity of humans.
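For readers who want to see what “detection” looks like in practice, the sketch below runs a general-purpose, pretrained DL-CNN detector over a single photograph and prints each detected object’s class, bounding box and confidence score. It is only an illustration using the open-source PyTorch/torchvision library and a model trained on everyday objects, not the citrus-specific models described in this article, and the image file name is a placeholder. Citrus models work the same way; the main difference is the images used to train them.

```python
# Minimal sketch: object detection (identification + localization) with a
# pretrained DL-CNN. The model is trained on everyday objects (COCO), not on
# citrus pests; "leaf_photo.jpg" is a placeholder file name.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()  # inference mode

image = Image.open("leaf_photo.jpg").convert("RGB")
with torch.no_grad():
    detections = model([to_tensor(image)])[0]

# Each detection is a bounding box (where), a class label (what) and a score (how sure).
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score >= 0.5:  # keep only confident detections
        print(f"class {label.item()} at {[round(v, 1) for v in box.tolist()]} score {score.item():.2f}")
```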
AG APPLICATIONS
Some of the leading uses of machine vision in precision agriculture are to process video from cameras for identification of weeds, pests, diseases and hazards, and to enhance auto-steering guidance of farm vehicles. Smart sprayers for precision spraying of herbicide only on weeds in agronomic crops have already been developed. They rely on AI machine vision to identify every plant on the ground and make on-the-go decisions about which plants to spray. Herbicide savings of 90 percent were reported from the use of the AI sprayer technology. Development of similar technologies for specialty crops is currently in progress.
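The decision logic behind such a sprayer can be pictured as a simple loop: detect the plants in each video frame, keep only confident weed detections, and open whichever nozzle covers each weed’s position. The sketch below is a deliberately simplified, hypothetical illustration of that idea; the class names, threshold and nozzle layout are invented, and commercial systems are far more sophisticated.

```python
# Hypothetical, simplified sketch of a smart sprayer's per-frame decision step.
# Detections are assumed to arrive as dictionaries with a label, score and box.

CONFIDENCE_THRESHOLD = 0.6                                   # assumed value
NOZZLE_ZONES = {0: (0, 640), 1: (640, 1280), 2: (1280, 1920)}  # pixel columns covered by each nozzle

def nozzles_to_fire(detections):
    """Map confident weed detections to the spray nozzles covering them."""
    fire = set()
    for det in detections:  # e.g. {"label": "weed", "score": 0.9, "box": (x1, y1, x2, y2)}
        if det["label"] != "weed" or det["score"] < CONFIDENCE_THRESHOLD:
            continue  # crop plants and low-confidence hits are skipped
        x_center = (det["box"][0] + det["box"][2]) / 2
        for nozzle, (left, right) in NOZZLE_ZONES.items():
            if left <= x_center < right:
                fire.add(nozzle)
    return fire

# Example: one weed detected near the left edge of the frame fires nozzle 0.
print(nozzles_to_fire([{"label": "weed", "score": 0.92, "box": (100, 200, 180, 300)}]))
```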
DETECTION APP OPTIONS
There are three free iPhone apps in the Apple App Store for image and object detection: PictureThis – Plant Identifier, AlphaPic and What Objects. The first requires an internet connection (image processing is done on a remote server). The other two process everything on the iPhone, which is preferable. In a quick test, the Plant Identifier correctly identified 10 out of 10 houseplants.
Plantix, a free, multilingual crop advisory app that uses DL-CNN machine vision, was developed in Germany for Android smartphones and can be downloaded from Google Play; it is not yet available on the iPhone iOS platform. Plantix can diagnose 26 citrus maladies, including citrus canker and greening diseases, and its library covers dozens of other crops, pests and diseases.
Similar initiatives are underway to develop new smart agricultural applications of machine learning and AI-enabled computer vision on mobile devices and farm machinery. At the University of Florida Institute of Food and Agricultural Sciences (UF/IFAS) Gulf Coast Research and Education Center, Nathan Boyd is developing smartphone diagnostic apps for Florida specialty crops as well as AI vision-equipped smart sprayers for vegetables and strawberries. At the UF/IFAS Citrus Research and Education Center, AI research is focusing on autonomous robotic scouting platforms for citrus under protective screen (CUPS) groves. All these projects rely on DL-CNN machine learning for rapid, real-time analysis of video feeds and photographs to identify situations and objects of interest such as weeds, pests and diseases.
ACP DETECTION IN CUPS
The principal reason for growing citrus in protective screen houses is to exclude the Asian citrus psyllid (ACP), the vector of citrus greening disease, so the integrity of the screen house must be ensured at all times. Occasional incursions of ACP into a CUPS are still possible, and if they are not detected immediately, the psyllids could quickly colonize substantial areas of the CUPS and spread greening disease.
Human scouts are currently the only means of detecting pests and diseases in citrus groves, and the known methods for ACP detection are very inefficient. Only a small fraction of the trees in a grove is sampled, and for large trees only a tiny fraction of each canopy is inspected, with most of the canopy above 6 to 7 feet remaining un-scouted. Because of these limitations and the laborious nature of the work, scouting is often conducted only weekly or less frequently. Consequently, the probability that psyllids from an incursion into a CUPS will go undetected remains high.
A more efficient, rapid and comprehensive robotic scouting method using machine vision and AI is therefore being developed for daily inspection and early detection of ACP and other pests and diseases in CUPS. The robotic scouting vehicle was designed to be energy self-sufficient, using solar-electric power to move autonomously through the grove without human intervention (Figure 2). If you truly want to see and understand everything in a grove, you should live in it or close to it. The same philosophy applies to the robotic scout: it stays in the grove 24/7, working whenever the sun shines.
Preliminary results from some trained DL-CNN models are shown in Figure 3. Training a new DL-CNN model typically requires 500 to 2,000 digital images that feature the objects of interest, which include arthropod pests such as psyllids, leafminers, mealybugs, scale insects, thrips, spider mites and rust mites, and diseases such as citrus canker, greasy spot, anthracnose, melanose and scab.
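A common way to build such a model is to start from a detector pretrained on generic images and fine-tune it on those 500 to 2,000 annotated photographs. The sketch below shows that general pattern with the open-source PyTorch/torchvision library; the class list is only an illustrative subset of the targets named above, and `train_loader` is a placeholder for a data loader over the annotated images.

```python
# Sketch of fine-tuning a pretrained DL-CNN detector on new classes.
# The class list is illustrative, and train_loader is a placeholder for a
# DataLoader over the 500-2,000 annotated training images.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

CLASSES = ["background", "psyllid", "citrus_canker"]  # illustrative subset of targets

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
# Replace the classification head so the network predicts our classes instead of the generic ones.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(CLASSES))

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
train_loader = []  # placeholder: should yield (images, targets) with annotated boxes and labels

model.train()
for images, targets in train_loader:
    loss_dict = model(images, targets)  # detection models return a dictionary of losses in train mode
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```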
IDENTIFYING OTHER OBJECTS OF INTEREST
UF/IFAS researchers will also explore other tasks that the scout can perform with machine vision, such as searching for holes in the screen roof and walls, detecting blocked or defective microjet sprinklers, and measuring tree growth. Examples of these scouting tasks are shown in Figure 3.
The robotic scout will be equipped with a global positioning system (GPS), Wi-Fi and cellular communications, and cameras of various resolutions. Its main computer includes a fast graphics processing unit for running the DL-CNN object recognition software, while smaller peripheral computers handle solar-power regulation, speed control, steering, navigation and communication. When fully operational, the scout will be guided by both computer vision and GPS waypoints delineating the intended scouting paths through the grove.
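As a rough illustration of the waypoint half of that guidance, the hypothetical sketch below computes the distance and heading from the scout’s current GPS fix to the next waypoint and advances to the following waypoint once the vehicle is close enough. The tolerance value and function names are assumptions; a real system would fuse this with the camera-based guidance described above.

```python
# Hypothetical sketch of simple GPS waypoint following for the robotic scout.
import math

WAYPOINT_TOLERANCE_M = 1.0  # how close counts as "reached" (assumed value)

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Approximate distance (m) and bearing (degrees from north) between two GPS fixes."""
    # An equirectangular approximation is adequate over the short distances within a grove.
    mean_lat = math.radians((lat1 + lat2) / 2)
    dx = math.radians(lon2 - lon1) * math.cos(mean_lat) * 6_371_000  # east-west meters
    dy = math.radians(lat2 - lat1) * 6_371_000                       # north-south meters
    return math.hypot(dx, dy), math.degrees(math.atan2(dx, dy)) % 360

def next_target(position, waypoints, index):
    """Return the index of the waypoint the scout should currently steer toward."""
    dist, _ = distance_and_bearing(*position, *waypoints[index])
    if dist < WAYPOINT_TOLERANCE_M and index + 1 < len(waypoints):
        index += 1  # reached this waypoint; advance to the next one on the path
    return index
```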
Images of additional identifiable physiological events, such as flowering, color break and fruit damage (e.g., sunburn and splitting), will also be collected as they occur. These other objects of interest represent further identification tasks that a robotic scout could accomplish while moving about, enhancing the operation of an intensively managed CUPS.
SUMMARY
DL-CNNs running on modern computers, combined with digital cameras, provide machine vision that rivals human perception in recognizing objects of interest. AI machine vision sensors are being combined with autonomous robotic vehicles to improve daily grove scouting in CUPS structures. Although computer AI and machine memory come nowhere near the overall capabilities of the human brain, trained neural networks are very capable at the specific tasks they have been “imprinted” with. A specialized robotic scout could feasibly work faster and for longer hours than human scouts. Future research will compare human and machine scouting abilities.
Acknowledgement: This material is based on work supported by the National Institute of Food and Agriculture, U.S. Department of Agriculture, under award number 2018-70016-27387.
Arnold Schumann (schumaw@ufl.edu), Laura Waldo (ljwaldo@ufl.edu), William Holmes (wdholmes@ufl.edu), Gary Test (gkt@ufl.edu) and Tim Ebert (tebert@ufl.edu) are at the UF/IFAS Citrus Research and Education Center in Lake Alfred.