Is there any unique idea we can add to image recognition using deep learning or machine learning, by preprocessing the image and then training and testing on it? The question is general, but I am looking for something new to add.
One promising avenue is to develop a multimodal, fluorescence-enhanced imaging pipeline that fuses traditional RGB with ultraviolet (UV)‐induced chlorophyll fluorescence to catch diseases before visible symptoms appear, then feed both channels into a self-supervised vision transformer for robust feature learning.
First, a low-cost UV LED array is paired with a smartphone camera: when you illuminate a leaf under UV, healthy tissue emits a characteristic red fluorescence (from chlorophyll), whereas pathogen-infected or nutrient-stressed areas show altered fluorescence intensity and spectrum. Capturing synchronized RGB + UV-fluorescence frames lets you create a two-channel image stack per sample.
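To make the stacking concrete, here is a minimal sketch of assembling the fluorescence-RGB stack. The file names, the use of OpenCV, and the choice of the red channel as the fluorescence proxy are all assumptions for illustration; real captures would additionally need geometric registration between the two frames.

```python
# Minimal sketch: fuse an RGB frame and a UV-illuminated frame into one stack.
import cv2
import numpy as np

def load_stack(rgb_path: str, uv_path: str) -> np.ndarray:
    """Return an H x W x 4 float32 stack: 3 RGB channels + 1 fluorescence channel."""
    rgb = cv2.cvtColor(cv2.imread(rgb_path), cv2.COLOR_BGR2RGB)
    uv = cv2.imread(uv_path)  # frame captured under UV illumination
    # Chlorophyll fluorescence is strongest in the red band, so keep that channel
    # (OpenCV loads images in BGR order, so index 2 is red).
    fluo = uv[:, :, 2].astype(np.float32)
    return np.dstack([rgb.astype(np.float32) / 255.0, fluo / 255.0])

stack = load_stack("leaf_rgb.png", "leaf_uv.png")  # hypothetical file names
print(stack.shape)  # (H, W, 4)
```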
Next, instead of hand-labeling thousands of lesions, you pretrain a vision transformer via contrastive learning on millions of paired stacks pulled from healthy and early-stressed plants. The model learns to map healthy versus abnormal fluorescence-RGB patterns into distinct regions of its latent space. Finally, you fine-tune the network on a small set of expert-annotated images for segmentation or classification of specific diseases (e.g., powdery mildew, early blight).
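A minimal sketch of the contrastive objective, assuming an InfoNCE-style loss (one common choice; the post does not prescribe a specific one). Two augmented views of each stack form positive pairs, and the other samples in the batch act as negatives; `encoder` and `augment` are placeholders, not a specific library API.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two views of the same N samples."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                  # (N, N) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Usage sketch:
#   z1 = encoder(augment(batch)); z2 = encoder(augment(batch))
#   loss = info_nce(z1, z2); loss.backward()
```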
Key innovations here are:
Fluorescence-RGB fusion: UV-induced chlorophyll fluorescence highlights stress at a biochemical level invisible to human eyes.
Self-supervised pretraining: Contrastive learning on paired healthy/stressed stacks reduces annotation needs and enhances robustness to lighting or cultivar variation.
Vision transformer backbone: Its global attention mechanism excels at capturing subtle, non-local fluorescence anomalies across the leaf surface (a minimal input-adaptation sketch follows this list).
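Since a standard ViT expects 3-channel input, the patch embedding has to be adapted to the 4-channel fluorescence-RGB stack. A sketch using timm is below (timm is an assumption; any ViT implementation that exposes its patch-embedding projection works the same way).

```python
import timm
import torch

# timm re-initializes the patch projection when in_chans differs from 3.
model = timm.create_model("vit_small_patch16_224", pretrained=False,
                          in_chans=4, num_classes=2)  # e.g., healthy vs. diseased
x = torch.randn(1, 4, 224, 224)  # one fluorescence-RGB stack
print(model(x).shape)            # torch.Size([1, 2])
```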
This pipeline could be extended with real-world field tests, mounting the LED array on drones or handheld rigs, to offer early disease warning, more precise spraying guidance, and improved crop health monitoring long before visible lesions emerge.
Let me remind you that plant diseases are caused by the interaction between a pathogen, a plant, and its environment. In fields and gardens, it's difficult to find completely healthy plants. Most plants have latent infections caused by soil pathogens and viruses, as well as physiological stress.
Early detection of the cause of an epidemic, such as brown rust or potato late blight, is crucial. If the disease is first detected only after more than 5% of the leaf area is affected, pesticides may no longer be effective under weather conditions that favor the disease. The lower leaves and shoots, shaded by the upper tiers, are often the first to become infected, yet they are inaccessible to remote sensing. In such situations, preventive treatment is the best course of action.
A weak signal from infected leaves can be confounded with other factors: other pathogens, differing physiological condition of the crop, soil conditions, light levels, crop variety, weed infestation, and even pesticide effects. Identifying the primary pathogen responsible for disease symptoms in the field requires molecular or serological techniques. Often, the same disease can be caused by various pathogens, or the same pathogen may cause different symptoms in different plants. I first became familiar with remote diagnosis of plant diseases in 1989, and although the technical level has undoubtedly improved, the challenges of solving this problem have not significantly changed.
Before classic resizing and normalization, preprocess the images with physics- and vision-based methods (e.g., polarization-based filtering, frequency/wavelet transforms, and topological maps) to bring out hidden features. These multi-domain representations are then fused and routed through a deep network. A learnable preprocessing block can further adapt the transformations automatically during training; a minimal sketch follows.
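This sketch pairs one fixed transform (a 2D Haar wavelet decomposition via PyWavelets) with a small learnable convolutional block trained end-to-end. The library choices and the block design are assumptions, shown only to make the "learnable preprocessing" idea concrete.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

def wavelet_channels(gray: np.ndarray) -> np.ndarray:
    """Single-level 2D Haar DWT -> 4 half-resolution channels (approximation + details)."""
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float32), "haar")
    return np.stack([cA, cH, cV, cD], axis=0)

class LearnablePreproc(nn.Module):
    """Convolutional block that learns to adapt the fused multi-domain input."""
    def __init__(self, in_ch: int = 4, out_ch: int = 8):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

gray = np.random.rand(224, 224)                          # placeholder grayscale image
feats = torch.from_numpy(wavelet_channels(gray))[None]   # (1, 4, 112, 112)
out = LearnablePreproc()(feats.float())
print(out.shape)                                         # torch.Size([1, 8, 112, 112])
```

The same pattern extends to the other transforms mentioned above: compute each fixed representation, stack the channels, and let the learnable block (trained jointly with the main network) decide how to weight and combine them.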