
Edge-Based Inference Pipeline Optimization for Weed Control

Learn how edge-based AI inference was optimized on Nvidia Jetson to enable real-time, laser-based weed control without chemicals.

December 17, 2025


This project demonstrates how edge-based AI can replace chemical herbicides with real-time, vision-driven weed control. By optimising deep learning models and the full inference pipeline for on-board execution, the system achieves millisecond-level latency on embedded hardware without cloud connectivity. The result is precise mechanical weed removal, reduced environmental impact and a practical blueprint for deploying low-latency AI in autonomous agricultural robots.

Introduction

Modern agriculture still relies heavily on chemical herbicides, despite growing concerns about environmental impact, human health and herbicide resistance. To explore a sustainable alternative, Omdena organised the global WeedBot challenge, bringing together forty collaborators to develop a vision-based weed control system that runs entirely on edge devices.

The resulting proof of concept combines a camera, embedded computing and a laser on a mobile platform to detect and eliminate weeds autonomously. Because crops and weeds often look similar from above, and targets move as the vehicle advances, the system must deliver both high accuracy and extremely low latency without relying on cloud connectivity. This article describes how the team achieved millisecond-level inference on embedded hardware by optimising deep-learning models and the full edge-based inference pipeline within just eight weeks.

Why move away from herbicides?

U.S. Geological Survey (USGS) scientists have reported that glyphosate and its degradation product AMPA (aminomethylphosphonic acid) are transported from agricultural and urban sources and are now widely present in the environment. Although chemical herbicides remain popular because they are inexpensive and easy to apply, up to 55 percent of their toxic residues persist in soil, water and air.

These residues accumulate in water bodies, harm surrounding plants and algae, and reduce overall biodiversity. Farm workers are also exposed, as chemicals can remain on crops and enter the body during harvesting. Over time, repeated herbicide use encourages the development of resistant weed species, forcing farmers to apply increasing amounts of chemicals and creating a self-reinforcing cycle of dependency.

Fig 1: Effect of chemical herbicides on the environment

Given these environmental and health costs, sustainable alternatives are urgently needed. Technology‑enabled weed control supports organic farmers by replacing repetitive manual labour with automated precision. Removing chemicals from the process yields pesticide‑free produce and helps make healthy food more affordable. The WeedBot challenge aimed to demonstrate that vision‑based systems can meet this need. This shift away from chemical-heavy practices mirrors a broader movement across the agri-tech ecosystem, where leading sustainable agriculture companies and organisations are investing in AI-driven alternatives that reduce environmental harm while maintaining yields.

The accompanying illustration underscores how herbicides influence every layer of the ecosystem—from the soil up to the water table—and why finding mechanical alternatives matters. 


AI‑Enabled Weed Control Hardware

For this task, the team used the technical characteristics of the second laser weeding prototype developed by WeedBot.

Fig 2: Technical characteristics of WeedBot's second laser weeding prototype

In this prototype, a high-speed camera mounted above the crop row sends a constant video stream to an embedded computer, and a laser module then targets the weeds identified by the neural network. At the core of the system is fast and accurate visual understanding, building on advances in instance segmentation that allow autonomous machines to distinguish crops from weeds at pixel level in real time. This interplay between rapid sensing and immediate actuation is what enables precise weed removal without damaging crops. The hardware must operate autonomously in the field, so all computation happens on the on-board processor without reliance on remote servers or expensive connectivity.

Hardware and software framework

At the heart of the system is the Nvidia Jetson AGX Xavier, a compact module that delivers workstation‑class performance at the edge. Thanks to its carefully balanced architecture of CPUs, GPUs and memory, it can process high‑resolution images and run complex neural networks in real time. Key features include:

  • A 512‑core GPU paired with an 8‑core ARM CPU and 32 GB of memory for high‑throughput parallel processing.
  • Support for configurable power profiles at 10, 15 or 30 W, enabling the device to balance performance and energy consumption.
  • Up to 32 TeraOPS (TOPS) of compute performance and 750 Gbps of I/O bandwidth in a 100 × 87 mm form factor.

These capabilities make the Jetson AGX Xavier suitable for robots, drones and other autonomous machines where space and power are limited. In this project the device ran JetPack 4.4.1, which bundles several essential libraries:

  • TensorRT 7.1.3 — a high-performance inference runtime with support for INT8 calibration and quantization.
  • cuDNN and CUDA 10.2 — primitives and kernels optimized for deep learning and general GPU computing.

The team built a real‑time inference pipeline on top of segmentation models such as YolactEdge and Bonnetal, each paired with a lightweight backbone like MobileNet. These models provide a good balance between segmentation accuracy and speed, which is crucial when a vehicle is moving over uneven terrain. For more detail on how YolactEdge was used, read here.

Dataset and annotations

The system’s camera captures images at 3008 × 3008 px from above the crop row. Each frame contains multiple overlapping plants at various stages of growth, so careful labelling is essential. The team used the CVAT annotation tool with the MS COCO format to mark every weed and crop in each image. To improve robustness, the dataset was augmented with Albumentations to simulate variations in brightness, rotation and scale—conditions that a field‑deployed system will inevitably encounter. Additional annotation classes were introduced so the model could distinguish among different types of plants rather than simply grouping all weeds together; this more granular labelling improved the mean average precision (mAP) during inference.
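To make the augmentation step concrete, the sketch below shows a minimal Albumentations pipeline of the kind described above; the specific transforms and parameter values are illustrative rather than the project's exact configuration.

```python
# Minimal sketch of an Albumentations pipeline for image + mask pairs.
# Transform choices and parameters are illustrative, not the project's settings.
import albumentations as A
import cv2

augment = A.Compose([
    A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.5),  # lighting variation
    A.Rotate(limit=15, border_mode=cv2.BORDER_REFLECT, p=0.5),                    # camera / vehicle tilt
    A.RandomScale(scale_limit=0.2, p=0.5),                                        # distance-to-canopy changes
    A.HorizontalFlip(p=0.5),
])

def augment_sample(image, mask):
    """Apply the same spatial transforms to the image and its segmentation mask."""
    out = augment(image=image, mask=mask)
    return out["image"], out["mask"]
```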

Fig 3: mAP on a validation image (3008 × 3008 px resized to 550 × 550 px)

Overview of pipeline optimizations

Building a responsive weed control system required reducing both the size and the complexity of the neural networks and streamlining the surrounding code. In practice this meant examining not only the architecture of the model itself but also how data moved through the application from camera to laser. These optimizations are especially effective when paired with modern image segmentation techniques for weed or crop detection, which balance accuracy and speed for deployment on resource-constrained edge devices. The team investigated three broad areas:

  1. Model‑level optimizations using pruning, quantization and knowledge distillation to create lightweight networks.
  2. TensorRT conversion and calibration to exploit Nvidia’s optimized inference runtime.
  3. Pipeline improvements outside the model, including rewriting critical Python code in C++ and accelerating post‑processing with GPU‑based libraries.

Each approach is explored in more detail below.

Together, these techniques form a holistic strategy: a compact model running on a tuned inference engine will still fall short if it is surrounded by slow preprocessing steps, and an efficient pipeline will not succeed if the model is too large for the hardware. The following sections describe how each layer was optimized.

Optimizing deep learning models

Model pruning

Pruning reduces the number of parameters in a neural network by removing connections or neurons that contribute little to the final prediction. This method lowers memory consumption, latency and power draw while retaining accuracy. Research such as the Lottery Ticket Hypothesis suggests that a large network contains smaller sub‑networks that, when trained in isolation, perform as well as the full model. By iteratively pruning weights and neurons—sometimes entire neurons, as illustrated below—the team produced compact networks that ran efficiently on the Jetson. Instead of randomly dropping parameters, they used heuristics based on weight magnitudes and activation patterns to decide which parts of the network were expendable.

Fig 4: Removing one entire neuron. Left: the unpruned neural network with a neuron in red that is thought to be unnecessary. Right: the equivalent pruned neural network, whose computational complexity is 25% smaller

Different pruning strategies exist, including iterative pruning, weight pruning, and neuron pruning. Regardless of the strategy, the goal is to remove redundant structure without harming performance. The team referenced a variety of neural network architectures to guide these choices.
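As a minimal illustration of these ideas, the following PyTorch sketch applies magnitude-based weight pruning and channel-level (neuron) pruning with torch.nn.utils.prune; the sparsity levels are assumptions for demonstration, not the project's settings.

```python
# Minimal pruning sketch using PyTorch's built-in utilities.
# The 30% / 25% sparsity levels are illustrative assumptions.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        # Weight pruning: zero out the 30% smallest-magnitude weights.
        prune.l1_unstructured(module, name="weight", amount=0.3)
        # Fold the binary mask back into the weight tensor permanently.
        prune.remove(module, "weight")

# Neuron (channel) pruning: drop whole output channels with the lowest L1 norm,
# analogous to removing the red neuron in Fig 4.
prune.ln_structured(model[2], name="weight", amount=0.25, n=1, dim=0)
prune.remove(model[2], "weight")
```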

Model quantization

By default, most deep‑learning frameworks store weights and activations with FP32 precision. Converting a model to INT8 reduces its size by a factor of four and lowers memory bandwidth requirements, which in turn lowers heat and power consumption on an embedded board. INT8 models typically deliver two to four times higher throughput on supported hardware. There are several ways to perform quantization:

  • Dynamic quantization and static quantization — both are applied after training by converting FP32 weights to lower precision during inference. These methods are simple to implement but may incur some loss in accuracy.
  • Quantization‑aware training (QAT) — quantization is incorporated into the training loop so the model learns to compensate for reduced precision. QAT often yields little or no loss in accuracy but requires more training time.

The team evaluated these techniques and found that INT8 quantization provided the best trade‑off between latency and accuracy for their hardware.

Fig 5: Model quantization

This choice does come with engineering overhead: calibration routines must be prepared and, in the case of quantization‑aware training, the training pipeline itself must change. Nonetheless, when every millisecond counts, the extra effort pays dividends.
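For readers who want to see what a post-training workflow looks like in practice, the sketch below uses PyTorch's eager-mode quantization API on a toy model; the model, backend choice and calibration loop are illustrative assumptions rather than the project's code.

```python
# Minimal sketch of post-training static quantization with PyTorch's eager-mode API.
import torch
import torch.nn as nn

class TinySegHead(nn.Module):
    """Toy stand-in for a segmentation head; stubs mark where tensors enter/leave INT8."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = TinySegHead().eval()
# "fbgemm" targets x86; on an ARM device such as the Jetson one would typically use "qnnpack".
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
prepared = torch.quantization.prepare(model)   # insert observers

# Calibration: run a handful of representative inputs through the observers.
with torch.no_grad():
    for _ in range(8):
        prepared(torch.rand(1, 3, 224, 224))

model_int8 = torch.quantization.convert(prepared)  # weights/activations now INT8
```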

Knowledge distillation

Fig 6: The generic teacher-student framework for knowledge distillation

Distillation compresses a network by teaching a smaller student model to mimic a larger teacher model. Introduced in a paper titled “Distilling the Knowledge in a Neural Network” by Geoffrey Hinton, Oriol Vinyals and Jeff Dean, the method first trains a high‑capacity teacher on ground‑truth labels. The predictions from this teacher become soft targets for the student. By matching these soft targets rather than hard labels, the student learns generalised behaviour with far fewer parameters. In the WeedBot project, this strategy allowed the team to deploy models that were faster and more memory‑efficient without sacrificing much accuracy. Distillation is particularly appealing when a project must ship quickly: instead of redesigning a new architecture, engineers can compress an existing model and preserve its behaviour.
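A minimal sketch of the distillation objective is shown below: the student is trained against the teacher's temperature-softened outputs as well as the ground-truth labels. The temperature and weighting values are illustrative, not those used in the project.

```python
# Minimal sketch of the teacher-student distillation loss (Hinton et al.):
# soft-target KL divergence blended with the usual hard-label cross-entropy.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: teacher and student distributions softened by temperature T.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale to keep gradients comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```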

Nvidia TensorRT optimization

To make the most of the Jetson’s GPU, the models were converted to TensorRT, Nvidia’s high‑performance inference library. TensorRT accepts networks exported to ONNX, Caffe or UFF formats and compiles them into an optimized engine. During this build step, developers can set the batch size, workspace size, and choose between FP32, FP16 or INT8 precision. For INT8 engines, TensorRT requires a calibration dataset to compute appropriate scaling factors. These calibration samples should reflect real‑world inputs so that quantization errors are minimized. The resulting binary can be serialized and loaded directly at runtime.

Fig 7: ONNX workflow
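The sketch below shows what this build step can look like with the TensorRT 7.x Python API; the ONNX file name, workspace size and calibrator object are assumptions rather than the project's actual code.

```python
# Minimal sketch of building a TensorRT engine from an ONNX model (TensorRT 7.x API).
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

def build_engine(onnx_path, int8_calibrator=None):
    builder = trt.Builder(LOGGER)
    network = builder.create_network(EXPLICIT_BATCH)
    parser = trt.OnnxParser(network, LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(str(parser.get_error(0)))

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30            # 1 GiB workspace (illustrative)
    if int8_calibrator is not None:
        config.set_flag(trt.BuilderFlag.INT8)      # enable INT8 kernels
        config.int8_calibrator = int8_calibrator   # fed with representative field images
    return builder.build_engine(network, config)

# Hypothetical usage:
# engine = build_engine("bonnetal_mobilenet.onnx", int8_calibrator=my_calibrator)
# with open("bonnetal_int8.engine", "wb") as f:
#     f.write(engine.serialize())
```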

The team benchmarked their models at two input resolutions, 1920 × 1200 px and 560 × 560 px, to study the trade‑off between latency and precision. Lower‑resolution inputs reduce computation and increase throughput, while higher resolutions preserve detail and maintain accuracy. Across the experiments, the INT8‑calibrated engine consistently achieved the lowest latency at 560 × 560 px, making it the preferred setting for real‑time operation.

These results illustrate a broader lesson: larger inputs and higher precision may improve accuracy, but they also raise memory use and computational cost. Selecting the optimal resolution and precision requires profiling different combinations and choosing the sweet spot for the task at hand.

Benchmark results of the Bonnetal model

Fig 8: Benchmark Results of Bonnetal Model

One of the most promising candidate networks was the Bonnetal model, a lightweight semantic segmentation architecture. After applying INT8 quantization and TensorRT calibration, the Bonnetal engine delivered strong performance on the Xavier platform. Figure 8 summarises the benchmarking results, highlighting how pruning and quantization shift the balance between speed and accuracy. While absolute numbers vary with resolution and batch size, the relative trends were consistent across trials.

Refining the surrounding pipeline

Optimizing the neural network is only part of the challenge. The WeedBot pipeline also includes image preprocessing, post‑processing and other tasks that run alongside inference. The team examined each component and rewrote critical code to squeeze out additional speed.

Translating Python to C++

Several supporting tasks originally written in Python were moved to C++ for performance. Running on a separate CPU thread in parallel with inference, these tasks include data loading, image transformations and control logic. Because Python is interpreted and lacks compile‑time optimizations, its performance on heavy numeric workloads can lag, especially when tight loops and array manipulations are involved. In contrast, C++ is compiled to machine code and can leverage aggressive optimizations, vector instructions and cache management. Converting the support routines yielded a 3.25× speed‑up compared with their Python equivalents and simplified integration with other C++ libraries.

Image preprocessing

The team employed CUDA‑enabled libraries for preprocessing images before sending them to the neural network. C++ and Python bindings both exist, and some routines were rewritten as CUDA C++ kernels to offload operations to the GPU. However, not all steps benefitted from GPU acceleration. When an operation involves relatively few computations or small images, the overhead of copying data to and from GPU memory can outweigh the gains. Careful profiling is therefore necessary to decide whether a given function should run on the CPU or GPU; the fastest path may vary depending on image size and available memory bandwidth.
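The snippet below illustrates the kind of profiling used to make that decision, assuming an OpenCV build with CUDA support (the cv2.cuda module); it times a CPU resize against a GPU resize including the host-device transfers that dominate for small workloads.

```python
# Minimal CPU-vs-GPU resize profiling sketch; assumes OpenCV compiled with CUDA support.
import time
import cv2
import numpy as np

image = np.random.randint(0, 255, (3008, 3008, 3), dtype=np.uint8)

t0 = time.perf_counter()
cpu_out = cv2.resize(image, (560, 560))
t1 = time.perf_counter()

gpu_in = cv2.cuda_GpuMat()
gpu_in.upload(image)                            # host -> device copy counts against the GPU path
gpu_out = cv2.cuda.resize(gpu_in, (560, 560))
result = gpu_out.download()                     # device -> host copy
t2 = time.perf_counter()

print(f"CPU resize: {(t1 - t0) * 1e3:.2f} ms, "
      f"GPU resize incl. transfers: {(t2 - t1) * 1e3:.2f} ms")
```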

In order to integrate C++ code into the predominantly Python pipeline, the developers created a Pybind11 wrapper. Pybind11 is a lightweight header‑only library that exposes C++ types and functions as native Python objects. It supports overloaded functions, custom data structures and static methods, and makes use of C++11 features such as tuples and lambdas. By overloading the NumPy ndarray type, the wrapper allows seamless exchange of arrays between Python and C++ without data copying.
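As a rough illustration of how such a binding is consumed from the Python side, the sketch below imports a hypothetical pybind11-compiled module (weedbot_cpp) and passes a NumPy array across the language boundary; the module and function names are invented for illustration only.

```python
# Hypothetical usage of a pybind11-compiled extension from the Python pipeline.
# The module name (weedbot_cpp) and function are placeholders, not the project's real API.
import numpy as np
import weedbot_cpp  # C++ post-processing routines compiled with pybind11

mask = np.zeros((560, 560), dtype=np.uint8)

# py::array_t bindings let the C++ side read the NumPy buffer directly, without a copy,
# so large masks cross the language boundary cheaply.
centroids = weedbot_cpp.extract_weed_centroids(mask)
```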

CuPy enhancements

Post-processing tasks in the pipeline originally relied on NumPy routines. To further reduce latency, the team experimented with replacing NumPy functions with CuPy, a GPU-accelerated drop-in replacement. Although CuPy can significantly speed up large array operations, the post-processing in this pipeline involved relatively small computations and occasional branching. As a result, the overhead of launching CUDA kernels outweighed the benefits, and the CuPy conversion did not yield a measurable improvement. This finding underscores the importance of benchmarking even intuitive optimizations: not every GPU library automatically yields a faster program, and micro-benchmarks are essential to avoid wasted effort.
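The following micro-benchmark sketch shows why: for small arrays, the cost of launching CUDA kernels (and synchronising) can exceed the arithmetic itself. The array sizes and iteration counts are illustrative.

```python
# Minimal NumPy-vs-CuPy micro-benchmark sketch for small post-processing arrays.
import time
import numpy as np
import cupy as cp

small = np.random.rand(64, 64).astype(np.float32)

t0 = time.perf_counter()
for _ in range(1000):
    np.argmax(small, axis=1)
t1 = time.perf_counter()

small_gpu = cp.asarray(small)
cp.cuda.Stream.null.synchronize()               # exclude the initial transfer from the timing
t2 = time.perf_counter()
for _ in range(1000):
    cp.argmax(small_gpu, axis=1)
cp.cuda.Stream.null.synchronize()               # wait for all queued kernels to finish
t3 = time.perf_counter()

print(f"NumPy: {(t1 - t0) * 1e3:.1f} ms, CuPy: {(t3 - t2) * 1e3:.1f} ms "
      f"for 1000 small argmax calls")
```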

Conclusion

By combining model compression, TensorRT optimization, and targeted code refactoring, the WeedBot team reduced end-to-end inference latency to about 12 ms while maintaining high precision. Techniques such as pruning, quantization, and distillation cut model size without accuracy loss, while TensorRT leveraged Jetson hardware for faster runtimes and C++ rewrites removed Python overhead. Together, these optimizations show how careful engineering can turn complex models into responsive, field-ready edge AI systems.

If you are exploring how to deploy low-latency AI on edge devices for agriculture or robotics, Omdena works with teams to turn research prototypes into production-ready systems. Reach out to explore how similar optimizations can be applied to your edge AI use case.

FAQs

What does edge-based inference mean in this context?
It is running weed detection AI models directly on embedded devices in the field, without relying on cloud processing.

Why does latency matter so much?
Low latency is critical because the system must detect and remove weeds in real time as the robot moves.

Which hardware was the system optimized for?
The system was optimized on an Nvidia Jetson AGX Xavier, designed for high-performance edge AI workloads.

Which optimization techniques were applied?
The team used model pruning, INT8 quantization, TensorRT optimization and pipeline-level code improvements.

What role does TensorRT play?
TensorRT converts trained models into highly optimized inference engines that run faster with lower memory usage.

Does compressing the models hurt accuracy?
When done carefully with pruning and distillation, accuracy is largely preserved while latency is reduced.

Why were parts of the pipeline rewritten in C++?
C++ executes faster and removes interpreter overhead, significantly reducing end-to-end pipeline latency.

Can this approach really replace chemical spraying?
Yes, it enables precise laser-based weed removal, reducing or eliminating the need for chemical spraying.