Representative role. This is a composite of searches we run regularly at this level and stack. If your background matches, apply anyway; we'll match you to similar live briefs.
An Australian IoT company is shipping edge-AI inference on its product line. This role sits between the ML team (who train the models) and the product-firmware team (who ship the product) — your job is making the handoff work.
You'll work primarily with Jetson Orin Nano and Hailo-8L class hardware, deploying vision and time-series models, and keeping the pipeline from model training to fleet rollout fast and reliable.
What you’ll do
- Take trained models and make them run efficiently on Jetson / Hailo / Coral — quantisation, TensorRT, tooling
- Build the deployment pipeline: OTA firmware + model bundle + rollback
- Profile inference and optimise — latency, thermal, power
- Write the fleet-level monitoring that tells us when an edge device's inference quality degrades
What you bring
- 3+ years shipping embedded / edge ML deployments
- Hands-on with at least one edge-AI accelerator (Jetson, Hailo, Coral, Qualcomm AI Engine)
- Python + C/C++; comfortable with ONNX, TensorRT, or TFLite Micro
- Linux on embedded; device tree basics; cross-compilation
Nice-to-haves
- Experience with Foxglove or other edge-fleet tooling
- Quantisation-aware training experience
- Prior role at an edge-AI or robotics company
