Embedded & Physical AI

We recruit for teams where AI meets atoms — sensor fusion, edge inference, robotics firmware, and defence-grade autonomy. Engineers who have actually shipped models on hardware, not just trained them in notebooks.

What we mean by embedded AI

Most AI recruitment stops at the cloud boundary. The job spec says “ML Engineer,” the recruiter searches for PyTorch experience, and the shortlist lands on engineers who have trained models and deployed them to an API endpoint. That works for shipping SaaS features. It fails completely when the model has to run on a Jetson Orin in a drone, an STM32 in a sensor module, or a Hailo-8 in a factory inspection rig.

Embedded AI — sometimes called Physical AI — is the discipline of deploying machine learning on resource-constrained hardware that operates in the real world. The engineers who do this work fluently across two worlds: they understand model architectures and memory budgets, training dynamics and interrupt service routines, Python notebooks and C++ on bare metal.

The stack is fundamentally different from cloud ML. Quantisation (INT8, INT4) and pruning replace scaling laws. On-target profiling replaces cloud autoscaling. Sensor drivers, real-time scheduling, and over-the-air update pipelines replace REST APIs and feature stores. The failure mode isn’t a 500 error — it’s a drone that drops out of the sky.
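
To make the quantisation point concrete, here is a minimal post-training INT8 quantisation sketch using the TensorFlow Lite converter, one of the toolchains listed below. It is a sketch only: calibration_samples and the SavedModel path are placeholders, and a real pipeline would follow conversion with accuracy checks and on-target profiling.

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # A few hundred real input samples let the converter calibrate
    # activation ranges. `calibration_samples` is a placeholder for
    # your own preprocessed data.
    for sample in calibration_samples[:200]:
        yield [np.expand_dims(sample.astype(np.float32), axis=0)]

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict to integer-only ops so the model can run on INT8 accelerators
# and microcontrollers (e.g. via TFLite Micro).
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting INT8 artefact is what gets profiled for latency, memory, and thermals on the target itself, not in the training environment.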

The embedded AI stack

Model deployment

TFLite Micro, TensorRT, ONNX Runtime, CoreML, STM32Cube.AI, OpenVINO

Hardware targets

NVIDIA Jetson (Orin, AGX), Hailo-8, Google Coral TPU, Qualcomm AI Engine, STM32, NXP

Languages & OS

C/C++ (14/17/20), Python, Rust (emerging). Linux (Yocto/Buildroot), FreeRTOS, Zephyr, QNX

Robotics & sensors

ROS 2, Foxglove, Isaac Sim, Gazebo. LiDAR, radar, IMU, EO/IR, depth cameras, CAN bus

Eight roles we recruit at depth

Each archetype below represents a distinct hiring brief we run regularly. Salary bands are AUD total package inclusive of super, Sydney market, Q1 2026.

Embedded ML Engineer

Takes models from Jupyter to Jetson. Owns quantisation, on-target profiling, and the inference runtime.

C/C++, Python, TFLite Micro, TensorRT, ONNX, STM32Cube.AI
$140k–$220k

Sensor Fusion Lead

Fuses heterogeneous sensor data (LiDAR, radar, IMU, EO/IR) into decisions an autonomous platform can act on.

C++, Kalman/particle filters, factor graphs, GTSAM, ROS 2
$200k–$290k

Computer Vision Engineer

Builds detection, tracking, and classification systems for real-world imagery — often in defence or industrial contexts.

PyTorch, OpenCV, ONNX, TensorRT, Jetson, Hailo
$135k–$220k

Robotics Firmware Engineer

Writes the firmware that makes robots move, sense, and communicate. Bridges the gap between hardware and autonomy.

C/C++, ROS 2, FreeRTOS/Zephyr, CAN bus, STM32/NXP
$130k–$195k

Edge AI Engineer

Deploys vision and time-series models to fleet hardware. Owns the OTA pipeline from model training to device rollout.

Python, C/C++, TensorRT, Jetson, Hailo, Coral TPU
$130k–$195k

Perception Stack Lead

Owns the full perception pipeline for an autonomous system — from raw sensor input to semantic understanding.

C++, PyTorch, LiDAR/camera fusion, SLAM, Isaac Sim
$220k–$300k

Autonomy Tech Lead

Sets the technical direction for an autonomy programme. Bridges ML research, firmware, systems engineering, and programme delivery.

Architecture, C++/Python, ROS 2, safety-critical systems
$240k–$320k

Embedded AI Architect

Designs the end-to-end embedded ML platform — hardware selection, inference pipeline, model lifecycle, fleet telemetry.

System design, TensorRT, CUDA, Yocto/Buildroot, CI/CD for hardware
$260k–$350k

AGSVA security clearances — what you need to know

Many embedded AI roles in Australia sit inside defence programmes or dual-use companies. If a role requires access to classified information, the candidate must hold (or be eligible for) an Australian Government Security Vetting Agency (AGSVA) clearance.

Australian citizenship is a prerequisite for all AGSVA clearances.

Baseline (PROTECTED)

Entry-level clearance for access to PROTECTED-level information. Required for most government IT and many defence-adjacent roles.

Typical processing: ~20 working days

NV1 (SECRET)

Negative Vetting Level 1. Required for access to SECRET information. Standard for most defence engineering programmes.

Typical processing: ~70 working days

NV2 (TOP SECRET)

Negative Vetting Level 2. Required for TOP SECRET access. Intensive background investigation. Reserved for senior roles on the most sensitive programmes.

Typical processing: ~100 working days

Source: AGSVA public guidance. Processing times are indicative and subject to change. Sonitec does not provide security clearance advice — for official guidance, contact AGSVA directly.

Our embedded-fluency rubric

Generic recruiter screening — “Do you know PyTorch?” — misses the point for embedded AI. We screen for four signals that separate engineers who have shipped on hardware from those who have only trained in the cloud.

1. Shipped firmware evidence

Has the candidate deployed a model to actual production hardware — not a demo board, not a Kaggle submission? We ask for specific products, device counts, and uptime metrics.

2. Quantisation fluency

Can they explain the trade-off between INT8 and INT4? Do they know quantisation-aware training vs post-training quantisation? Have they actually measured the accuracy drop on their own models? A minimal sketch of that measurement follows the rubric.

3. On-target debugging

JTAG, logic analysers, thermal profiling, instruction-level tracing — can they debug on the actual hardware, not just in a simulator? This is the skill that separates ML engineers from embedded ML engineers.

4. Sensor integration experience

Have they fused data from real sensors — cameras, LiDAR, IMUs, radar — under real-world conditions? Simulated sensor data in Isaac Sim is useful context but not a substitute for field deployment.
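
On the quantisation-fluency signal, “measured the accuracy drop” means something concrete: run the float and quantised exports of the same model over the same validation set and compare. A minimal sketch, assuming FP32 and INT8 .tflite exports of the same classifier and a labelled validation set (val_images and val_labels are placeholders):

```python
import numpy as np
import tensorflow as tf

def tflite_top1_accuracy(model_path, samples, labels):
    """Run a .tflite classifier over a validation set and return top-1 accuracy."""
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    correct = 0
    for x, y in zip(samples, labels):
        x = np.expand_dims(np.asarray(x, dtype=np.float32), axis=0)
        # Full-integer models expect quantised inputs: q = x / scale + zero_point.
        if inp["dtype"] == np.int8:
            scale, zero_point = inp["quantization"]
            x = np.round(x / scale + zero_point).astype(np.int8)
        interpreter.set_tensor(inp["index"], x)
        interpreter.invoke()
        pred = int(np.argmax(interpreter.get_tensor(out["index"])[0]))
        correct += int(pred == y)
    return correct / len(labels)

fp32_acc = tflite_top1_accuracy("model_fp32.tflite", val_images, val_labels)
int8_acc = tflite_top1_accuracy("model_int8.tflite", val_images, val_labels)
print(f"FP32 {fp32_acc:.3f} | INT8 {int8_acc:.3f} | drop {fp32_acc - int8_acc:.3f}")
```

If the drop is unacceptable, that is typically where quantisation-aware training earns its place over post-training quantisation, and where the numbers get re-measured on the target device rather than on the development machine.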

Common questions

What is different about hiring embedded AI engineers compared to SaaS ML engineers?

SaaS ML engineers typically work in Python, train models in notebooks, and deploy to cloud endpoints. Embedded AI engineers must also write C/C++, optimise models for resource-constrained hardware (quantisation, pruning, on-target profiling), work with real-time operating systems, and integrate physical sensors. The overlap is narrower than most hiring managers assume — a strong PyTorch researcher may have never touched a Jetson or written a line of firmware. We screen for the delta explicitly.

Do you recruit for defence and cleared roles?

Yes. We routinely recruit for roles requiring AGSVA security clearances (Baseline, NV1, and NV2). We understand the citizenship prerequisites, typical processing timelines, and the practical constraints of advertising cleared positions. We run NDA-first searches where required and can redact job advertising on request.

What geographies do you cover for embedded AI recruitment?

Our primary market is Australia — Sydney, Melbourne, Canberra, Adelaide, Perth, and Brisbane. We also recruit across the APAC region and maintain a growing network in the DACH region (Germany, Austria, Switzerland) for cross-border placements, particularly in defence-tech, robotics, and automotive AI.

What engagement models do you offer?

We offer four models: Retained Search for senior and executive hires, Exclusive Search for critical specialist roles, Contingent Recruitment for scaling teams, and Contract & Interim for project-based needs. For embedded AI, we most commonly run retained or exclusive searches because the talent pool is small and passive — contingent spray-and-pray does not work in this niche.

What is the minimum seniority you recruit for?

We typically recruit from mid-level (3+ years) through to VP/CTO. For embedded AI specifically, most briefs are senior (5+ years) or lead (8+ years) because companies need engineers who have actually shipped firmware and models to production hardware, not just completed a TinyML course.

Hiring embedded AI engineers?

Tell us about the role — we'll come back with a shortlist approach and a market read within five business days.