The Risk
Companies are deploying AI systems into production without thinking through the security implications. Adversarial attacks can fool image classifiers. Data poisoning can corrupt training data. Model theft and prompt injection expose proprietary systems. AI systems introduce new attack surfaces that traditional security teams aren't trained to defend.
We're seeing regulatory pressure mount. ASIC has started asking banks about AI governance. OWASP has published a framework for ML vulnerabilities. NIST released the AI Risk Management Framework. Insurance companies are beginning to ask security questions before covering AI initiatives. The gap between what companies are deploying and what they're securing is widening, and it's becoming a liability.
The Talent Gap
There are maybe a few hundred experienced AI security engineers in Australia. Demand is growing at 40%+ year-over-year. Most security teams are built on traditional cybersecurity foundations — firewalls, networks, access controls. That foundation is necessary but not sufficient for AI. You need people who understand ML internals, threat modeling for neural networks, and how to audit model behavior.
The supply problem is structural. Universities are only beginning to teach AI security. There's no recruitment pipeline. Most AI security practitioners came from security or ML backgrounds and taught themselves the intersection. These people are expensive, rare, and almost always already employed at FAANG or defence contractors.
What Companies Need
Hybrid profiles who understand both ML internals and security fundamentals. Someone who can read PyTorch code, understand how gradient-based attacks work, and also think like a threat actor. Familiarity with frameworks like OWASP for ML, NIST AI RMF, and MITRE ATLAS. Experience with model monitoring, adversarial testing, and governance automation. Ideally, someone who's thought through data lineage, model versioning, and audit trails in production.
This person doesn't exist in large numbers. The best approach is to hire for half the skills and train aggressively for the other half. Someone from threat intelligence or application security can learn ML. Someone from ML can learn security if mentored by a strong security leader. But you need to be intentional about it and give people time to ramp.
Our Take
Companies should hire AI security early, not after an incident. This is a strategic differentiator, not just a compliance checkbox. Your first AI security hire should report to the CISO or equivalent, not buried in a data team. They should own threat modeling, governance frameworks, and audit processes. They should have authority to slow down or block deployments that don't meet your risk tolerance.
If you're a hiring manager: move fast, offer competitive comp (AI security engineers are scarce), and be clear about what problems they'll solve. If you're a candidate in this space: you have leverage. Companies need you, and they're not yet optimizing for cost. If you're a recruiter: this is where we're placing the highest-value hires. The talent is underutilized, and demand will only accelerate.
Want to Discuss This Topic?
If you're building an AI security team, looking to develop expertise in this space, or curious about how to structure your AI governance, let's talk. Sonitec places AI security specialists and can help you navigate hiring and team building.
Contact Us →