AI Security Engineer
PAIR Finance
Seniority
Senior
Model
Hybrid
Sector
Salary
Undisclosed
Contract
Full-Time
About the role
We're looking for a skilled AI Security Engineer (f/m/d) to strengthen the security of our intelligent systems. You'll be instrumental in protecting our AI/ML pipelines, embedding security best practices across the full machine learning lifecycle, and ensuring compliance with evolving industry standards. Working closely with data scientists, ML engineers, and platform teams, you'll define and implement concrete security controls across our pipelines, infrastructure, and applications.
What you'll do
- Consult on and review secure architectures for our AI systems – from in-house models to third-party LLMs (incl. RAG, vector databases, APIs, and integrations into our products and internal tools)
- Conduct AI-specific threat modeling and security reviews across the ML lifecycle (data → training → deployment → monitoring)
- Perform security testing / red-teaming of LLM and ML systems (e.g. prompt injection tests, jailbreaks, exfiltration and data-leakage tests)
- Work closely with data scientists, ML engineers, platform engineers, and Compliance & IT Security to define and implement concrete controls across pipelines, infrastructure, and applications
- Own and support AI risk assessments, and help write/review policies, standards and governance documentation for AI use
- Translate EU AI Act, financial-services regulation and relevant standards into practical technical and process controls
- Help define monitoring, logging and incident response for AI/LLM systems, including misuse and data-leak detection
- Collaborate with Legal, Compliance and Procurement on AI vendor selection, risk assessments and contract reviews
What you'll need
- Demonstrable experience in AI/ML security in a production context – not just general cybersecurity
- Practical knowledge of LLM-specific risks such as prompt injection and jailbreaks, data leakage and sensitive-information exposure, model inversion, membership inference, and supply-chain risks in AI tooling and models
- Solid understanding of the ML lifecycle and typical MLOps setups (data pipelines, training, evaluation, deployment, CI/CD, monitoring) and where to place security controls
- Experience designing or reviewing secure architectures for AI/LLM systems, including API security and authentication/authorization, secrets management, isolation of tenants/contexts and access control
- Experience working side-by-side with data scientists or ML engineers – you have credibility in technical rooms and can challenge design decisions constructively
- Ability to read Python code and basic ML pipelines and to build small scripts/tools
- Background in risk assessment and in writing or reviewing policy and governance documentation
- Familiarity with relevant AI standards and frameworks such as ISO/IEC 42001, the OWASP Top 10 for LLM Applications, the NIST AI RMF, and the OECD AI Principles
- Understanding of EU AI Act obligations and how they apply to a fintech / financial services context
- Strong grasp of data protection and privacy-by-design in AI
Nice to have
- Experience reviewing AI vendor contracts or working with procurement/legal on technology and SaaS agreements
- Prior audit or regulatory experience, ideally representing technical systems to auditors or financial regulators
- Experience with logging, monitoring and incident response for AI or other high-risk systems
- Background in financial services or fintech, or another highly regulated industry
What we offer
- Opportunity to change the market and make a real impact
- Work at the intersection of AI and fintech solving complex security challenges
- Collaborative environment with data scientists, ML engineers, and platform teams
- Company values: Transparency, Execution, Ownership, Customer Centricity, Innovation and Integrity

