AI Security Engineer
PAIR Finance
Seniority
Senior
Model
Hybrid
Salary
Undisclosed
Contract
Full-Time
About the role
We're looking for a skilled AI Security Engineer to strengthen the security of our intelligent systems. You'll be instrumental in protecting our AI/ML pipelines, embedding security best practices across the full machine learning lifecycle, and ensuring compliance with evolving industry standards.
What you'll do
- Consult on and review secure architectures for our AI systems – from in-house models to third‑party LLMs (incl. RAG, vector databases, APIs, and integrations into our products and internal tools)
- Conduct AI-specific threat modeling and security reviews across the ML lifecycle (data → training → deployment → monitoring)
- Perform security testing / red-teaming of LLM and ML systems (e.g. prompt injection tests, jailbreaks, exfiltration and data-leakage tests)
- Work closely with data scientists, machine learning engineers, platform engineers, and Compliance & IT Security to define and implement concrete controls in pipelines, infrastructure, and applications
- Own and support AI risk assessments, and help write/review policies, standards and governance documentation for AI use
- Translate EU AI Act, financial-services regulation and relevant standards into practical technical and process controls
- Help define monitoring, logging and incident response for AI/LLM systems, including misuse and data-leak detection
- Collaborate with Legal, Compliance and Procurement on AI vendor selection, risk assessments and contract reviews
What you'll need
- Demonstrable experience in Artificial Intelligence/Machine Learning security in a production context – not just general cybersecurity
- Practical knowledge of LLM-specific risks, such as prompt injection and jailbreaks, data leakage and sensitive information exposure, model inversion, membership inference, and supply-chain risks in AI tooling and models
- Solid understanding of the ML lifecycle and typical MLOps setups (data pipelines, training, evaluation, deployment, CI/CD, monitoring) and where to place security controls
- Experience designing or reviewing secure architectures for AI/LLM systems, including API security and authentication/authorization, secrets management, isolation of tenants/contexts
- Experience working side-by-side with data scientists or ML engineers – you have credibility in technical rooms and can challenge design decisions constructively
- Ability to read Python code and basic ML pipelines, and to build small scripts and tools
- Background in risk assessment and in writing or reviewing policy and governance documentation
- Understanding of EU AI Act obligations and how they apply to a fintech / financial services context
Nice to have
- Experience reviewing AI vendor contracts or working with procurement/legal on technology and SaaS agreements
- Prior audit or regulatory experience, ideally representing technical systems to auditors or financial regulators
- Experience with logging, monitoring and incident response for AI or other high-risk systems
- Background in financial services or fintech, or another highly regulated industry
What we offer
- A strong, experienced international team to support and mentor you along the way, plus a smooth onboarding process
- Personal learning & development budget as well as German and English language courses
- Permanent (unlimited) employment contract, flexible working hours, and 28 vacation days
- Company pension plan, partly covered Deutschlandticket and access to Corporate Benefits voucher platform
- Modern office near Uhlandstraße with fresh fruit, muesli and drinks
- Fun company summer and Christmas parties as well as regular team events

