Job Drop Berlin: Your Way Into Berlin Tech

AI Infrastructure Engineer

Intercom
Seniority: Midweight
Model: Hybrid
Sector: Enterprise Software
Salary: Undisclosed
Contract: Full-Time

About the role

We're looking for Senior+ AI Infrastructure Engineers to build the systems that train and serve Intercom's next generation of AI products. You'll join a small, highly technical team working at the cutting edge of modern AI infrastructure, building training pipelines and running inference for custom models like Fin Apex.

What you'll do

  • Implement and scale training pipelines for large transformer and LLM models, from data ingestion and preprocessing through distributed training and evaluation.
  • Build and optimize inference services that deliver low-latency, high-reliability experiences for our customers, including autoscaling, routing, and fallbacks.
  • Work on GPU-level performance: tuning kernels, improving utilization, and identifying bottlenecks across our training and inference stack.
  • Collaborate closely with ML scientists to implement cutting-edge training and inference methods and bring them to production.
  • Play an active role in hiring, mentoring, and developing other engineers on the team.
  • Raise the bar for technical standards, reliability, and operational excellence across Intercom's AI platform.

What you'll need

  • 5+ years of experience in software engineering, with a strong track record of shipping high-quality products or platforms.
  • Degree in Computer Science, Computer Engineering, or a related field (or equivalent experience with very strong fundamentals).
  • Hands-on experience with model training (especially transformers and LLMs), model inference at scale, or low-level GPU work such as writing CUDA or Triton kernels.
  • Comfortable working in production environments at meaningful scale, whether in traffic, data volume, or organizational complexity.
  • Deep knowledge of at least one programming language; ability to write clean, reliable code and learn new stacks quickly.
  • Clear communicator who enjoys close collaboration with both engineers and non-engineers.

Nice to have

  • Experience at AI-native companies that train and/or run inference for their own models.
  • Experience running training or inference workloads on Kubernetes.
  • Experience with AWS or other major cloud providers.
  • Production experience with Python in ML or infrastructure contexts.

What they offer

  • Competitive salary, annual bonus and equity
  • Unlimited access to Claude Code and best-in-class AI tools
  • Generous paid time off above statutory minimum
  • Hybrid working (minimum three days per week in office)