MLOps Engineer | AI Infrastructure | £125k
Fudo Partners, a specialist technology recruitment agency, is excited to represent an innovative AI startup in Central London in its search for a skilled MLOps Engineer. This is a unique opportunity to work at the forefront of AI deployment, optimizing machine learning operations and ensuring seamless production scalability.
About the Company
Our client is a cutting-edge AI company revolutionizing how businesses manage transactions and financial workflows. Their AI solutions transform complex data into structured insights, empowering users with natural language interactions and improving transparency, efficiency, and decision-making. They focus on enhancing user experience, streamlining workflows, and driving innovation in AI-powered automation.
About the Role
As an MLOps Engineer, you will be instrumental in deploying, monitoring, and maintaining AI models at scale. You will bridge the gap between machine learning and DevOps, ensuring models are production-ready, highly available, and continuously optimized for peak performance.
Key Responsibilities:
Develop and maintain scalable and automated MLOps infrastructure to facilitate seamless model deployment, versioning, and monitoring.
Build self-service AI tools that empower data science teams to deploy models efficiently while ensuring operational excellence.
Optimize model serving infrastructure for real-time inference, batch processing, and AI-driven APIs, ensuring low-latency, high-throughput execution across cloud and on-prem environments.
Implement AI observability and monitoring tools to track performance metrics such as model drift, accuracy, and inference speed, ensuring reliability in production.
Collaborate with DevOps and AI teams to integrate best practices for scalable machine learning operations.
Foster a culture of continuous improvement, sharing knowledge and refining MLOps best practices within the organization.
Required Skills & Qualifications:
2+ years of hands-on experience in MLOps, AI infrastructure, or DevOps, with a proven track record of deploying and managing machine learning models in production.
Expertise in AWS and Helm deployments, with deep knowledge of Kubernetes, Docker, and Terraform. Familiarity with serverless AI architectures and GPU/TPU-accelerated workloads is a plus.
Hands-on experience with ML model serving frameworks such as TensorFlow Serving, TorchServe, or KFServing (now KServe), ensuring high-performance AI services.
Strong background in AI pipeline orchestration and automation, with experience using tools like Kubeflow Pipelines, MLflow, Dagster, or Prefect.
Passion for optimizing MLOps workflows, enabling AI teams to iterate and deploy with efficiency.
A collaborative mindset, with enthusiasm for mentoring and sharing best practices within AI infrastructure teams.
Why Join?
Flexible & Hybrid Working: Manage your own schedule with flexible hours, and work from a modern office or remotely, with no fixed in-office days required.
Comprehensive Health Benefits: Additional health insurance, including dental coverage.
Ready to Apply?
If you're passionate about MLOps and AI infrastructure, we’d love to hear from you! Apply now to be considered for this exciting opportunity.

Interested in this role or exploring new opportunities in AI?
If this role or something similar catches your eye, don’t hesitate to apply, email us, or give us a call. We’re always eager to connect with technology experts who are looking for their next challenge.
Please note: We specialise exclusively in the AI ecosystem. If you're not in this space professionally, we won't be able to assist, but we appreciate your interest!