Source: Reed · Ref: 56789386

Data Engineer

Harnham - Data & Analytics Recruitment · Manchester · Posted 2 weeks ago
🏠 Hybrid · 💰 £70,000-£90,000/year · ⭐ Senior

Job description

Original text imported from Reed

Senior Data Engineer · £70,000-£90,000 · Manchester (Hybrid)

This is an exciting opportunity to join a growing tech scale-up where data is at the heart of their product. You will play a key role in shaping their data infrastructure, enabling large-scale machine learning and real-time analytics that directly impact their customers.

THE COMPANY

They are a high-growth SaaS organisation operating in the digital advertising space. Their platform helps cus...

Key skills

AI-extracted from the job advert

Must-have skills
Python · SQL · AWS · Apache Spark · ETL Development · Data Pipeline Architecture
Nice-to-have
Kafka · Snowflake · Docker · Kubernetes · Terraform · Apache Airflow · Scala
Soft skills
Problem Solving · Collaboration · Communication · Autonomy · Innovation · Analytical Thinking

Application advice

5 AI-generated recommendations to maximise your chances.

1. ⭐ Highlight your AWS and Apache Spark expertise at the top, as this role focuses on cloud-based data infrastructure.
2. 📊 Quantify your data engineering impact, e.g. 'Built ETL pipelines processing 2TB daily, reducing latency by 45%'.
3. 🌐 Emphasise real-time analytics experience, as the company needs large-scale ML and real-time processing.
4. 🎯 Showcase SaaS or digital advertising domain knowledge to demonstrate industry alignment.
5. 🤝 Detail your experience with machine learning infrastructure and data pipeline scalability.


Suggested CV bullets

3 bullets our AI drafted for this specific advert, mirroring its ATS keywords.

How to tailor your CV

Add these 3 bullets under your most recent experience:

  • Architected AWS-based ETL pipelines processing 3TB daily advertising data using Apache Spark, reducing processing time from 8 hours to 45 minutes
  • Built real-time analytics infrastructure with Kafka and Snowflake, enabling ML model predictions with sub-200ms latency for 2M+ daily requests
  • Led data pipeline migration to Kubernetes, improving system uptime to 99.8% and reducing infrastructure costs by £180k annually


AI cover letter

Your cover letter is ready

We've drafted a cover letter for Harnham - Data & Analytics Recruitment. Preview the opening, then unlock the full personalised version.

Letter preview — tailored to Harnham - Data & Analytics Recruitment

Dear Hiring Manager,

Your Senior Data Engineer position at this growing SaaS scale-up immediately caught my attention, particularly the focus on machine learning infrastructure and real-time analytics within the digital advertising space. My expertise in AWS, Apache Spark, and ETL pipeline development aligns perfectly with your need to process large-scale datasets that directly impact customer outcomes.

My background in building scalable data infrastructure for high-growth technology companies has equipped me with the skills to architect robust pipelines and optimise performance at enterprise scale. I have successfully implemented real-time analytics systems and collaborated closely with data science teams to enable ML model deployment in production environments.



Interview questions

10 questions generated from this advert.

Technical

  • How would you design a real-time data pipeline using Kafka and Spark for processing advertising data?
  • Explain your approach to optimising ETL performance when dealing with terabyte-scale datasets
  • How do you ensure data quality and consistency in a distributed data architecture?
  • Describe your experience with AWS data services and infrastructure as code using Terraform
  • What strategies would you use to handle schema evolution in a high-volume streaming environment?
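For the data-quality question above, a common pattern worth being ready to discuss is validate-and-quarantine: check each record against explicit rules before it enters the pipeline, and route failures aside for replay rather than silently dropping them. A minimal sketch in plain Python — the record fields and rules here are invented for illustration, not taken from the advert; production pipelines would typically drive the rules from a schema registry or a dedicated validation tool:

```python
from datetime import datetime

# Hypothetical shape of an advertising-event record, for illustration only.
REQUIRED_FIELDS = {"event_id", "campaign_id", "timestamp", "spend_gbp"}

def validate_record(record: dict) -> list[str]:
    """Return a list of rule violations; an empty list means the record is clean."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    if "spend_gbp" in record and not isinstance(record["spend_gbp"], (int, float)):
        errors.append("spend_gbp must be numeric")
    if "timestamp" in record:
        try:
            datetime.fromisoformat(record["timestamp"])
        except (TypeError, ValueError):
            errors.append("timestamp is not ISO-8601")
    return errors

def partition_batch(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split a batch into clean records and quarantined ones, keeping the reasons."""
    clean, quarantine = [], []
    for rec in records:
        errors = validate_record(rec)
        if errors:
            quarantine.append({"record": rec, "errors": errors})
        else:
            clean.append(rec)
    return clean, quarantine
```

Keeping quarantined records replayable, rather than discarding them, is what makes consistency auditable across a distributed architecture.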

Behavioural

  • Tell me about a time when you had to troubleshoot a critical data pipeline failure under pressure
  • Describe a situation where you had to collaborate with data scientists to implement ML infrastructure
  • Give an example of how you've improved data processing efficiency in a previous role
  • Tell me about a time when you had to learn a new technology quickly to meet project deadlines
  • Describe how you've handled conflicting priorities between different stakeholders in a data project

STAR answer examples

Model answers using the Situation-Task-Action-Result framework. Adapt to your own experience.

Question 1

Tell me about a time when you had to troubleshoot a critical data pipeline failure under pressure

During a Black Friday campaign, our real-time bidding pipeline failed at 2am, affecting £50k hourly ad spend. I immediately identified the issue as a Kafka partition rebalancing problem causing 3-hour data lag. I implemented emergency consumer group rebalancing and deployed a hotfix within 45 minutes. I then coordinated with the DevOps team to implement monitoring alerts and circuit breakers. The pipeline was fully restored with zero data loss, and we prevented similar incidents through improved error handling and automated failover mechanisms.
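The "circuit breakers" mentioned in that answer are worth being able to sketch in an interview. This minimal pure-Python version (class name, thresholds, and cool-down are invented, not from any particular library) trips open after a run of consecutive failures so that a stalled downstream system fails fast instead of letting lag pile up, then allows a trial call after a cool-down:

```python
import time

class CircuitBreaker:
    """Trip open after `max_failures` consecutive errors; retry after `reset_after` seconds."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock          # injectable, so tests can fake time
        self.failures = 0
        self.opened_at = None       # None means the circuit is closed

    @property
    def is_open(self):
        if self.opened_at is None:
            return False
        if self.clock() - self.opened_at >= self.reset_after:
            # Cool-down elapsed: move to half-open and allow one trial call.
            self.opened_at = None
            self.failures = self.max_failures - 1
            return False
        return True

    def call(self, fn, *args, **kwargs):
        if self.is_open:
            raise RuntimeError("circuit open: downstream call skipped")
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0           # any success closes the circuit fully
        return result
```

The half-open state is the key design choice: one probing call decides whether to close the circuit again or trip it immediately, so recovery is automatic without hammering a still-broken dependency.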
Question 2

Describe a situation where you had to collaborate with data scientists to implement ML infrastructure

Our data science team needed to deploy a customer churn prediction model but lacked production infrastructure. I worked closely with 4 data scientists to understand their Python scikit-learn model requirements and designed a scalable MLOps pipeline using AWS SageMaker and Lambda. I built automated model training workflows processing 500k customer records daily and implemented A/B testing infrastructure. The deployment reduced manual model updates from 2 weeks to 2 hours, enabling the team to iterate 8x faster and improve model accuracy by 12%.
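The A/B testing infrastructure mentioned above usually rests on one small, deterministic primitive: bucketing each user into a variant by hashing a stable ID, so the same user always sees the same arm without storing any assignment state. A sketch — the experiment name and 50/50 split are illustrative, not from the answer:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into variant 'A' or 'B'.

    Hashing user_id together with the experiment name gives each
    experiment an independent assignment for the same user.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex chars to a float in [0, 1).
    bucket = int(digest[:8], 16) / 0x100000000
    return "A" if bucket < split else "B"
```

Because the assignment is a pure function of `(experiment, user_id)`, it can be recomputed anywhere in the pipeline — ingestion, model serving, or offline analysis — and always agree.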
