Data Engineer
Ardanis
Brazil
About
We are looking for a Mid-Senior Data Engineer to design, build, and maintain scalable data pipelines and cloud-based data solutions. This role requires strong hands-on engineering skills, the ability to work autonomously on well-defined problems, and experience operating data pipelines in production environments. You will contribute to the development of modern data platforms, working closely with senior engineers, product teams, and analytics stakeholders in an Agile environment.
Key Responsibilities
- Design, develop, and maintain reliable and scalable data pipelines.
- Implement ETL/ELT workflows for batch and streaming data processing.
- Develop data processing jobs using Apache Spark (Python or Scala); see the sketch after this list.
- Build and maintain cloud-native data solutions on Azure or AWS.
- Implement data transformations and models using DBT or equivalent tools.
- Ensure code quality, testing, and documentation across data pipelines.
- Participate in CI/CD pipelines for data engineering workloads.
- Monitor, troubleshoot, and optimize data pipeline performance and reliability.
- Collaborate with senior engineers on architecture and design decisions.
- Work within Agile teams, contributing to planning, estimation, and delivery.
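
To give candidates a concrete sense of the pipeline work described above, here is a minimal, illustrative PySpark sketch of a batch job combining a simple transformation with a basic validation step. The dataset, paths, column names, and checks are hypothetical examples, not part of the actual stack.

```python
# Illustrative sketch only: a small PySpark batch job of the kind described above.
# Paths, column names, and validation rules are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def build_daily_orders(spark: SparkSession, input_path: str, output_path: str) -> None:
    """Read raw order events, aggregate them by day, validate, and write out."""
    raw = spark.read.parquet(input_path)

    daily = (
        raw
        .filter(F.col("status") == "completed")
        .groupBy(F.to_date("created_at").alias("order_date"))
        .agg(
            F.count(F.lit(1)).alias("order_count"),
            F.sum("amount").alias("total_amount"),
        )
    )

    # Basic data validation: fail fast if the aggregation produced no rows
    # or if any daily totals are negative (both would indicate an upstream problem).
    if daily.count() == 0:
        raise ValueError("Validation failed: no completed orders found in input")
    if daily.filter(F.col("total_amount") < 0).count() > 0:
        raise ValueError("Validation failed: negative daily totals detected")

    daily.write.mode("overwrite").partitionBy("order_date").parquet(output_path)


if __name__ == "__main__":
    spark = SparkSession.builder.appName("daily_orders_example").getOrCreate()
    build_daily_orders(
        spark,
        "s3://example-bucket/raw/orders/",
        "s3://example-bucket/curated/daily_orders/",
    )
    spark.stop()
```

In production this kind of job would typically be scheduled by an orchestrator, covered by unit and data-validation tests, and deployed through a CI/CD pipeline, in line with the responsibilities listed above.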
Requirements
- 3–4 years of experience in Data Engineering or similar roles.
- Strong programming skills in Python or Scala.
- Hands-on experience with Apache Spark for large-scale data processing.
- Experience working with cloud platforms (Azure or AWS).
- Solid knowledge of SQL and relational data modelling.
- Experience building and maintaining production-grade data pipelines.
- Hands-on experience with CI/CD pipelines for data workflows.
- Experience with data testing (unit, integration, data validation).
- Ability to work independently on assigned tasks with guidance when needed.
- Strong collaboration and communication skills.
Nice-to-Have
- Experience with DBT or similar data transformation frameworks.
- Experience with Infrastructure as Code (IaC), preferably Terraform.
- Exposure to NoSQL databases.
- Experience with streaming platforms (e.g. Kafka, Kinesis, Event Hubs).
- Familiarity with data quality, monitoring, and observability tools.