Data Engineer (Senior) ID40199

AgileEngine

About

AgileEngine is an Inc. 5000 company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries. We rank among the leaders in areas like application development and AI/ML, and our people-first culture has earned us multiple Best Place to Work awards.

WHY JOIN US

If you're looking for a place to grow, make an impact, and work with people who care, we'd love to meet you!

ABOUT THE ROLE

We are looking for a Senior Data Engineer to take ownership of our data infrastructure, designing and optimizing high-performance, scalable solutions. You’ll work with AWS and big data frameworks like Hadoop and Spark to drive impactful data initiatives across the company.
WHAT YOU WILL DO

  • Design, build, and maintain large-scale data pipelines and data processing systems in AWS.
  • Develop and optimize distributed data workflows using Hadoop, Spark, and related technologies.
  • Collaborate with data scientists, analysts, and product teams to deliver reliable and efficient data solutions.
  • Implement best practices for data governance, security, and compliance.
  • Monitor, troubleshoot, and improve the performance of data systems and pipelines.
  • Mentor junior engineers and contribute to building a culture of technical excellence.
  • Evaluate and recommend new tools, frameworks, and approaches for data engineering.
MUST HAVES

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field;
  • 5+ years of experience in data engineering, software engineering, or related roles;
  • Strong hands-on expertise with AWS services (S3, EMR, Glue, Lambda, Redshift, etc.);
  • Deep knowledge of big data ecosystems, including Hadoop (HDFS, Hive, MapReduce) and Apache Spark (PySpark, Spark SQL, streaming);
  • Strong SQL skills and experience with relational and NoSQL databases;
  • Proficiency in Python, Java, or Scala for data processing and automation;
  • Experience with workflow orchestration tools (Airflow, Step Functions, etc.);
  • Solid understanding of data modeling, ETL/ELT processes, and data warehousing concepts;
  • Excellent problem-solving skills and the ability to work in fast-paced environments;
  • Ability to work German business hours (roughly 6-7 am to 2-3 pm Brazil/ART time);
  • Upper-Intermediate English level.
NICE TO HAVES

  • Experience with real-time data streaming platforms (Kafka, Kinesis, Flink).
  • Knowledge of containerization and orchestration (Docker, Kubernetes).
  • Familiarity with data governance, lineage, and catalog tools.
  • Previous leadership or mentoring experience.
PERKS AND BENEFITS

  • Professional growth: Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps.
  • Competitive compensation: We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities.
  • A selection of exciting projects: Join projects with modern solution development and top-tier clients, including Fortune 500 enterprises and leading product brands.
  • Flextime: Tailor your schedule for an optimal work-life balance, with the option of working from home or going to the office, whatever makes you the happiest and most productive.