Software Engineer II – LLM Ops (LLMOps Engineer) – Brazil
Housecall Pro
1 day ago
No applications
About
- TO BE CONSIDERED FOR THIS ROLE, PLEASE SUBMIT AN UPDATED RESUME TRANSLATED TO ENGLISH
- Why Housecall Pro?
- Help us build solutions that build better lives. At Housecall Pro, we show up to work every day to make a difference for real people: the home service professionals who support America's 100 million homes. We're all about the Pro, and we dedicate our days to helping them streamline operations, scale their businesses, and, ultimately, save time so they can be with their families and live well. We care deeply about our customers and foster a culture where our company, employees, and Pros grow and succeed together. Leadership is as focused on growing team members' careers as it expects its teams to be on creating solutions for Pros.
- 🤜🤛 WHAT’S IN IT FOR YOU?
- 💻🌎Remote environment: totally built to make you feel that we are all together in one space without leaving your home office!
- 😎🏝Self Managed PTO: Beach? Mountains? Camping? Discovering new experiences? You are free to take time out as you need!
- ⏰Flexible work hours: We believe that you can reach your professional and personal goals working with us, and we encourage you to have a work-life balance!
- 💡 A culture built on innovation that values big ideas: We are always open to new ideas that will improve the life of our Pros!
- 💻 MacBook (or PC if you prefer!) + Setup Fee ($500): What is remote work without the right tools? Here at HCP, you can choose your computer and set up your home office!
- We know what you are thinking…WHAT IS THE ROLE AND WHAT WOULD YOU BE DOING? 👀
- As a Software Engineer II focused on LLM Operations, you support the development, deployment, and monitoring of production AI applications powered by large language models. You build reliable, scalable systems using our core stack: Python, FastAPI, LangChain, LangGraph, and LangSmith. You play a key role in designing agentic workflows, integrating retrieval-augmented generation (RAG) pipelines, and implementing observability standards that ensure system performance and traceability. Your curiosity, eagerness to learn, and strong communication skills empower you to grow and deliver impactful AI features in a fast-paced environment.
- Our team is passionate, empathetic, hardworking, and, above all else, focused on improving the lives of our service professionals (our Pros). Our success is their success.
In your day-to-day, you will
- Build and maintain LLM-powered applications and agent systems using Python, FastAPI, LangChain, and LangGraph
- Design and optimize agentic workflows for multi-step reasoning, tool usage, and state management
- Deploy and manage LLM applications on AWS, ensuring system reliability, performance, and scalability
- Implement observability using LangSmith to monitor token usage, latency, and prompt/response quality
- Build and maintain Airflow-based data pipelines to support LLM workflows, embeddings, and retrieval
- Implement and tune RAG (Retrieval-Augmented Generation) systems using PGVector for semantic search
- Work with Snowflake to manage evaluation datasets, data analytics, and warehousing for AI features
- Design RESTful APIs using FastAPI to expose LLM capabilities and handle streaming and async responses (a brief sketch follows this list)
- Develop testing strategies and evaluation frameworks to ensure consistent LLM performance
- Partner cross-functionally with data scientists, product managers, and engineers on architecture and delivery
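For illustration only, here is a minimal sketch of the kind of streaming FastAPI endpoint described above, wrapping a LangChain chat model. The route path, request shape, and model choice are hypothetical assumptions, not details from this posting.

```python
# Minimal illustrative sketch: the route path, model name, and request shape
# are hypothetical assumptions, not details taken from this posting.
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from langchain_openai import ChatOpenAI  # other chat model integrations work similarly

app = FastAPI()
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # hypothetical model choice

class AskRequest(BaseModel):
    question: str

@app.post("/ask")
async def ask(req: AskRequest) -> StreamingResponse:
    # Stream tokens back to the client as the model generates them.
    async def token_stream():
        async for chunk in llm.astream(req.question):
            if chunk.content:
                yield chunk.content

    return StreamingResponse(token_stream(), media_type="text/plain")
```

In a production setup, LangSmith tracing can typically be enabled through environment variables (e.g., LANGCHAIN_TRACING_V2 and LANGCHAIN_API_KEY) without changing the endpoint code.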
- We think this role is for you if you have...
- Bachelor’s degree in Computer Science, Engineering, Data Science, or related field, or equivalent work experience.
- 2–4 years of professional software engineering experience with strong Python development skills
- Experience building and deploying production-grade RESTful APIs (e.g., FastAPI, Flask)
- Hands-on experience integrating large language models (e.g., OpenAI, Anthropic, Bedrock, or open-source LLMs)
- Familiarity with LangChain or similar frameworks for LLM orchestration
- Understanding of prompt engineering, context handling, and model interaction patterns
- Experience with AWS infrastructure (e.g., EC2, Lambda, ECS/EKS, S3)
- Knowledge of relational databases and SQL (PostgreSQL preferred)
- What will help you succeed?
- Familiarity with vector databases and semantic search using vector embeddings
- Production experience building agentic workflows using LangGraph
- Hands-on use of LangSmith for LLM observability, tracing, and evaluation
- Familiarity with PGVector or other vector stores (e.g., Pinecone, Weaviate, Chroma); a brief sketch follows this list
- Experience building data and ML pipelines with Airflow
- Use of Snowflake for analytics and LLM training dataset management
- Working knowledge of Docker and orchestration tools such as Kubernetes or ECS
- Understanding of CI/CD workflows and deployment best practices
- Experience with async Python and streaming LLM responses
- Awareness of LLM evaluation, fine-tuning, or synthetic data generation practices
- Interest in optimizing LLM cost and performance through caching, prompt tuning, and model selection
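As a rough illustration of the vector-store items above, here is a minimal PGVector similarity search using the langchain_postgres integration. The collection name, connection string, and query text are hypothetical placeholders.

```python
# Minimal illustrative sketch: the collection name, DSN, and query text are
# hypothetical placeholders, not details taken from this posting.
from langchain_openai import OpenAIEmbeddings
from langchain_postgres import PGVector

store = PGVector(
    embeddings=OpenAIEmbeddings(model="text-embedding-3-small"),
    collection_name="pro_help_docs",  # hypothetical collection
    connection="postgresql+psycopg://user:pass@localhost:5432/app",  # placeholder DSN
)

# Top-k semantic search over stored embeddings.
results = store.similarity_search("How do I reschedule a job?", k=4)
for doc in results:
    print(doc.page_content)
```

A RAG pipeline would then pass the retrieved documents into a chat model's prompt; evaluation datasets for that step could live in Snowflake, as noted above.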
- ✨ Let’s talk numbers! ✨
- Our compensation range for this role begins at $5,000 USD per month 💵
- Housecall Pro is a fintech company founded in 2013. We built a SaaS platform that helps Home Service Professionals operate their businesses. We created the application for plumbers, electricians, and other Pros in the home improvement/trades industries.
- Housecall Pro is a simple, cloud-based field service management software platform aimed at helping companies keep track of jobs, monitor technician activity, and produce invoices easily.
- Our core product helps our clients with scheduling, dispatching, job management, invoicing, payment processing, marketing, and more. They used to struggle with piles of paperwork after hours; now they can save time and manage their businesses in one app.
- We support more than 27,000 businesses and have over 1,300 ambitious, mission-driven employees in San Diego, Denver, and all over the world (including 200+ talented and innovative Engineers). #LI-Remote





