5+ years building production-grade backend systems or developer-facing tools
Hands-on experience with AI/ML technologies, including production experience with LLM APIs (OpenAI, Anthropic, etc.), prompt engineering, or AI agent development
Proficiency in Go (preferred), Rust, Java, or Python with strong software engineering fundamentals
Experience designing and building distributed systems, microservices, or platform infrastructure
Strong understanding of cloud-native systems (AWS, GCP, or Azure), APIs, and data stores
Solid grasp of CI/CD, automated testing, code review practices, and modern development workflows
Product-minded approach to building developer tools, with a focus on user experience and measurable outcomes
Excellent communication skills in remote, asynchronous environments, with the ability to document technical decisions clearly
Ownership mentality with a bias for action and iterative delivery
Comfortable working autonomously across distributed teams and navigating ambiguity
Preferred:
Experience with AI agent frameworks (LangChain, LangGraph, CrewAI, or similar)
Contributions to open source AI tools, developer tooling, or platform engineering projects
Experience with MCP (Model Context Protocol) or similar AI agent integration standards
Background in developer productivity, DevOps, SRE, or platform engineering domains
Experience with Kubernetes, Docker, and container orchestration
Knowledge of developer tools ecosystems (IDEs, CI/CD platforms, observability tools)
Experience with infrastructure-as-code (Terraform, Pulumi) and GitOps deployment patterns (ArgoCD, FluxCD)
Understanding of security, compliance, and operational best practices for production AI systems
What to Expect
First 30 Days
Get up to speed on Docker's AI Developer Tools vision, current Agent Dev project status, and existing AI tool prototypes
Meet your team, Principal Engineer, Senior Manager, and key stakeholders across product engineering and platform teams
Understand Docker's developer tooling landscape, including deployment systems, observability platforms, and CI/CD pipelines
Explore Docker's LLM provider relationships, AI technology choices, and existing integration patterns
Make your first contributions to the AI Developer Tools codebase through bug fixes, small features, or documentation improvements
Participate in design discussions and code reviews to understand team technical standards and decision-making processes
First 90 Days
Take ownership of and deliver your first significant AI developer tool feature (e.g., code review assistant capability, test generation module, or deployment diagnostic agent component)
Contribute to platform infrastructure improvements that enable faster development and deployment of AI tools
Collaborate with product and design teams on feature requirements and user experience for AI developer tools
Participate in user research and customer calls to understand developer pain points and validate AI tool effectiveness
Begin mentoring other engineers through code reviews and technical discussions
Establish monitoring and instrumentation for AI tools you've shipped to measure adoption and effectiveness
Support hiring efforts by participating in interviews and providing feedback on candidates
First Year Outlook
Own significant components of AI developer tools platform with responsibility for design, implementation, and operations
Ship multiple production AI agents and tools with demonstrated adoption across Docker's engineering organization and measurable productivity improvements
Contribute to technical strategy and architectural decisions for AI developer tools alongside Principal Engineer
Mentor engineers on AI/LLM integration patterns and developer tool best practices
Drive measurable improvements in developer productivity metrics such as AI tool adoption, commit frequency, PR velocity, deployment times, and CI run times
Participate in productization efforts as internal AI tools evolve into customer-facing offerings
Establish yourself as a go-to expert for AI in developer workflows within Docker's engineering organization
Responsibilities
Build AI-Powered Developer Tools: Design, implement, and ship production-ready AI agents and tools that accelerate developer productivity such as code review and refactoring assistants, automated test generators, local environment setup tools, deployment pipeline diagnostic agents, and on-call assistance tools
Implement LLM Integrations: Build robust, production-grade integrations with LLM APIs (OpenAI, Anthropic, etc.), covering prompt engineering, response parsing, error handling, rate limiting, cost management, and performance optimization
Develop Agent Orchestration Systems: Create agent frameworks and orchestration systems that enable complex multi-step workflows, tool calling, context management, and agent-to-agent communication
Contribute to Platform Infrastructure: Build self-service platform capabilities that enable teams across Docker to rapidly deploy and operate their own AI developer tools, including deployment pipelines, observability integration, security controls, and operational tooling
Drive Adoption of AI-Native Development: Build tools and programs that accelerate adoption of AI developer tools such as Claude Code, Cursor, and Warp across Docker's engineering organization
Ensure Production Quality: Write well-tested code with strong test coverage (unit, integration, end-to-end); establish monitoring, alerting, and operational excellence for AI systems
Collaborate Cross-Functionally: Partner with Principal Engineer on architecture, work with product and design teams on features and UX, and collaborate with platform teams (Infrastructure, Security, Data) on integrations
Participate in Operations: Take part in on-call rotation for AI developer tools; respond to incidents, debug production issues, and drive continuous improvement of system reliability
Mentor and Share Knowledge: Guide other engineers through code reviews, pair programming, and technical discussions; document patterns and best practices for AI tool development
Measure and Iterate: Instrument AI tools to measure adoption, effectiveness, and developer productivity impact; iterate based on data and user feedback to continuously improve developer experience
Benefits
Equity options are mentioned, but no specific details are provided.
Insurance benefits are stated to be part of the compensation package, but no specific details are provided.
Perks such as remote work and flexible schedules are explicitly available to all team members in their home countries, though availability may depend on business needs and local regulations.
Docker operates as a remote-first team; no specific policies are given on remote work frequency beyond general availability in employees' home countries.