Juniper Square - QA Automation Lead (Performance Testing)
Requirements
• Education: Bachelor's degree in Computer Science, or equivalent professional experience.
• Experience: 7-10 years in Software Quality Assurance, with at least 5 years focused on performance, load, stress, and endurance testing.
• Performance Testing: Strong hands-on experience designing and executing performance test strategies for web applications and APIs, with the ability to read architectural diagrams and identify potential single points of failure.
• Programming Skills: Strong proficiency in Python, with the ability to write clean, maintainable, and scalable test code.
• Tools and Systems: Experience with performance testing tools such as Locust (preferred), JMeter, Gatling, or similar.
• Metrics & Analysis: Solid understanding of performance metrics (response time, throughput, latency, error rates, resource utilization) and profiling techniques.
• APIs & Backend: Hands-on experience testing REST APIs and backend services under load.
• CI/CD: Experience designing and owning the Performance Gate in the CI/CD pipeline, ensuring automated performance regressions are caught before reaching production.
• Observability & Profiling: Advanced skills in using APM tools (e.g., Datadog, New Relic, or Dynatrace) and profiling tools to pinpoint code-level bottlenecks, memory leaks, and thread contention.
• Data Strategy: Experience managing large-scale, sanitized test data sets required for high-volume performance execution without skewing cache results.
• Infrastructure & Cloud: Deep experience with AWS infrastructure (EC2, Lambda, RDS, ELB) and containerization (Docker, Kubernetes).
• Test Process: Experience in performance test plans, scenarios, workload models, and test data strategies.
• Soft Skills: Excellent analytical and problem-solving abilities, attention to detail, and the ability to work independently within Agile development teams.
• Communication: Clear written and verbal communication skills, with the ability to explain performance findings to both technical and non-technical stakeholders.
• Experience designing evaluation frameworks for LLM-powered features, including prompt regression testing and behavioral drift detection.
• Proactively leverage AI tools (e.g., Cursor, Gemini) to accelerate test authoring, debugging, and maintenance of automation frameworks.
• Use AI to diagnose failures, generate test scenarios, and improve coverage and efficiency.
• Contribute to testing strategies for AI-powered features, including validation of LLM outputs, edge cases, and reliability.
• Drive best practices for the ethical and effective use of AI tools within QA workflows and across the broader engineering team.

At Juniper Square, we believe building a diverse workforce and an inclusive culture makes us a better company. If you think this job sounds like a fit, we encourage you to apply even if you don't meet all the qualifications.
Responsibilities
• Review functional and non-functional requirements and technical design documents, and provide meaningful feedback to identify performance risks early.
• Design, develop, and execute performance, load, and stress tests for web and backend systems.
• Use AI to analyze production traffic patterns and automatically generate representative performance scripts in Locust or JMeter that mirror real-world user behavior.
• Build and maintain performance test scripts using Python, primarily leveraging Locust (tool-agnostic mindset, but Locust experience is a strong plus).
• Collaborate closely with development, QA, DevOps, and SRE teams to define performance benchmarks, SLAs, and acceptance criteria.
• Analyze test results to identify bottlenecks related to application code, APIs, databases, infrastructure, or third-party dependencies.
• Produce clear and actionable performance test reports, highlighting trends, risks, and recommendations for optimization.
• Integrate performance tests into CI/CD pipelines and support continuous performance testing practices.
• Monitor application performance during releases and contribute to capacity planning and scalability discussions.
• Lead performance engineering best practices and help shift performance testing left in the SDLC.
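As a toy illustration of the load-test design work described above (not the Locust-based scripts the role would actually produce), the sketch below drives a stubbed endpoint with concurrent virtual users and collects per-request latencies. The endpoint stub, user count, and request count are all assumptions for the example; a real script would replace the stub with HTTP calls via Locust's `self.client`.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_endpoint():
    """Stand-in for an HTTP call; sleeps ~10 ms and returns a status code."""
    time.sleep(0.01)
    return 200

def run_load(target, users=20, requests_per_user=5):
    """Fire `users` concurrent virtual users at `target` and
    return every request's latency in milliseconds."""
    def one_user():
        latencies = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            status = target()
            latencies.append((time.perf_counter() - start) * 1000)
            assert status == 200  # treat non-200 as a failed request
        return latencies

    with ThreadPoolExecutor(max_workers=users) as pool:
        per_user = list(pool.map(lambda _: one_user(), range(users)))
    # Flatten the per-user lists into one sample set for analysis.
    return [lat for user in per_user for lat in user]

latencies = run_load(fake_endpoint)
```

Even this toy shows the core workload-modeling decisions a real test encodes: how many concurrent users, how many requests each issues, and what counts as a failure.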