Kpler - Senior Data Engineer
Requirements
• Deliver high-quality, well-tested, and maintainable code, setting a strong example for engineering best practices.
• Own significant parts of the data platform end-to-end, from ingestion to production delivery.
• Make architectural contributions to data processing, storage, and delivery patterns.
• Contribute to and improve CI/CD pipelines, automation, and operational tooling.
• Instrument services and pipelines with metrics, logs, and alerts, and help define operational standards.
• Play an active role in incident response, root-cause analysis, and long-term system improvements.
• Review code, mentor other engineers, and help reinforce shared coding and architectural standards across the team.
• Exposure to Kafka, Spark, or streaming architectures.
• Familiarity with event-driven or microservices architectures.
• Exposure to analytical datastores (e.g. Elasticsearch).
• Full-stack awareness (e.g. the ability to read, review, and provide feedback on frontend or API-layer pull requests, without being a primary frontend contributor).
• Prior experience working on data products in the energy, commodities, or industrial domains.
About Kpler
We are a dynamic company dedicated to nurturing connections and innovating solutions to tackle market challenges head-on. If you thrive on customer satisfaction and turning ideas into reality, then you’ve found your ideal destination. Are you ready to embark on this exciting journey with us?
Our Values
• We make things happen: we act decisively and with purpose, going the extra mile.
• We build together: we foster relationships and develop creative solutions to address market challenges.
• We are here to help: we are accessible and supportive to colleagues and clients with a friendly approach.
Our People Pledge
Don’t meet every single requirement? Research shows that women and people of color are less likely than others to apply if they feel they don’t match 100% of the job requirements.
Don’t let the confidence gap stand in your way: we’d love to hear from you! We understand that experience comes in many different forms, and we are dedicated to adding new perspectives to the team.
Responsibilities
• Develop and maintain scalable ETL pipelines to extract, transform, and load large volumes of structured and unstructured data into our analytics platform.
• Design complex database schemas for big data applications that support high-concurrency, low-latency access patterns.
• Optimize SQL queries and stored procedures to improve performance on the most frequently accessed datasets in production environments.
• Implement automated testing frameworks, including unit, integration, and end-to-end tests, ensuring code reliability before deployment into our data warehouse environment.
• Collaborate with cross-functional teams (e.g., business analysts, product managers) to understand requirements for new or modified ETL processes based on evolving organizational needs and priorities.
• Monitor system performance using established tools such as Prometheus and Grafana; identify bottlenecks in data processing workflows and propose optimizations where necessary.
• Stay current with emerging technologies, best practices, and industry trends in big data engineering by attending conferences, workshops, or webinars as needed.