Newxel - Senior Backend & Data Infrastructure Engineer (NXJ-155)
Requirements
• 7+ years of backend engineering, including 3+ years specifically in distributed systems or data infrastructure
• 5+ years of production Go at scale (standalone Go, not aggregated with other languages): data hot paths, memory-sensitive workloads
• 5+ years of Node.js/TypeScript in production: APIs, business logic, NestJS or a similar framework
• Hands-on event-driven architecture at scale: Kafka as a production dependency, not a tutorial project
• Deep AWS expertise (EKS, S3, RDS) plus Kubernetes in production
• Strong GitOps background: ArgoCD or equivalent, declarative deployments, rollback strategies
• Familiarity with the lakehouse paradigm: Iceberg, Nessie, or equivalent table-format tooling
• Experience leading technical projects and mentoring engineers formally, not just informally
• Cybersecurity domain knowledge: SIEM, log processing, threat data pipelines
• Experience sustaining 100K+ events/sec throughput in production
• Hands-on Delta Lake or Apache Iceberg at production scale
• Rust (relevant for systems-level work adjacent to the hot path)
• Open-source contributions to data tooling

Leadership & Architecture
This role carries formal technical leadership responsibility, not a "senior with some mentoring" framing. You'll own architectural standards for the data layer, run design reviews, and be accountable for the team's engineering quality. The team is mid-level; your decisions set the ceiling for what they can build.
Responsibilities
• Own the end-to-end ingestion pipeline (Kafka → Iceberg), including ACID compliance, schema evolution, and throughput reliability at production scale
• Build and maintain Go microservices on the data hot paths; own their latency characteristics and resource efficiency
• Architect and scale SeaweedFS for analytics workloads: billions of small files, O(1) disk seeks, and no performance regression as the dataset grows
• Drive the "Git-for-Data" model with Nessie: zero-copy cloning, with dataset branching and merging as a first-class operational capability
• Lead the move to fully declarative infrastructure on AWS EKS via ArgoCD: environment parity, automated rollbacks, no snowflake deployments
• Build NestJS APIs and the business-logic layer on top of the data platform, cleanly separated from the hot paths
• Set engineering standards: architectural reviews, code quality, and AWS/DevOps depth across the team; mentor mid-level engineers who can grow under real ownership
Benefits
• You're the first architect of a greenfield cybersecurity data platform; your decisions will be in production at scale, not revised by a committee
• The lakehouse stack (Iceberg, Nessie, StarRocks) is genuinely modern; you won't be working around constraints imposed by older tech choices
• Direct CTO collaboration in a founder-adjacent environment: short feedback loops, real influence on product direction
• The workload is real: billions of events, real-time hot paths, and a domain where data reliability has security consequences