Armada - AI Factory Customer Engineer
Requirements
• Working knowledge of cloud platforms, containerization, virtualization, and modern enterprise infrastructure.
• Comfort operating in complex, live customer environments, with strong troubleshooting skills.
• Ability to build credibility across IT and OT stakeholders, particularly in industrial or energy-adjacent contexts.
• Excellent communication skills, capable of engaging both deeply technical teams and executive audiences.
• Willingness to travel as required.
• Exposure to energy, utilities, telecom, oil & gas, or industrial infrastructure environments.
• Understanding of power generation, distribution, or energy-driven deployments.
• Familiarity with AI data center co-location or Neocloud models.
• Experience with regulated or mission-critical systems and structured sales methodologies (e.g., MEDDPICC, Challenger).

Armada is seeking a visionary VP of Customer Engineering to lead a world-class, globally distributed team of Customer Engineers at the forefront of AI infrastructure and edge computing. This is a pivotal leadership role for a builder and operator who thrives at the intersection of cutting-edge AI technology and large-scale industrial deployment.

As Armada accelerates adoption of its AI-powered edge platform, spanning ruggedized modular data centers, GPU-accelerated inference, and real-time edge AI, this leader will shape how we engage with customers globally: from initial technical discovery through validated, deployment-ready architectures.
You will own the pre-sales technical lifecycle across all regions, ensuring our Customer Engineers operate with rigor, speed, and clarity in North America, EMEA, APAC, and emerging markets. The CE function guides customers from mission-critical AI ambitions and complex operational environments to scalable, field-proven Armada solutions, taking each opportunity 80% of the way by scoping requirements, framing AI infrastructure trade-offs, validating feasibility, and ensuring full qualification before moving to detailed engineering.

Global Leadership & Team Development
• Proven ability to build, lead, and scale Customer Engineering teams across multiple geographies, hiring in new markets and establishing regional functions from the ground up.
• Strong coaching and talent development instincts: grows senior AI infrastructure architects and cultivates future technical leaders.
• Operates with clarity and conviction in ambiguous, high-growth environments; creates structure and alignment for globally distributed teams.
• Culturally fluent and experienced working across diverse international stakeholders, including enterprise customers, governments, and strategic partners.

AI Infrastructure & Edge Architecture
• Deep expertise in GPU-accelerated compute, AI inference pipelines, LLM deployment, and ML workloads in resource-constrained environments.
• Strong architectural fluency across edge computing, modular/containerized data centers, distributed systems, and AI platform integration.
• Practical knowledge of AI infrastructure trade-offs: latency vs. bandwidth, on-device vs. cloud inference, sovereignty vs. scale, and power density vs. performance.
• Familiarity with MLOps, AI observability, and model lifecycle management in edge and hybrid environments.

System Architecture & Engineering Design
• Translate mission and business objectives, including AI workload requirements, into actionable infrastructure requirements aligned to operational outcomes.
• Architect end-to-end systems within modular or containerized data centers, integrating GPU compute, high-speed storage, and networking into defined form factors.
• Interpret engineering documentation, including rack elevations, BOMs, airflow diagrams, and power schematics; create conceptual architecture drawings for customers and partners.
• Perform on-site assessments and deployment planning, incorporating power, cooling, physical security, and connectivity constraints across global environments.

Technical Strategy & Competitive Positioning
• Expert technical storyteller: frames AI infrastructure trade-offs, simplifies complexity, and aligns diverse technical and executive stakeholders toward confident decisions.
• Deep understanding of real-world AI deployment constraints: thermal limits, bandwidth scarcity, satellite connectivity, mobile power, and austere environments.
• Competitive intelligence and positioning across edge AI, cloud infrastructure, and industrial IoT markets.
• Bachelor's degree in Computer Science, Electrical Engineering, Systems Engineering, or an equivalent technical field.
• 7+ years leading Customer Engineering or Solutions Architecture teams in pre-sales; demonstrated success hiring and scaling globally.
• 7–10+ years of hands-on pre-sales or solutions engineering experience in AI infrastructure, edge computing, data center, or distributed systems.
• Deep expertise in GPU and AI accelerator infrastructure: NVIDIA GPU architectures, AI inference frameworks (TensorRT, ONNX, vLLM), and edge AI platforms.
• Strong grounding in data center and edge infrastructure: compute (GPU, bare metal, virtualization), storage (SAN/NAS/Object/NVMe), networking (LAN/WAN/SD-WAN/SATCOM), and facility systems (power, cooling).
• Hands-on experience with container orchestration (Kubernetes), virtualization (VMware, KVM, Hyper-V), and cloud service models (IaaS, PaaS, hybrid).
• Proven ability to engage and influence C-level technical and operational leaders across global enterprise and government customers.
• Willingness to travel internationally, including to remote and operationally austere field sites.
• Experience deploying or architecting AI solutions in the oil & gas, defense & intelligence, utilities, telecommunications, or mining verticals.
• Hands-on exposure to modular, containerized, or mobile data center deployments, including skid-based and rapid-deploy form factors.
• Familiarity with edge AI inference optimization, model quantization, and deployment frameworks for bandwidth-constrained environments.
• Background integrating OT/IT convergence: connecting sensors, IIoT devices, and SCADA systems to AI-enabled edge platforms.
• Experience with satellite and hybrid connectivity architectures (Starlink, LEO, VSAT) for remote AI deployments.
• International experience building CE teams or managing customer engagements in EMEA, APAC, or Middle East markets.
• Certifications in AI/ML (e.g., NVIDIA DLI), cloud infrastructure (AWS, Azure, GCP), or data center design (CDCP, DCDC, RCDD).
• Experience collaborating with construction, facilities, and deployment partners on large-scale infrastructure projects.

You're a Great Fit if You're
• A go-getter with a growth mindset. You're intellectually curious, have strong business acumen, and actively seek opportunities to build relevant skills and knowledge.
• A detail-oriented problem-solver. You can independently gather information, solve problems efficiently, and deliver results with a "get-it-done" attitude.
• Energized by a fast-paced environment. You bring an entrepreneurial spirit, work quickly, and are excited to contribute to a growing company.
• A collaborative team player. You focus on business success and are motivated by team accomplishment over personal agenda.
• Highly organized and results-driven.
Strong prioritization skills and a dedicated work ethic are essential for you.

Equal Opportunity Statement
Responsibilities
Core Technical Customer Engagement
• Serve as the primary technical partner between customers and Armada's Product and Engineering teams, translating real-world requirements into actionable designs.
• Provide hands-on technical guidance on AI Factory solutions, including modular and liquid-cooled data centers and NVIDIA-based GPU systems.
• Advise customers on workload suitability, rack-level design, system architecture, and deployment trade-offs.
• Lead technical demos, proofs-of-concept, and working sessions tailored to customer environments and constraints.
• Build trusted-advisor relationships with engineering, infrastructure, IT, OT, and security stakeholders.
• Enable Sales and field teams by distilling complex infrastructure topics into clear, outcome-driven narratives.
• Take end-to-end ownership of technical engagements, bringing curiosity, urgency, and a problem-solving mindset.

Required Qualifications & Technical Expertise
• Bachelor's degree in Engineering, Computer Science, or a related field (or equivalent hands-on experience); advanced degrees a plus.
• 5+ years of experience in data center engineering, infrastructure engineering, pre-sales/sales engineering, or solution architecture.
• Strong foundation in compute, networking, and storage, including GPU-based AI systems (e.g., NVIDIA DGX, HGX, MGX).
• Hands-on experience with data center infrastructure, including MEP systems, cooling architectures, and rack-level design.
• Deep familiarity with modular and/or liquid-cooled data center architectures, power density, and thermal management.
• Ability to translate AI workload requirements into scalable, production-ready infrastructure designs.

Build & Scale a Global Customer Engineering Organization
• Lead, coach, and develop a globally distributed team of Customer Engineers spanning North America, EMEA, and emerging markets.
• Define and execute a global hiring strategy: build CE presence in new regions, establish operating rhythms, onboard early hires, and set standards for technical excellence worldwide.
• Create talent development pathways that grow CEs into senior AI infrastructure architects and future leaders.
• Build a culture of continuous learning around AI infrastructure, edge computing, and real-world deployment at scale.

Drive AI-Focused Technical Discovery & Solution Architecture
• Champion a rigorous, AI-first discovery methodology, guiding CEs to uncover customer mission goals, AI workload requirements, data sovereignty constraints, and connectivity realities across diverse global environments.
• Ensure the team consistently translates complex, distributed AI environments into validated edge architectures built around Armada's Galleon modular data centers, Atlas platform, and GPU-accelerated edge AI stack.
• Define and govern solution design standards for AI inference, real-time analytics, and edge ML pipelines in bandwidth-constrained and disconnected environments.

Elevate Global Pre-Sales Technical Quality
• Set and raise the bar on discovery outputs, AI architecture designs, technical narratives, demo environments, and proof-of-value success criteria worldwide.
• Standardize technical qualification frameworks, ensuring AI infrastructure opportunities are well-scoped, feasible, and commercially validated before deep engineering engagement.
• Develop a global review cadence and peer architecture process to maintain consistency and quality across all regions.
Partner Cross-Functionally to Accelerate Global Revenue
• Collaborate tightly with regional Sales leaders, Product, Engineering, and Global Deployment teams to align on AI infrastructure positioning, competitive differentiation, and customer roadmaps.
• Bridge technical architectures to measurable customer outcomes, articulating ROI, operational efficiency, and AI-driven value creation across the energy, defense, telecommunications, and industrial verticals.
• Synthesize global customer insights to inform Armada's AI product roadmap, hardware evolution, and platform strategy.

Build Scalable AI Infrastructure Methodologies & Playbooks
• Develop globally consistent reference architectures for AI inference at the edge, GPU cluster deployments, satellite-connected operations, and hybrid cloud-edge patterns.
• Create repeatable frameworks for AI proof-of-value pilots, technical discovery, and competitive positioning across Armada's key verticals.
• Enable regional CE teams with localized deployment guides, regulatory considerations, and partner ecosystem alignment, ensuring global consistency while preserving local agility.