Techtorch - AI Data Engineer
Requirements
• 4+ years of experience in data engineering, with strong hands-on Snowflake experience
• Practical experience building AI-enabled data solutions or preparing data for ML/LLM workloads
• Strong proficiency in SQL and Python for data transformation and pipeline development
• Experience with Snowflake Cortex AI or similar warehouse-native AI capabilities
• Hands-on experience with AWS, ideally including AWS Bedrock, Lambda, S3, and IAM
• Experience with ELT tooling such as dbt, Airflow, or similar orchestration frameworks
• Solid understanding of data modeling, data warehousing, and performance optimization
• Comfortable working in cloud-native, enterprise environments with high delivery expectations
• Strong communication skills and the ability to collaborate across technical and business teams
• Experience with vector databases or embedding workflows (e.g., Snowflake, OpenSearch, Pinecone)
• Familiarity with LLM orchestration frameworks (e.g., LangChain, Bedrock Agents)
• Experience supporting AI agents, RAG pipelines, or GenAI analytics use cases
• Exposure to regulated or security-conscious environments
Our Values
• Client First — We focus relentlessly on delivering outcomes that create value for our clients
• We, Not Me — We win together. Collaboration drives transformation at scale
• Get Stuff Done — We execute with speed and precision — because in PE, time matters
• AI First — We embed AI at the core, enabling scalable, high-leverage solutions
• Own It — We take accountability for results, delivering on what we promise
• Agile Mindset — We adapt quickly and proactively seek better ways to move forward
Responsibilities
• Design, build, and maintain scalable data pipelines using Snowflake as the central data platform
• Develop AI-ready data models and feature layers to support analytics, ML, and GenAI use cases
• Leverage Snowflake Cortex AI for embedding, classification, summarization, and AI-assisted analytics
• Integrate and operationalize AI workflows using AWS Bedrock and related AWS services (e.g., Lambda, Step Functions)
• Build and optimize ELT pipelines using tools such as dbt, SQL, and Python
• Integrate data from diverse sources, including APIs, SaaS platforms, databases, and event streams
• Ensure data quality, observability, and governance across pipelines and AI workloads
• Collaborate with AI engineers, data scientists, and business teams to translate use cases into scalable data solutions
• Document data models, pipelines, and AI-related design decisions clearly for long-term maintainability
Benefits
• Opportunity to work on AI-first data platforms using Snowflake Cortex and AWS Bedrock
• High-impact role at the intersection of data engineering and applied AI
• Exposure to private equity-backed enterprise transformation programs
• Global, collaborative team with strong technical standards
• Flexible, remote-first working environment with autonomy and ownership