CoreStory - AI Engineer
Requirements
• 7+ years of overall engineering experience, including at least 3 years in AI engineering, machine learning, or applied NLP.
• Strong hands-on experience with LlamaIndex, LangChain, or similar orchestration frameworks.
• Experience designing and implementing vector database solutions (e.g., Pinecone, Neo4j, FAISS, Milvus, Weaviate).
• Solid understanding of LLM APIs (OpenAI, Anthropic, Mistral, Hugging Face, etc.).
• Proficiency in Python, with experience in libraries such as FastAPI, Pandas, or NumPy.
• Understanding of retrieval-augmented generation (RAG) patterns, embeddings, and tokenization.
• Familiarity with prompt engineering, tool calling, and chat agent architectures.
• Strong problem-solving and analytical mindset, with attention to performance and scalability.
• Demonstrated interest in staying up to date with the fast-evolving AI landscape.

Preferred:
• Experience deploying AI services in production (e.g., using Docker, Azure, or AWS).
• Exposure to LangGraph, semantic search, or hybrid RAG systems.
• Familiarity with knowledge graphs, document intelligence, or multimodal AI.
• Previous experience in SaaS or early-stage startup environments.
Responsibilities
• Design, implement, and optimize LLM-powered systems (e.g., RAG, chat agents, summarizers, knowledge graph integration).
• Build and manage data indexing and retrieval pipelines using LlamaIndex, LangChain, or similar frameworks.
• Implement and maintain vector databases (e.g., Pinecone, Neo4j, Weaviate, Chroma, or Azure Cognitive Search).
• Integrate open-source and proprietary LLMs (e.g., GPT, Claude, Llama) into the CoreStory Platform.
• Develop and refine AI-driven features, including generative insights, automated summarization, and narrative analytics.
• Collaborate with DevOps and backend teams to deploy scalable AI services within CoreStory’s cloud infrastructure.
• Continuously benchmark model performance, latency, and cost, identifying opportunities for optimization.
• Stay current with advancements in AI, from model architectures to emerging frameworks, and propose innovative applications aligned with CoreStory’s mission.
• Contribute to internal documentation, experimentation frameworks, and evaluation methodologies.
Benefits
• Competitive compensation and equity.
• Flexible, remote-first work environment.
• Opportunities to define and build the AI roadmap of a fast-growing technology company.
• Collaborative, learning-oriented culture.
• Access to cutting-edge AI models, research, and infrastructure.