Deepgram - Research Staff, LLMs
Requirements
• 3+ years of experience in applied deep learning research, with a solid understanding of the applications and implications of different neural network types, architectures, and loss mechanisms
• Proven experience working with large language models (LLMs), including data curation, distributed large-scale training, optimization of transformer architectures, and reinforcement learning (RL)
• Strong Python coding skills and experience with PyTorch
• Experience with various transformer architectures (auto-regressive, sequence-to-sequence, etc.)
• Experience with distributed computing and large-scale data processing
• Prior experience conducting experimental programs and using the results to optimize models

IT WOULD BE GREAT IF YOU HAD
• A deep understanding of transformers, causal LMs, and their underlying architecture
• An understanding of distributed training and distributed inference schemes for LLMs
• Familiarity with RLHF labeling and training pipelines
• Up-to-date knowledge of recent LLM techniques and developments
• Published papers in deep learning research, particularly related to LLMs and deep neural networks

We are seeking researchers who:
• See "unsolved" problems as opportunities to pioneer entirely new approaches
• Can identify the one critical experiment that will validate or kill an idea in days, not months
• Have the vision to scale successful proofs-of-concept 100x
• Are obsessed with using AI to automate and amplify their own impact

If you find yourself energized rather than daunted by these expectations, and you're already thinking about five ideas to try while reading this, you might be the researcher we need. This role demands obsession with the problems, creativity in approach, and relentless drive toward elegant, scalable solutions. The technical challenges are immense, but the potential impact is transformative.
Responsibilities
• Brainstorming and collaborating with other members of the Research Staff to define new LLM research initiatives
• Broadly surveying the literature; evaluating, classifying, and distilling current methods
• Designing and carrying out experimental programs for LLMs
• Driving transformer (LLM) training jobs to successful completion on distributed compute infrastructure and deploying new models into production
• Documenting and presenting results and complex technical concepts clearly for a target audience
• Staying up to date with the latest advances in deep learning and LLMs, with a particular eye toward their implications and applications within our products

YOU'LL LOVE THIS ROLE IF YOU
• Are passionate about AI and excited about working on state-of-the-art LLM research
• Have an interest in producing and applying new science to help us develop and deploy large language models
• Enjoy building from the ground up and love creating new systems
• Have strong communication skills and are able to translate complex concepts clearly
• Are highly analytical and enjoy delving into detailed analyses when necessary
Benefits
HOLISTIC HEALTH
• Annual wellness stipend
• Mental health support
• Life, STD, and LTD income insurance plans

WORK/LIFE BLEND
• Unlimited PTO
• Generous paid parental leave
• Flexible schedule
• 12 paid US company holidays
• Quarterly personal productivity stipend
• One-time stipend for home office upgrades
• 401(k) plan with company match
• Tax savings programs

CONTINUOUS LEARNING
• Learning/education stipend
• Participation in talks and conferences
• Employee Resource Groups
• AI enablement workshops and sessions

For candidates outside of the US, we use an Employer of Record model in many countries, which means benefits are administered locally and governed by country-specific regulations. Because of this, benefits will differ by region: in some cases international employees receive benefits US employees do not, and vice versa. As we scale, we will continue to evaluate where we can create more alignment, but a 1:1 global benefits structure is not always legally or operationally possible.

Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC, and NVIDIA, Deepgram has raised over $215M in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!