
Member of Engineering (Inference)

Position
Full-time
Location
Remote (EMEA/East Coast)

ABOUT POOLSIDE

In this decade, the world will create artificial intelligence that reaches human-level intelligence (and beyond) by combining learning and search. Only a small number of companies will achieve this. Their ability to stack advantages and pull ahead will determine who survives and wins. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research and engineering at scale. They will create powerful economic engines. They will continue to scale their training to larger & more capable models. They will be given the right to raise large amounts of capital along their journey to enable this.

poolside exists to be one of these companies - to build a world where AI will drive the majority of economically valuable work and scientific progress.

We believe that software development will be the first major capability in neural networks to reach human-level intelligence, because it is the domain where we can best combine Search and Learning approaches.

At poolside we believe our applied research needs to culminate in products that are put in the hands of people. Today we focus on building for a developer-led, increasingly AI-assisted world. We believe that the current capabilities of AI lead to incredible tooling that can assist developers in their day-to-day work. We also believe that as we increase the capabilities of our models, we increasingly empower anyone in the world to build software. We envision a future where not just 100 million people can build software, but 2 billion.


ABOUT OUR TEAM

We are a remote-first team that sits across Europe and North America and comes together in person for three days once a month, and for longer offsites twice a year.

Our R&D and production teams are a combination of more research-oriented and more engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which allows us to compound our efforts.

ABOUT THE ROLE

You will be focused on building out our multi-device inference of Large Language Models, both standard transformers and custom linear attention architectures. You will work with lowered-precision inference and tensor parallelism, and you will be comfortable diving into vLLM, Torch, and AWS libraries. You will work on improvements for both NVIDIA and AWS hardware, at the bleeding edge of what's possible, and will find yourself hacking and testing the latest vendor solutions. We are rewrite-in-Rust-friendly.

YOUR MISSION

To develop and continuously improve the inference of LLMs for source code generation, optimizing for the lowest latency, the highest throughput, and the best hardware utilization.

RESPONSIBILITIES

  • Follow the latest research on LLMs, inference, and source code generation

  • Propose and evaluate innovations in both the quality and the efficiency of inference

  • Monitor and implement LLM inference metrics in production

  • Write high-quality, high-performance Python, Cython, C/C++, Triton, ThunderKittens, native CUDA, and Amazon Neuron code

  • Work as part of the team: plan future steps, discuss, and always stay in touch

SKILLS & EXPERIENCE

  • Experience with Large Language Models (LLMs)

    • Confident knowledge of the computational properties of transformers

    • Knowledge of or experience with cutting-edge inference tricks

    • Knowledge of or experience with distributed and lower-precision inference

    • Knowledge of deep learning fundamentals

  • Strong engineering background

    • Theoretical computer science knowledge is a must

    • Experience with programming for hardware accelerators

    • SIMD algorithms

    • Expert understanding of matrix multiplication bottlenecks

    • Know hardware operation latencies by heart

  • Research experience

    • Nice to have but not required: author of scientific papers on topics such as applied deep learning, LLMs, or source code generation

    • Can freely discuss the latest papers and dive into the fine details

    • You have strong opinions, weakly held

  • Programming experience

    • Linux

    • Git

    • Python with PyTorch or Jax

    • C/C++, CUDA, Triton, ThunderKittens

    • Use modern tools and are always looking to improve

    • Opinionated but reasonable, practical, and not afraid to ignore best practices

    • Strong critical thinking and ability to question code quality policies when applicable

    • Prior experience in non-ML programming is a nice-to-have

PROCESS

  • Intro call with Eiso, our CTO & Co-Founder

  • Technical Interview(s) with one of our Founding Engineers

  • Team-fit call with Beatriz, our Head of People

  • Final interview with Eiso, our CTO & Co-Founder

BENEFITS

  • Fully remote work & flexible hours

  • 37 days/year of vacation & holidays

  • Health insurance allowance for you and your dependents

  • Company-provided equipment

  • Wellbeing, always-be-learning, and home-office allowances

  • Frequent team get-togethers

  • A great, diverse, and inclusive people-first culture
