Careers

Member of Engineering, Data Quality

Position
Full-time, indefinite contract
Location
Remote EMEA/East Coast

About poolside

In this decade, the world will create artificial intelligence that reaches human-level intelligence (and beyond) by combining learning and search. There will only be a small number of companies who will achieve this. Their ability to stack advantages and pull ahead will determine who survives and wins. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research and engineering at scale. They will create powerful economic engines. They will continue to scale their training to larger & more capable models. They will be given the right to raise large amounts of capital along their journey to enable this.

poolside exists to be one of these companies - to build a world where AI will drive the majority of economically valuable work and scientific progress.

We believe that software development will be the first major capability in neural networks to reach human-level intelligence, because it's the domain where we can best combine Search and Learning approaches.

At poolside we believe our applied research needs to culminate in products that are put in the hands of people. Today we focus on building for a developer-led, increasingly AI-assisted world. We believe that the current capabilities of AI lead to incredible tooling that can assist developers in their day-to-day work. We also believe that as we increase the capabilities of our models, we increasingly empower anyone in the world to build software. We envision a future where not just 100 million people can build software, but 2 billion.


About our team

We are a remote-first team that sits across Europe and North America and comes together in person for 3 days once a month and for longer offsites twice a year.

Our R&D and production teams are a combination of more research-oriented and more engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which allows us to compound our efforts.

About the role

You would be working on our data team, focused on the quality of the datasets being delivered for training our models. This is a hands-on role where you would own the critical topic of data quality end to end, from sourcing to evaluation.

You would collaborate closely with other teams such as Pre-training, Fine-tuning and Product to define high-quality data both quantitatively and qualitatively, and to get feedback on the quality of the data used to build the poolside models.

Staying in sync with the latest research in dataset design is key to success in this role: you would constantly show original research initiative through short, time-bounded experiments and highly technical engineering competence while deploying your solutions in production.

Because the volumes of data to process are massive, you'll have at your disposal a performant distributed data pipeline together with a large GPU cluster.
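
For illustration only, here is a minimal sketch in Python of the kind of document-level deduplication and filtering step such a pipeline performs. The function names, thresholds and heuristics below are hypothetical examples, not poolside's actual pipeline.

    import hashlib
    from typing import Iterable, Iterator

    def normalize(text: str) -> str:
        # Collapse whitespace and lowercase so trivially different copies hash the same.
        return " ".join(text.lower().split())

    def looks_low_quality(text: str) -> bool:
        # Toy heuristics: very short documents, or documents dominated by non-alphanumeric characters.
        if len(text) < 200:
            return True
        alnum_ratio = sum(c.isalnum() for c in text) / len(text)
        return alnum_ratio < 0.6

    def dedup_and_filter(docs: Iterable[str]) -> Iterator[str]:
        # Exact-hash deduplication plus heuristic quality filtering over a stream of documents.
        seen: set[str] = set()
        for doc in docs:
            digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
            if digest in seen or looks_low_quality(doc):
                continue
            seen.add(digest)
            yield doc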

Your mission

To deliver massive-scale datasets of natural language and source code with the highest quality for training poolside models.

Responsibilities

  • Own the quality of the datasets used for training poolside models, including sourcing, curation, filtering, deduplication, compliance, enrichment and any kind of data transformation
  • Follow the latest research related to LLMs and data quality in particular. Be familiar with the most relevant open source datasets of natural language and source code
  • Work closely with other teams such as Pre-training, Fine-tuning and Product to ensure short feedback loops on the quality of the models delivered
  • Conduct and analyze experiments on data to provide insights into the quality distributions
  • Design, implement, deploy and monitor efficient solutions leveraging large-scale clusters to process petabytes of data under tight time constraints
  • Proactively identify and mitigate potential quality issues, biases and vulnerabilities in the data delivery pipeline
  • Ensure the datasets delivered are compliant with data-privacy regulations

Skills & Experience

  • Several years of experience in ML and NLP
  • Experience in dataset design, and in working with large-scale distributed data pipelines and GPU clusters
  • Experience working with LLMs for data labeling, filtering and data synthesis
  • Experience with embedding models in general
  • Experience running large-scale, time-bounded research experiments, and communicating and landing their results in fast-paced environments
  • Scientific knowledge of the field of Generative AI, with the ability to search, digest and communicate findings from a high volume of technical research papers
  • Strong obsession with data-quality details while staying focused on the high-level picture and prioritized goals
  • Experience in designing and implementing complex software and making it usable in production
  • Experience with Git, Docker, k8s and managed cloud services
  • Familiarity with data analysis and labeling tools

Process

  • Intro call with Beatriz, our Head of People
  • Technical Interview(s) with two of our Founding Engineers
  • Meet & greet call with Eiso, our CTO & Co-Founder

Benefits

  • Fully remote work & flexible hours
  • 37 days/year of vacation & holidays
  • Health insurance allowance for you and dependents
  • Company-provided equipment
  • Wellbeing, always-be-learning and home office allowances
  • Frequent team get-togethers
  • Great, diverse & inclusive people-first culture