Infrastructure Engineering Role
- based in London, UK or remote
- creating infrastructure for machine learning
- learn what you need, when you need it, with full support from your colleagues
- VC-backed company with technical founders and investors
- multiple openings; we're looking for engineers with experience managing systems
- stack includes AWS, Docker, Kubernetes, Terraform and Ansible
- generous skill-based compensation, salary and equity
We are an early-stage product company working to build the next generation of AI for enhancing software development.
Our mission is to drive the inevitable AI transformation of the software industry to empower humans, while letting machines take care of our grunt work.
In order to achieve this, we’re creating next-generation Machine Learning models to solve problems that only humans could solve before.
About the role
We’re looking for infrastructure engineers who have a knack for making servers go.
Our current projects include:
- running experiments in a repeatable and measurable fashion
- making the lives of researchers easier
- managing the hardware to run experiments, going as fast as possible while keeping costs down
- productionising ML code to work at scale
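To give a flavour of what "repeatable and measurable" means in practice, here is a minimal sketch in Python (the function and config names are hypothetical, not our actual experiment runner): seed every run deterministically from its config, and tag the results with a hash of that config so any run can be reproduced and compared later.

```python
import hashlib
import json
import random

def run_experiment(config):
    """Run an experiment repeatably: derive the seed from the config
    and tag the results with a short hash of that config."""
    # Canonical JSON (sorted keys) so the same config always hashes the same.
    config_hash = hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()
    ).hexdigest()[:8]
    # Same config -> same seed -> same run.
    random.seed(config_hash)
    # Stand-in for the actual training/evaluation step.
    result = random.random()
    return {"config_hash": config_hash, "result": result}

# Two runs with the same config produce identical, traceable results.
a = run_experiment({"lr": 0.01, "epochs": 3})
b = run_experiment({"lr": 0.01, "epochs": 3})
assert a == b
```

The same idea scales up: hash the full config (and code version), seed every source of randomness from it, and store the hash alongside the metrics.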
Lots of this work is going to become open-source in the near future. (We ❤ open-source.)
Everyone at Prodo gets involved with everything. This doesn't mean they need to know everything, but it does mean they enjoy learning new things, especially outside their comfort zone. We take pride in delivering value over specific features, so we work wherever we can deliver the most value.
We believe in agile practices. We love to pair when we can, we test-drive large amounts of our code, we refactor early and often, and we generally think it is better to try a few things than to spend lots of time designing up front.
We always try to use the best technology for the job, and we change what we use with the job.
We are big fans of automation, pair programming, type systems, sushi, squash, flexible working hours, and healthy amounts of sleep.
Our stack will never be set in stone, and newcomers will have the opportunity (and responsibility) to question and improve any technical choices made before they joined. But just to give you a flavour of our stack today, we are currently using:
- AWS to host GPU instances for neural network training
- PyTorch for ML modelling
- Kubernetes for orchestrating workloads
- Terraform for spawning VMs
- Ansible for managing them when they’re up
- Packer for building AMIs and Docker images
- Python for our in-house experiment runner
We don’t expect you to be an expert in all of the above, but we do expect that you’re willing to learn how we currently do things and to help us improve.
How to apply
Does this sound intriguing? Please email us at firstname.lastname@example.org with a CV or some links to your profile (or previous work) to start a discussion.