How we like our DX

by [Fran]

# Intro
Although we are flexible because we are accustomed to a diversity of clients, we do have our preferences regarding developer experience (DX, if you tolerate buzzwords).
In this article I will try to make our guiding principles explicit, to provoke discussion and evolution.
I will be writing about what we do as a team, but keep in mind that this is my personal interpretation.
## Preliminaries
- This is subjective and social at the same time, so it is impossible for us to agree 100%. Some room to accommodate differences must be allowed and, due to its nature, cannot be systematized (there will always remain an irrational "residual"). May the judgement be with you.
- We are tackling a complex problem with many dimensions that interact in ways we cannot predict, so the best we can have is a set of principles that are compatible and easy to prioritize most of the time, but we cannot expect that to always be the case.
- Usually we agree on the abstract principles but disagree on the concrete implementations, and on prioritization when principles collide. Arm yourself with charity and tolerance for experiments.
# Principles
## Single source of truth
- We will assume this is a version-controlled repository of mainly code and documentation, represented as text files.
- Code will mainly be about automating the solution to some problem using one or more programming languages, but it usually also includes the code for the following (a hypothetical skeleton is sketched after this list):
- tool bootstrapping
- building (needed for compiled languages)
- testing
- deploying
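As a concrete illustration, here is a minimal sketch of such a repository skeleton; every file and directory name is a hypothetical example, not a convention we mandate.

```sh
#!/bin/sh
# Hypothetical skeleton: the code to bootstrap, build, test and deploy
# lives as version-controlled text files. All names are examples.
set -eu
mkdir -p src tests scripts docs
: > README.md               # documentation
: > Makefile                # building + command orchestration
: > scripts/bootstrap.sh    # tool bootstrapping
: > scripts/test.sh         # testing
: > scripts/deploy.sh       # deploying
```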
## Context
Nothing is 100% defined without its context but, sadly, context by its own nature is impossible to fully capture.
The more explicit we are with the requirements, the better.
We can agree on the need to be explicit about which compiler we require, and also on the absurdity of being explicit about how many calories the developer will need to expend building the project.
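That middle ground can often be made executable. A minimal sketch, assuming a GCC toolchain and a hypothetical required major version:

```sh
#!/bin/sh
# Sketch: make the compiler requirement explicit and checkable.
# The required major version (13) is a hypothetical example.
set -eu
required_major=13
major="$(gcc -dumpfullversion | cut -d. -f1)"
if [ "$major" -ne "$required_major" ]; then
  echo "error: gcc major version $required_major required, found $major" >&2
  exit 1
fi
echo "gcc $major OK"
```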
## Instructions as "living" code
If instructions are code, they can be tested.
- You can write them down in a README, but they will eventually go out of sync with the code.
- If you instead put them in a script and provide an example context in which to run it, developers can always test them (sketched below).
- This also helps to maintain Environment Parity.
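A minimal sketch of the idea, with a hypothetical `scripts/build.sh`; the README then only needs to point at the script, and CI runs the very same file, so the instructions cannot silently rot:

```sh
#!/bin/sh
# scripts/build.sh -- the build instructions as runnable (hence testable) code.
# The make targets are hypothetical examples.
set -eu
cd "$(dirname "$0")/.."    # always run from the repository root
make clean                 # start from a known state
make all                   # build the artifacts
make test                  # prove the instructions still work
```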
## Environment parity
Automated builds, tests and deploys should differ as little as possible from the ones developers run in their local environments.
- Example: a complex `.gitlab-ci.yml` is difficult to run locally. The more you factor out of it, the more you can reuse, as in the sketch below.
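A sketch of the factoring, with hypothetical names: the `.gitlab-ci.yml` job body reduces to a single line, `sh ci/check.sh`, and the real logic lives in the script, which developers run locally unchanged.

```sh
#!/bin/sh
# ci/check.sh -- single entry point shared by CI and developers.
# Invoked by CI as: sh ci/check.sh
set -eu
make build
make test
```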
## Tooling bootstrapping
- Easy environment bootstrapping
  - The project should impose as little as possible on the developer in terms of required tools.
  - Everything the developer needs to install on their computer before being able to do proper work is a security concern and an opportunity to diverge between team members.
    - Remember: security is not only related to malicious agents; it is mostly related to unintended errors, insufficiently contained, with unbounded consequences.
  - Depend as little as possible on the OS.
- Some things are reasonable to require, left to be manually satisfied by the developer (a check script is sketched after this list):
  - POSIX CLI tools like `ls`, `chmod`, `sh`
  - `make` for
    - smart artifact builds
    - command orchestration
  - `git` for
    - getting the code
    - managing versions (nothing replaces humans talking to each other)
  - `docker` for
    - building OS container images
    - running controlled environments for build, test and deploy
- Anything that is not reasonable to expect should be easy to bootstrap:
  - modern languages provide local environments for installing dependencies independently from the OS
  - when that is not possible or not enough, container tools like `docker` do the trick, but with many drawbacks to watch out for
  - in some cases, even VMs may be needed
- Nix is promising and we are testing it.
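The check script referenced above could look like this sketch; the tool list is an example, not a mandate:

```sh
#!/bin/sh
# scripts/bootstrap.sh -- fail fast, with a clear message, if any of the
# manually-satisfied requirements is missing.
set -eu
missing=0
for tool in sh make git docker; do
  if ! command -v "$tool" > /dev/null 2>&1; then
    echo "error: required tool not found: $tool" >&2
    missing=1
  fi
done
[ "$missing" -eq 0 ] || exit 1
echo "all required tools are present"
```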
# The repository
This is the thing we craft. We must be conscious and explicit about the repo's intended purpose and target audiences.
In version-controlled repos, many "versions" of the code exist at the same time: developers naturally fork and diverge temporarily until they converge to merge again.
- Everything that "happens" does so in an implicit version.
- We should be as careful as possible with the mixture:
  - a branch may contain new code for building the project that is not compatible with the main code.
  - developers may have caches, dirty states, etc. (see the sketch below).
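One low-cost mitigation, as a sketch (not a mandated workflow): tie every artifact to an explicit, clean commit.

```sh
#!/bin/sh
# Sketch: refuse to build from a dirty working tree and record the exact
# commit the artifacts came from. BUILD_VERSION is a hypothetical file name.
set -eu
if [ -n "$(git status --porcelain)" ]; then
  echo "error: working tree is dirty; commit or stash before building" >&2
  exit 1
fi
git rev-parse HEAD > BUILD_VERSION    # the implicit version, made explicit
```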
## Typical purposes
- store, share and synchronize code between developers who work on it
- be the base directory in which to execute the build instructions and generate artifacts
  - artifacts can be:
    - compiled binaries
    - HTML or PDF documents
    - visual images
    - file system images
    - and many other things...
- deploy the artifacts into some production environment (usually another computer)
## Typical audiences
- developers
- writing code
- building artifacts
- testing artifacts
- sysadmins
- building artifacts
- deploying artifacts
- automated "cloud" bots
- building artifacts
- testing artifacts
- deploying artifacts
# Dependencies
There are at least 3 sets that usually differ:
- build (compilers, library source files, etc.)
- test (compiled libraries, utilities like `pytest`)
- runtime (a subset of test, minus the utilities)

We should be careful not to mix them unnecessarily; a sketch follows.
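A sketch of keeping the separation explicit, assuming a Python-flavoured project; the `requirements*.txt` file names are hypothetical:

```sh
#!/bin/sh
# Sketch: install only the dependency set the current task needs.
set -eu
case "${1:-}" in
  build)   pip install -r requirements-build.txt ;;                    # compilers, source deps
  test)    pip install -r requirements.txt -r requirements-test.txt ;; # runtime plus pytest etc.
  runtime) pip install -r requirements.txt ;;                          # runtime only
  *)       echo "usage: $0 {build|test|runtime}" >&2; exit 2 ;;
esac
```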
# OS Containers
Besides their usefulness in creating a reproducible, uniform local environment for all developers, containers are also useful for documenting build, test and deploy instructions as code, and for executing them automatically in unmanned cloud environments.
- They do not necessarily replace humans doing those 3 things (which is usually desired). Their value resides in validating those instructions automatically, objectively and unattended, so we become aware when we break them.
Always remember that they add complexity, so a "native" run is preferable when feasible.
We are currently experimenting with Nix and we should investigate Landlock Make.
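To make the container approach concrete, here is a sketch with hypothetical image and target names; the same commands run attended on a laptop and unattended in CI:

```sh
#!/bin/sh
# Sketch: validate the build/test/deploy instructions inside a container.
# "myproject-build" and the make targets are hypothetical examples.
set -eu
docker build -t myproject-build .            # the Dockerfile documents the environment
docker run --rm myproject-build make test    # run the tests in a controlled context
docker run --rm myproject-build make deploy DRY_RUN=1   # exercise the deploy path safely
```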
# Testing
## Unit
- they must be runnable locally, without installing anything into the host OS
- they must not require special privileges (nor special hardware, if possible); a sketch follows
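A sketch of such a run, assuming a Python project and Docker; the image and file names are assumptions, not requirements:

```sh
#!/bin/sh
# Sketch: run the unit tests in a throwaway container, unprivileged,
# leaving the host OS untouched. python:3.12 is a hypothetical choice.
set -eu
docker run --rm \
  --user "$(id -u):$(id -g)" -e HOME=/tmp \
  -v "$PWD":/src -w /src \
  python:3.12 \
  sh -c "pip install --quiet --user -r requirements-test.txt && python -m pytest"
```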
## Integration
- a context must be defined in which the artifacts are "integrated":
  - OS + runtime dependencies
  - hardware (e.g., SDR, Raspberry Pi)
  - auxiliary services (e.g., DBs)
- install the artifacts in the canonical OS file-system image and run a trivial test like `prog --help`, just to catch silly things (see the sketch below).
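A sketch of that trivial check, with hypothetical image and binary names:

```sh
#!/bin/sh
# Sketch: install the artifacts in the canonical OS image and smoke-test them.
# "myproject" and "prog" are hypothetical names.
set -eu
docker build -t myproject .                      # the Dockerfile installs the artifacts
docker run --rm myproject prog --help > /dev/null
echo "smoke test passed"
```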