I don't agree with (1), but agree with (2). I recommend just putting a Makefile in the repo and having it define CI targets, which you can then call from CI via a simple `make ci-test` or similar. And don't make the Makefiles overcomplicated.
Of course, if you use something else as a task runner, that works as well.
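For illustration, a minimal sketch of what calling a Makefile target from CI could look like on GitHub Actions (the file path and the `ci-test` target are just placeholder names):

```yaml
# .github/workflows/ci.yml — the workflow stays a thin wrapper
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # What "ci-test" actually does is defined in the repo's Makefile
      - run: make ci-test
```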
For certain things, Makefiles are great options. For others, though, they are a nightmare. From a security perspective, especially if you are trying to reach SLSA level 2+, you want all the build execution to be isolated and executed in a trusted, attestable and disposable environment, following predefined steps. Having Makefiles (or scripts) with the logical steps inside them makes it much, much harder to produce properly attested outputs.
Using Makefiles mixes execution contexts between the CI pipeline and the code within the repository (which ends up containing the logic for the build), instead of using centrally stored, external workflows that contain all the business logic for the build steps (e.g., compiler options, Docker build steps, etc.).
For example, how can you attest in the CI that your code is tested if the workflow only contains "make test"? You need to double-check at runtime what the Makefile did, but the Makefile might have been modified by that time, so you need to build a chain of trust, etc.
Instead, in a standardized workflow, you just need to establish the ground truth (e.g., tools are installed and are at this path), and the execution cannot be modified by in-repo resources.
That doesn't make any sense. Nothing about SLSA precludes using make instead of some other build tool. Either inputs to a process are hermetic and attested or they're not. Makefiles are all about executing "predefined steps".
It doesn't matter whether you run "make test" or "npm test whatever": you're trusting the code you've checked out to verify its own correctness. It can lie to you either way. You're either verifying changes or you're not.
You haven't engaged with what I wrote; of course it doesn't make sense.
The easiest and most accessible way to attest what has been done is to have all the logic of what needs to be done in a single context, a single place. For example, a reusable workflow that is pinned by hash, executed in a trusted environment, and that will execute exactly those steps.
In this case, step A does x, and step B attests that x has been done, because the logic is immutably in a place that cannot be tampered with by whoever invokes that workflow.
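As a rough sketch, assuming GitHub Actions, a repo would invoke such a reusable workflow like this (the org, path, input and the commit hash are all placeholders):

```yaml
jobs:
  build:
    # Pinned to an immutable commit hash, not a mutable tag or branch
    uses: my-org/platform-workflows/.github/workflows/build.yml@<full-commit-sha>
    with:
      image-name: my-service   # hypothetical input exposed by the reusable workflow
    secrets: inherit
```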
In the case of the Makefile, in most cases, the Makefile (and therefore the steps to execute) will be in a file in the repository, i.e., under partial control of anybody who can commit and under full control of those who can merge.
If I execute a CI run and step A now says "make x", the semantics actually depend on what the Makefile in the repo contains, so the contexts are mixed between the GHA workflow and the repository content.
Any step of the workflow now can't attest directly that x happened, because the logic of x is not in its context.
Of course, you can do everything in the Makefile, including the attestation steps, bringing them back into the same context, but that means that once again the security-relevant steps are in a potentially untrusted environment. My thinking specifically hints at the case of an organization with hundreds of repositories that need to be brought under control.
Even more, what I am saying makes sense if you want to use the objectively convenient GH attestation service (probably one of the only good features they pushed in the last 5 years).
Usually, the people writing the Makefile are the same ones who could also be writing this stuff out in a YAML (lol) file as the CI instructions, often located in the same repository anyway. The irony in that is striking. And then we have people who can change environment variables for the CI workflows. Usually also developers, often the same people who can commit changes to the Makefile.
I don't think it changes much, aside from security theater. If changes are not properly reviewed, then all the fancy titles will not help. If anything, using Make will allow for a less flaky CI experience, one that doesn't break the next time the Git hoster changes something about their CI language and doesn't suffer from YAMLitis.
You're correct. It's absolutely security theater. Either you trust the repository contents or you don't. There's no, none, zilch trust improvement arising from the outer orchestration being done in a YAML file checked into the repo and executed by CI instead of a Makefile also executed by CI.
What's the threat model Wilder is using exactly? Look, I'm ordinarily all for nuance and saying reasonable people can disagree when it comes to technical opinions, but here I can't see any merit whatsoever to the claim that orchestrating CI actions with Make is somehow a security risk when the implementations of these actions at some level live in the repo anyway.
> when the implementation of these actions at some level live in the repo anyway
This is the false assumption. You can standardize that not to happen, and you can verify at runtime that's not the case.
You can control admission of container images, for example, restricting it to those that were built by your workflow X (the approved, central, controlled, standard one) and rejecting anything else. You do this via build provenance attestation.
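As a sketch of what that verification could look like with the GitHub CLI (the image, org and workflow path are placeholders, and the exact flags depend on the gh version you have):

```yaml
- name: Admit only images built by the approved workflow
  run: |
    # Fails unless the image has a provenance attestation produced by the central workflow
    gh attestation verify oci://ghcr.io/my-org/my-service:1.2.3 \
      --owner my-org \
      --signer-workflow my-org/platform-workflows/.github/workflows/build.yml
  env:
    GH_TOKEN: ${{ github.token }}
```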
With a Makefile I don't know how you can achieve standard, centrally manageable (and verifiable) instructions (which are part of SLSA).
> With a Makefile I don't know how you can achieve standard, centrally manageable (and verifiable) instructions (which are part of SLSA).
The way I'm thinking about it, we distinguish instructions from policy. Policy, of course, has to be imposed from outside a package somehow, but the instructions executed within that policy seem like things that should evolve with the package. For example, "no network access" might be an externally enforced policy. Another might be limitations on dependencies, enforced by making exceptions to the previous policy.
But in the end, you have to do something. Are you suggesting a model in which a project's entire test suite lives in a separate repository with an access control policy that differs from that project's so that the project's authors can't cheat on test coverage or pass rate?
Sure, you can do that, but the instructions for running those tests still have to live somewhere, and a makefile in that test repository seems like as good a place as any.
A classic example would be container image build instructions.
Of course, other examples are running security static analysis, running tests, or whatever other standard practice is desired for security.
Say that you build 100 images, each with its own repository containing the code, the Dockerfile, etc.
Now, if you build images with "make build", it means that ultimately the actual command-line flags for the build command are going to be in the Makefile, inside each repository. These could include flags like --cache-from, build arguments, --add-host, or many others that may not be desirable in an organization.
Imagine what the workflow file would look like for such a case: you have your in-repo CI file that clones the repo and then executes make test, make build, make push, etc.
Do you know if make push sent the OCI registry secret somewhere else on the internet? Do you know exactly what flags were used to build? Not really; you need to trust the code inside the repo to, at most, collect this information, and how do you know it was not tampered with or spoofed?
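Roughly, that thin-wrapper workflow would look like this (names are placeholders):

```yaml
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Each step is just a target; flags, registries and push destinations
      # are decided by whatever Makefile is in the checked-out commit
      - run: make test
      - run: make build
      - run: make push
```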
Compare it with the following scenario:
All 100 repos at some point in their CI invoke an external workflow, which contains the instructions to build images, push them to the registry (and run tests before, whatever). This can obviously also be a combination of workflows; I am saying one for simplicity.
This workflow (or workflows) at the end produces an attestation, including provenance data about what was run.
It can do so reliably, because the instructions of what had to be done are within its own context. There is no input coming from outside (e.g., from the repository) that determines build flags or instructions, beyond what the workflow explicitly exposes (e.g., a few variables).
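To make it concrete, here is a sketch of what such a central reusable workflow could look like, assuming GitHub Actions, Docker, and the actions/attest-build-provenance action; the org, registry, inputs and build flags are placeholders, and registry login is omitted:

```yaml
# platform-workflows/.github/workflows/build.yml (placeholder path)
name: standard-image-build
on:
  workflow_call:
    inputs:
      image-name:
        required: true
        type: string

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      id-token: write       # required to sign the provenance attestation
      attestations: write
    steps:
      - uses: actions/checkout@v4
      - name: Build and push with fixed, centrally defined flags
        id: build
        run: |
          image="ghcr.io/my-org/${{ inputs.image-name }}"
          docker build -t "$image" .            # no caller-controlled flags here
          docker push "$image"
          # Capture the pushed digest so it can be attested
          digest="$(docker image inspect "$image" --format '{{index .RepoDigests 0}}' | cut -d@ -f2)"
          echo "digest=$digest" >> "$GITHUB_OUTPUT"
      - name: Attest build provenance
        uses: actions/attest-build-provenance@v2
        with:
          subject-name: ghcr.io/my-org/${{ inputs.image-name }}
          subject-digest: ${{ steps.build.outputs.digest }}
          push-to-registry: true
```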
In the second case, I can then use a policy that restricts admission to images that have an attestation and that were built by my workflow(s), which contain standard, deterministic instructions. This way I know that no image was built using weird build flags, pulling in strange dependencies, maybe tampering with the network and DNS (in a Makefile I can write whatever) or the local Docker cache, etc.
I know this because a standard workflow was executed in a fresh, "sterile" environment with a standard set of instructions that can't be changed or tampered with by those controlling the repository content.
Note that all of the above is solely concerned with being able to make security statements about the build artifact produced, not about the context of the build (e.g., no network access).
That's a great point. If we keep following the requirement for attestation to its logical conclusion, we would end up replicating the entire server running the repository at the source, and then the cycle repeats.
Not really, and that's the point. Reusable workflows in a tightly controlled repo avoid exactly what you are describing, and they are a fairly standard practice (if anything, also to avoid having 200 versions of CI instructions).
You can also verify attestation provenance by enforcing that attestation is performed via that particular, approved workflow, which is not security theater; it's an actual, real control.
Makefile or scripts/do_thing, either way this is correct. CI workflows should do only one thing per step. That one thing should be a command. What that command does is up to you in the Makefile or scripts. This keeps workflows/actions readable and mostly reusable.
Neither do most people, probably, but it's kinda neat how their suggested fix for GitHub Actions' ploy to maintain vendor lock-in is to swap it with a language invented by that very same vendor.