
That's literally point #2, but I had the same reaction as you when I first read point #1 :)




I agree with #2, I meant more that if you are calling out to something that is not a task runner (Make, Taskfile, Just, etc.) or a shell script, that's a bit of a smell to me. E.g. I have seen people call out to Python scripts and it concerns me.

My software runs on Windows, Linux and macOS. The same Python testing code runs on all three platforms. I mostly dislike Python but I can't think of anything better for this use case.

You might consider Deno with TypeScript... it's a single-executable runtime with a self-update mechanism (`deno upgrade`), and it can run TypeScript/JavaScript files that directly reference the registry/HTTP modules they need, without a separate install step for dependency management.

I've been using it for most of my local and environment scripting since relatively early on.


I don't touch Windows so I would not know.

> The same Python testing code runs on all three platforms.

I have no objections to Python being used for testing, I use it myself for the end to end tests in my projects. I just don't think Python as a build script/task runner is a good idea, see below where I got Claude to convert one of my open source projects for an example.


It's interesting because #1 is still suggesting a shell script, it's just suggesting a better shell to script.

I had no idea 'pwsh' was PowerShell. Personally not interested; maybe if you're a Microsoft shop or something, then yeah.

"pwsh" is often used as the short-hand for modern cross-platform PowerShell to better differentiate it from the old Windows-only PowerShell.

I think pwsh is worth exploring. It is cross-platform. It came after Python and the Python mantra that "~~code~~ scripts are read more often than they are written". It provides a lot of nice tools out of the box. It's built in an "object-oriented" way, resembling Python and owing much to C#. Done well, that "object-oriented" approach provides a number of benefits over the "dumb text pipes" that shells like bash were built on. And it is easy to extend with C# and a few other languages, should you need to.

I would consider not dismissing it offhand, without trying it, just because Microsoft built it and/or because it was Windows-only for a while.


It's also both a larger download and slower to start than Java, which is not known for being light and nimble. In fact, PowerShell is so slow that you can both compile and run the equivalent C# program before PowerShell finishes launching. Not ideal for a shell or a scripting language.

Also, the newer versions aren't included with Windows, which would have been useful – instead Windows includes an incompatible older version that admonishes you to download the new version. But why would you download several hundred megabytes of pwsh when you can equally well download any other language runtime?

Also, it sends "telemetry" to Microsoft by default.

Also, the error handling is just awful, silencing errors by default, requiring several different incantations to fix.

Also, the documentation is vague and useless. And the syntax is ugly.


It gets faster to boot on subsequent launches and some distros are now packaging pre-baked versions.

The new versions aren't included in Windows and the old versions are still in Windows for the exact same reasons of Windows backwards compatibility requirements. But at this point the bootstrap on Windows is as easy as `winget install --id Microsoft.PowerShell`.

The error handling isn't far from the bash defaults, but the magic incantations at least tell you what they do: the number of bash scripts littered with `set -euxo pipefail` is about the same as the number of PowerShell scripts that need an `$ErrorActionPreference = "Stop"` and/or a `$PSNativeCommandUseErrorActionPreference = $true`.

I find the documentation less vague and more useful than the average `man` page and the syntax is fine (and better than bash) to me, but I understand how much of that is personal preference and familiarity.


It's actually a pretty good shell! FOSS and cross-platform, too.

Huh? Who cares if the script is .sh, .bash, Makefile, Justfile, .py, .js or even .php? If it works it works, as long as you can run it locally, it'll be good enough, and sometimes it's an even better idea to keep it in the same language as the rest of the project. It all depends, and what language a script is written in shouldn't be considered a "smell".

Once you get beyond shell, make, docker (and similar), dependencies become relevant. At my current employer, we're mostly in TypeScript, which means you've got NPM dependencies, the NodeJS version, and operating system differences that you're fighting with. Now anyone running your build and tests (including your CI environment) needs to be able to set all those things up and keep them in working shape. For us, that includes different projects requiring different NodeJS versions.

Meanwhile, if you can stick to the very basics, you can do anything more involved inside a container, where you can be confident that you, your CI environment, and even your less tech-savvy coworkers can all be using the exact same dependencies and execution environment. It eliminates entire classes of build and testing errors.


I used to have my Makefile call out and do `docker build ...` and `docker run ...` etc. with a volume mount of the source code, to manage and maintain tooling versions.

It works okay, better than a lot of other workflows I have seen. But it is a bit slow, a bit cumbersome (for languages like Go or Node.js that want to write to HOME), and I had some issues on my ARM MacBook with images having no ARM builds.

I would recommend taking a look at Nix, it is what I switched to.

* It is faster.
* Has access to more tools.
* Works on ARM, x86, etc.


You mean nix inside a container? Or what exactly?

I replaced building/testing etc. inside Docker containers with just using Nix.

* https://github.com/DeveloperC286/clean_git_history/commit/f8...


I've switched to using Deno for most of my orchestration scripts, especially shell scripts. It's a single portable, self-upgradeable executable, and your scripts can directly reference the registries/HTTP(S) modules/versions they need to run without a separate install step.

I know I've mentioned it a few times in this thread; I'm just a very happy user and have found it a really good option for a lot of use cases. I'll mostly just use the Deno.* methods or the @std modules on JSR for most things at this point, but there's also npm:zx which can help depending on what you're doing.

It's also a decent option for e2e testing regardless of the project's language.


Shell and bash are easy to write insecurely and open your CI runners or dev machines up for exploitation by shell injection. Non-enthusiasts writing complex CI pipelines pulling and piping remote assets in bash without ShellCheck is a risky business.

Python is a lot easier to write safely.
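A rough sketch of what I mean: with an argument list there is no shell re-parsing the value, so the classic interpolation injection simply can't happen (the `ref` value here is made up for illustration):

    import subprocess

    # Hypothetical untrusted value, e.g. a branch name or PR title reaching CI.
    ref = 'main"; echo pwned #'

    # Risky pattern: splicing the value into a command string that a shell parses
    # (the same thing happens when CI templates interpolate untrusted text into run: steps).
    subprocess.run(f'echo checking out "{ref}"', shell=True)  # also runs the injected `echo pwned`

    # Safer: pass an argument vector. No shell parses the value, so it stays one argument.
    subprocess.run(["echo", "checking out", ref], check=True)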


You shouldn't be pulling untrusted assets in CI regardless. Hacking your bash runner is the hardest approach anyway; it's easier to just patch some subroutine in a dependency that you'll call during your build or tests.

> Huh? Who cares if the script is .sh, .bash, Makefile, Justfile, .py, .js or even .php?

Me. Typically I have found it to be a sign of over-engineering, with no benefits over just using a shell script/task runner, as all it should be is plumbing that is simple enough for a task runner to handle.

> If it works it works, as long as you can run it locally, it'll be good enough,

Maybe when it is your own personal project, "if it works it works" is fine. But when you come to a corporate environment, I have found there start to be issues of readability, maintainability, proprietary tooling, additional dependencies, etc. when people over-engineer and use programming languages (like Python).

E.g.

> never_inline:

> Build a CLI in python or whatever which does the same thing as CI, every CI stage should just call its subcommands.

However,

> and sometimes it's an even better idea to keep it in the same language as the rest of the project

I'll agree. Depending on the project's language etc., other options might make sense. But personally, so far every time I have come across something not using a task runner it has just been the wrong decision.


> But personally, so far every time I have come across something not using a task runner it has just been the wrong decision.

Yeah, tends to happen a lot when you hold strong opinions with strong conviction :) Not that it's wrong or anything, but it's highly subjective in the end.

Typically I see larger issues being created from "under-engineering" and just rushing with the first idea people can think of when they implement things, rather than "over-engineering" causing similarly sized future issues. But then I also know everyone's history is vastly different; my views are surely shaped more by the specific issues I've witnessed (and sometimes contributed to :| ) than by anything else.


> Yeah, tends to happen a lot when you hold strong opinions with strong conviction :) Not that it's wrong or anything, but it's highly subjective in the end.

Strong opinions, loosely held :)

> Typically I see larger issues being created from "under-engineering" and just rushing with the first idea people can think of when they implement things, rather than "over-engineering"

Funnily enough, I think running with the first idea is creating a lot of the "over-engineering" I am seeing: not stopping to consider other, simpler solutions, or even whether the problem needs/is worth solving in the first place.

> Yeah, tends to happen a lot when you hold strong opinions with strong conviction :) Not that it's wrong or anything, but it's highly subjective in the end.

I quickly asked Claude to convert one of my open source repos from Make/Nix/Shell -> Python/Nix to see how it would look. It is actually one of the better Python-as-a-task-runner setups I have seen.

* https://github.com/DeveloperC286/clean_git_history/pull/431

While the Python version is not as bad as I have seen previously, I am still struggling to see why you'd want it over Make/Shell.

It introduces more dependencies (Python, which I solved via Nix, but others haven't solved this problem) and the Python script itself has dependencies (such as Click for the CLI).

It is less maintainable as it is more code, roughly 3x the amount of the Makefile.

To me the Python code is more verbose and not as simple as the Makefile's targets, so it is less readable as well.
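To give a flavour of the shape, the targets end up looking roughly like this (a hypothetical sketch, not the actual code from my PR; the commands are placeholders):

    # tasks.py -- hypothetical sketch of a Click-based task runner
    import subprocess

    import click

    @click.group()
    def cli():
        """Project task runner; each CI stage calls one subcommand."""

    @cli.command()
    def lint():
        """Run the linters (placeholder command)."""
        subprocess.run(["echo", "linting..."], check=True)

    @cli.command()
    def test():
        """Run the test suite (placeholder command)."""
        subprocess.run(["echo", "running tests..."], check=True)

    if __name__ == "__main__":
        cli()

Each Makefile target becomes a decorated function plus a subprocess call, which adds up quickly compared to a one-line Make target.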


> It introduces more dependencies (Python, which I solved via Nix, but others haven't solved this problem) and the Python script itself has dependencies (such as Click for the CLI).

uv scripts are great for this type of workflow.

There are even scripts which will install uv from within the same file, effectively making it equivalent to just running ./run-file.py; that handles all the dependency management, the Python version management, everything included, and it works everywhere.

https://paulw.tokyo/standalone-python-script-with-uv/
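The basic shape (leaving out the uv self-install part the post describes) is uv's inline script metadata plus a `uv run` shebang, roughly:

    #!/usr/bin/env -S uv run --script
    # /// script
    # requires-python = ">=3.12"
    # dependencies = [
    #   "requests",
    # ]
    # ///
    # Hypothetical example task: uv fetches a matching Python and the deps on first run.
    import requests

    resp = requests.get("https://api.github.com/zen", timeout=10)
    resp.raise_for_status()
    print(resp.text)

Make it executable and ./run-file.py works anywhere uv is installed.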

Personally I end up just downloading uv, and so don't use the uv download script from this, but if I am using something like GitHub Actions, which is more ephemeral, I'd just do this.

Something like this can start out simple and can scale much further than bash, whose limitations can be abundant at times.

That being said, I still write some shell scripts, because executing other applications is first-class in bash but not so much in Python. After discovering this, though, I might create some new scripts in Python with automated uv, because I end up installing uv on many devices anyway (uv's really good for Python).

I am interested in Bun Shell as well, but that feels far too bloated and isn't used by many, so there's less AI assistance at times; I also haven't really understood Bun Shell yet, so bash is usually superior to it for me.


> uv scripts are great for this type of workflow

So previously, when I have seen Python used as a task runner, I think they used uv to call it. Although I don't think they had as complete a solution as yours here, auto-installing uv etc.

Although the example you've linked installs uv if it is missing, the uv version is not pinned, and I also don't think it handles a missing Python, which is not pinned even if it is installed locally. So you could get different versions in CI vs locally.

While, yes, you are removing some of the dependency problems created by using Python over Make/shell, I don't think this completely solves them.

> Something like this can start out simple and can scale much further than bash, whose limitations can be abundant at times

I personally haven't witnessed any time I would consider the scales to have tipped in favour of Python, and I would be concerned if they ever did, as really the task runner should just be plumbing, so it should be simple.

> That being said, I still write some shell scripts, because executing other applications is first-class in bash but not so much in Python. After discovering this, though, I might create some new scripts in Python with automated uv, because I end up installing uv on many devices anyway (uv's really good for Python).

Are you using Python/uv to do anything more complex than my example PR above?


I think uv scripts can/will actually install Python and manage it themselves as well, and you can pin a specific version of Python itself via uv scripts.

I copied this from their website (https://docs.astral.sh/uv/guides/scripts/#declaring-script-d...)

uv also respects Python version requirements (example.py):

    # /// script
    # requires-python = ">=3.12"
    # dependencies = []
    # ///

    # Use some syntax added in Python 3.12
    type Point = tuple[float, float]
    print(Point)

> Are you using Python/uv to do anything more complex than my example PR above?

I can agree that this might be complex, but that complexity has a trade-off, and of course nothing is one-size-fits-all. There are times when someone has to manage a complex CI environment, and I looked around and there are some deterministic CI options too, like Invoke; when you combine all of these, I feel like the workflow can definitely be interesting, to say the least.

Once again, I don't know what really ends up in GitHub Actions since I have never really used it properly; I am basing my critiques on what I've read, on what solutions come up (Python came up quite frequently), and on something I discovered recently (the blog linked above).


This thing does a global uv install when run? That's obnoxious! Never running stuff from whoever wrote this.

Oh, and later the author suggests the script modify itself after running. What the fuck. Absolutely unacceptable way to deploy software.


Does it really matter whether it's a global install of uv or not, especially on GitHub Actions?

Also, if this still bothers you, nothing stops you from removing the first x lines of code and keeping them in another .py file if it feels obnoxious to you.

> Oh, and later the author suggests the script modify itself after running. What the fuck. Absolutely unacceptable way to deploy software.

Regarding the author suggesting the script remove itself: that's because it does still feel cluttered, but there is virtually zero overhead in keeping it if you are already using uv or want to use uv.

Oh also, I am not the author, but I have played extensively with uv and I feel like the script can definitely be changed to install it locally rather than globally.

They themselves mention it as #overkill on their website, but even then it is better than whatever GitHub Actions is.


I'm a huge believer in the rule that everything GH actions does should be a script you can also run locally.

Yes, I believe the same, and I think we have the same goal. I think I can probably patch this code to install uv locally instead of globally, if that's a major concern; I feel like it's not that hard.

It's easy enough to patch. It's the philosophy that bugs me. We already have a huge problem with routine workflows pulling things from the network (often without even a semblance of hash-locking) and forgoing the traditional separation between environment setup and business logic. There's a lot of value in having discrete steps for downloading/installing stuff and for doing development, because then you can pay special attention to the former, look for anything odd, read release notes, and so on. Between explicit, human-solicited upgrades, dev workflows should ideally be using vendored dependencies, or, if not that, then at least stuff that's hash-verified end-to-end.

Someday, someone is going to have a really big disaster that comes out of casually getting unauthenticated stuff from somebody else's computer.


I agree with you and you raise some good points

I think your worry is either that A) packages can update and contain malware, or B) uv's installation itself might contain malware, if either A) or B) gets hacked.

Regarding A), uv's dependencies can be pinned to a certain date to make them reproducible, and that can help (https://docs.astral.sh/uv/guides/scripts/#improving-reproduc...).
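i.e. an `exclude-newer` cutoff in the script metadata, something like this (the date is just an example; see the linked docs for the exact semantics):

    # /// script
    # dependencies = [
    #   "requests",
    # ]
    # [tool.uv]
    # exclude-newer = "2025-01-01T00:00:00Z"  # example cutoff: only releases published before this date are considered
    # ///
    import requests

    print(requests.__version__)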

Regarding B), they provide attestations via GitHub Artifact Attestations, and the script could once again be modified to actually verify them. They also provide GHCR artifacts (and thus immutability), at least for the Docker images. Looking further into it, it seems you can use GitHub artifacts to upload normal binary files as well, so I will probably take a look at whether I can do something like this for uv's GHCR.

Effectively, after A) and B), the trust just ends up relying on GitHub's/Microsoft's infrastructure (usually), and perhaps the Python infrastructure, which is on Fastly.

But I feel like this is for cases of extremely sensitive workflows. Still, I might take a look at it anyway, because security feels very interesting to me, and just because of this discussion I can see some pointers worth following up on out of curiosity lol

Anyway, I would love to continue our discussion and will probably update you on trying to make a script which could actually be completely pinned (at least the uv binary, instead of just running a shell script from the Astral servers in that case).


Using shell becomes deeply miserable as soon as you encounter its kryptonite, the space character. Especially but not limited to filenames.

I find that shell scripting has a sharp cliff. I agree with the sentiment that most things are over-engineered. However, it's really easy to go from a simple shell script running a few commands to something significantly more complex just to do something seemingly simple, like parse a semantic version, make an API call and check the status code, etc.

The other problem with shell scripting on things like GHA is that it's really easy to introduce security vulnerabilities by, e.g., forgetting to quote your variables and letting an uncontrolled input through.

There's no middle ground between bash and Python, and a lot of functionality lives in that space.


> However, it's really easy to go from a simple shell script running a few commands to something significantly more complex just to do something seemingly simple, like parse a semantic version, make an API call and check the status code, etc.

Maybe I keep making the wrong assumption that everyone is using the same tools the same way, and that's why my opinions seem very strong. But I wouldn't even think of trying to "parse a semantic version" in shell; I treat the shell scripts and task runners as plumbing, and I would hand that off to a dedicated tool.


Let's say I have a folder of tarballs and need to install the latest version. I could reach for an additional "dedicated" tool, get it installed into the CI environment and then incorporate it into the build process, or I could just make a slight modification to my existing shell script and do something like "ls mypkg-*.tar.gz | xargs -n1 | sort -Vr | head -n1" and then move on. But then we start publishing multiple release candidates and now I need to add to that logic to further distinguish between earlier and later rc versions. And so on and so forth…
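(For comparison, the Python side of that cliff would be something like this: a sketch using the third-party `packaging` library, which already understands rc ordering.)

    from pathlib import Path

    from packaging.version import Version

    def latest_tarball(directory: str = ".", prefix: str = "mypkg-") -> Path:
        """Return the tarball with the highest version; rc releases sort before the final release."""
        def version_of(path: Path) -> Version:
            # "mypkg-1.2.0rc1.tar.gz" -> Version("1.2.0rc1")
            return Version(path.name[len(prefix):-len(".tar.gz")])

        # Raises ValueError if no matching tarballs are found.
        return max(Path(directory).glob(f"{prefix}*.tar.gz"), key=version_of)

    print(latest_tarball())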

Now I'm in agreement with you that this is a bad fit for shell scripting, but it is often pragmatic and expedient. And because there is a cliff between bash and (say) Python, some of the time you're going to choose the path of least resistance.

Now scale this out to a small team of engineers all facing the same dumb decision of needing to make some tradeoff when they would much rather be writing application logic. The lack of a ubiquitous and robust intermediate language leads to brittle CI fraught with security vulnerabilities.

While the example I provided is a bit contrived, this behavior isn’t hypothetical. I see it everywhere I’ve worked.


Yes, I see what you're saying: it can start off simple, people will choose the path of least resistance, and you'll end up with a ball of mud eventually. Fortunately I have not come across anything that bad yet.

I know it is a made-up example, but I am not sure Python would improve the situation. I would be looking at either Docker or SBOMs to try to improve things, and then hopefully the release process would become a lot simpler.


yeah, imagine having to maintain a Python dependency (which undergoes security constraints) all because some junior can't read/write bash... and then that junior telling you you're the problem lmao


