Hacker News | oncallthrow's comments

Just use rps

Shame that this report is LLM-generated slop.

A GitHub README.md without a torrent of AI-generated slop? Refreshing

Have you considered implementing a +- operator?

For example a +- b would be [a - b, a + b]
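As a sketch of what the proposed operator would compute, here is a minimal Go function (the name `plusMinus` and the pair-as-array return type are my own assumptions, since the original comment only specifies that `a +- b` yields `[a - b, a + b]`):

```go
package main

import "fmt"

// plusMinus models the proposed "+-" operator:
// a +- b produces the pair [a - b, a + b].
func plusMinus(a, b float64) [2]float64 {
	return [2]float64{a - b, a + b}
}

func main() {
	fmt.Println(plusMinus(10, 3)) // [7 13]
}
```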


I’ve spent dozens of hours reading about the conflict on social media. I don’t think I’ve seen a single western account, outside of schizophrenic conspiracy theorist anons, saying that Iran is some paradise that can do no wrong.


Yes, because that quite literally isn’t “news”. Western leaders including the pope have condemned jihadism for decades.


No, it will likely be a state actor who reaches it first, who will never give away such a capability so easily


No, and even if we could, it would require a migration approaching the same difficulty as a migration to PQ, at which point why not just migrate to PQ


I've read the entire page and still don't know whether or not I can import Go modules in this language, which seems rather important


The first example suggests yes.


Really? Almost every example imports something from Go, and it states "interoperability with the Go ecosystem" (or similar, from memory).


That isn’t the same thing. Indeed, upon reading further, it appears there is no way to import non-stdlib Go modules.


Support for Go third-party packages is not part of this first release, but the tooling to generate bindings for Go packages (which enables imports from the Go stdlib) is already in place[1]. Extending it to support third-party packages is on the roadmap.

[1] https://github.com/ivov/lisette/blob/main/tools/bindgen/READ...


I think this article is largely, or at least directionally, correct.

I'd draw a comparison to high-level languages and language frameworks. Yes, 99% of the time, if I'm building a web frontend, I can live in React world and not think about anything that is going on under the hood. But, there is 1% of the time where something goes wrong, and I need to understand what is happening underneath the abstraction.

Similarly, I now produce 99% of my code using an agent. However, I still feel the need to thoroughly understand the code, in order to be able to catch the 1% of cases where it introduces a bug or does something suboptimally.

It's possible that, in the future, LLMs will get _so_ good that I don't feel the need to do this, in the same way that I don't think about the transistors my code is ultimately running on. When doing straightforward coding tasks, I think they're already there, but I think they aren't quite at that point when it comes to large distributed systems.


> LLMs will get _so_ good that I don't feel the need to do this, in the same way that I don't think about the transistors my code is ultimately running on.

The problem is, they're nothing like transistors, and never will be. Transistors are simple: they either work or they don't, consistently, in an obvious or easily testable way.

LLMs are more akin to biological things. Complex. Not well understood. Unpredictable behavior. To be safely useful, they need something like a lion tamer, except every individual LLM is its own unique species.

I like working on computers because it minimizes the amount of biological-like things I have to work with.


I suppose transistors is a bad example.

Perhaps a better analogy would be the Linux kernel. It's built by biological humans, and fallible ones at that. And yet, I don't feel the need to learn the intricacies of kernel internals, because it's reliable enough that it's essentially never the kernel's fault when my code doesn't work.


The kernel is a bad analogy: if you understand how it behaves, you can understand how it's built. LLMs don't have that; their behavior is not completely defined by how they are built.

Every abstraction is leaky. It's not that 1 in every 100 tickets I work on needs an understanding of filesystem buffers; that knowledge sits in the back of my mind, always there. I haven't read the Linux kernel source, but I know it exists. LLM output doesn't have that.


So we already have this problem and things are "fine"?


In my personal experience, the rate at which Claude Code produces suboptimal Rust is way higher than 1%.


That is dependent upon the quality of the AI. The argument is not about the quality of the components but the method used.

It's trivial to say using an inadequate tool will have an inadequate result.

It's only an interesting claim to make if you are saying that no attainable quality of the tool can produce an adequate result (in this argument, the adequate result in question is a developer with an understanding of what they produce).

