Hacker News | kodablah's comments

> Saying there is no opt-out is just false

I can't see where one can opt out of this new behavior and back into the existing behavior, only a description of the new behavior's bypass (which is not the same thing at all).

> easy to bypass the cooling-off period with ADB

I don't think this is a reasonable use of the term "easy". I should be able to give my non-technical friend an apk and they can use it right then, with the one "are you very sure" screen.


> I should be able to give my non-technical friend an apk and they can use it right then

Unfortunately, that is the same vector that scammers use to drain people's bank accounts.


I will say, an underrated use case for even small, local LLMs is making command-line tools drastically more accessible to laypeople.

I now know zero people who I think shouldn't use Linux, and the people I know seem to run quite a gamut of technical know-how compared to most other technical folks I know.


The way you give your non-technical friends an APK and they just install it is by you signing it.

But I want to let someone MITM my non-technical friend and replace my APK with malware.

> I can't see where one can opt-out of this new behavior and into the existing behavior, only a description of the new behavior's bypass (which is not the same thing at all)

I don't understand this; the ability to bypass new behavior in the settings menu is basically the definition of a new feature having an opt-out. Can you elaborate?


> If you dont have analytics you are flying blind

More like flying based on your knowledge as a pilot and not by the whims of your passengers.

For many CLIs and developer tooling, principled decisions need to reign. Accepting the unquantifiability of usage in a principled product is often difficult for those who are not the target demographic, but for developer tools specifically (be they programming languages, CLIs, APIs, SDKs, etc), cohesion and common sense are usually enough. It also seems really hard for product teams to accept the value of the status quo with these existing, heavily used tools.


Actually it's more like flying in the clouds with no instruments which can lead to spatial disorientation when you exit the cloud cover and realize you're nosediving towards the earth. https://en.wikipedia.org/wiki/Spatial_disorientation

Flying based on the whims of your passengers would be user testing/interviewing, which is a complementary, and IMO necessary, strategy alongside analytics.


> You can try the technical preview today by running npx cf. Or you can install it globally by running npm install -g cf.

A couple of obvious questions: Is it open source (the npmjs page doesn't point to a repo)? And in general, will it be available as a single binary instead of requiring nodejs tooling to install/use? If so, using the recently-acquired Bun or another product/approach?


I can't find any repository, either, but the package is listed as MIT-licensed and includes source maps, so I assume it will be published soon.


I suppose you could probably legally justify claude-code-ing the package from the source maps by the license if they don't...
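For what it's worth, the Source Map v3 format's optional `sourcesContent` array embeds the full original source text, so recovering the files can be purely mechanical. A minimal sketch (the example map is made up for illustration):

```python
import json

def extract_sources(map_text: str) -> dict:
    """Pull embedded original sources out of a Source Map v3 document.

    The spec's optional `sourcesContent` array, when present, holds the
    full text of each corresponding entry in `sources`.
    """
    m = json.loads(map_text)
    sources = m.get("sources", [])
    contents = m.get("sourcesContent") or []
    return {src: text for src, text in zip(sources, contents) if text is not None}

# Made-up minimal map for illustration; real ones come from published .js.map files.
example = json.dumps({
    "version": 3,
    "sources": ["src/index.ts"],
    "sourcesContent": ["export const hello = () => 'hi';\n"],
    "mappings": "",
})
assert extract_sources(example) == {"src/index.ts": "export const hello = () => 'hi';\n"}
```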


It is also in that book, page 36/37, with transcription and minor note on issues with ISS toilets in 2008.


Duralade - a programming language for durable execution (but has many neat aspects)

Most of the work as of today is in a branch; you can see the language spec at https://github.com/cretz/duralade/blob/initial-runtime/docs/... and some samples at https://github.com/cretz/duralade/tree/initial-runtime/sampl....

May not amount to anything, but the ideas/concepts of this durable language are quite nice.


I'm curious what advantages this has over adding durability to an existing language, like DBOS does:

https://github.com/dbos-inc/dbos-demo-apps/blob/main/python/...


Modern languages are not safe enough, nor are they very amenable to versioning, serialization, resumption, etc. It makes sense for modern durable execution engines to meet developers where they are (I wrote multiple of the SDKs at Temporal, including the Python one; this is just a fun toy side project), but a purpose-built language that has serialization, patching, wait conditions, kwargs everywhere, externalizing side effects, etc, etc, etc is a big win vs something like Python.

Admittedly the lang spec doesn't do a great job on the justification side, but the engine spec adjacent to it at https://github.com/cretz/duralade/blob/initial-runtime/docs/..., which has sections showing CLI/API commands, can help make it clearer where this runtime is unique.
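For readers unfamiliar with the patching idea mentioned above, here's an engine-agnostic sketch of the general technique (names are illustrative, not any real API): gate changed logic on a marker recorded in history, so old executions replay the old branch while new ones take the new branch.

```python
def patched(history: set, patch_id: str, new_run: bool) -> bool:
    """Decide which code branch to take so old histories still replay correctly.

    On a fresh run, record the marker and take the new code path; when
    replaying a history that predates the patch, the marker is absent and
    the old path is taken. A sketch of the general idea, not any specific
    engine's implementation.
    """
    if new_run:
        history.add(patch_id)
        return True
    return patch_id in history

# New execution: marker recorded, new branch taken.
h1 = set()
assert patched(h1, "use-v2-pricing", new_run=True) is True

# Replay of a pre-patch history: marker absent, old branch taken.
assert patched(set(), "use-v2-pricing", new_run=False) is False

# Replay of a post-patch history: marker present, new branch taken again.
assert patched(h1, "use-v2-pricing", new_run=False) is True
```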


Fascinating, thanks for the info!


> People who are saying they're not seeing productivity boost, can you please share where is it failing?

At review time.

There are simply too many software industries that can't delegate both authorship _and_ review to non-humans because the maintenance/use of such software, especially in libraries and backwards-compat-concerning environments, cannot justify an "ends justifies the means" approach (yet).


I'm of the mind that it will be better to construct more strict/structured languages for AI use than to reuse existing ones.

My reasoning is 1) AIs can comprehend specs easily, especially if simple; 2) it is only valuable to "meet developers where they are" if you really need the developers' history/experience, which I'd argue LLMs don't need as much (or only need because the lang is so flexible/loose); and 3) languages designed for humans were built to allow extreme human subjectivity, which is way too much wiggle-room/flexibility (and is why people have to keep writing projects like these to reduce it).

We should be writing languages that are super-strict by default (e.g. down to the literal ordering/alphabetizing of constructs and exact spacing expectations) and only have opt-in loose modes for humans and tooling to format. I admit I am toying w/ such a lang myself, but in general we can ask more of AI code generation than we can of ourselves.
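To make "super-strict by default" concrete, here's a toy sketch of a checker that admits exactly one canonical form, for a hypothetical mini-language of `name = expr` lines (purely illustrative):

```python
import re

def check_canonical(source: str) -> list:
    """Reject anything that isn't byte-for-byte canonical form.

    Toy rules: exactly one space around `=`, no trailing whitespace, and
    top-level names in alphabetical order. A generator either emits this
    exact form or the program is invalid -- no formatter ambiguity.
    """
    errors = []
    names = []
    for i, line in enumerate(source.splitlines(), 1):
        if line != line.rstrip():
            errors.append(f"line {i}: trailing whitespace")
        m = re.fullmatch(r"([a-z_]+) = (.+)", line.rstrip())
        if not m:
            errors.append(f"line {i}: not in canonical `name = expr` form")
            continue
        names.append(m.group(1))
    if names != sorted(names):
        errors.append("top-level names are not alphabetized")
    return errors

assert check_canonical("apple = 1\nbanana = 2") == []
assert check_canonical("banana = 2\napple = 1") == ["top-level names are not alphabetized"]
assert check_canonical("apple=1")  # non-canonical spacing is an error
```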


I think the hard part about that is you first have to train the model on a BUTT TON of that new language, because that's the only way they "learn" anything. They already know a lot of Python, so telling them to write restricted and sandboxed Python ("you can only call _these_ functions") is a lot easier.

But I'd be interested to see what you come up with.


> that's the only way they "learn" anything

I think skills and other things have shown that a good bit of learning can be done on-demand, assuming good programming fundamentals and no surprise behavior. But agreed, having a large corpus at training time is important.

I have seen that, given a solid lang spec for a never-before-seen lang, modern models can do a great job of writing code in it. I've done no research on the ability to leverage a large stdlib/ecosystem this way though.

> But I'd be interested to see what you come up with.

Under active dev at https://github.com/cretz/duralade, super POC level atm (work continues in a branch)


> you first have to train the model on a BUTT TON of that new language

Tokenization joke?


> The thing is, if you want people to understand durability but you also hide it from them, it will actually be much more complicated to understand and work with a framework.

> The real golden ticket I think is to make readable intuitive abstractions around durability, not hide it behind normal-looking code.

It's a tradeoff. People tend to want to use languages they are familiar with, even at the cost of being constrained within them. A naive DSL would not be expressive enough for the Turing completeness one needs, so effectively you'd need a new language/runtime. It's far easier to constrain an existing language than to write a new one, of course.

Some languages/runtimes are easier to apply durable/deterministic constraints to (e.g. WASM, which is deterministic by design, and JS, which has a tiny stdlib that just needs a few things like time and rand replaced), but they still don't take the ideal step you mention: put the durable primitives and their benefits/constraints in front of the dev clearly.
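The "replace time and rand" trick can be sketched generically: route nondeterministic calls through a journal that records values on first execution and replays them on resume. A minimal illustration, not any engine's actual implementation:

```python
import random
import time

class DeterministicEnv:
    """Record nondeterministic results on first run, replay them on resume.

    A simple in-memory journal stands in for the event history a real
    durable execution engine would persist.
    """
    def __init__(self, journal=None):
        self.journal = list(journal) if journal else []
        self.pos = 0

    def _record(self, produce):
        if self.pos < len(self.journal):
            value = self.journal[self.pos]  # replay path: reuse recorded value
        else:
            value = produce()               # first-execution path: record it
            self.journal.append(value)
        self.pos += 1
        return value

    def now(self):
        return self._record(time.time)

    def rand(self):
        return self._record(random.random)

def workflow(env):
    # Any control flow driven by these values is reproduced exactly on replay.
    return (env.now(), env.rand())

first = DeterministicEnv()
a = workflow(first)
resumed = DeterministicEnv(journal=first.journal)  # e.g. after a crash, on another machine
b = workflow(resumed)
assert a == b
```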


This still assumes an all-encompassing, transparent durability layer; what I'm arguing for is the opposite: something that can just be a library in any language and any runtime, because it does not try to be clever about injecting durability into otherwise idiomatic code.


> that your entire workflow still needs to be idempotent

If you just mean workflow logic, then as the article mentions it has to be deterministic, which implies idempotency, but that is fine because workflow logic doesn't have side effects. The side-effecting functions invoked from a workflow (what Temporal dubs "activities") of course _should_ be idempotent so they can be retried upon failure, as is the case for all retryable code, but this is not a requirement: these side-effecting functions can be configured at the call site to have at-most-once semantics.

In addition to many other things like observability, the value of durable execution is persisting advanced logic like loops, try/catch, concurrent async ops, and sleeping, and making all of that crash-proof (i.e. it resumes from where it left off on another machine).
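The idempotency point can be illustrated with a sketch: give each logical operation a fixed idempotency key so an at-least-once retry policy cannot double-apply the side effect (the `PaymentService` here is hypothetical):

```python
import uuid

class PaymentService:
    """Hypothetical side-effecting dependency that dedupes by idempotency key."""
    def __init__(self):
        self.charges = {}

    def charge(self, key: str, amount: int) -> str:
        if key in self.charges:  # retry of an already-applied call: no-op
            return self.charges[key]
        receipt = f"receipt-{len(self.charges) + 1}"
        self.charges[key] = receipt
        return receipt

def run_with_retries(fn, attempts=3):
    """At-least-once execution: the call may run more than once on failure."""
    last = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as e:
            last = e
    raise last

svc = PaymentService()
key = str(uuid.uuid4())  # fixed per logical operation, not per attempt

calls = {"n": 0}
def flaky_charge():
    calls["n"] += 1
    result = svc.charge(key, 100)  # side effect happens...
    if calls["n"] == 1:
        raise ConnectionError("network blip after the charge landed")
    return result

assert run_with_retries(flaky_charge) == "receipt-1"
assert len(svc.charges) == 1  # retried, but charged exactly once
```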


> The author's point about the friction from explicit step wrappers is fair, as we don't use bytecode generation today, but we're actively exploring it to improve DX.

There is value in such a wrapper/call at invocation time instead of using the proxy pattern. Specifically, it makes it very clear to both the code author and the code reader that this is not a normal method invocation. This is important because it is very common to perform normal method invocations, and the caller needs to author code knowing the difference. Java developers, perhaps more than most, likely prefer such invocation explicitness over a JVM agent doing bytecode manipulation.

There is also another reason for preferring a wrapper-like approach: providing options. If you need to provide options (say, timeout info) from the call site, it is hard to do if your call is limited to the signature of the implementation, and options would have to be provided in a different place.
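A language-agnostic sketch of the point (Python stand-in, hypothetical names): the explicit wrapper makes the call site visibly different from a plain invocation, and gives per-call options a home that the implementation's signature doesn't have.

```python
def process(order_id: str) -> str:
    """A hypothetical step implementation; its signature has no room for options."""
    return f"processed {order_id}"

def execute_step(fn, *args, timeout=None):
    """Explicit invocation wrapper, a minimal stand-in for no real engine.

    The wrapper is where per-call options like `timeout` go; here it is
    accepted but unenforced (a real engine would honor it).
    """
    return fn(*args)

plain = process("o-1")                              # ordinary method invocation
wrapped = execute_step(process, "o-1", timeout=30)  # visibly not ordinary
assert plain == wrapped == "processed o-1"
```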


I'm still swinging back and forth which approach I ultimately prefer.

As stated in the post, I like how the proxy approach largely avoids any API dependency. I'd also argue that Java developers actually are very familiar with this kind of implicit enrichment of behaviors and execution semantics (e.g. transaction management is weaved into applications that way in Spring or Quarkus applications).

But there are also limits to this with regard to flexibility. For example, if you wanted to delay a method for a dynamically determined period of time, rather than for a fixed time, the annotation-based approach would fall short.


At Temporal, for Java we did a hybrid of the approaches you have. Specifically, we do the java.lang.reflect.Proxy approach, but the user has to make a call instantiating it from the implementation. This allows users to provide those options at proxy creation time without requiring them to configure a build step. I can't speak for all JVM people, but I get nervous if I have to use a library that requires an agent or annotation processor.

Also, since Temporal activity invocations are (often) remote, many times a user may only have the definition/contract of the "step" (aka activity in Temporal parlance) without a body. Finally, many times users _start_ the "step", not just _execute_ it, which means it needs to return a promise/future/task. Sure this can be wrapped in a suspended virtual thread, but it makes reasoning about things like cancellation harder, and from a client-not-workflow POV, it makes it harder to reattach to an invocation in a type-safe way to, say, wait for the result of something started elsewhere.

We did the same proxying approach for TypeScript, but we saw as we got to Python, .NET, and Ruby that being able to _reference_ a "step" while also providing options and having many overloads/approaches of invoking that step has benefits.
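A rough, language-agnostic sketch of that hybrid shape (Python stand-in for the Java API; all names hypothetical): options are supplied when the stub is explicitly created, and a step supports both a blocking execute and a start that returns a future the caller can reattach to.

```python
from concurrent.futures import Future, ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=2)

class StepStub:
    """Explicitly created stub carrying per-call options (no agent or build step)."""
    def __init__(self, fn, *, timeout=None):
        self.fn = fn
        self.timeout = timeout

    def execute(self, *args):
        # Blocking invocation, honoring the timeout supplied at stub creation.
        return self.start(*args).result(self.timeout)

    def start(self, *args) -> Future:
        # Returns a future the caller can hold, await, or cancel later.
        return _pool.submit(self.fn, *args)

def greet(name: str) -> str:
    return f"hello {name}"

stub = StepStub(greet, timeout=5.0)  # options at stub creation time
assert stub.execute("world") == "hello world"
fut = stub.start("again")            # reattach / await elsewhere
assert fut.result(5.0) == "hello again"
```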

