Haven't read the book, but points two and three definitely struck some bells in the back clocktowers of my mind.
More generally, reading a bit of Orwell was inescapable in my schooling, but I sought out 1984 myself. I discovered I had kind of a thing for both utopias and dystopias.
And as I contemplate things I might write or compose, I do note that outrage towards this regime is very much in the mix of my motivations.
But you cannot predict a priori what that deterministic output will be – and in a real-life situation you will not be operating in deterministic conditions.
> This bug is categorically distinct from hallucinations.
Is it?
> after using it for months you get a ‘feel’ for what kind of mistakes it makes, when to watch it more closely, when to give it more permissions or a longer leash.
Do you really?
> This class of bug seems to be in the harness, not in the model itself.
I think people are using the term "harness" too indiscriminately. What do you mean by harness in this case? Just Claude Code, or...?
> It’s somehow labelling internal reasoning messages as coming from the user, which is why the model is so confident that “No, you said that.”
How do you know? Because it looks to me like it could be a straightforward hallucination, compounded by the agent deciding it was OK to take a shortcut that you really wish it hadn't.
For me, this category of error is expected, and I question whether your months of experience have really given you the knowledge of LLM behavior that you think they have. You have to remember at all times that you are dealing with an unpredictable system, and a context that, at least from my black-box perspective, is essentially flat.
I think it might be a bad thing. I'm no stranger to math or computer science, but even after staring at the front page for a minute I was ready to dismiss this as the ravings of a lunatic.
It's like they had the idea of marketing this like a software project, not realizing that most front pages of software projects are utter bunk as well. It introduces terminology and syntax with no motivation or explanation.
Even once trying to get into "Quick Start" and "Specification" I was still mystified as to what it is or why I should want to play with it, or care. I had to go to the link mentioned upthread to get any sense of what this was or how it worked.
I think it's just badly written.
That being said, what seems to be proposed is a structure and calculus that are an alternative to lambda-calculus. The structures, as you can probably guess from the picture, are binary trees, ostensibly unlabeled except that there is significance to the ordering of the children. The calculus appears to be rules about how trees can be "reduced", and there is where the analogy to lambda calculus comes in.
Hopefully someone who actually knows this stuff can see whether I managed to get all that right – because I promise you, none of that understanding came from the website.
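For the curious, here's roughly what "unlabeled binary trees plus reduction rules" looks like in code. Big caveat: the three rules below are my reconstruction of Barry Jay's tree calculus from memory, and may well not match what this site actually specifies — treat it as a sketch of the shape of such a system, not its spec:

```python
from dataclasses import dataclass

class Tree:
    pass

@dataclass(frozen=True)
class Leaf(Tree):
    """The bare node."""

@dataclass(frozen=True)
class Stem(Tree):
    """A node with one child."""
    a: Tree

@dataclass(frozen=True)
class Fork(Tree):
    """A node with two ordered children -- order matters."""
    a: Tree
    b: Tree

def apply(f: Tree, x: Tree) -> Tree:
    """Apply tree f to tree x, reducing where a rule fires."""
    if isinstance(f, Leaf):
        return Stem(x)                 # node gains its first child
    if isinstance(f, Stem):
        return Fork(f.a, x)            # node gains its second child
    # f is a Fork: dispatch on the shape of its first child
    a, y, z = f.a, f.b, x
    if isinstance(a, Leaf):
        return y                                      # K-like rule
    if isinstance(a, Stem):
        return apply(apply(y, z), apply(a.a, z))      # S-like rule
    return apply(apply(z, a.a), a.b)                  # fork rule

# K-combinator behaviour: (node node) y z reduces to y
K = Stem(Leaf())
y, z = Fork(Leaf(), Leaf()), Stem(Leaf())
assert apply(apply(K, y), z) == y
```

The interesting bit is that there are no variable binders anywhere — everything, programs and data alike, is just a tree, which is presumably where the "reflection" claims come from.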
If you don’t understand what it does, it’s not for you. But if you don’t understand what it does, get good.
TL;DR: what happens when a very small piece of JS can run in the browser or any environment and offer a metaprogramming layer that is stupid simple, but also useful because it offers Turing completeness with reflection? Also, its site explains what it does, but you have to focus on what it is doing. "Minimal": the entire calculus is 20 lines of Rust. If you don't know what Turing-complete means, get out. Likewise with "reflective." "Modular": look at the demos.
You flunked out of putting in any effort before shooting your mouth off. Do try to actually be useful before you respond; there are those of us actually paying attention.
They are not trying to buy developer goodwill, they are trying to catch up with Anthropic in terms of getting those B2B contracts, which is currently the most realistic path towards not running out of money.
1. The Register reports OpenAI is well ahead of Anthropic in B2B contracts. It's Anthropic playing catch-up, not OpenAI.
2. In any case, the announcement strongly suggests that customer acquisition had little to do with this. As I read it, the stated purpose is to bolster their Codex product (plus an acqui-hire?).
3. But if they were hoping for some developer goodwill as a secondary effect... well, see my note above.
I went to a James Gosling talk where he excoriated the Emacs users in his audience for clinging to outdated technology and not using a state-of-the-art IDE.
But the IDE he was hawking wasn't Eclipse. I think it was Sun Studio.
Please do not use the occasion of the death of thousands of Iranians in a war we launched against them as some sort of illustrative point about return to office and birth rates in the West.
There's no lemonade to be found here at this time. What there is to be found are a bunch of tone-deaf people who seem utterly ignorant of, and indifferent to, the war's reality.
Obviously there is; you just refuse to recognize it. I think this war is terrible, Trump is the worst president in American history by a wide margin, and yet I can still be happy that we are able to glean insight from learnings that came about as a result of forces not under my control.
Finding good among the bad is such a commonplace occurrence that my native tongue, English, has many metaphors for it. I already mentioned making lemons into lemonade; there is also "every cloud has a silver lining," "every dog has his day," "light at the end of the tunnel," and "April showers bring May flowers."
I suspect you would respond, "but Mayflowers brought genocidal white settlers!!!!"
As a Mayflower descendant I would scarcely say that. But the fact that you offer cheap platitudes and tales from hundreds of years ago to justify why it's OK to consider why the current bombing and slaughter is good for our work-life balance remains astonishingly tone-deaf. This is absolutely a you problem.
1. I think attributes absolutely should exist. They're great for describing metadata related to the tag: e.g. element ID, language, datatype, source annotation, namespacing. They add little in complexity.
2. The point of a close tag with a name is to make it unambiguous what it's trying to close off.
It sounds to me like what you want is not a better XML, but just s-exprs. Which is fine, but not quite solving the same problem.
3. As far as schema support, it seems to me that JSON Schema is well-established and perfectly cromulent – so much so that YAML authors are trying to use it to validate their stuff (the poor bastards) – and XML schema validation, while robust, is a complex and fragmented landscape around DTD, XSD, RELAX-NG, and Schematron. So although XML might have the edge, it's a more nuanced picture than XML proponents are claiming.
4. As far as tabular data, neither XML nor JSON were built for efficient tabular data representation, so it shouldn't be a surprise that they're clunky at this. Use the right tool for the job.
> 1. I think attributes absolutely should exist. They're great for describing metadata related to the tag: e.g. element ID, language, datatype, source annotation, namespacing. They add little in complexity.
No, they're barely adequate for those purposes. And you could (and if you have an XSD you probably should) still replace them with elements. If you argue that you can't, then you're arguing that JSON does not function. You can just inline metadata alongside data. That works just fine. That's the thing about metadata. It's data!
You don't need attributes. Having worked in information systems for 25 years now, they are the most heavily, heavily, heavily misused feature of XML and they are essentially always wrong.
Well, now you're a bit stuck. You can make the XSD check basic data types, and that's it. You can never use complex types. You can never use multiple values; if you need them, you'll have to make your attribute a delimited string. You can't rely on order. You're limiting your ability to extend or advance things.
That's the problem with XML. It's so flexible it lets developers be stupid, while also claiming strictness and correctness as goals.
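To make the delimited-string problem concrete, here's a stdlib sketch (element and attribute names invented for illustration): multiple values in an attribute force a string you re-parse yourself, while child elements keep them as real nodes.

```python
import xml.etree.ElementTree as ET

# Attribute style: the list of tags is just one opaque string
attr_style = ET.fromstring('<book tags="xml schema legacy"/>')
tags_a = attr_style.get("tags").split()   # manual re-parsing, ad hoc delimiter

# Element style: each value is a real node the schema can type and validate
elem_style = ET.fromstring(
    "<book><tag>xml</tag><tag>schema</tag><tag>legacy</tag></book>")
tags_e = [t.text for t in elem_style.findall("tag")]

assert tags_a == tags_e == ["xml", "schema", "legacy"]
```

Same data either way — but only the element form lets an XSD say anything useful about the individual values.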
> 2. The point of a close tag with a name is to make it unambiguous what it's trying to close off.
Sure, but since closing tags in the proper order is mandatory, the name isn't actually adding any information at all. The only thing it's doing is introducing opportunities for trivial syntax errors.
Because the truth is that this is 100% unambiguous in XML, because the rules changed:
The reason SGML had a problem with the generic close tag was that SGML didn't require a closing tag at all. It didn't have `<tag />`. It let you say `<tag1><tag2>...</tag1>` or `<tag1><tag2>...</>`. That was the problem.
Named closing tags had more of a point when we were actually writing XML by hand and didn't have text editors that could find the open and close tags for you, but that is solved. And now we have syntax highlighting and hierarchical code folding on any text editor, nevermind dedicated XML editors.
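You can see the "only adds syntax errors" point with any XML parser — nesting alone already determines which element is being closed, so a mismatched name is just one more way to fail. A quick stdlib check:

```python
import xml.etree.ElementTree as ET

ET.fromstring("<a><b/></a>")         # well-formed: parses fine

try:
    ET.fromstring("<a><b></a></b>")  # close tags in the wrong order
    mismatched_ok = True
except ET.ParseError:
    mismatched_ok = False

assert not mismatched_ok             # the parser rejects it outright
```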
> 3. As far as schema support, it seems to me that JSON Schema is well-established and perfectly cromulent
Then my guess is that you have worked exclusively in the tech industry for customers that are also exclusively in the tech industry. If you have worked in any other business with any other group of organizations, you would know that the rest of the world is absolute chaos. I think I've seen 3 APIs with a published JSON Schema, and hundreds without one.
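For anyone who hasn't run into one, here's the kind of thing a published JSON Schema buys you. The schema and documents are invented, and the validator below is a toy covering only the keywords it uses — a real project would use a library like jsonschema instead:

```python
import json

# A tiny (invented) schema: an object with a required integer id and string name
schema = {"type": "object",
          "required": ["id", "name"],
          "properties": {"id": {"type": "integer"},
                         "name": {"type": "string"}}}

TYPES = {"object": dict, "integer": int, "string": str}

def check(doc, schema):
    """Validate just the subset of JSON Schema used above."""
    if not isinstance(doc, TYPES[schema["type"]]):
        return False
    missing = [k for k in schema.get("required", []) if k not in doc]
    return not missing and all(
        check(doc[k], sub)
        for k, sub in schema.get("properties", {}).items() if k in doc)

assert check(json.loads('{"id": 7, "name": "widget"}'), schema)
assert not check(json.loads('{"id": "7"}'), schema)   # wrong type, missing name
```

When the producer publishes this, the consumer can reject garbage at the boundary instead of discovering it three layers deep. When they don't, you're back to chaos — which is the state of most of the world, as noted.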
> 4. As far as tabular data, neither XML nor JSON were built for efficient tabular data representation, so it shouldn't be a surprise that they're clunky at this. Use the right tool for the job.
No, I think you're looking at what the format was intended to do 25 years ago and trying to claim that that should not be extended or improved ever. You're ignoring what it's actually being used for.
Unless you're going to make data queries return large tabular data sets to the user interface as more or less SQLite or DuckDB databases so the browser can freely manipulate them for the user... you're kind of stuck with XML or JSON or CSV. All of which suck for different reasons.
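To put a number on "clunky", here's the same toy three-row table (column names invented) through the stdlib. JSON repeats every column name in every row; CSV states the header once:

```python
import csv, io, json

rows = [{"id": 1, "name": "ada"},
        {"id": 2, "name": "bob"},
        {"id": 3, "name": "eve"}]

# JSON: keys "id" and "name" appear in all three rows
as_json = json.dumps(rows)

# CSV: header appears exactly once
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name"])
writer.writeheader()
writer.writerows(rows)
as_csv = buf.getvalue()

assert len(as_csv) < len(as_json)   # per-row key repetition dominates
```

The overhead grows with row count, which is exactly why large tabular payloads in JSON (or, worse, XML) feel so bloated — and why none of the three is a genuinely good fit.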
1. I don't disagree that attributes have been abused – so have elements – but you yourself identified the right way to use them. Yes, you can inline attributes, but that also leads to a document that's harder to use in some cases. So long as you use them judiciously, it's fine. In actual text markup cases, they're indispensable, as HTML illustrates.
2. As far as JSON Schema, you're wrong on all accounts – wrong that I haven't seen Some Stuff, wrong that JSON Schema doesn't get used (see Swagger/OpenAPI), and wrong that XML Schema doesn't also get underutilized when a group of developers gets lackadaisical.
3. As far as what historical use has been, I'm less interested in exhuming historical practice than simply observing which of the many use cases over the last 20 years worked well (and still work) and which didn't. The answer isn't that none of them worked, and it certainly isn't that XML users had a better bead on how to use it 20 years ago – it went through a massive hype curve just like a lot of techs do.
4. Regarding tabular data exchange, I stand by my statement. Use XML or JSON if you must, and sometimes you must, but there are better tools for the job.