
Kubernetes enabled capabilities small companies couldn't have dreamed of before.

I can implement zero-downtime upgrades easily with Kubernetes. No more end-of-day upgrades and late-night debugging sessions because something went wrong; I can commit at any time of day and be confident the upgrade will work.

My infrastructure is self-healing. No more crashed app servers.

Some engineering tasks are standardized and outsourced to a professional hosting provider via managed services. I don't need to manage operating system updates or some component updates (including Kubernetes itself).

My infrastructure can be easily scaled horizontally. Both up and down.

I can commit changes to git to apply them, or easily revert them. I know the whole history perfectly well.

Before, I would have needed to reinvent half of Kubernetes to enable all of that. I guess big companies just did that; I never had the resources for it. So my deployments were not good: they didn't scale, they crashed, they required frequent manual intervention, and downtimes were frequent. Kubernetes and other modern approaches let small companies enjoy things they couldn't do before, at the expense of a somewhat steeper devops learning curve.
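
A minimal sketch of the zero-downtime part, with hypothetical names (the image and the /healthz probe endpoint are assumptions, not anyone's actual setup):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: app
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0        # never drop below full capacity
          maxSurge: 1              # roll out one extra pod at a time
      template:
        metadata:
          labels:
            app: app
        spec:
          containers:
          - name: app
            image: registry.example.com/app:v2   # hypothetical image
            readinessProbe:                      # route traffic only to ready pods
              httpGet:
                path: /healthz                   # assumed health endpoint
                port: 8080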


I'm not sure it is about security. For security, CRLs and OCSP were a thing from the beginning. Short-lived certificates make it possible to do away with CRLs, or at least to shrink them, so the CA can save some expenses (I'd guess it's quite a bit of traffic for every client to download CRLs covering all of Let's Encrypt).

Operating systems should prevent privilege escalations, antiviruses should detect viruses, police should catch criminals, claude should detect prompt injections, ponies should vomit rainbows.

Claude doesn't have to prevent injections. Claude should make injections ineffective, with the interface designed appropriately. There are existing sandboxing solutions that would help here, and they aren't being used yet.

Are there any that wouldn't also make the application useless in the first place?

I don't think those are all equivalent. It's not plausible to have an antivirus that protects against unknown viruses. It's necessarily reactive.

But you could totally have a tool that lets you use Claude to interrogate and organize local documents but inside a firewalled sandbox that is only able to connect to the official API.
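
A rough sketch of the firewall part (hypothetical: api.anthropic.com stands in for "the official API", and note that iptables resolves hostnames only once, when the rule is inserted):

    # default-deny egress, then allowlist only the API endpoint
    iptables -P OUTPUT DROP
    iptables -A OUTPUT -o lo -j ACCEPT
    iptables -A OUTPUT -p udp --dport 53 -j ACCEPT        # DNS lookups
    iptables -A OUTPUT -p tcp --dport 443 -d api.anthropic.com -j ACCEPT
    iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT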

Or like how FIDO2 and passkeys make it so we don't really have to worry about users typing their password into a lookalike page on a phishing domain.


> But you could totally have a tool that lets you use Claude to interrogate and organize local documents but inside a firewalled sandbox that is only able to connect to the official API.

Any such document or folder structure, if its name or contents were under control of a third party, could still inject external instructions into sandboxed Claude - for example, to force renaming/reordering files in a way that will propagate the injection to the instance outside of the sandbox, which will be looking at the folder structure later.

You cannot secure against this completely, because the very same "vulnerability" is also a feature fundamental to the task - there's no way to distinguish between a file starting a chained prompt injection to e.g. maliciously exfiltrate sensitive information from documents by surfacing them + instructions in file names, vs. a file suggesting correct organization of data in the folder, which involves renaming files based on information they contain.

You can't have the useful feature without the potential vulnerability. Such is the case with most things where LLMs are most useful. We need to recognize the problem and design around it, because there's no way to fully secure it other than giving up on the feature entirely.


I'm not following the threat model that begins with a malicious third party having control over my files.

Unless you've authored every single file in question yourself, their content is, by definition, controlled by a third party, if with some temporal separation. I argue this is the typical case - in any given situation, almost all interesting files for almost any user came from someone else.

Did you mean "not plausible"? AV can detect novel viruses; that's what heuristics are for.

I believe the detection pattern may not be the best choice in this situation, as a single miss could result in significant damage.

Operating systems do prevent some privilege escalations, antiviruses do detect some viruses,..., ponies do vomit some rainbows?? One is not like the others...

> why would you ever want to not close tags?

Because browsers close some tags automatically. And if your closing tag is wrong, it'll generate an empty element instead of being ignored, without even emitting a warning in the developer console. So by closing tags you're risking introducing very subtle DOM bugs.

If you want to close tags, make sure your build or test pipeline enforces strict validation of the produced HTML.


Putting an explicit end tag is more error-prone. It won't do anything for valid HTML, but it'll add an empty element for invalid HTML. If you want to improve human readability, put the end tag inside an HTML comment; at least that won't add empty elements.

Closing optional HTML tags just adds more ambiguity. How many HTMLParagraphElements are here, what do you think?

    <p>
      text1
      <p>
        text2
      </p>
    </p>

2. And there’s no ambiguity there, just invalid HTML because paragraphs aren’t nestable.

It may look nested but the first p is actually closed when the second p starts, according to https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...

Wouldn't this still result in just two paragraph elements? Yes, the first gets auto-closed, but I don't see how a third paragraph could emerge out of this. Surely that closing tag should just get discarded as invalid.

edit: Indeed, it creates three: the </p> seems to create an empty paragraph tag. Not the first time I've been surprised by tag soup rules.


Hence why I said “2” ;)

The browser will parse that as three HTMLParagraphElements. You may think that's invalid HTML, but the browser will parse it and won't indicate any kind of error.

> The browser will parse that as three HTMLParagraphElements

Why?

> You may think that's invalid HTML, but the browser will parse it and won't indicate any kind of error.

It isn’t an opinion, it literally is invalid HTML.

What you’re responding to is an assumption that I was suggesting browsers couldn’t render that. Which isn’t what I claimed at all. I know full well that browsers will gracefully handle incorrect HTML, but that doesn’t mean that the source is magically compliant with the HTML specification.


> Why?

I don't know why. Try it out. That's the way browsers are coded.
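
You can verify it in the browser console (a quick sketch):

    // parse the snippet from upthread and count the resulting <p> elements
    const doc = new DOMParser()
      .parseFromString('<p>text1<p>text2</p></p>', 'text/html');
    console.log(doc.querySelectorAll('p').length); // 3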

> It isn’t an opinion, it literally is invalid HTML.

It matters not. You're writing HTML for browser to consume, not for validator to accept. And most webpages are invalid HTML. This very HN page contains 412 errors and warnings according to the W3C validator, so the whole point of HTML validity is moot.


> I don't know why. Try it out. That's the way browsers are coded.

I’m not saying you’re wrong, but I’d need more than that to be convinced. Sorry.

> It matters not. You're writing HTML for browser to consume, not for validator to accept.

It matters because you’re arguing a strawman argument.

We weren’t discussing what a browser can render. We were discussing the source code.

So your comment wasn’t a rebuttal of mine. It was a related tangent or addition.


> I’m not saying you’re wrong, but I’d need more than that to be convinced. Sorry.

So basically my point is:

1. You can avoid closing some tags, letting the browser close them for you. It won't do any harm.

2. You can choose to explicitly close all tags. It won't do anything for valid HTML, but it can introduce subtle and hard-to-find DOM bugs by adding empty elements.

So you're trying to improve HTML source readability at the risk of introducing subtle bugs.

If you want to do that, I'd recommend at least implementing HTML validation in your build or test pipeline.

Another alternative is to use HTML comments for the closing tags, since they're meant as documentation only and won't be consumed by the browser in correct code.
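
For example, the comment-based style would look like this (a sketch):

    <p>
      text1
    <!-- </p> -->
    <p>
      text2
    <!-- </p> -->

The browser sees exactly two paragraphs; the commented-out end tags exist purely for the reader.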


I get your point, but again, that’s not relevant to the point I was making.

You posted a terse comment with some HTML. I responded specifically about that comment and HTML. And you’re now elaborating on things as a rebuttal to my comment despite the fact that wasn’t the original scope of my comment.

Another example of that is how you’ve quoted my reply to the 2 vs 3 elements, and then answered a completely different question (one I didn’t even ask).

I don’t think you’re being intentionally obtuse but it’s still a very disingenuous way to handle a discussion.


> You're writing HTML for browser to consume, not for validator to accept.

I'm not a web programmer, but shouldn't one program against the specified interface instead of some edge case behavior of an implementation?


> Why?

Because the second opening p-tag closes the first p-tag, and then the last closing p has no matching starting p-tag and creates one, thus resulting in 3 p-elements.

> It isn’t an opinion, it literally is invalid HTML.

The only "invalid" part is the last closing p.


At the end of the day, browsers have to handle most of the invalid atrocities thrown at them.

It doesn't make the code valid according to the specifications.


My point is that by closing optional tags you can introduce subtle bugs into your layout that might take some time to find, and the browser won't be of any help. You write a closing tag, and the browser implicitly adds a starting tag. It's better to memorise which tags are optional and not close them at all.

You can also introduce subtle bugs by not closing them. Or forgetting which tags can be closed and thus leaving the wrong ones dangling.

So I think your argument here is tough to take at face value. It feels a lot more like you’re arguing personal preference as fact.


Precisely, it's an added burden to remember what can be skipped. The fewer exceptions, the better.

Though if a linter formats the whole codebase on its own in a homogeneous way, and someone else deals with the added parsing complexity, that might feel okayish to me too.

Generally speaking, the less clutter the better. A bit like a JS codebase that is semicolon-free where possible.

For a pleasant reading and writing experience, HTML in a simple text editor is very low quality. Pug, for example, brings far less clutter, though its mandatory space indentation could be avoided with some alternative syntactic choices.
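
For example, a small fragment in Pug (a sketch), with the HTML it compiles to noted in a comment:

    ul
      li Item one
      li Item two
    //- compiles to: <ul><li>Item one</li><li>Item two</li></ul>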


Excellent catch

Why would you nest paragraph tags?

They are not nested, according to HTML5 parsing rules. You get 3 (yes, three) sibling paragraphs, including an empty one.

The nesting is just implied by the closing tags and indentation, but it is not actually there. I think this is the point of the example: adding the closing tags just confuses the reader by implying nesting that is not actually there, and even introduces a third, empty paragraph. They might be better left out entirely.


That is invalid syntax. Only phrasing content is allowed in the p element (https://developer.mozilla.org/en-US/docs/Web/HTML/Guides/Con...)

The second <p> is not inside of the first. The first <p> closes when the second <p> is encountered.

The syntax is invalid, but that's because the final </p> has no opening <p> that it can close.


This is invalid HTML; a p tag can't be nested in a p tag.

Even though it arguably should be, according to HTML5 parsing rules, this is not invalid. It is just interpreted differently from what most people would probably expect.

I think this is the point of the example, afaiui: The closing tags don’t clarify anything, quite the contrary, actually. They serve only to confuse the reader.


I spent a few days getting some basic zsh settings adjusted to my taste. Since then I've mostly been using zsh with very little configuration, and I like it a lot. Yes, it's a steep curve, but I spend all my life in zsh, so I think that was a good time investment for me. In my experience the default zsh settings are good enough and require very little customization.
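
For what it's worth, the kind of minimal configuration I mean looks something like this (an illustrative sketch, not my exact settings):

    # ~/.zshrc
    autoload -Uz compinit && compinit   # enable the completion system
    HISTSIZE=100000                     # generous in-memory history...
    SAVEHIST=100000                     # ...and on disk
    setopt SHARE_HISTORY                # share history across sessions
    setopt AUTO_CD                      # type a directory name to cd into it
    bindkey -e                          # emacs-style line editing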

You can use JavaScript as a single cross-platform compile target. What's the difference?

Javascript comes with mandatory garbage collection. I suppose you could compile any language to an allocation-free semantic subset of Javascript, but it's probably going to be even less pretty than transpiling to Javascript already is.
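
That subset did effectively exist as asm.js (which comes up below). A minimal sketch of the style: all "memory" is one preallocated typed array, and types are pinned through coercions, so nothing is allocated at runtime:

    function AsmModule(stdlib, foreign, heap) {
      "use asm";
      var i32 = new stdlib.Int32Array(heap); // the entire heap, preallocated
      function add(a, b) {
        a = a | 0;            // parameter type declared via coercion
        b = b | 0;
        return (a + b) | 0;   // unboxed integer result, no allocation
      }
      return { add: add };
    }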

> it's probably going to be even less pretty than transpiling to Javascript already is.

I don't see how it'd be much different to compiling to JavaScript otherwise. Isn't it usually pretty clear where allocations are happening and how to avoid them?


“Pretty clear” is good, “guaranteed by language specifications” is better.

Why reverse-engineer each JS implementation if you can just target a non-GC runtime instead?


WASM allows you to run some parts of the application a bit faster. ;)

WASM, and asm.js before it, roughly exist because Javascript is such a bad compile target.

WASM works with any language and can be much faster than JavaScript.

You can compile any language to JavaScript. jslinux compiled x86 machine code to JavaScript.

So basically WASM is an optimisation. That's fine, but it's not something groundbreaking.

And if we remove the web from the platform list, there were many portable bytecodes: P-code from the Pascal era, JVM bytecode from the modern era, and plenty of others.


> some optimisation

That's underselling it a bit IMO. There's a reason asm.js was abandoned.


Wikipedia mentions that Wasm is faster to parse than asm.js, and I'm guessing Wasm might be smaller, but is there any other reason? I don't think there's any reason for asm.js to have resulted in slower execution than Wasm.

> I don't think there's any reason for asm.js to have resulted in slower execution than Wasm

The perfect article: https://hacks.mozilla.org/2017/03/why-webassembly-is-faster-...

Honestly the differences are less than I would have expected, but that article is also nearly a decade old so I would imagine WASM engines have improved a lot since then.

Fundamentally I think asm.js was a fragile hack and WASM is a well-engineered solution.


After reading the article, I don't feel convinced about the runtime performance advantages of WASM over asm.js. The CPU features mentioned could be added to JS runtimes. Toolchain improvements could go both ways, and I expect asm.js would benefit from JIT improvements over the years.

I agree 100% with the startup time arguments made by the article, though. No way around it if you're going through the typical JS pipeline in the browser.

The argument for better load/store addressing on WASM is solid, and I expect this to have higher impact today than in 2017, due to the huge caches modern CPUs have. But it's hard to know without measuring it, and I don't know how hard it would be to isolate that in a benchmark.

Thank you for linking it. It was a fun read. I hope my post didn't sound adversarial to any arguments you made. I wonder what asm.js could have been if it was formally specified, extended and optimized for, rather than abandoned in favor of WASM.


Whatever it would have ended up like it would have been a big hack so I'm glad everyone agreed to go with a proper solution for once!

Both undersell and oversell. There are still cases where vanilla JS will be faster.

And AFAIK asm.js is the precursor to WASM, like the early implementations just built on top of asm.js's primitives.


You can't evaluate a country with a single number. It makes no sense, and it actively hurts when someone decides to optimise for that number.

I can play World of Warcraft indefinitely.

Indeed, video games are probably what most of humanity would retire to if they didn't attach so much ego and meaning to their jobs and, by extension, to the people around them.

Just be sure to swap games once in a while so you don't get bored.

