Hacker News | clickety_clack's comments

The headline suggests that people have seen treetops glowing and it just hasn’t been captured on video before. The actual pictures and video are of something that nobody could have seen with their eyes.

This reminds me of a chat room interaction I had maybe 25 years ago. The other person was adamant that humans can't see the infrared from TV remotes, and I was adamant that I could. It was a pretty widespread belief (even in school science books) at the time that humans couldn't see infrared. Since then, more science was done to prove that, in fact, some humans can see some infrared under some conditions.

I share that mainly to state that humans are amazing and have a wide and inconsistent range of capabilities (sometimes even mutating into new capabilities!). Personally, I will always hesitate to say "nobody" and I lean towards "no typical human" instead. :)


I suppose this also depends on the types of remote controls? There are some where I can see red and some where I cannot.

The faint red glow is actual red light, as many IR LEDs (especially the ones used in cameras for night illumination) are close to the visible spectrum and have some visible light emission.

850 nm is easily visible, but most remotes are 940 nm, which is also visible as a faint purple glow, but the source needs to be really bright.

Isn't infrared, by definition, wavelengths beyond what people can see?

Which people? And no, it's not defined that way: "radiation having a wavelength between about 700 nanometers and 1 millimeter"

You can absolutely see corona discharge like that with your eyes.

If you come to my day job, and we shut off all the lights in the test room, after your eyes adjust in the dark for a minute, you'll see the soft purple glow coming from the edge of our 160 kV test rig.

Definitely emits UV, but there is enough visible to see it for sure. It comes from the electrons exciting nitrogen in the air.[1]

1. https://commons.wikimedia.org/wiki/File:Nitrogen_discharge_t...


> (1:If you come to my day job), and (2: we shut off all the lights) (3:in the test room), (4:after your eyes adjust in the dark for a minute), you'll see the soft purple glow coming (5:from the edge our 160kV test rig).

So, five different conditions that make it glow, none of them "coming from treetops". The parent poster wanted to see glowing treetops in a forest, where our eyes might not have had a minute to adjust to the dark.

You can also see such corona discharge with benchtop Tesla coils even in a lit room, but those are not trees in a forest glowing during a storm.


Even a smallish Tesla coil easily produces voltages north of 160 kV. I built one using 4" PVC for the secondary, with a wound length of maybe ~2 feet. From memory of the calculations I did at the time, I think it was around 350 kV peak? Might have been higher. Threw 24-inch sparks quite easily.

I’m not saying it can’t be seen, I’m saying that you can’t prove something can be seen by showing me a photo that captures light that I can’t see.

what's the job?

I think that banning smoking in public places makes sense because you are impacting other people. I think banning things for kids makes sense because it’s a big wide world and it’s our duty to protect them. I’m not a fan of banning the things a grown adult can do when they only affect them personally, however much I despise smoking. Since when have people decided that giving up personal liberty is fine? If you want to look 15 years older with gross teeth and a horrible smell, and die at 60, it’s kind of up to you.

You can train a tokenizer on old data just like you can train a model on old data.
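As a sketch of what "training a tokenizer" means, here is a toy byte-pair-encoding learner in plain Python. The corpus and merge count are invented for illustration; real tokenizers learn tens of thousands of merges over huge corpora, but the idea is the same.

```python
from collections import Counter

def train_bpe(corpus, num_merges):
    # Start with each word as a tuple of single characters.
    vocab = Counter()
    for word in corpus.split():
        vocab[tuple(word)] += 1

    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the merge everywhere it occurs.
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

merges = train_bpe("low low low lower lowest", 3)
print(merges)  # → [('l', 'o'), ('lo', 'w'), ('low', 'e')]
```

Swap in an older corpus and you get an older vocabulary; the algorithm doesn't care.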

But you can't use an old model with a new tokenizer. Changing the tokenizer implies you trained the model from scratch.

A little bit of post-training will fix that. Folks on /r/LocalLLaMa have been making effective finetunes with different tokenizers for years.

If you do backend blind, it will also almost certainly embarrass you. I’ve never had an experience beyond the most basic CRUD app where I didn’t have to somehow use my engineering experience to dig it out of a hole.

Works mostly fine for me on Rust backends. As long as I'm willing to accept tight contracts at the edges with spaghetti in the middle, or otherwise gate approval for everything it does.

If I want good abstractions, sure, I can set up approvals and babysit it with reprompting, because it will do stupid things that an experienced engineer wouldn't. But the spaghetti also works in the sense that it takes the input types and largely correctly maps them to the output types.

That doesn't embarrass me with customers because they never see the internals. On the front-end, obviously they will see and experience whatever abomination it cooks up directly.


Even the US government should be considering this.

It’s easier not to have that separation, just like it was easier not to separate them before LLMs. This is architectural stuff that just hasn’t been figured out yet.

No.

With databases there exists a clear boundary, the query planner, which accepts well-defined input: an SQL grammar that separates data (fields, literals) from control (keywords).

There is no such boundary within an LLM.
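To illustrate that boundary: a minimal sketch using Python's built-in sqlite3 (the table and payload are made up for demonstration). A parameterized query keeps user input on the data side no matter what it contains.

```python
import sqlite3

# The query planner parses SQL grammar (control) separately from
# bound parameters (data), so user input can never become keywords.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# A hostile-looking string stays on the data side of the boundary:
payload = "'; DROP TABLE users; --"
conn.execute("INSERT INTO users VALUES (?)", (payload,))

rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)       # the payload is stored verbatim, as a name
print(len(rows))  # 1 -- and the table was not dropped
```

An LLM has no equivalent of the `?` placeholder: instructions and data arrive in the same token stream.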

There might even be, since LLMs seem to form adhoc-programs, but we have no way of proving or seeing it.


There cannot be, without compromising the general-purpose nature of LLMs. This includes their ability to work with natural languages, which, one should note, have no such boundary either. Nor does the actual physical reality we inhabit.

There is a system prompt, but most LLMs don't seem to "enforce" it enough.

Since GPT-OSS there is also the Harmony response format (https://github.com/openai/harmony), which, instead of just a system/assistant/user split in the roles, has system/developer/user/assistant/tool, and it seems to do a lot better at actually preventing users from controlling the LLM too much. The hierarchy basically becomes "system > developer > user > assistant > tool" with this.
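A toy sketch of that hierarchy (role names are from the openai/harmony repo; the actual wire format and token rendering differ, this only models the priority ordering):

```python
# Earlier in the list = higher authority when instructions conflict.
PRIORITY = ["system", "developer", "user", "assistant", "tool"]

def outranks(role_a, role_b):
    return PRIORITY.index(role_a) < PRIORITY.index(role_b)

print(outranks("developer", "user"))  # True: app instructions beat user input
print(outranks("user", "system"))     # False: users can't override the system
```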

Wait, you mean typical consumers _don’t_ want to build my terminal-based TUI app from source?



Ireland needs this. I don’t live there anymore, but the amount of ads literally everywhere you go there these days is insane.

Gambling ruins lives.


Sometimes PRDs might be boilerplate, but there have been times where I sat down thinking “I can’t believe these dumbasses want to foo a widget”, but when writing the user story I got into their heads a little and realized that widgets are useless if they can’t be foo’d. It’s not the same if AI is just telling me, because amongst the fire hose of communication and documentation flying at me, AI is just another source. Writing it myself forces me to actually engage, even if only a little more than at a shallow level.


I think that an actionable critique might be that there’s an overuse of “big word” adverbs and adjectives.

