
What's the news recently?

Well, Intel was kind of in the dumps because their process fell behind. They didn't bet on EUV and got leapfrogged by TSMC and Samsung who did use ASML's EUV technology.

They eventually got on the EUV train and were the first customer to receive ASML's current state-of-the-art machine, which ASML calls high-NA EUV. Intel's 18A process still uses standard EUV (high-NA is slated for the follow-on 14A node), but it puts them back at the leading edge; Panther Lake is the first product built on 18A, so now they're right back to being SOTA.

All the news about them (stock price movements, theories about them going bankrupt, Panther Lake, etc...) for the last 2 years has essentially been people betting on whether or not they can successfully incorporate SOTA ASML machines into their manufacturing.


amen

hah, good on them.

nice catch.


How do you see Zulip comparing to Anytype (https://anytype.io/)?


This is how I view it as well.

And... and...

This results in a _very_ deep implication, which big companies may not be eager to let you see:

they are context processors

Take it for what it is.


Are you trying to say they are plagiarists, training on the input?

We know that already; I don't know why we have to be quiet or hint at it. In fact, they have been quite explicit about it.

Or is there some other context to your statement? Anyway, that's my "take that for what you will".


But you are not getting a free lunch, are you? You _are paying_ for your meal.

Worse: you are the meal as well.

Do you see this?


I have been a bit out of the loop. What is relevant these days for writing eBPF code? What about writing eBPF in Python?


Write it in C, compile it with clang, and load it with libbpf (C), cilium/ebpf (Go), or Aya (Rust).

You can also write the BPF side itself in Rust with Aya, but I'm not sure how feature-complete that is.

For very simple use cases you can just use bpftrace.
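To make the C route concrete, here's a minimal sketch of the kernel side (my own illustration, not from this thread; assumes libbpf headers are installed and a clang BPF-target build, e.g. `clang -O2 -g -target bpf -c probe.bpf.c`):

```c
// probe.bpf.c -- minimal eBPF sketch (illustrative; needs libbpf-dev to build).
// Attaches to the execve tracepoint and logs each call to the trace pipe.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("tracepoint/syscalls/sys_enter_execve")
int log_execve(void *ctx)
{
    bpf_printk("execve called");
    return 0;
}

// The kernel requires a GPL-compatible license to use the bpf_printk helper.
char LICENSE[] SEC("license") = "GPL";
```

The userspace half then opens and attaches this object via libbpf (or cilium/ebpf or Aya) — the skeleton workflow from `bpftool gen skeleton` is the usual starting point.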


bpftrace is nicer to work with and can replace bcc in most cases for debugging.
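For a sense of how little ceremony bpftrace needs, here's a classic one-liner in the style of bpftrace's own tutorial (needs root and bpftrace installed):

```shell
# Print the process name and file path for every openat() syscall, system-wide.
# Run as root; Ctrl-C to stop.
sudo bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s %s\n", comm, str(args->filename)); }'
```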


Despite trying and reading it carefully, I didn't understand it. I wonder if it's me, the article, or both.


I have only had entry-level introductions to QM, but had no trouble understanding this. That may be because I have a background in computational dynamics, though I'm no expert in either field.

If I understood correctly, the article is trying to explain that the software/hardware architecture optimized for neural-net processing is equally suited to many-body simulation of quantum systems. The architecture allows broadcasting the intermediate results among all the individual particle simulators, which is intractable in other architectures: Monte Carlo simulations lose accuracy, and coupled-cluster simulations can only solve stable lattice configurations.

Personally, I like the observation they made that the fitness constraint for their training is determined by physics: whichever solution yields the lowest total-system energy wins.
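As a toy illustration of that "lowest energy wins" objective (my own example, not the article's method): for a 1D harmonic oscillator with a Gaussian trial wavefunction, the variational energy has a closed form, and "training" is just picking the parameter that minimizes it — which recovers the exact ground state:

```python
# Toy variational principle for a 1D harmonic oscillator (hbar = m = omega = 1).
# Trial wavefunction psi(x) ~ exp(-a * x**2 / 2); its variational energy is
# E(a) = a/4 + 1/(4a). The "fitness" is the energy; lowest wins.

def variational_energy(a: float) -> float:
    """Energy expectation of the Gaussian trial state with width parameter a."""
    return a / 4.0 + 1.0 / (4.0 * a)

# Crude "training loop": scan the parameter and keep the lowest-energy value.
candidates = [0.05 * k for k in range(1, 101)]  # a in (0, 5]
best_a = min(candidates, key=variational_energy)
best_E = variational_energy(best_a)

print(f"best a = {best_a:.2f}, E = {best_E:.4f}")  # minimum at a = 1, E = 0.5
```

The minimum lands at a = 1 with E = 0.5, the exact ground-state energy — a one-parameter stand-in for what the neural-net ansatz does with millions of parameters.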

