Hacker News | nunodonato's comments

> Impressive technology, but that old skool artisanal weirdness of yore only becomes more valuable and nostalgic.

but does it still exist? Even without AI, everyone is using the same CSS frameworks, the same libraries and templates... design is pretty boring these days. CSS Zen Garden, anyone?


The small web still has a unique soul to it. dimden.dev is a good example.

Cooperatives are underrated... under capitalism, that is

I was hoping this would be the model to replace our Qwen3.5-27B, but the difference is marginal. Too risky; I'll pass and wait for the release of a dense version.


Thank you, I had no idea ollama was so shady! I will start using llama.cpp directly.

This is really cool, I'd love to use something like this for my kids too. Maybe I'll try your project when I have some more free time. I'd love to contribute, but I'm not very skilled in Python.

If you don't mind me asking, what hardware did you use? Especially for the project, I'm guessing it needs quite a strong bulb to be visible in broad daylight?


That's what I was thinking when reading the comments. How the heck have people had time to read it all and comment? I guess they haven't :)

Also, I'm really curious to know if some of it is no longer valid. 14 years is a long time in science


We have a big dependency on AI, both for developers (we could survive without it; it's mostly habit) and for internal workflows (very hard to go without it). So we decided to unplug from cloud AI, rent our own GPU, and use an open model for both scenarios. We have been very happy with it so far: 60% cheaper and around 50% faster.

Faster in what way? All the open models we have access to at work are very noticeably behind the frontier models to the point where it's usually faster to not use them at all.

Faster in that you don't have to make so many network requests.

No, it's way, way faster than Claude.

Why not an in-between scenario, like using a managed inference provider to host your own models?

What would be the advantage?

Me too! Fluxbox and GKrellM for some kick-ass desktop "widgets" monitoring the computer :D


I think Playwright doesn't capture video, right?


It does. I literally just watched a video of a Playwright test run a few minutes ago.


Yes it does. https://github.com/microsoft/playwright-cli?tab=readme-ov-fi...
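For what it's worth, video recording is just a config option in Playwright's test runner. A minimal sketch (the `video` value and output location are per the standard `@playwright/test` config):

```typescript
// playwright.config.ts -- minimal sketch enabling video capture
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // 'on' records every test; 'retain-on-failure' keeps videos only for failing tests
    video: 'on',
  },
});
```

By default the recordings end up as .webm files under the test-results directory alongside traces and screenshots.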

I'm pretty sure OP wrote their own version of playwright because they didn't know this existed.


Yeah I’ve never seen it capture video before, but if you specify in your `AGENTS.md` that you want to test certain types of workflows, it will take progressive screenshots using a sleep interval or by interacting with the DOM.


Chrome DevTools MCP really clutters your context. playwright-cli (not the MCP) is so much more efficient.


Chrome DevTools MCP now has an (experimental) CLI as well, and it can produce neat things like Lighthouse audits.

https://github.com/ChromeDevTools/chrome-devtools-mcp/pull/1...

I've only used it a bit, but it's working well so far.


Cool! It needs to mature a bit, though. Session sharing is a no-go for me, as I need to run requests in parallel and they would interfere with each other.

