Same. The tone is really off. Here is a response I just got from Gemini 3.1: "Your simulation results are incredibly insightful, and they actually touch on one of the most notoriously difficult aspects of ..." It's pure bullshit: my simulation results are in fact broken, and GPT spotted it immediately.
The railroad buildout was a lot more, idk, tangible. Most of that money was spent employing millions of people to smelt iron, lay track, build bridges, blow up mountains, etc. It’s a lot more exciting than a few freight loads of overpriced GPUs.
The only regularity I can discern in contemporary online debates about LLMs is that for every viewpoint expressed, with probability one someone else will write in with the diametrically opposite experience.
Today it’s my turn to be that person. Large scientific code base with a bunch of nontrivial, handwritten modules accomplishing distinct tasks that are structurally similar in terms of the underlying computation. Pointed GPT Pro at it, told it what new functionality I wanted, and it churned away for 40 minutes and completely knocked it out of the park. Estimated time savings of about 3-4 weeks. I’ve done this half a dozen times over the past two months and haven’t noticed any drop-off or degradation. If anything it got even better with 5.4.
Surely it gets noisy after 30 or so, but pedestrian stock trading apps like Thinkorswim have very high levels of customizability and modularity. I think even extensibility in Thinkorswim’s case. Java I think. Anyway, I would think any users of OP’s app are not using 90% of the 500+ widgets.
"Panel" is an arbitrary UX delimiter, so fundamentally no, unless you're really pedantic in defining (upfront) a panel as a meaningfully semantic unit across apps.
I'm not sure I've correctly understood what you're implying.
If it's that I'm not working, well, I'm employed.
If it's that I'm not working enough to afford this money... Well, we come back to the bubble. Not everywhere in the world can you easily find a job that pays enough, even if you're willing to work more. And the employer will not give developers a $200/month subscription, much less for personal use.
If it's that I'm not working enough and I should go freelance so I can work as much as I want and get rich (I'm extrapolating)... Well, you're right, I could do that. But (at least at first) I would work a lot more for much less money. And even if I became a recognized freelancer, it wouldn't change the fact that I'd earn less than the SF baseline, or even the US tech-sector baseline in general. So, bubble again. I could also, like someone said, fold the token costs into my hourly/daily rate, but then I'd be much more expensive than other freelancers.
Also (though this is more of a personal case than my previous points), health issues can greatly affect how much work you can do.
Instinctively, if we suppose all newbie freelancers without any reputation start at the lowest rate possible to be competitive, passing the additional cost to my client will mechanically increase my rate, putting me at a disadvantage in getting any work. And given the difference in local monetary value for the same token prices, the rate delta is even higher.
It's a simplified model of the world, but it feels like simple economic rules.
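To make that simple model concrete, here is a toy calculation (all numbers are made up for illustration; the helper name and the 120 billable hours are my own assumptions, not anything from the thread). It shows why the same fixed subscription cost is a much bigger relative markup on a low local baseline rate than on an SF-level one:

```python
# Illustrative sketch: spreading a fixed monthly AI subscription over
# billable hours, and comparing the relative markup at two baseline rates.
# All figures are hypothetical.

def rate_with_tokens(base_hourly, subscription_per_month, billable_hours=120):
    """Hourly rate after folding a monthly subscription into billable hours."""
    return base_hourly + subscription_per_month / billable_hours

low_baseline = rate_with_tokens(25, 200)    # e.g. a cheaper market: ~$26.67/h
sf_baseline = rate_with_tokens(100, 200)    # e.g. an SF-level rate: ~$101.67/h

# The absolute surcharge is identical (~$1.67/h), but the relative
# markup differs a lot, which is the competitive disadvantage:
markup_low = (200 / 120) / 25    # ~6.7% increase on the low baseline
markup_high = (200 / 120) / 100  # ~1.7% increase on the high baseline
```

So the freelancer starting from the lower baseline has to raise their rate by roughly four times as much, in relative terms, to cover the same subscription.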
I assume the comment I'm referring to was written by someone who is already established, and for whom passing on the token cost is relatively smaller than in my environment.
You use profiles for that [0], or in the case of a more capable tool (like opencode) they're more confusingly referred to as 'agents' [1], which may or may not coordinate subagents.
So, in opencode you'd make a "PR Meister" or "King of Git Commits" agent that was forced to use 5.4mini or whatever, and whenever the session fell through to that agent, it would run on that preferred model.
For example, I use the spark models to orchestrate a bunch of sub-agents that may or may not use larger models; that way I get sub-agents and concurrency spun up very fast in places where domain depth matters less.
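A rough sketch of what that per-agent model pinning looks like (this assumes opencode's JSON config with an `agent` section; the agent names, prompt text, and model identifiers here are illustrative, not anything official):

```json
{
  "model": "anthropic/claude-sonnet-4",
  "agent": {
    "git-commits": {
      "description": "Writes commit messages and small git chores",
      "model": "openai/gpt-5.4-mini",
      "prompt": "You write concise, conventional commit messages."
    },
    "pr-meister": {
      "description": "Drafts pull request titles and summaries",
      "model": "openai/gpt-5.4-mini"
    }
  }
}
```

The top-level `model` stays on the larger default, while the cheap, narrow agents are pinned to a small model; delegating to one of those agents then automatically uses the pinned model. Check the opencode docs for the exact schema before relying on this.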
I’m seeing a real distinction emerge between “software engineering” and “research”. AI is simply amazing for exploratory research — 10x ability to try new ideas, if not more. When I find something that has promise, then I go into SWE mode. That involves understanding all the code the AI wrote, fixing all the dumb mistakes, and using my decades of experience to make it better. AI’s role in this process is a lot more limited, though it can still be useful.
That's because an LLM can access breadth at any given moment that you cannot. That's the advantage it has.
E.g. quite often a sound (e.g. music) brings back memories of a time when it was being listened to etc.
Our brains need something to 'prompt' (ironic, I know) for stuff to come to the front of the mind. But the human is the final judge (or should be) of whether the output is wrong, merely good quality, or high quality. A taste element is necessary here too.