To some degree there’s something like this happening. The old saying “pics or it didn’t happen” used to mean young people needed to take their phones out for everything.
Now any photo can be faked, so the only photos to take are ones that you want yourself for memories.
The one that I can think of is that the government sets the amount of electricity produced, and then it’s rationed. But I doubt the UK would be happy with rationed electricity where your power shuts off the second you go over. That would essentially be mandatory blackouts all the time.
Not to mention the cost would be borne by the government, so you end up paying it in taxes anyway.
Rationed electricity might also come in the form of a universal basic entitlement, followed by market price for higher usage. One assumes under such a system that the state would own and operate energy production and that they would, for instance, increase the ration over time, leaving the remaining needed capacity to be fulfilled by the market.
Honestly, pricing based on the cost of financing and operation isn't a terrible idea.
Sure, but you will end up with more energy consumed. If it’s a free ration, almost everyone will consume everything they are given, right up to the limit.
Under the current system, when energy becomes harder to produce or more people need it, rising prices mean people reduce their usage. So pricing for the cost of financing, sure, but it might end up being a higher cost because people will consume more.
A common argument I hear is “illegals/criminals shouldn’t get a trial,” as if the point of trials isn’t to figure out who is and isn’t genuinely those things.
Cloudflare is talking about Italian law and Italian policy and making comments about the actions they will take in Italy with Italian users specifically.
“Italian here” as in “I am not a random person with no skin in the game / I live in the country and presumably am better informed on the policy he is talking about.”
If there was a post about a law in NYC, I think it would be helpful to hear takes from New Yorkers.
Not doubting you, but any specific examples of him supporting monopoly?
Or are you saying the general environment of high finance supports this?
No doubt he had more money than he needed, but if this is referring to his preference for Coca-Cola and Apple stock / any stocks with the ability to set their own prices because of market dominance, I feel like that’s not a totally fair criticism.
And this bit is tripe: “Buffett is the avatar of monopoly. This is a guy whose investments philosophy is literally that of a monopolist. I mean, he invented this sort of term, the economic ‘moat,’ that if you build a moat around your business, then it's going to be successful. I mean, this is the language of building monopoly power.”
Seeking moats isn’t monopolistic. It’s inherent to competition.
Feel like this debate might be way different for novel writing vs everyday writing.
I’m biased because I am not a very good writer, but I can see why in a book you might want to hint at how someone walked up to someone else to illustrate a point.
When writing articles to inform people, technical docs, or even just letters, don’t use big vocabulary to hint at ideas. Just spell it out literally.
Any other way of writing feels like you are trying to be fancy just for the sake of seeming smart.
Spelling it out literally is precisely what the GP is doing in each of the example sentences: literally saying what the subject is doing, and with the precision of a single, better-chosen word conveying not only the mere fact of bipedal locomotion, but also the WAY the person walked, with what pace, attitude, and feeling.
This carries MORE information in the exact same number of words. It is the most literal way to spell it out.
A big part of good writing is knowing how to convey more meaning without more words.
Bad writing would be to add more clauses or sentences to say that our subject was confidently striding, conspiratorially sidling, or angrily tromping, and adding much more of those sentences and phrases soon gets tiresome for the reader. Better writing carries the heavier load in the same size sentence by using better word choice, metaphor, etc. (and doing it without going too far the other way and making the writing unintelligibly dense).
Think of "spelling it out literally" like the thousand-line IF statements, whereas good writing uses a more concise function to produce the desired output.
Those examples were simple, so it’s less of an issue, but if the words you use are so obscure that the reader has to read slower or has to stop to think about what you mean…then you aren’t making things more concise even if you are using fewer words.
For sure! Every author should know their audience and write for that audience.
An author's word choices can certainly fail to convey intended meaning, or convey it too slowly, because they are too obscure or are a mismatch for the intended audience; that is just falling off the other side of the good-writing tightrope.
A technical paper is an example where the audience expects to see proper technical names and terms of art. Those terms will slow down a general reader, who will be annoyed by the "jargon," but it would annoy every academic or professional if the "jargon" were edited out for less precise, more everyday words. And vice versa for the same topic published in a general interest magazine.
So, an important question is whether you are part of the intended audience.
Pre-training is just training; it got the name because most models also have a post-training stage, so to differentiate, people call the first stage pre-training.
Pre-training: You train on a vast amount of data, as varied and high quality as possible. This determines the distribution the model can operate within, so LLMs are usually trained on a curated dataset of the whole internet. The output of pre-training is usually called the base model.
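To make "pre-training is just next-token prediction" concrete, here is a toy sketch in PyTorch. Everything in it is made up for illustration: TinyLM, the sizes, and the random "corpus" standing in for tokenized internet text. A real LLM is a huge transformer, but the training objective is the same idea:

    import torch
    import torch.nn as nn

    vocab_size, d_model = 100, 32

    class TinyLM(nn.Module):
        # stand-in for a real transformer stack; the point is only the objective
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.rnn = nn.GRU(d_model, d_model, batch_first=True)
            self.head = nn.Linear(d_model, vocab_size)

        def forward(self, tokens):
            h, _ = self.rnn(self.embed(tokens))
            return self.head(h)  # logits for the next token at every position

    model = TinyLM()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    corpus = torch.randint(0, vocab_size, (8, 64))  # pretend: tokenized internet text

    for step in range(100):
        inputs, targets = corpus[:, :-1], corpus[:, 1:]  # predict token t+1 from tokens <= t
        logits = model(inputs)
        loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()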
Post-training: You narrow down the task by training on the specific behavior you want from the model. You can do this in several ways:
- Supervised Finetuning (SFT): Training on a strict, high quality dataset of the task you want. For example, if you wanted a summarization model, you'd finetune the model on high quality text->summary pairs, and the model would be able to summarize much better than the base model. (A rough code sketch of this is below.)
- Reinforcement Learning (RL): You train a separate reward model that ranks outputs, then use it to rate the main model's generations, and use those ratings to train the main model.
- Direct Preference Optimization (DPO): You have pairs of good/bad generations and use them to align the model towards/away from the kinds of responses you want.
Post-training is what makes the models easy to actually use. The most common form is instruction tuning, which teaches the model to talk in turns, but post-training can be used for anything. E.g. if you want a translation model that always translates a certain way, or a model that knows how to use tools, etc., you'd achieve all that through post-training. Post-training is where most of the secret sauce in current models is nowadays.
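For the SFT bullet above, a rough sketch of what it looks like in code, using gpt2 from Hugging Face as a stand-in base model. The two text->summary pairs are invented placeholders; a real run would use thousands of curated examples plus batching, padding, and evaluation:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

    pairs = [
        ("Summarize: The meeting covered budget cuts and a new hiring freeze.",
         "Budget cuts and a hiring freeze."),
        ("Summarize: The patch fixes a crash when the config file is missing.",
         "Fixes a crash on missing config."),
    ]

    model.train()
    for prompt, summary in pairs:
        prompt_ids = tok(prompt, return_tensors="pt").input_ids
        summary_ids = tok(" " + summary + tok.eos_token, return_tensors="pt").input_ids
        full_ids = torch.cat([prompt_ids, summary_ids], dim=1)
        labels = full_ids.clone()
        labels[:, : prompt_ids.shape[1]] = -100  # only the summary tokens contribute to the loss
        loss = model(input_ids=full_ids, labels=labels).loss
        opt.zero_grad(); loss.backward(); opt.step()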
Want to also add that the model doesn’t know how to respond in a user -> assistant style conversation after its pre-training; it’s a pure text predictor (look at the open source base models).
There’s also what is being called mid-training, where the model is trained on high(er) quality traces; it acts as a bridge between pre- and post-training.
just to go off of this, there is also the stochastic random overfit retraining process (SRORP). The idea behind SRORP is to avoid overfitting. SRORP will take data points from -any- aspect of the past process with replacement and create usually 3-9 bootstrap models randomly. The median is then taken from all model weights to wipe out outliers. This SRORP polishing -if done carefully- is usually good for a 3-4% gain in all benchmarks
If pre-training is just training, then how on earth can OpenAI not have "a successful pre-training run"? The word successful indicates that they tried, but failed.
It might be me misunderstanding how this works, but I assumed that the training phase was fairly reproducible. You might get different results on each run, due to changes in the input, but not massively so. If OpenAI can't continuously and reliably train new models, then they are even more overvalued than I previously assumed.
Because success for them doesn't mean it works, it means it works much better than what they currently have. If a 1% improvement comes at the cost of spending 10x more on training and 2x more on inference, then the run is a failure. (Numbers out of my ass.)
- Reinforcement learning with verifiable rewards (RLVR): instead of using a grader model you use a domain that can be deterministically graded, such as math problems.
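The "verifiable" part just means a deterministic grading function replaces the learned reward model. A tiny sketch of that idea (extract_answer and the "Answer: <number>" prompt format are assumptions for illustration, not anyone's actual implementation):

    import re

    def extract_answer(generation: str) -> str:
        # assume the model is prompted to end its work with "Answer: <number>"
        m = re.search(r"Answer:\s*(-?\d+)", generation)
        return m.group(1) if m else ""

    def reward(generation: str, ground_truth: str) -> float:
        # deterministic check: no reward model, just string equality on the answer
        return 1.0 if extract_answer(generation) == ground_truth else 0.0

    # In RLVR you sample several generations per problem, score each with reward(),
    # and feed those scores (instead of a reward model's ratings) into the RL update.
    samples = ["We add 17 and 25. Answer: 42", "17 + 25 is 41. Answer: 41"]
    print([reward(s, "42") for s in samples])  # [1.0, 0.0]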
The first step in building a large language model. That's when the model is initialized and trained on a huge dataset to learn patterns and whatnot. The "P" in "GPT" stands for "pre-trained."
Would current collapse make more than just Northern Europe colder? Or maybe they would be warmer?
They seem to suggest only certain northern countries would be affected because warm water stops flowing from the south.
So the southern waters would stay hotter right? Or what about across the Atlantic where the currents do the opposite (and make the winters so cold). Would Boston and New York get more temperate?
North of the Alps, temperatures would drop considerably. South of the Alps, probably fine due to the thermal mass of the Mediterranean Sea. However, for the whole of Europe you would see a massive drop in rainfall, since basically all of the humidity comes from warm Atlantic air, which carries a lot of it.
Additionally, the Caribbean, Mexico, and the southern US would also be fucked, since the energy wouldn't disperse and all the heat and humidity would stay there. Hurricanes would be much more violent, with way more rain, and likely more frequent.
The Labrador Current might become weaker, though it is not a given. Currently, the waters from the Gulf Stream cool down and sink to the bottom of the ocean, so they don't displace the Arctic waters and hence are not likely the cause of how cold the northeastern US is.
None? It is not certain any country will benefit. Countries built their infrastructure and population centers according to the weather of their locations. If the weather changes, probably every country will have to adjust.
If you are asking which area will benefit from climate change, I would say Siberia, as it will become increasingly important due to the northern corridor remaining ice free and because a lot of people will be displaced by weather/sea level rise. And that place is empty. Additionally, it has nice farming soil which is not used right now, since there are easier places to farm, but in a warming world this could change.
I don’t know if this is ridiculous, but I’m curious if access to LLMs will one day be priced like the Bloomberg Terminal or something. Where access for one user is like 20,000 dollars. Maybe less than that, but like 5k per person.
Seems crazy by most software standards, but when Bloomberg became software-only (they stopped selling physical terminals), people were shocked that they paid almost nothing for Excel but then so much for the second tool they needed as traders.
The difference is that Bloomberg Terminals were always expensive, and so people expected to pay. LLMs are basically free (subsidized) at this point, and people are very sensitive to large price increases.
Sure, and I’m sure there would be a huge shock, but simple economics would dictate that if that’s the true equilibrium price for LLMs to be economical, then it would have to get to that price eventually.
1. Is it worth 20k to anyone? Well depends on the advantage but maybe yes. People are dropping 200-1000 a month already as ordinary devs.
2. Is there competition? Yes, lots. To get to 20k, one model provider needs a real killer edge that no one else can keep up with. Or alternatively constraints push prices up (like memory!), but then that is not margin anymore.
I think there could be a few directions. Consumer level LLM usage will become free. Corporate-grade LLM use will cost a lot of money. Maybe the highest grade LLM use will be regulated and only done through government labs.
What’s a high grade LLM though in such a competitive environment? And if China releases more high grade open source models, that pricing model is f8cked.
One interesting thing I heard someone say about LLMs is that this could be a people’s innovation: basically something so low margin that it actually provides more value to “the people” than to billionaires.
It just seems hard to imagine that simultaneously running 1,000 instances of claude code will be cheap in the next decade, but maybe running 1,000 instances of claude-like tools is what a corporate LLM subscription will give. And maybe running 1,000,000 or a billion such models is what the government will do once a contract gets awarded.
It's my understanding that even the paid version of ChatGPT is heavily subsidized, so yeah, the prices will have to be raised quite substantially to reach profitability.