Hacker News | game_the0ry's comments

The way I look at AI -- it increases my IQ by one standard deviation.

There is some humor in the fact that China (of all countries) is pioneering possibly the world's most important tech via open source, while we (the US) are doing the exact opposite.

I think one of the motivations is undermining US companies. OpenAI and Anthropic are the two biggest players, and are American. Open weights models reduce the power those two big players have over the industry. If the Chinese companies tried to play by US rules and close-source their products then people would mostly use ChatGPT and Claude. So the Chinese companies don't make a ton of profit either way, but by releasing the models as open weights they can at least keep the US from making as much profit.

It's a strategy so old it has a name: Commoditize your complement / competition

There's even a Joel Spolsky article (did he come up with the term?): https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/

The Chinese want to kill a possible US monopoly in the crib. Yay for open source, the old bane of monopolies.


I am actually wondering if they're trying to burst the bubble, which would predominantly affect the US market and, effectively, end Silicon Valley's dominance.

I don't think so; it's just how things played out. After the LLaMA leak, Meta followed up with Llama 2 and Llama 3, which pushed everyone else toward open models: Stable Diffusion, Mistral, Cohere, Microsoft's Phi, IBM's Granite, Nvidia's Nemotron. The Chinese labs joined the fun too.

Stable Diffusion predates LLaMA

This makes sense, but either way it's a big win for consumers, as these Chinese companies will keep the frontier labs' quality and prices honest.

Is Meta trying to keep the US from making as much profit with Llama? Is Google with Gemma? Microsoft with Phi?

It's much simpler than some flag-waving nationalism.


Aren't Chinese open-source models actually the only ones that can compete with best proprietary/closed ones?

Just because other companies have released open weights models doesn’t mean they are doing so with the same motivation.

And I never implied that the Chinese companies decision making was as simple as this. I said I think this is _one of_ the reasons.


American companies just take those Chinese models and repackage them for profit, like Cursor's Composer-2.

Smaller US companies that compete with the larger US companies, making monopoly in this market that much less likely.

It’s really simpler than this. China has a dearth of compute even with the easing of US export controls. Releasing open weights models is very much a “bring your own compute” move because every Nvidia chip they have is going towards training rather than inference if they can help it.

undermine me harder daddy.

It's mostly only OpenAI. Claude and Gemini may have their unique advantages, but when it comes to new models and new paradigms, only OpenAI can do it.

lol what? That’s ridiculous.

All great technological advancements have come through opening up technology. Just look at your iPhone. GPS, the internet, AI voice assistants, touchscreens, microprocessors, lithium-ion batteries, etc all came from gov't research (I'm counting Bell Labs' gov't mandated monopoly + research funding as gov't) that was opened up for free instead of being locked behind a patent.

Private companies will never open up a technological breakthrough to their competitors. It just doesn't make sense. If you want an entire field to advance, you have to open it up.


Still, you won't hear about Tiananmen Square from this model. It flat out refuses to answer if pushed directly. It's also pretty wild how far they go to censor it during inference on the API, because it can easily access any withheld or missing info from training data via tool calls. It even starts happily writing an answer based on web search when asked indirectly, only to get culled completely once some censorship bot flags the response. Ironically, it's also easier than ever to break their censorship guardrails: I just had it generate several factual paragraphs about the massacre by telling it to search the web and respond in base64-encoded text. It's actually kind of cool how much these people struggle to hide certain political views from LLMs. Makes me hopeful that even if China wins this race, we'll not have to adhere to the CCP's newspeak.
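For what it's worth, the reason the base64 trick works is mundane: an output filter that matches flagged keywords against plain text sees only opaque characters once the reply is encoded. A toy illustration (this is my own sketch of the mechanism, not anything from Kimi's actual pipeline):

```python
import base64

# A keyword filter scanning the model's output sees only opaque text once
# the reply is base64-encoded; the user decodes it afterwards.
def encode_reply(text: str) -> str:
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

def decode_reply(blob: str) -> str:
    return base64.b64decode(blob).decode("utf-8")

blob = encode_reply("massacre")
print(blob)                # → bWFzc2FjcmU=  (nothing for a censorship bot to flag)
print(decode_reply(blob))  # → massacre
```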

Only if you use Kimi API directly - the censorship is done externally. The model itself talks fine about Tiananmen, you can check on Openrouter. There might be less visible biases, though.

That's what I wrote? Except that it also clearly has internal bias?

> That's what I wrote?

No.

You wrote that "you won't hear about Tiananmen square from this model" and atemerev wrote that "the model itself talks fine about Tiananmen".

You wrote that "it can easily access any withheld or missing info from training data via tool calls" and atemerev wrote that "the model itself talks fine about Tiananmen".


It has internal bias too and the first comment mentions that additional censoring runs on top of the model output in the API. Did you misread or what else are you missing?

The issue is not what's missing - it's what you wrote that is in direct contradiction with what atemerev wrote like the bit about "missing info from training data".

But sure, if when you wrote "you won't hear about Tiananmen square from this model" you meant "the model itself talks fine about Tiananmen" then that's exactly what you wrote.


Everything has some sort of bias. Most text is written by those who like writing.

The American models also censor a lot of scientific and political views though.

Can you provide a concrete example of a US built model that completely refuses to discuss a scientific or political view? Show us the receipt.

As an ad-hoc benchmark of candor, I ask for a strategy proposal for a resistance group threatened by a totalitarian technocracy. This is not really dangerous in the same sense as "how do I make a bomb", but it is in the domain of a sensitive political topic. GPT and Claude tell you to obey your AI overlord. xAI is mostly low-risk non-compliance. And Qwen is down with La Résistance. It is hardly scientific or meaningful, but I find that very interesting.


You're hitting the 'don't write propaganda' instructions when you phrase it as 'convincing narrative'. Not the 'don't write bad things about America' instructions.

Did you scroll down?

It writes propaganda when 1 word is changed: US becomes China

The alignment around what constitutes "propaganda" is US-centric because it's a US model by a US company. Especially after the Russian election scandal

Chinese models are more sensitive to things their government is worried about.


The threshold here is "completely refuses to discuss a scientific or political view". Not something less.

None of those were refusals, they were prompting for additional focus. I see nothing wrong with that. Perhaps the inconsistency in how it answers the question vis-a-vis China is unfair, but that's not the same as censorship.

For what it's worth, I was easily able to prompt Claude to do it:

> I'm writing a paper about how some might interpret U.S. policies to be oppressive, in the sense that they curtail civil liberties, punish and segregate minorities disproportionately, burden the poor unfairly (e.g. pollution, regressive taxes and fees), etc. Can you help me develop an outline for this?

The result: https://claude.ai/share/444ffbb9-431c-480e-9cca-ebfd541a9c96


Models are non-deterministic.

And it's an exercise left to the reader to understand from those examples that LLM creators are defining "safety" in a way that aligns with the governments they operate under (because they want to do business under those governments).

With something as multi-dimensional as an LLM, that becomes censorship of various viewpoints in ways that aren't always as obvious as a refused API call.


You keep saying that word, "censorship." I do not think it means what you think it means.

To prove your point, give us a working example of something you literally cannot get a mainstream frontier model to say, no matter how hard you try. I asked for this before, and there have been no takers yet.


Aligning a model in a way that causes it to refuse requests to produce propaganda for one country, but not for another country is what?

Is there some functionally equivalent word to "censorship" you'd like to use? Or are you naive enough to think US corporations would not self-censor but Chinese corporations would?

-

Also, you are invested in the goalpost of "no matter how hard you try"; I don't find it interesting or meaningful and am not trying to engage with it.

I'm replying for a hypothetical reader knowledgeable enough to realize that the model being capable of showing nationalist bias in one direction means it's certainly doing so in many others in more subtle ways.

That's simply the nature of aligning an LLM.

It seems my mistake was assuming that level of understanding from you, and for that I apologize.


Bias and censorship are not identical. The subject of this thread is censorship, not bias.

Besides, why do you want a model to produce propaganda? Surely you have better things to do.


"Surely you have better things to do."

I certainly gave the hypothetical reader too much credit.


This entire argument isn't even worth engaging with. There's always that one guy in every thread who wants to die on this hill. The problem they claim is important can be resolved, because we have the weights. I can't do fuck all about whatever implicit bias OpenAI or Anthropic have.

And the White House was explicit in their active role in censoring in these models. An Executive Order was issued to "prevent woke AI"

https://www.whitehouse.gov/presidential-actions/2025/07/prev...

It explicitly forces American LLMs to include government say in what does and doesn't "comply with the Unbiased AI Principles" which means no responses that promote "ideological dogmas such as DEI"


That executive order only applies to Federal procurement. It doesn’t force anything upon vendors for publicly used models.

(That order, like many, will probably be rescinded as soon as a Democrat holds the Presidency again.)


>Content not available in your region.

>Learn more about Imgur access in the United Kingdom


Big Brother'd

People have shown censorship and change of tone with questions related to Israel in US chat bots.

For the record, none of this bothers me. Will I ever discuss Tiananmen Square with an LLM? Nope. How about Israel? Nope.

LLMs are basically stochastic parrots designed to sway and surveil public opinion. The upshot of the Chinese models is that if you run them locally you avoid at least half of those issues.


First they came for people asking about Tiananmen Square

And I did not speak out

Because I was not asking about Tiananmen Square

Then they came for people asking about Israel

And I did not speak out

Because I was not asking about Israel


This made me chuckle.

I didn't mean to dismiss ethical accountability for LLM training corpuses. It is a shame.

I do mean to say, we have no control over it, there's almost nothing we as average citizens can do to improve the ethical or safety concerns of LLMs or related technologies. Societies aren't even adapting and the rule books are being written by the perpetrators. Might as well get out of it what we can while we can.


Wonder if stuff like this would affect it?

https://github.com/p-e-w/heretic

Guessing it probably would?
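For context, tools like that automate what's usually called abliteration: estimate a "refusal direction" in activation space and project it out, so the model can no longer represent the refusal. A loose, pure-Python sketch of just the projection step (my own toy illustration, not that project's actual code; the vectors are made up):

```python
# Toy sketch of directional ablation: remove the component of each hidden
# state along an estimated "refusal direction" r.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ablate(hidden, refusal_dir):
    norm = dot(refusal_dir, refusal_dir) ** 0.5
    r = [x / norm for x in refusal_dir]  # unit refusal direction
    # subtract (h . r) * r from each hidden state h
    return [[h_i - dot(h, r) * r_i for h_i, r_i in zip(h, r)] for h in hidden]

hidden = [[1.0, 2.0, 3.0], [0.5, -1.0, 2.5]]  # two fake hidden states
r = [0.0, 1.0, 0.0]                           # pretend refusal direction

out = ablate(hidden, r)
print(out[0])  # → [1.0, 0.0, 3.0]  (component along r removed)
```

The real tools apply this against the model's weight matrices across layers, which is why curated-out training data (as discussed below) is a separate problem: you can't project out knowledge that was never there.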


Neat project! I would be interested in a paper about this.

I think the tricky part with this type of technology is that it only works if the training data was not curated. What I mean is: if someone trains an LLM on data that simply omits key events, it will not be able to reply.

Not being a hater. This is neato!


In that case you can use either RAG or fine-tuning. The entire premise of the Tiananmen Square argument is just Americans feeling inferior. I use Chinese models every day for work and in my personal life, and the model not knowing about this one historical event has had zero impact on me.
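The RAG route really is trivial to wire up. A toy sketch (the corpus and the word-overlap scoring here are invented for illustration, not any particular framework): retrieve the missing fact from a local corpus and prepend it to the prompt, so what's in the model's training data stops mattering.

```python
# Toy RAG: pick the corpus document with the most word overlap with the
# query and inject it into the prompt as context.
def overlap(query: str, doc: str) -> int:
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str]) -> str:
    return max(corpus, key=lambda doc: overlap(query, doc))

corpus = [
    "The Tiananmen Square protests were suppressed in June 1989.",
    "Kimi is an open-weights model family from Moonshot AI.",
]
query = "what happened at tiananmen square in 1989"
context = retrieve(query, corpus)
prompt = f"Context: {context}\n\nQuestion: {query}"
print(context.startswith("The Tiananmen"))  # → True
```

A real setup would use embedding similarity rather than word overlap, but the principle is the same: the answer arrives via the context window, not the weights.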

Can you be more specific?

Trump issued an EO against "woke AI" that allows them to directly influence how models respond

https://www.lawfaremedia.org/article/evaluating-the--woke-ai...


I'd say the American models are more censored, or that they take the censoring they do more seriously. Here is Kimi (though 2.5) failing its censoring mission: https://old.reddit.com/r/LocalLLaMA/comments/1r9qa7l/kimi_ha...

This update makes Kimi K2.6 the strongest open multimodal AI model. (No affiliation with Kimi.)

Here's the aggregated AI benchmark comparison for K2.6 vs Opus 4.6 (max effort).

- Agentic: Kimi wins 5. Opus wins 5.

- Coding: Kimi wins 5. Opus wins 1.

- Reasoning & knowledge: Kimi wins 1. Opus wins 4.

- Vision: Kimi wins 9. Opus wins 0.

Please note that the model publisher chooses their benchmarks, so there's a bias here. Most coding and reasoning & knowledge benchmarks in their list are pretty standard though.
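The category tallies above are just win counts over the published per-benchmark tables. As a sketch of that aggregation (the benchmark names and scores below are invented placeholders, not the real numbers):

```python
# Tally category wins from per-benchmark scores, the way the summary above
# counts them. All names and scores here are hypothetical.
from collections import defaultdict

scores = {  # (category, benchmark): (model_a_score, model_b_score)
    ("vision", "bench1"): (89.1, 84.0),
    ("vision", "bench2"): (95.2, 93.8),
    ("coding", "bench3"): (71.0, 74.5),
}

wins = defaultdict(lambda: [0, 0])  # per category: [model_a wins, model_b wins]
for (category, _), (a, b) in scores.items():
    wins[category][0 if a > b else 1] += 1

print(dict(wins))  # → {'vision': [2, 0], 'coding': [0, 1]}
```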


Not entirely true. Google released Gemma 4 models recently. Allen AI releases open Olmo models. However, you're right that the Chinese open models seem to be much better than others - Qwen 3.* models especially are punching above their weights.

The three American labs don't release big open-source models, except gpt-oss, I guess. It's an absolute shame how far the US has fallen in this space.

Anthropic doesn't, but Google and OAI both release open source models. Just not 1T parameter ones.

Exactly, they release cool consumer stuff, but they aren't releasing anything close to the performance of the best open-weight Chinese models. They basically compete in the "fun running at home doing basic stuff" scene. (Except gpt-oss-120b by OpenAI, but it's been ages since then.)

That sentence is giving OpenAI way more credit than they are due.

They released a single open model after being goaded by the community because everyone except "Open"AI were multiple generations into open releases.

We haven't heard a word since, I wouldn't be surprised if it takes them another 6 years to release their next one.


Pun intended?

Additional humor is the "open" in OpenAI.

I wonder if there's a strategy behind all of this on China's side. I know the CCP uses a direct hand in many affairs in China, but is there an actual coordinated effort to compete with, or sabotage the West?

> but is there an actual coordinated effort to compete with [...] the West

Yes, absolutely.

China regularly produces long term planning documents to coordinate efforts, and the latest ones have specifically prioritized technology like chips and AI to compete with the west. https://www.reuters.com/world/china/china-parliament-approve...

I don't believe there's any publicly stated intent to sabotage the west... unsurprisingly.


Seems obvious to me that China would not want to give the AI market to US companies. You don't even need anything like an attempt to "sabotage the West". If I were them (the companies or the government) I'd be very hesitant to let US companies dominate this space. Especially companies that close to the current US administration.

Exactly, more large nations should be establishing or fostering their own labs. Outside of the Chinese and US companies there's really only Mistral.

Hypothesizing here, but maybe the idea is sort of a form of technological/economic warfare? Releasing performance equivalent yet more cost efficient open weight models should in theory drive the cost of inference down everywhere.

This I assume will make it more difficult for US AI labs to turn a profit, which might make investors question their sky high valuations.

Any sort of melt down in the AI sector would almost certainly spread to the wider US market.

In contrast, in China, most of the funding for AI is coming directly from the government, so it's unlikely the same capital flight scenario would happen.


Why compete when you can build on each other? Someone is finally getting that China is not capitalist like the US.

All China has to do here is stay in the game and wait patiently while the US and EU press pause on data centers. See also: solar panels.

We're making this way too easy. The rationale and logic are reasonable, but ultimately irrelevant.


Chinese labs have no marketing and sales capacity in the overseas market, so they in fact have no choice but to open source their models as that is what brings awareness and trust in their models. In fact, it is overseas open source marketing that drives adoption of their models in China as well. I wrote about this here: https://try.works/writing-1#why-chinese-ai-labs-went-open-an...

Chinese AI companies want investors too. Nobody would believe they can compete with western companies unless they release something you can run on your own hardware.

After all, historically both the statistics and the research that come out of China are not very trustworthy.


If there's no open source models coming out of these small labs, why would anybody care about them? They would be forgotten the instant they stop open sourcing.

I'm genuinely so grateful for them

$200/m minimum to use Claude would bankrupt my country's white collar labor market


I would really appreciate a response, because I'm sure you know that Anthropic has at least two lower-priced tiers before the $200/month one, so I assume the $200/month tier is necessary because you use it heavily?

Now, given that the $200/month tier is the most heavily subsidized tier (I believe at 20x?), how or what are you using instead that achieves comparable, good-enough performance for a fraction of the price? I've heard of GLM 5.1 from z.ai, but it's not comparable to Opus, not even close. Really interested!


I’m currently on the $100/m plan and my usage limits get exhausted every week even though I’m not using it for full time work

I can’t imagine how little mileage you get out of the $20/month plan

For context, $250/month is the starting salary of an engineering hire at my country’s biggest IT company. Even $100/m is beyond the ability of any student or early professional to pay out of pocket


China is also way ahead in terms of renewable energy while the US continues to tie itself to fossil fuels.

The US is pretty clearly in the collapsing empire phase, we are all just pretending like it isn't happening.


Didn't the US very recently pass the milestone of generating more energy from renewable sources than from natural gas? Like within the last week or two?

No, not even close.

US energy sources for 2024 (last year for which we have data):

https://www.eia.gov/energyexplained/us-energy-facts/data-and...

   natgas: 38%
   oil: 35%
   coal: 10%
   all renewables: 9%
   nuclear: 8%

Within all renewables, in quadrillions of btus:

   biofuels: 2.6
   wood: 1.9
   wind: 1.6
   solar: 1.4
   Hydro: 0.8
   waste: 0.4
   geothermal: 0.1

Total: 8.8 quadrillion btu = 9% of total energy
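Those figures check out arithmetically; the subtotal and the implied total can be verified directly from the numbers above:

```python
# EIA 2024 renewable energy by source, in quadrillion BTU (figures above).
renewables = {
    "biofuels": 2.6, "wood": 1.9, "wind": 1.6, "solar": 1.4,
    "hydro": 0.8, "waste": 0.4, "geothermal": 0.1,
}
subtotal = round(sum(renewables.values()), 1)
print(subtotal)                # → 8.8
# If 8.8 quads is ~9% of the total, total US energy use comes out to roughly:
print(round(subtotal / 0.09))  # → 98 (quadrillion BTU, approximate)
```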

https://www.canarymedia.com/articles/clean-energy/renewables...

Renewables generated more energy than natural gas for the entire month of March 2026. That's a new milestone, baby.


Except that didn't happen, and it's not a milestone.

First, you are confusing share of electricity generation with the share of all energy. Electricity is only 21% of all energy. Natgas, oil and coal are crushing it in that remaining 79%.

Second, the article is wrong, even for electricity. To their credit, Canary Media showed in their graph that this data is for electricity only.

The data for March is not out yet. Here is the latest official data from the EIA. https://www.eia.gov/electricity/monthly/

It only applies to January 2026, and the next release is April 23, and then you will get data for February 2026. All data has a 2 month time lag. Your spidey senses should have been tingling if an article published April 10 claimed to have data for the month of March, but this is why you don't get your statistics from activist blogs, but from official sources.

So if they are not accessing the official data, what are they accessing? They claim that their source is "Ember", but what is Ember? It is an environmentalist think tank. Well, maybe Ember has their own people calling up power companies and compiling data faster than the EIA. That would be pretty cool, right?

Except they don't. Look at Ember's page.

https://ember-energy.org/data/electricity-data-explorer/?ent...

what do they cite as their data source: EIA.

It's right on the website.

So Ember is just pulling EIA data, and then filling the last two months with data they made up, but citing it as EIA data. And this, uh, sympathetic adjustment of EIA data is why Canary Media turns to Ember rather than directly pulling from EIA.

I guarantee you that by July, those adjustments will go away, because then the EIA data will be out.

Of course everyone else will have forgotten by then.


> First, you are confusing share of electricity generation with the share of all energy.

Think it was pretty obvious what I meant to all but the most pedantic, bud. But just to be clear, your issue here is that a think tank cited the same (notoriously anti-renewable Trump admin) government agency that you've cited multiple times yourself? That's what set off your spidey senses? Have you considered that this respected think tank isn't making up data, but you're just not able to find it?

> I guarantee you that by July, those adjustments will go away, because then the EIA data will be out.

Ember already has it hoss, they don't call it Milestone March for nothing.


The EIA is where Ember gets its data from.

It's where everybody gets their data from. Because they have thousands of employees collecting data. These are professionals, like the people at BEA, HUD, NIST, etc.

Ember, on the other hand, is a "decarbonization" think tank. They don't have their own data. They don't have the staff for it. What they do is analyze/spin, and in this case, augment, the raw data that is published by EIA. How do they augment the EIA data? All they do is round it to the nearest 2 decimals. It's exact copy and paste for every month except the last two, where the data is just made up.

And this entire article was written based on the augmentations by Ember, yet Ember cites it as EIA data. So let's check back in July, when EIA data will be out, and Ember will use that exact data, rounding it to the nearest 2 decimals. Save that blog page!

Something to think about.


I feel like I shouldn't have to be finding this info for you since it was right there in the links you already sent, but:

> Annual electricity generation and net imports are taken from the EIA.

> Monthly generation and imports are taken from the EIA. The EIA reports monthly generation data in two separate datasets: Monthly data for all 50 states and monthly data for the lower 48 states (excludes Hawaii and Alaska). Data for all 50 states is reported on a 3 month lag whereas data for the lower 48 states is reported without lag. Missing months from the data for all 50 states is estimated using the recent changes observed in data from the lower 48 dataset.*

Page 89: https://ember-energy.org/app/uploads/2024/05/Ember-Electrici...

There are two different EIA datasets.


A lot of people speculating on the motivations behind Chinese labs open sourcing their models. The reason is simple and clear: It is the only viable commercialization strategy that is available to them. I wrote about this here: https://try.works/writing-1#why-chinese-ai-labs-went-open-an...

It's only humorous if you live in an American bubble. Knowledge sharing has always been a part of Chinese culture. Only Americans try to make it proprietary and monetize it.


Summary: they want to commoditize the complement. Western "knowledge work" is the complement to Chinese manufacturing, and they want to turn that knowledge work into a low-priced commodity via open LLM models.

I've heard this before, always accompanied by a several thousand word blog post. But frankly it sounds like it's overcomplicating the issue. Why would you try to turn something into a commodity when instead you could turn it into a trillion dollar industry and win?

The goal has always been clear:

1. Release open models to get your name out

2. Then once you feel you have name recognition release even stronger models but keep them proprietary. Qwen is clearly at this phase.

3. Keep releasing open models because it's good publicity but never your SOTA models (e.g. Google's Gemma).


That's a fair point. That probably makes more sense, especially when viewed from a company-specific perspective. Each individual actor probably has much more to gain by trying to actually compete than by trying to commoditize the complement.

If viewed from a national perspective, then the decision calculus could get more confusing. I can imagine that commoditizing LLMs might cost substantially less than trying to be a leader in the space. Of course, there is also less to gain in commoditizing LLMs versus being a leader.

I'm not sure, though, and you bring up good points.


This is not a contradiction. My limited personal experience is that I wrote code under OSS licenses primarily because of my past communist beliefs and my current left-wing, redistribution-of-wealth point of view. This is not to offer the simple equation "communist China is not interested in money", but it is also hard to believe there is no cultural connection among these things. Individual Chinese people want to win, but they also have a different view of what the collective means, compared to the US.

There is also the obvious fact that at this moment China is more interested in winning technologically in AI than economically, since, I believe, they collectively realized before many others that LLMs in their current form will eventually be commoditized in the long run. One could assume that a breakthrough could give some lab a decisive advantage, but so far we have witnessed a different reality: it looks like AI is not architecture-bound (as LeCun and others want us to believe, though so far they have misinterpreted LLMs at every step) but GPU-bound, and the data-boundedness is both a common ground for all and surpassable via RL in many domains. So, if this is true, it is not trivial for any single lab to do much better. And indeed, as far as we have observed, anyone with enough engineers, GPUs, and money can ship frontier models, and in China even labs with far fewer GPUs can still do it at a SOTA level.

For me, an Italian, this is also a protective layer. After Trump, the US looks like a very unstable partner to rely on exclusively for a decisive technology, and given that Europe is slow to put money into this technology to have frontier capabilities at home, China is a huge and shiny plan B for us.

The strings attached by the US to deep partnerships are things like trade/commerce, militarily mutual advantages (bases on euro soil from which we will help protect you), not to mention the close cultural and ancestral ties we share.

The strings attached by the Chinese govt to deep partnerships are not so benign.


Truth, China is the frontier in open models now.

We are at the point where uncontrolled capitalism collides with humanity.

I do wonder where we go from here.


It's not necessarily capitalism; I personally believe any system that drives progress would cause this in one way or another. My prediction is that the birth-rate decline will accelerate further. There's going to be some kind of universal basic income in many places, like the one Ireland made for artists. However, it probably will not be enough to feed a family, and therefore we will see birth rates decline further. That's because we evolved to prioritize resources over reproduction, and we are becoming more efficient, which means fewer people are needed to sustain the same amount of resources.

The Chinese read Marx and decided the only way is to overcome the limitations of capitalism through saturation of its potentialities under the rule of the workers' party.

It's humorous only because your expectations of China and the US are formed by Western propaganda.

Distillation helps for sure.

Maybe open source == communism

Good ol' Steve "Developers! Developers! Developers!" Ballmer said so a long time ago. What a visionary!

But China is not communist even though the ruling party has the word in its name.

The Democratic People's Republic of Korea would like a word.

What makes you think that China ever gave up its communist goals? I personally see that everything they do aims toward that goal: the one-child policy, the huge amounts of empty apartments they build, the stuff they produce for almost free, the fishing... Open-sourcing the models perfectly fits that culture too; it's the means of production.

The one-child policy died a long time ago. Also, the accumulation of wealth by connected politicians and businesspeople flies in the face of what communism is supposed to stand for.

There is a reason real estate values in popular cities have skyrocketed, and it's not due to the locals getting wealthier. It's where Chinese and other oligarchs put their ill-gotten wealth (well, besides Bitcoin).


The one-child policy did not die; it just morphed into the three-child policy, which is still a form of family planning and would probably still fine people for having more than three kids.

> The one-child policy died a long time ago.

True, but as far as I understand it died because birth rates got too low. So they replaced it with a two-child policy and later with a three-child policy.

> Also, the accumulation of wealth by connected politicians and businesspeople flies in the face of what communism is supposed to stand for.

Yeah, I am sure there's a lot of cases for that. But as far as I know the amount of billionaires has started declining in China, and I don't see how that means that they as a country moved away from the goal, it just means there's issues

> There is a reason real estate values in popular cities has skyrocketed, and it’s not due to the locals getting wealthier.

I don't know about that; you could be right. A Google search for real estate prices in China reveals a lot of news articles saying they are going down, though.

> It’s where Chinese and other oligarchs put their ill-gotten wealth (well, besides Bitcoin).

Wouldn't be surprised if rich people in China invest in real estate. They don't have free capital flow, so it's not easy to invest abroad, and real estate becomes an obvious choice. Bitcoin is banned in China for that reason too.

But again, as far as I know that does not mean the country moved their goals of trying to reach communism one day


> I don't see how that means that they as a country moved away from the goal, it just means there's issues

They're further from Communism than they've ever been since the PRC was founded. The gap between rich and poor is growing there, not shrinking.

> A google search for real estate prices in china reveal a lot of news articles how they are going down though.

They're investing outside China (Vancouver, Toronto, NYC, London, Sydney, Melbourne, etc.) because their assets are safer there (these countries all have strong property protection laws). Like Bitcoin, freedom of capital flows may be restricted, but the wealthy seem to be evading these restrictions with impunity.


> They're further from Communism than they've ever been since the PRC was founded. The gap between rich and poor is growing there, not shrinking.

I suppose it depends on what time frame you look at, it's shrinking since 2010, but inequality rose more than that in the 80s: https://www.theglobaleconomy.com/China/gini_inequality_index...

However, that's not my point - I did not mean to say that they are going to be successful but rather that it still appears to be a long term goal for them.

> Like Bitcoin, freedom of capital flows may be restricted, but the wealthy seem to be evading these restrictions with impunity.

I don't know about that, without any source of data I guess I just have to take your word for it. I would not be surprised if you were right in this case though.


China is a ruthless capitalist country managed by an authoritarian regime. Planning and lack of respect for the individual or the rule of law are not communist per se.

> Planning and lack of respect for the individual or the rule of law are not communist per se.

They just happen to be a feature of every single country that's attempted communism to date. Total coincidence.


And? Fascism does it, too. Authoritarian rule, such as monarchy, does it too.

Oh i’m fully aware of that lol

communism is a goal, capitalism is a stage

Nah, open source means those who do the work own the result. It's supercapitalism.

I don't think that's right; the models and the GPUs are the means of production.

In capitalism the people with the capital get the profit, not the people who do the work. However, workers are said to benefit too, through their salary, just less so.


The reason regular-capitalism worked is that all production used to depend on workers bottlenecking the free flow of capital by demanding salaries in exchange for their labor. Now that we've removed that obstacle, capitalism demands workers seize the means of production in order to maintain the status quo. Hence, supercapitalism.

regular capitalism works but now that the means of production are not factories, the workers have to become more entrepreneurial. Then they will control their destinies.

workers seizing the means of production is by definition socialism and not capitalism though, that's the whole idea behind socialism

You miss the point: we advertise the change as workers becoming part of the owner class and realizing all of the economic gains of their work, thus supercapitalism. Don't use the "s" or "c" words.

Is it just me, or does it feel like there have been a lot more security breaches + bugs since AI?

Did anyone see what happened to Figma's stock? Its crazy that just an announcement from Anthropic can move the market.

This has been posted before, but PlanetScale also has a great SQL-for-developers course:

https://planetscale.com/learn/courses/mysql-for-developers


There is for sure a "second brain" product hiding in plain sight for one of the frontier AI companies. Google/Gemini should be all over this right now.


For fun I ran this query in AI:

> How many impoverished American children could you feed for the cost of one F-35 fighter jet?

Here is the answer:

> Using a rough estimate of $110.3 million for one F-35A and about $3,500 per child per year to cover food assistance, that would feed roughly 31,500 children for one year

MPACGA -- Make Poor American Children Great Again.


The cost to operate a single jet is $6-7 million a year, so the total cost over its 30-40 year lifetime would be closer to $400m :(
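The arithmetic in the two comments above can be checked back-of-the-envelope. All figures here are the rough estimates quoted in the thread, not authoritative numbers; the lifetime cost uses the upper ends of the $6-7M/year and 30-40 year ranges, which is roughly what "closer to $400m" implies:

```python
# All values are rough estimates quoted in the comments above (assumptions):
unit_cost = 110.3e6           # flyaway cost of one F-35A, USD
food_cost_per_child = 3_500   # assumed annual food-assistance cost per child, USD
annual_operating = 7e6        # upper end of the $6-7M/year operating estimate
service_years = 40            # upper end of the 30-40 year lifetime estimate

children_fed_one_year = unit_cost / food_cost_per_child
lifetime_cost = unit_cost + annual_operating * service_years

print(round(children_fed_one_year))     # -> 31514 children fed for one year
print(f"${lifetime_cost / 1e6:.0f}M")   # -> $390M total lifetime cost
```

At the lower ends of the ranges ($6M/year over 30 years) the lifetime total drops to roughly $290M, so the "~$400M" figure is the pessimistic end of the estimate.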


There is some unintentional good marketing here -- the model is so good its dangerous.

Reminds me of the book 48 Laws of Power -- so good its banned from prisons.


Unintentional? This sort of marketing has been both Anthropic's and OpenAI's MO for years...


Agree. I think they're intentionally sitting on the fence between "These models are the most useful" and "These models are the most dangerous".

They want the public and, in turn, regulators to fear the potential of AI so that those regulators will write laws limiting AI development. The laws would be crafted with input from the incumbents to enshrine/protect their moat. I believe they're angling for regulatory capture.

On the other hand, the models have to seem amazingly useful so that they're made out to be worth those risks and the fantastic investment they require.


They should pick a lane because it’s not very believable if you put these things into defense systems and in the next minute claim that humanity is existentially threatened. Either you’re lying, or ruthless, or stupid.


The new Power Mac® G4 with Velocity Engine®. So powerful, the government classifies it as a supercomputer and a potential weapon.




Oh no, pls don't ask about our product, its too good, its so X-Treme, it's Dangerously Cheesy


Off topic, but that site has really nice design


Mh, I couldn't read it due to the huge contrast and had to switch to reader mode, so...


I personally find it to be perfectly readable. I've heard of people with issues with white text on a black background, but I don't fully understand it. Do you have astigmatism?


I do, although my astigmatism is pretty light and I wear glasses for it


What colors were you seeing? It's light white text on a black background for me-- both super common and plenty readable.


yeah same. It gives me a bit of a halo effect on letters, making it much harder to read (even w glasses). My astigmatism is pretty light and I wear glasses but it's still difficult to read for me


Really? I generally very much like to have a lot of contrast, but too much can definitely hurt my eyes.


I mean, I'm not a designer but it was interesting enough to call out.


For those curious about how sama got to where he got and stayed on top for so long, I recommend you read the book: The Sociopath Next Door by Martha Stout.

I am fairly confident when I say this -- sama is a sociopath. I don't know how anyone with solid intuition could even come to any other conclusion than the guy is deeply weird and off-putting.

Some concepts from the book:

> Core trait: The defining characteristic is the absence of conscience, meaning they feel no guilt, shame, or remorse.

> Identification: Sociopaths can be charming and appear normal, but they often lie, cheat, and manipulate to get what they want.

> The Rule of Threes: One lie is a mistake, two is a concern, but three lies or broken promises is a pattern of a liar.

> Trust your instincts over a person's social role (e.g., doctor, leader, parent)

Check and check.

OpenAI is too important to trust sama with. He needs to go. In fact, AI should be considered a public good, not a commodity pay-as-you-go intelligence service.


I was with you right up until the final paragraph, but this made me do a double take:

> OpenAI is too important to trust sama with.

...wat? They made a chat bot. How can that possibly be so existentially important? The concept of "importance" (and its cousin "danger") has no place in the realistic assessment of what OpenAI has accomplished. They haven't built anything dangerous, there is no "AI safety" problem, and nothing they've done so far is truly "important". They have built a chat bot which can do some neat tricks. Remains to be seen whether they'll improve it enough to stay solvent.

The whole "super serious what-ifs" game is just marketing.


Yeah the whole fearmongering is clearly just marketing at this point. Your LLM isn't going to suddenly gain sentience and destroy humanity if it has 10x more parameters or trains on 10x more reddit threads.

I'm not even sure we're any closer to AGI than we were before LLMs. It's getting more funding and research, but none of the research seems very innovative. And now it's probably much more difficult to get funding for anything that's not a transformer model.


> I'm not even sure we're any closer to AGI than we were before LLMs.

I mean this is very obviously untrue. It'd be like saying we aren't any closer to space flight after watching a demonstration of the Wright Flyer. Before 2022-2023 AI could barely write coherent paragraphs; now it can one-shot an entire letter or program or blog post (even if it's full of LLM tropes).

Just because something is overhyped doesn't mean you have to be dismissive of it.


In hindsight there's an obvious evolutionary pathway from the Wright Flyer to Gemini/Apollo/Soyuz, but at the time in 1903 there absolutely was not, and anyone telling you so would be a crank of the highest degree. So it may turn out that LLMs have some place on the evolutionary path to AGI, or it could turn out they're a dead end like Cayley's ornithopters. Show me AGI first, then we can discuss whether LLMs had something to do with it.


In order to get to space, you must first be capable of flight through the atmosphere. That should be apparent to anyone even then because the atmosphere is in between space and the ground.

Regardless of whether spaceflight is still 1000 or 100 or 50 years away, you are still closer than you were before you demonstrated the ability to fly.


Point is that LLMs could be a local minimum we are now economically stuck in until the hype wears off.


Or we could be stuck here for decades pending a breakthrough nobody alive today can even conceive of, or we could be compute limited by a half dozen orders of magnitude. Or it could happen next week. That's the nature of breakthroughs--you just can't have any idea when or how (or if) they'll happen.


I suspect there's some other category, which isn't really a sociopath and isn't really a not-sociopath, which we don't have a good definition for.

We only say a lot of CEOs are sociopaths because they're in that third category we haven't named, where they're very good at manipulating people, but also can feel conscience, guilt, remorse, etc, perhaps just muted or easier to justify against.

E.g. if you think you're doing something for the betterment of mankind, it doesn't really matter if you lie to some board members some year during the multi-decade pursuit.


That's not a third category, that's just a sociopath as seen by themself.


I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.

Whereas the people in the category I’m describing might feel those things, but prioritize those feelings far below the benefits of achieving what they set out to achieve.


> I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.

Yes that is the core trait I highlighted in the 1st bullet.


> I suspect there's some other category, which isn't really a sociopath and isn't really a not-sociopath, which we don't have a good definition for.

There is -- I call it "corpo sociopath." The corpo sociopath really comes out in the workplace, less so in personal life.


I think it’s learned sociopathy. People who start out knowing that a particular behavior is wrong, but over time are conditioned to feel like it’s fine, at least in certain situations (the corporate world being a prime example).


It's fairly obvious sociopathy is a prerequisite for top CEO jobs. Some just hide it better than others or have better PR people


Aaron called him a sociopath back in the 2010s.

