Hacker News | past | comments | ask | show | jobs | submit | Kamq's comments

Oh come on. They're obviously using the word "science" in this context as a shorthand for the institutions and processes we've set up to do research. Mostly because that's too many words for a title and nobody has come up with a catchy name that's not politically coded. It's also pretty normal usage of the word out in the wild.


What Smolin is trying to point out are the meta-principles which underlie the feedback loop of the Scientific Method itself. Once those principles are adhered to, the loop becomes common sense. This is because all of "doing Science" consists of human activities where we discover knowledge through three means, viz. 1) Authority (textual/oral), 2) Reasoning, 3) Experience. All three have to be considered to come to a definite conclusion. The submitted article ignores this trifecta and seems to conflate "Empiricism" solely with external validation.


> This should be the only entity that's approved to have nuclear weapons.

This sounds like someone who has never been outside of a heavily bureaucratized regime. People don't get "approval" for things; they just do them.


> Just because I'm a middle aged male I see trucks, and beer, and football advertisements all day long

Well, yeah. Those companies will pay to send their ads to all middle-aged men. They could slice and dice further to get better demographics, but they don't think it's worth it.

Google's business isn't to slice and dice the demographics to show you better ads. It's to slice and dice the demographics in any way that the advertisers will pay for.

Because the people who are willing to pay money are, ultimately, the customers.


> Go's simplifications often introduce complexities elsewhere

It does occasionally, although I'll push back on the "often". Go's simplifications allow most of the codebase to be... well... simple.

This does come at the cost of some complexity on the edge cases. That's a trade off I'm perfectly willing to make. The weird parts being complex is something I'm willing to accept in exchange for the normal parts being simple, as opposed to constantly dealing with a higher amount of complexity to make the edge cases easier.

> There's no free lunch here

This I'll agree with as well. The lunch is not free, but it's very reasonably priced (like one of those hole in the wall restaurants that serves food way too good for what you pay for it).

> the compromises Go makes to achieve its outcomes have shown themselves to be error-prone in ways that were entirely predictable at design time.

I also agree here, although I see this as a benefit. The things that are error-prone are clear enough that they can be seen at design time. There's no free lunch here either: something has to be error-prone, and I like the trade-offs Go has made on which parts those are. Adding significant complexity to reduce those error-prone places has, in my experience, just increased the surface area of the error-prone sections of other languages.

Could you make the case that some other spot in design space is a better trade-off? Absolutely, especially for a particular problem. But this spot seems to work really well for ~95% of things.


I dunno, I'm not willing to overlook null values or default zero-values that easily. Those cause me problems all the time, and they are not meaningfully simpler than the alternative (explicit assignment).

Or mutability modifiers. Yes, that's an extra feature, and there's an undeniable appeal to having fewer features. But being able to flag things as immutable will make all the code you deal with easier in future.

Or consider how they left out generics for over a decade. It simplified the language in some ways, sure, but when you needed generics, you had to use reflection, which is far more complicated than generics. Even macros, unpopular as they are, are better than codegen.

Again, I understand the appeal of minimalism, but a cost/benefit analysis of any of these features shows them to be a massive net gain. A blanket policy of "no, we can't have nice things" is needlessly austere, and it leaves everyone worse off imo.


> Equal in voting rights. Gerrymandering has been perfected by Republicans. Through that they manage to dilute votes of the opposition.

This thread is talking about the Senate. The Senate isn't gerrymandered; both of a state's Senate seats are statewide races.

If you want to view it that way, you can view the Senate as "pre-gerrymandered". But the last time that was an option was in 1959, and both of those admissions were just "the entire area the US owned that wasn't a state yet." To get Senate gerrymandering, you have to go back to 1912 and the admission of New Mexico/Arizona.


> If you want to view it that way, you can view the senate as "pre-gerrymandered".

That is quite explicitly the history of the US Senate (and House), FWIW.

The Connecticut Compromise was reached to give low-population states outsized legislative power in the Senate. This is the main reason the Senate exists.

Building on that, the 3/5ths compromise was reached as part of this to give slave states outsized legislative power in the House.

The state of Maine used to be part of Massachusetts, but it was later set up as an independent state in order to increase the number of anti-slavery states in the Senate (the Missouri Compromise).


Gerrymandering can affect voter sentiment and trigger polling location changes during redistricting, both of which can affect voter turnout[1][2][3] (though the research doesn't seem conclusive on the effect).

And thinking about it more, though I haven't seen if there are studies on it: there are probably manpower/fundraising effects from gerrymandering.

If you're able to protect your political power in one area that probably better enables you to amass resources to use in the area you can't gerrymander.

But all that said, both parties practice gerrymandering, and I don't think there's strong evidence that current gerrymandering gives either major party a significant advantage at the national level.

[1] https://da.lib.kobe-u.ac.jp/da/kernel/90008864/90008864.pdf

[2] https://electionlab.mit.edu/articles/gerrymandering-turnout-...

[3] https://stateline.org/2022/05/20/check-your-polling-place-re...


> On a percentage basis, over three times as many districts were competitive in states where independent commissions drew maps as in states where Republicans drew maps.

https://www.brennancenter.org/our-work/analysis-opinion/how-...


That’s just confusing cause and effect. If your seats are safe, you have no reason to agree to forming an independent commission. The same is true in both heavily blue and heavily red states. Are districts more competitive in states where Democrats draw maps? I don’t think so.


This totally ignores values and motivations, and I would argue that only one group in your comment values winning at any cost.


I don’t even know which group you mean, but “my group has good values and motivations, but the enemy group just values winning at any cost” is exactly what a total partisan who values winning at any cost would say.


The evidence is that independent commissions drawing maps makes for more competitive districts. Which party is most opposed to such commissions? Which party is gleefully dismantling all accountability and oversight positions and departments? Which party is openly inviting corruption and pardoning those they should be prosecuting?


I wonder why one party would be seeking to change a civil service that’s 90% staffed by members of the other party? I guess “democracy” means Democrats running the country no matter who wins the election, right?


First, your stats are wild. Please provide an unbiased citation.

Second, your solution was in place in the 1800s and was referred to as the spoils system. It led to bad outcomes and was rightfully abandoned. Your beef is with the fact that educated people tend to choose policies that you don't like (assuming your 90/10 split, which is still wild). You/the GOP have three options. First is to recognize that the policies being pursued do not attract people with education (which I consider a red flag). Second is to re-adopt the spoils system, despite it being illegal, and frankly just sort of dumb, since when the other side is in power you suffer; but at least then you never need to think deeply about making policy for the whole country instead of a subset of supporters. Third, you/the GOP self-own by tearing up all the intellectual capital and international goodwill built up over the decades without a replacement, massively reducing American influence on the world in all dimensions.


Democracy means "one person, one vote".

We all know which party is fighting tooth and nail against that on practically every issue that affects it.


Are most members of the civil service Democrats? This is the first I've heard of this.


OP asserts this unsourced. While the civil service does seem to tilt toward Democrats, since it is ethics- and mission-oriented and typically requires a degree, 90/10 sounds wild in my experience.

My prior is based on experience. Most of the civilian govies are centrist, "I just want to grill" types.


That makes sense to me. This is why I suspected that attempting to claim the election was stolen would be a losing proposition; I was sadly surprised to the contrary.

Elections are run by Republicans as well as Democrats. In fact several of the key locations that Trump claimed were stealing the election from him were basically locations where the Republican party had a lock on the administration of the election. As I remind people often, when they talk about someone stealing the election, that's not a hypothetical "someone," that's Betty three houses down that has the nice flower garden and organizes the bake sale at church every month.


> And yet so many of them kinda rule the world by running the biggest corporations in the world.

Have you looked at the state of the world recently?


They used to rule the world when its state was better too. In fact, their proportion was higher.


I don’t see your point. When has the state of the world ever been good?


> Imagine not being able to get a shitty fast food job because ... Or just moved to the US and speak too weird and don't have anyone to vouch for you.

You've obviously never worked in food service.


> That’s incredible! The capability of an AI model is approximately junior level in the fields I’ve tested it in (programming, law, philosophy, neuroscience). If you’d don’t see any possible uses for the technology, keep thinking about it.

It is absolutely incredible from a technical perspective, but your next statement does not follow.

In a lot of (most?) fields, juniors are negative ROI, and their main value is that they will eventually be seniors. If AI isn't on the road to that, then the majority of the hype has been lies, and it's negative value for a lot of fields. That changes AI from a transformative technology to an email summarizer for a lot of people.


So on one hand, you're kinda right. HN is filled with exaggeration (imo often justified) from people venting because they have to deal with the bad parts of this system all day. That seems natural in a dev filled space.

But I don't think your comment is fair.

> We’re told of the engineer who isn’t hired by Google because he can’t invert a binary tree. Everyone else piles on and decree that, yes indeed, you cannot measure developer efficiency with a Leetcode or whiteboard problem.

Because this is a bad way to judge engineers. Or, rather, it's a great test precisely when the candidate doesn't already know how to invert a binary tree: most of the job is figuring out something you don't know yet and doing it. Giving an engineer a random Wikipedia page on an obscure algorithm and having them implement it is a great interview tactic. Having them regurgitate something common is bad; there will be a function for it somewhere, and you just need to call it.

> Meanwhile in the real world, hordes of awful engineers deliver no story points, because they in fact, do nothing and only waste time and lowers morale.

I agree with you on this one. Those people need to be fired. That doesn't mean story points are a good metric: often 90% of long-term value can come from the kind of people who are like Tim, and losing them can destroy projects. Just because something bad is happening doesn't justify killing 90% of a team's value.

The only thing I've seen that works is to give team managers more discretion and rigorously fire managers who regularly create poor-performing teams (you often have to bump manager pay for this; that's fine, good managers are worth their weight in gold).

> Meanwhile in the real world, each job opportunity has thousands of applicants who can barely write a for loop. Leetcode and whiteboards filter these people out effectively every day.

You do need to filter for people that can code. That doesn't mean filtering for inverting binary trees is a good idea. Having people submit code samples that they're proud of is a much better approach for a first filter.

> Meanwhile in the real world, metrics on delivery, features and bugs drive company growth and success for those companies that employ them.

Bullshit. Basically all companies use metrics, and most companies are garbage at delivering useful software. A company being years behind and a million over budget on a software project, and eventually delivering something people don't want, is so cliché that it's expected. And these companies regularly get outcompeted by small teams using 1% of the resources, as long as the small teams give half of a shit. In fact, if you want my metric for team success, the percentage of the team that actually cares is a good one.

You're proposing a solution with a <20% success rate. Don't act like it's a gold standard that drives business value to new heights. With the system as it is today, most companies would be better off getting out of software and having a third party do it for them.


My wider point is not that the way companies are run is perfect and that we should stop the “innovators” (to quote the sibling comment). Each of these examples speak of corporate dysfunction, but we never give any weight to the constraints that force them in place. Leetcode is bad, but it’s bad in the sense that it errs too heavily on filtering out false negatives - the cheaper of the two errors. The alternative is worse.

Giving Tim the benefit of the doubt in this story, it still holds true that for every extraordinary and invisible superstar like Tim there are 99 under-performers who are indistinguishable from him.

We need to empathise with our managers and the processes in our organisations to understand their purpose and how they came to be.

We, software engineers, keep picking out singular data points as evidence of a flawed and unfair world that goes against our self-inflated egos.

The brew guy inverting the binary tree and Tim being great do not invalidate whiteboards and story points as general practices.

To your final point, the best organisations that I’ve worked with used metrics in a very effective way (mostly in start ups). The worst did too. Just because some do it poorly, does not mean that it’s bad across the board.

What is tiring is the unfairly low expectation of the quality of evidence demanded of anti-establishment notions in software development before they are taken as gospel by this community.

And, in my experience, the people who are the strongest proponents of sidestepping or dismantling these processes overlap strongly with those who also do not deliver value to their teams.


> Leetcode is bad, but it’s bad in the sense that it errs too heavily on filtering out false negatives

But it doesn't. It filters for something orthogonal to development, which is the ability to obsess over clever algorithmic solutions. OK, my company does HackerRank instead of LeetCode; maybe LeetCode is magically better, but I'm not seeing anything that tells me someone who grinds LeetCode is actually going to be useful on my team.

Look, you want an idiot check to make sure someone is actually able to code, fine. That's probably a good idea. But the number of stories on here about people being turned away because they hadn't run into a particular tricky algorithm problem is concerning.

> Giving Tim the benefit of the doubt in this story, it still holds true that for every extraordinary and invisible superstar like Tim there are 99 under-performers who are indistinguishable from him.

But they aren't indistinguishable. The author of the blog post was perfectly able to distinguish them. That's my point. There are plenty of ways to be able to distinguish them, this metric just isn't one of them. Because it's a bad metric.

It may not be legible to the higher-ups, but good lower-level managers have no problem distinguishing good unconventional people from under-performers.

> We need to empathise with our managers and the processes in our organisations to understand their purpose and how they came to be.

I do empathize with the managers, at least the lower level ones. That's why I advocated for putting them in complete control and giving them unilateral firing privileges and increasing their pay.

> the best organisations that I’ve worked with used metrics in a very effective way (mostly in start ups). The worst did too.

You're really making it sound like metrics (at least as traditionally practiced in software) are orthogonal to being a good organization. If that's true, we might want to re-think how much time we spend on them and how much money we spend capturing them.

Now, if you want to use profit, adoption, or user satisfaction as metrics, I'd love to talk about that, but I've seen nothing in my experience in the corporate world that tells me that the way we're currently using them is even net positive value.


It only appears that HackerRank/Leetcode isn’t good at filtering because you’re viewing it from your perspective, and not the perspective of the entire population that is tested. To you, the predictive power at the top tail end of the distribution is low, because you’re thinking of two strong developers Alice and Bob. Alice happens to know algorithm X and would pass the test, whereas Bob does not. But that’s not the population we’re testing. Think more along the lines of Alice and Bob and your grandmother were the test population. It’s absolutely fantastic at filtering the lower 95% of applicants because they will _never_ be able to pass. Yes, inadvertently 2.5% of “good developers” are filtered too, but that doesn’t matter to the outcome of your company. They just want someone competent, and they don’t care if it’s Alice or Bob.

The same logic sort of applies to Tim and his performance. The bias of having an imperfect metric is probably much better than the bias of letting an army of middle managers go with their cut. Besides, it doesn’t have to be a hard filtering function at this stage, but a metric to indicate that we need to look a little closer at Tim


> It’s absolutely fantastic at filtering the lower 95% of applicants because they will _never_ be able to pass.

This is the part I disagree with. It hasn't been true for years; anyone with the free version of ChatGPT can pass a HackerRank screen today.

> but that doesn’t matter to the outcome of your company

It does for mine, because we've hired all of the good developers that get through the process you're describing and it isn't enough. We actively moved away from what you're describing and turned the interview into a 2-3 hour pair programming session where the person completes a mini version of a ticket.

This has much more predictive power than what you're describing.


> This is the part I disagree with. It hasn't been true for years. Anyone with the free version of ChatGPT can pass a hacker rank today.

It certainly still is true today. Anybody who is sufficiently motivated to cheat can pass it. It was true prior to ChatGPT, and it still remains true today. And yet they don’t. Most people completely fail these screens

> It does for mine, because we've hired all of the good developers that get through the process you're describing and it isn't enough.

Then your industry is atypical in the type of applicants that you are getting. So to accommodate you’ve had to increase your false positives to reduce false negatives. That’s completely fine if it’s what you need to do, but it’s not the typical experience for a tech company.

We also do a pair screen after the code test and we still reject around 80% who make it to that stage. How do you scale interviewing everyone if you don’t pre screen?


> Then your industry is atypical in the type of applicants that you are getting

Based on the quality of candidates that get through at other companies, I'm guessing our problem isn't atypical. Or at least, good devs often aren't getting through their pipelines at all. It's possible that in trying to reduce the false positive rate, they screened out all of the actual positives, but that doesn't paint a good picture of the industry.

> How do you scale interviewing everyone if you don’t pre screen?

We do pre-screen. The fact that they haven't encountered a particular tricky algorithm isn't a problem. For submissions where the syntax is valid, a dev at my company does a code review.


> Rust's ? operator - effective and ergonomic

It is, but it's also subtle, and if you want branches (especially sad-path branches) to be explicit, that's not a good thing.

