Hacker News: jlokier's comments

Apple laptop CPUs have hardware memory compression and exceptionally high memory bandwidth for a CPU, and with their latest devices, very high storage bandwidth for a consumer SSD, so the equation is very different from the old DOS days.

Even though VMware Fusion (for Mac) is free* and very good, Broadcom is pushing me away to Parallels for silly reasons.

The reason: No matter how I try, even as a registered customer, I can't find a way to download current versions.

When I run VMware Fusion it tells me there's a new version, with bug fixes, support for newer macOS, etc. Would I like to download it? (Months ago it said the URL to check for a new version was broken.) Sure, I click, update please. It takes me to a Broadcom page where I'm supposed to sign in or register, give it my personal and work details, then I can download the new version.

I login because I already have an account. In my account, I can see the older versions of VMware Fusion, including the one I'm already running, but the later two versions aren't showing. Even the minor-version increment from the one I'm using isn't showing. I click around until I find where current ones should be, it shows me files in a table. I click the file and it tells me: Not yet, the account is awaiting verification. Come back in a few days.

It's been stuck like that for months.

But wait! I used this account to get VMware Fusion a year ago. It still lets me download the version I'm using. The account was already verified! Why does it require fresh account verification just to get a minor-version increment, with bug fixes, of a free product?

Last time I went through this, I ended up using Homebrew. I had a legit Broadcom/VMware account, had signed the agreement to download the update, but Broadcom's site didn't work. So I was delighted to find it in brew, with vastly better packaging than Broadcom's. Unfortunately the brew package is now disabled.

Before that, I had to sign up with Broadcom a second time, because the first account appeared to lose its access to VMware Fusion. I don't know why. Before that, I had to sign up the first time with Broadcom, even though I already had a VMware account as a paying customer of VMware Fusion.

It's been a great product, which I used to pay for and would again. I've used it for over 10 years. It's free now, and still a great product.

Yet I'm looking at switching to Parallels just because Fusion's "free" download process is too broken to use.

I can't imagine Broadcom is making any money from blocking downloads of the supposedly free product. It was their decision to make it free! It must be disheartening to be a developer on VMware Fusion if you know this is going on.


The worst part about the convoluted download process is that it seems someone has actively been making it more difficult since the first iteration, and I can't for the life of me understand why. Is it being done by someone who hates Broadcom? Or perhaps struggles with mental illness? Or is it due to a mix of micromanagement and extreme incompetence? I can't remember seeing something this bad since the horror shows people managed to create with Flash and Silverlight.

Probably just an unfortunate side effect of reusing the same systems used for restricted subscription downloads for free product downloads, combined with underfunding for the free product lines.

I've been able to download free Fusion and Workstation, but my ability to download existing versions of perpetually licensed VMware products was removed the day my (non-renewable) maintenance subscription expired, and they've also paywalled the update servers, again even including older updates I'm entitled to under the perpetual license and my (involuntarily) expired subscription.

Working on multiple branches in parallel is literally what Git was created for, and how it's been used since the very first version 20 years ago.

Other commenters mentioned worktrees, which let you check out different branches at the same time from a single local repo. That's convenient, but not required.

Git has always supported "fast cloning" local repos as well. You just "git clone" from one directory to another. Then they are independent and you're free to decide what to merge back.

These days, agents can also fork their containers or VMs as often as required too, with copy-on-write for speed.

So that's four ways to work on multiple branches in parallel using Git that we already use.
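The first two of those can be sketched in a few commands. This is only a demo, with made-up paths (/tmp/demo) and branch names, not anything from the comments above:

```shell
# Two ways to get parallel checkouts from a single repo.
set -e
rm -rf /tmp/demo && mkdir -p /tmp/demo/repo && cd /tmp/demo/repo
git init -q -b main
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m initial

# 1. Worktree: a second checkout that shares the same object store.
git branch feature-a
git worktree add ../wt-a feature-a

# 2. Local "fast clone": an independent repo; git hardlinks the objects,
#    so it's cheap. Merge back (or not) whenever you choose.
git clone -q . ../clone-b
```

Either way you end up with two directories you can build and edit independently; the worktree shares refs and objects with the original, while the clone is fully separate until you fetch or merge.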


> You can just mess around and make it presentable later, which Git never really let you do nicely.

I'm surprised to read that, because that's how I've always used Git (and GitHub).

That's what I've understood to be good practice with Git, and it was liberating compared with what came before. One of the nicest things about Git is you can throw things in locally without worrying about how it looks, and make it presentable later.


I also did that with git, but it's no comparison in ergonomics. For instance, "move this hunk two commits up" is a task that makes many git users sweat. With jj it's barely something that registers as a task.

You sweat because you are working with the CLI. Git is intrinsically "graphical". Use a good GUI client or higher level interface (maybe jj) to manipulate git graphs --- stop worrying about "how" (i.e. wrangling with CLI to achieve what you want) and focus more on "what".

GitButler from OP also allows you to do this incredibly easily. This and stacked commits is IMO their main selling point.

> For instance, "move this hunk two commits up" is a task that makes many git users sweat.

Citation needed. You split the commit any way you like, e.g. with the mouse or using cursor movements or by duplicating and deleting lines. Then you move it with the mouse or cursor or whatever and squash it into the other commit. Maybe some people never intend to do it, but those probably also don't want to learn jj. I guess this is more of a selection bias: those who care about history editing are also more likely to learn another VCS on their own.


I'm confirming the sentiment is accurate. Background: using Git (involuntarily) since 2010, did my fair share of reading its source, put honest effort into reading its man pages. Jujutsu _is_ a revelation and I'm moving to it every time I'm able to: the git repository stays the same, it's just jj that runs it now.

If you've ever tried to have multiple WIP features merged in a Git working copy, I have great news: with Jujutsu the complexity of the workflow grows at most linearly with the number of branches, if at all; it's almost trivial. Otherwise I very much encourage you to try it: in and of itself the workflow is extremely effective, it's just that Git makes it needlessly complex.


I'm one of the git users who would sweat. Can you explain a bit (or link relevant docs) how I might split a commit up, and move it?

Here are two "raw" methods:

1. Use "git rebase -i commitid^" (or a branch point, tag, etc.), ideally with the editor set to Magit; set that commit to "edit" (single key 'e' in Magit) and let the rebase continue. Then do "git reset -p HEAD^" and select the hunks you want to remove from the first commit, "git commit --amend", then "git commit -a" (add -c if useful, e.g. to copy the author and date from the previous commit), then "git rebase --continue" to finish.

2. Same, but use "git reset HEAD^" (add -N if useful), then "git add -p" to select the hunks you do want to include in the first commit.

Afterwards you can do the "git rebase -i" command again if you want to reorder those commits, move them relative to other commits, or move the split-out hunks into another existing commit (use the 'f' fixup or 's' squash rebase options).

After doing this a few times and learning what the commands actually do, it starts to feel comfortable. And of course, you don't have to run those exact commands or type them out, it's just a raw, git-level view. "git rebase -i" and "git add -p" / "git reset -p" are really useful for reorganising commit hunks.
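The second method can even be driven non-interactively, which makes it easier to see what each step does. This is only a sketch: the repo path, file names, and commit messages are invented for the demo, GIT_SEQUENCE_EDITOR with sed stands in for hand-editing the rebase todo list, and rewriting the file stands in for "git add -p":

```shell
set -e
rm -rf /tmp/split-demo && mkdir -p /tmp/split-demo && cd /tmp/split-demo
git init -q -b main
git config user.email demo@example.com && git config user.name demo
printf 'one\n' > f;                   git add f; git commit -qm c1
printf 'one\ntwo\nthree\n' > f;       git commit -qam c2   # commit to split
printf 'one\ntwo\nthree\nfour\n' > f; git commit -qam c3

# Mark c2 as "edit" in the todo list (line 1), then let the rebase run.
GIT_SEQUENCE_EDITOR='sed -i -e "1s/^pick/edit/"' git rebase -i HEAD~2

# Method 2 above: undo the commit, keeping its changes in the worktree,
# then recommit in two steps.
git reset -q HEAD^
printf 'one\ntwo\n' > f;        git commit -qam c2a   # first half
printf 'one\ntwo\nthree\n' > f; git commit -qam c2b   # second half
git rebase --continue           # replays c3 on top
```

The history is now c1, c2a, c2b, c3; in real use you'd pick hunks interactively with "git add -p" rather than rewriting the file.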


Yeah, I mostly do it like that. I don't use Magit (yet? I haven't found the motivation to learn Emacs or a good tutorial for it.), but instead select the lines to stage or unstage with the cursor/mouse in my Git GUI. Also, depending on what I want the commits to look like, I duplicate the pick line for the commit first (and potentially move it).

On an unrelated note, I use @~ instead of @^, because I think of moving up and down the ancestry, not sideways; e.g. I'm more likely to want to change it to an older/newer commit than to want the second parent instead. I don't get why most tutorials show it with @^, because the focus is on the commit being an ancestor, not precisely the direct first parent, although of course for the first parent one level up, it amounts to the same thing.


It's already well explained in a sibling comment, but on a more conceptual basis: while commits are interpreted as diffs on the fly, a commit is a single (immutable) snapshot. In these terms, "splitting a commit" amounts to introducing an intermediate snapshot. With that in mind, it should become clear that in Git you create the snapshot by working from the previous or next commit (whichever suits you more), bringing the tree to the state you want it to be, and committing. (In theory you could create that intermediate snapshot from any commit, but you'll likely want one of the direct neighbours.)

The problem put simply is that git doesn't support concurrency. Even if you use worktrees, git has a global lock for repo interaction.

https://www.felesatra.moe/blog/2024/12/23/jj-is-great-for-th...


Examples I've seen in similar systems:

- Receiver tried to create a file before receiving attributes of the directory containing the file. Receiver author assumed it would always receive directory attributes first and create the directory, so it crashed.

- Receiver created a file before receiving attributes of the directory containing the file. Parent directory was created automatically, but with default attributes so the file was too accessible on the receiver when it should not have been.

- Bidirectional sync peers got into a non-terminating protocol loop (livelock) when trying to agree if a directory deep in a tree should be empty or removed (garbage collected) after synchronising removal of contents. It always worked if one side changed and sync settled before the next change, but could fail if both sides had concurrent changes.

- Mesh sync among multiple peers, with some of them acting as publish-subscribe proxies forwarding changes to others as quickly as possible merged with their own changes, got into a more complicated non-terminating protocol loop when trying to broadcast and reconcile overlapping changes observed on three or more nodes concurrently. The solution was similar to distributed garbage collecting and spanning tree protocols used in Ethernet switch networks.

- Transmission of commands halted due to head of line blocking (deadlock) on a multiplexed sync stream because a data channel was going to a receiver process whose buffer filled while waiting for a command on the command channel, which the transmitter process had issued but couldn't transmit. The fault was separate, modular tasks assuming data for each flowed independently. The solution was to multiplex correctly with per-channel credits like HTTP/2 and QUIC, instead of incorrectly assuming you can just mix formatted messages over TCP.

- Rendered pages built from mesh data-synchronised components, similar to Dropbox-style sync'd files but with a mesh of 1000s of peers, showing flashes of inconsistent data, e.g. tables whose columns should always add to 100% showing a different total (e.g. "110% (11050 of 10000) devices online"), displayed addresses showing the wrong country, numbers of devices exceeding the total number shipped, devices showing error flags yet also a "green - all good" indication, number of comments not matching the shown comments, number of rows not matching rows in a table, etc. Usually for only a few seconds, but sometimes staying on screen for a long time if the 3G network went down, or captured permanently if rendered to a PDF report. Such glitches made the underlying systems look like they had a lot of bugs when they really didn't. It completely undermined trust in the presented data being something you could rely on. All for want of a more careful synchronisation protocol.


>Receiver tried to create a file before receiving attributes of the directory containing the file. Receiver author assumed it would always receive directory attributes first and create the directory, so it crashed.

This case, and a bunch of the others, are variations on failing to correctly implement dependency analysis. I'm not saying it's easy, it is far from easy, but this has been part of large systems design (anything that involves complex operations on trees of dependent objects) for years, especially in the networking space.

Indeed, your fourth bullet gets to some of the very ancient techniques (though STP isn't a great example) to address parts of the problem.

The last bullet is very hard. Honestly, I'd be happy if iCloud and Dropbox just got the basics right in the single-writer case and stopped fucking up my cloud-synced .sparsebundle directory trees. I run mtree on all of these and routinely find sync issues in Dropbox and iCloud Drive, from minor (crazy timestamp changes that make no sense and are impossible, but the data still complete and intact) to serious (one December, Dropbox decided to revert about 1/3rd of the files to the previous October version).

The single writer case (no concurrency, large gaps in time between writers) _is_ easy and yet they continue to fuck it up. I check every week with mtree and see at least one significant error a year (and since I mirror these to my NAS and offline external storage, I am confident this is not a user error or measuring error).


Technically, SQLite's locking is NFS safe, provided NFS's implementation of fcntl() locking is working correctly.

I don't know if S3 Files implements fcntl() locking or does it correctly. But if it does, I believe SQLite should work on it correctly as well.

There have been many buggy NFS locking or caching implementations historically, which is why SQLite recommends against using it on NFS concurrently from multiple machines: https://sqlite.org/faq.html#:~:text=But%20use,time%2E

This SO reply suggests NFSv4 is better at this: https://unix.stackexchange.com/a/432519. But caveat it with this older reply: https://unix.stackexchange.com/a/1887

To the best of my knowledge (I worked a little on this long ago), on Linux even NFSv2 has done correct fcntl() locking for decades, if all the correct services are running and the options are set appropriately and it's Linux on both the client and server. But if something is not configured as it should be, then locking or caching may not work correctly.


Thanks for the clarification. It is completely impossible for WAL mode since that uses shared memory. I must have conflated that with non-WAL mode in my mind.

From https://sqlite.org/wal.html

> All processes using a database must be on the same host computer; WAL does not work over a network filesystem. This is because WAL requires all processes to share a small amount of memory and processes on separate host machines obviously cannot share memory with each other.


Why is it "wade through" if there are 10 clearly distinct but dependent commits, but comfortable if it's 10 stacked PRs instead? They are basically the same thing, presented ever so slightly differently.

I think in most teams I've worked with, the majority of developers (> 85%) barely understand what Git is doing or what things mean inside GitHub, have never seen commit history as a graph, have never run something like "git log --oneline --graph --decorate" or "--format", and have never heard of "git range-diff", which is very useful for following commit/PR/unit changes.

Personally I review using "git" itself, so I see the graph structure either way, and there's little difference between stacked PRs, commit chains in a single PR, or even feature branches, from that point of view. Even force-pushed branch updates aren't difficult to review, thanks to the reflog and "git range-diff". The differences are mainly in what kinds of behaviour the web-based tooling promotes in the rest of the team, which does matter, and depends on the team.
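As a sketch of how a force-push stays reviewable, "git range-diff" can compare the old and new versions of a series. Everything here (repo path, branch names, messages) is invented for the demo, with a saved commit id standing in for the pre-push branch tip you'd normally recover from the reflog:

```shell
set -e
rm -rf /tmp/rd-demo && mkdir -p /tmp/rd-demo && cd /tmp/rd-demo
git init -q -b main
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m base
git checkout -q -b feature
printf 'v1\n' > f; git add f; git commit -qm 'add feature'
old=$(git rev-parse HEAD)            # the branch tip before the rewrite
printf 'v2\n' > f
git commit -q --amend -am 'add feature (fixed)'   # simulated force-push

git log --oneline --graph --decorate   # the graph view of history
git range-diff main "$old" HEAD        # old series vs. rewritten series
```

The range-diff output pairs up old and new commits and shows how each one changed, which is exactly what you want when a contributor amends and force-pushes during review.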

I agree with you if you're using Graphite instead of GitHub. Having a place to give feedback and/or approval on the individual "units" (commits in a PR, or PRs in a stack) is useful, grouping dependent but distinct changes is useful, and diff'd commit evolution within each unit PR in response to back-and-forth review feedback is useful in some collaborative settings. Though, if you know "git range-diff" and the reflog, that shows diff'd commit evolution quite well.

In GitHub, people are confused by stacked PRs both conceptually and due to the GitHub UX around them. Most times when I've posted a stacked PR to a GitHub project, other people didn't realise it was stacked, and occasionally someone else has merged the tip of a stack made by me, and been surprised to see all the dependent PRs merged automatically as a side effect. Usually before they get to reviewing those other PRs :-)

People understand commit sequences in a PR, though I've rarely seen people treat the individual commits as units for review when using GitHub, unfortunately. In the Linux kernel world where Git was born, the PR flow is completely different from GitHub: Their system tends to result in feedback on individual commits. It also encourages better quality feedback, with less nitpicking, and better quality commits.


> the open arms embrace of subscription development tools (services, really) which seek to offload the very act itself makes me wonder how and why so many people are eager to dive right in

Here's a reason not in your list.

Short version: A kind of peer pressure, but from above. In some circles I'm told a developer must have AI skills on their resume now, and those probably need to be with well known subscription services, or they substantially reduce their employment prospects.

Multiple people I know who are employers have recently, without prompting, told me they no longer hire developers who don't use AI in their workflow.

One of them told me all the employers they know think "seniors" fall into two camps, those who are embracing AI and therefore nimble and adaptive, and those who are avoiding it and therefore too backward-looking, stuck-in-their-ways to be a good hire for the future. So if they don't see signs of AI usage on a senior dev's resume now, that's an automatic discard. For devs I know laid off from an R&D company where AI was not permitted for development (for IP/confidentiality reasons), that's unfair as they were certainly not backward-looking people, but the market is not fair.

Another "business leader" employer I met recently told me his devs are divided into those who are embracing AI and those who aren't, said he finds software feature development "so slow!", and said if it wasn't for employment law he'd fire all his devs who aren't choosing to use AI. I assume he was joking, but it was interesting to hear it said out loud without prompting.

I've been to several business leadership type meetups in recent months, and it seems to be simply assumed that everyone is using AI for almost everything worth talking about. I don't think they really are, so it's interesting to watch that narrative playing out.


I know it's just anecdotal, but I looked for COBOL salaries a couple of years ago, curious about this "paid well".

The salaries were ok but not good for COBOL.

Here's an anecdotal Reddit thread about it. https://www.reddit.com/r/developpeurs/comments/1ixfpsx/le_sa...


I don't think that's likely to explain jankiness. I do know my way around terminal screens and escape codes, and doing flicker-free, curses-like screen updates works equally well on the regular screen as on the alternate screen, on every terminal I've used.

It's also not a hard problem, and updates are not slow to compute. Text editors have been calculating efficient, incremental terminal updates since 1981 (Gosling Emacs), and they had to optimise better for much slower-drawing terminals, with vastly slower computers for the calculation.

