An interesting article and it does seem like people rushed to embrace NoSQL and are now trying to force it into some kind of consistency after the event - not a bad thing, incidentally - there's a lot of interesting work here (Vector clocks, CRDTs, etc.).
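For anyone who hasn't run into CRDTs before, the grow-only counter is about the simplest example of the idea: merges are commutative, associative, and idempotent, so replicas converge regardless of delivery order. A minimal sketch in Python (names and structure are my own, not from any particular library):

```python
# Minimal grow-only counter (G-Counter) CRDT sketch.
# Each replica only ever increments its own slot; merging takes the
# element-wise max of the slot maps, so applying merges in any order,
# any number of times, yields the same converged state.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> that replica's local count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        # Element-wise max: idempotent and order-independent.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)  # a now reflects both replicas' updates
b.merge(a)  # b converges to the same state
print(a.value(), b.value())  # both print 5
```

Counters like this (plus CRDT sets and maps) are exactly the kind of after-the-fact consistency machinery the article is circling around.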
One thing that surprised me though was the lack of a key player: Amazon. Their Dynamo paper was hugely significant and as a company they use eventually consistent stores for a whole swathe of products at scale.
Why mention Facebook and Google but omit this other major player, especially as their experiences tell a different story?
Amazon estimated that each additional 1 ms of latency costs them a few million dollars a year and also reduces the rate of returning users. For low latency there is still nothing better than eventual consistency, so they may simply be driven by the bottom line.
Also, speaking from my experience as a merchant on Amazon: I have hundreds of thousands of items in inventory that need near-realtime updates as prices and stock change constantly, which adds up to a few million bulk updates a day. I can't see how Amazon could handle fast imports from a horde of merchants like me on any SQL system.