> Great, have they increased by as much as inflation?
Yes, real wages have been on the rise for the past few years. With the exception of the somewhat artificial COVID peak, median real wages are the highest on record: https://fred.stlouisfed.org/series/LES1252881600Q
Necessary stuff (houses, healthcare, education) has outpaced CPI and is generally becoming more expensive.
Unnecessary stuff (electronics, appliances, other tech) has not, and is generally becoming cheaper (planned obsolescence is another topic though...)
Unfortunately this is using BLS data that largely captures urban areas and fails to account for a large and quickly growing segment of the workforce that also tends to be lower-earning - self-employment (e.g. Uber drivers, DoorDash, gig work, contractors). This is definitely an over-estimate of real wages, a best-case scenario of sorts.
With the backdrop of it coming from the organization that is supposed to be managing inflation... :P
> Yes, it's nice that computers and phones are super cheap and powerful.
It was nice, but that's quickly changing now that the consumer market is being ignored by chip makers who'd rather sell to companies building data centers
Wow, wages barely rising after 60 years of wage suppression - the wealth is truly trickling down now! Just ignore that the top 1% stole $50 trillion from the bottom 90%.
> the driver has enough expertise to know what his personal limit ought to be
It is actually somewhat amusing that you worded this as "ought to be" rather than "is". Because one of the big problems with most drivers is they have an overly inflated idea of how competent they are at driving (I am not so churlish as to exclude myself from the category). And our system does nothing to bring drivers' beliefs about their capabilities in line with their actual capabilities--drivers are tested generally once on their competence [1], and that pass result then gets to hold for several decades, physical or mental decline notwithstanding.
> I settle for a middle position, which is that the speed limit should be no less than 35 mph on most streets
Most residential streets are not safe to travel at 25 mph, let alone 35 mph. There's a line of parked cars on the shoulder, children playing in driveways, on sidewalks, and in the street? Yeah, if you're traveling 35 mph, you've got no hope of stopping in time (recall that stopping distance scales with the square of speed).
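The square-law relationship is easy to check numerically. A rough sketch (assuming a typical ~0.7 g dry-pavement deceleration and a 1.5 s perception-reaction time - rule-of-thumb values, not figures from the comment above):

```python
def stopping_distance_ft(mph, reaction_s=1.5, decel_g=0.7):
    """Perception-reaction distance plus braking distance, in feet."""
    fps = mph * 5280 / 3600                    # mph -> feet per second
    braking = fps ** 2 / (2 * decel_g * 32.2)  # v^2 / (2a), a in ft/s^2
    return reaction_s * fps + braking

# Braking distance alone scales with the square of speed, so going from
# 25 to 35 mph nearly doubles it: (35/25)^2 = 1.96.
print(round(stopping_distance_ft(25)))  # ~85 ft total
print(round(stopping_distance_ft(35)))  # ~135 ft total
```

With these assumptions, a car at 35 mph is still doing roughly 20 mph at the point where a 25 mph car has already stopped.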
> Moreover, I think that all pedestrian collisions, no matter how small, must be investigated thoroughly, with a hard action taken to minimize such an incident.
We already know how to minimize collisions. The top 3 actions to take are a) reduce speed limits, b) redesign roads to be narrower to make drivers less comfortable traveling at speed, and c) ban right turns on red.
> Bicyclists must be mandated to wear light-colored high-visibility clothing, reflective gear, and a helmet, otherwise their bicycle should be confiscated.
Why? It's not like wildlife like bears, moose, or deer that wander onto the roads wear such gear, and a "mature highly-attentive driver" should be equally aware of such dangers.
[1] And to be honest, even that is somewhat generous a statement.
Speed limits below 30 are lazy copouts by lazy people that have for decades stood in the way of instituting proper safety systems that don't require lowering the speed. Making roads narrower is worse; it is simply horrific. Extending your depraved logic, disallowing cars on the road entirely would work even better. Your metric is one-sided: it accounts for pedestrian safety but not driver utility.
If you actually begin to use your head, there are other ways to lower collisions:
1. Make roads and lanes substantially wider. This allows pedestrians to be seen from the edges before they step in front of a car.
2. Shoulder parking is a pathetic stopgap for the absence of multilevel parking lots, so those lots should be constructed and used. Shoulder parking should be eliminated, as it is very detrimental to pedestrian visibility.
3. There should be well-maintained painted crosswalks, with walk buttons that actually work and dynamic traffic lights to go with them. These exist in some places in NYC. Stair-free pedestrian bridges also work if they stay maintained.
WebPKI is derived from X.509, but I don't think the rest of the X.500 series lives on in its original form anymore. X.500 was stripped down to form LDAP, which is still in very heavy use today. There are still some X.400 systems in existence. I think some of the early cellphone generations may have used the ITU standards in the physical layer?
Of course, the biggest--and weirdest--success of the ITU standards is that the OSI model is still frequently the way networking stacks are described in educational materials, despite the fact that it bears no relation to how any of the networking stack was developed or is used. If you really dig into how the OSI model is supposed to work, one of the layers described only matters for teletypes--which were a dying, if not dead, technology when the model was developed in the first place.
So I've definitely played around with a lot of these ideas, because the thing that I'm trying to emulate/port/reverse engineer is a 16-bit Windows application, which coincidentally means that a lot of the existing infrastructure just keels over and dies in supporting it.
I originally started the emulation route, but gave that up when I realized just how annoying it was to emulate the libc startup routine that invoked WinMain, and from poking around at the disassembly, I really didn't need to actually support anything before WinMain. And then poking further, I realized that if I patched the calls to libc, I only had to emulate a call to malloc at the, well, malloc level, as opposed to implementing the DOS interrupt that's equivalent to sbrk. (And a side advantage to working with Win16 code is that, since every inter-segment call requires a relocation, you can very easily find all the calls to all libc methods, even if the disassembler is still struggling with all of the jump tables.)
After realizing that the syscalls to modify the LDT still exist and work on Linux (they don't on Windows), I switched from trying to write my own 386 emulator to actually just natively executing all the code in 16-bit mode in 64-bit mode and relying on the existing relocation methods to insert the thunking between 16-bit and 64-bit code [1]. The side advantage was that this also made it very easy to just call individual methods given their address rather than emulating from the beginning of main, which also lets me inspect what those methods are doing a lot better.
Decompilation I've tried to throw in the mix a couple of times, but... 16-bit code really just screws with everything. Modern compilers don't have the ontology to really support it. Doing my own decompilation to LLVM IR runs into the issue that LLVM isn't good at optimizing all of the implemented-via-bitslicing logic, and I still have to write all of the patterns manually just to simplify the code for "x[i]" where x is a huge pointer. And because it's hard to reinsert recompiled decompiled code into the binary (as tools don't speak Win16 file formats anymore), it's difficult to verify that any decompilation is correct.
[1] The thunking works by running 16-bit code in one thread and 64-bit code in another thread and using hand-rolled spinlocks to wait for responses to calls. It's a lot easier than trying to make sure that all the registers and the stack are in the appropriate state.
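The call/response scheme in [1] can be sketched in miniature. This is a toy illustration of the mechanism only - one thread stands in for the "16-bit" world, busy-waiting for call requests, while the "64-bit" side posts a request and spins until the reply arrives; none of these names come from the actual project:

```python
import threading

request = None    # (func, args) posted by the caller
response = None   # result posted by the worker
shutdown = False

def worker_16bit():
    # Stand-in for the thread running 16-bit code: spin until a call
    # request appears, "execute" it, and post the result back.
    global request, response
    while not shutdown:
        if request is not None:
            func, args = request
            request = None
            response = func(*args)

def call_across(func, *args):
    # Stand-in for the 64-bit side: post the call, then spin-wait for
    # the reply (the real thing uses hand-rolled spinlocks similarly).
    global request, response
    response = None
    request = (func, args)
    while response is None:
        pass
    return response

threading.Thread(target=worker_16bit, daemon=True).start()
result = call_across(lambda a, b: a + b, 2, 3)
print(result)  # 5
shutdown = True
```

The appeal of this shape is exactly what the footnote says: each world keeps its own stack and register state untouched, and only the request/response slots cross the boundary.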
[post author] I went down some similar paths in retrowin32, though 32-bit x86 is likely easier.
I was also surprised by how much goop there is between startup and main. In retrowin32 I just implemented it all, though I wonder how much I could get away with not running it in the Theseus replace-some-parts model.
I mostly relied on my own x86 emulator, but I also implemented the thunking between 64-bit and 32-bit mode just to see how it was. It was definitely some asm, but once I wrapped my head around it, it wasn't so bad; check out the 'trans64' and 'trans32' snippets in https://github.com/evmar/retrowin32/blob/ffd8665795ae6c6bdd7... for, I believe, all of it. One reframing that helped me (after a few false starts) was to put as much code as possible in my high-level language and just use asm to bridge to it.
Yeah, 32-bit x86 is somewhat easier because everything's in the same flat address space, and you at least have a system-wide code32 GDT entry, which means you can avoid futzing around with the LDT. 16-bit means you get to deal with segmented memory, and the cherry on top is that gdb just stops being useful, since it doesn't know anything about segmented memory (to be fair, I don't think Linux even makes it possible to query the LDT of another process, even with ptrace).
As for trying to skip everything before main... well, the main benefit for me was being able to avoid emulating DOS interrupts entirely, between skipping the calls that set up various global variables, stubbing out some of the libc implementations, and manually marking in the emulator that code page X was 32-bit (something else that sends tools into a tizzy: a function switching from 16-bit to 32-bit mid-assembly).
16-bit is weird and kinda fun to work with at times... but there's also a reason that progress on this is incredibly slow for me.
Slow progress is fine, it took me like two years to get where I got! (Not that I was working on it full time or anything, but also there were just many false starts and I had no idea what I was doing...)
Have you tried otvdm? (A back-port of WineVDM to Windows x64.) I tried it with a Win16 application (apparently written in VB) that interfaces to a device over an optical transceiver on a serial port, and it worked fine.
I tried that, and ran into roadblocks since the app under test is old Visual Basic (which is half compiled and half interpreted) and then it used a third party library which has quite sophisticated anti-decompilation features.
The issue is that UAs are editable by the user, and there is no proof that some random person/scraper isn't just using a trusted bot's UA string. Every ethical service also publishes which IP addresses it uses, so that people can compare against the traffic they get to see whether it is actually that bot scraping. What this article describes is the game every third-party unethical scraper plays: they do anything and everything to try and get their requests through. They steal UAs, they steal residential IP addresses through botnets, they attempt to circumvent CAPTCHAs using AI, etc. So the behavior in this article is not proof of any major AI provider doing unethical scraping.
It's amusing because some insist that Fortnite is a battle royale game in the vein of PUBG, while others insist that it's a tower defense/shooter game like Orcs Must Die. And still others insist it's not a game but a venue for things like digital concerts. Clearly, it can't be all of those things!
IEEE 754 basically had three major proposals that were considered for standardization. There was the "KCS draft" (Kahan, Coonen, Stone), which was the draft implemented for the x87 coprocessor. There was DEC's counter proposal (aka the PS draft, for Payne and Strecker), and HP's counter proposal (aka the FW draft, for Fraley and Walther). Ultimately, it was the KCS draft that won out and became what we now know as IEEE 754.
One of the striking things, though, is just how radically different KCS was. By the time IEEE 754 was being formed, there was a basic commonality in how floating-point numbers worked. Most systems had a single-precision and a double-precision format, and many had an additional extended-precision format. These formats were usually radix-2, with a sign bit, a biased exponent, and an integer mantissa, and several implementations had hit on the implicit-integer-bit representation. (See http://www.quadibloc.com/comp/cp0201.htm for a tour of several pre-IEEE 754 floating-point formats.) What KCS did that was really new was add denormals, and this was very controversial. I also think that support for infinities was introduced with KCS, although there were more precedents for the existence of NaN-like values. I'm also pretty sure that sticky flags for exceptions, as opposed to trapping, were considered innovative. (See, e.g., https://ethw-images.s3.us-east-va.perf.cloud.ovh.us/ieee/f/f... for a discussion of the differences between the early drafts.)
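The gradual underflow that made denormals so controversial is easy to observe in any IEEE 754 double today (a quick illustration, not tied to any particular pre-754 format):

```python
import sys

tiny = sys.float_info.min  # smallest positive *normal* double, 2**-1022
sub = tiny / 2             # halving drops into the subnormal (denormal) range

# On a flush-to-zero system this would be 0.0; with denormals, underflow
# is gradual and the halving here is even exactly reversible:
assert sub > 0.0
assert sub * 2 == tiny
```

Among other things, denormals guarantee that `x - y == 0` implies `x == y`, which is false on systems that flush small results to zero.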
Now, once IEEE 754 came out, pretty much every subsequent implementation of floating-point has started from the IEEE 754 standard. But it was definitely not a codification of existing behavior when it came out, given the number of innovations that it had!
Everyone has already made several comments on the incorrect use of EPSILON here, but there's one more thing I want to add that hasn't yet been mentioned:
EPSILON (1 ulp for numbers in the range [1, 2)) is a lousy choice of tolerance. Every operation whose result is in the range [1, 2) has an absolute error of up to ½ ulp. Doing just a few operations in a row can push the accumulated error past your tolerance, simply because of the inherent inaccuracy of floating-point operations. Randomly generate a few doubles in the range [1, 10], then sum the list in a few different random orders, and your assertion should fail. I'd guess you haven't run into this issue because either very few people are using this particular assertion, or the people who do happen to be testing cases where the result is fully deterministic.
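A deterministic version of that failure takes only one line: repeated addition drifts past EPSILON after surprisingly few operations (here `math.fsum` stands in for the correctly rounded sum):

```python
import math

xs = [0.1] * 100
naive = sum(xs)        # left-to-right summation, rounding at every step
exact = math.fsum(xs)  # single correctly-rounded sum of the same values

EPSILON = 2.0 ** -52   # 1 ulp in [1, 2), i.e. f64::EPSILON
# After ~100 additions the accumulated error is already ~100x EPSILON:
assert abs(naive - exact) > EPSILON
```

So an `abs(a - b) < EPSILON` assertion fails here even though both sides are "the same sum", computed in perfectly reasonable ways.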
If you look at professional solvers for numerical algorithms, one of the things you'll notice is that not only is the (relative!) tolerance tunable, but there are actually several different tolerance values. The HiGHS linear solver, for example, uses 5 different tolerance values for its simplex algorithm. Furthermore, the default values for these tolerances tend to be in the region of 10^-6 to 10^-10... about the square root of f64::EPSILON. There's a basic rule of thumb in numerical analysis that you need your internal working precision to carry roughly twice the number of digits of your output precision.
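Incidentally, Python's standard library already defaults to this regime: `math.isclose` uses a *relative* tolerance of 1e-9, right around the square root of double-precision EPSILON. A small demonstration of why relative beats absolute at larger magnitudes:

```python
import math

a, b = 1e6 + 1e-4, 1e6   # agree to about 1 part in 10^10

# Relative comparison at ~sqrt(EPSILON) scale: passes easily.
assert math.isclose(a, b)  # default rel_tol=1e-9

# An absolute EPSILON-sized tolerance is hopeless at this magnitude,
# where 1 ulp is already ~1e-10:
assert not math.isclose(a, b, rel_tol=0.0, abs_tol=2.0 ** -52)
```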
Your last comment is essential for numerical analysis, indeed. There is this "surprising" effect where increasing the precision of the input ends up decreasing that of the output (roughly speaking). So "I shall just use a very small discretization" can be harmful.
One of the major projects ongoing in the current decade is moving the standard math library functions to fully correctly-rounded results, as opposed to the traditional accuracy target of ~1 ulp (the last bit may be off).
For single-precision unary functions, it's easy enough to just exhaustively test every single input (there's only 4 billion of them). But double precision has prohibitively many inputs to test, so you have to resort to actual proof techniques to prove correct rounding for double-precision functions.
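The accuracy claims themselves are stated as error in ulps against a higher-precision reference. A minimal sketch of how such a measurement works, using Python's `decimal` as the reference (this assumes the platform libm's `exp` meets the traditional ~1 ulp target, which any modern libm does):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50                  # plenty of guard digits

x = 1.5
reference = Decimal(x).exp()            # ~50-digit value of exp(1.5)
computed = Decimal(math.exp(x))         # the libm result, converted exactly
ulp = Decimal(math.ulp(math.exp(x)))    # double spacing at that magnitude

err_ulps = abs(computed - reference) / ulp
assert err_ulps < 1   # within the traditional ~1 ulp target
```

A correctly-rounded implementation would guarantee `err_ulps <= 0.5` for every input; the hard part for double precision is proving that without being able to enumerate the inputs.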
For what it’s worth, this is basically the first word you learn when discussing numerical precision; and I mean word—nobody thinks of it as an abbreviation, to the point that it’s very often written in lower case. So welcome to the club.
To me this feels like wasted effort due to solving the wrong problem. The extra half ulp of error makes no difference to the accuracy of calculations. The problem is that languages traditionally rely on an OS-provided libm, leading to cross-architecture differences. If languages instead used a specific libm, all of these problems would vanish.
Standardizing a particular libm essentially locks any further optimizations because that libm's implementation quirks have to be exactly followed. In comparison the "most correct" (0.5 ulp) answer is easy to standardize and agree upon.
> The extra half ulp error makes no difference to the accuracy of calculations
It absolutely does matter. The first, and most important, reason is that one needs to know the guarantees of every operation in order to design numerical algorithms that meet some guarantee. Without knowing what the components provide, it's impossible to design algorithms on top with any guarantee. And this is needed in a massive number of applications: CAD, simulation, medical and financial systems, control systems, aerospace, and on and on.
And once one has a guarantee, making the lower components tighter allows higher components to do less work. This is a very low level component, so putting the guarantees there reduces work for tons of downstream work.
All this is precisely what drove IEEE 754 to become a thing and to become the standard in modern hardware.
> the problem is that languages traditionally rely on an OS provided libm leading to cross architecture differences
No, they don't, at least not for things like sqrt and atanh and related. They've relied on compiler-provided libs for as long as there have been languages. And the higher-level libs, like BLAS, are built on specific compilers that provide guarantees via, again, the libs the compiler uses. I've not seen OS-level calls describing the accuracy of floating-point operations, but a lot of languages do, including C/C++, which underlies a lot of this code.
> The first, and most important reason, is one needs to know the guarantees of every operation in order to design numerical algorithms that meet some guarantee
Sure, but a 1 ulp guarantee works just as well here while being substantially easier to provide.
> And the higher level libs, like BLAS, are built on specific compilers that provide guarantees
Sure, but BLAS doesn't provide any accuracy guarantees, so its being built on components that sort of do has pretty minimal value. For basically any real application, the error you experience comes from the composition of intrinsics, not from the error of the intrinsics themselves, and that remains true whether those intrinsics have 10 ulp error or 0.5 ulp error.
Many of the conversions so far have been clearly faster. I don't think anything has been merged which shows a clear performance regression, at least not on CPUs with FMA support.
Using FMA makes it possible to write faster libm functions, but going back to a 1 ulp world with the same FMA optimizations would give you another 20% speedup at least. The other issue is that these functions tend to have much larger code size, which tends not to be a significant problem in microbenchmarks, but means that in real applications you increase cache pressure, slowing things down in aggregate.
> Not much hydrogen there, so not much water, which probably is the biggest problem.
Actually, the cloud layer at that level is mostly sulfuric acid, from which you can get your water. It also means you need to be in a hazmat suit when you walk outside, but that's still a step up from everywhere else, where you need a bulky pressure suit instead.