My favorite thing about Swift is that it seems to get out of the way - and when it does get in the way, it's usually with a nifty language feature (like the { $0 + $1 } closure syntax). I'm very excited for the future - between Go and Swift we now have two fast compiled languages that are almost as expressive as their slower dynamic/interpreted cousins.
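For anyone who hasn't seen it, here are a couple of tiny made-up examples of that shorthand syntax (reduce and sorted(by:) are standard library methods; the values are just illustrations):

    let sum = [1, 2, 3, 4].reduce(0) { $0 + $1 }               // 10
    let ordered = ["pear", "fig", "apple"].sorted { $0 < $1 }  // ["apple", "fig", "pear"]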
As an aside, I like that Apple is betting the farm on ARC. I wish I could have been a fly on the wall when they were discussing ARC versus GC.
I really like the `deinit` construct that comes with ARC, which lets you know when an object is about to be deallocated. Makes it much easier to find memory leaks, imo.
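A toy example (the class name is made up): drop a print into deinit and you can see exactly when - or whether - an instance actually goes away:

    final class Downloader {
        deinit {
            // Runs when the last strong reference disappears.
            // If this never fires, something is still retaining the object.
            print("Downloader deallocated")
        }
    }

    var d: Downloader? = Downloader()
    d = nil   // prints "Downloader deallocated"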
Indeed. `deinit` has saved me from things that would have caused much greater problems later on. It's probably one of the more subtle features that one would miss the most when moving to a language that doesn't have it.
I love Go, but if there's one thing it is not, it's expressive - especially when compared to its dynamic cousins like Ruby/Python. That is a tradeoff I am willing to live with, but there is no need to get starry-eyed over it.
GP was talking about Go. Not to beat a dead horse, but not having generics results in writing similar code over and over again, e.g. when working with collections.
I'm always a bit fascinated when the thing everyone "knows" is the next big thing turns out not to be. RISC is a classic example. (ARM is light CISC compared to the sort of true minimal RISC I'm talking about.)
GC is borderline but it feels like it might be one of those. In retrospect one giveaway is how easy it is to avoid almost all memory problems in C++ with RAII and STL.
>I wish I could have been a fly on the wall when they were discussing ARC versus GC.
Apple shipped a tracing GC (RC is a form of GC) for a while, but couldn't get it to work reliably or with adequate performance. ARC was a bit of a "Hail Mary" and is problematic in its own right, but certainly better than the GC it replaced.
> ARC was a bit of a "Hail Mary" and is problematic in its own right
I'm just curious, given your experience, what you are pointing to as problematic. Other than potential extra release calls in tight loops, the main downside I saw was that it made the use of C structs way less appealing - at the same time, I wouldn't want to work with manually memory-managed C structs in GCD blocks.
The last time I did any Objective-C you had to retain/release yourself, so the auto stuff is interesting. As far as I can tell, the benefit over garbage collection is that GC only works well when you have lots of spare memory, which is constrained on mobile devices.
There are hybrid systems that combine the prompt deallocation of pure reference counting with the superior throughput of tracing garbage collection.
Part of the reason I really dislike the "reference counting vs. garbage collection" debate, and keep emphasizing that reference counting is garbage collection, is that it sets up this false dichotomy. In reality, there are all sorts of automatic memory management schemes that combine aspects of reference counting with tracing in different ways, most of which were created in the 80s and 90s. Sadly, almost nobody in industry is aware of this enormous body of work, and the simplistic "RC vs. GC" model has stuck in everyone's heads. :(
Not only is reference counting a form of garbage collection (as pcwalton pointed out), it is also not the case that you had to retain/release stuff yourself, certainly not since Objective-C 2.0's properties.
Here is the code to define and use a property pre-ARC, with properties:
Spot the difference? Now, it turns out that there are some differences, such as automatic generation of -dealloc methods, weak references, and some cosmetic stuff. But overall it's a subtle difference at best, and for most code you won't be able to tell.
Pre Objective-C 2.0, there were solutions such as AccessorMacros[1], which handled the same use-cases, except without the dot syntax (which is somewhat questionable anyway), and had the advantage of being user-defined and user-extensible. So, for example, if you want a lazy accessor, you don't have to wait for your liege lord, er, language supplier to add it, or create a whole new language to do the trick. Instead, you just write 4-5 lines of code and: done!
This is one of the most uninformed posts I have read in a while. As someone who has been developing in Objective-C for the last 6 years and been through the transition from MRC to ARC, I can tell you that none of what is stated in this post is accurate.
Actually, all of it is accurate. Since you're spouting off your credentials as the only evidence for why what I wrote is wrong [not sure how that works], here are mine:
- programmed in Objective-C for ~30 years
- implemented my own pre-processor and runtime (pre NeXT)
- programmed in the NeXT ecosystem professionally since 1991
- additionally, worked in Objective-C outside the NeXT/Apple ecosystem for many years
- worked with Rhapsody and with OS X since the early betas
- worked at Apple for 2 years, in performance engineering (focus: Cocoa)
- one of my projects was evaluating the GC
With that out of the way (and just like your 6 years, it has no actual bearing on correctness): which specific parts do you believe are inaccurate? I'd be happy to discuss, show you why you're wrong, or correct my post if you turn out to be right on something that can be verified (your opinion as to how awesome ARC is doesn't count).
My personal frameworks consist of 205584 non-comment, non-whitespace, non-single-bracket lines of code. Of these, 304 contain a retain, 1088 an autorelease, and 957 a release. That's 0.15%, 0.52% and 0.46% of the code respectively, for a grand total of 1.13%.
I'd have a hard time calling around 1% of total code "a lot", especially since the bulk of that is very simple boilerplate and trivial to write, but I guess everyone is different.
Mind you, this is a less-than-optimal code base, dating back to the mid-1990s, with a lot of "but I am special" code that does manual management where it shouldn't. Code I write today, even without ARC, has a significantly lower R/R density, well under 1%.
However, even of that 1%, the bulk is (a) releases in dealloc and (b) autorelease in class-side convenience initializers.
Quite frankly, I really miss convenience initializers in typical ARC code; writing [MPWByteStream streamWithTarget:Stdout] is so much nicer than [[MPWByteStream alloc] initWithTarget:Stdout] that (a) I wish people would write convenience initializers even in ARC mode (my experience is that they don't) and (b) I wrote a little macro that will generate an initializer and its convenience initializer from one specification. It's a bit nasty, so I'm not sure I'll keep it.
For the releases in dealloc, I once wrote an auto-dealloc that grubbed through the runtime to automatically release all the object instance variables (with an exception list for non-retained ones). It probably would have allowed me to eliminate the bulk of the releases, but somehow I just didn't find it all that advantageous; writing those dealloc methods was just not that much of a hassle.
What may be interesting here is that having an alternative may have been instrumental in realising it wasn't that big a deal. Things seem a lot worse when you don't have an alternative (or feel you don't have one).
The same applies to ARC itself, at least for me: before ARC was released, it was exactly the solution I had wanted, especially in light of the GC madness. Again, it was only once I had used it in practice that it became obvious how insignificant an issue R/R really was.
The only way I can see of getting significantly higher than 1% R/R code is by accessing instance variables directly, either because you are writing accessors by hand (why?) or grabbing at those instance variables without going through their respective accessors (why?!?!?). In both cases: don't do that.
Yet, whenever I mention these straightforward facts (particularly the numbers), people jump at me. Which is interesting in and of itself. My only explanation so far is that people generally write much, much worse code than I can imagine, or that R/R looms much larger in the collective Apple-dev psyche than can be justified by the cold, hard facts of the matter.
My guess is that it's a little of the former and a lot of the latter. As a case in point, one dev who had jumped at me on a mailing list came back to me a little later in a private mail. He had converted one of his ARC projects back to R/R and was surprised to find what I had written to be 100% true: the R/R portion of the code was tiny and trivial, much less than he'd imagined, and hardly worth noticing, never mind fretting about.
However, the collective paranoia around R/R and the RDF around ARC seem to be big enough that reality doesn't really stand a chance. Which is of course also relevant: perception matters, and that's why ARC is important.
I think they had to go with ARC due to the requirement that Swift interoperate with Objective-C. If that hadn't been a constraint, yeah, it would have been an interesting decision.
No - they already had GC working with Objective-C and could have chosen it for Swift if they had thought it was the best technology.
Here's a quote from Chris Lattner:
"GC also has several huge disadvantages that are usually glossed over: while it is true that modern GC's can provide high performance, they can only do that when they are granted much more memory than the process is actually using. Generally, unless you give the GC 3-4x more memory than is needed, you’ll get thrashing and incredibly poor performance. Additionally, since the sweep pass touches almost all RAM in the process, they tend to be very power inefficient (leading to reduced battery life).
I'm personally not interested in requiring a model that requires us to throw away a ton of perfectly good RAM to get a "simpler" programming model - particularly one that adds so many tradeoffs."
Yes, they had GC working with Objective-C but there were so many problems with it that they dropped the GC in favor of ARC years ago. By the time Swift came along, GC with Objective-C was no longer an option.
Chris Lattner was already working on Swift when the decision to drop GC was made. Guess who made the decision? Chris Lattner. If anything, GC was dropped because of Swift, not the other way around.
Generally, data lifetime in Rust is fully deterministic and the borrow checker can statically determine when data should be deallocated. If for whatever reason you do need reference-counted semantics, there are options in the stdlib (alloc::rc).
And to elaborate on this, there's a namespace clash: Arc in Rust is _atomic_ reference counting, while ARC in Swift is _automatic_ reference counting, which, even more confusingly, is implemented using atomic reference counting, in my understanding.
Automatic reference counting inserts all of the refcount bumps. In Rust, you have to write the up count yourself, but not the downcount.
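Roughly, on the Swift side it looks like this (placeholder names; exactly where the retains/releases land is up to the ARC optimizer, which can move them or elide redundant pairs):

    class Widget {}

    func demo(_ use: (Widget) -> Void) {
        let a = Widget()   // +1: a new instance starts with one strong reference
        let b = a          // ARC conceptually inserts a retain here: +1
        use(b)             // the callee may receive the reference without another bump
        // ARC inserts the matching releases for a and b before the function returns.
    }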
But, reference counting isn't used very often, at least in my experience. It's very useful when you have non-scoped threads, though.
As far as I can tell, the impls of Arc and ARC are actually basically identical at a high level, with the only major difference being that ARC only keeps 32-bit counts on 64-bit platforms (so they only waste one pointer's worth of space).
Everything else is just where retain/release (clone/drop) calls are made. Rust is insanely good at not frobbing counts because you can safely take internal pointers and move pointers into a function without touching the counts at all. Swift has lots of interesting optimizations to avoid touching the counts, but it's hard to compete with a system that has so much great static information related to liveness of pointers and values.
As a simple example, consider this code (which is conveniently valid Swift and Rust, modulo snake_vsCamel):
let x = make_refcounted_thing();
foo(x);
This code in isolation will never frob counts in Rust. It may or may not frob counts in Swift, depending on what follows the call to `foo`. In particular, if foo is in tail position (or at least tail-position-for-the-life-of-x), then we can avoid frobbing, because we know foo's final operations will be to release its function args. foo may in turn punt releasing its function args to anyone it passes them to as a tail call. Note that Swift and Rust both have the usual caveat that "less is a tail call than you think" thanks to side-effecting destructors.
The takeaway is that Rust's default semantics encourage releasing your memory earlier, which in turn means less traffic on the reference counts. Particularly interesting is that in Rust you can run a function arg's destructor earlier than the end of the function by moving it into a local variable. In Swift, I do not think this is possible (but I wouldn't be too surprised to be wrong - Swift has tons of special attributes for these little things).