When you write too much Haskell, your Python code starts to look like this:
print('\n'.join(
    'FizzBuzz' if x%5==0 and x%3==0
    else 'Fizz' if x%3==0
    else 'Buzz' if x%5==0
    else str(x)
    for x in range(1, 101)))
I would really like to have a "let" expression in Python to avoid having to write a new function with a def statement when you could get away with a simple lambda or generator expression.
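A common workaround (a sketch of my own, not something the commenter endorses) is to abuse an immediately-applied lambda, or a one-element `for` clause inside a comprehension, to get a let-like binding:

```python
# let y = x + 1 in y * y, emulated with an immediately-applied lambda:
result = (lambda y: y * y)(10 + 1)
print(result)  # 121

# or with a one-element "for" clause, binding y per element:
squares = [y * y for x in range(3) for y in [x + 1]]
print(squares)  # [1, 4, 9]
```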
The magic is in the neg/sbb combination: neg changes reg to its two's complement, but also sets or clears CF depending on whether the argument was nonzero; then sbb reg, reg effectively copies CF into reg, avoiding a conditional jump for the != 0 test.
I also tend to wrap loops in expressions like that. Then, in almost every interesting loop, I suddenly want to log something or add intermediary computation, and have to refactor into the good old boring for loop.
Yes, but in an incredibly verbose way. And a simple let->lambda conversion can't express things that Scheme's let* or letrec expressions let you do (in Haskell, a "let" or "where" expression is a "letrec").
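For illustration (my sketch, not the commenter's): a let* with dependent bindings forces nested lambdas in the translation, which is where the verbosity comes from:

```python
# Scheme: (let* ((a 2) (b (* a 3))) (+ a b))
# Each binding that depends on an earlier one needs its own nested lambda:
result = (lambda a: (lambda b: a + b)(a * 3))(2)
print(result)  # 8
```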
A highly Pythonic, Easier to Ask Forgiveness than Permission[1] version:
FIZZ = 3
BUZZ = 5
cache = {}
for i in range(FIZZ - 1, FIZZ * BUZZ, FIZZ):
    cache[i] = 'Fizz'
for i in range(BUZZ - 1, FIZZ * BUZZ, BUZZ):
    try:
        cache[i] += 'Buzz'
    except KeyError:
        cache[i] = 'Buzz'
for i in range(100):
    try:
        print cache[i % (FIZZ * BUZZ)]
    except KeyError:
        print i + 1
from itertools import izip, islice

def fizzes():
    i = 1  # count from 1 so the divisibility tests line up with the numbers
    while True:
        yield '' if i % 3 else 'Fizz'
        i += 1

def buzzes():
    i = 1
    while True:
        yield '' if i % 5 else 'Buzz'
        i += 1

def numbers():
    i = 1
    while True:
        yield str(i) if i % 3 and i % 5 else ''
        i += 1

print "\n".join("".join(parts) for parts in islice(izip(fizzes(), buzzes(), numbers()), 100))
Or better yet, don't even bother asking for forgiveness or permission:
from collections import defaultdict

FIZZ = 3
BUZZ = 5
cache = defaultdict(str)
for i in xrange(FIZZ - 1, FIZZ * BUZZ, FIZZ):
    cache[i] = 'Fizz'
for i in xrange(BUZZ - 1, FIZZ * BUZZ, BUZZ):
    cache[i] += 'Buzz'
for i in xrange(100):
    print cache.get(i % (FIZZ * BUZZ), i + 1)
But I'm rather partial to the generator versions you and others posted.
def fbgen(text, divisor):
    while True:
        for i in range(1, divisor):
            yield ""
        yield text

def derrangedBuzz(max):
    fizzer = fbgen("Fizz", 3)
    buzzer = fbgen("Buzz", 5)
    for i in range(1, max + 1):
        s = fizzer.next() + buzzer.next()
        if s == "":
            print i
        else:
            print s
One of the solutions in the comments I found quite pythonic and concise. Somehow people have it in their heads that "Pythonic" means long-winded. And yes, you have to read the code and think for a second to understand it, but that's no crime.
[(not x % 3) * 'Fizz' + (not x % 5) * 'Buzz' or x for x in range(1, 101)]
To quote Brian Kernighan, "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"
It's not just more complexity per line, it's also a higher level of complexity, using language-specific features that people who aren't fluent in Python wouldn't be familiar with (multiplying a string by a boolean).
I'm fairly new to Python, though not to software development, and one of the nice things I find about Python is that when I see something I haven't seen before (in this case multiplying a string by a boolean) I can usually guess correctly what it will do and spend a few seconds with the REPL to confirm.
Of course, this is true to a certain extent of all programming languages, but I do find Python particularly easy in this respect.
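For readers who haven't tried it, a quick REPL check (my illustration) confirms the guess:

```python
# bool is a subclass of int, so multiplying a string by a bool
# repeats it once (True) or zero times (False):
a = True * 'Fizz'
b = False * 'Fizz'
# the trick from the one-liner above, applied to n = 9:
c = (not 9 % 3) * 'Fizz' + (not 9 % 5) * 'Buzz' or 9
print(a, repr(b), c)  # Fizz '' Fizz
```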
Well, that example is a bit different. You're ignoring who the audience is. When writing code, the audience is usually at least people on the same team, if not the person writing in the first place.
If writing in French for people who are not fluent, I think it would be a good idea to avoid idiomatic language.
It's not different. The audience in this case is people who are fluent in Python. Saying that code sample is bad is like saying Baudelaire is bad because only people fluent in French can read it.
I was curious about that. Is multiplying a string by a boolean idiomatic Python? I don't write nearly enough Python to know, but it strikes me that this might be more like writing French using lots of obscure words.
This starts to get into "what is idiomatic python?" which changes as the language evolves. For example, before Python had an official ternary form, this was a common idiom:
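The snippet itself appears to have been lost; reconstructed from the description that follows, the pre-ternary idiom was presumably something like:

```python
amount_due = 42
# False coerces to 0 and True to 1, so the bool indexes the two-string pair:
status = ("paid", "unpaid")[amount_due > 0]
print(status)  # unpaid
```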
This does the same thing as the FizzBuzz example, coercing a bool to int. Here the int is used as an index into the list of the two strings.
Personally, I found this handy and liked it a lot. Others didn't and now there is this, which isn't bad:
account.status = "unpaid" if amount_due > 0 else "paid"
Maybe it's less like using obscure French and more like speaking in a slightly different dialect, or in a different region with different cultural references.
That kind of code could make sense when you're using a very opinionated API, that is designed (for better or worse) with patterns in mind, such as Android. But, when you start bringing that style of coding outside of that environment...
I generally don't particularly like giving punctuation meaning that, well, isn't punctuating. The bottleneck when writing code is almost never keypresses.
I think CoffeeScript gets it probably the most right of languages I've worked with, in that the choice of punctuation makes intuitive sense... "Hey! It's an Arrow. It goes from here to here".
FWIW, I don't like the word "lambda" either... Naming things with one letter is generally a bad idea that we'd do better to divest ourselves of, but there is a bit of legacy to deal with. Either way, such names make a field terribly inaccessible (and statistics is a major offender here, with p-values and t-tests and whatnot).
So what about the '*' in C, or even better, the '.' in so many object-oriented languages?
> The bottleneck when writing code is almost never keypresses.
No, it's readability and that often comes with terseness. Every line break because I have to spell out lambda or every refactoring that removes the function from its original context so I can avoid a line break is bad for readability. In that aspect the '\' in Haskell does great. Also in Haskell anonymous functions are so important that they definitely deserve their own special character to shorten many expressions.
I only see this becoming a problem if it is used in excess, like in C++ and especially in C++11.
I don't like it... I don't know what I'd do if starting from scratch, and there might not be anything better, but it definitely causes readability pain.
> the '.' in so many object oriented language
I have less of a problem with this, because there's some non-programming analog to be found (Section 1.1, Article 3.2, etc.). That said, I think, at some point, it's harmful to OOP in that it makes it hard to sort out properties and messages, and the roles of one or the other.
I agree 100% on readability - the problem for me comes when words get mapped onto abbreviations and symbols that are almost entirely constructed and/or alien to most readers.
When you're writing Haskell in emacs (like you should be), you enable haskell-font-lock-symbols in haskell-mode, and it displays the lambda as it should be (plus a bunch of other symbol corrections for ::, ->, =>, (), ||, &&, etc.)
I cannot agree enough. Someone give me a call when a Haskell-like language with readable syntax appears. I really like the ideas behind Haskell, but the syntax is so unfriendly that I cannot be bothered to really start using it. Fewer special characters, please.
There really aren't that many special characters (certainly no requirement for non-ascii) in the core language - perhaps the sin is allowing almost unlimited user defined operators in code, which leads to monstrosities like ekmett's lens infix operators: http://hackage.haskell.org/package/lens-4.1.2/docs/Control-L...
As a lisper, I prefer names and prefix operators - there's a few exceptions where infix operators add improvements to the language, but I dislike having user-defined operators (they make searching a pain, although Hayoo eases some of that).
I really like Scheme and Clojure, and I think I have done a lot more Scheme and Clojure than Haskell (and even a lot more Python). However, I think that Haskell by itself has a great syntax (with some legacy cruft, granted).
What is really great is how Haskell enables you to write extremely succinct code by providing powerful abstractions and tools. The fact that you have a type annotation already gives enough meta information in the code that you often can omit a long name. The fact that Haskell functions tend to be short in lines gives you something that is closer to an equation than a command language.
I don't know how to put this in words, but I'll try: naming things well in programming is the most difficult task after mastering the very basics. I think this is because names are not fundamental to algorithms and programs but are like bookmarks for the programmer and their human brain. In C, a function with one-letter argument names is a nightmare; in Haskell, it actually improves readability over a verbose version with whole-word names. And then, I have seen a lot of functions that were named very specifically in terms of a problem the programmer was trying to solve at the time, but that were quite generic, or could have been very generic (think of `sort_customers_by_name()` instead of a `sort_by()`).
I share your dislike of extreme operator usage. It is a problem when every permutation of +-!@#! gets used as an operator in libraries for functionality that the user rarely needs. But Haskell's general approach to succinctness is a good one.
I have very limited familiarity with Haskell, but I find the syntax pretty clean and readable - I spend most of my time getting my head around functional/lazy/etc semantics, which make my head hurt (in a good way) - that presumably implies that the syntax is successfully exposing those semantics to my poor OO/duck-typing coddled brain.
I suppose these things are a matter of taste as much as anything else. But when people talk about a Haskell program being 'elegant', I never think to myself "what, that mess?"
You don't need a return, just a print (since we are doing FizzBuzz).
For tghw's solution to work you'd have to have a print outside of the function call (since you can't print from a lambda). With the def, you can print inside of the function (and don't need a return, since you're just printing).
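To make the constraint concrete (my aside, in Python 3 syntax): print became a function in Python 3, so today a lambda can call it directly; in Python 2 it was a statement and could not appear in a lambda at all, hence the def-based workaround.

```python
# Python 3: print is a function, so a lambda can call it.
f = lambda n: print(n)
result = f(7)  # prints 7; print() itself returns None
# In Python 2, `lambda n: print n` was a SyntaxError.
```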
I use this function to force evaluation of elements in a generator:
def do(gen):
    for x in gen:
        pass
It evaluates all elements of a generator without storing them in memory. If you don't mind generating some garbage, you can just use the list function instead.
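A standard-library alternative (an aside of mine, not from the thread): collections.deque with maxlen=0 exhausts an iterator at C speed while retaining nothing:

```python
from collections import deque

seen = []
gen = (seen.append(x) for x in range(5))
deque(gen, maxlen=0)  # consume the generator without storing its results
print(seen)  # [0, 1, 2, 3, 4]
```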
The jury is still out on whether this is a good thing or not.
IMHO, the objective of the crippled `lambda` really is to make the programmer refactor into reusable functions instead of having λs littered everywhere, which would hurt maintainability.
One of the most effective ways to hurt maintainability of code written in any given programming language is to try to make some language features artificially awkward to use in the hope that programmers will then do something else. When has that ever worked?
IMHO, if a feature is useful enough to have in the language, it deserves a good implementation. If it isn't, it shouldn't be there at all. And if it is useful sometimes but doesn't fit well in other situations, providing better alternatives in those situations is a more effective way to avoid dubious code than just artificially crippling the entire feature to discourage its use.
So what if it's shorter? You'll run out of screen? You'll wear down your keyboard?
If you don't like typing it, use a macro. Otherwise, as Objective-C shows, the verbosity of code is not relevant at all; what matters is what it says and what it does.
But I hoped you'd see beyond the direct comparison of lambdas with Objective C's blocks.
I'm saying that short code doesn't automatically translate to more readable or better code, Objective C's message syntax is the proof (if I have to be painfully specific about it).
I write a lot of JS, and typing out "function" or "return" was never on the list of things I found a problem, even though I also work in C#, which uses the short => variation.
The parent wasn't saying code should be short, they were saying the syntax for lambdas should be short.
Yes, short code doesn't automatically translate to readable code. But neither does long code.
You can't wheel out Objective-C's fondness for long self-documenting message names as self-evident proof that the syntax for lambdas must be verbose.
Long message names are good because they let you clearly describe what is otherwise a big black box of unknown behaviour.
The syntax for a lambda will always be a lambda, so it doesn't need to be spelt out explicitly each time. Shorter syntaxes such as used by Haskell are easier to visually pattern match on and so can actually aid comprehension.
It's not as simple as longer is always better and anyone who disagrees is obviously foolish and lazy.
words = ((3, 'Fizz'), (5, 'Buzz'))

def fizzbuzz(num):
    for value, word in words:
        if num % value == 0:
            yield word

for i in range(1, 101):
    print ''.join(fizzbuzz(i)) or i
A reminder that Python generator functions let you use multiple yields.
from __future__ import print_function

def fizzbuzzify(integers):
    for i in integers:
        if i % 15 == 0:
            yield 'FizzBuzz'
        elif i % 3 == 0:
            yield 'Fizz'
        elif i % 5 == 0:
            yield 'Buzz'
        else:
            yield str(i)

print(', '.join(fizzbuzzify(range(1, 101))))

for i in fizzbuzzify(range(1, 101)):
    print(i)
Since we are talking about crazy Python FizzBuzz, allow me to share a creation I have had sitting around for a while:
def fbg(s,r):print("\n".join(map(lambda x:str(x[1])if x[0]==''else x[0],zip(("".join(a for a,b in s if n%b==0)for n in range(1,r+1)),range(1,r+1)))))
fbg((('Fizz',3),('Buzz',5)),100)
Cause, you know, fizzbuzz one-liners need to work for the generic case.
This problem is like Project Euler #1. You don't strictly need to divide. I prefer to skip along the array and put in the new values.
n, f, b = 100, 3, 5
a = [str(i) for i in range(1, n + 1)]
for i in range(n / f):
    a[(i + 1) * f - 1] = 'Fizz'
for i in range(n / b):
    a[(i + 1) * b - 1] = 'Buzz'
for i in range(n / (f * b)):
    a[(i + 1) * (f * b) - 1] = 'FizzBuzz'
Doesn't even touch on my personal pet peeve, people who don't use list and dict literals. I assume they're former Java programmers who got ahold of enough Python knowledge to be dangerous, e.g.:
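The example appears to be missing; the pattern being criticized is presumably something like this (my reconstruction):

```python
# Java-flavoured construction, piece by piece:
d = dict()
d['fizz'] = 3
d['buzz'] = 5
xs = list()
xs.append(1)

# idiomatic literals, same result:
d2 = {'fizz': 3, 'buzz': 5}
xs2 = [1]
print(d == d2, xs == xs2)  # True True
```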
What's the first step? Well, you probably make a stream of numbers. And then a set of if blocks to test and return strings. Why not keep going?
What happens if you want more fizz buzz strings? Does the giant if-or statement seem a little unwieldy? Okay, pull the rules out and see if that's better. Is it easier to test now that it's a function and not a little stateful object? Do you want your fizzbuzz function to concatenate the strings or use explicit replacement? Can you make a switch for that? And so on.
That stuff only scratches the surface. I'm sure there's some brain burning fizz-buzz interviewers out there. It really puts you on the spot to deal with a stateful program.
One thing that seems really weird to me is this: given the first set of rules, why does almost everyone write the program that is the least extensible? Is it the way the question is phrased or scar tissue from imperative programming?
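One possible sketch of the "pull the rules out" refactoring the questions above gesture at (my illustration, with hypothetical names):

```python
# The rules live in data, so adding a new word is a one-tuple change:
RULES = [(3, 'Fizz'), (5, 'Buzz')]

def fizzbuzz(n, rules=RULES):
    # join the words of every matching rule; fall back to the number itself
    return ''.join(word for div, word in rules if n % div == 0) or str(n)

print([fizzbuzz(n) for n in (3, 5, 15, 7)])  # ['Fizz', 'Buzz', 'FizzBuzz', '7']
```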
Note that this solution is generalised for any divisors and generates an infinite sequence.
from itertools import chain, combinations, count
from operator import mul, add
fizzbuzz = lambda terms: (lambda terms: ({x%d:w for d,w in terms}.get(0,str(x)) for x in count(1)))(tuple((lambda (d,w):(reduce(mul,d),reduce(add,w)))(zip(*x)) for x in chain.from_iterable(combinations(sorted(terms.iteritems()),s) for s in xrange(1,len(terms)+1))))
terms = {3:'fizz', 5:'buzz', 7:'baz'}
from itertools import islice
print list(islice(fizzbuzz(terms),None,25))
Neat! I'd be interested in seeing pythonic solutions written in other languages (to the extent that those languages may allow 'Pythonic' style), too. I find it fascinating to see how certain languages will, for one reason or another, trend towards certain design patterns and styles.
Actually, most Python I write (influence from reading experienced programmers) tends to the last one, although it's semi-jokingly:
def fizzbuzz(n):
    return 'FizzBuzz' if n % 3 == 0 and n % 5 == 0 else None

def fizz(n):
    return 'Fizz' if n % 3 == 0 else None

def buzz(n):
    return 'Buzz' if n % 5 == 0 else None

def fizz_andor_maybenot_buzz(n):
    print fizzbuzz(n) or fizz(n) or buzz(n) or str(n)

map(fizz_andor_maybenot_buzz, xrange(1, 101))
It's pleasing to use HOFs (higher-order functions) in Python, as long as you don't abuse lambdas. Also, some functional-style types like `defaultdict` can be used to describe code/business logic with data structures rather than a bunch of ifs, keeping things tidy.
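A small illustration of the logic-as-data idea (my sketch, with hypothetical names): dispatch through a defaultdict instead of an if-chain, with unknown keys falling through to a default handler.

```python
from collections import defaultdict

# map codes to handler functions via data rather than an if-chain;
# the factory supplies a fallback handler for unknown codes
handlers = defaultdict(lambda: (lambda: 'unknown'),
                       {200: (lambda: 'ok'), 404: (lambda: 'missing')})
ok = handlers[200]()
other = handlers[500]()
print(ok, other)  # ok unknown
```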
There's something minor that's always bothered me about the usual description of FizzBuzz:
> If the number is divisible by 3, print Fizz instead of the number. If it’s divisible by 5, print Buzz. If it’s divisible by both 3 and 5, print FizzBuzz.
In a strict interpretation, the first two sentences could be seen as incorrect. If a number is divisible by 3, you cannot just print 'Fizz' and move on. You also have to check if it's divisible by 5.
The particular form of the description at issue can be interpreted (arguably, is most naturally interpreted) to direct a different outcome than is usually expected from FizzBuzz, to wit, it directs that on numbers divisible by 15 "Fizz", "Buzz", and "FizzBuzz" all should be printed, rather than just the last.
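The divergence between the two readings is easy to demonstrate (my sketch):

```python
def strict_reading(n):
    # emit every applicable line, per the literal sentence-by-sentence reading
    lines = []
    if n % 3 == 0: lines.append('Fizz')
    if n % 5 == 0: lines.append('Buzz')
    if n % 15 == 0: lines.append('FizzBuzz')
    return lines or [str(n)]

print(strict_reading(15))  # ['Fizz', 'Buzz', 'FizzBuzz'], not just 'FizzBuzz'
```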
The description is underspecified, no surprise there, but that's different from claiming it's incorrect. As far as its purpose as an interview question goes noticing this interpretation would probably be seen as a good sign.
If it can be satisfied in a way that is materially different from what was intended, then it is both under-specified and incorrect. This is a not-uncommon source of errors.
Natural language can be used with precision, and this is an important skill for engineers. Being able to identify ambiguity and inconsistency is an important part of that skill so yes, noticing it within a question should count in favor of the candidate. Dismissing it as being pedantic is the wrong response, because if you are working on something critical (secure communications software, for example) you need to handle ideas with precision.
Sure, it tests whether you can write a set of statements a computer can understand.
But it also tests whether you can understand the intent behind a set of statements a human would make, without going on a diatribe about how the definition is not good enough.
After all, if English was a formal strict language where only one right way existed to express something, we wouldn't need programmers, would we?
> After all, if English was a formal strict language where only one right way existed to express something, we wouldn't need programmers, would we?
If you just solved the fact that one meaning can have many expressions, we'd still need programmers (and, more relevantly, system analysts) just as much.
The relevant problem is that English isn't a formal strict language where a particular expression can only have one meaning (and, more importantly, that, people don't use it that way even when it superficially seems to be.)
That is, the problem that requires specialized work to develop unambiguous requirements for the implementation of (among other things) information systems isn't that English maps (many expressions) -> (one meaning), but that it maps (one expression) -> (many meanings).
I think you could have made this point without subtly referring to my post as a "diatribe." I'm more than aware of what the intended interpretation is. Posting what I felt was a very minor and amusing observation hardly constitutes a "diatribe."
> After all, if English was a formal strict language where only one right way existed to express something, we wouldn't need programmers, would we?
This is very off-topic, but I'm not sure I agree with it. You're assuming that it would be easy to find the one unique way to express a given computation such that anybody could write it down. Even if English had this uniqueness property, I don't think that would be true at all.
JS version for funsies. I didn't like the idea of specifically checking for simultaneous mod3 and mod5, so I made this.
for (var i = 1; i < 101; i++) {
    var result = "";
    if (i % 3 == 0)
        result += "Fizz";
    if (i % 5 == 0)
        result += "Buzz";
    if (result.length == 0)
        result = i;
    document.write(result + "<br>");
}
He forgot one on the "i += 1" (which would be i++ in a for loop but Python doesn't have that operator).
But that whole line is weird, because I think the C example should use a "for" loop, not increment the main counter at the top of a while loop, even if it has to use "for i in range(1, 101)" like in pythonic python. A C programmer like me (who loves python too) would immediately think "for (i = 1; i <= 100; i++)" for that (although counting from 1 is strange, it's what the problem asks for).
This is what I originally had but I felt that the Python `while` was actually a bit more true to the way a C for loop is constructed (and `range` seems a very not C thing to do.) It's not perfect!
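The bug under discussion, in miniature (my sketch): the while-based transliteration of a C for loop puts the increment at the bottom of the body, where it is easy to leave out.

```python
# C: for (i = 1; i <= 100; i++) { total += i; }
i = 1
total = 0
while i <= 100:
    total += i
    i += 1  # the easily-forgotten increment at the bottom of the loop
print(total)  # 5050
```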
I loved this; studying multiple implementations of the same recipe, even if most are not intended to be "exemplary" but rather cautionary, is a great way to learn.
This reminds me of a similar effort called "evolution of a python programmer" and published in this gist:
There's no honest attempt to explain the philosophy behind every language with real-world examples in Python. It's just childish jokes.
It's the equivalent of saying "Ze fish in le river" is "Frenchish English". Ha ha, stupid French people, right?
It's easy to make fun of programming languages like this, but what does that teach us? Aside from to mock what is different than what we use right now?
While I wrote this with tongue in cheek, I also aimed to illuminate the power of Python's flexibility along with the inherent danger therein. I do admit the Javacious example was over the top though ;)