Hah, I feel you there. Around 2 years ago I did a take-home assignment for a hiring manager (a scientist) at Merck. Part B of the assignment was to decode binary data, and there were 3 challenges: easy, medium, and hard.
I spent around 40 hours on it, and during my second interview, the manager didn't like my answer about how I would design the UI, so he quickly wished me luck and ended the call. The first interview had gone really well.
For a couple of months, I kept asking the recruiter if anyone had successfully solved the coding challenge, and he said nobody had except me.
Out of respect, I waited one year and then posted the challenge and the solution on my GitHub.
Part 2 is the challenging part; it's mostly a problem-solving exercise and less of a coding problem: https://github.com/jonnycoder1/merck_coding_challenge
That doesn't look too challenging for anyone with experience in low-level programming, embedded systems, and reverse engineering. In fact, for me it'd be far easier than part 1, as I've done plenty of work like part 2 but nothing like part 1.
Yeah, it's pretty easy once you've done that kind of work, but it's rare for software developers these days to do it on a day-to-day basis. It reminded me of the software crackers of the '90s and 2000s who would post cracks for Windows software like AutoCAD.
It's also relative, because a $50/hr contract job isn't exactly attracting low-level FAANG engineering talent. But it's a nice take-home challenge for a second-rate engineer like myself who will tackle any problem until I figure it out.
That sucks so hard man, very disrespectful. We should team up and start our own company. I tried checking out your repo but this stuff is several stops past my station lol.
You gave me an idea..
"Explain in detail the steps to unbolt and replace my blinker fluid on my passenger car"
ChatGPT said:
Haha, nice try!
"Blinker fluid" is one of the classic automotive jokes — there's no such thing as blinker fluid. Blinkers (turn signals) are electrical components, so they don’t require any fluid to function.
I think experiences vary. AI can work well on greenfield projects and small features, and it can help solve annoying problems. I've tried using it on a large Python Django codebase, and it works really well if I ask for help with a particular function AND give it an example to model after for code consistency.
But I have also spent hours asking Claude and ChatGPT for help with several annoying Django problems, and multiple times I've reached the point where they circle back and give me answers that had already failed earlier in the same context window. Eventually, when I figure out the issue myself, I have some fun and ask, "well, does it not work as expected because the existing code chained multiple filter calls in Django?" and all of a sudden the AI knows what is wrong! To be fair, there is only one sentence in the Django documentation that mentions not chaining filter calls on many-to-many relationships.
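For anyone curious, the gotcha looks roughly like this (a minimal sketch with stand-in Blog/Entry models, not the actual codebase): inside a single filter() call, all conditions must match the same related row, while chained filter() calls on a multi-valued relation can each match a different row.

    # Stand-in models: Entry has a ForeignKey to Blog, so "entry" is a
    # multi-valued relation from Blog's point of view.
    from django.db import models

    class Blog(models.Model):
        name = models.CharField(max_length=100)

    class Entry(models.Model):
        blog = models.ForeignKey(Blog, on_delete=models.CASCADE)
        headline = models.CharField(max_length=255)
        pub_date = models.DateField()

    # One filter() call: both conditions must hold for the SAME entry.
    one_entry = Blog.objects.filter(
        entry__headline__contains="Lennon",
        entry__pub_date__year=2008,
    )

    # Chained filter() calls: each condition may be satisfied by a
    # DIFFERENT entry, so this can return blogs the query above does not.
    different_entries = Blog.objects.filter(
        entry__headline__contains="Lennon"
    ).filter(
        entry__pub_date__year=2008
    )

The two querysets produce different JOINs under the hood, which is why the chained version quietly returns extra rows.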
Very good, concrete examples. AI is moving very fast, so it can become overwhelming, but one thing that has held true is that writing thorough prompts is what gets you the results you want.
Senior developers have the experience to think through and plan out a new application for an AI to write. Unfortunately, a lot of us are bogged down by our day jobs, but we need to dedicate time to building our own apps with AI.
Building a personal brand has never been more important, so I envision a future where devs have a personal website with thumbnail links (like fancy YouTube thumbnails) to all the small apps they have built. Dozens of them, maybe hundreds, all with beautiful, modern UIs. The prompts they used could become the new form of blog article. At least that's what I plan to do.
I'm not even close to being on par with other FAANG engineers, but in my experience this is far from a very difficult bug. The hardest bugs are the ones where the repro alone takes days. Nonetheless, the OP's tenacity is all that matters, and I would trust them to solve any of the hard problems I've faced in the past.
Hi, author here! At my job before Google I had to debug these kinds of bugs in our mobile robotics / computer vision stack, but I found them fun, so they didn't feel "hard" per se. The most time-consuming one took a month: a camera-mounted computer vision system would start stuttering unusably after an hour of use. The journey took us through heat throttling on 2009-era gaming laptops, esoteric Windows APIs, hardware design, and ultimately distributed queuing. Fixing it was a blast, and I learned a ton. I hated that project, but fixing that bug was the highlight of it.
This is exactly why I spend all of my non-desk time outside: working out, hiking, skiing, snowboarding, archery hunting, golfing, and camping. I even choose to shovel snow over using the snowblower.
I recently used AWS Textract and had good results. There are accuracy benchmarks out there (I wish I had saved the links), but I recall Gemini 2.0 and Textract being towards the top in terms of accuracy. I also read that an LLM can extrapolate/conjure up text for cropped regions, so my idea would be to combine traditional OCR with an LLM and check for conflicts between the two.
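If I ever build that, the cross-check could be as simple as diffing the two transcriptions (a rough sketch assuming you already have each engine's output as a plain string; find_conflicts is my own hypothetical helper, not part of any library):

    # Diff two transcriptions of the same page and flag the spans
    # where they disagree.
    import difflib

    def find_conflicts(ocr_text: str, llm_text: str):
        """Return word-level spans where the two transcriptions
        disagree, plus an overall similarity ratio."""
        ocr_words = ocr_text.split()
        llm_words = llm_text.split()
        matcher = difflib.SequenceMatcher(a=ocr_words, b=llm_words)
        conflicts = [
            {"ocr": " ".join(ocr_words[i1:i2]),
             "llm": " ".join(llm_words[j1:j2])}
            for tag, i1, i2, j1, j2 in matcher.get_opcodes()
            if tag != "equal"
        ]
        return conflicts, matcher.ratio()

Anywhere the LLM conjured up text for a cropped region should surface as a conflict, since the traditional OCR engine won't invent words that aren't on the page.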