Hacker News

There are two major reasons people don't show proof about the impact of agentic coding:

1) The prompts/pipelines pertain to proprietary IP that may or may not be allowed to be shown publicly.

2) The prompts/pipelines are boring and/or embarrassing, and showing them will dispel the myth that agentic coding is some mysterious, magical process and open people up to dunking.

For example, in the case of #2, I recently published the prompts I used to create a terminal MIDI mixer (https://github.com/minimaxir/miditui/blob/main/agent_notes/P...) in the interest of transparency, but those prompts correctly indicate that I barely had an idea of how MIDI mixing works, and in hindsight I was surprised I didn't get harassed for it. Given the contentious climate, I'm uncertain how often I'll be open-sourcing my prompts going forward.





You weren't harassed for it because (1) it is interesting and (2) you were not hiding the AI involvement and passing it off as your own.

The results (for me) are very much hit-and-miss, and I still see it as a means of last resort rather than a reliable tool whose upsides and downsides I know. There's a pretty good chance you'll be wasting your time, and every now and then it really moves the needle. It's examples like yours that actually help to properly place the tool among the other options.


I've seen people harassed for their personal projects because they used AI.

I'm fundamentally a hobbyist programmer, so I would have no problem sharing my process.

However, I'm not nearly organized enough to save all my prompts! I've tried to do it a few times for my own reference. The thing is, when I use Claude Code, I do a lot of:

- Going back and revising a part of the conversation and trying again—sometimes reverting the code changes, sometimes not.

- Stopping Claude partway through a change so I can make manual edits before I let Claude continue.

- Jumping between entirely different conversation histories with different context.

And so on. I could meticulously document every action, but it quickly gets in the way of experimentation. It's not entirely different from trying to write down every intermediate change you make in your code editor, between actual VCS commits.

I guess I could record my screen, but (A) I promise you don't actually want to watch me fiddle with Claude for hours and (B) it would make me too self-conscious.

It would be very cool to have a tool that goes through Claude's logs and exports some kind of timeline in a human-readable format, but I would need it to be automated.

---

Also, if you can't tell from the above, my use of Claude is very far from "type a prompt, get a finished program." I do a lot of work in order to get useful output. I happen to really enjoy coding this way, and I've gotten great results, but it's not like I'm entering a prompt and then taking a nap.


All your conversations live as JSON files inside `~/.claude/`.

But that includes a ton of dead ends and stuff.

No. The main reasons are that

1) the code AI produces is full of problems, and if you show it, people will make fun of you, or

2) if you actually run the code as a service people can use, you'll immediately get hacked by people to prove that the code is full of problems.


1) No one cares, if it works. No one cared before how your code looked, as long as you weren't a known and widely used open-source project.

2) There are plenty of services which don't require state or login and can't be hacked, so there are still plenty of use cases you can explore. But yes, I do agree that security for live production systems is still the biggest worry. Let's be honest, though: if you don't have a real security person on your team, the shit out there isn't secure anyway. Small companies don't know how to build securely.


> 1) No one cares, if it works. No one cared before how your code looked, as long as you weren't a known and widely used open-source project.

Forgive me if this is overly blunt, but this is such a novice/junior mindset. There are many real-world examples of things that "worked" but absolutely should not have, and when they blow up, they can easily take out an entire company. Unprotected/unrestricted Firebase keys living in the client are all the rage right now; yes, they "work" until someone notices "hey, I technically have read/write god-mode access to their entire prod DB," and then all of a sudden it definitely doesn't work and you've possibly opened yourself up to a huge array of legal problems.

The more regulated the industry and the more sensitive the business data, the worse this gets. Even worse if you're completely oblivious to the possibility of these kinds of things.


> Forgive me if this is overly blunt, but this is such a novice/junior mindset.

Unfortunately the reality is there are far more applications written (not just today but for many years now) by developer teams that will include a dozen dependencies with zero code review because feature XYZ will get done in a few days instead of a few weeks.

And yes, that often comes back to bite the team (mostly in terms of maintenance burden down the road, leading to another full rebuild), but it usually doesn't affect the programmers who are making the decisions, or the project managers who ship the first version.


I'm an architect and have 20 years of experience.

I have seen production databases reachable from the internet with 8 character password and plenty others.

But my particular point is only about the readability of code from others.


You should go hack the Cloudflare Workers OAuth stuff then, right?

You seem to think I'm an AI coding hater or something. I'm not. I think these tools are incredibly useful and I use them daily. However, as described in the article, I am skeptical of stories where AI writes whole applications, SaaS products, or game engines in a few hours and everything "just works". That is not my experience.

The Cloudflare OAuth lib is impressive, I will readily admit that. But they also clearly mention that everything was carefully reviewed, and that not everything was perfect but the AI was mostly able to fix things when told to. This was surely still a lot of work, which in my opinion makes the story much more realistic. It surely greatly sped up the process of writing an OAuth library; exactly how much is hard to say, however. Especially in security-relevant code, the review process is often longer than the actual writing of the code.


I don't know why you're giving me two paragraphs of response. I'm not psychoanalyzing you. I had a simple suggestion: if agent code output is so bad nobody runs it because it would get people owned, go own the code Kenton generated.

How are both of these not simply the second case they provided?

> The prompts/pipelines are boring and/or embarrassing and showing them will dispel the myth that agentic coding is this mysterious magical process

You nailed it. Prompting is dull and self-evident. Sure, you need basic skills to formulate a request. But it’s not a science and has nothing to do with engineering.


Could you clarify that last paragraph for me? I’m not sure what ”contentious climate” is here. AI antihype? I don’t understand the connection to not being harassed for something, isn’t that a good thing rather than something that would make you uncertain if you want to share prompts in the future?

"AI tech bro creates slop X because they don't understand how X actually works" is a common trope among the anti-AI crowd, even on Hacker News, that has only been increasing in recent months, and sharing prompts/pipelines provides strong evidence that can be pointed at for dunks. Sharing AI workflows is more likely to elicit this snark if the project breaks out of the AI bubble, though in the case of the AI boosters on X described in the HN submission, that's a feature due to how monetization works on that platform. It's not something I want to encourage for my own projects, though.

There are also lessons from the recent shitstorms in the gaming industry, with Sandfall's comments about Expedition 33's use of GenAI and Larian's comments on GenAI with concept art, where both received massive backlash because they were transparent in interviews about how GenAI was (inconsequentially) used. The most likely consequence of those incidents is that game developers will be less transparent about their development pipelines.


Counterpoint: If the tech was actually that good, nobody could dunk on it and anyone who tried would be mocked back.

If your hand is good, throw it down and let the haters weep. If you're scared to show your cards, you don't have a good hand and you're bluffing.


You'd think so, but with the recent extreme polarization of GenAI the common argument among the anti-AI crowd is the absolute "if AI touched it, it's slop". For example in the Expedition 33 case (which won Game of the Year), even though the GenAI asset was clearly a placeholder and replaced 2 days after launch, a surprisingly large number of players said sincerely "I enjoyed my time with E33 but after finding out they used GenAI I no longer enjoy it."

In a lesser example, a week ago a Rust developer on Bluesky tried to set up a "Tainted Slopware" list of OSS which used AI, but the criteria for inclusion were as simple as "they accepted an AI-generated PR" and "the software can set up an MCP server." It received some traction but eventually imploded, partially due to the fact that the Linux kernel would be considered slopware under those criteria.


oh yeah, most of us would agree those remarks are unreasonable

Sure, but I'm gonna push back and go "so what"? That sort of thing is what haters do, especially in the notoriously toxic world of gaming.

"Some people expressed disappointment about a thing I think is silly" is literally the center square on the gamer outrage bingo card lol. Same with "someone made a list that I think is kind of stupid".

And again, so what? Why should you care? Again, if you feel that insecure about it, it's you and your work that's the problem, not the haters who are always going to exist. Have the courage of your own convictions or maybe admit that it isn't that strong of a conviction lol.


> if you feel that insecure about it, it's you that's the problem, not the haters who are always going to exist

Pulling this victim-blaming sentence out of context to show how ridiculous it is.

Given this stance, I think the GP's reasoning for not publicly bragging about using AI makes perfect sense.

Why paint a target on your back? Why acquiesce to "show us your AI" just to be mobbed by haters?

Fuck that, let them express their frustrations elsewhere.


My dude, that's not "victim blaming" lol. Nobody's forcing you, personally, to do anything. I don't care if you, personally, publish your work or not.

What I'm saying is that _feeling_ of insecurity doesn't come from haters, because haters gonna hate, it's a sign that _your_ work might not be as good as you think it is, and you don't feel that you can stand behind it.

Also, managing public expectations and messaging is a thing professionals in many industries do all the time. It's not even particularly difficult, you just hear about it when it's bungled.

EDIT: To clarify, as a SWE, my work is available to anyone at the company. Any engineer I work with can see what I've done, and the public sees it too; they just don't know about it, because if I screw up, the company will take the blame for it. In this role you get very, very used to critique, to taking responsibility for what you make, and to making the case for your technical solution.


you can use it however you like, no one cares. really, no one.

but, people in general are NOT inclined to pay for AI slop. that is the controversy.

why would I waste my time reading garbage words generated by an LLM? if people wanted this, they would go to the llm themselves. the whole point of artistic expression is to present oneself, to share a perspective. llms do not have a singular point of view, they do not have a perspective, they do not have a cohesive aggregate of experiences. they just regurgitate the average form. no one is interested in this. even when distributed for free, it is disrespectful to others who put in their time until they realize it is just hot garbage yet again.

people are getting tired of low-effort `content`, yet another unity or unreal engine reskin, asset-flipping `game`...

you get the idea: lots of people will feel offended and disrespected when presented with no effort. got it? it is not exclusively about intellectual property theft either; i don't care about that, i just hate slop.

now whether you like it or not, the new meta is to not look professional. the more personal, the better.

AI is cool for a lot of things, searching, learning, natural language apropos, profiling, surveilling, compressing information...it is fantastic technology! not a replacement for art, never will be.


Did you post them with commentary along the lines of "this is the second coming of $DEITY, AI will replace us all, click on this Claude referral link to sign up"?

No, don't think so.

However, 90% of "AI" articles either are full of bullshit about "AI" or are someone trying to pass as an "expert" in some domains with LLM generated bullshit.

Stuff like yours is rare.


Or 3) it’s my competitive advantage to keep my successes close to my chest.

That's 1, just reworded.


