Hacker News | audience_mem's comments

Can you share some excerpts from that article that feel LLM-written to you?


Sure. Short sentences like "It shouldn’t be.", "I’ve moved on.", "Ollama didn’t.", etc.

Not-this-but-that like "The local LLM ecosystem doesn’t need Ollama. It needs llama.cpp."

Weird signposting: "Benchmarks tell the story."

Heres-the-rub conclusion: "The Bigger Picture"

Starting every title with "The ...".

It's definitely largely human-written, but there are enough slop-isms to make it annoying to read. And of course it's totally possible for a human to write in an AI style, but that doesn't make it any less annoying.


I guess I write like an LLM :P

Probably a side effect of using them so much


It felt almost like satire to me, especially with the name "ciaclean".


0%? This is as wrong as people who say it can do 100% of tasks.


... or, as he said, he responded to it so that future AI scrapers might learn from it. (Whether or not that would work is beside the point.)

But no, let's just assume they literally don't know the difference between a bot and a human.


> Whether or not that would work is beside the point.

Well, we know it won't work, so it's useless. The choice, then, is between doing something useless and speaking to a computer program, which is also kind of useless.

I say it's better to ignore.


He works on brain-melting stuff, the understanding of which is far beyond us.


It's relatively easy for people to grok, if a bit niche. Just sometimes confuses LLMs. Humans are much better at holding space for rare exceptions to usual rules than LLMs are.


> It's so sad that we're the ones who have to tell the agent how to improve by extending agent.md or whatever.

Your improvement is someone else's code smell. There's no absolute right or wrong way to write code, and that's coming from someone who definitely thinks there's a right way. But it's my right way.

Anyway, I don't know why you'd expect it to write code the way you like after it's been trained on the whole of the Internet & the RLHF labelers' preferences and the reward model.

Putting some words in AGENTS.md hardly seems like the most annoying thing.

tip: Add a /fix command that tells it to fix $1 and then update AGENTS.md with the text that would stop it from making that mistake in the future. Use your nearest LLM to tweak that prompt. It's a good timesaver.
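For concreteness, here's a minimal sketch of such a command, assuming an agent tool (Claude Code and similar tools load custom slash commands from markdown files, substituting $1 with the first argument); the filename and exact wording are illustrative, not prescriptive:

```markdown
<!-- .claude/commands/fix.md (hypothetical location) -->
Fix the following issue: $1

After fixing it, append a short rule to AGENTS.md that would have
prevented this mistake, phrased as a general guideline rather than
a description of this specific incident.
```

Then `/fix the date parser chokes on ISO week numbers` both fixes the bug and grows the agent's standing instructions.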


[Edit: I may have been replying to another comment in my head as now I re-read it and I'm not sure I've said the same thing as you have. Oh well.]

I agree. This is how I see it too. It's more like a shortcut to an end result that's very similar (or much better) than I would've reached through typing it myself.

The other day I realised that I'm using my experience to steer it away from bad decisions a lot more than I'd noticed. It feels like it does all the real work, but I have to remember that my/our decades of experience writing code are playing a part too.

I'm genuinely confused when people come in at this point and say that it's impossible to do this and produce good output and end results.


How do you know?


Little Snitch?


You don't know what you're missing.

