I mean no disrespect. This is more of a rant at how things are today. It is telling that over-complicated solutions have become so common that, for the current generation of devs, Kubernetes is the obvious way of doing stuff and a simple systemd service is the obscure one. I am sure there are good reasons for this, but it still feels like a loss when simplicity is no longer obvious.
Congratulations on derailing what was otherwise such a nice thread. Well done.
Not everyone in this world is always on edge like you. It is OK to be cheesy sometimes. Humor exists for a reason. Not every unconventional interaction is creepy. Get over yourself, please.
Using a throwaway with 37 karma to come in and act like the HN police and pretending you know me? My friend, it appears it is you who needs to get over yourself.
You should read the original article by Dawkins that this piece is critiquing: https://archive.is/Rq5bw
I don't know if the original article casts him in a better light. I think it does not. But it is still worth reading so you can see the context for yourself and judge whether the criticism in this article is fair.
This sounds unnecessarily reductive. By "own" I mean that I can re-read the book again and again, as many times as I want, as long as I take good care of it and prevent it from disintegrating.
But the DRM e-books can't be used like that. That was their point.
I really had fun with this one. You know what would make it even cosier? Being able to choose a small avatar for ourselves. The mouse pointer as your icon feels very impersonal at the moment. Having avatars would make it feel more like we're all hanging out together in this wonderland.
> Lots of negativity in the comments and while I'm as distrusting of VC funding as the next guy I think competition in this space is something we should encourage, and bootstrapping that is hard if not impossible at this point.
What you are calling "negativity" are genuine concerns to me. I was excited by the headline at first, but as soon as I found out it is VC-funded, it became a complete non-starter for me.
Look, I'm going to make my labor of love available to the world on your platform. I'm not going to earn a dime from it. It's just free work I'm gonna put out there. If I'm going to do that, I'll choose a platform where I can be reasonably sure that there won't be a rug pull 5 years down the line.
The problem with VC-funded projects is that there is definitely going to be some kind of rug-pull. Because the investors need their money.
The Git hosting services I use today are those where I can pay as a paying customer or I can pay as a paying member. As a paying customer, I know what I am getting into. As a paying member, I have the right to vote on decisions that affect the platform.
I agree with everything you wrote, but wanted to add to:
> The problem with VC-funded projects is that there is definitely going to be some kind of rug-pull. Because the investors need their money.
If you can tell me up front what the rug-pull will be in N years, then I could potentially look past it for certain use cases.
But if all you say is "I know you don't like VC-funded companies, but ours really is different because of X" then that's pretty much a slap in the face to users who've been through the hamster wheel of enshittification before.
> There's a growing demand for single user or smaller scoped apps where giving LLM agents direct access means velocity. The failure/rollback model is much easier with these as long as we have good backup hygiene.
This makes no sense to me. For anything that handles sensitive payment or personally identifiable data, direct access to the DB is potentially illegal.
> The failure/rollback model is much easier with these as long as we have good backup hygiene.
Have you actually operated systems like this in production? Even reverting to a DB state that is only seconds old can still lose hundreds or thousands of transactions. Which means loads of unhappy customers. More realistically, recovery points are often minutes or hours behind once you factor in detection, validation and operational overhead.
DB revert is for exceptional disaster recovery scenarios, not something you want in normal day-to-day operations. If you are saying that you want to give LLM full access to prod DB and then revert every time it makes a mistake, you aren't running a serious business.
You are thinking way too hard. This person is a hazard that needs to learn the hard way.
If velocity means letting agents live-edit a DB, I'm fine being slow. Holy hell. Let these people crash and burn, but do tell me the app name first so I know never to use it.
Not everything is a SaaS. I commented this elsewhere, but I picture all the businesses running on spreadsheets/CSVs/MS Access databases on someone's desktop. People delete these all the time by accident. They have no security, no authentication, etc.
With an LLM agent (with RW access to a DB), a developer, and a few days, these become proper apps that SMBs would pay well for.
Sure, don't give an LLM agent access to PII or properly built CRMs etc. But to not see the rest of the landscape seems like a missed opportunity.
At the very least you should give it a non-prod copy of the database, not direct access to the DB actively powering production right now.
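To make the "non-prod copy" idea concrete, here is a minimal sketch using Python's sqlite3 backup API (the table and variable names are illustrative, and the in-memory "production" DB is a stand-in for whatever actually powers the app):

```python
import sqlite3

# Stand-in for the production database.
prod = sqlite3.connect(":memory:")
prod.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
prod.executemany("INSERT INTO orders (total) VALUES (?)", [(9.99,), (25.0,)])
prod.commit()

# Point-in-time copy that the agent is allowed to mangle.
scratch = sqlite3.connect(":memory:")
prod.backup(scratch)

# An agent "mistake" hits only the copy...
scratch.execute("DELETE FROM orders")
scratch.commit()

# ...while production is untouched.
print(prod.execute("SELECT COUNT(*) FROM orders").fetchone()[0])     # 2
print(scratch.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 0
```

The same shape works with any DB that supports snapshots or dump/restore; the point is that the agent's blast radius is the copy, not production.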
I've done work for a hedge fund where the DB ran directly on the manager's desktop. I worked with my local copy and sent an update script, and he ran it on a second copy to verify.
Even with humans you shouldn't be working directly against the prod DB in these cases!
Yes, I just think there's a sane way to do things that is not "never let LLM agents do things".
For dev/prod staging though, there's that other story on HN right now of an LLM agent that maneuvered its way to prod credentials and destroyed prod. And the backups went along with it. I'm paranoid enough to think backups in this use case mean out-of-band, uncorrelated storage.
I just think there's more nuance to it. Some things have an implicit RTO/RPO/SLA of say a day. Risk is also correlated to recovery and rollback. And there's levels of LLMs out there.
Surely in the Venn diagram of things, there's a slot where it's okay to let a Claude Opus agent run on a process with good backups/recovery? Where taking the risk of a 1-hour restore job is worth the LLM agent velocity?
For extra paranoia: surely even Opus/Mythos can't figure out how to destroy log-level backups sent to immutable storage.
The only nuance I can see is: does the data matter at all? If it does, you shouldn't do this. If it doesn't, then who cares? And why even put it in a database?
This narrative seems to come from people who haven't worked on meaningfully complex software systems. They're more like script kiddies than software developers. I don't mean that in a derogatory manner. They're right that LLMs are unlocking new possibilities in the realm of their work. They just don't realize that these new possibilities are constrained to relatively simple applications, or very thin slices of complex systems.
I use an LLM to access my database occasionally, but never in production and never with write access. It is genuinely useful. It would never be useful in a production setting, though.
It's worth noting too that people should be wary of what a read only user means in database land. There are plenty of foot guns where writes can occur with read-like statements, and depending on the schema, maybe this would be a rollback-worthy situation. You really need to understand your database and schema before allowing an LLM anywhere near it, and you should be reviewing every query.
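Here is a contrived sketch of one such footgun, using SQLite user-defined functions (all table and function names are made up). In Postgres, the analogous traps are a SELECT invoking a volatile function that writes, or a writable CTE:

```python
import sqlite3

main = sqlite3.connect(":memory:")
main.execute("CREATE TABLE users (id INTEGER, name TEXT)")
main.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ada"), (2, "bob")])

# Separate database that the function writes into as a side effect.
audit = sqlite3.connect(":memory:")
audit.execute("CREATE TABLE access_log (user_id INTEGER)")

def log_access(uid):
    # Side effect: every "read" of a row appends an audit record.
    audit.execute("INSERT INTO access_log VALUES (?)", (uid,))
    return uid

main.create_function("log_access", 1, log_access)

# Looks like a harmless SELECT, but it performs two INSERTs.
rows = main.execute("SELECT log_access(id), name FROM users").fetchall()
writes = audit.execute("SELECT COUNT(*) FROM access_log").fetchone()[0]
print(writes)  # 2
```

A database-level read-only role would block this in most engines, but "the query starts with SELECT" is not, by itself, a guarantee of anything.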
That's where I feel this misses the forest for the trees. Relatively simple applications or thin slices exist right now, in production, in critical paths, as spreadsheets/CSVs/files on someone's desktop. That's the pent-up demand I picture out there for developers.
Go to any SMB out there and there's a goldmine of processes that could be improved with LLM agents with full RW access to a database. Where backups are sufficient as a recovery mechanism that is better-than-before.
I think the Venn diagram of people letting LLMs have complete control of their database AND having good backups will have no overlap. The people that would benefit are not the people that have backups.
This is also a good point. Details like this are why I think experienced developers are going to remain relevant for a while yet. Anticipating what can go wrong is such a huge component of what building software systems is about. LLMs can be great at it, but only with the limited context they have, and even then only somewhat coincidentally.
I'm not thinking of SaaS or properly built apps with an API, modeled databases, etc. I'm thinking spreadsheets/CSVs/MS Access that thousands of SMBs use to power their critical paths and someone accidentally deletes. Typically single user, maybe a small team. Infrequent writes, lots of reads.
> Nobody seems to care or notice. I'm watching in disbelief how nobody is pointing out the article is full of inaccuracies.
I don't know. I finished my graduate studies in math a few years ago, and pretty much every textbook by well-known mathematicians was packed with errors. I just stopped caring so much about inaccuracies. Every math book is going to have them. Human beings are imperfect, and great mathematicians are no exception. I'd just download the errata from the uni website and keep it open while reading.
> This does not excuse the article from reversing the meaning of the theorem.
What's with this hyperbole? Even the best math books have loads of errors (typographical, factual, missing conditions, insufficient reasoning, incorrect reasoning, ...). Just look at any errata list published by any university for its set books! Nobody engages in this kind of hyperbole over errors in math books. Only on HN do you see this kind of takedown, which is frankly very annoying. In universities, professors and students just publish errata and focus on understanding the material instead of tearing it down in such a dismissive tone. It's totally unnecessary.
I don't know if you've got an axe to grind here or if you're just generally this dismissive, but calling it "simply not the theorem" or "plain wrong" is a very annoying kind of exaggeration that misses all nuance and human fallibility.
Yes, the precise statement of Birkhoff's representation theorem involves down-sets of the poset of join-irreducibles. Yes, the article omits that. I agree that it is imprecise.
But it's not "reversing the meaning". It still correctly points to reconstructing the lattice via an inclusion order built from join-irreducibles. What's missing is a condition. That's sloppy wording, not the fundamental error you'd have us believe it is.
Feels like the productive move here is just to suggest the missing wording to the author. I'm sure they'll appreciate it. I don't really get the impulse to frame it as a takedown and be so dismissive when it's a small fix.