Hacker News | new | past | comments | ask | show | jobs | submit | login
deeviant's comments

Have you people ever read human-generated code? Good grief, you act like human code is not a disaster 9 times out of 10.

It's quite bad at role play in my (rather large) experience.

I have an AI play 3 characters in my group's D&D campaign; it doesn't follow instructions well, and its prose, from a creative standpoint, doesn't hold a candle to Claude's.


It's more like saying, "and you may now only use the Porsche for 5 minutes out of every day."


Full brake on the autobahn if you hit your 5-minute limit.


There are ~7 born per minute. 95% of them are retail investors.

If you want the actual reason, it's because he uses it as a money battery, i.e. funding xAI and SPACE DATA CENTERS.


The surprising thing here is that anybody would ever think it was random. Did they not notice the LLM reusing the same names over and over again, too?

However, "make me a python script that generates a random password" works.

Skill issue.
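For what it's worth, a minimal sketch of the kind of script that prompt should produce, using Python's standard-library `secrets` module (cryptographically suitable, unlike `random`):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a password by sampling uniformly from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(generate_password())
```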


Projects that deny AI contributions will simply disappear once an agent can reproduce their entire tech stack from a single prompt, within a couple of years (not there yet, but the writing is on the wall at this point).

Whatever the right response to that future is, this feels like the way of the ostrich.

I fully support the right of maintainers to set standards and hold contributors to them, but this whole crusade against AI contributions feels performative at this point, almost pathetic: the final stand of yet another class of artisans watching their craft be taken over by machines. And we won't be the last.


The fact that it will be easy to clone something doesn't automatically change much.

Why do you assume that I will actually use a random clone instead of the original?

The original will somehow not have all the same benefits as any of the clones that aren't garbage?

What makes the original project disappear?


> It was drawing on what gets engagement

I do not think LLMs optimize for 'engagement'; corporations do. LLMs optimize for statistical convergence, and I don't find that this results in an engagement focus, though your opinion may vary. It seems like LLM 'motivations' are whatever a given writer needs them to be to make a point.


I have probably pulled out Postgres 10 or more times for various projects at work. Each time I had to fight for it, each time I won, and each time it did absolutely everything I needed it to do, and did it well.


I can't imagine why you would want a job processing framework limited to a single thread, which makes this seem like a paid-version-only product.

What does it have over Celery?


The vast majority of tasks you use a job processing framework for are IO-bound side effects: sending emails, interacting with a database, making HTTP calls, etc. Those are hardly impacted by running on a single thread. It works really well embedded in a small service.
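A rough illustration of why one thread is enough for IO-bound work, using asyncio (the `send_email` here is a hypothetical stand-in that just sleeps to simulate network latency):

```python
import asyncio
import time

async def send_email(recipient: str) -> str:
    # Simulated network IO: while this task waits, the single-threaded
    # event loop is free to run the other pending tasks.
    await asyncio.sleep(0.1)
    return f"sent to {recipient}"

async def main() -> list[str]:
    # 50 sends complete in roughly 0.1s total rather than 5s,
    # because all the waits overlap on one thread.
    return await asyncio.gather(
        *(send_email(f"user{i}@example.com") for i in range(50))
    )

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
```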

You can also easily spawn as many processes running the CLI as you like to get multi-core parallelism. It's just a smidge more overhead than the process pool backend in Pro.

Also, not an expert on Celery.


I use celery when I need to launch thousands of similar jobs in a batch across any number of available machines, each running multiple processes with multiple threads.

I also use celery when I have a process a user kicked off by clicking a button and they're watching the progress bar in the gui. One process might have 50 tasks, or one really long task.

Edit: I looked into it a bit more, and it seems we can launch multiple worker nodes, which doesn't seem as bad as I originally thought.


RTX Pro does not have NVLink, because money. Otherwise, people might not have to drop $40,000 for a true inference GPU.

