“In producing textiles but has there been actual positive impact in other sectors?”
I’m sure the Industrial Revolution didn’t just happen all at once; it started somewhere and crept outward.
In case they don’t reply, I’m betting it was when they changed their API and broke a lot of existing hardware that took a long time to get working right again.
> I’m all for paying for a service that offers fair value
They say this about not paying for a product that lets them stream pirated content from other services :D. I’m annoyed by all the various streaming services too, but at some point you just gotta admit you don’t want to pay for anything regardless of value.
I have a coworker who says something similar. He vibe-coded tons of cryptic code that does solve some problems, but it could be far more compact and well structured. Now it’s hitting a complexity ceiling: the LLM can no longer comprehend it, and a human can’t comprehend it by an even larger margin.
It will comprehend it well enough to complicate it further into a rat’s nest that only Opus 4.9 can comprehend, and so on. Good luck if you run into a bug before the N+1 version launches.
It’s a bit of workplace politics: I would need to call that guy out and tell him he’s not a hyper-performer, but has just pushed lots of low-quality code that will produce a lot of negative impact in the long term.
Also, I am not sure it would be trivial to replace. The code is injected into many scenarios and workflows, so replacing it will be painful and risky if the new solution breaks some edge case.
It sounds like you might have some larger process problems if someone can just inject a bunch of vibe-coded slop into critical workflows while more discerning eyes are dubious of the quality/reliability etc.
In some sense, sure. There are a lot of processes that weren’t previously needed, because sloppy people who couldn’t or wouldn’t think things through were mostly incapable of producing PRs that passed all the existing tests.
It’s partially (or largely) a management problem. One of the tier-1 productivity metrics in the group is the number of LoC created by engineers, so it creates a dynamic where people exchange favors by pushing AI slop into the codebase, or get labeled as low performers.
Aren’t you just making their point stronger? Effort is what is being replaced here; with some taste and a pile of AI (formerly effort), you can go to the moon.
In other words, it requires a tremendous amount of effort to fully communicate your tastes to the AI. Not everybody wants to expend the time or mental effort doing this! (Once we have more direct brain/computer interfaces, this effort will go down, but I expect it will not be eliminated fully)
This is the second time in two days I've seen a subthread here with folks seemingly debating whether or not defining and communicating requirements counts as work if the target of those requirements is an LLM system.
I'm confused as to why this is even a question. We used to call this "systems analysis" and it was like... a whole-ass career. LLMs seem to be remarkably capable of using the output, but they're not even close to the first software systems sold as being able to take requirements and turn them into working code (for various definitions of "requirements" and "working").
I'm also skeptical that direct brain interfaces would make this any less work; I don't think "typing" or "english" are the major barriers here, any more than "drafting" is the major barrier to folks designing their own cars and houses... Any fool thinks they know what they need!
At some point, just an idea will be enough for your Neurolink to spawn an agent to create 1000 different versions of your idea along with things that mimic your tendencies. There will be no effort, only choice.
As both a software engineer and a creative, I absolutely do not want 1,000 versions of what I am trying to make generated for me. I don't care if it's free or even cheap. I want to make things.
I know this is a concept deeply alien to a lot of HN's userbase but I did not get into programming or making art to have finished products; that's a necessary function that is lovely when it's reached, but ultimately, I derive my enjoyment from The Process. The process of finding a problem a user has, and solving it.
And yes I'm sure Claude could do it faster than me (and only at the cost of a few acres of rainforest!) but again, you're missing the point. I enjoy the work. That is not a downside to me.
Deciding between 1000 different versions is a lot of effort IMO. With manual coding, you’re mostly deciding one decision point at a time, which is easier when you think about it. It just requires foresight, which comes from experience.
Not really. The effort required to produce the same result has declined, but it has been on the decline for many decades already. That is nothing new. Of course, in the real world, nobody wants the same result over and over, so expectations will always expand to consume all of your available effort.
If there is some future where your effort has been replaced, it won't be AI that we're talking about.
Effort is still (and probably will always be) the hardest thing to replace.
Any time someone says AI can do this, and do that, and blah blah, I say ok, take the AI and go do that... the barrier to entry is so low you should be able to do whatever you want. And they say, oh, no, I don't want to do that (or can't, or whatever). But it should be able to be done... And I just nod, and sip my drink, and ...
.. and I'd like to point out these are seasoned professionals whom I've seen put effort into other things in their careers, who have the capacity to do literally whatever it is they want to do, especially now... and they choose not to, at least not without someone guaranteeing them a paycheck or telling them they have to do it to survive.
I’m not discounting that you feel this way, but this argument feels performative or like virtue signaling, in the same way I’ve had to deal with people arguing “I don’t have kids because children ruin the planet, we all should have less kids.” There are so many “what about”s here that unless the person making the argument is living in a tent, sustainably growing their own food, it doesn’t feel intellectually honest.
I was a coach the last two years for this, and I found the FIRST program to be neither productive nor enjoyable for the children or the adults. We all joined for the robotics, but 75% of your score is not about robotics; fully 50% is about displays and presentations. On top of that, as others have alluded to, the missions rewarded brute-force attempts at perfect replays as opposed to problem solving, and didn't get into some of the more interesting sensors available. Add to that, the upcoming year is focused on inclusion by relying on vibe coding, which is the opposite of the direction they should be going.
I hope Lego can find a partner more focused on the robotics and not the pageantry and performance.
I think you’re both right but missing a bigger issue, which is the implication that these provided tokens are what developers will use to develop with at work, “to be more productive.” That’s extra savage: today your work provides you access to AI; in the future you pay for that access out of your own “pocket.” It’s like being a lumberjack and having to bring your own chainsaw and gas.
It's still a big leap from what he actually said to assume that one would be restricted to an individual budget.
It already varies by company today (some 'unlimited', some with a limit shared across the whole team).
All he's saying is that he expects to be paying not a couple of hundred a month per employee, or mere thousands, but about half of an employee's salary per employee on LLM usage. The article acts like he said it was going to come out of the employees' pockets directly, with a corresponding pay cut!
It's more like being a lumberjack where they provide the chainsaw but you provide the gas. If you don't have tokens, use your muscle! If you do have tokens, they promote you to team lead, and you can sell tokens on the black market.