... which is exactly how we know that LLMs are not conscious. We can't really explain consciousness. We can absolutely explain LLMs. The math is heavy and massive, but explainable. We can explain it layer by layer until we show that, at its most basic level, it is still just a series of 0s and 1s.
Going in with an adversarial approach is just going to end in conflict and burned bridges. You want to get a deeper understanding of why everyone is doing what they are doing, then help people find a path forward that meets their goals in a healthy way.
If LLM vendors are doing it for the money, you need to show your employers that following the AI vendors' lead doesn't help the company; it just sends revenue to the vendors. You need to show how to achieve the same business results without using the AI tools.
And that is possible - I'm a consultant, and all the AI-First companies I've worked with have some truly awful results going on internally. Their metrics look good, they are delivering code, even delivering and selling new features and products. But they are also piling up internal tensions and debt that are going to exact a huge cost some day. If you can show them those tensions and costs, you can change how they operate.
As I see it, there's no incentive to find a better path forward. We're in the age of "good enough". It doesn't even need to actually be any good, just "good enough". Tensions are rising and technical debt is sky high. Thing is, as long as the money keeps pouring in, there's no incentive to solve any of that. With AI taking over most trivial programming tasks, and the current downsizing trend, even if the last few reasonable people in a company finally get fed up and leave, there's plenty of meat for the grinder.
These are typically posted at the end of the month. When people just rando post them every few days/week, all it does is create noise and dilute the meaning of everyone's post.
1) "they are incrementally migrating from Angular 1 to React" - Did they actually learn React? Because the worst React apps I've seen are from teams who just took their habits from another platform and re-coded their comfort zone in React.
2) "I should not attempt to make contributions outside of the tickets I am assigned, and I am expected to raise 1 PR per day" - Did you get hired as a junior? Because they are treating you like one. I would not accept such little autonomy if I had over a decade of experience.
Aside from that, they are clearly not working with fully modernized practices. This is not necessarily a bad thing. If they perform well and have a smooth-running shop, being a bit behind the curve is fine. But if their lack of modern practices is causing the other friction, then it is a problem. Still, there isn't an inherent correlation between "behind the times" and "bad team".
The most important question isn't any of this, though. Are you happy there? If you hate it there, that is all you need to know.
> Distillation is the technique at the centre of the dispute. It does not require stealing model weights or breaking into servers. A distiller feeds thousands or millions of carefully constructed queries to a frontier AI model, collects the responses, and uses those responses to train a cheaper rival model that approximates the original’s capabilities at a fraction of the cost.
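The loop the quote describes can be sketched in a few lines. Everything here is a toy stand-in (the `teacher` function and `Student` class are purely illustrative, not any real model API); a real distiller would call a frontier model's API and fine-tune a smaller network on the collected pairs.

```python
def teacher(query: str) -> str:
    # Stand-in for the expensive frontier model being queried.
    return query.upper()

def collect_training_pairs(queries):
    # Step 1: feed carefully constructed queries to the teacher
    # and record its responses.
    return [(q, teacher(q)) for q in queries]

class Student:
    # Stand-in for the cheaper rival model. "Training" here just
    # memorizes the teacher's behavior on the collected pairs.
    def __init__(self):
        self.memory = {}

    def train(self, pairs):
        self.memory.update(pairs)

    def answer(self, query: str) -> str:
        # Approximates the teacher on anything it has seen,
        # without ever touching the teacher's weights.
        return self.memory.get(query, "")

pairs = collect_training_pairs(["hello", "world"])
student = Student()
student.train(pairs)
```

The key point the quote makes survives even in this toy: the student only ever sees query/response pairs, never weights or server internals.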
Just so I'm sure I understand this correctly... The USA is ticked at China for training new LLMs on pre-existing content/data held by private corporations, which they freely exposed to the internet. But not ticked at those corporations for having trained LLMs in the first place on the content created by private citizens?
Yes, and it has been said since day one of LLMs that all we need to do is keep things that way - no action without human intervention. Just like it was said that you should never grant AI direct access to change your production systems. But the stories of people who have done exactly that and had their systems damaged or deleted show that people aren't even trying to keep such basic safety nets in place.
AI is getting strong enough that if people give it some general direction as well as access to production systems of any kind, things can go badly. It is not true that all implementations of agentic AI require human intervention for every action.
My cynical rule of thumb: by default, we should imagine LLMs like JavaScript logic offloaded into a stranger's web browser.
The risks are similar: No prompts/data that go in can reliably be kept secret; A sufficiently-motivated stranger can have it send back completely arbitrary results; Some of those results may trigger very bad things depending on how you use or even just display them on your own end.
P.S. This conceptual shortcut doesn't quite capture the dangers of poisoned data, which could sabotage all instances even when they happen to be hosted by honorable strangers.
The problem is, out of ten companies who take this approach, nine will indeed destroy themselves and one will end up with a trillion-dollar market cap. It will outcompete hundreds of companies who stuck with more conservative approaches. Everybody will want to emulate company #10, because "it obviously works."
I don't see any stabilizing influences on the horizon, given how much cash is sloshing around in the economy looking for a place to land. Things are going to get weird, stupid, and chaotic, not necessarily in that order.
On a more serious note, they were mostly f*cked by their PaaS provider, IMO. Claude will always do dumb shit, especially if you tell it not to do something... By doing so, you generally increase the likelihood of it doing it.
It's even obvious why, if you think about it: the pattern of "you had one job, but you failed" or "the one thing that couldn't happen, happened!", in all its forms, is all over literature, online content, etc.
But their PaaS provider not scoping permissions properly is the root cause, all things considered. While Claude caused this particular incident, something else would've happened eventually otherwise.
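The scoping being described can be as simple as never handing the agent an object that has destructive methods at all. A toy sketch (the `Database` and `ReadOnlyHandle` classes are illustrative, not any real PaaS SDK):

```python
class Database:
    # Stand-in for the provider's full-privilege resource.
    def __init__(self):
        self.rows = {"users": ["alice", "bob"]}

    def read(self, table):
        return list(self.rows.get(table, []))

    def drop(self, table):
        self.rows.pop(table, None)

class ReadOnlyHandle:
    # The only surface the agent ever receives: destructive methods
    # simply do not exist here, so a confused agent cannot call them.
    def __init__(self, db):
        self._db = db

    def read(self, table):
        return self._db.read(table)

db = Database()
agent_view = ReadOnlyHandle(db)
users = agent_view.read("users")  # allowed
# agent_view has no .drop(), so "dumb shit" is bounded by construction.
```

With scoping like this, the agent misbehaving is an annoyance rather than an outage.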
Also, some folks seem to be forgetting the virtues of boring, time-tested platforms & technologies in their rush to embrace the new & shiny & vibe-***ed. & also forgetting to thoroughly read documentation. It’s not terribly surprising to me that an “AI-first” infrastructure company might make these sorts of questionable design decisions.
The problem is that destruction isn't contained to the company. If an AI agent exposes all company data and that includes PII or health information, that could have an impact on a large number of people.
PII breaches have been pretty consistently a problem for the last several decades, predating modern LLMs.
So that is a structural problem with their data and security management and operations, totally independent of the architecture for doing large scale token inference.
Remember that these models are getting better; this means they get trusted with increasingly more important things by the time an error explodes in someone's face.
It would be very bad if the thing which explodes is something you value which was handed off to an AI by someone who incorrectly thought it safe.
AI companies which don't openly report that their AI can make mistakes are being dishonest, and that dishonesty would make this normalization of deviance even more prevalent than it already is.
That’s not a technical/AI problem in any sense, that’s a social problem in organizing and coordinating control structures
Further, it’s only a problem to the extent that the downsides or risks are not accounted for which again… is a social problem not a technological problem
This isn’t a problem for organizations that have well aligned incentives across their workflows
A well organized company that has solid incentives is not going to diminish their own capacity by prematurely deploying a technology that is not capable of actually improving
The issue is that 99% of the organizations that people deal with have entirely orthogonal incentives to them. They are then attributing the pain in dealing with that organization to the technology rather than the misaligned incentives
> That’s not a technical/AI problem in any sense, that’s a social problem in organizing and coordinating control structures
As @TeMPOraL here likes to point out, it can be genuinely fruitful to anthropomorphise AI. I only partially agree: this is true for *some* of the failure modes.
> A well organized company that has solid incentives is not going to diminish their own capacity by prematurely deploying a technology that is not capable of actually improving
Sure, but society as a whole doesn't have the right solid incentives to make sure that companies have the right solid incentives to do this. We can tell this quite easily by all the stupid things that get done.
> The issue is that 99% of the organizations that people deal with have entirely orthogonal incentives to them.
This is also fundamentally the AI alignment problem: all AI are trained on some fitness function which is a proxy for what the trainer wanted, which is a proxy for what incentives their boss gave them, which is a proxy that repeats up to the owners in a capitalist society, which is a proxy for economic growth, which is a proxy for votes in a democracy, which is finally a proxy for the good.
I wrote a whole ass paper at the end of 2022 demonstrating that unless we fix society we will deterministically create anti-social AGI because humans do not generate pro-social data.
To be fair, the author of the post said the same thing. From the other thread on HN, they themselves said: "Nobody should cry over a SaaS, of all things. But GitHub has meant so much more to me than that (all laid out in the post). I have an unhealthy relationship with it."
I'm pretty sure there was a Black Mirror episode about social scoring dictating people's value/relevance. That was a good place for such a concept, because letting social media sites dictate someone's relevance is just weird. Relevance is a personal opinion, and should remain that way. People are free to stop following others. It works, and isn't dystopian.