From past experiences (and I'm sure I'm not alone here), I can almost guarantee that the senior devs did communicate the problems, but they were ignored or brushed aside.
Very seldom does middle/upper management truly listen to engineers, unless there's buy-in from the CTO/VP to champion the ideas and complaints.
Over time, as devs gain experience, they see countless fads come and go. Some worked, some screwed things up, etc. - NONE were the silver bullet / savior that adherents touted them to be. So they learn a default "no" or "slowly" response to "we need to do this <buzzword> ASAP" from management who only see $$$. And AI companies are telling management that devs will resist AI because "it's so good it will let you replace them", so every dev saying it's a bad idea just reinforces management's view.
Yeah, the developers who will argue and gnash teeth for weeks about adopting an ORM, in the hope of saving a few hours of work they perceive as boring or obvious, are, simultaneously, annoyed and upset at being told to save time with super tools that save time and effort…
Pay no attention to the software output, or quality, or competitive displacement of the people selling you tools. LLMs, like cheesy sales strategies, are apparently so lucrative that the only thing you can really do is sell them, first come first served, to other people. Makes so much sense. Why make infinite money yourself when you can sell a course/tool to naive and less fortunate companies? So logical.
The CTO got fired last month, presumably for poor performance. And the director who has taken his place is now all-in on AI because he's desperate to turn things around but has no idea how.
Which is a pity, because lots of videos really need to be seen to be fully appreciated. Especially the ones showing stuff being made. And the ones that tend to show up here are usually worth the time.
I'm totally with the text folks on the 5-hour FOSDEM sessions, though. Give me an accurate transcript I can grep or don't even bother.
I've always thought I was the only one experiencing this and felt like I was crazy.
I guess it's "good" to know that I'm not alone.
The number of times I've searched for a ticket that I know is there (because I either have it open in a different tab, or because I just created it) but can't find is just way too many.
The results usually seem completely random to me. It's like the feature never made it out of proof of concept territory. The only advantage of all the email noise Jira sends out is that I can usually search my email for what I'm looking for.
It always makes me sad when I create a ticket and see the "ticket created" toast, and then I'm like "oh shoot, I forgot to add a screenshot" and go to click the toast to get to my ticket, but the toast has disappeared. Because then I know I'm gonna waste the next five minutes of my life looking for it.
FWIW, GitHub has a similarly shitty search interface. Not sure why.
Personally, I've used LLMs to debug hard-to-track code issues and AWS issues among other things.
Regardless of whether that was done via next-token prediction or not, it definitely looked like AGI, or at least very close to it.
Is it infallible? Not by a long shot. I always have to double-check everything, but at least it gave me solid starting points to figure out said issues.
It would've probably taken me weeks to figure those out without LLMs, instead of the 1 or 2 hours it actually took.
In that context, I have a hard time imagining what a "real" AGI system would look like that isn't the current one.
Not saying current LLMs are unequivocally AGI, but they are darn close for sure IMO.
Being able to actually reason about things without exabytes of training data would be one thing. Hell, even with exabytes of training data, doing actual reasoning about novel things, rather than just regurgitating things from GitHub, would be cool.
Being able to learn new things would be another. LLMs don't learn; they're pretrained models (it's in the name, GPT) that you send inputs into and get outputs from. RAG is cool, but it's not really "learning"; it's just stuffing a bit more context in to give a facsimile of learning.
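To make the "just more context" point concrete, here's a toy sketch of what RAG boils down to. Everything here is made up for illustration: `retrieve` is a naive word-overlap ranker standing in for a vector store, and `complete` is a stand-in for a real LLM API call. The point is that the model's weights never change; only the prompt gets bigger.

```python
# Toy RAG sketch: all the "learning" is pasting retrieved text into the prompt.

def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    # Naive relevance: rank docs by word overlap with the query.
    words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(words & set(d.lower().split())))[:k]

def complete(prompt: str) -> str:
    # Hypothetical stand-in for calling a frozen, pretrained model.
    return f"(model output for a {len(prompt)}-char prompt)"

def answer(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return complete(prompt)  # same frozen model, just a bigger prompt

print(answer("why did the deploy fail?", [
    "deploy failed: missing env var DB_URL",
    "lunch menu for tuesday",
    "post-mortem: DB_URL was unset in staging",
]))
```

Nothing persists between calls: ask the same question tomorrow without the docs and the model is exactly as ignorant as it was before.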
Going to the extreme of what you're saying, then `grep` would be "darn close to AGI". If I couldn't grep through logs, it might have taken me years to go through and find my errors or understand a problem.
I think they're very neat, but ultimately pretty straightforward input-output functions.
If we had AGI, we wouldn't need to keep spending more and more money to train these models; they could just solve arbitrary problems through logic and deduction like any human. Instead, the only way to make them good at something is to encode millions of examples as text, or find some other technique to tune them automatically (e.g. verifiable reward modeling with computer systems).
Why is it that LLMs can ace nearly every written test known to man, but need specialized training to do things like reliably type commands into a terminal or competently navigate a computer? A truly intelligent system should be able to 0-shot those types of tasks, or in the absolute worst case 1-shot them.
To add to this, previously one could argue that LLMs were on par with somewhat less intelligent humans and it was (at least I found) difficult to dispute. But now the frontier models can custom tailor explanations of technical subjects in the advanced undergraduate to graduate range. Simultaneously, I regularly catch them making what for a human of that level would be considered very odd errors in reasoning. When questioned about these inconsistencies they either display a hopeless lack of awareness or appear to attempt to deflect. They're also entirely incapable of learning from such an interaction. It feels like interacting with an empty vessel that presents an illusion of intelligence and produces genuinely useful output yet there's nothing behind the curtain so to speak.
SOC2 is just "the process we say we have, is what we do in practice". The process can be almost anything. Some auditors will push on stuff as "required", but they're often wrong.
But all it means in the end is you can read up on how a company works and have some level of trust that they're not lying (too much).
It makes absolutely zero guarantees about security practices, unless the documented processes make those guarantees.
Yeah, that was my understanding as well, so I fail to see how a proper SOC2 would have prevented this.
I mean, ideally a proper SOC2 would mean there are processes in place to reduce the likelihood of this happening, and also processes to recover if it did end up happening.
But the end result could've been essentially the same.
In case you're asking in good faith: a) X requires logging in to view most of its content, which means much of your audience won't see the news, because b) much of your audience is not on X, either because they don't have social media or because they've stopped using X due to its degradation, to put it generally.
I'm not signed in but I can view the above linked tweet just fine.
Plus, it's not a real clarification in any way. It's just PR. Even if it were posted on Mastodon or GitHub or anywhere else, I highly doubt you could use it to defend yourself if you got banned for violating their ToS.
You can't view replies or the tweet thread.
You have to already know about every single tweet.
You can't open the politician's feed, so to get the information you have to know that a tweet exists, and which one it is.
He's a few borders past that bridge now. They've been injecting faults left and right, from hiding tweets and accounts as "unavailable" to sorting replies by spamminess and everything in between.
Adding delay means you have to keep more connections open at a time. Parallelism doesn't favor the server if your problem is already a small server getting hit by a big scraper.
About 20 kilobytes of socket + TLS state per connection, if you've really optimised it down to the minimum. Most server software isn't that lean, of course, so pick a framework designed for running a million or so concurrent connections on a single server (e.g. something like nginx).
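For a rough sense of why holding a delayed connection is cheap, here's a minimal sketch assuming an async runtime, where each held connection costs one coroutine plus kernel socket state rather than a whole thread. The port, delay, and response body are arbitrary choices for illustration:

```python
# Sketch of a tarpit-style responder: hold each connection open for a
# while before answering, at near-zero CPU cost per connection.
import asyncio

DELAY_SECONDS = 30  # arbitrary delay to slow a scraper down

async def handle(reader: asyncio.StreamReader,
                 writer: asyncio.StreamWriter) -> None:
    await reader.read(4096)             # read (and ignore) the request
    await asyncio.sleep(DELAY_SECONDS)  # the "delay" is just a sleeping coroutine
    writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

Back-of-envelope: at ~20 KB per connection, a million held connections is on the order of 20 GB of state, so the parent's point about small servers still bites unless you've trimmed well below that per-connection figure.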