Hacker News | saganus's comments

From past experiences (and I'm sure I'm not alone here), I can almost guarantee that the senior devs did communicate the problems, but they were ignored or brushed aside.

Very seldom does middle/upper management truly listen to engineers, unless there's buy-in from the CTO/VP to champion the ideas and complaints.


Over time, as devs get more experience, they have seen countless fads come and go. Some worked, some screwed things up, etc. - NONE were the silver bullet / savior that they were touted to be by adherents. So they learn a default "no" or "slowly" response to "we need to do this <buzzword> ASAP" from management who only see $$$. I mean AI companies are telling management that devs will resist AI because "it's so good it will let you replace them", so management is getting their views reinforced by devs saying it's a bad idea.

Yeah, the developers who will argue and gnash teeth for weeks about using an ORM, in the hope it will save a few hours of work they perceive as boring or obvious, are, simultaneously, annoyed and upset at being told to save time with super tools that save time and effort…

Pay no attention to the software output, quality, or competitive displacement of the people selling you tools. LLMs, like cheesy sales strategies, are something so lucrative the only thing you can really do is sell them, first come first served, to other people. Makes so much sense. Why make infinite money when you can sell a course/tool to naive and less fortunate companies? So logical.


The CTO got fired last month, presumably for poor performance. And the director that has taken his place is now all in on AI because he's desperate to turn things around but has no idea how.

He doesn't care. When C-suite execs get fired they get like half a million in severance and go rinse and repeat somewhere else.

And it was the AI's fault. So convenient.

Was the CTO advocating a more measured approach to AI adoption?

I have a feeling that I have witnessed it, although I was told the CTO decided to move on to other challenges.

If you haven't seen this one, I highly recommend it:

Indistinguishable From Magic: Manufacturing Modern Computer Chips

https://www.youtube.com/watch?v=NGFhc8R_uO4&t=2070s

It's quite old but I think there is no modern version of it.

I've tried posting to HN a few times but it hasn't gained traction for some reason, but I find it absolutely mind blowing.


Videos in general don’t get much traction here. Most of the time I don’t want to watch them in this context either, even though I do on other sites.

Maybe it’s just I come here for the old web feel when video was costly, rare and short.


Which is a pity, because lots of videos really need to be seen to be fully appreciated. Especially the ones showing stuff being made. And the ones that tend to show up here are usually worth the time.

I'm totally with the text folks on the 5-hour FOSDEM sessions, though. Give me an accurate transcript I can grep or don't even bother.


There's something to it. I personally am happy to have one of these few precious places left where I can find content to read rather than watch.


Yeah, I understand and partially agree.

However I've discovered wonderful gems like this RAM video.


Video links are naturally gonna get less clicks from people scrolling HN at work :)


The whole process was deep magic to me before I watched that video. It didn't seem less magical after I watched it. More so, if anything.

Asianometry[0] has a number of videos on EUV lithography that cover some of the mind-blowing advances in the years since.

Veritasium[1] recently also made a video on the subject.

[0] https://www.youtube.com/playlist?list=PLKtxx9TnH76RYHY7L1YzE...

[1] https://www.youtube.com/watch?v=MiUHjLxm3V0


I rewatch that every year as a reminder of what we're capable of. Great video.


Yep, me too. Still feels magical after all these years.


This was the one that did it for me: "38C3 - From Silicon to Sovereignty: How Advanced Chips Are Redefining Global Dominance" https://www.youtube.com/watch?v=NdppYYfQJgg

Absolutely insane stuff.


This one I did not know! Thanks for sharing


When I discovered the pronunciation of Houston, TX and Houston, NY... my mind was blown


I've always thought I was the only one experiencing this and felt like I was crazy.

I guess it's "good" to know that I'm not alone.

The number of times I've searched for a ticket that I know is there (because I either have it open in a different tab, or because I just created it) but can't find is just way too many.


The results usually seem completely random to me. It's like the feature never made it out of proof of concept territory. The only advantage of all the email noise Jira sends out is that I can usually search my email for what I'm looking for.


I used JIRA back in 2009 and that is exactly what we did to work around its shitty search function.


I always get sad when I create a ticket and see the "ticket created" toast, then think "oh shoot, I forgot to add a screenshot" and go to click the toast to get to my ticket, but the toast has disappeared. Because then I know I'm going to waste the next five minutes of my life looking for it.

FWIW GitHub has a similarly shitty search interface. Not sure why.


What does AGI look like in your opinion?

Personally, I've used LLMs to debug hard-to-track code issues and AWS issues among other things.

Regardless of whether that was done via next-token prediction or not, it definitely looked like AGI, or at least very close to it.

Is it infallible? Not by a long shot. I always have to double-check everything, but at least it gave me solid starting points to figure out said issues.

It would've taken me probably weeks to figure out without LLMs, instead of the 1 or 2 hours it took.

In that context, I have a hard time imagining what a "real" AGI system would look like that isn't the current one.

Not saying current LLMs are unequivocally AGI, but they are darn close for sure IMO.


> What does AGI look like in your opinion?

Being able to actually reason about things without exabytes of training data would be one thing. Hell, even with exabytes of training data, doing actual reasoning for novel things that aren't just regurgitating things from Github would be cool.

Being able to learn new things would be another. LLMs don't learn; they're pretrained models (it's in the name: GPT) that you send inputs into and get an output from. RAG is cool, but it's not really "learning"; it's just eating a bit more context in order to give a kind of facsimile of learning.

Going to the extreme of what you're saying, then `grep` would be "darn close to AGI". If I couldn't grep through logs, it might have taken me years to go through and find my errors or understand a problem.

I think they're very neat, but ultimately pretty straightforward input-output functions.


Why should implementation matter at all? You should be able to classify a black box as AGI or not.

Well, I guess you lose artificial if there’s a human brain hidden in the box.


If we had AGI we wouldn't need to keep spending more and more money to train these models; they could just solve arbitrary problems through logic and deduction like any human. Instead, the only way to make them good at something is to encode millions of examples into text or find some other technique to tune them automatically (e.g. verifiable reward modeling with computer systems).

Why is it that LLMs could ace nearly every written test known to man, but need specialized training in order to do things like reliably type commands into a terminal or competently navigate a computer? A truly intelligent system should be able to 0-shot those types of tasks, or in the absolute worst case 1-shot them.


To add to this, previously one could argue that LLMs were on par with somewhat less intelligent humans and it was (at least I found) difficult to dispute. But now the frontier models can custom tailor explanations of technical subjects in the advanced undergraduate to graduate range. Simultaneously, I regularly catch them making what for a human of that level would be considered very odd errors in reasoning. When questioned about these inconsistencies they either display a hopeless lack of awareness or appear to attempt to deflect. They're also entirely incapable of learning from such an interaction. It feels like interacting with an empty vessel that presents an illusion of intelligence and produces genuinely useful output yet there's nothing behind the curtain so to speak.


Would a proper SOC2 audit have prevented this?

I've been through SOC2 certifications in a few jobs and I'm not sure it makes you bullet proof, although maybe there's something I'm missing?


SOC2 is just "the process we say we have, is what we do in practice". The process can be almost anything. Some auditors will push on stuff as "required", but they're often wrong.

But all it means in the end is you can read up on how a company works and have some level of trust that they're not lying (too much).

It makes absolutely zero guarantees about security practices, unless the documented processes make those guarantees.


Yeah, that was my understanding as well, so I fail to see how a proper SOC2 would have prevented this.

I mean, ideally a proper SOC2 would mean there are processes in place to reduce the likelihood of this happening, and also processes to recover if it did end up happening.

But the end result could've been essentially the same.


It wouldn't have. lol.


Just so long as it was a proper SOC2 audit, and not a copy-pasted job:

https://news.ycombinator.com/item?id=47481729


I was running my K6-2 and I was _convinced_ it was superior to equivalent Intel CPUs.

Spent hours watching the graph hoping to get triplets and some kind of confirmation that I just found ET.

Miss those days so much.


Thariq has clarified that there are no changes to how SDK and Max subscriptions work:

https://x.com/i/status/2024212378402095389

---

On a different note, it's surprising that a company that size has to clarify something as important as ToS via X


> On a different note, it's surprising that a company that size has to clarify something as important as ToS via X

Countries clarify national policy on X. Seriously, it feels like half of the EU parliament lives on Twitter.


Which makes the whole 'EU first' movement look super weak when the politicians are among the worst offenders.


FYI a Twitter post that contradicts the ToS is NOT a clarification.


What's wrong with using X?


In case you are asking in good faith: a) X requires logging in to view most of its content, which means much of your audience will not see the news, because b) much of your audience is not on X, either from not having social media or from having stopped using X due to its degradation, to put it generally.


I'm not signed in but I can view the above linked tweet just fine.

Plus it's not a real clarification in any way. It's just PR. Even if it were posted on Mastodon or GitHub or anywhere else, I highly doubt you could use it to defend yourself if you got banned for violating their ToS.


You can’t view replies or the tweet thread. You can’t open the politician’s feed either, so you have to know that there is a tweet, and which one it is, to get the information.


c) it is controlled by a direct competitor and can bury / promote your customer communication at will.


Elon has enough sense not to cross that particular bridge.


He was quick to ban links to Mastodon when it was on the rise, I'm not sure why he'd treat SpaceX/xAI competitors any differently.


He's a few bridges past that one now. They've been injecting faults left and right, from hiding tweets and accounts as "unavailable" to sorting replies by spamminess and everything.


Not bad per se but how much legal weight does it actually carry?

I presume zero, but nonetheless it seems like people will take it as valid anyway.

That can be dangerous I think.


ideologically or practically?


Money is a powerful motivator. For better or worse.


Maybe there is a way for the server to ask the client to do the work?

Something similar to proof-of-work but on a much smaller scale than Bitcoin.
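As a rough sketch of what that could look like (hashcash-style; all names here are illustrative, not any existing API): the server hands the client a random nonce and a difficulty, and the client must brute-force a counter whose hash has enough leading zeros before the server will answer the request.

```python
import hashlib
import secrets


def make_challenge(difficulty: int = 4) -> tuple[str, int]:
    """Server side: hand the client a random nonce and a difficulty."""
    return secrets.token_hex(16), difficulty


def solve(nonce: str, difficulty: int) -> int:
    """Client side: brute-force the smallest counter whose SHA-256 of
    "nonce:counter" starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    counter = 0
    while True:
        digest = hashlib.sha256(f"{nonce}:{counter}".encode()).hexdigest()
        if digest.startswith(target):
            return counter
        counter += 1


def verify(nonce: str, difficulty: int, counter: int) -> bool:
    """Server side: a single cheap hash to check the client's work."""
    digest = hashlib.sha256(f"{nonce}:{counter}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Solving costs the client roughly 16^difficulty hashes on average while verifying costs the server exactly one, so the difficulty knob lets you make bulk scraping expensive without noticeably hurting a human with one tab open.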


Just add some delay to your response; we don't have to waste any more energy on meaningless calculation.


Adding delay means you have to keep more connections open at any one time. Parallelism doesn't favor the server if your problem is already a small server getting hit by a big scraper.


How expensive is it to just keep a connection open?


About 20 kilobytes of socket + TLS state, if you've really optimised it down to the minimum. Most server software isn't that lean, of course, so pick a framework designed for running a million or so concurrent connections on a single server (i.e. something like Nginx)
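To make that tradeoff concrete, here's a rough tarpit sketch on an event loop (purely illustrative, not production code): each stalled connection is just an idle coroutine plus socket state, not a thread, which is why a single process can afford to hold a lot of them open while the delay ticks down.

```python
import asyncio

DELAY = 0.25  # seconds to stall each suspected-scraper request


async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    await reader.readline()     # consume the request line
    await asyncio.sleep(DELAY)  # tarpit: the connection just idles here
    writer.write(b"HTTP/1.1 429 Too Many Requests\r\n\r\n")
    await writer.drain()
    writer.close()
    await writer.wait_closed()


async def main():
    # Every waiting client costs one coroutine and one socket, not a
    # thread; the event loop stays free while connections sleep.
    server = await asyncio.start_server(handle, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()

# To run the server: asyncio.run(main())
```

The delay itself is nearly free; what you're spending is the per-connection state the parent comment estimates, so the math only works out if your server stack keeps that state small.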

