Hacker News | Fr0styMatt88's comments

Yeah it’s when you go off the happy path that it gets difficult. Like there’s a weird behaviour in your vibe-coded app that you don’t quite know how to describe succinctly and you end up in some back-and-forth.

But man AI is phenomenal for getting stuff out of your head and working quick.


I feel the exact same way about tutorials in games that try and be comprehensive and show you everything.

Incremental games do an amazing job at this (things like Universal Paperclips, A Dark Room, etc); parts of the game are revealed to you as you need them and it's often a fun surprise. I don't think the same thing is directly applicable to productivity apps, but I wonder if something could be taken from the pattern.

This is timely -- I'm coding an app at the moment and had the fleeting thought that "hey I should do a new user onboarding tour thingy" and then remembered that in general I skip them, so I haven't made one :)


> I feel the exact same way about tutorials in games that try and be comprehensive and show you everything.

For those, an in-game encyclopedia and/or external wiki is a much better solution.


Thank you, I was starting to wonder.

I guess because I’m in game dev maybe, but in all my jobs knowing about the underlying stack has either been necessary knowledge or highly regarded.

I can’t think of any time in my career where knowing about the internals of the stack was ever frowned upon or where it’s been anything other than an advantage (especially when hunting bugs). I must have been lucky.


How did it get in? Isn’t Linus known for being rightfully fussy about what makes it into the kernel?

Would be an interesting story.


Linus has only ever been fussy about maybe 5% of things, because even then he couldn't keep up with the sheer volume. Nowadays it's more like 1‰.

I feel like it’s something more fundamental and broad than that. We slowly remove excuses to talk to other people.

The thought crossed my mind the other day — if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.

It’s not just in coding, it’s everything. With ChatGPT always available in your pocket, what social interactions is it replacing?

The thing that gets me is, we are meant to fundamentally be social creatures, yet we have come to streamline away socialisation any chance we get.

I’m guilty of this too — I much prefer Doordash to having to call up the restaurant like in the old days, for example.


We see this in our open-source community. We've had a community channel for over two decades, where community members help newcomers and each other solve problems and answer questions.

Increasingly we have people join who tell us they've been struggling with a problem "for days". Per routine, we ask for their configuration, and it turns out they've been asking ChatGPT, Claude or some other LLM for assistance and their configuration is a total mess.

Something about this feels really broken, when a channel full of domain experts is willing to lend a hand (within reason) for free. But instead, people increasingly turn to the machines which are well-known to hallucinate. They just don't think it will hallucinate for them.

In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.


> In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.

The AI companies have taken all the wrong lessons from social media and learned how to make their products addictive and sticky.

I’m a certified hater, but even I’ve fallen into the exact trap you’re describing. Late last year I was in the process of buying a house that had a few known issues with a 30 day close. I had a couple sleepless nights because I had asked ChatGPT or Claude about some peculiar situation and the bots would tell me that I was completely screwed and give me advice to get out of the contract or draft a letter to the seller begging for some concession or more time. Then the next day I’d get a call from the mortgage guy or the attorney or the insurance broker and turns out, the people who actually knew what they were doing fixed my problem in 5 minutes.


So have you stopped using ChatGPT and Claude?

This _is_ all true, but what's also true is that there's a historical pattern (in many communities) of "n00bs" not being, or at least not _feeling_, welcome. So, I can't say I blame people for spinning in circles with LLMs instead of starting with forums or mailing lists where they may be shamed or have their questions closed immediately as "duplicate" or "off-topic" (e.g. SO).

I think if we want newcomers to lead with human interactions, the onus is on us community leaders/elders/whatever to be a little warmer, more understanding, and more forgiving. (Of course, some communities and venues are already very good about all of this and I'm generalizing to make the larger point.)


Personally this type of behavior played a large part in why I left 2 oss communities.

A lot of the passersby nowadays feel like trolls. They come in copy-pasting ChatGPT responses and spamming that they need help, instead of chit-chatting and asking questions. We fix their problems, and they don't trust us or understand at all. Or worse, we tell them their situation is unreasonably bad and they should start over, and they scream at us about how some unimaginably bad code passes tests and compiles just fine, and how we are dumb.

They tell us we don't need to exist anymore, in one way or another. They show off terrible code, we try to offer real suggestions to improve it, and they don't care. Then they leave the community once their vibe/agentic coding moves past that part of their code base. A complete waste of time: they learned nothing, contributed nothing, no fun was had, no a-has, just grimy interactions.


I’m subscribed to a couple of mailing lists and follow the archives of a few others. I wonder if the friction associated with the medium is why I haven’t seen those shenanigans?

I should look into mailing lists. That would be a great filter for the "I need it now at any cost" interactions. Thank you for the indirect advice.

I think we are going to see a large movement of designing friction in the next decade.

I have switched to OpenWRT during the LLM era. I wanted to set up some special network configs, and ChatGPT happily spit out the necessary configs.

From what little I understood from OpenWRT everything looked fine, but nothing worked. I still to this day have no idea what I (or ChatGPT) did wrong.

I just reset the router, actually took the time to do everything by the docs, and then it worked.

Debugging someone's broken code that never worked is a nightmare I wouldn't wish on anyone.


People are losing their ability to reason without prompting an LLM first.

It's affecting their ability to collaborate. They retain the confidence of years of experience, but their brain isn't going through the appropriate process anymore to check their assumptions.

I've seen a similar thing happen to engineers who move into management, but this is now happening at such a large scale.


> if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.

Importantly, you're removing a signal: if I'm not asked things anymore, I don't know which aspects of our domain cause the most confusion and misunderstanding, and would therefore benefit most from having their boundaries simplified.


There is a lot of wisdom in this.

At the end of the day chatgpt won't be there to hold our hands in the hospital, have a laugh over failing to pick up a date, get invited to a bbq, groan over the state of the code in utils.c, or recommend us for our next job/promotion. They say software is social for a different reason than most of these examples.

It's good to be efficient, whatever that means, but there are no metrics on the gains that get made by talking to people. In a lot of ways those gains are what life is about.


> At the end of the day chatgpt won't be there

Are you sure it won't?


Yes. 100%. ChatGPT can't get drunk with you, share personal experiences, grill food for you, or network with humans on your behalf. At some point certain people have to choose to live a life; otherwise, why have one anyway?

I think you are right, but it also makes sense. Human communication is inherently inefficient. Points of view, miscommunication, interpretation... It's the obvious point to automate. Not defending it, just my thoughts

I have a couple of colleagues that run all communication through an LLM. It really helps their writing, but it does nothing to help their understanding.

It also makes me hate communicating with them because they'll (somewhat obviously) prompt the LLM to make the conclusion they want. For example, "respond to this jira with why this isn't an issue"


Yes, fully agree. Automatic communications should always be optional in the sense that you should offer that to someone but never force

Sometimes I don't feel like having to make a phone call, but sometimes I much rather talk to a human


You could have done this with Google search or Wikipedia or reading through books though

I am rereading the Asimov robot novels. A decrease in human to human interaction is a major side effect that he has foreseen. Decreasing interaction and collaboration are some of the core themes.

Apps like Doordash have introduced me to many good restaurants which I've then visited in person.

i see what you did there :)

hahaha took me a bit to get what you meant.... Yep I've been reading LLM output a lot lately lol

It’s really really inconsistent. Sometimes select all is available, sometimes not. Sometimes the handles don’t work. Selecting text in a scrollable region is fiddly, etc.

I’ve seen an insane drop in the quality of swipe typing recently as well. To the point where I’ll often go back to regular typing. I’ve made maybe six or more corrections just to this paragraph alone.


I think swipe typing proposes words without consulting any higher-level language model (not even word tuples) when matching the letter sequence you swiped.

and it drives me crazy too.

I've just had good luck it seems with text select.

Have you found any way to do a Find within a span of text on iOS? That would be very useful, but I haven't seen it.


Will drop this here in case you’re not aware of it (but I’m guessing you probably are), sorry if a bit off-topic.

I’m low-vision and made great use of Microsoft Soundscape until it got discontinued. I’d been waiting for an alternative for ages and didn’t realise one actually got released and is on the app store!

VoiceVista:

https://apps.apple.com/au/app/voicevista/id6450388413


I absolutely LOVE Voice Vista! It is an amazing bit of software. I wasn't able to use Soundscape when it first came out because it was never made available in my region, but VV is, and I would never want to miss it anymore when traveling. I love it. A lot.


This is actually one thing I think will be great as AI coding agents get better. Companies whose main expertise is hardware might start producing better software.

There are so many little bugs in consumer-facing apps that hit the ‘sweet’ spot: incredibly annoying, but just not worth putting an engineer on for a week to fix, yet totally worth having an engineer throw an agent at.


How? Coding agents are trained on every copy of every tutorial that skips error checking and implements the least resistance path.


I find that the code AI likes to write actually checks for “errors” too often, in places where you often wouldn’t even want to. You don’t need to check every dictionary access and come up with a default value, for example.
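A minimal Python sketch of the pattern (the `config` dict and key names here are made up for illustration):

```python
config = {"host": "localhost", "port": 8080}

# The defensive style LLMs tend to emit: a fallback on every access.
# A typo like "prot" then silently yields a plausible-looking default
# instead of an error.
silently_wrong = config.get("prot", 8080)

# Plain indexing fails fast instead, surfacing the mistake at the call site.
def read_port(cfg):
    return cfg["port"]  # raises KeyError only if the key is genuinely missing
```

The `.get` call papers over the misspelled key, while plain indexing would have raised a `KeyError` immediately; a required key usually deserves the latter.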


I mean I would hope at least one person actually reviews the code before it goes out, but yeah we all know what hope does :)


This is actually one thing I think will be great as AI coding agents get better. Companies whose main expertise is code reviewing might start producing better software.


Yeah, fixing an annoyance while introducing one or two SEV-1s is for sure going to be great progress.


Curious as an outsider what you mean with US politics? Seems like Apple has a pretty strong stance when it comes to things like privacy that pushes back on some things (that could be smoke and mirrors though I guess).


The privacy is more of a market position thing than it is a political thing.

Apple has led the industry on hardware but is woefully behind on the software and services front. Focusing on device-level privacy controls turns what would be a gap into a moat, and it helps deprive Google and other services of the ability to monetize Apple's customer base.

Not to say that it's not something the company is passionate about, but it's also good for their business. Especially when you compare it to things like human rights, transparency, and security research, where Apple could take a stronger stand but doesn't.


> The privacy is more of a market position thing than it is a political thing.

It is a market position, but companies do have some choice in which market positions they choose to take. And I wouldn't underestimate the effect of the personal views of the CEO in that.


> and it helps deprive Google and other services from monetizing their customer base.

The payment Apple gets from Google for being the default search might help explain this. It would be hard to turn down the sums Apple gets.

https://9to5mac.com/2025/09/03/just-one-word-in-the-google-a...


If you’re referring to their AI services being ‘woefully behind’, that’s just a market sector they’ve chosen not to focus much effort on. That was a sensible gamble too, given how unpredictable that sector has remained five years in.

I’m not sure what else they are behind on frankly, as their current offerings have been extremely stable from day dot.

How many products has Google released and killed in the past 20 years? Apple managed to land on a good thing with iTunes and iPhoto in the early oughts, and managed to transition those core services into Apple Music and iCloud with little to no disruption to users. iCloud is generally a pretty predictable service that delivers on a core set of user requirements very well.

Also, their productivity suite isn’t meant to completely replace Office, and for a free package, it meets many users’ needs perfectly fine.


> That was a sensible gamble too, given how unpredictable that sector is five years after it was released.

Define sensible. Apple's B2C margins are peanuts compared to what Nvidia's commanding right now, and they're both ARM retailers competing for the same cutting-edge fab space.


>but is woefully behind on the software

iOS is ahead on software security compared to Android, Windows, Desktop Linux, etc.



I am referring to things like an app being able to escape the sandbox and potentially further escalate privileges.


Are you referring to any security features in particular? There's a new zero-click exploit every 6 months for iOS, and NSO Group is showing no signs of slowing.



A hardware vulnerability is separate from how good Apple has been at hardening the OS against attackers.



If you think Ternus wouldn't do it, you are in for a bad time.


Well, I hope I'm not, but yes, I will be quite disappointed if so.


Apple is a multi-trillion dollar public company.

It would be unusual for the leader of such a thing not to act in accordance with shareholders' best interests, or to defy likely board guidance.


“Capitulating to the current regime on everything is in shareholders’ best interests” is neither a foregone conclusion nor a statement of fact. It’s economic myopia at best.


Let me be clear - I'm not happy about it. But ignoring such a reality reminds me of that quote comparing Jobs's best friend to a lawnmower.

That said, I'd love to be enlightened as to how it's myopic, or rather, what course(s) of action you would take, keeping in mind that Apple is a multi-trillion dollar public company.


I’m telling you that thinking a->b is myopic. It could be that shareholder value would’ve been higher had Tim Cook told Trump (or Biden, or Trump, or Obama) to go fuck himself. Perhaps the people who spend money on iPhones, specifically, would’ve been more inclined to buy a new iProduct, than they are now that he’s bent the knee.

Myopia is thinking “well he did it so it must have been good”. There are myriad other things he could’ve done, that have a strong argument towards higher shareholder value.

Edit to add: Think TSLA, if you want a concrete example. If that stock was at all trading on fundamentals (and if they had a remotely capable or competent board) and not Magic Memes, Musk’s hard right pivot was inarguably bad for the brand and shareholder value, even if it made the President temporarily happy.


Counterfactuals are weak opinion, at best.

Given that Apple is doing well, the onus is on someone claiming that Apple would have done better, having a strong argument.

Not "could" have done better, because things could obviously have gone better, worse, or anything else, given any substantive or random difference. Could means nothing.

(And I say this as someone very disappointed with how Cook handled that.)


> Counterfactuals are weak opinion, at best.

Ah, "If you can't definitively and completely prove a negative then you're wrong (but also I'm like, totally not carrying water for those people)" is definitely not a weak opinion, though.

That said, maybe you should read the discussion a bit more carefully before jumping in with "OMG PROOOOOOF" or whatever the fuck this was supposed to be? The entire, plain English discussion, revolved around one thing not being the only possible "fact" just because it happened. None of the posts were particularly long, and none used challenging words.


My point isn’t that anyone’s view is wrong. I can’t make that claim either.

I hate what Cook did.

I would be happy and open to anyone who can point out how Apple was supposed to handle the actual threat of major tariffs in their components and systems better than he did.

But simply asserting a counterfactual, a plausible way it might have been better, isn’t that. What would Cook be expected to do with that?

But what?

Not dismissing that there was a better way. There must be. It’s very worthwhile figuring out, even as a counterfactual. That’s how we all learn.

Not judging anyone. My answer is just as weak, or even weaker! I have really thought about this too, and come up with nothing so far.

(I appreciate and take note that my comment didn’t communicate my point well enough. It’s important to recognize weak reasoning. But that wasn’t meant to discourage, or show a lack of respect for another person’s efforts. I want a better answer too.)


I’d rather hear from someone suggesting, counterfactually, that they would have done worse had they not capitulated. What’s that argument like?


You want motivated reasoning?

It’s not clear what you are saying, other than what you want to hear.


> Myopia is thinking “well he did it so it must have been good”.

You're writing words that I did not say or imply.

The point is that going against any (current) admin is almost always bad for a publicly traded company. Any public entity is going to need extremely good reasons to "fight back", and an account of how doing so is good for business. As the CEO of such an entity you're answering to many people who want a concrete plan and a belief in your strategy.

In the first rodeo, when all this was novel, it was believed such social signaling would pay off. Obviously silicon valley as a whole no longer feels this way.

TSLA is an outlier being grounded more on some superior man theory, that Apple did have in the past w/ Jobs, who is no longer there. Religious fervor stuff. It doesn't really apply. Rational moves here, please.

> There are myriad other things he could’ve done, that have a strong argument towards higher shareholder value

This is what I asked you to expound on. Please state a few.


Most shareholders may not care beyond the next quarter, but the CEO actions that led to those results were taken a couple of years ago at least, and current actions will do as much to determine not the next quarter, but one slightly further in the future. Hence Jamie Dimon, for example, making a different decision in a similar matter. As Dimon explained: “[…] we have to be very careful about how anything is perceived, and also how the next DOJ is going to deal with it. So, we’re quite conscious of risks we bear by doing anything that looks like buying favors or anything like that”[1].

---

[1] https://www.cnn.com/2025/11/05/business/video/jp-morgan-chas...


Wouldn’t Ternus have had a hand in the Apple Silicon backdoor?

https://news.ycombinator.com/item?id=43003230


Unlikely and it also doesn’t really seem like a backdoor


My condolences in advance


It's less than the other tech CEOs who seem to evade criticism on HN. Elon literally worked for Trump, accomplished nothing, and ended up just leaking everyone's social security data. Thiel and Palantir are profiting from war and building out the surveillance state. Bezos made a $75M documentary about Melania. Larry Ellison took over TikTok US to squelch any criticism of US and Zionist war atrocities.


Depending on who you talk to, this could go either way. Some people want big companies to champion their own political ideals on a larger stage and think Apple should do more. Others would say Apple should stay out of it, after things like their gift to Trump[0], for example.

[0] https://www.theverge.com/news/737757/apple-president-donald-...


you mean offering gold bribes to the president along with $$$ to the prez inauguration to curry regulatory favor?


#appletoo


For me at least I always remember it being referred to as 16-bit, in all the gaming and computer magazines etc. Part of the 16-bit home computers; I remember the Atari ST being referred to that way as well.

I don’t remember seeing references to 32-bit until the 386/486 days on the home computer side and Sega 32X on the console side.

