Hacker News | some-guy's comments

I disagree. The amount of slop I need to code review has only increased, and the quality of the models doesn’t seem to be helping.

It still takes a good engineer to filter out what is slop and what isn’t. Ultimately that human problem will still require somebody to say no.


Is anyone really reviewing code anymore though? It sounds like you are, but where I work it's pretty much just scan the PR as a symbolic gesture and then hit approve. There's too much to review, too frequently.

A lot of people thought the same thing with everything going from analog -> digital. Or heck, even learning an instrument when MIDI was first introduced.

Even before generative AI, there was a long-running debate in audio circles around simulated guitar amplifiers. The truth is, the simulations have gotten so insanely good that one could simply purchase an all-in-one pedalboard and have basically all of guitar history at their feet.

My rule of thumb is this: "does this particular tool I'm using take away from the authenticity of my performance or songwriting?" Example: I am very keen on performing vocals and guitar at the same time, I don't have an expensive studio setup, and my office has background noise. I use these tools, including yes, even some open source AI ones, to 1) remove background noise from the individual tracks and 2) do a final master against a recording I want to target (using something like Matchering or similar [0]). It still sounds like me, my voice isn't perfect, my beat isn't consistent, but it sounds like I rented some studio space. So for me it was a cost-saving measure.

[0] https://github.com/sergree/matchering
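For anyone curious what step 2 looks like in practice, here is a minimal sketch of Matchering's Python API as described in its README. The filenames are placeholders, and the import guard is just so the sketch doesn't explode if the library isn't installed:

```python
# Sketch of the "master against a reference track" step, assuming the
# matchering library (pip install matchering). Filenames are placeholders.
try:
    import matchering as mg
except ImportError:
    mg = None  # not installed; the function below still shows the shape of the call


def master_against_reference(target_wav, reference_wav, out_wav):
    """Match the loudness/EQ of target_wav to reference_wav, writing out_wav."""
    if mg is None:
        raise RuntimeError("matchering is not installed")
    mg.process(
        target=target_wav,            # my raw vocal+guitar take
        reference=reference_wav,      # a track whose sound I want to match
        results=[mg.pcm16(out_wav)],  # 16-bit WAV output
    )
```

The key idea is that you never specify EQ or limiter settings yourself; the reference recording is the whole spec.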


>> one could simply purchase an all-in-one pedalboard and have basically all of guitar history at your toes

And this is actually a problem. Great art usually comes from constraints, real or artificial. These things are a lot of fun to tinker with (a really fun hobby), but one amp, one guitar, and a small number of effects pedals will probably lead to you actually making more and better stuff.


I have an all-in-one amp / pedalboard and it's just more practical, even though all I do is just pick an amp, plug in my guitar and play. They take up less space and cost less money in the long run if you actually do want to use many pedals.

I get what you're saying, but in this specific case I think the all-in-ones win for most people.


An engineering lesson from a senior SDE maybe 5 years ago: set your constraints with purpose, and apply them without breaking your system.

Just because you have infinite effects doesn't mean you have to use them all. You can set whatever constraints you want.


This was definitely true for me, which is why in my later years I write everything acoustically and make sure the song is "good" before plugging in. If I want a specific effect, I google what pedals were used in a particular song or by a particular artist, try to recreate the chain, and then tinker on top of that.

Ultimately I spent so much time worrying about "what crazy expensive equipment should I buy" when I was younger and more into this stuff, when I should have simply played my shitty instruments and recorded on my shitty equipment. That's on me, but I also find it empowering as an artist that I can clean up my recordings in a way that replaces my need for expensive equipment while maintaining (in my humble opinion) a sense of authenticity in my performance. I agree there may be too many knobs, but finding the knobs I want has never been easier, and I would rather live in the now than in the past.


> A lot of people thought the same thing with everything going from analog -> digital.

A lot of people were right. Music gear trended heavily back toward analog after the initial analog-to-digital transition. I started out using computers exclusively. When I purchased my first analog synth, I couldn't believe how much better it sounded than my VSTs. It's hard to quantify exactly why, but my ears lit up the second I started using it.

In terms of amp modeling software, some of it is indeed very impressive. But it tends to fall apart when you need to tweak parameters; I assume this has to do with the capture process. If you're happy to use stock patches, though, it's basically an amp replacement.


Not to be "that guy who just says to use LLMs", but writing out how you want these things to work on your computer to something like Claude, or heck, even Google AI mode without logging in to an account, lets you describe your ideal home server as a docker-compose.yml file, and for me it did a damn fine job. I had done all of this manually with a previous server; with the new server I simply stated that it was a Fedora Linux box, with these hard drives, these containers, and these file locations, etc. It worked on the first try.

It's not that I didn't want to learn this myself, but with children and gardening on top of my super busy work, I didn't have time to simply google everything. I did know enough beforehand to provide a general idea of what I wanted, so YMMV.
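To make the "home server as a docker-compose.yml file" idea concrete, here's a hypothetical fragment of the kind of output such a prompt might produce. The service, image, and paths are illustrative, not the actual file from my server:

```yaml
# Illustrative sketch: one media service on a Fedora box with a
# dedicated media drive mounted at /mnt/media (paths are placeholders).
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    volumes:
      - /mnt/media:/media:ro        # media drive, read-only to the container
      - ./jellyfin/config:/config   # persistent config next to the compose file
    ports:
      - "8096:8096"                 # web UI
    restart: unless-stopped
```

The win is that the LLM fills in the mount/port/restart plumbing from a plain-English description of the hardware and files.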


This is how I use my Canon t3i. Once in a while everything aligns perfectly, requires very little editing, and I feel a huge sense of accomplishment.


With practice and patience, the aligning perfectly will come into your control :) Just get out there and shoot more!


I'm at a large enterprise outfit, and "shoving things in your face" has been a problem with large software suites for a long time, long before the AI craze. I keep telling my skip level leadership that we need more User-Experience "mob goons" that have authority across product domains to (metaphorically) beat the living daylight out of bad "PM-brained" ideas.


My work 64GB M1 Max Macbook Pro is consistently out of memory. (To be fair, my $LARGE_ENTERPRISE_EMPLOYER reserves about half of it for very bad Big Brother daemons and applications I have no control over.)


I have a 128GB M3 Max from my employer. Due to some IT oversight, I was able to use it for a few months without the corporate "security" crapware. I never even noticed this machine had a fan before the "security theatre" corporate rootkits were installed.


> My work 64GB M1 Max Macbook Pro is consistently out of memory

What are you doing that needs that much memory?


I lived a block away from a hydrogen fuel station in Oakland, and in the ten years I was there I maybe saw two different Mirais use it.


I have only purchased Toyota vehicles (currently in the market for an EV) and it baffles me that Dodge created a Charger in EV form and Toyota hasn’t made even an EV Corolla or Camry.


> it baffles me that Dodge created a Charger in EV form and Toyota hasn’t made even an EV Corolla or Camry

Dodge's Charger EV has been a sales flop [1] and pretty much universally panned by critics as something that nobody asked for.

The Camry and Corolla were the best-selling sedan and compact sedan of 2025 [2]. I think this shows that Toyota is listening to what Corolla and Camry drivers want - something inexpensive and reliable to get them to and from work every day without issue.

Some day Toyota will make an EV sedan. I think their 2026 bZ Woodland [3] shows that they are starting to figure out how to make compelling EVs. And Toyota's EV strategy seems pretty reasonable to me overall: their delays in developing a decent EV don't seem to put them under threat from any legacy automakers. They are being threatened by Chinese EV makers, but so is Tesla, so even a huge head start likely wouldn't have benefited Toyota much in that regard.

[1] https://www.roadandtrack.com/news/a69927938/dodge-charger-da...

[2] https://www.caranddriver.com/news/g64457986/bestselling-cars...

[3] https://arstechnica.com/cars/2026/02/looks-a-lot-like-an-ele...


An electric Corolla or Camry is my ultimate. I hate driving.

I want an appliance that just works. The Corolla and Camry were this for petrol.

I love my Leaf but it isn't a Corolla.

What’s with the turning circle on the Leaf?


That's essentially the bZ3. But a Corolla branded BEV will eventually happen:

https://electrek.co/2025/10/13/toyotas-best-selling-car-elec...


> Yes, while I use Fedora on my laptop, I also know Fedora is generally not a good option for a server.

Why is Fedora not considered good for a server?


It's a cutting-edge distro with 6-month release and 13-month support cycles.

Whereas Debian/Ubuntu have 5 years and RHEL/Alma/Rocky have 10 years.


I don't feel like this really answers the question though, right? At least not at face value.

I could see maintenance burden being a potential point, meaning that one would be "pushed" to update the system between releases more often than with something else.


Typically you want stability and predictability in a server. A platform that has a long support lifecycle is often more attractive than one with a short lifecycle.

If you can stay on v12.x for 10 years versus having to upgrade yearly to maintain support, that's ideal. 12.x should always behave the same way with your app, whereas every major version upgrade may have breaking changes.

Servers don’t need to change, typically. They’re not chasing those quick updates that we expect on desktops.


Yeah, and that's the take I expected to hear based on what was said.

However, for something like ARM and the use case this particular device may have, in reality you would _want_ (my opinion) to be on a more rolling-release distro to pick up the updates that make your system perform better.

I'd take a similar stance for devices that are built in a homelab for running LLMs.


Depends on what you're building an ARM system for. There are proper ARM servers out there; server work isn't the exclusive domain of x86, after all.

For homelabs, that's out the window. Do whatever you want / whatever fits your needs best. This isn't the place where you'd likely find highly available networks, clustered services, UPS with battery banks, et al.


I take it as no more than someone's personal opinion, since there is no reference provided whatsoever.


It's more maintenance due to its frequent release cycles, but it's perfectly good as a server OS. I've used it many times, friends use it.

You can't fall behind on the release cycle, because their package repos drop old releases very quickly and you'll be left stranded.

A friend recently converted his Fedora servers to RHEL10 because he has kids now and just doesn't have time for the release cycle. So RHEL, or Debian, Alma, or Rocky, offer a lot more stability and less of a maintenance requirement for people who have a life.


I'd also love to hear what folks have to say about this.

For myself I've had nothing but positive experiences running Fedora on my servers.


I think it's highly circumstantial. For example, my personal servers run a lot of FreeBSD and even though I could stay on major releases for a rather long time, I usually upgrade almost as soon as new releases are available.

For servers at work, I tried running Fedora. The idea was that it would be easier to have small, frequent updates rather than large, infrequent ones. Didn't work. App developers never had enough time to port their stuff to new releases of the underpinning software, so we frequently had servers with unsupported OS versions. Gave up and switched to RockyLinux. We're in the process of upgrading the Rocky8-based stuff to Rocky9. Rocky9 was released in 2022.


I generally agree with you. As a recent father with a toddler, in a household where both parents work full time, I've found that the only way I can make time for personal side projects is to use AI to do most of the bootstrapping and then do the final tweaks on my own. Most of this is around home automation, managing my Linux ISO server, among other things. But it certainly would be more fun and rewarding if I did it all myself.


This feels like the same moment for me as when I realized I couldn't keep using Gentoo and needed to move on to a Linux distribution that was ready to go without lots of manual effort. I have a family and kids; I need those hours. I had the same feeling as OP of losing a fun learning activity: no longer progressing in Linux knowledge, just maintaining. Granted, my knowledge was at a good enough level to move on, but it's still a loss.

I do the same as you with AI now; it's allowing me to build simple things quickly and revise later. Sometimes I never have to. I feel similarly that I'm no longer progressing as a dev, just maintaining what I know. That might change; I might adapt how I approach work and find the balance, but for now it's a new activity entirely.

I've talked to many people over the years who saw coding as a get-shit-done activity. Stop when it's good enough. They never really approached it as a hobby and a learning experience; it wasn't about self-progression to them. Mentioning that I read computer books resulted in a disgusted face: "You can just google what you need when you need it."

Always felt odd to me; software development was my hobby, something I loved, not just a job. Now I think they will thrive in this world. It's pure results. No need to know a breadth of things or what's out there to start on the right foot; AI has it all somewhere in its matrix. Hopefully they develop enough taste to figure out good from bad when it's something that matters.

