fasterik's comments

Hard agree about the intrinsic motivation. The intrinsic/extrinsic distinction is an unspoken assumption in a lot of conversations about AI and work in general. Not everyone is motivated by money and status.

I do believe you can use LLMs while maintaining the intrinsic rewards of programming. For me, right now that means writing code by hand and using LLMs primarily for research, documentation, and brainstorming. Sometimes I ask one to write a piece of code just to see what it comes up with and maybe learn something from it. I'm also planning on experimenting with coding agents, but I will probably have them work in their own parallel repo and hand-pick the changes I want to keep.

I think a "late adopter" mindset is actually beneficial. It allows you to focus on fundamental skills that will never be outdated, and you get the benefits of new technologies once they mature.


It probably makes economic sense for a company on the scale of Google/Alphabet to spend something like $100M per year on the technology. There's relatively little downside (that amount is a rounding error compared to annual spending), the research involved might yield discoveries that benefit other projects, and the investment pays off if launch costs go down by 10x and/or the situation for terrestrial datacenters in terms of grid access, water, permitting, local opposition, etc. gets significantly worse.

Why did it make you sad? I can see why it would make training researchers harder, but once someone attains a postdoc-level skill in mathematics research, wouldn't having a PhD-level AI assistant just boost one's ability to do more ambitious research?

You should read that essay! The model is capable of producing PhD-level research on its own with a minimal set of prompts. I was sad because of this paragraph:

"That view is that there is still a great deal of value in struggling with a mathematics problem, but that the era where you could enjoy the thrill of having your name forever associated with a particular theorem or definition may well be close to its end. So if your aim in doing mathematics is to achieve some kind of immortality, so to speak, then you should understand that that won’t necessarily be possible for much longer — not just for you, but for anybody."

He may seem to imply that the end applies only to a subset of reasons for doing mathematics, but if you read the entire essay, he's just trying to offer some hope where the rest of the essay is really damning!


> once someone attains a postdoc-level...

... they will discover that level is already crowded by LLMs


> But today I imagine you visiting my hometown and spending a day with the locals. You’d probably end up watching reality TV, ordering some ‘New American’ food on Doordash (it’s a cheeseburger with Korean Kimchi Glaze™), and sports betting from your phone.

This is an idiosyncratic and gratuitously contrarian take on what the actual advice means. If you go to New York, you're more likely to have a good time at a random neighborhood bar that the locals frequent than at a bar in Times Square. If you're in a small town, at least some of the locals probably know about a good hike 20 minutes out of town with a great view that would be hard to find otherwise. Don't overthink it.


> I swear that people have said the same thing with effectively every new model

That is definitely true, and at the same time, we can measure progress by who is making that claim. When Timothy Gowers, a Fields Medalist, says that models are now capable of "producing a piece of PhD-level research in an hour or so, with no serious mathematical input from me," we can be pretty confident that we are getting into seriously interesting territory.


Isn't that a subjective value judgment? That's great that you enjoy gardening and building things with your hands. I don't really enjoy those activities and would rather sit down and read a book or play the piano in my free time. But I want to stay healthy so I exercise my muscles and cardiovascular system in "artificial" ways. What's wrong with that?

"Useless for learning" is just wrong. I've found LLMs immensely useful for directing my learning projects. Of course, a lot of the actual learning must come from doing things and puzzling through them myself. But I now find LLMs to be indispensable in finding out what I need to learn to accomplish a task, finding keywords to search on Wikipedia or in textbooks, and answering questions when I'm confused about something.

Part of the difference in your case is the motivation for learning. Many of us in grade school were motivated to get good grades or pass a class, apart from any pursuit of knowledge. Even for those of us who really liked to learn, that interest was usually directed at a certain subject and not at everything we would need to be successful as adults (I loved math, but would never willingly write an essay if I could get away with it). Because grade school kids are "forced" to learn things they do not want to, they look for the easiest way to get through the material, and AI provides exactly that.

I agree with your general point, but if people are going to use AI regardless, the question is whether we should teach young people how to use it effectively. If they don't learn this, they're more likely to use it in a way that hampers their development.

Now, I don't know at what level that should begin. Probably somewhere around the high school level, when they're learning to do research projects and synthesize information from multiple sources, is when teaching AI literacy will be most important.


What value does teaching a person "how to use it effectively" actually deliver?

How does that benefit their development, learning, society as a whole?

Before you start in with "it'll help them get a job", full stop: education as a public good isn't strictly vocational technician training. It's not job training for companies.


For the same reason that we should teach people how to use a library, or a search engine, or an academic database. The tools for information retrieval are constantly evolving, and in a democratic society it's important that people learn how to educate themselves on a continuous basis throughout their lives. If you use AI properly, you can learn things that you wouldn't have had the time or skillset to learn otherwise.

It's worth remembering that this isn't that. What the poster describes is constant pushing from Chrome OS, designed to train dependence on the tools and to essentially check out of the education process. In my opinion this is definitely useless for learning.

The culprit is using web technologies where they don't belong, which Electron is also guilty of. Claude Code is 400k lines of JavaScript for a TUI where a sane implementation in C would be two orders of magnitude less code.
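To make that concrete, here's a minimal sketch of a raw-mode terminal input loop in C, using only POSIX termios and ANSI escape codes. It's meant to illustrate how little code a basic TUI event loop needs, not to describe how Claude Code actually works:

    /* Minimal raw-mode terminal loop: clear screen, echo keypresses, quit on 'q'. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <termios.h>
    #include <unistd.h>

    static struct termios orig;

    static void restore_terminal(void) {
        tcsetattr(STDIN_FILENO, TCSAFLUSH, &orig);
    }

    int main(void) {
        tcgetattr(STDIN_FILENO, &orig);
        atexit(restore_terminal);          /* always restore the terminal on exit */

        struct termios raw = orig;
        raw.c_lflag &= ~(ECHO | ICANON);   /* no echo, byte-at-a-time input */
        tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw);

        printf("\x1b[2J\x1b[H");           /* ANSI: clear screen, cursor home */
        printf("press keys, q to quit\r\n");

        char c;
        while (read(STDIN_FILENO, &c, 1) == 1 && c != 'q')
            printf("got: %c (0x%02x)\r\n", c, (unsigned char)c);
        return 0;
    }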

> If you know better just hack it your way.

While this might be technically true, I also think it's a lazy argument that ignores practical reality. It's basically a way to avoid any kind of accountability or self-reflection on the part of developers. "Users aren't happy? If they don't want to make the change themselves, they can fuck off." This is a toxic attitude which I see a lot in discussions of free software.

In practice, 99% of the time it's not worth the time and effort to fork and maintain a large project. Even in a free ecosystem, users get locked into specific products and technologies. This is why sane technical leadership and responsiveness to user feedback are important, even (especially?) in open source projects.


Can you give me an instance where users got locked into a dying ecosystem on Linux?

What I can tell you is that CentOS, which was used extensively on servers, died, and you didn't see much of an issue, at least not compared to the pain and suffering users are going through now that Windows is the one dying.

What's lazy is the repetition of this "realist" fallacy of technical lock-in, when in fact what you really have is what you see: an open platform you can very well just leave for another when you disagree with the current vendor.

Dislike Ubuntu and you can very well migrate. That's the practical reality.


There are several software packages that are essentially mandatory if you want to run a modern distro with good desktop hardware support. Some that come to mind are glibc, systemd, and Wayland. These projects have made controversial design decisions which impact the entire ecosystem of Linux software.

I actually did leave Ubuntu because background Snap updates were randomly crashing running applications. Now I'm fairly happy with Fedora, but it's far from perfect. I reject the idea that if I have technical critiques of these projects, the fault somehow lies with me if I'm not willing to waste my time jumping distros or rewriting them myself. That attitude is exactly analogous to the user-hostile bullshit coming out of Microsoft.


> These projects have made controversial design decisions which impact the entire ecosystem of Linux software.

> I reject the idea that if I have technical critiques of these projects, the fault somehow lies with me if I'm not willing to waste my time jumping distros or rewriting them myself. That attitude is exactly analogous to the user-hostile bullshit coming out of Microsoft.

I understand it's frustrating when your distro or OS starts acting up. It's a means to an end; it should get out of your way and let you do your work.

On the other hand, it's impossible to appeal to everyone, so every decision will make some people happy and others unhappy. There's no way around it. The only thing that matters is whether we can live with it; if not, the options are to fix it or move on.

It's frustrating but nobody owes you anything. The sooner you realise this the better.

I, for instance, wasn't happy with anything available. The closest thing was Hyprland, so I made my own micro-distro on top of it: https://github.com/gchamon/archie. It's way less work than you think in the age of AI, but it does require intimate knowledge of the system.


If the expected Linux experience is "go build your own if you disagree", then I'm not clear how that is any better than being told the same by Microsoft/Apple.

It's better because at least you can. With Windows and Apple you have to live with it. But that's not the expected experience. UX on Linux has only become better with time, all things considered.

There are alternatives to Wayland, systemd, Vulkan, etc. on Linux. There are far fewer options on macOS and Windows, where "build your own" typically entails starting from scratch.

> There are several software packages that are essentially mandatory if you want to run a modern distro with good desktop hardware support. Some that come to mind are glibc, systemd, and Wayland.

I run Gentoo on one machine and Alpine on several. I promise, none of those are required.


The problem to me seems to be that we are trying to map everyday language onto the mathematics. Even though we have a symbol for infinity, infinity is not necessarily a "thing" that the symbol points to.

In analysis, when we write "the limit as x goes to infinity", this translates into a logical statement like "for every ε > 0, there exists some x such that for all y > x, ..." I don't really see anything conceptually difficult or contradictory here.
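Spelled out in full, the standard unfolding (with f and L as generic placeholders for a function and its limit) looks like this; the infinity symbol is eliminated entirely in favor of quantifiers over ordinary real numbers:

    % "f(x) tends to L as x goes to infinity"
    \lim_{x \to \infty} f(x) = L
    \quad\Longleftrightarrow\quad
    \forall \varepsilon > 0 \;\; \exists N \;\; \forall x > N :\; |f(x) - L| < \varepsilon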

