This is even more true when there are so many blurry photos. It's as if Nug's acolytes keep putting up photos and making claims, but not a single photo clearly shows his three heads or single pogo-stick leg. The more photos there are, the more likely it is that at least one of them would clearly show Nug.
Huh. You're right, I never thought about it that way. I had to look it up, and apparently there are multiple theories that the reason UFO photos are always terrible quality is that there's some kind of terrible-photo force field around them. Great stuff!
The truth is that when we see photos of Nug, the mind-bending eldritch horror of the sight disturbs the vision part of our brain. The photos are all perfectly clear, but simply too terrible for our tiny minds to ever perceive.
I don't think it's the complete fanbase. However, there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason. Programming was a domain that filtered out those people because they found it hard to succeed at it. LLMs have changed that and it's a huge problem. It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.
"They may speed up the good programmers a little, but those people were able to program anyway without LLMs."
I don't think this is realistic. I'm a good programmer, and it speeds up my work a lot, from "make sense of this 10-repo project I haven't worked on recently" to "for this next step I need a VPN multiplexer written in a language I don't use" to, yeah, "this 10k-line patch lets me see parts of design space we never could have explored before." I think it's all about understanding the blast radius. Sometimes a lot of code is helpful; sometimes what's helpful is more like a lot of help proving a fact about one line of code.
Like Simon says, if I'm driving by someone else's project, I don't send the generated pull request, I just file the bug report / repro that would generate it.
Can't we reach a compromise where a proven track record of good LLM use by a contributor or a company (e.g. Bun) gets them pre-approved, or at least entertained? A blanket ban on a new technology shouldn't be the default option.
Certainly not in the case of asking it to do something you'd be slow at because you are unfamiliar. If you are not familiar enough with the system, how are you confident that what the LLM has produced is valid and complete? IMO the people saying LLMs make them 10x faster were either very bad to start with (like me!) or are not properly looking at the results before throwing them over the wall.
And how do you know if that is the case or the person/team using the LLMs is one of the good ones?
This is the crux of the problem. LLMs make me significantly faster at writing code I was mediocre or bad at. But when I use it to write code in domains I have more knowledge in I see design and correctness problems all over the place and actively fix them and it slows down my output.
Speed is seductive.
The bar isn't "this is a known good contributor". It's "this is a known good contributor working in a space they have knowledge in, with a track record of actually checking and thinking about LLM output before submitting it." That's a much higher bar, and I don't see how you can approve people on an organization-wide basis.
> LLMs make me significantly faster at writing code I was mediocre or bad at. But when I use it to write code in domains I have more knowledge in I see design and correctness problems all over the place and actively fix them and it slows down my output.
if they had a good track record, the current submission that led to this article damaged it.
i am reminded of this quote: it takes more cleverness to debug code than it takes to write it. if you write code as clever as you can, by definition you are not clever enough to debug it. using an LLM makes your code many times more clever than what you could write yourself, which means, by the same definition, the code is too clever for you to understand or debug.
I like the new corollary to that rule, which is that if the AI is the best coder in the room and writes code too clever for itself, then no one including the AI can debug the code. Then where does that leave you?
i love it. just a moment of thought makes clear that LLMs are not capable of debugging their own code, because if they were, they would be able to write better code. the LLM code doesn't even need to be clever.
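to make it concrete, here's a toy python sketch of my own (not anyone's actual LLM output) of the kind of cleverness the quote warns about. the one-liner works, but good luck stepping through it when it breaks:

    # "clever": flatten a nested list in one opaque expression
    flatten_clever = lambda xs: sum(
        ([x] if not isinstance(x, list) else flatten_clever(x) for x in xs), [])

    # plain: same behavior, but the recursion is easy to step through
    def flatten_plain(nested):
        flat = []
        for item in nested:
            if isinstance(item, list):
                flat.extend(flatten_plain(item))  # recurse into sublists
            else:
                flat.append(item)
        return flat

    assert flatten_clever([1, [2, [3]], 4]) == flatten_plain([1, [2, [3]], 4])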
Why would it be pre-approved? Code is code; whether it's bad-quality LLM code or meatbag code shouldn't matter.
The entire problem is that before, the meatbag code was either not submitted at all (the developer knowing they were not competent enough to do the fix) or the volume of it was low.
With LLMs, people not competent to even review, let alone write, are emboldened to just throw shit at the wall at a rapid pace. So the wall is entirely covered in shit.
I use LLMs as tutors. They tailor their answers exactly to the situation I am in, even when they hallucinate. I can correct them on the fly, and that also serves as training. I try not to copy and paste, and instead type every line of code by hand. That doesn't always happen, but I usually understand the code I am writing.
Why are you writing a VPN multiplexer in a language you don't use?
You can't review it.
Are you relying on your colleagues to do that, or is this riddled with bugs? Or is it code you're producing for personal use only, so it's not worth mentioning, as it hasn't sped your work up; it's just let you write a little play program.
yep. as an expert programmer there are things i did not have access to. for example, i have an embedded-lite hardware project that required a one-line patch to a linux kernel module.
i know what a kernel module is and i'm reasonably certain that the patch is safe, but there is no way in hell i would have found that solution (i would have given up). in a world without llms, the project would have died.
I really hope that you have gone over what the LLM decides to do.
Time and time again I've had a project (such as a DSL-to-SQL compiler, automatic Rust codegen, CSS development) stall because the LLM made a short-sighted decision.
I later found better solutions on Reddit, and when I brought them back to the LLM, it basically said "oh shit, I'm sorry".
This is commonplace. So commonplace that most have worked “checking the LLM” into their workflow so deeply that essentially all that’s done is a prompt followed by a mini code review.
To suggest a senior engineer blindly accepts modifications without code review kinda hints that you haven’t used LLMs enough to realize how quickly they will make a mess of things if you don’t hold their hand.
Lol why is it arrogant? My workplace is evidence that having a senior engineer title or even a computer science degree doesn't mean you are a good engineer. I honestly think some people have fake credentials and got their jobs via nepotism.
Jumping to an assumption like this - that they didn't review their work - that's somewhat of an insult to someone who has done this for a long time.
Now it's totally possible that they're an awful developer (who knows!), but it's arrogant to assume that with no evidence.
And I agree, some of the worst devs I've worked with have been PhDs or had otherwise impressive credentials (ostensibly). And I absolutely think at least one of them was just lying about their background.
> However, there are lots of people in the world who live their whole life by vibing
Why are they often so desperate to lie and non-consensually harass others with their vibing rather than be honest about it? Why do they think they are "helping" with hallucinated rubbish that can't even build?
I use LLMs. It is not difficult to: ethically disclose your use, double check all of your work, ensure things compile without errors, not lie to others, not ask it to generate ten paragraphs of rubbish when the answer is one sentence, and respect the project's guidelines. But for so many people this seems like an impossible task.
> Why do they think they are "helping" with hallucinated rubbish that can't even build?
Because they can't tell the difference between what the machine is outputting and what people have built. All they see is the superficial resemblance (long lines of incomprehensible code) and the reward that the people writing the code have got, and they want that reward too.
"Main character energy". What they're really doing is protecting their view of themselves as smart, and they're making a contribution for the sake of trying to perform being an OSS dev rather than out of need or altruism.
AI is absolutely terrible for people like that, as it's the perfect enabler.
It's not about helping. It's about the feeling of clout. There are still plenty of people who look at GitHub profile activity to judge job candidates, etc. What gets measured gets repeated.
I believe that most of the ills of social media would disappear if we eliminated the "like" and "upvote" buttons and the view counts. Most open source garbage pull requests might likewise go away if contributions were somehow anonymous.
LLMs are in this case enabling bad behavior, but open source software has always been vulnerable to this. Similarly, people who use LLMs to do this kind of thing are the kind of people who would have done it without LLMs but for the large effort it would have taken. We're just learning now how large that group is.
This is a good thing: it's an opportunity to make open source development processes robust to this kind of sabotage.
Yeah that seems to be their primary use case, if I'm honest. It's possible to use them ethically and responsibly, much in the same way it's possible to write one's own code, and more broadly, do one's own work. Most people however, especially in our current cultural moment and with the perverse incentives our systems have created, are not incentivized to be ethical or responsible: they are incentivized to produce the most code (or most writing, most emails, whatever), and get the widest exposure and attention, for the least effort.
Hence my position from the start: if you can't be bothered to create it, I'm not interested in consuming it.
I think a lot of people who haven't given it more thought might see it as an arbitrary rule or even some kind of gatekeeping or discrimination. They haven't seen why people would want to not deal with the output.
This might not be helped by the fact that there are a lot of seemingly psychotic commenters attacking anything that might have touched an LLM or any generative model at some point. Their slur- and expletive-filled outbursts make every critical response look bad by vague association.
Having sensible explanations like in TFA for the rules and criticism clearly visible should help. But looking at other similar patterns, I'm not optimistic. And education isn't likely to happen since we're way past any Eternal September.
Tangential side story, but an interesting one nonetheless.
I was a food delivery driver back in the mid-'00s to the mid-'10s. Early on, GPS was rare and expensive, so to do deliveries, and do them effectively, you had to be able to read a map and mentally plan out efficient routes from the stochastic flow of incoming orders.
This acted as a natural filter, and "delivery driver" tended to be an interesting class of people, landing somewhere in the neighborhood of "lazy genius": higher than average intelligence, lower than average motivation.
Then when smartphones exploded in the early 10's, the bar for delivering fell through the floor, and the job became swamped with people who would be best identified as "lazy unintelligent". Anyone who had a smartphone and not much life motivation was now looking to drive around delivering food for easy money.
Not saying the job was ever particularly glamorous, but it did have a natural mental barrier that tech tore down, and the result was exactly as one would predict. That being said, I'm not sure end users noticed much difference.
well, they think they don't. until their pii gets leaked all over the internet because whoops our s3 bucket was publicly accessible, or until the service goes down because whoops our llm deleted the prod db...
PII leaks are normalized now. Most people aren't even aware, or just shrug "oh well" and head to the app store to download the latest gacha game or whatever.
> It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.
If you will forgive an appeal to authority (this is essentially Fred Brooks's argument in "No Silver Bullet"):
The hard thing about building software is deciding what one wants to say, not saying it. No facilitation of expression can give more than marginal gains.
Before LLMs we could already see a growing abundance of half-baked engineers in it only for the good pay, willing to work double time to pull things off.
Management, unsurprisingly, deemed those precious: they could be emailed at any time and would work weekends to fix problems their own kind had caused. Sure, sir.
You're at least describing someone who sounds hard-working... what's the problem?
I'd be more concerned if I was someone who signed up to play ping pong two hours a day and do a bi-weekly commit.
There was a time not so long ago where I was watching "a day in the life of a software engineer" videos on Youtube and I was wondering if some of these were parodies. I still remember one in particular which I'm pretty sure was a parody, but it was only marginally distinguishable from the others.
I do believe in hardship, as sacrifice. It yields long-term benefits for oneself, and for society.
But submission to slavery for immediate gain accomplishes little, and costs society a lot more (physical and mental health issues are a huge burden).
Those parodies you saw were caricatures of elite engineers who sacrificed decades of their lives to become that competent. They can work from home, eat pasta while glancing over a PR, and just hit approve.
That you resent the luxury doesn't make it undeserved privilege.
I've met programmers who severely outclassed me. It was extremely uncomfortable and it took me months to accept that reality and reshape my hurt ego into curiosity and desire to learn from someone clearly superior in the craft.
That being said, most people in the privileged positions you described are there by sheer luck and connections. In the very, very best-case scenario, the one that offends them the least: they stumbled upon an opportune position and were smart enough to make full use of it... in the first 6 months (when people pay the most attention and lasting impressions are formed). And then they rode the reputation they made for years. Their value as engineers on the team after that initial honest burst of productivity becomes... very unclear from that point on, shall we say.
Again, I've met engineers who fully deserved their privileges. 2-3 times over 24 years of career though (a good chunk of it as a contractor so I've been around). My anecdotal evidence obviously means nothing but we all develop pattern-matching skills with time, making me think what I saw is generally the statistical curve that would apply almost everywhere. Maybe.
> Why are we not paying it off? I sure am. I refactor code left and right. It is up to you.
You work alone, I presume? Everyone is an engineer now. In my department, even managers are "writing code", producing thousands of lines of Ansible code that nobody can review, with endless lines of docs that nobody will read. It is just a mess.
That's a management problem. If you can't stop non-coders from coding perhaps you can introduce an AI reviewer to take a load off, demand that they be able to defend every line of code, and put them all on pager duty, since they're coders now ;)
i've been told that it's totally fine because once the codebase turns into spaghetti you can simply tell the agent to refactor it and then everything will be ok
I know this is a tongue-in-cheek response, but this brings me great pain. The spaghetti begins quickly, and your unit/functional tests won't help you unless you hammered out your module API seams before you even began. Oh, your abstractions are leaking? Your modules know too much about each other? Multiply the spaghetti!
For at least the last three decades, programming was a field that rewarded utter mediocrity with (relative to other fields) massive remuneration. It has been filled with opportunists for as long as I can remember.
This is an excellent point. LLMs might merely be exposing and amplifying behaviors that were always there. This can be an opportunity, in that shining light on it may allow us to cleanse ourselves of it. It's fundamentally about integrity, and sadly it's becoming clearer how few possess it (if it was ever otherwise!). But maybe we'll get better at measuring integrity, and make hiring/collaboration decisions based on it.
You are talking about bad programmers who are at least able to fool their managers for several years. The people OP is talking about could not even do that, and most likely would have dropped out in the first week of trying to program full time, since they just don't have the aptitude and patience to get unblocked after their first compilation error. Now they can go very far with an LLM.
Thing is, it's not how incompetent they are, but the opportunism itself. The property I mentioned pulls in opportunists regardless of their competence. So eventually if you work in a field like this, you end up surrounded by them. There's always _some_ around you, of course, everywhere - but across time different fields tended to pull so many of them they would become suffocating to anyone who isn't one. And if you think you can interview your way out of this - an opportunist will often have an easier time to pass a harsh interview process than someone who cares.
IT isn't the only one - finance and law have had this issue forever, AFAIK - but now I'd rather be in a field that's _actively repellent_ to them.
I think it's worth noting that a more impactful, and maybe even bigger, proportion of those opportunists is in management.
Regarding quality overall, I agree, it's truly a cursed field. It was bad before; and with LLMs, going against that tide seems more difficult than ever.
> Programming was a domain that filtered out those people because they found it hard to succeed at it.
I think this is a very rosy view of programmers, not borne out by history. The people leading the vibe coding charge are programmers, rather than an external group.
I know it's popular to divide the world into the technically-literate and the credulous, but in this case the technical camp is also the one going all in.
> there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason
This response was 1000% crafted with input from an LLM, or the user spends too much time reading output from LLMs.
I have never used an LLM to write. Writing forces me to think (and I edited the comment a couple of times when writing it which helped me clear up my thinking). "It's a viable way to live and sometimes it's the only way to live" is a personal realization that has taken me some time to understand. You can go back through my comment history to the time before LLMs to check if my style was different then.
If you run your writing through an LLM, it can poke holes in your argument, organize your ideas better, or point out that your tone is hostile/dismissive. It doesn’t need to be a replacement for writing or thinking, especially if you’re learning along the way.
So, in that way, the LLM will be your mentor; it will shape your way of thinking according to the algorithms and datasets stuffed into it by its corporate creators.
Do you really want that?
There is also a second face of this: people are lazy. They won't develop their own skills but rather off-load tasks to LLMs, so their communicative abilities will fade away.
> the LLM will be your mentor; it will shape your way of thinking according to the algorithms and datasets stuffed into it by its corporate creators.
How is this mutually exclusive with teaching better than most humans? Part of these "corporate" datasets include deep knowledge of the world's best literature and philosophy, for instance. Why can't it be both?
> Do you really want that?
If I'm in a hurry, don't know where to start, or don't have money for someone to teach me—sure.
> There is also a second face of this: people are lazy. They won't develop their own skills but rather off-load tasks to LLMs, so their communicative abilities will fade away.
This is a recapitulation of the Luddite argument during the Industrial Revolution. And it's valid, but it has consequences for all technological change, not just this one. There was a world before Google, the Web, the Internet, personal computing, and computers. The same argument applies across the board, and the pre-AI / post-AI cutoff looks arbitrary.
Ah, so now we get to the "ed tech" question. What is teaching? Is there a human element to it, and if so, what is it? Or is it something completely inhuman? Or do we need to clarify what meaning of "teaching" we're talking about before we have a discussion?
Rather, avoiding delegating these tasks to an LLM helps you practice that skill.
That said, I think it depends on how you use it. You can learn from explanations, and you'd better avoid the "rewrite this for me and do nothing else" kind of approach.
Right, but the LLM can help you practice the skill too. Without the LLM, you're in a self-guided, autodidactic mode. Obviously, that can have its own advantages, but most people—and especially novices—aren't in a position to assess their skill level or their progress. The average person isn't going to magically get better at thinking or writing without formal training, or at least some direction.
I don't get that impression at all. LLMs would have avoided the stylistic repetition of "live". Asking an LLM to reformulate the sentences you quoted yields this slop:
> There are a lot of people who go through life by vibing. And honestly: that’s not automatically “bad.” Sometimes it’s even the only workable way to get through things. The issue is that “vibe-first” people tend to have a pretty loose relationship with truth, rigor, and being pinned down by specifics. They’ll confidently move forward on what sounds right instead of what they can verify.
I'll finish this post with a sentence containing an em-dash -- just to confuse people -- and by remarking on how sad I find it that people latch onto dashes and complete sentences as the signifiers of LLM use, instead of the inconsistent logic and general sloppiness that's the actual problem.
> look at the era of software that garbage collectors have ushered in. Programs are bloated, slow, and wasteful compared to the literal super-computers that are running them.
> Defense contracting is as good or bad as the policies of the government which is going to change over time.
This is true sometimes. But many times the companies and the government get together to kill people for money (the dead people's money or the taxpayers' money; they don't mind which, money is money).
If you were a young person with plenty of time, the best way you could spend it would be learning to program without AI, whether you have money or not.
MCP itself is just an API. Unless the MCP server has a hidden LLM for some reason, it's still a piece of regular, deterministic software.
The security risk here is the LLM, not the MCP. And you cannot secure the LLM in such a system any more than you can secure the user, unless you put that LLM there and own it, at which point it becomes a question of whether it should have been there in the first place (and the answer might very well be "yes").
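To make the "just an API" point concrete, here is a rough Python sketch of an MCP-style tool server (illustrative names, not the official SDK; the tool itself is hypothetical). The dispatch is plain deterministic code; nondeterminism only enters when an LLM decides which request to send:

    # sketch of an MCP-style tool server: JSON-RPC-style dispatch,
    # same request in, same response out (hypothetical tool name)
    import json, sys

    def lookup_weather(city: str) -> dict:
        # deterministic business logic, nothing probabilistic here
        return {"city": city, "forecast": "sunny"}

    TOOLS = {"lookup_weather": lookup_weather}

    def handle(request: dict) -> dict:
        params = request["params"]
        result = TOOLS[params["name"]](**params["arguments"])
        return {"jsonrpc": "2.0", "id": request["id"], "result": result}

    for line in sys.stdin:  # one request per line, simplified framing
        print(json.dumps(handle(json.loads(line))))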
The scientific approach is not only, or even primarily, empiricism. We didn't test our way to understanding. The scientific approach starts with a theory that does its best to explain some phenomenon. Then the theory is criticized by experts. Finally, if it seems to be a promising theory, tests are constructed. The tests can help verify the theory, but it is the theory that provides the explanation, which is the important part. Once we have an explanation, we have understanding, which allows us to play around with the model to come up with new things, diagnose problems, etc.
The scientific approach is theory-driven, not test-driven. Understanding (and the power that gives us) is the goal.
> The scientific approach starts with a theory that does it's best to explain some phenomenon
At the risk of stretching the analogy, the LLM's internal representation is that theory: gradient-descent has tried to "explain" its input corpus (+ RL fine-tuning), which will likely contain relevant source code, documentation, papers, etc. to our problem.
I'd also say that a piece of software is a theory too (quite literally, if we follow Curry-Howard). A piece of software generated by an LLM is a more-specific, more-explicit subset of its internal NN model.
Tests, and other real CLI interactions, allow the model to find out that it's wrong (~empiricism); compared to going round and round in chain-of-thought (~philosophy).
Of course, test failures don't tell us how to make it actually pass; the same way that unexpected experimental/observational results don't tell us what an appropriate explanation/theory should be (see: Dark matter, dark energy, etc.!)
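As a minimal concrete instance of the Curry-Howard aside (a Lean sketch of my own, not from the thread): the term below is simultaneously a program and a proof of the proposition its type expresses.

    -- Curry-Howard in miniature: this function is both a program
    -- and a proof that A implies (B implies A)
    theorem const_proof {A B : Prop} : A → B → A :=
      fun a _ => a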
The AI is just pattern matching. Vibing is not understanding, whether done by humans or machines. Vibe programmers (of which there are many) make a mess of the codebase, piling on patch after patch. But they get the tests to pass!
Vibing gives you something like the geocentric model of the solar system. It kind of works, but it's much more complicated and harder to work with.
No, the theory comes from the author's knowledge, culture, and inclinations, not from the facts.
Obviously the author has to do much work in selecting the correct bits from this baggage to get a structure that makes useful predictions, that is to say, predictions that reproduce observable facts. But ultimately the theory comes from the author, not from the facts: if theories truly "emanated" from the facts in any sense strict enough to matter, it would be hard to imagine how anyone could come up with a theory that fails to fit the facts known to them, yet that happens all the time.