Hacker News | tacitusarc's comments

I wonder if a successful, albeit slower, approach would be to walk the git commit history in lockstep, applying the behavioral intent behind each commit. If they did this, I would be interested in knowing if they were able to skip certain bug fix commits because the Rust implementation sidestepped the problem.

This is an interesting idea, and I might try it with something smaller. There are more than 15,000 commits to Bun, so you'd need some way to operate on groups of commits in one prompt to get that done without thousands and thousands of API requests.
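As a rough sketch of the batching idea: group the linear commit history into fixed-size chunks, so each chunk can be summarized in a single prompt. The function name and placeholder commit IDs below are invented for illustration; in practice the IDs would come from something like `git rev-list --reverse HEAD`.

```rust
// Group a linear commit history into fixed-size batches so each batch
// fits in one prompt instead of one API request per commit.
fn batch_commits(commits: &[String], batch_size: usize) -> Vec<Vec<String>> {
    commits
        .chunks(batch_size.max(1)) // guard against a zero batch size
        .map(|chunk| chunk.to_vec())
        .collect()
}

fn main() {
    // 15,000 commits batched 50 at a time -> 300 prompts instead of 15,000.
    let commits: Vec<String> = (0..15_000).map(|i| format!("commit-{i}")).collect();
    let batches = batch_commits(&commits, 50);
    println!("{} batches of up to 50 commits each", batches.len());
}
```

The trade-off is that a large batch blurs the per-commit intent, so the batch size would need tuning against the model's context window.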

There are many segfaults in the Bun issue tracker. I bet a Rust port would sidestep many of them.

Well…there would still be panics.

Most unsafe-language-to-Rust transpilations not only produce pretty terrible Rust code, they also use `unsafe` everywhere.

Which is needed, as making things safe often requires refactoring that isn't localized to a single function or code block, and doing that while transpiling isn't the best idea. In general I would recommend a non-LLM-based transpilation (if possible), then using an LLM to do bit-by-bit, as-localized-as-possible, bottom-up refactoring to get rid of unsafe code, potentially at some runtime performance cost, followed by another top-down refactoring pass to make things nice and fast. And human supervision to spot parts where paradigms clash so hard that you have to make larger changes already during the bottom-up step.

Anyway, that means segfaults would likely stay segfaults in the initial transpiled version.
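A toy illustration of that bottom-up step: a mechanically transpiled function full of raw-pointer arithmetic, next to the localized safe rewrite. Both function names are invented for the example; real transpiler output would be far messier.

```rust
// What a mechanical C/Zig-to-Rust transpiler tends to emit: raw pointers
// and unchecked arithmetic, wrapped in `unsafe`.
unsafe fn sum_unsafe(ptr: *const i32, len: usize) -> i32 {
    let mut total = 0;
    for i in 0..len {
        total += *ptr.add(i); // no bounds check; a bad `len` can segfault
    }
    total
}

// The localized, bottom-up refactor: same behavior over a safe slice,
// so mistakes become compile errors or panics instead of memory corruption.
fn sum_safe(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    let data = [1, 2, 3, 4];
    let a = unsafe { sum_unsafe(data.as_ptr(), data.len()) };
    let b = sum_safe(&data);
    assert_eq!(a, b); // identical results; only the failure mode differs
}
```

The point of keeping each such rewrite local is that it can be reviewed and tested in isolation, before the later top-down pass restructures things for performance.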


Interesting idea

These are also two of my primary gripes.

There has been substantial improvement, but the search and symbol-follow UX is really bad. Hoping they fix that.


Not trying to promote it too much (I don't even get anything out of it), but I've been using a slopfork for a while now and it's great. A few flaws, obviously, since it was slopped together over the weekend, but it's good enough.

https://github.com/zed-industries/zed/issues/26560#issuecomm...


For a relatively small set of dimensions this is true. But the more abstractions the code needs to accommodate, the trickier and more prone to leaky abstractions it becomes. Removing one axis of complexity can be incredibly helpful.


For the Ardour codebase (922k LoC at present, ignoring the vendored GTK/Gtkmm trees), we've found that every new architecture and new OS/platform that we've ported to has thrown up challenges and notably improved the code. That has included FPU/SIMD specializations too.


I don’t believe Hacker News is social media; it’s a news aggregator/message board.

Social media requires network effects, where a large part of the draw is the network itself, and that just isn’t a part of HN.


The Snopes article is useful. For those who don’t want to read it, here is what Grossman says about that quotation:

> That clip took my entire, full day presentation, and took it completely out of context.

> - They left out the part where I say that this is a normal biological, hormonal backlash from fight-or-flight (sympathetic nervous system arousal) to feed-and-breed (parasympathetic nervous system arousal) that can happen to anyone in a traumatic event.

> - They left out the part where I say that there is nothing wrong if it doesn’t happen, and absolutely nothing wrong if it does happen.

> - They left out the part where I say it happens to fire, EMS and even victims of violent crime.

> - They left out where I say that it scares the hell out of people.

> - They left out where I talk about it (and remember it is common in survivors of violent crime), as kind of a beautiful affirmation of life in the face of death; a grasping for closeness and intimate reassurance in the face of tragedy.


I'm not sure that's a defense at all. That context in no way absolves him of bragging about how he gets the best sex of his life EVERY TIME HE KILLS SOMEONE.


The quoted text describes separate comments from different police officers. It's also reported by a third party, is a paraphrase rather than a quote, and isn't bragging.


The bit where he calls it a perk of the job is Grossman himself.

There's plenty of video of the guy. https://www.youtube.com/watch?v=ETf7NJOMS6Y


Yes, he seems like a psycho.


How is it not bragging?


There are a million ways to express the fact of the hormonal backlash without including a quote that makes it sound like killing will improve your sex life.

In context, it's correct; that's not up for dispute. The question is "does it add anything to the context?" and, more importantly, "could a student misconstrue its inclusion as something else?"

You'd think that, being so educated on the hormonal backlash from experiencing trauma, cops and the greater judicial system would be more forgiving of e.g. emergent hypersexuality in rape victims, which Grossman calls out there. But you would be wrong, because even if Grossman wants his students to understand that concept for their own health, he wildly misunderstands the culture he helped create, in which the police view themselves as a thin blue line holding back the manifold forces of Chaos Undivided.


I don’t see why any of those should be exonerating?

Also, I feel like “nothing wrong if it does happen” regarding shooting someone, is the wrong perspective. If shooting someone is necessary, then it is necessary, but that doesn’t mean nothing went wrong. Anytime someone gets shot is a time something has gone wrong.


So if someone threatens to kill you and your family, and you shoot them, something has gone wrong? I'd say something has gone right.


Yes, something has gone wrong: someone threatened to kill me and my family, and apparently the only way to stop them from doing so was to kill them. That may be the best option available, but it is still a tragedy.


There are many situations where that isn’t the right response to that.


I really have to wonder what part of that he thinks makes it OK to call it a perk of the job that you get to have awesome sex after murdering somebody for work.


Yeah, shitty people often claim the context is exonerating.

> They left out where I say that it scares the hell out of people.

People literally pay money to do things that feel that way. Haunted houses, bungee jumping, skydiving.

Context: Grossman is employed to train cops to overcome reluctance to shoot.


Damn, hoss, didn't think I'd wake up and have to read someone normalizing police violence.

Like, they could just not, you know, go around creating the conditions for their own trauma.... that's a much more legit strategy. That's why folks aren't having this discussion about, say, "fire, EMS and even victims of violent crime".

I know that violence creates traumatic responses, I've been getting a lot out of therapy after being illegally pepper sprayed by DHS last year. Real fuckin' hard for me to feel super sad that those officers probably had big feelings about that violence themselves when they could just, like, not go around assaulting folks.


What can you do? I mentioned the use of AI on another thread, asking essentially the same question. The comment was flagged, presumably as off topic. Fair enough, I guess. But about 80% (maybe more) of posted blogs etc that I see on HN now have very obvious signs of AI. Comments do too. I hate it. If I want to see what Claude thinks I can ask it.

HN is becoming close to unusable, and this isn’t like the previous times where people say it’s like reddit or something. It is inundated with bot spam, it just happens the bot spam is sufficiently engaging and well-written that it is really hard to address.


Could you just be paranoid about it and seeing things where they aren’t? I can’t imagine someone using AI to comment on HN!


I hear you and I agree. I don't know. Gated communities?


No they don’t. Do you really believe that? Maybe on certain niche issues the opinions of a HS student are useful, but mostly they are still growing into some understanding that can contribute in a meaningful way. Which means mostly their opinions are dumb and useless.


I mean, take your position to its natural conclusion: there are people who understand more than you about basically any given topic, which means your opinions are dumb and useless.


This is absolutely true for many topics. There is a threshold of expertise where opinion that does not meet that threshold has no value. There is also a large gray area where there is sufficient expertise such that the opinion might have value. And then there is some point quite a bit after that where someone has sufficient expertise such that it is very important to take what they say on the subject seriously. I occupy the first two regions in almost all areas, possibly all. High school students occupy the first area almost exclusively.


One proposed strategy to try and deal with this.

https://freeasinweekend.org/


Does everyone just use AI to write these days? Or is the style so infectious that I just see it everywhere? I swear there needs to be some convention around labeling a post with how much AI was used in its creation.


I'd be embarrassed to put my name on AI prose without a disclaimer, and I'd also be annoyed to read it as a reader.

IMO it's insulting to the audience: it says the reader's time and attention aren't worth the author's own time and attention spent putting their thoughts into their own words.

If you're going to do that at least mention it's LLM output or just give me your outline prompts. I don't care what your LLM has to say, I'm capable of prompting your outline in my own model myself if I feel like it.


> If you're going to do that at least mention it's LLM output

Yes, this! Please label AI-generated content. Pull request written by an AI? Label it as AI-generated. Blog post? Article generated with AI? Say so! It’s ok to use AI models, especially if English is your second language. But put a disclaimer in. Don’t make the reader guess.

Eg:

> This content was partially generated by ChatGPT

Or

> Blog post text written entirely by human hand, code examples by Claude code


I'm not a fan of AI and try to avoid it, but there is a difference between AI output published by someone knowledgeable and AI output that you run yourself. If an expert looked at the result and found it to be ok, then you have some assurance that it at least makes sense. Your own AI run doesn't mean anything; it could be 100% hallucination, and a non-expert will buy it as truth.


Unfortunately, LLM slop now makes up >53% of the web, and is growing.

It is easy to spot the compacted token distribution unique to each model, but search engines still seem to promote nonsense content. =3

"Bad Bot Problem - Computerphile"

https://www.youtube.com/watch?v=AjQNDCYL5Rg

"A Day in the Life of an Ensh*ttificator "

https://www.youtube.com/watch?v=T4Upf_B9RLQ


Have any outlines you'd care to share?


No, I don't use LLMs to write.


LLMs were trained on stuff that people wrote. I get there are "tells", but don't really think people are as good at identifying AI generated text as they think they are...


I wouldn't have picked this article as AI until I got an agent to do some writing for me and read a bunch of it to figure out if I can stand behind it. Now I see the tells everywhere "It's not this. It's that." is particularly common and I can't unsee it. (FWIW I rewrote most of the writing it generated, but it did help me figure out my structure and narrative)

The problem I think with AI generated posts is that you feel like you can't trust the content once it's AI. It could be partly hallucinated, or misrepresented.


Yeah, but "it's not X. It's Y" is a common idiom that LLMs picked up from people. That's the point I was making. And it's starting to feel like every post has at least one comment claiming that it was AI generated.


Good chunks of the article don't trigger this for me, but I would bet money on the final paragraph involving AI:

> That's not a technical argument. It's a values argument. And it's one that the filesystem, for all its age and simplicity, is uniquely positioned to serve. Not because it's the best technology. But because it's the one technology that already belongs to you.


Is there research showing if, and under what conditions, LLM output is detected accurately? What are the false positive and false negative rates?


You don't have to be good at identifying AI generated text to detect low-effort slop.


Contractions


As the author, I can assure you there’s a human behind these words. Interesting times we live in, though. I find myself questioning what’s AI and what’s not often too, and at the moment we’ve offloaded that responsibility to the goodwill of authors or platform policy, which might have to change soon.


"there’s a human behind these words"

That's a bit vague. Was the article written without the aid of LLMs? Yes or no.


Well, if the words were 100% hand-written, I assume he'd have said that.


As in, you used 0 AI to write or edit this text? Or some AI? I’d like to calibrate myself.


We all know the answer to that.


Nice dodge! Unfortunately, this made it more obvious.


I thought it was a great post tying together a lot of things I’ve been reading and thinking about. Couldn’t care less if you used AI, if it helps my brain expand and/or make connections I wouldn’t have otherwise.


Everyone's trying to be the new thought leader enlightened technical essayist. So much fluff everywhere.


What's wild is that with a few minutes of manual editing it would give exponential return. For instance, a lead sentence in your section saying "here's why X" that was already described by your subheading is unnecessary and could have been wholly removed.


Exponential return? This is the front page of HN! What does exponential returns even look like?

Are you saying this post is a few edits away from becoming a New York Times bestseller?


No, I guess I meant editing toward a text that doesn't look rushed (LLM generation is a subset of such poor writing).

But you're right, it did hit the front page, and that says more about my sensibilities not lining up with whoever is voting the article up.


IME many people aren't very capable of editing their own work effectively. It's why "editor" exists as a profession.


That’s pretty presumptuous about how obviously the author could improve it. As someone who writes a lot of docs, I find feedback and preferences vary wildly. They may just as well have made it “worse” to your preferences by hand-editing it more.


You'd have to have a good idea of how you want the document to read, which is half (or more) of the process of writing it.


Does everyone just easily accuse genuine, literate humans of "cheating" with AI when there's no way they could know that?

There are a lot of unique aspects of the writing in this post that LLMs don't typically generate on their own.

And there's not a "delve" or "tapestry" or even a bullet point to be found.

Also, accusations and complaints like this are off-topic and uninteresting.

We should be talking about filesystems here, not your gut instinct AI detector that has a sky-high false-positive rate.

I swear there needs to be some convention around throwing wild accusations at people you don't know based exclusively on vibes and with zero actual evidence.


Why does everyone keep confusing Aspie writing for AI? It's not AI, it's neurodivergence.


This doesn't seem particularly AI slopped to me.


"Not bigger than databases. Different from databases.

It's not a website you go to — it's a little spirit that lives on your machine.

Not a chatbot. A tool that reads and writes files on your filesystem.

That's not a technical argument. It's a values argument."


Imagine, if you would, that the strict libertarians had much more influence in shaping the country. So much so that the roads are toll roads, the parks require a fee, and almost no libraries exist because the ROI just isn’t there.

Furthermore, there is no anti-trust legislation, and as a result, there are only a few companies that control all meeting places: the parks, the coffee shops, the roads, the pubs. And they have set up constant monitoring technology.

If you want to set up a protest on a street corner, it better align with the corporation’s views, or they will ban your access to the roads. If you want to talk with friends at the pub, don’t say anything out of line or you’re not coming back. Events can take place in parks, but make sure you only discuss the weather.

Of course, this is fine: you can always just meet at your own home and say what you think, because that is your own property.

I realize the analogy is overwrought, but there just doesn’t exist an online equivalent of a public space, and ideological enforcement is trivial. Comparing it to the rules we have for physical spaces means we need to imagine what those physical spaces would be like if they operated like online spaces, and frankly the result is dystopian (in my opinion).

Surely the solution isn’t just to dismiss it as a non-problem? Or, I suppose, to stop looking for a solution because… solutions so far considered have negative side effects, which feels (practically speaking) the same to me.


Physical public spaces are regulated. Laws still apply there.

There are countless online spaces which operate like physical public spaces, where anything legal goes. Move off of the mainstream web and even the illegal stuff is allowed. You can literally run your own instance of whatever application on the Fediverse and follow whomever you want. No matter how radical or extremist your ideology is, someone will happily host it.

It's only a problem if one insists that all online spaces must be run under the same anarchic principles and must be forced to give anyone a platform, but that's far more dystopian than what we have now.

