cluckindan's comments

> to deny all of advertising

I don’t think that is true. Advertising relies on manufactured needs, portraying the hawked goods and services as things one needs to live a comfortable, easy, pleasurable or socially worthy life.

None of that resonates with shock content.

As an extreme example, you supposedly can’t sell guns by showing pictures of gun suicide victims. This is also why some governments require tobacco products to feature gruesome images of smokers’ lungs, cancer, etc. Ironically, kids in those countries have started collecting and trading those images cut out from tobacco packages.

Curiosity lands squarely opposite of control.


I think there's a lot at work psychologically in advertising, but "kids in those countries have started collecting and trading those images" kind of undercuts your point that shock content doesn't resonate with an audience, create demand or a potential desire to emulate what's depicted.

From another angle, OP's article mentioned something akin to sexual awakenings related to the content they trafficked in.

You can see how popular suicide-drone footage out of Ukraine is: a large contingent of people eat that stuff up and cheer it on, despite the footage being about as grim as it gets: a man desperately begging for his life as a drone circles him, toying with him, before going for his head and the feed blacking out.

People are creating games now to replicate the experience. People want to drive drones into other people's heads, along a spectrum that runs from watching it on YouTube, to playing a video game, to joining the Ukrainian effort and actually performing the act in real life.

My experience is you can find a customer for just about any content, including shock content. Some messages have broader appeal for sure, but even the worst thing you can imagine will have someone with whom it resonates.

It's clear that people are influenced by their environment, and things that were once considered grotesque and unacceptable can be watered down over time with exposure to where, for example, rapists and pedophiles can openly win presidential elections and be placed on the Supreme Court. To where large portions of nations rationalize and support genocide, or any horrible thing you can imagine, even when presented with images of the suffering inflicted.

Humans are malleable and you don't have to have a perfectly crafted advertising campaign to have some people decide they like what they're seeing and want to replicate it, no matter what it is.


> your point that shock content doesn't resonate with an audience, create demand or a potential desire to emulate what's depicted.

My point was that it doesn’t resonate with the principles of advertising. Certainly shock content can resonate with an audience, as well as create demand and desire to emulate: just look at the success of the Jackass franchise.

The kids aren’t buying tobacco because of the images, though — very few if any adults are either. I would assume they’re cutting out their collections from discarded packaging, and that they would not want to emulate lung cancer even if they saw pictures of it.

Purely guessing here, but it doesn’t seem likely that content on rotten.com would have led anyone, let alone masses of people, to become human butchers: in the OP article, the desire to emulate was limited to building narratives around the content.

Crime scene investigator or trauma surgeon, maybe those are more likely outcomes.


So can many other things. People who suffer from psychoses will eventually have an episode, but those who use cannabis in moderation tend to suffer less cognitive impairment than the abstinent or the heavy users.

https://www.sciencedirect.com/science/article/pii/S221500131...


Quit sitting around?

No I still do that :)

The majority of people addicted to drugs have neurodevelopmental issues dating from childhood. A significant portion have preexisting mental illness of one sort or another.

You can call it escapism because that would make sense to you if you were doing drugs, but for most addicts, it’s about being able to feel and act normal, like the others.


About a hay bale’s worth a day should do them in!

Listing the three most addictive, dependency-inducing and mortally dangerous substances used by humans to temporarily achieve a stress-free mental state doesn’t mean cannabis belongs in the same neighborhood.

Simply denying regulatory authority over an intoxicant, as the devil’s advocate argument I replied to proposes, is obviously incorrect: all intoxicants are intoxicating, and intoxication carries a risk of addiction. Where to set regulatory hurdles versus outright bans is much less obvious and worth considering, but in a prosocial society it’s never ‘fully unregulated’: if one provides a mind-altering substance, then some subset of those altered will suffer addiction. That’s the downside of our relationship with poisons: sometimes they poison our willpower.

It’s not so simple as ”intoxicant” being addictive.

Many ”intoxicants” are inherently non-addictive and may even help with getting rid of other addictions (ayahuasca / DMT / other psychedelics).

Many things are not ”intoxicants” yet are addictive.

Addiction is a feature of human physiology. More specifically, accumulation of ΔFosB (delta-FosB, a truncated splice variant of the FosB gene product) seems to be the generic marker of any type of addiction, and it directly drives addictive behaviors when overexpressed in the nucleus accumbens.

The physiology is a result of millions of years of evolutionary adaptation, and while it must have correlated with evolutionary fitness at some point(s) and in some scenario(s), modern humans are surrounded by so many novel stimuli and ways of self-stimulation that we simply have not yet had the time to physiologically adapt to the situation where some of our addictions are not actually conferring true increases in our evolutionary fitness.


Too bad we live in the opposite of a "prosocial society"

That our society’s economic strategy is effectively “how close can we skirt the line to serfdom and slavery” has no bearing on the devil’s advocate proposal of wholly unregulated intoxicants that I’m replying to. The state will tend to deregulate so long as an intoxicant leaves workers functional (however inefficiently) when they’re at work, and to regulate strictly when it impacts the job market; yet neither of these tendencies has any bearing on whether we should regulate or not. They’re just inherent biases to be aware of when discussing our society.

As well, take care not to assume that to regulate is to make illegal, make medical-only, impose punitive taxes, etc. Sometimes the outcome of regulation is refusing to get involved. Even then, at least where prosocial societal goals are given sufficient precedence, societies tend to impose some kind of age limit or mandatory mentor or religious process onto intoxicants with regard to however they define ‘minors’, so that teenagers have to work for it, can be statistically discouraged en masse without tripping their biological contrarian responses, can be chaperoned by wiser adults, etc.


Making substances completely illegal is the exact opposite of regulation, though.

I can construct many possible theories that underlie your claim but it would be rude for me to put words in your mouth and then reply to them. You’re welcome to offer an explanation if you’d like a second try. Though, I wouldn’t reply to ‘regulation has a special label at this one prohibitive extreme in specific’, which may save you a followup at least!

You can’t regulate the quality of things you don’t produce (or allow to be produced). You can’t regulate the sales of things you don’t sell (or allow to be sold).

This also conveniently sends it to your search provider, and possibly to the browser vendor for analytics.

These days it’s hard to be sure of anywhere you can paste a piece of text and be certain it’s not being sent to a server somewhere.

Depends on your settings.

So a threat actor buys access to a managed Kubernetes service, or some other Linux-based shared hosting platform, and now they have access to the computer.

Hell, GitHub Actions would do.


Is there any service that relies on Linux user separation or containers to separate different user accounts? I’m pretty sure you’re not supposed to do that and the proper way is to run different instances in virtual machines.

Basically every shared webhost that uses cPanel works like this. The security mechanism they use is called CageFS (https://cloudlinux.com/getting-started-with-cloudlinux-os/41...), which makes it so users can't see other users, but it's not like a VM or something.
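For a concrete sense of what that plain-Unix separation rests on (a hedged sketch, not CageFS itself; the directory and function names here are made up for illustration), here is the kind of permission check it all boils down to:

```python
import os
import stat
import tempfile

def world_readable(path: str) -> bool:
    """True if the 'others' permission class can read path."""
    return bool(os.stat(path).st_mode & stat.S_IROTH)

# Simulate a shared-host home directory instead of touching /home/*.
home = tempfile.mkdtemp()
os.chmod(home, 0o700)        # owner-only: the classic shared-hosting hardening
print(world_readable(home))  # False: other local users can't read it
```

The point being that this is discretionary file-permission isolation, enforced by the same kernel every tenant shares, which is exactly why it is weaker than a VM boundary.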

Right, you're not supposed to do that...

Yes, because hypervisors are just programs that run under Linux, not total CPU/memory isolation…

Lemme guess, you probably think this can be used to hack into the backend that runs AWS from any EC2 lol?


Linux is not Unix: it is not derived from AT&T Unix.

Around Linux 2.2 or 2.4 (possibly only SuSE Linux), the kernel even printed a startup message along the lines of "Unix compliance testing by UNIFIX", back when Unix was considered more prestigious than Linux. It is / was by some official definition "a Unix", though not "UNIX the trademark by AT&T".

I’m fairly certain they’re referring to POSIX compatibility, not calling Linux a Unix.

Oh damn, you are probably right.

By that definition, neither is BSD. It's kind of their whole raison d'être.

BSD was originally a derivative of AT&T Unix.

You should read some BSD history.

> can you deterministically test the thing you are asking it to do?

Of course: have it write tests first; and run them to check its work.
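A minimal sketch of that loop (the `slugify` function and its test are hypothetical examples, not from this thread): the test is written first, then the model is asked for an implementation, and running the test is the deterministic check.

```python
# Test written first; the agent's only job is to make it pass.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced  out ") == "spaced-out"

# A candidate implementation the model might return:
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

test_slugify()  # raises AssertionError on failure
print("ok")
```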

Works well for refactoring, but greenfield implementations still rely on a spec that is guaranteed to be incomplete, overcomplete and wrong in many ways.


You can't ask something to check its own work without external reward/penalty. It'll cheat.

Weirdly, and I fully think this is just some cognitive bias I don't have the knowledge to name, the AI seems very happy to please me. Like when it gets something done in one shot, it seems very happy to do so.

It's because expressing emotion tests well in RLHF (reinforcement learning from human feedback), which is the layer on top of the next-token-predictor LLM. As a bonus, it helps manipulate operator reactions to incorrect output and improves engagement (i.e. token use).

The "thought process" of an LLM only exists as inference response to next token prediction prompts. It's the illusion of emotion.
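To illustrate the claim (a toy sketch, nothing like a real model; the hard-coded lookup table merely stands in for learned next-token probabilities): autoregressive decoding is just a loop that appends whichever token scores highest next, so a cheerful-sounding reply is only ever a high-scoring token sequence.

```python
# Toy "LLM": the table maps a context to its single most likely next token.
TABLE = {
    ("<s>",): "Great",
    ("<s>", "Great"): "job",
    ("<s>", "Great", "job"): "!",
}

def generate(max_tokens: int = 5) -> list:
    tokens = ["<s>"]
    for _ in range(max_tokens):
        nxt = TABLE.get(tuple(tokens))  # greedy next-token prediction
        if nxt is None:
            break
        tokens.append(nxt)
    return tokens[1:]

print(generate())  # ['Great', 'job', '!']
```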


Well, if the spec is incomplete, it sounds like you should reduce the scope you give the AI and go from there. I wouldn't be too keen to give a junior engineer free rein and expect awesomeness.
