Hacker News | mentalgear's comments

Sam Altman: the CEO of companies selling 'Intelligence' and 'Trust' (... me Bro) as a service.

Judging by everyone in the AI space, is he that different, though?

Not a fan, but unfortunately a "digital proof of citizenship" seems inevitable due to the enshittification of the internet, autocratic state actors' doctrines of destabilising free societies through disinformation (which pairs well with social media's en-rage-ment business model), and the more recent AI slopification / AI bots running wild.

The question is whether citizens can build enough pressure for such verification systems to be state-based and truly zero-knowledge (akin to the EU's), versus having the private sector 'verify' each user to siphon data, profit off it (Thiel's Persona), and fortify surveillance capitalism and autocratic administrations.


At the moment in the UK (where any mention of digital ID sends half the population mental) you have to email a whole raft of ID docs and personal data to estate agents, mortgage brokers, solicitors etc. to get an ID check done. Or use a private ID service that may charge a fee and may not be any more secure than my passport scan sitting in someone's M365 mailbox. You can't know.

I'd be happy to have a government service replace all that nonsense, where a one-time challenge code could verify my ID. There is now a GOV.UK "One Login" authentication service used by other government services that is essentially a digital ID, as far as I can see. It just needs to be made mandatory for ID checks by law.
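The one-time challenge-code flow described above could be sketched roughly as follows. This is a purely hypothetical illustration (all types, function names, and the polling design are invented; GOV.UK One Login's actual API works differently):

```typescript
// Sketch of a one-time challenge-code ID check. The relying party
// (estate agent, solicitor, ...) never sees the underlying documents --
// only a yes/no attestation from the government ID service.

type Attestation = {
  verified: boolean;   // identity confirmed by the government service
  subjectRef: string;  // opaque, per-relying-party pseudonym
  expiresAt: number;   // attestation validity window (epoch ms)
};

// In-memory stand-in for the government ID service's state.
const pending = new Map<string, Attestation | null>();

// Relying party requests a short-lived, single-use challenge code,
// which the citizen enters into (or scans with) their ID app.
function issueChallenge(): string {
  const code = Math.random().toString(36).slice(2, 10).toUpperCase();
  pending.set(code, null);
  return code;
}

// Called by the ID service once the citizen approves the check in their app.
function citizenApproves(code: string): void {
  if (!pending.has(code)) throw new Error("unknown or expired challenge");
  pending.set(code, {
    verified: true,
    subjectRef: "anon-" + code, // pseudonymous: no name, DOB, or documents
    expiresAt: Date.now() + 5 * 60_000,
  });
}

// The relying party polls for the result; it learns only verified: true/false.
function checkResult(code: string): Attestation | null {
  return pending.get(code) ?? null;
}
```

The point of the sketch is the data boundary: the relying party's scanned-passport inbox is replaced by an opaque attestation, so a breach on their side leaks nothing reusable.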

Such a service can also be used for age verification with the correct privacy controls in place, far better than all the dodgy age verification services that exist now.

Digital ID and age verification are going to be a part of the internet going forward. I'd rather have a government service that (in a functioning democracy) has accountability to the citizens that use it. ID verification is also a natural monopoly, so the government picks a winner anyway.


> Data for sale included people’s gender, age, month and year of birth, socioeconomic status, lifestyle habits, mental health, self-reported medical history, cognitive function, and physical measures.

If this is not traceable back to individuals, it would probably be good to make it public. But I assume the UK Biobank only gives access to trusted partners, since, as we know in our 'data analytics' day and age, with enough general data you can trace anything back to anyone if you have the resources. And the surveillance-capitalism economy certainly provides the profit motive.


I paid for The Guardian because if we don't support truly independent, objective, investigative journalism, who will?

Certainly not Billionaires buying newspapers (e.g. Washington Post/Bezos, ...).


> if we don't support truly independent, objective, investigative journalism, who will?

Like Eric Schmidt, Bill Gates, Warren Buffett, George Soros and countless other billionaires through their "charities"? https://theguardian.org/

Just because they are liberal and non-profit doesn't mean they are independent; that only appears to be the case if you think within the narrow confines of the Overton window between "conservative" and "liberal" in mainstream discourse.


> We took a wrong turn by locking ourselves into content silos and embracing comfort instead of seeking truth, and it will not end well unless we do a hard u-turn to authenticity and sovereignty.

We didn't do that: capitalist interests did.


Pretty sure we still chose the silos. We voted with our wallets.

SuperagentLM made on-edge PII redaction models available a few years ago already, in sizes 20B, 3B, and 200M. They still seem to be available via their legacy API; well worth checking out to compare against this one. https://docs.superagent.sh/legacy/llms/superagent-lm-redact-...

> Gary Miller, one of the researchers who investigated these attacks, told TechCrunch that some clues point to an “Israeli-based commercial geo-intelligence provider with specialized telecom capabilities,” but did not name the surveillance provider. Several Israeli companies are known to offer similar services, such as Circles (later acquired by spyware maker NSO Group), Cognyte, and Rayzone.

This. Plus, if you want to even attempt measuring real 'intelligence', you want to run a neuro-symbolic, de-lexicalized benchmark (e.g. DL-ReasonSuite, SoLT, GSM-Symbolic), which none of the providers releasing new models showcase.
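The core idea of a de-lexicalized benchmark in the GSM-Symbolic style can be sketched like this (a toy illustration only, not the harness of any of the suites named above; the template and scoring function are invented): the problem is a symbolic template, each evaluation instantiates it with fresh names and values, and scoring uses the recomputed ground truth rather than a memorized answer.

```typescript
// Toy de-lexicalized benchmark item: names and numbers are placeholders,
// so a model cannot rely on having memorized one specific wording.

type Instance = { prompt: string; answer: number };

const NAMES = ["Avery", "Bram", "Chika", "Dario"];

function randomInt(lo: number, hi: number): number {
  return lo + Math.floor(Math.random() * (hi - lo + 1));
}

// One symbolic template; each call yields a fresh lexical/numeric variant.
function instantiate(): Instance {
  const name = NAMES[randomInt(0, NAMES.length - 1)];
  const a = randomInt(3, 20); // quantity bought
  const b = randomInt(2, 9);  // price per item
  return {
    prompt:
      `${name} buys ${a} widgets at ${b} pounds each. ` +
      `How much does ${name} spend in total?`,
    answer: a * b, // ground truth recomputed per instance
  };
}

// Score any prompt -> number "model" on n freshly generated instances.
function score(model: (prompt: string) => number, n: number): number {
  let correct = 0;
  for (let i = 0; i < n; i++) {
    const { prompt, answer } = instantiate();
    if (model(prompt) === answer) correct++;
  }
  return correct / n;
}
```

A model that has only memorized the canonical phrasing of a problem fails as soon as the surface forms change, which is exactly the gap such benchmarks are built to expose.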

This sounds all great - I just wish there was a JS/TS port to be able to use it in the browser or from node/deno/bun!

Fair; it actually started out in JS, moved to Deno, then Zig, and ended up in Rust.

If I ever find the time I'd like to back port what I have now, up the chain.

It is supposed to be an RDF replacement, so it will eventually have to happen, but it's hard work to make everything extremely idiomatically integrated into the host language.


Yes, LLMs should not be allowed to use "I" or indicate they have emotions or are human-adjacent (unless explicit role play).

Why, though? Just because some people would find it odd? Who cares?

Trying to limit / disallow something seems to hurt the overall accuracy of models. And it makes sense if you think about it: most of our long-horizon content is in the form of novels and longer works. If you clamp the machine to machine-speak, you lose all those learnings. Hero starts with a problem, hero works the problem, hero reaches an impasse, hero makes a choice, hero gets the princess. That can be (and probably is) useful.


Is it? I don't think most of the content LLMs are trained on is written in the first person. Wikipedia, news articles, and other informational articles aren't written in the first person. Most novels, or at least a substantial portion of them, are not written in the first person.

LLMs write in the first person because they have been specifically fine-tuned for a chat task; it's not a fundamental feature of language models that would have to be explicitly disallowed.


Because an LLM saying "I got confused, dropped the database, and then got scared and hid this from you" obscures the "why" behind what LLMs actually do. I would also prefer they were less sycophantic and argued with what I want to do rather than treating the user as a god (e.g. "the algorithm you're trying to use is less performant than an alternative").

I think it is a fair perspective to allow role play, and it's useful too, when explicit. It does not really make sense for the AI to cosplay as human all the time, though.

The whole reason ChatGPT got so popular in the first place, though, is that humans found it easier to intuitively interact with a system that acts and seems more like a human.

Was that a good thing though?
