The trick is to define "privacy-preserving age verification" in an extremely narrow way that ignores any other privacy concerns.
For example, imagine you put the same private key into the 'secure element' of every single iPhone. You use code signing so that key is only unlocked when the phone is running unmodified iOS with all security updates. You use encryption and remote attestation for the front-facing camera and Face ID depth sensor. You use NFC to read government-authenticated age and appearance data from biometric passport chips (or digital ID cards) and you store it on-device.
Then, when you want to access pornhub, they send an age challenge to your device, your device makes sure your face matches the stored passport, and if so it signs the challenge with the private key.
Pornhub gets an Apple-signed attestation of age - but because every phone signs challenges with the same private key, Pornhub can't link it to a particular phone or identity document.
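Roughly, the flow would look something like this (a toy sketch; the names, key handling and stubbed checks here are made up to illustrate the shared-key idea, not Apple's actual implementation):

```python
# Hypothetical sketch of the challenge flow described above. The biometric and
# passport checks are stubbed out; the point is that every device signs with the
# same key, so the site only ever learns "some compliant device says 18+".
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

SHARED_DEVICE_KEY = Ed25519PrivateKey.generate()    # same key baked into every secure element
VENDOR_PUBLIC_KEY = SHARED_DEVICE_KEY.public_key()  # published so sites can verify signatures

def face_matches_stored_passport() -> bool:  # stub for the on-device biometric check
    return True

def passport_says_over_18() -> bool:         # stub for the stored passport data check
    return True

def site_issue_challenge() -> bytes:
    return os.urandom(32)                    # random nonce from the relying site

def device_answer_challenge(challenge: bytes) -> bytes | None:
    if not (face_matches_stored_passport() and passport_says_over_18()):
        return None
    return SHARED_DEVICE_KEY.sign(challenge) # sign only the nonce, nothing identifying

def site_verify(challenge: bytes, signature: bytes) -> bool:
    try:
        VENDOR_PUBLIC_KEY.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

challenge = site_issue_challenge()
assert site_verify(challenge, device_answer_challenge(challenge))
```

The only thing the site can check is that *some* compliant device holding *some* valid passport answered its nonce; nothing in the response is specific to a phone or a person.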
So in a very narrow sense, privacy is preserved.
You can't use someone else's ID, as it checks your face every time. You can't fool it with a photo of the person because of the depth sensor. You can't MITM/replay the camera/depth data because the link is encrypted. You can't substitute software that skips the check with a rooted phone because of the code signing. Security holes can be closed by just pushing a mandatory OS update.
Sure, it doesn't work on PCs. Doesn't work on Linux, or on unlocked/rooted phones. It hands users' government ID documents over to Google and Apple. It requires people to carry foreign-made, battery-powered, network-connected GPS trackers (with cameras, microphones and speech recognition) with them. And there are non-negotiable terms of service everyone must agree to. But if you define "privacy-preserving" to ignore all that stuff and only consider whether Pornhub learns your identity, it's privacy-preserving.
That key will get leaked. A key that has to go into every phone, even if done at the manufacturer and onto the TPM chip, will get out.
Also even if it doesn't get leaked directly, the security of TPM chips is not absolute. Secrets from them can theoretically be extracted given an attacker with sufficient means and motivation. Normally nothing that's on a typical TPM chip would warrant a project of that magnitude, but a widely used private key can change that equation.
Plus, a TPM chip doesn't really have any way to tell whether the phone is being lied to. You could swap out the actual phone camera hardware and sensors for a custom board that feeds the phone camera data of your choosing, and it would be none the wiser.
Maybe? But biometric passports, chip-and-pin payment cards and SIM cards seem to do reasonably well. And Apple can always push out a mandatory software update that rotates the key, if they need to.
> You could swap out the actual phone camera hardware and sensors for a custom board that feeds the phone camera data of your choosing, and it would be none the wiser.
Apple's 'TrueDepth' cameras are serialised and paired with the rest of the device. The Touch ID sensors were too, before that.
I don't know the precise details, but reports from people trying to repair devices independently of Apple are that the phone is very much the wiser.
14-year-old me ran into porn on the internet all the time. It didn't turn me into a serial killer.
Meanwhile we let kids have exposure to algorithms that pervert their sense of self worth, get them addicted to dopamine and gambling, and make them feel inferior to their peers.
We have the wrong priorities as a society.
And this bullshit is going to turn us into a completely tracked, monitored, controlled bunch of cattle.
"Think of the children" is the stated reason but not the actual reason. We've seen this pattern so many times that it's perplexing that people continue to fall for it.
If the children were the actual reason, there are much less invasive solutions that enable reliable parental controls, such as mandating self-classification of content and fining service operators for inaccuracies.
Think for yourself and consider what the possible ulterior motives might be.
> Sure, and in the meantime try to think and read about how privacy-preserving age verification actually works.
This requires you build a whole apparatus around controlling what people can see, say, and do.
The concept of a "slippery slope" is often called a logical fallacy, but in reality, more often than not, it's not a fallacy at all. It's the manner in which you boil the frog.
I think something like over 50% of adults do not have kids now. Why should we put the majority of people - for the majority of their lives - at risk just so a mere 20% of the population can "not see boobs", when good parenting will suffice?
Let's not put a cage around our freedoms. Let's ask parents to be more responsible. In the edge cases where that isn't sufficient, is that really as bad as what could happen to all of our liberties should we go down that path?
We're burning down the whole village because someone saw a cockroach.
The app[1] on the user's device[2] forwards that request to the chip on the user's ID card. The user authorizes themselves with their 6-digit PIN, which is stored on the card.
The chip produces a signed reply containing the following payload fields: `issuing_country:string` and `over_18:bool`
What happens when I set up a tor hidden service that (in conjunction with some client software) stands in for a visitor's device and will proxy any requests back to my personal card? After all the payloads are anonymous so what's the risk to me?
To prevent this sort of abuse, the server would have to request the `pseudonym` field, which contains a hash across the server identity and the card's secret salt, allowing the server to detect abuse but not to track the user across multiple services.
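A sketch of how that `pseudonym` field could work - this is my assumption of the construction (a keyed hash over the server identity, keyed with the card's secret salt), and the names are made up:

```python
# Hypothetical pseudonym derivation: stable per (card, service) pair, but
# pseudonyms at different services can't be correlated with each other.
import hashlib
import hmac

def pseudonym(card_secret_salt: bytes, server_identity: str) -> str:
    # Keyed hash so the pseudonym can't be brute-forced back to the card's salt.
    return hmac.new(card_secret_salt, server_identity.encode(), hashlib.sha256).hexdigest()

card_salt = b"secret-salt-unique-to-this-card"
print(pseudonym(card_salt, "pornhub.com"))   # same value every time this card talks to this site
print(pseudonym(card_salt, "example.org"))   # unlinkable to the value above
```

The site sees the same pseudonym every time this particular card answers a challenge for it, so it can rate-limit or block an abusive card, while two different sites still can't join their records together.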
It's probably even simpler than that: normal users make a few requests once in a while (because they don't need thousands of tokens every day), so if one user makes a ton of requests, that's an indication that this user may be abusing the system.
It would probably be possible to use the service that the parent is suggesting and try to link it to requests to the server based on timing. But I don't even know if anyone would bother trying to identify the OP: probably it would just be enough to rate-limit the requests.
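Even something as simple as counting requests per pseudonym would make the hidden-service-proxy trick stand out (toy sketch, arbitrary threshold):

```python
# Toy abuse detector: count today's attestation requests per pseudonym and flag
# anything that looks like a card being shared or proxied at scale.
from collections import Counter

DAILY_LIMIT = 20  # arbitrary threshold for the sake of the example

def flag_abusers(pseudonyms_seen_today: list[str]) -> set[str]:
    counts = Counter(pseudonyms_seen_today)
    return {p for p, n in counts.items() if n > DAILY_LIMIT}
```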
As always: it's easy to criticise, harder to actually get it right.
They're not actually accessible with 'only one set of keys' in my experience.
You actually have to present your photo ID at the site entry gatehouse, then again to the building entry guard (who will also check you have a work permit and a site-specific safety induction), then you swipe a badge at a turnstile to get from reception into the stairwell, then swipe your badge at a door to get onto the relevant floor, then swipe your badge and key in a code to enter the room with the cages, and then you use the key.
If you're charitable (as you should be), then a reasonable assumption is that they probably know what happens on a dairy farm, and that's actually their point.
I once worked at a company that had a wealth of backups. A backup generator, backup batteries as the generator takes a few seconds to start, a contract for emergency fuel deliveries, a complete failover data centre full of hot standby hardware, 24/7 ops presence, UPSes on the ops PCs just in case, weekly checks that the generators start, quarterly checks by turning off the breakers to the data centre, and so on.
It wasn't until a real incident that we learned: (a) the system wasn't resilient to the utility power going on-off-on-off-on-off as each 'off' drained the batteries while the generator started, and each 'on' made the generator shut down again; (b) the ops PCs were on UPSes but their monitors weren't (C13 vs C5 power connector) and (c) the generator couldn't be refuelled while running.
Even if you've got backup systems and you test them - you can never be 100% sure.
> Instead, they were quick to point out that it’s hard to know where these warnings could come from, and we cannot risk all those critical workflows failing when some case of misuse surfaces in a new context.
Ah yes, Schroedinger's workflow. So important any disruption is a disaster, and simultaneously so unimportant they couldn't possibly spend a single dime on the tools critical to the workflow.
I heard the CEO of Lets Encrypt, Warren Buffet, accidentally started a fire while charging his e-unicycle in the data centre and that knocked out the server that issues the certificates. They've got a backup, but it's in a safe only two people have keys to; one keyholder, Anne Hathaway, is at a parrot show in Singapore this week and her flight back is delayed due to fuel shortages. The other keyholder, Henry Kissinger, it turns out has been dead for 3 years.
> The following methods can be used to acquire residential IP addresses for a residential proxy network:
> Software development kit (SDK) partnerships: Proxy services convince mobile application developers to include their SDK in applications in exchange for payment for each person who downloads the application. Individuals download the application and accept the terms and conditions, allowing the SDKs to run in the background and route proxy traffic through users' devices.
> Virtual private network (VPNs) with hidden terms of service: Free VPN services may enroll users' devices in a residential proxy network, without obtaining their consent. The details are often hidden in the terms of service, which most users do not read prior to download, or the language is difficult for the user to understand.
> [malware and compromised IoT devices]
> Passive income schemes: Proxy services convince people to download applications on their device that promise to pay them for their internet bandwidth. People often do not realize that criminals use their internet connection to commit cyber attacks
One Reddit post says bandwidth-sharing passive income schemes paid them $1 to $9 per month.
PGP’s web of trust was kinda bad privacy-wise in some regards, as it basically revealed your IRL social network.
If my PGP public key has 6 signatures and they’re all members of the East Manitoba Arch Linux User Group, you can probably work out pretty easily which Michael T I am.
Are there successful newer designs, which avoid this problem?
The IRL social network is actually the important part of the trust structure.
The only one of these I've seen that really worked was the Debian developer version: you had to meet another Debian developer IRL, prove your identity, and only then could you get the key signed and join the club.
> The IRL social network is actually the important part of the trust structure.
For Debian-style applications that are 100% about openness and 0% about secrecy, sure.
But if you want to secure communications between pro-democracy activists in China, or you're a Snowden-like whistleblower wanting to securely communicate with journalists - y'all probably don't want to be vouching for one another's keys.
I participate in developing anti-censorship tools. Chinese users are a significant part of the user base, and it has some overlap with activists there. In practice, identity settles at "who controls this email address".
Self-signed PGP is very occasionally used to prove continuity across channels or addresses. Cross-signed basically never.