Hacker News | Avamander's comments

It's all very proprietary and the tooling is ass, there's a lot of wasted effort creating and testing out the same stuff. Bluetooth is just as horrible for the same reasons.

There aren't any usable chipsets with usable drivers for 802.11ah unfortunately.

ASN.1 is not used because of just bitpacking. There are other benefits to ASN.1 and it's probably one of the least problematic parts there.

People who have thought they can do better have made things like PGP. It's one of the worst cryptographic solutions out there. You're free to try as well though.


People who thought they could do better did JWT, which is not complicated at all and has no bugs either. It also solves maybe 20% of what ASN.1 is used for.

Maybe a bit pedantic, but it would actually be the more general JOSE, which includes tokens (JWT), signatures (JWS), and key representation (JWK).

And there is a related binary format that uses CBOR (COSE) as well.


Kind-of. But there are worse things than outages when it's PKIs we're talking about. DNSSEC is also extremely opaque and unmonitored. Any compromise will not be noticed. Nor will anyone have any recourse against misbehaving roots.

Fun fact: Cloudflare has used the same KSK for the zones it serves for more than a decade now.


Which is fine. Not because KSK rollover is supposedly complicated, but because if you can't manage to keep your private keys and PKI safe in the first place, then key rotation is just a security circus trick. But if you do know how to keep them safe, then...

It is not fine. Keeping key material safe is not a boolean between "permanently safe" and "leaks immediately".

Keeping key material secure for more than a decade while it's in active use is vastly more complex than keeping it secure for a month, until it rotates.

For all we know, some ex-employee might be walking around with that KSK, theoretically being able to use it for god knows what for another decade.


Yeah, theoretically. They "only" need continued access to CF's internal systems. Surely you're aware that the ZSK is confined to your zone and can be rotated as much as you want without having to involve the root/registrar, and with none of the risks or consequences of not knowing how to perform a KSK rollover?

What's your take on the conundrum of Amazon Trust's 20+ year root cert, with which they sign a 5+ year intermediate, with which they sign a 2-month leaf?


> Keeping key material secure for more than a decade while it's in active use is vastly more complex than keeping it secure for a month, until it rotates.

Nope. Key material rotation is just circus when it's done for the sake of rotation.

> For all we know, some ex-employee might be walking around with that KSK, theoretically being able to use it for god knows what for another decade.

Or maybe an employee has compromised the new key that is going to be rotated in, while the old key is securely rooted in an HSM?


The point of rotation for these kinds of keys is that it limits the blast radius of what happens if an employee compromises such a key. This is sort of like how there are one or two die-hard PGP advocates who have come up with a whole Cinematic Universe where authenticated encryption is problematic ("it breaks error recovery! it's usually not what you want!") because mainstream PGP doesn't do it. Except here, it's that key rotation is bad, because of how often DNSSEC has failed to successfully pull off coordinated key rotations.

I can see the periodic rotations used as a way to keep up the operational experience. This is indeed a valid reason, although it needs to be weighed against the increased risk of compromise due to the rotation procedure itself.

I'm just saying that rotating the key just in case someone compromised it is not a great idea. Doubly so if it's done infrequently enough for the operational experience to atrophy between rotations.

And yeah, I fully agree that anything surrounding the DNSSEC operations is a burning trash fire. It doesn't have to be this way, but it is.


I'm glad we agree about DNSSEC, but the rationale I'm giving you for key rotation is the same reason we use short-lived secrets everywhere in modern cryptosystems. It's not controversial (except among Unix systems administrators).

Oh, I never disagreed about the state of DNSSEC. It's horrible. Along with the rest of the DNS infrastructure (I just had reason to remember the DNS haiku again today, unrelated to .de). My disagreement is that I believe that DNSSEC should be fixed, rather than abandoned. And I believe that this does not actually require all that much work.

And I just don't fully buy this rationale for asymmetric key rotation. It makes total sense for symmetric secrets (except for passwords).


> Or maybe an employee has compromised the new key that is going to be rotated in, while the old key is securely rooted in an HSM?

Also possible, but that'd be an active threat that has some probability of being caught.

Never replacing keys allows permanent compromise that can only be caught if someone directly observes misuse.

Though nobody monitors DNSSEC like that, nor uses it, so it's fine from that aspect I guess.


> Nope. Key material rotation is just circus when it's done for the sake of rotation.

I'm a mere sysadmin and not a cybersecurity expert. But this is always something that leaves me torn.

On the one hand, yes, rotation periods for many/most credentials are long enough that you're not really de-risking yourself all that much.

On the other hand, doing regular rotations allows you to tighten up your threat model. A regularly-rotated credential allows you to say "I implicitly trust that this credential has not been compromised prior to the previous rotation."[0] Whereas, without credential rotation, you're saying "I implicitly trust that this credential has not been compromised ever."

The latter to me seems clearly like the inferior model. The question is just whether the cost-benefit pencils out. And that is obviously very situationally dependent. That calculus doesn't pencil out when dealing with user-owned passwords for instance (i.e. the costs of regular password rotation dominate the benefits of the improved threat model). Human limitations with memory and such are the main issue there. However, that doesn't apply to e.g. hypothetical sufficiently developed DNSSEC infrastructure. Does that calculus pencil out there? I don't know. But it seems plausible at least.

[0] Modulo attackers having been able to pivot into a persistent threat with a previously-compromised credential.


The "can't" comes from the fact that VLC is not going to rewrite their forum software or software forge.

Software written in PHP is in most cases frankly still abysmally slow and inefficient. WordPress runs like 70% of the web and you can really feel it from the 1500ms+ TTFB most sites have. PhpBB is not much better. Pathetic throughput at best and it has not gotten better in decades now.

I don't know how GitLab became so disgustingly slow. But yeah, I'm not surprised bots can easily bring it to its knees.


> WordPress runs like 70% of the web and you can really feel it from the 1500ms+ TTFB most sites have. PhpBB is not much better.

At least phpBB died 15 years ago with most communities migrating to Xenforo. I'm not quite sure how or why WP is still around with so many SSGs and SaaS site builders floating around these days.


Xenforo is not much better and has many "administrators" whining about bot traffic as well.

The funniest part about WordPress is that you can usually achieve at least a 50% speed boost or more by adding a plugin that just minifies and caches the ridiculous number of dynamic CSS and JS files that most themes and plugins add to every page. Set those up with HTTP 103 Early Hints preload headers (so the browser can start sending subresource requests in the background before the HTML is even sent out, exactly the kind of thing HTTP/2 and /3 were designed to make possible) and then throw Cloudflare or another decent CDN on top, and you're suddenly getting TTFBs much closer to a more "modern" stack.
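As an illustration of that Early Hints flow (hypothetical asset paths, assuming a plugin has already combined and minified the theme's CSS/JS into single cached files), an origin that supports RFC 8297 sends an interim 103 response with `Link` preload headers before the final HTML response:

```http
HTTP/1.1 103 Early Hints
Link: </wp-content/cache/all.min.css>; rel=preload; as=style
Link: </wp-content/cache/all.min.js>; rel=preload; as=script

HTTP/1.1 200 OK
Content-Type: text/html
Link: </wp-content/cache/all.min.css>; rel=preload; as=style
Link: </wp-content/cache/all.min.js>; rel=preload; as=script
```

The browser can start fetching the two subresources as soon as the 103 arrives, while the PHP backend is still rendering the page.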

The bizarre thing is that pretty much no CMS, even the "new" ones, seems to automate all of that by default. None of those steps are that difficult to implement, and they provide a serious speed boost to everything from WordPress to MediaWiki in my experience, and yet the only service that seems to get close to offering it is Cloudflare.

Even then, Cloudflare's tooling only works at its best if you're already emitting minified and compressed files and custom-written preload headers on the origin side, since the hit of decompressing all the origin traffic to make those adjustments and analyses is way worse for performance than just forwarding your compressed responses directly. That's why they removed Auto Minify[1] and encourage sending pre-compressed Brotli level 11 responses from the origin[2], so people on recent browsers get pass-through compression without extra cycles being spent on Cloudflare's servers.

The solution seems pretty clear: aim to get as much stuff served statically, preferably pre-compressed, as you can. But it's still weird that actually implementing that is still a manual process on most CMSes, when it shouldn't be that hard to make it a standard feature.

And as for Git web interfaces, the correct solution is to require logins to view complete history. Nobody likes saying it, nobody likes hearing it. But Git is not efficient enough on its own to handle the constant bombardment of random history paginations and diffs that AI crawlers seem to love. It wasn't an issue before, because old crawlers for things like search engines were smart enough to ignore those types of pages, or at least to accept it when the sysadmin says they should ignore those types of pages. AI crawlers have no limits, ignore signals from site operators, make no attempt to skip redundant content, and in general are very dumb about how they send requests. This is a large part of why Anubis works so well; it's not a particularly complex or hard-to-bypass proof-of-work system[3], but AI bots genuinely don't care about anything but consuming as many HTTP 200s as a server can return, and give up at the slightest hint of pushback (though they do at least try randomizing IPs and User-Agents, since those are effectively zero-cost to attempt).

[1]: https://community.cloudflare.com/t/deprecating-auto-minify/6...

[2]: https://blog.cloudflare.com/this-is-brotli-from-origin/

[3]: https://lock.cmpxchg8b.com/anubis.html but see also https://news.ycombinator.com/item?id=45787775 and then https://news.ycombinator.com/item?id=43668433 and https://news.ycombinator.com/item?id=43864108 for how it's working in the real world. Clearly Anubis actually does work, given testimonials from admins and wide deployment numbers, but that can only mean that AI scrapers aren't actually implementing effective bypass measures. Which does seem pretty in line with what I've heard about AI scrapers, summarized well in https://news.ycombinator.com/item?id=43397361, in that they are basically making no attempt to actually optimize how they're crawling. The general consensus seems to be that if they were going to crawl optimally, they'd just pull down a copy of Common Crawl like every other major data analysis project has done for the last two decades, but all the AI companies are so desperate to get just slightly more training data than their competitors that they're repeatedly crawling near-identical Git diffs just on the off-chance they reveal some slightly different permutation of text to use. This is also why open source models have been able to almost keep pace with the state of the art models coming out of the big firms: they're just designing way more efficient training processes, while the big guys are desperately throwing hardware and crawlers at the problem in the desperate hope that they can will it into an Amazon model instead of a Ben and Jerry’s model[4].

[4]: https://www.joelonsoftware.com/2000/05/12/strategy-letter-i-... - still probably the single greatest blog post ever written, 26 years later.


> And as for Git web interfaces, the correct solution is to require logins to view complete history.

Why logins, exactly? Who would have such logins; developers only, or anyone who signs up? I'm not sure if this is an effective long-term mitigation, or simply a “wall of minimal height” like you point out that Anubis is.


This is the effect of "every vulnerability is a bug" and "we can't rate the severity of any vulnerabilities".

Which very clearly results in "bugfixes" (security patches) not making it everywhere in time, because it's simply ridiculous to ask each downstream consumer to rate the severity of everything on their own. It's easy to shit on CVEs, and some even put out shit CVEs, but the same critics contribute absolutely nothing towards providing a better alternative.

It's quite certain that both the Linux project and the Linux CNA need to take some responsibility and put in some effort at communicating and making it easier to triage.


They can't. Linux has too high a profile. Any additional "in group" that had access to embargoed critical security information would have a much higher chance of being compromised.

The solution is not to tell more people that patch xxxxxx is a critical security bugfix that needs distros to roll new kernel versions immediately.

Major vendors (all the cloud providers) will have security teams that can have the bug mitigated in a few minutes once they're notified.

For everyone else...

Part of the solution is that distros need to stop believing that their distro kernel branches are any better than linux-stable, and use linux-stable and engage with the linux-stable list and patchsets if they're concerned about what's going into them.

Part of the solution is each distro needs a process for pushing critical updates (module blacklists, ebpf patches) to address things like this without forcing all distro users to reboot, which many won't do promptly anyway.


I used to work in a group that 'managed' this information a while back: Red Hat product security, dealing with embargoed flaws and disclosure dates. It was non-trivial to keep that process managed.

I do think that it's the right thing to do, if the reporter is willing to come to the party, but I also understand if they don't want to.

> Part of the solution is each distro needs a process for pushing critical updates (module blacklists, ebpf patches) to address things like this without forcing all distro users to reboot, which many won't do promptly anyway.

Almost like a 'mitigation tool' that doesn't require expertise on the user's end, but on the provider's end.


Can Livepatch mitigate this or is it already? I don't know where to look this up.


I used the mitigation from this CVE report to turn off AF_ALG.


These things were caught and basically all of them weren't covered by any test suite (not even GNU coreutils'). It's a bit bold to claim that it's actively worsening it when it's not an LTS.


That's generally what you call introducing new semantic bugs.


> It's a bit bold to claim that it's actively worsening it when it's not an LTS.

It is LTS now. And non-LTS releases are still releases.


This is where I kind-of like the idea of PowerShell, it's just that I dislike almost all other aspects of it and around it.


Same - psh has one good idea and it’s this. The next evolution of shells needs to include it.


can either of you elaborate what you mean? are you talking about support for structured data passing between scripts/programs?


Yes - https://devblogs.microsoft.com/scripting/working-with-json-d...

Tons of bugs in scripting in Unix come from the fact that data and metadata are interspersed in the same stream (you can mitigate somewhat with stderr vs stdout but hardly anyone does). Examples include things like trying to handle random filenames from * expansions.

It’s a bit more annoying to deal with sometimes, but for actual scripts it’s much more foolproof.

xargs is one of the programs that is designed to work around the original issue.
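A minimal sketch of that data-vs-metadata problem (the filenames here are hypothetical, purely for illustration): a whitespace-delimited text stream cannot safely represent arbitrary filenames, while a NUL-delimited one can, which is the workaround `xargs -0` exists for.

```shell
# Set up a directory with one "awkward" filename containing a space.
dir=$(mktemp -d)
touch "$dir/plain.txt" "$dir/with space.txt"

# Naive pipeline: xargs splits on whitespace, so "with space.txt"
# is mangled into two bogus arguments -> 3 output lines, not 2.
ls "$dir" | xargs -n1 echo | wc -l

# NUL-delimited pipeline: file data can never be mistaken for a
# record separator -> the correct 2 output lines.
find "$dir" -maxdepth 1 -type f -print0 | xargs -0 -n1 echo | wc -l
```

PowerShell-style object pipelines sidestep this entirely, because the filename travels as a property of an object rather than as bytes in a stream that downstream tools have to re-parse.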



Yes, structured data between scripts and programs. No xargs, tee, awk, sed, grep mangling. No "argument list too long" errors.

So many problems are avoided, but at the same time the Windows ecosystem is just so far from providing a properly usable terminal experience. Things are still really not designed to be used from PowerShell.


right, see my response to the sibling comment.


> but they'll still hold hostage of the vast swathes of average white collar workers with Office, people that don't care at all about technology as long as they have Word and Excel.

I can't wait for the anti-trust lawsuits. M365 and O365 are already super shady in terms of being able to migrate out or be interoperable with other solutions. "Accidental" roadblocks almost everywhere.


There won't be any.

I'm old enough to remember this happening: https://en.wikipedia.org/wiki/Standardization_of_Office_Open...

Basically, Microsoft furiously bribed their way into formally standardizing the utterly broken MS Office formats, so EU and potentially other regulators couldn't mandate them to be "interoperable" with existing standards (e.g. OpenDocument, based on OpenOffice, which was on its normal way to become standardized with no fast tracking and no bribing). They even called it "Office Open" to foster confusion.

They can do whatever they want and get away with it because a big part of their business model is, much like Oracle and SAP, based on bribing government bodies across the world.


Yes, but this time there’s the additional driving force of countries trying to become more self reliant and not get locked into US software giants (France and Germany for example). A long way to go, but it’s gaining more traction than the past half-assed attempts.

