It has poisoned more than one company (especially startups). It's the "go big or go home" mentality. The "the market is ours to take if we just pour more fuel on this fire" mentality.
I was in a startup once (Reid was an investor). The CEOs bought into blitzscaling and told the whole company we were going to "blitzscale". They hired 2 directors (with 0 reports), with ambitions of hiring hundreds of engineers. Then reality struck: there was no revenue and no path to revenue (because it was the early days of AI). The blitzscaling was "paused". The directors each had 1 EM reporting to them. You can imagine what happened in the months after that.
Anything that happened more recently? At some point, the "overhiring" excuse no longer holds water. Headline from 2050: "Big tech lays off thousands more, due to overhiring 30 years ago..."
Some hard numbers [1] as to why GitHub is struggling with stability issues, directly from GitHub's COO:
Yup, platform activity is surging. There were 1 billion commits in 2025. Now it's 275 million per week, on pace for 14 billion this year if growth remains linear (spoiler: it won't).
GitHub Actions has grown from 500M minutes/week in 2023 to 1B minutes/week in 2025, with 2.1B minutes so far this week alone.
So we're pushing incredibly hard on more CPUs, scaling services, and strengthening GitHub’s core features.
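For what it's worth, the quoted figures hold up to a quick back-of-envelope check (the linear extrapolation is theirs; the implied growth rate is my own rough math):

```python
# Back-of-envelope check on the quoted GitHub numbers.
commits_per_week = 275e6
print(f"~{commits_per_week * 52 / 1e9:.1f}B commits/year")  # ~14.3B: the "14 billion" pace

# Actions minutes/week: 500M (2023) -> 1B (2025), i.e. doubling over ~2 years
actions_growth = (1e9 / 500e6) ** (1 / 2)
print(f"~{(actions_growth - 1) * 100:.0f}%/year growth")  # ~41%/year if compounding evenly
```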
In a large enterprise, if you task a front-end team with solving a performance issue that is caused by the back end, invariably they'll hack together some workaround… in the front end.
People only ever solve problems in the areas they have control over, whether that’s where the root cause is or not.
From what I remember, it got much worse the moment they started requiring JS for displaying what would otherwise be mostly static (and thus easily cached) content.
It used to be full page loads when you clicked on links too; performance got a lot worse (for me), both network-wise and client-side, when that changed.
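To make the caching point concrete: a fully static page can carry an ETag and be answered with an empty 304 on repeat visits, while JS-rendered content forces a script download plus API round trips every time. A minimal sketch (a hypothetical toy server, not GitHub's actual setup):

```python
# Toy server showing why static content is easy to cache: repeat visitors
# get an empty 304 instead of the page being re-sent (or re-rendered).
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<html><body><pre>static diff view</pre></body></html>"
ETAG = '"%s"' % hashlib.sha1(PAGE).hexdigest()

class CachedStatic(BaseHTTPRequestHandler):
    def do_GET(self):
        # The browser sends back the ETag it saw last time...
        if self.headers.get("If-None-Match") == ETAG:
            self.send_response(304)  # ...and is told to use its cached copy
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("ETag", ETAG)
        self.send_header("Cache-Control", "public, max-age=60")
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

HTTPServer(("", 8000), CachedStatic).serve_forever()
```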
That doesn't mean it doesn't have usage patterns or other things telemetry would be useful for. And, at the rate these tools are being updated (multiple times a week, multiple times a day in some cases), they practically _are_ SaaS.
If the benchmarks are private, how do we reproduce the results? I looked up the Humanity's Last Exam (https://agi.safe.ai/) this model uses and I can't seem to access it.
Yep, latency's also big if you play competitive multiplayer games. With DOCSIS you get ~11ms ±3ms added to every packet no matter what, because it's shoehorned onto existing cable infrastructure. Fiber is much better in this regard.
Ping to my public IP's gateway address:
30 packets transmitted, 30 received, 0% packet loss, time 29031ms
rtt min/avg/max/mdev = 1.449/1.915/2.212/0.166 ms
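To put that in gaming terms, a rough sketch using the figures in this thread (I take the ~1.9ms gateway RTT above to be a fiber connection, read the quoted ~11ms as added round-trip latency, and the 25ms edge-to-server RTT is a made-up illustrative number):

```python
# Rough last-mile comparison using the figures in this thread.
fiber_last_mile = 1.9           # ms, avg RTT to the gateway (ping output above)
docsis_last_mile = 1.9 + 11.0   # ms, same path plus the quoted DOCSIS overhead
server_rtt = 25.0               # ms, hypothetical RTT from ISP edge to a game server

for name, last_mile in (("fiber", fiber_last_mile), ("DOCSIS", docsis_last_mile)):
    print(f"{name}: ~{last_mile + server_rtt:.0f}ms in-game ping")
# fiber: ~27ms, DOCSIS: ~38ms -- and the ±3ms jitter matters as much as the mean
```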