Hacker News | Banditoz's comments

Small aside: if they're dropping their "transparency" company value, does that mean that issue won't be visible anymore? Is that the future for GitLab?

What would you blame instead?

Blitzscaling. Reid Hoffman's snake oil (thought piece): https://www.blitzscaling.com/

It has poisoned more than one company (especially startups). It's the "go big or go home" mentality. The "the market is ours to take if we just add more fuel to this fire" mentality.

I was in a startup once (Reid was an investor). The CEOs bought into blitzscaling and told the whole company we were going to "blitzscale". They hired 2 directors (with 0 reports) and had ambitions of hiring 100s of engineers. Then reality struck. There was no revenue and no path to revenue (because it was the early days of AI). The blitzscaling was "paused". The directors each had 1 EM reporting to them. You can imagine what happened in the months after that.


> blitzscaling

what a tone-deaf way to name a business. yuck.


Anything that happened more recently? At some point, the "overhiring" excuse no longer holds water. Headline from 2050: "Big tech lays off thousands more, due to overhiring 30 years ago..."

If you create a problem and then do nothing about it, it’s still there years later.

The CEO. No one else to blame, really.

>...why shouldn't we enjoy how small you can get it to pop a root shell?

Because I want to know what the exploit is doing and how it works, and if it's even safe to run.

A privesc PoC is NOT the place for this kind of fun.


Agreed lmao the PoC itself looks like you’re getting attacked

Which I guess is true but I would like to verify the attack is the intended one


Arena is WotC's video game version of Magic the Gathering, yeah. Notably it's got microtransactions and such for opening packs of cards.


TBF, that's just kind of built into the product naturally.


Has the team considered going back all-in on Rails and SSR instead of this hybrid approach?


Have they completed the do-or-die Azure migration? I thought it had another year or something left..


How did the performance of GitHub become so slow in the first place? It didn't used to be this bad years ago.


Some hard numbers [1] as to why GitHub is struggling with stability issues, directly from GitHub's COO:

Yup, platform activity is surging. There were 1 billion commits in 2025. Now it's 275 million per week, which is on pace for roughly 14 billion this year (275M × 52 ≈ 14.3B) if growth remains linear (spoiler: it won't).

GitHub Actions has grown from 500M minutes/week in 2023 to 1B minutes/week in 2025, and now 2.1B minutes so far this week.

So we're pushing incredibly hard on more CPUs, scaling services, and strengthening GitHub’s core features.

1: https://x.com/kdaigle/status/2040164759836778878


All of which can be handled with horizontal scaling of identical components.

None of which explains poor latency when opening UI elements, which is more likely to be explained by overuse of SPA or spaghetti code in microservices.

Update: yup, that’s exactly it, just as I guessed: https://news.ycombinator.com/item?id=47912867


This whole thread is so embarrassing for GitHub.

The idea that you would change your product design in this way as a quick fix to solve a performance problem is insane.

This would be like if the battery life on a MacBook Pro was too short so Apple fixed it by removing the screen.

Job’s done, boss!


In a large enterprise if you task a front end team with solving a performance issue that is caused by the back end, invariably they’ll hack together some workaround… in the front end.

People only ever solve problems in the areas they have control over, whether that’s where the root cause is or not.


From what I remember, it got much worse the moment they started requiring JS for displaying what would otherwise be mostly static (and thus easily cached) content.


It used to be full page loads when you clicked on links too; performance got a lot worse (for me), both network-wise and client-side, when that changed.


AI. GitHub usage has exploded recently due to the ease with which code can be generated.


Not just due to code generation, but to AI code scraping and inspection.


GitHub CLI is not a SaaS. It's a command-line utility.


That doesn't mean it doesn't have usage patterns or other things telemetry would be useful for. And, at the rate these tools are being updated (multiple times a week, multiple times a day in some cases), they practically _are_ SaaS.


If the benchmarks are private, how do we reproduce the results? I looked up Humanity's Last Exam (https://agi.safe.ai/), which this model uses, and I can't seem to access it.


You can request access here: https://huggingface.co/datasets/cais/hle

The test data is purposely difficult to access to reduce the chance of leaking it into the training dataset.
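For anyone whose access request does get granted: a minimal sketch of loading the gated dataset with the Hugging Face `datasets` library (the token value is a placeholder, and the exact splits/columns are assumptions; check the dataset card for specifics):

  # Minimal sketch, assuming your HF account has been approved for the gated cais/hle dataset
  from huggingface_hub import login
  from datasets import load_dataset

  login(token="hf_xxx")           # hypothetical placeholder; use your own access token
  hle = load_dataset("cais/hle")  # only downloads once the access request has been approved
  print(hle)                      # inspect the available splits and columns

You still have to accept the access terms on the dataset page first; the token just proves to the hub that your account is one of the approved ones.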


Yep, latency's also a big deal if you play competitive multiplayer games. With DOCSIS you get ~11 ms ± 3 ms added to every packet no matter what, because it's shoehorned onto existing cable infrastructure. Fiber is much better in this regard.

Ping to my public IP's gateway address:

  30 packets transmitted, 30 received, 0% packet loss, time 29031ms
  rtt min/avg/max/mdev = 1.449/1.915/2.212/0.166 ms


I recently found a bypass for this. Put this in your ublock origin custom rules:

  www.linkedin.com##main:style(font-size: 16px !important;)

