Hacker News: codingdave's comments

> In software, the things that matter are durable artifacts: diffs, commits, PR conversations, failing tests, logs, benchmarks, small design notes, repro scripts, traces. That’s what you actually reason with.

No, the author utterly misses the point. What matters is the final product that the end users see. Everything in that list is an artifact of a dev process that truly doesn't mean squat to end users. Those tools were all developed to help human coders.

If you truly believe AI is the way forward, and you believe that people are going to move out of the loop, then nothing in that list matters for the AI-driven future. That is actually the list of tools that needs to be thrown in the garbage and replaced by something new that helps AI perform its function.


> ... data centres that won’t have an environmental impact here on Earth.

Really? I wonder how they are going to get them up there without rocket launches?


And getting them down. Or allowing them to come down on their own... I doubt that is entirely environmental impact free.

LEO satellites come down on their own in a few months or years. 100 tons of metal burning up in the atmosphere seems like a lot, but it's actually barely the total mass of meteorites that fall in 24-48 hours.

Misleading, because satellites are made of different materials to meteors. Satellites are the dominant injection source of 24 elements into the atmosphere, including Al, Cu, Ti, Nb, Co, Zn, Sn, Pb, Ag, Li, V, Hf, W, Ge, Mo, Zr, B, and Ba. This list includes many transition metals, whose catalytic effects on ozone and cloud nucleation are mostly unknown.

Copper in particular is a well-known catalyst for the destruction of ozone.

Source: https://www.sciencedirect.com/science/article/pii/S027311772...


A million satellites isn't going to be 100 tons; even if they're all on the small side, say 100 kg each, the total is 100,000 tons, so by your numbers, if they last 3 years on orbit, they'd double to triple the mass rate burning up on re-entry. I think SpaceX is actually talking about 1-10 tons per satellite, making this more like 10-100x if they last 3 years, but between AI hallucinations and Musk's increasing disconnect from reality (let alone political toxicity), this is basically irrelevant: SpaceX won't reach these higher masses to orbit spread over this number of satellites regardless.
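The back-of-envelope above can be checked quickly. This is a sketch using the thread's rough figures (1M satellites, 100 kg each, 3-year lifetime, ~100 t of meteorites per 24-48 h), which are assumptions from the discussion, not measured data:

```python
# Back-of-envelope: satellite re-entry mass rate vs. natural meteorite influx.
# All inputs are the thread's rough assumptions, not authoritative figures.

SATS = 1_000_000
MASS_PER_SAT_T = 0.1          # 100 kg per satellite, in tonnes
LIFETIME_YEARS = 3

sat_tonnes_per_year = SATS * MASS_PER_SAT_T / LIFETIME_YEARS   # ~33,333 t/yr

meteorite_low = 100 / 2 * 365   # 100 t per 48 h -> ~18,250 t/yr
meteorite_high = 100 * 365      # 100 t per 24 h -> ~36,500 t/yr

# How much the total burn-up rate grows relative to the natural rate alone:
ratio_low = (sat_tonnes_per_year + meteorite_high) / meteorite_high   # ~1.9x
ratio_high = (sat_tonnes_per_year + meteorite_low) / meteorite_low    # ~2.8x
print(f"{sat_tonnes_per_year:.0f} t/yr from satellites; "
      f"total burn-up {ratio_low:.1f}-{ratio_high:.1f}x the natural rate")
```

Which is where the "double to triple" figure comes from; the 1-10 t/satellite case scales the same arithmetic by 10-100x.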

Aggravatingly, I have seen research estimating that even the much smaller number of satellites currently in orbit is already enough to be unstable with regard to a Kessler cascade, and any question about the realism of Musk's goals from finance and engineering limits is clearly not enough to prevent this kind of scenario. Which may result in other governments interfering with his ketamine supply to make sure their satellites aren't caught up in one.

The simplest helpful thing for the Kessler problem is to "just"* have fewer, larger satellites, and if Starship actually delivers the launch costs necessary to make space-based data centres worth the bother vs. just buying some cheap desert land, I anticipate Musk getting managed upwards by his staff in this regard.

* nothing in space is "just"


Totally sidestepping the issue and refuting the words.

Regardless of how they fall, they still fall on the planet.

And this still ignores the massive atmospheric pollution of chemical rocket launch.

Space elevator would be a big help with launch, but the trash is still dropped on the ground, or in the ocean, in the end.


We're only starting to see and understand the damage. https://www.space.com/space-exploration/launches-spacecraft/... https://www.science.org/content/article/burned-satellites-ar...

Leave it to the chainsaw man, who has already become the millennium's worst killer, to wreak yet more sad havoc and ruin upon the sphere. What absolute trash, what a mad, frivolous, pointless ambition, meant only to crowd out anyone from thinking about this enormous mass stupidity and destruction. Taking up and taking over space, for no clearly stated reason or value except to steal from us all, to deny and claim from the rest. Madness. https://www.thelancet.com/journals/langlo/article/PIIS2214-1...


Musk has clearly stated the reason and value. You know that. You're just trying to be wrong because the internet told you you'll be a good person for hating him.

Giant space data centers, up where the few particles that are bouncing around are already at 1000 degrees.

It seems like an incredible amount of pollution to make, to go lord over everyone's heads. This isn't a plan that has any empathy for the earth or reason, except to just deny everyone else access, to burn as much rocket fuel as conceivably possible. So no one else can. Just go build some terrestrial solar, please, thanks.

The man is the bloodiest butcher of the millennium, and this is a vile stealing of shared human space. Saying nothing of substance while throwing random jabs my way to defend him is ignoble and distracting, and adds zero engagement.


I wonder about the impact on our health of all the metals that will be present in the atmosphere after several months. For example, it is well known that lead in gasoline has increased crime.

Why are so many people just desperate to imagine environmental, economic, or social harm from any new technology or ambitious projects? Are you all just too old to enjoy the thought of an amazing future that's better than today? Or are you too brainwashed by the negativity in the media and think disasters are the only things that can happen? Or are you just bitter about life and can't have any hope? Or do you just feel smug being a nay-sayer to anything ambitious?

> Why are so many people just desperate to imagine environmental, economic, or social harm from any new technology or ambitious projects?

Rockets aren't new technology and they are not imagining the environmental harm. It has been known for a long time. It is just that with only ~300 launches per year (and about 35000 launches ever) the harm has not risen to the level of something that has to be limited.

A million data center satellites is a significant increase in that harm. Furthermore, data center satellites are expected to have a service life of maybe 3-5 years, so there will be an ongoing 200-333k replacements needing to be launched every year. That's 3.3-5.6k launches per year at 60 satellites per launch.
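The replacement cadence falls straight out of the constellation size and service life. A sketch, using the thread's assumed figures (1M satellites, 3-5 year life, 60 satellites per launch):

```python
# Replacement launch cadence for a hypothetical 1M-satellite constellation.
# Inputs are the thread's assumptions, not official SpaceX figures.

SATS = 1_000_000
LIFE_MIN_YEARS, LIFE_MAX_YEARS = 3, 5
SATS_PER_LAUNCH = 60

repl_max = SATS / LIFE_MIN_YEARS   # ~333,333 satellites/yr at a 3-year life
repl_min = SATS / LIFE_MAX_YEARS   # 200,000 satellites/yr at a 5-year life

launches_min = repl_min / SATS_PER_LAUNCH   # ~3,333 launches/yr
launches_max = repl_max / SATS_PER_LAUNCH   # ~5,556 launches/yr
print(f"{repl_min:.0f}-{repl_max:.0f} sats/yr -> "
      f"{launches_min:.0f}-{launches_max:.0f} launches/yr")
```

For comparison, that steady-state cadence is ten to fifteen times the ~300 launches per year the whole world manages today.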


100 tons is quite a lot of GPUs. If they manage to solve such "minor" problems as powering and cooling them, they could run for a decade or so without consuming or polluting. The methane burned to get that mass into orbit is trivial: a 500MW power plant burns that much in under a day.

> 100 tons is quite a lot of GPUs

Is it? 100 tons of GB300 racks gives you a measly 10MW data center, which is ~0.04% of the expected 30GW of new data centers they want to build by 2030. It's not even considered a medium-sized data center at that point.
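A sketch of that arithmetic, using assumed per-rack figures (roughly 1.5 t and 140 kW for a liquid-cooled GB300-class rack; illustrative values, not official specs):

```python
# Sanity check: how much data-center power does 100 t of GPU racks buy?
# RACK_MASS_T and RACK_POWER_KW are assumed illustrative values,
# not official NVIDIA specifications.

TOTAL_MASS_T = 100
RACK_MASS_T = 1.5        # assumed mass of one GB300-class rack, tonnes
RACK_POWER_KW = 140      # assumed power draw of one rack, kW
PLANNED_GW = 30          # planned new data-center build-out by 2030

racks = TOTAL_MASS_T / RACK_MASS_T            # ~67 racks
power_mw = racks * RACK_POWER_KW / 1000       # ~9.3 MW
share = power_mw / (PLANNED_GW * 1000)        # ~0.03% of the planned build-out
print(f"{racks:.0f} racks, {power_mw:.1f} MW, {share:.4%} of {PLANNED_GW} GW")
```

That lands in the same ballpark as the ~10MW / ~0.04% figures above.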

And once you count the hundreds of square meters of solar panels and cooling panels you'd need for each rack, you can easily multiply the total weight by 2-5x.

They won't run for a decade or two either; the failure rate at 3 years is ~50%.

And of course all of that ends up burning up and is completely unrecyclable. It just doesn't make any fucking sense no matter how you look at it, really.


It won't be possible to keep the solar panels in full sunlight 100% of the time, so you need batteries too.

The math don't math. Too many young dudes watched too much space opera with big heavy armored spaceships that rumble when they fly. Real space is lightweight and fragile. We don't make data centers out of that stuff.


Yeah, that response trivializes the massive burn that power plants perform each day.

When I worked in a midstream gas company, I recall a meeting when we were explaining the business to some new IT folk, and talking about the plants that process 100K barrels. One new guy in particular literally dropped his jaw and said, "you process 100K barrels of gas a year??" The room looked at him like he was insane and the woman running the meeting politely replied: "No, per day."

So acting as if "it burns less than a power plant" somehow means it is trivial is just a really odd take.

Besides, the methane burn is one piece of the puzzle. There is more to environmental impact than just methane.


Yes, it is trivial when humanity is burning 100 million barrels of oil per day, 300 tons of coal per second, and 100 tons of natural gas per second.
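Those per-second rates follow from rough annual global consumption figures. A sketch, assuming roughly 8.5 Gt of coal and 2.9 Gt of natural gas burned per year (approximate public-energy-statistics magnitudes, not exact data):

```python
# Convert rough global fossil-fuel consumption to per-second burn rates.
# The annual tonnages are assumed approximate figures for scale only.

SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000

COAL_T_PER_YEAR = 8.5e9              # assumed ~8.5 Gt/yr
GAS_T_PER_YEAR = 2.9e9               # assumed ~2.9 Gt/yr

coal_t_per_s = COAL_T_PER_YEAR / SECONDS_PER_YEAR   # ~270 t/s
gas_t_per_s = GAS_T_PER_YEAR / SECONDS_PER_YEAR     # ~92 t/s
print(f"coal ~{coal_t_per_s:.0f} t/s, gas ~{gas_t_per_s:.0f} t/s")
```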

The problem isn't GPUs; the problem is cooling them.

Look into what percentage of the ISS by weight is radiators, look into how little power it can generate and radiate, and you'll see that space data centers are the shitcoin pitch of 2026.


Also, they are not building these in space with current tech. We clearly don't have it. Cars barely drive themselves in cities; we're a decade behind being able to build and maintain a datacenter in space.

The ISS is not comparable; we don't have to keep GPUs in human-habitable temperature ranges, and the rate of radiative cooling goes way up with increased temperature.

The difference in cooling rate between 290K and 370K isn't as large as you think.
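Both sides of this can be made concrete with the Stefan-Boltzmann law: an ideal black body radiates sigma * T^4 per unit area, so the ratio between the two temperatures is easy to compute:

```python
# Stefan-Boltzmann: power radiated per unit area by an ideal black body.
# Compares radiators at 290 K (human-habitable) vs. 370 K (hot electronics).

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_w_per_m2(t_kelvin: float) -> float:
    """Ideal black-body radiated power per square meter at temperature T."""
    return SIGMA * t_kelvin ** 4

p_cold = radiated_w_per_m2(290)   # ~401 W/m^2
p_hot = radiated_w_per_m2(370)    # ~1063 W/m^2
print(f"{p_hot / p_cold:.2f}x")   # ~2.65x more heat shed per m^2 at 370 K
```

So running hot buys roughly a 2.65x improvement per square meter of radiator; whether that counts as "large" is the disagreement above.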

I live in a tourist town. 3000 residents, 4 million visitors each year. And I'm just fine with the tourists not going to the places I go - we tend to like the quieter, more affordable places vs. the big fancy price-gouging places. But assuming that us "locals" just sit at home and do nothing is such an unfair and inaccurate assessment. Why would I want to live in a town as crazy as this if I did nothing here?

I enjoy having a vast variety of restaurants and activities that I otherwise would not have in a small town in the Midwest. The roads are well maintained, we have more parks than we otherwise would, there are trails, rivers, and tons of activities. We don't spend all our time partaking of the tourist activities, but we abso-freaking-lutely spend some time enjoying what the town has to offer.


No fair! I live in a place with lots of tourism and the roads are terrible.

Same here. Potholes in England are part of the tourist package.

You are not alone, but it is not the future. I am currently working a short-term gig within a company with a similar setup - the leadership wants to go that way, and some teams are doing so. Those teams are also failing. Their code passes tests, but doesn't hold up to real usage. The team members don't know the code well enough to even talk about the problems. The DevOps teams don't know the platforms well enough to talk about problems either. Some of the most senior folks in the org will just paste AI slop into a document, then not even be able to navigate their own document on a call to explain things.

"Miserable" is absolutely the right word for it. "Dystopian" is another word I use to describe it. Either way, the results I'm seeing make me think that this trend won't last. These companies will either figure out that it isn't working, or they will fail.

FWIW, "AI-first" is one thing. "AI-only" is where it crosses from a strategic choice into a f'ing pile of stupidity.


I sympathize with their plight. I worked way too many hours the last 2 weeks when bots took down my servers, too.

But rather than try a grass-roots effort to try to change legislation, we tweaked the rules on the AWS WAF. They have built-in rule sets to do targeted inspection levels, which brings in far more capability to block headless browser-based bots.

By all means, talk to your reps about bringing change to our governmental systems. But also learn what options exist right at your fingertips. The AWS WAF is imperfect, to be sure. But you can do more with it than was described here.


I agree with the sentiment, but would not be so harsh on the details. Succeeding in a different technical industry does offer some validation that they can succeed in this industry. Dismissing their achievements as irrelevant, even if true, is an unfair response to someone who sounds sincere in their desire to learn. It is OK to be a little simplistic and off-track at this point in their journey. Everyone starts at the beginning.

I would recommend completely different next steps. I think they are on the right track - using the tools available to them to get projects out and learn how to create. Because that is the key difference between where they are coming from and where they want to go: They are not maintaining a pre-existing system anymore, they are building new systems from scratch, so they don't need to focus on the "mechanics" of CS, they need to focus on the perspective change of owning the design of every piece of the system from the ground up.

My recommendation would be to work on finding parallels between their old skills and software dev. They clearly can understand a complex system, so rather than focusing on the line-by-line (which AI can do for them), they need to focus on the system itself. I'd recommend they look at existing complex projects, get local instances running on their machine, and break them down layer-by-layer to understand the pieces - DB, back-end, front-end, caching, the underlying servers, containers, and hardware, and then the front-line distribution of it all via the web with TCP/IP, DNS, CDNs, etc.

OP has an opportunity to step into this industry and pick up that overall systemic understanding that many of today's coders never even bother to learn. They can then add in actual coding skills and product skills, learn how to create with AI, and probably do quite well.


> watches a non-engineer ship a working product

I have not seen that happen. Ever. I've seen engineers put up the guardrails that allow vibe coding to build a base of a product, which they then finish out either by hand or by spending a boatload of money on tokens.

But not once have I seen a non-engineer do so. The non-engineers are the guys who throw something together in a weekend, slap it online, have it hacked before Monday morning, then say "Oops, I guess I do need to learn a bit about how software works."


The disconnect here is a lack of proof that your increase in personal output actually increases the speed of software development. Consider https://en.wikipedia.org/wiki/Ninety%E2%80%93ninety_rule - a joke, but a true one, about how software projects go. Does AI skip that second 90%? Or do we add a whole new bottleneck of review and corrections, and still need to code that last 90%?

When I measure software dev, delivery of code isn't even a metric I care about. It is a key part of the process, to be sure, but I care about results - Did we ship? Did it work? Do we have happier customers and a smaller bug list?

In my experience, while I can answer "yes" to those questions on people who use AI assistance surgically, applying it where its strengths lie... I can answer an emphatic "No" for the teams I've worked with who are "AI-first", making the AI usage itself part of their goals.


That would be the little downwards-facing arrow to the left.

> an entire app ready to be used today.

That is exactly where the disagreement stems from. That app is a draft version that might work for a couple people. It won't scale. It won't be secure. It won't handle edge cases. It won't be flexible enough to iterate based on customer feedback.

That doesn't mean LLM-assisted code has no value. It does mean the guidance needed to go from "v000.1" to something you could actually build a business upon is still significant.

Will LLMs bridge that gap more in the future? Maybe. But honestly, hopefully not. Instead, I hope they stop just churning out the same CRUD apps and wrappers that we did a few years ago and do something new. Because if all they do is what humans do, just faster... cool, useful, but not worth all the hype.

LLMs are useful tools. I use them. But just like the hammer that sits on my shelf and also gets used, they are just a tool. They won't be truly interesting (to me, at least) unless they are doing things that humans cannot do.


>That is exactly where the disagreement stems from. That app is a draft version that might work for a couple people. It won't scale. It won't be secure. It won't handle edge cases. It won't be flexible enough to iterate based on customer feedback.

As if startup code doesn't have the same issues pre-AI? And still they get to billions of valuations with such code.

They can always pay some beefier consultants when they absolutely have to, for scaling it up or hardening it.

That "it won't be flexible enough to iterate based on customer feedback" is more wishful thinking. It would be code like any other code, following some patterns. In fact, the architecture can be fine-tuned by the human in the loop anyway; they just won't need 5 more humans to assist them in coding it.

>Because if all they do is: What humans do, just faster... cool, useful, but not worth all the hype.

That's literally what automation in any field is. Why should it be something more, as if this huge breakthrough should already be taken for granted within a few years of being available?

