echoangle's comments | Hacker News

> Permanent means self sustaining. I.e biodome completely isolated from outside with its own atmosphere.

According to whom, exactly? For me, permanent means "ongoing without breaks".


If you want another word for that, go with "Continuous".

The ISS has been continuously occupied since November 2, 2000. But it was not, in fact, expected by anyone to be a permanent station; it is made of non-replaceable parts that age and fail (decade scale), and it only has very limited life support supplies on board (month scale).


I would call the ISS a permanent station.

https://en.wikipedia.org/wiki/International_Space_Station#:~...

> It is made of non-replaceable parts

Every part of the ISS is replaceable if you want it to be.

> it only has very limited life support supplies on board (month scale)

I still don't see why self-sustainability is a part of being "permanent".


> I would call the ISS a permanent station.

Since the ISS's end of life is scheduled for 2030 - just four years from now - I really would not call it "permanent". Even if it gets a few years' reprieve, that's quite temporary.

> Every part of the ISS is replaceable if you want it to be.

There comes a point with buildings and with space stations where tearing down and completely replacing them is a better and cheaper option than repairing or extending them. The ISS is nearing that point.


Something that was permanent and is now scheduled for destruction is still permanent, no?

Or can we at least agree that it was permanent at some point in its life?

> There comes a point with buildings and with space stations where tearing down and completely replacing them is a better and cheaper option than repairing or extending them. The ISS is nearing that point.

Sure, but that's the case for everything, including permanent things. My house won't be around forever; I would still call it permanent housing.


> Something that was permanent and is now scheduled for destruction is still permanent, no?

No, the ISS was never permanent. It had a limited lifespan from the outset; it's actually already beyond the original 15-year design life, but it is not indefinite.

> The ISS was originally intended for a 15-year mission, but the mission has been repeatedly extended due to its success and support

https://en.wikipedia.org/wiki/International_Space_Station#En...

> My house won't be around forever; I would still call it permanent housing

That's true, in the sense that "A word means whatever I choose it to mean". If you were in a flat in an apartment building scheduled for demolition in 2030, would you call that "permanent housing"?


I think it depends on the context, but for a home, I would still call it permanent housing if it's due to be demolished in 2070, though probably not if it's 2030.

I’m not sure the bases in Antarctica all have a set lifetime so it doesn’t really matter for the original point.


I don't think anyone is saying that it's impossible to build a datacenter in space. Of course you can do that if you really want to.

But to make sense, it needs to be cheaper than on Earth, and that seems unrealistic.


Looking at the last plot, it seems like the backoff is roughly 1/5 of the total bandwidth and it happens every 50 ms or so. Wouldn't it make sense to reduce the backoff and the growth speed if a backoff occurs repeatedly in rapid succession? We want to maximize the area under the curve (transmitted packets), right?

As per the article, congestion control algorithms (CCAs) aim to maximize data transfer by inferring the "available bandwidth" of the network. CUBIC relies primarily on packet loss as a congestion signal. For recovery, CUBIC's window size is a cubic function of the time since the last congestion event.

After the initial packet loss deliberately triggered during the first two seconds of this experiment, the only thing that could cause loss is the network queue (i.e., simple tail drop, fq_codel, etc.), which cannot drain packets faster than they arrive. At this point the link is saturated. The loss becomes a signal for CUBIC to reduce its window, which causes the oscillations you pointed out.
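
For intuition, here's a minimal sketch of that recovery curve, plain Python with the constant names and defaults from RFC 8312 (not code from the article):

    # CUBIC congestion window after a loss event (RFC 8312 defaults).
    C = 0.4      # scaling constant
    BETA = 0.7   # multiplicative-decrease factor

    def cubic_window(t, w_max):
        """cwnd in segments, t seconds after a loss at window w_max."""
        k = (w_max * (1 - BETA) / C) ** (1 / 3)  # time to regain w_max
        return C * (t - k) ** 3 + w_max

    # The window drops to 70% of w_max, climbs steeply at first, then
    # flattens as it approaches the old maximum (cautious probing):
    for t in [0, 1, 2, 4, 5]:
        print(f"t={t}s cwnd={cubic_window(t, 100):.1f}")

The flat region of the cubic around w_max is essentially the plateau you see between loss events.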

Unlike CUBIC, BBR [0] uses a model-based approach that estimates the available bandwidth and leaves some headroom, kind of like you suggest, to achieve higher throughput, and it doesn't react as aggressively to loss as CUBIC does.

[0] https://datatracker.ietf.org/meeting/104/materials/slides-10...


Do you not want to be able to develop while being offline?

Also - I do not like developing on my personal machine. I got into this habit a long time ago - I would always use a remote Linux box, and now with LLMs I ride them bareback (or maybe they ride me). If I trash a machine (which has not happened yet), I just rebuild it or find another box.

I am retired and don't need to - I have a couple of Beelinks (just need my home wireless running) and a couple of VPSes if I really want to do things away from home, which I don't.

I cannot remember the last time I wanted internet access but could not find it. Cell coverage is pretty good and reliable these days.


With AI dependence, unless you are a holdout, offline development isn't really a thing anymore. Perhaps to do some code reviews, but actually producing new code?

That is sad.

Somewhat; it's a tradeoff we are making willingly.

I haven't made that tradeoff. I still code myself without agents from time to time.

Nothing changed. You can still code the old way. All those 100x productivity gains are probably closer to 10% productivity gains after you account for all the added debugging and steering.

No reason that you can't. ChromeOS has supported Linux containers for over a decade.

You totally can, I got Linux+VSCode+Docker running on my new Chromebook in less than 15 minutes, without doing any funny stuff.

But for optimal DX it can still be preferable to use a VS Code tunnel into your big powerful dev box that has everything configured just right.


The difference is that the agent doesn't run in real time. If 20 packets are lost and resent, the agent can still process them almost instantly and reply, in contrast to a human. Only the direction from the agent to the human needs to be real-time.

> In many systems of law, the punishment should mirror the crime. You gouge out an eye -> the government gouges out one of your eyes.

Which systems aside from sharia law would that be?

Also, the claim was that this law also applies to cyberbullying. So why should boys who cyberbully someone be caned, and girls not?


How is that a geojson problem? If your dataset is correct, adjacent borders will just use the same points and will match exactly.

The problem is simplification. Suppose two regions share a border with some non-collinear points a, b, c, d. Simplifying the polygon for the first region might yield a, b, d while the second yields a, c, d. This creates gaps or overlaps between the two regions.
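
Here's a tiny sketch of that failure mode, assuming shapely is available (coordinates invented for illustration):

    # Two polygons share the jagged border (1,0)-(1.2,1)-(0.8,2)-(1,3).
    # Simplifying each independently (plain Douglas-Peucker) may retain
    # different vertices along that border, creating gaps or overlaps.
    from shapely.geometry import Polygon

    left = Polygon([(0, 0), (1, 0), (1.2, 1), (0.8, 2), (1, 3), (0, 3)])
    right = Polygon([(1, 0), (3, 0), (3, 3), (1, 3), (0.8, 2), (1.2, 1)])

    print(list(left.simplify(0.3, preserve_topology=False).exterior.coords))
    print(list(right.simplify(0.3, preserve_topology=False).exterior.coords))
    # If the retained border vertices differ, overlaying the simplified
    # polygons produces slivers instead of a clean seam.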

But what is the border? Set the border to what it actually is, not a simplification of it. The state of Colorado is formally a 697-sided polygon; don't simplify it to a rectangle.

This is not what OP is describing. It is very common to simplify objects to shrink boundary data by orders of magnitude. GeoJSON has no way to keep shared borders correlated when you do that: simplifying country objects from a GeoJSON source can leave gaps between the country borders. So you either have a poor representation or a longer pipeline to convert objects to an amenable object set. It also breaks idempotency in some regards.

To do the simplification, you detect shared borders, simplify them, and generate polygons again. That doesn't make TopoJSON inherently superior. You can convert back and forth, and for many applications GeoJSON is easier to process.

Yes, you could write code to do that. Or use the utilities provided in the TopoJSON GitHub and let them do it for you: convert to TopoJSON, simplify, convert back to GeoJSON. They have already written all the code for you.
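
If I remember the tools correctly, that round trip is three commands (file names are placeholders; check each tool's --help for exact flags):

    geo2topo states=us-states.geojson > us.topojson
    toposimplify -p 0.01 -f < us.topojson > us-simple.topojson
    topo2geo states=us-states-simple.geojson < us-simple.topojson

Because the simplification runs on the topology's shared arcs, both sides of every border get simplified identically.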

Yeah, or you could use GeoJSON and https://mapshaper.org/

It depends on what purpose you are using the polygons for. In an online map you need to simplify way down. Consider these Colorado maps at two different zoom levels:

https://maps.app.goo.gl/JH93ko96QcoLXuBJ9

https://maps.app.goo.gl/au53iTnsmNdFuEZV8

Even the one zoomed in on the state appears to use maybe 15-20 vertices max.

In the second one, if I squint real hard I can just barely make out one slight dogleg on the western border and one on the south. And that is partly because I knew to look for them in the zoomed-in map.

If we use, say, the Census TIGER/Line boundary definitions for the states, we are probably talking about hundreds of thousands of vertices, perhaps millions. You won't be using those in an online map without simplifying.


The Texas border with Mexico is formally down the centerline of the Rio Grande, even as the river moves (ignoring fiddly complications). Even if you could somehow take a perfect snapshot of it at a given time, you'd run into the coastline paradox when sampling it.

So don't simplify the shapes on their own. GeoJSON is a storage and exchange format; you can still convert it to other formats if you want to modify it.

I think what the original comment is pointing out is that GeoJSON lacks a concept of a shared boundary. Expressing shared boundaries in GeoJSON means duplicating data, and whenever data is duplicated, there's a risk that the copies will not be exactly the same. That makes the task of modification more challenging, given that the real world is full of messy data, like duplicates not matching.

20-25 years ago I worked a lot with map data from otherwise high quality, and sometimes authoritative, sources like the USGS and NOAA that had this non-identical shared boundaries problem (in formats other than GeoJSON). If the format doesn't allow such mistakes to be expressed, then they have to fix their data to publish it in said format.
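
To make the duplication concrete, here's a schematic comparison (coordinates invented, and the TopoJSON heavily abbreviated, not a complete file):

    GeoJSON stores the shared border once per polygon:

      {"type": "Polygon", "coordinates": [[[0,0],[1,0],[1,3],[0,3],[0,0]]]}
      {"type": "Polygon", "coordinates": [[[1,0],[2,0],[2,3],[1,3],[1,0]]]}

    TopoJSON stores each border segment once as an "arc" and has both
    polygons reference it by index (-2 means arc 1, traversed reversed):

      {"type": "Polygon", "arcs": [[0, 1]]}
      {"type": "Polygon", "arcs": [[2, -2]]}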


Sure, but not every format is useful for everything. GeoJSON is great if you want a simple way to express a shape to show on a map. It's like criticizing CSV because people put strings in choice-value fields instead of using a foreign key to another table. That's just not what the format is used for.

I'd take your point further... No format is useful for everything. But we have to be aware of the trade-offs of each format (or language or tool or ...) in order to make the right choice of what to use for a given use case. We do that by sharing knowledge of where a given tool succeeds and where it falls down. Pointing out something a format doesn't handle well is not condemning that format for all use cases (I happily choose GeoJSON over other formats for many things).

Is there any service that relies on Linux user separation or containers to separate different user accounts? I'm pretty sure you're not supposed to do that, and the proper way is to run different instances in virtual machines.

Basically every shared webhost that uses cPanel works like this. The security mechanism they use is called CageFS (https://cloudlinux.com/getting-started-with-cloudlinux-os/41...), which makes it so users can't see other users, but it's not like a VM or something.

Right, you're not supposed to do that...

That would also let us skip the Keywords -> LLM prose -> LLM summary pipeline.


Pandoc can convert LaTeX to Typst and back, but probably only for simple snippets without any obscure packages. It's not lossless.
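
For simple documents it's a one-liner each way (needs a reasonably recent Pandoc with the Typst reader/writer; file names are placeholders):

    pandoc -f latex -t typst paper.tex -o paper.typ
    pandoc -f typst -t latex paper.typ -o paper.tex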
