I've driven one. Zipcar UK (RIP!) had a few Fiat 500 Hybrids, and I ended up with one once when every other nearby Zipcar was booked and I had a last-minute need for a car.
Given it's a relatively gutless car to begin with (1-litre, 3-cylinder, 70 hp tinpot engine), I wondered what the zigzag/lightning icon on the dash was, so I googled it.
Turns out the system uses an 11 Ah lithium battery that lives under the driver/passenger seat and charges through regenerative braking. It gives a small boost during acceleration (mostly at low speeds, so it's more for stop-start urban driving); I think it's not much more than a glorified belt around the crankshaft giving a few extra hp.
No appreciable benefit to it that I could feel, but if it's helping us burn fewer dinosaurs then that's all good. (It's still a car but much better than a massive wankpanzer.)
Here's how it works in Russia: if your corporate VPN is blocked by mistake, you can just submit an application to whitelist it, providing all the necessary documentation (we have a pretty advanced e-government system, so you can submit it online), and with high probability it will be accepted. If your VPN gets accidentally blocked again, all you need to do is write to an on-duty officer and it will be unblocked.
I think it was a bad idea to put cryptographic APIs or VPNs in the kernel. If userspace is too slow for this, you should either reduce context-switch overhead or create a special kind of process that is isolated but quick to switch into. They are repeating Windows's mistakes.
I like the idea of keeping stuff out of the kernel as much as possible, but in this case, there are good reasons why cryptography has to live in the kernel.
We need on-disk encryption, and we need to be able to boot from an encrypted disk. So we need encryption in the kernel for that.
We need network filesystems, and we need the traffic over the network to be encrypted. So we need encryption there too.
IPsec, for better or for worse, authenticates and partially encrypts traffic at the network layer, so if we want a Linux machine to speak IPsec, we need encryption in the kernel.
Fixing/changing this would require a huge restructuring of the kernel; it would basically require switching to a microkernel. Given the fact that nobody's ever written a microkernel that doesn't completely suck ass, I don't know that it would be worth the effort.
I don't think it was a bad idea. Pursuing any idea requires an investment, and the kernel layer was the better investment; just ask the history of export-control law what the US feared breaking more. Having security in userland means attacks in either the kernel or userland are worthwhile against it. In the kernel it could have been secured better than OpenSSL was, with fewer resources, and could have kept keys unavailable from userland. Instead it got basically no uptake, as everyone hobbled along on slightly more resources spread even thinner across OpenSSL clones.
I think the best way to keep children from dangerous content is large fines for parents: for example, $4000 for every adult video their child was traumatized by due to their negligence. 50% of the fine would be shared with the person who reported the violation (including site operators). After all, being a parent is a responsibility.
Such a law would not inconvenience normal Internet users without children, would provide an additional source of income for vigilant people and underpaid school staff, and would result in a much higher degree of compliance. Why don't you guys elect people like me?
Don't you think our society has already pushed too far in the direction of mandated helicopter parenting? You can hardly let your kids play independently in the US nowadays without getting a CPS check-up because someone believes kids should be on leashes; what you're proposing is significantly more draconian.
Maybe, but why should normal people without children have to experience inconvenience, Internet restrictions, and verifications just because there is a minority of negligent parents? Children are their parents' responsibility. Instead of banning adult sites, isn't it better to ban families with children from the Internet? Make some family-friendly Internet, let them all go there, and don't bother normal people.
Probably not in places like Germany, where over half the population is over 45. As the US becomes more like child-scarce Europe, it will become even more hostile to children. And parents will be more and more slaves to the state, raising children however society says they ought to be raised. The purpose of the parent is to pay and be punished; the purpose of the outsider is to rest on the smug shoulders of the state and proclaim how morally superior they are, at no cost to themselves.
As it becomes increasingly apparent that having children is a sucker's game, where everyone piles penalties on you while eagerly awaiting the social security payments of your children (you make ~all the investment, then they take the profits), people will have even fewer.
If you feel that parents are treated unfairly, the solution is to impose a tax on people without children and use it to pay salaries for raising kids. I think everybody agrees that monetary support is much better than verbal and moral support.
As I understand it, "green threads" are also expensive: for example, you either need to allocate a large stack for each "thread", or hook stack allocation to grow the stack dynamically (like Go does); and if you grow the stack, you might have to move it, and then you cannot have raw pointers to stack objects.
Green threads are fine for large servers with memory overcommit. Even with static stack sizes, you get benefits over OS threads due to the simpler scheduling. But the post was about embedded and green threads really suck there. Only using as much stack as you need for the task is the perfect solution for embedded systems.
>and if you grow the stack, you might have to move it
Most stacks are tiny and have bounded growth. Really large stacks usually happen with deep recursion, but it's not a very common pattern in non-functional languages (and functional languages have tail call optimization). OS threads allocate megabytes upfront to accommodate the worst case, which is not that common. And a tiny stack is very fast to copy. The larger the stack becomes, the less likely it is to grow further.
>cannot have pointers to stack objects
In Go, pointers that escape from a function force heap allocation, because in principle it's unsafe to refer later to the contents of a destroyed stack frame. And if we only have pointers that never escape, it's relatively trivial to relocate them during stack copying: just detect that a pointer falls within the address range of the stack being relocated and recalculate it relative to the new stack's base address.
Yes, you're not getting Rust performance (though a good part of that is Go using its own compiler backend vs. all the LLVM goodness), but performance is good enough and the benefits for developers are great: having goroutines be so cheap means you don't even need to do anything explicitly async to get what you want.
Rust chose a different design space for their async implementation though, so what works well for Go wouldn't work well for Rust. In particular, the Rust devs wanted zero-cost FFI that external code doesn't need to know about, which precludes Go-like green threads.
Rust can be used in contexts like dynamic linkers, kernels, libc, microcontrollers, and dynamic libraries: all sorts of places Go has no business running. And it can use async in many of them. Go works fine for many contexts, but we already have languages like Go that work for those contexts. Rust is for the contexts they don't work well for. It's painful that it keeps being pushed to support things that would make it harder to support the areas it is unique in supporting.
It only shows "Update and shut down" (which is a lie, as it will still reboot) or "Update and reboot".
I think all this was a response to severely outdated Windows machines being infected with worms and whatnot. Microsoft got bad press for this and went (way) overboard trying to force users to install updates as soon as they're released.
I wonder whether it has to do with the actual installation being in progress in the background. There is probably a time window in some updates where an interruption leaves the OS in a bad state.
That doesn't justify it because the user should still be able to decide when to initiate it. People are ok with not interrupting the update if they get to choose when it starts.
They are caused by moral busybodies who think they know better.
To quote CS Lewis:
>Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.
> They are caused by moral busybodies who think they know better.
If you know better just hack it your way. Linux is an open platform. Nothing prevents you from gutting Ubuntu and making it your own. You can't say the same for Windows though.
So I guess it's not a matter of a monopoly on idiocy, but of whether you can do something about idiotic decisions when they are made. This is why an open platform will always win: it's just architecturally better for the end user.
While this might be technically true, I also think it's a lazy argument that ignores practical reality. It's basically a way to avoid any kind of accountability or self-reflection on the part of developers. "Users aren't happy? If they don't want to make the change themselves, they can fuck off." This is a toxic attitude which I see a lot in discussions of free software.
In practice, 99% of the time it's not worth the time and effort to fork and maintain a large project. Even in a free ecosystem, users get locked into specific products and technologies. This is why sane technical leadership and responsiveness to user feedback are important, even (especially?) in open source projects.
Can you tell me an instance where users got locked into a dying ecosystem in Linux?
What I can tell you is that CentOS, which was used extensively on servers, died, and you didn't really see much of an issue, at least not compared to the pain and suffering users are going through now that Windows is the one dying.
What's lazy is the repetition of this "realist" fallacy of technical lock-in, when in fact what you really have is what you see: an open platform you can very well just leave for another when you disagree with the current vendor.
Dislike Ubuntu and you can very well migrate. That's the practical reality.
There are several software packages that are essentially mandatory if you want to run a modern distro with good desktop hardware support. Some that come to mind are glibc, systemd, and Wayland. These projects have made controversial design decisions which impact the entire ecosystem of Linux software.
I actually did leave Ubuntu because background Snap updates were randomly crashing running applications. Now, I'm fairly happy with Fedora, but it's far from perfect. I reject the idea that if I have technical critiques of these projects, that the fault somehow lies with me if I'm not willing to waste my time jumping distros or rewriting them myself. That attitude is exactly analogous to the user-hostile bullshit coming out of Microsoft.
> These projects have made controversial design decisions which impact the entire ecosystem of Linux software.
> I reject the idea that if I have technical critiques of these projects, that the fault somehow lies with me if I'm not willing to waste my time jumping distros or rewriting them myself. That attitude is exactly analogous to the user-hostile bullshit coming out of Microsoft.
I understand it's frustrating when your distro or OS starts acting up. It's a means to an end, it should get out of your way and let you do your work.
On the other hand, it's impossible to appeal to everyone, so every decision will make some people happy and others unhappy. There's no way around it. The only thing that matters is whether we can live with it or not; if not, the options are to fix it or move on.
It's frustrating but nobody owes you anything. The sooner you realise this the better.
I, for instance, wasn't happy with anything available. The closest thing was Hyprland, so I made my own micro-distro on top of it: https://github.com/gchamon/archie. It's way less work than you'd think in the age of AI, but it does require intimate knowledge of the system.
If the expected Linux experience is "go build your own if you disagree", then I'm not clear how that is any better than being told the same by Microsoft/Apple.
It's better because at least you can. With Windows and Apple you have to live with it. But that's not the expected experience: UX on Linux has only become better with time, all things considered.
There are alternatives to Wayland, systemd, Vulkan, etc. on Linux. There are far fewer options on macOS and Windows, where "build your own" typically entails starting from scratch.
> There are several software packages that are essentially mandatory if you want to run a modern distro with good desktop hardware support. Some that come to mind are glibc, systemd, and Wayland.
I run Gentoo on one machine and Alpine on several. I promise, none of those are required.
>If you know better just hack it your way. ... It's just architecturally better for the end user.
As an end user I want the product/tool to serve me well out of the box, I don't have time to hack it to fix what I dislike about it on my own dime. That's what my job is for.
This is not always available. Which smartphones "serve well out of the box", meaning zero telemetry, root privileges, open source, no account in a foreign country required (and no notification spam about it), and actually working? A Google Pixel requires time to fix, despite costing as much as 3-4 ordinary smartphones.
The same can probably be said about laptops: Linux is great but buggy, and the proprietary OSes don't pass the requirements.
How ironic. Reactionary old fart mostly known for writing moralistic children’s novels wants you not to listen to moralistic crusaders. I shall take his advice.
CS Lewis certainly got it wrong, if he believed that the greed of robber barons can ever be satisfied. I’ll take the moral busybodies, if I have to take either.
And I am a GNOME user, and I find it totally fine to use KDE. That's the cool thing about open platforms. It's just that some people with too much time on their hands make a UI part of their personality and rant too much on the internet about other UIs.