Hacker News | p_l's comments

Honestly, a lot of the issues were that we needed to build up the necessary infrastructure in the first place.

And the transformation to a market economy involved at least two periods of suicidal decisions in the name of ideology that regressed the economy (by the same person, even).


You haven't upgraded any x86 compatible beyond 2005? /s

/s notwithstanding... no.

I run a bunch of decade-old HP Microservers. All BIOS-only.

My personal laptops are old Thinkpads, from before the keyboards went crappy & Lenovo took away the expansion options. So they're all about 15, maybe 20, generations old at the newest, but they are all maxed out and go like stink. BIOS boot mode optional.

My default hypervisor is VirtualBox, because it runs the same on Linux, Windows and Mac. Defaults to BIOS boot.

This is not like some ancient history. All run current OSes and distros.


Thinkpads have all been UEFI since the T410 (and related models), not to mention that a lot of non-Thinkpad machines since ~2005 were UEFI class 1 (i.e. "always boot to BIOS emulation mode, but everything underneath is UEFI").

Thus the /s :-)


DOS uses / because programs written for CP/M, which were subsequently ported to MS-DOS, used forward slashes.

When PC/MS-DOS 2.0 was released with support for directories, it accepted both forward and backward slashes as the directory separator, because Microsoft programmers wanted to use forward slashes (bringing them over from Xenix, including adding a virtual "DEV" directory with device files), but for compatibility and user friendliness the default was \ for directories and / for options.


Oops, the influence was a bit higher up the ancestry chain on both sides. CP/M uses / under the influence of VMS's ancestor, TOPS-10. That's what I get for relying on old memories of things I was told that were probably inaccurate from the start.

The whole issue is specific to C and to languages that copied C or use its runtime underneath in their implementations (like Python).

For reference, Unix has no API other than bytes either.


> The whole issue is specific to C and languages that copied C or use its runtime underneath in implementations (like Python)

So it's "specific to" almost all programming languages in actual use. That's a rather esoteric point.

> For reference, Unix has no API other than bytes either.

Unix does offer an API for writing C-standard in-memory text strings to Unix-standard on-disk text files, it just happens to be the same one as the API for writing in-memory binary strings to on-disk binary files.


> Unix does offer an API for writing C-standard in-memory text strings

Why on bloody Earth should a presumably general-purpose OS provide a special API for dealing with the internal representation of some data structure in a (particular) implementation of a (particular) programming language?

Besides, it doesn't offer such an API anyhow; you need to take care to manually pass the result of a strlen() call, rather than sizeof, as the value for the len parameter of a write() call, otherwise a NUL terminator will get written into the file as well.

And C says nothing about what constitutes a line break, by the way. Nor does it have any concept of a "line", or any utilities for working with lines specifically; it only knows of strings, and that's all. The concept of a "text line" comes from POSIX.


> Why on bloody Earth should a presumably generic-purpose OS provide a special API for dealing with internal representation of some data structure in a (particular) implementation of a (particular) programming language?

Because the purpose of the OS is to facilitate applications (and, on the other end, facilitate hardware), and those applications tend to have a need to process text in-memory and then store it on the filesystem?


All you need for that is the ability to read and write binary blobs to and from files, which Windows gives you, and to know what "text files" means for the other programs on that platform. Windows itself doesn't care much about text; but the other programs have a shared convention that ASCII text files have CRLF-separated variable-length lines of text, and Unicode text files store text in UTF-16LE (including the CRLF pairs, so those look like "\x0D\x00\x0A\x00" as raw bytes).

All of this is left to the user space to sort out, just as it is on Linux, so I am not entirely sure why you demand that Windows do more for you than Linux does.


The OS is the one providing the filesystem; it should define and support how it's used (including providing standard utilities for manipulating it, both from programs and by the operator) rather than leaving the programs to figure it out between themselves. (After all, if the text storage format didn't matter to the OS, why would we bother using the CRLF format on Windows at all? I submit that third-party programs did not spontaneously come up with an arbitrary convention that everyone would use a different text format on Windows; rather, programs use CRLF when running on Windows precisely because the standard utilities that ship as part of DOS/Windows expect that format.)

As already stated multiple times here, CRLF is actually the "correct" way (at least in the telex days, when CR and LF had the actual meanings of "return carriage to home position" and "feed a new line"), while LF-only is a Unix "hack"/abstraction (the LF was actually converted back into CRLF when fed to a telex or a terminal). It is not really a surprise that DOS, which was inspired by CP/M, simply copied what was supposed to be a physical signal; this is the same reason the ASCII/ANSI code has a BEL character for ringing a bell. In short, CRLF was the way to handle newlines at the time DOS was designed. You would expect CRLF as the line ending because that's how terminals work (unlike Unix, which smooshes two differing things into one character).

If you are writing a developer suite, whether you're Borland developing for MS-DOS or Microsoft developing for the Apple II, you kinda have the idea of how things should work (because you have the reference book for the platform, not for the compiler/language). There was no assumption that the OS provides an abstraction for text; in those days, everyone just implemented it from scratch, really ("code page" came from literal code pages, where each character had a well-defined byte). This is manifested in command-line handling on Windows: the platform convention is that the command line is just one flat string, and the C runtime determines how to chop it up (MSVC and Intel C have historically disagreed heavily here). Windows having CRLF only looks like an aberration because Unix-based designs took over the world: macOS is Unix, Linux was inspired by Unix, *BSD is Unix-derived.


It still shows up in IETF-style textual network protocols, which evolved on non-Unix systems (HTTP, SMTP, etc.).

MTA-STS, a very recent standard (RFC 8461), only allows CRLF as the line terminator (to the chagrin of *nix lovers, and despite the fact that a majority of mail systems are operated on *nix systems).

The peril of writing protocols with an eye to debugging them using the cheapest terminal you could find on campus and a grad student paid in coffee.

Indeed, the C runtime is not part of the Windows API, and it's normal to have a program include a few different copies of the C runtime library due to different modules being compiled with different compilers/options.

The C runtime library being part of the OS is an accidental thing in Unix; the 16-bit and 32-bit Windows APIs don't even use a C-compatible ABI (a Pascal-compatible one is used instead).


The true success of WinAPI, going back to the original Windows, is that it provided a stable ABI from version to version and didn't lock you into any language.

The Unix world was lazy about this because of the approach of recompiling across somewhat source-compatible systems thanks to POSIX, so there was reasonably fast portability if you didn't go too far off the beaten path.

But doing anything other than C (with Cfront, maybe), Fortran, and Pascal was a problem even without binary compat, even from version to version (the legacy of which we now have in glibc breaking binary compat all the time).

Microsoft went hard on the idea that if you bought/built a program for Windows version X, it would run on version X+1. You didn't have to buy a special upgraded version, and you could update more easily.

The same approach later drove the introduction of things like the PC System Design Guide and ACPI, so you could just upgrade your computers instead of waiting for a special OS upgrade just to boot (as was common on other platforms, including the Mac, VMS, and the Unix workstation world).

Design-wise, the GUI parts of WinAPI aren't all that different from working with the X intrinsics etc. libraries (i.e. the parts above raw Xlib).


And because of that, it got badly reinvented, mostly through HTTP.

AF_ALG, if I remember correctly, predates userspace-accessible crypto acceleration and was way more important back when you had an actual need for "SSL accelerator" cards in servers, among other things.

Yes, I remember that time; it was back when I wasn't allowed to know anything about what the servers were doing other than by looking it up in the internal leak, which was never maintained.

*internal wiki

That is in fact correct.

Both the compiler (in the absence of inclusion of copyrighted libraries) and the LLM are considered not to add creative work, and thus do not change the copyright status of the works they transform.

You can consider the training set of the LLM or other AI model to be 3rd-party libraries, and the level of copyright from them that applies to the final output to be how much can be directly considered derivative, just as reading copyrighted code and being inspired by it does not pass that copyright to your work unless yours is obviously derivative.


>> You can consider the training set of the LLM or other AI model to be 3rd party libraries ...

I like this comparison -- the training set as '3rd-party libraries'. Except, of course, that the authors behind the training set may not have actually granted permission to use it, whereas 3rd-party libraries usually come with some permission by way of a license.


The law only cares about how the work is distributed - if you acquired it legally by purchasing it, then yes, you can train an LLM on it, and with the exception of moral rights in places like the EU, the author does not have more to say about it.

It's treated the same as human reading and learning from the work.

Under US law, you are only granted an artificial monopoly on acts of distribution.


We did not grant human exemptions in copyright law.

We gave a certain temporary monopoly on certain uses to humans, under rules little understood by laymen even when their livelihood depends on them.


... and from that temporary monopoly humans have exemptions (critique, inspiration, etc.)


It's generally less an exemption and more a constraint on the monopoly, at least in the spirit of the law.
