I know it's whataboutism, but I really hope you apply the same logic the next time Cuba tries to enter an alliance with Russia or China to defend itself against a larger aggressor next door. So while I agree that Russia should allow Ukraine and Georgia to join NATO, I also think that's only fair if countries like Brazil, Cuba and Venezuela are freely allowed to determine their futures by joining military alliances with Russia, China and Iran. But you and I both know that's not going to happen. So please, let's stop pretending we don't have double standards.
As you've chosen to address me directly, I'll reply honestly: I have zero concern about Cuba, Venezuela, or any of the 190+ countries on the planet wanting to join or form BRICS.
I have considerably more concern about the ability of a post-MAGA USofA to successfully navigate such a world via soft power, as they appear to have flushed all the competent diplomatic talent down a golden toilet.
> Technically you don't have to be an employed developer to become a senior developer.
That's incredibly unlikely. Do you need to be an employed surgeon to become a senior (or whatever they call it) surgeon?
I very much doubt you can be senior without having actually spent years doing it professionally. The experience is everything; no book will give you the sort of understanding you need. That's unfortunately human nature: we are not capable of learning and internalizing things simply from reading or watching others do it; we absolutely need to do it ourselves to truly learn. Didactic books always have exercises for this reason.
You can learn facts and techniques from books, obviously. But it doesn't follow that because you've read a book about Michelin restaurants you can now be a Michelin chef.
> That's unfortunately human nature: we are not capable of learning and internalizing things simply from reading or watching others do it; we absolutely need to do it ourselves to truly learn.
That is, and has always been, true. Currently, however, the narrative that is sold (and unfortunately accepted by so many of the senior developers who post here) is that the experience of telling someone else to do something is just as valuable.
I've never worked in a corporate environment beyond client projects.
Picked up a book on XHTML (no, that isn't a typo) and CSS in 2007, just kept trying to build stuff I wanted to build and backfilling knowledge as I went. Not only is it possible, it's preferred. ~20 years in and I've learned how to build my own full-stack JS framework, deployment infra, a CSS framework, and an embedded database to boot.
Not one drop of this would have been possible had I taken the traditional corporate track.
All of this is possible on a corporate track. The ability to build frameworks and tools does qualify a person as at least a solid mid-level professional, but not having corporate experience and the associated skills can be a pretty big gap in a CV.
Analogies to other professions give your argument an air of legitimacy it hasn't earned.
There are plenty of people in this world who are expert programmers without having followed any traditional path.
“Oh yeah, like who”, you say.
Con Kolivas, an anaesthetist, worked on Linux kernel schedulers, including the Rotating Staircase Deadline (RSDL) scheduler, which was a precursor to the Completely Fair Scheduler, as well as the Brain Fuck Scheduler and the ck patchset.
Maybe they mean you can build products yourself without being employed? Technically true, but that's like running your own surgery practice or something: you're still doing surgery.
They agreed that war was a thing of the past, but still continued to push for NATO to admit new members anyway, ironically causing Russia (and China, and everyone else NOT in NATO) to suspect that war was NOT a thing of the past, and therefore to never quite abandon their militaries. Unpopular opinion: the West should either NEVER have abandoned its military production (so as to maintain NATO's actual preparedness for war, given that's the only reason for its existence), OR it should have dismantled NATO and announced to the world that it strongly believed war was a thing of the past and that other countries were advised to follow suit. But we actually chose the easy, halfway path: keep NATO, keep our militaries "looking strong" (which obviously signals that our rivals should do the same), but not actually be ready for any sort of major war, and, as the article points out, even lose the capacity to become ready for war within any realistic timeframe. The worst possible outcome :(.
After the USSR fell, it left behind many countries abused by Russia that didn't believe Russia would leave them alone. Those countries wanted defense guarantees in case of future Russian aggression.
NATO wanted to be deliberate and slow about admitting any new members, but the countries that wanted to join felt that whoever was left out might be attacked, or face hybrid measures from Russia to prevent them from joining next. So they grouped up, and 7 countries joined NATO simultaneously. NATO was never begging them to join; they wanted to join NATO.
People push this vision of NATO being some hungry bastard that can't get enough, but it's largely outside pressures pushing countries to want to join it.
Sure enough, they were right. Russia invaded both Georgia and Ukraine, which already wanted to join because Russia kept interfering in their societies.
This is not true. Since the 1990s the US was strongly opposed to the EU building or relying more on its own defense industry and a more closely aligned defense policy, even threatening the end of NATO. Look up the "three Ds" articulated later by Secretary of State Madeleine Albright: no Duplication of NATO assets, no Decoupling of European security from the US, and no Discrimination against NATO members who were not in the EU.
Those are impediments for sure, but they are not blockers, and if they had the will, they would have overcome them. Most, probably all, NATO countries have their own military-industrial complex of some sort, and the US buys from them. Still, it certainly is the case that the US is the largest supplier of military equipment, so yes, the US would benefit most from efforts to increase military spending.
That could be fitting the theory to the outcome, though, and the unpopular opinion may still be wrong. Russia was quite different in 1999, and even more so in 1992, to the point of potentially joining NATO itself, and China was nowhere near the threat it is today. And it could be different reasons, not keeping NATO, that caused today's standoff. So, basically, the situation seems to be more complex.
I think you are wrong, but my reason is different: when you support concurrency, every access and modification must be checked more carefully. By using a concurrent Map you are making me review the code as if it must support concurrency, which is much harder. So I say: don't signal that you want to support concurrency if you don't need it.
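To make the signalling point concrete, here's a minimal sketch in Rust (the word-count task and all the names are mine, purely for illustration). A plain HashMap tells the reviewer nothing concurrent is going on; wrapping it in Arc<Mutex<...>> is exactly the kind of signal being discussed, and forces a much more careful review:

    use std::collections::HashMap;
    use std::sync::{Arc, Mutex};
    use std::thread;

    // A plain HashMap: the reviewer can assume single-threaded access
    // and check each call site in isolation.
    fn count_words(words: &[&str]) -> HashMap<String, usize> {
        let mut counts = HashMap::new();
        for w in words {
            *counts.entry(w.to_string()).or_insert(0) += 1;
        }
        counts
    }

    // Arc<Mutex<HashMap>>: an explicit signal that the map is shared
    // across threads, so every access now has to be reviewed with
    // concurrency in mind (lock scope, atomicity of compound ops, ...).
    fn count_words_shared(words: Vec<&'static str>) -> HashMap<String, usize> {
        let counts = Arc::new(Mutex::new(HashMap::new()));
        let handles: Vec<_> = words
            .into_iter()
            .map(|w| {
                let counts = Arc::clone(&counts);
                thread::spawn(move || {
                    *counts.lock().unwrap().entry(w.to_string()).or_insert(0) += 1;
                })
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        // All clones are dropped after join, so we can take the map back out.
        Arc::try_unwrap(counts).unwrap().into_inner().unwrap()
    }

If the code only ever needs the first version, shipping the second one anyway makes every future reader pay the review cost for concurrency support that doesn't exist.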
Most countries in the EU already have widely accepted identity-proof apps, mostly verified by the banks or by the government itself. Once verified, the identity app gets a certificate which is signed by the authority that issues the identity. We all know how that works, as that's how TLS works as well. The zero-knowledge age check is based on verifiable credentials and the related verifiable presentations. Once you have a wallet with your identity, it's not hard to issue cryptographic proofs of some properties of your credentials, and age is basically a property of your identity credential. To learn more about the technical details, search for the specifications I mentioned above: verifiable credentials, verifiable presentations.
Ah, and the sites (or whatever else) can then verify the key is valid locally? Assuming that is the case, that'd make for a surprisingly nice system, further assuming that the produced credential is not reversible. I'm highly cynical, so I expected it to be a backdoor for surveillance, as it feels like most things done under the pretext of 'won't anybody think of the children' are.
The site can verify the signature of the presentation document using the public key of the credential issuer, yes. Each presentation is generated on demand to avoid identity tracking (sites could otherwise collude to track presentations).
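For the curious, here's a rough structural sketch in Rust of the flow described above. All the type and function names are hypothetical placeholders of mine, not the actual W3C Verifiable Credentials data model, and the proof verification itself is stubbed out:

    /// What the issuer (bank or government) signs once: the credential
    /// sitting in the user's wallet. (Hypothetical shape.)
    struct Credential {
        holder_public_key: Vec<u8>,
        birth_date: String,
        issuer_signature: Vec<u8>, // issuer's signature over the fields above
    }

    /// What the wallet generates on demand, fresh for each site: the
    /// presentation. It discloses only the derived claim ("over 18"),
    /// not the identity, so sites cannot correlate users by colluding.
    struct Presentation {
        claim_over_18: bool,
        nonce: Vec<u8>, // site-chosen challenge, prevents replay
        proof: Vec<u8>, // binds claim + nonce to some valid credential
    }

    /// The site's check: verify the proof against the issuer's public
    /// key. All the site learns is the boolean claim.
    fn site_accepts(p: &Presentation, issuer_public_key: &[u8]) -> bool {
        p.claim_over_18 && verify_proof(&p.proof, &p.nonce, issuer_public_key)
    }

    /// Placeholder: in a real deployment this would be a
    /// selective-disclosure scheme from a VC library, never hand-rolled.
    fn verify_proof(_proof: &[u8], _nonce: &[u8], _issuer_pk: &[u8]) -> bool {
        unimplemented!("use a real verifiable-credentials library")
    }

The key property is in the shape of Presentation: the issuer's signature vouches for the credential once, while the per-site, per-nonce proof lets the holder reveal a single derived property without handing over anything linkable.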
I think this was a genuine generational change. I am pretty sure Rust would never have become popular 20 years earlier, because the priorities back then were so different (that was the era of languages like Ruby and Perl, where conciseness and low verbosity were the most valued qualities).
When Ada came out, a lot of programmers couldn't even touch type. You're right that there's been a generational change, and a lot of the Ada stuff won:
* strong typing
* lots of annotations
* keywords over syntax, support for long variable and token names
* object focus (Ada 83 had some limitations on inheritance so it wasn't OO strictly speaking)
* exceptions
* large standard library
These things were controversial in the 1980s. They are not today.
One of the big differences between K&R C and C89 is the introduction of function prototypes. Strong typing was certainly considered positive for compiled languages. Of course C is a lot less strict than Ada.
If we compare the Rust subset that has similar functionality to C, then there is not much difference. You get 'fn'. There is 'let', but Rust often leaves out the type, so 'int x = 42;' becomes 'let x = 42;' in Rust. Rust has 'mut' where C has 'const'. Rust introduced '=>' and removed '->' from object access, moving it to the return type of a function.
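To make that comparison concrete, a toy side-by-side (the C equivalents are in the comments; the function itself means nothing):

    // C:    int square(const int x) { return x * x; }
    fn square(x: i32) -> i32 {
        // C:    int y = 42;    (type on the left)
        let y = 42; // Rust: type inferred, immutable by default
        // C: variables are mutable unless marked 'const';
        // Rust: immutable unless marked 'mut'.
        let mut acc = y;
        acc += x * x;
        acc - y
    }

    fn main() {
        // C uses '->' for member access through a pointer; Rust keeps
        // '->' only for return types (see `square` above) and uses
        // '=>' in match arms instead.
        let sign = match square(3) {
            0 => "zero",
            _ => "non-zero",
        };
        println!("square(3) is {}", sign);
    }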
The C language has support for long variable names. Some early linkers didn't, but that's an implementation issue, and people were certainly unhappy about it.
C++ started in the 80s. Objects were not controversial back then. The same applies to exceptions.
I don't have a metric for the size of a standard library. For its time, the C library in Unix systems had a large number of functions. Later that was split into a C standard part and a POSIX part, but that was for practical reasons: lots of non-Unix systems have trouble implementing fork().
I have no clue what you mean by annotations. If you mean non-function annotations along with code, then Rust programs generally don't have those.
Exceptions were controversial into the 90s which is why Java went down that whole checked-exceptions rabbit hole. The argument was that an exception was essentially a GOTO (or even COME FROM) which broke functional abstraction.
The Ariane 5 crash involved an exception and that was the central "Ada is unsafe actually" argument from C people.
In fact "exceptions are bad" is so baked into a lot of C people's brains that they left them out of Go!
Short variable names were a technical limitation in early languages but style guides were still arguing against long, descriptive variable names in languages like C into the 2000s.
Objects were likewise controversial, and you can see that in the design of Ada 83, which was inspired by OO languages like Smalltalk but hesitant to adopt features like inheritance. Inheritance was, again, seen as a way to break encapsulation (it kind of is), but also a lot of object implementations were slow and memory-inefficient in the 80s. Smalltalk was pretty much the reason the Apple Lisa failed as a product.
OO became a massive buzzword in the 90s but by that time it had already been around for quite a long time.
By annotations I mean mostly type annotations; of course there are also aspect annotations and other things, e.g. Ada SPARK.
As a Gen-Xer from the Usenet flamewars: the C and C++ folks used to accuse Pascal/Modula-2/Ada advocates of straitjacket programming, whereas they themselves would be called cowboy programmers.
Ironically, the author of Fil-C calls classical C "YOLO-C". :)
not just the priorities, but the overall skill and education of programmers.
in the 1980s/1990s i was a dumb kid. the problems of large systems were not on my mind. having to type begin/end instead of {} was, i thought, a valid complaint.
with experience, education, and hindsight: most of the advantages of the ada language were simply not understood by the masses. if ada came out today, it would take off just like rust.
I'd say that if the original Ada had been introduced at the same time Rust development started, people would still pick Rust. Ada is also a product of its time and would have to be modernized quite a bit.
Given how similar the syntax is of C, C++, Javascript, and Go, I think a language with the syntax of Ada would have a hard time.
Ollama is a bit easier to use, you're right. But the point of the article is the way they just disregarded the license of llama.cpp, moved away from open source while still claiming to be open source, and pivoted to cloud offerings when the whole point was to run local models, all without contributing anything back to the big open source projects it owes its existence to. Maybe you don't care about performance (weird, given performance is the main blocker for local LLMs), but shouldn't you care about the ethics of the companies making the products you use?
And anyway, this thread has lots of alternatives that are even easier to use and don't shit on the open source community making things happen.
I'm making more of a pragmatic point. While the ethics of companies are important, I'm still using OpenAI, Anthropic, Microsoft, Apple, etc., so I clearly accept a trade-off between morality and ease of use.
Currently I've found Ollama to have the most intuitive experience for trying new models. Once I've tried those models and decided on something to use for a project, I can deploy them and not need to use a UI again.
I'll be trying out the other options in this thread, but my point is that ease of use is going to triumph over the other points the original post made, and some of the alternatives mentioned in the original post miss why Ollama is so popular.
Keep in mind that, as the post says, the model you're trying via Ollama may not be the model you asked for! And the performance may be subpar and not reflect the model's true performance. Otherwise, I agree they offer an easy and polished product, and that explains why they are so popular, besides their personal connections having resulted in their OpenAI partnership.