It's more useful to view the whole situation as EU politicians prioritizing having their pockets filled with lobbyist money, rather than as the EU as a political entity deciding this per se.
That's not completely fair. The US also bullies them into doing those things; it's not only "pure corruption to fill their pockets".
How many European countries buy American weapons because they are scared of what would happen if they pissed off the US? And then they still get tariffs and threats of military invasion.
I've mentioned this above, but I know of a new pupil in one of my local schools who has recently seriously injured another pupil and attempted to strangle one of the teachers (she had to take time off work due to stress).
He is only seven and has just been expelled from another school.
Generally true, but the school's core protection responsibility is for its own students and staff - not the rest of the world. And the school's authority and resources are even more constrained.
At least in some places, school systems have "special" schools or other programs for the kids who they'd rather keep out of contact with the general student population.
Surprisingly hard to expel a child, particularly in the more privileged schools … far more satisfying from the perspective of an educator if they can address the issue.
> Surprisingly hard to expel a child, particularly in the more privileged schools
In my experience it's the reverse. Expensive private schools were quick to expel students: as much as they liked the money, they liked having good academic results to boast about even more. It's the basic run-of-the-mill public schools that can't expel anyone, because the student has to be in education somewhere and they might be the only school in the catchment area, so there are no good alternatives.
The public schools are loath to expel (unless there's an agreement in the district that one school is a dumping ground); midrange private schools are quick to expel to protect the rest; but the highest-end private schools will figure out a way to not expel, because the money is sooooo good.
If it's a private school, then they expel pupils pretty rapidly.
Of course, none of this addresses why there are behavioural problems in the first place. A shrink alone may not cut it, especially if there is a wider toxic culture in the school which helps create bullying.
This very much depends on where you live, your school, and the commitment of the parent body.
I went to a school decades ago that was both small and highly effective at expulsion. I can't say this led to improved academic outcomes, however.
In practice, if you use no US parts you will get sanctioned as a 'national security risk' or whatever anyway. And even if your product uses no US parts, customers who still want to interact with the US financial system (e.g. by having a USD account) will still be unable to purchase it.
Seems to solve a problem very similar to Conan or vcpkg, but without its own package archive or build scripts. In general, unlike Cargo/Rust, many C/C++ projects dynamically link libraries and often require complex Makefile/shell-script magic to discover and optionally build their dependencies.

How does craft handle 'diamond' patterns, where two dependencies each pull in a different version of the same library as a transitive dependency (whether statically linked, dynamically linked, or header-only), without custom build scripts like the Conan approach?
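For concreteness, here's the shape I mean (library names hypothetical):

    app
    ├── libfoo ──► libcommon 1.2 (static)
    └── libbar ──► libcommon 2.0 (shared)

Conan resolves this at the graph level: a version conflict is an error by default, and you break the tie explicitly with an override. It's not obvious to me which libcommon the final link step would see under craft.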
Curious that they listed the GPU as "RDNA5 AT0 XL". RDNA5 is not out yet, but that naming seems to align with the naming in this leak [1]. It's explicitly labelled as a "desktop/gaming" model with 154 CUs and 36GB of VRAM.
We can see how monumentally important the work to stop Chat Control has been to preserving the right to privacy by the frothing anger on display here. The companies that complain to Uncle Sam every time they're fined by the EU for getting caught red-handed smothering competition now ask the EU for more regulation.
Ms. von der Leyen will need to find a way to make it up to Google et al., considering she has been pushing this regulation at their behest with fervour. That will probably take the form of even further prostration of Europe to US big tech.
What's the 'new normality' in the fifth stage? Do you think you'll start to believe it actually works 100%? Or that you won't change your assessment that it works only sometimes, but maybe pulling the lever on the slot machine repeatedly is better/more efficient than doing it yourself?
No, this is still "bargaining/negotiating" phase thinking. Depression hits after this, when you see for your own use cases that the code quality and the security audits are actually very good.
People will accept it as a way to build good software.
Many are still in denial that you can do work that is as good as before, quicker, using coding agents. A lot of people think there has to be some catch, but there really doesn’t have to be. If you continue to put effort in, reviewing results, caring about testing and architecture, working to understand your codebase, then you can do better work. You can think through more edge cases, run more experiments, and iterate faster to a better end result.
I'm kind of excited about that, though. What I've come to realize is that automated testing, linting, and good review tools are more important than ever, so we'll probably see some good developments in these areas. This helps both humans and AIs, so it's a win-win. I hope.
> it's looking like assessment and evaluation are massive bottlenecks.
So I think LLMs have moved the effort that used to be spent on the fun part (coding) into the boring part (assessment and evaluation), which is also now a lot bigger.
You could build (code, if you really want) tools to ease the review. Of course we already have many tools for this, but with LLMs you can use their stochastic behavior to discover unexpected problems (something a deterministic solution never can). The author also touches on this with the security review (something I rarely did in the past but do now, and it has really improved the security posture of my systems).
You can also set up far more elaborate verification systems. Don't just do a static analysis of the code; actually deploy it and let the LLM hammer at it along all kinds of creative paths, then let it debug why it's broken. It's relentless at debugging - I've found issues in external tools I normally would've let go (maybe filed an issue for) that I can now debug and even propose a fix for, without much effort on my side.
So yeah, I agree that the boring part has become the more important part right now (speccing well and letting it build what you want is pretty much solved), but let's then automate that. Because if anything, that's what I love about this job: I get to automate work, so that my users (often myself) can be lazy and focus on stuff that's more valuable/enjoyable/satisfying.
When writing banal code, you can just ask it to write unit tests for certain conditions and it'll do a pretty good job. The cutting-edge tools will automatically run and iterate on the unit tests when they don't pass. You can even ask the agent to set up TDD.
Cars removed the fun part (raising and riding horses) and automatic transmissions removed the fun part (manual shifting), but for most people it's just a way to get from point A to B.
I'm not sure, but I think it boils down to accepting that some things we were attached to are no longer important or normal (not just software building).
But specifically to your examples, the latter: I think the "brute force the program" approach will be more common than doing things manually in many cases (not all! I'm still a believer in people!).
Edit:
Well, I wrote a bad blog post on this some time ago, I might as well share it: I think accepting means engaging with the change rather than ignoring it.
It doesn't have to work 100% of the time to be ubiquitous! This is just the strangest point of view. People don't work 100% of the time either, and they wrote all the code we had until a couple of years ago. How did we deal with that? Many different kinds of checks and mitigations. And sometimes we get bugs in prod and we fix them.
The new normal will be: everything will get worse and far more unstable (both in terms of UI/UX and reliability), and many of us will lose our jobs. Also, the next generation of programmers will have a shallower understanding of the tools they use.
AI doesn't need to outrun the bear; it only needs to outrun you.
Once the tools outperform humans at the tasks to which they were applied (and they will), you don't need to be involved at all, except to give direction and final acceptance. The tools will write, and verify, the code at each step.
> Once the tools outperform humans at the tasks to which they were applied (and they will)
I don't get why some people are so convinced that this is inevitable. It's possible, yes, but it very well might be the case that models cannot be stopped from randomly doing stupid things, cannot be made more trustworthy, cannot be made more verifiable, and will have to be relegated to the role of brainstorming aids.
I think they meant that people insisting total genAI takeover of coding is inevitable are likely people who stand to profit greatly by everyone giving up and using the unmind machines for everything.
The original post is an example of how. Every programmer is discovering slowly, for their own use cases, that the agent can actually do it. This happens to an individual when they give it a shot without reservation.
Large scale AI datacenters require a very expensive physical supply chain that includes cheap land, water, and electricity, political leverage, human architects and builders to build datacenters, and massive capital investments. Yes, AI will outperform humans, but at some point it may become cheaper to hire a human programmer.
Huh, not sure where I got the 86 number from, because I did check a primary source. Probably a mixup with a later number that included annexed territory.
There are also large trading/market making firms providing liquidity, especially on markets associated with up/down bets on crypto, stocks etc. They use all the options trading machinery they've already built for more 'respectable' venues like CME/Eurex etc, further squeezing the margins for retail traders.
They're active even on bets considered "meme" bets. Example: Jesus returning in 2026. If you can get a loan at 4% as a big, well-respected trading firm and plonk it on Jesus not returning at 94 cents, you're making ca. 2% for 'free'. (Unless Jesus returns, in which case you have bigger problems than your portfolio PnL.)
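The rough arithmetic, ignoring fees and collateral requirements (numbers taken from the example above):

    #include <cstdio>

    int main() {
        double price_no = 0.94;                 // buy "No" at 94 cents; pays $1 on "No"
        double gross    = 1.0 / price_no - 1.0; // ~6.4% if the market resolves "No"
        double funding  = 0.04;                 // the 4% loan over the period
        std::printf("net ~%.1f%%\n", (gross - funding) * 100.0);  // ~2.4%, the "ca. 2%"
    }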
Not an expert in game development, but I'd say the issue with C++ coroutines (and 'colored' async functions in general) is that the whole call stack must be written to support that. From a practical perspective, that must in turn be backed by a multithreaded event loop to be useful, which is very difficult to write performantly and correctly. Hence, most people end up using coroutines with something like boost::asio, but you can do that only if your repo allows a 'kitchen sink' library like Boost in the first place.
> that must in turn be backed by a multithreaded event loop to be useful
Why? You can just as well execute all your coroutines on a single thread. Many networking applications do fine with a single ASIO thread.
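A minimal sketch of that, assuming Boost.Asio with C++20 coroutines (port and buffer size arbitrary): one io_context, one thread, many concurrent connections.

    #include <boost/asio.hpp>
    namespace asio = boost::asio;
    using asio::ip::tcp;

    // Echo one connection; suspends instead of blocking between reads.
    asio::awaitable<void> echo(tcp::socket sock) {
        char buf[1024];
        try {
            for (;;) {
                std::size_t n = co_await sock.async_read_some(
                    asio::buffer(buf), asio::use_awaitable);
                co_await asio::async_write(sock, asio::buffer(buf, n),
                                           asio::use_awaitable);
            }
        } catch (const std::exception&) { /* peer closed */ }
    }

    asio::awaitable<void> listener() {
        auto ex = co_await asio::this_coro::executor;
        tcp::acceptor acc(ex, {tcp::v4(), 5555});
        for (;;) {
            tcp::socket sock = co_await acc.async_accept(asio::use_awaitable);
            asio::co_spawn(ex, echo(std::move(sock)), asio::detached);
        }
    }

    int main() {
        asio::io_context io;                            // one event loop...
        asio::co_spawn(io, listener(), asio::detached);
        io.run();                                       // ...driven by a single thread
    }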
Another example: you could write game behavior in C++ coroutines and schedule them on the thread that handles the game logic. If you want to wait for N seconds inside a coroutine, just yield that number of seconds; each tick, the scheduler takes the frame's delta time and reschedules the coroutines accordingly. This is also a common technique in music programming languages for implementing musical sequencing (e.g. SuperCollider).
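A minimal sketch of that shape, assuming C++20 (Task and Scheduler are hypothetical names; coroutine-handle cleanup is omitted for brevity):

    #include <coroutine>
    #include <cstdio>
    #include <vector>

    struct Task {
        struct promise_type {
            double wait = 0.0;  // seconds requested by the last co_yield
            Task get_return_object() {
                return {std::coroutine_handle<promise_type>::from_promise(*this)};
            }
            std::suspend_always initial_suspend() { return {}; }
            std::suspend_always final_suspend() noexcept { return {}; }
            // `co_yield seconds` suspends the coroutine and records the delay.
            std::suspend_always yield_value(double seconds) {
                wait = seconds;
                return {};
            }
            void return_void() {}
            void unhandled_exception() {}
        };
        std::coroutine_handle<promise_type> handle;
    };

    struct Scheduler {
        struct Entry { std::coroutine_handle<Task::promise_type> h; double remaining; };
        std::vector<Entry> entries;

        void add(Task t) { entries.push_back({t.handle, 0.0}); }

        // Called once per game tick with the frame's delta time.
        void tick(double dt) {
            for (auto& e : entries) {
                if (e.h.done()) continue;
                e.remaining -= dt;
                if (e.remaining <= 0.0) {
                    e.h.resume();                      // runs until the next co_yield
                    e.remaining = e.h.promise().wait;  // reschedule
                }
            }
        }
    };

    Task blink() {
        for (int i = 0; i < 3; ++i) {
            std::printf("blink %d\n", i);
            co_yield 1.0;  // "wait one second" of game time
        }
    }

    int main() {
        Scheduler sched;
        sched.add(blink());
        for (int frame = 0; frame < 240; ++frame)
            sched.tick(1.0 / 60.0);  // fixed 60 fps ticks
    }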
Much of the original motivation for async was for single threaded event loops. Node and Python, for example. In C# it was partly motivated by the way Windows handles a "UI thread": if you're using the native Windows controls, you can only do so from one thread. There's quite a bit of machinery in there (ConfigureAwait) to control whether your async routine is run on the UI thread or on a different worker pool thread.
In a Unity context, the engine provides the main loop and the developer is writing behaviors for game entities.
> but I'd say the issue with C++ coroutines (and 'colored' async functions in general) is that the whole call stack must be written to support that.
You can call a function that makes use of coroutines without worrying about it. That's the core intent of the design.
That is, if you currently use some blocking socket library, we could replace its implementation with coroutine-based sockets, and everything should still work without other code changes.
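A sketch of what that could look like with Boost.Asio, assuming an io_context is already running on a background thread (read_exact is a hypothetical name):

    #include <boost/asio.hpp>
    #include <future>
    namespace asio = boost::asio;
    using asio::ip::tcp;

    // The public signature stays blocking; only the implementation changed.
    std::size_t read_exact(tcp::socket& sock, char* buf, std::size_t n) {
        std::future<std::size_t> fut = asio::co_spawn(
            sock.get_executor(),
            [&]() -> asio::awaitable<std::size_t> {
                co_return co_await asio::async_read(
                    sock, asio::buffer(buf, n), asio::use_awaitable);
            },
            asio::use_future);
        return fut.get();  // callers block exactly as before
    }

The one caveat: this must not be called from the thread running the io_context, or fut.get() will deadlock waiting on work that thread can no longer do.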