Splitting a big task (anything ML-related, say) into a set of smaller ones, distributing them across a "fleet" of workers, then reaping the results and stitching them back into a single artifact at the end (roughly the scatter-gather pattern sketched below). This could be commercially viable. It could even become a p2p platform/market where some people buy computation while others offer their hardware for temporary rent to earn a few bucks. You become the coordinator that just connects demand with supply and get rich from commissions alone.
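Here's a minimal Python sketch of that split/distribute/stitch shape, assuming the work is an embarrassingly parallel map over chunks (the `slow_square` work function and the chunk count are made up for illustration):

```python
from concurrent.futures import ProcessPoolExecutor

def slow_square(chunk):
    # Stand-in for a real unit of work a remote worker would run.
    return [x * x for x in chunk]

def split(task, n):
    # Split the big task into n roughly equal sub-tasks.
    step = (len(task) + n - 1) // n
    return [task[i:i + step] for i in range(0, len(task), step)]

if __name__ == "__main__":
    task = list(range(1_000))
    with ProcessPoolExecutor() as pool:      # the local "fleet" of workers
        partials = pool.map(slow_square, split(task, 8))
    # Reap and stitch the partial results back into one artifact.
    result = [y for part in partials for y in part]
    print(len(result), result[:5])
```

On a real p2p network the executor would dispatch to remote peers instead of local processes, but the split/reap/stitch shape stays the same.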
Absolutely! What's _really_ cool is that if you have disjoint computational steps that don't necessarily scale together linearly, you could split them into separately deployed `pln seeds` and let the cluster organically balance the compute as the different usage patterns occur. And yes, "p2p compute on demand" is certainly an intriguing idea.
Doesn't the fact that quality is now barely distinguishable mean that manufacturers will try to fool consumers by putting high prices on low-quality goods, so that they pass as high-quality goods (which presumably can't be cheap by definition)?
If that is so, the rest of your points become invalid.
That's good advice in general: treat any software as an untrusted black box as much as possible. But it raises (slightly, but still) the cost/effort for the user: the user now has to take extra steps and exercise extra caution (something like the sketch below).
These concerns were just as valid before vibecoding became a thing, but now the estimated probability that malicious code is present has changed, simply because the cost/effort of writing software has plummeted.
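As one illustration of those extra steps, here's a minimal Python sketch of invoking an untrusted binary defensively: no inherited environment (so no leaked tokens), captured output, and a hard timeout. `./untrusted-tool` is a hypothetical name, and this is basic mitigation, not a real sandbox; for that you'd reach for containers or VMs:

```python
import subprocess

# Hypothetical untrusted binary; in practice, the tool you just downloaded.
result = subprocess.run(
    ["./untrusted-tool", "input.txt"],
    env={},                # empty environment: no API keys or tokens leak in
    capture_output=True,   # treat stdout/stderr as data to inspect, not trust
    timeout=30,            # kill it if it runs away
    check=False,           # inspect the return code yourself
)
print(result.returncode, result.stdout[:100])
```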