jongjong's comments | Hacker News

This is why I advocate for making everything as simple as possible. The more complex the tech, the more likely it is to be lost with the passage of time.

It's kind of insane how much knowledge a human being needs in order to build certain technologies, and it's taken for granted.

AI might make the knowledge easier to acquire but it's still a lot of knowledge that people have to internalize.


This makes a good point. A lot of people think that big tech has a duty to provide jobs to smart, ambitious people.

They assume that we live in some kind of socialist system. They feel like it's a kind of deal: they accept all the regulations, monopolies, and bureaucratic bullshit and, in return, the corporate monopolies pay them to keep quiet and stay out of politics.

I understand the sentiment, but what's horrible about this mindset is that these people think it's OK to support corrupt political power to enrich themselves at the expense of everyone who doesn't work for a big corporate monopoly. They think that all the smart people work for big tech and everyone else is trash... And they set the criteria for entry into the big tech monopoly club (i.e. screenings and interviews). But the irony is that they're trash! Their pseudo-socialist view of the world is crooked.

The reason I support UBI is that I don't see a meaningful difference between ambitious people and random people. Every generation from the boomers onward is full of spoiled brats, mostly monetizing and gatekeeping the ingenuity and labor of past generations by playing dumb social games. The whole system doesn't make sense. As meritocracy declines, the rewards increase and false narratives fill the gaps... They'll have you believe that the person who painted Facebook HQ's walls contributed more to society than the guy who actually invented the paint...


There seems to be a pattern: if someone who was not pre-selected by some elite ends up making their own money (i.e. genuinely 'self-made'), they are swiftly attacked by the system. On the other hand, look at Nancy Pelosi; she didn't get into any trouble.

Are people allowed to be self-made anymore?

For me personally, after years of planning and hard work, I once managed to secure about $40k of passive income from a blockchain project in crypto. This lasted a few years, but eventually the founders suspiciously abandoned the entire tech stack (for no apparent reason) and switched to Ethereum, which destroyed the opportunity for me; I literally lost that stream entirely. Recently, I was able to re-establish a passive income stream of about $10k per year from a non-crypto source, from an opportunity I took over 10 years ago... I'm worried about that being taken away somehow.


It feels like the political forces underpinning the software industry are coming to light, but it seems like there are now two opposing forces instead of just one.

If this kind of vulnerability exists at the platform level, imagine how vulnerable all the vibe-coded apps are to this kind of exploit.

Actually, I don't doubt the competence of the Vercel team, and that's the point. If this can happen to a top company that has its pick of the best engineers on a global scale, imagine everyone else.

My experience with modern startups is that they're essentially all vulnerable to hacks. They just don't have the time to actually verify their infra.

Also, almost all apps are over-engineered. It's impossibly difficult to secure an app with hundreds of thousands of lines of code and 20 or so engineers working on the backend code in parallel.

Some people are asking, "Why didn't they encrypt all this?" That's a naive way to think about it. The platform has to decrypt the tokens at some point in order to use them. The best we can do is store the tokens securely and rotate them frequently.
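In case it's useful, here's a minimal sketch of what frequent rotation could look like, assuming a standard OAuth2 refresh_token grant; the token endpoint, client credentials, and stored-token shape are hypothetical placeholders, not any particular platform's API:

    // Sketch: rotate a stored OAuth2 access token using its refresh token.
    // TOKEN_ENDPOINT and the credentials are hypothetical placeholders.
    const TOKEN_ENDPOINT = 'https://provider.example.com/oauth/token';

    interface StoredToken {
      accessToken: string;
      refreshToken: string;
    }

    async function rotateToken(
      stored: StoredToken,
      clientId: string,
      clientSecret: string
    ): Promise<StoredToken> {
      const res = await fetch(TOKEN_ENDPOINT, {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams({
          grant_type: 'refresh_token',
          refresh_token: stored.refreshToken,
          client_id: clientId,
          client_secret: clientSecret,
        }),
      });
      if (!res.ok) throw new Error(`Token rotation failed: ${res.status}`);
      const data = await res.json();
      // Many providers rotate the refresh token too; keep whichever comes back.
      return {
        accessToken: data.access_token,
        refreshToken: data.refresh_token ?? stored.refreshToken,
      };
    }

Run something like this on a schedule so a leaked token has a short useful lifetime.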

If you make the authentication system too complex, with too many layers of defense, you create a situation where users will struggle to access their own accounts... And you only get marginal security benefits anyway. Some might argue the complexity creates other kinds of vulnerabilities.


The vibe coders don't know what they don't know, so whatever code is written on their behalf had better be up to best practices (it isn't).

I remember implementing OAuth2 for my platform months ago; I was using the username from the provider's platform as the username within my own platform... But this is a big problem: what if a different person creates an account with the same username on a different provider? They could authenticate to my platform via that other provider and hijack the first person's account!

Thankfully, the two providers I was supporting at the time had different username conventions (Google used email addresses with an @ symbol, GitHub used plain usernames), which naturally prevented username hijacking, and I patched the issue just before it could become a viable exploit. I discovered it while upgrading my platform to support universal OAuth; it would have been a major flaw had I not identified it. It sounds similar to the Vercel issue.

Anyway, my fix was to append a unique hash based on the username and platform combination to the end of the username on my platform.
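Something along these lines, simplified (a sketch using Node's built-in crypto; names are illustrative, not my exact code):

    // Sketch: derive a platform-scoped username by appending a short hash
    // of the (provider, username) pair so usernames from different
    // providers can never collide.
    import { createHash } from 'crypto';

    function scopedUsername(providerId: string, providerUsername: string): string {
      const hash = createHash('sha256')
        .update(`${providerId}:${providerUsername}`)
        .digest('hex')
        .slice(0, 8);
      return `${providerUsername}-${hash}`;
    }

    // scopedUsername('github', 'alice') and scopedUsername('gitlab', 'alice')
    // produce different suffixes, so the two accounts stay distinct.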


You should use the subject identifiers, not the usernames. You store a mapping of provider & subject to internal users yourself.

But this has been a problem in the past, where people would hijack an email address and create a new Google account with it in order to sign in with Google.

Similarly, when someone deletes their account with a provider, someone else can re-register the same username and your hash will end up the same. Subject identifiers, on the other hand, should be unique according to the spec.
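A minimal sketch of that mapping (the Map stands in for a real database table; all names are illustrative):

    // Sketch: key internal users by (provider, sub), never by username
    // or email.
    import { randomUUID } from 'crypto';

    interface InternalUser {
      id: string;
      displayName: string;
    }

    const usersByIdentity = new Map<string, InternalUser>();

    function findOrCreateUser(
      providerId: string,
      sub: string,
      displayName: string
    ): InternalUser {
      const key = `${providerId}:${sub}`;
      let user = usersByIdentity.get(key);
      if (!user) {
        user = { id: randomUUID(), displayName };
        usersByIdentity.set(key, user);
      }
      return user;
    }

This way, a username or email changing hands at the provider never remaps an existing internal account.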


Ah yeah, but I wanted my platform to provide universal OAuth with any platform (that my app-developer users trust) as the OAuth provider. If you rely entirely on subject identifiers, then in theory one platform (OAuth provider) gains the ability to hijack accounts belonging to users who authenticate via a different platform; e.g. one provider could fake its own subject identifiers to intentionally match those of target accounts from a different provider.

Now, I realize that this would require a large-scale conspiracy by the company/platform to execute, but I don't want to trust one platform with access to accounts coming from a different platform. I don't want any possible edge cases. I wanted to fully isolate them. If one platform were compromised, that would be bad news for a subset of users, but not all users.

If the maker of an application wants to trust some obscure platform as their OAuth provider, they're welcome to. In fact, I allow people to run their own Keycloak instances as providers and do their own OAuth, so it's actually a realistic scenario.

This is why I used the hash approach; I have full control over the username on my platform.

[EDIT] I forgot to mention that I incorporate the issuer's sub in addition to the username to produce the hashed username I use on my platform. The key point I wanted to get across is: don't trust one provider with accounts created via a different provider.
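In simplified form, the derivation looks something like this (a sketch; names are illustrative, not my exact code):

    // Sketch: hash the provider ID, the provider's sub claim, and the
    // username together, so no provider can forge an identity that
    // collides with an account created via a different provider.
    import { createHash } from 'crypto';

    function isolatedUsername(
      providerId: string,
      sub: string,
      providerUsername: string
    ): string {
      const hash = createHash('sha256')
        .update(`${providerId}:${sub}:${providerUsername}`)
        .digest('hex')
        .slice(0, 12);
      return `${providerUsername}-${hash}`;
    }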


Proprietary techniques like this are usually a good indication you’re missing something. In this case it sounds like you are missing appropriate validation of the issuer and/or token itself.

I want to support OAuth2, not OpenID Connect, so I don't rely on a JWT; I call the issuer's endpoint directly from my backend using their official domain name over HTTPS. I use the sub field to avoid re-allocation of usernames/emails, but my point is that I don't trust it on its own; I couple it with the provider ID.

To make it universal, I had to keep complexity minimal and focus on the most widely supported protocol, which is plain OAuth2.
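A simplified sketch of that flow (the per-provider config and user-info endpoint are illustrative placeholders registered by the app developer):

    // Sketch: instead of trusting a JWT locally, resolve the token by
    // calling the provider's own endpoint over HTTPS, then couple the
    // returned sub with the provider ID.
    interface ProviderConfig {
      id: string;          // e.g. 'github' or 'my-keycloak'
      userInfoUrl: string; // the provider's official HTTPS endpoint
    }

    async function resolveIdentity(
      provider: ProviderConfig,
      accessToken: string
    ): Promise<{ providerId: string; sub: string }> {
      const res = await fetch(provider.userInfoUrl, {
        headers: { Authorization: `Bearer ${accessToken}` },
      });
      if (!res.ok) throw new Error(`Provider rejected token: ${res.status}`);
      const profile = await res.json();
      return { providerId: provider.id, sub: String(profile.sub) };
    }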


Yes, this is a genuine problem with AI platforms. It does sometimes feel like they're suspiciously over-promoting certain solutions, to the point that it's not even in the AI platform's own interest.

I know what it's like being on the opposite side of this, as I maintain an open source project which I started almost 15 years ago and which has over 6k GitHub stars. It's been thoroughly battle-tested over long periods of time, at scale, with a variety of projects; but even if I use exact sentences from the website documentation in my AI prompt (e.g. in Claude), my project will not surface! I have to mention my project directly by name, and then it starts praising it and its architecture, saying that it meets all the specific requirements I mentioned earlier. When I ask the AI why it didn't mention my project before if it's such a good fit, it hints that it comes down to the number of mentions in its training data.

It's weird that clearly the LLM knows a LOT about my project and yet it never recommends it even when I design the question intentionally in such a way that it is the perfect fit.

I feel like some companies have been paying people to upvote/like certain answers in AI responses with the intent that those upvotes/likes would lead to inclusion in the training set for the next cutting-edge model.

It's a hard problem to solve. I hope Anthropic finds a solution, because they have a great product and it would be a shame for it to devolve into a free advertising tool for a select few tech platforms. Their users (myself included) pay them good money, so they have no reason to pander to vested interests other than their own and those of their customers.


> It's weird that clearly the LLM knows a LOT about my project and yet it never recommends it even when I design the question intentionally in such a way that it is the perfect fit.

That's literally what "weight" means: not all dependencies have the same multiplier on their chance of getting mentioned. Some have a larger multiplier and some have a smaller one (or none at all). That multiplier is literally a weight.


Never thought I'd be reading this on TechCrunch, but it fully resonates and it's an interesting article. Also, I understand why some people think we live in a simulation. It can be explained to some extent: we're glued to our phones/devices, and those devices choose what information we see.

We are only aware of the stuff that our devices show us, yet the vastness of the internet creates a false sense that we know everything. This dual reality (deep reality vs the surface reality we see) creates the feeling of being in a simulation; we have a feeling that there's another reality beyond our simulation. We implicitly trust the algorithms to do the curation for us, personalized to our tastes, but the algorithms are heavily biased towards popular content, ideas, and people. It's a tiny subset of reality that's highly manipulated and fake. The less critically minded you are, the smaller but more pleasant your world is (until you reach a certain point?).

We have hype leading adoption, which funds development capacity, which leads to slight improvements, which consolidate the hype... Meanwhile, alternatives exist that are 10x better from the beginning but lack the hype component altogether, and those things appear not to exist. Value creators are often terrible at marketing. It's hard to sell to people who are inside the simulation when you are outside of it, because you don't speak the same language.

The contest between form and substance has reached comically absurd levels, and sadly, the clear winner is form.

To really get the full picture, you almost have to already know all the key information. At best, AI/LLMs can confirm your existing knowledge with additional supporting data... But even that's under attack; there are narratives trying to discredit the objectivity of LLMs by claiming they are programmed to agree with you for engagement... That's a persuasive narrative, especially in the age of fake news, but I really hope we ignore it; just observe that LLMs do in fact push back effectively when you're wrong! You can't make an LLM agree with you on facts that are wrong, no matter how many times or in how many ways you repeat them. The only wiggle room is in terms of 'importance' or 'relevance', not facts.

Critical thinking (e.g. poking holes in otherwise perfectly satisfying explanations) is now more important than ever if you want to stay connected to reality, because there are incredibly powerful forces in place to make sure we stay on the first layer.


I decided not to open-source my latest project, but it has nothing to do with security concerns. My code is perfectly secure and bug-free.

My concern is mostly financial. Most people would be in a better position to monetize my software than I am... using AI to obfuscate its origin while appropriating all the key innovations. I wouldn't get any credit.

Also, I'm not really interested in humans anymore. I have human fatigue.


> My concern is mostly financial.

Then AI will eat your lunch anyway if the financial part has anything at all to do with the code.

AI can decompile code very well.


Have you, like, used AI? It's barely usable for anything other than grunt work. You actually have to tell the truth for the cynical-but-correct trope to work. If this is about now, it might be time to re-calibrate the hype; if it's about the future, well, then the worst-case scenario is far worse than the worst case cynics and LLM company CEOs talk about: human extinction is really the medium-bad case if they actually manage to create AGI.

> My code is perfectly secure and bug-free.

I mean, bold statement, but statistically speaking it's almost certainly incorrect. I will say that, irrespective of whether the source is open or closed, I would be deeply skeptical of a project that made this assertion.


I assumed they were trying to be humorous, although I find that type of humour obnoxious enough that it would put me off the project.

I gave it a good minute of reading and re-reading because I thought it SURELY was meant tongue in cheek, but I couldn’t make it work.

Maybe I was being too generous - jongjong seems cynical and old enough but can't read similar "humour" from others: https://news.ycombinator.com/item?id=47426320

I previously failed to summarise HN guidelines on sarcasm: https://news.ycombinator.com/item?id=38585465


Humans are fine, the problem is your worth.

I mean, that's fair enough, but I don't think one person keeping their code closed-source really changes all that much. And it depends on what your software does; no one out there is really replacing anything complicated with vibe-coding (at least not while actually saving any money), and GPL violations happened before anyway.

The distinction isn't meaningful to me because I feel like one of the humans from Planet of the Apes here.


