The change was about helping teams ensure AI-generated code is attributed in commits - nothing to do with copyrights and the like. You don't have to take my word for it: query the VS Code repo for the changes and issues that went into implementing this and you will see.
Thanks for jumping into the conversation. Logically it does make sense to attribute the authors correctly; however, in this context it might be helpful if you could provide any details about the users complaining that their PRs are being marked as co-authored even when they have not used Copilot. Is that intentional, or a missed check in the implementation?
Also, for lay readers like me who might not be actively involved, it would have been helpful to link, on the PR itself, the issue or conversation explaining why this change was made.
The fact that non-AI changes are attributed to Copilot is a bug. The intent was to allow customers to add attribution of AI-generated code. As with any bug, it was not intentional.
It would be easier to rationalize it if there were an assurance that AI-generated code would be generally credited with the model used. But as I understand it, this credit only happens when using the Copilot GUI, right? No credit for copy-pasted code of uncertain pedigree? So I think it makes sense to question the logic here.
Would it be possible to admit a brain fart and roll the change back?
I'm not him, but it was pretty obvious that the comments section was going to attract more and more people saying the same thing that had already been said before, and that no useful discussion was going to be had. At some point the value of spamming everyone who commented on the issue with a notification (which puts an email in your inbox if you haven't changed the default setting) becomes lower and lower.
I've seen that before on other issue comment threads. The repo owner says "Hey everyone, if you want an issue fixed, please upvote the issue with a thumbs up". And many people don't read that, and instead post "Please fix this" comments without giving a thumbs-up to the issue. So, 1) the repo owner doesn't get to use the "sort issues by # of thumbs-up reactions" to see the priority of that issue, and 2) everyone who has subscribed to the issue gets spammed with a message that's useless to them.
Since nearly all the new comments had become "me too"-style comments, which should have just been a thumbs-up on a previous comment in order to reduce spam, I feel like locking the issue thread was the right move at that point, to stop people from receiving yet more unnecessary email in their already-overflowing inboxes.
Because the `microsoft` group account is the owner of the repo. With group accounts, you can designate many individuals to have admin access to the repo, but the actions taken by those admins will be attributed to the group account that owns the repo. (Because presumably the rest of the admins agree with the action taken, otherwise they would undo/revert it).
Why did a PM create the merge request? It seems like internal testing brought up issues, why was it merged regardless? Is velocity a metric you were aiming for when merging this?
There are customers who would like to see attribution on changes where AI contributed (companies, users, etc). True, that's not everyone, but you can query our repo for the issues for which this feature was implemented.
The rationale, I suppose, is that those customers want to be more careful with code that was contributed by AI.
I don't see how this would actually help. If people don't want to disclose they used AI they will just strip the message from the commit.
Maybe those customers should just be more selective with the people they allow to contribute to their project?
Also, this kind of message doesn't even carry valuable info: it doesn't explain how the AI was used (could be 99% vibe-coding, or just a quick "Please review current changes" plus minor fixes at the end), which model was used, etc. Like other commenters here, I can't see this as anything other than a marketing push for Copilot.
Don't take it personally though; you are probably not the one who should be taking the heat, since the change was directly pushed by your product manager.
Please don't be personally aggressive in HN comments, regardless of how provoked you are or feel you are. We're trying for something different here, and we particularly want to avoid pile-on, shaming, and mob dynamics.
Edit: your account has unfortunately done this before (e.g. https://news.ycombinator.com/item?id=47548889). I don't want to ban you, so if you'd please review the site rules and not do anything like this again on HN, we'd appreciate it.
I am the person who approved this PR and would like to acknowledge and apologize for the mistake of turning this feature on by default without sufficient upfront validation.
There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code. As folks mentioned here - many similar tools do this as well.
Obviously, it should not be on when disableAIFeatures is on and it should not be reporting changes that were not done by AI. I'll work on fixing those and meanwhile revert default to off in 1.119 update.
I am open to any (constructive) comments/suggestions - please feel free to reach me directly (my alias @microsoft.com) or open an issue on GitHub. Happy to answer anything here as well.
I think the constructive criticism is best directed at whatever process you are following. That process allowed a very visible user-facing change to ship in a widely used piece of software. How did this change make it to production without some process catching its impact? Was there really no internal discussion from a code review at least? This seems hard for me to believe. I expect more from Microsoft.
> Was there really no internal discussion from a code review at least? This seems hard for me to believe.
The outlined story feels unfortunately very believable to me.
Teams need to push out the most number of features, and nobody stops even for a second to think about how a feature might affect other flows or other users not in the feature request.
It might have been quickly reviewed to check if the code does what it needs to do (add the coauthor note).
Do you think reviewers will think about unwanted effects when they need to get back to feeding their own poorly thought out and underspec'd features to their LLMs?
> Was there really no internal discussion from a code review at least? This seems hard for me to believe.
>The outlined story feels unfortunately very believable to me.
100% agree here - we seem to forget that most developers hate code reviews. I actually laughed out loud at the use of the word "discussion," it's so rare people want to get together and talk about changes. By the time the PR is up anything that stands in the way of merging and shipping is seen as a nuisance.
To my mind this whole debacle is not really the individual's fault, or even the team's fault, but the economic pressures that drive people into situations like this.
Fair point. We did catch it internally in testing (as we use VS Code for all our work, so some folks did stumble on it), but I think we underestimated the impact and should do a better job at that.
This is honestly the most concerning part of all of this. You're saying you knew that this exact bug was present up front and still decided to release it?
This basically invalidates the entire premise that it was an innocent mistake. It's impossible for me to believe that you actually thought that people wouldn't care about 100% of their commits being attributed to Copilot even when it was never used. Either you're misconstruing what you caught with the testing beforehand or your entire development process is tainted, because there's no way that a non-evil corporation would see this default behavior and think that people would be fine with it. It seems far more likely you just thought you could get away with it.
I think there is a "ship fast" component here that should be adjusted. Product Management introduced weekly "stable" releases in March, no matter the content.
I think so too, but my point is that even according to their own words about what happened, the best possible interpretation is that they didn't mean to do it but knowingly let it happen. I agree that a worse version is more likely, but it's pretty damning when even the ceiling for what they can plausibly claim is "we intentionally didn't bother stopping it once it happened accidentally".
A generous read of this comment might be that you did catch it internally in testing AFTER it shipped but shrugged it off as something you'd patch in the next release in a week or two. Is that what you meant here?
Or that it was caught but didn't surface fully before release?
A helpful governance policy here might be that anything that mutates user content without opt-in consent requires a distinct sign-off or a double sign-off, if the goal is to prevent this from happening in future.
I saw a lot of "they made a game I like (Halo), therefore they must not be that bad" from the gaming crowd that only experienced the console side of it.
Also, who/what group is pushing for this change internally and what is the opinion of the team implementing it? What is the road map and vision for AI in VSCode?
I think there are a few of us who appreciate you being up front. I'd question the intent and why it was a mistake, especially when the commit[0] message reverting said functionality cites "widespread criticism" and this very HN thread, which makes it look like the revert is due to negative PR rather than a mistake.
My issue with this: if my intention is to never have these "co-authored by <tool>" trailers in my commits, this is a sudden breaking change. What's worse, it is not immediately visible to the user. Now I could look like I use a not-company-approved AI. That's absolutely unacceptable; this could cost people their jobs. The "bug" (or "metrics-boosting feature", as PMs call it?) that makes it claim all commits, including ones never touched by Copilot, is just icing on the cake.
Changing the default behavior for all of your users with no notification is pretty unforgivable. Even if this feature worked correctly (it obviously doesn't), this should at minimum be a prompt after upgrade to let the user confirm that this is what they want. But honestly it should be opt-in for those that want it.
To have it silently just start adding marketing copy to git commit messages is pretty bad. To have that added text not be visible to the user in the UI so they can remove it before commit is just much worse.
This kind of thing being released speaks to a greater dysfunction over there. Not a good look at all, and I am not a Microsoft or AI hater. But my commit messages are not where you move fast and break things.
> Changing the default behavior for all of your users with no notification is pretty unforgivable.
I noticed that as soon as you make a bug report/feature request on VSCode's repo, you instantly get someone's OpenClaw agent with an automated pull request that sometimes wants to change defaults in the main codebase
Looks like AI is really trigger-happy with that, with zero understanding or care that there's thousands of users affected and it's not just one individual's settings.json
Also, the hallucinated PR does not necessarily address the original issue whatsoever, just like this PR. It should have functionality to detect AI-authored code, but whoever made the PR skipped actually doing the hard work and just changed a default to always on, exactly the kind of misunderstanding you see with OpenClaw shotgun PRs
And then they apparently posted an alibi "I'm sorry" here. Or maybe it is genuine, but the choice is between incompetence and fake "I'm sorry". Where is QA?
> To have it silently just start adding marketing copy to git commit messages is pretty bad. To have that added text not be visible to the user in the UI so they can remove it before commit is just much worse.
This is one of the problems, but it is not the only one. To be better, it should be:
1. It should be visible in the UI for entering the commit message, to make it clear what it is doing.
2. It should not add such a thing if Copilot is disabled. (This was mentioned by dmitriv and will hopefully be fixed soon enough.)
I do not use Copilot nor any other LLMs nor VS Code, but if the problems are corrected then I think the feature would probably be reasonable.
No, it's fine. I really hope that more people will switch to something else, like Neovim, Emacs, or any other open-source editor where such unacceptable situations are practically impossible. I hope more people will start to value their privacy and right to choose, and find the courage to say gtfo and switch to something else. Because this is unforgivable.
It just means that when changing a global default with such impact the user should be prompted with an option to opt out of the new behavior. Something like “AI assisted changes will now have ‘coauthored by Copilot’ added to the commit message”. If the user clicks “no thanks” it changes their local setting to “off” to opt them out of this new global default.
Don't you understand that the default shouldn't be changed at all in this case? It improves nothing and affects every single user. If an org/project wants this behavior then it can enforce this flag for its contributions. The only valid reason for this change is someone's performance somewhere in Microsoft is dependent on VS Copilot usage metric.
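For example, an org could commit a workspace settings file to the repo to scope the behavior to that project only. The setting name below is a placeholder of my own invention, since I don't know what the real key would be called:

```json
{
  "git.aiCommitAttribution": true
}
```

Dropped into a project's `.vscode/settings.json`, that would opt the team in per-repo without touching anyone's global defaults.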
Good feedback, there needs to be a more explicit opt-in into this for teams that want it. FWIW nobody's performance here will improve from having this metric :-)
>a project manager vibe-coded the change without thinking it through at all
The PMs vibe-coding and having no idea what they're doing isn't even the main issue (although it is pretty bad).
The main issue is: how are the actual engineers supposed to "review" the slop? They probably report to the same PM, or sit below them in the org chart, and might be evaluated by them. Not just at MS, but at any company.
Such a conflict of interest would be detrimental to quality anywhere. You wouldn't build a bridge like this, nor should you build software like this.
Co-Authored-By is normally a trailer, and trailers aren’t part of the commit message. It’s likely the commit editor isn’t set up to show trailers. They’re not exactly obscure, but it does seem that they’re relatively unknown.
What do you mean they aren’t part of the commit message? Trailers like (signed off by) are absolutely part of the message. Tools can choose to treat them as special metadata, but they’re part of the commit.
I mean that they’re not necessarily part of the --message parameter to `git commit`, but instead part of the --trailer parameter. I don’t know how VSCode is programmed, but it seems plausible that trailers are handled separately from the message parameter.
We're talking about Git here. The question is not "how VSCode is programmed", the question is "does Git have a special field for commit trailers". The answer is no. Git stores the trailer as part of the commit message.
If you look at the comment I’m responding to, it is in fact about how VSCode is programmed; specifically, a possible reason why the Co-Authored-By trailer doesn’t show up in VSCode’s commit message box.
It seems most reasonable to consider porcelain vs. plumbing command details in deciding whether something is logically distinct to Git. git-commit has --message and --trailer options; git-commit-tree has a --message option. I take that to mean --trailer is a convenience option that provides a consistent way to append those details to the commit message. But that doesn't mean a trailer isn't part of the commit message, nor that the user shouldn't see it while reviewing the commit message.
I appreciate you acknowledging that this was a mistake, but as you surely know from your own experience with other people’s mistakes, some mistakes are so egregious that they cast doubt on the intentions of the people involved even if they are corrected later.
To me, “let’s add false attribution to every commit by default without informing the user” falls squarely into that category. I don’t think I’ve ever worked in an environment where something like that wouldn’t have been red-flagged in three seconds by anyone who took even a casual glance. I’d honestly be embarrassed if such a proposal even made it into a public pull request for my organization, nevermind that pull request getting merged.
If what you described would make it to our PR queue, it would definitely not pass the gates.
The idea was to track AI-only changes and add the trailer when such changes were detected AND the setting was enabled. Obviously, we didn't want to attribute all changes to AI. There is a bug in change detection (which slipped through testing), which led to even non-AI changes being tracked. And thus we have this problem.
The PR linked here wasn't even implementing the feature, it was changing the default for the setting.
I just wanted to say, while I think this feature was a bad idea, I sincerely applaud your willingness to post here, knowing you'll get roasted. Seriously brave and commendable.
Other people aren’t your slaves. You don’t get to demand they respond immediately, and this Reddit-like mindset needs to die. HN is a place where we often can actually get devs from companies responding directly and listening to feedback, and this hostility is looked at by all the other devs from those similar companies and remembered when it’ll be their turn.
Stop making HN a worse place for everyone by being unnecessarily hostile. (and this comment is only mildly directed at you but rather at a bunch of people in this thread)
They said "ask me anything" three times and then didn't respond to a single question. Stop making HN worse by comparing someone dodging accountability to slavery.
Someone made a mistake, owned up to it, and fixed it. No one is entitled to more than that for free software.
Anyone with a bit of software experience knows it’s easy to miss things when you are doing your own tasks + context switching + giving reviews. We should exercise kindness and empathy instead of projecting evil intentions.
Even if I accepted the premise that this is too stupid to be evil, that doesn't change the fact that this would be extremely easy to test for. The fact that they considered it important enough to get this feature implemented without proper testing says plenty about their incentives.
They might not have intentionally done this (although it's honestly not clear), but they definitely didn't care enough to prevent it because it wouldn't have been hard at all. That's my point here; which bugs slip through and which don't implicitly conveys what their priorities are. I don't think it's particularly hard to infer what story this bug tells.
First comment does not sound constructive - are you interested in my opinion on (n)vim?
I am not a lawyer, so I can't comment on legal things. However, I have already responded elsewhere here that this feature has nothing to do with licensing or ownership and was added for those that want the attribution. I understand the desire to see anything Microsoft as bad and evil, but we are really just trying to make a better experience.
Perhaps next time you should consult with legal before asserting co-authorship on end users’ code. The appended comment was not “edited with VS code” or “sent from VS code”, it was “co-authored by Copilot”. You do understand that there are legal implications to claims of authorship, right?
Comments like this are why developers don’t engage directly. The first link is “just asking questions” and implying that the project is rotten. He’s not being “creative” he’s just not engaging in bait.
They’ve done a commendable job responding. Please show some respect when people put themselves in vulnerable situations, otherwise the whole “devs respond on HN” thing will cease to happen.
I noticed you only respond to comments that are positive (or neutral). The majority (and the most insightful) comments here are negative, but you seem to ignore them.
Why are you taking the fall and not the PM who authored the change (and submitted a PR with an uninformative title and no comment) and, I'm assuming, plays a role in managing the project?
Just for any future mea culpa, I'd recommend not hedging with comments like this one:
> As folks mentioned here - many similar tools do this as well.
It's really doubtful they have the same behavior people are complaining about here: namely including the authored by Copilot statement when it wasn't used (or even enabled).
Anthropic does by default. I had to put “no co-authored by lines in commits, ever” into my global settings.
That's pretty close to "included when it wasn't used (or even enabled)", since it's on by default and you have to explicitly opt out. It's not even clear where to turn it off; I just rely on the AI to figure out not to do it.
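For what it's worth, as I understand it (verify against the current Claude Code docs, since this may have changed), the opt-out is a flag in Claude Code's settings file rather than something you have to ask the model for. In `~/.claude/settings.json`:

```json
{
  "includeCoAuthoredBy": false
}
```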
That is not what dmitriv claimed. He said this was a bug; the behavior should have been to add it only when AI was involved, which, indeed, is what Claude does by default.
What is the use-case where you expect users would be happy that you modify their commit messages with MS marketing? Do you think it would be ok to edit every commit to append “written with VS Code”?
> I am open to any (constructive) comments/suggestions
Here's one:
I think a senior sysadmin needs to sit you down in their office and have a very serious talk with you about the responsibility that comes with writing code other people run. I am serious. We used to have these talks with everyone who got sudo access. You shouldn't be shipping code if you don't understand the trust that is required of people in your position.
This isn't just about this "feature" being active when AI features are disabled; the way you mis-implemented this has resulted in it modifying the commit message without the user even seeing it! That is malicious behavior, not an innocent little feature "to make life easier".
I've fully switched off of VS Code to Kate now, which is faster and better behaved in most cases anyway. Bye.
thank you for doing this, it gave me the push I needed to finally switch to zed. vscode has really been going downhill for a while now. it's sad to watch, it used to be a really nice editor
I could easily see companies, especially enterprise-level companies, expect code that was generated with AI to have some level of ownership attributed to that AI. Whether a simple "Co-Authored-by Copilot" byline on the commit is the right way to do that is another question though.
Thanks for facing this head-on here; mistakes happen.
I think the default to on should also be reconsidered regardless. The assessment (co-authored by AI) may be valid but the assumption the user wants that advertising is exactly that, an assumption, and a dubious one at that.
> There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code.
What metric did Microsoft use to assess that VS Code users "expect" their commits to have unsolicited messages added to them?
> Obviously, it should not be on when disableAIFeatures is on and it should not be reporting changes that were not done by AI.
Did you discuss adding these messages with your legal department?
What is Microsoft's position on adding such authorship statements to the code Microsoft did not author?
Or is Microsoft stating that using LLM assistants makes Microsoft a co-author of the code?
Does Microsoft have copyright claims on the code if LLM assistants are used at any time during its creation?
> There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code. As folks mentioned here - many similar tools do this as well.
Then make it an extension, not an IDE-behaviour thing. Is that so complicated, so difficult?
So why did this feature get rushed out without proper testing? Are you claiming that not having this happen automatically for the commits where Copilot actually co-authored them is so urgent that it was necessary?
I'd argue that this was extremely non-urgent and the fact that this got rushed so sloppily is a giant red flag about the priorities of you and your team. You asked about constructive criticism, and yet you're also acting like this is a one-off innocent mistake by only addressing what you've done to roll this back for now and address the immediate issue. I don't buy the premise that we could trust that this was a mistake made in good faith when it's something that you clearly should have known people would be so upset about if you got it wrong.
Considering the size (and significance) of the VSCode user base, it feels like someone should be in charge of ensuring that default behavior doesn't change without good reason.
Does anyone (or any team) have ownership of the extensions/git/package.json file?
> There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code.
Can you expand on this? Who "expects" their code editor to lie about using Copilot?
The supposedly expected functionality is very obviously that it marks copilot co-authored code as copilot co-authored, not the bug that is being reverted.
Hopefully this answers some more of the questions raised here.
It also incorporates a lot of feedback from this thread with respect to next steps (thank you!).
Under which circumstances can you ever approve something like this?
This goes beyond incompetence. Either you do not understand what important information a commit holds or what seems way more plausible to me is that Microsoft simply decided to try this out and see how people would react.
Whether or not the intent is good, the optics are extremely bad.
I assume you are keenly aware that Windows, Office, and by extension, all of MS's customer facing products are not exactly regarded particularly well. Windows 11 specifically is a laughing stock today, even among folks who don't necessarily know computers, and a lot of that resentment is driven by 2 things:
• Pushing AI everywhere when no one asked for it.
• Not reading the room and adding junk features that no one wants.
This change is both of those, again, wrapped up in another package. The timing of this is extremely bad for VS Code as a project as it looks an awful lot like, 'Microsoft is just shoving my AI junk into my stuff and failing to work on the features we actually want'.
I'm not taking a side on this either way as I will jam a fork into my eye before I use VS Code over VS proper and have no stake in this, but I'm just saying that the powers that be that are approving these kinds of changes are ~continuing~ to fail to read the room.
I'll add, as someone who may be forced to consider VS Code in the future (depending on if Windows unfucks itself before something critical breaks for me on W10), I would read something like that and, I think rightly, assume bad intent. I know VS Code and VS and Office and Windows are not the same team, but again, MS as a whole has a very serious optics problem, and my read of this on the surface level is: "Oh, they tried to sneak in more AI junk, and when called out on it, they pushed it to the back, probably to make it a default again in some future update that they can hide it in". It just looks very, very bad at a time when MS products have no social capital left to spend on this kind of stuff.
I appreciate your willingness to come and try and salvage this situation. What I don't understand is why are you the one doing this here and on GH, during the weekend, and not the PM who created the original PR? Surely they have some input.
And another thing is, why was there absolutely no pushback from your part on any of the issues with the original PR, and why it was merged within hours in that state?
You are working for one of the largest companies on the planet. You push code that gets used by millions of people.
How on earth are you not thoroughly testing your changes??? How can something like this slip into a real build? Like, this is egregious.
I work somewhere that makes software for a lot of users (although not as many as Microsoft!). We also need to ship quickly. But we work on a 45-day cycle, with 15 of those days being dedicated to ensuring we didn't add any awful bugs (and fixing them ASAP before it goes to users - or reverting the change until it is ready).
I would expect Microsoft to have AT LEAST that amount of care. We can't trust that you are shipping software that even works anymore!
What other changes are going in that are broken in more subtle ways? It used to be that VS Code was rock solid, and any issues were likely third-party extensions - but now it's a crapshoot, and I can't be sure if crashes etc. are the fault of extensions or Microsoft themselves!
The VS Code team needs to use this mistake as motivation to lead the charge on making a quality editor. Not an editor that gets half-baked, untested changes pushed weekly. An editor that is dogfooded and where a mistake like this going to prod is unacceptable.
Because if you don't, people won't trust your editor anymore. Just like people have stopped trusting your OS, and now users are fleeing it in such numbers that the Windows team has recognized they have a problem and are changing course.
That WILL happen to VS Code and GitHub soon unless you actually start owning mistakes internally and fixing them before users find them.
> There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code. As folks mentioned here - many similar tools do this as well.
Please elaborate on what "similar tools" claim that commits are co-authored by AI when the AI features are all turned off. You're trying to defend the theoretically correct version of this that you didn't make, not the actual version you did make.
> I am open to any (constructive) comments/suggestions
It's hard to take this seriously; you know exactly what you did wrong here and what you should have done instead. Testing that this doesn't happen when Copilot was not used is extremely trivial; if you're not lying about it being unintentional, the fact that it didn't occur to anyone to do it still says more than enough about what the priorities are here. At absolute best, the priorities of you and your team are so fundamentally wrong that it's impossible to trust any of you going forward.
That aside, corporations and groups don't make decisions. People do. We can understand and empathize with what led them to that decision (and sometimes we might be looking at the wrong person), but they're still responsible.
On the other hand...this feels like a situation where possibly you should not have said anything at all? The fact that you're on HN responding feels ill-advised to me.
So far this is what I've gleaned:
- Microsoft has PMs vibe coding against VSCode (by itself not necessarily a big deal)
- Microsoft PMs can vibe code against VSCode and get stuff shipped to production with only a single approval
That second one is a huge deal in my book. What I've learned now is that VSCode, a product with an enormous deployment base, is trivially compromised if the calls are coming from inside the house. Apparently all that has to happen for all users to be affected is a PM requesting you to "please approve my PR real quick, trying to get it in." And now there's a massive change in the wild, visible to many users.
Being familiar with big corp dynamics, this really worries me. This does feel like a not-well-thought-out mistake but I can easily imagine many other scenarios that would be far worse.
How can I trust VSCode going forward? How can I reassure my employer and fellow colleagues that it's safe to use? This is really a terrible look for Microsoft and very damaging to the reputation.
I feel bad for you the engineer and PM here because with the web being what it is, folks are casting blame onto you. That's missing the point since the issue is that MSFT even let this happen in the first place. Engineering processes need to be halted and re-evaluated basically yesterday. If something like this happens again it may not be possible to rebuild the trust at all.
I hate to say it but for myself this issue makes me strongly consider switching away from VSCode permanently, something I had not seriously considered before yesterday. Best of luck to everyone on the VSCode team.
Absolute clown car of an operation. Just abdicated responsibility, even when it comes to very basic testing. This is BonziBuddy-scam-software bad, intended or not. Have fun, Microsoft, but this is where we part ways.
One of my customers actually requires attribution to agents if they're used, not only for tracking purposes but also for understanding potential vectors for slopcode. It's been useful and occasionally enforced. That being said, implementation without due consideration and warning should be frowned upon.