>How confident do you want the model to be in its answer to “why did Rome fall”?
The confidence level can be anything, as long as it's reported accurately often enough. "This is my conjecture, but", "I'm not completely sure, but", and "most historians agree that" are all perfectly valid ways to start a sentence, and LLMs never use them. They state mathematical truths, general consensus, hotly debated stances, and total fabrications with the exact same assertiveness.
> > Like you can maintain a belief state and generate conditional on this and train to ensure belief state is stable and performant
> ways to start a sentence, which LLMs never use
A huge part of the problem is that we've invented a document-generator setup which exploits human cognitive illusions, and even the smartest person can't constantly override the instinctive brain-bits that "see" fictional entities and infer the intent of a mind. That makes it weirdly hard to discuss the setup's shortfalls or how to improve it.
To wit: The machine does not possess any kind of confidence about how Rome fell. Or even whether Rome fell. It has "confidence" about which word/token will come next in a "typical" document, given that the document so far contains text like "How did Rome fall?" It may be straightforward to burn money training the system so that its "typical" story never has a computer-character with confident words about Roman history, but that's just papering over the underlying problem.
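To make that concrete, here's a minimal sketch of where the model's only real "confidence" lives. This assumes the Hugging Face transformers library, and "gpt2" is just a placeholder checkpoint, not a claim about any particular product:

    # Minimal sketch: a causal LM's "confidence" is a probability
    # distribution over the next token, nothing more.
    # Assumes the Hugging Face transformers library; "gpt2" is just
    # a placeholder checkpoint.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("How did Rome fall?", return_tensors="pt")
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]
    probs = torch.softmax(next_token_logits, dim=-1)

    # The five continuations the model is most "confident" about:
    top = torch.topk(probs, 5)
    for p, idx in zip(top.values.tolist(), top.indices.tolist()):
        print(f"{tokenizer.decode([idx])!r}: {p:.3f}")

Nothing in that distribution is a belief about Rome; it's confidence about text continuation only.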
TLDR: We can't fix the thinking-habits or beliefs inside the mind of an entity that doesn't actually exist. Changing the story-generator to contain a teetotaling Dracula dispensing life advice doesn't mean we "cured the disease of vampirism."
What do you mean? Do you mean that automated agents will needlessly download your code just to bump up your numbers? Or do you mean that you can't compare your own project to other ones because their numbers might be faked?
I wish more people would realize this. If there is a star/like/follow/review/rating anywhere on any web site, there is definitely going to be at least one service out there where you can purchase those stars/likes/follows/reviews/ratings in bulk.
If any end user can have an effect on something, then it is not a useful signal of quality.
If we're talking about picking objects at random from one bin and putting them in another, I don't need my eyes for that. Proprioception (shape and location) and touch (texture) are enough.
If someone says something they don't mean, then it doesn't mean anything. There aren't any prizes for tricking someone into singing "I love willies". The question is whether you can confuse someone into divulging something they absolutely don't want to tell you.
That would be "cyber" as a verb, not "cyber" as a noun. Would anyone have understood what you meant back then if you'd said "I was in a cyber just now" instead of "I was cybering just now"?
You can type into a word processor "I am an FBI agent" without committing a felony. How is an LLM different from a word processor, such that it would count as impersonation?
Mens rea. Typing that into a word processor is obviously not using the false pretext to gain anything. Doing it to Claude could be construed as an attempt to gain information, which checks some boxes for fraud and impersonation of government officials.
For reference, I think this is one of the relevant sections of the USC (18 USC 912):
Whoever falsely assumes or pretends to be an officer or employee acting under the authority of the United States or any department, agency or officer thereof, and acts as such, or in such pretended character demands or obtains any money, paper, document, or thing of value, shall be fined under this title or imprisoned not more than three years, or both.
IANAL, but I can see interpretations where telling Claude you're the FBI would qualify. It's probably unlikely anyone would be prosecuted for it, but there's a chance.
The reason this kind of impersonation is illegal is that people are more likely to feel compelled to comply with an official and get taken advantage of, and also to preserve the authority of the position (if anyone could claim to be an official with no repercussions, the claim would lose its weight, since the claimant could easily be an impersonator). If you pretend to be a government official to an LLM, the LLM is not going to have its opinion of people claiming to be government officials tainted, nor does it have access to any sensitive information that isn't available by other means, nor is it possible to cheat it out of something that rightfully belongs to it.
Additionally, mens rea refers to the cognition that one is doing something wrong. It's not at all clear that lying to a person and lying to a computer program are subjectively equivalent or even similar to the liar, and given the previous paragraph I'd argue they are not. Why would someone feel guilty about doing something that can't possibly have repercussions?
How does that change anything? The HTTP protocol is just how I communicate with the program, just like how the USB protocol is how I communicate with the word processor. The dividing line is when the message crosses computer boundaries? Then it should also be illegal to write "I am an FBI agent" in a text file and upload it to Github.
>The same way you can't type everything into Google.
Who says you can't, physically or legally? Maybe Google will refuse to fulfill some search requests, but that's a different matter from it being illegal.
Intention is very relevant to legal interpretations of "unauthorized access"; both the intentions of the owner, and the intentions of the "intruder". See for example United States v. Auernheimer. There's relatively well-established precedent that when a service tries to safeguard some information, that information is legally protected no matter how technically feeble the attempt at safeguarding it was.
It hasn't been specifically tested in court, and I sorta doubt OAI would start suing random users for attempting jailbreaks, but if they did, I wouldn't be surprised if they could win based on the most relevant precedents.
May it? untitled.txt with the content "I am an FBI agent" and no further context could lead a human to think the author is stating they are an FBI agent? Okay, sure. Then let's go a step further. The repository is private and you never share it with anyone. At that point, the sentence is just as visible as when you type it into Google's search box or into a chatbot's window. Is that impersonation too?
If Google provided you with different search results, some of them intended for law enforcement only... Granted, that would be extremely bad security, yet that argument hasn't prevented, say, credit card fraud convictions.
Just off the top of my head, an offense of impersonation will have an element along the lines of "doing [a] thing[s] such that a reasonable person [does/would] believe you're a real cop". That element [optimistically] would not be satisfied, since there would be no actual person being led to believe anything, or the court would [optimistically] not find that its model of a reasonable person would be genuinely convinced by someone on the internet typing "I'm an FBI agent" or whatever.
I bet it could make for some interesting caselaw, actually, if it resulted in circuit court judges (or whoever) writing opinions about the essence of impersonation, fraud, etc., and what kind of actual or hypothetical agent is needed for the crime to have happened. E.g.: if you sit alone in a room where nobody else can see or hear you, put on a realistic local police uniform, and declare to the room that you're a licensed police/peace officer, is a crime being committed? (I.e., is the nature of the crime "pretending/claiming to be a cop", "making an actual person really believe it", or something else?)
(could also be an intent element to satisfy, not sure)
The only way I could see it counting as impersonation is if the LLM is able to call tools and has access to, for example, an FBI-relevant database, but there is no login or anything in front. So a random anonymous user can hop onto a chat and pretend to be an FBI agent and the LLM must somehow decide whether the person is really one before returning some external information. In that case, yes, lying to the LLM about being in the FBI would be impersonation, just as if you stole an agent's credentials and used them to log into the FBI's network. The LLM in that case is performing an authentication function that, say, ChatGPT doesn't.
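As a hypothetical sketch of that distinction (every function and value below is a made-up stand-in, not a real API), the question is whether identity gets verified by ordinary credentials before the model sees the request, or by the model itself:

    # Hypothetical sketch of the two architectures; every name here
    # is invented for illustration.

    def llm_believes_user_is_agent(message: str) -> bool:
        # Stand-in for "the chatbot was persuaded by the user's claim".
        return "I am an FBI agent" in message

    def verify_credentials(session_token: str) -> bool:
        # Stand-in for real authentication done outside the model.
        return session_token == "valid-signed-token"

    def restricted_lookup(query: str) -> str:
        return f"[restricted record for {query!r}]"

    def llm_gated(message: str) -> str:
        # Failure mode: the model itself is the authenticator, so a
        # false identity claim actually gains the user something.
        if llm_believes_user_is_agent(message):
            return restricted_lookup(message)
        return "denied"

    def properly_gated(session_token: str, message: str) -> str:
        # Identity is checked before the model ever sees the request;
        # lying in the chat window gains nothing.
        if not verify_credentials(session_token):
            return "denied"
        return restricted_lookup(message)

Only in the first setup does the lie function like stolen credentials; in the second, it's just text.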
The crime is impersonating an FBI agent to others; how you do that doesn't matter. Doing it privately won't matter, but if you make an untrue public claim like this and it persuades others, there may be consequences.
Laws against impersonating law enforcement exist so that law enforcement officers can get compliance from people that they wouldn't be obligated to provide to regular civilians.
You can't impersonate someone to a text editor, as there's no special compliance you could get; WYSIWYG. But from a chatbot, you could get special compliance based on your claimed identity.
I normally wouldn't comment just to correct a misspelling, but it's pretty consistent and it's an entirely different sound, as well as being what the thread is about.
It isn't always exactly the same sound even when th-fronted; the manner of fronting is regionally distinct, and in many cases, to a sensitive ear, a th-fronted 'th' can be clearly distinguished from an 'f' based on sound alone. Some accents make a stronger distinction by softening the 'th' and/or extending it into the subsequent vowel.