100%. I have to hold the floor by filling the space with "ummmmmmmm.... uhhhh...." which inevitably distracts me from my point altogether. Poor user experience.
Seems like there's a big risk of having that habit leak into human conversation. A lot of people try really hard to train themselves not to add those fillers.
Strongly agree, some of us like to choose our words more carefully when interacting with an LLM.
I've tried to convey this to OpenAI through various available channels (dev forums, app feedback, etc.).
Grok solves this by having an optional push-to-talk mode, but this is not hands-free and thus more cumbersome than just having a user-configurable variable like seconds_delay_before_sending_voice_input.
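The proposed setting could work something like the sketch below: hands-free listening that only sends the utterance once the user has been silent for a configurable delay. This is a toy illustration, not anything OpenAI or Grok actually implements; the variable name comes from the comment, and the chunked energy stream, threshold, and chunk length are all made-up stand-ins for a real audio pipeline.

```python
SECONDS_DELAY_BEFORE_SENDING_VOICE_INPUT = 2.0  # the user-configurable knob

def should_send(chunks, delay=SECONDS_DELAY_BEFORE_SENDING_VOICE_INPUT,
                chunk_seconds=0.5, silence_threshold=0.01):
    """Walk a stream of per-chunk audio energies; return the index at which
    enough consecutive silence has accumulated to send the utterance,
    or None if we are still waiting."""
    silent_for = 0.0
    for i, energy in enumerate(chunks):
        if energy < silence_threshold:
            silent_for += chunk_seconds
            if silent_for >= delay:
                return i  # send everything buffered so far
        else:
            silent_for = 0.0  # speech resumed; reset the countdown

# Speech, a brief "ummm" pause (shorter than the delay), more speech,
# then genuine silence long enough to trigger a send:
stream = [0.8, 0.9, 0.0, 0.0, 0.7, 0.0, 0.0, 0.0, 0.0]
print(should_send(stream))  # → 8
```

The point is that the mid-sentence pause at chunks 2–3 never triggers a send, because it resets when speech resumes; only the trailing run of silence does.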
I agree with your categories. The majority of the usage for me is (1) and (3).
(1) LLMs are basically Stack Overflow on steroids. No need to go look up examples or read the documentation in most cases, spit out a mostly working starting point.
(3) Learning. Ramping up on an unfamiliar project by asking Antigravity questions is really useful.
I do think it makes devs faster, in that it takes less time to do these two things. But you're running into the 80% of the job that does not involve writing code, especially at a larger company.
In theory, this should allow a company to do more with fewer devs, but in reality it just means that these two activities become easier, and the 80% is still the bottleneck.
That, and I've never had to beg an LLM for an answer, or waste 5 minutes of my life typing up a paragraph to pre-empt an XY Problem accusation. Also never had it close my question as a duplicate of an unrelated question.
The accuracy tends to be somewhat lower than SO, but IMO this is a fair tradeoff to avoid having to potentially fight for an answer.
Tangential, but you used to be able to use custom instructions for ChatGPT to respond only in zalgotext and it would have insane results in voice mode. Each voice was a different kind of insane. I was able to get some voices to curse or spit out Mint Mobile commercials.
Then they changed the architecture so voice mode bypasses custom instructions entirely, which was really unfortunate. I had to unsubscribe, because walking and talking was the killer feature and now it's like you're speaking to a Gen Z influencer or something.
I do it sometimes (even just through the OpenAI Playground on platform.openai.com) because the experience is incredible, but it's expensive. One hour of chatting costs around $20-30.
Our understanding of the world is overfit to the macro level, where we project concepts onto experience to create the illusion of discrete objects, which is evolutionarily beneficial.
However, at the quantum level, identity is not bound to space or time. When you split a photon into an entangled pair, those "two" photons are still identical. It's a bit like slicing a flatworm into two parts, which then yields (we think) two separate new flatworms... but they're actually still the same flatworm.
Experiments like this are surprising precisely because they break our assumption that identity is bound to a discrete object, which is located at a single space, at a single time.
Depends on your interpretation of quantum mechanics. In Bohmian Mechanics, there is a discrete particle guided by a wave described by the wave function. Also, macro discrete objects are not illusions; they're the result of decoherence, which suppresses the superposition from view (assuming the wave function hasn't collapsed, or isn't merely a mathematical prediction tool).
> Essentially all intelligent life is a pachinko machine that takes a bunch of sensory inputs, bounces electricity around a number of neurons, and eventually lands them as actions, which further affect sensory inputs.
This metaphor of the pachinko machine (or Plinko game) is exactly how I explain LLMs/ML to laypersons. The process of training is the act of discovering through trial and error the right settings for each peg on the board, in order to consistently get the ball to land in the right spot-ish.
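The peg-tuning metaphor can be made concrete with a toy sketch: treat each peg's setting as its probability of bouncing the ball right, and "train" by randomly tweaking one peg at a time, keeping a tweak only if the expected landing spot moves closer to the target. This is a deliberately simplified random hill-climb, not actual gradient descent, and every name here (`drop_ball`, `train`, the peg count, step sizes) is invented for illustration.

```python
import random

random.seed(0)

def drop_ball(pegs):
    """One ball: each peg's setting is its probability of bouncing right (+1)
    rather than left (-1)."""
    return sum(1 if random.random() < p else -1 for p in pegs)

def expected_landing(pegs):
    # A peg with right-bounce probability p shifts the ball by +1 with
    # probability p and -1 otherwise, i.e. 2p - 1 on average.
    return sum(2 * p - 1 for p in pegs)

def train(target, n_pegs=8, trials=500):
    """Trial and error: nudge a random peg, keep the nudge only if the
    expected landing spot gets closer to the target slot."""
    pegs = [0.5] * n_pegs  # start neutral: ball lands around 0 on average
    best_err = abs(expected_landing(pegs) - target)
    for _ in range(trials):
        i = random.randrange(n_pegs)
        candidate = pegs[:]
        candidate[i] = min(1.0, max(0.0, candidate[i] + random.uniform(-0.1, 0.1)))
        err = abs(expected_landing(candidate) - target)
        if err < best_err:
            pegs, best_err = candidate, err
    return pegs, best_err

pegs, err = train(target=4)
print(round(err, 2))  # small: the pegs have been tuned toward slot 4
```

Each accepted tweak is one "discovered setting"; after enough trials the board consistently drops the ball near the right spot-ish, which is the whole metaphor in about thirty lines.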
I recognized the word "glymphatic" from recent articles about the discovery of the brain's self-cleaning system, and then understood from the headline that these authors identified that the mechanism by which this occurs is driven by norepinephrine.