tazz.
oz.
+1,341|6798|Sydney | ♥

i use chatgpt a lot - it's like google on steroids. i was able to diagnose a bios issue on my pc that wasn't presenting as a typical bios error by describing symptoms, and it located a reddit post from 5 days ago with two comments. no way i would've found that.

it's great for the troubleshooting side - but when seeking matter-of-fact advice or information - even for ~vibe-coding~, as the new software developers call it - it needs to be fact checked, frequently.

there are some more interesting implementations in programming specifically - a lot of coding is monotonous, and some ai tools let you hit tab to autocomplete their guess of what you're about to type out. i love this - it speeds up my work an easy 2x.
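for anyone who hasn't tried these tools, here's a made-up python sketch (not any particular tool's output) of the kind of boilerplate i mean - you type the first couple of lines and the tool offers the obvious rest, which you accept with tab:

# you type this much of a plain data class...
class User:
    def __init__(self, name, email, age):
# ...and the tool suggests the monotonous remainder; one tab keystroke accepts it:
        self.name = name
        self.email = email
        self.age = age

    def __repr__(self):
        return f"User(name={self.name!r}, email={self.email!r}, age={self.age})"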
everything i write is a ramble and should not be taken seriously.... seriously.
uziq
Member
+552|4075
yeah, that works in 90% of cases, but in the other 10% it returns arrant nonsense – worse, advice that can even be harmful. a suggested fix that doesn't solve your tech issue is one thing, but people are increasingly consulting this thing for health or relationship advice – chat-GPT just launched a 'Health' agent ffs! people are going to come to harm because this technology, on a quite literal level, doesn't know what it's talking about.

whenever you ask it a question in your own domain of expertise, you realise quite quickly just how much it is blagging behind a thin veneer of the agent's chatty/personable/polite 'voice'. asking it anything doctoral-level, say, on a humanities subject quickly gives the game away. it's regurgitating fine-sounding nonsense, like a jay post in D&ST.

just the other day i asked a very simple query, one that should have been easy to break down in a response. i wanted it to compare two speaker models in the same range by the same manufacturer, with only minor changes to specification (mostly size-based), and to give me a breakdown of which speaker is better in which situation. the response from chat-GPT seemed to rely on about 4 reddit threads for the bulk of its sources, and what it gave me was garbled nonsense: self-contradictory and sometimes just plain wrong, i.e. talking about one speaker model as if it were the other, and vice versa. of course, as soon as you pull it up and politely query these faults (and politely is the keyword; it actually responds with HIGHER QUALITY replies when you write in a polite tone; not even the company staff know why this happens), it says, "you're right, i got that totally wrong ..."

i totally agree it's like a google on steroids, but it's also occasionally hopelessly shit in a way that google's search algorithm – or at least the old, pre-AI google – isn't/wasn't. sometimes the old way of putting in 15 minutes of cursory reading and self-directed research really does help. for advice on audio hardware, i'm still opting for the 500-page threads on a dusty forum somewhere full of contributions crowd-sourced from a global army of nerds, every single time. and it's not even close. i would rather skimread 20 pages of a thread, even with its noisier signal, than rely on a confident-sounding summary from a chatbot that gets it fucking plain wrong 1 time in 10.

i understand it's going to revolutionise coding and things like the lower rungs of the legal profession. but that's because passing the bar exam or learning to code or whatever is, to a large extent, about memorising the textbooks and official code (whether legal or technical) and regurgitating it back in the right order. of course LLMs are fantastic at that. they've memorised the exam paper backwards and forwards and can recombine it in 10^8 ways.

Last edited by uziq (2026-01-10 05:01:05)

Dilbert_X
The X stands for
+1,839|6729|eXtreme to the maX
It's a chat-bot, not an expert system; it just aggregates information with zero intelligent analysis and puts it into sentences so it seems like a person.

The few times I've used it, it's spat out gibberish which almost seems right but isn't.

The most concerning thing to me is that it's being used to write code; who knows where that will end up.
Fuck Israel
uziq
Member
+552|4075
it has improved tremendously in recent years/iterations. like if you're asking it day-to-day google query stuff, it will do it.

'give me a trip itinerary for the next week in osaka'. yeah, it'll return like the perfectly simmered and digested breakdown of the ur-tourist experience in osaka. it can do that.

as a day-to-day 'agent' i can see it replacing the culturally ubiquitous habit of 'googling' things. but, yeah, it really doesn't know what the fuck it's doing in any intelligent sense, and the problem is that people are being duped into thinking the 'agent' is somehow an actually intelligent technology that can dispense advice and expertise, as opposed to one that presents extremely concise summaries in 2.5 seconds.

it's what apple's 'hey siri' should have been 10 years ago when they first introduced it. and ironically apple have totally dropped the ball with that particular gizmo.

Last edited by uziq (2026-01-10 04:58:06)

unnamednewbie13
Moderator
+2,095|7395|PNW

uziq wrote:

"you're right, i got that totally wrong ..."
"that was off base of me, and you're totally right to push back on that. i was overreliant on search results from limited sources. if you'd like, we can update with a more grounded comparison between models using real specifications. just let me know!"

slow down and unpack!

Last edited by unnamednewbie13 (2026-01-10 10:32:12)
