
It feels like someone just quickly scanned pages and then relayed an answer based on pattern matching, without comprehending the source. On closer inspection, you'll find the source was actually talking about something else. It's very hit-or-miss. Oftentimes the results are more relevant than what search engines provide, but that's because there's a diaspora from Google, which censors, spies, and dumbs down its algorithm, while all the other search engines also suck. I'll try using ChatGPT simply to gather sources, and then read the sources instead of its answers, so it becomes a more interactive search engine. Or maybe I should try Claude; hopefully that's less terrible. But I'm really not impressed with ChatGPT.

[-]x0x7
0(+0|0)

Claude, my dude. I feel like I always have to argue with ChatGPT. For some reason I've never felt the need to do that with Claude, even though it's not perfect.

Maybe because Claude doesn't come off as so sure of itself. With ChatGPT, you seem to have to be pretty harsh with it to keep it from taking the conversation in its own direction.

[-]LarrySwinger
0(+0|0)

> Maybe because Claude doesn't come off as so sure of itself.

I noticed that immediately. First it says the Scythians conquered Media, but when I ask for details it corrects itself and says they didn't, and later on it corrects itself again and says they did. Make up your mind.

And there are no sources through the Bing interface. Maybe it's different for others but I want to use it anonymously.

On the subject of DDG: I'm honestly surprised by how few captchas I get from them and how friendly they are toward text browsers, despite how big they are. Looks like they've also found a non-intrusive way of keeping bots out. I know you're good at this already, but maybe we could learn something from them.

But regarding looking up information, I think I want to stay closer to actual articles people wrote and stick to AI-assisted search engines such as Andisearch and You.com. They provide regular search results based on queries and can summarize pages. That seems more trustworthy.

Edit: is that a blockquote? I love how responsive you are with feature requests. Hats off.

[-]x0x7
0(+0|0)

Yeah, I like the format of the AI answers you see on search engines now, but sometimes when you dig into the listed articles, you can't find the numbers it generated in any of them. I don't know why those search-engine AIs hallucinate numbers as badly as they do, but they do.

And it's sad, because that's the kind of fact you'd want to get out of a search engine at a glance. "How many blank" or "Which state has the lowest average rent," and it gives you numbers with citations. The average person thinks they've found exactly what they wanted and uses the figure. But you go into those citations and nope!
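If you want to catch this yourself, a crude check is to just grep the cited pages for the figures in the answer. A quick sketch; the answer text and URL below are made-up placeholders:

```python
# Quick check: do the figures in an AI answer actually appear in the
# pages it cites? The answer text and URL below are hypothetical.
import re
import urllib.request

answer = "The lowest average rent is $736/month, in West Virginia."
cited_urls = ["https://example.com/rent-report"]   # stand-in citation

# Pull numeric figures out of the answer (736, 1,250, 3.5, ...).
figures = re.findall(r"\d[\d,]*(?:\.\d+)?", answer)

for url in cited_urls:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            page = resp.read().decode("utf-8", errors="replace")
    except OSError as e:   # HTTPError/URLError both subclass OSError
        print(f"could not fetch {url}: {e}")
        continue
    for fig in figures:
        print(f"{fig!r} in {url}: {'found' if fig in page else 'NOT FOUND'}")
```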

Edit: The sad thing is that journalists are the laziest researchers on the planet, lazier than the average internet commenter. They're going to get those numbers from AI during "research" and print them, and then those numbers will be in print. The average internet article already makes a practice of not giving sources, and when they do bother, they all cite each other, as if news were a valid source for news.

Yes, the robots suck! I find it only useful for coding, and even then, if you don't know what you're doing and blindly copy-paste, you're fucked.

I had to ask GPT about this myself, and it told me: "Transformers use self-attention mechanisms to process sequences in parallel, making them more efficient and better at capturing dependencies across long sequences." I was looking at Seq2Seq for a theoretical extension of my symbolic AI, so I've dabbled in those waters a little. Long story short, both take in input and kick out output, and that input is a token-based breakup of what you type into it. So the output is constrained by its context window, i.e. the breadth of how much it can keep in its "head". 8,192 tokens is what GPT-4o can do, and that budget covers both the input and the output together, so you don't get the full window for each. So you can give it a script, and it's not going to remember the script itself; it may remember which vars you're using, but it's going to translate for itself, "this script does xyz", which is what its "memory" feature does.

Further, to get it to respond at all, they have to train it to respond. So it's trained to shut you up as quickly as possible, or, in other words, to give you an answer. Finally, on a personal note: either I got much better at coding or this thing stinks now, but the stuff I was able to do a year ago would be nearly impossible with the current model.
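To make the budget arithmetic concrete, here's a toy sketch (whitespace splitting stands in for a real BPE tokenizer, and the reserved-output cap is a made-up number):

```python
# Toy illustration of the shared context budget: input and output have
# to fit in the same window. Real models use subword (BPE) tokenizers;
# whitespace splitting here is just a stand-in to show the arithmetic.

CONTEXT_WINDOW = 8192   # the figure quoted above; real limits vary by model
MAX_OUTPUT = 1024       # made-up cap a service might reserve for the reply

def count_tokens(text: str) -> int:
    return len(text.split())   # stand-in for a real tokenizer

prompt = "please refactor this script: " + "def foo(): pass " * 500
used = count_tokens(prompt)
room_for_reply = min(MAX_OUTPUT, CONTEXT_WINDOW - used)
print(f"prompt uses {used} tokens, leaving {room_for_reply} for the reply")

# Once a conversation outgrows the window, older turns get dropped or
# summarized, which is why it "remembers" a paraphrase of your script
# ("this script does xyz") rather than the script itself.
```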

[-]LarrySwinger
0(+0|0)

I keep hearing it was better a year ago, and my experience is the same. They're dumbing it down because they don't want to empower us.

[-]x0x7
0(+0|0)

I have another, simpler theory. They use human evaluators to train a model that then evaluates a much larger amount of material. These models always pick up on the simplest pattern that works more than 50% of the time. My theory is that the human evaluators kept saying the longer of two outputs was the better one.

So now ChatGPT is outputting an encyclopedia for every response, which then becomes part of the context. By outputting so much, it dominates the context, and you can no longer steer the conversation.
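Here's a toy version of that failure mode; the 60% bias and everything else are made-up numbers:

```python
# Toy sketch of the length-bias theory: if human raters prefer the
# longer of two outputs even slightly more than half the time, a
# reward model with length as an available feature will latch onto it.
import random

random.seed(0)

def rater_prefers_longer() -> bool:
    return random.random() < 0.6   # assumed 60% bias, made-up number

# Simulate preference pairs: (len_a, len_b, label: 1 if a preferred).
pairs = []
for _ in range(10_000):
    len_a, len_b = random.randint(50, 500), random.randint(50, 500)
    longer_is_a = len_a > len_b
    label = 1 if (longer_is_a == rater_prefers_longer()) else 0
    pairs.append((len_a, len_b, label))

# The dumbest possible "reward model": score = output length.
correct = sum(label == (1 if la > lb else 0) for la, lb, label in pairs)
print(f"'longer wins' predicts the rater {correct / len(pairs):.0%} of the time")
# ~60% accuracy from length alone; an optimizer happily exploits that.
```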

I'm currently doing an online meetup group where we're building our own LLMs from scratch. I'm going to make the masking system floating point, so you can mark how much a token is a source and how much a token is supposed to give attention to other tokens.
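Roughly what I mean, as a numpy sketch; the gate names ("source", "attend") and the exact blending are just my own assumptions, not a final design:

```python
# Sketch of the floating-point mask idea: instead of a 0/1 attention
# mask, give each token two gates in [0, 1].
import numpy as np

def soft_masked_attention(Q, K, V, source, attend):
    """Scaled dot-product attention with per-token float gates.

    source[j] in [0, 1]: how visible token j is as a key ("source-ness").
    attend[i] in [0, 1]: how much token i attends out vs keeping its own value.
    """
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)                       # (T, T) logits
    # Key-side gate: source=0 drives the logit to -inf, i.e. invisible.
    logits = logits + np.log(np.clip(source, 1e-9, 1.0))[None, :]
    weights = np.exp(logits - logits.max(-1, keepdims=True))
    weights = weights / weights.sum(-1, keepdims=True)  # row-wise softmax
    out = weights @ V
    # Query-side gate: blend attention output with the token's own value.
    return attend[:, None] * out + (1 - attend)[:, None] * V

T, d = 5, 8
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, T, d))
source = np.array([1.0, 1.0, 0.3, 1.0, 0.0])  # last token invisible as a key
attend = np.array([1.0, 0.5, 1.0, 1.0, 1.0])  # second token only half-attends
print(soft_masked_attention(Q, K, V, source, attend).shape)  # (5, 8)
```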

Stable Diffusion could use something like that for inpainting as well.

That is very cool! Yeah, I think you're right; it would also explain the low-hanging-fruit answers it provides. I mean, >80% of the world is dumb as rocks, so low-hanging fruit works for them, and honestly it works for everyone most of the time. But the creative responses I was getting a year ago aren't there currently.

[-]PenthouseREIT
0(+0|0)

The current AI bubble will burst. They are just cool toys. That's all.