[-]LarrySwinger0
Fag overflow.
[-]x0x70
The kind of stupid thing about this is that deleting your comment on the internet doesn't always delete it on the back end. Any large tech company that's trying to leverage its data wouldn't let you delete your content for real.
That, and journaling exists.
Data flows from user -> log -> production db.
That way, if your production db goes down, you should be able to rebuild any state the db has ever been in just from the logs.
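A minimal sketch of that journaling idea in Python (the file name and event shape are made up for illustration): the "delete" only changes the current state, while the original text stays in the append-only log.

```python
import json

JOURNAL = "journal.log"  # hypothetical append-only log file

def record_event(event: dict) -> None:
    """Append every user action to the journal before it ever touches the DB."""
    with open(JOURNAL, "a") as f:
        f.write(json.dumps(event) + "\n")

def apply_event(state: dict, event: dict) -> dict:
    """Apply a single logged event to the 'production' state."""
    if event["type"] in ("create", "edit"):
        state[event["id"]] = event["body"]
    elif event["type"] == "delete":
        state.pop(event["id"], None)  # gone from the DB, still in the log
    return state

def rebuild_state() -> dict:
    """Replay the whole journal to reconstruct any state the DB has been in."""
    state = {}
    with open(JOURNAL) as f:
        for line in f:
            state = apply_event(state, json.loads(line))
    return state

# "Deleting" a comment only changes the current state; the original body
# is still sitting in journal.log for anyone mining the data.
record_event({"type": "create", "id": 1, "body": "useful answer"})
record_event({"type": "edit",   "id": 1, "body": "junk"})
record_event({"type": "delete", "id": 1})
print(rebuild_state())  # {} -- but all three events remain on disk
```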
This is why writing junk is better. Yes, they will still have the original version, but now they have to separate useful edits from junk edits. That's far more work, and it's more likely they won't even attempt it. Now both data sets contain data they don't want.
Honestly, I think it only hurts the curious. Now it's going to be harder to debug and find solutions for weird coding questions.
[-]x0x70
Yeah, but I fear an AI monopoly. I basically cheer against whoever is most likely to get it. For that reason I'm cheering Meta over OpenAI at the moment.
Their AI is honestly almost as good right now, but they don't have as much user focus. I've found it actually gives you better initial solutions in terms of organizing things in a reasonable way, whereas OpenAI is more likely to try something more advanced right away that may not fit your project. OpenAI is also more random: there might be four semi-advanced ways to solve a problem, each more reasonable for specific scenarios, and OpenAI will pick one at random and feed it to you.
This is why I've been getting good results starting with meta.ai until it stops making progress on the code, then feeding that into OpenAI to improve it. It also means OpenAI sees less of its own wrong clutter, so it has cleaner context.
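If you wanted to script something like that hand-off instead of copy-pasting between the two chat UIs, a rough sketch might look like this (assuming a local Llama model through Hugging Face transformers and the official OpenAI Python client; the model names and prompts are placeholders, not anything meta.ai or OpenAI prescribe):

```python
# Sketch of the "draft locally, refine with GPT" loop described above.
from transformers import pipeline
from openai import OpenAI

# Placeholder model name; any local instruct model would do.
draft_model = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct")
refiner = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft(task: str) -> str:
    """First pass: get a plainly organized solution from the local model."""
    prompt = f"Write straightforward, well-organized code for this task:\n{task}"
    return draft_model(prompt, max_new_tokens=512)[0]["generated_text"]

def refine(code: str) -> str:
    """Second pass: hand only the draft to GPT, so it never sees its own clutter."""
    resp = refiner.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"Improve this code without restructuring it:\n{code}"}],
    )
    return resp.choices[0].message.content

print(refine(draft("parse a CSV of user comments and count edits per user")))
```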
That was a tangent. The point is that meta.ai is pretty alright, open source, and not demanding that Congress lock consumers out of GPUs or require them to be registered with the government (something OpenAI is actually pushing for as part of "AI safety", aka "our profits' safety").
So yeah, sorry. Burn any data they could get their hands on. Good AI will happen (it's largely already here). The timeline of improvement doesn't really matter as much as who is able to accumulate power at the exclusion of others. Considering OpenAI is already pushing to exclude others, it's a real concern.
As a person who would rather smash the system than watch it destroy creative people, I feel those feels. However, as you said, if it's been posted, then they have it. I think the only good way out of all of this is giving the average Joe, or someone willing to follow a tutorial, that kind of power in their own home, which we see with the development of LLMs. However, it's not economically feasible for the normal person to rock dual A10s to crunch their own numbers, nor do they have the time to push those boundaries; they've got jobs. The only real rebellion is to just walk away, but we both know what that leads to: silence and not progressing ourselves and our fellow man. This is another reason why I find censorship so distasteful.
I think we can both agree that the direction the corps are moving in is dangerous. It's a Pandora's box that's getting cracked open. Further, if you consider the old saying that DARPA is 10 to 15 years ahead of consumer-grade tech, well, we see what consumer-grade tech can provide. Imagine that being enhanced by a limitless budget and unending man/brain power. I think we are already well and truly fucked.
Thanks for turning me on to Meta; I will check it out later today. GPT has been getting worse and worse recently: just a lot of non-viable solutions, or missing the point in my code. Maybe it's just my code, which could very well be the case.