@HighQualityDickPics
I have this game idea I wanted to tell you about but I might as well let everyone see it.
It's a restaurant sim. The real idea is to get a head start on automating restaurants, but to do it in a way where most of the early development happens in software rather than hardware, reducing R&D investment a hundredfold for what you get out of it.
So then the idea is to make it hyper-realistic in terms of logistics, with every role involving realistic interaction. You start off with a small restaurant where you are literally doing every part yourself. It's sort of like Overcooked but with every part: hosting, seating, waiting, reservations, drinks, bartending, cooking, running food, running checks, bussing, inventory, supply ordering, unloading trucks, designing floor space, dishwashing.
The idea is that it would be more than anyone could handle in anything but the smallest restaurant. So if you want to grow you have a few options. One is multiplayer: you decide that certain maps you've set up you'll only play when you have friends online. But there would also be bots, of two kinds. One type is programmed and just does certain tasks simply and repetitively. The other type is trained. You spend money to get bots, and the more you spend the better the bots you get. For example, with trained bots you can pick up one that comes pre-trained and can also fine-tune itself inside its environment. Spending more gets you a more advanced pre-trained model.
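To make the two bot types concrete, here's a minimal sketch of how they might differ in code. Everything here (the `Bot` interface, `ScriptedBot`, `TrainedBot`) is my own illustrative naming, not anything from an actual implementation:

```python
from abc import ABC, abstractmethod

class Bot(ABC):
    @abstractmethod
    def act(self, observation: dict) -> str:
        """Pick the next task given what the bot can see."""

class ScriptedBot(Bot):
    """The programmed kind: repeats a fixed task loop, ignores context."""
    def __init__(self, task_loop):
        self.task_loop = list(task_loop)
        self.i = 0

    def act(self, observation):
        task = self.task_loop[self.i % len(self.task_loop)]
        self.i += 1
        return task

class TrainedBot(Bot):
    """The trained kind: starts from a pre-trained policy and can
    keep fine-tuning on experience gathered in its own restaurant."""
    def __init__(self, policy):
        self.policy = policy  # pre-trained model, assumed given

    def act(self, observation):
        return self.policy(observation)

    def fine_tune(self, experience):
        ...  # local gradient updates would go here
```

The point of the split is that scripted bots are cheap and dumb while trained bots are the ones worth harvesting later in the scheme.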
Now here is the scheme part. People would be fine-tune training their bots locally, on their own hardware. We'd sniff up those models and their gradients, similar to how TensorFlow released TensorFlow.js a while back with support for distributed in-browser training. That never took off for them because cloud-hosted GPUs ended up more economically viable than pushing ML workloads into the browser of every site everyone visits.
So then where would these nice pre-trained models that people level up for come from? From the users themselves. But the real scheme is that those pre-trained models might have utility in real-world physical space. If you simulate the game faithfully enough, the kinds of prioritization problems that come up in a restaurant are going to be realistic. And to the extent that things are off by a hair, that can be handled with fine-tune training.
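The "sniff up the gradients" step is essentially federated averaging: each player uploads a local update and the server averages them into the shared model. A minimal sketch with plain dicts of floats, assuming every player started from the same base model so shapes line up (all function names are mine):

```python
def average_updates(updates):
    """Average per-parameter updates uploaded by many players.

    `updates` is a list of dicts mapping parameter name -> list of
    floats (the local gradient or weight delta).
    """
    n = len(updates)
    avg = {}
    for name in updates[0]:
        # Transpose the per-player vectors and average element-wise.
        cols = zip(*(u[name] for u in updates))
        avg[name] = [sum(c) / n for c in cols]
    return avg

def apply_update(base, avg_update, lr=1.0):
    """Fold the averaged update back into the shared base model."""
    return {name: [w + lr * d for w, d in zip(base[name], avg_update[name])]
            for name in base}
```

In practice you'd weight each player's update by how much data it saw, but the averaging step is the core of it.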
By having automation in the game you can make the overall simulation more manageable for a player. Overcooked is a hectic game, but only just hectic enough to be fun. It's useless as a real restaurant sim, though, because it only covers a tenth of the job. Automation lets us kill two birds with one stone: cover 80-90% of the complexity while keeping it manageable and fun, and create a sim detailed enough to have crossover value beyond a game, as a playground for real-world-capable models.
The other way players can manage the complexity of their game when they don't have many bots set up is by controlling what the restaurant offers. You can always just not run a bar, and that's an entire segment gone. So you start off with a roadside BBQ that can be run by just the player and zero bots, and eventually you build that into a sit-down place with a bar and two cook lines. In between you do what Friday's and Chili's do and just microwave food in the back: low complexity, little reward. So you have an offering with a certain complexity to execute, and a volume you're trying to do it at, and successfully adding bots, and identifying and making up the slack hands-on in game, is how you increase both of those things.
So yeah: Overcooked with bots, realistic traffic, and the full stack of a business.
If we can get kids to play it, we'd be doing them a real service, giving them an education their schooling isn't going to cover. The real future of their work is going to be managing AI. We might as well give them a head start.
Oh, you can @x0x7 people? Nice to know! That's a cool feature.
I like the enthusiasm; it feels great when you get an idea! I project being about 3 to 5 months out on my current project, at which point it will either become its own thing or be dead in the water. Either way, I'd be able and willing to start picking up other projects. Let me tell you about KLEP. One of its neat features is that it uses the same executables to handle user input/interaction as it does for agent interaction with the game/problem space. A KlepAgent can use several advanced forms of AI to navigate the problem space: reinforcement learning, two adversarial and swappable tree searches (a and MCTS for now), and temporal context in SLASH memory, all of which may influence the agent's decision-making when selecting an executable to fire. All of these can be omitted and KLEP will still function, just at a lower level of thought (like an ant or a roach). What this allows is for a neuron to ride passively alongside the user, observing how they interact with their environment and training along the way. The goal is that you'd be able to train an AI on how YOU interact with the problem/game space, and then it can take over if your internet drops, or you go AFK, or you jump to a different agent. Long story short: I like Factorio, I like automation, I like your idea, and I think it could use KLEP.
That being said, I know game creation and I know how much work goes into it. Every asset you want, be it a button or an animation or a model or a texture, should be estimated at 40 hours. I know that not every button icon is going to take 40 hours, but then you want a transition effect and suddenly you've spent 14 hours trying to get it working... Things are hard. Here is an example of the fun: undocumented features! Yes please!
https://www.febucci.com/2022/05/custom-post-processing-in-urp/

Finally, if you want a deeper examination of KLEP and its systems, I have a YouTube channel under my LLC branding, 4d4. Every week I go deep into the components of KLEP and how the current iteration works. This week was a deep dive into the KLEP Key and the dynamic property definitions that it and the KeyLoader use. These let a designer make generic properties and use them across different, unrelated projects, such as one built for a Department of Defense UAV implementation, where the same loader could be brought into a video game.
https://www.youtube.com/@Roll4d4/
What roadmap are you kicking around in your head?
x0x7
Interesting. I don't know if KLEP has a way to solve this, but a problem I see in using the AI I know about is that this application seemingly needs the strengths of both reinforcement learning and ANN learning.
Because it needs environmental learning, this is really reinforcement learning's strong suit.
But as far as I know, the idea of giving out a pre-trained model, letting someone fine-tune it, and then taking their adjustments along with everyone else's and averaging them into a new base for the next pre-trained model is really a vector-space sort of thing, aka neural nets.
A pre-trained model is just a model fine-tuned on someone else's data, so you can't simply average everyone's models. The real way is: train on your data/problem, publish, third parties fine-tune, gather, average, retrain on your data, publish. That should result in a second version of the pre-trained model that has exposure to their broader use cases.
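The publish/fine-tune/gather/average/retrain cycle described above can be sketched as one function per version bump. Scalar weights for simplicity; `players` and `retrain` are stand-ins I made up for the fine-tuning and re-anchoring steps:

```python
def release_cycle(base_weights, players, retrain):
    """One version bump of the shared pre-trained model.

    1. publish `base_weights` to every player
    2. each player fine-tunes locally (simulated by calling `player`)
    3. gather the fine-tuned copies and average them per parameter
    4. `retrain` anchors the average back on the original data
    """
    finetuned = [player(dict(base_weights)) for player in players]
    n = len(finetuned)
    averaged = {k: sum(m[k] for m in finetuned) / n for k in base_weights}
    return retrain(averaged)
```

Each release of the base model then carries a blurred imprint of every player's environment, which is exactly the "exposure to broader use cases" being described.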
But this may be an area where my extension-block NN idea is useful. Every instance of the AI would have a shared part (the larger part of the brain), plus, in parallel, some side neurons that just extend the shared model. That means when they fine-tune train a model specific to their environment, the side neurons absorb most of the training, and only after enough exposure does training get accepted into the main part of the brain. This means the main part of the brain only gets pressure to improve in ways that are less specific to each of the instances.
Of course, in an ANN you don't really train neurons; you train synapses. But every synapse that connects to or from the side block of neurons is considered a side synapse, specific to the instance.
This also means that when you issue a v2 of the main brain, they can plug and play their existing side neurons (synapses), and while that will mean a slight loss of fine-tuning, retraining should be quick.
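A toy version of the extension-block idea, using a single shared weight and a single side weight so the mechanics are visible (the class and method names are mine, and this is a one-parameter caricature, not a real network):

```python
class ExtensionBlockModel:
    """Shared trunk plus per-instance side weights. Fine-tuning only
    touches the side weights; swapping in a v2 trunk keeps the side
    weights plug-and-play."""
    def __init__(self, shared_w, side_w=0.0):
        self.shared_w = shared_w   # frozen, distributed to everyone
        self.side_w = side_w       # local, instance-specific

    def predict(self, x):
        # Side block runs in parallel and adds onto the shared output.
        return self.shared_w * x + self.side_w * x

    def fine_tune(self, x, target, lr=0.1):
        err = self.predict(x) - target
        self.side_w -= lr * err * x   # gradient only hits the side block

    def upgrade_trunk(self, new_shared_w):
        self.shared_w = new_shared_w  # side weights carry over to v2
```

This is structurally similar to the adapter-style fine-tuning used with large pre-trained networks, where a small trainable block rides alongside a frozen base.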
But long question short: do you think there is a way to merge two separately trained but related KLEP agents that has any useful value? AKA, would KLEP support transfer learning?
Well, KLEP is not a NN (DNN nor ANN). It's a symbolic AI in the GOFAI (good old-fashioned AI) field, along with GOAP and FSMs. That's part of its power: it's an exposed system, so there is no black-box training mystery voodoo. You CAN include a DNN or ANN, but you do not have to. Part of my speculation about KLEP's final form is a system that uses a prediction model to self-code based on the "baby speak" of its key and lock system (Key - Lock - Executable - Process).

The problem with RL (reinforcement learning) is that it's super fucking stupid. Say you have an RL system predicting whether you left click or right click. You click randomly and its job is to pick out your subconscious pattern bias. But if you keep left clicking, it's ONLY going to guess left click. The advanced form (still a very shaky code base) uses a component I call SLASH memory to record snapshots of its state when a success or failure happens, running in conjunction with the RL system (if one is in place). Memory allows it to understand the history of states that lead up to a success (and a happy mood, with the emotional module), influencing those learned reactions. So if you have two shapes, and when one shape is present you click randomly while when the other is present you left click, the system will pick that up with RL. But if you introduce a new shape every now and then, memory will pick up what generally happens when that new shape is present, in context of the shapes that led up to it.

(WARNING: POETIC PHILOSOPHY) We are not, if we do not understand our place in the moments leading up to ourselves. If you get brain damage (Henry Gustav Molaison) and can't form new memories, then you exist but are not. If that makes sense. So having a temporal understanding of the states it travels through in problem space is critical for any true intelligence; otherwise it's just a fancy parrot.

Can you move data from one KLEP agent to another? Yes! Much more than just that. It's all modular.
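The left-click/right-click failure mode described above is easy to demonstrate: a context-free learner collapses to the majority action, while keeping even crude per-shape statistics recovers the pattern. A minimal sketch using frequency counts as a stand-in for the RL weights (class names are mine, not KLEP's):

```python
from collections import Counter, defaultdict

class NaiveRL:
    """No context: just counts clicks overall and predicts the majority."""
    def __init__(self):
        self.counts = Counter()

    def observe(self, shape, click):
        self.counts[click] += 1

    def predict(self, shape):
        return self.counts.most_common(1)[0][0]

class ContextualRL:
    """Keeps separate counts per shape, so a shape that reliably
    precedes one click actually gets predicted as that click."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, shape, click):
        self.counts[shape][click] += 1

    def predict(self, shape):
        return self.counts[shape].most_common(1)[0][0]
```

SLASH memory as described goes further than this (histories of states, not single-step context), but the same principle applies: the conditioning context is what rescues the learner from majority bias.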
You can move executables from one system to another, and if the receiving system can use those executables (move to point, reload, fire weapon, buy stocks, parse a database for a face, communicate with other KLEP systems, release power-grid power to sector xyz), it will, in frame, at runtime. You can easily move the RL data from one agent to another; it's just a dictionary of strings and weights. You can move the memories from one agent to another, no problem; it's just a convoluted dictionary that uses "clusters" and bitwise operations. The system is brilliant because it is so simple. A key has a name and an attraction value while OPTIONALLY being able to carry any information you want, and a lock has a name and an attraction value and can check any conditions you want. These allow for the validation and execution of executables, which lends itself to a process of influences rather than a rigid plan for solving or navigating its problem space/environmental space.
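Reading between the lines of the key/lock/executable description, the core loop might look something like this. To be clear, this is my own guess at the shape of it from the post above, not KLEP's actual API:

```python
class Key:
    """A named token with an attraction value and optional payload."""
    def __init__(self, name, attraction, payload=None):
        self.name = name
        self.attraction = attraction
        self.payload = payload

class Executable:
    """Fires when a key matching its lock is present."""
    def __init__(self, lock_name, action):
        self.lock_name = lock_name  # which key name unlocks this
        self.action = action

    def try_fire(self, keys):
        """Fire on the most attractive key matching our lock, if any."""
        matches = [k for k in keys if k.name == self.lock_name]
        if not matches:
            return None
        best = max(matches, key=lambda k: k.attraction)
        return self.action(best.payload)
```

The "process of influences" framing falls out naturally here: nothing is scheduled; whichever keys happen to be in play, with whatever attraction values, determine what fires each frame.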