A note about Saurian's learning AI
Dear Saurian fans,

For those of you who have been attending our livestreams and following our development closely, I have some potentially disappointing news: I am shelving the experimental reinforcement learning architecture I have been developing for Saurian's AI in favor of a more traditional, less risky approach. For those of you who have no idea what I'm referring to: I have been designing Saurian's AI using reinforcement learning--a type of machine learning inspired by behaviorist psychology--in the hope of producing unprecedentedly complex, unpredictable, and believable behavior in Saurian's dinosaurs. A handful of games have used reinforcement learning in their AI, notably artificial-life and god games like Creatures (1996) and Black & White (2001), but ours was intended to be even more ambitious by allowing the AI agents to learn not only from the player but also from each other.
While I was successful in implementing the algorithm, along with several shortcuts and enhancements (some proven, some experimental), reinforcement learning is an active area of research in the field of AI, and there were crucial problems and questions I could not guarantee feasible solutions to--problems that could jeopardize the entire project if I continued programming the system only to hit an insurmountable obstacle later in development. AI researchers first presented on machine learning at GDC a decade ago, highlighting its potential benefits and disadvantages, and their warnings rang true during my development of this system. My main worry was the amount of time needed to fully train the AI (and the difficulty of estimating that time), which was a liability and a time sink not only for final gameplay but for debugging.
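For readers curious what "training an AI" looks like in practice, here is a minimal tabular Q-learning sketch--a standard reinforcement learning algorithm, shown purely for illustration and not indicative of Saurian's actual system. Even a toy agent on a six-cell strip needs many episodes of trial and error before its behavior settles, which hints at why training time is so hard to budget for a full game:

```python
import random

# Minimal tabular Q-learning demo -- illustrative only, NOT Saurian's
# actual architecture. A toy "dinosaur" on a 6-cell strip learns to walk
# right toward food in the last cell.

N_STATES = 6          # cells 0..5; food (reward +1) sits in cell 5
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def choose_action(q, s, rng):
    """Epsilon-greedy selection; ties are broken randomly."""
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    best = max(q[(s, a)] for a in ACTIONS)
    return rng.choice([a for a in ACTIONS if q[(s, a)] == best])

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:          # episode ends at the food cell
            a = choose_action(q, s, rng)
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Standard Q-learning update toward the bootstrapped target.
            target = r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (target - q[(s, a)])
            s = s2
    return q

q = train()
# Greedy policy after training: step right (+1) from every non-terminal cell.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

Note that this tiny environment takes thousands of updates to converge; scaling the same idea up to many interacting dinosaurs with rich state spaces is where the training-time estimates become genuinely unpredictable.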
I was really excited at the prospect of innovating in the service of bringing dinosaurs back to life as believably as possible, especially since game AI programmers have historically avoided experimenting with these architectures--but I now understand their apprehension. I can't continue taking these risks in good conscience when I know there is a safer, time-tested approach available, and when my top priority is to help ensure Saurian's successful release. I hope that one day, if Saurian and Urvogel Games become successful enough to create more games in other ecosystems, I will be able to resume this research, experimentation, and development--but until then, I am sticking with tradition.
What this doesn't mean is that I'll be any less committed to making Saurian's AI depict dinosaurs as believable animals (not just as the mobs and monsters they usually are in games), drawing on paleoart like All Yesterdays and on modern science to depict potentially unexpected and surprising behaviors. That will always be my goal with this project, which I hope will be seen not only as a game but as legitimate paleoart. What it does mean is that AI progress will be delayed somewhat while I redesign and rewrite parts of the system. I apologize and take responsibility for that, and hope that you will remain patient with our international team of volunteer developers working on this passion project. We deeply appreciate your continued support, which is a great source of motivation for us.
In the meantime, we thought it'd be nice to release one of my reinforcement learning experiments to the public so that you can see and play with the learning AI for yourselves (and confirm that this post isn't just shenanigans). I called it the 'DinoTrainer': a development tool our testers were meant to use to generate varied personalities for the learners and to collect data. It was never intended for the public and is not remotely indicative of Saurian's gameplay. Here's our small token of gratitude for those of you who cared enough about us to read this. Thanks for your understanding, and don't forget to read the instructions! Download: [PC] [Mac]
With love from the Saurian family, Henry (AI Programmer)