Call for an Army of Be(a)sts!
Pushing for a more democratized AI.
I used to write on Medium before this, these are ports.
Artificial Intelligence will be powered by three domains: a better understanding of natural language, generative models that learn probabilistic distributions (GANs, VAEs), and, the main ingredient, training by reinforcement learning (RL) algorithms. Let's talk about RL for a minute. It is the way forward because it learns to solve problems, not just the posterior mapping that supervised algorithms do. You can throw a problem at it, like playing chess, and it will learn to do that with or without your help, or Go, Dota 2, StarCraft, or even the stock market!
RL agents do so by interacting with environments, simulations of the things you want the agent to do. OpenAI, with gym, democratized the domain and pushed involvement in and awareness of the idea. These environments are a great example of the power of open source: they're free, they're powerful, and they're backed by a community of developers and users who can solve your problems in an instant. Despite being great, gym lacks a very important item: a complex strategy game, more complex than chess and other board games. Something that has rules and regulations and requires immense amounts of creativity.
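The interaction pattern gym popularized is a simple reset/step loop. Here is a minimal sketch of that loop using a toy stand-in environment (the `ToyEnv` class is made up purely for illustration; a real gym environment would be created with `gym.make`):

```python
class ToyEnv:
    """A stand-in environment exposing the classic gym API:
    reset() -> observation, step(action) -> (obs, reward, done, info)."""

    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t  # observation: the current timestep

    def step(self, action):
        # reward 1 if the agent matched the parity of the step, else 0
        reward = 1.0 if action == self.t % 2 else 0.0
        self.t += 1
        done = self.t >= self.horizon
        return self.t, reward, done, {}

# the canonical agent-environment interaction loop
env = ToyEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = obs % 2  # a trivial "policy": guess parity from the observation
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)  # → 10.0 with this perfect policy
```

Any game wrapped in this interface, board game or deep strategy game alike, becomes something an RL agent can learn from.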
A screenshot of DeepMind’s Starcraft-2 environment
If we look at other open-source projects in this domain, of which there are many, there is nothing like this. The StarCraft II environment from DeepMind and Blizzard is groundbreaking but is still plagued by issues: the game itself is not open source, access to the full version is limited, and, above all, it weighs in at 30 GB! All of this makes it difficult to use.
Let’s make one, then
I have personally been working on this for some time, and after a lot of feedback and validation from some of the best in the industry, I am finally opening it up. Freeciv is an open-source, turn-based, multiplayer deep-strategy game with rules, which makes it the perfect candidate for an environment. Not only is it fun, it's also one of the oldest games people still love to play, with the initial commit made in 1996 and stale branches going back 21 years! And we are writing a Python binder for it. You can check it out here; we have uploaded the original work done for the paper ‘Learning to Win by Reading Manuals in a Monte-Carlo Framework’.
A screenshot from freeciv 2.4
This work is not just important; in hindsight it may prove to be one of the most influential. This project deserves not just good people, but the best.
There is a small team working hard on this, but we need more people to get the project off the ground and really establish it. A lot of work remains on writing the code and the dynamics of the environment. We are starting with a simple requirement: compile the source code and document the process. The end goal is an agent that can learn to traverse and play in a complex environment by itself with minimal help from humans (the only support should be basic instructions).
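To make the target concrete, here is a rough sketch of the kind of gym-style interface the Python binder could eventually expose. Every name here (`FreecivEnv`, the observation keys, the turn cap) is hypothetical, a design target rather than the actual API of the repo:

```python
# Hypothetical sketch of a gym-style wrapper around a Freeciv game.
# None of these names exist in freeciv-python yet; this only illustrates
# the reset/step contract an RL agent would program against.

class FreecivEnv:
    """Skeleton of an environment wrapping one Freeciv game."""

    def __init__(self, ruleset="classic"):
        self.ruleset = ruleset
        self.turn = 0

    def reset(self):
        """Start a new game and return the initial observation."""
        self.turn = 0
        return {"turn": self.turn, "units": [], "cities": []}

    def step(self, action):
        """Apply one action (e.g. move a unit, found a city), advance
        the game, and return (obs, reward, done, info)."""
        self.turn += 1
        obs = {"turn": self.turn, "units": [], "cities": []}
        reward = 0.0              # e.g. change in game score between turns
        done = self.turn >= 200   # placeholder cap on game length
        return obs, reward, done, {"action": action}

env = FreecivEnv()
obs = env.reset()
obs, reward, done, info = env.step("end_turn")
```

An agent written against this loop would not care whether the backend is a toy game or the full Freeciv server, which is exactly what makes the binder worth building first.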
Applying deep reinforcement learning here
This environment will provide a huge opportunity to test AI for the following reasons; you can also take this as a manifesto of the things we need to achieve in this project, and contribute:
Let’s not be this dumb
All this is what this environment offers; the main difficulty is that we need to build and test it. The repo is new, and projects will be added to it to guide us and set benchmarks. Link to the GitHub repo: https://github.com/yashbonde/freeciv-python
Cheers!