Starcraft is a game that is played with mouse and keyboard (or a game controller on console). I think that in order to say an AI has beaten a human, it should have to use the same interface that a human uses. That's part of the definition of the game.
In Go, anyone can place a stone as well as anyone else, it's deciding where to place a stone that's the game. In Starcraft, not everyone can control the interface to the game equally well. Playing Starcraft without using the same interface as humans is like playing baseball with something other than a baseball bat.
Also, I think it's unfortunate that they're choosing Starcraft 1: Brood War (presumably for API reasons) instead of Starcraft 2, since there are many more top-level players playing SC2 right now to give it a run for its money, with an evolving meta that could more readily adapt and challenge the AI after it wins some matches. SC2 is where the tournaments and money are right now (compared to BW), hence the best players devoting the most time to it.
From the 2015 AIIDE Starcraft AI competition report[0].
> ""Why not StarCraft 2?"
> This is the question we always get asked when we tell people we are doing a BroodWar AI competition. This competition relies completely on BWAPI as a programming interface to BroodWar. BWAPI was created by reverse engineering BroodWar and relies on reading and writing to the program memory space of BroodWar in order to read data and issue commands to the game. Since any program that does this can essentially be seen as a map hack or cheat engine, Blizzard has told us that they don't want us to do anything similar for StarCraft 2. In fact, most of the StarCraft 2 EULA specifically deals with not modifying the program in any way. We are happy that Blizzard have allowed us to continue holding tournaments using BWAPI, and they have also helped out by providing prizes to the AIIDE tournament, however until their policy changes we will not be able to do the same for StarCraft 2."
I imagine it also has something to do with convenience. Being such an old game, the technical requirements are quite low, which allows them to easily have that setup of multiple virtual machines on a few servers. Also, I believe they can use the same CD Key for each instance, and it doesn't require a connection to the internet, reducing variables and potential problems.
That said, it would be cool to see SC2 for this, and maybe we'll see it eventually. But I think Blizzard would have to allow it in the end, considering they've been in direct contact with them about this.
It has always seemed to me that there is a truly "proper" way to construct an AI 'bot' for a networked game, and that is to write your own headless game client software that connects to the game's network—emulating the physics well enough to avoid desynchronization with one's opponent, but otherwise being entirely its own program, rather than having anything to do with the reference client per se.
It's really, really hard even for developers of the actual client to avoid desyncs. Trying to reverse engineer the exact physics engine down to the bit would be a massive technical achievement in itself, I'd say.
This is, of course, because being a good game developer doesn't make you a good network programmer.
In the MMO sphere, where the people being hired for development tend to have a decent grasp of how networking is supposed to work, you don't see the "reference client" being developed until quite late—first you get a server, and a client library that does enough physics and network-messaging to appease that server. Then both integration test suites for the server, and the reference client, are written in terms of that library. Thus, the "business rules" of the game's simulation are forcibly decoupled from the particular UI used to present them.
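As a sketch of that decoupling (all names here are invented for illustration): the "business rules" simulation library knows nothing about rendering or sockets, and both a headless test client and the reference client would drive it through the same narrow interface:

```python
# Hypothetical sketch of the architecture described above: the
# simulation is its own deterministic library, and both the server's
# integration tests and the reference client consume it the same way.

class Simulation:
    """Deterministic game-state machine: no rendering, no networking."""
    def __init__(self, seed=0):
        self.tick = 0
        self.state = {"seed": seed, "units": {}}

    def apply(self, command):
        # Business rules live here; every participant runs the exact
        # same code, which is what prevents desyncs.
        self.state["units"][command["unit"]] = command["pos"]

    def step(self):
        self.tick += 1

class HeadlessClient:
    """Enough simulation/messaging to appease a server; no UI at all."""
    def __init__(self, sim):
        self.sim = sim

    def issue(self, command):
        self.sim.apply(command)
        self.sim.step()

# The reference client would wrap the same Simulation with a renderer;
# an integration test (or a bot) just drives the library directly:
sim = Simulation(seed=42)
client = HeadlessClient(sim)
client.issue({"unit": "marine_1", "pos": (10, 20)})
```

The point of the shape is that the UI is a consumer of the simulation, never the owner of it.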
You can tell when a game team has hired some good network programmers, because—at least if the game is a competitive one—the game will have "match recordings" that are able to be replayed on reference client versions newer than the one used to create them. This is true of SC2 (and false of a ton of other games, even much more recent ones like, say, Super Smash Bros U.)
How do they do that? Simple: they keep all the old versions of the "business rules" simulation library around together with the reference client, with a stable ABI such that the newer client can load the older library versions. When you want to watch a match recorded for a given simulation ruleset, it loads the relevant simulation library.
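A toy sketch of that version-pinning scheme (module names and fields invented): the replay stores only the command stream plus the ruleset version it was recorded under, and playback loads that exact ruleset.

```python
# Each entry stands in for a whole versioned "business rules" library
# kept alongside the client; here they are just tagging functions so
# the effect of picking the right version is visible.
SIM_VERSIONS = {
    "3.1.0": lambda cmds: [("v3.1.0", c) for c in cmds],  # old ruleset
    "3.2.0": lambda cmds: [("v3.2.0", c) for c in cmds],  # current ruleset
}

def play_replay(replay):
    # Re-simulating the recorded commands under the recorded ruleset
    # reproduces the match deterministically, even on a newer client.
    sim = SIM_VERSIONS[replay["sim_version"]]
    return sim(replay["commands"])

old_replay = {"sim_version": "3.1.0", "commands": ["move", "attack"]}
frames = play_replay(old_replay)  # newest client, old ruleset
```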
So, if there exists such a library for your game, you don't have to reverse-engineer the entire client; only the library. Mind you, you could just use the library, the same way the reference client does; but the low coupling makes such a library also much easier to analyze and reverse-engineer. So both options are on the table for SC2 in a way they aren't for most games.
Is it problematic to capture screen and generate mouse/keyboard input?
That way there would be no need for any privileged API... and it would be more natural (i.e. interacting with the real game, instead of an abstracted version of it).
BW has more than enough players to challenge the AI, there's still plenty of players like Bisu, Flash, Effort, NaDa, etc. There's going to be multiple BW tournaments broadcast this year too.
I think there's no need to make it use a keyboard/mouse; you could just add some human-like constraints that are assumed in the game (a limit on APM, a delay between moving the screen and being able to take an action/observe the state).
The interesting part would be whether they can build an AI, not a robot. It's unfortunate that both games are pretty figured out at this point though, it'd be interesting to see if an AI could be made that could react to a big metagame change like http://wiki.teamliquid.net/starcraft/Bisu_Build
I think APM should simply be passed as a parameter. Max it out at whatever the theoretical maximum for a human is (how fast the fastest person can click buttons). Then you can challenge a player to a game where you offer something like 10% less than their average APM over their last n games.
It would actually be quite interesting to see how much APM actually influences the victory. You can easily test this by having cloned AIs play themselves at different APM values and come up with an added value per added APM measure.
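A toy version of that experiment could look like the sketch below. The combat model is entirely invented (win probability proportional to APM share), so the numbers mean nothing; it only shows the experimental shape of self-play at different APM caps.

```python
import random

def duel(apm_a, apm_b, rng):
    # Invented toy model: the side with more actions lands
    # proportionally more hits, so it wins with probability
    # apm_a / (apm_a + apm_b).
    return rng.random() < apm_a / (apm_a + apm_b)

def win_rate(apm_a, apm_b, games=10_000, seed=0):
    rng = random.Random(seed)
    wins = sum(duel(apm_a, apm_b, rng) for _ in range(games))
    return wins / games

baseline = win_rate(300, 300)   # mirror match, should be near 0.5
boosted = win_rate(330, 300)    # same AI with a +10% APM cap
value_per_apm = (boosted - baseline) / 30
```

With cloned AIs, everything except the APM cap cancels out, so the win-rate delta per added APM is exactly the measure the comment proposes.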
But Brood War has been thoroughly explored, and is a much more stable target (SC2 is prone to game-shaking patches). Back when pro play was just ending, I was astonished by the level of play. While players may need a bit to practice against the AI, a good AI spurring more play is really exciting to me.
By forcing a bot to use a keyboard and mouse, it's no longer so much an AI challenge but an engineering challenge of creating robotic limbs that match human performance, totally unrelated. The two aspects should be separated. Arguably, the AI aspect is far more interesting as it relates to StarCraft.
I think the point was mostly to only let the bot have as much information as a player would get on the screen: no omniscient knowledge about the state of every visible unit on the map. You'd probably also want to rate limit the inputs, including changing the viewport position, to something reasonable.
You'd probably end up with a bot that simply rapidly moves the viewport over the whole map over and over again to give commands and see what is there. That's inconveniencing an AI slightly because now things are "seen" with a few frames latency instead of immediately, but I doubt it'd make a difference.
Ah, perhaps. However that raises new questions. What do you limit the input to? On par with the best players? Do you just average their APM and minimap navigation abilities? It could work, but I think it would be more interesting to let the AI take advantage of its enhanced ability to scour the map.
> it should have to use the same interface that a human uses.
Yeah, I've followed Starcraft AI automation for a while. The people programming these AI systems have basically been creating individual unit AIs with a tiny bit of latency.
It's cool to watch but absolutely does not advance the game theory of Starcraft. If you actually had to ration moves, they would have to come up with quite interesting systems. I'd be fine with something borderline impossible like 500 APM - as fast as you can physically click buttons with 10 fingers. Not 2000 APM.
Limiting the number of APM could be a solution to the bias you mention, especially if the AI eventually manages to beat an expert human player with lower average APM. Of course having a real robot using a mouse and a keyboard would be even better in the long run.
The results were pretty great, so it would be fascinating to see this work with Matt's version of SC2 as mentioned elsewhere in this thread: https://news.ycombinator.com/item?id=11326119
Raw pixels. And the score. The score was separate, or rather a signal representing increased score. Relevant quote from the paper:
"The emulator’s internal state is not observed by the agent; instead it observes an image xt ∈ Rd from the emulator,
which is a vector of raw pixel values representing the current screen. In addition it receives a reward rt representing the change in game score."
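As a sketch of the interface the quote describes (with an invented stand-in environment; DeepMind's actual setup used the ALE Atari emulator), the agent sees only a raw frame and a scalar reward per step, never the emulator's internals:

```python
from collections import deque

class ToyEnv:
    """Stand-in emulator: internal score is hidden from the agent."""
    def __init__(self):
        self.score = 0

    def step(self, action):
        self.score += action                   # pretend actions earn points
        frame = [[self.score % 256] * 4] * 4   # 4x4 "pixel" image x_t
        reward = action                        # r_t = change in game score
        return frame, reward

env = ToyEnv()
history = deque(maxlen=4)  # DQN stacked the last 4 frames as its state
total_reward = 0
for action in [1, 0, 2]:
    frame, r = env.step(action)   # agent only ever sees (frame, r)
    history.append(frame)
    total_reward += r
```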
Agreed. I think that is one of the major drawbacks/limitations of deep learning algorithms: they are primarily supervised. Supervised in the sense that you have to explicitly identify good and bad examples. Determining what is good and bad by itself (figuring out that the number at the upper right of the screen is a score) would be a major breakthrough with implications far beyond game playing. DNNs have been a breakthrough, with much better accuracy and discrimination than earlier neural networks, but they still require that the researcher point out what is good and bad. We still need a just-as-significant breakthrough in unsupervised learning.
> Starcraft is a game that is played with mouse and keyboard (or game controller on console). I think in order to say an AI has beat a human, it should have to use the same interface that a human uses. That's part of the definition of the game.
That's the easy part, though. If you can make a SC AI play well against a human you can certainly send mouse events into a window. It really doesn't show anything.
See for example Sikuli, which you can use for automated testing.
The AI is still giving the same commands as a player: move that unit there, attack that unit. Even if it were limited to one action per frame, at 60 fps that would be 3,600 actions per minute, still 15-20x that of a good SC player.
Further limiting the APM would be no different than limiting the AI itself; it wouldn't be any different than limiting its CPU/memory usage, imo.
Well, anyone who says that an advantage in micromanagement is not a big thing should watch the "Automaton 2000" videos. Given that Automaton is a map script and thus has virtually unlimited APM, it beats people from horribly disadvantageous positions (40 banelings vs 21 marines [1], or 100 zerglings vs 20 sieged tanks [2]).
I agree. It's one of the only areas the AIs have been good at, pathfinding and build orders being the other two. The AI should be limited to a number of actions per minute comparable with the opposing player or the best human champ. Then we're grading it on its "thinking" instead of its dexterity.
Or, the AI could be set up like AlphaGo was: as a computer giving instructions to a merely human player. I'd love to see what a powerful StarCraft AI would do if trained to "predict and compensate for" not only its opponent, but also the inevitable human errors in faithfully executing its strategy. A certain Sector Command and Control Unit springs to mind...
A huge amount of Starcraft is the intense APM training regimen that the pros undergo.
APM is a huge indicator of Starcraft "skill". It's not the only thing, of course, but winning "micro-battles" greatly changes the game.
I don't care how good Michael Jordan is. He'll never "win" a game of basketball vs a standard professional by shouting commands to an average joe.
Similarly, no average joe will be able to perform a muta-harass while retaining full-speed, taking only a single missile from a Turret. That sort of "micro-skill" takes practice and dexterity.
You don't think it'd be interesting to take a player who's already one of the top players, and measure the marginal gains of them doing their own macro+micro, vs. just doing micro and leaving macro to the AI?
The game moves too fast for that. When I contemplated it, I was going to let the AI do the micro battles and build orders. The other aspects are planning one's strategy, identifying opponent's strategy, counters, bluffs, and so on. Humans are best at this. The machines have been laughably easy to beat at it so far.
DeepMind's system is all about finding patterns. It might do better on those aspects. It could even be trained to recognize some aspects of enemy intent, how the battles are going, etc. Thing is, there's lots of potential for curveballs in Starcraft compared to Go or Atari games. Human pros curveball on demand. It should be interesting to see what it can do.
Hmm. Instead of purely focusing on macro, what if the human was in the middle, while the AI existed at both the top and bottom?
I'm now thinking more explicitly of the book series I alluded to (The General series by S.M. Stirling and David Drake): the human is a commander, so the units are intelligent in their own right (thus, handled by the AI); and the AI is also giving the human commander real-time advice based on what knowledge it can discover through the human's vision (isolated from the other AI-instance doing micro, but "smart" in the sense that it can assume that the micro is being done by a [fallible] rational actor that thinks like it does.)
I feel like that "human in the middle" configuration would actually make for its own new subgenre of 4X/RTS-like games, if we could get it right; somewhat like a more interesting version of tower defense. Like an RTS, it would be about issuing orders; like a MOBA, you'd have direct control of a "champion" unit. But the job of the unit would be to give those orders, and your job as the player would be to get their position fortified while also gaining enough information to accurately strategize.
Come to think of it, this is what Dungeons & Dragons was originally supposed to be about, wasn't it? Commander-level characters going off to do scouting or other special ops for their army, advancing in rank and gaining underlings in the process. (D&D1e assumes you'll just already have a wargame going with an overworld hex grid, unit stats, etc., and just serves as a "what heroes do in a zoomed-in view" add-on to it. Thus why it doesn't come with its own battle system.)
"the human is a commander, so the units are intelligent in their own right (thus, handled by the AI); and the AI is also giving the human commander real-time advice based on what knowledge it can discover through the human's vision "
I see what you're saying. Yes, I daydreamed about such models too. I got excited about two games that stepped into that direction: Supreme Commander's dual-monitor setup with a macro, commander-like view plus detailed, micro view; Full Spectrum Warrior and Full Spectrum Command. The S.C. setup shows people are dabbling in interfaces that might lead to that. FSW and FSC are straight implementations of what you describe: commanders controlling AI agents that are semi-autonomous and provide feedback. FSC isn't available to public but is what I wanted more: limited commander view with data on your troops, position, intel coming in from video feeds or satellite, and so on. A hybrid model might let me go Harbinger and "assume direct control" of a character or team.
"I feel like that "human in the middle" configuration would actually make for its own new subgenre of 4X/RTS-like games, if we could get it right; "
It could. I'm not sure what it will look like outside Full Spectrum Command or bots in shooter games. A lot of the experience comes from the style of the people playing plus their quirks. The Call of Duty Ghosts AI shows that we might be able to approximate that as it did it so well I thought I was playing online against rookies lol. Most fun bots ever were.
"Come to think of it, this is what Dungeons & Dragons was originally supposed to be about, wasn't it? "
I think it was meant to enable and constrain the imaginations of players so the game took place inside their head. I never played it but it was a brilliant idea. Come to think of it, you're onto something here because games like Skyrim have all kinds of autonomous people doing certain routines or behaving in certain ways. There's even contractors and mayors. Any of these people could benefit from AI. Just a matter of computing resources. Could have one world where everyone plays the same world whose characters are controlled via a server farm at developer's location. I originally envisioned that for Runescape when I failed to get them to create a version of it for AI research. It would've been great for testing pathfinding, build systems, strategy, chatterbots, and so on. Skyrim more so.
Note: We could also test a collective intelligence where individual agents publish what they learn to central forums organized by topic. AI expansions could take time to periodically scrape that, try to understand it, and factor it into their gameplay. Basically, simulating player help forums that humans use. Additionally, could build superintelligences, gods, or advanced/E.T. AI's that tap into that plus much of world state that shouldn't be available. Even let them make changes to map or items with that dynamic factored in.
Lots of potential that might not have been explored yet but could make even simpler bots a lot more fun to watch. ;)
Yeah, infinite APM would make stuff like a marine turning around and shooting instantly OP. It was just impossible to exploit before with human reflexes.
Also, the APM is not unlimited, it's just very high.
Watching the zerglings somehow wash through that tank barrage was just ... demoralizing. Those little buggers are not supposed to be a hard counter to tanks, dammit! Fighting against Automaton 2000 is like having your own personal Kaizo assaulting your base.
I think this really demonstrates why putting in APM limits is super important, and, as brought out elsewhere, also making the computer input via mouse/keyboard and get its data from the screen (maybe with a filter to ease up on the CV bits?). A lot of the fun in watching a StarCraft match is seeing how the big picture is balanced against the details, and how a person's limited cognition flows from one focus to another. Limited knowledge and scope of sight play a huge part in the skill.
But mostly I'm just ranting because dodging the splash damage feels unfair; goes against everything I stand for. Next we'll start seeing 5 Skill Rays bulldoze the map! (I'm going to be looking into this a lot more after work - it seems like a super cool project!)
Agreed. It also wouldn't be very interesting to watch a human-AI game where the human is better at planning and tactics but the AI just wins even uphill battles due to insane micro.
I would LOVE to see an AI vs AI game though where the AI had no restrictions on APM/micro.
I mean, what would an AI come up with given just mass zerglings vs mass zerglings. Would love to see the strategies possible with near unlimited attention and APM.
For example, while banelings are considered a hard counter to marines, stimpacked marines are still faster than even speed banelings off creep and just a little slower on creep. Pros use splitting to minimize damage, but here, as you can see, perfect control takes it to the ultimate showdown.
Another example would be high templar vs ghosts. It is fairly difficult for the Terran player to target the high templar, but just two EMP shots basically put the templar out of combat, while missing those shots renders the ghosts mostly useless.
So, any area-of-effect attacks would be mostly useless against infinite APM. That rules out colossi, tanks, banelings, thors and many other units. Probably some race would end up somewhat (or maybe insanely) overpowered. My bet is on Terran, due to the possibility of multiple drops (which might themselves be useless, though, because an AI can't "miss" a drop) and stim-marine micro. So the game would require massive rebalancing.
For Atari games, they limited actions to 50Hz, which is not entirely out of reach for human beings. I guess they would do something similar to make the SC challenge fair as well.
How does the siege tanks vs zerglings one work? How does the AI know which zergling is targeted? Also, was there a delay for siege tanks between selecting a target and shooting it down? Or was it instant like a marine's? I can't remember.
Or maybe it's because the opponent is also an AI, and Automaton simply knows which target it will auto-pick.
All tanks have a limited, fixed range, so Automaton just instantly splits all the zerglings away from whichever one first comes into a tank's fire range. It IS instant, but highly deterministic, at least for the tanks' first attacks. And after that first shot, thanks to the long cooldown, all the zerglings are already attacking the tanks in a nicely split formation and are mostly unkillable by tanks anymore.
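The geometric core of that kind of split can be sketched as a toy repulsion loop (this is not Automaton 2000's actual logic, just the idea: spread units until no two share a splash radius, so each shot can hit at most one):

```python
import math

def split(units, splash_radius, step=1.0, iters=100):
    """Greedily push pairs of units apart until no pair is within
    splash_radius of each other (or iters is exhausted)."""
    units = [list(u) for u in units]
    for _ in range(iters):
        moved = False
        for i in range(len(units)):
            for j in range(i + 1, len(units)):
                dx = units[j][0] - units[i][0]
                dy = units[j][1] - units[i][1]
                d = math.hypot(dx, dy) or 1e-9
                if d < splash_radius:
                    # push the pair directly apart along their axis
                    nx, ny = dx / d, dy / d
                    units[i][0] -= nx * step; units[i][1] -= ny * step
                    units[j][0] += nx * step; units[j][1] += ny * step
                    moved = True
        if not moved:
            break
    return units

clump = [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5)]
spread = split(clump, splash_radius=2.0)
```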
> StarCraft is closed source, making it quite challenging to run simulations. One of the techniques used by AlphaGo is reinforcement learning, which involves a huge amount of simulation. In order for DeepMind to overcome this limitation, it’s likely that novel abstractions of the state-space will need to be developed.
For their Atari work, DeepMind was using raw pixel data from the screen. You could do something similar for Starcraft. I suspect the actual machine learning question is 'given a sequence of n frames of the game, where should I click next? Repeat.'
A training set might look something like video of a game with a text file recording X,Y positions for every click and a timestamp that can be synced to the video. They need lots and lots of those.
From there things get drastically more complicated. To go from very unstructured data like that and form higher level representations that can be used for long term planning and strategy... That would be quite a feat.
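A minimal sketch of assembling the training pairs described above, assuming an invented layout for the frame timestamps and click log; each click gets joined to the last frame visible before it:

```python
def pair_clicks_with_frames(frame_times, clicks):
    """frame_times: sorted list of frame timestamps (seconds).
    clicks: list of (timestamp, x, y) tuples.
    Returns a list of (frame_index, x, y) training pairs."""
    pairs = []
    fi = 0
    for t, x, y in sorted(clicks):
        # advance to the last frame at or before the click time
        while fi + 1 < len(frame_times) and frame_times[fi + 1] <= t:
            fi += 1
        pairs.append((fi, x, y))
    return pairs

frames = [0.0, 1 / 24, 2 / 24, 3 / 24]        # BW renders ~24 fps
clicks = [(0.05, 100, 200), (0.11, 340, 80)]  # synced click log
dataset = pair_clicks_with_frames(frames, clicks)
```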
This has the immense advantage that there's a large corpus of games available to learn from as well. I don't think Battle.net saves all games ever (they're only stored locally on each player's computer), but there are still many thousands available online.
But man, if Battle.Net did save all of them, there'd likely be hundreds of thousands, if not millions, available. That'd be pretty spectacular as a training set!
It sounds like he has no idea about the machine learning basis of Alpha Go (and therefore presumably any future Starcraft AI) and thinks all this complex decision making has to be programmed in by hand (which would indeed be nigh-impossible).
The interesting parts aren't his views of AI (since he doesn't have much knowledge there) but rather his views on the complexities of the game. Because on that topic he is an expert.
The problem is, as ActionButton argues in this review of Diablo 2 [0], Starcraft is a game that may contain some amount of strategy, but wherein the element of strategy is strongly over-powered by a reflex-based combat system more akin to Street Fighter than anything else.
"But they don't; even moderately competitive players can beat the best bots."
That's because the programmers are bad at AI for that kind of game. It just means human beings don't yet understand how to make a good AI for a game like Starcraft; in other words, it's the humans who haven't figured out how to build one.
wherein the element of strategy is strongly over-powered by a reflex-based combat system
This isn't true, in general a bigger army will beat a smaller army every time.
After you can make equal size armies, the next problem is army positioning and when to take engagements.
Finally, after you can do all that, then you can get an extra bonus to your power by microing really well. But not every micro is about speed - most of the time it's about knowing what to do.
And even at the most extreme, it's less about clicking quickly and more about clicking precisely.
It's not always the case that smaller armies lose to big armies. There are many situations in Starcraft where superior micro lets fewer units take on more units. One example is phoenixes versus mutalisks. For those who don't play: phoenixes are faster and have slightly longer range than mutalisks, so phoenixes can shoot and run indefinitely and pick off the mutalisks.
I'm not sure this is true, even with infinite-APM micro, and I have a whole lot of experience microing phoenixes against mutas. Well, maybe after the range upgrade it is.
If anyone, Google has the image recognition and robotics tech to make a robot sitting in front of a screen, with keyboard and mouse ;)
Yes, anything where the interface to the game becomes critical is difficult to fairly compare with humans. Board games can use a human as an actuator, micro-heavy games hopefully won't for a long time...
No. Those would be the robotics departments using electric actuators that they are keeping; not the hydraulic robot department they are trying to get rid of.
Not really, since electromechanical engineering would then be the bottleneck.
If you want "fairness" you can limit the instructions to what keyboard and mouse can actually interpret as in the limited latency between instruction that both the game, the OS, and the hardware can support.
I don't think that's quite fair. Humans aren't anywhere near what the system can handle. The most "fair" thing I can think of would be to add latency to every mouse movement using Fitts's law.
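As a sketch of what that could look like (the a/b coefficients below are illustrative placeholders, not measured human values), each synthetic mouse movement gets delayed by the time Fitts's law predicts a human would need:

```python
import math

def fitts_delay(distance_px, target_width_px, a=0.05, b=0.12):
    """Shannon formulation of Fitts's law:
    MT = a + b * log2(D/W + 1), returned in seconds."""
    return a + b * math.log2(distance_px / target_width_px + 1)

# A long move to a small target costs far more than a short move to a
# big one, which is exactly the asymmetry raw bot input ignores:
slow = fitts_delay(distance_px=800, target_width_px=16)
fast = fitts_delay(distance_px=50, target_width_px=100)
```

The interesting consequence for a bot is that precision clicks (tiny targets far from the current cursor position) become expensive, just as they are for humans.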
It may also be interesting to model the state of the keyboard hand. Instead of allowing the computer to access keyboard shortcuts directly it could be forced to operate a virtual hand that is in turn operating a virtual keyboard. Constraints could be added to the speed of various finger movements. This provides balance but also captures authentic strategic tradeoffs. Like the resting position of the hand. I imagine if you had a nice visualization it could even reveal subtle optimizations that were practical for humans.
The game is capable of accepting a much higher APM than Korean pro players are capable of pulling off even at peak. IMHO the AI should be subject to an APM limit (e.g. sliding window).
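A sliding-window cap like that is simple to sketch; the limit and window values below are illustrative:

```python
from collections import deque

class ApmLimiter:
    """Allow an action only if fewer than `limit` actions happened in
    the previous `window` seconds (a sliding-window APM cap)."""
    def __init__(self, limit=300, window=60.0):
        self.limit = limit
        self.window = window
        self.times = deque()

    def try_act(self, now):
        # drop timestamps that have left the window
        while self.times and now - self.times[0] >= self.window:
            self.times.popleft()
        if len(self.times) < self.limit:
            self.times.append(now)
            return True
        return False  # over the cap: the action is rejected (or queued)

bot = ApmLimiter(limit=3, window=1.0)
results = [bot.try_act(t) for t in (0.0, 0.1, 0.2, 0.3, 1.05)]
```

The fourth action is rejected because three actions already happened in the last second; the fifth is allowed once the earliest one slides out of the window.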
Not if it's done over the normal channels (USB): many players, even non-"pro Korean" ones, can easily run into input issues like key rollover limits and mouse polling rates.
USB has quite a few limitations for HIDs, and a lot of gaming mice and keyboards use various hacks to overcome them.
Games and OSes also have limits; I can easily issue more commands with my keyboard than SC2 can handle, so not all key presses will actually register.
Not to mention that the game actually has to run on regular hardware: start spamming your keyboard in a game and you'll see the CPU spike. The game loop also puts quite a few limits on how many interrupts per second it can catch, and on how long it takes to actually switch context (e.g. SC2 build menus).
That said, after AlphaSC2 destroys another Korean national pastime, I would definitely love to see human-like input being used by AlphaSC2.
Not because I think humans deserve a fair fight, but because I think the most interesting part will be seeing how AlphaSC2 has to modify its play style to compensate for the limitations of having to use "meatbag" control methods.
IIRC the entire input polling/NKRO issue can be solved by simply switching from USB2.0 FS (aka USB1.1) to HS, which requires a more powerful keyboard/mouse controller.
I'm a little worried that the AI will quickly develop a simple early game strategy where it sacrifices economy for early units (an "all-in") that never loses because of its perfect unit control.
The AI's unit-control speed may have to be limited to get interesting results where it has to use strategy.
I'm not as optimistic as most seem to be. I think SC adds a completely new layer that none of the previously learned games had. It's not enough to play the game just by the rules implied by the pixels.
In chess, go, and even simple atari games, you have a clear way of reading the current state of the game in which the next move will be based on. In SC, this state space includes the predictions of your opponent. What will your opponent build next? What will they try to do with their army? There's an element of theory of mind here, where opponents try to project themselves and preempt each other's moves.
For an AI to truly play and win against a SC pro, using only human level interfaces, it would need to learn how human minds will be playing the game. Otherwise, I think we can expect a strong AI with unconventional tactics that SC pros will be able to fool with tricks and take advantage of because the AI lacks a meta-game capability. A more "general intelligence" would be needed to play at this level, not just a learning algorithm that learns from the pixels on the screen.
I think poker would also be a great challenge. It is a special game because, as well as the math, you also have to understand your opponent and adjust accordingly. It'd be interesting to see whether DeepMind can beat professional poker players who have different playing styles.
Wouldn't that depend on what you want to achieve? To maximize $/hr you need to understand your opponents' weaknesses and exploit them.
But if you just want to beat the best humans, and don't care about the margin with which you beat them, then playing a mathematically perfect game should be enough. And before you ask: a mathematically perfect game can incorporate occasional bluffs, and occasional calling of bluffs; this is known as a mixed strategy.
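For a concrete toy instance of that mixed strategy, here is the classic river-bluffing indifference calculation (a standard simplified poker result, far removed from real play): the bettor bluffs just often enough that calling and folding have equal value for the opponent.

```python
def equilibrium_bluff_freq(pot, bet):
    """Fraction of the bettor's bets that should be bluffs at
    indifference: the caller risks `bet` to win `pot + bet`, so
    q * (pot + bet) = (1 - q) * bet, giving q = bet / (pot + 2*bet)."""
    return bet / (pot + 2 * bet)

def caller_ev(pot, bet, bluff_freq):
    # EV of calling: win pot+bet against a bluff, lose bet to a value hand.
    return bluff_freq * (pot + bet) - (1 - bluff_freq) * bet

q = equilibrium_bluff_freq(pot=100, bet=100)   # pot-sized bet: bluff 1/3
ev = caller_ev(100, 100, q)                    # caller is indifferent
```

Because the opponent's EV of calling is zero at this frequency, they gain nothing by exploiting the bettor in either direction, which is exactly what "mathematically perfect" means here.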
One very real issue I'd imagine is that two players could cooperate out of ignorance, i.e. either player could do better by deceiving the other, but they are not good enough players to realize that.
It would also be interesting to see DeepMind challenge Dota 2 one day. How would they handle a 5-man team game, and adapting to major game updates?
I'm more of a LoL guy, but I'd be excited to see this in action. I think it's actually an easier challenge than Starcraft: the action space is much smaller, and dodging/aiming AI already gives some players extreme advantages (against the ToS, of course).
IMO the smaller action space and better-defined win conditions are outweighed by the greater playstyle diversity: SC has 3 races, while League and Dota have 100+ champions whose interactions you have to understand. How is an AI supposed to know that Tryndamere's ult has a 0.5-second cast time, but can't be interrupted by any CC?
Personally, I think (all else being equal) Dota might be an easier starting point than League because of its focus on item abilities, rather than champion abilities. Anybody can use a BKB in Dota, but in League you have Fiora W, Morg E, and a bunch of other things. You have a smaller set of abilities/effects to worry about.
The dodge/aim AI can be situationally good (Cass bots) but still, the best players do win against scripters. Scripters rarely make it into the elite echelons of the ladder; there's a situation right now with a former pro Korean player who started scripting but couldn't make it back into Master tier. You'd expect a former pro with very in-depth game understanding to play at the very tip top level with some 'aim assist' but it doesn't appear to be the case!
This is incorrect, as Starcraft has a much larger playstyle diversity than League. Every league game follows the same structure: with very high probability, one player will be top, one middle, and two bottom, with one jungler. Teams will farm with occasional gank attempts until the midgame, where there is a slow transition from farming to objective taking and teamfighting. This is relatively formulaic and easy to learn. In contrast, from the start of the game in Starcraft, players can choose from a wide variety of strategies, ranging from early rush strategies (with low- and high-tech variants) to harass-and-expand, greedy expand, and safe expand strategies. The AI must also construct a prior over its opponent's strategies that changes by map, as map properties change the viability of certain strategies.
Your example of script users not attaining high rankings in League online play is also misguided. Because the action space in league is so much smaller than in starcraft, it is much easier to develop an AI with game strategy competency for league. Combine that AI with perfect mechanics, and human players should fall very readily.
> Every league game follows the same structure: with very high probability, one player will be top, one middle, and two bottom, with one jungler.
The prevalence of lane swaps (2 go top, 1 goes bottom) in competitive is one major counterexample. In fact, these often turn into having 3 or 4 people in one lane at the start of the game, forcing a fast push strategy.
> Teams will farm with occasional gank attempts until the midgame, where there is a slow transition from farming to objective taking and teamfighting.
Stereotypical Chinese LoL matches are nearly all teamfights, for better or for worse. Regardless, the laning phase and midgame are where you should consider my point about champion diversity. There, mistakes can be more readily exploited mechanically (late game, mistakes are punished strategically): if a laner is far from Baron without TP, exploiting the 5v4 fight is a strategic decision, whereas missing certain skills early in certain matchups means a severely negative outcome if the opposing laner has the mechanics, e.g. no Flash and Q on cooldown as Lux means any Morgana can Flash-ult for a guaranteed kill. These have a bigger impact than racial considerations and probably make up for map considerations (idk).
The script-user example is one of mechanics: top players still win against those with pixel-perfect scripted mechanics. This is despite some scripters having 'game strategy competency' (that might be a real thing, but I interpret it as a player having real game knowledge).
IMO the biggest challenge to a human player playing against a team of 5 AI-controlled players is that of communication; perfect understanding of each others' intent is a far cry from the communication of even practiced teams.
I like this idea. To be fair, you need five computers to communicate with each other. It would be a huge mark for AI if they could actually learn to communicate effectively to achieve their goals. Varying the performance of each AI and allowing them to swear at each other could get very interesting. I wonder if AI can emulate the toxic side of gamers...
It would be interesting to see at what level the communication is done. Do you give them a predefined, limited, game-specific vocabulary and make do with that? Or can you have the networks "talk" in some random set of symbols whose meaning they have established while learning?
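The first option (a predefined, game-specific vocabulary) could be as simple as a small set of structured message types the agents broadcast to each other. This is a hypothetical sketch; the `Intent` values and `Message` fields are invented, and a learned protocol would instead exchange raw symbol vectors whose meaning emerges during training:

```python
from enum import Enum
from dataclasses import dataclass

class Intent(Enum):
    # A tiny, fixed, game-specific vocabulary (invented for illustration)
    GANK = "gank"
    RETREAT = "retreat"
    TAKE_OBJECTIVE = "take_objective"

@dataclass
class Message:
    sender: int      # agent id of the speaker
    intent: Intent   # one symbol from the fixed vocabulary
    target: tuple    # map coordinates the intent refers to

def broadcast(msg, team):
    """Deliver a structured message to every teammate except the sender."""
    return [(agent_id, msg) for agent_id in team if agent_id != msg.sender]

# Agent 1 calls a gank at map position (120, 300); the other four receive it
inbox = broadcast(Message(sender=1, intent=Intent.GANK, target=(120, 300)),
                  team=[1, 2, 3, 4, 5])
```

The trade-off: a fixed vocabulary is interpretable by humans watching the match, while an emergent symbol set may be more expressive but opaque.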
I think it's pretty simple, you just take the starcraft AI, reduce the number of units it can handle at a time, maybe simplify its economy logic, and then let it go. Easy.
The interactions between individual unit types in a game like DOTA are much more complicated than, say, the interactions between marines, zerglings, and zealots; there are many more unit types, too.
As someone else pointed out, the interactions between the units are like fighting games. Computers already have the advantage at fighting games (like Street Fighter).
Micro and unit control is where the AI has the advantage.
The complexity of the unit interactions also has a strategic layer (drafting, lane assignment, timing) which may be far more difficult to train and isn't just purely the micro aspect of the problem.
An AI with human-like abilities (so no infinite APM, for instance) would be great for finding balance problems in competitive video games. Just run the AI a certain number of times, and if some race or unit always wins, there's definitely a problem.
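In practice "always wins" would show up as a win rate far from 50% over many games. A rough sketch of that check, assuming such AIs existed; `play_match` is a stand-in for running a full game, and the 62% bias for the hypothetical Terran-vs-Zerg matchup is invented:

```python
import random

def play_match(matchup, rng):
    """Placeholder for a full AI-vs-AI game; returns True if side A wins.
    The per-matchup biases here are made up for illustration."""
    bias = {"terran_vs_zerg": 0.62, "protoss_vs_terran": 0.50}[matchup]
    return rng.random() < bias

def win_rate(matchup, n_games, seed=0):
    """Estimate side A's win rate over n_games simulated matches."""
    rng = random.Random(seed)
    wins = sum(play_match(matchup, rng) for _ in range(n_games))
    return wins / n_games

def looks_imbalanced(rate, n_games, threshold=3.0):
    """Crude check: is the observed rate more than `threshold` standard
    errors away from 0.5 under a fair-coin null hypothesis?"""
    se = (0.25 / n_games) ** 0.5   # std error of a proportion at p = 0.5
    return abs(rate - 0.5) > threshold * se

rate = win_rate("terran_vs_zerg", 1000)
# A true 62% bias over 1000 games should comfortably trip the check
```

With a human-constrained AI, a flagged matchup would point at a real balance problem rather than at the AI's mechanical advantage.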
StarCraft would indeed be interesting for one of these challenges; however, the peak of human skill in this game has already passed. Tournaments are dwindling, and many players have moved on to other games like StarCraft II or retired. The remaining players aren't at the level they were, say, 5 years ago, when there were two major televised starleagues.
Even if some AI does defeat Flash in 2017, would they have defeated Flash in 2010?
The mechanical skill required for this game means you simply can't come back to the game after a year and resume where you left off. Knowledge and understanding aren't enough, your fingers must be lightning.
I'm rather surprised there are no player-beating AIs for Starcraft. I don't think it should be very hard, as decisions need to be made very quickly in this game and humans are bound to make more than a few mistakes. But perhaps I'm underestimating the problem; it may have something to do with the limitations of the API.
- Resource management and planning: knowing when and where to expand, save resources for certain points in the tech tree, e.g. so you can immediately build a bunch of units once a crucial research or building is complete, ...
- Figuring out what the opponent is building and what he could be doing: where is he and what race is he playing, what tech buildings are there, when does he start to harvest gas? Key tech buildings may be hidden in other places on the map; even not being attacked with certain units at a certain time may be a clue to some strategy
- Reading and using the terrain properly: High ground has an advantage, Terran can wall off certain entrances with a depot and a barracks, where would be good places for drops (and with what units), figuring out where the enemy could come from and which places to defend – all those vary by map quite a bit
- Finding a good strategy to counter what the opponent is doing: This ties into a few of the things above, because first it must be known what the enemy is doing. Keeping the strategy flexible enough to counter the enemy's counters, etc.
Some of these must be solved in real time (unit movement); others can run in the background and don't need to be frame-exact (strategy, etc.), which helps a bit. But there are still quite a few parts to playing the game, all of which are interconnected and require information from other parts. Many currently successful bots are rush bots, which are easy to create and exploit the fact that most bots are weak in the beginning. Others rely on a number of fixed strategies and build orders that are chosen based on what the enemy is observed to do.
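That last approach, fixed build orders selected by scouting, can be sketched as a simple lookup. The build names and counter table below are invented for illustration, not taken from any actual bot:

```python
COUNTERS = {
    # observed enemy opening -> our prepared response (all names invented)
    "early_rush":    "defensive_wall",   # e.g. wall off with depot + barracks
    "greedy_expand": "timing_attack",    # punish the greed before it pays off
    "tech_rush":     "harass_expand",    # delay their tech while growing
}

DEFAULT_BUILD = "safe_expand"  # fallback when scouting is inconclusive

def choose_build(scout_report):
    """Map a (possibly missing) scouting observation to a fixed build order."""
    if scout_report is None:   # scout died or found nothing useful
        return DEFAULT_BUILD
    return COUNTERS.get(scout_report, DEFAULT_BUILD)
```

The weakness of this scheme is exactly what the comment above implies: the bot can only respond with builds its author anticipated, so an unscouted or novel strategy falls through to the default.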
I want to see StarCraft next and then League of Legends/DOTA. The control advantage an AI has in StarCraft is immense. It exists in League of Legends, especially when perfectly coordinating 5 different characters, but it's less impactful.
The control required to master SC2 is far beyond any MOBA style game. AI for MOBA games seems trivial compared to AI for a large scale RTS. It's the difference between controlling 1 unit vs 100 units.
I don't think I agree with that at all. If you can control 1 unit it's trivial to control 100.
I might argue that the AI vs AI challenge in Starcraft is deeper and more difficult to master. But for AI vs Player the Starcraft AI has an enormous mechanical advantage. Being able to harass mineral lines in four different places is a big deal. Being able to perfectly micro multiple lines of attack/defense is a big deal. It's very unfair and very much in the AI's favor.
The developer of the AI in this video even states that this AI applies 300 APM per unit; that's 30,000 actions per minute. There are simply not enough decisions to make in League of Legends to make use of that power.
IMHO it's not artificial intelligence, it's just intelligent scripting. A real AI, like a human, shouldn't be programmed for something specific but should adapt itself to what it encounters.
This is an amazing project. Thank you for that post. I would love to find some teams working on this who are interested in adding junior level engineers.