XCOMUFO & Xenocide

PRG-Ai Resources


Maverick


I read a bunch about AI this evening and wrote down some resources. Search Google for these terms for general AI resources:

 

neural network AI

genetic algorithms

expert systems

case-based reasoning

finite-state machines

production systems

decision trees

search methods

planning systems and scheduling systems

first order logic

situation calculus

multi-agent system

artificial life (A-life; what's in the Sims)

flocking

robotics

fuzzy logic

belief networks (Bayesian inference)

A* algorithm

 

For specific websites:

www.gameai.com

www.ai-depot.com

www.generation5.org

www.citeseer.com

www.gamedev.net

www.ai-junkie.com

 

And a SourceForge project dealing with AI:

www.ffll.sourceforge.net

 

This is a paraphrase of a quote from the reading:

"Modern games bring in a programmer specifically for AI. Even if they haven't had any experience with AI they can produce a believable, working AI using known systems."

 

I think this should be our goal as of right now. I have ideas relating to AI that could make it potentially very powerful, but it would require a lot of processor time (I don't know how much is available for AI, and I'm also going to run it by the professor to see what he thinks about the feasibility). Basically, with my design the AI would be different between missions, could do completely unexpected things, make mistakes, act like God, wipe you out in a turn, stand paralyzed in fear, set traps, and counter player-specific strategies (meaning if you came up with a strategy you've never tried before, it might only work once -- it might not work at all). The problem is that this could potentially be bulkier code than chess AI, would require lots of testing/debugging, and could fail after lots of work (as far as I know this approach has never been tried before). The benefits are that you would have an opponent as realistic as AI can get, and the AI would be like a digital MacGyver (actually able to do things it was never scripted to do by a programmer). More importantly, if you played multiple times in a row it could be completely different between games -- even starting two games and playing them at once could produce entirely different AIs. They aren't even reliant on generated algorithms... like I said, it's never been done before. It depends on how ambitious we want to be with this (in this case I would be glad to learn to code JUST to test this idea).

 

The simple version of the idea is this: the goal of AI is to emulate human actions. We can't predict how EVERY possible player will play, so we can't hard-code it -- but by analyzing battle data the AI could adapt to emulate the human player it is playing against, and in fact use your own strategies against you. Your tactics would have to shift drastically and frequently (removing your ability to function as a team) in order to beat the AI. Best part: the AI doesn't even have to cheat.
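A toy sketch of that adaptation idea (the situation encoding, action names, and default tactic are all hypothetical; Python is used purely for illustration): record which action the human took in each observed situation, then replay the most frequent one when the AI meets the same situation.

```python
from collections import Counter, defaultdict

class TacticMimic:
    """Record which action the human took in each observed situation,
    then replay the most frequent one. Situations are any hashable
    summary of battle state; all names here are invented examples."""

    def __init__(self):
        self.observed = defaultdict(Counter)

    def observe(self, situation, action):
        # called whenever the aliens witness a player action
        self.observed[situation][action] += 1

    def choose(self, situation, default="patrol"):
        counts = self.observed[situation]
        if not counts:
            return default  # no data yet: fall back to a preset tactic
        return counts.most_common(1)[0][0]
```

The more battles it watches, the more its choices resemble the player's own habits -- which is exactly the "use your own strategies against you" effect described above.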


I bet you want to use a Bayesian or neural networks approach for that... I had thought about it too...

 

The initial steps would be, given knowledge of the topography, to mark spots as ambush places, sniping posts, safe zones, dangerous zones, possible landing zones, etc., just to give the agent a hint about how it can use the topography to its advantage (that doesn't mean it has to know the complete topography). After that, the agent should have several well-known conduct patterns like take cover, prepare an ambush, etc. With a Bayesian approach, morale can introduce a random element distorting the real values of the functions. Those functions use a learn-by-case approach... Suppose you want the agent to learn to take cover: you give it no weapon and face it against a soldier that shoots from a static position, on a clear map with some object in the middle. The learning function could be how many rounds it goes without getting shot. That's just an example, but it's a quite powerful approach...
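That learn-by-case reward could be sketched like this (a minimal stand-in: the action names, reward scale, and running-average update are assumptions for illustration, not project code):

```python
class CoverLearner:
    """Minimal learn-by-case sketch: the agent tries actions ('stand'
    vs 'cover'), the reward is how many rounds it survived without
    being hit, and a running average steers it toward the better one."""

    def __init__(self, actions=("stand", "cover")):
        self.value = {a: 0.0 for a in actions}
        self.tries = {a: 0 for a in actions}

    def update(self, action, rounds_survived):
        # incremental running average of the observed reward
        self.tries[action] += 1
        self.value[action] += (rounds_survived - self.value[action]) / self.tries[action]

    def best(self):
        return max(self.value, key=self.value.get)
```

After a handful of training bouts against the static shooter, `best()` converges on the action that kept the agent alive longest.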

 

For neural networks, it's more of the same...

 

Greetings

Red Knight


Neural nets sound cool, but I think they are too easy to fool. The reason my idea hasn't really been done is that you start with a semi-neural net, but depending on the human player the AI becomes completely different.

 

You have to remember that I don't understand much code, so everything I have is from my conceptual understanding of the way these things work.

 

I figured that aliens would be capable of "sensory" limits, which is the distance from the alien at which it can see or hear (or perhaps "feel", in the case of aliens that track by psionics). At game start (I mean when I click New Game, pick my base, etc.) the AI has only certain pre-programmed tactics it can employ, but it watches what the player does. So let's say I put my whole team into 3-man groups and move in a broad semicircle to surround the UFO. What if, within the next few battles, the aliens themselves tried that very tactic -- leaving it up to you, the human player who IS capable of truly improvising, to defeat your own strategy? And by doing so, you tell the AI how to respond if it sees a pattern that suggests you are using the same tactic.

 

I think that between the neural nets and the terrain thing you could pull it off; but I'd like to learn a bit more about these AIs (Barnes and Noble only had two books on the subject and I finished those already), so I'll probably talk to the teacher about it.

 

As a side note: X-COM 1 had cheating AI, but it had to in order for the game to present any challenge. The biggest proof it cheated was sectoid leaders mind-controlling a rookie at the back of the Skyranger on their first turn... second biggest proof: they knew when you were behind them. I don't think our AI should have to cheat.


Would it be more accurate to say the computer knows what you tell it, and that you tell it only what's relevant at the time? Having a sensory range would be nice, as I think it quite different for the alien to know you're RIGHT behind it due to footsteps, compared to 30 yards away and not moving. Only if you move into its field of view or range of hearing could it react. The same should happen with the soldiers as well, with a type of 'fog of war' based on current field of view. Then you'd always have a rear guard watching your back for flankers. But we would never deploy in such a way as to allow being flanked, would we? :D

Guest stewart

It's a trade-off: more accurate AI vs. plain old cheating. In terms of play, the end result may be similar anyway. We can take the time to make the computer play honestly, but then the AI must be very good -- or we have the computer cheat a little and save some time on the AI. If we can have the computer play in an honest way, great, but how long will it take to implement? Is a smart AI a v1.0 thing?

 

Just something to think about, don't get all huffy and puffy about it.


Is a smart AI a v1.0 thing?

That depends on what you call "smart AI". If there is one thing studying AI for the final has taught me, it is that "the computer is as dumb as the programmers who program it"...

 

Greetings

Red Knight


Pretty much....we just need to set a monkey loose on the AI and see what he comes up with....

 

but until we have a monkey available (and for those who missed it in another thread), this is Dr. Meyers' website. Take a look. He's one of Rookie's teachers and I'm hopefully going to see him with Rookie next week just to chat about AI for a while. Like I've said before, this AI stuff really interests me -- I might learn to code just so I can help with this.

 

http://www.csc.calpoly.edu/~lmyers/


Guest stewart
Pretty much....we just need to set a monkey loose on the AI and see what he comes up with....

 

but until we have a monkey available (and for those who missed it in another thread) this is Dr. Meyers website.  Take a look.  He's one of Rookie's teachers and I'm hopefully gonna see him with Rookie next week just to chat about AI for a while.  like i've said before, this AI stuff really interests me -- i might learn to code just so i can help with this.

 

http://www.csc.calpoly.edu/~lmyers/

You could probably write pseudo-code anyway. Do you know scripting, for example? Sure, it's inherently procedural whereas Xenocide will be OO, but you would still be able to get your ideas across.


I've written AI before. Specifically, a back-propagating Neural Net which was designed to 'learn' how to extend the life of plastic injection mold dies. It worked relatively well but only because we had a clear understanding of how to 'train' the neural net and give it a decent reward / punishment system.

 

Having the AI mimic the human is nearly impossible for a few simple reasons: 1) the net may be able to see what the human is doing, but it has no idea WHY he's doing it. I.e., it sees the human join his squaddies up in groups of 3 (and even explaining to the net what is happening would be an extremely difficult programming task). The net has no tactical concept of WHY this is a good thing to do. THAT is the difficult part of training a net. 2) Nets do not learn that fast. It takes HUNDREDS of iterations before a net starts to get near the optimal decision curve, and you often have to wipe it and start again if it picks up bad habits.

 

Simply put -- you CAN and probably SHOULD think about using a neural net, or a genetic algorithm coupled with a neural net, BUT you will need to train that net with a special set of tools BEFORE you release the code. Whether you let the net continue to learn or freeze it after release is up to you guys. Be warned, however: nets that continue to learn after they are released into the wild usually pick up bad habits from the players that play them, which actually DECREASES performance instead of INCREASING it.

 

Not trying to rain on your parade or anything -- just trying to make sure you realize a neural net is not something you can get out of a Cracker Jack box and expect to work.


I understand that finding the right training cases is the hard part, but how difficult do you think it would be for certain patterns like hiding behind objects, etc.?

 

Given that you have worked with them first-hand... how difficult is it to program a topography-recognition algorithm using them? What I mean is: flag certain map places as good for certain tasks, like hiding... I haven't worked on AI (even though I have the final on Friday); I like graphics and I HATE THEORETICAL AI (which is what you learn at most universities, at least here)...

 

Be warned, however, Nets that continue to learn after they are released into the wild usually pick up bad habits from the players that play them which actually DECREASE performance instead of INCREASE it.
I hadn't even thought about keeping it learning after release, but thanks -- one less mistake to make.

 

Greetings

Red Knight


Topography is really not the issue. Topography can be done with either A* or Dijkstra's algorithm. Dijkstra's is better in that it allows you to 'weigh' certain nodes to be more favorable, and thus the AI will automatically favor those nodes.
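A sketch of what that weighted-node pathfinding looks like, assuming a `cost()` function that prices each tile (expensive for exposed ground, cheap for cover) -- not project code, just an illustration of why Dijkstra's automatically favors the cheap nodes:

```python
import heapq

def dijkstra(neighbors, cost, start, goal):
    """Dijkstra with weighted nodes: cost(node) is the price of
    stepping onto a tile, so 'bad' tiles can be made expensive and
    paths will route around them. neighbors(node) yields adjacent
    tiles; both callbacks are assumptions for this sketch."""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt in neighbors(node):
            nd = d + cost(nxt)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    # walk the predecessor chain back to the start
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]
```

Making one tile cost 10 while the rest cost 1 is enough to see the path bend around it.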

 

BUT -- I have to warn you! Topography is really only a drop in the AI bucket. There's tactics, group dynamics, ammo usage, lines of fire, visibility, expected movement ability of the enemy, communicating enemy positions to other teammates, finding weaknesses in the enemy group's tactics, pinning down units with suppression fire, sniping, wounding an enemy unit and letting the medic come up (so he can be shot too)... in other words, hundreds of small things you need to 'train' your AI to do. None of these are easy.

 

Some of them can be alleviated by allowing the AI to 'cheat' and see the uncovered map, but that won't solve all of your problems.

 

Not trying to burst your bubble - just trying to put things into perspective. ;)


Thank you very much for everything you've said -- I've been thinking about most of those issues and don't really know how they will relate at the implementation level. I understand these things on a concept basis and not a code basis; so while I can visualize what I want it to do, I have a hard time seeing the limitations. Maybe I'll just have to put in a LOT of work on this and push the barrier. ;-) I also knew that nets have to see many, many repetitions before they learn. In the specific example I read, they had to wipe the system because instead of learning to identify people it learned to distinguish light from dark. Talk about a headache... I felt bad for them just reading about it. But anyway, I'm really interested in any more insight you can offer. Thanks for contributing!

Guest stewart

You could NOT designate areas of the map simply as good or bad for hiding, since the direction to the X-COM soldiers matters.

 

Couldn't we start with some gigantic "if" tree or something, until someone who knows what they're doing comes along (assuming that hasn't happened already)?


Here's how I suggest it's done.

 

1) Build a working model of the Battlescape. It's important that ALL of the features that go into the game be implemented in this model, because this is where the AI is going to figure out how to play. Graphics are a non-issue; the AI couldn't care less. BUT most of the weapons should be implemented, and the EFFECTS of those weapons should be close to what the release version will do.

 

2) Build an input layer on the net that can see all of the stats for its own team and that is big enough to have at least one free input node per enemy. In other words, if the biggest alien team size is 20 and the biggest X-COM team is 14, the input grid should be at least 34 by 34. Some people split their input grids into two discrete grids (one for the good guys, one for the aliens). I have never seen a significant improvement from doing this, but it's up to the developer.

 

3) Give the net as many intermediate levels as you want. The more you give it, the more memory and computer time the net will eat per move, but the more room it will have to 'learn'.

 

4) Define an output grid that has all of the moves possible for each teammate. So if squatting, going prone, auto-fire, priming/throwing a grenade, moving, etc. were all the moves, one side of the output grid would be that size; the other would be 20 deep (1 for each member of the team).

 

5) You then write a routine that scans the output grid from the highest-priority move down to the lowest. The output grid is 'weighted' by the net toward the move it thinks is most important (the one with the highest value). Then, if the alien unit still has TUs, move on to the next highest item in the grid, and so on.

 

NOTE: One of the net's input layers will need to be as big as the map, with barriers delineated as well as friendlies, enemies and hazards. This is necessary so the net can 'see' what's going on and act accordingly.

 

ALSO NOTE: An important input layer should be the aliens' and enemies' 'obvious' stats -- i.e., what weapon(s) are visible, what kind of armor they have, whether they appear to be wounded/stunned, etc. This will allow the net to determine its actions according to what it sees. E.g., a Psilon will run and take cover from a guy in heavy armor and switch his weapon from a blaster pistol to a grenade.

 

That is the rough outline of a successful net. The last step is the ability to watch everything the net does and give it a value from -10 to 10. This will 'back-propagate' values into the net and train it to do what you want. You should always take your best 3 tactical players and have them rotate teaching the net. Eventually it should be able to readily beat all 3 players, or at least hold its own. Training the net is important, and it's REALLY important not to exaggerate your values. For example, if the net moves to an area you don't think it should have moved to, don't give it a -10 score; maybe give it a -2 or -3. If it manages to blast one of your guys, don't reward it for that -- reward it for making a decent maneuver. In other words, did the Psilon hop out of cover, use every TU it had to blast the guy, and is now standing completely exposed to the rest of the squad? That action should be lightly punished, or perhaps lightly rewarded because it managed to geek a squaddie.
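Step 5 of the outline (scan the output grid while TUs last) might be sketched like this; the move names, net scores and TU costs are made up for illustration:

```python
def plan_moves(output_grid, tu_left, tu_cost):
    """Scan one unit's slice of the output grid from the move the net
    weighted highest down to the lowest, committing moves while the
    unit still has TUs. output_grid maps move name -> net score and
    tu_cost maps move name -> TU price (both hypothetical here)."""
    plan = []
    for move, score in sorted(output_grid.items(), key=lambda kv: -kv[1]):
        cost = tu_cost[move]
        if cost <= tu_left:
            plan.append(move)
            tu_left -= cost
    return plan
```

So a unit with 16 TUs whose net scores auto-fire highest spends on that first, then fits in whatever cheaper moves remain.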

 

Again, just my 2 cents..... :blink:


That's not a bad idea, Stewy. Maybe there's a way to train the AI system to learn from our plays. Basically, two players go at each other and the AI records what all the moves are. Like Marki said, the graphics don't matter, just that the game environment is correct and the weapon data, buildings and such are defined. That way, whoever is playtesting will have all their moves recorded and possibly used in-game. Instead of forcing the AI to learn why, just show it what it should do. The amount of data collected from playtesting (something I'm sure we won't have any shortage of volunteers for) could be the basis of the decisions made by the AI.

 

I've got no idea whether this would be feasible, but seeing how bots in Quake 2 learnt new maps by running round them, I thought we might be able to do something similar?


Stewart - you're most likely correct. :)

 

Quake bots have way-points embedded into the map. They use Dijkstra's algorithm to run through it. Unreal bots use the same idea. If you've ever built a level and accidentally forgotten to include the way-point markers, you get a really good picture of how dumb the bots really are. :crazy:

 

As far as the multi-player battlescape being used -- I think that's a good idea. But instead of the AI watching the other player, it would be more like the AI sitting in AS another player and then having someone criticize its movements to give it feedback. Since the multi-player battlescape is being planned anyway, it sounds like the best way to make the training ground.

 

Also - a neural net may not be the fastest way to get your results. You could always fall back to a rule-based state machine which is what most game AI uses. It's not pretty, and it's not always the brightest, but it's relatively easy to program.

 

Has anyone started the battlescape yet? I was thinking of using something like The Sims' wall/ceiling/object placement editor to build landscapes. You could throw together a simple mesh file of a wall segment and have the editor patch them together and throw the textures on. I'm really not sure that something like a Quake editor would be appropriate for the battlescape files. I guess you could always use Maya, etc., but it seems like a small, simple editor would be the best way all around.


I don't know if we were planning on pre-existing maps or doing map tiles and putting them together in a random map generator. I would prefer to see random maps -- but with the AI, etc., that just might not be feasible. As far as I know nothing has been started on the battlescape. Rule-based is what we had originally intended to do, and I agree it is much easier to program; I may have just gotten ahead of myself with the net thing and been a little too ambitious. We'll see how things go.

Quake bots have way-points embedded into the map.  They use Djikstra's algorithmn to run through it.  Unreal bots use the same idea.  If you've ever built a level and accidently forgotten to include the way-point markers, you get a really good picture of how dumb the bots really are.  :crazy:
That's exactly why I asked you how difficult it is to program a net that can recognize topographic features from a map (in this case probably a randomly generated height field, with objects in it)...

 

 

Has anyone started the battlescape yet?  I was thinking of using something like the SIMS wall / ceiling / object placement editor to build landscapes.  You could throw together a simple mesh file of a wall segment and have the editor patch them together and throw the textures on them.  I'm really not sure that something like a quake editor would be appropriate for the battlescape files.  I guess you could always use Maya, etc but it seems like a small and simple editor would be the best way all around.
In X-COM there are no pregenerated maps; they are procedurally generated... and I think everybody here wants to keep it that way. About the battlescape, no work has been done YET... so if you think you can help with that, go for it. If you want, I can send you a height-field implementation (no graphics, only the height field; I still have to do the graphic interpretation) using Perlin noise, and you can start working from it. What do you think?

 

Greetings

Red Knight


Guest stewart

Mark(v or y)j, I would not be surprised if we use a state machine (or, as I said earlier, a gigantic "if" tree), at least to start.

 

I think the Battlescape AI should be a separate library/DLL, and we should set things up so that we are prepared for another separate Geoscape AI (when we are ready for it). Then in the multiplayer Battlescape, the AI is just another person as far as the game is concerned.

 

You know, if we do this right we can make the AI play both sides while we watch. Just a thought.


A Perlin Noise generator for the height map would be fine for outdoor maps (or undersea), but you'd need more than that to get a playable field. You'd need to be able to flatten sections and drop buildings on it and still make it somewhat believable to someone looking at the map. Plus, a Perlin Noise generator won't put in hazards / obstacles / cover like bushes and trees (or lamp posts, etc.)

 

That being said, here's some sample Perlin Noise code I dug up.

 

protected float Noise1(int x, int y)
{
    // hash the grid coordinates into a pseudo-random float in [-1, 1]
    int n = x + y * 57;
    n = (n << 13) ^ n;
    return (1.0f - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0f);
}


Stewart -

 

Hey, it's MARK-Y-J. I gave myself a humorous nickname to remind myself never to take myself too seriously. :)

 

And yes -- if the AI is done right, it could take either the alien or the X-COM side and play both extremely well. People watching should be able to pick up some new tricks... or they may watch and laugh as the AI flops around like a chicken with its head cut off (if the AI sucks).

 

Hey -- side note: in multi-player mode, if team A gets to the event within a minute of team B, can they gang up (or compete) on the same map? It would be really cool to have more than one X-COM player on the same map. You'd never know if they were going to help you or shoot you in the back!


Guest stewart

Multiple teams fall in the realm of Geoscape multiplayer, discussion of which has been tabled.

 

As for believable terrain: what if the random terrain generator didn't generate the ENTIRE map but pieces, which are then sewn together? It would really be like the Battlescape is now, except that the map isn't built from premade parts but from parts made right then and there. You'd get your flat sections that way, and more varied terrain, since you could change the generator parameters for each section. BTW, we could still throw in premade sections as well; it'd just be one more "if" statement, after all.


Yeah, that's doable. You have pre-made map sections with XML descriptor tags defining where each one is appropriate. Then, when building the map, you assemble it from acceptable pieces in a random fashion.

 

You'd have to have descriptors like which side can face a street, etc. But that's all doable.
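A minimal sketch of that assembly scheme, with Python sets standing in for the XML descriptor tags (piece names, tags, and the slot layout are all invented examples):

```python
import random

def assemble_map(pieces, layout, seed=0):
    """Assemble a map from tagged premade pieces: each piece carries
    descriptor tags saying where it is appropriate, each slot in the
    layout names the tag it accepts, and an acceptable piece is drawn
    at random. A stand-in for the XML-descriptor idea above."""
    rng = random.Random(seed)  # seeded so map generation is repeatable
    grid = []
    for row in layout:
        grid.append([
            rng.choice([p for p, tags in pieces.items() if slot in tags])
            for slot in row
        ])
    return grid
```

Real descriptors would also constrain adjacency (e.g. which side can face a street), which would turn the per-slot choice into a constrained search, but the random-draw core is the same.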


A Perlin Noise generator for the height map would be fine for outdoor maps (or undersea), but you'd need more than that to get a playable field.  You'd need to be able to flatten sections and drop buildings on it and still make it somewhat believable to someone looking at the map.  Plus, a Perlin Noise generator won't put in hazards / obstacles / cover like bushes and trees (or lamp posts, etc.)

Of course not. First you generate the map geometry (height field)... then you randomly position all the objects, flattening the height field at each position (you can use whatever method suits your needs, say L-systems, etc.). For terror sites you can start with an undisturbed height field and then put the roads and houses all over the place in the best way you can think of...

 

Premade patches are not a problem with that approach for a start, and it leaves room for future additions.

 

Greetings

Red Knight


  • 1 month later...
Now that I have joined the group, I want to volunteer my services helping with the AI. I obviously do not have the theoretical background of Markyj, since I have been busy writing operating systems for the last 25 years. However, I have developed game AIs in the past. I think Micah can vouch for the effectiveness of my AI in the game ISC. Imperial Space Command (I didn't name it) is a 2-8 player, play-by-email 4X game of space conquest that was published in 1986 and can be downloaded (with source code) from www.the-underdogs.org. The game was written in Structured BASIC (my own dialect) for DOS 2.0, and will not run on WinME. However, it runs fine on OS/2 and Win2K. The AI is probably best described as an expert system that mimics my style of play. I published a series of articles describing some of its terrain-handling algorithms in BYTE Magazine in 1979.

How does it work; a big humongous "if" tree or something?

The ISC AI attempts to make moves by going through a series of phases, based on the priority of targets. It continues to allocate units to tasks until it runs out of resources. For each kind of mission, it looks at what it can build, then runs a quick simulation to see if it will probably succeed. If not, it goes to the mission with the next highest priority. I believe I wrote a document describing it. I'll try to find it and post it here.
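That priority-phase loop might look roughly like this (the mission names, the single-number resource model, and the simulate() signature are all hypothetical stand-ins, not the actual ISC code):

```python
def allocate(missions, resources, simulate):
    """ISC-style allocation sketch: walk missions in priority order,
    run a quick simulation for each, and commit resources only to
    missions the simulation predicts will succeed.
    simulate(mission, resources) -> (will_succeed, cost)."""
    plan = []
    for mission in sorted(missions, key=lambda m: -m["priority"]):
        ok, cost = simulate(mission, resources)
        if ok and cost <= resources:
            plan.append(mission["name"])
            resources -= cost  # units committed to this task
    return plan, resources
```

The loop naturally "continues to allocate units to tasks until it runs out of resources," skipping any mission the quick simulation says would probably fail.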


  • 1 year later...

I was wondering if this topic is still "hot" and if the participants are still around?

 

Maybe there has been some progress on that issue already?

 

I'm asking because I strongly believe that a basic AI should be implemented as a genetic algorithm. So if there was progress my idea may already be obsolete - if there was none I'd like to put it in for discussion.

 

The pro for GA would be:

- reasonably fast (much faster than brute force but slower than rules)

- gives pretty logical results if done properly (the chain of actions will be pretty smart)

- allows nice scaling of the AI (allow more generations to be calculated or more individuals to exist and the solution will be better)

 

cons would be:

- quite some thought would go into coding the actions

- it won't adapt so the coding/decoding is critical to the success of the GA

- it might result in interface changes or increase the number of interfaces (decoding might need a few calculations like "will bullet hit?" and "did I just walk through that wall?")

 

So... I just thought I'd ask before placing an idea nobody needs. :D


Hi, I know nothing about AI except that it seems to cheat in some games, and that it will be the core of making the game tactically interesting.

OK, so here's my 2 cents' worth.

If you programmer-type people ever invent a learning AI, then what would be so bad about taking the learnt information from the public beta test and incorporating that into the game, so that the computer has some idea of how actual people play?

P.S. Can the AI please NOT cheat in v1.0?


Having read through the forums and talking to some people I see that it is planned to give the AI the same interface as the user. So it should not know more than the user knows.

Really?!

That will make it very hard to make a challenging AI. It has to have some advantage, because it doesn't have common sense or anything like that.


Having the same interface does not necessarily mean that it has the same amount of data; e.g., it might get a visible radius 50% larger than the user's.

What I meant with my last post was that it does not get extra kinds of information.

 

Or it might be allowed to ask the server for a few calculations. So it would simulate a few (or a horde) of possible actions to see the outcome.

 

Giving the AI "supervision" will not make it smarter -- only more knowledgeable.

 

The problem is that the AI would have to go in at least two levels:

 

a strategic level would have to coordinate the efforts of the group and

 

a tactical level would control the individual aliens.

 

-----------------------------------------------------------------

 

I myself would suggest a genetic algorithm, as a neural network is so hard to train that it would take a whole lot of discipline and hours upon hours to do so.

 

The GA would simply be a bunch of calculations.

 

The difficulty with the GA are mostly the coding/decoding (which has to be done "on paper") and the amount of calculations. The more calculations the better the actions might be, but the longer it would take.


Having read even more, I have come to realize that many would like to see certain tactics implemented.

 

If you go with the two level design it should be possible to incorporate a number of tactics by giving the AI a number of preset actions as starting-solutions for the GA.

 

The problem that still exists, in my eyes, is the translation of "they must go a few steps ahead to spot the enemy, and then the other troops can go and kick them". For the AI, all these various tactics would have to be a sequence of well-defined actions.

 

---------------------------------------------------------

Like:

Strategic Level:

Have {1,3} soldier(s) of high TU and reaction go in direction A

Have {2,4} soldiers of high accuracy and damage go in direction A but invest less TUs

 

Tactical Level:

Soldier 1: go 5 steps in direction A, kneel down

.

.

.

Soldier 5: go as many steps in direction A as you can, keep enough TU for auto-fire

 

---------------------------------------------------------

 

If the tactics could be translated in this way, we could feed the GA with them and it would (more or less) randomly alter them, combine tactics, and thus be relatively clever.

 

The strategic GA would have to run every two or three turns to determine a new/better tactic, while the tactical GA would run every turn.

 

The strategic GA would also have to set the fitness function for the tactical one. If this proves to be too difficult, the strategic GA could be replaced by a switch-case statement that chooses one tactic with a well-defined fitness function.
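A minimal GA over tactic sequences, along the lines described above (the action names, fitness function, population size, and mutation rate are all toy assumptions):

```python
import random

def evolve(seed_tactics, fitness, generations=30, pop=20, rng_seed=0):
    """Tiny GA sketch: seed the population with the preset tactics,
    then repeatedly keep the fittest half and breed the rest by
    one-point crossover plus occasional mutation."""
    rng = random.Random(rng_seed)
    actions = sorted({a for t in seed_tactics for a in t})
    popn = [list(t) for t in seed_tactics]
    # pad the population with random tactic sequences
    while len(popn) < pop:
        popn.append([rng.choice(actions) for _ in range(len(seed_tactics[0]))])
    for _ in range(generations):
        popn.sort(key=fitness, reverse=True)
        popn = popn[: pop // 2]          # survivors (elitist: best is kept)
        while len(popn) < pop:
            a, b = rng.sample(popn[: pop // 2], 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]    # one-point crossover
            if rng.random() < 0.2:       # occasional mutation
                child[rng.randrange(len(child))] = rng.choice(actions)
            popn.append(child)
    return max(popn, key=fitness)
```

Because the best individual always survives into the next generation, the result is never worse than the best preset tactic it started from.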

 

--------------------------------------------------------

 

In the end it would be essential to have the AI as a modular thing, so a number of implementations can be tested...


After reading through, I got a simple idea.

If you use a neural net, don't use it to determine the move of every alien -- that would be a waste -- but use it for global tactics.

It has the input nodes, so it can decide where to advance and where to retreat to safe zones, which enemy units pose the bigger threat, and which enemy units can be attacked most safely.

Then, once you know what to do, decide which units should do what -- retreat or attack --

and then let the units themselves decide where to do it and how.

So basically you have one commander NN that gives orders like "Unit 1 retreat, Unit 2 retreat, Unit 3 attack Enemy 1, Unit 4 attack Enemy 2", etc. -- you get the point --

then the units themselves look for the best way to hide or attack, and do so.

This makes the NN simpler to work with and simpler to train, since it doesn't have to do that much.

 

Just a global idea; it needs to be refined though.

 

And the really good news is that I am finally able to get my development PC online, so hopefully I can start working and looking through the code.


Hi!

 

I've been thinking about AI again <_< .

This time I thought about the mission of the planetscape AI and this is what I came up with:

  • harvest livestock and people
  • bring terror to the world
  • kill X-CORPS

These are their basic goals in my eyes, right?

 

Thus we find what they'd do all day:

  • find remote places and targets --> patrol mission
  • attack remote places or targets --> attack/terror missions
  • build bases in remote places --> transport missions

I suggest that the Planetscape AI be pretty dumb -- not more than a thread that is sleeping 99% of the time.

 

In my opinion the whole point of the Planetscape AI is to fly around and create missions for the player - it does not have to be clever to do this.

 

So I'd do an endless loop which sleeps and sometimes decides to do one of the three missions described above. For every successful mission I'd give the AI some points.

These points would decide whether the AI does another mission when it wakes from its sleep or not. Thereby we can slow it down so it doesn't put too much pressure on X-CORPS.

 

I suggest points for Patrol Missions (1), spotting an X-CORPS-Base (20), harvesting missions (5), terror missions (10), killing civilians (2) and killing X-CORPS-personnel (5).

 

Each month it gets a new limit. When the limit is reached it will cease any activity.
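The sleeping-loop-with-points idea could be sketched like this; only three of the mission types are modeled, with the point values from the post above, and the wake/sleep cycle is reduced to a plain loop for illustration:

```python
import random

# point values from the post (a subset of the full list)
MISSION_POINTS = {"patrol": 1, "harvest": 5, "terror": 10}

def run_month(limit, rng_seed=0):
    """Dumb Planetscape loop sketch: each time the AI 'wakes up' it
    picks one of the mission types at random, earns that mission's
    points, and ceases activity once the monthly limit is reached."""
    rng = random.Random(rng_seed)
    points, log = 0, []
    while points < limit:
        mission = rng.choice(sorted(MISSION_POINTS))
        points += MISSION_POINTS[mission]
        log.append(mission)
    return points, log
```

Raising or lowering the monthly limit is the difficulty knob: a higher limit means more missions generated for the player before the aliens go quiet.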

 


Addition:

 

If this kind of dumb AI is implemented along with the Automatic battles suggested elsewhere, the Planetscape could be functional and ready for testing earlier than expected.


  • 1 month later...

Perhaps we can use methods already used in chess programs. They use:

1. some statistical methods, of the "it-is-good-most-of-the-time" type;

2. resource-based strategies (defend the king);

3. dictionary moves.

 

The first and second are hardcoded and inspired by experienced chess masters. But the third one is a collection of "winning strategies". When the program detects a "known position" in the dictionary, it follows the found path for as long as the human player makes the known moves.

 

My idea is this: for all three methods we need a database of positions and moves that lead to winning. This database could be used for analysis (not only dynamically, but after the game as well) and for updating the dictionary. It could also be used for neural network training, if we use that method.
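A sketch of such a move dictionary (the position keys and move names are hypothetical hashable stand-ins for whatever state summary the game would actually use):

```python
class MoveBook:
    """Chess-style move dictionary sketch: store positions from won
    games together with the move that was played, and look the current
    position up before falling back to the normal AI."""

    def __init__(self):
        self.book = {}

    def record_win(self, history):
        # history: list of (position, move) pairs from a game that was won
        for position, move in history:
            self.book[position] = move

    def lookup(self, position):
        # None means "unknown position": fall back to search / net / rules
        return self.book.get(position)
```

The same position-and-move records could double as training data for a net later, which is the dual use described above.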

 

We could use an embedded database for collecting player moves, storing the AI dictionaries and so on. We could even exchange data, to make a centralized AI-analysis engine with quick and easy AI updates.

 

This database could also be used for storing game resources, saved games and text translations.

