XCOMUFO & Xenocide

Self-Improving AI


Isebrand


Hi all,

 

I don't know how the current AI is designed, so I don't know if my idea of a self-improving AI is applicable at all. Anyway - here it is:

 

My suggestion is to use a genetic algorithm to improve the behaviour of the computer-controlled aliens (for brevity I'll call them bots from here on). Genetic algorithms (or, more generally, "evolutionary programming") have been around since the early 1960s. The idea is to borrow some of the mechanisms found in evolution to search a space of possible solutions for the (hopefully) best one.

 

Genetic algorithms have been used with varying success on a wide range of problems, from optimising a jet-engine layout to aligning DNA sequences. I once used one for a set of differential equations that described the metabolism of a bacterium.

 

Here is the basic idea:

 

First one would need a way to describe the behaviour of the bots in a sequential way. A program is a sequence of instructions, so one could run the genetic algorithm directly on the code. However, that would be rather inefficient and would lead to a lot of problems. A much better way is to implement the behaviour at an abstract level. Every bot would have a set of "genes", each gene describing how the bot reacts under certain circumstances. For instance, an extremely simple set of genes with very abstract reaction schemes would be:

gene : possible reactions

[under fire] : attack/hide/play dead/lay low/find cover/run away/find ally

[wounded] : attack/hide/play dead/lay low/find cover/run away/find ally

[spotted enemy] : attack/hide/play dead/lay low/find cover/run away/find ally

[been spotted] : attack/hide/play dead/lay low/find cover/run away/find ally

[match start] : attack/hide/play dead/lay low/find cover/run away/find ally

etc etc

 

Every bot would then have a set of rules, for instance:

[under fire] : attack

[wounded] : hide

[spotted enemy] : find ally

[been spotted] : hide

[match start] : find cover

(Of course the reaction schemes would be much more detailed; this is just to make the point.)
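To make the representation concrete, here is a minimal C++ sketch of such a genome, using just the example situations and reactions above (the type and function names are invented for illustration, not taken from any existing Xenocide code):

```cpp
#include <array>
#include <cstdlib>

// The situations a bot can react to -- one "gene" per situation.
enum Situation { UNDER_FIRE, WOUNDED, SPOTTED_ENEMY, BEEN_SPOTTED, MATCH_START, SITUATION_COUNT };

// The possible reactions a gene can encode.
enum Reaction { ATTACK, HIDE, PLAY_DEAD, LAY_LOW, FIND_COVER, RUN_AWAY, FIND_ALLY, REACTION_COUNT };

// A genome is simply one chosen reaction per situation.
struct Genome {
    std::array<Reaction, SITUATION_COUNT> genes;
};

// The random genome every bot starts with in the very first fight.
Genome randomGenome() {
    Genome g;
    for (Reaction& gene : g.genes)
        gene = static_cast<Reaction>(std::rand() % REACTION_COUNT);
    return g;
}
```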

 

For the first fight all bots would have a random set of genes, their "genome". However, for the second fight the genomes with a higher fitness would have a higher chance of being drafted for the fight again. Fitness is a measurement of how well the bot performed in the last battle, e.g. damage inflicted + survival time.

So the genomes of bots that did well have a higher chance of becoming the genome of a bot in the next fight than the genomes of bots that performed poorly.

 

The important part is that some genomes are now changed. Each gene has a certain chance of spontaneous mutation, say 0.1%. In addition, other operations are possible, for instance cross-overs between genomes.

 

An example would look like this:

Imagine 5 bots with the genomes A, B, C, D, E. After the first battle a fitness for each genome is calculated based on the performance of the bots, and you get something like this:

A - fitness 0.4

B - fitness 0.15

C - fitness 0.1

D - fitness 0.05

E - fitness 0.3

 

In the next round 10 bots are needed. For each bot a die is rolled to decide which genome it gets. On average you would expect something like 4 x A, 1 x C, 3 x E, 2 x B and 0 x D. These genomes then undergo mutation, so for instance in one A genome the gene for [wounded] is mutated from "hide" to "attack". In addition, information is exchanged between some genomes by crossing over.
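A rough sketch of that selection-plus-mutation step, reusing the Genome type from the sketch above; the 0.1% mutation rate is the example figure from the post, and everything else is a hypothetical illustration:

```cpp
#include <random>
#include <vector>

// Fitness-proportional ("roulette wheel") selection: pick one parent genome,
// where the chance of being picked is proportional to its fitness.
const Genome& selectParent(const std::vector<Genome>& genomes,
                           const std::vector<double>& fitness,
                           std::mt19937& rng) {
    std::discrete_distribution<std::size_t> pick(fitness.begin(), fitness.end());
    return genomes[pick(rng)];
}

// Mutation: each gene has a small chance (e.g. 0.1%) of being replaced
// by a random reaction.
void mutate(Genome& g, double mutationRate, std::mt19937& rng) {
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    std::uniform_int_distribution<int> reaction(0, REACTION_COUNT - 1);
    for (Reaction& gene : g.genes)
        if (coin(rng) < mutationRate)
            gene = static_cast<Reaction>(reaction(rng));
}

// Building the next round's population of, say, 10 bots.
std::vector<Genome> nextGeneration(const std::vector<Genome>& genomes,
                                   const std::vector<double>& fitness,
                                   std::size_t populationSize,
                                   std::mt19937& rng) {
    std::vector<Genome> next;
    for (std::size_t i = 0; i < populationSize; ++i) {
        Genome child = selectParent(genomes, fitness, rng);
        mutate(child, 0.001, rng);  // 0.1% per-gene mutation chance
        next.push_back(child);
    }
    return next;
}
```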

 

After the battle is over, fitness is calculated and the whole thing starts over again.

 

In practice you would probably need different genome classes for different aliens - a psi-fighter with a genome optimised for a sniper wouldn't do any good.

The genetic algorithm has the advantage that the AI would adapt to different styles of different players. In addition it fits quite nicely into the background story - first the aliens have no clue how to fight the humans, but with more and more battles they learn and "genetically modify" their soldiers to fight better against their enemy.

 

Any comments? :)

Edited by Isebrand

So, as the game progresses, the aliens become better at combat and smarter through a set of randomly determined mathematical 'mutations'.

The idea of adaptive AI has kept me on my toes for some time; some games, however, get too smart too quickly. The problem with this type of game is that if you need to play and lose the first five times before you work the game out, then by the time you do work it out, the AI has evolved to the point where you can't beat it with said strategy.

The AI file would need to be reset with every new game start or stored with the save game file.

For example, if you play X-COM Apocalypse for, say, four months in your very first game, Poppers will not show up until the fourth week, depending on how well you play. After that, if you restart, the Poppers will show up in the second week or even the first. Herein lies the problem. Along with a sophisticated AI you need to base the game on certain triggers, similar to the configuration of X-COM Interceptor. Interceptor relies on five major triggers, the alien messages. If you don't research them the AI evolves slowly, but if you do, the AI suddenly jumps to a whole new level of thinking; the trick then is simply not to activate the trigger.

What I'm saying is that after all is said and done, after twelve years of gaming, there are a lot of AIs out there and they work in a lot of different ways, all requiring varying amounts of processing power, and not all of them work as well as the creators had hoped. The AI needs to evolve, no question the game needs to get harder as it goes on, but at the same time it needs to go slowly enough that each person who plays the game has a chance of winning. I'm sure if one of your family members started playing on Superhuman and beat the game that first time, you would s*** yourself, but if you then played and the AI was still tuned to Superhuman while you were playing on Medium, you wouldn't be too pleased.

 

Anyway, I have rambled on for long enough.

 

Good idea BTW


Well, it is a good idea, but wouldn't polymorphic code (or something similar, like what you are describing) be hard to mod? We are trying to make xeno as easy to mod as possible. Other than that, it is a good idea.

A problem could be auto-combat. If it is implemented as a full AI-vs-AI simulation, the "human" AI would be at quite a disadvantage because it has less experience (since at least some battles would be played by the player himself). On the other hand, if the "human" AI could analyse the "player intelligence"'s actions and use that data for its evolution, that would be cool. BUT this could lead to a too powerful "human" AI...

Errr... what I'm trying to say is: there could be multiple AIs, and all of them should evolve at the same speed so that they are fair partners when playing against each other.


Thanks for the positive answers. First - if you think this could be useful, I'd volunteer to program it (I'd actually love to do it). That is - if you could digest another developer. To minimise the friction introduced into the development process, I would only work on the genetic algorithm, connected to the main AI code via a (hopefully) clean interface. The idea would be that you can have a classical AI that works, but if the genetic algorithm works out you can plug it in to improve the AI.

 

A few notes:

- auto-combat:

The easiest solution for auto-combat would probably be to have different sets of AI for auto-battle and manual battle. These could then be optimised independently.

 

- polymorphic code

If an abstract representation as I described is used, the code would not be polymorphic. It would only become polymorphic if you ran the genetic algorithm on the code itself. That's an interesting approach, but most of the time you would end up with memory violations and non-executable code. This has been done before, and you can implement rules to prevent these problems, but it is a rather inefficient approach.

 

- mod-support

No problem - the "genomes" could be saved in clear text, so any modder could look at them. In addition, the formula for fitness could be made moddable, so a modder could decide to train for "most glorious last stand of an alien" rather than "most damage inflicted". ;) Changing the fitness used for optimisation has another advantage:

 

- aggressiveness levels

If you want aggressive aliens, you optimise for "damage inflicted". If you want cowards, you optimise for "survival time".
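A minimal sketch of what a moddable fitness formula might look like, assuming the weights are read from a plain-text file; the file format and field names are made up purely for illustration:

```cpp
#include <fstream>
#include <string>

// Weights a modder could tweak, e.g. in a plain-text file "fitness.cfg":
//   damageWeight   1.0
//   survivalWeight 0.2
struct FitnessWeights {
    double damageWeight = 1.0;    // reward damage inflicted (aggressive aliens)
    double survivalWeight = 0.2;  // reward survival time (cowardly aliens)
};

FitnessWeights loadWeights(const std::string& path) {
    FitnessWeights w;
    std::ifstream in(path);
    std::string key;
    double value;
    while (in >> key >> value) {
        if (key == "damageWeight")   w.damageWeight = value;
        if (key == "survivalWeight") w.survivalWeight = value;
    }
    return w;
}

// Fitness as in the original example (damage inflicted + survival time),
// but with moddable weights.
double fitness(double damageInflicted, double survivalTime, const FitnessWeights& w) {
    return w.damageWeight * damageInflicted + w.survivalWeight * survivalTime;
}
```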

 

In addition, I have been thinking about outfitting every alien with a "commander genome". Whenever an alien spots an enemy, it would become a temporary commander. Based on its military rank and species, it would have a certain range within which it could command other aliens to set up an ambush (for instance Colonel = all on the battlefield, private = within 5 squares, brainless aliens = 0 squares). It could be done in a rather simple way, for instance based on rules in the genome like "only fire if at least x% of the aliens under temporary command can move into position to open fire at the same time". If that works out, it would give players an incentive to kill high-ranking aliens first, because then the aliens would no longer be capable of coordinated actions... Ok, ok, probably this is going too far for now, but it is something to keep in mind. I think it would be extremely hard (and extremely interesting) to create an AI that is capable of successful coordinated actions more complex than "here is an enemy, everybody go get him".
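A rough sketch of how such a temporary-commander rule might look, using the rank-based command ranges from the example above; the x% threshold, names and values are hypothetical:

```cpp
#include <vector>

// Ranks relevant to the example in the post.
enum Rank { BRAINLESS, PRIVATE, COLONEL };

// Command range in squares by rank; values follow the example above
// ("all on battlefield" is approximated with a very large number).
int commandRange(Rank rank) {
    switch (rank) {
        case COLONEL: return 9999;  // effectively the whole battlefield
        case PRIVATE: return 5;
        default:      return 0;     // brainless aliens command nobody
    }
}

struct Alien {
    Rank rank;
    int x, y;                     // position in squares
    bool canReachFiringPosition;  // would be filled in by (hypothetical) pathfinding
};

// Genome rule from the post: the temporary commander only orders an attack if at
// least `requiredFraction` of the aliens under its command can move into position
// to open fire at the same time.
bool commanderOrdersAttack(const Alien& commander, const std::vector<Alien>& aliens,
                           double requiredFraction) {
    const int range = commandRange(commander.rank);
    int underCommand = 0, ready = 0;
    for (const Alien& a : aliens) {
        const int dx = a.x - commander.x, dy = a.y - commander.y;
        if (dx * dx + dy * dy <= range * range) {  // within command range
            ++underCommand;
            if (a.canReachFiringPosition) ++ready;
        }
    }
    return underCommand > 0 && ready >= requiredFraction * underCommand;
}
```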

 

Back to the "tactical combat" genome - it should also contain information about preferred locations. The alien would then try to stay close to its preferred location most of the time, for instance "inside room, x squares away from window", "rooftop", "inside UFO, y squares away from door". How hard it tries to maintain its preferred location could of course also be subject to optimisation by the genetic algorithm. :) You could also have an initial setup scheme that is likewise optimised by a genetic algorithm. :wacko:

 

Some words of caution:

It could happen that the optimisation does not go fast enough (e.g. it would take too many battles before good genomes arise). In that case one would have to start with pre-optimised genomes, but would lose some of the self-adjusting quality of the AI. Not sure if this will happen; only testing can show. :)

Edited by Isebrand

You are willing to program this? Excellent! :)

Post in the Recruitment Center and you will be made a recruit. Then speak with the other programmers (such as mamutas) and see what they think. Remember, programmers are welcome here! We always need more! :)

 

Back to the topic:

It would be excellent if we could implement something like this. :D Great idea! :)


A possible solution to the "AI starts unoptimised and easy" problem: include a partially optimised AI, just have someone really good play a few rounds with it so it's not helpless, but also not very good. Now, since I know nothing about AI or programming, I could be totally missing the point, but I thought I may as well offer a possible solution.

If you want a partly optimised AI and everything is based on random numbers, then maybe you could start with a set of random numbers, say 3,4,9,3,7,3,1,7,3,4,9,6,0, and then add another random number whose range is determined by the difficulty of the game, like:

Difficulty : range of possible random optimisers

1 EASY : 0 – 1

2 : 0 – 2

3 : 0 – 3

4 : 0 – 5

5 : 0 – 8

6 : 0 – 13

7 SUPERHUMAN (and a little bit) : 0 – 21

And so on, using the Fibonacci number sequence, and including all of the numbers to two decimal places inside the given range.
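A tiny sketch of that idea, reading the table as the Fibonacci sequence 1, 2, 3, 5, 8, 13, 21 indexed by difficulty; this is just one guess at what is meant:

```cpp
#include <cmath>
#include <random>

// Upper bound of the "random optimiser" range for a given difficulty:
// 1 -> 1, 2 -> 2, 3 -> 3, 4 -> 5, 5 -> 8, 6 -> 13, 7 -> 21, and so on.
double rangeForDifficulty(int difficulty) {
    double a = 1.0, b = 2.0;  // the first two entries of the table
    for (int i = 1; i < difficulty; ++i) {
        const double next = a + b;
        a = b;
        b = next;
    }
    return a;
}

// Draw the extra optimiser value: anywhere in [0, range], to two decimal places.
double randomOptimiser(int difficulty, std::mt19937& rng) {
    std::uniform_real_distribution<double> dist(0.0, rangeForDifficulty(difficulty));
    return std::round(dist(rng) * 100.0) / 100.0;
}
```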

Or have I missed the point completely?


@A_dxman - not sure if I understand your suggestion correctly, but if I do, the problem would be that you wouldn't know which values make for a difficult or easy game. It is the combination of all the values that determines how good the AI is.

Edited by Isebrand

Ah... I'm not a programmer so my opinion is highly contestable, but doesn't that entail a lot of programming know-how? Much like that computer that beat Kasparov.

 

I wasn't aware that cognitive learning skills can be programmed into an AI now.

Edited by warhamster

Well, Isebrand (I hope I spelled that right) has volunteered to program it, so hopefully it shouldn't add any more work for our primary dev team. Plus, Red Alert 2 had something similar, and it went like this: "If player builds x>5 tanks, begin building anti-tank units. If player builds x>5 attack dogs, do not build more infantry except engineers when needed." The biggest problem would probably be getting the computer to remember what it learned by the next battle.

The grenade example is a very good one. I will use this as one of the first test cases - one team equipped only with grenades vs. a team with guns, and see how the AI for the team with guns evolves. If it works out, you could also use this approach for game balancing - if there is no good strategy against weapon-X-only users, then this weapon might have to be scaled down.

Though it might take a while; at the moment I'm still struggling with C++BuilderX. But I'll get a book next week, so it shouldn't be much of a problem. The code implementation is straightforward, and the automatic battle resolving is perfect for testing the genetic algorithm (otherwise you would have to play hundreds of battles to see how it works out).
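A rough sketch of the kind of test harness this suggests, reusing the Genome type and nextGeneration() from the earlier sketches; runAutoBattle() is a hypothetical stand-in for the real auto-battle resolver:

```cpp
#include <random>
#include <vector>

// Per-bot outcome of one auto-resolved battle (hypothetical structure).
struct BotResult {
    double damageInflicted;
    double survivalTime;
};

// Placeholder for the real auto-battle resolver (grenade-only team vs. gun team);
// here it just returns empty results so the harness compiles.
std::vector<BotResult> runAutoBattle(const std::vector<Genome>& gunTeamGenomes) {
    return std::vector<BotResult>(gunTeamGenomes.size(), BotResult{0.0, 0.0});
}

// Evolve the gun team's genomes over many auto-resolved battles.
std::vector<Genome> trainGunTeam(std::vector<Genome> genomes, int battles, std::mt19937& rng) {
    for (int b = 0; b < battles; ++b) {
        const std::vector<BotResult> results = runAutoBattle(genomes);
        std::vector<double> fitnessValues;
        for (const BotResult& r : results)
            fitnessValues.push_back(r.damageInflicted + r.survivalTime);  // fitness as proposed
        genomes = nextGeneration(genomes, fitnessValues, genomes.size(), rng);
    }
    return genomes;
}
```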

Edited by Isebrand

How about creating several 'used' genomes by playtesting them, then putting them in as the starting ones?

 

e.g. a little-used genome: Medium

Quite a bit: Hard

A lot: Superhuman

Beginner endgame: Iron man (Oh $hit!!!)


  • 3 weeks later...

I am extremely impressed by these ideas of self-improving AI. The whole concept of X-COM, IMHO, is challenge and a spooky feeling. With this AI, the game would get harder and harder without implementing SUPERHUMAN alien races (like the Sectopod in the original X-COM). Don't get me wrong, I would never remove any alien, but my point is that a Sectoid can be just as dangerous as an Ethereal if it's trained enough.

 

Btw, my first post! :D


  • 1 month later...

This is a great idea, but there are instances where it can be exploited.

 

Like you said, games that use triggers allow the players to find the triggers and avoid them to their advantage. A similar tactic can be used against your AI. I guess I have to dub it AI conditioning.

 

SCENARIO:

 

Alien "fitness" is determined by damage done + survival time.

 

A bot evolves so that it attacks early, then hides for the rest of the mission (this would give it a first strike and then a prolonged survival time).

 

This type of AI could easily be defeated once the player recognises it. The player would simply defend against the first attacks, then ignore the bot for the rest of the mission, until the less favourable genomes are eliminated.

 

Thus, this weak genome is believed to be better than the ones that actually are, because the player has conditioned the AI to his liking.

 

SCENARIO:

 

Alien "fitness" is determined by damage done + survival time.

 

The player sacrifices rookies who simply walk out in the open (or something else that would teach the AI the wrong way to fight). The bot learns that it no longer needs to take cover (or something similar). The more experienced agents take out the aliens and proceed to the next mission.

 

The player has given himself an advantage that can be used in a more important mission. Sacrificing rookies in unimportant missions would condition the AI to be weaker, and would make the harder, more important missions with experienced agents a lot easier.

 

I have done a lot of AI programming, and the search for the perfect AI continues. In games like chess or checkers the possible moves are limited and can be analysed each turn easily. In a game like X-Com you can't simply find the best course of action, because there are so many.

 

To keep the player guessing, the AI has to hide its own intelligence. It has to do irrational things, sacrifice a soldier, or do something stupid, so that the player doesn't become wise to how the AI really works. These actions should not change how the AI works; they should just be a random occurrence.

 

MY GUESS AS TO HOW THE AI IN X-COM WAS PROGRAMMED.

 

I believe the programmers used a simple algorithm similar to the one that was proposed.

 

There seem to be 3 main types of behaviour (they vary by mission and by species of alien):

 

I: Defend area. The alien attempts to defend its starting location or a random location on the map. It hides until an agent is spotted or suspected, then attacks.

 

II: Scout area. The alien simply moves in a random direction (most often towards the UFO or Human ship). The alien engages any enemy it encounters, taking cover when possible

 

III: Defend commander. Like defend area, but the alien defends a target alien. A very aggressive AI, sacrificing its own life if needed to protect the target alien.

 

To give a complete breakdown of the AI would take pages and pages of text. If anyone is interested, I would be willing to join a combined effort to further our understanding of the X-Com AI.

 

If anyone already knows how it works, I would love to have the information!!! :beer:


Well, I can't compile the project files yet because of an internal compiler error. But once that is done I will be happy to implement the AI to the point where it might even learn from its mistakes, although I don't think I would really need that part.

Making bots for every alien is nice and easy, but it is simply not good enough for a squad-based game, since it is more important to work well as a team than to work well alone. (Anyone who plays 3D team shooters will agree, I think.)

 

I would propose again (yes, I am a fan of it) a project-based AI, where the determination of the projects and their priority is done by some sort of neural net / genetic algorithm. The aliens then work together as a team and try to take out the most important / easiest targets first. So if you put 4 soldiers next to each other, a grenade will be inbound, but if your SWS is all alone and can be attacked by 4 at once, you'd better say :wave: to it :)

 

But once I get the project working I will be happy to work on, or at least discuss, the implementation of the AI.

 

As a side note, I actually built a small test project-based AI, and the only problem was that (well, this is embarrassing) it beat me :( :( or at least it played the same way I would have if I were playing that side.


I hope you are referring to my post. It was just a normal map with some enemies. They could only attack hand-to-hand, but that was just the game.

I already had the algorithm complete, and the number of enemies was just a few to test it. I did not have the time to fully complete the tests on a complicated test level, since the engine was not good enough (and there were time constraints).

 

It worked the way I would like it to play. I used a project-based AI system where my units were the projects for the AI, and it divided them up the best way it could. It actually tried to corner my units (HTH attacks) and attack them in pairs so there was no way the units could escape.


  • 3 weeks later...

First off I would like to say hi, I'm new to these forums. I was looking over the topics and this one piqued my interest (I'm a 4th-year student of AI at university).

 

Have you thought more about how you will do the optimisation? A standard genetic algorithm (GA) may not be the most suitable; instead an evolution strategy (ES) may be better. I'm thinking along the lines that the different races would 'learn' on the battlefield differently, BUT in general individuals of a given race would learn in a similar way. E.g. Sectoids may optimise toward survivability and Floaters toward aggression.

 

So rather than using a GA that optimises every individual on the battlefield separately toward a single goal, each 'race' would optimise toward its own style (but each individual within its race would still learn separately).

 

I'm thinking that an ES using global intermediate recombination (the ES form of crossover) would be the best choice. This means that each race would form a separate population and that each member of the population would contribute toward the new 'offspring'. There are a few differences between this method and a GA: firstly, every member of the population contributes, rather than 2 members being selected as in a GA; also, each gene is created separately (in a GA a crossover point is selected, and the first half of the genes is taken from one parent and the second half from the other parent).
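A minimal sketch of the difference between the two recombination styles, treating a genome as a vector of real-valued genes purely for illustration:

```cpp
#include <cstddef>
#include <random>
#include <vector>

// For illustration, treat a genome as a vector of real-valued genes.
using RealGenome = std::vector<double>;

// GA-style one-point crossover: pick a cut point, take the first part of the
// genes from one parent and the rest from the other.
RealGenome onePointCrossover(const RealGenome& a, const RealGenome& b, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> cutPoint(1, a.size() - 1);
    const std::size_t cut = cutPoint(rng);
    RealGenome child(a.begin(), a.begin() + cut);
    child.insert(child.end(), b.begin() + cut, b.end());
    return child;
}

// ES-style global intermediate recombination: every member of the population
// contributes, and each gene of the offspring is created separately as the
// average of that gene across all parents.
RealGenome globalIntermediateRecombination(const std::vector<RealGenome>& population) {
    RealGenome child(population.front().size(), 0.0);
    for (const RealGenome& parent : population)
        for (std::size_t i = 0; i < child.size(); ++i)
            child[i] += parent[i] / population.size();
    return child;
}
```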

 

main advantages:

1. different alien races can be grouped separately and may appear to 'learn' differently as you set up different targets for them.

2. with different races developing differently, a separate form of AI could be used in Geoscape mode to choose how to pick an attacking force (e.g. a tactical choice to use Sectoids + mutoid men or Sectoids + Floaters; a fairly simple knowledge-based system could do this form of AI).

3. this method will work for an odd number of troops (e.g. if 1 of the 4 enemy troops died, a GA wouldn't be able to pair up and cross over the remaining 3).

 

possible disadvantages:

1. adds more coding and scope for something that you may not think is necessary (as a GA will optimise and let the troops 'learn', my idea of giving each race different targets to learn toward might not be wanted/needed).

2. an ES (let alone several smaller ones) will take longer for the PC to process, although I don't think it would be that much longer than a GA.


Hmmmm... I have no experience with AI, but I've got some ideas.

 

Instead of adding a 100% self-improving AI, we simply tell it what to do:

 

Scenario: you use grenades to stop the aliens.

 

The programming department would code in counter-tactics to give the aliens. Say, in this scenario, the department sets several counter-tactics against "grenade bombardment". These could be:

 

1) camp in the UFO.

2) scout, keep out of throwing range.

3) flee if a grenade is spotted.

4) mind-control soldiers carrying grenades, make them drop the nades.

5) stay away from other aliens.

 

Now, only high-ranking Sectoids and Ethereals can use tactic 4, so the AI ignores it for anyone else.

 

The AI chooses one of those settings for each alien. It might be different for each, and "stupid" aliens might even 'disobey' the counter-tactics, attack in packs and storm the craft. It might also differ for each individual genome: one with "run away" would likely choose the flee option, while a hide or find cover genome would have a higher chance of taking the scouting option.
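A small sketch of what such a genome-weighted choice between counter-tactics could look like, reusing the Genome type from the first sketch; the weights and tactic names are purely illustrative:

```cpp
#include <random>
#include <vector>

// The counter-tactics from the list above.
enum CounterTactic { CAMP_IN_UFO, SCOUT_OUT_OF_RANGE, FLEE_FROM_GRENADE,
                     MIND_CONTROL_GRENADIER, SPREAD_OUT };

// Pick a counter-tactic for one alien. The genome biases the weighted choice:
// a "run away" gene makes fleeing more likely, "find cover" favours scouting,
// "hide" favours camping in the UFO.
CounterTactic chooseCounterTactic(const Genome& g, bool canMindControl, std::mt19937& rng) {
    std::vector<double> weight(5, 1.0);
    if (g.genes[UNDER_FIRE] == RUN_AWAY)   weight[FLEE_FROM_GRENADE]  += 2.0;
    if (g.genes[UNDER_FIRE] == FIND_COVER) weight[SCOUT_OUT_OF_RANGE] += 2.0;
    if (g.genes[UNDER_FIRE] == HIDE)       weight[CAMP_IN_UFO]        += 2.0;
    if (!canMindControl)                   weight[MIND_CONTROL_GRENADIER] = 0.0;
    std::discrete_distribution<int> pick(weight.begin(), weight.end());
    return static_cast<CounterTactic>(pick(rng));
}
```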

 

Scenario: the player is camping outside the UFO's door(s).

 

Now, this scenario leaves few possibilities. I use it to demonstrate more advanced tactics the aliens might use (smarter AI for higher levels).

 

1) outcamp the opposition.

2) storm the doors.

3) mind-control soldiers.

4) ADVANCED TACTIC: open the door indirectly and shoot a blaster bomb / throw a grenade out the door.

5) ADVANCED TACTIC: command a weaker alien to sacrifice itself to find the humans' position, then mind-control or attack normally.

6) ADVANCED TACTIC: blow a hole in the UFO hull and flank the opposition.

 

Here there are several more technical tactics to use. Those are less likely to be chosen, and I think blowing a hole in their own hull would be more of a last resort. Those could be called "desperate" tactics.

 

There could also be offensive tactics (e.g. if the player is camping).


What you are describing there is more of a knowledge-based system, which looks at the situation and makes a decision based on the facts and rules that are given/programmed. The problem with these systems in games is that not every situation can be thought of and added as a rule, so a player can find a loophole and exploit it.

 

Having said that, I think that rather than using self-improving AI from scratch, the game can randomly generate the genes and, depending on the difficulty setting chosen, the game can 'learn' (for example, easy mode will just randomly generate the genes, while hard mode will randomly generate the genes and run a few training passes to 'optimise' the enemies before combat has started).
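A tiny sketch of that difficulty-based pre-training idea, reusing randomGenome() and trainGunTeam() from the earlier sketches; the pass counts are invented for illustration:

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Number of auto-resolved training passes to run before the campaign starts,
// per difficulty setting (the pass counts are invented for illustration).
int trainingPassesFor(int difficulty) {
    switch (difficulty) {
        case 1:  return 0;    // easy: purely random genes, no pre-training
        case 2:  return 10;
        case 3:  return 50;
        default: return 200;  // hardest settings: heavily pre-optimised genes
    }
}

// Build the starting genomes for one alien team.
std::vector<Genome> initialGenomes(int difficulty, std::size_t teamSize, std::mt19937& rng) {
    std::vector<Genome> genomes;
    for (std::size_t i = 0; i < teamSize; ++i)
        genomes.push_back(randomGenome());
    return trainGunTeam(genomes, trainingPassesFor(difficulty), rng);
}
```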

 

It might be necessary to teach certain things by example, but the details of that will depend purely on the gene/rule structure chosen and the scope of the game.


Presumably we would seed it with a certain level of learning as a baseline by running the genes against a bunch of real people and seeing what emerges?

 

Are you studying games AI or general AI?


Yes, you can set base values that you get from various testing (I would assume this would be best in the game; then again, randomly generating everything from scratch will mean every encounter should be different). You can have a learning rate that varies depending on the difficulty setting (learns faster on harder modes) and/or different races learning at different speeds, etc.

Once you have a system in place to test, you will be better able to assess what to do (if random generation just makes them morons that huddle together and hide, or one-man-army gun-toting psychopaths, then maybe suitable values can be used to begin with; as each encounter will be different, the troops will still learn in different ways).

 

Also, you could use 'normally distributed numbers' rather than 'uniformly distributed random numbers' for the mutations in the genes. Uniformly distributed random numbers are as they sound: a random number (for example between 0 and 1) has an equal chance of being 0.3 as it does 0.5.

Normally distributed random numbers make small mutations more likely than large mutations (so a 0.3 is more likely to occur than a 0.5). The huge advantages of this are (see the sketch below):

1. because the numbers are worked out with a formula they are random (C++ sometimes has a nasty habit of producing poor pseudo-random numbers, although there are lots of ways around it, e.g. using the system clock at the time of processing)

2. small random changes are preferred over large ones, BUT there is always a chance of a large mutation appearing, which may be needed to get out of what's called a local minimum (which means the system is stuck and isn't training anymore)
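A small sketch of the two mutation styles for a real-valued gene, using the standard library's distributions; sigma and the clamping range are made-up example values:

```cpp
#include <algorithm>
#include <random>

// Uniform mutation: the new gene value is equally likely to land anywhere in [0, 1].
double uniformMutate(std::mt19937& rng) {
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    return dist(rng);
}

// Gaussian mutation: nudge the current value by a normally distributed amount, so
// small changes are much more likely than large ones, while the occasional big
// jump (useful for escaping a local minimum) can still happen.
double gaussianMutate(double gene, std::mt19937& rng, double sigma = 0.1) {
    std::normal_distribution<double> dist(0.0, sigma);
    return std::clamp(gene + dist(rng), 0.0, 1.0);  // keep the gene inside [0, 1]
}
```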

 

Sorry if I'm going into what seems like a lot of depth.

I'm not doing gaming AI, I'm doing general AI, but I have done practical implementations of GAs and neural networks in C++ (AI is actually a simple subject to understand, especially if you are mathematically/programming minded).


  • 4 months later...

Kryptic,

 

Let me get this straight... so for a species:

 

- Have a set of S individuals that are derived from a pre-determined "basic" individual using normalised mutation.

- Pick N individuals for the given mission.

- During the mission, keep track of each individual's efficiency (through a "mark" system, dependent, for instance, on the number of victims, the time spent hidden, the time spent alive, etc.). Of course, I suppose the way you balance the different factors for the final mark is what separates each species.

 

Now, what do we do?

 

- Mutate the N individuals according to their marks (the worse the mark, the higher the mutation factor)?

- Remove the worst of the N individuals (mark threshold? percentage of N?)... but when do you actually apply mutation after that?

- A little of both?

 

Then:

 

- Sex-fest: the individuals, minus the worst ones and/or including the N mutants, cross-breed to produce a new set of S individuals.

 

Is that what you have in mind? I'd really like to fully comprehend it from an algorithmic point of view.

Edited by Julian

I see everyone has considered Battlescape AI, and so far the ideas look good. I reckon I understand it.

 

A game I find interesting (but don't like) is the Dogz 4 program my sister recently bought. When two dogs are bred together it creates a realistic offspring, using algorithms to carry dominant 'genes'. Could we use something similar (no, not breeding the aliens)? E.g. all alien genomes at the start of a game are random. The useless ones die off; of the ones left, the stats are mixed to create a more effective genome. Next battle the genome is used but slightly randomised, with a few exceptional mutations. The weak ones die off... and the process is repeated.

 

Sorry if I'm basically repeating already-suggested stuff... some entries were a bit long so I speed-read them.

 

Getting back to what I was saying... has anyone considered Geoscape AI? I suggest that the globe be split up into areas (these could be the boundaries of countries) and that each area generates a report each month/week/day on player strategy in that area, e.g. aggressiveness, resources, funding, importance of the area, etc. An algorithm would then decide, depending mainly on player resources, aggressiveness and areas of activity, which areas are strongest, and then attack the weak points, such as a really high-funding country with a base which has recently been badly damaged.
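A minimal sketch of that kind of area report and a pick-the-weakest-area rule; the fields and the weighting are made up for illustration:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical periodic report for one Geoscape area.
struct AreaReport {
    double playerAggressiveness;  // how actively the player operates here
    double playerResources;       // bases, craft and troops stationed in the area
    double funding;               // how much the local country funds the player
    double recentBaseDamage;      // how badly local player bases were hit lately
};

// Higher score = more attractive target: well funded, weakly defended, recently hurt.
double targetScore(const AreaReport& r) {
    return r.funding + r.recentBaseDamage - r.playerResources - r.playerAggressiveness;
}

// Pick the area the aliens should attack next.
std::size_t pickTargetArea(const std::vector<AreaReport>& reports) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < reports.size(); ++i)
        if (targetScore(reports[i]) > targetScore(reports[best]))
            best = i;
    return best;
}
```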

 

Just some ideas. Hopefully they mean something and aren't just ramblings :) I will now shut up. Thanks for your time.


  • 1 month later...

Don't worry Kamikazee, you are repeating, but it's sometimes good to repeat what's been said in an ocean of text... ^_^

 

Having studied AI at university too, I must say these ideas are interesting, but some are simply far too impractical... :huh:

 

Usually, even with a simple genetic algorithm, most early generations (anywhere between ten and a million) are plainly and simply unfit to live... If the aliens were randomised from scratch, most would be unplayable, and it might take a thousand missions or more before the aliens became challenging...

Genetic algorithms are not cut out for learning on the fly. They are the great specialists of NP-complete problems that you can't solve analytically, hence you solve them statistically. They are great for finding the STARTING conditions of an AI, not for making it evolve...

 

Neural networks are, and their speciality is generalisation, as in pattern recognition. However, they also need lots of training, and their starting values are often best initialised... with genetic algorithms. Besides, neural networks are not guaranteed to converge toward a solution, and they need FIXED training input to learn, i.e. "in that kind of situation, do that... in this kind of situation, do this..." and you whack them on the head until they learn... :hammer:

Hence, neural networks are not cut out for this work; they'd recognise (with lots of difficulty) what the situation is, but couldn't do squat about it...

 

Instead, the original X-COM AI seems to be a simplified logic system, which could be easily adapted to fuzzy logic, a way to graduate the responses in instances where the conditions for certain actions are not fully 0 or 1, and the corresponding reactions are not either... like driving an automated car (one crossed the US without the driver touching the wheel), or a helicopter (that's how their new autopilots work), or the Skyranger (seen any pilots in there?)...

That said, fuzzy logic would definitely look like overkill here, because aliens either move toward something or not, they've spotted an enemy or not, they have good cover or not, they shoot or not...

 

I think we've all seen the aliens being a challenge in XCOM without being too smart...

Sometimes they are just so damn tough you have to outsmart them, at other times they simply surprise us (rookie door-opener does not count, that's suicide).

 

 

FINALLY: :idea:

 

I think an expanded rule system of the kind that was in the original would work just fine; just add more conditions/possible actions to cover certain weapons/tactics, like camping and grenading... Preferred spawn points are nice, simple & efficient, and we can vary those a bit more...

The "temporary alien officer" idea is great for making them coordinate, and it'll be a thousand times more efficient that way to get them using a bit of group tactics... Neat, simple, efficient... :D

 

 

Besides, this thread seems to have been inactive for a long time; is there any new AI material to chew on?

 

*Edit*

Besides, what Isebrand proposed would be best served by using a genetic algorithm to make the INITIAL logic pattern the best possible... Little rules to make the AI more adaptive (like the grenade or camping issues) would make all the difference in making the AI a lot more fearsome...

 

Imagine a kamikaze Cyberdisc hopping into the Skyranger on the first turn instead of shooting you... :boohoo:

Or better yet, 4 Sectopods attacking from different angles... :hammer:

A Chryssalid rush!! OMG!! :devillaugh:

Edited by Paladin
