I thought this was pretty interesting. Not sure if any of you have seen this:

Seeing as Dell is selling them, I would say they’re making their way alright:

When I first saw this, it occurred to me that in the era of dual- and quad-core CPUs, one core could be dedicated purely to AI. That way, autonomous bots based on neuro-evolution, with the rtNEAT algorithm for example, would perform better.

For me, simulating believable bots that have senses like vision and hearing, combined with character attributes such as being curious or afraid, is more fascinating than graphics.

  • justin

This card accelerates things like line of sight, A* pathfinding, etc., which is exactly the sort of thing rtNEAT would use as simulation input, so they aren't competing in that sense… it accelerates low-level areas that are often the basics for higher-level AI systems and need fast crunching (the core seems to be a fast graph solver).
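For context on what "fast graph solver" means here, this is the kind of search such a chip would accelerate. A minimal A* sketch over a 4-connected grid (Python purely for illustration; the grid, costs, and heuristic are my own assumptions, and the hardware would implement this very differently):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid; cells marked 1 are blocked."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # (f = g + h, g, cell)
    came_from, g = {}, {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]                 # walk parents back to the start
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = cost + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))  # routes around the blocked middle row
```

Note how the inner loop is almost all data-dependent branching and irregular memory access, which is exactly the workload the quote below says general-purpose CPUs are weakest at.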

It's a cool idea, but it's hard for these things to make it onto the market, just like Ageia's PhysX…

It's not about whether Dell sells them, it's about how many of them Dell and others can sell.

Not too many yet, I can promise you.

As for AISeek, to quote myself from:

I also don't understand why many people here claim that current CPUs are very good for very branchy code. You shouldn't mistake features like branch prediction or out-of-order execution as making a CPU very good for branchy code (especially not when branches are very unpredictable). These are in fact features added to combat the problems that modern CPUs have with branchy code because of their long pipelines (to get more megahurtz).

I can see this working more easily than a PPU…

There is a lot less synchronization (expensive bus traffic to update from PPU -> CPU -> GPU); think in terms of one set of coordinates per object, rather than the coordinates of every vertex in that object.
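To put rough numbers on that difference (the object and vertex counts below are made up purely for illustration):

```python
# Hypothetical scene: 500 game objects, each with a 2,000-vertex mesh.
objects = 500
verts_per_object = 2000
floats_per_coord = 3   # x, y, z
bytes_per_float = 4

# Physics (PPU) case: potentially every vertex position needs syncing.
physics_bytes = objects * verts_per_object * floats_per_coord * bytes_per_float

# AI case: one position per object is enough for pathfinding queries.
ai_bytes = objects * floats_per_coord * bytes_per_float

print(physics_bytes // ai_bytes)  # 2000x less bus traffic in this toy scene
```

The exact factor depends entirely on mesh density, but the per-object vs. per-vertex distinction is the point.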

Secondly, there is a lot less duplication (no need to put 256 MB of expensive RAM on your add-in card) because far less data is needed. Unlike physics, with its need for per-triangle collisions and the like, you can greatly simplify world data for stuff like pathfinding.
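As a toy sketch of that simplification: instead of shipping full triangle meshes to the card, you can collapse object bounding boxes into a coarse walkability grid (the cell size and box representation here are my own assumptions):

```python
def build_navgrid(obstacles, width, height, cell):
    """Collapse axis-aligned boxes (x0, y0, x1, y1) in world units
    into a coarse grid of walkable (0) / blocked (1) cells."""
    cols, rows = width // cell, height // cell
    grid = [[0] * cols for _ in range(rows)]
    for (x0, y0, x1, y1) in obstacles:
        # -(-v // cell) is integer ceiling division, so partially
        # covered cells are marked blocked too.
        for r in range(max(0, y0 // cell), min(rows, -(-y1 // cell))):
            for c in range(max(0, x0 // cell), min(cols, -(-x1 // cell))):
                grid[r][c] = 1
    return grid

# One 3x2-unit crate in an 8x8 world with 2-unit cells blocks two cells,
# regardless of how many triangles its render mesh has.
grid = build_navgrid([(0, 0, 3, 2)], 8, 8, 2)
```

A few kilobytes of grid like this can stand in for megabytes of collision geometry, which is why the card wouldn't need a PPU-sized memory pool.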

And lastly, unlike what some have said here, I believe some of these algorithms could definitely benefit from optimized hardware.

The tricky part will be building this in a way that applies to many different types of games, but for things like RTS games it could definitely work.