I'm involved in an AI project, and we are expanding our testing/implementation environments due to some limitations with our previous ones. We have implemented interfaces to Final Fantasy 11 and the LEGO Mindstorm NXT. The former does not give us enough navigation data to really develop solid movement behaviors, and the latter provides sensor data that is essentially flat and featureless. Minecraft, however, has a rich environment that is broken down into a finite set of elements, which makes it a good environment for developing a behavior-based AI system. As such, we are looking to develop a modified client that will either expose user input and "sensor" data to the AI or build a database interface into the client to read/write that data directly. To give you a better idea of the AI project, here's a quick rundown:
Project Looking Glass is a software architecture designed to allow for the implementation of a multi-layered, scale-less Artificial Intelligence. This artificially developed intelligent system is inspired by the structure and function of the human Prefrontal Cortex and Frontal Eye Field. An AI implemented on the architecture will serve as the main controller for a number of other projects in development by Valhalla. Despite sounding arcane and complex, this system has applications in Alzheimer's treatment, war gaming, enhanced learning, and bureaucracy management.
The interface development is just getting started, and before we dive into coding I wanted to see if anyone has experience with this type of interfacing or if anyone is interested in getting involved in the project.
As of today, cardinal movements, camera x and y, jump, left click, and right click can all be controlled via the interface through the Project Looking Glass database. Tomorrow I'm going to start digging to find out what "sensor" data is on hand to start feeding to the AI, in particular collision information, local blocks, and the fixated block. Once that's finished, I can start recording some training sets and building behaviors.
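To make the control path a bit more concrete, here's a minimal sketch of how the client side could poll the Looking Glass database once per tick and apply the newest command row. The table name (control_state), its columns, and the ClientControls interface are placeholders I've made up for illustration; the real schema and the hooks into the modified client will look different.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/**
 * Polls a (hypothetical) Looking Glass control table and pushes the latest
 * command row into the modified client. Schema names are placeholders.
 */
public class ControlPoller {

    /** What the AI is allowed to drive, mirroring the controls listed above. */
    public interface ClientControls {
        void setMovement(boolean forward, boolean back, boolean left, boolean right);
        void setCamera(double deltaX, double deltaY);
        void setJump(boolean pressed);
        void setMouse(boolean leftClick, boolean rightClick);
    }

    private final Connection conn;
    private final ClientControls controls;

    public ControlPoller(String jdbcUrl, ClientControls controls) throws SQLException {
        this.conn = DriverManager.getConnection(jdbcUrl);
        this.controls = controls;
    }

    /** Called once per client tick: read the newest command row and apply it. */
    public void pollOnce() throws SQLException {
        String sql = "SELECT forward, back, strafe_left, strafe_right, "
                   + "camera_dx, camera_dy, jump, left_click, right_click "
                   + "FROM control_state ORDER BY id DESC LIMIT 1";
        try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
            if (!rs.next()) {
                return; // the AI hasn't written anything yet
            }
            controls.setMovement(rs.getBoolean("forward"), rs.getBoolean("back"),
                                 rs.getBoolean("strafe_left"), rs.getBoolean("strafe_right"));
            controls.setCamera(rs.getDouble("camera_dx"), rs.getDouble("camera_dy"));
            controls.setJump(rs.getBoolean("jump"));
            controls.setMouse(rs.getBoolean("left_click"), rs.getBoolean("right_click"));
        }
    }
}
```

Polling once per tick keeps the AI loosely coupled from the client: the AI just writes rows at whatever rate it likes and the client picks up the most recent state.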
After some digging I found that the occlusion and frustum checks are done on a renderer-by-renderer basis, and each renderer is tied to a chunk. As such, I can't use Minecraft's occlusion and frustum culling for near-field analysis of what is occluded.
However, I decided to develop ray trace functionality that looks from the front clipping plane out to a max distance and returns the first block it encounters. Right now it is set for 9 rays, one for the center block and one for each adjacent block, but it can be expanded. In order to do this, I built a rotation and translation transformation function so you can step through the ray distances from the player's origin, for which I have chosen the head.
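For anyone curious what that looks like in code, here's a rough sketch of the rotate-then-step idea under a few assumptions: yaw and pitch in degrees, a placeholder World interface standing in for the client's block access (not a real Minecraft API), and an arbitrary 10-degree spacing between adjacent rays in the 3x3 fan. It's not the actual implementation, just the shape of it.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the 9-ray near-field sensor cast from the player's head. */
public class RaySensor {

    /** Placeholder for the client's block access. */
    public interface World {
        boolean isSolid(int x, int y, int z);
    }

    /** First block a ray hit, plus how far along the ray it was. */
    public static class Hit {
        public final int x, y, z;
        public final double distance;
        public Hit(int x, int y, int z, double distance) {
            this.x = x; this.y = y; this.z = z; this.distance = distance;
        }
    }

    /** Cast a single ray from the head and return the first solid block, or null. */
    public static Hit cast(World world, double headX, double headY, double headZ,
                           double yawDeg, double pitchDeg, double maxDist, double step) {
        double yaw = Math.toRadians(yawDeg);
        double pitch = Math.toRadians(pitchDeg);
        // Rotation: yaw/pitch converted to a unit direction vector.
        double dx = -Math.sin(yaw) * Math.cos(pitch);
        double dy = -Math.sin(pitch);
        double dz =  Math.cos(yaw) * Math.cos(pitch);
        // Translation: step outward from the head along that direction.
        for (double d = 0.0; d <= maxDist; d += step) {
            int bx = (int) Math.floor(headX + dx * d);
            int by = (int) Math.floor(headY + dy * d);
            int bz = (int) Math.floor(headZ + dz * d);
            if (world.isSolid(bx, by, bz)) {
                return new Hit(bx, by, bz, d);
            }
        }
        return null; // nothing within range
    }

    /** The 3x3 fan: center ray plus eight neighbours offset by a few degrees. */
    public static List<Hit> castFan(World world, double x, double y, double z,
                                    double yaw, double pitch, double maxDist) {
        double offset = 10.0; // angular spacing between adjacent rays (assumed)
        List<Hit> hits = new ArrayList<Hit>();
        for (int row = -1; row <= 1; row++) {
            for (int col = -1; col <= 1; col++) {
                hits.add(cast(world, x, y, z, yaw + col * offset, pitch + row * offset,
                              maxDist, 0.25));
            }
        }
        return hits;
    }
}
```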
TLDR: I have a list of objects that will contain the nearest block each ray encounters. It should end up like a fusion of RFID and sonar.