Friday 6 July 2012

Abandoned Projects

So, as a little interlude, I thought I'd list (more as a reminder to myself) the "big" projects that I started and then gave up on; these are in the order that I started them.

  • Mercury - A game engine largely inspired by Deus Ex. There was a game idea associated with it too (Mercury: Rising): a multi-path, multi-ending FPS with a large RPG element. I always had trouble coming up with a suitably emotionally climactic ending for all the possibilities, though. I also wanted to make sure that this engine had excellent AI, after having played and made some levels in Unreal Tournament. I learned an amazing amount about computer hardware, algorithm and code optimisation, graphics rendering theory (including some of the more advanced stuff for core engine development) and basic AI.
  • Perspective - An OS aimed at increasing speed, reducing dependence on the BIOS and being designed with security in mind. It never got much past the bootloader, the initialisation sequence and a handful of the core drivers before I got bored. Linux took Linus Torvalds long enough to develop, and I didn't want to spend that much time re-inventing the wheel. I did learn a lot from this too, though.
  • Spy-Games - A joint venture with a friend on a hack-challenge website; I mainly did the storyline and the PC-based challenge design. It went quite well, but eventually we abandoned it - apparently the website code still exists somewhere in the archives (though not mine). Great fun.
  • CASM: The Compatible Assembler - Pretty much what it sounds like: an assembler/linker (the linker was to be called CLINK) that would be modular enough to run with any of the umpteen x86 assembly syntaxes. Primarily this was to produce a replacement for the version of MASM included in Hutch's MASM32 without the legal problems, and to allow easy extension for the 64-bit processors and the OOP stuff being created in the community (particularly the correct name mangling). Naturally, I wanted it to self-assemble, so it was written in x86 assembly. As it happens, GoASM did much of what I wanted to do anyway, so I gave up. Again, it was very interesting to learn about compiler and linker technology, though.
Those are all the really big ones... maybe sometime I'll list the medium-sized ones. I just hope that Flatrix doesn't end up on this list. I think my main reason for failing before was that I tried to do it all on my own - I won't be making that mistake again. For example, in Flatrix I won't be doing the website or the UI - I freely admit to being pants at those; I'm an algorithms person. But even there I may need some help, so we'll see.

Programming Games

I was reminded the other day about the old programming games that I was only vaguely aware of when I was in my teens: RobotWar, Code War etc. I just had a look around and most of these seem to have disappeared; only a handful remain active, and many of those are very old now. Most are some sort of very constrained adversarial game - the most open-ended active one seems to be Imperfect World of Robots (IWOR), which is still incredibly constrained for testing general AIs. What would be best is something like the Matrix for testing bots.

Taking an aside for a second, consider what happens to children who grow up in the wild without contact with other humans (feral children). There have only been a few cases, and the documentation is notoriously bad, but none of them turned out like The Jungle Book. Feral children seem to behave much as humans presumably did before the stone age - almost no social skills, unable to learn language, and so on. My point is that their brains at birth are entirely capable of all of these things, but for lack of socialisation in a culture we would consider even rudimentary, they fail to develop these skills. So if we manage to create a general AI as powerful as the human mind but fail to provide it with a cultured, social environment to develop in, it will end up an incoherent idiot.

But there are two problems:

  • Where do the "original" (first-generation) parents come from?
  • A full 3D world with all the complex interactions of the real world would require far more processing power than our initial algorithms could cope with... and how would we work out whether it was working, anyway?
Ideally we would also like to run the simulated world faster than real time, which means that the world needs to be simple - but not too simple, otherwise it won't be rich enough to generate "intelligent" behaviour. So what do we need?

I would suggest a 2D world (think Flatland) in which the bots receive stereoscopic 1D visual information, have pressure/touch sensors, rudimentary hearing, etc (essentially all the human senses, but scaled down a lot). That still leaves us the problem of the parents, however; to get somewhere here, I would suggest that a load of scripted AIs be generated using a crowd-sourcing approach to play the part of parents and the rest of society. In other words, we need to create an entire flatland world and society... what I like to call the Flatrix.
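
To make those senses concrete, here's a minimal sketch of what one simulation tick's worth of sense data might look like for a single bot. Every name and number in it (SenseFrame, RETINA_CELLS and so on) is a placeholder I've invented for illustration, not a settled design:

#include <stdint.h>

#define RETINA_CELLS 64   // cells per eye; vision in a 2D world is a 1D scanline
#define TOUCH_SENSORS 16  // pressure points spaced around the body's perimeter

// One simulation tick's worth of sensory input for a single bot.
// Two 1D retinas give the stereoscopic vision described above.
struct SenseFrame {
    uint8_t leftEye[RETINA_CELLS];   // brightness seen by each retina cell
    uint8_t rightEye[RETINA_CELLS];
    uint8_t touch[TOUCH_SENSORS];    // pressure at each touch sensor
    int16_t earLeft, earRight;       // one audio sample per ear - crude stereo hearing
};

Keeping each frame fixed-size and tiny like this is exactly what would let the world run faster than real time.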

So here's the plan:
  1. Determine the physics of the world.
  2. Design the physical characteristics and detailed sense information for the inhabitants.
  3. Define the way that we want society to work (languages, trade, governance, etc).
  4. Devise a scoring system to identify the best parent scripts/programmes.
  5. Define the interface for competitors (a protocol on top of TCP... or maybe even UDP) - see the sketch after this list.
  6. Create several of our own bots to start to populate the world.
  7. Open the competition, preferably with a website with a nice UI.
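
As a first stab at item 5, the wire format could be as simple as a fixed header in front of every payload. This is only a sketch; the message types, field names and magic value are all placeholders I've made up, not a settled design:

#include <stdint.h>

// Hypothetical message types exchanged between a competitor's bot and the world server.
enum MsgType {
    MSG_HELLO  = 0,  // bot registers with the world
    MSG_SENSE  = 1,  // server -> bot: one frame of sense data
    MSG_ACTION = 2,  // bot -> server: requested movement/speech for this tick
    MSG_TICK   = 3   // server -> bot: end-of-tick marker
};

// Fixed-size header sent in front of every payload, in network byte order.
struct MsgHeader {
    uint32_t magic;    // e.g. 0x464C5458 ("FLTX") - lets us reject stray packets
    uint8_t  type;     // one of MsgType
    uint8_t  version;  // protocol version, so the format can evolve
    uint16_t length;   // number of payload bytes following this header
    uint32_t botId;    // identifier assigned by the server at MSG_HELLO
};

Over TCP, the length field lets the receiver carve individual messages out of the byte stream; over UDP, each datagram would carry exactly one message, with the magic and version fields there to reject garbage.
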
I would suggest two types of entry: free and $10. The $10 entry money will go into a central pot and, at the end of the competition, will be split among, say, the top three entries. The free entries will not be eligible for any prize money.

Next time: Flatrix Physics Design

Saturday 15 October 2011

Sam & Polly

I bumped into this over at XKCD today... being bored, I wrote a bit of software to solve it for me (once I had figured out the method). I then had a look at the discussion page (to check my solution) and discovered that there was a conjecture about it working without the x, y <= 97 limit. Since I already had the code, I modded it to work up to arbitrary values (well... beyond about 46000 it is likely to fall over because it works with signed 32-bit numbers). Putting in 1000 for the limit actually gives a second solution: 16, 73. Code below (if you actually use this, please remember to turn on compiler optimisations... it will cut computation time by about 50%). Note that there are two extremely inefficient sections in this code, so up to about N=2000 it works fine, but beyond that execution will take more than just a few seconds - I don't know whether the algorithm is exponential or just a high-order polynomial (and I don't care)... but be warned.

#include <iostream>
#include <cmath>   // for sqrt() in calculatesolution()
using namespace std;

#define N 97
#define NPRIMES (N)
#define NCANDS (N)
#define NFACTS (N)
#define NPC 1000000

struct sPC {
    int product, firstsum;
    bool unique;
};

int primes[NPRIMES];
int candidates[NCANDS];
int factors[NFACTS];
int factorset[NFACTS];
sPC productcandidates[NPC];

// Given the sum s and product p, x and y are the roots of t^2 - s*t + p = 0,
// i.e. (s -/+ sqrt(s^2 - 4p)) / 2.
void calculatesolution(sPC *ppc) {
    int disc = (int) sqrt((double) ((ppc->firstsum * ppc->firstsum) - (4 * ppc->product)));
    cout << "Solution has a sum of " << ppc->firstsum << " and a product of " << ppc->product << "\n";
    cout << "The solution is therefore: " << (ppc->firstsum - disc) / 2 << ", " << (ppc->firstsum + disc) / 2 << "\n\n";
}

int main() {
    int h, i, j, k; // general iterators
    int x, y; // values as per the question
    int candidatecount, sum, product, factorcount, hfactorcount;
    int minx, maxx, factorsetcount, pccount;
    bool allindeterminate;

    // build the list of primes up to N by trial division against those already found
    primes[0] = 2;
    for(i=3, k=1; i<=N; i++) {
        for(j=0; (j<k) && (i%primes[j]!=0); j++);
        if(j==k) primes[k++] = i;
    }

    // for all possible sums
    for(candidatecount=0, sum=6; sum<=(N*2); sum++) {
        minx = (sum < (3 + N)) ? 3 : sum - N;
        maxx = sum / 2;

        // for each possible pair of values for this sum
        for(allindeterminate=true, x=minx; x<=maxx; x++) {
            y = sum - x;
            product = x * y;

            // generate a list of all of the prime factors of the product
            for(factorcount=0, k=0; product!=1; ) {
                if((product % primes[k]) == 0) {
                    factors[factorcount++] = primes[k];
                    product /= primes[k];
                }
                else k++;
            }
            product = x * y;

            // work out whether these factors can be split into more than one in-range pair;
            // i counts the distinct valid factorisations found (capped at 2)
            for(hfactorcount=factorcount/2, factorsetcount=1, i=0; (factorsetcount<=hfactorcount) && (i<2); factorsetcount++) {

                // init a list of indexes into the factor list
                for(j=0; j<factorsetcount; j++) factorset[j] = j;

                // check whether this initial combination is a valid in-range pair
                for(j=0, h=1; j<factorsetcount; j++) h *= factors[factorset[j]];
                j = product / h;
                if((h>=3) && (h<=N) && (j>=3) && (j<=N)) {
                    if(i==0) {
                        i++;
                        k = h;
                    }
                    else if((k!=h) && (k!=j)) i++;
                }

                // iterate through all remaining combinations of the list
                while((i<2) && (factorset[0]!=factorcount-factorsetcount)) {
                    // rearrange the index list:
                    // find the lattermost "non-stuck" element...
                    for(j=factorsetcount-1; (j>=0) && (factorset[j]==(factorcount+j-factorsetcount)); j--);
                    // ...move it along one and move all later items to be immediately after it
                    for(factorset[j++]++; j<factorsetcount; j++) factorset[j] = factorset[j-1] + 1;

                    // check this combination
                    for(j=0, h=1; j<factorsetcount; j++) h *= factors[factorset[j]];
                    j = product / h;
                    if((h>=3) && (h<=N) && (j>=3) && (j<=N)) {
                        if(i==0) {
                            i++;
                            k = h;
                        }
                        else if((k!=h) && (k!=j)) i++;
                    }
                }
            }
            // fewer than two valid splits: this pair's product would give the game away
            if(i!=2) allindeterminate = false;
        }

        // the sum is a candidate only if every pair is ambiguous (and there is more than one pair)
        if(allindeterminate && (minx != maxx)) candidates[candidatecount++] = sum;
    }

    // find the list of all products that can be produced by the candidate sums
    // and mark whether or not each one is unique
    for(i=0, pccount=0; i<candidatecount; i++) {
        minx = (candidates[i] < (N+3)) ? 3 : candidates[i] - N;
        maxx = candidates[i] / 2;
        for(x=minx; x<=maxx; x++) {
            y = candidates[i] - x;
            product = x * y;
            for(j=0; (j<pccount) && (productcandidates[j].product!=product); j++);
            if(j==pccount) {
                productcandidates[pccount].firstsum = candidates[i];
                productcandidates[pccount].product = product;
                productcandidates[pccount].unique = true;
                pccount++;
            }
            else productcandidates[j].unique = false;
        }
    }

    // delete all non-unique products
    for(i=0, j=0; i<pccount; i++) {
        if(productcandidates[i].unique) {
            productcandidates[j].product  = productcandidates[i].product;
            productcandidates[j].firstsum = productcandidates[i].firstsum;
            productcandidates[j].unique   = productcandidates[i].unique;
            j++;
        }
    }
    pccount = j;

    // find possible solutions by looking for candidate sums with just one unique product
    // (the list is ordered by firstsum, so a sum appearing exactly once is isolated)
    if(productcandidates[0].firstsum != productcandidates[1].firstsum) calculatesolution(&productcandidates[0]);
    if(productcandidates[pccount-1].firstsum != productcandidates[pccount-2].firstsum) calculatesolution(&productcandidates[pccount-1]);
    for(i=1; i<pccount-1; i++)
        if((productcandidates[i-1].firstsum != productcandidates[i].firstsum) && (productcandidates[i].firstsum != productcandidates[i+1].firstsum))
            calculatesolution(&productcandidates[i]);

    return 0;
}

Tuesday 6 September 2011

Why Blog?

I have a bit of a fascination with a great many academic topics, but two of my favourites are Artificial Intelligence (AI) and Psychology. In particular, I am interested in Artificial General Intelligence (AGI), or Strong AI. One of the things that frustrates me about many AI texts is that they aim too low - they simply want to design a system capable of going about a particular task in an "intelligent" manner: flexibly, with little to no human assistance. This is a laudable aim in and of itself, but in my opinion it offers no insight into the greater Strong AI problem. The reason is simple: Strong AI isn't just able to go about a single task, or even multiple tasks, in an intelligent manner; it isn't constrained by the idea of a task at all. The most important aspect of Strong AI is that it decides for itself what its task is.

Yes, Picasa has an eerily accurate facial image recognition algorithm; yes, computers are more and more often exhibiting behaviour that even a few months earlier seemed to exist only in the realms of science fiction. Kurzweil predicts the singularity from a perspective of computational power: the point at which computers will exceed the abilities of human minds. I'd argue that whilst this computing power would enable such a capability to exist, without appropriate algorithms that can sufficiently redefine themselves, such a singularity will not occur. After all, in many fields of human endeavour computers already outstrip humans: first in highly organised repetitive tasks such as addition; this has since extended to code compilation - complex algorithms which computers execute much faster than the equivalent human process. More recently, the power of statistical and pattern recognition systems has given the public a view of the forefront of algorithm development. However, despite the seemingly "magic" abilities of these algorithms and their capacity to learn, they have made few inroads into the depths of Strong AI. Why?

First, they are all only able to perform a task defined by another - increasingly complex tasks, naturally - but the task is still defined by a human: the algorithm designer. Some algorithms are able to adapt and change their own behaviour, but they still operate within bounds defined by the programmer's desire and imagination (the more limited of the two). Some curiosities of self-defining systems have been created at small scale in the lab, but these lack direction or drive... after all, what drives us but ambition?

So, what do I want to do with this blog (aside from producing possibly random drivel)? I would like to explore the world of algorithms that define their own ambitions in life, that are adaptive, and that are unconstrained by the imagination of the programmer.

I hope some people decide to come along for the ride.