26th International Conference on Inductive Logic Programming

4th - 6th September 2016, London


Competition

Entering the Competition

You should first register your system, then upload a .tar.gz archive containing two scripts (and any dependencies).

The first script should be called build and will be run once in order to install any dependencies required by your system. You may assume that this runs on a clean installation of Ubuntu 16.04.

The second script should be called run and will be run once for each task. It will be run with a single argument: the path to a file containing the task.
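As an illustration only, a minimal run script might be an executable Python file along the following lines. The command-line interface is as specified above; everything else is a hypothetical skeleton:

    #!/usr/bin/env python3
    # Hypothetical skeleton of a run script: it receives the path to the
    # task file as its single argument and would hand the task over to
    # the actual learning system.
    import sys

    def main():
        task_path = sys.argv[1]      # the one argument: path to the task file
        with open(task_path) as f:
            task_text = f.read()     # the task, delimited by meta statements
        # ... parse the task and run your learner here, printing answers ...

    if __name__ == "__main__":
        main()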

When uploading the archive, you should specify which categories you would like your system tested on. There are two main tracks in which a system can be entered: probabilistic and non-probabilistic (you may also choose to enter both). You must also specify which of the sub-categories you would like the current version of your archive to be tested on. The final version will be tested on all sub-categories for the track(s) you have entered. In order to keep all tasks open to both Prolog and ASP systems, we have used a subset of both languages; however, due to small syntactic differences, you still need to specify which language you are using. You may upload your system as many times as you like, and we encourage you to upload regularly during development.

Task format

Your learning system will be tested on many different tasks. Each task consists of background knowledge, a language bias and two sets of traces. The first set, the example traces, come with the complete set of valid moves, for use as examples by your ILP system. The second set, the test traces, do not come with a set of valid moves; these are used to test the accuracy of the hypothesis that your system has learned. We use meta statements to separate the various components of the task.

  • #background precedes the background knowledge.
  • #Example(i) precedes an example trace with identifier i. #trace is followed by the path that the agent took, and #valid_moves is followed by the complete set of valid moves.
  • #Test(i) precedes a test trace with identifier i. #trace is followed by the path that the agent took.
  • #target_predicate precedes the single predicate schema indicating the predicate that should be defined by the hypothesis. Usually, this will be valid_move(cell, time). This is similar to a modeh mode declaration in standard language biases.
  • #relevant_predicates precedes a list of predicate schemas indicating the predicates from the background knowledge which may be used to define the hypothesis. These are similar to modeb mode declarations in standard language biases. Note that when the task requires predicate invention, we will not give the schema(s) of the predicate(s) which should be invented.
The arguments of the predicate schemas are types. The types are unary predicates defined in the background knowledge.
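Putting this together, the overall shape of a task file might look something like the sketch below. The meta statements are as specified above; the exact ordering, the background predicates, the relevant-predicate schemas and the trace contents are purely illustrative and elided with "...":

    #background
    cell(c1). cell(c2). time(1). time(2).
    ...
    #target_predicate
    valid_move(cell, time)
    #relevant_predicates
    adjacent(cell, cell)
    ...
    #Example(1)
    #trace ...
    #valid_moves ...
    #Test(1)
    #trace ...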

Expected Output

In the non-probabilistic case, for each of the test traces, your system should output either VALID(i) or INVALID(i), where i is the identifier of the trace. In the probabilistic case, you should output VALID(i, p), where p is your estimate of the probability that trace i is valid.
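For example, with three test traces, a non-probabilistic entry might output the first block below, and a probabilistic entry the second (the identifiers and probabilities here are illustrative):

    VALID(1)
    INVALID(2)
    VALID(3)

    VALID(1, 0.9)
    VALID(2, 0.05)
    VALID(3, 0.7)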

As there is a time limit, you may want to output multiple sets of answers (using the best hypothesis you have found so far). Each time you output a new set of answers, you should precede it with the meta statement #attempt. Only your last attempt will be marked.
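Concretely, a non-probabilistic system that improves its hypothesis within the time limit might print something like the following, in which case only the second attempt would be marked:

    #attempt
    VALID(1)
    VALID(2)
    #attempt
    VALID(1)
    INVALID(2)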

Note that although all other output is ignored, it is available for you to view and might be useful for debugging purposes.

VM specification

The competition will be run on an Ubuntu 16.04 virtual machine with a 2GHz dual core processor and 2GB of RAM. Your learner will have 30s to complete each problem before it is timed out.

Example Tasks

Initially, there will be 8 problems in each track (probabilistic and non-probabilistic) for entrants to try. These 8 problems each have three different settings (easy, medium and hard), with different sizes of language bias. As the competition progresses, we will be adding more problems, so it is worth checking regularly for updates!

Non-probabilistic examples
Problem Name              | Prolog representation | ASP representation
Teleport                  | easy, medium, hard    | easy, medium, hard
Wall                      | easy, medium, hard    | easy, medium, hard
Gaps in the Floor         | easy, medium, hard    | easy, medium, hard
Transitive Links          | easy, medium, hard    | easy, medium, hard
Transitive Adjacent Links | easy, medium, hard    | easy, medium, hard
Unlocked                  | easy, medium, hard    | easy, medium, hard
Recursive Keys            | easy, medium, hard    | easy, medium, hard
Partial Keys              | easy, medium, hard    | easy, medium, hard
Non-OPL Transitive Links  | easy, medium, hard    | easy, medium, hard
No Backtracking           | easy, medium, hard    | easy, medium, hard
Hurdler, No Backtracking  | easy, medium, hard    | easy, medium, hard
Probabilistic examples
Problem Name                   | Prolog representation | ASP representation
Chewing Gum                    | easy, medium, hard    | easy, medium, hard
Probabilistic Dynamite         | easy, medium, hard    | easy, medium, hard
Spin                           | easy, medium, hard    | easy, medium, hard
Revisit                        | easy, medium, hard    | easy, medium, hard
Probabilistic Links            | easy, medium, hard    | easy, medium, hard
Probabilistically Broken Links | easy, medium, hard    | easy, medium, hard
Hurdler                        | easy, medium, hard    | easy, medium, hard
Probabilistic Teleporter       | easy, medium, hard    | easy, medium, hard
Conveyor Belts                 | easy, medium, hard    | easy, medium, hard
Bomb                           | easy, medium, hard    | easy, medium, hard
Delayed Explosions             | easy, medium, hard    | easy, medium, hard

Example System

An example .tar.gz archive can be found here. The run script in this example produces random answers.

If something goes wrong...

As this is the first year that the competition is running, there may be the odd issue with the system. If you encounter any problems, please email mark.law09@imperial.ac.uk with details of what has gone wrong.

Finally... GOOD LUCK! We hope you enjoy the competition!