ILP turns 20

Stephen Muggleton · Luc De Raedt · David Poole · Ivan Bratko · Peter Flach · Katsumi Inoue · Ashwin Srinivasan

Editors: Paolo Frasconi and Francesca Lisi.

S. Muggleton (corresponding author), Imperial College London, London, UK
L. De Raedt, Katholieke Universiteit Leuven, Leuven, Belgium
D. Poole, University of British Columbia, Vancouver, Canada, url: http://www.cs.ubc.ca/~poole/
I. Bratko, University of Ljubljana, Ljubljana, Slovenia
K. Inoue, National Institute of Informatics, Tokyo, Japan
P. Flach, University of Bristol, Bristol, UK
Abstract Inductive Logic Programming (ILP) is an area of Machine Learning which has now reached its twentieth year. Using the analogy of a human biography, this paper recalls the development of the subject from its infancy through childhood and teenage years. We show how in each phase ILP has been characterised by an attempt to extend theory and implementations in tandem with the development of novel and challenging real-world applications. Lastly, by projection we suggest directions for research which will help the subject come of age.
1 Introduction
The present paper was authored by members of a discussion panel at the twentieth
International Conference on Inductive Logic Programming. The topic of the panel was a discussion
of the achievements of the field to date and possible new directions for the future.
Discussions by email following the panel led to the present paper, in which an attempt has been
made to reach consensus among the sometimes disparate views of the panelists.
The first workshop on Inductive Logic Programming was held twenty years ago, and
brought together a diverse group of researchers working at the intersection of Machine
Learning and Logic Programming. This paper reflects on the achievements of the subject
to date and the challenges ahead. The subject has progressed through a number of distinct
phases which we describe in terms of its infancy, childhood and teenage years. In each phase
ILP has been characterised by an attempt to extend theory and implementations in tandem
with the development of novel and challenging real-world applications. The paper suggests
directions for future research which are aimed at helping the subject come of age into
adulthood.
This paper is organised as follows. Section 2 describes some of the motivations of the
field by situating it within the broader context of Machine Learning. This is exemplified by
various problems which benefit from a relational representation. Section 3.1 describes
several phases of the twenty year history of the field starting from the year of the first conference
and founding paper in 1991 and progressing through to the present day. Various possible
future directions for the field are then described in Sect. 4. We conclude the paper in
Sect. 5. Lastly, Sect. 6 provides a few pointers for further reading and summarises the
author contributions to the paper.
2 Motivation of ILP
The field of Inductive Logic Programming essentially combines Machine Learning with
logical knowledge representation. To understand the need for such a combination, consider
learning from the two datasets in Fig. 1 (from Poole and Mackworth 2010). Dataset (a) is
the sort used in traditional supervised and unsupervised learning. Standard textbook
supervised learning algorithms can learn a decision tree, a neural network, or a support vector
machine to predict User Action. A belief network learning algorithm can be used to learn a
representation of the distribution over the features.
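To make the contrast concrete, here is a minimal sketch of how a feature-vector dataset in the style of dataset (a) feeds directly into a standard attribute-value learner. The attribute names, values, and the choice of a simple one-rule (1R) classifier are illustrative assumptions, not taken from Fig. 1 or from any particular ILP system:

```python
from collections import Counter, defaultdict

# Hypothetical feature-vector dataset in the style of dataset (a):
# each row is a fixed-length vector of attribute values plus a class label.
# (Attribute names and values are invented for illustration.)
rows = [
    {"length": "long",  "thread": "new",      "label": "skips"},
    {"length": "short", "thread": "new",      "label": "reads"},
    {"length": "long",  "thread": "followup", "label": "skips"},
    {"length": "short", "thread": "followup", "label": "reads"},
    {"length": "short", "thread": "new",      "label": "reads"},
]

def one_r(rows, attributes, label_key="label"):
    """Learn a 1R rule: pick the single attribute whose value-to-majority-class
    mapping makes the fewest errors on the training rows."""
    best = None
    for attr in attributes:
        by_value = defaultdict(Counter)
        for r in rows:
            by_value[r[attr]][r[label_key]] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        errors = sum(r[label_key] != rule[r[attr]] for r in rows)
        if best is None or errors < best[2]:
            best = (attr, rule, errors)
    return best

attr, rule, errors = one_r(rows, ["length", "thread"])
print(attr, rule, errors)  # "length" separates the two classes perfectly here
```

The point is that every example is a single fixed-length vector, so no relationship between distinct objects needs to be represented.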
Dataset (b), from which we may want to predict what Joe likes, is different. Many of the
values in the table are meaningless names that cannot be used directly in supervised learning.
Instead, it is the relationship among the individuals in the world that provides the
generalizations from which to learn. Learning from such datasets is the core task of Inductive Logic
Programming (ILP) (Muggleton and De Raedt 1994; Lavrač and Džeroski 1993) mainly
because logic programs provide a good representation for the generalizations required to
make predictions. Logic programs are also more expressive than alternative representations
that are sometimes used today to cope with such data (such as network and graph-based
representations).
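The relational setting can be sketched in the same spirit. Below, facts about named individuals are stored as relations, and a candidate clause in the style of a logic program — here an invented hypothesis along the lines of likes(joe, X) :- resort(X), near(X, beach) — is scored by how many positive and negative examples it covers, the basic coverage test at the heart of most ILP systems. All predicate and constant names are hypothetical placeholders, not the contents of dataset (b):

```python
# Hypothetical relational dataset in the style of dataset (b):
# background knowledge is a set of ground facts over named individuals.
facts = {
    ("resort", "r1"), ("resort", "r2"),
    ("near", "r1", "beach"), ("near", "r2", "mountain"),
}
positives = [("joe", "r1")]  # Joe likes r1
negatives = [("joe", "r2")]  # Joe does not like r2

def covers(example, facts):
    """Coverage test for the invented candidate clause
    likes(Person, X) :- resort(X), near(X, beach)."""
    person, place = example
    return ("resort", place) in facts and ("near", place, "beach") in facts

pos_covered = sum(covers(e, facts) for e in positives)
neg_covered = sum(covers(e, facts) for e in negatives)
print(pos_covered, neg_covered)  # 1 0: covers the positive, rejects the negative
```

Here the individual names carry no information by themselves; it is the relations among them ("near the beach") that the hypothesis generalises over, which is exactly what feature-vector learners cannot express directly.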
Fig. 1 Two datasets. Dataset (a) represents input to a traditional feature-based learner in which each example
can be viewed as a vector of features describing a single object. By contrast, dataset (b) represents input to a
relational learner, in which each example describes the relationship between a group of named objects.

Lastly, a key goal of ILP is the generation of human-interpretable explanations. It is
usually straightforward to translate logic programs into a series of easily understandable
sentences which can be understood by a domain expert. This is not the case in many other
approaches within Machine Learnin (...truncated)