Guest editors' foreword

Hans Ulrich Simon, Etsuji Tomita
Theoretical Computer Science, 2007 (Elsevier).
This special issue contains six articles that were among the best presented at the conference. The authors of these papers were invited by the special issue editors to submit completed versions of their work, and the submissions then underwent the usual refereeing process of Theoretical Computer Science.

"Learning" is a complex phenomenon that is studied in different scientific disciplines. A computer program with the ability to "learn" contains mechanisms for gathering and evaluating information and, consequently, for improving its performance.
Algorithmic Learning Theory provides a mathematical foundation for the study of learning programs. It is concerned with the design and analysis of learning algorithms. The analysis proceeds in a formal model, so as to provide measures for the performance of a learning algorithm or for the inherent hardness of a given learning problem. The variety of applications for algorithms that learn is reflected in the variety of formal learning models. For instance, we can distinguish between a passive mode of "learning from examples" and active modes of learning in which the algorithm has more control over the information that is gathered. Within learning from examples, a further choice is whether or not statistical assumptions are imposed on the sequence of examples. Furthermore, different models employ different success criteria (such as "approximate learning" versus "exact learning").

The papers in this special issue offer a broad view of current research in the field, including studies on several learning models (such as Bayesian and statistical models, PAC-learning, query-learning, inductive inference, and defensive forecasting). Below we briefly introduce each of the papers and provide, along the way, some background information about the respective underlying learning models.

Bayesian learning refers to the problem of inferring the unknown parameters of a distribution (chosen from a known parameterized class of distributions). Typically, the a priori distribution gives support to a wide range of parameter values, whereas the a posteriori distribution is peaked around the true ones (a toy illustration of this update appears at the end of this foreword). Variational Bayesian learning results from Bayesian learning by introducing a simplifying assumption (in the case where there are hidden variables) that makes the approach computationally more tractable. Empirically, it is known to have good generalization performance in many applications. Watanabe and Watanabe provide additional theoretical support by proving lower and upper bounds on the stochastic complexity in the variational Bayesian learning of the mixture of exponential families.

In the PAC-learning model, the learner receives as input training examples, drawn at random according to an unknown distribution and labeled according to an unknown target function f, and returns a "hypothesis" h that (with high probability of success) is a close approximation of f. While the first papers on PAC-learning focused on binary classification problems with the probability of misclassification as the underlying pseudo-metric, the basic model has since been extended in many directions. The paper by Palmer and Goldberg deals with so-called Probabilistic Deterministic Finite State Automata (PDFA). A PDFA, in contrast to a DFA (its deterministic counterpart), performs random state transitions and thus represents a probability distribution over strings (a small sampling sketch also appears at the end of this foreword). In a recent …
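To make the prior-to-posterior update mentioned above concrete, here is a minimal Python sketch of Bayesian inference for the bias of a coin under a conjugate Beta prior. The true parameter, the uniform prior, and the sample size are all hypothetical choices made for illustration; they are not taken from any of the papers in this issue.

```python
# Toy Bayesian inference for a Bernoulli parameter (the bias of a coin)
# under the conjugate Beta prior.  All numbers are hypothetical; the point
# is only that a broad prior concentrates into a peaked posterior.

import random

random.seed(0)

true_p = 0.7            # unknown "true" parameter (used here only to simulate data)
alpha, beta = 1.0, 1.0  # Beta(1, 1) prior: uniform, supports every p in [0, 1]

samples = [1 if random.random() < true_p else 0 for _ in range(200)]

for x in samples:
    # Conjugacy: the posterior after a Bernoulli observation is again a Beta
    alpha += x
    beta += 1 - x

post_mean = alpha / (alpha + beta)
post_var = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
print(f"posterior mean ~ {post_mean:.3f}, posterior variance ~ {post_var:.5f}")
```

With 200 observations the posterior mean lands near the simulated true bias, and the shrinking variance is precisely the "peaking" of the a posteriori distribution around the true parameter value described above.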
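Likewise, the PDFA notion can be illustrated by a small string sampler. The automaton below (its states, alphabet, transition probabilities, and stopping probabilities) is an invented toy, not the construction studied by Palmer and Goldberg; it only shows that symbols are emitted at random while the successor state for each (state, symbol) pair is uniquely determined, so the automaton induces a probability distribution over strings.

```python
# Minimal sketch of a Probabilistic Deterministic Finite State Automaton
# (PDFA) used as a string sampler.  The automaton is a hypothetical toy.

import random

random.seed(1)

# transition[state][symbol] = (probability of emitting symbol, unique next state)
transition = {
    "q0": {"a": (0.5, "q1"), "b": (0.3, "q0")},  # remaining mass 0.2: stop
    "q1": {"a": (0.2, "q0"), "b": (0.6, "q1")},  # remaining mass 0.2: stop
}
STOP_PROB = {"q0": 0.2, "q1": 0.2}

def sample_string(start="q0"):
    state, out = start, []
    while True:
        r = random.random()
        if r < STOP_PROB[state]:
            return "".join(out)        # halt: one string has been generated
        r -= STOP_PROB[state]
        for symbol, (p, nxt) in transition[state].items():
            if r < p:
                out.append(symbol)     # random choice of symbol ...
                state = nxt            # ... but deterministic successor state
                break
            r -= p

print([sample_string() for _ in range(5)])
```

Sampling repeatedly approximates the induced distribution over strings; learning a PDFA from data amounts to (approximately) recovering such a transition table from sampled strings.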
doi:10.1016/j.tcs.2007.07.022