A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2017; you can also visit the original URL.
The file type is application/pdf.
John Ian Langford (1935–2013)
2014
Journal of Applied Crystallography
Later, an important contribution of Ian Langford to line-broadening theory and practice was his interpretation of direction-dependent line breadths in terms of the mean dimensions and shape of crystallites ...
Langford & Louër, 1996). Ian's interests and passions extended well beyond powder diffraction and science. ...
His study of the granite millstones of Shropshire and adjoining counties was recently published (Langford, 2011). Ian was a patient and very friendly man. ...
doi:10.1107/s1600576714025527
fatcat:xwdkrz2b3rf6vobeag2hzhw5ry
Martha Langford and John Langford, A Cold War Tourist and His Camera. Montreal and Kingston, McGill-Queen's University Press, 2011, 208 pp., 85 colour photos, $39.95, ISBN: 9780773538214
2011
RACAR Revue d art canadienne
A collaboration between art historian Martha Langford and her brother, John, a political scientist, A Cold War Tourist focuses on a series of photographs taken by their father, Warren, in the 1960s while ...
doi:10.7202/1066745ar
fatcat:bcmsq3xxnrblzeslosajnxn6hy
Review of Lewis and Langford: Symbolic Logic
1954
Mathematics Magazine
Not for the beginner, but a treasury of insights into modern logical theory for the mature student, this reprint without alteration of a book which has been for ...
doi:10.2307/3029107
fatcat:mnrtslvlrrd5ti3zvikyrdnr5u
Slow Learners are Fast
[article]
2009
arXiv
pre-print
© 2009 John Langford, Alex Smola, and Martin Zinkevich.
... Ideally, one could design code optimized for quadratic representations, and never explicitly generate the whole example ...
Implementation: The code was written in Java, although several of the fundamentals were based upon VW (Langford et al., 2007), that is, hashing and the choice of loss function. ...
arXiv:0911.0491v1
fatcat:apifcdawpjbwhawnuxd7m5nuwa
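The snippet above mentions VW-style hashing and on-the-fly quadratic representations. As a minimal sketch of those two ideas (not the paper's actual Java implementation; the function names and bin count here are illustrative), the hashing trick avoids ever materializing the full cross-product of features:

```python
import hashlib

def hash_features(pairs, num_bins=2**10):
    """Hashing trick: map (name, value) features into a fixed-size
    vector, with a sign hash to reduce collision bias."""
    vec = [0.0] * num_bins
    for name, value in pairs:
        h = int(hashlib.md5(name.encode()).hexdigest(), 16)
        idx = h % num_bins
        sign = 1.0 if (h >> 1) % 2 == 0 else -1.0
        vec[idx] += sign * value
    return vec

def hash_quadratic(pairs, num_bins=2**10):
    """Quadratic (pairwise) features hashed on the fly, so the
    whole crossed example is never explicitly generated at once."""
    crossed = [(a + "^" + b, va * vb)
               for a, va in pairs for b, vb in pairs]
    return hash_features(crossed, num_bins)
```

The dimensionality stays fixed at `num_bins` no matter how many raw or crossed features appear, which is what makes the approach attractive for large sparse problems.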
Out of the Earth. G. B. Langford
1955
The Journal of geology
LANGFORD. University of Toronto Press, 1954. ...
doi:10.1086/626279
fatcat:gxoqcjpsnjgejohw2lzlclinni
John Squires, Malcolm Langford, Brett Thiele (eds), The Road to a Remedy: Current Issues in the Adjudication of Economic, Social and Cultural Rights
2006
QUT Law Review
Langford, Brett Thiele (eds), The Road to a Remedy: Current Issues in the Adjudication of Economic, Social and Cultural Rights (Australian Human Rights Centre, Sydney and Centre for Housing Rights and ...
doi:10.5204/qutlr.v6i1.197
fatcat:gm4s24db6jdizodx4usvu5ke3q
Federated Residual Learning
[article]
2020
arXiv
pre-print
Q., Dasgupta, A., Langford, J., Smola, A. J., and Attenberg, J. Feature hashing for large scale multitask learning. ...
Zinkevich, M., Langford, J., and Smola, A. J. Slow learners are fast. In Advances in Neural Information Processing Systems, pp. 2331–2339, 2009. ...
arXiv:2003.12880v1
fatcat:omtz7hchhvg6haepck3r4o5juq
Normalized Online Learning
[article]
2013
arXiv
pre-print
We introduce online learning algorithms which are independent of feature scales, proving regret bounds dependent on the ratio of scales existent in the data rather than the absolute scale. This has several useful effects: there is no need to pre-normalize data, the test-time and test-space complexity are reduced, and the algorithms are more robust.
arXiv:1305.6646v1
fatcat:22on7quxvrggllqymjestxnwri
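As a rough illustration of the idea in this abstract (a simplified sketch, not the paper's actual normalized update rule), plain SGD can be made insensitive to per-feature scale by tracking the largest magnitude seen for each feature and dividing it out of the update:

```python
def normalized_sgd(stream, dim, eta=0.1):
    """Sketch of scale-insensitive online gradient descent for
    squared loss: normalize each coordinate's update by the
    largest |x_i| observed so far, so pre-scaling the data
    becomes unnecessary."""
    w = [0.0] * dim
    s = [1e-12] * dim   # per-feature max |x_i| seen so far
    for x, y in stream:
        for i, xi in enumerate(x):
            if abs(xi) > s[i]:
                s[i] = abs(xi)
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y                      # squared-loss gradient factor
        for i, xi in enumerate(x):
            w[i] -= eta * err * xi / (s[i] * s[i])
    return w
```

Multiplying a feature by a constant leaves the sequence of predictions unchanged, which is the scale-independence property the abstract claims (here only in caricature).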
Para-active learning
[article]
2013
arXiv
pre-print
Training examples are not all equally informative. Active learning strategies leverage this observation in order to massively reduce the number of examples that need to be labeled. We leverage the same observation to build a generic strategy for parallelizing learning algorithms. This strategy is effective because the search for informative examples is highly parallelizable and because we show that its performance does not deteriorate when the sifting process relies on a slightly outdated ... Parallel active learning is particularly attractive to train nonlinear models with non-linear representations because there are few practical parallel learning algorithms for such models. We report preliminary experiments using both kernel SVMs and SGD-trained neural networks.
arXiv:1310.8243v1
fatcat:daxeg43bzjbmndepla6itmefhu
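The sifting idea in this abstract can be caricatured in a few lines (a hypothetical sketch for a linear model with a margin-based uncertainty test; the paper's experiments use kernel SVMs and neural networks): workers filter a batch in parallel, and only the informative examples reach the serial learner.

```python
from concurrent.futures import ThreadPoolExecutor

def uncertain(example, w, tau=0.5):
    """A sifter flags an example as informative when the current
    (possibly slightly stale) model's margin on it is small."""
    x, y = example
    margin = abs(sum(wi * xi for wi, xi in zip(w, x)))
    return margin < tau

def sift_parallel(batch, w, workers=4):
    """Filter a batch with parallel sifters; only the selected
    examples are forwarded to the learner."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        flags = list(pool.map(lambda ex: uncertain(ex, w), batch))
    return [ex for ex, keep in zip(batch, flags) if keep]
```

Because each sifter only reads the (stale) weight vector, the filtering step parallelizes trivially, which is the source of the speedup the abstract describes.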
Parallel Online Learning
[article]
2011
arXiv
pre-print
This is accomplished via pairwise training concerning which of two ads was clicked on and element-wise evaluation with an offline policy evaluator (Langford et al., 2008). ...
The Vowpal Wabbit (VW) software (Langford et al., 2007) provides an existence proof that it is possible to have a fast fully online implementation which loads data as it learns. ...
arXiv:1103.4204v1
fatcat:qlfskhjyzjavpikk7epaur4uq4
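The "offline policy evaluator" mentioned in the snippet estimates a new policy's value from logged interaction data. A standard importance-weighting estimator in that spirit (a sketch; not necessarily the exact estimator of Langford et al., 2008) looks like:

```python
def ips_value(logged, policy):
    """Inverse-propensity-score estimate of a new policy's value
    from logged (context, action, reward, prob) tuples collected
    by a different logging policy."""
    total = 0.0
    for x, a, r, p in logged:
        if policy(x) == a:
            total += r / p    # reweight by the logging probability
    return total / len(logged)
```

Dividing by the probability `p` with which the logging policy chose the action makes the estimate unbiased, so new policies can be compared without ever deploying them.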
PAC-Bayes & Margins
2002
Neural Information Processing Systems
There are two mathematical flavors of margin bound, dependent upon the weights wᵢ of the vote and the features xᵢ that the vote is taken over: 1. Those ([12], [1]) with a bound on Σᵢ wᵢ² and Σᵢ xᵢ² ("l2/l2" bounds). 2. Those ([11], [6]) with a bound on Σᵢ |wᵢ| and maxᵢ |xᵢ| ("l1/l∞" bounds).
dblp:conf/nips/LangfordS02
fatcat:6bomyokiqnb2zinaaxymhmupbq
Search-based Structured Prediction
[article]
2009
arXiv
pre-print
We present Searn, an algorithm for integrating search and learning to solve complex structured prediction problems such as those that occur in natural language, speech, computational biology, and vision. Searn is a meta-algorithm that transforms these complex problems into simple classification problems to which any binary classifier may be applied. Unlike current algorithms for structured learning that require decomposition of both the loss function and the feature functions over the predicted structure, Searn is able to learn prediction functions for any loss function and any class of features. Moreover, Searn comes with a strong, natural theoretical guarantee: good performance on the derived classification problems implies good performance on the structured prediction problem.
arXiv:0907.0786v1
fatcat:rw6z6s6ehbbfhlhkr7kdglwple
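The meta-algorithm structure described in this abstract can be sketched as a loose skeleton (the `make_examples` and `train_classifier` hooks are hypothetical placeholders, and the real Searn uses cost-sensitive examples and a more careful policy interpolation than this stochastic mix):

```python
import random

def searn(initial_policy, train_classifier, make_examples,
          rounds=3, beta=0.3):
    """Skeleton of a Searn-style meta-algorithm: repeatedly turn
    the structured problem into classification examples under the
    current policy, learn a classifier, and mix it into the policy."""
    policy = initial_policy
    for _ in range(rounds):
        examples = make_examples(policy)    # roll-in with current policy
        h = train_classifier(examples)      # any simple classifier
        old = policy
        def policy(state, old=old, h=h):    # stochastic interpolation
            return h(state) if random.random() < beta else old(state)
    return policy
```

The point the abstract makes is visible in the skeleton: the structured problem only ever touches the learner through ordinary classification examples, so any classifier can be plugged in.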
Normalized Online Learning
[article]
2014
arXiv
pre-print
We introduce online learning algorithms which are independent of feature scales, proving regret bounds dependent on the ratio of scales existent in the data rather than the absolute scale. This has several useful effects: there is no need to pre-normalize data, the test-time and test-space complexity are reduced, and the algorithms are more robust.
arXiv:1408.2065v1
fatcat:e3y7s7okejg7vlygxqymfldd74
Agnostic Active Learning Without Constraints
[article]
2010
arXiv
pre-print
Langford. Importance weighted active learning. In Twenty-Sixth International Conference on Machine Learning, 2009.
[CAL94] D. Cohn, L. Atlas, and R. Ladner. ...
Langford. Agnostic active learning. In Twenty-Third International Conference on Machine Learning, 2006.
Balcan, A. Broder, and T. Zhang. Margin based active learning. ...
arXiv:1006.2588v1
fatcat:g454v3uydbf77edt5mblmfr6p4
Interaction-Grounded Learning
[article]
2021
arXiv
pre-print
Correspondence to: Tengyang Xie <tx10@illinois.edu>, John Langford <jcl@microsoft.com>, Paul Mineiro <pmineiro@microsoft.com>, Ida Momennejad <idamo@microsoft.com>. ...
Combining these lemmas with an argument similar to that in (Langford & Zhang, 2008), we establish a proof for Theorem 2. ...
arXiv:2106.04887v2
fatcat:h6lwbjlq3jcxnaqfx54h7luks4
Showing results 1 — 15 out of 15,131 results