223 Hits in 1.1 sec

Document Classification Using a Finite Mixture Model [article]

Hang Li, Kenji Yamanishi
1997 arXiv   pre-print
(Tanner and Wong, 1987; Yamanishi, 1996), an algorithm to efficiently approximate the Bayes estimator of P(k_j | c_i). ...
arXiv:cmp-lg/9705005v1 fatcat:ybrkovlmwfhyvketkeifde5aqe

Probably almost discriminative learning

Kenji Yamanishi
1995 Machine Learning  
Basic problem: The problem of learning stochastic rules has recently come to be widely discussed in computational learning theory (see, for example, Kearns & Schapire, 1994; Yamanishi, 1992a; Yamanishi ...). ... A rule of this form is called a stochastic rule with finite partitioning (Yamanishi, 1992a). ...
doi:10.1007/bf00993820 fatcat:7mdbuudxgrenpetelfqki3xtve

A Learning Criterion for Stochastic Rules [chapter]

Kenji Yamanishi
1990 Colt Proceedings 1990  
The learning criterion proposed in this paper follows the definition in Yamanishi (1990b). ... e.g., inferring decision trees (Quinlan & Rivest, 1989), shape reconstruction (Pednault, 1989), shape recognition (Segen, 1989), classification rules with hierarchical parameter structures (Yamanishi ...
doi:10.1016/b978-1-55860-146-8.50008-4 fatcat:rltd2hub5vh63jl4np6up7y6zi

High-dimensional Penalty Selection via Minimum Description Length Principle [article]

Kohei Miyaguchi, Kenji Yamanishi
2018 arXiv   pre-print
There exists another approach to the minimization of LNMLs, in which a stochastic minimization algorithm is proposed [Miyaguchi et al. (2017)]. ... There also exist a considerable number of studies that relate error bounds to the MDL principle, including (but not limited to) [Barron and Cover (1991)], [Yamanishi (1992)] and [Chatterjee and Barron ...
arXiv:1804.09904v1 fatcat:j55lamgznbe77jyxsnwfvzysaa

Distributed Cooperative Bayesian Learning Strategies

Kenji Yamanishi
1999 Information and Computation  
doi:10.1006/inco.1998.2753 fatcat:d4far2np6zflhlzz67k4gjphsq

Summarizing Finite Mixture Model with Overlapping Quantification

Shunki Kyoya, Kenji Yamanishi
2021 Entropy  
Finite mixture models are widely used for modeling and clustering data. When they are used for clustering, they are often interpreted by regarding each component as one cluster. However, this assumption may be invalid when the components overlap, which raises the issue of analyzing such overlaps to understand the models correctly. The primary purpose of this paper is to establish a theoretical framework for interpreting overlapping mixture models by estimating how they overlap, using measures of information such as entropy and mutual information. This is achieved by merging components to regard multiple components as one cluster and summarizing the merging results. First, we propose three conditions that any merging criterion should satisfy. Then, we investigate whether several existing merging criteria satisfy the conditions and modify them to fulfill more of the conditions. Second, we propose a novel concept named clustering summarization to evaluate the merging results, with which we can quantify how overlapped and biased the clusters are, using mutual-information-based criteria. Using artificial and real datasets, we empirically demonstrate that our methods of modifying criteria and summarizing results are effective for understanding the cluster structures. We therefore give a new view of interpretability/explainability for model-based clustering.
doi:10.3390/e23111503 pmid:34828201 pmcid:PMC8622449 fatcat:64uxyo5pl5bdxen5zhxzqvn7gy
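
The overlap quantification described in this abstract can be illustrated with a minimal sketch (our own illustration under simplifying assumptions, not the authors' code): for a fitted Gaussian mixture, the average entropy of the posterior component assignments is near zero when components are well separated and approaches ln k when all k components overlap completely.

```python
# Minimal sketch (not the paper's implementation): quantify component
# overlap in a Gaussian mixture via the entropy of posterior assignments.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated blobs vs. two overlapping blobs.
separated = np.vstack([rng.normal(-5, 1, (200, 2)), rng.normal(5, 1, (200, 2))])
overlapping = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])

def mean_assignment_entropy(X, k=2):
    """Average entropy of p(cluster | x); higher means more overlap."""
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    resp = gmm.predict_proba(X)  # (n, k) posterior responsibilities
    return -np.mean(np.sum(resp * np.log(resp + 1e-12), axis=1))

print(mean_assignment_entropy(separated))    # close to 0 (nats)
print(mean_assignment_entropy(overlapping))  # closer to ln 2 ~ 0.69
```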

Mixture Complexity and Its Application to Gradual Clustering Change Detection [article]

Shunki Kyoya, Kenji Yamanishi
2020 arXiv   pre-print
... 2005; Yamanishi, 2013, 2019) have been invented to select the cluster size. ... We consider the problem of detecting changes in the cluster structure; dynamic model selection (DMS) (Yamanishi and Maruyama, 2005, 2007; Hirai and Yamanishi, 2012) addressed this problem by observing ...
arXiv:2007.07467v1 fatcat:esc7lk6sxfcmtj5if4wvxptgli

A learning criterion for stochastic rules

Kenji Yamanishi
1992 Machine Learning  
The learning criterion proposed in this paper follows the definition in Yamanishi (1990b). ... e.g., inferring decision trees (Quinlan & Rivest, 1989), shape reconstruction (Pednault, 1989), shape recognition (Segen, 1989), classification rules with hierarchical parameter structures (Yamanishi ...
doi:10.1007/bf00992676 fatcat:st7dsalrxbhudhfcmhsjgxmcfy

Generalization Error Bound for Hyperbolic Ordinal Embedding [article]

Atsushi Suzuki, Atsushi Nitanda, Jing Wang, Linchuan Xu, Marc Cavazza, Kenji Yamanishi
2021 arXiv   pre-print
Hyperbolic ordinal embedding (HOE) represents entities as points in hyperbolic space so that they agree as well as possible with given constraints of the form "entity i is more similar to entity j than to entity k". It has been experimentally shown that HOE can effectively obtain representations of hierarchical data such as knowledge bases and citation networks, owing to hyperbolic space's exponential growth property. However, its theoretical analysis has been limited to ideal, noiseless settings, and its generalization error, the price paid for hyperbolic space's exponential representation ability, has not been guaranteed. The difficulty is that existing generalization error bound derivations for ordinal embedding based on the Gramian matrix do not work in HOE, since hyperbolic space is not an inner-product space. In this paper, through our novel characterization of HOE with decomposed Lorentz Gramian matrices, we provide a generalization error bound of HOE for the first time, which is at most exponential with respect to the embedding space's radius. Our comparison between the bounds of HOE and Euclidean ordinal embedding shows that HOE's generalization error is a reasonable cost for its exponential representation ability.
arXiv:2105.10475v1 fatcat:drx2h4ckuna45kqrbz72w6hppi
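
As a small illustration of the setting (ours, not the paper's implementation), the following sketch computes geodesic distances in the Lorentz (hyperboloid) model of hyperbolic space and checks an ordinal constraint of the form "i is more similar to j than to k"; the function names and sample points are our own.

```python
# Minimal sketch: distances in the Lorentz (hyperboloid) model, used to
# check an ordinal constraint d(i, j) < d(i, k).
import numpy as np

def lorentz_inner(u, v):
    """Lorentzian inner product <u, v>_L = -u0*v0 + sum_i ui*vi."""
    return -u[0] * v[0] + np.dot(u[1:], v[1:])

def hyperbolic_dist(u, v):
    """Geodesic distance on the hyperboloid {x : <x, x>_L = -1, x0 > 0}."""
    return np.arccosh(np.clip(-lorentz_inner(u, v), 1.0, None))

def lift(x):
    """Lift a Euclidean point onto the hyperboloid (x0 fixed by the constraint)."""
    return np.concatenate([[np.sqrt(1.0 + np.dot(x, x))], x])

i = lift(np.array([0.1, 0.0]))
j = lift(np.array([0.2, 0.1]))
k = lift(np.array([2.0, -1.0]))
# The ordinal constraint is satisfied if i is closer to j than to k.
print(hyperbolic_dist(i, j) < hyperbolic_dist(i, k))  # True
```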

Discovering Emerging Topics in Social Streams via Link Anomaly Detection [article]

Toshimitsu Takahashi, Ryota Tomioka, Kenji Yamanishi
2011 arXiv   pre-print
Detection of emerging topics is now receiving renewed interest, motivated by the rapid growth of social networks. Conventional term-frequency-based approaches may not be appropriate in this context, because the information exchanged is not only text but also images, URLs, and videos. We focus on the social aspects of these networks: the links between users that are generated dynamically, intentionally or unintentionally, through replies, mentions, and retweets. We propose a probability model of the mentioning behaviour of a social network user, and propose to detect the emergence of a new topic from the anomaly measured through the model. We combine the proposed mention anomaly score with a recently proposed change-point detection technique based on the Sequentially Discounting Normalized Maximum Likelihood (SDNML), or with Kleinberg's burst model. Aggregating anomaly scores from hundreds of users, we show that we can detect emerging topics based only on the reply/mention relationships in social network posts. We demonstrate our technique on a number of real datasets we gathered from Twitter. The experiments show that the proposed mention-anomaly-based approaches can detect new topics at least as early as the conventional term-frequency-based approach, and sometimes much earlier when the keyword is ill-defined.
arXiv:1110.2899v1 fatcat:wv5azphfwjdmrbolt24m3cljuq
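
As a hedged sketch of the general idea (the paper's actual mention model and the SDNML change-point detector are more elaborate), one can score each post by the negative log of a sequentially estimated predictive probability of its mention count; the geometric model and the smoothing parameters alpha and beta below are our own assumptions.

```python
# Minimal sketch (assumptions ours, not the paper's exact model): score each
# post by -log of a sequentially estimated predictive probability of its
# mention count; high scores flag anomalous mentioning behaviour.
from collections import defaultdict
import math

def mention_anomaly_scores(posts, alpha=0.5, beta=0.5):
    """posts: iterable of (user, n_mentions) in arrival order.
    Per-user Geometric(p) model with a smoothed, sequentially updated p;
    returns one anomaly score per post."""
    totals = defaultdict(lambda: [0.0, 0.0])  # user -> [num posts, num mentions]
    scores = []
    for user, n in posts:
        m, s = totals[user]
        p = (m + alpha) / (m + s + alpha + beta)  # smoothed estimate
        prob = ((1.0 - p) ** n) * p               # P(n mentions) under Geometric(p)
        scores.append(-math.log(prob))
        totals[user][0] += 1.0
        totals[user][1] += n
    return scores

stream = [("alice", 1), ("alice", 0), ("bob", 2), ("alice", 8)]  # toy stream
print(mention_anomaly_scores(stream))  # the burst of 8 mentions scores highest
```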

Descriptive Dimensionality and Its Characterization of MDL-based Learning and Change Detection [article]

Kenji Yamanishi
2019 arXiv   pre-print
Yamanishi and Fukushima [26] derived upper bounds on error probabilities for the MDL for this scenario. ... As for model change detection, Yamanishi and Maruyama formulated the problem of dynamic model selection (DMS) when the underlying probabilistic model changes over time [28, 27]. ...
arXiv:1910.11540v1 fatcat:sgjan5qmrfaipcbclia5jzvhai

Exact Calculation of Normalized Maximum Likelihood Code Length Using Fourier Analysis [article]

Atsushi Suzuki, Kenji Yamanishi
2018 arXiv   pre-print
Recently, Hirai and Yamanishi non-asymptotically calculated the NML code length for several models in the exponential family [4]. ... In contrast, our theorem does not involve these assumptions and allows the Fisher information to converge to zero or diverge towards the boundary. 2) Exact Calculation Formula for the Exponential Family: Hirai and Yamanishi ...
arXiv:1801.03705v1 fatcat:gqa6dany2fh2hioppxjuboevou
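
For context, the NML code length being calculated here is the standard one from the MDL literature; in our notation, with maximum-likelihood estimator \hat\theta, it is the shortest-possible worst-case code length, and its second term (the parametric complexity) is what exact calculations must evaluate:

```latex
% NML code length for data x^n under a parametric family p(.; theta);
% the integral runs over all possible data sequences y^n.
\[
  L_{\mathrm{NML}}(x^n)
  \;=\; -\log p\bigl(x^n ; \hat\theta(x^n)\bigr)
  \;+\; \log \int p\bigl(y^n ; \hat\theta(y^n)\bigr)\, dy^n .
\]
```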

Mining product reputations on the Web

Satoshi Morinaga, Kenji Yamanishi, Kenji Tateishi, Toshikazu Fukushima
2002 Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '02  
doi:10.1145/775047.775098 dblp:conf/kdd/MorinagaYTF02 fatcat:ma2zoenvb5fgpixe7s3bf5dmma

Topic analysis using a finite mixture model

Hang Li, Kenji Yamanishi
2003 Information Processing & Management  
For example, Li and Yamanishi propose to employ in text classification a mixture model (Li and Yamanishi, 1997) defined over categories; ... proposes using in information retrieval a joint distribution which ... (e.g., ..., 1996; Li and Yamanishi, 1999; Joachims, 1998; Weiss et al., 1999; Nigam et al., 2000). ...
doi:10.1016/s0306-4573(02)00035-3 fatcat:ill5ma2xbzfurhrm7b5sfgtxs4

Extended Stochastic Complexity and Minimax Relative Loss Analysis [chapter]

Kenji Yamanishi
1999 Lecture Notes in Computer Science  
We are concerned with the problem of sequential prediction using a given hypothesis class of continuously-many prediction strategies. An effective performance measure is the minimax relative cumulative loss (RCL), which is the minimum of the worst-case difference between the cumulative loss for any prediction algorithm and that for the best assignment in a given hypothesis class. The purpose of this paper is to evaluate the minimax RCL for general continuous hypothesis classes under general losses. We first derive asymptotic upper and lower bounds on the minimax RCL to show that they match (k/(2c)) ln m within error of o(ln m), where k is the dimension of parameters for the hypothesis class, m is the sample size, and c is a constant depending on the loss function. We thereby show that the cumulative loss attaining the minimax RCL asymptotically coincides with the extended stochastic complexity (ESC), which is an extension of Rissanen's stochastic complexity (SC) into the decision-theoretic scenario. We further derive non-asymptotic upper bounds on the minimax RCL, both for parametric and nonparametric hypothesis classes. We apply the analysis to the regression problem to derive the tightest worst-case cumulative loss bounds to date.
doi:10.1007/3-540-46769-6_3 fatcat:bn7t5e36uvh4tnj233anr2khgm
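
The asymptotic value quoted in the abstract can be written out explicitly; here L_A(x^m) denotes the cumulative loss of prediction algorithm A over a sample x^m and L_theta that of the hypothesis indexed by theta (notation ours, consistent with the abstract's k, m, and c):

```latex
% Asymptotic minimax relative cumulative loss (RCL) from the abstract:
% k = parameter dimension, m = sample size, c = loss-dependent constant.
\[
  \min_{A}\,\max_{x^m}\;
  \Bigl( L_A(x^m) \;-\; \min_{\theta} L_{\theta}(x^m) \Bigr)
  \;=\; \frac{k}{2c}\,\ln m \;+\; o(\ln m).
\]
```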
Showing results 1–15 of 223 results