1,675 Hits in 3.2 sec

Efficient Error Setting for Subspace Miners [chapter]

Eran Shaham, David Sarne, Boaz Ben-Moshe
2014 Lecture Notes in Computer Science  
For many miners, a key input parameter is the error used, which greatly affects the quality, quantity, and coherency of the mined clusters.  ...  The paper presents a new method for automatically setting the error to the value that maximizes the number of clusters mined.  ...  Also, we are grateful to Andrew Rosenberg for permission to use the ClusterEvaluator package [26], and for enlightening remarks.  ... 
doi:10.1007/978-3-319-08979-9_1 fatcat:gdnonhh6ireppi3j3r7eexo7xq

A simple efficient density estimator that enables fast systematic search [article]

Jonathan R. Wells, Kai Ming Ting
2017 arXiv   pre-print
We identify that existing outlying aspects miners are restricted to datasets with small data size and dimensions because they employ a kernel density estimator, which is computationally expensive, for subspace  ...  This paper introduces a simple and efficient density estimator that enables fast systematic search.  ...  While a histogram is more efficient than KDE, it has larger estimation errors.  ... 
arXiv:1707.00783v2 fatcat:saoygrpfjzgdteqwbeuw7sq6ki
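The histogram-versus-KDE tradeoff this snippet mentions can be illustrated with a minimal sketch: a histogram density estimate answers queries in constant time after a single pass, while a Gaussian KDE touches every sample per query. The data and bandwidth below are hypothetical, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=1000)  # hypothetical 1-D sample

# Histogram estimator: O(n) to build once, O(1) per density query.
counts, edges = np.histogram(data, bins=30, density=True)

def hist_density(x):
    """Density at x from the precomputed histogram (0 outside its range)."""
    i = np.searchsorted(edges, x, side="right") - 1
    return counts[i] if 0 <= i < len(counts) else 0.0

# Gaussian KDE: O(n) work per query -- the cost the snippet calls expensive.
def kde_density(x, h=0.2):
    u = (x - data) / h
    return np.mean(np.exp(-0.5 * u * u) / np.sqrt(2 * np.pi)) / h

# Both estimates should land near the true N(0,1) density at 0 (about 0.399),
# with the histogram typically showing the larger estimation error.
print(hist_density(0.0), kde_density(0.0))
```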

Dictionary pruning in sparse unmixing of hyperspectral data

Marian-Daniel Iordache, Jose M. Bioucas-Dias, Antonio Plaza
2012 2012 4th Workshop on Hyperspectral Image and Signal Processing (WHISPERS)  
This paper proposes a methodology for obtaining such a dictionary pruning. The efficiency of the method is assessed using both simulated and real hyperspectral data.  ...  Spectral unmixing is an important technique for remotely sensed hyperspectral data exploitation.  ...  Finally, the subspace errors for the two considered noise levels were 0.411 for noise with SNR = 30 dB and 0.412 for SNR = 40 dB.  ... 
doi:10.1109/whispers.2012.6874329 dblp:conf/whispers/IordacheBP12 fatcat:ouy3zhj2nrccvfjltvasixwswa

Subspace Clustering for Sequential Data

Stephen Tierney, Junbin Gao, Yi Guo
2014 2014 IEEE Conference on Computer Vision and Pattern Recognition  
Current subspace clustering techniques learn the relationships within a set of data and then use a separate clustering algorithm such as NCut for final segmentation.  ...  We propose Ordered Subspace Clustering (OSC) to segment data drawn from a sequentially ordered union of subspaces.  ...  error for each experiment.  ... 
doi:10.1109/cvpr.2014.134 dblp:conf/cvpr/TierneyGG14 fatcat:elu4xo6oyfdf5edt7aesclhsdu

Selection of Genetic and Phenotypic Features Associated with Inflammatory Status of Patients on Dialysis Using Relaxed Linear Separability Method

Leon Bobrowski, Tomasz Łukaszuk, Bengt Lindholm, Peter Stenvinkel, Olof Heimburger, Jonas Axelsson, Peter Bárány, Juan Jesus Carrero, Abdul Rashid Qureshi, Karin Luttropp, Malgorzata Debowska, Louise Nordfors (+3 others)
2014 PLoS ONE  
The RLS method allowed for substantial reduction of the dimensionality through omitting redundant features while maintaining the linear separability of data sets of patients with high and low levels of  ...  Identification of risk factors in patients with a particular disease can be analyzed in clinical data sets by using feature selection procedures of pattern recognition and data mining methods.  ...  Especially low errors of 1–2% (with standard deviation of 10%) obtained for the RLS method in the combined phenotypic and genotypic feature space (Table 5) demonstrate its good efficiency.  ... 
doi:10.1371/journal.pone.0086630 pmid:24489753 pmcid:PMC3904924 fatcat:icazbd5dfvdw7i5zthxvwo5yqe

Efficient Scalable Accurate Regression Queries in In-DBMS Analytics

Christos Anagnostopoulos, Peter Triantafillou
2017 2017 IEEE 33rd International Conference on Data Engineering (ICDE)  
the answers to two key query types that reveal dependencies between the values of different attributes: (i) mean-value queries and (ii) multivariate linear regression queries, both within specific data subspaces  ...  However, computing their exact answers leaves a lot to be desired in terms of efficiency and scalability.  ...  Problem Formulation Consider C1 and let us adopt the squared prediction error function (y − f(x, θ))² for penalizing errors in prediction.  ... 
doi:10.1109/icde.2017.111 dblp:conf/icde/Anagnostopoulos17 fatcat:2ag5t5d6avh23kqtxcym2rodei
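The squared prediction error (y − f(x, θ))² quoted in the snippet is the standard least-squares criterion; a minimal sketch with a linear model f(x, θ) = θᵀx on synthetic data (not the paper's in-DBMS query machinery) shows the fitted θ minimizing it:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 3))   # hypothetical data-subspace sample
theta_true = np.array([2.0, -1.0, 0.5])
y = X @ theta_true + 0.01 * rng.normal(size=200)

# Choose theta to minimize the summed squared prediction error (y - theta^T x)^2
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

mse = np.mean((y - X @ theta) ** 2)         # average squared prediction error
print(theta, mse)                            # theta close to theta_true, mse tiny
```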

Sparse and Low-Rank Subspace Data Clustering with Manifold Regularization Learned by Local Linear Embedding

Ye Yang, Yongli Hu, Fei Wu
2018 Applied Sciences  
Additionally, to solve the complex optimization problem involved in the proposed models, an efficient algorithm is also proposed.  ...  In all the data clustering methods, the subspace spectral clustering methods based on the self-expression model, e.g., the Sparse Subspace Clustering (SSC) and the Low Rank Representation (LRR) methods, have  ...  In the following experiments, we will report the parameter settings for each dataset.  ... 
doi:10.3390/app8112175 fatcat:tek2oxdomvfghdvapppxb53ww4

Noise estimation for hyperspectral subspace identification on FPGAs

Germán León, Carlos González, Rafael Mayo, Daniel Mozos, Enrique S. Quintana-Ortí
2018 Journal of Supercomputing  
We present a reliable and efficient FPGA implementation of a procedure for the computation of the noise estimation matrix, a key stage for subspace identification of hyperspectral images.  ...  Our modular implementation decomposes the QR factorization that comprises a significant part of the cost into a sequence of suboperations, which can be efficiently computed on an FPGA.  ...  The modeling error vector lies in ℝⁿ.  ... 
doi:10.1007/s11227-018-2425-3 fatcat:ikfevwcn6ndvjouvsmaoajxn4y

Page 771 of Mathematical Reviews Vol. , Issue 2003A [page]

2003 Mathematical Reviews  
Letting 𝒮 denote the set of all k-dimensional subspaces of V = GF(q)ⁿ, a family F ⊆ 𝒮 is called a Steiner structure S(t,k,n) if the elements of F are k-dimensional subspaces and each t-dimensional subspace of V  ...  94B Theory of error-correcting codes and error-detecting codes (235–249); Michael Brown [Michael Stephen Brown], Darrel Hankerson, Julio Lopez and Alfred Menezes, Software implementation of the NIST  ... 

Improved algorithm for hyperspectral data dimension determination

Jie CHEN, Lei DU, Jing LI, Yachao HAN, Zihong GAO
2017 IOP Conference Series: Earth and Environmental Science  
However, signal coexists with noise and the HySime (hyperspectral signal identification by minimum error) algorithm, which is based on the principle of least squares, is designed to calculate the estimated  ...  are more accurate and stable than the original HySime algorithm; secondly, the improved HySime algorithm results have better consistency under the different conditions compared with the classic noise subspace  ...  Signal subspace estimation is carried out for each of 100 simulated data sets, and the mean and standard deviation of the calculation results are used as the evaluation indicators.  ... 
doi:10.1088/1755-1315/57/1/012044 fatcat:z2eylsfr2ff3rnjh5hamocayrq

Large-scale predictive modeling and analytics through regression queries in data management systems

Christos Anagnostopoulos, Peter Triantafillou
2018 International Journal of Data Science and Analytics  
Predictive models like linear regression for prediction and logistic regression for classification are typically desired for exploring data subspaces of a d-dimensional real-valued data space of interest in ℝᵈ  ...  However, computing their exact answers leaves a lot to be desired in terms of efficiency and scalability.  ...  Acknowledgements The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper.  ... 
doi:10.1007/s41060-018-0163-5 fatcat:hasd6glgdrfj7gpofrn3ywh3d4

One-Dimensional Convolutional Auto-Encoder for Predicting Furnace Blowback Events from Multivariate Time Series Process Data—A Case Study

Carl Daniel Theunissen, Steven Martin Bradshaw, Lidia Auret, Tobias Muller Louw
2021 Minerals  
Modern industrial mining and mineral processing applications are characterized by large volumes of historical process data.  ...  While the one-dimensional auto-encoder was evaluated comparatively on a simulated furnace case study, the methodology used in this evaluation can be applied to industrial furnaces and other mineral processing  ... 
doi:10.3390/min11101106 fatcat:hr56gubxczealkte4a3kzd3pga

Mining adversarial patterns via regularized loss minimization

Wei Liu, Sanjay Chawla
2010 Machine Learning  
However, in several adversarial settings, the test set is deliberately constructed in order to increase the error rates of the classifier.  ...  We solve for the Nash equilibrium which is a pair of strategies (classifier weights, data transformations) from which there is no incentive for either the data miner or the adversary to deviate.  ...  The authors are also grateful to Lile Li and members of the Pattern Mining Group (University of Sydney) for proofreading the paper.  ... 
doi:10.1007/s10994-010-5199-2 fatcat:guukyeichrb4ldjhzrjrwtpr64

Discovering outlying aspects in large datasets

Nguyen Xuan Vinh, Jeffrey Chan, Simone Romano, James Bailey, Christopher Leckie, Kotagiri Ramamohanarao, Jian Pei
2016 Data mining and knowledge discovery  
, and (b) how to efficiently search through the exponentially large search space of all possible subspaces.  ...  Finally, we evaluate the effectiveness of different methods for outlying aspects discovery and demonstrate the utility of our proposed approach on both large real and synthetic data sets.  ...  HOS-Miner then searches for subspaces in which the OD score of q is higher than a distance threshold δ, i.e., significantly deviating from its neighbors.  ... 
doi:10.1007/s10618-016-0453-2 fatcat:h2z3zx7nn5cvhfxtmzcg3scdc4
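The subspace search the snippet attributes to HOS-Miner can be sketched as an exhaustive scan that flags subspaces where the OD score of a query point q exceeds a distance threshold δ. As a simplification of the actual HOS-Miner score, the OD score below is taken as the distance from q to its k-th nearest neighbor in the subspace; the data, threshold, and dimensions are hypothetical:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))            # hypothetical dataset
q = np.array([0.0, 0.0, 5.0, 0.0])       # query point, outlying in attribute 2

def od_score(q, X, dims, k=10):
    """Outlying degree of q in a subspace: distance to its k-th nearest neighbor."""
    d = np.linalg.norm(X[:, dims] - q[dims], axis=1)
    return np.sort(d)[k - 1]

# Scan all non-empty subspaces; keep those where the OD score exceeds delta
delta = 2.0
outlying = [dims for r in range(1, 5)
            for dims in combinations(range(4), r)
            if od_score(q, X, list(dims)) > delta]
print(outlying)  # subspaces containing attribute 2 should dominate
```

Exhaustive enumeration like this is exactly the exponential search-space problem the paper targets; real miners prune rather than scan all 2^d − 1 subspaces.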

MUSIC-CSR: Hyperspectral Unmixing via Multiple Signal Classification and Collaborative Sparse Regression

Marian-Daniel Iordache, Jose M. Bioucas-Dias, Antonio Plaza, Ben Somers
2014 IEEE Transactions on Geoscience and Remote Sensing  
The algorithm exploits the usual low dimensionality of the hyperspectral data sets.  ...  For each member, the Euclidean distance to the subspace was then computed. Fig. 2 shows the obtained projection errors for all members.  ...  The error indicator we use is the normalized Euclidean distance between one member of the library and the estimated subspace in which the data lives. • Active set detection (lines 6 and 7): Sorts the normalized  ... 
doi:10.1109/tgrs.2013.2281589 fatcat:7llbkr5vijh23btguqz2qhxoym
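The projection-error indicator the snippet describes (the normalized Euclidean distance between a library member and the estimated subspace in which the data lives) can be sketched with an SVD-based subspace estimate; the dimensions and data below are hypothetical, not the MUSIC-CSR pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
L, p = 50, 3                                        # bands, subspace dim (hypothetical)
basis = np.linalg.qr(rng.normal(size=(L, p)))[0]    # true signal subspace

# Observed pixels: points in the subspace plus small noise
Y = basis @ rng.normal(size=(p, 500)) + 0.01 * rng.normal(size=(L, 500))

# Estimate the signal subspace from the top-p left singular vectors of the data
U = np.linalg.svd(Y, full_matrices=False)[0][:, :p]

def projection_error(s):
    """Normalized Euclidean distance from spectrum s to the estimated subspace."""
    residual = s - U @ (U.T @ s)
    return np.linalg.norm(residual) / np.linalg.norm(s)

in_sub = basis @ rng.normal(size=p)    # library member lying in the subspace
out_sub = rng.normal(size=L)           # unrelated library member
# A small error flags a plausible active member; a large one lets it be pruned.
print(projection_error(in_sub), projection_error(out_sub))
```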
Showing results 1 — 15 out of 1,675 results