Egocentric action anticipation consists in predicting a future action the camera wearer will perform from egocentric video. While the task has recently attracted the attention of the research community, current approaches assume that the input videos are "trimmed", meaning that a short video sequence is sampled a fixed time before the beginning of the action. We argue that, despite the recent advances in the field, trimmed action anticipation has limited applicability in real-world scenarios, where it is important to deal with "untrimmed" video inputs and it cannot be assumed that the exact moment in which the action will begin is known at test time. To overcome these limitations, we propose an untrimmed action anticipation task, which, similarly to temporal action detection, assumes that the input video is untrimmed at test time, while still requiring predictions to be made before the actions actually take place. We design an evaluation procedure for methods designed to address this novel task, and compare several baselines on the EPIC-KITCHENS-100 dataset. Experiments show that the performance of current models designed for trimmed action anticipation is very limited and more research on this task is required.
arXiv:2202.04132v1
An important problem in metagenomic data analysis is to identify the source organism, or at least taxon, of each sequence. Most methods tackle this problem in two steps by using an alignment-free approach: first the DNA sequences are represented as points of a real n-dimensional space via a mapping function, then either clustering or classification algorithms are applied. Those mapping functions are required to be genomic signatures: the dissimilarity between the mapped points must reflect the degree of phylogenetic similarity of the source species. Designing good signatures for metagenomics can be challenging due to the special characteristics of metagenomic sequences; most of the existing signatures were not designed accordingly and they were tested only on error-free sequences sampled from a few dozen species. In this work we analyze comparatively the goodness of existing and novel signatures based on tetranucleotide frequencies via statistical models and computational experiments; we also study how they are affected by the generalized Chargaff's second parity rule (GCSPR), which states that in a given sequence longer than 50 kbp, inverse oligonucleotides are approximately equally frequent. We analyze 38 million sequences of 150 bp–1,000 bp with 1% base-calling error, sampled from 1,284 microbes. Our models indicate that GCSPR reduces strand-dependence of signatures, that is, their values are less affected by the source strand; GCSPR is further exploited by some signatures to reduce the intra-species dispersion. Two novel signatures stand out both in the models and in the experiments: the combination signature and the operation signature. The former achieves strand-independence without grouping oligonucleotides; this could be valuable for alignment-free sequence comparison methods when distinguishing inverse oligonucleotides matters. The operation signature sums the frequencies of reverse, complement, and inverse tetranucleotides; with 72 features, it reduces the computational intensity of the analysis.
doi:10.1101/146001
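The grouping behind the operation signature can be sketched directly (this is an illustrative reconstruction, not the authors' code): each tetranucleotide is placed in an equivalence class together with its reverse, complement, and inverse (reverse complement), and the signature sums the frequencies within each class. Counting the resulting classes confirms the abstract's figure of 72 features.

```python
from itertools import product

COMP = str.maketrans("ACGT", "TGCA")

def variants(w):
    # Orbit of a tetranucleotide under reverse, complement, and inverse
    # (reverse complement); the four operations form a closed group.
    return {w, w[::-1], w.translate(COMP), w[::-1].translate(COMP)}

# Group the 256 tetranucleotides into equivalence classes, keyed by the
# lexicographically smallest member of each orbit.
classes = {}
for w in ("".join(p) for p in product("ACGT", repeat=4)):
    classes.setdefault(min(variants(w)), set()).add(w)

def operation_signature(seq):
    # Count overlapping tetranucleotides, then sum frequencies per class.
    counts = {}
    for i in range(len(seq) - 3):
        counts[seq[i:i + 4]] = counts.get(seq[i:i + 4], 0) + 1
    total = max(1, len(seq) - 3)
    return {rep: sum(counts.get(w, 0) for w in members) / total
            for rep, members in classes.items()}

print(len(classes))  # 72 features, matching the abstract
```

By Burnside's lemma the count is (256 + 16 + 16 + 0) / 4 = 72: reversal and reverse complement each fix 16 tetranucleotides, complementation fixes none.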
Over the last century, increased greenhouse gas emissions altered the atmosphere's composition and resulted in the phenomenon known as climate change. Climate change threatens the sustainability of the agricultural sector in the Mediterranean region. Droughts and extreme heat waves will probably become more frequent in the next few decades, thus maintaining sufficient yields in heat- and drought-susceptible major crops will be challenging. In Greece, cotton is of paramount economic importance. Not only is it regarded as the most significant fiber crop, but Greece is also the main cotton producer of the European Union. The aim of the present review was to examine the environmental factors that might affect cotton production in Greece and assess whether (or not) climate change has the potential to limit the productivity of this crop in the near future. According to the existing literature, cotton can adapt to the changing climate. Climate change-induced elevated CO2 levels and temperatures might even benefit cotton. The mitigation of the adverse effects of climate change is possible via the adoption of site-specific agronomic practices. A simplistic framework, based on the literature and the goals of the European Union, that aims at the preservation of sufficient cotton yields in Greece is proposed in the present study.
doi:10.15835/nbha49412547
Lecture Notes in Computer Science
Web personalization is the process of customizing a web site to the needs of each specific user or set of users. Personalization of a web site may be performed by the provision of recommendations to the users, highlighting/adding links, creation of index pages, etc. Web personalization systems are mainly based on the exploitation of the navigational patterns of the web site's visitors. When a personalization system relies solely on usage-based results, however, valuable information conceptually related to what is finally recommended may be missed. The exploitation of the web pages' semantics can considerably improve the results of web usage mining and personalization, since it provides a more abstract yet uniform, and both machine- and human-understandable, way of processing and analyzing the usage data. The underlying idea is to integrate usage data with content semantics, expressed in ontology terms, in order to produce semantically enhanced navigational patterns that can subsequently be used for producing valuable recommendations. In this paper we propose a semantic web personalization system, focusing on word sense disambiguation techniques which can be applied in order to semantically annotate the web site's content.
doi:10.1007/11908678_10
Lecture Notes in Computer Science
Stability of a learning algorithm with respect to small input perturbations is an important property, as it implies that the derived models are robust with respect to the presence of noisy features and/or data sample fluctuations. In this paper we explore the effect of stability optimization in the standard feature selection process for the continuous (PCA-based) k-means clustering problem. Interestingly, we derive that stability maximization naturally introduces a tradeoff between cluster separation and variance, leading to the selection of features that have a high cluster separation index that is not artificially inflated by the feature's variance. The proposed algorithmic setup is based on a Sparse PCA approach that selects the features maximizing stability in a greedy fashion. In our study, we also analyze several properties of Sparse PCA relevant to stability that promote Sparse PCA as a viable feature selection mechanism for clustering. The practical relevance of the proposed method is demonstrated in the context of cancer research, where we consider the problem of detecting potential tumor biomarkers using microarray gene expression data. The application of our method to a leukemia dataset shows that the tradeoff between cluster separation and variance leads to the selection of features corresponding to important biomarker genes. Some of them have relatively low variance and are not detected without the direct optimization of stability in Sparse PCA-based k-means.
doi:10.1007/978-3-642-23783-6_27
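The separation-versus-variance tradeoff can be illustrated with a much simpler per-feature score than the paper's Sparse PCA machinery (the scoring functions below are hypothetical stand-ins, not the authors' method): two features with identical separation of cluster means are ranked very differently once the separation index is normalized by the pooled variance, so a feature's raw spread no longer inflates its score.

```python
from statistics import mean, pvariance

def separation_index(a, b):
    # Raw between-cluster separation: squared distance of the cluster means.
    return (mean(a) - mean(b)) ** 2

def stabilized_index(a, b):
    # Variance-normalized separation: high feature variance no longer
    # inflates the score, mirroring the tradeoff described in the abstract.
    return separation_index(a, b) / pvariance(a + b)

# Feature 1: well-separated clusters with modest spread.
f1_a, f1_b = [0.0, 0.2, 0.4], [2.0, 2.2, 2.4]
# Feature 2: the same separation of means, but hugely inflated spread.
f2_a, f2_b = [-5.0, 0.2, 5.4], [-3.0, 2.2, 7.4]

print(separation_index(f1_a, f1_b), separation_index(f2_a, f2_b))
print(stabilized_index(f1_a, f1_b), stabilized_index(f2_a, f2_b))
```

Both features score the same on raw separation, but only the low-variance feature survives the normalized score, which is the behavior stability maximization induces according to the abstract.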
Lecture Notes in Computer Science
This paper introduces an algorithm for network community detection called DEEN (Delete Edges and Expand Nodes) consisting of two simple steps. First, edges of the graph estimated to connect different clusters are detected and removed; next, the resulting graph is used for generating communities by expanding seed nodes. DEEN uses as parameters the minimum and maximum allowed size of a cluster, and a resolution parameter whose value influences the number of removed edges. Application of DEEN to the budding yeast protein network for detecting functional protein complexes indicates its capability to identify clusters containing proteins with the same functional category, improving on MCL, a popular state-of-the-art method for functional protein complex detection. Moreover, application of DEEN to two popular benchmark networks results in the detection of accurate communities, substantiating the effectiveness of the proposed method in diverse domains.
doi:10.1007/978-3-642-35686-5_13
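The two-step structure of DEEN can be sketched as follows. The abstract does not specify how inter-cluster edges are estimated, so this sketch substitutes a hypothetical criterion (low Jaccard overlap of endpoint neighborhoods) for step 1, with the threshold playing the role of the resolution parameter, and plain breadth-first expansion from seeds for step 2:

```python
from collections import deque

def deen_sketch(edges, resolution=0.2, min_size=2, max_size=10):
    # Build adjacency sets.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    def jaccard(u, v):
        # Overlap of the closed neighborhoods of the two endpoints.
        a, b = adj[u] | {u}, adj[v] | {v}
        return len(a & b) / len(a | b)

    # Step 1: delete edges estimated to connect different clusters
    # (stand-in criterion; `resolution` tunes how many edges are removed).
    kept = {u: {v for v in adj[u] if jaccard(u, v) >= resolution} for u in adj}

    # Step 2: expand seed nodes over the pruned graph, keeping only
    # communities within the allowed size range.
    seen, communities = set(), []
    for seed in kept:
        if seed in seen:
            continue
        queue, comm = deque([seed]), {seed}
        seen.add(seed)
        while queue:
            node = queue.popleft()
            for nxt in kept[node]:
                if nxt not in seen and len(comm) < max_size:
                    seen.add(nxt)
                    comm.add(nxt)
                    queue.append(nxt)
        if min_size <= len(comm) <= max_size:
            communities.append(comm)
    return communities

# Two triangles joined by a single bridge edge: the bridge has low
# neighborhood overlap and is removed, splitting the graph in two.
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
print(deen_sketch(edges, resolution=0.4))
```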
Lecture Notes in Computer Science
The World Wide Web provides an enormous amount of images easily accessible to everybody. The main challenge is to provide efficient search mechanisms for image content that are truly scalable and can support full coverage of web contents. In this paper, we present an architecture that adopts the peer-to-peer (P2P) paradigm for indexing, searching and ranking of image content. The ultimate goal of our architecture is to provide an adaptive search mechanism for image content, enhanced with ranking, relying on image features, user-defined annotations and user feedback. Thus, we present PIRES, a scalable, decentralized and distributed infrastructure for building a search engine for image content capitalizing on P2P technology. In the following, we first present the core scientific and technological objectives of PIRES, and then we present some preliminary experimental results of our prototype.
doi:10.1007/978-3-540-79860-6_15
Lecture Notes in Computer Science
The introduction of hierarchical thesauri (HT) that contain significant semantic information has led researchers to investigate their potential for improving the performance of the text classification task by extending the traditional "bag of words" representation and incorporating syntactic and semantic relationships among words. In this paper we address this problem by proposing a Word Sense Disambiguation (WSD) approach based on the intuition that word proximity in the document implies proximity in the HT graph. We argue that the high precision exhibited by our WSD algorithm on various humanly-disambiguated benchmark datasets makes it appropriate for the classification task. Moreover, we define a semantic kernel, based on the general concept of GVSM kernels, that captures the semantic relations contained in the hierarchical thesaurus. Finally, we conduct experiments using various corpora, achieving a systematic improvement in classification accuracy using the SVM algorithm, especially when the training set is small.
doi:10.1007/11564126_21
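A GVSM-style semantic kernel of the kind referred to above can be sketched generically: documents are mapped through a term-to-concept matrix S before the inner product is taken, so two documents sharing no literal terms still obtain a nonzero similarity when their terms map to a common thesaurus concept. The matrix S below is a hypothetical toy example, not the kernel derived in the paper:

```python
def gvsm_kernel(d1, d2, S):
    # K(d1, d2) = (d1 S) . (d2 S): project each term-count vector through
    # the term-to-concept matrix S, then take the inner product.
    def project(d):
        return [sum(d[t] * S[t][c] for t in range(len(d)))
                for c in range(len(S[0]))]
    return sum(a * b for a, b in zip(project(d1), project(d2)))

# Toy vocabulary: ["car", "automobile", "banana"]; documents as term counts.
d1 = [1, 0, 0]           # mentions only "car"
d2 = [0, 1, 0]           # mentions only "automobile"
# Hypothetical S: "car" and "automobile" share a thesaurus concept.
S = [[1.0, 0.0],
     [1.0, 0.0],
     [0.0, 1.0]]
print(gvsm_kernel(d1, d2, S))   # nonzero despite no shared terms
```

With the plain bag-of-words inner product, d1 and d2 would be orthogonal; the semantic kernel recovers their relatedness through the shared concept.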
Several studies have demonstrated the prospects of spectral ordering for data mining. One successful application is seriation of paleontological findings, i.e. ordering the sites of excavation, using data on mammal co-occurrences only. However, spectral ordering ignores the background knowledge that is naturally present in the domain: paleontologists can derive the ages of the sites within some accuracy. On the other hand, the age information is uncertain, so the best approach would be to combine the background knowledge with the information on mammal co-occurrences. Motivated by this kind of partial supervision we propose a novel semi-supervised spectral ordering algorithm that modifies the Laplacian matrix such that domain knowledge is taken into account. Also, it performs feature selection by discarding features that contribute most to the unwanted variability of the data in bootstrap sampling. Moreover, we demonstrate the effectiveness of the proposed framework on the seriation of Usenet newsgroup messages, where the task is to find out the underlying flow of discussion. The theoretical properties of our algorithm are thoroughly analyzed, and it is demonstrated that the proposed framework enhances the stability of the spectral ordering output and induces computational gains.
doi:10.1007/s10115-009-0215-1
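The unsupervised core of spectral ordering is simple to sketch: order items by the Fiedler vector (the eigenvector of the second-smallest eigenvalue) of the graph Laplacian built from the similarity matrix. The `semi_supervised_order` helper below is a hypothetical illustration of blending in a similarity derived from background knowledge; the paper's actual Laplacian modification differs in detail:

```python
import numpy as np

def spectral_order(W):
    # Order items by the Fiedler vector of the Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, np.argsort(vals)[1]]
    return np.argsort(fiedler)

def semi_supervised_order(W, W_prior, alpha=0.5):
    # Hypothetical supervision: blend the data similarity with a similarity
    # built from approximate background knowledge (e.g. site ages).
    return spectral_order((1 - alpha) * W + alpha * W_prior)

# Sites with a latent linear order 0..5, presented shuffled; similarity
# decays with distance along the latent order (a Robinson-type matrix).
true_pos = np.array([3, 0, 5, 1, 4, 2])
W = np.exp(-np.abs(true_pos[:, None] - true_pos[None, :]))
np.fill_diagonal(W, 0.0)
order = spectral_order(W)
print(true_pos[order])  # monotone: the latent order is recovered (up to flip)
```

For Robinson-type similarity matrices like this one, the Fiedler vector is known to sort the items correctly up to reversal, which is exactly the property seriation relies on.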
We provide preliminary details and the formulation of an optimization strategy, currently under development, that automatically tunes the parameters of a Support Vector Machine on new datasets. The strategy is a heuristic based on Iterated Local Search, a modification of classic hill climbing which iterates calls to a local search routine.
arXiv:1707.03191v1
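The Iterated Local Search scheme described above can be sketched in a few lines: repeatedly perturb the best solution found so far, run hill climbing from the perturbed point, and accept the result if it improves. The objective below is a toy stand-in for a cross-validated SVM error over (log C, log gamma); the real objective in the paper would be an actual cross-validation run:

```python
import random

def iterated_local_search(objective, x0, steps, n_iters=20, seed=0):
    # ILS: repeat (perturb -> local search -> accept if better), escaping
    # the local optima where plain hill climbing would stop.
    rng = random.Random(seed)

    def local_search(x):
        # Classic hill climbing over axis-aligned neighbor moves.
        improved = True
        while improved:
            improved = False
            for dim in range(len(x)):
                for delta in (-steps[dim], steps[dim]):
                    cand = list(x)
                    cand[dim] += delta
                    if objective(cand) < objective(x):
                        x, improved = cand, True
        return x

    best = local_search(list(x0))
    for _ in range(n_iters):
        perturbed = [v + rng.uniform(-3, 3) * s for v, s in zip(best, steps)]
        candidate = local_search(perturbed)
        if objective(candidate) < objective(best):
            best = candidate
    return best

# Hypothetical smooth stand-in for cross-validated SVM error.
def toy_cv_error(p):
    log_c, log_gamma = p
    return (log_c - 2.0) ** 2 + (log_gamma + 3.0) ** 2

best = iterated_local_search(toy_cv_error, x0=[0.0, 0.0], steps=[0.5, 0.5])
print(best, toy_cv_error(best))
```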
Lecture Notes in Computer Science
Spectral methods, ranging from traditional Principal Components Analysis to modern Laplacian matrix factorization, have proven to be a valuable tool for a wide range of diverse data mining applications. Commonly these methods are stated as optimization problems and employ the extremal (maximal or minimal) eigenvectors of a certain input matrix for deriving the appropriate statistical inferences. Interestingly, recent studies have questioned this "modus operandi" and revealed that useful information may also be present within low-order eigenvectors whose mass is concentrated (localized) in a small part of their indices. An application context where localized low-order eigenvectors have been successfully employed is "Differential Power Analysis" (DPA). DPA is a well-studied side-channel attack on cryptographic hardware devices (such as smart cards) that employs statistical analysis of the device's power consumption in order to retrieve the secret key of the cryptographic algorithm. In this work we propose a data mining (clustering) formulation of the DPA process and also provide a theoretical model that justifies and explains the utility of low-order eigenvectors. In our data mining formulation, we consider that the key-relevant information is modelled as a "low-signal" pattern that is embedded in a "high-noise" dataset. In this respect our results generalize beyond DPA and are applicable to analogous low-signal, hidden pattern problems. The experimental results, using power trace measurements from a programmable smart card, verify our approach empirically.
doi:10.1007/978-3-642-33460-3_22
Accumulation of the neuronal protein SNCA/alpha-synuclein and of the oligodendroglial phosphoprotein TPPP/p25A within the glial cytoplasmic inclusions (GCIs) represents the key histopathological hallmark of multiple system atrophy (MSA). Even though the levels/distribution of both oligodendroglial SNCA and TPPP/p25A proteins are critical for disease pathogenesis, the proteolytic mechanisms involved in their turnover in health and disease remain poorly understood. Herein, by pharmacological and molecular modulation of the autophagy-lysosome pathway (ALP) and the proteasome, we demonstrate that the endogenous oligodendroglial SNCA and TPPP/p25A are degraded mainly by the ALP in murine primary oligodendrocytes and oligodendroglial cell lines under basal conditions. We also identify a KFERQ-like motif in the TPPP/p25A sequence that enables its effective degradation via chaperone-mediated autophagy (CMA) in an in vitro system of rat brain lysosomes. Furthermore, in an MSA-like setting established by addition of human recombinant SNCA pre-formed fibrils (PFFs) as seeds of pathological SNCA, we thoroughly characterize the contribution of CMA and macroautophagy in particular to the removal of the exogenously added and the seeded oligodendroglial SNCA pathological assemblies. We also show that PFF treatment impairs autophagic flux and that TPPP/p25A exerts an inhibitory effect on macroautophagy, while at the same time CMA is upregulated to remove the pathological SNCA species formed within oligodendrocytes. Finally, augmentation of CMA or macroautophagy accelerates the removal of the engendered pathological SNCA conformations, further suggesting that autophagy targeting may represent a successful approach for the clearance of pathological SNCA and/or TPPP/p25A in the context of MSA. Abbreviations: 3MA: 3-methyladenine; ACTB: actin, beta; ALP: autophagy-lysosome pathway; ATG5: autophagy related 5; AR7: atypical retinoid 7; CMA: chaperone-mediated autophagy; CMV: cytomegalovirus; CTSD: cathepsin D; DAPI: 4′,6-d [...]
doi:10.6084/m9.figshare.18093951.v1
... Mavroeidis were at IBM Research - Ireland. ...
doi:10.1007/s10618-015-0421-2
Recent studies have demonstrated the prospects of data mining algorithms for addressing the task of seriation in paleontological data (i.e. the age-based ordering of the sites of excavation). A prominent approach is spectral ordering, which computes a similarity measure between the sites and orders them such that similar sites become adjacent and dissimilar sites are placed far apart. In the paleontological domain, the similarity measure is based on the mammal genera whose remains are retrieved from each site of excavation. Although spectral ordering achieves good performance in the seriation task, it ignores the background knowledge that is naturally present in the domain, as paleontologists can derive the ages of the sites of excavation within some accuracy. On the other hand, the age information is uncertain, so the best approach would be to combine the background knowledge with the information on mammal co-occurrences. Motivated by this kind of partial supervision we propose a novel semi-supervised spectral ordering algorithm. Our algorithm modifies the Laplacian matrix used in spectral ordering, such that domain knowledge of the ordering is taken into account. Also, it performs feature selection (sparsification) by discarding features that contribute most to the unwanted variability of the data in bootstrap sampling. The theoretical properties of the proposed algorithm are thoroughly analyzed, and it is demonstrated that the proposed framework enhances the stability of the spectral ordering output and induces computational gains.
doi:10.1109/icdm.2008.120 dblp:conf/icdm/MavroeidisB08
We present a case report of a patient with Bouveret syndrome with interesting radiological findings and successful surgical treatment after failure of the endoscopic techniques. The report is followed by a review of the literature regarding the diagnostic means and proper treatment of this rare entity. Bouveret syndrome refers to the condition of gastric outlet obstruction caused by the impaction of a large gallstone into the duodenum after passage through a cholecystoduodenal fistula. Many endoscopic and surgical techniques have been described for the management of this syndrome. This is a case of a 78-year-old patient with a severe medical history who presented in poor general condition with an 8-day history of nausea, multiple episodes of bilious vomiting, anorexia, discomfort in the right hypochondrium and epigastrium, and fever up to 38.5 °C. The diagnosis of Bouveret syndrome was set after performing the proper imaging studies. An initial endoscopic attempt to resolve the obstruction was unsuccessful. Surgical treatment managed to extract the impacted gallstone through an enterotomy after its removal into the first part of the jejunum.
doi:10.1155/2013/839370 pmid:23864977 pmcid:PMC3707284