161 Hits in 1.9 sec

Schema transformation without database reorganization

Markus Tresch, Marc H. Scholl
1993 SIGMOD record  
We argue for avoiding database reorganizations due to schema modification in object-oriented systems, since these are expensive operations and they conflict with reusing existing software components. We show that data independence, which is a neglected concept in object databases, helps to avoid reorganizations in the case of capacity-preserving and capacity-reducing schema transformations. We informally present a couple of examples to illustrate the idea of a schema transformation methodology that avoids database reorganization.
doi:10.1145/156883.156886 fatcat:ir72nwbawnbmfkug32efbv4y7i

Classification across gene expression microarray studies

Andreas Buness, Markus Ruschhaupt, Ruprecht Kuner, Achim Tresch
2009 BMC Bioinformatics  
The increasing number of gene expression microarray studies represents an important resource in biomedical research. As a result, gene expression based diagnosis has entered clinical practice for patient stratification in breast cancer. However, the integration and combined analysis of microarray studies still remains a challenge. We assessed the potential benefit of data integration on the classification accuracy and systematically evaluated the generalization performance of selected methods in four breast cancer studies comprising almost 1000 independent samples. To this end, we introduced an evaluation framework which aims to establish good statistical practice and a graphical way to monitor differences. The classification goal was to correctly predict estrogen receptor status (negative/positive) and histological grade (low/high) of each tumor sample in an independent study which was not used for the training. For the classification we chose support vector machines (SVM), predictive analysis of microarrays (PAM), random forest (RF) and k-top scoring pairs (kTSP). Guided by considerations relevant for classification across studies, we developed a generalization of kTSP which we evaluated in addition. Our derived version (DV) aims to improve the robustness of the intrinsic invariance of kTSP with respect to technologies and preprocessing. Results: For each individual study the generalization error was benchmarked via complete cross-validation and was found to be similar for all classification methods. The misclassification rates were substantially higher in classification across studies, when each single study was used as an independent test set while all remaining studies were combined for the training of the classifier. However, with an increasing number of independent microarray studies used in the training, the overall classification performance improved. DV performed better than the average and showed slightly less variance. In particular, the better predictive results of DV in across-platform classification indicate higher robustness of the classifier when trained on single-channel data and applied to gene expression ratios. Conclusions: We present a systematic evaluation of strategies for the integration of independent microarray studies in a classification task. Our findings in across-studies classification may guide further research aiming at the construction of more robust and reliable methods for stratification and diagnosis in clinical practice.
doi:10.1186/1471-2105-10-453 pmid:20042109 pmcid:PMC2811711 fatcat:yreubhimwjgbtoi7qfcn7jbny4

Evolution towards, in, and beyond object databases [chapter]

Marc H. Scholl, Markus Tresch
1994 Lecture Notes in Computer Science  
Published in Proc. 3rd GI Workshop Information Systems and Artificial Intelligence, Hamburg, March 1994. Springer LNCS 777, pp. 64-82. Abstract. There is a manifold of meanings we could associate with the term "evolution" in the database arena. This paper tries to categorize some of these into a unique framework, showing similarities and differences. Among the topics touched upon are: extending traditional data models to become "object-oriented", migrating existing data to (not necessarily OO) databases, schema extension and modification in a populated database, integration of federated systems, and the use of "external services" to enrich DBMS functionalities. The following are presented in more detail: first, we describe the necessity of object evolution over time; second, we discuss schema evolution; and third, we present evolutionary database interoperability by identifying different coupling levels. A few basic mechanisms, such as views (derived information), a uniform treatment of data and meta data, and type and/or class hierarchies, allow for a formal description of (most of) the relevant problems. Beyond presenting our own approach, we try to provide a platform to solicit further discussion. - Evolution towards object databases: Here we address the advance of database technology in terms of data models. Data models have evolved from flat files via first generation DBMSs (network, hierarchical) and second generation DBMSs (relational) to third generation DBMSs (extended relational, object-oriented, ...) [30]. It has been the primary concern of our prior work [21] to point out that particularly the latter advance can in fact be evolutionary, i.e., preserve the advantages of relational technology, such as powerful descriptive query and update languages.
doi:10.1007/3-540-57802-1_4 fatcat:viytjhtnfveyle5ghf6whoag7u


Markus Tresch, Marc H. Scholl
1993 Database Systems for Advanced Applications '93  
In contrast to the three schema levels in centralized objectbases, a reference architecture for federated objectbase systems proposes five levels of schemata. This paper investigates the fundamental mechanisms to be provided by an object model to realize the processors transforming between these levels, namely schema extension, schema filtering, and schema composition. It is shown how composition and extension are used for stepwise bottom-up integration of existing objectbases into a federation, and how extension and filtering support authorization on different levels in a federation. A powerful view definition mechanism and the possibility to define subschemata (i.e., parts of a schema) are the key mechanisms used in these processes.
doi:10.1142/9789814503730_0005 fatcat:la7v46ruhvfhlhep4tcsnztwkm

Selective Phenotyping, Entropy Reduction, and the Mastermind game

Julien Gagneur, Markus C Elze, Achim Tresch
2011 BMC Bioinformatics  
An implementation of SPARE in the statistical programming language R [10] is available in Additional File 1 and at  ... 
doi:10.1186/1471-2105-12-406 pmid:22014271 pmcid:PMC3258278 fatcat:6cu5aw5swjd33i23343ytteo2i

Meta object management and its application to database evolution [chapter]

Markus Tresch, Marc H. Scholl
1992 Lecture Notes in Computer Science  
In this paper, we address the problem of supporting more flexibility on the schema of object-oriented databases. We describe a general framework based on an object-oriented data model, where three levels of objects are distinguished: data objects, schema objects, and metaschema objects. We discuss the prerequisites for applying the query and update operations of an object algebra uniformly on all three levels. As a sample application of the framework, we focus on database evolution, that is, realizing incremental changes to the database schema and their propagation to data instances. We show how each schema update of a given taxonomy is realized by direct updating of schema objects, and how this approach can be used to build a complete tool for database evolution.
doi:10.1007/3-540-56023-8_19 fatcat:4yd7owm7hjhydcuonlrgpdk2nu

Distributed Processing over Stand-alone Systems and Applications

Gustavo Alonso, Claus Hagen, Hans-Jörg Schek, Markus Tresch
1997 Very Large Data Bases Conference  
This paper describes the architecture of OPERA, a generic platform for building distributed systems over stand-alone applications. The main contribution of this research effort is to propose a "kernel" system providing the "essentials" for distributed processing and to show the important role database technology may play in supporting such functionality. These include a powerful process management environment, created as a generalization of workflow ideas and incorporating transactional concepts such as spheres of isolation, atomicity, and persistence, and a transactional engine enforcing correctness based on the nested and multi-level models. It also includes a tool-kit providing externalized database functionality enabling physical database design over heterogeneous data repositories. The potential of the proposed platform is demonstrated by several concrete applications currently being developed.
dblp:conf/vldb/AlonsoHST97 fatcat:g6m42wqrr5aydovu23glj5etpu

Evolution not Revolution: The Data Warehousing Strategy at Credit Suisse Financial Services [chapter]

Markus Tresch, Dirk Jonscher
2001 Lecture Notes in Computer Science  
Data Warehousing is not new to Credit Suisse Financial Services. Over the past twenty years, a large number of warehouse-flavored applications were built, ranging from simple data pools to classical management information systems, up to novel customer relationship management applications using state-of-the-art data mining technologies. However, these warehouse projects were neither coordinated nor based on the same infrastructure. Moreover, dramatic changes of the business design had a huge impact on information analysis requirements. Together, these resulted in a nearly unmanageable complexity. Therefore, Credit Suisse Financial Services started a 3-year enterprise-wide data warehouse re-engineering initiative at the beginning of 1999. This paper presents the motivation, experiences, and open issues of this strategic IT project.
doi:10.1007/3-540-45341-5_3 fatcat:mcm4a6k325fsbkchixjuqcc6ga

An extensible classifier for semi-structured documents

Markus Tresch, Allen Luniewski
1995 Proceedings of the fourth international conference on Information and knowledge management - CIKM '95  
Semi-structured documents (e.g. journal articles, electronic mail, television programs, mail order catalogs, ...) are often not explicitly typed; the only available type information is the implicit structure. An explicit type, however, is needed in order to apply object-oriented technology, like type-specific methods. In this paper, we present an experimental vector space classifier for determining the type of semi-structured documents. Our goal was to design a high-performance classifier in terms of accuracy (recall and precision), speed, and extensibility.
doi:10.1145/221270.221575 dblp:conf/cikm/TreschL95 fatcat:3d7evournfavhe6f5oo237cvxy

Data mining at a major bank: Lessons from a large marketing application [chapter]

Petra Hunziker, Andreas Maier, Alex Nippe, Markus Tresch, Douglas Weers, Peter Zemp
1998 Lecture Notes in Computer Science  
This paper summarizes experiences and results of productively using knowledge discovery and data mining technology in a large retail bank. We present data mining as part of a greater effort to develop and deploy an integrated IT-infrastructure for loyalty-based customer management, combining data warehousing and campaign management together with data mining technology. We have completed a first campaign where potential customers were selected using the newly built data warehouse together with data mining. Because of the better insight it provides, we used a decision tree as the selection method.
doi:10.1007/bfb0094837 fatcat:iatxvpxl75elxc3he6suohyz6u

T cell-specific inactivation of mouse CD2 by CRISPR/Cas9

Jane Beil-Wagner, Georg Dössinger, Kilian Schober, Johannes vom Berg, Achim Tresch, Martina Grandl, Pushpalatha Palle, Florian Mair, Markus Gerhard, Burkhard Becher, Dirk H. Busch, Thorsten Buch
2016 Scientific Reports  
The CRISPR/Cas9 system can be used to mutate target sequences by introduction of double-strand breaks followed by imprecise repair. To test its use for conditional gene editing, we generated mice transgenic for CD4 promoter-driven Cas9 combined with guide RNA targeting CD2. We found that within CD4+ and CD8+ lymphocytes from lymph nodes and spleen, 1% and 0.6%, respectively, were not expressing CD2. T cells lacking CD2 carried mutations, which confirmed that Cas9 driven by cell-type-specific promoters can edit genes in the mouse and may thus allow targeted studies of gene function in vivo.
doi:10.1038/srep21377 pmid:26903281 pmcid:PMC4763270 fatcat:2ocglmf3hfddbdoazscdqwchmu

The functional cancer map: A systems-level synopsis of genetic deregulation in cancer

Markus Krupp, Thorsten Maass, Jens U Marquardt, Frank Staib, Tobias Bauer, Rainer König, Stefan Biesterfeld, Peter R Galle, Achim Tresch, Andreas Teufel
2011 BMC Medical Genomics  
Cancer cells are characterized by massive dysregulation of physiological cell functions with considerable disruption of transcriptional regulation. Genome-wide transcriptome profiling can be utilized for early detection and molecular classification of cancers. Accurate discrimination of functionally different tumor types may help to guide selection of targeted therapy in translational research. Concise grouping of tumor types in cancer maps according to their molecular profile may further be useful for the development of new therapeutic modalities or open new avenues for already established therapies. Methods: The complete available human tumor data of the Stanford Microarray Database was downloaded and filtered for relevance, adequacy and reliability. A total of 649 tumor samples from more than 1400 experiments and 58 different tissues were analyzed. Next, a method to score deregulation of KEGG pathway maps in different tumor entities was established, which was then used to convert hundreds of gene expression profiles into corresponding tumor-specific pathway activity profiles. Based on the latter, we defined a measure for functional similarity between tumor entities, which yielded a phylogeny of tumors. Results: We provide a comprehensive, easy-to-interpret functional cancer map that characterizes tumor types with respect to their biological and functional behavior. Consistently, multiple pathways commonly associated with tumor progression were revealed as common features in the majority of the tumors. However, several pathways previously not linked to carcinogenesis were identified in multiple cancers, suggesting an essential role of these pathways in cancer biology. Among these pathways were 'ECM-receptor interaction', 'Complement and coagulation cascades', and 'PPAR signaling pathway'. Conclusion: The functional cancer map provides a systematic view on molecular similarities across different cancers by comparing tumors on the level of pathway activity. This work resulted in the identification of novel superimposed functional pathways potentially linked to cancer biology. Therefore, our work may serve as a starting point for rationalizing the combination of tumor therapeutics as well as for expanding the application of well-established targeted tumor therapies.
doi:10.1186/1755-8794-4-53 pmid:21718500 pmcid:PMC3148554 fatcat:bqrdc4gmk5cxre63o23t4optx4

Identification of aberrant chromosomal regions from gene expression microarray studies applied to human breast cancer

Andreas Buness, Ruprecht Kuner, Markus Ruschhaupt, Annemarie Poustka, Holger Sültmann, Achim Tresch
2007 Computer applications in the biosciences : CABIOS  
Motivation: In cancer, chromosomal imbalances like amplifications and deletions, or changes in epigenetic mechanisms like DNA methylation, influence the transcriptional activity. These alterations are often not limited to a single gene but affect several genes of the genomic region and may be relevant for the disease status. For example, the ERBB2 amplicon (17q21) in breast cancer is associated with poor patient prognosis. We present a general, unsupervised method for genome-wide gene expression data to systematically detect tumor patients with chromosomal regions of distinct transcriptional activity. The method aims to find expression patterns of adjacent genes with a consistently decreased or increased level of gene expression in tumor samples. Such patterns have been found to be associated with chromosomal aberrations and clinical parameters like tumor grading and thus can be useful for risk stratification or therapy. Results: Our approach was applied to 12 independent human breast cancer microarray studies comprising 1422 tumor samples. We prioritized chromosomal regions and genes predominantly found across all studies. The result highlighted not only regions which are well known to be amplified, like 17q21 and 11q13, but also others, like 8q24 (distal to MYC) and 17q24-q25, which may harbor novel putative oncogenes. Since our approach can be applied to any microarray study, it may become a valuable tool for the exploration of transcriptional changes in diverse disease types. Availability: The R source codes which implement the method and an exemplary analysis are available at
doi:10.1093/bioinformatics/btm340 pmid:17599933 fatcat:gnbrr7iocvfybd3dm7g4oyesey

ISMB 2016 Proceedings Papers Committee

2016 Bioinformatics  
Ringnér Cenk Sahinalp Marcel Schulz Russell Schwartz Nathan Sheffield Kai Tan Amos Tanay Achim Tresch Jean-Philippe Vert Martin Vingron Ioannis Xenarios Yu Xia Yinyin Yuan Deyou Zheng  ...  Haiyan Huang David Jones Tommy Kaplan Sunduz Keles Ilona Kifer Philip Kim Anshul Kundaje Shaun Mahony John Marioni Satoru Miyano Alexandre Morozov Leelavati Narlikar Uwe Ohler Tim Reddy Markus  ... 
doi:10.1093/bioinformatics/btw296 pmid:27307631 pmcid:PMC4908369 fatcat:otmt2zdtavamvjdikc4zpolyxe

Survey of the COCOON Project [chapter]

M. H. Scholl, H.-J. Schek
1992 Objektbanken für Experten  
In the COCOON project, Christian Laasch, Christian Rich, and Markus Tresch have worked on the formalization, optimization, and meta modeling, respectively. Bin  ...  Tresch are now in Ulm. Internal reports and non-refereed publications: [DHL +92, Jia90a,  ... 
doi:10.1007/978-3-642-77873-5_11 fatcat:uflaooaywjfnjn64ixt4gwb4ra