D4.1 Pilots Requirements Analysis Report

Julián Moreno-Schneider, Georg Rehm
2018 Zenodo  
From the deliverable's version history (VERSION, MODIFICATIONS, DATE, AUTHORS): 0.1 First draft version, 16/04/2018, Julián Moreno-Schneider (DFKI), Georg Rehm (DFKI); 0.2 Second draft version, 30/04/2018; 0.3 Third draft version, 09/05/2018; 0.4 Include conclusions and surveys in Annex 1, 25/05/2018; 0.5 Include ... Legal Knowledge Graph for Smart Compliance Services in Multilingual Europe. D4.1 | Pilots requirements analysis report.
doi:10.5281/zenodo.1256829 fatcat:4uoccv5ykretdmgjwe4bzoisx4

Errata

Julian Huxley, Dunn Schneider, Webb, Garber, Quisenberry
1923 Journal of Heredity  
doi:10.1093/oxfordjournals.jhered.a102376 fatcat:xkm7jho3kzgjlj2jb2smzxailm

Taking The Carpentry Model To Librarians

Tim Dennis, John Chodacki, Juliane Schneider
2017 Zenodo  
Juliane Schneider, Harvard, @JulianeS; Tracy Teal, Carpentries, @tracykteal • James Baker started at British Library • 4 contributors • Lessons on Data Intro, Shell, Git, OpenRefine • In more fixed documents ... manage the details of a merge • Small group to work with the Carpentries' Assessment Director to align our assessment efforts with other Carpentries as far as possible, with the goal of inter-carpentry analysis. Juliane ...
doi:10.5281/zenodo.1209481 fatcat:vjswrk46yvd4hcmdbwvhxddvtq

LIBER Webinar: Data Curation From A Practical Perspective

Juliane Schneider
2019 Zenodo  
In this webinar, Juliane Schneider relates stories of some of her real-life adventures as a metadata consultant for research data, and the strategies she used to take data from laptops and un-networked ... Juliane Schneider, Team Lead/Lead Data Curator, www.eagle-i.net, Harvard Catalyst | Clinical and Translational Science Center, Juliane_Schneider@hms.harvard.edu, https://orcid.org/0000-0002-7664-3331. What You ...
doi:10.5281/zenodo.3541601 fatcat:3pzffkxc5jboxgr2njyhmlebma

Lynx D4.2 Initial version of workflow definition

Julián Moreno Schneider, Georg Rehm
2018 Zenodo  
From the deliverable's version history (VERSION, MODIFICATION(S), DATE, AUTHOR(S)): 0.1 First draft version, 01/06/2018, Julián Moreno Schneider (DFKI), Georg Rehm (DFKI); 0.2 First draft of structure, 18/07/2018; 0.3 Final version of TOC, 25/10/2018; 0.4 First version of introduction, 29/10/2018; 0.5 Description of ... Knowledge Graph for Smart Compliance Services in Multilingual Europe. D4.2 | Initial version of Workflow definition.
doi:10.5281/zenodo.1745324 fatcat:dxeqzzfisvenhew4te2gpahg7a

Lynx D4.3 Final version of Workflow definition

Julián Moreno-Schneider, Georg Rehm
2019 Zenodo  
This report describes the final definition of the curation workflows associated with every business use case (as defined in D4.1 [LynxD41]). This process started with the initial definition provided in D4.2 [LynxD42], in which we outlined four scenarios: data protection, labour law, CE marking and geothermal energy. One of them (CE marking) has been put on hold, and one has been adapted to cover a certain type of document (contracts): data protection is now named contract analysis. The specification of a workflow includes its input and output as well as the functionality it is supposed to perform: annotate or enrich a document, add a document to the knowledge base, search for information, etc. Workflows make use of the services (building blocks) to implement the required functionality. The content curation workflows for the different use cases that we prototypically implement in the project have been defined. We performed a systematic analysis of the microservices (developed in parallel) and matched them with the required functionalities for each use case. First, we determine the principal elements involved in each use case, i.e., the services, input and output. Second, we define the order in which the services have to be executed. Third, we identify the shared components in the different workflows. Currently we have defined five different workflows, divided into two groups: (i) those that are commonly used in more than one use case; and (ii) those that are use-case specific. The common workflow is LKG population. The use-case-specific workflows are: Contract Analysis (OLS); Labour Law Question Answering (Cuatrecasas); Geothermal Project Analysis (DNV GL); Geothermal Project Extended Analysis (DNV GL). Apart from the business cases, there is the General User/Public Portal use case, which is also considered for defining specific workflows.
doi:10.5281/zenodo.3235766 fatcat:v32rytep4fhidpkpdyixabezlu
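
The deliverable describes workflows abstractly (an ordered chain of services, each taking a document as input and returning it enriched) without publishing code. A minimal sketch of that structure in Python follows; all names here are hypothetical, not the Lynx API:

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Service:
    # A building block: takes a document (as a dict) and returns it enriched.
    name: str
    run: Callable[[dict], dict]

@dataclass
class Workflow:
    # An ordered chain of services; the output of one is the input to the next.
    name: str
    services: list[Service] = field(default_factory=list)

    def execute(self, document: dict) -> dict:
        for service in self.services:
            document = service.run(document)
        return document

# Example: a toy "LKG population" workflow chaining two stub services.
ner = Service("NER", lambda doc: {**doc, "entities": []})
linking = Service("EntityLinking", lambda doc: {**doc, "links": []})
lkg_population = Workflow("LKG population", [ner, linking])
enriched = lkg_population.execute({"text": "..."})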

Lynx D3.8 Summarisation and annotation services

Julián Moreno-Schneider, Georg Rehm, María Navas-Loro
2020 Zenodo  
This report provides the final description of the summarisation and annotation services developed under Task 3.2 in the Lynx project. It describes several services that are classified into Annotation Services, whose goal is to enrich documents with semantic annotations, and the Summarisation Service, which aims at generating a new and shorter piece of text from one or several longer texts (documents or parts of documents). The description of the services consists of two parts: for each of the services, first, its general approach is presented, and then its application in the Lynx project is introduced, with the focus on datasets used for training new models, rules defined for domain adaptability, or the generation of dictionaries for specific topics or scenarios. Although this is the final report, the described services will be further developed and evaluated, and will still undergo changes and improvements in the following months (until the end of the project).
doi:10.5281/zenodo.3865667 fatcat:rcfafvhqmrfjvlu4ltyrajdkoi

Vat Photopolymerization of Cemented Carbide Specimen

Thomas Rieger, Tim Schubert, Julian Schurr, Andreas Kopp, Michael Schwenkel, Dirk Sellmer, Alexander Wolff, Juliane Meese-Marktscheffel, Timo Bernthaler, Gerhard Schneider
2021 Materials  
Numerous studies show that vat photopolymerization enables near-net-shape printing of ceramics and plastics with complex geometries. In this study, vat photopolymerization was investigated for cemented carbide specimens. Custom-developed photosensitive WC-12 Co (wt%) slurries were used for printing green bodies. The samples were examined for defects using quantitative microstructure analysis. A thermogravimetric analysis was performed to develop a debinding program for the green bodies. After sintering, the microstructure and surface roughness were evaluated. As mechanical parameters, Vickers hardness and Palmqvist fracture toughness were considered. A linear shrinkage of 26–27% was determined. The remaining porosity fraction was 9.0%. No free graphite formation, and almost no η-phase formation, occurred. WC grain growth was observed. 76% of the WC grains measured were in the suitable size range for metal cutting tool applications. A hardness of 1157 HV10 and a Palmqvist fracture toughness of 12 MPa·m^1/2 were achieved. The achieved microstructure exhibits a high porosity fraction and local cracks. As a result, vat photopolymerization can become an alternative forming method for cemented carbide components if the amount of residual porosity and defects can be reduced.
doi:10.3390/ma14247631 pmid:34947227 pmcid:PMC8706196 fatcat:qyd4t76ebna5felkitn2zb5jvq

Exploring the Structural Space of the Galectin-1-Ligand Interaction

Nadja Bertleff-Zieschang, Julian Bechold, Clemens Grimm, Michael Reutlinger, Petra Schneider, Gisbert Schneider, Jürgen Seibel
2017 ChemBioChem  
This is the author manuscript accepted for publication and has undergone full peer review but has not been through the copyediting, typesetting, pagination and proofreading process, which may lead to differences between this version and the Version of Record.
doi:10.1002/cbic.201700251 pmid:28503789 fatcat:j3agck3rmjcsjprgzvtktayxui

Towards Tracking Data Flows in Cloud Architectures [article]

Immanuel Kunz, Valentina Casola, Angelika Schneider, Christian Banse, Julian Schütte
2020 arXiv   pre-print
As cloud services become central in an increasing number of applications, they process and store more personal and business-critical data. At the same time, privacy and compliance regulations such as the GDPR, the EU ePrivacy Regulation, PCI, and the upcoming EU Cybersecurity Act raise the bar for secure processing and traceability of critical data. In particular, the demand to provide information about existing data records of an individual, and the ability to delete them on demand, is central in these regulations. Common to these requirements is that cloud providers must be able to track data as it flows across the different services to ensure that it never moves outside of the legitimate realm, and that it is known at all times where a specific copy of a record that belongs to a specific individual or business process is located. However, current cloud architectures provide the means neither to holistically track data flows across different services nor to enforce policies on data flows. In this paper, we point out the deficits in the data flow tracking functionalities of major cloud providers by means of a set of practical experiments. We then generalize from these experiments, introducing a generic architecture that aims at solving the problem of cloud-wide data flow tracking, and show how it can be built in a Kubernetes-based prototype implementation.
arXiv:2007.05212v1 fatcat:6m2s3m5o65furclhtsyqeaktni
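
The abstract's core requirement (knowing at all times where every copy of a record lives, and being able to find all copies for deletion on demand) can be made concrete with a small sketch. This is not the paper's architecture; all names are hypothetical:

from dataclasses import dataclass, field

@dataclass
class FlowLedger:
    # record_id -> names of the services currently holding a copy.
    locations: dict[str, set] = field(default_factory=dict)

    def create(self, record_id: str, service: str) -> None:
        self.locations[record_id] = {service}

    def transfer(self, record_id: str, to_service: str) -> None:
        # A copy flowed to another service; remember the new holder.
        self.locations.setdefault(record_id, set()).add(to_service)

    def holders_of(self, record_id: str) -> set:
        # GDPR-style access request: where are copies of this record?
        return self.locations.get(record_id, set())

    def erase(self, record_id: str) -> set:
        # Right to erasure: every returned service must delete its copy.
        return self.locations.pop(record_id, set())

# Example: a record created in a storage service flows into an analytics service.
ledger = FlowLedger()
ledger.create("user-42/profile", "object-store")
ledger.transfer("user-42/profile", "analytics")
assert ledger.holders_of("user-42/profile") == {"object-store", "analytics"}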

Lynx D3.3 Intermediate summarisation and annotation services

Julián Moreno-Schneider, Georg Rehm, María Navas-Loro
2019 Zenodo  
This report provides an overview of the intermediate summarisation and annotation services (developed under Task 3.3 of WP3 in the Lynx project). It describes several services that are divided into Annotation Services, whose goal is to enrich documents by annotating semantic information on them, and the Summarisation Service, which aims at generating a new and shorter piece of text from one or several texts (documents or parts of documents). The description of the services is composed of two parts: for each of the services, first, its general approach is presented; then, its application in the Lynx project is introduced, with the focus on datasets used for training new models, rules defined for domain adaptability, or the generation of dictionaries for specific topics or scenarios. As this is an intermediate report, the described services are still under development and will still undergo changes and improvements in the following months. However, substantial work has been done on interoperability (the data interchange format) and on converting the services to run in a Docker microservice architecture. The fact that the services are already properly running in OpenShift is a success.
doi:10.5281/zenodo.3235751 fatcat:h2l4tiwypngunostbndvpsiibq

Lynx D5.4 Testing and Evaluation Plan

Filippo Maganza, Julián Moreno-Schneider, Pascual Boil, Pieter Verhoeven, Christian Sageder
2020 Zenodo  
The purpose of this document is to describe the testing and evaluation plan of the Lynx platform, which is organized in three different levels: the microservice level, the integration level, and the pilot use case level. At the microservice level, the Lynx microservices are tested as single components; the evaluation methodology of this level is discussed and decided within WP3. Integration-level testing aims to verify the functionalities of the Lynx platform implemented by the many microservices that are integrated together. The population workflow, which is used to enrich documents and add them to the LKG, is clearly the functionality of this level that deserves the most attention because of its complexity. The simplest test case for the population workflow consists of running a single instance of it and, when it has completed, checking whether the correctness conditions are satisfied. Moreover, to test the behaviour of the system under stress conditions, workloads of several concurrent population workflow instances will also be tried. The objective of pilot use case level testing is to verify the proper functioning of the Lynx platform from the customer perspective. For this purpose, use case testing will be applied: a black-box software testing technique in which the tester follows the steps defined by the use cases and verifies that they work as expected.
doi:10.5281/zenodo.3865748 fatcat:pos62imdcbh2znd5bqnnbozztu
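
The integration-level test described in this plan (run one population workflow instance, check the correctness conditions, and additionally run many instances as a stress workload) could look roughly like the following pytest-style sketch. The platform calls are stubbed out; run_population_workflow and lkg_contains are assumed names, not the Lynx API:

import concurrent.futures

def run_population_workflow(document: dict) -> str:
    # Stub standing in for the real platform call; returns the new document's ID.
    return f"doc-{abs(hash(document['text'])) % 10000}"

def lkg_contains(doc_id: str) -> bool:
    # Stub standing in for a query against the Legal Knowledge Graph (LKG).
    return True

def test_single_population_workflow():
    # Integration level: one workflow instance, then check correctness conditions.
    doc_id = run_population_workflow({"text": "sample contract"})
    assert lkg_contains(doc_id)

def test_population_workflow_under_stress():
    # Stress condition: many workflow instances submitted concurrently.
    docs = [{"text": f"document {i}"} for i in range(50)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        ids = list(pool.map(run_population_workflow, docs))
    assert all(lkg_contains(doc_id) for doc_id in ids)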

Schmankerl Time Machine: Rechnerisch-explorative Zugänge zur Gastronomie in München (Computational-Exploratory Approaches to Gastronomy in Munich)

Stefanie Schneider, Julian Schulz
2020 Zenodo  
The web application "Schmankerl Time Machine" was developed during the open cultural data hackathon "Coding da Vinci Süd 2019" and won the prize in the "Most Technical" category. It is based on 375 menus from Munich restaurants from the years 1855 to 2008, which the Monacensia of the Munich City Library made available in digital form for the hackathon. The poster presents the idea behind the application, its technical implementation (using, among other tools, Transkribus, the open-source environment R, and the R packages Tidyverse and Shiny) and its functionality. The central question is: how can the enormous variety of the menus be explored in as many different ways as possible, including by a less technically experienced audience? Also presented is a comprehensive sustainability concept following the FAIR principles.
doi:10.5281/zenodo.3688567 fatcat:iwcrk4rovnbqpc4rotc2fcygwa

Curation Technologies for a Cultural Heritage Archive: "Project Tongilbu"

Peter Bourgonje, Julián Moreno-Schneider, Georg Rehm
2019 Zenodo  
We are developing a platform for generic curation technologies, using various NLP procedures, that is specifically targeted at, but not limited to, document collections that are too large for humans to read and go through manually. The aim is to provide prototypical NLP tools such as NER, entity linking, clustering and summarization in order to support rapid exploration of a data set. In this particular submission, the data set in question is the result of "Project Tongilbu", a report by the Korean Ministry of Re-unification on the unification of East and West Germany in the 1990s. The majority of the content in this data set is in German, with small parts in Korean. With the collection being a set of PDF files, we first apply OCR to extract machine-readable text. Focusing on German, we then apply an NER model trained on Wikipedia data, retrieve URIs of recognized entities in the GND (Gemeinsame Normdatei, a German database of entities with additional information), perform temporal analysis and cluster documents according to the retrieved entities they contain. This is then visualized in a curation dashboard. Since support (in terms of tooling, but also training data) for Korean is limited, for the Korean texts we experiment with machine translation of the texts extracted from the PDFs, so that we can apply the German pipeline and project the annotations back onto the original Korean text.
doi:10.5281/zenodo.3404254 fatcat:zxzzbzl3ljgrhd63obw4gtmh3e
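
The German-language pipeline in this abstract (OCR, NER, GND entity linking, clustering by shared entities) is easy to picture as a chain of steps. A minimal sketch under assumed names; every function here is a stand-in, not the project's published API:

def ocr(pdf_path: str) -> str:
    # Stub: extract machine-readable text from a scanned PDF.
    return "Beispieltext über Berlin."

def recognize_entities(text: str) -> list[str]:
    # Stub: the NER model trained on Wikipedia data.
    return ["Berlin"]

def link_to_gnd(entity: str) -> dict:
    # Stub: look up the entity's URI in the GND; the real URI is elided.
    return {"label": entity, "gnd_uri": "https://d-nb.info/gnd/..."}

def curate(pdf_paths: list[str]) -> dict[str, list[dict]]:
    # Run OCR, NER and GND linking per document, then cluster documents
    # according to the entities they share.
    clusters: dict[str, list[dict]] = {}
    for path in pdf_paths:
        entities = [link_to_gnd(e) for e in recognize_entities(ocr(path))]
        doc = {"path": path, "entities": entities}
        for entity in entities:
            clusters.setdefault(entity["label"], []).append(doc)
    return clusters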

Developing a Repository Lifecycle Model and Metadata Description: Modeling and Describing Changes

Juliane Schneider
2020 Zenodo  
In the past decade, a wealth of data repositories and open datasets across all disciplines have been created. Registries of repositories have also been established, mostly by discipline (medical, social sciences) or by ownership (academic, governmental). We have reached a point where a lifecycle model should be constructed for these resources, as well as a set of agreed-upon metadata to describe them. We will present our repository lifecycle model and propose the most likely existing metadata schemas for constructing an overall description for repositories.
doi:10.5281/zenodo.3777055 fatcat:kk76m7qx45azleyfliags2gdd4