Description
An essential challenge in creating FAIR datasets is the often underestimated I, which stands for interoperability. Especially for a dataset that is meant to be exported from its ecosystem, it is important to store the metadata and data in an appropriate, standards-based exchange format.
One possible source of metadata is an electronic lab notebook that stores it in a structured manner. In many cases, however, the internal structure does not match any established metadata scheme, and a mapping is required for a meaningful export. This talk presents a concept of what is necessary to make a generic export from an electronic lab notebook based on Semantic MediaWiki for ingestion into SciCat or into interoperable NeXus files.
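To make the mismatch concrete, here is a minimal Python sketch; the ELN property names and NeXus target paths are invented for illustration and do not come from the talk:

```python
# Hypothetical ELN record: internal property names rarely match an
# established scheme, so an explicit mapping is needed for export.
internal_record = {
    "WaveLength": 1.54,            # ELN-internal property name
    "SampleName": "LaB6 powder",
}

# Assumed mapping from internal names to NeXus-style paths.
mapping = {
    "WaveLength": "entry/instrument/monochromator/wavelength",
    "SampleName": "entry/sample/name",
}

exported = {mapping[key]: value for key, value in internal_record.items()}
```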
First, an existing scheme (e.g. a NeXus application definition) needs to be imported and its class dependencies stored. References to the origin of the created classes and properties are essential. Within MediaWiki, templates are used to assign a set of properties to a page. Together with a configuration that describes through which properties the templates are interconnected, a generic export becomes possible. By selecting a certain measurement, the export script can then extract all essential metadata to create, for example, a SciCat export with project, instrument, sample and dataset information, or to create a NeXus file, knowing which groups need to be created and under which path each property is stored.
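The following sketch illustrates how such a configuration-driven export could look. All names here (page titles, template names, link properties, NeXus paths) are assumptions made for illustration; real code would query Semantic MediaWiki (e.g. via its Ask API) rather than use in-memory dictionaries, and a full NeXus export would also tag each group with its NX_class attribute.

```python
import h5py

# Pages as they might come out of the wiki: a template name plus the
# properties that the template assigns to the page (all names assumed).
pages = {
    "Measurement/001": {"template": "Measurement",
                        "HasSample": "Sample/LaB6",
                        "HasInstrument": "Instrument/D8",
                        "WaveLength": 1.54},
    "Sample/LaB6": {"template": "Sample", "SampleName": "LaB6 powder"},
    "Instrument/D8": {"template": "Instrument", "InstrumentName": "D8 Advance"},
}

# Configuration: which properties interconnect the templates ...
links = {"Measurement": ["HasSample", "HasInstrument"]}

# ... and under which path each property is stored in the target scheme.
nexus_paths = {
    "WaveLength": "entry/instrument/monochromator/wavelength",
    "SampleName": "entry/sample/name",
    "InstrumentName": "entry/instrument/name",
}

def collect(page_name):
    """Merge the properties of a page with those of all linked pages."""
    page = pages[page_name]
    link_props = links.get(page["template"], [])
    merged = {k: v for k, v in page.items()
              if k != "template" and k not in link_props}
    for prop in link_props:
        merged.update(collect(page[prop]))
    return merged

# Selecting one measurement pulls in sample and instrument metadata too;
# the same dictionary could feed a SciCat ingestion payload instead.
metadata = collect("Measurement/001")

with h5py.File("measurement.nxs", "w") as nxs:
    for prop, value in metadata.items():
        # create_dataset creates the intermediate groups along the path
        nxs.create_dataset(nexus_paths[prop], data=value)
```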
Since the information is bound to the property, the mapping can be refined iteratively. This makes the procedure flexible and well suited for existing documentation where metadata schemes are applied at a later stage or need to be updated. In addition, the reference to the original metadata scheme is known throughout the whole pipeline and could be included in the export.
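Continuing the sketch above, such a refinement is a local change: because the target path is bound to the property, only one configuration entry needs to be updated before re-running the export (the new path is again just an assumed example):

```python
# Refine a single property's mapping and re-run the export; no other
# part of the configuration or the wiki content has to change.
nexus_paths["WaveLength"] = "entry/instrument/beam/incident_wavelength"
```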