What's new

  • 2018-07-04: LinkedPipes ETL featured @ ISWC 2018 Demo Session!

    LinkedPipes ETL will be featured as part of the LinkedPipes DCAT-AP Viewer demo at ISWC 2018! See you in October in Monterey, California, USA!

  • 2017-10-30: LinkedPipes ETL @ iiWAS 2017!

    LinkedPipes ETL will be presented at iiWAS 2017! See you in December in Salzburg, Austria!

  • 2017-10-16: LinkedPipes ETL @ ODBASE 2017!

    Data chunking in LinkedPipes ETL will be presented at ODBASE 2017! See you in Rhodes, Greece!

  • 2017-08-17: A set of simple how-tos added

    We added an initial set of simple how-tos that may help you start with LP-ETL. More how-tos are on their way.

  • 2017-07-31: Geocoding with Nominatim tutorial available

    Another tutorial showing geocoding with Nominatim in LP-ETL by @jindrichmynarz is available!

More Tips & Tricks

Featured component

DCAT-AP to CKAN

Loader that loads DCAT-AP v1.1 metadata to a CKAN catalog.

Configuration options:

  • CKAN API URL: URL of the CKAN API of the target catalog, e.g. https://linked.opendata.cz/api/3/action
  • CKAN API key: the API key allowing write access to the CKAN catalog. This can be found on the user detail page in CKAN.
  • CKAN dataset ID: the CKAN dataset ID to be used for the current dataset.
  • Load language (RDF language tag): since DCAT-AP supports multilinguality and CKAN does not, this specifies the RDF language tag of the language in which the metadata should be loaded.
  • Profile: CKAN can be extended with additional metadata fields for datasets (packages) and distributions (resources). If the target CKAN is not extended, choose Pure CKAN.
  • Generate CKAN resources from VoID example resources: if the input metadata contains void:exampleResource, a CKAN resource is created from it.
  • Generate CKAN resource from VoID SPARQL endpoint: if the input metadata contains void:sparqlEndpoint, a CKAN resource is created from it.
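To illustrate what a loader like this talks to, here is a minimal Python sketch that builds a request against CKAN's action API (`package_update`). The endpoint name, payload fields, and API key value are illustrative assumptions about plain CKAN usage, not LP-ETL's actual implementation; the request is only constructed, not sent.

```python
import json
import urllib.request

def build_ckan_update_request(api_url, api_key, dataset):
    """Build (but do not send) a CKAN package_update request.

    api_url is the CKAN action API base, e.g.
    https://linked.opendata.cz/api/3/action; api_key is the
    write-access key found on the user detail page in CKAN.
    """
    body = json.dumps(dataset).encode("utf-8")
    return urllib.request.Request(
        url=api_url.rstrip("/") + "/package_update",
        data=body,
        headers={
            "Authorization": api_key,  # CKAN reads the API key from this header
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Hypothetical payload; "my-dataset" plays the role of the CKAN dataset ID.
req = build_ckan_update_request(
    "https://linked.opendata.cz/api/3/action",
    "0000-SECRET-KEY",
    {"name": "my-dataset", "title": "My dataset"},
)
```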

More components

3-minute screencast



Modular design

Deploy only the components that you actually need. For example, your pipeline development machine needs the whole stack, but your data processing server only needs the backend part. Data processing capabilities are extensible through components: we provide a library of the most basic ones, and when you need something special, you can simply copy and modify an existing one.


All functionality covered by REST APIs

Our frontend uses the same APIs that are available to everyone. This means that you can build your own frontend, integrate only parts of our application, and control everything easily.
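As a sketch of what "same APIs" means in practice, the snippet below builds the kind of HTTP request your own tooling could issue instead of the bundled frontend. The base URL and the `/resources/pipelines` path are hypothetical placeholders; consult the LP-ETL API documentation for the actual resource paths. The request is only constructed, not sent.

```python
import urllib.request

# Assumed local LP-ETL instance; adjust host and port for your deployment.
BASE = "http://localhost:8080"
# Hypothetical pipeline-listing endpoint, used purely for illustration.
PIPELINES = BASE + "/resources/pipelines"

def list_pipelines_request():
    """Build a GET request that a custom frontend could issue."""
    return urllib.request.Request(
        PIPELINES,
        headers={"Accept": "application/ld+json"},  # pipelines are RDF (JSON-LD)
        method="GET",
    )

req = list_pipelines_request()
```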


Almost everything is RDF

Except for our configuration file, everything is in RDF: the ETL pipelines, component configurations, and messages indicating the progress of the pipeline. You can generate pipelines and configurations from your own application using SPARQL. Batch modification of configurations is also a simple text-file operation; no more clicking through every pipeline when migrating.
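Because the pipelines and configurations are plain RDF text files, a migration can be sketched as a search-and-replace over the files on disk. The directory layout, `.jsonld` extension, and IRIs below are assumptions for illustration, not LP-ETL's actual storage scheme.

```python
from pathlib import Path

def migrate_configs(directory, old_iri, new_iri):
    """Rewrite pipeline files in place, swapping one IRI for another.

    Useful e.g. for pointing every extractor at a new SPARQL endpoint
    without opening a single pipeline in the UI. Returns the names of
    the files that were changed.
    """
    changed = []
    # Assumed serialization: one JSON-LD file per pipeline.
    for path in sorted(Path(directory).glob("*.jsonld")):
        text = path.read_text(encoding="utf-8")
        if old_iri in text:
            path.write_text(text.replace(old_iri, new_iri), encoding="utf-8")
            changed.append(path.name)
    return changed
```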