What's new

  • 2017-10-16: LinkedPipes ETL @ ODBASE 2017!

    Data chunking in LinkedPipes ETL will be presented at ODBASE 2017! See you in Rhodes, Greece!

  • 2017-08-17: A set of simple how-tos added

    We added an initial set of simple how-tos that may help you start with LP-ETL. More how-tos are on their way.

  • 2017-07-31: Geocoding with Nominatim tutorial available

    Another tutorial by @jindrichmynarz, showing geocoding with Nominatim in LP-ETL, is available!

  • 2017-06-30: Tabular data to RDF tutorial available

    We have created a tutorial showing conversion of tabular data to RDF, including all transformations and metadata necessary to publish a LOD dataset. The tutorial is based on real-world data of the Local Administrative Units (LAU) code list.

  • 2017-05-05: Feedback welcome

    Current users, please fill out this short usability questionnaire; it should take no more than 2 minutes. Thank you.

More Tips & Tricks

Featured component

Pipeline input

Extractor that allows the user to pass files from an HTTP POST pipeline execution request.
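As a rough sketch, triggering a pipeline execution with input files could look like the following; the endpoint path and form-field names are assumptions made for illustration, not the documented LP-ETL API, so check your instance's REST reference before relying on them:

```shell
# Hypothetical sketch: start a pipeline over HTTP and hand it input files.
# The endpoint path and form-field names below are assumptions, not the
# documented LP-ETL API.

ETL="http://localhost:8080"   # assumed backend address

# Prepare a sample file for the Pipeline input component to pick up.
printf 'id,label\n1,example\n' > input.csv

# POST the pipeline IRI and the file as multipart form data.
# "|| true" keeps this sketch from failing when no server is running.
curl -s -X POST "$ETL/resources/executions" \
     -F "pipeline=$ETL/resources/pipelines/1" \
     -F "input=@input.csv" || true
```

The Pipeline input component would then expose the uploaded file to the rest of the pipeline as an ordinary file input.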

More components

3-minute screencast



Modular design

Deploy only the components that you actually need. For example, your pipeline development machine needs the whole stack, but your data processing server only needs the backend part. Data processing capabilities are extensible through components: we provide a library of the most common ones, and when you need something special, you can simply copy and modify an existing one.


All functionality covered by REST APIs

Our frontend uses the same APIs that are available to everyone. This means you can build your own frontend, integrate only parts of our app, and control everything easily.
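For instance, anything the bundled frontend does can be scripted against the backend directly; the path below is a made-up example, not a documented route:

```shell
# Hypothetical sketch: query the backend REST API directly instead of
# going through the bundled frontend. The path is an assumption.
ETL="http://localhost:8080"

# List pipelines (the frontend issues an equivalent request).
# "|| true" keeps the sketch from failing when no server is running.
curl -s "$ETL/resources/pipelines" || true
```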


Almost everything is RDF

Except for our configuration file, everything is in RDF. This includes the ETL pipelines, component configurations, and the messages indicating pipeline progress. You can generate pipelines and configurations with SPARQL from your own app. Also, batch modification of configurations is a simple text-file operation; no more clicking through every pipeline when migrating.
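As an illustration of such a text-level batch edit, suppose pipelines are stored as Turtle files and a server address changed during a migration; the file layout and IRIs here are invented for the example:

```shell
# Hypothetical sketch: migrate pipeline configurations by plain-text
# search-and-replace. File layout and IRIs are invented for illustration.

mkdir -p pipelines
cat > pipelines/example.ttl <<'EOF'
@prefix lp: <http://linkedpipes.com/ontology/> .
<http://old-server/resources/pipelines/1> a lp:Pipeline .
EOF

# Rewrite the old server address in every pipeline file at once --
# no clicking through each pipeline in a UI.
sed -i 's|http://old-server|http://new-server|g' pipelines/*.ttl

# Show the rewritten triple.
grep 'new-server' pipelines/example.ttl
```

Because the configurations are plain RDF text, the same approach scales from one file to an entire pipeline library with a single command.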