What's new

  • 2017-08-17: A set of simple how-tos added

    We added an initial set of simple how-tos that may help you start with LP-ETL. More how-tos are on their way.

  • 2017-07-31: Geocoding with Nominatim tutorial available

    Another tutorial showing geocoding with Nominatim in LP-ETL by @jindrichmynarz is available!

  • 2017-06-30: Tabular data to RDF tutorial available

    We have created a tutorial showing the conversion of tabular data to RDF, including all the transformations and metadata necessary to publish a LOD dataset. The tutorial is based on real-world data from the Local Administrative Units (LAU) code list.

  • 2017-05-05: Feedback welcome

    Current users, please fill out this short usability questionnaire; it should take no more than 2 minutes. Thank you.

  • 2017-03-15: JSON-LD formatting

    JSON-LD files can now be created from ordinary JSON files by adding a JSON-LD context with the JSON to JSON-LD component. The resulting files can then be formatted as compacted, flattened, or expanded JSON-LD using the Format JSON-LD component.
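To illustrate what the JSON to JSON-LD step does conceptually, here is a minimal stdlib-only sketch: attaching an `@context` maps plain JSON keys to RDF terms. The sample record and the FOAF term IRIs are illustrative assumptions, not the component's defaults.

```python
import json

# A plain JSON record as it might arrive from an upstream source (sample data).
record = json.loads('{"name": "Alice", "homepage": "http://example.org/alice"}')

# Attaching an @context turns plain JSON into JSON-LD by mapping each key to an
# RDF term; the FOAF IRIs below are illustrative choices for this example.
context = {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {"@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id"},
}
jsonld_doc = {"@context": context, **record}

print(json.dumps(jsonld_doc, indent=2))
```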

More Tips & Tricks

Featured component

Pipeline input

Extractor that passes files from an HTTP POST pipeline execution request into the pipeline.
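A minimal sketch of such an HTTP POST execution request, built with the Python standard library only. The endpoint path, the form part names (`pipeline`, `input`), and the payloads are assumptions for illustration; consult your LP-ETL instance's API for the actual contract. The request is constructed but deliberately not sent.

```python
import urllib.request
import uuid

# Hypothetical LP-ETL endpoint and pipeline IRI -- adjust to your instance.
ETL_URL = "http://localhost:8080/resources/executions"
PIPELINE_IRI = "http://localhost:8080/resources/pipelines/0001"

# Build a multipart/form-data body by hand: one part naming the pipeline to
# run, one part carrying an input file for the Pipeline input component.
boundary = uuid.uuid4().hex

def part(name, filename, content_type, payload):
    """Encode a single multipart/form-data part (assumed part layout)."""
    return (f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{name}"; filename="{filename}"\r\n'
            f"Content-Type: {content_type}\r\n\r\n").encode() + payload + b"\r\n"

body = (
    part("pipeline", "pipeline.jsonld", "application/ld+json",
         b'{"iri": "' + PIPELINE_IRI.encode() + b'"}')
    + part("input", "data.csv", "text/csv", b"id,name\n1,Alice\n")
    + f"--{boundary}--\r\n".encode()
)

request = urllib.request.Request(
    ETL_URL, data=body, method="POST",
    headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
)
# urllib.request.urlopen(request) would start the execution; omitted here.
print(request.get_method(), request.full_url)
```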

More components

3-minute screencast



Modular design

Deploy only the components you actually need. For example, your pipeline development machine needs the whole stack, while your data processing server needs only the backend. Data processing capabilities are extensible through components: we provide a library of the most common ones, and when you need something special, you can simply copy and modify an existing one.


All functionality covered by REST APIs

Our frontend uses the same APIs that are available to everyone. This means you can build your own frontend, integrate only parts of our app, and control everything easily.
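As a sketch of consuming such an API from your own code: the helper below parses a pipeline-list response. The endpoint mentioned in the comment and the JSON-LD response shape are illustrative assumptions, not the documented API; the sample body stands in for a real HTTP response.

```python
import json

def pipeline_labels(response_text):
    """Extract pipeline IRIs and labels from an assumed JSON-LD list response."""
    data = json.loads(response_text)
    return {g["@id"]: g.get("label", "") for g in data.get("@graph", [])}

# Example body such as GET http://localhost:8080/resources/pipelines might
# return (sample data, hypothetical shape):
sample = json.dumps({"@graph": [
    {"@id": "http://localhost:8080/resources/pipelines/0001",
     "label": "LAU to RDF"},
]})

print(pipeline_labels(sample))
```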


Almost everything is RDF

Except for our configuration file, everything is RDF: the ETL pipelines, the component configurations, and the messages indicating pipeline progress. You can generate pipelines and configurations with SPARQL from your own app. Batch modification of configurations is a simple text-file operation; no more clicking through every pipeline when migrating.
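Because pipelines are plain RDF text files, a migration like swapping a SPARQL endpoint across many pipelines can be a search-and-replace. The sketch below demonstrates this on a temporary directory; the file name, the predicate IRI, and both endpoint URLs are made-up sample data, and real pipeline files would of course be richer.

```python
import pathlib
import tempfile

# Hypothetical setting to migrate across all pipeline files.
old_endpoint = "http://staging.example.org/sparql"
new_endpoint = "http://production.example.org/sparql"

# Stand-in for a directory of exported pipelines (sample data; the
# predicate IRI below is illustrative, not an LP-ETL vocabulary term).
workdir = pathlib.Path(tempfile.mkdtemp())
pipeline = workdir / "pipeline.trig"
pipeline.write_text(
    "<http://localhost/pipelines/0001/config> "
    "<http://example.org/etl#endpoint> "
    f'"{old_endpoint}" .\n'
)

# The whole migration is a text replacement over every pipeline file.
for path in workdir.glob("*.trig"):
    path.write_text(path.read_text().replace(old_endpoint, new_endpoint))

print(pipeline.read_text())
```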