Metadata-Driven Job

We are looking for a way to build a metadata-driven ETL job where we only need to create a single templated graph. It would be used in the integration phase simply to load source-system tables into our staging environment, and we would like to keep the necessary driving objects in the database. Is this possible?

The graph (or graphs) will begin by reading a job list table and iterating through its entries. Each job entry will have a source table and a destination table, which I am guessing would be set as graph parameters. No job will involve any joins or transforms, just straight source => destination loading, except that it may not pull all columns from the source. The list of columns to load will be kept in a table; a sketch of what these driving tables might look like is included below.
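For instance, the job list could be a simple driving table like the following (the table and column names here are hypothetical, just to make the idea concrete):

    CREATE TABLE etl_job_list (
        job_id      INTEGER PRIMARY KEY,
        source_tbl  VARCHAR(128) NOT NULL,  -- table to read from
        dest_tbl    VARCHAR(128) NOT NULL,  -- staging table to load into
        enabled     CHAR(1) DEFAULT 'Y'     -- lets individual jobs be switched off
    );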

- The graph will select from the columns table where tbl_name = 'source_tbl' and use the returned columns to set the source metadata.
- The graph will select from the columns table where tbl_name = 'dest_tbl' and use the returned columns to set the destination metadata.
- We will not necessarily do a SELECT *, since we may or may not be loading all columns (see the sketch after this list).
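Something like this could hold the column lists and serve both lookups; again, the shape and names are just an assumption:

    CREATE TABLE etl_table_columns (
        tbl_name  VARCHAR(128) NOT NULL,  -- table the column belongs to
        col_name  VARCHAR(128) NOT NULL,  -- column to include in the load
        col_type  VARCHAR(64)  NOT NULL,  -- type used when building metadata
        ord       INTEGER      NOT NULL   -- field order within the metadata
    );

    -- run once for the source and once for the destination of the current job
    SELECT col_name, col_type
    FROM etl_table_columns
    WHERE tbl_name = 'source_tbl'
    ORDER BY ord;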

After it populates the metadata, the graph will then need to build the mappings. I assume we will have another table holding column mappings, since source and destination columns may not share the same name, e.g. 'f_name' => 'first_name'. A possible shape for that table is sketched below.
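One way the mapping table could look, with hypothetical names:

    CREATE TABLE etl_column_mappings (
        tbl_name  VARCHAR(128) NOT NULL,  -- destination table the mapping applies to
        src_col   VARCHAR(128) NOT NULL,  -- column name in the source, e.g. 'f_name'
        dest_col  VARCHAR(128) NOT NULL   -- column name in the destination, e.g. 'first_name'
    );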

Does anyone have suggestions, example graphs, or components that automate this kind of straight source-to-destination iteration with a generic graph?

Hmm,
this looks like an interesting endeavor. Your approach should work, especially if you use linked-in metadata and parameter files. I'm not sure what "transformation" of the data you want to perform, but it sounds like you just want to move it 1:1, or almost. In that case, you could use the "star" notation in CTL, which allows the same code to be used in any situation; which fields get assigned/propagated from source to destination depends on which pair of source and destination metadata you choose.

Look at mapping in CTL for details.
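For example, a minimal CTL2 transform using the star notation might look like this (assuming a Reformat-style component; the metadata on the input and output ports is whatever your generic graph assigns at runtime):

    // Copy every field whose name exists in both the source and
    // destination metadata; fields without a name match are skipped.
    function integer transform() {
        $out.0.* = $in.0.*;  // star notation: fields are matched by name
        return ALL;          // send the record to all output ports
    }

Because the matching happens against whatever metadata is attached to the ports, the same transform works unchanged for every source/destination pair your job list produces.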

David.