Fabric Deployment Pipeline Rule to set Data Pipeline Parameters
+ Deployment Stage Conditional Formatting in Power BI Reports
I probably should give the Fabric Variable libraries (preview) a try. But a preview is essentially a beta version, and if a solution/workaround based on General Availability (GA) features works, it’s not stupid, right?
The only deployment rule for notebooks is the default lakehouse rule.

So how can we pass parameters from deployment rules to a notebook? Maybe there is another GA solution [let me know], but the following works well:
Let’s say your deployment pipeline has a “TEST Fabric Deployment DEV” workspace for development and a “TEST Fabric Deployment TEST” workspace for testing.

➡️ Add an additional DEV lakehouse into the DEV workspace. Upload Data Pipeline Parameters.json into the lakehouse Files section.

{
  "environment": {
    "Stage": "DEV",
    "Parameter1": "abc",
    "Parameter2": "xyz",
    "Parameter3": "123"
  }
}
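You can upload the file through the lakehouse UI, or write it from a notebook. Here is a minimal sketch, assuming the DEV lakehouse is attached as the notebook’s default lakehouse:

import json

# Sketch: write the parameters file into the Files section of the attached lakehouse.
dev_params = {
    "environment": {
        "Stage": "DEV",
        "Parameter1": "abc",
        "Parameter2": "xyz",
        "Parameter3": "123",
    }
}
# The last argument (True) overwrites the file if it already exists.
mssparkutils.fs.put("Files/Data Pipeline Parameters.json", json.dumps(dev_params, indent=2), True)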
➡️ Add an additional TEST lakehouse into the TEST workspace. Upload Data Pipeline Parameters.json into the lakehouse Files section. Same file name and JSON structure, but different values.

{
  "environment": {
    "Stage": "TEST",
    "Parameter1": "xxx",
    "Parameter2": "zzz",
    "Parameter3": "888"
  }
}
➡️ Add a Read Deployment Parameters notebook activity into the data pipeline. The notebook contains the following code:
import json
# Read parameters from a file (in the default lakehouse)
config_path = "Files/Data Pipeline Parameters.json"
raw_lines = spark.read.text(config_path).collect()
raw_json = "\n".join([row['value'] for row in raw_lines])
config = json.loads(raw_json)
# Output JSON
mssparkutils.notebook.exit(json.dumps(config))
The notebook reads a JSON file from the default lakehouse and outputs it into the data pipeline as ExitValue.
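For the DEV file above, the ExitValue is simply the JSON serialized back into a single string:

{"environment": {"Stage": "DEV", "Parameter1": "abc", "Parameter2": "xyz", "Parameter3": "123"}}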
➡️ In your main Data Processing notebook activity, which runs after Read Deployment Parameters, add a base parameter deployment_stage_parameters with the following dynamic content value:
@activity('Read Deployment Parameters').output.result.ExitValue


This expression reads ExitValue from the output into the deployment_stage_parameters parameter.
Now add a parameter cell into the Data Processing notebook. deployment_stage_parameters will contain the JSON.
deployment_stage_parameters = "" # base parameter (input)

Then, in the same notebook, parse the JSON and use the parameters for whatever you want.
import json
config_dict = json.loads(deployment_stage_parameters)
env = config_dict.get("environment", {})
# Extract individual values safely
default_params = {
    "stage": env.get("Stage", ""),
    "param1": env.get("Parameter1", ""),
    "param2": env.get("Parameter2", ""),
    "param3": env.get("Parameter3", ""),
}
stage = default_params["stage"]
sql = f"SELECT '{stage}' AS Stage"
mssparkutils.notebook.exit(sql)
➡️ Optional: generate a SQL query and save the stage name into a Warehouse table. You can do that from the same notebook, or pass the SQL query (mssparkutils.notebook.exit(sql)) farther along the pipeline into the following Copy activity.
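If you prefer the notebook route, here is a minimal sketch, with one substitution named up front: it saves the stage into a Delta table in the attached lakehouse (hypothetical table name deployment_stage) rather than directly into the Warehouse; for the Warehouse itself, the Copy activity route below is the path this walkthrough uses.

from pyspark.sql import Row

# Sketch only: persist the stage from the same notebook.
# Assumption: the default lakehouse is attached; "deployment_stage" is a made-up table name.
stage_df = spark.createDataFrame([Row(Stage=stage)])
stage_df.write.mode("overwrite").saveAsTable("deployment_stage")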

➡️ Now in the Copy activity use the
@activity('Data Processing').output.result.exitValue
expression (it reads the output of the previous activity) as the source SQL query, and write the result into a warehouse. The expression returns a value (the SQL query) that looks like one of the following rows:
SELECT 'DEV' AS Stage
SELECT 'TEST' AS Stage

➡️ Create a deployment rule (for the TEST stage) that changes the default lakehouse of the notebook from DEV to TEST.

➡️ Deploy DEV into TEST (do not deploy the DEV lakehouse into the next stage; it’s not needed there).
Results:
Data Processing Notebook in DEV workspace uses parameters from JSON file stored in DEV lakehouse.
Data Processing Notebook in TEST workspace uses parameters from JSON file stored in TEST lakehouse.
You can store multiple parameters in the JSON files, so a single deployment rule covers all parameters needed in a data pipeline: data source URIs, database names, Azure Key Vault URIs and secret names, and so on.
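For example, a DEV parameters file could carry all of these at once (the names and values below are purely illustrative):

{
  "environment": {
    "Stage": "DEV",
    "SourceUri": "https://dev-api.example.com/data",
    "DatabaseName": "SalesDB_DEV",
    "KeyVaultUri": "https://kv-dev.vault.azure.net/",
    "SecretName": "sql-password-dev"
  }
}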
The deployment stage name (e.g. “DEV”, “TEST”, “PROD”) is passed as one of the parameters, stored in the warehouse table, and can be used in reports.
For example, the color of the report header and the text and color of the subtitle change automatically for each deployment stage. Just read the stage name from the warehouse table and use it for conditional formatting.
[Report screenshots: header and subtitle formatting for the DEV, TEST and PROD stages]