Summary
The materialized_lake_view materialization macro compares old_relation.type.value against the string 'materializedview' (no underscore), but RelationType.MaterializedView.value is 'materialized_view' (with underscore). The comparison is therefore always unequal, so the adapter always drops the existing relation before running CREATE OR REPLACE MATERIALIZED LAKE VIEW. Combined with #, this forces a drop/recreate on every MLV rerun, producing a downtime window for consumers.
Environment
dbt-fabricspark==1.9.5
Offending code
dbt/include/fabricspark/macros/materializations/models/materialized_lake_view/materialized_lake_view.sql:111:
{#-- Drop existing object if it's a different type (table/view) --#}
{% if old_relation is not none and old_relation.type.value != 'materializedview' %}
{{ log("Dropping " ~ old_relation.type ~ " " ~ old_relation.render() ~ " to replace with materialized lake view") }}
{{ adapter.drop_relation(old_relation) }}
{% endif %}
Earlier in the same file, the target relation is built with the correct underscored value (line 42):
{%- set target_relation = api.Relation.create(
identifier=identifier,
schema=schema,
database=database,
type='materialized_view') -%}
So two places in the same file disagree — line 42 is right, line 111 is wrong.
Proof
>>> from dbt.adapters.contracts.relation import RelationType
>>> RelationType.MaterializedView.value
'materialized_view'
>>> 'materialized_view' != 'materializedview'
True
No RelationType member in dbt-core has the value 'materializedview', so the check evaluates to True for every possible old_relation.
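The check can also be reproduced without a dbt installation by mirroring the enum. The RelationType members below are a sketch copied by hand from dbt-core's dbt.adapters.contracts.relation — treat the exact member list as an assumption and verify it against your installed dbt version:

```python
from enum import Enum

# Assumed mirror of dbt-core's RelationType (verify against your dbt version).
class RelationType(str, Enum):
    Table = "table"
    View = "view"
    CTE = "cte"
    MaterializedView = "materialized_view"
    External = "external"

# The macro's comparison target has no underscore, so it matches no member:
assert all(m.value != "materializedview" for m in RelationType)

# The canonical, underscored value matches exactly one member:
assert RelationType.MaterializedView.value == "materialized_view"
```

Because no member ever equals 'materializedview', the macro's guard is True for every relation type, including an existing MLV.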
Observed effect
Every rerun of an MLV model triggers adapter.drop_relation(old_relation) immediately before create or replace materialized lake view, even when old_relation is already the same MLV. CREATE OR REPLACE MATERIALIZED LAKE VIEW is atomic (Delta transactional swap), so on its own it would be zero-downtime for readers — but the preceding drop removes the view first.
Suggested fix
Change line 111 to compare against the canonical value:
{% if old_relation is not none and old_relation.type.value != 'materialized_view' %}
This issue needs to land together with the companion parser bug — the parser currently classifies existing MLVs as Table, so fixing only the typo here still causes a drop. Both fixes together enable zero-downtime MLV rebuilds.
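To see why both fixes must land together, here is a minimal Python sketch of the macro's guard after the typo fix (should_drop is a hypothetical helper standing in for the Jinja condition, not code from the adapter):

```python
from typing import Optional

def should_drop(old_relation_type: Optional[str]) -> bool:
    # Mirrors the macro's guard with the typo fixed:
    # drop only when an existing relation is NOT already an MLV.
    return old_relation_type is not None and old_relation_type != "materialized_view"

# Typo fixed, but the parser still misclassifies the MLV as a table -> still drops:
assert should_drop("table") is True

# Both fixes landed: the parser reports materialized_view -> no drop, no downtime:
assert should_drop("materialized_view") is False

# First run, no existing relation -> nothing to drop:
assert should_drop(None) is False
```

With only the typo fixed, the misreported 'table' type still trips the guard, so the downtime window remains until the parser fix lands as well.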