• Azure Synapse Link - Correct the structure of a CSV file when the data model changes

    In Azure Synapse Link for Dataverse, Dataverse Tables are synchronized to CSV files. We run into issues when the data model of a Table changes.


    Indeed, when a new column is added to a Table, only new or updated records include the new column in the CSV file. Existing records do not.


    As a result, the rows in the CSV file do not all have the same number of columns, which is not standard and makes the file unreadable by many tools and languages. Today, the only way to fix the CSV file is to remove the affected Table from the synchronization and add it again.
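    To illustrate the problem, here is a minimal sketch (column names and values are hypothetical, not from a real export) showing how a Synapse Link CSV ends up with rows of different widths after a new column is added:

```python
import csv
import io

# Hypothetical excerpt of a Synapse Link CSV after a "discount" column
# was added to the Table: only the updated record (A2) carries the new
# field, so the rows no longer have a uniform width.
ragged_csv = "A1,Contoso,100\nA2,Fabrikam,200,0.15\n"

rows = list(csv.reader(io.StringIO(ragged_csv)))
field_counts = sorted({len(row) for row in rows})
print(field_counts)  # → [3, 4]: two different row widths in one file
```

    Strict CSV readers (for example, loaders that infer the schema from the header row) typically reject or misparse such a file, which is exactly the failure described above.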


    This is an even bigger issue because Microsoft can deploy new fields on a Table at any time, and may do so in Production-type environments before Sandbox-type environments.

    As a result, we face unpredictable issues in the Azure Synapse Link CSV files that we can only resolve after they occur, even in Production.


    The idea is the following: when a new column is added to a Table synchronized in Azure Synapse Link, all records in the CSV file should be updated to include the new column, so that the file remains a standard CSV accepted by all tools and languages.
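    The requested behavior amounts to padding every existing record to the new schema width. A minimal sketch of that normalization (the function name and fill value are illustrative assumptions, not part of the product):

```python
def pad_rows(rows, fill=""):
    """Pad every row to the widest row so the CSV becomes rectangular.

    This mimics what the suggestion asks Synapse Link to do: backfill
    existing records with an empty value for the newly added column.
    """
    width = max(len(row) for row in rows)
    return [row + [fill] * (width - len(row)) for row in rows]

ragged = [["A1", "Contoso", "100"],
          ["A2", "Fabrikam", "200", "0.15"]]
fixed = pad_rows(ragged)
# every row now has 4 fields; the missing value is an empty string
```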

  • More control/visibility on new fields deployed by Microsoft on Dataverse Tables

    Microsoft can deploy new fields on Dataverse Tables at any time, and may do so in Production-type environments before Sandbox-type environments. These unexpected data-model changes can cause issues in applications (with their own constraints) that consume Dataverse data.


    The suggestion is:

    • To have advance visibility into the fields Microsoft deploys in Dataverse
    • That Microsoft always deploys new fields to Sandbox-type environments before Production-type environments
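    Until such guarantees exist, the gap can at least be detected by comparing field lists between environments. A minimal sketch, assuming the field lists have already been retrieved from each environment's metadata (the function name and sample fields are hypothetical):

```python
def fields_missing_from_sandbox(sandbox_fields, prod_fields):
    """Report fields present in Production that never appeared in
    Sandbox first -- the situation the suggestion aims to prevent."""
    return sorted(set(prod_fields) - set(sandbox_fields))

# Hypothetical field lists pulled from each environment's metadata
sandbox = ["accountid", "name", "revenue"]
prod = ["accountid", "name", "revenue", "msft_newfield"]
print(fields_missing_from_sandbox(sandbox, prod))  # → ['msft_newfield']
```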