  • Enhancements to Multi-User Store Commerce Installation

    When using Entra ID single sign-on for Store Commerce in combination with a shared installation (multi-user, --UseCommonApplicationData), there are two main challenges to handle:


    - Adding users to the local security group RetailChannelUsers is a challenge when installing in user space, since we cannot easily add every user who needs to use the computer to that local security group. Group policy is not always an option because of the security setup, so a way to add users centrally in HQ and have the membership replicate to the local group (or an equivalent mechanism based on Entra ID groups) would be preferred.


    - Handling the requirement that only one instance of the Store Commerce application can run at a time is also a challenge, since users are not always aware that they need to shut down the application when they only lock the screen temporarily. We would need either a way to terminate the running instance, or support for multiple users sharing the same device with Entra ID without having to reboot the computer. A sketch of the manual per-device steps involved follows this list.
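
    To make the pain point concrete, the sketch below shows the manual per-device steps in script form (an illustration only: it assumes an elevated session on the device, and the Store Commerce process name is a placeholder that should be verified against the actual installation). This is the kind of local scripting we would like to replace with central management through HQ/Entra ID groups.

      # Minimal sketch of the per-device steps (assumptions: runs elevated on the device;
      # the Store Commerce executable name below is a placeholder, verify it locally).
      import subprocess

      RETAIL_GROUP = "RetailChannelUsers"
      STORE_COMMERCE_EXE = "StoreCommerce.exe"  # assumed process name

      def add_user_to_retail_group(account: str) -> None:
          # Adds one account to the local RetailChannelUsers security group.
          # This is the step we cannot do centrally today for every user of the device.
          subprocess.run(["net", "localgroup", RETAIL_GROUP, account, "/add"], check=True)

      def kill_running_store_commerce() -> None:
          # Force-terminates a running Store Commerce instance so the next user can sign in.
          # check=False because taskkill returns non-zero when no instance is running.
          subprocess.run(["taskkill", "/IM", STORE_COMMERCE_EXE, "/F"], check=False)

      if __name__ == "__main__":
          add_user_to_retail_group(r"CONTOSO\cashier01")  # hypothetical account
          kill_running_store_commerce()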

  • Improve staging performance in parallel processing DMF scenarios

    In the Data management framework (DMF) parameters there is a control for how data is inserted into the staging table from an uploaded file (or from the integration queue). The "Data access mode" control has two possible settings:


    - Table or view (Fast load)

    - Table or view


    Fast load is the default setting. It adds two options to the insert issued by the DIXF/SSIS logic: check constraints and a table lock (TABLOCK). This is prudent and efficient, since best practice says bulk insert operations are faster when they take a full table lock.


    However, it is not ideal for parallel processing of smaller payloads, where full table locks can hurt performance because the processing of data starts at the same time as the inserts (causing LCK_M_X waits and delays when inserting into the staging table).
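
    To illustrate the contention, here is a minimal sketch (not the actual DIXF implementation; the connection string, table name and columns are placeholders) of several parallel workers inserting small payloads into the same staging table. With the TABLOCK hint, which mirrors the fast-load behaviour, each worker queues behind the previous worker's exclusive table lock and shows up as LCK_M_X waits; without it the small payloads insert concurrently.

      # Sketch only: placeholder connection string, table and columns.
      import threading
      import pyodbc

      CONN_STR = ("Driver={ODBC Driver 17 for SQL Server};"
                  "Server=localhost;Database=AxDB;Trusted_Connection=yes")  # assumed

      def insert_payload(worker_id: int, use_tablock: bool) -> None:
          # With TABLOCK the whole transaction holds an exclusive table lock, so
          # parallel workers serialize; without it they only take row/page locks.
          hint = " WITH (TABLOCK)" if use_tablock else ""
          sql = (f"INSERT INTO dbo.MyStagingTable{hint} "
                 "(DEFINITIONGROUP, EXECUTIONID, FIELD1) VALUES (?, ?, ?)")
          conn = pyodbc.connect(CONN_STR, autocommit=False)
          try:
              cur = conn.cursor()
              for row in range(1000):  # small payload per message
                  cur.execute(sql, "MYDEFGROUP", f"Exec-{worker_id}", f"value-{row}")
              conn.commit()
          finally:
              conn.close()

      def run_parallel(use_tablock: bool, workers: int = 8) -> None:
          threads = [threading.Thread(target=insert_payload, args=(i, use_tablock))
                     for i in range(workers)]
          for t in threads:
              t.start()
          for t in threads:
              t.join()

      if __name__ == "__main__":
          run_parallel(use_tablock=True)    # serialized behind the table lock (LCK_M_X waits)
          run_parallel(use_tablock=False)   # concurrent row-level inserts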


    When using recurring integration there are many scenarios where parallel processing is essential, but because this parameter is global there is currently no way to follow best practice for every integration scenario: either we use the bulk insert with a table lock everywhere, or we don't use it at all.


    My suggestion is to either expose this parameter per DMF project, or make it part of the "Run messages in order" control on DMF recurring integration batch jobs. That would enable better parallel processing where it is needed, while still letting bulk inserts of larger payloads use the table lock for better performance.


    The change would have to be made both in the DMF/SSIS components and in the DMF project settings that expose this parameter per project.