  • Support sorting by lookup columns that reference virtual entities in model-driven app views

    Current Behavior:

     

    When a model-driven app view includes a lookup column that references a virtual entity (such as the Microsoft Entra ID / AAD User table), attempting to sort the view by that column results in a generic SQL error:


    SQL error: Generic SQL error. [...]

     

    SQL error 207 means "Invalid column name". This occurs because Dataverse's FetchXML-to-SQL translator generates an ORDER BY clause referencing a companion name column (e.g., cr5a0__ProjectName) that is never created for virtual entity lookups - virtual entities have no physical SQL storage, so the denormalized name column pattern does not apply.
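    The failure mode can be made concrete with a minimal sketch. The companion-column naming (appending "Name" to the lookup's logical name, as in cr5a0__ProjectName) follows the pattern the error points at, but the function and its logic are hypothetical illustrations, not the actual Dataverse translator:

    ```python
    # Hypothetical sketch of the FetchXML-to-SQL ORDER BY translation gap.
    # The real translator is internal to Dataverse; this only illustrates
    # why sorting works for physical lookups and fails for virtual ones.

    def order_by_clause(lookup_column: str, target_is_virtual: bool) -> str:
        """Translate a FetchXML <order attribute='...'/> on a lookup column."""
        # Standard lookups get a denormalized companion name column,
        # e.g. cr5a0__Project -> cr5a0__ProjectName.
        name_column = f"{lookup_column}Name"
        if target_is_virtual:
            # Virtual entities have no physical SQL storage, so the companion
            # name column was never created; referencing it fails at execution
            # time with SQL error 207 ("Invalid column name").
            raise ValueError(f"Invalid column name '{name_column}' (SQL error 207)")
        return f"ORDER BY [{name_column}] ASC"

    print(order_by_clause("cr5a0__Project", target_is_virtual=False))
    ```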

     

    According to current documentation, this is a known platform limitation:

    "Although you can add virtual table columns as a lookup on a grid or other UI views, you cannot filter or sort based on this virtual table lookup column."

     

    Requested Improvements:

     

    This idea contains two requests:

     

    1. Enable sorting by virtual entity lookup columns

    Users should be able to sort model-driven app views by lookup columns regardless of whether the target table is a physical or virtual entity. The platform could resolve the display name at query time or cache it in a denormalized column to support sorting, similar to how it handles standard lookups.

     

    2. Improve the error message when sorting is not supported

    If sorting by virtual entity lookups remains unsupported, the error message should be user-friendly and descriptive rather than exposing a raw SQL error. For example:

    "Sorting by this column is not supported because it references a virtual entity. Please sort by a different column."

    The current error, "SQL error: Generic SQL error, Sql Number: 207", is confusing, alarming, and provides no actionable guidance. Users and administrators cannot reasonably determine the root cause from this message without deep technical investigation.

     

    Business Impact:

    • Lookup columns referencing virtual entities (especially the built-in Microsoft Entra ID table) are common in enterprise environments
    • Users expect sorting to work on any visible column in a view
    • The current behavior blocks basic usability with no clear indication of why
    • The raw SQL error creates unnecessary support cases and alarm

    Affected Components:

    • Dataverse FetchXML-to-SQL translator (ORDER BY generation)
    • Model-driven app view rendering (subgrid sorting)
    • Virtual entity metadata integration
    • Error handling and user-facing error messages
  • Allow administrators to opt out of Default environment inactivity notification emails or disable automatic re-creation of the Default environment

    Current Behavior

    When a tenant's Default environment has no user activity for an extended period, the Power Platform automatic cleanup mechanism sends warning notification emails to tenant administrators and eventually deletes the environment. After deletion, a new replacement Default environment is automatically created. This new environment also becomes inactive over time, triggering the same notification cycle again.


    For organizations that do not use Power Platform (for example, tenants that only use Power BI), this creates an endless loop of notification emails that cannot be stopped. The official documentation confirms:


    "You can't turn off this cleanup mechanism. However, you can review the last activity date for environments in the Power Platform admin center."

    Source: Automatic deletion of Power Platform environments (Microsoft Learn)


    The only options to prevent the cycle are triggering activity on the environment periodically or acquiring a premium license, neither of which is practical for organizations that do not use Power Platform.


    Requested Improvements

    This idea contains two requests:

    1. Allow administrators to opt out of inactivity notification emails: Tenant administrators should have the option to acknowledge the cleanup process and suppress future notification emails for the Default environment. For example, a one-time "Do not notify me about this environment's inactivity" toggle in the Power Platform Admin Center.
    2. Allow administrators to disable automatic re-creation of the Default environment: After a Default environment is deleted due to inactivity, administrators should have the option to prevent the system from automatically creating a new replacement Default environment. This would break the cycle entirely for tenants that do not need the environment.


    Business Impact

    • Organizations that only use other Microsoft products (e.g., Power BI, Microsoft 365) but do not use Power Platform receive recurring notification emails indefinitely with no way to stop them
    • This creates unnecessary confusion and concern for administrators who do not understand why they are receiving deletion warnings for a service they do not use
    • The current behavior generates avoidable support cases, consuming both customer and Microsoft support resources
    • Providing an opt-out option would improve administrator experience and reduce noise for non-Power Platform tenants


    Affected Components

    • Power Platform automatic environment cleanup mechanism
    • Default environment lifecycle and re-creation logic
    • Administrator notification email system
    • Power Platform Admin Center settings
  • Customer-facing uninstall option for unused first-party Microsoft-published managed solutions

    In the Power Platform admin center, the Environments > Dynamics 365 apps surface is intentionally an install, configure, and update experience and does not offer an Uninstall action for first-party Microsoft-published managed solutions. The same removal request from the Power Apps > Solutions page is, in practice, almost always blocked by managed properties or by dependencies the customer cannot resolve. The combined result is that solutions from retired or unused first-party product families remain permanently installed on an environment even when the customer has never adopted them and has no plan to. Administrators discover them, recognize them as unused, and reasonably try to remove them, but there is no supported customer path to do so.


    This is not a hypothetical concern. A real example surfaced through a recent Microsoft Support case for a small education-sector tenant running on its default production environment. The tenant relies on the included Dataverse quota (3 GB database, 3 GB file, 1 GB log) and has no paid storage add-ons. On that environment, the residual first-party footprint that the customer cannot remove looks like the following:


    • The Project Service Automation (PSA) family, covering Project Service Core and 12 related Project / Scheduling solutions, is the largest contributor and accounts for the bulk of the approximately 520 MB observed in the Solution table on this environment.
    • PSA itself is a retired product that reached end of support on March 31, 2025, with Project Operations as the supported successor, and there is no customer-removable path for the residual solutions.
    • Microsoft Check-ins is a platform-provisioned first-party package that maintains a persistent footprint on the environment and is also not customer-removable.
    • Data Archive Service - Trial is another platform-provisioned first-party package, related to the long-term retention capability, with the same persistent footprint and no customer-removable path.


    For a tenant of this profile, ~520 MB inside the Solution table is a meaningful share of the 3 GB included database quota, and it is content the customer never asked for and cannot release. The orphaned-component cleanup script that Microsoft Support runs in the backend addresses a different scenario (orphaned shared web resources in WebResourceBase) and does not touch this category, so support engineers today have no remediation lever to offer beyond "leave it in place."
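    For scale, the share of the included quota consumed by this footprint is simple arithmetic (figures taken from the support case above):

    ```python
    # Share of the included database quota held by the non-removable
    # first-party footprint, using the figures from the support case.
    solution_table_mb = 520           # observed residual footprint in the Solution table
    included_db_quota_mb = 3 * 1024   # 3 GB included Dataverse database quota

    share = solution_table_mb / included_db_quota_mb
    print(f"{share:.0%}")  # prints 17%
    ```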


    The pattern this creates is repetitive and avoidable.

    A customer notices the unused solutions, attempts removal, hits the missing-Uninstall behavior, and opens a support case. The support engineer confirms the behavior is by design, recommends against forcing removal because of dependency and data-loss risk, and routes any further capacity need to the documented Dataverse capacity-management path.


    Each cycle consumes both customer and Microsoft time on something that cannot be remediated under the current product behavior, and the absence of a definitive public statement on Microsoft Learn (especially for Microsoft Check-ins and Data Archive Service - Trial, where there is no documented customer-side uninstall procedure at all) reinforces the loop.


    Several improvements would address this in a way that is consistent with how the platform already behaves elsewhere, listed in order of preference.


    1. The first and strongest option is to expose an Uninstall action in PPAC > Dynamics 365 apps specifically for first-party solutions that the platform itself determines are safe to remove, such as packages from retired products, trial packages, and packages that have never been used in the environment and have no remaining dependencies.
    2. A second option, slightly weaker but still valuable, is to make the same category of solutions properly removable through Power Apps > Solutions, paired with a clear and actionable dependency report so the customer understands exactly what blocks removal and what is safe to proceed with.
    3. A third option, much lower effort but still meaningful, is to publish definitive Microsoft Learn guidance, on a per-solution-family basis, stating whether each first-party solution is customer-removable, what the prerequisites are, and what the supported mechanism is. Even an explicit "not customer-removable, by design" statement would significantly reduce the support pattern described above.
    4. Finally, as a longer-term improvement, an opt-in platform job could automatically retire residual solution footprints from end-of-support first-party products (for example, removing PSA components from environments that have not used them for N months following the EOS date), aligning the platform behavior with the product lifecycle.


    This request is intentionally narrow and does not ask for bulk removal of actively supported first-party solutions, nor for any action that would affect tenants currently using those products. It asks specifically for customer-facing clarity and optional removal of unused or retired first-party footprints, with the platform itself deciding which solutions qualify. The goal is to give administrators an honest answer to a reasonable question they keep asking: "if I am not using this, and Microsoft has retired it, why must it stay?"

  • Reflect default environment storage quota in tenant-level capacity summary and alerts

    Problem Statement


    The tenant-level storage capacity summary in the Power Platform Admin Center calculates entitled capacity based solely on purchased license SKUs. This creates a blind spot: tenants that operate exclusively through Microsoft 365 licensing (Business Basic, Business Standard, Office 365 E1/E3) receive 0 GB of Dataverse File, Database, and Log entitlement on the summary page, despite the fact that every default environment is provisioned with a built-in storage quota.

     

    Per Microsoft Learn: 

    "The default environment has the following included storage capacity: 3 GB Dataverse database capacity, 3 GB Dataverse file capacity, and 1 GB Dataverse log capacity."

     

    The automated capacity alert system uses the license-based entitlement figure, not the real available capacity, to calculate threshold breaches. The result is false-positive over-capacity warnings.


    Example


    A small business using Microsoft 365 Business Standard licenses has one default Dataverse environment. They use two Canvas apps and a few Power Automate flows, accumulating 1.67 GB of file storage (mainly from platform-managed web resources and note attachments). Their tenant-level summary shows:

    Type | Entitled | Used    | Status
    File | 0.00 GB  | 1.67 GB | 100% Over-Capacity

     

    The admin receives a weekly email: "You're out of File capacity. Your tenant has used 100 percent of available File storage. Please act immediately to continue operating without disruptions."

    In reality, the default environment holds a 3 GB included file quota, so actual usage is only 56% of the true limit and no operational risk exists. The customer opens a support case, only to learn that the alert reflects a presentation gap rather than a real capacity problem.

     

    This is not an edge case but the standard experience for every M365-only tenant that uses Dataverse in a default environment.


    Why This Happens


    The root cause is a presentation gap between two layers:

    1. License layer (entitlement engine): Calculates capacity from license SKUs. M365 SKUs like Business Basic and Office 365 E1 grant CDS Lite capacity for Teams environments but contribute 0 GB to the standard Dataverse capacity pools. The entitlement engine correctly reports 0 GB - these licenses genuinely do not purchase standalone Dataverse capacity.
    2. Environment layer (included quota): Every default environment is allocated 3 GB DB / 3 GB file / 1 GB log at provisioning time, independent of licensing. This quota is visible only when drilling into the environment-level details view. The tenant-level summary does not aggregate it.

    The alert pipeline reads from layer 1. The real capacity exists in layer 2. The disconnect produces warnings that are technically accurate at the license level but meaningless for customers who operate within the included quota.
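    The two calculations can be contrasted with a small sketch using the figures from the example above. The 3 GB included file quota is documented; the aggregation function itself is an illustrative assumption, not the actual alert pipeline:

    ```python
    # Sketch of the two capacity readings for the example tenant.
    # Layer 1 (license entitlement) is what the alert pipeline reads today;
    # adding the default environment's included quota gives the real limit.

    def usage_percent(used_gb: float, entitled_gb: float) -> int:
        """Usage as a whole-number percentage of entitled capacity."""
        if entitled_gb <= 0:
            # Any usage against a 0 GB entitlement reads as fully over-capacity.
            return 100
        return round(used_gb / entitled_gb * 100)

    license_entitlement_gb = 0.0  # layer 1: M365-only tenant, no purchased file capacity
    included_quota_gb = 3.0       # layer 2: default environment's included file quota
    used_gb = 1.67

    # What the alert pipeline sees today (license layer only):
    print(usage_percent(used_gb, license_entitlement_gb))                     # prints 100
    # What the tenant can actually consume (license + included quota):
    print(usage_percent(used_gb, license_entitlement_gb + included_quota_gb)) # prints 56
    ```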


    Suggestion


    Include the default environment's built-in quota in the tenant-level capacity calculation. Specifically:

    1. Summary tab: Add a "Default environment included capacity" row under "Storage capacity, by source" - alongside "Org (tenant) default", "User licenses", and "Additional storage". This shows the 3/3/1 GB allocation transparently.
    2. Alert threshold calculation: Factor the default environment's included quota into the over-capacity check. A tenant using 1.67 GB of file storage with a 3 GB default quota should show 56% usage, not 100%.
    3. Alert email wording: When a tenant's overage is driven entirely by the absence of license-based entitlement rather than actual consumption exceeding the included quota, the email could state: "Your default environment's included storage is being used. If your usage exceeds the included 3 GB, consider purchasing additional capacity add-ons."


    What This Would Not Change


    • The StorageDriven capacity model and overflow rules would remain intact.
    • Tenants whose usage genuinely exceeds the 3 GB included quota would continue to receive over-capacity notifications as they do today.
    • Tenants with paid Dataverse capacity add-ons would see no change.
    • The default environment list view behavior (showing only consumption beyond included quota) would remain unchanged - this suggestion addresses the summary-level presentation only.


    Additional Consideration


    The same gap likely affects the environment creation capacity check. Per the documentation: "The capacity check conducted before creating new environments excludes the default environment's included storage capacity when calculating whether you have sufficient capacity." If a tenant has 0 GB license-based capacity, the included quota is already factored into the new-environment check - but the summary page does not show this deduction, creating an inconsistency between what the admin sees and what the platform enforces.

     

    The fix would bring the summary presentation in line with the provisioning logic that already accounts for the included quota.