-
Regarding the restoration of Dataverse tables
Suggested by Takuto Sakuma – New – 0 Comments
Under the current Dataverse specifications, it is not possible to restore an individual deleted table.
To return a Dataverse table to its state prior to deletion, an environment backup taken before the deletion must be used, which requires restoring the entire environment; restoring only the deleted table is not supported at this time.
If it became possible to restore deleted tables individually, it would improve customer convenience. We kindly ask that you consider this enhancement.
-
Improve Power Platform License Usage Reports (12‑Month History & Data Consistency)
Suggested by Ramit Saha – New – 1 Comment
PPAC reports are used in our organisation for audits, financial planning, compliance, and related governance. It is essential for us to have transparency into the data we get from PPAC so that we have clear indicators to guide our strategic decisions.
We request the following improvements to the PPAC license reports:
- Support at least 12‑month historical license usage data: Power Platform Admin Center (PPAC) license usage reports currently have limitations that impact enterprise reporting and governance. The maximum historical view is limited (e.g., 180 days), while our requirement is at least 12 months of data.
- Provide a consistent experience for Power Apps and Power Automate license usage reports:
- Currently, the Power Apps license report can only be downloaded month by month, while the Power Automate report can only be downloaded with fixed filters such as 30, 60, 90, or 180 days. A consistent experience for both license types would be more useful: for example, keep at least a year of data available for download and let the user download either month-by-month or consolidated usage data for either license.
- There is another difference between the Power Apps and Power Automate reports: the Power Apps report is consolidated across all licenses, whereas Power Automate offers a separate download for each individual license. The best experience would combine both approaches and let users choose whether to download usage for a single license, several licenses, or all of them.
- Clearly explain the counting methodology and discrepancies: There are multiple columns with confusing names and subcategories, such as License Classification, ServicePlanName, and SKUName. It is unclear which one to use when calculating usage for a particular license. Clear documentation on which column applies in which scenario would help, along with details for the subcategories of each column, so that the report is transparent about the data it shows.
- Blank environment names: Certain rows return blank environment names. If there is a specific reason for this, documentation explaining why it happens would be helpful.
These enhancements would reduce manual effort, improve trust in PPAC as a single source of truth, and strengthen enterprise license governance.
-
Clarify PAYG Dataverse capacity UI behavior: 1GB entitlement shown as overage + default overage handling state
Suggested by Leon Le – New – 4 Comments
Please publish official documentation that clearly explains the intended behavior of Dataverse capacity reporting and overage handling for pay-as-you-go (PAYG) environments. It should specifically cover the two scenarios below, which are currently unclear from the documentation and create governance and cost-management confusion for admins.
Scenario 1. “1 GB included entitlement appears as overage (orange) immediately after environment creation”
Observed behavior: After creating a new environment, the tenant capacity summary can show 1 GB database usage as overage (orange) even though PAYG documentation/positioning describes a 1 GB database + 1 GB file included entitlement for PAYG environments. This creates uncertainty for admins trying to validate whether the environment is actually in an overage state vs. simply reflecting UI logic.
Documentation needed:
- What the orange/overage indicator represents in this scenario (billing vs. tenant pool consumption vs. allocation state vs. UI timing).
- Whether the “included at no charge” entitlement is expected to appear as overage in some tenant-level views, and why.
Scenario 2. “Overage handling options are unchecked by default, with no documented fallback when neither is selected”
Observed behavior: In the environment capacity management experience, the overage-handling options (e.g., draw from tenant pool vs bill to PAYG) can appear unchecked by default, and the UI can allow saving without selecting either option. There is no official documentation that explains what happens if capacity is exceeded while neither option is selected (silent fallback vs enforcement vs restrictions).
Documentation needed:
- What the default state is intended to be, and what it means operationally.
- The exact behavior when capacity is exceeded while neither option is selected (including what is blocked, what remains available, and what notifications admins should expect).
Why this matters
These UI behaviors create “gray zones” for tenant administrators: it becomes difficult to explain capacity posture internally, justify configuration decisions, and confidently manage cost exposure and enforcement behavior across environments. Clear documentation of the intended UX and fallback/enforcement rules would reduce confusion and prevent incorrect assumptions.
Proposed outcome
Add a dedicated section (or an FAQ) in official documentation that explicitly defines:
- How PAYG included entitlement should appear in tenant-level capacity views, including any UI timing/visualization rules.
- The default meaning of “overage handling” controls and the definitive behavior when neither option is selected.
-
Clear notification for Preview features
Suggested by Minh Nguyen – New – 2 Comments
Customer feedback: Import/Export of Excel/.csv files is in preview mode, which disrupts actual operations and interrupts work. The customer wants to confirm the planned release/fix schedule and dates, but that information is not available.
The customer suggests that release/update plans be clearly communicated before, during, and after changes so that they can prepare for and cope with the changes.
-
Introduce Grace Period and Second Confirmation for Tenant Deletion in Dynamics 365
Suggested by Bibo Tran – New – 4 Comments
Description:
Currently, when a tenant is deleted in Dynamics 365, the process is immediate and irreversible. This design poses a significant risk because even a minor mistake can result in the permanent loss of critical data and configurations. The deletion workflow is straightforward and does not include any additional safeguards such as a second confirmation step or a grace period.
I propose introducing a configurable grace period (e.g., 7–30 days) before permanent deletion occurs. During this grace period, administrators should have the ability to restore the tenant easily. Additionally, notifications should be sent to administrators before the final deletion to ensure awareness and provide an opportunity to act.
Reasoning:
- The current deletion process is simple and lacks protective measures, making accidental deletions irreversible.
- There is no second confirmation prompt or delay mechanism to prevent unintended actions.
- A grace period would serve as a safety net, allowing organizations to recover from human errors without significant business disruption.
- This enhancement aligns with best practices for data protection and user experience, ensuring critical actions are safeguarded by additional checks.
Benefits:
- Prevents accidental and irreversible loss of data and configurations.
- Provides a safety net for organizations managing multiple tenants.
- Enhances user confidence in tenant management processes.
- Reduces operational risk and potential downtime caused by accidental deletions.
Request:
Please consider adding this feature in future releases of Dynamics 365 to improve reliability and reduce the risk of accidental data loss.
-
Reflect default environment storage quota in tenant-level capacity summary and alerts
Suggested by Jacob Huynh – New – 2 Comments
Problem Statement
The tenant-level storage capacity summary in the Power Platform Admin Center calculates entitled capacity based solely on purchased license SKUs. This creates a blind spot: tenants that operate exclusively through Microsoft 365 licensing (Business Basic, Business Standard, Office 365 E1/E3) receive 0 GB of Dataverse File, Database, and Log entitlement on the summary page, despite the fact that every default environment is provisioned with a built-in storage quota.
Per Microsoft Learn:
"The default environment has the following included storage capacity: 3 GB Dataverse database capacity, 3 GB Dataverse file capacity, and 1 GB Dataverse log capacity."
The automated capacity alert system uses the license-based entitlement figure, not the real available capacity, to calculate threshold breaches. The result is false-positive over-capacity warnings.
Example
A small business using Microsoft 365 Business Standard licenses has one default Dataverse environment. They use two Canvas apps and a few Power Automate flows, accumulating 1.67 GB of file storage (mainly from platform-managed web resources and note attachments). Their tenant-level summary shows:
Type | Entitled | Used | Status
File | 0.00 GB | 1.67 GB | 100% Over-Capacity
The admin receives a weekly email: "You're out of File capacity. Your tenant has used 100 percent of available File storage. Please act immediately to continue operating without disruptions."
In reality, the default environment holds a 3 GB included file quota. Actual usage is only 56% of the true limit. No operational risk exists. The customer opens a support case, only to learn the alert was a cosmetic overstatement.
This is not an edge case but the standard experience for every M365-only tenant that uses Dataverse in a default environment.
Why This Happens
The reason behind this is a data presentation gap between two layers:
- License layer (entitlement engine): Calculates capacity from license SKUs. M365 SKUs like Business Basic and Office 365 E1 grant CDS Lite capacity for Teams environments but contribute 0 GB to the standard Dataverse capacity pools. The entitlement engine correctly reports 0 GB - these licenses genuinely do not purchase standalone Dataverse capacity.
- Environment layer (included quota): Every default environment is allocated 3 GB DB / 3 GB file / 1 GB log at provisioning time, independent of licensing. This quota is visible only when drilling into the environment-level details view. The tenant-level summary does not aggregate it.
The alert pipeline reads from layer 1. The real capacity exists in layer 2. The disconnect produces warnings that are technically accurate at the license level but meaningless for customers who operate within the included quota.
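To make the suggested threshold change concrete, here is a minimal, purely illustrative sketch using the figures from the example above (0 GB license entitlement, 3 GB included file quota, 1.67 GB used); it is not the actual alert pipeline logic:
```python
# Illustrative sketch only (not the actual alert pipeline), using the figures from the example above.

def over_capacity_pct(used_gb: float, entitled_gb: float) -> float:
    """Percent of entitled capacity used; zero entitlement with any usage reads as fully over capacity."""
    if entitled_gb <= 0:
        return 100.0 if used_gb > 0 else 0.0
    return used_gb / entitled_gb * 100

used_file_gb = 1.67              # observed consumption in the default environment
license_entitled_gb = 0.0        # layer 1: license-based entitlement for an M365-only tenant
default_env_included_gb = 3.0    # layer 2: included quota provisioned with the default environment

# Today's behavior: the alert reads layer 1 only -> reported as 100% over-capacity.
print(over_capacity_pct(used_file_gb, license_entitled_gb))                             # 100.0

# Suggested behavior: add the default environment's included quota to the denominator -> ~56%.
print(over_capacity_pct(used_file_gb, license_entitled_gb + default_env_included_gb))   # 55.7
```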
Suggestion
Include the default environment's built-in quota in the tenant-level capacity calculation. Specifically:
- Summary tab: Add a "Default environment included capacity" row under "Storage capacity, by source" - alongside "Org (tenant) default", "User licenses", and "Additional storage". This shows the 3/3/1 GB allocation transparently.
- Alert threshold calculation: Factor the default environment's included quota into the over-capacity check. A tenant using 1.67 GB of file storage with a 3 GB default quota should show 56% usage, not 100%.
- Alert email wording: When a tenant's overage is driven entirely by the absence of license-based entitlement rather than actual consumption exceeding the included quota, the email could state: "Your default environment's included storage is being used. If your usage exceeds the included 3 GB, consider purchasing additional capacity add-ons."
What This Would Not Change
- The StorageDriven capacity model and overflow rules would remain intact.
- Tenants whose usage genuinely exceeds the 3 GB included quota would continue to receive over-capacity notifications as they do today.
- Tenants with paid Dataverse capacity add-ons would see no change.
- The default environment list view behavior (showing only consumption beyond included quota) would remain unchanged - this suggestion addresses the summary-level presentation only.
Additional Consideration
The same gap likely affects the environment creation capacity check. Per the documentation: "The capacity check conducted before creating new environments excludes the default environment's included storage capacity when calculating whether you have sufficient capacity." If a tenant has 0 GB license-based capacity, the included quota is already factored into the new-environment check - but the summary page does not show this deduction, creating an inconsistency between what the admin sees and what the platform enforces.
The fix would bring the summary presentation in line with the provisioning logic that already accounts for the included quota.
-
Requesting improvement of retrieve API for Dynamics UCI views
Suggested by Kevin Quach – New – 0 Comments
The retrieve API for Power Platform/CE Apps views currently has a tight threshold, so when we have upwards of a million records, the time it takes to load a view increases significantly.
We'd like to request improvement of the retrieve API to reduce the loading time for fetch requests.
-
Enable Audit Log Migration / Import Between Dataverse Environments
Suggested by Dio Nguyen – New – 1 Comment
Problem Statement:
Currently, the Dataverse Audit table is read-only — audit records can be retrieved and exported but cannot be written or imported into another environment's native audit store. This creates a significant gap for organizations that need to migrate audit history between environments, especially during environment copy, migration, or consolidation scenarios.
If a customer performs an environment copy without enabling the "Copy audit logs" option at the time of the copy, there is no supported way to transfer audit data afterward. This is a critical limitation for organizations with strict regulatory and compliance requirements that mandate full audit trail continuity.
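For reference, a minimal sketch of the current asymmetry, assuming a valid OAuth token and the standard Dataverse Web API audits entity set: reads succeed, but there is no supported write path.
```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"    # placeholder organization URL
TOKEN = "<access token>"                        # assumed to be acquired separately (e.g., via MSAL)
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"}

# Reading/exporting audit records is supported: the audit table can be queried like any other.
resp = requests.get(
    f"{ORG_URL}/api/data/v9.2/audits?$select=createdon,operation,action,objecttypecode&$top=10",
    headers=HEADERS,
)
resp.raise_for_status()
print(resp.json()["value"])

# Writing audit records is not supported: the audit table is read-only, so an import such as
#   requests.post(f"{ORG_URL}/api/data/v9.2/audits", headers=HEADERS, json={...})
# is rejected by the platform. This is the gap the suggestions below aim to close.
```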
Real-World Scenario:
An organization migrates from a source environment to a new production environment. After the migration, they realize that millions of historical audit records (e.g., for work orders, cases, or financial data) were not carried over. Since the new production environment is already in active use with new transactions, performing another environment copy is not feasible without losing post-migration data.
Current Workarounds:
- None available. The Audit table does not support data import through any supported API, SDK, or manual process.
- Re-performing an environment copy would overwrite all new data created since migration.
Suggested Idea
We recommend Microsoft consider one or more of the following enhancements:
- Post-Copy Audit Log Transfer Tool: Provide a supported mechanism (via Power Platform Admin Center or API) to transfer audit log data from a source environment to a target environment after the initial environment copy has been completed.
- Audit Data Import API: Enable a write/import capability for the Audit table in Dataverse, allowing organizations to programmatically migrate historical audit records between environments while preserving data integrity (a hypothetical shape is sketched after this list).
- Selective Audit Log Merge: Allow administrators to merge audit logs from a source environment into an existing target environment without overwriting existing audit data in the target.
- Enhanced Copy Environment UX Warning: Add a more prominent warning or confirmation step during environment copy to alert administrators about the consequences of not enabling audit log copy, helping prevent accidental omission.
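To illustrate the second suggestion, here is a purely hypothetical sketch of what a supported audit import might look like; the ImportAuditRecords action and its parameters do not exist today and are only a proposal shape.
```python
import requests

SOURCE_ORG = "https://source-org.crm.dynamics.com"   # hypothetical source environment
TARGET_ORG = "https://target-org.crm.dynamics.com"   # hypothetical target environment
HEADERS = {"Authorization": "Bearer <access token>", "Content-Type": "application/json"}

# 1. Export from the source environment (reads are already supported today).
audit_rows = requests.get(
    f"{SOURCE_ORG}/api/data/v9.2/audits?$top=5000", headers=HEADERS
).json()["value"]

# 2. Hypothetical bulk-import action on the target environment (does not exist today).
#    The proposal is a write path that preserves original timestamps, users, and change data.
requests.post(
    f"{TARGET_ORG}/api/data/v9.2/ImportAuditRecords",   # hypothetical custom action name
    headers=HEADERS,
    json={"records": audit_rows, "preserveOriginalTimestamps": True},
)
```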
Business Impact
- Compliance Risk: Organizations in regulated industries (healthcare, finance, government) may face audit failures or legal exposure if historical audit trails are lost during migration.
- Data Continuity: Audit history is essential for traceability, accountability, and root cause analysis across environment transitions.
- Customer Confidence: Providing robust audit migration capabilities strengthens trust in the Dynamics 365 / Power Platform ecosystem for enterprise-grade deployments.
-
Improve Dataverse field design visibility by showing downstream schema impact for Azure Synapse Link and data export scenarios
Suggested by Bibo Tran – New – 1 Comment
Issue Description
When developers configure text field lengths in Dataverse through Power Apps, the visible MaxLength value does not clearly show how the field may be projected downstream through Azure Synapse Link.
In this scenario, a Dataverse field with a visible maximum length of 80 characters was projected into Synapse SQL as a much larger varchar length due to how Dataverse DBLength and Synapse Link schema generation are handled.
This can create confusion for developers and data architects because the field appears correctly sized in Dataverse, but downstream systems may receive a wider schema than expected. As a result, we may only discover the impact later when Data Warehouse updates, ADF pipelines, or other downstream processes encounter failures and require additional code or schema changes.
Current Pain Points
- We only see the business-level field length in Power Apps, not the effective storage length or downstream projection length used by Azure Synapse Link.
- We may assume that a field configured as MaxLength = 80 in Dataverse will remain equivalent to 80 characters in downstream SQL systems.
- The generated Synapse Link schema may appear incorrect or unexpected when external tables are created with larger varchar lengths than the value shown in Power Apps.
- The impact may only become visible after the downstream schema has already been generated and related pipelines or Data Warehouse processes begin failing.
- This can lead to operational impact, as we may need to review schemas, adjust downstream code, split wide tables, or redesign parts of the data model after implementation.
- The behavior may be perceived as a product defect because the Power Apps UI does not clearly explain why the downstream SQL schema is larger.
Key Confusions
MaxLength versus DBLength
Developers see MaxLength in Power Apps, but Azure Synapse Link uses the Dataverse database length rather than only the visible MaxLength value. This creates confusion when the downstream schema does not match what is expected from the Power Apps UI.
Unicode versus non-Unicode projection
Dataverse string fields are stored as nvarchar, while Synapse projections may use varchar. Because DBLength is byte-based and carried into the downstream projection, the generated SQL length may appear larger than expected.
Historical field length changes
If a field previously had a larger allocation, reducing the visible field length may not always result in the downstream schema matching the newly expected size.
Design-time versus runtime impact
The risk is not obvious during Dataverse table design. We may only see the impact later when downstream systems encounter failures related to wide rows or oversized variable-length columns.
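As a concrete illustration of the MaxLength/DBLength gap described above, here is a minimal sketch that reads both values from attribute metadata via the Dataverse Web API; the account table, the sample column, and the doubled length in the comment are assumptions for the example, not a statement of the exact projection rules.
```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # placeholder organization URL
HEADERS = {"Authorization": "Bearer <access token>", "Accept": "application/json"}

# Read the designer-visible MaxLength and the backend DatabaseLength for string columns.
resp = requests.get(
    f"{ORG_URL}/api/data/v9.2/EntityDefinitions(LogicalName='account')"
    "/Attributes/Microsoft.Dynamics.CRM.StringAttributeMetadata"
    "?$select=LogicalName,MaxLength,DatabaseLength",
    headers=HEADERS,
)
resp.raise_for_status()

for attr in resp.json()["value"]:
    # Example of the confusion: a column with MaxLength = 80 (what Power Apps shows) can carry a
    # byte-based DatabaseLength such as 160, and the Synapse Link projection follows the latter,
    # surfacing e.g. varchar(160) downstream rather than the expected 80.
    print(attr["LogicalName"], attr["MaxLength"], attr["DatabaseLength"])
```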
Proposed Improvement
Improve Public Documentation for Dataverse Field Length and Downstream Schema Projection
Developers and architects may design Dataverse fields based on the values shown in Power Apps without clear public guidance on how those values may be translated downstream through Azure Synapse Link.
Although documentation and FAQ content exist, this scenario shows that the current guidance may not be clear enough for teams designing Dataverse tables for analytics, reporting, ADF pipelines, or Data Warehouse workloads.
The related public documentation and FAQ pages should be updated to clearly explain:
- The difference between Power Apps MaxLength and Dataverse DBLength.
- Why Dataverse string fields may have backend storage characteristics that differ from what is shown in Power Apps.
- Why Azure Synapse Link may project string fields into SQL with a larger varchar length than the visible MaxLength.
- What design considerations should be reviewed before using Dataverse as a source for downstream analytics or Data Warehouse workloads.
- Examples showing how a field configured in Power Apps may appear after Synapse Link projection.
Together, these improvements would help developers and architects make informed database design decisions earlier, reduce unexpected downstream failures, and improve transparency around Dataverse-to-Synapse schema behavior.
-
Customer-facing uninstall option for unused first-party Microsoft-published managed solutions
Suggested by Jacob Huynh – New – 2 Comments
In the Power Platform admin center, the Environments > Dynamics 365 apps surface is intentionally an install, configure, and update experience and does not offer an Uninstall action for first-party Microsoft-published managed solutions. The same removal request from the Power Apps > Solutions page is, in practice, almost always blocked by managed properties or by dependencies the customer cannot resolve. The combined result is that solutions from retired or unused first-party product families remain permanently installed on an environment even when the customer has never adopted them and has no plan to. Administrators discover them, recognize them as unused, and reasonably try to remove them, but there is no supported customer path to do so.
This is not a hypothetical concern. A real example surfaced through a recent Microsoft Support case for a small education-sector tenant running on its default production environment. The tenant relies on the included Dataverse quota (3 GB database, 3 GB file, 1 GB log) and has no paid storage add-ons. On that environment, the residual first-party footprint that the customer cannot remove looks like the following:
- The Project Service Automation (PSA) family, covering Project Service Core and 12 related Project / Scheduling solutions, is the largest contributor and accounts for the bulk of the approximately 520 MB observed in the Solution table on this environment.
- PSA itself is a retired product that reached end of support on March 31, 2025, with Project Operations as the supported successor, and there is no customer-removable path for the residual solutions.
- Microsoft Check-ins is a platform-provisioned first-party package that maintains a persistent footprint on the environment and is also not customer-removable.
- Data Archive Service - Trial is another platform-provisioned first-party package, related to the long-term retention capability, with the same persistent footprint and no customer-removable path.
For a tenant of this profile, ~520 MB inside the Solution table is a meaningful share of the 3 GB included database quota, and it is content the customer never asked for and cannot release. The orphaned-component cleanup script that Microsoft Support runs in the backend addresses a different scenario (orphaned shared web resources in WebResourceBase) and does not touch this category, so support engineers today have no remediation lever to offer beyond "leave it in place."
The pattern this creates is repetitive and avoidable.
A customer notices the unused solutions, attempts removal, hits the missing-Uninstall behavior, and opens a support case. The support engineer confirms the behavior is by design, recommends against forcing removal because of dependency and data-loss risk, and routes any further capacity need to the documented Dataverse capacity-management path.
Each cycle consumes both customer and Microsoft time on something that cannot be remediated under the current product behavior, and the absence of a definitive public statement on Microsoft Learn (especially for Microsoft Check-ins and Data Archive Service - Trial, where there is no documented customer-side uninstall procedure at all) reinforces the loop.
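For context, a minimal sketch of how an administrator typically surfaces this footprint today, assuming the standard Dataverse Web API and its solutions entity set; it can only enumerate the managed solutions, not remove them.
```python
import requests

ORG_URL = "https://yourorg.crm.dynamics.com"   # placeholder organization URL
HEADERS = {"Authorization": "Bearer <access token>", "Accept": "application/json"}

# List managed solutions installed in the environment; first-party packages such as the PSA
# family, Microsoft Check-ins, and Data Archive Service - Trial show up in this list, but any
# removal attempt is blocked by managed properties or dependencies, as described above.
resp = requests.get(
    f"{ORG_URL}/api/data/v9.2/solutions"
    "?$select=friendlyname,uniquename,version,installedon&$filter=ismanaged eq true",
    headers=HEADERS,
)
resp.raise_for_status()
for sol in resp.json()["value"]:
    print(sol["friendlyname"], sol["uniquename"], sol["version"], sol["installedon"])
```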
Several improvements would address this in a way that is consistent with how the platform already behaves elsewhere, listed in order of preference.
- The first and strongest option is to expose an Uninstall action in PPAC > Dynamics 365 apps specifically for first-party solutions that the platform itself determines are safe to remove, such as packages from retired products, trial packages, and packages that have never been used in the environment and have no remaining dependencies.
- A second option, slightly weaker but still valuable, is to make the same category of solutions properly removable through Power Apps > Solutions, paired with a clear and actionable dependency report so the customer understands exactly what blocks removal and what is safe to proceed with.
- A third option, much lower effort but still meaningful, is to publish definitive Microsoft Learn guidance, on a per-solution-family basis, stating whether each first-party solution is customer-removable, what the prerequisites are, and what the supported mechanism is. Even an explicit "not customer-removable, by design" statement would significantly reduce the support pattern described above.
- Finally, as a longer-term improvement, an opt-in platform job could automatically retire residual solution footprints from end-of-support first-party products (for example, removing PSA components from environments that have not used them for N months following the EOS date), aligning the platform behavior with the product lifecycle.
This request is intentionally narrow and does not ask for bulk removal of actively supported first-party solutions, nor for any action that would affect tenants currently using those products. It asks specifically for customer-facing clarity and optional removal of unused or retired first-party footprints, with the platform itself deciding which solutions qualify. The goal is to give administrators an honest answer to a reasonable question they keep asking: "if I am not using this, and Microsoft has retired it, why must it stay?"
