• 0

    Increase the 100-Character schemaname Limit for Catalog Template Item Solution Components

    Suggested by Marck Anthony Payanay New  0 Comments

    When installing a solution as a Template Item via the Power Platform Catalog, the installation can fail because the generated Dataverse schemaname exceeds the current 100-character limit.

    This is especially limiting for solutions that include components such as Copilot Studio bots, topics, triggers, cloud flows, and other named solution assets. These components often require meaningful business names so makers, admins, and support teams can clearly understand their purpose.

    The issue is that the failing schemaname is system-generated, immutable, and includes values such as the publisher prefix, topic/flow name, component details, action name and GUID. Makers do not have direct control over the final generated value, yet the Catalog installation can fail because of it.
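
    For illustration only, here is a minimal sketch of the kind of length pre-check being requested. The composition below (publisher prefix + component name + action name + GUID) and every name in it are assumptions based on the description above, not the actual system-generated pattern.

        import uuid

        SCHEMANAME_LIMIT = 100  # current Dataverse limit described above

        def estimate_schemaname(prefix: str, component_name: str, action_name: str) -> str:
            """Hypothetical composition of a generated schemaname; the real
            algorithm is system-controlled and may differ."""
            guid = uuid.uuid4().hex  # 32 characters
            return f"{prefix}_{component_name.replace(' ', '')}_{action_name.replace(' ', '')}_{guid}"

        def validate(prefix: str, component_name: str, action_name: str) -> None:
            name = estimate_schemaname(prefix, component_name, action_name)
            if len(name) > SCHEMANAME_LIMIT:
                print(f"WARNING: {len(name)} characters (limit {SCHEMANAME_LIMIT}); "
                      "Catalog installation may fail.")
            else:
                print(f"OK: {len(name)} characters.")

        # A meaningful business name plus the appended GUID easily exceeds 100 characters.
        validate("contoso", "Customer Onboarding Escalation Copilot Topic", "Invoke Manager Approval Cloud Flow")

    Even this simplified composition shows why makers hit the limit with business names they cannot control or shorten after the fact.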


    Requested improvement:

    Please improve this behavior by implementing the following:

    • Increase the current 100-character schemaname limit for Catalog Template Item scenarios.
    • Add pre-validation before publishing or installing a Catalog Template Item so makers are warned before installation fails.
    • Use a safer schema name generation pattern that keeps generated values within the limit.
    • Provide official documentation explaining how schema names are generated for Catalog Template Items and what naming limits makers should follow.
    • Confirm whether this limitation applies only to Copilot Studio botcomponent records or to other Dataverse-backed solution components as well.


    Business impact:

    The current limitation forces makers to shorten component names in ways that reduce clarity and maintainability. This is not ideal for enterprise-ready solutions where naming standards, governance, ALM, supportability, and reusability are important.

    A platform-level fix or clear validation guidance would reduce deployment failures, improve Catalog adoption, and make Template Items more reliable for reusable Power Platform solution distribution.


  • 21

    Regarding the restoration of Dataverse tables

    Suggested by Takuto Sakuma New  0 Comments

    Under the current Dataverse specifications, it is not possible to restore only an individual deleted table.

    To return a Dataverse table to its state prior to deletion, an environment backup taken before the deletion must be used, which requires restoring the entire environment.

    Being able to restore deleted tables individually would improve customer convenience. We kindly ask that you consider this enhancement.


  • 2

    SharePoint Connector: Only allow specific sites

    Suggested by Adam Macaulay New  0 Comments

    For the SharePoint connector, allow only specific sites to be selected under the data loss prevention (DLP) policies for that connector, much as is already possible for the HTTP connector.


  • 7

    Reflect default environment storage quota in tenant-level capacity summary and alerts

    Suggested by Jacob Huynh New  2 Comments

    Problem Statement


    The tenant-level storage capacity summary in the Power Platform Admin Center calculates entitled capacity based solely on purchased license SKUs. This creates a blind spot: tenants that operate exclusively through Microsoft 365 licensing (Business Basic, Business Standard, Office 365 E1/E3) receive 0 GB of Dataverse File, Database, and Log entitlement on the summary page, despite the fact that every default environment is provisioned with a built-in storage quota.


    Per Microsoft Learn: 

    "The default environment has the following included storage capacity: 3 GB Dataverse database capacity, 3 GB Dataverse file capacity, and 1 GB Dataverse log capacity."


    The automated capacity alert system uses the license-based entitlement figure, not the real available capacity, to calculate threshold breaches. The result is false-positive over-capacity warnings.


    Example


    A small business using Microsoft 365 Business Standard licenses has one default Dataverse environment. They use two Canvas apps and a few Power Automate flows, accumulating 1.67 GB of file storage (mainly from platform-managed web resources and note attachments). Their tenant-level summary shows:

    Type    Entitled    Used       Status
    File    0.00 GB     1.67 GB    100% Over-Capacity


    The admin receives a weekly email: "You're out of File capacity. Your tenant has used 100 percent of available File storage. Please act immediately to continue operating without disruptions."

    In reality, the default environment holds a 3 GB included file quota. Actual usage is only 56% of the true limit. No operational risk exists. The customer opens a support case, only to learn the alert was a cosmetic overstatement.


    This is not an edge case but the standard experience for every M365-only tenant that uses Dataverse in a default environment.


    Why This Happens


    The reason behind this is a data presentation gap between two layers:

    1. License layer (entitlement engine): Calculates capacity from license SKUs. M365 SKUs like Business Basic and Office 365 E1 grant CDS Lite capacity for Teams environments but contribute 0 GB to the standard Dataverse capacity pools. The entitlement engine correctly reports 0 GB - these licenses genuinely do not purchase standalone Dataverse capacity.
    2. Environment layer (included quota): Every default environment is allocated 3 GB DB / 3 GB file / 1 GB log at provisioning time, independent of licensing. This quota is visible only when drilling into the environment-level details view. The tenant-level summary does not aggregate it.

    The alert pipeline reads from layer 1. The real capacity exists in layer 2. The disconnect produces warnings that are technically accurate at the license level but meaningless for customers who operate within the included quota.
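
    A minimal sketch of the arithmetic this idea proposes, using the example figures above (0 GB license-based entitlement, 3 GB default-environment included file quota, 1.67 GB used). The function and constant names are illustrative assumptions, not the admin center's actual implementation.

        # Illustrative only: figures come from the example above.
        DEFAULT_ENV_INCLUDED_GB = {"database": 3.0, "file": 3.0, "log": 1.0}

        def usage_percent(used_gb: float, license_entitled_gb: float,
                          include_default_quota: bool, pool: str = "file") -> int:
            """Usage as a percentage of entitled capacity for the given pool."""
            entitled = license_entitled_gb
            if include_default_quota:
                entitled += DEFAULT_ENV_INCLUDED_GB[pool]
            # With no entitled capacity at all, any usage reads as 100% over capacity.
            return 100 if entitled == 0 else round(100.0 * used_gb / entitled)

        # Current behavior (layer 1 only): 0 GB entitled -> reported as 100% over capacity.
        print(usage_percent(1.67, 0.0, include_default_quota=False))  # 100
        # Proposed behavior (layers 1 + 2): 1.67 GB of 3 GB -> about 56%.
        print(usage_percent(1.67, 0.0, include_default_quota=True))   # 56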


    Suggestion


    Include the default environment's built-in quota in the tenant-level capacity calculation. Specifically:

    1. Summary tab: Add a "Default environment included capacity" row under "Storage capacity, by source" - alongside "Org (tenant) default", "User licenses", and "Additional storage". This shows the 3/3/1 GB allocation transparently.
    2. Alert threshold calculation: Factor the default environment's included quota into the over-capacity check. A tenant using 1.67 GB of file storage with a 3 GB default quota should show 56% usage, not 100%.
    3. Alert email wording: When a tenant's overage is driven entirely by the absence of license-based entitlement rather than actual consumption exceeding the included quota, the email could state: "Your default environment's included storage is being used. If your usage exceeds the included 3 GB, consider purchasing additional capacity add-ons."


    What This Would Not Change


    • The StorageDriven capacity model and overflow rules would remain intact.
    • Tenants whose usage genuinely exceeds the 3 GB included quota would continue to receive over-capacity notifications as they do today.
    • Tenants with paid Dataverse capacity add-ons would see no change.
    • The default environment list view behavior (showing only consumption beyond included quota) would remain unchanged - this suggestion addresses the summary-level presentation only.


    Additional Consideration


    The same gap likely affects the environment creation capacity check. Per the documentation: "The capacity check conducted before creating new environments excludes the default environment's included storage capacity when calculating whether you have sufficient capacity." If a tenant has 0 GB license-based capacity, the included quota is already factored into the new-environment check - but the summary page does not show this deduction, creating an inconsistency between what the admin sees and what the platform enforces.


    The fix would bring the summary presentation in line with the provisioning logic that already accounts for the included quota.


  • 6

    Enable Audit Log Migration / Import Between Dataverse Environments

    Suggested by Dio Nguyen New  1 Comments

    Problem Statement:

    Currently, the Dataverse Audit table is read-only — audit records can be retrieved and exported but cannot be written or imported into another environment's native audit store. This creates a significant gap for organizations that need to migrate audit history between environments, especially during environment copy, migration, or consolidation scenarios.
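
    To illustrate the asymmetry, audit rows can already be read for export through the Dataverse Web API (the audit table's entity set), but there is no corresponding supported write path into a target environment's native audit store. The environment URL and token handling below are placeholders; this is a sketch of the export side only, not a migration tool.

        import requests

        ENV_URL = "https://yourorg.crm.dynamics.com"   # placeholder environment URL
        HEADERS = {
            "Authorization": "Bearer <access-token>",  # token acquisition omitted
            "Accept": "application/json",
            "OData-MaxVersion": "4.0",
            "OData-Version": "4.0",
        }

        # Reading/exporting audit history is supported: query the audit entity set.
        resp = requests.get(
            f"{ENV_URL}/api/data/v9.2/audits?$select=createdon,operation,action&$top=5",
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        for record in resp.json().get("value", []):
            print(record)

        # There is no equivalent write path: the native audit store does not accept
        # imported records, which is exactly the gap this idea asks Microsoft to close.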

    If a customer performs an environment copy without enabling the "Copy audit logs" option at the time of the copy, there is no supported way to transfer audit data afterward. This is a critical limitation for organizations with strict regulatory and compliance requirements that mandate full audit trail continuity.


    Real-World Scenario:

    An organization migrates from a source environment to a new production environment. After the migration, they realize that millions of historical audit records (e.g., for work orders, cases, or financial data) were not carried over. Since the new production environment is already in active use with new transactions, performing another environment copy is not feasible without losing post-migration data.


    Current Workarounds:

    • None available. The Audit table does not support data import through any supported API, SDK, or manual process.
    • Re-performing an environment copy would overwrite all new data created since migration.


    Suggested Idea

    We recommend Microsoft consider one or more of the following enhancements:

    1. Post-Copy Audit Log Transfer Tool: Provide a supported mechanism (via Power Platform Admin Center or API) to transfer audit log data from a source environment to a target environment after the initial environment copy has been completed.
    2. Audit Data Import API: Enable a write/import capability for the Audit table in Dataverse, allowing organizations to programmatically migrate historical audit records between environments while preserving data integrity.
    3. Selective Audit Log Merge: Allow administrators to merge audit logs from a source environment into an existing target environment without overwriting existing audit data in the target.
    4. Enhanced Copy Environment UX Warning: Add a more prominent warning or confirmation step during environment copy to alert administrators about the consequences of not enabling audit log copy, helping prevent accidental omission.


    Business Impact

    • Compliance Risk: Organizations in regulated industries (healthcare, finance, government) may face audit failures or legal exposure if historical audit trails are lost during migration.
    • Data Continuity: Audit history is essential for traceability, accountability, and root cause analysis across environment transitions.
    • Customer Confidence: Providing robust audit migration capabilities strengthens trust in the Dynamics 365 / Power Platform ecosystem for enterprise-grade deployments.



  • 0

    Power Pages Portal Visit and Page Tracking

    Suggested by Rutika Pawar New  0 Comments

    Currently, the Power Pages portal does not offer an out-of-the-box way to track user visits and page activity. There should be a Portal Analytics feature for Power Pages so these activities can be monitored in Dynamics 365 CRM and used to generate charts and dashboards for easier access by upper management.


  • 3

    Improve Dataverse capacity warning emails when File capacity increases due to reporting categorization changes

    Suggested by Leon Le New  1 Comments

    We would like to request an improvement to the Dataverse capacity warning email experience in Power Platform admin center.


    In our scenario, administrators received Dataverse capacity warning emails for the Default environment after File capacity usage appeared to increase unexpectedly. After review, Microsoft Product Group confirmed that this behavior was related to a recent change in how Dataverse capacity is reported. Some existing system data that was previously shown under other storage categories is now being reported under File capacity.


    Because of this reporting change, File storage usage may appear to increase in Power Platform admin center. However, this does not necessarily mean that new data was suddenly added to the environment or that a separate background process is unexpectedly consuming more storage. The same existing data is now being presented under a different storage category.


    This warning email experience can create confusion and concern for administrators because it may appear as though there is a new storage issue, new business risk, or urgent action required, even when the change is mainly related to how storage is categorized and reported.


    We would like Microsoft to improve the Dataverse capacity warning email experience so that administrators can better understand why the warning was triggered. Possible improvements could include:


    1. Adding clearer explanation in the warning email when the increase is caused by a reporting or categorization change.

    2. Providing more details in Power Platform admin center about which storage category changed and why.

    3. Providing an option to suppress, acknowledge, or reduce repeated warning emails when the behavior is already understood.

    4. Providing clearer documentation or in-product messaging for cases where existing system data is reclassified under File capacity.

    5. Allowing administrators to delegate capacity warning notifications to specific recipients or admin groups.


    This improvement would help administrators avoid unnecessary concern, reduce support cases, and better understand whether a capacity warning requires immediate action or is related to a reporting/categorization update.


  • 6

    Improve Dataverse field design visibility by showing downstream schema impact for Azure Synapse Link and data export scenarios

    Suggested by Bibo Tran New  1 Comments

    Issue Description


    When developers configure text field lengths in Dataverse through Power Apps, the visible MaxLength value does not clearly show how the field may be projected downstream through Azure Synapse Link.


    In this scenario, a Dataverse field with a visible maximum length of 80 characters was projected into Synapse SQL as a much larger varchar length due to how Dataverse DBLength and Synapse Link schema generation are handled.


    This can create confusion for developers and data architects because the field appears correctly sized in Dataverse, but downstream systems may receive a wider schema than expected. As a result, we may only discover the impact later when Data Warehouse updates, ADF pipelines, or other downstream processes encounter failures and require additional code or schema changes.


    Current Pain Points


    • We only see the business-level field length in Power Apps, not the effective storage length or downstream projection length used by Azure Synapse Link.
    • We may assume that a field configured as MaxLength = 80 in Dataverse will remain equivalent to 80 characters in downstream SQL systems.
    • The generated Synapse Link schema may appear incorrect or unexpected when external tables are created with larger varchar lengths than the value shown in Power Apps.
    • The impact may only become visible after the downstream schema has already been generated and related pipelines or Data Warehouse processes begin failing.
    • This can lead to operational impact, as we may need to review schemas, adjust downstream code, split wide tables, or redesign parts of the data model after implementation.
    • The behavior may be perceived as a product defect because the Power Apps UI does not clearly explain why the downstream SQL schema is larger.


    Key Confusions


    MaxLength versus DBLength

    Developers see MaxLength in Power Apps, but Azure Synapse Link uses the Dataverse database length rather than only the visible MaxLength value. This creates confusion when the downstream schema does not match what is expected from the Power Apps UI.


    Unicode versus non-Unicode projection

    Dataverse string fields are stored as nvarchar, while Synapse projections may use varchar. Because DBLength is byte-based and carried into the downstream projection, the generated SQL length may appear larger than expected.
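
    A worked example of the byte-based behavior described above, using the 80-character field from this scenario. The fixed 2-bytes-per-character factor reflects nvarchar storage and is an illustrative assumption about the projection rule, not a documented formula.

        def projected_varchar_length(max_length_chars: int, bytes_per_char: int = 2) -> int:
            """Illustrative projection: Power Apps MaxLength (characters) ->
            Dataverse DBLength (bytes, nvarchar at 2 bytes/char) -> downstream
            varchar length generated by Synapse Link."""
            db_length_bytes = max_length_chars * bytes_per_char
            return db_length_bytes

        # A field that looks like "80 characters" in Power Apps can surface downstream
        # as varchar(160), which is wider than developers expect.
        print(projected_varchar_length(80))  # 160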


    Historical field length changes

    If a field previously had a larger allocation, reducing the visible field length may not always result in the downstream schema matching the newly expected size.


    Design-time versus runtime impact

    The risk is not obvious during Dataverse table design. We may only see the impact later when downstream systems encounter failures related to wide rows or oversized variable-length columns.


    Proposed Improvement


    Improve Public Documentation for Dataverse Field Length and Downstream Schema Projection

    Developers and architects may design Dataverse fields based on the values shown in Power Apps without clear public guidance on how those values may be translated downstream through Azure Synapse Link.


    Although documentation and FAQ content exist, this scenario shows that the current guidance may not be clear enough for teams designing Dataverse tables for analytics, reporting, ADF pipelines, or Data Warehouse workloads.


    The related public documentation and FAQ pages should be updated to clearly explain:

    • The difference between Power Apps MaxLength and Dataverse DBLength.
    • Why Dataverse string fields may have backend storage characteristics that differ from what is shown in Power Apps.
    • Why Azure Synapse Link may project string fields into SQL with a larger varchar length than the visible MaxLength.
    • What design considerations should be reviewed before using Dataverse as a source for downstream analytics or Data Warehouse workloads.
    • Examples showing how a field configured in Power Apps may appear after Synapse Link projection.


    Together, these improvements would help developers and architects make informed database design decisions earlier, reduce unexpected downstream failures, and improve transparency around Dataverse-to-Synapse schema behavior.


  • 0

    Moving Fulcrum Connector from Preview to GA

    Suggested by Chris Louie New  0 Comments

    We are requesting that the Fulcrum Power Automate connector be moved to General Availability (GA). The publisher wants to know the connector's current health metrics and whether they meet the criteria for GA, and also wants to add support for the US Government (GCC) region.


  • 15

    Improve Power Platform License Usage Reports (12‑Month History & Data Consistency)

    Suggested by Ramit Saha New  1 Comments

    PPAC reports are used in our organisation for audits, financial planning, compliance, and more. It is essential for us to have transparency in the data we get from PPAC so that we have clear indicators to guide our strategic decisions.


    We are requesting the following improvements to the PPAC license reports:

    • Support at least 12 months of historical license usage data: Power Platform Admin Center (PPAC) license usage reports currently have limitations that impact enterprise reporting and governance. The maximum historical view is limited (e.g., 180 days), while our requirement is at least 12 months of data.
    • Provide a consistent experience for Power Apps and Power Automate license usage reports:
    1. Currently, the Power Apps license report can only be downloaded month by month, while the Power Automate license report can only be downloaded with filters such as 30, 60, 90, or 180 days. A consistent experience for both license types would be useful. For example, combine both approaches for both licenses: keep at least a year of data available for download, and let users download either month-by-month or consolidated license usage data.
    2. There is another difference between the Power Apps and Power Automate reports. The Power Apps report is consolidated across all licenses, whereas Power Automate offers a separate download for each license. The best UX would combine both and let users choose whether to download usage for a single license, multiple licenses, or all licenses.
    • Clearly explain the counting methodology and discrepancies: The reports contain multiple columns with confusing names and subcategories, such as License Classification, ServicePlanName, and SKUName. It is unclear which one to use when calculating usage for a particular license. Clear documentation on what to consider in which scenarios would be very helpful, along with details for each column's subcategories so the report is transparent about the data it shows.
    • Blank environment names: Certain rows return blank environment names. If there is a specific reason for this, please document why it happens.


    These enhancements would reduce manual effort, improve trust in PPAC as a single source of truth, and strengthen enterprise license governance.