SURF downgrades Microsoft Copilot education risks to medium while privacy concerns persist

Dutch education IT cooperative signals cautious progress but maintains reservations about AI tool's data handling and accuracy issues

SURF DPIA cover for Microsoft 365 Copilot education privacy assessment showing medium-level risks

SURF, the Dutch IT cooperative serving educational and research institutions, announced on September 11, 2025, that it has downgraded two of the four previously identified high privacy risks for Microsoft 365 Copilot to medium status. The update follows extensive negotiations between SURF, the Dutch government's Strategic Supplier Management (SLM), and Microsoft to address data protection concerns outlined in the original Data Protection Impact Assessment (DPIA) published in December 2024.

According to the updated DPIA, which Privacy Company performed for SURF, Microsoft has taken measures to reduce the four previously identified high data protection risks for Microsoft 365 Copilot, the generative AI tool integrated in the Microsoft 365 suite, but the tool does not yet get a green light on all fronts. The comprehensive 217-page assessment evaluated the paid Education licenses that provide Copilot access to internal documents stored in SharePoint, OneDrive, and Exchange Online.

The two risks that remain at medium status relate to the processing of inaccurate and incomplete personal data generated in Copilot's replies, and to the excessive 18-month retention period for the pseudonymized metadata, the Required Service Data and Telemetry Data. These concerns reflect broader industry challenges with AI accuracy and data governance.

Technical scope and testing methodology

The DPIA focused exclusively on the paid M365 Copilot Edu Sub Add-on, which differs from free consumer versions by providing access to organizational data through Microsoft Graph. Because Microsoft does not make this paid service available to users under 18, the DPIA does not assess risks related to use by children. Testing was conducted across multiple platforms, including Windows, macOS, and browser-based applications.

Privacy Company performed extensive technical analysis, including interception of network traffic, cookie traffic, and telemetry collection while the Microsoft 365 Copilot application was in use. The testing revealed 208 different types of telemetry events collected during Microsoft 365 Copilot usage, raising questions about data transparency and retention practices.
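
For readers unfamiliar with this kind of methodology, the sketch below shows what a telemetry-counting interception setup can look like: a mitmproxy addon that tallies distinct event names. The events.data.microsoft.com host pattern and the newline-delimited JSON payload layout (one object with a "name" field per line) are assumptions drawn from Microsoft's public telemetry documentation, not details taken from the DPIA itself.

```python
# Minimal mitmproxy addon sketch: tally distinct telemetry event names sent to
# Microsoft's telemetry collector while M365 Copilot is in use. Host pattern
# and NDJSON payload layout are assumptions, not findings from the DPIA.
# Run with: mitmdump -s count_telemetry_events.py
import json

from mitmproxy import http

seen_events: set[str] = set()

def response(flow: http.HTTPFlow) -> None:
    # Only inspect traffic to the assumed telemetry collector hosts.
    if "events.data.microsoft.com" not in flow.request.pretty_host:
        return
    body = flow.request.get_text(strict=False) or ""
    for line in body.splitlines():
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue
        name = event.get("name")
        if name and name not in seen_events:
            seen_events.add(name)
            print(f"{len(seen_events):4d} distinct event types; new: {name}")
```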

The assessment examined both content data processing and diagnostic data collection. Content data encompasses prompts, responses, and access to organizational documents, while diagnostic data includes metadata about service usage, telemetry events, and system-generated logs that Microsoft retains for up to 18 months.

Microsoft's mitigation efforts

Microsoft has implemented several measures to address SURF's concerns, though details of some improvements remain confidential. The company committed to transparency improvements, including publishing expanded documentation about its Required Service Data and providing better access to diagnostic information through Data Subject Access Requests.

Microsoft has explained why some fields in the Diagnostic Data provided in reply to a Data Subject Access Request (DSAR) were empty: no data is collected and sent to Microsoft for those fields. The company also enhanced its documentation to explain data processing mechanisms and retention policies more clearly.

However, significant gaps remain in Microsoft's approach to accuracy and content filtering. The DPIA identifies concerns about the "Workplace Harms filter," which Microsoft introduced without adequate documentation or customer controls. Microsoft has published only four sentences about the new filter's purpose, and it is unclear whether this replaces the commitment to document Workplace Harms definitions and severity scales with the same level of detail as the Harmful Content filter.

Data retention and pseudonymization concerns

One of the two remaining medium-level risks centers on Microsoft's 18-month retention period for pseudonymized diagnostic data. Microsoft has published that it may need to process the Required Service Data and Telemetry Data throughout the 18-month retention period for the three agreed processor purposes. Customers can (theoretically) shorten this period by deleting a user account or terminating their use of Microsoft 365.
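
To illustrate why SURF frames this remedy as theoretical, the sketch below shows the documented Microsoft Graph call for deleting a user account (DELETE /users/{id}), the drastic step a customer would have to script to cut the retention period short. The access token and account name are placeholders, not values from the assessment.

```python
# Illustrative sketch: deleting an account via the Microsoft Graph REST API,
# the documented programmatic route behind the "delete a user account" remedy.
# ACCESS_TOKEN is a placeholder; a real app token needs User.ReadWrite.All.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<app-token-with-User.ReadWrite.All>"  # placeholder

def delete_user(user_principal_name: str) -> None:
    """Delete an Entra ID user account; Graph answers 204 No Content on success."""
    resp = requests.delete(
        f"{GRAPH}/users/{user_principal_name}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()

# delete_user("departing.student@example.edu")  # hypothetical account
```

As the parenthetical "theoretically" suggests, deleting accounts is hardly a workable retention control for active users, which is why the risk rating stands.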

The extended retention period raises reidentification risks, particularly given the volume and granularity of collected telemetry data. SURF's analysis suggests this timeframe exceeds necessity requirements under GDPR data minimization principles, though Microsoft argues the retention supports essential service operations.

Accuracy and AI hallucination risks

The second persistent medium-level risk involves inaccurate personal data generation. The DPIA documents instances where Microsoft 365 Copilot produced fabricated information about individuals, including non-existent academic papers and incorrect attributions. In one test that Privacy Company found especially difficult to understand, Copilot was asked a question about international data transfers and suggested five recent scientific papers with different titles, yet every 'article' contained the same source reference: one of the ten articles actually available in SharePoint.
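
A hypothetical sketch of the kind of check this finding invites: cross-referencing titles cited in a Copilot reply against the documents actually present in the repository. The titles and matching threshold below are illustrative; in practice the document list would be pulled from SharePoint.

```python
# Pure-Python sketch: flag cited titles that do not closely match any document
# known to exist in the organisation's repository. Titles are hypothetical.
from difflib import SequenceMatcher

available_docs = {
    "Transfer Impact Assessments after Schrems II",
    "EU-US Data Privacy Framework: an overview",
}

def looks_real(cited_title: str, threshold: float = 0.8) -> bool:
    """Return True if a cited title closely matches a known document."""
    return any(
        SequenceMatcher(None, cited_title.lower(), doc.lower()).ratio() >= threshold
        for doc in available_docs
    )

for title in ["Transfer impact assessments after Schrems II",
              "Five pillars of international data flows (2024)"]:
    print(title, "->", "found" if looks_real(title) else "possibly fabricated")
```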

These accuracy issues become particularly concerning when applied to educational contexts where reliable information is crucial. The interface design, resembling a chat system, may mislead users into treating generated content as factual rather than statistically probable text completion.

Microsoft's approach to addressing accuracy concerns involves multiple strategies, though implementation remains incomplete. The company has committed to user interface improvements designed to add "friction" that encourages verification of generated content, though specific measures remain confidential and have not yet been deployed.

Implications for educational institutions

SURF's updated guidance reflects a nuanced approach to AI adoption in education. Due to the privacy improvements in Microsoft 365 Copilot, from which all users benefit, SURF no longer advises against its use entirely. However, given the remaining risks, the cooperative recommends that educational and research institutions adopt a cautious approach to using Copilot and carefully weigh the risks for each type of use.

The guidance places significant responsibility on individual institutions to implement appropriate safeguards. SURF calls on education organisations that start using Microsoft 365 Copilot to share copies of all complaints about inaccurate personal data, including complaints about incorrect filtering of data. This crowdsourced approach to quality monitoring reflects the challenges of regulating AI systems at scale.

Educational institutions must now develop comprehensive AI usage policies that address both privacy protections and accuracy verification. The guidance recommends disabling access to Bing web search, implementing role-based access controls, and establishing clear protocols for handling AI-generated content containing personal information.

Broader industry context

SURF's measured response to Microsoft 365 Copilot reflects broader tensions between AI adoption and privacy protection in European institutions. The assessment occurs within a context where Dutch authorities have been developing GPT-NL, a sovereign AI solution designed to align with European values and reduce dependency on US-based AI services.

The timing coincides with increased regulatory scrutiny of US technology companies' data handling practices, including concerns about the Cloud Act's implications for European data protection. Microsoft has faced questions about its ability to protect European data from US government access requests, adding complexity to institutional adoption decisions.

Privacy regulators across Europe have intensified enforcement of data protection requirements, with particular attention to consent mechanisms and algorithmic transparency. The ICO's recent guidance on profiling tools and automated decision-making systems reflects similar concerns about AI governance in institutional settings.

Industry response and future outlook

Microsoft's substantial investments in AI development have transformed its advertising business, with the company reporting over $20 billion in annual advertising revenue as Copilot integration drives new engagement patterns. However, institutional deployment faces different requirements than consumer applications, particularly regarding data protection and accuracy standards.

The education technology sector continues grappling with balancing innovation benefits against privacy risks. SURF's development of EduGenAI as an alternative demonstrates institutional interest in privacy-preserving AI solutions, though implementation remains in early stages.

SURF plans to reassess the situation in six months, potentially adjusting its risk ratings based on Microsoft's implementation of committed improvements. The cooperative emphasizes ongoing dialogue with Microsoft while maintaining its role in protecting institutional interests against vendor overreach.

Implementation timeline and recommendations

Organizations considering Microsoft 365 Copilot deployment should implement several key protections:

  • Disable web search functionality through Bing to prevent data sharing with external services.
  • Establish comprehensive AI usage policies addressing both privacy and accuracy verification requirements.
  • Configure role-based access controls to limit exposure of sensitive organizational data.
  • Implement monitoring systems to identify and address inaccurate AI-generated content (see the sketch below).
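
As a concrete starting point for the monitoring item above, here is a minimal sketch that flags apparent personal data in AI-generated text so a human can verify it before reuse. The regex patterns are deliberately simple illustrations; a production system would use a dedicated PII-detection library and institution-specific rules.

```python
# Minimal monitoring sketch: flag AI-generated text that appears to contain
# personal data, so a human reviews it before it is stored or shared.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "dutch_phone": re.compile(r"(?:\+31|0)\d{9}\b"),  # illustrative pattern
}

def flag_personal_data(generated_text: str) -> list[str]:
    """Return the names of PII patterns matched in a Copilot response."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(generated_text)]

hits = flag_personal_data("Contact dr. Jansen at j.jansen@example.edu.")
if hits:
    print("Review required; matched:", ", ".join(hits))
```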

The assessment underscores that AI adoption in educational settings requires careful balance between innovation potential and institutional responsibilities for data protection and information accuracy. While Microsoft's improvements have reduced immediate risks, fundamental challenges around AI transparency and data governance persist.

Timeline

  • December 2024: SURF publishes original DPIA identifying four high-risk areas
  • April 2025: Microsoft provides mitigation information showing progress
  • June 2025: SURF acknowledges improvements while maintaining cautious stance
  • September 11, 2025: Updated DPIA downgrades two risks to medium status
  • Future assessment: SURF plans six-month review of Microsoft's implementation progress

Summary

Who: SURF (Dutch IT cooperative for education and research), Privacy Company, Microsoft, and Dutch educational institutions

What: Updated Data Protection Impact Assessment reducing Microsoft 365 Copilot education risks from high to medium status, while maintaining concerns about data accuracy and retention practices

When: September 11, 2025 announcement of updated assessment, following December 2024 original DPIA

Where: Netherlands and broader European education sector, affecting institutions using Microsoft 365 educational licenses

Why: Privacy concerns about AI-generated inaccurate personal data and excessive retention of diagnostic information necessitated comprehensive risk assessment and ongoing negotiation with Microsoft to implement adequate protections