The Dark Side of Recall: Microsoft’s AI Feature Is Still Capturing Sensitive Data and Poses Grave Security Risks
In the autumn of 2024, Microsoft reintroduced a controversial feature in Windows called Recall—an artificial intelligence system that periodically captures screenshots, allowing users to later search through their entire on-screen activity. Marketed as a helpful assistant meant to streamline everyday information workflows, Recall in practice risks becoming a meticulously organized archive of vulnerabilities, awaiting exploitation by those with malicious intent.
Recall was launched exclusively on Copilot+ PCs equipped with a Neural Processing Unit (NPU), a hardware accelerator dedicated to AI functions. Although Microsoft asserts that all screenshots are encrypted and stored within a secure enclave, testing has revealed that the system’s filtering mechanisms for sensitive data are alarmingly inconsistent.
Users are actively prompted to enable Recall during the initial Windows setup. By default, it is supposed to avoid capturing sensitive content such as credit card numbers and passwords. However, even a cursory examination reveals the limits of this protection. In multiple instances, Recall recorded banking pages, login screens, payment forms, and password lists, especially when those elements lacked trigger keywords like “password” or “credit card.” Individually these lapses may look minor, but together they turn the screenshot archive into a hazard in the event of unauthorized access.
An evaluation of Recall’s filtering mechanism—which is supposed to omit sensitive data—exposed numerous flaws. While the system sometimes correctly ignored password fields and financial credentials when marked with expected indicators, in other cases, it preserved highly sensitive content. For example, when logging into a bank account, Recall captured the main account overview page, including the balance, even though account numbers were omitted. On Microsoft’s own site, adding a credit card left fields like card number, CVC, and expiration date blank—but when tested on a custom input form without payment-related keywords, Recall recorded the full card number.
Testing of text files revealed a similar vulnerability. Files containing the word “password” were excluded. But the same credentials, when listed without identifiers, were preserved. The issue extended to Microsoft Word documents—using the tag “My SS#:” sometimes triggered filtering, yet substituting it with “Soc:” led to full social security numbers being captured.
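Microsoft has not disclosed how this filtering works, but the behavior observed in testing is consistent with simple keyword matching over the text recognized in each snapshot. The Python sketch below is purely illustrative and is not Recall’s code; the trigger list and the naive_filter helper are hypothetical. It shows how a keyword-triggered filter discards labeled secrets yet keeps identical data once the label changes.

```python
# Toy illustration of keyword-triggered filtering (hypothetical, not Recall's code).
# A snapshot is discarded only when a known trigger word appears in its recognized
# text, so identical secrets slip through once the label is missing or reworded.

TRIGGER_WORDS = {"password", "credit card", "cvc", "ss#"}  # hypothetical trigger list

def naive_filter(ocr_text: str) -> bool:
    """Return True if the snapshot would be discarded as 'sensitive'."""
    lowered = ocr_text.lower()
    return any(word in lowered for word in TRIGGER_WORDS)

samples = [
    "My password: hunter2",        # trigger word present   -> filtered
    "login: alice -> hunter2",     # same secret, no label  -> kept
    "My SS#: 078-05-1120",         # trigger word present   -> filtered
    "Soc: 078-05-1120",            # same number, new label -> kept
]

for text in samples:
    print(f"{text!r:35} filtered={naive_filter(text)}")
```

Whatever the trigger list, a heuristic of this kind fails open: anything not labeled the way the filter expects is treated as harmless.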
One particularly telling case involved a photograph of a passport. When the image filled the screen, Recall ignored it. However, once the photo was partially obscured by another window, the system captured it. This indicates that filtering decisions depend not only on content but also on context, with unpredictable outcomes.
To its credit, Microsoft implemented security upgrades after facing criticism in spring 2024. Screenshots are now stored within a Virtualization-Based Security (VBS) enclave, accessible only after authentication via Windows Hello. However, Windows Hello accepts not only biometrics but also PIN codes, which can be shoulder-surfed or guessed. Moreover, Recall history can be viewed remotely, for instance over a TeamViewer session, as long as the attacker can type the PIN into the remote desktop. Thus, physical access is no longer a prerequisite for compromise.
A noteworthy feature allows users to blacklist specific applications and websites so that Recall never captures them. Yet this measure requires users to anticipate every privacy risk in advance. In practice, avoiding sensitive captures might mean excluding browsers entirely, which defeats much of the purpose of Recall.
Some browser vendors are already taking a stand. The developers of Brave have implemented a workaround that marks all tabs as private, so Recall skips them. This matters most for vulnerable groups such as survivors of domestic abuse, for whom simply viewing pages about crisis shelters, healthcare, or escape planning could betray their intentions to someone with access to the device.
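Brave has not published the details of its workaround, but Windows does offer a documented way for an application to keep its own windows out of screen capture: the Win32 call SetWindowDisplayAffinity with the WDA_EXCLUDEFROMCAPTURE flag. The Python sketch below is an assumption about how an individual app might shield itself, not Brave’s implementation, and whether Recall honors the flag in every build is not something the tests described here establish.

```python
# Minimal sketch: mark a window as excluded from screen capture on Windows.
# Uses the documented Win32 call SetWindowDisplayAffinity (Windows 10 2004+).
# Whether Recall respects this flag in every build is an assumption here.
import ctypes            # Windows-only: ctypes.windll is unavailable elsewhere
import tkinter as tk

WDA_EXCLUDEFROMCAPTURE = 0x00000011  # window content is removed from captures

root = tk.Tk()
root.title("Capture-excluded window")
root.update_idletasks()  # make sure the native window handle exists

# tkinter's winfo_id() returns an inner child window; its parent is the top-level HWND.
hwnd = ctypes.windll.user32.GetParent(root.winfo_id())
ok = ctypes.windll.user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE)
print("SetWindowDisplayAffinity succeeded:", bool(ok))

root.mainloop()
```

Even if it works as intended, per-window exclusion only protects the application that requests it; everything else on screen is still captured by default.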
While Microsoft insists that Recall is still in a “preview phase,” its integration into the initial system setup and default activation on new laptops renders that label somewhat disingenuous. Security researchers, including those at Huntress Security, have warned that virtualized environments such as VBS or Hyper-V can be susceptible to side-channel attacks if the system remains unpatched or hyper-threading is enabled—an exploit path previously used to extract cryptographic keys, and potentially applicable to Recall in the future.
Thus, the notion of a benevolent AI assistant recording one’s screen for convenience finds itself mired in real-world constraints—from technical deficiencies to profound security concerns. Users may have no clear understanding of what has been stored or who might access it. And security, despite all assurances, proves permeable under the simplest threat models.
For users seeking robust privacy, the options are stark: either meticulously blacklist all browsers and applications—nullifying Recall’s utility—or disable the feature altogether. In its current incarnation, Recall remains a precarious compromise between convenience and confidentiality, where, in moments of tension, the user’s interests may not prevail.
Cybersecurity experts acknowledge Microsoft’s efforts to strengthen Recall’s protections, but too many questions remain unresolved. Modern software cannot anticipate every way users lay out personal information on-screen, and even the most sophisticated algorithm cannot recognize a list of logins and passwords as secret in the absence of explicit markers. With side-channel vulnerabilities in VBS and Hyper-V still plausible, especially when hyper-threading is active, Recall may well become an alluring target for future attacks.