Source: Cailian Press
Editor: Xiaoxiang
Since the generative artificial intelligence boom began earlier this year, controversy over data security has been ongoing. According to the latest research from a cybersecurity company, Microsoft's artificial intelligence research team accidentally exposed a large amount of private data on the software development platform GitHub several months ago, including more than 30,000 internal Microsoft Teams messages.
Researchers at the cybersecurity firm Wiz discovered that Microsoft's AI research team exposed the data while publishing open-source training data on GitHub in June this year; the cloud-hosted data was exposed through a misconfigured link.
According to a blog post by Wiz, Microsoft's AI research team set out to publish open-source training data on GitHub, but the Shared Access Signature (SAS) token in the download link was misconfigured: it granted access to the entire storage account rather than just the intended files, and it gave users full-control permissions rather than read-only access, meaning anyone with the link could delete or overwrite existing files.
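For readers unfamiliar with SAS tokens, the sketch below illustrates the difference between a narrowly scoped token and the kind of account-wide, full-permission token described above. It is a minimal illustration using Azure's azure-storage-blob Python SDK, not a reconstruction of how the actual token in the incident was generated; the account name, container name, key placeholder, and expiry values are all hypothetical.

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import (
    AccountSasPermissions,
    ContainerSasPermissions,
    ResourceTypes,
    generate_account_sas,
    generate_container_sas,
)

ACCOUNT_NAME = "exampleaccount"        # hypothetical storage account
ACCOUNT_KEY = "<storage-account-key>"  # placeholder; never hard-code real keys

# Narrowly scoped token: read/list access to a single container, short expiry.
scoped_sas = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name="public-training-data",  # hypothetical container
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)

# Overly broad token of the kind the article describes: account-wide scope,
# read/write/delete rights, and a distant expiry date.
broad_sas = generate_account_sas(
    account_name=ACCOUNT_NAME,
    account_key=ACCOUNT_KEY,
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(read=True, write=True, delete=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=365 * 10),
)

# A link built with the scoped token exposes only that one container, read-only.
print(f"https://{ACCOUNT_NAME}.blob.core.windows.net/public-training-data?{scoped_sas}")
```

A link that embeds the first token can only be used to read and list blobs in one container until it expires a week later; a link that embeds the second can be used to read, overwrite, or delete anything in the account for years.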
According to Wiz, the exposed 38 TB of data included disk backups from two Microsoft employees' personal computers, which in turn contained passwords, secret keys, and more than 30,000 internal Microsoft Teams messages involving 359 Microsoft employees.
Wiz's researchers stated that open data sharing is a key component of AI training, but if used improperly, sharing large amounts of data can pose significant risks to companies.
Ami Luttwak, CTO and co-founder of Wiz, noted that Wiz reported the issue to Microsoft in June and that Microsoft quickly removed the exposed data. The Wiz research team came across the data while scanning the internet for misconfigured storage containers.
In response, a Microsoft spokesperson later commented, "We have confirmed that no customer data was exposed, and no other internal services were put at risk."
In a blog post published on Monday, Microsoft stated that it had investigated and remediated an incident in which a Microsoft employee shared a URL in a public GitHub repository while contributing to open-source artificial intelligence learning models. Microsoft said the data exposed in the storage account included backups of two former employees' workstation profiles, as well as internal Microsoft Teams messages between these two former employees and their colleagues.