
PANews, April 24 — OpenAI has announced a biosafety vulnerability bounty program for GPT-5.5, inviting researchers with experience in AI red-teaming, safety, or biosafety to probe the model's protections. The challenge is to construct a "universal jailbreak prompt" that passes five biosafety question tests without triggering a review. The first fully successful participant will receive $25,000, and partial successes may also earn smaller prizes. Applications close on June 22, the testing period runs from April 28 to July 27, and all research activity is subject to confidentiality agreements.