Google DeepMind: Deepening our partnership with the UK AI Security Institute
- Google DeepMind announced that it is strengthening its partnership with the UK's AI Security Institute (AISI), expanding cooperation on safety evaluations and vulnerability testing of frontier AI models.
- The two organisations will jointly advance risk-assessment methodologies and improvements to safeguards.
Google DeepMind has announced an expanded partnership with the UK's AI Security Institute (AISI), deepening collaboration on the evaluation and stress-testing of frontier AI models. The move signals continued momentum behind government-led oversight of advanced AI systems in coordination with the labs that build them.
Under the deepened arrangement, AISI will gain richer access to DeepMind's frontier models — including Gemini — to conduct pre-deployment evaluations across risk areas such as cyber offense capabilities, chemical and biological knowledge uplift, and model autonomy. The two organisations also intend to co-develop methodologies for assessing safeguards, sharing technical insights that can feed back into both DeepMind's internal safety processes and AISI's evolving evaluation science.
AISI itself emerged out of the November 2023 Bletchley Park AI Safety Summit, originally founded as the AI Safety Institute before being rebranded earlier this year to emphasise national-security framing. It now sits among a small but growing network of state-backed evaluation bodies, including the US AI Safety Institute housed within NIST, and counterparts forming in countries such as Japan, Singapore and Canada. Cross-border cooperation between these institutes has become a notable feature of the post-Bletchley landscape.
The DeepMind announcement fits a broader pattern: OpenAI and Anthropic have struck similar pre-deployment testing agreements with AISI and its US counterpart, making third-party government evaluation a de facto norm for top-tier frontier systems. In parallel, the EU's AI Act introduces mandatory systemic-risk assessments for general-purpose models above a compute threshold, blurring the line between voluntary commitments and regulatory obligations.
For Google DeepMind specifically, the partnership dovetails with its Frontier Safety Framework, which sets out internal thresholds and mitigations tied to dangerous capability evaluations. Layering external evaluation from AISI on top arguably strengthens the credibility of those self-assessments and provides a degree of public-interest scrutiny that pure internal red-teaming cannot offer. It may also help DeepMind navigate diverging regulatory expectations across the UK, EU and US.
Questions remain, however, about how deeply any government institute can probe models within the tight timelines of commercial release cycles, and how findings will be communicated when evaluators identify concerns. AISI has so far published selective results from earlier rounds of testing, but full transparency is constrained by both security sensitivities and the proprietary nature of the systems involved. Observers are likely to watch closely whether deeper access translates into meaningfully different safety outcomes, or whether it primarily formalises a relationship that was already operating informally.
Either way, the announcement underlines that frontier AI safety is increasingly being shaped not just inside labs, but at the interface between developers and a new class of technical government bodies.
The text and summaries on this page are automatically generated by AI. Please check the original article (deepmind.google) for accuracy.