DURC GPT: An Advanced AI Assistant for Identifying and Mitigating Dual-Use Research Risks
Slug: durc-gpt-ai-assistant-dual-use-research-risk-biosecurity
Excerpt: DURC GPT is a specialised ChatGPT-based AI assistant designed to identify, evaluate, and mitigate risks in dual-use research across biosecurity, AI safety, and chemical research. This post explores its capabilities, use cases, and implications for responsible science governance.
Tags: Dual-Use Research, Biosecurity, AI Safety, DURC, Science Governance, Biosafety, Artificial Intelligence
Category: AI & Biosecurity
Read Time: 9 min
Author: Dr. Odongo Oduor Joseph
Introduction: The Dual-Use Dilemma in the Age of Generative AI
The history of science is inseparable from the history of dual-use risk. Every major advance in biology, chemistry, and information technology has carried within it the potential for both profound benefit and catastrophic harm. The discovery of recombinant DNA technology in the 1970s, the development of gain-of-function research methodologies in the 2000s, and the democratisation of CRISPR gene editing in the 2010s each expanded the frontier of human capability while simultaneously lowering the barrier to misuse. Today, as large language models (LLMs) become embedded in scientific workflows, a new dimension of dual-use risk has emerged — one that demands equally sophisticated governance tools.
DURC GPT is a purpose-built AI assistant designed to meet this challenge directly. Built on the ChatGPT platform and specialised through custom instructions and curated knowledge, DURC GPT serves as an intelligent advisory layer for researchers, biosafety officers, ethics committees, and policy analysts who need to identify, evaluate, and mitigate risks associated with dual-use research of concern (DURC) across three critical domains: biosecurity, AI safety, and chemical research.
What Is DURC GPT?
DURC GPT is a custom GPT — a specialised configuration of OpenAI's ChatGPT — engineered to apply structured risk assessment frameworks to research proposals, experimental designs, publications, and policy documents. Unlike general-purpose LLMs, which approach biosecurity questions without domain-specific guardrails, DURC GPT is calibrated to the specific regulatory, ethical, and scientific vocabulary of dual-use research governance.
The tool is designed to assist with three broad categories of analysis:
Risk Identification — DURC GPT can parse research descriptions, grant proposals, and experimental protocols to flag elements that meet the criteria for dual-use concern under frameworks such as the U.S. DURC Policy (2012), the 2024 DURC-PEPP Policy, the Biological Weapons Convention (BWC), the Chemical Weapons Convention (CWC), and the Wassenaar Arrangement. It applies the seven categories of experiments of concern articulated by the National Science Advisory Board for Biosecurity (NSABB) and codified in U.S. DURC policy, including enhanced transmissibility, pathogenicity, immune evasion, and the disruption of protective countermeasures.
Risk Evaluation — Beyond identification, DURC GPT supports structured evaluation of risk magnitude and probability. It can assist users in completing institutional risk assessments, drafting biosafety committee submissions, and applying the dual-use research review criteria established by the WHO Advisory Committee on Variola Virus Research and analogous bodies. For AI safety applications, it draws on frameworks from the UK AI Safety Institute, the EU AI Act, and the NIST AI Risk Management Framework to assess whether AI systems or datasets carry dual-use potential.
Risk Mitigation — DURC GPT can propose mitigation strategies proportionate to the identified risk level, including experimental redesign to achieve scientific objectives without generating dangerous knowledge, enhanced biosafety containment recommendations, data access restriction protocols, and communication strategies for responsible disclosure.
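The identify-evaluate-mitigate workflow above can be sketched as a simple screening record. This is a hypothetical illustration of how an institution might organise the output of such a review, not DURC GPT's internal logic; the category descriptions paraphrase the seven experiments of concern from the 2012 U.S. DURC Policy.

```python
from dataclasses import dataclass, field

# The seven categories of experiments of concern from the 2012 U.S. DURC
# Policy, paraphrased. A real assessment requires formal institutional
# review; this structure only illustrates how a screening record might
# be organised.
DURC_CATEGORIES = {
    1: "Enhances the harmful consequences of an agent or toxin",
    2: "Disrupts immunity or the effectiveness of an immunization",
    3: "Confers resistance to useful prophylactic or therapeutic "
       "interventions, or facilitates their evasion",
    4: "Increases stability, transmissibility, or ability to disseminate",
    5: "Alters the host range or tropism of an agent or toxin",
    6: "Enhances the susceptibility of a host population",
    7: "Generates or reconstitutes an eradicated or extinct agent or toxin",
}

@dataclass
class ScreeningRecord:
    project_title: str
    triggered: set = field(default_factory=set)   # flagged category numbers
    severity: str = "unassessed"                  # e.g. low / moderate / high
    mitigations: list = field(default_factory=list)

    def flag(self, category: int) -> None:
        """Record that a DURC category is potentially triggered."""
        if category not in DURC_CATEGORIES:
            raise ValueError(f"Unknown DURC category: {category}")
        self.triggered.add(category)

    def needs_institutional_review(self) -> bool:
        # Any triggered category warrants biosafety committee review.
        return bool(self.triggered)

# Example: a transmissibility study flags category 4 and records a mitigation.
record = ScreeningRecord("Influenza transmissibility study")
record.flag(4)
record.mitigations.append("Enhanced biosafety containment (BSL-3)")
print(record.needs_institutional_review())  # True
```

The point of the sketch is the separation of concerns: identification populates `triggered`, evaluation sets `severity`, and mitigation fills `mitigations`, mirroring the three analysis categories described above.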
Key Capabilities and Use Cases
Biosecurity Risk Screening
For molecular biologists, virologists, and synthetic biologists, DURC GPT provides a first-pass screening layer before institutional review. A researcher designing a gain-of-function experiment to study influenza transmissibility can describe their experimental approach to DURC GPT and receive a structured analysis of which NSABB DURC criteria the work potentially triggers, what institutional review pathways are required, and what mitigation options exist. This does not replace formal biosafety committee review — it prepares researchers to engage that review more effectively and reduces the likelihood of inadvertent policy violations.
The tool is particularly valuable in the context of synthetic biology, where the convergence of DNA synthesis, computational protein design, and automated laboratory systems has created what biosecurity scholars call the "democratisation paradox": the same tools that accelerate vaccine development and agricultural innovation can, in the wrong hands, enable the reconstruction of dangerous pathogens or the design of novel toxins. DURC GPT applies the Fink Report criteria and the Johns Hopkins Center for Health Security's biosecurity risk taxonomy to help users navigate this terrain.
AI Safety and Dual-Use AI Research
The dual-use problem is not confined to the life sciences. Advanced AI systems — particularly large language models, autonomous agents, and multimodal models — carry their own dual-use risks. A model trained to assist with cybersecurity research can be repurposed for offensive operations. A system designed to accelerate drug discovery can be queried for synthesis routes to controlled substances. A reasoning model built for strategic planning can be applied to adversarial scenarios.
DURC GPT addresses this dimension by applying AI safety evaluation frameworks to model descriptions, training datasets, and deployment contexts. It can assess whether a proposed AI system meets the criteria for "high-risk AI" under the EU AI Act's Annex III, whether it triggers the dual-use export control provisions of the Wassenaar Arrangement's Category 5 Part 2, and whether its capabilities align with the frontier AI safety commitments made in the 2023 Bletchley Declaration and at the 2024 Seoul AI Summit.
Chemical Research Risk Assessment
Chemical dual-use risk — the potential for legitimate chemistry research to generate knowledge or materials applicable to chemical weapons development — represents a third domain where DURC GPT provides structured assistance. The tool draws on the Chemical Weapons Convention's schedules of controlled chemicals, the Australia Group's export control lists, and the Organisation for the Prohibition of Chemical Weapons (OPCW) technical secretariat guidelines to help chemists, pharmacologists, and materials scientists assess whether their work intersects with controlled substance categories.
How to Access and Use DURC GPT
DURC GPT is available directly through the ChatGPT platform. You can access it at:
https://chatgpt.com/g/g-69c163eed0fc8191899c95961adedd99-durc-gpt
A ChatGPT account (free or Plus) is required to interact with the tool. Once accessed, users can:
- Describe a research project, experimental protocol, or publication in natural language
- Ask DURC GPT to identify which dual-use risk criteria the work potentially triggers
- Request a structured risk evaluation with severity and probability assessments
- Ask for mitigation recommendations, including experimental redesign options
- Request a draft biosafety committee submission or institutional review narrative
- Query specific regulatory frameworks (DURC Policy, BWC, CWC, EU AI Act) for guidance on compliance
The tool is most effective when users provide specific, detailed descriptions of their research rather than generic queries. The more precise the input — including organism names, experimental techniques, intended applications, and geographic context — the more targeted and actionable the output.
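To make the advice above concrete, here is one way a well-formed query might look. Every detail in the example (the organism, techniques, and setting) is invented purely for illustration:

```
Project: Mammalian transmissibility determinants in avian influenza H5N1
Organism / agent: Influenza A virus, subtype H5N1
Techniques: Serial passage in ferrets; reverse genetics
Intended application: Pandemic risk surveillance and vaccine strain selection
Location / jurisdiction: University laboratory, Kenya (BSL-3)
Question: Which DURC criteria does this work potentially trigger, and what
institutional review pathways and mitigations should we prepare for?
```

A query structured this way gives the tool the organism names, experimental techniques, intended applications, and geographic context it needs to return a targeted assessment rather than generic guidance.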
Limitations and Responsible Use
DURC GPT is an advisory tool, not a regulatory authority. Its outputs should be treated as a starting point for institutional review, not a substitute for it. Several important limitations apply:
Knowledge Cutoff — Like all LLMs, DURC GPT's knowledge has a training cutoff date. Emerging pathogens, newly scheduled chemicals, and recently enacted regulations may not be fully reflected in its responses. Users should cross-reference outputs against current regulatory databases.
Jurisdictional Variation — Dual-use research governance frameworks vary significantly across jurisdictions. DURC GPT is primarily calibrated to U.S., EU, and international frameworks; national-level regulations in Kenya, Uganda, and other African Union member states may require additional consultation with national biosafety authorities.
Adversarial Robustness — Like all AI systems, DURC GPT can be prompted in ways that attempt to elicit harmful information. The tool incorporates safety guardrails, but researchers and institutions should be aware that no AI system is fully adversarially robust.
Complementarity, Not Replacement — DURC GPT is designed to complement, not replace, the expertise of biosafety officers, ethics committee members, and regulatory specialists. Its greatest value lies in preparing researchers to engage formal review processes more effectively and in providing a structured vocabulary for risk communication.
Implications for Biosafety Governance in Africa
For researchers and institutions in sub-Saharan Africa, DURC GPT represents a particularly significant resource. The region's biosafety governance infrastructure is uneven: while Kenya's Biosafety Authority and Uganda's National Biosafety Committee have made substantial progress in establishing regulatory frameworks for genetically modified organisms and pathogen research, the capacity for dual-use risk assessment — particularly for emerging biotechnologies and AI systems — remains limited relative to the pace of scientific advance.
DURC GPT can serve as a capacity-building tool in this context, giving researchers at institutions with limited access to specialised biosafety expertise a structured framework for self-assessment. It can help East African scientists engage international collaboration frameworks — including the Global Partnership Against the Spread of Weapons and Materials of Mass Destruction and the Biological Threat Reduction Program — with greater confidence and precision.
Key Takeaways
DURC GPT represents a meaningful advance in the application of AI to biosecurity governance. By providing structured, domain-specific risk assessment across biosecurity, AI safety, and chemical research, it lowers the barrier to responsible dual-use risk management for researchers worldwide. Its direct URL — https://chatgpt.com/g/g-69c163eed0fc8191899c95961adedd99-durc-gpt — makes it immediately accessible to any researcher with a ChatGPT account. As dual-use risks continue to evolve with the pace of scientific advance, tools like DURC GPT will become an increasingly essential component of the responsible science infrastructure.
Frequently Asked Questions
What is DURC GPT? DURC GPT is a custom ChatGPT-based AI assistant designed to identify, evaluate, and mitigate risks associated with dual-use research of concern (DURC) across biosecurity, AI safety, and chemical research domains.
How do I access DURC GPT? DURC GPT is accessible at https://chatgpt.com/g/g-69c163eed0fc8191899c95961adedd99-durc-gpt. A free or paid ChatGPT account is required.
Can DURC GPT replace institutional biosafety review? No. DURC GPT is an advisory tool that prepares researchers for institutional review. It does not replace the authority of biosafety committees, institutional biosafety officers, or national regulatory bodies.
What regulatory frameworks does DURC GPT apply? DURC GPT applies the U.S. DURC Policy (2012), the 2024 DURC-PEPP Policy, the Biological Weapons Convention, the Chemical Weapons Convention, the Wassenaar Arrangement, the EU AI Act, and the NIST AI Risk Management Framework, among others.
Is DURC GPT suitable for researchers in Africa? Yes. DURC GPT is particularly valuable as a capacity-building resource for researchers in regions where specialised biosafety expertise is limited, including sub-Saharan Africa. It should be used alongside national biosafety authority guidance.
What types of research does DURC GPT cover? DURC GPT covers biological research (including gain-of-function, synthetic biology, and pathogen research), AI systems development, and chemical research with potential weapons applications.
