AI Usage Level Rubric
This rubric defines five levels of AI involvement in research tasks, ranging from no AI usage (Level 1) to full offload (Level 5). Research teams should use this framework to discuss and establish appropriate boundaries for their specific context, considering factors such as disciplinary norms, methodological requirements, and the nature of the doctoral contribution.
Level 1: No AI Usage
The researcher completes the task entirely independently without any AI assistance. This is appropriate for tasks where unassisted human performance is essential to the integrity of the research or the development of the researcher, or where AI involvement would be inappropriate or impossible.
Level 2: Light Assistance
AI serves as a reference tool or provides minor support. The researcher maintains full intellectual ownership and control, using AI only for explanations, clarifications, checking work, or overcoming minor obstacles. The AI's role is comparable to consulting a textbook or dictionary.
Level 3: Moderate Collaboration
AI provides substantive suggestions, structures, or content that the researcher evaluates, adapts, and integrates. The researcher remains the decision-maker and author, but AI contributions meaningfully shape the output. This is comparable to receiving detailed feedback from a knowledgeable colleague.
Level 4: Substantial Delegation
AI produces significant portions of the work product, which the researcher reviews, verifies, and takes responsibility for. The researcher's role shifts toward direction, curation, and quality assurance rather than primary production. Intellectual ownership becomes shared or ambiguous.
Level 5: Full Offload
AI completes the task with minimal or no human input, verification, or intellectual engagement. The researcher accepts AI outputs without substantive evaluation or contribution. This level raises significant questions about authorship, learning, and the integrity of the doctoral contribution.
How to Use This Framework
This framework is designed to facilitate discussion within research teams about appropriate AI use. It is not prescriptive—what is appropriate will vary by discipline, methodology, institution, and the specific nature of the research.
For Supervisors
Use this framework to:
- Establish clear expectations with students early
- Discuss AI use as part of regular supervision
- Identify tasks where AI assistance is encouraged vs. discouraged
- Consider disciplinary and methodological implications
For Researchers
Use this framework to:
- Reflect on your current AI usage patterns
- Discuss boundaries proactively with supervisors
- Document AI use for transparency
- Ensure you are developing necessary skills
For Institutions
Use this framework to:
- Develop discipline-specific guidance
- Inform policy development
- Support training and development programmes
- Address assessment and integrity concerns
Research Task Taxonomy
The levels above can be applied across the phases of doctoral research. Most tasks can be discussed at any level; the notes below flag tasks where certain levels are inherently inapplicable.

Orientation and Scoping
- Level 5 not applicable to negotiation: it inherently requires human participation and relationship management
- Level 5 not applicable to presentation and defence: these require human presence and understanding

Research Planning and Design
- Research Planning
- Research Design
- Ethics and Approvals

Data Collection and Generation
- Preparation
  - Level 5 not applicable to recruitment and access negotiation: these require human interaction and relationship-building
  - Level 5 not applicable to hands-on skills; partial offload is possible only for conceptual learning
- Active Data Collection
- Data Management

Analysis and Interpretation
- Data Processing
- Analytical Work
- Interpretation and Synthesis

Writing and Synthesis
- Drafting
- Revision and Refinement
- Finalisation

Dissemination and Defence
- Conference and Publication Activity
- Thesis Examination
  - Level 5 not applicable to examination preparation: it must develop the researcher's own understanding for the actual examination
  - Levels above 1 not applicable to the viva: the examination requires unassisted human participation, and the researcher must be able to discuss and defend their work without AI assistance

Cross-cutting Activities
- Level 5 not applicable to skills development: skills require human development and cannot be fully offloaded
- Level 5 not applicable to teaching: it requires human presence and engagement with students
- Level 5 not applicable to attendance and participation: these require human presence
- Levels 4-5 not applicable to relationships: they require human cultivation and cannot be delegated
Factors Affecting Appropriate AI Usage
What constitutes appropriate AI usage varies significantly based on context. Research teams should consider these factors when establishing boundaries for their specific situation.
Disciplinary Norms
Different fields have different expectations about tools, authorship, and intellectual contribution.
- STEM fields may accept more computational assistance
- Humanities may emphasise individual interpretation
- Social sciences vary by methodological tradition
- Creative disciplines have unique questions about AI co-creation
Methodological Paradigm
The epistemological foundations of the research affect what AI involvement means.
- Interpretive research may require human meaning-making
- Positivist research may accommodate more automation
- Critical approaches may require researcher reflexivity
- Mixed methods need consideration at each stage
Type of Doctoral Programme
The purpose and structure of the degree affect expectations.
- Traditional PhDs emphasise independent scholarship
- Professional doctorates may allow more practical AI use
- Practice-based doctorates raise unique questions
- Collaborative programmes may have different norms
Career Development
The PhD is a training programme, not just a research project.
- Which skills must the researcher develop?
- What competencies will future employers expect?
- How will AI proficiency itself be valued?
- What forms of expertise remain distinctly human?
Ethical Considerations
AI use raises specific ethical questions in research contexts.
- Transparency and disclosure requirements
- Data privacy when using cloud AI services
- Bias and fairness in AI-assisted analysis
- Environmental impact of AI computation
Institutional Requirements
Universities and funders may have specific policies.
- Existing academic integrity policies
- Funder requirements for transparency
- Journal and publisher policies
- Professional body guidelines
Questions for Research Teams
Use these questions to guide discussions about AI usage in your specific research context.
About the Task
- Is this task central to the doctoral contribution?
- Does it require skills the researcher must develop?
- Would AI assistance affect the validity of the research?
- Are there accuracy risks if AI makes errors?
About Transparency
- How will AI use be documented and disclosed?
- What would examiners, reviewers, or employers expect?
- Does the researcher understand what the AI produced?
- Can they explain and defend all aspects of the work?
About Learning
- What is the researcher missing by not doing this themselves?
- Will they need this skill in future roles?
- Is struggling with this task part of intellectual development?
- Could AI assistance prevent deeper understanding?