VET422 Practical use of artificial intelligence in biomedical research
Credits (ECTS): 5
Responsible faculty: Veterinærhøgskolen
Course responsible: Andrew Michael Janczak
Campus / Online: Taught on campus at Ås
Teaching language: English
Class size limit: 50
Course frequency: Monthly meetings during teaching periods.
Nominal workload: Students are welcome to attend all meetings and work seminars throughout their PhD period, but they must participate in a minimum of 10 meetings and the associated work sessions. To meet the course's total workload of 125 hours, the independent work is standardized at 12.5 hours per meeting (including teaching time). This entails significant preparation and follow-up in the form of documented preparation, practical testing of AI tools on the student's own research-related tasks, oral presentations, and active participation. Students choose the meetings most relevant to their own research and work situation. The compulsory work can be completed within one year or spread over a longer period.
Teaching and exam period: The course is offered continuously throughout the year and has no fixed start or end date. Credits are awarded once the coursework has been completed and approved.
About this course
The course is organized as a seminar series with monthly meetings for PhD students and staff at NMBU. The goal is to build practical competence in the use of generative artificial intelligence as a support tool across academic activities.
The course provides an introduction to how language models work, their opportunities and limitations, and how they can be used responsibly in various parts of a researcher's daily work.
Topics include:
Prompt engineering and structuring instructions
AI as support for literature work and academic writing
AI for qualitative analysis of text and unstructured data
Pattern recognition and conceptual method development (including the design of research protocols and logical structuring of work processes)
Development of teaching materials and assessment tasks (including the legal distinction between generating tasks and the High-Risk activity of using AI to grade or evaluate students)
AI for administrative tasks such as grant writing, reporting, project planning, and navigating High-Risk rules in recruitment
AI in innovation processes and commercialization of research
Quality assurance and mandatory labeling of AI-generated content
Institutional and European guidelines for responsible use
Privacy and data security
Ethical considerations related to AI in academic practice
Furthermore, the course introduces agentic AI and autonomous workflows, covering the methodological complexities, the stricter requirements for manual quality assurance, and the legal implications of deploying such systems under the EU AI Act.
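To give a concrete flavour of the prompt-engineering topic above, the following is a minimal sketch of a structured instruction set. The function, field names, and wording are illustrative assumptions, not part of the course materials or any specific AI tool's API:

```python
# Illustrative sketch: assembling a structured prompt from labelled sections.
# The section names (Role, Task, Context, Constraints) are a common convention,
# not a prescribed format from the course.

def build_prompt(role: str, task: str, context: str, constraints: list[str]) -> str:
    """Assemble a structured instruction from labelled sections."""
    lines = [
        f"Role: {role}",
        f"Task: {task}",
        f"Context: {context}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    role="research assistant for a veterinary science PhD project",
    task="Summarise the attached abstract in three sentences.",
    context="The summary will be checked against the source text.",
    constraints=[
        "Do not add facts that are not present in the abstract.",
        "Flag any claims you are uncertain about.",
    ],
)
print(prompt)
```

Separating the role, task, context, and constraints in this way makes instructions easier to review, reuse, and quality-assure, which is the practical skill the course targets.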
Learning outcome
Knowledge:
The candidate can explain how large language models work at a conceptual level and identify their main limitations, including hallucinations, biases, and a lack of genuine understanding.
The candidate is familiar with institutional and regulatory frameworks for the use of AI in research, teaching, and administration, including NMBU's guidelines and the EU AI Act (KI-forordningen). This includes understanding the strict legal requirements for High-Risk AI (such as student assessment and employment decisions) versus exempt scientific research AI.
The candidate knows where to find further support and resources for responsible AI use.
The candidate understands the methodological and regulatory distinctions between using standard AI models and autonomous AI agents, including how deploying agents may trigger stricter requirements for human oversight and alter legal roles (e.g., transitioning into a 'downstream provider') under the EU AI Act.
Skills:
The candidate can construct effective instructions (prompts) for a range of academic tasks, including literature reviews, writing support, data analysis, method development, the development of teaching materials, and administrative tasks such as grant writing and reporting.
The candidate can critically evaluate AI-generated content, apply appropriate quality assurance procedures, and ensure compliance with transparency and labeling requirements for synthetic content.
The candidate can configure AI tools for specific purposes using structured instruction sets.
The candidate can independently assess when and how AI tools should be used in different parts of academic work, including when a Fundamental Rights Impact Assessment (FRIA) is required.
General competence:
The candidate can make informed decisions about the appropriate use of AI tools across the different phases of research, teaching, administration, and innovation.
The candidate can contribute to discussions on responsible AI use within their academic environments.
The candidate can critically assess the risks, benefits, and compliance requirements of integrating agentic AI into their workflows, taking full personal responsibility for processes that operate with a degree of autonomy.
The candidate can stay updated on developments in AI tools and adapt their practice in line with new opportunities and guidelines.