How to use Artificial Intelligence (AI), write and cite in academic writing, and what aids are permitted for exams
Artificial Intelligence (AI) can be a useful tool in your studies. On this page, you will find information and examples of good practices for using these tools.
Learn how to write academic text and quote correctly from the sources you use.
How to use Artificial Intelligence (AI)
Artificial Intelligence (AI) is a tool that can provide valuable learning support. Always consult your course responsible or advisor if you are uncertain about the use of AI. It is your responsibility to act with academic integrity and familiarize yourself with applicable ethical principles, laws, and regulations.
Ethical Principles and Guidelines
It is important to:
- Use AI as a support tool, not as a substitute for your own thinking
- Critically evaluate the quality of the responses you receive
- Always disclose the use of AI in academic work and clearly label material generated by language models
- Avoid using AI to process personal data or sensitive matters
- Adhere to academic and ethical guidelines for source usage
What is AI?
Artificial intelligence (AI) consists of self-learning systems based on neural networks, where artificial neurons communicate with each other to solve various tasks. One purpose of AI tools can be to identify patterns in large amounts of data. This type of AI program is well established and has been in use for several years; one example is Rikshospitalet's use of AI to analyze X-ray images for fractures.
Definitions
AI tools
Programs or platforms that use AI to generate text, analyze data, or perform other tasks.
Generative AI
AI systems that can generate new content such as text, images, or audio based on training data.
Language models (LLMs)
Large language models (LLMs) are trained on enormous amounts of text data and are essentially statistical tools that predict the next word in a given sequence. In other words, they produce a context-specific probability distribution over words. This enables them to generate human-like text, answer questions, and perform other language-related tasks. There are many such models (programs) on the market, each with its own strengths and weaknesses. The most modern models, referred to as multimodal, are capable of understanding and generating information from several different types of data, or "modalities". These can include text, images, sound, video, and sensor data.
Small language models (SLMs) are also increasing in popularity. These models are trained on smaller but carefully curated datasets. They often perform as well as, or better than, LLMs on certain tasks. They are also more sustainable, as they are more resource efficient.
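As a simplified illustration (the words and probabilities here are invented, not taken from any specific model): given the sequence "The exam will be held in the", a language model might assign high probability to continuations such as "autumn" or "spring" and low probability to "library", and then generate text by repeatedly selecting among the most probable next words. The model does not check whether the resulting statement is true; it only reflects which words tended to follow each other in its training data.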
What does the AI category for my course mean?
Information about permitted use of AI in a course should be available in the online course description. The section should include a description of how AI is allowed to be used in various assessment forms and mandatory activities within the course. There are three possible categories for AI use:
K1 - No Use of AI
- In this category, the use of AI is not permitted under any circumstances.
- Example: In an on-campus exam or test, AI tools are typically not allowed.
K2 - Specified Use of AI
- In this category, the specific ways AI can be utilized are clearly defined.
- Example: For a group project, it may be allowed to use AI for brainstorming ideas and proofreading (language polishing), but not for generating content or solving problems.
- Students must provide a detailed description of their AI usage, specifying the tools or programs used and how they were applied.
- The group is responsible for ensuring that the final content adheres to the course rules and guidelines. For example, after using AI for proofreading, the group must ensure the final text complies with academic and ethical standards.
K3 - Full Use of AI
- This category allows unrestricted use of AI, as long as it aligns with the Guidelines for the Use of Artificial Intelligence (AI) at NMBU.
- Important Note: Ethical principles must always be followed, even when full use of AI is permitted.
- Example: Students may use AI tools freely for all aspects of an assignment, including content generation, analysis, and formatting, provided proper disclosure and adherence to ethical guidelines.
Remember!
- Regardless of the category, students are responsible for ensuring their use of AI complies with institutional policies and ethical standards.
- Transparency is key: always disclose your use of AI and specify the tools and methods applied.
Accuracy - probability and bias
It is in the nature of a language model to sometimes generate incorrect information. It cannot distinguish right from wrong on its own and therefore composes sentences based on probability, not judgment.
Here is an example: ChatGPT was asked the question: what is the most cited research article in economics of all time? The answer was "A Theory of Economic History", written by Douglass North in 1969 and published in the Journal of Economic History. The only problem is that this article does not exist. Based on its training data, the model strings together the words that most often appear together in connection with the question, and "A Theory of Economic History" is the answer we receive from ChatGPT. The choice of author is made the same way: Douglass North is the author who has published the most about economics, and probability therefore dictates that he is named as the author.
Factual errors (known technically as hallucinations) are something providers are actively working on, and their frequency has decreased significantly since the launch of ChatGPT in the fall of 2022. The fact that the models are now connected to the internet and can search for specific information is of great significance. However, hallucinations and low-quality generated responses still occur. Different providers have different solutions; Google's NotebookLM is a good example. The platform, based on Gemini, requires you to upload the sources you want it to base its responses on, and it can only generate text based on what you have uploaded. This significantly reduces the occurrence of hallucinations and gives the user far greater control than a conventional chatbot can offer.
Even if the generated information from a language model does not contain factual errors, one must also be aware of the possibility of plagiarism. There are several examples of language models "generating" information identical to already published material. Reuse of generated content can therefore result in plagiarism accusations.
GPT and other language models also have built-in ethical guidelines. They primarily deal with offensive content and privacy, but they are also influenced by dominant political views. Language models can therefore have a built-in bias, or reproduce biased views represented in the training material. This affects their accuracy.
A language model also has the ability to give you the answer you want, even if it does not represent the consensus. This is especially important to be aware of when using platforms aimed at academia and literature searches. If you ask to have text generated based on a specific issue, you will get an answer that supports your claim (including factual sources), but it is not at all certain that this represents the field's general position. Contradictory research is, in other words, not considered unless you specifically ask about it. The models also do not have access to all peer-reviewed scientific literature. The suggestions you get are based on the "library" that the provider has managed to build up. They are often very incomplete and contain little contemporary scientific literature. Such platforms should therefore be regarded as a supplement, not a replacement for more conventional forms of information retrieval.
A final problem is the use of AI for summarizing academic texts. An AI tool cannot necessarily weigh what is important and what is not, so a generated summary can give a misleading representation of the text's content. Several of the tools have added features that make them much more accurate and useful, but it is still important to quality-assure the content.
Prompting
When interacting with language models, it is important to be able to ask specific questions that contain plenty of context. This is called prompting, which can be explained as question formulations, instructions, or cues. Prompting is your way of giving the AI tool instructions on what you want it to do.
It can be challenging to create prompts that give you the desired response. There is therefore often a need for adjustments, corrections, and follow-up questions. The stronger your grasp of the subject matter, the higher the likelihood of producing useful prompts. The most important thing is not to ask the perfect question, but to experiment and test. In some cases, you may get more accurate results if you prompt in English. For such AI tools to be useful to you, it is important to view the interaction as a dialogue.
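As a made-up illustration (the course and topic are invented): rather than asking "Explain nitrogen fixation", a more specific prompt could be "I am a second-year agronomy student preparing for an oral exam. Explain biological nitrogen fixation in about 200 words at an introductory level, and list three common misconceptions." If the answer is too technical, you can follow up with something like "Use simpler terms and give one concrete example from agriculture."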
References
It is important to point out that GPT and similar tools cannot be referred to as a source. Nor can you make use of the sources the tool cites, as these may not be real sources. This is because language processing tools respond by searching and combining elements from their own training data. If you use AI in the process of developing your own text, you should be aware of the following:
- AI-generated text may contain errors or inaccuracies, or be misleading, so always verify the text against several other sources.
- AI-generated text is not your own. If you use it in your work, you need to be open about which parts of the text are AI-generated, and about how it was generated and used in your work.
- AI-generated text usually does not refer to sources, and the sources it does refer to are not necessarily real or relevant. To be an honest writer, you need to find, explore and reference real academic sources.
- AI-generated text may reflect biases or prejudices from the training data. By building on them, you may contribute to reinforcing such biases and prejudices.
- Do not use AI to write the text for you; use it as support in the writing process, for example to get ideas or to improve your own text.
Data storage and personal data
The majority of the companies that own and develop the major language processing tools store user data in the US. They require you to log in by creating a password and registering a phone number and/or email address (alternatively via a Facebook or Microsoft account), but they also store metadata such as your internet history, browser type, and the content you search for and are exposed to when using the program. In addition, some of the language processing tools are trained further through your interaction with them: everything you write and all the answers you get are used to further develop the technology.
Because of regulations regarding data storage outside the EU (GDPR), neither NMBU nor NMBU employees can require students to use such tools as part of their teaching. However, the tech companies are aware of this and want to accommodate it by changing their own data storage practices.
Available aids for exams on campus
For exams or assessments on campus, there are specific rules regarding the use of aids. It is important that you know what the permitted aids for your exam/test are and only use these.
What aids are permitted and what not to bring
The course description online must specify which aids (aid code) are permitted during the exam for this course. If the course has "specified other aids," these must be detailed in Canvas, such as access to printed notes, tables, etc. This information will also be included on the exam paper you receive.
During an on-campus exam, access to the open internet is not permitted, but access to certain websites may be available, e.g. Lovdata Pro.
What does the aid code (permitted aids) mean?
- A1: no calculator, no other aids
- A2: no calculator, other aids as specified
Canvas will specify what other aids are permitted. For example, it may include notes or tables that you must bring yourself, or tables and formulas that will be provided by the exam invigilator.
- B1: calculator handed out, no other aids
A calculator will be provided to you by the exam invigilator. No other aids are permitted.
- B2: calculator handed out, other aids as specified
A calculator will be provided to you by the exam invigilator. Additionally, Canvas will specify what other aids are permitted.
- C1: all types of calculators, other aids as specified
You must bring your own calculator, suited to the course you are taking the exam in. Other permitted aids will be specified in Canvas. Your calculator must not contain files, must not be connected to the internet or be capable of communicating with other devices, and must not produce any sound.
Permitted aids
- You must bring permitted aids to the campus-based examination premises yourself. You must ensure that the aids you bring with you do not contain any unauthorized notes.
- Please see the examination page for what else you need to remember to bring, as well as the permitted equipment.
Non-permitted aids
- You are not allowed to bring or have access to other aids than those specifically permitted for the examination in question and you are not permitted to share aids during the examination.
- Any access to or use of mobile phones during a campus-based examination will be regarded as cheating. The same rules apply to other digital devices with communication capabilities.
- Students are not permitted to communicate with each other or other persons during an examination, unless communication has been specified as a permitted aid in the examination question paper or the course description.
- Please see the examination page for equipment that is not permitted to bring or use.
Inspection of aids
For campus-based examinations where invigilators are used, any permitted aids brought by students may be checked individually or through random inspections by the invigilators.
Dictionaries at exams at campus
You may always bring a bilingual dictionary if the exam is held in a language other than your first language (mother tongue). That is to say: from your mother tongue to Norwegian or from your mother tongue to English.
You cannot bring an advanced dictionary that explains words and expressions.
The dictionary may not contain any of your own notes.
Available aids for home exams, assignments and the like
For home exams, assignments, etc., "all" aids are available, but there are still rules to follow. When using AI (Artificial Intelligence) or, for example, incorporating material from others' work, you must cite your sources. As a general rule, collaboration/communication with others is not permitted.
What aids are permitted and what is not permitted?
Students are not permitted to communicate or cooperate with each other or other persons during an examination, unless such communication has been specified as a permitted aid in the examination question paper or the course description.
During your exam, you should show what you have learned by answering the exam questions in your own words and formulations. This is also important to keep in mind if you use notes that contain direct transcripts from lecture slides or, for example, joint notes made with other students.
Use of AI
NMBU permits the use of AI-based programs when completing assignments that are part of a course's compulsory work, unless it is explicitly stated in the course description that AI-based programs are not allowed.
If you use AI, it is crucial that you familiarize yourself with its correct usage. Please refer to NMBU's specific guidelines in the section below.
The course description specifies the AI category assigned to each assessment in the course.
Remember!
- It is important that you learn the correct use of sources and references. See the information on reference styles and literature lists, and book a tutorial in academic writing (green link buttons at the top of the page).
- Write the answer in your own words. Do not "cut and paste" from others.
- See NMBU's own guidelines for the use of AI.
- If you have not been informed that collaboration/communication with others is allowed, then such collaboration will be considered cheating.
- Read more about Cheating and plagiarism at NMBU