July 2024

Artificial Intelligence

[Image: A banana, a plant and a flask on a monochrome surface, each surrounded by a thin white frame with letters spelling the object's name. Max Gruber / Better Images of AI / Banana / Plant / Flask / CC-BY 4.0]

Artificial intelligence is difficult to pin down. The OECD defines it as

“a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

Artificial intelligence makes it possible to train computers on large amounts of data so that they can make predictions, take decisions or complete tasks based on the patterns they learn. This versatility makes artificial intelligence a disruptive force within society.
Artificial intelligence has proven able to automate tasks previously regarded as high-level human cognitive functions, including some non-trivial creative tasks, and it improves efficiency in many systems by making far more precise predictions than humans. For universities, the potential benefits of integrating artificial intelligence-powered solutions across institutional functions are fast becoming apparent. Exploration of actual and theoretical use cases highlights the capacity of artificial intelligence to improve institutional management, transform research processes and enhance practices in learning and teaching.

However, the use of artificial intelligence comes with its own set of challenges: artificial intelligence systems learn in non-transparent ways, and it is at times impossible to understand why a machine takes a specific decision. Moreover, the data used for learning must be of high quality, and often of large quantity, for artificial intelligence to deliver reliable results. Data that reflects (societal) prejudices will result in biased (and non-transparent) decisions. To use these technologies in a constructive way that is trusted by users, there needs to be human oversight and control at several steps of the process.

Artificial intelligence in data-driven governance, resource management and student services

The growth of artificial intelligence since the mid-2010s highlights the possibility of fine-grained data-driven governance, where real-world and digitally generated data, and the ability to analyse and predict patterns from them, can promote more efficient, rational and responsive decision-making among university leaders and enhance the anticipatory capacity of institutions. With the potential to harvest large amounts of data on learner behaviour, machines would, for example, be able to detect trends and predict challenges at an early stage, allowing for interventions before a learner drops out of education. These analyses could also help institutions and individual teachers to develop fit-for-purpose curricula and institution-wide strategies for learning and teaching.
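
To make this concrete, the sketch below shows what a minimal early-warning model of this kind could look like. It is purely illustrative: the behavioural indicators, the synthetic data and the risk threshold are all hypothetical, not drawn from any real institution.

```python
# Illustrative early-warning model for learner drop-out risk.
# Feature names and data are hypothetical, not a real institutional dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Hypothetical behavioural indicators per learner.
logins_per_week = rng.poisson(4, n)
assignments_submitted = rng.integers(0, 10, n)
forum_posts = rng.poisson(2, n)
X = np.column_stack([logins_per_week, assignments_submitted, forum_posts])

# Synthetic "dropped out" label, loosely tied to low engagement.
risk = 1.5 - 0.2 * logins_per_week - 0.3 * assignments_submitted
y = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Flag learners whose predicted drop-out probability exceeds a threshold.
probs = model.predict_proba(X_test)[:, 1]
print(f"{(probs > 0.5).sum()} of {len(X_test)} learners flagged as 'at risk'")

# A linear model keeps the reasons inspectable: each coefficient shows how
# an indicator shifts the estimated risk.
for name, coef in zip(["logins/week", "assignments", "forum posts"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

A simple linear model is chosen deliberately here: its coefficients can be read directly, which matters for the transparency requirements discussed below.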

However, this use of artificial intelligence, which has direct consequences for learners and requires data collection on a large number of indicators, would also require high levels of control and transparency. For example, if the system labels a learner as being 'at risk' of repeated failure and thus of leaving the university, the data-driven reason for this decision should be transparent and free from bias (see also student data). Another issue with this use case is the amount of data available for training the system. Given that the data would in many cases be sensitive, require individual consent and not be shareable across institutions, most solutions would struggle to obtain the amounts of data necessary for robust training within one university. Universities are, however, experimenting with and using such systems for various purposes, for example to reduce the number of learners leaving the university.

In the UK in particular, much effort has been put into integrating data-driven technologies and artificial intelligence into university management, with the ambition of enhancing the student experience. It is not always clear whether such data-driven technologies fall under what is normally defined as artificial intelligence; this depends on the complexity of the tools and on whether the analysis is done by an external consultant. The British case is partly connected to a culture of data collection in the UK, where it is, for example, widely accepted to collect data on learners' ethnic backgrounds while adhering to European privacy standards. In other countries this would be culturally or legally difficult: in Germany and France, for example, it is not deemed acceptable to register people according to religion or ethnicity.

Artificial intelligence can also be used as part of university services in the form of chatbots, where students can ask a computer about standard problems and get a human-sounding reply. It will be interesting to see universities introduce such institution-specific bots not only as a general user interface to the university administration and its processes, but as a digital twin of the institution itself, representing its compiled knowledge, including research and teaching materials.
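
A minimal sketch of the retrieval step behind such an institution-specific bot is shown below, assuming a retrieval-augmented setup: user questions are matched against the university's own documents, and only the retrieved passages would be handed to a generative model. The documents and the answering step are hypothetical placeholders.

```python
# Retrieval step of a hypothetical institution-specific chatbot: questions
# are matched against the university's own documents, and only the
# retrieved passages would be passed on to a generative model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder institutional knowledge base.
documents = [
    "Exam registration closes two weeks before the examination period.",
    "The library is open from 08:00 to 22:00 on weekdays.",
    "Erasmus applications must be submitted via the international office.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Return the institutional passages most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:top_k]]

context = retrieve("When does exam registration close?")
# A full system would now prompt a language model with this context,
# constraining its answer to institutional knowledge.
print("Context for the generator:", context)
```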

As artificial intelligence becomes further integrated into society, the technology will also feature as a natural part of managing building infrastructure, such as heating and cooling systems, and general logistics, from canteens to buying office supplies. Here, the ability to predict needs leads to better use of resources in general. Data-driven governance will also require investment in system integration and overall data management, and in anchoring their importance in the larger institutional culture. Discussions around developing smart buildings or a smart campus highlight many potential benefits in terms of more efficient use of buildings and resources, for example knowing which rooms are used and which stand mostly empty. However, there is a clear trade-off, as well as legal limits in terms of privacy, between the amount of data that can be collected and the level of intrusiveness and monitoring that staff and students will accept. Moreover, a connected campus also poses challenges in terms of security.

Research – artificial intelligence as a tool in large-scale modelling and automation of workflows

Researchers are applying artificial intelligence as a tool on several levels. As is often the case in the digital transformation of universities, the research field is much less unified in terms of institutional solutions than learning and teaching. Much happens at the level of individual research teams, which are largely free to choose how they apply artificial intelligence in their projects. At the same time, the potential for applying artificial intelligence in many research fields provides incentives for more interdisciplinary teams and structures in the form of centres for artificial intelligence.

Universities are clearly extremely important when it comes to research in artificial intelligence, developing new models and techniques within the field. Researchers here lay the foundations for the application of the technology in several areas.

For example, in fields that deal with large-scale modelling, artificial intelligence can be used as a tool for exploring models as well as for increasing efficiency in the research process. Artificial intelligence can be used across all disciplines: in astronomy, for instance, it is employed to work with large-scale models of the universe and to increase the efficiency of the computations with which the models are generated. Likewise, extremely complex problems in biochemistry, pharmaceutics or materials science can be solved using machine learning; the range of possible uses is limited mainly by human creativity.
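
One common pattern behind such efficiency gains is the surrogate model: a cheap learned approximation of an expensive simulation. The sketch below illustrates the idea under toy assumptions; the "simulation" is a stand-in function, not a real physics or chemistry code.

```python
# Surrogate modelling: a cheap learned stand-in for an expensive simulation,
# used to explore parameter space quickly. The "simulation" is a toy function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulation(x: np.ndarray) -> np.ndarray:
    """Placeholder for a costly computation (hours per run in practice)."""
    return np.sin(3 * x) + 0.5 * x**2

# Run the real simulation at only a handful of sampled points.
x_train = np.linspace(0, 2, 8).reshape(-1, 1)
y_train = expensive_simulation(x_train).ravel()

surrogate = GaussianProcessRegressor().fit(x_train, y_train)

# The surrogate predicts thousands of candidate settings in milliseconds,
# with uncertainty estimates to guide where to run the real simulation next.
x_query = np.linspace(0, 2, 1000).reshape(-1, 1)
y_pred, y_std = surrogate.predict(x_query, return_std=True)
best = x_query[np.argmin(y_pred)]
print(f"Most promising setting according to the surrogate: {best[0]:.3f}")
```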

On a more ambitious scale, integrating artificial intelligence systems into research workflows can automate large parts of the research process: scanning literature for hypotheses and methods, suggesting new hypotheses based on the literature and performing automated analyses. This requires several different uses of artificial intelligence, including generative artificial intelligence such as ChatGPT: reading and systematising research papers, generating new proposals, detecting patterns in experiments or using big datasets such as 'digital twins' of materials or complex systems. In fact, some research funding bodies, such as the German DFG, already explicitly accept the use of AI models to compile research proposals, while strictly prohibiting their use in review activities.
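
The skeleton below sketches such an automated research loop in schematic form. Every stage is a stub: a real system would plug in literature-mining models, generative proposal models and laboratory automation, and the loop would include the human oversight discussed below.

```python
# Schematic automated research loop. Every stage is a stub: a real system
# would plug in literature-mining models, generative proposal models and
# laboratory automation.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str
    supported: bool | None = None

def scan_literature(topic: str) -> list[str]:
    # Stand-in for an AI model that reads and systematises papers.
    return [f"finding about {topic} #1", f"finding about {topic} #2"]

def propose_hypotheses(findings: list[str]) -> list[Hypothesis]:
    # Stand-in for a generative model suggesting new hypotheses.
    return [Hypothesis(f"untested implication of '{f}'") for f in findings]

def run_experiment(h: Hypothesis) -> Hypothesis:
    # Stand-in for an automated analysis or a robotic laboratory run.
    h.supported = len(h.statement) % 2 == 0  # arbitrary placeholder outcome
    return h

def research_loop(topic: str) -> list[Hypothesis]:
    results = [run_experiment(h)
               for h in propose_hypotheses(scan_literature(topic))]
    # Human oversight belongs here: vet the results and decide next steps.
    return results

for h in research_loop("battery materials"):
    print(h)
```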

The same challenges as with other uses of artificial intelligence apply: the system needs to use the correct data sources in order not to give false information. For example, using ChatGPT without any restriction of its sources will sometimes produce nonsensical results and references to authors and articles that do not exist. There is a substantial need for human oversight, both to determine whether a proposed solution or hypothesis is viable and relevant and to decide on the next steps in the process. Moreover, generative artificial intelligence can generate false papers and studies, or even invent datasets, making scientific fraud possible on a mass scale. Using artificial intelligence brings huge advantages in terms of efficiency, beyond what would be humanly possible, but it also brings new human roles in terms of oversight and control. This is especially relevant when teaching students to use such methods in their studies, as lecturers expect students to evaluate the validity of results generated by AI tools, while students often lack the experience and breadth of perspective to do so.
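
One concrete oversight step is checking that references cited in an AI-generated text actually exist. The sketch below queries the public Crossref REST API for each DOI; it assumes network access, and a real pipeline would also compare titles and author names rather than trusting mere existence.

```python
# Oversight step: verify that DOIs cited in an AI-generated text resolve,
# here via the public Crossref REST API (network access assumed).
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref knows the DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

cited_dois = [
    "10.1038/nature14539",        # real: LeCun et al., 'Deep learning' (2015)
    "10.9999/made.up.reference",  # fabricated example
]
for doi in cited_dois:
    status = "found" if doi_exists(doi) else "NOT FOUND - check by hand"
    print(f"{doi}: {status}")
```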

[Figure: The technology-driven, accelerated research process shown as a circle, with automated elements of study, hypothesis proposal, automated laboratories for testing, assessment, reporting and new questions. Source: Pyzer-Knapp, E.O., Pitera, J.W., Staar, P.W.J. et al., 'Accelerating materials discovery using artificial intelligence, high performance computing and robotics', npj Computational Materials 8, 84 (2022)]

Universities are, of course, highly active in developing artificial intelligence and researching its uses. While this is mostly innocuous where it applies to materials testing or the optimisation of technical systems, the European regulation on artificial intelligence allows research into uses that are otherwise considered high risk or prohibited. This could, for example, be research into subliminal manipulation through artificial intelligence. Here, there must be adequate safeguards regarding security against misuse, research ethics and overall compliance with regulations on artificial intelligence.

Learning and teaching – artificial intelligence in assessment, curriculum development and examinations

The rapid massification of the use of generative artificial intelligence came with ChatGPT in early 2023. Many academics engaged in teaching saw this as highly disruptive, since ChatGPT provided a free service that could, in many cases, produce texts that would pass for, or even outperform, those of human candidates in a university exam, particularly in exams that test formal knowledge.

[Figure: Bar chart showing how GPT-4 outperforms GPT-3 on many standardised tests; exam results of various versions of GPT. Source: https://openai.com/research/gpt-4]

Some higher education institutions, notably Sciences Po in France, banned the tool in early 2023. Others, such as the University of Namur in Belgium, actively promoted experimentation with ChatGPT in learning and teaching. Despite scepticism in some corners of the community, more institutions and individuals embraced a constructive approach, realising that generative artificial intelligence is already a part of society and that the role of universities is to encourage learners and educators to use it critically and responsibly. This includes creating awareness of the limits of generative artificial intelligence, for example the tendency of these programmes to 'hallucinate', i.e. to make up convincing but factually wrong statements.

There are still debates about the disruptive nature of generative artificial intelligence. Some focus on the immediate skills required to use and interact with it (prompt engineering), while others have criticised universities for focusing too much on measuring and promoting skills that can be automated by artificial intelligence and too little on the skills that are more unique to the human way of thinking. There is also a risk in terms of inequality, as new tools, for example ChatGPT 4, can come at a high cost that might not be affordable for all learners.

As described above, generative artificial intelligence has important applications in research. Newer models allow the user to limit the data used to generate results, considerably lowering the risk that the system will 'hallucinate', and they can summarise large quantities of research literature in a precise manner. ChatGPT 4 can, for instance, be used to find materials with certain chemical attributes if given a large body of chemical research.

Apart from the discussion about the merits of generative artificial intelligence, there has also been discussion about using artificial intelligence for assessment. Digital assessment gained increased attention during the Covid-19 pandemic, and it has great potential to reduce the large amount of time that academic staff spend on assessment and exams. Machine learning can also help to test understanding and to quickly identify weaknesses or gaps in knowledge, better preparing students for an assessment where basic knowledge can be assumed. Such tools would need extra safeguards to comply with the EU regulation on artificial intelligence, which defines some uses of artificial intelligence in education as high risk.
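
As a rough illustration of machine-assisted marking, the sketch below scores short answers by textual similarity to a model answer and routes low-confidence cases to a human marker. This is a deliberately crude stand-in; a deployable system would need far stronger models and the documented human oversight that the EU regulation requires.

```python
# Crude sketch of machine-assisted marking: short answers are scored by
# textual similarity to a model answer; low-confidence cases go to a human.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

model_answer = "Photosynthesis converts light energy into chemical energy."
student_answers = [
    "Plants turn light energy into chemical energy via photosynthesis.",
    "It is how plants breathe at night.",
]

vec = TfidfVectorizer().fit([model_answer] + student_answers)
ref = vec.transform([model_answer])

for answer in student_answers:
    score = cosine_similarity(vec.transform([answer]), ref)[0, 0]
    verdict = "auto-accept" if score > 0.5 else "route to human marker"
    print(f"similarity {score:.2f}: {verdict} -> {answer!r}")
```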

Future projections – understanding the limits and mitigating the risks of artificial intelligence

It is still much too early to assess the actual impact of generative artificial intelligence on education and on society in general. We could imagine a future in which large-scale implementation of this technology renders much of what is taught at universities obsolete, or artificial intelligence could simply become a collection of tools that enhance the efficiency of critically thinking individuals in a traditional university education. One question regarding generative artificial intelligence concerns digital sovereignty. The technology is mostly governed by large American companies, as only they appear able to afford the expensive training procedures involving huge amounts of data, yet they are not able or willing to subject that data to strict IPR control. There are also concerns that these companies can and do impose constraints on what kinds of texts can be produced, as well as concerns about their use of user data. Conscious of these risks, the University of Oslo hosts a version of ChatGPT on its own servers, where the user interface allows for anonymous use, compliant with the privacy policies of the university. This service is also made available to other higher education institutions and the non-profit sector. More radically, there is a Dutch initiative to build its own language model in cooperation with the academic sector (GPT-NL), which would be more transparent and allow for experimentation.

Artificial intelligence will fundamentally change learning and teaching both in terms of methods and content, and it will also change the research process. Part of that change will be to continuously develop the understanding of the uses of the technology and how humans can and must interact to guide and oversee the processes. This will certainly be an important task for universities in the future.