Artificial Intelligence Competence of K-12 Students Shapes Their AI Risk Perception: A Co-occurrence Network Analysis
Abstract.
As artificial intelligence (AI) becomes increasingly integrated into education, understanding how students perceive its risks is essential for supporting responsible and effective adoption. This research examined the relationships between perceived AI competence and perceived risks among Finnish K-12 upper secondary students (n = 163) using co-occurrence network analysis. Students reported their self-perceived AI competence and concerns related to AI across systemic, institutional, and personal domains. The findings showed that students with lower competence emphasized personal and learning-related risks, such as reduced creativity, lack of critical thinking, and misuse, whereas higher-competence students focused more on systemic and institutional risks, including bias, inaccuracy, and cheating. These differences suggest that students’ self-reported AI competence is related to how they evaluate both the risks and opportunities associated with artificial intelligence in education (AIED). The results highlight the need for educational institutions to incorporate AI literacy into their curricula, provide teacher guidance, and inform policy development to ensure personalized opportunities for AI use and its equitable integration into K-12 education.
1. Introduction
With the rise of artificial intelligence (AI), especially generative AI (GenAI), its influence on society and education is increasingly recognized. However, the integration of artificial intelligence in education (AIED) is not straightforward; on one hand, it has been shown to improve educational outcomes (Tlili2025-hj), but, on the other hand, there are also risks that can hinder the adoption (fu2025knowledgeable; Nemorin2023-jq; Topali2025-sl; Colonna2025-nd). Despite the general view of AI technologies having a positive impact on student learning, their effect particularly on students’ agency and self-regulation is understudied (darvishi2024impact). This has raised concerns about how the integration of AI tools affects students’ learning experiences, as well as their impact on knowledge development and skill acquisition (Ukwandu2025-pj; Casal-Otero2023-zv).
AI is not perceived solely as harmful or beneficial, but simultaneously as both a risk and an opportunity (schwesig2023using). However, perceptions and preferences regarding AI-related risks have received limited scholarly attention (wei2025understanding), and the relationship between risk perceptions and individuals’ willingness to adopt AI-based applications remains understudied (schwesig2023using). Given that AI is affecting education and how teaching will be organized (e.g., Heilala2025-wg), it is crucial to explore further the types of risks that might hinder the adoption of AI tools in learning (sikstrom2024pedagogical). Research on risk perception (siegrist2020risk), especially in the context of AI, is essential for designing effective strategies that support informed decision-making (krieger2024systematic) and promote AI literacy (Casal-Otero2023-zv). Because "risk is relative to the observer" (Kaplan1981-ec, p. 12), individual characteristics shape which factors are perceived as risks (wei2025understanding). Thus, this exploratory study aims to answer two research questions: First, which factors do upper secondary K-12 students perceive as risks? Second, how does K-12 students’ self-reported AI competence shape their perceptions of potential AI-related risks?
Finland has been one of the early advocates for adopting digital education solutions and practices (Finnish National Agency for Education: Exploring Finnish Digital Education). The same applies to AIED, as exemplified by the Finnish National Agency for Education, which issued new guidelines for the use of AI in teaching and learning in Spring 2025 (Finnish National Agency for Education: Artificial intelligence in education – legislation and recommendations). AI technologies are novel to K-12 education (e.g., Filiz2025-lu), and there is great demand for understanding how they should be integrated into teaching and learning (karan2023potential). Thus, this research targets general upper secondary education students, as K-12 education has received limited attention in AIED research (Akgun2022-kg; Filiz2025-lu; Heilala2025-wg). The results contribute to understanding the factors that upper secondary school students perceive as AI-related risks and how their self-reported AI-related competence influences these risk perceptions.
2. Background
AI technologies continue to reshape digital society and teaching practices, highlighting the need to understand the factors that facilitate or hinder their use. AI systems offer many benefits, but their integration into K–12 education also poses challenges (yoder2020gaining). In educational settings, research on AI-related risks has gained popularity, and, for example, six major risk areas have been identified through a systematic literature review in K-12 education: privacy and autonomy risks, AI biases, accuracy and functional risks, deepfakes and FATE—fairness, accountability, transparency, and ethics risks, social knowledge and skill-building risks, and risks associated with shifting the teacher’s role (karan2023potential). From the student perspective, the issues may raise several questions, for example: What happens to my sensitive data? Can I trust AI? How capable and accessible is AI? Where and how can I use AI? Am I allowed and able to use AI? Why do we still need teachers?
Risks regarding the use of AI occur at multiple levels. They include short- and long-term risks: technology-related risks (e.g., privacy breaches, cyber intrusions, and the inability to control malicious AI) (Zhu2025-wq; Li2023-lq), educational risks (Zhu2025-wq; Li2023-lq), economic and societal risks (e.g., job displacement) (Ghotbi2022-tn; Zhu2025-wq), and ethical risks (e.g., insufficient values and regulations) (schwesig2023using; Zhu2025-wq; Colonna2025-nd). Specifically, systemic risks refer to broad issues tied to the trustworthiness and functioning of AI itself (e.g., ai_act; Amoozadeh2024-ah; karan2023potential), such as bias, inaccuracy, FATE issues (Memarian2023-ke; Akgun2022-kg; Bissessar2023-df; Zhu2025-wq; Li2023-lq; Halton2025-jx; Filiz2025-lu; karan2023potential), or resource consumption (e.g., Galaz2021-nh). Institutional risks, in turn, pertain to rules, policies, and fairness within the educational system, including cheating with the aid of AI (Cavazos2025-zh), the unequal advantages of using AI (e.g., the AI divide and the digital divide) (Hammerschmidt2025-ho; Bissessar2023-df; Zhu2025-wq), or inconsistent guidelines from teachers and schools regarding the use of AI (Corbin2025-xy). The use of AI also involves personal risks that capture individual-level concerns, such as balancing between productivity and misconduct (Corbin2025-xy; Zhu2025-wq; Li2023-lq; Halton2025-jx), dependence on AI (e.g., metacognitive laziness (Fan2025-cg)), addiction (Al-Obaydi2025-bz), as well as potential negative effects on learning, creativity, critical thinking, and personal privacy (Bissessar2023-df; Zhu2025-wq; Filiz2025-lu; karan2023potential).
People’s perceptions of AI-related risks and opportunities are positively associated with their likelihood of adopting AI (schwesig2023using). Risk perception refers to how individuals process information or direct observations about potential hazards and risks of a technology and form judgments and beliefs about its seriousness, likelihood, and acceptability (Renn2013-ff). Anticipated risks and concerns of AI use can be considered as negative outcome expectancies. An outcome expectancy ”is defined as a person’s estimate that a given behavior will lead to certain outcomes” (Bandura1977-oq, p. 193). Negative outcome expectancies (i.e., perceived or experienced risks associated with using AI) can lead to reduced acceptance of AI technologies (e.g., Yue2023-ib), and initial negative experiences may be difficult to overcome (e.g., Henry1995-vt). Beyond just identifying what people are concerned about and why, research on risk perceptions informs the likelihood of encountering these risks, risk communication, and risk management (siegrist2020risk). Furthermore, awareness of potential risks is particularly important in educational settings, because without a clear understanding of AI’s roles and functions, educators struggle to implement AI effectively in learning activities (hwang2020vision).
3. Materials and methods
3.1. Data collection and participants
The research was conducted in a Finnish general upper secondary school (ISCED level 3) with approximately 400 students. The school has been engaged in a two-year development project on the educational use of GenAI, funded by the Finnish National Agency for Education. As part of the project, students were introduced to AI in academic counseling sessions, and the school has recently revised its code of conduct to explicitly address the use of AI during studies.
The questionnaire was administered during classes, and students participated voluntarily, with the option to withdraw at any time without any consequences. An information sheet and a consent form were sent to them two weeks in advance, allowing time to familiarize themselves with the study. In addition, they had the opportunity to ask the researchers for further clarification. A total of 163 students responded to the questionnaire, of whom 47% identified as women, 51% as men, and 2% as non-binary. The median year of birth of the respondents was 2008 (min 2006, max 2008), and approximately half were first-year students (53% first year, 34% second year, 13% third or fourth year).
3.2. Instruments
3.2.1. AI-related concerns and risks
The AI-related concerns instrument (Table 2) consisted of 14 binary response items capturing students’ perceptions of potential concerns and risks associated with using AI in schoolwork. Systemic risks included issues related to the functioning of AI, such as bias, inaccuracy, and resource utilization. Institutional risks covered concerns related to rules, policies, and fairness in education, including cheating, unfair advantage, teacher rules, school policies, and copyright. Personal risks reflected individual-level concerns about learning and wellbeing, such as reduced critical thinking, creativity, and learning, as well as fears of misuse, addiction, and privacy violations. Each item was presented as a binary checkbox (0 = not selected / No, 1 = selected / Yes), allowing students to indicate which concerns they considered relevant. Thus, each item can be considered as a binary risk perception variable (e.g., Lee2015-dw; Lund2014-qo).
3.2.2. Artificial intelligence competence
The AI competence instrument (Table 2) was designed to assess students’ self-reported ability to use AI tools in everyday and educational contexts. It consisted of four items capturing skills in general AI use and in everyday applications (e.g., Casal-Otero2023-zv), personalized learning support (e.g., Wu2025-jq), and information management (e.g., Chee2024-uo), each rated on a 5-point Likert scale. Higher scores on the instrument indicate greater perceived competence in applying AI tools effectively. A confirmatory factor analysis (CFA) utilizing polychoric correlations and robust diagonally weighted least squares (DWLS) estimation (e.g., HolgadoTello2010-be; Li2021-xf) supported a unidimensional structure for the four-item AI competence instrument. Model fit ($\chi^2(2)$ = 2.56, p = .279; CFI = 1.00; TLI = .999; RMSEA = .041; SRMR = .019) was excellent with respect to conventional criteria (e.g., Hu1999-do), indicating that the single-factor model adequately represented the data. The AI competence instrument also demonstrated excellent reliability, with a lower-bound reliability estimate of 0.89, 95% CI [0.87, 0.92].
AI competence sum scores ranged from 4 to 20 (M = 16.8, SD = 2.8, Mdn = 17). For a descriptive overview of AI-related concerns and risks with respect to competence, a median split (Mdn = 17) of AI competence score was used to assign students into low- and high-competence groups, allowing for a comparison of risk perceptions between students with relatively lower and higher self-reported AI competence. Figure 1 describes the distribution of AI-related concerns among students, showing overall prevalence by binary measurement item and how endorsements were split between low- and high-competence groups.
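As a rough illustration, the median split described above can be sketched as follows. The rule for scores exactly at the median is an assumption (the text does not specify how ties were handled), and the function name is illustrative:

```python
from statistics import median

def median_split(scores):
    """Assign each respondent to a low- or high-competence group
    using the sample median as the cut point.

    Assumption: scores equal to the median fall into the low group;
    the study does not report how ties at the median were handled.
    """
    m = median(scores)
    groups = ["low" if s <= m else "high" for s in scores]
    return groups, m
```

For example, `median_split([16, 17, 18])` yields the groups `["low", "low", "high"]` with a median of 17.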
3.3. Co-occurrence network analysis
Applying a complexity science approach, concerns and risks were analyzed as a complex system (e.g., Sturmberg2021-tu; Stella2022-he; Borsboom2022-ms), where potential risks were considered as components of a broader web of beliefs (Porot2021-hj). A co-occurrence network (e.g., Bodner2022-mv; Cottica2020-qk) was used to capture the relational structure of the concerns and risks that students hold about AIED. The co-occurrence network graph (Figure 2) illustrates how the co-occurrence of students’ AI-related concerns varies with respect to their AI competence, highlighting patterns in how risks are framed and associated with one another. The analysis approach draws from Epistemic Network Analysis (ENA) (Shaffer2016-zg) and network psychometrics (Borsboom2022-ms) by modeling co-occurring concerns as interconnected structures (e.g., Gao2025-jl; Pechey2012-zj), where edge weights capture conditional associations between items and network metrics quantify the relative importance of concerns within the overall system.
First, binary response items were transformed into a co-occurrence matrix, where each row represented a respondent and each column a pairwise combination of concerns. Cell values were coded as 0 or 1 to indicate whether a co-occurrence was present or absent for the respondent. Before the co-occurrence graph was constructed, a per-respondent normalization (e.g., zaki2020data, p. 8) was applied:

$$\hat{x}_{si} = \frac{x_{si}}{\sum_{j=1}^{n} x_{sj}},$$

where $\mathbf{x}_s$ denotes the vector of concerns selected by student $s$, $x_{si}$ is the value of item $i$ in that vector, and $n$ is the total number of items. Normalization was applied to control for differences in the number of concerns students selected, ensuring that co-occurrence patterns reflect relative rather than absolute frequencies and preventing overrepresentation of students who endorsed many concerns.
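As a minimal sketch in plain Python, the transformation described above could look as follows. It assumes each respondent's answers arrive as a 0/1 list over the items and reads the per-respondent normalization as dividing each pairwise co-occurrence indicator by the number of concerns that respondent endorsed (one plausible reading of normalizing to relative frequencies); the function name is illustrative:

```python
from itertools import combinations

def normalized_cooccurrence(responses):
    """Build per-respondent co-occurrence rows from binary item responses.

    responses: list of 0/1 item vectors, one per respondent.
    Returns (pairs, rows): `pairs` lists the item-index pairs (the columns
    of the co-occurrence matrix), and `rows` holds one normalized
    co-occurrence vector per respondent.

    Assumption: each co-occurrence indicator is divided by the number of
    concerns the respondent endorsed, so rows reflect relative frequencies.
    """
    n_items = len(responses[0])
    pairs = list(combinations(range(n_items), 2))
    rows = []
    for x in responses:
        endorsed = sum(x)  # how many concerns this respondent selected
        row = [(x[i] * x[j]) / endorsed if endorsed else 0.0
               for i, j in pairs]
        rows.append(row)
    return pairs, rows
```

A respondent who endorsed only two concerns thus contributes a single pairwise entry of 0.5, while a respondent who endorsed every item contributes many small entries, which keeps heavy endorsers from dominating the network.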
Second, to account for possible confounding influences of gender and year group, regression-adjusted edge weights were estimated by regressing each normalized edge weight on the AI competence score while controlling for these variables using ordinary least squares (OLS) regression:

$$w_{e} = \beta_0 + \beta_1\,\text{Competence} + \beta_2\,\text{Gender} + \beta_3\,\text{YearGroup} + \varepsilon,$$

where $w_e$ is the normalized co-occurrence weight for edge $e$. The resulting regression coefficient for competence ($\beta_1$) was taken as the undirected edge weight, such that positive values indicate increasing co-occurrence with competence and negative values indicate decreasing co-occurrence. Lastly, to highlight the most salient co-occurrences, edges with absolute values below the global 75th percentile were pruned (e.g., Cottica2020-qk), and the remaining network was visualized with node sizes scaled in proportion to the maximum absolute eigenvector centrality across positive and negative subgraphs. Eigenvector centrality (Bonacich2007-qi; Castro2024-ym) quantifies the relative importance of each node (i.e., concern) within the full co-occurrence network, weighting nodes more highly if they are strongly connected to other highly central concerns.
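The estimation, pruning, and centrality steps above can be sketched in plain Python under simplifying assumptions: the OLS slope below uses a single predictor (the study additionally adjusted for gender and year group), pruning keeps edges at or above the global 75th percentile of absolute weight via a crude nearest-rank cut, and centrality is computed by power iteration on a nonnegatively weighted subgraph (in the analysis, the signed network would first be split into positive and negative subgraphs). All function names are illustrative:

```python
from math import sqrt

def ols_slope(y, x):
    """Slope of a simple OLS regression of y on x.

    Simplification: covariates (gender, year group) are omitted here.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

def prune_edges(weights, pct=0.75):
    """Keep edges whose |weight| reaches the given global percentile.

    weights: {(node_a, node_b): signed_weight}. Nearest-rank percentile.
    """
    mags = sorted(abs(w) for w in weights.values())
    cut = mags[int(pct * (len(mags) - 1))]
    return {e: w for e, w in weights.items() if abs(w) >= cut}

def eigenvector_centrality(nodes, edges, iters=500, tol=1e-10):
    """Power iteration on an undirected, nonnegatively weighted graph.

    edges: {(node_a, node_b): weight}. Returns unit-length centralities.
    """
    v = {u: 1.0 for u in nodes}
    for _ in range(iters):
        nxt = {u: 0.0 for u in nodes}
        for (a, b), w in edges.items():
            nxt[a] += w * v[b]
            nxt[b] += w * v[a]
        norm = sqrt(sum(x * x for x in nxt.values())) or 1.0
        nxt = {u: x / norm for u, x in nxt.items()}
        if max(abs(nxt[u] - v[u]) for u in nodes) < tol:
            return nxt
        v = nxt
    return v
```

On a triangle with equal edge weights, for instance, all three nodes receive the same centrality of about 0.577 (i.e., $1/\sqrt{3}$), reflecting their symmetric positions in the network.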
4. Results
The main concerns receiving the most mentions were AI being inaccurate and biased, AI use for cheating, as well as the risks of learning less and becoming less critical and creative (Figure 1). Copyright violations and the lack of consistent school rules were the least frequently mentioned concerns. This pattern suggests that students were primarily concerned with the direct impacts of AI on their own learning quality and fairness, while institutional and legal concerns were generally perceived as less critical.
Students in the low-competence group (Figure 1) reported a greater number of concerns across nearly all categories compared to their high-competence peers. They more frequently related AI use to reduced critical thinking, reduced creativity, reduced learning, and fear-based risks, such as misuse and addictive use. Accuracy-related issues were also more common in the low-competence group. Differences in concerns about privacy, copyright, and institutional rules were less pronounced, and they received the fewest mentions overall. In general, the low-competence group of students expressed broader and more numerous concerns, with a strong emphasis on personal learning risks and fear-based issues, while high-competence students reported fewer concerns overall.
The co-occurrence network analysis (Figure 2) revealed systematic differences in how students with higher AI competence connected different concerns. Stronger positive associations (edges increasing with competence) were observed (Table 1), particularly between systemic and institutional risks, such as AI_cheating—Biased, Biased—Inaccurate, and AI_cheating—Teachers_rules. Other notable edges included links between bias and fairness-related concerns (Biased—Teachers_rules, Biased—Resources), as well as connections to personal and learning risks (AI_cheating—Less_creative, Less_learning—Privacy). In contrast, several edges decreased in strength with competence. The strongest negative associations were found for Less_creative—Less_critical, Less_creative—Resources, and Less_creative—Unfair_adv, suggesting that lower-competence students were more likely to connect creativity-related concerns with fairness and resource issues. Additional decreases were observed in connections involving Fear_misuse (e.g., with Inaccurate and Less_learning). Overall, these results indicate that higher-competence students tended to emphasize associations between systemic/institutional concerns (e.g., bias, inaccuracy, and school rules), whereas lower-competence students more strongly connected creativity- and misuse-related concerns.
Eigenvector centrality (EC) values in Figure 2 showed clear differences in the relative importance of concerns depending on AI competence. In this context, the sign of the eigenvector centrality indicates whether a concern is more structurally central in the co-occurrence network of higher-competence students (positive values) or lower-competence students (negative values). The most central concerns in the high-competence network were AI_cheating (EC = 0.53) and Biased (EC = 0.52), followed by Teachers_rules (EC = 0.36) and Inaccurate (EC = 0.34), suggesting that students with higher competence perceive systemic and institutional issues as tightly interconnected. In contrast, the most central concerns in the low-competence network were Less_creative (EC = -0.53) and Less_critical (EC = -0.49), indicating that lower-competence students primarily frame risks around threats to their own learning and cognitive development. Other concerns, such as Fear_misuse (EC = -0.38), Resources (EC = -0.34), and Unfair_adv (EC = -0.26), also appeared more central in the low-competence group, pointing to a broader emphasis on personal vulnerability and fairness.
| Edge | Direction | β |
|---|---|---|
| AI_cheating — Biased | High > Low | 0.022 |
| Biased — Inaccurate | High > Low | 0.017 |
| AI_cheating — Teachers_rules | High > Low | 0.015 |
| Less_creative — Less_critical | Low > High | -0.015 |
| Biased — Teachers_rules | High > Low | 0.012 |
| Biased — Resources | High > Low | 0.012 |
| AI_cheating — Less_creative | High > Low | 0.012 |
| Less_learning — Privacy | High > Low | 0.011 |
| AI_cheating — Less_learning | High > Low | 0.009 |
| AI_cheating — Inaccurate | High > Low | 0.008 |
| AI_cheating — Resources | High > Low | 0.008 |
| AI_cheating — Fear_addictive | High > Low | 0.008 |
| Less_creative — Resources | Low > High | -0.008 |
| Less_creative — Unfair_adv | Low > High | -0.007 |
| Inaccurate — Less_learning | Low > High | -0.007 |
| Resources — Unfair_adv | Low > High | -0.007 |
| Biased — Fear_misuse | Low > High | -0.006 |
| Fear_misuse — Less_learning | Low > High | -0.006 |
| Fear_misuse — Inaccurate | Low > High | -0.006 |
| Inaccurate — Teachers_rules | High > Low | 0.006 |
| Biased — Less_critical | High > Low | 0.006 |
| Less_learning — Teachers_rules | High > Low | 0.006 |
| AI_cheating — Schools_policy | High > Low | 0.006 |
5. Discussion
This study aimed to gain a deeper understanding of the perceptions of AI-related risks that raise concerns among Finnish upper secondary students. The study sheds light on which of the factors students perceive as main AI-related risks, and how students’ self-assessed AI competence influences their perception of these risks. The implications of these exploratory results are discussed, and directions for future research are outlined.
The findings suggest that students primarily frame AI-related risks in terms of trust, personal learning, and fairness, while placing less emphasis on broader institutional or legal dimensions. Students were particularly concerned about two systemic risks, AI’s inaccuracy and bias, and personal risks related to learning, specifically the reduction of critical thinking and creativity. From institutional risks, students were most concerned about the use of AI for cheating. Taken together, the findings suggest that students are aware of AI’s limitations and recognize the potential negative impact on learning. The results of this study further indicate that students’ perceptions of risk were linked to their perceived AI competence. Higher-competence students primarily emphasize systemic and institutional risks, while lower-competence students are more concerned with personal and learning-related risks. Furthermore, lower-competence students reported more concerns spanning nearly all presented risk types.
However, AI-related risks concern multiple stakeholders. For example, teachers often worry about unintended consequences and the potential reduction of human interaction (Alwaqdani2024-yc; Halton2025-jx; karan2023potential), and policymakers are concerned about securing an AI-ready workforce and the adoption of technology (Schiff2022-bl). Thus, AI-related risks should be considered at different levels (i.e., classroom, institutional, and policy-making) to ensure a comprehensive approach to AI adoption in education.
At the classroom level, teachers play a central role in mediating how students engage with AI tools (e.g., Filiz2025-lu). Lower-competence students more strongly connected fears of misuse and addiction with reduced learning, creativity, and critical thinking. Teachers could address these concerns by scaffolding AI use, creating safe spaces for experimentation, and openly discussing both the risks and opportunities associated with AI. Students with higher competence showed stronger links between concerns about cheating, bias, teacher rules, and systemic issues such as accuracy and resources. These students may benefit from clear and transparent guidelines on the acceptable use of AI, alongside the development of evaluative skills for critically assessing AI outputs. In practice, teachers need to move beyond simply allowing or prohibiting AI use and instead actively integrate it to enhance human creativity and reasoning (e.g., Brynjolfsson2022-qc). Self-efficacy beliefs are suggested to mediate AI risk awareness (Wu2025-jq); thus, these pedagogical practices may help mitigate negative outcome expectancies while strengthening students’ capabilities for the responsible use of AI.
At the institutional level, the findings highlight the need to explicitly incorporate AI literacy into the curriculum to support the effective implementation of AIED. This study showed that students’ perceptions of their AI-related skills shape how they perceive AI risks, aligning with prior research that has established a link between AI self-efficacy and risk awareness (Wu2025-jq). Perception of risk is dependent on knowledge (Kaplan1981-ec), and thereby, without systematic and equitable teaching of AI literacy, disparities in knowledge and skills may widen, leading to unequal opportunities to benefit from AI tools (i.e., AI divide) (e.g., Zhu2025-wq; Hammerschmidt2025-ho). This echoes earlier debates surrounding the introduction of the internet and search engines into schools: initial fears of plagiarism and misinformation eventually led to the integration of digital and information literacy into curricula (Walraven2009-rd; Ma2007-vn). A similar transition is now needed for AI, where AI literacy should be integrated into curricula (Casal-Otero2023-zv) by considering students’ moral development (Chee2024-uo), and treated as a transversal skill (e.g., Bauer2025-ix; Rebelo2025-en).
At the policy level, policymakers should advance curriculum reform and teacher agency while promoting AI and data literacies (Ifenthaler2024-nr) and ethical practices (Schiff2022-bl) to ensure equitable and effective adoption of AIED (Chee2024-uo). The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) (ai_act) stresses the importance of ensuring sufficient AI literacy among stakeholders, enabling them to make informed decisions about AI systems (e.g., Colonna2025-nd). From a future perspective, failing to institutionalize AI literacy and competence building may leave education systems unprepared for future technological shifts. Research-based pedagogical interventions and guided use of AI in various educational contexts are therefore essential for developing the competence that enables safe, critical, and creative application of AI.
5.1. Limitations and future research
This research has certain limitations that should be acknowledged. First, the data were collected in a single Finnish upper secondary school engaged in an AI development project, which may limit the generalizability of the findings to other educational contexts and countries. Second, students’ AI competence was measured through self-reports, which may not accurately reflect their actual skills or AI usage practices. Third, the cross-sectional design prevents causal interpretations regarding the relationship between competence and risk perceptions. Lastly, the risks outlined in this research are not an exhaustive list, and other risk perceptions among the students can exist and emerge.
Future research could investigate how risk perceptions (i.e., siegrist2020risk) among more diverse stakeholders in education intersect and how these perceptions influence their educational experiences. Particular attention could be given to the roles of AI competence and risk perceptions in relation to student agency (e.g., Heilala2022-xt; Nemorin2023-jq), as competence may determine whether students approach AI as a supportive tool or perceive it as a limiting factor to autonomy and learning. Also, examining how students balance risks against potential opportunities (e.g., Halton2025-jx) can provide insights into the conditions under which AI is adopted constructively.
6. Conclusion
This research explored Finnish upper secondary students’ perceptions of AI-related risks and examined how these perceptions are shaped by their self-reported AI competence. The findings showed that students with lower competence tend to emphasize personal and learning-related risks, while higher-competence students focused more on systemic and institutional concerns. These differences suggest that AI competence plays a crucial role in how students evaluate both the risks and opportunities associated with AI in education. By highlighting the interplay between competence and risk perceptions, the research emphasizes the importance of promoting AI literacy in educational institutions to enable learners to engage critically, responsibly, and productively with AI tools.
| Finnish | English | Abbreviation |
|---|---|---|
| Opettajat voivat pitää tekoälyn käyttöä huijaamisena. | Teachers may consider the use of AI as cheating. | AI_cheating |
| Tekoälyn käyttö antaa joillekin oppilaille epäreilun edun, jos muut eivät käytä sitä. | The use of AI gives some students an unfair advantage if others do not use it. | Unfair_adv |
| Tekoäly tuottaa virheellistä tietoa. | AI produces inaccurate information. | Inaccurate |
| Tekoäly tuottaa vinoutunutta tietoa. | AI produces biased information. | Biased |
| Tekoälyn käyttö voi vähentää omaa kriittistä ajatteluani. | AI use may reduce my critical thinking. | Less_critical |
| Tekoälyn käyttö voi vähentää luovuuttani. | AI use may reduce my creativity. | Less_creative |
| Tekoälyn käyttö voi vähentää oppimistani. | AI use may reduce my learning. | Less_learning |
| Pelkään, että vahingossa syyllistyisin väärinkäytöksiin. | I fear I might accidentally misuse AI. | Fear_misuse |
| Pelkään, että käyttäminen olisi minulle liian koukuttavaa. | I fear using AI would be too addictive for me. | Fear_addictive |
| Henkilökohtaisia tietojani voi päätyä vääriin käsiin. | My personal data may fall into the wrong hands. | Privacy |
| Tekoälyn tuottama sisältö rikkoo tekijänoikeuksia. | AI-generated content violates copyright. | Copyright |
| Tekoälyn käyttö kuluttaa luonnonvaroja. | AI use consumes natural resources. | Resources |
| Opettajilla ei ole yhtenäisiä sääntöjä tekoälyn käytöstä opiskelussa. | Teachers do not have consistent rules on the use of AI in studying. | Teachers_rules |
| Kouluilla ei ole ajantasalla olevia käytäntöjä tekoälyn käytöstä opiskelussa. | Schools do not have up-to-date policies on the use of AI in studying. | Schools_policy |
| # | Finnish | English |
|---|---|---|
| 1 | Minulla on riittävät taidot käyttää tekoälytyövälineitä. | I have sufficient skills to use AI tools. |
| 2 | Osaan käyttää tekoälytyökaluja niin, että ne helpottavat arkeani. | I can use AI tools in ways that make my everyday life easier. |
| 3 | Osaan hyödyntää tekoälyä omassa oppimisessani. | I can make use of AI in my own learning. |
| 4 | Osaan hyödyntää tekoälytyökaluja tiedon etsimisessä. | I can use AI tools for searching information. |
Acknowledgements.
Support from the Research Council of Finland, under Grant No. 353325, is gratefully acknowledged.

Declaration of Generative AI and AI-assisted technologies in the writing process
During the preparation of this work, the author(s) used Grammarly and ChatGPT 5 to proofread and to provide suggestions to enhance the readability and grammatical clarity of the manuscript. After using these tools, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.