Enhancing learning experiences: EEG-based passive BCI system adapts learning speed to cognitive load in real-time, with motivation as catalyst

1 Introduction

Computer-Based Learning (CBL) is an educational approach that uses computer software to deliver, assist, and enhance the learning process (Grizioti and Kynigos, 2020). The CBL environments that learners use can take multiple forms, such as programs, applications, tools, and platforms (Grizioti and Kynigos, 2020). CBL provides students with instant feedback, individualized learning paths, and greater flexibility, all of which can increase student engagement and comprehension (Grizioti and Kynigos, 2020; Mertens et al., 2022; Van der Kleij et al., 2015). As a result, CBL is increasingly used in educational programs as an important complement to conventional classroom teaching or as a stand-alone pedagogical method (Grizioti and Kynigos, 2020).

However, offering access to CBL does not guarantee a successful learning experience. For example, online courses allow many more students to enroll because their capacity is not limited by the number of physical seats available in a classroom. Moreover, their accessibility makes it possible to take a course at any time, from anywhere in the world. Because of this greater capacity and the diversity of enrolled students, the vast majority of online courses have been developed using the classic “one size fits all” approach, with little to no consideration of individual differences and learning abilities (Tekin et al., 2015; Wang and Lehman, 2021). In addition, the distance between the teacher and the students in CBL makes the assessment of learning needs and abilities even more difficult (Tekin et al., 2015). As a result, this can lead to low levels of learning engagement (Bawa, 2016; Dumford and Miller, 2018) and motivation (Ferrer et al., 2022; Fini, 2009; Mamolo, 2022; Wang and Lehman, 2021) among learners.

The need to tailor the learning experience to the individual learner has been observed, mentioned, and studied many times in the current literature (Klašnja-Milićević et al., 2011; Mutlu-Bayraktar et al., 2019; Tekin et al., 2015; Wu et al., 2020). In educational psychology, the concept of the Zone of Proximal Development (ZPD) developed by Lev Vygotsky provides the theoretical foundations that support personalized learning (Chaiklin, 2003; Tetzlaff et al., 2021; Vygotsky and Cole, 1978). This concept emphasizes that each learner is at a different point in their cognitive development. According to Vygotsky, the ZPD represents the set of tasks or skills that a learner cannot yet perform alone but can perform with assistance (Vygotsky and Cole, 1978). Learning is not encouraged by tasks that are too simple or already within the scope of one’s current abilities, which leads to boredom (Vygotsky and Cole, 1978). Conversely, no learning occurs when tasks are overly complex and frustrating, exceeding one’s abilities (Vygotsky and Cole, 1978). Thus, keeping a learner within their ZPD provides the ideal level of challenge to promote growth and development, which can be further enhanced by personalized support and guidance to improve academic performance over traditional “one-size-fits-all” teaching methods (Alamri et al., 2021).

Complementary to the ZPD, the concept of cognitive load is important for understanding and personalizing learning experiences (Mutlu-Bayraktar et al., 2019; Sweller, 2020; van Merriënboer and Ayres, 2005). Cognitive Load Theory (CLT) examines human cognitive architecture and provides insight into how learners process and retain information in memory (Curum and Khedo, 2021; Sweller, 1988; Sweller et al., 1998; Wouters et al., 2008). This theory considers the interplay between working memory’s limited capacity and long-term memory (Kalyuga and Liu, 2015; Mutlu-Bayraktar et al., 2019). It defines cognitive load as the mental workload required to perform a learning task and emphasizes the importance of managing the mental effort required for effective learning (Kalyuga and Liu, 2015; Zhou et al., 2017a). Thus, performing a learning task that requires too much or too little mental effort will lead to less-than-optimal learning experiences and poor performance (De Jong, 2010). In a CBL environment, the ZPD can serve as a tool to tailor educational tasks and support to suit the learner’s abilities, helping maintain cognitive load at an optimal level while learning. Unfortunately, current CBL environments only consider the learner’s perceived cognitive load as a global design consideration, disregarding the evolution of their objective cognitive state, which would allow instructions to be fully tailored to their abilities (Gerjets et al., 2014; Sweller, 2020). One solution to this problem is the real-time measurement of cognitive load through the electrical activity of the brain using an Electroencephalogram (EEG)-based Brain-Computer Interface (BCI) system.

BCIs facilitate direct communication between the brain and computers by converting the brain’s electrical signals into computer commands (Gao et al., 2021; Lotte et al., 2018; Zander and Kothe, 2011). Initially created to assist individuals with disabilities in controlling external devices (Värbu et al., 2022), BCIs now extend to passive systems that monitor cognitive states, such as attention, fatigue, engagement, and cognitive load (Zander and Kothe, 2011), enhancing cognitive functions through self-regulation and neurofeedback (Birbaumer et al., 2009). These systems provide feedback based on brain activity changes, forming a closed biocybernetic loop (Krol and Zander, 2017). BCIs potentially offer tailored learning experiences in education by adjusting educational content based on real-time brain activity analysis.

Thus, the purpose of this study is to investigate whether the use of a neuroadaptive interface provides an optimal learning experience and increases learning gains, guided by the following research question: “Does adapting the pace of information presentation to the learner’s real-time cognitive load using an EEG-based passive BCI enhance the learning experience?” Specifically, we developed an EEG-based BCI system that adapts the speed of information presentation on the Interactive User Interface (IUI) according to the real-time cognitive load of the learners. We created a memory-based learning task following the ZPD theory to test our BCI system. The dynamic adaptive measures of our BCI are designed to help learners manage their cognitive load and stay within their ZPD for an optimal learning experience. We define an optimal learning experience as the intersection of increased learning gains, increased self-perceived cognitive absorption and satisfaction, and reduced self-perceived cognitive workload.

Furthermore, the limited research on the use of BCIs in education fails to account for the impact of motivation on adaptation. While it is established that motivation influences the cognitive effort invested in a learning task (Paas et al., 2005), there is a dearth of information on this topic in the context of BCI-based learning. We therefore also aim to investigate whether the addition of a motivational factor while using the BCI would enhance the learning experience, guided by the following research question: “To what extent is motivation a necessary condition for effective BCI adaptation?”

To the best of our knowledge, our study is the first of its kind, combining a novel BCI system with a memorization-based learning task developed following the ZPD theory. Our research stands out because very few papers study neuroadaptive interfaces in a CBL context. Existing papers on the topic have used BCIs to monitor different cognitive states (Andreessen et al., 2021; Marchesi and Riccò, 2013; Zammouri et al., 2018; Zhou et al., 2017b), to detect and react to error potentials (Buttfield et al., 2006; Spüler et al., 2012), to adjust interface parameters such as task difficulty or content type (Eldenfria and Al-Samarraie, 2019), or to provide users with feedback on their cognitive state (Verkijika and De Wet, 2015). In contrast, we employ a BCI system that uses real-time data to estimate and classify cognitive load in order to adapt the speed of information presentation on the interface.

The remainder of this manuscript is organized as follows. We first present related literature and the development of the hypotheses. We then present the materials and methods used in this study, including core aspects of developing our BCI system. We then present our data analysis and study results. Findings are interpreted within the discussion section. Finally, the article concludes with a short conclusion encompassing limitations and future research avenues.

2 Related work

2.1 The need for individual learning paces within the zone of proximal development

The ZPD theory suggests that all students have different learning needs and abilities and, therefore, different ZPDs. Thus, within their ZPD, each student assimilates and processes new information or acquires skills differently; some learners need more time and effort than others to learn successfully (Hedegaard, 2012).

Studies have shown that, in order to increase information retention and promote optimal learning experiences, the learning pace must be adjusted and personalized for each student (Najjar, 1996; O'Byrne and Pytash, 2015; Shemshack and Spector, 2020). For example, Hasler et al. (2007) investigated the differences between imposed system-paced and personalized learner-paced groups of primary school students. They found that self-perceived cognitive load was lower and test performance was higher when students used the learner-paced system, which suggests that allowing students to control their own learning pace may improve learning outcomes. Andreessen et al. (2021) also investigated the effect of text difficulty and text presentation speed on self-perceived mental workload in a reading task. Texts of varying difficulty were presented either at the reader’s own pace or at a 40% faster pace. Both predicted cognitive load values and experienced subjective mental workload were significantly higher when learners read at the faster, imposed speed.

In short, these studies demonstrate the importance of adapting learning tasks, educational content, and instructional strategies to each learner’s learning pace to promote an optimal learning experience. These studies also suggest that CBL environments facilitate the personalization of learning methods and processes.

2.2 Personalizing computer-based learning environments

CBL has created new opportunities for personalized learning in the digital era. Personalizing learning through CBL can help address each learner’s diverse learning needs by adapting instructional materials to their learning pace and progress, which can help optimize the ratio of challenge to support described by the ZPD for each learner.

Recent CBL environment studies rely on users’ personal and learning data to create algorithms that personalize the learning experience. For example, Xiao et al. (2018) developed a personalized system that recommends learning materials based on an algorithm combining the student’s learning path and interests. Results from the pilot testing indicated that their system improved the learners’ learning outcomes and satisfaction levels. El-Sabagh (2021) developed an online learning environment that suggests content based on the student’s learning style and adapts the modules based on behavioral data (learning activities, errors, navigation). They found that the participants who used the adaptive learning environment had better overall performance scores and higher reported engagement levels than those who did not. Ku and Sullivan (2002) also developed an adaptive learning system that adapts mathematical questions based on the learner’s interests (favorite foods, sports, etc.) and discovered that the system enhanced the students’ learning achievement and positively affected their learning attitude. Finally, Tekin et al. (2015) developed eTutor, a personalized online learning platform that learns the best order in which to deliver instructional materials using an algorithm based on the learner’s preferences and needs, and adapts the educational material using the learner’s feedback on previously presented instructional content (such as exam scores and time spent on a course). They found that their system improved performance on assessments and achieved significant savings in the amount of time that students spent learning.

These studies have demonstrated that adaptive CBL environments can positively impact the learner’s learning experience. However, their assessment methods do not account for the learner’s real-time cognitive load, which can substantially affect learning effectiveness and efficiency (Sweller, 1988, 2020).

2.3 Cognitive load and measurement approaches

The CLT postulates the importance of minimizing the mental effort associated with processing the instructional design or the learning environment (Curum and Khedo, 2021; DeLeeuw and Mayer, 2008) that is unrelated to the learning itself (Extraneous Load), and of managing the level of complexity of both the learning material and the learning task itself (Intrinsic Load) (Sweller, 2010). Doing so reduces the overall cognitive load and thereby optimizes the use of working memory resources for learning-relevant processing [known as Germane Load (Debue and van de Leemput, 2014) or Germane Processing (Sweller et al., 2019)]. We refer to this sweet spot as the “Goldilocks Zone” (Karran et al., 2019), where the overall cognitive load is optimized to enhance the learning process and increase performance.

ZPD and cognitive load are closely linked concerning the personalization and optimization of learning experiences. Learning tasks that align with a student’s ZPD are less likely to overwhelm them, helping to reduce their Extraneous Load (Schnotz and Kürschner, 2007). In addition, instruction tailored to a learner’s ZPD facilitates learning and minimizes their Intrinsic Load (Schnotz and Kürschner, 2007). Thus, the ZPD makes it possible to evaluate the learner’s cognitive abilities so as to avoid cognitive overload and underload, both of which lead to poor learning outcomes (Paas et al., 2004).

It is essential to measure and assess the cognitive load of learners in order to adjust their learning environments and enhance their learning experiences and outcomes. Today, self-reported measures remain the most widely used method of measuring cognitive load in the research and development of educational technology tools, as they offer the learners’ perspectives on their experience (Anmarkrud et al., 2019; Brunken et al., 2003; Mutlu-Bayraktar et al., 2019). However, they cannot objectively and precisely capture and quantify the amount of mental work expended during the learning process (Mutlu-Bayraktar et al., 2019). Self-perceived measures also rely on the learners’ subjective awareness and perceptions, which involve a deeper reflection and thought process about their learning experience (Ayres, 2006): learners must reflect upon their learning experience and the cognitive effort and mental processes involved, a reflection influenced by their level of metacognitive awareness. While subjective measures offer insights into the perception of cognitive load, they do not fully capture the learner’s evolving cognitive state, which is necessary to tailor instructions to their abilities. Physiological measurement tools such as eye movement data, hormone levels, heart rate variability, and brain activity (Riedl and Léger, 2016) can provide a more precise, reliable, valid, and complementary continuous assessment of cognitive load (Brunken et al., 2003).

Among the various tools available for brain imaging, EEG is one of the most used due to its non-invasiveness, cost-effectiveness, convenience, accessibility, and high temporal resolution (Abiri et al., 2019; Antonenko et al., 2010). EEG measures voltage fluctuations arising from cortical activity, which can be used to assess and infer mental states. Different cognitive processes are associated with variations in brainwave patterns, specifically frequency, amplitude, synchronization between neural networks, and Event-Related Potentials (ERPs) in response to stimuli (Riedl and Léger, 2016). Previous research on cognitive load suggests that theta (θ, 4–7 Hz) and alpha (α, 8–12 Hz) oscillations are associated with task difficulty, with alpha activity becoming desynchronized (i.e., decreased) and theta activity becoming synchronized (i.e., increased) as task difficulty increases (Antonenko et al., 2010; Gevins and Smith, 2003; Klimesch, 1999; Stipacek et al., 2003). Dynamic changes in alpha activity are reported to occur mainly in the brain’s posterior regions, while changes in theta activity occur mainly in the frontal regions (Cavanagh and Frank, 2014; Tuladhar et al., 2007). Prior research used a visuospatial working memory task to explore whether variations in brain activity synchronization within and between the frontal and parietal regions stem from differing central executive demands (Klimesch et al., 2005). The findings indicated that activity synchronization between these areas mirrors working memory’s executive functions: increased executive load leads to reduced anterior coupling in the upper alpha range (10–12 Hz) and heightened theta synchronization between frontal and parietal regions.
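To make the band-based quantities discussed above concrete, the following Python sketch estimates theta and alpha power from an EEG segment using Welch’s method. It is a minimal illustration, not the analysis pipeline of the cited studies; the sampling rate, channel choices, and random placeholder data are assumptions.

```python
# Minimal sketch: theta (4-7 Hz) and alpha (8-12 Hz) band power via Welch's PSD.
# Sampling rate, channels, and data are illustrative assumptions.
import numpy as np
from scipy.signal import welch

def band_power(segment, fs, band):
    """Mean PSD of a 1-D EEG segment within `band` = (low, high) in Hz."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 2 * fs))
    low, high = band
    mask = (freqs >= low) & (freqs <= high)
    return float(np.mean(psd[mask]))

fs = 256                                   # assumed sampling rate (Hz)
frontal_fz = np.random.randn(fs * 6)       # placeholder 6-s frontal segment (e.g., Fz)
parietal_p7 = np.random.randn(fs * 6)      # placeholder 6-s parietal segment (e.g., P7)

theta_power = band_power(frontal_fz, fs, (4, 7))    # tends to rise with task difficulty
alpha_power = band_power(parietal_p7, fs, (8, 12))  # tends to fall with task difficulty
```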

2.4 Brain-computer interfaces

BCIs enable direct brain-to-machine communication and interaction, allowing users to manipulate and engage with technology (Gao et al., 2021; Lotte et al., 2018; Zander and Kothe, 2011). BCI research has gained much popularity in recent years due to its potential medical applications (Gu et al., 2021), such as for neurorehabilitation in brain injury, motor disability and neurodegenerative diseases (Abiri et al., 2019; Chaudhary et al., 2016; Daly and Wolpaw, 2008; Pels et al., 2019; Vansteensel et al., 2023), detection and control of seizures (Liang et al., 2010; Maksimenko et al., 2017), and improvement of sleep quality and automatic sleep stages detection (Papalambros et al., 2017; Phan et al., 2019). Several studies have also looked at non-clinical applications, such as video games (Ahn et al., 2014; Kerous et al., 2018; Laar et al., 2013; Labonte-Lemoyne et al., 2018; Lalor et al., 2005; Lécuyer et al., 2008), marketing and advertisement (Bonaci et al., 2015; Mashrur et al., 2022; Tadson et al., 2023), neuroergonomics and smart environments (Carabalona et al., 2012; Kosmyna et al., 2016; Lin et al., 2014; Tang et al., 2018), and work monitoring and safety (Aricò et al., 2016; Demazure et al., 2019; Demazure et al., 2021; Karran et al., 2019; Roy et al., 2013; Venthur et al., 2010). A BCI is classified as a neuroadaptive interface (Riedl et al., 2014) when real-time adaptations occur on an interface presented on a computer.

Most BCIs use EEG to acquire brain signals (Lotte et al., 2018). Depending on the type of research conducted, EEG-based BCIs can be invasive (with electrodes placed directly on the surface of the brain) or non-invasive (with electrodes placed on the scalp of the subject) (Abiri et al., 2019). Invasive EEG-based BCIs have the advantage of directly measuring higher-quality brain signals, reducing external interference (Daly and Wolpaw, 2008). However, they require surgery to insert and remove the electrodes, exposing patients to several potential complications (Daly and Wolpaw, 2008; Värbu et al., 2022). In contrast, non-invasive EEG-based BCIs measure brain activity using electrodes placed on the scalp. The major drawback is that these electrodes are subject to several factors that affect the quality of the recorded signal, such as external noise, a weaker electrical signal, and even the physical movements of the subject (Padfield et al., 2019). Nevertheless, non-invasive EEG-based BCIs remain more popular due to their noninvasiveness while providing high temporal resolution and a low cost (Abiri et al., 2019; Cohen, 2017; Dimoka et al., 2012; Lotte et al., 2018; Värbu et al., 2022).

Brain signals are typically first acquired with an EEG (Lotte et al., 2018) and then processed through a series of steps, including data preprocessing, feature extraction, and signal classification (Padfield et al., 2019), before finally being interpreted by the BCI and used for its purpose (Abiri et al., 2019; Lotte et al., 2018).
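The skeleton below illustrates this generic processing chain in Python. The function bodies are placeholders standing in for real preprocessing, feature extraction, and classification steps; it sketches the flow of a typical pipeline rather than any specific BCI implementation.

```python
# Schematic sketch of the generic EEG-BCI chain:
# acquisition -> preprocessing -> feature extraction -> classification.
# Function bodies are illustrative placeholders only.
import numpy as np

def preprocess(raw):
    # Real systems would band-pass filter and reject artifacts here;
    # this stand-in only removes each channel's mean.
    return raw - raw.mean(axis=-1, keepdims=True)

def extract_features(clean):
    # Real systems might compute band powers per channel;
    # this stand-in uses per-channel variance.
    return clean.var(axis=-1)

def classify(features, threshold=1.0):
    # Real systems would apply a trained classifier; here, a toy threshold rule.
    return int(features.mean() > threshold)

fs = 256
raw_window = np.random.randn(8, fs * 6)     # 8 channels, 6 s of placeholder data
state = classify(extract_features(preprocess(raw_window)))
```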

There are three main BCI paradigms: active, reactive, and passive (Table 1). Active paradigms allow users to directly control the system by deliberately controlling their brain activity (Ahn et al., 2014; Angrisani et al., 2021; Zander and Kothe, 2011; Zander et al., 2009). For instance, users can employ mental imagery to imagine motor movements, allowing the system to replicate the intended action on a screen or with an external device, such as a mechanical arm (Steinert et al., 2019). In reactive paradigms, specific brain activity initiates predetermined actions from the system in response to external stimuli (Ahn et al., 2014; Wang et al., 2019; Zander and Kothe, 2011). Brain reactivity measured following external stimuli is associated with a specific command from the system, making this type of BCI very specific and efficient (Dehais et al., 2022). For example, Chen et al. (2017) used Steady-State Visual-Evoked Potentials (SSVEP) to develop a reactive BCI in a visual navigation task. SSVEPs were detected by the BCI when participants were looking at the sides of a flickering square in the middle of the screen, which allowed them to control the direction of the cursor. Finally, in a passive paradigm, brain activity is continuously monitored to differentiate or quantify mental states without user control, providing feedback as a system response. For example, Karran et al. (2019) developed an EEG-based passive BCI to measure and monitor users’ sustained attention in a long-duration business task. The system’s feedback consisted of countermeasures in the form of color gradients representing the participant’s sustained attention level, as well as alerts when sustained attention was low, designed to maintain sustained attention at an optimal level and improve performance.

Table 1. Overview of brain-computer interface (BCI) paradigms: control types, user involvement, applications and advantages.

Passive BCIs have garnered significant attention recently, especially for implementing closed-loop adaptations (Krol and Zander, 2017). In a passive closed-loop BCI, real-time brain activity and adaptive system actions continuously influence each other as part of a biocybernetics loop (Ahn et al., 2014; Krol and Zander, 2017; Pope et al., 1995; Roy et al., 2013; Zander and Kothe, 2011). This dynamic cycle begins when an assessed brain state triggers an adaptive response from the system. The system then provides feedback or adjusts the content to alter the current brain state, and so forth (Krol and Zander, 2017). The aforementioned study by Karran et al. (2019) is an example of a closed-loop BCI, as the system continuously monitors sustained attention and provides feedback according to the level measured to influence the user to increase their sustained attention. This biocybernetics loop continued until the end of the experiment.

2.5 Brain-computer interfaces in educational contexts

The application of BCIs in diverse settings demonstrates their innovative potential to enhance learning outcomes and empower learners through novel interactions with educational content. However, research on using BCIs in educational contexts is limited and inconsistent (Xia et al., 2023). Previous studies have primarily employed passive BCIs to assess users’ mental states as they learn and interact with educational interfaces, subsequently personalizing learning according to the data collected (Krol and Zander, 2017). For example, Apicella et al. (2022) developed a wearable EEG-based system to detect and classify students’ cognitive and emotional engagement during learning tasks, leveraging brain signals to optimize adaptive learning platforms in real-time. Engagement was measured using EEG signal analysis through a Filter Bank and Common Spatial Pattern (CSP) method, followed by classification with a Support Vector Machine (SVM). The task involved a Continuous Performance Test (CPT) to modulate cognitive engagement, while emotional engagement was influenced by background music and social feedback. The system achieved classification accuracies of 76.9% for cognitive and 76.7% for emotional engagement. In addition, previous research on cognitive load and adaptive educational interfaces has mainly focused on the complexity of the educational material and the instructional guidance presented to the learner (Kalyuga and Liu, 2015; Mutlu-Bayraktar et al., 2019; Petko et al., 2020). These gaps in the literature have recently prompted researchers to investigate the transformative potential of passive closed-loop BCIs in learning contexts.

For instance, Yuksel et al. (2016) created a passive closed-loop BCI called Brain Automated Chorales (BACh), which adjusts the difficulty level of piano learning material according to cognitive workload measurements obtained through functional near-infrared spectroscopy (fNIRS). Adaptive measures of the system depended on learners’ cognitive workload throughout both the training and learning tasks, which were classified using a machine learning algorithm. The results suggest that the learners’ playing speed and performance accuracy improved when learning piano with the BACh system. Additionally, the learners reported a better learning experience with the system and noted that difficulty levels were appropriately adjusted. Additionally, Walter et al. (2017) designed a closed-loop EEG-based BCI that measures cognitive workload in real-time to adapt the difficulty of arithmetic problems presented in an online learning environment. Cognitive workload classifications were separated into three difficulty levels based on workload state predictions derived from a pre-trained regression model to determine the optimal range of cognitive workload for learning. Their findings demonstrated that participants who completed the experiment with the adaptive instructions achieved greater learning gains than those who completed the experiment without adaptivity. However, this difference was not statistically significant. Finally, Kosmyna and Maes (2019) created AttentivU, an EEG-based passive closed-loop BCI that measures engagement in real-time and triggers haptic feedback (vibrations from a scarf worn by the learner) when a drop in engagement is detected. The system used the engagement index proposed by Pope et al. (1995), which calculated the average power of theta, beta and alpha frequency components derived from Power Spectral Density to return a smoothed engagement index every 15 s. The two studies conducted with AttentivU yielded results indicating that haptic biofeedback driven by BCI redirected learners’ engagement to the task, resulting in enhanced performance on comprehension tests. These studies demonstrate the feasibility of closed-loop BCI systems within educational contexts to adapt and personalize learning to each learner.
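For illustration, the sketch below shows an engagement index of the kind attributed to Pope et al. (1995), in its commonly cited form of beta power divided by the sum of alpha and theta power, with simple smoothing. The one-value-per-update cadence and window length are assumptions for illustration, not AttentivU’s implementation.

```python
# Minimal sketch of a smoothed engagement index in the commonly cited
# Pope et al. (1995) form: beta / (alpha + theta). Band powers are assumed
# to come from a PSD estimate such as the one sketched earlier.
from collections import deque
import numpy as np

class EngagementSmoother:
    def __init__(self, window=15):
        # e.g., one band-power triplet per second, smoothed over a 15-sample window
        self.values = deque(maxlen=window)

    def update(self, beta, alpha, theta):
        self.values.append(beta / (alpha + theta))
        return float(np.mean(self.values))   # smoothed engagement index
```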

The aim of the current study is to investigate the effects of an EEG-based passive closed-loop BCI on the learning experience in a memory-based learning task and contribute to the literature regarding the effects of closed-loop passive BCI on learning outcomes.

2.6 Hypotheses development

Our study aims to answer the following research question: “Does adapting the pace of information presentation to the learner’s real-time cognitive load using an EEG-based passive BCI enhance the learning experience?” We hypothesize that (H1) “neuro-adaptivity enhances the learning experience compared to the absence of neuro-adaptivity” (Figure 1). This study defines the learning experience as a combination of objective and subjective measures of cognitive load and emotional state, specifically focusing on learning gains, perceived mental workload, perceived cognitive absorption, and satisfaction.

Figure 1. Conceptual framework illustrating the effects of neuro-adaptivity and motivation on learning outcomes.

Learning gains in this context represent an objective measure of the knowledge learned and memorized throughout the experimental task, allowing an assessment of the impact of the BCI on learning. Prior research suggests aligning learning speed with cognitive load can enhance efficiency and effectiveness (Petko et al., 2020). We propose that neuro-adaptivity leads to greater learning gains by optimizing the learning pace to the learner’s cognitive load. Thus, we hypothesize (H1a) that neuro-adaptivity leads to more significant learning gains compared to the absence of neuro-adaptivity (Figure 1).

Additionally, understanding how learners perceive and estimate their mental workload while working with and without the BCI is necessary for evaluating the learning experience. Perceived mental workload refers to the perceived mental effort required to complete the learning task and its impact on the experience (Hancock and Meshkati, 1988), where a higher perceived mental workload translates into a less optimal learning experience (Sweller, 1994). Therefore, we hypothesize that (H1b) “neuro-adaptivity reduces perceived mental workload compared to the absence of neuro-adaptivity” (Figure 1).

Derived from Csikszentmihalyi’s theory of flow (Csikszentmihalyi, 1975; Csikszentmihalyi, 2014), cognitive absorption is described as a state of total immersion when performing a task, characterized by high levels of engagement and focus (Agarwal and Karahanna, 2000). Previous studies have shown that higher levels of cognitive absorption while completing CBL tasks lead to higher satisfaction levels and better-perceived ease of use and usefulness of the learning tool (Saadé and Bahli, 2005; Salimon et al., 2021). Therefore, we hypothesize that (H1c) “neuro-adaptivity generates a higher self-perceived cognitive absorption level than the absence of neuro-adaptivity” (Figure 1).

Learner satisfaction reflects the degree to which learners feel engaged, satisfied, and fulfilled with their learning experiences (Martin and Bolliger, 2022; Wickersham and McGee, 2008). Previous research has shown that learner satisfaction leads to better learning outcomes (Martin and Bolliger, 2022). Therefore, we hypothesize (H1d) that “neuro-adaptivity generates a higher level of perceived satisfaction with the learning experience compared to the absence of neuro-adaptivity” (Figure 1).

Furthermore, we aim to examine the role of motivation, both intrinsic and extrinsic, in the learning experience during BCI utilization. Numerous studies have demonstrated the importance of motivation in achieving academic success, notably in CBL environments (Hu et al., 2016; Lepper and Malone, 2021; Nikou and Economides, 2016). To aid in this examination, we ask the following research question: “To what extent is motivation a necessary condition for effective BCI adaptation?”

In general, learners are more likely to be actively engaged and motivated when the learning experiences provided are specific to their ZPD (Shabani et al., 2010; Vygotsky and Cole, 1978). Self-determination theory (SDT) investigates the motivations of individuals in varying social contexts and situations. It identifies two types of motivation: intrinsic and extrinsic (Ryan and Deci, 2000a,b). When learners are intrinsically motivated, they learn naturally, usually with interest and enjoyment, because of the benefits that the subject matter can bring (Ryan and Deci, 2000a,b). In contrast, extrinsic motivation occurs when learners compel themselves to learn to obtain a reward or avoid consequences (Ryan and Deci, 2000a,b). Extrinsic incentives such as money or prizes have been demonstrated to enhance learning performance (Schildberg-Hörisch and Wagner, 2020) by improving attention (Anderson, 2016; Small et al., 2005), effort (Schwab and Somerville, 2022), and working memory (Wimmer and Poldrack, 2022), and can motivate students to remain interested, engaged, and dedicated to their learning, resulting in greater learning outcomes (Festinger et al., 2009; Gong et al., 2021; Rousu et al., 2015).

These findings suggest that extrinsic motivators can support intrinsic motivation. Therefore, we utilize extrinsic motivation in the form of a financial incentive to help answer our research question. We hypothesize that (H2) motivation moderates the effect of neuroadaptation by increasing its effectiveness and the perception of an optimal learning environment compared to the neuro-adaptive interface alone (Figure 1). More precisely, we hypothesize that (H2a) adding motivation to neuro-adaptivity helps achieve greater learning gains compared to neuro-adaptivity alone; (H2b) adding motivation to neuro-adaptivity reduces perceived mental workload compared to neuro-adaptivity alone; (H2c) adding motivation to neuro-adaptivity generates a higher level of perceived cognitive absorption than neuro-adaptivity alone; and (H2d) adding motivation to neuro-adaptivity generates a higher level of self-perceived satisfaction with the learning experience compared to neuro-adaptivity alone (Figure 1).

3 Materials and methods

3.1 Participants

Fifty-five participants took part in our study (27 ± 7.92 years old; 28 female); 36 were university students, and 19 took online classes or training regularly for professional or personal reasons. All participants were recruited by e-mail from our institution’s panel database. Participants were included if they were over 18 years old, had normal or corrected-to-normal vision, had no history of neurological conditions, were right-handed, were fluent in French, and had high computer proficiency. Handedness was validated before the experiment with the Edinburgh Handedness Inventory (Caplan and Mendoza, 2011), and all other inclusion criteria were validated through the screening questionnaire. Participants signed a consent form before completing the study and were informed that they could withdraw at any time. Participants were compensated $100 (CAD) for their participation. Our institution’s ethics committee approved the study under certificate 2023-5071.

3.2 Experimental design

3.2.1 Experimental conditions

We utilized a 3 × 2 (type of adaptation × motivation) between-subject design. Participants were randomly assigned to a group prior to data collection and kept unaware of the experimental factors. In the current study, conditions refer to the type of Interactive User Interface (IUI): Control (C), no adaptivity (n = 17), in which stimuli are presented at predefined intervals; Adaptive without motivation (A) (n = 22), in which stimuli are presented at variable speeds based on a classification of user cognitive load; and Adaptive with motivation (AM) (n = 16), in which stimuli are presented at variable speeds based on a classification of user cognitive load in the presence of financial motivation. To provide extrinsic motivation in the AM group, participants were informed that better overall task performance resulted in more entries in a draw for a $200 Visa prepaid gift card. To conform with ethical principles, all participants, regardless of experimental condition, received the same number of entries for the prize draw when the study concluded.

3.2.2 Phase one: calibration

As illustrated in Figure 2, the first part of the calibration phase consisted of a 90-s baseline task used for post-hoc analyses, in which participants had to stare at a black square in the middle of a grey screen. The second part of the calibration phase consisted of an n-back task used to estimate personal threshold values for high and low cognitive load. These thresholds were then integrated into the BCI model to personalize the classifier’s thresholds and limits (Sections 3.2.4 and 3.3.2). The calibration phase was performed regardless of condition.

Figure 2. Schematic representation of the n-back task used in the calibration task (phase 1).

The n-back task was selected due to its popularity for manipulating memory load, which can serve as a proxy for cognitive load (Brouwer et al., 2012; Grimes et al., 2008; Wang et al., 2016), and its similarity to the learning task, which requires memorizing and recalling visual stimuli. In the n-back task, participants must assess whether each stimulus in a sequence corresponds to the stimulus presented n items earlier (Hogervorst et al., 2014). As n increases, the n-back task becomes more challenging, requiring more cognitive resources. A four-minute n-back task was administered in two parts: a 2-min 0-back task to assess low cognitive load and a 2-min 2-back task to assess high cognitive load, separated by a 30-s break. In both tasks, each stimulus (a letter) was presented for one second, followed by a two-second inter-trial interval, so a new letter appeared every three seconds, for a total of 40 iterations per task.
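To make the calibration protocol concrete, the following Python sketch generates a 2-back letter sequence consistent with the timing described above (1-s stimulus, 2-s inter-trial interval, 40 trials). The target rate and letter set are illustrative assumptions, not reported parameters of the task used in the study.

```python
# Minimal sketch of a 2-back letter sequence generator; target rate and
# letter set are assumptions for illustration.
import random
import string

def make_nback_sequence(n=2, trials=40, target_rate=0.3,
                        letters=string.ascii_uppercase[:8]):
    seq = []
    for i in range(trials):
        if i >= n and random.random() < target_rate:
            seq.append(seq[i - n])                         # target: matches the n-back letter
        else:
            candidates = [c for c in letters if i < n or c != seq[i - n]]
            seq.append(random.choice(candidates))          # non-target letter
    return seq

sequence = make_nback_sequence()
targets = [i for i in range(2, len(sequence)) if sequence[i] == sequence[i - 2]]
# Each letter would be shown for 1 s, followed by a 2-s blank (3 s per trial).
```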

3.2.3 Phase two: learning task

Memorizing course material for exams and practical applications is one of the most frequent learning tasks in higher education, given the sheer quantity of information that must be learned within a limited time frame. To test our hypotheses, we adapted an existing constellation memorization learning task (Riopel et al., 2017).

Star constellations were chosen as the learning topic for two reasons. First, university students typically possess low prior knowledge of the subject. Second, even knowledgeable individuals readily encounter unfamiliar material among the 88 constellations. In the original task (Riopel et al., 2017), participants selected the correct name of a constellation from three options associated with an image of one of the 88 constellations; its purpose was to examine learning, forgetting, and spacing curves in online learning. This allowed us to design a valid task that could promote learning while inducing changes in cognitive load.

As indicated in Figure 3, participants were instructed to memorize as many constellations as possible by associating the presented constellation image with its corresponding name among four multiple-choice options. The correct answer (feedback) was displayed after each question, regardless of whether it was answered correctly. Previous research has indicated that providing the correct answer to a question, irrespective of whether it was answered correctly, is essential for enhancing the retention of information and avoiding future mistakes (Butler et al., 2008; Kulhavy, 1977). The instructions remained the same throughout the learning task, which contained four blocks (i.e., trials) of questions separated by 30-s breaks (see Section 3.2.5). Participants were required to memorize 32 constellations, each presented twice per block. The sequence of constellation presentation was pre-randomized before data collection and remained the same for all participants; however, the position of the correct answer among the four multiple-choice options and the three incorrect constellation names were randomized.
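The following Python sketch illustrates how one block of such a task could be assembled under the constraints described above: 32 constellations shown twice per block, a presentation order fixed across participants, and randomized answer positions and distractors. The constellation names and helper functions are placeholders, not the materials used in the experiment.

```python
# Minimal sketch of one learning-task block; names and seeds are placeholders.
import random

ALL_CONSTELLATIONS = [f"Constellation_{i}" for i in range(88)]   # placeholder names
STUDIED = ALL_CONSTELLATIONS[:32]                                # the 32 to be memorized

def build_block(order_seed=0):
    # Presentation order is drawn from a fixed seed so it is identical for all
    # participants; option placement uses fresh randomness for each participant.
    order_rng = random.Random(order_seed)
    order = STUDIED * 2                     # each constellation appears twice per block
    order_rng.shuffle(order)
    trials = []
    for target in order:
        distractors = random.sample([c for c in ALL_CONSTELLATIONS if c != target], 3)
        options = distractors + [target]
        random.shuffle(options)             # randomize the correct answer's position
        trials.append({"image": target, "options": options, "answer": target})
    return trials

block_one = build_block(order_seed=1)       # 64 trials (32 constellations x 2)
```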

Figure 3. Example of a constellation from the learning experiment, presented on the interface.

3.2.4 Model of adaptivity and cognitive load classifications

The model of adaptivity used in our study was adapted from Karran et al. (2019), who conceived an adaptive model of sustained attention in which two thresholds denote the chance of failure: an upper soft limit beyond which the chance of failure increases, and a lower hard limit beyond which failure is certain. In their model, adaptive countermeasures are provided to keep the BCI user within the upper and lower bounds, in what they term the “goldilocks” zone, i.e., neither too high nor too low. We chose this model because it was easily adapted to replace sustained attention with cognitive load while keeping all thresholds the same. In the current study, we inverted the limits such that the upper limit represents cognitive overload and the certainty of failure, and the lower limit represents too little cognitive load and an increased chance of failure through inattention or boredom. The “goldilocks” zone represents the ZPD, which promotes an optimal cognitive load level, neither too high nor too low, through fluid and dynamic adaptations to enhance learning gains over time.

An EEG analysis of 17 pre-tests comparing the 0-back and 2-back tasks, controlling the False Discovery Rate (FDR, q = 0.05), demonstrated a significant decrease in alpha-band activity within the parietal, occipital, and right temporal regions. However, the same analysis with a Bonferroni correction indicated a significant reduction in alpha-band (α) activity only at the P7 electrode. Consequently, we exclusively used the P7 electrode when computing the cognitive load index, which aligns with the current literature (see Section 2.3). We calculated cognitive load using an index based on the average alpha-band power in the parietal cortex (electrode P7) during 6-s sliding windows with no overlap:

$$CL_{current} = P_{\alpha, i}$$

where $CL_{current}$ represents a new real-time index value, i.e., the current cognitive load level, obtained by calculating the alpha-band power during the $i$th 6-s sliding window, denoted by $P_{\alpha, i}$.
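A minimal Python sketch of this real-time index is shown below. It assumes a 256 Hz sampling rate and Welch-based spectral estimation, neither of which is specified above; only the 6-s non-overlapping windows and the P7 alpha-band power follow the description.

```python
# Minimal sketch: CL_current as alpha-band power at P7 over consecutive,
# non-overlapping 6-s windows. Sampling rate and PSD method are assumptions.
import numpy as np
from scipy.signal import welch

def alpha_power(segment, fs):
    """Mean alpha-band (8-12 Hz) PSD of a 1-D EEG segment."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 2 * fs))
    mask = (freqs >= 8) & (freqs <= 12)
    return float(np.mean(psd[mask]))

def cognitive_load_stream(p7_samples, fs=256, window_s=6):
    """Yield one CL_current value per non-overlapping 6-s window of P7 data."""
    step = fs * window_s
    for start in range(0, len(p7_samples) - step + 1, step):
        yield alpha_power(p7_samples[start:start + step], fs)   # CL_current = P_alpha,i
```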

As described in Section 3.2.2, the n-back task was used to determine baseline cognitive load thresholds. Specifically, cognitive load averages for the 0-back and 2-back tasks were calculated separately using the cognitive load index. This resulted in the creation of two thresholds, which represent “low average” and “high average” cognitive load. In addition, the average cognitive load for the entire n-back task was calculated.

$$\overline{CL}_{0back},\ \overline{CL}_{2back} = \frac{\sum_{i=1}^{N} CL_{current,i}}{N} \qquad\qquad \overline{CL}_{nback} = \frac{\overline{CL}_{0back} + \overline{CL}_{2back}}{2}$$

where $\overline{CL}_{0back}$ and $\overline{CL}_{2back}$ denote the average cognitive load for the 0-back and the 2-back task, respectively, and $N$ represents the total number of 6-s sliding windows in the corresponding task. $CL_{current,i}$ represents the $i$th real-time cognitive load value calculated with the index. Finally, the overall average cognitive load for the n-back task, denoted by $\overline{CL}_{nback}$, is calculated as the mean of the 0-back and 2-back averages.
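The calibration averages can be expressed compactly as in the short Python sketch below, which simply restates the formulas above; it is an illustration, not the authors’ code.

```python
# Minimal sketch of the calibration averages from the 0-back and 2-back runs.
import numpy as np

def calibration_averages(cl_0back, cl_2back):
    """cl_0back / cl_2back: sequences of CL_current values from each calibration run."""
    cl_bar_0 = float(np.mean(cl_0back))      # "low average" reference (0-back)
    cl_bar_2 = float(np.mean(cl_2back))      # "high average" reference (2-back)
    cl_bar_n = (cl_bar_0 + cl_bar_2) / 2     # overall n-back average
    return cl_bar_0, cl_bar_2, cl_bar_n
```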

The real-time index values were stabilized during the learning experiment using a 60-s sliding window that dynamically adjusted the average cognitive load over time. In other words, decisions on cognitive load classifications were made every 6 s by comparing the index with a moving average of the previous 60 s, i.e., the last 10 data points. This ensured that the classification would adjust to changes in the user’s cognitive state throughout the experiment. Additionally, analysis of the 17 pre-tests indicated that the amplitude of the alpha-band signal during the learning task was approximately 125% of that observed during the n-back task, suggesting that the thresholds should be 1.25 times higher than the average values obtained in the n-back task. Therefore, a resulting cognitive load value exceeding the “high average” threshold would be classified as “2” in the BCI system, indicating a high cognitive load level. Conversely, a resulting cognitive load value below the “low average” threshold would be classified as “0”, indicating a low cognitive load level. Finally, when the resulting cognitive load value fell between the two thresholds, it would be classified as “1”, indicating an optimal level of cognitive load.

$$MA_i = \frac{\sum_{j=i-9}^{i} CL_{current,j}}{10} \qquad Class_0 = MA_i \times \frac{\overline{CL}_{0back}}{\overline{CL}_{nback}} \times 1.25 \qquad Class_2 = MA_i \times \frac{\overline{CL}_{2back}}{\overline{CL}_{nback}} \times 1.25$$

where $CL_{current,j}$ represents the $j$th real-time cognitive load value calculated with the index, $MA_i$ represents the moving average of the last ten cognitive load values at time $i$, and the factor of 1.25 represents the threshold adjustment derived from the pre-test results.
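The following Python sketch puts the moving average, thresholds, and three-way mapping together as stated above, using calibration values of the kind produced by the previous sketch. It is a minimal illustration of the stated rules rather than the authors’ implementation.

```python
# Minimal sketch of the classification rule: thresholds scaled from the
# calibration ratios and a 10-point moving average; output is 0, 1, or 2.
from collections import deque

class LoadClassifier:
    def __init__(self, cl_bar_0, cl_bar_2, cl_bar_n):
        self.cl_bar_0, self.cl_bar_2, self.cl_bar_n = cl_bar_0, cl_bar_2, cl_bar_n
        self.history = deque(maxlen=10)          # last 10 windows = previous 60 s

    def classify(self, cl_current):
        self.history.append(cl_current)
        ma = sum(self.history) / len(self.history)                # MA_i
        low_thr = ma * (self.cl_bar_0 / self.cl_bar_n) * 1.25     # "low average" threshold
        high_thr = ma * (self.cl_bar_2 / self.cl_bar_n) * 1.25    # "high average" threshold
        if cl_current > high_thr:
            return 2        # high cognitive load
        if cl_current < low_thr:
            return 0        # low cognitive load
        return 1            # optimal cognitive load
```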

3.2.5 Adaptive rules of the interface and specifications

During the learning task, the adaptive IUI modulated the information delivery speed (see Figure 4). Specifically, upon receiving high cognitive load classifications (“2”), the interface slowed information delivery, affording participants extended time to respond to the question and process the correct answer. Conversely, low cognitive load classifications (“0”) triggered an increase in delivery speed, reducing the response and correct-answer display time. No adjustment was made for classifications of average cognitive load (“1”), indicating optimal cognitive load. Following ZPD theory, we posited that these time adaptations would allow the learner to remain in their ZPD, leading to better learning outcomes. The baseline time window for displaying constellation questions and feedback was 5 s. Based on pre-test results, adjustments were made in 1-s increments within a 3 to 8-s range per item. Pre-tests revealed that presentations over 8 s diminished response efficiency and significantly lowered engagement, focus, and interest, aligning with existing research findings (Beck, 2005; Chipchase et al., 2017). The minimum time was set at 3 s to prevent the BCI system from conflating the brain’s initial processing of new information with high cognitive load (Anderson et al., 2011; Rosso et al., 2001; Vijayalakshmi et al., 2015). Finally, the constellation question and the feedback were presented for the same duration to ensure adequate time for participants to respond and process the correct answer.
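A minimal Python sketch of this pacing rule follows. It assumes one 1-s adjustment per received classification, which is a simplification of the update schedule; the 5-s baseline and 3 to 8-s bounds come from the description above.

```python
# Minimal sketch of the pacing rule: 5-s baseline adjusted in 1-s steps
# within a 3-8 s range according to the latest cognitive load class.
def adjust_presentation_time(current_s, load_class, step=1, min_s=3, max_s=8):
    if load_class == 2:                      # high load -> slow down
        return min(current_s + step, max_s)
    if load_class == 0:                      # low load -> speed up
        return max(current_s - step, min_s)
    return current_s                         # optimal load -> keep the current pace

pace = 5                                     # baseline display time (s)
for cls in [2, 2, 1, 0]:                     # example stream of classifications
    pace = adjust_presentation_time(pace, cls)
```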

Figure 4. Adaptive rules of the BCI system implemented in the experiment.

Figure 5 illustrates the learning task, which was structured into four blocks, interspersed with 30-s intervals. In the C group, question and feedback pacing remained constant across all blocks, adhering to a 5-s baseline. For the A and AM groups using the adaptive IUI, information delivery rates in the second and third blocks were modulated based on cognitive load classifications from the BCI; no adaptation was applied in the first and last blocks to assess the effect. To facilitate participant re-engagement post-breaks, the initial 30 s (or first three constellations) of the adaptive blocks maintained the baseline delivery speed of 5 s for both questions and feedback.
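The block schedule can be summarized in the short Python sketch below, a simplified illustration of the rules just described rather than the experiment’s actual control logic; the function name and arguments are placeholders.

```python
# Minimal sketch of the block schedule: adaptation is active only in blocks 2
# and 3 for the adaptive groups, and the first 30 s of those blocks stay at
# the 5-s baseline while participants re-engage after the break.
def display_time(block, seconds_into_block, is_adaptive_group, adaptive_pace, baseline=5):
    adaptation_active = (
        is_adaptive_group and block in (2, 3) and seconds_into_block >= 30
    )
    return adaptive_pace if adaptation_active else baseline
```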
