Publications catalog - books

Intelligent Tutoring Systems: 8th International Conference, ITS 2006, Jhongli, Taiwan, June 26-30, 2006 Proceedings

Mitsuru Ikeda ; Kevin D. Ashley ; Tak-Wai Chan (eds.)

Conference: 8th International Conference on Intelligent Tutoring Systems (ITS). Jhongli, Taiwan. June 26-30, 2006

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Computers and Education; Multimedia Information Systems; User Interfaces and Human Computer Interaction; Artificial Intelligence (incl. Robotics); Information Systems Applications (incl. Internet)

Availability

Detected institution: not detected
Year of publication: 2006
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-35159-7

Electronic ISBN

978-3-540-35160-3

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2006

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

Automated Expert Modeling for Automated Student Evaluation

Robert G. Abbott

This paper presents automated expert modeling for automated student evaluation, or AEMASE (pronounced “amaze”). This technique grades students by comparing their actions to a model of expert behavior. The expert model is constructed with machine learning techniques, avoiding the costly and time-consuming process of manual knowledge elicitation and expert system implementation. A brief summary of after action review (AAR) and intelligent tutoring systems (ITS) provides background for a prototype AAR application with a learning expert model. A validation experiment confirms that the prototype accurately grades student behavior on a tactical aircraft maneuver application. Finally, several topics for further research are proposed.

- Assessment | Pp. 1-10
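The core idea above, grading by similarity to machine-learned expert behavior rather than to hand-authored rules, can be sketched briefly. The following is a minimal illustration under stated assumptions, not the paper's implementation: the nearest-neighbor matching and the feature encoding are choices made here for clarity.

```python
# Minimal sketch of the AEMASE idea: learn an expert model from recorded
# expert traces, then grade a student by how closely their actions match it.
# Nearest-neighbor matching and the feature encoding are assumptions.
import numpy as np

class NearestNeighborExpertModel:
    def __init__(self, expert_traces):
        # expert_traces: state/action feature vectors recorded from expert
        # demonstrations (no manual knowledge elicitation required)
        self.expert = np.asarray(expert_traces, dtype=float)

    def grade(self, student_trace):
        # Score each student action by its distance to the closest expert
        # action; the mean distance maps to a similarity grade in (0, 1].
        student = np.asarray(student_trace, dtype=float)
        dists = [np.min(np.linalg.norm(self.expert - s, axis=1)) for s in student]
        return 1.0 / (1.0 + np.mean(dists))

# Usage: vectors might encode e.g. heading, speed, and range in the
# tactical aircraft maneuver task (hypothetical features).
model = NearestNeighborExpertModel([[0.0, 1.0], [0.5, 0.8]])
print(model.grade([[0.1, 0.9], [0.6, 0.7]]))
```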

Multicriteria Automatic Essay Assessor Generation by Using TOPSIS Model and Genetic Algorithm

Shu-ling Cheng; Hae-Ching Chang

With the advance of computer technology and computing power, more efficient automatic essay assessment is coming into use. Essay assessment should be treated as a multicriteria decision-making problem, because an essay is composed of multiple concepts. While prior work has proposed several methods for assessing students' essays, little attention has been paid to using multiple criteria for essay evaluation. This paper presents a Multicriteria Automatic Essay Assessor (MAEA) that combines Latent Semantic Analysis (LSA), the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), and a Genetic Algorithm (GA) to assess essays. LSA is employed to construct concept dimensions, TOPSIS is incorporated to model the multicriteria essay assessor, and the GA is used to find the optimal concept dimensions among the LSA concept dimensions. To show the effectiveness of the method, essays by students majoring in information management were evaluated with MAEA. The results show that MAEA's scores are highly correlated with those of human graders.

- Assessment | Pp. 11-20
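TOPSIS itself is a standard multicriteria aggregation: normalize the criteria, weight them, and score each alternative by its relative closeness to the ideal solution. A minimal sketch follows; the criteria values and weights (which the paper tunes with a GA over LSA concept dimensions) are hypothetical.

```python
# Minimal TOPSIS scorer: combine an essay's scores along several criteria
# (e.g. similarity to LSA concept dimensions) into a single grade.
import numpy as np

def topsis(matrix, weights):
    # matrix: rows = essays, cols = criteria
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)        # vector-normalize each criterion
    v = norm * weights                          # apply criterion weights
    ideal, anti = v.max(axis=0), v.min(axis=0)  # best/worst value per criterion
    d_pos = np.linalg.norm(v - ideal, axis=1)   # distance to ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal solution
    return d_neg / (d_pos + d_neg)              # closeness coefficient in [0, 1]

# Hypothetical data: three essays scored on three concept dimensions.
essays = [[0.8, 0.6, 0.7], [0.4, 0.9, 0.5], [0.2, 0.3, 0.4]]
print(topsis(essays, weights=np.array([0.5, 0.3, 0.2])))
```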

Better Student Assessing by Finding Difficulty Factors in a Fully Automated Comprehension Measure

Brooke Soden Hensler; Joseph Beck

The multiple choice cloze (MCC) question format is commonly used to assess students' comprehension. It is an especially useful format for ITS because it is fully automatable and can be used on any text. Unfortunately, very little is known about the factors that influence MCC question difficulty and student performance on such questions. In order to better understand student performance on MCC questions, we developed a model of MCC questions. Our model shows that the difficulty of the answer and the student's response time are the most important predictors of student performance. In addition to showing the relative impact of the terms in our model, our model provides evidence of a developmental trend in syntactic awareness beginning around the 2nd grade. Our model also accounts for 10% more variance in students' external test scores compared to the standard scoring method for MCC questions.

- Assessment | Pp. 21-30
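The abstract names two dominant predictors, answer difficulty and response time. A model of that shape could be a simple logistic regression; the sketch below is illustrative only, with invented coefficients rather than the fitted model from the paper.

```python
# Sketch of a performance model of the kind the abstract describes:
# predict P(correct) on an MCC question from the difficulty of the answer
# and the student's response time. Coefficients are invented for illustration.
import math

def p_correct(answer_difficulty, response_time_s, b0=1.2, b1=-1.5, b2=-0.04):
    # Logistic model: harder answers and longer response times both
    # lower the predicted probability of a correct response.
    z = b0 + b1 * answer_difficulty + b2 * response_time_s
    return 1.0 / (1.0 + math.exp(-z))

print(p_correct(answer_difficulty=0.3, response_time_s=8))   # easier item, fast
print(p_correct(answer_difficulty=0.9, response_time_s=25))  # harder item, slow
```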

Authoring Constraint-Based Tutors in ASPIRE

Antonija Mitrovic; Pramuditha Suraweera; Brent Martin; Konstantin Zakharov; Nancy Milik; Jay Holland

This paper presents a project whose goal is to develop ASPIRE, a complete authoring and deployment environment for constraint-based intelligent tutoring systems (ITSs). ASPIRE builds on our previous work on constraint-based tutors and on WETAS, our tutoring shell. ASPIRE consists of an authoring server (ASPIRE-Author), which enables domain experts to easily develop new constraint-based tutors, and a tutoring server (ASPIRE-Tutor), which deploys the developed systems. Preliminary evaluation shows that ASPIRE is successful in producing domain models, but a more thorough evaluation is planned.

- Authoring Tools | Pp. 41-50
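For readers unfamiliar with constraint-based modeling: a constraint pairs a relevance condition (when the constraint applies) with a satisfaction condition (what must then hold), plus feedback for violations. The sketch below shows that representation in miniature; the SQL example domain and the predicates are assumptions for illustration, not ASPIRE's actual constraint language.

```python
# Minimal sketch of a constraint in a constraint-based tutor: a relevance
# condition, a satisfaction condition, and feedback shown on violation.
class Constraint:
    def __init__(self, relevance, satisfaction, feedback):
        self.relevance = relevance        # when does this constraint apply?
        self.satisfaction = satisfaction  # what must then hold in the solution?
        self.feedback = feedback

    def check(self, solution):
        # A constraint is violated only if it is relevant but not satisfied.
        if self.relevance(solution) and not self.satisfaction(solution):
            return self.feedback
        return None

# Hypothetical SQL-domain constraint.
c = Constraint(
    relevance=lambda s: "WHERE" in s,
    satisfaction=lambda s: "=" in s or "LIKE" in s,
    feedback="A WHERE clause needs a comparison such as = or LIKE.",
)
print(c.check("SELECT name FROM students WHERE"))
```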

A Teaching Strategies Engine Using Translation from SWRL to Jess

Eric Wang; Yong Se Kim

Within an intelligent tutoring system framework, the teaching strategy engine stores and executes teaching strategies. A teaching strategy is a kind of procedural knowledge, generically an if-then rule that queries the learner’s state and performs teaching actions. We develop a concrete implementation of a teaching strategy engine based on an automatic conversion from SWRL to Jess. This conversion consists of four steps: (1) SWRL rules are written using Protégé’s SWRLTab editor; (2) the SWRL rule portions of Protégé’s OWL file format are converted to SWRLRDF format via an XSLT stylesheet; (3) SweetRules converts SWRLRDF to CLIPS/Jess format; (4) syntax-based transformations are applied using Jess meta-programming to provide certain extensions to SWRL syntax. The resulting rules are then added to the Jess run-time environment. We demonstrate this system by implementing a scenario with a set of learning contents and rules, and showing the run-time interaction with a learner.

- Authoring Tools | Pp. 51-60
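Step (2) of the pipeline above is an ordinary XSLT transformation over the Protégé OWL file. As a rough sketch of how that step could be driven from Python, the snippet below uses lxml; the file names are placeholders, and the actual stylesheet is the authors' own and not reproduced here.

```python
# Sketch of pipeline step (2): extract SWRL rule content from a Protégé
# OWL file by applying an XSLT stylesheet. File paths are placeholders.
from lxml import etree

def owl_to_swrlrdf(owl_path, xslt_path, out_path):
    stylesheet = etree.XSLT(etree.parse(xslt_path))  # compile the stylesheet
    result = stylesheet(etree.parse(owl_path))       # transform OWL -> SWRLRDF
    result.write(out_path, pretty_print=True)

# Steps (3)-(4) would then hand the SWRLRDF output to SweetRules for
# conversion to CLIPS/Jess, followed by Jess-side meta-programming.
owl_to_swrlrdf("strategies.owl", "owl_to_swrlrdf.xsl", "strategies.swrlrdf")
```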

The Cognitive Tutor Authoring Tools (CTAT): Preliminary Evaluation of Efficiency Gains

Vincent Aleven; Bruce M. McLaren; Jonathan Sewall; Kenneth R. Koedinger

Intelligent Tutoring Systems have been shown to be effective in a number of domains, but they remain hard to build, with estimates of 200-300 hours of development per hour of instruction. Two goals of the Cognitive Tutor Authoring Tools (CTAT) project are to (a) make tutor development more efficient for both programmers and non-programmers and (b) produce scientific evidence indicating which tool features lead to improved efficiency. CTAT supports development of two types of tutors, Cognitive Tutors and Example-Tracing Tutors, which represent different trade-offs in terms of ease of authoring and generality. In preliminary small-scale controlled experiments involving basic Cognitive Tutor development tasks, we found that CTAT made development 1.4 to 2 times faster. We expect that continued development of CTAT, informed by repeated evaluations involving increasingly complex authoring tasks, will lead to further efficiency gains.

- Authoring Tools | Pp. 61-70
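To put the abstract's two figures together: applying the reported 1.4x-2x speedup to the 200-300 hour baseline gives roughly 100-214 development hours per hour of instruction, as the quick computation below shows.

```python
# Quick arithmetic combining the abstract's figures: baseline authoring cost
# of 200-300 hours per instruction hour, reduced by a 1.4x-2x speedup.
for hours in (200, 300):
    for speedup in (1.4, 2.0):
        print(f"{hours} h at {speedup}x -> {hours / speedup:.0f} h")
```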

A Bayesian Network Approach for Modeling the Influence of Contextual Variables on Scientific Problem Solving

Ronald H. Stevens; Vandana Thadani

A challenge for intelligent tutoring is to develop methodologies for transforming streams of performance data into insights and models about underlying learning mechanisms. Such modeling at different points in time could provide evidence of a student’s changing understanding of a task, and given sufficient detail, could extend our understanding of how gender, prior achievement, classroom practices and other student/contextual characteristics differentially influence performance and participation in complex problem-solving environments. If the models had predictive properties, they could also provide a framework for directing feedback to improve learning.

In this paper we describe the causal relationships between students' problem-solving effectiveness (i.e., reaching a correct solution), their strategy (i.e., approach), and multiple contextual variables including experience, gender, classroom environment, and task difficulty. Performances on the IMMEX problem set (n ~ 33,000) were first modeled by Item Response Theory analysis to provide a measure of effectiveness, and then by self-organizing artificial neural networks and hidden Markov modeling to provide measures of strategic efficiency. Correlation findings were then used to link the variables into a Bayesian network representation. Sensitivity analysis indicated that whether a problem was solved was most likely influenced by findings related to the problem under investigation and the classroom environment, while strategic approaches were most influenced by the actions taken, the classroom environment, and the number of problems previously performed. Subsequent testing with unknown performances indicated that strategic approaches were the most easily predicted (17% error rate), whereas whether the problem was solved was more difficult to predict (32% error rate).

- Bayesian Reasoning and Decision-Theoretic Approaches | Pp. 71-84
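The kind of query such a network answers, e.g. the probability a problem is solved given a contextual variable, reduces to marginalizing over unobserved nodes. A toy hand-rolled sketch follows; the network structure and every probability are invented for illustration, whereas the paper learns them from the ~33,000 IMMEX performances.

```python
# Hand-rolled sketch of a Bayesian-network query: P(solved | classroom),
# marginalizing over the unobserved strategy node. All numbers are invented.

# Toy conditional probability table: P(solved | strategy, classroom).
p_solved = {("efficient", "reform"): 0.85, ("efficient", "traditional"): 0.75,
            ("limited",   "reform"): 0.45, ("limited",   "traditional"): 0.35}
p_strategy = {"efficient": 0.6, "limited": 0.4}  # prior over strategy

def p_solved_given_classroom(classroom):
    # Sum out the unobserved strategy variable.
    return sum(p_strategy[s] * p_solved[(s, classroom)] for s in p_strategy)

print(p_solved_given_classroom("reform"))       # 0.6*0.85 + 0.4*0.45 = 0.69
print(p_solved_given_classroom("traditional"))  # 0.6*0.75 + 0.4*0.35 = 0.59
```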

A Decision-Theoretic Approach to Scientific Inquiry Exploratory Learning Environment

Choo-Yee Ting; M. Reza Beik Zadeh; Yen-Kuan Chong

Although existing computer-based scientific inquiry learning environments have proven to benefit learners, effectively inferring learner states and intervening within these learning environments remains an open issue. To tackle this challenge, this article first addresses the learner modeling issue by proposing INQPRO. Second, aiming at effective modeling and intervention under uncertainty, a decision-theoretic approach is integrated into INQPRO. This approach allows INQPRO to compute a probabilistic assessment of the learner's scientific inquiry skills and domain knowledge, and subsequently to provide tailored hints. The article ends with an investigation of the accuracy of the proposed learner model, through a model walk-through with a human expert and a field-trial evaluation with 30 students.

- Bayesian Reasoning and Decision-Theoretic Approaches | Pp. 85-94
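The probabilistic assessment described above amounts to updating a belief about a learner's inquiry skill from observed exploratory actions, then conditioning hints on that belief. A minimal sketch follows; the likelihoods and the hint threshold are illustrative assumptions, not INQPRO's parameters.

```python
# Minimal sketch: Bayesian update of a skill belief from one observed
# exploratory action, then gating a tailored hint on the updated belief.
def update_skill_belief(prior, p_action_given_skill, p_action_given_no_skill):
    # Bayes' rule: P(skill | action).
    num = prior * p_action_given_skill
    return num / (num + (1 - prior) * p_action_given_no_skill)

belief = 0.5  # start uncertain about the inquiry skill
belief = update_skill_belief(belief, p_action_given_skill=0.8,
                             p_action_given_no_skill=0.3)
print(f"P(skill) = {belief:.2f}")  # 0.73 after one supportive observation

if belief < 0.6:  # low belief -> intervene with a tailored hint
    print("Hint: try varying one variable at a time.")
```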

A Bayes Net Toolkit for Student Modeling in Intelligent Tutoring Systems

Kai-min Chang; Joseph Beck; Jack Mostow; Albert Corbett

This paper describes an effort to model a student’s changing knowledge state during skill acquisition. Dynamic Bayes Nets (DBNs) provide a powerful way to represent and reason about uncertainty in time series data, and are therefore well-suited to model student knowledge. Many general-purpose Bayes net packages have been implemented and distributed; however, constructing DBNs often involves complicated coding effort. To address this problem, we introduce a tool called BNT-SM. BNT-SM inputs a data set and a compact XML specification of a Bayes net model hypothesized by a researcher to describe causal relationships among student knowledge and observed behavior. BNT-SM generates and executes the code to train and test the model using the Bayes Net Toolbox [1]. Compared to the BNT code it outputs, BNT-SM reduces the number of lines of code required to use a DBN by a factor of 5. In addition to supporting more flexible models, we illustrate how to use BNT-SM to simulate Knowledge Tracing (KT) [2], an established technique for student modeling. The trained DBN does a better job of modeling and predicting student performance than the original KT code (Area Under Curve = 0.610 > 0.568), due to differences in how it estimates parameters.

- Bayesian Reasoning and Decision-Theoretic Approaches | Pp. 104-113
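Knowledge Tracing, which the abstract uses as its test case, is itself compact enough to sketch: after each response, update the probability the skill is known via Bayes' rule, then apply the learning transition. The parameter values below (initial knowledge, guess, slip, learn) are illustrative; BNT-SM estimates them from data.

```python
# Sketch of the Knowledge Tracing update that BNT-SM can express as a DBN.
def kt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
    # Posterior P(known) given the observed response (Bayes' rule)...
    if correct:
        post = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        post = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    # ...then the transition: an unknown skill may be learned this step.
    return post + (1 - post) * learn

p = 0.4  # P(L0), the initial knowledge estimate
for obs in [True, True, False, True]:
    p = kt_update(p, obs)
    print(f"P(known) = {p:.2f}")
```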

A Comparison of Decision-Theoretic, Fixed-Policy and Random Tutorial Action Selection

R. Charles Murray; Kurt VanLehn

A decision-theoretic tutor (DT), an ITS that uses decision theory to select tutorial actions, was compared with both a Fixed-Policy Tutor (FT) and a Random Tutor (RT). The tutors were identical except for the method they used to select tutorial actions: FT employed a common fixed policy, while RT selected randomly from relevant actions. This was the first comparison of a decision-theoretic tutor with a non-trivial competitor (FT). In a two-phase study, DT's probabilities were first learned from a training set of student interactions with RT. Then a panel of judges rated the actions that RT took, along with the actions that DT and FT would have taken in identical situations. DT was rated higher than RT, and also higher than FT both overall and for all subsets of scenarios except help requests, for which DT's and FT's ratings were equivalent.

- Bayesian Reasoning and Decision-Theoretic Approaches | Pp. 114-123
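The contrast among the three selectors is easy to make concrete: RT samples uniformly, FT always applies one rule, and DT picks the action maximizing expected utility under a belief about the student's state. The sketch below is illustrative only; the actions, states, utilities, and belief are invented, not the study's model.

```python
# Sketch of the three tutorial-action selectors compared in the study.
import random

ACTIONS = ["hint", "explain", "ask_question"]
BELIEF = {"confused": 0.7, "progressing": 0.3}  # P(student state), invented
UTILITY = {("hint", "confused"): 0.6,        ("hint", "progressing"): 0.4,
           ("explain", "confused"): 0.8,     ("explain", "progressing"): 0.2,
           ("ask_question", "confused"): 0.3, ("ask_question", "progressing"): 0.9}

def random_tutor():                 # RT: any relevant action, uniformly
    return random.choice(ACTIONS)

def fixed_policy_tutor():           # FT: one common fixed policy
    return "hint"

def decision_theoretic_tutor():     # DT: maximize expected utility
    eu = lambda a: sum(p * UTILITY[(a, s)] for s, p in BELIEF.items())
    return max(ACTIONS, key=eu)

print(random_tutor(), fixed_policy_tutor(), decision_theoretic_tutor())
# DT picks "explain": EU 0.62 beats "hint" (0.54) and "ask_question" (0.48).
```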