ctrlnum article-11785
fullrecord <?xml version="1.0"?> <dc schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd"><title lang="en-US">A scoring rubric for automatic short answer grading system</title><creator>Hasanah, Uswatun; STMIK Amikom Purwokerto</creator><creator>Permanasari, Adhistya Erna; Gadjah Mada University</creator><creator>Kusumawardani, Sri Suning; Gadjah Mada University</creator><creator>Pribadi, Feddy Setio; Gadjah Mada University</creator><subject lang="en-US">automatic scoring; keyword matching; short answer; string-based similarity</subject><description lang="en-US">Over the past decades, research on automatic grading has become an interesting topic. These studies focus on how machines can help humans assess students' learning outcomes. Automatic grading enables teachers to assess students' answers more objectively, consistently, and quickly. Essay questions in particular come in two types: long essays and short answers. Most previous research has developed automatic essay grading (AEG) rather than automatic short answer grading (ASAG). This study aims to assess the sentence similarity of short answers to the questions and reference answers in Indonesian without any semantic language tool. The pre-processing steps consist of case folding, tokenization, stemming, and stopword removal. The proposed approach is a scoring rubric obtained by measuring sentence similarity with string-based similarity methods combined with a keyword matching process. The dataset used in this study consists of 7 questions, 34 alternative reference answers, and 224 student answers. The experimental results show that the proposed approach achieves a Pearson correlation between 0.65419 and 0.66383, with a Mean Absolute Error (MAE) between 0.94994 and 1.24295. The proposed approach also increases the correlation value and decreases the error value for each method.</description><publisher lang="en-US">Universitas Ahmad Dahlan</publisher><contributor lang="en-US"/><date>2019-04-01</date><type>Journal:Article</type><type>Other:info:eu-repo/semantics/publishedVersion</type><type>Other:</type><type>File:application/pdf</type><identifier>http://journal.uad.ac.id/index.php/TELKOMNIKA/article/view/11785</identifier><identifier>10.12928/telkomnika.v17i2.11785</identifier><source lang="en-US">TELKOMNIKA (Telecommunication Computing Electronics and Control); Vol 17, No 2: April 2019; 763-770</source><source>2302-9293</source><source>1693-6930</source><source>10.12928/telkomnika.v17i2</source><language>eng</language><relation>http://journal.uad.ac.id/index.php/TELKOMNIKA/article/view/11785/6432</relation><rights lang="0">Copyright (c) 2020 Universitas Ahmad Dahlan</rights><rights lang="0">http://creativecommons.org/licenses/by-sa/4.0</rights><recordID>article-11785</recordID></dc>
language eng
format Journal:Article
Journal
Other:info:eu-repo/semantics/publishedVersion
Other
Other:
File:application/pdf
File
Journal:eJournal
author Hasanah, Uswatun; STMIK Amikom Purwokerto
Permanasari, Adhistya Erna; Gadjah Mada University
Kusumawardani, Sri Suning; Gadjah Mada University
Pribadi, Feddy Setio; Gadjah Mada University
title A scoring rubric for automatic short answer grading system
publisher Universitas Ahmad Dahlan
publishDate 2019
topic automatic scoring
keyword matching
short answer
string-based similarity
url http://journal.uad.ac.id/index.php/TELKOMNIKA/article/view/11785
http://journal.uad.ac.id/index.php/TELKOMNIKA/article/view/11785/6432
contents Over the past decades, research on automatic grading has become an interesting topic. These studies focus on how machines can help humans assess students' learning outcomes. Automatic grading enables teachers to assess students' answers more objectively, consistently, and quickly. Essay questions in particular come in two types: long essays and short answers. Most previous research has developed automatic essay grading (AEG) rather than automatic short answer grading (ASAG). This study aims to assess the sentence similarity of short answers to the questions and reference answers in Indonesian without any semantic language tool. The pre-processing steps consist of case folding, tokenization, stemming, and stopword removal. The proposed approach is a scoring rubric obtained by measuring sentence similarity with string-based similarity methods combined with a keyword matching process. The dataset used in this study consists of 7 questions, 34 alternative reference answers, and 224 student answers. The experimental results show that the proposed approach achieves a Pearson correlation between 0.65419 and 0.66383, with a Mean Absolute Error (MAE) between 0.94994 and 1.24295. The proposed approach also increases the correlation value and decreases the error value for each method.
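Note: the abstract above describes a pipeline of pre-processing (case folding, tokenization, stemming, stopword removal) followed by string-based similarity and keyword matching against alternative reference answers. The Python sketch below illustrates only that general idea and is not the authors' implementation: the stopword list, the token-cleaning rules, the use of difflib.SequenceMatcher as the string-based similarity measure, and the equal weighting of similarity and keyword coverage are all illustrative assumptions.

    # Minimal sketch of a rubric-style short-answer scorer: normalise the text,
    # compute a string-based similarity against reference answers, and blend it
    # with a keyword-coverage score. Stopwords, weighting, and the similarity
    # measure are assumptions for illustration, not the paper's actual settings.
    from difflib import SequenceMatcher

    STOPWORDS = {"yang", "dan", "di", "ke", "dari"}  # assumed tiny stopword list

    def preprocess(text: str) -> list[str]:
        """Case folding, naive tokenization, and stopword removal.
        (A real pipeline would also apply an Indonesian stemmer.)"""
        tokens = text.lower().split()
        return [t.strip(".,!?;:") for t in tokens if t not in STOPWORDS]

    def string_similarity(answer: str, reference: str) -> float:
        """One possible string-based similarity: normalized edit-distance ratio."""
        a = " ".join(preprocess(answer))
        b = " ".join(preprocess(reference))
        return SequenceMatcher(None, a, b).ratio()

    def keyword_coverage(answer: str, keywords: list[str]) -> float:
        """Fraction of rubric keywords found in the student's answer."""
        tokens = set(preprocess(answer))
        if not keywords:
            return 0.0
        return sum(1 for k in keywords if k.lower() in tokens) / len(keywords)

    def rubric_score(answer: str, references: list[str], keywords: list[str],
                     max_score: float = 5.0) -> float:
        """Take the best similarity over the alternative reference answers,
        blend it with keyword coverage, and scale to the rubric maximum."""
        sim = max(string_similarity(answer, r) for r in references)
        combined = 0.5 * sim + 0.5 * keyword_coverage(answer, keywords)
        return round(combined * max_score, 2)

    # Example usage with made-up data:
    refs = ["CPU adalah unit pemrosesan pusat pada komputer"]
    print(rubric_score("cpu merupakan unit pemrosesan pusat", refs,
                       keywords=["cpu", "pemrosesan", "pusat"]))

Taking the maximum similarity over the alternative reference answers mirrors the idea of grading against several acceptable formulations of the same answer; the 0.5/0.5 blend is simply one way to combine the two signals into a single rubric score.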
id IOS160.article-11785
institution Universitas Ahmad Dahlan
institution_id 62
institution_type library:university
library
library Perpustakaan Universitas Ahmad Dahlan
library_id 467
collection Bulletin of Electrical Engineering and Informatics
repository_id 160
subject_area Rekayasa (Engineering)
city KOTA YOGYAKARTA
province DAERAH ISTIMEWA YOGYAKARTA
repoId IOS160
first_indexed 2019-05-02T14:48:05Z
last_indexed 2020-07-20T01:42:21Z
recordtype dc
merged_child_boolean 1
_version_ 1720571383149232128
score 17.204899