
Showing papers on "Plagiarism detection published in 2005"


Journal ArticleDOI
01 Sep 2005
TL;DR: This paper describes how automated assessment is incorporated into BOSS such that it supports, rather than constrains, assessment and the pedagogic and administrative issues that are affected by the assessment process.
Abstract: Computer programming lends itself to automated assessment. With appropriate software tools, program correctness can be measured, along with an indication of quality according to a set of metrics. Furthermore, the regularity of program code allows plagiarism detection to be an integral part of the tools that support assessment. In this paper, we describe a submission and assessment system, called BOSS, that supports coursework assessment through collecting submissions, performing automatic tests for correctness and quality, checking for plagiarism, and providing an interface for marking and delivering feedback. We describe how automated assessment is incorporated into BOSS such that it supports, rather than constrains, assessment. The pedagogic and administrative issues that are affected by the assessment process are also discussed.
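As a rough, hedged illustration of the kind of automatic correctness testing described above (not the actual BOSS implementation), the sketch below runs a submitted program against a small set of input/output test cases; the submission file name, the test cases and the timeout are hypothetical.

# Minimal sketch of automated correctness checking in the spirit of BOSS;
# not the actual BOSS implementation. Submission path, test cases and the
# timeout are hypothetical examples.
import subprocess

TEST_CASES = [            # (stdin, expected stdout) pairs -- illustrative only
    ("2 3\n", "5\n"),
    ("10 -4\n", "6\n"),
]

def run_tests(submission="submission.py", timeout=5):
    """Run a student's program on each test case and count the passes."""
    passed = 0
    for stdin_data, expected in TEST_CASES:
        try:
            result = subprocess.run(
                ["python", submission], input=stdin_data,
                capture_output=True, text=True, timeout=timeout)
            if result.stdout == expected:
                passed += 1
        except subprocess.TimeoutExpired:
            pass                                # a hang counts as a failure
    return passed, len(TEST_CASES)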

289 citations


Journal Article
TL;DR: Plagiarism in writing essays is common among medical students and an explicit warning is not enough to deter students from plagiarism, according to a study of second-year medical students attending a Medical Informatics course.
Abstract: Aim. To determine the prevalence of plagiarism among medical students in writing essays. Methods. During two academic years, 198 second-year medical students attending a Medical Informatics course wrote an essay on one of four offered articles. Two of the source articles were available in electronic form and two in printed form. Two (one electronic and one paper article) were considered less complex and the other two more complex. The essays were examined using the plagiarism detection software “WCopyfind,” which counted the number of words in matching phrases of six or more words. The plagiarism rate, expressed as the percentage of plagiarized text, was calculated as the ratio of the absolute number of matching words to the total number of words in the essay. Results. Only 17 (9%) of students did not plagiarize at all and 68 (34%) plagiarized less than 10% of the text. The average plagiarism rate (% of plagiarized text) was 19% (5-95% percentile=0-88). Students who were strictly warned not to plagiarize had a higher total word count in their essays than students who were not warned (P=0.002), but there was no difference between them in the rate of plagiarism. Students with higher grades in the Medical Informatics exam plagiarized less than those with lower grades (P=0.015). Gender, subject source, and complexity had no influence on the plagiarism rate. Conclusions. Plagiarism in writing essays is common among medical students. An explicit warning is not enough to deter students from plagiarism. Detection software can be used to trace and evaluate the rate of plagiarism in written student essays.
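The plagiarism rate above is defined as the number of words falling inside matching phrases of six or more words, divided by the essay's total word count. A minimal sketch of that calculation is given below; it is not the WCopyfind implementation, and the simple tokenisation is an assumption.

# Sketch of a WCopyfind-style plagiarism rate: the share of an essay's words
# that lie inside phrases of six or more words also present in the source.
# Tokenisation is simplified; this is not the WCopyfind implementation.
import re

PHRASE_LEN = 6   # minimum matching phrase length, as in the study

def words(text):
    return re.findall(r"[a-z']+", text.lower())

def plagiarism_rate(essay_text, source_text):
    essay, source = words(essay_text), words(source_text)
    source_ngrams = {tuple(source[i:i + PHRASE_LEN])
                     for i in range(len(source) - PHRASE_LEN + 1)}
    covered = set()
    for i in range(len(essay) - PHRASE_LEN + 1):
        if tuple(essay[i:i + PHRASE_LEN]) in source_ngrams:
            covered.update(range(i, i + PHRASE_LEN))   # mark matched positions
    return 100.0 * len(covered) / len(essay) if essay else 0.0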

118 citations


Journal ArticleDOI
TL;DR: In this paper, the authors evaluated an online plagiarism detection system to determine whether (a) it would be practical to use in an academic setting and (b) it had an effect on student plagiarism.
Abstract: In this study, the author evaluated an online plagiarism detection system to determine whether (a) it would be practical to use in an academic setting and (b) it would have an effect on student plagiarism. The author analyzed graduate student papers for plagiarism over the course of 5 semesters. Students in the last 3 semesters plagiarized significantly less than did students in the 1st semester, suggesting that students' awareness of the system and its use by the instructor may have acted as a deterrent to plagiarism. Results showed that the system was a viable means to detect and discourage plagiarism in an academic environment. The author provides conclusions, limitations, and recommendations for faculty use of a plagiarism detection system.

80 citations


Journal ArticleDOI
TL;DR: A clustering oriented approach for facing the problem of source code plagiarism, designed such that it may be easily adapted over any keyword-based programming language and it is quite beneficial when compared with earlier plagiarism detection approaches.
Abstract: Efficient detection of plagiarism in programming assignments of students is of a great importance to the educational procedure. This paper presents a clustering oriented approach for facing the problem of source code plagiarism. The implemented software, called PDetect, accepts as input a set of program sources and extracts subsets (the clusters of plagiarism) such that each program within a particular subset has been derived from the same original. PDetect proposes the use of an appropriate measure for evaluating plagiarism detection performance and supports the idea of combining different plagiarism detection schemes. Furthermore, a cluster analysis is performed in order to provide information beneficial to the plagiarism detection process. PDetect is designed such that it may be easily adapted over any keyword-based programming language and it is quite beneficial when compared with earlier (state-of-the-art) plagiarism detection approaches.
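As a hedged sketch of the clustering idea described above (not the PDetect algorithm itself), the code below builds keyword-frequency profiles for a set of program sources, links pairs whose cosine similarity exceeds a threshold, and reports the connected components as candidate plagiarism clusters; the keyword list, similarity measure and threshold are illustrative assumptions.

# Illustrative sketch of clustering-oriented source-code plagiarism detection;
# the keyword list, similarity measure and threshold are assumptions, and this
# is not the PDetect algorithm itself.
from collections import Counter
from itertools import combinations
import math, re

KEYWORDS = ["if", "else", "for", "while", "return", "int", "void", "class"]

def profile(code):
    tokens = re.findall(r"[A-Za-z_]+", code)
    counts = Counter(t for t in tokens if t in KEYWORDS)
    return [counts[k] for k in KEYWORDS]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def plagiarism_clusters(programs, threshold=0.95):
    """programs: dict of name -> source text; returns groups of linked names."""
    profs = {name: profile(src) for name, src in programs.items()}
    parent = {name: name for name in programs}          # simple union-find
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in combinations(programs, 2):
        if cosine(profs[a], profs[b]) >= threshold:
            parent[find(a)] = find(b)                   # link similar pair
    groups = {}
    for name in programs:
        groups.setdefault(find(name), []).append(name)
    return [g for g in groups.values() if len(g) > 1]   # clusters of plagiarism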

67 citations


Journal ArticleDOI
TL;DR: A new and alternative set of classifications, based primarily around the types of metrics the engines use, is proposed, intended to allow detection engines to be discussed and compared without ambiguity.
Abstract: Plagiarism detection engines are programs that compare documents with possible sources in order to identify similarity and so discover student submissions that might be plagiarised. They ar...

56 citations


Book ChapterDOI
02 Nov 2005
TL;DR: There is a minimum level of acceptable performance for the application of detecting student plagiarism, and it would be useful if the detector operated at a level such that fooling the algorithm would require the student to spend a large amount of time on the assignment.
Abstract: The large class sizes typical for an undergraduate programming course mean that it is nearly impossible for a human marker to accurately detect plagiarism, particularly if some attempt has been made to hide the copying. While it would be desirable to be able to detect all possible code transformations, we believe that there is a minimum level of acceptable performance for the application of detecting student plagiarism. It would be useful if the detector operated at a level such that producing a piece of work that fools the algorithm would require the student to spend a large amount of time on the assignment and to understand the work well enough to complete it without plagiarising.

54 citations


Journal ArticleDOI
TL;DR: The authors argue that plagiarism detection systems are often implemented with inappropriate assumptions about plagiarism and the way in which new members of a community of practice develop the skills to become full members of that community.
Abstract: This paper argues that the inappropriate framing and implementation of plagiarism detection systems in UK universities can unwittingly construct international students as ‘plagiarists’. It argues that these systems are often implemented with inappropriate assumptions about plagiarism and the way in which new members of a community of practice develop the skills to become full members of that community. Drawing on the literature and some primary data it shows how expectations, norms and practices become translated and negotiated in such a way that legitimate attempts to conform with the expectations of the community of practice often become identified as plagiarism and illegitimate attempts at cheating often become obscured from view. It argues that this inappropriate framing and implementation of plagiarism detection systems may make academic integrity more illusive rather than less. It argues that in its current framing – as systems for ‘detection and discipline’ – plagiarism detection systems may become a new micro-politics of power with devastating consequences for those excluded.

52 citations


Proceedings ArticleDOI
29 Jun 2005
TL;DR: A set of low-level syntactic structures that capture creative aspects of writing is presented, and it is shown that information about linguistic similarities of works improves recognition of plagiarism (over tfidf-weighted keywords alone) when combined with similarity measurements based on tfidf-weighted keywords.
Abstract: Using keyword overlaps to identify plagiarism can result in many false negatives and positives: substitution of synonyms for each other reduces the similarity between works, making it difficult to recognize plagiarism; overlap in ambiguous keywords can falsely inflate the similarity of works that are in fact different in content. Plagiarism detection based on verbatim similarity of works can be rendered ineffective when works are paraphrased even in superficial and immaterial ways. Considering linguistic information related to creative aspects of writing can improve identification of plagiarism by adding a crucial dimension to evaluation of similarity: documents that share linguistic elements in addition to content are more likely to be copied from each other. In this paper, we present a set of low-level syntactic structures that capture creative aspects of writing and show that information about linguistic similarities of works improves recognition of plagiarism (over tfidf-weighted keywords alone) when combined with similarity measurements based on tfidf-weighted keywords.
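A hedged sketch of the combination described above follows: a tf-idf keyword similarity is blended with a similarity computed over crude stylistic/syntactic features. The feature set and the equal weighting are assumptions, not the authors' exact features or weights.

# Sketch of combining tf-idf keyword similarity with a crude syntactic-profile
# similarity. The function-word/punctuation features and the 50/50 weighting
# are illustrative assumptions, not the paper's exact features.
import math, re
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "with", "as", "for"]

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def tfidf_vectors(token_docs):
    n = len(token_docs)
    df = Counter(t for doc in token_docs for t in set(doc))
    return [{t: tf * math.log((1 + n) / (1 + df[t]))
             for t, tf in Counter(doc).items()} for doc in token_docs]

def syntactic_profile(text):
    counts = Counter(tokens(text))
    prof = {w: float(counts[w]) for w in FUNCTION_WORDS}
    prof[","] = float(text.count(","))       # rough stand-ins for syntax
    prof[";"] = float(text.count(";"))
    return prof

def combined_similarity(corpus_texts, i, j, alpha=0.5):
    vecs = tfidf_vectors([tokens(t) for t in corpus_texts])
    keyword_sim = cosine(vecs[i], vecs[j])
    syntax_sim = cosine(syntactic_profile(corpus_texts[i]),
                        syntactic_profile(corpus_texts[j]))
    return alpha * keyword_sim + (1 - alpha) * syntax_sim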

51 citations


Journal Article
TL;DR: The results show that the Google search engine can be used to effectively and efficiently detect potential occurrences of plagiarism in some master's theses, and the method described in the study could be used by theses advisors and other faculty as an alternative to anti-plagiarism software packages.
Abstract: The effectiveness and efficiency of the Google search engine for detecting potential occurrences of word-for-word plagiarism in master's theses were investigated. 210 electronic master's theses from a sample of 260 completed in 2003 were examined. Undocumented phrases from each thesis were searched against the World Wide Web using the Google search engine. Exact phrases from each thesis were searched for 10 minutes. Matches--or potential occurrences of plagiarism--were found in 27.14% of the theses searched. Matches were found on or before the first numbered page in 16 of the 57 theses containing suspect passages. The average time for finding a match was 3.8 minutes. The results show that the Google search engine can be used to effectively and efficiently detect potential occurrences of plagiarism in some master's theses. The method described in the study could be used by thesis advisors and other faculty as an alternative to anti-plagiarism software packages. Further investigation is needed to determine whether Google's effectiveness is consistent across varied academic disciplines. Comparative studies of Google and anti-plagiarism software and services are needed as well. ********** The purpose of this research was to explore Google's potential for detecting occurrences of word-for-word plagiarism in master's theses. The authors sought answers to these questions: 1. Is Google an effective tool for detecting plagiarism in master's theses? 2. Is Google an efficient tool for detecting plagiarism in master's theses? The first question relates to the nature of graduate research and the types of resources on the World Wide Web. Graduate level research in most academic disciplines requires extensive use of professional journals and monographs. Some of these materials are not available in electronic formats; those that are distributed electronically are often subscription-based and not freely available on the World Wide Web. Hence, it was unknown whether Google searches would retrieve sources of plagiarized material in master's theses (this research was conducted prior to the release of Google Scholar in November 2004). The second question stems from the authors' interest in determining whether Google might provide a relatively fast--10 minutes or less--mechanism for thesis advisors interested in checking suspect passages of a thesis draft. The process of using search engines and periodical databases to detect plagiarism in student papers has been described by others (e.g. Ryan, 2000; Lathrop & Foss, 2000; Marshall, 1998). However, most published material on this topic is anecdotal and focuses on plagiarism in high school and undergraduate student papers. A literature search produced no studies on the effectiveness of Google or other search engines for detecting plagiarism in master's theses. Some universities are investing in anti-plagiarism software and services such as Turn-It-In to combat academic dishonesty. Plagiarism detection services typically require student papers to be submitted to professors in electronic format. Professors then submit the papers to the software company, which runs the paper against its own database of online resources. The professor then receives reports from the company detailing which papers appear to contain plagiarism. While plagiarism detection software and services offer many benefits, they are not free.
Moreover, some institutions are reluctant to use plagiarism detection software and services due to concerns about students' intellectual property and privacy rights--particularly since some companies add the content of submitted papers to their database. This practice raises concerns, even though companies such as Turn-It-In pledge to protect the content of submitted papers and do not make it available to customers (http://www.turnitin.com/static/legal/legal_document.html). The consequences of plagiarism for students and institutions, the increased availability of graduate theses, and the need for alternatives to commercial plagiarism detection software prompted this investigation. …
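The procedure above amounts to taking undocumented phrases from a thesis and running them as exact (quoted) searches. The sketch below only prepares such quoted queries from a text; the phrase length and sampling step are assumptions, and the searching itself is left to whatever interface the reader has, since the study used manual Google searches rather than an API.

# Sketch of preparing exact-phrase queries from a thesis, in the spirit of the
# study's manual Google searches. Phrase length and sampling step are assumed;
# no search API is called here.
import re

def candidate_phrases(text, phrase_len=8, step=200):
    """Take an n-word phrase every `step` words as a spot-check query."""
    words = re.findall(r"\S+", text)
    queries = []
    for start in range(0, max(len(words) - phrase_len, 0) + 1, step):
        phrase = " ".join(words[start:start + phrase_len])
        queries.append('"%s"' % phrase)     # quotes force an exact-match search
    return queries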

51 citations


Proceedings ArticleDOI
13 Mar 2005
TL;DR: A software tool that supports the detection of plagiarism is proposed; it takes as input two sufficiently long pieces of English text under the hypothesis that they stem from different authors and raises a plagiarism warning if significant stylistic similarities in the two texts can be found.
Abstract: We propose a software tool that supports the detection of plagiarism. The application domain of the tool is those cases in which a stylometric approach appears viable. The tool is language-sensitive. It takes as input two sufficiently long pieces of English text under the hypothesis that they stem from different authors, and it raises a plagiarism warning if significant stylistic similarities in the two texts can be found.
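As a hedged illustration of one common stylometric comparison (not necessarily the design of the tool described above), the sketch below compares function-word frequency profiles of two texts with a chi-squared-style distance and raises a warning when the distance falls below an assumed threshold.

# Hedged sketch of a stylometric comparison: two supposedly independent texts
# whose function-word profiles are unusually close trigger a warning. The word
# list, distance and threshold are illustrative assumptions.
import re
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "a", "to", "in", "is", "that", "it",
                  "was", "for", "on", "with", "as", "but", "not", "this"]

def relative_freqs(text):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def style_distance(text_a, text_b):
    fa, fb = relative_freqs(text_a), relative_freqs(text_b)
    # chi-squared-style distance over relative frequencies
    return sum((x - y) ** 2 / (x + y) for x, y in zip(fa, fb) if x + y > 0)

def plagiarism_warning(text_a, text_b, threshold=0.002):
    """Warn when the two texts are stylistically too close to be independent."""
    return style_distance(text_a, text_b) < threshold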

45 citations


01 Jan 2005
TL;DR: Tertiary induction of new students needs to focus on developing an appreciation of the culture of enquiry that characterises learning at the tertiary level and that success is more likely if the students' goal is something positive: to achieve a new approach to learning, than if it is something negative: to avoid 'committing' plagiarism.
Abstract: The increasing ease of detecting internet plagiarism has intensified debate in Australia, as well as the UK and the USA, on effective deterrents in the face of increasing evidence of plagiarism. Many universities are re-vamping their plagiarism policies and some conferences have themes entirely devoted to the subject of academic integrity. Policies and conference discussions relating to academic values and integrity have focussed on improved information on the rules of citation and attribution, coupled with more systematic vigilance and disciplinary procedures. The literature has also become increasingly insistent that information on rules of citation and attribution needs to be coupled with an appropriate apprenticeship into the conventions and language of academic writing. Yet there is a first step that is still being overlooked, the initial induction of students into the research-led, evidence-based culture of academic endeavour. By focussing on rules and strategies for avoiding plagiarism, but ignoring the basic reasons for these requirements, we have put the cart before the horse. This paper suggests that tertiary induction of new students needs to focus firstly on developing an appreciation of the culture of enquiry that characterises learning at the tertiary level and that success is more likely if the students' goal is something positive: to achieve a new approach to learning, than if it is something negative: to avoid 'committing' plagiarism.

Journal ArticleDOI
23 Feb 2005
TL;DR: A new technique was used to analyse how students plagiarise programs in an introductory programming course, placing a watermark on a student's program and monitoring programs for the watermark during assignment submission, and it emerged that the recipient students performed significantly worse than the suppliers.
Abstract: We used a new technique to analyse how students plagiarise programs in an introductory programming course. This involved placing a watermark on a student's program and monitoring programs for the watermark during assignment submission. We obtained and analysed extensive and objective data on student plagiarising behaviour. In contrast to the standard plagiarism detection approaches based on pair comparison, the watermark based approach allows us to distinguish between the supplier and the recipient of the code. This gives us additional insight into student behaviour. We found that the dishonest students did not perform significantly worse than the honest students in the exams. However, when dishonest students are further classified into supplier and recipient, it emerged that the recipient students performed significantly worse than the suppliers.
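The watermarking mechanism itself is not spelled out in the abstract, so the sketch below is only an assumed illustration of the general idea: each student's copy of the assignment skeleton carries a distinct, innocuous-looking tag, and submissions are checked for whose tag they contain, which separates suppliers from recipients. The SESSION_SEED placeholder and the hashing scheme are hypothetical.

# Assumed illustration only -- not the paper's watermarking scheme. Each
# student's skeleton carries a distinct tag hidden in a plausible-looking
# constant; at submission time we check whose tag a program contains.
import hashlib

def watermark_for(student_id):
    """Derive a stable 8-hex-digit tag from a student id."""
    return hashlib.sha256(student_id.encode()).hexdigest()[:8]

def embed_watermark(skeleton_code, student_id):
    # Assumes the skeleton contains the hypothetical placeholder below.
    return skeleton_code.replace("SESSION_SEED = 0",
                                 "SESSION_SEED = 0x" + watermark_for(student_id))

def identify_watermarks(submitted_code, roster):
    """Return the students whose tag appears in a submission."""
    return [sid for sid in roster if watermark_for(sid) in submitted_code]

# If student B's submission carries student A's tag, A is the likely supplier
# and B the recipient -- the distinction the study's analysis relies on.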

Journal ArticleDOI
TL;DR: In this article, a case of serial plagiarism in the work of a graduate student in an online distance education program is discussed, along with the complexity of the student's thinking and the manner in which the case was handled by the teacher and the university.
Abstract: The ease with which material may be ‘copied and pasted’ from the Internet into written work is raising concern in educational institutions, and particularly in those disciplines that use online sources and methods in their curriculum. A case of ‘serial plagiarism’ is discussed, in the work of a graduate student in an online distance education program. The complexity of the student’s thinking is emphasized, and the manner in which the case was handled by the teacher and the university. The use of an online plagiarism‐checking technology (Turnitin.com) and the value of such services are discussed. The case illustrates the importance of explaining the precise nature of plagiarism to students, of providing clear warnings about its consequences and of developing a careful institutional approach to plagiarism detection and prevention.

Journal Article
TL;DR: In this article, the authors explore the plagiarism dilemma from a librarian's vantage point, and outline the strong support that has been offered to teaching faculty with plagiarism problems by the Joan and Donald E. Axinn Library of Hofstra University.
Abstract: Introduction The proliferation of student plagiarism on university campuses is paralleled by the increasing number of articles appearing in academic journals presenting varying opinions on the topic. Opinions run the gamut from outrage at the student offenders to pointing fingers at faculty members who fail to create plagiarism-proof assignments. One also reads about controversial new methods for deterring and detecting plagiarism, most notably, online plagiarism detection systems. In surveying the literature, one can construct valid arguments for each point of view. This paper will explore the plagiarism dilemma from a librarian's vantage point, and will outline the strong support that has been offered to teaching faculty with plagiarism problems by the Joan and Donald E. Axinn Library of Hofstra University. It will also examine how Hofstra University decided to subscribe to Turnitin.com (www.turnitin.com), a popular but controversial online plagiarism detection system. As librarians, we know that detection is not the main objective in a campaign against plagiarism. Rather, universities should concentrate on educating students as to what constitutes plagiarism and how to avoid it. Consequently, as our last point we will summarize how Hofstra librarians are reaching out to both faculty and students in order to inform them about this fundamental concern. This paper will not necessarily offer the definitive philosophical answer to solving the plagiarism dilemma, but will attempt to convey a "reality" account of how we have dealt with student plagiarism at Hofstra University. Overview of the Plagiarism Problem Hofstra University is a mid-sized liberal arts university on Long Island with approximately 10,000 full- and part-time undergraduate students and about 3,700 graduate students. In addition, the Hofstra University School of Law has an enrollment of 1,700. In recent years, Hofstra, like other universities, has watched as students became adept at cutting and pasting from the Web, or purchasing papers from paper mills. Part of the dilemma is that many students are unfamiliar with what determines plagiarism and they stumble into it unawares, not only because they have never learned how to use sources, but sometimes because they have been taught that research means plagiarism (White 205). This sense of vagueness is exacerbated by the fact that, with the advent of the Internet, students have unlimited access to information. Additionally, the need for high GPAs to gain entrance to prestigious graduate schools creates an atmosphere of "anything goes" when it comes to completing research assignments. Even a school such as the University of Virginia, long noted for its honor system, has fallen victim to cheating scandals. When confronted with the possibility that some of his students might have plagiarized, Professor Louis Bloomfield of UVA devised a computer program that detected students who had used "recycled" papers from his previous classes. He discovered that 158 of the 500 students in his Physics 105-106 class had cheated (Cullen 2002). This discouraging incident highlights the extent of the plagiarism problem and it also underscores the fact that students' thirst for knowledge has been replaced by a quest for good grades. The problem is so huge that the popular media is now focusing attention to it. The CBS television news program 60 Minutes devoted a segment to cheaters and Professor Donald L. 
McCabe, founder of the Center for Academic Integrity (www.academicintegrity.org/), told Morley Safer that pressure has turned competitive schools like UVA into academic rat races. In addition to academic pressure, there is the general slackening of ethical codes in society that seems to give the students the go-ahead to succeed at any cost. Students hear of noted historians who have plagiarized, corporate accountants who have cooked the books, and alleged plagiarized material from the Internet being presented recently at a critical United Nations session on Iraq; sadly, they see no harm in a little cheating on their part. …

Journal ArticleDOI
TL;DR: The majority indicated that the use of Turnitin had helped them to reference correctly and write assignments in their own words, but only a minority had gained a clearer understanding of the definition of plagiarism.
Abstract: Introduction: Detecting and preventing academic dishonesty (cheating and plagiarism) is an issue for scholars. The aim of this study was to explore pharmacy students' views on the use of Turnitin, an online plagiarism detection tool. Methods: All students in Years 3 and 4 of the BPharm course at the School of Pharmacy, the University of Auckland, were asked to complete an anonymous questionnaire looking at a number of issues, including their views on using Turnitin and the penalties for those caught. Results: A 64% response rate was obtained. The majority indicated that the use of Turnitin had helped them to reference correctly and write assignments in their own words, but only a minority had gained a clearer understanding of the definition of plagiarism. Discussion: Students indicated wanting more feedback from tutors on the outcomes of submitting their work to Turnitin. Feedback from this study will be used to support the way in which Turnitin is used at the School. Further research is needed into the potential impact on learning outcomes.


Proceedings ArticleDOI
TL;DR: In this study, three attribution techniques are extended, tested on a corpus of English texts, and applied to a book in the New Testament of disputed authorship.
Abstract: Authorship attribution has a range of applications in a growing number of fields such as forensic evidence, plagiarism detection, email filtering, and web information management. In this study, three attribution techniques are extended, tested on a corpus of English texts, and applied to a book in the New Testament of disputed authorship. The word recurrence interval based method compares standard deviations of the number of words between successive occurrences of a keyword both graphically and with chi-squared tests. The trigram Markov method compares the probabilities of the occurrence of words conditional on the preceding two words to determine the similarity between texts. The third method extracts stylometric measures such as the frequency of occurrence of function words and from these constructs text classification models using multiple discriminant analysis. The effectiveness of these techniques is compared. The accuracy of the results obtained by some of these extended methods is higher than many of the current state of the art approaches. Statistical evidence is presented about the authorship of the selected book from the New Testament.
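As a hedged sketch of the second of these techniques, the trigram Markov method, the code below estimates word-trigram probabilities from an author's known text and scores a disputed text by its average log-probability under that model; the add-one smoothing and the file names in the comments are assumptions.

# Sketch of the trigram Markov idea: train word-trigram counts on an author's
# known texts, then compute the average log-probability of a disputed text
# under that model. Add-one smoothing is a simplifying assumption.
import math, re
from collections import Counter

def words(text):
    return re.findall(r"[a-z']+", text.lower())

class TrigramModel:
    def __init__(self, training_text):
        w = words(training_text)
        self.vocab = set(w)
        self.tri = Counter(zip(w, w[1:], w[2:]))
        self.bi = Counter(zip(w, w[1:]))

    def logprob(self, text):
        w = words(text)
        v = len(self.vocab) or 1
        total, n = 0.0, 0
        for a, b, c in zip(w, w[1:], w[2:]):
            p = (self.tri[(a, b, c)] + 1) / (self.bi[(a, b)] + v)  # smoothed
            total += math.log(p)
            n += 1
        return total / n if n else float("-inf")

# The candidate author whose model gives the disputed text the highest average
# log-probability is the best match under this method (file names hypothetical):
# model = TrigramModel(open("known_author_corpus.txt").read())
# print(model.logprob(open("disputed_book.txt").read()))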

Journal ArticleDOI
TL;DR: The results of this study suggest caution in the use of the Cloze procedure as a test of plagiarism, as difficult-to-read documents yielded significantly lower Cloze scores than their easier-to-read counterparts.
Abstract: The authors investigated whether scores on the Cloze procedure as a test of plagiarism would be significantly affected by the readability of text. Undergraduates were asked to either paraphrase or plagiarize legal documents. Approximately half of the participants in the paraphrase condition received documents that were difficult to read, whereas the other half received versions that were easy to read. About 2 weeks later, participants completed Cloze tests that were based on their rewritten paraphrased or plagiarized versions of the documents. As predicted, Cloze scores from the plagiarize condition were significantly lower than those from the paraphrase condition. However, difficult-to-read documents, whether they had been paraphrased or plagiarized, yielded significantly lower Cloze scores than their easier-to-read counterparts. In spite of some methodological shortcomings, the results of this study suggest caution in the use of the Cloze procedure as a test of plagiarism.
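For readers unfamiliar with the Cloze procedure, the sketch below shows how such a test can be constructed and scored from a participant's rewritten document: every n-th word is blanked and the participant's fill-ins are compared with the deleted words. The deletion interval and exact-match scoring are assumptions, not necessarily the authors' protocol.

# Sketch of constructing and scoring a Cloze test from a rewritten document.
# The deletion interval (5) and exact-match scoring are assumed parameters.
import re

def make_cloze(text, interval=5):
    """Return (test_text, answer_key) with every `interval`-th word blanked."""
    words = re.findall(r"\S+", text)
    answers, test_words = [], []
    for i, w in enumerate(words, start=1):
        if i % interval == 0:
            answers.append(w)
            test_words.append("_____")
        else:
            test_words.append(w)
    return " ".join(test_words), answers

def cloze_score(responses, answer_key):
    """Proportion of blanks filled with exactly the deleted word."""
    correct = sum(r.strip().lower() == a.strip().lower()
                  for r, a in zip(responses, answer_key))
    return correct / len(answer_key) if answer_key else 0.0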

05 Sep 2005
TL;DR: This paper considers a submission and assessment system, called BOSS, that supports coursework assessment through collecting submissions, performing automatic tests for correctness and quality, checking for plagiarism, and providing an interface for marking and delivering feedback.
Abstract: Computer programming lends itself to automated assessment. With appropriate software tools program correctness can be measured, along with an indication of quality according to a set of metrics. Furthermore, the regularity of program code allows plagiarism detection to be an integral part of the tools that support assessment. In this paper, we consider a submission and assessment system, called BOSS, that supports coursework assessment through collecting submissions, performing automatic tests for correctness and quality, checking for plagiarism, and providing an interface for marking and delivering feedback. We present the results of evaluating the tool from three perspectives - technical, usability, and pedagogy.

Journal ArticleDOI
TL;DR: A method for measuring perceived similarity of visual products which avoids previous problems with subjectivity, and which makes it possible to pool results from respondents without the need for intermediate coding is described.
Abstract: Web page design guidelines produce a pressure towards uniformity; excessive uniformity lays a Web page designer open to accusations of plagiarism. In the past, assessment of similarity between visual products such as Web pages has involved an uncomfortably high degree of subjectivity. This paper describes a method for measuring perceived similarity of visual products which avoids previous problems with subjectivity, and which makes it possible to pool results from respondents without the need for intermediate coding. This method is based on co-occurrence matrices derived from card sorts. It can also be applied to other areas of software development, such as systems analysis and market research.
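The core data structure here is a co-occurrence matrix pooled over respondents: cell (i, j) counts how often items i and j were placed in the same card-sort group. A minimal sketch of that construction follows; the input format (one list of groups per respondent) and the item names are assumptions.

# Sketch of building a co-occurrence matrix from card sorts: entry [i][j]
# counts how many respondents placed items i and j in the same group.
# The input format (one list of groups per respondent) is an assumption.
from itertools import combinations

def cooccurrence_matrix(sorts, items):
    """sorts: list of card sorts, each a list of groups (lists of item names)."""
    index = {item: k for k, item in enumerate(items)}
    n = len(items)
    matrix = [[0] * n for _ in range(n)]
    for groups in sorts:
        for group in groups:
            for a, b in combinations(group, 2):
                i, j = index[a], index[b]
                matrix[i][j] += 1
                matrix[j][i] += 1
    return matrix

# Example with hypothetical page names: two respondents sorting four designs.
sorts = [
    [["page_a", "page_b"], ["page_c", "page_d"]],
    [["page_a", "page_b", "page_c"], ["page_d"]],
]
items = ["page_a", "page_b", "page_c", "page_d"]
# cooccurrence_matrix(sorts, items)[0][1] == 2  -> a and b always co-grouped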

Journal ArticleDOI
TL;DR: The authors discussed the problem of plagiarism at all levels of academia - undergraduate, graduate, and unfortunately, even among authors of articles submitted to scholarly journals.
Abstract: In a recent issue of IT Professional, Colin Neill and Ganesh Shanmuganthan's article ("A Web-Enabled Plagiarism Detection Tool," IT Professional, Sept.-Oct. 2004, pp. 19-23) provided an overview of plagiarism detection products and services as an introduction to a description of their own system. The authors discussed the problem of plagiarism at all levels of academia - undergraduate, graduate, and unfortunately, even among authors of articles submitted to scholarly journals. Although scholarly authors and, to some degree, graduate students don't deserve any special consideration when it comes to plagiarism, there are other factors to consider when discussing "plagiarism" among undergraduate students.

Proceedings Article
01 Jan 2005
TL;DR: This work builds upon previous work by Broder et al. and Heintze, specifically addressing a certain set of attacks that were discovered to be very powerful against previous systems and achieves robustness against these attacks with a new selection process.
Abstract: Text sifting is a method of quickly and securely identifying documents for database searching, copy detection, duplicate email detection and plagiarism detection. A small amount of text is extracted from a document using hash functions and is used as the document's fingerprint. We build upon previous work by Broder et al. [4,5] and Heintze [8], specifically addressing a certain set of attacks that we discovered to be very powerful against previous systems. We achieve robustness against these attacks with a new selection process. We also give theoretical and experimental results for these and other attacks on text sifting functions.
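In the style of Broder et al. and Heintze, a text-sifting fingerprint hashes overlapping word shingles and keeps only a small, deterministic subset; two documents that share many selected hashes are likely copies. The sketch below uses the classic "hash mod p == 0" selection rule as an illustrative stand-in; it does not reproduce the paper's new, attack-resistant selection process.

# Sketch of a text-sifting fingerprint: hash overlapping word shingles and
# keep the hashes divisible by p as the fingerprint. This is the classic
# selection rule, not the paper's attack-resistant selection process.
import hashlib, re

def fingerprint(text, shingle_len=5, p=16):
    words = re.findall(r"[a-z']+", text.lower())
    selected = set()
    for i in range(len(words) - shingle_len + 1):
        shingle = " ".join(words[i:i + shingle_len])
        h = int(hashlib.md5(shingle.encode()).hexdigest(), 16)
        if h % p == 0:                        # keep roughly 1/p of the shingles
            selected.add(h)
    return selected

def resemblance(fp_a, fp_b):
    """Jaccard overlap of two fingerprints; high values suggest copying."""
    if not fp_a or not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)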

01 Jan 2005
TL;DR: The role of librarians in promoting academic integrity and in educating students and faculty about information literacy is examined, focusing on four areas relating to academic integrity: promotion, policy, education, and library involvement.
Abstract: In 2003, McGill University, a member of the Canadian “G10” research universities, undertook a limited trial of plagiarism detection software in specific undergraduate courses. While it is estimated that 28 Canadian universities and colleges currently use text-matching software, the McGill trial received considerable attention from student, national and international media after a student refused to submit his work to the service and successfully challenged the university’s policy requiring the use of Turnitin™. While student and faculty reactions to the software have been mixed, debate about the use of text-matching software has served to promote awareness of the importance of academic integrity and the use of alternative methods of deterring plagiarism. No final decision has yet been reached regarding the use of plagiarism detection software, but the University is currently drafting policy for its general implementation in courses and specific use in cases of suspected plagiarism. At the same time, it is working to develop collaborative initiatives involving key campus stakeholders, including the University administration, Teaching and Learning Services, librarians and student advocacy groups, to promote academic integrity at McGill. In this study, we seek to determine how leading Canadian universities using text-matching software address issues of academic integrity. Particular attention will be paid to the role of librarians in promoting academic integrity and in educating students and faculty about information literacy. Having identified seven of the G10 currently using Turnitin™, we intend to survey key stakeholders from each institution via electronic questionnaire for information on four areas relating to academic integrity: promotion, policy, education, and library involvement. We expect to report a summary of our findings, paying special attention to the current situation at McGill.

Proceedings ArticleDOI
19 Oct 2005
TL;DR: In this article, the authors address the challenges inherent in detecting academic dishonesty by examining a systems-approach to authentication of authorship of student work in an anytime/any-place environment.
Abstract: There are many types of academic dishonesty; deception, plagiarism, and even theft and fraud are among the most common. Many techniques have been developed to deal with dishonesty in classroom situations, but as instructional delivery increasingly migrates to online modalities, unique problems arise for the instructor in ensuring that the work performed by students represents that student's efforts. When offering instruction in an anytime/any-place environment, the challenge of ensuring authentication of student work appears to be almost insurmountable because it is difficult to gather students into a central location or prevent off-line communication or assistance from others. This presentation addresses the challenges inherent in detecting academic dishonesty by examining a systems approach to authentication of authorship of student work.

Journal Article
TL;DR: A novel similarity model that supports alignment as well as shifting is proposed for plagiarism detection; a method for indexing the features extracted from each melody and a method for processing plagiarism detection by using the index are also suggested.
Abstract: Similar melody searching is an operation that finds melodies similar to a given query melody in a music database. In this paper, we address the development of a system that detects plagiarism based on similar melody searching. We first propose a novel similarity model that supports alignment as well as shifting. Also, we suggest a method for indexing the features extracted from each melody, and a method for processing plagiarism detection by using the index. With our plagiarism detection system, composers can easily search for melodies similar to their own in music databases. Through performance evaluation via a series of experiments, we show the effectiveness of our approach. The results reveal that our approach outperforms the sequential-scan-based one in speed by up to around 31 times.
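As a hedged sketch of the two properties named above, shifting and alignment (not the authors' similarity model or index), the code below represents melodies by pitch intervals, which makes comparison transposition-invariant, and aligns the interval sequences with edit distance.

# Sketch of melody comparison that tolerates shifting and misalignment:
# pitch intervals give transposition (shift) invariance, and edit distance
# supplies the alignment. This illustrates the two properties only; it is
# not the paper's similarity model or its index structure.

def intervals(pitches):
    """MIDI note numbers -> successive pitch differences (shift-invariant)."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def edit_distance(a, b):
    """Classic dynamic-programming alignment of two interval sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete a note
                          d[i][j - 1] + 1,         # insert a note
                          d[i - 1][j - 1] + cost)  # substitute a note
    return d[m][n]

def melody_similarity(query, candidate):
    dist = edit_distance(intervals(query), intervals(candidate))
    return 1.0 - dist / max(len(query), len(candidate), 1)

# A melody and its transposition 3 semitones up compare as identical:
# melody_similarity([60, 62, 64, 65], [63, 65, 67, 68]) == 1.0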

Journal ArticleDOI
TL;DR: This study sheds some light on ongoing research to develop a plagiarism detection portlet for Java student assignments and promotes information sharing so users can build on their experiences at the institution.
Abstract: In an educational context, we are faced with similar challenges. How do we keep the administration, faculty, staff and students well informed about institutional policies and procedures? How do we ensure the student body receives accurate and up-to-date information to help them achieve their educational and career goals? How do we check for cases of plagiarism? In addition, we hope to build learning communities - communities of students, instructors, administration, faculty and staff all collaborating and constructing strong relationships that provide the foundation for students to achieve their goals with greater success. We also want to promote information sharing so users can build on their experiences at the institution. Plus, we want to provide seamless integration with legacy and other applications in an easy, modifiable and reusable way. One solution to these goals is to provide a support tool for such learning through a learning portal. This portal should provide all users (e.g. students, instructors) with the valuable information they require. However, building and modifying a learning portal is no small task, especially when you consider the shrinking budgets and limited resources in today's economy. The Java portlet presents a new solution in which new functionality can be plugged into existing portals. This study sheds some light on ongoing research to develop a plagiarism detection portlet for Java student assignments.

Book ChapterDOI
14 Sep 2005
TL;DR: The results reveal that the proposed novel similarity model outperforms the sequential-scan-based one in speed up to around 31 times, and a method for indexing the features extracted from every melody and processing plagiarism detection by using the index is suggested.
Abstract: This paper addresses the development of a system that detects plagiarism based on similar melody searching. Similar melody searching finds the melodies similar to a given query melody in a music database. For this purpose, we propose a novel similarity model that supports alignment as well as shifting. Also, we suggest a method for indexing the features extracted from every melody, and a method for processing plagiarism detection by using the index. With our plagiarism detection system, composers can easily search for melodies similar to their own in music databases. Through performance evaluation via a series of experiments, we show the effectiveness of our approach. The results reveal that our approach outperforms the sequential-scan-based one in speed by up to around 31 times.

Journal ArticleDOI
TL;DR: The ever expanding universe of online information is increasingly being used by students to outsource their learning assignments; plagiarism detection systems use information retrieval technology to address the gap between the physical ability to scan the information universe for un-cited re-used content and the trend towards ever increasing numbers of suspected plagiarism incidents.
Abstract: Our ever expanding universe of online information is increasingly being used by students to outsource their learning assignments. A wide range of factors are combining to create a plagiarism crisis in higher education. A bundle of solutions is required. Plagiarism detection systems like http://www.Turnitin.com are using information retrieval technology to address the gap between our physical ability to scan the information universe for un-cited re-used content and the trend towards ever increasing numbers of suspected plagiarism incidents.

Proceedings ArticleDOI
24 May 2005
TL;DR: The document duplication detection method developed by this research, DICOM (dynamic incremental comparison method), is highly active and aims to produce various detectors that embody a system to expeditiously and effectively detect duplicated documents and document parts.
Abstract: Vast amount of information is generated and shared in this active digital information society. Due to easy access to information in our digital society, there are many cases of illegal counterfeiting and usage of personal information. Producing information with investment and effort is important indeed, but managing and protecting information is becoming a furthermore important issue. This is to promote a new detecting theory and solution for cases of intellectual property violations and plagiarizing digital contents. The document duplication method developed by this research is DICOM (dynamic incremental comparison method) is highly active. It targets on producing various detectors to embody a system to expeditiously and effectively detect duplicated documents and parts.