
Showing papers on "Decimal published in 2022"


Journal ArticleDOI
TL;DR: A new method is presented for encoding 2D and 3D color images, using DNA strand construction as the basis for structuring the method; it achieved good results compared with other methods in terms of quality and time.
Abstract: In this study, a new method is presented for encoding 2D and 3D color images. The DNA strand construction is used as the basis for structuring the method, which consists of two main stages, encryption and decryption, each comprising several operations. In the encoding stage, a special table defines the mechanism of the work. It starts with encoding each DNA base as two binary digits; two zeros are then added so that the string consists of four binary bits, matching the binary representation of a hexadecimal digit. An XOR operation is then performed between the two values, so that the result is completely different from the original code. The resulting binary values are converted to decimal values, which are placed in an array of the same size as the image to be encoded. Finally, this array is processed with an exponential function factor, so the final result is a fully encoded image. In the decoding stage, another algorithm reverses the work of the encryption stage, yielding an exact copy of the original image. Standard images of different sizes were used as test images. The performance of the method was evaluated based on several factors: MSE, PSNR, and the time required to perform the encoding and decoding process. The method achieved good results when compared with other methods in terms of quality and time.
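The base-to-bits, zero-padding, and XOR steps described above can be sketched in a few lines. This is a hypothetical illustration only: the 2-bit base codes and the hex-digit key below are assumptions, not the paper's actual coding table.

```python
# Hypothetical sketch of the coding-table idea: each DNA base maps to a 2-bit
# code, is zero-padded to 4 bits, and is XORed with a 4-bit (hex-digit) key.
BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}  # assumed mapping

def encode_strand(strand, hex_key):
    """XOR each zero-padded base code with a repeating hex-digit key;
    returns decimal values in 0..15, as in the paper's decimal array."""
    out = []
    for i, base in enumerate(strand):
        padded = BASE_TO_BITS[base]            # 2-bit code, top two bits are 0
        key_nibble = hex_key[i % len(hex_key)]
        out.append(padded ^ key_nibble)        # decimal value 0..15
    return out

def decode_strand(values, hex_key):
    """Reverse the XOR and map 2-bit codes back to bases."""
    inv = {v: k for k, v in BASE_TO_BITS.items()}
    return "".join(inv[v ^ hex_key[i % len(hex_key)]] for i, v in enumerate(values))
```

Because XOR is its own inverse, decoding with the same key recovers the original strand exactly, mirroring the paper's claim of an exact copy after decryption.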

14 citations



Journal ArticleDOI
TL;DR: In this article, the authors used Grammatical Evolution (GE) to generate an initial seed for the construction of a pseudo-random number generator (PRNG) and a cryptographically secure (CS) PRNG.
Abstract: This work investigates the potential for using Grammatical Evolution (GE) to generate an initial seed for the construction of a pseudo-random number generator (PRNG) and cryptographically secure (CS) PRNG. We demonstrate the suitability of GE as an entropy source and show that the initial seeds exhibit an average entropy value of 7.940560934 for 8-bit entropy, which is close to the ideal value of 8. We then construct two random number generators, GE-PRNG and GE-CSPRNG, both of which employ these initial seeds. We use Monte Carlo simulations to establish the efficacy of the GE-PRNG using an experimental setup designed to estimate the value for pi, in which 100,000,000 random numbers were generated by our system. This returned the value of pi of 3.146564000, which is precise up to six decimal digits for the actual value of pi. We propose a new approach called control_flow_incrementor to generate cryptographically secure random numbers. The random numbers generated with CSPRNG meet the prescribed National Institute of Standards and Technology SP800-22 and the Diehard statistical test requirements. We also present a computational performance analysis of GE-CSPRNG demonstrating its potential to be used in industrial applications.
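The Monte Carlo setup for estimating pi can be sketched as follows. Python's stdlib `random.Random` stands in for the GE-seeded PRNG described in the paper; the seed value is arbitrary.

```python
import random

def estimate_pi(n_samples, seed=12345):
    """Estimate pi by sampling points in the unit square and counting how
    many fall inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)   # stand-in for the paper's GE-seeded PRNG
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The quarter-circle area is pi/4, so the hit fraction times 4 estimates pi.
    return 4.0 * inside / n_samples
```

The standard error of this estimator shrinks as 1/sqrt(n), which is why the paper needed 100,000,000 samples to approach the true value of pi.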

6 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compared the performance of adjusted min-max normalization with other normalization methods in terms of accuracy and mean square error of the final classification outcomes; the proposed adjusted-2 min-max normalization achieved higher accuracy and lower mean square error than min-max normalization on the white wine quality, Pima Indians diabetes, vertical column, and Indian liver disease datasets.
Abstract: In this research, the normalization performance of the proposed adjusted min-max methods was compared to the normalization performance of statistical column, decimal scaling, adjusted decimal scaling, and min-max methods, in terms of accuracy and mean square error of the final classification outcomes. The evaluation process employed an artificial neural network classification on a large variety of widely used datasets. The best method was min-max normalization, providing 84.0187% average ranking of accuracy and 0.1097 average ranking of mean square error across all six datasets. However, the proposed adjusted-2 min-max normalization achieved a higher accuracy and a lower mean square error than min-max normalization on each of the following datasets: white wine quality, Pima Indians diabetes, vertical column, and Indian liver disease datasets. For example, the proposed adjusted-2 min-max normalization on white wine quality dataset achieved 100% accuracy and 0.00000282 mean square error. To conclude, for some classification applications on one of these specific datasets, the proposed adjusted-2 min-max normalization should be used over the other tested normalization methods because it performed better.
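As a reference point for the methods compared above, standard min-max normalization rescales each column linearly into a target range; a minimal sketch:

```python
def min_max_normalize(column, new_min=0.0, new_max=1.0):
    """Min-max normalization: map the column's [min, max] onto [new_min, new_max]."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [new_min for _ in column]   # degenerate constant column
    scale = (new_max - new_min) / (hi - lo)
    return [new_min + (x - lo) * scale for x in column]
```

The "adjusted" variants in the paper modify this basic mapping; the sketch shows only the baseline method they are compared against.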

6 citations


Journal ArticleDOI
TL;DR: In this article, a systematic review was conducted to describe the instructional foci of rational number interventions, determine the overall effect size, and explore potential moderators; the majority of studies focused on teaching fraction magnitude and arithmetic.
Abstract: Understanding rational numbers is critical for secondary mathematics achievement. However, students with mathematics difficulties (MD) struggle with rational number topics, including fractions, decimals, and percentages. The purpose of this systematic review was to describe the instructional foci of rational number interventions, determine the overall effect size, and explore potential moderators. Forty-three studies were included and 150 effect sizes were meta-analyzed using robust variance estimation. The majority of studies focused on teaching fraction magnitude and arithmetic. An overall effect size of g = 1.02 [0.80, 1.25] was found for rational number interventions favoring treatment conditions over business as usual control. Proximal measures contributed to higher effect sizes than distal measures. Limitations included a high number of fraction interventions contributing to the overall effect size and a large amount of heterogeneity among study effect sizes.

5 citations


Journal ArticleDOI
01 Apr 2022-Optik
TL;DR: In this paper, an image tamper detection and correction technique using the principle of differencing and addressing the fall off boundary problem (FOBP) is proposed, which operates on 2 × 2 non-overlapped blocks.

4 citations


Journal ArticleDOI
01 Aug 2022
TL;DR: In this article, a hybrid memristor circuit consisting of a magnetic-controlled and a charge-controlled memristor is designed, and an absolute value and a square root algorithm are specifically embedded in the magnetic-controlled model.
Abstract: A hybrid memristor circuit consisting of a magnetic-controlled and a charge-controlled memristor is designed. In order to construct a relatively complex dynamical system, an absolute value and a square root algorithm are specifically embedded in the magnetic-controlled model. This novel five-dimensional system is first analyzed with integer order for the existence of chaotic phenomena, and the fractional-order memristor system is then implemented using the Grunwald–Letnikov algorithm. The five fractional-order values qi (i=1,2,3,4,5) are taken as identical and as different in the numerical simulation, respectively. The bifurcation diagram, 0-1 test, SALI algorithm, and complexity analysis reveal the nonlinear dynamic characteristics of the hybrid fractional-order memristor system. Finally, the commensurate and non-commensurate fractional-order systems based on the frequency domain method are implemented with FPGA technology; their accuracy can reach 14 decimal places. The results of numerical calculation and circuit simulation have the same phase trajectory, which verifies that the hybrid memristors have practical value.
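The Grunwald–Letnikov discretization mentioned above can be sketched with the standard binomial-weight recursion. This is a generic illustration of the method applied to a simple scalar function, not the paper's five-dimensional memristor system.

```python
import math

def gl_weights(q, n):
    """Grunwald-Letnikov weights w_j = (-1)^j * C(q, j), via the standard
    recursion w_0 = 1, w_j = w_{j-1} * (1 - (q + 1) / j)."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (q + 1.0) / j))
    return w

def gl_derivative(f, q, t, h=1e-4):
    """Order-q fractional derivative of f at t (lower limit 0),
    GL discretization with step h: h^(-q) * sum_j w_j * f(t - j*h)."""
    n = int(t / h)
    w = gl_weights(q, n)
    return sum(w[j] * f(t - j * h) for j in range(n + 1)) / h**q
```

For f(t) = t, the exact order-1/2 derivative is t^(1/2) / Γ(3/2), so the scheme can be checked against a closed form; the same recursion drives numerical solvers for fractional-order systems like the one in the paper.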

4 citations


Journal ArticleDOI
TL;DR: In this article, the role of digital learning games in bridging the gender gap in middle school math education was investigated by studying Decimal Point, a math game that teaches decimal numbers and operations to 5th and 6th graders.
Abstract: There is an established gender gap in middle school math education, where female students report higher anxiety and lower engagement, which negatively impact their performance and even long-term career choices. This work investigates the role of digital learning games in addressing this issue by studying Decimal Point, a math game that teaches decimal numbers and operations to 5th and 6th graders. Through data from four published studies of Decimal Point, involving 624 students in total, the authors identified a consistent gender difference that was replicated across all studies – male students tended to do better at pretest, while female students tended to learn more from the game. In addition, female students were more careful in answering self-explanation questions, which significantly mediated the relationship between gender and learning gains in two out of four studies. These findings show that learning games can be an effective tool for bridging the gender gap in middle school math education, which in turn contributes to the development of more personalized and inclusive learning platforms.

4 citations


Journal ArticleDOI
TL;DR: This paper found that cross-notation knowledge could help learners achieve a better understanding of rational numbers than could easily be achieved from within-notation knowledge alone, a hypothesis tested by reanalyzing three published datasets involving fourth-to-eighth-grade children from the United States and Finland.

3 citations


Journal ArticleDOI
TL;DR: This article analyzed fraction and decimal arithmetic problems assigned by 14 teachers over an entire school year and found that five of the six documented biases in textbook problem distributions were also present in the classroom assignments.
Abstract: Imbalances in problem distributions in math textbooks have been hypothesized to influence students’ performance. This hypothesis, however, rests on the assumption that textbook problems are representative of the problems that students encounter in classroom assignments. This assumption might not be true, because teachers do not present all problems in textbooks and because teachers present problems from sources other than textbooks. To test whether distributions of problems that students encounter parallel distributions of textbook problems, we analyzed fraction and decimal arithmetic problems assigned by 14 teachers over an entire school year. Five of the six documented biases in textbook problem distributions were also present in the classroom assignments. Moreover, the same biases were present in 16 of the 18 combinations of bias and grade level (4th, 5th, and 6th grade) that were examined in assignments and textbooks. Theoretical and educational implications of these findings are discussed.

3 citations


Journal ArticleDOI
TL;DR: In this paper, age at the time of detection and surgery of dense unilateral cataract was analyzed, and best-corrected visual acuity (BCVA) was investigated, in a nationwide register-based cohort study based on the routine of maternity ward eye screening.
Abstract: Analysis of age at time of detection and surgery of dense unilateral cataract and investigation of best‐corrected visual acuity (BCVA) in a nationwide register‐based cohort study, based on the routine of maternity ward eye screening.


Proceedings ArticleDOI
05 Oct 2022
TL;DR: In this paper, the authors proposed a hybrid method for the encryption of images, which combines arithmetic operations (a dot product at the decimal level) and bit-level operations (a PRNG key data change), with an S-Box as a confusion application element.
Abstract: The proliferation of multimedia production and exchange over the Internet has created a dire need for security measures to protect it. In this paper, we propose a hybrid method for the encryption of images. This kind of hybrid strategy calls for the application of two (varying in nature) components, arithmetic (in the form of a dot product operation applied on the decimal level) and bit-level operations (in the form of an application of a PRNG key data change), alongside an S-Box (as a confusion application element). The incorporation of an arithmetic component leads to the development of an encryption scheme that is fundamentally distinct from the majority of algorithms in the state-of-the-art. The proposed scheme demonstrates strong encryption performance, as can be seen from the results of the conducted tests.

Book ChapterDOI
01 Jan 2022
TL;DR: In this paper, a role-play-based simulation of diagnostic interviews on the topic of decimal fractions was developed for mathematics pre-service teachers, where participants take on the role of a sixth grader, a teacher interviewing a sixth grader, or an observer watching the interview.
Abstract: One-on-one diagnostic interviews with school students have been proposed as learning opportunities to acquire diagnostic competences. Moreover, role-play-based simulations have proved promising for fostering interactive competences similar to diagnosis during early phases of teacher and medical education. Thus, we developed a role-play-based simulation of diagnostic interviews on the topic of decimal fractions for mathematics pre-service teachers. During the role-play, participants take on the role of a sixth grader, of a teacher interviewing a sixth grader, or of an observer watching the interview. Based on cognitive labs addressing criteria such as authenticity and immersion in the teacher's diagnostic task in the role-play, we analyze the feasibility of the chosen simulation approach for measuring and fostering mathematics pre-service teachers' diagnostic competences.

Journal ArticleDOI
TL;DR: The authors investigated children's use of conceptual knowledge in a new domain, decimal arithmetic, and found that most errors involved overgeneralization of strategies that would be correct for problems with different operations or types of number.
Abstract: To explain children's difficulties learning fraction arithmetic, Braithwaite et al. (2017) proposed FARRA, a theory of fraction arithmetic implemented as a computational model. The present study tested predictions of the theory in a new domain, decimal arithmetic, and investigated children's use of conceptual knowledge in that domain. Sixth and eighth grade children (N = 92) solved decimal arithmetic problems while thinking aloud and afterward explained solutions to decimal arithmetic problems. Consistent with the hypothesis that FARRA's theoretical assumptions would generalize to decimal arithmetic, results supported 3 predictions derived from the model: (a) accuracies on different types of problems paralleled the frequencies with which the problem types appeared in textbooks; (b) most errors involved overgeneralization of strategies that would be correct for problems with different operations or types of number; and (c) individual children displayed patterns of strategy use predicted by FARRA. We also hypothesized that during routine calculation, overt reliance on conceptual knowledge is most likely among children who lack confidence in their procedural knowledge. Consistent with this hypothesis, (d) many children displayed conceptual knowledge when explaining solutions but not while solving problems; (e) during problem-solving, children who more often overtly used conceptual knowledge also displayed doubt more often; and (f) problem solving accuracy was positively associated with displaying conceptual knowledge while explaining, but not with displaying conceptual knowledge while solving problems. We discuss implications of the results for rational number instruction and for the creation of a unified theory of rational number arithmetic. (PsycInfo Database Record (c) 2022 APA, all rights reserved).

Journal ArticleDOI
TL;DR: In this paper, the results are presented of an experiment with 170 students from two Chilean universities who solved a task about reading the graph of an affine function in an online assessment environment, where the parameters (coefficients of the graphed affine function) are randomly defined by an ad-hoc algorithm, with automatic correction and automatic feedback.
Abstract: This paper shows the results of an experiment applied to 170 students from two Chilean universities who solved a task about reading the graph of an affine function in an online assessment environment where the parameters (coefficients of the graphed affine function) are randomly defined by an ad-hoc algorithm, with automatic correction and automatic feedback. We distinguish two versions: one with integer coefficients and the other with decimal coefficients in the affine function. We observed that the nature of the coefficients impacts the mathematical work used by the students, and we focus on two strategies: direct estimation from the graph, and calculating the equation of the line. On the other hand, feedback oriented towards the "estimation" strategy influences the mathematical work used by the students, even though a non-negligible group persists in the "calculating" strategy, which is partly explained by the students' perception of each strategy.

Journal ArticleDOI
18 Aug 2022-Symmetry
TL;DR: A novel reversible data-hiding scheme based on an enhanced reduced difference-expansion technique is proposed, which achieved better performance in comparison with previous solutions.
Abstract: Reversible data hiding is a data-hiding technique which has the ability to recover the original version from stego-images after the secret information is extracted. In this paper, we propose a novel reversible data-hiding scheme based on an enhanced reduced difference-expansion technique. In the proposed scheme, the original image is divided into non-overlapping quad-blocks for embedding data. Then, to enhance the security, the secret bits are encrypted based on the encryption key and a symmetry-based strategy. To improve embedding capacity further, two adjacent encrypted bits are converted into a corresponding decimal digit. Difference expansion (DE) technique is applied to embed a decimal form instead of a binary version. Moreover, to maintain the good image quality, the enhanced reduced difference-expansion technique is used to reduce the original difference values so that it is suitable for decimal embedding. The experimental results demonstrated that the proposed scheme has achieved better performance in comparison with previous solutions.
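For context, the underlying difference expansion idea can be shown in its classic one-bit-per-pixel-pair form (Tian's scheme), which the paper extends to embed decimal digits with reduced differences. This sketch omits the overflow handling and location map a real embedder needs.

```python
def de_embed(x, y, bit):
    """Classic difference expansion: hide one bit in a pixel pair (x, y).
    The integer average l is preserved; the difference h is doubled and
    the secret bit is appended as its least significant bit."""
    l = (x + y) // 2          # integer average (floor)
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference carrying the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the original pair and the hidden bit from the stego pair."""
    h2 = x2 - y2
    l = (x2 + y2) // 2
    bit = h2 & 1              # the embedded bit is the LSB of the difference
    h = h2 >> 1               # floor division restores the original difference
    return l + (h + 1) // 2, l - h // 2, bit
```

Because the average is invariant and the expansion is invertible, the cover pair is restored exactly after extraction, which is the defining property of reversible data hiding.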

Journal ArticleDOI
TL;DR: The dichotomy brought up by the title of this column was a major quandary in the early days of the computer revolution, causing major controversy in both the academic and commercial communities involved in the development of modern computer architectures.
Abstract: The dichotomy created by the advent of computers and brought up by the title of this column was a major quandary in the early days of the computer revolution, causing major controversy in both the academic and commercial communities involved in the development of modern computer architectures. Even though the controversy was eventually decided (in favor of binary representation; all commercially available computers use a binary internal architecture), echoes of that controversy still affect computer usage today by creating errors when data is transferred between computers, especially in the chemometric world. A close examination of the consequences reveals a previously unexpected error source.
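The kind of error source alluded to above is easy to demonstrate: many exact decimal values have no finite binary representation, so they change slightly when stored in binary floating point and transferred between systems.

```python
from decimal import Decimal

# 0.1 has no finite binary expansion, so the stored double differs from 1/10.
stored = Decimal(0.1)    # exact value of the binary64 double nearest to 0.1
print(stored)            # 0.1000000000000000055511151231257827...

# The round-off accumulates under arithmetic:
print(0.1 + 0.2 == 0.3)  # False
```

This is why data written as decimal text and re-read as binary floats (or vice versa) can disagree in the last digits, the "previously unexpected error source" the column examines.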

Book ChapterDOI
01 Jan 2022
TL;DR: The authors analyze central core insights in measurement-related concepts and procedures, focusing on understanding measurement instruments, the importance of reference points, the understanding of unit sizes, converting physical quantities, and understanding decimal numbers.
Abstract: Dealing with numbers, quantities and units in measurement contexts is a central and also interconnecting content of mathematics and physics classes. However, many students struggle with measurement tasks. Hence, this chapter analyzes central core insights in measurement-related concepts and procedures, focusing on understanding measurement instruments, the importance of reference points, the understanding of unit sizes, converting physical quantities, and understanding decimal numbers. Common problems and mistakes of students are analyzed in detail, and some hints on how to overcome these problems are given.


Journal ArticleDOI
TL;DR: In this paper, the question of whether continued fractions of real numbers are algebraic objects is addressed; the link between automaticity and algebraicity is well established concerning power series in finite characteristic, and the decimal and continued fraction expansions of real numbers.
Abstract: The link between automaticity and algebraicity is well established concerning power series in finite characteristic, decimal expansion and continued fraction expansion of real numbers. But the question of whether continued fractions …

Book ChapterDOI
28 Jun 2022
TL;DR: In this paper, the authors used SPSS and MS Excel to measure the accuracy of the round function, one of the functions that can be calculated in these two programs.
Abstract: Statistical operations are performed by various specialist programs, which can carry out these processes quickly and accurately. Various functions can be calculated in SAS, STAT, and the analytical application SPSS, and Microsoft Excel can also perform these calculations. The set of functions covered, such as Sum, Average, Maximum, and Minimum, may differ from one program to another. This research measures the accuracy of the round function, one of these functions. In this study, I used ten-digit numbers and three criteria: the decimal place less than, equal to, and greater than 5. In SPSS, decimals are valid for numeric variables only, and rounding is performed based on whether the desired decimal place is less than, equal to, or greater than 5. Ten digits were rounded using the SPSS and MS Excel tools. The output findings are identical, except that Microsoft Excel truncates the trailing zeroes after the decimal point. Based on the decimal place desired in the digit number, statistically and analytically, SPSS is more precise than MS Excel.
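Rounding discrepancies between tools usually come down to the tie-breaking rule (round half up versus banker's rounding) and to binary storage of decimal inputs. A sketch of the two common rules, done in exact decimal arithmetic so that binary artifacts do not interfere:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

def round_half_up(x, places):
    """Half-up rounding on exact decimal input (ties go away from zero,
    the rule typically used by spreadsheet ROUND functions)."""
    q = Decimal(10) ** -places
    return Decimal(str(x)).quantize(q, rounding=ROUND_HALF_UP)

def round_half_even(x, places):
    """Banker's rounding on exact decimal input (ties go to the even digit)."""
    q = Decimal(10) ** -places
    return Decimal(str(x)).quantize(q, rounding=ROUND_HALF_EVEN)
```

The two rules agree everywhere except exactly at a tie, which is why comparisons like the one in this study hinge on digits equal to 5.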

Journal ArticleDOI
TL;DR: In this article, a Five-Level Model of full professional development (PD) management is presented, in the format of a decimal model of management of good human health (GHH) formation.
Abstract: The purpose of the study is to create a Five-Level Model of full professional development (PD) management, in the format of a decimal model of management of good human health (GHH) formation. To achieve this goal, the concepts of «occupational human health» and «comprehensive professional development» were defined, determining full compliance of a person's professional activity with achievements in modern science and practice. This is the highest level of professional development for achieving goals in a certain type of professional activity as a result of fulfilling one's professional potential. Thereafter, to meet the objectives of the study, the last, tenth letter «T» of the word «management» was used in the Conceptual decimal model of full innovative GHH management. It represents the highest, tenth level of the professional activity system, consisting of five types: work — labour — business — game — employment. These five types of human professional activity are at the core of the Five-Level Model and are placed at five hierarchical levels, in accordance with their relevance. To create the Five-Level Model, single-type tables were built that summarize the characteristics of each of the five types of professional activity (work — labour — business — game — employment) and the characteristics of the corresponding subjects. Reasoned conclusions have been drawn for each type of professional activity according to its hierarchical level in the Five-Level Model. This has helped to create the Five-Level Model of comprehensive PD management, in the format of the Conceptual model of full innovative management of GHH formation, and to determine that the Five-Level Model is a full hierarchical management model consisting of five types of professional activities in precise order according to their relevance: work, labour, business, game, service, which together fully cover the entire range of professional activity.
It is proved that unique properties of the Five-Level Model allow for its regular use for full professional development management in any environment, including Ukrainian society.

Journal ArticleDOI
TL;DR: In this paper, explicit bounds with nine-decimal-digit precision are presented for the sum of reciprocals of Proth primes, together with closed formulae for the nth Proth number, the number of Proth numbers up to n, and the sum of the first n Proth numbers.
Abstract: Computing the reciprocal sum of sparse integer sequences with tight upper and lower bounds is far from trivial. In the case of Carmichael numbers or twin primes, even the first decimal digit is unknown. For accurate bounds, the exact structure of the sequences needs to be unfolded. In this paper we present explicit bounds for the sum of reciprocals of Proth primes with nine-decimal-digit precision. We show closed formulae for calculating the nth Proth number $F_n$, the number of Proth numbers up to n, and the sum of the first n Proth numbers. We give an efficiently computable analytic expression with linear order of convergence for the sum of the reciprocals of the Proth numbers involving the $\Psi$ function (the logarithmic derivative of the gamma function). We disprove two conjectures of Zhi-Wei Sun regarding the distribution of Proth primes.
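Proth numbers have the form k·2^n + 1 with k odd and k < 2^n. A brute-force enumeration illustrates the sequence the closed formulae describe (this is an illustration, not the paper's formulae):

```python
def proth_numbers(limit):
    """All Proth numbers k * 2**n + 1 (k odd, k < 2**n) up to limit, ascending.
    A set is used because some values arise from more than one (k, n) pair."""
    out = set()
    n = 1
    while 2**n + 1 <= limit:
        for k in range(1, 2**n, 2):   # odd k strictly below 2**n
            v = k * 2**n + 1
            if v <= limit:
                out.add(v)
        n += 1
    return sorted(out)
```

The first few Proth numbers are 3, 5, 9, 13, 17, 25, ...; Proth primes are the primes among them, and it is their reciprocal sum that the paper bounds to nine decimal digits.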

Journal ArticleDOI
TL;DR: In this paper, the authors used data transformation with normalization to increase the accuracy of the wine dataset classification using the K-NN algorithm, and showed that the best accuracy, 65.92%, was obtained on the wine dataset normalized with the min-max method and K = 1.
Abstract: An unbalanced range of values across attributes can affect the quality of data mining results. For this reason, it is necessary to pre-process the data; this preprocessing is expected to increase the accuracy of the wine dataset classification. The preprocessing method used is data transformation with normalization. There are three ways to do data transformation with normalization, namely min-max normalization, z-score normalization, and decimal scaling. The data processed by each normalization method were compared to find the best classification accuracy using the K-NN algorithm. The values of K used in the comparisons were 1, 3, 5, 7, 9, and 11. Before classification, the normalized wine dataset was divided into test data and training data with k-fold cross validation, using k equal to 10. The classification tests with the K-NN algorithm show that the best accuracy, 65.92%, was obtained on the wine dataset normalized with the min-max method and K = 1. The average obtained was 59.68%.
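Of the three normalization methods compared above, z-score normalization can be sketched as follows (here with the population standard deviation; some tools use the sample version):

```python
import math

def z_score_normalize(column):
    """Z-score normalization: subtract the mean, divide by the population
    standard deviation, giving a column with mean 0 and unit variance."""
    n = len(column)
    mean = sum(column) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in column) / n)
    return [(x - mean) / std for x in column]
```

Unlike min-max scaling, the result is not bounded to a fixed interval, which is one reason the methods can lead K-NN (a distance-based classifier) to different accuracies.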

Posted ContentDOI
30 Aug 2022
TL;DR: This paper found that the ability to fluidly translate and compare magnitudes within and across notations is central to the understanding of rational numbers, and that cross-notation magnitude comparison accuracy (i.e., fraction vs. decimal, percentage vs. fraction, and percentage vs. decimal) accounted for variance in math outcomes beyond that explained by magnitude representations of individual notations.
Abstract: We propose that integrated number sense, the ability to fluidly translate and compare magnitudes within and across notations, is central to understanding of rational numbers. Consistent with this hypothesis, two studies of 6th through 8th grade students (N=264 and N=46) indicated that accuracy comparing magnitudes within and across notations predicted overall math achievement and fraction number line and arithmetic estimation accuracy. Cross-notation magnitude comparison accuracy (i.e., fraction vs. decimal, percentage vs. fraction, and percentage vs. decimal) accounted for variance in math outcomes beyond that explained by magnitude representations of individual notations. The findings also revealed a percentages-are-larger bias, in which percentages are perceived as larger than equivalent fractions and decimals. Theoretical and instructional implications are discussed.

Journal ArticleDOI
TL;DR: In this paper, the effect of applying data normalization in the K-Means method is investigated for clustering student data, which is used as a recommendation in the selection of UKT assistance.
Abstract: At Budi Darma University there are obstacles in providing UKT assistance, which is not always well targeted to the students who receive it. Those who deserve this assistance are students who have financial difficulties; therefore, a way is needed to group student data based on social level. Students who deserve the assistance can be determined from the data of students currently studying at Budi Darma University, by mining information from that data. So that the data can be used, it is first normalized in order to obtain more accurate results; for student data to be grouped correctly, normalization must be carried out. One of the frequently used normalization methods is decimal scaling, a data transformation method with normalization that equalizes the range of values of each attribute to a certain scale by moving the decimal point of the data in the desired direction. After the data is normalized, the next step is to explore the student data by applying data mining, which is carried out to obtain groups of student data that are used to prioritize UKT assistance. The method used to cluster the student data is the K-Means algorithm. In manual testing there are 3 clusters, whose membership matches testing with the data mining application RapidMiner; based on the sample, those who deserve priority for tuition assistance are in cluster 0, which consists of 22 people. This study aims to see the effect of applying data normalization in the K-Means method for clustering student data used as a recommendation in the selection of UKT assistance.
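The decimal scaling step described above moves the decimal point so that every value falls below 1 in absolute value; a minimal sketch:

```python
def decimal_scaling(column):
    """Decimal-scaling normalization: divide by 10**j, where j is the
    smallest power making every |value| < 1 (i.e. it shifts the decimal
    point of all values by the same amount)."""
    max_abs = max(abs(x) for x in column)
    j = 0
    while max_abs / (10 ** j) >= 1:
        j += 1
    return [x / 10 ** j for x in column]
```

Because every attribute value is divided by the same power of ten, the relative ordering and ratios within a column are preserved, which keeps K-Means distances comparable across attributes.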

Journal ArticleDOI
TL;DR: The limit of detection (LOD) is defined as the lowest concentration of analyte generating an instrument response that is statistically different from (and higher than) that recorded for the blank.
Abstract: Students learn early on in science courses that reliable results can only be achieved from accurate measurements. In some of the first general chemistry classes, but even more intensively in analytical chemistry classes, students are introduced to the concept of significant figures. They learn that a measurement is only as true and precise as the instrument used to carry it out, and that the level of skill of the measurer (and how they interpret and report the measurement) is also important. A classic example to introduce significant figures involves the measurement of an object’s length using a ruler. Let us say that the object is a string, and one has two rulers available to measure it. The first ruler has 1-cm markings, while the second one presents 0.1-cm markings. When performing any measurement, students learn that more accurate results will be achieved if one estimates and includes an additional digit to the measurement. Thus, as an example, let us say that the string’s length is determined to be between 3 and 4 cm using the first ruler. The measurer will then estimate the tenths decimal place and report the string’s length as 3.4 cm, for example. Although considered significant, the tenths place (i.e., 4) is an estimate and, therefore, it is uncertain. The other digit (i.e., 3) is both significant and certain. Using the second ruler, the string’s length falls between 3.4 and 3.5 cm. The measurer will again estimate an additional decimal place and report the length as 3.48 cm. In this example, the digits 3 and 4 are both significant and certain, while the hundredths place (i.e., 8) is significant but uncertain. Now, how does the concept of significant figures relate to limit of detection (LOD)? There are different interpretations for LOD, but it simply is the lowest concentration of analyte generating an instrument response that is statistically different from (and higher than) that recorded for the blank. 
According to the International Union of Pure and Applied Chemistry (IUPAC), the LOD is calculated as three times the standard deviation of the instrument response for the blank (Sb), divided by the calibration curve slope (m), i.e., LOD = 3Sb/m. The value Sb is calculated from repeated measurements of the blank solution, usually 10-20 replicates (n = 10-20). Considering a normal distribution, 99.86% of the data is < (x̄ + 3S) (i.e., less than the value corresponding to three standard deviations above the mean). Therefore, the LOD calculated at 3Sb is statistically different from, and has a 99.86% chance of being larger than, the blank [4]. At the LOD, the instrument response recorded for the analyte is certainly not due to the random variation of the blank signal (almost 100% certain). However, there is no certainty associated with the analyte concentration. In other words, one knows the analyte is present, but cannot quantify it with an adequate level of confidence. Considering Sb as noise, the analyte’s signal-to-noise ratio (S/N) at the LOD is 3 (i.e., the instrument response recorded for the analyte is three times higher than the noise). Because S/N is the reciprocal of the relative standard deviation (RSD), analyte signal variations higher than 33% (RSD = 1/3) are expected at the LOD. Therefore, the concentration corresponding to the LOD is, by definition, uncertain. Going back to the concept of significant figures, the LOD must be reported with a single digit, which is significant but uncertain. Many researchers fail to follow these fundamental concepts of statistics and analytical chemistry, as LODs are often reported with several significant figures. Perhaps this is due to a natural difficulty in connecting concepts learned at different points in one’s education.
Nevertheless, “the case of the LOD” may foster an awareness that could facilitate the identification of instances in which the connection between different concepts is essential: from adequate reporting of analytical results to new ideas and discoveries.
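The IUPAC calculation described in the abstract (LOD = 3Sb/m, reported with a single significant figure) can be sketched numerically. A minimal illustration, where the blank readings and calibration slope are hypothetical values invented for the example:

```python
import statistics

# Hypothetical instrument responses for 10 blank replicates (arbitrary units).
blank_replicates = [0.012, 0.015, 0.011, 0.014, 0.013,
                    0.016, 0.012, 0.014, 0.013, 0.015]

# Hypothetical calibration-curve slope m: response units per mg/L of analyte.
slope = 0.250

# Sb: sample standard deviation of the blank responses.
s_blank = statistics.stdev(blank_replicates)

# IUPAC limit of detection: LOD = 3 * Sb / m.
lod = 3 * s_blank / slope

# Report with a single significant figure, since the LOD is by
# definition uncertain in its one significant digit.
print(f"LOD = {lod:.1g} mg/L")
```

Formatting with `:.1g` enforces the single-significant-figure reporting the abstract argues for; printing more digits would claim a certainty the LOD does not have.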

Journal ArticleDOI
TL;DR: In this article , a qualitative study of the Classroom Action Research (CAR) type was conducted to reveal students' ability to convert fractions into percent and decimal forms and vice versa using the play method.
Abstract: The objectives of this research are: (1) to reveal the ability to convert fractions into percent and decimal forms and vice versa with the play method; (2) to reveal an increase in the ability to convert fractions into percent and decimal forms and vice versa by applying the play method. This is a qualitative study of the Classroom Action Research (CAR) type. Data were collected using tests to measure student ability, while observation sheets were used to assess the effectiveness of the teaching and learning process. Based on the research results, the application of the play method can optimize teacher participation, student participation, and the ability to convert fractions into percent and decimal forms and vice versa. The average score for converting fractions into percent and decimal forms and vice versa reached 74.17 in the first cycle and 82.23 in the second cycle. Applying the play method in learning thus increased students' ability to convert fractions into percent and decimal forms and vice versa, as indicated by an improvement of 8.06 points.
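The conversions the study drills (fraction ↔ decimal ↔ percent) are simple arithmetic and can be sketched programmatically. A minimal illustration using Python's `fractions` module; the helper function names are invented for this example:

```python
from fractions import Fraction

def to_decimal(frac: Fraction) -> float:
    """Convert a fraction to its decimal form, e.g. 3/4 -> 0.75."""
    return frac.numerator / frac.denominator

def to_percent(frac: Fraction) -> float:
    """Convert a fraction to percent, e.g. 3/4 -> 75.0."""
    return to_decimal(frac) * 100

def from_percent(percent: float) -> Fraction:
    """Convert a percent back to a reduced fraction, e.g. 75 -> 3/4."""
    return Fraction(percent).limit_denominator() / 100

f = Fraction(3, 4)
print(to_decimal(f))     # 0.75
print(to_percent(f))     # 75.0
print(from_percent(75))  # 3/4
```

`Fraction` reduces automatically, so the round trip percent → fraction lands back on the simplest form, which mirrors how the conversions are taught.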