Research practice and ethics in mathematics education
01 Jan 2020
About: The article was published on 2020-01-01 and is currently open access. It has received 10 citations to date.
TL;DR: In this article, Courant and Robbins present What is Mathematics? An Elementary Approach to Ideas and Methods, a thoroughly good book that deserves to run through many editions; it is not a book on mathematical logic or philosophy, dealing not with the nature of mathematics but with its content.
Abstract: THIS is a thoroughly good book which deserves to run through many editions. It is not (as its title might suggest) a book on mathematical logic or philosophy: it deals not with the nature of mathematics but with its content. Its purpose is to show, not by general disquisitions but by concrete examples, drawn from almost every branch of pure mathematics, how mathematicians think and what they do. What is Mathematics? An Elementary Approach to Ideas and Methods. By Richard Courant and Herbert Robbins. Pp. xix + 521. (London, New York and Toronto: Oxford University Press, 1941.) 25s. net.
12 Mar 2017
TL;DR: Competence in problem determination, problem analysis, and proposing and mapping several candidate solutions, through to selecting the appropriate confirmation approach, is important for establishing scientific character from the undergraduate program onward.
Abstract: Competence in problem determination, problem analysis, and proposing and mapping several candidate solutions, through to selecting the appropriate confirmation approach, is important for establishing scientific character from the undergraduate program onward. Scientific character cannot be acquired in an instant: it requires a gradual grounding in the basics of critical thinking and in how to draw conclusions, and, even more, coming to believe something also requires an appropriate scientific way of thinking. Given this responsibility, it is important to introduce the principles and basic concepts of the nature of research activity from the beginning of the undergraduate program.
01 Jan 2019
Abstract: Not available; the source record lists only a corresponding-author note for Joshua Abah Abah.
20 Mar 2020
TL;DR: In this paper, the authors investigated mathematics teachers' use of modeling in their teaching and found that the majority of teachers in the Kolokuma/Opokuma local government area of Bayelsa State, Nigeria, are not aware of modeling in mathematics education.
Abstract: This study investigated mathematics teachers' use of modeling in their teaching of mathematics. Its specific objectives were to determine the teachers' awareness of modeling in mathematics education as well as their level of utilization. The study was conducted in the Kolokuma/Opokuma local government area of Bayelsa State, Nigeria. It adopted a survey research design with a population of 47 mathematics teachers in ten secondary schools, from which a sample of 20 was drawn using a purposive sampling technique. The instrument for data collection was the Modeling Awareness Inventory (MAI), which was validated by experts; it was trial-tested and had a reliability coefficient of 0.86 (Cronbach's alpha). Descriptive statistics were used to answer all the research questions. It was found, among other things, that the majority of mathematics teachers in the Kolokuma/Opokuma local government area are not aware of modeling in mathematics education. Suggestions on how to improve their awareness were also made.
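The reliability coefficient reported above is a Cronbach's alpha. As a minimal sketch of how such a coefficient is computed (using hypothetical item scores, not the study's data), alpha relates the sum of the item variances to the variance of respondents' total scores:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    item_var = sum(pvariance(col) for col in items)   # sum of per-item variances
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Hypothetical responses of five teachers to a three-item awareness inventory
responses = [
    [4, 5, 3, 4, 2],  # item 1
    [4, 4, 3, 5, 2],  # item 2
    [5, 5, 2, 4, 3],  # item 3
]
alpha = cronbach_alpha(responses)
```

For these made-up scores alpha comes out near 0.89; values in this range are conventionally read as acceptable internal consistency for a survey instrument.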
TL;DR: The extent to which method biases influence behavioral research results is examined, potential sources of method biases are identified, the cognitive processes through which method biases influence responses to measures are discussed, the many different procedural and statistical techniques that can be used to control method biases are evaluated, and recommendations are provided for how to select appropriate procedural and statistical remedies.
Abstract: Interest in the problem of method biases has a long history in the behavioral sciences. Despite this, a comprehensive summary of the potential sources of method biases and how to control for them does not exist. Therefore, the purpose of this article is to examine the extent to which method biases influence behavioral research results, identify potential sources of method biases, discuss the cognitive processes through which method biases influence responses to measures, evaluate the many different procedural and statistical techniques that can be used to control method biases, and provide recommendations for how to select appropriate procedural and statistical remedies for different types of research settings.
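One of the statistical remedies surveyed in this literature is the marker-variable technique: the correlation between two substantive measures is recomputed after partialling out a variable that is theoretically unrelated to both, and so should capture only shared method variance. A minimal sketch, with hypothetical scores and a hypothetical marker item (not any particular study's data):

```python
import math
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def marker_adjusted_r(x, y, marker):
    """Correlation of x and y after partialling out the marker variable."""
    rxy, rxm, rym = pearson_r(x, y), pearson_r(x, marker), pearson_r(y, marker)
    return (rxy - rxm * rym) / math.sqrt((1 - rxm ** 2) * (1 - rym ** 2))

# Hypothetical self-report scores and a marker item from the same questionnaire
x = [3, 4, 2, 5, 4, 3]
y = [2, 4, 3, 5, 5, 3]
marker = [2, 3, 2, 4, 3, 2]
raw = pearson_r(x, y)
adjusted = marker_adjusted_r(x, y, marker)
```

With these invented numbers the raw correlation shrinks sharply once the marker is partialled out, illustrating how an apparent relationship can reflect common method variance rather than substance.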
01 Jan 1969
TL;DR: This book discusses research strategies and the control of nuisance variables, along with the Completely Randomized Factorial Design with Three or More Treatments, the Randomized Block Factorial Design, and Confounded Factorial Designs: designs with group-interaction confounding.
Abstract: Chapter 1. Research Strategies and the Control of Nuisance Variables Chapter 2. Experimental Designs: an Overview Chapter 3. Fundamental Assumptions in Analysis of Variance Chapter 4. Completely Randomized Design Chapter 5. Multiple Comparison Tests Chapter 6. Trend Analysis Chapter 7. General Linear Model Approach to ANOVA Chapter 8. Randomized Block Designs Chapter 9. Completely Randomized Factorial Design with Two Treatments Chapter 10. Completely Randomized Factorial Design with Three or More Treatments and Randomized Block Factorial Design Chapter 11. Hierarchical Designs Chapter 12. Split-Plot Factorial Design: Design with Group-Treatment Confounding Chapter 13. Analysis of Covariance Chapter 14. Latin Square and Related Designs Chapter 15. Confounded Factorial Designs: Designs with Group-Interaction Confounding Chapter 16. Fractional Factorial Designs: Designs with Treatment-Interaction Confounding
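For the simplest design in this catalog, the completely randomized design of Chapter 4, the analysis of variance reduces to comparing between-group and within-group mean squares. A minimal sketch of the F statistic, with hypothetical scores under three treatments:

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way completely randomized design."""
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical scores under three treatments; the second group clearly differs
f_stat = one_way_anova_f([[23, 25, 21], [30, 32, 31], [22, 20, 24]])
```

The F value would then be referred to an F distribution with the two degrees of freedom shown, which is where the assumptions of Chapter 3 (normality, homogeneity of variance, independence) come into play.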
01 Jan 1976
TL;DR: For centuries knowledge meant proven knowledge, proven either by the power of the intellect or by the evidence of the senses, as discussed by the authors. The notion of proven knowledge was questioned by the sceptics more than two thousand years ago, but they were browbeaten into confusion by the glory of Newtonian physics.
Abstract: For centuries knowledge meant proven knowledge — proven either by the power of the intellect or by the evidence of the senses. Wisdom and intellectual integrity demanded that one must desist from unproven utterances and minimize, even in thought, the gap between speculation and established knowledge. The proving power of the intellect or the senses was questioned by the sceptics more than two thousand years ago; but they were browbeaten into confusion by the glory of Newtonian physics. Einstein’s results again turned the tables and now very few philosophers or scientists still think that scientific knowledge is, or can be, proven knowledge. But few realize that with this the whole classical structure of intellectual values falls in ruins and has to be replaced: one cannot simply water down the ideal of proven truth — as some logical empiricists do — to the ideal of ‘probable truth’¹ or — as some sociologists of knowledge do — to ‘truth by [changing] consensus’.²
"Research practice and ethics in mat..." refers background in this paper
Lakatos, I. (1970). Falsification and the methodology of scientific research programmes.
TL;DR: The American Statistical Association (ASA) released a policy statement on p-values and statistical significance in 2016, developed through discussion with the ASA Board and motivated by concerns about the reproducibility and replicability of scientific conclusions.
Abstract: Cobb’s concern was a long-worrisome circularity in the sociology of science based on the use of bright lines such as p < 0.05: “We teach it because it’s what we do; we do it because it’s what we teach.” This concern was brought to the attention of the ASA Board. The ASA Board was also stimulated by highly visible discussions over the last few years. For example, Science News (Siegfried 2010) wrote: “It’s science’s dirtiest secret: The ‘scientific method’ of testing hypotheses by statistical analysis stands on a flimsy foundation.” A November 2013 article in Phys.org Science News Wire (2013) cited “numerous deep flaws” in null hypothesis significance testing. A Science News article (Siegfried 2014) on February 7, 2014, said “statistical techniques for testing hypotheses...have more flaws than Facebook’s privacy policies.” A week later, statistician and “Simply Statistics” blogger Jeff Leek responded. “The problem is not that people use P-values poorly,” Leek wrote, “it is that the vast majority of data analysis is not performed by people properly trained to perform data analysis” (Leek 2014). That same week, statistician and science writer Regina Nuzzo published an article in Nature entitled “Scientific Method: Statistical Errors” (Nuzzo 2014). That article is now one of the most highly viewed Nature articles, as reported by altmetric.com (http://www.altmetric.com/details/2115792#score). Of course, it was not simply a matter of responding to some articles in print. The statistical community has been deeply concerned about issues of reproducibility and replicability of scientific conclusions. Without getting into definitions and distinctions of these terms, we observe that much confusion and even doubt about the validity of science is arising. Such doubt can lead to radical choices, such as the one taken by the editors of Basic and Applied Social Psychology, who decided to ban p-values (null hypothesis significance testing) (Trafimow and Marks 2015).
Misunderstanding or misuse of statistical inference is only one cause of the “reproducibility crisis” (Peng 2015), but to our community, it is an important one. When the ASA Board decided to take up the challenge of developing a policy statement on p-values and statistical significance, it did so recognizing this was not a lightly taken step. The ASA has not previously taken positions on specific matters of statistical practice. The closest the association has come to this is a statement on the use of value-added models (VAM) for educational assessment (Morganstein and Wasserstein 2014) and a statement on risk-limiting post-election audits (American Statistical Association 2010). However, these were truly policy-related statements. The VAM statement addressed a key educational policy issue, acknowledging the complexity of the issues involved, citing limitations of VAMs as effective performance models, and urging that they be developed and interpreted with the involvement of statisticians. The statement on election auditing was also in response to a major but specific policy issue (close elections in 2008), and said that statistically based election audits should become a routine part of election processes. By contrast, the Board envisioned that the ASA statement on p-values and statistical significance would shed light on an aspect of our field that is too often misunderstood and misused in the broader research community, and, in the process, provide the community a service. The intended audience would be researchers, practitioners, and science writers who are not primarily statisticians. Thus, this statement would be quite different from anything previously attempted. The Board tasked Wasserstein with assembling a group of experts representing a wide variety of points of view. On behalf of the Board, he reached out to more than two dozen such people, all of whom said they would be happy to be involved.
Several expressed doubt about whether agreement could be reached, but those who did said, in effect, that if there was going to be a discussion, they wanted to be involved. Over the course of many months, group members discussed what format the statement should take, tried to more concretely visualize the audience for the statement, and began to find points of agreement. That turned out to be relatively easy to do, but it was just as easy to find points of intense disagreement. The time came for the group to sit down together to hash out these points, and so in October 2015, 20 members of the group met at the ASA Office in Alexandria, Virginia. The 2-day meeting was facilitated by Regina Nuzzo, and by the end of the meeting, a good set of points around which the statement could be built was developed. The next 3 months saw multiple drafts of the statement, reviewed by group members, by Board members (in a lengthy discussion at the November 2015 ASA Board meeting), and by members of the target audience. Finally, on January 29, 2016, the Executive Committee of the ASA approved the statement. The statement development process was lengthier and more controversial than anticipated. For example, there was considerable discussion about how best to address the issue of multiple potential comparisons (Gelman and Loken 2014). We debated at some length the issues behind the words “a p-value near 0.05 taken by itself offers only weak evidence against the null
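The contested sentence quoted above concerns what a single p-value near 0.05 does and does not convey. As a minimal sketch of where such a number comes from (a two-sided p-value for a standard-normal test statistic; the 1.96 cutoff is the conventional illustration, not a recommendation of the statement):

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    return math.erfc(abs(z) / math.sqrt(2.0))

# A z statistic of 1.96 lands almost exactly on the conventional 0.05 line
p = two_sided_p(1.96)
```

The point of the debate is that such a number measures only the compatibility of the data with the null model; a p just under 0.05 is, by itself, weak grounds for a scientific claim.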