P.Mean >> Category >> Writing research papers (created 2007-06-14).
These pages discuss some of the issues that you need to consider when writing about a scientific or medical topic. Also see Category: Grant writing, Category: Post hoc power, Category: Presenting research data.
22. P.Mean: Data sharing (created 2012-11-21). I came across several interesting papers and editorials about data sharing.
21. P.Mean: What's the name of the test for comparing two proportions? (created 2012-09-12). A commonly used statistical test is the comparison of two independent proportions. For example, you are looking at the rate of steroid-induced hyperglycemia among patients receiving high doses of steroids compared to the rate among patients receiving low doses. There are several terms that you can use here because there are several equivalent ways to test this hypothesis. I prefer to refer to the statistical method here as logistic regression. Here's why.
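The equivalence among these tests can be sketched numerically. The following Python snippet (with invented counts for the hypothetical steroid example; the numbers are illustrative only) computes the classic large-sample z statistic for two independent proportions. Its square equals the one-degree-of-freedom Pearson chi-square for the same 2x2 table, and a logistic regression with a single binary predictor tests the same hypothesis.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Large-sample z test for two independent proportions.

    Returns (z, z**2); the squared statistic equals the 1-df
    Pearson chi-square for the corresponding 2x2 table, which is
    the same hypothesis a logistic regression with one binary
    predictor tests.
    """
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, z * z

# Invented counts: 18/40 hyperglycemic on high-dose steroids,
# 9/45 on low-dose steroids.
z, chisq = two_proportion_z(18, 40, 9, 45)
print(round(z, 3), round(chisq, 3))
```

Whichever name you use in the write-up, the p-value from these equivalent procedures will agree in large samples.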
20. P.Mean: Free consultation means no co-authorship? (created 2012-03-19). I heard about an interaction between a client and one of the other statisticians working at the UMKC Research and Statistical Consult Service (RSCS). This statistician had mentioned the (very reasonable) expectation of getting co-authorship on any publication emanating from the consultation. Apparently this was a surprise to the client who claimed that co-authorship is inappropriate because the RSCS provides consulting for free.
19. The Monthly Mean: What statistical details belong in the methods section of your research paper? (August 2010)
18. P.Mean: Documenting negative results in a research paper (created 2001-10-11, revised 2011-04-26). Dear Professor Mean, I have just finished a well-designed research study and my results are negative. I'm worried about publication bias; most journals will only accept papers that show positive results. How do I document the negative findings in a research paper in a way that will convince a journal to accept my paper? -- Apprehensive Arturo
17. P.Mean: Creating LaTeX formulas on the fly (created 2010-12-20). I don't use LaTeX a lot (though I should) because I am fairly happy with a proprietary product that I use for formulas, MathType. Still, there are times when it would be nice to use a bit of LaTeX, and there's a web site that makes this easy.
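For readers who haven't tried such a site: you type a small LaTeX fragment and get back a rendered image. A generic example of the kind of fragment you might render (this formula is illustrative, not one from the original post):

```latex
\hat{p} = \frac{x}{n}, \qquad
\mathrm{CI}_{95\%} = \hat{p} \pm 1.96\,\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
```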
16. P.Mean: How to avoid charges of plagiarism (created 2010-05-15). I'm not an expert on this, but I got a question about how to avoid charges of plagiarism in a thesis, especially the sections of the thesis that reviewed existing research and theoretical background. Here's how I responded.
15. P.Mean: Reporting Fisher's Exact test (created 2008-09-26). I am a doctoral student trying to handle the statistical analysis of very skewed data, so I think I've done all the non-parametric tests by the book. My time for writing the report was almost up when, suddenly, my advisor told me to change from the Pearson chi-square to Fisher's Exact test, without mentioning any details about how to report it, and none of the books I have carry an example. Thank you so much for your very user/student friendly web page. I owe you!
14. P.Mean: What does "population-based" mean? (created 2008-09-12). I was reviewing an article that I am a co-author on, and the author described the dearth of "population-based" studies. I had to think a bit about what that term really meant.
13. P.Mean: How to report a one-tailed Fisher's Exact test (created 2008-07-12). Dear Professor Mean, Thank you for your informative page about the Fisher's Exact test. Can you please clarify how the choice between a one-tailed and a two-tailed test affects the way that a significant result should be reported?
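The distinction matters because the two-sided p-value of Fisher's Exact test is not simply double the one-sided value, so the report should state which version was used. A pure-Python sketch of both (the two-sided convention shown here, summing all tables no more probable than the observed one, matches common software behavior; other conventions exist):

```python
from math import comb

def fisher_exact(a, b, c, d):
    """Fisher's Exact test for the 2x2 table [[a, b], [c, d]].

    Returns (one_sided_p, two_sided_p). The one-sided p-value is
    P(cell a >= observed) under the hypergeometric null with fixed
    margins; the two-sided p-value sums the probabilities of all
    tables no more probable than the observed one.
    """
    row1, col1, n = a + b, a + c, a + b + c + d

    def prob(x):
        # Hypergeometric probability that cell (1,1) equals x.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    p_obs = prob(a)
    one_sided = sum(prob(x) for x in range(a, hi + 1))
    two_sided = sum(p for p in (prob(x) for x in range(lo, hi + 1))
                    if p <= p_obs * (1 + 1e-9))
    return one_sided, two_sided

# Fisher's classic tea-tasting table: [[3, 1], [1, 3]].
one, two = fisher_exact(3, 1, 1, 3)
print(round(one, 4), round(two, 4))  # 0.2429 0.4857
```

Note that the two-sided p-value here is twice the one-sided one only by coincidence of this symmetric table; in general the two can differ in either direction.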
Brad Templeton. 10 Big Myths about copyright explained. Excerpt: "Note that this is an essay about copyright myths. It assumes you know at least what copyright is -- basically the legal exclusive right of the author of a creative work to control the copying of that work." [Accessed April 22, 2009]. Available at: http://www.templetons.com/brad/copymyths.html.
M. Castillo. Authorship and Bylines. American Journal of Neuroradiology. 2009;30(8):1455-1456. Excerpt: "From the ancient Greeks to Shakespeare, the question of authorship often arises. The issue of appropriate article authorship has always been of special interest to editors of scientific journals. In the biomedical sciences, as the complexity and funding of published studies increases, so does the length of the byline. Although a previous American Journal of Neuroradiology Editor-in-Chief already addressed this issue, I think it is time to revisit it.1 From my own experience, articles can be categorized according to the number of authors as follows: fewer than 2 authors (Editorials, Commentaries, Letters), fewer than 5 authors (Case Reports and Technical Notes), 5–10 authors (retrospective full-length articles), 10–15 (prospective, often grant-funded articles), more than 15 authors (reports of task forces, white papers, etc). Among so many authors, it is not uncommon to find individuals whose contributions are minimal and many times questionable. Who actually did enough work to be listed as an author? In other words, who can claim ownership rights in a particular intellectual property?" [Accessed August 14, 2010]. Available at: http://www.ajnr.org/cgi/reprint/ajnr.A1636v1.pdf.
Miguel Roig. Avoiding plagiarism, self-plagiarism, and other questionable writing practices: A guide to ethical writing. Excerpt: "In recognizing the importance of educating aspiring scientists in the responsible conduct of research (RCR), the Office of Research Integrity (ORI), began sponsoring in 2002 the creation of instructional resources to address this pressing need. The present guide on avoiding plagiarism and other inappropriate writing practices was created, in part, to meet this need. Its purpose is to help students, as well as professionals, identify and prevent such practices and to develop an awareness of ethical writing. This guide is one of the many products stemming from ORI’s educational initiatives in the RCR." [Accessed December 7, 2009]. Available at: http://facpub.stjohns.edu/~roigm/plagiarism/Index.html.
Iain Hrynaszkiewicz. BMC Research Notes – adding value to your data. Posted on the BioMed Central Blog, Thursday, September 2, 2010. Excerpt: "Support for scientific data sharing is gathering more and more support in 2010, so rather than “why share data?” the question now is “how?”. Making data available in readily interpretable formats is vital to realising its value in driving new knowledge discovery, and BMC Research Notes today launches a new initiative aimed at promoting best practice in sharing and publishing data, with a focus on standardized, re-useable formats." [Accessed September 3, 2010]. Available at: http://blogs.openaccesscentral.com/blogs/bmcblog/entry/bmc_research_notes_wants_your.
Iain Hrynaszkiewicz. A call for BMC Research Notes contributions promoting best practice in data standardization, sharing and publication. BMC Research Notes. 2010;3(1):235. Abstract: "BMC Research Notes aims to ensure that data files underlying published articles are made available in standard, reusable formats, and the journal is calling for contributions from the scientific community to achieve this goal. Educational Data Notes included in this special series should describe a domain-specific data standard and provide an example data set with the article, or a link to data that are permanently hosted elsewhere. The contributions should also provide some evidence of the data standard's application and preparation guidance that could be used by others wishing to conduct similar experiments. The journal is also keen to receive contributions on broader aspects of scientific data sharing, archiving, and open data." [Accessed September 3, 2010]. Available at: http://www.biomedcentral.com/content/3/1/235.
Sally Hopewell, Mike Clarke, David Moher, et al. CONSORT for Reporting Randomized Controlled Trials in Journal and Conference Abstracts: Explanation and Elaboration. PLoS Medicine. 2008;5(1):e20. Abstract: "Background: Clear, transparent, and sufficiently detailed abstracts of conferences and journal articles related to randomized controlled trials (RCTs) are important, because readers often base their assessment of a trial solely on information in the abstract. Here, we extend the CONSORT (Consolidated Standards of Reporting Trials) Statement to develop a minimum list of essential items, which authors should consider when reporting the results of a RCT in any journal or conference abstract. Methods and Findings: We generated a list of items from existing quality assessment tools and empirical evidence. A three-round, modified-Delphi process was used to select items. In all, 109 participants were invited to participate in an electronic survey; the response rate was 61%. Survey results were presented at a meeting of the CONSORT Group in Montebello, Canada, January 2007, involving 26 participants, including clinical trialists, statisticians, epidemiologists, and biomedical editors. Checklist items were discussed for eligibility into the final checklist. The checklist was then revised to ensure that it reflected discussions held during and subsequent to the meeting. CONSORT for Abstracts recommends that abstracts relating to RCTs have a structured format. Items should include details of trial objectives; trial design (e.g., method of allocation, blinding/masking); trial participants (i.e., description, numbers randomized, and number analyzed); interventions intended for each randomized group and their impact on primary efficacy outcomes and harms; trial conclusions; trial registration name and number; and source of funding. We recommend the checklist be used in conjunction with this explanatory document, which includes examples of good reporting, rationale, and evidence, when available, for the inclusion of each item. Conclusions: CONSORT for Abstracts aims to improve reporting of abstracts of RCTs published in journal articles and conference proceedings. It will help authors of abstracts of these trials provide the detail and clarity needed by readers wishing to assess a trial's validity and the applicability of its results." [Accessed March 4, 2009]. Available at: http://dx.doi.org/10.1371%2Fjournal.pmed.0050020.
R A Parker, N G Berman. Criteria for authorship for statisticians in medical papers. Stat Med. 1998;17(20):2289-2299. We organize a statistician's potential scientific and intellectual contributions to a medical study into three types of activities relating to design, implementation and analysis. For each type, we describe high-level, mid-level and low-level contributions. Using this framework, we develop a point system to assess whether authorship is justified. Although we recommend discussion and resolution of authorship issues early in the course of any project, our system is especially useful when this has not been done. [Accessed August 14, 2010]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/9819828.
Robert Siebers. Data Inconsistencies in Abstracts of Articles in Clinical Chemistry. Clin Chem. 2001;47(1):149. Excerpt: "The abstract of a research article is considered to be the most important part of the article. It should contain, in a brief but concise form, the critical components of the scientific study being reported. It often is the only part of the article that is read and is widely available through biomedical databases. Thus, it is of fundamental importance that data reported in abstracts are consistent with those reported in the body of the article. Recent studies have reported that data in abstracts sometimes are inconsistent with those reported in the article. The aim of this study was to determine the abstract data inconsistency rate of Clinical Chemistry." [Accessed March 4, 2009]. Available at: http://www.clinchem.org/cgi/content/full/47/1/149.
Edward Mills, An-Wen Chan, Ping Wu, et al. Design, analysis, and presentation of crossover trials. Trials. 2009;10(1):27. Abstract: "OBJECTIVE: Although crossover trials enjoy wide use, standards for analysis and reporting have not been established. We reviewed methodological aspects and quality of reporting in a representative sample of published crossover trials. METHODS: We searched MEDLINE for December 2000 and identified all randomized crossover trials. We abstracted data independently, in duplicate, on 14 design criteria, 13 analysis criteria, and 14 criteria assessing the data presentation. RESULTS: We identified 526 randomized controlled trials, of which 116 were crossover trials. Trials were drug efficacy (48%), pharmacokinetic (28%), and nonpharmacologic (30%). The median sample size was 15 (interquartile range 8-38). Most (72%) trials used 2 treatments and had 2 periods (64%). Few trials reported allocation concealment (17%) or sequence generation (7%). Only 20% of trials reported a sample size calculation and only 31% of these considered pairing of data in the calculation. Carry-over issues were addressed in 29% of trial's methods. Most trials reported and defended a washout period (70%). Almost all trials (93%) tested for treatment effects using paired data and also presented details on by-group results (95%). Only 29% presented CIs or SE so that data could be entered into a meta-analysis. CONCLUSION: Reports of crossover trials frequently omit important methodological issues in design, analysis, and presentation. Guidelines for the conduct and reporting of crossover trials might improve the conduct and reporting of studies using this important trial design." [Accessed May 20, 2009]. Available at: http://www.trialsjournal.com/content/10/1/27.
Webpage: Peter Suber. A field guide to misunderstandings about open access (SPARC) Excerpt: "The woods are full of misunderstandings about OA. They thrive in almost every habitat, and the population soars whenever a major institution adopts an OA policy. Contact between new developments and new observers who haven't followed the annual migrations always results in a colorful boomlet of young misunderstandings. Some of these misunderstandings are mistaken for one another, especially in the flurry of activity, because of their similar markings and habitat. Some are mistaken for understanding by novices unfamiliar with the medley of variant plumage, adaptive camouflage, and deceptive vocalizations. This field guide should help you identify 25 of the most common visitors to your neck of the woods. Leave your binoculars at home. All of these can be seen with the naked eye. With no more than this guide, and some patient observation, every trip to a conference, and even an occasional faculty meeting, can be an enjoyable and educational outing. " [Accessed on April 21, 2011]. http://www.arl.org/sparc/publications/articles/openaccess_fieldguide.shtml.
Journal article: Kaj Sand-Jensen. How to write consistently boring scientific literature. Oikos. 2007;116(5):723-727. Abstract: "Although scientists typically insist that their research is very exciting and adventurous when they talk to laymen and prospective students, the allure of this enthusiasm is too often lost in the predictable, stilted structure and language of their scientific publications. I present here, a top-10 list of recommendations for how to write consistently boring scientific publications. I then discuss why we should and how we could make these contributions more accessible and exciting." [Accessed March 30, 2011]. Available at: http://onlinelibrary.wiley.com/doi/10.1111/j.0030-1299.2007.15674.x/abstract.
R. L. Glass. A letter from the frustrated author of a journal paper. Journal of Systems and Software. 2000;54(1):1. Excerpt: "Editor’s Note: It seems appropriate, in this issue of JSS containing the findings of our annual Top Scholars/Institutions study, to pay tribute to the persistent authors who make a journal like this, and a study like that, possible. In their honor, we dedicate the following humorous, anonymously-authored, letter!" [Accessed September 22, 2010]. Available at: http://dx.doi.org/10.1016/S0164-1212(00)00020-0.
LiquidPub. Liquid Publications: Scientific Publications meet the Web. Excerpt: "The LiquidPub project proposes a paradigm shift in the way scientific knowledge is created, disseminated, evaluated and maintained. This shift is enabled by the notion of Liquid Publications, which are evolutionary, collaborative, and composable scientific contributions. Many Liquid Publication concepts are based on a parallel between scientific knowledge artifacts and software artifacts, and hence on lessons learned in (agile, collaborative, open source) software development, as well as on lessons learned from Web 2.0 in terms of collaborative evaluation of knowledge artifacts." [Accessed August 10, 2010]. Available at: http://project.liquidpub.org/.
Josh Herigon. Paying for access to medical journals. Excerpt: "I’m not sure about the validity of this study: Free Access to U.S. Research Papers Could Yield $1 Billion in Benefits. Quantifying how much money will be saved by increased efficiency due to open access seems like fuzzy math at best. However, we do need better access to medical journal articles. As a researcher, I’ve constantly fought the battle against firewalled journals. I am fortunate to be part of a university that has excellent access to most of the published medical research I need. But I still come across what is the researchers’ equivalent of the “blue screen of death”: the “login or purchase this individual article for $30″ screen." [Accessed December 4, 2010]. Available at: http://www.kevinmd.com/blog/2010/11/paying-access-medical-journals.html.
Peter Bacchetti. Peer review of statistics in medical research: the other problem. BMJ. 2002;324(7348):1271-1273. Excerpt: "The process of peer review before publication has long been criticised for failing to prevent the publication of statistics that are wrong, unclear, or suboptimal. 1 2 My concern here, however, is not with failing to find flaws, but with the complementary problem of finding flaws that are not really there. My impression as a collaborating and consulting statistician is that spurious criticism of sound statistics is increasingly common, mainly from subject matter reviewers with limited statistical knowledge. Of the subject matter manuscript reviews I see that raise statistical issues, perhaps half include a mistaken criticism. In grant reviews unhelpful statistical comments seem to be a near certainty, mainly due to unrealistic expectations concerning sample size planning. While funding or publication of bad research is clearly undesirable, so is preventing the funding or publication of good research. Responding to misguided comments requires considerable time and effort, and poor reviews are demoralising---a subtler but possibly more serious cost." [Accessed July 7, 2010]. Available at: http://www.bmj.com/cgi/content/full/324/7348/1271.
The Plain Language Action and Information Network. Plain Language: Improving Communications from the Federal Government to the Public. Excerpt: " Plain language (also called Plain English) is communication your audience can understand the first time they read or hear it. Language that is plain to one set of readers may not be plain to others. Written material is in plain language if your audience can: Find what they need; Understand what they find; and Use what they find to meet their needs." [Accessed March 4, 2009]. Available at: http://www.plainlanguage.gov/.
Morgane Ethgen, Isabelle Boutron, Philippe Gabriel Steg, Carine Roy, Philippe Ravaud. Quality of reporting internal and external validity data from randomized controlled trials evaluating stents for percutaneous coronary intervention. BMC Medical Research Methodology. 2009;9(1):24. Abstract: "BACKGROUND: Stents are commonly used to treat patients with coronary artery disease. However, the quality of reporting internal and external validity data in published reports of randomised controlled trials (RCTs) of stents has never been assessed. The objective of our study was to evaluate the quality of reporting internal and external validity data in published reports of RCTs assessing the stents for percutaneous coronary interventions. METHODS: A systematic literature review was conducted. Reports of RCTs assessing stents for percutaneous coronary interventions indexed in MEDLINE and the Cochrane Central Register of Controlled Trials and published between January 2003 and September 2008 were selected. A standardized abstraction form was used to extract data. All analyses were adjusted for the effect of clustering articles by journal. RESULTS: 132 articles were analyzed. The generation of the allocation sequence was adequate in 58.3% of the reports; treatment allocation was concealed in 34.8%. Adequate blinding was reported in one-fifth of the reports. An intention-to-treat analysis was described in 79.5%. The main outcome was a surrogate angiographic endpoint in 47.0%. The volume of interventions per center was described in two reports. Operator expertise was described in five (3.8%) reports. The quality of reporting was better in journals with high impact factors and in journals endorsing the CONSORT statement. CONCLUSIONS: The current reporting of results of RCTs testing stents needs to be improved to allow readers to appraise the risk of bias and the applicability of the results." [Accessed April 16, 2009]. Available at: http://www.biomedcentral.com/1471-2288/9/24.
Diana Kornbrot. Reporting Non-significant Results: Summary. Excerpt: "Aims: The purpose of the survey was to determine expert views on how to report the results of studies that produced non-significant results. The ultimate aim is to produce agreed guidelines for non-significant results. Method: Respondents were presented with 2 scenarios of studies comparing 2 groups on the proportion of people with high blood pressure. Results were given as proportion in each group with high blood pressure, size of the study and the (non-significant) chi-square. Respondents were first asked for free-form text on how the results should be reported to lay and professional audiences. They were then presented with multiple choices for the proportions of respondents with high blood pressure in each group that would be expected in a replication study. Discipline, role, experience and further comments were also elicited." [Accessed March 4, 2009]. Available at: http://web.me.com/kornbrot/KornbrotNonSignificantSummary.htm.
Patrick Vandewalle, Jelena Kovacevic, Martin Vetterli. Reproducible Research. Excerpt: "Welcome on this site about reproducible research. This site is intended to gather a lot of information and useful links about reproducible research. As the authors (Patrick Vandewalle, Jelena Kovacevic and Martin Vetterli) are all doing research in signal/image processing, that will also be the main focus of this site." [Accessed October 5, 2010]. Available at: http://reproducibleresearch.net.
Martha Davis. Scientific papers and presentations. 2nd ed. San Diego: Academic Press. Excerpt: "Scientific communication is essential for helping us use and take care of this earth. Researchers who discover the wonders of science must tell someone about their findings in clear, complete, and concise terms. To add to the pool of scientific knowledge, scientists must synthesize available information with what they discover. If a scientist garbles words or leaves out important points, messages become unclear and the progress of science suffers." Available at: http://lccn.loc.gov/2004050500.
Rick McLean, Brian H Richards, Janet I Wardman. The Effect of Web 2.0 on the Future of Medical Practice and Education: Darwikinian Evolution Or Folksonomic Revolution?. The Medical Journal of Australia. 2007;187(3):174-177. [Accessed March 4, 2009]. Available at: http://www.mja.com.au/public/issues/187_03_060807/mcl10181_fm.html.
Richard Horton, Richard Smith. Time to redefine authorship. BMJ. 1996;312(7033):723. Excerpt: "Physicists do it by the hundred; scientists do it in groups; fiction writers mostly alone. And medical researchers? Rarely now do they write papers alone, and the number of authors on papers is increasing steadily.1 Under pressure from molecular biologists, the National Library of Medicine in Washington is planning to list not just the first six authors in Index Medicus but the first 24 plus the last author.2 Notions of authorship are clearly in the eye of the beholder, and many authors on modern papers seem to have made only a minimal contribution.3 4 5 Few authors on modern multidisciplinary medical papers fit the 19th century notion of taking full responsibility for every word and thought included, and yet the cumbersome definition of authorship produced by the International Committee of Medical Journal Editors (the Vancouver Group) is based on that concept.6 The definition produced by editors seems to be out of touch with what is happening in the real world of research, and researchers and editors need to consider a new definition. The BMJ, Lancet, University of Nottingham, and Locknet (a network to encourage research into peer review7) are therefore organising a one day meeting on 6 June in Nottingham to consider the need for a new definition. All the members of the Vancouver Group will be there, and everybody is welcome." [Accessed August 14, 2010]. Available at: http://www.bmj.com/cgi/content/full/312/7033/723.
Jason W. Osborne, Abigail Holland. What is authorship, and what should it be? A survey of prominent guidelines for determining authorship in scientific publications. Practical Assessment, Research & Evaluation. 2009;14(15). Abstract: "Before the mid 20th century most scientific writing was solely authored (Claxton, 2005; Greene, 2007) and thus it is only relatively recently, as science has grown more complex, that the ethical and procedural issues around authorship have arisen. Fields as diverse as medicine (International Committee of Medical Journal Editors, 2008), mathematics (e.g., American Statistical Association, 1999), the physical sciences (e.g., American Chemical Society, 2006), and the social sciences (e.g., American Psychological Association, 2002) have, in recent years, wrestled with what constitutes authorship and how to eliminate problematic practices such as honorary authorship and ghost authorship (e.g., Anonymous, 2004; Claxton, 2005; Manton & English, 2008). As authorship is the coin of the realm in academia (Louis, Holdsworth, Anderson, & Campbell, 2008), it is an ethical issue of singular importance. The goal of this paper is to review prominent and diverse guidelines concerning scientific authorship and to attempt to synthesize existing guidelines into recommendations that represent ethical practices for ensuring credit where (and only where) credit is due."
George Mason University. Zotero: The Next-Generation Research Tool. Excerpt: "Zotero [zoh-TAIR-oh] is a free, easy-to-use Firefox extension to help you collect, manage, and cite your research sources. It lives right where you do your work—in the web browser itself. " [Accessed March 4, 2009]. Available at: http://www.zotero.org/.
All of the material above this paragraph is licensed under a Creative Commons Attribution 3.0 United States License. This page was written by Steve Simon and was last modified on 2010-12-20. The material below this paragraph links to my old website, StATS. Although I wrote all of the material listed below, my ex-employer, Children's Mercy Hospital, has claimed copyright ownership of this material. The brief excerpts shown here are included under the fair use provisions of U.S. Copyright laws.
12. Stats: Calculating statistics on whole numbers (April 4, 2008). Dear Professor Mean, I have some data that has a resolution of 1, meaning that these are whole numbers only and no decimals. When I calculate the mean and standard deviation, should I report the actual value or should I round it?
11. Stats: Presenting unadjusted and adjusted estimates side by side (March 24, 2008). Someone on the Medstats discussion group asked about reporting the analysis of a model without adjustment for covariates along with the analysis adjusted for covariates. What is the purpose of reporting the unadjusted analysis?
10. Stats: Open-access (OA) journals and their impact on research and the practice of medicine (October 18, 2007). This talk will cover the following topics: What is open-access (OA)? What are the historical antecedents to OA? What are the costs and benefits of OA? How can you help promote OA?
9. Stats: Open Access Journals, part 2 (October 14, 2007). I'm giving a talk at the Grand Rounds for CMH on Thursday, October 18. I'm using PowerPoint for this talk, and I have a version in web format that skips the images but provides more content. Here are some resources I probably will not use in my talk.
8. Stats: Seminar on Open Access Journals (October 5, 2007). I was inspired by a talk by Jim Pitman, President of the IMS, at the Joint Statistical Meetings in August, "Open Access to Professional Information," so when an opportunity arose for me to give a talk at our local Grand Rounds, I suggested a similar talk with the title "Open Access Publication and its impact on research and the practice of medicine."
7. Stats: Seven different formulas for the odds ratio (June 14, 2007). I received an interesting email recently. Someone was studying the relationship between firearm suicides and veterans to answer the question: "Are veterans more likely to commit suicide with a firearm than non-veterans?" Then this person proceeded to find seven different formulas for the odds ratio and tried all of them. How I wish more people were this adventurous! Here are the results.
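Two of those equivalent formulas are easy to sketch in Python (the 2x2 counts below are invented for illustration; for a table with cells a, b, c, d, the cross-product ratio ad/bc and the ratio of the two groups' odds are algebraically the same quantity):

```python
def odds_ratio_cross_product(a, b, c, d):
    """Odds ratio as the cross-product ratio ad/bc."""
    return (a * d) / (b * c)

def odds_ratio_from_odds(a, b, c, d):
    """Same quantity as the ratio of the two groups' odds."""
    p1, p2 = a / (a + b), c / (c + d)
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Invented 2x2 table: 30/100 with the outcome in one group,
# 15/100 in the other.
table = (30, 70, 15, 85)
print(odds_ratio_cross_product(*table))  # 2.428571...
print(odds_ratio_from_odds(*table))      # same value
```

Other formulations (e.g., the exponentiated slope from a logistic regression) reduce to this same number for a single binary predictor.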
6. Stats: Another open site closes (May 23, 2005). I often link to interesting resources on the web, and it is always distressing to find that content that had been freely available is now accessible only through a subscription or membership in a professional society. This breaks all the links I have on my web pages, and there is no way to fix them. When you are providing educational examples to a broad audience, you cannot assume that most of your readers belong to a particular scientific or medical society or that they subscribe to a particular journal or magazine.
5. Stats: Public access to publications from NIH-funded research (February 15, 2005). The NIH is now requesting that research publications supported in whole or in part by NIH grants be deposited in PubMed Central. The authors can designate a time delay between the appearance of the publication in the journal itself and the availability of the article in PubMed Central.
4. Stats: Free Full Text on the Web (August 24, 2004). As I mention at the top of the weblog, I preferentially try to cite medical literature where free full text is available on the web. Several journals make their full text freely available.
3. Stats: Data is/are (November 26, 2003). Dear Professor Mean, Should I write "data is" or "data are"?
2. Stats: Displaying tables of percentages (November 6, 2002). Dear Professor Mean, My colleagues and I argue over the most appropriate way for displaying tables of percentages. Must the row or column always add to 100%? Also, in cases where it is difficult to know which variable is dependent, how does one decide the best way to present the results? -- Garrulous Gail
1. Stats: E notation (September 3, 1999). Dear Professor Mean: In my regression output, I saw the following value: -2.384E-03. What does this E notation mean?
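The short answer: E notation is scientific notation, so -2.384E-03 means -2.384 x 10^-3, i.e. -0.002384. A minimal Python illustration:

```python
# -2.384E-03 is shorthand for -2.384 x 10**(-3).
value = float("-2.384E-03")
print(value)           # -0.002384
print(f"{value:.3E}")  # -2.384E-03
```

Most statistical packages fall back on this notation when a coefficient is too small to display in fixed decimal form.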
Browse other categories at this site
Browse through the most recent entries