I just finished reading a very nice book by Stephen Senn: Dicing with Death: Chance, Risk and Health (Cambridge University Press, 2003). It is very nicely written; I am no expert in English, but I found that the quality of the writing adds a special flavor to the statistical issues discussed in this book.
I already have another book by Stephen Senn: Senn, S.S. (2008). Statistical Issues in Drug Development (2nd ed.).
Here is a brief overview of Testlet Response Theory and its Applications, by Wainer, Bradlow, and Wang (Cambridge University Press, 2007).
This book provides a very nice introduction to true score theory (which focuses on test scores) and item response theory (which focuses on item scores), and discusses the advantages of using testlets as the basis of measurement. I like such a clear overview of the main concepts that form the basis of one’s field of study.
In a recent statistical seminar I attended, there was a discussion on statistical strategies to cope with treatment switching and the estimation of survival.
The presentation drew on slides by Ian White, Methods for handling treatment switching: rank-preserving structural nested failure time models, inverse-probability-of-censoring weighting, and marginal structural models, which are taken from the HTMR network workshop on Methods for adjusting for treatment switches in late-stage cancer trials.
A bunch of papers in early view from Statistics in Medicine suddenly came out in my Google feed reader. Way too many to tweet them all, so here is a brief list of papers I should read during my forthcoming week off.
Some papers come from a special issue; others are ordinary research papers.
Vexler, A., Tsai, W-M., Malinovsky, Y. Estimation and testing based on data subject to measurement errors: from parametric to non-parametric likelihood methods.
My recent lectures focus on Mokken Scale Analysis which is based on an item response model known as the monotone homogeneity model.
Mokken scale analysis can be seen as a hierarchical scaling method in which we assume that there exists an underlying latent trait that explains the covariation between item responses. As in other IRT models, items are assumed to be orderable by ‘difficulty’, such that any individual who endorses a particular item should also endorse items of lower difficulty (as in Guttman scaling, although perfectly ordered response patterns are rarely met in practice, especially in some fields of research).
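To make the scaling idea concrete, here is a minimal sketch (with made-up data, not output from any Mokken software) of Loevinger’s H scalability coefficient, which compares the observed number of Guttman errors (endorsing a harder item without the easier one) to the number expected under independence:

```python
# Sketch: Loevinger's H for a binary item-response matrix
# (rows = persons, cols = items). Data below are illustrative only.

def loevinger_h(X):
    n, k = len(X), len(X[0])
    p = [sum(row[j] for row in X) / n for j in range(k)]  # item popularities
    obs_err = exp_err = 0.0
    for i in range(k):
        for j in range(k):
            if p[i] > p[j]:  # item i is 'easier' than item j
                # Guttman error: harder item j endorsed, easier item i not
                obs_err += sum(1 for row in X if row[j] == 1 and row[i] == 0)
                exp_err += n * (1 - p[i]) * p[j]  # expected under independence
    return 1 - obs_err / exp_err

X = [[1, 1, 1], [1, 1, 0], [1, 0, 1], [1, 1, 0], [0, 0, 0], [1, 1, 1]]
print(round(loevinger_h(X), 2))  # → 0.54 (one Guttman error in row 3)
```

A value of 1 corresponds to a perfect Guttman scale; Mokken’s usual rule of thumb requires H above 0.3 for a scale to be considered usable.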
I just received my copy of Introduction to Statistical Inference, by Jack C. Kiefer (Springer, 1987). After having read the first two chapters I wonder: how come I didn’t start with that book when I was studying elementary statistics!
I recently had to give a short series of lectures on statistics (at a very introductory level) where I started with an illustration of a coin tossing experiment (starting at slide #7).
Some further reading notes on the dimensional vs. categorical approaches to mental disorders.
I just uploaded a BibTeX file as gist 1106828 on github. An htmlized version is available here: dsm5_minimal.html. It is all about the revised version of the Diagnostic and Statistical Manual of Mental Disorders (DSM). Most of these papers come from the references list available on dsm5.org.
As a follow-up to a previous post on Psychometrics, measurement, and diagnostic medicine, here is a good article describing why a dimensional approach to the assessment and diagnosis of personality disorders is needed in place of the well-established but controversial categorical approach: Assessment and diagnosis of personality disorder: Perennial issues and an emerging reconceptualization, by Lee A.
This is a short review of Ensemble Methods in Data Mining, by G. Seni and J. Elder (Morgan & Claypool, 2010).
I won’t go over the whole textbook, but will rather summarize the introductory chapter, which provides a nice overview of how ensemble methods work and why they are interesting from a predictive perspective.
As is well known, there is a large variety of data mining algorithms, and their predictive accuracy depends on the problem at hand.
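The core intuition, combining several imperfect predictors so their errors cancel, can be sketched in a few lines. This toy majority-vote ensemble uses made-up threshold classifiers, not the models discussed in the book:

```python
# Sketch: majority voting over a pool of base classifiers.
# The base models here are toy threshold rules on one feature.

def majority_vote(models, x):
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)  # most frequent label

models = [
    lambda x: int(x > 0.3),
    lambda x: int(x > 0.5),
    lambda x: int(x > 0.7),
]

print(majority_vote(models, 0.6))  # → 1 (two of the three rules vote 1)
```

Bagging, boosting, and random forests all refine this basic recipe by controlling how the base models are trained and how their votes are weighted.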
Here are four papers dealing with the reporting of subgroup analyses and the use of baseline data (and their pitfalls).
There might be plenty of other papers on this topic available through Google, but these ones focus on RCTs and biomedical research. Below is a brief recap of the critical points raised in these papers.
A brief overview
Wang et al. (1) provide general guidelines for reporting subgroup analysis. Based on their insert p.
As a complement to the references I gave in an earlier post on Cronbach’s alpha, here are some further thoughts.
I am rereading Health Measurement Scales: A Practical Guide to Their Development and Use, by D.L. Streiner and G.R. Norman (Oxford University Press, 4th ed., 2008). I thought it would be a good opportunity to say a few more words about Cronbach’s alpha.
This is a very nice textbook on the development and validation of measurement instruments, with a lot of examples from health outcomes and clinical research.
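Since alpha comes up again here, a short sketch of how it is computed may be useful. This uses a small made-up item-score matrix and the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

```python
# Sketch: Cronbach's alpha for an items-by-subjects score matrix
# (rows = subjects, cols = items). Data are illustrative only.

def cronbach_alpha(X):
    k = len(X[0])                    # number of items
    def var(v):                      # sample variance (n - 1 denominator)
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    item_vars = sum(var([row[j] for row in X]) for j in range(k))
    total_var = var([sum(row) for row in X])
    return k / (k - 1) * (1 - item_vars / total_var)

X = [[3, 4, 3], [2, 2, 3], [4, 5, 4], [1, 2, 2], [3, 3, 4]]
print(round(cronbach_alpha(X), 2))  # → 0.92
```

High values reflect strong covariation among items, but as discussed in the earlier post, alpha depends on the number of items and should not be read as a measure of unidimensionality.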