My recent lectures focus on Mokken Scale Analysis, which is based on an item response model known as the monotone homogeneity model.
Mokken scale analysis can be seen as a hierarchical scaling method under the assumption that there exists an underlying latent trait that explains the covariation between item responses. As in other IRT models, items are assumed to be orderable by ‘difficulty’, such that any individual who endorses a particular item should also endorse all items of lower difficulty (as in Guttman scaling, although perfectly ordered response patterns are rarely observed in practice, especially in some fields of research).
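To make the Guttman idea concrete, here is a minimal Python sketch (the function name and interface are mine) that counts Guttman errors in a dichotomous response pattern, i.e. the number of item pairs where a harder item is endorsed while an easier one is not. A perfect Guttman pattern has zero errors.

```python
def guttman_errors(responses, difficulty_order):
    """Count Guttman errors in a 0/1 response pattern.

    A Guttman error occurs whenever a harder item is endorsed (1)
    while an easier item is not (0). `difficulty_order` lists item
    indices from easiest to hardest.
    """
    pattern = [responses[i] for i in difficulty_order]
    errors = 0
    for i in range(len(pattern)):
        for j in range(i + 1, len(pattern)):
            # easier item i failed, but harder item j passed
            if pattern[i] == 0 and pattern[j] == 1:
                errors += 1
    return errors

# A perfectly ordered (Guttman) pattern has zero errors:
print(guttman_errors([1, 1, 1, 0, 0], difficulty_order=[0, 1, 2, 3, 4]))  # 0
# One reversal introduces an error:
print(guttman_errors([1, 0, 1, 0, 0], difficulty_order=[0, 1, 2, 3, 4]))  # 1
```

Counting such errors across respondents is the starting point for the scalability coefficients used in Mokken scaling.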

Lastly, I attended a talk by Hervé Guyon about PLS path modeling and the debate about formative vs. reflective measurement. The discussion was motivated by applications in business and management, but there were also meaningful contributions from psychological research and psychometrics.
Some key papers are listed below:
Howell, R.D., Breivik, E., and Wilcox, J.B. (2007). Reconsidering formative measurement. Psychological Methods, 12(2), 205-218.
Bagozzi, R.P. (2007). On the meaning of formative measurement and how it differs from reflective measurement: Comment on Howell, Breivik, and Wilcox (2007).

Some (old) random notes found by chance on my iPhone.
van der Maas, H.L.J. and Wagenmakers, E.J. (2005). The Amsterdam Chess Test: a psychometric analysis of chess expertise. American Journal of Psychology, 118, 29-60. With accompanying website, Testing chess ability.
In reference to the multi-trait multi-method framework: Nussbeck, F.W., Eid, M., Geiser, C., Courvoisier, D.S., and Lischetzke, T. (2009). A CTC(M-1) Model for Different Types of Raters.

That is certainly a minor issue of terminology, but what is best: “contemporary” or “modern” psychometrics?
I have often encountered the term modern psychometrics when speaking of Item Response Theory models, as opposed to the Classical Test Theory approach in which all statistical indicators rely on untransformed (raw) scores. There is even a book with precisely this title: Rust, J. and Golombok, S. (2008). Modern Psychometrics: The Science of Psychological Assessment (3rd ed.). Routledge.
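To make the contrast concrete: in an IRT model such as the Rasch model, the quantity being modeled is the probability of endorsing an item given a latent ability, rather than a raw sum score. A minimal sketch (function name is mine):

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability of a correct response given
    ability theta and item difficulty b (both on the logit scale)."""
    return 1 / (1 + math.exp(-(theta - b)))

# When ability equals item difficulty, the probability is exactly 0.5:
print(rasch_prob(0.0, 0.0))  # 0.5
```

Item difficulties and person abilities live on the same logit scale, which is what makes item-invariant measurement possible, in contrast to CTT raw scores.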

Some further reading notes on the dimensional vs. categorical approaches to mental disorders.
I just uploaded a BibTeX file as gist 1106828 on GitHub. An HTML version is available here: dsm5_minimal.html. It is all about the revised version of the Diagnostic and Statistical Manual of Mental Disorders (DSM). Most of these papers come from the references list available on dsm5.org.
As a follow-up to a previous post on Psychometrics, measurement, and diagnostic medicine, here is a good article describing why a dimensional approach to the assessment and diagnosis of personality disorders is necessary in place of the well-established but controversial categorical approach: Assessment and diagnosis of personality disorder: Perennial issues and an emerging reconceptualization, by Lee A.

As a complement to the references I gave in an earlier post on Cronbach’s alpha, here are some further thoughts.
I am rereading Health Measurement Scales: A Practical Guide to Their Development and Use, by D.L. Streiner and G.R. Norman (Oxford University Press, 2008, 4th ed.). I thought it would be a good opportunity to say a few more words about Cronbach’s alpha.
This is a very nice textbook on the development and validation of measurement instruments, with a lot of examples from health outcomes and clinical research.
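As a quick reminder, Cronbach’s alpha is just a function of the item variances and the variance of the total score: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A small self-contained Python sketch (function name is mine):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score lists (one list per item).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    # total score for each subject (items are aligned column-wise)
    total_scores = [sum(vals) for vals in zip(*items)]
    item_var_sum = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / variance(total_scores))

# Two perfectly parallel items give alpha = 1:
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```

With real data one would of course rely on a dedicated package, but the formula itself is this simple.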

I have always found Dave Garson’s tutorial on Reliability Analysis very interesting. However, all illustrations use SPSS. Here is a friendly R version of some of these notes, especially for computing the intraclass correlation.
Background. There are different versions of the intraclass correlation coefficient (ICC), which reflect distinct ways of accounting for rater or item variance in the overall variance, following Shrout and Fleiss (1979) (cases 1 to 3 in their Table 1):
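The three single-rating cases can be computed directly from the two-way ANOVA mean squares. The sketch below is an illustrative Python translation of the Shrout and Fleiss formulas (the notes themselves use R; the function name and interface here are mine):

```python
from statistics import mean

def icc_shrout_fleiss(ratings):
    """ICC(1,1), ICC(2,1), ICC(3,1) for an n x k table (targets x raters),
    from two-way ANOVA mean squares (Shrout & Fleiss, 1979):
      BMS = between targets, JMS = between raters,
      EMS = residual, WMS = within targets.
    """
    n, k = len(ratings), len(ratings[0])
    grand = mean(x for row in ratings for x in row)
    row_means = [mean(row) for row in ratings]
    col_means = [mean(col) for col in zip(*ratings)]
    bms = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    jms = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    ems = sse / ((n - 1) * (k - 1))
    wms = sum((ratings[i][j] - row_means[i]) ** 2
              for i in range(n) for j in range(k)) / (n * (k - 1))
    icc1 = (bms - wms) / (bms + (k - 1) * wms)
    icc2 = (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)
    icc3 = (bms - ems) / (bms + (k - 1) * ems)
    return icc1, icc2, icc3
```

A useful sanity check: a constant rater bias (one rater always scoring one point higher) leaves ICC(3,1) at 1 but pulls ICC(2,1) below 1, since case 3 treats raters as fixed and ignores their systematic differences.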

I realized, to my shame, that I will be missing the Psychoco 2011 workshop. Here are some notes from the program about current research in psychometrics with R.
Differential Item Functioning analysis. Several packages have been released on CRAN over the last two years or so. These include:
difR, from D. Magis and coll., which allows testing for uniform and non-uniform DIF effects with dichotomous items. In its current version, ten methods are implemented: Mantel-Haenszel, Standardization, Breslow-Day, Logistic regression, Lord’s chi-square test, Raju’s area, Likelihood-ratio test, Generalized Mantel-Haenszel, Generalized logistic regression, and Generalized Lord’s chi-square test.
psychotree, from Carolin Strobl and coll.
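Of the methods listed, the Mantel-Haenszel procedure is the easiest to sketch by hand: examinees are stratified by total score, a 2x2 table (group x response) is built per stratum, and the odds ratio is pooled across strata. The helper below is an illustrative Python implementation, not difR’s API (the function name and interface are mine):

```python
import math
from collections import defaultdict

def mantel_haenszel_dif(scores, group, item):
    """Mantel-Haenszel DIF statistic for one dichotomous item.

    Examinees are stratified by total score; within each stratum we
    tabulate correct/incorrect counts for the reference (group=0) and
    focal (group=1) groups, then pool the odds ratio across strata.
    Returns the MH common odds ratio and the ETS delta scale value,
    delta = -2.35 * ln(alpha_MH).
    """
    strata = defaultdict(lambda: [0, 0, 0, 0])  # [A, B, C, D] per score level
    for s, g, y in zip(scores, group, item):
        cell = strata[s]
        if g == 0:  # reference group: A = correct, B = incorrect
            cell[0 if y == 1 else 1] += 1
        else:       # focal group: C = correct, D = incorrect
            cell[2 if y == 1 else 3] += 1
    num = den = 0.0
    for a, b, c, d in strata.values():
        t = a + b + c + d
        if t == 0:
            continue
        num += a * d / t
        den += b * c / t
    alpha_mh = num / den
    return alpha_mh, -2.35 * math.log(alpha_mh)
```

An odds ratio of 1 (delta of 0) indicates no uniform DIF; the sketch ignores the continuity-corrected chi-square test and the degenerate strata that a production implementation such as difR handles.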

Just a few words about the 6th CARME conference, on Correspondence Analysis and Related Methods. I only attended a few sessions, but it was a great opportunity to see what’s currently going on in the analysis of tabular data.
The conference was held at the Agrocampus in Rennes. I went to the same place two years ago for the UseR! 2009 conference (I found it too crowded, but there was really great stuff presented there anyway).

Here is a stack of papers about multivariate data analysis (grabbed from a course by Gilbert Saporta held in 2010 at CNAM, Paris) that I should (have) read.
Jensen, D.R. and Ramirez, D.E. (2008). Anomalies in the Foundations of Ridge Regression. International Statistical Review, 76(1), 89-105.
Hyvärinen, A. and Oja, E. (2000). Independent component analysis: algorithms and applications. Neural Networks, 13, 411-430.
Scholkopf, B., Smola, A., and Muller, K.

All material © 2018 Christophe Lalanne