[6] Keith Curry Lance and Marti Cox, both of the Library Research Service, took issue with HAPLR's reasoning backwards from statistics to conclusions, pointed out the redundancy of HAPLR's statistical categories, and questioned its arbitrary system of weighting criteria.
Somehow, the argument goes, libraries are unique among American public and private institutions: too varied and too local to be compared.
Yet despite these assertions, the authors urge individuals to use the NCES Public Library Peer Comparison tool (nces.ed.gov/surveys/libraries/publicpeer/) to perform this supposedly impossible task.
[9] Ray Lyons and Neal Kaske later argued for greater recognition of the strengths and limitations of ratings.
The authors also note that HAPLR's calculations perform invalid mathematical operations on ordinal rankings, making comparisons of scores between libraries and between years meaningless.
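A minimal sketch can make the ordinal objection concrete; the library names and per-capita figures below are invented for illustration and are not drawn from HAPLR data:

```python
# Hypothetical illustration of why arithmetic on ordinal ranks is invalid.
# The per-capita visit figures below are invented for demonstration.
visits_per_capita = {"Library A": 10.0, "Library B": 9.9, "Library C": 2.0}

# Rank libraries from highest to lowest (1 = best), as a rating index might.
ordered = sorted(visits_per_capita, key=visits_per_capita.get, reverse=True)
ranks = {lib: i + 1 for i, lib in enumerate(ordered)}

# The rank gap A->B equals the rank gap B->C (one step each), even though
# the underlying gap is 0.1 visits in one case and 7.9 in the other.
# Summing or averaging such ranks treats these gaps as equal, which is why
# comparisons of rank-derived scores between libraries (or between years,
# where the field of competitors changes) are not meaningful.
print(ranks)  # {'Library A': 1, 'Library B': 2, 'Library C': 3}
```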
[11] This method rates libraries on four equally weighted per-capita statistics: library visits, circulation, program attendance, and public internet computer use. Libraries are compared within peer groups defined by total operating expenditures.
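As a rough sketch of how such a score could be assembled, assuming each statistic is normalized by its peer group's mean and the four ratios are averaged (the published index's exact standardization may differ, and the figures below are hypothetical):

```python
# Sketch of an equally weighted per-capita score within one expenditure
# peer group. The normalization (ratio to the peer-group mean) is an
# assumed formula for illustration only.
from statistics import mean

# Hypothetical per-capita statistics for libraries in one peer group:
# (visits, circulation, program attendance, public internet computer use)
peer_group = {
    "Library A": (5.2, 9.1, 0.30, 1.1),
    "Library B": (4.0, 7.5, 0.25, 0.9),
    "Library C": (6.1, 11.0, 0.40, 1.5),
}

# Peer-group mean for each of the four statistics.
group_means = [mean(stats[i] for stats in peer_group.values()) for i in range(4)]

def score(stats):
    """Average of the four statistics, each expressed relative to the group mean."""
    return mean(s / m for s, m in zip(stats, group_means))

for lib, stats in peer_group.items():
    print(f"{lib}: {score(stats):.3f}")
```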
Audit Commission personnel base their reports on statistical data, long-range plans, the local government's commitment to the library, and a site visit.