Using Multivariate Techniques to Measure the Performance of R&D Programs: A Case Example
Abstract
Performance management systems implemented in science-based government organizations have traditionally focused on research inputs and activities rather than on outputs or outcomes. However, recent legislative changes in several countries now require individual programs to report on their progress toward organizational and governmental strategic objectives. In a field where peer review remains the standard method for judging scientific success, performance measurement has often relied on complex techniques borrowed from the sciences and economics that yield little useful information for key decision makers (Geisler, 2002; McDonald & Teather, 2000; Roessner, 2002).

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Copyright and Permissions
Authors retain full copyright for articles published in JMDE. JMDE publishes under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Users may copy, distribute, and transmit the work in any medium or format for noncommercial purposes, provided that the original authors and source are credited accurately and appropriately. Only the original authors may distribute the article for commercial or compensatory purposes. To view a copy of this license, visit creativecommons.org.
References
Cozzens, S. E. (1997). The knowledge pool: Measurement challenges in evaluating fundamental research programs. Evaluation and Program Planning, 20 (1), 77-89. DOI: https://doi.org/10.1016/S0149-7189(96)00038-9
Geisler, E. (2002). What Do We Know About: R&D Metrics in Technology-Driven Organizations. Paper prepared by invitation for the Center for Innovation Management Studies at North Carolina State University. Retrieved September 14, 2005, from http://cims.ncsu.edu/documents/rdmetrics.pdf
Hair, J. F. Jr., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate Data Analysis (5th Edition). Upper Saddle River, NJ: Prentice Hall.
Harman, K. M. (2004). Producing 'industry-ready' doctorates: Australian Cooperative Research Centre approaches to doctoral education. Studies in Continuing Education, 26 (3), 387-404. DOI: https://doi.org/10.1080/0158037042000265944
Lindman, H. R. (1974). Analysis of Variance in Complex Experimental Designs. San Francisco, CA: W.H. Freeman.
McDonald, R., & Teather, G. (2000). Measurement of S&T performance in the Government of Canada: From outputs to outcomes. Journal of Technology Transfer, 25, 223-236. DOI: https://doi.org/10.1023/A:1007837009747
Osborne, J. (2002). Notes on the use of data transformations. Practical Assessment, Research and Evaluation, 8 (6). Retrieved December 4, 2004, from http://PAREonline.net/getvn.asp?v=8&n=6
Roessner, D. (2002). Outcome Measurement in the United States: State of the Art. Paper presented at the Annual Meeting of the American Association for the Advancement of Science, Boston, MA.
Rogers, M. (1998). The Definition and Measurement of Innovation. Melbourne Institute Working Paper, No. 10/98. Retrieved September 14, 2005, from http://melbourneinstitute.com/publications/working/1997-1999wp.html
Scheirer, M.A. (2000). Getting more "bang" for your performance measures "buck". American Journal of Evaluation, 21, 139-149. DOI: https://doi.org/10.1016/S1098-2140(00)00075-8