Survey Design (問卷設計)
Released

Apply rigorous survey design principles including construct operationalization, Likert scale development, reliability and validity assessment, and common method variance control. Use this skill when the user designs questionnaires, develops measurement items, needs to evaluate Cronbach's alpha or AVE, or when they ask 'how do I operationalize this construct', 'is my scale reliable', or 'how do I control for CMV'.
Academic research skill: analysis and application of survey design.
Overview
Survey design translates theoretical constructs into measurable items through systematic operationalization, scale development, and psychometric validation. Rigorous surveys ensure that observed scores reliably and validly represent the intended constructs while controlling for method artifacts such as common method variance.
When to Use
- Measuring perceptions, attitudes, beliefs, or behavioral intentions
- Operationalizing latent constructs from a theoretical framework
- Developing or adapting multi-item Likert scales
- Planning a quantitative study that relies on self-report data
When NOT to Use
- Objective behavioral data or archival data are available and more appropriate
- The construct is better measured through experiments or observations
- Population is unreachable via survey (extremely low literacy, no sampling frame)
- Research question is exploratory and constructs are not yet well-defined
Assumptions
IRON LAW: A survey measures PERCEPTIONS, not objective reality, and common method variance inflates correlations when predictor and criterion come from the same source.
Key assumptions:
- Respondents understand items as intended (semantic equivalence)
- Responses are honest and not systematically biased by social desirability
- The construct domain is adequately sampled by the items
- Items within a scale are reflective indicators of the same underlying construct
Framework
Step 1 — Construct Operationalization
Define each construct's conceptual domain from theory. Specify dimensions and sub-dimensions. Generate an item pool from the literature, expert judgment, and qualitative input (at least 3-5 items per dimension).
Step 2 — Scale Design and Pretesting
Choose response format (5-point or 7-point Likert). Avoid double-barreled, leading, or ambiguous items. Conduct cognitive interviews or expert panel review. Pilot test with N ≥ 30.
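Pilot data from this step can be screened with corrected item-total correlations before moving on. A minimal NumPy sketch, where the simulated pilot data and the ~0.30 cutoff (a common rule of thumb, e.g. in DeVellis) are illustrative assumptions:

```python
import numpy as np

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Corrected item-total correlation: each item correlated with the
    sum of the REMAINING items. Items below ~0.30 are candidates for
    revision or removal during pretesting."""
    n, k = items.shape
    total = items.sum(axis=1)
    out = np.empty(k)
    for j in range(k):
        rest = total - items[:, j]  # scale total excluding item j
        out[j] = np.corrcoef(items[:, j], rest)[0, 1]
    return out

# Hypothetical pilot data: four items driven by one factor, plus one pure-noise item
rng = np.random.default_rng(0)
factor = rng.normal(size=(150, 1))
good = factor + rng.normal(scale=0.6, size=(150, 4))
noise = rng.normal(size=(150, 1))
pilot = np.hstack([good, noise])
print(np.round(corrected_item_total(pilot), 2))  # last value should be near 0
```

The noise item's corrected correlation hovers near zero, flagging it for removal, while the factor-driven items correlate strongly with the rest of the scale.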
Step 3 — Assess Reliability and Validity
Reliability: Cronbach's alpha ≥ 0.70, composite reliability (CR) ≥ 0.70. Convergent validity: AVE ≥ 0.50, factor loadings ≥ 0.60. Discriminant validity: Fornell-Larcker criterion or HTMT < 0.90. See references/ for formulas.
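The thresholds above can be checked directly from item scores and standardized loadings. A minimal NumPy sketch using the standard formulas for alpha, composite reliability, and AVE; the example loadings are hypothetical:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR from standardized loadings; error variance per item = 1 - loading^2."""
    s = loadings.sum()
    return float(s**2 / (s**2 + (1.0 - loadings**2).sum()))

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE: mean of the squared standardized loadings."""
    return float((loadings**2).mean())

# Hypothetical standardized loadings for a three-item reflective scale
loadings = np.array([0.70, 0.80, 0.75])
print(round(composite_reliability(loadings), 3))       # 0.795 — meets CR >= 0.70
print(round(average_variance_extracted(loadings), 3))  # 0.564 — meets AVE >= 0.50
```

Note that alpha is computed from raw item scores while CR and AVE require standardized loadings from a factor model (e.g., CFA), which is why the two kinds of inputs differ.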
Step 4 — Control for Common Method Variance
Procedural remedies: separate predictor and criterion temporally, use different scale formats, guarantee anonymity. Statistical remedies: Harman's single-factor test (necessary but not sufficient), marker variable technique, CFA with common method factor.
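Harman's single-factor test amounts to checking the variance share of the first unrotated factor across all items pooled together. A minimal NumPy sketch via principal components; the simulated two-construct data and the conventional 0.50 threshold are illustrative, and as noted above, passing this test is necessary but not sufficient:

```python
import numpy as np

def harman_single_factor(items: np.ndarray) -> float:
    """Proportion of total variance explained by the first unrotated
    principal component of all items pooled together. The common
    heuristic flags CMV concern when this share exceeds ~0.50."""
    corr = np.corrcoef(items, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)  # ascending order; trace = k
    return float(eigvals[-1] / eigvals.sum())

# Simulated responses: six items from two distinct constructs
rng = np.random.default_rng(1)
f1, f2 = rng.normal(size=(300, 1)), rng.normal(size=(300, 1))
items = np.hstack([f1 + rng.normal(scale=0.7, size=(300, 3)),
                   f2 + rng.normal(scale=0.7, size=(300, 3))])
share = harman_single_factor(items)
print(f"First-factor variance share: {share:.2f}")
```

With two genuinely distinct constructs the first-factor share stays well below 0.50; if all items collapsed onto one factor it would exceed it, which is what the test is meant to catch.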
Output Format
Gotchas
- Cronbach's alpha is a lower bound of reliability and assumes tau-equivalence; CR is preferred
- High reliability with low validity means you are precisely measuring the wrong thing
- Reverse-coded items reduce acquiescence bias but often form artifactual method factors in CFA
- Harman's single-factor test is widely used but has very low power to detect CMV
- Translation and back-translation do not guarantee measurement invariance across cultures
- Response rate below 30% raises non-response bias concerns even with adequate sample size
References
- DeVellis, R. F. (2017). Scale Development: Theory and Applications (4th ed.). Sage.
- Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903.
- Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2019). Multivariate Data Analysis (8th ed.). Cengage.