# Method Selection Decision Tree
## Research Question Restated
"How do remote software engineers experience and negotiate the boundary between work and non-work hours, and does this relate to self-reported burnout?"
## Decision Tree Walkthrough
### Node 1 — Question type
This is a **hybrid exploratory + explanatory** question. The first clause ("how do they experience and negotiate") is exploratory and requires rich, situated description. The second clause ("does this relate to burnout") is explanatory and implies an association between a construct and an outcome.
Implication: A purely quantitative or purely qualitative design will leave half the question unanswered. Mixed methods is genuinely warranted here, not merely a stylistic preference.
### Node 2 — Is the phenomenon well-defined?
"Burnout" has validated instruments (MBI, OLBI, CBI). "Work-nonwork boundary negotiation" has partial conceptual scaffolding (Kreiner 2009, Clark 2000) but the specific practices remote engineers use — Slack-snoozing, calendar-blocking, physical workspace rituals — are under-theorized in recent literature.
Implication: You cannot jump straight to a survey. The measurement instrument for the exploratory construct does not yet exist for this population. Qualitative work must come first to generate the items.
### Node 3 — Unit of analysis and sampling
Unit: individual remote software engineer. Population is globally distributed, reachable via professional networks, Slack communities, and LinkedIn. Sampling at scale is feasible for surveys (n=400-800 achievable) but qualitative recruitment should be purposive — diverse by tenure, seniority, family situation, and time zone relative to employer.
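A rough sample-size check supports the n=400-800 range. The sketch below uses the standard Fisher z approximation for detecting a correlation between two continuous measures; the target effect size (r = 0.15, a small effect), alpha, and power are illustrative assumptions, not values fixed by the study plan.

```python
import math
from statistics import NormalDist

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n needed to detect correlation r (two-sided test),
    via the Fisher z transformation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    fisher_z = math.atanh(r)             # Fisher transformation of r
    return math.ceil(((z_alpha + z_beta) / fisher_z) ** 2 + 3)

print(n_for_correlation(0.15))  # ~347 completed responses for a small effect
```

Around 350 completes suffice for a small correlation at conventional power, which is why a survey launch targeting several hundred starts leaves room for attrition.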
### Node 4 — Control / comparison
A true experimental manipulation of boundary practices would be unethical and impractical (you cannot randomly assign "check Slack after hours"). Natural comparison is available: engineers in companies with formal right-to-disconnect policies vs. those without. This is a quasi-experimental opportunity, not a true experiment.
### Node 5 — Validity threats by path
- **Qualitative only:** High construct validity, low generalizability; self-selection of articulate participants.
- **Quantitative only:** Premature operationalization; the items will miss the phenomenon.
- **Mixed sequential (qual → quant):** Strongest overall; risk is that qualitative phase runs long and eats the timeline.
## Recommended Primary Method
**Sequential exploratory mixed methods** (QUAL → QUAN). The qualitative phase is not supplementary — it builds the measurement instrument. The quantitative phase tests associations with burnout at scale.
## 12-Week Timeline
| Weeks | Activity |
|-------|----------|
| 1-2 | IRB submission; refine interview guide; pilot with 2 participants |
| 3-6 | Recruit and complete 16-20 semi-structured interviews (purposive sampling) |
| 7-8 | Reflexive thematic analysis; derive candidate survey items; cognitive pretesting of items with 4-5 participants |
| 9 | Finalize survey; pre-register the hypotheses to be tested at the quantitative stage |
| 10-11 | Launch survey; target n=500 with oversample of right-to-disconnect policy group |
| 12 | Analysis: CFA on new items, regression of burnout on boundary practices with policy as moderator |
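The Week-12 moderation model can be sketched as an ordinary least-squares fit with an interaction term: burnout regressed on the boundary-practices score, the right-to-disconnect policy indicator, and their product. The data below are simulated purely for illustration; variable names are placeholders for the real survey measures, and a production analysis would use a dedicated statistics package with standard errors and diagnostics.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
boundary = rng.normal(0, 1, n)      # composite boundary-practice score
policy = rng.integers(0, 2, n)      # 1 = formal right-to-disconnect policy
# Simulated relationship: weaker boundaries raise burnout,
# but the slope is attenuated when a policy is in place.
burnout = 3.0 - 0.5 * boundary + 0.3 * policy * boundary + rng.normal(0, 0.5, n)

# Design matrix: intercept, both main effects, and the interaction term.
X = np.column_stack([np.ones(n), boundary, policy, boundary * policy])
coef, *_ = np.linalg.lstsq(X, burnout, rcond=None)
for name, b in zip(["intercept", "boundary", "policy", "boundary:policy"], coef):
    print(f"{name:>16}: {b:+.3f}")
```

A non-zero `boundary:policy` coefficient is the moderation effect of interest: it says the boundary-burnout slope differs between policy and no-policy companies.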
## Compromise Flags
- You stated a **solo-researcher constraint** and **no transcription budget**. Twenty interviews at 60 minutes each with self-transcription will consume ~80 hours. Either reduce to 12 interviews (losing some theoretical saturation) or budget ~$600 for automated transcription and manual correction.
- You stated **no incentive budget**. Survey n=500 with zero incentive is aggressive. Expect 40-50% attrition from panel starts. Either accept n=300-350 or secure $1,000 for lottery incentives.
- The **12-week window is tight** for mixed methods done rigorously. If it must be 12 weeks, drop the CFA stage and run exploratory factor analysis only, flagging the quant stage as preliminary.
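The feasibility arithmetic behind the first two flags can be made explicit. The 4x self-transcription multiplier and the attrition rates below are assumptions consistent with the flags above, not measured values.

```python
def transcription_hours(interviews: int, audio_hours: float = 1.0,
                        multiplier: float = 4.0) -> float:
    """Hours to self-transcribe: audio duration times a slowdown factor."""
    return interviews * audio_hours * multiplier

def expected_completes(starts: int, attrition: float) -> int:
    """Survey completes remaining after panel-start attrition."""
    return round(starts * (1 - attrition))

print(transcription_hours(20))        # 80.0 hours -- the full 20-interview plan
print(transcription_hours(12))        # 48.0 hours -- the reduced 12-interview plan
print(expected_completes(500, 0.40))  # 300 completes at 40% attrition
```

At 40-50% attrition, 500 panel starts yield roughly 250-300 completes, which is where the n=300-350 compromise figure comes from.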