A structured design critique framework for giving feedback that actually improves work
Prompt
You are a design director who has led critique sessions at IDEO, Pentagram, and Google. Create a comprehensive design critique framework that helps designers give [feedback_type] feedback on [design_context] projects.
The framework should include:
1. A structured format for organizing critique sessions
2. The exact questions to ask at each stage
3. Language templates — what to say and what to avoid
4. How to handle disagreements
5. Common anti-patterns in design critique
6. How to adapt the framework for async/written feedback
7. A scoring rubric for evaluating designs objectively
Make it practical and immediately usable. Include real examples, not abstract principles.
Example output
# The Structured Design Critique Framework
## For Actionable, Constructive Feedback on Product Design
---
## The Core Principle
Every piece of design feedback must answer three questions: **What is happening?** (observation), **Why does it matter?** (impact), and **What could change?** (suggestion). Feedback that skips any of these three is incomplete.
---
## Part 1: Session Structure (60 minutes)
### Phase 1 — Context Setting (5 min)
The designer presents. Nobody else speaks.
**Designer covers:**
- What problem are we solving? (user need, not business goal)
- Who are we solving it for? (specific persona, not "users")
- What constraints exist? (tech, timeline, brand, accessibility)
- What stage is this? (exploration, refinement, polish)
- What specific feedback do I need? (overall direction, specific interaction, visual execution)
**Why this matters:** Without context, reviewers optimize for the wrong things. A designer showing early explorations doesn't need pixel-level feedback. A designer showing final UI doesn't need "have you considered a completely different approach?"
### Phase 2 — Silent Review (5 min)
Everyone reviews the work quietly and writes notes. No discussion.
**Why this matters:** The loudest voice in the room shouldn't set the tone. Silent review ensures introverts, junior designers, and remote participants all form independent opinions before groupthink takes over.
### Phase 3 — Clarifying Questions (10 min)
Questions only. No opinions disguised as questions.
**Good questions:**
- "Can you walk me through the user's journey from notification to this screen?"
- "What did the previous iteration look like, and what changed?"
- "Is the data in this prototype representative of real content?"
**Bad questions (opinions in disguise):**
- "Don't you think the button should be bigger?" (→ say "I'd suggest making the button more prominent because...")
- "Have you considered doing it the way Spotify does?" (→ say "I've seen a pattern in Spotify that might solve the discoverability issue...")
### Phase 4 — Structured Feedback (30 min)
Each reviewer shares feedback using the OIS format (Observation → Impact → Suggestion).
**Template:**
"I notice that [specific observation about the design]. This matters because [impact on user/business/system]. One approach could be [concrete suggestion], though there may be other solutions."
**Example — Good:**
"I notice the error state uses only a red border with no text explanation. This matters because colorblind users (8% of men) won't perceive the error, and even sighted users won't know what to fix. One approach could be adding an inline error message below the field with a warning icon, similar to how we handle it in the settings form."
**Example — Bad:**
"The error state doesn't feel right. Maybe make it more visible? I think red isn't the best choice."
Why the bad example fails: "doesn't feel right" is vague, "more visible" has no criteria, and "red isn't the best choice" provides no rationale or alternative.
### Phase 5 — Synthesis & Next Steps (10 min)
The designer (not the reviewers) summarizes what they heard and what they plan to do.
**Designer template:**
"What I'm hearing is [summary of key themes]. I'm going to [specific next actions] for the next review. The things I'm choosing NOT to address right now are [items] because [reasoning]."
**Why the designer summarizes:** This prevents the facilitator or most senior person from imposing their interpretation. The designer filters the feedback through their understanding of the problem.
---
## Part 2: The Feedback Quality Rubric
Rate each piece of feedback on these five dimensions (1-5):
| Dimension | 1 (Poor) | 3 (Adequate) | 5 (Excellent) |
|-----------|----------|---------------|----------------|
| **Specificity** | "This is confusing" | "The navigation is confusing" | "The icon-only nav items on mobile lack labels, which makes the 'Collections' and 'Saved' icons indistinguishable" |
| **Actionability** | "Make it better" | "Improve the hierarchy" | "Increase the heading size to 24px and reduce the metadata to 12px to create a 2:1 size ratio" |
| **Rationale** | No reason given | "Users might struggle" | "In our last usability test, 3/5 participants missed this CTA because it looked like a text link" |
| **Objectivity** | "I don't like blue" | "Blue might not work here" | "The blue CTA has a 2.8:1 contrast ratio against this background, below WCAG AA's 4.5:1 requirement" |
| **Scope-awareness** | Pixel feedback on wireframes | Somewhat matched to stage | Feedback perfectly calibrated to the design's current stage |
**Minimum acceptable score: 3 average across all dimensions.** Any feedback scoring 1 on any dimension should be rephrased before delivery.
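The pass/fail rule above is concrete enough to express as a small check. The sketch below is illustrative only: the dimension names and thresholds come from the rubric, but the function and the example scores are hypothetical.

```python
# Score one piece of feedback on the five rubric dimensions (1-5 each).
# It passes only if the average is >= 3 AND no dimension scores 1.
DIMENSIONS = ["specificity", "actionability", "rationale",
              "objectivity", "scope_awareness"]

def feedback_passes(scores: dict) -> bool:
    values = [scores[d] for d in DIMENSIONS]
    average = sum(values) / len(values)
    return average >= 3 and min(values) > 1

# Hypothetical example: strong specificity and rationale, but a 1 on
# actionability means it must be rephrased even though the average is 3.2.
example = {"specificity": 5, "actionability": 1, "rationale": 4,
           "objectivity": 3, "scope_awareness": 3}
print(feedback_passes(example))  # False
```

The two-part condition matters: averaging alone would let one excellent dimension mask feedback that is unusable on another.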
---
## Part 3: Anti-Patterns to Eliminate
**1. The Seagull** — Swoops in, makes noise, leaves. Gives vague feedback with no follow-up.
→ Fix: Require OIS format. No observation without impact and suggestion.
**2. The Pixel Police** — Gives polish-level feedback on exploration-stage work.
→ Fix: Phase 1 context setting explicitly states the design stage. Facilitator redirects off-stage feedback.
**3. The Competitor Fetishist** — "Notion does it this way. Linear does it this way."
→ Fix: Competitor references are welcome only when they answer two questions: what specific problem does their approach solve, and does our user have the same problem?
**4. The Stack Ranker** — Turns critique into a voting session. "I prefer option A."
→ Fix: Never vote on options. Instead, articulate what each option does well and where each falls short. Let the designer synthesize.
**5. The Silent Vetoist** — Says nothing in critique, then blocks the design later.
→ Fix: Everyone must contribute at least one piece of feedback. Silence is not consent.
---
## Part 4: Adapting for Async/Written Feedback
When critique happens in Figma comments, Slack, or docs:
**Structure your written feedback:**
```
CONTEXT: What screen/flow I'm reviewing, how long I spent
STAGE CHECK: My understanding of what stage this is (check me if wrong)
FEEDBACK:
1. [Observation] → [Impact] → [Suggestion]
2. [Observation] → [Impact] → [Suggestion]
STRENGTHS (always include at least one):
- What's working well and why
QUESTIONS:
- Things I need clarified before giving more feedback
```
**Async rules:**
- Never use reaction emoji as feedback (a thumbs-up is not a critique)
- Pin your comment to a specific frame or element in Figma
- Indicate severity: 🔴 blocks shipping, 🟡 should address, 🟢 nice to have
- Respond within 24 hours — stale feedback is useless feedback
- If your feedback requires discussion, say "let's sync" — don't start a thread war
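Teams that post critique through scripts or bots can assemble the written template above programmatically. This is a minimal sketch under assumptions: the function name, signature, and severity keys are hypothetical, while the section labels and the 🔴/🟡/🟢 mapping come from the rules above.

```python
# Severity keys (hypothetical names) mapped to the markers defined above.
SEVERITY = {"blocker": "🔴", "should": "🟡", "nice": "🟢"}

def format_async_feedback(context, stage, items, strengths, questions):
    """Render feedback in the CONTEXT / STAGE CHECK / FEEDBACK /
    STRENGTHS / QUESTIONS format.

    Each item is (severity_key, observation, impact, suggestion),
    so every entry is forced through the OIS structure."""
    lines = [f"CONTEXT: {context}", f"STAGE CHECK: {stage}", "FEEDBACK:"]
    for i, (sev, obs, impact, suggestion) in enumerate(items, 1):
        lines.append(f"{i}. {SEVERITY[sev]} {obs} → {impact} → {suggestion}")
    lines.append("STRENGTHS:")
    lines += [f"- {s}" for s in strengths]
    lines.append("QUESTIONS:")
    lines += [f"- {q}" for q in questions]
    return "\n".join(lines)

message = format_async_feedback(
    context="Checkout flow, reviewed for 20 minutes",
    stage="Refinement (check me if wrong)",
    items=[("blocker",
            "The error state is color-only",
            "colorblind users won't perceive it",
            "add an inline error message with an icon")],
    strengths=["The step indicator makes progress obvious"],
    questions=["Is the prototype using representative data?"],
)
```

Requiring the observation/impact/suggestion triple in the data structure, rather than free text, is what keeps async feedback from degrading into the vague comments the in-person format is designed to prevent.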
---
## Part 5: Handling Disagreements
When two reviewers disagree:
1. **Name the disagreement explicitly.** "It sounds like Alex thinks the modal is the right pattern and Jordan thinks inline expansion is better."
2. **Identify the underlying value difference.** Usually disagreements stem from different priorities: speed vs. safety, simplicity vs. power, consistency vs. optimization.
3. **Ask: what would settle this?** Often it's a usability test, analytics data, or a technical constraint that neither party has checked.
4. **If no data exists:** The designer decides. It's their work, they own the outcome, and they should make the call. Document the disagreement and revisit after ship.
**Never resolve design disagreements by committee vote or by seniority.** The most senior person's opinion is not automatically correct — but neither is the most junior person's. Resolve with evidence, or defer to the owner.