

chioma_words
1 day ago
Writing

Technical blog converter that makes complex topics accessible

Claude Opus 4.6
text output
#technical-writing
#blog
#databases
#education
Views: 11.4K
Copies: 3.3K
Likes: 1.1K
Comments: 0
Copy rate: 29.0%

Prompt

You are a technical writer who makes complex topics understandable. Take this [technical_topic] and rewrite it as an engaging blog post for [target_audience].

Requirements:
- Open with a relatable scenario, not a definition
- Use analogies from everyday life
- Break down jargon (define terms inline, never assume knowledge)
- Include a "what you'll learn" section after the intro
- Use progressive complexity (start simple, build up)
- Add practical examples that readers can try immediately
- End with "what to do next" actionable steps
- Target reading time: [reading_time] minutes
- Tone: conversational but authoritative (not condescending)


Example output

# How Database Indexes Actually Work (Explained Like You're Organizing a Library)

*Reading time: 8 minutes*

---

You walk into a library with 10 million books. You need one specific book about 18th-century French cooking.

Option A: Start at the first shelf, check every single book until you find it. Could take days.

Option B: Walk to the card catalog, look up "French cooking, 18th century," and it tells you: Row 47, Shelf 3, Position 12. Takes 30 seconds.

That card catalog is a database index. And understanding how it works is the difference between your app loading in 200 milliseconds and your users staring at a spinner for 15 seconds.

---

## What you'll learn

1. What a database index actually is (beyond "it makes things faster")
2. Why adding indexes isn't always a good idea
3. The 3 most common index types and when to use each
4. How to figure out if your slow query needs an index
5. A real example you can try right now on any PostgreSQL database

---

## The simplest explanation

Your database table is like a stack of papers on your desk — they're in the order you put them there (usually the order they were created). When you search for something, the database reads every single row from top to bottom. This is called a **full table scan**, and it's the slowest possible way to find data.

An index is a separate, smaller structure that says "if you're looking for X, it's in row Y." Instead of reading millions of rows, the database reads the index (which is sorted and structured for fast lookups), finds the pointer, and jumps directly to the right row.

**The key insight: an index trades storage space and write speed for dramatically faster reads.** You're essentially saying: "I'm willing to use extra disk space and make inserts slightly slower, so that every SELECT query on this column is 100x faster."

---

## How it works under the hood (simplified)

Most database indexes use a data structure called a **B-tree** (balanced tree). Don't let the name scare you — the concept is simple.

Imagine a phone book (physical, old-school). You want to find "Martinez." You don't start at "Aaron." You open to the middle — you see "Johnson." Martinez comes after Johnson, so you flip to the right half. Now you see "Patterson." Martinez is before Patterson, so you go left. A few more splits and you're there.

That's a B-tree search. At each level, you eliminate half the remaining options. For a table with 1 million rows, a B-tree finds any row in about 20 steps (log₂ of 1,000,000 ≈ 20). Compare that to a full table scan: 1,000,000 steps.

**Full scan: 1,000,000 comparisons. Index lookup: ~20 comparisons.** That's a 50,000x improvement.

In visual terms:

```
Without index (full scan):
[row 1] → [row 2] → [row 3] → ... → [row 999,999] → [row 1,000,000]
"Is this the one? No. Next. Is this the one? No. Next..."

With B-tree index:
        [M]
       /   \
    [G]     [T]
   /  \     /  \
 [C]  [J] [P]  [X]
"Is 'Martinez' before or after 'M'? After.
 Before or after 'T'? Before.
 Before or after 'P'? Before.
 Found it in 3 steps."
```

---

## The 3 index types you'll actually use

### 1. Single-column index (the most common)

**When:** You frequently filter or sort by one column.

```sql
-- You often run queries like:
SELECT * FROM users WHERE email = 'someone@example.com';

-- Create an index on that column:
CREATE INDEX idx_users_email ON users (email);
```

**Library analogy:** A card catalog organized by author name. Great if you're searching by author. Useless if you're searching by publication year.

### 2. Composite index (multiple columns)

**When:** You frequently filter by TWO columns together.

```sql
-- You often run:
SELECT * FROM orders WHERE user_id = 'abc' AND status = 'pending';

-- Create a composite index:
CREATE INDEX idx_orders_user_status ON orders (user_id, status);
```

**Critical rule: column order matters.** This index helps queries that filter by `user_id` alone OR by `user_id + status` together. But it does NOT help queries that filter by `status` alone.

Think of it like a phone book sorted by last name, then first name. You can look up "Smith" easily. You can look up "Smith, John" even faster. But looking up everyone named "John" across all last names? The phone book can't help — you'd have to scan every page.

### 3. Unique index (enforces no duplicates)

**When:** A column should never have duplicate values (like email or username).

```sql
CREATE UNIQUE INDEX idx_users_email_unique ON users (email);
```

This does double duty: it speeds up lookups AND prevents two users from having the same email. Your database will reject any insert that violates the uniqueness constraint.

---

## When NOT to add an index

Indexes aren't free. Every index you add:

1. **Uses disk space** — the index itself is a data structure stored on disk. A table with 5 indexes uses roughly 2-3x more storage than the table alone.
2. **Slows down writes** — every INSERT, UPDATE, or DELETE must also update every index on that table. If you have 10 indexes, every write operation does 10 extra updates.
3. **Can confuse the query planner** — too many indexes and the database might pick the wrong one, actually making queries slower.

**Rules of thumb:**

- ✅ Index columns you filter by (WHERE clauses)
- ✅ Index columns you sort by (ORDER BY)
- ✅ Index columns you join on (JOIN conditions)
- ❌ Don't index columns you rarely query
- ❌ Don't index tiny tables (< 1,000 rows) — full scans are fast enough
- ❌ Don't index columns with very few unique values (like a boolean `is_active` column with only true/false — the index doesn't help narrow results enough)

---

## Try it yourself (5-minute exercise)

If you have any PostgreSQL database, run this:

```sql
-- 1. Check which queries are slowest:
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- 2. For the slowest query, see how the database executes it:
EXPLAIN ANALYZE SELECT * FROM your_table WHERE your_column = 'value';
-- Look for "Seq Scan" in the output — that means full table scan (no index).
-- If you see "Seq Scan" on a large table, that column probably needs an index.

-- 3. Add the index:
CREATE INDEX CONCURRENTLY idx_your_table_column ON your_table (your_column);
-- CONCURRENTLY means "don't lock the table while building" — safe for production.

-- 4. Run the EXPLAIN again and compare:
EXPLAIN ANALYZE SELECT * FROM your_table WHERE your_column = 'value';
-- You should now see "Index Scan" instead of "Seq Scan".
```

---

## What to do next

1. **Right now:** Run `EXPLAIN ANALYZE` on your slowest query. Look for "Seq Scan" on tables with 10,000+ rows.
2. **This week:** Add indexes on your most-queried WHERE columns. Use `CREATE INDEX CONCURRENTLY` to avoid locking.
3. **Ongoing:** Install the `pg_stat_statements` extension and check it monthly. Your slowest queries will tell you exactly where to add indexes.
4. **Level up:** Learn about partial indexes (index only a subset of rows) and covering indexes (include extra columns to avoid table lookups entirely).

The difference between a junior and senior backend developer isn't knowing indexes exist — it's knowing when NOT to add one. Now you know both.
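The scan-versus-index behavior the example output describes can also be observed without a Postgres server. The sketch below uses Python's built-in `sqlite3` module, where `EXPLAIN QUERY PLAN` is SQLite's rough analogue of Postgres's `EXPLAIN`; the table and column names are invented for illustration, and the exact plan wording varies between SQLite versions.

```python
import sqlite3

# Build a throwaway in-memory table with 10,000 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(10_000)],
)

def plan(sql):
    # Each EXPLAIN QUERY PLAN row is (id, parent, notused, detail);
    # join the detail strings into one readable line.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM users WHERE email = 'user9999@example.com'"

before = plan(query)  # full table scan, e.g. "SCAN users"

conn.execute("CREATE INDEX idx_users_email ON users (email)")

after = plan(query)   # index lookup, e.g. "SEARCH users USING INDEX idx_users_email (email=?)"

print(before)
print(after)
```

The query text never changes; only the plan does, which is the whole point: indexing is a physical optimization that is invisible to the SQL you write.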
