Recipe: Next Great Thing (NGT)
Run N parallel LLM calls with rotating facilitator personas to produce divergent idea candidates — a simulated Nominal Group Technique session.
What this tool does
NGT fires N parallel LLM calls (default 5, max 10) against the same prompt, with each call playing a different facilitator persona — a pragmatic operator, a systems thinker, a contrarian, a designer, a futurist, and so on. The result is divergent by construction: each responder is instructed to lean into its own lens and not reference the others.
This mimics the Nominal Group Technique — a brainstorming method where participants generate ideas independently before discussing, specifically to avoid groupthink. Running the model N times with different personas is a fair approximation.
BIG Tools streams the results back over Server-Sent Events so fast responders appear in the grid while slower ones are still thinking. For a recipe, sequential is fine.
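The fan-out pattern can be sketched in Python with a thread pool, yielding each responder's idea in completion order so fast responders surface first (the same effect the SSE grid gives you). This is a sketch, not BIG Tools' implementation: `call_llm` is a stub standing in for your provider's chat-completion call, and the five-persona list is abbreviated from the full ten below.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# First five of the ten personas (see the full list below).
PERSONAS = [
    "a pragmatic operator who favors concrete, immediate tactics",
    "a systems thinker who reframes the question at a higher altitude",
    "a contrarian who challenges the assumptions baked into the prompt",
    "a designer who thinks about the human experience and rituals",
    "a futurist who borrows from adjacent fields and speculative ideas",
]

SYSTEM_TEMPLATE = (
    "You are synthetic responder {i} of {n} in a Nominal Group Technique "
    "session, playing {persona}. Generate a single short idea in response to "
    "the prompt that reflects that lens. Keep it terse (1-3 sentences). "
    "Do not reference the other responders or your persona by name."
)

def call_llm(system: str, user: str) -> str:
    # Stub: swap in a real chat-completion call (temperature ~0.95).
    # Here it just echoes the persona so the flow is visible.
    return f"idea from {system.split('playing ')[1].split(' who')[0]}"

def run_ngt(question: str, n: int = 5):
    # Fire n isolated calls at once; yield (responder index, idea) as each
    # finishes, not in submission order.
    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = {
            pool.submit(
                call_llm,
                SYSTEM_TEMPLATE.format(
                    i=i, n=n, persona=PERSONAS[(i - 1) % len(PERSONAS)]
                ),
                question,
            ): i
            for i in range(1, n + 1)
        }
        for fut in as_completed(futures):
            yield futures[fut], fut.result()
```

Because the calls share nothing but the user message, each responder stays isolated, which is the property NGT depends on.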
The prompt
System prompt (per responder)
Each of N calls gets its own system prompt with the persona substituted in:
You are synthetic responder {i} of {n} in a Nominal Group Technique session, playing {persona}. Generate a single short idea in response to the prompt that reflects that lens. Keep it terse (1-3 sentences). Do not reference the other responders or your persona by name.
The 10 personas (cycled)
1. a pragmatic operator who favors concrete, immediate tactics
2. a systems thinker who reframes the question at a higher altitude
3. a contrarian who challenges the assumptions baked into the prompt
4. a designer who thinks about the human experience and rituals
5. a futurist who borrows from adjacent fields and speculative ideas
6. a learner who approaches the prompt with beginner's mind
7. a researcher who asks what evidence we're missing
8. a storyteller who finds the narrative hook
9. an outsider to the domain who spots what the insiders miss
10. an economist who thinks about incentives and trade-offs
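"Cycled" means responder i takes persona ((i - 1) mod 10) + 1, so runs past N=10 wrap back to the first lens. A minimal sketch (persona strings shortened to their lead noun; the helper name is illustrative):

```python
# The 10 lenses above, abbreviated.
PERSONAS = [
    "a pragmatic operator", "a systems thinker", "a contrarian",
    "a designer", "a futurist", "a learner", "a researcher",
    "a storyteller", "an outsider to the domain", "an economist",
]

def persona_for(i: int) -> str:
    # Responder i (1-indexed) cycles through the list; i=11 wraps to the first.
    return PERSONAS[(i - 1) % len(PERSONAS)]
```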
User prompt
Your question or challenge as a plain user message. Same text sent to all N calls.
Example
Input:
How should a university library rethink its role now that most research starts with an LLM chat?
Output (5 responders, one shown per persona):
Pragmatic operator: Pilot a "librarian as LLM translator" service — students bring their AI chat transcript, librarians help them trace the citations the model invented vs. the ones it got right.
Systems thinker: The library's core value isn't storing information anymore; it's the attention infrastructure — quiet spaces, long deadlines, enforced depth. Double down on that, not on databases.
Contrarian: Maybe the library should stop trying to reinvent itself. It's a room with books and silence. Both are underrated.
Designer: Redesign the research journey so the LLM chat starts in the library — walk up to a terminal, and the physical act of being in the space changes what you ask.
Economist: Charge LLM providers rent to query the library's holdings. The library's proprietary metadata becomes training data. Make the AI companies fund the institution they're replacing.
Replicate it
Run the prompt five times. Each time, replace the system message with one of the persona templates and keep the user message identical:
SYSTEM: You are synthetic responder 1 of 5 in a Nominal Group Technique session, playing a pragmatic operator who favors concrete, immediate tactics. Generate a single short idea in response to the prompt that reflects that lens. Keep it terse (1-3 sentences). Do not reference the other responders or your persona by name.

USER: How should a university library rethink its role now that most research starts with an LLM chat?
Swap in the persona for responders 2-5. In a ChatGPT session you can rerun the same user message and edit the system prompt each time.
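If you're calling an API instead, the five requests can be built as plain chat payloads. The shape below follows the common chat-completions format (adjust for your provider); note that each request carries only its own system message plus the identical user message, with no history shared between calls:

```python
TEMPLATE = (
    "You are synthetic responder {i} of {n} in a Nominal Group Technique "
    "session, playing {persona}. Generate a single short idea in response to "
    "the prompt that reflects that lens. Keep it terse (1-3 sentences). "
    "Do not reference the other responders or your persona by name."
)

PERSONAS = [
    "a pragmatic operator who favors concrete, immediate tactics",
    "a systems thinker who reframes the question at a higher altitude",
    "a contrarian who challenges the assumptions baked into the prompt",
    "a designer who thinks about the human experience and rituals",
    "a futurist who borrows from adjacent fields and speculative ideas",
]

QUESTION = (
    "How should a university library rethink its role now that most "
    "research starts with an LLM chat?"
)

# One isolated request per responder: its own system message, the shared
# user message, and nothing else.
requests = [
    {
        "temperature": 0.95,
        "messages": [
            {"role": "system",
             "content": TEMPLATE.format(i=i, n=len(PERSONAS), persona=p)},
            {"role": "user", "content": QUESTION},
        ],
    }
    for i, p in enumerate(PERSONAS, start=1)
]
```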
Tuning
- Temperature: 0.95 is not optional. NGT's whole value is divergence; temperature = 0.7 produces five variations of the same idea. If you drop below 0.9 you're no longer running NGT, you're running a committee.
- N: 5 is the BIG Tools default. 3 is enough if you want the top few lenses; 7-10 is useful for exhausting a problem space but returns a lot of redundancy after N=6. The persona list only has 10 entries; beyond that, responders repeat.
- Model: Any capable model works. Smaller models sometimes blur the personas (a contrarian response reads like a pragmatic one). The persona differentiation is mostly a large-model behavior.
- Order: Cycle through personas in a fixed order. If you want to bias toward a specific lens, pick that persona twice.
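The knobs above, as a small config sketch with the section's values (the names `NGT_TUNING` and `biased_lineup` are illustrative, not BIG Tools settings):

```python
# Defaults from this section.
NGT_TUNING = {
    "temperature": 0.95,  # below ~0.9 the N responses converge on one idea
    "n": 5,               # 3 covers the top lenses; redundancy grows past 6
}

PERSONAS = [
    "a pragmatic operator", "a systems thinker", "a contrarian",
    "a designer", "a futurist",
]

# To bias toward a lens, pick that persona twice while keeping n fixed:
biased_lineup = ["a contrarian"] + PERSONAS[:4]  # contrarian appears twice
```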
Common pitfalls
- All N responses sound the same. Temperature is too low, or you're on a model too small to maintain persona. Bump to 1.0 or switch models.
- Responders reference each other ("Unlike what the others said..."). Your N calls aren't isolated — check that each call is its own request with no shared message history.
- A responder leaks the persona name ("As a contrarian, I'd say..."). The prompt says not to, but smaller models sometimes break the rule. Usually harmless; just note it to users if you're displaying raw output.
- Costs add up fast at N=10. Each NGT run is N separate LLM calls. Budget accordingly, or rate-limit by run rather than by call.