Before we build the library, I want to be clear about what separates a useful framework from a saved prompt that sits in a folder and never gets used.
A good prompt framework has four qualities.
First, it is structured around the task, not the content. It tells you what information to provide and in what order, but it does not fill in that information for you. The client details, the destination specifics, the situational context: those change every time. The structure stays the same.
Second, it includes the elements that produce quality. The Module 4 prompting framework (task, format, and constraints) is the backbone. But a tested framework goes further: it includes the specific follow-up prompts that have proven useful for this type of task, the common pitfalls you have learned to instruct against, and the editorial notes that remind you what to check in the output.
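If it helps to see the pieces in one place, here is a minimal sketch of such a framework as a reusable template. Everything in it — the field names, the sample travel task, the placeholder names like `client_name` — is an illustrative assumption, not a prescribed format; the point is only that the structure (task, format, constraints, follow-ups, checklist) is fixed while the details are filled in fresh each time.

```python
# A hypothetical prompt framework captured as a data structure.
# The structure stays the same across uses; only the placeholder
# details change.
FRAMEWORK = {
    # Task: what you want done, with placeholders for the details
    # that change every time (client, destination, situation).
    "task": (
        "Draft a {deliverable} for {client_name}, who is planning "
        "a trip to {destination}. Context: {situation}."
    ),
    # Format: the shape the output should take.
    "format": "Three short paragraphs, plain language, no headings.",
    # Constraints: pitfalls you have learned to instruct against.
    "constraints": [
        "Do not invent prices, schedules, or availability.",
        "Flag anything that needs verification before sending.",
    ],
    # Follow-up prompts that have proven useful for this task type.
    "follow_ups": [
        "Tighten the second paragraph to half its length.",
    ],
    # Editorial notes: what to check in the output before it ships.
    "checklist": ["names spelled correctly", "dates match the request"],
}

def render(framework, **details):
    """Fill the task placeholders and assemble the full prompt text."""
    task = framework["task"].format(**details)
    constraints = "\n".join(f"- {c}" for c in framework["constraints"])
    return f"{task}\n\nFormat: {framework['format']}\n\nConstraints:\n{constraints}"

prompt = render(
    FRAMEWORK,
    deliverable="welcome email",
    client_name="a returning client",
    destination="Lisbon",
    situation="first solo trip, traveling in May",
)
print(prompt)
```

Whether you keep frameworks in a file of saved text or something like this, the discipline is identical: the situational details are supplied at use time, and the follow-ups and checklist travel with the framework rather than living in your head.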
Third, it is tested. A framework earns its place in your library by producing consistently good results across multiple uses, not by looking well-structured on paper. If you have used a framework three times and refined it each time based on what the output needed, that framework is earning its place. If you wrote it once and have not tested it, it is a draft, not a library entry.
Fourth, it evolves. Every time you use a framework and make an edit to the output, that edit is data. If the same edit appears twice, the framework needs to be updated. A prompt library that is not being refined is a prompt library that is slowly becoming less useful. The best frameworks are the ones that have been used the most and adjusted the most.