Let me show you what this looks like in practice.
I have a piece of AI-generated output here: a pre-departure email for a couple travelling to Botswana. It was produced using the framework from Module 5 with the context document loaded. On first reading it is well-structured, in the right register, and covers the key elements the clients need before departure.
Now I am going to run through the two-tier review.
Tier one first. I am scanning for anything the client will act on. There is a reference to visa requirements: I need to confirm that against the current entry regulations for their specific nationality. There is a mention of the internal charter schedule for their transfer from Maun to the concession: I need to confirm the timing against the operator’s confirmed booking, not the AI’s version. There is a note about what to bring for the mobile safari section: I want to confirm the operator’s specific packing guidance rather than the tool’s general advice.
Each of those checks takes thirty seconds to a minute. For each one, I either confirm the detail is correct and move on, or I correct it in the output before the email goes anywhere.
Tier two. I am reading the whole email now as the client will read it. The opening paragraph is warm but slightly long: I am going to tighten it. The second paragraph uses a phrase that reads as generic travel copy rather than advisor voice: I am going to rewrite that line. The closing is good: it references something specific this couple communicated when they booked, and it lands the right emotional note.
The whole review took under three minutes. The email is now accurate, sounds like my practice, and is ready to send. Without AI, this email would have taken fifteen to twenty minutes to draft from scratch. With AI and the review habit, it took five minutes total, including the verification. That is the arithmetic that matters.
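That arithmetic can be made concrete with a minimal sketch. The per-email figures below come from the numbers just quoted; the weekly email volume is a hypothetical assumption added purely for illustration, not something stated in this walkthrough.

```python
# Time arithmetic for AI drafting plus the two-tier review habit.
# Per-email figures are taken from the walkthrough above; the weekly
# volume is a hypothetical assumption for illustration only.

DRAFT_FROM_SCRATCH_MIN = 17.5  # midpoint of the quoted 15-20 minute range
AI_PLUS_REVIEW_MIN = 5.0       # 5 minutes total, verification included
EMAILS_PER_WEEK = 10           # hypothetical volume, not from the source

saved_per_email = DRAFT_FROM_SCRATCH_MIN - AI_PLUS_REVIEW_MIN
saved_per_week = saved_per_email * EMAILS_PER_WEEK

print(f"Saved per email: {saved_per_email:.1f} minutes")
print(f"Saved per week (assumed volume): {saved_per_week:.1f} minutes")
```

Even at the conservative end of the range (fifteen minutes per manual draft), the review habit still returns roughly ten minutes per email, which is where the compounding value comes from.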