I want to address something that trips people up, because I have seen it derail an otherwise promising start with AI more times than I can count.
AI produces incorrect information. Not occasionally. Regularly. It does so with complete confidence and without signalling uncertainty. This is called hallucination, and it is a structural feature of how language models work, not a bug that is about to be patched.
For travel advisors, the areas of highest risk are specific and worth knowing: visa and entry requirements, health and vaccination requirements, pricing and availability, property-specific details, local regulations, and transport logistics. These are the areas where an AI-generated error, taken at face value and passed to a client, can cause real harm.
We have an entire module on verification later in the course, and it is less onerous than it sounds. But right now I want to focus on the mindset point, which is this: when AI produces something that is wrong, that is not a verdict on whether AI is useful. It is a signal about what the prompt did not specify, or a reminder that this category of information always requires a primary-source check.
Treat weak or incorrect output as diagnostic data. Ask yourself: what did my prompt assume that the tool could not know? What category of information is this, and should I be verifying it regardless of where it came from?
That reframe, from frustration to curiosity, is one of the most useful habits you can build. It turns every imperfect response into a lesson that makes the next one better.