AI tools for trip planning: still need humans


We ran the experiment

In late 2024 and early 2025, we spent three months systematically testing AI planning tools against our own Costa Rica itinerary work. The methodology was simple: take questions we know the answers to from years of research and ground-truth experience, ask several AI tools the same questions, and score the accuracy of the responses.

The results were interesting enough to warrant a post, and not in the way that either AI enthusiasts or AI skeptics might predict. The tools are better than their critics claim at some things. They are worse than their proponents claim at others. And the specific failures in the Costa Rica context are, in our view, the important ones for travelers to understand.

What the AI tools actually get right

Let us start with the fair accounting.

For general orientation questions — “what are the main regions of Costa Rica?”, “what is the best time to visit?”, “what vaccinations do I need?” — the AI responses we tested were consistently accurate. They correctly described the dry season and green season dynamics, the microclimatic differences between the Pacific and Caribbean coasts, and the entry requirements for US and EU citizens. For travelers starting from zero knowledge, this general orientation is genuinely useful and the AI delivers it efficiently.

For logistics frameworks — “how do I get from San José to La Fortuna?”, “what are my transport options for the Nicoya Peninsula?” — the AI responses were accurate on the major routes. Shuttle services, the Puntarenas-Paquera ferry option, the domestic flight alternatives: these were correctly described in broad strokes.

For accommodation category questions — “what type of lodging is typical in each price range?” — the responses were calibrated correctly. The AI did not invent hotel brands or make specific claims that we could not verify.

So: general orientation, logistics frameworks, and broad category understanding — these are areas where AI provides real value to someone in the early research phase.

Where the AI falls down in ways that matter

Here is where the honest accounting becomes more uncomfortable for the AI optimists.

Specific prices: every AI tool we tested produced price estimates that were either outdated (reflecting pre-2023 costs that are now significantly too low) or simply wrong. The specific numbers cited — park fees, tour prices, hotel night costs — were consistently below actual 2024-2025 prices. A traveler who budgeted from AI-provided price estimates for Costa Rica would arrive with a budget 25-40% short.

Road conditions: the AI consistently understated how often 4WD is genuinely required on Costa Rica's secondary routes. One tool told us that “a regular sedan is fine for most routes.” We know from experience that anyone driving to Monteverde’s village center, Drake Bay’s access road, or the southern Caribbean coastal route in a sedan is going to have a bad day. The AI’s road condition information was uniformly too optimistic.

Current park rules: the Corcovado mandatory guide requirement, the SINAC advance reservation systems, the Tuesday closure at Manuel Antonio — these operational details were inconsistently captured. One tool told us that Corcovado could be visited independently with just a ranger check-in. That is no longer true and has not been since 2014. A traveler arriving at Sirena without a certified guide would be turned away.

Seasonal wildlife accuracy: the AI correctly identified turtle nesting at Tortuguero as a July-October phenomenon but then described the peak incorrectly. One tool described August as “less reliable” when in fact August and September are the peak green turtle nesting months — among the best of the entire season.

Tour operator quality: no AI tool we tested was able to provide meaningful guidance on which operators within a category were trustworthy and which should be avoided. This is the category of knowledge that requires either personal experience or networks of trusted reviewers — neither of which an AI language model has.


The hallucination problem in a tourism context

AI hallucination — confident generation of false information — is particularly dangerous in travel planning because the confident, well-formatted presentation of false information looks exactly like the confident, well-formatted presentation of true information.

In our testing, we encountered specific hallucinations:

One tool named a lodge in Drake Bay that does not exist. It described the lodge’s amenities, location, and approximate pricing in detail. The confidence of the response was indistinguishable from accurate content. A traveler who tried to book this lodge would simply find nothing at the address provided.

Another tool described a road route from Puerto Jiménez to Sirena that does not exist — you cannot drive to Sirena. You can hike to Sirena (roughly 22 km and 8 hours on the trail) or take a boat. The AI described a driving option with a specific distance and estimated time. This is not a minor error; a traveler relying on it would arrive at Puerto Jiménez with no workable plan.

A third tool provided a list of bus departure times from San José to La Fortuna that did not match any actual schedule we could verify. Bus schedules change seasonally and the AI’s data was clearly outdated — but the specific times were presented with the authority of factual statements.

The meta-problem: AI doesn’t know what it doesn’t know

The deepest issue with AI travel planning is not the specific errors — it is the absence of appropriate uncertainty. An experienced human planner knows the edges of their own knowledge: they know which information changes seasonally, which operators have variable quality, which roads are situation-dependent, which recommendations require current verification.

AI tools currently provide very similar levels of confidence across both stable information (Costa Rica is in Central America) and unstable information (this bus departs at 9am). The user has no reliable way to distinguish the two.

We are not saying AI is useless for trip planning. We are saying it is a starting point that requires verification, not an endpoint that produces a bookable itinerary.

How we actually use AI in our planning process

For our own work — building and maintaining this site, planning research trips, answering reader questions — we use AI tools in specific constrained ways.

We use AI for first-draft framework generation: “give me a structure for a 10-day itinerary that covers Arenal, Monteverde, and Manuel Antonio.” The output gives us a skeleton that we then verify and populate from our own experience and current research.

We use AI for question generation: “what are the logistics questions a first-time traveler to the Osa Peninsula is likely to have?” The output surfaces questions we might not have thought to address.

We use AI to check our own writing for clarity: “does this explanation of the Maritime Zone Law make sense to a non-lawyer?” The feedback is useful for identifying where our assumptions about reader knowledge are wrong.

We do not use AI to produce specific prices, current park rules, specific operator recommendations, or road condition guidance. For all of these, the human network — our own field experience, verified sources, readers who have traveled recently — is more reliable.


What this means for how you plan

If you are using AI tools to plan your Costa Rica trip, do this:

Use the AI for orientation and question-generation. Take its itinerary suggestions as a starting framework. Then verify every specific fact that is operationally important: park reservation requirements (at SINAC’s own website), current prices (from operator websites or booking platforms), road conditions (from recent traveler forums like TripAdvisor’s Costa Rica forum or the Costa Rica expat Facebook groups).

Be especially cautious about AI information on anything that changes regularly: park fees, border procedures, shuttle schedules, road conditions in wet season, operator business status.

And recognize that the category of knowledge that AI cannot provide — the guide who is genuinely excellent versus merely certified, the beach that is photographed but is actually dangerous for swimming, the restaurant that looks good on maps but is mediocre on the plate — is exactly the category that experienced human sources specialize in.

We are writing this from years of ground-truth experience. The AI was trained on text about that experience. There is a difference.

For the things the AI handles well — general seasonal guidance, broad itinerary frameworks, climate questions — our best time to visit guide and planning hub cover the same ground with verified current information.