Cinematic Videos and the Question of Accuracy
I love using artificial intelligence (AI) tools to help tell my ancestors’ stories. Over time, I’ve found that tools like Google NotebookLM can pull meaningful details from my genealogy narrative reports—sometimes highlighting aspects of an ancestor’s life that I hadn’t fully appreciated before.
NotebookLM's audio overviews and cinematic videos, in particular, can transform a written report into something engaging and memorable.
But recently, I ran into a situation that made me pause—and rethink how I use these tools.
When the Visuals Don’t Match the Facts
I asked NotebookLM to create a cinematic video for my great-great-grandmother, Sarah Ellen Ralston, titled “Anatomy of an Obituary: The Geography of Endurance.”
The storytelling was beautiful.
But then I noticed something unsettling.

The video included what appeared to be a historical photograph of her husband, Richmond Fisk Hammond. At first glance it looked believable, very much like the real photos I have of him.
But it wasn’t him.
It was an AI-generated image.
And that’s where the problem begins.
Trying to Correct the Record
To avoid confusion, I uploaded an actual photograph of Richmond Hammond into NotebookLM and asked if it could recreate the video using the real image.

The response was helpful—but also limiting:
“I understand wanting to use the real photo… Unfortunately, I am unable to incorporate custom uploaded photos into the video overviews. The visuals are generated automatically…”
In other words, the tool can't guarantee visual accuracy when it depicts people.
And for genealogists, that matters.
Because once a video is shared, viewers may assume every image in it is authentic.
A Better Approach
So I tried a different strategy.
Instead of asking for a cinematic video with people, I asked:
“Can you create a cinematic video about Sarah Ellen Ralston without using images of people?”
That one change made all the difference.
The resulting video still told her story—but without introducing misleading imagery. No invented faces. No confusion about identity. Just the narrative, supported by neutral visuals.
What This Means for Genealogists
AI-generated storytelling tools are powerful—but they’re not neutral. They make creative choices, especially with visuals.
And when those visuals look like historical photographs, they can unintentionally blur the line between fact and fiction.
For those of us working to document real lives and real people, that’s a line we need to keep very clear.
For now, I’ll continue using cinematic videos—but with a more cautious approach:
- Avoiding AI-generated “photographs” of individuals
- Using abstract or non-human imagery when possible
- Clearly labeling AI-generated content when I share it
Final Thoughts
I still love what AI tools can do. They help bring our ancestors’ stories to life in ways that weren’t possible just a few years ago.
But this experience was a good reminder:
just because something looks real doesn’t mean it is.
And in genealogy, accuracy always comes first.
