What happens when humans think they can do anything?
Sometimes they build dinosaur theme parks. At least they do in Steven Spielberg’s 1993 movie, Jurassic Park (based on Michael Crichton’s novel of the same name). Scientists have discovered how to recreate dinosaurs from DNA preserved in mosquitoes trapped in amber. But after an incident with a hungry velociraptor, a team of paleontologists and scientists is brought in to tour the island park and vet its safety.
The film proposes an interesting question for us to consider today, particularly with the recent developments in AI technology.
While touring the park facilities and seeing the scientists at work, one of the characters, Ian Malcolm (played by the ever-so-charming Jeff Goldblum), questions the ethics of dinosaur cloning. “Your scientists,” he blurts out, “were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
And that’s the big question!
Ethical questions first
In the movie, mayhem ensues as Dennis Nedry, a park technician, shuts down the park’s systems to escape the island with stolen dino-DNA, hoping to sell it off.
As a result, in an epically iconic scene, the T-rex is able to escape from her paddock. (When she breaks through her fence, she lets out that unforgettable roar. One of the best scenes in film! The way Spielberg builds suspense is just chef’s kiss.)
When the park is running smoothly and the deadly creatures live behind their electrified fences, away from tasty people, all is well. But when all hell breaks loose, and the T-rex and velociraptors escape, it becomes evident that maybe the ethical questions should have been asked beforehand. Like seriously asked and answered.
Likewise, ethics are worth thinking about with AI technology too.
Power and responsibility
Mandated with authority over the whole planet, the human race was given an immense amount of power when they were created in God’s image. But to quote another iconic movie, with great power comes great responsibility.
Power isn’t inherently bad or evil, but it can be wielded for the wrong purposes. Whether scientists should use their God-given power to re-create dinosaurs is certainly an ethical grey area.
Ian Malcolm’s question is crucial: Just because we can, does it mean we should? I will admit, discerning the answer to that question in every situation is extremely difficult, and the discussion slows down human progress. Maybe that isn’t a bad thing. Especially when human motives are mixed.
Mixed motives
What guides human development? Greed? The common good?
It’s probably a combination of both. I don’t believe most people have pure intentions, one way or the other.
Look at the park: The whole thing fell apart because of Dennis Nedry’s greed and his hope to make a quick buck. But he’s not the only one to blame. Even if there were some hoped-for developments in medicine (for the common good), the park owners and scientists were really hoping to make money. Are dinosaurs needed for medical breakthroughs? No. But for entertainment? Certainly!
A combination of motives played a part in the development of the dinosaurs and the theme park. And there will be mixed motives in the tech advancements in your lifetime too.
AI today
AI (artificial intelligence) technology is a big trend, but it comes with consequences. The proliferation of AI-produced photos, videos, and deepfakes, for example, fuels the intentional spread of misinformation and disinformation.
AI doesn’t always get it right, either, sometimes unintentionally producing made-up results known as hallucinations [1].
Then there is the reality that AI may replace human interaction in many forms. According to one study, the top use of AI in 2025 was therapy and companionship [2]. Is that really a good thing? Therapy is relationship-based, and AI is not a person; a therapist’s own feelings and experiences are an important part of their expertise. We need real human interactions.
AI has the potential to be treated like a deity in our society: all-knowing and ever-present, without the benevolence of the true God of the Bible. I don’t envision AI taking over the world, enslaving countless people, and exacting revenge. But I feel nervous.
I’m nervous especially because tech companies working in AI innovation are not necessarily in it for the common good. They are interested in making money, particularly through harvesting personal data of users and using it for targeted advertising, among other things. Is that something we want?
Just because we can, does it mean we should? What are the consequences, overall, of the continued development of artificial intelligence, even if they seem minor or not as immediate? I want to propose the question before the metaphorical T-rex is let loose and there’s no going back.
Wisdom needed
Humankind was created in the image of God, and with that, like the Creator himself, comes the ability to create. So I’m not trying to downplay or discourage technological development or human progress. I like my MacBook, Google Nests, and streaming services. I like how much modern medicine has progressed.
I myself have sought to understand AI more, and the benefits of it in my own work. I am seeking to learn how to use it responsibly. I’ve used it to help beef up my language skills, and am exploring how to use it in other administrative tasks. It’s handy for transcribing virtual meetings. But I’m still hesitant.
How do we avoid letting the T-rex out of her cage, or prevent the greedy Dennis Nedrys of the world from creating chaos? Tech advancements will probably always have negative implications, mostly, I think, in the developing world, where it’s easier to exploit resources in countries with smaller economies and weaker human rights protections.
Wisdom is the key ingredient here. Technological progress does not equal becoming wise [3]. Wisdom comes from experience, from knowing what has and hasn’t worked.
Technology changes very quickly. Controlling technological development is not easy, so guardrails are paramount. Certainly we need to ask: What are the guardrails that governments can put in place? But each of us also needs to be asking: What can I do in my own life to guard against contributing to the negative consequences of technological development?
Tony Reinke has written a very helpful book called God, Technology, and the Christian Life to help Christians discern their use of technology. Taking personal responsibility for your own use of technology is important. You can use technology wisely, in a way that minimizes the possible negative consequences of technological development.
I highly recommend Reinke’s final chapter, “How Should We Use Technology Today?” as you seek to think through these issues. He offers thirteen points about how to live as a Christian with technology. You can also find more of his work here: https://www.desiringgod.org/authors/tony-reinke.
You have a choice
Simply put, humans were made to create, and to push the limits of possibilities. Yet what I’ve learned from Jurassic Park is that caution is not a bad thing.
Just because the dinosaurs are on an island doesn’t mean you should visit them.
Personally engaging in the next big innovation isn’t necessary. When the next big social media app comes out, or the game-changing AI updates are released, or the self-driving car hits the market, it’s perfectly okay to sit on the sidelines and assess the effects of the latest technology before jumping in.
But it’s also okay to explore new things. That’s part of what God created us to do.
Notes
[1] Anna Choi and Katelyn Xiaoying Mei, “What are AI hallucinations? Why AIs sometimes make things up,” The Conversation, March 21, 2025; accessed April 17, 2025.
[2] Marc Zao-Sanders, “How People Are Really Using Gen AI in 2025,” Harvard Business Review, April 9, 2025; accessed April 17, 2025. https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025.
[3] Tony Reinke, God, Technology, and the Christian Life (Wheaton, IL: Crossway, 2022), 223.
