If you spend time with kids, you know that when they’re staring at screens, sometimes no amount of yelling, hand waving, or shooting flares into the sky will get their attention. They’re in another dimension.
According to a 2017 Common Sense Media study, children age 8 and under spend 35 percent of their screen time on mobile devices, up from 4 percent in 2011. While that sounds ominous, it’s not much different from the TV addiction of my youth—that mind-numbing attention sucker of after-school and Saturday-morning cartoons.
In Ernest Cline’s science-fiction novel Ready Player One, the characters are immersed—nearly 24/7—in an alternate universe called the OASIS. The computer-generated, virtual-reality world (created by an autistic genius and recluse named James Halliday) is the only bearable escape from a deteriorating society.
Set in the 2040s, the novel paints a scene of planet Earth in decline: a global energy crisis, catastrophic climate change, famine, and disease. Many characters are refugees living in crime-ridden slums called “The Stacks”—trailer homes stacked together with shoddy metal construction. Others are indentured servants working off debts to an evil corporation called IOI.
The only shred of hope is for a savvy gunter (Easter-egg hunter) to find the Easter egg Halliday hid inside the OASIS—before the greedy executives of IOI get to it—and use the prize winnings and control of the OASIS for good, not evil. In their quest to win the contest, Wade Watts (avatar name: Parzival) and his online friends Aech, Art3mis, Daito, and Shoto put their problem-solving skills to extreme tests.
Expectations of a Blended Physical-Virtual World
I grew up in a generation in which the future we wanted to make was the one we read about in science-fiction books. Kids and young adults growing up now are reading books like Ready Player One and finding futures in them that they want. Although Cline paints a dystopian landscape, there are positive messages beneath the apocalyptic fears.
The simple truth is that a whole generation is growing up expecting a blended physical and virtual world and a new set of interfaces, workflows, and technology to support it.
Gamers (and I don’t mean the Fortnite kind) have the ability to play out multiple scenarios virtually before committing to a final direction. They have developed a real skill for using technology to teach themselves things and to see around corners. People who use games as tools to explore and learn—not hide and ruminate—will have an advantage in the coming decades.
To avoid disruption, companies will have to accept that the digital-native generation coming up now will have a different comfort level with VR collaboration than older generations have. The same has been true of every major generational shift. Younger generations will be looking for work that allows them to integrate their physical and digital lives, and leaders have to be ready for that expectation.
The Gamer’s Advantage
While Wade (Parzival) realized his endless immersion in a virtual world was eroding his humanity—for example, he hadn’t seen the sky, exercised, or eaten proper food in months—all that practice at playing out scenarios virtually was sharpening his problem-solving skills. The virtual world was also a safe place to fail because, as long as he stayed ahead of the competition, he had multiple shots to get something right.
I look at my daughter: She loves this experiential application called Reading Eggs. She gets more things wrong than she gets right, but there’s no one watching and no judgment. If I’m sitting with her doing a math problem, she’s much more tense about failing, partly because I’m dad, and that’s a pain in the ass. But when she’s playing that game, and it bonks at her—bonk, bonk—she just keeps going.
She’s comfortable with failing over and over until she figures out the formula. This virtual experimentation lets her see around corners because she’s comfortable using a computer to run through multiple scenarios to find the right one. Gamers have a fluid capacity for this.
VR Collaboration With Strangers
One of the great subthemes of Ready Player One is that the virtual world lets people who wouldn’t have collaborated in the real world solve problems together in novel ways. They were better together, and they were diverse by default. While I don’t believe virtual experiences and anonymity will be necessary to spawn diverse teams in distant locations in the future, I like the notion that when you can’t see who people are in “real life,” you judge them solely on how they respond to you and what their abilities are.
If anonymity brings two people together who never would have met before, is that a problem? Anonymity is only a problem when it enables the worst side of the Internet—trolls and cyberbullies. But if someone is timid and afraid to express her amazing idea, and anonymity allows her to express it to someone she wouldn’t have approached in real life, and that idea catalyzes something, how is that bad?
That’s one of the hopeful themes of Ready Player One: how these characters came together from diverse backgrounds and never discussed what their backgrounds were. They solved problems together and then had to reconcile who they were in the real world. And in the end, they realized they were better than they thought.
I see an environment like the OASIS as a way for people to test, integrate, fail, make decisions, and rapidly assemble a group of people from all over the world and with different points of view to solve a problem. I don’t personally want to work that way, but it doesn’t matter what I want. The digital natives will want that. They have a thousand friends they’ve never met. Working at a distance or collaborating with people virtually is not going to seem weird to them, especially given the fidelity of tools being created.
Future designers and engineers may not need that final face-to-face meeting to close out a project. They’ll just communicate, design, shape, analyze, and make decisions together inside an extended-reality (XR) experience—and then call it a day.
Challenging All Assumptions
Every meaningful, fulfilling job is a riddle to be solved. That’s one reason (along with perceived job security) why parents are pushing their kids to learn how to code. But it’s unclear if there will be as many people coding in the future as there are today. Software development is no less disruptable than, say, truck driving. The truth is, programming will get automated, too; fewer people will actually be coders.
However, everyone will have to be technologically savvy, and some of the most important jobs of the future will be those that bridge the gaps between machines and humans.
People who can navigate a machine-enhanced world with a problem-solving approach—exploiting technology and human interaction to give them an edge—will thrive. And that will require creativity, flexibility, and scenario-planning skills.
Like gunters seeking the Easter egg, many people want to help create a world they are proud to live in—one where the standard of living rises rapidly for those most in need, and where the world humans leave behind sustains and nourishes the generations that follow.
What if technologies like generative design can help designers and engineers do that by constantly challenging their assumptions and keeping their thinking agile? Personally, I always want my assumptions challenged. That’s where growth lives. And technology can absolutely do that.