[The Amazon Echo] is opening up a vast new realm in personal computing, and gently expanding the role that computers will play in our future.
Jaron Lanier, a keynote speaker at the WIPO Conference on the Global Digital Content Market from April 20 to 22, 2016, is a Silicon Valley insider, a virtual reality pioneer and one of the most celebrated technology writers in the world. But he is increasingly concerned about today’s online universe. He explains why and what it will take to turn things around.
What are your main concerns about the digital market today?
We have seen an implosion of careers and career opportunities for those who have devoted their lives to cultural expression, but we create a cultural mythology that this hasn’t happened. Like gamblers at a casino, many young people believe they may be the one to make it on YouTube, Kickstarter or some other platform. But these opportunities are rare compared to the old-fashioned middle-class jobs that existed in great numbers around things like writing, photography, recorded music and many other creative pursuits.
Economically, the digital revolution has not been such a good thing. Take the case of professional translators. Their career opportunities have been decreasing much like those of recorded musicians, journalists, authors and photographers. The decimation began with the spread of the Internet and continues apace. But interestingly, for professional translators the decrease is related to the rise of machine translation.
Automated translations are mash-ups of real-life translations. We scrape the translations made by real people millions of times a day to keep example databases up to date with current events and slang. Elements of these phrases are then regurgitated into usable machine translations. There is nothing wrong with that system. It’s useful, so why not? But the problem is we are not paying the people whose data we are taking to make these translations possible. Some might call this fraud.
All these systems that throw people out of work create an illusion that a machine is doing the work, but in reality they are actually taking data from people – we call it big data – to make the work possible. If we found a way to start paying people for their actual valuable contributions to these big computer resources, we could avoid the employment crisis that otherwise we will create.
So what needs to be done to ensure a sustainable digital economy?
The obvious starting point is to pay people for information that is valuable and that comes from them. I don’t claim to have all the answers, but the basics are simple and I am sure it can be done.
Some sort of imposed socialist system where everybody is the same would be ruinous. We should expect some degree of variation. But right now a handful of people – those inheriting traditional monopolies like oil and the increasingly powerful big computer networks – have a giant chunk of the world’s wealth and it’s having a destabilizing impact. An oil monopoly might control the oil, but it won’t take over everything in your life; information does, especially with greater automation.
If we expect computers to pilot cars and operate factories, the employment that is left should be the creative stuff, the expression, the IP. But if we undermine that, we are creating an employment crisis of mass proportions.
That’s where IP comes in. The general principle that we pay people for their information and contributions is critical if we want people to live with dignity as machines get better.
But IP needs to be made much more sophisticated and granular. It needs to be something that benefits everybody – as commonplace as having pennies in your pocket.
How would you like to see the digital landscape evolve?
I would like to see more systems where ordinary people can get paid when they contribute value to digital networks; systems that improve their lives and expand the overall economy.
Economic stability occurs when you have a bell curve, with a few super-rich people and a few poor people but most people somewhere in the middle. At present, we have a winner-takes-all situation where a few do really well and everybody else falls into a sea of wannabes who never quite make it. That’s not sustainable.
You are supporting the Conference on the Global Digital Content Market that WIPO is hosting. Why is that?
IP is a crucial thread in designing a humane future with dignity. Not everybody can be a Zuckerberg or run a tech company, but everybody could – or at least a critically large number of people could – benefit from IP.
IP offers a path to the future that will bring dignity and livelihood to large numbers of people. This is our best shot at it.
Who are your heroes and why?
There are many, but they include:
- J.M. Keynes, the first person to really think about how to manage an information system.
- E.M. Forster for The Machine Stops, written in 1909, which foresees our era with a very critical eye.
- Alan Turing, who stayed a kind person even as he was tortured to death.
- Mary Shelley who was a keen observer of people and how they can confuse themselves with technology.
And of course my friend Ted Nelson. He invented the digital media link and was perhaps the most formative figure in the development of online culture. He proposed that instead of copying digital media, we should keep one copy of each cultural expression on a digital network and pay the author of that expression an affordable amount whenever it is accessed. In this way, anyone could earn a living from their creative work.
What is your next book about?
Dawn of the New Everything: First Encounters with Reality and Virtual Reality is a memoir and an introduction to virtual reality. It will be out soon.
It’s not necessarily unique to wonder about human interaction and why people do the things they do, but for Danielle Ishak, her curiosity went a bit further than a daydream. Working in the area of Human Factors and Ergonomics, Danielle studies humans interacting with synthetic humans. Yep, she works with robots. Find out how Danielle discovered the industry, fell in love with it, and what advice she would give women looking into careers in STEM.
Occupation: Work at SAP as a Human-Factors Professional investigating software interfaces and Human-Robot Interaction
Last Thing You Read: Quantifying the User Experience by Jeff Sauro and James R. Lewis
How did you get started?
My story follows that generic narrative of students in college getting inspired by their professors. In undergrad, I studied interactive media, and one of my professors introduced us to the concept of the user experience when interacting with tangible things such as robots. I always knew that I was interested in humans and why they act or respond in certain ways, but I didn’t know how to apply this to a real occupation that was not pure psychology. I did some research, came across the discipline of Human Factors and Ergonomics, and realized that my interest corresponded to a field actually in fairly high demand in the economy. My interest grew tremendously when I realized I could investigate humans interacting with synthetic humans (aka humanoid robots). That concept simply blew my mind, so I applied for a master’s program shortly after my discovery. This process took me two whole years after college, so in the meantime I gained some experience working various marketing jobs. I think it was great to work in other environments to figure out not necessarily what I wanted to do, but more about what I didn’t want to do.
New show uses an abstract visual language to depict the intersection of URL with IRL.
A new group exhibition offers both creative and art-focused perspectives on the never-ending back and forth between physical and virtual spheres. From curator Tina Sauerländer, who previously brought us PORN TO PIZZA—Domestic Clichés, an investigation into how porn, pets, plants, and pizza took over the internet, WHEN THE CAT’S AWAY, ABSTRACTION continues this dig into how the web is shaping new behaviors and contemporary senses of well-being.
In 2012, Paul Miller, a 26-year-old journalist and former writer for The Verge, began to worry about the quality of his thinking. His ability to read difficult studies or to follow intricate arguments demanding sustained attention was lagging. He found himself easily distracted and, worse, irritable about it. His longtime touchstone—his smartphone—was starting to annoy him, making him feel insecure and anxious rather than grounded in the ideas that formerly had nourished him. “If I lost my phone,” he said, he’d feel “like I could never catch up.” He realized that his online habits weren’t helping him to work, much less to multitask. He was just switching his attention all over the place and, in the process, becoming a bit unhinged.
Subtler discoveries ensued. As he continued to analyze his behavior, Miller noticed that he was applying the language of nature to digital phenomena. He would refer, for example, to his “RSS feed landscape.” More troubling was how his observations were materializing not as full thoughts but as brief Tweets—he was thinking in word counts.
When he realized he was spending 95 percent of his waking hours connected to digital media in a world where he “had never known anything different,” he proposed to his editor a series of articles that turned out to be intriguing and prescriptive. What would it be like to disconnect for a year? His editor bought the pitch, and Miller, who lives in New York, pulled the plug.
For the first several months, the world unfolded as if in slow motion. He experienced “a tangible change in my ability to be more in the moment,” recalling how “fewer distractions now flowed through my brain.” The Internet, he said, “teaches you to expect instant gratification, which makes it hard to be a good human being.” Disconnected, he found a more patient and reflective self, one more willing to linger over complexities that he once clicked away from. “I had a longer attention span, I was better able to handle complex reading, I did not need instant gratification, and,” he added somewhat incongruously, “I noticed more smells.” The “endless loops that distract you from the moment you are in,” he explained, diminished as he became “a more reflective writer.” It was an encouraging start.
But if Miller became more present-minded, nobody else around him did. “People felt uncomfortable talking to me because they knew I wasn’t doing anything else,” he said. Communication without gadgets proved to be a foreign concept in his peer world. Friends and colleagues—some of whom thought he might have died—misunderstood or failed to appreciate Miller’s experiment.
Plus, given that he had effectively consigned himself to offline communications, all they had to do to avoid him was to stay online. None of this behavior was overtly hostile, and all of it was passive, but it was still a social burden reminding Miller that his identity didn’t thrive in a vacuum. His quality of life eventually suffered.
What we do about it may turn out to answer one of this century’s biggest questions. A list of user-friendly behavioral tips—a Poor Richard’s Almanack for achieving digital virtue—would be nice.
But this problem eludes easy prescription. The essence of our dilemma, one that weighs especially heavily on Generation Xers and millennials, is that the digital world disarms our ability to oppose it while luring us with assurances of convenience. It’s critical not only that we identify this process but also that we fully understand how digital media co-opt our sense of self while inhibiting our ability to reclaim it. Only when we grasp the inner dynamics of this paradox can we be sure that the Paul Millers of the world—or others who want to preserve their identity in the digital age—can form technological relationships in which the individual determines the use of digital media rather than the other way around.
Levy writes that when we choose to cast aside “the devices and apps we use regularly, it should hardly be surprising if we miss them, even long for them at times.” But what I felt was more general. I didn’t miss my smartphone, or the goofy watch I own that vibrates when I receive an e-mail and lets me send text messages by speaking into it. I didn’t miss Twitter’s little heart-shaped icons. I missed learning about new things.
It became clear to me that, when I’m using my phone or surfing the Internet, I am almost always learning something. I’m using Google to find out what types of plastic bottles are the worst for human health, or determining the home town of a certain actor, or looking up some N.B.A. player’s college stats. I’m trying to find out how many people work at Tesla, or getting the address for that brunch place, or checking out how in the world Sacramento came to be the capital of California.
What I’m learning may not always be of great social value, but I’m at least gaining some new knowledge—by using devices in ways that, sure, also distract me from maintaining a singular focus on any one thing. I still read deeply, and study things closely, and get lost for hours at a time in sprawling, complicated pieces of literature.
You’re used to watching your breath on the cushion, but what about when you’re cleaning out your inbox?
In his new book, Mindful Tech: How to Bring Balance to Our Digital Lives, David M. Levy offers lessons in single- and multi-tasking when engaging with technology, and encourages readers to record themselves on video while checking email to gauge their physical reactions.