Privacy is the New Celebrity

Aza Raskin on the Decommodification of Human Attention and How to Build a Universal Translator - Ep 11

November 18, 2021 MobileCoin

In this episode, Lucy Kind interviews Aza Raskin, co-founder of the Center for Humane Technology and co-founder and executive director of the Earth Species Project. Aza grew up in Silicon Valley (his father started the Macintosh project at Apple) and began his career at Mozilla, where he invented infinite scroll. Lucy asks him if he regrets this innovation. Aza expands on his vision for cultivating a world where human attention is not just decommodified but valued as our most treasured resource. He also explains the tech behind the Earth Species Project's efforts to decode animal communication and the implications of a universal translator for deep human listening. Aza speaks in a multitude of metaphors, and his vision creates hope for a new era of humane technology.

Speaker 2 (00:13)
Welcome back. This is Privacy is the New Celebrity, conversations at the intersection of privacy and technology. I'm Lucy Kind, and I'm excited to be your host today for episode eleven. Today on the show, we are speaking with Aza Raskin. Aza is the co-founder of the Center for Humane Technology, a nonprofit organization dedicated to reimagining our digital infrastructure with technology that supports our well-being, democracy, and shared information environment. He's also the co-founder and executive director of the Earth Species Project, which works to decode animal communication. Aza, I'm so excited you decided to join us on Privacy is the New Celebrity.

Speaker 1 (00:53)
Lucy, it's my pleasure to be here.

Speaker 2 (00:58)
To start off, can you tell us a bit about the Center for Humane Technology? What's the mission of this organization, and what sort of work do you do?

Speaker 1 (01:05)
Maybe your listeners have seen the documentary The Social Dilemma, which came out in 2020, and I think that documentary really tells the story of what the Center for Humane Technology is about. So I grew up in maybe a little bit of an atypical Silicon Valley household. My father, Jef Raskin, was the person who created the Macintosh project at Apple. So I grew up carried inside of a Macintosh carry case, which is sort of the reason why I think I didn't have any friends in middle school or high school.

Speaker 1 (01:43)
But there was a sense early in Silicon Valley that technology should be made to enhance collective human wisdom. My father made the Macintosh, and he pushed back against Jobs, who was working on the Lisa at the time and wanted it to have a character display that could only show text on the screen. Jef really wanted it to have a bitmap display so that it could show graphics, because he wanted to compose music. The idea is that technology should take the best parts of what makes us human and extend those, in the sense that a paintbrush is a technology.

Speaker 1 (02:25)
Language is a technology. A piano or a cello is a technology. Technology should serve us. We founded the Center for Humane Technology in 2018 because it had become incredibly obvious that technology wasn't enhancing collective human intelligence and wisdom but going the other way: we have the power of gods, but we do not have the wisdom or prudence or love of gods. And we've really seen that come to fruition now. We were very early in this conversation, but we've seen this come to a head with Frances Haugen, the Facebook whistleblower, coming to testify in front of Congress and saying, hey, Facebook, and this applies to every other company in the attention economy.

Speaker 1 (03:18)
These companies are breaking our ability to have relationships, to have a functioning democracy. One of my favorite examples here is that these surveillance capitalist kinds of companies are extracting value from us, our data, our attention. It's a standard extractive business model. But unlike oil, which is another extractive business: when oil is extracted from the ground and then pollutes our commons and creates oil spills, it causes great harm, but it doesn't fundamentally change the government's ability to regulate oil companies. When Facebook or any of the infotech companies in the race to the bottom of our brainstem pollutes, it's dumping division and misinformation and polarization into our democracies.

Speaker 1 (04:17)
And it breaks our government's ability to regulate. So the longer these things go on, the less we function as a democracy and the less we as individuals feel like we're living full, whole selves. That's what the Center for Humane Technology was created for: to really highlight these problems and then push for solutions.

Speaker 2 (04:42)
Yeah, definitely. Well, thank you for that very illuminating example. So what sort of work do you do?

Speaker 1 (04:49)
We work at a wide range of places, and we think of it as internal pressure, that is, helping people who are on the inside of these companies use the voice they already have to create change; external pressure, so that's things like The Social Dilemma, which was seen by something like 100 to 150 million people in 190 countries and changed the way people talked about Facebook, TikTok, and Twitter. It sort of showed the mechanisms by which they work, and that creates external pressure. Then there is regulation pressure, so that's working with the EU, with the US government, with governments around the world to find smart ways to incentivize the right kinds of behavior.

Speaker 1 (05:39)
And then finally, there is aspirational pressure, that is, showing and pointing at examples of where we could go.

Speaker 2 (05:49)
So how would you define humane technology?

Speaker 1 (05:53)
My father defined humane technology as that technology which is responsive to human needs and considerate of human frailties. I think that's a really good place to start. E. O. Wilson has this wonderful summarization of where we are as a species and the problem that we face, and that is: we have Paleolithic emotions, medieval institutions, and godlike technology.

Speaker 2 (06:21)
Yeah.

Speaker 1 (06:21)
So if we do not have a clear-eyed look at where we as human beings are vulnerable, then as we use increasingly powerful technology, we will increasingly break ourselves. So the metaphor I like to use here is to imagine human beings as a kind of origami. There are certain ways that we bend and fold and other ways that we don't. And if you bend us the wrong way, we break or we tear. So there's a field of study in design called ergonomics, which is the study of how our bodies bend and fold.

Speaker 1 (06:58)
And if you don't understand ergonomics, then you design chairs that give us backaches and scoliosis, that really hurt us now and later in life. There's also a field of cognetics, which is the study of the ergonomics of the mind. In what ways does your mind bend and fold? You can only remember seven, plus or minus two, things. If you put people in groups surrounded with aspirational identities, people conform; that's the Solomon Asch study on social conformity. There are these fundamental truths about how human beings work and work together.

Speaker 1 (07:28)
And if you don't understand that, then we make systems where, when we bend and fold, it just tears the whole thing. So what is humane technology? It's sort of like that beautiful unit origami, where it's finding all the ways to make beautiful shapes that you can stack together to make functional, incredible, cohesive communities and societies.

Speaker 2 (07:50)
What a wonderful metaphor. Discussions surrounding surveillance capitalism and other forms of profiteering from consumer data are a common topic for this podcast, and your organization is very much focused on these issues. So at the risk of getting a little dark for a second, what are some of the least humane or most problematic technologies that exist right now, in your opinion?

Speaker 1 (08:11)
Well, the point in some sense of your podcast is to center around privacy, but it's interesting to note that just working and focusing on privacy will not be enough. So one example, right, is Facebook's 2018 change to their ranking algorithm. Our original work was Time Well Spent. We put a whole bunch of pressure on Facebook, and actually, Google and Apple both released their screen time features as a response to Time Well Spent. Zuckerberg, at the beginning of 2018, said, all right, we won't optimize for engagement anymore.

Speaker 1 (08:53)
We're going to optimize for this thing called meaningful social interaction. And you're like, what is meaningful social interaction? How do you measure it? And it turns out the way they measured it was based on how much it caused other people to react. You post something; how much do they react to it? And actually, a like was only worth one point. Getting an angry reaction or one of these more emotive things was worth five points. And it was about how much it caused the people in your social graph to react, to have an emotional reaction.

Speaker 1 (09:22)
The more viral it went, the more they promoted it. The insight is, of course, the more viral something is, the more likely it is to be a virus, and harmful. So they made this change, and they immediately got a response from political parties in the EU, like in Poland and Spain, but also as far afield as India. The political parties came back to them and said, hey, we know you made a change. And Facebook was like, yeah, sure, we made a change. People say that all the time.
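For readers who want to see the mechanics, here is a minimal sketch of the reaction-weighted scoring described above. The one-point like and the five-point emotive reaction come from the conversation; the function shape, field names, and remaining weights are illustrative assumptions, not Facebook's actual code.

```python
from dataclasses import dataclass

# Reaction weights: the 1-point like and 5-point "angry" come from the conversation;
# the other entries and the overall structure are illustrative assumptions.
REACTION_WEIGHTS = {
    "like": 1,
    "angry": 5,
    "comment": 5,
    "reshare": 5,
}

@dataclass
class Post:
    reactions: dict  # e.g. {"like": 120, "angry": 40, "reshare": 8}

def engagement_score(post: Post) -> int:
    """Score a post by how strongly it makes people react; higher scores get
    promoted more widely, so emotionally charged content tends to win."""
    return sum(REACTION_WEIGHTS.get(kind, 0) * count
               for kind, count in post.reactions.items())

calm_post = Post({"like": 200, "angry": 2})
outrage_post = Post({"like": 50, "angry": 80, "reshare": 30})
print(engagement_score(calm_post))     # 210
print(engagement_score(outrage_post))  # 600: fewer likes, but it ranks far higher
```

The point of the toy numbers is the one Aza makes next: the post that provokes anger outranks the post that simply gets liked.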

Speaker 1 (09:58)
They're like, no, really. Look, we know you made a change to the algorithm, because we used to post things like white papers and policy, and they wouldn't get the most engagement, but they got some engagement. Now they get zero engagement, which makes sense; they're not the things that emotionally engage people. And so they said, we have been forced to change our political stances, the things that we post, to go much more negative. One party said they used to be like 50-50; now they skew like 80% negative attack ads, even though we don't want to.

Speaker 1 (10:31)
This is what we're forced to do. And this should really make us stop, because it's often said that culture is upstream of politics. But what we're seeing here is that Facebook and these information technology companies are not just upstream of culture and politics, but control both. That is the asymmetric power they now have over our societies. And that's really important to understand, because we have long had stories about AI running amok, paperclip optimizers that you send off to make paper clips, and soon they're tearing apart the world to make more paperclips.

Speaker 1 (11:16)
And we're like, oh, those are dumb stories; you'd just be able to turn it off. And yet here we have exactly a paperclip maximizer. This one is about engagement and reactivity, and we are unable to turn it off. And it is directly putting its hand over the invisible hand of the market. It's the new digital hand of the world, and there is no democratic oversight. These are political parties that are saying we do not want to go negative and divisive, but are being forced to by an algorithm.

Speaker 1 (11:53)
This is the runaway AI problem. And note, where in here was privacy broken? It wasn't. So these things run even deeper than our current regulatory processes. These are new species of attacks against humanity, and we need new species of protections.

Speaker 2 (12:15)
Yeah. I mean, on that note, you're also credited with coming up with infinite scroll in 2006, which anyone who uses Instagram or TikTok will be familiar with. And critics have argued that this is a form of addictive technology that's actually caused a lot of harm for users. So in your role advocating for humane technology, do you regret this innovation? How do you reconcile the ways infinite scroll has been used for profit over consumer health with your current advocacy?

Speaker 1 (12:45)
Great question. It's certainly part of my waking-up process. The realization is that if you calculate with pretty conservative numbers how much time that one invention of infinite scroll wastes, it's over 100 million human lifetimes per day.

Speaker 2 (13:03)
That's insane.

Speaker 1 (13:05)
Right. And do I regret it? The way I always answer that question is, I don't think infinite scroll was so amazing an invention that if I hadn't come up with it, somebody else wouldn't have. And that really speaks to the fact that even if you were to break up Facebook today, there are still the exact same market forces driving all of their competitors. So if Facebook doesn't do it, then TikTok will. So I think what I would rather do is go back and say, aha, we need to package a better kind of philosophy along with these technologies. And where I place infinite scroll, right.

Speaker 1 (13:49)
And back then, it made a lot of sense. Ajax had just come out. If you remember, there was, like, MapQuest, and you couldn't pan the map; you had to click a button and wait for things to load. Google Maps had just come out, so you could keep panning. And if you're scrolling down a page and you get to the bottom of a list of search results, yeah, just load the next one. That makes so much sense. If I've caused a user to make a choice they don't care about.

Speaker 1 (14:13)
As a designer, I failed. But the bug in human-centered design is that it puts human bugs at the center. We have to be protective of human bugs. And so here's the thought experiment I want all your listeners to think about. There is a popular fixation in tech which we call the singularity. The singularity is when you draw sort of a line which represents human strengths, and that's not going up and not going down, and then there's an exponential coming up beneath it, and that's the power of technology to overwhelm or overpower.

Speaker 1 (14:54)
And you can see when that point crosses over the human strengths line. That's the singularity. That's when we've lost all control. But that framing misses something really important: there's a line much lower than human strengths, and that's human vulnerability or human weakness, and technology starts to undermine the thing that we are weakest at way before it overwhelms our strengths. And then you see infinite scroll as just one specific attack, which undermines our ability to remember to stop. If we're not given a stopping cue, just like when you're drinking a glass of wine, you remember to stop when you get to the bottom.

Speaker 1 (15:35)
If it fills up automatically, you're unaware of it, and you drink like 70% more. Information overload is another example. Or decreasing attention spans is another example. So one of my favorite recent questions that has been posed, and this came from, like, a high-up government official in the EU, he asked the question, who do you think China views as China's own greatest rival to their power? If you're anything like me, you'd probably answer the US. But that's not what they think. It's their own Internet technology companies.

Speaker 2 (16:26)
Wow.

Speaker 1 (16:27)
Interesting, because they're the ones that have the purview, the ability to influence what a billion people think and do and see every single day. And in that frame, it makes perfect sense that they restricted their version of TikTok for kids, so it's only usable for 40 minutes a day. Friday, Saturday, Sunday, same thing with video games. They wanted to stop frying the minds of their youth. They started going after celebrity and influencer culture, and they banned the ranking of celebrities by popularity. They tanked their version of Uber; it came out on the US stock market, and they killed it.

Speaker 1 (17:07)
They shot it in the head the day that it came out. And all these things are really confusing until you realize that the information technology companies are upstream of politics and upstream of culture, and you have to deal with that. So here's the metaphor: imagine technology is a kind of brain implant into society. And we have these two competing sorts of visions. Right now we have closed, digital-authoritarian-style systems; China is just one example of it. And there they're intentionally deploying their technology to make a cohesive, full organism, not using our values.

Speaker 1 (17:47)
And on the Western side, we're just letting market forces wire up this brain implant so that every neuron is connected to every other neuron for maximum viral broadcast. And what do you expect to get but massive social epilepsy? And here's the US writhing on the ground, foaming at the mouth. That's exactly what we see. And the challenge in front of us is not just do we amend Section 230, do we fix privacy regulation. It's how do we update what democracies were supposed to be about for the 21st century so we can come together and solve our big problems and make it to the 22nd century?

Speaker 1 (18:25)
If we don't, then we're going to be hit by climate change and increasing pandemics and social inequality, and we're not going to be able to do anything about it, while all of the other countries that might not have our values are using their compound-interest, exponential tech to own the values of the future.

Speaker 2 (18:45)
Yeah, definitely. It feels like we've pretty quickly moved from an Internet environment where knowledge is power to one where it's an attention economy and perception is reality. It's kind of crazy to think about. So what do you think we can do to stop or at least mitigate the potential harm of technologies that have had negative results for users or for society?

Speaker 1 (19:08)
Yeah, I think a big one is to realize that human attention is the most important commodity and needs to be protected as a fundamental human right. Shoshana Zuboff, who wrote The Age of Surveillance Capitalism, has, I think, a beautiful way of saying it, which is: we have banned the sale of a number of human products. We have banned the sale of human orphans. We have banned the sale of human organs. It makes sense that as technology's power continues to increase, we should ban the sale of human futures, of the ability to predict what we're going to do, and the buying and trading of those.

Speaker 1 (19:57)
It's sort of at the end of The Social Dilemma. Justin Rosenstein, who is one of the co-founders of the like button, the co-inventors, rather, has this wonderful line where he says, so long as a whale is worth more dead than alive and a tree is worth more as lumber than as a tree, then we, as human beings, are going to be worth more misinformed, polarized, isolated, and depressed than we are living our full lives. Completely commodified, exactly, like we're being turned into dead slabs of predictable human behavior.

Speaker 1 (20:36)
We could just ban that. This is the first time we've ever really encountered something like surveillance capitalism. We could just say that is not acceptable. If we were to drop down a little bit lower, one of the really cool things that came out of the Facebook Files that Frances Haugen disclosed is that there is so much that could be done right now. So I found this really interesting: internal researchers at Facebook found that if they limited the number of hops that something could be shared to two.

Speaker 1 (21:10)
That is, I could write some content, you could share it, one other person can share it, but after that, the reshare button disappears. You can still copy and paste the thing. But just that really simple one-click reshare goes away after two hops. That does more to reduce mis- and disinformation than literally the billions of dollars they're spending on content moderation and AI systems, because the things that go viral are the things most likely to be a virus. So you could imagine Facebook just taking out that feature, and that would do a whole bunch.
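A rough sketch of the two-hop reshare limit described above, assuming a simple parent-pointer content model; the names and structure are illustrative, not Facebook's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

MAX_RESHARE_HOPS = 2

@dataclass
class Content:
    author: str
    origin: Optional["Content"] = None  # the post this was reshared from, if any

    @property
    def hops_from_origin(self) -> int:
        # 0 for original content, +1 for each reshare in the chain
        return 0 if self.origin is None else self.origin.hops_from_origin + 1

def can_show_reshare_button(content: Content) -> bool:
    """Hide the one-click reshare once content is already two hops from its author.
    Copy-and-paste sharing is unaffected; only the frictionless button goes away."""
    return content.hops_from_origin < MAX_RESHARE_HOPS

original = Content(author="alice")
hop1 = Content(author="bob", origin=original)   # bob reshares alice's post
hop2 = Content(author="carol", origin=hop1)     # carol reshares bob's reshare

print(can_show_reshare_button(original))  # True  (0 hops)
print(can_show_reshare_button(hop1))      # True  (1 hop)
print(can_show_reshare_button(hop2))      # False (2 hops, the button disappears)
```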

Speaker 1 (21:49)
The problem is, of course, all companies, including Web3 crypto companies, are competing fundamentally for our attention. So really, this is a game of which platform can make content go viral the fastest, because whichever platform makes content go viral the fastest will beat out the others, because it's taking more of our attention. And so this is where we need to have some very clear rules. We're going to look back ten years from now, and we're going to say either we have created regulation that sort of shoots this viral beast in the head, so that its chimera-like multiple heads stop rising, and we've been able to move forward as a society, or we haven't.

Speaker 1 (22:30)
And we're still writhing on the floor. So that's, I think, the core set of what we could do. I think one more really interesting idea is Frances and others have called for a new regulator that regulates these new breeds of companies. And the immediate question, of course, that comes into your mind is, whichever side you're on, left or right, do you trust the other side to have regulatory power over these companies that control culture and politics? And the answer is probably not. So how do you sidestep this?

Speaker 1 (23:07)
What is a democratic process? And there's this beautiful process called the deliberative process, the deliberative democratic process. It's actually used in Mongolia and a number of other places. In Mongolia, to amend their constitution, what do they do? They invite 1,000 people in from all over the nation to come sit for a week. They ask questions in small groups to experts about whatever they're deliberating, and people's views change wildly through this process, sometimes 50 or 60 percentage points. Texas used it in 1996 around renewable resources, and it was through these small-group deliberative processes that they decided, both Democrats and Republicans, rural and urban, yes, we want to invest in renewables.

Speaker 1 (23:58)
And this is why Texas is now number one in wind in the US. And so you can imagine applying this kind of deliberative democratic process, where it's everyday people, given the chance to really understand the problems and be in contact with other people across whatever divide, to help solve some of these really thorny challenges by putting true democratic oversight on the core design imperatives and metrics of any social company that reaches a critical size.

Speaker 2 (24:34)
Awesome. Well, we've gone over a handful of interesting technologies that could really use some regulation. What role do you think privacy plays in regulating these technologies and advancing humane technology?

Speaker 1 (24:50)
So where I think of privacy coming in most is as a basic human right, especially in countries that use technology more authoritarianly, if that's a word. China. These countries are clearly using the technology. But the US also has a mass surveillance kind of strain that puts the fourth estate, the press, the immune function of democracies, at risk. So that's clearly a problem. So privacy is a core tenet. But as I already described, it's not enough, because Cambridge Analytica scraped Facebook, bought Facebook's data, and was able to predict people's personality traits.

Speaker 1 (25:42)
OCEAN, the Big Five: openness, conscientiousness, extroversion... I can't even remember the last one, but neuroticism. And it turns out Gloria Mark did some research showing that you can get, on the order of, 70% as good at predicting somebody's personality just by looking at how they move a cursor around the screen, not even at what's on the screen itself.

Speaker 2 (26:09)
Oh, wow.

Speaker 1 (26:10)
Just by tracking your eye motion, looking at your eyes via webcam, you can determine if you're on a drug, whether it's alcohol, whether it's MDMA. From the motion of your wrist, if you wear a wearable, that's enough to predict depression.

Speaker 2 (26:29)
Wow.

Speaker 1 (26:30)
Uber data, right, just knowing where your phone is. And by the way, a lot of the apps leak your location data. They can know if you're having an affair, with whom you're having the affair, when it started, where your kids live. The idea of privacy, as it stands, just doesn't live up to these new species of attacks, because we human beings throw off entropy, and so much can be discovered about us without our knowledge or our consent. So we're going to have to come up with new ways of saying, okay, you can try to restrict how much data is collected.

Speaker 1 (27:08)
Philip Rosedale from Second Life, he's working on a new VR thing. He discovered that one second of data of somebody's head motion in, like, a VR headset is enough to uniquely identify them, as good as a fingerprint. Just like your gait: if you're walking and you've covered your face, your gait is as unique as a fingerprint. So I think a lot of the old way of thinking about privacy, as information you have to hoard and keep private, just does not work anymore.

Speaker 1 (27:34)
And we need to have new modes of understanding: what can you understand about someone, what can you predict about someone, what can you do with it? And then block those kinds of usages and make all of those downstream things illegal.

Speaker 2 (27:48)
Yeah. Sounds like we've already stepped into some sort of surreal dystopia. But to get a little less dark, are there any technologies or organizations you want to highlight that you consider humane technologies, or that are helping to solve some of these problems?

Speaker 1 (28:04)
Yeah. I think one of the coolest is everything that Audrey Tang is doing in Taiwan. She has been really working to institute sets of technologies that do a little bit of what I was talking about with deliberative democracies, inviting people in real time and using technology to highlight interesting places of consensus, like rough consensus across divides. So imagine Twitter, but instead of being sorted for the things that get you most angry, it's sorted for the most surprising places of consensus. And they actually used these kinds of technologies to find a solution around gay marriage in Taiwan, where they found that the older generation had a specific view of what marriage meant.

Speaker 1 (29:03)
The younger generation had a very different view of what marriage meant, and they were able to find the kinds of values that were shared between these two groups and craft a new kind of essentially civil union that fit the needs of both. And I think that's really awesome. Another group that I think is doing really interesting work is More in Common. They did a whole bunch of polling around something called the perception gap, where they measured this: it's hard to measure whether a belief is true or false, but it's easy to measure whether a belief about a belief is true or false.

Speaker 1 (29:45)
What do you think Republicans think about X? What do you think Democrats think about Y? And it turns out there's a huge perception gap; we're not seeing the other side accurately. If you ask Democrats what percentage of Republicans think all Muslims are bad for the US, Democrats estimate it's like 85%. And in reality it's like less than 15%.
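A small sketch of how that perception-gap measure can be computed; the numbers echo the example above and are illustrative inputs, not More in Common's published data.

```python
def perception_gap(estimated_pct: float, actual_pct: float) -> float:
    """Gap, in percentage points, between what one group estimates the other
    group believes and what that group actually reports believing."""
    return abs(estimated_pct - actual_pct)

# "What percentage of Republicans think all Muslims are bad for the US?"
democrats_estimate = 85.0   # what Democrats guess, per the example above
republicans_actual = 15.0   # what Republicans actually report, per the example above

print(perception_gap(democrats_estimate, republicans_actual))  # 70.0 points
```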

Speaker 2 (30:08)
Wow.

Speaker 1 (30:09)
And the same thing is true the other way around. And so we're fighting with a mirage, generally speaking. And the bigger the perception gap, the more likely you are to view the other side as bigoted or hateful, as nonhuman. And so you could imagine some technology which looks, over time, at what content is helping people see each other accurately and what content is preying on the perception gap. And of course, misinformation and disinformation and hate speech, these are all things that prey on an inflated, incorrect, grotesque view of the other side.

Speaker 1 (30:57)
So if you decrease all that content, you also get rid of a lot of the sort of clickbaity, terrible, obscene content, the love-to-hate kind of content. And what's cool about the perception gap work is that this is entirely objective, right? You're not saying are vaccines good or are vaccines bad. You're just asking people for their perceptions of the other side, and then asking that side what they actually think. And so it's a purely objective measure in a normally very subjective, fraught world. And so I love technologies like that.

Speaker 2 (31:33)
Yeah, it's super fascinating work. And speaking of the perception gap and nonhuman perception, I'd love to pivot to your Earth Species Project. As a linguist in a previous life, I find this project so fascinating. At first glance, it looks like you're basically trying to use technology to talk to animals. Can you tell us what the project is all about?

Speaker 1 (31:53)
Yeah. At first glance, that is correct, except I would switch it around and say it's more about listening than it is about talking. I think moments of change and transformation happen when you listen; they don't happen as much when you talk. So we are using sort of the latest in a machine learning field called unsupervised machine translation to decode nonhuman communication and to look to see if there's such a thing as nonhuman language. And there are a couple of studies that we've run across that I think are really interesting and intriguing.

Speaker 1 (32:44)
One of them is an unpublished thesis from 1994 from the University of Hawaii, and it's actually recreating or extending some older research. Here they taught dolphins two gestures. The first gesture was do something you've never done before, which is to mean innovate. And it's surprising that you could communicate to another being on this planet that's nonhuman the idea or concept of innovation. But you can: you give the signal, and the dolphins will do something they haven't done before that session. And then they taught a second gesture, which is together.

Speaker 1 (33:23)
And they would say to two dolphins, do something you have not done before, together. And the dolphins go down and they exchange sonic information, and when they come out, they both do the same thing at the same time, something they haven't done before.

Speaker 2 (33:39)
Wow.

Speaker 1 (33:41)
That just really makes you go, wow, okay, maybe there's really a there there. And then the second part is, why do we even think this is possible now? There was a big breakthrough in AI in 2017, and that breakthrough was that you could translate between two human languages without the need for any examples or Rosetta Stone, without a dictionary. And the way you do it is sort of surprising. You ask the AI to build a shape that represents a language. Imagine a galaxy: every star is a word, and words that mean similar things are near each other.

Speaker 1 (34:22)
And then the way concepts relate encodes the geometric relationships between things. So that's a little abstract. But imagine the word dog has a relationship to man, dog has a relationship to cat, dog has a relationship to wolf and to fur. And if you imagine the set of all relationships, it fixes it at a point in space. And if you do this with every word to every other word, it's sort of like solving a massive multidimensional Sudoku puzzle, and out pops a rigid shape that represents all the internal relationships of a language.

Speaker 1 (35:00)
You don't know what anything means; you just know how they relate. And you then take that shape. This was the breakthrough. You take that shape, say, for English, and you take that shape for Japanese, and you're like, eh, these shapes couldn't possibly be the same sort of galaxy. And indeed, they're not the same. But if you sort of blur your eyes, you can rotate one on top of the other, and the point which is dog ends up in roughly the same spot in both languages.
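A minimal sketch of that rotate-one-galaxy-onto-the-other step: aligning two toy embedding spaces with an orthogonal rotation (the Procrustes problem). The 2017-era systems Aza refers to learn this rotation fully unsupervised, with no dictionary at all; this sketch assumes a tiny seed lexicon purely to keep the illustration short, and the toy vectors and words are made up.

```python
import numpy as np

def learn_rotation(src_vecs: np.ndarray, tgt_vecs: np.ndarray) -> np.ndarray:
    """Orthogonal Procrustes: find the rotation W that best maps each source
    vector onto its target counterpart (rows are aligned seed pairs)."""
    u, _, vt = np.linalg.svd(src_vecs.T @ tgt_vecs)
    return u @ vt  # W such that src_vec @ W lands in the target space

def translate(word, src_emb, tgt_emb, rotation):
    """Rotate a source word into the target space; return the nearest target word."""
    query = src_emb[word] @ rotation
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(tgt_emb, key=lambda w: cos(query, tgt_emb[w]))

# Toy 2-D "galaxies" (real embeddings have hundreds of dimensions, ~200k words).
english = {"dog": np.array([1.0, 0.0]),
           "cat": np.array([0.0, 1.0]),
           "wolf": np.array([0.9, 0.1])}
japanese = {"inu": np.array([0.0, 1.0]),
            "neko": np.array([-1.0, 0.0]),
            "ookami": np.array([-0.1, 0.9])}

seed_pairs = [("dog", "inu"), ("cat", "neko")]  # illustrative seed lexicon
W = learn_rotation(np.stack([english[s] for s, _ in seed_pairs]),
                   np.stack([japanese[t] for _, t in seed_pairs]))
print(translate("wolf", english, japanese, W))  # expected: "ookami"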

Speaker 2 (35:26)
Wow. This literally sounds like Jetsons technology, like an invention people have been dreaming about for decades.

Speaker 1 (35:32)
It's a universal translator. It's like the Babel fish from Hitchhiker's Guide to the Galaxy. And what I find so profound is that it doesn't work just for English and Japanese, but for German and Esperanto and Finnish, which is a weird language, and Aramaic and Urdu. And they work better and worse depending on how distant the relationships are. And yet we're all still humans, all seeing the same world, teasing out the same relationships between things. And what's crazy is that they all sort of share this kind of very rough, universal shape.

Speaker 1 (36:07)
And I think in a time of such profound division, seeing that there's this hidden underlying structure that unifies us all is incredibly beautiful, definitely.

Speaker 2 (36:19)
And how is it coming along? Any achievements from the Earth Species Project you'd like to share with us?

Speaker 1 (36:24)
Yeah. One thing I should say, just to finish this thought experiment, is where we're headed. Okay, we can build this shape for human languages and text; can we build this shape for animal communication? So first we have to work on extending from text to audio. That's right at the edge of machine learning. There have been one or two papers that have just been able to translate, just listening to French, listening to German, and doing the translation audio to audio. But that's going to take some work.

Speaker 1 (36:59)
But once you can build that shape, the question becomes: take the shape which is dolphin communication and rotate it into the shape which is human communication, and is there an overlap? Because dolphins, I don't know if you know this, but they will pass around pufferfish and get high. Puff, puff, pass. They have grief. They have dialects. They have the equivalent of accents, like a New York versus New Jersey versus California accent. They have culture that they pass down. There's a lot that's the same. So we should expect some part of these shapes to overlap.

Speaker 1 (37:42)
But then, they can speak to two different dolphins at the same time; they can, like, bifurcate their sonic streams. So that's really different. They see the world in sound as well as in sight. They just live in a very different world. They sleep with half of their brain at a time; they never fully go to sleep. So there's a lot that's different. So you'd expect some of the shape to not overlap. The part that overlaps, we'd expect to directly translate. The part that doesn't overlap, we'd expect to not be able to directly translate into the human experience.

Speaker 1 (38:17)
Yet we can see that there's a there there, that there's something there we can start asking questions about. And I don't know which of these two places is going to be more interesting: the things we can directly translate, or the parts that we can see are there but we don't know. And I think about this all the time. We, as human beings, have been speaking vocally for on the order of 100,000 years, maybe 200,000 years. Dolphins and whales have been communicating vocally for 30 million years.

Speaker 2 (38:41)
Oh, wow.

Speaker 1 (38:42)
Just imagine what we can learn by listening.

Speaker 2 (38:45)
Yeah, definitely. In some ways, they are our cultural ancestors, right? And so many of these rituals that we've claimed as just human, it's so interesting to see them over there.

Speaker 1 (38:55)
Yeah, totally. And I think that speaks to the heart of the question of Earth Species, which is that we, as humans, are constantly saying, we are the only species that... We are the only species that uses tools. But, of course, in the 60s, Jane Goodall found chimps use tools. And since then, crows use tools, octopuses use tools. And so now we, as human beings, are like, well, we are the language-using species. And in fact, language is an incredible thing. It's the thing that's let us build civilization, right?

Speaker 1 (39:26)
It's intergenerational knowledge transfer that lets us accumulate scientific wisdom and build large-scale social structures. And in Yuval Harari's sense, it is our shared stories and myths that enable flexible cooperation, that give rise to my ability to talk to you over Zoom. It's fundamental to who we are. A lot of the traditions, like religious traditions, start with the word: in the beginning, there was the Word. Or in the Arabic traditions.

Speaker 2 (40:00)
Definitely.

Speaker 1 (40:01)
The universe begins with Om. And so it's very core to our identity, to who we are. And I think that's what Earth Species is asking: what changes in our relationship with ourselves and to the natural world when we discover that other animals have rich interior lives and are communicating about them?

Speaker 2 (40:23)
Definitely. Yeah. With new technology, it's always important to understand the why behind the tech, right? Which is kind of what you've just touched on. Why do you think it's important to understand animal communication? How do you think it will benefit our society, or the animals, including ourselves, who live on this planet?

Speaker 1 (40:42)
I always think here about empathy. And just speaking very personally, my greatest moments of personal transformation have happened when I can see myself through somebody else's eyes. I'll get some piece of feedback, and it will let me see some part of myself that I didn't really want to see. I'll see some way that I was doing harm that I wasn't aware of. And when I can do that act of deep listening, I've changed. It changes me. And I think that's what we're aiming for at Earth Species: this transformative act of listening. Really concretely.

Speaker 1 (41:33)
One of the things we're really inspired by is these biologists Katie Payne and Roger Payne, who in the 1960s discovered the songs of the humpback whale. They heard the songs of the humpback whale and found them so profound and so beautiful that they produced a record. And when people heard humpbacks singing... that record went on Voyager One as part of the Golden Record, to represent not just all of humanity but all of Earth. It was like the first track after the human greetings. It inspired Star Trek IV, which I think is awesome.

Speaker 1 (42:13)
Go back in time and save the whales. And it was played in front of the UN General Assembly and is credited as the galvanizing moment when deep-sea whaling was banned, and it's the reason why we still have the humpbacks today. Another moment when we got this shift in perspective is when human beings went to the Moon. When there were human feet on the Moon, we saw Earth in those photos, like Blue Marble and Earthrise, and we're looking back and we can see how tiny we are in the black fathoms of space, and we realize how fragile and connected we are.

Speaker 1 (42:53)
The overview effect is what astronauts call it. That shifted culture, right? It wasn't just those images, it's the whole process. And in that moment, we passed the Clean Air Act. The modern environmental movement was born. Earth Day was founded. The EPA was founded, NOAA was founded. And this was in the Nixon era. So there are these moments that can become movements, when you see yourself from the outside, from a perspective not your own. And my biggest hope with Earth Species, and there's a lot of science to be done to get here.

Speaker 1 (43:40)
Is that just like the moment when humanity invented the telescope, and we pointed the telescope up at the universe and discovered that Earth was not at the center, the Copernican revolution, AI can be our new telescope. And we're pointing it at the universe of the natural world around us. And what we're going to discover is that humanity is not the center of the universe. And while that won't instantly change us and all of our behavior and our institutions, it changes the paradigm that we think about ourselves in: that we're not apart from nature, that we're a part of nature, that we're not defending nature, but that we are nature defending itself.

Speaker 1 (44:31)
And that is the shift that I would love for Earth Species to be a part of making.

Speaker 2 (44:36)
Yeah. What a wonderful thought for a paradigm shift. I love it. So I want to switch gears for a second and ask a question that we like to ask all our guests. When did you first realize that privacy was important to you?

Speaker 1 (44:52)
Well, very early on in my career, I started a little company. It got bought by Mozilla, actually the very first company that Mozilla ever bought, and I was part of starting Mozilla Labs. And so I really grew up believing deeply in the values of the open web. I don't think most people know this, but I wrote the original spec to introduce geolocation to the web. And I was thinking a lot, we were all thinking a lot, about the asymmetric power that companies get when they can predict your behavior, even back then.

Speaker 1 (45:46)
And so in those original designs, there is a little slider that lets you change how granular the information you're providing is. Are you sharing your exact location? Are you sharing it at the block level? Are you sharing it just at the city level, or at the country level? All of that, of course, never made it through, but we were thinking about it a lot.
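A minimal sketch of what such a granularity slider could do before a location is handed to a requesting site. The levels come from the conversation; the rounding precision per level is an assumption for illustration, not the original spec.

```python
EXACT, BLOCK, CITY, COUNTRY = "exact", "block", "city", "country"

# Roughly how many decimal places of latitude/longitude each level keeps
# (illustrative values, not the original spec).
PRECISION = {EXACT: 6, BLOCK: 3, CITY: 1, COUNTRY: 0}

def coarsen_location(lat: float, lon: float, level: str) -> tuple:
    """Round coordinates to the user's chosen sharing granularity before
    they are handed to a requesting site."""
    digits = PRECISION[level]
    return round(lat, digits), round(lon, digits)

print(coarsen_location(37.779026, -122.419906, EXACT))    # (37.779026, -122.419906)
print(coarsen_location(37.779026, -122.419906, BLOCK))    # (37.779, -122.42)
print(coarsen_location(37.779026, -122.419906, CITY))     # (37.8, -122.4)
print(coarsen_location(37.779026, -122.419906, COUNTRY))  # (38.0, -122.0)
```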

Speaker 1 (46:29)
I worked on a project in, I believe, 2010, on privacy icons, which was, sort of, a Creative Commons for how your data is used: whether third parties are allowed to use it, which is called secondary use, how long data was going to be retained for, whether companies would fight subpoenas, so that, at the very least, there would be transparency in the privacy marketplace. We gathered the EFF, all the groups that think about this. And one of the big problems from that time that I remember was that privacy is so abstract. It's really hard to make people care, because they only really care when it's violated. And then it's still abstract. You're like, point to me on your body where privacy hurt you.

Speaker 1 (47:11)
You can't, right? And so one of the big realizations I had when we were working on The Social Dilemma was, how can you make privacy something that people can viscerally feel? Because I think our job as communicators is to take things that are invisible and not just make them visible, but make them visceral, feelable. And the insight was this: Jeff Orlowski, the director, kept asking, what are these algorithms? What do they do? How do they work? What's going on inside of these servers? What is AI? And it's like, oh, you know what?

Speaker 1 (47:54)
Technically, it's just a whole bunch of matrix multiplication, but it's also a really good mental model to think that inside of these servers there's a little data voodoo doll that represents you. It starts generic, as a general model of just human behavior. But over time, these companies are collecting your click trails and your toenail clippings and your digital hairs, and they're sticking them on this doll until it looks more and more and more like you, until it can start to predict you better than you know yourself.

Speaker 1 (48:35)
So you know, there's that meme where people will say, like, Facebook listens to me, because I will be having a conversation with a person about some product that I've never talked about before, and then I open up Instagram, and there it is in my feed. They must have the microphone on. And the truth is that it's much more disturbing than that. It's not that these companies are listening to you. It's that they have look-alike models. They have a whole bunch of people that act very similarly to you.

Speaker 1 (49:06)
They have their data dolls. Your data doll looks enough like theirs that they can just look at this look-alike model to figure out what your actions would be. They serve it back to you, and it's so accurate that you think it's magic, that they have to be listening to do it. And what I love about the voodoo doll is that it became sort of the core backing metaphor for The Social Dilemma. Yeah, you remember that image, right? It makes it visceral. It makes you care, because you realize, holy moly: Facebook and Google and Twitter have one of these dolls for one out of every three human beings on this planet.
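A rough sketch of the look-alike idea: nearest neighbors over behavioral profiles, no microphone required. The features, data, and similarity measure are all illustrative assumptions.

```python
import numpy as np

# Each "data doll" is a vector of behavioral signals (clicks, watch time, purchases...).
known_users = {
    "user_a": np.array([0.9, 0.1, 0.8]),
    "user_b": np.array([0.2, 0.9, 0.1]),
    "user_c": np.array([0.85, 0.15, 0.75]),
}
# What each known user went on to engage with (the thing to be predicted).
next_purchase = {"user_a": "hiking boots", "user_b": "gaming headset", "user_c": "hiking boots"}

def most_similar(profile: np.ndarray) -> str:
    """Find the known user whose behavioral profile is closest (cosine similarity)."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(known_users, key=lambda name: cos(profile, known_users[name]))

# You never mentioned hiking boots out loud, but your doll looks like user_a's and user_c's.
you = np.array([0.88, 0.12, 0.79])
print(next_purchase[most_similar(you)])  # "hiking boots"
```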

Speaker 1 (49:34)
And when you play, or if I were to play, Garry Kasparov at chess, I will get destroyed every single time. Why? Because Garry Kasparov can see how I'm going to move. He can see all the moves I'm going to make way before I make them. So when we are playing against a supercomputer that can do the same thing to us, one that has our full history, everything we've ever done, that has this data doll.

Speaker 1 (50:05)
We have simply no chance. It's an asymmetric power over us. So anyway, that's, I guess, my journey into privacy.

Speaker 2 (50:13)
Wow. Yeah. I mean, I definitely still remember that very visceral image from The Social Dilemma, the doll getting put together. It's crazy that these things are happening without our explicit knowledge, right?

Speaker 1 (50:25)
Yeah.

Speaker 2 (50:26)
What do you think is the predominant privacy technology that doesn't exist right now but should exist?

Speaker 1 (50:35)
If we can figure out a way of understanding not just the data that is taken from me, and thinking about it as ownership, but shift our thinking to what can be computed and inferred about me, and the level of asymmetry of knowledge. If we could understand, or get our hands on, that quantified level of asymmetry of power, that would be a fundamental shift. I think about fiduciary relationships. What is a fiduciary relationship? Fancy term. But there are, throughout history, examples of relationships where one party has asymmetric power over the other.

Speaker 1 (51:26)
So doctor, lawyer, therapist: they know information about you that they could use to exploit you. Think about a therapist. There's a reason why therapists are not allowed, it's illegal for them, to date their clients: because they have knowledge that can be used to exploit. Imagine if your doctor or a therapist had a business model. Let's go with therapist. Say your therapist had a business model where they take all of your secrets and your deepest insecurities, and they turn around to advertisers and say, here are all their insecurities, who wants to bid on them? And then they come back to you and start recommending products in your therapy session.

Speaker 1 (52:06)
That would be a terrible world, and so that's why we have this concept of a fiduciary. That is, when there's an asymmetric power over someone, the person who has more power is bound by law to act in the interest of, with a duty of care to, their client. We need to have this at the infotech level, because, as we've already talked about, Google, Facebook, Twitter, TikTok, they all know more about you than your lawyer, doctor, and therapist combined. So they need to be held to a fiduciary duty to you.

Speaker 1 (52:43)
And that level of duty of care should be bound to the level of asymmetry of power. And that concept is missing from sort of the terms and the laws and the thinking, I think, around privacy.

Speaker 2 (53:01)
Yeah, definitely. Totally agree. Final question, thinking big picture. I think we're in a moment right now where there's not a lot of optimism around tech. It seems like companies like Google and Facebook are always in the headlines for some new harm they've caused their users. On this podcast, we've covered things like AI, social media and content moderation, and, of course, privacy. The conversation often turns to how do we mitigate the harm caused by technology. So with that in mind, do you think technology is a force for good in this world?

Speaker 2 (53:32)
And are you optimistic about the ability of tech to solve more problems than it causes?

Speaker 1 (53:40)
I try to stay away from the two, optimism and pessimism, because I'd rather just see things as they are and then try to steer from there. That said, I have hope. Not a huge cross section of hope, but it is there if we change the incentives, the environment that technology is competing in. If we, as human beings, can look into that mirror with clear-eyed compassion and be like, ah, got it, here are our strengths, here are our weaknesses, here's how we should design to be like an exoskeleton around our weaknesses.

Speaker 1 (54:19)
Then we could go from that E. O. Wilson line of we have Paleolithic emotions, medieval institutions, and godlike technology to embracing our Paleolithic emotions, upgrading our medieval institutions, and giving ourselves the collective wisdom to wield our godlike technology. That is totally within the realm of possibility, but it will not happen alone. These companies will not do it, because they're in a zero-sum knife fight for our attention. So we have to change the playing field in order for this to succeed. And if we do, I think there's this really beautiful sort of middle ground where we don't have uncontrolled, unfettered chaos, and we don't have top-down authoritarianism.

Speaker 1 (55:21)
But we have this new middle path, where we use the exponential technology we have access to to upgrade the very democratic governance process itself. I know that's a little abstract, but nonetheless, I think there is hope there.

Speaker 2 (55:41)
Well, thank you for sharing your hope. It's nice to hear a hopeful perspective. We've been speaking with Aza Raskin, entrepreneur, inventor, writer, and co-founder of the Center for Humane Technology and the Earth Species Project. Aza, this has been an absolute pleasure. Thank you for joining us on Privacy is the New Celebrity.

Speaker 1 (56:06)
It's been so much fun. Thank you for having me on.

Speaker 2 (56:11)
That's it for episode eleven. As always, don't forget to subscribe to Privacy is the New Celebrity. If you like what you hear, please leave us a review. For more amazing content, check out mobilecoinradio.com, where you can find our radio show every Wednesday at 6:00 p.m. Pacific Time. That's also where you can find our full archive of podcast episodes. I'm Lucy Kind. Our producer is Sam Anderson, and our theme music was composed by David West Bomb. And please remember, privacy is a choice we deserve.