234: Civil Society and Community Relationships with Michael Garfield

May 12th, 2021 · 1 hr 1 min

About this Episode

02:13 - Michael’s Superpower: Being Able to Creatively Digest and Reconstruct Categories

09:39 - Recognizing Economic Value of Talents & Abilities

18:49 - The Edge of Chaos; Chaos Theory

  • “Life exists at the edge of chaos.”

23:23 - Reproducibility Crisis and Context-Dependent Insight

28:49 - What constitutes a scientific experiment?

38:03 - The Return of Civil Society and Community Relationships; Scale Theory

49:28 - Fractal Geometry

Reflections:

Jacob: Some of the best ideas, tv shows, music, etc. are the kinds of things that there’s not going to be an established container.

Rein: “Act always so as to increase the number of choices.” ~ Heinz von Foerster

Jessica: Externality. Recognize that there’s going to be surprises and find them.

Michael: Adaptability is efficiency aggregated over a longer timescale.

This episode was brought to you by @therubyrep of DevReps, LLC. To pledge your support and to join our awesome Slack community, visit patreon.com/greaterthancode

To make a one-time donation so that we can continue to bring you more content and transcripts like this, please do so at paypal.me/devreps. You will also get an invitation to our Slack community this way as well.

Transcript:

JACOB: Hello and welcome to Episode 234 of Greater Than Code. My name is Jacob Stoebel and I’m joined with my co-panelist, Rein Henrichs.

REIN: Thanks, Jacob and I’m here with my friend and co-panelist, Jessica Kerr.

JESSICA: Thanks, Rein and today, I’m excited to introduce our guest, Michael Garfield.

He’s an artist and philosopher and he helps people navigate our age of accelerating weirdness and cultivate the curiosity and play we need to thrive. He hosts and produces two podcasts, The Future Fossils Podcast & The Santa Fe Institute's Complexity Podcast. Yay, complexity!

Michael acts as interlocutor for a worldwide community of artists, scientists, and philosophers—a practice that feeds his synthetic and transdisciplinary “mind-jazz” performances in the form of essay, avant-guitar music, and painting! You can find him on Bandcamp, it’s pretty cool.

Refusing to be enslaved by a single perspective, creative medium, or intellectual community, Michael walks through the walls between academia and festival culture, theory and practice.

Michael, welcome to Greater Than Code!

MICHAEL: Thanks! I’m glad to be here and I hope that I provide a refreshingly different guest experience for listeners being not a coder in any kind of traditional sense.

JESSICA: Yet you’re definitely involved in technology.

MICHAEL: Yeah, and I think the epistemic framing of programming and algorithms is something that can be applied with no understanding of programming languages as they are currently widely understood. It’s just like design is coding, design of the built environment, so.

JESSICA: And coding is a design.

MICHAEL: Indeed.

JESSICA: Okay, before we go anywhere else, I did not prepare you for this, but we have one question that we ask all of our guests. What is your superpower and how did you acquire it?

MICHAEL: I would like to believe that I have a superpower in being able to creatively digest and reconstruct categories so as to drive new associations between them for people and I feel like I developed that studying integral theory in grad school.

I did some work under Sean Esbjörn-Hargens at John F. Kennedy University looking at the work of and work adjacent to Ken Wilber, who was trying to come up with a metatheoretical framework to integrate all different domains of human knowledge. All different types of inquiry into a single framework that doesn't attempt to reduce any one of them to any other and then in that process, I learned what one of my professors, Michael Schwartz, called creative deconstruction. So showing how art can be science and science can be art and that these aren't ontologically fixed categories that exist external to us. Looking at the relationship between science as a practice and spiritual inquiry as a practice and that kind of thing. So it's an irreverent attitude toward the categories that we've constructed that takes in a way a cynical and pragmatic approach to the way that we define things in our world. You know.

REIN: Kant was wrong. [laughs]

MICHAEL: It's good to get out of the rut. Obviously, you’ve got to be careful because all of these ideas have histories and so you have to decide whether it's worth trying to redefine something for people in order to open up new possibilities in the way that these ideas can be understood and manipulated. It's not, for example, an easy task to try and get people to change their idea about what religion is. [laughs]

JESSICA: Yeah. More than redefined. It's almost like undefined.

MICHAEL: Hm. Like Paul Tillich, for example. Theologian Paul Tillich said that religion is ultimate concern. So someone can have a religion of money, or a religion of sex, but if you get into these, if you try to interpose that in a debate on intelligent design versus evolutionary theory, you'll get attacked by both sides.

JESSICA: [chuckles] That’s cosmology.

MICHAEL: Yeah. So it's like – [overtalk]

JESSICA: Which is hard to [inaudible] of money, or sex.

MICHAEL: Yeah, but people do it anyhow.

JESSICA: [laughs] Yeah. So deconstructing categories and seeing in-between things that fits through your walking through walls, what categories are you deconstructing and seeing between lately?

MICHAEL: Well, I don't know, lately I've been paying more attention to the not so much tilting after the windmills of this metamorphic attitude towards categories, but looking at the way that when the opportunity comes to create a truly novel category, what are the forces in play that prevent that, that prevent recognizing novelty as novelty that I just –

JESSICA: Do you have any examples?

MICHAEL: Yeah, well, I just saw a really excellent talk by UC Berkeley Professor Doug Guilbeault, I think is how you say his name. I am happy to link his work to you all in the chat here so that you can share it.

JESSICA: Yeah, we’ll link that in the show notes.

MICHAEL: He studies category formation and he was explaining how most of the research that's been done on convergent categorization is done on established categories. But what happens when you discover something truly new? What his research shows is that basically the larger the population, the more likely it is that these categories will converge on something that's an existing category and he compared it to island versus mainland population biogeography.

So there's a known dynamic in evolutionary science around genetic drift, which is just this random component of the change in allele frequencies in a population: the larger the population, the less likely it is that a genetic mutation that is otherwise neutral is going to actually percolate out into the population. On an island, you might get these otherwise neutral mutations that actually take root and saturate an entire community, but on the mainland, they get lost in the noise.
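
A minimal sketch of the drift dynamic described here, using a haploid Wright–Fisher model; the model choice, function names, and parameters are illustrative assumptions, not anything from the episode:

```python
import numpy as np

rng = np.random.default_rng(0)

def fixation_rate(pop_size, trials=2000):
    """Fraction of trials in which a single new, selectively neutral mutant
    drifts all the way to fixation in a haploid Wright-Fisher population."""
    fixed = 0
    for _ in range(trials):
        count = 1  # one new neutral mutant
        while 0 < count < pop_size:
            # Next generation: each of pop_size offspring copies a random parent.
            count = rng.binomial(pop_size, count / pop_size)
        fixed += count == pop_size
    return fixed / trials

# Small "island" populations fix new neutral variants far more often than
# large "mainland" ones; the theoretical probability is 1/N.
for n in (10, 100, 1000):
    print(f"N={n:4d}  simulated ~ {fixation_rate(n):.3f}   theory = {1/n:.3f}")
```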

You can look at this in terms of how easy it is for an innovative, artistic, or musical act to actually find any purchase. Like Spotify bought the data analysis company, The Echo Nest, back in 2015 and they ran this study on where emergent musical talent comes from. It comes from places like Australia, the UK, and Iceland, because the networks are small enough. This is a finding that's repeated endlessly through studies of how to create a viral meme that basically, or another way –

JESSICA: You mean a small enough pool to take hold?

MICHAEL: Yeah. That basically big science and large social networks online and these other attempts, anywhere we look at these economies of scale, growing a given system, what happens is—and we were talking about this a little before we got on the call—as a system scales, it becomes less innovative. There's less energy allocated to –

JESSICA: In America?

MICHAEL: Yeah. Bureaucratic overhead, latencies in the network that prevent the large networks from adapting with the same agility to novel challenges. There's a lot of different ways to think about this and talk about this, but it basically amounts to: if you want to innovate, you can't do it from the conservative core of an organization. You can't do it from the board of directors.

JESSICA: Oh.

MICHAEL: You have to go out onto – like why did they call it fringe physics? It's like, it is because it's on the fringe and so there's a kind of –

JESSICA: So this would be like if you have like one remarkably lowercase agile team inside your enterprise, one team is innovating and development practices. They're going to get mushed out. Whereas, if you have one team innovating like that in a small company, it might spread and it might become dominant.

MICHAEL: Yeah. I think it's certainly the case that this speaks to something I've been wondering about in a broader sense, which is how do we recognize the economic value of talents and abilities – how do we recognize a singular individual for their incompressible knowledge and expertise when they don't go through established systems of accreditation like getting a PhD? Because the academic system is such that basically, if you have an innovative contribution, but you don't have the credentials that are required to participate in the community of peer review, then people can't even – your contribution is just invisible. The same is true for how long it took, if you look at economic models, for them to even begin to address the invisible labor of women at home, like domestic labor, or what we're now calling ecosystem services.

So there's this question of – I should add that I'm ambivalent about this question because I'm afraid that answering it in an effective way, how do we make all of these things economically visible would just accelerate the rate at which the capitalist machine is capable of co-opting and exploiting all of these. [chuckles]

REIN: Yeah. You also have this James C. Scott, Seeing Like a State thing where in order to be able to even perceive that that stuff is going on, it has to become standardized, and you can't dissect the bird to observe its song, right?

MICHAEL: Totally. So obviously, it took almost no time at all for consumer culture to commodify the psychedelic experience, to co-opt the psychedelic aesthetic and start using it in advertising campaigns for Levi's Jeans and Campbell Soup and that kind of thing. So it's this question of a moving frontier that as soon as you have the language to talk about it, it's not the ineffable anymore.

REIN: Yeah.

MICHAEL: There's a value to the ineffable and there's a value to – it's related to this question of the exploitation of indigenous peoples by large pharmaceutical companies like, their ethnobotanical knowledge. How do you make the potential value of biodiversity, something that can be manufactured into medicine at scale, without destroying the rainforest and the people who live in it?

Everywhere I look, I see this question. So for me, lately, it's been less about how do we creatively deconstruct the categories we have so much as it is, what is the utility of not knowing how to categorize something at all and then how do we fix the skewed incentive structures in society so as to value that which we currently do not know how to value.

JESSICA: Because you don’t have a category for it.

MICHAEL: Right. Like right now, maybe one of the best examples, even though this is the worst example in another way, is that a large fraction of the human genome has been patented by Monsanto, even though it has no known current biomedical utility. This is what Lewis Hyde in his book, Common as Air, called “the third enclosure” of the common.

So you have the enclosure of the land that everyone used to be able to hunt on and then you have the enclosure of intellectual property in terms of patents for known utilities, known applications, and then over the last few decades, you're starting to see large companies buy their way into and defend patents for the things that actually don't – it's speculative. They're just gambling on the idea that eventually we'll have some use for this and that it's worth lawyering up to defend that potential future use. But it's akin to recognizing that we need to fund translational work. We need to fund synthesis. We need to fund blue sky interdisciplinary research for which we don't have an expected return on investment here because there's –

JESSICA: It's one of those things that it’s going to help; you're going to get tremendous benefits out of it, but you can't say which ones.

MICHAEL: Right. It's a shift perhaps akin to the move that I'm seeing conservation biology make right now from “let's preserve this charismatic species” to “let's do everything we can to restore biodiversity” – the recognition that biodiversity itself is generative and should be valued in its own right. So diverse research teams, diverse workplace teams. We know that there is what University of Michigan Professor Scott Page calls the diversity bonus and you don't need to know, and in fact, you cannot know, what the bonus is upfront.

JESSICA: Yeah. You can't draw the line of causality forward to the benefit because the point of diversity is that you get benefits you never thought of.

MICHAEL: Exactly. Again, this gets into this question of, as a science communications staffer, being in a position where I'm constantly in this weird, dissonant zone between the elite researchers at the Santa Fe Institute where I work and the community of complex systems enthusiasts that has grown up around this organization. It's a complete mismatch in scale between this org that has basically insulated itself so as to preserve the island of innovation that is required for really groundbreaking research, but then also, they have this reputation that far outstrips their ability to actually respond to people that are one step further out on the fringe from them.

So I find myself asking – historically, SFI was founded mostly by Los Alamos National Laboratory physicists who were disenchanted with the idea that their science was limited to that which could basically be argued as a national defense initiative, and they just wanted to think about the deepest mysteries of the cosmos. So what is to SFI as SFI is to Los Alamos?

Even in really radical organizations, there's a point at which they've matured and there are questions that are beyond the horizon of that which a particular community is willing to indulge. I find, in general, I'm really fascinated by questions about the nonlinearity of time, or about weird ontology. I'm currently talking to about a dozen other academics and para-academics about how to try and – I'm working, or helping to organize a working group of people that can apply rigorous academic approaches to asking questions that are completely taboo inside of academia.

Questions that challenge some of the most fundamental assumptions of modernity, such as there being a distinction between self and other, or the idea that there are things that are fundamentally inaccessible to quantitative research. These kinds of things like, how do we make space for that kind of inquiry when there's absolutely no way to argue it in terms of you should fund this? And that's not just for money, that's also for attention because the demands on the time and attention of academics are so intense that even if they have interest in this stuff, they don't have the freedom to pursue it in their careers. That's just one of many areas where I find this kind of line of inquiry manifesting right now.

REIN: Reminds me a lot of this model of the edge of chaos that came from Packard and Langton back in the late 70s. Came out of chaos theory, this idea that there's this liminal transitionary zone between stability and chaos and that this is the boiling zone where self-organization happens and innovation happens. But also, that this zone is itself not static; it gets pushed around by other forces.

MICHAEL: Yeah, and that's where life is and that was Langton's point, that life exists at the edge of chaos, that it's right there at the phase transition boundary – what is it that separates a stone from a raging bonfire? It's the Goldilocks Zone kind of question. Yeah, totally.
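
A toy illustration of an order-to-chaos transition, using the logistic map rather than the cellular automata Packard and Langton actually studied; the parameter values below are just the standard textbook ones:

```python
def logistic_orbit(r, x0=0.2, burn_in=500, keep=8):
    """Iterate the logistic map x -> r*x*(1-x) and report where it settles."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

# r = 2.8 -> a single fixed point (order)
# r = 3.5 -> a repeating 4-cycle (still ordered)
# r = 3.9 -> no repetition at all (deterministic chaos)
# The richest structure sits near the transition, around r = 3.57.
for r in (2.8, 3.5, 3.9):
    print(r, logistic_orbit(r))
```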

REIN: And these places that were at the edge of chaos, that were innovative, can ossify; they can move into the zone of stability. It's not so much that they move, it's that – I don't know, maybe it's both. Where the frontier is, is constantly in motion.

MICHAEL: Yeah, and to that point again, I tend to think about these things in a topographical, or geographical sense, where the island is growing, we're sitting on a volcano, and there's lots you can do with that metaphor. Obviously, it doesn't make sense. You can't build your house inside the volcano, right? [laughs] But you want to be close enough to be able to watch and describe as new land erupts, but at a safe distance. Where is that sweet spot where you have rigor and you have support, but you're not trapped within a bureaucracy, or an ossified set of institutional conventions?

JESSICA: Or if the island is rising, if the earth is pushing the island up so that the coastline keeps expanding outward, and you built your house right on the beach. As in, you got into React when it was the new hotness and you learned all about it and you became the expert, and then you had this great house on the beach, and now you have a great house in the middle of town, because the frontier, the hotness, has moved on as our mass of technology has increased and the island rises up. I mean, you can't both identify as being on the edge and identify with any single category of knowledge.

MICHAEL: Yeah. It's tricky. I saw Nora Bateson talking about this on Twitter recently. She's someone who I love for her subversiveness. Her father, Gregory Bateson, was a major player in the articulation of cybernetics and she's awesome in that sense of, I don't know, the minister's daughter kind of a way of being extremely well-versed in complex systems thinking and yet also aware that there's a subtle reductionism that comes in that misses –

JESSICA: Misses from?

MICHAEL: Well, it comes in like – we think about systems thinking as not reductionist because it's not trying to explain biology in terms of the interactions of atoms. It acknowledges that there's genuine emergence that happens at each of these levels and yet, to articulate that, one of the things that happens is everything has to be squashed into numbers and so it's like this issue of how do you quantify something.

JESSICA: It's not real, if you can't measure it in numbers.

MICHAEL: Right and that belies this bias towards thinking that because you can't quantify something now means it can't be quantified.

JESSICA: You can’t predict which way the flame is going to go in the fire. That doesn't mean the fire doesn't burn. [chuckles]

MICHAEL: Right. So she's interesting because she talks about warm data as this terrain, or this experience where we don't know how to talk about it yet, but that's actually what makes it so juicy and meaningful and instructive and –

JESSICA: As opposed to taking it out of context. Leave it in context, even though we don't know how to do some magical analysis on it there.

MICHAEL: Right, and I think this starts to generate some meaningful insights into the problem of the reproducibility crisis. Just as an example, I think science is generally moving towards context dependent insight and away from – even at the Santa Fe Institute, nobody's looking for a single unifying theory of everything anymore. It's far more illuminating, useful, and rigorous to look at how different models are practical given different applications.

I remember in college there's half a dozen major different ways to define a biological species and I was supposed to get up in front of a class and argue for one over the other five. I was like, “This is preposterous.”

Concretely, pun kind of intended: Biosphere 2, which was this project – I know the folks here at Synergia Ranch in Santa Fe, at the Institute of Ecotechnics, who were responsible for creating this unbelievable, historic effort to miniaturize the entire biosphere inside of a building. They had a coral reef and a rainforest and a savannah and a fog desert, like the Atacama, and there was one other, I forget.

But it was intended as a kind of open-ended ecological experiment that was supposed to iterate a 100 times, or 50 times over a 100 years. They didn't know what they were looking for; they just wanted to gather data and then continue these 2-year enclosures where a team of people were living inside this building and trying to reproduce the entire earth biosphere in miniature.

So that first enclosure is remembered historically as a failure because they miscalculated the rate at which they would be producing carbon dioxide and they ended up having to open the building and let in fresh air and import resources.

JESSICA: So they learned something?

MICHAEL: Right, they learned something. But that project was funded by Ed Bass, who in 1994, I think, called in hostile corporate takeover expert Steve Bannon, who went in there with a federal team and basically issued a restraining order on these people and forcibly evicted them from the experiment that they had created. Because it was seen as an embarrassment, because they had been spun in this way in international media as being uncredentialed artists, rather than scientists, who really should not have the keys to this thing.

It was one of these instances where people regard this as a scientific failure and yet when you look at the way so much of science is being practiced now, be it in the domains of complex systems, or in machine learning, what they were doing was easily like 20 or 30 years ahead of its time.

JESSICA: Well, no wonder they didn’t appreciate it.

MICHAEL: [chuckles] Exactly. So it's like, they went in not knowing what they were going to get out of it, but there was this tragic mismatch between the logic of Ed Bass’ billionaire family about what it means to have a return on an investment and the logic of ecological engineering where you're just poking at a system to see what will happen and you don't even know where to set the controls yet. So anyway.

JESSICA: And it got too big. You talked about the media, it got too widely disseminated and became embarrassed because it wasn't on an island. It wasn't in a place where the genetic drift can become normal.

MICHAEL: Right. It was suddenly subject to the constraints imposed upon it in terms of the way that people were being taught science in public school in the 1980s that this is what the scientific method is. You start with a hypothesis and it's like what if your –

JESSICA: Which are not standards that are relevant to that situation.

MICHAEL: Exactly. And honestly, the same thing applies to other computational forms of science. It took a long time for the techniques pioneered at the Santa Fe Institute to be regarded as legitimate. I'm thinking of cellular automata, agent-based modeling, and computer simulation generally.

Stephen Wolfram did a huge service, in some sense, to the normalization of those things in publishing A New Kind of Science, that massive book in whatever it was, 2004, or something, where he said, “Look, we can run algorithmic experiments,” and that's different from the science that you're familiar with – but that's setting aside for a moment the attribution failure that that book is, and acknowledging who actually pioneered A New Kind of Science. [chuckles]

JESSICA: At least it got some information out.

MICHAEL: Right. At least it managed to shift the goalpost in terms of what the expectations are; what constitutes a scientific experiment in the first place.

JESSICA: So it shifted categories.

MICHAEL: Yeah. So I think about, for example, research that was done on plant growth in a basement. I forget who it was that did this. I think I heard this from either Doug Rushkoff, or Charles Eisenstein, who was talking about this, where you got two completely different results and they couldn't figure out what was going on. And then they realized that it was at different moments in the lunar cycle and that it didn't matter if you put your plant experiment in a basement and lit everything with artificial bulbs and all this stuff, rather than sunlight, rather than clean air – even if you could control for everything, there's always a context outside of your context.

So this notion that no matter how cleverly you try to frame your model, when it comes time to actually experiment on these things in the real world, there's always going to be some externality you've missed, and this has real, serious, and grave implications in terms of our economic models, because there will always be someone that's falling through the cracks.

How do we actually account for all of the stakeholders in conversations about the ecological cost of dropping a new factory over here, for example? It's only recently that people, anywhere in the modern world, are starting to think about granting ecosystems legal protections as entities befitting of personhood and this kind of thing.

JESSICA: Haven’t we copyrighted those yet?

MICHAEL: [laughs] So all of that, there's plenty of places to go from there, I'm sure.

REIN: Well, this does remind me of one of the things that Stafford Beer tried was he said, “Ponds are viable systems, they’re ecologies, they're adaptive, they're self-sustaining. Instead of trying to model how a pond works, what if we just hook the inputs of the business process into the pond and then hook the adaptations made by the pond as the output back into the business process and use the pond as the controlling system without trying to understand what makes a pond good at adapting?” That is so outside of the box and it blows my mind that he was doing this, well, I guess it was the 60s, or whatever, but this goes well beyond black boxing, right?

MICHAEL: Yeah. So there's kind of a related insight that I saw Michelle Girvan give at a Santa Fe Institute community lecture a few years ago on reservoir computing, which maybe most of your audience is familiar with, but just for the sake of it, this is joining a machine learning system to a source of analog chaos, basically. So putting a computer on a bucket of water and then just kicking the bucket, every once in a while, to generate waves so that you're feeding chaos into the machine learning algorithm to prevent overfitting. Again, and again, and again, you see this value where this is apparently the evolutionary value of play and possibly also, of dreaming.

There's a lot of good research on both of these areas right now that learning systems are all basically hill climbing algorithms that need to be periodically disrupted from climbing the wrong local optimum. So in reservoir computing, by adding a source of natural chaos to their weather prediction algorithms, they were able to double the horizon at which they were able to forecast meteorological events, past the mathematical limit that had been proven and established for this. That is like, we live in a noisy world.
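
A rough sketch of the “knock the learner off its false peak” idea, using plain hill climbing with occasional random jumps. This is a stand-in illustration of noise injection, not Girvan's reservoir-computing setup; the landscape, function names, and parameters are made up:

```python
import math
import random

def f(x):
    """A bumpy landscape: one broad global peak at x = 0 plus local ripples."""
    return math.exp(-x * x / 20) + 0.3 * math.cos(3 * x)

def hill_climb(x, steps=5000, step=0.05, kick=0.0):
    """Greedy ascent on f; with probability `kick`, take a random jump
    instead of a small uphill step (a crude form of noise injection)."""
    best_x, best_f = x, f(x)
    for _ in range(steps):
        if kick and random.random() < kick:
            x += random.gauss(0, 2)          # the "bucket kick"
        else:
            candidate = x + random.choice((-step, step))
            if f(candidate) > f(x):
                x = candidate                # only accept uphill moves
        if f(x) > best_f:
            best_x, best_f = x, f(x)
    return round(best_x, 2), round(best_f, 2)

random.seed(1)
print("greedy:", hill_climb(6.0))             # typically strands on a local ripple
print("noisy :", hill_climb(6.0, kick=0.05))  # typically finds the broad central peak
```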

JESSICA: Oh, yeah. Just because it’s provably impossible doesn't mean we can't do something that's effectively the same thing, that's close enough.

MICHAEL: Right. Actually, in that example, I think that there's a strong argument for the value of that which we can't understand. [laughs] It's like it's actually important. So much has been written about the value of slack, of dreaming, of taking a long walk, of daydreaming, of letting your mind wander, to scientific discovery.

So this is where great innovations come from – it's like, “I'm going to sleep on it,” or “I'm going to go on vacation.” When we get stuck on an idea, fixated on a problem, we actually tend to foreclose on the possibility of answering that problem entirely. Actually, there's a good reason to – I think this is why Silicon Valley has recognized the instrumental value of microdosing, incidentally. [laughs] It's that you actually want to inject a little noise into your algorithm and knock yourself off the false peak that you've stranded yourself on.

JESSICA: Because if you aim for predictability and consistency, if you insist on reasonableness, you'll miss everything interesting.

MICHAEL: Or another good way to put it is what is it, reasonable women don't make history. [laughs] There is actually a place for the –

JESSICA: You don’t change the system by maximally conforming.

MICHAEL: Right.

JESSICA: If there is a place for…

MICHAEL: It’s just, there is a place for non-conformity and it's a thing where it's like, I really hope and I have some optimism that what we'll see, by the time my daughter is old enough to join the workforce, is that we'll see a move in this direction where non-conformity has been integrated somehow into our understanding of how to run a business that we actively seek out people that are capable of doing this.

For the same reason that we saw over the 20th century, we saw a movement from one size fits all manufacturing to design your own Nike shoes. There's this much more bespoke approach.

JESSICA: Oh, I love those.

MICHAEL: Yeah. So it's like we know that if we can tailor our systems so that they can adapt across multiple different scales, that they're not exploiting economies of scale that ultimately slash the redundancy that allows an organization to adapt to risk. That if we can find a way to actually generate a kind of a fractal structure in the governance of organizations in the way that we have reflexes. The body already does this, you don't have to sit there and think about everything you do and if you did, you’d die right away.

JESSICA: [laughs] Yeah.

REIN: Yeah.

MICHAEL: If you had to pass every single twitch all the way up the chain to your frontal cortex –

JESSICA: If we had to put breathe on the list. [laughs]

MICHAEL: Right. If you had to sit there and approve every single heartbeat, you'd be so dead. [overtalk]

JESSICA: Oh my gosh, yeah. That's an energy allocation and it all needs to go through you so that you can have control.

REIN: I just wanted to mention, that reminded me of a thing that Klaus Krippendorff, who's a cybernetics guy, said that there is virtue in the act of delegating one's agency to trustworthy systems. We're talking, but I don't need to care about how the packets get from my machine to yours and I don't want to care about that, but there's a trade-off here where people find that when they surrender their agency, that this can be oppressive. So how do we find this trade-off?

MICHAEL: So just to anchor it again in something that I find really helpful: thinking about the way that convenience draws people into these compacts with the market and with the state. You look over the last several hundred years, or thousand years, in the West and you see more and more of the functions that used to be performed by the extended family, or by the neighborhood, by life in a city, by your church congregations, or whatever – all of that stuff has been outboarded to commercial interests and to federal-level oversight, because it's just more efficient to do it that way at the timescales that matter, that are visible to those systems.

Yet, what COVID has shown us is that we actually need neighborhoods. Suddenly, it doesn't – my wife and I, it was easy to make the decision to move across the country to a place where we didn't know anybody to take a good job. But then suddenly, when you're just alone in your house all the time and you've got nobody to help you raise your kids, that seems extremely dumb.

So there's that question of just as I feel like modern science is coming back around to acknowledging that a lot of what was captured in old wives’ tales and in traditional indigenous knowledge, ecological knowledge systems that were regarded by the enlightenment as just rumor, or…

JESSICA: Superstition.

MICHAEL: Superstition, that it turns out that these things actually had, that they had merit, they were evolved.

JESSICA: There was [inaudible] enough.

MICHAEL: Right. Again, it wasn't rendered in the language that allowed it to be the subject of quantitative research until very recently and then, suddenly it was and suddenly, we had to circle back around. Science is basically in this position where they have to sort of canonize Galileo, they're like, “Ah, crap. We burned all these witches, but it turns out they were right.”

There's that piece of it. So I think, relatedly, one of the things that we're seeing – economists Sam Bowles and Wendy Carlin have written about this – is the return of civil society, the return of mutual aid networks, and of gift economies, and of the extended family, and of buildings that are built around courtyards rather than this Jeffersonian everyone-on-their-own-plot-of-land approach. We're starting to realize that we had basically emptied out the topsoil of all of these community relationships in order to standardize things for a mass, big-agricultural approach that on the short timescale actually does generate greater yield.

It's easier to have conversations with people who agree with you than it is – in a way, it's inexpedient to try and cross the aisle and have a conversation with someone with whom you deeply and profoundly disagree. But the more polarized we become as a civilization, the more unstable we become as a civilization. So over this larger timescale, we actually have to find ways to incentivize talking to people with whom you disagree, or we're screwed. We're kicking legs out from under the table.

REIN: At this point, I have to name drop Habermas because he had this idea that there were two fundamental cognitive interests that humans have to direct their attempts to acquire knowledge. One is a technical interest in achieving goals through prediction and control and the other is a practical interest in ensuring mutual understanding.

His analysis was that in advanced capitalist societies, the technical interest dominates at the expense of the practical interest and that knowledge produced by the empirical-analytic sciences becomes the prototype of all knowledge. I think that's what you're talking about here, that we've lost touch with this other form of knowledge. It's not seen as valuable and the scientific method, the analytical approaches, have come to dominate.

MICHAEL: Yeah, precisely. [laughs] Again, I think in general, we've become impoverished in our imagination because, again, the expectations – there's a shifting baseline. So what people expect to pull out of the ocean now is a fish that you might catch on just a commercial, or a recreational, fishing expedition. It's a quarter of the size of the same species of fish you might've caught 50, 70 years ago, and when people pull up this thing, they're like, “Oh, look at –” and they feel proud of themselves.

I feel like that's what's going on with us in terms of how we no longer even recognize, or didn't until very recently recognize, that we had been unwittingly colluding in the erosion of some very essential levels of organization in human society and that we had basically sold our souls to market efficiency and efficient state-level governance.

Now it's a huge mess to try and understand. You look at Occupy Wall Street and stuff like that and it just seems like such an enormous pain in the ass to try and process things in that way. But it's because we're having to relearn how to govern neighborhoods and govern small communities and make business decisions at the scale of a bioregion rather than a nation.

JESSICA: Yeah. It's a scale thing. I love the phrase topsoil of community relationships, because when you talk about the purposive knowledge that whatever you call it, Rein, that is goal seeking. It's like the one tall tree that is like, “I am the tallest tree,” and it keeps growing taller and taller and taller, and it doesn't see that it's falling over because there's no trees next to it to protect it from the wind. It's that weaving together between all the trees and the different knowledge and the different people, our soul is there. Our resilience is there.

REIN: Michael, you keep talking about scale. Are you talking about scale theory?

MICHAEL: Yeah. Scaling laws, like Geoffrey West's stuff. Luis Bettencourt is another researcher, at the University of Chicago, who does really excellent work in urban scaling. I just saw a talk from him this morning that was really quite interesting, about there being a sweet spot where a city can exist between how thinly it's distributed infrastructurally over a given area versus how congested it is. Because population and infrastructure scale differently, they scale at different rates, then you get –

REIN: If I remember my West correctly, just because I suspect that not all of our listeners are familiar with scale theory, there's this idea that there are certain things that grow superlinearly as things scale and certain things that grow sublinearly. So for example, the larger a city gets, you get 15% more restaurants, but you also get 15% more flu, and you also get 15% less traffic.

MICHAEL: Yeah. So anything that depends on infrastructure scales sublinearly: a city of 2 million people has about 185% the number of gas stations of a city of 1 million. But anything having to do with the number of interactions between people scales superlinearly. You get 115% of the – rather you get, what is it, 230%? Something like that. Anyway, it's 85% up versus 115% up. So patents, but also crime and also just the general pace of life scale at 115% per capita. So, like, disease transmission.
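
A back-of-the-envelope rendering of those exponents (roughly 0.85 for infrastructure and 1.15 for interaction-driven quantities, per West and Bettencourt); the baseline counts here are invented purely for illustration:

```python
def scaled(y_baseline, n_baseline, n_new, beta):
    """Urban scaling law Y = Y0 * N**beta, so Y_new/Y_old = (N_new/N_old)**beta."""
    return y_baseline * (n_new / n_baseline) ** beta

small, large = 1_000_000, 2_000_000          # compare a 1M-person city to a 2M one

infrastructure = scaled(100, small, large, beta=0.85)   # e.g. gas stations
interactions   = scaled(100, small, large, beta=1.15)   # e.g. patents, crime

print(f"infrastructure: {infrastructure:.0f}% of the smaller city's count")  # ~180%
print(f"interactions:   {interactions:.0f}% of the smaller city's count")    # ~222%
# Doubling the population needs only ~80% more infrastructure but yields
# ~120% more of the interaction-driven quantities: an economy of scale on
# one side and a superlinear "social bonus" on the other.
```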

So you get into these weird cases—and this links back to what we were talking about earlier—where people move into the city because, per unit, in a given day, you have so much more choice, you have so much more opportunity than you would in your agrarian Chinese community, and that's why Shenzhen is basically two generations old: 20 million people and none of them have grandparents living in Shenzhen, because they're all attracted to this thing. But at scale, what that means is that everyone is converging on the same answer. Everyone's moving into Shenzhen and away from their farming community.

So you end up – in a way, it's not that that world is any more innovative. It's just, again, easier to capture that innovation and therefore, measure it. But then back to what we were saying about convergent categories and biogeography, it's like if somebody comes up with a brilliant idea in the farm, you're not necessarily going to see it. But if somebody comes up with the same brilliant idea in the city, you might also not see it for different reasons. So anyway, I'm in kind of a ramble, but.

JESSICA: The optimal scale for innovation is not the individual and it's not 22 million, it's in between.

MICHAEL: Well, I feel like at the level of a city, you're no longer talking about individuals, almost, in a way. At that point, you're talking about firms. A city is like a rainforest in which the fauna are companies. Whereas a neighborhood is an ecosystem in which the fauna are individual people, and so to equate one with the other is a potential point of confusion.

Maybe an easier way to think about this would be multicellular life. My brain is capable of making all kinds of innovations that any cell, or organ in my body could not make on its own. There's a difference there. [overtalk]

JESSICA: [inaudible].

MICHAEL: Right. It's easier, however, for a cell to mutate if it doesn't live inside of me. Because if it does, it's the cancer – [overtalk]

JESSICA: The immune system will come attack it.

MICHAEL: Right. My body will come and regulate that.

JESSICA: Like, “You’re different, you are right out.”

MICHAEL: Yeah. So it's not about innovation as some sort of whole category, again, it's about different kinds of innovation that are emergent at different levels of organization. It's just the question of what kinds of innovation are made possible when you have something like the Large Hadron Collider versus when you've got five people in a room around a pizza. You want to find the appropriate scale for the entity, for the system, that's the actual level of granularity at which you're trying to look at the stuff, so.

REIN: Can I try to put a few things together here in potentially a new way and see if it's anything? So we talked about the edge of chaos earlier and we're talking about scale theory now, and in both, there's this idea of fractal geometry. This idea that a coastline gets larger, the smaller your ruler is.

In scale theory, there's this idea of space filling, that you have to fill the space with things like capillaries, or roads, and so on. But in the human lung, for example, if you unfurled all of the surface area, you'd fill up like a football field, I think. So maybe there's this idea that there's complexity that's made possible by the fractal shape of this liminal region, the edge of chaos.
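
A quick numerical sketch of the coastline effect mentioned here: for a curve of fractal dimension D, the length measured with a ruler of size e grows roughly like e^(1 - D) as the ruler shrinks. The dimension and calibration constant below are illustrative assumptions, not measured values:

```python
def measured_length(ruler_km, d=1.25, calibration=1000.0):
    """Richardson's relation: L(ruler) ~ calibration * ruler**(1 - d).
    A smooth curve (d = 1) measures the same at every ruler size; a fractal
    coastline (d > 1) keeps getting longer as the ruler shrinks."""
    return calibration * ruler_km ** (1 - d)

for ruler in (200, 100, 50, 10, 1):
    print(f"ruler {ruler:>3} km -> coastline ~ {measured_length(ruler):,.0f} km")
```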

MICHAEL: Yeah. Certainly, I think that's basically what it is: in maximizing surface area, like you do within a lung, you're maximizing exposure. So if the scientific community were operating on the insights that it has generated in a deliberate way, then you would try to find a way to actually incorporate the fringe physics community.

There's got to be a way to use that as the reservoir of chaos, rather than trying to shut that chaos out of your hill climbing algorithm and then at that point, it's just like, where's the threshold? How much can you invite before it becomes a distraction from getting anything done? When it's too noisy to be coherent.

Arguably, what the internet has done for humankind has thrown it in completely the opposite direction where we've optimized entirely for surface area instead of for coherence. So now we have like, no two people seem to be able to agree on reality anymore. That's not useful either.

REIN: Maybe there's also a connectivity thing here where if I want to get from one side of the city to the other, there are 50 different routes. But if I want to get from one city to another, there's a highway that does it.

MICHAEL: Yeah, totally. So it's just a matter of, rather than thinking about what allows for the most efficient decisions, in some sense, at one given timescale, it's how can we design hierarchical information aggregation structures so as to create a wise balance between the demands on efficiency that are held and maintained at different scales.

SFI researcher Jessica Flack talks about this in her work on collective computation and primate hierarchies, where – it's a weird, awkward thing, but basically, there is an evolutionary argument for police: it turns out that having a police system prevents violence. This is mathematically demonstrable, but you also have to make sure that there's enough agency at the individual level in the system that the police aren't in charge of everything going on. It's not just complex, it's complicated.

[laughs] We've thrown out a ton of stuff on this call. I don't know, maybe this is just whetting people's appetite for something a little bit more focused and concise.

JESSICA: This episode is going to have some extensive show notes.

MICHAEL: Yeah. [chuckles]

JESSICA: It's definitely time to move into reflections.

JACOB: You were talking, at the very beginning, about Spotify – like how, when unknown ideas are able to find their tribe, they germinate. I was reading about how Netflix does business and it's very common for them to make some new content and then see how it goes for 30 days and then just kill it. Because they say, “Well, this isn't taking off. We're not going to make more of it,” and a lot of people can get really upset with that. There's definitely been some really great things out on Netflix where I'm like, on the one hand, “Why are you canceling this? I really wanted more,” and it seems like there are a lot of people that do, too.

What that's making me think about as well, for one thing, is that it seems like Netflix, from my experience, is not actually marketing some of their best stuff. You would never know it's there, which just gets in the way of people finding more unknown things.

But also, I'm thinking about how, just generally speaking, some of the best ideas, TV shows, music, whatever, are the kinds of things for which there's not going to be an established container, a group of people where you can say, “We want to find white men ages 25 to 35 and we're going to dump it on their home screen because if anyone's going to like it, it's them, and if they do, then we keep it, and if they don't, we don't.” I feel like with the best things, we don't actually know who those groups are going to be, and it's going to be a weird constellation of people that I couldn't actually classify. So I was just thinking about how that's an interesting challenge.

JESSICA: Sweet. Rein, you have a thing?

REIN: Yeah. I have another thing. I was just reminded of von Foerster, who was one of the founders of Second-order cybernetics. He has an ethical imperative, which is act always so as to increase the number of choices. I think about this actually a lot in my day-to-day work about maximizing the option value that I carry with me as I'm doing my work, like deferring certain decisions and so on. But I think it also makes sense in our discussion as well.

JESSICA: True. Mine is about externalities. We talked about how, whatever you do, whatever your business does, whatever your technology does, there are always going to be effects on the world, on the context and the context of the context, that you couldn't predict. That doesn't mean don't do anything. It does mean look for those. Recognize that there are going to be surprises and try to find them. It reminds me of how sometimes, I think in interviewing, we're like, “There are cognitive biases so in order to be fair, we must not use human judgment!”

[laughter]

Which is not helpful. I mean, yes, there are cognitive biases so look for them and try to compensate. Don't try to use only something predictable, like an algorithm. That's not helpful. That's it.

MICHAEL: Yeah. Just to speak to a little bit of what each of you have said, I think for me, one of the key takeaways here is that if you're optimizing for future opportunity, if you're trying to – and I think I saw MIT define intelligence in this way, that AI could be measured in terms of its ability to – AGI, rather, could be measured in terms of its ability to increase the number of game steps available to it, or options available to it in the next step of an unfolding puzzle, or whatever. Superhuman AGI is going to break out of any kind of jail we try to put it in just because it's doing better at this.

But the thing is that that's useless if we take it in terms of one spatiotemporal scale. Evolutionary dynamics have found a way to do this in a rainforest that optimizes biodiversity and the richness of feeding relationships in a food web without this short-sighted quarterly return maximizing type of approach.

So the question is are you trying to create more opportunities for yourself right now? Are you trying to create more opportunities for your kids, or are you trying to transcend the rivalrous dynamics? You've set yourself up for intergenerational warfare if you pick only one of those. The tension between feed yourself versus feed your kids is resolved in a number of different ways in different species that have different – yeah. It is exactly – Rein, in the chat you said it reminds you of the trade-off between efficiency and adaptability – and it's like, arguably, adaptability is efficiency aggregated when you're looking at it over a longer timescale, because you don't want to have to rebuild civilization from scratch.

So [chuckles] I think it's just important to add the dimension of time and to consider that this is something that's going on at multiple different levels of organization at the same time and that's a hugely important to how we actually think about these topics.

JESSICA: Thinking of scales of time, you’ve thought about these interesting topics for an hour, or so now and I hope you'll continue thinking about them over weeks and consult the show notes. Michael, how can people find out more about you?

MICHAEL: I'm on Twitter and Instagram if people prefer diving into social media first – I don't recommend it. I would prefer you go to patreon.com/michaelgarfield and find the Future Fossils podcast there. I have a lot of other stuff I do; the music and the art and everything feeds into everything else. So because I'm a parent and because I don't want all of my income coming from my day job, I guess Patreon is where I suggest people go first. [laughs]

Thank you.

JESSICA: Thank you. And of course, to support the podcast, you can also go to patreon.com/greaterthancode. If you donate even a dollar, you can join our Slack channel and join the conversation. It'll be fun.

Support Greater Than Code