242: Considering The Social Side of Tech with Trond Hjorteland

July 21st, 2021 · 48 mins 25 secs

About this Episode

01:20 - The Superpower of Sociotechnical System (STS) Design: Considering the Social AND the Technical. The social side matters.

09:14 - The Origins of Sociotechnical Systems

18:42 - Design From Above vs Self-Organization

  • Participative Design
  • Idealized Design
  • Solving Problems is not Systems Thinking

29:39 - Systemic Change and Open Systems

37:47 - The Fourth Industrial Revolution

Reflections:

Jessica: “You are capable of taking in stuff that you didn’t know you see.” – Trond

Trond: In physics, we do our best to remove the people and close the system as much as possible. In IT, it's the opposite: we work in a completely open system where the human part is essential.

Rein: What we call human error is actually a human's inability to cope with complexity. We need to get better at managing complexity, not controlling it.

This episode was brought to you by @therubyrep of DevReps, LLC. To pledge your support and to join our awesome Slack community, visit patreon.com/greaterthancode

To make a one-time donation so that we can continue to bring you more content and transcripts like this, please do so at paypal.me/devreps. You will also get an invitation to our Slack community this way as well.

Transcript:

REIN: Welcome to Episode 242 of Greater Than Code. I’m here with my friend, Jessica Kerr.

JESSICA: Thanks, Rein and I'm excited because today we are here with Trond Hjorteland.

Trond is an IT architect and aspiring sociotechnical systems designer from the consulting firm Scienta.no—that's no as in the country code for Norway, not no as in no science. Trond has many years of experience with large, complex, business-critical systems as a developer and an architect on middleware and backend applications, so he's super interested in service orientation, domain-driven design—we like that one—event-driven architectures and, of course, sociotechnical systems, which is our topic today! He's worked in industries across the world like telecom, media, TV, and government.

Trond’s mantra is, “Great products emerge from collaborative sensemaking and design.” I concur.

Trond, welcome to Greater Than Code!

TROND: Thank you for having me. It's fun being here.

JESSICA: Trond, as a Northern European, I know our usual question about superpowers makes you nervous. So let me change it up a little bit: what is your superpower of sociotechnical system design?

TROND: Oh, that's a good one. I'm glad you turned it around, because we are from the land of Jante, as you may have heard, where people are not supposed to be any better than anybody else. So being a superhero, that's not something that we are accustomed to, so to speak.

So the topic there, sociotechnical systems: what makes you a superhero by having that perspective? I think it's in the name, really. You actually join the social and the technical aspects of things, whatever you do.

But my focus is mainly on organizations, and on people or teams cooperating on designing IT solutions and stuff like that, where you have to consider both the social and the technical. And I find that we focus too much—I have definitely done that—on the technical aspects. Not ignoring the social aspects, but at least when we are designing stuff, we frequently get too attached to the technical aspects. So I think we need that balance.

JESSICA: Yeah.

TROND: So I guess, that is my superpower I get from that.

JESSICA: When we do software design, we think we're designing software, which we think is made of technical code and infrastructure. But that software is made by people and for people and, imagine that, the social side matters.

TROND: Yeah, and I must say that since Agile in the early 2000s, the focus on the user has been increasing. I think that's better covered than it used to be, but I still think we miss out on the "we" part of creating software—that humans actually create software.

We often talk about the customer, for example. I guess many of your listeners are creating a system that customers actually use, like there's an end user somewhere. But frequently, there are also internal users of that system that you create, like backend users, and there's a wide range of other stakeholders as well and – [overtalk]

JESSICA: Internal users of customer facing systems?

TROND: For example, yes. Like back office, for example. I'm working for a fairly large telecom operator and of course, their main goal is getting and keeping the end users, the paying customers, but there's also a lot of stuff going on in the backend, in the back office: customer service support, delivery of equipment to the users, shipment, maintenance, assurance, all that stuff. So there's a lot going on in that domain that we rarely think of when we create the IT systems, I find at least.

JESSICA: But when we're making our software systems, we're building the company, we're building the next version of this company, and that includes how well can people in the back office do their jobs.

TROND: Exactly.

JESSICA: And us, like we're also creating the next version of software that we need to change and maintain and keep running and respond to problems in. I like to think about the developer interface.

TROND: Exactly, and that is actually an area where the wider sociotechnical term has popped up more frequently than before. It's exactly that, because we think about the team topologies we need and organize the teams around, for example, services where necessary and stuff like that.

JESSICA: Team topologies, you said.

TROND: Yeah, the Team Topologies authors go into this stuff. So we are looking into that stuff. We are getting knowledge on how to do that, but I find we still are not seeing the whole picture, though. Yes, it is important to get the teams right, because you want them to not interact too much, but enough, so we want – [overtalk]

JESSICA: Oh yeah, I love it that the book says, “Collaboration is not the goal! Collaboration is expensive and it's a negative to need to do it, but sometimes you need to.”

TROND: Yeah, exactly. So there's a backstory there. The main idea is that you have a system consisting of parts, and what sociotechnical systems focus a lot on is the social system. There is a social system, and in that social system, the parts are us as developers and the stakeholders, of course, our users. And then you get into this idea of an open system. I think it was Bertalanffy who coined that, or looked into that.

JESSICA: Bertalanffy, open systems.

TROND: Open system, yeah.

JESSICA: Fair warning to readers, all of us have been reading this book, Critical Systems Thinking and the Management of Complexity by Michael C. Jackson and we may name drop a few systems thinking historical figures.

TROND: Yes, and Bertalanffy is one of those early ones. I think he actually developed some of the ideas before the war, but I think he wrote the book after—I'm not sure, 1950s, or something—on general systems thinking. It's General System Theory, and he was also looking into this open system thing, and I think this is also something that, for example, Russell Ackoff took to heart.

So he defined four types of systems. He said there was a mechanical system, which is what people think of when they hear "system," like a technical thing. A machine, for example: your car is a system. But then he added that there was something more, another type of system, which is the animate system, which is basically us. We consist of parts, but unlike a car, the whole has a purpose of its own. That makes us different.

And then you take a lot of those parts and combine them, and then you've got a social system. The interesting thing with the social system is that the system has a purpose of its own, but the parts also have purposes. That's what makes it different from the other types. In an animate system, your parts don't have purposes of their own. Your heart doesn't really have a purpose; it's not given a purpose. It doesn't have an end goal of its own, so to speak. There's nothing in – [overtalk]

JESSICA: No, it has a purpose within the larger system.

TROND: Yeah.

JESSICA: But it doesn't have self-actualization.

TROND: It's not purposeful. That's probably the word that I – [overtalk]

JESSICA: Your heart isn't sitting there thinking, going beat, beat, beat. It does that, but it's not thinking it.

TROND: No, exactly.

[laughter]

TROND: So I think it was actually Ackoff—there was a book called On Purposeful Systems, which I recommend. It's a really dense book. The Jackson book is long, but it's quite verbose, so it's readable. On Purposeful Systems, by contrast, is designed to be short and concise, so it's basically almost just a list of bullet points. It's a really hard read. But they get into the difference between a purposeful system and a goal-seeking system. Your heart would be goal-seeking. It has something to achieve, but it doesn't have a purpose in that sense.

So that's the thing: there's you as a person and you as a part of a social system, and that's where the interesting thing comes in, and where sociotechnical systems really take this on board. In a social system, you have a set of individuals, and you also have technical aspects of those systems as well, so that's the sociotechnical thing.

JESSICA: Now you mentioned Ackoff said four kinds of systems.

TROND: Yeah.

JESSICA: Mechanical, animate, social?

TROND: And then there’s ecological.

JESSICA: And then ecological, thanks.

TROND: Yeah. So the ecological one is one where every part has a purpose, like us, but the whole doesn't have a purpose of its own. Like humankind is not purposeful—and we probably should be. [laughs] For example, with climate change and all that, but we are not. Not necessarily.

REIN: This actually relates a little bit to the origins of sociotechnical systems because it came about as a way to improve workplace democracy and if you look at the history of management theory, if you look at Taylorism, which was the dominant theory at the time, the whole point of Taylorism is to take purposefulness away from the workers.

So the manager decides on the tasks, the manager decides how the tasks are done—there's one right way to do the tasks—and the worker just does those actions. Basically turning the worker into a machine. So Taylorism was effectively a way to take a social system, a firm, a company, and try to turn it into an animate system where the managers had purpose and the workers just fulfilled a purpose.

TROND: Exactly.

REIN: And sociotechnical system said, “What if we give the power of purposefulness back to the workers?” Let them choose the task, let them choose the way they do their tasks.

TROND: Exactly, and this is an interesting theme because at the same time as Taylor was developing his ideas, there were other people having similar ideas, like sociotechnical ones, but we never heard of them until late—like Mary Parker Follett, for example. She was living at the same time, writing stuff at the same time, but the industry wasn't interested in listening to her because it didn't fit their machine model. She was contrary to that, and this is the same thing that sociotechnical systems designers—or researchers, to put it more correctly—also experienced, for example, in post-war England, in the coal mines.

JESSICA: Oh yeah, tell us about the coal mines.

TROND: Yeah, because that's where the whole sociotechnical systems theory was first defined, or the term first coined. There was a set of researchers from the Tavistock Institute of Human Relations, which actually came about as an offshoot of the Tavistock Clinic, which was working with people struggling after the Second World War.

JESSICA: Was that in Norway?

TROND: No, that was actually in England, in London. Tavistock is in London.

JESSICA: Oh!

TROND: Yeah. So it was an offshoot of that, because there were researchers there who had noticed that there was something specific about groups. There was somebody called Bion, and there was Kurt Lewin, whom I think, Jessica, you probably have heard of.

JESSICA: Is that Kurt Lewin?

TROND: Yes, that's the one. Absolutely.

JESSICA: Yeah. He was a psychologist.

TROND: Yeah. So, for example, the main character of the sociotechnical movement in post-war England was Eric Trist, and he was working closely with Lewin—or Lewin, as you Americans pronounce it.

They were inspired by the human relations movement, if you like, so they saw they had to look into how the people interact. So they observed the miners in England. There were a couple of mines where they had introduced some new technology called the longwall method, where they actually tried to industrialize the mining. They had gone from autonomous groups to something more industrialized, like – [overtalk]

JESSICA: Taylorism?

TROND: Yes, they had gone all Taylorism, correct.

JESSICA: “Your purpose is to be a pair of hands that does this.”

TROND: Exactly, and then they had shifts. So one shift was doing one thing, the next shift was doing the second thing, and so on. So they were separating people. They had been working in groups before; then they were separated in order to industrialize, to squeeze efficiency out of each part.

JESSICA: Or to group like tasks with each other so that you only have one set of people doing a single thing.

TROND: Yeah. So one group was preparing and blasting and breaking out the coal, somebody was pushing it out to the conveyors, and somebody else was moving the equipment, the machinery, to the next place. That's what the three-part shifts were like.

What they noticed then is that they didn't get the efficiency that they expected from this, and also, people were leaving. People really didn't like this way of working; there was a lot of absenteeism, there were a lot of rows and uproar, and it didn't go well, this new technology they had had such high hopes for.

So then Trist and a couple of others, like Bamforth, observed something that happened in one of the mines: some of the people actually self-organized and went back to the previous way of working in autonomous teams, while still using this new technology. They self-organized in order to be able to work in this environment, and this was the first time they saw this type of action, that the workers created their own semi-autonomous teams, as they called them.

JESSICA: So there was some technology that was introduced and when they tried to make it about the technology and get people to use it the way they thought it would be most efficient, it was not effective.

TROND: Not effective?

JESSICA: But yet the people working in teams were able to use the technology.

TROND: Yeah. Actually, this is the interesting part: when you have complex systems, you can have self-organization happening, and these workers were so frustrated. They were like, "Okay. Let's take matters into our own hands, let's create groups where we can actually work together." So they created these autonomous groups, and this was something that Eric Trist and Ken Bamforth observed. They saw that when the workers did that, absenteeism went down, the quality of work life increased a lot, and productivity increased a lot as well.

There were a few mines observed that did this, and compared to other mines that didn't, the numbers were quite convincing. So you would think, "Oh, this would spread," that everybody would start using this approach. No, they didn't. Of course, management, the leadership, didn't want this. They were afraid of losing power, so they worked against it.

So after just a few research attempts, there wasn't any leverage there. Actually, they increased the industrialization—a next level of invention was created that made it even worse—so it ground to a halt.

The sociotechnical idea was defined there, but it didn't have fertile ground to grow. So that's when they came to my native land, to Norway.

JESSICA: Ah.

TROND: Yeah. So Fred Emery was one of those who worked with Trist and Bamforth a lot back then, and Trist himself actually came to Norway for what was almost like a governmental project. There was a Norwegian Industrial Democracy Program, I think it was called. It was actually established by – [overtalk]

JESSICA: There was a Norwegian Industrial Democracy Program. That is so not American.

TROND: [laughs] Exactly. So that could probably only happen in Norway, I suppose, and there were a lot of reasons for that. One of them is that we struggled with industry after the war, because we had been invaded by Germany and were under occupation, so we had nothing to build on.

So we got support from America, for example, to rebuild after the war. But also, Norwegians are a specific type of people, if you like. They don't like to be ruled over. So the heavy industrial stuff didn't go down well with the workers—even worse than in England—though not in mines, because we don't have any mines, but in things like making nails, or in paper mills.

Also, the same thing happened as in England: people were not happy with the way these things were going. But the difference in Norway is that this covered whole industries, not just a few mines here and there. This went all the way up to the top—the workers' unions were collaborating with the employers' unions. So they were actually coming together.

This project was established by these two in collaboration, and the government also came in, so there were three parties to this initiative. And then Tavistock was called in to help them with this project, or program, rather. So then the experiments started off in Norway, and it went further there—in England, the Tavistock researchers mostly observed, while in Norway, they actually started designing these types of systems: sociotechnical systems, with autonomous work groups and all that.

They did live experiments and the like so there was action research as a way of – [overtalk]

JESSICA: Oh, action research.

TROND: Yeah, where you actually do research on the ground. This was also from Kurt Lewin, I believe.

So I know they did a lot of research there and got similar results as in England. But this also went a bit further in Norway. This actually went into law, how to do this—worker participation, for example. And there was also this work design thing that came out of it: workers have demands that go beyond just a livable wage. They want the type of job that means something, where they can grow, where they learn on the job—there was a lot of stuff that they wanted, and that was actually added to the law. So what came out of that research is part of Norwegian law today.

JESSICA: You mentioned that in Norway, they started doing design, and yet there's the implication that it's design of self-organizing teams. Is that a conflict? Like, design from above versus self-organization.

TROND: Yes, it is, and that is also something they discovered in Norway, so well observed, Jessica. This is actually what happened in Norway. The researchers saw that they were struggling to get this accepted properly by the workers, and then they saw, okay, they have to get the workers involved. Then they started with what they call participative design. The workers were pulled in to design the work together with the researchers, but the researchers were still regarded as the experts. So there was a divide between the researchers and the workers, and the workers weren't given a lot of free will to design how they wanted this to work themselves.

In one of the latest experiments, I think the workers were finally getting the full freedom to design—I think it was in the aluminum industry. They were creating a new factory, and the workers were part of designing how they should work in that new factory. The researchers saw that they couldn't just come in and say, "This is how it works in the mines in England, this is how we're going to do it." That didn't work in Norway.

REIN: And one of the things they found was that these systems were more adaptable than Taylorism. There was one of these programs in textile mills in India that had been organized according to scientific management, AKA Taylorism. And one of the problems they found was that if any perturbation happened, any unexpected event, the mills stopped working. They couldn't adapt. When they switched to these self-organizing teams, they became better at adaptation, but they also got more production and higher quality. So it was a win all around. You're not trading off here, it turns out.

JESSICA: You can say we need resilience because of incidents. But in fact, that resilience also gives you a lot of flexibility that you didn't know you needed.

TROND: Exactly. You are capable of taking in stuff that you couldn't foresee, anything that happens, because the people on the ground who know this best and actually have all the information they need are able to adapt. That's a lot better than having a rigid structure, a prescribed process, I think.

REIN: One of the principles of resilience engineering is that accidents are normal work. Accidents happen as a result of normal work, which means that normal work has all of the same characteristics. Normal work requires adaptation. Normal work requires balancing trade-offs between competing goals. That's all normal work. It's just that we see it in incidents, because incidents shine a light on what happened.

TROND: I think there was an American called Pasmore who put this really well. He said, "STS design was intended to produce a win-win-win-win: human beings were more committed, technology operated closer to its potential, and the organization performed better overall while adapting more readily to changes in its environment." That pretty much sums up what STS is all about.

REIN: Yeah. I'm always on the lookout for these solutions that are just strictly better in a particular space, because they're rare. Where you're not making trade-offs, where you get to have it all—that's almost unheard of.

JESSICA: It's almost unheard of, and yet I feel like we could do a lot more of it. Who was it who talks about dissolving the problem?

REIN: Ackoff.

TROND: That’s Ackoff, yeah.

JESSICA: Yeah, that’s Ackoff in Idealized Design.

TROND: Where he said – [overtalk]

REIN: He said, “The best way to solve a problem is to redesign the system that contains it so that the problem no longer exists.”

TROND: Yeah, exactly.

JESSICA: And in software, what are some examples of that? Like, the example where we dissolve coordination problems by saying the same team is responsible for deployment?

REIN: I've seen architecture problems be dissolved by a change in the product. It turns out that a better way to do it for users also makes a better architecture possible, and so you can stop solving that hard problem that was really expensive.

JESSICA: Oh, right. So the example of idempotency of complete-order buttons: if you move the ID generation to the client, that problem just goes away.
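(A minimal sketch of the pattern Jessica describes, in Python, with hypothetical names: the client generates the order ID once, so a double click or network retry of the same submission is deduplicated naturally on the server, and the duplicate-order problem dissolves rather than needing to be solved.)

```python
import uuid

# In-memory stand-in for the server's order store (hypothetical example).
orders: dict[str, dict] = {}

def submit_order(order_id: str, payload: dict) -> dict:
    """Server-side handler: because the client chose the ID, submitting
    the same order twice is a harmless no-op."""
    if order_id in orders:
        return orders[order_id]  # duplicate click or network retry
    order = {"id": order_id, **payload}
    orders[order_id] = order
    return order

# Client side: generate the ID once, then retry safely on timeouts.
order_id = str(uuid.uuid4())
first = submit_order(order_id, {"item": "book", "qty": 1})
retry = submit_order(order_id, {"item": "book", "qty": 1})
assert first is retry  # the "did my order go through twice?" problem is gone
```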

TROND: Yeah, and I have to say another example is if you have two teams that don't work well together. [chuckles] "You have to communicate more." Okay, but that doesn't help, because that's not where the problem is. If you redesign the teams instead—for example, instead of having frontend and backend teams, you redesign so you have verticals—then you haven't solved the problem. You have dissolved it. It is gone, because they are together now in one team.

So I think there are a lot of examples of this, but it is a mindset, because people tend to say, if there is some problem, they want to analyze it as it is and then figure out how to fix the parts and then – [overtalk]

JESSICA: Yeah, this is our obsession with solving problems!

TROND: Yes.

JESSICA: Solving problems is not systems thinking.

TROND: No, it’s not. Exactly.

JESSICA: Solving problems is reactive. It feels productive. It can be heroic. Whereas removing the problem is much more subtle and often wider in scope—it often falls into the social system. When you change the social system, you can dissolve technical problems so that they don't exist. That's a lot more congressive and challenging and slower.

TROND: It is, and that is probably where STS has struggled. It didn't struggle as much here in Norway, but it did compared to the rest of the world. Because you have to fight—there is a system already in place, and that system is honed in on solving problems, as you were saying.

JESSICA: The whole line of management wants to solve the problem by telling the workers what to do, and it's more important that their solution works than that a solution works.

TROND: Yes, exactly, and also because they are put in a system where that's normal. That is common sense to them. So I often come back to that [inaudible] quote, that I get [inaudible], or something like that, because a person in a company is just a small piece: in this large company, I'm just a tiny little piece of it; there's no chance that I can change it anyhow.

JESSICA: Yeah. So as developers, one reason that we focus on technical solutions and technical design is because we have some control over that.

TROND: Yes.

JESSICA: We don't feel control over the social system—which makes sense, because you can never control a social system; you can only influence it.

TROND: So what I try to do in an organization is find change agents around the organization so I get a broader picture—not only a better understanding, but also a broader set of attacks, if you like—I don't love calling it attacks, but you get my gist—so you can create a more profound change, not just a little bit here, a little bit there. Because when you change a society, if we solve problems, we focus on the parts, and if we focus on the parts, we are not going to fix the whole. That is something that Ackoff was very adamant about, and he's probably correct. You can optimize – [overtalk]

JESSICA: Wait. Who, what? I didn’t understand.

TROND: Ackoff.

JESSICA: Ackoff, that was that.

TROND: So if you optimize every part, you don't necessarily make the system better, but he said, “Thank God, you usually do. You don't make it worse.” [laughs]

REIN: Yeah. He uses the example of making a car: you take the best engine and the best transmission, you take all of the best parts, and what do you have? You don't have the best car. You don't even have a car, because the parts don't fit together. It's entirely possible to make every part better and make the system worse, and you also sometimes need to make a part worse to make the system better.

TROND: And that is fascinating. I think it is absolutely fascinating that you have to do that. I have seen it just recently, for example. In one organization, we have one team that is really good at Agile. They have almost nailed it, this team. But the rest of the organization is not at as high a level with Agile, and the organization isn't thrilled about being Agile in a sense, because it's an old project-oriented organization, so it is industrialized in a sense. Then you have one team that wants to do STS; they want to be an Agile super team. But when they don't fit with the rest, they actually make the rest worse.

So actually, in order to make the whole better, you can't have these local optimizations. You have to see the whole and then figure out how to make the whole better based on the parts, not the other way around.

JESSICA: Yeah. Because, well, that one self-organizing Agile team can't do that properly without having an impact on the rest of the organization.

TROND: Exactly.

JESSICA: And when the rest of the organization moves much more slowly, you need a team in there that's slower. And I see this happen. I see Agile teams moving so fast that the business isn't ready to accept that many changes so quickly. So we need a slower – they don't think of it this way, but what they do is they add people. They add people and that slows everything down, so you have a system that's twice as expensive in order to go slower. That's my theory.

TROND: The fascinating thing, though—and this is where the systems idea comes in—is that you have this team that has really honed this, they have nailed the whole thing, they're moving as fast as they can and all that. But the rest of the organization is not, and they have to interact with the rest of the organization, for example.

So they hit bottlenecks everywhere they look. What they end up doing is pulling in more work than they can necessarily push through, because otherwise they would just have to sit waiting. Nobody feels – [overtalk]

JESSICA: And then you have more work in fucking progress.

TROND: Exactly. So then you make it worse – [overtalk]

JESSICA: Then you couldn’t get anything done.

TROND: Exactly! So even a well-working team would actually break in the end because of this.

REIN: And we've organized organizations around part maximization. Every way of organizing a business we know of is anti-systemic, because they're all about part optimization. An org chart is a list of parts. Can you imagine going to a director and saying, "Listen, to make this company better, we need to reduce your scope. We need to reduce your budget. We need to reduce your staff"?

TROND: Yeah. [laughs] That is a hard sell. It is almost impossible.

So where I've seen it work—well, I haven't seen that many. But where I've seen it work, you have to have some systemic change coming all the way from the top, basically. Somebody has to come in and say, "Okay, this is going to be painful, but we have to change. The whole thing has to change," and very few companies want to do that, because it's high risk. Why would you do that? So they chug along doing minor problem-solving here and there, trying to fix things, but they are not getting the systemic change that they probably need.

JESSICA: Yeah, and this is one of the reasons why startups wind up eating the lunch of bigger companies; because startups aren't starting from a place that's wrong for what they're now doing.

TROND: Exactly. They are free to do it. They have all the freedom that we want the STS team to have. The autonomous sociotechnical systems teams, those are like startups. So ideally, you'd consist of a lot of startups.

REIN: And this gets back to this idea of open systems and the idea of organizationally closed, but structurally open.

TROND: Yeah.

REIN: It comes from [inaudible] and this idea is that an organization, which is the idea of the organization—IBM as an organization is the idea of IBM, it's not any particular people. IBM stays IBM, but it has to reproduce its structure and they can reproduce its structure in ways that change, build new structure, different structure, but IBM is still IBM.

But organizations aren't static; they actually have to reproduce themselves to adapt, and one of the things that I think makes startups better here is their ability to change their structure as they reproduce it—they have much more agility. Whereas for a larger organization with much more structure, it's hard to just take the structure and move it all over here.

TROND: Exactly.

JESSICA: It's that all the other pieces of the system fit with the current system.

TROND: Yeah. You have to change every part in order to move.

JESSICA: Right.

REIN: And also, the identity of a startup is somewhat fluid. Startups can pivot. Can you imagine IBM switching to a car company, or something?

TROND: I was thinking exactly the same; you only see pivots in small organizations. Pivots are not normal in large organizations. That would be a no-go. Even if you came and suggested it—"I hear there's a lot of money in being an entrepreneur"—I wouldn't do it, because that would risk everything I have for something that is hypothetical. I wouldn't do that.

REIN: Startups—every part of them can change. Their employees can turn over 100%, they can get a new CEO, they can get new investors.

JESSICA: All at a much faster time scale.

TROND: Also, going back to Ackoff, he's saying that we need to get out of the machine age. He said we have been in the machine age since the Renaissance; we have to get out of that, and this is what systems thinking is. It's a new age, as they call it. Somebody calls it the information age, for example, and it's a similar thing. But we need to start thinking differently about how to solve problems. The machine model has to go, at least for social systems. The machines are still going to be there. We are going to work with machines. We're going to create machines. So machines – [overtalk]

JESSICA: We use machines, but our systems are bigger than that.

TROND: Yes.

JESSICA: Systems are more interesting than any machine, and when we try to build systems as machines, we really limit ourselves.

TROND: So I think that is also one of the – I don't know if it's a specific principle of STS that says that man shouldn't be an extension of the machine. He should be a user of the machine. The machine should be like an extension of him.

JESSICA: Wait. Rather than the man being an extension of the machine, the machine should be an extension of the man?

TROND: Yeah.

JESSICA: Right. [inaudible] have a really good tool, you feel that?

TROND: Mm hm.

REIN: This actually shows up in joint cognitive systems, which shares a lot with sociotechnical systems, as this idea that there are some tools through which you perceive the world, that augment you, and there are other tools that represent the world. Some tools are inside you, as it were—you interact with the world using them, to augment your abilities. And with other tools, you just have a box that represents the world; you interact with the box, and your understanding of the world is constrained by what the box gives you.

These are two completely different forms of toolmaking, and what Stafford Beer, I think, might say is that there are tools that augment your variety—that augment your ability to manage complexity—and there are tools that reduce complexity, tools that attenuate complexity.

JESSICA: Jean Yang was talking about this the other day with respect to developer tools. There are tools like Heroku that reduce complexity for you. You just deploy the thing, just deploy it and internally, Heroku is dealing with a lot of complexity in order to give you that abstraction. And then there are other tools, like Honeycomb, that expose complexity and help you deal with the complexity inherent in your system.

TROND: Yeah. Just to go back so I get this quote right: the individual is treated as complementary to the machine rather than an extension of it.

JESSICA: Wait, what is treated as complementary to the machine?

TROND: The individual.

JESSICA: The individual.

TROND: The person, yeah. Because that is what you see in machine shops, and that is also what happened in England when they made the mining work even more industrialized: people were just an extension of the machine.

JESSICA: We don't work like that.

TROND: Yeah. I feel like that sometimes, I must admit—that I'm part of the machine, that I'm just a cog in the machine. And we are not well-equipped to be cogs in machines, I think. Nor should we be.

REIN: Joint cognitive systems call this the embodiment relation, where the artifact is transparent and it's a part of the operator rather than the application, so you can view the world through it, but it doesn't restrict you. And then the other side is the hermeneutic relation. Hermeneutics—as in biblical hermeneutics—is about interpretation, the interpretation of the Bible. So the hermeneutic relation is where the artifact interprets the world for you, and then you view the artifact.

So, for example, most of the tools we use to respond to incidents—logs—are hermeneutic artifacts. They present their interpretation of the world and we interact with that interpretation. The way I think of the distinction between old-school metrics and observability is that observability is more of an embodiment relation. Observability lets you ask whatever question you want; you're not restricted to what you specifically remembered to log, or to count.
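(A minimal sketch of that distinction in Python, with made-up data: a pre-aggregated counter can only answer the one question it was built for, while wide structured events can be sliced by any field after the fact—closer to the "ask whatever question you want" relation Rein describes.)

```python
from collections import Counter, defaultdict

# Hermeneutic-style metric: one interpretation, decided in advance.
error_count = Counter()
error_count["checkout"] += 1  # later, all you can ask is "how many?"

# Embodiment-style observability: record wide events, decide questions later.
events = [
    {"service": "checkout", "status": 500, "region": "eu", "user_tier": "free"},
    {"service": "checkout", "status": 500, "region": "us", "user_tier": "paid"},
    {"service": "search",   "status": 200, "region": "eu", "user_tier": "paid"},
]

def group_by(rows: list[dict], field: str) -> dict:
    """Ad hoc query: group events by any field nobody planned for."""
    out = defaultdict(list)
    for row in rows:
        out[row[field]].append(row)
    return dict(out)

# A question asked only during an incident, never anticipated in advance:
errors_by_tier = group_by([e for e in events if e["status"] >= 500], "user_tier")
print({tier: len(rows) for tier, rows in errors_by_tier.items()})
# {'free': 1, 'paid': 1}
```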

TROND: Exactly. And now you're getting into the area where I think STS actually – we have talked a lot about STS in the industrial context here, but I think it's no less, maybe even more, relevant now, especially when we're moving into the so-called Fourth Industrial Revolution, where the machines take over more and more. Like, for example, AI, or machine learning, or whatever. Because then the machine takes more and more control over our lives.

So I think we need this more than ever before, because the machines before were simple in comparison, and they were not designed by somebody in the same sense that, for example, AI, or machine learning, is actually developed. I wouldn't say AI, because it's still an algorithm underneath, but it does have some learning in it, and we don't know what the consequences of that are, as I said. So I think it's even more relevant now than it was before.

JESSICA: Yeah.

TROND: [chuckles] I'm not sure if you're familiar with the Fourth Industrial Revolution, or what it is.

JESSICA: I've heard something about it. You want to define it for our listeners?

TROND: Somebody called it cyber-physical systems.

JESSICA: Cyber-physical?

TROND: Yes, somebody called it cyber-physical systems. I'm not sure if we want to go too much into that, to be honest, but.

So the Fourth Industrial Revolution is basically about the continuing automation of manufacturing and industrial practices using smart technology: machine-to-machine communication, internet of things, machine learning, improved communication, self-monitoring, and all that stuff. We see a hint of it, that something is coming, and it is a different type of industry than what we are currently in.

I think the term Industry 4.0 was probably coined in Germany somewhere. So there's a notion that something is coming out of this that is going to put humans even more on the sideline, and I think for us working in IT, we see some of this already. The general public maybe doesn't, at the same level.

REIN: So this reminds me of this other idea from cognitive systems engineering: that there are four historical stages in the development of work. There's mechanization, which replaces human muscle power with mechanical power—we think of that as starting with the original industrial revolution, but it's actually much older than that, with agriculture, for example. Then there's automation, then centralization, and then computerization. Centralization has happened on a shorter time span, and computerization has happened on a very short time span relative to mechanization.

So one of the challenges is that we got really good at mechanization, because we've been doing it since 500 BC. We're relatively less good at centering cognition in the work. The whole point of mechanization and automation was to take cognition out of the work, and now that we're realizing we have to put it back in, it's becoming much more conspicuous that people have to think to do their work.

TROND: Yeah.

JESSICA: Because we're putting more and more of the work into the machine, and yet in many software systems, especially customer-facing systems, we need that software to not just be part of the machine, to not do the same thing constantly on a timescale of weeks and months. We need it to evolve, to participate in our cognition as we participate in the larger economy.

TROND: Yeah.

REIN: And one of the ironies of this automation—this comes from Bainbridge's 1983 paper—is that when you automate a task, you don't get rid of the task. You make a new task, which is managing the automation, and this task is quite different from the task you were doing before, and you have no experience with it. You may not even have training for it. So automation doesn't get rid of work; automation mutates work into a new, unexpected form.

JESSICA: Right. One of the ironies of automation is that now you have created that task of managing the automation, and you think, "Oh, we have more automation. We can pay the workers less." Wrong. You could pay the workers more. Now, collectively, the automation plus the engineers who are managing it are able to do a lot more, but you didn't save money. You added a capability, but you did not save money.

REIN: Yeah, and part of that is that what you can automate are the things we know how to automate, which are the mechanical tasks, and what's left when you automate all of the mechanical tasks are the ones that require thinking.

TROND: And that's what we're moving into now; probably that's what the Fourth Industrial Revolution is. We try to automate stuff that probably shouldn't be automated. Maybe, I don't know.

JESSICA: Or it shouldn’t be automated in a way that we can’t change.

TROND: No, exactly.

REIN: This is why I'm not buying stock in AIOps companies, because I don't think we've figured out how to automate decision-making yet.

JESSICA: I don't think we want to automate decision-making. We want to augment.

TROND: Yeah, probably. So we're back to that same idea that STS stated: we should be complementary to the machine, not an extension of it.

JESSICA: Yes. That's probably a good place to wrap up?

TROND: Yeah.

REIN: Yeah. There's actually a paper, by the way: Ten Challenges for Making Automation a Team Player.

JESSICA: [laughs] Or you can watch my talk on collaborative automation.

TROND: Yeah.

JESSICA: Do you want to do reflections?

REIN: Sure.

JESSICA: I have a short reflection. One quote that I wrote down that you said, Trond, in the middle of something, was "You are capable of taking in stuff that you didn't know you see," and that speaks to: if you don't know you see it, you can't automate the seeing of it. Humans are really good at the everything else of what is going on. This is our human superpower compared to any software that we can design, and that's why I am big on this embodiment relation. I don't love the word, but I do love tools that make it easier for me to make and implement decisions, that give me superpowers, and then allow me to combine that with my ability to take input from the social system and incorporate it.

TROND: I can give a little bit of an anecdote. My background is not IT. I come from physics—astrophysics, to be specific—and what we were drilled in, in physics, is that you should take the person out of the system. You should close the system as much as possible. Somebody said you have to take the human out of it if you want to observe. In physics, you have no environment, you have no people, there's nothing in it, so it's completely closed. But the work we do here is the complete opposite. I work in a completely open system where the human part is essential.

JESSICA: We are not subject to the second law of thermodynamics.

TROND: No, we are not. That is strictly for a closed system, and we are not one. So the idea of an open system is something that I think we all need to take on board, and we are the best ones to deal with those open systems. We do it all the time, every day—just walking is dealing with a complex open system. I mean, everything is.

JESSICA: Eating.

TROND: Eating, yeah.

REIN: And actually, one of the forms, or ways, that openness was thought of is informational openness. It's literally about information.

JESSICA: That’s [inaudible] take in information.

TROND: Yeah. Entropy.

JESSICA: Yeah.

TROND: Yeah, exactly. And we are capable of handling that variety; we are the masters of that, as humans. So let's take advantage of it. That's our superpower as humans.

REIN: Okay, I can go.

So we've been talking a little bit about how the cognitive demands of work are changing, and one of the things that's happening is that work is becoming higher tempo. Decisions have to be made more quickly and with higher criticality. Computers are really good at making a million mistakes a second. So if you look at something like the Knight Capital incident: a small bug can lose your company half a billion dollars in an instant.

So I think what we're seeing—if you combine that with the idea of requisite variety—is that the complexity of work is exploding, and what we call human error is actually a human's inability to cope with complexity. I think if we want to get human error under control, what we have to get better at is managing complexity, not controlling it – [overtalk]

JESSICA: And by "we," we don't mean you, the human, get better at this! The system needs to support the humans in managing additional complexity.

REIN: Yeah. We need to realize that the nature of work has changed, that it presents these new challenges, and that we need to build systems that support people because work has never been this difficult.

JESSICA: Both social and technical systems.

TROND: No, exactly. Just to bring it back to where we started, with the coal miners in England: working there was hard, it was life-threatening; people died in the mines. So you can imagine it must have been terrible, but it was a quite closed system, to be honest, compared to what we have. That environment is fairly closed; it is predictable in a way that ours is not. We are working in an environment that is completely open. It's turbulent, even. So we need to focus on the human aspect of things. We can't just treat things as machines that do work.

JESSICA: Thank you for coming to this episode of Greater Than Code.

TROND: Yeah, happy to be here. Really fun. It was a fun discussion.

REIN: So that about does it for this episode of Greater Than Code. Thank you so much for listening wherever you are. If you want to spend more time with this awesome community, if you donate even $1 to our Patreon, you can come to us on Slack and you can hang out with all of us and it is a lot of fun.

Support Greater Than Code