00:16 – Welcome to “Diamonds Are For Gender” …we mean, “Greater Than Code!”
00:56 – Origin Story, Superpowers, and Data Science
04:20 – Diversity and Career Paths in Data Science
10:51 – Ethical Debates Within the Data Science Field
17:21 – Software Development and Engineering; Failure Modes in Software
21:44 – Failure Modes in Democracy; Voting Machine Software
33:37 – Working for a Government Contractor
36:21 – Data Patterns and Tampering
39:00 – Open Data and Open Science
45:59 – Falsifying Data
Want to help make us a weekly show, buy and ship you swag, and bring us to conferences near you? Join our Slack community and support us via Patreon!
Or tell your organization to send sponsorship inquiries to firstname.lastname@example.org.
To jumpstart these efforts, we are very excited to announce that we have been selected for this month’s Fund Club project!
Coraline: Considering all the ways something can fail.
Sam: The world that I live in and the kind of software development practices that I take for granted are extraordinarily niche.
Emily: Tech conferences and their decadence vs academic/corporate conferences.
Are you Greater Than Code?
Submit guest blog posts to email@example.com
Please leave us a review on iTunes!
SAM: Hello and welcome to Episode 37 of ‘Diamonds Are For Gender.’ I’m Sam Livingston-Gray and here is my co-host, Coraline Ada Ehmke.
CORALINE: Your co-host on Greater Than Code, Sam. Just a gentle reminder. We have an incredible guest today: Emily Gorcenski is joining us. By title, Emily is a senior data scientist at Simple but by practice, they are a transgender activist, hockey player and technologist, passionately working at the intersection of computing and society. Their passions include technology ethics, regulation of computing and, of course, posting selfies on Twitter. Don’t we all love that? Hi, Emily.
EMILY: How’s it going? Thank you for having me.
CORALINE: It’s going great. What should we know about you for people who don’t already know you from Twitter fame? What makes you unique and special and what are your superpowers?
EMILY: I don’t really know what makes me unique. I think my superpower is I’m very good at complaining very publicly and that’s really what my Twitter is all about. But I guess the thing that makes me unique is I’ve just had a very strange kind of career entry into tech. Being in tech, I didn’t study computer science. I didn’t plan to be a data scientist. I just stumbled into this and along the way have managed to gather a really unique array of experiences that have pushed me towards where I am now and what I’m working on.
Most of my background is actually in abstract mathematics, which is not the most common thing that we use code for. But I’ve turned that into weird kinds of work experience like epidemiology and clinical psychology. I’ve run clinical trials and stuff like that, so it’s a different path than what most people take to come into data science.
CORALINE: Was data science something that always interested you?
EMILY: Honestly, no. I wanted to build airplanes. I wanted to be a fighter pilot when I was a kid and that was never going to happen because of my vision, and because I didn’t work hard enough in school to get into the academy or anything like that. I decided that I wanted to build fighter planes, so that’s what I went to college to study. When I was there, I fell in love with math instead, and with using computers to solve math problems that humans can’t. That’s really what I focused on.
My goal was I was going to be a math professor. I was going to write fancy computer algorithms to solve complicated problems and it turns out that didn’t really happen either. I never went to grad school. I ended up getting really sick as an undergrad and taking a bunch of time off. When I came back, I said, “I cannot do this grad student salary for seven years. There is no way.”
I entered industry and I did eventually get to work on fighter planes and it turns out that working on fighter planes is exceptionally boring. Also, kind of stinks and I realized I was building weapons and just decided that I didn’t want to be doing that anymore.
CORALINE: What was your first data science job?
EMILY: That’s a complicated one. I guess the first time I had the position title of data scientist was at Simple and I’ve only been there for about nine months at this point. But I’ve been working in data for 15 plus years. We just call it different stuff: signal processing, quantitative analytics, data engineering, database-whatever. The term data scientist itself has an interesting very recent origin because when companies started realizing that they had the ability to both generate and store data at scale, they realized that they didn’t have people that knew how to do anything with it and most of those people were in academia so they threw a bunch of money at them and said, “We can still call you a scientist. Just come work for us.”
SAM: I imagine that the employment situation in academia is such that that offer works pretty well?
EMILY: Yes, it’s a really big difference when you’re doing your fifth year of a postdoc or whatever and making 35K and somebody dangles six figures in front of you and says, “You can work 40 hours a week, not 140.”
CORALINE: I can see why that might be a motivating factor, definitely.
CORALINE: One of the interesting things to me about data science: I’ve been interviewing a lot lately and a couple of the companies that I have talked to have data science departments. I’m seeing, at least on the gender axis, a lot more diversity in data science than in general tech. Has that been your experience too?
EMILY: That has definitely been my experience in tech-oriented companies that have data science teams. Even my team is very diverse that I work with now. That’s not been my case for people who had the traditional role coming up through traditional engineering companies like the aerospace firms, that kind of thing. But I do think that a lot of what’s new, what’s being developed in data science now has to do with computer vision type problems, things like self-driving cars and algorithms that can process images to classify, to judge what is the style of this object or whatever.
A lot of that work is motivated by work in cognitive psychology and visual neuropsychology, or studying the visual cortex. A lot of people are coming over from that space because they’re working with convolutional neural networks there. They’re working with all this technology. They’re familiar with the Python libraries. A lot of them tend to be women. A lot of people that are working in that field in academia are women, so I think there’s been some migration from that area. I think that has helped encourage more women who are very talented to see, like, “There is a career in technology that works for me and I can do this.”
SAM: I’m curious about the role that background and training plays in all of this. Are there multiple career paths into data science? Is it pretty much straight through math or do you get to recruit from STEM all over the place or what?
EMILY: I think it’s STEM all over the place. I know people without college degrees working in data science. I know people with PhDs working in data science. I know people have PhDs in psychology. I know people with degrees in economics and accounting and all sorts of fields. Math is one aspect of it but data science is a really broad field with a lot of different requirements. I don’t think that there is a single best career path to go through.
You’re not going to get great data science if you just hire seven computational mathematicians. They’re all just going to do the same thing. There’s a lot of exploration. There’s a lot of diversity needed to get to the answers and the insights that a team is supposed to deliver.
SAM: That’s really interesting. I guess, whenever I see data science, I think of Python and statistics. I have enjoyed Python in the past and took algebra-based stats classes for fun, but the calc-based one was really, really hard for me. I think that’s something that I could do in another lifetime, but that’s really interesting to hear.
CORALINE: I’ve only dabbled in data science, working on machine learning algorithms, but I found it fascinating. For me, though, the math was a real barrier because math is definitely not my strong point.
CORALINE: That’s pretty much the approach I was taking. The algorithms I was using were generally implemented in C, so I got Ruby talking to the C libraries. I didn’t have to understand the mechanics of what the algorithms were doing. I just had to interact with them. I had to select them, of course, and understand conceptually what they were doing, but it was mainly a matter of interfacing via APIs, giving them the data that I wanted, and treating them as a black box.
SAM: You still have to know what questions to ask and what questions it’s possible to ask, right?
EMILY: Yeah, absolutely. There’s a lot of domain knowledge that you need in order to drive towards the right insights. Certainly, being able to know what the weaknesses of a certain algorithm versus another are is a help. But I think that’s why data science works as a really compelling team-oriented approach for a company. If you have a good team that has somebody that really [inaudible] all the math, but somebody else that is really domain-savvy with the insights, that’s a perfect pairing, because the domain-savvy person can act like a pre-filter for the signal from the noise.
Then, the math person can go and say, “We don’t want to use PCA for this. We want to use K-means clustering because of blah-blah-blah-blah-blah.” There’s a lot of good opportunity in having that diversity, having that range of experience on a team.
CORALINE: How is data science used at Simple?
EMILY: I want to be careful about how much we say here because Simple is in the finance industry. There’s not much that we can reveal. What I can say is that as a data scientist at Simple, I work on product development. I have a product team and we’re always looking to understand how people use money, save money, spend money. Our mission as a company is to help people feel comfortable with their money. One of the things that we do is we look at how do we help them with that, how do we drive better customer experiences using data and how do we understand our customers based on the data that we have.
CORALINE: I imagine a lot of other companies use it too, maybe in the financial space especially for fraud prevention, and in other spaces for recommendation engines, things along those lines.
EMILY: Yeah, there is definitely a lot of use of that. There are companies out there that specialize in building things like fraud metrics. They have access to massive data stores where they can look and see what the difference between a fraudster and a non-fraudster is. There is definitely some use for that within the finance space. But also, even in things like the retail space, people do retail fraud. That’s still a thing that exists. Yeah, there are companies that work on that.
SAM: That’s interesting. Mentioning the use of data science in retail, for example, makes me think of my own personal threshold for when it’s okay to work at a company. Obviously, you decided that working on the construction of fighter jets was not for you. I haven’t formalized it, but I feel like my own personal line is somewhere right around selling people stuff they don’t need. I wonder, how much in the field of data science crosses that particular line? And I guess more generally, are there ethical debates within the field? Are there areas of discussion around that?
EMILY: There are, yes. Data science is a field that is one of my passions that has amazing ability to do unintentional harm. There’s dozens of case studies out there. I’m sure you all know Carina Zona, she gives a fantastic talk. Cathy O’Neil has an amazing book called, ‘Weapons of Math Destruction,’ and that goes into some case studies of that. I’ve done some work in this space on my own but data science does raise some interesting ethical questions because you’re essentially inferring things about a person that may or may not be true, that have material impact on what they experience and how they experience your product and how they get to move through the world. It is a very tricky space to work in.
As far as debates in the field, there are definitely people talking about it but I want there to be more. I don’t think that there’s enough talk about, “Are we doing things ethically? Are we able to use these algorithms safely? Do we have ways to protect users from harm?” I do a lot of work in tech ethics and the way that I define that is that ethics is about the analysis of risk and the mitigation of harm. That is something that comes from my experience working in medical devices and in doing clinical research.
It’s not about never doing the wrong thing. It’s not about getting 100% accuracy. It’s about making sure that we have, as a team or as a company or as technologists, established transparent practices for assessing what the harm profiles are, taking steps to mitigate them, and having a framework for remediation when those things do go wrong. I think that data science could still be a lot better at that.
SAM: With medical devices, I guess there’s the obvious case of the machine that delivers orders of magnitude more radiation than it should because of a programming error, but how does that come up when you’re doing data analysis?
EMILY: That was Therac-25, for anyone that wants to look that up on Wikipedia. That was a device back in the early 1980s. It ended up killing four people. When we look at data, there’s all sorts of ways that we can learn from that case study. One of them being that a consequence of Therac-25 in medical device regulation is that, in order to put software on a medical device, depending on its risk profile, you may not have to go through a specific certification process, but you do have to do a failure analysis.
If you don’t do a failure analysis, the FDA will not allow you to sell a medical device. Data science could really benefit from that. There is a process called FMEA; it stands for Failure Mode and Effects Analysis. It’s a really fascinating thing that I really do wish we could do more of, not just in data science but in software engineering in general, where you basically gather your team, go through a brainstorming session and think of all the ways that something could fail.
It could be something like somebody pulls the plug out, somebody drops their phone, or whatever. Then you rank each one of those failures on three axes from one to 10. The first being the severity of harm: what goes wrong if this failure happens, how severe is the harm, given that it occurs.
The second is the probability of harm if the failure happens. Say your failure is that your battery explodes; what’s the probability that harm happens if the battery explodes? That might be like 100%, right? But if the power goes out, that harm might be like 1%; it has to be a confluence of factors in order for harm to actually occur. So you have the probability of harm, the severity of harm and then, the important one, the detectability of the failure. How do you detect the failure if it occurs?
In medical devices, a great example of this is you have an IV monitor and your failure mode is the power cord: somebody trips on the power cord. The severity of that could be very severe; if the pump stops pumping medicine, the person might die. As for detectability, how do you detect if the power cord is out? If you are just designing the device, you might be like, “Oh, no. That’s bad.” It would be almost impossible, so that’s going to be like a nine or a 10.
Then you go back through your design process and you say, “This is a really bad score. Let’s build in a fail-safe.” The fail-safe that we have in hospitals is that if the power goes out for an IV pump, it makes an audible sound. It starts beeping really loudly, really annoyingly. Combined with a battery backup, that takes that detectability score from like a nine down to like a three.
That’s a process that we could learn from in data science and software engineering in general: what are the ways that this algorithm could fail and harm people? Well, maybe it gets trained on bad data, so now you have an algorithm running in the wild and it adapts to bad data. Now all of your customers [inaudible], all of your credit scores are bad, so they get charged a higher interest rate or something like that. That’s a really evident way of harming somebody and it might be hard to detect, because it’s really hard to validate the incoming data, but it is something that could happen. If we go through those exercises, we can start developing safer algorithms.
CORALINE: I think that model could be adapted even when the consequences of something going wrong only involve system downtime or the availability of the service. The scope isn’t just limited to life-or-death situations or situations that severely impact an individual person’s life. It sounds like it has more to do with general resiliency.
EMILY: Absolutely. In physical engineering spaces, we go through processes like this all the time. If you build a bridge, you go through a process like this. If you’re building a factory or a power plant or anything like that, you definitely go through these: what are the failures? It’s an iterative process. You take all three of those scores, each ranked from one to ten, you multiply them all together and that gives you an idea of where the highest-risk things are. You start with the highest-risk things and then you kind of work your way down until you run out of money.
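The scoring Emily describes can be sketched in a few lines of Python. This is a minimal, illustrative sketch: the failure modes and scores are made up (loosely following the IV pump example above), and the product of the three scores is the classic FMEA "risk priority number."

```python
# Minimal FMEA-style risk ranking sketch. The failure modes and scores
# below are hypothetical, echoing the IV pump discussion; each gets
# three 1-10 scores: severity of harm, probability of harm, and
# detectability (10 = nearly impossible to detect).

failure_modes = [
    # (description, severity, probability, detectability)
    ("power cord pulled, no alarm",      9, 3, 9),
    ("power cord pulled, audible alarm", 9, 3, 3),
    ("battery explodes",                 10, 1, 2),
]

def risk_priority(severity, probability, detectability):
    """FMEA risk priority number: the product of the three scores."""
    return severity * probability * detectability

# Work from the highest-risk item down "until you run out of money."
ranked = sorted(failure_modes,
                key=lambda fm: risk_priority(*fm[1:]),
                reverse=True)

for description, s, p, d in ranked:
    print(f"RPN {risk_priority(s, p, d):4d}  {description}")
```

Note how the audible-alarm fail-safe drops the cord-pull entry's detectability from 9 to 3, cutting its risk priority number to a third of the unmitigated version, which is exactly the effect of the redesign step Emily describes.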
CORALINE: It’s interesting what software development borrowed from engineering and what it didn’t. We definitely want to consider ourselves engineers; that has been a long-time habit of software developers, talking about software engineering. But it seems we like to skip the hard parts, the parts that actually require discipline and hard work, like planning for failure conditions. Software development is very opposed these days to detailed planning of any kind, to documentation. All of these things are hard and we simply avoid them, but they’re really core to true engineering as a discipline.
EMILY: It very much is, and software is different by nature than physical engineering. I know that people have rehashed this debate over and over and over, but I still fail to see any valid arguments that things like documentation are not part of a software developer’s job, or that things like risk analysis don’t need to be part of that process. Our industry has evolved around this idea of rapid deployment, rapid delivery, go-go-go-go-go. If you’re not iterating fast enough, if you’re not disrupting, you’re not being productive enough. My answer to that is: if our entire industry is built on skipping these necessary steps, then we don’t really have an industry. We just have an accident waiting to happen.
SAM: A distributed series of accidents, really.
EMILY: Yeah, pretty much. There are great people out there doing site reliability stuff, doing operations stuff that have built in lots of safeguards when things go wrong and that’s great and every operations engineer that I’ve ever talked to would really love to sit a dev down and say, “Stop coding for a week.”
CORALINE: I just heard a story yesterday. A friend of mine does DevOps work, and there was all sorts of pressure on him to get the code that was developed during the current sprint deployed so that sprint velocity was maintained, sort of like playing the numbers. The Chef framework that he is using in his job is not very resilient, and whenever he goes to deploy something to staging, it’s turning up problems that could come back to bite the company in the ass when it comes time to move to production. He’s feeling this pressure to get things deployed but he has no confidence in the infrastructure at all, and the development team doesn’t want to take the time to beef up that infrastructure because they’re so busy delivering features. That just sounds like a recipe for disaster.
EMILY: Yeah, certainly it can be. Cycling that back to data science, we have things like that all the time. The problem is it doesn’t really work the same way. You can patch a feature in software; you can write a hack and write your comment like, “This is a hack. Get this out by Friday,” and then you put it into your tech debt column. In data science, that’s often not the case. You train these algorithms and you can’t patch the training of them, because under the hood, we don’t know how they work.
They say, “Here’s data. Here is output. Okay,” and then something happens, like the self-driving car gets into an accident and they want to fix just that. Well, it’s really hard to go and just fix the part of the algorithm that made that decision. It’s almost impossible to do that. That rapid iteration, people are really trying to do that in data science, but if you pay attention to just about any services that you use, you’ll see those failures pop up in stuff that is built on machine learning. Just look at the ads that you get. Take an ad blocker off and look at some of the absurd ads that you get every once in a while.
CORALINE: My favorite was when Amazon had a recommendation for me, it said, “Because you bought the Zombie Survival Guide –,” and it recommended a speculum to me.
CORALINE: I have a screen shot. I have proof because it was so absurd and I’m like, “What led to this particular recommendation? What data fed into this algorithm and decided that those things were related?” I have no idea.
SAM: It is certainly more entertaining than the classic, “You just bought a fire extinguisher. Let us recommend ten more fire extinguishers to you.”
CORALINE: Yeah, there is that.
SAM: That’s great.
CORALINE: Emily, we’ve talked about failure modes in software and I would argue that what we’re experiencing today is a failure mode in democracy. There are lots of components of that, from secret cabals working on health care reform, to not allowing reporters access, to firing people who are investigating you. But the issues actually go back further than that, to the voting process itself. We’ve seen not enough discussion of disenfranchisement of voters, but even those who do manage to vote are not safe from failure modes. I know you’ve done a lot of analysis of this. Would you like to talk about that?
EMILY: Yeah, absolutely. This is a really interesting part of my life and part of my work. Electronic voting machines are pretty ubiquitous. I just voted with one yesterday here in Virginia. My medical device experience has taught me a lot about how software is regulated. Working in that space and working for the government for so many years, I have an ability to quickly navigate through government documents and find where things are.
I spent my Thanksgiving break on this last year. This was right around the time that there were lots of allegations of possible impropriety and Jill Stein was doing the recounts and all that. I decided to look into voting machines, to see if they’re regulated by the federal government, to what extent they are, and what the software process is. What I ended up finding (and some other people have looked into this prior to me doing so) was that they are actually regulated. There is a set of voluntary guidelines out there. It’s up to the states to decide to what extent they implement them.
Several states, I believe 12, don’t implement or don’t mandate any certification process. Then, there are some that require full compliance with the… Is it VVSG? I forget the exact acronym, the voting standards for voting machines. I dug into this, went through, and started reading the test reports. Manufacturers don’t have to do certification if they want to sell in, say, Michigan, which doesn’t require anything. You can just sell a voting machine to Michigan and they’ll be like, “Great. This is awesome.”
But if you want to do it in a state that does have requirements, you have to go through this process. You have to have independent labs accredit the entire machine, not just the software but the hardware. They want to check things like, “Does it work in improper humidity, like high humidity? How much force does it take to break open the box to get at its paper ballots?” All sorts of stuff.
Part of that process is reviewing the software, so I started looking into the software review process and found that it is nearly non-existent. It is effectively the same as running the software through a code linter. The things that they’re looking for are things like line length violations. They’re looking for functions that have too many arguments. Is nesting too deep? There is no auditing of the vote handling pathways. There’s no mandatory security audit, and in many cases, the inspectors do not actually review the source code itself. They only review the comments.
EMILY: There is a small handful of companies that build these machines. There is a small handful of labs that test them. The companies that build them are generally small businesses, and they have to report what their tech stack is. These devices are built with a mix of C#, C++, Java, COBOL. Visual Basic is in the stack. It’s a mess. The fact that something so critical to our nation’s infrastructure can be so easily compromisable is a real concern.
CORALINE: Is there evidence that compromises [inaudible]?
EMILY: That’s interesting. There’s a certain evolving situation that’s actually in the news this morning. There are all sorts of allegations about Russia’s involvement in election tampering. We know that they spun up lots of media efforts. Even back in January when the first CIA/NSA report was published, it did say that Russia had attempted to explore our voting systems and our voter registration rolls.
Recently, it has come out that they’ve also targeted the machines themselves. Now, I saw a report, unconfirmed, that there may have been some efforts to modify vote tallies in these machines. None of that is certain. I still maintain the stance that vote tallies have not been modified, because I don’t see any evidence of it in the outcome. But it would be really, really easy to shift votes in such a subtle way that it would be impossible to detect. Wisconsin was won by 20,000 votes out of millions cast. All you need to do at that point is shift a small percentage of votes in every machine. That would be almost impossible to detect.
Let’s say you shifted five votes per machine for 10,000 machines, or 5,000 machines, or whatever it might be. That could have a huge impact, but you would never ever detect it through statistical methods. It’s really difficult to tell. Lots of people have looked into voting machine security and have said this is really bad. We know that Russian hackers are very good, so it’s entirely possible that they looked into it and managed to hack the machines.
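A quick back-of-envelope check of the numbers in this scenario; all the figures below are hypothetical, standing in for the rough quantities mentioned above, not real election data.

```python
# Back-of-envelope check of the vote-shifting scenario.
# All figures are illustrative placeholders, not real election data.

machines = 5_000          # hypothetical machines in one state
shift_per_machine = 5     # votes quietly moved per machine
total_votes = 3_000_000   # roughly "millions of people"

shifted = machines * shift_per_machine
# Each shifted vote is -1 for one candidate and +1 for the other,
# so the margin between them moves by twice the shifted count.
margin_change = 2 * shifted

print(f"votes shifted: {shifted:,}")
print(f"margin change: {margin_change:,}")
print(f"ballots touched: {shifted / total_votes:.2%}")
```

Even the smaller of the two figures Emily mentions moves the margin by far more than a 20,000-vote statewide gap, while touching well under one percent of ballots, which is why a per-machine shift that small would be statistically invisible.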
All of that work is great; I think that the people doing that are super intelligent. But for me, if I were Russia and I wanted to get bad code into the machines, I would profile every single person that works at ES&S and Diebold and figure out who’s in financial trouble, who’s got gambling debts, who’s doing something unsavory in their personal lives, and offer them, “Here’s $5,000. Put this code into the machine.” The thing is, that’s super easy to do, because if you look at the test reports for all these machines (and I’ve read all 31 test reports that were available up to November of last year), the fact that things like line length violations are getting flagged by software auditors tells me that there are not adequate development processes in place.
I can write a five-line Python script that will tell me if my lines are too long. It’s a trivial amount of work to put that into a CI system. If you’re not doing that, that to me means that you don’t use CI properly. It means that you don’t have proper peer review practices. It means that you’re probably not using version control properly. It would be trivially easy for somebody to take a bribe from the Russian government to put a DLL or something into that software that would modify those totals.
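The "five-line Python script" Emily describes might look something like this sketch (the 80-character limit is an assumption; whatever limit the auditors enforce would go there):

```python
# Flag lines over a maximum length, the kind of check that should live
# in CI rather than in a certification lab's audit report.
import sys

MAX_LEN = 80  # assumed limit; the real standard may differ

def long_lines(path, max_len=MAX_LEN):
    """Yield (line_number, length) for every over-long line in the file."""
    with open(path) as f:
        for number, line in enumerate(f, start=1):
            length = len(line.rstrip("\n"))
            if length > max_len:
                yield number, length

if __name__ == "__main__" and len(sys.argv) > 1:
    violations = list(long_lines(sys.argv[1]))
    for number, length in violations:
        print(f"line {number}: {length} chars (max {MAX_LEN})")
    sys.exit(1 if violations else 0)  # non-zero exit fails the CI step
```

Because it exits non-zero on any violation, dropping this into a CI pipeline makes the build fail before the code ever reaches an external auditor, which is Emily's point: if auditors are still catching this, CI isn't being used.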
SAM: And then, boom! You compromised everything from that manufacturer.
EMILY: And not only that, but it would pass the build checksum, so you would never be able to do the forensics after the fact. The only way you catch it is through source code analysis, and these are all closed-source machines.
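A small sketch of why a build checksum doesn't help here; the byte strings are hypothetical stand-ins for compiled artifacts, and SHA-256 stands in for whatever checksum the certification process uses.

```python
# A checksum only proves the bytes haven't changed since hashing.
# If tampering happens before the published checksum is computed,
# verification passes. Byte strings below are hypothetical artifacts.
import hashlib

def build_checksum(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

clean_build = b"\x7fELF...vote tallying logic..."
tampered_build = b"\x7fELF...vote tallying logic...EVIL_DLL"

# The insider publishes the checksum of the already-tampered build.
published = build_checksum(tampered_build)

assert build_checksum(tampered_build) == published  # verifies "fine"
assert build_checksum(clean_build) != published     # only a trusted clean
                                                    # reference reveals it
```

Post-election forensics that compare the deployed binary against its published checksum would see a perfect match; only comparing against independently built, audited source catches the swap, which is impossible when the source is closed.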
CORALINE: That brings to mind, I did see some talk of open-sourcing voting machine software. Open source is of course not a panacea, but making your algorithms and your code public is pretty intrinsic to security work in general, because a secret algorithm is considered less safe, less reliable than a public algorithm, right?
EMILY: Yeah, definitely. At the very minimum, the vote handling pathways need to be fully vetted. They need to be open. They need to be independently audited. The fact that they’re not is astonishing. It would be like getting on an airplane where nobody looked at the flight control code. You would never ever do that. Flight control code isn’t open source either, but we have processes, we have regulations for how that gets built. We have faith that it’s probably not going to fail, and if it does fail, it’s going to fail in some way that is detectable and safe.
Voting machines are kind of a Wild West. There’s no way of validating where a failure is coming from. Is it coming from the scanner that scans a paper ballot? Is it the database? Think of how many times you’ve had a Postgres error. Lots of these machines are taking what’s scanned and just shoving it into Postgres.
SAM: Yeah, and that’s assuming there even is a paper ballot to work with. I mean, a voter-verifiable paper trail seems to me to be like the bare minimum that you could possibly do, and it’s a really important fail-safe, but it’s also a really expensive fail-safe if you have to go back and employ an army of people to manually do a recount on paper, right?
EMILY: Yeah. You know, random paper trail audits of the ballots should be mandatory. Random audits would go a long way and you just build that into the expense of running an election. For me, that’s a trivially small cost for securing voting, securing democracy.
SAM: Yeah, that’s right. I totally hadn’t considered that sampling approach. I was thinking more of like in the event of an actual recount. But yeah, you’re right. That’s much simpler.
CORALINE: You said a lot of the manufacturers of voting machines are small businesses. Diebold, of course, is an exception to that. But who are the people working there, and why is it that companies in the retail space can afford to pay six figures for a software developer to make sure that the Amazon recommendation engine works as expected, and yet we’re not hiring those people to write something that’s fundamental to the health of our democracy?
EMILY: Yeah. That’s interesting. I often talk about this when I’m in a ranty mood: how there are two different software industries, or two different tech industries. There’s the one that we’re probably most familiar with, which is web-focused and very public and Silicon Valley and ping-pong and beer and all of that. Then there’s another tech industry, which is the offshoot from engineering, the offshoot from your typical GEs, your Lockheeds and companies like that. I think that those companies still have a strong recruiting pool. Those industries don’t overlap a lot. You can talk to a 20-year Java developer and have no idea what they’re talking about.
Likewise, say you told them about all your deployment scripts, like, “I’m using Chef and Puppet,” and they’d be like, “I don’t know what you’re talking about,” and you could even be writing in the same language. You could be working with a lot of the same technology. At companies like voting machine companies, I’m sure that their software developers are very good, but culturally, it’s very different. It doesn’t have the same Silicon Valley mindset. They’re still rooted, for the most part, in very much the old top-down waterfall management processes.
If you look at some of the Glassdoor reviews for some of these companies, employees report things like documented bugs being left in production, even if they’re possibly fatal bugs, because they don’t want to go through the expense of re-auditing the software. You definitely have some issues with the regulatory structure that need to be worked on, and I strongly think that we need to re-evaluate how we handle that. But it’s a different world and I just think that a lot of the practices that we’ve developed have not translated over there.
CORALINE: I would actually argue. I started software development in the 90s and I grew up with waterfall, basically. I would argue that voting machine software is probably well-suited to waterfall development, if you can handle the expense of waterfall development, because waterfall to me brings up testing against very specific requirements, lots of planning, lots of documentation. That’s what I want from voting software. I don’t want voting software that’s written in one-week sprints.
EMILY: Yeah, absolutely. There’s certainly something to that, but the penalty of waterfall is that you end up getting to that late cycle where you have to leave bugs in, where you have to leave flaws in, so there needs to be something in between Agile and waterfall. People have developed in-between approaches to varying extents, but it definitely is interesting: if you took a developer from any of the big tech companies out there now and you put them in that environment to work on voting machines, they would last a week and be like, “I can’t deal with this. I can’t deal with a world that doesn’t have a proper CI practice. I can’t live in a world that doesn’t have that structure and planning in this way.”
SAM: For me it would be pair programming, I’m sure. I’d be surprised to find out that was a common practice. I’m curious about the career path for people who work in these companies, too. I imagine that the incentives are much more aligned towards people who stay in the same job for five or ten years, don’t move around a lot, don’t necessarily go to a lot of conferences. That’s just my own personal bias. I don’t know if there’s any way to check that.
EMILY: I get that sense very strongly myself, and my experience there is that I spent eight years working for a government contractor that was very much like that. We had six years to vest our 401k. We had flexible hours and all sorts of perks like that, and even though they claimed that it was not a hierarchical structure, it was a very hierarchical structure. Lots of people that worked at Northrop Grumman and places like that say how great it was, how lucky you millennials are to have a place like this, and you should be grateful for the money that we’re giving you, that type of thing. It’s a very old school mentality.
Leaving it was hard because they do make it difficult to move around. They do lock you in and it’s hard to get career exposure. I struggled a lot with exposure when I left and was trying to find a job because I didn’t have any experience in the typical tech industry, the industry that we’re familiar with, so nobody knew what to do with my resume, nobody knew what to do with who I was, what my background was. It was definitely a struggle. I had to get permission to go to a conference. Now I just submit things to conferences, I just go, and I work remote, so I’ve probably been working from states that my company doesn’t even know that I’m in at the moment, or something like that.
SAM: Or countries.
EMILY: Yeah, or countries. I spent a month working from Prague. I spent three weeks in Germany. It’s totally a different experience.
SAM: I cracked up just a minute ago when you said nobody knew what to do with your resume because, of course, for years I worried about my resume. I grew up with this advice that you didn’t want to have too many short jobs on your resume because it would show that you were a job hopper, that you weren’t committed.
CORALINE: I’m doomed.
SAM: Right. The longest I’ve been anywhere was Living Social, where I was at for two years and I think seven months. It’s hilarious to me to hear that when you came out of this company that you worked at for eight years, nobody knew what to do with your resume. It’s just so totally backwards. I love it.
EMILY: Yeah. I don’t know if it’s a generational thing or what but there’s definitely some hilarity in that space and I think, kind of cycling back to the voting machines, I don’t know anybody that works for them. I don’t want to slander their business or their developers or anything like that but when you read what they have to say on Glassdoor and when you look at just the evidence that’s out there, you can see a manifestation of a very different form of development thought.
SAM: I was curious earlier, when you were talking about statistical methods of detecting vote tampering. There have been threads on Twitter about people finding curious patterns in some of the data that they’ve looked at. I wonder how valid any of those are? How would you be able to tell if votes looked weird?
SAM: If you were to ask five different statisticians, you’d get ten different answers.
EMILY: Yeah, probably. I think it’s very difficult. I don’t know what method I would use. I don’t know how I would go about looking at that data, but there is interesting stuff out there. It’s easy to get into a trap when you see things that tell you what you want to be true, because you look at data and you see a pattern in that data that fits what you want to believe. You believe that the data is objective, so it’s very easy to be like, “Look at this. This precinct has a vote ratio of exactly 3:2. What are the chances that out of 13,000 voters, it’s going to be exactly three to two? That’s obviously evidence of tampering.”
Well, maybe. Maybe that ratio is super weird, but you need to weigh that against every precinct in the country, every possible outcome and the [inaudible] that are impossible. Then you also have to weigh it against the fact that you expect Russian hackers to be so smart that they can remotely break into our voting machine software when these things sit in the basement of a community center in the middle of Peoria or wherever, that they have the intelligence —
CORALINE: Powered off.
EMILY: Yeah, powered off, but they have the intelligence to do these massive cyber hacking operations or whatever and not the intelligence to [inaudible] up the vote numbers so it doesn’t look like a perfect ratio? Come on.
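Emily’s point about weighing a “suspicious” ratio against every precinct can be checked with a quick simulation. The sketch below uses entirely hypothetical numbers: 10,000 precincts, a true 60/40 split, and a rounded normal draw standing in for a binomial vote count, and counts how often an exactly reducible ratio like 3:2 appears purely by chance.

```python
import random
from math import gcd, sqrt

random.seed(0)

def is_clean_ratio(a, b, max_term=5):
    """True if a:b reduces to small terms, e.g. 7800:5200 -> 3:2."""
    if a <= 0 or b <= 0:
        return False
    g = gcd(a, b)
    return a // g <= max_term and b // g <= max_term

# Hypothetical election: 10,000 honest precincts of 10k-15k voters,
# with every voter independently breaking 60/40 for candidate A.
trials = 10_000
clean = 0
for _ in range(trials):
    n = random.randint(10_000, 15_000)
    p = 0.6
    a = round(random.gauss(n * p, sqrt(n * p * (1 - p))))  # ~binomial count
    if is_clean_ratio(a, n - a):
        clean += 1

print(f"{clean} of {trials} honest precincts hit an 'exact' small ratio by chance")
```

Even with no tampering at all, typically a dozen or so of the simulated precincts land on an “exact” small ratio, which is why one clean-looking precinct is weak evidence on its own.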
CORALINE: Yeah, is there value in opening that data up and playing in the open, sort of allowing interested parties who are able to do that data analysis and who maybe don’t have the same bias toward, “Yes, this looks like exactly what I was expecting”?
EMILY: There are some people that do that. I don’t know if voting data is made public universally, but there definitely are people that do that. Open government people do a lot of that work. There are definitely efforts to do that. It’s very difficult. These things aren’t storing data in nice JSON blobs. It’s very messy. It’s very hard to reconcile, system to system, precinct to precinct. People are definitely working in that space, and I think it would be great if we could invest some time, money and energy into that. But we are not currently at that place.
SAM: Yeah, I don’t even know where that money would come from.
EMILY: Federal government would be ideal.
CORALINE: But there is value in doing statistics in the open and data analysis in the open and science in the open.
EMILY: Yeah, open data and open science is an amazing nascent field. There’s a lot that’s happening in that space. I’m actually giving a talk soon about the role of open science and open data, and it’s an interesting corollary to open source technology. There are a lot of similarities and there are some key differences in how we approach it. It’s not necessarily universally true that the more data, the better, because not all data really needs to be in the open or should be in the open. But the more people that can independently assess the data and the results, the better, generally.
SAM: Is that basically a variation of, “Many eyes make all bugs shallow”?
EMILY: Yeah, pretty much in a sense. What it really does is it expands the process of peer review and it challenges the traditional academic model of peer review. In doing so, it allows for better processes to be developed on how we review science, how we promote science, how we cite science. There are a number of challenges to this and this is one narrow part of the scientific research process that we’re trying to retrofit to something that looks like a software engineering practice but we also have to do all of the other stuff. It doesn’t work to just do this one part and not all the rest.
There are some really interesting people working on that. Brian Nosek is the Director of the Center for Open Science, which is actually here in Charlottesville. He’s also a professor of psychology at the University of Virginia and he’s done a lot of work on the replication crisis that’s happening in the field of psychology right now, where many key results, key findings in the field of psychology are not able to be replicated. I wouldn’t say his whole career, but a lot of his work is driving towards making tools available for researchers to head this off at the pass and prevent these kinds of issues from cropping up, because they can be really long-term damaging. People’s careers are being found to have been wasted because they chased a thread that was false for 30 years.
CORALINE: I would imagine there’s some institutional resistance to open science, especially in academia, where a lot of studies are published in paywalled journals. The peer review process is very steeped in tradition, so there must be a tremendous amount of resistance to any effort to open that up.
EMILY: It is. It’s a very uphill battle against the publishers, who are very resistant to open access. It’s a barrier to the tenure model in academia and it’s certainly a big challenge to get people on board with how to verify data. But what’s happening is that more and more success stories are coming out of this model that are making it really compelling. There are still some people that doubt that there’s a replication crisis in psychology to begin with, but the evidence is leaning against them. Some interesting things have come out of this. To transition to what I was talking about earlier with my talk, one of the things that happened last year was that a self-proclaimed researcher published a study on OkCupid data.
Now, this is interesting because OkCupid periodically uses their blog to publish internally studied phenomena about their users. How many users are straight? How many users are gay? What do men respond to most in a message versus women, that kind of thing? What happened last May was that somebody dumped a bunch of data and studied a bunch of OkCupid users. This got publicized on Twitter. We took a look at it, and I was one of the first people to actually take a look at this data because I was bored and I was leaving my job and I didn’t have much to do, so it was great.
As it turns out, this student — he’s a Danish student — he scraped 70,000 users’ data, put it in a CSV, and uploaded it to the Open Science Framework without anonymizing it. He basically doxed 70,000 users and published a pre-print along with it. Reading the pre-print, it was even more heinous. He was trying to use OkCupid data to justify a hypothesis that religious people were less intelligent than non-religious people and that, on that basis, Muslim refugees should be denied entry to Denmark. He was using this open science framework to justify a really phony psychology study with really racist objectives. In doing so, he violated research ethics. He violated the scientific method in general and he actually violated European privacy law in the process, allegedly. I don’t know what the results of that investigation were.
But also, it had a lot of ripple effects because it brought up the question, “What is the role of the repos in protecting user data? What is the role of the repos in ensuring that research is done ethically?” What happened was I looked at this report and I wrote up a blog post and shared my findings and a couple of other people looked at the report and came up with similar analysis that the research was unethical, it was wrong, it was immoral and, “By the way don’t dox 70,000 users. What the hell is wrong with you?”
In receiving this criticism, he tried to hide behind this wall of, “But I have made my research open. It’s open data. It’s open science. You can’t criticize me for ethical violations. I have liberated the data to the world,” and that becomes an interesting question because that’s an ethical dilemma, like we want to liberate data, we want to make things open but how far is too far? And when do we draw that line? It’s a really fascinating thing that I thought was a very small thing when I got into it and it ended up turning into being a huge story in the open science world, that ended up changing policies on how the repos act.
SAM: When you say the repos, you mean the open source data repositories that are storing all these stuff?
EMILY: Yes. Open Science Framework is one of them, run by Center for Open Science and there are a couple others out there.
CORALINE: It’s remarkable to me, even with data that has been scrubbed and anonymized. There was a study that I remember hearing about on NPR last year about some company that released demographic data that was supposedly anonymized, and someone actually went through the data and was able to identify individuals just based on demographic characteristics. I know for me personally, if you knew my first name and the fact that I’m transgender and I live in Chicago, you could find out everything you want to know about me.
EMILY: Yep. Same thing with me. It’s really interesting. There are people that are experts on de-identification, and it’s a hard problem. It’s an interesting problem because HIPAA has anonymization requirements, and there’s a question of, “At what point are we doing well enough? At what point do we just say the statisticians will always win?”
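The kind of re-identification Coraline describes comes down to quasi-identifiers: combinations of individually innocuous fields that are unique to one person. A toy sketch with entirely made-up records shows how to measure the exposure:

```python
from collections import Counter

# Entirely invented "anonymized" records: no names, just demographics.
records = [
    {"zip": "60614", "age": 34, "gender": "F"},
    {"zip": "60614", "age": 34, "gender": "F"},
    {"zip": "60614", "age": 41, "gender": "F"},
    {"zip": "97201", "age": 34, "gender": "F"},
    {"zip": "97201", "age": 29, "gender": "X"},
]

# Count how many rows share each quasi-identifier combination.
combos = Counter((r["zip"], r["age"], r["gender"]) for r in records)

# A combination seen exactly once pins down a single person: anyone who
# knows those three facts about you has re-identified your row.
unique = [combo for combo, count in combos.items() if count == 1]
print(f"{len(unique)} of {len(records)} records are uniquely identifiable")
# -> 3 of 5 records are uniquely identifiable
```

Only the first two rows protect each other; the other three are unique on just three fields, which is why scrubbing names alone is nowhere near “good enough.”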
CORALINE: What about data that is overtly tampered with? What kind of impact can that have?
EMILY: That’s an interesting problem. In the science world, that’s not a new problem. You know, falsifying data is considered to be a grievous ethical violation of every academic principle and science has certainly dealt with this for years — decades, well, actually longer than that — but it’s interesting in the machine learning world because you have algorithms that are very sensitive to training data and you can really screw with an algorithm by giving a bunch of false data.
People are doing that. We know that there are hackers out there that want to take advantage of algorithmic systems, so they flood them with bad data in whatever way they can, knowing that the system will adapt to that bad data and give them favorable outcomes in return. You could argue that high frequency trading is doing just that legally, but definitely, people are absolutely, positively doing this. People are spinning up Lambda instances on AWS and throwing data at systems at scale to do this.
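The flood-it-with-bad-data attack can be made concrete with a toy online model that, like the systems Emily describes, adapts to whatever traffic it sees. Everything here is invented for illustration:

```python
# Toy online "anomaly detector": flags values far from the learned mean.
class RunningBaseline:
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        # Learns from every observation, including adversarial ones.
        self.n += 1
        self.mean += (x - self.mean) / self.n

    def is_anomalous(self, x, tolerance=50.0):
        return abs(x - self.mean) > tolerance

detector = RunningBaseline()

# Legitimate traffic: about 100 requests per minute.
for _ in range(100):
    detector.update(100.0)

print(detector.is_anomalous(500.0))  # True: a burst of 500 stands out

# Poisoning: the attacker floods plausible-looking traffic until the
# baseline drifts and the model "adapts" toward the attack values.
for _ in range(900):
    detector.update(500.0)

print(detector.is_anomalous(500.0))  # False: the attack now looks normal
```

Because the detector trains on everything it observes, the attacker never needs to break in; they only need enough volume to move the learned baseline under the attack they plan to run.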
CORALINE: Could you give me an example of a consequence of something like that?
EMILY: Search engines, say, have adapted to whatever is being linked to the most, that kind of thing, so people will use this knowledge to spin up pages that have these links, wait for Google to index them, and sometimes take them down, so that a site bubbles up to the top. That’s part of how the fake news stuff works: if you know that a story is about to break, you can pre-populate all of that so that when people Google, “Did Hillary’s emails expose her?” or whatever, you’ve already forced the link that you want to be at the top to the top, and it’s all garbage data. It’s all bad data.
Other examples of this are like if you want to try to block log-ons from a certain region, you can spam a log-on service. It could be a financial institution. It could be a medical institution, knowing that their systems will look at the traffic and start denying traffic from that region. If you wanted to lock somebody out of, say their investment account, you could do that if you knew that the log-on systems for that institution were using machine learning to try to squash denial of service attacks, that kind of thing.
CORALINE: That’s one of those situations where people are not looking at [inaudible] so this system is rebuilding.
EMILY: It’s very much reactive. It’s very hard to project all of these things. We’re often just deploying and we’re asking for forgiveness later, or asking our ops and security people to bail us out. When I say, ‘we,’ I mean as industry, not as a company. I don’t speak for Simple.
CORALINE: Yeah, you may have noticed that Greater Than Code is no longer a weekly show. We’ve moved to a bi-weekly format because we’re having money problems. We need to raise more money to sustain a weekly show. There are a lot of things that we want to do with that money, including fairly paying our editor and producer. But we also want to do things like listener perks and swag that [inaudible] people in our Slack community who go above and beyond.
We want to do conference appearances. We’ve talked about doing a lot of podcasts at a conference. We have a lot of big plans but we need your help. If you can pledge at any level at Patreon.com/GreaterThanCode, every donation is appreciated. We are proud of the fact that we’ve been listener-funded for 37 episodes. We’re also open to corporate sponsorships. If your company cares about the things that we’re talking about and wants to invest in the kind of conversations that we have, they can go to GreaterThanCode.com/Sponsors. Please talk to your companies about this. We want to continue delivering great content and have a conversation that no one else is having but we definitely need your help to do that.
We are very excited to announce that we’ve been selected as this month’s Fund Club Project. Each month, Fund Club emails members their new pick, a project, initiative, event or organization focused on diverse communities in technology. Members give $100 to that month’s selection. Fund Club doesn’t manage your money or ask for it up front. You submit payment directly to the recipient project. Fund Club so far has raised nearly $200,000 for projects like ours. We’re so delighted to join the ranks of projects that Fund Club has helped, including Trans H4CK, People of Color In Tech, I Need Diverse Games, Write/Speak/Code and MotherCoders. If you want to sign up for this amazing program, you can find more information at JoinFundClub.com.
I think we can go to reflections, then. Reflections is when we look back at the conversation that we’ve had and talk about what was [inaudible] or a call-to-action, or in any way made us think. I’ll go first. I think what you talked about with the brainstorming sessions to consider all the ways that something can fail: ranking failures on severity of harm, probability of harm, when a failure occurs, and detectability of failure and harm as it occurs. That’s something that I personally want to work to bring to the projects that I work on at work and outside of work, that kind of systems thinking and planning for bad outcomes. As we discussed, that’s not something that happens often enough in our field, but it should. I think I’m going to do what I can to institute practices like that at my next job. Thank you for that.
SAM: I’m definitely going to have to go back and listen to this entire episode again because I have a feeling there is a lot I’m going to learn when I do so. Our conversation about what it’s like to work in, say a military contractor or what it might possibly be like to work for somebody making election software was really an interesting and useful reminder to me that the world that I live in and the kind of software development practices that I take for granted, really are extraordinarily niche, more so than I like to think about. I’m going to have to contemplate what it might take to bring some of these practices that I find useful out into the broader world. At the same time, what I might find of value in what I think of as the dark programming universe that might be useful to me. Thank you.
EMILY: Yeah, no problem. I like reflecting on that dichotomy when I’m at tech conferences because tech conferences tend to be really decadent, especially the big ones. I was at JSConf in Berlin and they had this wild opening sequence with flashing lights and drum and bass, and it was just really extravagant and interesting. It was such a difference coming from that academic corporate world, where conferences were in the banquet hall of the Holiday Inn outside the airport. Everyone wore a suit and tie and it was the same people that have known each other for 30 years, with the same grudges that go back 30 years.
For me, it was a really stark difference between the industries. I like to reflect on it when I’m in that environment because, obviously, I prefer the decadence and the dance music, but it was an interesting path for me to get from one place to the other.
SAM: Well, thank you Emily. This has been a really fascinating conversation and I’ve really enjoyed getting to talk to you for a little bit, more than we do on Twitter. To our listeners, thank you again for sticking with us through another episode and I hope that you have learned something as well and we’ll be back at you in two weeks. Bye everybody.
This episode was brought to you by the panelists and Patrons of >Code. To pledge your support and to join our awesome Slack community, visit patreon.com/greaterthancode. Managed and produced by @therubyrep of DevReps, LLC.
Amazon links may be affiliate links, which means you’re supporting the show when you purchase our recommendations. Thanks!