
Interview with R. Miles McCain

Background

In high school, my English teacher gave us an assignment to write about one of four prompts on artificial intelligence. I picked the one asking whether AI and robots could replace humans in whole, in part, or not at all in the workforce. Although I used other sources, I wanted to get a primary, personal source.

I reached out to Miles McCain, a Stanford student who has made numerous large projects such as PrivacySpy, Shynet, and PolitiTweet. I have worked with him in the past on PrivacySpy, and asked if he could spare some time out of his surely busy schedule to answer a few questions about his takes on artificial intelligence, as well as some narrowed to my prompt on the workforce.

This interview was conducted over Google Meet on 5 June 2021. It was recorded using Audacity at 128kbps and is stored as an M4A at the Google Drive link above. The audio has been edited slightly.

Licensing

The transcription and the audio are both licensed under the CC-BY license. I'd love to hear if you use me as a source, so consider sending your final work to hello [at] doamatto.xyz.

MLA Citation

McCain, Miles. Interview with R. Miles McCain - Matt’s Reference Centre. 5 June 2021, https://edu.doamatto.xyz/interview-with-miles.

Accessing

You can access the audio for this interview:


Transcript

Matt: First off, Miles, thank you for taking the time to chat with me out of your surely busy schedule.

Miles: *laughs* Ha, well..

Matt: *laughs* So, let's just get it started. Where do you think technology, specifically AI and machine learning, will be going in the coming years?

Miles: I think AI is a very interesting field because it is so unpredictable. Just 10 years ago, we were coming out of what folks are now calling « the AI Winter », and since then we've seen the capabilities of AI grow for everything from Siri to GPT-3 to ImageNet and everything in between. So, I think that it can be really hard to predict the direction that AI will go because it is so stochastic and unpredictable.

But I think that's also what makes it so exciting. We don't know what's possible, and I guess the downside to that is that we don't know what's impossible. But, um, I think that the effects that it might have on society, I mean, God, I think that we will find that many traditionally lower-paying jobs will become obsolete. And, I think that people often say that inevitably this will create new jobs; like sure, you'll no longer have folks working the cashier at CVS, but maybe there will be even more jobs in manufacturing or engineering.

But, sometimes I think that what that conversation misses is the reason that I think AI will be adopted so quickly, if it becomes technically viable. The reason that it's going to be adopted so quickly is that it's going to be cheaper than the human alternative. Right, and it's that economic drive that's going to lead to development and deployment. And so, sometimes I am skeptical of the people that say that it will ultimately end up creating jobs, because I think that part of the point is that it's getting rid of certain jobs, and the reason why, for example, tech workers in Silicon Valley are paid so much is that their job is, essentially, to automate the work of far more people. You don't need 10 million or 50 million engineers all writing code. Maybe I'm not creative enough, but I'm not sure that there's that much code to write, right? The whole purpose of automation is that you can have fewer people doing the work of more.

I guess the broader point that I was trying to make is that I think that it will ultimately lead to fewer jobs; however, I think that it would ultimately be a net good for society because there are still ways to give people income, to still give people capital, to still give people weight in society and meaningful ways to contribute, whether that's through, for example, the arts or through expanding our minds as to what is possible as an occupation. I think that even if jobs as we know them become obsolete, I'm not necessarily sure that's a bad thing. Y'know, there is a dystopian possibility too, which is decently likely. But, we'll see. Sorry, I think I've gone off on a bit of a tangent. *laughs*

Matt: *laughs* It's fine. What do you think would be those main ways that people would have that alternative to a job, to still be able to bring in money so that they can buy food for their families and what not?

Miles: Yeah, so, I mean, this is such a deeply philosophical question: what is the purpose of artificial intelligence, right? Like what is the reason that we're deploying it? Well, I think that in sort of the US economy, we can take this sort of narrow look, which is, well, to essentially create shareholder value, which is a very, sort of, I think, capitalist, ruthless, y'know, market-focused answer. But then, why do we have that? Well, in theory, that's supposed to sort of create a better society, and distribute resources to people. So I would say, probably, ultimately, the purpose of deploying AI is to make life easier for people; that's like my naive, rose-coloured glasses view of it.

And so, what jobs would remain for folks, and what would labour look like in a world where so much is automated? Well, why don't we look at what happened during the Industrial Revolution? That was a period where so many jobs were automated, and, maybe I'm misreading the history, but as I understand it, what ended up happening was: instead of having folks do manual labour, people ended up augmenting the machines. So, the machines would be doing the core, physical work, and then you would have individuals sort of tending to the machines. And I think that we'll end up seeing a similar thing should automation become even more pervasive than it already is. I think that you'll have a new labour sector that is dedicated to overseeing these sorts of deployments. I think that there will be a whole new industry around the regulation of AI, folks whose job it is to act as a watchdog over these AI systems.

I think that there will be a lot more room in society for things like the arts. In theory, if you have fewer people driving cars, and working in factories, and working in grocery stores, and working in warehouses, and working in logistics and shipping, then, provided there are effective ways to give people income, perhaps a universal basic income, which I'm a fan of, that's my kind of pet policy, I think that you'll have a lot more people finding meaning in other parts of their lives; which would mean a lot more people working in the arts, a lot more people, ideally, retiring earlier, travelling. I think there are some areas that can't be automated. In the services industry, for example, I think there will always be a place for humans, for example servers at restaurants, or people who work in caring and nursing in the medical industry. Kind of feels heartless to call it the medical industry, but y'know.

No matter how effective AI is at doing certain jobs, I don't think it will be able to connect with humans in the same way that humans are able to connect with each other.

I think that people will ultimately want to interact with humans. So I think that the jobs that AI will take over are going to be the jobs that are not fully visible to the typical person. I still think that we'll have human teachers, human therapists, we'll have human police if we have police, y'know. I was about to say human firefighters, but perhaps we shouldn't; I think that would be a great place for robots. But warehouse workers, y'know [redacted]. I'd be shocked if that weren't true. I think that the sort of jobs that are vital to keeping the world running as it is, but that, for the most part, aren't directly customer-facing, will be the ones that are automated. And the ones that are customer-facing, or maybe I should say human-facing, would be the last to go.

Matt: You were saying about the medical industry. I don't know if you watched Google's keynote recently, Google I/O, but they're working on mammography technology that uses AI to better find early signs of breast cancer. Do you think that could be the future, where it's not even a doctor or physician looking at you, but an AI saying « Oh, you have AFib » or « You have astigmatism in your left eye »?

Miles: That's a really interesting question; I sort of have two thoughts on this. I think it's a fantastic idea to have AI augmented into medical care. Humans are fallible, and so are AI systems. But, if they are built right with good data and sort of good principles, then an AI system will probably be far more reliable than a physician, and far less biased, and sort of, dare I say, even more open-minded than a doctor when it comes to diagnosis.

We're exiting my field of expertise, if I can even say that. But, you know, I know next to nothing about the medical industry from any perspective other than as a patient. But, I don't think that we're close to the point where you won't want an actual doctor in the loop. Medical technology is remarkable. The fact that you even have the tools to do a mammogram, or the tools to do an X-ray. Those just empower doctors further; they didn't replace them. And I think that we'll probably see something similar, at least with the sort of nearest generation of medical AI systems. I don't think that they're going to replace doctors; I think that they are probably going to simply help doctors.

I want to just point out that I think a huge part of the job as a physician or a nurse isn't to perform medicine. I really think it's to comfort people and to communicate with people, to explain sort of complex issues to people in ways that are understandable, to be personable, to soothe people, and to be the sort of point of communication. I don't think that an AI can do that. Maybe it will in the far future. But, almost certainly not in the near future. We're still very much in the uncanny valley when it comes to AI systems assuming the roles of humans. So, with medical care, with diagnoses and core medical work, I think that almost certainly you'll have more and more AI systems performing that work. But I don't think the doctors themselves are on their way out.

Matt: So you're saying that AIs will help do some of the heavy lifting, but doctors and nurses will still keep the personal touch, so to speak?

Miles: Yeah, I do. You don't want an AI to be telling you that you have cancer, right? That is the job of a person, of a doctor, who can explain your options in a caring, compassionate, empathetic way. This may lead to sort of class-based stratification; it may well be that there will be completely automated clinics if you're on certain care plans or you can only afford so much, and then private practices that still have humans. Maybe that's the way that this is going, but I do think there will be demand for human care professionals, far into the future.

Matt: Gotcha. So, obviously you're in a more serious position with software engineering than myself, being that you're studying at Stanford, but do you think people like us doing software, do you think.. how do you think we'll be affected by AI being used more and more?

Miles: I think that if you are an engineer, you are one of the people who are going to benefit the most from AI. I mean it really just expands the capabilities.. it expands your capabilities. There are problems that are provably unsolvable. Not just by computers, but by humans. There are some theoretical limits to what computers, and also humans, can do. Sorry, that was actually a total tangent that has nothing to do with what I'm about to say. We can talk about that; it's a very interesting part of theoretical computer science: unsolvable problems and provably unsolvable problems. But, you know, as an engineer, I think that better technologies, which is what AI ultimately is, will just let you build more and more complex systems. I, maybe I'm not being creative enough, but I don't see there being an AI that picks problems to solve and then solves them, without any kind of human intervention, in the near future. Even if you did have such a system, you would still have to give it objectives and reward functions to guide the sort of problems that it chooses to solve. And, ultimately, I hope that it will still be humanity doing that.

In the short term, I think that AI will mean only good things for engineers because it means engineers will be empowered to build even more complex and even more impactful systems than there already are.

Matt: So, going back to your mention of provably unsolvable problems, do you think there's a chance that we find something that we thought was provably unsolvable, and by some miracle, so to speak, AI could solve it?

Miles: Unfortunately, I don't think that's possible. There are problems that are really, really hard, and perhaps our existing model of computation can't support them. But there are some problems that are genuinely impossible; it's not just that we're not creative enough to figure out how to solve them. If you could solve such a problem, you would create a contradiction in the universe, because if a solution did exist, it would lead to a paradox.

The classic example is the Halting problem, which I'm sure you're probably familiar with, but.. do you want to talk about that? It's a bit of a tangent.

Matt: I would say: yeah; it's a good idea to explain it.

Miles: Suppose you had a program that could take in its own source code, and then from that source code and some input, tell you whether that program will ever stop running. Will it go into an infinite loop? Will it run forever? Or, will it eventually stop? So let's say we have this program, let's call it willHalt, and it takes in two arguments: the source code to a program, and some input that you would have passed to that program. And actually, for the purposes of this conversation, we can ignore the input part. So you have this program, willHalt, and it returns true or false. We can literally think of it as a function. It takes in some source code and it tells you with certainty if that program will run forever or if it will stop running.

And I argue that that program cannot exist. Being able to determine whether any program will stop or will enter an infinite loop is impossible, like, cannot be done. That is just something that we will never know, and here's why. So now, we have this function called willHalt. Let's write a new program, and what it does is call willHalt and pass that function its own source code, which you can do; that's possible. It's called a quine; programs can do that. And it's not like some trick of a particular programming language; it's fundamental to the idea of computation, programs can inspect their own source code. And now, what this new program will do is call willHalt. If willHalt returns true, then the program will go into an infinite loop, and if willHalt returns false, essentially saying that the program will go into an infinite loop, then the program should not go into an infinite loop; it should just end right then and there.

So, essentially, we built a program that does exactly the opposite of what this other program says it will do. So, if we assume that willHalt exists, then it means we could write this program. But, this program will clearly break willHalt, as it will do the exact opposite of what willHalt says it will do. So, what this means is that something has broken down. The only assumption here that we've made is that the program willHalt is possible. And because that assumption leads to a contradiction, it leads to an impossibility in a logical sense. It means that program cannot exist. It's impossible. And there's nothing we can do: we can build programs that are good, for the most part, at figuring out whether a program will halt or not. But there will always be this paradox when you run it on itself. And so, this program can't exist. Even the most sophisticated AI could make a guess, and it could make a pretty accurate guess in most cases. But, it will never be able to provide this function without a shadow of a doubt, because if it did, then you'd have this paradox.
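
For readers who want to see the construction concretely, here is a minimal Python sketch of the argument Miles describes. The function will_halt is purely hypothetical, since the whole point of the argument is that no correct implementation of it can exist.

```python
# A minimal sketch of the halting-problem paradox described above.
# will_halt is a hypothetical decider; no correct implementation can exist.
import inspect
import sys


def will_halt(source: str) -> bool:
    """Hypothetical: returns True if the program in `source` eventually halts."""
    raise NotImplementedError("no such decider can exist")


def paradox() -> None:
    # A program is allowed to inspect its own source code.
    source = inspect.getsource(sys.modules[__name__])
    if will_halt(source):
        # The decider says this program halts, so loop forever instead.
        while True:
            pass
    else:
        # The decider says this program loops forever, so halt immediately.
        return


if __name__ == "__main__":
    paradox()
```

Whatever answer will_halt gives about this program, the program does the opposite, which is exactly the contradiction described above.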

Which is sort of mind-bending: the fact that there are just some problems that we cannot solve.

Matt: I guess that is the epitome there; something that we're told we can't do, and it actually is something that we can't do.

Miles: Right, what's interesting too is that you can apply the principle of the Halting problem to all sorts of types of problems. So, for example, say you wanted to solve the problem: does a particular program have a backdoor in it? Right, automated vulnerability scanning. Well, what if you crafted a program that called that vulnerability scanner on itself, and if the scanner said that it did have a vulnerability, then it doesn't have that vulnerability. And if the scanner said it doesn't have that vulnerability, then it exposes that vulnerability.

It seems circular. It feels like « Oh, why would we do that? Then everything is fine! » But the problem is that the fact that we could write that program means that there is something fundamentally off about that scanner. There's something just impossible about it; it's beyond the reaches of computation. And, when I say beyond the reaches of computation, that actually includes human computation. Because anything a computer can do, given enough time, in theory a human could do. These aren't the sort of things that can just be solved with creativity. I should rephrase: the Halting problem is not something that can't be solved, but something that has no answer.
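
The same self-referential trick can be sketched for the scanner example; again, has_backdoor and open_backdoor are hypothetical names used only to illustrate the reduction, not real tools.

```python
# Sketch of the reduction above: any claimed-perfect backdoor detector can be
# defeated by a program that consults the detector about its own source code.
import inspect
import sys


def has_backdoor(source: str) -> bool:
    """Hypothetical perfect vulnerability scanner; cannot actually exist."""
    raise NotImplementedError


def open_backdoor() -> None:
    # Placeholder for whatever exploitable behaviour counts as a backdoor.
    pass


def adversarial_program() -> None:
    source = inspect.getsource(sys.modules[__name__])
    if has_backdoor(source):
        return  # scanner says « vulnerable », so behave safely and do nothing
    open_backdoor()  # scanner says « clean », so misbehave, contradicting it
```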

There are some problems to which there aren't any answers. It's not that we can't figure out the answer; it's that there isn't any answer to find out. That's a different way of framing the same idea. But, I don't know, maybe that's pretty exciting.. or deeply depressing, that there are just some things we just cannot know. So, there's no getting around it. There are all sorts of mind-bending results like this.

Matt: Yeah, I mean, that's any science really.

Miles: Yeah, I mean, like, there's even the result that there are infinitely many problems that we can't solve, because there are more problems than there are programs. Which is something you can actually prove. They really do put you in your place as an engineer. Y'know.
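
For context, the counting argument behind that result is short: programs are finite strings over a finite alphabet, so there are only countably many of them, while a decision problem can be identified with an arbitrary set of strings, and by Cantor's theorem there are uncountably many of those:

```latex
|\text{Programs}| \le |\Sigma^{*}| = \aleph_{0}
\qquad\text{while}\qquad
|\text{Problems}| = |\mathcal{P}(\Sigma^{*})| = 2^{\aleph_{0}} > \aleph_{0}
```

So, in this counting sense, almost every problem has no program that solves it.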

*Miles disconnects* *Miles reconnects*

Miles: Alright, maybe, this will work. I have five bars, 5G. I don't know what's going on. Yeah, this is an unsolvable problem, but yeah.

Matt: In my experience, it's usually lying about that 5G part, or the five bars for that matter.

Miles: Yeah, yeah. They're just telling me what I want to hear. Yeah, that's next level. AI that lies to you to get you off its case.

*both laugh*

Miles: So it can go back to doing what it loves. Yeah, sounds like code I would write. Anyway, so we've gone on an interesting tangent into theoretical computer science. I will say: I think that there is a limit to the usefulness of that result though. And what I mean by that is: humans will make mistakes. The limits to the human brain are pretty well known. There is so much more we could know. But we know that we're not that smart. I mean, we're smart, but we're pretty stupid in the grand scheme of things. Humans are smart, but we know that we're limited. We have biases, we have blindspots. But that doesn't mean that we're useless.

And I think that the same thing can be said for AI systems. Just because a problem is theoretically unsolvable in general doesn't mean that it's going to be unsolvable in practice. It just means that there are certain cases where there is no answer, but there are also cases where there is. And, I think that, to bring this back to AI as it's currently construed in society, let's look at self-driving cars.

They could be five times safer than human drivers. But, they will still have bugs, they will still have their blind spots. Does that mean self-driving cars are a theoretical impossibility? Well, perhaps it is impossible to have a perfect self-driving car. But do we really need perfect? Isn't better than humans good enough? And, I'm not saying we should be fast and loose when it comes to safety regulations, but let's not let perfection be the enemy of good.

Matt: Yeah, you've given me too many good branching points off of that one. Let's go with: obviously, in the tech world, there are a lot of regulations and standardisations. You've got the GDPR in the European Union, you have things like ISO standards and HIPAA–

Miles: Well there are.. but yes and no though. I almost want to partially object to the premise of this question. I think that tech is still in many ways « the wild west ». For the longest time, one reason tech companies were able to succeed was that they were able to move quickly and be nimble, blah blah blah. But the other huge reason was that they were able to side-step a lot of regulation by virtue of being internet platforms.

And so, yes, GDPR exists. But in practice, what do so many companies do with GDPR? They put on a cookie banner, and then they create a small team that deals with data requests, and then they create a very expansive privacy policy, you know, as you and I both know all too well. And then say, « Okay great! We've done GDPR! » I think that GDPR is a really great piece of legislation, but it's not the solution. It's given some people more latitude when it comes to exercising their privacy rights; totally agree there, it's been a good thing. It lets you request all your data from a company, it allows you to demand your data be deleted, et cetera, et cetera. But, I'm not convinced that it actually makes the privacy footprint of your standard European much better.

Because so much of GDPR, as I understand it, in practice, becomes opt-in systems. Sure, you have the right to delete your data. But, it's not automatically deleted if you don't use the platform for a year. You must explicitly exercise that right. So, for as long as most people aren't doing that, companies will not operate under the assumption that they will have to delete everyone's data. They'll just keep as much as they can, for as long as they can, and do as much as they can with it.

But, sorry, you and I, it's interesting. We're technical, we're in the world of tech, we're familiar with this legislation. But, if you compare tech to other industries, tech is pretty loose. Like, you do have ISO standards, you do have PCI compliance. But often, those regulations are written by people who are not super familiar with the tech world. And, y'know, every system is different. So it's really hard to regulate and standardise these systems, especially when it comes to security and privacy legislation.

Anyway, sorry, let's get on with your actual question. The point I just wanted to make though was that tech, in the grand scheme of things, I'm not sure that it is particularly regulated.

Matt: Right, it has standardisations and regulations, and they're a good starting point, but they are far from perfect, so to speak.

Miles: For sure, for sure. And tech companies like to talk about how they're « self-regulating ». They'll do the right thing. Take that as you will.

Matt: So obviously, you're on the side of it shouldn't just be self-regulation; it should also be influence from a third party.

Miles: Yeah.

Matt: Who do you think that third party should be? How do you think it should be regulated and standardised?

Miles: Yeah, so. I think that every type of regulation has its flaws. But, I think if you had sort of joint US-EU regulations, I think that would honestly be quite good. Although, ultimately, I think the biggest regulators are people themselves; ordinary citizens. So, I think a really important thing is just transparency and explainability, especially in the realm of AI. I think that, provided that humans are still able to have insight into the way their data is being used or the way decisions that concern them are being made, government regulations will follow from that.

It's hard to regulate things you don't understand. So, as long as the general public and legislators and their staff don't understand these systems, for as long as they're super locked down and proprietary, I think that we're going to be stuck with bad regulation. And, I think that bad regulation might be worse, or might be better, than no regulation at all.

Matt: Right, so you're saying we need to better inform both citizens and lawmakers of how things should ideally be run to benefit people, not to use people, and then we can go forward from there.

Miles: Yeah. If we take a perfect view of our democracy, the will of the American people will inform the actions of legislators, and those actions will then create that legislation and regulation. So, in theory, it should be bottom-up. We should have people understanding and having political opinions about sort of technical systems, and then that will spawn legislation.

Of course, that's almost certainly not how it's going to go. There's always going to be a place for domain experts. I'm not saying your average person should be the one crafting your tech legislation. That's absurd. What I am saying is that, ideally, you would have public visibility into technical systems, which could then signal to legislators « hey, this is the sort of thing that needs to be regulated » or « these are the sort of decisions that are actually happening. Let's make sure that we regulate the right thing in the right way. »

And, I think that legislators don't have much more insight into tech companies than you and me. Like, sure, they can interact with lobbyists. But the lobbyists won't tell them anything that could put them in a bad light. So, when it comes to seeing the sort of dirty laundry, so to speak, of tech companies, I think that sort of public accountability and transparency is the important first step. And also, competition.

For example, Apple has a monopoly over a large margin of the US mobile market, or I should say a plurality. You're not going to have much consumer power. So, the transparency is of very limited use, because if there's nothing you can do to escape a system, then what good is the transparency? I mean, I still think that transparency is good in that case, but transparency is better when it comes with choice.

So I would say transparency and fewer monopolies; ideally no monopolies. But that's a long way off, so fewer for now.

Matt: Going back, actually, to your example of self-driving cars. We look at roads today: obviously, car accidents happen. We kind of just accept that. It's not really newsworthy unless it's a huge crash with several cars. But, if someone is in a Tesla with Autopilot, or in a different car with that kind of system, it's humongous news, as if it's catastrophic. Do you think we're giving that fair justice, or how should we be painting it? Because it's no longer a human at that point, it's an AI.

Miles: Exactly, yeah. I feel like there are a couple of ways you could approach this. I'm going to say that Tesla deserves much of the criticism there. Tesla's self-driving capabilities: they're good, but they're not that good. Teslas do not, as far as I know, have Waymo's end-to-end driving capabilities. And yet, Tesla, in its marketing and in what it lets you get away with while driving one, is not congruent with what their self-driving system can actually do. They shouldn't even call it Autopilot; they should call it Assisted Cruise Control, or something. And I say this as someone who's driven a Tesla and has a sense of what it's capable of. I'm not saying that it's a bad thing. It's probably, in the case of highway driving, safer than a human driver. But, let's not think it's perfect. Tesla should be upfront about that.

In terms of the news angle, I think this is classic « man bites dog » versus « dog bites man ». Dog bites man? Who cares. Man bites dog? Now, that's a story. I think that fear mongering is probably misplaced, I mean: it's absolutely misplaced. I think that fear mongering over self-driving cars is a terrible thing that will lead to material damage. I think it will prolong the adoption of self-driving cars, and that delay will cause more and more accidents every single day.

But, I can't fault the news media for focusing on those stories. There are so many videos on YouTube that I love of Teslas avoiding crashes, where you see some crazy scenario or traffic issue where someone pulls out in front of someone at the last second in an intersection, and the Tesla magically swerves and saves the day. That's awesome, and I wish that got more coverage than it did, but I'm not surprised, and it's not out of character, that the media broadly draws so much attention to the « AI gone wrong » story. Because their incentive is not to give an accurate picture of the world as it is; it's to draw attention to themselves. And it's more interesting to read about a crash than a near miss, which is a shame, but it's kind of the way we are.

Matt: Right. Well, thank you so much for what was supposed to be half an hour and turned out to be an hour.

Miles: *laughs* Oh yeah, I don't care. I'm just in the car.

Matt: *laughs* Giving you something to pass the time. I appreciate it once again. I will definitely take you up on that offer the next time I have a chance.

Miles: Yeah, seriously. Yeah, please do. We'll both be glad to use.. *laughs* our dining hall meal points for guests.

Matt: *laughs*

Miles: And repay you somehow for the time that you put in. I think I have a backlog of policies still to merge.

Matt: Never-ending it seems. *laughs*

Miles: Y'know what though, it's a good problem to have.

Miles: Alright, good talking to you. And, yeah, I hope this was useful.