Confronting Our Humanity - and Bias - in Artificial Intelligence (Guest: Bill Welser)
Ben Newton: Welcome to the Masters of Data podcast, the podcast where we talk about how data affects our businesses and our lives, and we talk to the people on the front lines of the data revolution. I’m your host, Ben Newton. Our guest today has had an amazing journey. He was a captain in the Air Force, had a decade-long career at RAND, a world-renowned nonprofit research organization, and now he’s a partner at ReD Associates, a strategy consulting company based on the human sciences. Bill Welser has done groundbreaking work around privacy, artificial intelligence, industrial ecosystems, commercial drones, and cryptography. We had a lot of fun talking together, and I hope you enjoy it as well. So, with no further ado, let’s dig in.
Welcome, everybody, to the Masters of Data podcast. I’m Ben Newton, and I’m super excited to have Bill Welser here with me from ReD Associates. Welcome, Bill. Thanks for coming on the show.
Bill Welser: Thanks for having me today.
Ben: We’ve talked a little bit before about your background. I was actually taking a little time to look at what you’ve done in the past, and it seems like you’ve had an amazing career so far. You were in the Air Force. You worked at the RAND Corporation, doing research there. Now you’re at ReD Associates. And I love with all of my guests just to kind of understand how you got into technology and why you ended up where you did, in particular coming from the Air Force into RAND. Tell us a little bit more about your story. Where did you come from? How did you end up where you are?
Bill: Sure. So, I’ve always been a little ambitious. When I was graduating from high school, I knew I wanted to be some sort of engineer. And I went and ran my own little set of interviews and found that chemical engineering always seemed to rise to the top of the list as the hardest or most challenging engineering discipline. So, I decided that was a good idea and tackled chemical engineering at the University of Virginia, and at the same time I really wanted to pursue an Air Force career as an officer. So, I participated in ROTC, graduated, and went to Los Angeles Air Force Base. And for a chemical engineer falling into Los Angeles Air Force Base, where they built satellites, and missiles, and whatnot, it was kind of like a candy store. There were lots of things that you could look at in terms of how to use different fuels for propulsion, how to use different sensors, and how you manufacture those. But what I got into right away was building high-power lasers, chemical lasers.
Ben: That sounds like fun.
Bill: It was neat to knock rockets and missiles out of midair. And we were going to do this from space, and then also from the front of a 747 on a project called the Airborne Laser. So, right away, I just fell in love with tinkering and building large systems that were really pretty ambitious in nature. My career in the Air Force went from lasers to building a bunch of different types of sensors for satellites, some of which are flying today, and then into cyber systems, because there was so much overlap between running these space systems and needing to understand the cyber environment around them, how to keep them secure, etc.
Ben: That makes sense.
Bill: It was really fun. At the same time as working on these cyber systems, I decided that I was a little bored with engineering. Everybody at the time was getting MBAs, so I went and got an MBA at Boston College and decided that wasn’t specific enough. So, I went back and got a master’s in finance with the intention of going and working on Wall Street. That was in 2007. So, that intention, while well placed, wasn’t well timed. And I ultimately decided that maybe finance wasn’t in my future, at least not near term, and wanted to find a way I could contribute to my community at large, and went and found the RAND Corporation.
I spent ten years there, doing really detailed technical analysis in the beginning and then becoming the director of engineering and applied sciences, which is really overseeing 350 PhD-level experts working across all these different spaces: healthcare, and labor, and education, and defense. Really exciting. Some of the most talented, brilliant people that you’ll ever run across. And after running that department for about six years, and kind of getting into all the different things that I’ve gotten into personally from a research standpoint, I realized that there was a challenge that kept popping up, and that was how to engage the commercial sector in a realistic way. And I found ReD Associates. ReD Associates is a social sciences firm of just under a hundred people. And they charge themselves with finding the unknown unknowns in the human system. So, what are those things that we do all the time that we don’t realize we’re doing, but that dictate what our actions are, what our decisions are, etc.? And how can we understand those better so that we can make better decisions or take more practical or pragmatic actions? And they do this for Fortune 100 companies. I took that as a really exciting space to jump into and to build on. They needed more detailed knowledge of technology. So, a colleague and I from RAND decided to join them as partners, and we’re building them a technology practice to go alongside their world-class social sciences practice.
Ben: That sounds really exciting, Bill. Because we got connected through Christian Madsbjerg, who is over there. And I’ve read his book, Sensemaking. And I think that connection between understanding the human context of data and connecting the cultural and sociological context of the technology is amazingly interesting. I go back to one thing you said, too, about going into finance in 2007. I came out of a computer science degree in 2000 into the dot-com bust.
Ben: I understand the feeling. Though we were talking about it… I would have thought the pinnacle of your career was when we talked before about working with Dinosaur Train on PBS…it’s a kids’ show. What did you say? It was Ready Jet Go! So, where do you go from there?
Bill: Yeah. Well, so that was really neat. I met Craig Bartlett, who was the creator of Dinosaur Train and Ready Jet Go!, via a TEDx talk that I gave on asteroids. And he was creating this new show and wanted some detailed science advisory. So he had engaged a few of us in the local LA community. And it was really fantastic. It was neat to think through how you storyboard a show that’s aimed at young minds that want to learn and are eager to learn science and technology, but in a fun way. It was a really exciting opportunity. And Craig and his team are just amazing.
Ben: Like I told you before, my kids used to watch Dinosaur Train, and I always have a huge appreciation for those kids’ shows that actually take the time to explain complicated concepts. If you can explain something in ways that non-technical people, and even kids, can understand, that’s when you really understand the subject. I think Einstein said something of that sort, that you should really be able to take these complicated concepts and explain them to our children. I think it’s great work those people do.
Bill: And really important.
Ben: Now, going back to where we kind of were before. We were talking about now what you’re doing at Red Associates and what you were doing at Rand. We were kind of talking about how really the human context affects technology. Now, kind of looking back on some of the things that you’ve done before, I think that’s just been kind of a focus for you, hasn’t it? About looking how technology affects humans and how those kind of spaces intersect, right?
Bill: Yeah, so the way that I would describe my research focus, for the past decade at least, and really since I’ve left the Air Force, has been looking at emerging technologies and trying to understand how they affect and impact the human condition or the human system. And a good example of this: probably four or five years ago now, I walked into the office of one of my colleagues at RAND. He’s a world-class machine learning expert. He’s from Nigeria. He is one of the most brilliant people that you’ll ever meet. And I said, “Osonde, how is it that all of this wonderful AI capability that people discuss is coming out of one place in the world right now?” Which is Silicon Valley. And most of those software engineers and machine learning experts came from a very similar socioeconomic background. There’s got to be something there. Because they’re baking their implicit biases, their assumptions about the world, into these automated systems that they’re building. And it started as a simple question of, “Is there something there?” And it led to Osonde and me digging deeply into: what is the potential for bias in AI? It turns out that it’s actually a lot deeper than just thinking about the fact that a lot of it’s coming out of Silicon Valley. It’s instead this idea that we as humans are developing an intelligence that is kind of modeled after our own. And with that comes the fact that we’re capturing the good things about our own intelligence, but we’re also capturing some of our maybe not-so-good things. And as Osonde likes to say, we’re developing an intelligence in our own image, and that’s really not a compliment. So that’s an example of taking an emerging technology space and really thinking about how it’s affecting the human condition or the human system. We can talk more about that if you’d like.
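The “bias baked in” dynamic Bill describes can be sketched at toy scale. The example below is purely illustrative, with invented data and feature names: a trivial classifier “trained” on skewed historical decisions faithfully reproduces the skew in its labels, with no malicious intent anywhere in the code.

```python
# Minimal sketch of "bias in, bias out" (illustrative only; the data,
# feature, and outcome here are invented): a classifier trained on
# historical decisions learns whatever pattern -- fair or not -- those
# decisions contain, and then repeats it.
from collections import Counter

# Hypothetical hiring history, skewed toward one school.
history = [
    ("State U", True), ("State U", True), ("State U", True),
    ("Other U", False), ("Other U", False), ("Other U", True),
]

def train(records):
    """'Learn' the majority historical outcome for each feature value."""
    tallies = {}
    for school, hired in records:
        tallies.setdefault(school, Counter())[hired] += 1
    return {school: c.most_common(1)[0][0] for school, c in tallies.items()}

model = train(history)
# The model inherits the skew in its training data, by construction.
print(model["State U"])  # True
print(model["Other U"])  # False
```

Real systems use far richer models, but the mechanism is the same: the training labels carry the past’s assumptions, and the model has no way to know which of those patterns it should not learn.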
Ben: Yeah, absolutely. A bunch of things you said kind of touch on other threads that I’ve had conversations with people and really as artificial intelligence is developing and really in general how computer science and technology is developing how these either explicit or implicit biases in how we’re developing these technologies is so important. So, what are some of the things that you kind of uncovered in there as you were going through that? What kind of concerns you about bias? What are some of the implications you saw?
Bill: Well, the first thing I always want to start a discussion on AI with is the fact that the term artificial intelligence is kind of problematic. It’s problematic because…well, one, I don’t want to get into the use of the word artificial, because I have issues with that. But the term as it stands means something different to most everyone. To some people, it means automating some widget in their house. Right? Their fridge knowing that it’s time for them to buy more milk. For other people, it’s automating a total system running at a chemical refinery or something like that, moving lots of different widgets and lots of different fluid flows. Obviously those are two different types of systems. For others, there’s the discussion of AGI, the idea of a general intelligence: emulating or recreating everything that we do in a very organic manner. That spectrum of things that I’ve just described, from the simple algorithm that would need to be written to tell me to buy more milk and maybe order it for me, all the way to whole-brain emulation, that’s a huge space. And yet we lump it into one term. [Laughs] Artificial intelligence. So, that’s a problematic thing. I like to raise it at the beginning of conversations like this just to get it out there.
Ben: Yeah, I would even say in my space even, AI even gets conflated with what really amounts to statistics.
Ben: Where people are effectively doing statistics or even basic machine learning, and then that gets branded as AI because it’s the popular term, and it’s what everybody wants to be associated with. And what it does is it then waters down the actual conceptual idea of what AI actually is.
Bill: Right. And from that starting point, you have the public perception of AI. And I like to say the public perception of AI is killer sentient robots.
Bill: But the commercial reality of AI is not that. It’s automating your credit score evaluation, or automating how I interact with certain aspects of my smartphone, things that aren’t killer sentient robots. And because they don’t rise to that level of risk, and sexiness, and interest, we kind of forget about these other things. But if we’re automating whether or not I can get a home loan, that’s a big deal. And if we’re taking humans out of the loop for something like that, it’s an even bigger deal. Because what it means, and this actually is a real thing today, is that for some of these systems, not even the developer can go back in and tell you why it made a particular decision. They can posit a guess, but they can’t tell you exactly why. So, if I get disapproved for a home loan, and I go and ask the local teller or agent, “Why did that happen?”, do they really know? And was it really fair? All these sorts of questions rise up. Man, that starts hitting you where it hurts. So, that’s why this bias in AI thing matters: because we have a lot of implementations of AI already in our communities that we’re not really clear on. They don’t look like killer sentient robots. We haven’t really paid attention. But they do impact some of our day-to-day activities and decisions.
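The opacity Bill points to is easy to reproduce even at toy scale. In the hypothetical sketch below (the weights and applicant fields are invented, standing in for parameters a real system would learn from data), the loan system emits only a yes or no; nothing in its output says which factor drove the answer.

```python
# A toy "loan approval" model: a logistic score over applicant features.
# The weights are hypothetical stand-ins for learned parameters. The
# caller gets a boolean back; the "why" lives only in the arithmetic.
import math

WEIGHTS = {"income": 0.8, "debt": -1.2, "years_at_job": 0.3}
BIAS = -0.5

def approve_loan(applicant):
    score = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    probability = 1 / (1 + math.exp(-score))
    return probability > 0.5  # all the applicant ever sees

applicant = {"income": 1.0, "debt": 1.5, "years_at_job": 2.0}
print(approve_loan(applicant))  # False -- but which feature tipped it?
```

With three weights you can still trace the arithmetic by hand; scale that up to millions of learned parameters and you get the situation Bill describes, where even the developer can only posit a guess.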
Ben: Yeah, it makes a lot of sense. It’s funny, I did run into this a little bit recently when our chief security officer over here was telling me this story about Alexa recording stuff and sending it off. He unplugged his Alexa, so now the one piece of technology like that in my house, I’ve unplugged too.
Bill: A lot of people could jump to the conclusion that these large corporations are nefarious actors. Like, “Oh, well, Amazon is just trying to collect every piece of information on me,” or, “Google, look at what they’re doing.” Or this recent Facebook and Cambridge Analytica discussion. But the market forces, the way that people are voting with their feet, and with their pocketbooks, and with their time spent, support those companies developing capabilities like this. And so that’s kind of the interesting rub: uninformed consumers are actually feeding the machine, and by machine, I mean just this kind of market space, that’s building a lot of these capabilities. And then by the time they realize what they’ve fed and built, the consumers are like, “Whoa, whoa, wait a minute. Step back.” And that’s a little bit unfair. So, I don’t like it when I read about how the Facebooks of the world, or the Amazons, or Googles, or fill in the blank, Apples, are vilified. Because they didn’t just wake up one day and say, “Wouldn’t it be great if we had all this information?” They said, “Well, clearly consumers want their product delivered in X amount of time, or to be able to talk with their friends across distance, or whatever, so we’re going to build a system that helps them do that in a seamless way.”
Ben: Well, when the robot overlords take over, we have no one else but ourselves to blame.
Bill: That’s what I’m getting at.
Ben: As you describe it, I 100% agree. Because a lot of these things creep up on you. We as consumers are demanding a certain set of experiences, a certain set of conveniences. We want the technology to do all this for us, and then we seem surprised when… In particular with some of these technologies being free: well, if you’re going to make it free, they have to do something with it. And then we’re surprised when it’s doing things that we’re not comfortable with. So, where do we go from there? Somebody like yourself, you’re really looking into this, and I’m sure you’re actually talking to companies that are dealing with this and how they work through the implications. What do we do as a society and as an industry to work through these kinds of thorny issues?
Bill: Right. So, I think one of the first things is to recognize the environment that we’re in today. And what I mean by that is, for the past decade plus, there was a huge push to collect as much data as possible. You had this big data movement, as I’ll call it. We collected the data, organized the data, stored the data. And when that was all done, it was like, “Now what do we do with the data?” And I’m sure there were some very messy boardroom discussions where people were challenged: “Well, we just made all this investment. We’ve got these terabytes of data. What are we going to do with them?”
Ben: You have to justify it.
Bill: Right. Which has now led to this resurgence in AI. Because what does AI need? Well, it needs a data diet. It needs data to eat, to learn from, and then to move forward and adjust. So, part of this current environment is that most of the data that we have is really opportunistic in nature. We had a system that was already collecting this set of data, so we just grabbed it and pulled it. And we said, “Well, it’s data, so it must have some amount of value.” And we’ve jumped to a conclusion. This is actually a bias of ours, and it’s a bias toward what might be considered sunk cost. We don’t want to admit that we collected something, or that we have something available to us, that might not be worth very much. So, we try to make it into something. And I think that’s where a lot of the hazard lies right now. Because we take this opportunistic approach toward the implementation and employment of AI, and that might not actually give you the outcomes that you want. What might be better is to say, “Okay, now we see everything that we have, data-wise. And we see what we can do with it. But it would be wonderful if, instead of these ten fields of data, we actually had five more. So let’s go build the sensors to collect those five more. And now let’s add value via an AI system.” In talking with companies, and with other types of organizations that have access to large amounts of data and are looking to automate the evaluation, or assessment, or treatment of that data, we’re talking with them about how to take a fresh look, let a sunk cost be a sunk cost and set it aside, and get toward an instantiation of AI that actually delivers value, versus just something that you can point to and say, “Look at us, we’ve got AI.”
Ben: And when you say that, I guess I’ve kind of seen this come out from a different perspective even beyond AI. If I’m understanding you right, it’s that don’t just throw some AI in the mix with data. It’s actually think beforehand about what you’re actually trying to accomplish and the questions you’re trying to answer, and design for that. Am I understanding that right?
Bill: Absolutely. You have to take a rigorous problem-solving approach. You have to follow the scientific method, and not just hop to the assessment phase because you have a bunch of data available to you. And this is where I really want to see the community push. I don’t think all is lost by any means. I think people are pushing in this direction. But it is very hard, when you’re trying to deliver value to shareholders, to admit that maybe some of that activity you spent collecting data in the past wasn’t as wonderfully productive as you might have hoped.
Ben: Yeah. One thing when you say that… You made this transition from the RAND Corporation over to ReD Associates. And now you’re not just dealing with public organizations anymore, with governmental organizations; now you’re dealing with commercial companies. Is there a gap, maybe in a good or not-so-good way, between how public organizations view this and how commercial organizations do? Are commercial organizations maybe less aware of some of the implications of what they’re doing and kind of coming to it now? What have you seen in comparison?
Bill: The gaps that I’ve seen… And understand that I’ve only been at ReD for just under three months. But the gap that I’m seeing is really that there is a huge difference between the knowledge that leaders in the commercial space have about these technologies and what leaders in the public sector have. And that discrepancy is worrisome to me, because if I can sell something to someone… And I’m not saying this is happening; I think the conditions are right for it. If I can sell something to someone in a government that they don’t quite understand, but that sounds really good and that they see referenced at every newsstand they pass in an airport… AI is everywhere. If I can sell them AI, and it sounds good, but they don’t really understand what’s happening or what it’s doing, that’s a problem. So, I see a huge differential in terms of just understanding what these technologies do. That’s one piece I think should be reconciled a bit. The other piece: I see a huge push toward near-term instantiations of these technologies in the corporate world, which makes total sense, but maybe without thinking through what the long-term implications might be. So, go back to the example of the bank loan. If I’ve automated the bank loan process, with the decision being made by an AI and just being delivered by a human, what happens when GDPR takes hold in Europe, where now there’s a whole set of rules and regulations that are not supported by the existing technologies? To say that you have to go back to scratch is probably an overstatement, but in some cases you really do have to go back to the drawing board with some of these systems. So, that’s what I’m talking about, the short term versus the long term.
So, I think it’s a wonderful, complementary relationship between the public sector and the private sector, if they talk and if they work to share their perspectives equally with one another. The private sector explains exactly why they need it right now and what they’re going to do with it, with the context of the longer-term considerations that the public sector may be thinking about. And the private sector shares the expertise that the public sector just doesn’t have, and actually likely doesn’t have the resources to keep fresh.
Ben: Right, that definitely makes sense. A lot has changed, both in circumstances and in public perception, in the last couple of years. I’ve talked to a couple of other people about that, and I think there’s now a growing awareness in the public about what’s happening to their data. I’m not sure it’s where it needs to be yet, but there is definitely a growing awareness, and maybe at least some push for holding companies and governments accountable for what they’re doing with the data, right?
Bill: Yep, agreed.
Ben: When you’re talking about how these different groups interface, where do you see the leadership coming from? Because in some sense, what you’re talking about, there has to be a few organizations and people that are stepping up front. Where do you see the innovation in this area coming from and particularly thinking through the implications? Does it come from the public sector? Does it come from some of these companies? Is it here in the US? Is it in Europe, particularly with GDPR? What do you think?
Bill: So, here’s where I’m a little bit biased. I’m biased because I’ve voted with my feet for the past ten years, and that is that I really believe the leadership has to come from organizations like ReD and also like RAND, that take a multidisciplinary approach to problem solving, that get the technologists in the room with the behavioral scientists, and the social scientists, and the political scientists, and the economists. And you think through all of the various aspects of the problem to come up with a solution that’s robust in all those spaces. I don’t think that we can continue as a society to have things stovepiped: “These are the technical experts, and they’re going to solve the technical problems. And then we’re going to have these economists over here that solve the economic aspects. And these behavioral scientists are going to be over here, and they’re going to think about how it all affects us.” Right?
I don’t think they can be standalone anymore. And unfortunately, our entire university system is built for them to be standalone. I know that some universities have worked to create multidisciplinary centers and this, that, and the other. But the incentive structure within the universities is not built to support that system. So, we really have to think about, and I’m not meaning to rail on the university system, it’s just a very good example, how do we engender this focus on bringing various experts together to tackle very hard problems and produce really wide-spanning solutions that incorporate all these different perspectives? And that’s where the leadership has to come from.
Ben: That totally makes sense, Bill. Because it seems to me that what you’re talking about is not just a problem for the field of artificial intelligence in general. This is something that’s developed over the last several decades, a tendency to separate out the technological and scientific implications, and how people were investing there, from the sociological and cultural implications, rather than bringing the two together. I see that in some smaller form in what I deal with day in and day out. It’s more about encouraging the grassroots, encouraging cross-functional, cross-disciplinary collaboration, because that’s how we’re going to come up with the best solutions. And it seems like that’s really where a lot of innovation is going to happen in the future, not just in artificial intelligence but overall. How are you going to bridge those gaps and connect these different disciplines that maybe haven’t collaborated as much in the past? Does that ring true with you?
Bill: It does. I think that one of the big challenges…and AI is, again, a great example case to bring up…one of the challenges is what happens when the development in each of those disciplinary spaces is happening at different speeds. And so if we take the AI case, there was a lot of available data. The algorithms that have been employed to date, many of them are decades old in terms of the theory behind them. There just wasn’t enough data to feed them. And the computing power was pretty limited. But now that we’ve kind of caught up in those other spaces of computing power and available data, we’re kind of running rampant with building new AI focused or AI enabled solutions. Behavioral science can’t move that fast.
It’s not meant to move that fast, because you’re supposed to actually see things over a period of time. You’re supposed to observe. And in observing, you gain so much context and perspective to then inform maybe the technological side of things. Well, when one side races ahead and leaves the other side behind or leaves their other partners behind, that’s a problem. And so part of I think the challenge to multidisciplinary problem solving is just that speed aspect and the fact that you have in some cases…you’ll have a variable speed at which the different sectors, or areas, or sides, or however you want to think about it may want to move. And in some cases, they just have to wait and say, “Okay, let’s hold on. Let’s let these guys catch up. And let’s see what they have to say.” Because we’re going to all be better off. And it comes from the conceit that we’ll all be better off if we work together.
Ben: Yeah, that makes a lot of sense. One question that comes to mind when you say that, it was bouncing around in the back of my head, is that it seems like it’s not just these different disciplines operating at different paces; this also seems to be a bit of, I don’t know if I’d call it an international phenomenon, too. Because I’m wondering… Okay, so let’s say that in the US we end up doing a better job of this. Or in Europe, they end up doing a better job of this. But then you see some other countries across the world that maybe don’t take quite the same care. They don’t have quite the same data ethics framework. It seems like this is going to be a real challenge, because maybe we take the time to do that collaboration over here, but then there’s another group somewhere else that doesn’t take that same careful approach, right?
Bill: It’s a great observation. This is one of the challenges of the global network that we live in today. And one thing that’s been bothersome to me… And I’m not going to jump into a political discussion right now. But just observe the fact that the EU, from a rules and regulatory standpoint, is diverging from the US. And not to say that the EU and the US are the only two entities out there, but just to take them as a case study: they’re diverging. What does that mean for technology development? What does it mean that there are going to be, in some cases, radically different standards? And again, I’m not making a value judgment as to which standards are better or worse. But I’m saying it is hard to ignore the fact that this divergence is happening. Well, in a global community, that makes a huge difference. And maybe the entity that’s ahead will just have enough buying power, if you will, or influence, that they will pull everyone else along with them. But we haven’t seen that happen yet. There are a lot of case studies we could bring up right now of similar things. But I do think that’s an important thing to track, and those two entities, the US and the EU, to track specifically. Because they have been in better lockstep in the past than they are today.
Ben: Yeah, I absolutely agree. And I think it’s a fast-developing area. And the reality is, the way innovation happens… I’m still not completely convinced that you can always spread innovation out. A lot of times, it tends to concentrate in a few geographical areas. And if those areas aren’t always in lockstep, and they’re not on the same page about important ideas like ethics, and privacy, and things like that, it will create friction down the line. Kind of on that point, what’s next for you? You’re a few months into your new direction here. What are you focusing on next? Are you still working in this same area of data bias, of how technology affects culture? What are you looking forward to working on in the next several months?
Bill: So, for me, my personal goal is always about finding the thing that others aren’t paying attention to and bringing light to it, helping bring it into the conversation. Someone brought this up to me the other day, saying that I seem to take a Moneyball-like approach to technology. And I guess that’s fair. I’m looking for the opportunities to raise awareness, whether that’s through technical expertise, or through the intersection between social and technical considerations, or whatnot, around some of these things, and privacy is a great example, that are pretty abstract to most people. And if it stays abstract, we’re never going to get anything done. So, how do we push toward creating solutions, testing them out in the market to see how people react, and then iterating? And that’s what is so exciting to me about being at a place like ReD: with all due respect to the US federal government, which I’ve spent a lot of time supporting in my life, they don’t move swiftly.
Ben: No. No.
Bill: And that’s okay. It’s for good reason, etc., etc. But I am excited to be in an environment where they aren’t saying, “Hey, come back and talk to us in six months.” They’re instead saying, “Yeah, we really would like to know what we should do in three weeks.” And that increased pace and attention is what excites me around topics like: how do you instantiate a machine learning system to support the future of mobility, or something like that? And I’m obviously not going to go into details for any of my current clients, but I can tell you that there’s a lot of rich opportunity out there to make the world, and this will sound a little bit idealistic, a better place, through producing better products that help people in their daily lives. So, that’s what kind of drives me right now.
Ben: I think that sounds great, Bill. And I’m super excited to continue to follow what you’re doing. If you’re willing, I’d love to reconnect with you in a few months and hear how things are going. I think the stuff you’re working on is incredibly important, and the perspective that you guys take at ReD Associates, pulling these different disciplines together, is incredibly inspiring to me. So, I really thank you for coming on. I’ve enjoyed this conversation, and I appreciate you taking the time out of your busy schedule to be on the show.
Bill: I really enjoyed the discussion. Obviously happy to talk anytime. It’s such an exciting space. Who knows what tomorrow is going to bring, right?
Ben: Yeah, absolutely. Thanks everybody for listening and look for the next episode in your feed.
Female: Masters of Data is brought to you by Sumo Logic. Sumo Logic is a cloud native machine data analytics platform delivering real time continuous intelligence as a service to build, run, and secure modern applications. Sumo Logic empowers the people who power modern business. For more information, go to SumoLogic.com.