Innovation at the Intersection (Guest: Linda Holliday)
Ben: Welcome to the Masters of Data Podcast, a podcast that brings the human to data. I'm your host, Ben Newton. Our guest today is a perfect example of the kind of combination of humanities and science in a single person. Linda Holliday is the founder and CEO of Citia. Linda grew up in a family of engineers but decided to go her own way and pursue design in college. She was able to use her home-brewed love of engineering and math, together with her sense of design, to start two companies and build a very unique understanding of the intersection of data, engineering, and the human.
Ben: So, without any further ado, let's dig in! Welcome everybody to the Masters of Data Podcast. I'm very excited to have Linda Holliday, who is the founder and CEO of Citia. We're actually talking to her from her office. Welcome Linda!
Linda: Hi, great to be here. Thanks.
Ben: You're right in New York City, right?
Linda: I am. You'll hear sirens, no doubt.
Ben: (laughs) Well, that's what ... you know, it's a soundscape. It's all good. That's the fun thing about a podcast.
Ben: So, Linda, it is great to have you on here today, and you know, one of the things I always start with is I like to humanize the people I talk to. Learn a little bit about you. So, I mean, you have done a lot in the last few years. You started a couple companies. You've been an angel investor. Done a bunch of different things over the last several years. So, just talk a little bit about how did you get to where you are, what's kind of your story?
Linda: Well, I like to explain myself these days by claiming to be homeschooled as an engineer because my father was an electrical engineer. My grandfather, my brother, my husband and his father, all engineers.
Linda: Yeah, weird. I say we had to learn the right way to do everything and I'm not kidding. We got grilled on square roots at the dinner table.
Ben: Really? Okay.
Linda: Yes. So, that made me want to go to art school, even though I was really good at science and math.
Linda: And I did, and my dad was yelling from the front porch, "I'm not paying for this," because he had other ideas for me. He didn't. Then, I studied design, and I fell in love with design because it's kind of the application of art to problems, right? It's a problem-solving discipline. I was actually then the first designer to be admitted to Wharton. Again, thank you science and math; I wouldn't have gotten in otherwise, I'm sure. But, there I found this other discipline called system sciences. It was just another form of design.
Linda: So, I fell in love with that. You know, there's a lot of buzz these days about design thinking. It's not really just adding humans to the process; it's a different way of looking at problems. It's a different way of looking at complex problems with a lot of interrelationships.
Ben: So, when you were a designer at Wharton, did you feel there was any sort of friction there, or was it kind of a natural transition for you when you found this systems design stuff you're talking about?
Linda: I think I would have if I hadn't had such a strong science and math background. For example, the first semester at Wharton was five math classes.
Ben: Really? Wow.
Linda: They all had different names. You know, like accounting, and finance, and quantitative methods. But, they were all math classes. So, you really develop a second language, you know, just thinking through numbers.
Linda: When you have that skill you can actually then, you know, advocate for a lot of things by converting them into math. So, I think my whole career has benefited from being able to make a case for things that a lot of designers or creative people, or musicians can't really do because they don't have that second language.
Ben: Hmm, that's really interesting. I've definitely seen that before; there's a great appreciation now for bringing those things together. We were talking a little bit about that: being able to bring that science, mathematics, engineering way of thinking together with a more design-thinking, humanist way of thinking, however you want to express it. That seems to be more important now than ever, right?
Linda: Yeah, and you know, math is kind of a Rosetta Stone. It can express almost anything. Like, the universe isn't math, but we can express it through math. So, by being able to translate a lot of ideas into numbers, we can actually find common ground, as opposed to staying on separate sides.
Ben: Mm-hmm. That's an interesting way to put it. I studied physics myself, and that was one of the big things: mathematics was that common language through which we could talk about things and actually get on the same page. That actually makes a lot of sense. So, you went to school, got an MBA, right? What did you do after that? Is that when you started your first business? Was it right out of school, or what was that like?
Linda: No, then I went to work for cable television, which was the old new media. It was just being invented. I'm old, so it was just being invented at that time. It was really exciting because you could kind of do anything you could think of. It was fun days. We were figuring everything out, but then that got a little bit routine too, and I'm somebody who likes to be in pain, I guess. And do things that are new and hard. Whenever it gets routine I have to kind of leave.
Linda: So, I left that and I went into production, which was pretty much a whim because I said, what's the most fun thing I do? Make television commercials. So, I went into that, which was kind of dumb because I was offered a really big job at Comcast, and in those days it was so small the head of programming and marketing was one head count. So, that's how old it was.
Linda: But, I went into production and then, got really interested in a lot of the art of making things that way. Then, that led to this company I started called Medical Broadcasting Company, which was actually kind of a consultative resource for the healthcare industry, with an emphasis on all the digital technology that was emerging.
Linda: So, we did a lot of weird stuff like XML systems for business intelligence, and you know, disease modeling. I worked with giant data sets like Fair Isaac, et cetera. Got a tremendous amount of experience in that. It was so dynamic for so long, and we built a couple of other companies and launched them from inside that. But then, it also got fairly routine and I sold. My partner and I sold, and I became an angel investor.
Linda: You know, too young to retire, afraid of becoming irrelevant, and loving entrepreneurs. You know, I worked with a lot of big companies, and the antidote to big companies is entrepreneurs. So, that was really exciting for a while.
Linda: Then, I say I got repetitive stress injuries. I'm like, didn't I already say this? Because they were all kind of suffering the same young-company problems, and, you know, I wanted new, bigger, harder problems. Yeah, so I kind of stopped doing that and started writing a book on running creative businesses. Then I realized that what I wanted to make wasn't a book, right? I was used to using all kinds of weird and wonderful communications tech, and yet we still had the book as the primary way we pushed new ideas into the world.
Linda: So, I started making software for that. And, here I am, years later, lots of money later, and I have the software for that but no time to write the book.
Ben: Well, you know, it's how you write with the idea. I can definitely appreciate your [inaudible 00:07:31] things. That's at least the way I justify all the different positions I've moved in. You know, you want to discover new things and try new things out. I think that's a good way to understand better the world you're in. So, that makes a lot of sense.
Ben: Now, when you say ... so, you're writing this book and you decided to ... you started writing the software. Talk a bit more: what was the problem you were trying to solve with the software that became Citia? I mean, what gap did you see that you really thought you were going to be able to fill?
Linda: Well, here's where my inner engineer comes through. You know, I saw this problem of getting new ideas, which are made by a certain set of people and validated by a certain set of people, more widely distributed and widely held, as a throughput problem.
Linda: And, you know, I made it up because I don't have data, but I think it takes about 17 years for a new idea to become reasonably widely held. That's too long. The world moves too fast now. So, how do you take that 17 years and collapse it, hopefully by more than half, right? We had this really old, slow process, where a big book gets made. Maybe it's kind of academic. Then, maybe somebody more ... you know, a better communicator like Malcolm Gladwell comes along and makes it more accessible. Then, it gets picked up in magazines. Then, it gets added to school curricula. You know, it's a cascade, from this kind of giant porous corpus down into the kind of granular, present media that most of us have access to.
Linda: So, how do you collapse that thing? How do you put smaller ideas that have more relevance and availability in front of more people? It's a distribution problem.
Linda: So, at the same time, we had this internet going through this kind of phase shift, right? It was pages, and websites, and applications. That was a transitional internet. Now, we have the actual mature, native-born internet, which is small pieces that can be reconfigured with intelligence. Every new company makes that, right? That is the framework for a modern company to get started.
Linda: But, a lot of our old media, legacy companies, they're all built with that kind of impenetrable whole, or a page-based mentality.
Linda: So, what's needed is a system for moving all those small pieces of content: ideas, case studies, you know, the molecules. Atoms are too small; it's more like the molecules. How do you move them around and let them be discoverable inside the channels people are actually consuming content in?
Linda: I call it the discovery layer. It's like a new thing. We didn't have it in this way before. So, our software, Citia, is built to kind of rock that discovery layer. You know, you kind of make things into small pieces of content, and then you can distribute them into any channel.
Ben: That does make a lot of sense, because one of the things you're saying that strikes me there, that seems like the transition, but [inaudible 00:10:35] tie it to data, is that early on, the internet and even other forms of media were these more generically targeted things. You know, I'm going to target this larger group of people.
Ben: Yeah, because I can't do anything different than that. So, you've been able to develop these things over the last couple of decades where you can take the context of the person. I think I heard somebody describe it as their digital lifestyle, and actually take that context of who they are and give them a different experience. That seems to be partly what you're doing: you're allowing them to match the content, and match the experience, to who the people are who are actually looking at it. Does that sound right to you?
Linda: Yeah, I think ... you know, well, one kind of weird distinction is: what's the difference between data and content? Because kind of everything is data now. But, there's so much intelligence and money going after how to get smarter about those lifestyles. Who is who? Who wants what? How do we find people? It's a very vibrant sector.
Linda: But, there's almost no attention being paid to the payloads, right? Once you know that about somebody, now what? And, I'm afraid, watching technology evolve over these decades, that it's a familiar movie where, oh well, we finished this part, then we realize we need that next part.
Linda: And again, since you're a musician, it's more about ... I think people see that as a targeting issue. Like, I need to find Ben. He's looking for a car, or something like that. But, it's not a sniper shot, right? It's more like a shotgun. Like, I need to meet Ben's kind of changeable nature with a set of things he might be interested in. It can't really be good enough now, and maybe ever, because people are a little hard to predict.
Linda: No matter how good the technology gets, we change. So, we have to get close. We don't have to be perfect. And, the payload has to be reasonable about what people are really like, and deliver something that's got a chance of being relevant.
Ben: The way you describe it almost ... I think I've heard somebody else say this before, but we're not really sometimes one person. We're actually multiple people. There are multiple aspects. You know, at the same time I'm a father, I'm a son, I'm a husband. You know, the simple stuff. But I'm also ... I might really like this type of TV show at the same time that I like to play guitar, or something like that. And, all those coexist together. And, when you're looking at that context, you don't view it all as one big jumble. You say, okay, let's find that interest, let's find that community that person is in, and be able to target to that.
Linda: Exactly. I think it was Walt Whitman, "we contain multitudes."
Ben: Yeah, exactly.
Linda: We do.
Ben: Yeah, exactly.
Linda: I think there are a lot of people who think we're machines and we're predictable, and we're rational, we're reliable. I think they need to take a few other courses.
Ben: (laughs) Well, you know, that's an interesting connection there, because you and I were talking a little bit about this. I think in talking to so many people now about data, one of the themes that I've found to be so interesting right now is this connection between the human and the data. Because, to your point, the internet and all these technologies were invented primarily out of the sciences and then engineering, and were very, almost utilitarian, and focused on the technology stack. And like, what can we do with the technology? A lot of these things we're running into right now, in terms of technology gone wrong and bias and all these different things, are really about: can we bring the human back to the data? How do we find ways of ...
Ben: You know, you and I were talking about the book Sensemaking, with [inaudible 00:14:19], and one of the things he talks about is getting the social scientists to talk to the engineers. It seems that's partly what we're getting at here: how do you take the understanding of who people are, in their complexity, their wonderfulness, and also their craftiness, their nastiness, everything? You understand whole people, and put that in the context of the technology too. We can pretend like they're different, but they're really not. They're not disconnected, I guess, is what I'm saying. They're connected whether we want them to be or not.
Linda: Yeah, and so much of science, you know, in this kind of hangover from reductionism, wants to put people in these really tight little categories. Even to the extent of eliminating all the hard-to-understand or maybe unpleasant aspects of us, right? Those don't count, or those shouldn't be on the table. You know, for years, economics was the study of markets, trying to separate markets from humans.
Linda: That didn't work so well. I think their predictive value recently has been close to zero. So, they had to reinvent the profession and call it behavioral economics, meaning we're adding people in.
Linda: Well, you know, they could have saved some time if they'd asked me, because you don't really have markets without people. So, you know, there's been a long history, hundreds of years, of thinking about us in very mechanical ways, and eliminating all those things, like emotion, that are hard to measure and hard to incorporate into models.
Linda: So, we're just now to the point where we're seeing the limitations of that kind of thinking. I think the other thing you kind of scratched there was, you know, the difference between, say, Google and Apple, right? There's a kind of tech-stack-out approach: we have this, what can we use it for? How can we take it to market? What kind of problem can it be applied to? Which is a perfectly legitimate way of making stuff. Then, there's the Steve Jobs thing. It's like, I bet people want to have blank, blank, blank, right?
Linda: Then, he goes and makes that thing based on people. So, one is tech up, and the other one is people down. Design thinking is more people down.
Ben: Yeah, and you have to find a way to meet in the middle. I guess in particular in the first company you started, it sounds like you were already dealing with this kind of idea of big data, taking these data sets and trying to apply them. Now, with Citia, you're kind of delving into that again. How do you, in particular from your perspective of actually running a company and actually trying to make this practical and not just talking about it, how do you actually see us making that transition? Because in particular, big data, you know, people don't talk about it as much anymore. But, big data was this thing, where I remember that story about Target being able to predict whether you were pregnant or not. I mean, there was all this ... [inaudible 00:17:08] in a creepy way of using data to understand people. We've made a transition out of that, where it seems like a lot of those efforts failed because they didn't achieve what they were supposed to achieve. Maybe there weren't the right people working on it, or whatever it was.
Ben: Now, like you said, there's kind of a change going on. So, practically from your point of view, how do we make that change? What does that look like on the ground, do you think?
Linda: Well, there's so much depth and complexity in what you just described. So, you know, I think one reasonable approach to all of it is that this kind of marriage of data, and goals, and people, and all that is a multidisciplinary effort, right? You can't have people who just understand data building and launching things that are going to interact with complex human systems and expect too much.
Linda: You know, look at, was it Tay? The Microsoft natural-language thing, and how heinous its mistakes were: racist comments, reflecting fascism, et cetera. So, I think we've seen enough evidence of the kind of mistakes that are made when the group that's trying to figure things out is too insular, right?
Linda: So, diversity is an important aspect. I think also, I don't know why, but we're so easily persuaded that something that's converted to numbers, or better yet has decimal points, is somehow accurate.
Linda: And, I say, like, you know, ask me what I weigh, then weigh me. They're two different numbers. You know, ask me what I want to weigh in a year, or what I intend to weigh in a year; it's a third number, right? So, a lot of data is self-reported, and it's dirty. Unless you know how, you can't really use it well. So, again, being able to use data requires a certain amount of context, and that context has to include people and the thing that the data reflects.
Ben: Yeah, and that makes it ... we call that in physics precision versus accuracy.
Ben: You can be very precise and totally not accurate. It's like, I always laugh when people give these numbers that are down to the decimal point. It's like, how'd you get that? Well, I estimated that. That was actually what always used to drive me crazy when I took a couple of business classes while I was getting my computer science degree. I'd go over to the business school, and they'd have this crazy formula, and we'd calculate it out. Like, how did you make that work? I was like, "What's that variable over there?" Oh, yeah, well, we just make that variable up.
Linda: Yeah. But sometimes you have to, you know? Like, one of my favorite examples of a design problem is how to make something, say a tea kettle, right? You have to make it beautiful. You have to make it utilitarian. It has to perform the function of a tea kettle. It has to be cheap to manufacture. It has to be easy to ship. It has to use certain materials. So, satisfying those, what are called mutually exclusive variables, is the essence of a design problem. How do you run across all of those vectors and solve something simultaneously? It's a different kind of process.
Linda: Part of that expertise is what is beauty? How do you know if a tea kettle is beautiful? Someone on that team better have a clue.
Ben: And like, is this in the eyes of the people you're trying to sell that kettle to?
Linda: Yeah, and so you can decide in advance who that is, and maybe test it against them. But then, sometimes you get to market and you find out it was a very different group or, you know, you can be surprised in lots of different ways. But, that kind of assessment, what is beauty, gets lumped into this category called intuition, which is ... you know, it's a really large and undistinguished category now. It won't be for long because I think we're starting to understand how much power is hiding in there.
Linda: But, the person who knows ... you know, there was an old saying about how do you know that a painting is beautiful? The art critic answers, "By looking at 10,000 paintings." So, you know, you have to have the time to do that. You might have a talent, but you also have an expertise.
Ben: Yeah. You know, there was one thing in [inaudible 00:21:28] his book, Sensemaking, that I remember, where he was quoting, I think, another well-known maturity model. It reminds me of that, where you can advance in terms of mastery, you know, from just a beginner to kind of a mid-level, where you're still following rules but you kind of understand what's going on. Then you get to the point of this master, where everything is intuitive. Everything is natural. It's almost instinctual, but the reason why it is, is because you've become a master. Because you've spent so much time either looking at-
Linda: Incorporating it into your own cognition, into your own memory, et cetera.
Ben: Exactly. Right, you know, and the reason I brought that up is I think what's interesting in what you were saying, and it goes back to how you figure out what people to involve in these kinds of projects. I think particularly in Silicon Valley, where I'm at, and just in general, there's a sense of, like, okay, if you get a bunch of young people in a room, they're all smart, and they work hard, then you can solve any problem. But, when you get to this kind of thing like intuition, and being able to instinctually understand a problem and understand what's going on, a lot of that only comes with age and experience. It's not like at 22 years old, out of college, you're going to be able to understand how the world works. You haven't really experienced it yet.
Ben: So, there's a sense too, to solve these problems that we're talking about, you actually have to have people that have actually experienced those things and actually have the maturity to understand them. I mean, does that ring true to you?
Linda: Oh. You know, it's been so strange to have a culture that thinks anything to do with experience is a negative, right? I understand this kind of child's mind, and again, I keep referring to you as a musician because I know you understand these things personally. As a creative person, you have to kind of continuously hack yourself so that you don't go to the routine things.
Linda: Right? Your mind kind of wants to do that as one of its factory settings. So, people who are creative on purpose have actually found ways to keep their minds open in ways that some other people might not. And an open mind, over a long period of time, acquires a tremendous amount of value. Call it experience. And you look at a lot of the kind of weaknesses or mistakes that have been made in Silicon Valley lately. I'll just pick on Uber, but there's plenty of them.
Linda: Some really naïve blind spots about reputation led them into a position they may never recover from.
Linda: Right? They were dominant, and you know, I know a lot of people who would rather walk than take an Uber. It's going to take a long time to heal those wounds. So, you know, it was pretty short-sighted not to have some more senior people who understood just how long that reputation damage lingers and what it means to the whole business. I mean, everyone's learned that in other industries. It's not like they're the first ones to make a mistake.
Ben: No. Well, and particularly with you being a CEO, part of what you're talking about there is the way that they decided to hire people. I mean, it's been in the news about Uber and other places, where the culture they built, and the way that they hire people, and who they hired, affected a lot of ... it's like, well, of course this was the outcome you were going to get. I mean, as the CEO of Citia, how are you taking a lot of what we're saying now and applying it when you were building out a team and deciding the culture you were going to build? How did you inculcate that kind of, I don't know, call it diversity, or breadth of experience? How did you go about that?
Linda: Again, I want to cite experience because, you know, I have made a lot of mistakes. So, hopefully I remember them and try not to make them again. One of my favorite lessons is that all your weaknesses, sure as shit, are going to show up in your team, right? So, if you don't address your own weaknesses, you know, you're going to encounter your not-so-great self over and over. So, it really forces you to change yourself, right, to change your team.
Linda: Then, I think every culture has strengths and weaknesses, and letting something get too strong always has a risk associated with it. You know, it's back to the original conversation we were having. Some things are easy to measure, and some things are hard to measure. So, what's easy to measure is: Uber's killing it, and look at the growth in drivers, growth in rides, growth in profitability, et cetera. But, what's hard to measure is: what's our reputation in the world? How is that going to affect us over years? That's actually extremely expensive to measure. It can be measured, but it costs millions of dollars to measure something like that well.
Linda: So, people go forward without enough data, and so you learn to just pay attention to the things that are hard to measure but that you can kind of guess about.
Ben: Well, you know, to that point too, part of what you're talking about there is the reputation and trust you build. I think you nailed it there: it takes years to build trust with a customer, and, you know, literally minutes to lose it. I would think particularly with what you're working on now, where you're in the kind of advertising, marketing, and content area, where you're actually molding content for people to see based on their own context, you also get into that aspect of trust as well. I mean, is that something that you've had to grapple with, as in how you maintain the trust and privacy of these users' data and the companies you're working with? How have you actually encountered that yourself?
Linda: Most of the Citia customers are using our platform for content marketing, training, HR, comms, and sales. So, that tends to be a part of the entire universe that's a little less sensitive, right? We're not holding anybody's financial information. We don't have a lot of context about the actual customers. So, we are in, maybe, an enviable position of not having a lot of risk in that way. Our customers, on the other hand, are managing some things that are more sensitive. So, they have that issue with their customers.
Linda: So, we are ... our customers are companies and, their customers are consumers. So, we're a little bit one step removed from that kind of problem.
Ben: No, that actually makes a lot of sense. We are the same way at Sumo Logic. You're helping other people manage that. So, that does make sense.
Linda: We actually think that the data needs to be owned by the customers, right? The first- and second-party data. We see ourselves, as Kevin Kelly says, as part of how the big complex system is built from smaller, simpler systems. So, we are the container delivery system for content. We clip into intelligence.
Linda: You know, one of our customers, GE, for example: 16 industries, right? They have thousands and thousands of data sources, and they're going to change daily. So, they need a system where they can actually put sentences together, right? I know this; I want to deliver that to there. So, we're the delivery system for the content, but we're not the brains.
Ben: Oh, that makes a lot of sense. I guess that's where things are at right now. There are such complex systems being managed with all this data that having these vertically aligned systems doesn't make as much sense anymore. You've got to find ways of actually giving a specific tool set that customers can overlay on processes they already have, and kind of pump their own context and data into that, and be able to do that seamlessly.
Ben: So, one thing as we kind of wrap this up a little bit: you've obviously spent a lot of time thinking about data and its impact, and how it intersects with society, particularly with your kind of unique background. What are some trends in everything we've been talking about that you think people aren't seeing? You know, what do you think are some things going on maybe kind of under the surface that are going to pop up before too long, that you're seeing and you don't think are really coming to the surface? Is there anything interesting to you right now that way?
Linda: Yeah. Something I'm barely understanding myself, which is that we're kind of at the collapse phase of an era. I think we brought it upon ourselves because, you know, we forced everything into these kind of linear and binary structures. Almost no data is discrete.
Linda: Right? We force it into increments so that we can calculate it, so that we can calculate it with linear and binary systems, right? The biggest hammer ever invented. But, it doesn't really reflect what's underneath it. It's false. And, so many other things are getting so delicate or complex that that binary forcing is our problem. You know, we have to let the more quantum, you know, continuous set of characteristics be expressed.
Linda: And, so lots of things are failing. It's not obvious, but even what's happening with social media and polarization. Like, is something good or bad? That's binary thinking.
Linda: I mean, I'm not a Trump fan, but he says a few things I like. I have a hard time myself not judging him entirely, and throwing him into 100% bad column because that's what we've been taught to do. So, you know, this kind of either or mindset is so ingrained in these generations. It might take the next generation to actually free itself from that because it's actually contributing to a lot of our problems. We want to square off. We want two sides.
Linda: Look what's happening, you know? So, I think that's kind of like an insidious substrate underneath a lot of our problems.
Ben: You know, one thing that comes to mind when you say that, Linda, is ... tell me if this makes sense to you, because I totally agree that a lot of life is a continuum. You know, we naturally as humans want to lock it down to very black and white because that makes it easier to understand. I feel that's probably fairly natural, but the world itself is a continuum. And when you start applying artificial intelligence and machine learning and some of these new techniques to this world of data, it's basically taking that in at scale, and almost automating the putting of things in boxes.
Ben: So, a lot of these problems that we've been seeing with artificial intelligence and bias and things like that seem to me to be basically taking what you just said, our kind of reductionism and wanting to put things neatly in black and white boxes that we can understand. All these new technologies are doing right now is basically automating that and making it happen a lot faster. In some sense, maybe that's actually bringing this to a head, because now we're seeing, well, you know, when I was doing this at a slower pace it wasn't as obvious how biased these results are. But, once you shove it into an algorithm, then suddenly it's like, wow. Okay, now it's blatantly obvious and this is scary. Does that resonate with you?
Linda: I think you're so right, because I was just thinking about this this morning. We've got this Go ... like the alpha whatever it's called. The AlphaZero program, I think, that beat ...
Linda: The human players. It didn't do what prior programs had done, which was just ingest every possible move and be faster at accessing them. It actually learned by playing itself.
Linda: And, you know, was original in our definition of original, but not many things are like Go, right? Look at soccer. If you're trying to apply the same kind of AI ... you know, a good soccer player not only has that inventory of plays in their head, that's where the machine is great and we're not so great. But, they read every player on the field and kind of know: who knows where I am, who is open, who is moving in which direction? I can tell by body language and gaze and momentum, so many subconscious calculations that this machine of neurons, trained for 40 years, you know, is great at. But, how is that computer going to cross over from stones on a board to soccer players on a field? Think about how big that step is and what's involved.
Linda: So, we're trying this now with self-driving vehicles, right?
Linda: I think it's really applicable because when I'm driving I see that mom with a stroller, and she's not looking at me. I see ...
Linda: Right, I know instantly whether people know I'm there or not for 40 people on a corner.
Ben: Yeah. Well, you know, to that point too, Linda, I think it's important to distinguish what you're saying too: it's not only that you see that there's a mother with a buggy. It's that you actually have a moral understanding of that. It's like, it would be really bad for me to hit that mom [crosstalk 00:34:38] with a buggy, because they have worth.
Ben: Yeah, and the problem is, like, AIs right now, to use [inaudible 00:34:44] expression, is like they don't give a damn yet. They don't actually care enough. That's not where they're at. So, that's part of what you're talking about ... it's not only taking all the data and processing it. It's also actually having that human sense of caring about the results, right?
Linda: Right. Two super hard problems to solve, right?
Linda: Go doesn't have those.
Linda: So, we continuously make this mistake of saying because a machine is good at what we're bad at it's better than us.
Ben: Yeah. Yeah.
Linda: No, it's not. It's better at the things that we're terrible at. Like, Citia is named Citia because we are rock stars in spatial and three-dimensional space. You can get around a city the size of New York, with ten types of transportation, multiple floors. You just know how to do it. You could not have been here in 10 years and you know how to do it. But, you can only remember 10 phone numbers. You know, computers can be way better than you at that.
Ben: Well, you beat me to it. I was going to ask you about the company name, because I forgot to ask before. So, that's perfect. No, I think you hit the nail on the head. I mean, it really is about that complexity. I think there's a sense ... when I was studying physics way back when, we used to always joke: I can solve a problem about a chicken if I assume that the chicken is a perfectly round sphere that is entirely smooth, in a place with no atmosphere. Then, I can solve the problem of how the chicken will cross the road.
Ben: So, basically, you can't always just reduce all the problems down to these very binary, reductionist models, because then you don't really understand what's going on, right?
Linda: And, we've done that routinely because we've created these little belief systems that say, "Well, that other stuff doesn't matter. You know, that human stuff is not scientific."
Linda: Right? And so, we push it off the chalkboard because we don't know how to assign a variable, and that's why our models are weak.
Ben: Yeah. Yeah, exactly. You can't expect good results if your models don't even approach reality.
Ben: Well, I mean, this has been a fascinating discussion, Linda. I mean, I think what you guys are doing at Citia is fascinating. I think in particular the kind of breadth of experience you bring to that problem is really indicative of the way we need to problem-solve in the future: bringing that human, design-thinking side together with the engineering side. That's the way we're going to solve some of these really hard problems. So, I've really enjoyed having this conversation with you.
Linda: Me too. It was great. Thanks for having me. It was a real pleasure.
Ben: Absolutely. And, thanks to everybody for listening to the Masters of Data Podcast. Check us out on iTunes or your favorite podcast app. Go rate us so other people can find us and listen as well. Thank you for listening.
Speaker 3: Masters of Data is brought to you by Sumo Logic. Sumo Logic is a cloud-native machine data analytics platform, delivering real-time continuous intelligence as a service to build, run, and secure modern applications. Sumo Logic empowers the people who power modern business. For more information, go to SumoLogic.com. For more on Masters of Data, go to MastersofData.com and subscribe. Spread the word by rating us on iTunes or your favorite podcast app.