The Advancements and Applications of AI (Guest: Dan Faggella)

Episode 72  |  43:21 min  |  07.14.2020
This is a podcast episode titled, The Advancements and Applications of AI (Guest: Dan Faggella). The summary for this episode is: The applications of artificial intelligence are seemingly never-ending. But how do we find where AI can fit within our industries? In this episode, Dan Faggella, CEO and Head Researcher at Emerj, sits down with us to talk about where AI is currently and where it's headed in the future. When we talk about AI, it's important to recognize that culture plays just as significant a role in furthering its application as the science and algorithms do.
Takeaway 1 | 01:51 MIN
Dan dives into AI while running a martial arts gym.
Takeaway 2 | 01:11 MIN
Dan Faggella vs. The Giant
Takeaway 3 | 01:41 MIN
Emerj: From Podcast to Research Firm
Takeaway 4 | 02:06 MIN
AI will have to prove itself by enhancing workflows in an unquestionably beneficial way.
Takeaway 5 | 01:16 MIN
Culture is a bigger problem than algorithms, science, and data.
Takeaway 6 | 02:26 MIN
The lens of incentives leads to a smoke-and-mirror effect.
Takeaway 7 | 02:06 MIN
3 reasons why risk is being emphasized by financial services.
Takeaway 8 | 01:43 MIN
The value of anomaly detection

In this week’s episode of the Masters of Data podcast, we had the opportunity to chat with Dan Faggella, CEO and Head Researcher of Emerj. Today we discussed the endless applications of AI and its fit within other parts of a company.

Every day the technology behind artificial intelligence evolves and improves, but to unlock its full potential, data science people and functional business people need to come together to build a shared understanding of where AI can be applied and to create a culture where data is valued.

Today AI reaches numerous fields, including medicine, finance, retail, and pharmaceuticals. Dan explains how, within these industries, there are certain AI features that companies across the board feel obligated to use to project a certain image to their customers, when in reality the AI dollars are going toward much more than chatbots and whatever else companies share in their press releases.

To learn more about Dan Faggella and his work at Emerj, check out the resources below.

The Emerj team also has its own podcasts, so be sure to check them out as well.

Dan Faggella: That crowd is often looking at what their competitors are saying. And they're looking at what's hot, while if you kind of follow the money at the level of vendor funding, at the level of actually talking to leadership and figuring out where their technology priorities are and where AI is being deployed, you learn that there's a bunch of smoke and mirrors on anything customer-facing.

Ben Newton: Welcome to the Masters of Data Podcast, the podcast that brings the human to data. I'm your host, Ben Newton. Welcome, everybody, to another episode of the Masters of Data Podcast. I'm excited about my guest today. He has a great background, and I think we're going to have a really fun conversation. Welcome to the podcast, Dan Faggella. He is the founder and head of research at Emerj. Welcome to the podcast.

Dan Faggella: Hey. Glad to be here, Ben.

Ben Newton: And as we always do, let's start out just talking a little about your background, and particularly in this time and space with what's going on. I love to ask people how they're weathering the apocalypse here. I'd have to say the high watermark is that one of my guests lives in a castle with a bridge over a moat. I don't know if you can beat that, but-

Dan Faggella: Yeah. I will tell you this, Ben, that's my dream. When I finally just sell it all and move to Austria, I'll do that. And there will be alligators in there, at least in the winter and the summer months. But I'm getting through this thing by getting as many walks in as I can. I still get some vitamin D. I'm in the Boston area, so that, some good friends, some good books, and plenty of work to do has pretty much been my survival strategy thus far.

Ben Newton: When you and I first got connected and I started looking into your background, I thought you had a super interesting backstory. I'd love to hear how you got into the area around AI and why you decided to start Emerj. Give us a little background on who you are. What's your story?

Dan Faggella: Happy to do so, yeah. It's certainly a little bit different than, “Hey, I worked at Accenture,” or, “Hey, I studied computer science undergrad.” When I was 20 years old and I moved out of my parents' house, all my friends were either delivering pizza or selling insurance or something super boring. At that time I was competing a lot in Brazilian jujitsu. I was training mixed martial arts fighters and competing. And so when I moved out I decided, “Well, I'll kind of pay my way by starting a martial arts gym,” never having run a business before, having no idea what the ever-loving heck I was doing. The guy I was working for went out of business. I was teaching there for free for about a year because he wasn't making any money. I took his mats, somehow eked out a profit as a young fella, and got really, really deep into competing and training jujitsu. By the time I graduated undergrad, I really wanted to learn about skill development: how do humans learn, what's the neuroscience, what are the cognitive models? Now, I wanted to win national tournaments. I wanted to train people to win national tournaments. And I was really fascinated by how we can learn to learn faster. And so the University of Pennsylvania has a positive psychology program. Ivy League price tag, unfortunately, for a poor guy running a martial arts gym, probably not the best debt to hurl on top of my shoulders, but I really wanted to learn from the best. I ended up, for that whole year or so that the master's took, driving back and forth to UPenn while running this martial arts gym and learning all about skill acquisition and human learning. How do humans actually learn?
And while I was there, Ben, the computer science folks had kind of tapped me on the shoulder and said, “Hey, all this neuro stuff that you're doing around learning, interviewing all these skill acquisition experts and stuff, there are actually a lot of corollaries between that and what we're doing with computers.” This is 2011, so this is the very early days of ImageNet. This is the very early days of applying NLP to Twitter data, when that was like groundbreakingly cool. “Oh, New York is a little bit happier than Chicago today. Ooh.” That was really neat back then. I remember getting a whiff of this and, by the time I graduated, coming to the conclusion that these technologies, as they develop, will become tremendously impactful to governments, the future of the human condition, and the future of essentially any kind of business. Even though I was still running my martial arts gym, and still competing, to be honest, I kind of decided right then and there: I'm really going to make it my passion and life's purpose to understand this kind of post-human intelligence and how this thing is rolling forth. I started interviewing folks at the Future of Humanity Institute at Oxford, all these different AI startups. Eventually I grew and sold an Inc. 500 e-commerce company. I sold my martial arts gym. I turned all my footage. I have a great video online, Ben, called Dan Faggella vs. The Giant, where I break some UFC fighter guy's ankle in like 11 seconds. And he weighs twice as much as I do. There's a lot of luck in that fight. I think there are a lot of versions of reality where that guy just crushes my head, but it turns out in that case he didn't. That video became really popular. And so I started building an email list off of that, started selling DVDs about jujitsu. Then I got other instructors who were teaching bladed weapon defense for law enforcement and all other kinds of self-defense.
And it became a pretty big online publishing company, got into a couple of million bucks, and I sold it. And after that, I was able to do AI full time. That was about three years ago.

Ben Newton: Wow. I've heard of a lot of different ways that people get into the subject area of AI. I'd say you have one of the more interesting ones, definitely, coming from the martial arts. When you talk about skill acquisition, at first when I saw that, I was like, “Okay, I think that's really cool,” but I didn't necessarily see the connection. But when you talk about it in the context of skill acquisition, and that's exactly what they're trying to do with AI, it makes a lot of sense.

Dan Faggella: Yeah. It's so funny, Ben. I really should write an article about it, but in all of my master's thesis work, there are direct corollaries to over-fitting and under-fitting with human beings. There are direct corollaries to all sorts of ways that you can mistrain AI systems that also tie to sport performance or memorization or music performance, or other research on human skill acquisition. It's pretty fascinating. That was what connected the dots for me.

Ben Newton: Yeah. Yeah. It makes a lot of sense, and I've definitely talked to other people on this podcast about the musical side, since I love music myself, and I can definitely see that. One time in particular, I was trying to learn the banjo and I got bad instruction. I basically had to give it up after a couple of years and then rediscover it later, because the misacquisition of the skills, I guess, as you were saying, actually became a deterrent to enjoying the instrument, because it was really hard to relearn it properly. Right?

Dan Faggella: Bad data, Ben. Bad data.

Ben Newton: Well, tell me a little bit more. You moved on from this business that you were able to sell, you got into AI, and you started Emerj. Tell me a little about why you started Emerj and what you've been doing with it.

Dan Faggella: Yeah. Emerj began some eight years ago as just a podcast, which is now the AI in Business Podcast. I was doing that just so I'd have an excuse to interview smart people about how AI is changing business and changing considerations for regulation. And then I started doing some TED Talks on those topics. But yeah, starting about four years ago, Ben, I decided I wanted to really make it into a market research firm. I was getting speaking engagements from intergovernmental organizations and companies, and what they really knew us for at that time, when it was me and a couple of contractors, was having a grounded understanding of the current state of affairs. What's possible in pharma with AI, let's say? Then, within those possibilities, which of those AI applications are working? Within banking, what can AI do, what's possible, and then what's working? This is the reputation we had built. And frankly, in terms of the editorial calendar and my own research priorities, that really was where we were focused. I said, “Well, if that's the stuff, then that's where I'd want to double down, because it's what makes us valuable for these groups.” I began Emerj Artificial Intelligence Research, kind of rebranded to that name a couple of years back, and essentially pivoted it into a firm that you could think of, Ben, like a boutique Forrester or Gartner. We're a market research and advisory company, but we have a very narrow focus, which is just the ROI of AI. We don't help people with CRM. We don't help people with blockchain. We don't go in and do the technical integrations. We provide landscape data about where AI is delivering ROI and what competitors are doing. And people use that to essentially pick high-ROI projects. If a bank is going to spend millions on a pilot for a conversational interface, they probably want to know what's working for Bank of America or US Bank or these other big players.
If you've listened to our show, you'll hear folks like the former head of AI at HSBC. We have the head of AI at US Bank, who's going to be on the program soon. And as it turns out, that intel, plus a lot of scored and sorted data around what's working, is really, really hard to get. You want to have that in your hands before you spend a lot of money. And so we are a market research and advisory firm.

Ben Newton: Well, and I guess through all of that, you must see some really interesting use cases and run into some really interesting applications of AI, like in the last eight years, I would think.

Dan Faggella: In a big way. Yeah. And a lot of connected dots and trends. It's been fun seeing broad technologies like natural language processing or computer vision, for example, evolve, and seeing how they're finding their fit in insurance and in retail and in these other sectors. It's a cool vantage point. And it's the one that makes us valuable for the folks who are our customers. Yeah, that's the focus here at Emerj: essentially helping folks with AI strategy by providing data analysis and a bit of advisory to allocate dollars properly and find ROI.

Ben Newton: Yeah. And that makes a lot of sense. And I can see how that is really useful. We've definitely had some conversations with people on this podcast around the data science and AI and ML area. You tell me if this rings true with you, but it does feel like in the last, we'll call it decade, but even within the last five years, there's been a transition from things that feel more like research projects to a lot more real-life applications in brick-and-mortar businesses, traditional businesses, where they're actually able to apply AI in a way that you can actually explain to someone who doesn't have a PhD. Does that ring true with you?

Dan Faggella: Yeah, it does. There are a lot of things happening at once here in terms of what's making AI more accessible. We did a great series called the AI Zeitgeist, which I'm pretty sure is Google-able, about how artificial intelligence will become more and more broadly applicable over the next four to five years. And some of the trends you articulated, I think, are pretty snug. People often think, “Well, the technology has to advance. NLP has got to get a little better. Computer vision has got to get a little bit more capable.” Actually, what has to advance is the osmosis between the brains of data science folks and functional business folks. Most functional business folks within, let's say, banking and insurance, so the majority of our customers, are very smart people. They know their industry better than anybody. We're essentially not working with any companies that don't do four or five billion in revenue. These are big companies, smart people, leadership, innovation strategy folks, but it doesn't necessarily mean that they have a conceptual grasp of what problems AI might be a fit for. If you asked where they could apply it, they would just think about the press releases from their competitors. And that's where their head would go. The fact of the matter is AI does require a bit of a conceptual grasp of where it could be used. And until we have that, we're going to have a lot of false expectations around what's possible. Same thing on the opposite side of the corridor here. Look at data scientists: you learn a lot when you graduate with a PhD from Carnegie Mellon, but the fact of the matter is you don't know anything about underwriting, and you don't know anything about how hard it is to change a core process within an established enterprise.
And so what we're seeing, and I happen to be a big fan of this dynamic (I'd like to see more disruptive change in the future, but for the present I'm very, very happy to see this trend proliferate), is more and more companies finding ways to keep most of the complexity of AI, the training of features, even the harmonization of data sometimes, on their side, and essentially be just a pipe that they can plug into the client and have an output come out the other side. I can give you some examples of this that I've seen that I think are making adoption a little bit easier. There's a firm that we just talked to called Aidoc, originally based in Israel. They're expanding to the US. I interviewed their founders some two years ago, and we just had them back on the horn. They've essentially figured out the workflow for radiologists. I forget exactly which diseases they're working on. I think lung cancer might've been the example we talked about on the call, and I think that's a lot of their work. Chest x-rays are extremely common. Lung cancer in the States is very common, and around the world it's very common, unfortunately. They found a way where the scans that come in from these x-rays just go through their system and are labeled by their system, and even, to some degree, double-checked by their own experts who have an understanding of radiology and of ML. Then they get piped out as a layer on top of the interface that the radiologists themselves look through, so that the doctor will look through, I don't know if it's like a VR environment, but there are multiple screens up that are displaying these images.
And instead of just being the images, where the doctor tries to find the tumors or the corollaries to risk, they might have some red circles or some highlighted areas, and maybe a short bit of notes around what seems to be getting called out by the trained algorithm, based on however many hundreds of thousands of past instances of this kind of cancer. And so in that case, we're not radically shifting a workflow. Now, we could argue here that we should be radically shifting workflows. Some of these workflows are just not built for AI, and, you know what, we could probably be more capable if we did. But the fact of the matter is that making shifts that monumental is really, really, really tough. I think AI is going to have to prove itself by being a set of pipes and by augmenting and enhancing workflows in an unquestionably beneficial way. And only with enough of those shakeups are we going to see people genuinely reinvent the enterprise. I smile at that dynamic. I see it spreading across industries. I'm happy to see it, because I think it will help to proliferate the general benefit of AI in a deeper sense in the coming, let's say, two to three years.

Ben Newton: Yeah. You remind me of a conversation. I think maybe I mentioned this briefly when we talked before, but I interviewed a data scientist at the Steelcase furniture company, and one thing that I thought was really interesting in his story, relating to what you said, is that he had, and has, a set of data science skills, and you build a team, but they didn't necessarily understand the furniture business in depth. Where I think he found success was going to find the people that know this inside and out, like they live and breathe office furniture and how you set up an office, whatever, and then making those connections. And one thing that he said, which stuck with me, is this idea of iterative change. It's like, you want to do the revolutionary stuff, but some of it is building that confidence, showing gains, and building a routine and a muscle memory of working together, and then taking the huge depth of business knowledge that these people have, people who know the business but have no idea what data science and AI can do for them, and connecting those dots. Is that kind of getting at what you're talking about there?

Dan Faggella: There's so much to go into with what you just said, Ben. Yes, I concur on many levels. Some of the reason that AI is being asked to be a “quick win” is because executives don't have an understanding of what's required to make these systems work and their longer-term strategic value. And so there is actually a problem with this leaning towards, let's just do something small, let's plug it in. AI is not IT, is something I like to say all the time. One of my mentors, Charles Martin, says that. It is seen as IT by much of the C-suite. Again, these are extremely smart people, it's just not their area of focus. Just like for me: some of the people I work with in banking know so much more about compliance, it just blows my brains. AI is just new, and so we're learning. The goal is to learn there. Some of that is actually, I think, a little bit of an unhealthy dynamic that education will help to alleviate. Some of it, however, Ben, is necessary. We absolutely do need to see some precedents of things working before we commit to radically altering a process. We want to know some results can be delivered. There's a rational and an irrational part to that dynamic until we see the core capability building. What were you just talking about with that data scientist? You were talking about new ways of working with teams. He's bringing subject matter experts and data scientists together. You're talking about new ways of working with data, figuring out which data sources are important, which of them might be useful for our users or for training algorithms. All of these new skill sets, overhauling data infrastructure and the value of data, creating a culture where data is actually valued, being able to build cross-functional teams that can work together on projects. And as it turns out, there's no done-for-you cookbook on that. People are kind of learning the hard way.
Ben, until those prerequisites, those new skills are seen as a kind of ROI unto themselves, seen as a kind of necessary set of skills that will enable us to leverage AI and leverage data far into the future and give us an advantage at a culture level, we're always going to be picking little nitpick projects, little fun pilots that genuinely for the most part are not going to make much of an impact on the enterprise.

Ben Newton: Yeah. If there's one thing I think I've learned in being in the technology industry for 20 years, it's that there's always a tendency to try to apply technology to solve all problems, when nine times out of ten it's a cultural problem. And so being able to have that recognition is what then allows you to do the leapfrogging, revolutionary things. When you connect the culture dots and get that connection where people can trust each other and make that leap, that's when you can do the big things. But that's hard.

Dan Faggella: It's super hard. If you talk to hands-on AI consultants, they'll tell you straight up that culture is a much bigger issue than the algorithms, the science, even access to the data, in terms of actual deployment. We've written a great article called Critical Capabilities. If you go on Google and type in Critical Capabilities Emerj, E-M-E-R-J, you'll see this article. It's actually an infographic, if you want to know how we articulate these prerequisites to deployment, these new skill sets that are really going to be a strategic advantage. I really, really hope that over time, as you had pointed out, the culture shifts. And I think if we can quantify the areas we have to move on, see them as an advantage, and be able to move towards them, that helps firms actually be able to do it.

Ben Newton: Maybe let's talk a little bit more about some of the specifics of the cool use cases you see. I think when you and I were talking, it sounded like you've been delving pretty deep lately into the financial industry. What kind of use cases and trends are you seeing? What comes to mind?

Dan Faggella: Yeah. Again, as mentioned, our most robust research has been in banking and insurance, and that's just a consequence of who's worked with us most. I think it's also a consequence of who has the big budgets to really push these projects forward. And so there's a lot to go into here. We analyzed something to the tune of 120 AI startups across insurance and banking and leveraged our proprietary scores on things like evidence of ROI and ease of deployment, so we could compare them side by side for different kinds of applications and categorize them into what kinds of capabilities they enable. There's a massive map here to go through, but I can give you some high-level trends that I think are really important and useful for folks. One of those is around just how much chatbots are genuinely hype in financial services across the board. Just to back this up with some numbers for you: something to the tune of 40% of press releases and outward-facing communications about AI projects from global top-100 banks, over the last, I want to say, just under two years, has been about conversational interfaces. While from our best research estimate, from talking to enterprise leaders and from looking at the funding of the vendors and speaking to the vendors themselves across the landscape, less than 6% of the actual spend on AI is going to conversational interfaces. This really highlights for us, Ben, a really important dynamic that we found to be even more true in finance than other sectors, but it carries to all sectors. We call this the lens of incentives. The lens of incentives states that companies are going to talk about the AI projects that make them look good to their customers or their shareholders. And they're not going to talk about the projects that don't make them look good to their customers or their shareholders.
If we're a bank and we see everybody else doing a press release about how cool, how hip they are for their customers with a chatbot, well, we're going to say, “We need one of those.” And then we're going to talk to the first vendor that makes a bloviated promise. And we're going to pay them a half million bucks to be a guinea pig and have it not work, but we're at least going to have a press release, and we're going to somehow feel almost satisfied by that. There's a keeping-up-with-the-Joneses factor here that actually hides where the money is going. Compliance, fraud, and cybersecurity are gobbling up, by our estimates, safely over 50% of all AI dollars flowing into banking right now, in terms of working with vendor companies and partnering with outside firms. But nobody talks about that, because, Ben, imagine you're a Bank of America customer, and they send out a cool tweet that says, “Hey, we're working with Darktrace,” or whoever it is, whatever AI cybersecurity vendor it is, “to help protect your credit card info from being stolen.” Do you feel safer or less safe? You probably feel less safe. Or let's use another example. Okay, Ayasdi raised $100 million. They have at least some traction. Let's say Wells Fargo says, “Hey, we're using Ayasdi to make sure that fewer terrorists can buy missiles by sending money through our networks.” Are you happier, or are you sort of more concerned? The answer is sort of patently obvious, Ben. It's patently obvious, but what this results in is that when we bring in the actual map of what's happening in the market, it's often a real brain-rattler for the people in the C-suite. We often work with innovation and strategy leaders; they're the ones thinking about finding fit for AI. That crowd is often looking at what their competitors are saying.
And they're looking at what's hot, while if you follow the money at the level of vendor funding, at the level of actually talking to leadership and figuring out where their technology priorities are and where AI is being deployed, you learn that there's a bunch of smoke and mirrors on anything customer-facing. To give you an idea: compliance, fraud, and cybersecurity have something to the tune of 5X more funds raised by companies in those three business functions than customer service, sales, and marketing combined. And so people think it's the customer-facing stuff in financial services, but by golly, I'll tell you right now, it's not. And there are a lot of reasons why that's the case, and a lot of reasons why the perception is being bent the other way. You'll let me know where you want to take this. There are a lot of branches here, Ben, but that's a mega trend that I think we really do need to set straight for the people in banking, who need to know where the action is today.

Ben Newton: And that actually rings true with things I've seen over the years. I guess it would make sense, because in some sense, making your customers happy is always a good thing and it's going to look good, but the reality is there's probably a lot more money lost in terms of fraud and wheel-spinning when you're doing compliance and things like that. That risk management area is where a lot of the actual risk is: fraud and compliance. I assume that's why they're spending there.

Dan Faggella: I can actually give you the three big reasons, from our almost too many interviews in this space, why we believe that risk is being emphasized. Risk just tends to be the focus specifically in banking, but we could say financial services in general; certainly banking, of the FS world, is where more than 50% of our work is. Risk is a focus. Since 2008, for example, all the regulatory issues and compliance considerations have really had people on their heels. And it's still a cultural thing; there's an antsiness and a natural gravitation towards that, because that's the culture and that's what leadership has in their minds. That's one magnetic force pulling us towards risk. Now, the way that this manifests, Ben, this is a fascinating thing. We talked to, again, in financial services alone, well over 100 companies, and you learn a lot of interesting lessons when you talk to these people one year, then you talk to them the next year, and the next, and you get a sense of what they're learning. What we found out, which is a really interesting discovery, is that a lot of these companies are 18 months into a bunch of pilot projects and they still can't really quantify what their result has been for any one known customer. They can't say, “Hey, we saved them X money. We made them X money. We reduced X time.” And there are a lot of reasons why measurable results are hard, Ben. One is that it's actually hard to get results in life. That's one thing. A second thing is a customer just doesn't want to let you quote anything about their project. A third part is that it's often very hard to measure and attribute. If we help with two or three little junctures within a long workflow that involves three or four different people, it might not be the case that we were ever measuring the amount of time that those junctures took.
After the fact, we might be able to say, “Oh, it feels faster,” but nobody can actually measure anything. It's not just failure that makes measurement unable to be sort of wrought, but it's certainly one of the factors. Regardless, when a company does not have those brag-worthy bullet points, Ben, what they end up doing is leaning on risk. When we don't have a measurable benchmark to sell a customer by, what we need is a plausible story that this could reduce risk. “Hey, Mr. Buyer, if you could get a 360 view and have a better likelihood of pulling up all the docs related to this one customer, so that you could remove the stuff that they want to get rid of, or you could find the things that they've already been sold so that you don't do any duplicate stuff that could be sketchy, wouldn't that possibly reduce your risk a little bit?” Or, “Hey, Mr. Buyer, if you could search through your legal docs and find these kinds of risks, or, a regulation has changed and so you want to find the things that might relate to that, if you could pull up more of those and edit more of those and assess them to make sure there's nothing that's going to bite you, wouldn't that be valuable for you?” So you need a plausible risk story. What we're seeing in finance is that a plausible risk story is a big needle-mover for companies that can't just lean on making money or saving money. Let me know if that makes sense.

Ben Newton: Yeah, no, it absolutely does. I guess you've tied it in a bow here: essentially, risk is the safe cultural space in the financial sector, because that's one of the bedrocks of that system. Yeah, that makes a lot of sense.

Dan Faggella: It's an interesting dynamic. And again, is it right, is it wrong? There are some cases where I think leaning toward risk, the way I see financial services firms do, is the right thing. There are other cases where I really do think they have to open their minds to other aspects of AI and other potential benefits, but it's reality. Here's the next dynamic, Ben. I don't know why nobody talks about this: the skill set required to deploy and keep up customer-facing AI experiences is basically concentrated in one place, and it's called Silicon Valley. If you have data science skills, good for you. But if you have data science skills that involve interacting with a product, satisfying a customer need that you can then measure, and making sure the system doesn't steer or swerve in ways that start to mess with that customer's experience, you are astronomically more valuable and your skills are astronomically more practical. If you're a bank, nobody in your company has those skills. Now, can you hire a couple of them? Sure. But now they have to fit themselves into a bit of a more ossified environment, and some of them don't really like that. That skill set is rare. That's another reason why customer-facing is not moving as fast: it's a rare skill set to have in-house at banks, and it's really tough to get those deployments to work. If I have an AI product, Ben, let's just say I help you search for legal documents. How boring is that? But I help you search for legal documents, and as it turns out, that's a pretty valuable thing in a bank where you have millions and millions of those things. Let's say 40% of the time, when you try to search with this AI tool, it kind of conks out and doesn't really work, and you have to go back to your old search methods and look at all the different silos individually, and it's kind of annoying.
But 60% of the time, it'll generally pull from all the different silos and show you the stuff that's most likely to actually be relevant for you, and you can use it. If I work in a bank and I have to use that tool every now and again, I'm very unlikely to quit just because 40% of the time it doesn't work, because 60% of the time it's helping me save some time. Now, if we translate that same 40-60 to answering customer service requests, or to guiding people along the marketing path with some sort of interactive experience in the mobile app that's customized based on their needs, and that's horrendously wrong 40% of the time, even 10% of the time, then all of a sudden we have a good shot at losing market share, losing face. We have a lot bigger consequences. The iteration cycles, the understanding of product, the understanding of interface, the understanding of how to test and benchmark these things to make sure they're not swerving in some direction we don't want them to go, those skill sets are so damned rare. It is hard for me to emphasize it enough to you, Ben.
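Dan's 40/60 point, that the same reliability rate has very different consequences for an internal tool versus a customer-facing one, can be sketched as a toy expected-value calculation. The 60% success rate is his illustrative figure; all the other numbers (minutes saved, customer value, churn probability) are hypothetical placeholders, not real benchmarks:

```python
# Toy illustration: why a 60%-reliable internal tool is tolerable
# while a 60%-reliable customer-facing experience is not.
# All numbers below are hypothetical, for illustration only.

success_rate = 0.60  # the tool works 60% of the time, per Dan's example

# Internal document search: a failure just means falling back to the
# old manual search, a small, bounded cost per attempt.
minutes_saved_on_success = 10
minutes_lost_on_failure = 2
internal_expected = (success_rate * minutes_saved_on_success
                     - (1 - success_rate) * minutes_lost_on_failure)
print(f"Internal: expect to save {internal_expected:.1f} min per search")

# Customer-facing: a failure risks losing the customer outright,
# so the downside is not bounded by a few wasted minutes.
value_of_retained_customer = 1000   # hypothetical lifetime value
churn_probability_on_failure = 0.05
customer_expected = (-(1 - success_rate)
                     * churn_probability_on_failure
                     * value_of_retained_customer)
print(f"Customer-facing: expect to lose ${-customer_expected:.0f} per interaction")
```

With these placeholder numbers, the internal tool nets positive time saved per attempt, while the customer-facing deployment is expected to destroy value on every interaction, which is the asymmetry Dan is describing.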

Ben Newton: Yeah. Well, I mean, that's not just in AI, right? What you're describing basically summarizes the last 15, 20 years in business and enterprise software in general, because applying the expectations from the consumer space and actually translating them to the enterprise space has been the life and death of hundreds of thousands of software companies for the last couple of decades.

Dan Faggella: Yeah. It's been sort of wild, and in AI we see the same thing. Certainly, an understanding of product and of user experience is its own skill set. But if you want AI literally, programmatically altering those experiences for customers, you'd better have iteration cycles and quality control and testing methods that are just so on point it's ridiculous. And if you're not even used to AI in the first place, if your culture is just brand-new to AI, why are you trying to run the super marathon here? Let's build a couple of those internal things and learn a few hard lessons before we start doing something that can influence the customer experience. That's reason number two; I can go into the third.

Ben Newton: Dan, let me ask you a question about that; I'd be interested in your reaction. I did a series of interviews last year with some people, and we were talking about some of the negative implications of AI, in particular where black-box machine learning algorithms are making decisions and people don't understand them. I'm sure you're familiar. What occurred to me as you were saying that is that I would expect the financial industry might have more risk there, to overuse the word. What I do find really interesting is, what's the first example a lot of the activists in that area go to? They go, "Oh, this algorithm is determining whether or not you're going to get a mortgage, or whether or not you get life insurance, or whether or not you get a loan," whatever it is. Only recently, literally in the last couple of years, are people applying that same level of scrutiny to the Facebooks and the Googles of the world. I would expect that's also left some of these financial executives a little burned, where they're afraid of going forward. Is that true?

Dan Faggella: Yeah, that is very true. It's a super important point. Again, in the enterprise writ large, everybody's got to be a little bit more careful. It's not a startup's "move fast and break things"; there are these big consequences. Lending is one of those areas. Every single time we do a podcast episode with lending firms, and some of these vendors are our customers, I'm certainly not throwing shade here, they're forced to, as they bring on their CTO, talk over and over about how transparent it is, because there's antsiness around, "Oh no, are we secretly proxying for race through some zip code thing that is going to get us in trouble with a regulator?" So, there are some of those concerns. The fact of the matter is that a lot of AI applications within banking, or financial services broadly, are really not discriminatory and don't have gargantuan ethical considerations. But yeah, I think you're touching on a dynamic that is in the minds of many folks today, and that fear starts to stretch into everything, like, "Oh no, what about our search, just document search and discovery, what are the ethical considerations?" Man, you really have to put a bounding box around which things are legitimate ethical concerns and which aren't, and I think we've done a reasonable job of doing that in finance.

Ben Newton: Yeah. Absolutely. Well, I interrupted your flow there. What's your third point?

Dan Faggella: I talked to you before, Ben, about how companies are beginning to adopt AI as just a new set of pipes, as opposed to overhauling an entire process in a radical way. And as it turns out, anomaly detection by itself is exceedingly valuable with the data we probably already have on hand, or maybe a slightly cleaner version of it, in areas like fraud, anti-money laundering, and cybersecurity. I'm going to give you two examples and explain why anomaly detection is the closest thing to "low-hanging fruit." Now, this isn't to say that every time we work with a bank in Australia or the US we say, "Oh, you need to do anomaly detection as your first thing." No, but we do like to make sure they understand this dynamic. I'm not making umbrella statements; it really is different per client. But I'm going to give you two examples. One is in wealth management. Let's say I'm a big bank that has a wealth management wing, and I want to build an AI system that's going to recommend content and stock market updates to my wealth management customers, maybe even tips about how to invest, and maybe even some kind of recommendation system to help my wealth management agents or representatives, reminding them when to touch base with their different customers. "Oh, hey, you should call Steve because the algorithm says so. You should email Steve because the algorithm says so." Little suggestions around when to touch base, plus content recommendations to that user. And we believe this is going to give customers a better experience, have them invest more with us, and give them a better customer lifetime value. Now, I'm not going to go out on a limb here and say that's a bad idea to its core. But what I will say is it's extremely hard to measure success with that. What would we actually need to measure success?
Are we going to wait two weeks, see if the person opens more of the emails, and say it's working? No, no, no. That would be ridiculous. We would probably need years, plural, in order to see: are people investing more with us in general? Are they showing fewer of the churn-correlated behaviors than the control group that we're treating the normal way? Ultimately, in terms of customer lifetime value, we might need to wait even longer than that, with a control group running concurrently with these folks. Maybe it's exciting in the near term, but maybe it kind of drives them away after a certain amount of time; we don't really know. As it turns out, we'd have to come up with all kinds of proxies for measurement, and at the end of the day, it would be very hard and take a long time; we'd need a lot of benchmarking to even know if we're winning. Now, let's go to another example. Let's say I can see all of the money transfers going through my bank. Let's say I'm Deutsche Bank, let's say I'm HSBC, let's say I'm whoever, and I look at all my money transfers. I see who's sending the money, I see from where, I see how much, I see to whom, I see how frequently they're sending. Let's say it's 50 concrete data points per instance of transferring money. Let's say I also have a backlog of millions of money transfers that were, to the best of our knowledge, absolutely not fraudulent, not money laundering, nothing wrong there. And then let's say we also have a backlog of hundreds of thousands, or maybe millions, of instances that are almost certainly money laundering; they're bad things. If we have that, we can train an algorithm to do two things. We can match the patterns of past fraud, and we can determine what might be fraud or money laundering based not just on hard rules, but on deeper underlying patterns of commonality that we humans may not have identified.
We can hypothetically flag more instances of terrorist financing, missiles, or cocaine moving through our bank. Here's another thing we can do. We can find the patterns of normal, the patterns of non-risk, and we can determine variance; we can determine anomalies. You could say, "Ben, detect anomalies from that pattern of normal," and we can flag those too. Maybe it doesn't exactly look like past risk, but by golly, it doesn't look normal, and we can make sure those are flagged more reliably. If we can do that, it's actually not rocket science to run a concurrent test and figure out whether we're getting fewer false positives, fewer false negatives, and detecting more fraud and money laundering. Not only does that correlate to risk, which we talked about as factor number one, and not only does it have nothing to do with the customer, remember factor number two and the wealth management example, but it also has a snug, immediate fit with what ML does, which is detect patterns. That's the third reason why we're seeing a lot of shift toward risk in terms of where the money is initially going, and why sometimes, not always, we tend to recommend to our banking clients, and even insurance clients, that some of those risk applications are great places to build that skill set.
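The "pattern of normal" idea Dan describes, learning what clean transfers look like and then flagging variance from that pattern, can be sketched in a few lines. Everything here is a hypothetical illustration: the feature names, the synthetic transfer data, and the z-score threshold are all made up for the example; real AML systems use far richer features (Dan's "50 concrete data points") and far more sophisticated models:

```python
# Minimal sketch of anomaly detection on money transfers:
# learn the "pattern of normal" from transfers believed to be clean,
# then flag new transfers that sit far outside that pattern.
# Features and numbers are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic backlog of known-clean transfers:
# columns = [amount_usd, transfers_per_week, hour_of_day]
normal = np.column_stack([
    rng.normal(500, 100, 10_000),  # typical amounts around $500
    rng.normal(3, 1, 10_000),      # a few transfers per week
    rng.normal(13, 3, 10_000),     # mostly business hours
])

# The "pattern of normal": per-feature mean and spread
mu, sigma = normal.mean(axis=0), normal.std(axis=0)

def is_anomalous(transfer, threshold=4.0):
    """Flag a transfer if any feature sits far outside normal variance."""
    z = np.abs((np.asarray(transfer, dtype=float) - mu) / sigma)
    return bool((z > threshold).any())

print(is_anomalous([480, 2.5, 14]))    # looks like the normal pattern
print(is_anomalous([250_000, 40, 3]))  # huge, frequent, odd hours: flagged
```

Note the design point Dan makes: this flags deviation from normal even when the transfer matches no known fraud pattern, which is exactly why it complements a second model trained on the backlog of known-bad instances, and why its success (fewer false positives and negatives in a concurrent test) is easy to measure compared to the wealth management example.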

Ben Newton: Yeah, no, that actually makes a lot of sense; I think you explained it really well. Given all this stuff you're seeing, and you've clearly, over the last several years, really gotten your hands into these trends, what are you watching over the next months and years? What are you watching that maybe other people aren't thinking about?

Dan Faggella: Yeah. Well, there's a lot in that question, but I think for the time being one of our big focuses is helping clients steer. Strategy and innovation is generally where we fit in at the enterprise level, and those folks are thinking about how strategy is being redrawn in the kind of coronavirus era we're dealing with. Really, one of the things I'm tuning into is: where are we going to settle economically, and what's that going to mean for the appetite for AI and the appetite for RPA? I'm currently of the belief that in the next maybe 18 months, RPA is going to reap some of the immediate low-hanging-fruit rewards, as there tend to be fewer integration concerns with a lot of RPA products, but a lot of AI companies are going to move toward that new-set-of-pipes model versus the overhaul-your-data-infrastructure... [crosstalk]

Ben Newton: What's RPA?

Dan Faggella: RPA is robotic process automation, sorry.

Ben Newton: Right. Okay.

Dan Faggella: RPA, I think, is going to grab some of the low-hanging fruit, but I think we're going to see more of the AI ecosystem make itself more accessible. What I'm tuning into is how the vendor ecosystem responds. Luckily, in a given week it's not unusual for me to do eight or nine interviews with AI vendor firms across the landscape, so we're getting a pretty good pulse there, and these are often people we talk to over years. I also get a sense of how enterprises are adopting and how their appetite is looking. As I keep a pulse on those two factors, I'm going to figure out where the even better low-hanging-fruit opportunities are going to be for aiding in recovery and in digital transformation despite the economic hardships. That's really where my eyeballs are pointed now and where a lot of my interviews are oriented, with leadership and with vendors. I'm staying tuned, and I think we'll have some more good insights in the months ahead.

Ben Newton: Oh, no, absolutely. And with the way you explain things and the way you come at these subjects, you've really helped me wrap my head around them, and I'm sure other people as well. I encourage everybody to go check out your podcast and Emerj's website. Thanks for coming on, Dan. This has been a fun discussion; I find this area fascinating. It's an honor to have you on the podcast and be able to talk with you.

Dan Faggella: Anytime, brother. I appreciate you having me on, thanks so much.

Ben Newton: Absolutely. And thanks, everybody, again for listening. As always, find us and rate us on your favorite podcast platform, and recommend this to your friends so that more people can listen to the podcast. Thanks, everybody, for listening.

Dan Faggella: Masters of Data is brought to you by Sumo Logic. Sumo Logic is a cloud-native machine data analytics platform delivering real-time, continuous intelligence as a service to build, run, and secure modern applications. Sumo Logic empowers the people who power modern business. For more information, go to sumologic.com. For more on Masters of Data, go to mastersofdata.com, and subscribe and spread the word by rating us on iTunes or your favorite podcast app.
