
The Support Automation Show: Episode 9

Sep 23, 2021

In this episode of The Support Automation Show, a podcast by Capacity, Justin Schmidt is joined by Kristina Podnar, Digital Policy Consultant at NativeTrust Consulting. They discuss the importance of adopting the right policies for your automation processes, the biases and risks you need to consider, and why organizations need to be transparent about automation with their clients. 

Listen now!

Justin Schmidt: Welcome to The Support Automation Show, a podcast by Capacity. Join us for conversations with leaders in customer or employee support who are using technology to answer questions, automate processes, and build innovative solutions to any business challenge. I’m your host, Justin Schmidt. Good morning, Kristina. Where does this podcast find you?

Kristina Podnar: Hey, Justin. I’m actually in Northern Virginia, right outside of Washington, DC. It’s a lovely rainy day.

Justin: Oh, yes. We’ve got a few clouds here in St. Louis. It might be related to what you have going on over there. Kristina, why don’t we start by telling us a little bit about yourself, what you do and what led you to this stage in digital policy, and some of the other consulting work that you do?

Kristina: I have this really crazy background, and I always say that, and people go, “Oh, so does everybody else out there.” My background is- it really is weird because I ended up actually going to school. I got my MBA. I was really excited as I don’t know how old I was when I got out of grad school, like 24. I landed this job as a project manager. It sounded so glorious. I was like, “Wow, coming straight out of school, and I’m going to manage things.” Came to work my first day, and I was told that I was actually going to be cutting up Photoshop files, and I was going to write some ColdFusion, do some HTML. I went home and I cried because I didn’t know what CGI-bin files were.

I actually ended up doing a number of gigs after that, in terms of roles. I basically was a project manager, but did lots of Webmasters kind of things. For a while, I did business analysis gathering and processes. Went through a period where I did Dev, especially in the Java world. I’m the worst developer. Don’t hire me ever for that. An interesting thing to me is that through all these incarnations of consulting, what I realized is that we were doing the crazy dance around digital. We were always putting either clients or companies, depending on what my role was at the time, at risk. It was either because, I don’t know, once I brought the [unintelligible 00:02:10] website down for eight hours without a backup plan. I know, yes, claim to fame. Or we would actually transfer donations files without having them encrypted to the bank, just clear text, yes.

Those were really cool things in the early days of digital, but what I noticed over time is that we were still doing these things. I was like, “I think we should grow up at some point,” and I started focusing more on governance and then started to specialize really in digital policy. All I do now is help organizations large and small balance out the risks and opportunities of digital and actually help them understand. Be very deliberate about the risk that they’re taking, the opportunities they’re gaining, and creating policies so that everybody in the enterprise understands what they ought to do, and what they should not do. That way all of us can stay free and continue to do cool innovative stuff.

Justin: It’s very interesting because here on The Support Automation Show, usually the people I talk to are leaders in a specific support function, whether that be customer support or someone on the IT/HR side. What makes you unique, and what makes me really excited about the conversation we’re having today, is that you operate in a domain that is- I don’t want to say ancillary to support automation, because it’s actually very, very important- but it’s seen only when there’s a problem. Like if a company doesn’t have the right policies in place, or security or whatever it is, and they’ve got AI automation and something goes wrong.

It’s actually a foundational piece of the puzzle that business leaders need to be cognizant of when they’re bringing something like automation into the business. To kick us off here, I always like to ask this of our guests first because it’s interesting how everyone defines it. When I say the word Support Automation, what does that mean to you?

Kristina: I go very broad. I go super broad, and I would say it’s about having a piece of technology perform a job that a human used to do repetitively or could do, but instead we outsource it in order to get economies of scale. I think to me, that’s probably the broadest that I can go.

Justin: The economies of scale really are what drives value here. You talked about your career in the digital domain. 10 years ago, 5 years ago, when we talked about digital, we didn’t necessarily include AI, bots, automations that are– Usually when you think about workplace automation, the first thing you think of is the robot arm on the car assembly line. We’re getting into things like robotic process automation, support automation, AI in the workplace. When you approach a client or a client approaches you rather and they say, “Kristina, we’re looking at doing X, Y, Z with automation.” What are the first couple things that you tell them, and what are the first steps towards getting those policies and procedures in place that they’re going to need to be successful?

Kristina: Great question. Really when I’m working with clients, it comes from several different places. Either folks are really freaked out about risk. It’s like, “Oh my gosh, we’re going to do this thing.” They pretend like they’ve never heard of AI, for example. It’s like, we’ve been living with AI for a while. Sometimes people come from that where it’s like, “Oh gosh. I don’t want to get into regulatory or legal trouble.” Sometimes, folks actually come at me because their competitors are doing something. They’re looking at automation or they’re looking at new tool sets, and they’re actually looking at it as an opportunity- not something that’s going to create risk, but something that’s going to give them either economies of scale or some type of real competitive advantage in the industry.

When I start talking to folks, it’s always from a business perspective, like, what are you really trying to achieve? What is it that you’re trying to get done? From there, the conversation unfolds into what are the unforeseen risks? The things that we haven’t thought about, whether it’s legal and regulatory issues, process issues, or people and change management, because not a lot of people tend to be happy about automation. We look at that aspect, and then we talk about the opportunity: why are we doing this? What is the benefit? How are we going to actually understand and measure that? Have we thought about all of the benefits? If we do this, are there other unforeseen secondary benefits to the organization?

That’s really where the conversation always starts because that’s really the root of digital policies. Before we even start talking about a very specific technology, it’s always like, why are you even doing this? What’s the purpose? What’s the goal? Who is it going to help? Or is this something that you’re doing as a long-term investment? Because some companies may not realize any kind of benefit immediately; down the road, they’re going to realize some kind of benefit. Sometimes it’s just a matter of seeding things as well, which is fine, but the types of risks and opportunities that we see with immediate adoption are quite different than if you’re seeding something. Very important to have that foundational conversation.

Justin: On your site, which I recommend everyone go to, kpodnar.com, you’ve got this really great resource on digital policies, where you can go over each policy: why it matters, its key points, and how it relates to different industries and so on. One thing that you touch on in the algorithm formulation and management AI policy page that I think is really key here is bias. How does policy impact bias?

Kristina: Bias, if you can net it out- at the end of the day, bias is a risk. It’s a risk. It could also be an opportunity. If you’re biased in certain ways, it could be an opportunity for your business. For example, I’m biased in terms of competitive pricing. I might be highly sensitive to competitive pricing, and so it could actually drive me to be more competitive against a competitor. On the other side, bias may not be good when we have diversity in global markets. I might not understand, for example, that if I’m going to be speaking tomorrow morning in the UAE, I need to probably cover my head, right, and be appropriate for my surroundings. I wouldn’t know that unless somebody from that area told me, or maybe I did some research and understood that.

Policies really help to cue individuals into what types of things to think about when we’re talking about AI. It’s the type of thing that you should do in order to cue yourself. You’re not always going to know a risk. You’re not always going to know an opportunity. The question when we’re in AI, when we’re looking at bias, is: what are the things that should be cuing us to think differently? So that when we start something- for example, if we’re just starting a new project and we’re going to be thinking about a new technology- you might think about, “Hey, is this everybody that should be in the room? Are we missing a perspective here? Are we thinking about it in terms of a three-year life cycle? Is this something that’s going to be around for longer than that? What do we expect the future to be?”

It’s really about a series of scenarios and running through what-ifs if you will, to really consider the things that you haven’t considered. It’s almost like playing devil’s advocate and troubleshooting at the same time, and I love that. That’s like the biggest fun of our project team’s startup, when we do things together with clients. It’s like, what are the crazy things nobody’s thought about yet? Think the craziest you possibly can. Be as innovative as you can. Think about the crazy use case, the edge case that nobody’s really thought about yet. It’s almost like being a teenager again and thinking about how you could get yourself into trouble, and going through that process.

Those are the things really, that net out policies because you’re not going to create policies around everything. You’re going to actually have policies that cue you into what to do or what to think about, and then out of that risk opportunity scenario, you’re going to think about what are the risks that you really want to worry about? Not everything is worth worrying about always.

A lot of organizations have things like algorithm bias. Well, what does that really mean in your context? Again, it depends on who you are, where you’re operating, how you’re operating, and thinking through the use cases through and through to see where does that come in? What does bias mean in that context? What are maybe the types of biases that you haven’t even thought about yet? Because everybody’s talking about ethics and bias, but those are their definitions. What is yours?

Justin: In the AI space, when people say bias- excuse me, bias- speaking of being a teenager, my voice is cracking. It’s been a long time since I’ve been a teenager, but here we are.

Kristina: You’re getting younger.

Justin: I’m getting younger every day. The bias question is an interesting one because oftentimes in professional settings, when we think of bias in business, we think of an algorithm- the classic example is an AI that scans resumes, and there’s some sort of gender or demographic bias that comes through, and the AI is disproportionately disqualifying people of color, for example.

There are other, more subtle things- maybe, to your point, things you’re not thinking of- and one thing that’s come up a couple of times on this show, and a conversation we’ve had with clients too, is the voice of your– Let’s take a very simple automation AI: the voice of your chatbot. If you’re going to be using language, and if you want to put jokes or cultural references or something in there, you may be alienating customers. You may be creating hard-to-understand interactions or points where people get lost, whatever it is, and you’re right. You have to really step back and think about the broader implications of everything here.

It’s fascinating. When you go into an organization, do you typically have these discussions with leadership, and then empower them to communicate that downward? Or are you coming in and working with individual stakeholders plus leadership and bringing everybody together? Because I think it’s very important, when companies go down the support automation journey and start bringing automation in, that the executives, the front line, the customers- everybody- understands what’s going on. I’m just curious what the approach is, top-down versus bottom-up.

Kristina: I actually tend to do top-down and bottom-up and wide all at the same time, which probably sounds somewhat chaotic, but it ends up actually being more comprehensive. You definitely need your leadership to actually be involved. You need to understand the business strategy, and that usually is happening at the top, but you also need to actually do a bottom-up, because those are usually the people who actually know what’s happening. I’ve never met an executive that can really personify for me, for example, the actual consumer, the person who is actually calling the call center. Yet if I go to the call center, they’re like, “Oh, I’m all over this. Let me tell you what’s happening here.”

To me, it’s always fascinating, because I also like to go to HR, and I like to go to procurement. I talk to all these folks that are considered traditionally outliers, and people are like, “Oh, why do we need HR here?” It’s like, “Well, don’t you have people that are training your chatbot? Why would you not talk to HR?” They have a role. It might be a very marginal one. It might not be a very active one, but they have a role. I tend to go to pretty much everybody who understands what they do, how they do, and how they might actually adopt the technology we’re talking about. How they might impact it.

For example, going back to the notion of bias. I might talk to everybody about bias: how do they interpret bias? What is a bias? Have they thought about things? Recently, I was working with a client that had a call center, and they were thinking about bias in terms of the call center. One of the questions that I had- and it’s funny that you say chatbot, because they brought up the notion of, “Well, we have chatbots. We could just serve more people.” I’m like, “Well, yes, but let’s dial it back, and let’s see: is that the right channel to ask some of these questions in?” Because we’re biased just because it’s convenient for us. We’re biased towards ourselves. It’s not necessarily about the end-user.

If I’m going to ask about sexual orientation, marital status or age, is that really a channel that’s most comfortable for a user to interact with over chatbot, or is it better that there’s a human asking that person? Maybe we don’t want a human asking that on a phone call, we want a chatbot to be asking that on our website. It depends on the client. I think being aware of that, and then starting to break down, where are some of the biases? Different folks have different perspectives.

A great example, again, going back to a call center group that I was talking with, they weren’t really thinking about bias towards people who call frequently. Some of the HR folks said, “Well, yes, the call volume has been much, much higher, and what we’re hearing from employees is frustration around certain types of things.” When we analyzed the trends, it seemed like older customers were calling more frequently with a specific type of problem.

It’s fascinating because how the organization was dealing with that demographic was biased in a way. They were actually trying to decrease phone time and increase the adoption of other channels, when in reality, higher customer satisfaction was probably going to come from servicing those individuals via phone calls, or voice in human interactions. A great example of bias- ageism- pointed in the wrong direction by the organization.

Justin: That’s really interesting, and what I like about that is that it is a new perspective on one of the common themes in support automation of the low-level repetitive tasks. That’s always the first thing people point to when you want to bring automation into an organization, and you should. That’s the low hanging fruit. One of the secondary effects of this is that you create an opportunity, and I want to pin the word opportunity there, to allow your best people to spend the most valuable time on the most important tasks or to allow- this is the call center example, if you automate the simple stuff and make it easy to connect to a human when you need to, you can facilitate this customer-brand interaction that requires a phone call.

The agent isn’t worn out because they’ve answered the same question 300 times in a row. They’re fresher, because, again, the repetition is taken up. People are freed up to do their best task. From a customer perspective, you’re having a good interaction with the person on the other end of the phone, therefore good interaction with the brand. What I loved about what you said there was that you looked at it from a- you painted that picture from a top-down perspective of like, the most important interactions are occurring over the phone. Think about the channels when you bring automation in. That was very interesting.

When you look across industries- and you’re in a very specific and interesting position for this, because you’re in a consulting business and work across the gamut of everything from healthcare to government to retail, whatever it is- is there a particular industry or market segment where you would especially caution leaders looking into automating some portion of their support function or back office? Is there a particular industry that has particular challenges that you think are of note?

Kristina: No. I think that every industry, every vertical has something that they need to consider. When you started asking me that question, my head immediately popped to the NGO sector and humanitarian crises, for example. We obviously don’t want to automate certain aspects of humanitarianism, because there’s risk. There’s risk of things like hacking and data leaks, things that would actually endanger people’s lives. The value is out the roof, but you know what? You have that in pharma as well. We could automate some tasks that would actually cause issues from a pharma perspective and cost people their health and lives. Automation in the automotive industry as well.

Everybody has a risk. The question is, what does that risk look like in your organization? For some it’s going to be the loss of life, for others it’s going to be the loss of a 300-year-old brand in bankruptcy of the company. Another one, it might be the welfare of their employees, and so I don’t think so. I honestly don’t think so. I think it’s just a matter of pivoting and understanding what’s unique about you? Where are the dangers? Where are the opportunities?

Justin: There’s certainly something to be said for- while it may at first blush seem like, oh, humanitarian issues- if I think of the humanitarian issues that are going to arise in Afghanistan now, for example, those are obviously big, thorny issues that we need to deal with, and deal with properly and ethically. That doesn’t mean that the data-sharing or data-breach possibility of something like filling your prescriptions online doesn’t matter. Those are also opportunities for data to be used the wrong way, or for hacking or some untoward behavior. They’re both important, and you’re exactly right.

When you look at the broader picture of the proliferation of AI and automation and the Digital 3.0 or whatever that we’re seeing with all this, obviously, government bodies are starting to step in and do things. Like we’ve had GDPR. What’s the California law, CC–

Kristina: CCPA.

Justin: CCPA, yes. Those are recent reactions to digital policy and digital risks that have been going on for a long time with email and data compliance. AI and automation have a whole other basket of issues that we haven’t legislated into fairness or anything yet. Really long-winded way of asking: what do you see from a public policy perspective coming down the pipe that business leaders should be aware of or be thinking about?

Kristina: That’s a great question. It’s interesting how every regulatory body- I don’t care where you’re located in the world- every single regulatory body is behind the curve on this, and it’s very unique. To me, when I look at what people should be thinking about, obviously, privacy and data protection of individuals is the number one area by far. It’s interesting, because people go, “Oh, GDPR,” which came into force on May 25th of 2018. Like you said, you have CCPA, and you have the CPA in Colorado. Those are coming up. They sound new, but they’ve been coming up since 2018. It’s interesting: South Africa had their POPI Act created in 2013. People have been thinking about this for a while. It takes a really long time for regulations to catch up.

The areas that I see that are now on the cusp are things like trademarks, innovation, and protection of your intellectual property. We just recently saw a case in South Africa, which was actually the first case where a patent was granted for technology that was invented by an algorithm. It was actually created via AI rather than by a human. We had a case in Australia where the same thing was initially denied, and that decision was just recently overturned to allow the granting of a patent.

I think what’s really on the verge, and should be on the mind of every business individual out there, is, “How is this paradigm shifting in areas that are traditional, and almost disrupting them? What will that do to my business?” Patents, innovations, and intellectual property protection are just a prime example for competitive areas, or verticals that are competitive or for-profit in nature. There are areas, obviously, like NGOs and nonprofits, that are also going to be disrupted in different ways. Think about deep fakes and humanitarianism. What happens if somebody actually deep-fakes an individual who’s fleeing their country or seeking political asylum? What does that look like?

There are a lot of implications across the board. I think what is happening is that leaders are so busy keeping the day jobs going and the lights on, if you will. They’re focused on things that are very known to them, like business decisions, investments, and funding. They’re all about keeping their organizations afloat, but they’re not having these deeper conversations around, what does this really mean to my business? What does it mean to my industry? I think that’s probably one of the biggest risks today.

Justin: That story, an AI creating something that could be patented. That is a ball of wax, right?

Kristina: Yes, it’s fascinating. It’s like what happens when we have basically derived properties because now you no longer have a human. It’s actually a machine creating something, and it’s a derivative. The problem isn’t the machine or the logic or the thing that’s being created. The problem is that we are thinking about copyrights and trademarks in very outdated ways.

Justin: It reminds me of one of the classic AI stories or examples that just personally always troubles me as someone who is a hobbyist musician: at what point is the next great album going to be released by an AI? One of the last things we have as humans is being taken over by technology. That’s maybe dystopian and interesting to think about, but with patents there are issues that extend beyond your gut reaction. There’s precedent from a legal perspective. There’s IP ownership and all the different– We can barely agree as a global society on what IP ownership really is. Now throw in the fact that, “Well, my IP made its own IP in this jurisdiction, which maybe is different in that one.”

Do you see a world where there could perhaps be disclosure in AI, that it is an AI before it interacts with somebody? You call Domino’s in a few years, and an AI answers the phone, and it’s virtually indistinguishable from a human. Or do you think we as consumers are going to continue to live mostly ignorant of this?

Kristina: No, I don’t think that we’re going to be ignorant. I don’t think we can be ignorant. I think that there will need to be transparency and visibility into that, if nothing else from a consumer trust perspective. I don’t know if you remember, a few years ago, I think it was AT&T that ended up actually introducing their chatbot, and we started doing crazy things like asking questions like, “Hey, I’m going to Italy. Do you eat gelato? What is your favorite type of gelato?” This thing would just blow up. It was very obvious that it was a machine, but I remember, actually, how annoyed people were when they found out that it was a machine.

I think you’re going to see the same thing when you start interacting with a machine that sounds, looks, and feels like a human, and you can’t tell the difference, because we’re almost there today in some aspects. I think that for most organizations, just from a pure liability perspective and the risk of what could happen if a consumer found out, they will want to be transparent. They’ll want to be transparent because that’s a good thing. If you know that this is not Kristina- that I’m actually a bot speaking on behalf of Kristina and her organization- that’s a lot easier to interact with, and it sets your expectations.

If I get hacked, and you find out after the fact, you won’t be as upset as you will be if it ends up on the front page of the New York Times that I was hacked, and all that data you gave me is now out somewhere on the dark web. I think that organizations will be forced into it by consumer expectations. That’s one point, as well as the type of technologies that we’re talking about, because it’s no longer just about, “I’m picking up the phone, and I’m talking to someone who sounds like Justin, but it might not be Justin.”

It’s about the fact that we’re on the verge of being in these virtual worlds. What happens when I’m in a virtual world? How do I know it’s an actual human I’m interacting with? What does that mean from a child safety perspective? Do I really want my child interacting with a machine? Do I want them interacting with a human, or might a machine be safer? It’s all these questions that I think are coming up, and I think we will have to have disclosure between what is “human” versus what is machine.

Justin: I think of the Google voice assistant demo they did a few years ago, when they called the hair salon or the restaurant- I think it was a restaurant- to book a table. It was a pretty great wow moment. I don’t know how much they’ve actually done with that technology, but it was one of those great “big company doing something flashy with AI” moments.

Yesterday we had Tesla, with their AI and their robot. It was interesting. Elon- it was funny, he was joking, but he also wasn’t- made it a point to say, “You’ll be able to outrun this thing.” [chuckles] You know what I mean? It was interesting. I saw that clip, and I smirked at first, but then I was like, “Oh, wait a minute. We actually do have to be serious about this, right?” It’s just crazy because the world is changing very quickly, and it’s very exciting. In our little corner of the world here, Capacity is selling software to support leaders to help them be more efficient and drive more value, et cetera. It’s a lot of fun to watch all of this unfold, but yes, there are a lot of things we have to think about.

The work you’re doing is formalizing a lot of those discussions in organizations on what you have to think about: “Look, team, here’s A all the way through Z on what you have to have prepared, written down, and understood by the team,” and going through some of the scenarios that could unfold there.

Kristina: I always go back to- I don’t know if you remember this or read about this, you may have- but the Cosmopolitan Hotel and Casino in Las Vegas ended up actually adopting Rose, their artificial intelligence bot concierge. I don’t know if you’ve heard about this or not.

Justin: No.

Kristina: It was really well done. It’s fascinating because they thought about policies in a way that they didn’t realize would benefit the organization down the road. What happened is they had sound governance, and so policies naturally flowed out of that. It was fascinating because what they did is they actually stood up Rose. Whenever you were in the elevators, she could recommend places to have dinner. When you were in the room, she could order you room service, yadda yadda yadda. What they did is they always put in safety checks. At what point would you actually stop the automation? At what point were the hand-offs to a human being? At what point could you almost pull the plug on this AI aspect and introduce the human aspect?

People go, “That’s important, right?” Because if you ask for a reservation and she can’t make the reservation, then you want a human concierge to step in, fair enough. When the shooting happened on the Strip, if you remember, what they were actually able to do was lock down the whole hotel. They were able to use the AI-driven concierge to communicate with everybody about what was happening. They were able to assure them that they would actually be safe. They were able to go ahead and take things like room service orders and communicate whether or not you could actually have food, and what type. I think they switched it up, so they were like, “We don’t have anything except for power bars and water, but you’re not going to starve, it’s okay.”

What they did is at one point they just turned off that system, because what they didn’t know is who was actually a threat on the Strip. Could they actually hack into the system? What was really happening? Who was behind what was happening? To me, it was fascinating because they were able to take this thing and utilize it in different ways for different reasons. The only reason they were able to do that is because everything wasn’t so automated that you couldn’t stop and interject a sub-flow in an ad hoc manner. I really, really like that example because I think it’s perfect. It’s a matter of having a human-technology partnership, understanding what’s real and what’s not, and still being very transparent about it so that it works for the business.

Justin: That’s a really great anecdote. I might have to borrow that one as a good example of both the power of this technology, the benefit, and also the immediate thing you have to think about, to your point, of rolling it back or whatever. You could see how, yes, that could go very wrong. You establish that trust with the guests. Then if the next interaction that happens is rerouted to a different decision tree on the responses, it could get very dark very quickly.

When you go into an organization, are you typically going in before they start making a lot of investments in vendors and software, or building it themselves, during, or after, because they’ve brought some tech in and they’re like, “Oh crap, we’ve got an issue here”?

Kristina: Yes, it’s all of the above, unfortunately. Because a lot of times I get people who invest in the technology and they’re halfway down the highway, and then they’re like, “Oh wow, we forgot to fill up on gas, oops. Let’s pull off, and no, we can’t get to St. Louis faster than this because we didn’t fill up the gas before we left,” type of thing. That’s always really, really sad for me but there’s always a mop-up project somewhere in the world.

Most of the time, in an ideal scenario, I will actually come in upfront, and I will usually stay in lockstep as the technology is at least designed, if not starting to be implemented. That’s because in my world, you can document policies all day long. I can write you binders of policies. We can put them up as PDFs on your intranet, and at the end of the day, you’re going to have a bunch of PDFs on your SharePoint intranet. There’s really no value in that.

The value in policies comes in being very deliberate about them, and then ensuring that the folks who need them most actually understand that they’re there. That they’re actually trained. That they’ve adopted them. They’re ready to implement them, and that’s really a change management exercise. What we’re really doing is changing the culture, changing the people, helping them adopt policies and implement them and change the way that they think about technology.

Ideally it happens at the start, but if not, then as they’re starting to implement things, raising the flags and saying things like, “No, you can’t use your SMS channel in that way if you’re doing XYZ with your voice channel,” or if your chatbot is doing something different, which happens a lot when we have a lot of siloed channels, obviously.

Justin: When you think about the future of support automation and how it will converge with the future of digital policies, what’s the thing that is most on your mind?

Kristina: What’s most on my mind is: how do you embed policies so that they’re seamless? A lot of the work that I’ve been doing in policy land over the last year and a half to two years has been around enabling folks inside the organization to have policies that are almost seamless to them. I work with a lot of marketing teams, for example, and it’s great because marketing teams are always thinking about things, especially in regulated areas.

For example, I worked with a pharma company that said, “Okay, I have marketing teams in the EU, marketing teams in the US, marketing teams in Asia-Pac. What do they need to know every time we do a campaign?” It was a repetitive, redundant thing, almost like, “We’re doing a campaign. We’re going to use these channels. We’re going to target these groups. What do we need to pay attention to?” We started to automate a lot of that. It’s all about policies: this is what you need to think about; here are the things you should always do, and never do. Conversely, we started creating things for them like checklists for when they were going to bring in a vendor to do these things.

From there, we were able to use things like automation. Now, if you’re logged into the network, we know who you are. Because you’re on our network, we know what your role is, where you’re located, what you’re focused on. No longer are we asking people things like, “What project are you working on?” Because if you’re going to roll out a campaign in a pharma capacity, you already need money. It’s going to cost you more than $20,000 to do this project, so you’ve got budget for it, and it’s related to a product that’s coming out or some kind of a service.

What we’re trying to do is automate a lot of the data collection in the background and surface things to marketing teams just in time around policies, so that when they’re ready to execute, the data is already available for them. They don’t have to ask the questions. We’re proactively serving it to them, and behind the scenes, if you will, we’re starting to collect information and provide insights to the folks in the enterprise who need them.

For example, if I’m a marketer in France, I probably need to understand that GDPR is a thing for me, if I’m going to be collecting data. I’m going to need to understand that if I’m actually dealing with children especially, that there are specific laws under GDPR that define children as being of different ages, so that’s going to impact things in France versus Italy versus Germany.

Creating that picture proactively for the marketers means they almost don’t have to think about it. It’s just a given. Just like they open up Microsoft Word and know that’s where they’re going to type, they have this data and already know that’s how they’re going to execute their campaign. It’s really cool, in the background, to also alert IT, for example, that there’s a new vendor coming in from the outside who’s going to be helping with this campaign because the marketer isn’t doing it in-house, so we’re going to need network access for a third-party vendor.

We’re alerting procurement that there’s going to be a procurement going out, pulling what the marketer needs to adhere to, and sending it off to procurement already, which is going to be the basis of the statement of work. We’re alerting, at the same time, folks who are on the ground. If the campaign is in France, for example, we’re alerting the folks in Montreal and in Quebec that they might be able to repurpose a lot of the French collateral the marketing team is creating for that part of Canada, because it’s French-speaking.

We’re alerting the budgeting and ops team in finance that we’re starting to use this money and giving them an idea of the spend, so we can project whether more money is going to be needed. It’s cool because the marketer is just logging into their computer, not really thinking about anything. This is the world they’re living in. To my mind, it’s having those policies that proactively start to understand: What are humans doing? Where are the decision points? What are the things humans aren’t thinking about, so we can cue them in? Then automating a lot of the stuff that is just so redundant it shouldn’t be taking up brain space.

Justin: One of the things I always like to say is never underestimate the power of having one less thing to think about. The just-in-time information, the contextual awareness, being able to easily get the information you need. We have a lot of clients in the mortgage industry, and one of the things we do that’s pretty popular is give instant access to the GSE guidance on Fannie, Freddie, USDA, all that stuff. Very important when you’re selling mortgages. Getting that stuff in front of the loan officers in the fastest, least-friction way possible is a major benefit to them, and it’s similar to what you just described, so I think I’m with you.

One of the things that’s really exciting about the future we’re all barreling towards is being able to say: I’m just going to focus on my best work. I’m going to focus on the things where I can drive the most value and not get cornered into a situation where I make a mistake because I violate some tenet, whether it’s a law because we do something wrong with data, or a gaffe for whatever reason, and instead be unlocked to really do my best work. That’s the future of work that I am most excited to spend the rest of my career in. This has been a great conversation.

Kristina: It has, thanks.

Justin: This was really elucidating, and I think a great change of pace from our usual episodes because these are questions that support leaders need to be asking. These are conversations they need to be having, and if you’re going to start bringing in AI and automation in your organization, you have to think about this stuff. I want to wrap up with the quickfire round that I still have not branded. Every podcast, I tell the guests I’m going to brand this and have a cute name for it, and I still have not gotten there. We’re just going to call it the quickfire round, and I’ll fire off a few questions for you. You answer the first thing that comes to mind. What’s the book that you most often recommend to people?

Kristina: Well, there’s two books that I always recommend, but one of the ones that I’m focused on lately is A Thousand Brains by Jeff Hawkins. I’m not sure if you’ve heard of it or not, but it’s a great book. It’s all about how the brain works, and then, obviously, from there we understand how technology could work including a lot of the AI.

Justin: Very cool, yes. A book I recommend a lot is The Power of Digital Policy by Kristina Podnar.

Kristina: Thank you. I appreciate that.

Justin: That was a very natural shout-out there.

Kristina: Thanks.

Justin: What’s the best productivity tip you’ve ever received that you’ve put into practice for yourself?

Kristina: I’ve turned on automatic booking for my calendar through Microsoft, believe it or not. This goes hand in hand with that is I’ve actually just become a fan of Calendly, which allows people to self-schedule things. I know you use it as well. To me, that’s the biggest thing. Being able to have my time scheduled with some buffers in it and having somebody that’s actually able to see my calendar. I don’t have as many emails in my inbox, and my days are flowing much smoother.

Justin: I’m with you. I want to normalize sending people Calendly links. It’s not impersonal; we’ve all done the “I’m available at…” email back-and-forth. Quick tip for the audience. I’ll share this in the show notes, but I recently made a Siri shortcut where, if I tap the button, I get a little menu to select either my 30-minute or my 60-minute Calendly link. If I’m on my phone Slacking, texting, emailing, whatever it is, I can quickly grab that without having to type calendly.com/.

Kristina: Taking it to the next level.

Justin: Oh, yes, and get nerdy with it. If you could recommend one website, blog, Slack community, LinkedIn group, et cetera, for people to really discuss some of the policy and preparedness issues that we’ve talked about today, what would it be?

Kristina: Well, unfortunately, there really isn’t a geekdom fest going on around digital policy. It just doesn’t sound sexy enough, quite frankly. What I would advise folks to do is go into forums like XRSI’s forum, which is all about things like what’s coming up in virtual reality, because I would say that’s the community talking the most right now about policies and the implications of the decisions we’re making. It won’t be directly related to things like consumer care in your call center, for example, but you can see the types of policies being discussed in that space, and they’re still very applicable.

Justin: Close us out here. If there’s one person in the world of automation, AI, or maybe even the stuff related to your work that we didn’t cover today, there’s one person you could take out for coffee or cocktail depending on the time of day and the vibe, who would it be?

Kristina: It has to be in the field?

Justin: Yes. Actually, there you go. You get to take one interesting person out to pick their brain over lunch, coffee, or a cocktail. Who is it?

Kristina: Any interesting person?

Justin: Any interesting person.

Kristina: You know what? I’m going to go back to Jeff Hawkins. I think I’m just going to geek out with him. There’s a lot of people out there, and I thought about runners because your shirt says run, and I’m a runner myself. I’m all about running people, but I think in this instance, I think I’m going to have to default to Jeff. If it was somebody who is dead, it would be Howard Hughes, but definitely Jeff in this instance.

Justin: Howard Hughes would definitely be somebody to sit down with. Well, Kristina, this has been a wonderful conversation. Where can people find you and the work that you do?

Kristina: If you head over to thepowerofdigitalpolicy.com, it’s the name of the book and the name of what I do. That’ll direct you to everything else, including some of the resources you mentioned, like what kinds of policies you need to have.

Justin: Kristina is also great to follow on Twitter @kpodnar. She’s an active user of the platform and shares a lot of good stuff.

Kristina: Thanks.

Justin: Kristina, this has been wonderful. Thank you so very much for coming on The Support Automation Show, and I hope you have a spectacular weekend.

Kristina: Great, you too, and next time you’re in St. Louis, I think lunch is my treat.

Justin: Deal. The Support Automation Show is brought to you by Capacity. Visit capacity.com to find everything you need for automating support and business processes in one powerful platform. You can find the show by searching for support automation in your favorite podcast app. Please subscribe so you don’t miss any future episodes. On behalf of the team here at Capacity, thanks for listening.