Table of Experts: The emerging AI threat – Milwaukee Business Journal

Access the story online at the Milwaukee Business Journal, or read the full article below.

Artificial intelligence is helping companies increase productivity, but it is also exposing them to new risks. Hackers are creating increasingly sophisticated cyberattacks using legitimate-looking emails and voice messages that are difficult to discern as fake. At the same time, employees are inadvertently exposing confidential documents through the improper use of AI tools. To understand the new risks companies face because of AI and what they can do to protect themselves, the Milwaukee Business Journal recently assembled a panel of experts to explore the issue.

MODERATOR: Artificial Intelligence (AI) is transforming everything it touches, including cybersecurity. Explain how AI is making it easier for cybercriminals to create fake emails and voice messages.

JASON NAVARRO: AI is an awesome computing tool, but it gives cybercriminals the ability to do things that they have only been able to dream of for years. AI makes it easier for them to get people to make mistakes through fake emails, fake imagery and fake videos. And it can do it at a much faster pace than humans. It never stops, it never sleeps, it just keeps going after potential targets.

JOSEPH STEINER: The biggest change is the sophistication AI allows. You used to be able to train people to look for certain hallmarks of a fake email – misspellings and grammatical errors, for example. AI has made it harder to discern whether something is fake or real.

NAVARRO: And now it is in the voice world. AI needs only about 10 seconds of someone’s voice to create a voice message that’s almost impossible to identify as fake.

MODERATOR: From a business leader, manager or owner’s perspective, how is AI increasing the potential for the company’s confidential materials to get exposed?

STEINER: There are three areas where confidential information is vulnerable to AI. The first is employees who use a public AI tool to summarize confidential information without realizing where that information might go. The second is people who inadvertently expose confidential information within an organization through an internal, trusted AI tool. The third is failing to ensure that any output containing confidential information is secured, labeled confidential and protected.

ZACH WILLENBRINK: If you aren’t careful, the information you input into AI can be output in unexpected ways. A famous example involved Samsung a few years ago. Shortly after ChatGPT came out, a group of coders realized its power to debug code. They input their draft code into the tool, but doing so meant other users could potentially retrieve Samsung’s proprietary code.

NAVARRO: When AI tools came out, people were like, “Look at all these benefits. I’m going to give this AI thing everything to see what it can do.” And that drove me crazy. Would you let an outside person peruse all of the company’s financial documents in your CFO’s office? Yet, people are giving AI tools that information without even thinking about it because they think there is no risk. One common example is the note taker that pops up on Zoom, Teams or whatever platform you are using. What is that note taker doing with that information? A note taker makes it easy to summarize a conversation, but it can also expose a lot of privileged and confidential information.

WILLENBRINK: AI note takers are a huge unexamined risk for businesses. They typically do a good job of taking notes, but people don’t appreciate the risks involved, which range from outputs lacking necessary context, to transcripts being discoverable in litigation, to data being used by the software providers for training if the right configurations and contract terms aren’t in place.

NAVARRO: It’s like having someone sitting in on your corporate board meeting and then publishing everything they heard. No board would allow that, but people freely allow this technology to listen to everything that’s being said, summarize it, and then do who knows what with it.

STEINER: We tell our customers that they should treat an AI agent the same as they would a person from outside their organization. Would they make an outside person sign an NDA before sharing the information? If they would, they need to be careful about sharing it with an AI tool.

MODERATOR: Are there ways to protect what AI does with your information?

WILLENBRINK: Most of the AI tools offer a business or enterprise subscription that lets you control whether your information can be shared outside of your organization.

STEINER: Using trusted tooling is key. So is locking down your data, because even within an internal environment you can have mishaps where data is shared with people who shouldn’t be seeing it. You also have to make sure that whoever’s interacting with your data is who they say they are. MFA, or multifactor authentication, is a big part of that.

MODERATOR: Is MFA mandated, and does it offer enough protection?

STEINER: There are ways around MFA, but it does stop many of the simpler threats. Locking down admin accounts, which give users access to critical parts of the network, also makes a big difference. It’s not going to prevent everything, but it stops a lot. The goal is to make it as difficult as possible for somebody to break in.

NAVARRO: All of these items we’re talking about are tools. None of them can guarantee that you will stop nefarious actors, but they make it more difficult. As for mandates, insurers can require that you have MFA in place before they will issue a policy.
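For readers curious about what MFA actually checks, the sketch below implements one common form of it: time-based one-time passwords (TOTP), the mechanism behind most authenticator apps. This is a minimal illustration, not something the panel prescribed and not production code; the shared secret is a standard test value, and a real deployment would use a vetted library and store secrets securely.

```python
# Minimal TOTP (RFC 6238) verifier -- illustrative only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestep: int = 30, digits: int = 6,
         t: float | None = None) -> str:
    """Derive the 6-digit code for the current 30-second window."""
    key = base64.b32decode(secret_b32)
    counter = int((t if t is not None else time.time()) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def verify(secret_b32: str, submitted: str, drift: int = 1) -> bool:
    """Accept codes from adjacent windows to tolerate clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, t=now + step * 30), submitted)
        for step in range(-drift, drift + 1)
    )

# Hypothetical shared secret provisioned when a user enrolls.
SECRET = "JBSWY3DPEHPK3PXP"
print(verify(SECRET, totp(SECRET)))  # True
```

The point of the second factor is that the code is derived from a secret only the user's device holds, so a stolen password alone is no longer enough to log in.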

MODERATOR: How is AI impacting the ability to keep company networks secure?

STEINER: You can leverage AI to sift through the millions of log entries your network is collecting and flag suspicious activity.
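As a concrete illustration of Steiner's point, the sketch below flags outliers in synthetic log data using an isolation forest, a common anomaly-detection model. It assumes scikit-learn is available, and the features, data and contamination rate are invented for the example rather than drawn from any vendor's product.

```python
# A minimal sketch of AI-assisted log triage. Assumes each log event has
# already been reduced to numeric features; everything here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-event features: [hour of day, failed logins in past
# hour, bytes transferred (MB)] -- normal traffic plus injected outliers.
normal = np.column_stack([
    rng.integers(8, 18, 5000),        # business hours
    rng.poisson(0.2, 5000),           # occasional failed logins
    rng.exponential(5.0, 5000),       # modest transfers
])
suspicious = np.array([[3, 40, 900.0],    # 3 a.m. brute-force attempt
                       [2, 55, 1200.0]])  # 2 a.m. bulk exfiltration
events = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.001, random_state=0).fit(events)
flags = model.predict(events)         # -1 = anomalous, 1 = normal
print(f"{(flags == -1).sum()} of {len(events)} events flagged for review")
```

The value is scale: a model like this can score every event as it arrives, leaving humans to review only the handful it flags.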

NAVARRO: Endpoint detection and response is an AI tool. It monitors your network in real time, alerting humans to potential dangers. Insurers also use AI predictive analytics tools to create mock attacks. They never actually intrude into the system, but they can run many more attack simulations than humans can to determine the risk potential.

MODERATOR: What kinds of strategies can employers implement to counter these AI challenges?

WILLENBRINK: Two things. The first is technical: a lot of our clients, particularly clients that commonly face threats of fraud, are using agentic AI tools that can act to stop what they identify as fraud without needing human intervention. Some are still in an experimental phase and any agentic tool needs to be deployed with care, but they are showing promise in counteracting some of the AI fraud threats. The second is governance-based: having at least a basic AI policy is critical. Right now, a lot of businesses are sticking their heads in the sand because they don’t want to think through the complexity of using AI tools. A basic AI use policy could probably address 75 percent or more of key AI risks.
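The sketch below is a toy version of the agentic pattern Willenbrink describes: a fraud score feeds a policy that can block a payment on its own but escalates borderline cases to a person. Every name, signal and threshold here is hypothetical, and the scoring step stands in for what would be a trained model in a real tool.

```python
# A toy "act without human intervention" pattern: score, then decide.
# All names, signals, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    vendor: str
    amount: float
    bank_account_changed: bool   # vendor recently changed payout details
    requested_by_email: bool     # initiated via inbound email

def fraud_score(req: PaymentRequest) -> float:
    """Stand-in for a model score in [0, 1]; here, simple heuristics."""
    score = 0.0
    if req.bank_account_changed:
        score += 0.5             # classic business-email-compromise signal
    if req.requested_by_email:
        score += 0.2
    if req.amount > 50_000:
        score += 0.2
    return min(score, 1.0)

def decide(req: PaymentRequest) -> str:
    score = fraud_score(req)
    if score >= 0.7:
        return "BLOCK"           # autonomous action, no human in the loop
    if score >= 0.4:
        return "HOLD_FOR_REVIEW" # escalate borderline cases to a person
    return "ALLOW"

req = PaymentRequest("ABC Supply", 82_000.0,
                     bank_account_changed=True, requested_by_email=True)
print(decide(req))  # BLOCK
```

The design choice worth noting is the middle tier: fully autonomous blocking is reserved for high-confidence cases, which is one way to deploy an agentic tool "with care," as the panel puts it.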

STEINER: Something that’s been encouraging to me is that the larger tech providers – Microsoft, Google, Meta, Amazon – are sharing information in order to help everybody. Information sharing benefits all. It’s a threat for the large tech providers as much as it is for their customers.

MODERATOR: Let’s talk about cost. Cost is always an issue, but it’s even more significant for small businesses and startups. What are some minimum steps small businesses can take to thwart cyberattacks and ransomware?

STEINER: Some protection tools don’t cost as much for smaller businesses because their prices are based on the number of users. And because smaller businesses often don’t have the necessary skill sets in-house, having automated protections is especially beneficial for them. As for the steps businesses can take, the first is to ensure that you don’t have holes in your identity plane, which is the part of your network that verifies users and determines what they can access. Attacks starting with admin accounts can get really bad really fast because of the access those accounts have within the network. You need to make sure you are protecting the identity plane, locking down your data and ensuring you are backing it up appropriately.

NAVARRO: Cost is obviously a huge issue from a small business perspective, but there are a lot of things you can do for little to no cost, like education, training and practice. You can create a cyber continuity or business disaster recovery plan for little or no money. The website CISA.gov has lots of free material, including training and education resources, as well as links that can help you create your own game plan. Leverage your resources – your IT team, your attorney, your insurance agency. As far as the plan goes, you have to specify who on your team is going to do what when an event occurs. You practice for fires and tornadoes. You practice for active shooters. You need to do the same for cybersecurity.

WILLENBRINK: Having a playbook and practicing it ahead of time can make a big difference. Holding an annual tabletop exercise or a meeting with the various stakeholders that support you can be valuable because it can identify gaps in your plan itself, as well as in things like your insurance coverage.

MODERATOR: How important is it for companies to be looking at the cybersecurity policies of their contractors and vendors?

WILLENBRINK: Performing some due diligence on your supply chain is key to reducing your risk in today’s interconnected environment. Under most state notification laws, you’re going to be responsible for notifying the individuals and companies impacted if it’s your data that’s been breached, regardless of where the breach occurred. We’ve seen this happen multiple times with supply chain attacks.

NAVARRO: That’s the first step. The second step is considering the associated financial loss that can occur. If the information you gave to supplier ABC gets compromised or their systems go down due to a cyber incident, you can’t operate at full capacity. That’s going to impact your revenue. So, you need to plan not only for the data breach, but also for how you are going to stay afloat if a key vendor goes down.

WILLENBRINK: The reality is it can be really difficult for businesses, particularly small and medium-sized ones, to get indemnity language in their vendor contracts to cover things like breach notifications or, especially, lost revenue – even in situations where the vendor is at fault. So, beyond basic diligence and contract review, you also need to talk with your insurance broker to determine whether you’ve filled that gap.

STEINER: Cloud providers will be clear about what they won’t be responsible for, but they might not be clear on all the potential risks. Working with your legal, insurance and technical providers can help you ascertain that.

MODERATOR: Has the risk of data-breach litigation increased?

WILLENBRINK: We’ve seen an explosion in data breach litigation over the last five years nationally and in the last couple of years in Wisconsin. It has become one of the key risks businesses have to think about. It can be awful for businesses because first they’re victimized by a cyberattack. Then, once they’re through the attack, they might face class-action lawsuits by plaintiffs claiming, after the fact, that something should have been done differently, which can be expensive to fight.

STEINER: They are costly cases for a company to defend, even when it has a strong defense.

WILLENBRINK: Absolutely.

NAVARRO: One thing I see evolving over the next five years is a pivot to claims against directors and officers, which is a whole separate litigation area. If that happens, you are going to have cyber lawsuits and you are going to have officers and directors lawsuits.

WILLENBRINK: There’s a pretty interesting body of case law developing in Delaware concerning potential liability for officers and directors for alleged oversight failures.

STEINER: Do you find that stronger regulation can help companies by providing some level of protection if they abide by the regulations?

WILLENBRINK: Some states have started offering a safe harbor approach that provides an affirmative defense to data-breach claims. But I think the reality is that the more regulations there are, the more likely it is that companies, as well as their directors and officers, are going to be targeted in these suits.

MODERATOR: Are companies required to notify the government if there’s been a data breach?

WILLENBRINK: Every state requires businesses to notify individuals if a data breach impacts certain categories of their personal information, and about half of those state statutes separately require notifying state law enforcement or regulators. And, of course, there are specific notification requirements for federally regulated businesses under, for instance, HIPAA (the Health Insurance Portability and Accountability Act) and GLBA (the Gramm-Leach-Bliley Act).

MODERATOR: What are one or two things that you want to share with our readers that we might have missed or that you are passionate about?

NAVARRO: I always go back to the same thing. You have to train and prepare for cyberattacks. I start there because it gets you thinking about how well you are protected. It gets you in a state of mind that allows you to question how you can address your risks much like you would any other exposure.

STEINER: AI technology is not going away and it can provide a lot of benefits. You have to ensure that you’ve set up a responsible policy for using AI. You have to take the necessary steps and train your employees so that you can realize more of AI’s benefits and less of its risks.

WILLENBRINK: Whether we’re talking about AI or cybersecurity, my view is that the most important thing is planning. Put together easy-to-understand policies to help avoid the worst outcomes, then develop and practice procedures for what to do when the worst happens.