

Ethical Use of AI

When it comes to the ethical use of AI in your business, you want to do the right thing and you assume you will.

But what if artificial intelligence is like an iceberg with hidden danger below the waterline?

No one wants their business to be the Titanic, and that’s why we’re going to explore how to start thinking about the ethical use of AI to successfully navigate those hazards.

What You’ll Discover About the Ethical Use of AI:

* What to know about the bias used in training AI models

* 2 policies and 1 assessment you need to examine to evaluate the ethical use of AI

* The number one red flag when evaluating the ethical use of AI

* How to avoid misinformation, copyright infringement and breaches of confidentiality

* And much more.

Guest: Frank Kyazze


Frank Kyazze’s career is a blend of innovation, ethics, and humanism in technology. Over the last decade, he’s become a leader in digital trust, covering cybersecurity, governance, risk management, compliance, privacy, and AI ethics. From Pittsburgh to New Orleans, as the founder and CEO of GRC Knight, Frank champions strategies that protect national security and our society’s infrastructure. His work spans traditional sectors to startups and venture capital, advocating for cybersecurity as a growth engine.

With certifications like CISSP, CIA, CEH, CIPP/E, CIPM, and ISO Lead Auditor, Frank’s expertise is unmatched. However, it’s his passion for philosophy and improv theater that brings a creative and ethical lens to his work. He sees digital trust as essential to business integrity and innovation, aiming to make it accessible and integral to corporate culture. Frank is a thought leader who connects technology’s tangible aspects with its philosophical roots, offering insightful perspectives on technology, national security, and the digital economy.

Frank Kyazze is not just a name in the tech industry; he is a storyteller, a philosopher in the digital age, and an architect of trust in a world that increasingly depends on it. His journey and insights offer a unique blend of depth, pragmatism, and visionary thinking, making him a compelling voice in the ongoing conversation about our digital future.

Related Resources:

If you liked this interview, you might also enjoy our other Corporate Governance and Culture episodes.

Contact Frank and connect with him on LinkedIn and Instagram.

And visit his website.

_____

How Entrepreneurs Need to Start Thinking About the Ethical Use of AI

When it comes to the ethical use of AI in your business, you want to do the right thing and you assume you will. But what if artificial intelligence is like an iceberg with hidden danger below the waterline? No one wants their business to be the Titanic, and that’s why we’re going to explore how to start thinking about the ethical use of AI to successfully navigate those hazards. Stay tuned.

 

This is Business Confidential Now with Hanna Hasl-Kelchner helping you see business issues hiding in plain view that matter to your bottom line.

 

Welcome to Business Confidential Now, the weekly podcast for smart executives, managers and entrepreneurs looking to improve business performance and their bottom line.

 

I’m your host, Hanna Hasl-Kelchner and I have a super guest for you today. He’s Frank Kyazze. Frank is the CEO of GRC Knight, and over the past decade, he’s become a leader in digital trust, covering cybersecurity, governance, risk management, compliance, privacy, and AI ethics. His work spans traditional sectors to startups and venture capital, where he advocates for cybersecurity as a growth engine and champions strategies that promote national security and our society’s infrastructure.

 

It is a pleasure to have him join us. Welcome to Business Confidential Now, Frank.

 

Thanks for having me, Hanna. Wow, that was an amazing intro. I think you’re giving me way more credit than I deserve.

 

No, I don’t think so. You’ve accomplished a lot, and I really love how your career is a blend of innovation, ethics, and humanism in technology because it’s so easy to kind of, you know, get this robotic, mechanical feel about it all. And so I am particularly interested in the ethical use of AI because in some respects, from the little that I’ve read, it feels like it’s here to stay. It’s growing faster than a new puppy. And if we don’t know how to set boundaries and train that puppy, it’s going to mess all over the house.

 

And so in your experience, what should business leaders and business owners in particular be doing to harness and ensure the ethical use of AI?

 

Yeah, absolutely. It really boils down to transparency and visibility, right? A lot of the organizations that I’ve been working with and speaking to around AI security governance are using tools like ChatGPT, Claude, and Google AI to do a lot of the heavy lifting in regard to developing the models that are trained against data to solve their business problems, whether it’s to be able to quickly access a huge amount of data for their organization or even decision making.

 

And in lending out this responsibility to these third party suppliers like ChatGPT and Google AI, there’s a little bit of oversight that’s lacking in terms of how these models are being developed and what they’re actually doing with organizations’ data. So it really boils down to transparency and organizations being able to really understand how the models work, and how the training is occurring on the data sets that they provide these models.

 

You know, how these third party organizations themselves promote transparency and trust with AI, safe AI, and how they’re dealing with bias when it comes to training AI models. That’s going to be a lot of the work that organizations are going to have to do to make sure that they themselves are using AI ethically.

 

That’s really interesting. So are there some kind of policies that they could enact in order to ensure the ethical use of AI? Or maybe I should back up and ask: what kind of questions do they need to ask? Because this is really interesting. You’re talking about third party AI tools and that they may have these biases in them. I mean, that’s kind of scary. Can you tell me a little bit more about that?

 

Yeah, absolutely. When an organization is looking to acquire or bring in some sort of AI tool, let’s say it’s the GPT API or the Google AI API, it’s really good to look for three things: an AI security policy, an AI governance policy if possible, and then an AI impact assessment.

 

These three artifacts are typically in documentation form. A lot of the major AI service providers have them internally, but don’t really provide them when you use their services, because it’s sort of like a self-service situation where the user can go in and get their API key, look at the pricing and everything, and not really have to talk to an account executive of sorts.

 

But these three artifacts are important because an AI security policy will detail how the organization plans to protect its customers and data subjects from any negative security implications that could come from the use of their product. AI governance details how the organization lays out a plan to have rules, guardrails, and protections in place to fall in line with regulations and laws around AI as an organization, similar to what you would see for an internal privacy policy or data protection policy.

 

And the most important is the AI impact assessment. This is a little bit more of a rare artifact that I’ve seen for organizations, but I’m starting to see more of. And an AI impact assessment is a detailed assessment that an organization does to determine any sort of negative impacts or consequences that the AI solution will have on their customers, on humans and society as a whole.

 

So these three artifacts together: a lot of the big players in the AI development space are starting to have them, but it’s also good practice for organizations that are looking to get into the AI development space, or are looking to sell a product or service that’s going to be AI driven, to look into drafting their own AI security and governance policies, and then also doing that AI impact assessment internally to determine if they’re going to be creating the next Terminator of sorts.

 

Yeah, that is the kind of scary thing that, you know, people have these nightmares over. But I think most of the people listening aren’t necessarily interested in developing their own AI tools, but they’re curious about using them. This has been really helpful in terms of here are some things to look for. The security policy, governance policy, and especially this impact assessment.

 

In your experience, Frank, in looking at these different third party providers, are there significant or meaningful differences between the information that they provide in that impact assessment? It seems to be the most sensitive one of the three.

 

Yes, I’ve seen different flavors of an AI impact assessment, from very high level ones that don’t really go into much detail on the solution, to other organizations providing a lot of detail as to how exactly they do their training and how often they do quality assurance and analysis on training results, all the way down to data transfers and where they’re going to be sending data during the AI development process. Because you’d be surprised.

 

A lot of these major AI service providers are using other AI models and other AI products. So it’s a bit of a treasure map to determine where your data may or may not be going. But yes, that AI impact assessment is going to be really crucial for organizations to look into whenever they are trying to acquire or bring on a vendor that is a major AI service provider.

 

Well, as a business owner or an executive trying to make a decision about how to encourage the ethical use of AI, or ensure the ethical use of AI, what are some red flags in these three documents that they should look for and then ask themselves, well, maybe this isn’t the vendor for us?

 

Yes, absolutely. So a lack of one of these artifacts is the number one red flag, right? Especially if you’re looking to acquire a vendor that’s going to be handling sensitive information. For example, HRIS, human resource information systems. I’m starting to see products on the market around helping HR professionals better screen candidates and better engage internal employees.

 

You know, being able to educate employees on their own benefits with the use of AI. And these AI models might be processing sensitive information like names, gender, sexual orientation, medical history, employment history, Social Security numbers. And when I’m looking at an AI impact assessment for an AI human resources information system, I’m going to want to see that there has been an assessment of the potential misuse or abuse of data by the model.

 

There’s also the potential for wrong answers, misinformation provided by an AI model that’s supposed to advise an employee about, let’s say, their benefits, and also profiling too, right, being able to profile ingested data about users based on their age, race, gender, or sexual orientation.

 

Those are going to be red flags that I would want to look out for if I didn’t see any sort of assessment of the potential for those negative consequences.

 

Okay. What you’ve described, which I think is really important, is for the organization that is trying to use AI to manage a system like HR information. I would imagine that there are smaller organizations that aren’t quite to that level of maturity, shall we say, where they have these massive systems, and ChatGPT just sounds like a good tool to create some marketing content.

 

Are there ways that they should be thinking about this in order to promote the ethical use of AI?

 

Absolutely. And you mentioned a really interesting point. I would say that the most common use of AI system services for most businesses, medium and small, are going to be in the realm of helping them make more money and sales, or helping them make their lives easier, or a combination of both. So when you think of using AI for marketing copy, for example, it’s hard work to write good marketing copy and then write the right marketing copy to reach your target audience.

 

And AI can definitely simplify and streamline that process in terms of being able to create original, crafted messaging for each target prospect that you want to sell your products and services to. There’s a bit of an ethical risk, though, in the question: is AI original? I don’t think so, right? If you think of AI models being trained on lots and lots of already preexisting text and language, you might be surprised to find that the output of that content isn’t original.

 

You might be committing copyright infringement, especially if you’re using AI to write white papers or books that you want to sell commercially. So there’s that sort of ethical risk there, right? And then there’s the ethical risk of misinformation. That’s a big one. You might use AI to write technical support guides, for example, but the model might skim the surface of what it’s really supposed to give a user to handle an issue around technical support, and that user could end up in more trouble than before they started.

 

I use AI myself for a lot of troubleshooting with code. I do some development, and I’ll find from time to time that when I’m using, let’s say, ChatGPT to troubleshoot some code, I end up in a deeper rabbit hole than where I was before, when I would have been better off just stepping through the issue myself. So there are definitely some risks and ethical qualms there when it comes to causing more harm or more distress to the user than before they used AI.

 

Well, what kind of guardrails can an organization put in place to get the best of both worlds, so that they can get some of the speed and the jump start of the process? I mean, AI is just phenomenal in terms of being able to process a large amount of information in a short period of time and basically spit something out.

 

But what I’m hearing, and I totally agree, is that you don’t just want to take it at face value. So then how do you encourage people or what policies can you put in place or procedures in order to guard against, like you said, the misinformation or the copyright infringement or the incomplete information that can cause more problems than it solves?

 

Yeah, absolutely. So I would say another really major common usage for AI is the question answer solution. Right. Where an organization, maybe it’s a knowledge business, or maybe it’s a business that has a lot of proprietary information that they need to use to provide their product or service. They can basically set up their own, let’s say, GPT, on their own data sets to provide that quick question and answer to either internal folks or customers that might have questions or answers or need support.

 

And in this sort of scenario, I would say the number one guardrail is you get what you put in, right? The quality of the data that you give the retrieval and generation AI system and framework is going to be the quality of the answers you get back. So a lot of due diligence, in terms of really testing and doing QA on data sets before you enter them into AI retrieval and generation frameworks, is going to be the number one guardrail. For other uses of AI, it comes down to testing models.
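
To make that point concrete, here is a minimal, hypothetical sketch of the kind of QA gate a team might run on documents before loading them into a retrieval-style (RAG) knowledge base. The Document structure, the checks, and the thresholds are illustrative assumptions, not any particular vendor’s or framework’s API.

```python
# Hypothetical sketch: basic quality checks on documents before they are
# indexed into a retrieval-style (RAG) knowledge base. The Document shape,
# thresholds, and PII pattern are illustrative assumptions, not any
# particular vendor's API.
import re
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    last_reviewed: str  # e.g. "2024-01-15"; empty if never reviewed

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # crude US SSN check
MIN_LENGTH = 200  # skip near-empty pages that produce low-quality answers

def passes_qa(doc: Document):
    """Return (ok, reason). Only documents that pass get indexed."""
    if len(doc.text.strip()) < MIN_LENGTH:
        return False, "too short / likely boilerplate"
    if SSN_PATTERN.search(doc.text):
        return False, "contains an apparent Social Security number"
    if not doc.last_reviewed:
        return False, "no review date; content may be stale"
    return True, "ok"

def select_for_indexing(docs):
    approved = []
    for doc in docs:
        ok, reason = passes_qa(doc)
        if ok:
            approved.append(doc)
        else:
            # Route rejects to a human reviewer instead of silently indexing.
            print(f"REJECTED {doc.doc_id}: {reason}")
    return approved
```

The specific checks will vary by business; the point is that the gate runs before ingestion, not after bad answers start surfacing.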

 

It’s almost like you’re pretending to be the bad guy. When you’re asking questions of a test model or a test AI solution or service, you’re trying to ask questions that will break it or puzzle it.

 

And in this sort of testing scenario, that’s where, if you do find any sort of bad answers, you can create guardrails via code to essentially say: handle these types of questions this way. When you get a question around something that’s out of scope for our business and service, provide this response to say, I don’t really know the answer to that.

 

But if you have a question around A, B, or C, I can provide you an answer. And the common AI models like GPT and Claude do pretty well at guardrails, in the sense that you can’t use ChatGPT to get instructions on how to build a bomb or do something scary like that, right? And that’s because they have folks who have asked all the hard, bad questions to see if it’s possible to trick it into doing so.

 

But does that mean it’s impossible? No. That’s where the real AI risk lies: AI will not be 100% perfect or safe, and businesses will have to weigh the pros and cons of using such services and solutions or not.
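
As a rough illustration of the coded guardrail Frank describes, where out-of-scope questions get a canned refusal instead of letting the model guess, here is a minimal sketch. The topic list, the refusal text, and the answer_with_model() placeholder are hypothetical assumptions for illustration, not any vendor’s API.

```python
# Hypothetical sketch of a coded guardrail: route out-of-scope questions to a
# canned refusal instead of letting the model guess. The topic list, refusal
# text, and answer_with_model() placeholder are assumptions, not any vendor's API.
IN_SCOPE_TOPICS = {"benefits", "payroll", "time off", "onboarding"}

REFUSAL = ("I don't really know the answer to that. "
           "If you have a question about benefits, payroll, time off, "
           "or onboarding, I can help.")

def answer_with_model(question: str) -> str:
    # Placeholder for the real model call (e.g. an internal GPT endpoint).
    return f"[model answer to: {question}]"

def guarded_answer(question: str) -> str:
    q = question.lower()
    if not any(topic in q for topic in IN_SCOPE_TOPICS):
        return REFUSAL                   # out of scope: refuse rather than guess
    return answer_with_model(question)   # in scope: let the model answer

print(guarded_answer("How do I enroll in dental benefits?"))
print(guarded_answer("Tell me something outside our business scope."))
```

Real deployments typically layer this kind of scoping check on top of the provider’s own safety filters rather than relying on either one alone.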

 

Well, that makes sense. But let’s say a business has decided, yes, there are some real advantages to using AI, especially the ethical use of AI. How do we make sure our employees are equipped? What kind of training would you recommend?

 

Oh, absolutely. Yes. So it depends on the deployment of the solution. For a larger enterprise, I’ve seen organizations have AI safety training, sort of like how they have security awareness training, right? How to properly use AI services within an organization, plus reporting mechanisms if you see any sort of misinformation or bias: how can employees report bad instances of AI usage to the administrators of these systems?

 

Those are typically the trainings and awareness programs that I’ve seen around AI. For smaller organizations, it might be a little bit more work to educate the whole team, the organization as a whole, around usage. And really, a lot of the training is going to fall on the folks that are going to be administering the solutions.

 

And if you think of the stakeholders of the HRIS system, there’s going to be an HR stakeholder that has to really make sure that this solution that has been acquired is doing what it’s supposed to do, and they’re going to have to learn how to set up those guardrails in the configuration of the HRIS system, whether they have to work closely with the IT teams to determine what data specifically it can pull and can’t pull.

 

That’s going to be a lot of work on their part. For organizations that are on Microsoft 365, I’ve seen a lot of organizations try to use SharePoint data, for example, in AI systems. But the issue is, how do you know what SharePoint data is out there, and what’s public and what’s not public? Maybe there are teams that shouldn’t see other teams’ data, but if an AI system isn’t configured properly, they might be able to ask the question and query the data from those teams.
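
A minimal, hypothetical sketch of that permission problem: before the assistant answers from retrieved documents, drop anything the requesting user’s groups can’t already access. The IndexedDoc structure, the allowed_groups field, and the keyword matching are illustrative assumptions, not the Microsoft 365 or SharePoint API.

```python
# Hypothetical sketch of the SharePoint-style permission problem: filter
# retrieved documents by the requesting user's groups so the assistant
# cannot surface another team's data. Structures here are illustrative
# assumptions, not the Microsoft 365 API.
from dataclasses import dataclass, field

@dataclass
class IndexedDoc:
    title: str
    text: str
    allowed_groups: set = field(default_factory=set)

def retrieve_for_user(query, user_groups, index):
    """Naive keyword retrieval that enforces the user's existing permissions."""
    hits = [d for d in index if query.lower() in d.text.lower()]
    # The critical step: keep only documents the user's groups can access,
    # no matter how the question is phrased.
    return [d for d in hits if d.allowed_groups & user_groups]

index = [
    IndexedDoc("HR salary bands", "salary bands for 2024 ...", {"hr"}),
    IndexedDoc("IT runbook", "how to reset a password ...", {"it", "hr"}),
]

print(retrieve_for_user("salary", {"it"}, index))    # [] : nothing leaks
print(retrieve_for_user("password", {"it"}, index))  # IT runbook only
```

The design point is that the retrieval layer enforces the same access controls as the source system, so the assistant can never return a document the user couldn’t open directly.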

 

So there’s definitely going to be a lot of onus on the stakeholders, the custodians, and the administrators of the infrastructure that’s going to support these AI systems. Does that make sense?

 

Yes, it does. So it’s not something to be taken lightly, that is, as long as you want to be able to maintain the ethical use of AI. So definitely, I understand that.

 

But let me take it to another level. I mean, it’s one thing on the implementation phase and maintenance phase for these boundaries to be created, for protocols to be created, to understand where your sensitive information is and put up the proper firewalls.

 

But AI seems to be evolving at just lightning speed. How does somebody stay up to date with these changes, and with how this third party platform that they may have adopted is adapting to these changes?

 

Yes, that’s a really good question. And that’s the million dollar question. Because if you think of how long it takes some organizations to onboard a new vendor and then get it implemented and deployed, AI is moving so fast that I see some products have really good launches and within nine months be old news, right? They just weren’t able to keep up.

 

Organizations spend a lot of time and money on implementing these new products and solutions, so it’s a bit of a loss sometimes. It’s a bit of a game of running fast and slow at the same time, right?

 

You don’t want to use every new product or solution that’s coming out for AI, because even though you might think it would be good for your business, things are changing so fast that if you wait a couple more months there might be a better solution that’s cheaper, or potentially even free, right?

 

I think the best way for folks to really stay in the know, though, is to actually use the tools. And I know it’s a cliché answer to say subscribe to AI tech newsletters and follow incubators to see who’s coming out with the new magic, but that’s pretty much what you have to do. Or you’ll notice it from your competitors, right?

 

Which might even be a worse scenario, where you start to see some of your competitors starting to excel in certain areas, like sales and marketing and new product development, because they were the first to take initiative on new services and solutions. But I would say what organizations must not do is be afraid of change.

 

The hardest thing to accept is that you have to accept change, and your organization has to have an appetite for change to be able to dance in this new realm of AI technology and how it’s enabling businesses.

 

Well, it looks like you got to buckle up, right?

 

Yep, yep.

 

All right. Well, Frank, this is really interesting. I’m just wondering, if there’s one thing that you’d like our audience to take away from today’s conversation about mastering, if you will, or attempting to master, the ethical use of AI in their business, what would it be?

 

I would say AI is not going to replace us, right? I don’t think so, not any time soon. I think it’s that sort of booster to help people be a bit more productive, a bit more creative, be a bit more of a creative director in their own career, in their own work. And I would say that folks should start to think about their relationship with AI: how it can help you if you want it to, and how it can hurt you if you let it. Just like anything else in life, right?

 

Yep. It’s all about thinking it through. And you’ve definitely given us some food for thought. I especially love those three documents. Well, two are policies, a security policy and a governance policy, plus an impact assessment, which isn’t really a policy, but still three very important criteria to help people assess the kind of AI tools that they would be using.

 

And then, of course, how to use them ethically is a whole other ball game that they need to do some thinking about. And you’ve certainly helped us in that department too. So, Frank, I appreciate your time and all you do to help organizations plan for the future of AI and take steps to ensure its ethical use in their business.

 

If you’re listening and you’d like to know more about Frank Kyazze and his work at GRC Knight, that information, as well as the transcript of this interview, can be found in the show notes at BusinessConfidentialRadio.com.

 

Thanks so much for listening. Please be sure to tell your friends about the show and leave a positive review. We’ll be back next Thursday with another episode of Business Confidential Now.

 

So until then, have a great day and an even better tomorrow.

Join, Rate and Review:

Rating and reviewing the show helps us grow our audience and allows us to bring you more of the rich information you need to succeed from our high powered guests. Leave a review at Lovethepodcast.com/BusinessConfidential.

Joining the Business Confidential Now family is easy and lets you have instant access to the latest tactics, strategies and tips to make your business more successful.

Follow on your favorite podcast app here as well as on Facebook, YouTube, and LinkedIn.

Download ♥ Follow  Listen  Learn  Share  Review Comment  Enjoy

Disclosure:

This post may contain links to products on Amazon.com with which I have an affiliate relationship. I may receive commissions or bonuses from your actions on such links, AT NO ADDITIONAL COST TO YOU.