Balancing innovation with integrity: ethical AI management for today’s workforce

September 3, 2024

What are the main ethical concerns around governing AI? And what can we do, as managers, HR staff, and citizens, to make sure AI is being used responsibly in the workplace?

On May 24th, 2024, our Head of Science, Vivienne Smith, interviewed Dr. Marianna Ganapini, professor of philosophy at the University of Notre Dame, faculty director of the Montreal AI Ethics Institute, and co-founder of Logica Now, on these very questions. Below is a summary of the talk; you can find the full version on LinkedIn.

 

Who is responsible for regulating the use of AI in the workplace?

We start with legal guidelines that are enforced by legislation such as the AI Act, the world's first comprehensive AI law, introduced by the European Union. If companies want to know where to start, they can look at such acts to guide their own policies.

Law is not the same thing as ethics, however. So while guidelines can give us a starting point, it is up to individual companies and even individual citizens to govern their own use of AI.

 

What are the main ethical concerns these regulators have around how AI will be used?

There are four main ethical concerns around AI:

  1. Trustworthiness: is the AI model accurate?
  2. Privacy: how is AI using the data and information given?
  3. Fairness: is the model being trained in a way that is fair or are there inherent biases that will be then propagated that discriminate?
  4. Manipulation: is the AI trained to manipulate you in a certain way? Given how much a machine can know about human beings, and its power to personalize its recommendations, in the wrong hands it could have far-reaching negative consequences for society.

How do we know if the model is fulfilling these ethical criteria?

It's a tough question. Sometimes we don't know because the tools are black boxes. It's hard to know what's going on even though there are technical advances trying to figure that out.

One way is to use technology to open up the black box and see what's under the hood. Another is open-source models; there are pros and cons, but open-sourcing is one way of ensuring transparency.

The third way is to show, as a company, that you've done your homework: that you've trained your model in a certain way, done boundary testing, red-teaming, and all the things reasonably required to show that your model does not carry as many biases and that the risks around manipulation and privacy are reduced, for example.

There are things that can be done to ensure your model is fulfilling these criteria and this is becoming more and more required.

 

Can you measure and test for these criteria, for example, if your model is biased?

Yes! First of all, every model is biased; the problem is when that bias leads to discrimination. You can test whether your model fairly represents the data you want it to.

For example, if you are training a model to recognize melanoma for a broad population, you want to make sure the dataset you are training your model on is representative of the population you will be using it on.
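
As a rough sketch of that kind of check (the data, the "skin_type" column, and the population shares below are hypothetical, not from the talk), you could compare how groups are represented in your training data against the population you plan to deploy on:

```python
import pandas as pd

# Hypothetical training data for a melanoma classifier; the "skin_type"
# column and the reference shares below are illustrative, not real figures.
train = pd.DataFrame({
    "skin_type": ["I", "II", "II", "III", "IV", "V", "VI", "II", "III", "I"],
})

# Assumed share of each group in the population the model will be used on.
population_share = {"I": 0.10, "II": 0.25, "III": 0.25,
                    "IV": 0.20, "V": 0.12, "VI": 0.08}

train_share = train["skin_type"].value_counts(normalize=True)

for group, expected in population_share.items():
    observed = train_share.get(group, 0.0)
    flag = "under-represented" if observed < 0.5 * expected else "ok"
    print(f"{group}: train {observed:.2%} vs population {expected:.2%} -> {flag}")
```

A check like this only flags representation gaps in the data; deciding what counts as acceptable, and what to do about it, is the ethical decision discussed below.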

There are also tools that can tell you how the model came to its conclusion and how uncertain that conclusion is.
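
As a minimal sketch of what such tooling can look like, assuming a scikit-learn classifier on synthetic data (none of this comes from the talk): predicted class probabilities give a crude sense of how uncertain a prediction is, and permutation importance hints at which inputs the model relies on.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data; in practice this would be your own model and dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Uncertainty: the predicted class probabilities for one example.
proba = model.predict_proba(X[:1])[0]
print("class probabilities:", proba)  # values near 0.5 mean the model is unsure

# Explanation (global): which features most affect predictions when shuffled.
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(imp.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```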

Then, you make a decision and that gets to the ethics side.

Is there a consensus on the guiding principles used to craft AI legislation? Are there any debates on these principles?

A little bit of history: around 2015-2016, people started really talking about AI ethics, and there was a flurry of principles and frameworks: NIST, the European Union's, the OECD's. But they were pretty much about the same things: privacy, discrimination, fairness, human rights, and so on.

What people don't agree on now is whether or not we should open-source models. If we reveal too much, that information or model could be used to do harm. But some are strongly in favor because open-sourcing makes things more transparent and democratic.

The problems are more on the applications rather than the principles.

 

What are the challenges of putting the principles into application?

The challenges are immense.

I can tell you whether your model is biased or skewed or lacks information or is uncertain, but then you have to make ethical decisions.

This means you have to weigh tradeoffs, and that means looking at values.

If your technology is going to be good for one sector of the population but not another sector, you have to decide if that aligns with your values. That is an ethical decision which is very difficult.

For example, facial recognition can be a great thing. It can be a bad thing if put in the wrong hands, like if you have a dictator. But on the policing side of things, there is a big debate in the EU on this - should we allow facial recognition for policing purposes?

On one hand, I don't want to be spied on but on the other hand, I want to have the bad guys tracked.

 

Could you give us an example of an ethical concern of AI at work?

It can be done very safely if principles are put in place and AI is used to streamline work, speed up processes, and cut costs, and if you put guardrails in place to make sure your proprietary information doesn't get leaked.

Of course, there are issues such as layoffs. If you are cutting costs, are you going to need to lay off people? Or will you need to retrain or upskill people whose skill set will be replaced by AI?

Then, you have more controversial technology like emotion recognition. It can be an enormous help in understanding whether your staff are overworked or are enjoying what they are doing. On the other hand, it could feel like you're spying on them. And what if the emotion recognition is incorrect?

The bottom line is: don't say no to the technology, but do put the guardrails in place.

 

Are there cultural differences between countries in the field of AI ethics?

There are cultures that are more concerned with privacy. In the U.S., we value privacy very highly, and Europe does as well. But there are countries that do not see privacy as high a value and put more emphasis on the common good. So yes, there are definitely cultural differences.

The AI Act applies to European citizens, so if you are a company that will sell to Europe, you need to make sure you comply with the act. There are different fines for various levels of offenses, so depending on the countries where the AI will be used, it is important to know the acts that govern them.

 

Finally, what are some take home messages for companies and the common person regarding AI ethics?

First, companies need to have a clear standard and set of policies for internal use of AI. But then, you also need to train your people. They need to be clear on how they can use AI in the most efficient way.

If they don't know how to use AI efficiently, it will be a waste of time. Teaching prompt engineering, for example, is a great start.

Then you also need to teach them how to use it safely, teaching them about the dangers and harms - for them, for the company, and for society at large.

We need to teach at the general level, because everyone is using AI, but also at a specialized level, for example for HR, the business unit, the sales unit, etc.

It's low-hanging fruit and there's evidence that it works.

For the common person, awareness is where it starts. People will start to be aware of how they use their phone and ChatGPT.

Know the privacy limitations we talked about, but also know that AI itself has limits. The results it gives you come from a machine - which is very fast - but also limited. So don't take everything as true.

 

Put a human at the end of the AI process; don't let AI decide everything.
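
A minimal sketch of that principle, with a hypothetical confidence threshold and function name (not from the talk): anything the model is not confident about gets routed to a person rather than applied automatically.

```python
REVIEW_THRESHOLD = 0.80  # assumed cutoff; tune for your own risk tolerance

def decide(prediction: str, confidence: float) -> str:
    """Route low-confidence AI outputs to a human reviewer instead of auto-applying them."""
    if confidence >= REVIEW_THRESHOLD:
        return f"auto-apply: {prediction}"
    return f"flag for human review: {prediction} (confidence {confidence:.2f})"

print(decide("approve", 0.95))  # high confidence: applied automatically
print(decide("reject", 0.55))   # low confidence: a person makes the final call
```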

 
