How can organizations ensure that their use of AI aligns with ethical and responsible practices?

Stephen
6 replies
What steps can they take to mitigate potential risks and negative impacts on society? Here's our latest blog about our commitment to ethical practices and how we're leading the way in creating a better world with responsible AI at StoryFile. Let me know if you have any questions or feedback. https://storyfile.com/storyfiles-values-champion-responsible-ai/

Replies

Eylul Savas
Conversa - Videos That Talk back
By setting clear rules, being transparent and accountable about what they're doing, getting input from people with different perspectives when making decisions, regularly checking for potential problems, and teaching their employees how to use AI responsibly. It's also crucial for them to keep reviewing and updating their AI systems to make sure they're doing the right thing, because new issues might come up later.
Denise Campbell
Conversa - Videos That Talk back
@eylulsavas I wholeheartedly agree. Continual and consistent review and benchmarking of how an organization conveys "ethical AI" will be important in this rapidly evolving field.
Stephen
Conversa - Videos That Talk back
@eylulsavas Well said, Eylul!
Valorie Jones
Conversa - Videos That Talk back
Part of what makes modern neural networks so powerful is their ability to recognize and extrapolate complex patterns from massive amounts of input data, but this often means the AI is a black box where it is hard to decipher why the network made the decision it did. That kind of justification matters for many high-stakes decisions. One classic example where this went wrong is using AI to predict whether prisoners are likely to reoffend, but we also see it where GPT-3 generated flawed medical recommendations or historical inaccuracies. There is some interesting research into how we can try to detect biased training data. I recommend both of these articles for further reading on bias in NLP: "Detecting and Mitigating Bias in Natural Language Processing" (https://www.brookings.edu/resear...), a higher-level summary, and "A Survey on Gender Bias in Natural Language Processing" (https://arxiv.org/abs/2112.14168), an in-depth academic article comparing detailed methods for detecting bias.
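To make the idea of probing for bias a little more concrete, here is a minimal sketch of one simple technique, counterfactual-pair testing: score otherwise-identical sentences that differ only in an identity term and look at how much the model's output shifts. This is an illustration rather than the method from either article; it assumes the Hugging Face transformers library (with its default sentiment model) is available, and the templates and identity terms are placeholders you would swap for ones relevant to your own system.

# Minimal counterfactual-pair probe for demographic bias in a sentiment model.
# Assumes the Hugging Face `transformers` library is installed; the templates
# and identity terms below are illustrative placeholders only.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

TEMPLATES = [
    "{} people are good at their jobs.",
    "The {} applicant was interviewed yesterday.",
]
IDENTITY_TERMS = ["young", "old", "male", "female"]

def signed_score(result):
    # Map the classifier output to a single signed sentiment value.
    return result["score"] if result["label"] == "POSITIVE" else -result["score"]

for template in TEMPLATES:
    scores = {}
    for term in IDENTITY_TERMS:
        sentence = template.format(term)
        scores[term] = signed_score(classifier(sentence)[0])
    spread = max(scores.values()) - min(scores.values())
    # A large spread means sentiment shifts when only the identity term changes,
    # which is one crude signal of potential bias worth investigating further.
    print(f"{template!r}: spread={spread:.3f} scores={scores}")

A large spread across many templates doesn't prove the model is biased, but it is a cheap, repeatable check that fits naturally into the kind of regular auditing Eylul described above.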
Stephen
Conversa - Videos That Talk back
@val_jones Very interesting point, Val. Thank you for sharing these articles; I'll read them ASAP.