Should AI developers be held personally liable for the biases present in their models?

Esranur Kaygin
7 replies
My opinion: Blame the code or the coder? Holding devs personally liable might sound flashy, but it ignores the bigger picture. We must address the data, algorithms, and societal factors feeding the bias monster. Let's fix the system, not just punish the programmers.

Replies

Shreya Gupta
No, but implementing strict guidelines and a solid framework can significantly reduce bias and errors in development, leading to fairer outcomes.
Gurkaran Singh
As an avid tech enthusiast and data science aficionado, I believe that the question of holding AI developers personally liable for the biases in their models is a complex one. Instead of pointing fingers solely at the individuals crafting the code, it's crucial to take a step back and examine the broader ecosystem at play. Let's not just blame the coder – let's focus on addressing the root causes such as biased data, flawed algorithms, and societal influences that perpetuate these issues. By fixing the system as a whole, we can create a more fair and unbiased AI landscape. Let's shift the narrative from punishment to progress.
Esranur Kaygin
@thestarkster Great answer. Sometimes I feel like humans are just biased, and we're trying to create a bias-free world (an AI world) while the data we feed it to build this world is itself biased. In a way, it's like fruit of the poisonous tree.
Joep van den Bogaert
As a company, you are definitely responsible for how you use AI models. You cannot blame the model for unfair decisions or other harm done by your application. Bias and inaccuracies are part of the tech at this moment, so you have to deal with them. I recently wrote a LinkedIn post about accountability following the Air Canada ruling: https://www.linkedin.com/posts/j.... Besides, it's impossible to create a model that works well and is unbiased for every use case, so don't expect OpenAI to fix everything for everyone.
Joep van den Bogaert
@esranur_kaygin I agree. I think I misunderstood what you meant by "AI developers" at first. But both the model providers (OpenAI, Google) and the companies building applications with these models have responsibilities. Model providers should communicate clearly about the limitations and risks of using their tech, and preferably also provide guidelines and no-gos. I think they are dropping the ball on this right now, as all you hear is how capable the models are of all these amazing things. Companies incorporating these models into their software have a responsibility for how they are used. That's what I meant here originally. In both cases, it's not the programmer who is responsible imo, but higher management. They should be aware of the risks, take action to evaluate thoroughly, and decide what can be safely deployed.
Esranur Kaygin
@jopie Gonna copy-paste my previous comment here too: Just as plane manufacturers aren't directly responsible for a crash, AI developers aren't solely to blame for AI misuse. However, both industries must prioritize safety measures, like internal and external audits, and regulations. This ensures responsible practices and builds trust in their technologies.
Milli Sen
Agreed. The coder cannot be held entirely responsible, since they cannot fully control the biases of the model. There are too many factors to consider in terms of the training data and the model's capabilities.