1. How can the problem of algorithmic bias in AI be solved?
It seems to me that the datasets used to train AI often carry historical bias that affects the fairness of the resulting decisions. What are some practical ways to eliminate or at least minimize this bias? Are there success stories worth looking at? Below is a toy sketch of the kind of check I have in mind.
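This is just an illustrative sketch, not a real pipeline: the column names ("group", "approved") and the data are made up, and it only measures one simple notion of fairness (demographic parity) rather than fixing anything.

```python
# Hypothetical example: measuring demographic parity on a toy dataset.
# "group" is a sensitive attribute, "approved" is the model's decision.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

# Selection rate per group: P(approved = 1 | group)
rates = df.groupby("group")["approved"].mean()

# Demographic parity difference: gap between the best- and worst-treated group.
# A value near 0 suggests similar outcomes across groups; a large gap flags bias.
dp_gap = rates.max() - rates.min()
print(rates)
print(f"Demographic parity difference: {dp_gap:.2f}")
```

Is this roughly the kind of audit people run in practice, or are there better-established metrics and mitigation techniques (reweighting, constrained training, etc.) that I should look into?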
2. How can the transparency of AI models be improved?
The “black box” nature of many AI systems makes it difficult for me to explain their decision-making process, and this is especially problematic with deep learning models. Are there methods or tools that make AI systems more transparent and easier to explain? For example, I have experimented with something like the sketch below.
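Again, purely an assumed setup: the dataset and model are placeholders, and I am using permutation importance from scikit-learn only as one model-agnostic way to see which features drive predictions.

```python
# Hypothetical example: permutation importance as a model-agnostic peek
# inside a "black box" classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much the test score drops:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
```

This works for tabular models, but I don't see how to extend this kind of analysis to large deep learning models, which is where the explainability problem feels most acute.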
3. How can AI be kept ethical in automated decision-making?
How do you ensure that AI follows ethical standards when it makes decisions in areas like autonomous driving or medical diagnostics? I'm curious whether there are frameworks or norms for these fields that I can look to for guidance on incorporating ethical considerations into AI decision-making.
4. How should the regulatory framework for AI be established?
The current U.S. regulatory framework for AI is underdeveloped and in many cases lacks uniform standards. Which countries or regions are doing a better job on this front? Can successful regulatory experiences elsewhere be used to improve the relevant system in the U.S.?
If anyone has any experience or research in this area, I would love to get some specific insights and help.