I am conducting a study to evaluate Google's responsible AI review process.

Context: When AI solutions and technologies are proposed, the Google Responsible AI team conducts a review to ensure the proposed solution/technology aligns with its established AI principles.

I would like to apply the Rate (1994) Ethical Decision Making (EDM) framework to validate the integrity of the review process, and to establish whether an EDM framework is useful for increasing confidence in the intended outcomes of proposed AI solutions.

I would welcome thoughts, suggestions, and ideas on an evaluation approach.

Thank you
