We are conducting a research project, "Measuring Trust in Explainable Models", and are looking for participants to complete the questionnaire survey for this study.

The study measures users' trust in AI explanations: each participant views a question or image, makes a decision, and then rates their degree of trust in that decision.

The study aims to highlight the importance of embedding explanation methods such as LIME and Grad-CAM in decision-support neural networks to improve system transparency and user trust.
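For context, here is a minimal sketch of the kind of saliency explanation Grad-CAM produces. This is a hypothetical illustration only, not the study's actual pipeline; it assumes PyTorch and a recent torchvision with a pretrained ResNet-18.

# Minimal Grad-CAM sketch (hypothetical illustration, not the study's pipeline).
# Assumes PyTorch and torchvision >= 0.13.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block, whose spatial feature maps Grad-CAM weights.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image: torch.Tensor) -> torch.Tensor:
    """Return a heatmap over a preprocessed (3, H, W) image showing which
    regions most influenced the model's top prediction."""
    model.zero_grad(set_to_none=True)
    logits = model(image.unsqueeze(0))      # shape (1, 1000)
    logits[0, logits.argmax()].backward()   # gradient w.r.t. the top class score
    # Global-average-pool the gradients to get one weight per channel.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    # Weighted sum of the activation maps, keeping positive evidence only.
    cam = F.relu((weights * activations["value"]).sum(dim=1))
    # Upsample the coarse map to the input resolution and normalise to [0, 1].
    cam = F.interpolate(cam.unsqueeze(0), size=image.shape[1:],
                        mode="bilinear", align_corners=False)[0, 0]
    return cam / cam.max().clamp(min=1e-8)

# Usage: heatmap = grad_cam(preprocessed_image)  # overlay on the input image

An overlay of such a heatmap on the input image is the sort of explanation whose trustworthiness the survey asks participants to rate.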

Before accessing the links, you should have a basic background in machine learning and decision trees.

We would appreciate your involvement in this survey round via the online survey links below; each survey takes 5-10 minutes to complete:

https://ramiibrahim.limesurvey.net/976154?lang=en

https://ramiibrahim.limesurvey.net/518388?lang=en

https://ramiibrahim.limesurvey.net/763522?lang=en

https://ramiibrahim.limesurvey.net/744241?lang=en

https://ramiibrahim.limesurvey.net/483341?lang=en

https://ramiibrahim.limesurvey.net/476152?lang=en

Thank you so much!
