
I have another blog post on AI this week (a recurring theme this year). In this one I look at some notable examples of AI "misbehavior," such as algorithms telling people to harm themselves. Why would a large language model produce this kind of output? Is it really out to get us, or is this just a misunderstanding? https://sites.google.com/view/two-minds/blog
