Generative AI can be used to manipulate people into making harmful decisions, says US Federal Trade Commission in warning to firms building and using AI-powered tools...
As the defence sector looks into using artificial intelligence to create autonomous weapons, we examine the risks and ethics of military AI...

Amazon Web Services is under fire for not providing vital emissions data to customers – we investigate the issues...
Microsoft and Google go into battle for enterprise AI
As developments in generative AI accelerate, Microsoft and Google have each unveiled their plans around enterprise applications...

Nobody doubts the importance of digital transformation – but experts say it won’t work without cultural change as well...
The hype around generative artificial intelligence (AI) models such as ChatGPT and other large language models has put ethical considerations surrounding their use to the fore, such as copyright issues, reducing harm from misinformation and potential biases in AI models...
While some of those considerations are already being addressed by big tech firms and industry regulators through ethical guidelines, the surge of interest in AI that these large language models are spurring demands that organisations take AI ethics more seriously...
As generative AI technology gains adoption, questions emerge about how both the developers of generative models and users of these models can work with generative AI ethically.
Ethical AI use has long been a topic of debate in the tech world – and beyond – but it is becoming increasingly important to set up guardrails and establish guiding principles for how to use this advanced and highly accessible form of AI.
In this guide, we’ll discuss what generative AI ethics look like today, the current challenges this technology faces, and how corporate users can take steps to protect their customers, their data, and their business operations with appropriate generative AI ethics and procedures in place...
Generative AI ethics are important because, as with many other emerging technologies, it is all too easy to unintentionally use this technology in a harmful way...
It’s challenging to be confident that you’re using generative AI ethically because the technology is so new: its creators are still uncovering new use cases and new concerns. And because generative AI is changing on what feels like a daily basis, there are still few legally mandated regulations governing this type of technology and its proper usage.
However, generative AI regulations will soon be established, especially in trailblazing regulatory regions like the EU. In the meantime, many companies are taking the lead and developing their own ethical generative AI policies to protect themselves and their customers. You owe it to your customers, your employees, and your organization’s long-term success to establish your own ethical use policies for generative AI...
Generative AI’s quick growth has raised cybersecurity and regulatory compliance concerns. These concerns are certainly warranted and need to be examined from all angles as companies manage their cybersecurity postures. But what many people don’t yet realize is this same technology can also supplement security management tools and teams if used strategically...
The House of Lords has put out a call for evidence as it begins an inquiry into the seismic changes brought about by generative AI (artificial intelligence) and large language models.
The speed of development and the lack of understanding of these models’ capabilities have led some experts to warn of a credible and growing risk of harm...
The President’s Council of Advisors on Science and Technology (PCAST) has launched a working group on generative artificial intelligence (AI) to help assess key opportunities and risks and provide input on how best to ensure that these technologies are developed and deployed as equitably, responsibly, and safely as possible...
The EU AI Act is the first legislation proposed to regulate the use of AI based on the risk posed by different AI systems (such as automated social scoring, which will be banned). The EU Parliament has proposed including providers of foundation models, including generative AI, in the scope of the legislation and requiring them to disclose training data...
The rise of big data analytics more than a decade ago raised novel ethical questions and debates because emergent tools made it possible to infer private or sensitive information about people that they had not, and would not want, revealed. How should companies handle their ability to possess such information?
Given its potential to supercharge data analysis, generative AI is raising new ethical questions and resurfacing older ones...
The biggest issues with many of the current Generative AI models, including the versions of GPT, are the source of their training material and the quality of their answers. While the incredible power of LLMs is self-evident, some of the outputs, such as those derived from regurgitated Reddit posts, leave much to be desired...
AI can leverage scholarly content to create countless products, and the more use cases that can be created, the more value that content will have. It behooves publishers to make their content as relevant as possible in their respective disciplines and to consider how they want to be part of the move towards a Generative AI world...
If our curated content is as valuable as I believe it can be when properly leveraged, might we even see GenAI companies looking at scholarly publishers as potential targets of acquisition?
"We, the Leaders of the Group of Seven (G7), stress the innovative opportunities and transformative potential of advanced Artificial Intelligence (AI) systems, in particular, foundation models and generative AI. We also recognize the need to manage risks and to protect individuals, society, and our shared principles including the rule of law and democratic values, keeping humankind at the center. We affirm that meeting those challenges requires shaping an inclusive governance for artificial intelligence...
We believe that our joint efforts through the Hiroshima AI Process will foster an open and enabling environment where safe, secure, and trustworthy AI systems are designed, developed, deployed, and used to maximize the benefits of the technology while mitigating its risks, for the common good worldwide, including in developing and emerging economies with a view to closing digital divides and achieving digital inclusion..."
The Truth Is in There: The Library of Babel and Generative AI
"Generative artificial intelligence offerings such as ChatGPT are being retooled and developed so rapidly that anyone who attempts to write about them risks their words being outdated before they reach publication. As we reckon with how generative AI is shaping our relationships with work, information, and one another, it is worth trying to analogize our current experience to others, real or imagined, to see what perspective we might find...
We may view generative AI as one method among many for understanding the world, but we should not mistake it for the world itself..."
New generative AI guidelines aim to curb research misconduct
"China’s Ministry of Science and Technology last month published new guidelines on the use of generative artificial intelligence in scientific research, as part of its efforts to improve scientific integrity and reduce research misconduct. The new rules notably include a ban on the ‘direct’ use of generative AI tools when applying for research funding and approval.
Under the guidelines, generative AI can still be used in research, but any content or findings that use the technology must be clearly labelled as such..."
New book: Moral AI by Jana Schaich Borg et al. Pelican (2024)
"The industrialization of machines in the nineteenth century and of chemicals in the twentieth century led to both gains and disasters. Artificial intelligence (AI) will produce even more complex effects, argue three interdisciplinary researchers. Indeed, they introduce their book’s stimulating analysis of moral dilemmas in AI with snippets of both good and bad AI-related news — from the worlds of art, environment, investment, law, media, medicine, the military, politics and more. AI “deserves both pessimism and optimism”, they note..."
"Generative AI provides many opportunities for different sectors. However, it also harbours risks, such as the large-scale generation of disinformation and other unethical uses with significant societal consequences...
The technology also entails the risk of abuse. Some risks are due to the tool’s technical limitations, and others have to do with the (intentional or unintentional) use of the tool in ways that erode sound research practices. Other risks for research in Europe could stem from the proprietary nature of some of the tools (for example, lack of openness, fees to access the service, use of input data) or the concentration of ownership. The impact of generative AI on research and various aspects of the scientific process calls for reflection, for example, when working with text (summarising papers, brainstorming or exploring ideas, drafting or translating). In many respects, these tools could harm research integrity and raise questions about the ability of current models to combat deceptive scientific practices and misinformation...
Different institutions, including universities, research organisations, funding bodies and publishers, have issued guidance on how to use these tools appropriately to ensure that the benefits of those tools are fully utilised. The proliferation of guidelines and recommendations has created a complex landscape that makes it difficult to decide which guidelines should be followed in a particular context. For this reason, the European Research Area Forum (composed of European countries and research and innovation stakeholders) decided to develop guidelines on the use of generative AI in research for funding bodies, research organisations and researchers, in both the public and private research ecosystems..."
Aligning AI with human values needs a democratic approach
"Ensuring that AI stays aligned with human values requires that we pay attention to the ethical, moral and historical training of AI systems and that, along with technical knowledge, the system is exposed to the ethical frameworks of a multitude of different people...
Colleges and universities have a special responsibility here even though their students are not precisely representative of the general population. Of the more than 18 million students this year (and more entering the post-secondary system every year), almost all are using computers and the internet. These students form a good foundation for the creation of democratised LLMs..."
Lethal AI weapons are here: how can we control them?
"Autonomous weapons guided by artificial intelligence are already in use. Researchers, legal experts and ethicists are struggling with what should be allowed on the battlefield...
Some argue that accurate autonomous weapons, such as AI-equipped drones, could reduce collateral damage while helping vulnerable nations to defend themselves. At the same time, observers are concerned that passing targeting decisions to an algorithm could lead to catastrophic mistakes..."
New Discovery Applications for Scholarly Information in the Era of Generative Artificial Intelligence
"Whether you believe artificial intelligence is friend or foe, it’s undoubtedly changing many aspects of how we work, including the way we discover content, information, and knowledge. In our first post exploring AI and information discovery, we discussed the evolution of AI, and how it can potentially be applied to solve pain points for researchers and publishers alike.
Here, we explore generative AI (GenAI). These systems can produce original and realistic outputs based on the patterns and data they have been trained on. We’ll discuss how GenAI is moving us towards conversational discovery and what this might mean for publishing, as well as potential future trends in information discovery..."