This week in AI: AI ethics keeps falling by the wayside



Keeping up with a field that moves as fast as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments.

Ahead of the holidays, the news cycle in AI finally (finally!) calmed down this week. However, that is not to say that there was a lack of material to write about—a boon and a bane for this exhausted reporter.

This morning, I was drawn to a headline from the AP: “AI image-generators are being trained on explicit photos of children.” The images in question were found in LAION, a data set used to train numerous well-known open-source and commercial AI image generators, including Stable Diffusion and Imagen. The Stanford Internet Observatory, a watchdog group, worked with anti-abuse organizations to identify the illegal material and report the links to law enforcement.

LAION, a nonprofit, has taken down its training data and pledged to remove the offending material before republishing it. But the episode underscores just how little thought is going into generative AI products as the pressure to compete ramps up.

Thanks to the widespread availability of no-code AI model development tools, it is becoming frighteningly easy to train generative AI on just about any data set. That makes it easier for tech giants and startups alike to ship such models. But with a lower barrier to entry comes the temptation to set ethics aside in favor of a faster route to market.

There is no doubt that ethics is hard work. Sorting through the thousands of problematic images in LAION, to take this week’s example, will take time. And ideally, developing AI ethically means working with all relevant stakeholders, including organizations that represent groups often marginalized and adversely affected by AI systems.


The industry is full of examples of AI release decisions made with investors, not ethicists, in mind. Take Bing Chat (now Microsoft Copilot), Microsoft’s AI-powered chatbot on Bing, which at launch insulted a journalist’s appearance and compared them to Hitler. As of October, ChatGPT and its rival Bard were still giving outdated, discriminatory medical advice. And the latest version of OpenAI’s image generator, DALL-E, shows evidence of Anglocentrism.

Suffice it to say that the pursuit of AI supremacy, or at least Wall Street’s notion of AI supremacy, is not without consequences. Perhaps with the passage of the EU’s AI regulations, which threaten fines for failing to comply with certain AI guardrails, there is some hope on the horizon. But the road ahead is long.

Ethics is crucial to AI’s advancement, yet it too often recedes amid the push to innovate. Deploying AI responsibly and beneficially means putting ethical standards front and center, and that requires collaboration among industry players, policymakers, and developers to prioritize ethics, build trust, and maximize AI’s positive impact on society.

(Information Source: Techcrunch.com)

