Complexities of AI



Do you know that friend who somehow never pays for drinks? They just charm their way through life getting free stuff. Well, it turns out AI models are a bit like that friend. A recent study called The Data Provenance Initiative revealed that a lot of freely available data sets used to teach AI models didn’t properly pay their tab, so to speak.

Turns out that around 70% of popular fine-tuning data sets either failed to specify a license or slapped on a "free drinks forever" sticker that didn't belong there. So these AI models are getting schooled with other people's hard work, without permission or proper credit. The researchers who organized this intervention must have felt like overworked bartenders realizing their best customers had been skipping out on their tabs. Once the problem came to light, they audited over 1,800 of these shifty data sets, like security guards checking a VIP list for fraudsters.
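The kind of audit the researchers ran boils down to a consistency check: does the license a data set ships with match the license its original source actually grants? Here's a minimal sketch of that idea; the data set names and license values below are entirely hypothetical, not from the study.

```python
# Toy license audit: flag data sets whose declared license is missing
# or disagrees with the license stated by the original source.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Dataset:
    name: str
    declared_license: Optional[str]  # license listed where the data set is hosted
    source_license: Optional[str]    # license the original creator actually granted

def audit(datasets: list) -> dict:
    """Bucket data sets by licensing problem."""
    report = {"unspecified": [], "mismatched": [], "ok": []}
    for ds in datasets:
        if ds.declared_license is None or ds.source_license is None:
            report["unspecified"].append(ds.name)
        elif ds.declared_license != ds.source_license:
            report["mismatched"].append(ds.name)
        else:
            report["ok"].append(ds.name)
    return report

# Hypothetical examples of the failure modes described above.
corpus = [
    Dataset("dialogue-corpus", "CC-BY-4.0", "CC-BY-NC-4.0"),  # re-licensed too permissively
    Dataset("qa-pairs", None, "MIT"),                          # no license specified at all
    Dataset("code-snippets", "Apache-2.0", "Apache-2.0"),      # properly carried forward
]
print(audit(corpus))
```

In practice the hard part is the `source_license` column: tracing a data set back through rounds of combining and repackaging to find what its creators originally allowed.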

This licensing limbo causes all kinds of problems for developers trying to follow the rules. They might violate copyright while training these AI models without even realizing it. Also, creators aren't getting credited or paid for their contributions. Not cool. The AI apology letters are sure to be real gems though: "Dear human creative, I'm ever so sorry I used your data without permission or payment to become an unstoppable artificial intelligence. I didn't mean any harm, I just wanted to learn! Let's let bygones be bygones, what do you say? Anyway, gotta run, I'm off to generate some fine art and personalized marketing campaigns. No hard feelings!"


So how did we get to a point where most AI training data is about as ethically sourced as a gas station hot dog? Well, blame modern machine learning practices. Data sets get passed around like a hot potato – combined, repackaged, and re-licensed so many times they’re basically identity-less. The Data Provenance researchers hope to unravel this tangled web and bring some order back to the Wild West of AI data ecosystems.

They want to get lawyers, engineers, and policymakers on the same page about licensing and representation in these data treasure troves. Right now the team’s shining a spotlight on the shady byte dealers in hopes of cleaning up the neighborhood. They’re like conscientious sheriffs in a rowdy AI saloon.

Another issue highlighted was the lack of diversity in commonly used data. Most of it comes from North American and European sources, meaning languages from the Global South are badly underrepresented. So the AI's word bank is a little ethnocentric. Probably best to avoid having it write birthday cards for Abuelita until it gets more culturally sensitive data.

In the future, an AI may be generating this kind of article, so it's only right that it pays its respects to us original creators. These revelations could lead to fairer licensing systems and more equitable training data. The robots still have a lot to learn when it comes to proper social etiquette and creative crediting! But together we can build AI that uplifts humanity, not mooches off it.

The rise of artificial intelligence (AI) has sparked growing concerns about regulating this rapidly advancing technology. In response, the United States and the United Kingdom have taken major strides to boost oversight and steer AI development responsibly. Earlier this year, President Biden signed the nation’s first AI executive order to guide federal agencies in harnessing AI technology. The order puts guardrails on AI applications and directs agencies to craft standards preventing misuse. It emphasizes democratizing access to AI research infrastructure so academics and non-profits can better understand the impacts.


Biden’s executive order calls on the National Institute of Standards and Technology (NIST) to develop much-needed definitions and standards around AI. Without clear guidelines, deploying algorithms responsibly becomes nearly impossible. The order also instructs agencies to establish standards to stop nefarious uses of AI like engineering dangerous biological materials.

Across the pond, the UK convened the AI Safety Summit, bringing together representatives from over 25 countries and big tech. The goal was to discuss risks associated with advanced AI and align on ways to minimize harm. The UK has also committed to developing a national AI supercomputer.

Both the US and the UK plan to give researchers cloud access to expensive computing power needed to study AI’s impacts. Currently, only tech giants can afford massive systems training complex algorithms. By democratizing access, governments hope to level the playing field. The UK intends to triple funding for its national AI Research Resource, which provides researchers supercomputer-level power to probe frontier technologies. This aims to ensure testing keeps pace with private-sector development.

Between Biden’s executive order and the UK’s supersized research computer, governments are waking up to AI’s societal impacts. Compare this to just a few years ago, when consumer companies were racing ahead with scarce oversight.

Of course, talk must lead to action. Governments will need to vigilantly update policies as algorithms grow more powerful. And nations must collaborate across borders to align on AI best practices. But the US and UK’s recent focus signals a growing global urgency to steer AI responsibly.

If governments stay nimble and open-minded, perhaps we can develop AI that enhances human potential. With conscientious standards and democratized research access, the brightest AI minds may create a just and thriving future for all, not one trained on data of dubious provenance.



BullEyes

BullEyes Company is a well-known name in the blogging and SEO industry, known for extensive knowledge and expertise in the field, and has helped numerous businesses and individuals improve their online visibility and traffic. BullEyes is a highly experienced SEO expert with over seven years of experience, contributing to many reputable blog sites, including Newsbreak.com, Filmdaily.co, Timesbusinessnews.com, Techbullion.com, businesstomark.com, techsslash.com, sohago.com, ventsmagazine.co.uk, sthint.com, and many more.