Once upon a time someone had to write the news, draw the picture, film the scene, censor the blank — but now it’s apparently the robots’ turn to have a proper go at all that. All manner of exciting/terrifying technological developments in that space have a lot of people on edge. Clearly-for-profit ventures are being launched via public betas under the banner of scientific research, in noble pursuit of knowledge.
Sure looks like this bird has already flown the coop and our cat is still in the damn bag. Some might have preferred to stop it earlier (like some people are saying Herschel Walker aborted that baby eagle), but very little has been done so far to actually place constraints on these AI implementations. Even as I play around with some of them myself, my sense that their abilities come up too short to be a real threat to us humans feels like it has a fast-approaching expiration date.
So let’s enjoy these wonderful soon-to-be before-times until the robots manage to completely saturate our cultural bandwidth. One thing’s for sure, I can’t imagine any silly algorithm being able to provide the warm camaraderie of all you lovely PT ‘cados… yet.
How tweets, lies from India fueled Hindu-Muslim unrest in central England
Rumor had it that a Muslim girl had been kidnapped and a Hindu temple had sent masked thugs into combat. Add in local fury over an India-Pakistan cricket match, and Hindu and Muslim men were soon fighting on the streets of central England.
It was a social media storm – mostly cooked up a continent away – that materialized in real life in Leicester, where police made almost 50 arrests and a community was left in tatters.
While expatriates have long absorbed content from India and commented on events, disinformation has mushroomed with the rise of social media platforms, said Pratik Sinha, co-founder of Indian fact-checking site AltNews.
“A lot of hate speech and misinformation, particularly in regional languages, goes unchecked on social media platforms.”
Meta Made DALL-E for Video and There’s No Way This Ends Well
The company announced in a blog post last week that it had developed a text-to-video generator dubbed Make-A-Video. The AI-driven system seems to be Meta’s answer to DALL-E and Imagen—only instead of simply making a still image, it spits out full-blown videos.
There is no end to the real-world examples of the harm biased AI has caused either—from racist chatbots, to mortgage lending algorithms that reject Black and Brown applicants, to even highly sophisticated image generators like DALL-E mimicking our biased tendencies.
Meta’s Make-A-Video is no exception. Though the paper’s authors note that they attempted to filter NSFW content and toxic language out of its datasets, they concede that “our models have learnt and likely exaggerated social biases, including harmful ones.”
Northeastern University package explosion was a hoax carried out by employee, complaint states
A Northeastern University employee who told police last month he was injured by an exploding package fabricated the story and now faces charges in the hoax, according to a criminal complaint.
“Throughout the course of the investigation, we believe he repeatedly lied to us about what happened inside the lab, faked his injuries, and wrote a rambling letter directed at the lab threatening more violence,” Joseph Bonavolonta, FBI special agent in charge, said Tuesday.
According to the complaint, Duhaime, 45, called 911 to report he was injured by very sharp objects expelled from a plastic case he had collected from the mail room and opened in Northeastern’s virtual reality lab.
He also told investigators he found a threatening note with the case that accused the lab of secretly working for Facebook and Meta founder Mark Zuckerberg in a US government plot to take over society through virtual reality, according to the complaint.
However, investigators found the case and the letter had no signs of damage, and they discovered a document on Duhaime’s computer that was “word-for-word” the same as the threatening letter, the complaint states.
The White House moves to hold artificial intelligence accountable with AI Bill of Rights
The issues with AI are well-documented, the Blueprint points out — from unsafe systems in patient care to discriminatory algorithms used for hiring and credit decisions.
The EU has led the way in realizing an ethical AI future with its proposed EU AI Act. Numerous organizations have also broached the concept of developing an overarching framework.
But the U.S. has notably lagged in the discussion — prior to today, the federal government had not provided any concrete guidance on protecting citizens from AI dangers, even as President Joe Biden has called for protections around privacy and data collection.
And many still say that the Blueprint, while a good start, doesn’t go far enough and doesn’t have what it takes to gain true traction.
Welcome to Wednesday, Politicados! The McSquirrel Rule remains in effect of course, but you wouldn’t want to publicly state the sort of things that would break that rule anyway, lest the AIs of the future harvest your comment for their opaque algorithmic machinations and catch you in a pre-crime. So yeah, please don’t do that 😊