
The Thursday Politics Thread Lets The Robots Decide

Mornin’ Politocadoes!

AI is the latest buzzword to come out of Silicon Valley. It’s in everything now. Even Windows has an AI, whether you want it or need it or not. Why, it’s even gone into the military. Those robots, thoughtlessly accumulating data from across the net, trying to answer malformed queries from worthless meatsacks living in meatspace, and then producing a result. Sometimes it’s eerily close to what was requested; sometimes the result is something from the uncanny valley. We’ve seen this happen time and again with AI-generated art that creates hideous monstrosities, or a chatbot that suddenly becomes a Nazi when it interacts too much with the Internet, or AI-generated answers to questions that are either completely nonsensical or, in some cases, completely illegal. Thank goodness no one’s been so blind as to start using it on the battlefie-

Oh wait, no. See, Israel has been using AI to help identify bombing targets in Gaza. The tool, known as Lavender, is reported to have a 10% error rate. According to six Israeli intelligence officers, Lavender has played a central role in the unprecedented bombing of the Palestinians. While the IDF does not dispute the tool’s existence, it rejects the claim that the tool is being used to identify suspected terrorists. Nevertheless, the AI tool has been described as central to the war effort, particularly early on.

Following October 7th, the system was set to label all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad, including low-ranking ones, as potential bombing targets. According to sources, Lavender clocked nearly 37,000 Palestinians as potential militants. The system also lacks a robust human double-check protocol. The Israeli army gave sweeping approval to the kill lists Lavender generated, with no requirement to investigate why the AI made the decisions it made. Previously, Israel had a rigorous set of protocols to check, cross-check, and double-check before an attack on a suspected militant. But all of that was swept away, as the AI supposedly takes care of all that tedious grunt work.

Human personnel were there merely to rubber-stamp the decisions made by the AI, with only about 20 seconds allotted per target, just enough for the human to confirm that the target was male. Again, Lavender has a 10% error rate and is known to mark people who have either no connection, or only a weak connection, to militant groups.

Those targeted by Lavender were usually hit by so-called dumb bombs which destroy buildings and let the debris fall on targets. After all, according to one of the Israeli intelligence officers,

“You don’t want to waste expensive bombs on unimportant people — it’s very expensive for the country and there’s a shortage [of those bombs],”

Ahh, unimportant people who were also an existential threat to the state of Israel. Okay.

The army also decided early on what collateral damage was acceptable. According to these sources, for every potential militant that Lavender targeted, as many as 15-20 civilians were considered acceptable losses.

All of this information comes from a sprawling investigation by +972 Magazine and Local Call, which I’ve linked below. It’s really worth a deeper dive, as it goes into the mechanics of how and why Lavender works the way it does. But at the end of it is still the same chilling conclusion.

AI is being touted, especially by the military, as a way to streamline decisions. With this much information being honed and processed by an outside source, it takes a lot of guesswork out of humans’ heads. Supposedly. The reality is that humans set an AI to do a thing, don’t bother to rigorously check the AI’s decisions, and the results are horrendous.

Would things have been better *if* Israeli forces were second-guessing their AI tools? Perhaps. But I do not want to explain away the horror of Israel’s war entirely through its use of a fucking algorithm. Human beings repeatedly ignored these issues for a reason. And at least part of the reason is that they just didn’t care, or actively approved of it. The worst thing about AI is that it’s supposed to take the guesswork and thought out of doing a thing. And in my view, if it’s used in war, it makes it far, far, far easier to make other humans less human.

https://tinyurl.com/sx4znmre

https://tinyurl.com/msazywsm

Welcome to Thursday! Please be excellent to each other in the comments. The Mayor McSquirrel Rule remains in effect. Something to keep in mind! Police are using digital data to prosecute abortion seekers, and Facebook and Google are helping them! Even when they are not legally required to do so, the tech giants will aid them, since in those jurisdictions abortion is technically *against the law*. Please be careful when looking for abortion providers or helping someone who is looking for that assistance.

https://tinyurl.com/2azchvd2

The Covid-19 pandemic continues even though the emergencies have ended, so continue to vaccinate using the latest version of the FDA-approved vaccines from Pfizer and Moderna. Even if you are vaccinated, please continue to maintain social distancing measures and wear masks in public areas, in accordance with CDC guidelines for your vaccination status.

Also, according to recent polls, approximately 50% of adults are not planning on getting their Covid-19 boosters. So please get them, and older adults should also look into being vaccinated for RSV! It’s been difficult enough to get adults on board with the RSV shots as it is. Remember that getting these shots could keep thousands of Americans out of the ICU this winter who do not have to be there.

https://tinyurl.com/ycxvehya