Google gets a lot of criticism from developers for the lack of human involvement in its app review process, with the Play Store instead opting for automated scanning of apps for viruses, policy violations, and other bannable offenses. Saying Google’s app review bots “ban first and ask questions later” would actually be an improvement over the current situation, since the bots can’t ask questions. The bots ban, send an automated email, and it’s up to the developers to figure out why they were banned and jump through hoops to make the bots happy, often without being able to speak to a human.
Google and Apple both collect a percentage of app sales, which the companies characterize as a necessary tax that pays for the infrastructure of the store ecosystem. Apple uses this money partly to hire an army of human app reviewers, a system that Google Play developers often hold up as an example that Google should follow. Instead, Google only has this bot system—or at best, an extremely small team of manual reviewers—and developers frequently complain that they are at the mercy of an illogical bot, with no human to speak to even during an appeals process.
Video game programmers learn to celebrate “crunch” from the get-go. Like many of his peers, Kevin Agwaze went to a specialized school that taught coding for games, rather than a traditional university. Such schools normalize a brutal workweek, treating high dropout rates as a badge of honor, and instilling the idea that the games industry is a shark tank where only the strong survive.
Just what we have schools for, no?
I think it’s shedding a bunch of light on the fact that platforms like ours have a lot of power. And I think that scares people. I think that’s reasonable.
In last week’s Reading List, I noted similar sentiments by Twitter’s Jack Dorsey. These companies need and, more importantly, want to be regulated. It will take a load off of their shoulders, regardless of the quality of the regulation we end up with.
But even if they didn’t want to…
The bottom line: Gen Z and young Millennials live and breathe social media and technology and are confident in their own ability to use these platforms and detect misinformation. Yet — with notable bipartisan agreement — they think Big Tech’s power must be checked.
Granted, this is US-centric. Still, it's important to note that there is clear overlap between those who are against government regulation and those who feel social platforms have too much power in their hands. There is friction that will have to be dealt with one way or the other.
A user in Brazil uploaded an image to raise awareness about breast cancer that featured eight pictures of female breasts—five of them with visible nipples. Facebook’s software automatically removed the post because it contained nudity. Human Facebook moderators later restored the images, but Facebook’s Oversight Board still issued a ruling criticizing the original removal and calling for greater clarity and transparency about Facebook’s processes in this area.
Of course a US company would have a particular view on nudity and its role. Of course their automated tools would reflect that.
A user in the United States shared an apocryphal quote from Joseph Goebbels that “stated that truth does not matter and is subordinate to tactics and psychology.” Facebook removed the post because it promoted a “dangerous individual”—Goebbels. The user objected, arguing that his intention wasn’t to promote Goebbels but to criticize Donald Trump by comparing him to a Nazi. The board overturned Facebook’s decision, finding that Facebook hadn’t provided enough transparency on its dangerous individuals rule.
It’s my understanding (hopefully I’m wrong) that per Facebook’s policy, if an individual is deemed dangerous, any quoting of said individual is to be taken down, regardless of context or purpose. Gee, I wonder why this ruling got overturned by the Board. Right?
Just in case you had any doubt that Facebook wants to be regulated, they created the Oversight Board—with journalists, judges, politicians, and more from all over the world—specifically to decide on disputed moderation. The first batch of cases was chosen by Facebook, not by users (users will be able to escalate in the future), and 4 out of 5 moderation decisions were overturned.
The Oversight Board's decisions are binding for Facebook, even if Facebook disagrees with any of them. Essentially, Facebook created its own Supreme Court, whose members can't be fired by the company either, because no one else was ready to deal with the mess quickly enough. This is actually a good move in the absence of actual government regulation. It doesn't change the fact that the Board can't possibly take into account regional laws, but that's an issue for another time (and for different mechanisms), I'd say.
To those who insist that a company is a private entity and can have whatever internal policy it wants, even a company this big and this ill-equipped to deal with political matters: I guess Facebook gave them the answer they didn't expect.
The Oversight Board is a net positive. It’s also neither enough, nor a proper solution. Let’s all keep our fingers crossed then. We should be used to it by now.
For many years, OSINT (open-source intelligence) researchers and journalists have developed methods for the analysis of networked data that has led to a better understanding of the identities of criminals and their motives. Police and journalists are increasingly using social media as a platform for investigations, gathering potential evidence, witness accounts, and other clarifying information, hoping the digital traces they find on social media can provide clues for both legal action and rapid-response reporting.
Staying on the social media/gift that keeps on giving train: expunging content for prosecutable offences can make prosecuting said offences infeasible. This is a real issue to be sure, one complicated not only by a platform's crackdown on illegal or merely policy-violating content, but also by evolving expectations around privacy.
One idea for addressing this challenge is the “human-rights locker” (also known as a “digital locker” or “evidence locker”), where publicly shared content—including content and accounts that have been removed by the platform—is collected, preserved, and verified for future research and investigation by select individuals and groups, like social scientists, researchers, advocacy organizations, historians, journalists, and human-rights investigators. Although many platforms have specific procedures for data requests, they are inconsistent, can take a long time, may be costly, and may differ by jurisdiction.
A locker would try to remedy some of this, while continuing to allow platforms to do the necessary work of taking hateful and dangerous content out of circulation, where it could otherwise be amplified by trending or recommendation algorithms. Ideally, a set of standards would apply across platforms to address how digital information is stored, how to preserve a digital chain of custody, who can access the information, a credentialing process for those wanting access, and what safeguards should be in place to prevent potential abuse of data. This dataset would contain only public posts and accounts, not private messages, and pertain to significant events. Furthermore, social media companies should provide information on why the content was removed, whether it was manually or automatically flagged for removal, and whether appeals were made to reinstate the accounts or content.
“Ideally”. This is one tough cookie.
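For what it's worth, the chain-of-custody piece of such a locker is the most tractable part: each preserved record can commit cryptographically to its content and to the record before it, so later tampering is detectable. A minimal sketch (the record fields and structure here are my own assumptions, not any platform's actual scheme):

```python
import hashlib
import json


def locker_entry(content: bytes, metadata: dict, prev_hash: str) -> dict:
    """Create one chain-of-custody record for a preserved post.

    Each record stores a digest of the raw content plus the hash of the
    previous record, so altering any record breaks every link after it.
    """
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,   # e.g. platform, removal reason, flag type
        "prev_hash": prev_hash, # hash of the preceding record in the chain
    }
    # Canonical serialization (sorted keys) so the hash is reproducible.
    serialized = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(serialized).hexdigest()
    return record


def verify_chain(records: list) -> bool:
    """Recompute every link; returns False if any record was altered."""
    prev = "0" * 64  # sentinel hash for the first record
    for rec in records:
        if rec["prev_hash"] != prev:
            return False
        body = {k: v for k, v in rec.items() if k != "record_hash"}
        serialized = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(serialized).hexdigest() != rec["record_hash"]:
            return False
        prev = rec["record_hash"]
    return True
```

The hard parts the article points at—who gets credentialed, cross-platform standards, jurisdiction—are governance problems, not hashing problems.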
Google’s privacy efforts are happening alongside sweeping changes from Apple that similarly make it harder to track individual user data online. These major changes come amid a privacy reckoning in the U.S. and in Europe over online data.
Facebook might be going after Apple for the latter's app-tracking and privacy-labelling changes, while Google (and I bet not just Google) is looking for a workable alternative to cookies. Some advancements are only forced by applying pressure. But not everyone gets it at first. Think of Flash and the PR battleground it grew into when Steve Jobs publicly dissed it in 2010. We had companies swearing to keep Flash going on TVs, promising support on mobile devices, etc. That support either never came or was never up to snuff. Keep in mind that back in 2010 the iPhone wasn't the powerhouse it is today; Apple didn't hold the sway it holds today in the smartphone market. The problem for those pretending otherwise was that Flash really was unworkable for the future that was at hand.
I thought I should lighten up a bit this week so here we go.
At the moment, people accept their feelings are just how they feel — but Newell says BCIs will soon allow the editing of these feelings digitally, which could be as easy as using an app.
“You could make people think they [are] hurt by injuring their tool, which is a complicated topic in and of itself,” he said.
There’s also a choice quote about tentacles. But it’s best that you read the interview (and watch the accompanying video) as it’s a very interesting read and topic.
And Newell does get that his enthusiasm alone won't simplify matters going forward. But then he pretends everything will be a matter of consumer choice and brings up how no one is really obligated to use a smartphone; it's their choice. That's nice, Mr. Newell. But also fundamentally wrong.
“There’s nothing magical about these systems that make them less vulnerable to viruses or things like that than other computer systems,” Newell said.
“Right now, you have to trust all your financial data, all of your personal information to your technology infrastructure, and if the people who build those [systems] do a bad job of it, they’ll drive consumer acceptance off a cliff.
“Nobody wants to say, ‘Oh, remember Bob? Remember when Bob got hacked by the Russian malware? That sucked – is he still running naked through the forests?’ or whatever. So yeah, people are going to have to have a lot of confidence that these are secure systems that don’t have long-term health risks.”
I looked through the service’s FAQ and felt confused, so I reached out to Plex representatives, who pointed me to a sub-FAQ about emulators. As it turns out, while Plex Arcade is smart enough to recognize ROMs and immediately silo them into respective “platform” tabs, it otherwise forces users to procure their own emulation cores—specifically ones built for the RetroArch interface. Plex goes so far as to suggest that users download RetroArch outright and use its automatic core-update system, then drag-and-drop those core DLL files into Plex’s interface (and this includes exposing the “AppData” folder that Windows hides by default, which Plex doesn’t clarify for brand-new users). Then you have to manually edit an XML file and use precise metadata tags to tell Plex Arcade that you indeed downloaded those cores. (A single typo could nuke the whole file.)
Like, why can’t you do all of that, Plex?
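To give a sense of how fragile that last step is, a hand-edited core registration might look something like this. This is purely illustrative—I'm not reproducing Plex's actual schema, just the general shape of an XML metadata file where a single unclosed tag or misspelled attribute invalidates the entire document:

```xml
<!-- Hypothetical sketch, not Plex's real format. Because XML parsers
     reject any file that isn't well-formed, one stray typo anywhere
     here would break every entry, not just the bad one. -->
<Cores>
  <Core platform="snes" file="snes9x_libretro.dll" />
  <Core platform="genesis" file="genesis_plus_gx_libretro.dll" />
</Cores>
```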
This was the funniest thing I read this past week. Funnier than the ban of .ass files by Google.