How streamers pick games: the SplatterCat interview

The whole thing is a very delicate act of finding A) something that matches what my audience wants to see B) something that I think is high quality and needs the attention C) something that works for propagation / pushing out on YouTube (Note that this one I wish was not a factor, but the platform demands what it demands – despite our displeasure) and D) something that I personally find really fun. This listing isn’t in any particular order. All those criteria are really important, and you’ve gotta have that right mix. Every choice is a pragmatic one trying to balance all the factors.

I find it’s always good to be reminded that while all media fight for people’s attention, there has never before been a time when fighting an algorithm before fighting for that attention was the default. Which is also why algorithms are blamed for essentially shaping content, output and public opinion. It’s no easy problem to solve. At the same time I have a gnawing feeling that we’re not even trying hard enough.

Hang in there. In a lot of ways, content creators for YouTube / Twitch and developers have a ton in common. We’re both ruled by algorithms we have no control over, and we sink or swim through forces largely beyond our control.

Just in case you thought I was being hyperbolic there for a second.

#61: Unto dust

This is such a “me” kind of column it’s almost uncanny.

ANNOUNCING RICOCHET ANTI-CHEAT™: A NEW INITIATIVE FOR CALL OF DUTY

We’re at a point where announcing an anti-cheat system is legitimate gaming news, with its own post, release, reporting and everything. It’s a noble endeavour, I guess. But also a futile one. The best one can hope for is a system preventive enough to keep the majority —never all— of the prospective cheaters at bay. And yet, it only takes a few to really ruin it for everyone.

Amazon copied products and rigged search results, documents show

The documents reveal how Amazon’s private-brands team in India secretly exploited internal data from Amazon.in to copy products sold by other companies, and then offered them on its platform. The employees also stoked sales of Amazon private-brand products by rigging Amazon’s search results so that the company’s products would appear, as one 2016 strategy report for India put it, “in the first 2 or three … search results” when customers were shopping on Amazon.in.

Among the victims of the strategy: a popular shirt brand in India, John Miller, which is owned by a company whose chief executive is Kishore Biyani, known as the country’s “retail king.” Amazon decided to “follow the measurements of” John Miller shirts down to the neck circumference and sleeve length, the document states.

This phenomenon isn’t entirely new, as supermarket chains with house brands show. But at the scale Amazon hit, it must be said, it’s impossible to spin it as peachy. Not when your team is too bored to do R&D for a new product and instead straight up copies competing products’ specs. But this is only part of the problem.

In sworn testimony before the U.S. Congress in 2020, Amazon founder Jeff Bezos explained that the e-commerce giant prohibits its employees from using the data on individual sellers to help its private-label business. And, in 2019, another Amazon executive testified that the company does not use such data to create its own private-label products or alter its search results to favor them.

But the internal documents seen by Reuters show for the first time that, at least in India, manipulating search results to favor Amazon’s own products, as well as copying other sellers’ goods, were part of a formal, clandestine strategy at Amazon – and that high-level executives were told about it. The documents show that two executives reviewed the India strategy – senior vice presidents Diego Piacentini, who has since left the company, and Russell Grandinetti, who currently runs Amazon’s international consumer business.

This is the other part. As is par for the course in the age of Big Tech, everyone’s after everyone for one reason or another. Straight up copying someone else’s work is bound to lead to some issues. But beyond that, when you’re caught lying repeatedly under oath, you’re in a different game. I feel it’s only a matter of time till this is portrayed as something going on locally that truly senior leadership didn’t know much about. But damn.

Raya and the Promise of Private Social Media

Raya is the rare social network that ensures that all of its users are who they say they are. Since it launched, in Los Angeles, in 2015, it has gained a reputation as the “celebrity dating app” and “Illuminati Tinder.” Impersonation isn’t tolerated, nor is anonymity, much less any form of harassment. The app is private; aspiring users must undergo an application process that can stretch on for months. (One applicant recently reported that she was approved after a wait of two and a half years.)

This is the first I’m hearing of this, which I suppose is not the point but kind of is the point. I mostly believe the founder that this didn’t start out as a celebrity magnet, but it only stands to reason that it would turn into one. People are naturally drawn to the concept of exclusivity, but the famous are the ones most likely to need, or even yearn for, an online community that does not expose them to the ebbing and flowing masses. The fact that it has no free membership tier does a lot of the lifting, of course. It’s rather easy to do away with the masses when there’s a cost that goes beyond the nebulous (aka “personal data being sold”). In the social media space, this approach might be the only one that really affords a place online where you know what you’re giving away, it’s never more than you wish it to be, and people try to act more civilised because of all of this.

Naturally, scaling this up will be quite the experiment, especially for a company with so few employees and such exacting vetting procedures. I’m not the kind of person who would be accepted into Raya, I don’t think. But I am still quite interested in how it all goes for them in the long run.