Published on June 11, 2020

How Silicon Valley is putting our rights at risk (Opinion)

We're watching the crest of a wave in a national debate we've been having since the 2016 election, when politics online first became synonymous with fake news, conspiracy theories, alternative facts, dog whistles, trolling and gaslighting. With the 2020 presidential election just five months away, and many of us stuck at home and online due to quarantines and curfews, it is high time we demand an internet that brings us together in the spirit of debate and discussion rather than polarization and manipulation.

With 80+ million followers, President Trump has benefited a great deal from Twitter. It allows him not only to "fight back," to use his own words, against critical press coverage, but also to sidestep accountability to the public when he endorses libelous, slanderous, or hateful content. How? Because, unlike mainstream media outlets, Twitter and other social media networks are not legally liable for what is published on their platforms. This gives Trump the ability to tweet his way to unprecedented attention, a power to publicly humiliate that some say keeps fellow Republicans in line.
Perhaps this is why, when Twitter labeled two of Trump's tweets as "potentially misleading" for the first time ever, the President immediately retaliated by signing an executive order that could remove certain legal protections for social media companies. While some observers worry that what the President really wants to do is "turn the Federal Communications Commission into [his] speech police," we can't lose sight of the bigger picture. According to a 2019 Gallup poll, 48% of US adults, across the political spectrum, agree that tech companies should be regulated.

Regulating tech companies should have nothing to do with Trump's personal interests, of course; it should serve our democracy and our citizens. Yes, technology has been of great benefit to the public, for instance in aiding the protests by Black Lives Matter and allies. But tech companies must not be allowed to privately design the information highways that manipulate what we think, believe, and feel. We need oversight of the algorithms that influence how information reaches us via social media and the internet.

These algorithms are the real mechanisms of power: they shape how information flows, how we are targeted and tracked, and how our behavior is influenced. They determine far more than our experiences on social media; they are the engines behind decisions about bank loans, insurance, access to housing, even how we are policed. We need to be involved in vetting, if not designing, these algorithms, because we know what underlies the choices of tech companies: like other major corporations, it isn't democracy, it is their bottom lines.

A front row seat to Covid-19 conspiracies

As we all wait out the Covid-19 pandemic together—a captive audience at home in front of our screens—we have front row seats to why unaccountable tech companies are so dangerous. No, this isn't a Netflix original. It's a pandemic thriller in which both internet virality and biological virality work together to create a political crisis for us all. So many of us have already heard of the newest crop of coronavirus conspiracy theories trending on social media under hashtags like #scamdemic and #plandemic, the latter of which reportedly gathered more than 26,000 posts before Instagram decided to block it.

Many of these theories take something that might contain a kernel of truth and, amplified online, turn it into an experience of hysterical certainty. If we accept the conspiracy theories brought to us by online algorithms as truth, we have moved away from reasoned discussion and thoughtful information-sharing into a bad trip of "knowledge" and paranoia gone wild. This makes us distrust and stereotype one another, and treat important questions, such as whether a given vaccine is appropriate for a given person or demographic, as grounds to discriminate against one another.

Algorithms in an "attention economy"

The algorithms that power the internet today are optimized to make sensational, attention-grabbing content most visible. Our tech companies, from Facebook to YouTube, are in the business of monetizing our behaviors and gathering our intimate data. As a result, the social media feeds we have all come to depend on for news are designed to stoke our greatest fears and anxieties. In an attention economy desperate for clicks, shares, likes, eyeballs and comments, that effect is only heightened.

Moving forward with transparency and oversight regulations

As the 2020 election nears, we are likely to be engulfed in new disinformation campaigns fueled, as in 2016, by foreign governments or shady organizations like Cambridge Analytica, which specialized in manipulating our behavior by targeting us online based on predictions of our psychology. That is because we have failed to enact any legislation to ensure that tech platforms are not gamed by these third parties; we simply trust them to police themselves better, with little visibility into how they are doing so. The majority of us recognize that the private interests dominating the internet are not supporting us as citizens in a democracy or as workers in an increasingly digital economy. It is time to move forward with regulations that demand greater transparency and oversight, if not public governance, of the algorithmic systems that fuel the internet. Many Americans agree, and the coronavirus pandemic has likely only strengthened this conviction.
Consider that Zoom has recently been called a "privacy disaster" (which it says it has fixed), and that Amazon founder Jeff Bezos is on a path to become the first trillionaire within five years, thanks in part to the pandemic. Consider that New York Governor Andrew Cuomo has rolled out the red carpet for tech magnates Bill Gates and Eric Schmidt (of Google), giving them extensive influence over what the state will look like post-pandemic, all without any clear demands for accountability or transparency.
It will take regulation to force Facebook, Google, Apple and other big tech companies to collaborate with us on promoting the public good. Yes, in response to public concern, companies like Twitter and Facebook have hired fact-checkers and installed improved algorithms designed by AI experts. But these services are all private, their inner workings secret, and their code proprietary. Their fact-checking depends on whoever they deem credible. And though Facebook insists that its fact-checkers are certified by Poynter's International Fact-Checking Network, its engineers and corporate execs get the final say on what counts as truth on the platform.

The algorithms and corporate decisions that shape our social media feeds are, like the software tech companies sell to police departments, hidden from view. The keys to our democracy shouldn't be locked inside black boxes that obscure bias, profit-mongering, and even outright racism. We need to demand protections that build power for the people into the code.

This is why we have proposed a Digital Bill of Rights that requires disclosure and transparency around what data is being collected about us, by whom, and for what purposes. It means public governance and auditing of algorithms and AI/facial recognition systems, monetary compensation for those whose data are being extracted from them to make billions of dollars for secretive investors and executives, and active antitrust investigations into big tech companies to protect the public interest.
Consider how timely this is as we witness a wave of protests around our country not seen since the 1960s. A Digital Bill of Rights could bring the most criminalized populations in the country into the creation of technologies that support racial justice as well as criminal justice. We have learned over the past few years that our police and courts increasingly rely on algorithmic technology, most of which has been created by white and Asian male engineers at tech companies, not by the black and brown populations most vilified by our justice system. Instead of remaining distressed by this, let's imagine: What would policing look like if Black Lives Matter had a seat at the table when criminal databases or the algorithms behind predictive policing software were designed and implemented?

This legislation could also ensure that citizens are notified any time they interact with AI systems, such as facial recognition technologies, which have tended to reinforce and normalize racial and gendered biases. Amazon, for example, claims to be in "solidarity" with the protests while profiting off sales of "Rekognition," its facial recognition technology, which an ACLU report found to be racially discriminatory, to police departments and the military. Amazon has defended the technology, but has also said it supports "calls for an appropriate national legislative framework that protects individual civil rights and ensures that governments are transparent in their use of facial recognition technology."

Instead of trusting private corporations that brand themselves as servants of the public, we the people need to have greater power over our technologies and public institutions.

Our data are not just numbers, transaction records, clicks, or likes; they are traces of our human creativity, our relationships, and our lives. Collected nearly every moment of our lives, they are ours and no one else's — and it's time that we make tech oligarchs recognize this.
