Back in January, we told you about a young Austin, Tex.-based startup that fights online disinformation for corporate customers. Turns out we weren’t alone in finding it interesting. The now four-year-old, 40-person outfit, New Knowledge, just sealed up $11 million in new funding led by the cross-border venture firm GGV Capital, with participation from Lux Capital. GGV had also participated in the company’s $1.9 million seed round.
We talked yesterday with co-founder and CEO Jonathon Morgan and the company’s director of research, Renee DiResta, to learn more about its work, which appears to be going well. (They say revenue has grown 1,000 percent over last year.) Our conversation, edited for length, follows.
TC: A lot of people associate coordinated manipulation by bad actors online with trying to disrupt elections here in the U.S. or with pro-government agendas elsewhere, but you’re working with companies that are also battling online propaganda. Who are some of them?
JM: Election interference is just the tip of the iceberg in terms of social media manipulation. Our customers are a little sensitive about being identified, but they are Fortune 100 companies in the entertainment industry, as well as consumer brands. We also have national security customers, though most of our business comes from the private sector.
TC: Renee, just a few weeks ago, you testified before the Senate Intelligence Committee about how social media platforms have enabled foreign-influence operations against the United States. What was that like?
RD: It was a great opportunity to educate the public on what happens and to speak directly to the senators about the need for government to be more proactive and to establish a deterrent strategy, because [these disinformation campaigns] aren’t just impacting our elections but our society and American industry.
TC: How do companies typically get caught up in these kinds of campaigns?
JM: It’s pretty typical for consumer-facing brands, because they are so high-profile, to get involved in quasi-political conversations, whether they like it or not. Communities that know how to game the system will come after them over a pro-immigration stance, for example. They mobilize and use the same black-market social media content providers, the same tools and tactics that are used by Russia and Iran and other bad actors.
TC: In other words, this is about ideology, not financial gain.
JM: Where we see this done more for financial gain is when state intelligence agencies try to undermine companies that compete with industries the state has nationalized, like U.S. oil and gas and agriculture companies. You can see this in the promotion of anti-GMO narratives, for example. Agricultural tech in the U.S. is a big business, and on the fringes, there’s some debate about whether GMOs are safe to eat, even though the scientific community is clear that they’re completely safe.
Meanwhile, there are documented examples of groups aligned with Russian intelligence using purchased social media to circulate conspiracy theories and manipulate the public conversation about GMOs. They find a grain of truth in a scientific article, then misrepresent the findings through quasi-legitimate outlets, Facebook pages and Twitter accounts that are in turn amplified by social media automation.
TC: So you’re selling software-as-a-service that does what exactly?
JM: We have a SaaS product and a team of analysts who come out of the intelligence community and who help customers understand threats to their brand. It’s an AI-driven system that detects subtle social signs of manipulation across accounts. We then help the companies understand who is targeting them, why, and what they can do about it.
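Morgan didn’t detail the mechanics, but the crudest version of detecting manipulation “across accounts” is easy to sketch: flag clusters of accounts pushing near-identical text within minutes of one another. The sample data, thresholds and names below are our own illustrative assumptions, not New Knowledge’s actual system.

```python
# Toy sketch of coordination detection: flag groups of accounts posting
# near-duplicate text in a short burst. Illustrative only; the data,
# thresholds and logic are assumptions, not New Knowledge's system.
from collections import defaultdict
from difflib import SequenceMatcher

posts = [
    # (account, timestamp in seconds, text) -- fabricated sample data
    ("user_a", 100, "Brand X hates your values. Boycott now!"),
    ("user_b", 130, "Brand X hates your values! Boycott now"),
    ("user_c", 150, "Brand X hates your values, boycott now."),
    ("user_d", 9000, "Loved my Brand X purchase today."),
]

WINDOW = 300       # seconds: how tight a burst counts as "coordinated"
SIMILARITY = 0.8   # near-duplicate threshold for text

def similar(a: str, b: str) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= SIMILARITY

# Group accounts whose posts are near-duplicates inside the time window.
clusters = defaultdict(set)
for i, (acct_i, t_i, text_i) in enumerate(posts):
    for acct_j, t_j, text_j in posts[i + 1:]:
        if abs(t_i - t_j) <= WINDOW and similar(text_i, text_j):
            clusters[text_i].update({acct_i, acct_j})

for seed_text, accounts in clusters.items():
    if len(accounts) >= 3:  # several accounts in lockstep looks inorganic
        print(f"possible coordination ({len(accounts)} accounts): {seed_text!r}")
```

A production system would of course weigh far subtler signals (posting cadence, account age, follower graphs); lockstep duplication is just the simplest pattern to show.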
TC: Which is what?
JM: First, they can’t be blindsided. Many can’t tell the difference between real and manufactured public outcry, so they don’t even know about it when it’s happening. But there’s a pretty predictable set of tactics that are used to create false public perception. They plant a seed with accounts they control directly that can look quasi-legitimate. Then they amplify it via paid automation, and they target specific individuals who may have an interest in what they have to say. The thinking is that if they can manipulate these microinfluencers, they’ll amplify the message by sharing it with their followers. By then, you can’t put the cat back in the bag. You need to identify [these campaigns] when they’ve lit the match, but haven’t yet started a fire.
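To make that “lit match” stage concrete, here is a miniature version of what an early-warning check might look like: a spike in shares coming overwhelmingly from brand-new, low-follower accounts before any established audience engages. Every field, name and threshold here is an assumption made for the sketch, not a description of New Knowledge’s product.

```python
# Toy early-warning heuristic for seeded amplification: high share
# velocity driven mostly by young, low-follower accounts. Illustrative
# assumptions only -- not New Knowledge's actual detection logic.
from dataclasses import dataclass
from typing import List

@dataclass
class Share:
    account_age_days: int
    follower_count: int

def looks_like_seeded_amplification(shares: List[Share],
                                    hours_elapsed: float) -> bool:
    if not shares or hours_elapsed <= 0:
        return False
    velocity = len(shares) / hours_elapsed  # shares per hour
    throwaways = [s for s in shares
                  if s.account_age_days < 30 and s.follower_count < 50]
    throwaway_ratio = len(throwaways) / len(shares)
    # Organic outcry tends to come from a mix of established accounts;
    # paid automation skews heavily toward throwaway-looking ones.
    return velocity > 100 and throwaway_ratio > 0.7

# 400 shares in two hours, all from days-old accounts with few followers.
burst = [Share(account_age_days=3, follower_count=12) for _ in range(400)]
print(looks_like_seeded_amplification(burst, hours_elapsed=2.0))  # True
```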
At the early stage, we can provide information to social media platforms to determine if what’s going on is acceptable within their policies. Longer term, we’re trying to find consensus between governments and also social media platforms themselves over what is and what isn’t acceptable — what’s aggressive conversation on these platforms and what’s out of bounds.
TC: How can you work with them when they can’t even decide on their own policies?
JM: First, different platforms are used for different reasons. You see peer-to-peer disinformation, where a small group of accounts drives a malicious narrative on Facebook, which can be problematic at the very local level. Twitter is the platform where the media gets its pulse on what’s happening, so attacks launched on Twitter are much more likely to find their way into mainstream opinion. There are also a lot of disinformation campaigns on Reddit, but those conversations are less likely to be elevated into a topic on CNN, even though they can shape the opinions of large numbers of avid users. Then there are the off-brand platforms like 4chan, where a lot of these campaigns are born. They are all susceptible in different ways.
The platforms have been very receptive. They take these campaigns much more seriously than when they first began looking at election integrity. But platforms are increasingly evolving from more open to more closed spaces, whether it’s WhatsApp groups or private Discord channels or private Facebook channels, and that’s making it harder for the platforms to observe. It’s also making it harder for outsiders who are interested in how these campaigns evolve.