Modern Law - Droit Moderne

Episode 1: Fighting disinformation by legal means

Episode Summary

An interview with Ève Gaumond, an affiliate of Quebec's Observatory on the Societal Impact of AI and Digital Technologies, about the evolving media landscape and the laws governing disinformation.

Episode Notes

In this first episode of a new “Modern Law” series, Yves Faguy speaks with Ève Gaumond, an affiliate of Quebec's Observatory on the Societal Impact of AI and Digital Technologies, to find out more about how the law can be used, or not, to fight disinformation. They discuss a range of topics, from the role of mainstream media and the perils of computational propaganda to legislative efforts to tackle disinformation and online hate.

To contact us (please include “Podcast” in the subject line): national@cba.org

 

Episode Transcription

Fighting disinformation by legal means

 

Yves Faguy:     You’re listening to Modern Law presented by the Canadian Bar Association’s National Magazine.

Welcome to Modern Law, a new CBA podcast series, where we discuss the law's ability to keep pace with change. Over the next few months, I'll be in conversation with leading legal minds and practitioners, exploring this theme as it relates to a wide range of challenges at the intersection of law, tech progress and modern society.

On today's show, to kick off the series: is the law a solution to disinformation? I'll note here that we chose this topic, how to counter disinformation through legal means, as we're recording this on September 13th, a week before the 2021 Canadian federal election. So, we saw social media, and also digital advertising through the microtargeting of voters, become stories that were fairly widely reported in the media during this campaign.

There was the item about Liberal candidate Chrystia Freeland sharing a video that was marked by Twitter as manipulated media, although Canada's Elections Commissioner, to be fair, dismissed a complaint against her for that. There were reports too of the Liberals vastly outspending other major parties when buying ads on Facebook, although, again, the other major parties were also participating in these kinds of endeavours. And of course, there are ongoing concerns that voters could be swayed by false narratives about Covid-19 and vaccination campaigns, to name but a few.

To help us understand the issues at play, Eve Gaumond is with us today. Eve is an affiliate of Quebec's Observatory on the Societal Impact of AI and Digital Technologies. She is currently finishing a master's degree at Laval University, focusing on the risks and benefits of using AI to enhance the intelligibility of judicial information. She is also studying for the Quebec Bar; she insisted that we mention that. She has published work on the topics of data privacy, online speech, AI regulation and privacy in a digital age, as well as papers in the field of machine learning. You can also read some of her work at Lawfare, where she is a contributor. Welcome, Eve, to the show.

Eve Gaumond: Hi, thanks for having me.

Yves Faguy:     It's great to have you here, thanks for joining us. Let's get into it. Just by way of background, there have been mounting concerns over the years, particularly since the 2016 U.S. election, but also, I can think of the Brexit referendum, that a technologically driven media environment is distorting politics, sowing confusion and, I guess, contributing to a state of information chaos generally.

Before we get into some of the legal issues that might come up, help us understand how people in your line of work are diagnosing the problem of disinformation. I mean, obviously disinformation is hardly a new human invention; what's different about it today? Is it social media that's the main driver of it, as I think a lot of people presume? Tell us how you see it playing out right now.

Eve Gaumond: Well, I'm glad that you're asking this question. We're facing a disinformation crisis, and this crisis happens to take place in an era where technology and social media are ubiquitous. So, we're tempted to believe that the disinformation crisis is caused by technology and social media. And if I may, all the fuss around Cambridge Analytica in 2016, Netflix documentaries such as The Great Hack, fed this narrative, but the reality is more complex than that.

The work of Yochai Benkler from Harvard Law School shows that the main culprit for the disinformation crisis in the U.S. is not technology; it is the media ecosystem, which is getting increasingly polarized. Take the voter fraud story, for instance: it was driven by the former president and mainly circulated through mass media; social media only played a secondary role.

To a certain extent, the same is true in Canada. After studying what happened with the Chrystia Freeland tweet that you mentioned earlier, the Canadian Election Disinformation Project from McGill University concluded that misinformation doesn't just spread on social media: nearly two thirds of respondents who had heard of the Chrystia Freeland tweet did so through traditional media.

So, to answer your question, no, social media is not the main driver of disinformation and misinformation.

Yves Faguy:     So, it's more an enabler, a way to broadcast misinformation, but from what I understand, it's created, perhaps, even by the mainstream media environment, by its focus on certain news stories?

Eve Gaumond: Exactly. It's what Benkler calls the feedback loop: the news arises from mainstream media or a politician and then reverberates onto social media, but it doesn't start there. And that is not where the main activity happens.

Yves Faguy:     OK. There are also other concerns when we talk about the current media environment, because there are of course other participants, outside the political parties and the mainstream media, who play a big role in disseminating information.

And we'll get into legal things afterwards, but what about concerns about computational propaganda? We've seen stories emerge about deepfakes and the use of algorithms to feed debates going on in public life. Should we worry about these political or ideological bots manipulating us?

Eve Gaumond: Worried, or vigilant? Yes, absolutely yes. But once again, I would like to take a deflationary stance. There is no need to panic. And I feel like your question is twofold, so let's start with deepfakes.

Deepfakes, for those who don't know, are realistic fake videos created with AI. They can portray people saying things they've never said or doing things they've never done. The technology is good enough that we could see some uses of it during election campaigns, though for now there's no real example of a disinformation incident powered by deepfakes.

What we've seen, though, is cheapfakes. Cheapfakes are manipulated media created with basic editing techniques, like Photoshop, and we saw a lot of them in the last presidential election in the U.S. There are some examples in the current Canadian election as well. Take the Justin Trudeau Willy Wonka ad that was launched by the Conservatives a few days before the election; that is one example. The manipulated O'Toole video about healthcare, the Chrystia Freeland tweet, could arguably be seen as another example of a cheapfake, because it's media manipulated with basic editing techniques. So, we definitely have to be vigilant about manipulated media.

The European Union, in its Artificial Intelligence Act, which is a regulation proposal, is proposing something quite interesting in that regard.

Yves Faguy:     Just to interrupt, this is their regulation on the use of AI, right?

Eve Gaumond: Yes, exactly. So, the AI Act, which is a regulation proposal that was released in April 2021. There's a small provision in this Act which is pretty interesting: it would require that deepfakes be labelled as such. So, if I'm making a video using deepfake technology to imitate Brad Pitt or Donald Trump or Justin Trudeau, no matter who, I should say that it is not a true video but a deepfake. This could be an interesting way to prevent disinformation incidents powered by deepfakes.

As for the second part of your question, about computational propaganda bots: there are indeed some fake accounts on Twitter that are actually bots; we've seen it with Russian interference in the U.S. However, in my opinion, the narrative around this threat is a bit overblown; it is not a very important threat.

According to the Digital Democracy Project, we are talking about 0.3% of all tweets posted during the 2019 election campaign. Earlier we were saying that Twitter is not the main driver of election misinformation, or of election information, period; and only 0.3% of this not-so-important driver of information comes from bots. So, let's keep following that, and let's be vigilant about it, but that's not what is going to derail our election.

Yves Faguy:     OK. I sometimes get a sense that journalists, being constantly on Twitter, are maybe professionally biased towards these opinions, that the world is getting overrun by bots and disinformation artists.

Eve Gaumond: Yeah, yeah. Talk to the people around you who are not in academia or journalism; they don't know what is happening on Twitter. It's very niche.

Yves Faguy:     Yeah, of course. Now, having said that, Canada has still had to confront some challenges around disinformation, or at least has felt that it was important enough to address.

You talked a little bit about AI regulation in Europe, but let's talk a little bit about how Canada is going about trying to address issues around disinformation from a legal perspective. First of all, is there even a legal definition of disinformation?

Eve Gaumond: Broadly speaking, let's start by defining disinformation, because sometimes it's not totally clear. Disinformation is falsehood aimed at achieving a political goal, which is different from misinformation, which could be about Covid-19, for instance. But that's not the legal definition; that's the political definition. And Canadian law does not straightforwardly regulate disinformation, so it doesn't define it either.

We did in the past, though; we regulated it, and so we also defined it. Indeed, up until 2021, we had a provision prohibiting the making or publishing of false statements about a political figure's citizenship, place of birth, education, membership in a group, past legal offences or professional qualifications.

Yves Faguy:     We're thinking of the birtherism campaign that went on in the States, for example; this was a response to that – 

Eve Gaumond: Exactly, birtherism in the States, or Andrew Scheer claiming that he was an – 

Yves Faguy:     An insurance broker.

Eve Gaumond: – insurance broker. Yeah. If it was done with the intention of influencing the result of an election, it would have been prohibited and punishable by five years in prison or a $50,000 fine, so quite significant penalties.

Yves Faguy:     So, you're talking about this provision as though it's in the past. What happened since then? Because it didn't sound like that provision had a very long shelf life.

Eve Gaumond: Yes and no. It had existed in one form or another since 1908, but it was amended in 2018 to be more specific, and it was contested in Ontario and found unconstitutional in February 2021. That was quite surprising to me, because it was written pretty narrowly, and it followed the guidelines from R v Zundel, which gave some hints about how to regulate, how to prohibit false news, false information. Yet because there was no requirement that the making or publishing of a false statement be done knowingly, the Ontario Superior Court found it unconstitutional.

Because I could, for instance, share the claim. Let's take again Andrew Scheer claiming that he was an insurance broker. I could have said on Facebook, oh, he's trustworthy, he had a good job, please vote for him, I think he's going to be the best Prime Minister. And even though I didn't know that it was not true, I could have been prosecuted for saying that, because "knowingly" was nowhere to be found in the provision. –

Yves Faguy:     So, you mean if I was a social media user and I redistributed or retweeted something claiming that Andrew Scheer had been an insurance broker, I could've been captured under the prohibition?

Eve Gaumond: Yes – 

Yves Faguy:     Even if – 

Eve Gaumond: – that's very theoretical.

Yves Faguy:     – even if I didn't know, even had I not verified myself that he had been an insurance broker, that would've been the problem?

Eve Gaumond: Yes. And now, the Elections Commissioner came during the trial and said that they wouldn't go after someone who was in such a situation, but it was more like an internal – 

Yves Faguy:     A decision – 

Eve Gaumond: – like they – 

Yves Faguy:     – it's a decision not to enforce the provision?

Eve Gaumond: Yes. But the law still allowed them to do it, so this is why the Ontario Superior Court decided that it was unconstitutional.

Yves Faguy:     So, there had been changes, and perhaps you can tell us a little bit about the Elections Modernization Act. I understand that this provision came in through those reforms, and some of those new provisions are still in place.

Eve Gaumond: Yeah.

Yves Faguy:     Can you tell us a little bit, then, about the Elections Modernization Act?

Eve Gaumond: There were a few new provisions in the Elections Modernization Act. There was a prohibition against foreign advertising, for instance. Spending caps for third parties were also added, and there was the ad repository, which is the big thing that was implemented in 2018.

Yves Faguy:     That online platforms have to keep an online advertising repository?

Eve Gaumond: Exactly, yeah. So, large internet platforms, such as Facebook, are required to maintain an ad repository, that is to say, a registry of partisan advertising messages and election advertising messages for the pre-election and election periods.

And that's how we now know, as you were saying in the introduction, that the Liberals have spent more on advertising on Facebook and Instagram than the other four major parties combined. It's also a way to monitor third-party advertising, sometimes called dark money ads, which is strictly limited in Canada.

But I should say that the information that has to be made public in these repositories is insufficient. If we are to go further to fight disinformation, I think the law should require more transparency from these repositories. The number of times an ad has been viewed, for instance, should be made public, and the audience targeted by the ad should also be made public.

Say someone wanted to microtarget an ad to people who identify as Jew haters on Facebook. If we're not going to outright ban such practices, they should at least be public, to deter people from engaging in them. Journalists should be able to go into the repository, the Facebook ad library, and see, oh wow, this is weird targeting.

There was a case, for instance, in 2020 in the United States: some advertisements were floating around the internet with deceptive messages about mail-in voting, and they were microtargeted to minority voters in swing states.

And at the end of the day, the Washington Post saw it and made a piece about it, and then Facebook took down these advertisements. But there should be an easier way to find these kinds of problematic advertisements.

Yves Faguy:     OK, yeah, that's quite something. So, disinformation is often quite disguised, in the sense that we don't know how people are being targeted. I guess there's a reason why people get targeted for buying a pair of shoes or an automobile or some kind of market product. But what you're saying is that in the political sphere we should not be targeting people, or we should at least know how people are being targeted?

Eve Gaumond: Yeah. I'm not sure it would be constitutional to say that microtargeting in a political context is prohibited; I am not sure it would pass [unintelligible 00:18:45]. But the transparency requirement is a way to prevent it, even though you are not prohibiting it straightforwardly.

If it is public and well known that you're using labels that are not politically correct, or not acceptable in terms of public opinion, the effect might be similar. In the present context, where we're not facing a huge disinformation crisis, I'm not sure it would pass the [unintelligible 00:19:23] test if we wanted to prohibit microtargeting, although some people are pondering the idea. But definitely, more transparency would help us understand the problem better and maybe deter people from engaging in such practices.

Yves Faguy:     There's certainly been a sense until recently that governments in the democratic West, at least, have been reluctant to interfere in this field, in the social media landscape, through regulation. And I'm guessing it's because they're afraid of being accused of limiting free speech. But what are the challenges in formulating legal solutions?

Eve Gaumond: Yeah. Of course, in the North American environment, free speech is the hallmark of our democracy, so limiting it is very hard and is seen as something very problematic. That is one of the reasons why we are not tackling disinformation through the regulation of social media. There are also economic reasons: the United States is the place where all the tech companies are, and they're powering its economy, so that is another reason.

But we also have to remember that the disinformation problem on the internet is still fairly recent, and law is not known to be a fast-moving discipline. So, maybe in addition to freedom of expression concerns, timing might also be a reason why governments haven't acted yet. But more or less five years after online disinformation clearly became a problem, so this year, we're seeing many states trying to find ways to tackle disinformation online.

There is quite a bit of work being done around the world these days. The EU, for instance, is working on its 2020-2024 European Democracy Action Plan. Australia is also taking steps towards regulating misinformation and disinformation. So, the field is moving quite fast, and it's honestly hard to follow.

And I should point out something else: there are very democratic states, in the democratic West, but with different free speech traditions, that have already acted to prohibit disinformation online. Germany is the most striking example, with the NetzDG, a law which since 2018 has required that obviously illegal content, such as denial of the Holocaust (which is why Germany has a different free speech tradition), insult or malicious gossip, be taken down by platforms within 24 hours of a complaint; it's a notice and takedown system.

Many people argue that this is the path to a less democratic world, and that it sets an example for countries such as China, legitimizing their efforts to control speech and limit freedom of expression. But it is something that exists in a very democratic country.

Yves Faguy:     And the notice and takedown system is kind of an interesting one, because one of the criticisms we hear about it is that it's actually not a very feasible way of doing things when you have so many people out there spreading sometimes hateful stories or disinformation. And I think we're getting a sense, at least, and correct me if I'm wrong, that the major social media platforms are really struggling with taking down posts and tweets and whatnot. What are your thoughts?

Eve Gaumond: Yes. The whole point is to determine what is good speech and what is bad speech; that's the problem. The line between truth and falsehood is not so clearly defined.

Yves Faguy:     No.

Eve Gaumond: So, that's the main problem. After that, the notice and takedown system itself is a system that works for IP claims; it's not necessarily hard to implement. The real problem is how you determine which content you want to keep up and which content you want to take down.

Yves Faguy:     Absolutely.

Eve Gaumond: And this is not a technological problem; it is a question that has existed since the dawn of time, like censorship and all of that. It's always the same question: are we asking some people to stop saying some things because we don't agree with them, or because they're false? And what is false? Take Salman Rushdie's Satanic Verses, for instance: in the Islamic world it could be sanctioned quite heavily, yet in the democratic world, it's kind of – 

Yves Faguy:     Free speech.

Eve Gaumond: Yeah.

Yves Faguy:     What else do you think needs to be done? I mean, you gave us a little bit of a tour of the world, of some countries addressing disinformation through legislative means. What would you like to see done to craft solutions to these disinformation challenges? You talked about transparency.

Eve Gaumond: Yeah. Transparency is very important in my mind, because it's a way to understand the problem properly and see whether our actions are working or not. And it is also a way to make sure that the regulations we already have are actually working. If we have no means to see a microtargeted ad paid for by a foreign actor, then we cannot sanction it.

So, as I was saying earlier, the repository should be more complete, more thorough, with more information required to be put in it. And in 2018, when the mandatory repository came into effect, Google decided that the company would not display any political advertisements anymore, because it thought it was not possible to put such a repository in place.

We should see how to regulate all platforms in a way that touches on all the possible ways disinformation is spread. WhatsApp, too, is a channel through which disinformation circulates, so we should make sure that every internet platform is regulated. But there are also other kinds of regulations that should be put in place.

Last year I wrote a paper for Lawfare in which I compared the American and Canadian situations with respect to disinformation. And the conclusion of the piece was that Canada doesn't seem to be facing a disinformation crisis of a magnitude comparable to the one facing the United States.

And one of the reasons why was campaign finance. Canada is known for its so-called egalitarian model, which is characterised by quite restrictive spending limits for political parties and third-party advertisers, in order to promote a level playing field between political candidates.

Now, with online advertising being way cheaper than it used to be on traditional media, those limits are a bit outdated; the egalitarian model doesn't work as well, because paying for an ad online is so cheap that the caps don't make any difference. I think we might want to modernise the egalitarian model to take into account online advertising, and that means updating the Canada Elections Act.

Michael Pal from the University of Ottawa suggests that we should have spending limits for traditional media and some spending caps specifically for online advertising. I find this idea [unintelligible 00:28:18], yeah.

Yves Faguy:     No, no, I’m just agreeing. So, because presumably the cost of advertising on social media is much less – 

Eve Gaumond: Exactly. So – 

Yves Faguy:     – that we need to adapt our spending caps accordingly?

Eve Gaumond: Exactly. So, this idea from Michael Pal, I find it pretty interesting. He also proposes that platforms be required to conduct due diligence to make sure that no foreign actors are paying for political ads in Canada. That is already prohibited under the Canada Elections Act, but it is hard to enforce.

If Facebook, for instance, made sure that political advertising is not paid for in foreign currency, and that the people paying for the advertising have an address in Canada, this would be a way to make sure that foreign actors are not interfering in our Canadian political environment.

Earlier we also talked about section 91(1) of the Canada Elections Act, the provision against false information regarding political candidates. It was not perfectly written, but the idea was pretty interesting, in my honest opinion. So, I think we should read the ruling, rewrite the provision to follow the guidelines proposed by the judge, and have a similar provision re-enacted in the future.

For this election, of course, it won't be possible, but in the future, I think. Even though it was not a provision that was used in the past, and nobody has ever been convicted under it, it still is a [French 00:30:10], a way to protect ourselves from any – 

Yves Faguy:     A guard rail.

Eve Gaumond: A guardrail, yeah. And it's interesting to know that it was actually planned by the government in the last federal budget.

Yves Faguy:     OK. I want to touch on another topic, well, a related topic, and I'm trying not to conflate the two issues. But it is interesting, too, that on the final day of the sitting of the House of Commons this past spring, the Liberal government released a consultation document on its plans to tackle online hate. We'll see what happens following the outcome of the election; I don't know, if someone else got into power, whether they would pick up on this.

But what seems to be contemplated is a new legal framework that would target the most reprehensible types of harmful content online, including criminal content. My sense is that there have been some pretty negative reactions to the document. And I'm wondering how you view that, and what it says about our ability to regulate the spread of disinformation and the spread of harmful and hateful content.

Eve Gaumond: One of the main critiques of the proposal regarding online hate is that it does not take into account what the government has been told by experts. There have been many informal consultations, and many experts have participated in them, yet the concerns that were voiced during these consultations do not seem to have influenced the content of the proposal.

Yves Faguy:     What were the concerns that were voiced?

Eve Gaumond: People like Cynthia Khoo, Emily Laidlaw and Suzie Dunn are long-time advocates for equality rights and the protection of women against technology-facilitated violence. They put forward solutions, even an actual proposed legal regime, and it doesn't seem to have been taken into account.

One of the things worrying them is that content flagged on social media could be communicated to the RCMP and prosecuted. As a result, if the proposal were to become black-letter law, it could hurt vulnerable people, literally the people the Bill aims to protect. Vulnerable people are not only targeted by trolls; they are also victims of abuse from the State, and their speech could be silenced by the State as well.

And there is some worry that queer people's content, for instance, would be flagged as sexual or pornographic or something like that and be taken down, while it's a means for them to empower themselves. One of the main worries about the online hate proposal is that it will hurt the very people it aims to protect.

And when you ask what it says about our ability to regulate the spread of disinformation, I think that the lack of consultation, the failure to take what experts are saying into account, is something we have to learn from. If we are to regulate social media to tackle disinformation, we should do it only after heavy consultation with experts and stakeholders.

And we should be careful to see the forest and not just the tree, that is to say, to regulate not only social media, but also the whole environment that has created the disinformation crisis. That means education and funding great journalism; the media ecosystem is one of the important reasons why we're not facing such a big disinformation crisis.

We have to focus on that; we have to focus on maintaining the egalitarian model for campaign finance, and make sure the spending caps and the rules on third-party advertising are still up to date in the online world. But I don't think we should tackle speech per se; I don't think we should say, oh, we're going to take down disinformation.

The online hate proposal is trying to implement a notice and takedown regime for illegal content, something like what the NetzDG provides for, and I'm not sure that is the way to go; I don't think we should focus that much on the speech itself. Actually, it might not even be constitutional to limit some political speech. We should focus instead on the whole ecosystem around speech.

So, we're not saying you cannot say that; we are saying you cannot pay millions of dollars to make sure that everyone sees what you're saying. You cannot pay millions of dollars to microtarget what you're saying to people who believe the same things as you do, without oversight from other people, from Elections Canada.

So, that is what I think we should remember from the recent move by the Liberal government: consult with experts, listen to experts, and don't get into the hype of social media, social media, social media; think about the whole ecosystem of disinformation.

Yves Faguy:     OK. Well, that was an absolutely fascinating talk about an issue that is probably going to merit our attention for the foreseeable future, and we will try to resist the social media hype, I don't know with what degree of success. But these are things to think about as we take stock of the election results that will come in next week.

Thank you so much, Eve Gaumond, for joining us today.

Eve Gaumond: Thanks for having me.

Yves Faguy:     Well, we hope you enjoyed this first instalment of Modern Law, our new addition to CBA podcasts. You can hear this podcast and others on our main CBA channel, The Every Lawyer, and on Spotify, Apple Podcasts, Google Podcasts and Stitcher. So, please subscribe to get notifications of our new episodes.

And to hear some French, listen to Juriste branché, and please share it with your friends and colleagues. And if you have any comments, feedback or suggestions, feel free to reach out to us on Twitter at @CBANatMag and on Facebook. And check out our coverage of legal affairs at NationalMagazine.ca.

Finally, I just want to say a big, big thank you to our podcast editor Ann-Catherine Désulmé and we’ll catch you next month for our next episode.