From Photoshop to Midjourney - Building Resilience Against Visual Misinformation
Cuihua (Cindy) Shen
Author
11/28/2024
Description
Media Literacy in the Age of Artificial Intelligence - Panelist, Cuihua (Cindy) Shen
Searchable Transcript
- [00:00:00.080]So without further ado, I want to introduce our speaker this afternoon, Cindy Shen.
- [00:00:05.280]She is a professor of communication at UC Davis and the co-director of the Computational
- [00:00:11.200]Communication Research Lab there. Her research focuses on computational social science and
- [00:00:16.880]multimodal misinformation in AI-mediated environments. She is the past chair of the
- [00:00:23.280]Computational Methods Division of the International Communication Association
- [00:00:27.440]and the founding associate editor of the journal Computational Communication Research,
- [00:00:33.440]as well as the associate editor of the Journal of Computer-Mediated Communication. So she's done a
- [00:00:38.240]lot of work in this space. Her research has been funded by the National Science Foundation and
- [00:00:43.440]Facebook. She's also the recipient of numerous top paper awards from the International
- [00:00:47.840]Communication Association, as well as a Fulbright U.S. Scholar Award. So welcome, Cindy.
- [00:00:53.600]First of all, I want to thank our host Brian and Dean Vail for this wonderful symposium. I've
- [00:01:10.020]already learned so much from the two speakers this morning, and hopefully my talk this afternoon will
- [00:01:16.220]be in conversation with both Jess and Kirsten as well. So I want to talk about
- [00:01:23.080]from Photoshop to Midjourney: building resilience against visual misinformation. But first of all,
- [00:01:29.500]let's talk about the backdrop of the research I'm going to present here. In 2018, the journal
- [00:01:38.740]Science published a review piece which seemed to suggest the study of "fake news" had risen to
- [00:01:45.040]prominence. But what exactly is fake news? This term was popularized, if you remember, by then presidential candidate
- [00:01:52.560]Donald Trump. It can be understood as fabricated information that mimics news media content in
- [00:01:58.880]form but not in organizational process or intent. Fake news outlets in turn lack the news media's
- [00:02:05.340]editorial norms and processes for ensuring the accuracy and credibility of information. Now at
- [00:02:12.240]the same time, I'm also arguing against using this term because this term carries a lot of
- [00:02:17.880]political bias. For example, Trump himself repeatedly used the term "fake news" to
- [00:02:22.040]describe anything that does not support his agenda, as here he's calling all these mainstream
- [00:02:28.760]media news outlets "fake news." And sometimes the news, a poll for example, is labeled fake
- [00:02:37.280]simply because it shows him losing in Pennsylvania. This was taken around October 2020, right before
- [00:02:43.640]the 2020 election. So here fake does not mean fake anymore. It is simply a label for anything that's
- [00:02:51.520]against him. So for these reasons, instead of using fake news, I want to use these other two
- [00:02:57.280]alternative terms, misinformation, which refers to false or misleading information, and disinformation,
- [00:03:04.400]which refers to false information that is purposely spread to deceive people. In reality, however, it is
- [00:03:13.180]extremely difficult to ascertain the intention of the publisher. So in this talk, and also in my line of
- [00:03:21.000]work, I'll only use misinformation to refer to both. Okay, so for today's agenda, I want to talk specifically
- [00:03:30.080]about four questions. The first question, what is visual misinformation? I want to give a definition
- [00:03:36.020]and categorization. The second question, how do human beings evaluate it? Do we have the ability
- [00:03:42.800]to identify visual misinformation? And third, what interventions can build resilience against
- [00:03:50.480]visual misinformation? And lastly, is media literacy the answer? So let's dive right in.
- [00:03:57.320]What is visual or multimodal misinformation? The motivation for this line of work actually
- [00:04:06.660]was very simple. So most existing work in communication and other disciplines regarding
- [00:04:12.680]misinformation heavily focuses on textual misinformation. But as we all know, people nowadays
- [00:04:19.960]do not consume information in a purely textual format. Instead, they increasingly consume
- [00:04:26.400]information in a multimodal format. So case in point, this is a screenshot I took some
- [00:04:31.640]time ago on the top stories on snopes.com, which is a popular fact-checking website in
- [00:04:37.940]the United States. As you can see, every single story has a thumbnail image with it, right?
- [00:04:44.800]And that is commonplace on all the news websites. There are almost no purely
- [00:04:49.440]textual news items anymore. And in 2023, my colleagues and I published
- [00:04:56.980]a perspective piece in the journal Political Communication, in which we did a systematic
- [00:05:02.640]literature search using the terms misinformation and visual misinformation. As we can see,
- [00:05:08.980]the misinformation literature has dramatically increased over the past decade, but visual
- [00:05:16.060]misinformation still lags far behind.
- [00:05:18.920]In other words, we just do not know enough about what visual misinformation is and even
- [00:05:26.480]less about what to do with it. Now, decades of research on visual materials and visual
- [00:05:34.040]processing have shown that users process and perceive visuals fundamentally differently
- [00:05:41.980]than they do with text. Visuals are more easily recalled, they're more easily shared, and
- [00:05:48.400]they often are more persuasive than words. So that points to the danger of visual misinformation
- [00:05:57.820]more so than textual misinformation. Now, images historically have been perceived as
- [00:06:04.720]photographic proof of the depicted events. We have expressions such as "seeing is believing,"
- [00:06:11.960]right? Photographs are convincing proof of events that have happened or may be
- [00:06:17.880]happening. An example of this is the moon landing I'm showing here, and the only way
- [00:06:24.400]we knew it happened is this photographic evidence. None of us in this audience today were eyewitness
- [00:06:33.300]to the moon landing event in 1969. The only way we knew it happened is through this or
- [00:06:39.340]some very grainy videos released in news media. However, while image manipulation technology
- [00:06:45.900]has long existed,
- [00:06:47.360]with modern tools it is getting much easier to deceive and manipulate using forgeries.
- [00:06:52.700]A very strong example, and that's not even a recent example. This was about two decades
- [00:06:57.300]ago. As shown here, this is from the 2004 US presidential election. Does anyone
- [00:07:04.480]recognize the man here? He is John Kerry, the then presidential candidate. Here,
- [00:07:11.520]he was photographed sharing the stage with the activist Jane Fonda
- [00:07:16.840]at a Vietnam-era anti-war rally back in the 70s. Now, regardless of your political
- [00:07:23.420]opinion, this picture ran contrary to the image John Kerry wanted to project. And this
- [00:07:30.180]was very damaging to his political campaign. And the author of this image, this is shown
- [00:07:37.780]as AP photo, right? It's a very reputable news source. However, this image was a visual forgery by
- [00:07:46.320]a Bush supporter. It was exposed because the original photos of John Kerry and Jane Fonda were found
- [00:07:53.960]in archival sources. But by the time this forgery was uncovered, it had been circulating widely and
- [00:08:00.260]had even appeared in The New York Times. We all know that when The New York Times runs a retraction,
- [00:08:04.760]the story may not be viewed as widely as the original story. So the damage to John
- [00:08:10.720]Kerry's political career had already been done. The point is, human beings
- [00:08:15.800]simply lack the ability and the technical tools to tell fakes or
- [00:08:21.020]forgeries from true images. Even very big media outlets like The New York Times
- [00:08:26.000]will fall for it. So with the wide availability of advanced photo editing,
- [00:08:30.480]first with Photoshop and now with Midjourney, we can no longer ensure that photographic
- [00:08:35.880]records represent reality. And that's extremely worrisome.
- [00:08:40.720]I want to show a couple more examples. This is another example of
- [00:08:45.280]composition. This image displayed here of Sarah Palin began circulating on the
- [00:08:50.540]internet days after the announcement that she had been tapped as the vice
- [00:08:55.320]presidential candidate. However, it is a pure digital manipulation created by
- [00:09:00.780]pasting an image of her head onto this original photo of another bikini-clad
- [00:09:05.560]woman holding a BB gun.
- [00:09:06.840]Now, even journalists at very big media outlets sometimes knowingly
- [00:09:14.760]commit photo manipulation. This was in 2010, and the media outlet in question
- [00:09:21.340]was The Economist. They published a cover photo of President Obama alone on a
- [00:09:30.040]Louisiana beach. This was immediately after the BP oil spill. And this cover
- [00:09:34.380]photo shows him alone on the beach, thinking deeply. But the original photo
- [00:09:39.500]shows that Obama was, in fact, not alone at all. So the photo
- [00:09:44.240]journalists here eliminated two other figures who were standing with Obama to
- [00:09:50.360]create the effect they wanted. And sometimes a fake image does not have to
- [00:09:56.660]involve deliberate manipulation at all. It can simply involve mismatching the image and
- [00:10:01.580]the context it is supposed to describe. I don't know how many of you have seen
- [00:10:05.180]this video of Biden speaking at an event in 2022 in the neighboring state of Iowa.
- [00:10:10.760]And this video quickly went viral with the caption,
- [00:10:13.720]"Bird poops on Biden." But do you know that this caption is actually wrong?
- [00:10:19.440]Because in fact, Biden was speaking in an indoor event and he was standing directly
- [00:10:26.340]beside a giant pile of processed corn. What appears to be bird poop was not bird
- [00:10:32.220]poop, because the event was indoors and it was processed corn meal. Okay?
- [00:10:36.880]And there were many eyewitnesses too. So this is an example of an unedited image
- [00:10:43.200]or video coupled with the wrong caption or interpretation. And I argue it falls right
- [00:10:50.180]into the category of visual misinformation as well. And this one is actually quite similar
- [00:10:55.680]to the swimming shark example Jamin presented. This one is also pretty old, I
- [00:11:02.880]think from about a decade ago, about Hurricane Sandy in New York City. In this photo, the Statue
- [00:11:07.940]of Liberty looks like it is under attack, right? But in reality, this is a screen capture
- [00:11:12.680]from the movie The Day After Tomorrow, okay? So we can say the photo, or the
- [00:11:18.260]screen capture, was unedited, was not manipulated, but it was used purposefully in a wrong context.
- [00:11:26.120]So I think that is visual misinformation. And internet memes often fall into the category
- [00:11:32.740]of misinformation as well. In this example, Einstein is real, right? The photo is real,
- [00:11:38.680]but the quote was not. So again, using a real image in a wrong
- [00:11:42.160]context. Now, some might argue that internet memes should not be considered in the realm
- [00:11:47.960]of news or misinformation at all, because people don't take them seriously. They are
- [00:11:53.000]humor, sarcasm, entertainment, everything but news.
- [00:11:56.640]If this is indeed the assumption by most people, then I would agree that it should not be considered
- [00:12:02.800]as misinformation. It should not be verified at all, because we shouldn't dignify it by
- [00:12:08.040]verifying it, right? But sometimes internet memes
- [00:12:11.640]do get considered as a legitimate form of information or news.
- [00:12:16.000]Okay, so going back to Kirsten's presentation this morning, there are a lot of very diverse
- [00:12:20.820]sources of news. Some people get news from The Daily Show, some people get news from
- [00:12:25.480]TikTok, some people get news from memes. So in that context, in these situations, memes
- [00:12:31.460]can become misinformation as well. So after all these examples, the point I'm
- [00:12:36.660]trying to make is that image veracity is contextual. We cannot
- [00:12:41.120]judge an image's, or by the same token a video's, veracity simply by looking at
- [00:12:47.300]the item alone. It has to be judged against a context. And then how do we
- [00:12:54.500]define fake or forged images or visual misinformation in general? In 2018, I
- [00:12:59.900]published a paper with my colleagues and we defined four categories of fake or
- [00:13:06.400]forged images. Composition means putting two photos together,
- [00:13:10.600]pasting one person's head on another, for example. Retouching means changing
- [00:13:14.980]specific elements in the existing image. Elimination is cropping out certain
- [00:13:20.240]parts of the image and misattribution is putting an image or video in the wrong
- [00:13:24.480]context. But of course, as we all know, things in this space change so fast.
- [00:13:29.660]Our 2018 categories soon had to be updated. Fast forward to about a year
- [00:13:35.880]ago, AI generated news images began
- [00:13:40.080]to circulate on the internet. So this one, from March 19th, 2023, depicts Trump being
- [00:13:46.080]arrested by the FBI at Mar-a-Lago, except this never happened at the time. Similar events
- [00:13:51.820]actually happened later last year, but this image was AI generated. Earlier in 2024,
- [00:13:58.500]actually just a month ago, several images of Trump surrounded by smiling black
- [00:14:04.160]people appeared online. But the odd lighting and kind of too-perfect details provide
- [00:14:09.560]cues to the fact that they were all generated using artificial intelligence.
- [00:14:14.600]Now, these photos actually have not been linked to the Trump campaign.
- [00:14:19.280]Sources say they were created by some Trump supporters, and they emerged as
- [00:14:26.060]Trump seeks to win over black voters. Now, as we're getting ready for the
- [00:14:29.840]election season, this example highlights the danger that any group, Latinos,
- [00:14:34.400]women, older male voters, could be targeted with lifelike images
- [00:14:39.040]meant to mislead and confuse them, and it demonstrates the need for
- [00:14:43.780]regulation around this technology. So, how do we define fake or forged
- [00:14:48.900]images now? We added one additional category of deepfakes and AI-
- [00:14:54.220]generated visuals to the original four categorizations. So, the next question
- [00:14:59.940]is this, can human beings detect fake or forged or synthetic images, and how?
- [00:15:08.520]Let me bring you back to the first image of moon landing. How do we even know it's
- [00:15:15.960]real or fake? Unfortunately, the technology that allows for creating
- [00:15:21.520]manipulated images has far outpaced the technological methods for detecting such
- [00:15:26.200]manipulations. There are currently some analysis methods available based on
- [00:15:32.060]metadata or encryption, or in this method developed by my colleagues in
- [00:15:37.080]computer graphics,
- [00:15:38.000]by analyzing shadows and perspectives. So in a nutshell, shadows are determined by
- [00:15:43.440]the source of light, the shape of the object, and the surface on which they are
- [00:15:47.780]cast. And this is governed by the laws of physics. So if there are irregularities
- [00:15:53.540]or inconsistencies that do not align with physics, then the photo might
- [00:15:58.780]be compromised. That's how we know. And without going into too much technical
- [00:16:03.380]detail, this analysis shows that the moon landing did
- [00:16:07.480]actually happen. But these methods are very complex. They're not accessible to
- [00:16:13.700]an average internet user like you and me. Okay, but you might say even when such
- [00:16:19.300]technology might be complex, as long as we have enough eyeballs, right, as long as
- [00:16:24.360]we have enough eyeballs, maybe collectively we can catch photoshopped
- [00:16:29.260]pictures. By now everyone and their mother on this planet has probably seen
- [00:16:34.300]this picture. It was posted on March 10 when
- [00:16:36.960]the UK celebrates Mother's Day, and then the royal family spoke out for the first
- [00:16:41.760]time since Kate's abdominal surgery, and it was posted on their official
- [00:16:45.900]Instagram account. However, less than 12 hours after this photo was released,
- [00:16:52.360]numerous agencies, including AP and Getty Images, began removing this photo
- [00:16:58.200]from their press libraries, citing concerns about manipulation, and there
- [00:17:02.720]are many, many manipulations or irregularities.
- [00:17:06.440]So, for example, over here in area A, Kate's hand somehow is out of focus and
- [00:17:14.840]very blurry, and there is no reason for that to happen considering this is a
- [00:17:19.460]staged photo. It shouldn't be blurry, and if this is blurry, we would expect some
- [00:17:24.140]other elements to be blurry, but they are not. Over here in area B, we can see
- [00:17:30.620]inconsistency with the sweater Princess Charlotte was wearing,
- [00:17:35.920]and in some other areas Kate's hair is somehow very blurred, with no
- [00:17:43.640]drop shadow behind it, etc., which gave us clues that this is a
- [00:17:49.600]photoshopped image, and it turned out, indeed, it was a photoshopped image.
- [00:17:55.860]However, we are not always that successful in identifying photoshopped
- [00:18:01.720]images. The example from Princess Kate was actually
- [00:18:05.400]an exception. In fact, the average Internet user
- [00:18:10.400]often assumes they have the ability to identify
- [00:18:13.660]forgeries, but they're often wrong. Now, let me show you a
- [00:18:17.660]counterexample. In 2016, Mike Pence posted this image on
- [00:18:22.600]Twitter, taken at a restaurant in New York City
- [00:18:25.590]with his wife, who was dressed in black,
- [00:18:29.090]as well as his daughter, who was dressed in white.
- [00:18:32.310]If I asked you to look at this image very closely,
- [00:18:35.890]can you spot anything wrong with it?
- [00:18:40.530]I see some heads nodding, okay?
- [00:18:44.450]And is that because we see Mike Pence's reflection
- [00:18:48.170]in the mirror, but not his daughter's?
- [00:18:52.330]Okay.
- [00:18:53.550]And many internet users did claim that something was wrong.
- [00:18:58.690]And they immediately created a buzz to say
- [00:19:02.470]that this image is fake because his daughter doesn't reflect
- [00:19:06.150]in the mirror and some Twitter users started calling her
- [00:19:09.250]a vampire.
- [00:19:10.470]However, as you can see here in this analysis
- [00:19:15.630]of shadows and perspective, the fact that her reflection
- [00:19:19.370]is not apparent in the mirror is consistent
- [00:19:22.330]with the geometry of the scene through analyzing reflection
- [00:19:26.510]and how reflection behaves under perspective projection.
- [00:19:29.670]Applying math, you can see that the tip of her nose
- [00:19:32.350]should be hidden behind Pence, and that's why we don't see
- [00:19:34.870]her reflection in the mirror.
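As a rough illustration of the shadow-and-perspective reasoning mentioned above (a toy sketch only, with hypothetical pixel coordinates, not the actual forensic method used in these analyses): under a single light source, the image lines through corresponding object and shadow points should all pass near one common point, the projection of the light source, and a large residual suggests a physical inconsistency.

    import numpy as np

    def common_intersection(pairs):
        # pairs: ((x, y) shadow point, (x, y) matching object point) in pixel coordinates.
        # Returns the least-squares common point of the lines plus the RMS distance
        # from that point to each line; a large RMS hints at an inconsistent shadow.
        A = np.zeros((2, 2))
        b = np.zeros(2)
        lines = []
        for shadow, obj in pairs:
            p = np.asarray(shadow, dtype=float)
            d = np.asarray(obj, dtype=float) - p
            d /= np.linalg.norm(d)
            P = np.eye(2) - np.outer(d, d)   # projector onto the line's normal direction
            A += P
            b += P @ p
            lines.append((p, P))
        x = np.linalg.solve(A, b)            # point minimizing total squared distance to the lines
        rms = np.sqrt(np.mean([(x - p) @ P @ (x - p) for p, P in lines]))
        return x, rms

    # Hypothetical points clicked off an image (not from any real case discussed here):
    pairs = [((120, 400), (150, 300)), ((300, 420), (310, 310)), ((500, 390), (470, 295))]
    light_point, rms = common_intersection(pairs)
    print(light_point, rms)   # an RMS much larger than the clicking precision is a red flag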
- [00:19:36.830]So even when there is a growing awareness
- [00:19:39.970]that images no longer represent authentic proof of reality,
- [00:19:43.630]we're not very good at finding the manipulations
- [00:19:46.030]and therefore are very vulnerable to them.
- [00:19:48.690]So we are pretty bad at this,
- [00:19:51.190]despite some
- [00:19:52.330]occasional successes.
- [00:19:54.830]The next question,
- [00:19:56.090]maybe there is a technical solution to this.
- [00:19:58.410]Can we just apply some technical detection algorithm,
- [00:20:02.410]which would help us detect
- [00:20:03.970]whether this is Photoshopped or not?
- [00:20:06.590]My answer, unfortunately, is no,
- [00:20:10.690]because manipulated media and misinformation
- [00:20:13.270]is a socio-technical problem.
- [00:20:15.970]So technology alone is not going to cut it.
- [00:20:19.350]Let me show you this example.
- [00:20:21.830]In December 2023, Google was suddenly grappling
- [00:20:26.050]with a new problem.
- [00:20:27.450]That is, photorealistic AI images
- [00:20:30.870]were topping Google's search result.
- [00:20:33.530]For example, for a period in September 2023,
- [00:20:36.250]if you search the term "tank man" in Google,
- [00:20:39.990]you actually get this image.
- [00:20:42.330]So for those of you who have some historical
- [00:20:45.870]context for the term "tank man,"
- [00:20:47.630]it actually refers to the 1989 Tiananmen Square
- [00:20:51.330]incident.
- [00:20:53.330]But here, if you search "tank man,"
- [00:20:55.330]you actually see an image of a man taking a selfie
- [00:20:59.830]in front of a tank.
- [00:21:00.830]And in 1989, selfies weren't even invented yet,
- [00:21:04.830]so this could not have happened.
- [00:21:06.830]So this selfie was generated by AI.
- [00:21:11.830]Still, for someone without the contextual knowledge,
- [00:21:14.830]and for people who rely on quick Google searches
- [00:21:16.830]for everyday knowledge acquisition needs,
- [00:21:18.830]they will definitely get misled.
- [00:21:20.830]So here are additional examples
- [00:21:22.830]that started popping up in Google search results
- [00:21:25.830]if you type in the name of these celebrities.
- [00:21:28.830]Which one is a real photograph of Margaret Thatcher?
- [00:21:32.830]Is it A or B?
- [00:21:34.830]And here are photographs of Marilyn Monroe,
- [00:21:38.830]Abraham Lincoln, and Maya Angelou.
- [00:21:42.830]In all of these cases, the synthetic,
- [00:21:45.830]generative-AI-created pictures
- [00:21:48.830]are almost as convincing
- [00:21:50.330]as the real ones.
- [00:21:51.830]And for an average internet user,
- [00:21:53.830]it is impossible to expect them
- [00:21:55.830]to be the detective
- [00:21:57.830]and try to uncover which one is real
- [00:22:00.330]and which one is fake.
- [00:22:01.830]Now if Google, with all its computing
- [00:22:03.830]and engineering resources,
- [00:22:05.830]is struggling to discern real from fake,
- [00:22:08.330]then what hope do billions of people
- [00:22:10.330]like you and me have?
- [00:22:12.330]Okay, so humans are pretty bad at this.
- [00:22:15.330]A purely technical solution does not exist.
- [00:22:19.830]And the irregularities, the little pieces of evidence
- [00:22:23.330]that we relied on to find out that
- [00:22:26.330]Princess Kate's picture was actually photoshopped,
- [00:22:29.330]will become increasingly hard to spot
- [00:22:32.830]due to the technological arms race.
- [00:22:34.830]So initially when Midjourney or other
- [00:22:37.830]generative AI image tools just came on the market,
- [00:22:41.330]one kind of surefire tell
- [00:22:43.330]was to ask it to generate fingers and hands, right?
- [00:22:47.830]So we all know that, you know,
- [00:22:49.330]those generative AI models were pretty bad
- [00:22:51.330]in terms of rendering fingers and hands.
- [00:22:54.330]But a few weeks passed, a few months passed, and
- [00:22:57.330]after each iteration, they became better and better.
- [00:23:00.830]So these irregularities that used to give us a clue
- [00:23:03.330]about whether this is synthetic or not
- [00:23:05.830]actually will disappear and become increasingly hard
- [00:23:09.330]to spot due to the technological arms race
- [00:23:12.330]that we are experiencing every single day.
- [00:23:16.830]All right, so second question I want to answer
- [00:23:18.830]is how do people evaluate visual misinformation?
- [00:23:22.830]Next, I want to talk through a few empirical studies conducted
- [00:23:25.830]at my lab on this question.
- [00:23:29.830]So this research project asks, what
- [00:23:31.830]are the factors that might predict people's credibility
- [00:23:34.830]evaluation of visual materials? Given the lack
- [00:23:37.830]of visual credibility literature at that time
- [00:23:40.330](this was conceived around 2016),
- [00:23:42.830]we did a natural thing.
- [00:23:43.830]We borrowed heavily from general credibility research
- [00:23:46.830]and tried to extend these findings
- [00:23:48.330]to the image context.
- [00:23:50.330]So our process is we created a number of image mockups
- [00:23:54.330]with the image itself.
- [00:23:55.830]We changed the medium.
- [00:23:57.330]We changed the media outlet.
- [00:23:59.330]We gave some very convincing media outlet,
- [00:24:01.830]like the New York Times, NPR, BBC,
- [00:24:04.830]and some not so credible source.
- [00:24:07.830]And we also accompanied each image
- [00:24:09.830]with some kind of contextual information.
- [00:24:12.330]So the final mockup will look something like this.
- [00:24:15.330]This is an example of one of the mockups, as you can see.
- [00:24:17.830]This is a picture of a war zone,
- [00:24:21.830]and then purportedly it was posted on Facebook
- [00:24:25.830]by BBC World News,
- [00:24:27.830]and there is a very short caption to accompany it.
- [00:24:30.830]We even created comments, likes, et cetera.
- [00:24:33.830]So the first step, we recruited people from California,
- [00:24:37.830]which is considered very liberal,
- [00:24:39.830]and also we recruited from Texas,
- [00:24:41.830]which is more conservative,
- [00:24:43.830]to join our focus groups.
- [00:24:45.830]And then we found out
- [00:24:47.330]that participants made judgments
- [00:24:49.330]based mostly on non-image cues.
- [00:24:51.330]So instead of looking at the image itself,
- [00:24:53.330]they actually made their judgment on sources,
- [00:24:55.830]or how many likes it has,
- [00:24:57.330]or is it from Facebook,
- [00:24:58.830]or is it from the media organization's
- [00:25:00.830]own website, et cetera.
- [00:25:02.830]And in general,
- [00:25:04.830]participants are pretty poor
- [00:25:06.830]at identifying fake images,
- [00:25:08.830]which confirms our previous conclusion.
- [00:25:12.830]And then with the focus group findings,
- [00:25:15.330]we designed a series
- [00:25:16.830]of experiments.
- [00:25:18.330]We decided to test them all.
- [00:25:20.330]I believe a gentleman in an earlier session
- [00:25:22.330]said, you know, source cues
- [00:25:24.330]are really important, and this is the first thing
- [00:25:26.330]we tested, you know:
- [00:25:28.330]does source matter? So we varied
- [00:25:30.330]the trustworthiness of the source,
- [00:25:32.330]so some images have New York Times,
- [00:25:34.330]some images have BBC or
- [00:25:36.330]NPR or something else as their
- [00:25:38.330]purported source. Some have more kind of
- [00:25:40.330]poor credibility ratings.
- [00:25:42.330]At that time, Buzzfeed was
- [00:25:44.330]that kind of news outlet, so we used
- [00:25:46.330]Buzzfeed, and then generic individuals,
- [00:25:48.330]like some, you know, random
- [00:25:50.330]person on the internet.
- [00:25:52.330]We also varied
- [00:25:54.330]source and media type. Is this from
- [00:25:56.330]media organizations or is this from
- [00:25:58.330]individuals? Is this from websites or
- [00:26:00.330]social media? We also
- [00:26:02.330]varied intermediaries. Intermediaries
- [00:26:04.330]are the secondary sources
- [00:26:06.330]through which you get exposed
- [00:26:08.330]to content, but they're not the actual
- [00:26:10.330]publisher of the content.
- [00:26:12.330]But sometimes the secondary sources
- [00:26:14.330]will endorse something,
- [00:26:15.830]or quote it, or comment
- [00:26:17.830]on it, and that actually
- [00:26:19.830]has an impact on our
- [00:26:21.830]perceived credibility of the
- [00:26:23.830]piece of visual
- [00:26:25.830]information. So we included that as well.
- [00:26:27.830]We also included bandwagon
- [00:26:29.830]cues, which are number of likes,
- [00:26:31.830]number of comments. These are heuristics
- [00:26:33.830]people use to judge, oh, how many
- [00:26:35.830]people believe this is true or liked it.
- [00:26:37.830]We tested for people's
- [00:26:41.830]digital media literacy. We included multiple
- [00:26:43.830]scales to measure
- [00:26:45.330]how knowledgeable they are about
- [00:26:47.330]photography, whether they had experience
- [00:26:49.330]of digital imaging,
- [00:26:51.330]their internet skills,
- [00:26:53.330]and their social media use.
- [00:26:55.330]We measured their issue attitude
- [00:26:57.330]because research has shown
- [00:26:59.330]that in the context of
- [00:27:01.330]credibility judgment, people are
- [00:27:03.330]more likely to perceive something as credible
- [00:27:05.330]if it confirms their existing beliefs and opinions.
- [00:27:07.330]So we did all that.
- [00:27:09.330]This is the
- [00:27:11.330]biggest experimental design
- [00:27:13.330]I've ever run in my life.
- [00:27:14.830]28 different
- [00:27:16.830]conditions with many different images.
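As a toy illustration of how crossing manipulated cues multiplies into many experimental cells (the factor levels below are simplified stand-ins, not the study's actual factors, which yielded 28 conditions):

    from itertools import product

    # Simplified, illustrative factor levels only; the real experiment crossed its
    # manipulations differently.
    factors = {
        "source": ["The New York Times", "NPR", "BuzzFeed", "random individual"],
        "media_type": ["news website", "social media post"],
        "intermediary": ["shared by a secondary source", "no intermediary"],
        "bandwagon": ["many likes and comments", "few likes and comments"],
    }

    conditions = list(product(*factors.values()))
    print(len(conditions))   # 4 * 2 * 2 * 2 = 32 cells in this toy layout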
- [00:27:18.830]So I just want to show you
- [00:27:20.830]some of the stimuli. So this is
- [00:27:22.830]one of the mock-ups.
- [00:27:24.830]Here we see the original poster
- [00:27:26.830]is Bill Gates. At that time,
- [00:27:28.830]he was considered
- [00:27:30.830]credible. We looked at Pew data
- [00:27:32.830]on, you know, who the most credible
- [00:27:34.830]people in the United States are, and Bill Gates was
- [00:27:36.830]ranked very high. Of course, things have changed
- [00:27:38.830]since then. Then there is NPR,
- [00:27:40.830]and then how many
- [00:27:42.830]likes there are,
- [00:27:44.330]et cetera. And as you can see,
- [00:27:46.330]the images cover a wide variety
- [00:27:48.330]of social and political issues,
- [00:27:50.330]and they're not even very well done, okay?
- [00:27:52.330]I photoshopped
- [00:27:54.330]these images myself, so they're not
- [00:27:56.330]really that well done.
- [00:27:58.330]So what did we find?
- [00:28:00.330]We recruited
- [00:28:02.330]Mechanical Turkers in the United States.
- [00:28:04.330]So this is kind of a crowd-sourced
- [00:28:06.330]sample pool.
- [00:28:08.330]And then we also recruited a student sample
- [00:28:10.330]for comparison, but in the end, their results
- [00:28:12.330]are pretty much the same.
- [00:28:13.830]Okay, findings.
- [00:28:15.830]We found absolutely
- [00:28:17.830]no effect
- [00:28:19.830]for source trustworthiness.
- [00:28:21.830]Okay? So people either don't pay
- [00:28:23.830]attention to the source at all,
- [00:28:25.830]or they pay attention, but source had no
- [00:28:27.830]effect on their judgment
- [00:28:29.830]of the credibility of the visual information
- [00:28:31.830]they were shown. There was no effect
- [00:28:33.830]for source or media type. There was
- [00:28:35.830]no effect for intermediary.
- [00:28:37.830]There was no effect for bandwagon. That's how many
- [00:28:39.830]likes and comments there are.
- [00:28:41.830]Okay.
- [00:28:43.330]What did matter were
- [00:28:45.330]two things. First,
- [00:28:47.330]people's digital media literacy
- [00:28:49.330]absolutely mattered. So people who
- [00:28:51.330]score high in terms of digital media
- [00:28:53.330]literacy, they tend to be more skeptical
- [00:28:55.330]of these photoshopped
- [00:28:57.330]images overall. The second
- [00:28:59.330]thing is their issue attitude.
- [00:29:01.330]If they are in agreement,
- [00:29:03.330]if these images
- [00:29:05.330]are in agreement with their prior
- [00:29:07.330]held beliefs on something,
- [00:29:09.330]you know, be it science or war,
- [00:29:11.330]whatever, they tend to
- [00:29:12.830]rate them as more credible,
- [00:29:14.830]which is the old
- [00:29:16.830]confirmation bias at play.
- [00:29:18.830]Alright, so this
- [00:29:20.830]being in 2016, we did not
- [00:29:22.830]pre-register our experiment.
- [00:29:24.830]However, given that
- [00:29:26.830]the main results are null results,
- [00:29:28.830]we tried really
- [00:29:30.830]hard to fish for some significant
- [00:29:32.830]results, and we ended up with none of them,
- [00:29:34.830]except digital
- [00:29:36.830]media literacy and
- [00:29:38.830]issue attitude. But I'm glad
- [00:29:40.830]to report there are two recent
- [00:29:42.330]replications of our study.
- [00:29:44.330]One is a literal replication
- [00:29:46.330]using a US sample with
- [00:29:48.330]exactly the same design and
- [00:29:50.330]exactly the same experimental stimuli
- [00:29:52.330]we used back in 2016.
- [00:29:54.330]And they got exactly the same result.
- [00:29:56.330]And another replication
- [00:29:58.330]done by a German team, they did a
- [00:30:00.330]conceptual replication using a German
- [00:30:02.330]sample using a similar design but
- [00:30:04.330]different stimuli. They created different
- [00:30:06.330]visual stimuli.
- [00:30:08.330]And both replications ended up
- [00:30:10.330]with the same conclusions.
- [00:30:11.830]So none of these
- [00:30:13.830]heuristics mattered. So I feel
- [00:30:15.830]highly confident
- [00:30:17.830]that this is what was going on.
- [00:30:19.830]So with that in mind,
- [00:30:21.830]what can we do to solve the visual
- [00:30:23.830]misinformation problem? Whose burden is this?
- [00:30:25.830]Is it governments? Is it platforms?
- [00:30:27.830]Is it media? Or is it users?
- [00:30:29.830]So what
- [00:30:31.830]interventions could help
- [00:30:33.830]us build resilience?
- [00:30:35.830]So I want to present this
- [00:30:37.830]overview of the
- [00:30:39.830]interventions
- [00:30:41.330]from the literature.
- [00:30:43.330]And I want to say that
- [00:30:45.330]this is only from the user side.
- [00:30:47.330]This means that the misinformation
- [00:30:49.330]is already in the system.
- [00:30:51.330]On a systemic level,
- [00:30:53.330]we're not trying to do anything.
- [00:30:55.330]We're not trying to interfere with the system
- [00:30:57.330]or the platforms.
- [00:30:59.330]This is only from the user side.
- [00:31:01.330]And we can categorize them into
- [00:31:03.330]boosting, nudging, and refutational
- [00:31:05.330]strategies. So boosting,
- [00:31:07.330]we could inoculate people, right?
- [00:31:09.330]We preempt the misinformation.
- [00:31:10.830]We preempt the misinformation beliefs.
- [00:31:12.830]For nudging, there are little things we can do.
- [00:31:14.830]Or maybe we create more friction.
- [00:31:16.830]Maybe we can provide provenance cues, etc.
- [00:31:18.830]And the refutational strategy
- [00:31:20.830]is very well known, right?
- [00:31:22.830]The fact-checking strategies fall into refutational.
- [00:31:24.830]So you're exposed to something.
- [00:31:26.830]I'm going to publish a fact-check
- [00:31:28.830]in the hope that you are actually
- [00:31:30.830]going to take a look at a fact-check.
- [00:31:32.830]I don't know how many of you routinely go to
- [00:31:34.830]fact-check websites, but, you know,
- [00:31:36.830]not a lot of people do.
- [00:31:38.830]Flagging is another
- [00:31:40.330]refutational strategy.
- [00:31:42.330]I think we see this a lot during
- [00:31:44.330]election cycles where, you know,
- [00:31:46.330]maybe a candidate says something
- [00:31:48.330]and then it was flagged as,
- [00:31:50.330]"Oh, this is not trustworthy," etc.
- [00:31:52.330]So in the time I have,
- [00:31:54.330]I want to specifically talk about one
- [00:31:56.330]empirical study where we used
- [00:31:58.330]media literacy education,
- [00:32:00.330]okay,
- [00:32:02.330]to be relevant to the theme of this symposium.
- [00:32:04.330]But we have also done a lot of other work
- [00:32:06.330]using other intervention strategies.
- [00:32:08.330]So this was published
- [00:32:09.830]in JCMC in 2022.
- [00:32:11.830]It was also supported
- [00:32:13.830]by a Facebook research award.
- [00:32:15.830]So in this specific study,
- [00:32:17.830]we are trying to tackle one problem only.
- [00:32:19.830]That is miscaptioned images.
- [00:32:21.830]So the image itself,
- [00:32:23.830]there's nothing wrong with it,
- [00:32:25.830]but somehow you place it
- [00:32:27.830]in the wrong context.
- [00:32:29.830]That creates misinformation.
- [00:32:31.830]So here,
- [00:32:33.830]this is actually a very well-known picture
- [00:32:35.830]of empty spaces,
- [00:32:37.830]a picture
- [00:32:39.330]of empty supermarket shelves.
- [00:32:41.330]And here, this says,
- [00:32:43.330]oh, this is because of socialism.
- [00:32:45.330]In socialism, you have this.
- [00:32:47.330]But this was actually because of an earthquake.
- [00:32:49.330]People, you know,
- [00:32:51.330]went into shops
- [00:32:53.330]and all that to prepare
- [00:32:55.330]for the aftermath
- [00:32:57.330]of the earthquake. It's not because of socialism.
- [00:32:59.330]So the intervention
- [00:33:01.330]we designed is a very simple one.
- [00:33:03.330]We created an infographic.
- [00:33:05.330]In order to debunk
- [00:33:07.330]this type of misinformation,
- [00:33:08.830]the most effective way is to have people
- [00:33:10.830]do a reverse image search.
- [00:33:12.830]Not everyone knows about
- [00:33:14.830]a reverse image search. So here,
- [00:33:16.830]this intervention
- [00:33:18.830]tries to teach people what a
- [00:33:20.830]reverse image
- [00:33:22.830]search is.
- [00:33:24.830]And we varied. So in one
- [00:33:26.830]group, they only see the infographic.
- [00:33:28.830]In another group, they actually
- [00:33:30.830]were instructed to practice
- [00:33:32.830]reverse image search.
- [00:33:34.830]They have to do it before they can proceed.
- [00:33:38.330]And then we go to Snopes.
- [00:33:40.330]And then there is a category called
- [00:33:42.330]Photography that deals with
- [00:33:44.330]miscaptioned images.
- [00:33:46.330]So we got a lot of stimuli from Snopes.
- [00:33:48.330]And this is one example.
- [00:33:50.330]So here the image is the same.
- [00:33:52.330]It's a Black Lives Matter bus.
- [00:33:54.330]But is it
- [00:33:56.330]a bus that's going to
- [00:33:58.330]bring a lot of rioters to
- [00:34:00.330]a Black Lives Matter march?
- [00:34:02.330]Or is it taking the
- [00:34:04.330]Toronto Raptors to somewhere else
- [00:34:06.330]for their match?
- [00:34:07.830]So the contexts are different.
- [00:34:09.830]So this one is pre-registered.
- [00:34:11.830]We recruited from
- [00:34:13.830]Prolific.
- [00:34:15.830]And then let me fast forward to the results.
- [00:34:17.830]We found that the active intervention
- [00:34:19.830]significantly increased the intention to use
- [00:34:21.830]reverse image search tools
- [00:34:23.830]compared to the passive intervention
- [00:34:25.830]and the control conditions.
- [00:34:27.830]However, neither the active nor the passive intervention
- [00:34:29.830]had an effect on credibility judgment
- [00:34:31.830]or misinformation discernment.
- [00:34:33.830]So in other words,
- [00:34:35.830]there's some effect.
- [00:34:37.330]But the effect is extremely small.
- [00:34:39.830]And it actually did not affect
- [00:34:41.830]people's credibility judgment
- [00:34:43.830]or misinformation discernment.
- [00:34:45.830]So a little bit disappointing.
- [00:34:47.830]Okay?
- [00:34:49.830]So this could suggest two things.
- [00:34:51.830]First, participants may not be
- [00:34:53.830]motivated enough to actually use
- [00:34:55.830]reverse search. The stakes are not high.
- [00:34:57.830]Okay? Or second, maybe the interface
- [00:34:59.830]is too lab-like. This is a common
- [00:35:01.830]drawback for lab-based studies.
- [00:35:03.830]So we said, okay, we're going to design
- [00:35:05.830]a real-life Facebook
- [00:35:06.830]mock site. So in a
- [00:35:08.830]follow-up study, we
- [00:35:10.830]designed a site that looks
- [00:35:12.830]exactly like Facebook, with
- [00:35:14.830]the same functionalities. You can interact
- [00:35:16.830]with it. You can like it. You can comment on it.
- [00:35:18.830]You can click everything
- [00:35:20.830]here. Okay? And
- [00:35:22.830]we're going to up the stakes.
- [00:35:24.830]Right? Participants are not motivated.
- [00:35:26.830]Let's make them motivated.
- [00:35:28.830]How do I do that?
- [00:35:30.830]In one group, I give them a symbolic
- [00:35:32.830]incentive. That is a badge.
- [00:35:34.830]Okay?
- [00:35:36.330]In another group, I give them a dollar.
- [00:35:38.330]If they do the
- [00:35:40.330]reverse image search.
- [00:35:42.330]Okay. So we're writing up this manuscript
- [00:35:44.330]right now, and our preliminary results
- [00:35:46.330]are also disappointing.
- [00:35:48.330]Okay? So we found
- [00:35:50.330]monetary rewards work a little bit better
- [00:35:52.330]than symbolic rewards or no rewards.
- [00:35:54.330]And task-contingent rewards work a little
- [00:35:56.330]bit better in the short term, but performance-contingent rewards work
- [00:35:58.330]a little bit better in the long term.
- [00:36:00.330]But the overall effect was
- [00:36:02.330]extremely small or
- [00:36:04.330]negligible.
- [00:36:05.830]Okay, so let me go
- [00:36:07.830]back to this.
- [00:36:09.830]We looked at media literacy education,
- [00:36:11.830]two studies. I thought it was, you know,
- [00:36:13.830]pretty good design,
- [00:36:15.830]but very disappointing results.
- [00:36:17.830]And there are some other interventions that my
- [00:36:19.830]team has tested
- [00:36:21.830]in various studies that I won't
- [00:36:23.830]have time to get into. But what
- [00:36:25.830]does that leave us? So I want
- [00:36:27.830]to end my presentation
- [00:36:29.830]on this fourth question, and I think
- [00:36:31.830]the most important question of all. Is
- [00:36:33.830]media literacy the answer?
- [00:36:35.330]Okay. So I want to be a little
- [00:36:37.330]provocative here
- [00:36:39.330]and
- [00:36:41.330]share my current thinking on this question.
- [00:36:43.330]Again, there are meant to be