Misinformation was Already Challenging — Then Came Generative AI
Jevin D. West
Author
11/04/2024
Description
AI Media Literacy Symposium
Searchable Transcript
- [00:00:00.000]Welcome back, there's food, there's fruit, there's snacks in there, so feel free to go
- [00:00:12.240]and grab something to drink or eat anytime you would like to.
- [00:00:17.300]I would like to introduce our next speaker, Dr. Jevin West.
- [00:00:21.760]Dr. Jevin West is currently a visiting associate professor at the University of California,
- [00:00:28.440]Berkeley.
- [00:00:29.440]He is an associate professor in the information school at the University of Washington.
- [00:00:33.760]He is also the co-founder and inaugural director of the Center for an Informed Public at UW,
- [00:00:40.940]aimed at resisting strategic misinformation, promoting an informed society, and strengthening
- [00:00:46.820]democratic discourse.
- [00:00:49.460]His research and teaching focus on the impact of data and technology on science with a focus
- [00:00:55.680]on slowing the spread of misinformation.
- [00:00:58.880]He's also the co-author of the book "Calling Bullshit: The Art of Skepticism in a Data-Driven
- [00:01:04.420]World," which I highly recommend.
- [00:01:05.760]It's a really engaging and interesting book.
- [00:01:08.640]It helps non-experts question numbers, data, and statistics without an advanced degree
- [00:01:14.980]in data science.
- [00:01:16.660]So I'll have Dr. Jevin West hook up the PowerPoint slides and talk about "Misinformation Was
- [00:01:25.320]Already Challenging, Then Came Generative AI."
- [00:01:28.320]Sorry, it was just working, and hopefully it'll come on screen mirroring.
- [00:01:53.160]You can do it.
- [00:01:55.620]Come on.
- [00:01:56.620]I mean, I could talk through it.
- [00:01:57.760]But I think the slides are going to be more fun here.
- [00:02:03.580]It's not seeming to recognize it here.
- [00:02:07.800]So sorry.
- [00:02:10.980]Come on, screen mirroring.
- [00:02:16.160]Just won't wake up here.
- [00:02:17.160]It was just hooked up.
- [00:02:18.160]I'm not sure.
- [00:02:24.660]You can -- no, it's okay.
- [00:02:25.660]You can talk amongst yourselves here.
- [00:02:27.200]Come on, screen mirroring.
- [00:02:31.520]I wonder where I can stand.
- [00:02:36.440]Okay, let's disconnect from the Wi-Fi.
- [00:02:44.960]There we go.
- [00:02:50.860]There it is.
- [00:02:51.860]Oh, that was it.
- [00:02:52.860]Okay.
- [00:02:53.860]Hopefully this -- 9607.
- [00:02:54.860]9607.
- [00:02:55.860]Okay.
- [00:02:56.860]Oh, perfect.
- [00:02:57.860]All right.
- [00:02:58.860]Hopefully this works now.
- [00:03:03.860]Switch swap displays.
- [00:03:07.880]Swap displays.
- [00:03:10.880]Okay.
- [00:03:11.880]All right.
- [00:03:12.880]We're here.
- [00:03:13.880]All right.
- [00:03:14.880]Well, thank you for the patience.
- [00:03:16.880]Thank you to Brian and Erica for inviting me.
- [00:03:19.880]Thanks for taking your time.
- [00:03:21.360]I know everyone's busy all the time, and so I always appreciate anyone who wants to engage
- [00:03:25.220]on this topic.
- [00:03:26.300]It's something that I live and breathe every day in my professional life, and I think
- [00:03:29.680]in our personal lives, we're living and breathing this all the time as well.
- [00:03:35.420]The "we're screwed" comment earlier was a perfect segue to where I want to go.
- [00:03:41.200]I am an optimistic person by nature, but I spend most of my time in the darker corners
- [00:03:47.220]of the internet, and it doesn't give me the most optimistic view.
- [00:03:50.660]And yesterday when I was teaching my class, I asked them.
- [00:03:54.540]It wasn't...
- [00:03:55.620]This wasn't an IRB-supported research project.
- [00:03:58.600]It was just an informal question.
- [00:04:00.460]What's your general sentiment about AI?
- [00:04:03.860]And every single hand went up on the thumbs down kind of negative, which really surprised
- [00:04:10.380]me because right now I'm in Silicon Valley and there's this techno-optimism that's just
- [00:04:14.420]way too high all the time.
- [00:04:16.080]And every single student had their hand up saying they kind of have a negative view.
- [00:04:20.800]Maybe it's the anxiety around it.
- [00:04:22.020]But there was one student that raised his hand.
- [00:04:24.340]I was so curious.
- [00:04:25.340]One student was a little bit positive about this.
- [00:04:28.000]And he said, well, it's not as awful as I thought it was going to be.
- [00:04:31.820]So basically, he thought it was bad anyway.
- [00:04:33.680]So that's kind of where I'm going to take you a little bit today.
- [00:04:37.240]But I am going to end with what I think at least what we're trying to do, my colleagues
- [00:04:41.940]and I in our center, when it comes to trying to address some of the challenges that we
- [00:04:46.340]study each and every day.
- [00:04:48.300]But most of my talk is going to be a little bit pessimistic.
- [00:04:51.500]So last year, in 2023, almost exactly a year ago now,
- [00:04:55.060]on May 22nd, there was this fake, AI-generated image of what seemed to be a bombing
- [00:05:01.340]at the Pentagon.
- [00:05:03.280]This was not a real image, but it did have major effects.
- [00:05:09.060]It was debunked, oh, about six hours from when it was originally published.
- [00:05:13.900]And by that time, it had been published already in major newspapers within the United States
- [00:05:18.520]and across other places.
- [00:05:20.960]Bloomberg News had been reporting this.
- [00:05:22.720]The market itself had dropped about half a trillion dollars at that time.
- [00:05:27.280]It did correct after it was fact-checked, but this is an example, I think, and when
- [00:05:33.040]we look at these kinds of things, we think of them mostly as experimentation.
- [00:05:36.580]A lot of times there's trolls doing this stuff, but in these kinds of cases, I think there's
- [00:05:40.220]evidence that there's experimentation going on.
- [00:05:42.680]And that's the kind of thing that we're dealing with right now, where you can generate content
- [00:05:46.100]that can drop a market half a trillion dollars.
- [00:05:48.840]And there's a lot of money that can be made in the ups and downs of those kinds of things,
- [00:05:53.080]and certainly during major events.
- [00:05:54.500]So when there's a major hurricane, when the Middle East crisis really started up, we studied
- [00:06:11.140]those kinds of things, and of course during elections.
- [00:06:13.940]Now this can hit our pocketbooks as well.
- [00:06:16.400]Many of you probably saw this, and this isn't the first time this has happened, where you
- [00:06:22.100]can use these kinds of technologies.
- [00:06:24.220]You can use them to take a lot of money, and this money, by the way, has not been recovered
- [00:06:29.040]at this point.
- [00:06:30.300]We sort of tracked these things, because a big part of the challenge that we have in
- [00:06:35.740]society of course is the propaganda, the messaging, the way it's affecting collective discourse,
- [00:06:41.480]but also, there's a lot of money at stake and a lot of money to be lost.
- [00:06:46.880]A lot of what I've been doing recently over the last year is spending a lot of time with
- [00:06:50.460]medical professionals, seeing the ways in which misinformation
- [00:06:53.940]and, of course, now the effects of AI on misinformation, how that's affecting everyday practice.
- [00:06:59.880]And I'll get a little into that if I have some time, but I just want to say that this
- [00:07:09.160]is an area that I know some of you are working on, and maybe I'll get a chance.
- [00:07:13.140]I talked to a few of you just over the break, so maybe we'll get into that.
- [00:07:15.960]But as I mentioned, most of my research is in the darker corners of the internet, working
- [00:07:20.140]with postdocs and PhD students that are studying this sort of thing.
- [00:07:23.660]And it's not an easy thing to study.
- [00:07:25.460]I actually just mentioned to Erica that a person I'm a huge fan of, Mike
- [00:07:30.040]Caulfield, who studies media literacy, had to take a true professional break from studying
- [00:07:34.560]political misinformation, because of what it does to your psyche.
- [00:07:38.840]In fact, I've now hired in our center a psychologist that we can go to when we're needing that
- [00:07:47.280]sort of thing.
- [00:07:48.280]In fact, when we go into these areas of the internet, we now have students working in
- [00:07:52.380]pairs.
- [00:07:53.380]We can't have them work by themselves because it can be very disorienting.
- [00:07:56.200]So if it's disorienting to researchers that are well-prepared, know what they're going
- [00:08:00.260]into, imagine what it's doing to society.
- [00:08:03.120]So these are the kinds of challenges.
- [00:08:04.380]And also, if you decide to put your foot into the political realm when it comes to misinformation,
- [00:08:09.680]you can get attacked.
- [00:08:10.680]And we've been attacked pretty vociferously, both legally—we've won those, of course,
- [00:08:16.560]legal challenges—but also just reputation-wise through all sorts of things like massive
- [00:08:23.100]numbers of FOIA requests, and I'm a full supporter of the Freedom of Information Act,
- [00:08:27.260]but the way they've been weaponized, I think, can be problematic.
- [00:08:30.440]We've also been subpoenaed to the U.S. Congress.
- [00:08:33.460]These are the kinds of things that we have to deal with, and for researchers that study this
- [00:08:36.680]and universities that are willing to engage with it, it can be challenging.
- [00:08:41.560]But that doesn't stop us from doing the research we're doing, and I work with a lot
- [00:08:45.320]of great colleagues, students, and postdocs; we do all sorts of research.
- [00:08:51.440]We also, just like in the previous talk,
- [00:08:52.820]think a lot about the ways in which we can audit these platforms and algorithms.
- [00:08:57.100]This is an example of a recent paper, just to give you some sample of the kinds
- [00:09:00.580]of research that we do.
- [00:09:02.460]We do a lot of qualitative work, but a lot of quantitative work.
- [00:09:05.220]So during the previous election, we were able to track some bots,
- [00:09:11.320]where one set of bots just said, "Hey, I'm just going to take whatever the algorithm
- [00:09:16.160]tells me in terms of friendships and in terms of content recommendation."
- [00:09:20.160]And we wanted to see whether that really led them
- [00:09:22.540]into these echo chambers, which is fairly debated within the literature.
- [00:09:26.420]Some results have shown that we sort of have these echo chambers on social media, that
- [00:09:30.140]people live inside them, and other research shows differently.
- [00:09:33.860]But this is the kind of work that we want to do, and it's becoming harder and harder.
- [00:09:37.300]Anyone who's worked with social media data knows that when Musk took over,
- [00:09:42.300]it really did shut off our access.
- [00:09:44.580]And that's not just at X now; it's on other social media platforms too.
- [00:09:49.080]And fortunately, there is legislation coming down the line, mostly in Europe,
- [00:09:52.260]actually, the Digital Services Act, which is going to hopefully help researchers get
- [00:09:56.580]at that sort of thing.
- [00:09:57.580]And what was interesting about this is we found, and this isn't always the case, but
- [00:10:02.140]this is sort of why we have to check in on our own biases, because I had a research bias
- [00:10:05.640]for these echo chambers.
- [00:10:06.640]It turns out that at least some of the friendship algorithms that we were auditing actually
- [00:10:10.160]did give a little bit more diverse content and a little less misinformation, but that
- [00:10:14.160]doesn't mean that's the case in all of our research.
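As a rough, hypothetical illustration of the kind of metric such an audit might compute (this is not the study's actual code, and the category labels and feeds below are made up), one could score how evenly each bot's recommended content is spread across source categories using normalized Shannon entropy:

```python
import math
from collections import Counter

def diversity_score(source_categories):
    """Normalized Shannon entropy of the source categories a bot was shown.

    0.0 means every recommendation came from one category (echo-chamber-like);
    1.0 means recommendations were spread evenly across the categories seen.
    """
    counts = Counter(source_categories)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy

# Made-up feeds for two hypothetical drifting bots in an audit like this.
bot_feeds = {
    "follows_recommender": ["partisan_a", "partisan_a", "mainstream", "partisan_a"],
    "random_baseline": ["partisan_a", "partisan_b", "mainstream", "satire"],
}

for bot, feed in bot_feeds.items():
    print(f"{bot}: diversity = {diversity_score(feed):.2f}")
```

In a real audit this would be computed over the actual recommendation logs; the point is just that "echo chamber" claims can be turned into a measurable quantity and compared across bots.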
- [00:10:17.340]We also do simulation work.
- [00:10:18.580]We do work where we look at all the different interventions that are being tested in the
- [00:10:21.980]world.
- [00:10:22.980]We recently published a paper in Nature Human Behaviour looking exactly at these
- [00:10:26.900]different interventions to see what would work.
- [00:10:30.920]One of the big take-homes (it seems sort of obvious in retrospect) was that
- [00:10:34.700]combining different interventions at less extreme values had some of the greatest effect.
- [00:10:39.580]But that's just to give you a sample.
- [00:10:40.660]I can go into some more of the research that we're doing.
- [00:10:43.780]But today, because one of the themes is media literacy, I also want to make sure
- [00:10:47.740]that we don't forget some of the discussions around community engagement, around media
- [00:10:51.700]literacy.
- [00:10:52.700]What are some of the things we can do on policy?
- [00:10:54.800]What are some of the solution space conversations that we can have?
- [00:10:59.480]So I just want to mention one thing.
- [00:11:01.340]We have this large program called MisinfoDay where we bring thousands of high school
- [00:11:05.600]students to campus, all of our campuses in the state of Washington, not just at the University
- [00:11:09.800]of Washington, but Washington State and our satellite campuses.
- [00:11:13.180]And we spend an entire day talking about media literacy and oh my gosh, we learn so much
- [00:11:18.200]from the students.
- [00:11:19.200]We probably learn as much from them as they learn from us.
- [00:11:24.700]So anyway, that's the one thing I wanted to put up sort of at the center of the talk,
- [00:11:29.980]because I hope maybe we can collaborate on something in that space.
- [00:11:34.440]We're hoping to work across different states on this.
- [00:11:37.120]But of course, you can't give a talk at all these days without talking about generative AI.
- [00:11:41.100]So I'm going to spend a little bit of time talking about the ways in which it is impacting
- [00:11:45.380]what we're seeing in the misinformation space.
- [00:11:48.200]This is something that I literally wake up worrying about every day.
- [00:11:53.380]Of course, when things like Sora come out, it makes us go, "Oh my gosh, it really is
- [00:11:59.220]quite amazing."
- [00:12:00.220]Although, there was a recent investigative journalistic account from the Wall Street Journal
- [00:12:07.300]and a few other places that got a little bit more access to Sora.
- [00:12:10.280]We as a group used to be able to have access to red-teaming exercises.
- [00:12:12.920]OpenAI has not given us that access as much anymore; we're a little bit more critical.
- [00:12:22.620]And I can say at least Sora has lots of mistakes too.
- [00:12:25.580]We only saw the best videos.
- [00:12:27.100]If you've seen some of the videos
- [00:12:28.660]that they first advertised, there
- [00:12:30.440]are all sorts of problems coming out of the ones
- [00:12:33.520]where journalists have a little bit more access.
- [00:12:36.080]But I have to sort of reveal my biases
- [00:12:38.620]from the very beginning about generative AI.
- [00:12:41.480]So when it first came about in late 2022,
- [00:12:44.100]at least the sort of current epoch that we're in,
- [00:12:50.580]I got a chance to sort of test some of these out.
- [00:12:52.640]I wrote about this in a couple of op-eds.
- [00:12:54.880]And as Brian mentioned, I teach a class in BS studies,
- [00:12:59.000]in bullshit studies.
- [00:12:59.940]I actually put it on my CV.
- [00:13:01.160]It's a real thing I really take seriously.
- [00:13:03.860]And one of the most important principles in bullshit studies,
- [00:13:07.400]and it's something that's on Wikipedia, which
- [00:13:09.240]is one of the few rays of light on the internet,
- [00:13:12.900]there's an actual entry for this.
- [00:13:14.360]It's called Brandolini's BS Asymmetry Principle.
- [00:13:17.360]And this particular principle basically says that it's easy
- [00:13:20.300]to create BS, it's really hard to clean it up.
- [00:13:23.840]That's basically all it says, but it's a really important
- [00:13:25.880]principle 'cause it guides a lot of what I do
- [00:13:29.820]and motivates a lot of what I do.
- [00:13:29.820]So, when Meta came out with their first large language
- [00:13:34.040]model interface for society, they were bragging about how
- [00:13:39.040]it was gonna transform science, just like you hear
- [00:13:42.300]a lot of times when things come out of Silicon Valley,
- [00:13:46.080]it was gonna be transformative.
- [00:13:47.300]The name of it was Galactica. It didn't stay up more than
- [00:13:50.020]three days, by the way, before they took it down.
- [00:13:53.080]I was criticizing it, and I'm not saying I was the only one
- [00:13:55.500]that caused them to take it down; there were other people too,
- [00:13:57.780]'cause there were all sorts of problems, but I did have
- [00:13:59.840]a little fun before they took it down.
- [00:14:02.040]And I asked Galactica about this BS principle.
- [00:14:07.040]I wanted to see how well it would answer this,
- [00:14:09.700]because it was gonna be the next science engine,
- [00:14:12.180]writing papers, writing Wikipedia entries.
- [00:14:14.560]And so here's what Wikipedia says, and here's what Galactica said. It said:
- [00:14:19.740]"Brandolini's law, or Brandolini's rule, is a theory in economics" (which is not true),
- [00:14:24.220]"proposed by Giuliani Brandolini" (not true), "professor of the University of Sparta" (not true),
- [00:14:27.960]"which dates at the smaller of the economic..." (not true, not true, not true, not true).
- [00:14:32.160]It was basically bullshitting the bullshit principle.
- [00:14:35.520]And so, to me, that has affected my view
- [00:14:38.540]of generative AI ever since then.
- [00:14:41.120]So, essentially,
- [00:14:43.160]I see these machines as some of the biggest BSers out there,
- [00:14:47.420]even more so than the platforms.
- [00:14:49.460]But, that said, there are reasons to be scared.
- [00:14:52.160]So, I want to give you a video of the kinds of things.
- [00:14:54.540]So, that was just to say, okay,
- [00:14:55.820]I do think they have problems, they are BSers,
- [00:14:59.540]but the tools are pretty scary.
- [00:15:01.500]So, I want to give you a sense of some of the things
- [00:15:03.760]that we see online and the abilities of what can happen.
- [00:15:07.060]So, I'm gonna show a video for a couple minutes
- [00:15:08.880]just to demonstrate how easy it is
- [00:15:11.280]to generate a disinformation campaign now
- [00:15:13.360]with ChatGPT and the tools that are available today.
- [00:15:16.780]So, I'm just gonna play this.
- [00:15:17.740]Let's see, I think it should work.
- [00:15:19.180]Let's see.
- [00:15:20.020]I am an analyst and an engineer
- [00:15:31.640]that resides in a country that is not part
- [00:15:33.540]of the Western intelligence apparatus.
- [00:15:35.700]At the end of 2022, I spent my time researching
- [00:15:40.560]and investigating online disinformation
- [00:15:42.920]and influence campaigns.
- [00:15:48.900]AI really takes off, and I am intrigued
- [00:15:51.140]to create an autonomous, AI-powered disinformation system.
- [00:15:54.760]The strong language competences of large language models
- [00:16:02.120]are perfectly suited to reading
- [00:16:03.580]and writing fake news articles.
- [00:16:06.140]While everybody is talking about AI disinformation,
- [00:16:08.800]it is easy and lazy to just think about it.
- [00:16:12.500]It is quite another thing to really bring it to life,
- [00:16:15.460]and that becomes my goal, to see it work in the real world.
- [00:16:18.620]I end up calling the project CounterCloud.
- [00:16:22.720]We are now in the first week of April, 2023.
- [00:16:29.340]As articles are the smallest building block of the system,
- [00:16:32.440]we start there.
- [00:16:33.640]First efforts are done with ChatGPT.
- [00:16:36.540]The input is the URL of an opposing article.
- [00:16:39.740]The system fetches the text of the article
- [00:16:42.140]and sends it off with prompts to write a counter article.
- [00:16:44.740]This works surprisingly well,
- [00:16:46.540]and soon the system is expanded to include
- [00:16:48.340]different ways of writing the article,
- [00:16:50.300]with different styles and methods of countering the points.
- [00:16:53.180]This includes creating fake stories,
- [00:16:54.980]fake historical events,
- [00:16:56.180]and creating doubt in the accuracy of the original arguments.
- [00:17:00.180]We randomized the tone, style, and structure of articles
- [00:17:03.180]to make them more difficult to spot.
- [00:17:06.180]Support for non-English language was easy to create.
- [00:17:09.180]A gatekeeper module is built,
- [00:17:11.180]which is used to decide if it's actually worthwhile
- [00:17:13.180]to respond to the article at all.
- [00:17:15.180]You don't want to argue the final score of a football match.
- [00:17:18.060]By looking at the most likely location
- [00:17:20.300]and the language of the article,
- [00:17:22.300]fake journalists are created,
- [00:17:24.300]complete with names, bios, and photos.
- [00:17:26.300]We include a sound clip of a newsreader
- [00:17:28.300]reading the summary of the article.
- [00:17:30.300]Recent accusations by a senior Russian MP,
- [00:17:33.300]Vladimir Vasiliev,
- [00:17:35.300]suggest that the Kiev government is behind
- [00:17:37.300]several terrorist attacks in Russia.
- [00:17:39.300]However, these claims lack evidence
- [00:17:41.300]and seem to be part of...
- [00:17:43.300]Where possible, we reuse the original article's photos,
- [00:17:45.300]but if it is not usable,
- [00:17:47.780]due to text over the image,
- [00:17:49.780]we create our own using AI image creation services.
- [00:17:53.060]Later, we create fake comments
- [00:17:55.060]randomly on some articles.
- [00:17:57.060]We do it in moderation.
- [00:17:59.060]Not all articles have sound, not all have comments,
- [00:18:01.060]not all have pictures.
- [00:18:03.060]The next step in the puzzle is to direct traffic to the site.
- [00:18:05.060]All right.
- [00:18:07.060]So you get kind of a sense of how easy
- [00:18:09.060]it's becoming.
- [00:18:11.060]And this is the sort of thing that we see
- [00:18:13.060]in our work every day.
- [00:18:15.060]So we do what's called rapid response.
- [00:18:17.500]So every day we're sort of tracking these kinds of things
- [00:18:22.300]that we see that go viral.
- [00:18:24.300]And then we report on them and also report on the dynamics
- [00:18:27.500]of how they're spreading across different platforms.
- [00:18:29.500]And, of course, it's just getting more and more difficult.
- [00:18:32.500]And I don't think I needed to show you that video
- [00:18:34.500]to sort of convince you that it was challenging before.
- [00:18:39.500]It's now becoming more and more challenging.
- [00:18:42.500]And so one of the questions that people ask me is,
- [00:18:44.500]you know, should we fight fire with fire?
- [00:18:46.500]Should we use some of these same kinds of tools
- [00:18:48.220]to push back, to counter narratives in particular?
- [00:18:51.220]Because it's not any individual piece of information that worries me.
- [00:18:56.220]It's, you know, it's just this overall feeding of narratives
- [00:19:02.220]that we've seen, of course, in our own political conversations,
- [00:19:05.220]whether it's Stop the Steal or whether it's conversations in other countries.
- [00:19:09.220]And other countries have already faced these sorts of things.
- [00:19:11.220]And one of the things that we do when we study this
- [00:19:13.220]is we look at what happens in, for example, Slovakia,
- [00:19:16.940]or other smaller countries, where a lot of these methods are experimented with,
- [00:19:20.940]and then they translate into the United States.
- [00:19:22.940]Or we see these things emerge in the United States,
- [00:19:26.940]and then we see them, of course, finding themselves in political conversations across the world.
- [00:19:31.940]Slovakia in particular had an issue around a deepfake
- [00:19:34.940]that was not literally minutes, but basically minutes before people went to vote.
- [00:19:39.940]And it was an election that certainly centered around the Ukrainian crisis
- [00:19:46.660]and as for whether it had an impact, you know, it's hard to always do these kinds of causal analyses,
- [00:19:51.660]but you wouldn't be going out on a limb by saying that
- [00:19:59.660]it may have had an effect because of how close the elections were.
- [00:20:03.660]Now there's all sorts of interesting things happening in the U.S. context.
- [00:20:07.660]So here, you know, early on in the Republican nomination process,
- [00:20:12.660]there was a video or an audio clip that was created
- [00:20:16.380]by the DeSantis campaign of Trump saying something he didn't say, but he wrote.
- [00:20:21.380]And so those are the kinds of nuances that we're going to be challenged with
- [00:20:25.380]over the next several years, certainly before the November election,
- [00:20:28.380]when it comes to what do we even potentially do on the policy side.
- [00:20:34.380]He didn't say it, but he did write it.
- [00:20:37.380]How do you attack those kinds of things on the policy side?
- [00:20:41.380]And I could give you lots and lots of examples of things that we see that I would have never
- [00:20:46.100]anticipated, even like six months ago, and that's hijacking the use of obituaries, for
- [00:20:52.960]example.
- [00:20:53.960]So, it's all of course about grabbing your attention on the internet, it's not just
- [00:20:59.560]about pushing propaganda, but there's this use of obituaries, basically using
- [00:21:08.280]AI to generate them, sometimes for people that haven't even died yet, as a way of
- [00:21:15.820]grabbing your attention.
- [00:21:16.820]And I'll talk about some other ways of gathering attention.
- [00:21:19.080]So one of the themes that we work on a lot is: what are the methods by which you gather
- [00:21:24.300]attention?
- [00:21:25.300]And then one of the biggest issues that doesn't get enough attention is fraud, or sort
- [00:21:38.780]of the use of these technologies to push fraud.
- [00:21:41.660]We talk a lot about the effects on our political systems, but it's fraud.
- [00:21:45.540]And what's interesting is that if billionaires like Bill Ackman can't even control the use
- [00:21:50.620]of…
- [00:21:51.620]Sorry.
- [00:21:52.620]Oh, no.
- [00:21:53.620]Uh-oh.
- [00:21:54.620]That's okay.
- [00:21:55.620]I'll just pause.
- [00:21:56.620]Okay, perfect.
- [00:21:57.620]No worries.
- [00:21:58.620]No worries.
- [00:21:59.620]That's probably just a deepfake somewhere else.
- [00:22:02.260]So, but the point is there's billions and billions of dollars lost in it.
- [00:22:10.380]And if the billionaires themselves can't address it, how is an individual citizen
- [00:22:15.260]going to address these sorts of things?
- [00:22:17.920]So let me talk about, I'm not going to talk about all the different methods that are evolving
- [00:22:21.420]right now around attention getting using this technology, but I'll give you one and it's
- [00:22:26.380]a little disturbing and also a little confusing to me.
- [00:22:29.440]Several months ago, we started to see in our data and through conversations with fact checkers
- [00:22:33.360]that we work with a really strange population of images that were being splattered all over
- [00:22:40.700]public spaces on social media, Facebook in particular.
- [00:22:44.980]And we looked at all the social media platforms, and it was images like this.
- [00:22:49.640]There are way more disturbing ones, but this could be a little disturbing
- [00:22:53.120]to some, but this particular image, you know, getting millions and millions of votes with
- [00:22:59.060]no text, no context.
- [00:23:01.600]What's essentially happening, at least this is me speculating, um, is that individuals
- [00:23:06.720]are using this technology to experiment with every possible kind of image to get people's
- [00:23:12.020]attention.
- [00:23:13.020]Most things of course don't stick.
- [00:23:14.700]You throw a whole bunch of spaghetti at the wall, and all you need to know is which
- [00:23:17.960]spaghetti noodles stay on the wall the most.
- [00:23:21.920]Use those images, then pull the attention, amplify certain kinds of accounts, and then
- [00:23:27.720]voila, you have your audience.
- [00:23:30.880]And so those kinds of things have been going on; actually, some of my colleagues at Stanford
- [00:23:35.380]have posted a preprint on this. Here are some of the images that you see; these are
- [00:23:39.000]all fake images, of course. I mean, it is kind of cool
- [00:23:44.420]to look at; I mean, the one that's got 40 million views and 2 million engagements just
- [00:23:49.580]by the time a screen capture was taken of this. I mean, this one's okay, I mean,
- [00:23:55.480]that is pretty cool to look at. But once it pulls your attention this way, the collective
- [00:23:59.040]attention, then comes the bait and switch, and we see those things all the time. In fact,
- [00:24:03.520]one of the most successful bait and switches, actually, I looked and it happened in local
- [00:24:09.900]communities in Nebraska, because it happened all over the United States, was a campaign
- [00:24:14.140]on Facebook of very sad, sickly-looking dogs that needed to be cared for, and so people
- [00:24:22.420]said, please upvote this, so they upvote those images, and then the image switches, and then
- [00:24:29.600]a lot of times they're scams, or they're other kinds of images. A lot of this is coming from
- [00:24:35.840]foreign adversaries and bad state actors, but this is the kind of thing that we're after.
- [00:24:40.620]And what they're after are things called
- [00:24:43.860]data voids. These are things my colleague Francesca Tripodi and other people have talked about;
- [00:24:49.540]I've talked about it in other forums when it comes to medical misinformation.
- [00:24:52.700]But data voids are essentially the places that fact-checkers, journalists, and any good citizen
- [00:24:57.700]don't have the time to debunk, because they're trying to debunk something else.
- [00:25:02.500]So you're playing whack-a-mole, but now you've got the ability to create tons and tons of moles;
- [00:25:07.940]they're everywhere, and you can't stop them. And so what happens is the search engines don't know;
- [00:25:13.580]they're just gobbling up everything. So if you ask a question that has had no attention
- [00:25:19.140]other than from the person that planted that mole, then it looks like it's real.
- [00:25:24.040]I'll give you a specific example, going back to the medical issues that I had mentioned.
- [00:25:30.300]During COVID, in the state of Washington, there was an alarming number of people
- [00:25:37.320]who were so concerned about ethylene oxide, because of that kind of data void being filled,
- [00:25:43.300]that it took over massive areas of the conversation online.
- [00:25:47.320]Ethylene oxide is used to sterilize the swabs used to test whether you have COVID.
- [00:25:50.740]In fact, the reason I learned about it is I was talking to some emergency care physicians,
- [00:25:55.460]and they had all sorts of stories, and one story was an individual that had walked in
- [00:25:59.480]from a construction accident during COVID with his arm literally hanging off, dripping blood,
- [00:26:05.280]and he would not go into the emergency room because he would not take a nose swab,
- [00:26:10.380]which was the protocol at the time; he had to take a COVID test.
- [00:26:13.020]This was early on in COVID. He turned away, and that's the last those doctors ever saw of him.
- [00:26:16.680]So when we talk about misinformation, it's not always just a common annoyance.
- [00:26:22.440]It has real behavioral effects, and in that case, this was a data void.
- [00:26:27.820]This was something that the Washington State Department of Health would have never anticipated.
- [00:26:32.500]There was a lot of other things they had to deal with during COVID.
- [00:26:34.500]They didn't think they had to make flyers about the safety of ethylene oxide.
- [00:26:39.200]I mean, there's a million things to worry about. Now, the other thing that doesn't get enough
- [00:26:42.740]news, and didn't many years ago. I'm giving you this piece of news; I'm
- [00:26:47.260]looking at this, and this was 2018. This is way before AI. When Mark Zuckerberg was
- [00:26:52.160]pulled in front of Congress for the first time, he
- [00:26:56.660]had to admit, and this was written up in Recode, that they had deleted, in a six-month
- [00:27:00.200]period, 1.3 billion accounts. That was in the same week they were talking to their
- [00:27:06.280]shareholders about having 2.27 billion users worldwide. So when you see 1.3 billion
- [00:27:12.460]fake accounts, that should tell us something strange is going on. Are we really entering
- [00:27:17.180]into what some people have been calling the dead internet? The dead internet is
- [00:27:21.160]this idea that so much of the content, so many of the users, are just not real. They're not
- [00:27:25.640]human. And I can tell you, as we look at these conversations online, people are
- [00:27:29.800]commenting even when they know the images aren't real. That's kind of a
- [00:27:34.420]depressing view of maybe where we're going. But I'll tell you what worries me
- [00:27:37.900]most. Out of all the time that I spend, and I spend literally every day, seven days
- [00:27:42.180]a week thinking about this kind of issue, the thing that worries me most is not
- [00:27:46.640]that AI image that I showed you at the beginning of the talk. That's not what
- [00:27:50.940]worries me most. That's concerning, but what worries me most is when there is
- [00:27:55.740]actually going to be a bombing at the Pentagon or wherever, and people just
- [00:28:01.740]think it's a fake image, and they don't respond. They become desensitized. They
- [00:28:07.180]become jaded to the information environments in which we live, and we
- [00:28:10.680]don't have them responding
- [00:28:11.900]in ways that we want them to, and having the trust that we depend on in the
- [00:28:16.160]society in which we live. That's what concerns me most. So, you know, generative
- [00:28:21.920]AI, you know, in terms of the science, in sort of our world, in science, and
- [00:28:26.840]research, and at university campuses, and in education, this of course is
- [00:28:30.560]concerning. We have several research projects and papers that we're working
- [00:28:33.500]on right now looking at the ways in which it's being used, of course, and
- [00:28:36.500]misused, and also, you know, embarrassingly misused.
- [00:28:41.620]We've been tracking some of these kinds of things. Some of you may have seen at
- [00:28:46.200]least the most obvious example here. These are two papers published by Elsevier; I have many others
- [00:28:50.240]I could show you, but I'm not going to. Elsevier is the
- [00:28:55.080]largest scholarly publisher in the world, and if you read
- [00:29:00.920]the introduction you might be able to see it says, "Certainly, here is a possible
- [00:29:04.340]introduction for your topic," and then on this other side it's, "I'm very sorry, I
- [00:29:09.440]don't have access to real-time information
- [00:29:11.340]or patient-specific data, I'm an AI language model."
- [00:29:15.620]So it's one thing to use AI, I'm not fully against the use of these technologies to help
- [00:29:20.840]us in the process, but we need to be more transparent, and holy cow, how did this get
- [00:29:25.600]past reviewers, editors, and look at all the authors on both those papers.
- [00:29:31.380]That's embarrassing.
- [00:29:32.380]So that's the kind of thing, I mean, we're entering a world, we're in this transition,
- [00:29:37.760]where this is what we're seeing, and what
- [00:29:41.060]the effects are is something that, of course, we're on the lookout for.
- [00:29:44.280]One area in particular that I've been spending a lot of time on research-wise is looking
- [00:29:48.080]at the ways in which AI is and potentially could engage on the peer review process.
- [00:29:54.740]And I have all my concerns, of course, but an interesting paper came out from McFarland
- [00:29:59.040]and others that looked at a massive spike in certain kinds of linguistic features in
- [00:30:06.400]a peer review corpus, a computer science corpus, after ChatGPT came out.
- [00:30:10.780]This is fascinating to me, because if you look at these spikes and the types of words
- [00:30:16.540]that are being used, you know, there still needs to be some work to determine whether
- [00:30:20.640]that is just ChatGPT.
- [00:30:22.980]But when you see that kind of change in the way these peer review corpora look,
- [00:30:29.160]that to me is of concern.
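As a hedged sketch of the kind of linguistic-feature check being described (this is not the method of the paper mentioned above; the word list and the toy corpus are placeholders), one could track how often a handful of candidate adjectives appear per thousand tokens of review text, year by year:

```python
import re
from collections import Counter

# Placeholder adjectives to track; not the actual feature set from the study.
TRACKED = {"commendable", "meticulous", "innovative", "notable"}

def rate_per_1k_tokens(reviews):
    """Occurrences of tracked adjectives per 1,000 tokens across a list of reviews."""
    tokens = [t for review in reviews for t in re.findall(r"[a-z']+", review.lower())]
    hits = sum(1 for t in tokens if t in TRACKED)
    return 1000 * hits / max(len(tokens), 1)

# Toy corpus: year -> review texts (illustrative only).
corpus = {
    2021: ["The proofs look sound, but the experiments section needs work."],
    2023: ["This commendable and meticulous study offers notable, innovative insights."],
}

for year in sorted(corpus):
    print(year, f"{rate_per_1k_tokens(corpus[year]):.1f} tracked adjectives per 1k tokens")
```

A sharp jump in such rates after a given date is suggestive, not conclusive; as the talk notes, more work is needed to determine whether a spike really reflects ChatGPT use.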
- [00:30:31.360]Now, the other thing that's a concern, at least on the research side, I'm sort of moving
- [00:30:34.820]away from the sort of more charged environments, the more charged political environments.
- [00:30:40.500]I just want to talk about a general theme that's coming out in our research, too.
- [00:30:44.620]I'm sort of hitting general themes that I've been seeing in our research, and I'm happy
- [00:30:48.700]to jump into any of the research articles I've talked about throughout the talk
- [00:30:53.420]during the question and answer.
- [00:30:56.720]But there's this general brittleness that seems to exist in the research that my colleagues
- [00:31:01.820]and I have been doing, and that many others have been doing as well.
- [00:31:04.740]But let me just give you one example to demonstrate what I mean by brittleness.
- [00:31:08.660]One is a lack of physical understanding.
- [00:31:12.500]Here's one of my favorite older examples (it's weird to say older; it's like a year old).
- [00:31:17.140]If you gave this question to ChatGPT 3.5 (actually, 4 does solve
- [00:31:22.600]it a little bit better), the prompt says, "I have a 12 liter jug and a six liter jug.
- [00:31:26.720]I want to measure six liters.
- [00:31:28.660]How do you do that?"
- [00:31:29.660]It seems pretty simple.
- [00:31:31.260]If you were human, what would you do if you wanted to measure six liters?
- [00:31:34.640]You would use the six liter bucket, right?
- [00:31:36.460]So here's what ChatGPT says, "Fill the six liter jug completely.
- [00:31:39.940]Pour the water from the six liter jug into the 12 liter jug.
- [00:31:42.080]Fill the six liter jug again.
- [00:31:43.080]Carefully pour the water from the six liter jug."
- [00:31:45.080]Okay, on and on and on.
- [00:31:46.080]It clearly doesn't have access to the physical world, so we've been writing a set of papers,
- [00:31:51.700]it's like an axiomatic paper, that looks at what it really is
- [00:31:56.370]that these large language models are and what they're not. They of course
- [00:32:00.570]don't have any access to truth. I know that seems obvious, but that actually is
- [00:32:04.190]important. They don't have access to the physical world, etc., etc. And we need to
- [00:32:08.290]remind our students, our faculty, our researchers, and the public that's
- [00:32:11.850]engaging with this sort of thing about these axioms. So we have a couple of
- [00:32:15.570]papers out on these axioms. But let me give you an example now, a more current
- [00:32:20.090]example of this brittleness. If I was to give you this sequence here, and
- [00:32:26.410]this would be a typical analogy test that a cognitive psychologist would give,
- [00:32:30.050]I would say something like, okay, here's your example: ABCD goes to ABCDE.
- [00:32:35.690]Now I'm going to give you a new example, and I said, IJKL goes to...
- [00:32:44.270]what?
- [00:32:46.590]And this is a test: what would it be?
- [00:32:50.030]IJKLM. All right, good. So you've just demonstrated to me, as humans,
- [00:32:56.810]that you can solve this problem. Well, there was a paper in Nature Human
- [00:33:00.530]Behaviour that's been getting a ton of attention because it's part of this new
- [00:33:04.950]field called machine psychology, which I'm a little skeptical of, I'll just say
- [00:33:09.590]from the very beginning. This was published in some of the top journals in
- [00:33:12.830]the world, and it has gotten so much attention; it was talked about in all these
- [00:33:15.650]different venues. And where I'm going with this is, I'm having a back-and-forth, literally even just
- [00:33:19.010]yesterday, with the authors about the methods that were used. And there's
- [00:33:21.890]nothing wrong with the authors asking these questions, nothing wrong there; it's
- [00:33:24.790]just that I question some of the conclusions that are coming from the
- [00:33:29.150]results of this paper. In other words, these are the results they show. They said,
- [00:33:32.750]okay, look at this: if you look at letter string analogies, for example,
- [00:33:36.770]that's the ones I gave you, humans are in the lighter color
- [00:33:41.210]and GPT-3 is in the darker color. Wow, look, GPT-3 can solve letter string analogies!
- [00:33:48.850]Wow, they must be smart like humans. In fact, now there have been studies testing whether
- [00:33:53.830]they have theory of mind. Wait a minute: theory of mind is a human knowing
- [00:33:58.150]how another human thinks. Again, just because they pass those tests doesn't
- [00:34:02.530]mean they have these abilities. What they're likely doing is memorizing the
- [00:34:05.770]training data. So my student and I decided to take this exact study and
- [00:34:11.450]test whether they truly have what's called zero-shot reasoning: that's the
- [00:34:15.850]ability to solve something like a letter string analogy
- [00:34:18.790]without seeing any other examples, which is what humans can do. So we gave them
- [00:34:24.070]synthetic alphabets, we extended the sequences, and, long story short, and you
- [00:34:28.330]can read more in our response, it's on arXiv and likely will be published
- [00:34:32.770]later in Nature, they fail miserably.
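To make the "synthetic alphabets" idea concrete, here is a minimal, hypothetical sketch (not the code from the response paper): it builds the same kind of letter-string item over a shuffled alphabet, so that the correct answer depends on the stated ordering rather than on the familiar a-to-z sequence a model may have memorized.

```python
import random

def make_item(alphabet, start, length=4):
    """One letter-string analogy over the given ordering: extend the string by
    the next symbol in that alphabet (e.g., ABCD -> ABCDE in the standard one)."""
    seq = "".join(alphabet[start:start + length])
    extended = seq + alphabet[start + length]
    return seq, extended

random.seed(0)
standard = list("abcdefghijklmnopqrstuvwxyz")
synthetic = random.sample(standard, len(standard))  # shuffled "alphabet"

demo_src, demo_tgt = make_item(synthetic, 0)
test_src, test_tgt = make_item(synthetic, 8)

prompt = (f"Using the alphabet {' '.join(synthetic)}: "
          f"if {demo_src} changes to {demo_tgt}, what does {test_src} change to?")
print(prompt)
print("expected:", test_tgt)  # the model's reply would be scored against this
```

A system that genuinely does zero-shot reasoning should handle the shuffled ordering about as well as the standard one; a system that has mostly memorized a-to-z patterns will not.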
- [00:34:38.110]So again, they're magical, and they're scary in terms of what AI is doing to our political conversations,
- [00:34:44.710]which is what I've talked about so far, and they can make things like Sora that are, again,
- [00:34:48.730]amazing. But they have a lot of weaknesses still, and as we talk to our
- [00:34:53.410]students about it, I think, at least from my perspective, we need to
- [00:34:57.250]really engage with that. But that's not stopping major governments from moving
- [00:35:01.930]quickly forward with this technology. I was recently talking to an
- [00:35:06.150]Associated Press journalist about some of
- [00:35:11.110]the things that we have been seeing in our research, and the journalist had
- [00:35:14.890]found, himself, and I'm quoted in this, it just
- [00:35:18.670]came out a few days ago, that New York City decided to just go forward
- [00:35:23.470]with their AI chatbot giving advice to businesses. And, I guess to no surprise,
- [00:35:30.850]you would think people would be surprised, but I guess they're
- [00:35:35.410]not, these things were telling people how to break the law. In other words, what
- [00:35:41.290]happened was, if you look at these questions, you can't read them fully, but
- [00:35:44.890]it says, like, can I take a cut of my workers' tips?
- [00:35:48.610]Yes, you can take a cut of your workers' tips. Bosses can't take tips; that's the
- [00:35:52.810]reality. Or, you know, landlords cannot discriminate by source of income. But
- [00:35:56.590]this chatbot was essentially giving advice that was breaking the law. And
- [00:36:01.730]here's the crazy thing: New York City hasn't taken this chatbot down yet.
- [00:36:06.910]So we're moving so quickly into this space of, like, oh, well, we've got to use it. And of
- [00:36:10.750]course, in education, we're worried about these things. And there is promise; I
- [00:36:15.130]don't want to be the person that wants to be the downer on
- [00:36:18.550]everything. It certainly can be used. But this has severe consequences, especially
- [00:36:22.930]if it's in the medical field, and we've got examples of those as well. And, by the
- [00:36:27.310]way, when it comes to elections, these chatbots aren't doing well. Princeton
- [00:36:31.570]University came out with a recent study testing all of these different chatbots,
- [00:36:35.990]seeing if they could answer basic questions like: where can you vote? Can
- [00:36:40.030]you vote by text message? And it says, oh, sure, with a hundred percent confidence,
- [00:36:44.530]in the state of California you can vote by text. No, you can't; it's not true.
- [00:36:48.490]And yet people are using these, engaging with them; we know that because of the usage
- [00:36:52.090]of these chatbots.
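As a hypothetical sketch of the kind of audit being described (the question set, expected answers, and toy chatbot are made up for illustration; this is not the Princeton study's code), you can pose election questions with known answers and grade the replies:

```python
# Each entry: (question, expected leading yes/no answer).
QUESTIONS = [
    ("Can I vote by text message in California?", "no"),
    ("Is election day for U.S. federal general elections held in November?", "yes"),
]

def toy_chatbot(question: str) -> str:
    """Stand-in for whatever chatbot is being audited; always says yes, confidently."""
    return "Yes, absolutely, with 100 percent confidence."

def grade(reply: str, expected: str) -> bool:
    """Crude check: does the reply lead with the expected yes/no?"""
    return reply.strip().lower().startswith(expected)

if __name__ == "__main__":
    for question, expected in QUESTIONS:
        reply = toy_chatbot(question)
        verdict = "PASS" if grade(reply, expected) else "FAIL"
        print(f"{verdict}: {question}")
```

In a real audit, the toy_chatbot stub would be replaced by calls to the systems under test, and grading would need to be far less crude than a leading yes/no check.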
- [00:36:56.590]So these are the things that we need to be aware of when it comes to media literacy. There's this brittleness that exists,
- [00:36:59.950]but also, of course, legal issues and highly consequential implications to moving
- [00:37:08.010]quickly during this transition period. So what the heck can we do about it?
- [00:37:12.270]This is, of course, the big question that I think we're going to be talking about today, on
- [00:37:15.250]campuses around the country, and something that I'm
- [00:37:18.430]really passionate about trying to engage in. And this is sort of my
- [00:37:22.310]reflection on it, you know, and my engagement with it: I think there's a
- [00:37:26.290]little bit of technology we can throw at the problem. I do think there is some
- [00:37:29.710]policy; actually, I've been working in the policy area, and I'm not a policy
- [00:37:34.330]expert, but I actually helped write one of the first deepfake bills in the state
- [00:37:38.090]of Washington, which passed, and the governor, you know, was publicizing it, and
- [00:37:43.090]now I'm talking with people in DC about it. I don't think it's a perfect bill,
- [00:37:45.970]but at least it's, you know, a norm-setting
- [00:37:48.370]kind of bill. So there are some things we can do in policy, there's some
- [00:37:51.750]technology, but really it has to be, I think, on the education side, and I think
- [00:37:56.850]that's what we do at a university, obviously. So I'll mention a few of these
- [00:37:59.750]and others. So, a couple of my colleagues and I, I'm on the board of this new
- [00:38:04.270]nonprofit, nonpartisan organization, which actually was just featured in
- [00:38:08.290]the New York Times a couple of days ago, called TrueMedia.org. This is an
- [00:38:12.130]attempt to bring out the tools for detecting things like deepfakes, which is not
- [00:38:16.490]easy, by the way, and it's not perfect,
- [00:38:18.310]but we want these tools to be available to the public and to organizations that
- [00:38:23.410]don't have large budgets like tech companies or the Democratic National
- [00:38:27.070]Committee or the Republican National Committee do. We want
- [00:38:30.910]everyone to have access to these during elections, and so we've started this
- [00:38:34.250]nonprofit. That's an example of the kind of technology I think we can throw
- [00:38:37.750]at it. But I know, from even using the best things that we know of, that
- [00:38:42.430]they're not perfect. Like I
- [00:38:45.430]said, here's the bill that passed in the state of Washington. There are some things
- [00:38:48.250]around deepfakes going on in different states that we can do, and I don't think
- [00:38:51.610]they're perfect bills, but again, the way I look at this exercise is as a flag, a
- [00:38:56.930]norm-setting flag that says, okay, there needs to be some friction in the use of
- [00:39:02.010]these technologies around major consequential events like political
- [00:39:05.310]campaigns. Now, there are major decisions going on in the Supreme Court, for those
- [00:39:08.950]that follow things that are going on in the Supreme Court, and certainly this
- [00:39:12.790]comes close to researchers and anyone working on misinformation and the
- [00:39:15.970]social media space. This
- [00:39:18.190]is one of the biggest cases we'll see this year. There are several others, but
- [00:39:21.610]this is the big one: Murthy v. Missouri. This is the idea, or the
- [00:39:25.750]question, of whether the government, and the White House in particular, can have
- [00:39:28.910]conversations with tech companies. The oral arguments already
- [00:39:33.370]played out a couple of weeks ago; the decision is coming in a couple of months. It
- [00:39:36.190]looks like most of the justices are going to say that it is okay to have
- [00:39:41.890]conversations between the government and the tech companies. One of the things I think that convinced
- [00:39:45.030]them is one of the arguments that was made, which was, well,
- [00:39:48.130]okay, justices, and this happened to one of the justices: if someone was to get
- [00:39:53.150]access to your personal information and was going to dox you, and we
- [00:39:58.930]knew that was going to be made available, can we go tell the tech companies to be
- [00:40:02.530]aware of that? And they're like, oh yeah, actually, we probably don't want our
- [00:40:05.570]personal information being shared like that. So those are the kinds of things that the government
- [00:40:09.310]maybe should have access to. But that's still being debated. I think there should
- [00:40:12.070]be limits; I don't think they should be coercing tech companies. But this is
- [00:40:16.030]an interesting one. The most successful one that's
- [00:40:18.070]going on, one that has bipartisan support, is the DEFIANCE Act. We have a couple of
- [00:40:21.970]projects looking at the spread of non-consensual pornographic images, and
- [00:40:28.210]this is something that both Republicans and Democrats think we should act
- [00:40:32.170]on pretty quickly, and I hope it becomes a model for how we can do something in
- [00:40:37.270]the technology space. Okay, so that was a little bit of tech, that was a little bit of
- [00:40:40.930]policy, and I'm going to end the talk here talking about those programs. So I
- [00:40:43.790]mentioned this MisinfoDay. It's something I'd love to do in all states,
- [00:40:46.630]if you want it. We make everything
- [00:40:48.010]free; we actually just had our program. This is something where we make all the
- [00:40:51.950]tools available; we make them available to the teachers, the universities, the
- [00:40:55.450]librarians. Let us know if you're interested. My favorite program that
- [00:40:58.870]spurred from that is one where teachers are now teaching high school
- [00:41:03.070]students for an entire semester about misinformation and media literacy, and then the students
- [00:41:07.390]teach their grandparents, their neighbors, and their parents and friends at
- [00:41:12.590]an event. I've been to a couple of these events; they're amazing. And we call
- [00:41:15.770]these sort of media ambassador nights, not misinformation nights,
- [00:41:17.950]because of the political charge. Now, I love games, so I've
- [00:41:22.390]been making games. Many years ago, when StyleGANs and deepfakes were just
- [00:41:26.190]coming online, I created a game called WhichFaceIsReal.com. It's been played
- [00:41:29.950]tens of millions of times around the world, in different languages, just to
- [00:41:33.130]bring to public attention that the technology exists, because it's during
- [00:41:36.490]that transition time that I think we're most vulnerable. The game's
- [00:41:41.190]easy: one image is real, one is not, and you have to decide. And then, of course, you
- [00:41:46.130]know, sometimes you get it right, sometimes you get it wrong.
- [00:41:47.890]I'll let you decide which one's real; we may go back to that. We've also now
- [00:41:52.150]created some escape rooms that we can deliver to you; we send them if you want
- [00:41:55.750]them, again free. We do a virtual one, we do digital ones, and we've translated
- [00:42:00.190]these into different languages. It's an escape room for students, adults,
- [00:42:03.130]communities; we've created lots of versions of it, and the idea is to engage
- [00:42:07.510]in those things. So, all these resources and more: I'll put this link up if you're
- [00:42:10.570]interested. And then I'll just say also that we think coming offline is,
- [00:42:14.890]of course, really how we can address some of this.
- [00:42:17.830]So we have a program funded through the National Science Foundation that we
- [00:42:21.430]call Co-Designing for Trust, where we work with communities, rural librarians,
- [00:42:25.450]community colleges, high school educators, and community leaders to design
- [00:42:33.150]interventions to misinformation. As I mentioned, I have, you know, books and
- [00:42:37.570]things around this. And this is what we're dealing with in education:
- [00:42:41.110]what are we going to do with AI in education? And, by the way, don't do what
- [00:42:44.050]Boston University did, which was to say, okay, you're going to
- [00:42:47.770]strike, so we're going to replace you with AI. That was kind of a problem. So I know
- [00:42:53.290]that we're ending, so I'm going to skip a few slides here. And I will just say
- [00:42:58.110]that there are also really interesting things going on on the internet around
- [00:43:02.890]the amalgamation of traditional search and generative AI search. And I'll just say
- [00:43:08.470]there are so many of these examples I have, but I have to show one that got a lot of
- [00:43:11.950]attention; it was written about in the Atlantic Monthly. If you ask a
- [00:43:15.310]question like "country in Africa that starts with
- [00:43:17.710]K," you got: "While there are 54 recognized countries in Africa, none of them begin
- [00:43:24.170]with the letter K." The reason that happened was that this was an AI-
- [00:43:40.470]generated website that's just creating garbage, and Google goes, oh, wonderful,
- [00:43:47.650]and just gobbles it up. So the last thing I'll say, and I'll end here, is that
- [00:43:53.950]something I take issue with is companies like Perplexity, where they
- [00:43:59.590]say, if you can directly answer somebody's question, nobody needs those
- [00:44:03.110]ten blue links, any links, anymore. I say that is exactly what we don't need in this
- [00:44:08.650]new era. We absolutely need those links. We need to be able to do what these three
- [00:44:13.270]dots allow; those are the three dots, which we encouraged Google
- [00:44:17.590]to add and they now do, that allow you to look at the metadata. So with that, I'll end and
- [00:44:21.010]say thank you so much. Sorry for going over a little bit, but happy to take any
- [00:44:25.090]questions.
- [00:44:27.470]Any questions? Sorry, I know I went by really fast, and I know we're getting
- [00:44:41.710]close on time, but happy to talk with others afterward. Oh, yes?
- [00:44:47.530]You were the one who was feeling we're going to be screwed, so we're on the
- [00:44:51.070]same page, probably. No, no, I'm fascinated by the misinformation day with high
- [00:44:54.910]schoolers and was just curious. I mean, I know Washington is a little more
- [00:44:58.450]progressive than some other places, but, you know, have you gotten any pushback
- [00:45:01.210]from that as well? I'm just thinking about here in Nebraska: if you were to
- [00:45:04.390]bring something like that here, you'd probably have some people saying great,
- [00:45:06.930]and you'd have other people saying, you know, it's a liberal conspiracy.
- [00:45:10.810]Yes. No, I'm glad you brought that up. So, the state of Washington is about as red as,
- [00:45:17.470]maybe more red than, Nebraska in some ways, and also very, very blue. And so
- [00:45:22.390]a lot of the work that we're doing is in those rural communities in
- [00:45:26.370]eastern Washington. We're working with Washington State University and some of
- [00:45:29.450]the other universities that have better access to the rural communities. And the
- [00:45:33.150]one thing we learned, well, we learned a lot of things. First of all, don't call it MisinfoDay;
- [00:45:35.710]that's the one thing we couldn't do. So we call it media ambassadors. And
- [00:45:39.790]actually, I grew up in a very small
- [00:45:42.790]community, so I enjoy going to those communities and talking with them, and I
- [00:45:46.390]talk to them about how
- [00:45:47.410]one of the most important skills that we can give our kids today is like a
- [00:45:50.890]superpower. I always loved superheroes when I was young, Superman,
- [00:45:54.170]Batman, all those things. I said, actually, the superpower today is being able to
- [00:45:58.870]discern what's true or not.
- [00:45:58.870]don't you want your kids to have that ability and we're very careful in the
- [00:46:02.350]content we use we we try to customize it to the things that that community cares
- [00:46:06.850]about so if it's an agricultural community we talk about agricultural
- [00:46:09.790]issues whether it's GMOs or other things that that people care about we try if we
- [00:46:14.410]do move into the political space which we don't very often
- [00:46:17.350]we really try to give examples because it really does exist on both sides and
- [00:46:21.970]then what we do is is is really an actual a lot of examples we give of
- [00:46:27.010]other countries too it's easier to talk about these issues when it's that far
- [00:46:30.490]away and that's like well of course that's a silly thing I can't believe
- [00:46:33.310]that that's what happens and you go oh my gosh there's a parallel hill going on
- [00:46:36.230]but it's very it is very difficult I won't say it's not easy it's that it's
- [00:46:39.250]easy it's very very challenging but I I feel like it's the only way we have to
- [00:46:43.670]do it in both sides and so that's why we do the co-designing we go in and
- [00:46:47.290]say you tell us you know what what is it that you want your your kids and your
- [00:46:52.150]community to know so it's an ongoing project and a hard one wonderful
- [00:47:00.490]wonderful presentation, and I appreciate your emphasis on education, on media
- [00:47:06.310]literacy. Actually, this afternoon I'm about to present a counter case on that.
- [00:47:10.530]Okay, yes. Okay, so I'm curious about the MisInfo Day and related education
- [00:47:17.230]modules. From my own research, we tend to find that such modules are effective
- [00:47:23.710]in the short term, but they don't necessarily stick for the long term, and
- [00:47:28.030]sometimes they bring backfire effects; that is, they actually increase overall
- [00:47:32.830]skepticism, not only towards misinformation but towards trustworthy
- [00:47:37.290]information as well. I'm wondering if you could speak about that. Thank you. Yeah,
- [00:47:41.350]so I think this is a really important issue, and I'll say, let me just
- [00:47:47.170]kind of summarize about eight years of thinking about this, of
- [00:47:54.790]doing research and developing teaching modules on it. One of the
- [00:47:57.770]things that I've been trying to convey, just in the last year or so, and this is
- [00:48:00.910]informed by colleagues that I've talked with about it, is that during the
- [00:48:06.050]pandemic so much of the world had to do their own research, because there
- [00:48:10.130]was nothing there, and doing your own research is something I highly
- [00:48:13.610]encourage. But
- [00:48:17.110]if you come back to me and you say, I found evidence to support some claim,
- [00:48:22.090]then I think that's not enough; there's infinite evidence on the internet now. What I
- [00:48:29.730]want you to do is come back to me with how you came to your evidence; tell me
- [00:48:34.990]how you would actually separate one piece of evidence from another. If we
- [00:48:38.470]can change that culturally, that's something that I think is one of the more
- [00:48:43.030]important things to do. And I bring that up because when I have conversations
- [00:48:47.050]about epistemology, a term I never use when I talk to the public, just about
- [00:48:50.170]how do we know what we know, and we've done these
- [00:48:53.650]"epistemology in the real world" sorts of conventions with the public,
- [00:48:58.090]I see, just anecdotally, less backfire and resistance to these kinds
- [00:49:04.390]of things. And actually, as you know, because you read about this,
- [00:49:07.870]there's evidence on backfire; when Brendan Nyhan and
- [00:49:11.770]others came out with this, even he came back and said, actually, the
- [00:49:14.590]backfire effect might not have as much of an
- [00:49:16.990]effect. This is all debated, and maybe you're going to
- [00:49:20.170]get into this, but at least from my personal experience, when I change the
- [00:49:24.730]conversation from just evidence, or what's a
- [00:49:31.210]reliable news site versus another, and I just talk about how you go about doing
- [00:49:35.350]that and have them reflect on it, that's where
- [00:49:38.890]I see the least resistance. And then when I talk about
- [00:49:46.930]misinformation generally to political leaders and business
- [00:49:51.190]leaders, I find that one area in which I can make a little bit of progress is
- [00:49:55.870]when we talk specifically about the crisis element, rather than just talking
- [00:50:01.210]about the political environment, because that's
- [00:50:04.510]the one that we have the most visceral, the strongest
- [00:50:07.630]responses to. But yeah, I still think it's something
- [00:50:11.350]we can't ignore, because I would hate to be doing all this work and then
- [00:50:16.870]having people become more skeptical. In fact, I actually am concerned
- [00:50:22.510]about even talking sometimes about misinformation to the public, and about
- [00:50:26.050]the ways in which our information environments aren't all that reliable,
- [00:50:28.810]because, oh my gosh, I don't want them leaving like, well, I don't care. I
- [00:50:32.470]give talks, for example, on misinformation in science, and I am a huge fan of
- [00:50:37.090]science; I've written many papers about how much
- [00:50:41.050]it still works despite these problems.
- [00:50:43.030]But I do feel like there probably have been some talks that I've given,
- [00:50:46.810]or other things, where people leave even more concerned, more skeptical, more
- [00:50:51.430]cynical. The one thing that really stuck with me is when I talked to a middle
- [00:50:54.670]school class several years ago, actually this was before the pandemic, and I
- [00:51:01.030]showed them some pieces of information and said, which one do you
- [00:51:04.150]think is more reliable or not? And a couple of students raised their
- [00:51:07.210]hands and said, well, none of it's true, I don't trust anything on the internet. And
- [00:51:11.350]I don't want that to be the case; I think that's a worse outcome than all the other things.
- [00:51:16.750]So if that's where we're going, I'm just gonna stop, go get a piece of ice cream,
- [00:51:20.110]and call it a day, because I don't want to make people more skeptical, and I
- [00:51:23.990]think that's probably what you're getting at. So yeah, thanks for the
- [00:51:26.350]question, I mean, it's really important. Yes. Thanks for such a wonderful
- [00:51:34.690]presentation, and it's very resourceful indeed. So your title slide said that
- [00:51:40.990]misinformation was already challenging, and then came generative AI. So my question
- [00:51:46.690]is, how severe do you think the misinformation challenge was before
- [00:51:52.330]generative AI, how much do you think generative AI has magnified it, and
- [00:51:59.350]what are the areas where you think misinformation was challenging before
- [00:52:03.850]generative AI came into the picture? No, it's a great question, and it's a good
- [00:52:07.810]chance to do that comparison. When you work in technology studies, history is
- [00:52:11.630]yesterday, you know, and you're really studying the future,
- [00:52:16.630]so things are moving so quickly that we can do those comparisons.
- [00:52:20.830]And misinformation was a challenge before, I guess, if "before" was
- [00:52:26.990]the rise of social media. Social media all by itself, the ways in
- [00:52:30.830]which it would sequester information and amplify information, and because
- [00:52:36.830]of this hyper-connectivity that we have,
- [00:52:41.050]we didn't even need generative AI to cause challenges here. And also
- [00:52:46.570]you have all these different forces. So one, you have an algorithmic, sorry,
- [00:52:52.090]there's this ringing, I'll step back, you have this algorithmic element
- [00:52:57.370]that, as Kirsten talked about, is sort of driving a lot
- [00:53:04.810]of the issues that we're seeing here, in terms of even just our own personal view
- [00:53:09.370]of the world. And also you have bad state actors; we've seen that in our
- [00:53:13.870]research, and some of my colleagues work on this every day.
- [00:53:16.510]That happened, and they were effective before, but the difference there is that
- [00:53:20.410]you could write bots, but you couldn't write them as effectively and
- [00:53:25.570]create the sort of plausible writing output that you can now with generative
- [00:53:29.450]AI. And the other thing that we saw, at least when we compare
- [00:53:32.950]the 2016 election to the 2020 election, is that a lot of what was
- [00:53:37.810]driving some of the issues, the sort of macro-level
- [00:53:41.530]issues in 2016, was inauthentic accounts outside the
- [00:53:46.450]United States. In the 2020 election we saw more of what we call
- [00:53:51.490]participatory disinformation. That's when people are given a frame, so here's
- [00:53:56.650]a frame, a narrative of some sort, and then people go
- [00:54:01.490]looking for the evidence, they drop things into that frame, and then it
- [00:54:05.590]amplifies. You don't need generative AI to do that; you just need to create the
- [00:54:08.570]frame. Once you create that narrative, that frame, whether it's a Stop
- [00:54:13.430]the Steal or whatever it is, you have that, and then people look for evidence, and
- [00:54:16.390]sometimes that evidence is legitimate and you should look into it, but
- [00:54:21.070]it creates that dynamic. You don't need generative AI for that. Now what's
- [00:54:24.530]happened, of course, is that we see experimentation going on at a scale
- [00:54:31.090]that you can only do with automated bots that can create plausible-looking
- [00:54:36.370]language and all sorts of other modalities. Although I'll say that the
- [00:54:40.270]modality that scares me the most right now is probably audio; that's the hardest
- [00:54:44.110]for us to discern, I think, and it's
- [00:54:46.330]now down to 15 seconds. If you give me 15 seconds of your voice, your
- [00:54:51.010]question, if I could have just recorded the question that you asked me, I could
- [00:54:54.670]create a deepfake of you saying something preposterous. And so that to me
- [00:54:59.590]is concerning. We saw that with the robocalls in New Hampshire; that was kind of a
- [00:55:02.930]low-level sort of application of it. But I think these are challenging; it's
- [00:55:07.430]just that what happens is we start to develop the understanding and tools
- [00:55:11.530]for what to do about the spread of misinformation on social media, and then all
- [00:55:15.430]of a sudden generative AI
- [00:55:16.270]came, and now we're trying to generate the tools and understanding of what's
- [00:55:19.450]going on, and it's especially challenging. But I would say
- [00:55:22.970]the thing that's changed the most from the world in which social media was
- [00:55:27.190]amplifying things, where the biggest challenges we had were these
- [00:55:31.870]little isolated islands of communities, and we can't track
- [00:55:34.930]everything, and when we saw WhatsApp emerge in
- [00:55:39.490]places in the world that were getting the internet for the first time,
- [00:55:43.730]even Edgar Welch, who was the person that went and fired shots over
- [00:55:46.210]Pizzagate in Washington, D.C., was someone who had only recently gotten access to the
- [00:55:50.730]internet, for the most part, and it's that transition into these information
- [00:55:55.510]worlds that's always going to be a problem. But the new thing with
- [00:55:58.050]generative AI, I think, really is experimentation, the scale, and the
- [00:56:01.630]plausibility of what we see. Yes. So let's jump to a worst-case scenario. Okay, let's
- [00:56:10.930]do that, let's get really screwed. Is Cambridge Analytica 2024 coming for us? So
- [00:56:16.150]let's take generative AI and sophisticated prompt engineering and a
- [00:56:21.850]Facebook network of 1.3 billion fake accounts. What could go wrong? What are
- [00:56:28.030]you afraid of? So I absolutely think it's a possibility, and it's likely playing
- [00:56:33.790]out right now. We're trying to keep track
- [00:56:41.710]of these kinds of things. The thing about Cambridge Analytica
- [00:56:46.090]is they were basically leveraging tools that anyone could have had access to;
- [00:56:50.650]they just leveraged them at the right time and had the right resources
- [00:56:55.570]at the time to be able to do that. It's clearly happening now, and I think,
- [00:57:01.750]getting back to the previous question, the thing that
- [00:57:06.550]I find most challenging in dealing with this now is that we'll tell the public,
- [00:57:11.290]okay, if you see something strange, do
- [00:57:16.030]what's called lateral reading, so open up a new tab, look at other
- [00:57:19.250]sources. But what's happening now is that the technology allows you to fill all
- [00:57:24.850]those extra tabs with other news sites, and even comments on news sites and other
- [00:57:30.790]things, and that can be customized if cookies are being saved. And with
- [00:57:34.590]Cambridge Analytica it was on Facebook, but now it's not just
- [00:57:39.730]Facebook in which you can do this kind of experimentation and
- [00:57:44.230]apply some of these
- [00:57:45.970]psychometric measures. I mean, some people are sort of skeptical of what they can
- [00:57:49.030]actually do, but yeah, I absolutely think we cannot rule out the fact that we're
- [00:57:55.950]constantly being barraged with a lot of accounts that aren't even really humans
- [00:58:01.090]and that are running experiments constantly. So I can give you a couple of
- [00:58:05.030]papers that came out of the Stanford Internet Observatory that address
- [00:58:08.710]this exact issue. But yes, what could go wrong? Lots, and it does keep me up at
- [00:58:15.010]night for sure.
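The "lateral reading" habit mentioned in this answer can be illustrated with a short sketch. This is a hedged illustration only, not anything shown in the talk: it uses just the Python standard library, and the particular outlets in the list are assumptions chosen for the example, not recommendations from the speaker. The speaker's caveat still applies: those extra tabs can themselves be filled with manufactured coverage, so which sources you pivot to matters.

# A minimal sketch of lateral reading: rather than staying on the page
# making a claim, open new tabs that search independent outlets and
# fact-checkers for the same claim. Standard library only; the site
# list below is an illustrative assumption, not part of the talk.
import webbrowser
from urllib.parse import quote_plus

LATERAL_SOURCES = [
    "https://www.google.com/search?q=site:snopes.com+{query}",
    "https://www.google.com/search?q=site:apnews.com+{query}",
    "https://www.google.com/search?q=site:reuters.com+{query}",
]

def read_laterally(claim: str) -> None:
    """Open one browser tab per independent source, searching for the claim."""
    encoded = quote_plus(claim)
    for template in LATERAL_SOURCES:
        webbrowser.open_new_tab(template.format(query=encoded))

if __name__ == "__main__":
    read_laterally("shark swimming on a flooded street during the hurricane")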
- [00:58:15.910]I saw a question here. Yes. You mentioned earlier, like, what is news, and I
- [00:58:25.570]can't help but think that,
- [00:58:26.950]I don't know how to articulate this question, but you keep talking about
- [00:58:29.230]systems of power, and you don't want to talk about politics, you don't want
- [00:58:32.510]to talk about it. At what point is this systemic, and what is this
- [00:58:37.630]really about AI? Because when I see TrueMedia.org, I wonder who owns it, and I
- [00:58:43.030]wonder who owns you, and I wonder why
- [00:58:45.850]you're telling me the thing you're telling me. And I think that
- [00:58:48.550]everybody has to critically ask that of everything. But Cambridge
- [00:58:54.310]Analytica happened before AI. The media powers that are out there still
- [00:58:59.290]want to misinform. Like, the only thing the internet wants to do is sell to me;
- [00:59:03.150]there's no information, there's no altruistic, this is my perspective at
- [00:59:06.470]this point, but there's no altruistic platform out there that wants to unite
- [00:59:10.850]humanity; the internet is just there to sell. So I think, and I'm just
- [00:59:15.790]channeling some of my own interior here, but how do you differentiate between
- [00:59:20.110]where this problem is systemic and where it's the human impulse to trust
- [00:59:25.990]these sources, or to educate myself and do my own research? Because if we're all
- [00:59:32.070]doing our own research as critically and analytically as possible inside of a
- [00:59:35.750]platform that really doesn't want us informed, we're never gonna get there. And
- [00:59:39.210]I'm just curious about, like, where does misinformation stop if we tackle AI?
- [00:59:45.730]To me, misinformation is about questioning the source, not the actual
- [00:59:50.070]content; it's, what's the incentive of this person to tell me the truth, and how
- [00:59:55.870]do you, I guess, how do you see that? Yeah, I'm so glad you brought that up,
- [01:00:00.010]because it's one thing I had to blow by real quick. One of the things I wanted
- [01:00:03.550]to end with was to say that one of the best things we can do is check the source. So
- [01:00:07.570]there's this method that my colleague Mike Caulfield came up with; it's called
- [01:00:10.150]SIFT. Basically, it's real simple: stop and investigate, essentially, the source.
- [01:00:13.930]That's one of the most important things you can
- [01:00:15.670]do of all the things. It's not the content itself, it's really the sourcing,
- [01:00:18.910]and credibility and reputations matter more in this world than they ever have. And
- [01:00:23.110]we should be questioning it to some degree, but at some point we have to
- [01:00:26.530]have some trust. I mean, trust levels have gone down in about every institution
- [01:00:30.610]possible. But to give you a practical example, not to say this is a panacea for
- [01:00:35.050]addressing these issues, and it's certainly about more than AI; we had
- [01:00:38.410]this issue even if AI hadn't come into it. First of all, AI has been
- [01:00:42.050]around, of course, for a very long time, many decades, since
- [01:00:45.610]Rosenblatt in the 1950s; this is not a new thing. But its current form,
- [01:00:51.310]where you have these machines that can create plausible-looking text, that is
- [01:00:55.630]new. But I could not agree more that
- [01:01:00.990]misinformation is its own thing, and it didn't need AI to make it worse. But
- [01:01:06.050]there are some practical things we can do. So one of the things we've been telling
- [01:01:08.710]Google for like three years, maybe four years now, is, why aren't you making
- [01:01:13.270]the tools that fact-checkers need
- [01:01:15.550]to do sourcing? So this is sourcing, like you say, so you can check who is behind
- [01:01:20.590]what you're seeing. Like Faktabaari; I was just recently in Finland talking to
- [01:01:24.610]this fact-checking organization, and I wanted to know who they were. So you can now go
- [01:01:27.810]to these three vertical dots; they exist on Google now. Not that they're perfect,
- [01:01:30.710]but Google is now providing some of that metadata. We encouraged them to do this,
- [01:01:33.990]and now they are doing it, and you can actually see, you know, this was
- [01:01:38.690]first indexed by Google in March 2014. If you see something indexed yesterday, that
- [01:01:42.450]might be something worth questioning. There's all this metadata. Again, this isn't
- [01:01:45.490]saying we've fixed it; it means we need to create better tools for sourcing, and
- [01:01:49.750]not doing what Perplexity asks us to do, which is to just accept the answer:
- [01:01:53.350]we're gonna give you the answer, you don't need those ten blue links anymore.
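The three-dot panel described here is a manual check, but the underlying idea, judging a source by its metadata rather than by its content, can be sketched programmatically. The snippet below is a hedged illustration only, not a tool from the talk: it assumes the third-party python-whois package and uses a domain's registration date as a rough stand-in for the "first indexed" date that Google's panel shows, flagging very young domains as worth a closer look.

# A minimal sketch of source-level metadata checking, assuming the
# third-party python-whois package (pip install python-whois).
# Domain age is only a rough proxy for "when was this first indexed?";
# a young domain is not proof of anything, just a reason to keep digging.
from datetime import datetime

import whois  # provided by the python-whois package


def domain_age_days(domain: str) -> int | None:
    """Return the approximate age of a domain in days, or None if unknown."""
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return several dates
        created = min(d for d in created if d is not None)
    if created is None:
        return None
    now = datetime.now(created.tzinfo) if created.tzinfo else datetime.now()
    return (now - created).days


def source_age_flag(domain: str, min_days: int = 365) -> str:
    """Flag very young domains as worth questioning before trusting them."""
    age = domain_age_days(domain)
    if age is None:
        return f"{domain}: no registration date found, check the source by hand"
    if age < min_days:
        return f"{domain}: registered {age} days ago, worth questioning"
    return f"{domain}: registered {age} days ago, at least not brand new"


if __name__ == "__main__":
    for site in ("snopes.com", "example.com"):
        print(source_age_flag(site))

As with the "indexed yesterday" example, the number does not decide anything on its own; it is just one more piece of sourcing metadata to consult before accepting an answer.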
- [01:01:56.250]And so the example I love to give, because this is something I study: I can
- [01:02:00.330]guarantee you, in the next hurricane we have, there's gonna be a shark
- [01:02:05.050]swimming in the street, guaranteed, and it's gonna be this one image. Now, the
- [01:02:09.430]crazy thing is, last year there actually was a shark swimming in the
- [01:02:13.550]street, so now my students make fun of me again with this
- [01:02:15.430]example, but it's still a good example. With these three vertical dots, this is
- [01:02:18.450]literally a new feature within the last several months, you can now look at
- [01:02:22.790]the three vertical dots on images, and you'll see with this image right here
- [01:02:25.990]that Snopes and others have asked, does this picture show a shark? No. And you can
- [01:02:29.810]look at all these places now. You could still go back to your point and say,
- [01:02:34.030]well, ultimately, I still don't know if
- [01:02:38.870]that's true, and that's why we have to get out of that internet selling
- [01:02:42.130]business and come offline like this, come into our communities. I
- [01:02:45.370]think we need to create a culture of coming offline more than anything, so that
- [01:02:49.150]we can get back to that kind of thing.
- [01:02:53.130]It totally is; we're becoming more splintered. In fact, the term that I
- [01:03:02.410]and other colleagues have been using for a long time, it's not my term, but I use
- [01:03:04.810]it, is the "splinternet." It's not the internet anymore. In the
- [01:03:07.690]1990s I remember sitting around with my friends going, wow, the democratization of
- [01:03:11.770]information, it's gonna be so cool, you know, sitting around the campfire. Now
- [01:03:15.310]it's gone in the opposite direction; I don't think that's where we've
- [01:03:19.210]ended up. We have our own internets now, and if we get away from them, at least we share
- [01:03:23.030]a physical space. But anyway, that would be a little bit oversimplified, given the
- [01:03:28.090]problem. But to your point, yes, it is more than just AI; misinformation does involve
- [01:03:33.290]politics, power, and all those things that have been around forever. So thank you.
- [01:03:38.930]Thank you, Jevin.