"Discriminatory Algorithms and Cultural Complexities" | CAS Inquire
CAS
Author
02/27/2020
Description
Drs. Carrie Heitman and Heather Richards-Rissetto of the Department of Anthropology give this CAS Inquire talk. cas.unl.edu/cas-inquire
Searchable Transcript
- [00:00:03.650]I have the outfit (mumbles)
- [00:00:05.520]ready to go for the talk. (laughing)
- [00:00:07.470]So I'm Heather Richards-Rissetto
- [00:00:09.650]and I'll be joined by Carrie Heitman in a bit,
- [00:00:14.530]but I'm gonna start out the talk
- [00:00:17.810]so before we begin, we wanna thank the CAS Inquire committee
- [00:00:21.740]and June Griffin for bringing to (mumbles) this program
- [00:00:25.460]and for inviting us to speak today.
- [00:00:27.600]We'd also like to thank our anthro colleague Taylor
- [00:00:30.590]for directing CAS Inquire
- [00:00:31.950]and congratulate him on his great success
- [00:00:33.880]in its inaugural year.
- [00:00:37.992]While computers now influence, some might say infiltrate,
- [00:00:41.380]nearly every aspect of our lives,
- [00:00:43.660]they were originally designed for number crunching,
- [00:00:46.460]that is, they were designed to perform basic calculations
- [00:00:49.240]on big data sets, much faster than humans could.
- [00:00:53.140]Today, computers are integral to education, healthcare,
- [00:00:57.000]entertainment, transportation, agriculture, our social lives
- [00:01:02.000]well, basically everything.
- [00:01:04.310]While we may be aware of their presence,
- [00:01:06.260]we might often not notice their influence
- [00:01:09.310]on our daily decision making.
- [00:01:11.890]Every day, computers are making decisions for us, or are they?
- [00:01:15.870]Are computers really taking the place
- [00:01:17.430]of human decision making?
- [00:01:19.753]Yes and no.
- [00:01:21.730]Our objective in this talk is to delve deeper
- [00:01:24.060]into this question, through exploration of algorithms
- [00:01:27.600]and their influence on culture.
- [00:01:32.010]But before we delve into how algorithms affect our lives,
- [00:01:34.800]whether positively or negatively, and then reflect
- [00:01:37.720]on the future, let's begin with the basics.
- [00:01:40.710]What are algorithms?
- [00:01:42.130]And how do they relate to artificial intelligence
- [00:01:44.300]and machine learning, two terms currently inundating the media?
- [00:01:51.720]What exactly is an algorithm?
- [00:01:54.200]We hear this word almost every day
- [00:01:56.690]and yet to most of us, algorithms are still daunting
- [00:01:59.120]and mysterious, but algorithms are simply a set of guidelines
- [00:02:02.970]that describe how to perform a task.
- [00:02:05.670]Remember in elementary school when you had to stand
- [00:02:07.970]up in front of the class and give a speech
- [00:02:10.310]on how to perform a task such as baking a runza?
- [00:02:13.590]Or how to navigate traffic on a (mumbles) game day?
- [00:02:17.220]Well, essentially you created an algorithm.
- [00:02:20.050]A sequence of instructions on how to do something.
- [00:02:23.230]In our digital age however, algorithms typically refer
- [00:02:26.250]to a sequence of instructions telling a computer
- [00:02:28.700]what to do rather than a human.
- [00:02:31.340]Basically, an algorithm is just computer code
- [00:02:34.140]and software is fundamentally composed of algorithms.
- [00:02:38.420]So, not that mysterious after all.
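To make that concrete, here is a minimal Python sketch, not from the talk, that writes the runza speech as an algorithm: an ordered list of steps that a computer simply follows one at a time. The step text and function name are illustrative only.

```python
# A minimal sketch (not from the talk): an algorithm is just an ordered
# sequence of instructions. Here the "how to bake a runza" speech is
# written as steps a computer could follow one by one.

RUNZA_STEPS = [
    "Mix and proof the bread dough",
    "Brown the ground beef with cabbage and onion",
    "Roll out the dough and spoon in the filling",
    "Seal each pocket and place it seam-side down",
    "Bake until golden brown",
]

def run_algorithm(steps):
    """Execute a sequence of instructions in order, one at a time."""
    for number, step in enumerate(steps, start=1):
        print(f"Step {number}: {step}")

run_algorithm(RUNZA_STEPS)
```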
- [00:02:43.060]So, how do algorithms impact our daily lives?
- [00:02:46.470]Algorithmic operations are behind all of the responses
- [00:02:51.040]that we get from digital assistants such as Siri, Cortana,
- [00:02:54.510]or Alexa, when we ask them a question,
- [00:02:57.310]they use one set of algorithms to interpret
- [00:02:59.820]or get a sense of what you've asked,
- [00:03:02.110]and then another set to acquire the information
- [00:03:04.520]you've requested. So, just this morning I asked Siri,
- [00:03:08.970]what should I have for breakfast?
- [00:03:10.670]She responded, broccoli.
- [00:03:12.520]So I think the algorithm might be a little bit (mumbles)
- [00:03:15.352]but we'll see, unless you love broccoli.
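As a rough illustration of that two-stage idea, here is a toy Python sketch, emphatically not how Siri, Cortana, or Alexa actually work: one hypothetical function interprets the request into an intent, and a second uses that intent to look up an answer.

```python
# A toy sketch (not any assistant's real code) of the two stages described:
# one set of rules interprets the request, another fetches an answer.

def interpret(utterance):
    """Stage 1: turn the spoken request into a structured intent."""
    text = utterance.lower()
    if "breakfast" in text:
        return {"intent": "meal_suggestion", "meal": "breakfast"}
    return {"intent": "unknown"}

def respond(intent):
    """Stage 2: use the intent to look up or generate an answer."""
    suggestions = {"breakfast": "broccoli"}  # apparently
    if intent["intent"] == "meal_suggestion":
        return f"How about {suggestions[intent['meal']]}?"
    return "Sorry, I didn't catch that."

print(respond(interpret("What should I have for breakfast?")))
```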
- [00:03:18.620]Social media is fraught with algorithms,
- [00:03:21.030]Instagram, Twitter, Facebook,
- [00:03:23.600]and these algorithms are constantly changing.
- [00:03:26.250]Prior to late 2019, Facebook's newsfeed algorithm ranked
- [00:03:30.540]and displayed posts for users based on relevance
- [00:03:33.980]driven primarily by businesses, brands and media.
- [00:03:37.630]Now Facebook is using a new algorithm to select news feed posts
- [00:03:40.870]based on the likelihood that a user will have a positive
- [00:03:43.730]reaction. They do this by studying your friends,
- [00:03:47.220]family and groups, so that's not creepy at all.
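A hedged miniature of that kind of ranking, with made-up posts and scores rather than anything from Facebook, might look like this: each post carries a predicted-positive-reaction score, and the feed is simply the posts sorted by that score.

```python
# An illustration only (not Facebook's actual code): a feed "ranking"
# algorithm in miniature. Each post gets a predicted-reaction score,
# and the feed shows posts in descending score order.

posts = [
    {"id": "p1", "from": "close friend", "predicted_positive_reaction": 0.91},
    {"id": "p2", "from": "brand page",   "predicted_positive_reaction": 0.35},
    {"id": "p3", "from": "family group", "predicted_positive_reaction": 0.78},
]

feed = sorted(posts, key=lambda p: p["predicted_positive_reaction"], reverse=True)
print([p["id"] for p in feed])  # -> ['p1', 'p3', 'p2']
```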
- [00:03:54.129]And oops, back here (mumbles)
- [00:03:57.477]And recently, McDonald's bought an algorithm
- [00:04:00.410]for data driven meal choices that can automatically change
- [00:04:03.670]menus depending on the weather, time of day
- [00:04:06.340]and traffic and of course suggest an order
- [00:04:08.680]based on your license plate.
- [00:04:12.640]So how do such algorithms relate
- [00:04:14.760]to artificial intelligence and machine learning?
- [00:04:18.320]Artificial intelligence, or AI, is an umbrella term
- [00:04:21.550]referring to instances where machines simulate human
- [00:04:24.120]behavior using human intelligence.
- [00:04:27.270]When a machine completes tasks based on a set
- [00:04:29.080]of instructions, that is, an algorithm,
- [00:04:31.510]this intelligent behavior is what we call artificial
- [00:04:33.880]intelligence. Machine learning is simply a type of AI.
- [00:04:38.840]In the case of algorithms, instead of repeatedly processing
- [00:04:41.730]a stable set of instructions, systems based on machine
- [00:04:45.070]learning rewrite themselves as they work.
- [00:04:47.690]In other words, humans write the original algorithm
- [00:04:50.450]or computer code, but then the computer modifies
- [00:04:53.120]this original algorithm based on what it's told to learn,
- [00:04:56.330]what it's learning from, that is, its training set,
- [00:04:59.170]and how it's learning.
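Here is a minimal sketch of that idea in Python, not the speakers' example and with made-up data: a human writes the starting rule (weights of zero) and the update procedure, and the program then adjusts its own parameters from a small training set.

```python
# A minimal sketch (not the speakers' example): humans write the starting
# rule, then the program adjusts its own numbers from a training set.
# Here a one-feature model learns a pass/fail rule from labeled data.

training_set = [  # (hours of study, passed exam?) -- made-up data
    (1.0, 0), (2.0, 0), (3.0, 1), (4.5, 1), (5.0, 1),
]

weight, bias = 0.0, 0.0          # the human-written starting point
learning_rate = 0.1

for _ in range(20):              # the "learning" loop
    for hours, passed in training_set:
        prediction = 1 if weight * hours + bias > 0 else 0
        error = passed - prediction
        # The program modifies its own parameters based on its mistakes:
        weight += learning_rate * error * hours
        bias += learning_rate * error

print(f"learned rule: predict pass if {weight:.2f} * hours + {bias:.2f} > 0")
```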
- [00:05:03.280]So this brings up two critical points.
- [00:05:05.960]First, we must remember, as Cathy O'Neil,
- [00:05:08.680]author of " Weapons of Math Destruction" writes,
- [00:05:11.760]algorithms are opinions embedded in code.
- [00:05:14.870]Humans write the rules that comprise algorithms,
- [00:05:17.930]and who typically writes these rules?
- [00:05:19.830]Well, those typically with wealth and power.
- [00:05:28.110]Second, in a similar vein, Meredith Whittaker,
- [00:05:31.560]a former Google employee was quoted saying,
- [00:05:34.113]" What's frightening about ai isn't terminators
- [00:05:36.560]and super intelligent machines, it's the way ai works
- [00:05:40.140]to centralize knowledge and power in the hands
- [00:05:42.320]of those who already have it, and further disempower
- [00:05:44.980]those who don't."
- [00:05:46.660]But before we go down that road,
- [00:05:48.160]let's talk about some potential positives of algorithms.
- [00:05:53.020]For example, self driving vehicles or buses
- [00:05:56.340]can potentially lead to safer roads or fewer traffic jams
- [00:05:59.200]reducing carbon emissions.
- [00:06:01.120]So here you're seeing Chancellor Green,
- [00:06:03.440]avoiding a self driving bus in Lincoln's 2018
- [00:06:06.680]autonomous shuttle project.
- [00:06:09.010]Another potential positive is in retail,
- [00:06:11.370]using real-time geofencing,
- [00:06:13.950]in which location-based algorithms trigger notifications
- [00:06:18.910]on sales, best prices and even offer coupons
- [00:06:22.700]as you walk down the aisles of your favorite store.
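As an illustration of what a location-based trigger can look like, here is a small Python sketch with a hypothetical store location and radius, not any retailer's actual code: if the shopper's coordinates fall inside the geofence, a notification fires.

```python
# A minimal geofencing sketch (illustrative only, not any retailer's code):
# when a shopper's location falls inside a store's radius, trigger an offer.
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical store geofence: a center point and a 50-meter radius.
STORE = {"name": "Favorite Store", "lat": 40.8136, "lon": -96.7026, "radius_m": 50}

def check_geofence(user_lat, user_lon):
    if distance_m(user_lat, user_lon, STORE["lat"], STORE["lon"]) <= STORE["radius_m"]:
        return f"Welcome to {STORE['name']}! Here's a coupon for today's sale."
    return None

print(check_geofence(40.8137, -96.7025))  # inside the fence -> notification
print(check_geofence(40.8200, -96.7100))  # outside -> None
```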
- [00:06:26.122](chuckling)
- [00:06:27.530]That makes me wonder what Big Brother was,
- [00:06:29.007]but in this week, (chuckling)
- [00:06:31.300]So text and speech recognition algorithms can recognize
- [00:06:36.290]and translate written text into spoken language
- [00:06:39.240]and, vice versa, translate spoken language
- [00:06:42.950]into text or even into drawings.
- [00:06:49.137]There is a moon at night,
- [00:06:53.780]two trees in the field.
- [00:07:00.543]Some flowers.
- [00:07:07.428]Far away there's a house.
- [00:07:09.401](chattering)
- [00:07:12.826](mumbles)
- [00:07:14.530]So this Google experiment turns real-time speech
- [00:07:18.020]into animated drawings. Such tools can possibly facilitate
- [00:07:21.660]web accessibility for disabled persons,
- [00:07:23.960]foster alternative modes of communication and much more.
- [00:07:31.240]With global population growth and aging populations,
- [00:07:34.100]healthcare is one of the world's fastest growing
- [00:07:36.150]and most important industries
- [00:07:38.150]and algorithms are changing medical practice from diagnosing
- [00:07:41.110]and even predicting disease to discovering new medications.
- [00:07:45.320]Google software can look into our eyes to assess our risk
- [00:07:48.130]for a heart attack or stroke. Other algorithms can spot DNA
- [00:07:53.050]mutations and tumors, and artificial intelligence
- [00:07:56.400]is being used to improve breast cancer screening
- [00:07:59.890]or predict lung cancer, which is the deadliest cancer
- [00:08:03.150]in the world and the sixth most common cause of death.
- [00:08:06.710]Recently, an algorithm discovered a much needed,
- [00:08:10.000]entirely new antibiotic called halicin,
- [00:08:13.050]which successfully destroys several bacterial strains
- [00:08:16.400]including some of the most dangerous drug resistant bacteria
- [00:08:19.160]on the World Health Organization's most wanted list.
- [00:08:26.210]In 2018, Google Arts and Culture released an app
- [00:08:30.560]enabling you to find your art doppelganger,
- [00:08:34.450]what could be better?
- [00:08:35.790]You took a selfie and Google's facial recognition algorithm
- [00:08:38.690]searched the database of art from 17 museums worldwide
- [00:08:41.960]to find your twin.
- [00:08:47.275](upbeat music) Do I look like her?
- [00:08:49.813](upbeat music)
- [00:08:54.720]I feel very fancy looking at these, like all of them have
- [00:08:58.080]a Victorian collar, their hair is curled,
- [00:09:00.720]they have a lot of jewelry (chattering)
- [00:09:02.954]and the painting styles are really different
- [00:09:04.371]too, like I can tell they're from different eras.
- [00:09:07.798](upbeat music)
- [00:09:11.454]I think the beard might be a little misleading
- [00:09:14.090]because all the results have beards so far,
- [00:09:17.420]so I think it's pretty much focusing on that aspect
- [00:09:19.680]as opposed to other things.
- [00:09:21.389](upbeat music)
- [00:09:24.123]I would say it picked up well on the nose.
- [00:09:33.982]So, will your selfie be in a museum one day?
- [00:09:39.155]Nope.
- [00:09:41.477]While finding your museum doppelganger is fun,
- [00:09:44.680]the science behind it is scary. Arwa Mahdawi,
- [00:09:48.860]a columnist for The Guardian writes
- [00:09:51.010]in reference to the doppelganger project, I quote,
- [00:09:54.173]" Google's latest experiments is less about art
- [00:09:56.870]than it does about the burgeoning science
- [00:09:58.490]of facial recognition technology
- [00:10:00.117]and the rise of the surveillance state."
- [00:10:05.720]So as great as it might sound to have your face
- [00:10:08.360]or your selfie in a museum one day, as archeologists,
- [00:10:11.090]Carrie and I can't help but ponder this excerpt
- [00:10:13.450]from the New Yorker by Adrian Chen.
- [00:10:15.833]" What does it mean that our cultural history,
- [00:10:18.170]like everything else is increasingly under the watchful eye
- [00:10:20.860]of a giant corporation whose business model
- [00:10:23.420]rests on data mining?
- [00:10:25.350]One dystopian possibility offered by critics
- [00:10:28.130]in the wake of the Google selfie app,
- [00:10:30.020]was that Google was using all of the millions
- [00:10:31.840]of unflattering photos to train its algorithms."
- [00:10:35.260]Those are the ones you saw people taking in the video.
- [00:10:38.123]" Google has denied this but training goes both ways,
- [00:10:41.550]as Google scans and processes more of the world's cultural
- [00:10:45.070]artifacts, it will be easier than ever to find
- [00:10:47.880]ourselves in history, so long as we offer
- [00:10:50.360]ourselves up to the computer's gaze."
- [00:10:52.840]So on that sobering note,
- [00:10:54.180]I'm gonna turn it over to Carrie.
- [00:11:04.462]Hi everybody!
- [00:11:05.295]It's a pleasure to be here.
- [00:11:09.230]So there's no doubt that artificial intelligence
- [00:11:11.450]or ai is seeping into nearly every aspect of our lives
- [00:11:14.410]whether we know it or not.
- [00:11:16.450]According to UC Berkeley computer scientist Dr. Stuart
- [00:11:19.040]Russell, the upside of such technologies is enormous,
- [00:11:22.960]a potential golden age for humanity,
- [00:11:25.840]but as informed citizens, we also need to understand
- [00:11:28.480]the associated risks. The application of complex algorithms
- [00:11:31.810]is continuing to expand in ways that many find alarming
- [00:11:35.120]or, at the very least, outstripping our ability to assess
- [00:11:38.190]the associated risks, and while this is not an exhaustive
- [00:11:41.470]list of such potential negative consequences,
- [00:11:44.460]we'd like to share with you a few examples.
- [00:11:51.011]Let's see if this will work.
- [00:11:53.984](mumbles) okay
- [00:11:55.420]So Fortune 500 companies such as Hilton,
- [00:11:57.840]Goldman Sachs and Unilever are using ai
- [00:12:00.540]in their assessments of job candidates.
- [00:12:02.880]They work with third party vendors like HireVue
- [00:12:05.890]to use a candidate's computer or cell phone
- [00:12:08.470]camera to record job interviews.
- [00:12:10.620]Their proprietary algorithm
- [00:12:12.420]then analyzes the person's facial movements, word choice,
- [00:12:16.510]and speaking voice before ranking them against other applicants
- [00:12:20.760]based on an automatically generated employability score.
- [00:12:24.360]This is called affect recognition
- [00:12:26.620]and HireVue claims that they can measure qualities
- [00:12:28.910]such as grit and track how many times a candidate smiles
- [00:12:33.070]And if you are interested, you should just go on YouTube,
- [00:12:36.600]and there are just oodles of videos
- [00:12:38.410]about how to trick the system to try to game
- [00:12:41.110]your employability score.
- [00:12:44.130]Companies like HireVue and VCV claim that AI
- [00:12:47.130]removes human biases that are rampant in hiring practices.
- [00:12:50.750]In their 2019 annual report,
- [00:12:52.730]however, the AI Now Institute points
- [00:12:55.790]out that affect recognition
- [00:12:58.210]has not been proven to be scientifically valid
- [00:13:00.740]and that such technologies may simply automate
- [00:13:02.810]discrimination. Whittaker, whom we heard from earlier,
- [00:13:06.320]of AI Now, states, quote,
- [00:13:07.563]" It's profoundly disturbing,
- [00:13:09.740]that we have proprietary technology that claims
- [00:13:12.410]to differentiate between a productive worker
- [00:13:14.800]and a worker who isn't fit based on their facial movements,
- [00:13:18.730]their tone of voice and their mannerisms."
- [00:13:23.770]A 2019 study published in the journal Science
- [00:13:27.220]found evidence of racial bias in one algorithm widely used
- [00:13:30.760]within the US healthcare system.
- [00:13:32.750]According to the researchers,
- [00:13:34.030]the bias occurred because the algorithm uses health costs
- [00:13:38.130]as a proxy for health needs.
- [00:13:40.580]The authors estimated that this racial bias
- [00:13:42.560]reduces the number of black patients
- [00:13:44.670]identified for extra care by more than half.
- [00:13:47.840]Less money is spent on black patients
- [00:13:49.700]who have the same level of need,
- [00:13:51.420]and the algorithm thus falsely concludes
- [00:13:53.570]that black patients are healthier
- [00:13:55.130]than equally sick white patients.
- [00:13:57.500]This is a case where existing structural inequalities
- [00:13:59.970]based on race, were already part of the data input
- [00:14:03.050]and thus were exacerbated
- [00:14:04.450]by the Optum proprietary algorithm.
- [00:14:07.360]The good news is that the researchers found
- [00:14:09.840]that the racial bias in predicting who needs care
- [00:14:12.550]could be eliminated by reformulating the algorithm
- [00:14:15.490]so that it no longer uses cost as a proxy for need.
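To illustrate the proxy problem with made-up numbers, nothing from the actual study, here is a small Python sketch: when historical spending is lower for one group at the same level of need, ranking patients by predicted cost under-selects that group, while ranking by need itself does not.

```python
# A sketch with synthetic numbers (not the study's data) of how the choice
# of prediction target changes who gets flagged for extra care. If
# historically less money was spent on one group at the same level of need,
# ranking patients by cost will under-select that group; ranking by a
# direct measure of need will not.

# (patient id, need_score, historical_cost, group) -- illustration only
patients = [
    ("A1", 9, 9000, "group_a"),
    ("B1", 9, 5000, "group_b"),   # same need, less was spent historically
    ("A2", 6, 6500, "group_a"),
    ("B2", 6, 3500, "group_b"),
    ("A3", 3, 3000, "group_a"),
    ("B3", 3, 1800, "group_b"),
]

def top_two(key_index, label):
    ranked = sorted(patients, key=lambda p: p[key_index], reverse=True)[:2]
    print(label, [p[0] for p in ranked])

top_two(2, "Flagged when ranking by cost proxy:")   # selects A1, A2
top_two(1, "Flagged when ranking by need itself:")  # selects A1, B1
```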
- [00:14:20.050]In addition to algorithmic biases in healthcare,
- [00:14:22.620]there have also been a number of high profile cases
- [00:14:24.610]of racial discrimination in facial recognition algorithms.
- [00:14:27.920]One where Google's facial recognition algorithm
- [00:14:30.830]misidentified black faces and another where Nikon's
- [00:14:34.120]Coolpix digital camera exhibited bias against Asian faces
- [00:14:37.480]with regard to its " Did Someone Blink?" warning.
- [00:14:44.630]With this case, in terms of criminal prediction
- [00:14:48.000]and sentencing, we see just one of the troubling examples
- [00:14:51.130]of algorithmic governance,
- [00:14:53.370]where algorithms are now being used to govern citizens.
- [00:14:56.410]In 2016, ProPublica published an investigative exposé
- [00:15:00.570]of how criminal justice officials are using risk prediction
- [00:15:03.930]equations to inform their decisions about bail,
- [00:15:06.950]sentencing and early release.
- [00:15:09.090]They collected the risk assessment scores
- [00:15:11.240]for more than 10,000 people arrested for crimes
- [00:15:14.020]in Florida's Broward County and checked to see
- [00:15:16.610]how many were charged with further crimes within two years.
- [00:15:20.310]They were looking to see if those with higher risk
- [00:15:22.490]assessments went on to be arrested for new crimes
- [00:15:25.530]compared to those with lower risk assessments.
- [00:15:28.370]What they found was a racial disparity.
- [00:15:31.060]The data showed that black defendants were twice
- [00:15:34.200]as likely to be incorrectly labeled as higher risk
- [00:15:37.650]than white defendants. Conversely, white defendants
- [00:15:40.610]labeled low risk were far more likely to end
- [00:15:43.220]up being charged with new offenses than African Americans
- [00:15:46.560]with comparably low COMPAS risk scores.
- [00:15:49.650]You can see one example highlighted on the screen
- [00:15:51.610]where the white defendant was given a low risk score
- [00:15:53.780]of three but went on to commit three subsequent offenses
- [00:15:57.270]within the two year study,
- [00:15:58.830]whereas an African American defendant was given a higher risk
- [00:16:01.370]score of 10 but went on to commit no subsequent offenses.
- [00:16:05.170]Within their 10,000 person study,
- [00:16:07.540]the results showed that the risk assessment predictions
- [00:16:10.210]overestimated risk for African American defendants
- [00:16:14.600]and underestimated risk for white defendants.
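The kind of check behind that finding can be sketched in a few lines of Python with synthetic records, not ProPublica's actual data: among people who did not reoffend within two years, compare how often each group was labeled high risk.

```python
# A minimal sketch (synthetic records, not ProPublica's data) of the kind of
# check the journalists ran: among people who did NOT reoffend, how often was
# each group labeled "high risk"? Different rates signal disparate error.

# (group, labeled_high_risk, reoffended_within_two_years)
records = [
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("black", True, False), ("white", False, True), ("white", False, False),
    ("white", True, True), ("white", False, True), ("white", False, False),
]

def false_positive_rate(group):
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in non_reoffenders if r[1]]
    return len(wrongly_flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(f"{group}: false positive rate = {false_positive_rate(group):.0%}")
```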
- [00:16:17.340]The good news here is that in the aftermath
- [00:16:19.610]of this investigative journalism piece,
- [00:16:21.530]there's been a concerted effort to help improve
- [00:16:23.810]the COMPAS algorithm to reduce racial bias
- [00:16:26.270]in risk assessment.
- [00:16:29.720]Warning, things are gonna get even more grim (chuckling)
- [00:16:33.310]According to some ai experts,
- [00:16:35.360]the most serious short term risk
- [00:16:37.370]involves autonomous weapons.
- [00:16:39.620]The company STM, for instance,
- [00:16:42.350]is working with the Turkish government
- [00:16:43.980]to create swarm drones that use facial recognition
- [00:16:47.100]technology to kill a target.
- [00:16:51.898](upbeat music)
- [00:17:41.458]Given what we already know about the risks and weaknesses
- [00:17:44.020]of facial recognition algorithms,
- [00:17:45.840]this military application is particularly chilling.
- [00:17:50.360]In response to this growing risk, a coalition of AI
- [00:17:53.480]researchers and advocacy organizations commissioned a 2017
- [00:17:57.310]film called " Slaughterbots."
- [00:17:59.450]The film asks the question," What happens
- [00:18:02.050]when miniature drones with AI facial recognition
- [00:18:05.960]and three grams of explosive, programmable to kill
- [00:18:08.860]a specific individual, become available?"
- [00:18:11.890]Spoiler alert, it's not good.
- [00:18:14.710]The film was presented to the United Nations Convention
- [00:18:17.290]on Conventional Weapons, hosted by the Campaign
- [00:18:19.870]to Stop Killer Robots.
- [00:18:24.770]On a more academic note, research on AI
- [00:18:27.300]and its underlying algorithms has created
- [00:18:29.180]what some account as a crisis in science.
- [00:18:32.760]A foundation of scientific inquiry is the expectation
- [00:18:35.470]that results can be replicated and verified;
- [00:18:38.220]unpublished code and sensitivity to training conditions
- [00:18:41.280]have made it difficult for AI researchers
- [00:18:43.600]to reproduce many key results.
- [00:18:45.930]We can add to this broad concern the issue of large scale
- [00:18:48.450]data sets used in training, which are largely in the hands
- [00:18:51.340]of just a few for profit corporations like Amazon,
- [00:18:54.180]Google, Facebook, IBM, and Microsoft.
- [00:19:00.100]Those of you paying close attention may have noticed
- [00:19:01.840]a recurring pattern in the negative consequences
- [00:19:04.530]we've mentioned thus far.
- [00:19:06.230]These issues were brought forward not by the industries,
- [00:19:09.010]or the corporations using the algorithms in question
- [00:19:12.070]but by outside researchers, journalists,
- [00:19:14.600]and concerned citizens.
- [00:19:16.240]So looking ahead, we might reasonably ask the question,
- [00:19:18.843]" Do we trust corporations who are using these technologies
- [00:19:22.250]for profit and competitive advantage to police themselves?"
- [00:19:26.760]There are various other areas of concern
- [00:19:28.430]that we'll briefly just mention
- [00:19:29.610]here, and for those of you who are interested,
- [00:19:32.170]we'd urge you to look at the 2019 report published
- [00:19:35.030]by the ai Now Institute.
- [00:19:37.840]According to their latest report,
- [00:19:39.180]algorithms are increasing the power asymmetry
- [00:19:41.570]between workers and employers,
- [00:19:45.660]through applications
- [00:19:47.820]such as algorithmic wage control,
- [00:19:50.120]governments are increasingly adopting ai systems to surveil
- [00:19:53.780]and control people but without regulation,
- [00:19:56.380]and corporations are expanding the scale
- [00:19:58.330]and scope of algorithmic health infrastructures.
- [00:20:03.690]These technologies are disproportionately impacting
- [00:20:06.280]vulnerable communities, people of color, women and the poor.
- [00:20:10.810]It is also important to note that these ai systems
- [00:20:13.360]are vulnerable to bad actors who can utilize backdoor access
- [00:20:17.160]to retrain the system on bad data.
- [00:20:19.700]An algorithm optimized for click through for instance,
- [00:20:22.730]can be used to manipulate public opinion.
- [00:20:26.030]At a broader level, what are the issues at play
- [00:20:28.390]that we should think about in the deployment
- [00:20:29.920]of these technologies across such diverse domains?
- [00:20:35.315]Campolo and Crawford have argued that there's a discourse
- [00:20:37.870]of enchanted determinism,
- [00:20:39.760]which surrounds artificial intelligence.
- [00:20:42.450]AI systems are portrayed as exceeding human intelligence
- [00:20:45.300]or performance, a form of magical or alien genius.
- [00:20:49.420]This rhetoric they argue, serves to shield the creators
- [00:20:52.599]of these systems from the consequences
- [00:20:55.640]their designers may not fully anticipate,
- [00:20:58.000]understand or control.
- [00:21:02.040]This leads us to an accountability gap,
- [00:21:04.130]where deep learning systems might be considered
- [00:21:06.300]beyond typical ethical responsibilities
- [00:21:09.090]and legal liabilities.
- [00:21:10.790]In 2019, Crawford and Schultz published an article
- [00:21:14.290]in the Columbia Law Review titled
- [00:21:16.233]" AI Systems as State Actors."
- [00:21:18.880]In the article, they observe how governments
- [00:21:20.850]are increasingly turning to third party vendors
- [00:21:23.950]to provide algorithmic architecture for public services,
- [00:21:27.620]including welfare benefits and criminal risk assessments
- [00:21:31.120]but when challenged, many state governments
- [00:21:33.410]have disclaimed any knowledge or ability to understand,
- [00:21:36.500]explain, or remedy problems created by the ai systems
- [00:21:40.680]they procured from those third party vendors.
- [00:21:43.180]This accountability gap then means that algorithmic systems
- [00:21:46.730]are responsible for government decision making,
- [00:21:49.180]without any mechanism for accountability or liability.
- [00:21:55.320]Stated differently according to Kitchin
- [00:21:57.560]in a relatively recent piece and these are his words,
- [00:22:00.763]" We are now entering an era of widespread
- [00:22:03.110]algorithmic governance wherein algorithms play
- [00:22:05.910]an increasing role in the exercise of power,
- [00:22:08.390]a means through which to automate the disciplining
- [00:22:10.950]and controlling of societies,
- [00:22:12.760]and to increase the efficiency of capital accumulation."
- [00:22:16.900]End quote. And yet, what we generally lack as a public,
- [00:22:21.000]according to Diakopoulos, is clarity
- [00:22:23.660]about how algorithms exercise their power
- [00:22:25.980]over us. So, moving forward,
- [00:22:29.300]how do we ensure transparency and accountability
- [00:22:33.200]in these algorithmic systems?
- [00:22:35.790]In recent years, we've seen the advent of research
- [00:22:38.230]institutes dedicated to this topic.
- [00:22:41.010]There seems to be a widespread recognition
- [00:22:43.310]that corporations who control many of these proprietary
- [00:22:45.810]algorithms, and much of the underlying data
- [00:22:48.930]are not going to police themselves.
- [00:22:51.170]In the interest of time we'll share
- [00:22:52.520]just a few notable examples.
- [00:22:56.550]The AI Now Institute at New York University
- [00:22:58.740]was co-founded by Meredith Whittaker.
- [00:23:01.220]Whittaker, notably, previously founded Google's Open Research
- [00:23:04.590]group and was a core organizer of the Google walkouts.
- [00:23:08.410]AI Now, and this is their mission statement,
- [00:23:10.560]is an interdisciplinary research center
- [00:23:13.100]dedicated to understanding the social implications
- [00:23:15.740]of artificial intelligence.
- [00:23:17.590]The Institute has been issuing various reports
- [00:23:20.120]some of which you see here, just a smattering.
- [00:23:23.500]Also, individuals that are part of the Institute
- [00:23:25.900]have been testifying before the United States
- [00:23:28.100]House of Representatives Committee on Oversight
- [00:23:30.410]and Reform, and they've been publishing legal
- [00:23:33.560]and technical academic articles in various journals.
- [00:23:39.160]Similarly, the Ada Lovelace Institute in London
- [00:23:41.910]is an independent research and deliberative body
- [00:23:43.990]with a mission to ensure data
- [00:23:45.820]and ai work for people and society.
- [00:23:48.980]They've been pursuing similar strategies,
- [00:23:51.360]trying to make the case for the social value of data
- [00:23:54.260]and addressing the asymmetries of power and data justice.
- [00:23:57.880]Their approach is focused on changing regulations
- [00:24:00.860]as you can see here, changing narratives
- [00:24:03.550]and changing practices.
- [00:24:07.490]It's also worth noting that our neighbor to the north
- [00:24:09.490]recently embarked on a Pan-Canadian Artificial Intelligence
- [00:24:13.050]Strategy and in Europe, there are moves to use
- [00:24:16.380]the EU's General Data Protection Regulation, or GDPR,
- [00:24:20.540]to address transparency
- [00:24:22.210]and help regulate algorithmic systems.
- [00:24:24.680]In the US, it seems most of the activity in this area
- [00:24:27.710]is coming not from the government
- [00:24:29.530]but from advocacy groups and research centers,
- [00:24:31.630]like the AI Now Institute, which I mentioned.
- [00:24:37.320]One thing is clear: algorithmic systems,
- [00:24:41.330]both hard to say, are advancing rapidly,
- [00:24:44.780]with tremendous promise
- [00:24:46.160]and tremendous peril in equal measure.
- [00:24:49.040]We hope this presentation has given you all a sense
- [00:24:51.040]of the breadth and complexity of how algorithms
- [00:24:54.340]are being applied, and what challenges lie ahead
- [00:24:56.900]for all of us.
- [00:24:59.190]And in closing, we'd like to share three important
- [00:25:01.620]and optimistic considerations issued
- [00:25:04.520]by the Ada Lovelace Institute
- [00:25:05.950]in their rethinking data report.
- [00:25:09.810]So they're calling upon all of us, as actors in society
- [00:25:14.020]to consider the following:
- [00:25:16.780]how data might narrow any inequalities
- [00:25:18.690]rather than widen them.
- [00:25:21.600]How data might tackle power imbalances
- [00:25:24.430]rather than entrench them.
- [00:25:27.595]And how data might help us address the global issues
- [00:25:29.830]of resource allocation, climate change and migration,
- [00:25:33.320]rather than exacerbate them
- [00:25:36.193]and with that thanks very much for your attention.
- [00:25:38.717](clapping)
- [00:25:49.263]That's it?
- [00:25:51.750]Heather has to turn on her leg
- [00:25:53.117]There we go (chuckling)
- [00:25:55.950]All right, are there any questions?
- [00:26:00.730]From anyone in the audience?
- [00:26:05.058]Yes?
- [00:26:06.812]You gave us the perspective
- [00:26:07.645](mumbles) having control over ai technology
- [00:26:13.534]in what way can the everyday person kind of take back,
- [00:26:16.597]in what way do we have power
- [00:26:18.250]ourselves to influence (mumbles) ai
- [00:26:20.811](mumbles)
- [00:26:24.860]Great question.
- [00:26:27.339](mumbles)
- [00:26:29.055]You wanna start and then I'll take over
- [00:26:31.919]Can you repeat the question
- [00:26:32.752]so that everybody can hear it?
- [00:26:34.160]So the question was related to, we're hearing
- [00:26:39.490]about how big corporations, organizations,
- [00:26:41.600]et cetera are using big data potentially in negative ways.
- [00:26:45.870]So how as an everyday person can we counter
- [00:26:48.840]that and what strategies might be used to do so?
- [00:26:53.860]So to begin to address that one,
- [00:26:56.620]I think we could reiterate what Carrie
- [00:26:58.610]had said near the end right?
- [00:27:00.150]It's sort of, join some advocacy groups, right?
- [00:27:04.360]Be aware of what's going on and that's really the first
- [00:27:06.700]thing is to, you know, step back, reflect
- [00:27:09.190]and think about what's happening,
- [00:27:10.470]like when you're using social media,
- [00:27:13.010]whether it's Facebook, Instagram
- [00:27:14.240]or whatever it might be thinking
- [00:27:15.660]about what it is that you're seeing
- [00:27:17.070]and what impact it might have
- [00:27:19.220]maybe on you individually or in a larger sense right?
- [00:27:22.030]So that idea of being self reflective,
- [00:27:24.650]potentially being proactive in ways
- [00:27:29.051]you could become a computer programmer, I guess.
- [00:27:32.346](chuckling) right?
- [00:27:33.430]In addition to bringing your humanities background
- [00:27:35.630]cross the two together and sort of address
- [00:27:38.590]algorithmic development in alternative ways
- [00:27:42.007]and alternative perspectives so.
- [00:27:43.990]I mean I think I'll just add to that, I think
- [00:27:47.880]demanding greater transparency
- [00:27:50.790]in how these things are working
- [00:27:53.060]is one thing that we can do, I think
- [00:27:57.200]that companies that are using algorithms that may impact
- [00:28:02.310]our daily lives in positive and negative ways
- [00:28:05.390]are loath to release those and frankly,
- [00:28:09.050]they might be a bit opaque to us anyway
- [00:28:11.710]if we did actually see all of the source code,
- [00:28:14.620]but to reiterate Heather's point,
- [00:28:16.350]I think that getting involved,
- [00:28:18.630]being aware, educating yourself about all the ways
- [00:28:21.080]these are seeping into our lives is one way to start.
- [00:28:24.690]And some,
- [00:28:27.300]what might seem like relatively simple studies,
- [00:28:29.390]have actually led to scale changes,
- [00:28:31.920]you know, people have done things like just looking
- [00:28:34.130]for instance, how when you Google search
- [00:28:38.100]something it autofills, right?
- [00:28:39.763]What it thinks you're gonna be asking for just by starting
- [00:28:42.850]to ask questions about like, "Okay, well, if I'm logging in
- [00:28:45.900]from home or from the state of Nebraska,
- [00:28:48.490]what does that look like?"
- [00:28:49.370]And then asking the same question actually,
- [00:28:51.100]when they are somewhere else, to try to start
- [00:28:53.790]to unravel what it is the algorithm thinks
- [00:28:57.890]it knows about you and then studies have been built
- [00:29:00.450]upon those kinds of observations
- [00:29:02.580]to make larger scale changes
- [00:29:04.040]or observations about how these are impacting our lives.
- [00:29:12.205]Question over here?
- [00:29:14.518]My question is from biases,
- [00:29:18.327]color discrimination so we know the people who created
- [00:29:23.352](mumbles) to (mumbles) about discrimination,
- [00:29:26.701]that they have to do with the people
- [00:29:28.173]who create (mumbles)?
- [00:29:30.533]Who did not do that in (mumbles) why
- [00:29:32.738]because I believe machines only produce
- [00:29:36.450]what we (mumbles) on them
- [00:29:38.999]so could these biases be also be with the creators
- [00:29:42.079]of these algorithms?
- [00:29:44.328]And how can we change that? Absolutely,
- [00:29:46.130]I think that's absolutely right.
- [00:29:47.840]I mean, Heather's on top of one of her earlier slides said,
- [00:29:51.840]you know, these are opinions embedded in code, right?
- [00:29:55.150]And so I think you're right to point out that these biases
- [00:29:58.680]came from people right?
- [00:30:00.196]That they didn't just fall out of the sky, right?
- [00:30:02.640]And so we need to be attentive
- [00:30:04.270]to that, who are the programmers?
- [00:30:05.770]What are the demographics of the people
- [00:30:07.517]who are writing these codes?
- [00:30:09.700]And how do we change that?
- [00:30:11.320]How do we change, how do we expand that?
- [00:30:14.250]One of the things that the folks at the AI Now Institute
- [00:30:17.260]and other places have been pointing out is precisely
- [00:30:20.470]that disparity in terms of who's controlling
- [00:30:24.860]these algorithms?
- [00:30:25.693]Who's writing them?
- [00:30:27.660]Who's (mumbles)fortunately impacted by them?
- [00:30:29.870]And that we need to address that power imbalance
- [00:30:32.650]within the tech industry as well.
- [00:30:36.050]So I don't know if that answers your question?
- [00:30:37.150]But you're absolutely right,
- [00:30:38.270]that we need to change who's behind the machine
- [00:30:41.910]to be able to be more attentive to these issues.
- [00:30:44.760]In addition, it relates to the data
- [00:30:47.140]that we have, the data that's on hand,
- [00:30:49.680]that's in the databases, right?
- [00:30:51.200]And what's being used to train sort of machine learning,
- [00:30:54.790]or AI, and so the data we have is from certain
- [00:31:01.260]socio economic classes or different parts of the country
- [00:31:03.910]or different parts of the world
- [00:31:05.030]and it's not equitable right?
- [00:31:08.210]Across the globe, and so that also impacts
- [00:31:11.890]sort of the outcomes that we're getting
- [00:31:14.040]from the algorithms in addition to the bias
- [00:31:16.140]maybe perhaps written into them so.
- [00:31:22.478]Yes? (mumbles)
- [00:31:23.980]to that if they're producing
- [00:31:26.930]discriminatory or biased results in case we have only biased
- [00:31:31.022]data to work with, how can we continue to use algorithms
- [00:31:35.767]to handle problems that they don't have the consciousness
- [00:31:41.266]to understand that they cannot understand
- [00:31:44.470]that the data itself is biased
- [00:31:46.908]and there's no way for an algorithm (mumbles)
- [00:31:49.108]to mend that so how can you continue using them for almost
- [00:31:53.052]any societal problem when our data is (mumbles)
- [00:31:58.170]Right well so the question relates to how can we continue
- [00:32:02.260]to use algorithms right?
- [00:32:05.030]If the data that's going into them is potentially biased.
- [00:32:09.400]And I would say that all data are biased, right?
- [00:32:14.030]And I think you're really clear to point
- [00:32:16.240]out that yeah, the algorithm itself, the computer
- [00:32:18.810]itself, can't make those changes necessarily, right?
- [00:32:22.310]We have to look at and be self reflective,
- [00:32:25.350]as a society and as individuals and sort of see
- [00:32:28.330]what those biases might be in the data
- [00:32:30.310]and bring them to light, and certain changes have occurred.
- [00:32:35.020]As Carrie pointed out with some healthcare industries
- [00:32:36.980]and others, and when people become aware of those things,
- [00:32:39.030]you can rewrite the algorithms, so for example
- [00:32:41.750]to take health costs out of an algorithm in that sense.
- [00:32:48.780]Yeah and just to add, I would (mumbles) to guess
- [00:32:51.620]that, you know, some of the ones
- [00:32:53.750]that we've been critical of here today,
- [00:32:55.800]would simply point out that with time
- [00:32:58.120]and with more data input these will improve right?
- [00:33:00.270]And this will ameliorate some of the concerns
- [00:33:02.980]that have been brought to light,
- [00:33:04.670]the challenge though of course is the fact
- [00:33:06.700]that the billions of dollars of investment and the pace
- [00:33:10.150]of this is far outstripping or outpacing our ability
- [00:33:14.640]to be some sort of checks and balance
- [00:33:16.550]about the unintended consequences of these algorithms.
- [00:33:26.588][Deep Voiced Student] Is there a list
- [00:33:27.421]of references that you all cited available?
- [00:33:31.660]We can make them available.
- [00:33:32.790]We did not include a slide
- [00:33:34.050]with them on there but are happy,
- [00:33:35.800]I don't know if there are PowerPoints that then get shared
- [00:33:38.000]as part of CAS Inquire but we could add
- [00:33:41.020]a bibliography slide (laughing)
- [00:33:42.650]that would have all of these
- [00:33:43.680]on it [Deep Voiced Student] Be
- [00:33:44.513]quite beautiful (mumbles) Yeah sure
- [00:33:46.140]no, thanks happy to share that.
- [00:33:51.645]Yeah, I'm really struck
- [00:33:53.147]by sort of tension between (mumbles)
- [00:33:58.412]all presented as totally revolutionary right?
- [00:34:01.155]Partly because (mumbles)
- [00:34:04.546]and the (mumbles) is for, well humans have always done
- [00:34:07.973]that, (mumbles) to each other when new technology
- [00:34:09.821]comes along and (mumbles) some more
- [00:34:12.150]and there's sort of this continuity here rather
- [00:34:13.517]than the total (mumbles) that you have with the good stuff
- [00:34:17.087]so I was curious if you see the discrimination
- [00:34:20.824]and the sort of the negative things as just continuity
- [00:34:23.988]as more of the same for new technology (mumbles)
- [00:34:25.809]do something revolutionary in the negative side as well
- [00:34:30.917]if there's (mumbles)
- [00:34:32.547]This is where we have to put on our archeological
- [00:34:33.838]hats Yeah (chuckling)
- [00:34:36.132]how about this one?
- [00:34:38.407]I mean I wouldn't say that
- [00:34:41.800]it's interesting to make that observation,
- [00:34:44.340]given our different components brought together
- [00:34:47.010]so I appreciate that piece of feedback
- [00:34:48.820]in terms of how these two different sets of observations
- [00:34:52.680]were presented here today
- [00:34:54.950]As an archeologist, I certainly think that these biases
- [00:34:58.730]have always been there and that what has changed,
- [00:35:03.700]in my mind is the scale of their impact right?
- [00:35:09.434]You know, I study a society that is maybe five,
- [00:35:12.680]10,000 people living congregated in a place
- [00:35:15.660]in North Central New Mexico, right?
- [00:35:17.910]That's a very different scale than the impact
- [00:35:20.600]of an algorithm that goes wrong upon 10s of millions,
- [00:35:24.130]if not billions of people.
- [00:35:26.290](coughing)
- [00:35:33.546](mumbling) (chuckling)
- [00:35:36.639](mumbles) this is really interesting
- [00:35:38.270]I have a question that juxtaposes my experience
- [00:35:42.250]with my son who's a computer science major
- [00:35:45.286]and he works in a lab and he has done this, he loves ai
- [00:35:49.428]and that's what he wants to do, (mumbles) he spent
- [00:35:53.320]a lot of the break in December and January doing this really
- [00:36:00.157]what I consider just kind of almost archaic (mumbles)
- [00:36:04.773]circling like the fish in the ocean
- [00:36:07.207]because the computer has to be taught to recognize
- [00:36:10.091]that this is a fish in the ocean and it's just so basic
- [00:36:14.332]and fundamental, to juxtapose that which just seems
- [00:36:18.239]really very fundamental, it seems so early
- [00:36:22.357]in the (mumbles) fish (mumbles) intelligence area
- [00:36:24.910]to all the concerned about like Google and Facebook
- [00:36:29.021]and Face Recognition Efforts at China for example
- [00:36:33.970]I mean it seems like the research (mumbles)
- [00:36:36.656]cause he's working in a research lab at the University
- [00:36:39.220]versus kind of this other stuff, there's such a huge
- [00:36:43.983]difference between those two and I get scared
- [00:36:46.530]when I hear about ai as it relates to the commercial side
- [00:36:50.460]in the government (mumbles) versus what I see
- [00:36:53.390]my son doing (chuckling)
- [00:36:55.058](mumbles) and I just wonder if what your thoughts are.
- [00:36:59.830]I mean, there's probably various ways
- [00:37:03.090]to think about that, I mean,
- [00:37:04.530]there is a difference, often with the scale
- [00:37:07.910]of what's happening at a research level, right?
- [00:37:10.410]And what can be funded by large organizations.
- [00:37:14.260]I think part of that relates to when research,
- [00:37:16.800]I think, as scientists, we step back, we reflect
- [00:37:21.784]we often we'll take more care to look at our data,
- [00:37:26.090]to look at our algorithms, to look at what's happening,
- [00:37:28.390]and be clear about what's happening
- [00:37:29.730]rather than sort of the end goal is somewhat
- [00:37:32.900]maybe but maybe not the end goal is different
- [00:37:34.490]but the process is much different
- [00:37:36.320]and then often with larger corporations and organizations,
- [00:37:39.720]there's the profit maybe what they're searching for
- [00:37:42.800]when they're getting, you know, that's what they're trying
- [00:37:45.020]to get and so you often don't have that slow,
- [00:37:47.970]reflective, methodological iterative reflection
- [00:37:51.520]on what you're doing and you kind of throw it out
- [00:37:53.150]there, and then that potentially can be more scary
- [00:37:58.330]and dangerous, right?
- [00:37:59.163]And so what we're doing and it relates to one
- [00:38:00.500]of the other questions in some senses
- [00:38:01.910]is we're letting the public and people
- [00:38:03.930]be our guinea pigs, right?
- [00:38:05.490]Oops that didn't work right.
- [00:38:06.630]So how many people were impacted when we come back
- [00:38:08.880]and go oops let's change that variable?
- [00:38:10.970]So that's I think one side.
- [00:38:14.360]The thing that, your comment made me think
- [00:38:16.020]about is that and I just know a little bit
- [00:38:18.990]about that but Google has gotten in trouble in recent,
- [00:38:23.816]last year so with regard to lower paid contract employees
- [00:38:29.600]versus full time, people with benefits, right?
- [00:38:33.437]And that there's just this huge army of Google
- [00:38:36.540]kind of contractual hourly workers
- [00:38:39.370]and my sense is that what Google has those people doing
- [00:38:42.950]in part on the algorithmic side
- [00:38:45.020]is doing exactly the same kind of training work
- [00:38:47.020]that you're son is working on
- [00:38:49.770]and that, you know, so there's this disparity
- [00:38:52.350]too in terms of the labor and the impact and the outcomes,
- [00:38:56.230]in terms of how this is being approached
- [00:38:58.880]by their staff, employees.
- [00:39:05.772]We had Antia, you had.
- [00:39:08.867]To seek to share
- [00:39:09.830]my skepticism about regulation ever being
- [00:39:15.030]a pathway in the US where people are notoriously
- [00:39:18.550]sort of skeptical about regulations
- [00:39:21.430]for some of the very reasons you've talked about
- [00:39:23.890]but have the watch groups in the US made
- [00:39:27.973]any attempt to sort of go that route or push
- [00:39:31.590]towards that, or is the sort of concern that regulation
- [00:39:35.240]itself will sort of override
- [00:39:38.000]so it's too overpowering for them to even
- [00:39:42.110]it seems to be an area that cost calls
- [00:39:45.430]for some form of at least sort of writing it into law
- [00:39:50.170]that people provide open access and open sources right?
- [00:39:55.620]So then you could look and it would probably be
- [00:39:59.140]a disincentive to people like Google
- [00:40:02.336]if they had to just make everything public
- [00:40:05.625]then it might not be nearly as profitable
- [00:40:08.020]to do all of this so (mumbles)--
- [00:40:10.537]I mean I'm heartened when somebody like Whittaker
- [00:40:14.820]is asked to come to Washington and testify right?
- [00:40:18.670]So I'm heartened by that, an expert
- [00:40:20.940]that's been on the Google commercial side
- [00:40:23.130]and is now on the non-profit advocacy side
- [00:40:26.010]so I find that heartening. I think it's also the case
- [00:40:29.063]that in my observation the place that we're seeing
- [00:40:32.340]regulation come to the floor is more at the state level.
- [00:40:35.320]There was a case in New York City about housing
- [00:40:37.180]discrimination using biometric recognition
- [00:40:40.220]and then there was a pushback and a task force
- [00:40:42.643]and kind of got close to regulation and then didn't
- [00:40:46.290]but I think there are cases about ai regulation making
- [00:40:50.180]it into a state law in Oregon, and in Michigan,
- [00:40:55.560]and of course in reaction to some things
- [00:40:58.760]that went terribly wrong.
- [00:41:03.886]Yeah (mumbles)
- [00:41:06.357][Foreign Female Student] I'm curious
- [00:41:07.190]about what is the (mumbles)
- [00:41:08.777]the conversation that's come about before
- [00:41:11.424]about the fact that diversifying some fields
- [00:41:14.930]of work can help to prevent those biases
- [00:41:18.660]or like change the data that we put into the algorithms
- [00:41:22.692]so they're not biased and I was wondering, how far along
- [00:41:26.825]do you think that we are to diversifying more
- [00:41:30.430]those fields so we have less of those biases?
- [00:41:35.840]That's a great question and I think
- [00:41:39.132]that that would have some potentially positive impacts
- [00:41:44.540]if we were able to sort of bring in wider perspectives
- [00:41:47.040]and more diverse perspectives.
- [00:41:50.730]In terms of I think there's an awareness
- [00:41:53.550]of that you'll see headlines periodically
- [00:41:59.180]saying, you know Apple is looking for not simply
- [00:42:03.950]computer programmers, but computer programmers
- [00:42:06.170]with a cultural awareness, understanding,
- [00:42:08.010]a humanities background, which is great
- [00:42:10.630]that's the first kind of at least there's awareness
- [00:42:13.370]on that level but it comes again, I think back to scale
- [00:42:17.280]in some sense when it comes down to sort of getting
- [00:42:23.200]the profit out and getting things done,
- [00:42:25.740]right, maybe mass producing sort of in the more traditional
- [00:42:28.800]way, but hopefully through time, policies
- [00:42:32.100]and cultural practices and organizational practices
- [00:42:35.630]will change right to be more diversified,
- [00:42:38.820]include more diverse biases through advocacy,
- [00:42:41.910]through regulation, through other kinds of means.
- [00:42:46.990]Yeah I'll just add that I mean, I don't have the numbers
- [00:42:51.686](mumbles) are they're ready to be able
- [00:42:53.910]to give you a clear response
- [00:42:56.050]to how that's changed over time
- [00:42:57.550]and have we reached some sort of point that we're all happy
- [00:43:00.790]with right that we're feeling like people
- [00:43:02.360]are really represented at the highest levels of decision
- [00:43:05.730]making in these contexts
- [00:43:09.190]but I think that's one of the cases
- [00:43:11.500]where people within the general public have an opportunity
- [00:43:14.730]to educate themselves about what do those trends look like,
- [00:43:17.717]demand to have access to those kinds of numbers
- [00:43:21.240]and are the employees, the kind of contractual level
- [00:43:25.090]sort of within the Google model, right?
- [00:43:26.373]That these are hourly employees versus full time paid staff
- [00:43:30.010]like what do those numbers and demographics look like?
- [00:43:33.020]At those two different scales and I think
- [00:43:35.840]too that's where a lot of these outside research institutes
- [00:43:40.180]like AI Now and Ada Lovelace and others,
- [00:43:44.230]have a critical role to play in terms of gathering
- [00:43:47.940]those data, and helping to share
- [00:43:50.610]that with all of us so we know how far we have to go
- [00:43:54.290]and where are the areas that we need
- [00:43:57.955]to concentrate our efforts.
- [00:44:02.293](coughing) Sort of
- [00:44:05.539](mumbles) you think (mumbles)
- [00:44:08.853]kind of come out or do you think it'll be kind of harder
- [00:44:12.784]or would be less difficult in the sense of changing
- [00:44:17.721]those or updating those algorithms due to people
- [00:44:22.915]who create in the sense of like the whole
- [00:44:25.458]racial discrimination (mumbles) like medical,
- [00:44:28.638]hiring like all the different (mumbles)
- [00:44:32.276]would you guys (mumbles) or (mumbles)
- [00:44:39.302]and keeping it current (mumbles)
- [00:44:45.218](mumbles) I'm trying to understand the question.
- [00:44:46.051]So the question is then with more data,
- [00:44:49.145]do we, So
- [00:44:52.352]Yeah sorry, thank you
- [00:44:53.544]Yeah, yeah thanks (mumbles)
- [00:44:56.091]Sorry I (mumbles) right now
- [00:44:58.667]basically (mumbles) algorithms
- [00:45:00.655]in the first place, overtime with data like,
- [00:45:05.221](clearing throat) updates and gets more accurate
- [00:45:08.275]per se, do you (mumbles) the algorithms
- [00:45:11.187]like in what they teach in the (mumbles) companies
- [00:45:14.043]do you think they'll be easier to regulate
- [00:45:16.826]or necessarily take those biases
- [00:45:19.417]away towards racial profiling towards (mumbles)
- [00:45:22.803]what the medical practices and everything?
- [00:45:25.114]Do you think it'll be easier over time
- [00:45:26.543]or it might be more difficult to do this
- [00:45:29.242]you know, practice it if they keep teaching
- [00:45:31.023]them the traditional (mumbles)
- [00:45:32.769]or making it better (mumbles)?
- [00:45:37.403]I think there's this strong potential with more data
- [00:45:40.850]and more inclusivity right?
- [00:45:44.030]Including more data from more diverse parts of the world
- [00:45:48.740]and diverse populations that potentially
- [00:45:54.140]some of the algorithms may, on one hand, work differently
- [00:45:58.050]and of course have different outputs right?
- [00:46:01.474]With more diverse data. On the other side, yes,
- [00:46:03.700]so as you're saying the algorithms
- [00:46:05.340]themselves potentially need to be changed
- [00:46:09.730]because of the variables that are being used
- [00:46:09.730]and so I think that definitely, I would be more positive
- [00:46:13.840]than negative in the sense and I think definitely
- [00:46:16.810]the potential exists.
- [00:46:18.350]The question is where you're at in the world, potentially,
- [00:46:21.760]how, it might be used differentially in different countries
- [00:46:24.010]and based on different regulations,
- [00:46:27.500]social political systems, ideologies, et cetera right?
- [00:46:30.820]How that may influence the way algorithms are controlled
- [00:46:34.970]and who is operating them and to what extent
- [00:46:38.150]they have a large impact or not.
- [00:46:41.730]Heather wore yellow cause she's a more positive
- [00:46:43.850]person than me (chuckling)
- [00:46:45.430]and I wore black, so we color coded
- [00:46:47.990]ourselves, I guess my concern,
- [00:46:50.520]I'd like to be optimistic that yes, over time,
- [00:46:52.930]these things will improve and perhaps they will.
- [00:46:55.950]The challenge though, of course,
- [00:46:57.230]is that we're not allowed to know how these decisions
- [00:47:01.680]are made, you know, what the black box is doing.
- [00:47:04.691]Then it's hard to know years hence, is it better?
- [00:47:08.930]Is it worse right?
- [00:47:10.430]And some of the cases that we profiled here today,
- [00:47:13.750]were highlighted only because people, you know, gathered
- [00:47:17.430]the original data and actually challenged
- [00:47:21.930]is this doing what people say it's doing?
- [00:47:24.720]If you go to the HireVue hiring page
- [00:47:27.030]and listen to the pitch
- [00:47:29.080]about what that assessment tool will do in terms of hiring
- [00:47:33.840]practices, it's gonna move heaven and earth right?
- [00:47:37.820]It's gonna equalize everyone,
- [00:47:40.120]you're gonna interview more people,
- [00:47:41.210]you're gonna diversify your workforce
- [00:47:42.970]and so on, how do we know? (chuckling)
- [00:47:46.790]How do we hold them accountable
- [00:47:49.210]that's actually what it's doing?
- [00:47:50.900]As opposed to simply creating,
- [00:47:52.860]as I said, an automated form of discrimination
- [00:47:56.230]in terms of who's getting hired
- [00:47:57.440]and who has the higher employability score.
- [00:48:04.494]Yeah
- [00:48:05.327](mumbles)
- [00:48:06.560]Sure (chuckling)
- [00:48:07.462](mumbles) (chuckling)
- [00:48:10.004](clearing throat) What is this litigation over
- [00:48:11.715]this topic (mumbles), I'm thinking about the (mumbles)
- [00:48:15.762]to play out the couple of ways to finding
- [00:48:17.229]genuine material when I wanna use
- [00:48:18.605]that (mumbles) in a corporation
- [00:48:22.204]and a professional association (mumbles) building
- [00:48:25.184]with it more liable to (mumbles)
- [00:48:28.858]so this is more like the development of these tools
- [00:48:31.020]is more than halfway to an (mumbles)
- [00:48:32.590]back in (mumbles) like medicine
- [00:48:34.804]or something like that may be an analogy
- [00:48:37.100]and so is this gonna be saddled with like malpractice
- [00:48:39.325]like lawsuits?
- [00:48:40.419]Or you see litigation going along that front
- [00:48:43.582]or is it never gonna get to litigation?
- [00:48:46.789]Yeah, I think it's already being litigated,
- [00:48:48.420]and I think that's where we're seeing, as in Taantia's question,
- [00:48:50.570]the pressure for greater regulation or task force oversight,
- [00:48:53.970]because in some of these high-profile cases
- [00:48:56.730]you have, like, the government themselves, the City Council,
- [00:49:00.910]or whomever throwing up their hands and saying,
- [00:49:03.323]"Well, we didn't know, we didn't understand
- [00:49:05.300]how this was gonna work, you can't hold us responsible
- [00:49:07.470]because we were ignorant" (chuckling) right?
- [00:49:09.700]Blame them, blame the third-party vendor, right?
- [00:49:12.310]And so I do think that that's part of the reason
- [00:49:15.430]why the folks, for instance at AI Now,
- [00:49:18.330]are positioning these thought pieces
- [00:49:20.620]and co-authoring with legal scholars
- [00:49:23.870]in places like the Columbia Law Review
- [00:49:26.420]to get that message out.
- [00:49:32.983]As a fellow archeologist,
- [00:49:34.370]I have a very general question,
- [00:49:35.895](chuckling)
- [00:49:36.728]We are very interested in political systems and (mumbles)
- [00:49:41.022]that initially we thought (mumbles) all of these algorithms
- [00:49:44.928](mumbles) equalizer as you said but then we see
- [00:49:48.960]there is a (mumbles) that couldn't leave (mumbles)
- [00:49:57.969]So I just wonder what are your thoughts
- [00:50:00.136]whether (mumbles)
- [00:50:08.383](chuckling)
- [00:50:10.120]That's a big question.
- [00:50:11.550](chuckling) Yeah it is a big question.
- [00:50:14.850]Firstly, I don't think I have any heavy observations
- [00:50:18.250]about that. I guess, in the same spirit
- [00:50:20.690]as we mentioned a few minutes ago,
- [00:50:23.120]I'm heartened that there are really smart people
- [00:50:28.530]who are holding governments accountable
- [00:50:31.670]and are pushing as hard as they can to try
- [00:50:37.480]to hold the political systems,
- [00:50:40.180]the powers that be, accountable if and when they choose
- [00:50:43.880]to use algorithmic systems to automate governance.
- [00:50:49.610]And so I guess I'm hopeful, but I don't know.
- [00:50:54.080]I don't know if your question is leading
- [00:50:55.190]towards, like, do we think there's a revolution
- [00:50:56.730]coming? (chuckling)
- [00:50:58.610]I don't know, but I guess I am taking solace from the fact
- [00:51:03.411]that there are a lot of smart people
- [00:51:04.943]who are working very hard
- [00:51:06.280]to hold these governments accountable.
- [00:51:11.180]I mean, yeah, just along those lines,
- [00:51:13.032]I just think it's important to think about these things:
- [00:51:18.210]yes, there's a global scale
- [00:51:20.380]to it, but there are also local scales
- [00:51:23.220]in various countries throughout the world
- [00:51:29.090]that are not as big players in this, right?
- [00:51:32.310]And they already have different sociopolitical systems,
- [00:51:35.840]and so I think when we think
- [00:51:37.740]about the impacts of algorithms at the political level,
- [00:51:41.500]we have to sort of delve a little deeper
- [00:51:43.730]into the more local scale to think
- [00:51:45.180]about how that impact may play out in different places.
- [00:51:54.474]Yeah? Jim?
- [00:51:56.100]To what extent, from your perspective,
- [00:52:00.666]is AI limited by definition in what it could do in (mumbles)
- [00:52:06.840]AI really is a product of binary logic,
- [00:52:11.994]and to what extent are you limited to applying binary
- [00:52:16.250]logic (mumbles)
- [00:52:18.994]Is there a limitation
- [00:52:21.275]that defines the maximum use,
- [00:52:29.380]the maximum application of AI,
- [00:52:33.770]as opposed to an assumption that (mumbles) out there that AI
- [00:52:37.753]can form us an (mumbles) on a scale
- [00:52:41.197]much wider than the (mumbles)?
- [00:52:46.058]That's a great, it's a great question.
- [00:52:47.900]I mean, I would say, just as a more personal response,
- [00:52:51.940]yeah, I think you're limited (chuckling) right?
- [00:52:53.720]I mean, I think you know, humans are not zeros and ones.
- [00:52:56.560]We're not binary, and that sort of proves the underlying theme
- [00:53:02.310]of part of this talk, right?
- [00:53:03.880]That humans, in the beginning, are the ones
- [00:53:07.680]programming and creating these, and so that's always going
- [00:53:11.320]to be sort of the underlying basis of it. Now, that may morph
- [00:53:16.020]in many different ways depending on how algorithms
- [00:53:20.350]are rewritten based off of training sets,
- [00:53:23.170]but I think
- [00:53:26.560]that they're never gonna fundamentally,
- [00:53:28.180]if I'm getting your question right,
- [00:53:30.680]either, you know, replace humans or become the same
- [00:53:33.750]as human decision makers-- (mumbles)
- [00:53:37.112]More restricted to the question that AI,
- [00:53:40.762]the assumption is AI could be function (mumbles)
- [00:53:44.600]so unless limited to (mumbles)
- [00:53:48.661]what areas can AI truly beat function (mumbles)?
- [00:53:52.685]And what are the limitations
- [00:53:55.399]because of its binary structure, to what extent
- [00:54:00.887]can it (mumbles) regardless of how much it continues
- [00:54:04.682]to develop (mumbles)
- [00:54:07.150]I don't know that we know the answer to that yet,
- [00:54:09.670]because it's still relatively early days
- [00:54:11.800]in terms of the broad-scale application
- [00:54:14.750]of these algorithmic systems in a whole host
- [00:54:18.200]of areas of life. You know,
- [00:54:24.110]a couple of things come to mind
- [00:54:26.030]in response to your question.
- [00:54:27.050]I guess there was, like, a 1950s dystopian novel,
- [00:54:31.590]you guys can correct me, I'm sure there are people
- [00:54:33.620]in the audience who are more astute
- [00:54:35.330]about this than I am, by Alan Turing, who talks about
- [00:54:38.500]how with computer systems it's only a matter of time
- [00:54:42.300]before they are our overlords,
- [00:54:45.560]and I do think, based on the work, you know,
- [00:54:48.600]the research that we have done, that there is an expectation
- [00:54:55.090]out there that automation, whether it's decision making
- [00:54:58.500]in governments or otherwise, is only gonna continue
- [00:55:01.610]apace as more and more investment goes
- [00:55:05.490]into these. And the other thing
- [00:55:07.730]I'll just say in response to your question, Jim,
- [00:55:09.810]is there's something else that comes to mind:
- [00:55:13.240]you know, in developing these algorithms,
- [00:55:16.670]we as humans define objectives,
- [00:55:19.060]and it's been the observation of computer scientists
- [00:55:21.830]that we are terrible (chuckling)
- [00:55:24.010]at defining those objectives, and so we have an ill-defined
- [00:55:27.200]objective that we feed into a computer system,
- [00:55:30.570]and then it kind of spirals out, right?
- [00:55:34.050]And that's where this kind of enchanted determinism comes in:
- [00:55:36.690]no one's quite sure what's happening under the hood
- [00:55:39.040]as more and more training data are going
- [00:55:41.480]in and the algorithm is being refined over time
- [00:55:44.500]by the system in ways that we don't quite understand.
- [00:55:48.180]So I guess that's my response to your question
- [00:55:51.293]about what the limits are: I don't know
- [00:55:53.082]that we know yet.
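To make the ill-defined-objective point concrete, here is a toy, made-up sketch: a system told to maximize a measurable proxy (say, clicks) will dutifully do so even when that proxy has drifted away from what the designers actually wanted (say, usefulness). Nothing here refers to any real product; the items and scores are invented.

```python
# Toy illustration of objective misspecification: optimizing the objective we
# wrote down (clicks) rather than the outcome we meant (usefulness).
# All items and numbers are invented.

items = [
    # (name, expected_clicks, usefulness_to_reader)
    ("sensational headline", 0.90, 0.20),
    ("careful explainer",    0.40, 0.90),
    ("clickbait quiz",       0.85, 0.10),
    ("local reporting",      0.50, 0.80),
]

# The objective we wrote down: rank by predicted clicks.
ranked_by_proxy = sorted(items, key=lambda item: item[1], reverse=True)

# The objective we actually meant: rank by usefulness.
ranked_by_intent = sorted(items, key=lambda item: item[2], reverse=True)

print("Top picks under the stated objective:  ",
      [name for name, _, _ in ranked_by_proxy[:2]])
print("Top picks under the intended objective:",
      [name for name, _, _ in ranked_by_intent[:2]])
# The system is "succeeding" by its own metric while missing the point.
```

The gap only widens once the system starts refining itself on feedback generated by its own choices, which is part of what makes the behavior hard to inspect after the fact.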
- [00:55:55.756]Previously (mumbles)
- [00:55:58.401]described (mumbles) is coming
- [00:56:01.481]and I think (mumbles) is no longer binary
- [00:56:06.496](mumbles) (chuckling)
- [00:56:08.736]No longer limited by
- [00:56:11.212]Should we wrap? Oh, Priscilla.
- [00:56:13.691]I'm wondering if you have some advice
- [00:56:16.506]for us as we are living in this age of social media;
- [00:56:23.184]it's probably not an option to go off the grid
- [00:56:25.984](chuckling)
- [00:56:27.258]But-- Though tempting.
- [00:56:28.140]In terms of practices, are there things,
- [00:56:30.315]from your perspective, that we should or should not be doing
- [00:56:34.714]in order to expose us more or less
- [00:56:38.203]to the kind of (mumbles)
- [00:56:40.157]that might happen, or are we so hopeless
- [00:56:44.338](mumbles) down the road, and there's so much data
- [00:56:47.241]out there that it doesn't matter
- [00:56:49.216]how we (mumbles)?
- [00:56:52.720]You're wearing the yellow
- [00:56:53.553]sc-- (laughing)
- [00:56:54.463]I guess we (mumbles) the positive side here, well,
- [00:56:59.170]I do think that there is a lot more data
- [00:57:02.420]about all of us than we realize, right?
- [00:57:04.350]Out there, being captured in many different ways.
- [00:57:08.940]I think that we shouldn't completely just throw up our hands,
- [00:57:14.980]throw in the towel,
- [00:57:18.100]but I think that we can...
- [00:57:24.180]Yeah, I'm trying to get a positive spin
- [00:57:26.440]here on this one (chuckling)
- [00:57:28.900]I think that, if I might interject,
- [00:57:30.440]Sure. The
- [00:57:34.740]positive side is what we all do, right?
- [00:57:37.090]It's education, in terms of trying to understand
- [00:57:39.803]how these technologies are impacting our lives,
- [00:57:43.040]what are the ways in which these are bubbling
- [00:57:45.450]up to the surface, whether it's in health care,
- [00:57:47.540]whether it's in city, state, federal governance, right?
- [00:57:50.940]So to educate ourselves about how those things are happening
- [00:57:53.830]and what some of those pitfalls are, I think
- [00:57:57.340]is really the first step.
- [00:58:00.380]Like you, I don't know that there's any way to really stem
- [00:58:03.310]the tide in terms of what's coming
- [00:58:06.830]and what's already being utilized.
- [00:58:09.640]So I think the key will simply be education and advocacy
- [00:58:14.310]for the issues that we care about.
- [00:58:18.657](mumbles)
- [00:58:20.801]Look into Taylor, Taylor (chuckling)
- [00:58:24.802]Do you wanna take one more
- [00:58:26.123]or-- Sure
- [00:58:31.630]With people starting to learn
- [00:58:35.199]coding and algorithms at an earlier age, you know,
- [00:58:39.595]a couple of years ago (mumbles)
- [00:58:42.131]college (mumbles)
- [00:58:46.790]you know, there are coding books for children
- [00:58:49.790](mumbles) business, what do you think
- [00:58:52.941]is probably the most realistic,
- [00:58:56.047]I guess, progression of AI (mumbles)
- [00:59:04.985](mumbles) The question is either
- [00:59:06.137]how far will it go?
- [00:59:08.370]Is that it, or is there another part of the question,
- [00:59:10.390]given that--
- [00:59:13.058]Progression in the next several decades
- [00:59:16.046]or so, with the (mumbles) more insight
- [00:59:19.682]about programming and algorithms (mumbles)
- [00:59:23.851]I guess, with more people able
- [00:59:25.924]to program these, how far do you think
- [00:59:31.345]it will likely progress in the next (mumbles)
- [00:59:36.480]That's hard to answer,
- [00:59:37.540]but I think that if you're thinking about the use,
- [00:59:39.660]and sort of being engaged at earlier ages in these processes,
- [00:59:43.810]I think it sort of reflects back on Carrie's point
- [00:59:46.120]about education. And so, I don't know
- [00:59:48.730]about how far they're gonna go,
- [00:59:51.280]but what can they do, how can we tailor
- [00:59:54.950]them in a more positive kind of way, right?
- [00:59:57.410]And so it's the what they can do and where they can go.
- [01:00:01.460]Hopefully, we as an ethical society,
- [01:00:04.430]and as people, can sort of control that a little bit
- [01:00:06.710]through education and awareness in some way.
- [01:00:11.156]In terms of where they're gonna go, you know,
- [01:00:14.035]that is a hard question (chuckling)
- [01:00:15.956]And I know I'm heartened that there are more and more
- [01:00:18.860]young people who are growing up
- [01:00:21.100]understanding and being more sophisticated
- [01:00:23.100]consumers of information, and understanding the technologies
- [01:00:25.930]that underlie so many of these things we're interacting
- [01:00:28.930]with on a daily basis. But like Heather,
- [01:00:31.100]I don't know that I would prognosticate
- [01:00:35.070]about where things might end up,
- [01:00:37.620]but you're the future (chuckling)
- [01:00:43.412]Thank you so much for coming out today
- [01:00:44.938]and we hope to see you all at the end of March
- [01:00:47.413]for a panel discussion
- [01:00:48.970](mumbles) (clapping)