Yes, You Have a Critical Role in Raising Artificial Intelligence.
Jana Eggers is a math and computer nerd who took the business path for a career. She's CEO of Nara Logics and active in customer-inspired innovation, the artificial intelligence (AI) industry, and Autonomy/Mastery/Purpose-style leadership. Her passions are working with teams to define and deliver products customers love, algorithms and their intelligence, and inspiring teams to do more than they thought possible. In her talk, she will address the ways that AI is already present in our lives, helping us understand what artificial intelligence is, where it's heading, and why we should embrace it.
[00:00:03.370]EMCEE: Welcome to
the 30th anniversary year
[00:00:05.071]of the E.N. Thompson
Forum on World Issues.
[00:00:09.609]VOICEOVER: Today, you are part
of an important conversation
[00:00:12.045]about our shared future.
[00:00:13.780]The E.N. Thompson
Forum on World Issues
[00:00:16.783]explores a diversity
[00:00:18.651]of international and
public policy issues,
[00:00:21.488]to promote understanding,
[00:00:23.022]and encourage debate
across the university,
[00:00:25.425]and the State of Nebraska.
[00:00:27.427]Since its inception in 1988,
[00:00:32.631]speakers have challenged and inspired us,
[00:00:34.634]making this forum one of the
preeminent speaker series
[00:00:39.105]in higher education.
[00:00:42.175]It all started when
E.N. "Jack" Thompson
[00:00:44.978]imagined a forum
on global issues
[00:00:47.680]that would increase
[00:00:50.183]understanding of cultures and events
from around the world.
[00:00:53.420]Jack's perspective was
influenced by his travels,
[00:00:56.656]his role in helping to
found the United Nations,
[00:00:59.592]and his work
[00:01:00.460]at the Carnegie Endowment
for International Peace.
[00:01:04.263]As President of the Cooper
Foundation in Lincoln,
[00:01:07.400]Jack pledged substantial
funding to the forum,
[00:01:10.603]and the University of Nebraska
[00:01:12.705]and Lied Center
for Performing Arts
[00:01:14.541]agreed to co-sponsor.
[00:01:17.177]Later, Jack and his wife, Katie,
[00:01:19.345]created the Thompson Family Fund
[00:01:21.714]to support the forum
and all their programs.
[00:01:25.018]Today, major support is provided
by the Cooper Foundation,
[00:01:30.623]Lied Center for Performing Arts,
[00:01:32.459]and the University of Nebraska.
[00:01:35.395]We hope this talk sparks an
exciting conversation among you.
[00:01:42.202]And now, on with the show.
[00:01:47.207]MIKE ZELENY: I'm Mike Zeleny
with the university,
[00:01:49.242]and I'm pleased to welcome you
to this E.N. Thompson Forum
[00:01:51.678]on World Issues.
[00:01:52.946]As you know, for 30 years,
[00:01:54.614]the university, now
150 years young,
[00:01:56.950]and Cooper Foundation,
[00:01:57.917]have partnered with the Lied
Center for Performing Arts,
[00:01:59.752]to make this forum possible.
Tonight's forum features Jana Eggers.
[00:02:04.123]Jana is the CEO of Nara Logics,
[00:02:06.626]a neuroscience-based artificial
intelligence company.
[00:02:12.632]She received her degree
[00:02:14.467]in mathematics and computer
science at Hendrix College,
[00:02:18.338]followed by graduate degrees,
[00:02:22.041]and then performed research
[00:02:24.043]at Los Alamos National Lab.
[00:02:26.546]Jana has held technology
and leadership positions
at Intuit, Blackbaud,
Lycos, American Airlines,
and other startups.
[00:02:35.822]She has more than 25
years' experience in areas including technology
[00:02:43.563]and organizational leadership.
[00:02:45.365]Jana is a frequent
speaker, writer, and mentor
[00:02:48.368]on AI and startups.
[00:02:50.537]She is also a marathon
runner and an Ironman triathlete.
[00:02:53.106]Tonight she will address what
artificial intelligence is,
[00:02:56.109]where it is present
in our daily lives,
[00:02:58.144]and why we should embrace it.
[00:02:59.979]Following her remarks,
[00:03:01.247]you'll have an opportunity
[00:03:02.715]to ask Jana questions
[00:03:04.384]using the hashtag
[00:03:07.854]Also, ushers will
be in the aisles
to collect your questions
[00:03:10.723]and bring them to the stage.
[00:03:12.625]The title of tonight's talk is,
[00:03:14.761]yes, you have a critical role
[00:03:16.362]in raising artificial intelligence.
[00:03:18.464]Now, please join me in a
warm, Nebraska welcome,
[00:03:21.067]for Jana Eggers.
JANA EGGERS: Thanks a lot.
[00:03:34.714]I wanna thank you
all for coming out.
[00:03:36.115]It's a little bit cold outside.
[00:03:38.651]And there's a bit of
snow on the roads.
[00:03:40.954]When I first moved
to the Boston area,
[00:03:43.523]I was told, "Oh,
you're from Arkansas,
[00:03:46.559]"you have no idea how
to drive in the snow!"
[00:03:49.362]And then the first snow came,
[00:03:51.264]and I was like, "You
people have no idea
[00:03:52.799]"how to drive in the snow,
[00:03:54.133]"'cause your roads are clear.
[00:03:55.602]"Mine look like that!"
[00:03:56.970]So thank you for welcoming me.
[00:04:00.139]And I feel like I'm at home
in Arkansas because of that,
[00:04:04.377]and it didn't bother me a bit,
[00:04:05.678]thought it was lovely.
[00:04:07.113]So I appreciate being
here in Lincoln,
[00:04:08.848]and excited to talk to you
about helping raise AI.
[00:04:13.720]As Mike mentioned,
how do we raise AI?
[00:04:19.692]And a lot of you are wondering,
[00:04:21.427]well, what part am I
gonna play in that?
[00:04:24.130]And so hopefully this talk
[00:04:26.165]will give you some ideas of
how you can approach that,
[00:04:29.535]even if you're not a
mathematician or a computer scientist.
[00:04:33.239]How you can think about AI,
[00:04:35.475]how you can think about
the news when you hear it,
[00:04:37.610]the hype that you hear,
[00:04:38.845]and how you can think
about it critically.
[00:04:40.680]And, also, how you can
talk to people and help us,
[00:04:45.385]because as this says, raising
AI is gonna take a community.
[00:04:52.792]When you think about AI,
[00:04:54.193]this is what most people
think about, right?
[00:04:57.263]It's what most of my friends
that aren't in the field
[00:05:00.566]are saying, "Oh, so you're
doing the Terminator?
[00:05:03.670]"What about Skynet?"
[00:05:06.639]That's what I get
most of the time.
[00:05:08.775]But in reality,
[00:05:10.877]what I really do is nothing like that.
[00:05:13.913]It's cobbling pieces together.
[00:05:17.583]We're not quite
sure what they are
[00:05:19.952]or how they're gonna look.
[00:05:21.554]It's figuring out how
do these pieces work.
[00:05:24.957]And the bigger point
is, it's in its infancy.
[00:05:28.695]So it's very young,
AI is very young,
[00:05:32.398]and we haven't quite figured out
[00:05:34.067]how all these pieces
are gonna work together,
[00:05:36.602]but we're working on it.
[00:05:38.971]What's really important for
you to know despite that,
[00:05:43.042]is kids grow up really fast.
[00:05:45.511]You know this.
[00:05:47.113]You're gone for a month,
[00:05:48.948]and then you come back
and see your child,
[00:05:50.450]and you can't believe how
much they've progressed.
[00:05:53.186]This is what happens all
the time with us with AI.
[00:05:57.390]This is from Wait But Why,
[00:05:58.891]which is an awesome site,
[00:06:00.093]if you wanna know what's
going on with something.
[00:06:02.662]And I think they really manage
to show this appropriately,
[00:06:07.800]like, haha, AI, it's not
that smart, I can fool it.
[00:06:11.938]You've all played tricks
with Siri and Alexa,
[00:06:15.174]and gotten them to
not understand you,
[00:06:17.577]and it doesn't take very long;
[00:06:19.145]heck, it doesn't even take you
very long to confuse Google,
[00:06:21.981]who's one of the powerhouses
in AI with its search.
[00:06:27.754]But it is progressing rapidly,
[00:06:30.656]and many pieces are coming together,
[00:06:33.659]and all of a sudden
we're gonna be like,
[00:06:35.762]"Whoa, wait a second!
[00:06:38.731]"This has gotten out of hand."
[00:06:41.234]And it's only when we get
to that point in the curve
[00:06:44.103]that we start getting scared.
[00:06:48.174]And what I'd like to do
[00:06:49.542]is make sure that you guys
are ready for that curve,
[00:06:53.079]that you're thinking about it,
[00:06:54.247]that you're engaging early,
[00:06:55.782]in this early stage when we
can still make fun of it.
[00:06:58.885]Before we get to the point
[00:07:01.320]where it's going so fast that
we can't get out ahead of it
[00:07:05.658]and work with it more.
[00:07:08.961]People ask all the time when
this is going to happen.
[00:07:15.935]If you look,
[00:07:16.736]I try and kinda keep up with it,
[00:07:17.904]and I look at all the experts
[00:07:19.839]and where they are on this.
[00:07:22.542]And what usually happens,
[00:07:25.978]I'd say that it
varies, and right now it's
[00:07:28.381]running about 2060
[00:07:31.851]for when we may get there.
[00:07:35.988]And you have plenty of people
[00:07:37.323]that'll say it's
gonna be in 10 years,
[00:07:38.691]and you have other people
[00:07:40.193]that'll say it's
never gonna happen.
[00:07:42.395]I'm a little bit of a skeptic;
[00:07:44.630]I don't think it's
gonna be 2060.
[00:07:47.967]I don't think that
it'll be our grandkids,
[00:07:49.936]or their grandkids, honestly.
[00:07:52.338]There's a lot more complexity.
[00:07:54.640]We don't understand much
about how our own brain works,
[00:07:57.777]so when we say we're going to match it,
[00:07:59.445]I think we have to understand
our own intelligence more
[00:08:02.481]before we can actually
say that's where it is.
[00:08:04.517]That doesn't mean that there
won't be smart things around,
[00:08:07.687]particularly in narrow fields.
[00:08:10.289]So I think those are some
important things to understand
[00:08:13.826]as we think about what might
happen with super-intelligence.
[00:08:18.898]But there is something
important, which is,
[00:08:22.134]kids do a lot of scary things
[00:08:24.103]before they're very old.
[00:08:27.607]Has your kid ever
scared you (chuckling)?
[00:08:32.378]So some of the scary things
that people see today,
[00:08:34.881]you've probably seen
things about deep fakes,
[00:08:38.217]where they can take
something that Obama said,
[00:08:41.419]and actually have
Putin saying it.
[00:08:44.991]And it's very scary that
what we believe is real,
[00:08:50.029]what we can see and hear
spoken actually isn't real.
[00:08:55.501]And so that's a scary
thing that AI can do now,
[00:08:59.238]and this is pretty easy,
[00:09:00.306]this is pretty widely available.
[00:09:01.941]So it doesn't take much expertise.
[00:09:05.011]Many of you as students
here could do this,
[00:09:08.147]probably in a night
[00:09:11.183]with the technology
that's available now.
[00:09:14.754]You've probably heard things
like this about the bias.
[00:09:18.891]This was a big one that came
out at the end of last year,
[00:09:24.330]where Amazon had been
using a recruiting tool
[00:09:27.166]for a few years,
[00:09:28.634]that was clearly
biased against women.
[00:09:31.604]No one trained it that way,
[00:09:33.339]no one said, "Hey,
ditch the women,
[00:09:35.708]"'cause the men are
more awesome at this."
[00:09:37.777]But it learned from
how they hired before,
[00:09:40.212]so it learned from its teachers,
[00:09:42.281]and its teachers were biased.
[00:09:46.218]So these are some of
the things that happen.
[00:09:50.823]And this is probably one
of the biggest things,
[00:09:53.759]and I thought about this when
John Battelle from NewCo
[00:09:57.396]wrote, how does Amazon lose?
[00:09:59.765]And his actual point was,
[00:10:01.567]they actually have
a big, big risk
[00:10:04.003]in all of their recommendations.
[00:10:07.640]So has anyone here
ever been frustrated
[00:10:10.843]by the recommendations
that Amazon's given them?
[00:10:15.748]I hear some laughter, so
I think a few people have.
[00:10:18.451]You get the stories of,
[00:10:23.556]why does it continue
recommending things to me
[00:10:25.658]that I just bought.
[00:10:27.059]Why doesn't it know that?
[00:10:28.761]One person on Twitter wrote,
[00:10:33.032]hey Amazon, I bought a toilet
seat earlier this year,
[00:10:38.537]and I just want you to know
I'm not a toilet seat addict.
[00:10:40.840](audience laughing heartily)
[00:10:43.776]I bought one toilet seat,
[00:10:45.378]I'll probably need
another one in 10 years,
[00:10:47.947]but you're constantly
showing them to me.
[00:10:50.016]And we all laugh at that.
[00:10:52.451]The first reply after that was,
[00:10:54.854]the same thing happened with
my mother's funeral urn.
[00:10:59.792]It's funny, until you
get something like that,
[00:11:02.428]and we all go, "Oh."
[00:11:04.530]Amazon has been somewhat public,
[00:11:09.568]they're not completely
public about this,
[00:11:11.704]and this is not against Amazon,
[00:11:13.272]it's just understanding
that even the people
[00:11:16.142]that are the best at this,
have some major limitations.
[00:11:20.946]And Amazon has said publicly
[00:11:24.583]that they have more
like tens of algorithms
[00:11:27.720]that are doing this.
[00:11:29.221]I think that's one of the things
[00:11:30.289]that we're gonna have
to learn as a community,
[00:11:31.891]that we're probably gonna
need hundreds of them.
[00:11:34.026]Ones that can take into account
[00:11:36.028]a lot more contextualization of,
[00:11:38.864]there's probably certain
types of products
[00:11:43.069]that we don't want
to recommend more.
[00:11:45.805]There's different things we
need to take into account,
[00:11:48.874]and one algorithm, one
model, isn't going to work.
[00:11:53.212]But then it comes down
to how many you manage,
[00:11:55.181]and that's one of
the things he said,
[00:11:56.649]they're like, "Wow, we're
managing tens of algorithms!"
[00:11:58.851]And I'm like,
[00:12:00.186]"Yeah, we're probably
gonna need to be managing
[00:12:02.488]"thousands of them.
[00:12:03.856]"That's something that we need
to develop as a technology."
[00:12:06.325]And again, that comes from you.
[00:12:08.928]And I'll give you more
tips on what you can do.
[00:12:12.164]But speaking up
about this stuff.
[00:12:13.933]I'm glad that people are
tweeting about and saying,
[00:12:18.404]"Hey look this is a problem."
[00:12:19.772]Because that awareness,
[00:12:21.273]and the fact that
it impacts someone,
[00:12:22.775]like I said, everyone
laughs about toilet seats,
[00:12:24.643]but as soon as you
say the funeral urn,
[00:12:27.279]everybody feels it.
[00:12:30.015]And so those examples really
make a big difference.
[00:12:38.257]Someone tweeted, actually, when
we were talking about this,
[00:12:41.427]my coming to this forum,
[00:12:44.163]and they tweeted
back to me and said,
[00:12:45.631]I hope you're gonna
cover the jobs issue.
[00:12:47.867]So thank you, if you're here,
thank you for doing that,
[00:12:49.869]if you're watching
online as well.
[00:12:52.371]Kids are taking our
jobs, these crazy kids,
cheap labor, right?
[00:13:00.713]What I really like, these
are two very recent headlines
[00:13:04.984]that I just pulled out for this,
[00:13:07.253]is, you know, are they
competing for your job?
[00:13:10.055]And it says, probably, but
don't count yourself out.
[00:13:13.125]And what I love is that
we've evolved to that,
[00:13:15.528]for, I'd say the
last five years,
[00:13:17.830]it's been much more focused on,
[00:13:19.398]oh my God they are
going to take our jobs!
[00:13:22.535]And what are we gonna do?
[00:13:23.502]And, oh, the truck drivers!
[00:13:25.237]And I used to work in logistics,
[00:13:26.972]so I know a lot of
the truck drivers.
[00:13:28.574]But what I really
like is, you know,
[00:13:31.744]here's how we can respond,
[00:13:33.379]don't count yourself out.
[00:13:35.147]That's more the message
that's starting to get there,
[00:13:37.750]and I think that's a more
helpful one for all of us.
[00:13:40.886]So rather than driving the fear,
[00:13:42.254]because actually, as humans,
we don't react well to fear,
[00:13:45.558]it makes us shut down,
[00:13:47.326]we go into our lizard brain,
[00:13:49.061]and all of our reasoning faculties
[00:13:52.198]aren't working in that case.
[00:13:53.799]So I'm glad that the news
[00:13:56.602]is turning more to
this kind of messaging,
[00:13:59.238]and then that's gonna
help all of you engage
[00:14:02.341]and think about,
[00:14:03.142]okay, well how can we
make sure the answer is "probably not"?
[00:14:06.078]One thing I wanna point
out about that is,
[00:14:08.614]and this is a futurist
[00:14:13.219]that was presenting it at P&G,
[00:14:15.454]and I wanted to give
him credit on that,
[00:14:17.289]is he pointed out that in
the history of unemployment,
[00:14:22.328]major technological
shifts that we worried about,
[00:14:25.264]like automated teller machines
taking all the teller jobs,
[00:14:29.368]didn't cause it;
[00:14:32.071]you know we have three times
more people to employ now,
[00:14:35.207]and we haven't really seen unemployment rise,
[00:14:41.847]even with that many more people.
[00:14:43.649]So I'm not saying
I know for sure,
[00:14:45.217]well gosh, look at the history,
[00:14:46.485]it's obvious that that's
what's gonna happen;
[00:14:49.121]that's not my point.
[00:14:50.623]My point is more, we don't
know what's going to happen,
[00:14:54.360]so don't assume that we know,
that it's going to go bad.
[00:14:59.932]Let's look at it and say,
[00:15:01.300]"Hey, in the past it hasn't,"
[00:15:02.635]so we have something to work
with and some frameworks.
[00:15:05.337]Well, what did happen
to the tellers?
[00:15:07.773]We can look at things
like that and say,
[00:15:10.009]"How do we do a better job?"
[00:15:11.844]Well, actually, bank employment
went up for the most part.
[00:15:15.714]My dad was a banker.
[00:15:16.949]And they did more
customer service work.
[00:15:19.985]So there's a lot of
things that we can do
[00:15:22.521]to protect ourselves
from massive job loss,
[00:15:26.525]but we still have
to take action.
[00:15:29.528]The other thing that I
wanna say about that is,
[00:15:33.065]we are really not good
at predicting the future.
[00:15:37.036]So I don't know, does
anyone here use a telephone?
[00:15:41.807]I mean, because Western
Union clearly thought
[00:15:44.576]that they weren't gonna be a
good means of communication.
[00:15:47.746]This is, you know, Bill
Gates said the spam problem
[00:15:51.817]would be solved in
a few years, right?
[00:15:55.954]Watson said there'd be no
use for a personal computer.
[00:15:59.425]We are really, really bad,
[00:16:03.562]particularly with tech,
at predicting the future.
[00:16:06.965]So remember these examples
when you go into it,
[00:16:11.303]when people say things like,
[00:16:13.105]"The whole human
race is gonna die."
[00:16:14.873]It's like, oh yeah, well, we've
heard these things before.
[00:16:18.310]But how do you deal with that?
[00:16:21.246]'Cause my message is
not no, it won't happen.
[00:16:24.416]And if any of you
read Good to Great,
[00:16:29.421]you probably know
about the story,
[00:16:32.191]which is Admiral Stockdale,
[00:16:33.325]who was a Vice Presidential
candidate at one point,
[00:16:35.861]he was a prisoner of war.
[00:16:40.833]So let's assume worst
case scenario, right?
[00:16:43.335]The robots take over, and
we're prisoners of war;
[00:16:46.338]that's the worst thing
that could happen.
[00:16:48.874]How did he survive
being a prisoner of war?
[00:16:52.211]He said the
pessimists died first,
[00:16:56.014]because they just
worked themselves into,
[00:16:59.318]this is all gonna end, this
is terrible and it's awful,
[00:17:01.887]and they died first.
The optimists actually died second,
when their optimism broke,
[00:17:09.094]and they died of a broken heart.
[00:17:12.897]It was actually the
people that were realists,
[00:17:16.468]real enough to deal
with the reality
[00:17:18.804]of what's happening today,
[00:17:22.007]while still believing that
they would be rescued.
[00:17:24.943]Now again, I don't wanna say
[00:17:26.444]we're gonna be prisoners
of war (chuckling),
[00:17:28.213]that is not my message at all!
[00:17:30.282]But what's interesting
about this is,
[00:17:32.384]that's also a key to happiness.
[00:17:34.453]And if you look into some
of the work that's been done
[00:17:37.656]in positive psychology,
[00:17:40.325]the people that are happiest,
[00:17:42.961]that are the most productive,
[00:17:45.197]are actually people that
deal with the reality
[00:17:48.333]of where we are today,
[00:17:50.436]and are optimistic
about the future.
[00:17:53.972]So I hope I'm gonna
give you some tools
[00:17:56.442]to deal with the
reality of AI today,
[00:17:59.478]which is a lot of uncertainty,
[00:18:01.280]and we don't know
where it's gonna go,
[00:18:02.915]but be optimistic about
where we can take it,
[00:18:05.584]and optimistic that you
can have a role in it.
[00:18:09.288]So let's talk about
that current status.
[00:18:14.960]I have an analogy for you.
[00:18:16.962]So people often say,
[00:18:20.232]"Oh well, robots are
gonna replace us,"
[00:18:22.835]when they think about AI.
[00:18:25.938]And so it took me a long time,
[00:18:27.973]but I thought a long time about,
[00:18:30.476]well what's a good analogy?
[00:18:31.810]And what I'd say is,
[00:18:32.945]rather than thinking about
[00:18:34.813]replacing a human intelligence,
[00:18:37.249]think about artificial intelligence
[00:18:38.917]just like artificial light.
[00:18:41.086]I think you would all agree
[00:18:42.354]that artificial light
hasn't replaced the sun.
[00:18:46.959]The sun is still very useful
[00:18:48.360]in this traffic
circle during the day,
[00:18:50.963]and the artificial light
doesn't help out very much.
[00:18:54.366]So what has
artificial light done?
[00:18:56.368]Well, for one, it's
allowed us to stand here
[00:18:58.403]on a stage like this, right?
[00:19:01.473]They used to use candles,
that wasn't safe.
[00:19:04.409]It's allowed us to explore areas
[00:19:07.546]that we couldn't explore before.
[00:19:10.649]But, it's also caused problems.
[00:19:13.519]People that work
under artificial light
[00:19:16.488]have higher incidents of cancer.
[00:19:18.991]Think about migration;
[00:19:21.493]I used to live in South Carolina
[00:19:23.095]and we worried about the
sea turtle migration.
[00:19:26.331]So it's impacted wildlife.
[00:19:29.101]So there's a lot of bad impacts
[00:19:30.869]that artificial light has.
[00:19:32.371]So I'd like you to think about,
[00:19:33.605]when you think about
[00:19:35.407]stop saying, "It's like us,"
[00:19:38.243]and think more about,
[00:19:40.245]"Wow, this is a new tool
that may help and augment us,
[00:19:44.149]"but also can open
up new avenues,"
[00:19:47.252]and think about it a
little more that way.
[00:19:49.721]Now, I'm gonna take
this light analogy
[00:19:51.623]a little bit further,
[00:19:53.158]because it's the second question
that I get asked the most,
[00:19:56.161]which is why is
this happening now?
[00:19:59.164]And there is an old
New Yorker cartoon
[00:20:02.534]about a drunk underneath a streetlight,
[00:20:05.437]and he's fumbling around,
and a policeman comes along.
[00:20:09.541]And the policeman says,
"What are you doing?"
[00:20:13.011]And he says, "I'm
looking for my keys."
[00:20:15.414]And the policeman says, "Oh,
did you lose them here?"
[00:20:18.450]And he said, "No, I lost
'em about two blocks away,
[00:20:20.352]"but the light's better here."
[00:20:25.123]So, the mathematicians,
physicists, and computer scientists
[00:20:30.829]like me, are drunk on
data and computer power;
[00:20:35.601]this is what's giving us
the ability to do the AI
[00:20:38.103]that we always dreamed of.
[00:20:40.405]The stuff that I was
working on at Los Alamos
[00:20:42.507]in the early '90's,
[00:20:44.843]is now available to everyone.
[00:20:51.049]It's not locked up in the
way that it was before.
[00:20:54.453]So we're excited, and
[00:20:57.623]we're over-promising and under-delivering,
[00:20:59.625]because we're still
getting our sea legs here
with all that computing
power and all that data,
[00:21:06.398]and we're excited that, wow,
[00:21:08.433]this is actually working
the way we thought!
[00:21:10.802]But we're forgetting
to say it's working
[00:21:12.170]in a very, very narrow way.
[00:21:15.107]So what we need is for more
people to get involved.
[00:21:18.644]This is why I joined Nara:
[00:21:21.513]because they weren't
doing the same thing
[00:21:23.548]that I was doing out at Los
Alamos almost 30 years ago.
[00:21:27.753]They were bringing
neuroscience into it.
[00:21:30.155]Like, not the neuroscience we
knew in the '40s and '50s,
[00:21:33.492]which gave us the coining of the
terms artificial intelligence
[00:21:36.561]and neural net,
[00:21:37.863]but what we actually know now,
which is a lot more.
[00:21:43.235]It's also bringing in
people, product managers,
[00:21:46.004]it's becoming less
about the engineering
[00:21:47.773]and the actual algorithms,
[00:21:50.575]and more about what
are the results;
[00:21:53.178]and I'll talk a bit
more about that.
[00:21:55.947]But we also need ethicists,
[00:21:58.216]we need even lawyers, I love
the lawyers that are out there,
[00:22:01.720]but most people don't
say they need a lawyer,
[00:22:04.523]or want to need one.
[00:22:05.857]We need entrepreneurs.
[00:22:06.992]We also need suits.
[00:22:08.593]We need kids, and
we need grandmas.
[00:22:10.796]You're getting the
picture here, we need you.
[00:22:13.131]We need all of you involved
in AI and thinking about AI,
[00:22:16.468]and challenging thinking on AI,
[00:22:18.603]so that we're not the drunks
under the streetlights,
[00:22:20.706]looking for our keys
that are two blocks away.
[00:22:26.545]So, let me quickly
define AI for you.
[00:22:28.680]I'm gonna give you,
[00:22:30.449]you know the magicians' code
[00:22:31.883]is that you never
reveal your secrets,
[00:22:33.885]I'm gonna reveal
the secrets to you,
[00:22:35.987]which is, the definition
of machine learning.
[00:22:40.859]So AI and ML
[00:22:41.927]are used pretty much
interchangeably these days.
[00:22:44.763]We've kinda gotten over,
like, people would say,
[00:22:46.965]"Oh, that's not AI,
that's machine learning,"
[00:22:50.602]or, "That's not machine
learning, that's AI."
[00:22:54.172]And people use them
interchangeably a bit now.
[00:22:57.109]You'll have some people that
would argue with me on that.
[00:23:01.046]But the bigger point is, both
of them are really about,
learning from data.
[00:23:08.720]So, it's that simple.
[00:23:11.957]How do they do it?
[00:23:13.291]Oh, some common words,
sorry, are, you know,
[00:23:15.727]you'll hear people say
supervised and unsupervised.
[00:23:18.463]Supervised is, you're
basically tagging the data
[00:23:20.966]so the machine
can learn from it,
[00:23:22.567]and then it's just doing
a lot of calculations
[00:23:23.902]about what's important.
[00:23:25.403]Unsupervised is you're
not telling it anything,
[00:23:27.739]and it's trying to
figure it out itself.
[00:23:30.275]And much more of the AI,
or machine learning,
[00:23:34.546]that's done right now
[00:23:36.815]is supervised these days.
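To make that supervised/unsupervised distinction concrete, here is a minimal sketch, not from the talk; the scikit-learn library, the toy iris dataset, and the model choices are illustrative assumptions.

    # Minimal sketch (illustrative, not from the talk): supervised vs. unsupervised learning.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X, y = load_iris(return_X_y=True)   # X = measurements, y = human-provided tags

    # Supervised: the data is tagged (y), and the model calculates what matters
    # for predicting those tags.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("supervised training accuracy:", clf.score(X, y))

    # Unsupervised: no tags are given; the algorithm tries to find structure on its own.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print("unsupervised cluster labels (first 10):", km.labels_[:10])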
[00:23:39.785]The other things are
different types of algorithm,
[00:23:42.020]that again, I'll talk
a little bit more about
[00:23:44.623]a little bit later on.
[00:23:48.193]And then, you know, it's
really the next generation
[00:23:50.829]of things that we've
been doing for a while;
[00:23:54.132]it's just a more
complex form of statistics.
[00:23:57.335]Overall, the biggest point
I want you to know is,
[00:24:00.105]it's just maths, it's different.
[00:24:01.807]And I use maths, plural,
[00:24:02.974]because it really is lots
of different fields of math
[00:24:05.777]that are coming together.
[00:24:11.116]This is why computers
are so good at it.
[00:24:14.219]But remember that a lot
of what we do in our brain isn't computing.
[00:24:18.089]Our brains are actually
bad at computing.
[00:24:20.725]So when I give you that analogy
[00:24:22.327]between natural intelligence
and artificial intelligence,
[00:24:28.200]the artificial side is really just about
that computing part of it.
[00:24:32.304]So keep that in mind.
[00:24:35.941]Which makes you question
things like this.
[00:24:38.710]So this is a reporter,
sorry, I'm forgetting now,
[00:24:41.847]I'm pretty sure he's
with The Economist.
[00:24:43.915]And he said, "I'm
sick of these videos."
[00:24:45.817]And this was a video
by Boston Dynamics,
[00:24:47.719]about the robot that
could do parkour.
[00:24:51.056]You may have also seen the dog
robot that could open doors,
[00:24:56.328]You know, you see
all of those things.
[00:24:58.697]And so his point was,
[00:24:59.931]I'm really sick of
seeing these videos,
[00:25:01.366]because they're garbage.
[00:25:03.802]The robot can't
actually do parkour.
[00:25:08.740]And his point is,
they're highly scripted.
[00:25:11.243]This isn't AI.
[00:25:13.278]And I totally believe it.
By the way, the Boston Dynamics
people will tell you that.
[00:25:16.615]Are there aspects
of it that are AI?
[00:25:19.017]Can that robot see that
there's something there?
[00:25:23.021]But can it actually judge
how it's supposed to jump up,
[00:25:26.091]how it should land?
[00:25:27.492]No, that's all scripted.
[00:25:29.294]There's a lot of cool
stuff there, by the way.
[00:25:32.030]That doesn't mean
that this isn't cool,
[00:25:33.665]there's a lot of cool
stuff in robotics
[00:25:35.000]that has nothing to do with AI,
[00:25:36.768]it's really mechy.
[00:25:38.370]And so that's to take
nothing away from that.
[00:25:41.973]But when people see a robot
reach out and turn a knob
[00:25:45.810]and open a door,
[00:25:48.580]they think it's just like them,
[00:25:50.348]and it's not.
[00:25:52.183]That's a very, very different thing.
[00:25:55.820]So, someone says,
[00:25:58.056]"Agree, but don't dis them,
because they're not garbage.
[00:26:01.226]"This is really cool
stuff going on."
[00:26:03.762]But the point is exactly that:
[00:26:06.865]people need to
have that context.
[00:26:09.834]And what I'd say
is what you can do
[00:26:11.336]is ask for that context.
[00:26:13.605]You can ask, "What's
really going on here?"
[00:26:17.943]You can ask companies,
you can demand.
[00:26:22.347]Think of all the
microphones that we have now
[00:26:24.182]with the social media,
[00:26:25.216]and think of what's
happened with that,
[00:26:27.052]which is exciting.
[00:26:28.553]You have that right
to push people,
[00:26:30.789]and I hope that more people
go and do this.
[00:26:33.625]I know Boston Dynamics actually,
[00:26:35.327]they're great guys,
[00:26:36.428]this is not a judgment on them,
[00:26:38.296]they did come out
and actually explain
[00:26:41.032]how one of the videos was made,
[00:26:43.001]and how much time it took for
them to actually do those.
[00:26:47.839]They're all scripted.
[00:26:51.042]It can't walk into a random
room, a random hotel,
[00:26:53.979]and open the door and greet
you at the door (laughing).
[00:26:59.117]That can't happen right now.
[00:27:01.052]So, where are we
good at AI with that?
[00:27:07.359]Yann LeCun who's at Facebook,
[00:27:10.095]and one of the top
researchers in AI,
[00:27:12.864]says that we're really
good at perception,
[00:27:15.533]but not contextualization or
prediction of cause and effect.
[00:27:19.971]So you can imagine as you
think about those robots
[00:27:22.007]that are going in and reaching
out and opening the door,
[00:27:25.243]they're not able to
really contextualize that.
[00:27:28.380]They can actually perceive
that a knob is there,
[00:27:30.715]they can perceive how much
they need to squeeze it or not,
[00:27:33.451]they can perceive that
they need to turn it.
[00:27:35.420]But they're not actually
thinking through all of that,
[00:27:37.489]they've just been trained
to do all of those things.
[00:27:40.291]I think that depends on
your measuring stick.
[00:27:44.496]Certainly we can say that
some AI can read slides,
[00:27:52.504]so radiology slides,
better than humans.
[00:27:55.707]What I would say is,
[00:27:57.475]our measuring stick
is wrong on that,
because humans will pick up things
[00:28:01.646]that the AI doesn't pick up.
[00:28:03.048]So while AI may be at 85%
and humans are at 75%,
[00:28:07.552]the two together could
probably be at 95%.
[00:28:10.588]And we pick up things that
the machines don't pick up,
[00:28:15.293]and that's a really powerful combination.
[00:28:17.362]And it's been shown, it's
called centaur chess,
[00:28:23.268]that humans and computers,
[00:28:25.937]humans and computers combined,
[00:28:28.139]can almost always
beat computers alone.
[00:28:32.477]And so that's more
the measuring stick
[00:28:33.978]that I like to look at,
[00:28:35.313]and it frustrates me every time
I see some studies like that
[00:28:39.017]that say, "Oh yeah,
they beat the human."
[00:28:40.952]And I'm like,
[00:28:41.853]"Well, did they beat the human
and the computer together?
[00:28:43.388]"Then we're talking
[00:28:45.290]So, while I agree with this,
[00:28:47.592]that that's the
most advanced area
[00:28:49.060]is on the perception side and
not on the cognitive side,
[00:28:52.630]I still have issues
with even saying,
[00:28:54.365]yeah, check, we're there,
on the perception side.
[00:28:58.103]And I'll give you
an example of that.
[00:29:02.507]If you can't tell, what
those words say is poop.
[00:29:06.144]Does anybody know about this?
[00:29:07.479]Does anybody recognize it?
[00:29:09.614]It came from Arkansas, yeah.
[00:29:11.850]It happened in Little Rock,
[00:29:13.551]so I was very proud.
[00:29:14.953]This was called the
[00:29:18.456]And it was done by a
Roomba vacuum cleaner.
[00:29:23.928]And this was someone's
diagram of their living room.
[00:29:27.665]So, sorry, it's gotten
a little bit washed out,
[00:29:30.635]but there's, you know,
there's a rug in there,
[00:29:33.805]It was in there, you know,
[00:29:35.373]it had splattered up
onto the bookshelves.
[00:29:37.942]So when we say that
computer, or that AI,
[00:29:40.845]is really good at perception,
[00:29:42.847]one of the easiest things
for humans to perceive,
[00:29:45.016]I don't know if you
guys know this or not, is poop.
[00:29:51.256]So that's where
I said, you know,
it depends on the measuring
stick that you're using.
[00:29:57.195]There's a lot of ways
of detecting poop,
[00:29:59.931]but we still haven't
gotten the AI to do that.
[00:30:02.433]And I'm not saying that
that's the biggest problem
[00:30:04.169]that we should solve.
[00:30:05.236]But just realize,
[00:30:08.173]I'm not allowed to say
the S word on the stage,
[00:30:14.045]computers don't know poop.
[00:30:19.017]So, let's get down to the
really important things,
[00:30:22.520]which is how can
you get involved.
[00:30:24.255]So hopefully I've
given you some context
[00:30:26.057]of what's going on,
[00:30:27.358]given you the status, so you
really understand better
[00:30:30.995]about AI and what its
knowledge actually is.
[00:30:36.100]How can you get involved?
[00:30:38.269]So, I would say the
biggest thing to remember,
[00:30:43.074]is stick to the first
principles of AI.
[00:30:46.110]And I call it the chicken,
the egg, and the bacon,
[00:30:51.316]because it makes it
easier for me to remember.
[00:30:54.586]I call it AI's holy trinity.
[00:30:58.590]The chicken is the algorithm,
the eggs are the data.
[00:31:03.094]And the reason why I
originally came up with that,
[00:31:05.230]I didn't have the
bacon at first,
[00:31:06.397]but everybody needs bacon.
[00:31:08.299]The reason why I
came up with that
[00:31:10.435]was 'cause people asked me,
[00:31:11.669]"Which do I start with?
[00:31:13.204]"Do is start with the data
and find an algorithm,
[00:31:14.973]"or do I start with the
algorithm and find the data?"
[00:31:17.842]And my answer was,
[00:31:19.177]there isn't really a
great answer to that,
[00:31:20.979]it's kinda both.
[00:31:23.147]You have to, you're going
back and forth between both
[00:31:25.883]on a regular basis.
[00:31:28.152]And so I deemed it the
chicken and egg problem.
[00:31:31.022]But then I realized, actually
there's a key component there,
[00:31:33.925]is the results.
[00:31:35.260]Bring home the bacon?
[00:31:37.028]I was like, oh, it goes with
my chicken and egg thing!
[00:31:40.031]So that's what I'd say is,
[00:31:42.600]this is the whole is greater
than the sum of the parts.
[00:31:46.170]So these three things together,
[00:31:47.905]and considering them together,
[00:31:49.540]and considering that
when one changes
[00:31:51.276]you need to think about
changing the others,
[00:31:53.778]and really, seriously doing that.
[00:31:56.347]So I hope that helps you
[00:31:57.615]as you think about how
you get involved in AI,
[00:31:59.751]think about oh wait,
[00:32:01.286]is this a chicken,
an egg, or a bacon?
[00:32:03.621]Now, let me teach you a
little bit about chickens.
[00:32:06.924]The chicken's the algorithm.
[00:32:08.126]There are many, many
breeds of chicken.
[00:32:10.561]This is from Pedro Domingos,
[00:32:13.231]who's out of the
University of Washington,
[00:32:15.500]and he wrote a book called
The Master Algorithm.
[00:32:18.202]I think it's really good,
[00:32:18.770]I think it's very accessible;
[00:32:20.171]a lot of people I know
have read it and said,
[00:32:23.808]"Wow, I feel better,
I understand more."
[00:32:27.011]And he breaks it
into five tribes.
[00:32:30.848]I think what's cool that's
happened since then,
[00:32:33.551]and he kinda talks
about this too,
[00:32:35.720]is it's becoming
more of an orchestra,
[00:32:38.156]so people are using different
tools and combining them,
[00:32:41.793]and the instruments are coming
in at different times,
[00:32:47.532]and together at the same time.
[00:32:50.234]But I do think this is a
good overview to talk about.
[00:32:54.505]So talk to people about,
[00:32:56.641]"Hey, what algorithm
are you using?
[00:32:58.376]"Hey, can you, on this chart,
[00:33:00.878]"can you point to me and
show me which one it is?"
[00:33:03.681]And it will help you understand
in a pretty basic way,
[00:33:08.086]I think all of these,
[00:33:09.153]if you talk about, you know,
[00:33:12.056]over there on your
right hand side,
[00:33:14.726]that's really, for the most
part, what Amazon is using.
[00:33:20.598]That's what drives the recommendations:
[00:33:22.133]people who bought this
also bought this, right?
[00:33:25.436]So you can get
[00:33:26.771]an understanding of,
[00:33:28.339]oh, when does that algorithm
come into play, and talk about,
[00:33:31.642]you know, what are examples
[00:33:32.810]of who uses that
kind of algorithm?
[00:33:34.979]And the other point that
I wanted to make clear
[00:33:40.351]is they aren't all black boxes.
[00:33:42.487]So what's really a black
box is the most popular one,
[00:33:45.990]so that's that blue
line right here,
[00:33:47.859]that has taken
off just recently.
[00:33:49.794]That's why when people
say AI is a black box,
[00:33:52.430]that's mostly what
they're talking about,
[00:33:55.032]and it's deep
learning neural nets.
[00:33:57.201]And I'm gonna show
you in a minute
[00:33:59.203]that they're not exactly the
black boxes that people think.
[00:34:02.774]But a lot of these other
tools that Pedro covers,
[00:34:07.912]I actually switch from using
neural nets at Los Alamos
[00:34:10.915]to genetic algorithms,
[00:34:12.183]because of how they worked,
[00:34:14.619]I could actually do what
I'm gonna show you next,
[00:34:16.587]which is sensitivity analysis,
a little bit more easily.
[00:34:19.524]So that's why they're
not as black box
[00:34:21.458]as neural nets, for
what I was using it for,
[00:34:24.295]you can argue some of the
other way for other things.
[00:34:27.665]Expert systems are really
the whitest box of all,
[00:34:31.268]because expert systems are rules people write by hand.
[00:34:35.505]That's why they never took off,
[00:34:37.375]is because they take a
lot of labor to write,
[00:34:40.478]and also they're very brittle;
[00:34:43.815]so they'll break a lot when
you need to change the rules,
[00:34:46.284]it takes a lot of
maintenance to do.
[00:34:48.652]Bayesian's kind of in between,
[00:34:52.255]the color actually didn't
show up on that one,
[00:34:54.257]but it's the one, the
second from your right.
It's kind of the same thing
[00:35:04.602]as an expert system in a way,
[00:35:05.803]but it's based on probabilities;
[00:35:07.438]which way would you go?
[00:35:08.840]And sometimes you don't
know what the probability is,
[00:35:10.875]or we as humans don't
deal well with probabilities.
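As a rough illustration of that contrast, here is a small sketch, not from the talk; the fever rule, its threshold, and the probabilities are made-up assumptions, contrasting a hand-written expert-system rule with a Bayesian probability update.

    # Illustrative sketch: a hand-written expert-system rule vs. a Bayesian update.
    # The threshold and probabilities below are made up for the example.

    def expert_system_rule(temperature_f):
        # "Whitest box": a person wrote this rule down, and it is brittle;
        # change the domain and someone has to rewrite it.
        return "fever" if temperature_f >= 100.4 else "no fever"

    def bayes_update(prior, likelihood_if_true, likelihood_if_false):
        # Bayesian reasoning: start with a prior belief and weigh the evidence
        # by how probable it is under each hypothesis.
        numerator = prior * likelihood_if_true
        denominator = numerator + (1 - prior) * likelihood_if_false
        return numerator / denominator

    print(expert_system_rule(101.2))                # fever
    print(round(bayes_update(0.1, 0.9, 0.2), 3))    # posterior probability ~0.333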
[00:35:12.877]The point is not to
get into all of that.
[00:35:15.179]We could talk about, that's
a whole talk unto itself.
[00:35:17.748]The bigger point is,
[00:35:19.250]these tools have been
around for a long time,
and are very well understood.
[00:35:23.187]There's nothing wrong
with using one of these,
[00:35:25.256]rather than neural nets.
[00:35:27.492]It's just that neural
nets, as you can tell,
[00:35:29.327]is the very popular, en
vogue thing to do, right?
[00:35:32.296]So, you know, if you're
using one of these others,
[00:35:34.665]you may look like you're
wearing plaid pants
[00:35:36.400]with bell bottoms,
[00:35:38.102]but eventually those
are gonna come back in.
[00:35:41.939]So don't worry about it,
[00:35:43.007]'cause they've actually
been used quite a bit,
[00:35:44.575]and a lot of people know
a lot of things about it.
[00:35:46.677]And just the kids these days
[00:35:47.945]don't quite know
as much about them.
[00:35:50.348]But there's even night vision
goggles for deep learning.
[00:35:54.785]So I agree, deep learning
does a lot of great things,
[00:35:58.289]particularly on that perception side,
[00:36:00.291]it's really, really good.
[00:36:01.826]So it is a great tool to use.
[00:36:03.361]And the bigger thing is,
[00:36:06.230]if you need explainability
you can get it,
[00:36:08.533]and that's where I started.
[00:36:10.101]I had the chemists
that I was working with
[00:36:12.570]out at Los Alamos,
[00:36:13.404]they wanted to know why
when I gave them an answer,
[00:36:15.573]they wanted to know
why it was that way.
[00:36:17.408]And basically what I did
[00:36:18.609]is what's called a sensitivity analysis.
[00:36:20.678]And that's what the
graph on the left shows,
[00:36:22.947]is which of the parameters
that are going into this
[00:36:26.350]the result is most sensitive to,
[00:36:29.520]which inputs change the result the most.
[00:36:31.389]And so that's what I did.
[00:36:32.456]The problem with that is,
[00:36:33.791]it took me a lot of
compute power to get there,
[00:36:36.761]and I didn't know the
directionality of the input,
[00:36:41.265]and that's why genetic
algorithms actually gave me
[00:36:43.334]a better way of controlling
and guiding the input,
[00:36:47.638]and figuring out
where I could get
[00:36:48.839]to the optimum answer.
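The mechanics of a sensitivity analysis can be sketched in a few lines; this is not the speaker's Los Alamos code, and the toy model function is an assumption, but it shows the perturb-one-input, watch-the-output idea.

    # Sketch of a one-at-a-time sensitivity analysis: perturb each input slightly
    # and measure how much the model's output moves. The model below is a stand-in.
    import numpy as np

    def model(x):
        # Hypothetical trained model (e.g., a neural net's prediction function).
        return 3.0 * x[0] + 0.1 * x[1] ** 2 - 2.0 * x[2]

    x0 = np.array([1.0, 2.0, 0.5])   # a baseline input
    eps = 1e-3

    sensitivities = []
    for i in range(len(x0)):
        x_hi, x_lo = x0.copy(), x0.copy()
        x_hi[i] += eps
        x_lo[i] -= eps
        # Central-difference estimate of d(output) / d(input_i)
        sensitivities.append((model(x_hi) - model(x_lo)) / (2 * eps))

    print("sensitivity of the result to each input:", np.round(sensitivities, 3))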
[00:36:50.074]Like I said, these are
some pretty complex topics.
[00:36:53.077]You don't have to
understand all of that
[00:36:55.279]to ask the right questions
and get involved.
[00:36:57.982]There's a lot of,
what's great is,
[00:36:59.750]different from when
I was doing it,
[00:37:01.819]there's a lot of great
tools out there now,
[00:37:04.055]for figuring out where
is this sensitive?
[00:37:06.924]You may have heard
[00:37:08.292]about the what's a wolf
and what's a dog example.
[00:37:11.762]It was probably about
six or seven years ago.
[00:37:16.067]It was out there where it's
like, oh, AI is so smart
[00:37:19.804]it knows the difference
between a wolf and a dog.
[00:37:21.739]Well then people dug into it,
[00:37:23.374]there was a whole
paper published on it.
[00:37:24.875]People started playing with it.
[00:37:26.143]And it turns out it was
detecting the background,
[00:37:27.812]rather than detecting an
actual dog versus a wolf.
[00:37:32.817]And so that's what a
sensitivity analysis does.
[00:37:37.154]And what this is showing is,
[00:37:39.123]with this tool,
it's called Space,
[00:37:41.892]you actually could figure
out where those red zones are,
[00:37:45.763]and sorry it's a
little washed out here.
[00:37:47.531]What in the picture
it's actually detecting,
[00:37:50.768]that really identifies this
bird versus another bird,
[00:37:55.306]or this animal,
[00:37:56.474]in this case it was a
meerkat and a mongoose.
[00:37:59.710]What tells them apart?
[00:38:01.278]That's a sensitivity analysis.
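For images, the same idea can be sketched as an occlusion map; the predict_prob classifier below is an assumed stand-in for whatever model is being probed, not the Space tool mentioned here.

    # Sketch of occlusion-style sensitivity analysis for an image classifier.
    # `predict_prob(image)` is an assumed function returning the probability
    # of the target class (e.g., "wolf"); it stands in for the model under test.
    import numpy as np

    def occlusion_map(image, predict_prob, patch=8):
        h, w = image.shape[:2]
        base = predict_prob(image)
        heat = np.zeros((h // patch, w // patch))
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                masked = image.copy()
                masked[i:i + patch, j:j + patch] = 0   # blank out one patch
                # A big drop in probability means this region drove the prediction
                # (e.g., snowy background rather than the animal itself).
                heat[i // patch, j // patch] = base - predict_prob(masked)
        return heat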
[00:38:03.147]So if you need explainability,
you can get it,
[00:38:06.484]and don't let people
say you can't.
[00:38:08.953]It may be you need to
change the algorithm,
[00:38:12.490]and that's what you need
to talk to people about,
[00:38:14.592]and understand that. But
just don't let them say,
[00:38:17.094]"Oh, it's a black box."
[00:38:19.664]Another big thing is,
[00:38:21.065]also don't expect the
computer to be right.
[00:38:23.901]You might remember in '97,
[00:38:28.272]Garry Kasparov lost
to Deep Blue in chess,
[00:38:33.177]and it was a big deal, right?
[00:38:34.178]The Grand Master of chess
went down to a computer.
[00:38:37.782]What came out later,
in about 2010,
[00:38:40.551]and he's verified this,
[00:38:43.054]is that it was actually
a bug that beat him.
[00:38:46.157]And how that bug beat him
[00:38:48.259]is it picked a move at random,
[00:38:50.895]because it didn't
know what to do.
[00:38:54.465]So the computer didn't
know what to do,
[00:38:56.600]so it picked a move at random,
[00:38:58.502]Kasparov looked at it and said,
[00:38:59.937]"Holy crap it knows
something I don't.
[00:39:03.507]"What has it figured
out that I don't?"
[00:39:05.843]And his mind was consumed
[00:39:09.547]with the fact that he
believed it saw something
[00:39:13.918]that he didn't see.
[00:39:18.456]They can be wrong.
[00:39:20.324]Even when they're
playing at that level,
[00:39:22.426]they can be wrong.
[00:39:25.096]So we've gotta remember
that computers have bugs,
[00:39:31.936]and we can't always,
[00:39:33.471]even if they're right a lot,
[00:39:34.672]even if they've won against
a lot of chess masters,
[00:39:38.576]we have to remember,
(hands slapping together)
[00:39:40.711]that there are times that
they're going to be wrong.
[00:39:43.414]So don't get caught up in the,
[00:39:46.217]well, it's been
right all this time,
[00:39:47.918]so it must be right now.
[00:39:49.386]If it really doesn't
look and feel right,
[00:39:51.222]you know, if Garry Kasparov
could have let that go,
[00:39:54.191]and just played his game,
[00:39:56.160]we don't know what
would have happened,
[00:39:58.863]but he could have won.
[00:40:00.965]So, the point on the chickens,
[00:40:04.235]is you can question
[00:40:07.204]your chicken's fit
to the problem,
[00:40:08.706]and don't be afraid to change
or question your chickens,
[00:40:12.276]or your chicken's keepers.
[00:40:15.446]Just because you don't
speak the same language,
[00:40:18.215]just because you
don't talk in code,
[00:40:21.018]doesn't mean you can't ask
some of these basic questions.
[00:40:25.089]I hope I'm giving you
some examples to say,
[00:40:27.158]"Yeah, but you know,
[00:40:29.894]"you know what happened there?
[00:40:31.128]"How do we know that this
isn't just a random move
[00:40:34.265]"picked by the computer?"
[00:40:37.835]So (hands clapping),
[00:40:42.306]let's talk about the eggs,
[00:40:44.375]and in particular
that egg on your face.
[00:40:46.977]Again, I'm not picking
on Google, or Microsoft.
[00:40:52.650]They have some of the most,
[00:40:53.784]and are doing some of the
most amazing work in AI.
[00:40:57.388]But it's also really
important to see
[00:40:59.790]that even people like that,
[00:41:01.292]who have some of the
most advanced resources
[00:41:06.063]outside of universities,
[00:41:07.331]'cause I still think universities lead there.
[00:41:09.567]Thank you by the way.
[00:41:11.135]Universities still have the hold
[00:41:13.771]on the really advanced
research that's happening,
[00:41:17.174]although these guys are
really supporting it,
[00:41:18.943]so it's not to take them down.
[00:41:20.911]They make some
really big mistakes.
Google Photos tagging Black people as gorillas.
[00:41:28.719]That's a big mistake.
[00:41:31.889]Tay being online for only hours;
[00:41:35.292]within hours she became a racist troll.
[00:41:39.864]What you might not know is what happened after,
[00:41:44.101]because Google said
that they'd fix it:
[00:41:46.303]"We're taking immediate action
[00:41:47.872]"to prevent this
result from happening."
[00:41:49.974]You might not know that it's been
almost three years now.
[00:41:55.012]The immediate action
that they took
[00:41:58.048]was just stopping the algorithm
[00:41:59.216]from tagging things as gorillas.
[00:42:02.653]They didn't actually go
in and fix the problem.
[00:42:08.058]They just kind of stopped
it from happening again.
[00:42:12.129]Tay is, and has been,
running in China for years
[00:42:17.101]before and after Tay was
launched here in the US.
[00:42:21.438]The difference between
the US and China
[00:42:25.709]is a really big difference
in Internet culture.
[00:42:30.614]There is no 4Chan in China.
[00:42:33.918]There's no subversive element
that's trying to go after it.
[00:42:40.391](speaking in Chinese)
is what it's called,
[00:42:42.393]and it works great.
[00:42:44.962]So what happened is
Microsoft got lulled
[00:42:49.033]into a false sense of
security, if you will,
[00:42:51.936]because of the success
and the importance
[00:42:55.673]of (speaking in Chinese) in China,
[00:42:58.075]thinking that when they brought
it over here it'd be fine.
[00:43:01.045]Now, both of these companies,
[00:43:02.279]and again, I'm not,
[00:43:03.580]I can't judge them
'cause I'm not there
[00:43:05.416]dealing with their business
problems every day,
[00:43:08.419]but what I will say is if
it was really a priority,
[00:43:11.288]both of these companies
[00:43:12.890]could have made major
advances and fixes.
[00:43:15.392]I will never say
[00:43:16.527]that they could have stopped
the bad things from happening
[00:43:18.495]in every instance.
[00:43:20.230]But there's a lot
of training material
[00:43:22.099]to not talk about Hitler, right?
[00:43:25.035]Just like they trained
her how to talk,
[00:43:26.770]they can also train
her how not to talk.
[00:43:29.273]There's a lot of things
that could have been done
[00:43:32.876]that weren't done.
[00:43:34.778]And again, I can't talk
about their priorities,
[00:43:36.680]and why they did that.
[00:43:38.382]But as a technologist,
[00:43:39.783]it does bother me,
[00:43:41.952]that these people that
have the most power
[00:43:44.355]and the most resources
in this technology,
[00:43:46.523]aren't making some
of those steps
[00:43:48.625]in being very
transparent about it;
[00:43:50.461]they just make
statements like this,
[00:43:51.895]saying, "We'll fix the problem,"
[00:43:53.998]and then they go dark.
[00:43:55.766]And I think that's a
problem for all of us,
[00:43:57.468]and I think, again,
as a community,
[00:43:59.103]we need to demand more.
[00:44:06.677]This is a whole
talk, this slide.
[00:44:08.545]Too many of these slides are
whole talks unto themselves.
[00:44:10.881]But I will just, the thing
that I wanna say about this,
[00:44:14.518]is when I said I didn't know
[00:44:16.453]whether the chicken
or egg came first,
[00:44:18.288]it's the egg;
[00:44:19.656]it's the data.
[00:44:21.025]And that's good news for you,
[00:44:22.693]because that's more of
what you're familiar with
[00:44:25.095]than the technology.
[00:44:26.430]So learn more about the data,
it's more understandable.
[00:44:29.133]Learn about pieces of the data,
[00:44:30.667]and just know that.
[00:44:33.170]And if anybody's
interested in this,
[00:44:34.371]I'm happy to come and
give the whole talk
[00:44:36.407]about the difference between
software development and AI.
[00:44:40.244]This is, if you believe
in the lean startup,
[00:44:43.047]usually where you
start is the idea.
[00:44:45.182]What I'd say is what
we're moving, whoops!
[00:44:48.285]What we're moving into is
an era of the data leading.
[00:44:51.922]And so we're gonna
start with the data,
[00:44:53.690]and formulate ideas off of that,
[00:44:55.959]so that's just doubling
down on that point.
[00:44:57.661]What is the problem with that,
[00:45:02.266]is I was really excited when
HBR, a couple of years ago,
[00:45:06.437]came out with this article,
[00:45:07.871]which was, how Unilever
built an insights engine.
[00:45:11.975]And I look at that and
go, that's an engine,
[00:45:14.912]I like technology.
[00:45:16.046]So I'm very excited and
I start reading this.
[00:45:18.248]And I'm like, "This has
nothing to do with technology,
[00:45:21.919]"It's all about the organization,
[00:45:23.821]"and how they built it."
[00:45:27.024]And I read that and I'm like
(hand slapping forehead),
[00:45:29.026]"Oh my gosh!
[00:45:30.360]"This is the thing I
haven't been talking
[00:45:32.096]"to our customers about."
[00:45:33.697]I've been so excited
about our technology
[00:45:35.766]and telling them
about our technology,
[00:45:37.935]I didn't bring it back to
yeah, but how does that impact?
[00:45:41.171]And here's the important
thing about that,
[00:45:42.873]on how it impacts,
[00:45:44.074]is it goes back to my point.
[00:45:45.309]This is why this is under
the egg section, right?
[00:45:48.779]You probably heard that
AI is eating the world.
[00:45:53.617]You are what you
eat, by the way.
[00:45:55.586]So software was
eating the world,
[00:45:57.821]and then Marc Andreessen
came back and said,
[00:45:59.723]"Now AI is eating the world."
[00:46:01.558]I think both are true.
[00:46:03.093]But AI is fed by data, right?
[00:46:07.197]Well who feeds data?
[00:46:09.032]It's your organization.
[00:46:11.435]So how do you get to
a WOW cycle with AI,
[00:46:15.272]is understanding, in my opinion,
[00:46:17.274]understanding this food chain,
and how important your org is
[00:46:22.679]in curating and
cultivating that data.
[00:46:26.150]So the key point
on the egg side is,
[00:46:28.519]keep your eggs free range,
[00:46:29.920]meaning don't block them off.
[00:46:31.455]A lot of eggs are in silos
in your organization.
[00:46:34.391]Keep your eggs free range,
[00:46:36.660]and set your organization
[00:46:38.795]to be the cultivators
of those eggs.
[00:46:41.398]Now the best part, the bacon.
[00:46:45.602]You've heard that for breakfast,
[00:46:47.471]the chicken is involved
but the pig's committed.
[00:46:55.012]Well what does that mean?
[00:46:56.580]The pig being committed says,
[00:46:58.315]your AI and the AI you develop
and throw out into the world
[00:47:03.120]reflects your values.
[00:47:06.089]That's how committed you are.
[00:47:10.060]You're committed to the
results that you're driving.
[00:47:13.864]Are you sure you're
driving the right results?
[00:47:17.134]Let me give you some examples.
[00:47:19.703]Again, this happened recently.
[00:47:23.240]An algorithm cut healthcare
and Medicare benefits.
[00:47:28.745]And we can go into, like,
[00:47:31.281]this was more of a rules-based system
[00:47:34.418]than what most
people would call AI,
[00:47:36.853]but AI can do the same thing.
[00:47:38.455]This is recent, 2018.
[00:47:41.024]This is what really
bothered me the most
[00:47:44.161]about when this happened is,
[00:47:46.196]this is the engineer that said,
[00:47:47.531]remember we talked
about black boxes?
[00:47:49.499]He said, "There's no breast
practice for alerting people
[00:47:52.502]"about how an algorithm works."
[00:47:55.906]We duck that all the time.
[00:47:57.107]We're like, "Oh, it's
[00:47:59.209]"It's black box."
[00:48:00.744]So there's no way of doing that.
[00:48:02.946]And then it says, he says,
something we should do.
[00:48:07.684]"Yeah, I should also
dust under my bed."
[00:48:10.187]This is what this engineer said
[00:48:13.423]about taking away people's healthcare benefits:
[00:48:17.661]"I should probably also
dust under my bed."
[00:48:20.631]That's how trivial
that person took this.
[00:48:25.769]Is that the values you want
your organization to represent?
[00:48:34.111]Be represented by?
That's why it's really important
that you're in the room
[00:48:41.518]and having these discussion,
[00:48:43.053]you're part of that community,
[00:48:44.921]so that an engineer
doesn't just say it's hard.
[00:48:48.892]There's lots of hard
problems that we've solved.
[00:48:52.829]Let me give you another example.
[00:48:55.098]The Uber self-driving car,
[00:48:56.500]which you may have heard
[00:48:57.734]killed a woman;
you probably heard about that.
[00:49:00.537]I don't know if you also heard,
[00:49:02.306]that, I mean, they
say right there,
[00:49:06.009]a software bug led to the death.
[00:49:08.278]The problem is, that the
sensors, the AI actually...
[00:49:11.315]They say the sensors, but
the sensors don't detect,
[00:49:14.484]they just bring in the data;
[00:49:16.486]the AI actually detects,
that's that perception thing.
[00:49:21.091]It actually saw the woman.
[00:49:22.759]It actually saw an
object even earlier,
[00:49:24.594]like, six seconds before impact
it identified a bike.
[00:49:27.731]At two seconds before impact
it identified her as a person.
[00:49:33.503]Why didn't it stop?
[00:49:37.474]Because they had disabled
the feature to say stop,
[00:49:43.046]because it led
[00:49:44.648]to too much jerkiness
[00:49:46.083]in the car.
[00:49:48.051]So the AI was saying
too often, stop.
[00:49:51.988]And so I am quite sure
[00:49:53.590]there was not a
conversation that said,
[00:49:55.959]"How much jerkiness would
we take for a human life?"
[00:50:00.597]These are really hard
conversations to have,
[00:50:04.134]and oftentimes we ignore them,
[00:50:06.370]except in cases where we know
we're supposed to have them,
[00:50:09.005]like in medical devices.
[00:50:10.674]So we have areas that
we know really well
[00:50:13.076]how to have those conversations,
[00:50:14.811]we're just not used to having
them in the broader context.
[00:50:17.647]It was much easier to
say to an engineer,
[00:50:19.616]"Hey, let's not brake as often."
[00:50:22.352]Okay, I won't brake as often.
[00:50:25.489]And I can tell you,
[00:50:27.491]I'm not saying the
engineers were thinking,
[00:50:29.226]"and we'll kill someone,"
but I will tell you this:
[00:50:31.695]they weren't thinking that
[00:50:33.296]because they got focused
(hands slapping together)
[00:50:34.965]on that result of not
braking too often.
[00:50:38.301]We need to have these
conversations where we say,
[00:50:40.170]"Okay, I know we want that,
but what does that really mean?
[00:50:43.407]"How do we unpeel that?"
[00:50:47.310]So my point here
[00:50:48.145]is this really isn't about artificial intelligence,
[00:50:50.547]it's about human intelligence.
[00:50:52.883]It's about you
being in the room,
[00:50:54.718]and you asking the questions,
[00:50:56.753]and not feeling afraid to say,
[00:50:58.722]"Well I don't know
anything about AI
[00:50:59.923]"so I can't contribute to this."
[00:51:02.726]You can contribute to it,
[00:51:04.728]and you can challenge the
thinking that's happening,
[00:51:07.464]so that we're in a
better place together.
[00:51:10.400]So the point here is,
[00:51:11.902]metrics have no moral compass,
[00:51:13.336]that's why we need you.
[00:51:16.006]The metric of not braking
too often is a fine metric;
[00:51:18.975]that's not a moral compass.
[00:51:22.145]We need you in the room.
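To make that concrete, here is a small, purely illustrative sketch in Python. It is not Uber's actual code; the function names and numbers are invented. It shows how an objective that only counts hard-braking events quietly hides the safety trade-off until the safety term is written down explicitly.

# Illustrative sketch only -- not Uber's real system; names and numbers are invented.
def comfort_only_score(hard_brakes, collisions):
    # Optimizing this alone rewards suppressing emergency braking,
    # because collisions never enter the score at all.
    return -hard_brakes

def score_with_explicit_values(hard_brakes, collisions, collision_weight=1_000_000):
    # Writing the safety term down forces the hard question:
    # how much "jerkiness" would we trade against a human life?
    return -hard_brakes - collision_weight * collisions

print(comfort_only_score(hard_brakes=0, collisions=1))       # 0: looks "good"
print(comfort_only_score(hard_brakes=12, collisions=0))      # -12: looks "worse"
print(score_with_explicit_values(hard_brakes=0, collisions=1))  # the hidden cost is now visible

Choosing that collision weight is a human, values-laden decision, which is exactly the conversation that has to happen in the room rather than silently in the code.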
[00:51:25.248]So, I brought up the
whole point about light,
[00:51:30.120]so how do you make
that light shine?
[00:51:32.722]Algorithms are introverts.
[00:51:37.694]Those chickens, they're not
out there in your face saying,
[00:51:40.464]"This is what I'm doin'!
[00:51:41.798]"This is how I'm workin'!"
[00:51:47.204]So you have to draw them out,
and make sure you've
hired the right algorithm
[00:51:49.206]for the problem you're
trying to solve.
[00:51:52.175]Learn from Kasparov.
[00:51:56.680]Remember you need
to expect mistakes,
[00:51:58.748]just like kids.
[00:52:00.517]Have your kids made mistakes?
[00:52:03.019]If you were a kid, did
you make any mistakes?
[00:52:06.256]Again, think about that,
[00:52:08.959]and don't just think the
computer's smarter than us,
[00:52:11.127]because it's not.
[00:52:12.162]It's different from us.
[00:52:13.063]It's artificial light,
not natural light.
[00:52:17.634]Start small, this is really
where the eggs come in,
[00:52:20.637]so you can understand
what's going on,
[00:52:22.506]and that's hard because a
lot of AI needs big data.
[00:52:25.909]What's the smallest amount
that you can work on?
[00:52:27.878]And by the way, we're
improving the algorithms
[00:52:29.513]so they don't need
as much big data.
[00:52:31.681]But still carry that big vision;
[00:52:33.049]where do we wanna go?
[00:52:34.684]Start with something you
can start to understand,
[00:52:36.753]get your team,
[00:52:37.721]start having those
types of conversations,
[00:52:39.923]it's really important.
[00:52:41.491]Results are bacon,
focus on those results,
[00:52:43.727]but make sure you
know what results
[00:52:45.529]you're really trying to drive,
[00:52:46.763]and what trade-offs
you have in that.
[00:52:48.598]And then, you
know, poop happens.
[00:52:51.768]Make sure you're bringing
in some expertise.
[00:52:54.804]And some people, whether
that expertise is ethicists,
[00:52:58.575]who you may need to help have
some of these discussions,
[00:53:02.746]because they're not ones
that we're used to.
[00:53:04.481]It can be technical.
[00:53:06.049]It could be data.
[00:53:07.150]It could be how is
this data generated?
[00:53:08.718]Where is it coming from?
[00:53:09.886]Who is the person that
knows that the best?
[00:53:12.122]So, expertise is not
just technical expertise,
[00:53:14.691]it's expertise in
all of these areas
[00:53:16.459]that I've been thinking about.
[00:53:18.461]So where I'm gonna end is,
[00:53:20.397]this is the other big question.
[00:53:22.966]So hopefully that helps you and
gives you some things to do.
[00:53:25.368]But I wanna answer the
other big question;
[00:53:27.938]so sorry if somebody
asked it already,
[00:53:30.140]is, you know, the
question is normally
[00:53:33.243]what do you worry about?
[00:53:35.478]I'm much more
where Stockdale is.
[00:53:40.383]I'm not an optimist.
[00:53:43.420]But I'm very much a realist.
[00:53:45.355]And the reality is we have
so much to gain from AI.
[00:53:49.826]I really think AI is gonna be
the key to us solving cancer.
[00:53:53.797]I am not gonna trade that
off for some of the risks.
[00:53:56.866]So I'm gonna deal with
my current reality,
[00:53:58.902]that there's risks
that are involved,
[00:54:01.004]while I am pushing to solve
some of those bigger problems
[00:54:05.842]that I know can't be
solved without AI.
[00:54:09.112]Fear is pessimistic.
[00:54:11.448]I tend to focus on the reality,
[00:54:13.650]which is more sunlight.
[00:54:17.454]So sunlight is not only just
seeing what's happening there,
[00:54:20.757]sunlight's also a great disinfectant.
[00:54:24.127]So this is that sunlight,
[00:54:25.862]this is having these conversations,
[00:54:27.697]that's what's important.
[00:54:30.467]We're not up to the hype yet,
[00:54:32.435]so be patient and let the
kid fall and make a mistake,
[00:54:36.406]but put boundaries around that,
[00:54:38.141]just like we do with our kids.
[00:54:41.111]And we need more people involved,
[00:54:42.178]it can't just be the mathematicians
[00:54:44.848]and computer scientists
[00:54:47.417]that I talked about, right?
[00:54:47.417]We need you coming
in and helping us,
[00:54:50.220]and asking the questions,
[00:54:51.421]which means it
needs more of you.
[00:54:53.356]So that's what I think
about when I think about AI.
[00:54:55.592]I worry about,
[00:54:56.926]how do I get people like you?
[00:54:58.528]That's why I'm here tonight;
[00:55:00.330]this is not
about my business,
[00:55:02.132]this is about me wanting
to bring more people into AI,
[00:55:06.469]because I know that's the only
way we're gonna be stronger.
[00:55:10.407]So, that's it.
[00:55:12.509]And I look forward to questions.
MIKE: Thank you so much, Jana.
[00:55:24.821]At this time, Jana will
take questions from you.
[00:55:27.390]You may submit
questions on Twitter
[00:55:29.426]using the hashtag
[00:55:31.494]or simply write your
questions on the note cards
[00:55:33.763]provided by the ushers.
[00:55:35.365]Jana, we've got an active
Twitter feed already tonight.
[00:55:38.334]Let's start with a gentleman
whose question you answered
[00:55:41.037]about the workforce,
[00:55:42.572]and he appreciated you
responding to that.
[00:55:44.541]His follow up question is,
[00:55:47.043]can you address
the role universities
and community colleges
[00:55:49.612]will need to fill to
provide affordable worker
[00:55:53.149]retraining for those
displaced by AI?
[00:55:55.385]A couple questions around
that topic tonight.
[00:55:57.253]JANA EGGERS: Yeah.
[00:55:58.121]I think that's a
really good point.
[00:55:59.823]One of the things that I say...
[00:56:01.658]So, as I mentioned,
I was in logistics,
[00:56:03.426]so I actually know
a lot about trucking
[00:56:07.130]and truckers themselves.
[00:56:08.965]And they're one of the
people that are talked about
[00:56:10.900]as being displaced.
[00:56:12.168]And what I'd say is
these are people
[00:56:17.040]that, most of them, did not
wake up as a kid and say,
[00:56:20.810]"I wanna grow up
to be a trucker."
[00:56:22.645]They did it because it's
a great living to make,
[00:56:27.350]and that's our societal values;
[00:56:29.486]we value truckers.
[00:56:31.554]And we can value other things.
[00:56:33.790]So I think it's gonna
require a shift in values,
[00:56:36.860]and the universities
and community colleges
[00:56:41.297]can help us respond
to those shifts.
[00:56:44.267]And so as we as a society shift,
[00:56:46.102]they need to be aware of,
hey, what's going on.
[00:56:48.605]I do think there is
a big shift going on
[00:56:52.842]because of the population shift,
[00:56:54.677]so that's an example
of community colleges and universities
[00:56:58.681]really expanding
their training in eldercare,
[00:57:03.319]and understanding of eldercare.
[00:57:04.921]MIKE: All right.
JANA: Just one example.
[00:57:06.356]MIKE: Thank you.
[00:57:07.557]We've got an audience member
here at the Lied Center
[00:57:09.492]asking for more examples of
where AI is being used today,
[00:57:13.329]and where should it be used,
[00:57:14.864]maybe other than eldercare.
[00:57:17.400]JANA: So AI is being used,
[00:57:21.137]I mean, you would,
[00:57:23.373]I don't know would
you be shocked?
[00:57:25.308]I don't know,
(hands slapping legs)
[00:57:26.309]depends on who you are
whether you'd be shocked.
[00:57:27.744]It's used in so much
that's happening today.
[00:57:30.980]Do you use Google search?
[00:57:34.284]There's tons of AI behind that.
[00:57:36.286]Everything with vision.
[00:57:37.554]You know, we were
talking before, offstage,
[00:57:40.490]about the use in agriculture,
[00:57:43.893]for weed detection,
[00:57:46.262]for autonomous vehicles
driving around the fields.
[00:57:49.432]Monitoring, it's being used
for monitoring all the sensors;
[00:57:55.738]IoT has driven a lot...
[00:57:57.240]Internet of Things has
driven a lot in the AI space.
[00:58:00.844]So it's really
ubiquitous these days
[00:58:03.980]and being used all over.
[00:58:06.015]You know, most people
just don't realize
[00:58:08.551]that that's happening,
[00:58:10.186]because it's not in their face.
[00:58:11.187]And that's really what's
gonna happen with AI is,
[00:58:14.724]it's not the physical things you see,
[00:58:17.927]it's the computation that's
going on behind the scenes.
[00:58:22.098]As far as where should it be,
[00:58:23.633]you know, the joke I
always make is (chuckling),
[00:58:25.668]I'd just like it to
manage my calendar.
[00:58:27.937]So I hope that some more things
[00:58:31.241]are coming out in those areas,
[00:58:33.476]though realizing that's
a really complex problem.
[00:58:36.246]So I do think, as I
mentioned in the talk,
[00:58:40.016]that healthcare is a huge area,
[00:58:42.318]and I'm really glad we
do some work in there,
[00:58:44.020]but there's also a lot of really
great work going on there.
[00:58:47.023]MIKE: All right, thank you.
[00:58:48.091]Also from our Twitter
feed this evening, Jana.
[00:58:50.293]How may net neutrality
or similar restrictions
[00:58:53.229]hinder AI to the
public in the future?
[00:58:56.499]JANA: I wouldn't say net neutrality.
[00:58:59.135]Net neutrality has a
lot of other issues,
[00:59:01.337]I don't think that it
really impacts AI that much.
[00:59:04.507]GDPR has a bigger impact on AI,
[00:59:10.546]just because of how it governs data,
[00:59:11.581]and I don't think
that's a bad thing,
[00:59:13.383]I'm actually very pro-GDPR.
[00:59:15.418]I think it's gonna teach us
how to be more responsible
[00:59:19.055]with the use of data,
[00:59:20.089]and we should, that's
not a bad thing,
[00:59:21.824]that's a good thing.
[00:59:23.393]Teaching us how to do
that is absolutely good.
[00:59:27.030]I do think that, you know, the
things that are gonna hinder AI
[00:59:31.100]can be some regulations around data
[00:59:36.406]and what it can be used for.
[00:59:39.575]So I worry about regulations,
but I'm not anti-regulation.
[00:59:43.246]I think we need more people
that are doing that work,
[00:59:46.883]to understand AI and how it works,
[00:59:51.387]so that they can write
better regulations around it.
[00:59:54.357]MIKE: Great, thanks.
[00:59:55.792]Do you think AI will ever
become truly sentient?
[00:59:58.895]If so, how do we deal with that?
[01:00:01.230]JANA: I don't.
[01:00:03.766]I'll just be honest.
[01:00:05.301]I'm not in that camp.
[01:00:08.104]I mean, it's so far off that
I just can't even predict it.
[01:00:12.608]But again, you know my
analogy of natural light
[01:00:15.611]versus artificial light.
[01:00:16.779]That doesn't mean that it's
not more advanced than us
[01:00:20.416]in some areas.
[01:00:22.418]As far as what do
we do about it?
[01:00:24.620]You know, again, I'll go back
to being on team Stockdale.
[01:00:28.024]I think we just deal
with the present reality.
[01:00:30.927]I don't think it's gonna be
a prison camp, like I said.
[01:00:33.997]But if it did happen
to be that horrible,
[01:00:36.632]and the sentient beings
decided not to take care of us,
[01:00:41.037]I mean, Fairabee said,
[01:00:42.972]"If there was intelligent life out there,
[01:00:44.574]"they would have
contacted us already."
So, you know.
[01:00:49.579]But other people would say,
[01:00:51.481]"Oh, they have contacted
us, and they left."
[01:00:55.585]So I just, I don't
think that it's gonna be
[01:00:59.722]in any kind of form
that we can predict,
[01:01:01.858]and that's where it's
not a conversation
[01:01:05.461]that I think about a lot.
[01:01:07.830]MIKE: That's fair.
[01:01:09.198]Could you say more
about starting with data and algorithms
[01:01:11.634]instead of ideas,
[01:01:13.102]in your lean startup example?
[01:01:15.571]JANA: So I think the data can
bring you a lot of ideas,
[01:01:18.875]and AI can help you analyze
that data in new ways.
[01:01:22.578]So the way data connects...
[01:01:26.449]It's a really long answer so
I'm gonna try and shorten it.
[01:01:29.052]We build automatically with
AI a knowledge graph of data.
[01:01:34.023]People, our customers, have
learned things from their data
[01:01:37.860]that they didn't know before,
[01:01:39.495]that they couldn't get with
general statistical analysis,
[01:01:42.265]or they may have, but again,
kinda like doing sensitivity
analysis on neural nets
[01:01:46.869]rather than using...
[01:01:51.040]So, I think the
bigger point there is,
[01:01:54.243]what used to happen
is we'd have an idea
[01:01:56.746]and then we'd start coding it,
[01:01:57.947]we'd write a minimum viable product,
[01:01:59.515]and then we'd launch that,
[01:02:00.983]and we'd get some data
from people using that,
[01:02:03.453]and then generate.
[01:02:04.587]I think now we're
gonna look at the data
[01:02:06.789]of what exhaust has happened,
[01:02:08.357]whether that exhaust
[01:02:09.492]is coming from the
Internet of Things,
[01:02:12.061]or whether it's
coming from websites;
[01:02:15.698]there's lots of data
floating around out there.
[01:02:18.601]And so I think our ideas
[01:02:19.735]are more gonna come from looking
at that data holistically,
[01:02:23.940]of I have this
maintenance report here,
[01:02:25.942]I have this stream
of IOT data here,
[01:02:29.045]I have an inspector that
noticed something here.
[01:02:31.447]I think that merging
those streams of data
[01:02:34.217]is where we're going
to get more ideas.
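As a rough illustration of that data-first idea, here is a toy sketch in Python. It is not Nara Logics' actual system, and the asset names and fields are invented; it only shows how merging a maintenance note, an IoT reading, and an inspector's finding under the same asset can surface the kind of cross-stream candidate she describes.

# A toy illustration, not a real product: merge heterogeneous "data exhaust"
# (maintenance logs, IoT readings, inspector notes) into one graph keyed by asset.
from collections import defaultdict

maintenance = [{"asset": "pump-7", "note": "bearing replaced"}]
iot_stream  = [{"asset": "pump-7", "vibration": 0.91},
               {"asset": "pump-3", "vibration": 0.12}]
inspections = [{"asset": "pump-7", "finding": "unusual noise"}]

graph = defaultdict(list)   # asset -> list of linked observations
for source_name, records in [("maintenance", maintenance),
                             ("iot", iot_stream),
                             ("inspection", inspections)]:
    for rec in records:
        graph[rec["asset"]].append((source_name, rec))

# Assets with evidence from several independent sources are candidates for a
# closer look -- the kind of idea that only appears when the streams are merged.
for asset, observations in graph.items():
    sources = {source for source, _ in observations}
    if len(sources) >= 2:
        print(asset, "has corroborating signals from:", sorted(sources))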
[01:02:36.586]And then the algorithms again,
[01:02:38.054]like I said, my thing is,
[01:02:39.355]I really think
it's the egg first,
[01:02:41.290]rather than the chicken,
[01:02:43.059]and the chickens
will follow the eggs.
[01:02:46.762]MIKE: And the bacon of, okay.
[01:02:47.630]JANA: And the bacon.
[01:02:49.365]MIKE: In what ways--
[01:02:50.099]JANA: Someone was listening!
[01:02:51.334]MIKE: Pig farmer, yeah.
[01:02:54.137]So some questions
from our audience
[01:02:55.771]here at the Lied Center tonight.
[01:02:57.507]In what ways might we hold
engineers accountable
[01:02:59.976]for the job descriptions...
[01:03:02.345]JANA: I think that's,
it's a good question.
[01:03:05.114]I am not one that
believes that, because, as I said,
[01:03:08.584]raising AI takes a village.
[01:03:11.087]I don't think that
we can blame it all
[01:03:13.055]on one of the villagers.
[01:03:14.524]If you are blaming the engineer,
[01:03:16.893]I would point back and blame you
[01:03:19.028]for not getting involved.
[01:03:20.763]I would say that,
still, realize
[01:03:24.567]that human tendencies are
[01:03:28.404]what they are.
[01:03:29.705]Engineers tend to
be more introverts.
[01:03:33.242]That doesn't mean
[01:03:34.810]that you can just let them
be their introvert self
[01:03:37.013]and go off in a corner,
[01:03:38.014]it's up to you to engage them;
[01:03:43.452]Despite the fact that I
stand up here and do this,
[01:03:47.156]there's a wall right here.
[01:03:48.791]You guys don't know,
[01:03:49.959]I will also go home and
curl up in my little ball
[01:03:52.395]and have my introvert recharge.
[01:03:54.630]I do this 'cause I think
it's really needed,
[01:04:00.303]and it's fun to explain
things to people
[01:04:03.806]and have them say,
[01:04:04.874]"Wow, I feel like I
understand something now."
[01:04:08.211]So I think that's a problem
if we're blaming the engineer,
[01:04:11.314]but it doesn't
mean that engineers
[01:04:12.648]don't have responsibility.
[01:04:14.517]MIKE: All right, that's fair.
[01:04:15.785]We'll perhaps broaden
from AI a bit.
[01:04:18.120]Someone here in the audience
would like you to expound on
[01:04:21.290]what you meant earlier
[01:04:22.458]about universities having
a hold on research.
[01:04:25.027]JANA: Oh, I didn't mean hold,
sorry if that came across wrong.
[01:04:28.497]I think you guys are still
the epicenter of research,
[01:04:33.402]which I think is great,
[01:04:34.337]and it should be that way.
[01:04:37.106]I don't think that should be corporations;
[01:04:40.409]they have a different purpose.
[01:04:43.946]And so I don't really
feel like corporations
[01:04:47.717]should be the ones that are the
primary drivers of research,
[01:04:50.686]and it actually
makes me nervous,
[01:04:52.288]because I think they're
constrained by other things,
[01:04:56.259]and they have different
motivations than you do
[01:05:00.129]when you're doing more
pure, scientific research.
[01:05:02.465]And that doesn't mean
that they shouldn't do any,
[01:05:04.333]but it does bother me
when a lot of, like,
[01:05:09.972]the AI researchers are being
sucked into large corporations.
[01:05:13.042]I think there is a place,
a place like Los Alamos,
[01:05:17.580]for pure scientific research,
[01:05:19.415]and there is a place for corporate research,
[01:05:24.754]but those are also
two different things,
[01:05:26.422]and we shouldn't expect
corporations to do the research
[01:05:30.726]that universities do.
[01:05:33.696]So I didn't mean like,
you have a hold on it
[01:05:35.531]and you won't give it up.
[01:05:36.799]But you have an advantage and
you should use that advantage,
[01:05:41.437]and we as society should
value that advantage.
[01:05:45.474]MIKE: Okay, you studied math
[01:05:46.709]and computer sciences
as an undergraduate.
[01:05:48.878]So besides math and computer science,
[01:05:50.579]what areas of study in college
[01:05:52.548]will best prepare
today's young people
[01:05:54.250]for working in an AI world?
[01:05:56.585]JANA: Definitely business,
[01:05:58.220]because business is the
results part of the equation.
[01:06:01.390]Also there's a lot that
needs to be done with design,
[01:06:05.194]particularly with AI,
it's really changing.
[01:06:07.697]You know, when I was
growing up in the field,
[01:06:10.733]there was no thing
called user experience,
[01:06:15.004]it was all what
color is the button,
[01:06:17.907]so that people notice it.
[01:06:21.143]It was really just
about the button design,
[01:06:24.580]and so you're really talking
about graphic designers.
[01:06:27.717]That became a very different
field with user experience.
[01:06:31.954]So it went from graphic
design to user interface
[01:06:35.925]to user experience.
[01:06:37.860]And those are really different fields.
[01:06:39.395]And I do think we
need more work around,
[01:06:43.065]what does it mean
to design with AI?
[01:06:45.568]So that's another field that
involves more of the arts.
[01:06:49.672]I also think ethics, and
someone asked me earlier today
[01:06:53.676]about who's driving the policy.
[01:06:55.511]I mean, people that can
understand the technology enough
[01:07:00.883]to really drive the policy
and have that passion,
[01:07:03.152]you know, they need
to spend some time
[01:07:05.054]understanding the technology,
[01:07:06.622]but it's understandable enough.
[01:07:09.592]And so we need some
people doing that as well.
[01:07:12.661]So I think there's
a lot of areas.
[01:07:14.663]I guess the basic
point, I'd say,
[01:07:16.832]is technology is gonna be everywhere.
[01:07:19.101]So being comfortable with technology,
[01:07:21.437]you can leverage
that in any field
[01:07:25.808]that you want to go to, almost.
[01:07:28.310]MIKE: Great, okay.
[01:07:29.512]So much for not going
political tonight, but.
[01:07:31.614]Can you give us an
example of algorithms
[01:07:33.349]used for election outcomes?
[01:07:36.752]JANA: I mean, fake news is
the biggest one, right?
[01:07:40.156]And people were generating
fake news with algorithms,
[01:07:43.492]for knowing what
people would react to,
[01:07:46.529]and then share more.
[01:07:48.130]So AI did have, unfortunately,
had a part in that.
[01:07:52.601]It also was helped out,
[01:07:54.804]kinda like, you know,
[01:07:57.973]they didn't mean to help,
[01:08:00.743]but by the Facebook algorithms.
[01:08:02.978]It wasn't that they were
creating the fake news,
[01:08:05.948]but they were creating the conditions
[01:08:08.350]to where it would
be shared more,
[01:08:09.585]because again, that
was the profit
[01:08:11.487]that they were trying to drive.
[01:08:12.421]You know, Facebook didn't
generate their algorithm
[01:08:16.292]to spread fake news,
[01:08:17.693]but it ended up spreading it, because the
algorithm was all about
[01:08:20.629]what makes people
interact with this;
[01:08:22.965]what makes people share it?
[01:08:24.633]So sadly, AI did have a
role in propagating it,
[01:08:29.371]and it also helped
write and support it too.
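A hypothetical sketch in Python of what she means here; this is not Facebook's actual ranking code, and the field names are invented. If posts are ordered purely by predicted engagement, whatever spreads best rises, true or not, because accuracy simply isn't a term in the score.

# Hypothetical engagement-only ranking -- not Facebook's real algorithm.
posts = [
    {"id": "calm-correction", "predicted_shares": 40,  "accurate": True},
    {"id": "outrage-rumor",   "predicted_shares": 900, "accurate": False},
]

def engagement_score(post):
    # The score only measures what makes people interact and share;
    # accuracy never enters the calculation.
    return post["predicted_shares"]

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(post["id"], post["predicted_shares"], "accurate:", post["accurate"])
# The rumor ranks first purely because it spreads better.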
[01:08:34.643]MIKE: Okay, from our Twitter
feed tonight, Jana.
[01:08:36.479]Will AI be proprietary or
controlled by big business?
[01:08:41.149]JANA: I think it's really hard.
[01:08:43.519]I believe that it's gonna be
hard to control it right now.
[01:08:48.491]And going back to
my point about,
[01:08:50.926]yeah, Google and
Facebook and Microsoft,
[01:08:55.965]and all the big guys are
doing a lot of AI research,
[01:08:58.734]but they don't
have a hold on it.
[01:09:00.269]There are still
plenty of universities
[01:09:03.072]who are trying to
train people in AI.
[01:09:05.140]So I don't think that
we have to worry about,
[01:09:09.411]oh, three people
are going to own
[01:09:11.779]all of the AI in the world.
[01:09:13.916]I think it's way
too far out there.
[01:09:16.118]What used to happen is,
[01:09:17.386]only people like a Los Alamos,
[01:09:20.890]big research institutions,
[01:09:22.957]had the compute
power and the data.
[01:09:24.760]But now that that's free,
[01:09:26.761]I think that's really...
[01:09:29.098]And by free I don't mean
it doesn't cost anything,
[01:09:31.399]but it's out there,
[01:09:33.269]I think it's gonna be really
hard for anyone to really,
[01:09:36.971]including any nation,
I get asked that a lot,
[01:09:39.608]you know, are we
gonna lose to China?
[01:09:45.014]Yes. But is it a...
[01:09:48.417]And so I don't think
[01:09:51.120]that there's gonna be that kind
of control in any one area,
[01:09:54.623]unless we really
screw up regulation.
[01:09:59.028]MIKE: Is your contrast between artificial intelligence
[01:10:00.896]and human intelligence
[01:10:02.231]also applicable to the
condition of ethics?
[01:10:04.400]Do you think there's a link
[01:10:05.568]between the intelligence
and ethics of both sides?
[01:10:12.641]JANA: Yeah, there probably, wow.
[01:10:15.044]I've never been asked
that question in that way,
[01:10:17.446]and there, you know,
[01:10:20.349]I went to the whole, like,
we don't understand ethics,
[01:10:22.885]which is some of
what I was saying is,
[01:10:26.422]we're not good at having these conversations.
[01:10:28.958]We feel like our
ethics, we understand,
[01:10:32.795]and they're kind of
hard lines to us.
[01:10:35.164]But the person next to
us may feel differently.
[01:10:38.267]And so we don't often
have those conversations.
[01:10:41.203]So yeah, I would say
they're very highly linked,
[01:10:44.540]because we're not good
[01:10:45.841]about talking about just
our own intelligence,
[01:10:47.710]and what we would do in
an ethical situation,
[01:10:50.646]because we feel
like that's taboo.
[01:10:53.048]And so therefore we don't
do it with machines.
[01:10:56.685]And maybe, actually, I hadn't
thought about it that way,
[01:10:59.221]but it could be
kinda cool in that,
[01:11:02.124]maybe having it kinda
be something different,
[01:11:05.828]and it's not us,
[01:11:06.996]and talking about it
in the technology sense
[01:11:09.131]will actually help
us talk about it,
[01:11:10.799]ourselves as human,
with human intelligence.
[01:11:16.705]MIKE: On a recent 60 Minutes episode,
[01:11:17.673]it was stated that China
[01:11:18.607]is much more AI research
oriented than the US.
[01:11:21.610]Do you agree?
MIKE: All right,
JANA: Next question.
[01:11:25.414]Search for chicken,
eggs, and bacon,
[01:11:29.184]and Jana Eggers, and you'll see.
[01:11:31.787]Actually, Kai-Fu Lee wrote the
book, called AI Superpowers,
[01:11:36.325]and it just came out
a few months ago,
[01:11:38.093]and that's why it's
being talked about a lot.
[01:11:39.862]And I go through it,
actually, in that article.
[01:11:41.864]That's what got me to the
chicken, egg, and bacon thing.
[01:11:44.033]I actually go through
his main points.
[01:11:47.269]Now, I wrote that before
he wrote the book,
[01:11:50.906]'cause he had written a
World Economic Forum post
[01:11:53.575]that is a very brief
summary of his book
[01:11:56.345]before he wrote it.
[01:11:57.613]He has some other points,
[01:11:59.415]I have a whole presentation.
[01:12:00.516]He has added some other points.
[01:12:03.152]And I do think some
of those points
[01:12:05.587]are a bit more on point,
[01:12:06.855]as far as why China could win,
[01:12:09.591]but I just, I don't think
that's gonna happen.
[01:12:12.327]MIKE: All right. At the
end of your talk, Jana,
[01:12:14.863]you mentioned that you hope
AI could help cure cancer.
[01:12:17.933]But cancer and disease
processes are just algorithms
[01:12:20.235]that operate without
sufficient regulatory oversight
[01:12:22.838]and disrupt homeostasis
in their host organism.
[01:12:26.408]How can AI solve cancer
without being a cancer?
[01:12:33.348]JANA: Is there an ethicist
in the room?
[01:12:34.950](woman calling out)
[01:12:39.254]So I think it's a good point.
[01:12:44.293]Part of the challenge there is,
[01:12:47.730]we may be able to solve
that biological problem,
[01:12:51.767]but in doing that
we create something
[01:12:54.036]that causes some other
problem that isn't biological.
[01:12:59.141]Does that mean we shouldn't
solve the biological problem?
[01:13:04.446]So that's how I would
look at it, personally.
[01:13:07.583]Again, I'm not a
pessimist on AI,
[01:13:11.487]so I'm not gonna
look at it and say,
[01:13:14.723]"Well gosh, if I cure cancer,
[01:13:17.392]"I'm really in a worse position,
[01:13:18.961]"because I've created
this horrible thing
[01:13:20.763]"that's also going to do X."
[01:13:22.765]And I don't even know
what that X would be,
[01:13:23.999]but I can imagine
there would be an X.
[01:13:26.368]But I would say we'd figure
out ways of controlling that.
[01:13:31.039]I mean, you know, I
worked at Los Alamos.
[01:13:33.475]We dealt with the nuclear
weapons issue all the time.
[01:13:36.478]I regularly got
stopped on a run
[01:13:39.281]'cause they were moving nuclear materials.
[01:13:42.818]We've handled that.
[01:13:45.687]It doesn't mean
it's solved forever.
[01:13:48.390]But we do have ways
of handling things
[01:13:51.059]that do become dangerous.
[01:13:53.695]Doesn't mean we'll always win,
[01:13:55.864]but I think it's worth
the risk to solve cancer.
[01:13:59.101]MIKE: Thanks for thinking
through that with us.
[01:14:01.770]Before Jana takes her final
question this evening,
[01:14:03.872]I want to thank each of you
for joining us here tonight
[01:14:06.008]at the Lied Center,
[01:14:06.575]and via our web stream.
[01:14:08.777]I also want to encourage you
to watch for more information
[01:14:10.779]about the exciting next season
[01:14:12.748]of the E.N. Thompson
Forum on World Issues,
[01:14:14.583]and hope to see you back then.
[01:14:16.585]Jana, one final
question this evening,
[01:14:20.489]How about the role
of AI in government?
[01:14:22.991]And are you aware of
any increased usage
[01:14:24.927]of AI by the government?
[01:14:27.062]JANA: I was hoping you were
gonna ask me something,
[01:14:29.164]like about Razorback
football, or (laughing).
[01:14:31.900]MIKE: That's not quite
the color here, but--
[01:14:35.170]JANA: I thought, you
know, Boston sports.
[01:14:37.773]You know, I thought we were
gonna have something fun,
[01:14:39.808]but we're talking about government.
[01:14:42.311]As I mentioned, I'm
actually a big fan of GDPR.
[01:14:50.219]And my reason is,
as I've said before,
[01:14:52.721]I think data, and us being
good custodians of data, matters,
[01:14:57.793]and I'm not saying it's perfect,
[01:14:59.061]but I think it's a right
step going forward,
[01:15:01.363]to tell people what
you're collecting on them,
[01:15:05.934]allow them to opt out,
[01:15:08.370]and be able to actually take it
out of the algorithm
[01:15:11.206]and its impact on the algorithm.
[01:15:13.141]I think those are all
really good things to do.
[01:15:15.744]So I do believe that we can
come up with smart regulations.
[01:15:20.215]I do worry
[01:15:22.451]that the people that are
gonna make the regulations
[01:15:25.087]aren't gonna understand the technology,
[01:15:27.322]and that's where I think
that we have to be involved.
[01:15:29.892]I'm not good at that,
[01:15:30.826]'cause I have little
patience for bureaucracy.
[01:15:33.395]That said, I know some
people that are great at it,
[01:15:36.465]and I do give them my
point of view (laughing),
[01:15:38.500]on a regular basis,
whether they ask me or not.
[01:15:41.670]So yes, I think government
has a role in it.
[01:15:44.506]I don't think that they're
going to be the great overseers.
[01:15:47.409]I get nervous, for example,
[01:15:49.244]when Elon Musk says, "Yes,
government should regulate!"
[01:15:52.848]Because I think he wants to
be the person that regulates.
[01:15:55.951]And so I worry about that,
[01:15:58.520]because I'm not in the same
place on technology as he is.
[01:16:01.623]He thinks it's gonna kill us.
[01:16:02.991]He thinks we're
summoning the demon.
[01:16:04.559]I don't think we're
summoning the demon.
[01:16:07.763]And so that's where I get
nervous about regulation,
[01:16:12.634]is that the wrong people
would be doing it,
[01:16:16.371]either because they don't understand it,
[01:16:18.307]or because they do understand
[01:16:19.775]and they're gonna do it
for their own advantage.
[01:16:22.878]MIKE: Thank you.
[01:16:24.079]Jana, we're much smarter
and much more future-focused
[01:16:26.415]as a result of your talk tonight.
[01:16:27.683]Ladies and gentlemen,
[01:16:28.684]please join me in
thanking Jana again.
[01:16:31.553]Thank you so much.
[01:16:32.387]JANA: Thank you.
[01:16:33.388]Thank you very much.
[01:16:34.356]Thank you guys.