The Tech Humanist Show: Episode 8 – John C. Havens

About this episode’s guest:

John C. Havens is Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. He is also Executive Director of the Council on Extended Intelligence (CXI). He previously served as an EVP at a top-ten global PR firm, where he counseled clients like Gillette, HP, and Merck on emerging and social media issues. John has authored the books Heartificial Intelligence and Hacking Happiness and has been a contributing writer for Mashable, The Guardian, and The Huffington Post. He has been quoted on issues relating to technology, business, and well-being by USA Today, Fast Company, BBC News, Mashable, The Guardian, The Huffington Post, Forbes, INC, PR Week, and Advertising Age.
John was also a professional actor in New York City for over 15 years, appearing in principal roles on Broadway, television, and film.

He tweets as @JohnCHavens.

This episode streamed live on Thursday, September 3, 2020. Here’s an archive of the show on YouTube:

About the show:

The Tech Humanist Show is a multimedia-format program exploring how data and technology shape the human experience. Hosted by Kate O’Neill.

Subscribe to The Tech Humanist Show hosted by Kate O’Neill channel on YouTube for updates.

Transcript

01:44
because something’s screwing up
01:46
the live stream
08:48
hello everybody hello humans
08:53
uh glad to see some of you turning out i
08:56
think
08:56
uh everybody’s kind of excited we have a
08:59
good show lined up for you today
09:01
let me hear from the from those of you
09:02
who are already online
09:04
where are you tuning in from uh who’s
09:07
out there
09:08
say hi let’s get some audience
09:10
interaction going because we’re gonna
09:12
want that audience interaction i want
09:13
you guys asking questions
09:15
of our guest today i will have some nice
09:17
interactions some fun
09:19
uh we’ll have a good time so go ahead
09:21
and start commenting let me know who’s
09:23
out there and
09:24
and where you’re where you’re dialing in
09:26
from
09:28
hope you’re not dialing in we’ve uh
09:31
we’ve moved past the dial-in days on the
09:33
internet
09:34
uh thank goodness so uh if you’re just
09:37
joining
09:38
for the first time this is the tech
09:40
humanist show
09:42
and it is a multimedia format program
09:44
exploring how data and technology shape
09:47
the human experience
09:48
we’ve got sam lau from southern
09:51
california hi sam
09:52
welcome um and
09:56
go back here so i’m your host obviously
09:58
kate o’neil and
09:59
uh oh i’m georgia from chicago
10:04
that’s my mom my mom is tuned in that’s
10:07
fun
10:08
we’ve got mark bernhard from wisconsin
10:11
yay hi mark
10:13
and davia davia uh tuning in from
10:16
jamaica glad to have you
10:19
we’re we’re truly cosmopolitan now we’re
10:21
all over the place
10:23
so you hopefully your fault i see i see
10:26
um
10:27
sam and mark are tuned in from linkedin
10:30
and
10:30
uh my mom georgia is tuned in from
10:32
youtube
10:33
and davia is tuned in from facebook so
10:35
good we’re getting good
10:36
good uh reach across all the different
10:38
channels we stream across
10:40
youtube facebook linkedin twitter and
10:43
twitch although no one watches on
10:45
twitch if you’re watching on twitch
10:47
give a special shout out because i don’t
10:50
think we’ve had
10:51
any viewers from twitch so far um
10:54
all right so i’m going to go ahead and
10:56
introduce our guest because i know
10:57
that’s why
10:58
a lot of you are tuned in today it’s
11:00
really exciting
11:02
today we are talking with john c havens
11:05
who is executive director of the ieee
11:08
global initiative on
11:09
ethics of autonomous and intelligence
11:11
systems he is
11:12
also executive director of the council
11:14
on extended intelligence or cxi
11:17
he previously served as an evp at a top
11:20
10 global pr firm
11:22
where he counseled clients like gillette
11:24
hp and merck
11:25
on emerging and social media social
11:28
media issues
11:29
he has authored the books heartificial
11:31
intelligence if you caught that it’s
11:33
heartificial intelligence and hacking
11:36
happiness
11:37
and has been a contributing writer for
11:38
mashable the guardian and the huffington
11:40
post he’s been quoted on issues related
11:42
to technology
11:43
business and well-being by usa today
11:46
fast company bbc news
11:48
mashable the guardian the huffington
11:50
post
11:51
forbes inc pr week and
11:54
advertising age it just goes on and on
11:56
wait there’s more this is the best line
12:00
in the entire bio you ready
12:02
john was also a professional actor in new york
12:05
city for over 15 years
12:07
appearing in principal roles on broadway
12:09
television and film so please audience
12:11
start getting your questions ready for
12:13
our fantastic
12:14
guest and please welcome the obviously
12:18
multi-talented
12:19
john c havens john you are live on the
12:22
tech humanist show
12:24
yeah kate o’neil hey
12:28
thank you so much for being here it is
12:31
an honor to be here seriously i’m stoked
12:33
to be on your show thank you
12:34
i’m stoked to have you you have a fan
12:37
following
12:39
when i announced the show which i typically do over
12:42
the weekend and
12:43
all of a sudden there was just this
12:45
mountain of
12:46
notifications streaming in
12:49
people were super
12:51
excited and and
12:52
i got a bunch of outreach from in and
12:54
everywhere going like oh my gosh i’m so
12:56
glad you’re having john on the show so
12:58
your your audience is very excited right
13:01
now
13:03
well thank you very much and again honor
13:05
to be here and by the way you were
13:06
rocking the glasses
13:07
headset thing i mean it’s like you
13:10
look
13:11
really good i could probably just like
13:13
glue the top of them on to the top of my
13:15
headset
13:18
i’m never gonna be seen without the
13:20
sunglasses on my head it’s one of my
13:21
please
13:23
i’d be sad one of my brand things i
13:25
don’t know so naturally john i gotta
13:27
start off
13:28
the question that’s the part of your bio
13:30
that’s the least relevant to the topic
13:32
of the show
13:33
but the most colorful so you were a
13:35
professional actor tell us all about it
13:38
yeah i moved to new york in 1992 um
13:41
when i was like four i’m kidding anyway
13:44
i moved to new york and
13:46
yeah i was in the screen actors guild uh
13:48
equity had a great agent
13:49
for about 15 years i did small parts but
13:53
like in law and order
13:54
law and order svu i did a part in a
13:56
broadway show
13:57
and uh yeah i was an actor for 15 years
14:00
that’s fantastic
14:01
i i love so it just sort of reinforces
14:04
my
14:05
long-standing theory that people who
14:07
have lived multiple lives within their
14:08
lifetime
14:09
are the most interesting that’s very
14:13
cool
14:14
and so what what’s the trajectory there
14:17
though like
14:18
when do you go from acting to advising
14:21
companies on social media to
14:23
leading ethical guidance for the world’s
14:25
largest association of technical
14:26
professionals
14:28
sure that’s a normal trajectory um
14:32
i think a lot of it comes from
14:33
introspection um my dad was a
14:35
psychiatrist he’s passed away
14:37
my mom is a minister so apparently i was
14:40
just raised in a household where like
14:41
examining your feelings was kind of a
14:43
thing
14:44
um and then acting most of your work
14:46
being the human condition
14:48
and then i was on sets a lot of times my
14:50
parts were comedic
14:51
roles and i did a lot of really bad
14:53
industrial films like bob i don’t think
14:55
the photocopier works that way
14:58
and they’d say can you make this funny
15:00
and so i went from writing scripts
15:02
uh to then working in pr and that’s
15:05
where i got the pr
15:06
job back when i got that pr job my
15:08
friend was like
15:10
come help me run the new york office of
15:11
this big pr firm and i was like i don’t
15:13
know about pr
15:14
no one here understands twitter and i was like
15:17
um
15:17
and then i found out about ieee when i
15:20
was writing a book on ai ethics
15:21
and i pitched them about this idea so
15:23
that’s the fast version of the
15:25
trajectory
15:26
well how did you get to writing the book
15:27
on ai ethics was that part of the work
15:29
you were doing with the pr firm or was
15:31
that
15:31
on your own somehow pure unbridled fear
15:35
fear matter seriously it was about
15:38
six years ago i was writing a series for
15:40
mashable all those articles are still
15:42
live and um what i was finding is that
15:45
even back six years ago there were there
15:48
were only the extremes
15:49
here’s the dystopian aspect of ai here’s
15:52
the utopian and i just kept calling
15:55
people
15:55
and saying okay is there a code of
15:57
ethics for ai because i’d like to know
15:59
and that will kind of help balance
16:01
things out and more and more no one knew
16:02
of ones
16:03
like like here’s the here’s the code of
16:05
ethics for ai from the yada yada you
16:07
know
16:08
doc you know and so i was like that
16:10
seems like a good thing to have
16:11
yeah and you have
16:14
helped create what is one of the most
16:18
useful and informative sets of design
16:21
ethics but
16:22
or design guidelines i should say but
16:24
we’ll come to that because
16:26
i want to make sure that we also build
16:28
into you know
16:29
your uh your job now your your multiple
16:32
roles there are a lot of words
16:36
i wonder if you could briefly explain
16:38
your various roles for us
16:41
so many words i’m going to take the rest
16:43
of this episode
16:45
enjoy uh well first of all i should say
16:47
this i am deeply honored to work at ieee
16:50
i love my job
16:51
on the show i’m john so i’m speaking
16:54
it’s john not all of my
16:56
statements formally obviously um you
16:58
know represent at your belief so
16:59
disclaimer alert retweets are not
17:01
endorsements yeah exactly so you know
17:04
but i just want to say that um so one
17:06
job
17:07
what happened is that this book about uh
17:09
a artificial intelligence was really
17:11
saying
17:12
what is it about our own values as
17:14
individuals we may not know
17:16
because if we don’t ask we won’t know
17:18
and then if we don’t know them we can’t
17:20
live to them
17:21
so it’s pretty basic right people often
17:23
ask why am i
17:24
unhappy and if you don’t actually know
17:26
your values
17:27
maybe you’re not living to your values
17:28
it’s not the only reason you’ll be
17:30
unhappy but it’s one of them
17:32
so this is a big jump so enjoy anyone
17:34
technical on the phone
17:35
but when you come to like data
17:37
scientists uh
17:38
hci human computer interaction values
17:42
alignment it’s a technical term but it’s
17:44
similar right
17:45
what are you building who’s going to use
17:47
it what are their values how can you
17:49
align it
17:50
so i was writing this book um really
17:52
thinking about
17:53
how is our personal data related to our
17:55
values and then how are all these
17:57
beautiful machines and technologies kind
17:59
of in one sense looking back at us
18:01
and i just had the really good fortune
18:03
there were some senior people from ieee
18:05
in the audience it was at south by
18:06
southwest they’d asked me to come speak
18:09
and i pitched them and i got really
18:10
lucky there’s this
18:12
guy named konstantinos karachalios he’s
18:14
the managing director of ieee standards
18:16
association
18:18
and he and so many people at ieee had
18:20
already been planning something along
18:21
these lines
18:22
so i was really a catalyst and then
18:24
there’s hundreds of people who’ve
18:26
actually really done
18:27
the work to create all the work perfect
18:30
because you know
18:30
i think a lot of people for a long time
18:33
have thought of
18:34
ieee as a sort of a a dry
18:37
organization concerned primarily with
18:39
standards and whatnot i mean that that’s
18:41
kind
18:42
of the impression that i had when i
18:44
first came into tech
18:45
25 years ago so it’s interesting to
18:48
change and
18:49
and you know to know the origin of that
18:52
change but
18:53
how did how did you come to hold roles
18:55
that are so clearly focused on human
18:57
impacts was it that the
18:58
shape was already being created or did
18:59
you bring that that
19:01
uh the vision of that to the role
19:05
um well first of all truthfully uh the
19:08
tagline itself is one of the reasons i
19:10
actually wanted to work
19:12
with the organization it’s advancing
19:13
technology for humanity
19:15
i actually i genuinely love that yeah
19:17
when i when i first pitched this idea
19:19
and konstantinos
19:21
it resonated with him he built on it i can’t
19:23
say enough good things about him
19:25
he’s kind of a mentor and he’s brilliant
19:27
but that word for
19:28
f-o-r right advancing technology for
19:31
humanity it goes back to values
19:34
what is the success you’re trying to
19:36
build you can’t just be like yay
19:38
we’re advancing technology for humanity
19:40
how are you doing that
19:41
what does that mean so he and so many
19:44
other people within ieee
19:46
and then ieee is volunteer driven so 700
19:49
people
19:50
wrote ethically aligned design uh
19:52
konstantinos and the team kind of helped
19:54
shape how it started
19:55
but then it was really the experts who
19:57
wrote the different sections
19:58
in consensus that created the document
20:01
it got
20:01
pages of feedback and it had three
20:03
versions and a lot of that feedback also
20:06
came from people
20:07
the first version was like americans
20:08
people from the eu created it
20:10
but then we got feedback from south
20:12
korea mexico city and japan which was
20:15
awesome
20:15
because many of them said this seems
20:18
really good but it feels non-west
20:19
or yeah it feels very western you need
20:21
more non-western views and so
20:24
that always to me and this is like the
20:25
favorite part of my job is like huh
20:28
feedback that means you want to join a
20:30
committee awesome
20:34
well that’s great i i think it’s really
20:36
uh it’s important that you were open to
20:38
that feedback that you you know you got
20:40
that kind of feedback
20:41
it it shows a lot of trust that your
20:44
constituency
20:45
came back and said can we incorporate
20:48
more of a viewpoint that
20:49
that that deviates from you know this
20:52
kind of western
20:53
standard agreed
20:57
yeah we so i you know i think what’s so
21:00
interesting to me is you know
21:01
i i read through your book heartificial
21:05
intelligence
21:06
and you have a couple of uh quotes in
21:09
there that that really stood out so
21:10
one is that you said i am not anti-ai
21:14
i am pro-human which you know that
21:16
resonates with me
21:17
um but also i feel like it ties into
21:20
what you were just talking about it with
21:21
the slogan of
21:22
ieee but what to you what does it mean
21:24
to be
21:25
pro-human yeah by the way i owe you a cup
21:29
of coffee for reading the whole
21:30
book it was wonderful
21:33
oh thank you thank you um i think
21:35
especially from a media narrative
21:37
standpoint there’s a lot of
21:39
us versus them titles that we’re often
21:42
working against in both ieee and the
21:44
work and the council on extended
21:45
intelligence
21:46
where you read and i’m doing this for
21:48
effect right you know
21:50
x new ai whatever new ai program
21:53
playing a sport or something destroyed
21:56
this human in soccer right these
21:59
extreme hyperbolic terms like
22:01
eviscerated a human
22:04
and it’s like how do you read that and
22:05
as a human just as anybody
22:07
not feel kind of like crap you’re
22:10
like
22:11
and it it makes the technology and the
22:14
human
22:14
feel devalued and more importantly
22:18
the pro human thing means it’s okay to
22:20
recognize
22:22
that humans are inherently different
22:25
than the machines and the tools that
22:27
we’re building and to honor both
22:29
you can say here’s where they’re
22:31
different it doesn’t mean you’re saying
22:32
this is bad
22:33
but for instance and i won’t go into
22:35
this unless you want to because it gets
22:37
very philosophy geeky
22:38
right but i’m all about the philosophy
22:40
geeky
22:41
all right two hours later
22:46
a lot of western ethics is built on
22:48
rationality right and rationality is where a
22:50
lot of like democratic ideas come from
22:52
which is awesome right but the yes and to
22:55
rationality
22:56
is things like relationality how do you
22:58
and i interact as people with our
23:00
emotion
23:01
and then the systems how do we interact
23:03
with nature
23:04
so if you kind of look through one
23:07
lens
23:08
at only a feature of who humans are it
23:11
can be easy to say well humans are only
23:13
about what’s in our brain
23:15
and once john’s just information the
23:17
cognitive
23:18
sort of stuff i have in my hard drive is
23:20
kind of spilled out
23:22
that’s all i am but i’m a musician i’m
23:24
an actor i’m a dad
23:26
i’m a friend of kate honored to be here
23:28
right
23:29
and that means those those ephemera
23:32
are not minor in terms of how it means
23:35
we relate to each
23:36
other and we relate to the world and
23:37
then when you go especially to
23:38
non-western traditions like the shinto
23:40
tradition in japan
23:42
or many indigenous traditions around the
23:44
world we cannot assume
23:46
anybody the royal we making technology
23:49
that unless we know how others frame
23:52
these ethical questions about how they
23:54
look at humans
23:55
that the systems we’re building are going to be
23:57
applicable to them we have to know what
23:59
they are
24:00
and work together um you know towards
24:02
consensus so say all that
24:04
what i want to get into more in a
24:08
minute is you know that sort of
24:12
compilation of ethical views and all the
24:15
all the philosophical viewpoints that
24:17
sort of cobble together
24:19
to inform that but but i still want to
24:21
stay with this pro-human
24:22
idea because i feel like also what
24:24
you’re talking about there
24:26
you know you talked about the human
24:27
condition earlier and it feels like
24:30
some of what you’re saying is this
24:32
multi-dimensionality is a really
24:33
important facet
24:34
of humanity and of being pro-human is
24:37
that fair is that a fair
24:38
characterization of what you’re saying
24:40
yeah i think it’s easy and i sympathize
24:42
or i should say understand
24:44
a lot of times people are like well let
24:46
usually it’s ai but let this technology
24:48
take over because humans have screwed
24:50
everything up
24:51
right it’s a it’s the sentiment is
24:55
understandable right people make
24:57
mistakes we’re all flawed
24:58
but i’m not quite sure what someone yeah
25:01
i tend to get frustrated too sometimes
25:02
with those statements because i’m like
25:03
well
25:04
a people build the systems
25:07
for you so you know like guess what
25:10
secondly it’s the systems underneath the
25:13
technology that need to be addressed
25:15
right and then third i have this sitting
25:17
by my desk
25:18
um i’ll read this to you it’s this
25:20
japanese adage
25:22
in japan broken objects are often
25:24
repaired with gold
25:25
the flaw is a unique piece of the
25:28
object’s history which adds to its beauty
25:30
consider this when you feel broken right
25:33
like what are we supposed to be
25:34
perfect what does that mean and what’s a
25:37
perfect man what’s a perfect woman
25:38
what’s a perfect american what’s a
25:40
sense not who cares right we’re asking
25:43
to understand our values
25:45
but the starting point for me as a
25:47
person and a lot of work that we’re
25:48
doing and i
25:49
focused on well-being is to say
25:52
inherently all humans have
25:53
worth simply because they exist and so
25:56
to start to frame the humanness being
25:58
worthwhile
26:00
because of what’s up here immediately means
26:03
we’re saying
26:04
we’re willing to design technology that
26:06
is in one sense only for a very small
26:09
portion of the planet
26:10
which is not the case with me is not the
26:12
case with ieee
26:14
so if that makes sense that’s the deeper
26:16
human stuff it makes great sense to me
26:18
i also want to remind the audience feel
26:20
free to start
26:21
funneling in any kind of questions if if
26:23
you’re hearing what john is saying and
26:25
you have questions about what we’re
26:27
talking about please go ahead and ask
26:28
them but you know here’s one that i have
26:30
is
26:30
another excerpt from heartificial
26:32
intelligence is you wrote
26:34
if machines are the natural evolution of
26:36
humanity we owe it to ourselves to take
26:38
a full measure of who we are right now
26:40
so we can program these machines with
26:42
the ethics and values we hold dear
26:44
and here’s a question i get asked all
26:46
the time and i’d love to pass it along
26:48
to you
26:48
whose ethics and whose values are we
26:51
programming
26:52
and how can we be sure we’re getting
26:53
that decision right
26:56
uh my ethics
26:59
john’s way or the highway
27:02
you know it’s not aggressive it’s just
27:04
it’s the way to go no
27:06
question i mean first of all um applied
27:09
ethics right there’s a lot of discussion
27:10
around
27:11
ai ethics and i’m using air quotes maybe too
27:14
much
27:14
but it’s a huge phrase what do we mean
27:17
by ai
27:18
is it machine learning is it you know
27:20
inverse reinforcement learning what do
27:22
we mean by ethics
27:23
is it just philosophy or is it
27:25
compliance
27:27
but the basic idea is applied ethics is
27:30
essentially design right a form of
27:32
design it’s saying we want to build a
27:34
technology
27:35
who are we building it for what is the
27:38
definition of value
27:39
for what we’re building oftentimes the
27:42
value is framed in
27:43
exponential growth right not just profit
27:45
i want to be clear
27:46
we all need money to pay bills and and
27:48
profit is what sustains an organization
27:51
but exponential growth is an ideology
27:55
that it’s not just about getting some
27:57
profit or speed
27:58
it’s about doing this well when you when
28:01
you maximize
28:02
any one thing other things by definition
28:05
empirically take less of a focus
28:09
and especially with humans that can be
28:11
things like mental health
28:12
right i got to kick out this technology
28:15
to the world
28:16
because i’m pressured because of market
28:18
needs
28:19
this is not bad or evil this is why the
28:21
term ethics can be so confusing
28:24
but it is a decision and in this case
28:26
it’s a key performance indicator
28:27
decision
28:28
where there may be pressure the priority
28:30
is to get something say to market
28:32
versus how can we get something to
28:34
market that best
28:35
honors end user values in the context of
28:38
the region where they are
28:40
kind of to your last question and then
28:42
also how do we understand
28:44
what risk and harm is in the algorithmic
28:47
era
28:48
because one thing i’ll say quickly here
28:49
is a lot of times people are like ai is
28:51
just the new tech
28:52
you know and i’m like sorry it’s just
28:54
not here’s why
28:57
data right is key 100 years ago like the
29:00
first car
29:00
or whatever didn’t have data that would
29:03
measure us and then go to the cloud
29:06
so human data being measured and the
29:08
ability to immediately go to the cloud
29:10
is utterly different and how that data
29:13
is translated back to us about who we
29:15
are is deeply affecting human agency
29:18
identity and emotion
29:19
yeah it’s almost like the the earlier
29:21
example the car
29:23
is deciding where to drive us or at
29:25
least recommending
29:26
well you’re saying you want to drive to
29:28
chicago but detroit is nicer this time
29:31
of year
29:32
you should really go to detroit
29:35
well what do we do about all of the
29:38
human bias that’s already encoded into
29:40
data sets and algorithms and business
29:42
logic and
29:43
and all of that i think the easiest
29:46
thing is just hate
29:47
everyone universally right just pure
29:49
irrational
29:50
yeah no not relational but rational yes
29:54
um i think first of all for me is
29:57
there’s different levels i’m learning
29:59
about bias and again i want to be clear
30:01
i’m speaking here as john
30:02
not as a representative of the whole
30:03
organization um and if i have the book
30:05
here i’ll show it
30:06
yeah i have the book here so one thing
30:08
that you know everyone
30:10
assumedly in the industry the ai
30:12
industry is focused on is things like
30:14
eradicating bias
30:16
and here personal heroes of mine joy
30:18
buolamwini
30:19
uh has done some phenomenal work with
30:21
aspects to um
30:23
you know any device that won’t measure
30:25
brown or dark skin or or black skin
30:27
tones in the same way as white tones
30:29
she’s also done some amazing work with
30:31
the actual terminology
30:33
and i’m blanking on the term but like
30:35
the taxonomy of how
30:37
different data sets are created around
30:40
the framing of those skin colors anyway
30:42
joy buolamwini
30:43
awesome the thing i think i’m just
30:46
learning
30:47
and i’ll hold up the book that’s called
30:49
race after technology by dr ruha
30:51
benjamin from princeton
30:52
and forgive me if you i know you’ve
30:54
interviewed people i think you’ve talked
30:55
about this type of stuff
30:56
um and i heard about her from the
30:58
radical ai podcast
30:59
shout out to my friends dylan and jess
31:01
they have a great show
31:02
um benjamin was on the show she gave the
31:05
example about bias and i’m going to
31:07
paraphrase this wrong so please
31:09
read her book but the logic is for
31:11
people creating tools
31:13
looking for data anyone creating ai they
31:16
might go to say like i live in new
31:18
jersey right so there’s an area of new
31:19
jersey where there’s a hundred thousand
31:21
citizens
31:22
who have been measured by one metric
31:24
which is the census data
31:25
right so a hundred thousand people live
31:27
here then there’s data about
31:29
something health or medical oriented all
31:31
these hundred thousand people
31:33
x amount did whatever in terms of i
31:35
don’t know cardiology
31:37
now that that insight or that data about
31:40
that data set
31:41
is now what’s being used hypothetically
31:44
or in reality but i’m giving an example
31:46
by everyone creating ai and then they’re
31:49
saying
31:50
we’re saying let’s make sure that that
31:52
ai
31:53
is accountable and transparent fair and
31:55
all those things which we should
31:57
but she made the key point to me which
31:58
blew my mind and i’m
32:00
frankly a little embarrassed i hadn’t
32:01
thought of it before is
32:03
the assumption is that of those hundred
32:06
thousand
32:07
all hundred thousand citizens have
32:09
access to the health and medical data
32:12
when in fact whether it’s marginalized
32:14
populations whether it’s people that
32:16
just didn’t have you know
32:17
they weren’t able whatever the number
32:19
may be significantly lower
32:21
so understanding and analyzing the systems
32:24
by the way is a design
32:25
thing i know the term marginalized
32:27
obviously can be very
32:29
heated and whatever else for me let’s
32:32
move some of those terms out they’re
32:33
critically important
32:34
but the point is as people who want to
32:36
design this technology holistically well
32:38
for everyone
32:40
especially dr benjamin’s ideas really
32:42
helped me think about
32:43
we have to be thinking about building
32:45
for all not just those who we are
32:47
building for not realizing who we’re
32:49
missing
32:49
in the process well and you’re speaking
32:52
about
32:52
layers of design right it’s it’s the
32:56
important thing about a term like
32:57
marginalization and what it implies
32:59
is that there are systems and we can
33:01
recognize that there are
33:02
but you’re i think a lot of times the
33:05
the
33:06
design of technology or of technological
33:08
experiences
33:10
is focused on the technology and not on
33:13
the sociological
33:14
and cultural implications that are
33:16
wrapped in and around that technology
33:18
and i think
33:19
so much of what’s important about the
33:21
work that you’ve been doing and the work
33:22
of some of the people that you’ve
33:23
mentioned
33:24
is to unpack a lot of those those
33:26
assumptions and say
33:28
it’s not just going to exist in a void
33:31
or vacuum
33:32
it’s going to be used in culture and
33:34
these things create
33:36
experiences that scale our culture and
33:38
we need to be able
33:39
to understand you know what the
33:41
implications of of those design
33:42
decisions are
33:44
yeah exactly so i i want to ask you too
33:48
about automation because we have
33:50
actually had a pretty good amount of
33:51
discussion with some of the guests who
33:53
have been on
33:54
the show past episodes of the show so
33:55
far about ai ethics
33:58
and less about automation per se
34:01
and obviously i realize that a lot of
34:03
what needs unpacking about automation
34:05
does have to do with intelligence
34:07
but there are still questions about what
34:09
we automate and how
34:11
and who is affected so do you anticipate
34:14
ever being a discourse on the ethics of
34:16
automation that that gets much attention
34:18
that’s
34:19
separate or related to the ethics of ai
34:23
well i’m really glad you asked that we
34:24
for ethically aligned design
34:26
uh we actually use the term uh
34:28
autonomous and intelligent systems
34:30
because to your point you know if we
34:32
want to define artificial intelligence
34:34
we’d be here for seven hours
34:35
you know when you get in a room of
34:37
anybody defining it
34:39
it’s it’s very challenging so at least
34:41
to your point or i’m saying
34:42
i agree with you we said let’s talk
34:44
about automation
34:46
versus air quote intelligence without
34:48
being anthropomorphic
34:49
but the term intelligent systems is a a
34:52
classifier of say like certain types of
34:56
uh learning and what have you automation
34:59
you know everyone uses this example and
35:01
i always forget what it’s called but in
35:03
a car for the like the last 30 years
35:05
uh cruise control right you’re driving
35:07
at 60 miles an hour and you push a
35:09
button that’s automation
35:10
and then we’re used to with simple tools
35:13
i don’t know
35:14
spell check things like that that’s probably a
35:16
good example
35:18
a lot of my book was focused on what are
35:20
the things either that we don’t
35:22
ever want to automate we want to make
35:25
sure that we have the option to be in
35:26
the midst of
35:28
that process and not always automate
35:31
so a good example i give there say like
35:33
a dating app
35:34
right the tools and this is like
35:36
e-harmony and some of the other services
35:38
use
35:39
really complex and frankly very
35:41
impressive
35:42
machine learning uh algorithms to help
35:45
you choose who you’d want to be with
35:47
and by the way some of the some of these
35:49
things there’s not a moral or ethical
35:50
issue it’s like
35:51
do you live in denver colorado yes do
35:54
you want to date someone in nome alaska
35:56
no thank you you know so it’s not like
35:59
this complex thing
36:01
but the thing is at some point there may
36:03
be aspects of a decision someone else
36:05
has made
36:07
where you now aren’t in the mix and
36:09
maybe you won’t meet a person
36:10
who you would have met under different
36:12
circumstances by the way that happens in
36:14
real life as well
36:16
point is is if we have upfront
36:18
disclosure
36:19
about those tools access to data and
36:22
most
36:22
importantly we have a choice and we know
36:25
that we have that choice and we make the
36:27
choice
36:28
this is where for instance in my life i
36:30
don’t want anything
36:31
to quote automate my decision around
36:35
parenting right or whatever it is it’s
36:37
not that it’s wrong or right it’s just
36:38
that that reflects my
36:40
values is hey look at this parenting app
36:43
i don’t have to do anything hey son
36:45
talking to here apparently you’re mad
36:47
you know
36:47
yada yada what do i do okay i’m supposed
36:49
to spank you
36:50
right that is totally fictional but my
36:53
point is is like
36:54
how easy it could be to avoid any choice
36:57
this is not about the technology
36:58
technology is astoundingly beautiful and
37:00
amazing
37:02
but us not being in the mix means that
37:05
we don’t learn
37:06
ourselves or train ourselves or focus on
37:08
our own values well there was just that
37:09
article in i don’t remember if it was
37:11
the new york times or what but that was
37:13
about
37:14
parents sort of offloading the the
37:17
dictation of terms of various kinds to
37:20
their children
37:21
to their alexas and smart speakers so if
37:25
they need to tell
37:26
a kid what to do it’s like have the
37:29
smart speaker tell the kid what to do
37:31
or something like that so the job of
37:34
discipline
37:34
is already i think being automated in
37:36
some sense but i think
37:38
to some of what you’re saying there’s
37:40
there’s the distinction between
37:42
automating away versus automating around
37:45
like you know when you talk about
37:47
automating parenting
37:49
i think you know it’s implied that
37:50
you’re saying you don’t want to automate
37:51
away
37:52
parenting but you could certainly make
37:55
some
37:55
seamlessness or some conveniences around
37:57
parenting through automation and it
37:59
wouldn’t be
38:00
uh it wouldn’t necessarily be a moral uh
38:03
controversy right
38:05
no not at all and i’m glad you brought
38:06
it up and i will say though the metrics
38:09
are key here right like as a parent i
38:11
have two kids who were young i would
38:13
have given
38:14
a good amount of money to sleep through
38:16
the night when they were getting sleep
38:17
trained for instance
38:18
right and and a lot of these a lot of
38:21
these tools can read bedtime stories etc
38:23
but i wrote an article about this um i
38:26
have to i’ll send you the link for the
38:27
show notes um
38:29
where the question that i ask is
38:31
take something to the nth degree and
38:33
again to be clear this is not about
38:34
the tech
38:35
right this is about societal choices but
38:38
what happens if you use like six
38:39
different parenting
38:40
apps or tools and eventually your kid
38:43
says
38:43
you know i’m good you know dad i know
38:45
you wanted to go on a walk with me or
38:47
you wanted to talk about whatever but
38:49
i’m going to go through my six or seven
38:50
different things and
38:52
thanks so much i i don’t really i don’t
38:54
want you to read me a story i don’t want
38:55
you to take me on a walk i have a robot
38:58
i don’t need your advice about girls
38:59
great 21st century cat’s in the cradle
39:02
story and i’m not trying to judge any
39:05
family or or a kid
39:07
right like balance of i’m not telling a
39:09
kid what they should or shouldn’t do but
39:10
i think
39:10
if we sort of usurp i’m sorry if we
39:13
eschew
39:14
and give away that and what i’m saying
39:17
give away is more like the ultimate
39:19
sense of
39:20
of why looking at our values and this in
39:23
case for parenting
39:24
are so critical the answer is outside of
39:27
the technology
39:28
right or policy you may wake up one day
39:31
and be like
39:31
what did i just give away i don’t know
39:34
but the metrics are critical here
39:36
right because mostly a lot of times um
39:39
and this is outside of like gdp and
39:41
exponential growth
39:43
um we tend to focus on
39:46
what can i do to get from now to five years from now to be
39:48
happy and a lot of times that’s
39:50
productivity i can be more productive
39:52
so we ignore the now and a lot of my
39:54
work has been in positive
39:56
psychology my last two books focused a
39:58
lot most of gratitude is just being able
40:00
to look at what you have now
40:02
and say this is stuff i really treasure
40:04
and value
40:05
and then that’s where you’d be able to
40:06
make that decision if parenting for
40:08
instance is one of those
40:10
things well then i’m going to allocate
40:12
time with my kids
40:14
even though that half an hour or hour at
40:16
dinner i could be doing more work
40:19
so it’s actually very pragmatic and
40:20
practical and that’s most of what my
40:22
last two books are focused on
40:24
is please think about this so you can
40:27
make these choices
40:28
so you’re not 10 years later like you
40:30
know to your point cat’s in the cradle
40:32
weeping in your beer like why don’t my
40:34
kids talk to me anymore
40:35
you know they talk to the smart speakers
40:37
still
40:38
[Laughter]
40:40
it’s also it reminds me in in my own
40:43
work one of the things that i talk about
40:44
is
40:45
with with the concept of meaning uh
40:47
being a very human-centric
40:48
concept and that meaningful experiences
40:52
the meaningfulness is one of the the
40:54
great
40:55
sort of characteristics of experiences
40:57
that we should be
40:58
trying to design through technology and
41:00
beyond but
41:01
that one of the things that happens with
41:03
automation it feels like is that we
41:05
focus a lot on as you say productivity
41:08
and we try to automate the things that
41:10
are
41:10
mundane or repetitive or that that feel
41:13
like they take away
41:15
our cognitive focus and yet
41:18
i feel like if you take that to scale
41:21
and you
41:22
only have automated the things that are
41:24
mundane and meaningless
41:25
then you end up in a horrible dystopia
41:28
when that
41:28
is what surrounds us and so there’s this
41:31
kind of counterpoint where i feel like
41:33
we need to be infusing more meaning and
41:35
i think it comes back to your idea of
41:36
infusing the values into into the
41:39
discussion and making sure that what’s
41:41
automated reflects
41:42
meaning and reflects values but it isn’t
41:45
automating the meaningful
41:47
things that you’re doing is that would
41:49
you agree with that
41:51
yeah i would and i i think also i’ve
41:53
read a lot of
41:54
media where there’s a lot of assumptions
41:57
that i would even call
41:58
if not arrogant certainly dismissive if
42:00
not wildly rude
42:02
so you know there’s you’ll read an
42:04
article that’s like well this machine
42:05
does x it shovels because no one wants
42:07
to shovel
42:08
for a living right i’m just bringing
42:10
this up and no that’s good it’s a good
42:12
point
42:12
on the tech right if like there’s a john
42:14
deere automated shoveler
42:16
i’m sure it’s fantastic the point is to say we’ve
42:18
all done jobs
42:20
of any kind uh that elements of it you
42:24
really don’t like
42:25
and you wish could be automated but
42:27
usually that’s because you do the job
42:29
long enough to realize this part of my
42:31
job i wish
42:32
would be automated right things like
42:34
shoveling i don’t know
42:36
yeah a lot of people would not be like
42:38
give me 40 years of shoveling
42:39
i’ve done a lot of like especially when
42:42
i was a younger person i did a lot of
42:44
like
42:44
you know camp counselor jobs for the
42:46
summer where i was outside
42:48
you know i was doing physical labor it
42:49
was awesome that said
42:51
i knew okay this was great for what it
42:53
was i kind of don’t want to do this for
42:55
my whole
42:56
life but the other thing there which i
42:58
really get upset about when i read some
42:59
of those articles
43:01
is what if whatever the job is insert
43:03
job x
43:04
which could be automated is how someone
43:06
makes their living
43:07
right then it’s not just a value
43:09
judgment about the nature of the actual
43:11
labor itself
43:13
but is sort of saying like really what
43:15
someone says there
43:16
is from the economic side of it it’s
43:19
justified to automate anything that can
43:21
be automated
43:22
because someone can make money from it
43:24
outside of
43:26
what that person needs to do to make
43:28
money for them and their family
43:29
and again a company having a cool idea
43:32
to build something that’s automation
43:34
oriented
43:34
that’s awesome but we have to have the
43:36
discussion about
43:38
what jobs you know might go away where
43:40
again the metrics are
43:42
if it’s exponential growth ultimately
43:45
then i don’t see why anything that
43:47
humans do would not be automated
43:50
period like i have not been to a policy
43:52
meeting or whatever yet where someone’s
43:53
like hold on
43:54
we need to not build x because some
43:57
humans won’t whatever the work
43:59
discussion comes up
44:00
a lot right but there’s no like policy
44:02
saying okay
44:04
uh veterinarians nothing’s going to be
44:07
automated with veterinarians ever again
44:09
because animals touched by humans or
44:12
whatever you know
44:13
but why is that not brought up it’s
44:15
because
44:16
there’s the assumption at all times that
44:19
the main
44:21
indicator of success is exponential
44:24
growth
44:25
and a lot of my work is to say i don’t
44:27
think that’s true
44:28
especially when mental health and the
44:29
environment come into play what about
44:31
those two pretty big areas
44:33
of our lives yeah that’s a really
44:36
good distinction and i think it’s also
44:38
it leads me into another question that i
44:40
had which is around
44:42
that we talk a lot when it comes to
44:44
automation about i think the first
44:46
impact people tend to think of is the
44:48
automation as it relates to human jobs
44:50
and human work
44:51
in the not distant future and and how
44:54
that’s going to displace and
44:55
replace in some cases categories of jobs
44:58
and it certainly feels like that’s an
45:00
important topic but it feels like it’s
45:01
also important to step back and
45:03
look even more broadly at the way
45:06
automation affects
45:07
categories of human experiences as a
45:09
whole right across
45:10
employment and healthcare and
45:13
communication
45:14
and government and and so on so when you
45:18
think about that
45:19
where do you spend the most mental
45:22
cycles or mental energy
45:23
when it comes to automation and the
45:24
future of humanity and making sure we
45:26
get
45:27
that focus right in terms of the policy
45:29
in terms of the way we we go about
45:31
implementing around
45:32
it oh great question um there’s three
45:35
things
45:36
in an ethically aligned design now
45:37
here’s where i’m i’m being either pitchy
45:39
or i’m not objective and a lot of this
45:42
is because the 700 volunteers that wrote
45:44
ethically aligned design they’re friends
45:46
and i massively respect
45:48
the work they did so we have the first
45:50
chapter of a 300 page document
45:53
is our general principles and the logic
45:55
is those other chapters inform those
45:57
general principles like which one should
45:58
we have and then from a consensus kind
46:01
of voting
46:01
standpoint um the logic was we ordered
46:04
those three first general principles in
46:07
the order of importance
46:08
so the first principle is human rights
46:11
and that’s something that
46:12
a lot of the lawyers you know and the
46:14
work that we had were like hey
46:16
and the first couple times i said it
46:18
frankly it kind of me maybe because it’s
46:19
like this is easy we finally were able
46:21
to draft something
46:22
but we had this big event in austin and
46:24
a lot of the lawyers were like look
46:26
ethics is great applied ethics we get it
46:28
that’s design
46:29
we cannot ignore international human
46:31
rights as it’s been established by the
46:33
un and whoever else
46:35
i thought that made sense and globally
46:38
human rights is very challenging
46:40
how do you implement it etc etc but it’s
46:42
also it is international
46:44
right the logic is any country violates
46:46
human rights
46:47
any country supports human rights in
46:49
different instances but the point is
46:51
if when you’re building any technology
46:54
but especially with ai
46:56
a first lens is what can we look at
46:58
like the ruggie principles or ruggie i
47:01
always get this wrong
47:02
ruggie principles anyway those principles
47:04
the un principles as they were
47:06
adapted to business that lens of let’s
47:09
not design
47:10
something that knowingly would violate
47:12
human rights and that to me is
47:13
that’s a line in the sand and again that
47:15
comes from ethically aligned design not
47:16
just john
47:17
uh the second area is data agency or
47:20
data sovereignty
47:21
and that means beyond privacy right gdpr
47:25
in the states the california privacy
47:27
acts e-estonia
47:29
all this amazing work saying how do we
47:31
protect people’s data
47:33
but you know in one sense it’s like what
47:35
else are we going to do
47:36
right like government’s protecting
47:38
people’s data that’s kind of the job
47:40
right now i’m not saying this in a
47:41
teasing sense i’m saying like thank god
47:43
for the gdpr
47:44
however that does not empower the person
47:49
to be able to say with my identity
47:51
stored in
47:52
kind of a personal data locker right
47:54
john has his data
47:56
then when i want to exchange data with
47:58
my friend you
47:59
i can do it in a peer-to-peer way
48:01
blockchain or whatever
48:02
i have access to my data i have
48:04
portability of my data
48:06
this is data sovereignty for the end user
48:09
it means they actually have a voice and
48:11
especially now that we’re getting into
48:13
augmented and virtual and extended
48:14
reality right and the spatial web
48:17
we have to understand in every art
48:19
conversation
48:21
to move not just talk about privacy but
48:23
we must talk about data sovereignty
48:25
and giving people agency over their data
48:27
the last thing i’ll say i’ve already
48:29
touched on
48:30
is what we talk about well-being
48:32
indicators that term can be confusing
48:35
because people hear well-being and they think
48:36
wellness
48:37
and it sounds like a yoga mat or
48:40
something
48:41
or it may be like well-being that’s like
48:43
fitness it’s not it’s
48:45
started with bhutan’s gross national
48:46
happiness now there’s the un
48:48
sdgs the oecd has their better life
48:51
index
48:51
it simply means that when we build
48:53
anything multiple lenses
48:55
of societal success must be taken into
48:58
account
48:59
or we will build for the one thing that
49:01
we say is the most important
49:02
it’s not really rocket science and it’s
49:04
not about good or evil or whatever else
49:06
it’s about recognizing that
49:08
years ago the gross domestic product was
49:10
born in bretton woods new hampshire
49:12
with very specific values at the time
49:15
news flash
49:16
was written by men and that’s why
49:18
caregiving for instance
49:20
is not recognized in the gdp guess what
49:22
right now the world is facing the most
49:24
with covid
49:25
the lack of caregiving right how
49:28
ludicrous would it be to design
49:30
a tracing app or anything for covid and
49:32
be like caregiving i don’t need to know
49:34
right but it wasn’t put in the gdp so
49:37
that doesn’t mean throw out the gdp
49:39
with the bathwater
49:40
but it certainly means other measures
49:43
must be used when we create these
49:45
amazing technologies and here i’ll end
49:47
with
49:48
an amazing precedent which is new
49:49
zealand i bow down to new zealand
49:52
we all do prime minister i have a very
49:56
appropriate
49:56
and ethical crush on you um they do so
50:00
much work they had a well-being
50:02
2019 budget where they just have five
50:04
areas fiscal is one of them
50:05
right kind of the gdp those metrics can
50:07
still be used
50:09
but they talk about children’s mental
50:10
health and they talk about the
50:12
environment
50:13
and then their ai plan it’s a whole
50:15
separate ai plan
50:16
the success metrics for the ai the
50:19
technology they build
50:20
ladders up to that all five of them have
50:23
to be served
50:24
not just one and by the way not even
50:26
just children’s mental health right it’s
50:28
a balance
50:29
that also means the money really is
50:30
about how do we serve our people and our
50:32
planet
50:33
with the money versus exponential growth
50:35
is number one
50:36
and we’ll give you some spare change for
50:38
the other stuff so new zealand
50:40
is a wonderful precedent along these
50:42
lines
50:43
yeah i wonder it really struck me when i
50:45
was looking at by the way
50:46
while we were talking about this i’m
50:47
going to throw the url
50:50
ethicsinaction.ieee.org up on the
50:53
screen
50:54
uh and of course this will go to audio
50:56
for pod just read that aloud one more
50:58
time
50:58
ethics in action dot i triple e dot org
51:02
yeah as we’re talking about this um
51:04
ethically aligned design
51:06
landmark resource uh on a uh
51:09
autonomous how do you characterize it
51:11
it’s autonomous intelligence systems is
51:13
that right
51:14
yeah now we say artificial intelligence
51:16
systems and it’s also the oecd preferred
51:18
term
51:19
it just means kind of everything in one
51:20
so you don’t have to say like
51:22
this type of learning that type of
51:23
learning this type of learning it just
51:24
means all of it
51:25
artificial intelligence systems but at
51:27
the time we said autonomous and
51:29
intelligent systems so we’ve just
51:30
evolved what we say
51:31
okay so one thing i did find
51:34
really interesting about that was that
51:36
aspect as you just talked
51:38
about the well-being and that that
51:39
really struck me
51:41
um as something very encouraging uh and
51:44
fascinating because it feels
51:45
very open-ended and it makes sense to me
51:48
because i’m someone who loves to live in
51:50
nuance and i think a lot about you know
51:52
human
51:52
experience and and the outcomes of of
51:55
technology
51:56
but i wondered if that has prompted uh
51:59
feedback from the practice
52:00
practitioner community because it is so
52:02
open-ended and
52:04
seemingly subjective as a recommendation
52:07
sure no it’s i think a lot of it is also
52:09
um
52:10
i just became fascinated like if someone’s
52:12
like hey you’re going to talk about gdp
52:13
all the time
52:14
like 10 years ago you’d be like wow that
52:15
sounds like a boring life
52:18
i mean it’s no law and order svu but
52:21
well
52:21
hello if you need a fake police officer
52:24
i’m your guy
52:27
but really it’s like if i just said to
52:28
you you know kate tell me about your day
52:30
today what made it worthwhile and we
52:32
were having a cup of coffee at the end
52:33
of the day
52:34
you might be like i had a great show
52:36
thank you
52:37
and then like i had time with friends
52:38
and whatever else
52:40
if you never reflected on that right you
52:43
just have a pastiche of a kind of
52:44
general
52:45
sense of whatever and then certainly if
52:47
you were going to build say policy
52:49
future aspects of your life built on
52:51
data you’d be like
52:52
you know what i only know my bank
52:54
account number and i know how much i pay
52:56
for rent
52:57
right and i’m giving a stark example of
52:59
those are the things that we’ve been
53:00
trained to think are the only things
53:02
that are worthwhile measuring
53:05
ludicrous yeah of course know about your
53:07
money that’s incredibly important
53:09
and i understand too like i was in the
53:11
quantified self movement
53:12
a lot two years ago with my other book
53:14
hacking happiness
53:15
and look this is not about for six
53:17
months like walking around
53:18
and being like i’m wearing 94 sensors
53:21
and kate just used a vowel and i write
53:23
that down right
53:24
it’s about measuring things that are
53:26
important to you for a while
53:28
and realizing these are my values right
53:30
now
53:31
even writing them down and then knowing
53:33
that you’re living to them right
53:34
so i bring that up because the
53:36
well-being thing first of all there’s a
53:38
lot of ignorance for me there was
53:40
about what the gdp for instance even is
53:43
there’s a lot of great ted videos
53:45
uh ted talk videos and i’m trying to
53:47
remember the guy that’ll come to me
53:48
uh all right i’ll tell the story and
53:50
then i’ll remember who he is
53:52
but it’s one of the highest rated ted
53:54
videos and he talks about
53:56
the gdp there’s this sense that
53:59
subjective data is kind not worthless
54:02
but it’s like kate and john both give
54:03
our views
54:04
right on netflix eventually that might
54:07
mean something because we’ve given
54:08
five-star ratings
54:10
he pointed out in his talk chris it’ll
54:12
come to me darn it
54:13
anyway it’s in there somewhere
54:16
that the majority of the gdp data is
54:19
based on subjective data
54:21
and why that is is that service
54:23
industries
54:24
base most of their success criteria on
54:27
surveys
54:28
right and service industries make up
54:30
like 65 percent of the gdp
54:32
so really what that means is i go to a
54:34
hotel what does everyone still ask right
54:36
it’s the modern algorithmic age
54:38
what have you never not gotten in an
54:40
email a survey how was your stay at our
54:42
hotel
54:43
and you give that rating now in
54:45
aggregate
54:46
this is where it sort of magically
54:47
transforms into
54:49
objective data that now apparently is
54:52
worthwhile
54:53
so all that is to say the well-being
54:55
indicators there’s a lot of confusion
54:57
about what they are understandably but
54:59
then when you also think about a country
55:01
like bhutan
55:02
that has measured their environment very
55:04
specifically with really
55:05
interesting fascinating in-depth metrics
55:08
that’s why for instance they were able
55:10
to eradicate smog
55:11
above their tree line because they made
55:13
it a priority to
55:15
get rid of and it also reflects their
55:17
values
55:19
so there’s so much about what we measure
55:22
that reflects a real core ideology that
55:25
none of us really even knows
55:26
how it started or why but it reflects
55:29
deep systems from
55:31
70 80 90 years ago that frankly
55:34
just totally need to change because the
55:36
status quo it’s not just about
55:38
dealing with questions of critical
55:40
questions of systemic bias
55:42
racial bias or what have you it’s about
55:44
how are we designing these tools
55:46
and why would we only want to have them
55:48
do this one thing versus question
55:51
and that’s what like we have a standard
55:52
that came out called 7010
55:54
7010 focused on well-being which is
55:57
mainly about showing look at all these
55:59
indicators and start asking
56:01
what would it look like if i built my ai
56:03
tool with all of these like three or
56:05
four of these lenses not just this one
56:07
and it becomes innovation right because
56:10
the last thing i’ll say in this
56:11
answer is i rarely hear innovation
56:14
framed
56:15
around anything except money right don’t
56:18
hinder innovation really means don’t
56:20
mess with money
56:21
which my answer is why can’t that
56:23
question be how can we
56:25
make innovation about how can we make it
56:27
about the environment
56:29
that’s beautiful and it’s so it’s so
56:30
aligned with my own work so i talk about
56:32
uh
56:33
the the lenses on meaning and the
56:34
different the the framework
56:37
of meaning as all these different kinds
56:39
of
56:40
the things that we understand as humans
56:42
like all the way from the lowest level
56:43
like semantics and communication
56:45
through things like relevance and
56:47
significance and truth and purpose that
56:49
are kind of
56:49
mid-grade but but they frame a lot of
56:52
what we we
56:52
do and what we think all the way out to
56:54
these big macro questions of existential
56:57
and cosmic meaning like what’s it all
56:58
about and why are we here but they all
57:00
boil down to
57:01
what matters and then for me the
57:04
question of innovation is always a
57:06
question of what is going to matter
57:08
so if you use those two lenses you know
57:10
what matters and what’s going to matter
57:12
it feels like that’s a
57:14
very aligned uh way of looking at what
57:16
you’re talking about
57:17
as well that you know that that emphasis
57:20
on
57:20
uh not measuring everything you don’t
57:23
have to be
57:23
sort of the quantified self person
57:25
that’s got as you say 90 sensors on
57:27
yourself
57:28
because not all of that matters and and
57:30
you don’t have to know
57:32
you know how many vowels you’re tracking
57:34
at any given point in time
57:35
uh that’s not it’s never going to be a
57:37
meaningful thing it’s not
57:39
unless there is some specific
57:42
meaningful application at some point in
57:43
your life but
57:45
uh yeah that’s that’s a that’s a big
57:47
takeaway i love that and i i’m glad that
57:50
you’ve put that out there that you and
57:51
your team
57:52
have put that out there in such
57:53
unambiguous terms for the community to
57:55
be able to
57:57
think about well-being in this very
57:59
dimensional way
58:00
and and make sure that that’s
58:01
represented in in the way that they’re
58:03
building
58:05
yeah and one point i’ll make here is
58:06
that well i don’t think we need to take the
58:08
credit but
58:09
in the sense of like um joseph stiglitz
58:12
who everyone i think knows is a world famous
58:14
economist
58:15
uh in 2009 at the time president
58:18
sarkozy of france
58:20
got uh
58:23
all these globally renowned economists
58:25
together and said look
58:26
is gdp the ultimate measure of societal
58:29
success yes or no
58:30
stop messing around and you can look at
58:32
the 2009 document
58:34
where all these different leaders said
58:37
it is easier to measure well-being
58:39
than it is to measure the statistical
58:41
aspects of gdp
58:42
and it may sound boring and maybe i’m a
58:46
gdp geek but when you look at statistics
58:49
right for gdp it’s a backwards measure
58:52
how did we do last quarter how did we do
58:54
last year
58:56
right so you first of all by definition
58:58
and i’m not trying to put gdp down i’m
58:59
just putting it down to the side and
59:00
saying you’re not living in the moment
59:02
right you’re not asking about value
59:05
right now why is john happy right now
59:07
because he’s talking with his friend
59:08
kate who’s awesome
59:10
and because i really feel purpose talking
59:12
about this work for ieee that
59:14
i love so much
59:15
right as compared to i just live my life
59:18
and
59:18
next thing you know at the end of the month hey
59:20
what did you do
59:21
and when you also think about what is a
59:24
person’s life like as a parent i’m a
59:26
parent
59:26
you know what you say to your kid like
59:28
hey you’re making friends
59:31
you know you’re doing this i’ve taught
59:33
you the nature of like whatever oh you
59:35
found a love you know a partner
59:37
cool did you get an a in class and how
59:40
much money do you make at your job
59:43
right but guess what guess what for most of
59:46
their lives
59:47
yeah i’m looking for love and this is
59:49
all important but i can’t say that’s
59:50
important because that’s
59:52
that’s frivolous did you make money do
59:55
you have a good job
59:56
that’s it that’s because underlying so
59:59
much
60:00
of modern society 70 years ago
60:03
this one thing became the thing
60:06
that’s why i love ieee by the way too
60:08
because advancing technology for
60:09
humanity like i mentioned
60:11
by definition you can’t just say
60:14
how do we build stuff for whatever it’s
60:15
always beautiful and by the way the
60:17
global aspect of my job
60:18
i love i’m on the phone granted my sleep
60:21
schedule gets messed
60:22
with but i’m on the phone all the time
60:24
like with india or
60:25
the philippines and and that’s a real
60:28
blessing for me because automatically
60:30
you you just hear right away different
60:32
people’s values
60:34
right money’s always important but it’s
60:35
not the only thing
60:37
yeah my day started out with a call with
60:38
india as well and and uh we just joked
60:41
before we
60:42
got on the broadcast you and i were
60:44
joking i’ll share with our audience
60:46
we’re like we haven’t seen each other
60:47
since lisbon
60:48
it’s like oh we should definitely say
60:49
that on the air because it sounds
60:51
so bad like oh gosh how have you been
60:54
since
60:55
lisbon well we know how you’ve been
60:56
since lisbon actually
60:59
it’s called covid but no it is it’s
61:02
wonderful
61:03
and i love the fact that you know my
61:04
work is also global and i
61:06
appreciate that you appreciate that
61:07
about your work as well it’s it’s great
61:10
that that holistic global perspective
61:13
on on the work i was wondering you know
61:16
i have i have generally these kind of
61:17
recurring show questions that i like to
61:20
to bring in a few and and
61:21
one of them that i like to ask is you
61:23
know what are you most optimistic about
61:25
when it comes to technology like what
61:27
when you look at the future of
61:29
human experiences and how technology can
61:31
impact it what
61:32
what gives you the most hope
61:35
uh i know i’m going to sound like a
61:36
fanboy but it’s not it’s actually true
61:38
one thing i love about ieee is anyone’s
61:41
welcome
61:42
to something and i think about this all
61:44
the time i’m so blessed in this job
61:46
but i am not an engineer i am not an
61:49
ethicist i am not a data scientist
61:52
like the list of what i am technically
61:53
not like if you want a phd
61:55
is pretty extensive um but i’m
61:59
i’m not just proud i’m genuinely honored
62:01
and thrilled
62:02
that because of my skill sets which
62:04
involve like business development
62:06
community development i got to come with
62:09
this idea
62:09
that was a very early germ of an idea to
62:12
this amazing organization and say
62:14
my my perception is it does need to be
62:17
you you seem like the only organization
62:19
that could build a global
62:20
code of ethics because it will work
62:23
because you’re engineers
62:24
so it will actually function but the
62:27
other thing about that cross-pollination
62:29
and interdisciplinary aspect of our work
62:32
one thing with ethically aligned design
62:33
one of my favorite things
62:35
is saying to the amazing engineers and
62:37
data scientists
62:38
hold on let me get my friends who are
62:40
mental health specialists
62:42
and sociologists and anthropologists
62:45
and you know women and by the way kids
62:48
and it’s like then the consensus
62:50
building aspect
62:51
consensus is by the way very hard it’s
62:54
very hard
62:55
and it’s not about getting uniform
62:57
agreement across everything
62:59
but ieee especially the standards
63:01
association what i love is
63:03
coming to consensus means we’re going to
63:04
get in a room we’re going to be
63:06
passionately arguing for a while
63:08
but we all have this vision of advancing
63:10
technology for humanity
63:11
or the stuff in ethically aligned
63:12
design and we’re going to do it together
63:14
it’s wonderful and when you think about
63:18
what we could do in culture uh
63:21
and whether it’s governments companies
63:22
or individuals to stand a better chance
63:25
of bringing about the best futures with
63:26
tech
63:27
rather than the worst futures obviously
63:29
you know you could point to and i’ll
63:30
throw that url back up on the screen
63:32
here
63:33
uh ethicsinaction.ieee.org
63:36
but in addition to that like or or maybe
63:39
one thought that kind of
63:40
stands apart from that what do you think
63:43
it is that we could most
63:44
emphasize to focus on in culture to to
63:47
bring that best
63:48
bring about those best futures with
63:50
tech
63:51
um i’ll go back to the metrics which is
63:54
my dream
63:54
on a personal level but it’s reflected
63:56
in ethically aligned design
63:58
is that nothing ever ever ever ever ever
64:01
is built again and by this i mean
64:02
designed at the very outset
64:04
you know someone susan goes i have an
64:06
idea for a new tech and she says that
64:08
and then she goes to someone else and
64:10
immediately not just 7010 but the
64:13
mindset
64:13
of how we’re going to build this knowing
64:16
that the actual ultimate key performance
64:19
indicator is
64:20
the knowable provable
64:22
increase of long-term human well-being
64:25
holistic well-being mental physical
64:28
education access to stuff and that’s on
64:30
top of maslow they have to have access
64:32
to water and food and all that
64:33
so human well-being in uh symbiosis
64:37
meaning complete connection
64:38
with environmental flourishing where we
64:41
cannot think about
64:42
ai as like i’m just a brain kate’s just
64:46
a brain
64:46
copy those brains and we’re good to
64:47
go right it ignores
64:50
so many indigenous traditions so many
64:51
truths and just the reality of
64:53
the interplay of systems thinking from
64:55
people like donella meadows and other
64:57
you know titans and heroes of mine
64:58
so if every bit of
65:00
technology was developed
65:02
where you knew the key performance
65:04
indicator that would be blessed by
65:06
shareholders stakeholders ceo all that
65:08
value chain was wait a second
65:10
is that tech how can we point to it
65:13
increasing human well-being and the
65:14
environment
65:15
then that means we wouldn’t be facing
65:17
mental health crises
65:19
um you know the environment is today
65:21
with these amazing tools that we’re
65:23
building right all the great algorithms
65:26
ai all the wonderful things happening
65:28
think of the
65:29
majesty of what it would mean if around
65:32
the world
65:33
we knew that we could point to
65:34
indicators saying well this thing is
65:36
going to be built
65:37
and i know and again look to new zealand
65:39
they’re doing this it’s so
65:41
exciting right and by the way they
65:43
invite their indigenous
65:44
uh first nations citizens into the work
65:47
so that’s the other part of it is seeing
65:50
those who aren’t seen
65:51
right the marginalized women a lot of
65:54
times you know and
65:54
in many instances we cannot build these
65:57
amazing technologies and say it’s quote
65:59
you know
66:00
for all of humanity when we’re not
66:01
actually listening
66:03
to all of humanity and especially asking
66:05
really tough questions
66:06
who’s been marginalized and why and it’s
66:08
not about making other people feel bad
66:10
it’s about saying design won’t be
66:12
holistic
66:13
unless we have everyone participating
66:15
in the process
66:18
yeah and i think another thing about
66:19
what you were saying in terms of like
66:21
it’s not just john as a brain and kate
66:22
is a brain and let’s just replicate that
66:25
is that not only does it ignore uh so
66:28
many of the uh the nuances of
66:30
of human existence but it ignores that
66:34
human experience is an embodied
66:36
experience and that we are sensory
66:38
beings
66:39
and that senses are how we make meaning
66:42
and so we couldn’t possibly have machines
66:44
make meaning if they didn’t
66:46
have an understanding of our embodied
66:49
sort of
66:49
acceptance of those experiences which is
66:52
why it takes
66:52
having a diverse and inclusive group to
66:56
be able to understand
66:57
how those experiences affect us when we
67:00
when we experience them in an embodied
67:01
way so that
67:02
i feel like that’s a really important
67:04
facet too of what you’re talking about
67:08
yeah i mean oh sorry i know we’re at
67:10
time i’ll just say the environment the
67:11
more i
67:12
i read indigenous and first nations uh
67:15
work uh and i hear especially a friend
67:17
of mine in new zealand i forget which
67:19
tradition
67:20
when you speak to someone
67:22
where the environment is actually more
67:24
of like a sister or a brother as
67:25
in the west i was raised to think about
67:27
my actual nuclear family
67:29
it becomes a very different conversation
67:31
about quote protecting the environment
67:33
like i wouldn’t say to kate like kate
67:34
are you cool with me threatening the
67:35
life of your mom
67:36
on the phone still
67:37
[Laughter]
67:40
that’s a given right so this is also
67:42
being sensitive to the end user values
67:44
of the people who are using the
67:45
technology
67:46
um it’s been really helpful because then
67:48
it’s like the system of what we’re part
67:50
of
67:50
of course involves the environment of
67:52
course it’s part of who we are
67:54
yeah yeah it’s a really important
67:57
clarification too what is what is the uh
68:00
the ieee’s
68:01
roadmap from here and what is your
68:03
roadmap from this point forward
68:06
well ieee is a big organization because
68:08
there’s multiple parts to it the
68:10
advancing technology for humanity
68:12
there’s a huge amount of work on the
68:13
environment which is astounding and
68:15
awesome there’s like 40 societies
68:17
a ton of great work going on i believe
68:19
for the work that i help drive
68:21
we’re really focusing on aspects of
68:22
trust and agency
68:24
in ai i find agency fascinating because
68:26
trust is
68:28
a lot of times it’s like how do we trust
68:30
technology where for me the question
68:32
about
68:32
agency means if i make technology that’s
68:35
safe
68:35
i have to give it to kate and give
68:38
kate the opportunity to really interact
68:39
with it
68:40
and then you’ll tell me if you trust not
68:42
just am i a
68:44
safe manufacturer but what it does in
68:46
your life and that agency then means
68:48
that’s when trust really happens it’s
68:50
two-sided also i love these non-western
68:53
questions
68:54
i’ve been learning a ton from other
68:56
cultures
68:57
about what safety and risk and all those
69:00
things mean
69:01
but anyway thank you so much for
69:03
saying the ethics in action url
69:05
all of the stuff that i drive is free to
69:07
join you don’t have to be an ieee member
69:09
we’re always looking to get people
69:11
into the working groups
69:12
we have about 13 standards working
69:14
groups focused on
69:16
things like transparency children’s data
69:19
affective computing and we’re always
69:21
looking to grow those groups and get new
69:23
brains
69:24
and and voices involved so we’d love to
69:26
have people join should people go to
69:28
just ieee.org to find that or is there
69:31
a particular area actually
69:34
ethicsinaction.ieee.org is the stuff that i help
69:36
lead
69:37
and and by the way i’m on twitter anyone
69:39
who wants to contact me directly i’m
69:41
happy to
69:42
reach out on twitter too yeah
69:43
that’s the last question here is how can
69:45
people find
69:46
and follow you and your work online and
69:48
so we know john c
69:49
havens on twitter right uh any other
69:52
pertinent urls or handles that people
69:55
should know
69:56
well thanks again i’m going to mention
69:57
it again ethicsinaction.ieee.org
69:59
is like the big one
70:01
um and then really if you want to get in
70:02
touch to get involved with something
70:04
twitter is usually what i check
70:06
freakishly often i enjoy freakishly often
70:09
how i would describe my twitter
70:11
lifestyle too
70:13
i suppose to be honest yeah absolutely
70:16
well john i can’t tell you
70:17
what a pleasure it’s been and what
70:19
a fan i am of your work
70:21
you had a lot of fans who were excited
70:23
about you being
70:24
on the show and i can see why of course
70:27
because
70:28
you’re brilliant and i love where your
70:29
heart is in this work
70:32
so thank you for doing what you’re doing
70:34
well thank you seriously it was so great
70:36
i’m going to say this
70:37
it was awesome to see you in lisbon
70:38
because i was reminded like oh my gosh
70:40
it is amazing
70:41
and i love you know what you’re doing
70:42
with the show and it’s a real honor to
70:44
be here so thank you so much
70:45
thank you thank you john and thanks to
70:47
all of our listeners and viewers out
70:49
there
70:50
see you next time
