The Tech Humanist Show: Episode 7 – Dorothea Baur

About this episode’s guest:

Dorothea Baur is a leading expert & advisor in Europe on ethics, responsibility, and sustainability across industries such as finance, technology, and beyond. Her PhD is in NGO-business partnerships, and she’s been active in research and projects around sustainable investment, corporate social responsibility, and increasingly, emerging technology such as AI. She’s also been developing an audit system for contact tracing against the background of COVID-19 as a ForHumanity Fellow. She is founder and owner of Baur Consulting AG, and among her many distinctions, has been named as one of the “100 brilliant women in AI ethics.”

She tweets as @DorotheaBaur.

This episode streamed live on Thursday, August 27, 2020. Here’s an archive of the show on YouTube:

About the show:

The Tech Humanist Show is a multi-media-format program exploring how data and technology shape the human experience. Hosted by Kate O’Neill.

Subscribe to The Tech Humanist Show channel (hosted by Kate O’Neill) on YouTube for updates.

Full transcript:

00:02
[Music]
02:09
all right
02:17
oh there we go hello humans
02:22
who have we got in the in the uh live
02:26
please uh say hi tell me where you’re
02:28
watching from
02:30
glad to to see folks turning out i see a
02:32
few people
02:34
turning up love to see some some
02:36
comments from you
02:37
where are you watching from today and
02:39
how’s it going there in your
02:41
in your version of the lockdown or
02:43
whatever state of covid you’re in
02:46
uh in your neck of the woods so uh
02:50
while i wait for some some folks to
02:52
respond
02:54
just so that you know in case you are
02:56
tuning in for the first time
02:58
this is the tech humanist show and it is
03:00
a multimedia format program that means
03:03
we’re doing this as a live stream uh and
03:05
it will be available
03:07
across all the different channels that
03:08
you can watch live streams pretty much
03:10
and then it’ll stay archived on youtube
03:13
and then there’ll be an audio podcast
03:16
available
03:17
at a certain point so typically i have
03:19
those available
03:22
the friday after so we do these thursday
03:24
afternoon tomorrow
03:26
last thursday’s audio podcast will be
03:28
available
03:29
so i look forward to sharing that with
03:31
you
03:34
i’m trying to see your comments
03:38
and for some reason they are not showing
03:41
up there we are
03:44
okay tony says hi from germany hi tony
03:48
menzel
03:49
nice to see you tuning in uh bill butler
03:53
says bill from nashville it’s swampy hot
03:55
here
03:57
sorry to hear that bill glad you’re here
03:59
i don’t know why my camera keeps
04:01
flipping out
04:02
there we go now we’re getting everything
04:03
all set up all everything is well
04:05
all right we got tony we got bill i know
04:07
there’s a few of you a few more of you
04:09
on the line
04:10
so feel free to comment and get your
04:13
questions ready i’m going to tell you
04:14
about our guest
04:15
and that’ll get you ready to to ask the
04:17
questions that you have to ask
04:19
so please feel free to subscribe or
04:22
follow wherever you’re watching and
04:23
you’ll always have
04:24
access to the the coming shows oh wait
04:26
we have one more hi watching from
04:28
switzerland checking for dorothea
04:30
yeah you’re gonna love it all right
04:32
chris buehler thanks for being here
04:35
all right so today we are talking with
04:37
dorothea baur a leading expert and
04:40
advisor in europe on ethics
04:42
responsibility and sustainability
04:44
across industries such as finance
04:46
technology and
04:47
beyond her phd is in ngo business
04:50
partnerships
04:51
it’s really interesting and she’s been
04:52
active in research and projects across
04:54
sustainable investment
04:56
corporate social responsibility and
04:58
increasingly emerging technologies such
05:00
as ai
05:01
so she’s been developing an audit system
05:03
for contact tracing against the
05:04
background of covid19
05:06
as a for humanity fellow so we’ll talk a
05:08
little bit about that too
05:10
she’s founder and owner of baur
05:11
consulting ag among her many
05:13
distinctions she’s been named as one of
05:14
the hundred brilliant women
05:16
in ai ethics so start getting those
05:19
questions ready
05:20
for our guest and
05:25
oh hang on one second here
05:30
there we go dorothea you are live on the
05:33
tech humanist show thank you for being here
05:35
thanks for having me it’s a pleasure my
05:38
pleasure
05:39
uh i’m so glad you’re here i know that
05:41
we have so much
05:42
ground to cover because you’re doing so
05:44
much someone referred to you as a wonder
05:46
woman
05:47
on twitter this morning which is kind of
05:50
totally exaggerated and i think she was
05:52
more
05:52
referring to herself she would be the
05:54
appropriate one to carry the label
05:55
i think she actually clarified that she
05:58
definitely was not talking about herself
06:01
but some of our listeners may not be as
06:04
familiar with some of the acronyms
06:06
and and terms that we talked about so i
06:07
wanted to just orient people in some of
06:10
the areas of
06:11
work that you do so corporate social
06:13
responsibility csr talk us through a
06:15
little bit about what that entails and
06:17
and what some of the practices are in
06:19
that yeah corporate social
06:20
responsibility
06:22
is like one of the most used acronyms
06:25
that actually describes
06:26
all extra financial obligations and
06:30
responsibilities of corporations so
06:32
it’s not just social actually it also
06:34
covers environmental responsibilities
06:36
you could also call it
06:37
corporate sustainability or
06:39
responsibility there are tons of
06:41
terms that are used for the same thing
06:43
but it’s important to know that this
06:45
is basically what companies mostly take
06:48
on voluntarily
06:49
what you know what they commit
06:50
themselves to but it’s not pure
06:51
philanthropy
06:52
so for example they want they might want
06:55
to give more transparency on their
06:56
supply chain
06:57
than is legally mandated or they go
07:00
beyond the minimum legal standards in
07:02
some countries where the standards are
07:04
very low etc
07:05
so all of these like non-directly
07:08
financial
07:10
aspects of corporate activities
07:13
can be summarized by corporate social
07:15
responsibility okay
07:16
that’s really helpful i’m sure that
07:18
that’ll help our watchers and listeners
07:21
have
07:22
useful questions to to ask and then we
07:24
talked also about esg so environmental
07:26
social and governance investing so
07:28
help us understand what that term refers
07:31
to yeah esg is basically the csr
07:34
of the financial industry so it’s a term
07:37
that
07:38
investors use to talk about
07:40
non-financial aspects of their
07:42
investments so when
07:44
nowadays blackrock says we are promoting
07:46
esg investments that means they don’t
07:49
just
07:49
don’t just judge their investments based
07:51
on expected return
07:52
on investment but they also take into
07:55
account environmental and social and
07:57
governance factors when they invest so
07:59
esg
08:00
is the buzzword among the financial
08:01
community
08:03
and uh and csr is like the old term like
08:06
most of the business ethics aspects of
08:09
all corporations across
08:11
industry now i know you’re also doing a
08:12
lot around ai ethics and tech ethics too
08:15
are you finding that these areas are
08:16
starting to intersect more and more
08:18
they do i think sometimes you know both
08:22
can learn from each other also
08:23
especially because
08:24
you know csr so my background is in
08:27
business ethics it’s very theoretical
08:29
but so
08:29
the whole questions about legitimate
08:31
business models legitimate ways of
08:33
making
08:34
money legitimate ways of distributing
08:37
profit ethical ways of you know
08:41
kind of treating your stakeholders or
08:43
unethical ways
08:44
all these issues have been thoroughly
08:46
discussed by the csr community or the
08:49
in the business ethics context on a more
08:51
theoretical or more practical level
08:53
depending on what you look at
08:54
and so a lot of these questions are kind
08:56
of history repeating when it comes to
08:58
the tech sector
08:59
tech sector has some distinctive you
09:02
know
09:02
ethical challenges that have never been
09:04
seen before such as the
09:06
you know the speed and the in
09:08
transparency of
09:09
machine learning etc so that’s
09:12
distinctively new
09:13
but all those things about is this an
09:16
okay
09:16
way of doing money is this an okay way
09:19
of treating our employees our
09:21
stakeholders
09:22
and gig workers etc those are questions
09:25
that have
09:26
been i mean discussed but of course not
09:28
resolved
09:29
by the csr community but at least you
09:32
know it’s i think these two
09:33
fields can really benefit from each
09:36
other
09:37
is it also the scale and the velocity of
09:40
ai
09:40
and tech that changes that conversation
09:43
yeah and it’s also
09:44
you know it’s not just a technically a
09:46
black box i mean machine learning and
09:48
algorithms are not just
09:49
really a black box in a technical sense
09:51
it’s also kind of
09:53
deterring non-engineers to take a closer
09:56
look
09:56
because you feel like deterred by the
09:58
whole jargon around it
09:59
which is needed of course it’s a
10:00
technical vocabulary you cannot expect
10:02
engineers to talk about you know you
10:05
know whatever soap bubble stuff
10:07
just to make it feel tangible or visible
10:10
for us
10:11
but still i think it’s a it’s a double
10:13
black box it’s it’s it’s a it’s a black
10:15
box in the real technical sense of the
10:16
word that you don’t know
10:18
what’s being the input and the output or
10:19
how the two relate
10:21
but it’s also a black box like oh
10:23
there’s not much we can say about that
10:25
it’s
10:25
machines processing something whereas
10:28
other industries it’s more like
10:29
oh there’s a mining company they have
10:31
these and these mines
10:32
with this kind of people working there
10:34
suffering from these problems and
10:36
uh causing someone so much emissions is
10:38
much more tangible in other
10:40
industries so i think that makes ai a
10:42
bit uh
10:43
evasive for us and you know there’s this
10:45
whole
10:46
concept of ai for good and tech for good
10:48
that’s really kind of a hot
10:50
topic or a trending hashtag at least
10:53
right
10:53
uh what do you make of those ideas
10:57
as it applies to the work you see and do
11:00
yeah i know
11:00
that in my bubble i’m still i think i
11:03
move in a bubble
11:04
uh online etc there is a lot of cynicism
11:07
about these terms but i think there is
11:09
something genuinely
11:11
good to be gained from you know ai
11:14
and from tech for good that’s more than
11:17
just
11:17
ethics washing my favorite pet peeve
11:20
and uh and and you know i really think
11:23
that
11:24
ai has huge potential to cause good
11:27
especially
11:28
when it comes to environmental
11:29
sustainability so
11:31
for example you know the whole problem
11:34
of pattern recognition
11:36
in machine learning where you feel like
11:38
when it’s applied to humans
11:40
it is you know full of biases and it
11:42
kind of confuses correlation and
11:43
causation
11:44
and it is uh violating privacy etc
11:48
there are a lot of issues that you don’t
11:49
have when you use the same kind of
11:51
technology in a natural
11:54
science context you know where you just
11:55
observe patterns of of
11:57
of oceans and and clouds and whatever
12:00
or when you when you try to control the
12:03
extinction of species i mean
12:05
animals don’t have a need for or a right
12:08
to privacy so
12:10
you know why not use ai in contexts
12:13
where
12:13
it really you know doesn’t violate
12:15
anyone’s um
12:17
moral rights i think there are enough of
12:19
enough use cases and where you at the
12:21
same time
12:22
resolve a real problem those two were
12:24
conditions for me for tech for good
12:27
yeah that’s a really good distinction
12:28
and i talked with um yale’s forestry and
12:31
sustainability program last year and i
12:33
i felt like that was a lot of the
12:34
discussion there too is that there’s so
12:36
much data to be harnessed about
12:37
naturally
12:38
occurring phenomena and be able to
12:40
impact human life
12:41
on the other side of that process as
12:44
opposed to tracking
12:45
justified right where the emissions are
12:47
justified not just for some nonsense
12:49
uh algorithm whatever predicting
12:52
something you don’t really want to know
12:53
and
12:53
and overriding my rights etc and
12:57
and so i think there’s really there’s
12:58
plenty of stuff to do the problem is of
13:00
course um how to make money of it of
13:02
course i mean i
13:03
can’t just talk about yeah i feel good
13:05
without you know thinking about the
13:06
monetary aspects
13:07
well and you’re a uniquely qualified
13:09
person to talk about that so
13:10
it it does kind of come full circle
13:12
right so you want you want to be able to
13:14
have this
13:14
csr and esg-ization in the corporate
13:17
space
13:18
and then the ai for good and tech for
13:20
good discussion sort of emerges
13:22
out of wanting that to scale and one of
13:24
wanting all the
13:25
proposals to scale and then you have to
13:27
bring it back around
13:28
into this holistic csr esg model right
13:32
so
13:32
how do you reconcile all of that how do
13:34
you bring those into
13:35
a holistic framework that actually makes
13:38
sense
13:39
well it’s not about reconciling it’s
13:41
about being able to
13:42
to just uh tolerate the tension
13:47
a just way to put it it’s not always
13:50
it’s not always win-win and it’s also
13:52
not
13:52
triple win as some people would say like
13:54
you know environment
13:56
uh society and a business they all win
13:59
at once
14:00
often it’s about trade-offs but of
14:02
course uh
14:03
you know i think if you can make money
14:06
with esg
14:07
and if you can make money with csr
14:10
that’s totally the achievement that’s
14:11
great for you
14:13
do it but please also do it if you earn
14:16
less money with it
14:17
and if it does not pay off immediately
14:19
anymore so
14:20
uh don’t make your your your ethics
14:24
dependent on the business case
14:26
but i mean if you find a business case
14:28
for ethics go for it yeah there’s
14:30
nothing bad about making money i’m not
14:32
talking about maximizing profit and
14:33
making yourself a billionaire and giving
14:34
your shareholders
14:36
i know what kind of dividends and and
14:37
you know totally losing all kind of uh
14:39
relation to to to the real problems in
14:42
the world but
14:43
you know it’s it’s not bad in itself to
14:44
make money but so when i i don’t
14:46
i you know it’s hard some business
14:49
models are probably
14:50
inherently more ethical and others are
14:52
inherently less ethical or totally
14:54
unethical
14:54
i mean you can’t bring in my opinion
14:59
uh well it’s a dangerous field what am i
15:01
saying here i’m live
15:03
you know it’s it’s it’s hard to bring
15:05
ethics to a nuclear arms
15:07
company it’s hard to bring ethics to
15:11
certain areas right i think you know you
15:13
can you can maybe
15:14
dodge that a little bit if you’d like
15:16
but
15:17
i want to give you an out thank you
15:21
it feels like what you’re really saying
15:23
is that there are
15:24
or if i’m interpreting correctly that
15:26
there are there are some
15:28
approaches which are closer to solving
15:30
real human
15:31
problems right and and they’re closer to
15:33
the human experience they’re closer to
15:35
what
15:36
you know genuine human needs are and
15:38
then there are other things
15:40
that go wildly awry or you know
15:43
a far far afield of that
15:46
um because you know sorry it’s just
15:49
gonna say most things are in the middle
15:50
somehow
15:51
it seems like right yes they are and
15:53
then you need to shift you know then you
15:55
need to
15:55
need to take some decisions and and you
15:58
know and
15:58
and you need to make trade-offs but
16:00
ideally i’m the one
16:02
pushing for trade-offs in favor of
16:03
ethics of course without losing sight of
16:06
the
16:07
business reality yeah and so so what i
16:10
think
16:10
is in uh ai ethics wait i just lost what
16:13
i wanted to say that’s how smart i am
16:15
okay we’ll come back to it if you like
16:18
dementia i’ll
16:19
remember all right uh you know one of
16:21
the things that struck me
16:22
as i was uh looking over some notes and
16:25
and
16:25
thinking about our conversation was the
16:28
more i
16:29
read up and kind of refreshed my
16:31
thinking on esg and csr and kind of the
16:34
the evolution of those fields the more
16:36
it struck me that
16:37
those fields evolved out of concern for
16:41
for labor right and for workers
16:43
collectives and labor power
16:45
and then of course when you start
16:46
thinking about you know the emergence of
16:48
ai
16:49
and tech driven capacity it’s an
16:52
entirely different equation so i wonder
16:55
if
16:55
if you would be able to to speak to how
16:58
do you
16:59
maybe it’s another case of just
17:01
tolerating the tension as you said
17:03
earlier but
17:04
how do you kind of make sense of the esg
17:07
and csr models and
17:09
and their concern with labor and and
17:11
workers
17:12
when we’re looking at a completely
17:14
different model of scale and capacity
17:18
i think it’s uh the the type of human
17:21
factor that we’re looking at
17:23
changes and so the difference is if you
17:25
look at
17:26
heavily industrialized context like a
17:29
heavy manufacturing or like textile
17:31
industry and all those you know tangible
17:33
goods that are being produced of course
17:36
the rights we talk about offers like the
17:38
human rights of labor
17:40
uh health and safety etc so
17:43
but i mean trade unions have become have
17:45
come out of fashion a while ago
17:47
now what i could observe from also all
17:49
those sustainability reports
17:51
a lot of companies don’t really like to
17:53
talk about trade unions anymore they
17:55
you know they talk about human rights
17:56
etc but the unionization is kind of
17:59
has lost a bit unfortunately has lost a
18:00
bit of uh momentum
18:02
so and and so when we switch to ai you
18:05
think oh you’re in the service industry
18:07
it’s not labor intensive et cetera but
18:10
the human factor is still there
18:11
maybe not mainly as the well certainly
18:14
not the blue collar employee
18:16
at least not within the owned operations
18:18
of tech companies
18:22
and and maybe also not as many white
18:24
collar employees
18:26
in relation to their turnover as in
18:28
other contexts
18:29
but there are a lot of people linked to
18:31
the to tech
18:32
companies or to ai first of all often
18:35
invisible we have those
18:36
ghost workers that they’re called you
18:38
know the whole you know
18:40
gig economy or like people doing a very
18:43
uh
18:43
low paid work of tagging uh pictures uh
18:47
for for to feed out to train um data
18:50
sets etc
18:51
and and so there is a labor issue a
18:54
classical one
18:55
for content moderation or the mechanical
18:58
turks sort of thing yeah
18:59
yeah right i mean that’s that’s a that’s
19:01
a really straightforward human rights
19:03
case there
19:04
you know they’re just not so visible and
19:06
you think about the pure service
19:07
industry it’s only like you know servers
19:09
and you know
19:10
one human taking care of 100 servers etc
19:13
but there are a lot more people
19:14
linked in the supply chain you could say
19:16
and also
19:17
more over affected by the technology you
19:20
know
19:21
so in textile i’m not that affected by
19:23
you know the the
19:24
the clothes i’m wearing should
19:26
not be toxic on my skin
19:27
and and that’s it you know but so the ai
19:30
i’m using can violate my
19:32
privacy it can you know uh the the ai
19:35
that is used on me it can discriminate
19:37
against me it can
19:39
make it impossible for me to get a
19:40
mortgage etc so it’s more the users or
19:42
like the customers or the users you say
19:45
that are in the focus but it’s still
19:46
very much about humans
19:48
and at the same time yeah even though we
19:50
think like yeah it’s all like you know
19:52
just uh electricity etc you know there’s
19:54
also sustainability like the emission
19:56
part of ai that
19:57
i always like to emphasize there’s also
20:00
an environmental dimension
20:01
yeah yeah and not to not to mention so i
20:03
think it’s a really important point
20:04
about you said the ghost workers and and
20:06
the many many people who are involved
20:08
in bringing scale to things that are
20:11
dependent on you know
20:12
sort of a gig economy infrastructure but
20:14
also the communities that are impacted
20:16
and that feels like it brings it right
20:18
back around to the sustainability and
20:19
the environmental discussion
20:21
right out of out of places where let’s
20:23
say you know minerals are harvested for
20:25
chips or things like that so that must
20:28
be part of the discussion at some level
20:30
as well i would imagine
20:31
yeah of course i mean the whole supply
20:33
chain of the i.t industry is also
20:36
you know heavily based on minerals and
20:38
there are actually they’re really
20:40
interesting um initiatives also by by
20:42
tech companies or like
20:43
commodity companies that uh specifically
20:46
focus on the on the minerals or the
20:49
metals that are
20:50
you know in our computers like on on
20:52
cobalt there is new
20:54
transparency initiative a fair fair
20:56
cobalt initiative that i just read about
20:58
so they are aware of this but if you
21:01
look
21:02
at you know you know where is the main
21:05
focus it’s more on on the output than on
21:07
the input it’s
21:08
more like we call it like downstream
21:11
than
21:11
upstream but but of course i mean that’s
21:14
that’s a huge issue like the rare
21:16
earth discussion i said emissions
21:19
right that would be emissions as well it
21:22
would seem like
21:22
emissions emissions i mean honestly you
21:25
know
21:27
it’s you know when i read like
21:30
how many emissions gpt3
21:33
must have generated you know the
21:35
training the energy that we
21:37
used
21:37
for training gpt3 and i don’t want to
21:40
downplay gpt3 and i’m fascinated by
21:42
what it is capable of
21:44
but i’m not sure yet how long it takes
21:47
until
21:47
gpt3 will save any human life
21:50
right it’s also you know the same about
21:52
bitcoin and
21:53
all these you know um like
21:57
technologies that that that don’t
21:59
directly contribute
22:00
to the real economy and and they use up
22:03
so much
22:04
energy and even though the tech
22:06
companies say oh we’re going to be
22:07
carbon neutral or even carbon negative
22:10
you know as long as they uh you know
22:12
sell their cloud services
22:14
to the fossil industry and that’s
22:16
basically
22:17
irrelevant so yeah there’s this
22:20
hypocrisy and they say we’re going to
22:22
plant
22:22
trees the size of i don’t know
22:24
kazakhstan and uh good luck with that
22:26
yeah you just made me write a little
22:29
note to remind myself to look into
22:30
carbon offsetting this show
22:33
definitely some emissions okay great
22:38
well then so so offsetting seems like it
22:40
must be an enormous part of the
22:42
discussion
22:43
in in many parts of the industry so uh
22:46
corporate partnerships and foundations
22:48
they must be investing in carbon
22:50
offsetting and other
22:51
projects and that feels like it’s really
22:53
needed that’s a an
22:54
easy piece in some ways but i would
22:57
imagine there may also be areas of
22:59
concern there
23:00
yeah because i mean offsetting is a
23:02
really good thing it’s good to know that
23:04
you know
23:05
there are ways to kind of uh undo or
23:09
like
23:09
reduce i say yeah compensate for the
23:11
carbon for the emissions you generate
23:13
and uh but the first question to ask
23:16
should
23:17
not be can i offset it or how can i
23:19
offset it but it should be
23:21
like is what i’m doing is it even
23:23
necessary so i mean
23:24
let’s say my favorite passion is to fly
23:28
to barcelona every other weekend just
23:31
for fun
23:32
for a party so instead of offsetting it
23:35
maybe i should stop doing it
23:37
and and you know the same for companies
23:39
i said you know
23:40
you know the tech companies saying we’re
23:42
going to be carbon negative
23:44
but they make most money from totally
23:46
unsustainable industries by selling
23:48
their services to unsustainable
23:49
industries
23:50
that’s kind of a yeah a bit of a
23:52
double-edged sword
23:54
yeah and uh it’s also controversial you
23:56
know they
23:57
always joke about the amount of trees
23:59
that have been promised to
24:00
to be planted i mean i can’t imagine you
24:04
i’m waiting i’m waiting for the day when
24:05
i look out of my window in the middle of
24:07
the city and they start planting trees
24:10
i think like the whole planet must be
24:12
covered with trees
24:14
and the the thing is it takes decades
24:17
until the tree you plant
24:19
really turns into a carbon sink so
24:22
all that planting trees um and then also
24:25
saying like oh we’re going to bring
24:27
solar
24:28
cookers to to to people in africa it’s
24:31
also you know
24:32
with a latently colonialist attitude so
24:34
you know
24:35
we we make it possible for you to cook
24:37
in a clean way but we dictate it to you
24:40
and
24:40
yeah so do you think it’s not that easy
24:42
do you think the tree planting
24:43
discourse is really shorthand though
24:46
it’s like it’s an easy way to convey
24:48
to or do you think it’s a promise that’s
24:50
just never going to be delivered on
24:51
yeah it sounds nice but the result i
24:53
think there’s some double accounting
24:55
sometimes
24:56
some trees are counted twice it’s really
25:00
it’s easy to get the credit for
25:02
uh for planting a tree but it’s hard to
25:04
verify the reduction you achieve because
25:06
it takes such a long time
25:08
right and and so there are all these
25:10
issues so
25:11
i think you know it’s also interesting
25:13
easyjet for example says
25:15
oh we’re going to offset uh our
25:17
emissions and then you see
25:19
they calculate i think four dollars per
25:23
ton of carbon that they emit to
25:25
compensate
25:26
okay but microsoft calculates 15
25:29
per ton that they emit so you can
25:31
imagine the difference that you can make
25:32
with four
25:33
dollars per ton or fifteen dollars per
25:36
ton that’s huge
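[Editor’s note: the per-ton difference discussed here can be made concrete with a quick back-of-the-envelope sketch. The $4 and $15 per-ton figures are the ones quoted in the conversation; the emissions total below is a made-up number purely for illustration.]

```python
# Per-ton offset prices as quoted in the conversation (approximate,
# secondhand figures, not official company numbers).
OFFSET_PRICE_PER_TON = {"easyjet_quoted": 4.0, "microsoft_quoted": 15.0}

def offset_budget(tons_co2: float, price_per_ton: float) -> float:
    """Money set aside to compensate a given amount of emissions."""
    return tons_co2 * price_per_ton

# Hypothetical annual emissions, chosen only to show the gap in budgets.
emissions = 1_000_000  # tons of CO2
for label, price in OFFSET_PRICE_PER_TON.items():
    print(f"{label}: ${offset_budget(emissions, price):,.0f}")
```

At the same emission level, a $15-per-ton budget funds 3.75 times as much compensation effort as a $4-per-ton budget, which is the "enormous difference" being described.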
25:37
that’s an enormous difference yeah so
25:40
i imagine easyjet is generating more
25:42
than a few tons of
25:44
carbon yeah well recently they have
25:46
slowed down a bit
25:47
yeah right right as we all have you made
25:50
me very uh
25:51
wistful when you talked about going to
25:52
barcelona for a party
25:58
uh you know i want to switch gears for
26:00
just a moment and and ask you about the
26:01
contact tracing project speaking of the
26:03
impact that you know covid has had on
26:06
our lives in terms of
26:07
you know keeping us from travel and
26:08
things like that um
26:10
what what how has that project been to
26:13
to try to put
26:14
some uh audit i think you’re you’re
26:16
working on auditing the process of
26:18
contact tracing
26:19
what is it exactly it’s a great
26:21
initiative it’s by
26:23
this not-for-profit organization called
26:25
for humanity that is
26:26
that is led by ryan carrier a guy from
26:30
east coast like uh near new york and and
26:33
he has launched this organization and
26:35
this whole project and we’re still
26:36
working on it heavily it’s not you know
26:38
it’s not done yet so on monday i think
26:41
we’re going to
26:42
release 2 000 lines of code so it’s an
26:45
audit system
26:47
contact tracing not just the digital one
26:49
but also the the analog one you know the
26:51
making phone calls etc and it uh
26:54
covers basically five areas that we call
26:57
silos so it covers aspects related to to
27:01
of course to privacy
27:02
but also to bias to cyber security
27:05
and to ethics and so
27:09
ethics privacy trust bias cyber security
27:12
so those five silos and
27:13
some questions are really technical you
27:16
know
27:16
how you know how is the data stored and
27:18
what kind of protocol are you using and
27:20
you know questions that i don’t really
27:22
understand because i’m not a tech
27:23
person an engineer but there’s also a
27:26
lot of
27:27
as i said one issue one silo is trust
27:30
like you know
27:31
how is it enforced how transparent is it
27:34
um you know is this a
27:37
contact tracing system that you’re using
27:39
is it authorized by
27:41
public authority is it automatically
27:44
downloaded so these are all ethically
27:46
relevant questions you know so
27:48
uh once we have that ready that um audit
27:51
system
27:51
uh we can basically you know implement
27:54
it with
27:55
authorities that use contact tracing and
27:57
they like an audit system they can say
27:59
oh yeah we
27:59
we have we can certify that only these
28:02
these people have access to the data or
28:04
that the data is only
28:05
stored in an aggregate manner etc and so
28:09
working on this has made me aware well
28:11
we’re even more aware of the complexity
28:13
of contact tracing and
28:15
you know i’m so you know in my swiss
28:17
ivory tower where everything is like
28:20
um democratic and
28:23
directly democratic so i only became aware
28:26
like
28:26
how many different facets of contact
28:28
tracing systems
28:29
there are and what
28:32
is possible with contact
28:34
tracing when i
28:35
joined that group so we are now i don’t
28:37
know how many people we are working with
28:38
but we’re still looking for more people
28:40
and everyone can join it’s you know
28:41
it’s open for everyone to join and it’s
28:44
a really
28:44
it’s a really great initiative so we’ll
28:48
try to include that
28:48
the url yeah in in the show notes is it
28:51
just for humanity.org or something like
28:53
that uh yeah i would have to look it up
28:54
okay all right i’ll look it up to be
28:56
sure so
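[Editor’s note: the five audit silos described above could be organized as a simple checklist structure. The sketch below is purely illustrative, assuming hypothetical item names and scoring; it is not the actual ForHumanity audit code, whose real criteria and structure differ.]

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class AuditItem:
    silo: str                      # one of: ethics, privacy, trust, bias, cybersecurity
    question: str                  # what the auditor checks
    passed: Optional[bool] = None  # None = not yet assessed

@dataclass
class ContactTracingAudit:
    items: List[AuditItem] = field(default_factory=list)

    def silo_summary(self) -> Dict[str, Tuple[int, int]]:
        """Per silo: (items passed, items assessed so far)."""
        summary: Dict[str, Tuple[int, int]] = {}
        for item in self.items:
            if item.passed is None:
                continue  # unassessed items are not counted yet
            passed, total = summary.get(item.silo, (0, 0))
            summary[item.silo] = (passed + int(item.passed), total + 1)
        return summary

# Example items echoing questions raised in the conversation.
audit = ContactTracingAudit(items=[
    AuditItem("privacy", "Is the data stored only in aggregate form?", True),
    AuditItem("trust", "Is the app authorized by a public authority?", True),
    AuditItem("trust", "Is installation voluntary, not automatic?", None),
])
print(audit.silo_summary())  # {'privacy': (1, 1), 'trust': (1, 1)}
```

Mixing technical checks (storage, protocols) and ethical ones (authorization, voluntariness) in one checklist mirrors the way the audit spans both kinds of question.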
28:58
so i would imagine that there’s a
29:00
there’s some tech challenges in there
29:01
you talked through some of those and
29:02
and there’s some human challenges right
29:04
like i think on some level it feels like
29:06
whenever this conversation
29:08
comes up about contact tracing everyone
29:10
knows that that or
29:11
i shouldn’t say everyone knows a lot of
29:13
people understand that that is key to
29:15
managing the pandemic yet it feels as if
29:19
there’s an awful lot of suspicion or
29:22
doubt about how that data is going to be
29:25
used and how safe it is to participate
29:27
in these programs so
29:28
what have you learned about the tech
29:30
challenges versus the human challenges
29:31
and
29:32
how do we overcome them yeah i mean i’ve
29:35
learned that
29:36
to relate but i cannot really answer the
29:38
technical questions you know it’s more
29:40
about
29:40
accuracy there so you know how much should
29:42
the distance be between people etcetera
29:44
for the signal to work
29:46
and you know what is acceptable false
29:49
positive rate etc well that’s also an
29:50
ethical question of course
29:52
because you don’t want to be notified
29:53
every other day that you should
29:55
quarantine yourself
29:56
right you know unjustifiedly so but all
29:59
the interoperability what happens you
30:01
know that
30:02
what you have in europe now what happens
30:04
if you go to another country so
30:05
switzerland is not part of the eu
30:08
so we’re not part of the
30:09
interoperability network that the eu has
30:11
set up because we don’t have
30:12
an agreement with them
30:13
so so our app does not work in germany
30:16
etc so but that’s a price we pay for
30:18
being stubbornly
30:20
independent in quotes so
30:23
and so so the you know all the technical
30:25
questions and but
30:26
also the human challenges like you know
30:29
how you how you ensure trust what makes
30:31
for a trustworthy system and and one
30:33
thing is of course technology
30:35
but what i can see in switzerland and
30:37
probably also in the us
30:38
you have these decentralized versions
30:41
you know where it is clear
30:42
that it is only stored decentralized
30:45
on the gadgets
30:46
and that you know the authorities don’t
30:48
have access to it
30:49
it’s without location tracking etc and
30:52
even though we have the
30:53
the the most privacy preserving
30:56
technology that’s possible people still
30:58
have
30:58
a heavy distrust and it’s not even you
31:01
know the distrust that you might expect
31:02
because we know that
31:04
the whole decentralized technology
31:06
depends on
31:07
google and apple it’s not just a
31:08
distrust towards
31:10
uh tech companies but it’s just a
31:13
general
31:13
unease and then i sometimes i wonder but
31:16
these are just my thoughts whether
31:17
people
31:18
just don’t want to face the consequences
31:21
of having the app
31:22
um on their phones because you know it’s
31:26
more interesting the question is
31:28
what happens if you get notified uh is
31:31
the quarantine enforced is someone going
31:32
to look whether you’re really staying
31:34
indoors
31:35
are you going to be compensated for the
31:38
loss
31:38
in in your wage while you are at home if
31:41
you’re a white collar worker you might
31:43
continue to do your job at home that’s
31:44
the case in switzerland where you get
31:46
paid and
31:46
if you’re a blue collar worker you’re
31:48
going to to lose your
31:50
money so it’s also creating very unequal
31:53
incentives and moral hazards for
31:56
different kinds of people
31:57
and i sometimes i think i know you have
32:00
a degree in german
32:01
i think it’s a very german trait
32:03
the thing about privacy i’m sure
32:05
you understand
32:07
i’m not sure our listeners do but
32:10
some of them i know we have some german
32:12
viewers on yeah we have some but i mean
32:15
for this discourse it’s like a
32:18
distraction maneuver it’s not really
32:20
hitting the point you know it’s not
32:21
really about privacy it’s more about
32:23
am i willing to you know bear the
32:26
consequences
32:27
of my actions and not just am i
32:30
willing but also am i
32:31
able to because maybe i’m not able to
32:33
protect myself from being exposed to the
32:35
virus because i have to use public
32:37
transport etc
32:38
and i have duties and then am i able to
32:41
am i financially able to lock myself
32:43
in and maybe not earn money etc so i
32:46
think the privacy is just the
32:47
the lowest hanging fruit to focus on
32:49
yeah but
32:51
uh there is a lot more behind that it’s
32:53
really challenging for the solidarity in
32:55
all our communities
32:56
i think solidarity challenge yeah that’s
32:58
a fair it’s such a fair
32:59
um set of questions to surface and does
33:02
it also bring you into
33:04
have you begun to have any discussions
33:06
uh through for humanity or any of the
33:08
work that you’re doing
33:09
around some of the proposed ideas around
33:11
sort of immunity passports or
33:13
theoretical ideas about what might
33:15
happen as a next stage beyond
33:17
these contact tracing apps yeah i have
33:20
followed those debates a bit we haven’t
33:22
really discussed it in context for
33:23
humanity yet or not
33:25
as far as i know but it’s just the
33:27
immunity passports again it’s it’s this um
33:30
continuation of replication of i mean
33:34
structural injustice it’s always along
33:37
the same
33:38
dividing lines of society kind of
33:41
yeah and and you know it will those who
33:44
are more vulnerable and more dependent
33:47
will be more heavily affected and pay a
33:49
higher price
33:50
for not being immune or be forced to
33:54
expose themselves
33:56
as soon as they are immune even though
33:58
we don’t know yet for how long
33:59
immunity lasts etc so you know if you
34:02
have
34:03
plenty of space and money you are better
34:06
off in times of corona because you can
34:07
keep a distance
34:09
from other people and you can you know
34:11
buy yourself
34:12
out of the pandemic basically somehow
34:16
and that’s i think so the same with
34:18
contact tracing very different
34:19
implications for different people
34:21
immunity passports and and there is no
34:23
quick technological fix and also what i
34:25
observe you know you can have but the
34:27
best technology in the world you know
34:29
the best privacy preserving technology
34:31
and you can have the most democratic
34:33
authorities in the world as we have in
34:35
switzerland if these
34:36
authorities are not are not you know
34:39
up to date with technology know-how
34:41
that’s the case in switzerland where
34:42
they use fax machines
34:43
where they prefer fax machines over
34:46
digital tools
34:47
right you cannot use that technology in
34:50
that totally
34:51
analog context so there needs to be a
34:54
cultural
34:55
fit for the technology to your
34:59
authorities and the people who use the
35:01
technology
35:02
it’s like those governments are working
35:04
with my landlord who insists on me
35:06
faxing documents
35:08
i don’t know what to do with it i got the
35:10
cd for an x-ray
35:12
and what am i doing with the cd sorry
35:17
it’s such a weird time
35:20
yeah it’s a weird time because not only
35:22
of the pandemic but
35:24
you know we’re we’ve been in the midst
35:26
of this multi-year
35:28
digital transformation i mean it’s been
35:30
digital transformation has been a hot
35:31
topic for so long anyway
35:34
but but it feels like we have we were
35:36
kind of in the throes of a big push and
35:39
now
35:39
you know it’s a much bigger push but
35:42
what really strikes me
35:44
about your perspective and your
35:45
background is that you’re in this
35:48
very interesting position to be able to
35:51
think
35:52
systemically across a lot of different
35:54
dimensions
35:55
and not many people have that that
35:58
perspective
36:00
no it’s it’s it’s nice i take it as a
36:02
compliment it’s sometimes hard to frame
36:04
so it’s always
36:05
like great to have a chance to really
36:06
talk about these issues in depth
36:08
and how they relate in my opinion
36:10
because these are you know they’re
36:12
connected and even contact tracing
36:14
you know it all you know like it touches
36:16
on the pandemic touches on whole
36:18
questions of welfare state and social
36:20
security etc and
36:21
and it’s all all kind of connected so
36:24
yeah i really appreciate that chance
36:26
yeah you know it’s been a theme of my
36:28
work for the last couple years that
36:29
everything is connected and nothing has
36:31
proven it more
36:32
than this pandemic but but i i think
36:35
even looking at
36:36
your work it’s even more clear like when
36:38
you talk about things like you know
36:40
ghost workers and the local economies
36:42
when you talk about
36:43
uh you know who’s impacted across
36:45
different types of earners and
36:47
and people who are in different sort of
36:49
class stratifications in society
36:52
and you have so many different lenses
36:54
through which to look at this
36:56
with a lot of credibility so that’s
36:58
fascinating
36:59
yeah thanks for that yeah well you know
37:02
one
37:02
one thing i want to ask you is a
37:04
question that i like to ask some guests
37:06
now and then is when you think about
37:07
technologies
37:08
um and we talked about tech for good and
37:11
ai for good and all this
37:12
but when you think about the
37:13
technologies that you see on the horizon
37:15
that you see being developed and that
37:18
are emerging
37:19
which ones of them strike you
37:21
potentially as
37:22
as good for humanity as a boost to
37:24
humanity
37:26
i’m really excited about uh you know
37:29
technological developments in healthcare
37:30
so and also again fascinating from an
37:33
ethical perspective is that if you you
37:35
know
37:36
you talk about precision medicine you
37:38
get you know the the right
37:39
treatment based on all your specific
37:42
characteristics and with the help of you
37:44
know ai this can be
37:46
you know patterns can be evaluated
37:48
better so there it’s like a reverting of
37:50
the ethical problem it’s like
37:52
it’s a it’s context where you want to be
37:54
discriminated
37:55
please discriminate me i want you to
37:57
take into account
37:58
all my characteristics i want you to
38:00
know how many you know veggie burgers i
38:02
eat
38:03
i want you to know how much i sleep
38:06
whatever i do all my risk factors etc
38:09
because if this helps getting
38:11
me to get the right treatment uh you
38:13
know i’m willing to lay it there so
38:16
i think you know precision medicine uh
38:18
is going to be a
38:19
really great uh area but also in
38:21
medicine we can
38:22
read that and i think that’s true i mean
38:25
that machine learning can help
38:26
identify candidates for vaccines mass
38:29
vaccines
38:29
much faster of course always needs to be
38:32
safe and reliable
38:33
and and that’s a problem of the black
38:34
box and intransparency sometimes
38:37
but so all of healthcare and as i stated
38:40
before
38:41
in sustainability for example i don’t
38:43
know recently it’s a bit quirky but i like
38:45
that stuff i read about a challenge
38:47
where
38:48
um coders were asked to to to develop
38:52
okay i call it facial recognition for
38:54
snakes
38:55
oh yeah not just it’s not just the face
38:58
of snakes but you know
38:59
the goal was to develop an app that you
39:01
know you could take picture
39:02
when you get bitten by a snake you take
39:04
a picture and then the snake is
39:06
automatically identified so when you get
39:08
to the hospital you can say
39:10
this type of snake has bitten me because
39:12
it’s very important to identify the
39:14
snake correctly to give you the right
39:15
anti-venom
39:17
so and this is so fascinating you know
39:19
why why
39:20
waste our money and our emissions on
39:22
facial recognition on people i don’t
39:23
want to be recognized please
39:25
but that snake doesn’t care whether it
39:27
is recognized it has to be recognized
39:29
even
39:30
and that’s you know where you can use
39:31
this machine learning for good so all
39:33
these there are so many fascinating
39:35
areas where you don’t run into these
39:36
ethical problems
39:37
yeah yeah it also strikes me that you’re
39:39
talking less about recognition
39:41
in in a an identity sort of sense as
39:44
you’re talking about recognition
39:45
in a pattern recognition sort of sense
39:48
right so it’s not
39:49
important that it’s snake jane doe
39:52
versus snake john what about naming the
39:54
snake we’re not going
39:59
but it has no consequences for the snake
40:00
no culpability for the snake
40:04
no that’s a that’s a really interesting
40:05
premise and also the the precision
40:07
healthcare
40:08
seems like such a an exciting idea
40:11
because it does seem like
40:13
there’s at least when i think about
40:15
american healthcare and of course you
40:16
know we’re one of the most screwed up
40:18
systems
40:19
in the world uh in terms of the inequity
40:22
of it and
40:22
we have so much richness in the system
40:24
and it’s so unavailable to so many
40:26
um but that potential seems like it
40:29
could start to right some of that
40:30
potentially right like you could
40:32
you could be able to integrate uh what
40:34
you know from one specialist what you
40:36
know from another specialist and no
40:38
one ever has time or the capacity to
40:40
bring to
40:42
a doctor and say you know here are all
40:44
the things that are going on with me
40:45
across
40:46
this different health system in this
40:47
different health system because you
40:48
can’t get doctors to talk to one another
40:50
no i think that’s you know i always rant
40:52
about democratizing
40:54
ai because i i find it a misuse of the
40:58
term democracy because i’m a political
41:00
scientist by training
41:02
so because it suggests like what is it
41:04
is it going to be participatory ai
41:06
or self-legislating ai but i you know
41:08
what people mean when they talk about
41:10
democratizing ai is
41:12
that they make it accessible to
41:14
people
41:15
and i think in healthcare ai
41:18
it’s not about democratizing ai it’s
41:20
about democratizing health care because
41:22
you know if if you have some reliable
41:25
programs that can you know
41:27
do a radiologist work partially and not
41:30
fully substitute him or her but
41:31
partially it means that it is much
41:34
easier available
41:35
for more people so so it lowers access
41:38
barriers if you have this technology at
41:40
scale and of course
41:41
again there will be unfairness because
41:43
some people will be able to still afford
41:45
a human doctor so
41:46
people will say oh can you still afford
41:48
a human doctor so
41:50
no sorry i have to take the ai of course
41:52
it will never be entirely fair
41:54
but i think there it really it is okay
41:57
to talk about democratization for once i
41:59
accept it for once
42:00
okay but you’re being very specific
42:03
about
42:03
what it is that’s being democratized
42:06
yeah yeah
42:06
it is yeah that’s a really important
42:08
distinction too and you also touched on
42:10
something that reminded me of
42:12
you recently uh tweeted about this
42:14
notion of participation
42:16
watching right and and the notion that
42:19
you can’t have
42:20
uh participatory ai if what you’re
42:23
really talking about is
42:24
you know people that are clicking on
42:27
recaptcha images that are
42:29
stop signs and that’s just training ai
42:32
you know you’re distributing the work
42:33
across
42:34
a lot of random people and there and
42:37
their human experience
42:38
but it’s not really making the ai any
42:41
more democratized it’s not making
42:43
that process any more true participation
42:46
and you know it’s like this the the
42:48
ideal that i’m adhering to like the
42:50
human in the loop
42:51
i mean these people they are very human
42:53
but they’re not humans in the loop right
42:55
you know it’s not like they really have
42:57
a human
42:58
impact and they’re degraded to doing
43:00
work and they’re
43:02
kept probably separately from each other
43:04
so they will never unionize and they
43:05
will never rise up against their bosses so
43:08
i mean it’s it’s like it’s it’s a very
43:10
dehumanizing
43:11
way of having humans in the loop yeah
43:14
it’s like i guess
43:15
adjacent to the loop or under the loop
43:17
or yeah
43:18
something like something like that
43:22
do you generally think about technology
43:25
as
43:25
empowering humanity overcoming it or
43:28
threatening it or
43:29
some other relationship to uh between
43:32
humanity and technology
43:34
i think it’s all at once well maybe
43:37
at least of all overcoming i don’t see
43:40
like the a.i
43:41
overlord knocking on our door anytime
43:43
soon
43:44
i’m not transhumanist i’m i’m a
43:46
permanently human humanist
43:48
i just finished reading harari’s homo
43:51
deus
43:52
and it took me it took me a couple of
43:54
weeks but it takes a while
43:57
yeah like wow humanist is a is a
44:00
term he uses critically and then no no
44:03
i’m i’m proud to be a humanist
44:04
and uh you know the negative term would
44:06
be anthropocentrist
44:08
like putting the human in the center and
44:10
disregarding all non-human entities
44:12
but you know i can only think from my
44:15
own species perspective and i can still
44:17
care about animals et cetera but i’m
44:19
humanist i believe that there is
44:20
something distinct
44:21
distinctive about humans that we need to
44:23
keep alive and
44:24
uh so this is our responsibility that we
44:26
have as humans
44:27
and so it’s it’s but it is empowering of
44:30
course because it you know
44:32
it connects us it makes like technology
44:34
it connects all of us it makes
44:36
information accessible
44:37
it kind of you know on so many ways
44:41
you know the democratization aspect
44:44
is real so it is empowering
44:47
it is threatening of course because if
44:51
technology you know the whole fake news
44:53
and and then the deep fakes etc that’s
44:55
really threatening
44:56
also immediately in the context of
44:59
upcoming elections and even in
45:00
democratic big
45:01
countries and uh and so it is
45:04
threatening in many ways
45:06
and uh overcoming it is the least i’d
45:08
say
45:09
so but it has always been that you know
45:12
i mean
45:13
you know nuclear weapons nuclear power
45:16
has already threatened us so yeah
45:18
i love what you said about the the sort
45:21
of being a proud humanist because i feel
45:22
like
45:23
it’s it’s funny to me since i am i am
45:25
vegan i’ve been vegan for 22 years so i
45:27
am very much concerned with animal
45:30
welfare
45:31
i i want there to be uh concern
45:34
for all living things and you know
45:37
absolutely
45:38
but i do agree with you i completely
45:40
think that there’s something
45:41
special about humanity and as a human
45:44
that may be a biased perspective but i
45:46
think that it makes sense
45:48
in the context of human creation of
45:50
technology
45:51
to protect that which is human uh as we
45:55
develop technology right
45:57
well i mean first of all you know uh
45:59
being a humanist doesn’t mean
46:01
that you cannot or should not be
46:02
vegetarian or vegan i mean i’m i’m like
46:05
95 percent
46:06
vegetarian or 99 percent vegetarian so
46:09
so one is like your you call it
46:12
epistemological uh perspective like
46:15
what can you know i can only know what
46:17
humans know and what humans feel
46:19
i cannot certainly know what animals
46:22
feel i have
46:23
a lot of indications that they don’t
46:24
like to suffer they
46:26
don’t want to be killed and kept in you
46:28
know in stables et cetera
46:30
and so that’s why i don’t eat them uh
46:32
i’ll still eat their
46:33
cheese etc but um so and and i
46:36
that’s you know from epistemological
46:39
restriction that i have as human and the
46:40
other thing is do i think they have
46:42
rights and i think they do have rights
46:43
so in that way i’m not a humanist
46:45
but um um yeah so so what we need to
46:49
so i think one of the biggest
46:51
achievements is that like
46:53
200 and you know 40 years ago
46:56
when the enlightenment set in you know
46:58
that movement that philosophical
46:59
movement uh
47:00
in europe immanuel kant et cetera where
47:03
he said
47:04
hey people dare to use your own mind
47:07
it was like a wake-up call because until
47:09
then we have been called
47:10
as you know the slaves of god or i mean
47:12
that’s i’m sorry i don’t want to you
47:14
know offend anyone but we didn’t really
47:16
make an effort to to to explore the
47:18
world because we thought every
47:19
everything was determined by god so
47:21
by stepping out of this dependency and
47:24
using our own brains we
47:25
liberated ourselves from the shackles of
47:28
religion or other
47:30
authorities and so now are we just you
47:33
know taking it too far have we used our
47:35
brains so far that we’re eventually
47:37
creating
47:37
machines that are smarter than us and
47:39
they’re kind of imposing their decisions
47:41
again
47:42
upon us and not just imposing their
47:43
decisions upon us but also imposing
47:45
decisions that are
47:48
about equally intransparent as god’s
47:51
decisions
47:51
if you look at certain algorithms what
47:54
is this like
47:55
are we inverting are we going back into
47:57
places of darkness
47:59
yeah yeah that’s a really interesting
48:01
way to frame that
48:02
uh that it takes the um the human agency
48:06
out of of the framework yeah i think
48:08
it’s a duty
48:09
we cannot delegate our responsibility to
48:12
machines we can use machines to improve
48:14
our
48:14
you know um health and our well-being
48:17
etc to improve the world
48:19
but we cannot entirely delegate
48:22
responsibility to machines
48:23
especially not if we don’t fully
48:26
understand them
48:27
and if we cannot revert or intervene at
48:30
any point in time
48:32
so when you think about humanity and the
48:35
human condition
48:36
and what it means to be human what what
48:39
do you feel like
48:40
is the most uniquely human trait or what
48:43
do you kind of come back to when you
48:45
think about humanity as the thing that
48:47
really characterizes
48:49
human experience or or what it means to
48:51
be human
48:52
i know i’m totally out of fashion with
48:54
that but i’m a
48:55
i’m a bit of a kantian i think about
48:57
free will and and responsibility you
49:00
know
49:01
the ability to take on responsibility
49:04
and to kind of decide how we want to act
49:07
and then
49:07
be held accountable for how we acted
49:09
which you know
49:10
the whole agency thing which is also
49:12
what sets us apart from
49:14
animals you know i can’t hold a dog to
49:17
account when he sits on the couch i mean
49:19
he wouldn’t understand why
49:21
so so and and as hans jonas like a
49:24
german-
49:25
american philosopher said uh
49:30
something like uh by having this
49:32
responsibility we need to keep this
49:35
responsibility alive and we must never
49:37
extinguish the human
49:38
species because extinguishing the human
49:41
species
49:42
would mean to extinguish responsibility
49:44
from this world
49:45
because when you know when humanity is
49:48
gone
49:48
or if we you know eradicate us he wrote
49:50
that in light of the
49:52
nuclear threats you know in the 70s etc
49:54
where suddenly you had the potential to
49:56
erase all of humanity with you know a
49:58
few nuclear bombs or like
49:59
significant parts of it that that’s not okay
50:02
we need to uh preserve our species
50:04
because
50:05
we are the only ones who are capable of
50:08
responsibility
50:09
and and and this gives us the
50:11
responsibility to keep ourselves
50:13
alive or like preserve ourselves now i i
50:15
really like jonas in that regard even
50:17
though he’s considered very
50:19
luddite or like anti-tech and
50:23
yeah that’s okay we’ve had people who
50:24
describe themselves as tech
50:26
abolitionists on this show oh
50:27
okay okay i’m relatively mild yeah
50:30
you’re being you’re being
50:31
right in line
50:35
i’m moderate moderate we’ll uh we’ll go
50:37
with that
50:38
um when you think about how tech plays
50:41
into
50:42
scaling the possible futures for
50:44
humanity i
50:45
i think you know the way i think about
50:46
it and i just want to frame this up is
50:48
that
50:49
there are ways that we could build
50:51
toward the best futures for all of
50:53
humanity and there are ways that we
50:55
could build toward the worst futures
50:57
for all of humanity and of course i feel
50:58
like you know we’re always doing a
51:00
little bit of both
51:01
um and i hope that we’re always trying
51:04
to aim for
51:05
the good side but what we have to
51:07
characterize what it is
51:09
what that means right what how do we
51:11
steer toward the best futures with tech
51:13
so
51:14
in your mind how do you think or what do
51:16
you think we can do
51:17
in culture and in business in
51:20
organizations to
51:22
to stand a better chance of bringing
51:23
about the best futures
51:25
with tech for humanity i think we have
51:28
great chance again by connecting the
51:30
dots so
51:31
after a long you know debate etc the the
51:34
united nations have finally established
51:36
those
51:37
17 sustainable development goals which
51:39
is kind of a global consensus and
51:41
and that’s not just about co2 emissions
51:44
but
51:44
that covers a wide range of uh goals
51:47
that are desirable
51:48
uh under the headline leave no one
51:51
behind
51:51
so democratizing leave no one behind
51:54
but in a real in the real sense of the
51:57
word basically
51:58
and so why not subject
52:02
ai or technological development to those
52:05
standards also if we have
52:07
a globally acknowledged framework of
52:09
course with flaws and trade-offs etc
52:12
but instead of reinventing the wheel and
52:14
saying like you know what kind of future
52:16
do we want
52:17
and then what can technology do in that
52:20
future
52:21
like ah we have already a vision of the
52:23
future we want or
52:24
what we have to avoid and what we want
52:26
to achieve with these
52:28
sdgs sustainable development goals why
52:30
not
52:31
integrate the whole tech discussion into
52:33
this framework and maybe add
52:35
some tech specific challenges that i
52:37
mentioned not everything fits under
52:39
these sustainable development goals but
52:40
that’s what i mean
52:41
don’t reinvent the wheel i mean tech is
52:44
just
52:45
a means to an end and if if the if the
52:48
end is sustainable development
52:49
goal sustainable development make tech
52:53
a means to achieve this end and measure
52:55
it
52:56
based on how it contributes to achieving
52:59
those goals
53:01
that makes sense and i talk about that
53:03
myself on a regular basis about using
53:05
the sdgs
53:06
as a roadmap for for development and
53:09
bringing ai
53:10
and emerging technology discussions in
53:12
alignment with that road map
53:14
and what’s interesting to me about that
53:16
too is that
53:18
it gives it gives plenty of
53:20
commercializable opportunities you know
53:22
you talked earlier about you can’t just
53:23
you can’t just have this conversation in
53:25
a vacuum not acknowledging
53:27
that corporations want to make money and
53:29
and yes you know there’s no reason why
53:31
there isn’t an incentive to make money
53:33
there’s plenty of incentive within
53:35
those sdgs uh when uh when technology is
53:39
applied to them or outside of technology
53:40
applications as well to make money
53:43
yes in alignment right now yeah
53:46
i see synergies but i also see red flags
53:49
like or
53:50
i mean but it’s also that sometimes you
53:51
think like uh technology
53:53
you know like deep fakes you couldn’t
53:55
say
53:56
they violate the sustainable
53:58
development goals so you cannot answer
54:00
all the the questions about tech with
54:03
the sustainable development goals i mean
54:05
the question about deep fakes just it’s
54:07
another question it’s just a purely
54:08
ethical question it’s just
54:10
i say ai for nonsense or a.i for bad or
54:13
ai for evil however you want to
54:15
call it and that’s a
54:16
an entirely separate discussion that we
54:18
also need to lead but a lot of it can be
54:21
aligned or like judged based on the
54:23
contribution of the sustainable
54:24
development goals
54:26
i think you just coined the hashtag ai
54:28
for nonsense
54:30
that’s the truth what um if if companies
54:34
if people are watching
54:36
and they’re representatives of
54:37
corporations or organizations and they
54:39
want to
54:40
take actionable steps to bring their
54:42
work in line
54:43
you mentioned the sustainable
54:45
development goals are there other
54:47
actionable steps or or kind of
54:49
guidelines that you can recommend
54:51
for for organizations and individuals to
54:54
to bring their work in line
54:55
with with these principles i mean
54:57
there’s a like
54:58
overwhelming amount of uh you know
55:00
voluntary standards
55:01
industry specific or regional standards
55:04
etc
55:05
you know it really depends on you know
55:08
where your business is located and what
55:09
you do etc but i think
55:11
very important is always talking about
55:13
participation participatory machine
55:15
learning
55:15
you know take your stakeholders on board
55:17
see who is affected by your business
55:20
and you know take them
55:23
on board
55:24
have a discussion take your critics on
55:25
board don’t co-opt them
55:27
take them on board listen to them and
55:30
and and so those multi-stakeholder
55:33
approaches i know
55:34
they have their own risks of uh
55:37
inequality
55:38
and and structural injustice etc
55:41
but you’re certainly doing better by
55:43
talking to your stakeholders than by not
55:44
talking so stakeholder approaches
55:47
yeah right yeah it’s come up a few times
55:49
on this show that we’ve talked about
55:50
you know the development of technologies
55:53
or of solutions without involving
55:54
communities that are going to be
55:56
affected by those solutions
55:57
is a completely silly and wrong-headed
56:00
approach
56:01
but yeah and and you’re right that
56:03
sometimes involving
56:04
those communities can actually lead to
56:07
problematic
56:08
uh work as well but it’s it’s got to be
56:11
better
56:11
in general it’s got to be the better
56:13
approach so that’s great
56:15
invite your stakeholders and invite the
56:18
communities
56:19
that are affected uh that’s wonderful
56:21
also uh
56:22
chris buehler coined the uh hashtag here
56:26
ai for nonsense invented by dorothea
56:29
baur so there you go
56:30
thanks wow nonsense i mean i feel like
56:34
that’s almost the twitter character
56:35
limit right there so
56:38
just all we can ever tweet is just that
56:40
hashtag
56:43
uh where can people find your work if
56:45
they want to follow along
56:47
i’m sure that you’ve got a lot of new
56:49
fans after the show
56:50
so where can they track you well i’m
56:53
mostly active on twitter as you know
56:55
that’s where i kind of spend too much
56:57
time you could say but i also gain a lot
57:00
by you know being up to date all the
57:03
time with the debates
57:04
uh twitter linkedin i have a website
57:07
that’s in urgent need of a
57:09
revision you can look at it but don’t
57:11
judge me based on my website
57:13
please and that’s uh consulting dot ch
57:16
is that right yeah that would be one but
57:18
don’t
57:18
don’t repeat it just you know it’s easy
57:20
to find your social media
57:23
i need to revise it for the yeah
57:26
fair enough i always figure it’s the the
57:28
cobbler’s kids you know that make the
57:30
the worst shoes or whatever that don’t
57:31
have shoes that’s the whole thing
57:34
that’s really undermining no i get it
57:37
it’s totally where i’m at too
57:39
dorothy thanks so much for being on the
57:41
show i know
57:42
uh there were there were a lot of
57:44
comments that i didn’t read aloud it was
57:46
a lot of people just going yay
57:48
yay they were so excited to have you on
57:50
so
57:51
thank you for being here it’s really
57:52
wonderful conversation i really
57:54
appreciate what you’re doing out there
57:56
to to bring together these
57:58
different types of conversations and
58:00
these different holistic views and make
58:02
sure
58:02
that corporate discussions and
58:04
organizational discussions are all
58:06
happening
58:07
in in alignment with one another so
58:08
thank you so much for that
58:10
thanks so much for having me it was
58:12
really pleasure to talk to you
58:13
thank you thanks bye-bye bye-bye
