The Tech Humanist Show: Episode 15 – Abhishek Gupta

About this episode’s guest:

Abhishek Gupta is the founder of the Montreal AI Ethics Institute (https://montrealethics.ai) and a Machine Learning Engineer at Microsoft, where he serves on the CSE Responsible AI Board. He represents Canada for the International Visitor Leadership Program (IVLP), administered by the US State Department, as an expert on the future of work.

He additionally serves on the AI Advisory Board for Dawson College and is an Associate Member of the LF AI Foundation at the Linux Foundation. Abhishek is also a Global Shaper with the World Economic Forum and a member of the Banff Forum. He is a Faculty Associate at the Frankfurt Big Data Lab at the Goethe University, an AI Ethics Mentor for Acorn Aspirations and an AI Ethics Expert at Ethical Intelligence Co. He is the Responsible AI Lead for the Data Advisory Council at the Northwest Commission on Colleges and Universities. He is a guest lecturer at the McGill University School of Continuing Studies for the Data Science in Business Decisions course on the special topic of AI Ethics. He is a Subject Matter Expert in AI Ethics for the Certified Ethical Emerging Technologies group at CertNexus. He is also a course creator and instructor for the Coursera Certified Ethical Emerging Technologist courses.

His research focuses on applied technical and policy methods to address ethical, safety, and inclusivity concerns in the use of AI in different domains. He has built the largest community-driven public consultation group on AI Ethics in the world, which has made significant contributions to the Montreal Declaration for Responsible AI, the G7 AI Summit, the AHRC and WEF Responsible Innovation framework, PIPEDA amendments for AI impacts, Scotland’s national AI strategy, and the European Commission Trustworthy AI Guidelines. His work on public competence building in AI Ethics has been recognized by governments from North America, Europe, Asia, and Oceania.

More information on his work can be found at https://atg-abhishek.github.io

He tweets as @atg_abhishek.

This episode streamed live on Thursday, October 22, 2020. Here’s an archive of the show on YouTube:

Show Notes & Highlights:

Just released as this podcast episode came out! Montreal AI Ethics Institute’s “The State of AI Ethics Report” (October 2020): https://montrealethics.ai/oct2020/

Links:
Montreal AI Ethics Institute
Abhishek Gupta

About the show:

The Tech Humanist Show is a multi-media-format program exploring how data and technology shape the human experience. Hosted by Kate O’Neill.

Subscribe to The Tech Humanist Show hosted by Kate O’Neill channel on YouTube for updates.

Transcript

02:41
hey humans hello
02:43
i’d like to see uh those of you out
02:46
there and glad to see some numbers some
02:47
some people tuning in i’m glad you’re
02:49
out there
02:50
go ahead and drop some comments into
02:52
wherever you’re watching just to say hi
02:55
we are getting ready to to fire it up
02:58
and we’re going to be talking about ai
03:00
ethics and all of the whatnot around
03:02
that so please start getting
03:04
your questions ready and your comments
03:06
and i’ll i’ll introduce you to our guest
03:08
in just a moment
03:09
i am of course your host kate o’neill
03:11
every week
03:12
it’s just me so let me hear from those
03:15
of you who are online say hi
03:17
tell me where you’re joining in from we
03:19
are already covering a pretty big chunk
03:21
of the globe
03:22
uh between me and our guests so i hope
03:25
we’re
03:25
covering even more of the globe with
03:27
some of our our watchers and listeners
03:30
today we’re gonna talk about ai ethics
03:32
get those questions warmed up
03:35
and uh this as you may know is a
03:37
multimedia format program as i like to
03:39
call it means that as i’m speaking
03:41
it’s being broadcast live but it’ll also
03:43
be it’ll live on as an archive on
03:46
all the channels where you can find it
03:48
now so that’s youtube and facebook
03:50
linkedin twitter and twitch even though
03:52
nobody watches us on twitch
03:54
they might start to after aoc was just
03:56
live on twitch last night so
03:57
that’s really getting getting the
03:59
channel going for us thanks aoc
04:02
uh so it’ll live on as an archive on all
04:04
those channels and then of course
04:06
uh so those of you who find it later uh
04:09
are seeing this as an archive so hello
04:11
to those of you in the future from those
04:12
of us in the past
04:14
we appreciate you also each episode gets
04:16
turned into a podcast the following week
04:18
so today’s
04:19
episode will be available as an audio
04:21
podcast next friday
04:23
and tomorrow last thursday’s will be
04:25
available so
04:26
keep up with the podcast that way if you
04:29
prefer
04:30
audio and every week we explore
04:32
different facets of how data and
04:33
technology
04:34
shape the human experience so i hope
04:36
you’ll subscribe or follow along
04:38
wherever you’re watching or listening to
04:39
this
04:40
don’t miss any new episodes because
04:41
they’re wonderful oh please do note that
04:44
as a live show we’ll do our best to vet
04:46
comments and questions in real time
04:47
we’re going to take those and and try to
04:49
make them live into our interactive
04:51
format but we
04:52
may not get to all of them so we very
04:54
much appreciate you being here
04:56
chatting with us and just generally
04:57
participating in the show
04:59
so now to introduce our guest today we
05:02
are chatting with abhishek gupta who is
05:04
the founder of montreal ai ethics
05:07
institute
05:07
and a machine learning engineer at
05:09
microsoft where he serves on the cse
05:12
responsible ai board
05:13
he is representing canada for the
05:15
international visitor leaders program
05:17
administered by the u.s state department
05:19
as an expert on the future of work
05:22
he additionally serves on the ai
05:24
advisory board for dawson college
05:26
and is an associate member of the lfai
05:28
foundation at the linux mount
05:30
uh yeah lf ai sorry that was a lot of
05:32
acronyms sorry lfai foundation at the
05:34
linux foundation we got it
05:36
abhishek is also a global shaper with
05:38
the world economic forum and a member of
05:40
the banff
05:41
forum he is a faculty associate at the
05:43
frankfurt big data lab at the goethe
05:45
university
05:46
an ai ethics mentor for acorn
05:48
aspirations and an ai ethics expert at
05:50
ethical intelligence company
05:52
he is the responsible ai lead for the
05:54
data advisory council at the northwest
05:56
commission on colleges and universities
05:57
he’s a guest lecturer at the mcgill
05:59
university school of continuing
06:01
continuing studies for the data science
06:03
in business decisions course on the
06:05
special topic of ai ethics
06:07
he’s a subject matter expert in ai
06:09
ethics for the certified ethical
06:10
emerging technologies group at certnexus
06:13
also a course creator and instructor for
06:14
the coursera certified ethical emerging
06:17
technologist courses
06:18
so if you want to take those you can
06:20
take them his research focuses on
06:22
applied technical and policy methods to
06:24
address
06:25
ethical safety and inclusivity concerns
06:27
in using ai in different domains
06:29
many of which we’ll discuss today he’s
06:31
built the largest community driven
06:33
public consultation group on ai ethics
06:35
in the world
06:36
that has made significant contributions
06:37
to the montreal declaration for
06:39
responsible ai
06:40
the g7 ai summit ahrc
06:44
and wef responsible innovation framework
06:47
i’m gonna have to ask for help on is
06:49
this p-i-p-e-d-a
06:50
amendments for ai impacts he’ll have to
06:52
help me with that one
06:54
scotland’s national ai strategy and
06:56
europe european commission trustworthy
06:58
ai guidelines
06:59
i mean this is quite a bio right his
07:02
work on public competence building in
07:03
ai ethics has been recognized by
07:05
governments from north america europe
07:07
asia and oceania
07:09
and we’ll give you more information on
07:10
how to find more about his work
07:13
toward the end of the show but gosh if
07:16
you’re
07:16
not bowled over and falling out of your
07:18
seat from how amazing those credentials
07:20
are
07:21
i hope you’ll help me welcome our
07:23
amazing guest
07:24
abhishek gupta so abhishek you
07:28
are live on the tech humanist show thank
07:30
you for being here
07:33
hey thank you for having me nice to see
07:35
everybody here
07:36
yeah and as i mentioned when we were
07:38
when i was warming up that we’re
07:40
already covering a lot of the globe you
07:41
are you are joining us from
07:43
india today right yeah yeah
07:46
joining everybody from india uh we’ve
07:50
been experiencing some heavy rains and
07:53
yeah it’s uh so yeah getting to enjoy a
07:57
little bit of that
07:58
it’s a little past midnight here so if i
08:00
seem a little bit loopy
08:01
i hope you’ll give me a little bit of
08:03
rope there loopy is
08:05
sort of the theme of the show so i don’t
08:07
think we’re gonna have any problem there
08:10
but yeah that’s it’s amazing that you’re
08:12
you’re uh
08:13
joining us so late i would be i would be
08:15
not competent
08:16
at midnight but
08:20
well it’s so cool like man what what a
08:23
background oh yeah correct me on the
08:24
pronunciation on that p-i-p-e-d-a is it
08:26
just
08:27
pipa or pi how do you say it
08:30
yeah yeah it’s pipeda so it’s uh
08:33
canada’s privacy legislation
08:35
oh okay great excellent yeah you’re
08:37
involved in everything you have a hand
08:39
in all of
08:40
it that’s amazing i also have a
08:44
stellar team behind me so that
08:46
definitely makes life a little easier
08:48
that’s fantastic how did you get going
08:50
in this field what got you started
08:54
um it was it was an interesting journey
08:56
so i was
08:58
at the first ai for good global summit
09:00
in geneva
09:01
in 2017 at the united nations
09:04
and it was interesting of course this
09:06
was one year before the
09:08
start of the or before the uh
09:11
you know sort of coming into effect of
09:13
the gdpr
09:14
and uh discussions in europe were much
09:17
further ahead
09:18
i have to say uh compared to north
09:20
america and the rest of the world when
09:21
it came to thinking about the ethics of
09:24
technology and of course you know gdpr
09:26
is specifically focused on privacy
09:28
and i i found it fascinating that
09:32
uh in in canada where we really pride
09:35
ourselves
09:36
on being inclusive being
09:39
you know concerned with some of the
09:40
societal impacts of technology we
09:42
weren’t really
09:43
paying as much attention as we should
09:45
have
09:47
to this topic and area and
09:50
the other thing that i realized was that
09:52
even the places where these
09:53
conversations were taking place a lot of
09:55
these were
09:57
sort of behind closed doors they were
09:58
limited in terms of
10:00
who was able to participate in those
10:02
discussions and i think
10:03
that was one of the things that really
10:05
stirred me towards
10:07
uh the work that i started to do which
10:09
was to invite people
10:11
from different parts of the
10:14
community from different walks of life
10:16
to really come in and participate
10:18
in these discussions because i think we
10:20
all have something to offer
10:22
uh we all have different experiences
10:24
that and and
10:25
a lot of the challenges that we’re now
10:26
facing have surfaced in one form or
10:29
another in different fields
10:30
and that was something that was missing
10:33
where
10:33
there was this sort of you know not to
10:36
maybe you know for the lack of a better
10:38
phrase there was a degree of elitism in
10:40
terms of
10:41
uh who was able to participate and where
10:42
these discussions were taking place and
10:44
that’s that’s sort of how i got started
10:46
was to really create an arena for people
10:49
to
10:49
feel welcome to feel uh appreciated in
10:53
terms of what they had to offer
10:55
and at the same time be a conduit for
10:57
providing a little bit more nuance to
11:00
the conversation
11:01
rather than trying to make blanket
11:02
statements saying that uh
11:04
you know robots are going to come and
11:05
take my job yeah
11:07
i like using that that uh phrase within
11:10
my work too just because it seems like
11:12
it’s the it’s the pinnacle of what
11:14
people’s anxiety is about well no no the
11:16
pinnacle is
11:17
i hope we don’t become servants to
11:18
robots like that’s sort of the
11:21
the worst case scenario that people are
11:22
always anxious about but
11:24
well and i love that you uh made sure
11:26
that you were doing the work
11:28
of of inclusivity as as it’s mentioned
11:30
in your bio that that is one of
11:31
your focus areas but you know a lot of
11:33
people talk about
11:35
broadening the the reach and bringing in
11:37
more of the community but
11:38
it’s it’s really important work to to do
11:41
that and actually
11:42
get you know more stakeholders and more
11:44
constituencies at the table
11:47
so yeah 100 percent yeah and and and you know a
11:51
lot of this
11:51
uh actually well while we were still
11:54
allowed to meet in person
11:56
this was something that we prided
11:58
ourselves on was to be able to invite
12:00
people
12:01
to you know different venues from all
12:03
these walks of life
12:05
uh to come together and and to have this
12:09
you know time together to discuss these
12:11
issues
12:12
where we we were able to facilitate uh a
12:15
sort of common ground when it came to
12:18
different levels of understanding uh but
12:20
different backgrounds and experiences as
12:22
well because
12:22
again as you said uh when it comes to
12:26
inclusivity inclusivity uh
12:29
shows up in different forms it can be in
12:31
the form of
12:33
tokenism where you know you you just
12:35
throw some
12:36
something on there to appear to be
12:38
inclusive but to really
12:40
be inclusive is hard work yeah and and
12:43
listeners will uh of course appreciate
12:46
that as they listen to your show they
12:47
already probably realize that
12:49
a lot of people doing great work in this
12:51
space but when it comes to
12:53
being really inclusive that’s hard work
12:56
it it requires persistence perseverance
12:58
uh and that’s something that we’ve you
13:01
know
13:02
kept us something that’s integral to our
13:04
work from the very beginning and i think
13:06
over time that’s paid off in terms of
13:08
the community that we’ve
13:10
managed to build so i think one of the
13:12
ways in which your work seems like it’s
13:14
inclusive
13:14
is that you’re not just focused on
13:16
montreal despite the
13:17
the name of the organization right
13:19
you’re you seem as if your scope is
13:22
actually
13:23
um broader than montreal it’s canada
13:25
it’s north america it’s really the whole
13:27
world you’re
13:28
really playing in a very global kind of
13:30
international arena so
13:32
how does your work entail keeping up
13:34
with with
13:35
ai ethics developments or how much does
13:37
it entail keeping up with developments
13:39
around the world as opposed to
13:41
you know just those that are within the
13:42
scope of montreal say or canada or north
13:46
america yeah no and and
13:49
uh so that that’s a great point and and
13:51
you know one of the reasons for why
13:52
uh uh you know we have montreal in our
13:55
name was well
13:56
because it’s it’s it’s my home and uh
13:59
you know that’s that’s where we started
14:01
but certainly we we now have a a very
14:04
global scope
14:05
we do keep up with most developments not
14:08
all
14:09
because well it’s just impossible with
14:10
the pace of change
14:13
and and the reason for doing that is
14:15
also
14:16
each different part uh sort of brings
14:20
a facet or nuance that gets missed
14:22
elsewhere
14:23
and i think one of the things that we’ve
14:25
noticed recently which
14:26
a colleague of mine and i highlighted in
14:29
a
14:30
recent mit tech review article was how a
14:33
lot of the ai ethics work
14:34
is sort of western and northern european
14:38
and north
14:39
american focused both in terms of the
14:42
impacts of technology the kind of
14:44
platforms that it analyzes but also the
14:46
sort of ethical philosophies that it
14:48
looks at and and there is there’s a
14:50
tremendous amount to be gained from
14:52
uh looking at perspectives from other
14:54
parts of the world
14:56
and and some of the challenges in other
14:58
parts of the world are
14:59
are also quite specific to those places
15:02
and that’s something that doesn’t get
15:04
talked about
15:04
often one of the things that we see uh
15:07
which
15:07
i think is particularly problematic is
15:10
lumping everything together
15:11
in the global south i mean the global
15:13
south isn’t a
15:14
a homogenous whole it is
15:17
uh rich with diversity each with its own
15:21
specific context its culture
15:23
their own sets of values and what people
15:25
aspire to
15:26
and and hold near and dear and and that
15:29
again i think
15:30
uh we we need to draw a bit more focus
15:33
towards that and and and through our
15:36
sort of global
15:38
you know analysis and and keeping in
15:40
touch with the developments from
15:41
different parts of the world we’re able
15:43
to
15:44
bring in some of those perspectives into
15:46
these conversations which is something i
15:47
think that
15:49
tends to get reflected in our work uh
15:52
when we’re talking about this
15:54
uh breaking out of that you know as you
15:56
said the north american mold
15:58
which is dominant yes but there are
16:00
other parts of the world as well
16:02
what what are some of the developments
16:04
that you see happening
16:05
in other countries or other parts of the
16:07
world that you think maybe north
16:08
american ethicists and stakeholders
16:10
should have their eyes on and should be
16:11
paying attention to
16:14
so one of one of the uh one of the
16:16
developments that i think
16:18
one of the developments or rather one of
16:19
the perspectives let’s say
16:21
that i think doesn’t get enough
16:23
attention is
16:25
uh the the way robots
16:29
and i’m talking about you know not
16:30
robots and terminators but like simple
16:32
embodiments but let’s say we’re talking
16:36
about social care robots and and the
16:37
perspective that
16:39
uh say the japanese have versus the
16:41
perspective that you have in the united
16:43
states and how that’s
16:45
uh there are differences in terms of how
16:47
robots are
16:48
uh portrayed from a cultural perspective
16:51
and subsequently how these societal
16:55
perceptions are
16:56
shaped and how willing and accepting
16:58
people are
16:59
of those of uh robots in a social care
17:03
setting
17:03
which is radically different in a place
17:05
like japan versus
17:07
in the united states other perspectives
17:11
that
17:12
are interesting to think about for
17:14
example
17:15
is the ubuntu philosophy when it comes
17:17
to thinking about
17:19
uh the place of a human being
17:23
and and how it is something that’s
17:25
relational
17:26
rather than rational um and and
17:29
there is a lot to be learned if we were
17:31
to reimagine
17:32
or let’s say imagine technology ethics
17:35
from these ethical philosophies
17:38
rather than just thinking about it from
17:40
you know utilitarianism
17:42
or from deontology or virtue ethics or
17:45
any of these
17:46
uh sort of let’s say more well-known
17:49
uh ethical philosophy so there are all
17:52
these different perspectives that you
17:53
would see from
17:54
different parts of the world that get
17:56
sort of missed in that
17:59
conversation yeah that i know john c
18:02
havens was talking about that too when
18:03
he was on the show and i love
18:05
that idea of bringing that ubuntu
18:07
philosophy
18:08
into into the the fold and making sure
18:11
that we’re having this you know very
18:12
integrative
18:12
holistic discussion about about all of
18:15
the different viewpoints
18:17
what um you know in general are you
18:20
seeing
18:21
movements towards certain kinds of
18:23
regulations
18:24
in other parts of the world that you
18:25
think are going to be
18:27
meaningful in their in the way they
18:29
influence like say
18:30
you know the gdpr for example has been
18:32
hugely influential
18:34
as you mentioned it’s you know privacy
18:35
focused but of course it has
18:37
repercussions for ai and for other parts
18:40
of technology are you seeing anything
18:41
like
18:42
that anywhere else on the horizon that
18:43
you think like oh this could really
18:45
this could really play out in some
18:47
interesting ways for ai
18:51
so i think you know one of the places
18:52
where i’ve seen
18:55
or let me let me rephrase i think i
18:57
think what’s interesting when we’re
18:58
when we’re looking at some of these
19:00
legislations like the gdpr is that
19:02
i think they were it it sets a great
19:05
standard right there is no
19:06
there are no two ways about it but it’s
19:09
it’s
19:10
given that it was the first endeavor
19:12
that was
19:13
you know sort of wide-ranging and
19:15
sweeping i think it’s it’s held up as
19:17
the gold standard
19:18
even though it does have uh sort of you
19:21
know chinks in the armor where
19:23
uh you know things do slip through the
19:25
cracks and there are ways that we can
19:26
improve it
19:27
on the other hand i feel that some of
19:30
the legislations like the bipa
19:32
in in the state of illinois uh the ccpa
19:35
uh these are these are interesting in
19:37
the in the fact that they’re they’re
19:39
quite targeted
19:40
in in terms of either uh you know sort
19:42
of domain scope
19:44
or in terms of uh regional scope and
19:47
uh there are pros and cons to this
19:50
approach right the pros being
19:52
that uh given that it’s more targeted uh
19:55
the potential for being more
19:57
fine-grained in terms of the
19:59
uh the measures that you can put through
20:01
the controls that you can ask for
20:04
and subsequently the impact that you can
20:06
have is just so much
20:07
uh uh it’s just so much better uh
20:11
compared to having something that’s very
20:13
abstract and high level uh and and of
20:15
course you know
20:16
legal scholars will challenge me that
20:17
the beauty of having
20:19
something that’s high level and abstract
20:20
is that it allows you for flexibility
20:23
uh when it comes to uh you know
20:25
encountering
20:26
uh you know uncertainty or different
20:28
scenarios that you didn’t envision in
20:30
the beginning
20:31
but i’m a big fan of both-and though so
20:33
i feel like that’s where
20:34
this really applies in this case
20:37
absolutely
20:38
and and and exactly here what what
20:41
happens then is that
20:42
there there is potential for a
20:46
different interpretations of these
20:50
regulations of these laws which leads to
20:52
inconsistency in application
20:54
and i think that’s part of the problem
20:55
in terms of the frustration that people
20:58
face
20:58
when they uh let’s say you know face
21:00
injustice because of an algorithmic
21:02
system
21:03
is depending on where you are uh
21:06
you might you know you might receive
21:09
different outcomes in terms of
21:11
the uh you know the process of recourse
21:14
the
21:14
the uh sort of judicial process that you
21:17
have at your disposal just because of
21:19
the way different bodies choose to
21:21
interpret those laws and
21:22
and again i think i think that’s you
21:24
know it’s it’s not a knock
21:26
on the legislative or the legal system
21:28
it’s just that
21:30
maybe there is room there for being a
21:33
bit more fine-grained in terms of the
21:35
recommendations that we make
21:36
and ultimately i think making them a bit
21:38
more actionable so that
21:40
there there is yes room for
21:42
interpretation but there isn’t
21:44
so much room for interpretation that
21:45
you’re twisting
21:47
it to your own needs so basically
21:49
following it to the letter but not in spirit
21:51
yeah that’s it makes sense and i i think
21:54
about that a lot with
21:55
things like as you mentioned the
21:56
illinois biometrics law and things like
21:58
that that
21:59
that they are you know at least within
22:02
the context of u.s
22:03
law it seems to happen a lot that you
22:05
get you know states or even cities that
22:08
that try out different kinds of
22:09
legislation and then you can sort of see
22:11
how it plays in a small scale before
22:13
it’s ever really entertained at the
22:14
federal level and i don’t i don’t know
22:16
how often
22:17
that model is you know plays out in
22:20
other countries in other parts of the
22:22
world that you know you get that sort of
22:24
almost like test ground for different
22:26
markets uh to see how these things play
22:28
on in small scale before they can
22:30
you know really be entertained at the
22:33
federal level
22:34
uh but but that seems like there’s a lot
22:36
of value to that to having you know as
22:38
you say at that federal level you
22:39
definitely want to have these nuanced
22:41
but abstract but you know
22:43
applicable kind of you know those kinds
22:44
of conversations
22:46
but it’s really it seems like at least
22:48
from my perspective
22:49
uh you know it must be a lot easier once
22:52
you have some
22:53
demonstrable kind of results that are
22:55
are from smaller
22:56
sort of test markets in a sense yeah
23:00
no and and i like that you use the word
23:02
uh you know test ground or you know
23:04
you know sandbox really and i think when
23:07
we you know we keep
23:09
mentioning how policy making and
23:11
lawmaking
23:12
you know always trails behind technology
23:15
uh
23:15
just because of you know the slower you
23:17
know cycle for it and whatever other you
23:19
know checks and balances that we have in
23:21
place rightly
23:22
so because we don’t want to rush things
23:24
at the same time
23:26
sometimes it feels like you know we’re
23:29
trying to legislate cars when we
23:31
uh you know still have horses uh or
23:34
sorry we were
23:35
trying to uh you know uh legislate for
23:38
horses
23:40
yeah and and and and and what’s
23:43
uh problematic there is it’s because of
23:45
that slow cycle right
23:47
in or if if we have these you know
23:49
smaller sandboxes and test grounds
23:52
then we could really attempt to
23:55
experiment and experiment in a positive
23:57
sense not experiment as in
23:59
you know trying to see you know if there
24:02
are some negative
24:03
you know harms that we can get away with
24:05
but really to see
24:06
how we can create the most positive
24:08
impact how we can be the most actionable
24:10
uh that we can uh and and and to do that
24:13
at a local level
24:14
to to involve at that local level the
24:17
community itself
24:18
in in gaining that feedback because
24:20
gathering feedback at a federal level
24:23
is incredibly hard to even assimilate
24:26
all of that and to feed that back into
24:28
the process whereas if you did it at
24:30
a municipal level now that’s something
24:32
that’s interesting because you
24:34
you do have some of those processes in
24:36
place to be able to you know manage
24:39
and assimilate and act on the feedback
24:41
that you could gather from the local
24:43
communities
24:44
yeah now that makes sense now you’re
24:46
talking too about um
24:48
you know the interpretability of uh
24:51
regulations and of course
24:52
i know that your um focus is on ai
24:55
ethics not ai regulations but of course
24:57
they go
24:58
a little hand in hand so i’m wondering
25:00
in terms of interpretability you know
25:01
what are some of the concepts that
25:03
uh ai ethics initiatives for the most
25:06
part
25:06
seem to agree upon because i know that
25:08
there are
25:09
a lot of different ways that ai ethics
25:11
initiatives kind of approach
25:13
the scope and the layout of how they’re
25:16
going to
25:18
define what standards are and how how
25:20
they’re going to be
25:21
measured and how they’re going to you
25:22
know roll out and involve the community
25:24
and things like that so what are some of
25:25
the things that
25:26
that you see as as the things that are
25:28
generally agreed upon or that you think
25:30
should be the things that are agreed
25:32
upon in a sense
25:34
well i mean you know that’s that’s a
25:37
it’s an easy and a hard question it’s an
25:39
easy question in the sense that i think
25:41
at this point we’ve had so many
25:44
sets of guidelines and principles that
25:46
uh
25:47
you know i one would imagine that every
25:50
possible principle out there has been
25:52
covered
25:52
you know many times over that it’s it’s
25:55
fairly easy to drop a list of things
25:57
that are
25:58
you know commonly accepted uh you know
26:00
we can talk about fairness
26:01
privacy uh security uh
26:04
accountability uh uh transparency
26:09
but i think one of the other issues
26:12
you know that’s that’s the flip side of
26:14
this coin is that the the taxonomy
26:17
of this entire sort of space is also
26:21
muddied quite a bit in the in the way
26:23
people use these terms so
26:25
everybody means slightly different
26:28
things when they’re talking about
26:30
even something uh like privacy which one
26:33
would imagine has
26:35
you know fairly you know well settled
26:38
upon
26:39
definitions for what privacy should be
26:41
but when you’re
26:42
talking to people coming from different
26:45
walks of life
26:46
uh and and even if they’re from the same
26:48
walk of life but
26:49
coming from different uh regions or
26:53
jurisdictions the way they think about
26:55
privacy is
26:56
is different uh the nuances are where
26:59
things are interesting right because
27:02
ultimately when it comes to implementing
27:04
this stuff
27:05
you know the devil is in the details
27:06
right and and without that
27:08
uh we’re really um we’re we’re really
27:11
just you know patting ourselves on the
27:13
back with all of these high level
27:14
principles but not really
27:16
uh you know coming to grips with some of
27:18
the more concrete ideas
27:20
around well when we’re talking about
27:22
privacy if
27:23
if if you make a claim about the privacy
27:26
of a product that you’re building and i
27:28
make claim about the privacy of the
27:29
product that i’m building
27:31
a third person as a consumer how are
27:34
they to judge whether abhishek or kate’s
27:36
product is better
27:37
in terms of privacy protections like
27:39
what are how do we evaluate that
27:41
because we might be coming from
27:43
different
27:44
definitions of what we think it
27:47
guarantees the privacy of
27:49
the data of the consumer but
27:53
you know your protections the
27:55
protections that you’re offering might
27:57
be stronger than the ones that i’m
27:58
offering
27:59
but the consumer has no way of knowing
28:01
about it and neither do we have any
28:02
standardized benchmarks
28:04
that help us evaluate that right and and
28:07
the reason for that is because we don’t
28:08
have a shared taxonomy
28:10
and every new set of uh you know
28:13
guidelines or principles that come out
28:15
yeah
28:16
they all come up with their own new
28:17
definitions of what uh you know privacy
28:20
should mean but
28:21
not just privacy that’s just one
28:23
dimension right what is fairness what is
28:25
accountability what is transparency
28:27
and then interpretability explainability
28:30
intelligibility
28:31
all of these you know sort of slightly
28:33
adjacent areas but each of them means
28:35
something slightly
28:36
you know different lots lots to talk
28:39
about there but
28:40
again i think i think we need to all
28:42
agree at least for the most part
28:45
on some of the core aspects of these
28:47
definitions so that we can start to move
28:49
past
28:50
uh you know just posturing when it comes
28:52
to taxonomy and really
28:54
start to move towards implementing those
28:56
uh in practice
28:58
yeah and then so so you know first of
29:01
all i guess one question is
29:03
have you seen anyone do a sort of
29:06
super matrix of all of these different
29:08
taxonomies and definitions and try to
29:10
map one to the other like
29:12
is there some body that is tracking you
29:15
know
29:16
privacy as it’s defined over here versus
29:18
over here
29:19
transparency as it’s defined over here
29:20
as opposed to over here
29:22
have you seen anything like that so
29:24
there are several initiatives right and
29:27
and uh again you know i’ll preface all
29:30
of my comments by saying that
29:32
uh it’s it’s a laudable effort to to
29:34
even embark on doing that work because
29:36
it’s it’s a thankless job
29:38
um that said uh
29:41
initiatives like uh i believe there’s
29:43
one from the berkman klein
29:44
uh center um that has uh this sort of
29:48
very nice
29:48
you know sort of visualization it’s like
29:51
a mandala
29:52
and has you know sort of all the
29:53
principles and everything mapped out and
29:55
you know which
29:56
uh sort of you know sets of principles
29:58
talk about which definitions and all of
30:00
that stuff right
30:02
the problem with these initiatives is
30:04
that they quickly go out of date
30:06
well why uh because these are typically
30:09
research projects that are funded
30:12
uh for a specific period of time so you
30:14
have a you know
30:15
a sprint of effort uh that goes into you
30:18
know creating them working on them and
30:19
then
30:20
and then it just sort of all you know
30:23
you know goes
30:24
uh static right because there’s more
30:27
stuff that comes out and then
30:28
if if i was to even go back and look at
30:30
that visualization that’s already i
30:33
don’t know how many months old it is at
30:35
this point but
30:37
uh it’s it’s definitely more than six
30:38
months old right and there have been
30:40
developments in the past six months that
30:41
of course would warrant inclusion in
30:44
that
30:45
and would perhaps alter the landscape of
30:48
that
30:48
uh taxonomy but of course uh no one’s
30:51
maintaining that actively anymore and
30:53
there are other initiatives right
30:56
and that’s the problem with these is
30:57
that it’s
30:59
it’s necessarily something that evolves
31:02
all the time and it’s just so hard to
31:04
keep up so either
31:06
we need folks to stop uh
31:09
making amendments to the taxonomy which
31:11
you know i think
31:12
when there is significant changes
31:16
proposed
31:16
i think it warrants putting out
31:19
something new but just
31:20
you know publishing a new paper at yet
31:24
another conference that
31:25
reframes the definition of what
31:28
intelligibility means just because
31:30
uh you know that’s just a little bit you
31:33
know something different there
31:35
i think it just muddies the water and
31:37
and confuses the whole
31:38
you know discussion and takes away
31:40
valuable effort from where
31:42
we could now be moving towards actually
31:44
putting it into practice rather than
31:46
again just theorizing and posturing
31:48
around it
31:50
well so it sounds like i was going to
31:52
ask you what in your view have been
31:54
some important concepts that maybe some
31:57
or most of these initiatives have
31:58
overlooked and one of them it sounds
31:59
like
32:00
maybe sort of standardization in a sense
32:02
like kind of coming to agreement
32:04
on what even the terms mean or the
32:06
taxonomy is or whatever is that would
32:08
you would that be fair to say
32:11
yeah yeah well that and also i think
32:14
even within these principles there is an
32:15
over emphasis on some areas right
32:17
and and that’s bound to be the case um
32:20
uh
32:21
you know when when more people start uh
32:23
you know
32:24
entering into the domain which is great
32:27
uh and and learning about it and
32:29
engaging with some of the the history of
32:32
uh
32:33
the work that’s been done by the
32:34
scholars in the space
32:36
one of the problems with that though is
32:38
that you can
32:39
end up focusing far too much on a few
32:41
areas to the exclusion of others
32:44
an example of that would be uh how
32:47
machine learning security
32:48
is something that is an area that’s
32:50
overlooked machine learning security
32:52
when we’re talking about that as an area
32:54
i think is is quite fundamental in terms
32:57
of
32:57
the impact that it will have on all the
33:00
other tenets
33:01
within uh ai ethics and the reason i say
33:03
that is because
33:05
it is it so when we’re talking about
33:08
machine learning it basically
33:10
opens up these new attack surfaces that
33:12
fall outside of traditional cyber
33:14
security measures
33:15
meaning that we’re now able to attack
33:19
machine learning systems in novel ways
33:21
that
33:22
aren’t covered by the traditional
33:24
cybersecurity protections that we put in
33:26
place
33:27
which means that if you in all
33:31
earnestness put um you know bias
33:34
mitigation measures
33:35
in place as you were developing a an ml
33:39
system
33:39
if i was to compromise the
33:43
the model uh through
33:46
you know an attack like data poisoning
33:50
i could sort of negate the effect
33:54
of that bias mitigation measure that you
33:56
had put in place in the beginning
33:58
and and hence basically render
34:02
you know useless the efforts that you
34:06
put in
34:07
and and create this situation where
34:10
uh slow i i’m slowly able to compromise
34:14
all of these different facets of the
34:15
ethical
34:17
considerations that you put in place for
34:19
that system
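To make the attack he describes concrete, here is a minimal, hypothetical sketch of one simple form of it, label-flipping data poisoning, where an attacker corrupts training labels for a targeted subgroup and quietly undoes upstream bias mitigation; the dataset, model, and subgroup feature are all invented for illustration:

```python
# Toy sketch of label-flipping data poisoning (all data synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training set; column 0 stands in for membership in some subgroup.
X = rng.normal(size=(1000, 5))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

clean_model = LogisticRegression().fit(X, y)

# The attacker flips labels only for subgroup examples, negating whatever
# bias mitigation was applied to the clean data upstream.
subgroup = X[:, 0] > 1.0
y_poisoned = y.copy()
y_poisoned[subgroup] = 1 - y_poisoned[subgroup]

poisoned_model = LogisticRegression().fit(X, y_poisoned)

# The damage shows up as a shifted positive-prediction rate for the subgroup.
print("clean subgroup positive rate:   ", clean_model.predict(X[subgroup]).mean())
print("poisoned subgroup positive rate:", poisoned_model.predict(X[subgroup]).mean())
```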
34:21
it it continues to baffle me that uh
34:24
despite
34:25
uh a lot more work being done in the
34:27
space it
34:29
machine learning security barely ever
34:31
gets a mention
34:32
uh in in popular media or or even in
34:36
other research circles that are outside
34:38
of that
34:39
uh ml security domain where most of the
34:43
effort still seems to be focused on
34:44
issues like privacy fairness bias
34:47
interpretability explainability and then
34:49
those are yes those are very very
34:51
important areas
34:53
but i think we also need to not
34:57
sort of not ignore some of these other
34:59
areas which are
35:01
important to consider especially when uh
35:03
machine learning security
35:05
is sort of uh like this foundational
35:08
cross-cutting piece where
35:10
if you ignore it you you’re sort of uh
35:12
you know
35:14
diffusing the the efficacy of the
35:16
efforts and all of these other areas
35:18
where you’re working so hard really to
35:20
do right by
35:21
everybody well it seems like it’s a
35:23
vulnerability that
35:25
you know might be easy to overlook if
35:27
you’re not someone who spends
35:29
time with you know execution and
35:31
building and and trying these things and
35:33
that’s what i was going to say it’s
35:34
actually an interesting segue because i
35:36
was going to say you’re an interest in
35:37
an interesting position
35:38
as someone who’s a software and machine
35:41
learning practitioner as well as someone
35:43
who’s
35:44
you know shape helping shape and lead
35:46
the discussion on ai ethics so
35:48
this seems like one of the insights that
35:50
you bring as a result of that but what
35:52
other kinds of insights do you think
35:54
that being a practitioner affords you in
35:56
this in this dual
35:58
dual focus i think the biggest
36:01
uh benefit that i uh get by being able
36:04
to sort of straddle both worlds is
36:07
is to look at the places where we
36:09
experience friction
36:10
uh and and what i mean by that is when
36:12
we’re talking about principles
36:14
uh it’s it’s great when we’re talking
36:17
about them in the abstract but at some
36:18
point
36:19
some of it needs to be put in into
36:22
practice either in the form of
36:24
a technical intervention or an
36:26
organizational change
36:28
and i’m not talking here about you know
36:29
larger systemic changes because that’s
36:32
uh those are of course you know those
36:33
require a great deal of momentum and
36:35
there are you know other scholars were
36:37
much better placed than i am to talk
36:38
about how we might do so most
36:40
most effectively but at least from a
36:43
practitioner’s perspective
36:44
this is the place where i see a lot of
36:46
um efforts and initiatives
36:48
failing to gain traction or failing to
36:51
make an impact is because they
36:53
fail to consider the needs of uh
36:56
developers who are on the ground
36:58
and what i mean by that is if you come
37:01
and tell me hey
37:02
you know we should build models that
37:05
you know are unbiased
37:08
i you know that that really isn’t
37:10
actionable
37:12
advice for me as a practitioner because
37:14
unbiased can mean a lot of different
37:16
things
37:17
and unbiased also can be implemented
37:20
in well one unbiased is a fallacy like
37:24
you you can never get that right you can
37:26
mitigate bias you can never completely
37:28
eliminate it
37:29
that said there are many different ways
37:31
to go about it there are many different
37:32
implications of that
37:34
but most importantly where in my
37:36
workflow should i start to think about
37:37
that right and a lot of people talk
37:39
about well
37:39
you should do it in sort of all parts of
37:41
your life cycle or you should do it here
37:43
you should do it
37:44
you know towards uh you know engage with
37:46
designers do this and that
37:49
but a lot of the times that advice is
37:51
sort of
37:52
uh not well received because again it’s
37:56
too
37:57
it’s too amorphous it’s it’s not
37:59
specific enough to the point where
38:01
if i’m an engineer working within a
38:04
particular department which is within a
38:05
business unit
38:06
which is within an organization you need
38:09
to make sure that all of these
38:11
uh recommendations that you’re making
38:13
align
38:14
with those uh you know technical
38:18
policies and practices of that
38:20
department that you’re a part of
38:21
and and how do you orient that with the
38:23
business goals of the
38:25
business unit that you’re a part of and
38:27
ultimately the values that the
38:28
organization holds dear
38:30
and without doing all of that it’s it’s
38:33
very idealistic to say that
38:35
yes this will happen because at some
38:36
point uh
38:38
the people who are on the ground only
38:40
have so much decision making power and
38:42
and if you
38:43
give them advice that requires
38:46
you know massive overhaul
38:50
that’s something that’s not going to
38:52
gain as much
38:53
traction just because it’s it’s really
38:56
hard for one person to make that change
38:58
and so
38:59
i’d rather that we start by at least
39:02
taking
39:02
concrete steps in the right direction
39:05
and and then creating local champions
39:08
within the organization who understand
39:10
the value of this
39:11
and who then become advocates for this
39:13
change because that’s
39:14
that’s how you start a movement you
39:16
don’t come and say hey we’re going to
39:18
upend everything that you’re doing
39:19
because then people are just going to
39:21
ignore what you’re saying and go about
39:23
doing what they what they’re doing
39:24
because at the end of the day
39:26
they’re getting paid to you know build
39:28
products and services and if you ask
39:29
them to
39:30
abandon all of that chances are your
39:33
advice is not going to be well received
39:35
so is that part of is there an
39:37
initiative within the montreal ai ethics
39:39
institute
39:40
are you are you focusing on trying to
39:42
sort of translate
39:44
a lot of sort of uh
39:47
guidelines or advice into what can be
39:50
applied within organizations or is that
39:52
outside the scope of the
39:54
organization no no and no and and that’s
39:57
exactly the kind of work that we focus
39:59
on at the institute is to look at
40:01
what are the right points of
40:02
intervention and then to really work
40:05
on the ground in terms of uh how
40:09
how painless can we make it you know and
40:12
uh
40:12
apologies for using that term but i i
40:14
think for the lack of a better phrase
40:16
really that is what we’re
40:18
looking to do is to eliminate the pain
40:20
uh to make
40:21
to make it easy to do the right thing i
40:23
think that’s that’s that’s how i would
40:25
think of it is
40:26
is if if you make it dead easy to do the
40:29
right thing
40:29
people will do the right thing or at
40:31
least they’ll have much more of an
40:33
incentive to do
40:34
to do the right thing but if you make it
40:35
incredibly hard
40:37
it’s just easy for us as humans to
40:40
to not have to put in that extra effort
40:43
to do the right thing and so
40:45
you know to give you an example when we
40:48
one of the projects that we’re working
40:50
on is to help
40:52
developers assess the environmental
40:54
impacts of the work
40:55
uh that they do and of course you know
40:57
both you and your listeners are
40:59
are no strangers to the uh massive
41:02
computational requirements for
41:04
um for uh for you know running large ai
41:07
systems
41:09
how do we make uh uh you know decisions
41:12
in terms of
41:13
when we should use uh large ai models if
41:16
there are better alternatives if as
41:17
consumers should we be
41:19
picking a particular you know
41:21
manufacturers
41:22
of these systems uh who are you know
41:25
more green
41:26
and how do we make those comparisons
41:29
how do we do that
41:30
but all of that starts with somebody
41:33
being able to
41:34
quantify the impact that they’re having
41:37
and not to say that there aren’t
41:38
initiatives already there are
41:40
uh the problem with a lot of these are
41:42
they ignore
41:43
the developer’s workflow in the sense
41:46
that they ask you to go to an external
41:47
website and
41:48
input a bunch of numbers and parameters
41:52
about
41:52
you know how you build your model where
41:54
you train them and let’s be honest you
41:56
know as developers
41:57
uh we hate documentation it’s it’s just
42:00
true um
42:02
hey i got my start doing developer
42:04
documentation so i can
42:05
i can attest to that so you’re gonna
42:07
test it out right
42:09
so you you know better than comments and
42:11
code
42:13
there’s no such thing nobody likes
42:18
so the problem then is if you’re asking
42:20
me to go to an external website and put
42:22
in all of this information chances are i
42:23
might do it the first couple of times
42:25
but
42:25
uh you know i start to drop the ball
42:29
uh later on because it’s just something
42:31
that you wouldn’t
42:32
lend too much of a priority on but if
42:34
you were to integrate this
42:36
uh in in a manner that’s something
42:38
similar to mlflow
42:39
uh which you know helps you sort of
42:42
um
42:42
you know within the context of your code
42:44
capture hyper parameters do you know
42:46
experiments across
42:48
or sorry do comparisons across the
42:50
different experiment runs that you’re
42:51
doing
42:52
now that’s something that’s a little bit
42:53
more natural to the developer workflow
42:55
to the data science workflow
42:57
now if you were to integrate the
43:00
environmental impacts in a manner that
43:02
follows this precedent that’s set by
43:05
something like mlflow
43:07
there is uh you know there is a lot
43:11
higher of a possibility for people
43:14
taking you up on that
43:15
and and subsequently reporting those
43:17
outcomes back to you rather than me
43:19
having to go to an external website fill
43:21
out a form
43:22
get the result take that pdf report or
43:24
whatever
43:25
and and now you know append that as a
43:28
part of the rest of the
43:29
product or service that i’m shipping
43:30
that’s just too much effort and and i
43:32
think that’s
43:33
so those are the kinds of things that
43:35
we’re working on is is to inject
43:37
uh this this sort of uh um
43:41
you know doing the right thing or uh
43:43
thinking about ethics
43:45
uh right into the developer workflow so
43:47
that we
43:48
again as i said you know i can take
43:50
those incremental steps in the right
43:52
direction we don’t ask you to come and
43:54
you know
43:54
completely throw out everything that
43:56
you’ve been doing because we know that
43:58
maybe you’ll do it the first time but
43:59
then you you know you’ll
44:00
go back into old habits and and so that
44:03
that’s really what we’re trying to do is
44:04
to make it easy for you to do the right
44:06
thing
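As a rough illustration of meeting developers inside their existing workflow, here is a hypothetical sketch that logs an energy and carbon estimate through MLflow’s ordinary experiment-tracking calls, so the numbers ride along with every run instead of living on an external website; the power draw, grid intensity, and `train_model` stub are assumptions for illustration, not a real measurement tool:

```python
# Hypothetical sketch: folding environmental-impact estimates into the
# standard MLflow tracking flow. Wattage and grid figures are placeholders.
import time
import mlflow

GPU_POWER_WATTS = 250        # assumed average draw of the training hardware
GRID_KG_CO2_PER_KWH = 0.4    # assumed carbon intensity of the local grid

def train_model():
    time.sleep(1)            # stand-in for the actual training loop

with mlflow.start_run():
    mlflow.log_param("model", "example-classifier")
    mlflow.log_param("epochs", 3)

    start = time.time()
    train_model()
    hours = (time.time() - start) / 3600

    # The same log_metric call developers already use for accuracy and loss,
    # so the estimate is captured automatically for every experiment run.
    energy_kwh = GPU_POWER_WATTS * hours / 1000
    mlflow.log_metric("energy_kwh", energy_kwh)
    mlflow.log_metric("co2_kg_estimate", energy_kwh * GRID_KG_CO2_PER_KWH)
```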
44:06
well it sounds like it’s almost like a
44:08
wrapper around the existing
44:10
workflow or process anyway and and
44:12
beyond like you’re right of course
44:14
the the barrier to uh participation
44:18
you know being being removed means that
44:20
you’ll probably get more participation
44:22
but it also seems like
44:23
um you know if you if we are interested
44:26
in in being champions of ethics and of
44:31
good outcomes then we have to think
44:33
about the ways to be most effective
44:35
in getting those those kinds of
44:37
initiatives you know deployed and
44:39
and followed and so on so it it’s it is
44:42
incumbent on
44:43
us as as advocates of of these ideas to
44:46
to think in the ways that you’re
44:47
describing to
44:48
think about like how how can i actually
44:50
make this more effective how could i
44:52
make this more
44:53
integrative into the workflow that’s
44:55
actually going to make it
44:57
you know more likely that we’re going to
44:59
have compliance and more likely that
45:00
we’re going to have you know feedback
45:02
that’s useful and
45:03
and and instructive right yeah
45:08
yeah no absolutely and i like how you
45:09
put it because i think it is incumbent
45:11
upon us in the sense that
45:14
ultimately you know you can have the
45:17
greatest idea but if you’re if you’re
45:18
unable to communicate it and the other
45:20
person isn’t able to receive your idea
45:22
then then it
45:23
it isn’t a great idea after all right i
45:25
mean or or rather i should say that you
45:27
you didn’t do a great job of
45:29
communicating that idea and and i think
45:32
a lot of that
45:33
is is what plagues our domain today is a
45:36
lot of the
45:37
knowledge and the insights are trapped
45:39
uh within these white papers within
45:41
these research papers that are
45:43
written in dense academic jargon and
45:46
again you know you’ve
45:47
you’ve worked in in in technical
45:49
documentation in the past so you’ll
45:51
appreciate this like
45:52
if if you don’t make things clear no
45:54
one’s gonna read it right
45:56
people just gonna try a bunch of random
45:58
things and hope that it works
46:00
and that’s we don’t yeah and we don’t
46:02
want that i mean
46:03
the number of memes that you’ll find on
46:05
reddit where you know people just don’t
46:07
read the documentation because it’s too
46:08
hard and just try a bunch of things and
46:10
hope that it works uh that’s sort of we
46:13
we don’t want that to happen here
46:14
especially that we’re
46:16
dealing with real impacts on people’s
46:18
lives
46:19
and you know drawing out these insights
46:22
from these dense academic papers and
46:25
you know 600 page reports uh there are
46:28
some 600 page reports would you believe
46:30
it
46:30
uh yes who’s reading that stuff right
46:33
it’s it’s it’s folks like us who are
46:34
deeply
46:36
interested in the space but for the
46:37
everyday developer who
46:39
um you know just wants to do
46:42
the right thing but also still get on
46:45
with their day job
46:46
they’re not going to read the 600 page
46:48
report one because they don’t have time
46:50
to maybe they don’t even have the
46:52
expertise to parse all the
46:53
you know technical jargon that you’ve
46:55
used there so
46:57
we it really is as you said incumbent
46:59
upon us to communicate this message
47:01
in as clear a manner as as we can and i
47:04
want to pivot just a little to
47:06
you know with the time we have left i
47:08
wanted to ask you about
47:09
um you know thinking about ethics in
47:12
application
47:13
so are there applications of ai that
47:16
you would argue can just not be ethical
47:19
so for example maybe military use of ai
47:22
weapons are there
47:24
any acceptable uses there that you know
47:27
we should be
47:28
thinking about or should just whole
47:30
categories of of ai
47:32
application be avoided from an ethical
47:34
standpoint or
47:35
you know there’s also of course cities
47:37
and regions enacting bans on facial
47:39
recognition
47:40
um so can the ethics there be resolved
47:43
and how will we know when we’ve resolved
47:45
them
47:47
those are really really hard questions
47:50
they’re not simple
47:53
it’s my favorite thing to do to ask like
47:55
a big wide-reaching question when
47:56
there’s only a few minutes left on the
47:57
clock
47:58
there’s a few minutes left and and all
48:00
my brainpower has been exhausted
48:03
i’m just kidding um but i know it’s it’s
48:06
interesting in the sense that i think
48:07
um there isn’t a clear black and white
48:11
answer and and the reason i say that is
48:13
because
48:14
a lot of it does depend on the
48:17
surrounding measures as you pointed out
48:19
right like
48:20
it depends on some of the regulations
48:22
that’s around how you use some of this
48:24
technology
48:25
but also uh making sure that there is
48:27
alignment with the
48:28
context and the culture uh the values
48:31
that
48:32
uh of where that system is being
48:33
deployed yes
48:35
at the moment there are uh use cases
48:38
where uh we should completely
48:40
um abandon the use of
48:43
ai because we haven’t figured out the
48:46
problems just yet
48:47
there are too many things that can go
48:49
wrong uh the systems are too brittle
48:51
they’re too
48:51
vulnerable uh to being misused that uh
48:54
we
48:55
we should just uh you know lay them
48:58
dormant
48:59
or you know abandon them revisit them
49:02
once we figured some more things out and
49:04
and that’s you know
49:06
and that that might be you know
49:08
discouraging to some folks who spend a
49:10
lot of their time figuring this stuff
49:12
out but that’s just the nature
49:13
of of uh you know discovery and
49:17
invention
49:17
really where you know we might come up
49:20
with great ideas but
49:22
uh sometimes we just don’t have the
49:24
potential to use them
49:25
in the right manner and hence we should
49:27
just hold off
49:29
on it till we figure out the right way
49:31
to use them uh or
49:32
or sometimes not use them at all and and
49:35
that’s totally fine and i think
49:37
that’s one of the things where you know
49:39
as you pointed out
49:41
we we sort of see this inevitable march
49:44
towards more automation and more ai
49:47
where in some areas maybe maybe that’s
49:49
not the case maybe we should stop and
49:51
question and say hey
49:53
maybe the good old ways are there
49:56
they’re good
49:57
just because they’re old doesn’t mean
49:58
they’re bad people
50:01
you know hey you know think about this
50:03
right you know everybody said we were
50:05
going to move to a paperless office
50:07
right but
50:08
that never happened we still use a ton
50:10
of paper so
50:11
you know i mean that’s of course that’s
50:13
a facetious example when it comes to
50:14
what we’re talking about but
50:16
i think you know for the time being when
50:18
we’re thinking about the military uses
50:19
of ai
50:20
or when we’re thinking about facial
50:21
recognition technologies i think just
50:23
the
50:24
outsize potential in terms of how this
50:26
can be misused
50:28
it doesn’t warrant us using it when
50:30
we have other alternatives that
50:32
work and uh you know still sort of get
50:36
the job done in terms of let’s say
50:37
we want to you know do identification is
50:40
it necessary that we use facial
50:42
recognition technology i mean
50:44
right up until now we had been using
50:46
other forms of identification and they
50:48
worked
50:48
despite having flaws so if we know that
50:52
you know facial recognition has all of
50:54
these problems and
50:56
can be misused maybe we you know and a
50:59
lot of companies of course are holding
51:00
off on using it
51:02
and now there are you know emerging
51:03
regulations in different parts
51:05
uh that you know prevent its use um
51:08
i think i think you’re right in in
51:10
saying that you know in some places we
51:12
we should just not use it at all well
51:14
Well, and that's why, I guess, the follow-up thought there is: is it even really useful to talk about "AI ethics," quote unquote, as a whole discipline? Or is there such a fundamental difference between talking about the ethics of something like GPT-3 versus something like intelligent autonomous weapons that it makes more sense to discuss the ethics of sub-disciplines or sub-categories of AI, or of specific applications of AI, or of the contexts where it's being applied?
51:44
So I, for one, am more of a proponent of tailoring things down to different domains. By domains I don't mean individual use cases, which I see as being too specific, but at least different domains, because some of the challenges that are presented differ quite a bit. For us to say, hey, let's apply explainability wholesale across all of these domains, keeping these principles in mind, can sometimes fall flat. Which is why, I think, in a lot of places we aren't seeing as much traction with some of these principles being implemented in practice: if you want to cover a large area you necessarily have to be broad, but by being broad you're again making the principles not actionable, and I think that's one of the problems that I see. So perhaps have those overarching principles, but never forget that we need to be domain specific as well, and jurisdiction specific as well, because if you're proposing something that is orthogonal to the legal frameworks within a jurisdiction, that is never going to fly either; people are going to side with complying first with the legal requirements that are mandated for them rather than trying to do something that is, for the most part, self-regulation or just a prescribed industry standard. So again, when it comes to thinking about this, I would say thinking about the ethics of AI in a specific context makes more sense to me than talking about the ethics of AI, because AI itself is such a broad term, and there are so many different ways it gets implemented in practice, that we really need to be a bit more granular in how we approach it.
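A minimal sketch of the "wholesale explainability" point, assuming a Python environment with scikit-learn and the shap library; the datasets and model here are illustrative stand-ins, not anything discussed in the episode. The identical attribution call runs unchanged across two unrelated domains, but whether its output counts as a meaningful explanation for the people affected is exactly the domain-specific question raised above.

```python
# Illustrative sketch: the same one-size-fits-all explainability call
# applied to two very different domains (placeholder datasets/models).
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer, load_wine
from sklearn.ensemble import RandomForestClassifier

for name, loader in [("medical screening", load_breast_cancer),
                     ("wine classification", load_wine)]:
    X, y = loader(return_X_y=True)
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    explainer = shap.TreeExplainer(model)      # generic, domain-agnostic tooling
    values = explainer.shap_values(X[:10])     # per-feature attributions
    print(name, "-> SHAP output shape:", np.shape(values))
# The call succeeds in both domains; whether the attributions are a useful
# explanation to a clinician versus a consumer is a domain-specific question.
```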
53:40
So then I also want to give you a chance to be an optimist at the end of this discussion. Thinking about all the different applications of AI, all the different ways, you know, the huge space that AI covers: what are you most optimistic about? I would usually just ask my guests what they're optimistic about when it comes to technology or the future of human experiences, but I'd love to hear specifically, in your mind, an application of AI that makes you feel optimistic and hopeful about what it could do for humanity.
54:12
So one of the things that I think is fascinating, and where we've been making so many strides, is machine translation. The reason I'm so fascinated by it is because it has the potential to unlock participation from people from different parts of the world. A lot of knowledge today is still locked in different parts of the world just because we aren't able to effectively communicate with each other, or it requires a ton of translation effort. And the people who are able to effectively straddle two, or three, or four different languages, there are so few of them, right? And even among those, being able to faithfully translate from one language to another without losing the meaning and nuances is hard; there are very few people who can do that very effectively. So a lot of knowledge, and the richness of our culture and history, is locked away, inaccessible. If we were able to do that translation in an automated fashion, I think that has the potential to unlock all of those pieces of knowledge. And then, on my point around participation: when you look at the dominant language of the internet, it's English, of course, and a lot of the research findings and everything are published in English. That doesn't mean there aren't people making contributions in different languages; it's just that, again, we don't get to see them, but also that they sometimes face barriers in being able to consume this knowledge, because it's only available in English. I wonder what the potential for making progress would be if, let's say, preprints on arXiv were to be automatically translated and made available in a hundred different languages. Imagine the kind of strides and progress we could make, because there would be all these budding researchers for whom maybe English is not their first language, maybe they're not comfortable with it, but because a paper is now translated into, say, French, or maybe into Hindi, they're now able to read it and understand it and say, hey, this idea clicks for me, it relates to some work that I'm doing, why not build on top of this work?
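A minimal sketch of what that automated translation step might look like, assuming the open-source Helsinki-NLP MarianMT models available through the Hugging Face transformers library (with PyTorch installed); the abstract is a made-up placeholder, and French and Hindi echo the examples above. This is not a pipeline discussed in the episode.

```python
# Illustrative sketch: translating a placeholder preprint abstract into
# French and Hindi with open-source MarianMT models (requires torch).
from transformers import MarianMTModel, MarianTokenizer

abstract = ("We study robustness failures of image classifiers "
            "and propose a simple mitigation.")  # hypothetical placeholder text

for lang in ["fr", "hi"]:  # French and Hindi, echoing the examples above
    model_name = f"Helsinki-NLP/opus-mt-en-{lang}"
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer([abstract], return_tensors="pt", padding=True)
    output_ids = model.generate(**batch)
    print(lang, "->", tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```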
56:38
Yeah, now you really speak to my heart with that one; as a linguist by education and a total human knowledge geek, it's really exciting to think about what you're describing there. So, last question, with just our two minutes to go: how can people find and follow you and your work online and connect with what you're doing?
56:57
Yeah, so I'm active on all social media platforms. Well, I shouldn't say all; I'm not active on TikTok, I don't know how to use TikTok. I can be found on Twitter, and I quite frequently tweet there; it's @atg_abhishek (atg underscore a-b-h-i-s-h-e-k). I also have my website, where people can find all the work that I do: that's atg-abhishek.github.io. And of course I'm on LinkedIn; if you type in my name along with the words "AI ethics," I presume you should be able to find me, and if you don't, actually do shoot me a message and let's see what happens. And a good old Google search with my first and last name and the words "AI ethics" tends to turn up information about me.
57:55
The last thing I'd like to say is that next week we have the State of AI Ethics report coming out. It's the second iteration of our report from the Montreal AI Ethics Institute, where we capture all of the research and development that's happened in this space over the past quarter, summarized and made accessible in an easy-to-consume fashion. So if you haven't subscribed to our newsletter from the institute, please do so; it's at aiethics.substack.com.
58:29
Excellent, great, we'll be on the lookout for that report. You said next week that'll be coming out? Yup, it'll be out on Tuesday next week. Perfect. Well, thanks so much for joining us here on The Tech Humanist Show, Abhishek; it's been wonderful to hear your perspective on AI ethics and the worldwide view of it, and your hopeful view on how machine translation can actually connect human knowledge in a richer way than ever before. So thank you so much for that. It's my pleasure; thank you, Kate, for having me, I really appreciate it. All right, take care, and get to sleep now! Thank you, thank you. All right, bye-bye everybody.
