The Tech Humanist Show: Episode 2 – Dr. Rumman Chowdhury

About this episode’s guest:

Rumman Chowdhury’s passion lies at the intersection of artificial intelligence and humanity. She holds degrees in quantitative social science and has been a practicing data scientist and AI developer since 2013. She is currently the Global Lead for Responsible AI at Accenture Applied Intelligence, where she works with C-suite clients to create cutting-edge technical solutions for ethical, explainable and transparent AI.

She tweets as @ruchowdh.

This episode streamed live on Thursday, July 23, 2020. Here’s an archive of the show (in two parts, due to a connection interruption) on YouTube:

Episode highlights:

(Part 1)

3:17 how Rumman’s background in political science shapes her thinking in AI
3:28 “quantitative social science is math with context”
3:58 “often when we talk about technologies like artificial intelligence… we’ve started to talk about the technology as if it supersedes the human”
4:11 Rumman mentions her article “The pitfalls of a ‘retrofit human’ in AI systems”: https://venturebeat.com/2019/11/11/the-pitfalls-of-a-retrofit-human-in-ai-systems/
4:56 What is the core human concept that shapes your work?
5:25 “I recognize and want a world in which people make decisions that I disagree with, but they are making those decisions fully informed and fully capable.”
5:49 A DOG ALMOST APPEARS!
7:18 transparency and explainability in Responsible AI
8:17 on the cake trend: “reality is already turned upside down — I want to be able to trust that the shoe is a shoe and not really a cake” 🙂
9:04 on the critiques of Responsible AI, “cancel culture,” and anthropomorphizing machines
11:11 Responsible AI is not about having politically correct answers; her role leading Responsible AI is part of core business functions
12:00 Responsible AI is about serving the customers, the people; credit lending discrimination example
12:40 need for discussion that’s bigger than profitability and efficiency; humanity and human flourishing
13:27 “human flourishing — creating something with positive impact — is not at odds with good business”
15:21 “I think sometimes people can get overly focused on value as revenue generation; value comes from many, many different things”
17:05 a political science view on human agency relative to machine outcomes
19:22 AI governance
20:34 “constructive dissent”
21:13 the “human in the loop” problem
25:14 algorithmic bias
29:20 “building products with the future in mind”
29:44 are there applications of AI that fill you with hope for the good they could potentially do?

(Part 2)

0:45 how can we promote humanity and human flourishing with AI and emerging technologies?
1:16 what can businesses do to enable Responsible AI
1:22 “I have a paper out… where we interview people who work in Responsible AI and Ethical AI… on what companies can do” (see: https://arxiv.org/abs/2006.12358)
6:22 what can the average human being do
8:40 where can people find you?
on Twitter: https://twitter.com/ruchowdh
on the web: http://www.rummanchowdhury.com/

About the show:

The Tech Humanist Show is a multi-media-format program exploring how data and technology shape the human experience. Hosted by Kate O’Neill.

Subscribe to The Tech Humanist Show channel (hosted by Kate O’Neill) on YouTube for updates.

Full transcript:

(Video 1)

00:17
there we go uh hopefully you’re seeing both of us
00:19
now hi there it’s great to have you on Rumman yeah
00:24
thank you for having me on I am so happy to be talking
00:27
to you in person-ish pandemic person
00:29
in person-ish that’s probably the best uh best term to describe conditions right
00:36
it’s uh it’s been a strange time I’m sure for you as well
00:40
yeah absolutely um although i must say it is nice to not continually
00:45
be on the road yeah yeah you know usually i’m away from home like 60 to 70
00:49
percent of the time so you know it’s nice to be not jet
00:53
lagged and in one location i do miss the travel myself so i have
00:57
also a pretty aggressive travel schedule for my work and uh it’s
01:02
it’s a little bit of a bummer not to have you know kind of a new country to
01:06
check out every couple of weeks that is very true
01:09
that is very true and by the way i will add that we’ll
01:12
probably get some sort of accompaniment from
01:14
my cat and/or my dog my dog is sitting right right down here and the cat’s kind
01:18
of behind the computer right now so i love that so my cat is in hiding
01:23
somewhere and she’s done really well so far at not
01:26
making an appearance in this show but I feel like one of these days she’s
01:29
going to demand her cameo so and my pets are attention hogs so the
01:35
cat makes it a point to be vocal I I’ve just joined this um
01:39
this like Oxford commission we’re actually going to be announcing it
01:42
pretty soon and we’ve decided that she is our
01:44
unofficial mascot because she’s very vocal during all of our commission
01:48
at all so that’s perfect so what’s what’s really
01:51
cool about getting a chance for us to finally
01:54
sit down and talk is that you know as you mentioned you’re traveling a lot and
01:58
i’m traveling a lot and i know that we have been following
02:00
each other on Twitter for a while and it seems like
02:03
our our paths kept crossing we’d be like
02:06
in the same city uh either on the same day
02:09
or like yeah ships passing in the night and we just haven’t had a chance
02:13
to uh overlap enough to sit down so this is it this is our first chance to do
02:17
that and that’s really exciting um to me it’s
02:20
exciting because uh from the moment I I first kind of interacted with your
02:25
profile and with you online I got the sense that first of all I love
02:30
how you come across online but that your area of focus is so um
02:35
it’s so relatable to me you know this this intersection of course of AI and
02:38
humanity is very parallel to mine in tech and humanity
02:41
but also I noticed um that you have degrees in political science
02:46
and so I thought it’s your your phd’s in political science
02:50
even isn’t it yeah yeah so i to me that is incredibly intriguing because it’s
02:56
sort of I can relate to the idea not that you
02:58
know I have a background in political science mine is in languages but just
03:01
this idea that that that education and that framework
03:05
probably shapes you know your thinking and your mindset
03:08
about things right and the idea of systems and and the public
03:13
good and that sort of thing how does that how does that shape your work and
03:16
your thinking yeah um so the thing that really drew me
03:19
to political science so this was actually even
03:21
as an undergrad at MIT was the idea that essentially like distilled to its
03:26
basic it’s like quantitative social sciences math with context
03:30
and i really like math with context right or maybe another way to put it
03:34
would be uh i think it’s really fascinating to understand
03:38
at a high level uh patterns of human behavior using data
03:42
but the way I framed all of those sentences
03:46
well especially the second one you know centralizes the human it centralizes
03:50
society and what i find intriguing frustrating depending on what
03:55
you know my mood at the moment is that often when
03:58
we talk about technology like artificial intelligence
04:01
with technologies in general but especially with AI we’ve started to
04:04
kind of talk about the like the technology as if
04:08
it supersedes the human and there’s this whole article i wrote called the
04:12
retrofit human where i raised that concern like why is
04:16
it we build technology and assume the human being fits in afterwards and we
04:19
really should be doing it the other way we need to be designing our tools
04:23
because these things are tools need to be designing our tools to help
04:26
us we shouldn’t be we like you know recreating how we
04:30
naturally are or want to be to fit someone’s notion of how society
04:34
ought to be so in your mind what is that kind of
04:38
core human concept because to me i i also mentioned um you know that that a
04:43
lot of the ideas felt parallel in our work and one of the things that
04:46
that i find i keep coming back to in my work
04:49
at the core of human experience feels like meaning and the
04:52
making meaning and the quest for meaning and so that’s one
04:56
theme that just over and over again i keep finding myself
04:59
returning to is there a similar concept for you that you find yourself returning
05:03
to in your work yeah and i think like it’s
05:06
very parallel unsurprisingly i would say it’s either
05:09
something like human self-determination or human agency
05:12
but ultimately it’s just the right to make an informed decision
05:16
the ability to uh you know have all the information for
05:20
yourself and make that choice and and i very carefully say that because i
05:24
recognize and want a world in which people make decisions that i disagree
05:28
with but you know but they are making those
05:30
decisions fully informed fully capable so to your point on
05:34
on meaning whether it’s you know being able to derive good
05:37
meaning from the systems we’ve created to make the decisions
05:40
or understanding what our meaning is or what our purpose is as
05:45
a human being and not having that be shaped or guided by
05:48
other forces unknowingly that’s my dog yeah is the dog gonna make a cameo is
05:55
that i mean i’m sure he wants to come here do
05:58
you want to say hi to everybody yeah he’s now on the tech humanist show
06:03
that’s fantastic at the door so i apologize for the flying at the
06:08
door oh no the the pawing at the door just makes me
06:12
feel sad kind of get a little cameo actually the
06:15
thing is if i like open the door he’ll go out and he won’t
06:18
come back
06:21
well i love how you put it and i think i think
06:25
you know the agency and the self-determination is is a really
06:29
solid piece of of what always kind of comes back to me too i’ve lately started
06:33
thinking about you know how we talk in in
06:36
literature and culture about the human condition
06:39
and exactly yeah and i feel like that when you break down what the elements
06:43
you know what we’re typically talking about when we talk about the human
06:45
condition it does seem like you know agency and you know sort of
06:49
control over your own destiny at some level
06:52
is is part of that right absolutely absolutely um and i think what’s really
06:57
great about it is you know it’s it’s not normative or judgmental like i
07:01
said i’m not trying to enforce my values on someone else my point is
07:05
that we should all make informed decisions yeah and we have transparency
07:10
into the systems that are maybe shaping us or
07:13
guiding us or giving us opening doors or closing others
07:17
so when it comes to ai then it seems like
07:20
the where that carries over is into the idea of
07:24
you know transparency or explainability and
07:27
is that what generally when you talk about responsible ai
07:30
as the scope of the work that you do is it generally focused on those
07:34
attributes or are there other attributes that are even maybe more
07:38
pertinent to that consideration yeah i mean so
07:42
certainly responsible ai covers those fields i think those are
07:45
incredibly important i think that you know of course any
07:48
conversation about responsibility would be remiss not to talk about
07:51
uh fairness and accountability um particularly when we think about
07:55
biases and biases and the technology that’s being built
07:58
and i know this has been kind of a contentious topic lately especially on
08:02
twitter that what isn’t different just topic on twitter right uh violence
08:06
is talking about worms has become a topic of contention if
08:10
you’ve seen don’t bring up cake that’s all we just
08:13
don’t need to talk about cake right now that’s a lot i mean look
08:16
like reality is already turned upside down i want to be able to trust that
08:19
that the shoe is a shoe and not really a cake
08:23
but you know what what i’m sad to see often
08:27
is that so much of the work on responsible ai
08:30
you know gets divided into camps of like this you know politically correct
08:33
culture and non-political or like whatever
08:35
whatever the opposite of politically correct is right um but that’s
08:39
not not what it is it’s not like normative judgment
08:43
passing uh at least not for me um you know if for me it is you know just
08:50
making sure that we are aware and have some control or agency and some
08:54
right to uh understand and have impact on the
08:57
systems that are you know shaping uh like the actions we’re able
09:02
to take in our lives yeah and i think you you bring up a
09:04
really good point because it does seem like
09:06
that issue of um sort of the critique of responsible ai or the the mechanisms of
09:13
responsible ai um that talks about political
09:17
correctness and you know we’re having such a moment
09:19
where people are uh you know hitting at this bogeyman of
09:24
cancel culture and and political correctness so so this uh
09:28
tweet from paul graham uh in the last couple days
09:32
that he says you know people get mad when ai’s do or say
09:35
politically incorrect things what if it’s hard to prevent them from
09:37
drawing such conclusions and the easiest way to fix this is to teach them to hide
09:41
what they think that seems a scary skill to
09:43
start teaching ais i imagine you have a response
09:47
for that i mean just like before i even get into what i think like
09:52
sort of let’s unpack all of the assumptions behind that statement
09:55
there’s just a lot of anthropomorphizing happening
09:58
like what is this like teaching the ai to hide like
10:02
these are not like these are technical systems right they are making
10:05
yes it is a predictive model it is quote making decisions but not from the sense
10:08
that human beings make decisions like teaching you don’t
10:12
really teach an algorithm to quote lie you can do
10:16
particular things to it to make it come up with some answers and
10:19
not come up with certain answers but if you are
10:21
hiding output or hiding outcomes that’s a human decision
10:25
from the design perspective so like let’s talk about the people who are
10:28
creating ultimately that’s the weird thing about
10:30
that statement there’s this weird anthropomorphizing happening i just i simply
10:34
cannot understand like you know and this is not like sort of
10:37
the responsible ai community saying this
10:39
we have plenty of people you know who are some of the
10:43
the trailblazers in the field of artificial intelligence saying like we
10:46
are nowhere near the singularity we are not near
10:48
any sort of general ai system you know we would define it as like narrow ai
10:52
we’re in the world of narrow ai so let’s
10:54
let’s let’s let’s box this into what it is today like we are nowhere near
10:58
creating this system that’s called lying or quote making decisions we’re in a
11:02
world of narrow ai we apply things to very narrow use cases so that’s
11:06
that’s one that’s hiding things it’s very odd to me
11:09
um and like it’s not about having politically correct
11:13
answers to be like i i work for accenture
11:17
um accenture you know obviously put thought into hiring
11:20
somebody to lead responsible ai i don’t sit in corporate social
11:24
responsibility i don’t sit in corporate citizenship those are amazing parts of
11:28
accenture parts of every company but i sit in core business functions if
11:31
accenture a half million person company thought that
11:36
responsible ai was creating politically correct answers
11:41
i don’t know if i you know i mean i’m not a ceo but like
11:45
that’d be a strange place to put somebody yeah that’s a really
11:47
interesting point right like i’m part of core business
11:50
functions my job is to create solutions with value
11:53
if you’re creating a product that doesn’t serve a portion of your
11:56
population you have not created a good product
11:59
so for example if you are making a credit lending model
12:02
that is discriminatory towards women because of the history of credit
12:06
discrimination against women this is not about
12:09
politically correct culture do you just want to not give people money who would
12:13
pay you back like i don’t understand like do you not want to make revenue off
12:17
your product because you are you have literally an underserved market
12:21
so you are telling me that you don’t want to address an underserved market
12:24
like and and in some sense like in a business
12:27
sense that is what some of this work is about
12:31
it is about making good products that serve your
12:34
that serve your clients that serve your customers yeah yeah 100
12:38
and i just want to interject there i feel like i’ve seen interviews with you
12:42
where you talk about the need for there to be discussion
12:45
that’s bigger than profitability and efficiency
12:48
when it comes to you know business uses of technology we need to understand you
12:51
know what is um you know what’s about
12:54
humanity and human flourishing so it’s really
12:57
important that those attributes be part of that discussion
13:00
too but you’re right at this core level of course business is going to be
13:05
investing you know primarily into technology that’s going to add capacity
13:09
and scale to their opportunities you know i think i
13:12
think that’s a savvy observation and you’re right like why
13:15
would there be a function for responsible ai
13:18
in the core business if it weren’t uh likely to produce
13:23
you know desirable outcomes for the business right exactly and also i’d say
13:27
like human flourishing you know creating
13:30
something with positive impact is not at odds with good business and
13:34
and frankly you know some this is what some of the
13:37
biggest ceos the ceos of the biggest companies in the world
13:40
recognize that you know some of what you build especially if you’re a b2c company
13:44
is about brand it’s about how people feel
13:48
when they interact with your technology or your product or
13:51
you know if you’re making like soda or a fast food chain
13:54
uh or clothing like you are trying to spark an emotion
13:58
uh frankly right people buy like especially in the us we have no lack of
14:02
choices a lot of our goods are actually perfectly uh substitutable
14:07
why why do you buy coke versus pepsi why do you go to mcdonald’s versus burger
14:11
king i’m just like naming things right right like some of this is an emotional
14:15
decision um and so it’s again not necessarily
14:19
some sort of weird like lefty pc culture to say you know we want
14:23
to make things that make people feel good that are aligned with like
14:26
society’s values um and you know we’re getting some
14:29
pretty clear indicators of what a lot a lot of people feel today uh you
14:34
know branding is is definitely important for
14:38
companies yeah yeah that’s a really good point too
14:40
i think that that comes up a lot in my own work that that the uh you know
14:44
i talk about meaningful experiences and people are
14:47
always like well how do you measure meaningful experiences it’s like well
14:49
you know actually if you’re creating meaningful
14:52
experiences then you should have a whole host
14:54
of holistic measures that tell you that you’re on the right path
14:58
and everything you just talked about is all you know part of a model that
15:01
actually tells you you know you’re moving in the
15:03
right direction people can remember your brand people have delightful experiences
15:07
they’ll recommend you they’ll you know your cost
15:09
of acquisition and retention is going to be lower
15:12
because people have good experiences with your brand
15:15
all of those things right it’s also this notion of value
15:19
right i think sometimes people can get overly narrowly focused on value as
15:23
revenue generation value comes from many many different
15:26
things and to be perfectly frank you know people often choose less quote
15:31
efficient outcomes or you know less economically sound
15:34
outcomes because of how it makes them feel right uh you know and and i suppose
15:39
maybe a frivolous example but an extreme example of it would be
15:42
why people buy luxury brands you know like why would i buy a canvas bag
15:46
from like louis vuitton versus target canvas is basically canvas right like
15:51
louis vuitton doesn’t make better canvas but like they recognize
15:54
that how it makes you feel and the experience or to give a techie example
15:58
apple spends so much money on design they spend like
16:01
like there are entire articles on how every apple product
16:05
opening it is designed to feel like you’re opening a present like you’re
16:08
getting something special right that was purely intentional and if we’re
16:12
going to try to make this case that tech is about efficiency and value then you
16:16
know go talk to apple because they don’t seem
16:18
to believe that right they fully understand the
16:21
experience of an individual in interacting with technology like a phone
16:24
or a computer is also an emotional experience yeah
16:29
yeah so so in terms of of ai and and what the experiences we’re
16:35
going to be we are increasingly creating with
16:38
algorithms algorithmically optimized systems you know how can
16:43
people think about more meaningful and more human
16:47
flourishing kind of systems when it comes to those types of
16:51
interactions what what do you recommend there for people
16:54
yeah and here’s where i think it’s really interesting like as a political
16:57
scientist and the social sciences because i draw a lot from my background
17:00
when i think about these things you mentioned the concept of systems earlier
17:03
and this is absolutely true like these technologies don’t live in a bubble they
17:06
exist as part of an existing infrastructure of systems that impact us
17:10
so if we’re talking about for example a recommendation system
17:14
to decide if um you know to help judges decide if certain
17:18
prisoners should you know get bail or not
17:21
fail um what’s really interesting is not just how this impacts the prisoner
17:25
but also the role of the judge in sort of the structure of the judicial system
17:30
and whether or not they feel they need to be subject to the output of
17:34
this model or whether they have the agency to say i disagree with this
17:38
and i don’t and that impacts you know how this outcome plays
17:41
out for the individual who’s on trial right
17:44
so a judge is somebody who is in a position of high social standing
17:48
you know they’re considered to be highly educated if there’s an algorithm and
17:51
it’s telling them something that they think is wrong
17:54
they may be in a better position to say i disagree i’m not going to do this
17:57
versus somebody who is let’s say um you know an employee
18:01
like a warehouse employee at like at amazon
18:04
or you know somebody who works in retail at a store where your job is not
18:08
necessarily considered to be high prestige
18:10
and you may feel like your job is replaceable or worse
18:14
you may get in trouble if you’re not agreeing with the output of this model
18:17
so like thinking about the system that surrounds these models it could
18:21
actually be kind of an identical structured model but because of the
18:25
individual’s place in society they can or cannot take action on it so
18:28
i think these things are really important
18:30
really important to think of yeah that’s a really important point i find
18:34
in talking with companies too about employee experience and about thinking
18:39
about how culture is going to be developed around digital
18:42
transformation and how they’re going to incorporate more and more automation
18:45
into their businesses so much i find of that discussion needs
18:49
to be about you know the increasing importance
18:53
of good judgment from humans you know like
18:56
people being able to make good judgment calls and being able to
18:59
say like this is asking me to do the wrong thing
19:02
and the machine doesn’t necessarily know that as you already said like there’s
19:05
not kind of hidden motives within the machine there there are
19:09
hidden motives within code because coders put them there but you know that
19:14
it’s not like um like humans shouldn’t be able to
19:17
question the output of these things so that’s a brilliant point
19:21
yeah and like two two points to that one is when i
19:24
talk to companies about governance um and ai governance has actually become
19:29
like one of the bigger things to think about rather than
19:31
just purely focusing on like model explainability
19:35
so like a few thoughts on governance so again kind of drawing from my
19:38
background as a political scientist i find it very interesting that all of
19:42
us even those in the responsible ai community
19:44
are approaching this notion of governance from a non-democratic
19:46
perspective like what what every organization is
19:49
doing uh when we create systems of governance is put
19:52
the smartest people together and figure out what governance means for everybody
19:56
and it’s quite interesting because we all claim to adhere to very democratic
19:59
principles but very few organizations have actually
20:02
created a truly democratic process for governance so that’s one
20:05
right uh the second very few organizations have created really flat
20:08
organizations too even though they claim to have done
20:11
so so yeah that’s a very good point yeah um and then the second is like so
20:17
i i we created um like this this handbook for companies called the
20:21
government’s guidebook it’s a publicly available document
20:24
i can share it with you if you have like show notes and yeah
20:27
put it in there uh one thing one thing that we call for is the notion of
20:31
constructive dissent so how do you actually enable safe
20:35
channels of dissent within your organization
20:37
how can people feel comfortable saying you know this is not working or this is
20:41
being done unethically or i disagree with what’s happening here
20:44
and not just in the way that they’re protected but also
20:46
in a way that they feel like their voices are being heard
20:50
i think one of the issues with you know with uh
20:53
people being at odds with the organizations that they’re with is not
20:56
just that they disagree with what they’re doing but they everybody has the
21:00
same story when i tried to go to management i was
21:02
shut down nobody listened to me it wasn’t meaningfully addressed and i
21:06
think that that’s a that’s a component of this
21:08
um that’s really important which kind of also ties into the third point that
21:12
we haven’t really solved this human in the loop problem everyone
21:16
loves to use that phrase but i you know it’s
21:19
really hard to think of an instance of a good situation
21:22
in in which we really resolved meaningful interaction between
21:27
you know a an advanced predictive technology and a human being
21:32
say more about that because i’m not sure that many of our listeners will be
21:35
uh as familiar with with that concept yeah so folks always talk about human in
21:40
the loop within an ai system so you know the the narrative would be okay
21:44
well we’re worried about runaway ai or ai that makes biased
21:48
decisions and then the answer seems to be we’ll
21:50
put a human at the end of it and then the human will kind of
21:53
judge the output and then the human has agency they can say yes or no and then
21:57
that’s that right but there’s so many problems with
22:00
this when you unpack that that story like it seems to
22:03
work on face but then we’ve already talked about a few issues so number one
22:06
like who is this person in this structure of you know the
22:10
hierarchy of humanity within their organization within society
22:13
and can they actually agree or disagree with the output
22:17
of the model are they in a position where they would be punished if they did
22:20
are they incentivized to do so and not do so et cetera and then the second
22:24
question is this person on the end can they even
22:27
understand whether or not that decision was a good one or a bad one because that
22:31
person may not and actually often is not a technical person
22:34
they’re not a data scientist so how are they to understand whether or
22:38
not this output makes sense or not um and just a really good example um and
22:43
a few months ago there was a whole apple card debacle um
22:47
when apple launched the credit card and we had the husband and wife and the
22:51
husband got approved and the wife did not even though i think she had a higher
22:54
credit score and made more money but here’s the part that i think to me
22:57
was the most meaningful around what we’re talking about so they
23:01
call you know apple or whoever and they ask
23:05
you know and again back to this notion of constructive
23:07
dissent and human in the loop they asked like hey
23:09
you know my wife didn’t get approved for the card and we’re kind of wondering why
23:13
because you know like that’s weird and the answer was well the algorithm said
23:19
so and so that’s that’s that right and genuinely that is
23:23
not a good answer to give but to the person on the end
23:26
who’s a customer service rep right the question here then becomes
23:29
how do we enable a customer service representative to understand whether or
23:32
not this model output was problematic yeah like these are the people who
23:37
should understand it’s not me as a data scientist or
23:39
you know you as a technologist it’s actually the people who will be on the
23:42
receiving end who will be and who end up actually
23:45
being the front line with the human beings who are being impacted
23:48
so like that’s that’s the human in the loop that i think needs to be resolved
23:51
yeah and i think in in in a number of models business models
23:56
you know the the um the proposed answer tends to be
23:59
we’ll use a rating system to evaluate how reliable this person’s judgment or
24:05
outcome or whatever it is and of course then you end up with sort
24:07
of algorithms all the way down it’s like you know yeah yeah i mean and also in
24:13
this example like you know this is the this customer
24:16
service rep didn’t even get any visibility so they
24:19
they couldn’t they actually didn’t really know how to answer this person’s
24:22
question um and then even thinking through at a
24:25
higher level whether or not that model was biased like
24:27
i will say i haven’t followed the story all the way through but at first glance
24:32
i think captain also had a good article about this is
24:35
it’s not whether or not there are these one one-off cases in which things go
24:39
wrong because fundamentally all of these
24:41
systems are probabilistic not deterministic meaning like there is an
24:44
error rate and there things will go wrong
24:46
but that is just that is just a truism that’s not even debatable
24:50
but what the problem would be is if this is systemic
24:53
it’s not just that this this one woman with
24:56
good you know who makes a good salary and has a high credit rating got denied
25:00
it would be if and obviously that should be fixed but
25:03
the system is a problem if we are seeing this across the board
25:06
across a number of women you know as compared to like a data
25:10
scientist would have to do an analysis of this system to see if it’s a problem
25:14
Host: And certainly it's easy to come up with examples from across different parts of society and technology where algorithmic bias reflects systemic bias, and we have those problems. I think the discourse on that is rising, but it seems like, beyond discourse, we need other solutions. Where are you on regulation for much of this? Where do you feel we stand on the maturity of that discussion, and where we need to be?

Rumman: Yeah, it's been really interesting to see what different regulatory bodies are coming up with all around the world. Most likely Europe will be ahead of the pack on this. The European Commission's HLEG came up with a white paper that came out in April, and I think there's a follow-up scheduled for December, but who knows, in pandemic times, if they're going to get everything done by then, which would be understandable if they didn't. The UK Information Commissioner's Office also has a really great paper on risk-based approaches to understanding AI systems. Singapore has launched a project called Veritas, which is getting financial services agencies together with their financial regulatory bodies to think about it. In the US we've had the FTC and the Federal Reserve; there's been a lot of noise, and there are also bills on the table. And what we've seen, interestingly, is this bottom-up movement in the US. Banning facial recognition is such a great example: we saw it starting in cities before we saw anything happening at the federal level. There are algorithmic accountability bills in multiple different cities and states, again before we see it hitting at the federal level. So I think the US is going to be really interesting. Again, as a political scientist focused on American politics, this is why American politics is fascinating: the way we've divided federal and state powers, and how that push and pull sometimes ends up being a contentious debate. But ultimately, back to my first point, it's good to have people with different opinions talking; that's kind of what it ends up being.
Host: And it also seems like, in theory at least, it gives an interesting model for being able to test different approaches in different markets and see what the consequences are of doing it this way versus that way, and then what's going to happen with that at scale. But of course, that supposes that we can actually anticipate that scale from just what happens at the city level, and often that's going to be very different when it's applied federally.

Rumman: Right, exactly.

Host: That's such an interesting area for you, given your political science background. So do you find that you're drawn more and more into those governance discussions, not only within corporations but governance at an actual political, governmental level? Are you participating more and more in those kinds of discussions?

Rumman: Yes, absolutely, and I kind of have been almost from day one. I don't know whether it's because it's just my inclination to do so, or whether it's a natural part of this job, because it does combine both. I can't just think about the technology; I would be remiss not to think about what sort of policy and regulation would be coming down the road. In part because I want all these public servants to make informed decisions, and it is difficult to wrap your head around the technology when your experience has been something totally different, and it's very difficult to get good information from these different bodies. All these groups may have different incentives and different reasons for sharing certain kinds of information and not sharing others. But also, when it comes down to businesses, everyone just wants to know what the regulatory landscape will be, and it's useful to have that information. Not that I have any sort of insider information, but just being aware of what's happening so that businesses can make good decisions, so they're building products with the future in mind.
Host: So, speaking of building products with the future in mind, I'm curious about your own disposition and views. Are there particular applications of AI, or emerging technologies in general, that you get really, really excited about, that maybe even fill you with hope for the good they could potentially do?

Rumman: Gosh. I feel like lately everything is very doom-and-gloomy; it is 2020, so the world is on fire. What a great question, honestly; this is a very good question to ask. What I think is amazing about this technology at the meta level, and what interested me in it, is just how much amazing potential it has for us to question our institutional paradigms, to question why things are structured the way they are. If I were to pick the one thing that got me the most interested in this technology, it's actually the potential for edtech, which is funny, because edtech has now become one of the biggest topics of conversation, with everyone talking about all the negatives of the surveillance state. But think about what something like edtech should be: a complete reimagining of education. Because, number one, educational systems do not actually help people get jobs; they don't help people do well at their jobs. Everyone always jokes that the number one skill you need to learn in college is Excel, and that's the one thing they don't teach you. So there is this disconnect between the quote-unquote real world, the jobs we get, and our educational systems. We know there's inequality; we know that people in the US end up with massive student loans. There's just so, so much that can be resolved with this technology, whether it's remote learning or customized learning or whatever it is. Early on, when I started my job at Accenture, and before then, people were talking about lifelong learning, and how the new worlds of technology and AI really mean that we have to embrace learning and really think about how we're going to spend the rest of our lives educating ourselves. What amazing aspirations. And I sincerely hope that what we don't do is just try to stick technology into the existing broken infrastructure that is our traditional education system, because that would be a disservice not just to us as humanity but also to the technology and its potential.
Host: But is it also true, or not, that once you use technology to accelerate or amplify a given system, where it breaks might be what's instructive about where those institutions are already failing us? Like we won't know those failings until we try to amplify them at some level? I understand that there are real harms being caused by doing that, and the impacts are real, but I'm wondering if we won't get to that level of discussion about the failings of those systems until they're actually being amplified. Do you think there's a way we can do that effectively?

Rumman: I mean, specifically using the education example, there are so many people who have already looked at the inefficiencies of these systems, at what works and what doesn't. And if we really think about this, again going back to this notion of human self-determination, or meaning, or whatever we're talking about: what is the purpose of this system? Frankly, can we just objectively take a step back and, in a sense, almost emotionlessly ask: is it serving the purpose it is intended to serve? What is the meaning of our educational system; why is it doing this? I think there are plenty of people who have been pointing out the systemic flaws, and usually the pushback is, "Oh, it's easy to criticize the system, but who's going to be the one to solve the problem?" And really the smart thing to say then is: well, now we have technologies and systems that theoretically could be designed to solve these problems, instead of being designed to simply reinforce the power imbalance and the structural inequalities. And we're going to ignore what these people say because it's too messy to deal with, and much easier to just perpetuate, amplify, and now cement all of these inequalities, rather than do the extra amount of work it would take to fix things.
Host: That's a brilliant way to address that mindset, that problem. Where do you think the solutions best originate? In your experience, do the solutions originate with academics or with private corporations, or is it kind of a mix, in terms of being able to identify the structural flaws of institutions and what's going to happen when they're brought to scale with technology?
Rumman: I think it's a bit of both. I love all of my academic friends, because they do such an insightful job of understanding systems, and, again, they're sometimes able to look at it more objectively because they're not inside it. But then there is the application component, and that's what industry does. So I'll give you a great example. A little over two years ago at this point, Accenture came up with a fairness tool; we were the first to create an enterprise-level bias-mitigation tool. The way we did it was, we started off with academic research papers on things like counterfactual fairness, bias mitigation, and so on; you can find all of these papers. But what was important to us was whether this works outside of a laboratory setting. I think we started off with some thirty-odd papers, and we ended up with only three that worked once we asked: does this scale? Is this generalizable across multiple different settings? And is this possible within the way a data scientist works? That was basically our criteria. So I think everybody has their role to play. There's definitely value in pursuing research, even research that seems crazy and weird, but then there is certainly value in trying to ground that research in something pragmatic and applicable. It is wonderful to live in a world of all these possibilities, but at some point, if you want to make this reality, you have to ask yourself: will people use it? How can I make it so that somebody will use it? And is this actually as beneficial as people are claiming it can be?
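One of the bias-mitigation ideas from the academic literature she alludes to can be sketched very simply. This is an illustrative reweighing example, not the Accenture tool itself: assign each training instance a weight so that group membership and outcome become statistically independent in the weighted data. The sample data below is hypothetical.

```python
# Illustrative reweighing sketch (a published bias-mitigation idea from the
# fairness literature; NOT the enterprise tool discussed above): weight each
# (group, label) cell so group and label are independent after weighting,
# i.e. w(g, y) = P(g) * P(y) / P(g, y).
from collections import Counter

def reweigh(samples):
    """samples: list of (group, label) pairs -> weight per (group, label) cell."""
    n = len(samples)
    group_counts = Counter(g for g, _ in samples)
    label_counts = Counter(y for _, y in samples)
    cell_counts = Counter(samples)
    return {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (cell_counts[(g, y)] / n)
        for (g, y) in cell_counts
    }

# Hypothetical skewed training set: group "a" gets the favorable label (1)
# far more often than group "b".
samples = ([("a", 1)] * 30 + [("a", 0)] * 10
           + [("b", 1)] * 10 + [("b", 0)] * 30)
weights = reweigh(samples)
print(weights[("a", 1)])  # ~0.667: over-represented favorable cell, down-weighted
print(weights[("b", 1)])  # 2.0: under-represented favorable cell, up-weighted
```

This also illustrates her selection criteria: a method this simple fits the way a working data scientist already trains models (most libraries accept per-sample weights), which is exactly the kind of practicality test that winnowed thirty papers down to three.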
Host: And does it matter, do you think, the context in which the technology starts? We were talking a little bit before we got on about this story that broke, I think today, about Facebook using a simulation, with AI bots simulating bad user behavior, so that they'd know better how to moderate against it. Which, I think you'd said, at one level of abstraction seems like a really good idea from a data science model, but from another level you can easily see how this may not be the ideal way to develop that sort of training. So does it matter where the origins of a technology are, or do we always need to be working toward these good outcomes, the best-of-humanity sort of outcomes?

Rumman: Yeah. There are two parts to it.
One: I think all of my STS and HCI friends would say, and I agree with them, that the origin of a technology absolutely does matter. This is why so many people study the history of technology. Things that are built for military use, even when they move into the commercial space, which, by the way, is a lot of technology, will still hold the vestiges of, let's say, surveillance or monitoring, because they were ultimately built assuming the world is a particular way. In other words: there are good people and bad people; there's me, then there are the others; there's me, then there are the people I'm protecting and the people I'm fighting, because that's just how the military is structured. So, fundamentally, your view of the world will impact the technology that you build, and I think that's really, really important. And maybe to abstract it even more, going back to all this conversation about political-correctness culture and designing an AI that quote-unquote hides itself: I think what Paul may be missing, and some others may be missing, is that often when you create technology, when you create your AI, you run an optimization function; there's a goal to this. This has been one of the critiques of the way some of these research firms have been trying to arrive at sentient AI: by having agents play games, and having them play combative games rather than collaborative ones. And again, your objective function matters. If my objective function is to win a game where I have to kill everybody to win, or it's a zero-sum world in which I have to have the most points to win, then that sets up a very different system than one in which I'm training it to play a game where we have to be collaborative and collectively succeed. Two totally different worlds, but it's all a function of your objective function.
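The objective-function point can be made concrete with a toy example. The function names and payoff numbers below are purely hypothetical, not from any system discussed in the episode: the exact same game outcome produces opposite training signals under a zero-sum versus a shared reward.

```python
# Toy illustration of "your objective function matters": two reward designs
# for the same two-agent interaction. Names and payoffs are hypothetical.

def zero_sum_reward(my_points, other_points):
    """Competitive objective: I am rewarded only for beating the other agent."""
    return my_points - other_points

def collaborative_reward(my_points, other_points):
    """Shared objective: both agents are rewarded by the joint total."""
    return my_points + other_points

# The exact same episode outcome yields opposite training signals:
outcome = (3, 5)
print(zero_sum_reward(*outcome))       # -2: incentive to suppress the other agent
print(collaborative_reward(*outcome))  # 8: incentive to help the other agent
```

An agent trained on the first signal learns that the other agent's success is its own failure; trained on the second, it learns the opposite, which is her point about combative versus collaborative game design.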
So, going back to this Facebook example: I think it is actually really cool. It's basically simulated red-teaming, which is kind of awesome, because rather than waiting for bad things to happen, they're saying, we're going to proactively model the world. But the problem could be that it's not necessarily future-adaptable. If a new thing starts to happen, it obviously cannot be modeled within the existing system that you built, because your existing system is only based on the past. A really good pragmatic example might literally be something like Gamergate. A lot of folks, and especially the women who were impacted by Gamergate, will say: we were yelling and screaming about how Gamergate was really the canary in the coal mine for this whole incel culture, this whole underground culture, a lot of the issues that we talk about today, people getting harassed and doxxed, all of this. The canary in the coal mine was Gamergate, and people ignored it. But then you think about it: if you're trying to build a predictive system, Gamergate, prior to Gamergate, would not fit into your paradigm of the world, because nothing like that had ever really happened before. So it's a good idea if the world is going to stay static. If the world is going to change, you actually need to have some balance to it that understands how the world is changing.
Host: Yeah, and by the same token, it kind of goes back to what we were saying earlier: there's a body of work already that has identified problems. There are the scholars who have already identified problems with edtech and the systems of institutional education; that knowledge already exists, the scholarship already exists. And the parallel here is that there's been plenty of light shone on some of the areas that need the most work, in terms of content moderation, in terms of making sure that bad actors are banned and can't get through on all the social platforms. But it seems that Twitter, Facebook, and so on don't necessarily adopt those recommendations, and instead it's like Facebook wants to play a game with itself in order to come up with this war game, as you so aptly described it, to identify what it probably could identify just by taking the recommendations of the experts who have been saying this kind of thing, right?
Rumman: Yeah. I mean, like I said, I think there is certainly value, from a data science perspective, in trying to do what they're doing at scale. One of the issues with any sort of moderation or tracking is just the sheer volume; I can't even come up with a number to imagine how many harassing situations or flagged posts there must be across all of social media. So how do they parse through it? From a data science perspective, it's actually a similar problem to thinking about things like credit fraud, which happens at a massive, massive scale. The cool, slash, interesting part of the problem of addressing things like credit fraud is that, yes, there are people trying to defraud your system, but there are also people who just happened to go on a vacation in Germany and didn't call the credit card company. How do you do it in a way that you're not going to lose a customer because you're annoying them with phone calls or freezing their credit? So it's not just "shut down everything that looks bad," and it's…
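The fraud analogy is, at bottom, a threshold tradeoff: a stricter blocking threshold catches less fraud but annoys fewer legitimate customers, and "shut down everything that looks bad" maximizes false alarms. A minimal sketch with made-up scores (the numbers below are invented for illustration):

```python
# Minimal sketch of the moderation/fraud threshold tradeoff described above.
# Scores and labels are made up: some legitimate activity (a vacation abroad)
# scores as "risky" as real fraud.

def confusion(scores, labels, threshold):
    """scores: risk scores in [0, 1]; labels: 1 = actual fraud.
    Returns (caught_fraud, false_alarms) when blocking at `threshold`."""
    caught = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    false_alarms = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return caught, false_alarms

scores = [0.95, 0.90, 0.85, 0.80, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    0,    1,    0,    0,    0]

print(confusion(scores, labels, 0.75))  # (2, 2): strict, but misses one fraud
print(confusion(scores, labels, 0.25))  # (3, 3): catches all fraud, flags 3 legit customers
```

Neither threshold is "right"; choosing one is a business and customer-experience decision, not purely a data science one, which is the tension she describes.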


(Video 2)

Host: …and then we'll just go ahead and close out with some broader themes, put a little bow on the discussion. It's such a bummer that we lost signal and lost connectivity while we were talking before, because I think you really were going through some interesting thought there. But maybe let's just go back to the discussion about humanity and human flourishing. What do you think we can do, in culture and in technology, to stand a better chance of bringing about the best futures with AI and algorithmic systems, rather than the worst futures? And how do we promote humanity and human flourishing, in your view?

Rumman: Wow. Yeah, that's…
…that's not a small question.

Host: It's a little… we'll figure it all out in these last few minutes. I mean, you and I work around the concept of humanity, so we have to think in big terms.

Rumman: No, absolutely. Absolutely.
There are a few things: the immediate steps, and then the longer term, how do we view things. And I guess, as a social scientist, I can't help but think of it as the atomized systems that exist. So one, I would say, is: what can businesses do to enable responsibility? That's pretty much the crux of my job. I have a paper out with my research scientist Bogdana Rakova, as well as Jingying Yang from the Partnership on AI and Henriette Cramer from Spotify, where we actually interviewed people who work in applied responsible AI or ethical AI, so not in research: people who are working in business functions. And we got their thoughts on what companies can do, what actions they can take. Really, it was guided around: what's the present state, what's the prevalent state, and what's your ideal future state? And from that we drew out multiple levers that companies can use. So one, there's this balance of external pressure and internal pressure, and that's something that has actually worked; the literature on organizational change dynamics, on what makes company cultures shift, bears this out. So there's this external pressure and external validation, and then there's the internal infrastructure.
One thing that's important, especially for responsible use of AI and technology, is having aligned success metrics, and that's multiple things. One is having metrics for things, and I say "metrics" very loosely: I don't just mean measurable, quantifiable things; qualitative metrics are just as valuable as quantitative metrics.

Host: I think that's a really important clarification, because people really do get hung up on "but I can't measure that specific thing, and I can't see it on a dashboard."

Rumman: Yeah, exactly. Or worse, they find some insufficient metric, and then, because of human nature, we optimize for numbers, and that's actually a pretty bad thing too. The analogy I always give is when people are trying to be fit, or lose weight, or be healthy: there are so many quote-unquote metrics, BMI, weight, number of steps, all of these, and none of them are actually good. Ultimately what matters is, holistically, how you feel. That is a qualitative metric that is very, very valid, and frankly more valid than how much you weigh, a number on a scale.
Anyway, so when I say aligned metrics for success, this is not just for products but also for individuals. Is it beneficial to my career at this company if I'm doing things like helping the company create responsible AI, or is it going to look bad next year in my performance review because I've had X number of quote-unquote failed projects? And interestingly, there is a tech analog for this. The Lean Startup, Eric Ries's book, is one of the core books for anybody who's starting a company, and in it he talks about innovation metrics versus your traditional metrics. And this is quite similar: what we're talking about here really is truly innovative, and this is part of innovation. We are creating technological systems that are meant to actually improve humanity in a very fundamental way, so we actually do need to assess people who work in these fields and companies by quote-unquote innovation metrics, and not just these quarter-over-quarter improvement metrics.
So there's that. Another is this concept of tone from the top: having your leadership really say, this is important to us as a company, and we support it as an organization. I will honestly say that's been a really critical part of my being successful at Accenture. My boss and our leadership have decided that Responsible AI sits in core business functions; we have five core capabilities, and Responsible AI is one of them. That means something. It tells the entire organization that they haven't just hired me to talk on a stage and say nice things; they've hired me to do real work. That's very important, and it helps me quite a bit.
And some of it really is just about creating transparency around systems, and accountable systems: who's responsible for what. And to be fair, if we're trying to get people on board with responsible AI, we need to be very clear on what you can and can't do, and what you will and will not be responsible for. If, for example, a lawyer is being told, "Hey, you need to make sure these systems don't break the law," they're like, "Okay, well, I know what the law is, but I have no idea how these systems work, so maybe I don't want to do that, because I don't want to be left holding the bag if something bad happens." So instead you have to very clearly define: as a lawyer, it's your job to enumerate clearly to a data scientist the different aspects of the law that may come into play with this model, and the data scientist is responsible for sharing with you the empirical evidence. Clearly defining those responsibilities really helps.
So that's one from the organizational perspective, and I guess that's maybe really specific, but at this point in responsible AI I love being really specific, because everybody's been talking in these high-level imperatives. It was important to have those imperatives, but I think a lot of the pushback is coming from certain people feeling like we're all kind of fluffy concepts and not real actions, and we are absolutely about real actions. So that's kind of the corporate perspective.
I think, for the average human being, a lot of it is about education and understanding, and sometimes it's as basic as understanding that there's no such thing as a free lunch. If there's a technology you're using, an app on your phone, it is not actually free: you're paying for it in data, you're paying for it in some way. Trust me, you are. Whether it's because you're being targeted with media, or because your data is being taken and sold, just understand there's no such thing as a free lunch. And thinking about what this means, and being mindful of the tech you choose, I think those are the actions that human beings can take. And also, there are increasingly going to be bills you can vote on. So everyone should go vote, in general, but also make an informed vote: look at whether there are laws in your municipality, your city, or your state around these things, inform yourself on whether or not this is going to give you more rights over your information and whether you want that, and then vote accordingly. So that would be kind of my very high-level take.
Host: That's perfect for the time we had. No, that's perfect, and it sounds like we've already talked about what you recommend, what you think government and political systems can do. So there's the government piece, there's the corporate piece, and there's the individual piece. And that individual piece, I come back to this too: it's just saying, again and again, we have to be very careful and very mindful about how we're participating in different kinds of technology, knowing, as you say, that we are paying at some level for that participation and for that technology. Super, super important concepts. I just want to make sure, while we're on screen, to thank you so much for your time, for your flexibility, and for putting up with coming back on after, who knows why, our signal dropped. It might have been one of the pets in the background just saying, enough of this.

Rumman: It could be the cat. She's had enough; she's trying to nap.
[Laughter]

Host: Well, I think this was wonderful, and I hope everybody got a lot out of it. I want to thank you for being here, and hopefully we'll even maybe come back and do a part two, a part three now, at some point in the future. So, Rumman, thank you very much for being here. Oh, before we go, sorry, where can people find you online?

Rumman: Oh yeah. You know my Twitter: it's @ruchowdh, r-u-c-h-o-w-d-h. And my website, which is just my name, rummanchowdhury.com.

Host: Perfect. All right, thank you so much. All right.
