Focus on the human side of the task

Champion No. 6: Alec Balasescu

“A clever person solves a problem. A wise person avoids it.”

This quote from Albert Einstein describes how I feel every time I chat with Alec Balasescu. As an engineer, I have been trained to solve problems. Avoiding them is the next level. For that you need experience, and from that experience you build up your wisdom. Or you take a shortcut and talk with a wise anthropologist ☺!

 

Alec is an AI Anthropologist and a great sparring partner when it comes to opening up your mind and doing some mental exercise. And when you feel lost, Alec puts you back in context. In fact, for him, everything is about context. AI Anthropology is about Artificial Intelligence in context. And this is why I love chatting with him: he first gets me out of my tech context so that I can think freely, and then he leads me back to a broader, new context. I hope you enjoy the ride as much as I did.

“AI is a tool: it does not need to be ethical (it’s absurd). It should be designed in accordance with ethical principles understood contextually.”

One of my professors used to say “a fool with a tool is still a fool”. When it comes to AI and ethics, I sometimes have the feeling that we place too many expectations on the tool and too few on the human. Is it just my biased engineer brain that thinks so? Or does AI really need to be ethical?

AI is a tool: it does not need to be ethical; that is rather absurd. It should be designed in accordance with ethical principles understood contextually, leading it to “act” ethically within the context. Therefore, we first need to understand the context.

So, let’s start at the beginning. Generally, there is this vision that first there were humans, then came intelligence, and then we made tools. But humans and tools are co-creations. With each tool we create, we recreate ourselves, because that tool transforms who we are. The best illustration of this in popular culture is the opening scene of “2001: A Space Odyssey”: the ape took the bone in its hand, marking the moment when it became human.

“The more we develop technology, the more we change ourselves as humans.“

With each new tool comes a new set of decision-making mechanisms we have to devise for ourselves. When we talk about AI as a tool, there is the promise of optimization. The questions are:

  • Is everything worth optimizing just because we can?
  • If we optimize everything, what do we do with, and to, our own decision-making mechanism in our brain?

Let’s say you have a wearable that monitors your daily activity, DNA, lifestyle etc. and recommends what you should eat to stay healthy. What do you do if your intelligent bracelet suggests quinoa for breakfast while you are passing a bakery and you see and smell tasty croissants? Which do you choose?

 

I am the one going for croissants, no doubt about that!

Of course, different people will answer differently. On the one hand you have the promise of optimization for an imagined future, and on the other hand the present experience. And all that in a world that is uncertain. So, if you take AI to its logical conclusion, it promises a world without uncertainty, which is not possible – and possibly dystopian, I might add.

 

That is right. The world is far too complex to simulate digitally, which is why we simplify it and build models. But they are an approximation of reality, not a representation of it. I guess this is why we are surprised once the products go live and “reality punches us in the face”.

Funny you mention this, because there is a term gaining popularity among AI designers: “reality drifting”. It is not that reality drifts; reality just happens. On the other hand, the supposedly dynamic model of said reality is just a perpetuation of the status quo at the moment you took that transversal slice of reality and modelled it. We say “all models are wrong, some of them are useful!”

The problem today is that we consider all models to be useful. All too often we create models that do not take into consideration ethical variances, cultural variances, or power structures. In fact, engineers create programs out of the power structures in which they live. Their biases get embedded into the programs (we all have biases), and they get amplified under the guise and lure of optimization.

Take an insurance company, for example: it has some very good models to forecast accidents, illness, or natural disasters. It uses these models to calculate the rates. We, on the other side, pay the insurance rates. And instead of having a redistribution of risks in society, we end up penalizing the people who are most exposed anyway. When a claim is made, the same model is used the other way around: to avoid paying the prospective beneficiary rather than to make sure they are paid. Same model – different aims – no wrong or right; the house always wins.

“Assume that models are always wrong. Models do not drift because people behave weirdly – they begin to drift because they are models.”

My advice? Assume that models are always wrong. Models do not drift because people behave weirdly – they begin to drift because they are models; their accuracy is limited over time, and the faster we change, the faster they drift. Remember I said that we are co-creations alongside technology. The faster the technology changes, the faster we change – and the faster “reality drifts” from the model.
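
A minimal sketch of this point, for engineer brains like mine (entirely hypothetical: synthetic data and a deliberately crude threshold “model”): the model is frozen at the moment it is fit, reality keeps moving, and accuracy decays even though nobody is behaving weirdly.

```python
# Hypothetical illustration of drift: a model built on one slice of reality
# loses accuracy as the underlying "reality" keeps shifting.
import numpy as np

rng = np.random.default_rng(0)

def make_slice(center):
    """Synthetic 'reality' at one point in time: two classes whose true
    boundary sits at `center`. As time passes, the boundary moves."""
    x = rng.normal(loc=center, scale=1.0, size=500)
    y = (x > center).astype(int)
    flip = rng.random(500) < 0.1      # a little label noise, so the task isn't trivial
    y[flip] = 1 - y[flip]
    return x, y

# "Train" a threshold classifier on the slice of reality captured at t=0.
x0, y0 = make_slice(center=0.0)
threshold = x0.mean()                  # the model: predict 1 if x > threshold

# Reality keeps moving; the frozen model is evaluated on later slices.
for t, center in enumerate([0.0, 0.5, 1.0, 1.5, 2.0]):
    xt, yt = make_slice(center)
    acc = ((xt > threshold).astype(int) == yt).mean()
    print(f"t={t}: true boundary at {center:+.1f}, model accuracy {acc:.2f}")

# Accuracy is highest on the slice the model was built from and falls as the
# boundary drifts away from the frozen threshold.
```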

Also, carrying models across contexts will implicitly lead to drift. So first, one needs to study the model’s cultural context (regional, institutional, professional) and to work one’s way back from there into the design of the AI systems.

The design process should start in the field, and not in labs. We need to design for the cultural context: build models starting with reality, and do not try to model reality on abstract models (including ethics) – sooner or later they will drift, and one of the domains in which they fail is ethics.

Last but not least, we need to create constant evaluation feedback loops. Remember, AI is material: it has a material support and it interacts with the material world. That means it is not going to flow smoothly. Be prepared to reassess and adjust based on how the adoption process develops.
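
One possible shape for such a feedback loop, sketched below with placeholder names and an arbitrary threshold – the point is only that evaluation against fresh field data is a recurring process with a trigger for human reassessment, not a one-off lab metric.

```python
# A hypothetical sketch of a constant evaluation feedback loop: every cycle,
# score the deployed model on newly collected field data and flag it for
# reassessment when its quality slips below an agreed threshold.
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

@dataclass
class EvaluationReport:
    cycle: int
    accuracy: float
    needs_reassessment: bool

def evaluation_loop(
    predict: Callable[[Sequence[float]], Sequence[int]],
    fetch_fresh_batch: Callable[[int], Tuple[Sequence[float], Sequence[int]]],
    cycles: int,
    min_accuracy: float = 0.8,
) -> list:
    """Periodically re-evaluate a deployed model against fresh, labelled
    data from the field instead of trusting its original lab metrics."""
    reports = []
    for cycle in range(cycles):
        inputs, labels = fetch_fresh_batch(cycle)        # data from the field, not the lab
        predictions = predict(inputs)
        accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
        reports.append(EvaluationReport(
            cycle=cycle,
            accuracy=accuracy,
            needs_reassessment=accuracy < min_accuracy,  # trigger for humans to step in
        ))
    return reports
```

Whether a flagged cycle leads to retraining, redesign, or a review of the context itself is exactly the kind of decision that should stay with humans.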

AI only learns from the past. It reproduces what has happened, but we use it (with more or less questioning) to predict the future. This is more science fiction than ethical artificial intelligence. To sharpen the edges: it’s not even intelligence at all.

Do you know about reinforcement learning? It is the one methodology that does not focus on the past but on the present, without any given data. It certainly cannot predict the future, but it learns to navigate a present environment, e.g. by methods such as trial and error.
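
For readers who have not met the term, here is a minimal illustration of that trial-and-error idea – an epsilon-greedy bandit, the simplest reinforcement-learning setting, with made-up reward probabilities: the agent starts with no data at all and learns only by acting in its present environment and observing what comes back.

```python
# Trial-and-error learning in miniature: an epsilon-greedy bandit.
# The reward probabilities below are invented purely for illustration.
import random

true_reward_prob = [0.2, 0.5, 0.8]      # hidden quality of three possible actions
estimates = [0.0, 0.0, 0.0]             # the agent's running estimate per action
counts = [0, 0, 0]
epsilon = 0.1                           # how often the agent explores at random

random.seed(0)
for step in range(2000):
    if random.random() < epsilon:
        action = random.randrange(3)               # explore: try something new
    else:
        action = estimates.index(max(estimates))   # exploit: use what worked so far
    reward = 1.0 if random.random() < true_reward_prob[action] else 0.0
    counts[action] += 1
    # incremental average: pull the estimate toward the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print("estimated action values:", [round(e, 2) for e in estimates])
print("most-chosen action:", counts.index(max(counts)))
```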

I have read about it – deep learning, machine learning, and reinforcement learning. I cannot say that I understand it in detail, and to me it all sounds quite good in a lab. But we do not live in a perfect or linearly built world. Humans relate to what machines say. And this relationship differs across cultural contexts. Even the relationship to life – and this is important for all medical models – differs among cultures. One culture may see life as predestination – another sees it as predisposition. This has an impact on decision making as well as on building a model or an AI algorithm.

I don’t have a solution for that. There is no wrong or right, and I can only point out some possible foibles and pitfalls in decision making. The very concept of a model, and the way models are designed, reflects decision-making mechanisms valid in a very narrow cultural context – its epitome being Silicon Valley culture. This is what we have to keep in mind! Those who wholeheartedly embrace digitalization would have us believe that AI-driven decisions are more real, more true, more rational because they are free from human emotions and failures. This, in fact, is a failure of rationality in itself. I call it “the irrational belief in rationality”. The reasoning itself becomes codified through narrow exposure to only one way of thinking, reinforced by the research results. If those scientists are not in dialogue with, or confronted by, people outside their industry, they end up modeling a fictional world (see Meta for now). The cultural view that mathematics is a divine science that explains the universe helps digital scientists to isolate themselves and imagine that they hold the truth of the world as it should be. But AI is supposed – at least I hope so – to help people as they are, and not to validate models of reality designed in the Ivory Tower.

“My suggestion is to invest more time (and money) in testing and in evaluation. And therefore, you need scientists as well as anthropologists.”

I absolutely agree. In the conversational AI field, tech companies have understood that you need linguists to write the dialogues for chatbots, instead of having the computer scientists do so. Economic necessity forced companies to rethink their setups – and this is what must happen in other AI fields as well.

What else can we learn from anthropologists – and what do we have to do to put them on the same level as the “techies” in the closed shop (or the lab)?

When we use technology, we assume that it makes our lives better or easier. But does it? We need to question this process of constant optimization and understand life more as a sequence of loops than as a linear path. With the latter approach it would certainly be easier to rebuild, scale, and multiply. But unfortunately it abstracts the product, removing it from culturally specific dimensions and from a dynamic socio-cultural environment.

We should distinguish between making an ethical decision and the method by which we reach that decision. The methods used to arrive at an ethical decision are the equivalent of ethical codes, or principles. The decisions we take (or which we let the AI take in an automated manner) are the result of giving one principle or code precedence over another. When the decision is subsequently analyzed through the lens of a different code, it may appear unethical.

AI models interact with institutional, social, and cultural contexts, and may fail if they are not designed for the appropriate context. You know how the brain has a left and a right hemisphere. The left one is about the text. The right one is about context. Lately (in the past 300 years, more or less) our global culture has been dominated by the left side. Prior to this, the inventors of algebra and trigonometry, the Muslim scholars of the 11th century, used mathematical formulas to literally write poetry – each sequence of numbers or fragment of “code” corresponding to letters or roots for families of words (it works better in Arabic). They used both text and context, and we should also remember to use our entire brain.

To put it plainly: anthropologists plead for contextual understanding. For us, optimization can also mean a step back. That is the main difference from a purely textual understanding. Combining these views may bring us not only forward, but onto a new level of understanding and applicability.

Let’s take the different perspectives on a more practical level: What can we do to be more diverse in developing products and models?

Focus on the human side of the task, and act as if you care – ask other people, build feedback loops, communicate, collect experiences. Hire people from diverse backgrounds, with a diversity of worldviews, and listen to their experience, without trying to mold them to the narrow, Silicon Valley-inspired AI culture.

AI marketing seems to want to make the user believe that we – with our models and our products – can foresee the future. If we change perspectives and combine disciplines, the future may become slightly more predictable, but not in “classical” AI terms. One needs to bring the model into different contexts, test it, let it fail, evaluate, and redesign it in order to make it ever more valid.

Learn from user experience (UX) designers, who combine market research, product development, strategy, design, emotions, and experiences to create seamless user experiences for products, services, and processes. They build a bridge to the customer, helping the company to better understand – and fulfil – their needs and expectations. This is what we can achieve when we develop models in a more interdisciplinary way, and this is how algorithms could be designed, too.

“It’s all about context – the result of dynamic interactions between culture, technology, economy, religion, gender and sexuality, and institutional practices. AI Anthro is about Artificial Intelligence in context.”

Basically, we need to relate differently to what AI and algorithms mean, and to how we design, test, implement, and evaluate them. We need to rethink optimisation, and its premises.

The major lesson for AI is that adoption means adaptation in a world in which matter matters. To sum it up:

  • AI is a tool: it does not need to be ethical (it’s absurd). It should be designed in accordance with ethical principles understood contextually, leading to it acting ethically within the context. Therefore, we first need to understand the context – ask an anthropologist. Use our experience and methods!
  • Assume that models are always wrong. Models do not drift because people behave weirdly – they begin to drift because they are models; their accuracy is limited over time, and the faster we change, the faster they drift. Carrying them across contexts will implicitly lead to drift. So first, one needs to study the cultural context (regional, institutional, professional) and to work one’s way back from there into modeling and the design of the AI systems.
  • As a consequence, the design process should start in the field, and not in labs. We need to design for the cultural context: build models starting with reality, and do not try to model reality on abstract models (including ethics) – sooner or later they will drift, and one of the domains in which they fail is ethics. 
  • And last but not least, we need to create constant evaluation feedback loops. Remember, AI is material: it has a material support and it interacts with the material world. That means it is not going to flow smoothly. Be prepared to reassess and adjust based on how the adoption and adaptation process develops. 
  • Don’t be surprised when people find “workarounds”. Use those workarounds – they indicate gaps. After all, we humans are the intelligent ones in this equation. Let’s design and use our tools in an intelligent manner.

About Alec Balasescu:

I am an Anthropologist by training or, as some would say, a Philosopher with data. I approach the world, and my work, through the lens of this science. I finished my Ph.D. at UC Irvine in 2004, and I have been active in both public and private domains in various capacities, while continuing to teach in different university settings, both online and in class. I currently teach in the MA Global Leadership programme at Royal Roads University, Canada.

My experience spans nine countries on three continents, where I have lived and worked over the past 23 years since leaving my native Romania.

My research, writing, and practice are centred on understanding human actions in context, and on developing strategies for change based on this – where context is understood as the result of dynamic interactions between culture, technology, economy, religion, gender and sexuality, and institutional practices. I am particularly interested in, and write about, AI and Climate Change.