LUCATalk Recap: 6 challenges for Artificial Intelligence's sustainability and what we should do about it

Thursday, June 28, 2018

If you're interested in Artificial Intelligence, you already know that it has many benefits and successful uses. It can improve the diagnosis and treatment of cancer, optimize the management of natural disasters and catastrophes, improve education, and help with automatic translation. With more people embracing this technology, it's also important to be aware of the issues surrounding it before we are faced with negative consequences.
When using AI and Machine Learning, one of its subfields, it is essential to ensure fair practice in order to avoid problems.

You can find the complete webinar below, straight from our YouTube channel:


The 6 challenges discussed in depth during this session were: non-desired effects, liability, unknown consequences of AI automation, the relationship between people and robots, concentration of power and wealth, and intentional bad uses. Dr. Richard Benjamins covered each of these challenges, explained what is being done to deal with them, and gave examples to illustrate their impact.

1. Non-desired effects

In one of the first examples, we observed how racial bias is one of the challenges that can arise with AI. In this particular example, an automatic system was used to rate two people charged with petty crimes as more or less likely to commit another one. The system rated an African American person as higher risk and a Caucasian one as lower risk, even though in this particular case the reality was the opposite. The conclusion was that some of the data used to train the ML algorithm was biased.
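One common way to surface this kind of bias is to compare error rates across groups. As a minimal sketch (the group labels and records below are purely illustrative, not real data), we can compute the false positive rate per group: the share of people who did not re-offend but were still flagged as high risk.

```python
# Illustrative bias check for a risk-scoring model: compare false
# positive rates across groups. All data here is made up for the example.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False),
    ("A", True,  True),
    ("A", False, False),
    ("B", False, False),
    ("B", False, False),
    ("B", True,  True),
]

def false_positive_rate(rows):
    """Share of non-reoffenders that the model flagged as high risk."""
    negatives = [r for r in rows if not r[2]]  # people who did not re-offend
    if not negatives:
        return 0.0
    return sum(1 for r in negatives if r[1]) / len(negatives)

# Group the records and report the rate per group.
by_group = {}
for group, predicted, actual in records:
    by_group.setdefault(group, []).append((group, predicted, actual))

for group, rows in sorted(by_group.items()):
    print(group, round(false_positive_rate(rows), 2))
# Prints: A 0.5 / B 0.0 — group A is wrongly flagged far more often.
```

A large gap between groups, as in this toy data, is exactly the kind of signal that suggests the training data or the model carries bias.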

Another non-desired effect has to do with privacy. Current AI works with data, and if this data is private there can be risks, which you can read about in more depth here.

2. Liability
One of the major challenges for liability comes from autonomous, self-learning systems. These systems are designed to take decisions on their own, and they learn over time, so there may come a point when designers cannot predict what the system will decide. If the outcome is negative, who is responsible for the consequences? An example of this is a self-driving car, which is an autonomous, self-learning system. One of the proposed solutions is to create a monitor for these systems, together with laws and regulations that govern them.

3. Unknown consequences of AI automation
One of the main concerns is the workplace: will we still have jobs? Many people fear that all jobs will become automated. Even though we cannot truly know what will happen in the future, a key takeaway is that the nature of many jobs will change: some tasks will become automated rather than entire jobs, and new jobs will likely surface.

Taxation is another area that has received attention, specifically whether there will be enough tax collection when certain jobs disappear. Bill Gates suggested creating a robot tax, which was rejected, but discussions continue in search of a solution that leaves governments with enough funds for welfare and to keep helping their citizens.

Figure 1. The Japanese government hopes that by 2020 four in five care recipients will have support from robots.
4. Relationship between People and Robots
The relationship between robots and people is a trending topic, as many fear that robots will take over many of our tasks and jobs, as mentioned above, but there are also positive sides to this relationship.
The first example is robot caretakers. In Japan, many hospitals have implemented robot caretakers, as well as robots that help elderly people feel less lonely and live happier lives. It has to be mentioned, however, that Japanese society is more advanced in the sense that it has had more contact with robots, has robot-themed cafés and restaurants, and sees them more frequently than other areas of the world. Someone even married a robot!

Many articles mention how people leave bad managers, not bad jobs, which touches on the idea of robot managers. Some have suggested that, in order to avoid favoritism and bias at work, robots could become managers and help people stay in jobs they would otherwise leave. This is still under much discussion, but the idea is out there.

5. Concentration of power and wealth
There are three main challenges when dealing with power and wealth: economic impact, the danger of bias, and AI as a service.

The first is economic impact: can other companies one day compete with the powerhouses of the US and China? In both countries, huge companies hold massive amounts of data: Google, Amazon, Facebook, Apple and Microsoft in the US, and Baidu, Alibaba and Tencent in China. This makes competition hard, as companies with less data will not be at the same level, and wealth inequality will continue to grow.

The second is the danger of bias: no one knows whether the data coming from these companies is biased to some degree, or whether it is fully representative of all genders and populations. Amazon, for example, offers facial recognition to police departments; if the data is not representative and algorithms are trained with it, the results will be more negative than positive.

Last but not least, there is AI offered as a service that is still a black box: how do we explain its results, and how do we handle accountability?

6. Intentional bad uses

Any technology can be used for good, but also for bad, and malicious use is one of the risks of massively applying AI. The report The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation identifies three areas of risk: digital security, physical security and political security. Cyber attacks on critical infrastructure, such as hacking into government systems, qualify as digital security. Self-driving cars and autonomous drones fall under physical security: without proper use and control, they could be turned into weapons if they are hacked. Mass surveillance and fake news are examples of political security.

It is important to understand these challenges, but also to remember that AI is used for good: it can help with natural disaster and catastrophe relief, and improve many processes. It's also key to distinguish non-critical applications, such as marketing and advertising, from critical systems when thinking about applying rules to all applications of AI. Receiving a biased response related to your gender or race is not the same as receiving an automated ad that does not correspond to your preferences.

One of the questions asked during the live Q&A was about AI in Spain, and where Spain stands with regard to this technology, since France, the UK and the EU were mentioned during the webinar. At the moment, Spain is in the process of writing a "White Book" on AI, covering where the strengths of applying AI lie and what ethical issues surround it.
