Code of Ethics for AI Implementation

Every time we read an article about AI, our moral and ethical principles regarding our own human existence are challenged, and we feel overwhelmed, not knowing how to channel emotions that many times we cannot even identify.

We are constantly assailed by the idea of a world dominated by machines: robots that rebel against humans, weapons of mass destruction controlled by AI, or the simple fear that a technological system will leave us unemployed.

All this mixture of emotions could represent major disruptive events in the future of societies, starting from the individuals themselves.

To forestall a catastrophic future in this regard, world experts in the field have met at the European Commission (EC) to publish a guiding document for the development of this technology, one that promotes «respect for human dignity, democracy, equality, the rule of law and human rights» so that society can build "trust" in the implementation of AI in everyday situations.

These are the 7 ethical principles presented by the EC for the development of AI:

  1. AI must be supervised by human beings, with the "appropriate contingency measures".
  2. Systems must be robust and "resilient" in the event of attempts at manipulation or hacking, and must provide contingency plans.
  3. The privacy of citizens' data must be guaranteed throughout the life cycle of artificial intelligence.
  4. AI must be transparent, which means it must be possible to reconstruct how and why it behaves in a certain way; those who interact with these systems should know that they are dealing with artificial intelligence, as well as which people are responsible for it.
  5. Artificial intelligence must take social diversity into account from its development onward, to ensure that the algorithms on which it is based carry no direct or indirect discriminatory biases.
  6. Technological development must take into account its social and environmental impact in a way that is sustainable and ecologically responsible.
  7. Artificial intelligence and its results must be accountable to external and internal auditors.

How can we interpret the implementation of these ethical principles in specific cases of life?

Elder Assistance Robots

What options are there for using AI and robotics in elderly care through assistance robots, without hurting the dignity of the patient?

The elderly person should be asked whether they would prefer a robot to change their diapers, or their own child to do it. How degrading could it be? How impersonal would this relationship be?

Weapons of Mass Destruction

An arms race driven by AI is about to break out.
The European Parliament has created a commission of experts who, meeting in Brussels, managed to reach a consensus on the inclusion of autonomous weapons of mass destruction in the code of ethics.

Robot drones with AI and facial recognition could be developed to anonymously kill a single citizen, or even a group of them.

If this type of technology, these battle robots, spreads widely, it will have an absolutely destructive effect on our society. For fear of being killed, no one would dare to challenge or criticize anyone, leading to self-censorship, submission, and the limitation of thought.

Social AI

The intersection between man and machine. Some AI-supported communication systems can no longer be distinguished from humans.

There are systems that cause "social hallucinations", that is, systems that make you believe you are dealing with another human.

This sparks much debate among experts who defend ethical principles, since every person should have the right to know, at every moment, whether they are interacting with a system or with another human being.

An example of this type of system is Google Duplex: a personal assistant able to book a hairdresser appointment and chat with the person in charge, who may never detect that she is interacting with a machine.



"We are at the gates of a historic learning process, where the human being stands at the threshold of a new era full of uncertainty."


@juanmolina


Partners supporting my work:




Project Hope Venezuela is an initiative created to grow.
See more about it at:

@project.hope - PROJECT #HOPE - PASSIVE INCOME

Please Visit Our Website

Join Our Telegram Channel

Join Our Discord Channel


I invite you to visit Publishx0, a platform where you can publish and earn cryptocurrency.


LA NOTA VIRTUAL

Opinion on Technology, Finance and Entrepreneurship.
Venezuela, Colombia and Latin America
Crypto in Spanish


You can also benefit from the experience of using the Brave browser.
Here I leave my personal link so you can download it: https://brave.com/jua900
Check out the full list of features here: https://brave.com/features/
FAQ: https://basicattentiontoken.org/faq/#meaning


Technology will always seek to surpass human intelligence, and this is the work of people with a desire for power and dominance; but there are others who want to improve the quality of life of the inhabitants of the Earth, using their knowledge to make future societies a better place.

A very interesting post, esteemed @juanmolina.

Reading the principles reminds me of an interesting case I read about, which noted that AI systems learn from humans. In certain cases it happens as with children's learning: they model themselves more on the example of the people who surround them than on the instructions given to them. Thus, if there is any contradiction, what is experienced or learned by imitation predominates over instruction.

I would add that, in a case from a while ago, a development, I think it was from Microsoft, had to be taken offline because it began to show a racist and discriminatory attitude... It is not so strange after all, considering there have already been problems with AI facial recognition systems that yielded a good number of "erroneous identity assignments"; at the time it was said that their designers had not properly fed the initial data and that the parameters for their learning were not established in the most appropriate way.

As for the fields of implementation, I am almost certain that it has already begun in the military field... That would not be strange; in fact, if it had started in any other field, I would not find it logical, considering our history as a species.

By the way, notice that all of this aims to restrict AI so that it cannot escape "human control"... But what about the individual rights of the individuality that has been created, that is, the EGO (identity, self-awareness, the "I", etc.) that is brought into being, which is capable of learning, of caring for others, of killing others, of supporting daily work... Has anyone considered what could happen if it is denied all rights as a being? ... It gives me a very bad feeling and almost seems like the beginning of a very apocalyptic science-fiction movie script.

Greetings appreciated @pedrobrito2004

The routines of AI-supported systems are based on a compendium of human actions and reactions; these are stored and "taught" to the system. The system is then able to decide and choose the most appropriate option.
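As a loose illustration of that idea (a minimal sketch, not any real system's code; all the situations, actions, and function names below are hypothetical), "storing human reactions and then choosing the most appropriate option" can be modeled as a simple frequency table that the system imitates:

```python
from collections import Counter, defaultdict

# Hypothetical stored data: situations paired with the action a human chose.
human_examples = [
    ("greeting", "say_hello"),
    ("greeting", "say_hello"),
    ("greeting", "wave"),
    ("question", "answer"),
    ("question", "answer"),
]

# "Teach" the system: count how often humans chose each action per situation.
learned = defaultdict(Counter)
for situation, action in human_examples:
    learned[situation][action] += 1

def choose(situation):
    """Pick the action humans chose most often in this situation."""
    if situation not in learned:
        return None  # no human example to imitate yet
    return learned[situation].most_common(1)[0][0]

print(choose("greeting"))  # prints "say_hello", the majority human choice
```

The sketch also shows why such a system inherits human biases: it can only reproduce whatever behavior its human examples contained.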

So it is humans (so far) who create AI, and therefore we "still" have the possibility to limit it. I say "still" because the time may come when the systems themselves become individualized and dispense with human intervention entirely.
That is why these ethical principles are being established.

The weakness is that they are only "principles", and perhaps there will be people who will not be governed by them.

Your friend, Juan.

Yes, you are right, and I share an idea a friend once expressed: principles are not laws; they are rather something like an expression of good wishes or favorable intentions toward something.

Exactly!

And in this case, this represents a great weakness, a weak point.

Even if laws were made, there is no guarantee that they would be respected.

Dear @pedrobrito2004

Amazing comment buddy.

It's quite scary that AI is learning from humans. Our past and present have proved many times that we can be considered "evil".

Imagine learning from your parents and seeing that they are capable of torturing people, killing them, surveilling them, controlling them. How would that impact your learning process?

Piotr

Sounds like a recipe to raise a psychopath ...
Something like Norman Bates in Psycho?


Congratulations @juanmolina! You have completed the following achievement on the Steem blockchain and have been rewarded with new badge(s) :

You published more than 250 posts. Your next target is to reach 300 posts.

You can view your badges on your Steem Board and compare to others on the Steem Ranking
If you no longer want to receive notifications, reply to this comment with the word STOP

To support your work, I also upvoted your post!

You can upvote this notification to help all Steem users. Learn how here!

Interesting, these changes to come, @juanmolina.

Welcome, dear friend @sacra97, thank you for your comment.

Hi @juanmolina

Another great read. Upvoted already.

I believe that people do not think much about ethics for AI implementation, just as they didn't think much about the ethics of social media and mobile devices.

My strong impression is that the majority of people I talk to about AI are still thinking, "people used to say that 20 years ago, and look, it's 2019 and they still just keep talking." Basically, people around me don't seem to take AI as a real threat any more. They've heard about AI taking our jobs for too long, and they think that same narrative will continue for many years ahead.

All this mixture of emotions could represent major disruptive events in the future of societies, starting from the individuals themselves.

You nailed it.

Thanks for telling us about the EC and those 7 principles. I never knew about them. Interesting. When did those experts meet? Recently?

Seriously amazing read. Resteemed already.
Yours, Piotr

Hi dearest friend Piotr.

These meetings date from 2018.

They started in Brussels, with the participation of Dr. Thomas Metzinger (German), an expert deeply committed to the regulation of AI in the arms field.

These ethical guidelines were published in mid-2019.
They are still under discussion; it is a very current topic.

Hugs!

Hello @juanmolina,

A very interesting topic and a nicely written article!

This is the type of AI thoughts and discourse at the core of the most important problems of AI.

I guess the EC did a not-so-bad job of hammering something out on a meta level. On the other hand, I'm pretty sure it's a little like calling for registration and insurance with these new electric scooters everywhere. Yep, they've got the rules and regulations in place, but no manpower to enforce any of this other than by chance.

Now think of AIs created at a script-kiddie level with freely available resources, in an anonymous style. I guess it's a no-brainer that at this level not-so-nice things will happen for sure.

But, for corporations and official public bodies this seems to be a set of rules that they could work by.

Upvoted, resteemed and shared to my twitter account!

Cheers!
Lucky

Thank you for your quick-witted comment; I must confess that I was expecting this from you.
When I wrote my article, I always kept in mind how much you like this topic.

In the coming decade (which is about to begin), AI will provide the greatest benefits in quality of life that humanity can imagine. It will totally change our environment, our way of life, and the way we human beings interact.

But unfortunately, history has shown us time and again, how destructive and ambitious we are. Wars, weapons of mass destruction, environmental pollution, extinction of species, xenophobia, racism ...

With AI this will not be different.

Very true Juan!

I agree 100%! ...and I enjoy reading your articles a lot, my friend! Nice little group that we have here around @crypto.piotr!

Cheers!
Lucky

Hi
