Are there any guidelines on artificial intelligence and data protection?

Artificial intelligence has seen exponential growth over the past few years and is likely to keep growing. We are moving in a direction where artificial intelligence will be involved in all spheres of life: medicine, technology, jobs, education, and even politics. With the development of artificial intelligence solutions, data protection laws covering artificial intelligence have become imperative. On January 28, which is Data Protection Day, new guidelines for data protection with respect to AI were announced.

The Consultative Committee of the Convention for the Protection of Individuals with regard to the Processing of Personal Data (Convention 108) developed these guidelines. The guidelines on artificial intelligence and data protection fall into several subsets: a list of general guidelines, guidelines for developers, manufacturers, and service providers, and guidelines for legislators and policy makers. Let us take a look at each of these.

General guidelines:

There are numerous general guidelines; below is a list of the basic issues they tackle. This is not the list as it appears in the legal text, but rather a plain-language interpretation of it. The sections that follow are explained in the same way.

1. Human dignity

Human dignity, rights, and freedoms should always be the central concern. This is especially important when AI is used in decision-making processes.

2. Convention 108

In keeping with Convention 108, personal data should only be processed in accordance with the principles of

  • Lawfulness
  • Fairness
  • Purpose specification
  • Proportionality of data processing
  • Privacy by design and by default
  • Responsibility and demonstration of compliance (accountability)
  • Transparency
  • Data security
  • Risk management


3. Avoid risks of using personal data

Since personal data is used in artificial intelligence programming, developers should handle it ethically and avoid exposing data subjects to unnecessary risk.

4. Functioning of democracies and social values

Not just human rights and freedoms, but also the functioning of democracies and social values should be considered. Humanity should be protected in all facets.

5. Rights of data subjects

The data subjects should have the right to participate or opt out. They should further have meaningful control over any AI for which they are data subjects.

There are six guidelines in the legal document, however for the purpose of understanding, a few guidelines have been combined.

Guidelines for developers, manufacturers and service providers:

1. Values-oriented approach

A mobile application development company or anyone developing AI should have a values-oriented approach for developing these products.

2. Rights and freedoms

Human rights and fundamental freedoms should always be the priority. No technology should be developed that infringes on these rights, and all AI must pre-empt and mitigate such risks.

3. No biases

Centring human rights needs to happen right from the development process. No biases should be built into artificial intelligence programming.

4. Limited personal data

The amount of personal data used by artificial intelligence applications must be kept to a minimum. Synthetic data may be used in order to avoid relying on large amounts of personal data. Only the data essential to the program should be used, and no marginal data should be collected.

5. Contextualised data

The impact of data that has been taken out of context should be taken into account. While developing AI, the impact the data will have in particular contexts must be considered.

6. Consult experts

To develop socially acceptable, ethical, and human rights-based AI, developers should consult academics and experts from various fields so that they are aware of potential biases. In fields such as crime detection especially, unaddressed bias can be detrimental. It is necessary to work ethically and eliminate biases from all data.

7. Participatory risk assessment

Subjects who will be affected by certain artificial intelligence solutions should be involved in the discussion. Direct stakeholders of any artificial intelligence should be consulted before the program is developed.

8. User trust / right to information

If an artificial intelligence system is being used, the people subject to it should be made aware. They should know what they are signing up for, and it should not be used without their consent. There should always be alternatives to AI available in case users do not want their data to be used.

9. Right to object

Users should not only be made aware of their participation, but should also have the right to object. If a user does not want their data to be used, they should have the right to say so.

There are twelve guidelines in the legal document, however for the purpose of understanding, a few guidelines have been combined.

Guidelines for legislators and policy makers:

1. Accountability

Codes of conduct and certification mechanisms should be in place. A mobile application development company should be held accountable for the way they use AI.

2. Transparency

Artificial intelligence programming must maintain transparency. Only information that must remain confidential under law is exempt from this. Otherwise, all usage of AI should be assessed for the consequences it may have on various groups.

3. Resources

The authority supervising the usage of AI should have the resources to do so, including all the digital resources necessary for the job.

4. Human intervention

Human intervention should not be entirely eliminated from decision-making processes. AI outputs should not be taken at face value and should be used ethically.

5. Consult supervisors

Mobile development companies should be required to consult supervisory authorities when dealing with data that carries a risk of ethical violations.

6. Authority collaboration

Bodies such as consumer protection, anti-discrimination, and competition authorities should work together with the AI supervisory bodies when required.

7. Independence

The independence of all these bodies should be ensured. They should not be influenced by any group.

8. Stakeholder involvement

All data subjects, governments, and other stakeholders that may be impacted by artificial intelligence solutions should be involved. Everyone affected should have some form of participation.

9. Digital literacy

People should be educated about artificial intelligence and how it works so that they can make informed choices about their participation.

Thus, these guidelines are an attempt to secure data protection, and to put humanity and ethics at the centre of all AI programming.


About the Small Business Bonfire

The Small Business Bonfire is a social, educational and collaborative community founded in 2011 for entrepreneurs that provides actionable tips and tools through a small business blog, a weekly newsletter and a free online community.

© 2019   Created by Alyssa Gregory.