Guiding Principles for the Development of AI


AI is everywhere. What was once regarded as the preserve of science fiction fans or mathematicians is now relevant to everyone and has the potential to bring extraordinary benefit to the whole of humanity. There is, however, a darker side: AI poses many dangers. We cannot ignore this in the hope that someone else will make it right. Current ethical regulation and guidelines are not robust enough to provide adequate protection from the potential threats AI poses to humanity. Stuart Russell, a leading academic in the development of AI, warns us clearly and simply of the urgency and importance of strong leadership in the ethical development of AI: ‘how we choose to control AI is possibly the most important question facing humanity’. This is something that must be done collectively and wisely. Vladimir Putin has said that whoever becomes the leader in AI “will become the ruler of the world”; as Russia invades and wages war against Ukraine, this shows us the very real and present danger if AI is not ethically regulated. It is imperative that decisive action is taken as a priority. I therefore propose a regulatory framework with three distinct parts, combining best practice from other industries, existing AI research and practical application assistance, to ensure that artificial intelligence brings benefit to as many people as possible.


The three distinct parts of this regulatory framework are as follows:


1) ISO Process Adherence

To improve the process of large-scale data collection and data management procedures, I suggest a process-based Data Quality Management System (DQMS) built on the ISO 9001 standard. This will ensure data quality, reliability, compatibility, interoperability, inclusivity and efficiency. ISO 9001 is a robust quality management standard that applies to organisations of all types, regardless of their size or sector, and can help both product- and service-oriented organisations achieve standards of quality that are recognised and respected throughout the world. This process should be adhered to for initial and ongoing data collection used for the development of AI.
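To illustrate what a record-level quality gate inside such a DQMS might look like, here is a minimal sketch in Python. The field names and quality rules are purely illustrative assumptions of mine; ISO 9001 specifies process requirements, not these particular checks.

```python
# Hypothetical sketch of a record-level quality gate for a Data Quality
# Management System (DQMS). Field names and rules are illustrative only;
# ISO 9001 defines process requirements, not these specific checks.
from dataclasses import dataclass, field


@dataclass
class DataRecord:
    source: str            # provenance of the collected data
    consent_given: bool    # was informed consent recorded?
    content: str           # the collected data itself
    annotations: list = field(default_factory=list)


def passes_quality_gate(record: DataRecord) -> bool:
    """Return True only if the record meets every baseline quality rule."""
    checks = [
        bool(record.source),           # provenance must be documented
        record.consent_given,          # consent is non-negotiable
        bool(record.content.strip()),  # no empty payloads
    ]
    return all(checks)


# Example: a record with no documented source fails the gate.
good = DataRecord(source="survey-2023", consent_given=True, content="...")
bad = DataRecord(source="", consent_given=True, content="...")
print(passes_quality_gate(good), passes_quality_gate(bad))  # True False
```

The point of the sketch is that quality is enforced as a repeatable process step applied to every record, initial and ongoing, rather than as a one-off audit.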


2) Value Alignment (Octagonal Mindset)

The DQMS will be aligned with common global human values. ‘Value Alignment’ is critical in the ethical development of AI. Stuart Russell emphasises that AI should be developed with the objective of maximising the realisation of human values. I have outlined eight common human values to be applied, which together I call an Octagonal Mindset.


3) Application of the Asilomar AI Principles

To each mindset I have allocated principles from the 23 ‘Asilomar AI Principles’, a document whose principles range from research strategies to data rights to future issues, including potential super-intelligence. These are principles that most participants at the BAI 2017 conference (the Future of Life Institute’s second conference on the future of artificial intelligence) agreed are important to uphold; participants included leading AI academics and industry experts.


AI will allow us to achieve unprecedented possibilities, but it is important to acknowledge the risks it poses if it is developed and used without wisdom and care. In light of this, it is important to regulate the initial data collection and the ongoing process for the development of ethical AI. Data is what drives AI development, so with good, fair, inclusive and ethical data we will develop good, fair, inclusive and ethical AI. I have named this ethical regulatory framework for data collection an Octagonal Mindset. Below is a brief explanation of each of the eight mindsets:


Faith to Fail

AI-driven data collection means we have more information and intelligence available to us than ever before, leaving us better equipped to make informed decisions and to achieve bigger things. This does not remove all risk, but when we know there is good data behind the algorithm, we can better understand and manage the risks that remain. This increased availability of information helps us learn from mistakes and gives us the courage to press forward with innovations.


This relates to the following Asilomar AI Principles:

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.


Original Identity

Technology should not be assigned a level of human identity, worth, dignity, or moral agency. No application of AI should devalue or degrade the dignity and worth of another human being. Original identity belongs to humans, not to AI technology. AI should be designed and used in ways that treat all human beings as having equal worth and dignity. Data collection needs to be conducted with the uniqueness of each person in mind and must account for all biases.


This relates to the following Asilomar AI Principles:

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyse and utilise that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.


Respectful Patience

AI should be used in ways that aid work and allow humans to make fuller use of their talents: not to eradicate work but to increase sustainable and meaningful employment opportunities for all. Proper investment needs to be made in data collection, and corners cannot be cut. Big data is not enough; it needs to be good data. Good data creates good AI.


This relates to the following Asilomar AI Principles:

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.


Generous Integrity

AI is a powerful tool for identifying and eliminating bias and for assisting human decision-making, but it is not a moral agent. Humans alone bear the responsibility for moral decision-making. This process is strengthened, however, when algorithms are developed with integrity at their core and throughout the data collection process.


This relates to the following Asilomar AI Principles:

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.


Innocent Kindness

Advances should be guided by basic principles of ethics, including beneficence, non-maleficence, autonomy, and justice. AI development must be founded on the benefit of humanity as a whole. This is not survival of the fittest but inclusivity in its purest form: ensuring that even the most disadvantaged people benefit from the development of AI. This focus must be in place from the beginning and carried through the data collection and annotation process.


This relates to the following Asilomar AI Principles:

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.


Values Applied

Data collection practices should be consistent with human values, building courage, inspiration, connection and motivation, and should conform to ethical guidelines that uphold the dignity of all people. Having values is not the important thing here; what matters is the application and prioritisation of those values in our everyday lives and in the data collection process.


This relates to the following Asilomar AI Principles:

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.


Empathetic Unity

AI has legitimate applications in policing, intelligence, surveillance, investigation, and other uses supporting the government’s responsibility to respect human rights and to protect and preserve human life. Data collection must be carried out with the aim of pursuing peace and justice in a flourishing society.


This relates to the following Asilomar AI Principles:

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.


Sustained Commitment

AI will continue to be developed in ways that we cannot currently imagine or understand, including development that will far surpass many human abilities. Developmental decisions should not be left solely to those who develop the technology, nor to governments setting norms; humanity as a whole has an opportunity to shape and mould this development for good, for both present and future generations. Data collection can contribute to this by communicating this vision across the global network of contributors to the data collection and annotation process.


This relates to the following Asilomar AI Principles:

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.


Good data will develop good AI, which will help and empower us to become a people that FORGIVES (Faith to Fail, Original Identity, Respectful Patience, Generous Integrity, Innocent Kindness, Values Applied, Empathetic Unity, Sustained Commitment) and will ensure all of humanity benefits as our world becomes increasingly digital. In a world of data, it is imperative to steward, handle and process this data responsibly to build a better tomorrow. To conclude, we need only step outside, look up at the sky and count the stars to be reminded of the infinite possibilities open to us when we live and work with an Octagonal Mindset.








