Data Ethics and AI Policy – Novo Nordisk Fonden

Novo Nordisk Foundation – Policy on Data Ethics

The Novo Nordisk Foundation is committed to upholding the highest standards of ethical conduct regarding the use of data, including compliance with Danish, EU, and other relevant laws. Additionally, recognising that the fast pace of technological development can leave regulatory gaps, we have further developed six principles for how to handle data in an ethical way.

This policy describes how the Novo Nordisk Foundation uses and processes data, including identifiable personal data, non-identifiable (e.g. anonymised) personal data, and data that grant applicants submit using the Foundation’s application system. The policy complements all other rules and guidelines for handling of personal and other data that apply to employees.

The overall responsibility for the policy on data ethics is anchored with the Foundation’s CFO. The Foundation will periodically review and revise the principles to reflect evolving technologies, the regulatory landscape, stakeholder expectations, and understanding of the risks and benefits to individuals and society of data use.

The six principles of the Novo Nordisk Foundation’s policy on data ethics and responsible handling of personal data are:

  1. Respect for the privacy of grant recipients, applicants and employees is a fundamental value for NNF.
  2. NNF considers data ethics to be more far-reaching than mere compliance with the law.
  3. NNF prioritises openness and transparency in the ongoing challenge of ethically handling both personal data and non-identifiable data. NNF strives to learn from regulators, other companies, and other organisations.
  4. NNF restricts access to personal data to employees with a “need to know”. Employees with access to such data are bound by confidentiality.
  5. NNF only discloses grant applicant data to authorities if there is an obligation to do so according to legislation and a court or regulatory decision.
  6. Artificial intelligence, analyses, impact measurements, and the use of algorithms must be aligned with NNF’s core values and dedicated to advancing our strategy and goals. These can include assisting NNF’s grant applicants or recipients and promoting openness and transparency around NNF’s activities and impact. The Foundation maintains a separate Policy on Ethical Use of AI, which expands on this point.

Novo Nordisk Foundation Policy on Ethical Use of AI

The Novo Nordisk Foundation is committed to upholding the highest standards of ethical conduct in the use of Artificial Intelligence (AI), including compliance with Danish, EU, and other relevant laws. In addition, recognising that the fast pace of technological development can leave regulatory gaps and requires thoughtful and responsible decision-making, we have further developed this policy on ethical use of AI.

This policy covers NNF-related use of AI by employees, management, consultants, suppliers, and vendors. In addition, this policy covers grant recipients, personnel funded by NNF grants, members of NNF committees, and individuals, institutions, and organisations collaborating with us when carrying out activities funded by or related to NNF.

GUIDING PRINCIPLES
The following principles should guide use of AI by individuals and organisations in scope for this policy.

  1. Societal benefit
    Our use of AI must be aligned with NNF’s core values and dedicated to advancing our strategy and goals. In addition, any use of AI must fully comply with relevant legislation.
  2. Transparency
    We must be transparent in disclosing when and how AI is used in decision-making, and make appropriate disclosures when material is generated by AI.
  3. Fairness
    We must minimise use of AI that could lead to unfair discrimination against individuals or groups, and actively seek out AI trained to avoid creating or reinforcing biases.
  4. Safety
    We must take steps to ensure that our use of AI does not pose unwanted harms to people or processes.
  5. Data protection
    We must make sure our use of AI accounts for the protection and security of data, as well as upholds privacy rights.
  6. Human autonomy and oversight
    We must ensure that individuals maintain the ability to guide AI activities, decide when and how AI is applied, and review or override AI decision-making.
  7. Accountability
    We, as an organisation, must put in place measures to hold AI systems and their users accountable, including maintaining policies and procedures such as impact assessments, audits, and due diligence.