On April 10, 2019, U.S. lawmakers introduced the Algorithmic Accountability Act (the AAA). The AAA empowers the Federal Trade Commission (FTC) to promulgate regulations requiring covered entities to conduct impact assessments of algorithmic “automated decision systems” (including machine learning and artificial intelligence) to evaluate their “accuracy, fairness, bias, discrimination, privacy and security.”
The European Union Agency for Network and Information Security (ENISA) recently published its report on ‘Security and privacy considerations in autonomous agents’.
Artificial intelligence (AI) and complex algorithms offer significant opportunities for innovation and interaction, but they also bring a number of challenges that future policy frameworks at the EU level should address – especially in light of the growing amount of available data.
One of the objectives of the study was to provide relevant insights on both security and privacy for future EU policy-shaping initiatives. We have summarised some of the key security and privacy recommendations from the report below.
The President has made artificial intelligence technology a policy priority. On February 11, 2019, the President issued an Executive Order directing most federal executive agencies to promote and protect American advancements in artificial intelligence while working with private industry. The order recognized that public trust in artificial intelligence is an important factor in the development and use of the technologies, and highlighted the need to “protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.”
Specifically, the President ordered the agencies to consider artificial intelligence as a research and development priority and to:
- Invest in artificial intelligence (for example, machine learning) research and development.
- Enhance access to data, models, algorithms, and computing resources to promote artificial intelligence research and development (consistent with obligations to maintain safety, security, privacy, and confidentiality).
- Reduce barriers to the use of artificial intelligence (for example, machine learning) technologies.
- Help develop technical standards that minimize vulnerability to attacks and “reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies.”
- Train a workforce that can develop and take advantage of developments in artificial intelligence.
- Develop an action plan “to protect the advantage of the United States in AI and technology critical to United States economic and national security interests against strategic competitors and foreign adversaries.”
A meeting of data protection authorities from around the world has highlighted the development of artificial intelligence and machine learning technologies (AI) as a global phenomenon with the potential to affect all of humanity. A coordinated international effort was called for to develop common governance principles on the development and use of AI in accordance with ethics, human values and respect for human dignity.
The 40th International Conference of Data Protection and Privacy Commissioners (conference) released a declaration on ethics and data protection in artificial intelligence (declaration). While recognising that AI systems may bring significant benefits for users and society, the conference noted that AI systems often rely on the processing of large quantities of personal data for their development. In addition, it noted that some data sets used to train AI systems have been found to contain inherent biases, resulting in decisions which unfairly discriminate against certain individuals or groups.
To counter this, the declaration endorses six guiding principles as its core values to preserve human rights in the development of AI.