Impacts and Takeaways From the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

The Evolution of AI and LLMs

Artificial intelligence (AI) has come a long way since its inception in the 1950s, and large language models (LLMs) are its most recent leap forward. From the pioneering research of English mathematician and logician Alan Turing to the recent breakthroughs achieved by models like GPT-3 and GPT-4, AI has undeniably transformed industries and revolutionized human-computer interaction. But as AI becomes increasingly intertwined with our daily lives, developing an effective strategy to regulate it while optimizing its value is more critical than ever.

As we rapidly approach the one-year anniversary of ChatGPT’s release to the public, the number of users of the generative AI tool has skyrocketed from one million in the first five days to an estimated 180 million and counting. Given that momentum, not to mention the rapidly growing value proposition for driving innovation and advancing government agency missions, President Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence was timely. The order, which spans 111 pages, charts a broad path, with short-term and long-term guidance toward responsible AI practices that protect privacy, address bias, and mitigate risk for years to come.

Short-term Impacts: Fostering Trust and Transparency

The Biden EO establishes clear expectations for both federal agencies and the private sector, emphasizing the importance of trust, security, and transparency in AI development and deployment. The order also promotes fairness and bias mitigation, ensuring that AI technologies do not inadvertently perpetuate discriminatory outcomes. By implementing explainable AI methodologies, organizations can better understand the decision-making processes of AI systems, enhancing accountability and public trust.

Moreover, the order prioritizes privacy and data protection, recognizing the need to safeguard sensitive information in an era of increasingly frequent data breaches and cyber threats. With stricter guidelines on data handling and encryption, the government underscores its commitment to protecting citizens’ personal information. These short-term impacts lay the foundation for a more responsible and ethical AI ecosystem, fostering trust among citizens and stakeholders alike.

Long-term Impacts: Advancing AI for the Public Good

Looking ahead, the order is poised to drive transformative changes in the AI landscape, with far-reaching benefits for society. By prioritizing research and development, the government aims to advance AI innovations that serve the public good while addressing societal challenges. This long-term vision aligns with initiatives like the AI for Good Global Summit, where experts from various sectors collaborate to leverage AI to tackle issues such as climate change, healthcare disparities, and educational equity.

Furthermore, the executive order recognizes the importance of open-source AI solutions. Open-source frameworks and models provide a collaborative platform for researchers and developers, enabling them to build upon each other’s work while fostering transparency and innovation. By encouraging the adoption of open-source practices, the government promotes the democratization of AI, allowing smaller organizations and researchers to contribute meaningfully to the field, leading to a more inclusive and diverse AI community.

Collaboration and Overcoming Challenges

It’s important to note that President Biden’s executive order on AI does not stand alone. It builds upon the collective wisdom of the AI community and complements existing frameworks like the Blueprint for an AI Bill of Rights and the AI Risk Management Framework from the National Institute of Standards and Technology (NIST). These collaborative efforts across academia, industry, and government foster a multidimensional approach to the challenges of AI. They underscore the power of collaboration in driving progress, overcoming bias, enhancing transparency, and securing the future of AI development and deployment.

By prioritizing trust, fairness, and transparency, the order lays a foundation for responsible AI practices that benefit individuals, communities, and society as a whole. It recognizes the potential of AI to address societal challenges and encourages collaboration to drive innovations for the public good. As the AI landscape continues to evolve, the principles outlined in this executive order will guide the way, fostering trust, security, and unlocking the true potential of AI technology.

Trusting in AI Requires Trusting Your Data

Good AI rides on the back of good data. As government agencies strive to advance and accelerate their missions through AI solutions, they must ensure the underlying data is high quality and trustworthy. That can only be achieved through robust data management capabilities and a well-established data strategy, supported by strong governance and security measures. With Cloudera, a world-class leader in the open data lakehouse for trusted AI, public sector agencies can harness the power of generative AI to improve mission planning, intelligence analysis, and cybersecurity, ultimately enhancing national security efforts through cutting-edge technology solutions.

Find out more about Cloudera Data Platform (CDP), the only open data lakehouse for both private and public cloud.

Steve DeVoir
Managing Director Industry Solutions - Public Sector
