Artificial intelligence has gone from technological fantasy to an integral part of our everyday lives. From the hyper-personalized recommendations on our sister site OverTraders.com to the algorithms reshaping our financial markets, AI’s influence can’t be overstated. But this rapid integration carries a chilling undercurrent: the potential for misuse and the erosion of ethical boundaries. I have long argued that government regulation is necessary to develop AI responsibly, and that it is also the best way to keep this technology out of the hands of hostile nations.
We’ve already seen a small sample of AI’s darker side. AI systems can easily perpetuate and amplify biases present in society, leading to discriminatory decisions in critical fields like loan approvals, job recruitment, and the criminal justice system. Algorithmic bias is pervasive, but the algorithms themselves aren’t malicious: they’re trained on data that reflects our society’s prejudices, and without careful oversight they will propagate and entrench those same biases.
The unregulated collection and use of personal data by AI systems raises significant privacy concerns as well. Facial recognition technology is being deployed across the country at an unprecedented rate, creating real risks of mass surveillance by governments and corporations. Where do we draw the line between convenience and the erosion of our most basic rights?
What is the danger of ceding so much decision-making power to machines? The increasing reliance on AI in critical areas like healthcare and autonomous vehicles raises profound questions about accountability and the role of human judgment. Who is held accountable when an AI-powered system fails? We need to make sure that human values remain central to these evolving decision-making processes.
The risk of abuse goes far beyond these ethical predicaments. AI can be weaponized to launch sophisticated cyberattacks, spread disinformation at massive scale, and even power autonomous weapons systems that could escalate conflicts and destabilize global security. Hostile nations could bring these same technologies to bear against us, using them to subvert our democracies, shatter our economies, and threaten our national security.
I keep returning to a conversation I had with a leading cybersecurity expert a few months ago. The picture he painted of AI-powered phishing attacks was as timely as it was frightening: these attacks are more sophisticated, more targeted, and more convincing than ever before. He warned that we're entering an era in which it will be increasingly difficult to distinguish what's real from what's fake online, and that the consequences could be devastating.
Critics argue that government regulation would kill innovation and hold back the development of artificial intelligence. They believe the market should be left to regulate itself, and they fear that overly burdensome rules would stop progress in its tracks. I understand these concerns, but the costs of doing nothing clearly outweigh the costs of regulating for safety.
We're not talking about stifling innovation. We're talking about setting ethical boundaries and ensuring that AI is developed and used in a way that benefits humanity as a whole. This requires a multi-pronged approach that includes:
- Establishing meaningful, enforceable guidelines and standards for AI development and deployment, including requirements for transparency, accountability, and security.
- Implementing strong AI export controls to stop the flow of sensitive AI technologies to adversarial countries like China.
- Writing new regulations that prevent malicious uses of AI, including protections against cyberattacks and disinformation campaigns.
- Supporting the development of AI that places safety and security at its core, such as AI designed to detect and stop cyberattacks before they happen.
- Forming multilateral partnerships to cooperate and share information in responding to misuses of AI that pose a global risk.
The Council of Europe’s AI Framework Convention, the first attempt to establish a broad, binding set of rules covering AI, is an encouraging development in this regard. It aims to create a common legal framework for AI governance focused on human rights, democracy, and the rule of law. Broad in scope and centered on oversight mechanisms, it could set the stage for more unified and impactful global regulation of AI.
This leads me to another, and often most powerful, force shaping regulatory priorities: public opinion. Polls show that close to 90 percent of the public supports federal oversight of AI, with particularly strong demand for oversight of autonomous weapons systems and facial recognition technology. This public call for accountable, ethical, human-centered AI development must be heeded.
We must adopt regulations that put transparency front and center, clarifying how AI systems work and what decisions they are making. We need regulations that account for the societal impacts of AI, such as exacerbating inequality or spreading misinformation. And we need strong rules that hold big tech accountable for the harms their AI fuels, from data privacy violations to racial bias.
I recently read a study highlighting how large a problem the lack of AI transparency is. Users are far more willing to trust AI systems when they can see how those systems reach their conclusions; that understanding builds trust and comfort with the technology. This underscores the urgent need for regulations that require companies to disclose how their algorithms operate and hold them accountable when those algorithms cause harm.
We know the path forward won’t be easy. There will be growing pains and points of contention along the way. But the consequences are too grave for us to sit back and watch as AI evolves without regulation. There is still time to ensure this remarkable new technology is used for good rather than ill.
The future of AI is not already written; how it unfolds is entirely up to us. If we act with bold but responsible regulation, prioritize international cooperation, and put ethics first, we can realize AI’s transformative potential while managing its dangers. With deepfakes already on the loose and AI spreading rapidly, the time to act is now, before the algorithm writes a future we don’t want to live in.