Artificial Intelligence (AI) is a double-edged sword: it delivers innovation, but at a cost. Because the technology arrived so suddenly, regulation remains an open question, even as AI finds applications in more and more industries and steadily integrates into everyday life. That makes it all the more necessary to establish safeguards before this growth turns against us.

After recently unveiling new export controls on artificial intelligence software, the White House appealed to lawmakers, businesses, and European allies to avoid over-regulating the technology. It also held firm in its refusal to join an initiative by the world's seven leading economies aimed at establishing shared principles and guidelines on artificial intelligence, even though the U.S. is slated to assume the organization's presidency this year. The U.S. has declined to work with the other G-7 nations on the project, known as the Global Partnership on Artificial Intelligence, maintaining that the plan would be overly restrictive. While a hands-off philosophy may have made sense in the early days of the boom, at this point the technology's unprecedented growth risks slipping out of our hands.
Kay Mathiesen, an associate professor at Northeastern who focuses on information and computer ethics and justice, contends that the U.S.’s refusal to cooperate with other nations on a united plan could come back to hurt its residents.
Although the President argues that attempts to regulate could prove bureaucratic and could restrict the freedom the science needs to expand, many companies are already ahead of the curve, developing oversight mechanisms to guide the ethical development of their products. Without such safeguards, the technology's benefits risk being outweighed by its costs. Many argue that actual policies remain far in the future, and that the current debate mainly serves to trigger the necessary discussion about ethics. The erosion of public confidence that began with data privacy leaks could deepen if AI's negative impacts come into play.