The letter they penned, signed by Elon Musk and Apple co-founder Steve Wozniak, suggested that “AI systems with human-competitive intelligence can pose profound risks to society … and should be planned for and managed with commensurate care. … Unfortunately, this level of planning and management is not happening.”
To address those concerns, Sen. Mike Rounds, R-S.D., leader of the Senate's AI Caucus, suggested that AI creators, and their creations, should be subject to the same rules as any other field.
“I think what you have to do is, to identify what is not allowed in terms of ethics and illegal activities, whether it is AI or not – you impose on AI activities the same level of ethics and privacy that you do for other competencies today,” he said.
Sen. Gary Peters, D-Mich., Homeland Security and Government Affairs Committee chair, said he plans to continue to investigate the developing technology.
“I intend to have a series of hearings in Homeland Security and Government Affairs taking up AI and what we should be thinking about,” Peters said.
Peters and Rounds were joined by Sen. Michael Bennet, D-Colo., who agreed that some sort of oversight will likely be necessary.
“I think we do have a role to play,” he said of Congress’ role in managing AI, but stressed that technology developers will have to share that responsibility.
“In the long run, I think what we could do is set up, you know, an agency here. They can negotiate on behalf of the American people, so we can actually have a negotiation about privacy. In the near term, I think it’s going to be important for tech to police itself.”