Massachusetts Attorney General and UMass Boston Applied Ethics Center Address AI Consumer Protection
Massachusetts Attorney General Andrea Joy Campbell joined Nir Eisikovits, Professor of Philosophy and Director of the UMass Boston Applied Ethics Center, for a fireside chat on how artificial intelligence (AI) is influencing consumer and civil rights. Alec Stubbs, a postdoctoral fellow at the center, moderated the lively discussion between Campbell and Eisikovits, which covered the risks and benefits that come with the emergence of AI, biases in algorithms, and data protection.
Introducing the panel, Chancellor Marcelo Suárez-Orozco spoke about the rise of artificial intelligence, its influence on nearly every aspect of our lives, and the opportunities that AI brings to higher education. UMass Boston’s Paul English Applied Artificial Intelligence Institute will not only support the incorporation of AI competencies into curriculums but also provide students with the tools and skills they’ll need to thrive in tomorrow’s workforce, he said.
“We believe that AI will continue to do wondrous things – like drive economic productivity to new heights and launch innovative technologies that improve the quality of life for millions,” Suárez-Orozco said. “Our engagement with AI aims to ensure that a UMass Boston education cultivates the ability to apply artificial intelligence to improving the ways we live.”
Along with the benefits AI can bring will come issues around consumer and civil rights, according to Attorney General Campbell. She spoke about how her office addresses AI and the tools available to combat discriminatory practices driven by AI models in housing, workers’ rights, and other areas of concern.
“If you have an algorithmic decision-making system, it just puts information in with no full context, which can very much be used against you,” she said. “We see our tools as being able to be leveraged to have greater conversations to convene folks to solve particular problems and then, of course, using other stakeholders who are better positioned to take on other challenges.”
Campbell and Eisikovits discussed how companies are training new AI models using social media platforms and the impact this type of data collection has on consumers. Eisikovits explained that social media companies have a vested interest in making money and maximizing shareholder value, not in the preservation of democracy, and that these companies are actively seeking out other ways to collect data to train AI models to emulate the human mind.
“There is a disconnect about what matters to the companies like Meta and X, and what matters to the Attorney General and the people that the Attorney General represents and works for,” Eisikovits said.
“We currently don’t have the tools that we would want and need to hold social media platforms accountable for what content they put out there that may cause harm, misinformation, and undermine our democratic institutions,” Attorney General Campbell said. “We have to be working with the legislature, and the FTC at the federal level is concerned about this and we’re actively working with them.”
Eisikovits said the AI models that companies use can become biased through the data they are trained on, but that AI as a vehicle of bias is potentially more reversible than other vehicles of bias.
“If the data was fixed, which is a big if, and if the models were supervised and fixed, AI is potentially less biased than human beings,” Eisikovits said. “It has the potential if economic and political incentives are there.”
“There is the possibility that you can reduce bias depending on what the inputs are and who the human is that’s informing those systems, but I think it’s a real challenge,” Attorney General Campbell said. “The other potential societal risk or harm is that you improve a system so much that you continue to have labor shortages. Some of us here won’t have the same job because you will be replaced by technology, so our unions and other stakeholders are thinking about how to transition our residents to other employment opportunities.”
The Office of the Attorney General released an advisory to provide guidance on how state consumer protection and other laws apply to artificial intelligence. The advisory clarifies how AI developers, suppliers, and users must comply with existing state consumer protection, anti-discrimination, and data privacy laws. Attorney General Campbell spoke about the advisory and the tools that her office has, and will soon acquire, to properly address the impacts of AI.
“We have some of the most progressive and robust consumer protection laws in Massachusetts and in the country. Our discrimination laws are the same, and our data privacy laws are quite extensive and expansive,” Attorney General Campbell said. “We put out this advisory making it crystal clear that with these new forms of technology our laws absolutely apply and that we have every intention of enforcing them.”