New York state government agencies will have to conduct reviews and publish reports that detail how they’re using artificial intelligence software, under a new law signed by Gov. Kathy Hochul.
Hochul, a Democrat, signed the bill last week after it was passed by state lawmakers earlier this year.
The law requires state agencies to perform assessments of any software that uses algorithms, computational models or AI techniques, and then submit those reviews to the governor and top legislative leaders along with posting them online.
It also bars the use of AI in certain situations, such as an automated decision on whether someone receives unemployment benefits or child-care assistance, unless the system is being consistently monitored by a human.
Law shields workers from limiting of hours due to AI
State workers are also shielded from having their hours or job duties cut because of AI under the law, addressing a major concern that critics have raised about generative AI.
State Sen. Kristen Gonzalez, a Democrat who sponsored the bill, called the law an important step in setting up some guardrails in how the emerging technology is used in state government.
Experts have long been calling for more regulation of generative AI as the technology becomes more widespread.
Some of the biggest concerns raised by critics, apart from job security, include the security of personal information and the risk that AI could amplify misinformation, given its propensity to invent facts, repeat false statements and generate near photo-realistic images from prompts.
Several other states have implemented laws regulating AI, or are poised to. In May, Colorado enacted the Colorado AI Act, which requires developers to avoid bias and discrimination in high-risk AI systems that make substantial decisions; it takes effect in 2026. Numerous AI bills signed into law in California in September will also take effect in the new year, including one requiring large online platforms to identify and block deceptive content related to elections, and another requiring developers to disclose the data sets used to train their systems.
Canada has no federal regulatory framework for AI, although the proposed Artificial Intelligence and Data Act (AIDA) has been packaged with Bill C-27. The bill remains under consideration, with no timeline for whether it will become law. Earlier this fall, the federal government also announced the launch of the Canadian Artificial Intelligence Safety Institute, which is intended to advance research on AI safety and responsible development.
Alberta is developing its own regulations on artificial intelligence, the province's privacy commissioner said in March, with a particular focus on privacy issues such as deepfakes.