Using artificial intelligence in the workplace promises increased efficiency, fewer errors, improved communication, and lower costs.
Employers, for instance, may be able to use AI-powered chatbots to streamline training, provide accurate 24/7 support, reduce work hours required for administrative tasks, and assist HR professionals in screening job applicants.
But there is peril, too: AI usage can entrench impermissible bias and inequality, lead to job displacement, create a need for upskilling, and enable employees to misuse AI tools in ways that inadvertently disclose confidential business or personal information.
In response, federal agencies have issued guidance, and state legislatures have collectively introduced over two hundred bills affecting AI tools. While many of the employment-related bills focus on impermissible discrimination, especially in hiring practices, employers should also be mindful that some bills address privacy and data protection, wage and hour issues, automation, job displacement, upskilling, and data transparency. Many of these efforts span more than one of these categories.
Discriminatory Outcomes Pose a Major Risk in AI Deployment
Inputs and models may embed assumptions and rely on datasets that disparately impact certain protected classes or produce biased outcomes in ways that are not easily discernible without specific attention from development through operation.
Reflecting these concerns, in April 2023, the EEOC, DOJ Civil Rights Division, FTC, and CFPB issued a joint statement titled Enforcement Efforts Against Discrimination and Bias in Automated Systems. The statement outlines the agencies’ respective enforcement authority under existing laws as they apply to AI systems, noting that AI may contribute to unlawful discrimination and other legal violations due to a range of problems, including historical bias in data and datasets, lack of transparency into the workings of AI tools, and design flaws or inconsistencies.
Similarly, the DOJ Civil Rights Division issued guidance titled Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring, which cautions employers to consider how implementing these technological tools may impact disabled applicants or employees. It emphasizes that these tools should assess skills, not disabilities, and reminds employers to ensure that algorithmic hiring tools do not discriminate against or exclude individuals with disabilities.
Other Legislative Responses
State governments have likewise responded by proposing or enacting legislation, though efforts in Maryland have largely failed to date.
For instance, H.B. 1255, which would have generally prohibited employers from using automated employment decision tools for certain decisions and required disclosure of tool usage to applicants, did not pass in the last session of the General Assembly.
However, Maryland employers should be aware that other bills, while not directly addressing AI tools, may nonetheless be triggered by AI tool use. For instance, H.B. 1202, which became law in 2020, restricts employers' use of certain biometric information in the employment application process.
It also prohibits employers from using a facial recognition service during an interview to create a "machine-interpretable pattern of facial features that is extracted from one or more images of an individual by a facial recognition service" unless the applicant consents. While the law does not mention AI, many facial recognition tools rely on AI processes.
Looking to neighboring jurisdictions, currently pending before the Council of the District of Columbia is B 114, the Stop Discrimination by Algorithms Act of 2023. It would prohibit for-profit and nonprofit organizations from using algorithmic decision-making tools in a manner that discriminates on the basis of protected traits, require corresponding notices to individuals whose personal information is used, and provide for civil enforcement. Violations carry civil penalties of up to $10,000.
Pending in Pennsylvania is H 1729, which would amend the Pennsylvania Human Relations Act to require employers to notify job applicants of their use of employment decision tools, obtain consent for their use, and ensure that any tools used have undergone a bias audit within the past year.
Outside of the Mid-Atlantic, other state legislatures have seen more success. Illinois passed H.B. 3773, the AI Bias in Hiring Law, which amends the Illinois Human Rights Act to make it a civil rights violation for an employer to use AI that results in discrimination based on protected classes, or to use zip codes as a proxy for protected classes, in recruitment, hiring, promotion, training, apprenticeship, discharge, discipline, tenure, or other terms of employment.
Colorado likewise passed S.B. 24-205, which requires developers and deployers of "high-risk" AI systems "to use reasonable care to avoid algorithmic discrimination[.]" The law creates a rebuttable presumption that a developer or deployer used reasonable care if they complied with specified provisions of the law.
Illinois and Colorado are exceptions, however; most states' bills on algorithmic discrimination remain pending or have failed. New Jersey's A 3855 and California's A 2930, for instance, remain pending, while Hawaii's H 1607, Washington's H 1951, and Georgia's HB 890 failed.
Other concerns, such as AI's potential to displace workers, have also prompted state legislation. For instance, Pennsylvania's HR 496 urges Congress to protect creative workers against displacement by extending intellectual property protection only to works created in the majority by natural persons. New York's A 7838 would direct the New York State Department of Labor to study the long-term impact of AI on the state workforce and would prohibit state entities from using AI in a way that would displace natural persons from their employment until the department's final report is received. New York's A 8179 would impose a corporate tax on businesses whose workers are displaced by certain technologies.
Privacy and data concerns have also prompted proposals. New York's pending S 7623 would outlaw electronic monitoring of employees unless the monitoring is used primarily to accomplish one of the bill's specified purposes, such as ensuring the performance of an essential job function or measuring worker performance. Rhode Island's H 6286 would empower the attorney general to adopt and enforce rules and regulations concerning generative AI models (e.g., ChatGPT) to protect the public's safety, privacy, and intellectual property rights.
The benefits of integrating artificial intelligence into the workplace come with complex challenges that demand careful attention from employers, legislators, and regulators alike.
For more information on artificial intelligence and the workplace, you can reach Matthew at mhtranter@lerchearly.com.