Bill C-27, introduced in June 2022 by the Minister of Innovation, Science and Industry, updates Canada's federal private-sector privacy framework and includes a new law on artificial intelligence. If passed, the Artificial Intelligence and Data Act (AIDA) would be the first piece of legislation in Canada to regulate the use of AI systems.
AIDA's stated purpose is twofold: to establish common requirements across Canada for the design, development, and deployment of artificial intelligence systems, consistent with national and international standards, and to prohibit certain conduct in relation to AI systems that may result in serious harm to individuals or to their interests, in each case in a manner that upholds Canadian norms and values in line with international human rights principles. The Act also addresses a range of practices for handling personally identifiable information (PII), including data anonymization and data mapping.
Although AIDA's basic approach is clear, the law's full impact won't be known until the accompanying regulations are issued, since they will set out most of the detailed requirements.
In this guide, we’ll break down what business leaders need to know about the pending law, its potential impact on their businesses, and how Data Sentinel can help. While the bill has not yet passed, businesses should start preparing now so they are ready to comply if and when it becomes Canadian law.
What is Canada’s Artificial Intelligence and Data Act?
For those who followed Bill C-11 (2020), Bill C-27 reintroduces the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Protection Tribunal Act (PIDPTA). The Artificial Intelligence and Data Act, the third piece of legislation in the bill, is where most of C-27's novelty lies.
Bill C-27 would replace the Personal Information Protection and Electronic Documents Act (PIPEDA) with a more modern and robust privacy and data protection framework for Canada. This guide focuses on the main differences between the proposed law and the current federal private-sector privacy framework governed by PIPEDA.
AIDA applies to private-sector organizations that design, develop, or make available for use artificial intelligence systems in the course of international or interprovincial trade and commerce, with the details to be set out in regulations. Under the Act, an artificial intelligence system is a technological system that, autonomously or partly autonomously, processes data related to human activities using an algorithm or similar technique in order to generate content or make decisions, recommendations, or predictions.
Bill C-27 essentially sets out a few broad requirements.
Risk Reduction
Those responsible for a high-impact system must assess it and put measures in place to identify, assess, and mitigate the risks of harm or biased output that could result from its use.
Monitoring
Those responsible for high-impact systems must establish measures to monitor compliance with the risk mitigation measures and their effectiveness.
Transparency
Anyone who makes a high-impact system available for use, or who manages its operation, must publish a plain-language description of the system's intended or actual use on a publicly accessible website. Organizations must also make public information about the types of content the system generates, the decisions, recommendations, or predictions it makes, the mitigation measures put in place to reduce the risk of harm or biased output from its use, and any other details required by regulation.
Anonymized Data
Anyone who carries out a regulated activity and who, in the course of that activity, processes or makes anonymized data available for use must establish measures, in accordance with the regulations, governing how data is anonymized and how anonymized data is used or managed.
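By way of illustration only (AIDA and its regulations do not prescribe any particular technique), the sketch below shows one common building block of such a procedure: dropping free-text names and replacing direct identifiers with salted hashes. The field names and the ANON_SALT value are hypothetical, and salted hashing on its own is really pseudonymization; a full anonymization program would typically add steps such as generalization, aggregation, or suppression.

    import hashlib

    # Hypothetical salt; in practice this would be managed as a secret.
    ANON_SALT = b"example-salt"

    def pseudonymize(value: str) -> str:
        """Replace a direct identifier with a salted SHA-256 digest."""
        return hashlib.sha256(ANON_SALT + value.encode("utf-8")).hexdigest()

    def anonymize_record(record: dict) -> dict:
        """Drop free-text names and hash contact identifiers in a single record."""
        cleaned = dict(record)
        cleaned.pop("name", None)  # remove the direct identifier entirely
        if "email" in cleaned:
            cleaned["email"] = pseudonymize(cleaned["email"])
        return cleaned

    print(anonymize_record({"name": "Jane Doe", "email": "jane@example.com", "age": 41}))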
Records
Anyone who carries out a regulated activity must keep records, in accordance with prescribed record-keeping requirements, documenting how these obligations are met.
Notification
If the use of a high-impact system results, or is likely to result, in material harm, the person responsible for the system must notify the Minister.
What is the Scope of This Act?
AIDA applies to regulated activities carried out in the course of interprovincial or international trade and commerce. The wording suggests the federal government intends to leave purely intraprovincial uses of AI to the provinces, since the legislation is framed under the federal Parliament's trade and commerce power in section 91(2) of the Constitution Act, 1867. Unlike the CPPA and its predecessor PIPEDA, AIDA does not expressly extend to intraprovincial activity in the absence of substantially similar provincial legislation. That said, given how rarely an AI system would be built and deployed solely for use within a single province, the federal government likely intends AIDA to cover most AI development and use in Canada.
Because the Act's scope includes international trade, foreign companies that do business in Canada, or that provide services to Canadians involving AI systems as defined by AIDA, should consider themselves on notice of their obligations under the Act.
The term "regulated activity" is used broadly and appears to be meant to include the majority, if not all, of AI development and application. For the purposes of creating, developing, or utilizing an artificial intelligence system, processing or making accessible for use any data pertaining to human behaviors would be regarded as regulated activity. Managing an artificial intelligence system's operations as well as building, producing, or making it available for use is included in the definition of regulated activity.
Finally, AIDA provides that a person is responsible for an AI system if they design, develop, make it available for use, or manage its operation in the course of international or interprovincial trade and commerce. This matters because it means AI system designers and suppliers, not just those who deploy the systems, must comply with AIDA's requirements, which include a number of administrative and operational obligations.
How Will the Act Be Administered and Enforced?
In addition to regulation-making powers, the Minister may delegate authority and designate a senior departmental official as the Artificial Intelligence and Data Commissioner. Orders made by the Minister or the Minister's delegate may be enforced as orders of the Federal Court.
AIDA proposes to establish an administrative monetary penalty regime by regulation, and the newly created Artificial Intelligence and Data Commissioner may be given direct authority to impose those penalties. For most AIDA violations, the maximum fine is CAD $10,000,000 or, if higher, 3% of the organization's gross global revenues in the preceding fiscal year. For the most serious offences, the maximum fine is CAD $25,000,000 or, if higher, 5% of gross global revenues in the preceding fiscal year.
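As a quick, hypothetical illustration of how the "whichever is higher" cap works (the revenue figure below is invented), the maximum penalty is simply the larger of the flat amount and the revenue-based percentage:

    def max_penalty(global_revenue: float, flat_cap: float, revenue_pct: float) -> float:
        """Statutory maximum: the flat cap or the revenue-based amount, whichever is higher."""
        return max(flat_cap, revenue_pct * global_revenue)

    # Hypothetical organization with CAD $600M in gross global revenue last fiscal year.
    revenue = 600_000_000
    print(max_penalty(revenue, 10_000_000, 0.03))  # most violations: 3% of $600M = $18M, above the $10M floor
    print(max_penalty(revenue, 25_000_000, 0.05))  # most serious offences: 5% of $600M = $30M, above the $25M floor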
AIDA gives the Minister statutory authority to designate a senior official as the Artificial Intelligence and Data Commissioner, whose role is to assist the Minister in administering and enforcing the Act.
What is Required of Business Leaders to Stay Compliant With Bill C-27?
Keep in mind that Bill C-27 has not yet passed, though it likely will. With the help of a data management platform like Data Sentinel, there are a few things business leaders will need to do to become compliant once the law is officially in force.
If the use of a high-impact system results, or is likely to result, in material harm, the person responsible for the system must notify the Minister as soon as feasible and in accordance with the regulations. As defined in the Act, harm includes physical or psychological harm to an individual, damage to an individual's property, and economic loss to an individual.
Although those who set out to develop and deploy harmful technologies would technically be subject to the reporting obligation, such actors are unlikely to self-report. The more likely target of the reporting requirement is those who discover that their AI system has an unanticipated actual or potential harmful effect.
The inclusion of this clause may have been inspired by the reporting requirements in the EU's proposed AI Regulation, which apply primarily to providers of high-risk systems and require reporting to market surveillance authorities. Because only those who place systems on the market or put them into service in the EU qualify as providers, the EU reporting obligation is essentially an extension of the product liability and safety framework. AIDA's reporting requirement may similarly be intended to supplement or extend the Canadian product liability framework, but because AIDA defines "person responsible for an AI system" so broadly, designers and developers would be held to the same standards as those who actually make the systems available for use.
Could This New Bill Harm My Business?
If you are already focused on compliance and good data management practices, you should be fine. Organizations that are careless with consumer data, however, could face fines of up to CAD $25,000,000. The bill aims to modernize the Personal Information Protection and Electronic Documents Act, part of Canada's federal privacy legislation, and, among other things, to introduce new rules for artificial intelligence, along with new obligations and nuances for businesses.
When it comes down to it, this bill establishes a number of new requirements. It also reflects a growing push, both in Canada and abroad, to take data protection seriously, which suggests businesses should begin reviewing their own data policies and processes now.
Consent requirements are currently one of the key areas of concern for employers. Employers must complete privacy impact assessments and provide copies to the Commissioner. Penalties, and in particular the consequences of ignoring or breaching the law, are another concern for HR officials.
When recommending a penalty to the new Tribunal, the Commissioner must take into account the factors set out in the bill. As noted above, the Tribunal would have the authority to impose an administrative monetary penalty of up to $10 million or 3% of gross global revenue, whichever is higher. That said, we will have to wait and see what sanctions are actually levied on organizations. An organization that commits an indictable offence could be fined up to $25 million or 5% of its gross global revenue, whichever is higher.
Many employers are anxious about the bill's consequences simply because it is new and they are unsure what to expect. Ultimately, though, it continues a growing trend toward stronger privacy protection for everyone.
How Data Sentinel Can Help
Data Sentinel automates the process of evaluating your AI models for bias and generates reports on each evaluated model to support effective risk and compliance management.
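To make the idea of a bias evaluation concrete, here is a minimal sketch of one widely used fairness check, the demographic parity difference, computed on invented model outputs. It illustrates the kind of measurement such an evaluation involves; it is not a description of Data Sentinel's actual methodology, and every name and number below is hypothetical.

    from collections import defaultdict

    def demographic_parity_difference(predictions, groups):
        """Gap between the highest and lowest positive-prediction rates across groups."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += int(pred == 1)
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical binary predictions (1 = favourable outcome) and group labels.
    preds = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    gap, rates = demographic_parity_difference(preds, groups)
    print(rates)  # {'A': 0.8, 'B': 0.4}
    print(gap)    # 0.4 -- a large gap would flag the model for closer review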
Get in touch with our team today to learn more about how we can protect your business from poor compliance and data handling practices.