Summary#
This bill creates new rules to protect minors online and updates two existing laws. Part 1 sets a duty of care for platforms used by minors and requires safety settings, parental controls, clear disclosures, and regular transparency reports. Part 2 streamlines mandatory reporting of online child sexual abuse and exploitation material and extends data preservation. Part 3 adds new Criminal Code offences and tools against deepfake sexual images and online harassment.
- Platforms must default to the highest safety settings for children under 16 and offer parental controls for all minors under 18 (Part 1, ss.5–6).
- Parents and children can opt out of personalized recommendation systems and choose a chronological feed (Part 1, s.5(1)(d)).
- Platforms cannot use “dark patterns” to weaken safety settings or require a digital ID to access services (Part 1, s.9).
- A single law enforcement body will receive mandatory ISP notices; related data must be kept for 1 year (Part 2, ss.3–4).
- Creating or sharing a false intimate image (deepfake) without consent becomes a crime, with penalties up to 14 years in prison in aggravated cases (Part 3, s.162.1(1.1)).
- Staged start: most platform safety duties begin 18 months after Royal Assent; reporting, audits, and private lawsuits start 24 months after Royal Assent (Part 1, Coming into Force).
What it means for you#
Households
- Default safety settings for children under 16. These include contact controls, limiting who can see a child’s personal data, reducing autoplay/notifications/rewards that encourage extended use, opting out of recommendation systems, and restricting geolocation sharing with alerts when tracking occurs (Part 1, s.5(1)–(2)).
- Parental controls for all minors under 18. Parents can manage privacy and account settings, see time-spent metrics, and block purchases. These controls are default-on for children under 16, though parents can opt out. Minors must be notified when controls are active, and parents must be notified if a child turns off the default settings (Part 1, s.6(1)–(5)).
- Ability to delete a child’s account and personal data, and set time limits (Part 1, s.5(4)).
- A reporting channel on every platform to flag harms to minors, with a required internal response process (Part 1, s.8).
- Clear labels for ads, including disclosures when content is an ad or paid endorsement; if minors are targeted, platforms must explain why and how data was used (Part 1, s.11).
- Right to sue a platform for serious harm caused by a breach of the duty of care, within 3 years of becoming aware of the breach. Remedies include damages, injunctions, and costs (Part 1, s.15).
- Criminal law changes: sharing intimate images or false intimate images without consent becomes a criminal offence; victims can obtain restitution for removal costs (Part 3, ss.162.1, 738(1)(e)).
Service users (minors)
- Option to see content in chronological order if opting out of personalized recommendations (Part 1, s.5(1)(d)(i)).
- Protection from platform features that promote addiction-like behaviours (Part 1, s.4(1)(f), s.5(1)(c)).
- Protection from ads for products that are unlawful for minors (alcohol, cannabis, tobacco, gambling, pornography) (Part 1, s.9(1)).
Businesses (platforms, apps, game services, social media)
- Duty of care to design and operate services to prevent or reduce specified harms to minors, including sexual exploitation, self-harm, addiction-like use patterns, and predatory marketing (Part 1, s.4(1)).
- Provide default-high safety settings for children, parental controls for minors, and age verification methods that are reliable and privacy-preserving when restricting child access to inappropriate content (Part 1, ss.5–6).
- Ban on using interface “dark patterns” to weaken safety settings/parental controls, and ban on requiring/requesting a digital identifier to access services (Part 1, s.9(2)–(3)).
- Disclose policies, data practices, recommendation system logic and user options, and ad labels, including reasons for targeting minors (Part 1, ss.10–11).
- Maintain audit logs and records; commission an independent review every 2 years; publish findings; and issue an annual risk-and-impact report with usage data for minors, breaches, design risks, and mitigation steps (Part 1, s.12).
- Potential fines up to CAD $25,000,000 (on indictment) or $20,000,000 (on summary conviction) for breaches of core safety duties; up to $10,000,000 for violations of the disclosure, advertising, or transparency requirements or related regulations; due diligence is a defence (Part 1, s.14).
- Staged timing: core duties, safety settings, parental controls, and disclosures take effect 18 months after Royal Assent; transparency audits/reports, offences, and private lawsuits take effect 24 months after Royal Assent (Part 1, Coming into Force).
- The CRTC will publish guidelines on how to conduct market and product research involving minors (Part 1, s.13).
ISPs, hosting providers, and communication services (including email)
- Expanded definition of “Internet service” covers access, hosting, and interpersonal communication services (Part 2, s.1).
- Must notify a designated law enforcement body if their service is used for child sexual abuse and exploitation material; include transmission data when content is manifestly such material (Part 2, ss.3–3.1).
- Must preserve related computer data for 1 year, then destroy data not kept in the ordinary course unless ordered otherwise (Part 2, s.4).
- The limitation period to prosecute offences under the Act is extended to 5 years (Part 2, s.6).
Law enforcement and courts
- A single designated body will receive all ISP notifications and must file annual reports to the Ministers of Justice and Public Safety (Part 2, s.7).
- New Criminal Code offence for publishing or distributing a false intimate image (deepfake) without consent, with sentencing tiers up to 14 years in aggravated cases (Part 3, s.162.1(1.1)).
- Online criminal harassment is defined to include repeated communication via the internet or social media; using anonymity or a false identity is an aggravating factor at sentencing (Part 3, s.264(2)(b.1), s.264(4)(c)).
- Courts may issue production orders to identify anonymous online harassers, subject to conditions (Part 3, s.810(2.1)).
- Courts may prohibit internet use as part of sentencing for intimate image offences and order deletion/forfeiture of false intimate images (Part 3, ss.162.2(1), 164, 164.1).
Timing
- Part 1: most platform duties take effect 18 months after Royal Assent; transparency audits, offences, and the private right of action take effect 24 months after Royal Assent (Part 1, Coming into Force).
- Parts 2 and 3: In force on Royal Assent, or when the 2024 child sexual abuse/exploitation amendments come into force if later (Parts 2–3, Coming into Force).
Expenses#
Estimated net cost: Data unavailable.
- Fiscal note: Data unavailable.
- Explicit appropriations in the bill: None. The bill creates duties, offences, and reporting requirements but does not authorize new spending.
- Potential public-sector costs:
- Designation and operation of a single law enforcement notification body and its annual reporting (Part 2, s.12(d.1)–(d.2)). Data unavailable.
- Court and prosecution workload for new/expanded offences and recognizance proceedings (Part 3). Data unavailable.
- Potential private-sector compliance costs (mandates on operators):
- Build and maintain safety settings and parental controls; age verification that preserves privacy; reporting channels; audit logs; biennial independent reviews; annual systemic risk reports; ad labeling and disclosures (Part 1, ss.5–6, 8, 10–12). Data unavailable.
- Penalties and restitution (possible revenues/offsets):
- Fines up to CAD $25,000,000 (indictment) or $20,000,000 (summary conviction) for core safety violations; up to $10,000,000 for disclosure, advertising, or transparency breaches (Part 1, s.14).
- Restitution to victims for costs to remove intimate or false intimate images (Part 3, s.738(1)(e)). Amounts case-specific.
Item | Amount | Frequency | Source
--- | --- | --- | ---
Fines for core safety violations | Up to $25,000,000 (indictment); up to $20,000,000 (summary conviction) | Per offence | (Part 1, s.14)
Fines for disclosure/advertising/transparency breaches | Up to $10,000,000 | Per offence | (Part 1, s.14)
Data preservation (ISPs/hosts) | 1 year | Per notification | (Part 2, s.4)
Restitution for image removal | Case-specific | Per case | (Part 3, s.738(1)(e))
Proponents' View#
- The bill sets a clear duty of care for platforms to reduce risks like sexual exploitation, self-harm, and addiction-like use, moving beyond voluntary policies (Part 1, s.4(1)).
- Parents and children get practical tools by default, including contact controls, time limits, geolocation restrictions, and a right to opt out of recommender systems in favour of a chronological feed (Part 1, s.5(1)).
- Prohibiting dark patterns protects user choice and prevents nudging minors to weaken protections (Part 1, s.9(2)).
- Mandatory, centralized reporting and 1‑year data preservation help investigations of child sexual abuse and exploitation material, while keeping privacy law obligations intact (Part 2, ss.3–4, 9).
- Criminalizing non-consensual deepfake sexual images with strong penalties fills a legal gap and deters harm; victims can recover removal costs (Part 3, s.162.1(1.1), s.738(1)(e)).
- Regular independent reviews and annual risk reports increase transparency and accountability to families and regulators (Part 1, s.12).
Opponents' View#
- Age verification must be “reliable” yet “preserve privacy,” but the bill sets no technical standard; this may push intrusive checks or uneven practices and raise data protection risks (Part 1, s.5(3)(a)).
- The ban on requiring a digital identifier may block privacy‑preserving credentials used to prove age, complicating compliance and access for users (Part 1, s.9(3)).
- Broad terms like “addiction-like behaviours” and “predatory or deceptive marketing practices” could create uncertainty and over‑removal to avoid liability (Part 1, s.4(1)(f), (h)).
- Small and mid‑size services may face high compliance costs for audits, logs, reports, and new controls, under threat of fines up to $25,000,000, increasing barriers to entry (Part 1, ss.5–6, 10–12, 14).
- Retaining data for 1 year after reports may increase privacy and security risks for users, despite the bill’s privacy clarification (Part 2, s.4, s.9).
- New offences and recognizance powers tied to online conduct, including the possibility of long or indefinite no‑contact conditions, may raise civil liberties concerns and require careful judicial oversight to avoid overbreadth (Part 3, ss.264, 810(3.001)–(3.003)).