Public Sector AI and Cybersecurity Rules

Full Title:
The Public Sector Artificial Intelligence and Cybersecurity Governance Act

Summary

This Manitoba law sets rules for how public bodies use artificial intelligence (AI) and protect their computer systems. Its goal is to make AI use more transparent and fair, and to raise cybersecurity standards across the public sector. The law has been enacted but comes into force on a date the government sets.

  • Applies to selected public sector bodies (the province, agencies, municipalities, and others) to be named by regulation.
  • Covers AI tools they build, buy, use publicly, or have made for them by others.
  • Requires public information about AI use, accountability plans, risk and bias checks, and human oversight in certain cases.
  • Lets the government set technical standards and, in some cases, ban certain AI uses (for example, creating artistic or creative content).
  • Sets cybersecurity requirements, including programs, training, incident response, and incident reporting.
  • Allows the minister to issue public directives on cybersecurity to specific entities.
  • Requires public consultation before most regulations and a review of each regulation within three years.
  • States that government decisions still stand even if a body fails to follow this law’s rules.

What it means for you

  • Residents using public services

    • You may see clearer notices when an office uses AI in a service or decision.
    • You can get more information about how AI was used and how to ask questions.
    • A person must oversee AI use in certain higher‑risk situations.
    • There should be steps to detect and reduce unfair bias in AI decisions.
    • Stronger cybersecurity standards aim to better protect your data and reduce service outages caused by cyberattacks.
  • Public servants and managers

    • You will need to follow new AI policies, document how AI is used, and complete risk and impact assessments.
    • You may have to test for bias and track AI performance over time.
    • Training and clear roles for cybersecurity, incident response, and recovery will be required.
    • You may need to report certain cybersecurity incidents to the minister or a designated official.
    • Purchasing AI and cybersecurity tools will follow new procurement rules.
  • Municipalities and local governments

    • If prescribed, you must meet set cybersecurity standards and may need a formal cybersecurity program.
    • You may receive entity‑specific directives and must make them public.
    • Compliance could mean new tools, training, and reporting duties.
  • Vendors and contractors

    • Contracts may require meeting AI and cybersecurity standards, risk and impact assessments, and bias testing.
    • You may need to provide documentation and support oversight by client public bodies.

Expenses

No publicly available information.

Proponents' View

  • Builds public trust by requiring clear information about when and how AI is used.
  • Reduces discrimination by calling for bias detection and mitigation in AI systems.
  • Keeps humans in the loop for higher‑risk AI uses, adding a safety check.
  • Strengthens protection of personal data and services through formal cybersecurity programs and standards.
  • Offers flexibility to keep up with fast‑changing technology by allowing technical standards and codes to be adopted and updated.
  • Requires public consultation and regular reviews of regulations, adding accountability and transparency.
  • Lets the minister respond quickly to urgent cyber risks with directives targeted to the affected entity.

Opponents' View

  • The law's scope is uncertain because many details are left to future regulations, creating confusion for agencies and vendors in the meantime.
  • Compliance could be costly and burdensome, especially for smaller municipalities with limited staff and budgets.
  • The minister can issue directives to single entities, which critics may view as uneven or inconsistent treatment.
  • Decisions remain valid even if an entity fails to follow the rules, which may weaken accountability for mistakes.
  • Potential bans or limits on AI uses (including creative tasks) could reduce efficiency or stifle innovation in government.
  • Rules about what information must be shared include exemptions, so transparency may vary and leave gaps for the public.