
Privacy and Surveillance

  1. International Law
    1. Right to privacy and the internet
    2. Interference with privacy
    3. ‘Unlawful’ and ‘Arbitrary’ – Qualified rights
  2. Privacy Protection in the European Union
    1. General Data Protection Regulation
  3. Privacy Protection in the United States of America
    1. Constitution
    2. Federal legislation
    3. State laws
    4. Case law
  4. Privacy Protection in Australia
    1. Constitution
    2. Common Law
    3. Privacy Act 1988 (Cth) and the Australian Privacy Principles
  5. The Privacy Act
  6. The Australian Privacy Principles
  7. Cyber Security and Data Breaches
    1. Overview
    2. Mandatory Data Breach Requirements (Federal)
    3. Mandatory Notification of Data Breach Scheme (NSW)
    4. Major Australian Data Breaches
    5. Cyber Security and Data Protection
    6. The Cyber Security Strategy and Australian Security Legislation
    7. Potential 2024 reforms to the Privacy Act
  8. Government Surveillance
    1. Telecommunications (Interception and Access) Act 1979 (Cth)
    2. Telecommunications Act 1997
    3. Exceptions to the Privacy Act
    4. Data Retention
    5. International surveillance laws: USA PATRIOT Act
  9. Privacy-enhancing technology
    1. Cryptography
    2. Regulation
    3. Case studies
    4. Children’s online privacy and sharenting
  10. The Right to be Forgotten
    1. What is the Right to be Forgotten?
    2. The Right to be Forgotten in Europe
    3. Australia and the Right to be Forgotten
  11. The SPAM Act
  12. Privacy Protection in India
    1. Constitution
    2. Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules 2011
  13. The Digital Afterlife
    1. AI and the Digital Afterlife Industry
    2. Post-mortem Privacy Protection
    3. Social Media and Digital Remembrance
    4. Digital Assets and the Law
    5. Digital Assets and Succession
  14. Digital Products and Consumer Rights
    1. What is a Digital Product?
    2. Terms and Conditions
    3. Consumer Law and Digital Products

Rita Matulionyte Explains How Online Technologies Affect Our Privacy

International Law

Article 12, Universal Declaration of Human Rights (UDHR) (1948)

‘No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attack.’

The UDHR was adopted by the UN General Assembly as Resolution 217 on 10 December 1948. Of the 58 members of the United Nations at the time, 48 voted in favour and 8 abstained; Honduras and Yemen did not vote. The historical vote on adoption does not affect the application of the UDHR to member states that joined the United Nations later.

The UDHR is not a treaty and therefore does not itself create legal obligations for countries. It is, however, an expression of fundamental values shared by all members of the international community, and has arguably become binding as part of customary international law.

Article 17, International Covenant on Civil and Political Rights (ICCPR)

‘(1) No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation’

‘(2) Everyone has the right to the protection of the law against such interference or attacks’

There are a total of 174 parties to the ICCPR.

Article 16, Convention on the Rights of the Child

‘(1) No child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence, nor to unlawful attacks on his or her honour and reputation.’

‘(2) The child has the right to the protection of the law against such interference or attacks.’

Under Article 1 of the Convention, a child is defined as any human being below the age of 18 years.

Article 14, International Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families

‘No migrant worker or member of his or her family shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home, correspondence or other communications, or to unlawful attacks on his or her honour and reputation. Each migrant worker and member of his or her family shall have the right to the protection of the law against such interference or attacks.’

Treaty No. 108, Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data

This treaty has been open for signature by member states of the Council of Europe, and for accession by non-member states, since 28 January 1981; a total of 57 states have become party to it. In summary, it protects individuals against abuses arising from the collection and processing of personal data, in order to secure their rights and fundamental freedoms, in particular the right to privacy. It obliges parties to take appropriate security measures to prevent accidental or unauthorised access to personal data. It also enshrines the data subject’s right to know what personal data is held about them.

OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data

Although not binding, these Guidelines call on all OECD member countries to uphold human rights and to prevent unjustified interruptions to international flows of data. They represent a consensus on basic principles that can be incorporated into existing national legislation, or serve as a basis for legislation in countries that do not yet have any.

Eight principles govern the protection of privacy and transborder flows of personal data: the collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation and accountability principles.

Right to privacy and the internet

The right to privacy is not confined to the physical world. At its sixty-eighth session, the United Nations (UN) General Assembly adopted Resolution 68/167 on the right to privacy in the digital age. It recognised the increasing global use of the Internet and the rapid advancement of information and communications technologies, and emphasised that the right to privacy extends to the digital world.

While the gathering of an individual’s sensitive information may be necessary for the purposes of national and public security, it must be done in compliance with the state’s obligations under international human rights law. The UN therefore called upon states to review their legislation and practices relating to communications surveillance and the collection of personal data, so as to protect the individual’s right to privacy, including in digital communications.

Interference with privacy

Under the UDHR and ICCPR, the right to privacy is framed in terms of freedom from ‘interference’. In essence, the integrity and confidentiality of correspondence should be guaranteed de jure and de facto, without interception and without being opened or read. Any capture of communications data may potentially amount to ‘interference’. As the Office of the UN High Commissioner for Human Rights has suggested, the mass surveillance programmes adopted by many states would therefore already amount to ‘interference’, and it falls on the state to prove that such interception is neither arbitrary nor unlawful.

‘Unlawful’ and ‘Arbitrary’ – Qualified rights

The right to privacy under both the UDHR and the ICCPR is not an absolute right. It may be restricted or limited so long as the restriction is not ‘unlawful’ or ‘arbitrary’. Member states may therefore implement laws that specifically authorise such limitations, but their discretion is not unfettered: the implementing laws must not contravene the provisions of the ICCPR, and must be ‘reasonable in the particular circumstances’.

In determining the reasonableness of such a limitation, reference may be made to the Siracusa Principles and to case law. In short, these emphasise the principles of legality, necessity and proportionality: the law must be readily accessible and clear, and the limitation must be necessary and the least intrusive option available to pursue the legitimate aim.

Privacy Protection in the European Union

Enshrined in Art 8(1) of the Charter of Fundamental Rights of the European Union and Art 16(1) of the Treaty on the Functioning of the European Union, data protection is recognised as a fundamental right in the European Union (EU). To facilitate trade and digital activity between member states, the General Data Protection Regulation (GDPR) was adopted in 2016 and came into force in May 2018, replacing the previous Data Protection Directive. It provides more comprehensive coverage and enhanced rights and protections for individuals’ personal data.

General Data Protection Regulation

The GDPR formalises six legal bases for processing personal data under Art 6(1). These are:

  • Consent
  • Performance of contract
  • Compliance with legal obligations
  • Protection of vital interests of data subject
  • Performance of a task carried out in the public interest
  • Legitimate interests pursued by the controller or by a third party

As well as formalising these legal bases for personal data collection, the GDPR also formalises legal bases for the removal of personal data across the European Union, which includes the right to erasure.

Of the six legal bases for data processing, consent is the most commonly relied upon, since it can apply to almost every situation; the other five require the controller to meet a rigorous situational threshold.

Consent is valid only if it is freely given, specific, informed and unambiguous. As to the practical operation of consent, the Article 29 Working Party (WP 29) provided further clarification in its Guidelines on Consent. Although WP 29 was an advisory body that has since been replaced by the European Data Protection Board (EDPB) under the GDPR, its Guidelines continue to serve as interpretive guidance pursuant to Art 94(2) GDPR, the EDPB not having issued superseding guidelines. The Guidelines analyse the requirements under Art 4(11) GDPR and consider what constitutes valid consent in different situations, such as imbalance of power, bundled consent and performance of a contract.

Bundled consent refers to consent given via a written declaration that covers multiple data processing purposes. For example, a mobile application may ask for consent to collect data for GPS localisation in its service agreement, which may also contain a clause stating that the data will be transferred to third parties for advertising purposes. By signing the agreement, the data subject consents to a ‘bundle’ of processing purposes. Although not explicitly spelt out in the law itself, the prohibition on bundling is entrenched in the ‘freely given’ element, and bundled consent is therefore invalid under the GDPR.

In order to determine whether a situation renders consent not freely given, it is essential to identify the scope of the contract and whether the collection of data is necessary for its performance. If, by refusing the unnecessary data processing, the data subject would act to their own detriment because they would also be refusing the processing needed to perform the contract, such consent is not ‘freely given’.

Situation - Employment

A great deal of data processing arises in the employment context, whether in relation to job applications, promotion, dismissal or workplace monitoring systems. Given the imbalance of power, employees are unlikely to be able to respond freely to their employer’s request for consent, for fear of detrimental consequences if they refuse.

Consent is freely given only if there is a real choice, and no risk of deception, intimidation, coercion or significant negative consequences if the data subject does not consent. Given the inherent dominance of the employer in the employer–employee relationship, it is very unlikely that consent can be given free of pressure. Consent should therefore not be the legal basis for processing personal data in an employment context.

Nevertheless, the processing of personal data may still be legitimate under Art 6(1)(b) if the employer can show that the processing is necessary for the performance of the employment contract.

Situation - Granularity

Granularity refers to cases where personal data is collected for multiple purposes. For example, a service application form may incorporate both the terms and conditions for the provision of the data user’s services and statements relating to the use of the data collected for marketing products or services.

For multi-purpose collection, Art 7(2) and Recital 32 GDPR require consent to be given distinguishably. In essence, the data subject should be able to accept or reject each particular purpose, rather than having to consent to a bundle of processing purposes. A lack of granularity may invalidate the consent given, since it is not specific as required under Art 6(1)(a), a requirement closely linked to that of freely given consent.
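The granularity requirement can be pictured with a small data-model sketch: consent is recorded per purpose, so each purpose can be accepted or rejected independently, and a purpose never asked about yields no consent at all. This is an illustrative model only, not legal advice; the class, method and purpose names are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical model: one consent flag per processing purpose,
# rather than a single flag covering a "bundle" of purposes.
@dataclass
class ConsentRecord:
    purposes: dict[str, bool] = field(default_factory=dict)  # purpose -> granted?

    def grant(self, purpose: str) -> None:
        self.purposes[purpose] = True

    def refuse(self, purpose: str) -> None:
        self.purposes[purpose] = False

    def may_process(self, purpose: str) -> bool:
        # Consent must be specific: the absence of a recorded choice
        # for a purpose is treated as no consent at all.
        return self.purposes.get(purpose, False)

record = ConsentRecord()
record.grant("gps_localisation")   # needed for the service itself
record.refuse("third_party_ads")   # marketing purpose rejected separately

assert record.may_process("gps_localisation") is True
assert record.may_process("third_party_ads") is False
assert record.may_process("analytics") is False  # never asked -> no consent
```

A bundled design, by contrast, would store a single boolean covering every purpose at once, which is precisely the structure the ‘freely given’ and ‘specific’ elements rule out.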

Performance of a Contract

Performance of a contract forms a legal basis for processing personal data where the processing is necessary in the context of a contract, or of an intention to enter into a contract.

Performance of a task carried out in the public interest

This basis does not require a specific law for each individual processing operation. It is sufficient if the controller can demonstrate that the processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority.

Vital Interest of the data subject

As Recital 46 suggests, this basis should be relied on last: the other legal bases under Art 6 should be exhausted first.

Legitimate interest

Personal data may be processed where it is in the legitimate interests of the data controller or a third party, provided those interests are not overridden by the interests or fundamental rights and freedoms of the data subject. This assessment must take into account the reasonable expectations of data subjects based on their relationship with the controller.

Privacy Protection in the United States of America

Constitution

The right to privacy is not explicitly provided for in the United States Constitution. However, several constitutional amendments protect against invasions of privacy by state actors: the First Amendment’s right to free assembly, the Fourth Amendment’s protection against unreasonable searches and seizures, the Ninth Amendment’s recognition of rights retained by the people, and the Fourteenth Amendment’s guarantee of due process. Together, these Amendments broadly establish a constitutional basis for protection against invasion of personal privacy by state agencies.

Federal legislation

There is no particular federal legal framework that holistically provides for privacy regulation or data protection in the United States. Instead, several sector-specific federal laws focusing on different types of data represent an attempt at privacy protection legislation, including:

  • The Children’s Online Privacy Protection Act 1998 (effective 2000) recognises the safety and privacy risks the internet poses to children and seeks to protect information privacy for children below the age of 13. It severely restricts the types of information organisations may gather, distribute or use about children. Most notably, the Act expressly requires website operators not only to notify parents but also to acquire ‘verifiable parental consent’ prior to ‘collecting, using, or disclosing’ any personal data from children.

  • The Driver’s Privacy Protection Act 1994 protects personal information collected by state Departments of Motor Vehicles. The Act regulates the privacy, storage, use and disclosure of the personal information about drivers that these departments hold, with a view to safeguarding against misuse and data breaches.

  • The Video Privacy Protection Act 1988 protects the privacy of individuals’ video rental and viewing records, and has been applied to online video streaming services. It proscribes the unauthorised disclosure of the personal records of people using such services, stipulates limited exceptions under which such information may be disclosed, and establishes penalties for violations.

  • The Cable Communications Policy Act 1984 establishes and safeguards subscriber privacy. It stipulates the manner in which cable system operators may collect and use personal information, and prohibits them from collecting or using such data without first obtaining the express consent of subscribers. The Act also restricts cable systems to collecting only the data they require to offer their services effectively, and requires them to safeguard cable communications from unauthorised reception.

State laws

In Alaska, the Alaska Right of Privacy Amendment 1972 expressly recognises the right to privacy for the Alaskan people and sets out explicit provisions to safeguard the right and protect it from infringement.

In Montana, the Montana Constitution in Article 2, subsection 10 expressly entrenches the right of its residents to individual privacy. The provision further recognises the right to privacy as a key component in ensuring the welfare of a free society. It prohibits the infringement of this right with the exception being when compelling state interests can be demonstrated.

Other states, including Florida and Washington, provide for privacy protection through their respective constitutions and various privacy statutes. For instance, Article 1, subsection 23 of the Florida Constitution stipulates that every person is entitled to be let alone and to have their private life free from intrusion. The instrument does, however, provide for instances in which the government may intrude into a person’s private life, demonstrating that the right to privacy in the state is not absolute. Similarly, the Washington Constitution protects the privacy of its residents through Article 1, subsection 7, which prohibits the state from invading homes or disturbing the private affairs of residents without explicit legal authority. The provision safeguards the online communications and correspondence of private individuals against illegal access, searches and interception by state actors.

Case law

Across the United States, several court decisions have concerned privacy protection.

The first such decision was Pavesich v. New England Life Insurance Company (1905), in which the right to privacy was recognised based on constitutional values, common law and natural law.

The case Cohen v. Cowles Media Co. (1991) has also been cited as instrumental in establishing standards for intrusion upon seclusion and solitude, key elements in the protection of the right to privacy. On this approach, intrusion upon seclusion occurs when the perpetrator deliberately invades, whether electronically, physically or by other means, the private affairs, seclusion, solitude or private space of a person, where such intrusion would be highly offensive to any reasonable person. Three core considerations determine whether an intrusion has taken place:

  • the use of fraudulent, misleading, or deceptive tactics to gain access;
  • if intrusion or invitation to intrude occurred; and
  • whether privacy was expected.

The decision in New York Times Co. v. Sullivan (1964) established the ‘actual malice’ standard that has shaped the privacy tort of false light. For the tort of false light to be established, the misleading or untrue impression created about a person must be shown to have been made with actual malice. The decision has been instrumental in interpreting the right to privacy owed to non-public persons, promoting the protection not only of their personal privacy but also of their emotional and mental wellbeing in the event of a breach.

Privacy Protection in Australia

Australia does not have a clear law protecting personal privacy as such.

Constitution

Unlike the Constitutions of many other liberal democracies, the Australian Constitution does not contain a right to privacy. Australia does not have a comprehensive Bill of Rights, either as part of the Constitution or as federal legislation. The ACT and Victoria do have legislated bills of rights enforceable against the territory- and state-level public agencies.

At the international level, Australia is a signatory to the International Covenant on Civil and Political Rights (ICCPR), which protects the right to privacy, but Covenant rights are not directly enforceable in domestic Australian law. The Australian Government views the Privacy Act 1988 (Cth) as implementing the ICCPR’s right to privacy. However, this implementation does not include a strong human right to privacy capable of invalidating conflicting legislation, as is the case in many other jurisdictions that recognise the right to privacy in their constitutions or bills of rights.

Common Law

Various areas of law have evolved to protect aspects of an individual’s space and reputation, including copyright, defamation, trespass, nuisance and confidentiality.

Until the late nineteenth century, there was no formal legal notion of privacy in common law countries. In 1890, a seminal US article by Warren and Brandeis called for a ‘right to privacy’, conceptualised as a ‘right to be let alone’, to be established in law.

In Australia, there is speculation as to whether a right to privacy or a tort of invasion of privacy exists in common law.

An early case, Victoria Park Racing v Taylor (1937), seemed to suggest that no such common law right existed in Australia.

But in the 2000s there was significant development of the English common law of privacy, as a result of the Human Rights Act 1998 (UK) coming into force, which made rights under the European Convention on Human Rights (ECHR), including privacy and free expression, enforceable to some extent in domestic law. England has no separate tort of invasion of privacy, but during this period the courts ‘stretched’ the tort of breach of confidence to cover privacy breaches. Furthermore, in 2004 a common law tort of invasion of privacy was found to exist in New Zealand in Hosking v Runting.

A more recent Australian case, Lenah Game Meats, suggested that there could be a common law tort of invasion of privacy in Australian law. The High Court did not need to rule on that specific point given the facts of the case, but refused to rule out a more ‘suitable’ future case finding the existence of a privacy tort. The High Court suggested that a more ‘suitable’ scenario would involve a natural person rather than a legal person trying to establish the privacy tort.

So far, no such case has reached the High Court, but there have been various decisions on the issue in lower courts.

Privacy Act 1988 (Cth) and the Australian Privacy Principles

Rita Matulionyte Explains the Legal Protections for Privacy in Australia

The Privacy Act 1988 (Cth) protects information privacy - that is, it prescribes what ‘personal information’ organisations and federal government agencies can collect about Australians, how that information can be collected and how it must be stored, the circumstances in which the information can be used and disclosed, and what Australian citizens must be told about the information collected about them. Personal information includes things like name, address, phone number, occupation, and sensitive information like health information. Other, state-level information privacy legislation also exists, which usually applies to state government agencies e.g. Information Privacy Act 2009 (QLD).

Personal privacy in Australia is protected in a de facto way, through a myriad of laws that are not designed specifically to protect privacy but which may have that effect. For example, a person may be able to preserve the privacy of their home through trespass laws. Privacy of movement may be asserted against another individual who offends against stalking laws. Laws designed to protect reputation, such as defamation laws and passing off laws, may be used to protect a person’s privacy in some cases. Finally, there are laws which protect privacy in communications, such as breach of confidence laws and the Telecommunications (Interception and Access) Act 1979 (Cth).

The Privacy Act

Rita Matulionyte Provides an Introduction to the Privacy Act and Video Overview by Michael Thomson Explains the Role of the OAIC

The Privacy Act 1988 (Cth) contains 13 Australian Privacy Principles (APPs) in Schedule 1. These principles apply to “APP entities”.

An “APP entity” is defined in section 6 to mean a Commonwealth government agency or an organisation. Organisation, in turn, is defined in s. 6C to include individuals, but not small business operators. Small business operators are those businesses with an annual turnover of $3 million or less and which meet the other requirements set out in section 6D.

When considering the APPs, it is important to first identify whether you are dealing with personal information or sensitive information (or both). Sensitive information is defined in section 6 and includes health information.

If a person thinks that their privacy has been breached under the Act, they may complain to the Office of the Australian Information Commissioner (OAIC) under section 36. Section 40 gives the Commissioner the power to investigate the complaint, and under section 52, the Commissioner may make a determination that an APP entity has breached the privacy principles in the Act. The Commissioner may also order that the entity take steps to ensure that the breach is not repeated and to provide redress to the complainant. If an entity does not comply with the Commissioner’s declaration, then either the individual complainant or the Commissioner can apply to the Federal Court to have the declaration enforced under s.55A.

Sections 65 and 66 of the Privacy Act provide that entities must cooperate with a Commissioner’s investigation, and there are financial penalties imposed for the failure to do so.

The Australian Privacy Principles

Rita Matulionyte Explains the APPs

APP 1 — Open and transparent management of personal information

Ensures that APP entities manage personal information in an open and transparent way. This includes having a clearly expressed and up to date APP privacy policy.

APP 2 — Anonymity and pseudonymity

Requires APP entities to give individuals the option of not identifying themselves, or of using a pseudonym. Limited exceptions apply.

APP 3 — Collection of solicited personal information

Outlines when an APP entity can collect personal information that is solicited. It applies higher standards to the collection of ‘sensitive’ information.

APP 4 — Dealing with unsolicited personal information

Outlines how APP entities must deal with unsolicited personal information.

APP 5 — Notification of the collection of personal information

Outlines when and in what circumstances an APP entity that collects personal information must notify an individual of certain matters.

APP 6 — Use or disclosure of personal information

Outlines the circumstances in which an APP entity may use or disclose personal information that it holds.

APP 7 — Direct marketing

An organisation may only use or disclose personal information for direct marketing purposes if certain conditions are met.

APP 8 — Cross-border disclosure of personal information

Outlines the steps an APP entity must take to protect personal information before it is disclosed overseas.

Video Overview of APP 8

APP 9 — Adoption, use or disclosure of government related identifiers

Outlines the limited circumstances in which an organisation may adopt a government related identifier of an individual as its own identifier, or use or disclose a government related identifier of an individual.

APP 10 — Quality of personal information

An APP entity must take reasonable steps to ensure the personal information it collects is accurate, up to date and complete. An entity must also take reasonable steps to ensure the personal information it uses or discloses is accurate, up to date, complete and relevant, having regard to the purpose of the use or disclosure.

APP 11 — Security of personal information

An APP entity must take reasonable steps to protect the personal information it holds from misuse, interference and loss, and from unauthorised access, modification or disclosure. An entity also has obligations to destroy or de-identify personal information in certain circumstances.

APP 12 — Access to personal information

Outlines an APP entity’s obligations when an individual requests to be given access to personal information held about them by the entity. This includes a requirement to provide access unless a specific exception applies.

Video Overview of APP 12

APP 13 — Correction of personal information

Outlines an APP entity’s obligations in relation to correcting the personal information it holds about individuals.

There has been very little case law on the application of the Privacy Act and APPs. One recent exception is the Privacy Commissioner v Telstra case involving technology journalist Ben Grubb’s metadata. Unfortunately, it is unclear in the aftermath of the case whether dynamic IP addresses constitute ‘personal information’ for the purposes of Australian privacy law. (NB It would constitute ‘personal data’ in EU data protection law.)

Privacy Commissioner v Telstra Corporation Ltd

The case arose from a request by the journalist Mr Grubb, who sought access to all metadata held by Telstra Corporation Ltd relating to his mobile phone. While Telstra provided some data, it refused him access to its mobile network data, including metadata. In 2013, the (former) National Privacy Principle (‘NPP’) 6.1 (now reflected in APP 12) ‘…gave individuals the right to access, subject to some exceptions, their own personal information held by an organisation, such as Telstra’.

The Privacy Commissioner argued that this metadata constituted personal information, as Telstra had the capacity to link it to Mr. Grubb’s account, making him identifiable. The Administrative Appeals Tribunal (AAT) disagreed with the Privacy Commissioner’s assessment, ruling that the mobile network data was not information ‘about’ Mr. Grubb. Instead, the Tribunal viewed it as information about how Telstra provided services to Mr. Grubb. The Tribunal emphasised that merely being able to identify an individual from the data was insufficient; the data must also be about the individual to qualify as personal information under the Privacy Act.

The Privacy Commissioner appealed the decision, arguing that the Tribunal had misinterpreted the phrase ‘about an individual’ in the definitional context of ‘personal information’. Privacy advocates welcomed the appeal, anticipating it would provide the first comprehensive judicial guidance from the Federal Court on this fundamental concept within Australia’s privacy legislation. However, the Full Federal Court dismissed the appeal.

The judgment found that telecommunications metadata did not qualify as personal information under the Privacy Act 1988 (Cth). This ruling highlighted that the classification of technical data as personal information is context-dependent and illustrated ambiguity as to what constitutes ‘personal information’ for the purposes of privacy regulation in relation to the internet.

At [3], Dowsett J, with Kenny and Edelman JJ concurring, determined that:

… [T]he definition of the term ‘personal information’ in s 6 of the Privacy Act clearly contemplates identification of information or opinion concerning the relevant applicant… In other words, personal information is information or opinion:

  • About the relevant applicant; and

  • From which his identity is apparent or could reasonably be ascertained.

Recent Developments

To assist in elucidating the scope of the word ‘about’, the Explanatory Memorandum of the Privacy Amendment (Enhancing Privacy Protection) Bill 2012 (Cth)4 outlines two steps to determine whether information is personal information under the current Privacy Act:

  1. Whether there is a sufficient nexus between the information and the individual.

  2. The cost, difficulty, practicality, and likelihood that the information will be linked to identify that individual.

Cyber Security and Data Breaches

Overview

Recently, there has been significant reform in the law and strategy implemented by the Australian Government to improve cyber security, with the aim of minimising the number of breaches. In addition to exposing Australians’ personal information, cybercrime is having significant economic impacts, with the cost to Australian businesses increasing by approximately 14% per annum.

The Australian Signals Directorate (ASD) recorded 150 data breaches in 2022-2023, up from 81 breaches recorded in 2021-2022.

Mandatory Data Breach Requirements (Federal)

Mandatory data breach requirements were first introduced in Australia in early 2017 as an amendment to the Privacy Act 1988 (Cth). The amendments contain a notification scheme for certain types of data breaches involving unauthorised access and disclosure of personal information likely to lead to serious harm to individuals.

The requirements are binding on APP entities, credit reporting bodies, credit providers, tax file number recipients and internet service providers.

If an entity becomes aware of a data breach that is likely to result in serious harm to the individuals involved, it must inform the Office of the Australian Information Commissioner (OAIC) and the individuals whose data is affected. If directly contacting the affected individuals is not practicable, the entity may instead publish a statement on its website and take reasonable steps to publicise its contents. Where a data breach affects more than one entity, an entity is not required to complete these steps if another affected entity has already done so.

Mandatory Notification of Data Breach Scheme (NSW)

Inception

On 16 November 2022, the Privacy and Personal Information Protection Amendment Bill 2022 (NSW) passed both Houses of the NSW Parliament, receiving assent on 28 November 2022. The amendments came into effect on 28 November 2023, with key changes including:

  • The introduction of the Mandatory Notification of Data Breaches scheme (MNDB scheme), including notification requirements and an assessment process for determining the seriousness of harm arising from a breach.
  • Extension of the Privacy and Personal Information Protection Act 1998 (NSW) (PPIP Act) to all NSW state-owned corporations not covered by the Privacy Act 1988 (Cth).
  • Repeal of s 117C of the Fines Act 1996 (NSW), ensuring all public sector agencies are covered by the MNDB scheme.

Together, these changes ensure that a comprehensive framework for handling data breaches exists for private organisations and federal bodies, as well as for state-owned corporations in NSW.

Types of Breaches

The Information and Privacy Commission (IPC) is the NSW regulator for the MNDB scheme. It identifies three areas in which public sector agencies (including NSW Police and local councils) experience data breaches:

  1. Human error — for example, a letter or email sent to the incorrect recipient, or an employee losing a laptop in a public space.
  2. System failure — for example, systems containing confidential information requiring no authentication, or a system automatically redirecting workflows to the wrong users.
  3. Malicious or criminal attack — for example, malware, hacking and phishing.

Because the MNDB scheme was introduced only 10 months ago, there are very few reported cases of eligible breaches and no annual data from the IPC on trends, recommendations or themes within the sector. Comparatively, the OAIC reported that 67% of data breaches in the last reported period (July–December 2023) were criminal or malicious attacks. Health service providers were the highest reporters of breaches with 104 recorded, followed by the finance industry with 49.

3-tier Process for Determining Eligible Breaches

Not every data breach falls under the MNDB scheme: s 59L(2) of the PPIP Act provides that only eligible breaches are captured. To determine whether a breach is eligible, regard must be had to s 59D(1), namely whether:

  1. Information held by a public sector agency has been unlawfully accessed, disclosed or lost (either internally or externally);
  2. That information is personal information (s 59B); and
  3. The data breach would be likely to result in serious harm to the individual to whom the information relates (s 59D(1)(a) and s 59D(1)(b)(ii)).

An agency to which a suspected eligible data breach is reported has 30 days to assess whether an eligible breach occurred. An assessor is appointed (s 59G) and relevant factors are considered (s 59H), such as to whom the information was released and how long they have had access to it.

If an eligible breach has occurred, the agency must notify the Information Commissioner immediately (s 59M) and the individuals concerned as soon as reasonably practicable (s 59N). Penalties of up to $40,000 may be imposed if the matter is decided in the NSW Civil and Administrative Tribunal (NCAT).
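The statutory test and notification duties above can be sketched as a small decision procedure. This is a hypothetical illustration only: the function names and boolean inputs are assumptions, and the real assessment turns on an assessor’s judgement under ss 59G–59H, not a mechanical check.

```python
# Sketch of the MNDB eligibility test in s 59D(1) of the PPIP Act (NSW).
# Hypothetical illustration only -- not legal advice; in practice an
# appointed assessor (s 59G) weighs the s 59H factors within 30 days.

def is_eligible_breach(unlawfully_accessed_or_lost: bool,
                       is_personal_information: bool,
                       likely_serious_harm: bool) -> bool:
    """Return True only if all three limbs of the s 59D(1) test are met."""
    return (unlawfully_accessed_or_lost
            and is_personal_information
            and likely_serious_harm)

def required_notifications(eligible: bool) -> list[str]:
    """For an eligible breach, the agency must notify the Information
    Commissioner immediately (s 59M) and affected individuals as soon
    as reasonably practicable (s 59N)."""
    if not eligible:
        return []
    return ["Information Commissioner", "affected individuals"]

# Example: a lost laptop holding unencrypted records likely to cause harm
eligible = is_eligible_breach(True, True, True)
print(eligible)                         # True -> eligible breach
print(required_notifications(eligible))
```

The three-limb structure makes clear that a breach failing any single limb (for example, lost data that is not personal information) falls outside the scheme.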

Responsibilities Under the NSW Scheme

Agencies that hold personal information have a legislative responsibility to protect that information in accordance with the Information Protection Principles (IPPs) in Part 2 of the PPIP Act. As such, when an eligible breach occurs, all reasonable efforts must be made to contain the breach and mitigate the harm suffered.

Agencies are required to have a Data Breach Policy that outlines how they intend to address data breaches within their organisation as well as maintain both a public and internal register of eligible data breaches that have occurred within the agency. In addition to this, there is also a legislative onus on agencies to maintain a current Privacy Management Plan that not only outlines how personal information and privacy are reflected within the relevant organisation but the processes engaged when information is inadvertently released or unlawfully disclosed.

The Information Commissioner carries various enforcement powers, including:

  • Directions can be issued in regard to providing specific information or making specific recommendations when reasonable suspicion is held that an eligible breach has occurred (s 59Y).
  • Investigative powers and monitoring powers can be exercised to ensure systems, policies and procedures reflect the objectives of the Act and the agencies’ requirements to uphold legislative requirements pertaining to personal information (s 59ZA).

Of particular note, in November 2023 the OAIC instigated civil penalty proceedings against Australian Clinical Labs Limited (ACLL) following an investigation into its privacy practices after a breach reported in 2022. The allegations include failure to conduct an expeditious assessment and to notify the Commissioner, demonstrating that at the federal level there is an appetite to prosecute breaches and disregard of privacy obligations.

“Serious Harm” Definition

For agencies subject to the MNDB scheme, there is no definition of “serious harm” prescribed in the PPIP Act. Instead, a collection of considerations is provided, which does little to ensure a consistent application of the assessment process between agencies. Naturally, each data breach assessed will vary in severity and depend heavily on case-specific factors, including:

  • individuals involved,
  • the sensitivity of the information released,
  • how the information was released,
  • any known history concerning the individual, and
  • the person to whom the information was released.

The framework, although unifying, still lacks any underlying prescription of what “serious harm” means or how it is to be applied, leaving the question to be worked out through cases tried in NCAT or complaints lodged with the IPC.

Major Australian Data Breaches

In September 2022, Optus became the target of a large cyber attack resulting in 9.8 million customer records being breached. This raised public concern about the information that telecommunication companies hold and their ability to protect this information from being exposed or exploited.

Following this, in October 2022, Medibank was the victim of a data breach in which hackers gained access to the private medical records of approximately 9.7 million Australians. In response, the Australian Government has reformed the law surrounding cyber security, seeking both to prevent such breaches from recurring and to minimise the impacts of data breaches that do occur.

These breaches caused reputational damage to the companies involved and raised individual customers’ concerns about potential identity theft and the misuse of their sensitive personal information. Many of the affected companies were still using outdated encryption procedures and had insufficient monitoring systems. The breaches highlighted the need for the government to act swiftly and reform cyber security law to provide stricter regulation and enforcement of cyber security measures.

Cyber Security and Data Protection

Cyber security plays a fundamental role in protecting private information from being leaked in a data breach. Cyber attacks present a significant challenge to the sovereignty of states and to personal data, challenges that are intensified by the ongoing evolution of AI technology. The ASD is responsible for foreign signals intelligence, information security and cyber operations. The Australian Cyber Security Centre, which forms part of the ASD, conducts threat assessments and provides incident response services for cyber incidents and threats, forming a collaborative approach to cyber security in Australia.

The Cyber Security Strategy and Australian Security Legislation

The Australian Government released the 2023-2030 Australian Cyber Security Strategy on 22 November 2023, replacing Australia’s Cyber Security Strategy 2020. The strategy consists of six shields: strong businesses and citizens; safe technology; world-class threat sharing and blocking; protected critical infrastructure; sovereign capabilities; and resilient region and global leadership.

The strategy’s aim is preventative in nature, but it also seeks to build resilience and minimise the overall impact that data breaches can have on individuals’ information as well as on larger entities. The strategy also has a strong focus on collaboration between different departments, so that appropriate communication minimises the chance of a breach occurring. To coincide with the introduction of the strategy, the Australian Government also appointed its first-ever Executive Cyber Council, whose role is to facilitate transparent co-management of key cyber security issues.

To facilitate the strategy, the Australian Government also introduced the 2023-2030 Australian Cyber Security Action Plan, which details how the strategy will be implemented across different stages. The action plan comprises multiple actions for each of the six shields. For example, one action under the ‘strong businesses’ shield is to support small and medium businesses to strengthen their cyber security.

Security of Critical Infrastructure Act

The Security of Critical Infrastructure Act 2018 (Cth) is one of the key pieces of legislation that governs cyber security in Australia. Relevant amendments include:

  • The Security Legislation Amendment (Critical Infrastructure) Act 2021 (Cth), which came into effect on 2 December 2022 and included introducing mandatory reporting of cyber incidents and the implementation of stringent security measures.
  • The Security Legislation Amendment (Critical Infrastructure Protection) Act 2022 (SLACIP Act), which:
    • added a new section 3(d): “imposing enhanced cyber security obligations on relevant entities for systems of national significance in order to improve their preparedness for, and ability to respond to, cyber security incidents;”
    • introduced a new risk management program requiring that a critical infrastructure risk management program is maintained by entities who hold one or more critical infrastructure assets (Part 2A);
    • introduced a requirement that the Minister notify every reporting entity for assets declared to be of national significance; the Minister also has the power to privately declare systems of national significance (Part 6A).

Potential 2024 reforms to the Privacy Act

2022 Privacy Act Review Report

In 2022, the Attorney-General’s Department of the Commonwealth of Australia released its Privacy Act Review Report (2022 Report). This followed the 2019 release of the Australian Competition and Consumer Commission’s Digital Platforms Inquiry final report and a two-year review of the efficacy and appropriateness of the Privacy Act 1988 (Cth) in the modern digital age.

The vulnerability of personal information (particularly data breaches) was highlighted as a key motivation for the 2022 Report, alongside significant privacy reforms that have taken place or are taking place abroad.

Key insights

(a) Submissions to the Attorney-General’s Department sought to retain the principles basis of the Act but supplement it with further detail. To this end, the 2022 Report proposed a new ‘fair and reasonable’ test alongside more detailed rules and mechanisms, such as more detailed guidance from the Office of the Australian Information Commissioner (OAIC), specific legislated requirements and expansion of the Australian Privacy Principles (APPs). The 2022 Report also suggested amending the objects of the Act to recognise the public interest in protecting personal information.

(b) Stakeholders were unclear as to what constituted protected ‘personal information’ under the Act. The 2022 Report proposed amendments to clarify that ‘personal information’ includes both technical and inferred information concerning a “reasonably identifiable individual”. This clarification would avoid uncertainty as to whether individual data points (such as IP addresses and cookies) fall within the definition of ‘personal information’, an uncertainty exposed in Privacy Commissioner v Telstra Corporation Limited [2017] FCAFC 4. The Report also proposed extending protections to de-identified ‘personal information’, since it can be re-identified.

(c) Some submissions requested that current exemptions (including the journalism, political, small business and employee records exemptions) be removed or narrowed. Others vehemently opposed their removal. In a balancing exercise, the 2022 Report proposed changes to meet shifting community expectations: small businesses should no longer be exempt; private sector employee information should be protected; and the political and journalism exemptions should be narrowed.

(d) Stakeholders expressed strong support for increased protections for personal information under the Act. Accordingly, the 2022 Report proposed:

  • better quality privacy collection notices and consents;
  • a new ‘fair and reasonable’ test to underscore the APPs;
  • requirements for entities to undertake a Privacy Impact Assessment before beginning any activity which may have a significant impact on individual privacy;
  • additional privacy protections for children;
  • introduction of OAIC guidelines for what constitutes reasonable steps to destroy unneeded personal information;
  • enhancements to the Notifiable Data Breach scheme so that any harm caused by a breach can be minimised quickly and effectively;
  • regulation of targeted advertising (whether the person is identified or not);
  • further individual rights and control over their own personal information, modelled on the General Data Protection Regulation of the European Union; and
  • a new concept of ‘controllers’ and ‘processors’ in the Act.

(e) Submissions highlighted the need for effective enforcement so as to encourage compliance with the Act and for pathways for recourse where privacy invasions fall outside the scope of the Act. The 2022 Report proposed: new powers for the Information Commissioner and further civil penalties regarding public inquiries, investigations and determinations; a review of the feasibility of industry funding models for the OAIC; the introduction of a statutory tort for serious invasions of privacy, especially for privacy invasions that fall outside the purview of the Act; and reducing the regulatory burden by streamlining privacy obligations, reducing duplication, and producing a privacy law design guide to ensure future legislative harmony.

In total, the 2022 Report made 116 proposals to the Australian Government to overhaul Australia’s privacy laws to ensure they meet the demands of the modern digital age.

2023 Government response to the Privacy Act Review Report

In 2023, the Government released its response to the 2022 Report and ultimately made a commitment to introduce legislation to enhance protections for the personal information of Australians. Out of the 116 proposals of the 2022 Report, the Government agreed to 38 proposals, agreed in-principle to 68 proposals (subject to further stakeholder engagement), and noted 10 proposals (without agreeing or agreeing in principle to legislative change).

Most pertinently, the Government:

(a) Agreed in principle to the introduction of a new ‘fair and reasonable’ test for the handling of personal information.

(b) Agreed in principle to amendments to clarify the concept of ‘personal information’ and to the introduction of the concept of de-identification. However, the Government only noted the proposal to introduce specific protections for de-identified information.

(c) Agreed in principle that the small business exemption should be removed, with further consultation with small businesses. Whilst the Government agreed to the narrowing of the journalism exemption, it only noted the narrowing of the political exemption.

(d) Agreed in principle that non-government entities complete Privacy Impact Assessments and to the inclusion of further protections for children. The Government also agreed in principle to the regulation of direct and targeted advertising (defining both terms and fair and reasonable targeting), but whilst the unqualified right to opt-out of direct marketing was agreed in principle, that same right for targeted advertising was only noted.

(e) Agreed to new civil penalty provisions and additional powers for the Information Commissioner to conduct investigations and to conduct public inquiries and reviews. The Government also agreed in principle to further investigation into an OAIC industry funding model.

The Government Response to the ‘ambitious’ 2022 Report has been described as “cautious and measured”.5 The Government laid out its plan to prepare draft legislation for the less contentious recommendations and leave the more contentious recommendations for further stakeholder consultation and analysis prior to implementation.

In May 2024, the Attorney-General’s Department announced that the Government planned to introduce draft legislation implementing the less contentious recommendations in August 2024. The first tranche of reforms to the Privacy Act 1988 (Cth), the Privacy and Other Legislation Amendment Bill 2024, was introduced by the Government on 12 September 2024 and passed the Senate with minor changes on 29 November 2024.

Government Surveillance

Surveillance is the monitoring of behaviour, activities or other changing information, usually of people, for the purposes of influencing, managing, directing or protecting them (Lyon 2007). For a glossary of commonly used terms in surveillance studies, see this open access book edited by Guy McHendry.

Governments use surveillance for intelligence gathering, the prevention and investigation of crime, and the protection of a process, group, person or object.

The extent of government surveillance powers goes to the heart of questions about the appropriate role of the state in our lives, including:

  • Rule of law
  • Liberal democratic values
  • Public safety and security
  • Civil liberties and human rights (especially privacy)

Since 9/11, the War on Terror has seen an expansion of anti-terrorism and law enforcement surveillance powers in many Western countries.

Telecommunications (Interception and Access) Act 1979 (Cth)

This Act:

  • Makes it an offence to intercept (listen to or record) a communication passing over a ‘telecommunications system’ without the knowledge of the person making the communication;
  • Makes it an offence to publish or retain a record of information gained in this way; and
  • Allows access to communications content for law enforcement and national security purposes after a judicial warrant has been obtained.

Telecommunications Act 1997

This Act imposes obligations on telecommunications providers, including to provide assistance to law enforcement agencies for:

  • enforcing the criminal law and laws imposing pecuniary penalties
  • assisting the enforcement of the criminal laws in force in a foreign country
  • protecting revenue
  • safeguarding national security.

Exceptions to the Privacy Act

The Privacy Act 1988 (Cth) applies to most Australian government agencies, including the Australian Federal Police, Australian Border Force, and CrimTrac. However, certain intelligence and national security agencies are excluded from the Act:

  • Office of National Assessments
  • Australian Security Intelligence Organisation (ASIO)
  • Australian Secret Intelligence Service (ASIS)
  • Australian Signals Directorate (ASD)
  • Defence Intelligence Organisation
  • Australian Geospatial-Intelligence Organisation
  • Australian Commission for Law Enforcement Integrity
  • Australian Criminal Intelligence Commission

For these excluded organisations, the Inspector-General of Intelligence and Security provides oversight and review of agency activities, ensuring their operations remain within legal bounds and maintain propriety.

Data Retention

A data retention scheme was implemented in 2015 by the Telecommunications (Interception and Access) Amendment (Data Retention) Act 2015 (Cth).

Telecommunications companies must retain and secure the following information for two years:

  • source and destination of a communication
  • date, time and duration of a communication
  • communication type
  • location of communications equipment.

22 law enforcement agencies are able to access this information without a court warrant (except for a journalist’s data, which requires a warrant).
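The retained data set can be pictured as a simple record structure. The field names below are illustrative assumptions, not the Act’s drafting; note that the content of a communication is not among the retained categories.

```python
# Sketch of the metadata categories retained under the 2015 data
# retention scheme. Field names are illustrative assumptions only;
# the *content* of the communication is NOT retained.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RetainedMetadata:
    source: str              # originating account or service
    destination: str         # party the communication was sent to
    start_time: datetime     # date and time of the communication
    duration_seconds: int    # duration of the communication
    comm_type: str           # e.g. "voice", "SMS", "email"
    equipment_location: str  # location of the equipment used

record = RetainedMetadata(
    source="subscriber A",
    destination="subscriber B",
    start_time=datetime(2015, 10, 13, 9, 30),
    duration_seconds=120,
    comm_type="voice",
    equipment_location="cell tower (Sydney CBD)",
)
print(record.comm_type)  # prints: voice
```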

International surveillance laws: USA PATRIOT Act

The USA PATRIOT Act, officially the “Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001,” was a keystone of the national security strategy adopted by the United States Government after the September 11, 2001 terrorist attacks. Its provisions considerably broadened the surveillance and investigative authority of federal agencies. The Act was intended to mitigate future attacks by giving the government broad powers to track and access communications and financial data across international borders and within the US.

The Act has profound implications for data privacy, particularly in the context of cloud services. It grants US government agencies access to personal data stored by a US-based cloud provider, regardless of the owner’s country of origin or where the data resides. This means that even if a business or individual outside the US chooses a cloud service that operates within, or is headquartered in, the US, their data could still be subject to scrutiny under the PATRIOT Act.

Sections 215 and 505 of the PATRIOT Act raise particular concerns for data privacy:

  • Section 215 (the “business records” provision) allows federal agencies to request any person or entity to hand over “any tangible things” relevant to a terrorism investigation. This section grants the government broad authority to collect a wide range of business records without requiring a court order. Importantly, according to the original version of the Act, this authority could be used to gather information in bulk, even if the data pertains to individuals not directly under investigation, raising significant concerns about overreach and privacy.

  • Section 505 authorises the issuance of National Security Letters (NSLs), which are administrative subpoenas that allow federal agencies to demand certain types of records from companies, such as telecommunication firms and internet service providers, without prior judicial approval. Unlike Section 215, NSLs are more targeted but come with a “gag order” that prevents companies from informing individuals about the government’s data request. This secrecy adds another layer of concern regarding transparency and the potential misuse of these powers.

The PATRIOT Act is criticised as undermining the ability of other countries to enforce their own data privacy laws, leading to legal complexities and confusion. This has implications for companies that handle sensitive or personal data and must comply with strict data protection regulations, such as the General Data Protection Regulation (GDPR) in the European Union, Personal Data Protection Act (PDPA) in Singapore, or Russian Federal Law on Personal Data (152-FZ). The PATRIOT Act’s provisions may conflict with such legislation.

Additionally, the Act raises ethical concerns about the balance between national security and individual privacy. Public awareness of these issues grew significantly after Edward Snowden’s 2013 revelations about the extent of US government surveillance, particularly the bulk data collection under section 215. Snowden’s disclosure further turned the public eye to such surveillance powers, eventually followed by legal reforms (the USA FREEDOM Act of 2015, which tried to limit some of the more controversial practices). While the USA FREEDOM Act curtails bulk data collection, many of the core provisions affecting cloud hosting, such as access to information stored by US-based providers, remain largely intact. This underlines ongoing ethical concerns of trading off national security against personal privacy, especially to the disadvantage of non-US citizens who are using US-based cloud services.

Privacy-enhancing technology

Data security and individual privacy rights are paramount considerations in the digital information age and an urgent global priority. Privacy-enhancing technologies (PETs) employ various measures to secure data by: (i) reducing or eliminating personal data, or (ii) preventing the unnecessary processing of personal data while preserving the functionality of the data system.1

In addition to traditional cryptographic techniques, corporations and other entities may utilise a variety of PETs to achieve these goals, including data obfuscation, encrypted data processing, data accountability tools, and federated and distributed analytics. The integration of these emerging technologies promotes a privacy-by-design (or privacy-by-default) paradigm. This approach typically involves incorporating PETs into system infrastructure from the outset and modifying how organisations collect and use personal data.

In Australia, entities are indirectly encouraged to use PETs to meet their obligations under the Australian Privacy Principles (APPs) concerning personal information collection, retention, and handling.
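As a concrete illustration of measure (i), one common PET technique is pseudonymisation: replacing a direct identifier with a salted hash, so records remain linkable without the raw identifier being stored. A minimal sketch using Python’s standard library follows; the salt handling here is simplified for illustration, and production systems would use dedicated key management (for example, a keyed HMAC or a tokenisation service).

```python
# Minimal pseudonymisation sketch: replace an email address with a
# salted SHA-256 digest. Illustrative only -- real systems store and
# rotate the salt/key securely and may use HMAC or tokenisation.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # in practice, held securely, not in code

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for this identifier under SALT."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

p1 = pseudonymise("alice@example.com")
p2 = pseudonymise("alice@example.com")
p3 = pseudonymise("bob@example.com")
print(p1 == p2)  # True: same identifier -> same pseudonym (records link)
print(p1 == p3)  # False: different identifiers stay distinct
```

Because the pseudonym is deterministic under a given salt, a data system can still join records about the same person while the raw identifier never appears in the stored data.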

Cryptography

Cryptography is a fundamental method for enhancing individual privacy and serves as the foundation for many data security iterations, including private and hybrid blockchains, virtual private networks (VPNs), and distributed database systems. For entities gathering, retaining and transmitting sensitive personal information digitally, it offers a safeguard to protect individuals from unauthorised access, interception and data tampering.

The widespread use of cryptographic encryption is attributed mainly to the uptake of Pretty Good Privacy (PGP) encryption programs. Encryption can enhance or undermine data protection, depending on how it is used. When individuals encrypt their personal information and retain the private key, they effectively maintain autonomy over disclosing their information. When an individual encrypts another person’s personal information using a public key and withholds the private key, it mirrors the basic operations of ransomware. For more information on ransomware in cybersecurity, see Simplilearn’s video.

Cryptographic encryption works by converting data into a ‘secret’ code, ensuring only authorised individuals may access, view and change the information contained within the data. Authorised individuals use a key to decrypt the obfuscated data back into its original format. The technology is integrated into various messaging platforms, such as Meta’s Messenger, Wickr and WhatsApp, and remains a trusted security measure in the WikiLeaks submission portal.

Cryptographic encryption requires three critical components:

  1. Data that needs to be protected. This data could be confidential information such as a message, file, or other digital content.

  2. A sender who possesses a public key. A public key is a cryptographic code that may be freely distributed and is used to encrypt the data. The sender utilises the public key to encrypt the data before sending it to the intended recipient.

  3. The receiver of the encrypted data holds the corresponding private key. The private key is kept secret and is used to decrypt the received data that has been encrypted using the public key.
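The three components above can be demonstrated with a textbook ‘toy’ RSA example. The numbers below are far too small to be secure (real deployments use keys of 2048 bits or more, with padding), but they show the public-key and private-key roles just described.

```python
# Toy RSA demonstration of public-key encryption. Educational only:
# the primes are tiny and the scheme (no padding) is insecure.
p, q = 61, 53
n = p * q                # modulus, shared by both keys: 3233
phi = (p - 1) * (q - 1)  # Euler's totient: 3120
e = 17                   # public exponent: public key is (e, n)
d = pow(e, -1, phi)      # private exponent: private key is (d, n) -> 2753

message = 65                       # 1. the data to protect (a number < n)
ciphertext = pow(message, e, n)    # 2. sender encrypts with the public key
recovered = pow(ciphertext, d, n)  # 3. receiver decrypts with the private key

print(ciphertext)  # 2790
print(recovered)   # 65
```

The asymmetry is the key point: anyone holding the public key `(e, n)` can encrypt, but only the holder of the private exponent `d` can recover the message.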

Ryan Glister Explains TOR and Sooraj Sidhu Explains Public Key Encryption

Regulation

The proper use of PETs generally falls within the mandate of the Office of the Australian Information Commissioner (OAIC) to ensure organisations take reasonable steps when handling personal information and maintain acceptable data retention practices in compliance with the Privacy Act 1988 (Cth) and other relevant laws. The Australian regulatory framework encourages, but generally does not expressly require, organisations to use PETs.

Law enforcement powers

In Australia, law enforcement is conferred broad powers to compel certain private intermediaries to provide ‘technical assistance’ to gain access to encrypted communications in criminal investigations. Under the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (Cth) (TOLA Act), service providers must provide law enforcement with access to decrypted communications or decryption tools, stripping communications of anonymity or privacy. Such laws remain controversial globally and are perhaps best explained in Tim Cook’s 2016 open letter to customers following Apple’s refusal to comply with FBI requests to remove cryptographic features. The TOLA Act arguably undermines the essential security features of encryption and infringes individual privacy rights.

OAIC powers

The regulatory powers conferred on the OAIC govern the use of PETs by monitoring government and private entities’ compliance with the APPs set out in Schedule 1 of the Privacy Act. The OAIC may commence an investigation based on individual complaints or on the Information Commissioner’s own initiative regarding potential breaches. Where entities report a data breach under their obligations under the Notifiable Data Breaches Scheme (NDBS), the OAIC will assess the breach and provide advice, or investigate the matter further, to mitigate harm and prevent further impacts on personal privacy.

The investigative powers conferred on the OAIC provide authority to compel the provision of information regarding data access, record-keeping, and internal policies. The OAIC may conduct Privacy Assessments to evaluate an entity’s compliance with the APPs. Where the OAIC determines non-compliance with or a breach of the Privacy Act, it may order data handling practices to cease or change, issue infringement notices, or apply to the Federal Court for injunctive relief and civil penalties for serious or repeated breaches.

The OAIC may also publish investigation outcomes and issue public notices regarding potential non-compliance, breaches and privacy issues associated with the practices of specific organisations. This ensures transparency, alerts the public to the privacy risks associated with those organisations, and acts as a deterrent to other organisations with lax or inappropriate personal information handling practices. For example, in the wake of the Facebook and Cambridge Analytica political data-sharing controversy,2 the OAIC investigated, with the Commissioner bringing proceedings against Facebook for serious and repeated interferences with privacy. While the matter remains pending, the OAIC has published the particulars of the allegations, highlighting breaches of APPs 6 and 11.

Case studies

My Health Records Act 2012 (Cth) and Healthcare Identifiers Act 2010 (Cth)

The My Health Record system facilitates identifying and maintaining patient records, enabling informed communication between healthcare providers regarding individual healthcare recipients. An individual healthcare identifier (IHI) is assigned to a collection of personal information from those recipients per s 7(3) of the Healthcare Identifiers Act 2010 (Cth) (HI Act) for use in the My Health Record data system. Information such as names, addresses, dates of birth, government identifiers, and the resultant IHI constitutes personal information under s 33C(1)(a) of the Privacy Act.

Healthcare providers must also take reasonable steps to protect IHIs from unauthorised data use, misuse, or loss (s 27). In addition to the range of compliance obligations regarding personal information under the Privacy Act, the HI Act sets a higher privacy standard, making individuals liable to criminal and civil penalties for unauthorised disclosures and data misuse (s 26). Concurrently, a breach of the HI Act is treated as an interference with the affected parties’ privacy for the purposes of regulatory action under the Privacy Act.

The OAIC’s role involves investigating privacy matters arising from the handling of personal information in the My Health Record system. Part V of the Privacy Act sets out the OAIC’s investigative powers. However, the Information Commissioner has a broader power under s 73(4) to “do all things necessary or convenient to investigate” contraventions of the My Health Records Act.3

Although these powers seem broad, commentary following audits of the My Health Record system shows that it lacked proper management of shared cyber security risks. Specifically, there was no assurance framework monitoring third-party software connecting to the system, nor a means of monitoring compliance with the security requirements set out in the legislation.4 At that time, the My Health Record system boasted robust core infrastructure, though third-party software, or even a Microsoft OS update, could undermine the system’s security.

Children’s online privacy and sharenting


Many children have digital footprints before they take their first steps, raising concerns that the online ‘sharing’ practices of parents and guardians breach, and exacerbate risks to, children’s privacy. The term “sharenting” refers to a parent or guardian sharing photos, videos, personal stories, and other updates about a child’s daily activities, such as eating, sleeping, bathing, and playing. Sharenting creates a tension between the child’s interest in privacy and autonomy over their digital identity, and the parent or guardian’s right to freedom of speech and to control the upbringing of their children.

Risks associated with sharenting

As information cannot easily be erased once shared online, the harms of sharenting may include:

  • identity theft
  • resharing pirated information on predator sites
  • sharing psychosocial information that should remain private, and
  • sharing revealing or embarrassing information that may be misused by others.

Long-term consequences can include a negative impact on the emotional, social and intellectual development of a child, as they may grow up to resent their parents, be victim to bullying and harassment, or have to rebuild their digital identity.

Studies conducted by the Australian Government’s eSafety Commissioner found that approximately 50% of images shared on paedophile sites were taken from social media. The new phenomenon of “kidfluencers” adds a further element to this risk. A recent controversial instance concerned a 3-year-old girl, Wren Eleanor, whose mother posted videos of her on their TikTok account, which over time accrued over 17 million followers. As the videos gained attention, people started to notice the ‘creepy comments’ and the sheer number of times the videos had been saved by anonymous users, and concerns were expressed that the mother may have been exploiting her child for money.

Legislation governing the privacy rights of children

The law leaves children’s online privacy and sharenting largely unregulated, as current laws do not address the publication of a child’s image or information online. Under the United Nations Convention on the Rights of the Child, a child means every human being under the age of eighteen; in Australia, the Privacy Act 1988 (Cth) protects an individual’s personal information regardless of age. It does not specify an age after which an individual can make their own privacy decisions, but for consent to be valid, an individual must have capacity to consent. Article 16 of the UN Convention on the Rights of the Child states that:

“No child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence, nor to unlawful attacks on his or her honour and reputation and the child has the right to the protection of the law against such interference or attacks.”

Social media platforms are regulated in how they collect and handle data on children’s accounts, but individuals’ digital privacy is otherwise left to user discretion. Hence a child’s privacy can still be overridden by their parents’ freedom of speech and discretion.

Other countries have attempted to implement specific regulations on the sharing of children’s personal data, but until 2024 no country had adopted laws protecting children’s privacy through the lens of rights to their images. In France, Law no. 2024-120 of February 19, 2024 (the “Children’s Image Rights Law”) was implemented to tackle the risks of sharenting by limiting risk-creating behaviour, enshrining children’s right to privacy, and facilitating the exercise of rights which protect minors. This law is the first of its kind and sets a precedent for other countries to follow, to ensure that children’s safety, privacy and image are protected.

The Right to be Forgotten

What is the Right to be Forgotten?

The right to be forgotten (also known as the ‘right to erasure’) grants individuals the ability, in certain circumstances, to have their personal and private information removed from the internet where it no longer serves a significant public interest. Exercising the right removes the information from search engine results. Although the information is not completely ‘deleted’, this significantly reduces its visibility and accessibility. There have been recent calls for Australia to introduce a ‘right to be forgotten’.

The Right to be Forgotten in Europe

In the 2014 case of Google Spain SL v Agencia Española de Protección de Datos (Google Spain), the right to be forgotten was formally recognised as a fundamental right for Europeans.

Mr Mario Costeja Gonzalez lodged a complaint with the Spanish Data Protection Agency against Google Spain and Google Inc because searching his name on Google revealed a link to a 1998 newspaper article describing his personal debts. Gonzalez argued that the information was irrelevant and infringed his personal privacy. The European Court of Justice ruled in Gonzalez’s favour, stating that individuals had the right to request the removal of links to personal information where the information was ‘inadequate, irrelevant, no longer relevant, or excessive’. This gave rise to the right to be forgotten (also known as the right to erasure) for Europeans, which prior to the decision was far more theoretical and lacked legal definition. The General Data Protection Regulation (GDPR) now sets out the right to erasure in Article 17.

Australia and the Right to be Forgotten

The case of Google Spain and the enactment of the GDPR indicate that privacy protection for individuals with unequal bargaining power against large corporations is a significant policy concern in the European Union. Whilst many of these privacy issues are similarly addressed by Australian policymakers and courts, Australians do not have the right to be forgotten.

Instead, Australians rely on protection from the Australian Privacy Principles (APPs) under the Privacy Act 1988 (Cth) and traditional remedies, such as the tort of defamation. Although APPs 11 and 13 require the destruction, de-identification or correction of information, Australia falls well short of the protection provided by the GDPR. Currently, there is minimal legislative guidance as to what steps entities should take to remove personal information. The tort of defamation, meanwhile, may help remove defamatory content, but it cannot address privacy concerns regarding harmful yet true public information. Defamation is also limited by practical issues, for example the difficulty of enforcing a judgment when the online content is posted by an unknown or foreign individual. This has drawn sharp criticism from Australians seeking stronger safeguards for the handling of their personal information.

Australian Common Law

In the South Australian case of Duffy v Google Inc, whilst not directly referring to the right to be forgotten, the Supreme Court held that because Google Inc had published Dr Duffy’s personal data, it was responsible for its removal. It remains to be seen whether higher courts in Australia will adopt this position in similar cases.

Legislative Reform

In 2023, the Attorney-General’s Department released its Privacy Act Review Report, which proposed the adoption of a right to be forgotten into federal legislation. Later in 2023, the Australian Government released its response to the report. The response concluded that an overhaul of Australia’s privacy laws was necessary, with the ultimate goal of ensuring that the Privacy Act remains fit for purpose in an ever-changing digital age. Within the response, the Government announced a number of proposed updates and agreed in principle, at proposal 18.3, that Australians should have an individual right to request that an entity delete (or de-identify) personal information, with exceptions for law enforcement and national security purposes:

Proposal 18.3

Introduce a right to erasure with the following features:

a) An individual may seek to exercise the right to erasure for any of their personal information.

b) An APP entity who has collected the information from a third party or disclosed the information to a third party must inform the individual about the third party and notify the third party of the erasure request unless it is impossible or involves disproportionate effort. In addition to the general exceptions, certain limited information should be quarantined rather than erased on request, to ensure that the information remains available for the purposes of law enforcement.

In September 2024, the first tranche of the Government’s proposed Privacy Act reforms was announced. However, proposal 18.3 was not amongst the reforms put forward in the Privacy and Other Legislation Amendment Bill 2024. There remains no right to be forgotten in Australia.

The SPAM Act

Video Overview of the SPAM Act by Anna Hall

The SPAM Act 2003 (Cth) prohibits the sending of unsolicited commercial electronic messages with an Australian link. A message has an Australian link if it originates or was commissioned in Australia, or originates overseas but was sent to an address accessed in Australia.

Electronic messages include Email, SMS and instant messaging. An electronic message is commercial if it offers, advertises or promotes the supply of goods, services, land or business or investment opportunities, or if it advertises or promotes the supplier of any of these things.

Messages are SPAM if they are sent without the prior consent of the recipient. A single message may be SPAM; messages do not have to be sent in bulk.

To avoid contravening the SPAM Act, electronic messages should only be sent with the consent of the recipient, must contain clear and accurate identification of the sender and how they can be contacted, and should include an unsubscribe facility.

The financial penalties for breaching the SPAM Act are steep and indexed to the Commonwealth penalty unit ($313 from 1 July 2023). A single day’s contravention may attract a penalty of up to $626,000 (2,000 penalty units), and repeated breaches of the Act may give rise to penalties of up to $3.13 million (10,000 penalty units): Crimes Act 1914 (Cth) s 4AA(1); Spam Act 2003 (Cth) s 25.
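
The arithmetic behind those dollar figures can be sketched as follows. This is an illustrative calculation only; `PENALTY_UNIT` and `max_penalty` are hypothetical names, and the unit value is indexed periodically, so the current figure should always be checked against the legislation.

```python
# Illustrative sketch: maximum Spam Act civil penalties expressed as
# Commonwealth penalty units multiplied by the unit value.
PENALTY_UNIT = 313  # dollars per penalty unit, from 1 July 2023 (indexed)

def max_penalty(units: int) -> int:
    """Maximum penalty in dollars for a given number of penalty units."""
    return units * PENALTY_UNIT

single_day = max_penalty(2_000)   # one day's contravention
repeated = max_penalty(10_000)    # repeated breaches
print(single_day, repeated)       # 626000 3130000
```

Because the penalty is expressed in units rather than dollars, the maximum dollar amounts rise automatically each time the penalty unit is re-indexed.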

An investigation by the Australian Communications and Media Authority (ACMA) found that Pizza Hut sent 5,941,109 text and email messages between January 2023 and May 2023 to customers who had withdrawn their consent, or had never consented, to receive those messages. The investigation also found that during that period Pizza Hut sent 4,364,971 messages without providing an option for customers to unsubscribe. ACMA had previously issued 15 compliance alerts to Pizza Hut, which eventually paid over $2 million for breaching the Spam Act 2003 (Cth).

Privacy Protection in India

Constitution

Art 21 Constitution of India ‘No person shall be deprived of his life or personal liberty except according to procedure established by law.’

There is no express provision for the right to privacy in the Constitution of India. Over the past 60 years, there was a divergence of opinion as to whether the right to privacy is a fundamental right in India, resulting in inconsistent judgments being handed down.

In 2017, it was unanimously held in Justice KS Puttaswamy (Retd) v Union of India & Ors that the right to privacy is protected as a fundamental constitutional right under the right to life or personal liberty in Art 21 of the Constitution of India. This case serves as a landmark judgment and it explicitly overrules previous judgments where it was held that there is no fundamental right to privacy.

The right to privacy under the Indian Constitution is not an absolute right. An invasion of personal liberty must pass a three-fold test of legality, necessity, and proportionality.

Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules 2011

The Rules are subordinate legislation regulating the collection and disclosure of information by bodies corporate. They impose a consent requirement: businesses must obtain consent in writing, by letter, fax or email, from the provider of sensitive personal data or information before collecting such information. Businesses must take reasonable steps to ensure that the person has sufficient knowledge of the collection.

The Rules also control the disclosure and transfer of information. Disclosure and transfer are permissible where prior permission is obtained from the provider, or where necessary for the performance of a lawful contract between the business and the provider of information.

Although the implementation of security practices and standards is not mandatory under the Rules, in the event of an information security breach businesses are required to demonstrate that they have implemented security control measures.

The Digital Afterlife

‘Digital afterlife’ refers to the continuation of an active or passive digital presence after a person’s death. It is a virtual space where information, assets, legacies, and digital remains reside as part of the ‘cyber soul’. Digital remains are “the digital content and data which was accumulated and stored online during our lifetime that reflect our digital personality and memories.” When internet users pass away, their digital remains may still be used by the living. For instance, the living may communicate with chatbots and avatars created by artificial intelligence (AI) that adopt characteristics of the deceased based on algorithms; an executor of a deceased estate may deal with digital assets; and social media accounts can be ‘memorialised’.

Ethical concerns have been raised about having individuals immortalised in the digital space, however, discourse regarding the legal implications of this emerging issue has remained limited.

AI and the Digital Afterlife Industry

The internet has become a space where the dead and the living co-inhabit. The number of dead accounts online is expected to surpass the living population by the end of this century. Projects including LifeNaut, Eternime, Replika, Project December powered by OpenAI, and ETER9 have emerged to take advantage of the new Digital Afterlife Industry (DAI). These social networks utilise AI machine-learning technology, which allows the AI to recognise patterns in user behaviour and data to digitally replicate the deceased. DAI is defined as an umbrella term encompassing “any activity of production of commercial goods (or services) that involves online usage of digital remains”. An example is two-way communication with AI chatbots and avatars impersonating the deceased (also known as ‘griefbots’ or ‘deadbots’).

For example, the AI in ETER9 creates a virtual counterpart for each user (known as a ‘Niner’) that learns and mimics the user’s online behaviour. These counterparts can interact online autonomously (posting, commenting and ‘liking’) whilst assuming the user’s personality and characteristics, even after the biological user has passed. Some people communicate with these counterparts of the deceased to process grief. However, once users of these projects die, they no longer have control over their data and posthumous self, which is left permanently on the internet to be consumed by the algorithm.

Post-mortem Privacy Protection

The issues that arise are the current lack of regulation of DAI companies such as ETER9 and the concern for individuals’ posthumous privacy protection. The notion of ‘post-mortem privacy’ (PMP) suggests that privacy, personal data and testamentary freedom form part of a person’s autonomy and should be respected in the same way as a physical body. PMP has been conceptualised as ‘the right of a person to preserve and control what becomes of his or her reputation, dignity, integrity, secrets or memories after death’, and emphasises the importance of a person’s control over their data after death. However, the right to privacy does not typically apply to deceased persons.

The General Data Protection Regulation (GDPR), adopted by the European Union in 2016, provides the right to be forgotten online with the aim of “protect[ing] fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data”. ‘Natural persons’ does not include dead persons. Similarly, the Australian Privacy Act 1988 (Cth) does not extend the right to be forgotten to the deceased. However, there have been some instances where the possibility of providing privacy protection to the deceased has been shown. Section 9 of Estonia’s Personal Data Protection Act 2018 provides that consent to the processing of personal data is valid during the lifetime of the data subject and for 10 years after the data subject’s death. This approach may be considered by common law countries in the future.

Allowing AI to utilise the digital remains of deceased people might also be considered a violation of their dignity. However, protection of an individual’s dignity is currently recognised only at common law (for example, through defamation) and does not extend post-mortem.

While PMP is not currently recognised under succession or privacy laws, an individual can clarify their intentions for their digital assets by way of social media providers’ online tools (eg social media legacy contacts or similar) or testamentary dispositions.

Social Media and Digital Remembrance

Social media platforms are also part of the DAI, with some allowing inactive profiles to remain as places of remembrance for the deceased, described as ‘online cemeteries’. In 2009, Facebook introduced memorialised accounts, later adding a legacy contact function which allows users to appoint a person to manage their memorialised main profile or have their account permanently deleted. The legacy contact may post a final message, respond to friend requests, and update the profile photo; however, they are unable to log into the account or read the user’s private messages. Accounts on Instagram may also be memorialised, with the word ‘Remembering’ appearing next to the deceased’s profile name. People can also appoint an ‘Inactive Account Manager’ for their Google account.

Digital Assets and the Law


Digital assets are described as ‘any digital file on a person’s electronic device as well as any online accounts and memberships’. These assets include (but are not limited to) email accounts, social media accounts, digital photos, digital videos, cryptocurrency and online subscription accounts. To identify them clearly, digital assets can be categorised by the kind of value they hold, although a single asset may fall into more than one category.

Digital assets can have financial, sentimental, social or intellectual value. Digital assets with financial value, such as bank accounts and cryptocurrency, have a definite monetary value. Digital assets with sentimental value, such as photographs and videos, have no monetary value but are sentimental in nature. Digital assets with social value include social media accounts that users have curated to connect with new people and portray their lives. Digital assets with intellectual value include published material such as emails, blogs and social media posts (written and visual). An asset with intellectual value may, however, become a liability if the published material defames another individual. A deceased person cannot be sued, but the estate of a deceased person can be sued if the deceased’s published material is defamatory.

Currently, there is a lack of guidance in Australia on how someone can access a deceased person’s digital assets. The United States introduced the Revised Uniform Fiduciary Access to Digital Assets Act 2015, and in 2016 Canada enacted similar legislation, the Uniform Access to Digital Assets by Fiduciaries Act. These laws authorise a representative to access digital assets if that power was expressed in the deceased’s will or another legal document such as a power of attorney. In 2023, the Australian Attorney-General’s Department released the Privacy Act Review Report, endorsing the introduction of an access scheme for digital records. The Australian Government is yet to implement the report’s recommendations.

Digital Assets and Succession

In Australia, no laws require testamentary instruments to deal with digital assets, such as directions to the executor to continue or delete social media profiles, online subscriptions or email accounts. It is left to solicitors’ discretion to raise what digital assets the testator owns, and to discuss providing the executor with passwords upon death so that accounts can be accessed, closed or managed. In 2019, the NSW Law Reform Commission (NSWLRC) published the report ‘Access to Digital Records Upon Death or Incapacity’, which discussed the amendments needed to current succession and estate laws to reflect digital assets, and the education needed for legal practitioners to assist clients in preparing testamentary dispositions and giving directions to their executors, trustees, guardians and attorneys.

The NSWLRC conducted two surveys to understand how digital assets should be dealt with upon death or incapacity. The first asked the public “what should happen to your social media when you die?”; the second asked legal practitioners 43 questions, ranging from “do you practice estate planning” to “is advising personal representatives about administering deceased estates part of your practice?”. The results indicated that few people had thought about their digital assets.

The results can be found here.

Digital Products and Consumer Rights


What is a Digital Product?

‘Digital product’ is not a term defined in legislation; it is used here to describe any item on the internet that can only be accessed via a technological device and that the user has ‘earned’ the right to use through a transaction, usually financial. Examples include e-books, video game files and movie downloads. Consumers may assume that purchasing an e-book on a website affords the same rights as purchasing a physical book, but this is not always the case.

Terms and Conditions

When purchasing digital products, the rights of the consumer are dictated by the terms and conditions of the platform selling the product. As these transactions are typically concluded by browse-wrap or click-wrap methods, it is difficult to contest problems that may arise as there is no option to negotiate the terms and conditions. Australian courts have confirmed that it is the responsibility of the signatory to be aware of a website’s terms and conditions when making an online purchase.

Example: Microsoft Store’s Book Category Closure

In April 2019, Microsoft closed the book category of its online store. As well as preventing future sales, the closure affected previous sales: from July 2019, purchased e-books were permanently removed from consumers’ devices. The decision sparked conversation about what ownership means in the context of digital products, and how much control consumers have over products they have purchased. Microsoft Store customers were licensees whose rights of use were dictated by Microsoft. Unlike a novel purchased from a physical or online bookstore, which the purchaser can keep using regardless of what happens to the vendor, e-books are controlled by the vendor and can be removed from devices, or even altered, after purchase. This is despite e-book transactions being completed with ‘buy now’, not ‘lease now’, options.

Consumer Law and Digital Products

The Australian Consumer Law does not impose any obligations of sustained access to a purchased digital product.

The notion that users are licensing these digital products, rather than purchasing them, requires the definitions of goods and services to be considered. Consumer guarantees for goods are concerned with the quality and freedom of use of the good, whereas consumer guarantees for services are concerned with the duration and purpose of the service. It is only if digital products are considered a good that there will be a consumer right to continued use of the digital product.

The Australian Consumer Law states that the term ‘goods’ includes computer software. The programs that facilitate the use of e-books, like Amazon’s Kindle app and device and Apple’s Books app, are computer software; however, digital products do not fit this definition as easily. In Valve Corporation v Australian Competition and Consumer Commission [2017] FCAFC 224, the Federal Court considered the meaning of ‘computer software’ in relation to goods. Edelman J stated at [156] that the data that accompanies computer software is not a good, while conceding that the two are difficult to differentiate. A recent dispute involving video games suggests that digital products may be treated as a service rather than a good.

Example: Ubisoft’s The Crew

In December 2023, the video game publisher Ubisoft removed one of its titles, The Crew, from both digital stores and the consoles of users who had already purchased the game, as Ubisoft was discontinuing the servers on which The Crew relied. The withdrawal of the game was permitted under the end-user licence agreement. This led to a petition demanding legislation requiring digital products to remain operational without support from their publisher. In response, the Assistant Treasurer and Minister for Financial Services stated that digital products confer a licence to use the product, not a right of ownership.

As long as digital products are considered services rather than goods, the Australian Consumer Law offers limited protection for consumers of digital products. Their license to use the digital product will be governed by the terms and conditions set by the company.

  1. European Data Protection Supervisor, Glossary: ‘Privacy-Enhancing Technologies (PETs)’, https://www.edps.europa.eu/data-protection/data-protection/glossary/p_en#pets.

  2. Katherine Sainty and Belyndy Rowe, ‘OAIC v Facebook’ (2020) 39(2) Communications Law Bulletin 17.

  3. See also My Health Records (Information Commissioner Enforcement Powers) Guidelines 2016 (Cth).

  4. See Auditor-General Report No. 13 2019–20, Implementation of the My Health Record System, 17–18.

  5. James Patto and Annie Zhang, ‘2023 Government Response to the Privacy Act Review Report’ (2023) PWC Australia.