The Improbability of Privacy Act Compliance, Part 1

After reading up on the amendments to Australia's Privacy Act over the past month and hosting a forum with experts on the matter, I can only conclude that the vast majority of Australia's large organisations will not be fully compliant when the changes take effect on March 12.

Late last year, InfoSec expert Darren Pauli and I met with CIOs, lawyers and IT security experts to hammer out a best-practice approach to ensuring your systems and processes are compliant.

We've put together what I expect to be an easily digested resource for those making a last-minute dash to compliance. But several areas raised important questions that still need answers.

I have thus held back publication of our report while I attempt to resolve some of the ambiguities in the amendments with the Office of the Australian Information Commissioner (OAIC).

For now, I thought it useful to begin blogging about the three or four areas I found most concerning and/or confusing.

The first is defining what information about a person would be considered 'Personally Identifiable Information' under the revised Act.

What is PII anyway?

Australia's amended Act describes "personal information" as information or an opinion about an identified individual, or about an individual who is reasonably identifiable from the data.

It applies to all the data you hold, including data in legacy databases collected before March 12.

The key difference in the amendment concerns not just what is in your database but what could be matched with it to identify an individual.

"There is a slight change to the wording of the definition of personally identifiable information," noted our resident legal expert, ShelstonIP partner Mark Vincent, "and what it is important to focus on is what data the Information Commissioner thinks could be associated with an identified individual by using practical and reasonable steps, being careful to consider data that would otherwise be anonymous information."

It's here that things get tricky. As one attendee of our roundtable noted, "I would be hard pressed to find anyone I know under the age of 35 for whom I couldn't type something into Google and find out where they live, their phone number, and their biggest hobby."

That doesn't, as Vincent noted, mean the data is not protected by the Act. It doesn't matter whether that information is already out in the wild, or even whether it is true.

"The emphasis remains whether the information, together with any other information the organisation may reasonably be able to access, allows the organisation to identify the individual," Vincent notes in our report.

Digital Identifiers

One of the main questions for the digital world to consider is whether an IP address or device identifier (such as a MAC address) could be considered PII.

An IP address, in many senses, is an identifier for an individual that can also be tied to both a physical address and the use of a variety of online services. A MAC address would be even more specific. A mobile phone number — one attendee noted — is an identifier that rarely changes over time. Is it PII?

"Arguably, [PII] could be a car license plate or an IP address — data which you might not think can reveal the identity of an individual," Vincent noted. "If you can combine that with another data set relatively easily, than it can be considered personally identifiable information."

One attendee at our roundtable, an Information Security Manager in the healthcare sector, said that the OAIC needs to provide clarity on whether these 'digital footprints' constitute information that must be handled sensitively under the Act.

"It's hard to draw the line on what is PII. If an IP addresses is a static address and you can identify the end user, it could be argued that it is. It would be helpful if the commissioner were more specific on this matter — he should come out and say: 'These are the items that could be used to identify an individual'."

It's also important for those developing online services to consider how our definition of PII might sit with definitions elsewhere in the world.

The services we acquire from multinational providers tend to have been developed without our definition of PII in mind. When cloud computing providers or social networks went looking for a compliance policy, they most often took a US-centric position. In Europe, the assumption is that if an identifier can be matched to an individual, the data qualifies as personal information. But other jurisdictions might argue that a number designed to identify a machine doesn't identify a human.

The closest we have to an indication of the OAIC's position on this is a September 2013 paper advising mobile app developers on how they should approach privacy issues under the existing Act.

"Personal Information could include "Internet Protocol (IP) addresses, Unique Device Identifiers (UDIDs) and other unique identifiers in specific circumstances," the paper advised.

This week we've asked the OAIC whether this is still the case under the amendments, and what specific circumstances might apply.

Consider Big Data

There are so many implications in the Act for Big Data that I'll consider them more broadly in a separate blog post. But for today's purposes, the question of what constitutes PII is an important one.

One way organisations have attempted to prevent the data they collect from being considered PII (and therefore caught by privacy regulations) is to de-identify it. This involves stripping out or masking any fields within a data set that would directly identify an individual.

Often an organisation only holds data on an individual for the purpose of including it in a larger data set to make better decisions on product direction or consumer tastes. "De-identification doesn't care who you are," Vincent remarks, "it just wants you to buy this bag of chips."
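For readers wondering what de-identification looks like in practice, the sketch below is a minimal, hypothetical Python example: it replaces fields that directly identify a person with salted one-way hashes and keeps the attributes needed for aggregate analysis. The records, field names and salt are invented, and real de-identification schemes are considerably more involved.

```python
import hashlib

# Hypothetical customer records; names and fields are invented.
records = [
    {"name": "Jane Citizen", "email": "jane@example.com",
     "postcode": "2000", "purchase": "corn chips"},
]

# Fields that identify a person outright.
DIRECT_IDENTIFIERS = {"name", "email"}


def pseudonymise(value: str, salt: str = "rotate-this-salt") -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]


def de_identify(record: dict) -> dict:
    """Mask direct identifiers, keep the attributes used for analysis."""
    return {
        field: pseudonymise(value) if field in DIRECT_IDENTIFIERS else value
        for field, value in record.items()
    }


print([de_identify(r) for r in records])
```

Note that pseudonymised records like these may still be personal information if the organisation can readily reverse or link the hashes, which is exactly the re-identification risk discussed next.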

Numerous studies have shown that the sheer volume of data available on the public web — and the compute power available to process it — means that "you have to be very careful before you assume you have an anonymised data set," Vincent notes.

Correlations can and will be made to re-identify data, if the stakes are high enough. It will be telling, for example, to see if the Office of the Information Commissioner takes any action in a case where de-identified data is leaked or stolen by a third party and re-identified by the attacker.
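As a hypothetical illustration of that risk, the Python sketch below re-identifies a "de-identified" record by joining its remaining quasi-identifiers (postcode, birth year, gender) against a separate, public listing. Everything here is invented and deliberately simplistic; real linkage attacks draw on far richer auxiliary data.

```python
# Hypothetical illustration only: all records are invented.
# The "de-identified" data set still carries quasi-identifiers that can
# be joined against an external data set to put names back on rows.
de_identified = [
    {"postcode": "2010", "birth_year": 1984, "gender": "F",
     "diagnosis": "asthma"},
]

public_register = [
    {"name": "Jane Citizen", "postcode": "2010", "birth_year": 1984,
     "gender": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")


def key(row: dict) -> tuple:
    """Combination of quasi-identifiers used to link the two data sets."""
    return tuple(row[f] for f in QUASI_IDENTIFIERS)


lookup = {key(p): p["name"] for p in public_register}

re_identified = [
    {**row, "name": lookup[key(row)]}
    for row in de_identified
    if key(row) in lookup
]

# The "anonymous" health record now carries a name.
print(re_identified)
```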

The OAIC is alert to the changing uses of data and the increasing potential to re-identify individuals from so-called anonymised data sets. The Office has stated:

"The risk of re-identification may shift as technologies develop and a greater amount of data and information is published or obtained by an organisation. Agencies and organisations should regularly re-assess the risk of re-identification and, if necessary, take further steps to minimise the risk."

I suspect the OAIC has kept the definition of PII intentionally vague so as to be technology agnostic, and thus ensure the Act's relevance well into the future.

The OAIC is due to release its final guidelines on compliance with the Act over the coming months.

A narrower definition of PII — at least for the purposes of the Office's new power to audit and fine — would be a useful addition.

In the meantime, the safest thing to do is to take the default position and treat all customer data as PII.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.