Australia's encryption laws will fall foul of differing definitions

A cryptographer's rebuttal to a GCHQ interception concept highlights how participants in the encryption-busting debate are talking past each other. What even is a "systemic weakness", anyway?
Written by Stilgherrian, Contributor
(Image: Heide Benser/Corbis)

At the core of Australia's controversial new encryption laws sits a conundrum. How can a communications provider create a way to access specific encrypted communications without also creating a prohibited "systemic weakness" that can be used to access any of these communications more widely?

The Assistance and Access Act, which became law just days ago, defines a systemic weakness as one that "affects a whole class of technology, but does not include a weakness that is selectively introduced to one or more target technologies that are connected with a particular person".

That just creates a new conundrum: What counts as a "whole class" of technology?

In his evidence [PDF] to the Parliamentary Joint Committee on Intelligence and Security (PJCIS) on 19 October, Mike Pezzullo, secretary of the Department of Home Affairs, took a high-level view.

"We're keen to ensure that there's a distinction in everyone's mind, including at the level of the legislation, that no one is requiring at the enterprise level, when you manufacture a device or set up a network, that there be a general and universal way of simply flicking a switch and all of a sudden rendering encrypted communications clear," Pezzullo told the committee.

"A systemic weakness would be something that would be universal and that therefore, subject to the technical capacity of someone wishing to attack that weakness, would be available to all attackers. That is the last thing we want."

Read also: Everyone will use encryption, Australia should get over it

Mike Burgess, director-general of the Australian Signals Directorate, had a similar view.

"A systemic weakness is one that would be available to everyone. It would be one thing to ask for assistance to get access to something, but the action undertaken to provide that in that targeted case might actually jeopardise the information of other people as a result of that action being taken. That's not what's being asked," Burgess told the committee.

Let's try the evidence of Hamish Hansford, First Assistant Secretary for national security and law enforcement policy at Home Affairs.

"It's defined within its ordinary meaning of 'relating to a system'," he told the PJCIS.

"The industry we're talking about is broad, including over-the-top providers, carriage service providers and people who are providing applications. So to try to define what a systemic weakness is for every individual company relies on an understanding of what their business structures are. A systemic weakness for Apple or Google might not be one for Microsoft."

Now let's take Cisco's definition of the equally problematic word "backdoor".

"We have defined a 'backdoor' to include any surveillance capability that is intentionally created and yet not transparently disclosed," Cisco wrote in its submission [PDF] to the PJCIS.

"To the extent that the Bill would require via a [Technical Capability Notice], the creation of a capability while simultaneously preventing the [communication providers] from documenting the existence of that capability, the law would result in the creation of backdoors."

Then there's the issue of what might count as "breaking encryption", something critics say the new law itself does. Is it possible to access end-to-end encrypted messages without weakening the encryption itself?

See: Why Australia is quickly developing a technology-based human rights problem (TechRepublic)

Britain's GCHQ certainly thinks so. Two of its technical directors recently outlined how a communications provider could silently add another endpoint to chats and calls.

"The service provider usually controls the identity system, and so really decides who's who and which devices are involved -- they're usually involved in introducing the parties to a chat or call," they wrote.

"We're not talking about weakening encryption or defeating the end-to-end nature of the service. In a solution like this, we're normally talking about suppressing a notification on a target's device, and only on the device of the target and possibly those they communicate with. That's a very different proposition to discuss and you don't even have to touch the encryption."

Matthew Green, who teaches cryptography at Johns Hopkins University, says GCHQ's approach would "absolutely" weaken encryption systems, with "unpredictable and unfortunate consequences".

"Right now most chat clients will give you an explicit warning when a new person joins your conversation. Obviously the police can't have a 'Special Agent Bryant joined your chat' message pop up. So that message will have to be suppressed," Green tweeted on Tuesday.

That would generally mean changing code on the client side. Do you distribute the modified app to every user, or only push an app update to the targets? If there's already a capability to do the latter, there isn't a problem to solve with a Technical Capability Notice. Any new capability would therefore involve distributing the weakened client app to all users.
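Green's objection can be made concrete with a minimal sketch. This is not any real messenger's code: `ChatClient` and the `suppress_join_warning` flag are hypothetical names invented for illustration. The point it shows is structural: the suppression branch must exist in every copy of the client that ships, so whoever controls the flag controls every user's warnings, not just the target's.

```python
# Hypothetical sketch of the client-side change Green describes:
# suppressing the "new participant joined" warning for a target.
# Nothing here is taken from a real messaging app.

class ChatClient:
    def __init__(self):
        self.warnings = []  # join warnings shown to this user

    def on_participant_joined(self, name, server_flags):
        # The server (or anyone who can forge its control messages)
        # decides whether this flag is set.
        if server_flags.get("suppress_join_warning"):
            return  # the new endpoint joins silently
        self.warnings.append(f"{name} joined your chat")

# Normal behaviour: the user is warned about a new endpoint.
alice = ChatClient()
alice.on_participant_joined("Bob", {})
print(alice.warnings)  # ['Bob joined your chat']

# "Targeted" interception: the same code path, flipped by a flag,
# adds a silent listener -- and that code path exists in every
# user's copy of the app, which is the global security hole.
alice.on_participant_joined("Ghost", {"suppress_join_warning": True})
print(alice.warnings)  # still ['Bob joined your chat']
```

Blocking exploitation then reduces to keeping that one flag out of attackers' hands, which is exactly the kind of single point of failure Green argues vendors should not build.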

"So in the process of creating a 'targeted vulnerability' you've introduced a global security hole across your entire userbase. No doubt you will try to block exploitation, but history tells us people are great at exploiting vulnerabilities. That's why we don't add flaws," Green wrote.

"The remarkable thing about this GCHQ proposal is how limited its shelf life is. The reason nobody hardens their key distribution systems against these attacks is because vendors saw them as impractical. By proposing the attack, GCHQ makes it legitimate to worry about."

GCHQ defends this approach as "not weakening encryption", but according to Green, that's a "very lawyerly description".

And that, gentle readers, is the true core of the matter.

Laws are written, naturally enough, by politicians and their lawyers. They want their laws to work independently of the technology. The laws describe the outcome they want from a legal and societal perspective: lawful access to the intercepted messages of suspected criminals.

But as those two GCHQ technical directors point out, the technical details matter.

"Without details, the problem is debated as a purely academic abstraction concerning security, liberty, and the role of government," they wrote.

"There is a better way that doesn't involve, on one side, various governments, and on the other side lawyers, philosophers, and vendors' PR departments continuing to shout at each other. If we can get all parties to look at some actual detail, some practices and proposals -- without asking anyone to compromise on things they fundamentally believe in -- we might get somewhere."

Right now, though, we're not doing that.

The politicians are trying to get by without understanding the basics of the technical challenges surrounding encryption, let alone discussing them. And, to be fair, the technologists usually aren't making much of an effort to understand what the politicians are trying to achieve.

We've just got people shouting at each other. That's failure right there.
