
The Feasibility of Consumer Device Security

Roger Clarke and Alana Maurushat **

Version of 11 April 2007

Prepared as a submission to the Australian Securities and Investments Commission (ASIC) in relation to its Review of the Electronic Funds Transfer Code of Conduct (2007)

Revised version published in J. of Law, Information and Science 18 (2007), which appeared in June 2009

© Xamax Consultancy Pty Ltd, 2007

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/II/ConsDevSecy.html


Abstract

Consumers have available to them a wide array of Internet-connected devices. A great many of the uses that consumers are putting these devices to involve transactions with organisations and other individuals. Many of these transactions are financially risky, particularly those that involve payment.

The Australian Electronic Funds Transfer Code of Conduct (EFT Code) provides consumer protection in relation to most electronic funds transfers. This includes payment transactions conducted on ATMs, at EFT/POS devices, through Internet banking, and using credit-card details over the Internet.

The EFT Code is currently under review. As part of that process, corporations are seeking to significantly reduce the consumer protections that the Code currently affords. In particular, corporations want to shift liability for financial loss from the corporation to the consumer where devices are insufficiently secure. The proposal uses vague terms, and is not accompanied by an adequate analysis of its practical and legal implications.

The corporations' argument is predicated on the assumption that consumers are capable of taking responsibility for the security of the devices that they use. This paper surveys the security threats, and the vulnerabilities of consumer devices. It assesses the effectiveness of available safeguards and the practicability of imposing responsibilities on consumers to understand the risks involved, to install relevant software, to configure it appropriately, and to manage it on an ongoing basis.

The nature of consumer devices is such that it is entirely infeasible to impose responsibility on consumers in the manner that corporations desire. Indeed, many eCommerce and even eBanking services only work because they exploit vulnerabilities on consumer devices. More practicable approaches are identified, to enable the increasing risk of error and fraud to be addressed.


1. Introduction

Australia once boasted one of the most enlightened consumer protection regimes in the world. At the federal level, the last decade has seen a substantial winding back of protections, with the regulation of corporate behaviour towards consumers replaced, to a considerable extent, by the largely vacuous notion of 'self-regulation'.

In the payments area, however, the Electronic Funds Transfer Code of Conduct has for many years provided crucial protections to consumers in the area of online purchasing. It was established in 1986 to address issues about ATM usage. It was expanded in the 1990s to apply to payments by means of cards at EFT/POS terminals at the physical point of sale in stores, and later to payments arranged remotely by means of card-details keyed into web-forms. Although nominally "a voluntary industry code of practice", in practice financial institutions have little option but to abide by it, and it can be most readily described as a form of 'co-regulation' rather than 'self-regulation'. The Code applies only to regulated financial institutions, however, and the increasingly rich array of alternative payment mechanisms (such as eBay's PayPal) is not subject to any regulation at all.

The current version of the EFT Code of Conduct, dated 18 March 2002, is presently under review. ASIC released a consultation paper on 12 January 2007, seeking responses by the inauspicious date of Friday 13 April 2007.

Payment transactions are increasingly being conducted on the Internet. Moreover, they are being undertaken using a wide range of consumer devices. Consumers presume that the present protections translate into the new contexts. As the Code currently stands, its scope is defined by the expression 'electronic equipment', which would appear to make it automatically extensible to new forms of consumer device. The ASIC discussion paper does not suggest any reduction in this aspect of the Code's scope.

Corporations are, however, seeking a reduction in the consumer protections in the Code. They want to make consumers liable for losses caused by consumer devices infected with malware. An example of how such losses can arise is where 'malicious software' running in a PC that a consumer uses for a financial transaction captures the user's password and/or PIN and thereby enables an identity fraud to be performed. Their viewpoint has found form in Q28 of the ASIC discussion paper:

"Should account holders be exposed to any additional liability under cl 5 for unauthorised transaction losses resulting from malicious software attacks on their electronic equipment if their equipment does not meet minimum security requirements?".

The document is highly unclear as to what is meant by "does not meet minimum security requirements". In addition to "minimum" security requirements, the document variously refers to "adequate" security and to "reasonable" safeguards to secure the device. Moreover, there is little or no discussion of how such terms would be operationalised, or of their concrete implications for liability.

This paper examines the scope for consumers to ensure the security of transactions they conduct using consumer devices, particularly those involving payments. It is addressed to executives and policy-makers who have some understanding of the infrastructure and technologies involved, rather than to technical specialists. Technical language is used only to the extent necessary to achieve sufficient accuracy, and a brief explanation is provided for each technical term the first time it is used.

The paper commences by identifying some of the key characteristics of consumer devices, and describing the approach adopted to the analysis. It then catalogues the threats and vulnerabilities that afflict consumer devices, and the safeguards that can be put into place. The effectiveness and practicality of both the technical and legal safeguards are examined, and policy issues are identified. Alternative approaches are canvassed. The conclusion is reached that it would be inappropriate and counter-productive to impose liability for malfunctions on consumers, but that there would be considerable value in education and software being made available to consumers, and even greater value in imposing responsibility for security on the suppliers of consumer devices.


2. Consumer Devices

The term 'consumer' is used broadly in this paper. The first category of person it is intended to apply to is individuals operating in a private capacity, whether for social, economic or other purposes. It therefore extends not only to people at home and at play, but also to operators of unincorporated small businesses, and to employees who use their own device in work-contexts. The second category is individuals who use a device that is provided by the employer but who take personal responsibility for their actions using it. It is in general not intended to encompass use by employees of or agents for incorporated medium and large business enterprises or government agencies, who use an employer-managed device.

In order to convey the breadth of applicability, the term 'authorised user' is applied to the individual who owns the device or to whom it is assigned, and who takes responsibility for transactions performed using it.

The term 'consumer device' is also used broadly, to encompass all devices used by consumers that contain a processor, operating system and applications, which together provide users with the capacity to participate in transactions with adjacent and remote devices. In 2007, this includes personal computers, both desktops and portables, and a wide variety of 'handhelds'. Small consumer devices are numerous and diverse, and include mobile phones, personal digital assistants (PDAs) of various kinds, games machines, music-players like the iPod, and 'converged' / multi-function devices such as the recently-announced Apple iPhone. It is also feasible for processing capabilities to be housed in many other, much smaller packages, such as credit-cards, rings, watches, and RFID tags.

Most consumer devices are currently conceived as 'single-user' devices. By this is meant that only one user at a time can cause the device to perform functions. Multi-user devices, such as those that support web-servers and mail-servers, make functions available to more than one remote device at a time. Such multi-user devices are subject to vulnerabilities additional to those identified in this paper. This document gives no further consideration to multi-user devices, but instead focusses on single-user consumer devices.

Consumer devices may be used entirely standalone, without any form of communication with any other device. Because standalone use does not support the conduct of transactions with other devices and the individuals and organisations that control them, it is not further considered in this paper.

Consumer devices may be enhanced through the installation of additional components, software and/or data. Examples of such enhancements include multiple screens, audio-speakers, extra storage (variously magnetic, optical and solid-state), PCMCIA or PC cards and ExpressCards, and other attachments inserted into USB and Firewire sockets.

Crucially for the analysis conducted in this paper, consumer devices may also interact with other devices in several ways:

In the layers above the communications infrastructure, the dominant facility is currently the Internet, and particularly the Web overlaid over it. The upper layers are in a state of flux, however. For example, 3G mobile telephony might be maturing into an alternative high-level application infrastructure. Any such emergent services are also within-scope of this analysis.

A consumer device that is connected to a telecommunications infrastructure can perform functions, ultimately in hardware but primarily driven by software. The elements that make up those functions include:

Software may be caused to perform these operations by the action of a human being, typically using a physical or simulated keyboard and/or mouse, but perhaps through voice-activation. There are several other ways, however, including triggering by internal conditions within the device (such as the date or time), and initiation from a remote location by means of messages transmitted over telecommunications infrastructure.

The consumer devices described in this section are subject to many threats and vulnerabilities. The following section describes the research method adopted in this paper in order to evaluate them.


3. Research Method

The conventional computer security model is adopted in this paper. Under this model, threatening events impinge on vulnerabilities to cause harm. Safeguards are used to prevent or ameliorate that harm. More fully:

This paper is concerned with the use of consumer devices to perform transactions that have financial consequences. Relevant categories of harm include the following:

The body of this paper considers each of a variety of contexts in which consumer devices are subject to threats and vulnerabilities. The first cluster is associated with the physical contexts in which consumer devices are used. The second group is concerned with the operation of the devices themselves, and the third with their use in conjunction with communications facilities. The final group comprises intrusions by attackers, including various forms of malware, and computer hacking.

In each case, consideration is given to vulnerabilities, and to the safeguards that are available. The focus throughout is on the effectiveness of the safeguards, and their practicability for consumers.


4. Vulnerabilities and Safeguards

The purpose of this section is to identify threats and vulnerabilities, and the safeguards that may provide protection against them. The focus is limited to the conduct of transactions that the authorised user did not intend, or the conduct of transactions in a manner materially different from that which the authorised user intended.

The structure adopted reflects the widely varying sources of the threats and vulnerabilities. Some result from the physical context in which the consumer device is used, some from the nature of the device itself, and others from the communications between the consumer device and other devices. A separate section considers active intrusions into the consumer device by attackers.


4.1 The Physical Environment

Problems are considered firstly in terms of the physical surroundings and secondly in terms of the organisational context of their use. The final sub-section addresses social engineering factors, in particular the impact of techniques designed to cajole consumers into divulging information.

(1) The Physical Surroundings

The locations in which devices can be used were once constrained by size, power requirements and network connection requirements. With the majority of consumer devices, those constraints have been overcome, and the physical surroundings are now enormously varied, and include the home, the workplace, other people's homes, and 'public places' of many different kinds.

There are various ways in which consumer devices are capable of being used by some person other than the authorised user. While the authorised user is operating the device, and while it is not in use but securely in that person's possession, it is difficult for other people to gain access to the controls. At other times, depending on the size of the device and the care taken by its owner, there are various circumstances under which access to it by other individuals may be feasible. Common examples of problems of this kind include fellow householders in the owner's place of abode, friends in social environments, and colleagues in work-environments - and perhaps cleaners, security staff, repairmen and supervisors if the device is left at work.

If the device's capabilities are abused, the authorised user may or may not be aware of it. Furthermore, there may or may not be a way in which the user, or someone acting on the user's behalf, may be able to discover that someone else has used it, and, if so, what for.

It is possible to impose physical security measures on the surroundings in which devices are left. Examples include auto-locking doors, unique door-keys, and security cabinets with unique keys. But few consumer devices are subject to such safeguards, because they are expensive, inconvenient at best, and in many circumstances impracticable.

It is also possible to impose physical security measures on the devices themselves. Examples include locks and 'dongles' (tokens that must be inserted into ports on the device, in the absence of which the device is disabled). But such safeguards are not mainstream in the marketplace, and are expensive, inconvenient and impracticable for consumers. As a result, few consumer devices are subject to them.

A further set of safeguards is commonly referred to as 'logical' safeguards, to distinguish them from the physical ones. These include:

Such measures vary greatly in their effectiveness, but many significantly reduce the scope for use of the device by unintended people. They do not reliably prevent it, however, because all such logical security safeguards are subject to countermeasures. For example:

Generally, consumer devices are subject to little in the way of logical security measures. Most consumers are only vaguely aware of the threats, and do not appreciate the harm that could arise if colleagues or visitors use their devices; few are aware of the available safeguards. The safeguards are in any case generally inconvenient at best and entirely impracticable at worst, and they may be expensive both to install and to maintain.

(2) The Organisational Context

The strongest forms of protection may be available where the consumer is employed by a corporation that employs or contracts specialists with the capability to support users. Employers may see it as being to their advantage to assist their employees to protect themselves, because they are very likely to conduct company business on the same device. Some consumers will undertake relevant training with their employer, or at least become aware of threats to and vulnerabilities of similar devices used within the employment context and safeguards that their employer applies.

Support of a similar kind may be available from computer clubs, and may be able to be acquired from suppliers of consumer devices and related services, perhaps as an extension to the basic package. Locations that make consumer devices available for free or for fee (such as libraries, Internet cafés, and coffee-shops that operate Wifi 'hot-spots') may provide protections. On the other hand, they may lack protections that the consumer might assume to be in place; and they may even be set up by the operator or a third party to create opportunities for mis-deeds.

The organisational contexts within which consumers work and play are likely to give rise to some degree of awareness and some mitigation of threat. It would be unrealistic, however, to expect anything other than that a substantial proportion of consumer devices will remain largely unprotected.

(3) Social Engineering

Consumers are subject to observation when they use their devices, and this creates vulnerabilities. The most apparent is the use of simple passwords and PINs that are easily inferred from the user's movements. Another is the unintended disclosure that use of the device does not require authentication, or that unlocking of the device is performed by a simple, observable procedure. There appear to be few 'defensive driving' courses available for consumers, so it is to be expected that most users will be vulnerable in this way.

A further threat is conventionally described using the term 'social engineering'. This refers to techniques whereby people can be manipulated into performing desired actions or divulging confidential information. Common examples include gaining the confidence of a person over the counter or over the telephone, and inveigling them into disclosing personal data about a third party.

A primary example of social engineering applied to consumer fraud is the acquisition of the authenticators that the consumer uses when authorising payments. A further example is where users are convinced to change their security settings in order to install or execute a program, undermining safeguards and enabling software to run for malevolent purposes.

An all-too-common application of the general notion of social engineering is 'phishing'. This technique involves sending a message, commonly an email-message, to the user, that causes them to provide their authenticators to the fraudster, or to use them in such a manner that the fraudster can acquire them. A common approach is to provide a URL, and ask the consumer to visit the site and go through the authentication process. The site is usually masquerading as a real financial institution, typically reproducing the institution's look and feel, but capturing data that should normally only be provided to the institution itself. Various reasons are given to encourage the consumer to divulge the data, such as the need to re-set the consumer's password as a result of the old one being compromised. The technique appears to have yielded significant returns to fraudsters, and many variants and refinements exist.
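By way of illustration, the mismatch that phishing exploits, between where a link appears to point and where it actually points, is trivially checkable in software. The following sketch (in Python, with hypothetical domain names) accepts a link only if its host is exactly one of the bank's own domains; a lookalike host that merely contains the bank's name fails the test, even though it may pass casual visual inspection.

```python
from urllib.parse import urlparse

# Hypothetical allow-list: the domains the consumer's bank actually operates.
TRUSTED_BANK_HOSTS = {"examplebank.com.au", "www.examplebank.com.au"}

def link_is_trusted(url: str) -> bool:
    """Accept a link only if its host is exactly one of the bank's own domains."""
    host = (urlparse(url).hostname or "").lower()
    return host in TRUSTED_BANK_HOSTS

print(link_is_trusted("https://www.examplebank.com.au/login"))              # True
print(link_is_trusted("https://examplebank.com.au.secure-login.example/"))  # False: lookalike host
```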

In ASIC's discussion paper, Q29 and Q30 consider whether "extreme carelessness in responding to a deceptive phishing attack" should be grounds for imposing increased liability levels on consumers. The questions contemplate the imposition of high levels of consumer liability even in the absence of "extreme carelessness".

A strong contrary argument exists. Financial institutions have actively promoted telephone and Internet banking. They have done so despite the threats and vulnerabilities that exist. And they have provided their customers with user agreements expressed in legalistic terms, rather than training in safe use of the facilities.

Even after the problems became apparent, public education programs have been inadequate to reduce the incidence with which the seemingly simple stratagem of phishing works. Moreover, few financial institutions have implemented 'two-factor' authentication techniques (such as a one-time password communicated to the consumer over a separate channel). The inadequate efforts of financial institutions and governments in addressing the phishing epidemic have led to widespread public dissatisfaction, and threaten public confidence in eBanking.
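For comparison, a one-time password of the kind generated by a hardware token can be sketched in a few lines: a short-lived code is derived from a shared secret and a counter (here derived from the current time), in the style of HOTP (RFC 4226). The secret below is purely illustrative, and delivery of a code over a separate channel such as SMS is an alternative construction.

```python
import hmac, hashlib, struct, time

def one_time_password(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Derive a short-lived numeric code from a shared secret and a time-based counter."""
    counter = int(time.time()) // interval            # changes every `interval` seconds
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Illustrative shared secret; in practice it is provisioned to a token or phone.
print(one_time_password(b"illustrative-shared-secret"))
```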


4.2 The Physical Device

Many vulnerabilities arise from, or in relation to, the consumer device itself. Firstly the hardware and systems software are considered, and then the applications that run over them. Separate consideration is given to the functions that the device performs, the installation of new software, and the means whereby software is activated.

(1) Hardware and Systems Software

The intention of the designers of consumer devices is to create a highly functional device that is attractive to the target market, but inexpensive. Many of the physical components used in their manufacture are cheap commodities. Additional weaknesses derive from the generic nature of the architecture within which the components are placed, and the lack of a comprehensive security strategy. As a result, consumer devices are not intrinsically secure, and they omit features that would be needed to enable them to be converted into secure devices.

At the level of the operating system, security has been a theoretical topic for decades, but there has been little practical outcome. Moreover, consumer devices depend variously on commodity operating systems and on cut-down versions of operating systems that were originally designed for more powerful desktop and laptop machines.

Security concerns exist in relation to the Linux operating system and to the Macintosh operating system (which is also Unix-based). The various Microsoft operating systems that are used on the large majority of consumer devices, on the other hand, have always been inherently insecure, and the company's recent commitment to reduce the insecurity of its systems appears to be only slowly bearing fruit. The origins of these problems include a low level of quality in design and coding, and inadequacies in quality assurance. A variety of vulnerabilities result, such as 'buffer overflows'. These create many opportunities for 'hackers', discussed below, which are widely exploited.

There is only a limited amount that a consumer can do about the many vulnerabilities arising from this cluster of quality inadequacies, and the many attacks that exploit those vulnerabilities. Operating system providers generally issue upgraded versions and 'patches'. Their release is often forced, because the vulnerability has become known and reports have been published by organisations such as CERTs (including AusCERT) and commercial security firms.

However, in order to overcome each set of vulnerabilities, consumers are generally forced to accept everything else that comes with the bundle. This may include undesirable features such as 'bloat' (i.e. significantly increased memory requirements, with implications for the device's speed of operation) and 'spyware' (discussed later). Moreover, the supplier may seek to impose on the `locked-in' customer licence terms that are yet more onerous than those originally applied.

Software that was poorly-designed in the first place is inherently complex, difficult to understand, and very challenging to reliably amend, especially in a hurry. So patches rushed out to address a newly-publicised vulnerability often also contain new vulnerabilities.

(2) Applications

The applications that are run on consumer devices are in many cases insecure, and in some cases extremely insecure. As with systems software, many applications exhibit low-quality design, coding and quality-assurance measures. Insecure programming languages have also contributed greatly to the problem. Many applications, when they crash, create vulnerabilities that 'hackers' can utilise.

Email-clients and particularly Instant Messaging (IM) clients are of concern, but Web-browsers are an especially easy target for attackers. Most versions of the most commonly-used web-browser, Microsoft Internet Explorer (MSIE), have been highly insecure, not least because of the default settings of a variety of parameters. The most recent versions of MSIE appear to have been improved in a number of ways, but those improvements are swamped by other factors discussed below.

All browsers, by intent of their designers, deal relatively openly with remote devices that comply with the HTTP protocol. A first vulnerability is the so-called 'cookie' feature, whereby remote devices can instruct browsers to store data, and to send that data with subsequent requests to web-servers. Most uses of cookies breach the IETF Best Practices Guide to the use of Cookies (RFC 2964), and many of those uses unintentionally, and in some cases intentionally, create vulnerabilities.
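As a minimal sketch of the mechanism (using Python's standard http.cookies module, with illustrative values), a web-server sets a cookie via a Set-Cookie header, and the browser thereafter sends that value back with every request to the site, whether or not the user is aware of it:

```python
from http.cookies import SimpleCookie

# The remote server instructs the browser to store data (a Set-Cookie header)...
set_cookie_header = "session_id=abc123; Domain=.example.com; Path=/"
jar = SimpleCookie()
jar.load(set_cookie_header)

# ...and the browser then returns that data with each subsequent request to the site.
cookie_header_sent_back = "; ".join(f"{name}={morsel.value}" for name, morsel in jar.items())
print(cookie_header_sent_back)   # session_id=abc123
```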

Most browsers permit the download of additional software modules variously called 'helper applications' and 'plug-ins'. Most also support a particular programming language commonly called JavaScript (but more generically and correctly referred to as ECMAScript). Many of the HTML files delivered from web-servers to web-browsers may contain code expressed in that language. The language is claimed to be reasonably limited in its functionality, but rumours of vulnerabilities emerge from time to time. An unrelated programming language called Java is also available, which is much more powerful than JavaScript/ECMAScript. It is restricted to a 'sandbox' and hence the extent to which it can be used to develop attacks on the consumer device is limited.

The design of MSIE, however, directly results in consumer devices being insecure. It supports software components usually referred to as 'ActiveX controls', which are not limited to a 'sandbox' (as Java applications are), but have essentially unfettered access to the complete consumer device. Hence the operating environment is utterly permissive of software that is delivered to it. Non-MSIE browsers may share MSIE's designed-in insecurity, to the extent that they support the same capabilities. Although ActiveX controls require user affirmation, comprehension by consumers of how their device may be affected is generally very limited.

A recent development is an extension to the Web protocol called XMLHttpRequest. This was originally devised by Microsoft but has since been widely adopted. It extends the capabilities available to programmers, and reduces the extent to which the user does, or even can, understand what their device is doing. A family of development techniques referred to as AJAX takes advantage of this extended, more powerful Web protocol.

The AJAX approach enables closer control by the programmer of the user's visual experience, because small parts of the display can be changed, without the jolt of an intervening blank window. This is achieved by constructing an 'Ajax engine' within the browser, to intercept traffic to and from the web-server. Control of the browser-window by code delivered by an application running on the server represents subversion of the concept of the Web and hijack of the functions of the browser. The power it offers provides programmers with the capacity to manipulate consumer devices. It is a boon for attackers.

Some safeguards are available to consumers. Cookies may be blocked, or managed, although the tools for doing so are highly varied in their approach, difficult to understand, and in many cases inadequate. JavaScript may be switched off; but a great many web-sites will not function if it is, most fail to detect and report to the consumer that they will not function correctly, and very few provide alternative ways of delivering the functionality. Consumers therefore face significant disincentives against turning JavaScript off, because many services become unusable. Hence it is unlikely that many would turn it off even if they appreciated that a number of vulnerabilities (not to mention many consumer-annoyance features) can be avoided by doing so.

Java can also be turned off, but similarly some sites will not function, and in many cases they fail to detect and report to the consumer that that is the case. Because Java is limited to a sandbox, it does not appear that leaving it turned on directly creates many security vulnerabilities. On the other hand, it is a complex programming language that is too challenging for a great many programmers, and bugs and browser-crashes are common, which may result in vulnerabilities in some circumstances.

ActiveX also may be switched off (although on at least some versions of MSIE it appears that this requires five options to be disabled). It may also be enabled for `trusted sites' and disabled for all others; but the option is nested five levels down a complex menu-tree, and it is unlikely that many consumers even find the function, let alone understand it. As with JavaScript and Java, if settings are adjusted to prevent ActiveX controls from running, many sites will not function, or will not function as the designer intended. Most consumers are oblivious to the existence of these facilities, let alone the dangers they embody, and the opportunity to avoid those dangers by sacrificing some of the experiences that the Web offers them.

It appears that even highly technically literate consumers may be either unable to preclude AJAX techniques from intruding into their devices, or unable to do so without abandoning access to a wide range of services. In particular, there appears to be no convenient, consumer-understandable way in which AJAX techniques can be permitted under specific circumstances only (such as from a known and trusted supplier like their bank) without the device being open to all comers.

The alternative of using ancient browser-versions, or intentionally cut-down browsers that do not support key features on which AJAX depends, incurs considerable disadvantages. Most web-site developers design applications only to run on very recent browser-versions (or, in remarkably many instances, only on the most recent versions of MSIE). Old and cut-down versions of browsers therefore quickly become unusable on many sites; and hence there is a built-in and powerful disincentive working against consumers using less vulnerable browsers.

Consumers might reasonably expect that computer crimes legislation would make such abuses of their devices unlawful. As discussed in section 5.2 below, however, that expectation is not fulfilled.

In short:

Expressed differently, many eCommerce and even eBanking services only work because they exploit vulnerabilities on consumer devices.

(3) The Functions Performed by the Device

Each consumer has an understanding of what their devices do. That understanding is based on representations made by the providers of the device and software running on it, information provided by other consumers and the media, their own experience of their devices' behaviour, and sometimes even documentation if it is made available by suppliers and read by the consumer.

Representations, reputation and experience are not comprehensive, and consumer devices perform many functions that authorised users are not aware of. Hence consumers have at best only a very partial understanding of the functions performed by their devices.

Moreover, some functions are designed by the providers of the software to be hidden, and to be difficult to discover. Common instances of this include:

Because software performs such additional functions, consumer devices may participate in transactions that the authorised user did not intend, or that are different from what they intended. The installed software may perform functions autonomously, or it may be triggered by some external stimulus. Moreover, this may even occur without the user knowing that it is happening, or that it has happened. The software may be designed to be surreptitious, by minimising the extent to which the fact that a transaction has occurred can be detected through examination of data-logs. Despite these serious vulnerabilities, it would appear that ASIC, on behalf of corporations, is contemplating imposing "additional liability under cl 5 for unauthorised transaction losses resulting from malicious software attacks on their electronic equipment if their equipment does not meet minimum security requirements" (Q28).

In order to be protected against such eventualities, consumers would firstly need to invest effort to understand the complete set of documented functions of every item of software running on their devices. Secondly, they would need assurance that the software contained no undocumented functions. Very few consumers are capable of performing an audit of executable code. Indeed, such an audit is extremely complex and challenging, and any such service from an independent third party would be expensive, and the level of assurance provided (as indicated by the limited warranty that would be offered) would not be high.

A more reliable form of assurance would be certification by an independent third party based on inspection of and experimentation with the source code rather than the executable code. It is likely that most software providers would be unwilling to submit their code for inspection in such a manner, and it appears that few such inspections are performed, even for the business market, let alone for consumer products.

The consumer could seek certification from each software supplier about the functions that the software performs, and the security features it embodies. Further, the consumer could seek warranties and indemnities from each software supplier. In practice, however, consumers lack the market power to make suppliers do such things. In any case, unless carefully designed, such mechanisms would be cumbersome, and would work against widespread adoption of eCommerce. Instead, consumers are generally forced to accept software without certification, and without significant warranties and indemnities, and indeed with onerous obligations unilaterally imposed on them and expressed in complex and aggressive terms.

Worse, trade practices regimes impose very limited responsibilities on software suppliers, and provide very weak protections to consumers. As a result, it is not clear that any software suppliers at all provide certification, nor any material warranties or indemnities about the functions their software performs.

A form of ex post facto control could be implemented, by logging the traffic generated by the device, and comparing it with a model of the traffic that was expected given the consumer's actions. Little software appears to be available that provides such controls. Developing, installing, configuring and operating such tools would be difficult enough for experienced professionals, and the challenges involved far exceed the capabilities of the vast majority of consumers. Furthermore, such forms of testing might even be illegal, by virtue of constraints on reverse engineering embodied in extensions of copyright law enacted in recent years for the benefit of copyright-owners.
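A minimal sketch of such an ex post facto control follows, assuming a hypothetical log of outbound connections (one 'host port' entry per line) and a consumer-maintained list of expected destinations; anything not on the list is flagged for investigation. Real traffic is far noisier than this, which is precisely why the approach is beyond most consumers.

```python
# Hypothetical log of outbound connections: one "host port" entry per line.
LOG_LINES = [
    "www.examplebank.com.au 443",
    "mail.example.net 993",
    "203.0.113.45 6667",   # unexpected destination (an IRC port often used by bots)
]

# Destinations that the consumer's own actions would be expected to generate.
EXPECTED = {("www.examplebank.com.au", 443), ("mail.example.net", 993)}

def unexpected_connections(lines):
    """Return (host, port) pairs in the log that match no expected destination."""
    flagged = []
    for line in lines:
        host, port = line.rsplit(" ", 1)
        if (host, int(port)) not in EXPECTED:
            flagged.append((host, int(port)))
    return flagged

print(unexpected_connections(LOG_LINES))   # [('203.0.113.45', 6667)]
```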

A mechanism exists that is commonly referred to as 'code signing'. This enables a software provider to digitally sign software that it distributes. If a consumer can acquire the relevant public signing key through some reliable channel, and confirm that the digital signature is valid, then there are two useful inferences: firstly that the software was signed by the organisation that claims to have signed it; and secondly that it arrived in exactly the same form as it was despatched. The code-signing approach therefore addresses two relatively minor risks in relation to software distribution (that it was created by someone other than the organisation that it is meant to have come from, and that it may have been changed in transit).

But code signing does nothing to address the vital question as to whether the software performs any functions that the consumer did not expect, and in any case the warranties offered by certificate authorities are so small as to be essentially valueless.
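The following sketch indicates what verification of a signed distribution involves, using the third-party Python 'cryptography' package; the file names, and the assumption that the supplier's public key has been obtained through some independently trusted channel, are illustrative. Even a valid signature says nothing about what the software will actually do.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

# Illustrative inputs: the distributed software, the supplier's detached signature,
# and the supplier's public key obtained via some independently trusted channel.
software = open("installer.bin", "rb").read()
signature = open("installer.bin.sig", "rb").read()
public_key = serialization.load_pem_public_key(open("supplier_pubkey.pem", "rb").read())

try:
    public_key.verify(signature, software, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: the file arrived as the supplier despatched it.")
    print("That says nothing about whether its functions are those the consumer expects.")
except InvalidSignature:
    print("Signature invalid: the file was altered in transit or came from another party.")
```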

(4) Software Installation

The preceding sub-sections have considered the functions performed by systems software and applications that are already on consumer devices. Threats and vulnerabilities that arise at the time software is installed also need to be considered.

Reference was made earlier in this paper to the social engineering mechanism of inveigling users into de-activating safeguards in order to enable the attacker to go about their business more easily. Vulnerabilities of this kind can arise even without an attacker involved. It is quite commonly necessary for safeguards to be circumvented in order that desired software can be successfully installed. The vulnerability may become long-term if the security settings are not adjusted back to their normal level immediately after the installation is conducted. And if the settings apply to all processes running in the device rather than only to the approved installation process, a vulnerability exists at the very least for the duration of the installation activity.

When confronted with a security alert warning about such vulnerabilities, the user's understanding is commonly limited, and their options are commonly restricted to `permit' or `deny'. Where a `learn more' option is available, it often delivers statements about unknown certificate authorities or security parameters, which are expressed in a manner that is at least daunting, and often simply incomprehensible. Hence the extent to which consent is `informed' is in considerable doubt.

A rational risk assessment process would lead a consumer to distinguish between different categories of activity, in particular:

Unfortunately, it is likely that only a small percentage of consumers would be even vaguely aware of these categories, and an even smaller percentage could distinguish the different risk profiles that each presents.

In section 4.2(2) above, reference was made to a variety of circumstances in which consumers are not even made aware that software has been loaded onto their machine. In the case of ActiveX controls, the lack of a sandbox suggests that software that is ostensibly delivered for a single specific transaction may be able to be permanently installed, and in such a manner that it is generally available rather than limited to a specific context.

(5) Software Activation

In order for a software function to be performed, the relevant software has to be invoked, executed or activated. There are several ways in which this can come about, including:

Versions of Microsoft browsers (MSIE) and email-clients (Outlook) were for many years distributed in insecure form, such that they permitted any form of file to be invoked on arrival in the client-device. They were intrinsically permissive and hence dangerous. The most recent versions have been distributed with less permissive defaults. However:

Consumers who are educated about the risks involved in using their devices, who are well-informed about the specific features of their web-browser and email-client, and who take care to initialise and maintain all of their software parameters at a safe setting, can generally preclude emailed files from executing on their devices. The proportion of consumers who satisfy those conditions, and who sustain their vigilance at all times, is, however, not likely to be high.

Some limited protections are possible in relation to cookies, Javascript, Java and ActiveX, but only if the consumer is:


4.3 Communications

A range of vulnerabilities arises from the fact that consumer devices communicate with other devices, and from the manner in which they do so. This section considers firstly the partners with whom communications are exchanged, and secondly the flows of messages between them.

(1) Transaction Partners

When conducting a transaction, a consumer needs to have confidence that their device is interacting with the (or an) appropriate device operating on behalf of the (or an) appropriate person or organisation. The term 'identity authentication' is commonly used to refer to the checking performed to provide that confidence.

A mechanism is available that provides a form of identity authentication on the Internet. The Secure Sockets Layer (SSL) mechanism, now standardised as Transport Layer Security (TLS), enables any party to digitally sign a message and invite other parties to check the digital signature. Unfortunately the scheme falls far short of providing real confidence in the identity of the other party. The reasons include the low quality of the certificates on which the mechanism depends (as evidenced by the almost complete absence of any meaningful warranties and indemnities), and the low quality of ongoing maintenance of certificate schemes, with many outdated certificates, slow updating of directories, and few implementations of online certificate checking.
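The server-authentication step that SSL/TLS performs can be sketched using Python's standard ssl module; the host name below is illustrative. The check establishes only that the certificate chains to a recognised authority and names the host being contacted, which, for the reasons just given, falls well short of establishing who is really behind the site.

```python
import socket, ssl

HOST = "www.examplebank.com.au"   # illustrative host name

# The default context requires a certificate that chains to a trusted authority
# and that was issued for the host actually being contacted.
context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as raw_socket:
    with context.wrap_socket(raw_socket, server_hostname=HOST) as tls_socket:
        certificate = tls_socket.getpeercert()
        print("Subject:", certificate["subject"])
        print("Issuer: ", certificate["issuer"])
```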

The process of checking the identity of organisations and individuals on the Internet is technically challenging and complex. Consumers have little understanding of it (and indeed the significance of the process eludes many postgraduate students). And it is of very low quality in any case. Most consumers put their faith in prior transaction partners, the honesty of other parties or consumer protection laws; or they simply take the risk and hope for the best. This is not an environment that encourages greater use of electronic networks for the conduct of business.

In principle, SSL/TLS enables the server to authenticate the identity of the client (i.e. of the device, or the web-browser, or conceivably the web-browser user). This could be a general scheme, or a specific scheme, in particular one implemented by financial institutions for their customers. In practice, however, this potential is very little used, and such schemes as exist have attracted limited participation. The user's private signing key (which must be stored on and used by the device) is at risk of capture by malware or 'hacking', and consumer devices are especially vulnerable.

(2) Data Transmission

The data transmitted during the course of a transaction is vulnerable when it travels over any form of communications link. In particular, it may be lost, may accidentally change or be intentionally changed while in transit, or may be intercepted. If it is intercepted, it may be used as part of an act designed to defraud the consumer or some other party.

Data can be protected in transit by encryption. There needs to be some means of ensuring that the intended recipient, and only the intended recipient, has the means of decrypting the message. A variety of tools exist, implementing a variety of encryption schemes. The most readily available is SSL/TLS, mentioned in the previous section, which is conveniently supported by all mainstream web-browsers.

Encryption of email sent from mainstream email-clients is, on the other hand, very poorly supported. Email-clients that work within web-browsers (i.e. webmail) can readily take advantage of SSL/TLS. In practice, however, only a small proportion of consumers appreciate the importance of doing so. Worse still, some Internet Service Providers fail to switch users to protected mode (the https rather than the http protocol). As a result, not only is most email content unprotected from eavesdroppers while in transit, but so too are some email-account passwords, which users transmit when they login to view their incoming messages or to compose and send their own outgoing messages.
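The difference is visible even at the level of the client library call used to collect mail; in the sketch below (server name and credentials are placeholders), the first, commented-out form would send the password in clear text, while the second wraps the same exchange in SSL/TLS.

```python
import imaplib

# Unprotected: the password would travel in clear text over port 143.
# client = imaplib.IMAP4("mail.example.net")

# Protected: the same exchange wrapped in SSL/TLS over port 993.
client = imaplib.IMAP4_SSL("mail.example.net")     # placeholder server name
client.login("consumer@example.net", "password")   # placeholder credentials
client.select("INBOX")
client.logout()
```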

A further concern is undesired traffic between the consumer device and other devices. Any Internet-connected device has to have particular 'ports' open, on which it will accept messages. These ports are readily discoverable, and can be used by an attacker to probe for security vulnerabilities.

Some degree of protection against these threats can be achieved by utilising a 'firewall'. For a consumer device, this is software that blocks all traffic except those messages that satisfy particular rules. Although this represents a safeguard against some attacks, it necessarily leaves a great deal open as well.
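In essence, a personal firewall applies an ordered list of rules to each attempted connection and blocks whatever no rule permits. The sketch below (with illustrative rules) conveys the idea; it also shows why a firewall 'leaves a great deal open': anything the rules allow, including traffic generated by malware over an allowed port, passes through.

```python
# Each rule: (direction, remote_port, action). The first matching rule wins.
RULES = [
    ("out", 443, "allow"),    # web browsing over HTTPS
    ("out", 993, "allow"),    # mail collection over IMAP/SSL
    ("in",  None, "block"),   # unsolicited inbound connections of any kind
]

def decide(direction: str, remote_port: int) -> str:
    """Return 'allow' or 'block' for an attempted connection; default is to block."""
    for rule_direction, rule_port, action in RULES:
        if rule_direction == direction and rule_port in (None, remote_port):
            return action
    return "block"

print(decide("out", 443))    # allow
print(decide("in", 1433))    # block
print(decide("out", 6667))   # block: not covered by any rule
```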


4.4 Intrusions

The previous sections have considered a wide variety of vulnerabilities that are intrinsic to consumer devices and their use. This section focusses on means whereby attacks can be mounted against consumer devices. The first two groups focus on the various forms of 'malware', initially considering the means whereby malware can be infiltrated into consumers' devices, and then the kinds of things that malware can do. The third section is concerned with means whereby other parties can gain remote access to, and operate on, consumer devices as though they had direct physical access to them.

(1) Malware Vectors

The expression 'malware' is a useful generic term for a considerable family of software and techniques implemented by means of software, which result in some deleterious and (for the user of the device) unexpected outcome. ASIC's Q28 uses the expression "malicious software attacks", presumably in the same manner in which information technologists use the term 'malware'. A useful general term for the code that performs the harmful function is its 'payload'. That is discussed in the following sub-section.

Malware comes to be on a consumer device by means of a 'vector'. One example of a vector is portable storage such as a diskette, CD, DVD or solid-state electronic 'drive'. Loading files from such media may deliver malware onto the device. So too may file-download from another device on a local area network. Since the mid-1990s, Internet connections have become the most common source of vectors for malware migration onto consumer devices. Connections to other wide area networks, such as those for 3G mobile phones, are likely to become a further source.

A longstanding and very common form of malware is a 'virus'. This is a block of code that inserts copies of itself into other programs. It arrives on a device when an infected copy of a program is loaded onto it from an external source. If and when the infected program is invoked, in addition to the other functions it performs, the infected program seeks out other executable files and inserts copies of itself in them. In addition to the code that performs the replication function, a virus generally carries a payload, which may be intended to be constructive, to have nuisance value, or to have serious consequences for the device's owner or some other party. To avoid early detection, viruses generally delay the performance of functions other than replication.

Another well-known category of malware is a 'worm'. A worm is a program that propagates copies of itself over networks. It does not infect other programs. Worms propagate by exploiting the many security vulnerabilities on consumer devices that were referred to in section 4.2 above.

There are many other vectors, including email attachments, web-pages, and files downloaded from instant messaging (IM) services and peer-to-peer (P2P) services. The file that is downloaded does not need to itself be an `executable' (i.e. a program). It may be a data-file in which a segment of executable code is embedded, such as `macros' within text documents, spreadsheets and slide presentations. Recent versions of Microsoft's Office suite provide an even more powerful facility for embedding code in data-files, Visual Basic for Applications (VBA). Files prepared using the Microsoft Office suite consequently represent a major vector for malware.

Consumers can protect themselves against malware vectors in several ways. One would be to never download software onto the machine. But this is impractical in the extreme. One reason is that it is very difficult to reliably distinguish files containing executable code from data-only files. Another is that software suppliers actively attract people to install new software and new versions of software, and impose active disincentives against continuing to run old versions for extended periods (through removal of support, non-functioning after some 'drop dead' date, and 'planned obsolescence' such as non-support for old data formats). In any case, this approach does not address the problem of malware present on the device when it is originally acquired.

Another difficulty is that controls over the active 'pulling' of software onto the device do not prevent 'pushing' of software to the device by other parties. A primary example of 'pushed' software is email attachments, which can arrive without the device's user issuing any request for the software. Another crucial example of pushed software is the wide array of code that arrives in response to requests to web-servers. The consumer may think that they are requesting an HTML file that will display in their browser-window; but increasingly that is accompanied by active code that infiltrates the device; and as noted earlier, Microsoft's ActiveX 'controls' appear to have largely uncontrolled access to the whole of the device and local storage.

A conventional form of protection is akin to `perimeter defence'. This involves running software that is usually referred to as 'virus protection software' or 'anti-virus software'. This checks incoming files for known instances of malware. There are many such products.

Implementing such products requires understanding, patience and investment. The software may need to be acquired, in many cases for a fee; it needs to be installed and most likely configured; and running it is an inconvenient overhead that delays the consumer's desired experience. Moreover, installation of such software creates additional vulnerabilities, as discussed in sub-section 4.2(4) above. In addition, because malware is in a state of continual adaptation, such software and the data that supports it require frequent updating. That is onerous if performed manually; but if the process is automated it creates yet further vulnerabilities.

All such protections are incomplete because there is a lead-time between the creation of new malware, discovery by the suppliers of protection software that it exists, discovery of the malware's 'signature' whereby it can be recognised in users' storage, and distribution of the new data or software version to consumers' devices.
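In essence, such products compare incoming files against a database of fingerprints ('signatures') of known malware. The sketch below uses whole-file SHA-256 hashes as a crude stand-in for the more sophisticated pattern-matching that real products perform; the hash value and the directory scanned are placeholders. The lead-time problem is inherent in the approach: a sample whose fingerprint is not yet in the database passes unremarked.

```python
import hashlib
from pathlib import Path

# Placeholder 'signature database': fingerprints of known malware samples.
KNOWN_BAD_SHA256 = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder value
}

def is_known_malware(path: Path) -> bool:
    """Return True if the file's fingerprint matches a known-malware signature."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_SHA256

for candidate in Path("downloads").glob("*"):      # placeholder directory of incoming files
    if candidate.is_file() and is_known_malware(candidate):
        print(f"Known malware signature found in {candidate}")
```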

(2) Malware Payloads

The previous sub-section focussed on how malware reaches consumer devices, and what safeguards exist that can prevent that happening. This sub-section considers what malware does once it makes it through the protections and gets itself installed on the device.

The term 'trojan horse' or 'trojan' refers to a program that purports to perform a useful function (and may do so), but also performs one or more malicious functions. An example is a useful utility (which, for example, helps find lost files, or draws a Christmas tree that can be sent to friends at the appropriate time of year). If it is a trojan, then it performs some additional function (reminiscent of the enemy soldiers carried in the wooden horse's belly).

One use to which malware is put is to enable a consumer device to be controlled by processes running on some other computer. The term 'zombie' is used to refer to a device that has such malware installed on it. This aspect is further discussed in the following sub-section.

Where a malware payload gathers personal data on a consumer device, it is referred to as 'spyware'. To be effective, such software generally operates surreptitiously, and without informed consent. Examples include code intended to assist corporations to monitor the use of copyright works that they own (such as software, images, music and videos), and tools that assist in the commission of financial fraud and theft.

Many instances of spyware are created by corporations to enable advertisements to be displayed on consumer devices, preferably ads that will be of interest to the user. The proponents of such software prefer the term 'adware', and seek to distinguish 'adware' from the broader category of 'spyware'.

A particular sub-category of malware payload is commonly referred to as a 'keystroke logger'. The function of this form of malware is to capture as data what the user keys on the keyboard. This may enable the conduct of fraudulent transactions, especially where the data is part of an authentication process, such as a password, PIN or passphrase. A keystroke-logger may also be used for surveillance of the user's activities, for example by the person's employer, by a corporation, by a government agency, or by a law enforcement or national security agency.

A further category of malware payload comprises tools to facilitate remote use of the device by another party. A legitimate form is so-called `remote administration software', such as Microsoft Back Office and Apple Remote Desktop. These enable users to be provided with technical support without the user, the device and the technician having to be in the same place at the same time. An example of a tool that mimics remote administration software but is used by unauthorised parties is Back Orifice. The use of such malware payloads is discussed in the following sub-section.

Safeguards exist against malware payloads that have already successfully infiltrated the device. The 'virus detection' software described in the previous section can be run periodically, to 'audit' the software that is installed on the device. For this to be effective, however, the protection software and the data that supports it need to be updated continually, or at least from time to time. Viruses and worms may get through the perimeter protection in the first instance, because their signature, or even their very existence, are not yet known; but some time later they become recognisable and can be detected and removed. However, it may also be necessary to run additional anti-spyware software. This caters for software that arrives through vectors that are not monitored by the 'virus protection software'. Implementing such protections requires understanding, patience and investment by the consumer, because running such software is an inconvenient overhead.

Safeguards of all kinds are subject to countermeasures. This is particularly apparent in the context of malware, where there is a running battle between malware producers and the providers of safeguards. A further relevant category of malware payload is referred to as 'rootkits'. These are tools that help conceal the presence of software and files on the device. They thereby assist the remote user to escape detection. In section 4.2(4) above, attention was drawn to the risk involved in software installation. Because anti-virus and anti-spyware tools attract some degree of trust from consumers, they are also used as vectors to infiltrate malware onto consumer devices.

(3) 'Hacking'

The term 'hacking' is commonly applied to the operation of a device by a remote user without the authority of the local user. Other (and preferable) terms for this are 'break-in' and 'cracking' (as of a safe).

There are myriad opportunities for crackers, because of the inherent insecurity of operating systems and applications described in section 4.2 above. Of particular relevance are the highly permissive nature of many default settings, and the desire of software developers to have unfettered access to consumer devices in order to enhance 'the user experience', market their own and other parties' products, and exercise control over the use of their own and other parties' software and data.

There are readily-accessible libraries of recipes on how to conduct 'hacking'. Many of the techniques have been productised in the form of 'scripts'. The people who perform hacking require a moderate amount of skill, but they do not need to be experts and are sometimes referred to by the derogatory term 'script kiddie'.

In addition, hacking may be made easy through the existence of a 'backdoor'. This term refers to any planned means whereby a person can surreptitiously gain unauthorised access to a remote device. Some backdoors are intrinsic to the software installed on the device before it is delivered to the consumer, whereas others are infiltrated into the device at a later stage. Examples include the remote administration software referred to in the previous sub-section (intended to enable maintenance programmers to gain access), trojans infiltrated by means of worms, and features added to a program by viruses.

When a device has been hacked into, a remote user is able to operate the device as though they were the local user. The capabilities available may be somewhat restricted, or may be the same as those available to the local user. A hacker generally has reasonable technical competence, and hence knows enough to be able to do far more than most users can do with their own machine. In particular, a hacker may be able to upgrade themselves from the restricted capabilities of a normal user to the full set of privileges to operate the device that are available to a 'super user', 'administrator' or 'root'.

A hacker who has cracked a device is in a position to run software that observes the conduct of financial transactions on the consumer device, and hence to capture identifiers and authenticators. In some circumstances, a hacker may be able to cause transactions to be conducted, e.g. to authorise transfer of funds under the control of the consumer to an account under the control of the hacker.

When a device is subject to automated remote control, it is referred to as a 'zombie', 'robot' or 'bot'. A collection of such machines is referred to as a 'botnet'. Botnets have been used to perform attacks on other computers (referred to as 'distributed denial of service' or DDOS), and to relay spam. It has been estimated that a large proportion of Internet-connected devices are zombies.

Zombies could conceivably be used as part of a financial fraud, e.g. to effect 'transaction laundering' by shifting funds through a succession of accounts controlled by consumers in different jurisdictions, thereby obfuscating the origin and/or eventual destination of the funds. A further application that has been speculated upon but does not yet appear to have been publicly demonstrated is market price manipulation, e.g. of shares traded on exchanges.

Some limited protections are available to consumers. They can inform themselves about the security settings of the operating system, systems software, and each application running on their device. However, there are scores of software items whose settings need to be controlled, the settings are complex and highly diverse, and the locations where the settings can be changed and the documentation relating to them are in many cases very obscure. In practice, few users are even aware of the problems, let alone capable of learning how to adjust their devices to be less vulnerable.
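The scale of that task can be conveyed by a sketch that does no more than list the settings a consumer would need to locate and assess. The file locations and setting names below are hypothetical stand-ins; the real ones differ between operating systems, versions and applications, which is precisely the problem.

    from pathlib import Path

    # Hypothetical configuration files and the settings a consumer would need
    # to find, understand and adjust. Real locations and names vary widely
    # by platform, version and application, and are often poorly documented.
    SETTINGS_TO_AUDIT = {
        "~/.config/browser/prefs.ini": ["allow_third_party_cookies", "enable_active_content"],
        "~/.config/mailer/settings.ini": ["auto_open_attachments", "render_remote_images"],
        "~/.config/remote-admin/service.conf": ["listen_on_all_interfaces"],
    }

    for config_file, settings in SETTINGS_TO_AUDIT.items():
        path = Path(config_file).expanduser()
        status = "found" if path.exists() else "not found -- the consumer must first locate it"
        print(f"{config_file}: {status}")
        for setting in settings:
            print(f"    check and possibly change: {setting}")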

In section 4.3(2) above, mention was made of the availability of 'firewalls' as a means of preventing some forms of traffic. These can also deny some of the means whereby hackers can gain access to the device. Doing so requires understanding, effort and assiduousness on the part of the consumer. In any case, skilled attackers continue to have many avenues available whereby they can penetrate a consumer device.
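What a personal firewall contributes can be reduced to a decision rule over inbound connection attempts. The sketch below is a conceptual model only, a 'default deny' inbound policy with an (empty) allow list; it is not the configuration syntax of any particular firewall product.

    # Conceptual model of a 'default deny' inbound policy on a consumer device:
    # nothing is reachable from the network unless it is explicitly allowed.
    ALLOWED_INBOUND_PORTS = set()   # empty: the device offers no services to the Internet

    def allow_inbound(destination_port):
        """Permit an inbound connection only if its port is on the allow list."""
        return destination_port in ALLOWED_INBOUND_PORTS

    # Port 31337 was the default listening port of the Back Orifice tool
    # mentioned earlier.
    for port in (80, 443, 31337):
        verdict = "allow" if allow_inbound(port) else "deny"
        print(f"inbound connection attempt on port {port}: {verdict}")

Configuring a real firewall involves many such rules, for both directions of traffic, which is where the understanding and assiduousness referred to above are required.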

As mentioned in section 4.2(1) above, suppliers of systems software and applications issue 'patches' from time to time, which address vulnerabilities that have come to light, usually as a result of poor design and programming. Consumers can implement those 'patches' in order to block off some of the vulnerabilities on their machines that make them particularly susceptible to hacking.

Unfortunately, there are serious disincentives that militate against consumers actually doing so. Suppliers commonly only patch very recent versions of their products. That forces many consumers to upgrade the version they are running in order to have access to the patch, which may cost a considerable amount of money, and involve significant effort and delay.
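The disincentive can be made concrete with a sketch of the decision the consumer actually faces: a patch exists only for recent versions, so an older installation must first be upgraded, at some cost, before it can be secured. The version numbers and cost figure are invented for illustration.

    # Hypothetical illustration of the 'only recent versions are patched' problem.
    INSTALLED_VERSION = 7          # what the consumer is actually running
    OLDEST_PATCHED_VERSION = 9     # the supplier patches only versions 9 and later
    CURRENT_VERSION = 10
    UPGRADE_COST_DOLLARS = 250     # licence fee, time, and possibly new hardware

    if INSTALLED_VERSION >= OLDEST_PATCHED_VERSION:
        print("A security patch is available; apply it.")
    else:
        print(f"No patch exists for version {INSTALLED_VERSION}.")
        print(f"To be patched, first upgrade to version {CURRENT_VERSION} at an "
              f"estimated cost of ${UPGRADE_COST_DOLLARS}, or continue to run "
              "with the known vulnerability.")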

Upgrading to a new version may actually be undesirable, and in some cases highly so. In many cases, the only reason for applying the update is to address the security weakness in the original version of the product. But the new version very probably brings 'bloat', and with it a slow-down in performance. In addition, many new versions include spyware installed by the supplier to serve its own ends. They may also include additional, non-negotiable licensing terms that are unacceptable to the consumer. These features arise with many suppliers, but are particularly common with Microsoft products, which afflict many consumers, variously at the operating system level, and in the key applications of office tools, web-browsers and email-clients.

It may not even be possible to upgrade a consumer device to a new version of systems or application software. Because later versions of software are typically bloated with a great many new, inefficient and mostly unwanted features, they may not run on the consumer's device without hardware enhancement. That involves additional cost and inconvenience. But some consumer devices, particularly smaller handhelds, may not be capable of being enhanced in the necessary manner, because their `form-factor' is inherently greatly constrained, and there may be no port to plug an enhancement into, or no space inside the housing.

There are, in short, many barriers and disincentives that work against the widespread implementation of safeguards against `hacking'.


5. The Effectiveness of Available Safeguards

This section considers the reasonableness of the proposition contained in the EFTS Code consultation paper of January 2007 to the effect that a duty of care should be imposed on consumers, formulated variously as `minimum', `adequate' or `reasonable' standards. Technical safeguards are addressed, then legal safeguards, and finally the implications of the analysis for the EFTS Code.

5.1 Technical Safeguards

The preceding sections have demonstrated that a vast array of threats exist, that they impinge on a vast array of vulnerabilities, and that the vulnerabilities are deep-seated in the device architecture, the systems software, the languages in which systems software and applications are developed, the applications, and the development practices of software suppliers.

Safeguards are available that address some of the threats and vulnerabilities. However, these safeguards:

In order to take advantage of each particular safeguard, the user must do the following:

Worse still, after the consumer has gone to all of that trouble, the safeguards are of limited effectiveness, because:

Moreover, even a well-protected consumer device still has a wide array of vulnerabilities, many of them known to attackers. This applies in particular to the intrinsic vulnerabilities outlined in section 4.2, many of which were intentionally designed-in by suppliers, and to undefendable malware and hacking attacks outlined in section 4.4. There are also considerable exposures arising from the social engineering attacks referred to in section 4.1(3), particularly when skilfully combined with the data communications insecurities described in section 4.3.

5.2 Legal Safeguards

A legal framework communicates rights and obligations, encourages responsible players to behave responsibly, and acts as a deterrent against irresponsible behaviour. It also creates the possibility of back-end controls, in the form of sanctions against organisations and individuals that misbehave, or at least opprobrium from `naming and shaming'. Hence, in theory at least, the law could be an effective safeguard for parties affected by malware.

Computer crimes legislation was enacted with the intention of making unlawful the abuse of devices by means of data misuse. The federal Cybercrime Act 2001 (and other similar provisions in the various Australian States) criminalises malicious actions that allow unauthorised access, impairment and modification of data or electronic communications. Whether an offence is committed generally depends on whether the person had the intent to commit an offence or to cause harm, but in some cases recklessness is sufficient.

The practices of many `legitimate corporations' arguably fall within the parameters of criminal data misuse. Web-sites that install rootkits and backdoors (as has been the case with Sony and Microsoft), and that use invasive programming techniques without user authorisation are quite probably in breach of the criminal law. In many cases, suppliers depend on a 'consent' arising from an end-user license agreement (EULA) or acceptance of a privacy policy statement. The provisions of such documents are often vague, difficult to understand and even misleading. Hence consumers have not provided the necessary 'informed consent'. Yet the political will is lacking to prosecute 'legitimate corporations' for even quite clear instances of criminal data misuse.

There does appear, on the other hand, to be willingness to act against 'illegitimate business' such as 'organised crime' or 'cybergangs'; but for a variety of reasons, very few cases are prosecuted. Cybercrimes are rarely reported. Many cybercrimes are committed by people in distant jurisdictions. International cooperation is required, but is difficult to get, and very slow. The gathering of evidence requires forensic specialists, who are in very short supply. Technical complexities abound. The laws relating to digital evidence are still immature. Even if cases were mounted and won, the penalties are not severe enough to act as an effective deterrent, particularly given the scale of the financial benefits to the cyber-criminals.

Civil actions might provide a safeguard but, again, only in theory. Australian product liability law does not apply to software. A consumer could sue a `legitimate' or `illegitimate' corporation using a variety of other civil instruments, such as the laws of negligence, misleading and deceptive conduct, and invasion of privacy. In each of these cases, the plaintiff (a consumer, or perhaps an association or watchdog agency on behalf of a class of consumers) bears the burden of proof based on the balance of probabilities. Proving negligent conduct is, in practice, very difficult. A successful litigant would need to present sophisticated technical evidence that requires expert testimony. Opportunities abound for respondents' experts to counter litigants' experts. Judges routinely permit respondents great latitude to delay the case for long periods, and to raise the costs of litigation very high. Hence the costs commonly greatly outweigh the financial harm for which reparation can be sought.

In short, the law fails to provide any effective safeguards in relation to the security of consumer devices.

5.3 Implications for the EFTS Code

The changes being considered for the EFTS Code would see allocation of liability shifted to the consumer for malware damage where the consumer did not implement "minimum" (or perhaps "adequate", or "reasonable") safeguards to secure the computer device. This sub-section considers that proposal in light of the preceding analysis.

The discussion paper does not give a clear indication of who is to bear the burden of proof. Transactions are generally posted to consumers' accounts when the financial institution receives notice of them. So a contested transaction stands until and unless the consumer takes action that causes the financial institution to reverse it. Hence the burden of proof is readily interpreted as lying with the consumer.

The consultation paper sets out two policy principles: the `least cost avoider' principle (para. 7.10), and the simplicity principle (para. 7.11). This paper has demonstrated the infeasibility of devising a secure payments scheme that depends on consumer device security. Hence the `least cost' approach dictates that the service-provider must deliver security from the server end. The simplicity principle states that "broad standards such as `the user takes all reasonable steps to keep the access method safe' are less appropriate than specific standards" (para. 7.11). Yet the draft does not even discuss what types of specific standards would be imposed; and, in any case, the analysis conducted in this paper suggests that no simple statement of specific standards is feasible. Hence the scheme described in the discussion document could not possibly be equitable.

Instead of placing the onus of proof on the consumer, the logical locus of responsibility is the service-provider. If a corporation wished to shift to the consumer the responsibility for loss arising from a particular transaction, then the corporation would need to be able to (in the terms used in the EFTS Code of Conduct at clause 5.5) "prove on the balance of probability" that the loss arose in large part because of one or more specific deficiencies in the consumer's device.

This would require the corporation to have access to the device, in the state that it was in at some prior time. That probably implies that the device itself has to be taken from the consumer. In any case, the science and (largely) art of security safeguards is in many circumstances incapable of determining what vulnerability was exploited by what attack, let alone of providing information of evidentiary quality in support of the conclusion reached.

In most circumstances therefore, it is logically untenable for corporations to argue for a shift in liability. In delivering services to consumer devices over the Internet, corporations are depending on insecure infrastructure, and they must carry the responsibility for doing so.

A particular irony in the situation is that a great many web-sites that support transactions depend on advanced and intrusive programming techniques such as cookies, JavaScript, Java, ActiveX controls and AJAX. But it is precisely these techniques that safeguards need to block in order to achieve consumer device security. Hence those consumers who actually adopt appropriate safeguards would be to a considerable extent precluded from conducting transactions and making payments on the Internet.


6. Other Approaches

Given the enormous range of vulnerabilities, the ineffectiveness of safeguards, and the serious difficulties involved in imposing liability on consumers, it might appear that consumers must be allowed to get away scot-free, however recklessly they use the Internet to conduct transactions. Clearly it is not in the interests of society or the economy for that to be the case. It would be preferable to provide an incentive for them to take due care.

Constructing such a scheme is challenging, however. The present arrangements already embody a requirement that consumers be careful. Consumers remain liable for the consequences of compromised credit-card details until they report the problem (EFTS Code at 5.3), and where a security-code such as a PIN is not protected, they bear the first $150 of the loss (5.5(c)). If additional contingent liabilities are to be imposed on consumers, then similar, very carefully judged approaches are essential.
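The effect of the existing allocation rules can be shown numerically. The sketch below assumes a simplified reading of clause 5.5(c): where a security-code such as a PIN was not protected, the consumer bears the lesser of $150 and the actual loss, and the institution bears the balance. It ignores the further limits and conditions in clauses 5.5 and 5.6, so it illustrates the shape of the rule rather than stating it.

    def allocate_loss(total_loss, code_not_protected):
        """Split a disputed loss between consumer and institution.

        Simplified reading of EFTS Code cl. 5.5(c): if the consumer failed
        to protect the security-code, they bear the first $150 (or the whole
        loss, if it is smaller); otherwise the institution bears the loss.
        """
        consumer_share = min(150.0, total_loss) if code_not_protected else 0.0
        return consumer_share, total_loss - consumer_share

    # Example: a $500 fraudulent withdrawal where the PIN was written on the card.
    consumer, institution = allocate_loss(500.0, code_not_protected=True)
    print(f"consumer bears ${consumer:.0f}, institution bears ${institution:.0f}")

Any additional contingent liabilities would need to be expressed with at least this degree of precision if they are to be 'very carefully judged' in the sense suggested above.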

A related concern is that the Code at para. 5.4 suggests that the consumer could be liable where they have "contributed to the loss". The effect of this is currently limited by paras. 5.5 and 5.6. Any amendments to the Code would need to be carefully constructed, however, in order to ensure that consumers did not become disproportionately responsible for losses.

There would be considerable benefits in a multi-partite scheme to:

Such a scheme would require substantial funding. It would need to involve financial institutions, retailers and software providers (in each case perhaps through appropriate industry associations), and governments, in consultation with consumer representative and advocacy organisations. On the other hand, software suppliers might see the scheme as a threat to their business, and financial institutions are likely to seek to avoid the costs involved. Hence each of the various corporations may well obstruct any such scheme.

It is therefore inadequate to limit the discussion about appropriate incentives and disincentives to consumers. The discussion needs to extend to:

The analysis conducted in this paper has demonstrated that consumer devices are inherently highly insecure. There is an urgent need for producers of these devices and the software that runs on them to abandon their cavalier attitudes of the past and take responsibility for producing devices that have far fewer and far less severe vulnerabilities.

Moral suasion and the stimulation of 'self-regulatory codes' are an inadequate response, because the changes required are both attitudinal and substantial. Further, an increasing number and variety of organisations that are currently outside the scope of the EFTS Code need to be subjected to it. Hence formal legislation is necessary, to establish a regulatory framework, followed by co-regulatory work to enable the promulgation of enforceable codes that are practicable for industry, and that can be adapted sufficiently quickly as the patterns of technology and eBusiness change.

Because of the international nature of the information technology industry, at least bilateral discussions with US regulators are needed, and more likely multilateral discussions, through such venues as the Organisation for Economic Co-operation and Development (OECD) and the Asia Pacific Economic Cooperation (APEC).


7. Conclusions

It is generally perceived that eCommerce and eGovernment offer great scope for efficiency and service benefits. Consumer trust is fundamental to the widespread adoption of transaction-based services, particularly those services that involve payments. Considerable improvements in the security of the infrastructure used for Internet transactions are essential.

This paper has summarised evidence about the feasibility of the authorised users of consumer devices being made responsible for their behaviour, in particular in the context of financial transactions.

There are a great many ways in which a consumer device may conduct transactions that the device's authorised user did not intend, or conduct them in a manner of which the user is not even aware.

Some safeguards exist that address some of the vulnerabilities and mitigate the effects of some of the threats. These safeguards are difficult and expensive to implement, and they are in any case incomplete, far from perfect, and frequently out-of-date. Moreover, some of the vulnerabilities are inherent in the hardware, network connections, systems software and application software. Quite simply, consumer devices are not currently safe, and cannot be rendered safe.

It is not practicable for consumers to achieve control of their devices. As a result, it is impracticable to make consumers responsible for the negative impacts of actions by their machines that they did not authorise. Further, it is untenable to remove the established consumer protections and make consumers responsible for losses arising from the use of consumer devices to make payments. Instead, it is essential that consumers be indemnified against those negative impacts.

Further, actions are needed by business and government to address the parlous state of consumer device security, and the almost complete absence of formal legal safeguards.


Resources

AIC (2005-) `High Tech Crime Briefs' Australian Institute of Criminology, 2005-, at http://www.aic.gov.au/publications/htcb/

AISA, Australian Information Security Association, at http://www.aisa.org.au/

Anderson R. (2001) `Why Information Security is Hard - An Economic Perspective', University of Cambridge Computer Laboratory, 2001-, at http://www.cl.cam.ac.uk/~rja14/econsec.html

AusCERT, at http://www.auscert.org.au/

AusCERT (2006) 'Protecting your computer from malicious code' AusCERT, 10 April 2006, at http://national.auscert.org.au/render.html?it=3352

Belton M. (2006) `Understanding Malware and Internet Browser Security', Berbee Information Networks Corporation, May 2006, at http://www.berbee.com/public/learning/WP_UnderstandingMalware.aspx

Bradley T. (2006) 'Essential Computer Security: Everyone's Guide to Email, Internet, and Wireless Security' Syngress, 2006

CERT, Bibliography of Security Books and Articles, at http://www.cert.org/other_sources/books.html

CERT (2002) 'Home Computer Security', CERT, 2002, at http://www.cert.org/homeusers/HomeComputerSecurity/

Ciampa M. (2005) 'Security Awareness: Applying Practical Security in Your World' Course Technology, 2nd ed., 2005

Clarke R. (1988) 'Who Is Liable for Software Errors? Proposed New Product Liability Law in Australia' Xamax Consultancy Pty Ltd, December 1988, at http://www.rogerclarke.com/SOS/PaperLiaby.html

Clarke R. (1996) 'Message Transmission Security (or 'Cryptography in Plain Text')' Privacy Law & Policy Reporter 3, 2 (May 1996), pp. 24-27, at http://www.rogerclarke.com/II/CryptoSecy.html

Clarke R. (1997) 'Cookies' Xamax Consultancy Pty Ltd, 1997, at http://www.rogerclarke.com/II/Cookies.html

Clarke R. (2001a) 'Introduction to Information Security' Xamax Consultancy Pty Ltd, February 2001, at http://www.rogerclarke.com/EC/IntroSecy.html

Clarke R. (2001b) 'The Fundamental Inadequacies of Conventional Public Key Infrastructure' Proc. Conf. ECIS'2001, Bled, Slovenia, 27-29 June 2001, at http://www.rogerclarke.com/II/ECIS2001.html

DCITA (2005) 'Taking Care of Spyware' DCITA, September 2005, at http://www.dcita.gov.au/search/click.cgi?url=http://www.dcita.gov.au/__data/assets/pdf_file/30866/Taking_Care_of_Spyware.pdf&rank=2&collection=search

Farmer D. & Venema W. (2005) `Computer forensics', Addison-Wesley Professional Computing Series, 2005

Garfinkel S. & Spafford G. (2001) 'Web Security, Privacy & Commerce, Second Edition' O'Reilly, 2nd ed., 2001 

Gralla P. (2005) 'PC Pest Control' O'Reilly, 2005

Gutmann P. (2005?) 'The Convergence of Internet Security Threats (Spam, Viruses, Trojans, Phishing)', at http://www.cs.auckland.ac.nz/~pgut001/pubs/blended.pdf

Hill C. (2001) 'Risk of Masquerade Arising from the Storage of Biometrics', Honours Thesis, Dept of Computer Science, Australian National University, November 2001

Jakobsson M. & Myers S. (eds.) (2006) 'Phishing and Countermeasures: Understanding the Increasing Problem of Electronic Identity Theft' Wiley, 2006

James L. (2005) 'Phishing Exposed' Syngress, 2005

Kaiser T. (2000) 'Secure Storage of Private Keys on Commodity Workstations', Unpublished Honours Thesis, Department of Computer Science, Australian National University, November 2000

Lehtinen R. & Gangemi G.T. (2006) 'Computer Security Basics' O'Reilly, 2nd ed., 2006 

Microsoft, 'Security at Home', at http://www.microsoft.com/athome/security/default.mspx

Miller M. (2002) 'Absolute PC Security and Privacy' Sybex, 2002

Mitnick K.D. & Simon W.L. (2002) 'The Art of Deception: Controlling the Human Element of Security' Wiley, 2002

Mohay G. (2003) `Computer and Intrusion Forensics' Artech House, Boston, 2003

Pfleeger C. & Pfleeger S. (2006) `Security in Computing' Prentice Hall, 4th ed., 2006

RFC 2964 (2000) 'Use of HTTP State Management', Best Current Practice document, BCP 44 (Moore K. & Freed N.), at ftp://ftp.isi.edu/in-notes/rfc2964.txt

Slay J. & Koronios A. (2006) 'Information Technology Security & Risk Management' Wiley, 2006

Stafford T.F. & Urbaczewski A. (2004) 'Spyware: The Ghost in the Machine' Commun. Association for Information Systems 14 (2004) 291-306, at http://web.njit.edu/~bieber/CIS677F04/stafford-spyware-cais2004.pdf

Tyree A. (2005) `Banking Law in Australia' LexisNexis Butterworths, Australia, 5th ed., 2005

Wikipedia (2007) 'computer insecurity' at http://en.wikipedia.org/wiki/Computer_insecurity, accessed January 2007, plus many articles linked to from the article


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in the Cyberspace Law & Policy Centre at the University of N.S.W., a Visiting Professor in the E-Commerce Programme at the University of Hong Kong, and a Visiting Professor in the Department of Computer Science at the Australian National University.

Alana Maurushat is a research associate at Cyberspace Law & Policy Centre, and a part-time lecturer and PhD candidate, all in the Faculty of Law at the University of N.S.W. She is formerly a lecturer and deputy director of the LLM in IT and IP in the Faculty of Law at the University of Hong Kong, where she continues to teach as a visiting lecturer.


