Risk-limiting audits provide statistical assurance that election outcomes are correct by hand counting portions of the audit trail: paper ballots or voter-verifiable paper records. We sketch two types of risk-limiting audits: ballot-polling audits and comparison audits, and give example computations. Tools to perform the computations are available at statistics.berkeley.edu/~stark/Vote/auditTools.htm.
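For concreteness, the core ballot-polling computation can be sketched as below. This is a minimal illustration in the style of the BRAVO test statistic for a two-candidate contest; the reported share, sample, and risk limit are hypothetical, and the sketch is not the full procedure implemented by the auditTools page.

```python
import random

def ballot_polling_audit(reported_winner_share, sample, risk_limit=0.05):
    """Sequential ballot-polling test (BRAVO-style) for two candidates.
    `sample` lists 'w' (ballot for the reported winner) or 'l' (ballot
    for the reported loser) in the random order drawn.
    Returns (test statistic T, whether the audit can stop)."""
    s_w = reported_winner_share            # winner's reported share, e.g. 0.55
    T = 1.0
    for ballot in sample:
        T *= (s_w if ballot == 'w' else 1 - s_w) / 0.5
        if T >= 1 / risk_limit:            # risk limit met: outcome confirmed
            return T, True
    return T, False                        # keep sampling (or escalate)

# Hypothetical sample: 60 ballots for the winner, 40 for the loser
sample = ['w'] * 60 + ['l'] * 40
random.shuffle(sample)                     # ballots are drawn in random order
print(ballot_polling_audit(0.55, sample))
```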
Australia has a long history of transparent, high-integrity secret ballot elections. As elections are increasingly dependent on electronic systems, the traditions of transparency and privacy must be extended to new technologies and new ways of scrutinising them. We examine this challenge and describe the most promising way forward. The Victorian Electoral Commission (VEC) is undertaking a pioneering project that aims to set the standard for how Australian e-voting systems should be commissioned, developed and scrutinised. Through collaboration with the Universities of New South Wales, Melbourne, Luxembourg and Surrey, the VEC is developing the largest universally verifiable public e-voting system in the world, based on Prêt à Voter.
Today, European citizens' digital identities are usually based on the X.509 standard. The management of this identity by public administrations is an important challenge, one that sharpens when interoperability among the public administrations of different countries becomes necessary. Owing to the diversity of identity management systems, when the user of a given system seeks to communicate with governments outside the scope of their own local identity management system, both management systems must be linked and able to understand each other. To achieve this, the European Union has addressed the creation of an interoperability framework for Identity Management Systems (IDMs). This paper provides an overview of the current state of IDMs at a pan-European level, identifying the issues on which there is agreement as well as those that remain unresolved and are preventing the adoption of a large-scale model.
Federal Reserve Regulation E guarantees that US consumers are made whole when their bank passwords are stolen. The implications lead us to several interesting conclusions. First, emptying accounts is extremely hard: transferring money in a way that is irreversible can generally only be done in a way that cannot later be repudiated. Since password-enabled transfers can always be repudiated, this explains the importance of mules, who accept bad transfers and initiate good ones. We demonstrate that it is the mule accounts, rather than those of victims, that are pillaged. We argue that passwords are not the bottleneck: they are but one, and by no means the most important, ingredient in the cyber-crime value chain. We show that, in spite of appearances, password-stealing is a bad business proposition.
The Parfait static code analysis tool started as a research project at Sun Labs (now Oracle Labs) to address the runtime and precision shortcomings of then-available C/C++ tools, as pointed out by developers within the company. After developers started to see and verify the research outcomes, they raised more practical requests to ensure the tool would be easy to use and integrate. This helped transition Parfait from a research artifact to a tool for developers. At present, Parfait is used on a daily basis to prevent new defects from being introduced into codebases, as well as to report defects in existing code. It has been integrated into the build process of several organizations at Oracle. In this paper we explain the research design goals for Parfait, the practical development features that made the tool popular amongst developers, and our experiences with deploying the tool into our company's development organizations.
The Common Criteria for Information Technology Security Evaluation aspires to be a global standard for IT security certification. Issued certifications are mutually recognized among the signatories of the Common Criteria Recognition Arrangement, and the key element of such mutual relationships is trust. A question raised in this paper is how far trust can be maintained in the Common Criteria as additional signatories enter with geopolitical interests that conflict with those of earlier signatories. Other issues raised are control over production and the lack of a permanent Common Criteria organization, which leads to concerns about the ability to oversee actual compliance. As the Common Criteria is formulated today, it is unlikely to survive over time. The reasons it might fail are its rigid framework; rapid technical development, which makes a security target a moving target and leads to instability and uncertainty; and the increasing militarization of cyberspace, which is shifting the focus from information assurance to information operations.
Most networks today employ static network defenses. The problem with a static defense is that an adversary has unlimited time to circumvent it. We propose a moving target defense, based on the Internet Protocol version 6, that dynamically obscures network-layer and transport-layer addresses. Our technique can be thought of as "frequency hopping" in the Internet protocol (IP) space. By constantly moving the logical location of a host on a network, our technique prevents targeted attacks, host tracking, and eavesdropping. We demonstrate the feasibility and functionality of our design using prototypes deployed on the Virginia Tech campus-wide IPv6 network.
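As a rough illustration of the hopping idea (a sketch, not the authors' exact construction), a host and its peers could derive the current address from a shared secret and a synchronized clock; the prefix, key, host identifier and rotation interval below are all hypothetical.

```python
import hmac, hashlib, time, ipaddress

def hopped_address(prefix, key, host_id, interval=10):
    """Derive the IPv6 address for the current time slot by hashing a
    shared secret, a host identifier and the slot number. Peers with
    synchronized clocks compute the same address independently."""
    slot = int(time.time()) // interval            # current rotation window
    digest = hmac.new(key, host_id + slot.to_bytes(8, 'big'),
                      hashlib.sha256).digest()
    iid = int.from_bytes(digest[:8], 'big')        # 64-bit interface identifier
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | iid)

print(hopped_address("2001:db8::/64", b"shared-secret", b"host-42"))
```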
The Neighbor Discovery Protocol (NDP) is one of the main protocols in the IPv6 suite. However, it has no security mechanisms and is vulnerable to various attacks. The SEcure Neighbor Discovery (SEND) protocol was therefore designed to counter NDP threats. SEND is based on the use of RSA key pairs, Cryptographically Generated Addresses (CGA), digital signatures and X.509 certification. Unfortunately, SEND deployment is still a challenge for several reasons. First, SEND is compute-intensive. Second, its deployment is not trivial, and SEND Authorization Delegation Discovery (ADD) has so far remained mostly theoretical rather than practical. Third, operating systems lack mature SEND implementations. In this article, we give an overview of the SEND deployment challenges and review some of the proposals to optimize SEND to make it applicable.
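To make the CGA component concrete, here is a simplified sketch of the address-generation idea from RFC 3972; it omits the sec parameter, the u/g bit handling and collision detection, and the key bytes are placeholders.

```python
import hashlib, ipaddress

def cga_interface_id(modifier, subnet_prefix, collision_count, pubkey_der):
    """Bind an interface identifier to a public key (simplified Hash1
    computation from RFC 3972)."""
    data = modifier + subnet_prefix + bytes([collision_count]) + pubkey_der
    return hashlib.sha1(data).digest()[:8]     # 64 bits; sec/u/g bits omitted

prefix = ipaddress.IPv6Network("2001:db8::/64")
iid = cga_interface_id(b"\x00" * 16,                   # 16-byte modifier
                       prefix.network_address.packed[:8],
                       0,
                       b"<DER-encoded public key>")    # placeholder key bytes
addr = ipaddress.IPv6Address(int(prefix.network_address) |
                             int.from_bytes(iid, 'big'))
print(addr)   # a verifier can recompute this address from the public key
```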
For optimum success, static analysis tools must balance the ability to find important defects against the risk of false positive reports. Each reported warning must be interpreted by a human to determine whether any action is warranted, and the criteria for judging warnings can vary significantly depending on the role of the analyst, the security risk, the nature of the defect, the deployment environment, and many other factors. These considerations mean that it can be difficult to compare tools with different characteristics, or even to arrive at the optimal way to configure a single tool. This paper presents a model for computing the value of using a static analysis tool. Given inputs such as engineering effort, the cost of an exploited security vulnerability, and some easily measured tool properties, the model allows users to make rational decisions about how best to deploy static analysis.
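A toy instance of such a value computation might look like the following; the linear form and all parameter values are illustrative assumptions, not the paper's actual model.

```python
def net_value(warnings, true_positive_rate, triage_cost_per_warning,
              fix_cost_per_defect, exploit_cost, exploit_probability):
    """Expected value of running the tool: avoided exploit losses
    minus the triage and fix effort."""
    true_positives = warnings * true_positive_rate
    benefit = true_positives * exploit_probability * exploit_cost
    cost = (warnings * triage_cost_per_warning
            + true_positives * fix_cost_per_defect)
    return benefit - cost

# 500 warnings, 20% real, $50 to triage each, $500 to fix each real one,
# $100k per exploited vulnerability, 5% chance a latent defect is exploited
print(net_value(500, 0.20, 50, 500, 100_000, 0.05))   # -> 425000.0
```

Under these made-up numbers the tool pays for itself; lowering the true positive rate or raising triage cost can flip the sign, which is exactly the kind of trade-off such a model is meant to expose.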
With water, we trust that qualities harmful to its intended use are not present. To avoid a regulatory "solution" to problems with "contaminants" that endanger software's intended use, the industry needs to put in place processes and technical methods for examining software for the contaminants that are most dangerous given the intended use of specific software. The Common Weakness Enumeration (CWE™) offers the industry a list of potentially dangerous contaminants to software. The Common Weakness Scoring System (CWSS™) and Common Weakness Risk Analysis Framework (CWRAF™) provide a standard method for identifying which of these dangerous contaminants would be most harmful to a particular organization, given the intended use of a specific piece of software within that organization. By finding systematic and verifiable ways to identify and remove contaminants, and to gain assurance that contaminated software has been addressed, software providers can improve customers' confidence in systems and possibly avoid regulatory solutions.
Just as seat belt use is widespread, we argue that use of a static analyzer should be part of ethical software development. Drawing on our experience with three Static Analysis Tool Expositions (SATE), we show that static analysis tools report actual vulnerabilities. Even though the expression of most weaknesses is far more complex than a single bug of a given type at exactly a given set of lines of code, static analysis tools identify real vulnerabilities. Their information-rich reports and graphical interfaces help developers efficiently and correctly understand weaknesses and possible consequences. The tools' capabilities complement expert analysis. We have also collected thousands of engineered reference programs with known weaknesses in the SAMATE Reference Dataset (SRD). Using SATE data and the publicly available SRD programs, we plan to develop benchmarks so users can be confident about how much assurance the use of static analyzers provides.
While open source software presents opportunities for software acquisition, it also introduces additional risks. The selection of open source applications needs to be based on security risks as well as features. These risks include security vulnerabilities, of which published vulnerabilities are only the tip of the iceberg in terms of the total security vulnerabilities in an application. Having the source code of an application allows us to look deeper at its security, and we use static analysis to help evaluate the risks an application presents. In this paper, we introduce SAVI (Static Analysis Vulnerability Indicator), a metric designed to assess the risks of using software built by external developers. This metric combines several types of static analysis data to rank application vulnerability. We analyze five open source projects studied over a four-year period, finding strong correlations between static analysis metrics and the quantity of subsequently reported vulnerabilities.
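As a purely hypothetical illustration of combining static analysis outputs into one indicator (not the published SAVI definition), one might weight warnings by severity and normalize by code size:

```python
WEIGHTS = {'high': 10, 'medium': 3, 'low': 1}   # illustrative severity weights

def vulnerability_indicator(counts_by_severity, kloc):
    """Severity-weighted static analysis warnings per KLOC."""
    weighted = sum(WEIGHTS[sev] * n for sev, n in counts_by_severity.items())
    return weighted / kloc

# e.g. 12 high-, 40 medium-, 200 low-severity warnings in 85 KLOC
print(vulnerability_indicator({'high': 12, 'medium': 40, 'low': 200}, 85.0))
```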
Biometric technology has been increasingly deployed in the last decade, offering greater security and convenience than traditional methods of personal recognition. Although the performance of biometric systems is heavily affected by the quality of biometric signals, prior work on quality evaluation is limited. Quality assessment is a critical issue in the security arena, especially in challenging scenarios (e.g. surveillance cameras, forensics, portable devices or remote access through the Internet). Questions regarding the factors that influence biometric quality, how to overcome them, and how to incorporate quality measures into biometric systems have to be analyzed first. In this paper, a review of the state of the art in these matters is provided, giving an overall framework of the main factors related to the challenges associated with biometric quality.
The unacceptable frequency of information breaches demands a vigorous response. The traditional approach is to use policies to constrain and control: information security policies inform employees about appropriate uses of information technology. Unfortunately, there is limited evidence of the effectiveness of policies in reducing losses. This paper explores the possible reasons for this and reports on a survey carried out to detect the presence of these factors in an NHS health board. A plea is made for attention to be paid to the entire system, rather than a myopic focus on individuals. The survey shows how the pressures and rules imposed by the policies often place staff in an impossible position: they sometimes feel this leaves them no option but to break the rules, simply to get their jobs done. The paper concludes by identifying areas where the policy formulation and implementation processes can be improved to alleviate these pressures.
It is a common requirement in real-world applications for mutually untrusting parties to be able to share sensitive information securely. We describe a secure anonymous database search scheme (SADS) that provides exact-match capability. Using a new primitive, re-routable encryption, together with the ideas of Bloom filters and deterministic encryption, SADS allows multiple parties to efficiently execute exact-match queries over a distributed encrypted database in a controlled manner. We further consider a more general search setting allowing similarity searches, going beyond existing work that considers similarity in terms of error tolerance and Hamming distance by capturing semantic-level similarity in our definition. Building on the cryptographic and privacy-preserving guarantees of the SADS primitive, we then describe a general framework for engineering usable private secure search systems.
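The exact-match core of such a scheme can be sketched as follows; HMAC stands in for the deterministic encryption (a PRF), and the re-routable encryption and multi-party routing that SADS adds on top are omitted.

```python
import hmac, hashlib

class EncryptedBloomIndex:
    """Bloom filter over deterministically encrypted keywords: the
    index holder can answer exact-match queries without ever seeing
    plaintext keywords."""
    def __init__(self, size=1 << 16, hashes=4, key=b"query-key"):
        self.size, self.hashes, self.key = size, hashes, key
        self.bits = bytearray(size)

    def _positions(self, word):
        token = hmac.new(self.key, word.encode(), hashlib.sha256).digest()
        for i in range(self.hashes):
            h = hashlib.sha256(bytes([i]) + token).digest()
            yield int.from_bytes(h[:4], 'big') % self.size

    def add(self, word):
        for p in self._positions(word):
            self.bits[p] = 1

    def query(self, word):              # may rarely give a false positive
        return all(self.bits[p] for p in self._positions(word))

index = EncryptedBloomIndex()
index.add("anthrax")
print(index.query("anthrax"), index.query("influenza"))   # True False
```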
Detecting Targeted Malicious Email Using Persistent Threat and Recipient Oriented Features
Targeted malicious emails designed to enable computer network exploitation have become more insidious and more widely documented in recent years. Beyond spam or phishing designed to trick users into revealing personal information, targeted malicious email (TME) facilitates computer network exploitation and the gathering of sensitive information from targeted networks. These TMEs are not singular, unrelated events; instead, they are coordinated and persistent campaigns that can span years. We survey existing email filtering techniques, implement new techniques for detecting TME, and compare these new techniques to two traditional detection methods, SpamAssassin and ClamAV. The new email filtering techniques are based on persistent threat and recipient-oriented features of email combined with a random forest classifier. Incorporating these features improves the detection of TME over SpamAssassin and ClamAV while maintaining reasonable false positive rates. During testing, the new techniques correctly classify 91% of TME, compared to the 16% identified by SpamAssassin+ClamAV.
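The classification step can be sketched with scikit-learn; the feature names and training rows below are hypothetical stand-ins for the persistent threat and recipient-oriented features engineered in the paper.

```python
from sklearn.ensemble import RandomForestClassifier

# Each row: [recipient previously emailed by sender?, sender domain age
#            in days, prior suspicious campaigns from sender, has attachment?]
X_train = [[1, 3650, 0, 0],
           [0,   12, 4, 1],
           [1, 2000, 0, 0],
           [0,    5, 7, 1]]
y_train = [0, 1, 0, 1]          # 0 = benign, 1 = targeted malicious email

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict([[0, 30, 2, 1]]))   # a new-domain, repeat-campaign sender
```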
The transition from today's power systems to the smart grid will be a long evolutionary process. While it might introduce new vulnerabilities, it will also open up opportunities for improving system security. In this article we consider various facets of power system security. We discuss the difficulty of achieving all-encompassing component-level security in power system IT infrastructures due to its cost and potential performance implications. We then outline a framework for modeling system-wide security which, by capturing the interaction between system components, facilitates the assessment of the system's security despite its complexity. We use the example of power system state estimation to illustrate how the security of the system can potentially be improved by leveraging knowledge of the physical processes and the significant amount of redundant information. Finally, we touch upon the problem of information availability, a key security requirement in power system control and operation systems.
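The value of redundancy can be illustrated with a toy least-squares state estimator and a residual test for bad data; the measurement matrix and readings below are made up, and real estimators use weighted, nonlinear AC models.

```python
import numpy as np

H = np.array([[1.0,  0.0],     # 4 redundant measurements of
              [0.0,  1.0],     # 2 unknown bus states
              [1.0, -1.0],
              [2.0,  1.0]])

for z4, label in [(0.25, "consistent"), (0.80, "tampered")]:
    z = np.array([0.10, 0.05, 0.05, z4])           # measurement vector
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)  # estimated state
    r = np.linalg.norm(z - H @ x_hat)              # residual norm
    print(f"{label}: residual norm = {r:.3f}")     # ~0 vs clearly nonzero
```

Because the system is over-determined, a corrupted reading leaves a residual that a chi-square style test can flag, which is the redundancy argument made above.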
To correct geometric distortion and reduce space- and time-varying blur, this paper proposes a new approach capable of restoring a single high-quality image from a given image sequence distorted by atmospheric turbulence. The approach reduces the space- and time-varying deblurring problem to a shift-invariant one. It first registers each frame to suppress geometric deformation through B-spline based non-rigid registration. Next, a temporal regression process is carried out to produce an image from the registered frames, which can be viewed as being convolved with a space-invariant, near-diffraction-limited blur. Finally, a blind deconvolution algorithm is applied to deblur the fused image, generating the final output. Experiments on real data illustrate that this approach can effectively alleviate blur and distortion, recover scene details and significantly improve visual quality.
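A skeletal version of the pipeline, with simple stand-ins at each stage (registration assumed already done, a plain temporal mean in place of the regression, and non-blind Richardson-Lucy iterations with an assumed Gaussian PSF in place of blind deconvolution), might look like this:

```python
import numpy as np
from scipy.signal import fftconvolve

def fuse(frames):
    """Stand-in for the temporal regression: average registered frames."""
    return np.mean(frames, axis=0)

def gaussian_psf(size=9, sigma=1.5):
    """Assumed space-invariant, near-diffraction-limited blur kernel."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax ** 2 / (2 * sigma ** 2))
    psf = np.outer(g, g)
    return psf / psf.sum()

def richardson_lucy(image, psf, iterations=20):
    """Classic (non-blind) deconvolution; the paper uses a blind variant."""
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode='same')
        estimate *= fftconvolve(image / (blurred + 1e-12), psf_mirror,
                                mode='same')
    return estimate

# frames: registered float sequence of shape (T, H, W), values in [0, 1]
# restored = richardson_lucy(fuse(frames), gaussian_psf())
```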
A Novel Bayesian Framework for Discriminative Feature Extraction in Brain-Computer Interfaces
As the learning load in brain-computer interfaces has shifted from the human subject to the computer, machine learning has come to be considered a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI, in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatio-spectral filter optimization is formulated as the estimation of an unknown posterior pdf that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a given state. To estimate this posterior pdf, we propose a particle-based approximation method that extends a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure the discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing its results on three public databases.
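Schematically, the particle-based search can be illustrated as follows; the discriminability score is a toy placeholder for the paper's information-theoretic observation model, and the band parameterization is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminability(band):
    """Toy placeholder: in the paper this would measure how well
    features filtered to `band` separate the mental-task classes."""
    lo, hi = band
    return np.exp(-((lo - 8.0) ** 2 + (hi - 30.0) ** 2) / 50.0)

# Particles are candidate (low, high) frequency bands in Hz
particles = rng.uniform([4.0, 20.0], [14.0, 40.0], size=(200, 2))

for _ in range(30):
    weights = np.array([discriminability(p) for p in particles])
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)  # resample
    particles = particles[idx] + rng.normal(0.0, 0.5, particles.shape)  # diffuse

print(particles.mean(axis=0))   # concentrates near the most discriminative band
```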