- Most developers and operators are concerned with correctness: achieving desired behavior
- A working banking web site, word processor, blog…
- Security is concerned with preventing undesired behavior
- Considers an enemy/opponent/hacker/adversary who is actively and maliciously trying to circumvent any protective measures you put in place
- The primary attribute that system builders focus on is correctness. They want their systems to behave as specified under expected circumstances. If I'm developing a banking website, I'm concerned that when a client specifies a funds transfer of, say, $100 from one of her accounts, the transfer actually takes place as specified.
- If I'm developing a word processor, I'm concerned that when a file is saved and reloaded, I get back my data from where I left off. And so on.
- A secure computer system is one that prevents specific undesirable behaviors under wide-ranging circumstances. While correctness is largely about what a system should do, security is about what it should not do, even when there is an adversary who's actively and maliciously trying to circumvent any protective measures that you might put in place.
Kinds of undesired behavior
- Stealing information:
  - Corporate secrets (product plans, source code, …)
  - Personal information (credit card numbers, SSNs, …)
- Installing unwanted software (spyware, botnet client, …)
- Destroying records (accounts, logs, plans, …)
- Denying access:
  - Unable to purchase products
  - Unable to access banking information
- There are three classic security properties that systems usually attempt to satisfy. Violations of these properties constitute undesirable behavior.
- These are broad properties. Different systems will have specific instances of some of these properties depending on what the system does.
- The first property is confidentiality. If an attacker is able to manipulate the system so as to steal resources or information such as personal attributes or corporate secrets, then he’s violated confidentiality.
- The second property is integrity.
- If an attacker is able to modify or corrupt information kept by a system, or is able to misuse the system's functionality, then he's violated the system's integrity.
- Example violations include the destruction of records, the modification of system logs, the installation of unwanted software like spyware, and more.
- The final property is availability. If an attacker compromises a system so as to deny service to legitimate users, for example, to purchase products or to access bank funds, then the attacker has violated the system’s availability.
Significant security breaches
• RSA, March 2011
• stole tokens that permitted subsequent compromise of customers using RSA SecurID devices
- In 2011, for example, the RSA corporation was breached; I'll say more about how in a moment. The adversary was able to steal sensitive tokens related to RSA's SecurID devices. These tokens were then used to break into companies that use SecurID.
• Adobe, October 2013
• stole source code and 130 million customer records (including passwords)
- In late 2013, the Adobe corporation was breached, and both source code and customer records were stolen.
• Target, November 2013
• stole around 40 million credit and debit card numbers
Defects and Vulnerabilities
• Many breaches begin by exploiting a vulnerability
• This is a security-relevant software defect that can be exploited to effect an undesired behavior
• A software defect is present when the software behaves incorrectly, i.e., it fails to meet its requirements
• Defects occur in the software's design and its implementation
• A flaw is a defect in the design
• A bug is a defect in the implementation
Example: RSA 2011 breach
Exploited an Adobe Flash player vulnerability.
- A carefully crafted Flash program, when run by the vulnerable Flash player, allows the attacker to execute arbitrary code on the running machine.
- This program could be embedded in an Excel spreadsheet and run automatically when the spreadsheet is opened.
- The spreadsheet could be attached to an e-mail masquerading as coming from a trusted party (spear phishing).
Considering Correctness
• The Flash vulnerability is an implementation bug
• All software is buggy. So what?
• A normal user never sees most bugs, or works around them
• Most (post-deployment) bugs are due to rare feature interactions or failure to handle edge cases
• Assessment: it would be too expensive to fix every bug before deploying
• So companies only fix the ones most likely to affect normal users.
Considering Security
An adversary is not a normal user!
• The adversary will actively attempt to find defects in rare feature interactions and edge cases
• For a typical user, (accidentally) finding a bug will result in a crash, which he will then try to avoid
• An adversary will instead work to find a bug and exploit it to achieve his goals
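The kind of edge-case defect an adversary hunts for can be sketched as follows (an illustrative, hypothetical example, not the actual Flash bug): a size check that adds two attacker-controlled lengths can wrap around in unsigned arithmetic, so a huge crafted input passes a check that every normal input would fail.

```c
#include <stdbool.h>
#include <stdint.h>

#define BUF_SIZE 256

/* BUG: header_len + body_len can overflow uint32_t and wrap to a
   small value, letting an oversized crafted input pass the check.
   A normal user never supplies lengths near UINT32_MAX, so the
   defect stays invisible in ordinary use. */
bool fits_buggy(uint32_t header_len, uint32_t body_len) {
    return header_len + body_len <= BUF_SIZE;
}

/* Fixed version: compare against the remaining space instead of
   computing a sum that can overflow. */
bool fits_fixed(uint32_t header_len, uint32_t body_len) {
    return header_len <= BUF_SIZE && body_len <= BUF_SIZE - header_len;
}
```

For well-formed inputs the two checks agree, which is exactly why testing by normal users never flags the bug; only someone deliberately probing the extremes, i.e., an adversary, will find and exploit it.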