
Part 1: Parts of a software project

Excursion: We will quickly review what a modern software development process looks like.

  • Until now, we have mainly discussed what goes wrong during low-level design and implementation. However, implementation in a real software project does not happen in isolation.
  • How well a software project can actually perform software security work depends heavily on how well the actual software development works.
  • A modern software development organisation usually has the following roles. In small teams, several of these may be performed by the same people; some company cultures favour separating the roles.
    • Product owners. These people are part of product management. Their primary role is to understand the needs of the customer (the users of the software), to articulate those needs as requirements, and to prioritise the requirements in relation to each other. They usually carry the business responsibility for “doing the right thing”.
    • Service designers. Often very close to product ownership, they define what the service looks like, how users interact with the system, and how the user experience (“UX”) ties in with the business goals. Sometimes you find dedicated design people, user interface (UI) designers, and graphic designers too.
    • Developers. They do actual development (write code), and are also responsible for design and architecture, at least within their own area.
    • Architects. Not every organisation has these, but usually, when an organisation is large enough to have multiple teams, someone has to be responsible for a unified technical vision. The selected architectural model strongly affects whether there are separate architects. In some cases, you could even find a “security architect”.
    • Testers and quality engineers. They do testing and maintain test automation; sometimes they are developers, as developing automated tests is itself writing software. Previously, quality assurance and development could be strictly separated organisational units, but that distinction is disappearing very quickly.
    • Operations. People who take care of the actual running of the systems. This area is currently under rapid change, as operations responsibilities are being shifted on development teams (“DevOps”) and many operations activities are being automated and managed with tooling. Still, you can find entirely dedicated operations organisations.
  • All of these roles have a job to do in software security. You can probably guess most of these, but I want to highlight some of the uncommon ones:
    • Product owners should recognise users’ security and privacy needs, and prioritise both functional and non-functional security work in relation to other work appropriately.
    • Service design can nudge users towards using the system securely; secure use can be made low-friction. In addition, many privacy aspects can be handled already on this level, even before a single line of code has been written.
  • A modern software development process usually has the following concepts.
    • The requirements are stored on a product backlog. This is a prioritised list of stuff that needs doing. Often, it is co-managed by the product owner and the developers.
      • It is fairly typical for security activities not to be visible here, but putting them on the backlog would make them more explicit.
      • It would usually be really good to have time-consuming security activities such as building security tests into test automation, or performing threat modelling, as tasks on the product backlog. This way they will actually get a time allocation. It also helps to prioritise them in relation to functional requirements.
      • The product backlog often has several different abstraction levels for requirements; they are often termed epics (large user journeys), (user) stories (smaller bits of user experience) and tasks (something to be implemented). This varies hugely between teams.
    • The work management system for developers. This is often either Scrum or kanban. Both are based on processing a stream of stories and tasks taken from the backlog.
      • Both methodologies aim at keeping the workload sustainable by enforcing a work-in-progress limit (WIP limit). Scrum does this by splitting work into timeboxes, typically of 2 weeks, and taking on only the amount of work that can be completed in that time; kanban instead limits how many tasks can be in progress at any given time, without a specific time limit on how long a task takes to complete.
      • Security-wise, there is no real difference between Scrum and kanban. The challenges that some have seen with agile development are mostly product management and product backlog related, and the work management method does not affect security work.
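The WIP limit mechanism described above can be sketched as a simple invariant. This is a minimal illustration with invented task names, not a model of any specific tool:

```python
class KanbanColumn:
    """A single kanban column that refuses new work beyond its WIP limit."""

    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.tasks = []

    def pull(self, task):
        # Enforce the work-in-progress limit: no new task may enter
        # the column while it is full.
        if len(self.tasks) >= self.wip_limit:
            raise RuntimeError("WIP limit reached - finish something first")
        self.tasks.append(task)

    def finish(self, task):
        self.tasks.remove(task)

in_progress = KanbanColumn(wip_limit=2)
in_progress.pull("write login form")
in_progress.pull("threat model payment flow")
# A third pull would raise until one of the two tasks is finished.
```

The point of the invariant is that work must flow out before new work flows in; Scrum achieves the same effect with a fixed-size timebox instead of a fixed-size column.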
    • The build, test automation, delivery and deployment pipeline, often referred to as “CI” (for Continuous Integration) or “CI/CD” (Continuous Integration / Continuous Delivery).
      • This is the place where code goes once it has been committed, potentially after peer review. This includes automated tests and anything that needs to be done to package the code into something that runs wherever it is supposed to run.
      • This pipeline is currently a really hot topic in software security. Not only are existing automated tools (such as static analysis tools, about which more later) being integrated into CI/CD, but dynamic analysis tools are being introduced here as well.
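The pipeline can also act as a gate that stops a delivery when analysis findings are severe enough. A minimal sketch; the finding records and severity levels here are invented for illustration:

```python
# Minimal sketch of a security "gate" in a build pipeline: collected
# findings are checked, and high-severity ones stop the delivery.
# The finding records are invented for illustration.

def gate(findings, blocking=("critical", "high")):
    """Return True if the build may proceed, False if it must stop."""
    blockers = [f for f in findings if f["severity"] in blocking]
    for f in blockers:
        print(f"BLOCKED by {f['id']} ({f['severity']}): {f['title']}")
    return not blockers

findings = [
    {"id": "SA-101", "severity": "high", "title": "SQL built by string concatenation"},
    {"id": "SA-102", "severity": "low", "title": "Unused variable"},
]

if not gate(findings):
    # In a real pipeline this would be a non-zero exit code,
    # which stops the delivery of the build artefact.
    print("build stopped at the security gate")
```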

Part 2: Software security activities in a software development process

  • A “secure” software development process is often termed SDLC, Secure (or Security) Development Lifecycle. (Sometimes SDLC means Software Development Lifecycle, in which case the secure version is S-SDLC. You’re welcome.)
    • When the lifecycle also supports privacy, the term that gets thrown around a lot is Privacy by Design - more on that on the 5th lecture.
  • It is very typical for a company to start from security testing - and within testing, usually from exploratory security assessments - and then move towards development-time ”secure coding”.
  • Security testing, however, is by nature trying to detect what has already gone wrong.
  • The contribution of the various software project activities to software security:
    • Product management & requirements engineering: Avoidance of architectural and design level security issues; giving enough time and people for secure software engineering; balancing functional and non-functional security requirements.
      • ”Functional” requirements are those that actually result in a feature or functionality. A password dialog, use of a security protocol, firewall configuration, sandboxing, and similar things are ”functional”.
      • ”Non-functional” requirements are about how something has been implemented. This is about avoidance of exploitable weaknesses in any part of the code (not just security-related code) and qualities such as usability and performance of security features.
    • Design: Avoidance of architectural and design level security issues, mainly through creation and updating of an attack model (what is it that we are trying to protect against) and threat modelling (also called architectural risk analysis).
    • Development: Avoidance of design and implementation level security bugs, and using security features correctly so that they actually offer the intended security control effect.
    • Building: Specifically in an automated build setup, offers automation opportunities for various security activities. The most obvious contenders are static code analysis and security testing. The build process can also act as a ”gate”, where security issues can stop the delivery of code into further testing or production.
    • Testing: Verifying whether the security functionality works as expected (through positive and negative testing), and trying to determine whether there are any non-functional security issues such as robustness problems. The levels and types of testing and their contribution to a security test vary.
      • Unit testing: Positive and negative tests for security features. (Does a wrong password return an error code? Does an invalid input get filtered?)
      • System testing: Positive and negative tests for security features, and robustness testing. (Does injection of bad data cause issues downstream?)
      • Acceptance testing: User story / use case level checks for security features, and verifying that ”attacker use cases” have the expected result. Exploratory security testing. (What if an attacker tries to brute-force a password? What can we see in an HTTP request on the wire? Is the client actually checking certificates properly?)
      • Security assessments, penetration testing (pentesting) or intrusion testing: These are usually exploratory test runs that are often done by an external party due to lack of in-house expertise, or because of a regulatory or contractual requirement.
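The positive and negative unit tests mentioned above can be sketched with Python's unittest. The check_password and sanitize functions are hypothetical stand-ins for real application code:

```python
import unittest

# Hypothetical stand-ins for the application code under test.
def check_password(stored, attempt):
    # A real implementation would compare salted hashes in constant time.
    return stored == attempt

def sanitize(value):
    # Keep only alphanumeric characters; everything else is dropped.
    return "".join(ch for ch in value if ch.isalnum())

class SecurityFeatureTests(unittest.TestCase):
    def test_correct_password_accepted(self):        # positive test
        self.assertTrue(check_password("hunter2", "hunter2"))

    def test_wrong_password_rejected(self):          # negative test
        self.assertFalse(check_password("hunter2", "hunter3"))

    def test_invalid_input_filtered(self):           # negative test
        self.assertEqual(sanitize("bob'; DROP TABLE--"), "bobDROPTABLE")
```

Run with python -m unittest. Tests like these belong in the CI pipeline, so that a regression in a security feature fails the build automatically.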
    • Deployment: Deployment automation provides a faster round-trip time for security fixes, and the deployment configuration is where ”hardening” happens for the underlying platform. For applications that run on devices that the software vendor doesn’t control, this includes secure software update systems. In many cloud deployments, deployment-time security may be just as important to get right as the development-time security.
    • Operations: Involves monitoring the system for signs of an attack, and responding to them, as well as ensuring that the software dependencies, the platform, and the network stay secure, unless these are provided as a part of the deployment. This also provides data for reacting to security issues (e.g., logging and monitoring).
    • Bug management and support operations: From a security perspective, operating a security contact point for internal and external reports, tracking known vulnerabilities in dependencies and the platform, and feeding all of this back into development.
    • Decommissioning: From a security perspective, data retention and erasure are the notable areas here.
  • To summarise all the above: Software security is a chain of events throughout the whole ”lifecycle” of software creation. The activities, as described here, support and build on each other.

Part 3: Software security frameworks

  • There are a number of software security activities in a software development organisation that are not directly connected to implementation activities.
  • In addition, it is important to have a balanced set of software security activities. An example:
    • Attackers are usually creative. This hints that it would make sense to have a creative element on the software development side as well. You could do that through some sort of threat modelling activity, or by performing exploratory security testing yourself.
    • Humans are usually bad at repetitive tasks. Some tasks lend themselves very well to automation - checking for known vulnerabilities in dependencies is a good example - so there is a place for less creativity and more automation here. This reduces the probability of human error.
    • There should be activities both during development and operations. The development side is more about preventing issues; operations is about detection and response.
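The dependency check mentioned above is a good example of automatable work. A minimal sketch; both the inventory and the advisory data are invented for illustration, and a real tool would pull advisories from a vulnerability database:

```python
# Sketch of an automated known-vulnerability check for dependencies.
# Both the inventory and the advisory data are invented; a real tool
# would pull advisories from a vulnerability database.

installed = {"requests": "2.5.1", "flask": "1.1.2", "leftpad": "0.9"}

advisories = {
    # package -> set of versions with known vulnerabilities
    "requests": {"2.5.0", "2.5.1"},
    "leftpad": {"0.8"},
}

def vulnerable(installed, advisories):
    """Return the (package, version) pairs with known vulnerabilities."""
    return sorted(
        (pkg, ver) for pkg, ver in installed.items()
        if ver in advisories.get(pkg, set())
    )

for pkg, ver in vulnerable(installed, advisories):
    print(f"{pkg} {ver} has a known vulnerability - update it")
```

Because this is pure list-matching with no creativity involved, it is exactly the kind of task that should run automatically on every build rather than rely on a human remembering to do it.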
  • BSIMM (”Building Security In Maturity Model”) is a project by Synopsys, a security consultancy.
    • Based on actual interviews with companies, trying to detect what exactly they are doing. The model usually gets an annual update.
    • Summary data is available under a permissive license, and this gives a good view of what companies are doing in reality. However, a company that takes part in BSIMM is likely to be ”in the know” already, so I suspect the data paints a rosier picture than reality.
    • 12 ”practice areas” containing a total of 111 ”activities”, in three ”maturity levels”.
  • OpenSAMM, or SAMM (”Software Assurance Maturity Model”) is an OWASP project that has common, although very distant, roots with BSIMM. You can compare and contrast OpenSAMM with BSIMM activity areas.
    • OpenSAMM is more prescriptive, as it is not based on actual research on existing companies. In spring 2015, however, the project said it was planning to provide data on its use.
  • ISO 27034 is one of the newest members in the ISO 27000 series of standards.
    • The first part has been published as ISO 27034-1, and it specifies the information model for a software security program.
    • Specifies vocabulary and concepts for software security process requirements for an organisation.
    • One peculiar thing about ISO 27034-1 is that it uses Microsoft SDL as an example in its appendix.
    • My personal guess is that ISO 27034 will become more important as the more specific parts get published; it may also affect the vocabulary. Companies that currently require ISO 27001 (the canonical ”infosec” standard) from their vendors might at some point require ISO 27034 too.
  • PA-DSS (Payment Application Data Security Standard), a part of PCI-DSS (Payment Card Industry…). Any application processing (or storing or transmitting) credit card numbers needs to conform to this. PCI-DSS is a big reason why websites use third party payment processors - they don’t want to touch the card data.
  • Specifically for the Finnish audience, there is a VAHTI (”Valtionhallinnon tietoturvaohje”, ”Government information security guideline”) Application Security guideline from 2013. The document mainly has national significance, but may soon become a de facto requirement in governmental and public sector procurement.

Part 4: In preparation for the weekly exercise - Typical tooling

  • There are a lot of vendors providing automated or semiautomated tools that they hope software development projects would integrate into their lifecycle. Three major tool types with a significant market are discussed here.
  • This is not intended to explain the deep details and usage modes of each tool, but only as a quick explanation of the categories and aims in order to be more productive in the weekly exercise.
  • Code analysis tools
    • ”Static” analysis is ”static” because the code does not execute during analysis, in contrast with ”dynamic” tools, where the code is executed. The two approaches are often referred to as SAST and DAST (Static/Dynamic Application Security Testing).
    • Linters (static)
      • Linters are mainly for purely syntactic analysis. These tools are often used in editors or IDEs to highlight lines that have syntax errors, use undefined variables, or something like that. Most languages have one or more linters available.
      • Anyone who programs should use a linter. Linters are really great productivity boosters when used in conjunction with syntax highlighting in an editor. However, linters often cannot detect security bugs.
      • Some linter-type tools provide metrics that may be indirectly indicative of security issues. For example, McCabe’s cyclomatic complexity counts the linearly independent paths through a block of code. The argument is that if the cyclomatic complexity is too high (often more than 10), it becomes hard for the programmer to keep track of what is happening, giving rise to bugs - including security bugs. Simple metrics such as this should not be dismissed as a security risk measurement. They can also be used to identify parts of the code that need more manual code review or refactoring.
      • Open source examples include pylint and pyflakes for Python and JSLint for JavaScript.
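A rough approximation of McCabe's cyclomatic complexity can be computed by counting branch points in the syntax tree. This sketch covers only a few node types; real checkers (such as the mccabe plugin used by many Python linters) are more thorough:

```python
import ast

# Rough cyclomatic complexity: 1 plus the number of branch points.
# Only a few common node types are counted here, for illustration.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source):
    """Approximate the cyclomatic complexity of a piece of Python source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

code = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for i in range(x):
        if i % 2:
            pass
    return "positive"
"""

# Two ifs, one elif, and one for-loop: 4 branch points, complexity 5.
print(cyclomatic_complexity(code))
```

A straight-line function scores 1; every additional branch adds one path a reviewer (and a test suite) has to keep track of, which is why a high score flags code for review or refactoring.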
    • Data flow analysis (static)
      • Static analysis tools that actually do data flow analysis can be useful in finding actual security bugs.
      • Static analysis tools can do ”tainting” (see session 2) statically. They are able to track the variables and branching through the code, and they can do a series of ”what if” tests. They can detect blocks of ”dead” code, meaning code that cannot be reached by any inputs. They can also detect data flows between untrusted inputs and dangerous outputs.
      • Static analysis tools are usually either language-specific, or they could compile the original source into an intermediate language. Some tools work directly with the compiled binary or bytecode.
      • The effectiveness of a static analysis tool is highly dependent on its capacity to support the specific language, platform, and the set of frameworks and libraries that you are using.
      • Open-source examples of static analysis tools include FindBugs for Java.
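The ”tainting” idea can be made concrete with a toy runtime version; a static analyser reasons about the same source-to-sink flows without executing the code. Everything here (the Tainted type, the source and sink functions) is invented for illustration:

```python
# Toy illustration of taint tracking. A static analyser reasons about
# the same source-to-sink flows without running the program; this
# runtime version just makes the idea concrete.

class Tainted(str):
    """A string that remembers it came from an untrusted source."""
    def __add__(self, other):
        # Taint propagates through concatenation.
        return Tainted(str(self) + str(other))
    def __radd__(self, other):
        return Tainted(str(other) + str(self))

def untrusted_input(value):
    # Taint source: anything arriving from the user is marked.
    return Tainted(value)

def run_query(sql):
    # Dangerous sink: refuse tainted data.
    if isinstance(sql, Tainted):
        raise ValueError("tainted data reached an SQL sink")
    print("executing:", sql)

user = untrusted_input("alice' OR '1'='1")
query = "SELECT * FROM users WHERE name = '" + user + "'"

try:
    run_query(query)
except ValueError as err:
    print("blocked:", err)
```

The interesting property is that the taint survives concatenation: the query string built from the user input is itself tainted, which is exactly the untrusted-input-to-dangerous-output flow a static analyser tries to detect.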
    • Dynamic analysis
      • Dynamic analysis tools can be used to monitor the program when it is running (hence ”dynamic”).
      • Dynamic tools can, for example, detect memory leaks and detect if a freed memory block is used. These analyses are especially useful if you are programming in a low-level language and have to do memory management yourself.
      • Dynamic tools can also be used for a number of non-security-specific profiling tasks, so they are useful in performance engineering. They might be already in use in a project because of one of these reasons.
      • Open-source examples of dynamic analysis tools include Valgrind, and compiling/linking target binaries with Address Sanitizer (see week 1 exercise).
  • Security testing tools
    • These groups are not really separate; an attack proxy can be used to ”scan” a web application and to test its robustness. The three groups described here are an artificial division - not any kind of officially recognised one - but a useful one to bear in mind.
    • In selecting what tools are going to be used, it is useful to bear in mind the totality of software security activities performed in a project. If there is a significant amount of code review being done, perhaps it would be better to start tooling from fuzzing. If you already do a lot of testing after the implementation, perhaps your best bet would be to invest in something that precedes implementation.
    • In a typical real-life case, a manager (with no security experience) orders a ”scan” for a web application. The word elicits thoughts of an automated activity that somehow, actively or passively, does a comprehensive check of the target application. In most cases this is not what happens: either the scan finds no complex issues, or it is not really a scan but exploratory testing augmented by automated and semiautomated tools.
    • Robustness testing tools
      • This group includes fuzzers and other tools that inject incorrect inputs. Fuzzers were already discussed in the first session, so please refer there for details.
      • Open-source examples include Radamsa (which you used in the first exercise), the American Fuzzy Lop (AFL), Peach, Bunny the Fuzzer, and many more.
    • Attack proxies and sniffers
      • This group of tools acts as a middleman for traffic, or looks at the traffic. Attack proxies were described in the second session, so you are already very familiar with them.
      • Sniffers and protocol analysers are used to look at the traffic. One really useful mode of using a protocol analyser is just to grab some traffic and have an expert look at it. It is surprising how often there are some oddities that may point to a bug. This, however, requires quite a bit of experience to see what is normal and what isn’t.
      • Open-source examples include Zaproxy (an attack proxy) and Wireshark (a sniffer/protocol analyser).
    • Configuration checkers
      • Configuration security sits on the borderline between operations security and software security. As a part of the move towards DevOps (combined development and operations), operational scanners are becoming more widespread in development teams, too.
      • Examples include Docker Bench for Security, which checks the status of a Docker deployment, and similar tools for checking TLS configuration.
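A configuration check can also live in code. This sketch asserts a minimum-TLS-version policy on a Python ssl context before deployment; the policy itself is just an example:

```python
import ssl

# Sketch of a configuration check: verify that our TLS client settings
# meet a minimum policy before deployment. The policy here (TLS 1.2+,
# certificate verification on) is an example, not a recommendation list.

def tls_config_ok(context):
    """Policy: TLS 1.2 or newer, and certificate verification enabled."""
    return (context.minimum_version >= ssl.TLSVersion.TLSv1_2
            and context.verify_mode == ssl.CERT_REQUIRED)

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
print("policy check:", "PASS" if tls_config_ok(ctx) else "FAIL")
```

Running a check like this in the deployment pipeline turns the hardening policy into something machine-verifiable instead of a checklist item.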
    • Operational & deployment tools
      • Web application firewalls
        • WAFs try to detect anomalous / attacker traffic before it hits the target system.
        • Instead of patching a web application, some companies just add a ”detection” to a WAF.
      • Sensors
        • Sensors sit on hosts that are running workloads (for example, virtual hosts that are running Docker containers), and look at various operating system parameters.
        • If the sensors detect anomalous behaviour, they’ll trigger an alert.
        • An open-source example (although tied to a commercial offering) is Capsule8; in the more classical setting, Tripwire is a well-known example of host-based detection.
    • Virtualisation and sandboxing
      • Virtualisation allows running untrusted code in a sandboxed environment.
      • Operating systems also provide facilities for this, such as Linux Containers. Specifically on Linux, AppArmor is an easy-to-use system for providing extra insulation. Docker and its libcontainer provide a fairly straightforward way to leverage these.
    • Logging and monitoring
      • Crossing over to more traditional IT security: ensuring that your application can be (audit) logged and monitored falls into the functionality category, but log analysis tools give real-time operational visibility and let you react if something goes wrong. They also help you see what ”normal” looks like.
    • Asset tracking (inventory)
      • For example, what versions of software and their dependencies are we running? Are we running just those processes we are supposed to? Do we only have the correct ports open?
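The inventory questions above largely boil down to comparing expected state against observed state. A minimal sketch with invented data; a real system would collect the observed side with agents or scanners:

```python
# Sketch of an asset-tracking check: compare the expected state of a
# host against what is actually observed. All data here is invented.

expected_ports = {22, 443}
observed_ports = {22, 443, 8080}

expected_processes = {"nginx", "app-server"}
observed_processes = {"nginx", "app-server", "cryptominer"}

# Set difference: anything observed but not expected is an anomaly.
unexpected_ports = observed_ports - expected_ports
unexpected_processes = observed_processes - expected_processes

for port in sorted(unexpected_ports):
    print(f"unexpected open port: {port}")
for proc in sorted(unexpected_processes):
    print(f"unexpected process: {proc}")
```

The hard part in practice is not the comparison but keeping the ”expected” side accurate; that is what the inventory exists for.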

Reading List

This is a list of useful documents that will enhance your understanding of the course material. The reading list is session-by-session. “Primary” material means something you would be expected to read if you are serious about the course, and may help you to do the weekly exercise; “Additional” material you may want to read if you would like to deepen your understanding on a specific area.

Primary material

  • Have a look at the BSIMM software security activity list. This is a list of software security activities that many companies have been observed as performing. Under each activity, there are a number of examples of how companies have actually done this.
  • Have a look at Microsoft’s SDL process guidance. You can click on the various phases of the SDL to see what sort of activities Microsoft would mandate in their development.

Additional material

  • Ross Anderson: Security Engineering, 2nd Edition, 2008. This is a great book, and fun to read, although it is thick enough that you could probably build a house out of copies. If you want to get serious about security engineering - especially systems with unique properties, not just run-of-the-mill web apps - then I would really recommend reading the book in its entirety at some convenient time. For the purposes of this session, Chapter 25, Managing the Development of Secure Systems, is interesting.
  • If you are interested in agile and lean methods, I have written about software security in agile product management, especially driving security activities through product backlogs. An article of mine is available in the Handbook of The Secure Agile Software Development Life Cycle, pages 16-22.


This is lecture support material for the course on Software Security held at Aalto University, Spring 2018. It is not intended as standalone study material.

Created by Antti Vähä-Sipilä <>, @anttivs. Big thanks to Sini Ruohomaa and Prof. N. Asokan.
