
Tuesday, April 6, 2021

Keeping track of application security flaws

This article provides a granular view into how to track and visualise application security flaws. It builds upon a previous article that provided a high-level overview of how to keep track of your application security posture. We'll do a deep dive into a few metrics identified within the original article. This will help you to understand how to visualise the metrics and what to look out for when analysing / trending the data.

Before getting started, it's important to be aware that garbage in (flawed input) leads to garbage out (flawed output). Ensure the flaws identified by your security controls are genuine. A combination of false positives (incorrectly identifying a vulnerability) and false negatives (failing to identify a vulnerability that does exist) will distort your findings.

Reporting on flawed data can be particularly problematic as you may incorrectly prioritise and resource unnecessary mitigations or fail to act in situations where mitigations are required.

Finding / Flaw Creation Rate

Track the rate of newly created flaws over a set period. Flaws are often introduced due to:

  • The deployment of changes;
  • Newly identified vulnerabilities in utilised technologies;
  • A failure to maintain technologies or adhere to security best practice standards.
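As a rough sketch of how this rate could be tracked, the snippet below (Python, with hypothetical findings data; in practice this would come from your scanner or tracker export) counts newly identified flaws per calendar month:

```python
from collections import Counter
from datetime import date

# Hypothetical findings exported from your tooling: (date identified, severity).
findings = [
    (date(2021, 1, 12), "High"),
    (date(2021, 1, 25), "Medium"),
    (date(2021, 2, 3), "High"),
    (date(2021, 2, 17), "Low"),
    (date(2021, 2, 20), "Critical"),
]

def creation_rate_by_month(findings):
    """Count newly created flaws per calendar month (keyed YYYY-MM)."""
    return Counter(identified.strftime("%Y-%m") for identified, _severity in findings)

print(creation_rate_by_month(findings))
```

Grouping by month makes upward or downward movements in the creation rate visible at a glance.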

What to look out for

Be aware that both upward and downward trends require further investigation, as each can be either positive or negative.

No change

If the number of identified flaws remains consistent, this indicates that the security posture of your application(s) is being maintained.

Upward trend

Positive
This can indicate improvements in your capability to identify flaws, for example through increased:

  • Effectiveness of flaw identification tools / techniques;
  • Scope of systems covered by your programme.

Negative
This can indicate declining security standards.

Downward trend

Positive
This can indicate improving security standards.

Negative
This can indicate a reduction in your capability to identify flaws, for example through decreased:

  • Effectiveness of flaw identification tools / techniques;
  • Scope of systems covered by your programme.

Finding / Flaw Remediation Rate

Track the rate of flaw remediation over a set period. Flaws are remediated due to:

  • The deployment of changes;
  • Patching of vulnerabilities in utilised technologies;
  • Maintenance of technologies or alignment to security best practice standards.

What to look out for

Be aware that both upward and downward trends require further investigation, as each can be either positive or negative.

You will need to ensure that flaws marked as remediated have been fixed. Incorrectly closing flaws distorts the remediation rate as well as the overall security posture of the application.

No change

If the number of remediated flaws remains consistent this indicates that the resourcing level is being maintained.

Upward trend

Positive
This can indicate an increased level of effort / resourcing or more effective enforcement of security standards.

Negative
This can indicate potential gaming of the vulnerability management process. Check to ensure that flaws are not being incorrectly closed or suppressed.

Downward trend

Positive
This can indicate a reduction in the total number of outstanding flaws.

Negative
This can indicate a decreased level of effort / resourcing or a decline in enforcement / adherence to security standards.


Flaw Growth Rate

The growth rate is derived from the flaw creation and remediation metrics. This is calculated by:

Flaw Growth Rate = Flaw Creation Rate - Flaw Remediation Rate
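A minimal sketch of this calculation, using hypothetical monthly figures chosen to be consistent with the worked example later in the article (46 net new flaws over six months):

```python
def flaw_growth_rate(created_per_month, remediated_per_month):
    """Growth rate per period = creation rate - remediation rate."""
    return [c - r for c, r in zip(created_per_month, remediated_per_month)]

# Hypothetical six months of data.
created = [14, 16, 15, 18, 17, 16]
remediated = [8, 9, 8, 9, 8, 8]

growth = flaw_growth_rate(created, remediated)
print(growth)       # per-month growth in open flaws
print(sum(growth))  # net change in the backlog: 46
```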

What to look out for

Upward and downward trends have a clear positive and negative correlation. From a security risk perspective, you want to see either no change or a downward trend to ensure that the associated level of risk is at least being maintained.

No change

A flat growth rate indicates that the security posture is being maintained at a consistent level. This may be an issue if you have a significant backlog of flaws.

Upward trend

Negative
This indicates that the number of open flaws is increasing. An upwards trend can be a good indicator of increasing risk exposure.

Downward trend

Positive
This indicates that the number of open flaws is decreasing. A downwards trend can be a good indicator of decreasing risk exposure.


Visualising the data

The following charts demonstrate how you can visualise this reporting to support the analysis of your data. The charts have been created from the table below.

Flaw Creation & Remediation Rates

The below bar chart summarises the flaws identified and remediated over a period of six months.

If you were seeing this within your own data, you would want to determine why there is such a disparity between the rates of flaw creation and remediation. This highlights a concerning upward trend.

Flaw Growth Rate

The below waterfall chart demonstrates the flaw growth rate over a period of six months. This clearly identifies a growth trend and shows that the total number of flaws has grown by 46 over that period.

Whilst the chart demonstrates a lagging (historic) indicator, it can also be used as a leading (future) indicator when projecting trends. Given the identified average monthly growth rate of approximately 8 flaws, you can predict that on the current trajectory the backlog of flaws will double to 92 within the next six months. This provides a clear indication of increasing risk exposure.
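The projection is simple linear extrapolation. As a sketch, assuming the observed average monthly growth rate holds:

```python
def project_backlog(current_backlog, avg_monthly_growth, months):
    """Linear projection of the open-flaw backlog, assuming the trend holds."""
    return current_backlog + avg_monthly_growth * months

# 46 open flaws, growing at 46/6 per month, double in another six months.
print(project_backlog(46, 46 / 6, 6))  # 92.0
```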

What actions should you consider taking?

On the basis that the reported data is correct there are a couple of actions that you will want to take.

Reduce newly created flaws

It's always easier and more cost-effective to address flaws early in the software development lifecycle. You will want to consider:

  • Defining and enforcing a secure coding standard;
  • Integrating security tools (e.g. SAST, DAST) into the development lifecycle;
  • Improving the security capability of your developers / testers;
  • Improving security team engagement in the development workflow.

These actions will help to reduce the number of identified flaws within new deployments.

Remediate the flaw backlog

The flaw growth rate has led to 46 flaws in the production environment. The existing remediation priority / resourcing is insufficient to contain the flaw backlog, let alone reduce it. Investigate what can be done to increase the rate of remediation.

Correlating the flaw growth rate with the underlying risk will help you identify where the level of risk is outside of your company's appetite. In doing so, this may help you secure increased resource allocation to address the flaws.

Tuesday, October 27, 2020

Keeping track of your applications' security posture

There are so many different indicators that you can use to track the security posture of your applications. The challenge is to determine what to track and what good looks like. Make sure that the indicators that you track offer value and aren’t just creating noise. You’ll also need to differentiate between increased risk and improved effectiveness of security controls.

The intention is to comprehensively implement controls and ensure they are efficient and effective at meeting the desired outcome. Do your indicators help you do this?

In this article I’ll be discussing application security indicators and some wider considerations around how to use them to monitor the ongoing effectiveness of your security initiatives and programs.

General Considerations

Some basic considerations:

  • Check which CVSS version your tools utilise; where possible use a consistent version;
  • Check comparable findings across tools to ensure vulnerabilities have consistent severity ratings;
  • Question the validity of the findings, as not all findings will be valid;
  • Tools require refinement to reduce false positive and false negative results;
  • Watch out for exceptions being created around false results; check that these are, and remain, valid.

Quality of findings

It's important to note that the quality of findings will vary across controls. Penetration testing involves an individual proving the existence of a vulnerability. Automated tools, especially those that are unauthenticated, have a higher probability of identifying false results.

Of the total findings identified check how many are:

  • True positive;
  • False positive – a vulnerability is incorrectly identified;
  • False negative – an existing vulnerability is not identified.

You’ll need to go through a process of refinement to reduce the number of false findings. This may involve recording respective false findings as exceptions within your tools.

Potential Indicators

There are lots of potential indicators that you can track. The following are some indicators for you to consider. These are all impacted by the quality of findings and the consistency of ratings. Refine the tools used to reduce any false results and ensure consistent severity rating of findings.

Control Implementation Indicators

According to NIST 800-55 implementation measures demonstrate “progress in implementing programs, specific security controls, and associated policies and procedures.”

Number of automated tests and tools

Determine what proportion of your applications are in scope for your security services. From that scope check how many are covered by your services. It doesn’t matter how effective your controls are if the scope of implementation is limited.
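As a minimal sketch, the covered proportion is a simple ratio (the figures below are hypothetical):

```python
def coverage_pct(in_scope, covered):
    """Percentage of in-scope applications covered by a security service."""
    return 100 * covered / in_scope

# Hypothetical: 78 of 120 in-scope applications are covered by a tool.
print(coverage_pct(120, 78))  # 65.0
```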

Assessment Coverage

Check what proportion of the application is covered by the assessment. Seek to maximise the coverage of your controls.

Assessment Frequency

Check how frequently security assessments are being performed against the in-scope applications. Does this frequency adhere to your own or industry-defined standards?

There is a balance to be found according to your organisation's risk profile and the resources available to you. Testing:

  • Infrequently will leave vulnerabilities undetected on your applications for longer durations;
  • Frequently will provide you with a more frequent snapshot of your security posture but is more resource intensive to operate.

Control Effectiveness / Efficiency indicators

According to NIST 800-55 Effectiveness / Efficiency measures “monitor if program-level processes and system level security controls are implemented correctly, operating as intended, and meeting the desired outcome”. The following are some potential indicators to consider.

Number of application vulnerabilities

At a very basic level you can track the number of vulnerabilities across one or more applications. The number is typically tracked along with the respective severity ratings:

  • Critical;
  • High;
  • Medium;
  • Low.

Ratings reflect a vulnerability's exploitability and impact. The vulnerability score is typically derived using the Common Vulnerability Scoring System (CVSS). It is important to note that a combination of vulnerabilities can represent a higher severity to the application than the individual ratings considered in isolation. For instance, two medium findings combined may constitute a critical vulnerability.

Type of vulnerability

Tracking the type of vulnerability helps to apply context to the vulnerabilities identified across applications. Example types include:

  • Cross Site Scripting (XSS);
  • SQL Injection;
  • Cross Site Request Forgery (CSRF);
  • Insecure Cryptographic Storage.

For a more comprehensive list of web application vulnerabilities see the OWASP Top 10.

Look out for consistent / repeat findings across one or more applications. These can be an indicator of weak standards or poor implementation of standard requirements. Seek to understand the root cause rather than trying to address findings on a case by case basis.

Source of discovery

It is common to have multiple controls that are used to identify application vulnerabilities. Consider correlating the number, type and quality of findings with the source tool. This helps to determine which tools and processes are most effective and offer the greatest return on investment (ROI).

Average Time to Fix or Defect Remediation Window (DRW)

From the initial identification of a vulnerability, track how long it takes in days until it is verified closed. Companies will typically define time-to-fix requirements in accordance with the finding's severity rating.

Make sure you:

  • Refer to the original identified date as findings often come from multiple sources;
  • Verify that findings have been closed.

This indicator will help determine if vulnerabilities are being addressed within the timescales set out in your standards. Repeated failure to meet the agreed timescales can be a good indicator of increased risk exposure.
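A sketch of how the remediation window could be checked against severity-based timescales (the SLA figures below are assumptions for illustration, not a standard; substitute your own):

```python
from datetime import date

# Assumed remediation SLAs in days, by severity; use your own standards.
SLA_DAYS = {"Critical": 14, "High": 30, "Medium": 90, "Low": 180}

def days_to_fix(identified, verified_closed):
    """Days from original identification to verified closure."""
    return (verified_closed - identified).days

def breaches_sla(severity, identified, verified_closed):
    """True if the verified fix landed outside the agreed remediation window."""
    return days_to_fix(identified, verified_closed) > SLA_DAYS[severity]

# A High finding fixed in 37 days breaches an assumed 30-day SLA.
print(breaches_sla("High", date(2021, 1, 4), date(2021, 2, 10)))  # True
```

Remember to measure from the original identified date, not the date the finding was re-reported by another source.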

Finding / Flaw Creation Rate

Track the rate of new findings over a set period. It's worth noting that an increase in findings may reflect an improvement in your ability to identify findings rather than a drop in development standards.

Finding / Flaw Remediation Rate

Track the rate of remediated findings over a set period. Make sure you verify that findings are closed. It’s possible that an applied fix isn’t sufficient to address the finding.

Finding / Flaw Growth Rate

If the finding creation rate exceeds the finding remediation rate this is a key indicator that vulnerabilities are increasing. This is a good way to identify increasing risk exposure.

Flaw Growth Rate = Flaw Creation Rate – Flaw Remediation Rate

Factor in the severity rating of findings. An increased growth rate driven by low-severity findings alone may not indicate increased risk exposure.

Density

Not all applications are created equal. Larger applications have a greater attack surface that can be exploited, and it's reasonable to expect them to contain a greater number of findings.

Consider tracking the number and severity of findings according to a defined density such as lines of code, number of pages or screens. This will be a more effective indicator of the security standard of an application.

Rate of findings per lines of code = Number of Findings / Lines of Code
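As a sketch, the density can be computed as findings normalised per thousand lines of code (KLOC); the figures below are hypothetical:

```python
def defect_density(num_findings, lines_of_code, per=1000):
    """Findings per `per` lines of code (per KLOC by default)."""
    return num_findings * per / lines_of_code

# A larger application with more findings can still have the lower density.
print(defect_density(40, 200_000))  # 0.2 findings per KLOC
print(defect_density(15, 25_000))   # 0.6 findings per KLOC
```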

It may be difficult to source sizing details for applications. Investigate what tools are available to you to support the gathering of metadata relating to your applications.

Weighted Risk Trend (WRT)

It is difficult to articulate the actual risk of applications based on just the volume and severity of their findings. WRT makes it possible to derive a single measure which uses the business criticality of the application to determine risk in the context of the business.

WRT = ((critical multiplier x critical defects) + (high multiplier x high defects) + (medium multiplier x medium defects) + (low multiplier x low defects)) x Business Criticality

Choose severity multipliers that work within your organisation and make sure the calculations you use are transparent to the intended audience. Multipliers could be assigned on a simple scale of 1 (low) to 4 (critical), or with greater weighting afforded to the more significant severities, such as:

  • Critical = 9;
  • High = 6;
  • Medium = 3;
  • Low = 1.
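Using the weighted multipliers above, the WRT calculation might be sketched as follows (the 1-to-5 business criticality scale used here is an assumption; pick whatever scale your organisation uses):

```python
# Severity multipliers from the weighted scheme above.
MULTIPLIERS = {"Critical": 9, "High": 6, "Medium": 3, "Low": 1}

def weighted_risk_trend(defect_counts, business_criticality):
    """WRT = sum(severity multiplier x defect count) x business criticality."""
    weighted = sum(MULTIPLIERS[severity] * count
                   for severity, count in defect_counts.items())
    return weighted * business_criticality

# Hypothetical application rated criticality 3 on an assumed 1 (low) to 5 scale.
print(weighted_risk_trend({"Critical": 1, "High": 2, "Medium": 4, "Low": 10}, 3))
# (9*1 + 6*2 + 3*4 + 1*10) * 3 = 129
```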

Rate of Defect Recurrence

Identify the recurrence of previously remediated findings. Look out for:

  • Version control related issues that lead to loss of security fixes in the production environment;
  • Remediation of issues without solving the root cause of the problem;
  • Developers lacking security awareness who continue to produce insecure code, overwriting that which was previously secure.

Consider Business Context

Security findings need to be considered in the context of the wider organisation. For instance, an application handling confidential / regulated data will pose a greater risk to the company than a brochureware site containing publicly accessible data. Look to prioritise your resources to effectively manage the security risk of your applications.

Application criticality

Determine the criticality of your applications to the organisation. This is often used to determine / prioritise the scope of security services offered. Typical factors include:

  • Type and volume of data;
  • External availability;
  • Compliance or contractual requirements.

Business Impact Assessments (BIA) are a useful source of information.

Gamify reporting

In isolation it can be hard to articulate what a good application security posture is. Organisations typically own / operate multiple applications; within a large organisation it's reasonable to expect this to be in the hundreds or potentially thousands.

Consider the comparison across regions / business lines / legal entities / departments / functions within your organisation. Identify those with the best and worst security posture and encourage some friendly competition. You could even create a league table so that the different areas can easily see how they compare to each other.
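A league table is straightforward to produce from per-area scores. The snippet below ranks areas best to worst using hypothetical aggregate risk scores (lower is better):

```python
# Hypothetical aggregate risk scores per department; lower is better.
scores = {"Payments": 310, "HR": 95, "Marketing": 150, "Logistics": 220}

def league_table(scores):
    """Rank areas from best (lowest score) to worst."""
    return sorted(scores.items(), key=lambda item: item[1])

for rank, (area, score) in enumerate(league_table(scores), start=1):
    print(f"{rank}. {area}: {score}")
```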

A mean of the overall rating will enable you to trend indicators over time and track increasing / decreasing trends in your organisation's overall security posture.

Summary

Identify a select number of key application indicators that will help you track the effectiveness of your controls. Be careful to avoid tracking data that does not provide a clear indicator of implementation, effectiveness or return on investment (ROI), as this creates noise and detracts from the indicators that are important.

Target indicators to respective audiences to ensure that stakeholders have access to the right information. Identify where indicators are increasing / decreasing and determine what actions are required to address any increasing risk exposures.

It's important to work with stakeholders involved in the delivery / operation of your security controls. Work closely with your development / technical teams to educate them in the usage of the tools and provide guidance around remediation. Make sure that you recognise stakeholders for improvements that are being made and support them in their ongoing journey to improve.

Let me know what application indicators you track and what you have found to work effectively within your organisation.

Monday, June 1, 2020

Your journey towards secure development

This article focuses on understanding web security risks and building the foundations for secure development. This is a large subject area that I will look to address across a number of articles.

It’s important to integrate secure practices into your software development lifecycle. Even with a strong defence in depth approach to security, poor coding practices will leave you susceptible to compromise. Hardened server builds and Web Application Firewalls (WAF) won’t provide complete protection if your applications are insecure. These should be seen as a supplement to and not replacement for secure coding.

There are numerous vulnerabilities that a WAF isn't going to protect you against such as:

  • Script injection (e.g. Magecart-type attacks);
  • Concurrency (multiple concurrent user sessions) flaws;
  • Business logic flaws.

As we become increasingly reliant on software in our everyday lives, maintaining the confidentiality of our data as well as the integrity of our transactions is of paramount importance.

Understand security risks to applications

For web applications the OWASP Top 10 is a great place to build your understanding of the most critical security risks. At the time of writing we're on the 2017 version. The list is maintained by OWASP to ensure it remains reflective of the latest security risks, yet even with these updates there are common flaws repeated across versions. Despite being widely documented with proven mitigations, the likes of SQL injection and Cross-Site Scripting (XSS) remain prevalent across today's web applications. OWASP is a great resource not only for understanding the risks but also for learning how to code securely and perform effective security testing.

There are a wealth of readily available resources. The CWE/SANS TOP 25 Most Dangerous Software Errors is another useful resource to refer to.

If you lack formal company security standards, a decent place to start is to require mitigation of the OWASP Top 10 or CWE/SANS Top 25 vulnerabilities, both within your internal teams and from third parties via Service Level Agreements (SLAs).

A word of note from experience: just because your company is outsourcing to an experienced software development company, don't assume that good secure coding practices will be followed. Expect to be delivered a minimum viable product (MVP) as companies look to reduce costs and maximise profits. Make sure your security standards are included in that minimum. Trying to include these in retrospect after an initial contract is agreed is often far from straightforward. Typical challenges include:

  • Renegotiating a new contract / service agreement – in large organisations this can be particularly onerous. I've been involved in situations like these that took in excess of six months to resolve;
  • The individual who is accountable for the third party may have limited motivation to instigate or oversee delivery of the changes;
  • It can be costly: even if your company has negotiated decent rates for services, changes to those services often come at a premium.

Secure Software Development Framework (SSDF)

Having spent a considerable period of time working in development, I've seen and been involved in development practices of widely varying maturity. Take my word for it that poor development / change management practices are not just limited to small businesses!

It doesn’t matter what your current level of maturity is. The key is to understand your current state and desired state. You’ll then want to define a program to enable transformation (through delivery of milestones) to your desired state.

A good place to start is with reviewing your existing processes against the Capability Maturity Model Integration (CMMI). You need to understand what you have and its current state of maturity before you can look to make improvements.

There are a wealth of resources available providing detail on what your desired state might look like. The OWASP Software Assurance Maturity Model (SAMM) supports the complete software development lifecycle and covers the key categories. For each category, three maturity levels are provided, helping you get started and understand a path of progression. It is comprehensive, covering categories from Threat Assessment through to Education and Guidance.

There are various alternative frameworks to consider.

A framework helps to identify the categories you should consider and enables you to take a structured approach to modelling what you have and determining where you want to be. These frameworks will need to be tailored to your organisation. There is no mandate that you need to achieve the highest maturity ratings across all the categories. You'll need to consider what fits within the appetite of your organisation to understand what levels of maturity will meet your needs.

NIST has released an interesting paper covering how to mitigate the risk of software vulnerabilities by adopting an SSDF. This is a comprehensive resource on the topic and well worth a read.

Final Thought

When developing secure development practices it is vital to understand the types of vulnerabilities you'll need to address. Good coding standards combined with vulnerability mitigations (bespoke or part of a framework) will make substantial improvements to the security posture of your applications.

As the security maturity of your development practices improve you’ll see a drop in the number of vulnerabilities being identified in your production systems through automated vulnerability scanning and penetration testing. This will reduce the potential risk of compromise to your applications and lower the cost of remediation as they are addressed earlier in the lifecycle.

Be proactive in your approach and not reactive:

  • Proactive - avoid vulnerabilities or fix early in the lifecycle;
  • Reactive - playing whack-a-mole with a multitude of vulnerabilities in production; being in a constant cycle of dealing with the issues rather than addressing the root cause of the problem.

It doesn’t matter where you are on your own security journey, every company is at a varying level of maturity. You just need to understand where you currently are and where you are trying to get to. There is no better time than the present to get started!

I’ll be looking to create a series of security in development related articles to cover some other important topics. It would be great to get your thoughts on the topics covered along with any experiences that you’ve had that can be of help to others.