Enjoy Sharing Technology!

Software, Development, DevOps, Security, Troubleshooting

Showing posts with label Web security. Show all posts

Sunday, November 21, 2021

fortify scan: XML Injection

Abstract:

XML Injection is an attack technique used to manipulate or compromise the logic of an XML application or service. The injection of unintended XML content and/or structures into an XML message can alter the intended logic of the application. Further, XML injection can cause the insertion of malicious content into the resulting message or document.



Explanation:

XML injection occurs when:

1. Data enters a program from an untrusted source.

2. The data is written to an XML document.

Applications typically use XML to store data or send messages. When used to store data, XML documents are often treated like databases and can potentially contain sensitive information. XML messages are often used in web services and can also be used to transmit sensitive information. XML messages can even be used to send authentication credentials.

The semantics of XML documents and messages can be altered if an attacker has the ability to write raw XML. In the most benign case, an attacker may be able to insert extraneous tags and cause an XML parser to throw an exception. In more nefarious cases of XML injection, an attacker may be able to add XML elements that change authentication credentials or modify prices in an XML e-commerce database. In some cases, XML injection can lead to cross-site scripting or dynamic code evaluation.

Example 1:

Assume an attacker is able to control the value of the item element (shoes) in the following XML.

<order>

   <price>100.00</price>

   <item>shoes</item>

</order>

Now suppose this XML is included in a back-end web service request to place an order for a pair of shoes. Suppose the attacker modifies the request and replaces shoes with shoes</item><price>1.00</price><item>shoes. The new XML would look like:

<order>

    <price>100.00</price>

    <item>shoes</item><price>1.00</price><item>shoes</item>

</order>

This may allow an attacker to purchase a pair of $100 shoes for $1.

Recommendations:

When writing user supplied data to XML, follow these guidelines:

1. Do not create tags or attributes with names that are derived from user input.

2. XML entity encode user input before writing it to XML (see the sketch after this list).

3. Wrap user input in CDATA tags.
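
For guideline 2, here is a minimal hand-rolled sketch of an XML entity encoder (the class and method names are invented for illustration; a vetted encoding library is preferable in production):

  public final class XmlEncoder {

      // Replace the five XML metacharacters with their entity references.
      public static String encode(String input) {
          StringBuilder sb = new StringBuilder(input.length());
          for (char c : input.toCharArray()) {
              switch (c) {
                  case '<':  sb.append("&lt;");   break;
                  case '>':  sb.append("&gt;");   break;
                  case '&':  sb.append("&amp;");  break;
                  case '"':  sb.append("&quot;"); break;
                  case '\'': sb.append("&apos;"); break;
                  default:   sb.append(c);
              }
          }
          return sb.toString();
      }
  }

With this in place, writing "<item>" + XmlEncoder.encode(item) + "</item>" keeps the Example 1 payload inert: the injected </item><price> fragment is stored as literal text instead of being parsed as markup.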


Monday, November 15, 2021

appscan: CSRF (Cross-Site Request Forgery)

1.1, attack principle

CSRF (Cross-Site Request Forgery), also known as a "One-Click Attack" or "Session Riding" and commonly abbreviated as CSRF or XSRF, is a malicious exploit of a website. Although it sounds like cross-site scripting (XSS), it is very different: XSS exploits the browser's trust in content served by a site, whereas CSRF exploits a site's excessive trust in requests arriving from the browser of a seemingly legitimate, authenticated user. Compared with XSS, CSRF attacks draw far less attention (so the resources to prevent them are also quite scarce) and are difficult to defend against, which makes CSRF arguably more dangerous than XSS.

1.2, case analysis

Background: a user performing a normal bank transfer was attacked via CSRF and his account balance was stolen.

1) User Bob initiates a transfer request to the bank: http://bank.com.cn/transfer?account=bob&amount=1000000&for=bob2. The server authenticates Bob by verifying his session, and the transfer completes normally.

2) The hacker Lisa, who has an account at the same bank, initiates her own transfer request: http://bank.com.cn/transfer?account=bob&amount=1000000&for=lisa. Lisa cannot authenticate as Bob, so the request fails.

3) The website has a CSRF vulnerability. Lisa forges a URL, or an image hyperlink embedding http://bank.com.cn/transfer?account=bob&amount=1000000&for=lisa, and induces Bob to click it. The request is then sent to the bank from Bob's browser with Bob's cookie attached. Bob has just visited the bank's website and his session has not expired, so the browser's cookie carries his valid authentication information.

4) Tragedy strikes: the request http://bank.com.cn/transfer?account=bob&amount=1000000&for=lisa sent to the bank server through Bob's browser is executed, and the money in Bob's account is transferred to Lisa's account.

5) The attack cannot be traced. The bank's logs show only a legitimate transfer request from Bob himself, with no trace of an attack.

1.3, APPSCAN test process

APPSCAN removes the HTTP headers that would interfere with a CSRF attack and initiates the request with the forged Referer header http://bogus.referer.ibm.com/. If the application server responds normally, the application is judged vulnerable to cross-site request forgery.

POST /tg/supplier/supplyFreezeSearch.do HTTP/1.1

Content-Type: application/x-www-form-urlencoded

Accept-Language: en-US

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

Referer: http://bogus.referer.ibm.com

Host: pxxx-core-xxxx.com.cn

User-Agent: Mozilla/4.0 (compatible; MSIE 9.0; Win32)

ec_i=ec&ec_eti=&ec_ev=&ec_efn=&ec_crd=15&ec_f_a=&ec_p=1&ec_s_supplyId=&ec_s_supplyName=&ec_s_reason=&ec_s_flagMean=&ec_s_cdate=&ec_s_beginDate=&ec_s_acceName=&__ec_pages=2&ec_rd=50&ec_f_supplyId=1234&ec_f_supplyName=1234&ec_f_reason=1234&ec_f_flagMean=1234&ec_f_cdate=1234&ec_f_beginDate=1234&ec_f_acceName=1234

HTTP/1.1 200 OK

Date: Mon, 10 Apr 2018 15:17:54 GMT

Location: http://pxxx-core-stg2.paic.com.cn/login

X-Powered-By: Servlet/2.5 JSP/2.1

Set-Cookie: WLS_HTTP_BRIDGE=Ln1YOmot2_3Gzn7sonux8lIOYaSafCnOVQZzmUl8EjaP1lHMMwqP!-1955618416; path=/; HttpOnly

<html><head><title>Welcome to XXX system</title></head>

1.4, defense suggestions

  1) Verify the HTTP Referer field

According to the HTTP protocol, the Referer header records the source address of an HTTP request. Under normal circumstances, a request for a security-restricted page comes from the same website: to access http://bank.example/withdraw?account=bob&amount=1000000&for=Mallory, the user must first log in to bank.example and then trigger the transfer by clicking a button on one of its pages. The Referer of that transfer request is the URL of the page hosting the button, usually an address under the bank.example domain. A hacker mounting a CSRF attack against the bank can only construct the request on his own website, so when a user sends the request to the bank through the hacker's site, its Referer points to the hacker's site. To defend against CSRF, the bank website therefore only needs to verify the Referer of every transfer request: if it begins with the bank.example domain, the request comes from the bank's own site and is legitimate; if the Referer is another website, the request may be a CSRF attack and is rejected.

2) Add a token to the request and verify it. A CSRF attack succeeds because the hacker can completely forge the user's request: all the authentication information the request needs lives in the cookie, so the hacker can pass the security check with the victim's own cookie without ever knowing its contents. The key to resisting CSRF is therefore to include in the request a piece of information that the hacker cannot forge and that does not live in the cookie. You can add a randomly generated token as a parameter of the HTTP request and install a server-side interceptor to validate it; if the token is missing or its value is wrong, the request is rejected as a possible CSRF attack. A sketch of this token check appears after this list.

3) Customize an attribute in the HTTP header and verify it there. This method also uses a token, but instead of putting the token into the HTTP request as a parameter, it places the token in a custom attribute of the HTTP header. Through the XMLHttpRequest class you can add the csrftoken header attribute to all requests of this type at once and put the token value in it. This removes the inconvenience, in the previous method, of adding the token to every request; moreover, an address requested through XMLHttpRequest is not recorded in the browser's address bar, and there is no concern that the token will leak to other websites through the Referer.

4) Use audited libraries or frameworks that do not allow this weakness, such as OWASP CSRFGuard: http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet

or the "ESAPI Session Management" control: http://www.owasp.org/index.php/ESAPI

5) Ensure that there are no XSS vulnerabilities, because XSS usually leads to the theft of user identity information.

6) Do not use the GET method for any request that triggers a state change.
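
As a sketch of defense 2), token issuance and validation might look like the following (class, attribute, and parameter names are invented for illustration):

  import java.security.SecureRandom;
  import java.util.Base64;
  import javax.servlet.http.HttpServletRequest;
  import javax.servlet.http.HttpSession;

  public final class CsrfTokens {

      private static final SecureRandom RNG = new SecureRandom();

      // Issue one unguessable token per session; render it into forms as a hidden field.
      public static String issue(HttpSession session) {
          byte[] raw = new byte[32];
          RNG.nextBytes(raw);
          String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
          session.setAttribute("csrfToken", token);
          return token;
      }

      // Called by the interceptor: reject the request when the token is absent or wrong.
      public static boolean verify(HttpServletRequest request) {
          Object expected = request.getSession().getAttribute("csrfToken");
          String actual = request.getParameter("csrfToken");
          return expected != null && expected.equals(actual);
      }
  }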

1.5, the actual repair plan

Configure a filter in web.xml to intercept the relevant requests. The filter class extends the OncePerRequestFilter parent class and performs the matching check on the request headers; a request that does not match is treated as a CSRF attack and is not executed. A minimal sketch follows.
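
Here is a minimal sketch of such a filter, assuming the Spring OncePerRequestFilter mentioned above (the class name and trusted host prefix are placeholders):

  import java.io.IOException;
  import javax.servlet.FilterChain;
  import javax.servlet.ServletException;
  import javax.servlet.http.HttpServletRequest;
  import javax.servlet.http.HttpServletResponse;
  import org.springframework.web.filter.OncePerRequestFilter;

  public class RefererCheckFilter extends OncePerRequestFilter {

      private static final String TRUSTED_PREFIX = "http://pxxx-core-xxxx.com.cn"; // placeholder

      @Override
      protected void doFilterInternal(HttpServletRequest request,
                                      HttpServletResponse response,
                                      FilterChain chain)
              throws ServletException, IOException {
          String referer = request.getHeader("Referer");
          // A missing or foreign Referer on a protected request is treated as CSRF.
          if (referer == null || !referer.startsWith(TRUSTED_PREFIX)) {
              response.sendError(HttpServletResponse.SC_FORBIDDEN, "Possible CSRF request");
              return;
          }
          chain.doFilter(request, response);
      }
  }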

The filter condition (url-pattern) should be configured according to the actual situation: the vulnerability is not necessarily reported only for requests ending in .do or .html. In such cases additional patterns are needed, and a global /* mapping may be required to match all requests, as in the wiring below.
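
The corresponding web.xml wiring might look like this (the filter and class names are placeholders):

  <filter>
      <filter-name>csrfRefererFilter</filter-name>
      <filter-class>com.example.security.RefererCheckFilter</filter-class>
  </filter>
  <filter-mapping>
      <filter-name>csrfRefererFilter</filter-name>
      <url-pattern>/*</url-pattern>
  </filter-mapping>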

Note also that the web.xml on the server may be overridden by the cached file web_merged.xml, which makes configuration newly added to web.xml ineffective while the old configuration in the cache file is executed. Solution: shut down the server, delete the cache file, and then restart the service.





appscan: Authentication Bypass Using HTTP Verb Tampering

 1.1, attack principle

Insecure HTTP methods (PUT/DELETE/MOVE/COPY/TRACE/PROPFIND/PROPPATCH/MKCOL/LOCK/UNLOCK) allow attackers to modify web server files, delete web pages, or even upload a web shell and harvest user identity information; any of them can create a serious security vulnerability. Developers need to restrict the allowed HTTP request types to prevent unauthorized tampering with server resources.

1.2, case analysis

APPSCAN uses the meaningless HTTP verb BOGUS to initiate a request to the server. The system responds normally, showing that it does not validate the HTTP request type, so an HTTP verb tampering vulnerability exists.

BOGUS /fams/admin/j_security_check HTTP/1.1

Accept-Language: en-US

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

Referer: http://xxx-core-stg1.paic.com.cn/fams/

Host: xxx-core-stg1.paic.com.cn

User-Agent: Mozilla/4.0 (compatible; MSIE 9.0; Win32)

HTTP/1.1 200 OK

Server: Apache-Coyote/1.1

Content-Type: text/html;charset=utf-8

Content-Length: 477

Date: Wed, 14 Mar 2018 01:56:23 GMT

1.3, defense recommendations

1. Restrict the allowed HTTP methods, for example permit only GET and POST.

2. Use the Filter mechanism provided by the J2EE standard to filter requests by type.

3. Check Tomcat's web.xml and WebLogic's weblogic.xml, and restrict the request types there, for example:

<security-constraint>

  <web-resource-collection>

    <url-pattern>/*</url-pattern>

    <http-method>PUT</http-method>

    <http-method>DELETE</http-method>

    <http-method>HEAD</http-method>

    <http-method>OPTIONS</http-method>

    <http-method>TRACE</http-method>

  </web-resource-collection>

  <auth-constraint></auth-constraint>

</security-constraint>

<login-config>

  <auth-method>BASIC</auth-method>

</login-config>

4. Use request.getMethod() in a Struts request interceptor, for example:

String method = request.getMethod();

if (method.equalsIgnoreCase("post") || method.equalsIgnoreCase("get") || method.equalsIgnoreCase("head")
        || method.equalsIgnoreCase("trace") || method.equalsIgnoreCase("connect") || method.equalsIgnoreCase("options")) {
    // expected verb: let the request proceed
} else {
    // unexpected verb: reject the request
}

5. Disable the WebDAV function of IIS. WebDAV is a communication protocol based on HTTP/1.1; it adds methods beyond GET, POST, and HEAD so that applications can write files directly to the Web Server.

6. Set the following restrictions in Apache's httpd.conf file:

<Location />

 <LimitExcept GET POST HEAD CONNECT OPTIONS>

   Order Allow,Deny

   Deny from all

 </LimitExcept>

 </Location>

1.4, the actual repair plan

1. The servers in use fall into two types, Tomcat and WebSphere (WAS). The local environment is Tomcat, where the configuration of item 2 applies; the approach of item 3 is mainly for the WAS server.

  2. Add the <security-constraint> configuration in the web.xml file.

3. For requested static resources, save the directives below as a file named .htaccess and place it under the static resource folder.

  <LimitExcept GET POST>

  Order deny,allow

  Deny from all

  </LimitExcept>

Restrictions for dynamic resources need to be implemented in Java code, as sketched below.
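
A hypothetical sketch of that check as a servlet filter, allowing only GET and POST to match the .htaccess rules above (all names are invented):

  import java.io.IOException;
  import javax.servlet.*;
  import javax.servlet.http.HttpServletRequest;
  import javax.servlet.http.HttpServletResponse;

  public class VerbWhitelistFilter implements Filter {

      @Override public void init(FilterConfig config) {}
      @Override public void destroy() {}

      @Override
      public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
              throws IOException, ServletException {
          String method = ((HttpServletRequest) req).getMethod();
          if ("GET".equalsIgnoreCase(method) || "POST".equalsIgnoreCase(method)) {
              chain.doFilter(req, res); // allowed verb: continue
          } else {
              ((HttpServletResponse) res).sendError(HttpServletResponse.SC_METHOD_NOT_ALLOWED);
          }
      }
  }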

Refer to the LimitExcept directive on the official website; IBM HTTP Server (IHS) is based on Apache, and the syntax is the same.

http://httpd.apache.org/docs/2.4/mod/core.html#limitexcept


appscan: Session identification is not updated (medium severity)

 1.1, attack principle

When authenticating a user, or establishing a new user session in any other way, failing to invalidate any existing session identifier gives an attacker the opportunity to steal the authenticated session. This vulnerability can also be combined with XSS: an attacker who obtains the user's session can attack the system's login process.

1.2, APPSCAN test process

AppScan compares the cookies, which record the session information, before and after the login action. If the cookie values do not change once login has occurred, it reports a "session ID not updated" vulnerability.

1.3, repair suggestions

1. Always generate a new session on successful authentication so the user cannot manipulate the session ID: do not accept a session ID supplied by the user's browser at login, and invalidate any existing session ID before authorizing the new user session.

2. For platforms that do not generate new values for session identification cookies (such as ASP), use an auxiliary cookie: set a cookie on the user's browser to a random value and set a session variable to the same value. If the session variable and the cookie value ever fail to match, cancel the session and force the user to log in again. A sketch of this pairing appears after this list.

3. If you are using the Apache Shiro security framework, you can call the SecurityUtils.getSubject().logout() method; see http://blog.csdn.net/yycdaizi/article/details/45013397
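
To illustrate the auxiliary-cookie pairing from suggestion 2 (the advice targets platforms such as ASP, but the sketch is in Java to match the rest of this post; all names are invented):

  import java.util.UUID;
  import javax.servlet.http.*;

  public final class AuxCookieGuard {

      // At login: pair a random browser cookie with a matching session attribute.
      public static void issue(HttpSession session, HttpServletResponse response) {
          String nonce = UUID.randomUUID().toString();
          session.setAttribute("authNonce", nonce);            // server-side copy
          response.addCookie(new Cookie("authNonce", nonce));  // browser-side copy
      }

      // On each request: if the two copies ever disagree, cancel the session.
      public static boolean verify(HttpServletRequest request) {
          Object expected = request.getSession().getAttribute("authNonce");
          if (expected != null && request.getCookies() != null) {
              for (Cookie c : request.getCookies()) {
                  if ("authNonce".equals(c.getName())) {
                      return c.getValue().equals(expected);
                  }
              }
          }
          return false; // mismatch or missing: force the user to log in again
      }
  }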


1.4, fix the code sample

  Add the following code to the login page:


<%@ page language="java" import="java.util.*" pageEncoding="UTF-8"%>

<%
    request.getSession().invalidate(); // clear the old session
    if (request.getCookies() != null) {
        Cookie cookie = request.getCookies()[0]; // get the session cookie
        cookie.setMaxAge(0);        // let the cookie expire
        response.addCookie(cookie); // the expiry only takes effect if the cookie is written back
    }
%>

  Add the following code before verifying that the login is successful:


try {

    request.getSession().invalidate();

    if (request.getCookies() != null) {
        Cookie cookie = request.getCookies()[0]; // get the session cookie
        cookie.setMaxAge(0);        // let the cookie expire
        response.addCookie(cookie); // write the expired cookie back to the browser
    }

} catch (Exception e) {
    e.printStackTrace();
}

session = request.getSession(true); // create a fresh session

1.5, exception handling

If the session is indeed updated before and after login, the finding can be treated as a false positive.


appscan: encrypted session (SSL) is using a cookie without the "secure" attribute

 1.1, attack principle

Any information sent to the server in clear text, such as cookies, session tokens, or user credentials, may be stolen and later used for identity theft or user impersonation. In addition, several privacy regulations state that sensitive information such as user credentials must always be sent to the Web site in encrypted form.

1.2, repair suggestions

Add the secure attribute to the cookie.

1.3, fix the code example

  1) The server is configured as HTTPS SSL

  2) Servlet 3.0 (Java EE 6) web.xml is configured as follows:

  <session-config>

   <cookie-config>

    <secure>true</secure>

   </cookie-config>

  </session-config>

  3) Configure as follows in ASP.NET Web.config:

   <httpCookies requireSSL="true" />

  4) Configure as follows in php.ini

session.cookie_secure = True

  or

void session_set_cookie_params ( int $lifetime [, string $path [, string $domain
                                 [, bool $secure = false [, bool $httponly = false ]]]] )

or

bool setcookie ( string $name [, string $value [, int $expire = 0 [, string $path
                [, string $domain [, bool $secure = false [, bool $httponly = false ]]]]]] )

  5) Configure as follows in weblogic:


  <wls:session-descriptor>

      <wls:cookie-secure>true</wls:cookie-secure>

       <wls:cookie-http-only>true</wls:cookie-http-only>

   </wls:session-descriptor>

1.4, other information

https://www.owasp.org/index.php/SecureFlag

1.5, the actual repair plan

Solution 1: the project uses the WebSphere (WAS) server, where the secure flag can be set in the server configuration.

In fact, this repair method is equivalent to repair suggestion 2) above, adding the configuration to web.xml. Either of the two passes the Appscan scan, but the 19 environment has to support both the https and http protocols, and with either solution cookies are no longer transmitted under the http protocol, so the functions served over http stop working. For the time being this scheme passes the scan at the cost of the functions under the http protocol.

Option 2:

If a cookie is configured with the secure attribute, it is transmitted under the https protocol but not under http, and in this system both protocols must be supported. You could obtain the scheme through request.getScheme(), but in this environment it strangely reports http even for https requests, so you can instead judge whether the request is https as follows:

  String url = req.getHeader("Referer");

  if (url != null && url.startsWith("https")) {
      // the request arrived over https
  }

Then decide whether to add the attribute with cookie.setSecure(true).

With this scheme you can only set the cookies that your own code writes to the response later, not the cookies that the container responds with automatically; therefore it is not used here.


Sunday, November 14, 2021

fortify scan: cross-site request forgery (CSRF)

Abstract:

Cross-Site Request Forgery (CSRF) is an attack that forces an end user to execute unwanted actions on a web application in which they’re currently authenticated. With a little help of social engineering (such as sending a link via email or chat), an attacker may trick the users of a web application into executing actions of the attacker’s choosing. If the victim is a normal user, a successful CSRF attack can force the user to perform state changing requests like transferring funds, changing their email address, and so forth. If the victim is an administrative account, CSRF can compromise the entire web application.



Explanation:

A cross-site request forgery (CSRF) vulnerability occurs when:

1. A web application uses session cookies.

2. The application acts on an HTTP request without verifying that the request was made with the user's consent.

A nonce is a cryptographic random value that is sent with a message to prevent replay attacks. If the request does not contain a nonce that proves its provenance, the code that handles the request is vulnerable to a CSRF attack (unless it does not change the state of the application). This means a web application that uses session cookies has to take special precautions in order to ensure that an attacker can't trick users into submitting bogus requests. Imagine a web application that allows administrators to create new accounts as follows:

  var req = new XMLHttpRequest();

  req.open("POST", "/new_user", true);

  var body = ""; // accumulates the form fields (addToPost is a helper assumed by the example)

  body = addToPost(body, new_username);

  body = addToPost(body, new_passwd);

  req.send(body);

An attacker might set up a malicious web site that contains the following code.

  var req = new XMLHttpRequest();

  req.open("POST", "http://www.example.com/new_user", true);

  var body = ""; // accumulates the form fields (addToPost is a helper assumed by the example)

  body = addToPost(body, "attacker");

  body = addToPost(body, "haha");

  req.send(body);

If an administrator for example.com visits the malicious page while she has an active session on the site, she will unwittingly create an account for the attacker. This is a CSRF attack. It is possible because the application does not have a way to determine the provenance of the request. Any request could be a legitimate action chosen by the user or a faked action set up by an attacker. The attacker does not get to see the Web page that the bogus request generates, so the attack technique is only useful for requests that alter the state of the application.

Applications that pass the session identifier in the URL rather than as a cookie do not have CSRF problems because there is no way for the attacker to access the session identifier and include it as part of the bogus request.

Recommendations:

Applications that use session cookies must include some piece of information in every form post that the back-end code can use to validate the provenance of the request. One way to do that is to include a random request identifier or nonce, like this:

  RequestBuilder rb = new RequestBuilder(RequestBuilder.POST, "/new_user");

  body = addToPost(body, new_username);

  body = addToPost(body, new_passwd);

  body = addToPost(body, request_id);

  rb.sendRequest(body, new NewAccountCallback(callback));

Then the back-end logic can validate the request identifier before processing the rest of the form data. When possible, the request identifier should be unique to each server request rather than shared across every request for a particular session. As with session identifiers, the harder it is for an attacker to guess the request identifier, the harder it is to conduct a successful CSRF attack. The token should not be easily guessed and it should be protected in the same way that session tokens are protected, for example by using TLS.

Additional mitigation techniques include:

Framework protection: Most modern web application frameworks embed CSRF protection and they will automatically include and verify CSRF tokens.

Use a Challenge-Response control: Forcing the customer to respond to a challenge sent by the server is a strong defense against CSRF. Some of the challenges that can be used for this purpose are: CAPTCHAs, password re-authentication and one-time tokens.

Check HTTP Referer/Origin headers: An attacker won't be able to spoof these headers while performing a CSRF attack. This makes these headers a useful method to prevent CSRF attacks.

Double-submit Session Cookie: Sending the session ID Cookie as a hidden form value in addition to the actual session ID Cookie is a good protection against CSRF attacks. The server will check both values and make sure they are identical before processing the rest of the form data. If an attacker submits a form on behalf of a user, he won't be able to modify the session ID cookie value as per the same-origin policy.

Limit Session Lifetime: When accessing protected resources using a CSRF attack, the attack will only be valid as long as the session ID sent as part of the attack is still valid on the server. Limiting the Session lifetime will reduce the probability of a successful attack.

The techniques described here can be defeated with XSS attacks. Effective CSRF mitigation includes XSS mitigation techniques.

Tips:

1. SCA flags all HTML forms and all XMLHttpRequest objects that might perform a POST operation. The auditor must determine if each form could be valuable to an attacker as a CSRF target and whether or not an appropriate mitigation technique is in place.



fortify scan: Header Manipulation: Cookies

Abstract:

Including unvalidated data in Cookies can lead to HTTP Response header manipulation and enable cache-poisoning, cross-site scripting, cross-user defacement, page hijacking, cookie manipulation or open redirect.

Explanation:

Cookie Manipulation vulnerabilities occur when:

1. Data enters a web application through an untrusted source, most frequently an HTTP request.

2. The data is included in an HTTP cookie sent to a web user without being validated.

As with many software security vulnerabilities, cookie manipulation is a means to an end, not an end in itself. At its root, the vulnerability is straightforward: an attacker passes malicious data to a vulnerable application, and the application includes the data in an HTTP cookie.

Cookie Manipulation: When combined with attacks like cross-site request forgery, attackers may change, add to, or even overwrite a legitimate user's cookies.

Being an HTTP Response header, Cookie manipulation attacks can also lead to other types of attacks like:

HTTP Response Splitting:

One of the most common Header Manipulation attacks is HTTP Response Splitting. To mount a successful HTTP Response Splitting exploit, the application must allow input that contains CR (carriage return, also given by %0d or \r) and LF (line feed, also given by %0a or \n) characters into the header. These characters not only give attackers control of the remaining headers and body of the response the application intends to send, but also allow them to create additional responses entirely under their control.

Many of today's modern application servers will prevent the injection of malicious characters into HTTP headers. For example, recent versions of Apache Tomcat will throw an IllegalArgumentException if you attempt to set a header with prohibited characters. If your application server prevents setting headers with new line characters, then your application is not vulnerable to HTTP Response Splitting. However, solely filtering for new line characters can leave an application vulnerable to Cookie Manipulation or Open Redirects, so care must still be taken when setting HTTP headers with user input.

Example: The following code segment reads the name of the author of a weblog entry, author, from an HTTP request and sets it in a cookie header of an HTTP response.

author = form.author.value;

...

document.cookie = "author=" + author + ";expires="+cookieExpiration;

...

Assuming a string consisting of standard alphanumeric characters, such as "Jane Smith", is submitted in the request, the HTTP response including this cookie might take the following form:

HTTP/1.1 200 OK

...

Set-Cookie: author=Jane Smith

...

However, because the value of the cookie is formed from unvalidated user input, the response will only maintain this form if the value submitted for the author parameter does not contain any CR and LF characters. If an attacker submits a malicious string, such as "Wiley Hacker\r\nHTTP/1.1 200 OK\r\n...", then the HTTP response would be split into two responses of the following form:

HTTP/1.1 200 OK

...

Set-Cookie: author=Wiley Hacker


HTTP/1.1 200 OK

...

Clearly, the second response is completely controlled by the attacker and can be constructed with any header and body content desired. The ability of the attacker to construct arbitrary HTTP responses permits a variety of resulting attacks, including: cross-user defacement, web and browser cache poisoning, cross-site scripting, and page hijacking.

Cross-User Defacement: An attacker will be able to make a single request to a vulnerable server that will cause the server to create two responses, the second of which may be misinterpreted as a response to a different request, possibly one made by another user sharing the same TCP connection with the server. This can be accomplished by convincing the user to submit the malicious request themselves, or remotely in situations where the attacker and the user share a common TCP connection to the server, such as a shared proxy server. In the best case, an attacker may leverage this ability to convince users that the application has been hacked, causing users to lose confidence in the security of the application. In the worst case, an attacker may provide specially crafted content designed to mimic the behavior of the application but redirect private information, such as account numbers and passwords, back to the attacker.

Cache Poisoning: The impact of a maliciously constructed response can be magnified if it is cached either by a web cache used by multiple users or even the browser cache of a single user. If a response is cached in a shared web cache, such as those commonly found in proxy servers, then all users of that cache will continue to receive the malicious content until the cache entry is purged. Similarly, if the response is cached in the browser of an individual user, then that user will continue to receive the malicious content until the cache entry is purged, although only the user of the local browser instance will be affected.

Cross-Site Scripting: Once attackers have control of the responses sent by an application, they have a choice of a variety of malicious content to provide users. Cross-site scripting is a common form of attack where malicious JavaScript or other code included in a response is executed in the user's browser. The variety of attacks based on XSS is almost limitless, but they commonly include transmitting private data like cookies or other session information to the attacker, redirecting the victim to web content controlled by the attacker, or performing other malicious operations on the user's machine under the guise of the vulnerable site. The most common and dangerous attack vector against users of a vulnerable application uses JavaScript to transmit session and authentication information back to the attacker who can then take complete control of the victim's account.

Page Hijacking: In addition to using a vulnerable application to send malicious content to a user, the same root vulnerability can also be leveraged to redirect sensitive content generated by the server and intended for the user to the attacker instead. By submitting a request that results in two responses, the intended response from the server and the response generated by the attacker, an attacker may cause an intermediate node, such as a shared proxy server, to misdirect a response generated by the server for the user to the attacker. Because the request made by the attacker generates two responses, the first is interpreted as a response to the attacker's request, while the second remains in limbo. When the user makes a legitimate request through the same TCP connection, the attacker's request is already waiting and is interpreted as a response to the victim's request. The attacker then sends a second request to the server, to which the proxy server responds with the server generated request intended for the victim, thereby compromising any sensitive information in the headers or body of the response intended for the victim.

Open Redirect: Allowing unvalidated input to control the URL used in a redirect can aid phishing attacks.

Recommendations:

The solution to cookie manipulation is to ensure that input validation occurs in the correct places and checks for the correct properties.

Since Header Manipulation vulnerabilities like cookie manipulation occur when an application includes malicious data in its output, one logical approach is to validate data immediately before it leaves the application. However, because web applications often have complex and intricate code for generating responses dynamically, this method is prone to errors of omission (missing validation). An effective way to mitigate this risk is to also perform input validation for Header Manipulation.

Web applications must validate their input to prevent other vulnerabilities, such as SQL injection, so augmenting an application's existing input validation mechanism to include checks for Header Manipulation is generally relatively easy. Despite its value, input validation for Header Manipulation does not take the place of rigorous output validation. An application might accept input through a shared data store or other trusted source, and that data store might accept input from a source that does not perform adequate input validation. Therefore, the application cannot implicitly rely on the safety of this or any other data. This means that the best way to prevent Header Manipulation vulnerabilities is to validate everything that enters the application or leaves the application destined for the user.

The most secure approach to validation for Header Manipulation is to create a whitelist of safe characters that are allowed to appear in HTTP response headers and accept input composed exclusively of characters in the approved set. For example, a valid name might only include alphanumeric characters or an account number might only include digits 0-9.
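
As a hedged illustration of such a whitelist in Java (the pattern and names are invented; tune the character set to the field being validated):

  import java.util.regex.Pattern;

  public final class HeaderValidator {

      // Accept only characters safe in a header value: notably no CR, LF, ':' or '='.
      private static final Pattern SAFE = Pattern.compile("^[A-Za-z0-9 _.-]{1,64}$");

      public static String requireSafe(String value) {
          if (value == null || !SAFE.matcher(value).matches()) {
              throw new IllegalArgumentException("unsafe header value");
          }
          return value;
      }
  }

A caller would then write, for example, response.addCookie(new Cookie("author", HeaderValidator.requireSafe(author))), so a value containing CR/LF never reaches the Set-Cookie header.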

A more flexible, but less secure approach is known as blacklisting, which selectively rejects or escapes potentially dangerous characters before using the input. To form such a list, you first need to understand the set of characters that hold special meaning in HTTP response headers. Although the CR and LF characters are at the heart of an HTTP response splitting attack, other characters, such as ':' (colon) and '=' (equal), have special meaning in response headers as well.

After you identify the correct points in an application to perform validation for Header Manipulation attacks and what special characters the validation should consider, the next challenge is to identify how your validation handles special characters. The application should reject any input destined to be included in HTTP response headers that contains special characters, particularly CR and LF, as invalid.

Many application servers attempt to limit an application's exposure to HTTP response splitting vulnerabilities by providing implementations for the functions responsible for setting HTTP headers and cookies that perform validation for the characters essential to an HTTP response splitting attack. Do not rely on the server running your application to make it secure. When an application is developed there are no guarantees about what application servers it will run on during its lifetime. As standards and known exploits evolve, there are no guarantees that application servers will also stay in sync.



fortify scan: JSON Injection

Abstract:

The method writes unvalidated input into JSON. This call could allow an attacker to inject arbitrary elements or attributes into the JSON entity.

Explanation:

JSON injection occurs when:

1. Data enters a program from an untrusted source.

2. The data is written to a JSON stream.

Applications typically use JSON to store data or send messages. When used to store data, JSON is often treated like cached data and may potentially contain sensitive information. When used to send messages, JSON is often used in conjunction with a RESTful service and can be used to transmit sensitive information such as authentication credentials.

The semantics of JSON documents and messages can be altered if an application constructs JSON from unvalidated input. In a relatively benign case, an attacker may be able to insert extraneous elements that cause an application to throw an exception while parsing a JSON document or request. In a more serious case, such as one involving JSON injection, an attacker may be able to insert extraneous elements that allow for the predictable manipulation of business-critical values within a JSON document or request. In some cases, JSON injection can lead to cross-site scripting or dynamic code evaluation.

Example 1: The following JavaScript code uses jQuery to parse JSON where a value comes from a URL:

var str = document.URL;

var url_check = str.indexOf('name=');

var name = null;

if (url_check > -1) {

  name =  decodeURIComponent(str.substring((url_check+5), str.length));

}

$(document).ready(function(){

  if (name !== null){

    var obj = jQuery.parseJSON('{"role": "user", "name" : "' + name + '"}');

    ...

  }

  ...

});

Here the untrusted data in name will not be validated to escape JSON-related special characters. This allows a user to arbitrarily insert JSON keys, possibly changing the structure of the serialized JSON. In this example, if the non-privileged user mallory were to append ","role":"admin to the name parameter in the URL, the JSON would become:

{

  "role":"user",

  "username":"mallory",

  "role":"admin"

}

This is parsed by jQuery.parseJSON() and set to a plain object, meaning that obj.role would now return "admin" instead of "user".

Recommendations:

When writing user supplied data to JSON, follow these guidelines:

1. Do not create JSON attributes with names that are derived from user input.

2. Ensure that all serialization to JSON is performed using a safe serialization function that delimits untrusted data within single or double quotes and escapes any special characters (see the sketch after this list).
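
For guideline 2, a minimal escaper sketched in Java, the language of this blog's server-side fixes (names are invented, and a mature JSON library is the better choice in practice):

  public final class JsonStrings {

      // Escape a value so it can sit safely between double quotes in JSON.
      public static String escape(String s) {
          StringBuilder sb = new StringBuilder(s.length());
          for (char c : s.toCharArray()) {
              switch (c) {
                  case '"':  sb.append("\\\""); break;
                  case '\\': sb.append("\\\\"); break;
                  case '\n': sb.append("\\n");  break;
                  case '\r': sb.append("\\r");  break;
                  case '\t': sb.append("\\t");  break;
                  default:
                      if (c < 0x20) {
                          sb.append(String.format("\\u%04x", (int) c)); // other control characters
                      } else {
                          sb.append(c);
                      }
              }
          }
          return sb.toString();
      }
  }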

Example 2: The following JavaScript code implements the same functionality as that in Example 1, but verifies the name against a whitelist and otherwise sets the name to "guest" before parsing the JSON:

var str = document.URL;

var url_check = str.indexOf('name=');

var name = null;

if (url_check > -1) {

  name =  decodeURIComponent(str.substring((url_check+5), str.length));

}

function getName(name){

  var regexp = /^[A-Za-z0-9]+$/;

  var matches = name.match(regexp);

  if (matches == null){

    return "guest";

  } else {

    return name;

  }

}

$(document).ready(function(){

  if (name !== null){

    var obj = jQuery.parseJSON('{"role": "user", "name" : "' + getName(name) + '"}');

    ...

  }

  ...

});

Although whitelisting is acceptable in this case, because we want the user to control the name, in other cases it is best to use values that are not user-controlled at all.



fortify scan: Often Misused: Authentication

Abstract:

An API is a contract between a caller and a callee. The most common forms of API abuse are caused by the caller failing to honor its end of this contract. ... For example, if a coder subclasses SecureRandom and returns a non-random value, the contract is violated.

Explanation:

Many DNS servers are susceptible to spoofing attacks, so you should assume that your software will someday run in an environment with a compromised DNS server. If attackers are allowed to make DNS updates (sometimes called DNS cache poisoning), they can route your network traffic through their machines or make it appear as if their IP addresses are part of your domain. Do not base the security of your system on DNS names.

Example 1: The following code uses a DNS lookup to determine whether or not an inbound request is from a trusted host. If an attacker can poison the DNS cache, they can gain trusted status.

 struct hostent *hp;

 struct in_addr myaddr;

 char* tHost = "trustme.trusty.com";

 myaddr.s_addr=inet_addr(ip_addr_string);

 hp = gethostbyaddr((char *) &myaddr,

      sizeof(struct in_addr), AF_INET);

 if (hp && !strncmp(hp->h_name, tHost, strlen(tHost))) {

 trusted = true;

 } else {

 trusted = false;

 }

IP addresses are more reliable than DNS names, but they can also be spoofed. Attackers may easily forge the source IP address of the packets they send, but response packets will return to the forged IP address. To see the response packets, the attacker has to sniff the traffic between the victim machine and the forged IP address. In order to accomplish the required sniffing, attackers typically attempt to locate themselves on the same subnet as the victim machine. Attackers may be able to circumvent this requirement by using source routing, but source routing is disabled across much of the Internet today. In summary, IP address verification can be a useful part of an authentication scheme, but it should not be the single factor required for authentication.

Recommendations:

You can increase confidence in a domain name lookup if you check to make sure that the host's forward and reverse DNS entries match. Attackers will not be able to spoof both the forward and the reverse DNS entries without controlling the nameservers for the target domain. However, this is not a foolproof approach: attackers may be able to convince the domain registrar to turn over the domain to a malicious nameserver. Basing authentication on DNS entries is simply a risky practice.

While no authentication mechanism is foolproof, there are better alternatives than host-based authentication. Password systems offer decent security, but are susceptible to bad password choices, insecure password transmission, and bad password management. A cryptographic scheme like SSL is worth considering, but such schemes are often so complex that they bring with them the risk of significant implementation errors, and key material can always be stolen. In many situations, multi-factor authentication including a physical token offers the most security available at a reasonable price.

Tips:

1. Check how the DNS information is being used. In addition to considering whether or not the program's authentication mechanisms can be defeated, consider how DNS spoofing can be used in a social engineering attack. For example, if attackers can make it appear that a posting came from an internal machine, can they gain credibility?


fortify scan: Resource Injection

Abstract:

This attack consists of changing resource identifiers used by an application in order to perform a malicious task. When an application defines a resource type or location based on user input, such as a file name or port number, this data can be manipulated to execute or access different resources. The resource type affected by user input indicates the content type that may be exposed. For example, an application that permits input of special characters like period, slash, and backslash is risky when used in conjunction with methods that interact with the filesystem.

Explanation:

A resource injection issue occurs when the following two conditions are met:

1. An attacker is able to specify the identifier used to access a system resource.

For example, an attacker may be able to specify a port number to be used to connect to a network resource.

2. By specifying the resource, the attacker gains a capability that would not otherwise be permitted.

For example, the program may give the attacker the ability to transmit sensitive information to a third-party server.

Note: Resource injections involving resources stored on the file system are reported in a separate category named path manipulation. See the path manipulation description for further details of this vulnerability.

Example: The following code uses a port number read from a CGI request to create a socket.

...

char* rPort = getenv("rPort");

...

serv_addr.sin_port = htons(atoi(rPort));

if (connect(sockfd,&serv_addr,sizeof(serv_addr)) < 0)

error("ERROR connecting");

...

The kind of resource affected by user input indicates the kind of content that may be dangerous. For example, data containing special characters like period, slash, and backslash are risky when used in methods that interact with the file system. Similarly, data that contains URLs and URIs is risky for functions that create remote connections.

Recommendations:

The best way to prevent resource injection is with a level of indirection: create a list of legitimate resource names that a user is allowed to specify, and only allow the user to select from the list. With this approach the input provided by the user is never used directly to specify the resource name.
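
A small sketch of that indirection, reusing the port-number scenario from the example above (names are invented; Map.of requires Java 9+):

  import java.util.Map;

  public final class PortRegistry {

      // The client supplies only a key; the real resource identifier never comes from input.
      private static final Map<String, Integer> ALLOWED_PORTS = Map.of(
              "web", 8080,
              "metrics", 9090);

      public static int resolve(String key) {
          Integer port = ALLOWED_PORTS.get(key);
          if (port == null) {
              throw new IllegalArgumentException("unknown resource key: " + key);
          }
          return port;
      }
  }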

In some situations this approach is impractical because the set of legitimate resource names is too large or too hard to maintain. Programmers often resort to implementing a deny list in these situations. A deny list is used to selectively reject or escape potentially dangerous characters before using the input. However, any such list of unsafe characters is likely to be incomplete and will almost certainly become out of date. A better approach is to create a list of characters that are permitted to appear in the resource name and accept input composed exclusively of characters in the approved set.

Tips:

1. If the program is performing custom input validation you are satisfied with, use the Fortify Custom Rules Editor to create a cleanse rule for the validation routine.

2. Implementation of an effective deny list is notoriously difficult. One should be skeptical if validation logic requires implementing a deny list. Consider different types of input encoding and different sets of metacharacters that might have special meaning when interpreted by different operating systems, databases, or other resources. Determine whether or not the deny list can be updated easily, correctly, and completely if these requirements ever change.



fortify scan: Process Control

Abstract:

Transferring program control to an untrusted application program or in an untrusted environment can cause an application to execute malicious commands on behalf of an attacker. It could result in the program using a malicious library supplied by an attacker.

Explanation:

Process control vulnerabilities take two forms:

- An attacker can change the name of the library that the program executes: the attacker explicitly controls what the library name is.

- An attacker can change the environment in which the library is loaded: the attacker implicitly controls what the library name means.

Process control vulnerabilities of this type occur when:

1. An attacker provides a malicious library to an application.

2. The application loads the malicious library because it fails to specify an absolute path or verify the file being loaded.

3. By executing code from the library, the application gives the attacker a privilege or capability that the attacker would not otherwise have.

Example: The following code is from a web-based administration utility that allows users access to an interface through which they can update their profile on the system. The utility makes use of a library named liberty.dll, which is normally found in a standard system directory.

LoadLibrary("liberty.dll");

The problem is that the program does not specify an absolute path for liberty.dll. If an attacker is able to place a malicious library named liberty.dll higher in the search order than the file the application intends to load, then the application will load the malicious copy instead of the intended file. Because of the nature of the application, it runs with elevated privileges, which means the contents of the attacker's liberty.dll will now be run with elevated privileges, possibly giving the attacker complete control of the system.

The type of attack shown in this example is made possible because of the search order used by LoadLibrary() when an absolute path is not specified. If the current directory is searched before system directories, as was the case up until the most recent versions of Windows, then this type of attack becomes trivial if the attacker may execute the program locally. The search order is operating system version dependent, and is controlled on newer operating systems by the value of the registry key:

HKLM\System\CurrentControlSet\Control\Session Manager\SafeDllSearchMode

This key is not defined on Windows 2000/NT and Windows Me/98/95 systems.

On systems where the key does exist, LoadLibrary() behaves as follows:

If SafeDllSearchMode is 1, the search order is as follows:

(Default setting for Windows XP-SP1 and later, as well as Windows Server 2003.)

1. The directory from which the application was loaded.

2. The system directory.

3. The 16-bit system directory, if it exists.

4. The Windows directory.

5. The current directory.

6. The directories that are listed in the PATH environment variable.

If SafeDllSearchMode is 0, the search order is as follows:

1. The directory from which the application was loaded.

2. The current directory.

3. The system directory.

4. The 16-bit system directory, if it exists.

5. The Windows directory.

6. The directories that are listed in the PATH environment variable.

Recommendations:

An attacker may indirectly control the libraries used by a program by modifying the environment in which they are loaded. The environment should not be trusted, and precautions should be taken to prevent an attacker from manipulating it to perform an attack. Whenever possible, libraries should be controlled by the application and loaded using an absolute path. In cases where the path is not known at compile time, such as for cross-platform applications, an absolute path should be constructed from known values during execution.
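
For instance, in Java (a hedged illustration, since the document's example is C on Windows; the path and system property here are placeholders):

  public final class LibraryLoader {

      public static void loadCrypto() {
          // System.loadLibrary("crypto") would search java.library.path, which a
          // hostile environment can influence; System.load takes an absolute path.
          String installDir = System.getProperty("app.home", "/opt/myapp"); // assumed property
          System.load(installDir + "/lib/libcrypto.so");
      }
  }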

Because Windows APIs impose a specific search order based not only on a series of directories, but also on a list of file extensions that are automatically appended if none is specified, an attacker may be able to inject a malicious library of the specified name with an extension higher in the search order. Therefore, absolute paths should also specify a file extension on Windows systems.

Library names and paths read from configuration files or the environment should be sanity-checked against a set of invariants that define valid values. Other checks can sometimes be performed to detect if these sources may have been tampered with. For example, if a configuration file is world-writable, the program might refuse to run.

In cases where information about the library to be loaded is known in advance, the program may perform checks to verify the identity of the library. If a library should always be owned by a particular user or have a particular set of access permissions assigned to it, these properties can be verified programmatically before the library is loaded.



fortify scan: Insecure Compiler Optimization

Abstract:

Improperly scrubbing sensitive data from memory can compromise security.

Explanation:

Compiler optimization errors occur when:

1. Secret data is stored in memory.

2. The secret data is scrubbed from memory by overwriting its contents.

3. The source code is compiled using an optimizing compiler, which identifies and removes the function that overwrites the contents as a dead store because the memory is not used subsequently.

Example 1: The following code reads a password from the user, uses the password to connect to a back-end mainframe and then attempts to scrub the password from memory using memset().

  void GetData(char *MFAddr) {
      char pwd[64];
      if (GetPasswordFromUser(pwd, sizeof(pwd))) {
          if (ConnectToMainframe(MFAddr, pwd)) {
              // Interaction with mainframe
          }
      }
      memset(pwd, 0, sizeof(pwd));
  }

The code in the example will behave correctly if it is executed verbatim, but if the code is compiled using an optimizing compiler, such as Microsoft Visual C++(R) .NET or GCC 3.x, then the call to memset() will be removed as a dead store because the buffer pwd is not used after its value is overwritten [2]. Because the buffer pwd contains a sensitive value, the application may be vulnerable to attack if the data is left memory resident. If attackers are able to access the correct region of memory, they may use the recovered password to gain control of the system.

It is common practice to overwrite sensitive data manipulated in memory, such as passwords or cryptographic keys, in order to prevent attackers from learning system secrets. However, with the advent of optimizing compilers, programs do not always behave as their source code alone would suggest. In the example, the compiler interprets the call to memset() as dead code because the memory being written to is not subsequently used, despite the fact that there is clearly a security motivation for the operation to occur. The problem here is that many compilers, and in fact many programming languages, do not take this and other security concerns into consideration in their efforts to improve efficiency.

Attackers typically exploit this type of vulnerability by using a core dump or runtime mechanism to access the memory used by a particular application and recover the secret information. After an attacker has access to the secret information, it is relatively straightforward to further exploit the system and possibly compromise other resources with which the application interacts.

Recommendations:

Optimizing compilers are hugely beneficial to performance, so disabling optimization is rarely a reasonable option. The solution is to communicate to the compiler exactly how the program should behave. Because support for this communication is imperfect and varies from platform to platform, current solutions to the problem are imperfect as well.

It is often possible to force the compiler into retaining calls to scrubbing functions by reading from the variable after it is cleaned in memory. Another option involves volatile pointers, which are not currently optimized because they can be modified from outside the application. You can make use of this fact to trick the compiler by casting pointers to sensitive data to volatile pointers. This could be accomplished in Example 1 by adding the following line immediately after the call to memset():

*(volatile char*)pwd = *(volatile char*)pwd;

Although both of these solutions prevent existing compilers from optimizing out calls to scrubbing functions such as the one shown in Example 1, they rely on current optimization techniques, which will continue to evolve in the future. The insidious aspect of this is that, as compiler technology evolves, security flaws such as this one may be reintroduced even if an application's source code has remained unchanged.

On recent Windows(R) platforms, consider using SecureZeroMemory(), which is a secure replacement for ZeroMemory() that uses the preceding volatile pointer trick to protect itself from optimization [2]. Additionally, in most versions of Microsoft Visual C++(R) it is possible to use the #pragma optimize construct to prevent the compiler from optimizing specific blocks of code. For example:

#pragma optimize("",off);

memset(pwd, 0, sizeof(pwd));

#pragma optimize("",on);



fortify scan: Weak Encryption: Inadequate RSA Padding

Abstract:

The method AESDecryptBuffer() in AESCrypt.c performs public key RSA encryption without OAEP padding, thereby making the encryption weak.

Explanation:

In practice, encryption with an RSA public key is usually combined with a padding scheme. The purpose of the padding scheme is to prevent a number of attacks on RSA that only work when the encryption is performed without padding.

Example 1: The following code performs encryption using an RSA public key without using a padding scheme:

  void encrypt_with_rsa(BIGNUM *out, BIGNUM *in, RSA *key) {

    u_char *inbuf, *outbuf;

    int ilen;

    ...

    ilen = BN_num_bytes(in);

    inbuf = xmalloc(ilen);

    BN_bn2bin(in, inbuf);

    if ((len = RSA_public_encrypt(ilen, inbuf, outbuf, key, RSA_NO_PADDING)) <= 0) {

      fatal("encrypt_with_rsa() failed");

    }

    ...

  }

This category was derived from the Cigital Java Rulepack.

Recommendations:

In order to use RSA securely, OAEP (Optimal Asymmetric Encryption Padding) must be used when performing encryption.

Example 2: The following code performs encryption with an RSA public key using OAEP padding:

  void encrypt_with_rsa(BIGNUM *out, BIGNUM *in, RSA *key) {

    u_char *inbuf, *outbuf;

    int ilen;

    ...

    ilen = BN_num_bytes(in);

    inbuf = xmalloc(ilen);

    BN_bn2bin(in, inbuf);

    if ((len = RSA_public_encrypt(ilen, inbuf, outbuf, key, RSA_PKCS1_OAEP_PADDING)) <= 0) {

      fatal("encrypt_with_rsa() failed");

    }

    ...

  }
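
Note that recent OpenSSL releases deprecate the low-level RSA_* functions in favor of the EVP interface. The following is a minimal sketch of OAEP encryption through EVP (the helper name is illustrative and error handling is abbreviated):

#include <openssl/crypto.h>
#include <openssl/evp.h>
#include <openssl/rsa.h>

/* Encrypts inlen bytes with the RSA public key in 'key' using OAEP.
   On success, *out holds the ciphertext (caller frees) and 1 is returned. */
int encrypt_oaep(EVP_PKEY *key, const unsigned char *in, size_t inlen,
                 unsigned char **out, size_t *outlen) {
    EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new(key, NULL);
    *out = NULL;
    if (ctx == NULL)
        return 0;
    if (EVP_PKEY_encrypt_init(ctx) <= 0 ||
        EVP_PKEY_CTX_set_rsa_padding(ctx, RSA_PKCS1_OAEP_PADDING) <= 0 ||
        EVP_PKEY_encrypt(ctx, NULL, outlen, in, inlen) <= 0)  /* size query */
        goto err;
    *out = OPENSSL_malloc(*outlen);
    if (*out == NULL || EVP_PKEY_encrypt(ctx, *out, outlen, in, inlen) <= 0)
        goto err;
    EVP_PKEY_CTX_free(ctx);
    return 1;
err:
    OPENSSL_free(*out);
    *out = NULL;
    EVP_PKEY_CTX_free(ctx);
    return 0;
}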



fortify scan: Command Injection

Abstract:

Command injection is a cyber attack that involves executing arbitrary commands on a host operating system (OS). Typically, the threat actor injects the commands by exploiting an application vulnerability, such as insufficient input validation.



Explanation:

Command injection vulnerabilities take two forms:

- An attacker can change the command that the program executes: the attacker explicitly controls what the command is.

- An attacker can change the environment in which the command executes: the attacker implicitly controls what the command means.

Command injection vulnerabilities of this type occur when:

1. Data enters the application from an untrusted source.

2. The data is part of a string that is executed as a command by the application.

3. By executing the command, the application gives an attacker a privilege or capability that the attacker would not otherwise have.

Example 1: The following simple program accepts a filename as a command-line argument and displays the contents of the file back to the user. The program is installed setuid root because it is intended for use as a learning tool to allow system administrators in training to inspect privileged system files without giving them the ability to modify them or damage the system.

int main(int argc, char *argv[]) {

char cmd[CMD_MAX] = "/usr/bin/cat ";

strcat(cmd, argv[1]);

system(cmd);

}

Because the program runs with root privileges, the call to system() also executes with root privileges. If a user specifies a standard filename, the call works as expected. However, if an attacker passes a string of the form ";rm -rf /", then the call to system() fails to execute cat due to a lack of arguments and then plows on to recursively delete the contents of the root partition.

Example 2: The following code from a privileged program uses the environment variable $APPHOME to determine the application's installation directory and then executes an initialization script in that directory.

...

char* home=getenv("APPHOME");

char* cmd=(char*)malloc(strlen(home)+strlen(INITCMD)+1);

if (cmd) {

strcpy(cmd,home);

strcat(cmd,INITCMD);

execl(cmd, cmd, (char *)NULL);

}

...

As in Example 1, the code in this example allows an attacker to execute arbitrary commands with the elevated privilege of the application. In this example, the attacker may modify the environment variable $APPHOME to specify a different path containing a malicious version of INITCMD. Because the program does not validate the value read from the environment, by controlling the environment variable the attacker may fool the application into running malicious code.

The attacker is using the environment variable to control the command that the program invokes, so the effect of the environment is explicit in this example. We will now turn our attention to what can happen when the attacker may change the way the command is interpreted.

Example 3: The following code is from a web-based CGI utility that allows users to change their passwords. The password update process under NIS includes running make in the /var/yp directory. Note that since the program updates password records, it has been installed setuid root.

The program invokes make as follows:

system("cd /var/yp && make &> /dev/null");

Unlike the previous examples, the command in this example is hardcoded, so an attacker cannot control the argument passed to system(). However, since the program does not specify an absolute path for make and does not scrub any environment variables prior to invoking the command, the attacker may modify their $PATH variable to point to a malicious binary named make and execute the CGI script from a shell prompt. And since the program has been installed setuid root, the attacker's version of make now runs with root privileges.
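
A sketch of how this program could protect itself (clearenv() is a glibc extension; on other platforms the environment must be rebuilt by hand):

#include <stdlib.h>
#include <unistd.h>

void run_yp_make(void) {
    clearenv();                          /* drop every inherited variable */
    setenv("PATH", "/bin:/usr/bin", 1);  /* known-good search path */
    if (chdir("/var/yp") != 0)
        return;
    /* absolute path and no shell, so $PATH and $IFS no longer matter */
    execl("/usr/bin/make", "make", (char *)NULL);
}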

On Windows, additional risks are present.

Example 4: When invoking CreateProcess() either directly or via a call to one of the functions in the _spawn() family, care must be taken when the executable or its path contains a space.

...

LPTSTR cmdLine = _tcsdup(TEXT("C:\\Program Files\\MyApplication -L -S"));

CreateProcess(NULL, cmdLine, ...);

...

Because of the way CreateProcess() parses spaces, the first executable the operating system will try to execute is Program.exe, not MyApplication.exe. Therefore, if an attacker is able to install a malicious application called Program.exe on the system, any program that incorrectly calls CreateProcess() using the Program Files directory will run this application instead of the intended one.

The environment plays a powerful role in the execution of system commands within programs. Functions like system(), exec(), and CreateProcess() use the environment of the program that calls them, and therefore attackers have a potential opportunity to influence the behavior of these calls.

Recommendations:

Do not allow users to have direct control over the commands executed by the program. If user input affects the command to be run, use the input only to select from a predetermined set of safe commands. If the input appears to be malicious, the value passed to the command execution function should either default to some safe selection from this set or the program should decline to execute any command.

If user input must be used as an argument to a command executed by the program, this solution can become impractical; the set of legitimate argument values may be too large or too hard to keep track of. In this situation, programmers often fall back on implementing a deny list to selectively reject or escape potentially dangerous characters before using the input. However, any such list of unsafe characters is likely to be incomplete and will be heavily dependent on the system where the commands are executed. A better approach is to create a list of characters that are permitted to appear in the input and accept input composed exclusively of characters in the approved set.

Another line of defense against maliciously crafted input is to avoid the use of functions that perform shell interpretation. For example, do not use system(), which executes its own command shell.
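
Applied to Example 1, both recommendations might look like the following sketch (the permitted character set is illustrative and must be chosen for your own application):

#include <ctype.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Accept only a conservative set of filename characters. */
static int is_safe_filename(const char *s) {
    if (*s == '\0')
        return 0;
    for (; *s != '\0'; s++)
        if (!isalnum((unsigned char)*s) && strchr("./_-", *s) == NULL)
            return 0;
    return 1;
}

int show_file(const char *path) {
    int status;
    pid_t pid;
    if (!is_safe_filename(path))
        return -1;                    /* decline to execute anything */
    if ((pid = fork()) < 0)
        return -1;
    if (pid == 0) {
        char *const argv[] = { "/usr/bin/cat", (char *)path, NULL };
        execv(argv[0], argv);         /* no shell, so ";" has no meaning */
        _exit(127);
    }
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return status;
}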

Be aware of the external environment and how it affects the behavior of the commands you execute. In particular, pay attention to how the $PATH, $LD_LIBRARY_PATH, and $IFS variables are used on Unix and Linux machines.

Be aware that the Windows APIs impose a specific search order that is based not only on a series of directories, but also on a list of file extensions that are automatically appended if none is specified. For example, functions in the _spawn() family try the following file name extensions in order when the command name argument lacks an extension or does not end with a period: first .com, then .exe, then .bat, and finally .cmd. Furthermore, additional risks exist on Windows due to the way command-executing functions parse spaces in arguments that represent executables and paths.

Example 5: The following code re-writes Example 4 to avoid unintentionally executing a malicious application by using quotation marks around the executable path.

...

LPTSTR cmdLine = _tcsdup(TEXT("\"C:\\Program Files\\MyApplication\" -L -S"));

CreateProcess(NULL, cmdLine, ...);

...

Another way to achieve the same result is to pass the name of the executable as the first argument, instead of passing NULL.

Although it may be impossible to completely protect a program from an imaginative attacker bent on controlling the commands the program executes, be sure to apply the principle of least privilege wherever the program executes an external command: do not hold privileges that are not essential to the execution of the command.



fortify scan: Buffer Overflow

Abstract:

A buffer overflow, or buffer overrun, occurs when more data is put into a fixed-length buffer than the buffer can handle. The extra information, which has to go somewhere, can overflow into adjacent memory space, corrupting or overwriting the data held in that space.

Explanation:

Buffer overflow is probably the best known form of software security vulnerability. Most software developers know what a buffer overflow vulnerability is, but buffer overflow attacks against both legacy and newly-developed applications are still quite common. Part of the problem is due to the wide variety of ways buffer overflows can occur, and part is due to the error-prone techniques often used to prevent them.

In a classic buffer overflow exploit, the attacker sends data to a program, which it stores in an undersized stack buffer. The result is that information on the call stack is overwritten, including the function's return pointer. The data sets the value of the return pointer so that when the function returns, it transfers control to malicious code contained in the attacker's data.

Although this type of stack buffer overflow is still common on some platforms and in some development communities, there are a variety of other types of buffer overflow, including heap buffer overflows and off-by-one errors, among others. There are a number of excellent books that provide detailed information on how buffer overflow attacks work, including Building Secure Software [1], Writing Secure Code [2], and The Shellcoder's Handbook.

At the code level, buffer overflow vulnerabilities usually involve the violation of a programmer's assumptions. Many memory manipulation functions in C and C++ do not perform bounds checking and can easily overwrite the allocated bounds of the buffers they operate upon. Even bounded functions, such as strncpy(), can cause vulnerabilities when used incorrectly. The combination of memory manipulation and mistaken assumptions about the size or makeup of a piece of data is the root cause of most buffer overflows.

Buffer overflow vulnerabilities typically occur in code that:

- Relies on external data to control its behavior.

- Depends upon properties of the data that are enforced outside of the immediate scope of the code.

- Is so complex that a programmer cannot accurately predict its behavior.

The following examples demonstrate all three of the scenarios.

Example 1: This is an example of the second scenario in which the code depends on properties of the data that are not verified locally. In this example a function named lccopy() takes a string as its argument and returns a heap-allocated copy of the string with all uppercase letters converted to lowercase. The function performs no bounds checking on its input because it expects str to always be smaller than BUFSIZE. If an attacker bypasses checks in the code that calls lccopy(), or if a change in that code makes the assumption about the size of str untrue, then lccopy() will overflow buf with the unbounded call to strcpy().

char *lccopy(const char *str) {

char buf[BUFSIZE];

char *p;


strcpy(buf, str);

for (p = buf; *p; p++) {

if (isupper(*p)) {

*p = tolower(*p);

}

}

return strdup(buf);

}

Example 2.a: The following sample code demonstrates a simple buffer overflow that is often caused by the first scenario, in which the code relies on external data to control its behavior. The code uses the gets() function to read an arbitrary amount of data into a stack buffer. Because there is no way to limit the amount of data read by this function, the safety of the code depends on the user to always enter fewer than BUFSIZE characters.

...

char buf[BUFSIZE];

gets(buf);

...

Example 2.b: This example shows how easy it is to mimic the unsafe behavior of the gets() function in C++ by using the >> operator to read input into a char[] string.

...

char buf[BUFSIZE];

cin >> (buf);

...

Example 3: The code in this example also relies on user input to control its behavior, but it adds a level of indirection with the use of the bounded memory copy function memcpy(). This function accepts a destination buffer, a source buffer, and the number of bytes to copy. The input buffer is filled by a bounded call to read(), but the user specifies the number of bytes that memcpy() copies.

...

char buf[64], in[MAX_SIZE];

int bytes;

printf("Enter buffer contents:\n");

read(0, in, MAX_SIZE-1);

printf("Bytes to copy:\n");

scanf("%d", &bytes);

memcpy(buf, in, bytes);

...

Note: This type of buffer overflow vulnerability (where a program reads data and then trusts a value from the data in subsequent memory operations on the remaining data) has turned up with some frequency in image, audio, and other file processing libraries.

Example 4: The following code demonstrates the third scenario in which the code is so complex its behavior cannot be easily predicted. This code is from the popular libPNG image decoder, which is used by a wide array of applications, including Mozilla and some versions of Internet Explorer.

The code appears to safely perform bounds checking because it checks the size of the variable length, which it later uses to control the amount of data copied by png_crc_read(). However, immediately before it tests length, the code performs a check on png_ptr->mode, and if this check fails a warning is issued and processing continues. Since length is tested in an else if block, length would not be tested if the first check fails, and is used blindly in the call to png_crc_read(), potentially allowing a stack buffer overflow.

Although the code in this example is not the most complex we have seen, it demonstrates why complexity should be minimized in code that performs memory operations.

if (!(png_ptr->mode & PNG_HAVE_PLTE)) {

/* Should be an error, but we can cope with it */

png_warning(png_ptr, "Missing PLTE before tRNS");

}

else if (length > (png_uint_32)png_ptr->num_palette) {

png_warning(png_ptr, "Incorrect tRNS chunk length");

png_crc_finish(png_ptr, length);

return;

}

...

png_crc_read(png_ptr, readbuf, (png_size_t)length);

Example 5: This example also demonstrates the third scenario in which the program's complexity exposes it to buffer overflows. In this case, the exposure is due to the ambiguous interface of one of the functions rather than the structure of the code (as was the case in the previous example).

The getUserInfo() function takes a username specified as a multibyte string and a pointer to a structure for user information, and populates the structure with information about the user. Since Windows authentication uses Unicode for usernames, the username argument is first converted from a multibyte string to a Unicode string. This function then incorrectly passes the size of unicodeUser in bytes rather than characters. The call to MultiByteToWideChar() may therefore write up to (UNLEN+1)*sizeof(WCHAR) wide characters, or (UNLEN+1)*sizeof(WCHAR)*sizeof(WCHAR) bytes, to the unicodeUser array, which has only (UNLEN+1)*sizeof(WCHAR) bytes allocated. If the username string contains more than UNLEN characters, the call to MultiByteToWideChar() will overflow the buffer unicodeUser.

void getUserInfo(char *username, struct _USER_INFO_2 info){

WCHAR unicodeUser[UNLEN+1];

MultiByteToWideChar(CP_ACP, 0, username, -1,

          unicodeUser, sizeof(unicodeUser));

NetUserGetInfo(NULL, unicodeUser, 2, (LPBYTE *)&info);

}

Recommendations:

Never use inherently unsafe functions, such as gets(), and avoid functions that are difficult to use safely, such as strcpy(). Replace unbounded functions like strcpy() with their bounded equivalents, such as strncpy() or the WinAPI functions defined in strsafe.h.
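
For instance, the unbounded calls shown in the earlier examples can be replaced with bounded equivalents, as in this sketch:

#include <stdio.h>
#include <string.h>

#define BUFSIZE 128

void read_and_copy(void) {
    char buf[BUFSIZE], copy[BUFSIZE];
    /* fgets() writes at most sizeof(buf) bytes, unlike gets() */
    if (fgets(buf, sizeof(buf), stdin) == NULL)
        return;
    buf[strcspn(buf, "\n")] = '\0';          /* strip the newline, if any */
    /* snprintf() truncates instead of overflowing the destination */
    snprintf(copy, sizeof(copy), "%s", buf);
}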

Although the careful use of bounded functions can greatly reduce the risk of buffer overflow, this migration cannot be done blindly and does not go far enough on its own to ensure security. Whenever you manipulate memory, especially strings, remember that buffer overflow vulnerabilities typically occur in code that:

- Relies on external data to control its behavior

- Depends upon properties of the data that are enforced outside of the immediate scope of the code

- Is so complex that a programmer cannot accurately predict its behavior.

Additionally, consider the following principles:

- Never trust an external source to provide correct control information to a memory operation.

- Never trust that properties about the data your program is manipulating will be maintained throughout the program. Sanity check data before you operate on it.

- Limit the complexity of memory manipulation and bounds-checking code. Keep it simple and clearly document the checks you perform, the assumptions that you test, and what the expected behavior of the program is in the case that input validation fails.

- When input data is too large, be leery of truncating the data and continuing to process it. Truncation can change the meaning of the input.

- Do not rely on tools, such as StackGuard, or non-executable stacks to prevent buffer overflow vulnerabilities. These approaches do not address heap buffer overflows and the more subtle stack overflows that can change the contents of variables that control the program. Additionally, many of these approaches are easily defeated, and even when they are working properly, they address the symptom of the problem and not its cause.

Tips:

1. On Windows, less secure functions like memcpy() can be replaced with their more secure versions, such as memcpy_s(). However, this still needs to be done with caution. Because parameter validation provided by the _s family of functions varies, relying on it can lead to unexpected behavior. Furthermore, incorrectly specifying the size of the destination buffer can still result in buffer overflows.
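
A usage sketch (Microsoft CRT / C11 Annex K; the wrapper function here is illustrative):

#include <string.h>

int copy_bytes(char *dest, size_t destsz, const char *src, size_t n) {
    /* memcpy_s() reports an error when n exceeds destsz instead of
       silently overflowing the way memcpy() would */
    if (memcpy_s(dest, destsz, src, n) != 0)
        return -1;   /* handle the failure; do not use dest's contents */
    return 0;
}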



fortify scan: Privacy Violation

Abstract:

Mishandling private information, such as customer passwords or social security numbers, can compromise user privacy, and is often illegal.

Explanation:

Privacy violations occur when:

1. Private user information enters the program.

2. The data is written to an external location, such as the console, file system, or network.

Example 1: The following code contains a logging statement that tracks the contents of records added to a database by storing them in a log file. Among other values that are stored, the get_password() function returns the user-supplied plain text password associated with the account.

pass = get_password();

...

fprintf(dbms_log, "%d:%s:%s:%s", id, pass, type, tstamp);

The code in Example 1 logs a plain text password to the file system. Although many developers trust the file system as a safe storage location for any and all data, it should not be trusted implicitly, particularly when privacy is a concern.

Private data can enter a program in a variety of ways:

- Directly from the user in the form of a password or personal information.

- Accessed from a database or other data store by the application.

- Indirectly from a partner or other third party.

Sometimes data that is not labeled as private can have a privacy implication in a different context. For example, student identification numbers are usually not considered private because there is no explicit and publicly-available mapping to an individual student's personal information. However, if a school generates student identification based on student social security numbers, then the identification numbers should be considered private.

Security and privacy concerns often seem to compete with each other. From a security perspective, you should record all important operations so that any anomalous activity can later be identified. However, when private data is involved, this practice can create additional risk.

Although there are many ways in which private data can be handled unsafely, a common risk stems from misplaced trust. Programmers often trust the operating environment in which a program runs, and therefore believe that it is acceptable to store private information on the file system, in the registry, or in other locally-controlled resources. However, even if access to certain resources is restricted, it does not guarantee that the individuals who do have access can be trusted with certain data. For example, in 2004, an unscrupulous employee at AOL sold approximately 92 million private customer email addresses to a spammer marketing an offshore gambling web site [1].

In response to such high-profile exploits, the collection and management of private data is becoming increasingly regulated. Depending on its location, the type of business it conducts, and the nature of any private data it handles, an organization may be required to comply with one or more of the following federal and state regulations:

- Safe Harbor Privacy Framework 

- Gramm-Leach-Bliley Act (GLBA)

- Health Insurance Portability and Accountability Act (HIPAA)

- California SB-1386 

Despite these regulations, privacy violations continue to occur with alarming frequency.

Recommendations:

When security and privacy demands clash, privacy should usually be given the higher priority. To accomplish this and still maintain required security information, cleanse any private information before it exits the program.
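
For the logging statement in Example 1, one approach (a sketch; the helper name is illustrative) is to mask the password before it reaches the log:

#include <stdio.h>

/* Never let the plain text secret reach an external sink. */
static const char *redact(const char *secret) {
    return (secret != NULL && secret[0] != '\0') ? "********" : "";
}

void log_record(FILE *dbms_log, int id, const char *pass,
                const char *type, const char *tstamp) {
    fprintf(dbms_log, "%d:%s:%s:%s\n", id, redact(pass), type, tstamp);
}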

To enforce good privacy management, develop and strictly adhere to internal privacy guidelines. The guidelines should specifically describe how an application should handle private data. If your organization is regulated by federal or state law, ensure that your privacy guidelines are sufficiently stringent to meet the legal requirements. Even if your organization is not regulated, you must protect private information or risk losing customer confidence.

The best policy with respect to private data is to minimize its exposure. Applications, processes, and employees should not be granted access to any private data unless the access is required for the tasks that they are to perform. Just as the principle of least privilege dictates that no operation should be performed with more than the necessary privileges, access to private data should be restricted to the smallest possible group.

Tips:

1. As part of any thorough audit for privacy violations, ensure that custom rules are written to identify all sources of private or otherwise sensitive information entering the program. Most sources of private data cannot be identified automatically. Without custom rules, your check for privacy violations is likely to be substantially incomplete.


fortify scan: Missing XML Validation

Abstract:

Failure to enable validation when parsing XML gives an attacker the opportunity to supply malicious input.


Explanation:

Most successful attacks begin with a violation of the programmer's assumptions. By accepting an XML document without validating it against a DTD or XML schema, the programmer leaves a door open for attackers to provide unexpected, unreasonable, or malicious input. It is not possible for an XML parser to validate all aspects of a document's content; a parser cannot understand the complete semantics of the data. However, a parser can do a complete and thorough job of checking the document's structure and therefore guarantee to the code that processes the document that the content is well-formed.

Recommendations:

Always enable validation when you parse XML. If enabling validation causes problems because the rules for defining a well-formed document are Byzantine or altogether unknown, chances are good that there are security errors nearby.

Example: The following code demonstrates how to enable validation when using XmlReader.

XmlReaderSettings settings = new XmlReaderSettings();

settings.Schemas.Add(schema);

settings.ValidationType = ValidationType.Schema;

StringReader sr = new StringReader(xmlDoc);

XmlReader reader = XmlReader.Create(sr, settings);

// Validation occurs while reading; without a ValidationEventHandler,
// invalid content throws an XmlSchemaValidationException.
while (reader.Read()) { }



fortify scan: Cookie Security: HTTPOnly not Set on Application Cookie

Abstract:

The program does not set the httpCookies.httpOnlyCookies property to true in Web.config. 

Explanation:

The default value of the httpOnlyCookies attribute is false, meaning the cookie is accessible through client-side script. This creates an unnecessary cross-site scripting exposure that can result in stolen cookies. Stolen cookies can contain sensitive information identifying the user to the site, such as the ASP.NET session ID or forms authentication ticket, and can be replayed by the attacker in order to masquerade as the user or obtain sensitive information.

Example 1: Vulnerable configuration:

<configuration>

  <system.web>

    <httpCookies httpOnlyCookies="false" />

  </system.web>

</configuration>

Recommendations:

Microsoft Internet Explorer version 6 Service Pack 1 and later supports a cookie property, HttpOnly, that helps mitigate the cross-site scripting threats described above. When a compliant browser receives an HttpOnly cookie, the cookie is inaccessible to client-side script.

Example 2: The secure configuration is shown below. Any cookie marked with this property is accessible only from server-side code, not from client-side scripting code such as JavaScript or VBScript. Shielding cookies from the client in this way helps protect web-based applications from cross-site scripting (XSS) attacks, in which an attacker attempts to insert script code into the web page to get around the application security in place. Any page that accepts input from a user and echoes that input back is potentially vulnerable.

<configuration>

  <system.web>

    <httpCookies httpOnlyCookies="true" />

  </system.web>

</configuration>
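
Individual cookies can also be marked HttpOnly in server-side code (a sketch using the System.Web API; the cookie name and value are illustrative):

HttpCookie cookie = new HttpCookie("SessionToken", token);
cookie.HttpOnly = true;   // inaccessible to client-side script
Response.AppendCookie(cookie);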



fortify scan: WCF Misconfiguration: Service Enumeration

Abstract:

Publicly exposing information about a service can provide attackers with valuable insight into how they might exploit the service.

Explanation:

The <serviceMetadata> tag enables the metadata publishing feature. Service metadata could contain sensitive information that should not be publicly accessible.

Recommendations:

At a minimum, only allow trusted users to access the metadata and ensure that unnecessary information is not exposed.

Better yet, entirely disable the ability to publish metadata. A safe WCF configuration will not contain the <serviceMetadata> tag.
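
If metadata must be retained for trusted environments, it can at least be kept off anonymous HTTP endpoints (a configuration sketch; the behavior name is illustrative):

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="NoPublicMetadata">
        <!-- no GET endpoint publishes the service's WSDL -->
        <serviceMetadata httpGetEnabled="false" httpsGetEnabled="false" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>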



fortify scan: WCF Misconfiguration: Throttling Not Enabled

Abstract:

Not placing a limit on the use of system resources could result in resource exhaustion and ultimately a denial of service.

Explanation:

Windows Communication Foundation (WCF) offers the ability to throttle service requests. Allowing too many client requests can flood a system and exhaust its resources. On the other hand, allowing only a small number of requests to a service can prevent legitimate users from using the service. Each service should be individually tuned and configured to allow the appropriate amount of resources.

In this case, PDLCWcfService.dll.config does not contain a <serviceThrottling> tag, which indicates that the service uses default resource-allocation values; these defaults are likely to be suboptimal.

Recommendations:

Enable WCF's service throttling feature and set limits appropriate for your application.

The following is an example configuration with throttling enabled:

<system.serviceModel>

   <behaviors>

      <serviceBehaviors>

        <behavior  name="Throttled">

          <serviceThrottling

            maxConcurrentCalls="[YOUR SERVICE VALUE]"

            maxConcurrentSessions="[YOUR SERVICE VALUE]"

            maxConcurrentInstances="[YOUR SERVICE VALUE]" />

...

</system.serviceModel>

