CVE-2019-8081 in Adobe Experience Manager

Adobe Experience Manager (AEM) is a suite of online cloud-based services provided by Adobe for content and digital asset management. It includes analytics, social, advertising, media optimization, targeting, web experience management, and content management products aimed at the advertising industry. Various functions enable customers to manage their digital assets and content in different ways. One of the functions Adobe provides is ‘Workflows‘.

Workflows consist of a series of steps that are executed in a specific order. Each step performs a distinct activity, such as activating a page or sending an email message. Steps can be customized using either an ECMA script or a Java class, and AEM provides many useful workflow processes by default.

Vulnerability Details:

One of these Adobe-provided workflow utilities, urlcaller (a simple workflow process that calls any given URL), logged the supplied password in the debug log. Adobe does mention in its documentation that this workflow process should be used only during development and demonstrations, but people and organizations unaware of this caveat, and of the password-logging behavior, might have used the process in a security-sensitive setting and unknowingly logged passwords in their debug logs.

Below are the path of this workflow script and the arguments it takes.

  • ECMAScript path: /etc/workflow/scripts/urlcaller.ecma
  • Payload: None.
  • Arguments:
    • args := url [‘,’ login ‘,’ password]
    • url := /* The URL to be called */
    • login := /* The login to access the URL */
    • password := /* The password to access the URL */
    • for example: http://localhost:4502/my.jsp, mylogin, mypassword
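To make the logging flaw concrete, here is a minimal Python sketch of the pattern (the real script is ECMAScript running inside AEM; the function and logger names below are hypothetical): the args string is split into url, login, and password, and the full tuple, password included, is written to the debug log.

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("urlcaller-sketch")  # hypothetical logger name

def parse_args(args):
    # args := url [',' login ',' password]
    parts = [p.strip() for p in args.split(",")]
    url = parts[0]
    login = parts[1] if len(parts) > 1 else None
    password = parts[2] if len(parts) > 2 else None
    return url, login, password

def call_url(args):
    url, login, password = parse_args(args)
    # The vulnerable pattern: the raw password ends up in the debug log.
    log.debug("calling %s with login %s and password %s", url, login, password)
    return url, login, password

call_url("http://localhost:4502/my.jsp, mylogin, mypassword")
```

Anyone with read access to the debug log, or to a log aggregator it feeds, would see the password in clear text.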

How I came across this issue:

While testing one of the AEM targets, I came across an open, misconfigured QueryBuilder servlet and used it to query the internal system for *.ecma files. Among the results was this file, urlcaller.ecma, and when I looked at its source code, I noticed that it was logging the supplied password to the debug log.
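As context for the probe, AEM exposes a QueryBuilder JSON servlet that takes query predicates as URL parameters. Below is a rough Python sketch of how such a query for *.ecma files can be built; the endpoint path and predicate names are the commonly documented ones, and the base URL is a placeholder.

```python
from urllib.parse import urlencode

def build_querybuilder_url(base, node_pattern, limit=20):
    """Build an AEM QueryBuilder JSON servlet URL searching for files by name."""
    params = {
        "path": "/etc/workflow",   # where the default workflow scripts live
        "type": "nt:file",         # JCR node type for files
        "nodename": node_pattern,  # e.g. "*.ecma"
        "p.limit": limit,          # cap the number of results returned
    }
    return base + "/bin/querybuilder.json?" + urlencode(params)

url = build_querybuilder_url("http://localhost:4502", "*.ecma")
print(url)
```

On a misconfigured instance, fetching a URL like this anonymously returns JSON listing the matching repository nodes.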

I reported this issue to Adobe PSIRT. They did their investigation and confirmed the issue. Adobe released an advisory regarding this issue in their security bulletin and assigned CVE-2019-8081 to this finding.

The Adobe PSIRT team was very responsive and updated me regularly on the fix.

Vendor Response:

I shared a draft post with the Adobe team for review. According to the Adobe team, for an AEM environment to be at risk, the following criteria must be met:

  • An administrator must enable debug logging
  • This sample code, which is meant for demo purposes only, must nevertheless be used in a production environment

I don’t entirely agree with this statement, but if you have ever used this workflow utility in your organization, or if you are using an older version of AEM, please do check whether this utility has logged any passwords to your debug logs and take the required action.

Timeline:

02/23/2019 – Reported this issue to Adobe
02/24/2019 – Adobe PSIRT responded and assigned it a case id
03/13/2019 – I requested an update
03/14/2019 – Adobe PSIRT said they were still investigating the issue
09/26/2019 – Adobe PSIRT informed me that they are fixing this issue in the next release
10/15/2019 – Adobe released the security bulletin for AEM.
08/25/2020 – Shared a draft post with Adobe for review
10/02/2020 – Published the blog post

Ok Google! bypass ‘flag_secure’

Google Assistant on Android 9 can bypass the screen-capture protection provided by Android’s FLAG_SECURE.

Vulnerability Details:

FLAG_SECURE is a window-level flag in the Android ecosystem that allows mobile apps to safeguard their content from screenshot capture. An application enables it by setting WindowManager.LayoutParams#FLAG_SECURE on the windows/screens it doesn’t want to be recorded.

We observed that Google Assistant on Pixel devices was able to capture screenshots even when screens were protected with FLAG_SECURE.

It is also important to know that the MediaProjection API in Android allows an app to capture screenshots programmatically. Any rogue app using this API with the proper permissions would be able to capture the device’s screen while other apps are in use.

NightWatch CyberSecurity has written a detailed post on FLAG_SECURE and the MediaProjection API. Google has sample code on GitHub showing how to use this API to capture the device screen in real time.

Testing Steps:

1. Install the Google Search app (https://play.google.com/store/apps/details?id=com.google.android.googlequicksearchbox) and enable Assistant.
2. Go to the settings for Google Search and enable screenshots under “General”. Also enable the “Use Screen Context” option under “Google Assistant” > “Phone”.
3. Open Chrome in incognito mode and press Power + Volume Down. Note that screenshots won’t work.
4. Now tap and hold the home button and say “take screenshot” or “share screenshot”; Google Assistant will take a screenshot, bypassing the FLAG_SECURE restriction.

This was tested on Pixel 2 and Pixel 3 devices running Android 9.

Timeline:
03/12/2019 – Reported the finding through Google VRP
03/14/2019 – Google confirmed the finding and told us that it was a duplicate of an already tracked bug.
03/14/2019 – Asked when it will be patched and at what point we can disclose it publicly.
03/19/2019 – Received a response from Google recommending that we check the status of the fix from time to time.
04/30/2019 – I reached out to Google to know about the status of the fix and shared a draft write-up. No response from Google.
06/20/2019 – Asked again. No response.
08/30/2019 – Asked for a status update. No response.
04/14/2020 – Noticed that this finding was fixed in Android’s September 2019 bulletin and CVE-2019-2103 was assigned to this issue. I shared a modified write-up with Google and asked if CVE-2019-2103 is for the same vulnerability. I did not receive any response.
05/01/2020 – Published this blog post.

This was jointly discovered by Pankaj Upadhyay and NightWatch CyberSecurity.

Arbitrary Command execution in Privacy Disclaimer page of a very popular organization

I found that the ‘security and privacy’ section of this company’s website was vulnerable to command execution. I informed them about the issue, and their security team confirmed it. They added me to their hall of fame and fixed the issue quickly. My overall experience of working with them was very pleasant.

One fine evening, while exploring an organization’s ‘Security and Privacy’ page, I somehow came across a Java stack trace. The last paragraph after the stack trace caught my attention. It said –
“You are seeing this page because development mode is enabled. To disable this mode, set struts.devmode=false”.

I googled the error message but could not find anything relevant. Then I searched for “struts dev mode” and somehow landed on Pwntester’s insightful blog post on OGNL injection via Struts dev mode (as well as a few other links on the command-execution capability of the dev-mode setting).

OGNL is an expression language for Java that allows getting and setting JavaBeans properties on the fly (using Java reflection). It also allows execution of methods of Java classes.

Struts2 comes with an inbuilt OGNL debug console, known as dev mode, to help developers with more verbose logs. It can also be used to test OGNL expressions. Dev mode is disabled by default. When enabled, this setting uses the debugging interceptor and supports four types of debug parameters.

  • debug = console
    (a non-intrusive way to confirm whether devMode is enabled. If it is, a new web-console window with a black background opens, which can be used for further OGNL expression testing.)
  • debug = browser
    (a non-intrusive way to confirm whether devMode is enabled. This shows all properties of the specified object, e.g. debug=browser&object=%23parameters)
  • debug = xml
  • debug = command
    (used to execute the intended OGNL payload.)

By using the parameter debug=command and passing a specially crafted OGNL payload in the ‘expression‘ parameter, command execution can be achieved. For example, in the URL below, the debug and expression parameters are passed to a Struts action, HelloWorld.action.

http://<target>/struts2-blank/example/HelloWorld.action?debug=command&expression=1%2b1
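The probe URL above can also be assembled programmatically. Here is a minimal Python sketch (the target base URL and action path are placeholders, and probing should of course only be done against systems you are authorized to test):

```python
from urllib.parse import urlencode

def build_devmode_probe(base, action, expression):
    """Construct a Struts2 dev-mode probe URL using debug=command."""
    query = urlencode({"debug": "command", "expression": expression})
    return f"{base}/{action}?{query}"

# A benign expression: if dev mode is enabled, the response reflects the
# evaluated result (2) instead of the normal action output.
probe = build_devmode_probe("http://target",
                            "struts2-blank/example/HelloWorld.action",
                            "1+1")
print(probe)
```

Note that urlencode percent-encodes the `+` in the expression, matching the `1%2b1` form shown in the URL above.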

Mitigation and Remediation:

Always disable devMode in production. Apache also mentions this in their security tips. The best way is to ensure the following setting is applied in your struts.xml in production:

<constant name="struts.devMode" value="false" />

While devMode is set to false by default, many applications enable this setting in their non-production environments for verbose logs and forget to disable it when deploying to production.

Timeline:
8/26/2018 – Reported the issue to this organization
8/28/2018 – They acknowledged the report and confirmed that it was a valid issue and was not previously reported either internally or externally.
10/08/2018 – They fixed the issue and asked me to validate it.
10/08/2018 – They added me to their security hall of fame list.
05/02/2019 – Draft blog post shared with them
05/03/2019 – Organization said they need time to review it
06/20/2019 – Followed up with the them
06/25/2019 – They wrote that they were still reviewing the post
07/11/2019 – Followed up with the organization, received no response
07/22/2019 – Followed up with the organization, received no response
08/16/2019 – Followed up with the organization, received no response
08/22/2019 – Followed up with the organization, received no response
11/13/2019 – Followed up with the organization, no response from their side
11/16/2019 – Published this post but without any name.

References: 
1) http://www.pwntester.com/blog/2014/01/21/struts-2-devmode-an-ognl-backdoor/
2) https://www.cvedetails.com/cve/cve-2012-0394
3) https://struts.apache.org/security/
4) https://www.rapid7.com/db/modules/exploit/multi/http/struts_dev_mode
5) https://www.netsparker.com/web-vulnerability-scanner/vulnerabilities/struts2-development-mode-enabled/
6) https://gist.github.com/mgeeky/5ba0170a5fd0171eb91bc1fd0f2618b7
7) https://issues.apache.org/jira/browse/WW-4348

 

Tale of a Cross-Site Scripting vulnerability in ICICI Bank Website

If you’re teaching reflected cross-site scripting to a newbie, what could be a classic example?

A search page taking search keyword as input and reflecting it back on the result page, along with the search results.

I logged into the ICICI Bank website after ages and noticed a new search page on my dashboard. Out of curiosity, I wanted to check whether they were encoding the input properly. I entered a few special characters in the search field and right-clicked on the result page to view the HTML source, but an alert popped up stating that ‘Due to security reason, right click is not allowed’. It is generally trivial to bypass such client-side restrictions, and I don’t think any site should rely on them as a security control.

I simply added ‘view-source:’ before the URL and was able to see the generated HTML source. After studying it, I worked on an XSS payload, and the payload below successfully popped up an alert, confirming the presence of a cross-site scripting vulnerability.

xxx')</script>alert("XSS")
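The underlying fix for this class of bug is contextual output encoding of user input before it is reflected. A minimal Python sketch of the idea, using the standard library’s html.escape (the actual site is presumably not Python-based, but the principle carries over):

```python
from html import escape

# The kind of payload that popped the alert: it breaks out of the
# surrounding script/attribute context using quotes and angle brackets.
payload = 'xxx\')</script>alert("XSS")'

# Without encoding, the payload is emitted verbatim into the page.
unsafe = "<p>Results for: " + payload + "</p>"

# With HTML entity encoding, the special characters are rendered inert.
safe = "<p>Results for: " + escape(payload, quote=True) + "</p>"
print(safe)
```

After escaping, the closing script tag and quotes survive only as entities (&lt;, &gt;, &quot;, &#x27;), so the browser renders them as text instead of executing them.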

Timeline:

07/31/2016 – Reported this issue to ICICI’s anti-phishing email and whatever other emails I was able to find. Also shared the screenshot and steps to reproduce the issue.
08/02/2016 – Received a generic reply from their customer care asking for my account details and phone number to help me further.
08/02/2016 – Requested them to forward that email to their IT Security team or to anyone responsible for the IT department.
08/26/2016 – Asked for an acknowledgement or an update. Received a generic email from someone in customer care department.
03/05/2017 – Requested an update.
01/18/2018 – After quite some time, when I logged in to the ICICI site, I noticed that the XSS was fixed. I emailed them again to confirm whether it was fixed.
01/25/2018 – Received a generic email again from the customer care department asking for my account details and phone number to help me further.
09/21/2019 – As I never received an official response, my understanding is that this issue has been resolved. I’m writing this blog post for the general security awareness of my blog readers.

WebEx Meetings are vulnerable to MITM

In my free time, I was looking at some Android applications and noticed that I was able to intercept SSL traffic from the Webex Meetings app. When I explored further, I found that the Webex Meetings mobile app accepts self-signed certificates. Certificate pinning is also not enabled.

This makes the Webex Meetings app vulnerable to man-in-the-middle (MITM) attacks.

Users of this app, if connected to a public Wi-Fi hotspot, can be targeted by anyone on the same network. If connected to a rogue Wi-Fi hotspot, the Wi-Fi provider may have access to the data passed from the app to the server. Malware on the device can also exploit this vulnerability to intercept sensitive data while it travels across the wire.

Properly implemented SSL ensures the confidentiality and integrity of information passed from point A to point B and is very important.
OWASP also puts ‘Insecure Communication’ in third position in their top 10 list of mobile application vulnerabilities.
https://www.owasp.org/index.php/Mobile_Top_10_2016-M3-Insecure_Communication
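In essence, the client should reject certificates that do not chain to a trusted CA or whose hostname does not match. A minimal Python sketch of the difference between a strict TLS client configuration and the lax one that accepting arbitrary self-signed certificates effectively amounts to:

```python
import ssl

def strict_context():
    """A properly configured TLS client context: verifies the certificate
    chain against trusted CAs and checks the server hostname."""
    return ssl.create_default_context()

def lax_context():
    """What accepting any self-signed certificate effectively amounts to:
    no hostname check and no chain validation -> trivially MITM-able."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # must be disabled before CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE  # accept any certificate at all
    return ctx
```

A client built on the lax context will happily complete a handshake with an attacker presenting a self-signed certificate, which is exactly the scenario described above.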

In simpler terms, if you love connecting to free Wi-Fi hotspots for your Webex meetings, at the gym or the coffee shop, then your meetings may not be secret anymore.

Vulnerable version:

I tested the Webex Meetings Android app, version 10.6.0.21060208, on a Samsung S8 running Android 8.0.
As per the vendor’s response, it seems all Webex mobile clients behave similarly.

Vendor Response:

Hi Pankaj, after discussing with our development team, I’ve learned that the Webex mobile client accepts self-signed certificates because the Webex meetings component also allows for deployments using self-signed certificates. Similarly, because the Webex mobile client has to be used with so many different sites, certificate pinning is also not enabled.

See the documentation: https://www.cisco.com/c/en/us/td/docs/collaboration/CWMS/3_0/Administration_Guide/cwms_b_administration-guide-3-0.html

Page 219 of administrator guide instructs how to import self-signed certificate on mobile device to join meetings. There are also instructions for iOS there as well.

Page 256 of administrator guide instructs certificate management on the meetings server itself, including self-signed certificates.

The guide also mentions that the client warns on accepting the self-signed certificate, and users should make sure the application is genuine before accepting Connect.

These choices are consciously made by the business and documented for customers. As such, we do not consider them vulnerabilities. Although, you are correct, these configurations leave open the possibility of some attacks intended to defeat some SSL protections from attackers with privileged network positions. However, OCSP stapling is enabled as a hardening measure to verify SSL certificates.

Due to requirements of supporting applications using self-signed certificates, the Webex business unit will not make any changes to address your findings. You are of course free to make public your findings. If you do so, please include references to the above documentation.

Thank you again for your reports.

Timeline:

03/10/2018 – Issue reported to Cisco PSIRT
03/10/2018 – Report acknowledged by the incident manager and I was asked for more information
03/10/2018 – Shared the required details. Shared some screenshots from Packet Capture app.
03/27/2018 – I was asked if I could gather more information.
04/10/2018 – I shared some information again.
10/05/2018 – Reached out to the case manager and PSIRT DL for an update.
10/17/2018 – Reached out to PSIRT DL again for an update.
03/13/2019 – Reached out to PSIRT DL again for an update and asking permission for a public disclosure.
03/15/2019 – Got a response that the previous case manager had moved on to a different position, and that the dev team was not able to confirm my report; because of that, there was no fix.
03/20/2019 – Got the response confirming that Webex mobile clients accept self-signed cert and it is an intended behavior.

04/30/2019 – Requested a public disclosure because, even though Webex pointed to the ‘admin’ documentation, I didn’t think Webex users were aware of the inherent risks.
06/20/2019 – Shared a draft write up with the PSIRT team
06/24/2019 – Released the advisory for the public.

Credits:

No CVE or bounty was awarded, as the vendor does not consider this a security issue. The vendor credited me for reporting this bug in their public bug release notes.

https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvi63354

Update :
Someone pointed out that this issue was previously reported for the iOS app in 2012. CVE for that issue is CVE-2012-6399.

Popping up an XSS alert via a field which does not accept more than 20 characters

While testing an app, I found that a text field would not accept more than 20 characters (server-side validation). I inserted the following piece of code to check for XSS (from RSnake’s XSS cheat sheet):

'';!--"<XSS>=&{()}

and verified the HTML source for encoded characters. As the < appeared unencoded in the HTML source, the input field seemed to be missing output encoding and hence was vulnerable to cross-site scripting.

Now, I just needed a popup to conclude this theory. I started looking for a smaller script and tried to find or create payloads of fewer than 20 characters, but I was unable to find anything. At that point, a random question came to my mind: what is the smallest possible payload that pops up an alert? I know it wasn’t needed to prove the XSS or the missing output encoding; it was just a random question.
Here are some possible payloads, compiled from my own answer and a few others:

<a href=http://a.by>
<a onclick=alert(2)>
<b onclick=alert(2)>
<script src=//h4k.me

Update (7th March, 2019) – This is a very old post and may be obsolete now. Judging by a reply to that question in 2017, the following may be the smallest payload to pop up an alert now. I need to check.

<svg/onload=alert()>