A Survey of Google Trusted Stores
Google Trusted Store is an endorsement placed on qualified e-stores that allows consumers to “Shop online with confidence.” Through various metrics, including the number of daily consumers as well as shipping speed and reliability, Google determines whether or not an applicant is deserving of the endorsement. The entire application process generally takes between one and three months. Stores that receive the endorsement are able to display the Google Trusted Store badge on their website.
Original image from Google
Previously, Google would only allow the badge to be displayed on pages that were served over non-secure connections. Confused by this decision, I emailed the Trusted Store team asking for confirmation. Here is their response:
Actually, we currently suppress the badge from displaying on HTTPS pages. So if the entire site is on HTTPS then we won’t be able to display the badge… To clarify, if a site is entirely in HTTPS, then currently we do not support that scenario and the merchant cannot be considered for the Trusted Stores program.

This policy has since changed, but the initial rule resulted in some lingering security issues on many Google Trusted Stores. Google now has increased requirements surrounding stores’ SSL implementations:
Payment and transaction processing, as well as collection of any sensitive and financial personal information from the user, must be conducted over a secure processing server (SSL-protected, with a valid SSL certificate - https://). Merchants remain responsible for ensuring that they are compliant with local laws and regulations on the subject of privacy and data protection. Google may suspend any account found to be in violation of our policy or the law.
In this blog post, I’ll summarize the prevalence of passive security vulnerabilities across 25 arbitrarily chosen Google Trusted Stores. These vulnerabilities were found via passive analysis of the web applications through usage as a normal user, not a penetration tester. No malicious payloads (such as SQL Injection or directory traversal) were delivered to any of the servers.
I gathered the websites via various sources, including user recommendations and Google searches. Within each website, I created a user account. I then used that account to exercise the Forgot Password mechanism, add products to the cart, and check out. Through this process I was able to detect the following vulnerabilities:
- Lack of Secure cookie flag
- Lack of HttpOnly cookie flag
- Username Enumeration
- Caching of Sensitive Information
- Session Fixation
- Sensitive Resource Transmitted over Plaintext
- Cross-Site Request Forgery
- Insufficient Password Complexity
- Clickjacking
The sections below summarize the prevalence of each vulnerability across the surveyed applications:
Lack of Secure Cookie Flag
The Secure flag was designed to prohibit browsers from sending the associated cookie over a non-secure connection. This flag is generally placed on session identifier cookies to prevent the accidental disclosure of session identifiers. Disclosure of session identifiers can lead to session hijacking. This vulnerability is likely due to the original requirement that stores must serve over non-secure connections.
Out of the 25 surveyed applications, 23 were found to lack the Secure flag on session identifier or anonymous checkout cookies.
Users leveraging public wi-fi or insecure networks to access the vulnerable web applications would be susceptible to network sniffing attacks that disclose session identifiers.
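Setting the flag is straightforward server-side. A minimal sketch using Python's standard library; the cookie name and value are placeholders:

```python
from http.cookies import SimpleCookie

# Illustrative session cookie; "SESSIONID" and its value are placeholders.
cookie = SimpleCookie()
cookie["SESSIONID"] = "abc123"
cookie["SESSIONID"]["path"] = "/"
cookie["SESSIONID"]["secure"] = True  # browser will only send it over HTTPS

# The resulting Set-Cookie header now carries the Secure attribute.
header = cookie["SESSIONID"].OutputString()
```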
Lack of HttpOnly Cookie Flag
14 of the 25 surveyed applications lacked HttpOnly flags on application-sensitive cookies such as session identifiers.
In the event of a Cross-Site Scripting (XSS) vulnerability or social engineering attack, attackers would be able to access the values of these cookies, allowing them to perform a session hijacking attack.
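The absence of either flag is easy to audit from a captured response. A minimal checker, assuming the header sets a single cookie:

```python
from http.cookies import SimpleCookie

def missing_flags(set_cookie_header: str) -> list:
    """Return the protective flags absent from a single-cookie Set-Cookie header."""
    cookie = SimpleCookie()
    cookie.load(set_cookie_header)
    morsel = next(iter(cookie.values()))
    missing = []
    if not morsel["httponly"]:
        missing.append("HttpOnly")
    if not morsel["secure"]:
        missing.append("Secure")
    return missing

missing_flags("SESSIONID=abc123; Path=/; Secure")  # -> ['HttpOnly']
```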
Username Enumeration
Username Enumeration occurs when part of an application’s functionality confirms the existence of a username or email address. Within the surveyed applications, I tested for username enumeration in the login page and the Forgot Password mechanisms.
Of the 25 applications surveyed, 22 offered registration mechanisms. Of those 22 applications, 17 suffered from username enumeration flaws.
A malicious user could utilize a list of known email addresses to determine which accounts exist on the vulnerable applications.
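The standard defense is to return an identical response whether or not the account exists. A sketch, with a hypothetical in-memory account store and a stubbed mailer:

```python
users = {"alice@example.com": "password-hash"}  # hypothetical account store

def send_reset_email(email):
    pass  # stand-in for a real mailer

def forgot_password(email: str) -> str:
    if email in users:
        send_reset_email(email)
    # Identical message either way, so the response cannot confirm an address.
    return "If that address is registered, a reset link has been sent."
```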
Caching of Sensitive Information
Caching of sensitive information is a common problem in web applications. Caching is a technique designed to increase the performance of web applications by allowing certain information to be stored locally. However, storing sensitive information locally increases the likelihood of an attacker being able to access that information.
In this case, sensitive information was considered any personally-identifiable information (PII) such as physical addresses or billing information.
Out of the 25 surveyed applications, 12 cached sensitive information.
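Responses containing PII can be marked uncacheable with standard HTTP headers. A framework-agnostic sketch:

```python
def add_no_store(headers: dict) -> dict:
    """Mark a response uncacheable; suitable for pages containing PII."""
    headers.update({
        "Cache-Control": "no-store, no-cache, must-revalidate",
        "Pragma": "no-cache",  # honored by legacy HTTP/1.0 caches
        "Expires": "0",
    })
    return headers
```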
Session Fixation
Session Fixation is a vulnerability that occurs when session identifier tokens are not rotated upon authentication or privilege escalation.
Out of the 22 applications that offered persistent sessions, 11 were vulnerable to session fixation.
An attacker who had been able to grab a victim’s unauthenticated session identifier could replay that identifier once the user had authenticated, providing the attacker with an authenticated session. Five of the surveyed applications passed session identifiers in the URL, increasing the likelihood of session fixation attacks.
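The fix is to issue a fresh identifier at every authentication boundary. A minimal sketch with a hypothetical in-memory session store:

```python
import secrets

sessions = {}  # session_id -> username (None = anonymous); hypothetical store

def new_session(user=None) -> str:
    sid = secrets.token_urlsafe(32)
    sessions[sid] = user
    return sid

def login(old_sid: str, user: str) -> str:
    # Discard the pre-authentication identifier and mint a new one, so a
    # fixated or sniffed anonymous token never becomes an authenticated one.
    sessions.pop(old_sid, None)
    return new_session(user)
```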
Sensitive Resources Transmitted Over Plaintext
Transport security is designed to protect sensitive information as it is sent from the client to the server and vice versa.
By default, 5 of the 22 applications that offered login capabilities transmitted login credentials over plaintext connections. Furthermore, it was possible to force an additional 4 of the 22 applications to load the login or registration pages over a non-secure connection.
Users who accessed these applications on a public wi-fi network or other insecure network would be susceptible to credential harvesting via network sniffing.
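Forcing sensitive pages onto TLS is usually a one-line redirect rule. A framework-agnostic sketch that returns a status code and headers:

```python
def enforce_https(url: str):
    """Redirect any plaintext request to its HTTPS equivalent."""
    if url.startswith("http://"):
        return 301, {"Location": "https://" + url[len("http://"):]}
    return 200, {}
```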
Cross-Site Request Forgery
Cross-Site Request Forgery (CSRF) is a vulnerability that occurs due to the trust a web application places in client-side web browsers. When a user loads a web site, their browser sends that site all cookies the browser knows are valid for the particular domain and path. This includes any references to third-party sites for content not directly related to the site being viewed. Since web applications are often designed in such a way that the user’s session identifier (stored as a cookie) is the only metric that validates the user’s identity after authentication, these requests of additional content may allow an attacker to execute sensitive functions on a previously authenticated site.
24 of the 25 applications surveyed were susceptible to CSRF, allowing an attacker to force victims to add items to their cart.
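The common mitigation is a per-session synchronizer token, which an attacker's cross-site page cannot read. A sketch:

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Stored server-side and embedded in each state-changing form.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def verify_csrf(session: dict, submitted: str) -> bool:
    # Constant-time comparison avoids leaking the token via timing.
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)
```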
Insufficient Password Complexity Requirements
Of the 22 applications that offered user registration, 19 allowed users to register accounts with passwords of seven characters or fewer. Furthermore, none of the 22 applications required passwords to contain at least one letter, one number, and one symbol, indicative of a weak password policy.
It’s likely that many of the vulnerable applications’ users had passwords that would be considered weak. As such, in the event of a database compromise, password hashes could be easily cracked for a large number of user accounts.
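A policy matching the minimum bar implied by the survey's criteria (more than seven characters, plus at least one letter, one digit, and one symbol) can be sketched as:

```python
import re

def meets_policy(password: str) -> bool:
    """More than 7 characters, with at least one letter, digit, and symbol."""
    return (len(password) >= 8
            and re.search(r"[A-Za-z]", password) is not None
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)
```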
Clickjacking
Also known as a “UI redress attack,” Clickjacking occurs when an attacker uses a mixture of transparent and opaque layers to trick a victim into interacting with an attacker-controlled page. The attacker renders the vulnerable web application in a frame on an external web page, then strategically layers decoy elements over its sensitive functionality so the frame appears innocuous. When the victim is prompted to interact with the page, their clicks unknowingly pass through to the framed application’s sensitive functionality.
Out of the 25 websites surveyed, all 25 of them were vulnerable to Clickjacking.
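Framing can be refused outright with standard response headers, which defeats the overlay technique described above:

```python
# Either header alone instructs browsers not to render the page in a frame;
# X-Frame-Options covers older browsers, the CSP directive newer ones.
ANTI_FRAMING_HEADERS = {
    "X-Frame-Options": "DENY",
    "Content-Security-Policy": "frame-ancestors 'none'",
}
```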
Although the surveyed stores may be considered “trusted” by Google’s standards, I would urge users to recognize that Google makes no guarantees related to the security of the associated stores. Many of the vulnerabilities that were detected are common, low-risk vulnerabilities which exist in most applications. However, chaining several of these vulnerabilities together can greatly increase the total risk and weaken an application’s overall security posture.
It is also worth mentioning that due to the original restriction that trusted stores must serve over plaintext HTTP, none of the surveyed applications implemented the HSTS header. The HSTS header is designed to prevent SSL stripping attacks by forcing sessions to be encrypted between modern browsers and the application. Geller Bedoya talks more about HSTS in his post: Is Your Site HSTS Enabled?
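For reference, HSTS is a single response header; a one-year max-age is a common choice:

```python
# Instructs browsers to use HTTPS for all future requests to the host
# (and its subdomains) for the next year (31536000 seconds).
HSTS_HEADER = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
}
```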
Users love shopping online. It’s convenient and can be done anywhere. The decision Google originally made, forcing sites to support plaintext connections, essentially opened each trusted store to vulnerabilities such as session hijacking and Man-in-the-Middle (MitM) attacks. Despite Google’s recent changes to the policy, I found many sites were still vulnerable.
If you shop at Google Trusted Stores, you should have confidence that you’ll receive your product in a timely manner. Just be aware that Google makes no endorsement of the websites’ security. To reduce the likelihood of session hijacking, be sure to shop from home, as I describe in another blog post: 5 Tips for Secure, Online Shopping.
If anyone is interested in helping to extend this research to a larger sample size, please let me know. I would be happy to help you get started.
Thanks to @sethlaw for his help during the analysis.