“Don’t let the fox guard the henhouse,” the old adage goes. But for our bug bounty program, we’ve flipped this conventional wisdom on its head to yield some strong results for the security of our online properties.
Since its inception three years ago, our bug bounty program has increasingly helped to harden the security of our products. Over this short period, we’ve received thousands of submissions, and, as of December 2016, the bounties awarded for reports that resulted in real bug fixes have now surpassed $2 million in total. Just last month, a security researcher helped us identify and patch a vulnerability in Flickr.
In 2016 alone, we awarded bounties to nearly 200 researchers around the world. These bounties helped to fix vulnerabilities of varying severity across our web properties. Most bounties were for less impactful vulnerabilities, but some were more substantial.
Yes, this all comes with a degree of vulnerability. After all, we’re asking some of the world’s best hackers to seek out soft spots in our defenses. But it’s an acceptable risk. The right incentives, combined with hackers who genuinely want to do some good, have resulted in a diverse and growing global community of contributors to our security. Currently, our bug bounty program sees more than 2,000 contributors from more than 80 countries.
Visual representation of the locations of researchers who have contributed to Yahoo’s bug bounty program.
In 2017, we’ll look to continue to foster this healthy marriage in security. Attracting the highest skilled hackers to our program with meaningful bounties will continue to result in impactful bug reporting.
Following a recent investigation, we’ve identified data security issues concerning certain Yahoo user accounts. We’ve taken steps to secure those user accounts and we’re working closely with law enforcement.
What happened?
As we previously disclosed in November, law enforcement provided us with data files that a third party claimed contained Yahoo user data. We analyzed this data with the assistance of outside forensic experts and found that it appears to be Yahoo user data. Based on further analysis of this data by the forensic experts, we believe an unauthorized third party, in August 2013, stole data associated with more than one billion user accounts. We have not been able to identify the intrusion associated with this theft. We believe this incident is likely distinct from the incident we disclosed on September 22, 2016.
For potentially affected accounts, the stolen user account information may have included names, email addresses, telephone numbers, dates of birth, hashed passwords (using MD5) and, in some cases, encrypted or unencrypted security questions and answers. The investigation indicates that the stolen information did not include passwords in clear text, payment card data, or bank account information. Payment card data and bank account information are not stored in the system the company believes was affected.
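The disclosure notes that the stolen passwords were hashed with MD5. As a rough illustration of why that matters (this is a generic sketch, not Yahoo's actual code), MD5 is an extremely fast, typically unsalted hash, so an attacker holding a dump of MD5 hashes can reverse common passwords offline with a simple precomputed lookup:

```python
import hashlib

def md5_hash(password: str) -> str:
    # MD5 is designed to be fast, which is exactly what makes it
    # a poor choice for password storage
    return hashlib.md5(password.encode("utf-8")).hexdigest()

# An attacker precomputes hashes for a wordlist of common passwords...
wordlist = ["password", "123456", "letmein"]
lookup = {md5_hash(w): w for w in wordlist}

# ...then matches stolen hashes against it (hash value invented here
# by hashing a sample password, as it might appear in a breach dump)
stolen_hash = md5_hash("letmein")
print(lookup.get(stolen_hash))  # recovers the plaintext: letmein
```

Deliberately slow, salted algorithms such as bcrypt (used for the vast majority of passwords in the separate 2014 incident described later in this post) make this kind of bulk offline reversal far more expensive.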
Separately, we previously disclosed that our outside forensic experts were investigating the creation of forged cookies that could allow an intruder to access users’ accounts without a password. Based on the ongoing investigation, we believe an unauthorized third party accessed our proprietary code to learn how to forge cookies. The outside forensic experts have identified user accounts for which they believe forged cookies were taken or used. We are notifying the affected account holders, and have invalidated the forged cookies. We have connected some of this activity to the same state-sponsored actor believed to be responsible for the data theft the company disclosed on September 22, 2016.
What are we doing to protect our users?
We are notifying potentially affected users and have taken steps to secure their accounts, including requiring users to change their passwords. We have also invalidated unencrypted security questions and answers so that they cannot be used to access an account. With respect to the cookie forging activity, we invalidated the forged cookies and hardened our systems to secure them against similar attacks. We continuously enhance our safeguards and systems that detect and prevent unauthorized access to user accounts.
What can users do to protect their account?
We encourage our users to visit our Safety Center page for recommendations on how to stay secure online. Some important recommendations we’re re-emphasizing today include the following:
Change your passwords and security questions and answers for any other accounts on which you used the same or similar information used for your Yahoo account;
Review all of your accounts for suspicious activity;
Be cautious of any unsolicited communications that ask for your personal information or refer you to a web page asking for personal information;
Avoid clicking on links or downloading attachments from suspicious emails; and
Consider using Yahoo Account Key, a simple authentication tool that eliminates the need to use a password on Yahoo altogether.
For more information about these security matters and our security resources, please visit the Yahoo Security Issue FAQs page, https://yahoo.com/security-update.
Statements in this press release regarding the findings of Yahoo’s ongoing investigations involve potential risks and uncertainties. The final conclusions of the investigations may differ from the findings to date due to various factors including, but not limited to, the discovery of new or additional information and other developments that may arise during the course of the investigation. More information about potential risks and uncertainties of security breaches that could affect the Company’s business and financial results is included under the caption “Risk Factors” in the Company’s Quarterly Report on Form 10-Q for the quarter ended September 30, 2016, which is on file with the SEC and available on the SEC’s website at www.sec.gov.
By Kathleen Lefstad, Policy Manager, Trust & Safety
Yahoo’s “train the trainer” Digital Online Safety Course was shared with law enforcement in Quincy, Washington this past week, with school resource officers from Grant County, Warden, Ephrata, Yakima, Moses Lake and Quincy in attendance. With more than 1,000 officers trained to date, Yahoo was proud to bring this course to Quincy, providing the resources and tools to help officers facilitate discussions about online safety and good digital citizenship with their communities.
Police Chief Bob Heimbach was grateful for Yahoo’s commitment to bring the course to Washington saying, “With the world interconnected in this electronic age, this safety training, and providing us the ability to support our community members in digital safety, is invaluable. Yahoo has demonstrated their intent and commitment to being a good partner and community member here in Quincy.”
It was nearly eight years ago that the course was first created, when Officer Holly Lawrence approached Yahoo to create presentations for School Resource Officers to give about safety and citizenship in a digital world. The training has been successful due to its focus on teaching the material, sharing available resources and, specifically, showing how to present the material effectively for different audiences.
With an emphasis on communication, these presentations open the door to talk about online trends and safety issues, and identify workable solutions and preparedness together. “The old adage about ‘it takes a village’ is still true, but maybe we should start saying ‘it takes an ivillage,’” said Officer Holly Lawrence, Ret., a law enforcement partner of Yahoo, who helps run these courses nationwide. “As more communities develop and thrive in the digital space, kids and their trusted adults need the tools to be able to speak one-to-one (if not face-to-face) about the challenges and opportunities of life online.”
By Dylan Casey, Vice President of Product Management
We’re making it easier than ever to see and manage all of the devices connected to your Yahoo account. Today, you might notice some new improvements to help you keep track of the account activity and devices associated with your Yahoo account. This information is available to all users under “Account Info” here: https://login.yahoo.com/account/activity. Before we get too technical, let’s explain how this works in a real-world scenario.
Imagine that your phone falls out of your pocket in a taxi and later that day you realize that you’ve lost it. From a computer, tablet or alternate device, just sign in to your Yahoo account and head over to “Account Info.” There you’ll find a tab that says “Recent Activity.” Find the apps on your phone that are shown to have access to your account and remove them. This will invalidate the OAuth token so that no one else can use those apps to access your account on your lost phone. The same can be done for any other devices you might own that are authorized to use your Yahoo account, including a laptop, desktop computer, tablet or cell phone.
Users already had the ability to invalidate OAuth tokens through the Member Center, but this feature makes it easier to see and control which devices and apps are validated to access their Yahoo account, offering greater convenience and peace of mind.
We have confirmed that a copy of certain user account information was stolen from the company’s network in late 2014 by what we believe was a state-sponsored actor. The account information may have included names, email addresses, telephone numbers, dates of birth, hashed passwords (the vast majority with bcrypt) and, in some cases, encrypted or unencrypted security questions and answers. The ongoing investigation suggests that stolen information did not include unprotected passwords, payment card data, or bank account information; payment card data and bank account information are not stored in the system that the investigation has found to be affected. Based on the ongoing investigation, Yahoo believes that information associated with at least 500 million user accounts was stolen and the investigation has found no evidence that the state-sponsored actor is currently in Yahoo’s network. Yahoo is working closely with law enforcement on this matter.
We are taking action to protect our users:
We are notifying potentially affected users. The content of the email Yahoo is sending to those users will be available at https://yahoo.com/security-notice-content beginning at 11:30 am (PDT).
We are asking potentially affected users to promptly change their passwords and adopt alternate means of account verification.
We invalidated unencrypted security questions and answers so they cannot be used to access an account.
We are recommending that all users who haven’t changed their passwords since 2014 do so.
We continue to enhance our systems that detect and prevent unauthorized access to user accounts.
We are working closely with law enforcement on this matter.
Change your password and security questions and answers for any other accounts on which you used the same or similar information used for your Yahoo account.
Review your accounts for suspicious activity.
Be cautious of any unsolicited communications that ask for your personal information or refer you to a web page asking for personal information.
Avoid clicking on links or downloading attachments from suspicious emails.
Additionally, please consider using Yahoo Account Key, a simple authentication tool that eliminates the need to use a password altogether.
An increasingly connected world has come with increasingly sophisticated threats. Industry, government and users are constantly in the crosshairs of adversaries. Through strategic proactive detection initiatives and active response to unauthorized access of accounts, Yahoo will continue to strive to stay ahead of these ever-evolving online threats and to keep our users and our platforms secure.
For more information about this issue and our security resources, please visit the Yahoo Security Issue FAQs page, https://yahoo.com/security-update, which will be up beginning at 12pm (PDT).
Statements in this press release regarding the findings of Yahoo’s ongoing investigation involve potential risks and uncertainties. The final conclusions of the investigation may differ from the findings to date due to various factors including, but not limited to, the discovery of new or additional information and other developments that may arise during the course of the investigation. More information about potential risks and uncertainties of security breaches that could affect the Company’s business and financial results is included under the caption “Risk Factors” in the Company’s Quarterly Report on Form 10-Q for the quarter ended June 30, 2016, which is on file with the SEC and available on the SEC’s website at http://www.sec.gov/.
By Katie Shay, Legal Counsel, Business & Human Rights
Twelve security trainers, tool developers and human rights activists from four continents came to our headquarters in Sunnyvale, California. Their mission? To share their unique perspectives with our Yahoo product, engineering, security, public policy and legal teams. Yahoo’s Business & Human Rights Program, the Paranoids and Yahoo for Good orchestrated this ‘hack of the minds’ in partnership with Internews and the USABLE Project.
USABLE Project’s aim is to inform the development of security tools that are easy to use and simple to understand for users from diverse backgrounds and skill levels. Their goal is to support vulnerable populations around the world who use the internet for more than just sharing pictures of cats or Venmoing a friend for lunch. In many cases, these users rely on the internet to exercise their right to free expression, expose corruption or fight against injustice in their communities. For these users, the ability to be secure online is critical.
In July, Yahoo was proud to sponsor the USABLE Project’s first ever public forum, UX in a High Risk World in San Francisco, bringing together frontline digital security practitioners, users, tool developers and UX experts from around the world. In addition, Yahoo participated in the final day of USABLE’s four-day closed-door workshop leading up to this event, working directly with this community to build concrete, actionable roadmaps to improve usability in security tools.
Following the forum, the delegation from USABLE that visited Yahoo shared their on-the-ground perspective on why remaining secure online is so important to their work. They explained how they use Yahoo products, including Flickr and Mail; why it’s important to have a principled approach to responding to government requests for user data and to content moderation; and the importance of baking security features into products from the outset by turning them on by default. These visionary leaders are working toward solutions for activists facing censorship, hacking, surveillance and suppression in some of the world’s most challenging environments.
During the delegation’s visit, our Yahoo teams asked pointed questions to understand the experience of some of our most vulnerable users and to explore how their experiences might inform Yahoo’s product development and online security work.
We are grateful to the USABLE team for sharing their stories with us, and for inspiring our teams to continue to find new and innovative ways to put our users first!
Recent headlines might lead you to believe that when a company runs a red team exercise, the red team should fail. After all, the company has invested in security teams, products and processes. So the outcome should be a win for the blue team and a failure for the red team. (For those of you who are lost already, a red team is an independent group within a company’s security organization that challenges the effectiveness of its security defenses. The red team performs analysis of systems and process gaps. Then it attacks you, hopefully before a real adversary does.) Let’s set the record straight on this critical aspect of modern security programs.
The red team always wins. Always.
It can be humiliating. And the timing is rarely convenient. Friday late night or on Christmas morning? Fair game.
The red team adopts the tools and techniques of actual adversaries. They use their understanding of attacks on other organizations that have been made public. They mimic the work of adversaries that the blue team has caught. They do not fight fair, nor will your adversaries.
Most companies prepare their defenses around best practices and compliance. Those alone will not get you very far. Even the organizations that use threat models and attack chains (i.e. the common events in an attack) need to practice. Practice. Measure. Learn. Repeat.
Most companies think they have a security plan. One of the great philosophers of our time, Mike Tyson, once remarked “Everybody has a plan until they get punched in the mouth.” Will your muscle memory kick in after getting hit? Or will you be stunned? Companies that engage in continuous red/blue battles are far more likely to detect and survive real attacks.
Having a security program without a red team is like practicing martial arts in the mirror rather than with a worthy sparring partner.
A red team exercise should not be an annual activity. It should represent a continuous clear and present danger. An employee, for example, may (incorrectly) doubt that they are the target of state-sponsored actors. They might think “Why should I close these minor gaps? It’s not like anyone would use these vulnerabilities against us!” They can, however, be sure that their red team is actively targeting them. Continuous red team exercises, over time, will give the blue team a fighting chance.
After the red team attack, what do you do? Do you “fix the glitch”? Or do you take time in the post-mortem to find the root cause and to fix it? More mature organizations will revisit the gaps over time. They provide input into the next planning cycle. Lessons learned from red team exercises contribute to a stronger defense and a better chance of stopping the real adversaries.
The real scandal is not that a red team won (the red team always wins!), but that many companies do not have red teams. Reporters: want a great story? Ask every CISO you talk to if they have a full-time, dedicated red team. Prepare yourself to hear some spin.
Unacceptable answers:
We are not the target of sophisticated adversaries.
We already know we have a lot of work to do so adding a red team report isn’t going to help.
We work in a highly regulated industry so it’s not necessary.
We have not had a breach in years.
Our attack surface is small.
Our IT team is great and we do a good job of user training.
Yahoo has its own internal red team known as Offensive Engineering (yes, that can be read two ways!). Their job is to take a contrarian view of Yahoo systems. They don’t care what the code was designed to do. They care about what it actually does. And yes, this red team always wins. Always. It’s what we pay them to do.
Let’s stop talking about red team wins as if they are a bad thing and let’s start talking about the red vs blue feedback loop: Practice. Measure. Learn. Repeat.
In our inaugural post to The Paranoid, we discussed the human element behind online attacks–the human adversary. We sought to give some perspectives as to who is behind online threats in order to better understand how to defend against them. Yahoo’s bug bounty program applies that insight in our ongoing efforts to provide a safe environment for our users. By thinking about the economics of security, we’ve found that we can tilt the advantage in our favor by partnering with industry-leading security researchers.
We often get questions from both security researchers, and people just interested in learning about how programs like these work. We thought we’d use this opportunity to take a quick look under the hood.
First, some background. Bug bounty programs essentially crowd-source security. They allow companies to improve coverage by adding additional eyes where they are needed. Bug bounty researchers also bring depth of expertise and different skill sets that can uncover hard-to-find bugs.
For the past two years, Yahoo has developed one of the largest and most successful bug bounty programs in the industry. We’ve paid out over $1.7 million in bounties, resolved more than 2,000 security bugs and maintain a “hackership” of more than 2,000 researchers, some of whom make careers out of it.
Security researchers often ask us how we decide the payout associated with a given bug report. At first it might seem logical that we pay based on the type or classification of a security bug. Some bug types tend to be severe, so you might think that all bugs of a given type would be paid the same. However, in the vast majority of cases, that’s not the complete story. So if the bug type alone is not what we use to determine the payout, what is? The missing input to the calculation is the impact of the vulnerability. We take into account what data might have been exposed, the sensitivity of that data, the role that data plays, network location and the permissions of the server involved. Those factors are of great importance.
Given the importance of the impact of a bug, the Yahoo bug bounty program does not reward researchers solely based on bug type. The type of bug a security researcher finds is mostly irrelevant on its own; what matters most is what the bug allows them to do, and where. What can an attacker actually do with this specific bug to potentially affect the security of Yahoo or our users? Furthermore, Yahoo’s application landscape is not necessarily uniform; certain properties or applications are more equal than others.
Here’s an example to show how these factors work in practice. SQL injection bugs are often a devastating bug class because they can provide full access to a database. Odds are, if a company has a presence on the web, they are storing sensitive information in databases. But just because an attacker can access the database does not mean it’s game over. The real reason that the SQL injection bug class can be so devastating is the data stored in the database may be accessed or changed by unauthorized parties. The typical impact of a SQL injection bug is high because the data exposed is typically sensitive, except when it’s not. What if the database doesn’t contain any sensitive data?
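The SQL injection mechanics described above can be sketched in a few lines. This is a generic illustration (the table, data, and queries are invented for the example), contrasting a vulnerable query built by string concatenation with a parameterized one:

```python
import sqlite3

# Toy in-memory database standing in for a web application's backend
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # Vulnerable: user input is pasted directly into the SQL text,
    # so input can change the structure of the query itself
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats the input strictly as data
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [('alice',)] -- injection returns every row
print(find_user_safe(payload))    # [] -- the payload matches nothing
```

Whether a bug like this earns a large bounty still depends on what is actually in the database, which is exactly the impact question the paragraph above raises.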
Part of the process in determining impact can seem opaque to the researcher, and we understand that. That obscurity is an unfortunate but necessary fact of life in a bug bounty program. As an external party, it is just not possible to have all the information. The sort of testing available to participants in a public bug bounty program is inherently “black box”–no documentation, no source code, what you see is what you get.
So we encourage bug reporters to include in their reports what they believe the impact of the vulnerability to be (example report here). Submitting a report that contains a thorough and detailed explanation of a legitimate security issue is much more highly valued and rewarded.
We also work closely with the developers to ensure the bug is fixed in a timely manner, and to obtain their expert opinion on impact if necessary. If the developers that created the application tell us that no sensitive data is stored in a particular database, we take that into consideration when awarding your bug. More detailed guidelines for our bug bounty program are available at hackerone.com/yahoo.
To paraphrase a little-known quote, “bug bounty programs don’t reward you for being clever.” Users and researchers should know that we place far more weight on how impactful bugs are to our platforms.
Life as a Paranoid: Understanding the Human Adversary
By Bob Lord, Yahoo CISO (Paranoid in Chief)
If the countless data breaches we read about in the news have confirmed anything, it’s that online security is somewhat of a moving target. We’ve witnessed compromised security at one point or another across every industry and government. From health records and email to financial information, intellectual property and critical infrastructure, it would seem nothing is secure these days.
Yet, despite being armed with this fundamental understanding of online security, it’s often treated as a static challenge–as if there is one solution for one vulnerability. In an inherently insecure world with ever changing threats, our conventional wisdom must evolve just as online threats do.
The obvious next question is how, and that’s a good question to ask with a plethora of answers. But in order to understand how we adapt to emerging threats, it’s first and foremost critical to understand the dynamics behind the threats themselves. Why are the threats changing and what allows them to continue to be successful?
In fact, the next best question to ask is who is behind today’s online threats. The most important aspect of online security that we can internalize is that we are up against dedicated, human adversaries who organize their activities into campaigns.
They are dedicated, which means they have a job to do, or a calling. They’re going to keep coming back until they achieve their goals. Maybe they work for a criminal syndicate, or for a foreign military. Or maybe they are on a mission from God.
They are also human, which means they can be creative and resourceful. Like water in a cracked vase, they will find a way to seep out. They spend time learning your internal processes and reading your internal documentation before acting.
And finally, they work in campaigns. The data they seek from a system may not be valuable by itself. It may be that the data is valuable because it provides information about human rights activists in their own country. Or because they want to know what their political opponents are doing. They are likely targeting other services of peers and competitors. The data they collect is only valuable to the extent the campaign objectives are known.
Our activities as defenders, from the casual user to the chief information security officer, need to line up against these characteristics of our adversaries. Are we considering how a phone call from an unfamiliar number but a familiar voice might be part of a social engineering scheme? Are we employing security tactics that eliminate an attack instead of letting it shift to a new vector?
Until we start thinking about online adversaries this way, we’ll continue to find ourselves playing whack-a-mole without ever turning the tide.
This is the first edition of our new Yahoo Tumblr series–The Paranoid–where we will delve into the security space and share how we’re working to protect our users, as well as useful tips for users to consider as they go about their everyday lives online. Like all good security researchers, we will look at security issues from the viewpoint of an adversary. Our goals with this series are to break conventional wisdom, ask tough questions about how we approach online security, and ultimately allow our users to hold us to a higher standard. Most importantly, we want to start a conversation to ultimately improve the safety and security of our users and our network.
We put our users’ security first at Yahoo, and today we’re proud to highlight one way in which we’re protecting our users against evolving online threats through our bug bounty program. Partnering with HackerOne, Yahoo’s bug bounty program has grown dramatically since our launch about two years ago. Our bug bounty program boasts more than 2,000 security researchers, and we’ve awarded $1.6 million in the last two years. Our security team, known as the Paranoids, works night and day to secure our users, but, with an online property as large as Yahoo, having as many eyes as possible focused on the security of our users crowd-sources what would otherwise be an impossible task for the resources of a few.
Learn more about our growing bug bounty program here.
We recently learned that a third party had obtained access to a set of Tumblr user email addresses with salted and hashed passwords from early 2013, prior to the acquisition of Tumblr by Yahoo. As soon as we became aware of this, our security team thoroughly investigated the matter. Our analysis gives us no reason to believe that this information was used to access Tumblr accounts. As a precaution, however, we will be requiring affected Tumblr users to set a new password.
For additional information on keeping your accounts secure, please visit our Account Security page.
Recent years have witnessed exciting progress in the development of cryptographic techniques enabling new functionalities and ways of interaction, such as fully homomorphic encryption, program obfuscation, and verifiable outsourcing of computation. The second Bay Area Crypto Day workshop, for Bay Area researchers to present and discuss the latest developments in the theory of crypto, will take place at Stanford University on Monday, May 2. The workshop’s program and other relevant information can be found here. Yahoo Research is proud to co-organize the event along with Stanford University and UC Berkeley.
By Binu Ramakrishnan, Security Engineer, Yahoo Mail
Summary
At Yahoo, our users send and receive billions of emails every day. We work to make Yahoo Mail easy to use, personalized, and secure for our hundreds of millions of users around the world. In line with our efforts to protect our users’ data, our security team recently conducted a study to measure the quality of SMTP STARTTLS deployments. We found that while the use of STARTTLS is common and widespread, the growth has slowed in recent years. Providers with good/valid certificates have better TLS settings compared to others, and we believe there is an important need to improve the quality of STARTTLS deployments to protect messages – and therefore, users – from active network attacks.
The Modern Mail Ecosystem
Simple Mail Transfer Protocol (SMTP) is the underlying protocol used for email transmission, especially when sending or receiving email between different providers. The SMTP protocol does not require encryption by default, and mail providers like Yahoo depend on the STARTTLS extension to encrypt messages in transit. Unfortunately, not all providers support STARTTLS when they send or receive emails, potentially exposing them to network eavesdropping.
The diagram below offers a simplified view of a modern mail ecosystem. Communication between service providers takes place over SMTP, and providers use mail transfer agents (MTAs) to send and receive messages to/from other providers. MTAs speak the SMTP protocol and use STARTTLS to encrypt the messages in transit. To send a message, the sender (MTA outbound) resolves a mail exchanger record (MX) for the recipient’s domain from DNS. The MX record contains the recipient’s (MTA inbound) server name. Once the recipient’s server name is resolved, the sender connects to that server and transmits messages.
Figure 1: A high level overview of a mail ecosystem
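The sender's first step, MX resolution and selection, can be illustrated with a tiny sketch. The record values here are invented; per RFC 5321, a domain may publish several MX records and the sender tries the one with the lowest preference number first:

```python
# Hypothetical MX records for a recipient domain, as returned by DNS:
# each record is a (preference, target-host) pair
mx_records = [
    (20, "mx2.example.com"),  # backup mail exchanger
    (10, "mx1.example.com"),  # primary mail exchanger
]

# The outbound MTA sorts by preference and connects to the lowest first,
# falling back to the next record if the connection fails
delivery_order = [host for _, host in sorted(mx_records)]
print(delivery_order)  # ['mx1.example.com', 'mx2.example.com']
```

Note that nothing in this lookup is authenticated by default, which is what makes the MX spoofing attack discussed below possible.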
STARTTLS has received a lot of attention in recent years. Around half a dozen studies were published and presented in 2015 (see Appendix), all of which underscore the importance of securing mail delivery infrastructure against mass surveillance and network eavesdropping. Since mail is an open system, a collective, industry-wide effort is critical to secure our email communication.
What is STARTTLS?
STARTTLS is an extension that enables opportunistic upgrades of plaintext communication to encrypted communication between a STARTTLS-aware client and server. The diagram below shows an SMTP session between a client and a server. When the server is willing to receive email over TLS, it returns 250 STARTTLS to the client in response to the client’s EHLO. If the client supports TLS, it may initiate a TLS handshake; once the TLS session is established, messages are sent over an encrypted channel.
Figure 2: SMTP STARTTLS session between a client and a server
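As a concrete illustration of this exchange, here is a minimal sketch in Go that checks whether a server's multi-line EHLO response advertises the STARTTLS extension. The function name and the sample response are ours for illustration, not part of the study's scanner:

```go
package main

import (
	"fmt"
	"strings"
)

// supportsSTARTTLS reports whether a multi-line EHLO response advertises
// the STARTTLS extension (a "250-STARTTLS" or "250 STARTTLS" line).
func supportsSTARTTLS(ehloResponse string) bool {
	for _, line := range strings.Split(ehloResponse, "\r\n") {
		if strings.HasPrefix(line, "250-STARTTLS") || strings.HasPrefix(line, "250 STARTTLS") {
			return true
		}
	}
	return false
}

func main() {
	resp := "250-mx.example.com Hello\r\n250-STARTTLS\r\n250 SIZE 41697280\r\n"
	fmt.Println(supportsSTARTTLS(resp)) // prints "true"
}
```

In a real scanner the response would come off a live SMTP connection (Go's net/smtp package exposes this as Client.Extension("STARTTLS")); parsing it by hand as above makes the protocol exchange in Figure 2 visible.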
STARTTLS provides protection against passive attacks and, in fact, the opportunistic nature of STARTTLS drove widespread adoption of TLS in SMTP. At the same time, 'opportunistic' encryption also means that STARTTLS is not effective against MITM (active) attacks, for two reasons: (1) STARTTLS downgrade attacks, in which an attacker strips STARTTLS from an active SMTP session and forces messages to be sent in cleartext, and (2) DNS MX spoofing attacks, in which a compromised name server returns a spoofed MX target host or IP address and diverts the traffic through the attacker's mail server.
Methodology
For this study, we collected 12M unique domains from 30 days of outbound mail logs in January 2016. Of the 12M domains we scanned, we gathered statistics for 9M domains with 3.7M unique MX hosts and ~1M unique IP addresses. The collected data is aggregated and presented in multiple buckets – unique Domain, MX, IP, etc. This data is also compared with a previous study (slides) we conducted in May 2015 (presented at the M3AAWG 34th General Meeting in Dublin, Ireland). We scanned the domains with a fast TLS scanner written in Go and used Unix tools to analyze the data.
Caveats
The Go TLS implementation has limited cipher support: specifically, it does not support deprecated/insecure ciphers. It also does not have SSLv3 client side support. This study is based on domains we collected from Yahoo, and we considered only those domains with at least three or more emails sent during that period.
Findings
Our findings are grouped and presented in buckets based on:
Domains - Unique domains (9M)
MX - Unique MX hosts (3.7M)
IP - Unique IP addresses (1M)
Valid Cert - Unique MX with valid CA signed certificate (1.8M)
Strict validation - Valid cert with a matching host name (peer verify) (626K)
Note that these 9M domains are hosted by 3.7M MX hosts, which in turn map to 1M unique IP addresses. Many domains share the same MX, and many MXs share the same IP.
STARTTLS Adoption
Around 80% of the MXs we scanned support STARTTLS. Compared to a similar study we conducted last year, the STARTTLS adoption rate was flat, and we do not expect significant growth in the near future. The adoption rate in the unique IP bucket is lower than in the other two buckets.
Figure 3: STARTTLS adoption (*data from 2015)
TLS X.509 Certificates
Public Key Size
Public key size is the length of the RSA (or ECDSA) key used by the server. An RSA key size of less than 2048 bits is considered weak, but we found that around 14% of MXs are still using weak 1024-bit RSA public keys. Interestingly, key sizes in the last two buckets were more compliant than in the other buckets, which is expected considering that those hosts have valid CA-signed certificates. We also observed five valid ECDSA certificates.
Figure 4: Public key size distribution chart
Signature Algorithm
The signature algorithm is the cryptographic hash algorithm used by certificate authorities to sign TLS certificates. SHA1-based certificates are deprecated and currently being phased out. We observed a few RSA-SHA1 certificates issued in 2015 but found none issued in 2016 (as of January 31, 2016). However, a significant number of these SHA1 certificates remain valid well beyond 2016, which is a concern. Almost all browser vendors (in the HTTPS world) have decided to mark SHA1-signed certificates as 'untrusted' if they encounter them after January 1, 2017. Compared with data from 2015, we find a significant increase in SHA256-based certificates, which is expected. You may also notice a small percentage of MD5-based certificates, especially in the Domain, MX, and IP buckets; note that almost all of them are either expired or self-signed.
Figure 5: Signature algorithm distribution chart
Certificate Validation
This chart presents the certificate distribution in three groups: (1) Untrusted, (2) ValidCert, and (3) StrictValidCert. The ValidCert group represents certificates that chain to a trusted root CA, and StrictValidCert is the subset of valid certificates that also pass peer verification. Note that peer verification is against the MX hostname, not the email domain. The unique domain bucket has more valid and strict-valid certificates than the other two buckets, with more than 50% of certificates peer-verified. This is largely because the large mail service providers that host millions of third-party domains mostly use valid certificates for STARTTLS. In the unique IP bucket, we find a large percentage of untrusted certificates.
Figure 6: Certificate validation
Certificate Validation - Error-type Distribution
This chart shows the distribution of certificate validation error types. Hostname mismatch (PeerVerifyFailed) is more prevalent than self-signed/expired certificates in the domain and MX buckets. This was largely because the large hosted email providers prefer to use CA signed certificates over self-signed certificates. Interestingly, even the large mail providers grapple with hostname mismatch. Self-signed and expired certificates are more prevalent within the IP bucket.
Figure 7: Certificate validation error-type distribution
Certificate Chain Depth
Chain depth of zero mainly represents self-signed certificates (in red) and is more prevalent in the first three buckets. However, for valid and strict-certs buckets, the chain depth is either two or three, which is expected.
Figure 8: X509 certificate chain depth distribution
TLS Session
TLS Protocol Version
TLS version 1.2 usage has increased since last year. Usage is higher in the verified and strict-certs buckets. TLS 1.1 usage is not statistically significant.
Figure 9: TLS protocol version
Negotiated Ciphers
The data presented in this chart may not be 100% accurate, as our scanner is written in Go and the Go TLS implementation has limited cipher support. In particular, the Go TLS implementation does not support deprecated/insecure ciphers and DHE cipher suites, nor does it have SSLv3 client side support.
Figure 10: TLS session cipher distribution
Deployment Quality - Focus areas for email service providers
Though STARTTLS protects against passive network eavesdropping, it is not effective against active MITM attacks in its current form. An industry-wide effort is underway to strengthen the mail delivery infrastructure and the end goal is to protect against active MITM attacks, thereby upholding users’ privacy. Below are a few recommendations that can greatly improve STARTTLS deployment quality. While these steps alone cannot protect against active attacks, by implementing these changes, mail providers can meet the baseline requirements to fight against pervasive monitoring attacks and increase the difficulty of active attacks.
Server side
Eliminate self-signed and expired certificates. A few certificate authorities provide certificates free of cost; Let's Encrypt, for example, is a new certificate authority that provides free TLS certificates and can automate certificate renewal, which solves the expiration issue. DNS-based Authentication of Named Entities (DANE) is an alternative way to authenticate STARTTLS servers without certificates issued by a certificate authority; however, DANE relies on Domain Name System Security Extensions (DNSSEC) for security, and DNSSEC is not widely deployed and its adoption rate remains low.
Upgrade valid certificates to conform to strict validation (peer verify). Operators must make sure their certificates are not only valid, but also match their hostname. We observed a large number of valid certificates with hostname mismatches, some of which were from large mail providers.
Replace SHA1 based certificates with SHA256 based certificates. The SHA1 cryptographic hash algorithm is considered weak and the industry recommendation is to transition from SHA1 signed certificates to SHA256 signed certificates as early as possible.
Strict certificate validation. Validate MX certificates and verify that the hostname of the server matches the name in the certificate it presents. Soft validation is recommended initially, which is useful for logging and monitoring (see below).
Log & monitoring. Data about validation failures when connecting to a recipient server helps detect active network attacks. Log events such as STARTTLS=false, MX mismatches, and certificate validation failures for this purpose.
Keep up to date with root CA certificates bundle. SMTP clients, unlike browsers, have no standard mechanism to update CA bundles. In recent years, Microsoft and Mozilla pruned their CA bundle and removed many old root certificates. Our recommendation is to keep your root CA bundles up to date, irrespective of which root CA bundle you trust.
Certificate revocation support (CRL, OCSP, OCSP stapling). Considering the opportunistic nature of current SMTP deployments, until now there was no compelling reason to check whether the certificates presented by servers are revoked or not. But this feature may become more important in coming years.
Recommendations
The use of STARTTLS is common and widespread; however, its growth has slowed in recent years. Through our study, we found that providers with good, valid certificates have better TLS settings than others. There is an important and fundamental need to improve the quality of STARTTLS deployment in order to protect messages – and therefore, users – from active network attacks. As a baseline requirement, email providers should work to eliminate self-signed and expired certificates and use good ciphers with PFS on SMTP servers. Senders should validate certificates and log validation failures, as the failure logs provide valuable insights and can be used for reporting.
Acknowledgments: We want to thank Mike Shema, Elizabeth Zwicky, Suzanne Philion, and colleagues from Yahoo Mail Delivery and Paranoids teams for their support and contribution to this work.
Passwords can be a hassle: they're easy to lose track of or forget, and weak passwords are vulnerable to hacking. At Yahoo, we are moving fast in our mission to "kill the password" and make it easier for users to sign in without sacrificing security.
With Yahoo Account Key, you can easily and securely sign in to your Yahoo account using your mobile phone. Whether you use Yahoo Finance, Fantasy, Mail, Messenger, or Sports for iOS or Android, each time you sign in you will receive a push notification on your mobile phone for you to approve. Once you tap it, you'll be signed in immediately. It's secure, and there's no need to remember a difficult password. Read on for how to set up Account Key.
How to set up Account Key
First, make sure you are signed into a Yahoo mobile app, then click here to set up Account Key. Or, you can follow the steps below.
In the Yahoo Mail app:
On Android, tap the top left menu icon. On iPhone, tap the profile icon in the top right of the navigation bar.
Tap the key icon next to your account
Tap Set up Account Key and follow the steps
In Yahoo Sports, Finance or other Yahoo apps:
Tap the top left menu icon
On Android, tap the key icon next to your account. On iPhone, select Account Key from the list (under the Tools section).
Tap Set up Account Key and follow the steps
And now you’re ready to go! Next time you sign in from your desktop, we will send you a push notification to your mobile app. Simply open it and tap “Yes” to approve and sign in. Make sure not to sign out of your app or turn off notifications, as this will prevent you from receiving your Account Key push notification.
By Christopher Rohlf, Senior Manager, Penetration Testing
Later today the House Oversight and Government Reform Subcommittee on Information Technology, along with the House Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection, and Security Technologies, will hold a hearing on the Wassenaar Arrangement, designed to restrict international sales of items with both civilian and military applications, and proposed changes on cybersecurity and export control. We welcome this opportunity for Congressional representatives to hear from expert industry stakeholders as they review the proposed 2013 Wassenaar Arrangement cybersecurity technologies additions. The hearing will highlight the impact these changes will have on American businesses and the cybersecurity industry. We thank the Co-Chairs, Rep. Will Hurd and Rep. John Ratcliffe, for their leadership on this issue, as well as the 125 bipartisan Members of Congress who expressed concerns about the impact of the current rules on cybersecurity and research.
At Yahoo, we are committed to protecting our users. As written, the proposed rule changes to the Wassenaar Arrangement will have unintended consequences that would undermine the ability of companies to protect and enhance the safety and security of their networks and users’ information. For example:
Overly Broad Language: The current language defines ‘intrusion software’ so broadly that the inevitable result is a regulation that becomes burdened with exception clauses for specific products. This harms our ability to get access to and use these products in real time to defend ourselves.
Lack Of Intra-Company/Party Exception: The proposed language has no intra-company exception. This makes it difficult for global companies such as Yahoo to properly defend themselves in the face of a sophisticated attack.
Bug Bounty: At Yahoo, we rely on our bug bounty community of security researchers to help keep Yahoo secure by crowdsourcing our vulnerability discovery efforts. Sometimes this involves exchanging detailed information about exploits with researchers all over the world. The proposed language makes this risky and difficult to do.
Information Sharing: We may lose the ability to easily share information with colleagues and partners through effective, collaborative mediums.
We will continue to work with policymakers to propose constructive solutions: for example, including an intra-company/party exception, encouraging a better focus on exfiltration and the use of cybersecurity items for unauthorized activities, requesting additional clarity around acceptable uses that do not require a license, and sharpening the definition of specific technologies named in the control. With these modifications, Yahoo – and other tech companies – could continue our proactive work to ensure the highest level of safety and security for our users around the world.
The Yahoo Paranoids are excited to announce a restructuring of our Bug Bounty Program geared toward continuing to protect our users while encouraging our security reporters to submit high quality reports. We’ve been running our Bug Bounty Program for two years now and it has helped us ensure our users have the safest possible online experience. We’re proud of the security community that we’ve built through our program, with over 1,800 participating hackers who have helped Yahoo resolve more than 2,500 bugs.
Our Bug Bounty Program continues to play a critical role in the overall security posture of Yahoo, provides a safe learning environment for both new and experienced security researchers, and above all helps to ensure Yahoo products and systems are as secure as possible to provide the greatest value to our users.
We will continue to enforce a strict set of rules to maintain focus and prevent individuals from acting outside the spirit of the program. We also want to encourage more accurate and well-documented reports; we occasionally encounter ambiguous vulnerabilities or reports that lack reproducible steps. To help highlight the kinds of impactful vulnerabilities we're looking for, we've updated the list of properties and bug classes that are in scope. We hope these changes will help focus researchers toward better quality (and higher paying!) bounties.
All bugs submitted prior to the date and time of this message will be considered under the previous set of guidelines. The new rules are now live at https://hackerone.com/yahoo. Good hunting!
Notifying Our Users of Attacks by Suspected State-Sponsored Actors
By: Bob Lord, Chief Information Security Officer
We’re committed to protecting the security and safety of our users, and we strive to detect and prevent unauthorized access to user accounts by third parties. As part of this effort, Yahoo will now notify you if we strongly suspect that your account may have been targeted by a state-sponsored actor. We’ll provide these specific notifications so that our users can take appropriate measures to protect their accounts and devices in light of these sophisticated attacks.
Our notifications provide targeted users with specific actions they can take to help ensure that their Yahoo accounts are safe and secure. If you receive such a notification from us, here are some of the actions you should take immediately:
Check that your account recovery information (phone number or alternate recovery email address) is up to date and that you still have access to it. Remove any entries that you no longer have access to or don't recognize.
Install anti-virus software on your computer and ensure that your computer and other devices have all the latest security updates applied.
Review the account security guidelines posted by other services you use. For example, social networks, financial institutions, and other email providers. Follow their guidelines to secure those accounts, too.
It’s important to note that if you receive one of these notifications, it does not necessarily mean that your account has been compromised. Rather, we strongly suspect that you may have been a target of an attack, and want to encourage you to take steps to secure your online presence. In addition, these warnings to our users do not indicate that Yahoo’s internal systems have been compromised in any way.
So how do we know if an attack is state-sponsored? In order to prevent the actors from learning our detection methods, we do not share any details publicly about these attacks. However, rest assured we only send these notifications of suspected attacks by state-sponsored actors when we have a high degree of confidence.
We will continue to refine our detection and notification of state-sponsored threats and remain committed to keeping your account safe from unauthorized access.
Yahoo Pentest Team members Stuart Larsen (@xc0nradx) and John Villamil (@day6reak) presented original research at Pacsec 2015 on the HTTP/2 protocol, its security implications, and flaws discovered in a number of implementations. Through this presentation, summarized below, we hope to make the protocol a more popular research target. What follows is a summary of our presentation given at Pacsec 2015 (slides).
HTTP/2 is a new technology that is already seeing widespread use across the Internet. There has been little security research into this new protocol, yet multiple implementations already exist and adoption is widespread. HTTP/2 lives in browsers, caching proxies, and libraries. It is the undisputed future of Internet connections, and vulnerabilities in this protocol have the potential to cripple infrastructure. Our talk focused on threats, attack vectors, and vulnerabilities found during the course of our research. Two Firefox, two Apache Traffic Server (ATS), and four node-http2 vulnerabilities were discussed, alongside the release of the first public HTTP/2 fuzzer. We showed how these bugs were found, their root cause, why they occur, and how to trigger them.
We also discussed http2fuzz, a fuzzer for both the client and server endpoints of HTTP/2 connections. The fuzzer is open source and written in Go. It implements a large part of the HTTP/2 protocol and supports various frame types. It also includes a unique replay mechanism to help track down crash-causing payloads. We previously blogged about two ATS bugs found by an earlier version of this fuzzer.
Overview
HTTP/1.1 came out back in 1999 and it was a huge step in bringing the web forward. But since then, websites have grown drastically, and HTTP had to be revisited. Today’s sites are much more complex with many more interconnected dependencies. ISP speeds have improved and more bandwidth is available.
The changes from HTTP/1.1 to HTTP/2 are all about performance. The major changes are:
- Binary Protocol / Compression
- Multiplexing
- Server Push
- Frames
But these new changes in functionality and complexity also introduce additional attack surface to HTTP implementations.
HPACK
Originally, HTTP was stateless. It followed a very simple model to make a request and receive a response. But that also means lots of redundant information is sent. HPACK (RFC7541) was released to address these and other issues.
HPACK is a binary header compression protocol. It uses dynamic lookup tables to store and retrieve headers. Headers only need to be sent once, and are remembered for future requests on the same connection. This differential encoding saves space and time and is a huge improvement over the vanilla protocol.
Frames
Frames are the fundamental unit of communication within HTTP/2. Here is a typical HTTP/2 header visualized:
There are 10 different types of frames:
- Headers
- Data
- Priority
- Reset
- Settings
- Push
- Ping
- Goaway
- Update
- Continuation
To learn more about individual frames, check out the RFC.
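Every frame type shares the same fixed 9-octet header, which is worth sketching since malformed frames were one of our main attack vectors. Below is a minimal parser for that header (struct and function names are ours) following the RFC 7540 layout: a 24-bit payload length, 8-bit type, 8-bit flags, and a 31-bit stream identifier with the top bit reserved:

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// frameHeader mirrors the fixed 9-octet header preceding every HTTP/2 frame.
type frameHeader struct {
	Length   uint32 // 24-bit payload length
	Type     uint8
	Flags    uint8
	StreamID uint32 // 31 bits; reserved top bit masked off
}

func parseFrameHeader(b []byte) (frameHeader, error) {
	if len(b) < 9 {
		return frameHeader{}, errors.New("need 9 octets")
	}
	return frameHeader{
		Length:   uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2]),
		Type:     b[3],
		Flags:    b[4],
		StreamID: binary.BigEndian.Uint32(b[5:9]) & 0x7fffffff,
	}, nil
}

func main() {
	// A PING frame (type 0x6) with the ACK flag (0x1) on stream 0,
	// announcing its mandatory 8-byte opaque payload.
	raw := []byte{0x00, 0x00, 0x08, 0x06, 0x01, 0x00, 0x00, 0x00, 0x00}
	h, _ := parseFrameHeader(raw)
	fmt.Printf("len=%d type=%#x flags=%#x stream=%d\n", h.Length, h.Type, h.Flags, h.StreamID)
	// len=8 type=0x6 flags=0x1 stream=0
}
```

A fuzzer gets a lot of mileage from exactly this structure: declaring one payload length and sending another, or pairing a frame type with flags and stream IDs the specification forbids.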
Push Promise
Push Promise is a new feature of HTTP/2 that allows a server to push resources to a client before the client requests them. For example, if a client requests /index.html, the server can probably assume the client will also want /logo.png.
New Attack Surface
- HPACK
- Upgrades / Downgrades
- Inconsistent Multiplexing
- Malformed Frames
- Pushing arbitrary data to client
- Pushing arbitrary data to server
- Stream dependencies
- Invalid Frame States
With all of this new attack surface, we needed an automated way of getting good code coverage in HTTP/2 implementations. For this, we decided to build a new fuzzer.
http2fuzz
http2fuzz is a fuzzer written in Go for fuzzing HTTP/2 implementations in either server or client mode.
It has a variety of strategies for both smart and dumb fuzzing. It can either rebuild valid frame structures with invalid data, or use completely random data.
A big challenge in fuzzing is determining what payload actually caused the target to crash. We decided to build a replay feature that saves each frame that is sent. If a crash occurs, the replay list can be inspected and minimized to determine which payload was the cause of the crash.
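The replay idea can be sketched in a few lines of Go. This is an illustrative toy under our own names (journal, record, minimize), not http2fuzz's actual implementation: every payload sent to the target is recorded, and after a crash, progressively larger prefixes of the recording are replayed until the crash reproduces, isolating the culprit:

```go
package main

import "fmt"

// journal records every fuzz payload sent to the target.
type journal struct {
	payloads [][]byte
}

func (j *journal) record(p []byte) { j.payloads = append(j.payloads, p) }

// minimize replays growing prefixes against crashes() and returns the
// shortest prefix that still reproduces the crash (nil if none does).
func (j *journal) minimize(crashes func([][]byte) bool) [][]byte {
	for n := 1; n <= len(j.payloads); n++ {
		if crashes(j.payloads[:n]) {
			return j.payloads[:n]
		}
	}
	return nil
}

func main() {
	var j journal
	j.record([]byte("SETTINGS"))
	j.record([]byte("HEADERS"))
	j.record([]byte("MALFORMED")) // the frame that actually kills the target
	j.record([]byte("PING"))

	// Stand-in for re-running the target: it "crashes" once the malformed
	// frame is included in the replayed sequence.
	crashes := func(seq [][]byte) bool {
		for _, p := range seq {
			if string(p) == "MALFORMED" {
				return true
			}
		}
		return false
	}
	fmt.Println(len(j.minimize(crashes))) // prints "3"
}
```

In practice the crash oracle is a fresh run of the target process rather than a closure, and crashes that depend on accumulated state (not a single frame) are exactly why replaying the whole sequence, rather than individual frames, matters.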
Bugs 1,2: Apache Traffic Server
Our fuzzer discovered two remotely exploitable vulnerabilities in Apache Traffic Server. Both of these had the potential for arbitrary code execution. These bugs were covered in a previous blog post.
Bug 3: Firefox HTTP/2 Malformed Header Frame DoS
Normally a header frame consists of a pad length, stream dependency identifier, weight, header block fragment, and padding. If only a single byte is sent, an integer underflow occurs, which causes nsCString to try to allocate nearly 2^32 bytes of memory.
Bugs 4-7: node-http2
We found a number of bugs in node-http2 through fuzzing. Most of them involve buffer out-of-bounds reads or invalid state handling within JavaScript. These issues do not appear exploitable for arbitrary code execution but could be used to perform denial-of-service attacks against Node-based web servers that use the package.
[*] These issues have not been addressed by the project maintainers. The package no longer appears to be in active development.
Conclusion
HTTP/2 brings with it a lot of new attack surface. More research needs to be conducted on the implications of this protocol on web security. New tools need to be developed which handle the protocol and allow penetration testers to effectively audit HTTP/2 based web sites. Security products, including NIDS, will need to implement a subset of the protocol to effectively audit connections for malicious behavior or exploits. Lastly, more testing needs to be done on implementations of the protocol before they are enabled for popular use.
Stuart Larsen and John Villamil of the Yahoo Pentest Team
Recent years have witnessed exciting progress in the development of cryptographic techniques enabling new functionalities and ways of interaction, such as fully homomorphic encryption, program obfuscation and verifiable outsourcing of computation. Yahoo Labs, in cooperation with Stanford University and UC Berkeley, is starting a series of one-day workshops for Bay Area researchers to present and discuss the latest developments in the discipline. The first event will take place at UC Berkeley on Friday, November 20. The workshop’s program and other relevant information can be found here.
By Jay Rossiter, SVP, Product & Engineering, Science & Technology
I’m so pleased to welcome Bob Lord as Yahoo’s new Chief Information Security Officer (CISO). Bob brings more than twenty years of significant experience in the information security space, most recently as CISO-in-Residence at Rapid 7. Before that, Bob was Twitter’s first security hire, heading up their information security program. In this role he established Twitter’s efforts in compliance, application security, product security, and information security. Previously he held positions in product and information security at companies like Red Hat, AOL, and Netscape.
Security has never been a more important priority for our company and the subject of global debate than right now. At Yahoo, we’re committed to protecting our users’ security and maintaining their trust. We offer users encrypted products, provide an end-to-end encryption plugin on GitHub for Yahoo Mail, offer two-factor authentication, and have taken an important step toward a password-free future through Yahoo Account Key, which allows users a fast and secure way to access their Yahoo accounts.
Bob will lead our security team – known as the Paranoids – in offensive and defensive protection of our more than one billion users around the world and for our employees globally. He’ll work closely across our teams and collaboratively across the industry to ensure that we’re providing the highest level of security possible to our users, and continue to provide our users with the latest security innovations.
Stay tuned for updates from Bob around our continued efforts to protect our users’ security and maintain their trust.
Paranoid Labs, Open Source, and Solving XSS in Handlebars
By: Christopher Harrell, Director of Paranoid Labs at Yahoo
Our Paranoid Labs team researches the most widespread and impactful security issues for Yahoo, then builds usable, production ready solutions to help mitigate them. To do this, we collect data and feedback from our systems, our product developers, our fantastic infosec colleagues in the Paranoids and in the industry, and from the extended community via our bug bounty program.
Increasingly, we're open sourcing completed work on Yahoo's GitHub, or even building open source first, because we believe security shouldn't be a competitive advantage on the Internet; everyone deserves access to the tools to be safe, and the opportunity to collaborate on and improve them.
Yahoo Paranoids Nera Liu, Adonis Fung, and Albert Yu recently found that our existing controls to prevent Cross-Site Scripting (XSS) were not as effective on newer JavaScript-based apps and set out to develop a solution. A brief description of that solution in their words follows, and you can find their open source work via the npm package and the source repository. You can also join them at their AppSec USA talk next month if you'd like to talk in person or learn about our plans for some safety enhancements for users of React JSX.
We’ve made a concerted effort (see below for specifics on how) to ensure the safety of this solution, and would love to hear from you if you find any issues we’ve missed. We’re offering a double bug bounty for a limited time on eligible findings in this framework to encourage the research community to help us make sure this open source solution is rock solid for all who choose to use it.
Hardening Handlebars with Yahoo Automatic Contextual Escaping, or How to Kill XSS in Seconds
By: Nera Liu, Adonis Fung, and Albert Yu of Yahoo’s Paranoid Labs
Despite all of the advances in web security over the past few years XSS still remains a big problem. A website is vulnerable to XSS if untrusted user-supplied inputs are not properly filtered either on the way in or on the way out. Malicious scripts submitted within these inputs will render in browsers as originating from a trusted source, and can thus deface the website and exfiltrate sensitive information such as session tokens.
Contextual escaping is a widely recommended approach for XSS prevention. It works by applying filters based on the output context of untrusted data, at the right place and in the correct order to be secure. For instance, placing untrusted data in an anchor's href attribute (i.e., <a href="{{input}}">) requires URL encoding, followed by an HTML escaping that is sensitive to how the attribute value is quoted (un-/single-/double-quoted), and a protocol blacklist to prohibit scriptable URIs (e.g., javascript:). This is different from putting untrusted data inside a simple division element (i.e., <div>{{input}}</div>), which requires encoding only the < character as &lt;.
However, contextual escaping is missing in most template frameworks, including Handlebars JS, React JSX, and Dust JS. Those frameworks apply a single set of filtering rules to every output placeholder regardless of its context by blindly escaping & < > ` ' ". This approach is good enough for preventing XSS in a simple HTML context. Yet the protection can be bypassed by attack vectors such as "javascript:alert(1)" in an href attribute, or " onclick=alert(1)" in an unquoted attribute. The approach can also cause double encoding, which looks unsightly (an already-escaped &amp; becomes &amp;amp;, and so on). Web applications that rely on these frameworks remain vulnerable unless developers take extra care to sanitize untrusted data manually, which can be error-prone.
Even choosing the correct output encoding from the 20+ contexts available is too much to ask developers to do manually.
We attempted to solve this problem by automating contextual escaping and minimizing the amount of work developers have to do to gain the advantages of this approach:
Secure and Standard-compliant. The contextual analysis and escaping is automated and backed by our standard-compliant and up-to-date HTML and CSS parsers that we built from scratch. The solution has undergone a series of manual code reviews, as well as unit and integration tests. The filters also stand up against fuzzing tests using popular browsers. If you find anything we missed, please submit through our bug bounty program.
Auto-correcting Compatibility Issues. Compatibility issues can arise from non-standard behaviors (aka quirks) of browsers and from different HTML versions. It is challenging to apply filtering to problematic contexts whose interpretation by real browsers deviates from that of standard-compliant parsers. Our work outperforms prior efforts by correcting problematic HTML automatically, and by also filtering those harmful characters that have different syntactic meanings in real browsers.
Efficient by Design. Templates can be pre-processed completely offline to avoid any runtime contextual analysis, as depicted in Figure 1. Filters escape a just-sufficient set of sensitive characters, keeping security intact while improving speed by up to 2x over what existing template engines apply.
Effortless Adoption. Handlebars is the first template engine that comes with our contextual analyzer, as realized in the secure-handlebars package. Users of express-handlebars can also upgrade to the express-secure-handlebars package. With as little as 2 lines of code changes to upgrade, one can robustly mitigate XSS in seconds.
Figure 1. High-level architecture of secure-handlebars. The contextual analysis is decoupled from the template engine, and can be done completely offline.
The solution is already in use in some production systems at Yahoo, and the integration required fewer than 10 lines of code changes for one dev team. We also ran offline analysis over some 880 templates in less than a minute. The solution identified and secured output expressions placed in dangerous contexts such as URI and event attributes, which would otherwise result in critical XSS vulnerabilities.
In terms of caveats, sub-template (a.k.a. partial template) support is already in the source repo but has not yet made it to the npm package, and we plan to support running contextual analysis over style tags in the future. Pull requests are welcome!
We’re committed to the security of our users at Yahoo, which is why our Paranoids team was excited to participate in the Black Hat USA Conference last week. Black Hat brings together leaders from all facets of the security world - from the corporate and government sectors to academic and industry researchers - and facilitates transparent dialogue on security issues in today’s constantly evolving security landscape.
At Black Hat 2015, we announced three programs to improve our users’ security:
Launch of New E-Crimes Portal
Yahoo’s e-crime team investigates and responds to bad actors who attack Yahoo users, including account hijackers, fake customer care companies, and those who exploit children. As part of a partnership with HackerOne, Yahoo announced a new invite-only portal for members of the security community to securely report to this team potentially-criminal fraud or abuse on its network.
Launch of TrustKit: Open Source Secure Mobile API
Yahoo partnered with DataTheorem to unveil a new, open source mobile developer toolkit that helps iOS developers easily include complex mobile security functionality, known as public key pinning, in their apps. Public key pinning is a step developers can take to ensure eavesdropping cannot occur on data connections from their mobile apps by making sure the app verifies servers’ certificates. Angela Chow, Senior Software Development Engineer at Yahoo, and DataTheorem presented the launch of TrustKit Thursday morning to help attendees better understand how mobile developers can use the toolkit to minimize security risk without making any coding changes to their iOS applications.
Bug Bounty $1M+ Payout to Network Vulnerability Reporters
Yahoo and HackerOne held a bug bounty appreciation party to celebrate incident reporters who have helped Yahoo find vulnerabilities on its network. To date, Yahoo has paid out over $1 million to more than 600 reporters who have found verifiable bugs. This community sourced method of vulnerability detection has quickly become an important part of securing the Internet.
Black Hat was a great opportunity to interact with other members of the security community and foster ideas about how we can work together to protect our users from vulnerabilities. Achieving security online is not an end state; it’s a constantly evolving challenge that we tackle head on. We’re conscious of the need to continuously update our security protocols, especially in light of the recently-reported malware attack.
At Yahoo, we know that our users and our advertisers rely on us to help protect their information for them. We also see security as a partnership - we want to educate our users and our advertisers to be mindful of their own security habits, and we provide intuitive, user-friendly tools and security resources to help them do so. The ideas and community at Black Hat continue to inspire our team and we remain dedicated to working closely with the security community to make the Internet safer for our users.
By Chris Rohlf, Senior Manager, Penetration Testing
At Yahoo, we work to protect our users on a daily basis. That includes protecting our users’ information and fixing potential security vulnerabilities by investing in programs like our bug bounty – a program that compensates security researchers worldwide who help us to uncover potential security issues on our web properties.
Considering the success of such vulnerability programs, we’re concerned by the Department of Commerce’s proposed rules on export controls identified in 2013 by the Wassenaar Arrangement. These proposed rules would undermine the ability of companies to protect and enhance the security of their corporate networks and users’ information. Our main concerns with the proposed rules can be found here, in comments filed by the Internet Association.
We’re committed to working with the Commerce Department to make sure that our users’ safety and security comes first. The goal should be to create rules that make it tougher for repressive regimes to access surveillance technologies without putting the security of our users and networks at risk.
Apple’s 10.10.4 OS X update brought a high number of security patches for vulnerabilities reported by the Yahoo Pentest Team. During my research into various OS X frameworks I chose to focus on OS X font parsing and spent a week fuzzing and reversing native libraries. This research resulted in six CVEs, five of which are shared between OS X and iOS.
Client-side font parsing is often a good target because the file formats are varied and complicated. For example, TrueType comes with its own Turing-complete instruction set, which you can learn more about here. OTF and the less popular PostScript file formats are complex and also supported.
Many of these flaws are the result of using untrusted length values extracted directly from the file without validation. In one example in CoreText, a low-level font layout framework, the ArabicLookups::AddLookup function reads a length value from the memory-mapped font file and uses it to increment a pointer out of bounds. The pointer is held in the rdi register, which is later dereferenced in the ResolveLookup function.
Apple Type Services is a process that manages fonts. The documentation states that, “The ATS server is a process that is responsible for maintaining the font database for Mac OS X. It activates and deactivates fonts, maintains and scales glyph outline data, maintains font caches, and communicates information about font availability between font clients and font utility applications.” Since it parses font files, ATS makes a great client side target.
One of the vulnerabilities found in ATS had to do with the size argument being attacker controlled in a call to memcpy. A snippet of the stack trace is pasted below:
To reach this part of the code the AssureScalerFontBlock function must find a null value in [rax+10h]. ScalarGetTableParts() is eventually called, retrieving an integer from the font file and byte swapping for endianness before storing it in [rcx+50h].
mov ecx, [rcx+50h] # read from file, set in ScalarGetTableParts
imul ecx, eax
lea edx, [rdx+rcx*2+40h]
mov [r14+28h], edx # edx is a controlled length
...
mov r12d, [r14+28h]
sub r12d, [r14+18h] # 44h
...
movsxd rdx, r12d # size_t
call _memcpy
It was interesting seeing how OS X handles different fonts. If you want to learn more about font attack surface and vulnerabilities then I suggest reading some of the high quality presentations on Windows kernel and browser font vulnerabilities.
Infrastructure vulnerabilities present a unique challenge for a large enterprise. These vulnerabilities are often widespread and affect many systems that are key infrastructure components. Fixing these kinds of issues is not as simple as clicking “yes” on an auto-update prompt and rebooting. It requires planning, coordination and a well-tested patching mechanism that takes uptime and performance into account.
The Internet is beginning to adopt HTTP/2 on a large scale. It only takes a handful of high-traffic sites and one or two browsers adding support to make a large change in the adoption rate. However, HTTP/2 brings some challenges with it. If you're not familiar with the HTTP/2 protocol, it is best summarized by the official page linked above:
“The focus of the protocol is on performance; specifically, end-user perceived latency, network and server resource usage. One major goal is to allow the use of a single connection from browsers to a Web site.”
HTTP/2 is a binary protocol that improves the performance of web-based sessions by keeping connections open, allowing for multiple exchanges per connection, and offering compression (HPACK) of a rather bloated ASCII based protocol (HTTP/1.1). It’s a great step forward for the web in general. Following the standardization and initial implementations of HTTP/2, the Yahoo Pentest Team began bug hunting in hopes of finding security vulnerabilities before they were widely deployed. This resulted in the development of an internal HTTP/2 fuzzer. Stuart Larsen wrote the first one in Go over the course of a few days and it immediately resulted in some great bugs.
To understand the fuzzer we built, you have to know a little bit about the protocol. HTTP/2 is very similar to HTTP/1.1 at its core. It still uses verb methods (GET/PUT/POST etc), and has the same HTTP headers we are used to (Content-Type, Origin, Referer). One of the key differences between these two protocols is how the requests are sent. Multiple binary requests are made over a single TCP session and they are often multiplexed.
Within a single TCP connection, messages called “frames” are exchanged between the client and the server. Frames manage things like headers, requesting data, setting priorities, and terminating connections. In total there are 10 different frame types: DATA, HEADERS, PRIORITY, RST_STREAM, SETTINGS, PUSH_PROMISE, PING, GOAWAY, WINDOW_UPDATE, CONTINUATION
The initial fuzzer design split the work into 12 different strategies: one for each of the 10 frame types, one for raw frames, and one for completely random data. Each fuzzing strategy manages a single TCP connection and fuzzes that particular frame type. The fuzzer then monitors the connection and restarts it as soon as the connection is dropped. On the server side we attach a debugger to the process and monitor for unexpected behavior such as segmentation faults.
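Our internal fuzzer was written in Go, so the following is only a hypothetical Python sketch of what one per-frame strategy involves: packing the HTTP/2 frame header (24-bit length, 8-bit type, 8-bit flags, 31-bit stream id) and filling a SETTINGS frame with random identifier/value pairs. The function names are ours, not from the actual fuzzer.

```python
import os
import struct

def build_frame(frame_type, flags, stream_id, payload):
    """Pack an HTTP/2 frame: 24-bit length, type, flags, 31-bit stream id."""
    header = struct.pack(">I", len(payload))[1:]    # keep low 3 bytes = 24-bit length
    header += struct.pack(">BBI", frame_type, flags, stream_id & 0x7FFFFFFF)
    return header + payload

def fuzz_settings_frame():
    """One strategy: a SETTINGS frame (type 0x4) with random id/value pairs.

    Each SETTINGS entry is a 16-bit identifier followed by a 32-bit value.
    """
    payload = b"".join(
        struct.pack(">HI",
                    int.from_bytes(os.urandom(2), "big"),
                    int.from_bytes(os.urandom(4), "big"))
        for _ in range(os.urandom(1)[0] % 8)
    )
    # SETTINGS frames apply to the whole connection, so stream id is 0.
    return build_frame(0x4, 0, 0, payload)
```

A real strategy loop would write these frames to a live TCP connection, watch for resets, and reconnect, as described above.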
Our first target for this fuzzer was Apache Traffic Server, a reverse caching proxy that Yahoo created and uses extensively. The idea behind targeting ATS was to discover security vulnerabilities before HTTP/2 was widely deployed. Below is our analysis of two vulnerabilities that the fuzzer discovered. Both of these issues have been patched upstream and are credited to Stuart Larsen. They have been assigned CVE-2015-3249. If you can't patch ATS right now, you can disable HTTP/2 support to mitigate these issues.
This vulnerability may allow for arbitrary code execution, but is highly dependent on the process memory layout. The issue is on line 637 of Http2ConnectionState.cc:
[637] case HTTP2_SESSION_EVENT_RECV: {
[638] Http2Frame *frame = (Http2Frame *)edata;
[639] Http2StreamId last_streamid = frame->header().streamid;
[640] Http2ErrorCode error;
[641]
[642] // Implementations MUST ignore and discard any frame that has a type that is unknown.
[643] ink_assert(frame->header().type < countof(frame_handlers));
[644] if (frame->header().type > countof(frame_handlers)) {
[645] return 0;
[646] }
[647]
[648] if (frame_handlers[frame->header().type]) {
[649] error = frame_handlers[frame->header().type](*this->ua_session, *this, *frame);
On line 644 an if statement is used to validate the value of type, which is provided by the untrusted HTTP/2 frame. The frame_handlers array holds 9 function pointers to various functions for handling HTTP/2 frames, but it is declared with a size of HTTP2_FRAME_TYPE_MAX (10), the last member of the Http2FrameType enum used to index it. Providing a type value of 10 satisfies the check on line 644 because the comparison uses > where it should use >=: countof(frame_handlers) correctly returns 10, so an index of 10 slips through and reads one element past the end of the array.
On line 649 frame_handlers is indexed with type and the value at that index in frame_handlers is treated as a function pointer and called. This array is static and thus located in global memory in the process. If an attacker can control the bytes just beyond this array in memory then this vulnerability can be used for arbitrary code execution.
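The off-by-one is easy to see in a language-neutral sketch (Python here; the names mirror the ATS code but the sketch itself is illustrative). Where C silently reads the memory one slot past the array and calls whatever it finds, Python raises an IndexError at the same point:

```python
HTTP2_FRAME_TYPE_MAX = 10

# 9 real handlers; the 10th declared slot holds no valid pointer (None),
# mirroring the zero-initialized tail of the static C array.
frame_handlers = [lambda frame: "handled"] * 9 + [None]

def dispatch(frame_type):
    # The buggy bounds check from line 644: should be >=, not >.
    if frame_type > len(frame_handlers):
        return None
    return frame_handlers[frame_type]   # frame_type == 10 indexes out of bounds

try:
    dispatch(HTTP2_FRAME_TYPE_MAX)      # 10 > 10 is False, so the read happens
except IndexError:
    print("frame_type 10 slipped past the check")
```

In the C++ code that out-of-bounds slot is interpreted as a function pointer and called, which is what makes the flaw potentially exploitable.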
This vulnerability may allow for arbitrary code execution via arbitrary read and write primitives. The set_dynamic_table_size function allows an HTTP/2 client to control the value of the _settings_dynamic_table_size member variable. There is no constraint on the minimum or maximum size of this value.
The default table size, _settings_dynamic_table_size, is 4096. If new_size (attacker controllable) is smaller than old_size then the while loop on line 217 is entered. This incorrectly checks _settings_dynamic_table_size against new_size, instead of the current_size member, which should be updated anytime the table size changes.
On line 219 the last member function of the _headers vector is called.
C &
last() const
{
return v[n - 1];
}
If the vector contains no elements (this is the default as the Vec constructor initializes its member variables to 0) then n is 0 and the call to last returns a reference to an object that is not a member of the vector. In this particular case it returns a pointer to memory that is interpreted as a MIMEField object. Several values are retrieved from this object with the non-virtual calls to name_get and value_get. On line 225 a call to remove_index is made which results in an out-of-bounds write via a call to memmove.
Vec<C, A, S>::remove_index(int index)
{
if (n > 1)
memmove(&v[index], &v[index + 1], (n - 1 - index) * sizeof(v[0]));
n--;
if (n <= 0)
v = e;
}
After this function completes the field_delete function is called with a pointer to the fake object which results in a number of different exploitable write primitives.
The Yahoo Paranoids are working hard to increase the security of critical Internet infrastructure that we all rely on. For a minimal investment in time and effort we were able to discover multiple vulnerabilities. This sort of payoff is the ultimate goal of fuzzing - minimal effort for maximum gain. We will continue running and tweaking our fuzzer and hope to uncover more bugs in other implementations soon. We hope you enjoyed this post!
I had the pleasure of giving the keynote at BSides NOLA this year on the topic of ‘Offense at Scale’. This was a Digital Forensics/Incident Response focused conference with a lot of smart defenders in the audience. I wanted to deliver a talk to this crowd that was forward-looking and decided on something that I have been thinking about at Yahoo a lot lately: Offense At Scale.
Before coming to Yahoo I would hear people say scale or cloud and just roll my eyes because of how meaningless those words had become. However, working at Yahoo has shown me what scale really means. I have seen firsthand how such an enormous operation affects both attacker and defender roles.
In this talk I explored how scale has a way of magnifying even the smallest security issues, and how we often rely on the “1 in a million defense” which probably won’t work for too much longer. Right now attackers enjoy an asymmetry that works in their favor. But defenders have a chance to turn that around with automation at scale using technologies such as Docker. However, that will require doing things differently and retooling a lot of things we have come to rely on.
My team and I are very focused on how scale and size affect our ability to successfully find and exploit vulnerabilities on our systems. For example: how can a real attacker compromise a Docker container, escalate privileges, and then exfiltrate data faster than log analysis can catch him? This is just one of the many things we are thinking about as we try to innovate offensive tools and techniques that work at scale.
The general takeaway from the talk is that operating at scale, for both attackers and defenders, will require a lot of automation, and we (offense) simply aren't there yet. The low-and-slow approach is on the way out. Whoever masters automation and speed first will own the next generation of computing.
If you’re interested you can check out my slides here.
In this article I will talk about web spidering, why it is useful when pentesting a rich web application, and different spider techniques the Yahoo! pentest team has put to use.
Often one of the most useful things you can do at the start of a pentest is enumerate all of the available attack surface of an application. An application's attack surface stretches beyond its intended use, so as a pentester you look for entry points a developer may not have considered, or has forgotten about.
One of the fastest ways to discover content on web applications is through spidering. A spider is a tool that crawls a website looking for all the available content. There are a few different ways to discover content:
The most common technique for spidering is the use of page elements as seeds for further exploration. Here we parse the HTML and look for any element that has a link we have not yet seen. A good list of elements to look for might be: a[href], link[href], img[src], script[src], iframe[src], and form[action].
The form[action] requires a little extra attention. You want to make sure you also grab all input[name,value] pairs, and use the correct form[method]. Forms normally use POST as their method, but it can also be a GET.
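The element-seeding step above can be sketched with Python's standard html.parser; the element list and class names below are illustrative, not taken from any particular spider. Forms are captured with their method and input names, as just described.

```python
from html.parser import HTMLParser

# Attributes that commonly carry URLs, per element (illustrative list).
LINK_ATTRS = {"a": "href", "link": "href", "img": "src",
              "script": "src", "iframe": "src"}

class LinkExtractor(HTMLParser):
    """Collects candidate URLs and form descriptions from one page."""

    def __init__(self):
        super().__init__()
        self.links = set()
        self.forms = []          # (action, method, [input names])
        self._form = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        wanted = LINK_ATTRS.get(tag)
        if wanted and attrs.get(wanted):
            self.links.add(attrs[wanted])
        elif tag == "form":
            # Default to GET, per the HTML spec, when no method is given.
            self._form = (attrs.get("action", ""),
                          attrs.get("method", "GET").upper(), [])
        elif tag == "input" and self._form is not None:
            self._form[2].append(attrs.get("name"))

    def handle_endtag(self, tag):
        if tag == "form" and self._form is not None:
            self.forms.append(self._form)
            self._form = None
```

Every URL collected this way becomes a new seed; the spider fetches it, runs the extractor again, and repeats until no unseen links remain.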
Dirbuster
The name of this method comes from a popular tool that brute forces web directories looking for valid URLs. Many web servers keep similar files in their webroots, so valid pages can sometimes be guessed.
For example, a lot of Apache httpd instances have /cgi-bin/, or info.php scripts. Information panel websites commonly have a config.html, or an /admin directory.
The Dirbuster technique uses a list of common directories and files, and checks to see if any of them actually exist. It’s crude, but it works well.
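A minimal sketch of the technique follows; the wordlist and helper names are our own illustrative choices, and the fetch function is injected so the probing logic itself stays testable without a network.

```python
import urllib.error
import urllib.request

# A tiny illustrative wordlist; real lists run to tens of thousands of entries.
COMMON_PATHS = ["/cgi-bin/", "/admin/", "/info.php", "/config.html", "/robots.txt"]

def http_status(url):
    """Hypothetical helper: fetch a URL and return its HTTP status code."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

def dirbust(base_url, fetch=http_status, paths=COMMON_PATHS):
    """Probe common paths; anything that is not a 404 deserves a closer look."""
    base = base_url.rstrip("/")
    hits = []
    for path in paths:
        status = fetch(base + path)
        if status != 404:
            hits.append((path, status))
    return hits
```

In practice you would also treat 401/403 responses as interesting, since they confirm the path exists even though it is access-controlled.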
HTTP Method
HTTP uses method verbs for specifying what type of action to take. The four most common verbs are:
POST - Create content / perform an action
GET - Return content
PUT - Update content
DELETE - Remove content
In older webstacks you’ll normally only see GET requests and POST on forms. But in newer webstacks, more of the HTTP verbs are used commonly in RESTful APIs.
There are approximately 30 different verbs, but most are not supported by the majority of webstacks.
We begin by checking for the following:
OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, CONNECT, FOOBAR
FOOBAR is not a valid HTTP method, but it can be useful to analyze the error the web server returns for it.
This strategy is also useful for finding open directory listings, which can be a gold mine of additional content to spider.
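The probing step above can be sketched as follows; `send` is a caller-supplied function (an assumption for illustration) that issues the request and returns the status code. Grouping verbs by status makes the differences between 200, 405, and 501 responses stand out at a glance.

```python
VERBS = ["OPTIONS", "GET", "HEAD", "POST", "PUT",
         "DELETE", "TRACE", "CONNECT", "FOOBAR"]

def probe_methods(send, path="/"):
    """Group verbs by the status code the server returns for them.

    Distinct groupings (e.g. 200 vs 405 vs 501) reveal which verbs the
    webstack actually supports, and the invalid FOOBAR verb shows what a
    generic error from this server looks like.
    """
    groups = {}
    for verb in VERBS:
        groups.setdefault(send(verb, path), []).append(verb)
    return groups
```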
Query Fuzz
We can also fuzz the query string. Query strings normally come in the form ?a=1&b=2&c=3 at the end of the URL. This technique relies on swapping out these values for other strings and submitting the request, for example trying ?a=2&b=2&c=3 or ?a=1&b=2&c=true.
It's also useful to submit query parameters such as admin=true or debug=1. Every once in a while you'll get a hit, and it's another goldmine of information (and possibly a privilege escalation).
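The value-swapping and extra-parameter tricks can be sketched with the standard urllib.parse helpers; the canary values below are purely illustrative.

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Canary values to substitute for each parameter (illustrative choices).
CANARIES = ["true", "1", "admin", "debug", "0"]

def fuzz_query(url):
    """Yield URL variants with each query value swapped for a canary,
    plus variants with extra parameters like admin=true appended."""
    parts = urlsplit(url)
    params = parse_qsl(parts.query)
    for i, (name, _) in enumerate(params):
        for canary in CANARIES:
            mutated = list(params)
            mutated[i] = (name, canary)
            yield urlunsplit(parts._replace(query=urlencode(mutated)))
    for extra in ("admin=true", "debug=1"):
        query = parts.query + "&" + extra if parts.query else extra
        yield urlunsplit(parts._replace(query=query))
```

Each variant is then requested, and responses that differ from the baseline (status, length, or content) are flagged for manual review.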
Cookie Fuzz
Web browsers maintain session information through HTTP cookies. Your browser hides this from you, but behind the scenes websites are telling your browser: "Every time you make a request to my website, send the header Cookie: username=Foobar&admin=False." This way the web application knows who you are.
Different users may be able to see different content on a website. If possible, it’s best to authenticate to a website with different user credentials, preferably with different levels of privilege, and spider with each set of cookies. Administrators will likely have access to additional content.
Websites sometimes store user settings within cookies. An example might be “displayHelpBar=false”. These can be pretty difficult to fuzz and enumerate, but if you can, they sometimes generate more unique content.
Similar to the query fuzz, we'll try to change the values of cookies until we get different responses.
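Cookie fuzzing follows the same pattern as the query fuzz; here is a hedged sketch (the canary values are illustrative) that yields mutated Cookie header values, one swap at a time.

```python
# Canary values to substitute for each cookie (illustrative choices).
CANARY_VALUES = ("true", "false", "1", "0", "admin")

def fuzz_cookies(cookies):
    """Yield Cookie header values with each cookie swapped for a canary.

    `cookies` maps names to their original values,
    e.g. {"displayHelpBar": "false"}.
    """
    for name in cookies:
        for canary in CANARY_VALUES:
            mutated = {**cookies, name: canary}
            yield "; ".join(f"{k}={v}" for k, v in mutated.items())
```

As with the query fuzz, each mutated header is replayed and any response that differs from the baseline is worth investigating.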
A lot of websites have a file called robots.txt in their webroot that tells web crawlers which directories not to index/scan. This is known as the robots exclusion standard.
Unfortunately, by specifying where not to scan, you’re telling bad guys exactly where potentially very juicy content is located.
Another related file is sitemap.xml. This is a website inclusion map. It tells spiders which content is available on the website. Simply pull the URLs out of each urlset['url']['loc'] value.
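Extracting those loc values with the standard library is a few lines; a short sketch:

```python
import xml.etree.ElementTree as ET

# Sitemaps live in this XML namespace per the sitemaps.org protocol.
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text):
    """Return every <loc> value found in a sitemap's <urlset>/<url> entries."""
    root = ET.fromstring(xml_text)
    return [el.text.strip() for el in root.iter()
            if el.tag in (NS + "loc", "loc") and el.text]
```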
Rich Internet Applications (Flash/Silverlight) have their own Same Origin Policy access models, with corresponding files to specify how they interact with other domains.
The two main files are /crossdomain.xml (flash) and /clientaccesspolicy.xml (Silverlight).
crossdomain.xml doesn’t help you find any new content, but it’s useful for analyzing trust boundaries with different websites (and is commonly misconfigured).
clientaccesspolicy.xml is not common, but it does allow the binding of resources at the path level, meaning you can sometimes discover new paths on the web server.
Sometimes a website may return content that contains a URL without actually linking to it. A simple regexp for these leftover values can be tremendously useful.
JavaScript often makes web requests to API endpoints that may not be discoverable from HTML elements. The preferred method for finding these is a headless browser (discussed later), but a simple regexp catches a lot of them and is simple to write.
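A deliberately loose pattern like the following catches many of these; the regex is illustrative and will both over- and under-match, which is acceptable for seeding a spider.

```python
import re

# Quoted absolute URLs, plus quoted root-relative paths (likely API endpoints).
URL_RE = re.compile(r"""["'](https?://[^"'\s]+|/[A-Za-z0-9_./-]+)["']""")

def scrape_urls(text):
    """Return the unique quoted URLs/paths found in a blob of HTML or JS."""
    return sorted(set(URL_RE.findall(text)))
```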
Public Cache (Search Engines / Wayback Machine)
Search engines and the Wayback Machine index and store as many web pages as they can. Sometimes a certain link will exist on a webpage, and then the link will be removed but the content behind the link will still exist. Using search engine keywords, you may be able to uncover the old content.
For example try Yahoo! searching:
site:yahoo-security.tumblr.com
Status
The Apache httpd webserver provides a module to display a /status page, which shows recent web requests made to the server. This page can be scraped for links.
Conclusion
The different strategies are most effective when combined. For example, running the Ascension method may return an open directory listing. Feeding the open directory listing into the Static Content strategy will return lots of new links, which the other strategies can then use for more content discovery.
These strategies will help you quickly enumerate the content of a web application.
Next Up: Headless Browser
A lot of modern websites are very front-end heavy. The work of generating pages and links is actually done in the browser, not on the server. This means your spider may not see much content, because the spider doesn't execute the page's JavaScript to build the pages.
We will talk about how to get around this problem by using headless browsers in the next blog post.
The Yahoo! Paranoids have the mission of protecting the privacy and security of a billion users. It's a tough but rewarding job and we wouldn't have it any other way. Like many of you, we frequently use open source security tools to get the job done. These tools are often built by the security community for the security community. We value these mutually beneficial relationships, and believe that shared contributions by all members of the community are important.
Today we are happy to announce that we devoted two weeks to audit the osquery project. osquery is a valuable system security tool that enables the collection of data on processes, network connections and more. All of this is made available through a convenient and familiar SQL interface. Our audit of osquery consisted mainly of manual source code analysis and some light fuzzing. We reported 10 security vulnerabilities to Facebook along with additional hardening recommendations. The issues we uncovered ranged from uncaught C++ exceptions to an arbitrary file read (with the potential for privilege escalation with local system access by chaining together two different vulnerabilities). We communicated our findings to the Facebook osquery team who quickly took action to mitigate them. As of May 6th 2015 all of the fixes have been committed to the osquery git repository.
Thanks to the Facebook osquery team and the security community as a whole for your ongoing support. Be on the lookout for more security research and tools from Yahoo in the near future. We are just getting started!
We’re about to kick off our Yahoo Trust Unconference at our offices in San Francisco!
Can’t make it here in person? No problem.
Tune in to the livestream here and tweet any questions or comments you have for our speakers @YahooInc.
Our CISO Alex Stamos (@alexstamos) will kick us off with opening remarks at 9:10 AM PT.
Today our participants will hear from industry experts and academics including Frank Chen (Andreessen Horowitz), Zooko Wilcox-O'Hearn, Elisabeth Morant, Trevor Perrin, Adam Langley and our very own Yan Zhu (@bcrypt) on how we can build products that are safe and trustworthy for every user. Our discussions will focus on cryptography, web standards for security, anonymity protocols, browser security, and product security.
The Yahoo Pentest Team discovered a NULL pointer dereference flaw (CVE-2015-1137) in the nVidia GeForce (nvAccelerator) kernel driver which ships with OS X Yosemite. This bug was discovered and verified on MacBook models using GeForce driver version "10.2.1 310.41.15f01".
The crash occurs when the affected service is opened via userclient type 1 and memory type 4. A CALL instruction at the end of the basic block executes an attacker controlled function pointer at an offset from NULL. The screenshot below shows a disassembly of the location where the flaw occurs.
It is possible for an attacker to exploit this vulnerability by mapping the NULL page which can result in code execution and privilege escalation. Using publicly available techniques a 32-bit exploit can be created that maps a page at NULL filled with user controllable data.
This issue was reported and coordinated in accordance with the Yahoo! vulnerability disclosure policy, details of which can be found here:
By Alex Stamos, Chief Information Security Officer
At Yahoo, we’re committed to creating a safe and secure platform our users can trust. That’s why I’m excited to share that we’ll be hosting the “Yahoo Trust UnConference” at our San Francisco office on Saturday, April 25, 2015 from 9:00 AM to 1:00…
At the end of 2014 we published our security vulnerability disclosure policy which outlined our approach to handling security issues we discover. Today we are happy to announce the first round of security vulnerabilities our team has discovered. The Yahoo Pentest team discovered multiple vulnerabilities in the Bro Intrusion Detection System. The root cause of these vulnerabilities is a lack of bounds checking in protocol parsing C++ code emitted by the binpac utility.
Bro offers a complex and featureful grammar, BinPAC, for describing protocol definitions. These grammar files are fed to the binpac utility at compile time which produces C++ code for parsing protocol traffic captured on the wire. Here is the description for BinPAC from the authors:
BinPAC is a high level language for describing protocol parsers and generates C++ code. It is currently maintained and distributed with the Bro Network Security Monitor distribution, however, the generated parsers may be used with other programs besides Bro.
These generated C++ functions are invoked via callbacks and the Bro scripting language. Bro uses PAC grammars to define protocol headers throughout most of its analyzers. Here is a simple example adapted from the BinPAC README:
// PAC type record. This is stored in myProtocol.pac
type myProtocol = record {
data:uint8;
};
// The binpac utility generates this C++ class to parse myProtocol
class myProtocol {
public:
myProtocol();
~myProtocol();
int Parse(const_byteptr const t_begin_of_data, const_byteptr const t_end_of_data);
uint8 data() const { return data_; }
protected:
uint8 data_;
};
// The resulting binpac utility generated Parse() function
int myProtocol::Parse(const_byteptr const t_begin_of_data, const_byteptr const t_end_of_data) {
// Assign data_ from beginning of protocol bytes
data_ = *((uint8 const *) ((t_begin_of_data)));
…
}
We discovered in some cases that BinPAC produces C++ code that lacks bounds checking related to pointers (t_begin_of_data, t_end_of_data) that track where in memory the protocol data resides. If the protocol is defined a certain way in the .pac grammar file then BinPAC may emit the vulnerable C++ code. A remote attacker can send protocol headers that trigger these vulnerabilities.
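The missing guard amounts to checking the bytes remaining between the two pointers before every field read. Here is a hypothetical Python sketch of the safe pattern; the names mirror the generated C++ (t_begin_of_data / t_end_of_data) but the code is illustrative, not binpac output.

```python
def parse_uint8(data, begin, end):
    """Bounds-checked version of the generated field read.

    begin/end mirror binpac's t_begin_of_data / t_end_of_data; without the
    guard, a header claiming more data than was received walks the read
    pointer out of bounds, which is the root cause of CVE-2014-9586.
    """
    if end - begin < 1:                     # the check the vulnerable code lacks
        raise ValueError("truncated header: need 1 byte")
    return data[begin]                      # data_ = *((uint8 const *) begin)
```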
The following is a list of BinPAC generated Parse functions we were able to determine are affected by this issue. We do not know if this list is complete at this time.
SMB::SMB_write_andx::Parse
Unified2::Packet::Parse
Unified2::ExtraData::Parse
SNMP::ASN1Encoding::Parse
BitTorrent::BitTorrent_Unknown::Parse
BitTorrent::BitTorrent_Bitfield::Parse
If you are unable to patch Bro IDS then disabling these protocol parsers may help to mitigate these issues.
We discovered these vulnerabilities in early December 2014 and immediately reported them to the Bro IDS developers. We are not aware of any active attacks involving these vulnerabilities and chose to delay publication until after the new year when we could be sure they would be patched by the majority of Bro users.
CVE-2014-9586 has been assigned to this vulnerability.
By Jay Rossiter, SVP of Platforms & Personalization Products
At Yahoo, we’re focused on having the absolute best talent in place to provide our users with outstanding product experiences. Part of that experience is the trust that consumers put in us to keep their personal data secure. This…
Choosing a strong password is just one part of protecting your Yahoo account. You should also follow these tips to keep it safe:
Your Yahoo ID and password are confidential information. A Yahoo employee will never ask you for your password in an unsolicited phone call or email. Do not respond to any message that asks for your password.
Do not write your password down. If you must write it down, keep it in a safe place that only you can access. Treat it as if it were cash.
Change your password if you suspect something is amiss. To change your Yahoo password, go to "How do I change my password?" and follow the instructions.
Verify your Yahoo account information. From time to time, make sure your information is accurate and that no one has changed your data. If you suspect someone knows the answer to your secret question and any other information asked on the Sign-In Problems page, contact the Yahoo account security team as soon as possible.
Use care with automatic sign-in. If you check "Keep me signed in" when you sign in to Yahoo, you’re still signed in even after you close your browser.
This feature can be a convenience for you: When you return to Yahoo, you don’t have to re-enter your password. (If you’re away from your computer for a while, you may be asked to re-enter your password.)
Do not check the "Keep me signed in" box if you use a shared computer.
To change the setting of this feature, click the Sign out link at the top of most Yahoo pages, and then sign in again, but do not check the "Keep me signed in" box.
Read the fine print. Before saving your password on any browser, plug-in, or program, thoroughly read the security documentation for that program or service. Depending on the program, your passwords may be available to anyone who uses that computer.