Learning from another financial institution's mistakes – such as the Capital One breach of 2019 – is one of the best ways to make important improvements of your own.
Paige Thompson, 33, of Seattle, Washington, was arrested on July 29, 2019 for stealing more than 100 million credit card applications from Capital One Financial Corporation. Court filings allege that the hacker, a former Amazon employee, exploited a known vulnerability in a Capital One firewall. The court records also allege that she stole data from at least 30 other companies.
The open-source web application firewall (WAF) ModSecurity had been deployed to protect data stored on Amazon Web Services (AWS), the cloud service Capital One uses as its primary storage provider. The firewall was misconfigured – in this case, it had been granted too many permissions – which left it open to a type of attack known as Server-Side Request Forgery (SSRF).
According to cybersecurity journalist Brian Krebs: “The misconfiguration of the WAF allowed the intruder to trick the firewall into relaying requests to a key back-end resource on the AWS platform. This resource, known as the ‘metadata’ service, is responsible for handing out temporary information to a cloud server, including current credentials sent from a security service to access any resource in the cloud to which that server has access. In the case of Capital One, the misconfigured WAF was assigned an excessive number of permissions. In other words, it could list all the files in any bucket of data and read the contents of each of those files.”
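To make the mechanics concrete, here is a minimal sketch – not Capital One's actual code – of the class of flaw involved. An SSRF-vulnerable relay fetches whatever URL it is handed, including the link-local address of the metadata service; a hardened one validates the target first. The check below is simplified (a production version would also resolve hostnames and re-validate the resulting addresses):

```python
import ipaddress
from urllib.parse import urlparse

# The EC2 instance metadata service lives at a fixed link-local address.
METADATA_HOST = "169.254.169.254"

def is_ssrf_safe(url: str) -> bool:
    """Reject URLs pointing at the metadata service or other
    link-local/private addresses before relaying a request."""
    host = urlparse(url).hostname or ""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Not a literal IP. A real check would resolve the hostname
        # and re-validate the resulting address (omitted here).
        return host != METADATA_HOST
    return not (addr.is_link_local or addr.is_private or addr.is_loopback)

# A vulnerable relay would fetch the URL unconditionally;
# a hardened one checks first:
print(is_ssrf_safe("https://example.com/page"))                      # True
print(is_ssrf_safe("http://169.254.169.254/latest/meta-data/iam/"))  # False
```

In the breach, the WAF effectively played the role of the unvalidated relay: a crafted request caused it to fetch credentials from the metadata endpoint on the attacker's behalf.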
To put it in layman’s terms, think of the AWS server as a library full of books containing Capital One customers’ credit card application data. The library is surrounded by a ring of heavily armed security guards. The security guards represent the firewall, and they are very effective at keeping intruders out; they don’t let anyone into the library except the librarian. The librarian’s job is to take down information (like Social Security numbers and dates of birth) from a line of applicants waiting outside the ring of security guards. Using his special key, the librarian brings the information into the library, where he puts the applications in locked file cabinets, then comes back outside through the ring of security guards to relay the results of the applications.
Now, in this metaphor, the security guards also have keys to get into the library. They need to be able to get inside to do security sweeps. But their keys aren’t supposed to open the file cabinets where the librarian keeps all of the applications. That’s where Capital One screwed up. The firewall, in Capital One’s case, had too many permissions – it was allowed to read and write buckets of data and relay requests for information. In other words, the security guards had keys to file cabinets that they did not need and weren’t supposed to have, which let them sneak into the library when the librarian wasn’t looking and relay information to the hacker.
At first glance, this breach might seem like Amazon’s fault – after all, the hacker was a former employee with inside knowledge of AWS. But Thompson simply exploited a well-known vulnerability of which Amazon has made its customers well aware. In a statement neatly sidestepping responsibility, Amazon’s CISO, Stephen Schmidt, commented: “Cloud customers, while supported by providers, have their own access management. Most of the security around the cloud is within the control of the customer.”
He’s right, and Amazon does a pretty decent job of helping customers understand their security responsibilities on AWS and how to avoid these types of attacks. One tool it recommends, Access Advisor, helps identify IAM roles that have more permissions than they need; using it could have helped prevent the SSRF attack on Capital One. Indeed, many companies, including several other large banks, and even FINRA, use AWS successfully as a cloud storage solution. None have experienced a breach on the same scale as Capital One’s.
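The underlying problem Access Advisor surfaces – a role granted rights it never needed – can also be caught by linting policy documents directly. The sketch below is illustrative only (it is not the Access Advisor API): a local check that flags Allow statements granting the kind of account-wide S3 read access that let the WAF role list and read every bucket.

```python
def flag_broad_s3_grants(policy: dict) -> list:
    """Return actions in any Allow statement that grant wildcard
    or account-wide S3 read access. Illustrative lint, not an AWS API."""
    risky = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        wildcard_resource = any(r in ("*", "arn:aws:s3:::*") for r in resources)
        for action in actions:
            if action in ("s3:*", "*") or (
                action in ("s3:GetObject", "s3:ListBucket", "s3:ListAllMyBuckets")
                and wildcard_resource
            ):
                risky.append(action)
    return risky

# A role like the WAF's reportedly could list and read any bucket:
waf_policy = {
    "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:ListAllMyBuckets", "s3:GetObject"],
         "Resource": "*"}
    ]
}
print(flag_broad_s3_grants(waf_policy))  # ['s3:ListAllMyBuckets', 's3:GetObject']
```

A least-privilege version of that policy would scope `Resource` to the specific buckets the firewall actually needs, at which point the lint reports nothing.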
So, while Amazon’s role in the incident cannot be disregarded, it seems clear that human error on the part of Capital One bears much of the blame. An August 2019 article in the Wall Street Journal reports that multiple employees had raised concerns about the failure to implement certain software services that could have detected and defended against the hacks: “Before a giant data breach at Capital One Financial Corp. employees raised concerns within the company about what they saw as high turnover in its cybersecurity unit and a failure to promptly install some software to help spot and defend against hacks, according to people familiar with the matter.”
Despite the recommendations of Capital One’s cybersecurity team, some of these protocols were not properly implemented – perhaps in part because of the rapid rate at which the bank moved to cloud storage, and almost certainly because of a shortage of trained personnel, a phenomenon born of alleged mismanagement by Capital One’s CISO, Michael Johnson.
Within one year of Johnson’s arrival at Capital One in 2017, one-third of the personnel in the cybersecurity unit had left the company. Johnson, a former CIO of the U.S. Department of Energy, “quickly clashed with employees who thought his style was unsuited to the private sector…. He berated employees and prioritized building what he called his own ‘front office’ that included administrators and employees who helped with internal public relations.”
Also complicating the situation is the bank’s “all in” cloud strategy – Capital One has reduced its physical data storage centers from eight in 2014 to a planned zero in 2020. Cloud storage, though more versatile, requires significantly different security protocols and security training than physical storage. Even if Capital One properly trained all of its employees in cloud security at a rate fast enough to keep up with cloud implementation, it seems as though it was not able to keep those employees – a lesson worth noting for all businesses.
Lessons and Conclusions
The takeaways from this cybersecurity calamity read like a checklist for how NOT to conduct cybersecurity safeguards and training:
- Ignoring the recommendations of IT personnel;
- Failing to ensure the security of third-party vendors and software;
- Allowing a toxic, segmented culture where cybersecurity takes a backseat to public relations.
But still, there are many reasons why Capital One has long enjoyed a reputation as a cybersecurity-focused leader in banking. Its cofounder and CEO, Richard Fairbank, had already worked with the bank’s cybersecurity team to develop a business continuity and disaster recovery plan based on the missteps and failures of other financial institutions. Because of these preparations, and the involvement of the C-suite, the bank is well on its way to recovery, a tiered process which ACDG’s Carlos Solari explains here.
In addition to implementing rigorous cloud security training programs for its employees, Capital One has hired William Bengston, a former security engineer for Netflix, as its new Director of Cloud Security. Bengston, who has special experience in cloud security gleaned from his time at the streaming giant, recently authored a series of Medium posts on Netflix’s approach to detecting and preventing credential compromise in AWS.
Despite being a leader in cybersecurity, Capital One still got hacked. It’s a valuable lesson in the importance of compliance and safeguard training for the whole organization – you can have the best cybersecurity software in the world, but it won’t work unless you know how to use it. Training may not help Capital One recover the $100–$500 million the bank will potentially have to pay in fines, but it will cost a lot less – and prevent a gross oversight like this from happening in the future.