Trial By Fire - Tiger Teams
Abstract
In both sports and warfare, allies often split into mock adversaries and oppose each other to expose their weaknesses. Hockey teams divide and scrimmage as a regular practice routine. Martial arts classes spar and train against one another. At the end of a session, they might notice the need for a new goalie or a quicker left uppercut. Or they might go home conceited, find justification to skip practice next week, and, worst of all, remain unprepared for the real enemy.
Our sense of security is inversely related to how talented we believe our adversaries to be. If our judgment errs in either direction, we are at great risk of being defeated. Tiger teams exist to provide an accurate assessment of an organization's security. This evaluation is typically done with good intentions, but the organization is often left with a big hole in its pocket, offended top-level managers, and a false sense of security.
Definition: What are Tiger Teams?
Tiger teams are also known as red teams, ethical hackers, penetration testers, and intrusion testers. They are people hired to demonstrate vulnerabilities in systems by exploiting those vulnerabilities [1]. Typically the team will also prepare a detailed report of its attack methods and recommend ways to secure what was exploited.
For example, in the movie Sneakers, Robert Redford and his team of penetration testers are hired by a bank's management to evaluate its security. Slapping the stolen $100,000 on the table after a successful mission, Redford says:
"Gentlemen - Your communications lines are vulnerable, fire exits need to be monitored, your rent-a-cops are a tad under trained, besides that everything seems to be just fine. You'll be getting our full report and analysis in a few days, but first, who's got my check?"
The bank clerk then asked: "So, people hire you to break into their places to make sure no one can break into their places?" Redford replied, "Yes, it's a living."
Identification: Who Are Tiger Teams?
Members of tiger teams can be graduates of computer science or any number of other fields. They can be employees of prestigious firms with several years of experience. Among the best-known teams are those at Computer Sciences Corporation (CSC), International Business Machines (IBM), and Sandia National Laboratories' Information Design Assurance Red Team (IDART). Ideal tiger team members have strong programming and networking skills, are adept with several operating systems, have a detailed knowledge of hardware and software, and are experienced with the most common and up-to-date exploit tools.
The threats originate here, and of course this is only the beginning. Tiger team members can also be self-employed, self-proclaimed experts who are really nothing more than arrogant amateurs. They can be ex-crackers worthy of little or no trust at all. Most deceptive of all, they can be an organization's rival under thick disguise. The nature of the job potentially exposes an organization's most valuable assets, making the position attractive to exactly these sorts of people.
Objective: Why Hire a Tiger Team?
It is widely known that the proper level of protection depends on what needs to be protected [2]. Many organizations appear to lack the resources to assess what level of host and network security is adequate [2]. Tiger teams are hired because these organizations want to take advantage of the Internet for electronic commerce, advertising, and information distribution and access, but they are afraid of being hacked by criminals [3].
Clients hope these (in most cases) independent computer security professionals will evaluate the security of their computer systems and suggest preventive mechanisms. As we shall see, in almost every case a decent tiger team can defeat the protective measures put in place, but this does not necessarily mean protection should be enhanced [4]. More often than not, the real reason tiger teams are hired is to demonstrate weaknesses to upper management in a believable scenario and scare them into allocating more funds to the IT department [4].
Process & Methodology: What, When, How, and How Much?
Tiger teams (attempt to) simulate a real intruder's attacks while taking care not to cause any damage. The team executes a range of intrusion tests using the same techniques known to be used by the most common crackers [2]. The general order of activity mimics that of traditional penetrations: information gathering, preparation and development, live network discovery, and attack and privilege gain [6]. Overall, the tiger team seeks to discover what intruders can see on the target systems, what intruders can do with this information, and whether anyone at the target notices these intrusion attempts or successes [3].
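To make that sequence concrete, the sketch below shows roughly what the first probe of the "live network discovery" phase looks like: a simple TCP connect scan that asks each port on a host whether anything is listening. This is a minimal illustration in Python, not any particular team's tooling; the target address and port list are placeholders, and such probes should only ever be aimed at systems one is explicitly authorized to test.

#!/usr/bin/env python3
# Minimal TCP connect scan, in the spirit of the "live network
# discovery" phase described above. For illustration only: real
# teams use far more capable tools, and scanning hosts without
# permission is illegal in most jurisdictions.

import socket

# Hypothetical target and port list; substitute a host you are
# explicitly authorized to test.
TARGET_HOST = "192.0.2.10"          # RFC 5737 documentation address
COMMON_PORTS = [21, 22, 23, 25, 53, 80, 110, 143, 443, 3389]

def scan(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an
            # exception, which keeps the loop simple.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    for port in scan(TARGET_HOST, COMMON_PORTS):
        print(f"{TARGET_HOST}:{port} is open")

Whether anyone at the target logs and reacts to connection attempts like these is precisely the "does anyone notice" question the methodology above sets out to answer.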
The most accurate evaluation would be a "no-holds-barred" approach, because this is most representative of the real adversary. Defenders and developers, however, will typically want to constrain tiger teams to very specific portions of a network [6]. Conversely, adversaries usually want to (and will) attack anything that stands between them and the target [6]. This is an immediate weakness that can lead to a false sense of security.
The timing of penetration tests can vary depending on the client organization and the team hired. Some clients prefer to avoid normal business hours, because disruptions during those hours could cause major chaos [6]. Furthermore, it is characteristic of intruders to attack in the late evening or early morning, so daytime testing might reduce the accuracy of the evaluation somewhat. On the other hand, during a daytime test, alerts from intrusion detection systems may be disabled or less carefully monitored, providing a way for real intrusions to blend in [6]. These factors should be weighed on an individual basis.
The contract amount for IBM's team ranges from $15,000 to $45,000 for a standalone ethical hack [2]. Other teams consist of three to twelve people, charge by the man-hour, and design projects lasting from a week to several months.
Critique: The Risks, Threats, and Reality of Tiger Teams
Adversary work factor is an informative metric for gauging the relative strengths and weaknesses of modern information systems [6]. A real problem with tiger teams is that they are rarely a good model of the adversary.
- Announced Penetration: Ready or Not, Here We Come. Sandia's IDART team, among others, explains to clients in advance exactly how and when it will attack [3]. In this way, system defenders have time to prepare specific, automatic, and even redundant defenses for their software, platforms, firewalls, and other system components [3]. The problem is that organizations end up building their defenses in preparation for an expected, simulated attack rather than being prepared all along. It is like cramming for a quiz announced five minutes ago instead of coming to class prepared. In either scenario, this is a bad habit that leads to bad performance.
An organization that crams every bit of defense in at the last minute also creates an abnormal testing environment, which is exactly what should be avoided if the results are to be indicative of a real adversary's work factor under normal conditions. Furthermore, security is something we do, not something we buy [8]. If an organization's best effort at defense yields acceptable results, it might "lower its guard," so to speak, and fail to confront future security risks as immediately and aggressively as it should. The false sense of security sets in.
- Internal Point-of-Presence. If tiger teams are to assess intranet security, their physical presence on site will likely be required. This is when the threats discussed earlier begin to haunt us: unfamiliar, potentially malicious people are exposed to nearly everything. Malicious team members can dumpster-dive, snoop desk areas, view on-screen displays, and certainly exercise social engineering. The chances of a tiger team engaging in this sort of activity are slim, but not so slim that the risk should be ignored or forgotten.
- Tiger Team Toolkit: Is There Something Missing? Tiger teams are at a great disadvantage when they abide by the general guideline of using only common and widespread exploit tools. Sandia's IDART, for example, uses only open-source, publicly available software for its attacks. The problem is that the most dangerous adversaries do not use these tools; they write their own. These are the people who find uncharted vulnerabilities and exploit them before the good guys even know about them. Therein lies the question: are tiger teams a representative model of the real adversary with such limited weapons? After all, hockey teams don't scrimmage with broomsticks and acorns; they use the same (type of) equipment as they do in competitive matches.
- Infinite Flaws: Keep Counting... No possible test can prove the absence of flaws, and thus no one can guarantee 100% security [8]. Sandia's IDART team either successfully invaded or devised successful mock attacks on 35 out of 35 information systems at various sites [7]. Their message is clear: "...competent outsiders can hack into almost all networked computers as presently configured no matter how well guarded" [7]. Defenders have to protect against every possible vulnerability, but an attacker only has to find one security flaw to compromise the whole system [8]. So let's face it: the odds will always favor the attacker. Moreover, even if no security flaws are found, this does not mean the organization is safe, only that the team could not get in [4].
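A toy calculation makes these odds vivid. Under the simplifying (and admittedly artificial) assumption that each of n system components independently has a small probability p of harboring an exploitable flaw, the attacker's chance of finding at least one grows as 1 - (1 - p)^n:

# Back-of-the-envelope illustration (an assumption for exposition,
# not a measured model): if each of n components independently has
# probability p of harboring an exploitable flaw, the chance that
# an attacker finds at least one is 1 - (1 - p)**n.

def p_at_least_one_flaw(n: int, p: float) -> float:
    """Probability that at least one of n components is flawed."""
    return 1 - (1 - p) ** n

for n in (10, 100, 1000):
    # Even a 1% per-component flaw rate overwhelms the defender
    # as the system grows.
    print(f"n={n:5d}: {p_at_least_one_flaw(n, 0.01):.3f}")
# n=   10: 0.096
# n=  100: 0.634
# n= 1000: 1.000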
This sounds like disappointing news, which indeed it is, but organizations need not waste thousands of dollars and take on additional risks for someone else to prove it to them; it is reality, and it has been demonstrated for decades. If top-level managers had a healthy respect for their dependency on information technology and its limitations, as they should, these tactics would not be necessary [4].
- Here Today, Gone Tomorrow. As stated before, security is dynamic. It is something that deserves (and requires) constant attention. Tiger teaming, on the other hand, is not dynamic: it shows only what an organization's vulnerabilities were at the moment the test was executed. Chances are that by the time a formal report is presented, new vulnerabilities have been discovered. The discovery of new vulnerabilities is an everyday occurrence, and security assessment should be just as frequent. Furthermore, on a longer-term scale, "...few tiger teams test network attacks that will probably dominate the high quality attack technology over the next 10 years" [4].
- Cracker Virtues: Patience & Persistence. Professional crackers are known to be very patient and persistent. Tiger teams normally establish deadlines before the project begins and are thus unable to exhibit these attributes. Once again, the adversary model is weakened.
- Legal Limitations: Another Handicap. Real adversaries are not restricted to staying within legal boundaries; tiger teams are. Not only must a tiger team secure permission from the hiring organization, but the organization must be absolutely sure it has the right to give that permission. Internet Service Providers may hold an organization responsible if, as a result of the tiger team's actions, other network users experience denial-of-service conditions or other problems. Tiger teaming is full of liability issues that can create a big mess.
Now consider another effect that legal limitations have on a tiger team's simulated attack scenario. Since when do intelligent crackers attack directly from their own machines? Okay, rhetorical question. Usually they route an attack through a series of machines in an attempt to disguise its true origin. For example, Clifford Stoll's cracker routed his attack from Hanover, Germany through satellite links, transatlantic cables, international gateways, and several states within the U.S. [9].
Due to liability issues, tiger teams cannot involve computer systems for which they do not explicitly have permission. This problem surfaces when the hiring organization has business partners or special client relationships that enjoy a higher level of trust than complete strangers. Assume insecure_business_partner dials into otherwise_secure_client_organization for a routine transaction. Then a cracker breaks into insecure_business_partner and routes an attack into otherwise_secure_client_organization via the trusted line.
Security is a chain, and it is only as strong as its weakest link [4]. For a tiger team's assessment to be complete it must stress every link; legal limitations, however, prevent this. By not evaluating these types of connections, the client organization can only assume that its business partners' security is equal to or lower than its own [10].
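The weakest-link argument can be restated as a simple graph problem. The toy model below uses a hypothetical topology, reusing the made-up names from the scenario above: trust relationships are directed edges, and a breadth-first walk shows everything an attacker who compromises one node can reach. A tiger team legally barred from touching the partner's systems never exercises the very edge the attacker uses.

# Toy model of transitive trust: trust links form a graph, and an
# attacker who owns any node can walk every edge reachable from it.
# The topology is hypothetical, for illustration only.

from collections import deque

TRUSTED_LINKS = {
    "internet": ["insecure_business_partner"],
    "insecure_business_partner": ["otherwise_secure_client_organization"],
    "otherwise_secure_client_organization": ["internal_file_server"],
    "internal_file_server": [],
}

def reachable(graph: dict[str, list[str]], start: str) -> set[str]:
    """Breadth-first walk: every node an attacker at `start` can reach."""
    seen, queue = {start}, deque([start])
    while queue:
        for neighbor in graph[queue.popleft()]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# An attacker on the open internet reaches the client organization
# through the weakest link, without ever touching its perimeter.
print(reachable(TRUSTED_LINKS, "internet"))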
- Unexpected Consequences & Accidents. Back to our example: even during practice, athletes risk getting injured. The same concept applies to security testing on computer networks. In an attempt to locate a vulnerability, an inexperienced tiger team might create several new ones. Additionally, once flaws are exposed, they may not be properly repaired, leaving residual vulnerabilities [1]. The potential effects of accidents are so vast that they cannot be distinguished from intentional acts by adversaries: denials-of-service, data corruption, information leakage, and the like. In some cases the test might even require bringing down a few systems or exposing passwords on the network. Now consider the possibility of a real adversary attacking simultaneously. After all, it would be the prime time to blend in.
- Not Exactly The People You Tell Secrets To. The fact that an organization would hire a so-called expert means the organization itself is not expert (in the security context, that is). How then might it validate what it is told? Would it hire another tiger team for a second opinion? Considering the costs, probably not. The problem is that tiger teams are not always honest and may lie about or distort their findings for a number of reasons. Perhaps the most embarrassing experience for a tiger team would be admitting its incompetence. To cover the shame, it might report weaknesses that do not exist. Or, worse yet, it might intentionally create a few so it can claim to have fixed something. This makes the team look good, and the organization will be proud; money well spent, right? If the outcome is attractive enough, the team may even get re-hired for a second round, just in case something was missed the first time. The motivation to be dishonest ranges from $10,000 to $45,000 or more.
Summary, Conclusions, and Further Work
The absolute values of adversary work factor measured in any tiger team experiment contain little valuable information [6], because different tiger teams exhibit very different behaviors depending on their preparation, training, talents, and motivations. Even with the most sophisticated members, the cooperative nature of tiger teams clashes with their ability to replicate hostile attacks. The threat is not the concept of tiger teaming itself, but the reasons organizations hire them, how they interpret the results, and the negative side effects. Organizations should overestimate their enemies, constantly prepare for the worst, and maintain a healthy respect for the limitations of information technology.
Bibliography
1. The all.net Security Database, Threat 14 - Tiger Teams. Available online at www.all.net.
2. Farmer, Dan and Wietse Venema. Improving the Security of Your Site By Breaking Into It. Available online at www.fish.com.
3. Palmer, C.C. Ethical Hacking. Available online at www.research.ibm.com.
4. Cohen, Frederick B. Protection and Security on the Information Superhighway. Available online at www.all.net.
5. IBM Business Strategy Consulting. Ethical Hacking. Available online at www.ibm.com.
6. Schudel, Greg and Bradley J. Wood. Adversary Work Factor as a Metric for Information Assurance. Available online at www.csl.sri.com.
7. Sandia National Laboratories News Release. "Sandia Red Team Hacks All Computer Defenses." Available online at www.sandia.gov.
8. Schneier, Bruce. Why Cryptography Is Harder Than It Looks. Available online at www.all.net.
9. Stoll, Clifford. The Cuckoo's Egg. New York: Pocket Books, 1989.
10. Computer Sciences Corporation. Vulnerability Assessments: An Ethical Hacker's Perspective. Available online at www.csc.com.