Some definitions of red team are broader, encompassing any group within an organization directed to think outside the box and examine alternative scenarios that are considered less plausible.
In technical red teaming, attack vectors are used to gain access, and reconnaissance is then performed to discover additional devices that may be compromised.
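As a rough sketch of that discovery step, the Python snippet below performs a simple TCP connect scan across a small internal subnet. The subnet and port list are placeholder assumptions, and a scan like this should only ever be run against systems the team is authorized to test.

```python
# Minimal post-access reconnaissance sketch: a TCP connect scan that looks
# for live hosts and common services on an internal subnet.
import socket
from ipaddress import ip_network

COMMON_PORTS = [22, 80, 135, 443, 445, 3389]  # SSH, HTTP, RPC, HTTPS, SMB, RDP

def scan(subnet: str, timeout: float = 0.5) -> dict[str, list[int]]:
    """Map each responsive host to the common ports found open on it."""
    results: dict[str, list[int]] = {}
    for host in ip_network(subnet).hosts():
        open_ports = []
        for port in COMMON_PORTS:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((str(host), port)) == 0:  # 0 means it connected
                    open_ports.append(port)
        if open_ports:
            results[str(host)] = open_ports
    return results

if __name__ == "__main__":
    print(scan("192.168.1.0/30"))  # hypothetical in-scope range
```

Each host that answers becomes a candidate for the next round of access attempts, which is why defenders watch for exactly this kind of connection pattern.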
Red teams are used in several fields, including cybersecurity, airport security, law enforcement, the military, and intelligence agencies.
One early example of red teaming involved the think tank RAND Corporation, which did simulations for the United States military during the Cold War.
[1] Red teams are sometimes associated with "contrarian thinking" and fighting groupthink, the tendency of groups to make and keep assumptions even in the face of evidence to the contrary.
Ipcha Mistabra was formed after the Yom Kippur War and given the duty of always presenting a contrarian, unexpected, or unorthodox analysis of foreign policy and intelligence reports, so that important possibilities would be less likely to be overlooked going forward.
In cybersecurity, a penetration test involves ethical hackers ("pen testers") attempting to break into a computer system, with no element of surprise.
Threats may range from something traditional, such as hacking the network's domain controller, to something less orthodox, such as setting up cryptocurrency mining or granting excessive employee access to personally identifiable information (PII), which opens the company up to General Data Protection Regulation (GDPR) fines.
[25] Credentials can be stolen from many locations, including files, source code repositories such as Git, computer memory, and tracing and logging software.
Techniques such as optical character recognition (OCR), exploiting default passwords, spoofing a credential prompt, and phishing can also be used.
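To make the file-and-repository case concrete, here is a hedged sketch of the kind of sweep a red team (or a defender) might run for leaked credentials. The regex rules are illustrative only; production tools such as gitleaks or truffleHog ship far richer rule sets.

```python
# Sketch of sweeping a directory tree for credential-like strings.
import re
from pathlib import Path

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic password": re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_tree(root: str) -> list[tuple[str, str]]:
    """Return (file, rule name) pairs for every suspected credential found."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), name))
    return hits

if __name__ == "__main__":
    for file, rule in scan_tree("."):
        print(f"{file}: possible {rule}")
```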
Red teams can take control of a browser using Internet Explorer's COM interface, Google Chrome's remote debugging feature, or the testing framework Selenium.
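Of those three routes, Selenium has the most stable public API, so a minimal sketch of it follows. The URL is a placeholder, and the sketch assumes the selenium package and a compatible Chrome installation.

```python
# Minimal sketch of controlling Chrome through Selenium. Once the driver owns
# the browser, it can navigate, read page state, and harvest session data.
from selenium import webdriver

driver = webdriver.Chrome()           # launches a Chrome instance it controls
driver.get("https://example.com")     # navigate on the session's behalf
print(driver.title)                   # read page state
print(driver.get_cookies())           # session cookies are visible to the driver
driver.quit()
```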
[29] One tactic is to engage in "active defense", which involves setting up decoys and honeypots to help track the location of intruders.
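A minimal decoy in this spirit might look like the following sketch: a listener on a port that no legitimate service uses, where any connection at all is worth logging. The port and log format are assumptions; dedicated honeypot projects such as OpenCanary do considerably more.

```python
# Decoy listener sketch: accepts connections on an otherwise-unused port and
# logs who knocked. Any contact with it is inherently suspicious.
import logging
import socket

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def run(port: int = 2222) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, (addr, src_port) = srv.accept()
            logging.info("connection from %s:%d", addr, src_port)
            conn.close()  # no banner, no service: just record the contact

if __name__ == "__main__":
    run()
```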
[32] The use of rules of engagement can help to delineate which systems are off-limits, prevent security incidents, and ensure that employee privacy is respected.
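One way to operationalize rules of engagement is to encode the agreed scope as data that tooling checks before touching a target, as in this sketch; the specific ranges and exclusions shown are hypothetical.

```python
# Scope checker sketch: tooling calls allowed() before sending any traffic,
# so out-of-scope and off-limits systems are refused up front.
from ipaddress import ip_address, ip_network

IN_SCOPE = [ip_network("10.0.5.0/24")]   # agreed test ranges
OFF_LIMITS = [ip_address("10.0.5.10")]   # e.g. a production database host

def allowed(target: str) -> bool:
    """True only if the target is inside the agreed scope and not excluded."""
    addr = ip_address(target)
    if addr in OFF_LIMITS:
        return False
    return any(addr in net for net in IN_SCOPE)

assert allowed("10.0.5.20")
assert not allowed("10.0.5.10")    # explicitly excluded
assert not allowed("192.168.1.1")  # never in scope
```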
[33] The use of a standard operating procedure (SOP) can ensure that the proper people are notified and involved in planning, and improve the red team process, making it mature and repeatable.
[35] Tracking certain metrics or key performance indicators (KPIs) can help to make sure a red team is achieving the desired output.
These statistics can be graphed by day and placed on a dashboard displayed in the security operations center (SOC) to motivate the blue team to detect and close breaches.
[36] To identify the worst offenders, compromises can be graphed and grouped by where in the software they were discovered, company office location, job title, or department.
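A sketch of both groupings follows, using fabricated placeholder records; a real program would pull these from the team's finding tracker.

```python
# Sketch of turning red-team findings into the metrics described above:
# counts per day for the SOC dashboard, and counts per department to
# surface the worst offenders.
from collections import Counter

findings = [  # fabricated placeholder records
    {"date": "2024-03-01", "department": "Finance"},
    {"date": "2024-03-01", "department": "Engineering"},
    {"date": "2024-03-02", "department": "Finance"},
]

by_day = Counter(f["date"] for f in findings)
by_department = Counter(f["department"] for f in findings)

print(by_day.most_common())          # [('2024-03-01', 2), ('2024-03-02', 1)]
print(by_department.most_common(1))  # worst offender: [('Finance', 2)]
```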
[52] Before physical reconnaissance occurs, open-source intelligence (OSINT) gathering can occur by researching locations and staff members via the Internet, including the company's website, social media accounts, search engines, mapping websites, and job postings (which give hints about the technology and software the company uses).
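The job-posting angle in particular lends itself to automation. The sketch below fetches a public careers page and reports which technology keywords appear on it; the URL and keyword list are placeholders, and only publicly accessible pages are touched.

```python
# OSINT sketch: look for technology keywords in a public careers page, which
# can hint at the software stack the company runs.
import urllib.request

KEYWORDS = ["Active Directory", "AWS", "Kubernetes", "Salesforce", "VMware"]

def tech_hints(url: str) -> list[str]:
    """Return the keywords that appear in the page at url."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        page = resp.read().decode("utf-8", errors="ignore")
    return [kw for kw in KEYWORDS if kw.lower() in page.lower()]

if __name__ == "__main__":
    print(tech_hints("https://example.com/careers"))  # hypothetical target
```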
[58] Most physical red team operations occur at night, when the facility's security presence is reduced and darkness can conceal the team's activities.
[61] The use of military equipment such as MOLLE vests and small tactical bags can provide useful places to store tools, but has the downsides of being conspicuous and increasing encumbrance.
[87] It is good practice to radio situation reports (SITREPs) to the team leader when unusual things happen.
When confronted by law enforcement, red team operators should surrender immediately, given the potential legal and safety consequences.
[96] Alternative analysis involves bringing in fresh analysts to double-check the conclusions of another team, to challenge assumptions and make sure nothing was overlooked.
[97] After failing to anticipate the Yom Kippur War, the Israel Defense Forces' Intelligence Directorate formed a red team called Ipcha Mistabra ("on the contrary") to re-examine discarded assumptions and avoid complacency.
[100] The key theme is that the adversary (red team) leverages tactics, techniques, and equipment as appropriate to emulate the desired actor.
[101] Red teams were used in the United States Armed Forces much more frequently after a 2003 Defense Science Board report recommended them to help prevent the shortcomings that led to the September 11 attacks.
Most resident courses are conducted at Fort Leavenworth and target students from U.S. Army Command and General Staff College (CGSC) or equivalent intermediate and senior level schools.
In this document, Amos discussed how red teams need to challenge the planning and decision-making process by applying critical thinking from the tactical level to the strategic level.
[107] The United States Federal Aviation Administration (FAA) has been implementing red teams since the 1988 terrorist bombing of Pan Am Flight 103 over Lockerbie, Scotland.
[108] Before the September 11 attacks, FAA use of red teaming revealed severe weaknesses in security at Logan International Airport in Boston, where two of the four hijacked 9/11 flights originated.