Phishing the Phishers: Using Attackers’ Own Tools to Combat APT-style Attacks

As a deceptions researcher, part of my job is to design deceptions against attackers by manipulating or reverse-engineering the common toolkits attackers use. Deceptions are pieces of false information that are planted across the organization and appear as real, relevant information to the attacker. For example, browser deceptions — pieces of information specifically planted in browser history, saved forms, etc. — can be created to lure malicious hackers and insiders to deceptive web servers. In this article, we will show how phishing can be used to catch attackers and how phishing kits can be used for defensive purposes.
Phishing kits are tools built by skilled developers that enable attackers who are not highly skilled to run their own phishing campaigns by creating a fully operational phishing site and luring innocent users to it. With an understanding of how phishing kits work, we created a tool that manipulates them to lure attackers using a browser deception that points to a deceptive website.


Phishing Kit Challenges

Like other utilities, phishing kits are designed to hide underlying complexity to simplify tasks for the common user. However, it is challenging for a phishing kit to create an authentic, believable phishing page that does not raise suspicion when a user is asked to submit sensitive information. The kits I researched help attackers set up phishing sites that imitate well-known sites such as Google, Facebook, and popular banking sites. These nefarious sites look credible because innocent users assume—whether true or not—that these organizations go to great lengths to make sure their users won’t be phished.

Creating Our Proof Environment

To demonstrate how to run a “reverse” phishing campaign on an attacker, we set up a deceptive MediaWiki environment to mimic one that might exist in a typical enterprise. Web software, such as MediaWiki and BitBucket, is often customized to build internal data-sharing sites.
Internal web servers are attractive to attackers because they typically contain sensitive information about the organization that the attacker can use to identify high-value targets, infect other users, and move laterally within the network. In our “proof” environment, the deceptive MediaWiki site is set up to redirect attackers away from the real MediaWiki asset. An alerting mechanism is also set up to notify us when an attacker tries to access the deceptive website.
To create a deceptive MediaWiki server that looks like one already in the network, we need two things:
  • A website scraper
  • A deceptive web server (a.k.a. “trap server”)

Here is where we turn the phishing kit against the attacker: we leverage its built-in web scraper. We modified the King Phisher web cloner script to suit our needs.
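For illustration, a minimal cloning sketch in Python is shown below. It is not the King Phisher cloner itself, and the MediaWiki URL is a placeholder, but it captures the idea: download a page and its same-origin assets so they can be re-hosted locally (assuming the requests and BeautifulSoup libraries are available).

# Minimal page-cloning sketch (not the King Phisher cloner itself):
# download a page and its same-origin assets so the clone renders offline.
import os
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def clone_page(url, out_dir="clone"):
    os.makedirs(out_dir, exist_ok=True)
    page = requests.get(url, timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")

    # Fetch assets referenced by common tags and point the clone at local copies.
    for tag, attr in (("img", "src"), ("script", "src"), ("link", "href")):
        for node in soup.find_all(tag):
            ref = node.get(attr)
            if not ref:
                continue
            asset_url = urljoin(url, ref)
            if urlparse(asset_url).netloc != urlparse(url).netloc:
                continue  # keep it simple: only mirror same-origin assets
            name = os.path.basename(urlparse(asset_url).path) or "index"
            data = requests.get(asset_url, timeout=10).content
            with open(os.path.join(out_dir, name), "wb") as f:
                f.write(data)
            node[attr] = name  # rewrite the reference to the local copy

    with open(os.path.join(out_dir, "index.html"), "w", encoding="utf-8") as f:
        f.write(str(soup))

if __name__ == "__main__":
    # Hypothetical internal MediaWiki URL, for illustration only.
    clone_page("http://mediawiki.example.internal/index.php/Main_Page")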
An alternative to cloning an existing site would be to set up a new website from scratch. Since MediaWiki is an open source project, we could have simply modified its code to meet our needs. With this approach, though, we would have faced a more complicated setup for each deceptive web server and lost the organization-specific customization that makes deceptive websites credible.
To create the trap server, we used Python’s SimpleHTTPServer as a platform. It includes a request handler module that receives and processes any inbound HTTP request. We set up the request handler to alert the organization by sending syslog messages when an attacker browses a deceptive website.
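A minimal trap-server sketch along these lines is shown below. It uses Python 3’s http.server (the successor to SimpleHTTPServer), and the syslog collector address and port are placeholders.

# Trap-server sketch: serve the cloned pages and raise a syslog alert on
# every request. The SIEM hostname and port below are illustrative.
import logging
import logging.handlers
from http.server import HTTPServer, SimpleHTTPRequestHandler

alert_log = logging.getLogger("trap-server")
alert_log.setLevel(logging.INFO)
alert_log.addHandler(
    logging.handlers.SysLogHandler(address=("siem.example.internal", 514))
)

class TrapHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        # Any hit on the deceptive site is suspicious by definition.
        alert_log.info(
            "deceptive site accessed: src=%s path=%s ua=%s",
            self.client_address[0], self.path,
            self.headers.get("User-Agent", "-"),
        )
        super().do_GET()  # serve the static clone from the working directory

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TrapHandler).serve_forever()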


Addressing the Challenges

To mimic the MediaWiki server already in the network, we first needed to clone its index page, but we encountered several problems.

Problem 1: References to the original (real) websites

If references to the original website are left in the scraped web pages, the attacker will quickly catch on and will easily navigate to the real site. In addition, if the attacker is in a network segment that is segregated from the original web server, references to the original website can create defects in the web page being mimicked.  
[Figure: scraped page markup with references still pointing to the original MediaWiki server]
We carefully modify every reference that points to the original website to make sure that it points to the local trap server.
[Figure: the same references after being rewritten to point to the local trap server]
As illustrated above, we rewrite the references that contain the name of the original MediaWiki server so that they point to a relative path instead of a full path.
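A simplified post-processing sketch for this step is shown below. The original hostname is a placeholder, and a real implementation would handle more reference formats.

# Post-processing sketch: rewrite absolute references to the original
# MediaWiki host into relative paths so the clone never leaks traffic
# back to the real server. The hostname below is a placeholder.
import re
from pathlib import Path

ORIGINAL_HOST = re.compile(r"https?://mediawiki\.example\.internal", re.IGNORECASE)

def relativize_references(clone_dir="clone"):
    for path in Path(clone_dir).rglob("*.html"):
        html = path.read_text(encoding="utf-8", errors="ignore")
        # Drop the scheme and host, leaving a path relative to the trap server.
        path.write_text(ORIGINAL_HOST.sub("", html), encoding="utf-8")

if __name__ == "__main__":
    relativize_references()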

Problem 2: Limited capability of static cloning

As illustrated below, static web scraping is limited in its ability to mimic dynamically generated web pages. In this example, we have multiple references that point to the same page, load.php. Each reference passes a different combination of parameters, and each combination produces a different page. Because we didn't have the source code for load.php, we had to find a way to bypass it.
[Figure: multiple references pointing to load.php with different parameter combinations]
Our solution was to save the result generated by each parameter combination as a separate resource on our local website. For every dynamically generated resource we encountered, we saved it as a static page under a unique name derived from its parameters. Post-processing was then used to replace the original references to the dynamically generated pages with references to our static pages. Here is the code after post-processing:
[Figure: the cloned markup after post-processing, with load.php references replaced by static pages]
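The sketch below illustrates the general approach; the helper names, base URL, and extension handling are simplified assumptions rather than our exact code. Each parameter combination is fetched once, saved under a unique static name, and the cloned HTML is rewritten to point at the snapshot.

# Sketch of handling load.php-style dynamic resources: fetch each parameter
# combination once, save the response as a static file with a unique name,
# and rewrite the clone to reference the snapshot.
import hashlib
from pathlib import Path
from urllib.parse import urljoin

import requests

def snapshot_dynamic(refs, base_url, clone_dir="clone"):
    """refs: iterable of relative references such as 'load.php?modules=...'."""
    out = Path(clone_dir)
    out.mkdir(exist_ok=True)
    mapping = {}
    for ref in refs:
        resp = requests.get(urljoin(base_url, ref), timeout=10)
        # Unique static name derived from the parameter combination.
        name = "load_" + hashlib.sha1(ref.encode()).hexdigest()[:12]
        # Pick a file extension from the served content type (simplified).
        ext = ".css" if "css" in resp.headers.get("Content-Type", "") else ".js"
        (out / (name + ext)).write_bytes(resp.content)
        mapping[ref] = name + ext
    return mapping

def rewrite_dynamic_refs(mapping, clone_dir="clone"):
    # Replace every dynamic reference in the cloned HTML with its static snapshot.
    for page in Path(clone_dir).rglob("*.html"):
        html = page.read_text(encoding="utf-8", errors="ignore")
        for old, new in mapping.items():
            html = html.replace(old, new)
        page.write_text(html, encoding="utf-8")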
Once we created a deceptive web page that met our needs, we set it up in a trap server. The trap server serves the deceptive pages when requested and alerts us when a page request is made.


Collecting Attackers’ TTPs

In the real world, an alert by itself has limited value. To provide defensive teams with actionable alert information, we configured our tools to gather forensic evidence from both the server and the client side. On the server side, we collect the attacker’s operating system and device information by analyzing the browser user-agent. In addition, we save the IP address from which the attacker is connecting, as well as any submitted form information.
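A sketch of this kind of server-side collection is shown below. It extends the request-handler idea from earlier; the logger setup and the redirect behavior are illustrative assumptions.

# Server-side evidence collection sketch: record the source IP, raw
# User-Agent, and any submitted form fields before responding.
# Attach a SysLogHandler (as in the trap-server sketch) to forward alerts.
import logging
from http.server import SimpleHTTPRequestHandler
from urllib.parse import parse_qs

evidence_log = logging.getLogger("trap-forensics")

class ForensicHandler(SimpleHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0) or 0)
        body = self.rfile.read(length).decode("utf-8", errors="replace")
        evidence_log.info(
            "deceptive form submitted: src=%s ua=%s fields=%s",
            self.client_address[0],
            self.headers.get("User-Agent", "-"),
            parse_qs(body),            # submitted form information
        )
        self.send_response(302)            # behave like a real form handler
        self.send_header("Location", "/")  # and redirect back to the index
        self.end_headers()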
To collect additional forensic evidence from the client side, we added special JavaScript code to the deceptive web page that was not part of the original page. Running our code in the attacker’s browser supports investigation of the incident and can provide additional clues about what the attacker is looking for and what tools he is using.
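As a rough illustration of how such a script can be injected during post-processing, the sketch below appends a small collection snippet before each page’s closing body tag. The JavaScript payload and the /telemetry endpoint are hypothetical, not the script we actually used.

# Sketch: inject a client-side collection snippet into every cloned page.
# The snippet and the /telemetry endpoint are illustrative placeholders.
from pathlib import Path

CLIENT_SNIPPET = """
<script>
  // Report basic browser context back to the trap server for investigation.
  fetch('/telemetry', {
    method: 'POST',
    body: JSON.stringify({
      ua: navigator.userAgent,
      languages: navigator.languages,
      screen: [screen.width, screen.height],
      referrer: document.referrer
    })
  });
</script>
"""

def inject_client_script(clone_dir="clone"):
    for page in Path(clone_dir).rglob("*.html"):
        html = page.read_text(encoding="utf-8", errors="ignore")
        # Place the snippet just before </body> so the page renders normally first.
        page.write_text(html.replace("</body>", CLIENT_SNIPPET + "</body>"),
                        encoding="utf-8")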

Conclusions

Through this phishing kit example, we showed how attackers’ tools can be put to blue-team use, catching attackers with web scrapers and deceptive web servers. In a “live” environment, these kinds of tailor-made phishing portals increase the odds that an attacker will reveal himself and force attackers to be more cautious, giving defenders an advantage that can tip the scales in their favor. Here is where you can find the standalone tool we created, which allows you to easily create a deceptive website customized to your own network.