EnGenius routers found in Mirai-like botnet

EnGenius routers were recently found in a Mirai-like botnet with a distinct network traffic fingerprint. Locating this botnet subset was a joint effort between Dr. Neal Krawetz and me.

EnGenius logo

This Mirai-like botnet traffic was fingerprinted after Dr. Krawetz identified a distinct pattern in the packets received. While the source port was usually randomized, the TCP sequence number followed a predictable pattern. It wasn’t just any static value: it was the destination IP address of the bot’s target.

This behavior was previously noted in a LinkedIn post about IDS rules used to block Mirai scans. This is expected, per the snippet of Mirai source code shown below.

Snippet of Mirai source code

I found 85,100 unique IP addresses used by devices in the Mirai-like botnet since 2/18/2017. AS4134 (China Telecom) had the most unique IPs, with 10,972 seen.

IP addresses seen in Mirai-like botnet by ASN since 2017-02-18

This destination IP address was found encoded in each incoming packet’s TCP sequence number. The example log snippet below illustrates how it is extracted.

PROTO=TCP
SRC=194.132.237.47
DST=72.193.175.65
SPT=49795
DPT=23
SEQ=0x48c1af41

In this case the TCP sequence number, written as hex, is 0x48c1af41. When we convert this value from hex to an IP address, we get 72.193.175.65 – which is the destination (target) IP address.

Your logs may vary and instead record the sequence number in decimal format. In the example above, the decimal version of the SEQ is 1220652865, which converts to 72.193.175.65 just the same.
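
The conversion is trivial to script. Here’s a minimal Python sketch that turns a logged sequence number, hex or decimal, into a dotted-quad address:

def seq_to_ip(seq):
    # Accepts an int, a decimal string ("1220652865"), or a hex string ("0x48c1af41")
    value = int(seq, 0) if isinstance(seq, str) else seq
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(seq_to_ip("0x48c1af41"))  # 72.193.175.65
print(seq_to_ip(1220652865))    # 72.193.175.65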

The fingerprint is best illustrated when the target IP address changes as shown below:

TCP Sequence Number = Destination IP

Once the fingerprint of the botnet was established, I was able to review the IP addresses found in my logs for further patterns. After reviewing a handful of devices coming from IP addresses in the United States, I noticed a trend in the type of devices: each was an EnGenius ESR300 or ESR600 router.

EnGenius ESR300 router

Both router models are listed on the EnGenius website as a “Discontinued Product” and the latest firmware was released on 5/23/2016.

EnGenius Firmware Screenshot

By combining my logs with the botnet data from Dr. Krawetz, I independently confirmed 81 of the 130 EnGenius routers known to be participating in the botnet.

All incoming traffic from the EnGenius routers was on TCP ports 23 and 2323 (Telnet). The highest-volume attackers are shown below and the raw data is available here.

EnGenius Botnet - Top Attackers

The majority of the attacks occurred between 8/25/2017 and 8/29/2017, and the type of attack was a SYN flood. The first network traffic from an EnGenius router was observed on 6/15/2017. The raw data of all traffic I observed is available here.

Attacks from EnGenius routers came from all over the world; most, however, came from networks in the United States. AS11796 (Airstream Communications) and AS13370 (LocalTel Communications) tied for the most, with 12 unique IP addresses each in the EnGenius router botnet.

EnGenius Routers found by ASN with more than one unique IP address

The majority of EnGenius routers found had the same ports open to the internet:

TCP 80 (HTTP)
UDP 5060 (SIP)
TCP 8081 (HTTP)
TCP 9000 (HTTP)
TCP 10000 (HTTP)

So how easy is it for the average user to access the administrator interface of these routers? Not surprisingly, very easy. The router’s default credentials are quickly found in the user manual.

EnGenius default credentials

But what if you want to take a more “challenging” approach to locating the default creds? Look no further than the JavaScript file loaded when you visit the router’s login page:

Locating the EnGenius default credentials

This file describes all the functions of the router in addition to providing the default credentials:

“Please enter user name and password.”
“The default account is admin/admin.”

If you’re looking for an even more challenging method to gain access to an EnGenius router, a remote code execution exploit PoC was published by Zero Science Lab earlier this year, in which they stated:

EnGenius EnShare suffers from an unauthenticated command injection vulnerability. An attacker can inject and execute arbitrary code as the root user via the ‘path’ GET/POST parameter parsed by ‘usbinteract.cgi’ script.

I was able to confirm this method was viable for some, but not all, of the EnGenius routers found in the botnet. Since it’s so easy to gain root access to these routers, they present a clear avenue for any malicious party to add them to a botnet.

I contacted EnGenius with my findings and their customer service team replied that my case “has been escalated to the engineering team.” I haven’t received further communication from EnGenius and will update this post if I hear back.

In the meantime, Dr. Krawetz advises:

For network administrators who want to detect infected hosts from this new botnet: Look for SYN packets where tcp.seq==ip.dst.

If you see a match, then the ip.src denotes an infected address. Either the device at that address is infected, or something behind that NAT router is infected.
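
If you’d rather check a saved capture than write an IDS rule, below is a minimal sketch using scapy that applies the same test. It’s my own illustration, not Dr. Krawetz’s tooling:

from scapy.all import rdpcap, IP, TCP

def ip_to_int(addr):
    a, b, c, d = (int(octet) for octet in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

for pkt in rdpcap("capture.pcap"):  # replace with the path to your own capture
    if IP in pkt and TCP in pkt and pkt[TCP].flags & 0x02:  # SYN flag set
        if pkt[TCP].seq == ip_to_int(pkt[IP].dst):
            print(f"Likely infected host: {pkt[IP].src} -> {pkt[IP].dst}")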

Coinhive miner found on official Showtime Network websites in latest case of cryptojacking

Another case of “cryptojacking” was recently found on two official Showtime Network websites:

showtime.com

showtimeanytime.com

This was first reported by Twitter user @SkensNet on September 23 at 9:10 PM GMT. No statement from Showtime Networks or CBS Corporation has been given yet as to why the Coinhive cryptocurrency miner has appeared on their websites.

Showtime Anytime Logo

This could easily be found by viewing the source code of https://www.showtime.com/ and https://www.showtimeanytime.com/ in any browser:

Coinhive found on Showtime's website

Coinhive is a JavaScript library that can be embedded into any website. Once a user visits the website, they unwittingly start mining the cryptocurrency Monero. This can put a tremendous load on the CPU of anyone who visits a page with the Coinhive miner on it.
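
If you want to check whether a site you visit carries a similar embed, fetching the page and searching for the Coinhive script reference is enough. Here’s a minimal sketch; the marker strings are ones commonly seen in Coinhive snippets and are my assumption, not taken from Showtime’s source:

import urllib.request

MARKERS = ("coinhive.min.js", "CoinHive.Anonymous")  # strings commonly seen in Coinhive embeds

def has_coinhive(url):
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    return any(marker in html for marker in MARKERS)

print(has_coinhive("https://www.showtimeanytime.com/"))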

Catalin Cimpanu of BleepingComputer.com recently published an article describing the nefarious uses of Coinhive and how it’s rapidly becoming a favorite tool among malware developers:

In the few days that have passed after it launched, Coinhive has spread to almost all corners of the malware community.

First, we saw it embedded inside a popular Chrome extension named SafeBrowse, where the Coinhive code was added to run in Chrome’s background and mine Monero at all times the browser was running.

Then, we saw Coinhive embedded in typosquatted domains. Someone registered the twitter.com.com domain name and was loading the Coinhive JS library on the page. Users who mistyped the Twitter URL and ended up on the page would mine Monero for the site’s owner.

This would happen for only a few seconds until the user realized he was on the bad page, but that would be enough for the site’s owner to generate a profit. In time and with more of these domains in hand, the owner of all those mistyped site URLs would make a nice profit.

So how did this even end up on the official Showtime Networks websites?

High CPU usage detected!
60% CPU usage was observed while sitting idle on the https://www.showtimeanytime.com/ homepage as the Coinhive miner slowly chugged away.

The answer is not clear yet; however, Coinhive did recently appear on The Pirate Bay website and was quickly found by TorrentFreak. Other users of the site also noticed and took their outcry to the thepiratebay subreddit. The Coinhive miner was later removed by The Pirate Bay operators, who released the following statement:

As you may have noticed we are testing a Monero javascript miner. This is only a test. We really want to get rid of all the ads. But we also need enough money to keep the site running.

Since it’s not yet clear when the Coinhive miner was added to the Showtime Anytime website, I reviewed a cached copy saved by Google and the latest copy saved by Archive.org.

The copy saved by Google on September 21, 2017 21:31:06 GMT did not appear to have the Coinhive JavaScript code:

Google's cached copy of ShowtimeAnytime.com

Neither did the Archive.org copy from September 18, 2017 10:06:54 GMT:

Archive.org copy of ShowtimeAnytime.com

Malwarebytes users have noted that Coinhive is now detected and blocked. I verified this by visiting the Showtime Anytime website with active protection enabled.

Malwarebytes detects Coinhive

Malwarebytes blocked the outbound connection before the cryptocurrency mining could begin.

9/25 Update: Both official Showtime Network websites have the Coinhive miner: showtime.com and showtimeanytime.com.

9/26 Update: The Coinhive script was removed from Showtime Network websites around 4:45 PM GMT on 9/25. No statement yet from Showtime/CBS regarding this incident.

9/29 Update: Showtime/CBS continues to decline providing any statement regarding this incident.

This is a developing story, check back for updates.

Ongoing, large-scale SIP attack campaign coming from Online SAS (AS12876)

A month ago, I wrote a brief, half-humorous post about stopping a SIP attack. However, the unfunny truth is that I have collected enough evidence documenting an ongoing, large-scale SIP attack campaign coming from Online SAS (AS12876), more commonly known as “online.net.” They are also referred to as “Poney Telecom” and “Scaleway” elsewhere.

online.net

In the last few months, I’ve logged over 8,000 SIP attacks from IP addresses residing in AS12876’s network. The SIP attacks came from 401 unique IP addresses, documented here. An additional 6,000 non-SIP attacks were logged for a grand total of over 14,000, detailed here.

This led me to send countless abuse reports via Online.net’s Abuse Report Form. Their response was always a message relaying a “comment left by our customer” and stating the request was “now closed.”

Some common responses received were similar to this one:

Hello,
this seems that our server has an issue or it has been hacked, I am waiting for my account to be unblocked to check server or reinstall it

Other comments were received from what appeared to be resellers:

Hello Sir,
I am really sorry for this issue, I have forward this abuse email to my client and warning him that if he do not stop this, I will turn off server
So please accept my apologize Sir
Sincerely Yours,

Some appeared to come from the “Scaleway Team” directly:

No answer from customer, account has been suspended by the Scaleway Team.

Unfortunately, due to the sheer volume of attacks coming from 401 unique IP addresses, I couldn’t continue using their abuse form, which only allows reporting a single IP at a time.

Instead, I contacted Online.net’s abuse team directly and provided logs of the numerous attacks from hundreds of devices on their network. I did not hear back from them; communication only ever happened on a per-IP basis through their abuse form.

I decided to dig a little deeper into the attacks themselves. To do this, I completed a packet capture on a tiny sample of the incoming SIP attacks.

SIPVicious Attack

The capture showed the attacks being performed by a device running SIPVicious.

SIPVicious
SIPVicious logo created by Sandro Gauci of Enable Security.

So what is SIPVicious? Back in March 2014, Cisco issued a Security Activity Bulletin detailing SIPVicious and how it can be used:

SIPVicious is a Session Initiation Protocol (SIP) auditing tool that has been observed to be used in increasing reconnaissance attacks against IP and VoIP phones and PBX systems.

SIPVicious is used as an auditing tool for scanning phone systems by performing INVITE scans silently. However, attackers could use this feature to perform INVITE scans with a call command to determine weak passwords to connect to a particular phone host on the PBX telephony network. Access to such hosts could allow attackers to make free phone calls through a successful connection.

The tool could also be used to scan the IP or VoIP telephony network. Due to a flaw in the processing of SIP messages by the telephony device firmware, an attacker could use any number or any SIP address in the INVITE message to scan random networks to determine availability of live hosts. The attacker could initiate an INVITE session and determine a successful detection by receiving a phone ring as a response. This detection could allow the attacker to conduct further attacks such as host spoofing to make phone calls using the detected IP phone identity.

Threatpost reported SIPVicious attacks much earlier in 2011, stating that:

Though its name suggests otherwise, the Sipvicious program is a mainstream auditing tool for VoIP systems. The tool is intended to aid administrators in evaluating the security of their SIP-based servers and devices.

Rick Moy, the founder of NSS Labs, said the latest attacks seem designed to create a base from which attackers can make VoIP calls from the victim’s phone or VoIP infrastructure. Those calls might be used to rack up charges on premium rate numbers controlled by the attackers, or as part of voice phishing (vishing) scams that target unwitting consumers.

Moy said the attack shows that even “good tools’ can be used for malicious purposes.

Attacks on VoIP infrastructure are becoming more common and are often traced back to underlying vulnerabilities in VoIP infrastructure. To date, there have been some arrests. In December, authorities in Romania disrupted a criminal group that was accused of hacking VoIP servers and using them to place bogus calls to premium numbers.

SIPVicious can still be obtained from GitHub and the Kali Linux Git Repository. However it has not been updated by the original creator, Sandro Gauci, for almost five years.
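
If you want to check your own traffic for SIPVicious scans, one telltale sign (at least in default configurations) is the “friendly-scanner” User-Agent string it places in its SIP requests. Below is a minimal scapy sketch of my own, not part of SIPVicious or the capture shown above, that flags packets carrying that string:

from scapy.all import rdpcap, IP, UDP, Raw

for pkt in rdpcap("capture.pcap"):  # replace with the path to your own capture
    if IP in pkt and UDP in pkt and Raw in pkt and pkt[UDP].dport == 5060:
        if b"friendly-scanner" in pkt[Raw].load:
            print(f"Probable SIPVicious scan from {pkt[IP].src}")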

I compared the IP addresses that I logged SIP attacks from with the total number of AbuseIPDB reports, shown in the chart below. There were over 6,800 AbuseIPDB reports for those 401 unique IP addresses; however, there wasn’t much correlation with the 8,000 SIP attacks I logged, especially for the highest-volume offenders.

Due to this, I highlighted everything above the 95th percentile in red, above the 75th percentile in yellow, and everything below in green for each column.

Attacks Logged vs. AbuseIPDB Reports

Next, I charted unique IPs attacking the default SIP port (UDP/TCP 5060), grouped by ASN. I had far too much data otherwise, so I excluded SIP attacks on non-default ports.

Most SIP attacks came from AS12876

It’s clear that most of the SIP attacks I observed originate from AS12876’s network.

Do you see any SIP traffic from AS12876’s ranges in your logs? Are your VoIP servers properly secured?

Is your PRTG server leaking your SNMP community string? Mine was!

I recently had a lengthy discussion with Paessler’s Technical Support Team Manager regarding a leak of my SNMP community string. The conclusion reached was that this behavior is actually expected per PRTG’s default configuration. If you’re not familiar with PRTG, it’s an enterprise network monitoring application created by Paessler AG.

PRTG logo

I became aware of the leak after reviewing my firewall logs and finding unexpected incoming SNMP traffic from my Remote Probe. The traffic occurred every day at roughly the same time, 2:50 PM local time, with three packets sent each time.

Unusual SNMP traffic

I fired up my packet capture machine and re-routed the incoming SNMP traffic to it. Upon inspecting the traffic in Wireshark, I found each packet was an SNMP get-next-request containing my community string for all to see.

Wireshark screenshot
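
For anyone who wants to watch for the same kind of leak on their own network edge, here’s a minimal scapy sketch (my own, not a Paessler tool) that prints the community string carried by any SNMP packet it sees on UDP port 161:

from scapy.all import sniff, IP
from scapy.layers.snmp import SNMP

def report(pkt):
    if IP in pkt and SNMP in pkt:
        community = pkt[SNMP].community.val  # plaintext in SNMPv1/v2c
        print(f"{pkt[IP].src} -> {pkt[IP].dst}: community={community!r}")

sniff(filter="udp port 161", prn=report, store=False)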

One might stop at this point and ask: why am I not using SNMPv3 instead of SNMPv2c? This was a calculated choice, given that my SNMP traffic only flows on a segmented portion of my LAN and should never traverse the internet.

At this point I contacted Paessler’s Security Team to share my findings. Unfortunately, I didn’t make much headway and was soon escalated to the Technical Support Team Manager after I sent a follow-up to Paessler’s CEO, Dirk Paessler.

After much discussion back and forth, it was finally discovered that my off-site Remote Probe was sending the SNMP traffic because it inherited the default “Advanced Network Analysis — System Information” settings from my Local Probe (Core Server).

I was a bit dismayed at this fact, since I had diligently turned off the other default settings for “Unusual Detection” and “Similar Sensors Detection” when I configured my PRTG installation.

However the horror didn’t stop there. I found the “System Information” feature was enabled by default for all my devices, due to the permission inheritance. While this may be a useful feature in some cases, I found my SNMP community string had been broadcast daily to every device I monitored. This included external websites, public DNS servers, and other devices outside my LAN.

So how can this be prevented? I recommend always turning off Unusual Detection, Similar Sensors Detection, and now System Information as well when configuring PRTG. These settings are found under the Advanced Network Analysis section and can be configured at the “Root” level, as shown below.

PRTG configuration

If any of these features are desired, they can be enabled at the group and/or individual device level.

Per my recommendations, Paessler has updated their documentation regarding the System Information feature, found here. The following note is now included:

Note: The feature System Information is enabled by default. To retrieve the data, PRTG will automatically use Credentials for Windows Systems and Credentials for SNMP Devices as defined in the Device Settings or as inherited from a parent object like the Root group. Please consider this when you monitor devices outside the local network, especially when using SNMP v1 or v2c that do not provide encryption.

Is your PRTG installation leaking?

RIPE NCC releases new policy proposal for abuse contact validation

Today, RIPE NCC released a policy proposal update to ripe-563, better known as Abuse Contact Management in the RIPE Database. According to Marco Schmidt, Policy Development Officer at RIPE NCC, “The goal of this proposal is to give the RIPE NCC a mandate to regularly validate ‘abuse-c:’ information and to follow up in cases where contact information is found to be invalid.”

RIPE NCC

So what are the exact changes being proposed?

The current Abuse Contact Information policy states:

The role objects used for abuse contact information will be required to contain a single “abuse-mailbox:” attribute which is intended for receiving automatic and manual reports about abusive behavior originating in the resource holders’ networks.

The “abuse-mailbox:” attribute must be available in an unrestricted way via whois, APIs and future techniques.

The proposed Abuse Contact Information policy states:

The role objects used for abuse contact information will be required to contain a single “abuse-mailbox:” attribute which is intended for receiving automatic and manual reports about abusive behaviour originating in the resource holders’ networks.

The “abuse-mailbox:” attribute must be available in an unrestricted way via whois, APIs and future techniques.

The RIPE NCC will validate the “abuse-mailbox:” attribute at least annually. If no valid reply is received by RIPE NCC within two weeks (including if the email bounces back), the “abuse-mailbox:” contact attribute will be marked as invalid.

In cases where the “abuse-mailbox:” contact attribute is invalid, the RIPE NCC will follow up with the resource holder and attempt to correct the issue.

I found one note in the “Rationale” section under “Arguments opposing the proposal” particularly interesting:

If organisations are not cooperative, the RIPE NCC ultimately has the possibility to close their RIPE NCC membership and deregister their Internet number resources.

I’m curious as to why this wasn’t the number one reason listed under “Arguments supporting the proposal” instead. If that were the case, would we still see blatant disregard for abuse complaints, or claims of “fake abuse,” from network operators in the RIPE NCC Service Region?

Other arguments supporting the proposal currently include:

  • Accurate and validated information in the RIPE Database is essential to establish a trusted and transparent environment in which all network operators can operate safely.
  • The lack of reliable accurate and validated information in the database negatively impacts legitimate uses of the RIPE Database, including:
  • Validating “abuse-c:” information is essential to ensure the efficiency of the abuse reporting system.

This proposal will now go through a four-week “Discussion Phase” allowing RIPE NCC community members to provide feedback. Once this phase is completed, the proposer, with the agreement of the RIPE Working Group Chairs, decides how to proceed with the proposal.

Amcrest releases firmware update to correct constant cloud server connection issue

Back in May, I published a report on the latest firmware update from Amcrest, which resulted in a constant connection to cloud servers even for non-cloud customers. I have verified that the newest firmware from Amcrest corrects the issue.

Amcrest

After speaking with Alen from Amcrest Cloud Support, I confirmed the expected behavior: the cameras stop attempting to connect to the cloud servers two hours after powering up.

I updated my Amcrest IP2M-841 and IP3M-943 cameras to the latest versions, V2.520.AC00.18.R and V2.400.AC01.15.R respectively. I patiently waited two hours and confirmed the connection to the cloud servers ceased.

The only connection I observed the cameras making to the internet was to 54.84.228.44.

dh.amcrestsecurity.com

This was noted in my previous post as dh.amcrestsecurity.com where the camera reads the file “readbinfile.html” as some sort of firmware check.

While it took two months to correct this issue, I still commend Amcrest for taking the matter seriously and updating their firmware.

How to stop a SIP attack with a wordsmith gotcha

Over the last six months, I’ve noticed almost 6,000 Session Initiation Protocol (SIP) attacks coming from the Online SAS (AS12876) network. These attacks typically came in on the default SIP port, UDP 5060.

online.net

While the attacks poured in, I frequently used the Abuse Report Form for Online SAS, which was very easy to use. After confirming my abuse requests, I would wait to receive a follow-up from Online SAS or their customer directly. Typically within 24–48 hours I’d receive a response and confirm the attacks had stopped. However, in one case the attacks didn’t stop and instead continued for twelve straight days.

On August 7, I reported IP address 163.172.216.251 and received the following update from Online SAS on August 9:

Dear Sir or Madam,

Your abuse number 183740 is now closed.

Here is a comment left by our customer:
—————————————————————-

sent the complaint to this client for checking about this issue and resolving

—————————————————————-

This was not resolved, so I sent another follow-up requesting corrective action. On August 10, the following message was received:

Dear Sir or Madam,

Your abuse number 183966 is now closed.

Here is a comment left by our customer:
—————————————————————-

sent the complaint to this client for checking about this issue and resolving

—————————————————————-

Yet again the attacks did not cease, so I sent another abuse request on August 14. The next day I received the following:

Dear Sir or Madam,

Your abuse number 184423 is now closed.

Here is a comment left by our customer:
—————————————————————-

sent the complaint to this client for checking about this issue and resolving

—————————————————————-

Sadly, the attacks persisted with fervent vigor, so I decided a new approach was needed. On August 19, I sent in a new abuse request for 163.172.216.251 stating, “If you are a cybercriminal, please respond ‘sent the complaint to this client for checking about this issue and resolving’ to this message.”

The very same day, I received the following update:

Dear Sir or Madam,

Your abuse number 184747 is now closed.

Here is a comment left by our customer:
—————————————————————-

service is suspended for set on rescue mode

—————————————————————-

After this, I confirmed no further SIP attacks from 163.172.216.251 were seen!

Large-scale, ongoing RDP attack campaign that Global Layer B.V. (AS49453) decries as “fake abuse”

A little over a month ago, I reported to their abuse team an ongoing RDP attack campaign coming from Global Layer B.V. (AS49453) and their lone upstream peer, Regionalnaya Kompaniya Svyazi Ltd. (AS57028).

On July 11, an unnamed Global Layer Abuse Desk representative responded:

Our customer has already been informed to take action in this matter.

However, the RDP attacks from their network continued and I followed up again on August 6, receiving the following response:

The IP in question has been blocked and customer informed.

Unfortunately this was not the case and the attacks continued to pour in, so I sent daily updates requesting comment on why action had not been taken. I even offered to help them update their firewalls to nullroute the offending customer.

Global Layer

On August 10, I received a follow-up from the Global Layer Abuse Desk:

I suggest you stop sending us fake abuses. We first of all blocked the IP in question and that vps was terminated days ago.
It’s not possible you are getting any more complaints from our network. So check again.

So I checked again and found a massive, ongoing RDP attack campaign coming from their network. I noted the prefixes announced by AS49453 and their direct associate AS57028 and reviewed my firewall logs accordingly. I was astonished to find 2,940 RDP attacks, as of this writing.

IP address    RDP attacks logged
91.230.47.37 1123
91.195.103.102 368
91.195.103.85 308
91.195.103.84 134
91.230.47.41 131
91.195.103.164 128
91.230.47.44 118
91.230.47.39 101
91.230.47.10 95
91.195.103.157 69
91.195.103.101 59
91.195.103.250 53
91.195.103.86 40
91.195.103.171 33
91.195.103.149 26
91.230.47.4 18
91.195.103.167 15
91.195.103.170 14
91.195.103.100 13
91.195.103.168 13
91.195.103.154 11
91.195.103.172 11
91.195.103.169 9
91.230.47.6 8
91.195.103.173 7
91.195.103.4 7
91.195.103.92 7
91.195.103.36 5
91.195.103.37 4
91.195.103.152 2
91.195.103.22 2
91.195.103.98 2
91.195.103.165 1
91.195.103.50 1
91.195.103.68 1
91.195.103.99 1
91.230.47.3 1
91.230.47.40 1

The raw data with timestamps is available here. Note that a very small percentage of the attacks were SSH rather than RDP and are included above.
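
For anyone who wants to run a similar tally against their own logs, here’s a rough sketch of the approach. The log format and file name are hypothetical, and the two prefixes are simply inferred from the addresses in the table above rather than taken from AS49453’s actual announcements:

from collections import Counter
from ipaddress import ip_address, ip_network

# Prefixes inferred from the offending addresses above; not an authoritative list.
PREFIXES = [ip_network("91.195.103.0/24"), ip_network("91.230.47.0/24")]

counts = Counter()
with open("rdp_sources.log") as fh:  # hypothetical log: one source IP per line
    for line in fh:
        addr = ip_address(line.strip())
        if any(addr in prefix for prefix in PREFIXES):
            counts[str(addr)] += 1

for addr, total in counts.most_common():
    print(f"{addr}\t{total}")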

So how many times has this “fake” abuse been logged on AbuseIPDB?

EDIT: Due to a “Data Loss Incident” at AbuseIPDB on 08/08/2017, the reported totals below won’t match the current total reports. The totals below were noted before the incident occurred.

IP address (AbuseIPDB report URL)    AbuseIPDB total reports
91.230.47.37 325
91.195.103.102 196
91.195.103.85 189
91.195.103.84 112
91.230.47.41 102
91.195.103.164 163
91.230.47.44 3
91.230.47.39 217
91.230.47.10 18
91.195.103.157 97
91.195.103.101 41
91.195.103.250 28
91.195.103.86 29
91.195.103.171 79
91.195.103.149 34
91.230.47.4 12
91.195.103.167 0
91.195.103.170 67
91.195.103.100 32
91.195.103.168 40
91.195.103.154 22
91.195.103.172 2
91.195.103.169 10
91.230.47.6 12
91.195.103.173 102
91.195.103.4 126
91.195.103.92 3
91.195.103.36 12
91.195.103.37 5
91.195.103.152 3
91.195.103.22 5
91.195.103.98 1
91.195.103.165 14
91.195.103.50 2
91.195.103.68 3
91.195.103.99 0
91.230.47.3 92
91.230.47.40 39

Based on the reports above, I feel it’s safe to conclude this network abuse is very real. How long will AS49453’s BGP peers let this abuse continue unabated?

Ongoing spam campaign with forged headers, and the Amazon EC2 abuse team advises contacting the bogon overlords

Recently I’ve been monitoring an ongoing email spam campaign using forged email headers, all referencing bogons as the sending servers’ IP addresses. I wasn’t sure why Gmail’s servers would even process these messages since they were blatantly spoofed. Google didn’t respond to my request for comment.

Here are three example header snippets:

spf=neutral (google.com: 91.232.208.157 is neither permitted nor denied by best guess record for domain of mlhhmaidthf@nghpcsqbwbp.com) smtp.mailfrom=mlhhmaidthf@nghpcsqbwbp.com
Return-Path: <mlhhmaidthf@nghpcsqbwbp.com>
Received: from nghpcsqbwbp.com ([91.232.208.157])
by mx.google.com with ESMTP id w4si2021201ywi.300.2017.07.22.23.41.33

Received-SPF: neutral (google.com: 91.232.208.157 is neither permitted nor denied by best guess record for domain of mlhhmaidthf@nghpcsqbwbp.com) client-ip=91.232.208.157;
Authentication-Results: mx.google.com;
spf=neutral (google.com: 91.232.208.157 is neither permitted nor denied by best guess record for domain of mlhhmaidthf@nghpcsqbwbp.com) smtp.mailfrom=mlhhmaidthf@nghpcsqbwbp.com

Date: Sat, 22 Jul 2017 23:41:33 -0700
From: Becca <66062547.28875F4A9BC62EC42E46B0mlhhmaidthf@nghpcsqbwbp.com>
To: (removed email)
Subject: 1 Weird Trick I Wish My Ex-Boyfriend Knew (Uncensored)

—————————————————————-

spf=neutral (google.com: 194.40.240.181 is neither permitted nor denied by best guess record for domain of amwdnrtjyax@vikhhaeewuf.com) smtp.mailfrom=amwdnrtjyax@vikhhaeewuf.com
Return-Path: <amwdnrtjyax@vikhhaeewuf.com>
Received: from vikhhaeewuf.com ([194.40.240.181])
by mx.google.com with ESMTP id p188si4088924oig.219.2017.07.26.15.19.59

Received-SPF: neutral (google.com: 194.40.240.181 is neither permitted nor denied by best guess record for domain of amwdnrtjyax@vikhhaeewuf.com) client-ip=194.40.240.181;

Authentication-Results: mx.google.com;
spf=neutral (google.com: 194.40.240.181 is neither permitted nor denied by best guess record for domain of amwdnrtjyax@vikhhaeewuf.com) smtp.mailfrom=amwdnrtjyax@vikhhaeewuf.com

Date: Wed, 26 Jul 2017 15:19:59 -0700
From: Single Adult Personals PromoPartner <66298910.977C49F4706816BF7B57F1amwdnrtjyax@vikhhaeewuf.com>
To: (removed email)
Subject: Drool over these sexy selfies

—————————————————————-

spf=neutral (google.com: 212.115.52.158 is neither permitted nor denied by best guess record for domain of bpgpszkntahceo@dhgtghersuiscp.com)
smtp.mailfrom=bpgpszkntahceo@dhgtghersuiscp.com
Return-Path: <bpgpszkntahceo@dhgtghersuiscp.com>
Received: from dhgtghersuiscp.com ([212.115.52.158])
by mx.google.com with ESMTP id d4si11107829qtc.389.2017.07.25.04.21.42

Received-SPF: neutral (google.com: 212.115.52.158 is neither permitted nor denied by best guess record for domain of bpgpszkntahceo@dhgtghersuiscp.com) client-ip=212.115.52.158;

Authentication-Results: mx.google.com;
spf=neutral (google.com: 212.115.52.158 is neither permitted nor denied by best guess record for domain of bpgpszkntahceo@dhgtghersuiscp.com)
smtp.mailfrom=bpgpszkntahceo@dhgtghersuiscp.com

Date: Tue, 25 Jul 2017 04:21:42 -0700
From: Becca <38459093.0ECB22A1DCFEFE74808A2Abpgpszkntahceo@dhgtghersuiscp.com>
To: (removed email)
Subject: 1 Weird Trick I Wish My Ex-Boyfriend Knew (Uncensored)

I ran a check on DomainTools.com of the IP addresses referenced in the email headers:

  • 91.232.208.157
  • 194.40.240.181
  • 212.115.52.158

If you see this object as a result of a single IP query, it means the IP address is currently in the free pool of address space managed by the RIPE NCC.

Translation: BOGONS!

What about the domain names referenced in the headers? Those can’t be fake too, right?

  • nghpcsqbwbp.com
  • vikhhaeewuf.com
  • dhgtghersuiscp.com

Sadly they aren’t even registered and as such, don’t exist.
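
This is easy to verify with a couple of DNS lookups; at the time of writing, each one fails. A minimal sketch:

import socket

# These domains were unregistered at the time of writing; results may change.
for domain in ("nghpcsqbwbp.com", "vikhhaeewuf.com", "dhgtghersuiscp.com"):
    try:
        socket.getaddrinfo(domain, None)
        print(f"{domain}: resolves (unexpected)")
    except socket.gaierror:
        print(f"{domain}: does not resolve (NXDOMAIN)")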

So who can we contact to report this network abuse? Looking deeper into the body of the emails, I found they all contain a link for the recipient to click on. The DNS names extracted from those links are:

  • mgvtuzwjyhz.popexploitsraved.club
  • hokaehxylug.gcyclingcyberspacemod.site
  • iezegviufre.gmanglerszorchingspoofing.site

According to DomainTools.com all three domains are registered through Namecheap.com and the owner’s information is hidden by WhoisGuard, Inc.

Namecheap logo

Upon contacting Namecheap and advising them of the above details, I received the following response from Sergey Chernenko in the Legal & Abuse Department:

In this situation, Namecheap acts as the registrar only. It means that our ability to investigate the matter is limited since the content transmitted via the website is not located on our server. Please also note that we do not own the reported domain name, we are simply the company the domain name was registered with.

Considering the aforementioned points, we recommend that you contact the hosting provider, who would be in a better position to validate your claim and take the appropriate action. For your convenience, here are contact details of the company that owns IP address assigned to the domain: https://whois.domaintools.com/35.160.47.71

I followed up and provided additional details to Ksenia Bezuglaya in Namecheap’s Legal & Abuse Department; however, it was to no avail, and it was clear that Namecheap would not blackhole the DNS records for the three domain names.

At this point, I contacted the hosting provider, Amazon Elastic Compute Cloud (Amazon EC2), and provided them with all the details of my investigation. The two Amazon EC2 managed IP addresses found in the emails were 35.160.47.71 and 35.167.123.130.

Amazon EC2 logo

The first reply I received from the Amazon EC2 abuse team was somewhat confusing:

We understand your concern regarding the continued availability of this content. As noted previously, as a courtesy we notified our customer of your request to have the content removed or access disabled, however, as we do not consider this content to be in violation of our terms, we are not able to take additional action. We strongly encourage you to continue to work with our customer directly to address any additional concerns that you may have.

So I followed up, asking them to confirm whether AWS allows users to send emails with forged headers in violation of U.S. federal law, 15 U.S.C. ch. 103 (the CAN-SPAM Act of 2003). I didn’t hear back from the Amazon EC2 abuse team for a week, so I sent another follow-up asking for further comment. Shortly thereafter, I received a reply:

Apologies for the delay. As the email wasn’t sent from the AWS IP space, there isn’t any action we can take to stop this from occurring. If you haven’t already, please contact the hosting provider(s) for these IPs to address the origin of the emails.

The only content hosted on AWS are the domains iezegviufre.gmanglerszorchingspoofing.site and hokaehxylug.gcyclingcyberspacemod.site. We determined that this content is not against our Acceptable Use Policy (https://aws.amazon.com/aup/) so we notified our customer(s), but we will not take action.

I replied with an explanation of what bogons are and reminded them that the AWS AUP clearly states:

You may not use the Services to violate the security or integrity of any network, computer or communications system, software application, or network or computing device (each, a “System”). Prohibited activities include:
Falsification of Origin. Forging TCP-IP packet headers, e-mail  headers, or any part of a message describing its origin or route. The legitimate use of aliases and anonymous remailers is not prohibited by this provision.

Two days later I received the following update from the AWS Abuse team:

Our customer has taken action to resolve the matter.

Please let us know if you receive any further reports and we will investigate further.

Unfortunately the spam campaign continues, so I will be following up with Amazon EC2 abuse again.

Hunting for bogons and the ISPs that announce them

In the coming weeks, I will be monitoring bogons in the wild and the ISPs that announce them. But first, what the heck is a bogon anyway? Bogon is an informal term for IP packets on the public Internet that claim to be from an area of IP address space that is reserved but not yet allocated or delegated by the Internet Assigned Numbers Authority (IANA) or any of the Regional Internet Registries (RIRs).

Many ISPs filter bogon ranges because such traffic has no legitimate reason to traverse the internet. If you find a bogon in your firewall logs, it is likely because someone either accidentally misconfigured something or intentionally crafted the packets for malicious purposes.

Bogons may become legitimate source IPs over time as ranges are allocated and assigned by IANA or an RIR, meaning there is no static list of bogons. A current list of which IPv4 prefixes have and have not been allocated by IANA can be found here. Only Martians will remain on the bogon list forever.

Marvin the Martian
© Warner Bros.

No, not that Martian. In IP networking, Martians are packets with source or destination addresses within special-use ranges such as:

Address block    Present use
0.0.0.0/8    “This” network
10.0.0.0/8    Private-use networks (Class A)
100.64.0.0/10    Carrier-grade NAT
127.0.0.0/8    Loopback
169.254.0.0/16    Link local
172.16.0.0/12    Private-use networks (Class B)
192.168.0.0/16    Private-use networks (Class C)
224.0.0.0/4    Multicast
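
Checking whether an address falls inside these special-use ranges is a one-liner with Python’s ipaddress module; here’s a minimal sketch using the ranges from the table above:

from ipaddress import ip_address, ip_network

# Special-use ("Martian") ranges from the table above.
MARTIAN_RANGES = [
    ip_network("0.0.0.0/8"), ip_network("10.0.0.0/8"), ip_network("100.64.0.0/10"),
    ip_network("127.0.0.0/8"), ip_network("169.254.0.0/16"), ip_network("172.16.0.0/12"),
    ip_network("192.168.0.0/16"), ip_network("224.0.0.0/4"),
]

def is_martian(addr):
    return any(ip_address(addr) in net for net in MARTIAN_RANGES)

print(is_martian("192.168.1.10"))  # True
print(is_martian("8.8.8.8"))       # False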

Now that we have a basic definition of a bogon established, where can we find an up-to-date list of bogon IP ranges? The only recently updated list I could find was provided by Country IP Blocks, which offers a complete bogon list in eleven different ACL formats.

So how do we locate ISPs letting bogons onto the internet? Luckily, this is as easy as visiting Hurricane Electric’s Bogon Routes page.

M247 Ltd

One such ISP I found was M247 Ltd (previously known as GlobalAXS Communications). I contacted them on July 11 and asked for comment. I didn’t receive a follow-up until July 24, when an unnamed M247 support representative stated:

As you can see from the HE site we are no longer announcing these prefixes. I am not authorised to comment any further.

I again asked if someone was authorized to comment further and received the following update on July 27:

I have spoken to our management who have authorised me to give you a further statement.

This was accidental misconfiguration on one of our devices which meant that some RFC1918 prefixes [private IP addresses] were tagged with our announce community. This has been rectified and the member of staff responsible re-trained.