Update 5/6/2017: Close port 445 and apply MS17-010
I have triaged a handful of Windows servers this week that started out being ticketed as high CPU / performance issues.
Upon investigation, I have found XMR cryptocurrency miners being installed through a Windows OS Vulnerability.
A service is created with a random name, pointing to C:\Windows\[RANDOM]\[name].exe
A folder is created at C:\Windows\winsxs\log (unpacked from C:\Windows\winhlp32.tmp using the executable un.exe or unx.exe, which are also in C:\Windows) and loaded with XMR cryptocurrency miner configuration files.
Here’s a directory listing of C:\Windows\winsxs\log:
- 1493016295_log.txt
- 1493016395_log.txt
- blake256.cl
- cryptonight.cl
- groestl256.cl
- jh.cl
- LICENSE
- msvcr120.dll
- OpenCL.dll
- SystemIIS.exe
- SystemIISSec.exe
- wolf-aes.cl
- wolf-skein.cl
- xmr1.conf
- xmr12.conf
- xmr16.conf
- xmr2.conf
- xmr20.conf
- xmr28.conf
- xmr32.conf
- xmr4.conf
- xmr40.conf
- xmr64.conf
- xmr8.conf
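Worth noting: the number at the front of each log filename looks like a Unix epoch timestamp, which dates the miner's first runs. A quick check (assuming the <epoch>_log.txt pattern; [DateTimeOffset] requires .NET 4.6+ / PowerShell 5):

```powershell
# Decode the epoch prefix of 1493016295_log.txt
[DateTimeOffset]::FromUnixTimeSeconds(1493016295).UtcDateTime
# -> 2017-04-24 06:44:55 UTC, i.e. 02:44:55 local (UTC-4),
#    matching the timestamps in the miner log below
```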
Curious, I checked out xmr1.conf and here is what it contained:
{
  "Algorithms": [ {
    "name": "CryptoNight",
    "devices": [ { "index": -1, "threads": 1, "rawintensity": 16, "worksize": 16 } ],
    "pools": [ {
      "url": "stratum+tcp://xmr.crypto-pool.fr:3333",
      "user": "48okENqW61sXt3cDFYJEZMdnHDdoUY1ymE2wVDPSFM5Z2B6VJodU4kmL24w4vLcNv8ZqgynmJ3gq86MEbNsPkLHnTYh6zGR",
      "pass": "x"
    } ]
  } ]
}
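With its quoting restored, the config is ordinary JSON and easy to pull apart; this sketch assumes you have saved a re-quoted copy locally as xmr1.json:

```powershell
# Parse the miner config and pull out the pool details
$conf = Get-Content .\xmr1.json -Raw | ConvertFrom-Json
$pool = $conf.Algorithms[0].pools[0]
$pool.url    # stratum+tcp://xmr.crypto-pool.fr:3333
$pool.user   # the attacker's Monero wallet address
```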
Looking at one of the log files, I also saw the following, which gave me more information for research and removal:
02:44:55:181 700
02:44:55:182 700 ╔══════════════════════════════════════════════╗
02:44:55:182 700 ║   Claymore CryptoNote CPU Miner v3.5 Beta    ║
02:44:55:184 700 ╚══════════════════════════════════════════════╝
02:44:55:385 700 64-bit version
02:44:55:386 700 CPU does not support AES-NI - slower mining!
02:44:55:386 700 Logical CPU cores: 2
02:44:55:386 700 Number of threads: Autoselection...
02:44:55:387 700 Using 2 threads
02:44:55:389 700 scfg: 1
02:44:55:403 700 1 pool specified.
02:44:55:403 700 Press m key for tune mode.
02:44:55:412 10d8 Stratum - connecting to 'xmr.crypto-pool.fr' port 80
02:44:55:414 e4c Stratum - connecting to 'xmr.crypto-pool.fr' port 80
02:44:55:501 10d8 Stratum - Connected
02:44:55:501 e4c Stratum - Connected
There are files dropped into C:\Windows as well; I identified them by their creation date.
Listing of files found in C:\Windows (name, MD5 hash, size in bytes):
- 666.exe, fb675e3648e6b676d8372b64007187fb, 204,775
- csrss.exe, 7a41d4310788a69aa9a089550deb08bd, 1,614,848
- g.exe, 6f9514b7cdc612704737c99ff170080d, 1,877,504
- unx.exe, aaafb1eeee552b0b676a5c6297cfc426, 276,480
- winhlp32.tmp, 17854d337e54b47d3c9bec4374f1279c, 3,032,364
Firewall Rule Added
As noted in the comments by MaxRebo, and as I can confirm, there is a deny tcp 445 rule added to the firewall. As MaxRebo suggests, and I agree, this was likely created to block any other CPU miner payloads from taking over the machine via the same SMB exploits.
Prevention
Update 5/6/2017: Block port 445 on public-facing interfaces
For infected systems:
The best method I’ve found to keep this from re-spawning is to disable the Windows service and set permissions on C:\Windows\winsxs\log to Everyone > Deny All.
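One way to apply that deny from an elevated prompt (a sketch; the exact flags here are my choice, not necessarily what was originally used):

```powershell
# Deny Everyone all access to the miner's working folder.
# (OI)(CI) propagates the deny to files and subfolders, and deny ACEs
# override any allow, so the miner can no longer be unpacked here.
icacls "C:\Windows\winsxs\log" /deny "Everyone:(OI)(CI)F"
```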
I’m not sure what the payload was, as these were fully patched servers running Symantec.Cloud management software. Scanning some of these executables yielded only 1 or 2 detections on VirusTotal, which is very alarming. Luckily these were only CPU miners; imagine if some truly damaging malware were being spread.
Update 5/6/2017
Seems this may be an SMB exploit. While searching, I found this GitHub repo on a forum: https://github.com/fuzzbunch/fuzzbunch
In short, if port 445 is open and update MS17-010 has not been applied, DoublePulsar probes this port, intercepts system calls, and injects malicious code into memory.
We closed port 445 and recommend that our customers apply MS17-010.
Update 5/9/2017
I was able to review logs yesterday but they didn’t give many answers. Found a handful of unique IP addresses from a few different countries. I’m still trying to identify where the payload is coming from and how it got loaded (outside of SMB RCE exploit).
I have decided to run ProcMon from Sysinternals and watch C:\Windows\winsxs\log to see what is still trying to access that folder. As pointed out in the comments, this is re-spawning and we don’t know exactly from where; identifying the source would give some closure. Otherwise, a rollback to a pre-April state might be needed if the system can’t be trusted.
I’ll update again when I have let ProcMon run for the next 24 hours since this seems to be a timed or triggered event.
Update 5/10/2017
So I spent some time digging with a link I shared in a comment after another Google search today.
The backdoor used by the botnet is a WMI RAT downloaded from an Amazon S3 bucket (mytest01234), and is installed using a known MOF file method. Set to run every night at 11PM, the backdoor defines a new WMI provider class, which allows the attacker to execute code as a result of a WMI event and to hide the activity behind the WMI service process.
Source: http://www.securityweek.com/botnet-thousands-servers-mines-crypto-currency
Looking into this further, a search for mytest01234 turns up an article at Guardicore on “The Bondnet Army” which points to the following:
Detection
One of these log files will exist:
- %windir%\wb2010kb.log – contains a log of a successful attack
- %windir%\temp\dfvt.log – contains the log messages from running the WMI trojan
So I checked one of my servers and sure enough, %windir%\temp\dfvt.log exists (but not wb2010kb.log).
Here’s the contents of dfvt.log:
3389---Guest---6.1---2cpu---15Hours---nolog
That string is what gets sent to the C&C server; it is part of a report that includes:
- Computer name
- RDP Port
- Guest username
- OS Version
- Number of active processors
- Uptime measured in hours
- The original infection vector
- Whether the victim is running a Chinese version of Windows
- OS language
- CPU architecture (x86/x64)
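Matching the sample beacon to these fields is just a matter of splitting on the --- delimiter; the labels in this sketch are my own reading of the Guardicore field list, not official names:

```powershell
# Read the beacon and split it into fields
$beacon = Get-Content "$env:windir\temp\dfvt.log"   # e.g. 3389---Guest---6.1---2cpu---15Hours---nolog
$fields = $beacon -split '---'
# $fields[0] RDP port, [1] username, [2] OS version,
# [3] CPU count, [4] uptime in hours, [5] infection vector / logging flag
```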
So I had the log file, but when I checked WMI per the command on Guardicore, I came up empty-handed. I do know that I no longer have instances of this being spawned (probably because of the permissions set on C:\Windows\winsxs\log); but the Guardicore article indicates that the presence of the log files means the trojan is active. Shrug.
I’m missing something, because the Guest account that I keep resetting the password on and disabling ends up enabled again. So, I’ll continue the hunt. Meanwhile, the system’s network interface is disabled; it’s not mission critical at this point because I moved the necessary services to another server, so I have some time to investigate.
Hi, I just wanted to find out if you had any IIS websites hosted on these servers, or any inbound ports open, as we found this exact same malware on one of our webservers.
Yes, this server does have IIS. I have looked through the logs and was not able to find any strange queries that may have resulted in some type of payload being dropped.
I typically run scans against machines using OpenVAS and it came back with only minor warnings, nothing critical. So I’m still stumped at how/when these were placed.
One thing I don’t think I noted in the post: the “Guest” account gets enabled whenever I have observed this.
The same thing happened to our server running 2008 R2 on 25-4-2017.
Can you share how to completely remove the infection from the OS?
I ran full scans with Symantec.Cloud and the ESET online scanner, and they only picked up some files I had previously moved.
Look in Services, find the oddball service name, and make sure to disable it.
Double-check your Task Scheduler and make sure no odd tasks are in there.
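A quick sweep for both (a sketch; the path pattern is based on the service binaries reported in this thread, and it will flag candidates for manual review rather than give a definitive verdict):

```powershell
# Flag services whose binary sits directly under C:\Windows or in a
# random subfolder of it -- the pattern this malware's service uses
Get-WmiObject Win32_Service |
  Where-Object { $_.PathName -match 'C:\\Windows\\[^\\]+(\\[^\\]+)?\.exe' } |
  Select-Object Name, State, PathName

# Dump all scheduled tasks (verbose) for manual review
schtasks /query /fo LIST /v
```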
Next, I changed permissions on C:\Windows\winsxs\log to Everyone > Deny All.
I double-checked and blocked ports 135, 139, and 445 on the WAN interface as well; I manually added an Inbound Rule to do this just to “be sure”.
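The inbound rule can be added from an elevated prompt along these lines (the rule name is my own; adjust to taste):

```powershell
# Block inbound NetBIOS/SMB at the host firewall
netsh advfirewall firewall add rule name="Deny NetBIOS/SMB inbound" `
  dir=in action=block protocol=TCP localport=135,139,445
```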
The exploit I believe started on April 23rd, 2017 and I believe it is related to the Shadow Brokers wikileaks dump of NSA tools that exploited a vulnerability in SMB1 protocol.
Finally I rebooted and the system has been stable since.
Rich Kreider, great post!
Have you figured out how they got in?
Unfortunately, not definitively.
My suspicion is through SMB/445 exploit. I won’t have forensic logs until later Monday afternoon where I can tie this all together.
The timing seems to have been April 24th for this specific server and then during that week for other reports. Some of those servers I got break-fix calls on had 445 open.
I’m just really not sure on this server I manage in particular. But all signs are pointing to SMB/445.
Thanks Rich for the reply. Did you get your forensic logs back yet, and if so, what did you pick up? Also, what ports did you have open on your firewall to the server? Was port 445 directly open from the internet to the server?
One thing Rich didn’t note in this post (maybe the variant that hit our server is slightly different?) is that the worm also added a firewall rule called deny tcp 445, which does exactly what it says. They were probably trying to call dibs on the infected system and lock out other would-be exploiters. That all but confirms SMB/445 was the entry vector.
Also, for us, the miner re-appears every day at exactly 11pm despite applying all the relevant countermeasures, and even if the network interface is turned off completely. So in our case, the thing probably also managed to hide a seed somewhere in the system that it re-spawns from in a scheduled manner. Unfortunately, no luck finding its hideout yet, though there is suspicious PowerShell activity in a conhost.exe process owned by SYSTEM that seems to always coincide with the respawn.
Keeping an inaccessible winsxs\log directory seems to successfully stop it from starting the mining process, but this is hardly an acceptable situation. We’ll likely reset the server to a pre-April state if no solution comes up this week…
Other minor things we found out: the actual miner is in a camouflaged RAR archive (the one named winhlp32.tmp), and un.exe/unx.exe are actually just the RAR command-line tool.
Interesting points MaxRebo!
I did find the deny tcp 445 rules in my firewall logs; I’ll make a note up in the post about that as I didn’t include it!
The miner did re-appear for me at 2:27 AM daily. I found a scheduled task, but I don’t think it was related (RegDriver). I did find a service; once it was stopped and the binary removed, I no longer got a respawn. It pointed to: C:\progra~1\NQCOHHVG\lsm.exe
The winhlp32.tmp is the miner compressed file that contains all the miner applications/configurations and I did have un.exe as well.
A couple other notes…
Here’s a VirusTotal signature for one of the Service executables: here.
Also – I observed that performance diagnostics were run on the servers I looked at; I don’t know if that’s the nature of how the CPU miner works (I’m not really familiar with cryptocurrency mining), but I would _guess_ that a system is profiled for performance and the results are reported back?
On a different system, I found that the service pointed to: C:\Windows\YYHJFXZQ\lsass.exe, which differed from my other systems, so I don’t know if that system had a variant.
Finally, I was able to review the logs yesterday and saw that there were 11 unique IPv4 addresses attempting port 445 on one server. Geographically, those IPs are from US, Canada, China.
Speaking of your 11PM…
I just saw this article:
Source: http://www.securityweek.com/botnet-thousands-servers-mines-crypto-currency
Whoa, jackpot! I stayed up late because I wanted to see if I notice anything helpful when I witness it again tonight, and then I see your reply mere minutes before the clock hits 11… Following your link, it didn’t take long to find the information that enabled me to remove the malicious provider and event subscription. You might have just saved us from a rollback and downtime! I’ll definitely drop by tomorrow to report if it’s gone for good now. Thanks a ton for the heads up.
Awesome.
I got sucked into a rabbit hole today looking into some interesting things such as WMI RAT stuff. Hopefully things look on the up & up tomorrow for your system.
Thanks Rich, looking good so far.
Some additional info in case it might be helpful to others:
In our case, the WMI __EventFilter was named Power, and was configured to fire when Win32_LocalTime (via the Win32ClockProvider) hits 23:03:03. The corresponding WMI ActiveScriptEventConsumer was named PowerLog, and its attached Visual Basic script was pretty much in line with those reported to be deployed by this particular botnet in the past. The naming of the WMI objects is likely subject to frequent change, as is the namespace they’re created under: it was root\Default for us, while earlier reports mention mostly root\Subscription. It seems the C&C server’s HTTP interface changed slightly too, as it was posting collected system details to /d/info.asp?info and fetching commands from /d/dl.asp (as opposed to /all.asp&info and /dl.asp according to prior reports).
It’s noteworthy that the particular C&C server configured for its next run (which was not to be, thanks to you :)) was probably not another infected machine. The script would have accessed it via a domain name instead of an IP, and the domain was registered via (and the server hosted by) GoDaddy. The whois information makes for some interesting reading:
http://whois.domaintools.com/g-s.site
Apparently, GoDaddy accepts registrants named Google google based in Beijing, with a phone number like 123456… it seems their horrible reputation is entirely deserved.
The server currently seems to be in a blank state. This morning, http://g-s.site/d/d/dl.asp was still happily handing out encoded commands when navigated to in a browser; now there are only 404s. Could be that someone has already filed an abuse report.
Anyway, since everything remained calm tonight, I think it’s safe to say that the attacker is now locked out for good. Thank you again for the valuable pointer!
MaxRebo, can you explain in detail how you found that WMI object?
Mike – you can grab WMI Explorer for a nice GUI way of finding this (https://wmie.codeplex.com/). Alternatively, use PowerShell (each query is a single line):
Check __FilterToConsumerBinding class:
My output:
So now I check __EventFilter class with name of Power:
Check __EventFilter class:
My output:
So this kicks off the __EventConsumer PowerLog at 2:03:03 AM.
I checked what exactly was happening at 2:03:03. Knowing that the __EventFilter is Power and the __EventConsumer is PowerLog, it’s time to look at PowerLog.
My Output:
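For reference, the standard PowerShell queries for these three classes look like the following; this is a reconstruction based on the namespace and object names reported in this thread, not necessarily the exact one-liners used:

```powershell
# 1) Find filter-to-consumer bindings (links a trigger to a payload)
Get-WmiObject -Namespace root\default -Class __FilterToConsumerBinding

# 2) Inspect the filter named Power (the timer condition)
Get-WmiObject -Namespace root\default -Class __EventFilter -Filter "Name='Power'"

# 3) Inspect the consumer named PowerLog (the VBScript payload)
Get-WmiObject -Namespace root\default -Class ActiveScriptEventConsumer -Filter "Name='PowerLog'"
```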
Out of curiosity, I wonder if they set the Guest password to admin,.123!@#$%^ for all of their breaches? Can anyone confirm?
I’m not sure what my guest password was set to prior to resetting it and disabling that account. Once I did that though, the attack stopped. The only thing I’m seeing now is the dfvt.log file being recreated every night at 11:03 PM.
Yes, 445 was open directly on a server I maintain. I do need 445 on this particular server I manage, so I put explicit rules in to allow only for some specific addresses (which will cause me more headaches down the road, but whatever).
It was also opened on all the other calls I received.
I looked at logs starting from the 23rd of April, and in one case noted 11 unique IPs from a few countries (US, China, Canada) accessing 445. I do not have packet dumps, so I can’t tell what exactly was going on with the connections; all I had access to was netflow data, and the total traffic was smaller than one of the vulnerability payloads – so I think they exploited SMB and used RCE to fetch the payload from somewhere. I’m still trying to piece communications together, but this is all I have for right now. I suspect I’ll find the servers connecting outbound to an HTTP/HTTPS C&C to download the main payload, but most of these servers have a ton of HTTP traffic in the logs. Piecing things together by time (first-seen access to port 445, then checking for outbound HTTP/HTTPS within 5 minutes of that event) hasn’t yielded much yet.
I found it in the same place. My guest password was the same as yours.
This worked for removing it for me (all 3 are one-liners):
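For reference, standard removal one-liners for these three objects would look like this; the binding, filter, and consumer names and the root\default namespace are taken from this thread, but this is my reconstruction, not necessarily the original scripts:

```powershell
# Remove the binding, the timer filter, and the script consumer (elevated prompt)
Get-WmiObject -Namespace root\default -Class __FilterToConsumerBinding |
  Where-Object { $_.Filter -match 'Power' } | Remove-WmiObject
Get-WmiObject -Namespace root\default -Class __EventFilter -Filter "Name='Power'" | Remove-WmiObject
Get-WmiObject -Namespace root\default -Class ActiveScriptEventConsumer -Filter "Name='PowerLog'" | Remove-WmiObject
```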
Here are the steps I followed to delete this script:
1) Go to Start –> Run –> and enter WBEMTest to start the Windows Management Instrumentation Tester application.
2) Check the Enable All Privileges check box.
3) Click the Connect… button.
4) Enter root\default in the Namespace box and click the Connect button.
5) Click the Enum Instances… button and enter ActiveScriptEventConsumer.
6) Click ActiveScriptEventConsumer.Name=PowerLog in the list of results and click the Delete button.
7) Click the Close and Exit buttons to shut down the application.
Yep, that works too! wbemtest ftw. =)
Glad you got it sorted too, Mike.
Seems all three of our scripts are identical. Guest password was set to admin,.123!@#$%^ here as well. In hindsight, the attacker does not seem like the most skilled of individuals. Likely some script kiddie (although a highly motivated one) picking up pre-made tools and tricks on hacker forums.
Thanks a ton to all of you guys! Well done… thank you very much for saving us.
I had been noticing dfvt.log for the last 155 days and was trying to find out what was going on. A search landed me here, cleared all the doubts, and I was finally able to delete the WMI events! Thanks a lot.
Typo… 15 days, not 155!
Wow, I was struggling with this for a month when I found the logfile that led me here.
I even opened a case with the antivirus support.
It worked for me as well with wbemtest, but it will only remove the script; the WMI scheduled task has to be deleted with the first PowerShell script from Rich.
Thanks!
Any more news? I have a similar issue, but I also notice that a service is being installed that runs a fake copy of smss.exe, installed at Program Files\smss.exe.
Even after I killed the service and removed it with sc delete, it will re-appear after a few days.
I have performed all your suggested steps and the system is updated; port 445 is closed, but this system still gets infected again after 2 days. I am curious if you have seen the same behavior on your systems.
Thanks in advance.
I have found the PowerLog entry in the WMI Tester, but trying to delete it says Access Denied.
Do I need to run the 3 scripts first?
Hi Cliff –
If you are running the WMI Explorer GUI, run it Elevated with Administrator privileges.
Likewise with WBEM Test (wbemtest); make sure to tick the box that says Enable all privileges.
If you run the scripts, you’ll need to do it from Elevated Powershell prompt.
Thanx Rich, let’s see if this keeps it at bay.