I found the database dump of the Locker ransomware keys released on May 30, 2015 by the ransomware’s author.  I decided to put it into a database and make a lame front-end for it that can be queried by either the bitcoin address or the public RSA key from the infected computer.

Hope it helps someone out there.



I am the author of the Locker ransomware and I’m very sorry about what has happened. It was never my
intention to release this.

I uploaded the database to mega.co.nz containing the bitcoin address, public key, and private key as CSV.
This is a dump of the complete database, and most of the keys weren’t even used.
All distribution of new keys has been stopped.


Automatic decryption will start on the 2nd of June at midnight.

@devs, as you might be aware, the private key is used with the .NET RSACryptoServiceProvider class, and
files are encrypted with 256-bit AES using the RijndaelManaged class.

This is the structure of the encrypted files:

  • 32-bit integer, header length
  • byte array, header (length is the previous int)

Decrypt the header byte array using RSA and the private key.

The decrypted byte array contains:

  • 32-bit integer, IV length
  • byte array, IV (length is the previous int)
  • 32-bit integer, key length
  • byte array, Key (length is the previous int)
  • rest of the data is the actual file, which can be decrypted using RijndaelManaged with the IV and Key
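To make that layout concrete, here is a minimal Python sketch of parsing the container (the RSA and AES steps themselves are omitted; little-endian Int32 length prefixes are assumed, matching .NET BinaryWriter defaults — the function names are mine, not from the malware):

```python
import struct

def split_container(blob):
    """Split an encrypted file into the RSA-encrypted header and the remainder."""
    (header_len,) = struct.unpack_from("<i", blob, 0)   # 32-bit header length
    header = blob[4:4 + header_len]                     # decrypt this with RSA
    rest = blob[4 + header_len:]
    return header, rest

def parse_decrypted_header(data):
    """Parse the RSA-decrypted byte array: IV length, IV, key length, key, payload."""
    (iv_len,) = struct.unpack_from("<i", data, 0)
    iv = data[4:4 + iv_len]
    (key_len,) = struct.unpack_from("<i", data, 4 + iv_len)
    key = data[8 + iv_len:8 + iv_len + key_len]
    payload = data[8 + iv_len + key_len:]               # AES-256 (Rijndael) ciphertext
    return iv, key, payload
```

With the IV and key recovered, the payload decrypts with AES-256-CBC just as RijndaelManaged would produce it.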

Again sorry for all the trouble.

Poka BrightMinds

~ V

I have thousands of files stored on an external USB-attached 1TB drive.  My drive is currently 95% full.  I know I have duplicate files throughout the drive because, over time, I have been lazy and made backups of backups (or copies of copies) of images and other documents.

Time to clean house.

I’ve searched online for a tool that does the following things, relatively easily and with a decently designed user interface:

  • Find duplicates based on hash (SHA-256)
  • List duplicates at end of scan
  • Give me an option to delete duplicates, or move them somewhere
  • Be somewhat fast

Every tool I’ve used fell short somewhere.  So I decided to write my own application to do what I want.

What will my application do?

Hash each file recursively, given a starting path, and store the following information in an SQLite database for reporting and/or cleanup purposes.

  • SHA-256 Hash
  • File full path
  • File name
  • File extension
  • File mimetype
  • File size
  • File last modified time

With this information, I could run a report such as the following pseudo report:

Show me a list of all duplicate files with an extension of JPG over a file size of 1MB modified in the past 180 days.

That’s just a simple query, something like:

SELECT fileHash, fileName, filePath, fileSize, COUNT(fileHash) FROM indexed_files WHERE fileExtension='JPG' AND fileSize > 1048576 GROUP BY fileHash HAVING COUNT(fileHash) > 1

My application can show me a list of these and let me decide whether to move or delete the duplicates after the query runs.

One problem comes to mind in automating the removal or moving of duplicates… what if there is more than one duplicate of a file; how do I handle that?
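One way to answer that, sketched here in Python (the names are mine, not from the application): group every row by hash, keep a single “original” per group, and treat all remaining copies, however many there are, as removable.

```python
def plan_dedup(rows):
    """rows: (id, sha256, path) tuples. Keep the lowest id per hash as the
    original; everything else in the group, no matter how many copies,
    becomes a duplicate to delete or move."""
    originals, duplicates = {}, []
    for row in sorted(rows, key=lambda r: r[0]):  # lowest id wins
        file_id, file_hash, path = row
        if file_hash in originals:
            duplicates.append(row)
        else:
            originals[file_hash] = row
    return list(originals.values()), duplicates
```

Whether the group holds two copies or twenty, exactly one survives and the rest land in the removal list.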

So on to the bits and pieces…

The hashing function is pretty straight-forward in VB.NET (did I mention I was writing this in .NET?).

Imports System.IO
Imports System.Security
Imports System.Security.Cryptography

Function hashFile(ByVal fileName As String) As String

  Dim hash As SHA256 = SHA256.Create()
  Dim hashValue() As Byte

  ' Using ensures the file handle is released even if hashing fails
  Using fileStream As FileStream = File.OpenRead(fileName)
    fileStream.Position = 0
    hashValue = hash.ComputeHash(fileStream)
  End Using

  Return PrintByteArray(hashValue)

End Function

Public Function PrintByteArray(ByVal array() As Byte) As String

  Dim hexValue As String = ""

  For i As Integer = 0 To array.Length - 1
    hexValue += array(i).ToString("X2")
  Next

  Return hexValue.ToLower()

End Function

Dim path As String = "Z:\"
' Insert recursion function here and inside, use the following:
Dim fHash = hashFile(path) ' The SHA-256 hash of the file
Dim fPath = Nothing ' The full path to the file
Dim fName = Nothing ' The filename
Dim fExt = Nothing ' The file's extension
Dim fSize = Nothing ' The file's size in bytes
Dim fLastMod = Nothing ' The timestamp the file was last modified
Dim fMimeType = Nothing ' The mimetype of the file
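Since the snippet above leaves the recursion itself as a placeholder, here is a rough Python sketch of the same walk-and-hash idea (field names mirror the schema later in the post; mimetype is left out for brevity):

```python
import hashlib
import os

def hash_file(path, chunk_size=65536):
    """SHA-256 a file in chunks so large files aren't read into memory at once."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            sha.update(block)
    return sha.hexdigest()

def walk_and_hash(root):
    """Recursively yield one record per readable file under root."""
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            try:
                record = {
                    "hash": hash_file(full),
                    "fullname": full,
                    "shortname": name,
                    "extension": os.path.splitext(name)[1].lstrip("."),
                    "size": os.path.getsize(full),
                    "modified": os.path.getmtime(full),
                }
            except OSError:
                continue  # unreadable or locked files are skipped, as in the post
            yield record
```

Each yielded record maps straight onto one row of the SQLite table.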

OK, cool; so I have a somewhat workable code idea here. I’m not sure how long this is going to take to process, so I want to sample a few hundred files and maybe even think about some options I can pass to my application, such as only hashing specific extensions or specific file names like *IMG_*, or even being able to exclude something.

But first… a proof of concept.

Update: 11/28/2016

Spent some time working on the application.  Here’s a GUI rendition;  not much since it is being used as a testing application.

I have also implemented some code for SQLite use to store this to a database.  Here’s a screenshot of the database.

Continuing on with some brainstorming, I’ve been thinking about how to handle the multiple duplicates.

I think what I want to do is

  • Add new table “duplicates”
  • Link “duplicates” to “files” table by “id” based on duplicate hashes
  • Store all duplicates found in this table for later management (deleting, archiving, etc.)

After testing some SQL queries and using some test data, I came up with this query:

SELECT * FROM file
WHERE hash IN ( SELECT hash FROM file GROUP BY hash HAVING COUNT(*) > 1 )

This gives me the correct results as illustrated in the screenshot below.

So with being able to pick out the duplicate files and display them via a query, I can then use the lowest “id” as the base or even the last modified date as the original and move the duplicates to a table to be removed or archived.
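That “lowest id is the original” rule translates directly into SQL. A runnable Python/SQLite sketch, assuming an id primary key alongside the post’s columns (the sample paths are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE file (id INTEGER PRIMARY KEY, hash TEXT, fullname TEXT);
    CREATE TABLE duplicates (id INTEGER, hash TEXT, fullname TEXT);
    INSERT INTO file (hash, fullname) VALUES
        ('aaa', '/pics/img1.jpg'),
        ('aaa', '/backup/img1.jpg'),
        ('aaa', '/backup2/img1.jpg'),
        ('bbb', '/docs/note.txt');
""")

# Copy every row that is NOT the lowest-id copy of its hash into duplicates...
con.execute("""
    INSERT INTO duplicates
    SELECT * FROM file
    WHERE id NOT IN (SELECT MIN(id) FROM file GROUP BY hash)
""")
# ...then drop those rows from the main table.
con.execute("DELETE FROM file WHERE id IN (SELECT id FROM duplicates)")
con.commit()
```

After this runs, `file` holds one copy per hash and `duplicates` holds everything queued for deletion or archiving.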

Running my first test on a local NAS with thousands of files.  It’s been running about 3 hours and the database file is at 1.44MB.

Update 12/1/2016

I’ve worked on the application off and on over the past few days trying to optimize the file recursion method.  I ended up implementing a faster method than I created above, and I wrote about it here.

Here’s a piece of the code within the recursion function.  I’m running the first test on my user directory, C:\Users\rkreider.  The recursive count took about 1.5 seconds to count all the files (27k).  I will need to add logic, because the file count doesn’t actually attempt to open and hash each file like my hash function does; so 27k files may actually end up being only 22k or whatever.

Just a file count of C:\Users\rkreider (SSD) took about 1.5 seconds for 26k files.

File count of my user directory (SSD disk), no file hashing or other processing done.

Hashing Test Run 1

On this pass, I decided to run the hash on the files.  It took considerably longer, just under 5 minutes.

File hashing recursively of my user directory (SSD).

Something important to note: not all 26,683 of the originally scanned files were actually hashed, for various reasons such as access permissions or the file already being open in another process.

For comparison, the database (SQLite) created 26,505 records and is 5.4MB in size.

Hashing Test Run 2

I moved the file counter further into the hash loop and only increment the counter when a file is successfully hashed.  Here are my results now.

Recursive hash of my user directory (SSD) with a found/processed indicator now.

As you can see, it found 26,684 files and could only process (hash) 26,510.

Comparing the result in GUI to the database with SELECT COUNT(*) FROM file, it matches properly.  The database size remains about the same, 5.39MB.

One thing that I’m trying to decide is whether or not to put some type of progress indicator on the interface.

The thing is, this adds overhead, because I’d first have to get a count of the files, and that takes time.  In the case of the NAS scan, the count alone took 500+ seconds, over 8 minutes.  So I’d be waiting all that time JUST for a count, and only then would the file hashing start, which takes time of its own.  I just don’t know if it’s worth it, but I believe it sure would be nice.

Database Schema

CREATE TABLE [file] (
  [hash] TEXT NULL,
  [fullname] TEXT NULL,
  [shortname] TEXT NULL,
  [extension] TEXT NULL,
  [mimetype] TEXT NULL,
  [size] INTEGER NULL,
  [modified] TIMESTAMP NULL
);

A few notes on my Observium setup on a Debian 8 Jessie system. All configuration options and details can be found at the Observium documentation page.

Bad Interfaces

These entries are in /opt/observium/config.php

$config['bad_if'][] = 'voip-null';
$config['bad_if'][] = 'virtual-';
$config['bad_if_regexp'][] = '/serial[0-9]:/';
$config['bad_if'][] = 'loopback';
$config['bad_if'][] = 'lo';
$config['bad_if'][] = 'dummy';
$config['bad_if_regexp'][] = '/tunnel_[0-9]/';
$config['bad_iftype'][] = 'voiceEncap';

Other Configuration Options

A few other customizations in the /opt/observium/config.php file.

$config['rrdgraph_real_95th'] = TRUE;
$config['allow_unauth_graphs'] = 1;
$config['login_message'] = 'Unauthorised access shall render the user liable to criminal and/or civil prosecution.';
$config['page_title_prefix'] = 'Rich Kreider - Monitoring :: ';

My photography journey has been hectic from a gear standpoint.  Here’s a bit of history and some of the equipment that I’ve used over the past few years.

Camera Bodies

2010-2012 Canon SX30 IS

The SX30 IS was my first digital camera, so to speak.  It was a point-and-shoot that I took with me on our NYC trip in 2010.

2012-2014 Canon Rebel T3

The Canon Rebel T3 was my first DSLR camera.  I entered the DSLR arena when I bought this at Christmas of 2012.  I knew nothing about DSLRs, or how expensive the next 3 years were going to be (lol).

2014-2015 Canon 5D Classic

After being in the DSLR game for a few years now, I decided to take the plunge into a Full Frame sensor and see what all the fuss was about.  Would it really be *that* awesome?  This was an affordable used body I picked up for about $500 in a Facebook group.

2015-2016 Canon 6D

After toying with the 5D, I needed better features, like autofocus and low-light capability.  I did a lot of indoor work, and my studio was coming together with lighting, but it wasn’t quite there yet.

2016-Present Canon 5D Mark III

The 6D AF system’s performance left me missing a lot of action shots.  I knew when I bought the camera that the 5D Mark III was superior in the AF sense, but the 6D’s pound-for-pound price point compared to the 5D Mark III at the time is what sold me.  Now, though, I need that AF system.  It was either the 7D Mark II or the 5D Mark III; I want to stay full frame, so the 5D Mark III it is!

My newest body will arrive tomorrow, 11/23, just in time for some shoots this weekend.


Let me preface this by stating that I *wish* I had just saved up my money, known what I was going to end up shooting, and bought the appropriate glass.  Moving from APS-C (crop sensor) to full frame was costly.  All the EF-S lenses I bought when I owned my Rebel T3 had to go bye-bye.


They were sold off, traded, bartered, etc. to ‘upgrade’ things over the years.  Some I lost money on, some I broke even.

My FAVORITE lens HANDS DOWN was my 135mm f/2 L.  I fucking LOVE that lens, but I sold it, because:  bills.

Moving Forward

Looking into the future, after I pay off the 5D Mark III, I would like to pick up the Canon 24-70mm f/2.8 L at a decent used price, or a new f/4 version.  I’d also love to have the 135mm f/2 L again;  I see they sell for about $700 in Facebook groups, so I can probably pick one up again in a few months.  I have gear-lust problems, and I still haven’t figured out whether I’m a prime person or OK with zoom lenses.  I’m a pixel peeper and I demand sharpness.  After using the 135mm f/2 L, no lens (that I’ve used) compares to that badass!

Overall, in my bag, I would like to have:

  • Canon 5D Mark III (Got it!)
  • Canon 50mm f/1.8 STM ($$)
  • Canon 24-70mm f/2.8 II L ($$$$$)
  • Canon 70-200 f/2.8 II L ($$$$$)
  • Canon 135mm f/2 L ($$$)
  • Canon 16-35 f/4 IS L ($$$$)
  • Canon 35mm f/1.4 L ($$$$)