# Perceptual image hashes

I recently discovered perceptual image hashes when I was in charge of removing thumbnails from a large set of images. Perceptual hashes are a completely different concept from the usual cryptographic hashing methods such as MD5 or SHA. A cryptographic hash generates a one-way digest based on the input data, and because of the avalanche effect, the resulting hash is completely different when you change even a single bit:

```
md5("10100110001") = 50144bd388adcd869bc921349a287690
md5("10100110101") = 3e3049b1ed21ede0237f9a276ec80703
```

Because of this, the only way two images can have the same cryptographic hash is when they are exactly identical, bit for bit. This makes cryptographic hashing an infeasible solution for this problem.

In contrast, a perceptual hash is a fingerprint based on the image contents that **can** be used to compare images, by calculating the Hamming distance between two hashes (which basically means counting the number of individual bits that differ).
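Counting the differing bits takes only a few lines in most languages. A minimal Python sketch (the fingerprint values below are made up for illustration):

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions in which two hashes differ."""
    # XOR leaves a 1 in every position where the two hashes disagree.
    return bin(a ^ b).count("1")

# Two made-up 16-bit fingerprints that differ in exactly two positions.
print(hamming_distance(0b1100100101101001, 0b1100100101101010))  # -> 2
```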

There are a couple of different perceptual image hashing algorithms, but they all use similar steps to generate the media fingerprint. The easiest one to explain is the Average Hash (also called aHash). Let's see how it works, step by step.

### 1. Reduce size

First, we reduce the size of the image to 8x8 pixels. This is the fastest way to remove high frequencies and details. This step ignores the original size and aspect ratio, and will always resize to 8x8 so that we have 64 resulting pixels.

### 2. Reduce the color

Now that we have 64 pixels, each with their RGB value, reduce the color by converting the image to grayscale. This leaves us with 64 grayscale values.

### 3. Calculate the average color

This is quite self-explanatory: calculate the average of the previous 64 grayscale values.

### 4. Calculate the hash

The final fingerprint is calculated based on whether each pixel is brighter or darker than the average grayscale value we just calculated. Do this for every pixel and you end up with a 64-bit hash.
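The four steps above can be sketched in a few lines of Python. This is an illustrative toy implementation that assumes the input is a 2D list of (R, G, B) tuples and uses a naive nearest-neighbour resize; real libraries use better resampling and grayscale conversion:

```python
def average_hash(pixels):
    """Compute a 64-bit aHash from a 2D list of (R, G, B) pixel rows."""
    h, w = len(pixels), len(pixels[0])

    # 1. Reduce size: sample the image down to 8x8 pixels.
    small = [[pixels[y * h // 8][x * w // 8] for x in range(8)] for y in range(8)]

    # 2. Reduce color: convert each RGB pixel to a grayscale value.
    gray = [[(r + g + b) // 3 for (r, g, b) in row] for row in small]

    # 3. Calculate the average of the 64 grayscale values.
    avg = sum(sum(row) for row in gray) / 64

    # 4. Build the hash: one bit per pixel, set if brighter than the average.
    bits = 0
    for row in gray:
        for value in row:
            bits = (bits << 1) | (1 if value > avg else 0)
    return bits

# A made-up 8x8 test image: left half dark, right half bright.
img = [[(0, 0, 0)] * 4 + [(255, 255, 255)] * 4 for _ in range(8)]
print(format(average_hash(img), "064b"))  # -> "00001111" repeated eight times
```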

## Comparing images

To detect duplicate or similar images, calculate the perceptual hashes for both images:

```
Original:
1100100101101001001111000001100000001000000000000000011100111111
Thumbnail:
1100100101101001001111000001100000001000000000000000011100111111
```

As you can see, both hashes are identical. But this doesn't mean that similar images will always create equal hashes! If we manipulate the original image, and add a watermark, we get these hashes:

```
Original:
1100100101101001001111000001100000001000000000000000011100111111
Watermark:
1100100101111001001111000001100000011000000010000000011100111111
```

As you can see, these hashes are very similar, but not equal. To compare them, we count the number of differing bits (the Hamming distance), which is **3** in this case. The higher this distance, the lower the chance that the two images are identical or similar.
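You can verify this distance directly on the bit strings above, for example in Python:

```python
original  = "1100100101101001001111000001100000001000000000000000011100111111"
watermark = "1100100101111001001111000001100000011000000010000000011100111111"

# Count the positions where the two fingerprints disagree.
distance = sum(a != b for a, b in zip(original, watermark))
print(distance)  # -> 3
```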

If you have a MySQL database containing all hashes, you can do a lookup like this:

```
SELECT images.*, BIT_COUNT(hash ^ :hash) as hamming_distance
FROM images
HAVING hamming_distance < 5
```

*Note: because of the BIT_COUNT operation on a computed value, MySQL cannot use an existing index on the hash column, so this query scans all rows.*

## Other implementations

The Average Hash implementation is the easiest and the fastest one, but it appears to be a bit too inaccurate and generates some false positives. Two other implementations are Difference Hash (or dHash) and pHash.

Difference Hash follows the same steps as the Average Hash, but generates the fingerprint based on whether each pixel is brighter than its right-hand neighbor, instead of comparing against a single average value. (To get 64 comparisons, the image is resized to 9x8 pixels instead of 8x8.) Compared to Average Hash it generates fewer false positives, which makes it a great default implementation.
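As a minimal Python sketch, assuming the image has already been resized and converted to an 8x9 grid of grayscale values (nine columns yield eight left/right comparisons per row, so 8 x 8 = 64 bits):

```python
def difference_hash(gray):
    """64-bit dHash from an 8-row x 9-column grid of grayscale values."""
    bits = 0
    for row in gray:            # 8 rows
        for x in range(8):      # 8 left/right comparisons per row
            bits = (bits << 1) | (1 if row[x] > row[x + 1] else 0)
    return bits

# Made-up gradient rows: brightness strictly decreasing left to right,
# so every left pixel is brighter and all 64 bits end up set.
img = [[255 - 10 * x for x in range(9)] for _ in range(8)]
print(format(difference_hash(img), "016x"))  # -> ffffffffffffffff
```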

pHash is an implementation that is quite different from the other ones, and does some really fancy stuff to increase the accuracy. It resizes the image to 32x32 pixels, takes the Luma (brightness) value of each pixel and applies a discrete cosine transform (DCT) to the matrix. It then takes the top-left 8x8 coefficients, which represent the lowest frequencies in the picture, and calculates the resulting hash by comparing each coefficient to the median value. Because of its complexity it is also the slowest one.
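A toy Python sketch of those pHash steps, using a naive O(n^4) DCT for clarity (real implementations use a fast transform, and some variants skip the DC coefficient; the test image here is made up):

```python
import math
import random
from statistics import median

def phash(gray):
    """64-bit pHash sketch for a 32x32 grid of grayscale (Luma) values."""
    n = 32
    # 2D DCT-II; only the top-left 8x8 low-frequency corner is needed.
    dct = [[sum(gray[y][x]
                * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                for y in range(n) for x in range(n))
            for u in range(8)] for v in range(8)]

    coeffs = [c for row in dct for c in row]
    med = median(coeffs)

    # One bit per coefficient: set if it is above the median.
    bits = 0
    for c in coeffs:
        bits = (bits << 1) | (1 if c > med else 0)
    return bits

# A deterministic pseudo-random 32x32 test image.
random.seed(0)
img = [[random.randrange(256) for _ in range(32)] for _ in range(32)]
print(format(phash(img), "016x"))
```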

## PHP library

I combined and ported some implementations I found on the interwebs into a PHP package: https://github.com/jenssegers/imagehash

Feel free to contribute!

## Comments

## Yo 3 years ago

Thanx.

## JLO 3 years ago

Great, clear explanation. Thanks!

## damianopetrungaro 3 years ago

Good article!

## Eugene 1 year ago

Very good short introduction.

## Dough Boy 1 year ago

I am having a hard time figuring out how to use your PHP package. It is returning a hash, but how do I see the "long" version of it (binary 1's/0's)? If I just store the hash in a mysql table it isn't being converted. I am probably missing a step.

## Chris 1 year ago

@DoughBoy I have no experience with the package, but I took a short look at the github. I assume the function returns a hexadecimal hash, which you can easily convert into binary by using the base_convert function of PHP: $binary = base_convert($hexHash, 16, 2)

## ventz 11 months ago

Fantastic post -- great job! And thanks for the PHP library.

## amel 8 months ago

@Chris it's wrong: "$binary = base_convert($hexHash, 16, 2)"

Use this instead:

$hasher = new ImageHash(new DifferenceHash(), ImageHash::DECIMAL);

$intHash = $hasher->hash($image);
$binHash = base_convert($intHash, 10, 2);
$hexHash = dechex($intHash); // or: $hexHash = base_convert($binHash, 2, 16);

## Caz 8 months ago

For those wondering about the whole "PHP returns a hex value, something is wrong" comments: it doesn't matter if it's in hex or binary. Hex is just a more easily read representation of binary. You can just use the hex instead; it's literally the same value.

## Raven 7 months ago

Regarding the statement "The higher this distance, the lower the change of identical or similar images." It sounds counter-intuitive. Is it correct or a mis-statement?

## Hisun 5 months ago

Can I search for a picture within a picture?

## NotMyName 4 months ago

This was a helpful illustrative explanation, thanks Jens!