Date of Award
Summer 7-29-2025
Document Type
Thesis
Degree Name
Master of Science (MS)
Department/Program
Digital Forensics and Cybersecurity
Language
English
First Advisor or Mentor
Shweta Jain
Second Reader
Hunter Johnson
Third Advisor
Douglas Salane
Abstract
Perceptual hashing algorithms generate content-based image hashes by extracting perceptual features from images. Unlike cryptographic hashes, which change dramatically with even slight input alterations, perceptual hashes remain largely unchanged under modifications such as compression, color correction, and brightness adjustment. Because these hashes are designed to stay similar for inputs that are visually or perceptually alike, they are widely used to detect duplicate images, to find similar images in reverse image search, and to detect child sexual abuse material (CSAM) by comparing image hashes against databases of known perceptual hashes. This study evaluates the robustness of NeuralHash against adversarial attacks, including hash collision attacks, standard evasion attacks, edges-only attacks, and few-pixels attacks. The attacks were conducted on the CASIA dataset to measure how effectively NeuralHash resists these adversarial modifications while the modified images remain visually consistent with the originals.
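As an illustration of the matching step described in the abstract, the following is a minimal Python sketch of comparing binary perceptual hashes by Hamming distance against a database of known hashes. The hash values, threshold, and function names are hypothetical and are not taken from the thesis; NeuralHash itself produces 96-bit hashes, abbreviated here for readability.

def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count differing bits between two equal-length binary hash strings."""
    if len(hash_a) != len(hash_b):
        raise ValueError("Hashes must have the same length")
    return sum(bit_a != bit_b for bit_a, bit_b in zip(hash_a, hash_b))

def is_match(query_hash: str, known_hashes: list[str], threshold: int = 4) -> bool:
    """Flag the query as a match if it lies within `threshold` bits of any known hash."""
    return any(hamming_distance(query_hash, known) <= threshold for known in known_hashes)

if __name__ == "__main__":
    # Hypothetical 16-bit hashes standing in for full-length perceptual hashes.
    known = ["1011001110001111", "0100110001110000"]
    query = "1011001110001011"  # one bit flipped relative to the first known hash
    print(is_match(query, known))  # True: within the similarity threshold

A small Hamming distance between two perceptual hashes is treated as a match; adversarial attacks of the kind studied in the thesis either push a benign image's hash toward a target hash (collision) or push a flagged image's hash outside the threshold (evasion) while keeping the image visually similar.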
Recommended Citation
Kaur, Gurleen, "Exploring Adversarial Threats to NeuralHash: A Perceptual Hashing Algorithm" (2025). CUNY Academic Works.
https://academicworks.cuny.edu/jj_etds/360
