PLUS Temporal Image Forensics Dataset (PLUSTIFDS) - Download Page
This is "The Multimedia Signal Processing and Security Lab", short WaveLab, website. We are a research group at the Artificial Intelligence and Human Interfaces (AIHI)
Department of the University of Salzburg led by Andreas Uhl.
Our research is focused on Visual Data Processing and associated security questions. Most of our work is currently concentrated on Biometrics, Media Forensics and
Media Security, Medical Image and Video Analysis, and application oriented fundamental research in digital humanities, individualised aquaculture and sustainable wood
industry.
PLUS Temporal Image Forensics Dataset

The PLUS Temporal Image Forensics Dataset (PLUSTIFDS) is a temporal image forensics dataset in which content bias is limited. Usually, images taken in close temporal proximity (i.e., belonging to the same age class) share common scene properties (content bias). Content bias can be exploited by data-driven models instead of comprehensible evidence (i.e., age traces). The idea behind the PLUSTIFDS is to extract the artifacts and noise introduced by the image acquisition pipeline (which contain the age traces) from acquired calibration images, i.e., Dark Field Images (DFIs) and Bright Field Images (BFIs). The extracted artifacts and noise can then be embedded into synthetic (rendered) images, which are completely free of any artifacts and noise introduced by the image acquisition pipeline. When the artifacts and noise are extracted at different points in time (age classes), they can be embedded into the same set of synthetic images. Thus, truly identical images (in terms of image content) are available per age class; only the embedded artifacts and noise differ.

This dataset could help to (i) develop deep learning based age approximation methods, (ii) facilitate the discovery of new (unknown) age traces, (iii) assess the impact of content bias on existing age approximation methods, and (iv) develop and verify new eXplainable Artificial Intelligence methods.

DFIs are images for which the camera's shutter is closed so that the incident light is zero (I = 0); consequently, DFIs are not affected by content bias. BFIs are captured by illuminating the sensor with a uniform field. To achieve this, a spotlight with a mounted softbox is used. The softbox includes an inner and an outer diffuser. A total of three different types of outer diffusers are used, i.e., a diffuse acrylic glass and two different types of white fabric. To capture BFIs, the camera is pointed directly at the softbox. The images are captured in a basement room with the window completely covered (i.e., controlled light and temperature conditions). DFIs were captured with different combinations of ISO settings and exposure times, and BFIs with different combinations of f-number and focal length. Currently, two sessions (November 2023 and June 2024) have been recorded with 8 different imagers.
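To make the embedding idea more concrete, the following is a minimal Python sketch (not part of the dataset documentation): it assumes the calibration images of one age class are available as a NumPy stack, averages them to estimate an artifact/noise pattern, and additively embeds that pattern into an artifact-free synthetic image. The function names, the averaging step and the additive model are illustrative assumptions; the actual extraction and embedding procedure used for PLUSTIFDS may differ.

```python
import numpy as np


def extract_artifact_pattern(calibration_images):
    """Estimate a sensor artifact/noise pattern by averaging a stack of
    calibration images (e.g., DFIs of one age class).
    `calibration_images` is assumed to have shape (N, H, W)."""
    return calibration_images.astype(np.float64).mean(axis=0)


def embed_pattern(synthetic_image, pattern, strength=1.0):
    """Additively embed an extracted pattern into an artifact-free
    synthetic image (additive embedding is only one plausible model)."""
    embedded = synthetic_image.astype(np.float64) + strength * pattern
    return np.clip(embedded, 0, 255).astype(np.uint8)


if __name__ == "__main__":
    # Toy data standing in for real DFIs and a rendered image.
    rng = np.random.default_rng(0)
    dfis_2023 = rng.normal(2.0, 1.0, size=(10, 64, 64))   # age class 1
    dfis_2024 = rng.normal(3.0, 1.2, size=(10, 64, 64))   # age class 2
    synthetic = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

    # The same synthetic image receives the pattern of each age class,
    # so the two results differ only in the embedded artifacts/noise.
    img_2023 = embed_pattern(synthetic, extract_artifact_pattern(dfis_2023))
    img_2024 = embed_pattern(synthetic, extract_artifact_pattern(dfis_2024))
```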
Filename and Directory Structure

The following directory structure applies:
DFI filenames are encoded according to the following structure:

A CSV file is provided for DFIs and BFIs (in the respective directory), which contains all relevant metadata for each acquired image. In addition, an SQLite database exists for each imager in the directory './extracted_signals', which contains all extracted signals of the corresponding imager.

Obtaining the Dataset

To obtain the PLUSTIFDS, you have to agree to our license agreement:

coming soon...

Please download, fill in and sign the license agreement and send it to R. Joechl. Once the license agreement has been checked, you will be provided with a download link.
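Once the dataset has been downloaded, the per-directory metadata CSV files and the per-imager SQLite databases in './extracted_signals' can be read with standard tooling. The following Python sketch uses assumed file names and paths (METADATA_CSV, SIGNALS_DB are placeholders, not the actual filenames shipped with the dataset) and does not assume a particular table schema; it simply loads one metadata file and lists the tables of one signal database.

```python
import csv
import sqlite3

# Hypothetical paths; the real directory layout, CSV columns and SQLite
# schema are described in the dataset itself and may differ.
METADATA_CSV = "DFI/metadata.csv"
SIGNALS_DB = "extracted_signals/imager_01.sqlite"

# Per-image metadata (e.g., ISO setting, exposure time, acquisition date).
with open(METADATA_CSV, newline="") as f:
    metadata = list(csv.DictReader(f))
print(f"{len(metadata)} images listed in {METADATA_CSV}")

# Inspect the extracted-signal database of one imager without assuming
# a particular schema: just list the tables it contains.
with sqlite3.connect(SIGNALS_DB) as con:
    tables = con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'"
    ).fetchall()
print("Tables in", SIGNALS_DB, ":", [name for (name,) in tables])
```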