Topic of interest
Machine Learning Framework for dynamic image style evaluation
Short name: DeepQuality
Fogra no. 13.004
Project leaders: M. Wimmer (Fogra) and Prof. Dr. D. Merhof (LfB)
Partner: Institute of Imaging & Computer Vision at Aachen University
Funding: BMWi (IGF) via AiF
Timescale: 01.06.2020 - 31.05.2022
Support us with your pictures!
For our research project we need sample data. Take part and support us with your images to help shape the future of artificial intelligence! Confidential handling of your image data is guaranteed.
Selecting suitable images for a given image style (also called a briefing) from a large number of source images, e.g. from a photo shoot, always involves a great deal of time and effort for the agency or repro staff concerned.
Precisely targeted image data (original and corrected images) that are characteristic of a specific style form the basis for developing a neural network that enables dynamic image-quality assessment without a formalised (human) description of the respective taste or image style. The evaluation works as follows: arbitrary images for a given output method are automatically checked to determine whether they match a certain image style and how much effort a necessary retouching (image correction) would require, expressed as a "traffic-light system".
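The traffic-light evaluation described above can be sketched as a simple mapping from a model's style-conformance score to an effort category. The `traffic_light` function and its thresholds below are illustrative assumptions, not values defined by the project:

```python
# Hypothetical sketch: map a style-conformance score in [0, 1] (as a trained
# network might produce) to the "traffic-light" categories. The threshold
# values are placeholder assumptions for illustration only.

def traffic_light(conformance: float, green: float = 0.8, yellow: float = 0.5) -> str:
    """Classify expected retouching effort from a style-conformance score."""
    if conformance >= green:
        return "green"   # matches the style; little or no retouching needed
    if conformance >= yellow:
        return "yellow"  # partial match; moderate correction effort expected
    return "red"         # poor match; substantial retouching or rejection

print(traffic_light(0.9), traffic_light(0.6), traffic_light(0.2))
```

In practice the score would come from the trained network; the mapping itself stays deliberately simple so that reviewers can adjust the thresholds.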
The working hypothesis is that "image-to-image translation" as well as "image-to-feature translation" are capable of producing such a traffic-light system. Since decisions on the suitability of image data are already made manually by experts, sufficient training data are available. The approach further rests on the hypothesis that recognising the features relevant for gradual agreement with an image style is sufficient to apply this style successfully to new images, and thus to correct them automatically. This last step is particularly demanding and represents the greatest challenge within the project.
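To make the idea of learning a correction from original/corrected pairs concrete, here is a drastically simplified stand-in for image-to-image translation: instead of a neural network, a per-channel affine correction (gain and offset) is fitted by least squares from paired pixels. All data and function names are illustrative; a real system would use a learned network:

```python
import numpy as np

# Simplified stand-in for "image-to-image translation": fit a per-channel
# affine correction y = a*x + b from original/corrected image pairs.
# The synthetic data below plays the role of expert-corrected examples.

def fit_affine_correction(originals, correcteds):
    """Fit gain a and offset b per channel over all paired pixels."""
    x = np.concatenate([o.reshape(-1, o.shape[-1]) for o in originals])
    y = np.concatenate([c.reshape(-1, c.shape[-1]) for c in correcteds])
    gains, offsets = [], []
    for ch in range(x.shape[1]):
        a, b = np.polyfit(x[:, ch], y[:, ch], deg=1)  # least-squares line fit
        gains.append(a)
        offsets.append(b)
    return np.array(gains), np.array(offsets)

def apply_correction(image, gains, offsets):
    return image * gains + offsets

rng = np.random.default_rng(0)
orig = rng.random((8, 8, 3))                                  # "original" image
true_gain = np.array([1.2, 0.9, 1.05])
true_offset = np.array([0.05, -0.02, 0.0])
corr = orig * true_gain + true_offset                         # "corrected" counterpart

gains, offsets = fit_affine_correction([orig], [corr])
```

The fitted parameters recover the synthetic correction exactly because the toy data are noiseless; the point is only to show how paired data drive the learning, not to suggest the project's actual model.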
At the beginning of the project, a website will be created, including an upload area for transmitting the test images or image pairs (original and corrected images). Subsequently, areas of application will be discussed, which will make it possible to divide or categorise the images into suitable image styles or genres. A scheme will then be developed for the concrete "presentation" of the data to the AI networks (data normalisation), and the image data will be used for training and validation runs with different network topologies. The aim is to build up as much application understanding as possible in order to optimise the AI structure for its task.
Afterwards, a web app will be made available (and further developed) that will enable the members of the project-accompanying committee (PA) to evaluate the categorisation and style assessment of the existing AI models.