Machine Learning Framework for dynamic image style evaluation
Short name: DeepQuality
Fogra no. 13.004
Project leader: M. Wimmer (Fogra) and Prof. Dr D. Merhof (LfB)
Partner: Institute of Imaging & Computer Vision at Aachen University
Funding: BMWK (IGF) via AiF
Timescale: 01.06.2020 - 31.05.2022
Selecting suitable images for a given image style (also called a briefing) from a large number of source images, e.g. from a photo shoot, is always time-consuming for the agency or repro staff involved.
Carefully selected image data (original and corrected images) that are characteristic of a specific style form the basis for developing a neural network that enables dynamic image quality assessment without a formalized (human) description of the respective taste or image style. The evaluation works as follows: arbitrary images intended for a given output method are automatically checked for whether they match a certain image style and how much effort a necessary retouch (image correction) would require ("traffic light system").
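In its simplest form, such a traffic light system could map a style-agreement score predicted by the network to three effort categories. The following sketch is purely illustrative; the function name, the score scale, and the thresholds are assumptions, not part of the project.

```python
def traffic_light(style_score: float,
                  green_min: float = 0.8,
                  yellow_min: float = 0.5) -> str:
    """Map a style-agreement score in [0, 1] to an effort category.

    Thresholds are illustrative assumptions:
      green  -> image matches the style, no retouching needed
      yellow -> partial match, moderate correction effort
      red    -> poor match, heavy correction or rejection
    """
    if not 0.0 <= style_score <= 1.0:
        raise ValueError("style_score must lie in [0, 1]")
    if style_score >= green_min:
        return "green"
    if style_score >= yellow_min:
        return "yellow"
    return "red"

# Example: scores as they might come from a style-assessment network
print(traffic_light(0.92))  # green
print(traffic_light(0.63))  # yellow
print(traffic_light(0.21))  # red
```

In practice the thresholds would be calibrated against the retouching effort observed in the expert-labelled training data rather than chosen by hand.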
The working hypothesis is that both "image-to-image translation" and "image-to-feature translation" are capable of supporting such a traffic light system. Since experts already make these suitability decisions on image data manually, sufficient training data is available. The approach further rests on the hypothesis that recognizing the features relevant for gradual agreement with an image style is sufficient to successfully apply that style to new images, and thus to correct them automatically. This last step is particularly demanding and represents the greatest challenge of the project.
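The "image-to-feature translation" idea of measuring gradual agreement with a style can be illustrated with a deliberately crude stand-in: extract a feature vector from each image and compare it to the features of reference images for the style. Everything below is a hypothetical illustration; the project would use features learned by a neural network, not a grayscale histogram, and all names are assumptions.

```python
import math

def color_histogram(pixels, bins=4):
    """Crude 'image-to-feature' step: a normalized histogram of
    grayscale pixel values (0-255). A stand-in for learned features."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def style_agreement(features_a, features_b):
    """Cosine similarity between two feature vectors as a gradual
    (0..1) measure of agreement with a reference style."""
    dot = sum(a * b for a, b in zip(features_a, features_b))
    norm_a = math.sqrt(sum(a * a for a in features_a))
    norm_b = math.sqrt(sum(b * b for b in features_b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Illustrative data: a bright reference style vs. a bright and a dark candidate
reference = color_histogram([200, 220, 240, 210, 190])
bright    = color_histogram([195, 205, 230, 250])
dark      = color_histogram([10, 30, 25, 50])

# The bright candidate agrees with the bright reference style more closely
assert style_agreement(reference, bright) > style_agreement(reference, dark)
```

A graded score like this, rather than a hard accept/reject decision, is what allows the downstream traffic light classification and an estimate of correction effort.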