PDS-DPO

Multimodal Preference Data Synthetic Alignment with Reward Model


Singapore University of Technology and Design

Abstract

Multimodal large language models (MLLMs) have significantly advanced tasks like caption generation and visual question answering by integrating visual and textual data. However, they sometimes produce misleading or hallucinated content due to discrepancies between their pre-training data and real user prompts. Existing approaches that apply Direct Preference Optimization (DPO) to vision-language tasks often rely on strong models like GPT-4 or CLIP to determine positive and negative responses. Here, we propose a new DPO variant that leverages synthetic data from generative and reward models as proxies for human preferences to improve MLLM alignment efficiently. The resulting DPO dataset, ranging from 2K to 9K image-text pairs, was evaluated on LLaVA-v1.5-7B, where our approach demonstrated substantial improvements in both the trustworthiness and reasoning capabilities of the base model across multiple hallucination and vision-language benchmarks. The experimental results indicate that integrating selected synthetic data, such as that produced by generative and reward models, can effectively reduce reliance on human-annotated data while enhancing MLLMs' alignment capability, offering a scalable solution for safer deployment.




Method

The proposed PDS-DPO framework:

Illustration of the PDS-DPO framework
Starting with an initial text-to-image prompt, the Stable Diffusion model generates synthetic images. These images are then filtered using a reward model to exclude low-quality samples and retain only those with the highest scores. The selected images, along with their corresponding instruction prompts, serve as input for open-source MLLMs to generate responses. These responses are scored against reward-model criteria, and the scores are used to select the most suitable positive and negative pairs for DPO-based training.
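
A minimal Python sketch of this data-synthesis loop is given below. The generation and scoring stages are passed in as callables, and every helper name (generate_images, image_reward, generate_responses, select_preference_pair) is a hypothetical placeholder, not the released implementation.

    # Sketch of the PDS-DPO data-synthesis loop; the four callables are
    # hypothetical stand-ins for the Stable Diffusion, reward-model, and
    # MLLM stages described above.
    def build_dpo_dataset(t2i_prompts, instructions,
                          generate_images, image_reward,
                          generate_responses, select_preference_pair):
        dataset = []
        for t2i_prompt, instruction in zip(t2i_prompts, instructions):
            # 1. Generate candidate images with Stable Diffusion.
            images = generate_images(t2i_prompt)
            # 2. Filter: keep only the image the reward model scores highest.
            best_image = max(images, key=lambda im: image_reward(t2i_prompt, im))
            # 3. Ask open-source MLLMs for candidate responses to the instruction.
            responses = generate_responses(best_image, instruction)
            # 4. Score the responses and keep the best/worst as the DPO pair.
            chosen, rejected = select_preference_pair(instruction, best_image, responses)
            dataset.append({"prompt": instruction, "image": best_image,
                            "chosen": chosen, "rejected": rejected})
        return dataset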


Highlights


Our framework generates multiple images using Stable Diffusion and retains only the one with the highest scalar score as determined by the reward model:

Selected images by reward model
The figure illustrates image generation results using Stable Diffusion across four different guidance scales (5.0, 7.0, 9.0, 11.0), with the highest-scored image selected for each prompt based on a preference model evaluation.
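
As a concrete illustration of the image-filtering step, the sweep over guidance scales can be sketched with the diffusers Stable Diffusion pipeline. The reward model is assumed to expose a score(prompt, image) method returning a scalar; the checkpoint name and scorer interface below are illustrative choices, not necessarily the exact setup used in the paper.

    import torch
    from diffusers import StableDiffusionPipeline

    # Any Stable Diffusion checkpoint works here; v1-5 is an illustrative choice.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    def best_image_for_prompt(prompt, reward_model, scales=(5.0, 7.0, 9.0, 11.0)):
        """Generate one image per guidance scale and keep the top-scored one."""
        candidates = [pipe(prompt, guidance_scale=s).images[0] for s in scales]
        # reward_model.score(prompt, image) -> float is an assumed interface.
        scores = [reward_model.score(prompt, img) for img in candidates]
        best = max(range(len(candidates)), key=scores.__getitem__)
        return candidates[best], scores[best]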


Similar to the images, we rank the generated responses from open-source MLLMs and retain only the one that is preferred:

The figure compares preferred and dispreferred responses from multimodal models interpreting visual prompts. The preferred responses are concise and focused on relevant details, while the dispreferred ones include speculative, redundant, or unclear information.
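
Response selection can be sketched in the same way: score every candidate answer for a given image and instruction, then keep the extremes as the chosen/rejected pair. The score_response callable below is a placeholder for the reward-model criteria, and the resulting prompt/chosen/rejected records follow the column layout commonly expected by DPO trainers (e.g., TRL's DPOTrainer); the extra image field is one way the visual context could be carried along.

    import json

    def make_preference_pair(instruction, image_path, responses, score_response):
        """Rank candidate MLLM responses and emit one DPO training record.

        score_response(instruction, image_path, response) -> float is an
        assumed wrapper around the reward model's evaluation criteria.
        """
        ranked = sorted(responses,
                        key=lambda r: score_response(instruction, image_path, r),
                        reverse=True)
        return {
            "prompt": instruction,
            "image": image_path,
            "chosen": ranked[0],     # preferred: concise, grounded in the image
            "rejected": ranked[-1],  # dispreferred: speculative or redundant
        }

    def write_jsonl(records, path):
        # Serialize the preference pairs for DPO-style fine-tuning.
        with open(path, "w") as f:
            for rec in records:
                f.write(json.dumps(rec) + "\n")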


Competitive results on both vision-language and hallucination task benchmarks:


Citation


      @misc{wijaya2024multimodalpreferencedatasynthetic,
        title={Multimodal Preference Data Synthetic Alignment with Reward Model}, 
        author={Robert Wijaya and Ngoc-Bao Nguyen and Ngai-Man Cheung},
        year={2024},
        eprint={2412.17417},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
      }

Examples

  • Short-form QA: PDS-DPO gives more trustworthy answers in short-form QA.
  • Long-form QA: PDS-DPO generates more concise yet informative answers.
  • Long-form QA: PDS-DPO provides image descriptions with detailed reasoning and fewer hallucinations.