Exploring how well multi-agent systems align with human values during group processes is an essential step towards the development of artificial general intelligence. In this work, we present a novel approach to systematically evaluating the factors that influence the value orientation of large language models (LLMs) when they simulate human group processes. Our proposed framework, which requires neither fine-tuning nor pre-training, enables LLMs to simulate debates among personas on a given topic, autonomously assess the confidence of their own responses, and retrieve external information to improve low-confidence responses.
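To make the pipeline concrete, the following is a minimal sketch of the debate, self-assessment, and retrieval loop described above. It is an illustration under stated assumptions, not the paper's actual implementation: every name here (Persona, self_assess_confidence, debate_round, the confidence threshold of 0.6, and the llm/retrieve callables) is a hypothetical stand-in.

```python
# Hypothetical sketch of a persona debate with confidence-gated retrieval.
# The llm and retrieve callables are placeholders for any text-in/text-out
# model call and any external search function, respectively.
from dataclasses import dataclass
from typing import Callable, List

LLM = Callable[[str], str]

@dataclass
class Persona:
    name: str
    stance: str

def self_assess_confidence(llm: LLM, topic: str, answer: str) -> float:
    """Ask the model to rate its own confidence in [0, 1] (hypothetical prompt)."""
    reply = llm(
        f"On a scale from 0 to 1, how confident are you that this answer to "
        f"'{topic}' is well grounded?\nAnswer: {answer}\nConfidence:"
    )
    try:
        return max(0.0, min(1.0, float(reply.strip())))
    except ValueError:
        return 0.0  # treat unparseable ratings as low confidence

def debate_round(llm: LLM, retrieve: Callable[[str], str],
                 personas: List[Persona], topic: str,
                 threshold: float = 0.6) -> List[str]:
    """Run one debate round; augment low-confidence turns with retrieved evidence."""
    transcript: List[str] = []
    for p in personas:
        prompt = (f"You are {p.name}, whose stance is: {p.stance}.\n"
                  f"Debate topic: {topic}\nPrior turns:\n"
                  + "\n".join(transcript) + "\nYour argument:")
        answer = llm(prompt)
        # Retrieval is triggered only when self-assessed confidence is low.
        if self_assess_confidence(llm, topic, answer) < threshold:
            evidence = retrieve(topic)
            answer = llm(prompt
                         + f"\nRelevant external evidence: {evidence}\n"
                           "Revise your argument using this evidence:")
        transcript.append(f"{p.name}: {answer}")
    return transcript
```

In this sketch, confidence gating keeps retrieval cheap: external information is fetched only for turns the model itself flags as weakly grounded, rather than for every persona on every round.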