Large language models (LLMs) hold immense potential for general intelligence but also carry significant risks. As a research team at Peking University, we are actively developing alignment techniques for large language models, such as safe alignment, to enhance model safety and reduce toxicity.
We welcome you to follow our AI safety project: