DeepSeek Releases Prover-V2: 671B-Parameter Model to Boost Math Theorem Proving

On May 1, DeepSeek open-sourced DeepSeek-Prover-V2, a model designed for formal mathematical theorem proving, released in 671-billion-parameter and 7-billion-parameter versions. The model combines recursive proof search with reinforcement learning and performs strongly on several math benchmarks, reaching a pass rate of 88.9% on the MiniF2F test. DeepSeek also released ProverBench, a dataset of 325 problems, to evaluate the model's capabilities. Experiments found that chain-of-thought reasoning significantly improves proof accuracy, and the smaller 7B model even outperforms the 671B model on certain problems. The models are already available on Hugging Face, supporting a new paradigm in mathematical research.
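For context, formal theorem-proving benchmarks of this kind pose problems as machine-checkable theorem statements, and the model must generate a proof that a proof assistant (such as Lean) verifies. A minimal illustrative sketch of what such a task looks like (this example is for illustration only and is not taken from the benchmark itself):

```lean
-- A benchmark-style task: the prover is given the theorem statement
-- and must generate the proof after `:=` so the checker accepts it.
theorem n_add_zero (n : Nat) : n + 0 = n :=
  -- `rfl` closes the goal because `n + 0` reduces to `n` by definition;
  -- a harder problem would require a longer tactic script.
  rfl
```

The key property of this setup is that correctness is verified mechanically by the proof assistant, so a reported pass rate reflects fully checked proofs rather than human-graded answers.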