AI Roundup: OpenAI Releases GPT-4.1, Zhipu Releases the GLM-4-32B-0414 Series
China Post Securities (中邮证券) · 2025-04-23 07:54
- GPT-4.1 markedly improves coding: 54.6% on SWE-bench Verified, 21.4 percentage points above GPT-4o and 26.6 points above GPT-4.5[12][13][15]
- GPT-4.1 shows stronger instruction following: 38.3% on Scale's MultiChallenge benchmark, 10.5 points above GPT-4o[12][13][17]
- GPT-4.1 sets a new SOTA in long-context understanding: 72.0% on the Video-MME benchmark, 6.7 points above GPT-4o[12][13][22]
- GLM-4-32B-0414 was pretrained on 15T tokens of high-quality data and refined with reinforcement learning to strengthen instruction following, engineering code, and function calling[26][28][30]
- GLM-Z1-32B-0414 strengthens mathematical and logical reasoning through reinforcement learning with pairwise ranking feedback, significantly improving its ability to solve complex tasks[31][33]
- GLM-Z1-Rumination-32B-0414 targets deep reasoning and open-ended problem solving, combining extended reinforcement learning with search tools[34]
- Seed-Thinking-v1.5 adopts an MoE architecture with 200B total parameters (roughly 20B activated per token), scoring 86.7% on AIME 2024 and 55.0% on Codeforces, showing strong STEM and coding reasoning[35][37][41]
- Seed-Thinking-v1.5 trains with a dual-track reward mechanism, combining verifiable and non-verifiable data strategies to optimize model outputs[36][38][40]
- OpenAI o3/o4-mini bring visual reasoning into the chain of thought (CoT), reaching 96.3% accuracy on the V* benchmark, a major step for multimodal reasoning[42][46][48]
- Video-R1 applies the T-GRPO algorithm to add temporal reasoning to video tasks, reaching 35.8% accuracy on VSI-Bench and surpassing GPT-4o[63][65][68]
- Pangu Ultra, a 135B-parameter dense model, leads on most English and all Chinese benchmarks, rivaling larger MoE models such as DeepSeek-R1[69][73][74]
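On the MoE point: the idea behind "200B total parameters, only a fraction activated" is sparse top-k gating, where each token runs through just a few experts. The report does not give Seed-Thinking-v1.5's internals, so the sketch below is a generic toy top-k MoE layer (all names and sizes are made up for illustration), not ByteDance's implementation:

```python
import numpy as np

def top_k_moe(x, gate_w, expert_ws, k=2):
    """Toy top-k MoE layer: route a token vector x to the k experts
    with the highest gate scores, so only a small fraction of the
    model's total weights are used per token."""
    logits = x @ gate_w                      # gate score per expert
    top = np.argsort(logits)[-k:]            # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over selected experts only
    # Each expert here is a plain linear map; mix their outputs by gate weight.
    return sum(w * (x @ expert_ws[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.normal(size=d)
gate_w = rng.normal(size=(d, n_experts))
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = top_k_moe(x, gate_w, expert_ws, k=2)
print(y.shape)  # prints (8,)
```

With k=2 of 4 experts, only half the expert weights touch any given token; scaling the same pattern to hundreds of experts is how a 200B-parameter model can activate ~20B per forward pass.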
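On the "dual-track reward" point: the report describes combining verifiable data (answers that can be checked by rules) with non-verifiable data (scored by a learned reward model). A minimal dispatcher capturing that split might look like the sketch below; the function names and the stubbed reward model are hypothetical, not from the Seed-Thinking-v1.5 report:

```python
import re

def verifiable_reward(answer: str, gold: str) -> float:
    """Rule-based track: exact-match check for tasks with a checkable
    ground truth (e.g. a math answer), after whitespace/case normalization."""
    norm = lambda s: re.sub(r"\s+", "", s).lower()
    return 1.0 if norm(answer) == norm(gold) else 0.0

def route_reward(sample, reward_model=None):
    """Dispatch a training sample to the right reward track:
    samples with a gold answer go to the rule checker, open-ended
    samples go to a learned reward model (stubbed here)."""
    if sample.get("gold") is not None:
        return verifiable_reward(sample["answer"], sample["gold"])
    return reward_model(sample["prompt"], sample["answer"])

stub_rm = lambda prompt, answer: 0.7  # placeholder for a learned reward model
print(route_reward({"answer": "42", "gold": " 42 "}))             # prints 1.0
print(route_reward({"prompt": "write a poem", "answer": "..."},
                   reward_model=stub_rm))                         # prints 0.7
```

The split matters because rule-checkable rewards are noise-free and cheap, while a reward model extends RL to tasks with no single correct answer.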
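On the T-GRPO point: T-GRPO extends GRPO, whose core trick is to score each sampled answer against the mean and standard deviation of its own group of rollouts, removing the need for a learned value network. The sketch below shows only that base GRPO advantage computation (T-GRPO's temporal extension, contrasting ordered vs shuffled frames, is not shown):

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantage: normalize each rollout's reward against
    the mean/std of its own group, so no value network is needed."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against zero std
    return [(r - mu) / sigma for r in rewards]

# Four rollouts for one prompt: two correct (reward 1), two wrong (reward 0).
adv = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
print(adv)  # prints [1.0, -1.0, 1.0, -1.0]
```

Correct rollouts end up with positive advantage and incorrect ones with negative, which is the signal the policy gradient update then amplifies.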