Chinese VC firm launches AI benchmark testing real-world business value

Chinese venture capital firm Hongshan Capital Global has launched Xbench, an AI benchmarking system that evaluates models on both traditional academic tests and real-world task execution. The platform addresses a critical gap in AI assessment by testing whether models can deliver actual economic value rather than just pass standardized tests, with regular updates designed to keep evaluations current and relevant.

What you should know: Xbench takes a dual approach to AI evaluation that goes beyond conventional benchmarking methods.
• The system includes traditional academic testing through Xbench-ScienceQA, which covers postgraduate-level STEM subjects from biochemistry to orbital mechanics, rewarding both correct answers and reasoning chains.
• Xbench-DeepResearch tests models’ ability to navigate Chinese-language web research across music, history, finance, and literature with questions that require significant investigation rather than simple searches.
• Real-world readiness gets assessed through professional workflow simulations, including tasks like sourcing qualified battery engineer candidates and matching advertisers with appropriate influencers from pools of over 800 creators.
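The dual-track structure described above can be illustrated with a minimal scoring sketch. Everything here is hypothetical: Xbench's actual scoring rules, weights, and data formats have not been published, so the function names (`score_academic`, `score_task`, `leaderboard`), the reasoning-chain bonus, and the 50/50 weighting are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    correct: bool        # was the final answer right?
    reasoning_ok: bool   # was the reasoning chain judged sound?

def score_academic(results):
    """ScienceQA-style scoring (assumed): credit correct answers,
    with a bonus for a sound reasoning chain, normalized to [0, 1]."""
    if not results:
        return 0.0
    base = sum(1.0 for r in results if r.correct)
    bonus = sum(0.5 for r in results if r.correct and r.reasoning_ok)
    return (base + bonus) / (1.5 * len(results))

def score_task(completed_steps, required_steps):
    """Workflow-style scoring (assumed): fraction of required
    real-world task steps the model actually completed."""
    return len(completed_steps & required_steps) / len(required_steps)

def leaderboard(models):
    """Combine both tracks into one ranked list.
    `models` maps name -> (academic_score, task_score);
    the equal weighting is an illustrative choice, not Xbench's."""
    rows = [(name, 0.5 * acad + 0.5 * task)
            for name, (acad, task) in models.items()]
    return sorted(rows, key=lambda row: row[1], reverse=True)
```

For example, a model that completes two of three required sourcing steps in a recruitment workflow would score `score_task({"screen", "shortlist"}, {"screen", "shortlist", "contact"})`, i.e. about 0.67 on that task.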

The big picture: Hongshan Capital Global, led by partner Gong Yuan, originally developed Xbench in 2022 as an internal investment assessment tool following ChatGPT’s breakthrough success, but has now opened parts of it to public use with plans for quarterly updates.
• The team has released a leaderboard showing OpenAI’s o3 ranking first across all categories, though ByteDance’s Doubao, Gemini 2.5 Pro, Grok, and Claude Sonnet all performed well.
• For the challenging Chinese research question “How many Chinese cities in the three northwestern provinces border a foreign country?” only 33% of tested models correctly answered 12.

Why this matters: Traditional benchmarks struggle to determine whether AI models are truly reasoning or simply regurgitating training data, making real-world task assessment increasingly crucial for enterprise adoption.
• The system’s focus on economic value delivery could help businesses make more informed decisions about which AI models to deploy for specific workflows.
• Regular updates and the half-public, half-private dataset approach aim to prevent models from gaming static benchmarks.

What’s coming next: Hongshan plans to expand Xbench beyond its current recruitment and marketing categories into finance, legal, accounting, and design workflows.
• The team intends to add dimensions for creativity, collaboration between models, and reliability assessments.
• Question sets for upcoming professional categories have not yet been open-sourced.

What experts think: The approach represents progress in addressing quantification challenges in AI benchmarking.
• “It is really difficult for benchmarks to include things that are so hard to quantify,” says Zihan Zheng, lead researcher on LiveCodeBench Pro and a student at NYU. “But Xbench represents a promising start.”

