DeepSeek R1 vs DeepSeek V3: Which is better at coding?

Testing methodology and scope: A journalist at ZDNET evaluated DeepSeek’s V3 and R1 models using four established coding challenges that have previously been used to benchmark other AI models.

  • The testing framework included writing a WordPress plugin, rewriting a string function, debugging code, and creating a complex automation script (an illustrative sketch of the string-rewrite style of task follows this list)
  • Both V3 and R1 variants were evaluated against identical criteria to ensure consistent comparison
  • The assessment focused on code accuracy, functionality, and practical implementation
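
For context on what such a challenge typically looks like, here is a minimal, purely hypothetical sketch of a string-rewrite task: hardening a naive currency parser so it validates malformed input instead of failing. The function name dollars_to_cents and its exact behavior are illustrative assumptions, not ZDNET’s actual test code.

```python
# Hypothetical example of a "rewrite a string function" task; not taken from the benchmark.

def dollars_to_cents(amount: str) -> int | None:
    """Convert a string like "$1,234.56" into an integer number of cents.

    Returns None for input that is not a well-formed dollar amount,
    rather than raising an exception on malformed values.
    """
    cleaned = amount.strip().lstrip("$").replace(",", "")
    if not cleaned:
        return None
    # Allow at most one decimal point with up to two digits of cents.
    whole, _, cents = cleaned.partition(".")
    if not whole.isdigit() or (cents and not (cents.isdigit() and len(cents) <= 2)):
        return None
    return int(whole) * 100 + int(cents.ljust(2, "0"))


# Quick checks of the expected behavior.
assert dollars_to_cents("$1,234.56") == 123456
assert dollars_to_cents("7") == 700
assert dollars_to_cents("abc") is None
```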

Performance breakdown: DeepSeek V3 emerged as the stronger performer, completing three of the four challenges, while R1 completed two.

  • Both models excelled at WordPress plugin development and bug identification tasks
  • V3 demonstrated superior capability in string function manipulation, while R1 showed potential stability issues
  • Neither model could successfully navigate the complex automation script involving AppleScript, Chrome, and Keyboard Maestro integration

Technical strengths and limitations: DeepSeek’s performance suggests capabilities roughly comparable to GPT-3.5, with clear areas for improvement.

  • The code output tends toward verbosity, often including unnecessary complexity
  • Both models demonstrate strong fundamental programming knowledge
  • The platform shows impressive capabilities despite operating with reduced infrastructure compared to major competitors

Competitive positioning: DeepSeek’s performance relative to established AI coding assistants reveals promising potential in the development tools landscape.

  • The V3 model outperformed several major platforms including Gemini, Copilot, Claude, and Meta’s offerings
  • Its open-source nature provides transparency and potential for community-driven improvements
  • The platform achieves competitive results while maintaining a lighter infrastructure footprint

Future outlook: While DeepSeek shows promise as an emerging development tool, several factors will influence its adoption and evolution in the AI coding assistant space.

  • The platform’s ability to maintain competitive performance while using fewer resources suggests potential for efficient scaling
  • Continued development could address current limitations in handling complex, multi-platform automation tasks
  • The open-source approach may accelerate improvement through community contributions and transparency
Source: ZDNET, "I tested DeepSeek's R1 and V3 coding skills - and we're not all doomed (yet)"
