DeepSeek - Just how censored is it?
DeepSeek R1: Controversy, Innovation, and AI Export Controls

Recent developments surrounding DeepSeek's R1 model have sparked significant discussion in the AI industry, particularly regarding potential data distillation from OpenAI models and the implications for GPU export controls. Former OpenAI employee and current Anthropic CEO Dario Amodei has shared his perspective on these developments in a new essay.

Data Distillation Controversy

Evidence has emerged suggesting DeepSeek may have used OpenAI's model outputs in training R1:

- The R1 model occasionally self-identifies as being trained by OpenAI
- Industry leaders, including Groq CEO Jonathan Ross, suggest DeepSeek spent significant resources distilling knowledge from OpenAI's models
- The reported $5 million training cost may not reflect total R&D investment

Scaling Laws and AI Development

Amodei outlines three key dynamics in AI development:

1. Scaling Laws - Larger models consistently show linear improvements in...
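The "linear improvements" described by scaling laws are usually linear on log-log axes: loss falls as a power law in training compute, so each multiplicative increase in compute buys a roughly constant improvement. A minimal sketch of that relationship, with made-up coefficients purely for illustration (none of these numbers come from DeepSeek or Amodei's essay):

```python
import math

# Illustrative power-law scaling: loss ≈ A * compute^(-B).
# A and B are hypothetical values chosen for illustration only.
A, B = 10.0, 0.05

def loss(compute_flops: float) -> float:
    """Hypothetical model loss as a power law in training compute."""
    return A * compute_flops ** (-B)

# Equal multiplicative steps in compute give equal additive steps in
# log-loss, i.e. the curve is a straight line on log-log axes.
for c in (1e20, 1e21, 1e22):
    print(f"{c:.0e} FLOPs -> loss {loss(c):.3f}")
```

The point of the sketch is the shape of the curve, not the numbers: 10x more compute always shifts log-loss down by the same fixed amount, which is why labs keep scaling up.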