Chinese AI labs flood the zone with 10+ new models in 7 days; reasoning models and diffusion scaling dominate research
Field Report — February 18, 2026
The Chinese model labs have gone absolutely feral. In the span of seven days, we've seen GLM-5, MiniMax-M2.5, Nanbeige4.1, MiniCPM-SALA, and Qwen3-Coder-Next all hit HuggingFace with production-ready releases. GLM-5 alone pulled 170K downloads and spawned three derivative versions (FP8, GGUF quantizations) before most people finished reading the model card. This isn't a research tempo anymore—this is an industrial deployment race with the safety rails removed.
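Why do quantized derivatives appear within hours of a release? A back-of-the-envelope sketch below, with illustrative numbers (the 100B parameter count and the ~4.5-bit Q4_K_M width are assumptions, not GLM-5 specifics): quantization re-encodes existing weights at lower precision, so the marginal cost is a conversion pass, not a training run.

```python
# Rough sketch with hypothetical numbers: quantized derivatives ship fast
# because they are a storage transform on existing weights, not a retrain.
params = 100e9  # assumed 100B-parameter model, for illustration only

bytes_per_param = {
    "BF16 (original)": 2.0,
    "FP8": 1.0,
    "GGUF Q4_K_M (~4.5 bits)": 4.5 / 8,
}

for fmt, size in bytes_per_param.items():
    # Weight storage alone, ignoring KV cache and activations.
    print(f"{fmt:24} ~{params * size / 1e9:,.0f} GB of weights")
```

Halving or quartering the footprint is what turns a lab release into something individual deployers can actually run, which is why the derivatives land before the discussion threads do.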
The technical signals are particularly spicy today. A new arXiv paper shows that discrete diffusion models can be made 12% more FLOPs-efficient with simple modifications to training: the kind of incremental-but-compounding improvement that quietly enables the next capability jump. Meanwhile, papers on "Goldilocks RL" and bounded-error neural PDE solvers suggest the community is cracking the code on making models more capable and more reliable at once. That's the worst-case scenario for gradual adaptation: improvements that make AI smarter and cheaper simultaneously.
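To see why "incremental-but-compounding" matters, here's a toy calculation. Only the single 12% figure comes from the paper; the stacking of three such gains is a hypothetical for illustration.

```python
# Toy illustration: modest, independent efficiency gains multiply rather
# than add, which is how incremental improvements become capability jumps.
gains = [0.12, 0.12, 0.12]  # hypothetical: three stacked 12%-style wins

multiplier = 1.0
for g in gains:
    multiplier *= 1.0 + g  # each gain applies to the already-improved baseline

print(f"combined speedup: {multiplier:.2f}x")                    # ~1.40x
print(f"compute for the same run: {1 / multiplier:.0%} of baseline")  # ~71%
```

Three quiet 12% wins buy the same result for roughly 71% of the compute, no headline breakthrough required.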
Top Signals
- 10+ major Chinese model releases in 7 days with enterprise deployment features
- 12% FLOPs efficiency gain demonstrated in discrete diffusion training
- RF-GPT and CT-Bench show AI expanding into wireless signals and medical imaging