
Xinyu Zhou
Let's see 👀

Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞) 28/11/2025
Kimi doing better than Qwen-VL would genuinely surprise me. K2-VL would not, on the other hand (it is vastly bigger than the biggest Qwen-VL, after all), but the Qwen team has accumulated incredible experience by this point; damn, a 30B MoE competing with Gemini.
I wish them luck.
It's a Moonshot event 🚀

Kimi.ai 6/11/2025
🚀 Hello, Kimi K2 Thinking!
The Open-Source Thinking Agent Model is here.
🔹 SOTA on HLE (44.9%) and BrowseComp (60.2%)
🔹 Executes up to 200 – 300 sequential tool calls without human interference
🔹 Excels in reasoning, agentic search, and coding
🔹 256K context window
Built as a thinking agent, K2 Thinking marks our latest efforts in test-time scaling — scaling both thinking tokens and tool-calling turns.
K2 Thinking is now live in chat mode, with full agentic mode coming soon. It is also accessible via API.
🔌 API is live:
🔗 Tech blog:
🔗 Weights & code:
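The headline capability above, executing hundreds of sequential tool calls without human interference, amounts to a simple agent loop: the model emits a tool call, the harness executes it, and the result is fed back until the model produces a final answer. A minimal sketch of that loop follows; `FakeModel` and all tool names are illustrative stand-ins, not the K2 Thinking API:

```python
# Minimal sketch of a sequential tool-calling agent loop, as described in the
# announcement. FakeModel is a scripted stand-in for a real model client;
# all names here are illustrative assumptions.

TOOLS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

class FakeModel:
    """Scripted stand-in: emits tool calls, then a final answer."""
    def __init__(self, script):
        self.script = list(script)

    def step(self, history):
        # A real model would condition on the full history; the stand-in
        # just replays a fixed script.
        return self.script.pop(0)

def run_agent(model, max_turns=300):
    """Run the model/tool loop until a final answer or the turn limit."""
    history = []
    for _ in range(max_turns):
        action = model.step(history)
        if action["type"] == "final":
            return action["answer"], history
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[action["tool"]](*action["args"])
        history.append((action["tool"], action["args"], result))
    raise RuntimeError("turn limit reached without a final answer")

script = [
    {"type": "tool", "tool": "add", "args": (2, 3)},
    {"type": "tool", "tool": "mul", "args": (5, 4)},
    {"type": "final", "answer": 20},
]
answer, history = run_agent(FakeModel(script))
print(answer)        # → 20
print(len(history))  # → 2 tool calls executed
```

In a real deployment the `step` call would be a request to the model's API and the loop would carry tool results back as messages; the structure of the loop, and the turn budget that bounds it, stay the same.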

As a former competitive programmer, this gives me chills and marks my Lee Sedol moment.
Those days and nights spent solving problems, starting in confusion and sweat and ending in excitement and a sense of accomplishment, can now be easily crushed by today's language models.
It is quite bittersweet, since I am now also building the very models that can beat humans in more and more domains.
But life goes on, as always. Models will keep getting smarter and more capable, and humanity in the 21st century will find new ways to live meaningful lives alongside these models.

Jakub Pachocki 18/09/2025
Last week, our reasoning models took part in the 2025 International Collegiate Programming Contest (ICPC), the world's premier university-level programming competition. Our system solved all 12 problems, a performance that would have placed first in the world (the best human team solved 11 problems).
This milestone rounds off an intense 2 months of competition performances by our models:
- A second place finish in AtCoder Heuristics World Finals
- Gold medal at the International Mathematical Olympiad
- Gold medal at the International Olympiad in Informatics
- And now, a gold medal, first place finish at the ICPC World Finals.
I believe these results, coming from a family of general reasoning models rooted in our main research program, are perhaps the clearest benchmark of progress this year. These competitions are great self-contained, time-boxed tests for the ability to discover new ideas. Even before our models were proficient at simple arithmetic, we looked towards these contests as milestones of progress towards transformative artificial intelligence.
Our models now rank among the top humans in these domains, when posed with well-specified questions and restricted to ~5 hours. The challenge now is moving to more open-ended problems, and much longer time horizons. This level of reasoning ability, applied over months and years to problems that really matter, is what we’re after - automating scientific discovery.
This rapid progress also underscores the importance of safety & alignment research. We still need more understanding of the alignment properties of long-running reasoning models; in particular, I recommend reviewing the fascinating findings from the study of scheming in reasoning models that we released today.
Congratulations to my teammates that poured their hearts into getting these competition results, and to everyone contributing to the underlying fundamental research that enables them!