langchain-core==1.4.0
Changes since langchain-core==0.3.86: chore(infra): merge v1.4 into master (#37350); chore: bump urllib3 from 2.6.3 to 2.7.0 in /libs/core (#37329); fix(core): avoid eager pydantic.v1 import in @depr
| Date | Count |
|---|---|
| 2026-05-06 | 34 |
| 2026-05-07 | 48 |
| 2026-05-08 | 26 |
| 2026-05-09 | 14 |
| 2026-05-10 | 36 |
| 2026-05-11 | 288 |
| 2026-05-12 | 56 |
Monthly release notes for VS Code 1.120.
Monthly release notes for VS Code 1.119.
Monthly release notes for VS Code 1.118.
Monthly release notes for VS Code 1.117.
Monthly release notes for VS Code 1.116.
Monthly release notes for VS Code 1.115.
Monthly release notes for VS Code 1.114.
Monthly release notes for VS Code 1.113.
Monthly release notes for VS Code 1.112.
Monthly release notes for VS Code 1.111.
Monthly release notes for VS Code 1.110.
Monthly release notes for VS Code 1.109.
AI summary: Introduction: Using both OpenAI Codex and Claude Code, a very practical difference becomes visible beyond the simple question of "which is smarter": the sense of cost. The cost here is not just the monthly fee. How much
AI summary: A memo of three learning-efficiency prompts I use often, along with examples of actual exchanges. 1. Horizontal-expansion prompt: "Tell me three concepts of importance equal to concept X." Use case: when you have just learned one new concept in a language, framework, or architecture
AI summary: Main topic: While learning by building a basic counter app in VS Code, I ran into the following problem: a file containing HTML would no longer display when opened with the Live Server feature. Note: this is my first post and a rough memo-to-self, so
Right now, every AI model you've ever used works the same way. You talk, it listens. It responds, you listen. Thinking Machines is trying to change that by building a model that processes your input a
arXiv:2605.08314v1 Announce Type: cross Abstract: SVD-based low-rank compression reduces transformer parameters and nominal FLOPs, but these savings often translate poorly into real LLM serving speedu
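The abstract above concerns SVD-based low-rank compression. A minimal numpy sketch of the underlying idea (dimensions and rank are illustrative assumptions, not values from the paper): factor a dense weight matrix into two thin factors so one large matmul becomes two smaller ones.

```python
import numpy as np

# Hedged sketch: compress a dense weight W (d_out x d_in) via truncated SVD,
# keeping the top-r singular values. Dimensions/rank are illustrative only.
rng = np.random.default_rng(0)
d_out, d_in, r = 256, 256, 32

W = rng.standard_normal((d_out, d_in))
U, S, Vt = np.linalg.svd(W, full_matrices=False)

# Rank-r factors: A is (d_out, r), B is (r, d_in).
A = U[:, :r] * S[:r]   # fold singular values into the left factor
B = Vt[:r, :]

params_dense = W.size                 # d_out * d_in
params_lowrank = A.size + B.size      # r * (d_out + d_in)
print(params_dense, params_lowrank)   # -> 65536 16384

# The dense matmul W @ x is replaced by two smaller ones, A @ (B @ x).
x = rng.standard_normal(d_in)
y_lowrank = A @ (B @ x)
```

This is where the abstract's caveat bites: the parameter count and nominal FLOPs drop by 4x here, but two thin GEMMs are often less hardware-efficient than one large GEMM, so wall-clock serving speedups can lag well behind the nominal savings.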
arXiv:2605.09070v1 Announce Type: cross Abstract: Many jailbreak attack research papers report attack success rates for a limited number of parameter settings, even though there are many combinations
arXiv:2605.09228v1 Announce Type: cross Abstract: Most LLM benchmarks score how well a model responds to explicit requests. They leave unmeasured a different conversational ability: noticing and actin