• Tesla teases Model Y Performance trim in new video
    Tesla Model Y Performance might finally be on its way. The company launched a thoroughly revamped Model Y in January, and has gradually expanded the available trims, but one popular option was missing: the super-quick Performance trim. Now, Tesla's Europe and Middle East account has posted a...
  • Suunto has launched the Wing 2 bone conducting headphones — a runner's opinion
    A good playlist is a runner's perfect companion. Mile by mile, sometimes the only thing keeping you going is the right beats-per-minute playlist. But at the same time, there are all kinds of things you need to be aware of when running outdoors, besides your favorite...
  • WWW.MASHED.COM
    11 Old-School Dishes That Were Once Considered Fancy (But Not Anymore)
    Here are some examples of classic dishes that were once luxurious, only to fade from fashion. But, even if they're not fancy, many of them still taste great.
  • WWW.MASHED.COM
    The Victorian-Era Dessert Even Royals Couldn't Resist
    Not every dessert stands the test of time for centuries or is fit for a royal palate, but this timeless Victorian-era dessert has accomplished both.
  • WWW.BGR.COM
    New Proton Security Feature Lets Your Friends And Family Access Your Account In Emergencies
    Proton announced a new account security feature called Emergency Access that lets family and friends access your data when you can't: Here's how to set it up.
  • BLOG.JETBRAINS.COM
    Koog 0.4.0 Is Out: Observable, Predictable, and Deployable Anywhere You Build
    Featuring Langfuse and W&B Weave Support, Ktor Integration, Native Structured Output, GPT-5, and More.

    Koog 0.3.0 was about making agents smarter and persistent. Koog 0.4.0 is about making them observable, seamlessly deployable in your stack, and more predictable in their outputs, all while introducing support for new models and platforms. Read on to discover the key highlights of this release and the pain points it is designed to address.

    Observe what your agents do with OpenTelemetry support for W&B Weave and Langfuse

    When something goes wrong with an agent in production, the first questions that pop up are "Where did the tokens go?" and "Why is this happening?". Koog 0.4.0 comes with full OpenTelemetry support for both W&B Weave and Langfuse. Simply install the desired plugin on any agent and point it to your backend. You'll be able to see the nested agentic events (nodes, tool calls, LLM requests, and system prompts), along with token and cost breakdowns for each request. In Langfuse, you can also visualize how a run fans out and converges, which is perfect for debugging complex graphs.

    W&B Weave setup:

        val agent = AIAgent(
            // ...
        ) {
            install(OpenTelemetry) {
                addWeaveExporter(
                    weaveOtelBaseUrl = "WEAVE_TELEMETRY_URL",
                    weaveApiKey = "WEAVE_API_KEY",
                    weaveEntity = "WEAVE_ENTITY",
                    weaveProjectName = "WEAVE_PROJECT_NAME"
                )
            }
        }

    This will allow you to see the traces from your agent in W&B Weave.

    Langfuse setup:

        val agent = AIAgent(
            // ...
        ) {
            install(OpenTelemetry) {
                addLangfuseExporter(
                    langfuseUrl = "LANGFUSE_URL",
                    langfusePublicKey = "LANGFUSE_PUBLIC_KEY",
                    langfuseSecretKey = "LANGFUSE_SECRET_KEY"
                )
            }
        }

    This allows you to see the agent traces and their graph visualizations in Langfuse. Once everything is connected, head to your observability tool to inspect traces, usage, and costs.

    Drop-in Ktor integration to put Koog behind your API in minutes

    Already have a Ktor server? Perfect! Just install Koog as a Ktor plugin, configure providers in application.conf or application.yaml, and call agents from any route. No more connecting LLM clients across modules: your routes just request an agent and are ready to go.

    Now you can configure Koog in application.yaml:

        koog:
          openai.apikey: "$OPENAI_API_KEY:your-openai-api-key"
          anthropic.apikey: "$ANTHROPIC_API_KEY:your-anthropic-api-key"
          google.apikey: "$GOOGLE_API_KEY:your-google-api-key"
          openrouter.apikey: "$OPENROUTER_API_KEY:your-openrouter-api-key"
          deepseek.apikey: "$DEEPSEEK_API_KEY:your-deepseek-api-key"
          ollama.enabled: "$DEBUG:false"

    Or in code:

        fun Application.module() {
            install(Koog) {
                llm {
                    openAI(apiKey = "your-openai-api-key")
                    anthropic(apiKey = "your-anthropic-api-key")
                    ollama { baseUrl = "http://localhost:11434" }
                    google(apiKey = "your-google-api-key")
                    openRouter(apiKey = "your-openrouter-api-key")
                    deepSeek(apiKey = "your-deepseek-api-key")
                }
            }
        }

    Next, you can use aiAgent anywhere in your routes:

        routing {
            route("/ai") {
                post("/chat") {
                    val userInput = call.receive<String>()
                    val output = aiAgent(
                        strategy = reActStrategy(),
                        model = OpenAIModels.Chat.GPT4_1,
                        input = userInput
                    )
                    call.respond(HttpStatusCode.OK, output)
                }
            }
        }

    Structured output that actually holds up in production

    Calling an LLM and getting exactly the data format you need feels magical, until it stops working and the magic dries up. Koog 0.4.0 adds native structured output (supported by some LLMs) with a lot of pragmatic guardrails like retries and fixing strategies. When a model supports structured output, Koog uses it directly.
    Otherwise, Koog falls back to a tuned prompt and, if needed, retries with a fixing parser powered by a separate model until the payload looks exactly the way you need it to.

    Define your schema once:

        @Serializable
        @LLMDescription("Weather forecast for a location")
        data class WeatherForecast(
            @property:LLMDescription("Location name")
            val location: String,
            @property:LLMDescription("Temperature in Celsius")
            val temperature: Int,
            @property:LLMDescription("Weather conditions (e.g., sunny, cloudy, rainy)")
            val conditions: String
        )

    You decide which approach fits your use case best. Request data from the model natively when supported, and through prompts when it isn't:

        val response = requestLLMStructured<WeatherForecast>()

    You can add automatic fixing and examples to make it more resilient:

        val weather = requestLLMStructured<WeatherForecast>(
            fixingParser = StructureFixingParser(
                fixingModel = OpenAIModels.Chat.GPT4o,
                retries = 5
            ),
            examples = listOf(
                WeatherForecast("New York", 22, "cloudy"),
                WeatherForecast("Monaco", 29, "sunny")
            )
        )

    Tune how models think with GPT-5 and custom parameters

    Want your model to think harder on complex problems, or say less in chat-like flows? Version 0.4.0 adds GPT-5 support and custom LLM parameters, including settings like reasoningEffort, so you can balance quality, latency, and cost for each call.

        val params = OpenAIChatParams(
            /* other params... */
            reasoningEffort = ReasoningEffort.HIGH
        )
        val prompt = prompt("test", params) {
            system("You are a mathematician")
            user("Solve the equation: x^2 - 1 = 2x")
        }
        openAIClient.execute(prompt, model = OpenAIModels.Chat.GPT5)

    Fail smarter: production-grade retries for flaky calls and subgraphs

    It's inevitable: sometimes LLM calls time out, tools misbehave, or networks hiccup. Koog 0.4.0 introduces RetryingLLMClient, with Conservative, Production, and Aggressive presets, as well as fine-grained control when you need it:

        val baseClient = OpenAILLMClient("API_KEY")
        val resilientClient = RetryingLLMClient(
            delegate = baseClient,
            config = RetryConfig.PRODUCTION // or CONSERVATIVE, AGGRESSIVE, DISABLED
        )

    Because retries work best with feedback, you can wrap any action (even part of a strategy) in subgraphWithRetry, approve or reject results programmatically, and give the LLM targeted hints on each attempt:

        subgraphWithRetry(
            condition = { result ->
                if (result.isGood()) Approve
                else Reject(feedback = "Try again but think harder! $result looks off.")
            },
            maxRetries = 5
        ) {
            /* any actions here that you want to retry */
        }

    Out-of-the-box DeepSeek support

    Prefer DeepSeek models? Koog now ships with a DeepSeek client that includes ready-to-use models:

        val client = DeepSeekLLMClient("API_KEY")
        client.execute(
            prompt = prompt("for-deepseek") {
                system("You are a philosopher")
                user("What is the meaning of life, the universe, and everything?")
            },
            model = DeepSeekModels.DeepSeekReasoner
        )

    As DeepSeek's API and lineup of models continue to evolve, Koog gives you a simple and straightforward way to slot them into your agents.

    Try Koog 0.4.0

    If you're building agents that must be observable, deployable, predictable, and truly multiplatform, Koog 0.4.0 is the right choice. Explore the docs, connect OpenTelemetry to W&B Weave or Langfuse, and drop Koog into your Ktor server to get an agent-ready backend in minutes.
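    To try the release in a Gradle project, here is a minimal dependency sketch, assuming the artifact coordinate ai.koog:koog-agents and the 0.4.0 version string; check the Koog documentation for the exact coordinates and latest version that match your setup.

        // build.gradle.kts: minimal sketch for pulling Koog into a Gradle (Kotlin DSL) project.
        // The coordinate "ai.koog:koog-agents" and version "0.4.0" are assumptions here;
        // verify them against the official Koog docs before use.
        dependencies {
            implementation("ai.koog:koog-agents:0.4.0")
        }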
    Your contributions make the difference

    We'd like to take this opportunity to extend a huge thank-you to the entire community for contributing to the development of Koog through your feedback, issue reports, and pull requests! Here's a list of this release's top contributors:

    - Nathan Fallet added support for the iOS target.
    - Didier Villevalois added contextLength and maxOutputTokens to LLModel.
    - Sergey Kuznetsov fixed URL generation in AzureOpenAIClientSettings.
    - Micah added the missing Document capabilities for LLModel across providers.
    - jonghoonpark refined the NumberGuessingAgent example.
    - Ate Grpeliolu helped with adding tool arguments to OpenTelemetry events.
  • BLOG.JETBRAINS.COM
    ReSharper's New Out-of-Process Engine Cuts UI Freezes in Visual Studio by 80%
    Visual Studio power users love ReSharper's deep analysis, but the cost has been the occasional UI hiccup that breaks the flow of work. In ReSharper 2025.2, analysis runs in a separate 64-bit worker outside Visual Studio's UI process. Previously, ReSharper shared Visual Studio's UI process, so long analyses could stall the UI thread. Now, Visual Studio keeps repainting while ReSharper crunches.

    We tested this new approach on the Orchard Core solution. During Visual Studio launch, total UI freezes of 100 ms or longer fell from 26 s with ReSharper 2025.1.4 (in-process) to 10.1 s with ReSharper 2025.2 running out of process, a 61% reduction. The side-by-side UI-pause visualizer shows the experience during startup. Here's how we measured it.

    Testing methods

    We ran two measurements on the Orchard Core solution (about 223 projects). First, we used ETW MessageCheckDelay to detect UI freezes. For greater flexibility, we later switched to custom tooling that detects periods when the UI thread becomes unresponsive. We then summed all UI freezes of 100 ms or longer occurring during Visual Studio launch, regardless of source (a small illustrative sketch of this summation appears at the end of this post). We measured ReSharper 2025.1.4, ReSharper 2025.2 (in-process), ReSharper 2025.2 (out of process), and Visual Studio without any extension installed.

    Startup results: Visual Studio launch (100 ms or longer, all sources)

    The graph below shows cumulative UI freezes of 100 ms or longer during Visual Studio startup. For context, Visual Studio without ReSharper measured 6.3 s during startup in our lab. Deviation from our previous measurements is possible, as the tests were performed locally in different environments by different people using different methods. We are currently implementing even more optimizations for Out-of-Process mode.

    What changed under the hood

    - Most analysis now runs out of process, so heavy work no longer blocks the Visual Studio UI thread.
    - Smarter scheduling reduces contention during typing, completion, and navigation.
    - Caches and indexes live in a separate process to avoid extra work inside Visual Studio.

    Visual comparison

    Below is the side-by-side UI-pause visualizer demo, recorded in similar conditions to the previous measurements. It demonstrates opening and working in the Orchard Core solution on the same machine with both versions. Left: ReSharper 2025.1.4 (in-process). Right: ReSharper 2025.2 (out of process).

    The bars at the bottom indicate intervals when the UI thread is unresponsive. A bar turns red when an interval is 100 ms or longer. During those times, typing or clicking in the IDE has no effect. At a glance, you'll see fewer red bars on the right (out of process), indicating a smoother user experience.

    If you've seen Visual Studio warn that ReSharper is slowing down your computer, Out-of-Process mode targets the root causes behind that warning, so you should now see fewer alerts.

    Known limitations in 2025.2 (out of process)

    Out-of-Process mode still has some limitations, as it does not yet support the following functionality:

    - AI-powered features
    - Debugger integrations
    - DPA, dotMemory, dotTrace, and dotCover integrations
    - Template editor
    - Diagramming tools

    We're actively working to bring these features into Out-of-Process mode, and you can follow our progress in YouTrack. Learn more about Out-of-Process mode and how to enable it on this page.

    What we're improving next

    Right-click latency: Profiling highlighted hot spots in PsiFiles.GetPsiFiles and SqlInjectionPsiProvider.ComputeDataForFileContext.
    We've reduced the impact for 2025.2 and will continue to monitor these hot spots in real-world projects.

    Methodology and thresholds: We'll keep validating on larger solutions and may adjust freeze thresholds as we learn more.

    Try it today

    There are four ways to enable Out-of-Process mode in ReSharper 2025.2 or later:

    - From the menu, go to Extensions | ReSharper | R# Out-of-Process and select Switch to Out-of-Process mode.
    - From the menu, go to Extensions | ReSharper | Options | Environment | Products & Features and select Run ReSharper in a separate process (preview). When you click Apply, you may be prompted to restart ReSharper.
    - Use the Go to action shortcut, Ctrl+Shift+A, type Switch to Out-of-Process mode, and press Enter.
    - In the status bar (after you've enabled Out-of-Process mode at least once), click the R# Out-of-Process indicator and choose Switch to Out-of-Process mode.

    To revert, you can use any of the paths above and select Switch to In-Process mode. If none of these work, you can start Visual Studio with /ReSharper.InProcess to temporarily revert to In-Process mode and save your choice in the options.

    Power user tip: Start Visual Studio with /ReSharper.OOP to launch directly in Out-of-Process mode.

    Call for feedback

    While we are encouraged by the results of our tests, we really want to know whether the editor feels faster to you. We invite you to share frame drops, trace files, or even a quick screen-recording GIF. We read every report.

    Optionally share anonymous usage statistics

    To help us validate performance improvements at scale, you can opt in to ReSharper's Usage Statistics program. From the menu, go to Extensions | ReSharper | Options | Environment | Usage Statistics, check Participate anonymously in the Usage Statistics program, and then click Save. Only aggregate, anonymous data is sent: no project names or source code. See the Options panel for details.

    Beyond raw speed, Out-of-Process mode helps ensure ReSharper remains a first-class bridge for Visual Studio users and a stepping-stone for anyone curious about our standalone IDE, Rider.
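    To make the metric from the "Testing methods" section above concrete, here is a minimal, illustrative Kotlin sketch of the aggregation step. It assumes UI pauses have already been captured as intervals; the UiPause type and the 100 ms threshold constant are stand-ins for illustration, not our actual tooling.

        // Illustrative sketch only: sums UI pauses of 100 ms or longer, as described in
        // "Testing methods". UiPause is a hypothetical stand-in for captured pause intervals.
        data class UiPause(val startMs: Long, val endMs: Long) {
            val durationMs: Long get() = endMs - startMs
        }

        fun cumulativeFreezeMs(pauses: List<UiPause>, thresholdMs: Long = 100): Long =
            pauses.map { it.durationMs }.filter { it >= thresholdMs }.sum()

        fun main() {
            // Three pauses; only the two at or above 100 ms count toward the total.
            val pauses = listOf(UiPause(0, 250), UiPause(400, 460), UiPause(900, 1050))
            println("Cumulative UI freezes >= 100 ms: ${cumulativeFreezeMs(pauses)} ms") // 400 ms
        }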
  • FR.GAMERSLIVE.FR
    Hollow Knight Silksong: EVERYTHING you NEED TO KNOW about THE GEM of the back-to-school season!