  • BGR.COM
    Apple just released new AirPods Max firmware, and here's how to update
    A few weeks after making wired lossless audio available for the USB-C AirPods Max, Apple has released a new firmware update for its premium headphones. At the moment, it's unclear what's included in this new software, but it likely improves the lossless streaming capabilities. If you don't recall, Apple previously pulled the AirPods Max firmware update that would bring these new features to the headphones.

    So far, Apple has only released a firmware update for AirPods Max. The USB-C model has been updated to version 7E108 from the previous 7E101 firmware.

    While the USB-C AirPods Max are the same headphones as the 2020 release, the main difference is the USB-C port. In April, Apple announced it would unlock lossless audio capability when the headphones are used with a wired connection. With that, AirPods Max could stream songs in 24-bit, 48 kHz lossless audio, which preserves the integrity of original recordings. In addition, it would be possible to listen to lossless audio while still enjoying personalized spatial audio.

    Apple says users must use the USB-C cable included with AirPods Max to connect the headphones to their iPhone, iPad, or Mac to take advantage of lossless capabilities. Gamers and live streamers can also take advantage of ultra-low latency audio, with no response delay while playing or live streaming, which, according to Apple, becomes reliably smooth and even more immersive for users.

    Here's how to update your AirPods Max firmware:

    1. Make sure your AirPods are in Bluetooth range of your iPhone, iPad, or Mac that's connected to Wi-Fi.
    2. Put your AirPods in their charging case and close the lid.
    3. Plug the charging cable into your charging case, then plug the other end of the cable into a USB charger or port.
    4. Keep the lid of the charging case closed, and wait at least 30 minutes for the firmware to update.
    5. Open the lid of the charging case to reconnect your AirPods to your iPhone, iPad, or Mac.
    6. Check the firmware version again.

    We'll continue to let you know when Apple rolls out new AirPods firmware updates.
  • BGR.COM
    You can try Google's next-gen coding AI agent for free right now
    Google unveiled Jules, its AI-powered coding agent, back in December 2024. Now, just a few months later, the company is pushing the AI coding companion out to more people than ever with the launch of a public beta.

    Jules is especially exciting not only because it is meant to make coding easier, but because it works asynchronously, meaning you can keep working while it does (a generic illustration of that asynchronous workflow appears at the end of this item). The company revealed the public beta as part of its push of new AI features at Google I/O 2025, which included new AI-powered video tools and updates to its most advanced models.

    Google's Jules AI agent will code apps for you. Image source: Google

    Google says that Jules integrates directly into your current projects and repositories. It then clones your codebase to a secure Google Cloud virtual machine (VM), where it can work alongside you to perform a series of different actions, such as building new features, providing audio changelogs, bumping dependency versions, writing tests, and fixing bugs.

    Google also claims that Jules is private by default when working on private repositories, and that it doesn't train on your private code. Jules works directly in the Gemini app, so you don't need to access any new tools to experiment with it. Google has been working hard to make Gemini appealing as a coding assistant.

    https://www.youtube.com/watch?v=LUpQl3owixw

    So far, Jules has garnered quite a lot of attention from the online community, with many calling it a "Codex killer," a reference to OpenAI's coding agent. As with any AI, though, both Jules and Codex are constantly gaining new features and capabilities as Google and OpenAI improve their products.

    Google is also working on a new AI that could help Gemini develop better versions of itself.

    During its public beta, Jules will be available to everyone with Gemini access, wherever Gemini is available. Google hasn't shared any details about what the agent will cost down the line, though it says pricing will come as the platform matures. For now, you can try out Jules for yourself, though usage limits apply.
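    To make the "asynchronous agent" idea above concrete, here is a minimal sketch of the general pattern: kick off a long-running job and keep working while it runs. It is purely illustrative and assumes nothing about Jules' actual API, which this article does not document; run_agent_task() is a hypothetical stand-in for the remote work an agent would do on its cloud VM.

```python
# Illustrative sketch of an asynchronous agent workflow; NOT Jules' real API.
import asyncio

async def run_agent_task(description: str) -> str:
    # Hypothetical stand-in for a long-running agent job (e.g. "write tests").
    await asyncio.sleep(2)  # simulate remote work on a cloud VM
    return f"done: {description}"

async def main() -> None:
    # Dispatch the agent task without blocking...
    task = asyncio.create_task(run_agent_task("bump dependency versions"))

    # ...and keep doing your own work in the meantime.
    for step in range(3):
        print(f"developer keeps working, step {step}")
        await asyncio.sleep(0.5)

    # Collect the agent's result once it is ready.
    print(await task)

asyncio.run(main())
```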
  • BGR.COM
    Google debuts mind-blowing new AI-powered video tool powered by DeepMind
    We expected Google to go hard on AI this year at Google I/O, and the tech giant hasn't disappointed at all. One of the biggest new developments, at least for filmmakers, is Flow, a new AI video tool that blends several different models from Google DeepMind into one package.

    Google calls Flow an "evolution of VideoFX," an experiment the company launched in Google Labs last year. It brings everything from Google's most advanced models, such as Imagen, Veo, and Gemini, into one place.

    https://www.youtube.com/watch?v=A0VttaLy4sU

    Google says the combination of these three models in one tool allows for unprecedented levels of prompt adherence. The inclusion of Imagen allows you to bring your assets directly into Flow, so you can create characters using text-to-image generation.

    One of the biggest advancements that Flow brings to the table is scene consistency. Despite AI video tools getting better and better, consistency is still a tough thing to sort out sometimes. Google claims that Flow will make the process easier.

    Flow will also give users full camera control, so they can change the motion, angles, and even perspectives of the shot. The Scenebuilder functionality will let you seamlessly edit and extend existing shots.

    https://youtu.be/x_x-JAAKSvU?si=L-50uqxv0gPBogbI

    Over the past several months, we've seen some very promising AI video tools, including Runway Gen-4, which creates some pretty stellar shots. Whether Flow and the AI models that power it will carry the same weight remains to be seen. So far, though, some of the demo videos that Google has released have looked very promising.

    On top of launching Flow, Google also revealed new updates to its most advanced models with the arrival of Veo 3 and Imagen 4. Google claims these are the strongest models it has released for generative media.

    Veo 3 is the first time that Google's AI video model can generate audio for your clips, too, so that should be a big improvement for filmmakers working with it. The company is also expanding access to Lyria 2, its music-focused generative AI tool.

    https://www.youtube.com/watch?v=4hzFi0V0xMU

    Flow, Veo 3, and Imagen 4 are all available today. Flow will be available for Google AI Pro and Google AI Ultra plans in the US, with plans for more country support soon. US-based Ultra subscribers will have access to Veo 3 starting today, and Vertex AI enterprise subscribers can also access it today.

    Imagen 4 is the most widely available of these models, and it will be available today in the Gemini app, Whisk, Vertex AI, and across Slides, Vids, Docs, and other features in Google Workspace.
  • BGR.COM
    Google wants AI Mode to be the only way you search
    Google knows its main Search business needs to evolve. Apple's top executive recently said that Apple saw searches on Google decline for the first time in 22 years (though Google disputes this claim). The company is facing real competition for the first time in years, as more people turn to AI tools like ChatGPT, Perplexity, and others for their searches.

    With that in mind, Google announced AI Mode in Search. This end-to-end AI experience is rolling out in the US. It's described as the company's "most powerful AI search, with more advanced reasoning and multimodality, and the ability to go deeper through follow-up questions and helpful links to the web."

    AI Mode uses a query fan-out technique that breaks down a question into subtopics while simultaneously issuing multiple queries on the user's behalf (a minimal illustration of this pattern appears at the end of this item). Google says this allows Search to dig deeper into the web than traditional searches, delivering "hyper-relevant content that matches your question."

    AI Mode is where Google will bring its newest Gemini capabilities first. Over time, the company will bring many of these features into the core Search experience through AI Overviews. A custom version of Gemini 2.5 will be available in Search for both AI Mode and AI Overviews in the US.

    With AI Mode, Google Search will start offering the following features:

    - Deep Search: Get an expert-level, fully cited report in just minutes for even the toughest questions.
    - Live capabilities in Search: With Search Live, you can have a real-time back-and-forth conversation with Search using your camera.
    - Agentic capabilities: Helps you find and buy things, like "two affordable tickets for this Saturday's Reds game in the lower level."
    - AI shopping partner: AI Mode for shopping uses artificial intelligence to show how an outfit you find online would look on you, tracks its price, and helps you buy it when it fits your criteria.
    - Personal content: Google will improve how it provides personalized suggestions based on your search history.
    - Custom charts and graphs: AI Mode can analyze complex datasets and generate visuals tailored to your query.

    BGR will keep you updated on the latest Google I/O features as they're announced and rolled out to users.

    https://www.youtube.com/watch?v=sxUBThVQLjU
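    To make the query fan-out technique described above concrete, here is a minimal, illustrative sketch of the pattern: split one question into subqueries, run them concurrently, then merge the results. It is not Google's implementation; search_web() is a hypothetical stand-in for a real search backend, and the subtopics are hard-coded rather than generated by a model.

```python
# Illustrative sketch of a query fan-out pattern; not Google's implementation.
import asyncio

async def search_web(query: str) -> list[str]:
    # Hypothetical backend call; swap in a real search API here.
    await asyncio.sleep(0.1)
    return [f"result for '{query}'"]

async def fan_out(question: str, subtopics: list[str]) -> list[str]:
    # Issue one query per subtopic concurrently, on the user's behalf.
    queries = [f"{question} {topic}" for topic in subtopics]
    batches = await asyncio.gather(*(search_web(q) for q in queries))
    # Flatten and deduplicate the merged result set.
    merged = {item for batch in batches for item in batch}
    return sorted(merged)

answers = asyncio.run(
    fan_out("best lightweight travel bag", ["waterproof", "carry-on size", "under $100"])
)
print(answers)
```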
  • BGR.COM
    Google Beam uses AI to turn video calls into lifelike 3D experiences
    One of the most entertaining reveals from Google I/O 2021 was Project Starline. The research project looked to be the next evolution of video communication, utilizing several technologies to enable remote conversations with life-size, three-dimensional depictions of the participants in a video call. Four years later, Project Starline is one step closer to reality under a brand new name, with Google Beam making its debut at Google I/O 2025.

    According to Google, Beam "will use AI to enable a new generation of devices that help people make meaningful connections, no matter where they are."

    https://www.youtube.com/watch?v=OTObIPmDyjc

    Google explained that its AI volumetric video model is what makes video calls on Beam appear fully three-dimensional from any angle. It transforms the 2D video streams into 3D experiences, maintaining the illusion even as users move around the room.

    Google goes on to note that Beam is built on Google Cloud and will combine the power of its AI video model with its light field display. As a result, Google believes that Beam users will be able to make eye contact, read subtle facial cues, and build trust as if they were talking face-to-face. Clearly, businesses will be Google's primary target.

    The company also announced that it's exploring bringing its speech translation feature, which is coming to Google Meet today, to Google Beam. Your conversations will be translated in close to real-time as you speak, maintaining your voice, tone, and expression.

    Google says it's working with HP to bring the first Google Beam devices to market with select customers before the end of the year. The first products will be revealed at InfoComm 2025, which kicks off in Orlando, Florida, on June 7th. Google is also working with Zoom, Diversified, and AVI-SPL to bring the technology to businesses around the world.
  • BGR.COM
    Android XR smart glasses with Gemini are coming soon, here's what they can do
    When Google announced the Android XR platform in December, it felt a little rushed, as if Google was trying to prevent OpenAI from getting all the AI attention. OpenAI was in the middle of a two-week-long series of ChatGPT live streams, some more important than others. It was also in December that Google unveiled Gemini 2.0 and its first AI agents.

    Fast-forward to mid-May, and we've had many Gemini 2.5 Pro developments this year. But we've hardly seen any Android XR news. Neither Google nor Samsung has addressed the Project Moohan headset shown in December, and we haven't gotten any updates on Android XR smart glasses from Google, whether for AR or AI-only experiences.

    Google did say last week, during its Android 16 event ahead of I/O 2025, that Gemini will become the default assistant on many connected devices, like cars, watches, and TVs. Google also confirmed that Gemini will power Android XR gadgets, which are built around Gemini Live.

    Now that Google I/O 2025 is underway in California, we finally have more details about Android XR hardware. Google plans to build stylish smart glasses with select partners. Samsung will also produce smart glasses, alongside the Project Moohan spatial computer.

    While there are no release dates yet for the upcoming Android XR hardware, Google did finally showcase more of the Android XR features coming to these devices.

    Continue reading...
  • BGR.COM
    Google's new AI Mode is the only way I'm shopping from now on
    After Opera impressed me with a live demo of its Browser Operator agent purchasing flowers with just a command, Google is also stepping into the AI shopping era with its new AI Mode, which debuted at Google I/O 2025.

    AI Mode is Google's most advanced take on reasoning and multimodality. It breaks down questions into subtopics while issuing queries on the user's behalf. This new shopping experience combines the capabilities of the Gemini model with Google's Shopping Graph to help users browse for inspiration, weigh their options, and narrow down products.

    For instance, if you want to see how an outfit might look on you, you can upload a photo of yourself, and Google will use AI to generate a preview. Once you've found the perfect outfit, you can ask the new agentic checkout feature to purchase it for you using Google Pay when the price drops to your target.

    Google AI Mode has a new "try it on" feature for shopping. Image source: Google

    This feature helps you buy your most desired products when they fit your budget. With the "track price" feature, Google will alert you when the product you want (in the right size, color, or other preferences) becomes available. Then, you can ask AI Mode to complete the purchase with a simple "buy for me" tap (a toy illustration of this track-then-buy pattern appears at the end of this item).

    Another example of AI Mode's shopping capabilities is finding exactly what you need. If you tell AI Mode you're looking for a travel bag, it will recognize you're seeking visual inspiration. You'll get a browsable panel of images and product listings tailored to your location. You can filter options for bags that fit a specific destination and time of year, perfect for rainy weather and long trips, or sunny days and shorter getaways.

    Google says these features will roll out in AI Mode in the US in the coming months.

    https://www.youtube.com/watch?v=sxUBThVQLjU
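    As a rough sketch of the track-then-buy pattern mentioned above, the snippet below polls a price and triggers a purchase once it falls to a target. It is purely illustrative: get_price() and buy() are hypothetical stand-ins, not Google Shopping, Google Pay, or AI Mode APIs, and the price is simulated rather than fetched from a real store.

```python
# Toy illustration of a "track price, then buy" loop; not a real shopping API.
import random
import time

def get_price(product_id: str) -> float:
    # Hypothetical price lookup; here the price is simply simulated.
    return round(random.uniform(80, 130), 2)

def buy(product_id: str, price: float) -> None:
    # Hypothetical checkout call standing in for an agentic "buy for me" step.
    print(f"purchased {product_id} at ${price:.2f}")

def watch_price(product_id: str, target: float, checks: int = 20) -> None:
    for _ in range(checks):
        price = get_price(product_id)
        if price <= target:
            buy(product_id, price)
            return
        time.sleep(0.1)  # a real watcher would wait hours or days between checks
    print(f"target ${target:.2f} not reached after {checks} checks")

watch_price("travel-bag-123", target=95.00)
```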
  • BGR.COM
    Here's what you get with Google's $250/month Gemini AI Ultra plan
    In the weeks leading up to Google I/O 2025, the product name Gemini Ultra popped up a few times, and I thought this would be Google's next big AI model. We'd either get a Gemini 2.5 Ultra that's even better than the 2.5 Pro, or the Ultra would be an alternative to the expected Gemini 3 model.

    It turns out that Google did plan to unveil a new Gemini AI Ultra product at I/O, but it has nothing to do with the AI technology level we're at with Gemini. Instead, Gemini AI Ultra is now Google's most expensive AI subscription, priced at $249.99 per month.

    That's even more expensive than the $200/month ChatGPT Pro subscription OpenAI offers, and for good reason. Gemini AI Ultra includes extra perks in addition to access to Google's latest AI models and much higher limits. It comes with YouTube Premium and 30TB of storage.

    Continue reading...
  • BGR.COM
    6 new Gemini Live features Google announced at I/O 2025
    Google unveiled Project Astra last year at I/O, showing off nascent AI technology that allows mobile users to talk to Google's AI in real time using conversational language. You might ask the AI to find stuff on the web for you or share your camera and screen so it can see what you see and provide guidance.

    Some of those features are available via Gemini Live, Google's AI-powered assistant for Android and iPhone. But Google isn't stopping there. It announced several new Project Astra tricks coming to Gemini Live soon, in addition to making its best feature free for Android and iPhone users.

    Continue reading...