Engadget Feed

Meta is giving Threads on web a redesign that finally adds direct messages

1 day 19 hours ago

Meta is starting to test a long-overdue facelift for Threads on web. The company's head of Threads Connor Hayes showed off a new look for the web version of Threads that finally adds direct messaging and makes it easier to navigate between multiple feeds.

The new layout adds a bunch of new shortcuts to the site's left rail, including saved posts, insights, activity, and the ability to move between different feeds. Those features have all been accessible on web before, but many were hard to find. For example, currently the only way to get to "insights" is to navigate to your own profile or save it as a "pinned" column. Most importantly, though, the update finally adds the Threads inbox, which has not been available to web users even though the feature was added to the app last June.

It's not clear when the new look will roll out, but Hayes said Meta has already started to test it and that the company will "be investing more here going forward." The last time the Threads website got a major update was last April, which added some basic functionality. But since then, Meta has focused much of its efforts on the Threads app, rather than the website. Some newer features, like disappearing "ghost posts," can be viewed on the web but can only be created in the app.

Speaking of the Threads app, the web updates come one day after Hayes previewed some tweaks to how replies look on mobile. With the change, replies under a post will be indented slightly to make it easier to follow conversations. That change is rolling out now on iOS and currently "testing" on Android. 

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-is-giving-threads-on-web-a-redesign-that-finally-adds-direct-messages-192903284.html?src=rss
Karissa Bell

The European Commission wants Google to share search engine data with competitors

1 day 19 hours ago

The European Commission has proposed new measures for Google aimed at bringing the tech giant's search business into compliance with the Digital Markets Act. In order to allow third-party online search engines to be competitive with Google, the EC has recommended that Google permit those services to access its treasure trove of search engine data. As it stands, the proposal would require Google to let rivals see data points "such as ranking, query, click and view data, on fair, reasonable and non-discriminatory terms."

"Data is a key input for online search and for developing new services, including AI," said Teresa Ribera, the Commission's executive vice-president for Clean, Just and Competitive Transition. "Access to this data should not be restricted in ways that could harm competition. In fast-moving markets, small changes can quickly have a big impact. We will not allow practices that risk closing markets or limiting choice."

European regulators have been using the Digital Markets Act to hammer at Google's dominant market position for several years. Beginning in March 2024, Google was required to be in compliance with the DMA and it did plan some changes in accordance with the legislation. A year later, though, the Commission levied preliminary charges against Google arguing that Google Search and the Play Store had not met their obligations for market competition. Google offered some possible adjustments to how search results are displayed in response, but it seems the regulator is going to keep fighting for more robust changes to Google's search business.

If you think all that sounds like something Google is unwilling and unlikely to do, you'd be correct. For starters, the actual requirements for Google could change in the coming months. The EC is accepting comments on the proposed measures through May 1, and Google's legal team is certain to have a lot of opinions to share. We've reached out to the company for a comment on these preliminary measures. A final, binding decision on Google's next steps is due by July 27, so we're expecting a lot of back-and-forth between the parties until that date.

Update, April 17 2026, 11:36AM ET: Reached for comment, Google's Senior Competition Counsel Clare Kelly told Engadget, "hundreds of millions of Europeans trust Google with their most sensitive searches — including private questions about their health, family, and finances — and the Commission’s proposal would force us to hand this data over to third parties, with dangerously ineffective privacy protections. We will continue to vigorously defend against this overreach, which far exceeds the DMA’s original mandate and jeopardizes people’s privacy and security."

This article originally appeared on Engadget at https://www.engadget.com/big-tech/the-european-commission-wants-google-to-share-search-engine-data-with-competitors-192709530.html?src=rss
Anna Washenko

Blizzard just made Overwatch’s best mode much worse

1 day 19 hours ago

While I dabble in other Overwatch modes, I spend the vast majority of my time there in Mystery Heroes, a casual mode in which you load in as a random character and automatically switch to another one when you respawn. It's by far my favorite way to play Overwatch (which I do a lot!), since it helps me switch off and relax. Others play it as a warmup for competitive action. It requires a particular skillset, as players need a working knowledge of all 51 heroes to help them coordinate with teammates and know what the opponents have on deck. But with the arrival of the new Overwatch season this week, Blizzard changed Mystery Heroes, and made it much less fun to play. 

The developers say they modified Mystery Heroes "to help keep the mode's casual pace intact while mitigating some of the more extreme pain points it's had in the past." That might have something to do with how the mode handled perks. In other modes, players earn perks (upgrades to their existing tools or entirely new abilities) throughout a match. In Mystery Heroes, they load into a round with random perks already equipped. That's fun! I love the chaos of Mystery Heroes and trying to make things work with whatever hero and perks I have. 

Now, perks are gone from Mystery Heroes. I kind of get it. Nixing them helps players clock what's going on a bit faster — they won't feel the need to quickly check which perks they have when they're racing back to the action as time runs out. But perks were only added to Overwatch a little over a year ago, and they've helped make the game more enjoyable. Removing them from Mystery Heroes diminishes it compared with other modes, especially given that some heroes are now more oriented around their perks.

Overwatch (Blizzard Entertainment)

The other major change in Mystery Heroes is a switch from five-player teams to 6v6. I'm fine with 6v6 in other game types, but it makes Mystery Heroes much worse.

In other 6v6 formats, you typically have two tanks, two supports and two damage heroes. In Mystery Heroes, the defense might load in with four tanks and two supports (teams are limited to three heroes from any class after respawns). While tanks generally have lower health pools in 6v6 modes, that's still an oppressive composition to run into. Attackers lack the agency to switch to other characters that can counter such a setup. For instance, if the attackers don't spawn in as heroes that can get behind a chokepoint and take out those supports, they might never break through. That doesn't make for a fun round. Trying to keep five other players alive as a solo support isn't exactly a blast either.

I’m not the only one who’s disappointed with these changes. In every single Mystery Heroes game I’ve played since the update, other players have lamented the loss of perks (the reaction to the 6v6 switch is more mixed). They're protesting on Reddit and the Overwatch forums too. 

It’s not like I’m averse to change. Blizzard has made a ton of updates to greatly improve Overwatch over the last few years. It added the big Stadium mode in 2025 and the game recently had a soft, successful relaunch. Even in this season, there are lots of positive tweaks, including some welcome hero updates (and some that are less welcome) and the return of post-match awards. New damage hero Sierra is rad too. 

The Mystery Heroes changes, though, are a step in the wrong direction. I dearly hope Blizzard reverses course on those soon. 

This article originally appeared on Engadget at https://www.engadget.com/gaming/blizzard-just-made-overwatchs-best-mode-much-worse-185114683.html?src=rss
Kris Holt

Playdate Season 3 is coming later this year

1 day 20 hours ago

Playdate is getting a third season of curated, surprise games, Panic announced today. We don't know much beyond the fact that Season Three is officially happening, but Panic's Head of Playdate Greg Maletic said in an announcement video that it will be here "in time for the holidays" this year. Considering we had to wait a whole three years for Season Two to come out following Season One's release with the console in 2022, that doesn't sound so bad.

Panic hasn't yet said how many games Season Three will include, or how much it will cost. While Season One had a total of 24 games — with a release schedule of two games per week for 12 weeks — last year's Season Two had half as many (plus Blippo+), and cost $39. But that drop in quantity thankfully didn't mean a drop in quality. Season Two was great, with a collection of games that felt stronger overall than the first. I, for one, can't wait to see what Season Three brings. In other exciting news, Panic also announced today that the much, much-awaited game Office Chair Curling is finally available for purchase on Playdate and Steam, with the option for online cross-play.

This article originally appeared on Engadget at https://www.engadget.com/gaming/playdate-season-3-is-coming-later-this-year-181340117.html?src=rss
Cheyenne MacDonald

A first look at Metro 2039 shows how its Ukrainian developer turned the darkness up to 11

1 day 21 hours ago

If the real world isn’t grim enough for you, Ukrainian developer 4A Games has your back: Metro 2039 has been announced and is scheduled to arrive this winter. And based on the developer’s first look at the title, Metro 2039 looks to be an even darker affair than previous titles in the series. A tall order, but the real-world turmoil that has enveloped 4A Games since Russia’s invasion of Ukraine sounds like it has turned into a painful inspiration for the developer.

The lengthy cinematic reveal, which also contains a brief bit of gameplay at the end, doesn’t give much of the story away. But it does serve to place you right in the ruined, terrifying world of the Metro series. Metro 2039 arrives about 25 years after a nuclear apocalypse wiped out most life on the planet. The series focuses on survivors who live in Moscow’s ruined metro system. 4A says that this time out, the different underground factions have been united by a group known as “the Novoreich,” complete with a new ruler, the Spartan known as Hunter.

Despite Hunter promising “salvation and a new life” for the survivors left on the surface, things aren’t exactly rosy underground. As you might expect, this supposedly “united” society is still a complete disaster, with propaganda, authoritarian rule and violence the hallmark of the regime.

Screenshot from Metro 2039. (4A Games)

The Metro series is based on novels by Dmitry Glukhovsky, a Russian author who has been in exile since his public denunciation of Russia’s invasion of Ukraine. 4A Games says that while this new game isn’t based specifically on one of his works, it collaborated with Glukhovsky on a story for Metro 2039 “shaped by shared values of freedom and truth, and informed by the harsh realities of the world today.”

In statements from the studio, 4A directly acknowledges the conditions that Metro 2039 was created under. “Many developers continue to work from multiple locations, facing daily challenges never anticipated,” the studio says. “Through power outages, reliance on generators, and disruptions from missile and drone attacks, development has continued – driven by resilience, shared support, and a commitment to the work.”

It goes on to state: “The war has directly shaped the development of Metro 2039, with its story focused acutely on choices, actions, consequences, and the cost of securing a future. While told from a distinctly Ukrainian perspective, Metro 2039 remains an authentic Metro story.” While the Metro series has been unfailingly bleak, it’s not hard to imagine how Russia’s invasion could have influenced the storytelling coming out of a Ukrainian studio with an exiled Russian author on the story team. But the limited bit of the game we’ve seen so far doesn’t make anything too explicit.

Screenshot from Metro 2039's reveal trailer. (4A Games)

The trailer shows off the new player-character known as The Stranger, the first voiced protagonist in the series (though we don’t hear him do anything but scream in the preview). The Stranger has apparently been surviving in the above-ground wasteland but is forced to return to the metro. In the little bit of gameplay we saw, shown from the series’ standard first-person view, The Stranger heads underground and is immediately ambushed by a pretty horrific monster that he barely escapes. He’s then dragged to “safety” by a group of survivors who just manage to shut the doors to their shelter before being overrun by a larger horde. Creepy stuff.

The rest of the preview largely feels like a dream (or nightmare) sequence — but while it’s hard to put together what is going on, there’s no doubt that the detail in the environments and characters is top-notch. Given that the last Metro game, Metro Exodus, was released way back in 2019, it’s fair to say that we’re getting a more graphically impressive rendering of ruined Moscow and the tunnels beneath it.

There’s no exact release date yet, but 4A Games says Metro 2039 will arrive this winter for Xbox Series X/S, PlayStation 5 and PC.

This article originally appeared on Engadget at https://www.engadget.com/gaming/a-first-look-at-metro-2039-shows-how-its-ukrainian-developer-turned-the-darkness-up-to-11-171500713.html?src=rss
Nathan Ingraham

Google Chrome makes it easier to wrangle different tabs in AI Mode

1 day 21 hours ago

Love 'em or hate 'em, no modern browser is complete without robust tab support, and it seems Google's AI Mode is no exception. Starting today, the company is rolling out an update to users in the US that makes the tool better at interacting with and understanding tabs.

To start, the next time you use AI Mode on Chrome for desktop and click on a link, the chatbot will open a new side-by-side interface that allows you to both browse the new webpage and ask questions of AI Mode. The connection allows the chatbot to maintain the context of the search that brought you to that website in the first place. 

For instance, say you're looking for a new coffee maker to buy for your apartment. After AI Mode finds a handful of different models for you to compare, you can click on one to go to the manufacturer's website and ask additional questions of the chatbot like "how easy is this to clean?" Thanks to the expanded context window, you don't need to refer to the specific name of the model.   

Meanwhile, if you have an existing tab or group of tabs that you'd like AI Mode to factor into a new search, you can do that now too. From the redesigned Plus menu, just click the new option that's there. While you're in the Plus menu, you can also prompt AI Mode to consider other materials, including images and PDFs, alongside any relevant tabs.   

Google says that in testing, users found the integration led to less tab switching and made it easier to focus. Mike Torres, vice president of product for Chrome, said the new features represent a broader effort by Google to bring practical AI capabilities to its web browser. Torres added that the company would soon bring today's updates to more places around the world.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-chrome-makes-it-easier-to-wrangle-different-tabs-in-ai-mode-170000914.html?src=rss
Igor Bonifacic

OpenAI's latest Codex update builds the groundwork for its upcoming super app

1 day 21 hours ago

Last month, following reporting from The Wall Street Journal, OpenAI confirmed it was working on a desktop super app that would combine ChatGPT, its Codex coding agent and Atlas web browser into one cohesive experience. OpenAI is not releasing that application today. Instead, it's pushing out a major update to Codex that significantly expands what that software can do. However, the new release offers a glimpse of what OpenAI hopes to build with its latest effort.  

"We're building the super app out in the open," said Thibault Sottiaux, the head of Codex, during a press briefing held by OpenAI. "This release is about developers. In the future, we will broaden it up to a wider audience." Until then, the latest version of Codex offers developers multi-purpose AI agents that can work across a "larger surface area," while being more proactive. In practice, that translates to a host of new capabilities, starting with computer use. 

The agents inside of Codex can interact with other apps on your PC. When prompting one of OpenAI's models, you can name a specific program or let it determine the best application for the job. Computer use is available in competing apps like Claude Cowork, but OpenAI believes Codex's edge in that department is the "secret sauce" it built to let an agent run an app without bogging down your entire system, so the two of you can work in tandem. At the same time, OpenAI is releasing 111 new plugins for Codex that combine skills, app integrations and Model Context Protocol server connections to give Codex more ways to gather context and use the tools developers depend on for their work.

The company has also added a built-in browser, with a commenting system that allows you to prompt Codex to make tweaks to specific parts of a webpage or web app you're building. In the demo OpenAI showed, one member of the Codex team used this tool to instruct Codex to change the margins on a graph so that the y axis wasn't cut off. Complementing this is built-in image generation. Codex can use gpt-image-1.5 to create product concepts, mockups, frontend designs and even assets for simple games. It also allows Codex to use screenshots to verify it's on the right track with a user request.   

With today's update, OpenAI is also previewing a pair of memory features. The first allows Codex to recall context from previous tasks to inform how it goes about future prompts. According to OpenAI, with time, this will allow Codex to complete requests faster and to a higher standard. The app will also use the context it's gathered to suggest proactive actions. For example, at the start of your day, it might suggest you respond to a comment a coworker left on a Google Doc draft you wrote. 

If you want to try the updated Codex for yourself, OpenAI is starting to roll out the new version to desktop app users who are logged in with their ChatGPT account. Computer use is available to macOS users first, with availability for people in the EU and UK to follow soon. Similarly, Brits and Europeans will need to wait to try the memory features OpenAI has built into Codex.  

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-latest-codex-update-builds-the-groundwork-for-its-upcoming-super-app-170000019.html?src=rss
Igor Bonifacic

Intel launches new Core Series 3 chips for mainstream laptops

1 day 21 hours ago

Intel has unveiled its new Core Series 3 chips, the official title for its Wildcat Lake-codenamed series intended for mainstream and value-oriented laptops. Built using the same Intel 18A process as its Core Ultra Series 3 chips, they’re significantly more powerful than the previous generation and promise "exceptional battery life" and "boosted AI-ready performance."

Intel says the Core Series 3 offers up to 47 percent better single-thread performance and 41 percent better multi-thread performance, as well as 2.8x better GPU AI performance, compared to a five-year-old PC. Stacked up against its last-gen Intel Core 7 150U processor, the new mobile chip draws up to 64 percent less processor power and is capable of 2.7x the AI GPU performance. In other words, expect more grunt and improved efficiency.

At the top end of the lineup sits the six-core Intel Core 7 360, which has a P-core Max Turbo frequency of 4.8GHz and NPU TOPS performance of 17. This scales down as you move through the other six-core options, and there’s also a five-core Core 3 processor at the entry level with a more modest GPU.

Intel promises all-day battery life, rated at 12.5 hours in the office and 18.5 hours for streaming from Netflix. As for connectivity, there’s support for Wi-Fi 7, Bluetooth 6 and two Thunderbolt 4 ports. The Core Series 3 chips will be making their way into a variety of laptops throughout 2026, including Acer’s Aspire Go 14, 15 and 16, the ASUS Vivobook 14/15/17 and ExpertBook B5 Flip, B3 G2 and P3 G2. The likes of Dell, Samsung and Lenovo will announce their own Core Series 3 devices in the near future.

This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/intel-launches-new-core-series-3-chips-for-mainstream-laptops-164821846.html?src=rss
Matt Tate

Gemini can now draw on your Google data to personalize the images it generates

1 day 22 hours ago

Your Google Photos library could soon influence the kind of images you can generate with Gemini. After letting users personalize the AI assistant's responses with data from Gmail, Search and YouTube, Google says it's bringing that same "Personal Intelligence" to Nano Banana 2 to make it easier for users to create personalized images with the AI model.

The goal is to have the data affiliated with your Google account — your YouTube history, emails, Google Photos, etc. — provide context to Nano Banana 2 so you don't have to. Rather than prompting Gemini's image generation model with information about you or photos of your belongings, a direction to "create a picture of my desert island essentials" should produce an image that includes the things you care about without any extra context. Similarly, if you use labels in Google Photos to identify people or pets, you can tell Gemini to "create a hand-drawn illustration of mom," and it should be able to use Google Photos' labels to find the right reference photo and create an image of the right person.


If Gemini creates images that don't look right, you can still send a follow-up prompt to refine the result, or select a new source image from Google Photos with the "+" button. Google says you can also click the "Sources" button to view what images the AI referenced in the first place, or ask it directly for the attribution and sources used for a specific image.

Personalized user data is one of the unique advantages Google has over companies offering competing AI assistants, so expanding Personal Intelligence to an already popular feature like image generation is a natural way to build on that lead. For now, this more personalized version of Nano Banana 2 is available in the Gemini app for eligible AI Pro and AI Ultra subscribers. Google says the feature will come to Gemini in Chrome and other users "soon."

This article originally appeared on Engadget at https://www.engadget.com/ai/gemini-can-now-draw-on-your-google-data-to-personalize-the-images-it-generates-160000269.html?src=rss
Ian Carlos Campbell

The first real trailer for the Street Fighter movie is filled with crowd-pleasing moments

1 day 23 hours ago

We finally have a real-deal trailer for the upcoming Street Fighter movie, after a short teaser dropped at The Game Awards last year. This is nearly three minutes of fighting, silly dialogue and, of course, Easter eggs from the games.

To the latter point, there's a scene of Ken beating up a car like in the bonus stages from Street Fighter II and footage of Ryu powering up one of his famous Hadoken fireballs. There's even a cheeky reference to Chun-Li's notoriously large and powerful thighs. This is all helped along by the fact that the actors all look very silly and mostly accurate to the games.

The plot looks to be fairly standard for this type of adaptation. There's a big, important fighting tournament and Chun-Li is recruiting people from around the globe, acting like the franchise's Nick Fury or something. Ken and Ryu are beefing, M. Bison is involved in a criminal conspiracy (big surprise) and everyone else is punching and/or making snarky asides. It looks campy as hell, which is a good thing.

Street Fighter is directed by Kitao Sakurai, who made the film Bad Trip and was heavily involved with The Eric Andre Show. It hits theaters on October 16.

The cast is actually stacked. Noah Centineo and Andrew Koji lead the film as Ken and Ryu, but Jason Momoa is playing Blanka and Curtis '50 Cent' Jackson is portraying Balrog. Other actors involved include David Dastmalchian, Callina Liang, Cody Rhodes and Orville Peck.

This is the third attempt at a live-action Street Fighter adaptation. The 1994 film is famous for Raul Julia's iconic performance as M. Bison and 2009's Street Fighter: The Legend of Chun-Li is famous for being very bad.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/the-first-real-trailer-for-the-street-fighter-movie-is-filled-with-crowd-pleasing-moments-153145868.html?src=rss
Lawrence Bonk

Meta isn't setting its Oversight Board free just yet

1 day 23 hours ago

The Oversight Board — the policy body Meta created to weigh its most impactful moderation rulings — has seen its role within Mark Zuckerberg's empire come into question due to shifting content policy priorities and dwindling investment. The Oversight Board has taken steps to formalize its long-contemplated desire to work with other companies, but Engadget has learned Meta has thus far declined to move forward with that process. 

Over the last year, board members have become increasingly interested in artificial intelligence policy and how their experience shaping Meta's content rules could translate into advising companies in the generative AI space. That interest has intensified as some AI companies have privately signaled they would be open to working with the board, according to a source familiar with the organization who was not permitted to speak publicly. The board began talks with Meta last fall about the possibility, which would require the company to sign off on changes to the legal documents that govern the board's operations. But Meta officials have not indicated whether the company is willing to make those changes, which would likely require approval from top executives. 

Platformer, which first reported on Meta's budget negotiations with the Oversight Board, noted that the company "has long encouraged the board to seek additional funding sources." So far, no other company has publicly shown interest in working with the group, though the board has had conversations with other firms behind the scenes. 

Oversight Board co-chair Paolo Carozza told Engadget in December that there had been "really preliminary" discussions between the board and AI companies, though he declined to name which ones in particular. "It feels like quite a different moment now, largely because of generative AI, LLMs, chatbots [and] the way that a variety of retail-level users of these technologies are facing a whole new set of challenges and harms that's attracting a lot of scrutiny," he said at the time. 

Meta has readily agreed to amend the board's governing documents in the past — like when the trust that controls the Oversight Board's budget funded a new organization to mediate content moderation disputes in Europe. While Meta executives once promoted the idea of its ostensibly independent Oversight Board working with other social media platforms, the prospect of the group working with a competitor as it pursues AI superintelligence is apparently more complicated. 

Over the last five years, board members have received briefings from officials at Meta about the inner workings of its moderation systems and other non-public details as part of their work with the company. That raises practical questions about how the board would safeguard Meta's proprietary information, as well as larger strategic questions about whether Meta would want its Oversight Board to work with some of the companies it's now fiercely competing with, the source said. It's not clear how invested Meta's current leadership is in ensuring a future for the board. Former president of global affairs Nick Clegg, who was one of the most vocal champions of the board's work, left the company last year.

Meanwhile, other board members have publicly made the case that the group, which consists of free speech and human rights experts from around the world, is well-positioned to guide AI companies grappling with an increasing number of real-world harms. When Anthropic published a "Claude Constitution" earlier this year, the board published a lengthy analysis from member Suzanne Nossel arguing that Claude also needed the kind of "oversight" the board has provided for Meta. She made a similar argument for the wider AI industry in an op-ed in The Guardian last month.

While Nossel denied that she was directly pitching the Oversight Board to Anthropic, she said that AI companies face many of the "same dilemmas" as social media platforms. "When the board was first created, there was the notion that we might work across the industry," she told Engadget. "Now, as the world shifts toward an AI-centric paradigm, we're very interested in what our experience can bring to that conversation." 

Oversight Board members, who naturally have a vested interest in expanding their purview, aren't the only members of the industry who have warned that generative AI platforms are essentially speed-running social media companies' playbook. A former OpenAI researcher recently wrote that "OpenAI Is Making the Mistakes Facebook Made," citing the AI company's moves toward optimizing for engagement and its plans for in-app advertising. The researcher cited Meta's Oversight Board as an example of the kind of independent governance that's needed in the AI industry.

The question of working with other companies has taken on new urgency as the Oversight Board faces the possibility that it will lose its backing from Meta. In a statement, a Meta spokesperson pointed to previous reports that Meta has committed to funding the board through 2028 and said that "nothing has changed." But a source familiar with the board tells Engadget that Meta has so far only handed over half of the smaller tranche of 2028 funds to the board amid ongoing discussions about its future, including whether it will expand its purview beyond Meta. 

There are also very real questions about how the Oversight Board fits into Meta's current strategy around content moderation. Zuckerberg announced last year that Meta was shifting away from most proactive moderation, ending fact-checking in the United States and rolling back hate speech rules. Zuckerberg himself reportedly led the push for these changes following a meeting with then-President-elect Donald Trump. The Oversight Board, which Meta has sometimes asked to advise on major policy changes, was not consulted. The company recently said it plans to reduce the number of human moderators in favor of AI-based systems.

"The Oversight Board is currently engaged in meaningful discussions with Meta regarding its future and the evolution of its model to ensure the organization can address the most urgent emerging challenges in AI governance, standards, and accountability," an Oversight Board spokesperson said in a statement. "At this time, no decisions have been made about the Board’s future, and the organization’s day-to-day work and mandate remain unchanged.”

Critics have long said that the board, which has received more than $280 million from Meta, moves far too slowly. In a little more than five years of operation, the board has published more than 200 decisions about specific moderation issues, which Meta is required to uphold. Those decisions — a tiny fraction of the millions of requests it receives — can take months, though the board can opt to move more quickly. The board has also made hundreds of policy recommendations, which Meta has to respond to but isn't required to implement. The company has agreed to at least some changes in response to 75 percent of recommendations, according to the board. 

For the Oversight Board, working with a company besides Meta would begin to address some of the challenges it now faces. It would boost the group's credibility at a time when Meta seems to be re-evaluating its relationship with the board, and it would open up the possibility of new sources of funding. But the situation underscores another long-simmering tension when it comes to the role of the "independent" oversight organization. Meta has always been in control of how much influence the group can actually have. And it's not clear that the company is ready to let the board, which has spent the last five years learning the minutiae of Meta's content moderation and policy processes, advise the companies it's now competing with.

During its work with Meta, the Oversight Board has weighed in on its rules for AI several times. The board has criticized the company's "manipulated media" policy that governs deepfakes and other content, which led to Meta adopting new rules around AI labeling. In its most recent decision dealing with AI, the board urged Meta to invest in better AI detection tools and to collaborate more closely with other platforms. The company has not yet formally responded to those recommendations. 

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-isnt-setting-its-oversight-board-free-just-yet-153000172.html?src=rss
Karissa Bell

Meta Quest headset prices are going up on April 19

1 day 23 hours ago

The RAM crisis has prompted another company to jack up hardware prices. Meta says it will be increasing the price of Quest headsets on April 19. The Meta Quest 3 will get a $100 hike to $599, while the Quest 3S will be $50 more expensive at $350 (for a version with 128GB of storage) and $450 (256GB).

Meta is blaming the increases on the rising costs of RAM, which has skyrocketed in price due to a shortage of chips as AI companies gobble up as much memory as they can for their data centers. Sony recently bumped up the prices of PS5 consoles and the PlayStation Portal handheld for similar reasons. Microsoft made its Surface PCs more expensive this week too.

Meta Quest accessories are staying at the same prices, but refurbished Quest units are somehow getting more expensive as well. Refurbished Quest 3S units will also be $50 more at $320 (128GB) and $410 (256GB). Meta is increasing the price of a refurbished Quest 3 by $100 to $550. I’m not exactly sure how the company can pin those changes on increased manufacturing costs. Meanwhile, Meta told The Verge that it doesn’t expect to increase the prices of its smart glasses anytime soon.

Correction April 16, 2026, 11:28AM ET: This story initially stated that the price of a refurbished Quest 3 is increasing by $170. It’s going up by $100. We regret the error.

This article originally appeared on Engadget at https://www.engadget.com/ar-vr/meta-quest-headset-prices-are-going-up-on-april-19-143259031.html?src=rss
Kris Holt

Anna's Archive told to pay Spotify and record labels $322 million over unprecedented music scraping

1 day 23 hours ago

The open-source library and search engine Anna’s Archive has been ordered to pay Spotify and three of the world’s largest music labels $322 million in damages after it claimed to have scraped the entirety of the streaming platform’s library of music.

Spotify, Universal Music Group, Warner Music Group and Sony Music Entertainment sued Anna’s Archive in January for a slightly comical $13 trillion. They alleged Anna's Archive had illegally scraped 86 million songs — a significant chunk of all the music on the planet — and intended to make them available for download via BitTorrent. At the time, Spotify called the scraping a "brazen theft of millions of files containing nearly all of the world’s commercial sound recordings."

In a since-deleted blog post, Anna's Archive stated the scraping was an act of preservation. Still, a New York federal judge sided with the plaintiffs after the archive's anonymous operator failed to respond to the lawsuit.

The court order finding Anna's Archive liable for direct copyright infringement, breach of contract and violation of the Digital Millennium Copyright Act (DMCA) was filed on April 14. A further claim of violation of the Computer Fraud and Abuse Act (CFAA) was dismissed by the judge.

The total breakdown of damages includes $7.5 million to each of Sony and Universal Music and $7.2 million to Warner Music, with the remaining $300 million going to Spotify. The latter figure amounts to $2,500 for each of the 120,000 scraped music files already made available by Anna’s Archive. The remainder of the 86 million files were due to be released to the public at a later date.

The court also ordered Anna’s Archive to "immediately destroy all copies and phonorecords of any work ‘scraped,’ downloaded, copied or otherwise extracted from Spotify," but whether it actually does this, or indeed hands over a penny of the damages, remains to be seen. The bizarre reality of this case is that the person (or people) behind Anna’s Archive remains a mystery.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/annas-archive-told-to-pay-spotify-and-record-labels-322-million-over-unprecedented-music-scraping-151034032.html?src=rss
Matt Tate

Spotify debuts a new UI just for tablets

2 days 1 hour ago

Spotify has a new look today for listeners on tablets. The streaming service’s updated tablet UI now provides adaptive orientation, switching between portrait and landscape layouts rather than simply resizing the interface when changing how the device is held.

Spotify's tablet app now sports a collapsible sidebar so listeners can take advantage of their larger screen space when watching a music video or podcast. Parallel browsing lets you continue to scroll through the app while a video or lyrics are in the sidebar, and the "switch to video" toggle has been made more prominent.

The new design had appeared for some users earlier this year during tests. The final version is rolling out today for both iPad and Android devices.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/spotify-debuts-a-new-ui-just-for-tablets-130000533.html?src=rss
Anna Washenko

Canva starts previewing a more powerful version of its AI assistant

2 days 1 hour ago

Adobe isn't the only company releasing a new AI assistant this week. Ahead of its Create event in Los Angeles today, Canva announced Canva AI 2.0. Building on its existing AI assistant, the company is billing the release as its most significant update since the platform first launched in 2013, and the culmination of years of investment to build its own foundational design models. 

As you might imagine, it all starts with a conversational interface that allows you to describe an idea or goal and the system will start generating a design to match. Under the hood, there's a new orchestration layer that allows the model to use all of Canva's disparate tools to accomplish complex, multi-step tasks. For instance, the company suggests you could use Canva AI to create a multi-channel advertising campaign, and the software will generate everything you need to get that off the ground. 

For brands, Canva AI 2.0 can adapt to their design needs. (Canva)

If edits are required, the company says Canva AI avoids one of the pitfalls of many other image generation models. It's possible to edit every visual element the system generates, just like if they were created with a traditional image editor. As a result, you can do things like swap out images and tweak fonts without affecting any other part of a design. To bring everything together, Canva has built persistent memory into the tool. The more you use Canva AI, the better the system will get at applying your personal taste and style to future generations. According to the company, it also has a context window that is long enough to maintain coherence until you arrive at a final design.    

Alongside those enhancements, Canva is adding support for new workflows that expand what you can do with its software, starting with connections that allow its models to pull data from other apps, including Notion, Slack, Zoom, Gmail, Google Calendar and more. Users can also schedule tasks for Canva AI to complete in the background, and the company has even baked deep research capabilities into the tool.

The coding function Canva previously offered has been upgraded to include support for HTML imports, allowing users to bring any HTML file or AI-generated experience into Canva's visual editor to tweak the design of it without breaking things. For brands, the company is also offering a tool that can process their visual identity and apply it to new and existing designs.   

Canva's updated coding agent now supports HTML imports. (Canva)

As a casual observer, it might seem like Canva is trend chasing, but Danny Wu, the company's head of AI, argues the new AI tools represent a natural evolution for Canva. "This is something we've been dreaming of and working towards for quite a while," he tells Engadget. "Even before ChatGPT was a thing, we were thinking, 'what if we don't have a template that matches your needs?' … So I wouldn't describe this as a pivot or shift, we've been wanting to offer these kinds of capabilities all along as part of our mission to make design simple."

If you want to give Canva's new tools a try for yourself, Canva AI 2.0 is available as a research preview starting today. The first 1 million people who visit the Canva website will get first access, with availability gradually expanding to more users over the coming weeks. As before, access to Canva’s AI features remains included in the company’s free offering, though it’s also introducing a new AI Pass add-on that significantly increases rate limits for users.

This article originally appeared on Engadget at https://www.engadget.com/ai/canva-starts-previewing-a-more-powerful-version-of-its-ai-assistant-130000966.html?src=rss
Igor Bonifacic

DJI Osmo Pocket 4 review: The only vlogging camera you'll ever need

2 days 2 hours ago

DJI’s Osmo Pocket 3 was a category-defining gimbal camera. Two years since its launch, everyone from vloggers to pro filmmakers continues to upload how-to guides and gushing reviews to YouTube. When the Osmo Pocket 4 landed at the FCC at the end of 2025 (followed by a credible leak), creator forums and Reddit threads started to chatter with excitement. Over the following months the Pocket 4 leaked again and again, to the point where there’s very little that someone with a passing interest and an internet connection doesn’t already know about the camera. But DJI chose today to give us the official reveal, so we’re here with the full review which, remarkably, does contain some surprises. 

What’s new

For those who were waiting for official, confirmed specs and information, here’s a rundown of the headline new features of the Osmo Pocket 4. The camera is still 4K, but comes with an updated 1-inch CMOS sensor that DJI says is good for another two stops of low light performance (for a total of 14). The camera retains the 20mm equivalent, f/2.0 lens but squeezes in an improved max framerate of 240 fps (up from 120 fps) for up to 10x slow-mo. The Pocket 4 can also shoot in full, high dynamic range 10-Bit D-Log, upgraded from the more lightweight D-Log-M available on the Pocket 3. Shutter speeds are now expanded and go right down to 1/4 for extreme light effects. 

Hardware changes are few, but do include two new buttons below the 2-inch display. One is a dedicated zoom button; the other can be assigned a function from a selection of common tasks — rotating the gimbal, toggling recording presets and so on. You can assign up to three different controls to this button via single, double and triple clicks. There’s also 107GB of internal storage. You can still use SD cards, but you don’t need to if you don’t want to.

That zoom, DJI states, is good for 2x “lossless” zoom while shooting in 4K and 4x in 1080p. The Pocket 3’s 2x Mid-Tele zoom had to be activated first, but now you can use lossless zoom any time and/or while using ActiveTrack face-tracking. It’s available in Portrait mode, too, but you’ll need to have the screen in the horizontal position to access the buttons, which means your viewfinder/preview will be teeny-tiny as it’s rescaled for 16:9.

DJI Osmo Pocket 4 (James Trew for Engadget)

DJI has added on-camera “Film Tones” which are similar, functionally, to film simulations seen on Fujifilm cameras. There are six to choose from at launch and include subtle and not-so-subtle stylized color tones that apply different “moods” to your videos without having to manually color grade or use a LUT after the fact. As for still images, there’s an on-screen button for “Live” photos similar to what you might find on an iPhone. Live photos were sorta-kinda possible on the Pocket 3, but they are a little bit easier this time around.

A lot of DJI drones include Gesture Control, which lets you start/stop recording and engage ActiveTrack from a distance, and that’s new on the Pocket 4 too. 

On the audio side of things, the Pocket 4 now has “audio zoom,” so if you have two people in a scene and do a close up on one of them, the volume of their voices will be boosted. It’s a little crude, but it could be handy in certain situations. The Pocket 4 can also record spatial audio via the three onboard microphones, good for live music and other situations where sound placement might matter. 

Lastly, the Pocket 4 has a modular component. At launch, there’s a magnetic fill light that clips onto the gimbal and can be configured via the camera menus. It’s included in the creator combo and opens the door for other modular accessories, though it’s limited to things that can sit on the gimbal without causing problems. A shotgun-style microphone, for example, could be possible.

The display and controls on the Osmo Pocket 4 (James Trew for Engadget)

Battery life also gets a slight boost over the Pocket 3 with a 1,545mAh cell — which is almost a 20 percent increase. That translates to an extra 30 minutes or so of recording time for an average of two and a half hours at 4K, more if you shoot in lower resolutions or are using the camera for photos.

What we don’t see here, an item that you might have been hoping for, is any type of optical zoom. What’s more, the max resolution in vertical mode remains capped at 3K. You still have to rotate the camera if you want full-sensor, 4K video in portrait.

Video quality

The popularity of the Pocket series is thanks to its combination of high-quality video and a portable form factor. The Pocket 4 builds on this winning formula with exceptional quality for the camera’s size. The new 1-inch sensor is noticeably more detailed than the Pocket 3 and DJI’s claim of improved low light performance is backed up by stellar results. I took the Pocket 4 out at night and it bested its predecessor with far more dynamic range and better exposure in shadowed areas that come out dark or fuzzy on the Pocket 3. 

Image performance in general is impressive and a definite strong point for a camera of this size. Colors now look more natural than ever without looking over-saturated. Similar shots on the Pocket 3 look a little flatter when viewed side by side. I like that the f/2.0 aperture still provides some light bokeh, and when combined with the new D-Log mode, there’s plenty of scope for cinematic shots. These would be harder to achieve with a phone and don’t require the setup and planning of a mirrorless camera. 

With the extended shutter speeds you can get some interesting effects — dramatic light trails in traffic, for example — but it’s going to overexpose any other light source in your shot. So, proceed with caution. The Pocket 3 bottomed out at 1/25, but the Pocket 4 goes right down to a dramatic 1/4. 

The 2x lossless zoom surprised me. At first, I was skeptical about DJI’s claims of it being lossless, but it does seem to maintain visual quality without noticeable loss of detail. Though if you want to use that 4x zoom in 4K, expect to see some digital artifacts. The Pocket 4’s 20mm lens is particularly suited to wider, vlog-style shots, so a usable zoom is a welcome addition. It’s worth noting that it’s better used for static and tripod shots, as any gimbal movement while keeping a subject in frame can feel like steering a ship.

Film tones

Until now, if you were aiming for a more cinematic style, you had to get comfortable shooting in D-Log-M and boning up on color-grading. DJI provided some filters in the Mimo app for a quick and dirty way to add a mood or vibe to your videos, but that still caused some friction in the workflow. The new film modes are on camera, so achieving something more stylized is now just a menu tap away. I’ll be honest, I’m not a huge fan of the selection available right now as they’re either too hot or too cold. Of the six, Warm and Movie seem the most usable for cozy-style landscapes or B-roll cityscapes. 

DJI hasn’t shared much about whether these are just on-camera filters or true film simulations. Movie and Retro, at least, were already available as filters in the app. If the full effect is too strong, you can dial down the intensity, but that’s the extent of the control. Their addition here expands what you can get out of the camera without using the app or having to drag things over to your editing software. It’s unclear if we’ll see more options in the future, but they’re there if you need them.  

New buttons

One of my main complaints with the Pocket cameras was the zoom. More specifically, controlling it with the joystick. It always looks slow, inconsistent and a bit amateur when zooming in manually. The new button provides an instant punch-in that can be used for an intentional, attention-drawing effect. I can’t count the number of times I’ve ruined a shot because I thought I had the joystick set to zoom, but it was still assigned to panning (you had to toggle its use via an on-screen button). With the physical button, I can close in on a target instantly and never worry about accidental pans.

The button layout on the Osmo Pocket 4 (James Trew for Engadget)

The second, customizable button is also a real usability upgrade. If, like me, you’re constantly recentering the gimbal, you’ll know that the usual double-click on the joystick is often unreliable. Now you can assign that action to the button plus two more controls from a selection of common actions. I have it set so double-click switches to one of my manual recording presets and triple-click locks the gimbal so I no longer have to jump into the main menu to switch gimbal modes. It even works while recording if I spontaneously decide I want to keep my horizon level.

Changing what this button does is simple: Long-press it and it’ll jump into the settings where you can choose its functionality. There’s still scope for some refinement, as although a double click can instantly start recording with my preferred settings, clicking again doesn’t stop it. You have to use the record button. This makes some sense, but I’m used to using the same button to stop/start recording, so intuitively I thought that might be the case here. Sadly not.

Audio upgrades

Something a little unexpected in the Pocket 4 is the addition of spatial audio. Using the three built-in microphones, the theory is you should be able to hear where sounds are coming from — though you’ll need headphones on for the effect to work. In practice, it does create a different audio ambience, one where sounds feel more relative to their location, but it comes at a price. If you speak to the camera, even if you’re nearby, your voice will sound distant and muddled so spatial audio is something you’ll want to use intentionally and certainly not as a default setting.

The same is true for that audio “zoom.” To be fair to DJI, I’ve never found an audio zoom I truly liked. You can’t capture better audio than what the microphone is receiving, so amplifying it in any way isn’t going to improve it beyond what you can do with editing software. In a pinch, this might help with interviews when you have multiple speakers, no external microphone and need to publish quickly, but I’m reluctant to recommend it for anything else.

You can get an Osmo Pocket 4 bundle with a DJI lapel mic (James Trew for Engadget)

The new “Vocal Boost” is a more useful option under the Pro settings menu. When activated, it enhances voices by lowering background noise and other sounds. Again, it’s not a fix for getting good source audio, but in noisy run-and-gun vlogging environments, it can improve your chances of capturing something useful with just the internal microphones.

Fortunately, DJI has a much better solution that was already a feature of Pocket cameras — native connectivity with its wireless microphones. The Creator Combo now includes a single DJI Mic 3 transmitter and charging cable, and it’s the absolute best way to get YouTube-ready audio from the camera. One nice tweak with the Pocket 4 is that you can now export videos with both the built-in and external mic audio as one 4-channel file. Open this in your video editor and you can mix and cut between mic and ambient audio without having to deal with separate files as before. 

The competition

The fact that there’s no real direct competition for the Pocket series is surprising. For true, like-for-like gimbal cameras, expect to find alternatives from brands you’re less familiar with — such as Agfaphoto or Feiyu. Most of the nearest competition will be action cameras like the GoPro Mission 1 or Insta360 Ace Pro 2. Both of these are great portable cameras with solid stabilization, but they unsurprisingly favor that wide, bright and sharp action-style footage. The Pocket 4’s nearest rival for stabilized vlog-friendly filming is still the Pocket 3.

This raises the question of whether the Pocket 4 (£445) is worth it over the more affordable Pocket 3 (£389) at launch. (DJI can’t directly sell the Pocket 4 in the US, so official prices are in British Pounds or Euros.) Both are great, all-purpose, vlogging cameras versatile enough for recording in a variety of situations — though less suited to rugged/action filming thanks to the delicate mechanical gimbal. It’s likely that the price difference between the two will expand after the launch window. 

The Osmo Pocket 4 flipped down and powered off (James Trew for Engadget)

Wrap-up

The Pocket 4 might not bring defining new features like optical zoom or higher resolution, but it’s a better camera in every way that matters. There are also several quality of life improvements that make it incredibly compelling. For the extra money, you’re getting better image quality that will pay you back over time. The new buttons make the camera even more convenient and that onboard storage alone effectively closes the price gap — not to mention the huge convenience that feature alone brings with it.

Hardcore fans might have been hoping for more “dazzle” with the Pocket 4. In reality, DJI delivered a camera that builds on an already winning formula in ways that actually matter: higher quality video, improved usability, modular capabilities and longer battery life. It’s hard to argue with that.

This article originally appeared on Engadget at https://www.engadget.com/cameras/dji-osmo-pocket-4-review-the-only-vlogging-camera-youll-ever-need-120000374.html?src=rss
James Trew

Anthropic will ask Claude users to verify their identities 'for a few use cases'

2 days 2 hours ago

Anthropic has started rolling out identity verification on Claude “for a few use cases.” The company didn’t list those use cases in its announcement, but we’ve asked it for details and will update this post when we hear back. Anthropic says you might see a verification prompt upon “accessing certain capabilities,” asking you to verify your identity. You would have to show a valid, physical government-issued photo ID. You’d also have to take a selfie with your phone or computer camera that the system will compare against the ID you present. 

The news, as you’d expect, wasn’t well-received. Many users are questioning the necessity of identity verification to be able to use an AI chatbot, especially if Anthropic already has their credit cards on file as paying subscribers. People are also criticizing Anthropic’s decision to use Persona Identities, which also provides age verification services for OpenAI and Roblox. One of Persona’s major investors is venture firm Founders Fund, which was co-founded by Peter Thiel, who’s also the co-founder and chairman of surveillance company Palantir. 

Palantir’s customers are mostly federal agencies and government offices, including the FBI, the CIA and US Immigration and Customs Enforcement. Most criticisms against the company center around the services it provides those customers, as they’re mainly used to expand government surveillance using its facial recognition and AI technologies. 

In its announcement, Anthropic said that Persona will be the one handling your IDs and selfies. It will not copy and store those images. It also said that Persona is “contractually limited” in how it can use your data and that all data passing through its process is “encrypted in transit and at rest.” Anthropic emphasized that it will not use your identity data to train its models and that it will not share your data with anyone else. 

Update April 16, 2026, 11:35AM ET: Reached for comment, an Anthropic spokesperson told Engadget that "this applies to a small number of cases where we see activity that indicates potentially fraudulent or abusive behavior, which violates our usage policy."

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-will-ask-claude-users-to-verify-their-identities-for-a-few-use-cases-115754092.html?src=rss
Mariella Moon

Amazon MGM's 2026 theatrical slate includes 'Highlander' and 'Spaceballs: The New One'

2 days 4 hours ago

Fresh off the box office success of Project Hail Mary, Amazon MGM Studios has announced its theatrical release lineup for the next year. Most of the titles aren't likely to hit the highs of the Ryan Gosling starrer which has grossed $515 million in theaters. However, there are a number of promising releases like Spaceballs: The New One and Highlander starring Henry Cavill, both sequels to '80s films. Another is The Sheep Detectives with Hugh Jackman, the trailer for which has been a hit on YouTube.

Earlier this year, Amazon MGM promised to release up to 14 films in theaters over the next year and leave them to run for as long as 45 days — a far cry from its previous policy of releasing just a few films for several weeks at most. That strategy is paying off so far. "Four months. Four films. Over $670 million at the box office. And we have nine more on the way,” said Amazon MGM's head of domestic theatrical distribution, Kevin Wilson. 

The company said that it's not about volume, but impact. "We are building films that give audiences a reason to leave the house. Films with scale. Ambition," Wilson said. Looking at the slate, though, some of those films are likely to be hits and some not so much. 

First up is The Sheep Detectives set to arrive on May 8th. You may scoff at the title, but the trailer has racked up 20 million views and mainly positive comments. It looks like fun, family-friendly fare and stars a popular actor, so one could easily see this being a hit for Amazon MGM.

Masters of the Universe is next up on June 5, 2026. "Director Travis Knight brings the world of Eternia to life on a massive scale with stars Nicholas Galitzine, Camila Mendes, and Idris Elba," Amazon explains. The film is based on the Mattel toy franchise and animated series so again, it could be another magnet for kids and their nostalgic parents. 

How to Rob a Bank is a heist comedy with a solid cast including Nicholas Hoult, Zoë Kravitz, Anna Sawai, Pete Davidson, and John C. Reilly, set to arrive on September 4. That's followed by Verity (October 2, 2026), based on the Colleen Hoover novel, and I Play Rocky, Peter Farrelly's Sylvester Stallone biopic about the production of the 1976 film Rocky.

2027 starts with The Beekeeper 2 (January 15) starring Jason Statham, followed by The Thomas Crown Affair (March 5, 2027) directed by and starring Michael B. Jordan. Spaceballs: The New One, a sequel to the classic Mel Brooks movie, arrives on April 23, 2027 with Rick Moranis, Josh Gad, Keke Palmer, Lewis Pullman, Daphne Zuniga, Bill Pullman, and Mel Brooks. 

Other films expected but without release dates yet include The Chosen: Crucifixion, A Colt is My Passport, Your Mother Your Mother Your Mother and Highlander starring Henry Cavill based on the 1986 cult classic.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/amazon-mgms-2026-theatrical-slate-includes-highlander-and-spaceballs-the-new-one-094505690.html?src=rss
Steve Dent

YouTube now lets you hide Shorts

2 days 5 hours ago

You now have the power to remove short-form videos from your YouTube feed if you don’t want to see them. YouTube has rolled out the capability to set your Shorts feed limit to zero minutes, which could help you stop doomscrolling, at least on mobile. The video-sharing platform originally launched a Shorts feed limit back in October last year, but the lowest option you could choose was 15 minutes. Once 15 minutes are up, you’ll get a pop-up reminding you to take a break.

Earlier this year, it integrated the feature with parental controls, allowing guardians to set time limits for younger users. YouTube said back then that parents will soon see the option to set the timer to zero. Now, the Shorts timer is live not just for parents, but for all users. We can confirm that we’re now seeing the zero minutes option in our (adult) account and were able to activate it for ourselves. When you select it, you may see a notice that says “Scrolling is paused but you may still see individual Shorts.” You may also have to refresh your app before short-form videos disappear from your feed.

To stop Shorts from showing up for you, go to the Settings page in the YouTube app for mobile. Look for “Time management” and scroll down to “Daily limits,” where you can find the “Shorts feed limit” section. If you don’t want to get rid of Shorts altogether, you can choose from any of the other options, with two hours being the maximum time available.

This article originally appeared on Engadget at https://www.engadget.com/apps/youtube-now-lets-you-hide-shorts-085538825.html?src=rss
Mariella Moon

Opera adds Browser Connector for integrating AI chatbots

2 days 6 hours ago

Opera is offering a new choice for looping in an AI assistant during browsing. Today, the company introduced Browser Connector, which allows Opera One and Opera GX users to integrate either ChatGPT or Claude into the platform. The chatbots will be able to access page content while a person is browsing and will draw context for queries from the information in your open tabs. The free new feature can be enabled through the AI Services section of the Settings menu. 

Opera is one of the many browser companies that have been experimenting with AI-focused services. It began rolling out the $20-a-month Opera Neon agentic AI browser last year. The benefit of something like Browser Connector is that you aren't limited to a single brand's product offerings and can switch things up at will.

This article originally appeared on Engadget at https://www.engadget.com/ai/opera-adds-browser-connector-for-integrating-ai-chatbots-080000153.html?src=rss
Anna Washenko