Samsung says the Galaxy S26 isn't a smartphone, and that's a dangerous gamble

Android Police

If you've been keeping an eye on Samsung's websites or social media feeds ahead of Galaxy Unpacked on February 25, you may have noticed a theme.

If you haven't been keeping up, then you can probably guess what it is. That's right, Galaxy Unpacked 2026 will be all about AI. And Samsung isn't being subtle about it.

At the end of its Galaxy S26 teaser videos, a short animation plays where a circle is drawn around the word "smart" in "smartphone," and after a few seconds of watching the Galaxy AI logo flash, the text changes to "AI phone."

That's right, everything you wanted is coming. The Samsung Galaxy S26 won't be a smartphone; it'll be an AI phone.

What's that? You aren't excited? Weren't you convinced by Now Brief's blindingly obvious suggestions or Now Bar's limited app support? You don't think Smart Suggestions' irrelevant suggestions are helpful? Gosh!

Sarcasm aside, Samsung's Galaxy AI has produced mixed results. Now Bar is better than it was at launch, and Circle to Search is genuinely useful.

Still, there's a lot that needs to be done here, which is why Samsung's plans for the Galaxy S26 are doomed to fail.

Samsung promises a future of agentic AI

Samsung is keen to get ahead in the AI arms race

Buried within Samsung's fourth-quarter earnings report in January (via The Korea Herald) was the following line:


MX (Mobile Experience) will expand sales centered on flagship products with the launch of Galaxy S26, and strengthen leadership in the AI smartphone market through an agentic AI experience.


The first segment is pretty standard. "We will expand sales by launching new phones" is a statement so basic it can barely be called a strategy.

However, the second line is far more interesting due to the phrase "agentic AI experience."

If you haven't come across the phrase before, agentic AI is AI that exhibits enough autonomy and goal-oriented decision-making to act on your behalf as an agent.

The earliest examples we saw of agentic AI were for mundane tasks like booking a hotel. You say, "I want to book two rooms in a 5-star hotel with a beach view on these dates," and the AI not only finds options but also completes the booking process for you.

This level of autonomy is a vital step for the future of AI because, unlike with most existing tools, you don't need to provide oversight.

However, early examples like OpenAI's Operator struggled with crucial tasks like navigating websites.

Nevertheless, it's seen as the next big step for AI; Google has been developing Gemini's agentic AI to start operating apps on your behalf (ironically, this is probably what will power Samsung's agentic AI).

Agentic AI is thus an obvious step for Samsung. But what does that mean for our smartphones?

Samsung's agentic AI could let you perform complex tasks without touching your phone

It's a step towards the AI we see in movies

Our phones have been able to perform tasks on our behalf for years.

Back when digital assistants like Google Assistant were the Big Thing, the ability to send texts, check the weather, or answer questions was a much-touted functionality. However, these assistants were limited to basic commands.

Let's say you're at the bar with friends, and you're ready to go home. You pull out your phone, open the Uber app, select your home, choose your ride, and wait.

But with an agentic AI, you can say "Book me the cheapest Uber to arrive in 10 minutes to take me home" to your phone and (in theory) you won't need to check your phone until you get the notification that your ride has arrived.
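To make that concrete, here's a purely illustrative sketch of what an agent handling that request might do behind the scenes: interpret the goal, filter the available options against the constraint, pick the cheapest, and book it without any taps from you. Every function name here is a hypothetical stand-in, not a real ride-hailing or Samsung API.

```python
# Illustrative only: a toy "book the cheapest ride within N minutes" agent.
# find_rides and book_ride are hypothetical stubs standing in for real app actions.

def find_rides(destination):
    # Stand-in for querying a ride-hailing app; returns canned options.
    return [
        {"id": "ride-1", "price": 12.50, "eta_minutes": 8},
        {"id": "ride-2", "price": 9.75, "eta_minutes": 9},
        {"id": "ride-3", "price": 15.00, "eta_minutes": 4},
    ]

def book_ride(ride_id):
    # Stand-in for confirming the booking on the user's behalf.
    return {"status": "confirmed", "id": ride_id}

def agent_book_cheapest(destination, max_wait_minutes):
    """Act on the goal end to end: filter, choose, and book."""
    options = [r for r in find_rides(destination)
               if r["eta_minutes"] <= max_wait_minutes]
    if not options:
        return None  # a real agent should report back and ask for guidance
    cheapest = min(options, key=lambda r: r["price"])
    return book_ride(cheapest["id"])

print(agent_book_cheapest("home", 10))  # books ride-2, the cheapest within 10 minutes
```

The point of the sketch is the shape of the loop, not the stubs: the hard part on a phone is everything hidden inside `find_rides` and `book_ride`, where the agent has to drive a real app reliably.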

If done properly, agentic AI could offer significant value to Samsung's smartphones.

However, I think that Samsung is making a mistake trying to be first out the door with a fully integrated agentic AI on its smartphones.

Agentic AIs aren't smart enough for our smartphones

I certainly wouldn't trust one to act on my behalf

Agentic AIs add convenience to our lives. We don't have to waste time filling out forms and navigating web pages when an AI can do it for us.

However, agentic AIs aren't reliable enough yet to operate without constant observation.


This is fine on a desktop browser, where you can delegate a task to an AI in one window and keep an eye on it while you work in another. On desktop, we're also used to tasks taking longer.

Our smartphones are designed to keep us only a couple of taps away from the information we need, and slowing that down with an agentic AI will cause problems.

Let's return to my earlier example of booking an Uber. To perform this task, the agentic AI would need to run the app in the background or open the app and perform the tasks in the same way you would do it manually.

The former method immediately runs into problems of compatibility. What if an app can't perform these actions in the background? What happens when the already strained memory of our phones is put under the pressure of running multiple apps at once?

The latter method means that we can't use our phones while the AI is running, and at that point, we might as well do it ourselves.

Samsung is betting big on agentic AI, but I'm not convinced it's ready

A future where an AI operates apps independently may not be far off. For now, though, I think implementing it effectively and unobtrusively on a phone is borderline impossible.

For agentic AI to become a major selling point for smartphones, we need to be able to wholly trust it to operate our phones without any oversight.

And if the past few years of observing AI agents have taught me anything, it's that you always need to watch them.