What is Mobile 2.0?

Today, ten years after the iPhone launched, I have some of the same sense of early constraints and assumptions being abandoned and new models emerging. If in 2004 we had ‘Web 2.0’, now there’s a lot of ‘Mobile 2.0’ around. If Web 2.0 said ‘lots of people have broadband and modern browsers now’, Mobile 2.0 says ‘there are a billion people with high-end smartphones now’*. So, what assumptions are being left behind?

Source: Mobile 2.0

Benedict Evans makes some great points in this piece, and it got me thinking about how I would characterize both Web 2.0 and Mobile 2.0 if I were asked to do so.

I think I’d say the following:

  • Web 2.0 was the conversion from flat, link-based experiences to application-like experiences.

  • Mobile 2.0 will be the conversion from primarily using applications to using digital assistants via voice and text. In other words, it’ll be the transition from direct interaction to brokered interaction, with the brokers being your digital assistant and chat bots.

As I write about in The Real Internet of Things, this brokering is inevitable for several reasons. Here are two of them:

  1. Computers (i.e., digital assistants) will be able to interact on our behalf with the hundreds or thousands of daemons that surround us, whereas humans will not be able to.
  2. Voice and text (and eventually gesture, eye-tracking, and neural interfaces) are far more natural than poking at glass in different ways for different apps. The human will just express desires, and it’ll be up to the tech to sort it out, as opposed to the human explicitly poking buttons in the way demanded by the app.

Benedict does make a great point about a limitation of voice in replacing applications. If you have 20 applications on your mobile device, how are you supposed to remember them all? And how are you supposed to use them all with pure voice and text?

I think the answer will come from a combination of high-quality digital assistants that make quality assumptions about what you want to do (and thus require you to be explicit less often), and advances in eye-tracking that can let you make selections from options more naturally than poking glass.

But his point stands: those advances are a while off. If we can’t remember everything Alexa can do for us, that same problem will follow us on mobile as we try to move to voice. Icons on glass will become reminders that you have the functionality as much as anything else.

Summary

  1. Web 2.0 was the transition from links to web apps.
  2. Mobile 2.0 is the transition from poking glass to naturally expressing desires to digital assistants and bots.
  3. Because it’s hard to remember all the different capabilities of a strong digital assistant, there will still be a use case for displaying functionality—in whatever form—at least for the foreseeable future.

Notes

  1. Even further out is the digital assistant using deep context to anticipate desires, curate choices, and otherwise remove the need for exhaustive choice selection.
  2. I love his comment about visual sensors vs. cameras, which I also talk about in my book. The idea applies to all kinds of sensors, and all kinds of machine learning algorithms. The game is sensor to algorithm. Humans looking at a snapshot is going to be extremely old thinking soon.
  3. If you’re not subscribed to Benedict’s newsletter, I recommend it strongly. It served as the inspiration for the reboot of my own newsletter, especially around the simple text-based design.

__

I do a weekly show called Unsupervised Learning, where I collect the most interesting stories in infosec, technology, and humans, and talk about why they matter. You can subscribe here.

