A few snippets of news that are worth your attention this week…

The unveiling of the A12 Bionic chipset in the new iPhone XS models and the sensor behind the electrocardiogram in the new Watch were, for us, the real news at last week’s event.

1. The A12 can run 5 trillion operations per second on its 8-core Neural Engine, versus 600 billion on the A11’s 2 cores. This is very significant for Face ID, for better photos and video, and for what will be needed for VR/AR, Apple’s next truly big thing.
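
Taking the widely reported figures (5 trillion ops/s on the A12’s 8-core Neural Engine, 600 billion ops/s on the A11’s 2 cores), a quick back-of-the-envelope comparison shows where the gain comes from:

```python
# Back-of-the-envelope comparison of the reported Neural Engine throughputs.
a12_ops, a12_cores = 5e12, 8    # 5 trillion ops/s on 8 cores
a11_ops, a11_cores = 600e9, 2   # 600 billion ops/s on 2 cores

total_speedup = a12_ops / a11_ops                          # overall jump
per_core = (a12_ops / a12_cores) / (a11_ops / a11_cores)   # per-core jump

print(f"{total_speedup:.1f}x overall, {per_core:.1f}x per core")
# → 8.3x overall, 2.1x per core
```

In other words, roughly a doubling of per-core throughput multiplied by four times as many cores.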

It is also core to the extension of battery life.

Samsung is playing catch-up in neural chips; Google is relying on Intel in its latest Pixel but has its tiny Edge TPU coming on stream for IoT devices. Huawei’s Kirin 980 neural chip impresses, however.

2. Apple’s FDA-cleared healthcare features in the Watch were fascinating…

“The Apple Watch Series 4 will include a more advanced heart-monitoring technology called electrocardiogram. Like most wearables, it monitors heart rate using green LED lights embedded in the device. The lights reflect on the skin to detect the pulse and changes in blood volume; this is turned into the heart rate number. 

On the Series 4, users put their finger on the digital crown. The Watch passes a current across the chest to track electrical signals in the heart, directly, which is far more accurate than interpreting based on pulse. The process is meant to take about 30 seconds and the user will receive a heart rhythm classification. Normal rhythms will be classified as “sinus” rhythm, and the Watch will also classify irregular rhythms, such as atrial fibrillation.”
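
The optical method described above amounts to counting pulse peaks over time. A minimal sketch of that idea, with a synthetic waveform standing in for real sensor data (the sampling rate and threshold here are illustrative, not Apple’s):

```python
import math

def heart_rate_bpm(signal, fs):
    """Estimate heart rate by counting peaks in a pulse waveform.
    signal: list of samples; fs: sampling rate in Hz."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]
             and signal[i] > 0.5]  # threshold rejects small ripples
    if len(peaks) < 2:
        return 0.0
    # average interval between successive peaks, in seconds
    avg_interval = (peaks[-1] - peaks[0]) / (len(peaks) - 1) / fs
    return 60.0 / avg_interval

# Synthetic 75 bpm pulse (1.25 Hz) sampled at 50 Hz for 10 seconds.
fs = 50
sig = [math.sin(2 * math.pi * 1.25 * t / fs) for t in range(fs * 10)]
print(round(heart_rate_bpm(sig, fs)))  # → 75
```

The ECG approach sidesteps this inference entirely by measuring the heart’s electrical activity directly, which is why it is more accurate.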

This is a prelude to what is in store.

Our take…

The last ten years have been about the creation of a rich and addictive mobile lifestyle full of digital dopamine fixes.

The next ten years will see equally dramatic changes as our physical environments are repurposed to respond to our voice, gut and anxiety cycles.

This starts with voice recognition…

Alexa as the “Computer” in Star Trek: a voice-activated intelligence that constantly talks, learns, guides and surprises us in our homes.

It will soon be about face, body, mind and personality recognition: environments that communicate with every facet of our being that it is possible to track.

There are already inklings of the shape of this world:

  • Alexa and Google Home
  • The medical surveillance technology of Sentrian
  • The smile-as-you-pay tech in unmanned stores
  • The giggly charming Pepper social robot

Every environment we step into will be humming and hypertuned to our presence.

In the home, this intelligence will be in constant communication with us as it monitors our heart rate, the microbial mix in our gut, our anxiety levels.

It will splash entertainment across the walls – from holograms to virtual assistants as we cook.

This is quite a large engineering project (with plenty of investment opportunities that we have been discussing with Signum partners).

Right now, we are building a comprehensive system of sensors, tags, responders and readers in every environment. This is the business of rendering: modelling and interpreting every object, image, wave and sound.

The next stage is to open up as many channels of communication as possible between people, robots, vehicles and servers.

We will need near-zero latency between all these devices.

They will have to respond to voice, movements, temperature, blood chemistry: receiving information and learning to reduce the noise of each signal.
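
One common way a device “reduces the noise” of a signal is exponential smoothing, which blends each new reading with a running estimate. A minimal sketch with illustrative values (the readings and smoothing factor are invented for the example):

```python
def smooth(samples, alpha=0.2):
    """Exponential moving average: each output blends the new sample
    with the running estimate, damping one-off sensor noise."""
    out = []
    est = samples[0]
    for x in samples:
        est = alpha * x + (1 - alpha) * est
        out.append(est)
    return out

# e.g. heart-rate readings with one noisy spike at 90
noisy = [70, 74, 69, 90, 71, 73, 68]
print([round(v) for v in smooth(noisy)])
```

The spike is damped rather than passed straight through, which is exactly the learning-to-ignore-noise behaviour these environments will need.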

3. Apple is best seen as a developer of luxury consumer items with pricing power over up to a billion cult followers who, as Scott Galloway puts it, see Apple products as life devices that define them as cool, educated and elite, and thus as better mating material than Galaxy owners.

Discussion point: Is Apple poised to do a super Udacity and enter the education business?

A threshold has been crossed 

Voice recognition is one of the great breakthroughs of the last decade. 

You only have to look at the recent Google Duplex video to see how far we’ve come…

This exchange is the culmination of decades of research into natural language processing and artificial intelligence. 

Audrey, a machine created by Bell Labs in the fifties, was one of the first machines to successfully determine speech patterns. 

Audrey could recognise the numbers 0 – 9 when its inventor spoke to it.

IBM’s Tangora, developed in the 1980s, required slow, deliberate speech and no background noise, but could recognise up to 20,000 words after around 20 minutes of training on a speaker’s voice.

The real breakthrough has only come in the last few years… 

By training artificial intelligence on millions of voice messages and prompts, the tech giants have managed to recognise human speech in conversation with close to human-level accuracy.

It’s a huge investment story… 

By 2020, more than half of Google search queries will be spoken rather than typed, predicts ComScore.

The market for ads delivered in response to voice queries will be $12 billion, according to Juniper Research. 

Aside from the market for voice search, ComScore sees a ‘wearables’ market of $16 billion within 5 years, with bean-sized earbuds that translate, filter out ‘noise’ and let us control our devices by voice.

Is this an improvement? 

Will we really stop using our beloved smartphones? 

Well, AI scientist Andrew Ng makes the entirely valid point that humanity was never designed to communicate by using our fingers to poke at a tiny little keyboard on a mobile phone.

“Speech has always been a much more natural way for humans to communicate with each other.” 

We can speak a lot faster than we type. 

And as we talk, the voice will learn more about us.  

It will become smarter. 

It will collect the daily information about our lives – what we eat, what we watch, the conversations we have with our family – and gradually it’ll start to understand us.

This is the bargain the likes of Amazon, Google and Apple are asking us to make. 

Give up the mundane details of your life. 

Give up on privacy. 

And we’ll feed you entertainment…conveniences…we’ll get to know you…we’ll listen. 

It sounds like a disturbing deal. 

But it’s one people are already making.

In China, a tabletop robot called Rokid (pictured), which has an alluring voice and impressive conversational skills, is proving enormously popular.

And SoftBank’s jokey Pepper robot is getting backing from Alibaba and Foxconn to go global.

When people in the UK were asked recently what they wish their smart speaker could do, the answers were very personal…


“Want help telling jokes”. 

“Want help to be funnier and more attractive”. 

These are basic sentiments.
And the tech giants now have the technology to exploit them.

We’ll keep you up to date on the most promising developments (and the companies being left behind) from this platform.

In the meantime, you can contact us here if you have any burning questions.