Why Snapchat’s New Update is Actually Quite Amazing
You may not realize this, but when Snapchat first burst onto the scene, it wasn’t exactly an impressive feat of engineering. What made Snapchat a hit was the simplicity of the idea and how much fun you could have with its self-destructing messages. In terms of programming, though, it was an inefficient app that would eat up your CPU even when it wasn’t in use.
Then Facebook offered to buy the company for $3 billion, and everyone was incredibly surprised when CEO Evan Spiegel said no. For a while, it looked like a big mistake.
But now Snapchat is back. The lenses that came with the 2.0 update have been setting the world on fire, and the code on display here is absolutely incredible. More so than you may realize. Read on to find out why.
Why the Lenses Are Amazing
To activate a Snapchat lens, all you need to do is tap on your own face and hold for a second. A mesh then appears over your facial features, mapping your face so you can see the filters applied in real time.
And what you’ll find is that this is actually quite amazing. Why? Because mapping a face in real time, on a phone, takes some seriously impressive engineering.
The way this works is through something called ‘computer vision’. That means the software is able to look at an image of your face and map the contours of your features: working out what expression you’re pulling, which direction you’re looking, how far away you are from the camera and more.
This, by the way, is incredibly impressive.
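To make that a little more concrete, here is a minimal sketch of the same basic idea using the open-source dlib and OpenCV libraries. This is not Snapchat’s actual code: the 68-point model file (shape_predictor_68_face_landmarks.dat) has to be downloaded separately from dlib.net, and “selfie.jpg” is just a stand-in image name. The sketch finds a face in a frame and marks the landmark points around the eyes, nose, mouth and jaw, the same kind of ‘mesh’ you see flash up in the app.

```python
import cv2
import dlib

# Pre-trained models: dlib's HOG face detector plus its 68-point landmark
# predictor (the .dat file must be downloaded from dlib.net separately).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# "selfie.jpg" is a placeholder image name for this sketch.
frame = cv2.imread("selfie.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Step 1: find the face in the frame.
for face in detector(gray, 1):
    # Step 2: fit 68 landmark points to the eyes, brows, nose, mouth and jaw.
    landmarks = predictor(gray, face)
    for i in range(68):
        point = landmarks.part(i)
        # Draw each landmark; together they form the "mesh" you see in the app.
        cv2.circle(frame, (point.x, point.y), 2, (0, 255, 0), -1)

cv2.imwrite("selfie_mesh.jpg", frame)
```

Snapchat obviously does something far more sophisticated (and does it on every frame of live video), but the core task is the same: locate the face, then pin a set of points to its features.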
What is Computer Vision?
Another example of computer vision is the way an Xbox Kinect is able to track your body in space, or the way a self-driving car detects what’s in front of it and behind it.
But in both of those cases, the devices use additional sensors that can scan the depth of the environment, such as infrared projectors and multiple-camera arrays that build a depth map of the scene.
Snapchat, by contrast, is using only a single camera. And it isn’t even a particularly good camera: the lenses work with even a low-megapixel front-facing camera.
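That single-camera constraint is what makes this clever. There is no depth sensor to ask “how far away is this face?”, so distance and orientation have to be inferred from the flat 2D image alone. One common trick (a rough sketch of the general idea, not Snapchat’s actual method) is to take a handful of the detected 2D landmarks, match them against a generic 3D face model, and let OpenCV’s solvePnP work out how the head is rotated and how far it sits from the camera:

```python
import cv2
import numpy as np

# Generic 3D face model (rough reference values in millimetres, nose tip at
# the origin). These are widely used approximations, not a real measured face.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left corner of left eye
    (225.0, 170.0, -135.0),    # right corner of right eye
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def estimate_head_pose(landmarks, frame_width, frame_height):
    """landmarks: the 68-point dlib result from the previous sketch."""
    # The matching 2D points, picked out of the 68 detected landmarks.
    image_points = np.array([
        (landmarks.part(30).x, landmarks.part(30).y),  # nose tip
        (landmarks.part(8).x,  landmarks.part(8).y),   # chin
        (landmarks.part(36).x, landmarks.part(36).y),  # left eye corner
        (landmarks.part(45).x, landmarks.part(45).y),  # right eye corner
        (landmarks.part(48).x, landmarks.part(48).y),  # left mouth corner
        (landmarks.part(54).x, landmarks.part(54).y),  # right mouth corner
    ], dtype=np.float64)

    # With no calibration data, approximate the camera: focal length ~ frame
    # width, optical centre in the middle of the frame, no lens distortion.
    focal_length = frame_width
    camera_matrix = np.array([
        [focal_length, 0, frame_width / 2],
        [0, focal_length, frame_height / 2],
        [0, 0, 1],
    ], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))

    # solvePnP recovers how the 3D model must be rotated and translated to
    # line up with the 2D points, i.e. head orientation and distance.
    success, rotation_vec, translation_vec = cv2.solvePnP(
        MODEL_POINTS, image_points, camera_matrix, dist_coeffs)
    return rotation_vec, translation_vec
```

The translation vector tells you roughly how far the face is from the lens, and the rotation vector tells you which way the head is turned, all recovered from a single ordinary image with no depth hardware at all.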
To give you an idea of how much of a breakthrough this is, the technology is impressive enough to potentially change the way we interact with virtual reality. With a Gear VR you can currently look around but not walk around, because the headset has no sensors capable of detecting whether you’re moving forward or backward. Palmer Luckey has said that this kind of positional tracking would never be possible with a regular phone.
But using the kind of technology available right now and on display in Snapchat, the Gear VR could do exactly that with the phone’s camera.
So, in short, this technology is right at the cutting edge. And it’s being used to give people dog tongues. That’s the internet for you!