
There is no question that technology is impacting our everyday lives and, to a large degree, making them easier and more fun. Smartphones may have been around for nearly twenty years, but the march of technology has brought huge advances in screens, cameras and chips, and the latest phones are almost unrecognisable from the first iterations. But it has not just been competition among the manufacturers to produce better devices. The software companies that design the apps and the games we use are forever exerting upward pressure on the hardware designers. The gaming industry, in particular, is constantly seeking to engage users with better feedback and more authentic experiences.

With better gaming software, smartphone manufacturers have had to recognise that their devices now serve more and more purposes, from a wallet to a live gaming hub where we can play any number of opponents simultaneously as we walk along the street.

So what is next? What’s the next great innovation that will change the game(s) again? Well, there are a few contenders that may rise to the top, but gesture control and facial recognition are probably the favourites. Everyone remembers Tom Cruise using his hands to control the translucent screen in the Hollywood blockbuster Minority Report – and everyone wanted it! That was in 2002.

The technical name for this is the Natural User Interface (NUI), and it lets users control their software without actually touching it. So, no more sweaty keyboards or grubby mice on the desktop. There have, of course, been first iterations of this technology in Microsoft’s Kinect and Leap Motion, but things are moving on. Ultimately, we don’t necessarily want to sit in front of a computer screen and conduct it like an orchestra; we want to change radio stations in our car or control our camera drone with no more than a flick of the hand or a flash of the eyes. Removing the need for physical controls means we should be able to manipulate virtual objects (not just open applications) in the way we do physical ones.

There have been attempts at this in the past with wired gloves and the like – and we now have touchscreens, with the vibration feedback known as haptic technology, that let us control our smartphones. But wired gloves were never going to be taken up by the public and were only really useful in industrial situations where robots rely on touch. As Kevin Kelly puts it in The Inevitable: “All devices will need to interact. If a thing does not interact, it will be considered broken.” Surely the eccentric rotating mass motor, the linear resonant actuator and Apple’s Taptic Engine are all technologies of the past. Controllers that rumble and buzz in your hands are so passé.

The future lies in technology that does what we want it to without our having to touch it. Not thought control exactly (although that will come eventually), but vision-based recognition systems. 2D cameras make a reasonable attempt at this, but the way forward is 3D cameras, which are now cheaper, more reliable and fundamentally better at capturing three-dimensional body and hand motions.

Kinect and Leap Motion are very clever but aren’t designed to be embedded in a smartphone. Software companies are now providing software development kits (SDKs) that let application engineers integrate gesture and pose recognition – and also facial recognition – into their applications. Gestoos, for example, enables high-precision tracking of any shape or object and understands motion, letting the user interact naturally with any device or software.
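To give a feel for what such an integration involves, here is a minimal sketch of hand tracking in Python. Gestoos’s own SDK is commercial, so this uses Google’s open-source MediaPipe library as a stand-in – an assumption on our part, but the capture-recognise-react loop is broadly what a gesture-enabled app looks like.

    # Minimal hand-tracking sketch using MediaPipe (a stand-in for a
    # commercial gesture SDK such as Gestoos, whose API is not shown here).
    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands
    cap = cv2.VideoCapture(0)  # default webcam

    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB frames; OpenCV captures BGR.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                # 21 landmarks per hand; landmark 8 is the index fingertip.
                tip = results.multi_hand_landmarks[0].landmark[8]
                print(f"index fingertip at ({tip.x:.2f}, {tip.y:.2f})")
    cap.release()

A real application would map fingertip positions and poses onto in-game actions rather than printing them, but the loop is the same.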

Emotion-awareness technology, which recognises emotions by watching users’ facial movements, is being offered to gaming companies by a company called Affectiva. The software was originally designed to better understand how people react to advertising and political polling. For gaming companies, understanding how players are reacting to a game is surely the nirvana of gaming – remember, gaming companies rarely meet their users. Software that learns to react to a user, and that lets the user’s reactions shape its next move, opens up many possibilities.
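As a purely hypothetical sketch of what “reacting to the user” could mean in practice – Affectiva’s real SDK is commercial and is not used here, and detect_emotion() is an assumed stand-in – a game might adjust its difficulty from per-frame emotion scores:

    # Hypothetical emotion-adaptive game logic. detect_emotion() is an
    # assumed stand-in; a real system would score each camera frame with a
    # facial-expression model (e.g. a commercial SDK like Affectiva's).

    def detect_emotion(frame):
        # Stand-in: pretend the model scored this frame.
        return {"joy": 0.2, "frustration": 0.8}

    def adapt_difficulty(frame, difficulty):
        scores = detect_emotion(frame)
        if scores.get("frustration", 0.0) > 0.7:
            return max(1, difficulty - 1)  # ease off before the player quits
        if scores.get("joy", 0.0) > 0.7:
            return difficulty + 1          # player is cruising; raise the stakes
        return difficulty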

It is worth thinking about the games that might harness this technology so that players can better control the outcome of their play. First-person shooters like Call of Duty and Quake would be very different if we could react at the speed of our vision, rather than at the reaction time between seeing and pressing a controller button. Likewise, if we could read the reactions of the other players at the table in a game of online poker, we would be in a very good place. Online poker has developed hugely over the past decade, with software that gets ever closer to replicating the feel of sitting at a real table – but there is nothing quite like understanding the emotions of our opponents.

Online poker has revealed plenty of ‘tells’ that enable players to read their opponents based on the size of bets and the timing of calls. There are also plenty of online tutorials to help new players learn the game. But, so far, it has been very difficult to convey the raw emotion of the real game. You can’t see the sweat online. Facial recognition will change this.
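As a toy illustration of a timing tell – purely hypothetical, and not any poker site’s real data format – here is a sketch that flags a call made much faster than an opponent’s usual pace:

    # Hypothetical timing-tell sketch: flag a call made unusually fast
    # compared with an opponent's history. The data format is assumed.
    from statistics import mean, stdev

    def fast_call_tell(call_times, k=1.5):
        """call_times: seconds each call took; the last entry is the newest."""
        history, latest = call_times[:-1], call_times[-1]
        if len(history) < 5:
            return None  # not enough history to judge
        mu, sigma = mean(history), stdev(history)
        return latest < mu - k * sigma  # True if suspiciously quick

    # e.g. fast_call_tell([4.2, 3.8, 5.1, 4.6, 4.9, 0.8]) returns True

A snap call after a long run of measured ones is exactly the kind of pattern facial recognition could enrich with the player’s actual expression.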

Online gaming is here to stay; it’s easy to access, and we can play in the comfort of our homes or as we walk along the street. Technology has changed the game beyond all recognition from the simple graphics and basic software we started with. However, the industry will not stand still, and our demands for a more realistic feel to the games we play are being answered every day. Facial recognition and gesture control are on their way, so get ready. It’s going to be great!