Image analysis: The opportunities and challenges
Joel Windels, VP of inbound marketing at Brandwatch, explains how image analysis, although in its infancy, could become an invaluable tool for marketers and brands
Thanks to various stories making waves in the consumer tech space, like Facebook’s new system that can ‘read’ photos for visually impaired people, there’s been a lot of conversation about image analysis recently. There’s no doubt that these sorts of developments are incredibly exciting. Imagine all the images posted across the web at your fingertips, categorised and easily searchable. With this in mind, brands need to think about how this new technology can be applied to their social listening efforts.
Using our own research, we have been helping clients to better understand how image analysis can help them with their social intelligence goals. The opportunities image analysis presents are numerous, but there is much to be learnt about the current challenges in this space. A first step is understanding the difference between image analysis and image recognition.
Image analysis vs image recognition
To put it simply, image recognition finds images shared online or in a platform’s archive containing certain things – usually brand logos. Image analysis identifies what’s within an image you already have – for example, you show it an image, and it identifies (and subsequently tags) what’s in the image.
Image recognition in theory promises something amazing – the ability to never miss an image of your brand or your logo, even when the accompanying text doesn’t actually mention you. There have been some interesting early examples of how this can be used, such as Miller Lite’s digital agency, DigitasLBi, finding new audiences to target through images relevant to the brand, or brands using user-generated images as campaign assets.
But it’s image analysis where some incredible opportunities lie. Image analysis represents the possibility of organising the web’s images into an archive that is fully searchable and analysable. If a brand were looking for a particular image for its next campaign, say a beverage being enjoyed on a beach, image analysis could identify all of the relevant pictures posted by fans, so the brand can make contact about using them.
If the brand wants to know where its new snack is being consumed, so it can target its next advert more appropriately, image analysis can provide a breakdown of all the types of places where images of that product are taken.
Image analysis can identify all manner of things in images, from the types of people and surroundings, to objects, emotions and even the time of day. This is not only powerful for uncovering user-generated content that can then be used in campaigns and amplified, but also for better understanding your audience and how they view, consume, and promote your products.
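To make the scenarios above concrete, here is a minimal sketch of how a brand might query the kind of tag data an image-analysis service returns. The filenames, tag names and data are invented for illustration; real services return similar label-style output, typically with confidence scores attached.

```python
from collections import Counter

# Hypothetical output from an image-analysis service:
# each image comes back with a list of descriptive tags.
tagged_images = {
    "photo_001.jpg": ["beach", "beverage", "daytime", "two people"],
    "photo_002.jpg": ["kitchen", "snack", "indoor"],
    "photo_003.jpg": ["beach", "beverage", "sunset"],
    "photo_004.jpg": ["office", "snack", "indoor"],
}

def find_images(tags, required):
    """Return images whose tags include every required tag."""
    return [img for img, t in tags.items() if set(required) <= set(t)]

def place_breakdown(tags, places):
    """Count how often each known place tag appears across images."""
    return Counter(tag for t in tags.values() for tag in t if tag in places)

# 'A beverage enjoyed on a beach' for the next campaign:
print(find_images(tagged_images, ["beach", "beverage"]))

# Where is the snack being consumed?
print(place_breakdown(tagged_images, {"beach", "kitchen", "office"}))
```

The same pattern scales from a toy dictionary to millions of tagged images: once the tags exist, campaign-asset discovery and audience breakdowns both reduce to simple queries over them.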
Understanding the challenges
But image analysis is not without its challenges and the technology is still evolving – take the recent example of Flickr’s auto-tagging system causing offence by tagging concentration camps as ‘jungle gyms’. Clearly, not the desired outcome, but this case raises an important consideration with image analysis – context.
Whilst recently working with our product team on how we might tag and group particular images, it became apparent just how difficult it can be for a computer to fully understand context to the same degree that humans can – but also what a huge opportunity it will be when it’s cracked.
For example, what do you see in the image below?
You or I would look at this and probably surmise that this is a man and a woman, possibly a couple, having a serious or emotional conversation. We probably assume it’s night-time, and that they’re in some kind of bar.
Consider everything we’ve taken into account to reach that conclusion: the subtleties of the red lighting, the type of seat, the lights in the corner, the body language. It has taken years of life experience to read those cues.
Image analysis technology, on the other hand, can likely work out that it’s dark, so probably night-time, and assume they’re indoors because it can’t detect sky, buildings or trees. It can see there’s a man and a woman. It might even be able to work out that they’re having a conversation from their positions. If it’s really clever, it might be able to understand their emotions (for example, Google’s Cloud Vision can detect basic emotions).
But what it is unlikely to understand – at this stage, anyway – is the mood and context of the image. The overall tone. The moodiness, the fact that it’s a bar, the emotions on their faces combined with the scenario. All those things humans understand instinctively.
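The gap between those low-level signals and human-level context can be sketched as a toy rule set. Everything here – the signal names, the thresholds, the rules – is invented purely to illustrate the reasoning described above, not how any real vision service works.

```python
def guess_scene(signals):
    """Combine low-level detections into a coarse scene guess,
    roughly mirroring the machine's reasoning described above."""
    guesses = []
    # Dark image -> probably night-time.
    if signals.get("brightness", 1.0) < 0.3:
        guesses.append("probably night-time")
    # No sky, buildings or trees detected -> probably indoors.
    if not any(signals.get(k) for k in ("sky", "buildings", "trees")):
        guesses.append("probably indoors")
    # Two faces oriented towards each other -> likely a conversation.
    if signals.get("faces", 0) == 2 and signals.get("facing_each_other"):
        guesses.append("likely a conversation")
    return guesses

# Signals a vision system might plausibly extract from the bar photo:
signals = {"brightness": 0.2, "faces": 2, "facing_each_other": True}
print(guess_scene(signals))
```

What no rule of this kind can capture – the moodiness, the red lighting, the fact that it’s a bar – is exactly the context gap the example above describes.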
Another example is celebrities – it’s not (currently) possible for a computer to know every single face of every A-Z list celebrity in the world, so when it looks at an image of someone famous, it won’t recognise who it is. This is something you might want to know about if you’re a brand and a famous pop star is holding your product.
All of this emphasises the fact that the technology is developing quickly but it’s not quite there yet. But imagine a future where it is. Where you can look for images with a particular emotion and answer questions like ‘are people happy or sad when using my product?’, or ‘are any celebrities using my products in their photos?’
This issue of context is such an interesting challenge and it’s going to be fascinating to see how far computers can go as this area develops. Image analysis is only at the beginning of its journey. But as it continues to evolve and improve, there’s no doubt that it will become a game changer for brands.