
Case Study – Why Bigger Is Better! November 27, 2009

Posted by Jon Ward in Advertising, eye tracking, Market Research, Marketing, Media, Studio, Technology, Tips And Tricks, Tobii.

The Tobii T60XL launched earlier this year as the first eye tracker with a high-definition (1920 x 1200) widescreen integrated into a single unit. The benefits of the higher-resolution screen and the larger monitor (24″, up from 17″ on the standard T-series trackers) are obvious: print design, virtual shelf testing and web testing at a range of resolutions can all be run on a single set-up that remains portable and unobtrusive.

One of our customers, and a long-standing Tobii and eye tracking user, Simple Usability (www.simpleusability.com) embarked on a project for the global banking group HSBC. Their brief was to assess the effectiveness of the campaign the bank uses to attract and recruit graduates, a campaign consisting of both web-based and printed materials. Guy and his team chose the T60XL for the testing, and you can read and download the case study below.

Simple Usability Case Study – HSBC

Using the increased screen size and resolution of the T60XL, the results clearly showed that interaction with the web-based media varied greatly. Although the standard for web testing is largely regarded as 1024 x 768, designers need to be aware that users will view pages at every resolution from that up to and including widescreen formats such as 1920 x 1200. Because comparative tests at various resolutions could be run on the same unit, these variances were easily spotted and fed back to the client. For more information on the T60XL feel free to check out our website www.acuity-ets.com or email me at jon@acuity-ets.com.

And to finish: I am afraid that this time, bigger definitely is better!

When One Is Just Not Enough… November 18, 2009

Posted by Jon Ward in Advertising, eye tracking, Market Research, Marketing, Media, Studio, Technology, Tips And Tricks, Tobii, Usability & UX.

Following on from my earlier post about the different metrics applied to heatmaps, I thought it was time to vent some frustration about a piece of work I saw posted on the internet. The example was a set of heatmaps showing the performance of a number of print adverts. The images themselves showed hot spots in all the right places, and the study claimed the adverts were a great success, with the heatmap as proof. We are always a little sceptical about the findings one can draw from a single heatmap, so I did a little digging. When I checked what the heatmap represented, I found it showed the number of fixations (apparently "the best measure for checking print / web performance"), and the images therefore 'worked' because there were a lot of fixations in certain areas. As you may know from our previous posts, we frown on using a single metric or output to draw conclusions (if not, please read our blog post here: https://acuityets.wordpress.com/2009/10/03/heatmaps-the-truth-is-out-there/), so I thought a blog post would be appropriate to show how relying on a single metric can prove misleading.

Firstly, let's look at a heatmap:

This heatmap shows the popular Woolworths website, recently re-launched. It comes from a task asking users to find a fancy dress costume for a child; this is the toys page, which most users reached about halfway through the task. Although neither was part of the task, the heatmap shows a lot of interaction with the bicycle and the special offer window, and a lot of interaction with the whole navigation bar along the left-hand side. The mouse clicks show six points on the left navigation bar where people clicked through to try to find the costume, which perhaps highlights that the labels / categories weren't clear, as the task was quite clearly defined. On this evidence you might say that the special offer area drew attention (and therefore 'worked') and that people read and looked at each section of the left navigation. There was also a considerable number of fixations on certain areas of the top navigation bar.

This test used the standard "Tobii Fixation Filter", which defines a fixation as gaze points within a 35-pixel area, with no minimum dwell time. In other words, if the person's gaze remains within a 35-pixel area for any period of time, it is classed as a fixation. While this is a good 'one size fits all' filter, you will end up with a lot of potentially very short fixations. Depending on which research you base your fixation metrics on (or by what measure you deem a fixation), fixations that indicate engagement or cognitive processing are generally taken to range from 80ms to 200ms or more. To rely on an output that could be showing a lot of very short fixations (many potentially under 80ms) is therefore possibly jumping the gun. What do I mean by this? Very simply: if there are a lot of very short fixations of 10, 20 or 30ms, it is likely that the participant is scanning around the page but not actually taking in what they are seeing.
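To make the filtering idea concrete, here is a minimal sketch of a dispersion-style fixation filter in the spirit of the one described above: consecutive gaze samples that stay within a 35-pixel window are grouped into a single fixation, and a second pass discards fixations shorter than 80ms. The function names, data layout and thresholds are illustrative assumptions, not Tobii's actual implementation.

```python
# Hypothetical dispersion-based fixation filter: consecutive gaze samples
# that stay inside a 35-pixel window form one fixation. Timestamps in ms.

def group_fixations(samples, max_dispersion=35):
    """samples: list of (t_ms, x, y) gaze points, in time order.
    Returns fixations as (start_ms, duration_ms, centre_x, centre_y)."""
    fixations = []
    group = []
    for t, x, y in samples:
        candidate = group + [(t, x, y)]
        xs = [p[1] for p in candidate]
        ys = [p[2] for p in candidate]
        if max(xs) - min(xs) <= max_dispersion and max(ys) - min(ys) <= max_dispersion:
            group = candidate            # still inside the 35 px window
        else:
            if group:
                fixations.append(_summarise(group))
            group = [(t, x, y)]          # gaze moved: start a new group
    if group:
        fixations.append(_summarise(group))
    return fixations

def _summarise(group):
    start = group[0][0]
    duration = group[-1][0] - start
    cx = sum(p[1] for p in group) / len(group)
    cy = sum(p[2] for p in group) / len(group)
    return (start, duration, cx, cy)

# With no minimum dwell time, even the briefest dwell counts as a
# "fixation"; applying an 80 ms threshold discards these short scans:
def meaningful(fixations, min_duration=80):
    return [f for f in fixations if f[1] >= min_duration]
```

Run on a short stream of samples, a quick 33ms glance and a 100ms dwell both come back from `group_fixations`, but only the 100ms dwell survives `meaningful`, which is exactly the difference the heatmaps below illustrate.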
After testing we would validate whether participants had actually seen, absorbed and remembered the images / page / advert by asking questions to test their recall, and we would probably find they hadn't: they were scanning for a keyword or link, didn't see it, and were merely 'passing through' the area. Let's have a look at a second heatmap from the same participants, on the same page, covering the same amount of time, where only the heatmap metric has been changed:

You will notice the difference in the outputs immediately, and if you were drawing conclusions or writing a report based on this output your findings would be very different. So what does this heatmap show? It shows the amount of time actually spent by each person, relative to their exposure to the page (basically a percentage of the total time spent on the webpage / stimulus). The 'hot spots' are instantly seen to be in very different places: the 50% off area, the top navigation and the left navigation bar all received much less attention than you would have thought from the first heatmap alone. What this actually shows is that in the initial image people were quickly scanning around the page, creating a number of fixations (and therefore hot spots), but weren't actually absorbing the message. We now know that people didn't spend time dwelling on the 50% off offers, and that they only dwelled on the left-menu options they clicked through on, so our initial hasty conclusions were not correct. In this case the large number of small fixations on the left navigation bar was probably caused by the menu not being in alphabetical order: people were quickly scanning, first to check how the menu was laid out and then to find the link they wanted (or expected to see). All of this should of course be backed up by our post-recording questioning, gaze replays and statistical data. Once again, no single report or output gives us all the answers! The overall picture of interaction is very different, but this is a much more realistic measure than our original fixation-based heatmap.
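The divergence between the two heatmaps can be sketched numerically: for each area of interest (AOI), count fixations and also sum their durations as a percentage of total exposure. The AOI names, coordinates and numbers below are invented for illustration, but they reproduce the pattern described above, where many short fixations inflate the count metric while the dwell-time metric tells a different story.

```python
# Compare the two heatmap metrics discussed above per area of interest:
# fixation count vs relative dwell time (% of total exposure).

def aoi_metrics(fixations, aois, total_time_ms):
    """fixations: list of (duration_ms, x, y).
    aois: {name: (x0, y0, x1, y1)} bounding boxes.
    Returns {name: (fixation_count, percent_of_total_time)}."""
    metrics = {}
    for name, (x0, y0, x1, y1) in aois.items():
        hits = [d for d, x, y in fixations if x0 <= x <= x1 and y0 <= y <= y1]
        metrics[name] = (len(hits), 100.0 * sum(hits) / total_time_ms)
    return metrics

# Invented example: ten 30 ms scanning fixations on the left navigation
# vs two 400 ms dwells on the special offer, out of 4 s on the page.
aois = {"left_nav": (0, 0, 200, 600), "offer": (600, 100, 800, 300)}
fixations = [(30, 100, 300)] * 10 + [(400, 700, 200)] * 2
print(aoi_metrics(fixations, aois, total_time_ms=4000))
```

On a count-based heatmap the left navigation looks five times "hotter" than the offer (10 fixations vs 2), yet by relative dwell time it received 7.5% of the exposure against the offer's 20%, which is precisely the reversal between the two heatmaps.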

So to conclude: heatmaps are very powerful visual indicators of interaction and, to some extent, of the performance of a stimulus, but only if used correctly. The type of fixation filter applied, and the fixations themselves, are key to the output and will radically alter the data presented. A fixation heatmap on its own does not give any solid findings, nor does it accurately show whether, or how much, information a participant has absorbed; this needs to be examined after testing with probing questions based on the key objectives and the task at hand. Any thoughts or comments? As always, contact me at jon@acuity-ets.com.

 

Another Day, Another Tobii Studio Version! November 16, 2009

Posted by Jon Ward in Uncategorized.

Ok, so hands up, who is confused? Well, I am – or rather was – as Tobii has released not only an alpha version of Studio 2.1 (no support, remember: use it at your peril, it is a pre-production release!) but also Studio version 1.7. It is important to note some of the changes that 1.7 brings, and the release notes can be found here: Studio 1.7 Release Notes

Studio 1.7 will be the last version available to customers who do not hold a valid support contract for Tobii Studio. This means you can continue using the software forever, but no additional functionality or fixes will be made. To benefit from the latest innovations, such as the new AOI statistics tool and better E-Prime integration, you will need to purchase or continue with your support and upgrade contract.

If you have any questions or queries, or would like to find out how much a support contract will cost, then as always email me at jon@acuity-ets.com

 

Studio 2.1 Alpha Release – IMPORTANT INFO! PLEASE READ! November 9, 2009

Posted by Jon Ward in Studio, Technology, Tips And Tricks, Tobii, Updates.

If you are the sort of person that keeps checking for the latest version of Tobii Studio (HELP > CHECK FOR UPDATES) then you may have seen a recent announcement saying that an open Alpha release of Studio 2.1 is available for download.

This major update has an amazing new statistics tool in it allowing for a whole range of new functions to be accessed. It also has improved audio / video time syncing and a few other tweaks and additions. While we hope that you will take time to download this version and have a play around please read the disclaimer that is displayed.

As this is an ALPHA (pre-release) version of the software, it is NOT RECOMMENDED that you use it for commercial or important testing, as there may be incomplete code, bugs or other things that need ironing out. Also, Tobii won't be able to offer technical support on this version until its full and final release, and Tobii will not be liable or held responsible for any lost data or issues that arise from using Studio 2.1.

On a lighter note – have a look at the new tool and let us know your thoughts, views and comments – if there is something you feel is lacking or needs improving let us know and we will pass this on. As always contact me at jon@acuity-ets.com.