Neuro-Tools: Emotion Detection (January 16, 2017). Posted by eyetrackrob in Biometric, Captiv, neuromarketing, Tips And Tricks, Uncategorized.
Tags: biofeedback, biometrics, emotion detection, emotions, GSR, Heart Rate Variability, Heartrate, respiration rate, valence
Much of the research that requires biofeedback involves emotions and their detection or classification. But as you have surely noticed, emotions are a complex topic with many different aspects and challenges. Measuring when these events occur and how strong they are is relatively easy using electrodermal, cardiovascular or respiratory sensors and software to process the raw data (which may even apply emotion detection algorithms, as TEA Captiv does), all things I covered in my previous posts.
But immediately after knowing when and how strongly an emotion occurred, the inevitable question comes up: "was it a positive or negative emotion?" (valence), with the usual follow-up: "which emotion was it?" (classification).
We have seen that each tool has its merits when it comes to finding emotions, but most biofeedback sensors on their own cannot really answer those rather interesting questions about valence or classification.
However, countless studies exist that cover specific emotions or sets of emotions and that use different sensors to measure the bodily reactions to them. If we could review all those studies, we could surely come up with a map showing which emotions are best captured with which sensors, and how the measurements differ from one emotion to another, so that we could identify specific emotions (i.e. fear, happiness, anger etc.) from those measurements.
Sylvia D. Kreibig probably had the same idea and reviewed 134 publications that report research on emotional effects, so that you don't have to. Her review "Autonomic nervous system activity in emotion", published in 2010 in Biological Psychology, holds some interesting findings as well as food for thought.
Before getting to the eye-opening results, here are a few take-aways from her research that might be interesting.
- Generally, most research is done on negative emotions, and negative emotions have in general been associated with stronger autonomic reactions. However, her review does not report on the magnitude of changes, partly for the reasons described in point 3.
- Heart rate was the most used measurement in those 134 studies, followed by GSR; both were used far more than HRV or respiration.
- Baselining! Do it! Some of the studies she reviewed did, others didn't. There are several ways to establish a baseline: a neutral resting period, or a benchmarking/variable condition. While there is no single definitive method (which makes comparing studies more complicated), the important thing is that you use some kind of baseline to compare your results against.
- A rose is a rose is a rose. But with emotions, the same term can mean different things. Disgust can be induced by contamination (dirty toilets, foul food) or by injury (mutilations, blood). Sadness can provoke crying or not, and there are many other ambiguities: although anger, for example, is always a negative emotion, it can drive people away (withdrawal/avoidance motivation) or pull them closer in an aggressive move (approach motivation). Emotions that are semantically close, such as fear and anxiety or amusement and happiness, might still be based on distinct behavioural systems!
- Artifacts may influence the measured response. Watch out for the effects of posture and movement, ambient temperature and cognitive demands! Start with sensors that give you good signal quality. If you then use TEA Captiv you can process the data, apply artifact detection algorithms and filters to smooth the data and eliminate unwanted effects.
There are a few more things that need to be considered when comparing data from different studies but these are my personal top 5 take-aways. Apart from the results of course.
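The baselining take-away is worth making concrete. A minimal sketch in Python of expressing a recording relative to a participant's own neutral baseline; the sensor values and window lengths here are hypothetical, and real pipelines would baseline per condition and per participant:

```python
from statistics import mean

def baseline_correct(samples, baseline):
    """Express each sample as a deviation from the participant's own
    baseline mean, so that respondents with different resting levels
    become comparable."""
    resting = mean(baseline)
    return [s - resting for s in samples]

# Hypothetical GSR readings in microsiemens: a neutral resting period,
# then the stimulus period.
neutral = [2.1, 2.0, 2.2, 2.1]
stimulus = [2.2, 2.9, 3.4, 2.6]

corrected = baseline_correct(stimulus, neutral)
# The 3.4 uS peak now reads as a rise of ~1.3 uS above this person's baseline.
```

The same absolute reading can mean very different things for two participants; only the deviation from each person's own baseline is comparable across them.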
The table below summarizes her results. She reports that HR increased in most emotions, including surprise, but decreased in emotions that involve an element of passivity, such as non-crying sadness, contentment, visual anticipatory pleasure and suspense.
GSR increased in most emotions, probably reflecting motor preparation and increased action tendency; of the emotions more commonly induced in (commercial) research, only non-crying sadness lowers GSR. All other emotions tend to increase the reaction.
HRV has been shown in quite a few studies to be a useful indicator of cognitive workload: low HRV usually correlates with high stress levels. Since her review focused on emotions, cognitive workload was not considered, so HRV was not particularly helpful here.
So, what does this table tell us? Is there a specific fingerprint of biometric activities unique to each emotion?
Under very controlled conditions, and taking into account other types of measurements as well, there might be potential to discover a unique signature for some emotions.
Unfortunately for many researchers, very distinct emotions such as anger and happiness, or joy and disgust, have very similar bodily reactions if we look only at HR, GSR and respiration rate. Different types of sadness can cause a variety of reactions, which actually makes sadness a very interesting research subject in my opinion, but it doesn't make everyday research any easier; especially in commercial research you might not be able to control for every possible factor or to use many sensors.
My personal conclusion is that while tools such as GSR, HR, respiration or heart rate variability can help us detect emotions, in most research projects they don't allow us to uncover which emotion it was, or even whether it was positive or negative.
But on the positive side, we still have a few other tools in our Neuro-Toolbox that can help us along the way: Facial Expression Analysis for example, Implicit Association Tests or even EEG can help us to understand emotions, associations and motivations and thus help us to detect valence or even to classify emotions.
With this in mind, I’ll be covering Facial Expression Analysis in my next post as it is probably the easiest to use out of the three.
If you want to dig deeper into the original report, you can find it here: http://www.sciencedirect.com/science/article/pii/S0301051110000827
Sylvia D. Kreibig has since been involved in some interesting research projects following up on these results. Take a look at her work on ResearchGate.
Neuro-Tools: Heart Rate & Respiration (November 21, 2016). Posted by eyetrackrob in Biometric, Captiv, eye tracking, Market Research, Marketing, neuromarketing, TEA, Technology, Uncategorized.
Although not as fast as I thought, step by step I'll be covering the most relevant biofeedback sensors in this blog series. So far I've only managed to write about GSR, one of the sensors of the hour! Galvanic Skin Response has been around for a long time and in recent years has gained lots of attention from researchers, but as you might have read in my last post, although it deserves all the attention it gets, it's not always that simple to use.
Other measurements mentioned before that can tell you more about emotions or cognitive workload are respiration and heart rate, and from the latter the variability of the heart rate (HRV) can also be calculated.
Heart Rate (HR) reflects the number of complete heartbeats within a specific time window, typically expressed as beats per minute (bpm). HR is constantly and antagonistically influenced by the sympathetic nervous system (SNS) and parasympathetic nervous system (PsNS), and like GSR it unfolds rather slowly: with peak effects observed after about 4 seconds and a return to baseline after about 20 seconds, it is in fact much slower than GSR. Heart Rate Variability (HRV), on the other hand, expresses the quick variations in timing between heart beats. The time between beats is measured in milliseconds (ms) and is called an "R-R interval" or "inter-beat interval" (IBI).
Image 1: a typical heart rhythm as recorded by an electrocardiogram (ECG), showing the heart rate as well as the differences in the inter-beat intervals.
Both measurements (HR and HRV) are closely related to emotional arousal, with HRV allowing the assessment of more sensitive and quicker changes, which can also be related to stress and cognitive workload (a good topic for a follow-up post).
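Both measures fall out of a list of R-R intervals. A minimal sketch with hypothetical interval values, using RMSSD as the HRV metric (one common time-domain choice among several):

```python
from math import sqrt

def heart_rate_bpm(ibis_ms):
    """Mean heart rate in beats per minute from R-R intervals in milliseconds."""
    mean_ibi = sum(ibis_ms) / len(ibis_ms)
    return 60_000 / mean_ibi

def rmssd(ibis_ms):
    """RMSSD: root mean square of successive differences between adjacent
    R-R intervals, a common time-domain HRV metric."""
    diffs = [b - a for a, b in zip(ibis_ms, ibis_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical R-R intervals (ms) for a resting heart with natural variability.
ibis = [810, 790, 820, 780, 800]

hr = heart_rate_bpm(ibis)   # 75.0 bpm
hrv = rmssd(ibis)           # ~28.7 ms
```

Note how the mean rate (75 bpm) says nothing about the beat-to-beat variability; two recordings with the same HR can have very different RMSSD values.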
While many fitness devices today measure heart rate for fitness and well-being, those solutions might not be ideal for your research. One reason is the processing and averaging of data that goes on inside the sensor.
Image 2: the same recording as an averaged data export (blue) and as it was displayed during the recording (orange), captured with a wrist-worn device measuring HR optically. In the averaged data the highest heart rate is around 100 bpm. Over the same time frame the live stream shows much more variability (still averaging around 100 bpm), and it's clearly visible that this is not the highest value of the recording.
As mentioned above, heart rate has a relatively low sensitivity and slow response. Many wearable fitness trackers don't allow you to export the data for further analysis, or give access only to averaged data in which quick spikes have been eliminated as noise. The result of this preprocessing is that the effects of an emotion might be lost altogether. To compute HRV correctly, on the other hand, continuous and precise measurements must be guaranteed: just 2-3 missed data points can mean inaccurate calculations of the times between beats and, again, missed relevant events.
Image 3: in the live visualization the highest heart rate reaches 145 bpm. However, the suspiciously round curve leading up to the peak indicates that data points are missing and the data was interpolated, which becomes clear when looking at the averaged data. This data would not be suited to interpretation of HR or HRV.
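The spike-flattening effect of on-device averaging is easy to reproduce. A sketch with a made-up 1 Hz heart-rate stream and window length; any moving average behaves this way with a brief spike:

```python
def moving_average(samples, window):
    """Simple moving average, similar in spirit to the smoothing many
    consumer trackers apply before export."""
    out = []
    for i in range(len(samples) - window + 1):
        out.append(sum(samples[i:i + window]) / window)
    return out

# Hypothetical 1 Hz heart-rate stream with a brief emotional spike at t=5 s.
raw = [72, 71, 73, 72, 71, 95, 72, 71, 73, 72]
smoothed = moving_average(raw, 5)

peak_raw = max(raw)        # 95 bpm: the event is clearly visible
peak_avg = max(smoothed)   # the averaged export flattens it to under 80 bpm
```

A one-sample spike of over 20 bpm survives in the raw stream but is reduced to a gentle bump in the averaged export, which is exactly why averaged data can hide emotional events.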
Another reason why many fitness heart rate trackers are not necessarily a suitable solution for researchers is that most are worn on the wrist and use light to measure blood flow, deriving the heart rate from that. Compared to sensors placed close to the heart that measure electrical impulses (electrocardiogram/ECG), wrist sensors have to compensate for movement, muscle tensing, sweating and potentially light interference. ECG sensors are therefore the recommended tool for research data collection, as they measure the heart's electrical activity directly and are more sensitive to the relevant signal characteristics.
Image 4: ECG Sensor as belt or as electrodes
Research has associated respiration rate and depth with emotional impact and emotional valence. Interestingly, olfactory information ascends directly to limbic areas and is not relayed through the thalamus like other sensory input. The thalamus is a part of the brain that acts as a relay and pre-processing stage for sensory information and is considered relevant for regulating consciousness, arousal, wakefulness and alertness. Because olfactory information is not relayed through this part of the brain, a different mechanism makes it conscious, which leads to a quicker physiological response and an unconscious alteration of the respiratory pattern. Respiration patterns therefore allow researchers to identify potentially unconscious liking or disliking, and arousal. Deducing a unique emotion from respiration rate and depth does not seem to be possible, although more research is still needed in this area.
Respiration measurements can be obtained from dedicated clinical instruments or stretch-sensitive respiration belts, or can be calculated from ECG data, the latter being the least invasive option for commercial research.
Figure 6. Stretch Sensitive Respiration Belt
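However the trace is obtained, reducing it to a respiration rate comes down to counting inhalation peaks. A minimal sketch with a made-up 1 Hz belt signal and a hypothetical amplitude threshold; real belt data would be sampled faster and filtered first:

```python
def breaths_per_minute(samples, sample_rate_hz, threshold):
    """Count inhalation peaks (local maxima above a threshold) in a
    respiration-belt trace and convert the count to breaths per minute."""
    peaks = 0
    for i in range(1, len(samples) - 1):
        if samples[i] > threshold and samples[i - 1] < samples[i] >= samples[i + 1]:
            peaks += 1
    duration_min = len(samples) / sample_rate_hz / 60
    return peaks / duration_min

# Hypothetical 1 Hz belt signal over 20 s containing four breath cycles.
trace = [0, 1, 2, 1, 0, 1, 2, 1, 0, 1, 2, 1, 0, 1, 2, 1, 0, 1, 0, 0]
rate = breaths_per_minute(trace, sample_rate_hz=1, threshold=1.5)  # 12 breaths/min
```

The threshold matters: set it too low and movement artifacts count as breaths, too high and shallow breathing disappears, which is one reason respiration depth carries information of its own.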
ECG data can be processed in TEA Captiv to obtain HR, HRV and even respiration rate and as with GSR all of the mentioned measurements can be synchronized with eyetracking to understand what visual information influenced a rise in HR, a change in HRV or an alteration of respiration patterns.
In my next post I’ll take a look at how all these measurements can be combined and if through a combination it is possible to not only detect emotional events but also understand whether it is a positive or negative emotion and even which specific emotion it is. So, watch this space for more!
Copy Cat Brands – Who is Trying to Steal Your Attention? (October 31, 2016). Posted by Jon Ward in Advertising, eye tracking, Market Research, Marketing, neuromarketing, Shopper Research.
Tim from Acuity recently spoke at a conference in Peru, where he presented some of the exciting findings from our parasitic brands research last year. Using the world-leading facilities at the GSK SSL and in partnership with the British Brands Group, we tested people's recognition of famous brands and their not-so-famous imposters under a variety of conditions. Have a watch of the video below, and maybe head over to the Acuity Intelligence website to read more about the study here: http://www.acuity-intelligence.com/blog/statute-of-imitations
Tobii Eyetracking & Garmin GPS/ANT+ Data for Sports and Real-World Wayfinding Research (October 31, 2016). Posted by Scott Hodgins in Biometric, eye tracking, Glasses, Tips And Tricks, Tobii, Uncategorized.
In Jon’s previous blog post he mentioned me running and training with some friends over at Forest Fit Clubs and added a video link. I wanted to reply to the inevitable questions about data overlays etc. and how we did this with the Tobii software. The short answer is that we didn’t; here’s a “how to” to get you started.
This version is based on a running session; other examples include:
- Wayfinding – overlay GPS data on the eyetracking video so you can immediately see where pedestrians moved in the built environment and understand how people use signage: do they see it? Do they understand and use it?
- Driving & Flying – use the GPS and speed/G-Metrix data to understand the technique and relationships between looking-engaging and acting on that information.
- Data overlays are not just limited to the Garmin descriptions – you can hack the overlays to change the title and add, say, GSR data, or cognitive and arousal metrics from an EEG such as the ABM X-10.
We wanted to show the power of adding basic data overlays onto the eyetracking video so we could easily associate an action from the video with a resultant change in data. We had a Garmin VIRB XE that we had used for a technical demonstration with a customer. I had noticed that the VIRB produced MP4 files, as did the Tobii Glasses 2 (TG2), so the idea of hacking the workflow, swapping the VIRB video out and overlaying biometric and location data over the TG2 video data was born. Below is a video showing an overview of the software.
The kit list:
1 x Tobii Glasses 2 (any variant)
1 x Garmin Fenix 2 Sports watch (now replaced by the Fenix 3 and Fenix 3 HR, which may be an alternative source for HRM data)
1 x Garmin HRM RUN Heart rate monitor, we wanted this as it also offered specific run data, not just HR
1 x Garmin VIRB XE & Garmin VIRB Edit Software (we didn’t use the camera, just the software)
1 x Willing participant (me, and yes I calibrated myself, in daylight, outside, it is super-easy with the TG2) with a suitable pocket or running belt to stash the recording unit in.
- Assemble the TG2:
Connect Head Unit-Cable-Recording Unit, Insert SD Card, Insert Battery & power up. This took about 5 minutes instead of the normal 2 minutes as I threaded the cable under my base layer to control the cable movement and placed the recording unit in a neoprene running belt to control movement. (1)
- Power up the controlling computer; we use Dell Venue 11 Pros (now Dell Latitude 11 5000) running W7 Pro or W10 Pro x64.
- Connect to the TG2 WLAN, start Tobii Glasses Controller, select “New Recording”, add a participant name and description.
- Calibrate: select the icon in the bottom right of the screen (easy on the tablet – just touch) and look at the calibration marker; for non-Tobii users, a full 3D calibration typically completes in under 5 seconds. A major advantage of this ground-breaking 3D calibration model is that we don’t have to try to “work around the data” during analysis. (2)
- Start the recording, then start the recording on the Garmin Fenix while looking at the screen. It’s not perfect, but we should be able to sync the data to the frame where the recording started, and at 25 fps video we can be sure we are sync’d to within about 40 ms. (3) Turn the laptop off, or put it in sleep mode.
- Run around, enjoy the beautiful scenery at Somerley.
- Finish the session, cool down, stretch – at up to 90 minutes per battery we have plenty of time to head back to the car and stop the recording on both the Garmin Fenix and the TG2.
- Stop the recording, then select it and start to output the 1080p HD video.
- Sync Garmin to the cloud – in this case it was Bluetooth to Garmin Connect on my iPhone then auto sync’d to the cloud (connect.garmin.com)
- Login to your connect account, select the activity and download the FIT or GPX data from this session.
- Open VIRB Edit, create a new video and import the video you exported from the Tobii Glasses Controller, then add this to the video timeline.
- Import the FIT or GPX data, click on G-Metrix and then on Data and find your file.
- Sync the two files using one of the options at the foot of the G-Metrix>Data dialogue.
- Now use either the Templates or Gauges options to add data overlays onto the video; you can use Appearance to change the colour of the gauges.
- Importing the logo and setting up a new template is more art than science – good luck with that. I think it took me about a dozen failed attempts and then it magically worked. I took the time to try again while writing this; it now shows as a beta function in the software.
- Export the video to your chosen resolution and quality.
The next post will look at doing something similar, using TEA Captiv as a more scientific option, with multiple video feeds and more data options.
The end result:
- (1) It is worth taking 5 minutes here to make sure you have free cable movement before starting; otherwise turning too sharply could lead to disconnection or discomfort. Because I used the wireless version, once I was wired up and adjusted I didn’t need to touch the system again until I was retrieving data.
- (2) Other wearable/head-mounted eyetrackers have issues when we start looking in different planes. Their calibration model is typically a one-dimensional transform that is adequate when looking in the calibrated plane, but the calibration will suffer when looking in a different plane. For example, if we calibrate on a vertical wall (looking at the corners of a picture) and then place that picture flat on the desktop, we will see trapezoidal offsets; this is also true if we calibrate in a horizontal plane (desk) and look at a vertical target (wall). The result is that if we are not cognizant of this and take the distorted (erroneous) data at face value, we risk producing worthless results.
- (3) There is a sync port on the Tobii that can send/receive an LVTTL pulse to/from an external device; however, the Garmin watch isn’t designed for this, so we chose to keep it simple with a video-based sync.
- Garmin data formats: I have always used the GPX download to import into VIRB Edit, as the FIT data caused a few anomalies, specifically with time and GPS origin. The FIT file has all of the data recorded in Garmin Connect; the GPX has less, but there was still enough for this example.
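If you want to pull the heart-rate stream out of a GPX export for your own overlay or sync pipeline, the Python standard library is enough. A sketch: the namespaces below match GPX 1.1 and version 1 of Garmin's track-point extension, but check the header of your own export, as Garmin has shipped several extension versions, and the sample document here is synthetic:

```python
import xml.etree.ElementTree as ET

# Namespaces used by GPX 1.1 and Garmin's track-point extension (v1).
NS = {
    "gpx": "http://www.topografix.com/GPX/1/1",
    "tpx": "http://www.garmin.com/xmlschemas/TrackPointExtension/v1",
}

def trackpoints(gpx_xml):
    """Yield (time, lat, lon, hr) tuples from a GPX document string."""
    root = ET.fromstring(gpx_xml)
    for pt in root.iterfind(".//gpx:trkpt", NS):
        time = pt.findtext("gpx:time", default=None, namespaces=NS)
        hr = pt.findtext(".//tpx:hr", default=None, namespaces=NS)
        yield (time, float(pt.get("lat")), float(pt.get("lon")),
               int(hr) if hr is not None else None)

# Minimal synthetic GPX fragment in the shape Garmin Connect exports.
SAMPLE = """<gpx xmlns="http://www.topografix.com/GPX/1/1"
  xmlns:gpxtpx="http://www.garmin.com/xmlschemas/TrackPointExtension/v1">
 <trk><trkseg>
  <trkpt lat="50.85" lon="-1.78">
   <time>2016-10-31T08:00:00Z</time>
   <extensions><gpxtpx:TrackPointExtension>
    <gpxtpx:hr>142</gpxtpx:hr>
   </gpxtpx:TrackPointExtension></extensions>
  </trkpt>
 </trkseg></trk>
</gpx>"""

points = list(trackpoints(SAMPLE))
```

From here it is a small step to re-timestamp the points against the video start frame and feed them into whatever overlay tool you prefer.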
Neuro-Tools: GSR (October 24, 2016). Posted by eyetrackrob in Biometric, Captiv, eye tracking, Glasses, Market Research, neuromarketing, TEA, Tobii, Uncategorized.
As mentioned in my first introduction to this blog, the central nervous system is divided into different branches which monitor and control different body functions. One of the branches, the sympathetic nervous system (SNS), is responsible for quick fight or flight reactions. By constantly assessing the surroundings and scanning for potentially dangerous situations, an evaluation takes place that leads to preparations for an adequate fight or flight reaction. These preparations can be measured throughout the body and include changes in heart rate, respiration and the level of sweat on hands and feet.
As we start to understand that these non-conscious reactions are strongly and inseparably tied to decision making processes and thus human behaviour, more and more researchers have become interested in using tools to measure these reactions.
In my first post a few weeks ago, I wrote about the general rise of Neuro-Tools and mentioned some examples such as eyetracking, EEG, facial expression analysis, GSR, heart rate and respiration, as well as Implicit Association Tests. The series aims to go through these tools one by one and review what they measure, how they work and of course where we run into their limitations, with the general objective of giving you a perspective on how they can become a valuable addition to your research. I'd like to continue the series by looking at GSR today. Initially I thought of covering GSR, heart rate and respiration in one post, as they could easily be summarized as "biometrics" or "biofeedback measurements", but it turned out to be quite a long post, so I'll split them into individual posts.
Enough of the introductions! Let’s dig into the exciting world of biometrics starting with:
Galvanic Skin Response
GSR isn’t simply about measuring sweat; there is an awful lot more to it than that, so before offering some general advice on what to look out for when considering GSR, I would like to explain the basics of this tool.
Electrodermal Activity (EDA), Skin Conductance (SC) and Galvanic Skin Response (GSR) all refer to the ability of the skin to conduct electricity, which changes with the activity of the sweat glands and thus the secretion of sweat. Those changes are closely related to psychological processes and can be triggered by emotional stimulation. Conductance is measured by applying an external, unnoticeable current of constant voltage: with more moisture on the skin, electrical resistance decreases and skin conductance measurably increases, even when the sweat is not visible to the eye.
Skin conductance can be divided into tonic and phasic activity. The tonic conductance level is constantly changing within each individual respondent, depending on hydration, skin dryness and autonomic regulation in response to environmental factors such as temperature. Phasic responses, in turn, are short-term peaks in GSR reflecting reactions of the SNS to emotionally arousing events, mostly independent of the tonic level. Most of the time we will be looking at these phasic reactions, which occur in the eccrine sweat glands.
GSR data is measured in microsiemens (μS), and the relevant phasic reactions can be quantified and analysed in different ways: apart from the number of peaks occurring within a certain period after stimulus onset, the peak amplitude, the time to reach the peak value and the recovery time can all be used for analysis. GSR can be used to determine the strength of arousal, but not the valence (like or dislike) of a reaction.
Image 1 is an example of data including tonic and phasic activity.
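Those phasic metrics can be sketched for a single, already baseline-corrected peak. The trace, sample rate and onset threshold below are hypothetical, real data would first need artifact filtering, and half-recovery time is used here as one common convention for the recovery measure:

```python
def phasic_peak_metrics(samples, sample_rate_hz, onset_threshold=0.05):
    """Characterize one phasic GSR peak in a baseline-corrected trace.

    Returns (amplitude, time_to_peak_s, recovery_time_s), where recovery is
    the time from the peak back down to half the peak amplitude, or None if
    the trace never falls that far.
    """
    # Onset: first sample rising above the threshold.
    onset = next(i for i, v in enumerate(samples) if v > onset_threshold)
    peak = max(range(onset, len(samples)), key=lambda i: samples[i])
    amplitude = samples[peak]
    time_to_peak = (peak - onset) / sample_rate_hz
    recovery = None
    for i in range(peak, len(samples)):
        if samples[i] <= amplitude / 2:
            recovery = (i - peak) / sample_rate_hz
            break
    return amplitude, time_to_peak, recovery

# Hypothetical 10 Hz phasic trace (microsiemens above the tonic level).
trace = [0.0, 0.0, 0.1, 0.4, 0.8, 1.0, 0.7, 0.45, 0.3, 0.1]
amp, ttp, rec = phasic_peak_metrics(trace, 10)
```

Real recordings contain overlapping peaks and drifting tonic activity, which is exactly the part that analysis software like Captiv handles before metrics like these become meaningful.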
The density of sweat glands varies across the body, being highest on the head, the palms and fingers, and the soles of the feet. Most GSR tools are therefore built to be used on the fingers, where the reaction is strongest. However, some instruments on the market measure the change in sweat levels at the wrist, which often results in poorer data quality but might be necessary for experiments where the hands are needed to interact with objects (i.e. holding mobile devices/products, or typing).
Image 2 shows eccrine sweat gland concentration. Red areas indicate a high concentration of eccrine sweat glands (glands.cm−2), allowing measurement of sympathetic arousal of low intensity and minimal duration. Green zones (for example the wrist) indicate a low concentration of relevant sweat glands, able to capture only events of high intensity. (N. Taylor; C. Machado-Moreira, 2013)
Depending on the manufacturer and the kind of system used, sensors can be adhesive electrode pads pre-filled with conductive gel to reduce preparation time and avoid electrode movement. Conductive gel is not mandatory but can improve data quality and ensure a good, stable electrical connection. Many GSR device manufacturers that provide systems for use on fingers and toes supply Velcro straps to hold the electrodes firmly in place. In any case, excessive respiration, movement and talking should be avoided, as these can cause noise in the data or variations in the signal that can be misinterpreted.
Image 3 shows a classic sensor (TEA T-Sens GSR) that can be placed on the fingertips and adjusted with Velcro straps, next to an Empatica E4 wristband.
As written in the introduction, reaction times and strengths are highly individual and therefore distinct for each participant; they can vary from 400 milliseconds up to 5 seconds after presenting a stimulus. In a controlled lab environment a calibration procedure can help in understanding individual differences in reactions, though it might not always be necessary. It is not advisable to use GSR in areas where many low- and high-impact events can occur uncontrolled at any time, mixed with all kinds of artifacts, as it might be complex, if not impossible, to relate an emotional arousal peak to a specific event.
If free movement is a requirement (for example in shopper research) it is highly recommended to calibrate the GSR reaction time and strength for each participant and to complement the GSR measure with a synchronized video and sound feed -ideally even with eyetracking- to understand the source of the arousing events. The synchronization of several feeds can sometimes be a challenge but there are solutions that allow either for a live synchronization or a post-recording-synchronization.
Image 4 shows a synchronized recording of different sensors such as ECG, HR, HRV, respiration and cognitive workload with eyetracking (top right) and an additional video stream (bottom right). The synchronization can be done, for example, using the QR code visible on the screen (top left), marking a synchronization point in both video and sensor feed.
Image 5 shows a TEA T-Log, a small and mobile device that emits a short flash of light that can be picked up by a camera or in the video of the Tobii Glasses marking a visible event in the video and a sync point in the sensor recordings.
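Whichever marker you use (QR code or flash), the alignment step afterwards is just an offset correction between the two clocks. A minimal sketch with hypothetical timestamps in seconds:

```python
def align_to_video(sensor_times, sensor_sync, video_sync):
    """Shift sensor timestamps onto the video clock, given the time at which
    one shared sync event (e.g. the T-Log flash) appears on each clock."""
    offset = video_sync - sensor_sync
    return [t + offset for t in sensor_times]

# The flash occurs at t=10.0 s on the sensor clock and t=3.5 s into the video.
aligned = align_to_video([10.0, 11.2, 12.5], sensor_sync=10.0, video_sync=3.5)
```

One shared event fixes the offset; a second event later in the recording also lets you check for clock drift between the two devices.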
How GSR raw data, filtered data and emotion detection work, all synchronized with eyetracking, can be seen in the following short video recorded from TEA Captiv. I also imported data from a wrist-worn GSR device, but the data was not usable, which is why I chose to minimize those curves in the software. As you can see in Image 2, the concentration of eccrine sweat glands on the wrist is low, which very often means a very noisy signal or no signal at all. To improve signal quality it is recommended to induce a minimum level of tonic sweating, for example through some physical exercise. Although I did this (as you can indirectly and briefly see at the very beginning of the video), it wasn’t enough to make the measurement from the wrist usable. For this type of study (researching and improving the emotional and visual impact of TV commercials), I would usually recommend a remote eyetracker such as the Tobii X2-60 as well as sensors worn on the fingers (T-Sens GSR or similar); however, I also wanted to show that it can easily be done with a mobile eyetracker if needed, as shown below:
In comparison, you can also watch a video of a similar test (same commercials) using a remote eyetracker as mentioned above. You’ll notice similarities in the general gaze data and in the arousal detection, although you might also notice that each participant has a slightly different reaction time, and that the emotional threshold influences how many emotional moments each person experiences:
There is still a bit more to know about GSR, and we at Acuity do offer training on methodologies, technology and best practices for your research. To give you a head start on some of the things to consider, have a think about these four questions and then maybe give us a call:
- Where will the data collection happen? Do you need to be completely mobile, or will it be a controlled environment close to a computer? If you go mobile, can you carry a small device to record the data, or does the GSR device itself need to store the data?
- What type of sensor do you need? Is it a viable option to use sensors on the fingers, or will you need to use the hands to hold something or type for example?
- Do you know how to analyse the data? GSR raw data is rarely usable as-is. Do you know how to remove the effects of tonic activity and artifacts, or do you need software that can do it for you and find the relevant events?
- Do you need to synchronize the data with other devices and do you want to accumulate data over several participants?
In the next post I’ll be covering heart rate and respiration to wrap up the more commonly used biofeedback tools before taking on EEG, facial expression analysis, Implicit association tests and others. Stay tuned!
3D – The Key to Tobii’s Performance Lead (October 17, 2016). Posted by Scott Hodgins in eye tracking, Glasses, Market Research, Marketing, Media, neuromarketing, Shopper Research, Technology, Tips And Tricks, Tobii, Updates, Usability & UX.
Tags: eye tracking, eyetracking, Marketing, research, smi, Technology, Tobii
This post tries to answer some of the most common questions we get asked: Why should I buy a Tobii? Why is it better? System “X” has a “better head box” and system “Y” is cheaper.
The answer from our point of view is simple: the eyetracking is more accurate than other systems, for more people, over a longer timeframe.
This is a pretty grand claim, why are we so confident?
Let’s start at the beginning. Eyetracking itself is straightforward: there are several well-documented methods to find and follow the pupil, and Tobii uses a non-intrusive video-based technique called “Pupil Centre Corneal Reflection” (PCCR). Essentially, an IR illuminator is used to help differentiate between the pupil and the iris; it also creates a highlight, or glint, that is used as well. The Tobii systems use an improved version of this idea, the secret sauce being a combination of two things: illumination and data modelling. These allow the remote and wearable trackers to monitor the respondent’s relative 3D position in space and adjust the calibration parameters in the 3D physiological model, and therefore afford a far greater range of movement than similar systems while keeping accuracy and precision.
(Figure below shows the native 3D data from the TG2)
Illumination: Tobii can use up to two different lighting techniques, known as bright and dark pupil, to optimise the illumination for the participant in that location; crucially, when they move, the illumination can be adapted to keep track of them. This allows a Tobii to offer greater freedom of movement while retaining tracking accuracy, without the need for constant drift correction by the system operator.
Data modelling: The Tobii method is different: their research-grade eyetrackers have typically used multiple cameras since the launch of the T and X series systems in 2007/8. The advantage of using multiple cameras is that the location of the eye in space can be physically described. That is to say, the system knows with a very high degree of accuracy where the centre of your eye is, and which eye it is, for every sample recorded. The slightly different images from the pair of cameras in an X2, for example, allow the creation of a 3D physiological model of the eyes being tracked during calibration. This approach allows Tobii to understand the movement of the eye or the eyetracker, should one or the other move, and to adjust the calibration accordingly with a high degree of precision.
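The reason two cameras give a 3D position can be sketched with basic stereo triangulation: with a known camera separation and focal length, the disparity between the two images of the pupil centre yields depth. This toy example is only illustrative (real systems use a fully calibrated eye model), and all the numbers are hypothetical.

```python
def triangulate_depth(focal_px, baseline_mm, disparity_px):
    """Depth (mm) from stereo disparity using Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("the eye must be visible to both cameras")
    return focal_px * baseline_mm / disparity_px

# Hypothetical values: 1000 px focal length, 50 mm between the two cameras,
# 80 px disparity between the left/right images of the pupil centre.
print(triangulate_depth(1000, 50.0, 80.0))  # 625.0 mm from the camera pair
```

Knowing depth per sample is what lets the model distinguish “the eye moved” from “the tracker moved” and re-derive the calibration geometry accordingly.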
The net result is that these systems can accommodate movement, even if the head leaves the area trackable by the hardware, and can recover tracking when the eyes are visible again. This is one of the reasons people keep choosing Tobii for demanding applications like infant research and in-vivo commercial research. In a recent study Acuity Intelligence recruited 330 people as they were entering supermarkets and didn’t have to turn away a single participant because they could not be tracked – a first for a data collection exercise with this number of people, regardless of the brand of technology used.
Don’t just take our word for it: please challenge us. Whether it is onscreen, in the real world, or in the emerging AR and VR application areas, we can help.
One of Acuity’s directors, Scott, likes to run around the forest early in the morning doing circuit training, and we thought this would be an ideal opportunity to test the Tobii Glasses alongside his fitness tracker and GPS watch, which allows us to overlay his positional data, heart rate, speed and distance travelled on the eye tracking video output. This would give a researcher fantastic insight into a participant’s performance during sports research, medical or clinical trials, or military-type studies, and it is really simple to integrate. What really stood out for us was how well the Tobii Glasses 2 performed across a wide range of lighting conditions, movement and physical activity, and how resilient they stayed to the moisture in the air and the beads of sweat from Scott’s forehead!
By using its unique four-camera eyetracking system, the Tobii Glasses can compensate for the slippage which occurs under normal use (and was even more extreme under these test conditions) and remain accurate, as you can see from the video! Even with the low-hanging sun coming up through the trees, the Tobii Glasses’ full HD scene camera worked fantastically and the eyetracking data remained solid, robust and accurate.
Don’t just take my word for it – have a look for yourself and if you want to discuss the Tobii Glasses, our range of biometric options or anything else then please don’t hesitate to get in touch via email@example.com or on +44 1189 000795.
Neuro-Tools : Essentials September 23, 2016Posted by eyetrackrob in Biometric, Captiv, Market Research, neuromarketing, Shopper Research, TEA, Technology.
In recent years eyetracking has become a standard measurement in many research fields, and with the “neuro” hype many companies and universities have started to add direct and/or indirect measurements of the central nervous system to their research toolbox, aiming to add an extra dimension to help understand human behaviour and decision making.
Far from being a complete catalogue of all the options currently available this series of posts will concentrate on the more practical, and commonly used, tools for commercial research – things such as salience mapping, eyetracking, facial expression analysis, electroencephalography (EEG), implicit association tests and galvanic skin response (GSR).
With the dawn of wearable fitness devices that can easily measure blood volume pulse (BVP), from which heart rate and heart rate variability (HRV) may be derived, access to these measurements has become much easier, although not without limitations, as will become clear in this series of blogs. Additionally, some of those wearable fitness devices allow some measurement of electro-dermal activity (EDA) and skin temperature, showing that this technology is not far from mainstream use, at least in some form.
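As a minimal sketch of what “deriving HRV from BVP” involves (not any manufacturer’s algorithm): once peak detection on the BVP signal has produced inter-beat intervals, mean heart rate and a common time-domain HRV index such as RMSSD fall out of simple arithmetic. The interval values below are hypothetical.

```python
import math

def heart_rate_bpm(ibi_ms):
    """Mean heart rate (beats per minute) from inter-beat intervals in milliseconds."""
    return 60000.0 / (sum(ibi_ms) / len(ibi_ms))

def rmssd(ibi_ms):
    """Root mean square of successive IBI differences, a time-domain HRV measure."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

ibis = [812, 790, 830, 805, 820]  # hypothetical inter-beat intervals in ms
print(round(heart_rate_bpm(ibis), 1))  # bpm
print(round(rmssd(ibis), 1))           # ms
```

The practical limitation hinted at above is that wrist-worn BVP peak detection degrades with movement, so the quality of the intervals, and hence of the HRV figure, varies with activity.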
Although the word “neuro” is very often thought of as a synonym for “brain”, neuroscience comprises the study of the complete nervous system, and the tools and techniques involved are suited to measuring, directly or indirectly, certain aspects of the processes occurring within it. These tools can be broadly divided into three categories: neuro measurements, behavioural measurements and biofeedback measurements. The latter is as good a starting point as any.
Our nervous system is quite complex and can be divided into different branches which monitor and control different body functions.
One of the branches, the sympathetic nervous system (SNS), is responsible for quick fight-or-flight reactions. By constantly assessing the surroundings and scanning for potentially dangerous situations, an evaluation takes place that leads to preparations for an adequate fight-or-flight reaction. These preparations can be measured throughout the body and include changes in heart rate, levels of sweat on hands and feet, and respiration.
The reactions of the SNS do not follow immediately upon exposure to the stimulus being evaluated. Reaction times and strengths are highly individual and distinct for different measures; they can vary between 400 milliseconds and 5 seconds. As part of the fight-or-flight reaction, the change in sweat levels on the palms and fingertips is thought to be an evolutionary mechanism allowing a firmer grip. Interestingly, this reaction can also be measured on the feet!
Changes in pulse are associated with changes in either physical exercise or arousal. If physical exercise is constant, heart rate variation can be a reliable index of arousal. Research has been conducted measuring different combinations of HRV and heart rate related to stress, to the identification of positive or negative valence, and even to specific emotions.
A third physiological measurement is respiration. The perception and anticipation of odours depend on respiration; in other words, our sense of smell, and therefore emotional activation through it, is enhanced by respiration. Research has associated respiration rate and depth with emotional impact and emotional valence.
At Acuity we provide tools to measure biofeedback synchronized with eyetracking, to help understand not only where people are looking but also the emotional impact it causes. We can provide a series of sensors from different manufacturers that can be brought together in Captiv L700, a software package from our friends over at TEA ergo (click here to see a video of the TEA Captiv software integrating a variety of neuro-tools).
We are also happy to help you with training to explain how those sensors work, what they are measuring and get you started on the analysis and interpretation side of things.
My next post will focus on GSR, but I will cover other biometrics, EEG, facial expression analysis and complements to eyetracking data in the following posts.
Stay tuned or feel free to get in touch via firstname.lastname@example.org to learn more about how to use neuro-tools in your research.
Where Is The Value From Eyetracking August 9, 2016Posted by Jon Ward in eye tracking, Glasses, Market Research, Marketing, Shopper Research, Technology, Tobii, Usability & UX.
add a comment
Over the past decade we have seen eyetracking move out of the research labs and academic institutes and begin to hit mainstream uses in markets such as gaming and the control of operating systems. But if we look holistically at every possible use and application for this amazing technology, one question crops up quite regularly: “what can’t you eyetrack?”
Potentially this could prompt a simple answer in which we list some obvious limitations of eyetracking as a technology, but I think the bigger question is “will eyetracking add value to what I am doing?”, as it isn’t always obvious where the return on investment lies in the data that eyetracking gives you.
There are actually very few situations where you can’t eyetrack people (or indeed some species of animal!). For example, Tobii equipment was recently used to eyetrack an F1 driver (https://www.youtube.com/watch?v=zjkUUMZnTnU), where the latest technology mounted readily inside the very snug and close-fitting helmet of Nico Hulkenberg. Staying with a sports theme, Zoe Wimshurst from Southampton Solent University used the Tobii Glasses on a gymnast who performed a number of backflips while the equipment not only stayed in place but also remained accurate, thanks to Tobii’s 4-camera binocular platform (https://twitter.com/ZoeWimshurst/status/760472938499936256). And while we are name-dropping, I eyetracked Cristiano Ronaldo some years ago (https://www.youtube.com/watch?v=2NcUkvIX6no) with a previous-generation platform. Sports is one thing, but what about something different… how about primate research? Yup, we can tick the box there as well, for example some trials we did with Edinburgh Zoo (http://www.living-links.org/2012/11/). The typical uses of consumer, shopper and online research, psychology, linguistics and infant research are all areas where eyetracking is heavily involved, and this is of course now developing into the virtual world, with systems mounted into VR and AR units for the next generation of these fields… not to mention interaction within gaming, both user testing and control applications (http://www.tobii.com/xperience/apps/the-division/).
So you might say, “great PR, but what does that give us apart from a YouTube video?” Well, let’s look at some examples, starting with F1: if by watching the eye movements and point of gaze of an F1 driver we can shave 0.1 seconds per lap from a 58-lap race, we gain almost 6 seconds. In the Australian GP, 7 drivers (from the 16 that finished) could have gained a position with this advantage, and one driver could have jumped 4 places, gaining 7 points in the drivers’ championship in the process. For a footballer, releasing the ball 0.25 seconds earlier, because visual performance training has given you the ability to read the field more efficiently, could be the difference between beating the offside trap and scoring, or dropping points in a multi-billion-pound race to the title. In elite performance the smallest of margins can mean winning or losing, and in today’s environment that could mean the difference between fame and fortune and fading into obscurity.
If we look at medical or clinical uses, being able to identify conditions like autism at an earlier stage (using non-verbal responses through measuring eye movements) allows parents and clinicians to adapt and plan a child’s education to minimise the impact on their development, and lets the family be more prepared moving forward. Building up databases of typically and non-typically developing children from all walks of life, both in and out of the lab, allows milestones to be measured and new learning or rehabilitation techniques to be developed. Being able to extract information without the need for self-reporting or verbal communication breaks down barriers that would otherwise mean a diagnosis might not be available for weeks, months or even years. Using the latest techniques for training, with both real and virtual presentation of scenarios, means we can now train healthcare professionals, surgeons and patients in situations that could be life-threatening, without the risk, and understanding exactly how they interact and engage gives us insights never before available.
When looking at process management, health and safety or manufacturing, there are always people in a workplace who are ‘naturals’ at what they do: they have either adapted to their task very comfortably and excelled, or, more likely, have become expert through repetition and learning. Using eyetracking we can observe how these people operate and understand if and how they anticipate next steps, how they scan and search for elements, and their situational awareness. Next we bring in the novice, or the person to improve, observe them, and compare them to our experts, guiding their interactions with a proven benchmark. An accident at work can be costly in both financial and possibly human terms, so use a simulator, VR environment or test area, monitor people’s actions and movements, and pre-empt possible bad situations. Does that forklift driver check either side of the load often enough? How is that member of the QA team better at spotting defects in products – is their search strategy different? What makes that soldier better at finding ground disturbance in the field and locating IEDs? How can we be sure a mechanic checks every inch of an engine during a service and that a vehicle is safe to use?
Let’s think about consumer research, a mainstay of eyetracking and an ever-growing marketplace. With the adoption of mobile devices, on-screen real estate is smaller, we consume information more quickly, and we need to be more efficient at being noticed, getting our message across and, of course, helping the customer with their journey. A 1% increase in click-throughs, sign-ups or user experience could mean huge increases in a company’s KPIs, but selling ideas and changes to a stakeholder can often be challenging. Eyetracking provides a very visual way to demonstrate why customers aren’t (or indeed are!) doing what was expected on a website, image or menu system. Jumping into the retail space, we are bombarded with products, signage, offers, POS, noise, colour and a whole lot more every time we walk through a shop entrance, a mall or a petrol forecourt. Consumers self-reporting their actions always has its limitations, and this is even more evident in a space as busy as a retail outlet. Our eyes are digesting heaps of information, our brain is processing and discarding things that aren’t pertinent to the task, and consumers simply can’t remember, never mind verbalise, all of this at the rate it happens. Unlock the subconscious by measuring the body’s leading input device: the visual system. Again, small performance gains at the checkout in one store quickly multiply into large increases across a brand, a retailer or globally. What distracts shoppers or draws their attention away from where we want them to look? What attracts them to our competitors? What elements do they use to navigate, make a decision or determine quality? Can people navigate around a virtual store before we invest in deploying the new layout?
Think about your project, objective or study: is the interaction with the stimulus, product, environment or other people of interest? Do you want to know what visual information participants use, and when, at any stage in the trial to inform the decision-making process? Do you want to understand why someone is better at a task than someone else? Do you want a very visible way of demonstrating a participant’s behaviour to a stakeholder? If the answer to any of these questions (or many more like them) is yes, then there is value in eyetracking for you.
Speak to us about methodologies for your study, the different types of equipment on hand and how we can help you get the insights you need.
Tobii Glasses 2 Software Updates! April 13, 2016Posted by Jon Ward in eye tracking, Glasses, Tobii, Updates.
1 comment so far
As part of the evolution of the Tobii Glasses 2 software platforms we are happy to announce some new functionality released and available now!
Event Logging in the Tobii Pro Glasses Controller
We have added the possibility to log live events in the Pro Glasses Controller software. This enables you to highlight interesting parts during the recording. The events can also be exported, together with all the other data collected, into the Tobii Pro Glasses Analyzer software.
Time of Interest Feature in the Tobii Pro Glasses Analyzer
The latest release of the Pro Glasses Analyzer includes the option to segment data by creating custom portions of it. With the Time of Interest feature, you can choose a start event and a stop event to get a clearly defined set of data for a particular event. We have also added the possibility to view logged live events, created in the Pro Glasses Controller, so you can find the interesting parts of the recording and use them when creating Times of Interest.
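Conceptually, a Time of Interest is just a filter that keeps the gaze samples recorded between a start event and a stop event. The sketch below illustrates that idea only; it is not Tobii’s implementation, and the event labels and gaze data are hypothetical.

```python
def time_of_interest(samples, events, start_label, stop_label):
    """Return the (timestamp, gaze) samples recorded between the two named events."""
    start = next(t for t, label in events if label == start_label)
    stop = next(t for t, label in events if label == stop_label)
    return [s for s in samples if start <= s[0] <= stop]

# Gaze samples as (time in s, (x, y) gaze point) and two logged live events.
gaze = [(0.5, (100, 90)), (1.2, (410, 300)), (2.7, (640, 480)), (4.0, (20, 30))]
events = [(1.0, "shelf_reached"), (3.0, "product_picked")]

print(time_of_interest(gaze, events, "shelf_reached", "product_picked"))
```

This is why logging live events during the recording pays off at analysis time: each start/stop pair immediately yields a clean slice of data for that part of the session.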
Other improvements in this release:
- New quick-access menu
- Possibility to resume Real-World Mapping by storing all queued automatic mapping tasks when closing the program
- Major performance and stability improvements for big projects, where some operations are 10+ times quicker