TalentIndikator©

We are proud of our product TalentIndikator©, and we are therefore equally transparent about the trustworthiness and validity of our testing tool. Here you can read all about TalentIndikator©.

General information about the TalentIndikator© test

Every test is a sample of reality. Our aim is to uncover the talent potential of each individual, enabling them to better articulate their strengths, develop and leverage their talents to build competencies, and place these within a systemic framework.

We actively address the uncertainty inherent in sampling – to what extent can the test reveal what we are truly looking for? The larger the sample, the greater the reliability. We have chosen to measure 34 talents and 3 credibility indicators. Control questions, answers, “Next & Time Out,” and response distribution analysis are our methods for ensuring credibility.

We use a criterion-related scoring approach, where the test results themselves are assessed. They are not compared to a given norm, meaning interpretation is not driven by deviations from an external standard, but by the talent completing the test.

The purpose is to indicate the individual’s talent profile as accurately as possible. Normative tests – where results are expressed as deviations from a norm – tend to encourage context-dependent responses. This means a test taker may overemphasize what they believe is the desirable role for a given project. Normative tests are based on hypotheses or statistics about what is most common. In contrast, ipsative tests aim to eliminate context and the “desired” profile.

We have chosen an ipsative structure, which reduces the likelihood of manipulation and the possibility of influencing the outcome.

The individual profile ranks the person’s preferences according to their evaluation of a series of “forced choices.”

Control questions are a construct whereby a number of questions are presented twice during the course of the test, and the degree of agreement is measured on both an absolute and a relative dimension. We look for the degree of consistency in the answers.
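A minimal sketch of such a consistency check, assuming "absolute" agreement means identical answers on both exposures and "relative" agreement is a simple correlation between the two rounds. All function names and data are illustrative, not TalentIndikator's actual code.

```python
def absolute_agreement(first, second):
    """Share of repeated items answered identically both times."""
    matches = sum(1 for a, b in zip(first, second) if a == b)
    return matches / len(first)

def relative_agreement(first, second):
    """Pearson correlation between the two rounds (a simple relative measure)."""
    n = len(first)
    ma, mb = sum(first) / n, sum(second) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(first, second))
    va = sum((a - ma) ** 2 for a in first)
    vb = sum((b - mb) ** 2 for b in second)
    return cov / (va * vb) ** 0.5

first_pass  = [3, 1, 4, 2, 5, 2, 3, 1]   # answers the first time an item appears
second_pass = [3, 2, 4, 2, 5, 1, 3, 1]   # answers when the same item reappears
print(absolute_agreement(first_pass, second_pass))  # → 0.75
print(relative_agreement(first_pass, second_pass))
```

A high value on both measures suggests consistent answering; a large gap between the two exposures flags the response set for closer inspection.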

To simulate the pressures of reality, a maximum time frame of 20 seconds is given per answer. We measure the time used and the number of times the clock ran out; in addition, the respondent can actively skip a question. This option ensures that respondents are not forced to make a choice about something they do not know, and it lets us measure precisely where a person is uncertain and perhaps governed by the given situation.
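A hypothetical sketch of the per-question bookkeeping this implies: each answer records the seconds used, whether the clock ran out, and whether the respondent actively skipped. Field names and figures are invented for illustration.

```python
TIME_LIMIT = 20.0  # maximum seconds per question, as stated above

def timing_summary(events):
    """events: list of (seconds_used, timed_out, skipped) per question."""
    return {
        "avg_seconds": sum(e[0] for e in events) / len(events),
        "time_outs": sum(1 for e in events if e[1]),
        "skips": sum(1 for e in events if e[2]),
    }

log = [
    (6.2, False, False),
    (20.0, True, False),   # clock ran out
    (3.1, False, True),    # actively skipped
    (11.5, False, False),
]
print(timing_summary(log))
```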

The last parameter is a topological illustration of the answer distribution, in which we look for mirroring around the central axis; with the chosen algorithm, such mirroring is taken as a sign that the person has completed the test with a high degree of clarified self-insight.
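One way such mirroring could be quantified – an illustrative sketch, not the actual algorithm – is to compare each bin of the answer histogram with its mirror-image bin and report the overlapping share:

```python
def mirror_symmetry(counts):
    """1.0 = perfectly mirrored histogram, lower values = lopsided answers."""
    mirrored = counts[::-1]
    overlap = sum(min(a, b) for a, b in zip(counts, mirrored))
    return overlap / sum(counts)

# Hypothetical distributions of forced choices across six answer positions.
symmetric = [5, 9, 14, 14, 9, 5]
lopsided  = [20, 12, 6, 3, 1, 0]
print(mirror_symmetry(symmetric))  # → 1.0
print(mirror_symmetry(lopsided))
```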

Taken together, we use these three parameters, plus the person's time spent, as an indicator of the overall validity and reliability of the answers.

TalentIndikator© is built on an ipsative scoring structure. All choices and skips arise from a forced ranking between two simultaneously presented, equivalent statements.

The purpose is to compare a person's relative preference for different value sets, not the person's absolute preference for each activity compared to other people. Ipsative scoring systems are well suited to ranking a person's scores.
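The forced-choice mechanics can be sketched as follows; the talents, items, and the rule of one point per chosen statement are illustrative assumptions, not the actual implementation:

```python
def ipsative_profile(items):
    """items: list of (talent_a, talent_b, choice) with choice in
    {'a', 'b', 'skip'}. Returns talents ranked by relative preference."""
    scores = {}
    for talent_a, talent_b, choice in items:
        scores.setdefault(talent_a, 0)
        scores.setdefault(talent_b, 0)
        if choice == 'a':
            scores[talent_a] += 1
        elif choice == 'b':
            scores[talent_b] += 1
        # a 'skip' adds no points but the talent still appears in the profile
    return sorted(scores, key=scores.get, reverse=True)

answers = [
    ("Analytical", "Empathy", "a"),
    ("Empathy", "Drive", "b"),
    ("Drive", "Analytical", "a"),
    ("Analytical", "Empathy", "skip"),
]
print(ipsative_profile(answers))  # → ['Drive', 'Analytical', 'Empathy']
```

Note that the output is purely a ranking within the person: it says nothing about how strongly this person prefers "Drive" compared to other people, which is exactly the point of ipsative scoring.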

Does the test measure what you expect? Will the test be reliable? Does the test show the same result over time?

When we discuss validity, it is because we want as high a degree of predictive validity as possible – this is why we built the test in the first place.

We consider validity through 3 different levels: Rules, Guidelines and Results.

Rules

The Data Protection Authority, in its capacity as administrator of the personal data legislation, sets requirements for the handling and processing of personal data. We work from four principles:

  • Availability – that a specified group of TalentInsights employees, at the right time and place, has access to the information the person has consented to their using.
  • Confidentiality – that others, who should not have access to this information, are prevented from processing it.
  • Integrity – that data are what they claim to be, i.e. that they have not been changed (for example deleted) without this being clearly stated.
  • Traceability – that it can be documented who has entered, viewed, changed, deleted, or otherwise processed data.

We are approved by the Data Protection Authority as a processor of personal data.

TalentCloud runs over a 128-bit encrypted SSL connection at i23, and the server is certified to approved standards.

This means, among other things, that every access to data must be secured through authentication, so that we can be as certain as possible that the person who gave the answers is the right person, and that others have not had the opportunity to change the registered data.

Data is stored in TalentCloud for 6 months, after which the data is anonymized.

Read about our processing of data here.

Guidelines

We comply with guidelines given by:

  • Dansk Psykologforening (1999)
  • Videnscenter for Professionel Personvurdering (2011)

on how to act in connection with personality tests, in our own instructions and working methods.

Interpretation of results – validity

Interpretation of results is guided by two perspectives: the internal and the external. The internal reflects the relations we have built into our model structure; the external expresses whether the people we evaluate can confirm the picture we draw of them.

The internal validity around the results has been secured by:

  • Asking 408 questions, formed as 204 pairs of positive statements
  • Asking about every factor (talent) 12 different times
  • Building in 16 control questions
  • Using an ipsative scoring structure
  • Applying time management to all questions (a maximum of 20 seconds per question)
  • Not allowing correction of earlier answers
  • Using a scale without a neutral midpoint
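The structural numbers above are internally consistent and can be verified with simple arithmetic: 204 pairs of statements give 408 questions, and 34 talents asked about 12 times each also give 408.

```python
# Cross-check of the internal-validity figures listed above.
pairs, talents, repetitions = 204, 34, 12

questions_from_pairs = pairs * 2                # each pair contributes 2 statements
questions_from_talents = talents * repetitions  # each talent appears 12 times

print(questions_from_pairs, questions_from_talents)  # → 408 408
```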

The external validity is supported by the fact that, to date, no test persons have been able to reject their test results – a few have commented on the order of the individual talents, but not on the existence of the relevant trait.

We regularly perform statistical evaluations of the underlying model. What we verify most often is the degree of independence between the 34 factors. It is important for the model's integrity and validity that the characteristics we mirror are as robust and unambiguous as possible.

Interpretation of results – reliability

  • The reliability of the test is checked partly by a longitudinal and partly by a split-half test design.
  • The accumulated base of test results is evaluated every 12 months, where the tests from the last 12 months are measured against all previously given results – also called test-retest reliability.
  • In addition, we randomly split the results into two groups and validate the reliability between them.
  • In both cases, reliability measures uniformity over time and across the population. We have not been able to find measurement errors in the system since it was released in 2005.
  • We measure with Cronbach's alpha on the whole population as well as on the split halves.
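For readers unfamiliar with the measure, Cronbach's alpha can be computed as below. This is a generic textbook implementation with an invented response matrix, not TalentIndikator's data or code.

```python
def cronbach_alpha(rows):
    """Cronbach's alpha for a response matrix: one list of item scores
    per respondent. alpha = k/(k-1) * (1 - sum(item variances)/total variance)."""
    n_items = len(rows[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[i] for row in rows]) for i in range(n_items)]
    total_var = variance([sum(row) for row in rows])
    return n_items / (n_items - 1) * (1 - sum(item_vars) / total_var)

# Five hypothetical respondents answering four items on a 1-5 scale.
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]
print(round(cronbach_alpha(responses), 3))  # → 0.957
```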

Test of correlations

The average of the 561 pairwise correlations was previously measured at 0.144 and most recently, in 2020, at 0.122. With a 99% confidence interval on both positive and negative correlations, the analysis shows that it is between 91.35% and 96.67% certain that there is no correlation between the 34 individual talents. In other words, the model structure ensures that there is no statistical correlation between the ways in which the individual preferences are uncovered.
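The figure of 561 follows directly from the 34 talents themselves: every unordered pair of factors contributes one correlation, n(n − 1)/2 in total. A quick check (variable names are our own):

```python
from itertools import combinations

n_talents = 34
pairs = list(combinations(range(n_talents), 2))
print(len(pairs))  # → 561

# The reported statistic is then simply the mean over those pairs, e.g.:
# mean_r = sum(corr[i][j] for i, j in pairs) / len(pairs)
```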

Test of independence with the help of random numbers

We have additionally run a test that compares the empirical data with a randomly generated data population: a so-called Monte Carlo run of 561 hypothetical correlations, placed in the interval [-0.405; 0.457] – the smallest and largest values in the empirical material. We used Excel's Data Analysis add-in with random numbers and a uniform distribution.

Looking at the 99% confidence interval, the lower limit for the random run (-0.004) and the upper limit for the empirical run (-0.014) lie very close to each other: about 1/100 of the distance in the interval [0; -1], or 1/200 – that is, per mille – relative to the correlations' possible outcome range [-1; 1].
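As an illustration only – not the original Excel workflow – the same kind of Monte Carlo comparison can be sketched in Python: draw 561 uniform random "correlations" in the empirical range and compute a 99% confidence interval for their mean. The seed and all names are our own assumptions.

```python
import math
import random

random.seed(7)  # fixed seed so the sketch is reproducible

# 561 hypothetical correlations, uniform on the empirical range [-0.405, 0.457].
draws = [random.uniform(-0.405, 0.457) for _ in range(561)]

mean = sum(draws) / len(draws)
sd = math.sqrt(sum((x - mean) ** 2 for x in draws) / (len(draws) - 1))
half_width = 2.576 * sd / math.sqrt(len(draws))  # z ≈ 2.576 for 99% confidence

lower, upper = mean - half_width, mean + half_width
print(round(lower, 3), round(upper, 3))
```

The resulting interval is narrow relative to the full range [-1; 1], which is the kind of comparison the paragraph above makes between the random and the empirical runs.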