Deductive reasoning screening tests discussion

Discuss applications to the clearing house (and to courses that are not in the clearing house system), screening assessments, interviews, reserve lists, places, etc. here
RJParker
Posts: 257
Joined: Thu Feb 13, 2014 3:44 pm

Re: Deductive reasoning screening tests discussion

Post by RJParker » Thu Feb 27, 2020 8:11 am

Spatch wrote:
Wed Feb 26, 2020 10:38 pm
Ultimately, how much of this conversation is about building a system that picks the best clinical psychologists, or just building a system that picks us?
You should sticky that post somewhere :-)

PinkFreud19
Posts: 44
Joined: Sat May 18, 2019 3:08 pm

Re: Deductive reasoning screening tests discussion

Post by PinkFreud19 » Thu Feb 27, 2020 1:07 pm

Thank you for your view, Spatch. I understand and appreciate the perspective that the tests are a useful way of quickly and efficiently reducing large numbers of applicants without necessarily resulting in a detriment to the quality of those selected. This is why I am not opposed to the use of selection tests more generally, or even the use of GMA tests in moderation (i.e. amongst some, but not all, of the courses).

I also acknowledge that each form of selection process will be biased against particular groups in its own way. Again, for that reason, I think it would be unfair on some if all courses only shortlisted using interviews, or only using group exercises.

The reason that I disagree with their widespread use is not so much that it would diminish the quality of clinical psychologists, but more to do with the ethics of restricting opportunity for some groups over others. After all, is that not why the use of IQ tests in job recruitment is generally banned in Western countries, as they are biased against disadvantaged groups?

At best, they limit access to the profession for those who, while potentially competent, do not have their skills and talents best evinced by the processes associated with GMAs. At worst, they are potentially discriminatory against people from disadvantaged backgrounds who are not able to access the same opportunities as others to become effective on these tests. Again, if there are data to contradict this claim, I welcome them.

Even if it is not specifically the minority and disadvantaged groups that are adversely affected, but perhaps other groups (those who are competent but nonetheless struggle to demonstrate this on GMAs), I think it still boils down to the question of “why is equality of opportunity important to us?”, when you could use the argument, as has been done here, that “it doesn’t matter if some don’t make it because there are plenty of competent people who do”. My question is more “how comfortable are we with a selection system that bars access to some people, often on the basis of circumstance or life experience, that otherwise would be excellent?”. Admittedly, this criticism does not selectively apply to GMAs, which is why I feel that a mix of methods among courses is best.

The other ethical issue I take is with the stress and psychological impact of these tests, in an already anxiety-provoking situation. The idea that your entire year’s chance of being successful in the application rests on as little as 30 mins of performance in an abnormally stressful situation, is pretty brutal. What’s more brutal, still, is the ease with which “failure” at this stage can be misinterpreted as being “not intelligent enough”, when this could be entirely false. Can we blame people for making this interpretation when their sense of vulnerability is at its peak and we tell them that “the tests are highly valid [so it must be that you’re not good enough]”? It’s the brutality of this particular methodology that makes me feel uncomfortable with GMAs. They do not feel like psychologically minded methods of selection.

Your last point is a good thought experiment. If we modify it slightly, so that it is about a friend who we know would be excellent but who does not benefit from the selection process, we might empathise with them and be upset on their behalf that the profession has so rigidly fixated on one particular method, and feel that it is unfair that they do not give everyone a fair chance to demonstrate their skills. This is the sentiment that I was trying to capture in my above points, while also attempting to acknowledge the role of my own biases from personal experience of this process.

To clarify, I’m not arguing that we are going down some slippery slope that ends in all courses using GMAs. I’m arguing that this should be avoided at all costs.

Spatch
Posts: 1435
Joined: Sun Mar 25, 2007 4:18 pm
Location: The other side of paradise

Re: Deductive reasoning screening tests discussion

Post by Spatch » Fri Feb 28, 2020 5:39 pm

PinkFreud, you make some good points. Addressing each one in turn:
After all, is that not why the use of IQ tests in job recruitment is generally banned in Western countries, as they are biased against disadvantaged groups?
IQ tests aren't 'banned' per se, but they are usually not used because a) valid, professionally administered IQ tests are lengthy and expensive, and b) they are not predictive of outcomes for the vast majority of jobs.

https://www.hiresuccess.com/resources/g ... ting-legal

On the contrary, it has been argued that psychometric testing levels the playing field compared to selecting people from Oxbridge or via a tap on the shoulder at public school (which is often how we used to recruit to the civil service or MI6). This has led to the rise of standardised tests at graduate assessment centres, GMAs and similar, which are not exactly kosher IQ tests, but would load heavily onto your WAISes and what have you if you were to compare.
At best, they limit access to the profession for those who, while potentially competent, do not have their skills and talents best evinced by the processes associated with GMAs. At worst, they are potentially discriminatory against people from disadvantaged backgrounds who are not able to access the same opportunities as others to become effective on these tests. Again, if there are data to contradict this claim, I welcome them.
There is some truth to this, but the full story is quite nuanced. In short, a good test that is valid and reliable for the particular task should not discriminate, and part of the process of devising the selection is being aware of the strengths and weaknesses of the various tests and their applicability (which is incidentally part of what you learn on your DClinPsy). If it is felt that a test may discriminate (usually unintentionally), reasonable adjustments need to be made, or variants produced that are culture specific. There are actually government guidelines around this, like the POST technical report Psychometric Testing in the Workplace (1995).

However, there are also incidents where tests have been bluntly applied and have caused unnecessary disadvantage e.g. this: https://www.porterdodson.co.uk/blog/psy ... riminatory

Note this isn't against the use of psychometric tests, it's about the inappropriate use of them.
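(For anyone wondering what "checking whether a test discriminates" can actually look like, here is a minimal sketch of one common first-pass screen, the 'four-fifths' adverse-impact ratio. The group labels and numbers are invented purely for illustration and are nothing to do with any real clearing house data.)

Code: Select all

# Illustrative only: a first-pass adverse-impact screen on shortlisting data.
# Group names and counts are invented; real figures would come from a course's
# own equality monitoring.

applicants = {
    # group: (number who sat the test, number passing the cut-off)
    "group_a": (400, 120),
    "group_b": (250, 45),
}

# Selection rate for each group
rates = {group: passed / sat for group, (sat, passed) in applicants.items()}

# 'Four-fifths' rule of thumb: flag any group whose selection rate falls below
# 80% of the best-performing group's rate. A flag is a prompt to investigate,
# not proof of discrimination.
best_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best_rate
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, ratio vs best {ratio:.2f} -> {status}")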
My question is more “how comfortable are we with a selection system that bars access to some people, often on the basis of circumstance or life experience, that otherwise would be excellent?”.
I think that's a good question. I guess my first thought would be: what proof or evidence do you have that these candidates would actually be 'excellent', and how would that excellence be defined? My second is that, as a society in 21st Century Britain, we actually seem very comfortable allowing brilliant minds to rot away in call centres, exploiting them as Deliveroo riders, or firing them when they get to the age of 50. If you ask me, the whole education system isn't built around brilliance but conformity, with brilliant minds often being seen as troublemakers or embryonic malcontents (witness the majority reaction to things like Universal Basic Income, Extinction Rebellion or any other radical change).

But let's take it back to your question. I actually happen to believe that we do keep out many excellent candidates, if only for reasons of sheer numbers. I also happen to believe that brilliant DClinPsy applicants would go on to make brilliant teachers, medics, graduate scheme trainees, managers, researchers, nurses, activists, care workers, therapists etc., all of which have very different selection criteria. For me, it's not the excellent candidates I think about the most, because in my experience those tend to do fine. It's the more mediocre ones, who have a really hard road ahead of them, that I worry about.
The idea that your entire year’s chance of being successful in the application rests on as little as 30 mins of performance in an abnormally stressful situation, is pretty brutal.
How is this different from a traditional interview situation? Doesn't that also disadvantage certain people? Can there ever be a stress-free method of selection, I wonder? I would also counter that using standardised psychometrics and an array of assessment techniques, analysing data, refining methodologies and trying to reduce subjectivity/bias are pretty much in sync with the fundamental principles of psychological science (of which clinical psychology is a branch). Selection isn't person-centred, or 'psychologically minded' around the needs of the individual, because it's not a therapeutic intervention and isn't actually being done to benefit the applicant; it's purely for the benefit of the courses.

I am intrigued by how a genuinely workable 'psychologically minded' selection procedure would operate, or even what it would look like (bearing in mind the limitations of volume, physical resources, time and existing employment legislation).
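(Going back to the point about standardised psychometrics for a moment: a raw score only means anything once it is placed against a norm group, which is part of how subjectivity gets squeezed out of the process. A rough sketch of that arithmetic is below; the norm-group mean and SD are made-up numbers, not the actual norms of any screening test.)

Code: Select all

# Illustrative only: converting a raw test score into a z-score and percentile
# against an assumed norm group. Mean and SD here are invented, not real norms.
from statistics import NormalDist

NORM_MEAN = 24.0  # hypothetical norm-group mean raw score
NORM_SD = 5.0     # hypothetical norm-group standard deviation

def standardise(raw_score: float) -> tuple[float, float]:
    """Return (z-score, percentile) for a raw score, assuming roughly normal scores."""
    z = (raw_score - NORM_MEAN) / NORM_SD
    percentile = NormalDist().cdf(z) * 100
    return z, percentile

z, pct = standardise(28)
print(f"raw 28 -> z = {z:.2f}, roughly the {pct:.0f}th percentile")

The point is simply that everyone's raw score is read off the same scale, which is the sense in which this is less subjective than an unstructured judgement.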
If we modify it slightly, so that it is about a friend who we know would be excellent but who does not benefit from the selection process, we might empathise with them and be upset on their behalf that the profession has so rigidly fixated on one particular method, and feel that it is unfair that they do not give everyone a fair chance to demonstrate their skills.
If you were to modify it in that way, I think empathising and being upset on their behalf would be considerably less helpful than finding routes that would value their alternative attributes and strengths. If we are talking about rigidity, I would encourage them not to fixate on CP as the only profession, or to view selection as some kind of existential validation, and instead to see it as one option among many where they could possibly shine.
Shameless plug alert:

Irrelevant Experience: The Secret Diary of an Assistant Psychologist is available at Amazon
http://www.amazon.co.uk/Irrelevant-Expe ... 00EQFE5JW/

HK16
Posts: 2
Joined: Wed Feb 26, 2020 11:17 am

Re: Deductive reasoning screening tests discussion

Post by HK16 » Fri Mar 06, 2020 1:44 pm

Found the deductive reasoning test - more like deductive unreasoning, haha

I practised both the verbal and the deductive and felt stronger with the deductive, but actually found the verbal went a lot better than I expected and the deductive was a shambles.

I think half my time was wasted keeping the 'up' arrow pressed because it wouldn't let me scroll up to see the top part of the questions. I wish they had released a practice test for the deductive, but hey ho, what is done is done. Just sad that I thought doing the tests gave me a bit of control over the process, but it definitely didn't.

PinkFreud19
Posts: 44
Joined: Sat May 18, 2019 3:08 pm

Re: Deductive reasoning screening tests discussion

Post by PinkFreud19 » Mon Apr 06, 2020 11:54 pm

Hi Spatch, I'd just like to say that I appreciate your eloquently written and well thought-through post. Your time has not gone to waste but I have found myself in a busy spot in recent times and have not had a free moment to write a reply that does your comment justice.

In the meantime, I agree and you have certainly softened my attitude. I suppose my contention has evolved from "I really dislike the use of GMAs" to "I believe diversity of selection processes between courses is optimal both for selecting a diverse range of candidates and for sitting more comfortably with me from an ethical perspective". I would, however, be quite disappointed if, in ten years' time, all courses required psychometric testing for shortlisting.

To be open, the source of this disgruntlement stems from the fact that I have always performed top of my class in statistics and the mathematical side of psychology, right through undergraduate and master's. I'd never struggled at all in this area. I was a little shocked, therefore, when I scored at a pretty embarrassing percentile on the numerical reasoning test. Being subsequently reminded that these tests are valid, with the tempting interpretation that this must mean I am poor at numerical reasoning, or generally stupid, sits quite uncomfortably with a view of myself that all previous sources of evidence had built up. If I then try and reason that perhaps I was tired, and the mistakes I made on the test compounded and snowballed, plus some anxiety, this then feels as if I am trying to make excuses for myself; is this externalising blame? And at a meta-level, the personal uncertainty and cognitive dissonance created by this experience is perhaps a large contribution to my dislike of them. I'm aware of these biases of personal experience playing out but, at the same time, I think this experience gives me insights into some of the disadvantages.

miriam
Site Admin
Posts: 7938
Joined: Sat Mar 24, 2007 11:20 pm
Location: Bucks

Re: Deductive reasoning screening tests discussion

Post by miriam » Tue Apr 07, 2020 4:05 pm

PinkFreud19 wrote:
Mon Apr 06, 2020 11:54 pm
Being subsequently reminded that these tests are valid, with the tempting interpretation that this must mean I am poor at numerical reasoning, or generally stupid, sits quite uncomfortably with a view of myself that all previous sources of evidence had built up. If I then try and reason that perhaps I was tired, and the mistakes I made on the test compounded and snowballed, plus some anxiety, this then feels as if I am trying to make excuses for myself; is this externalising blame? And at a meta-level, the personal uncertainty and cognitive dissonance created by this experience is perhaps a large contribution to my dislike of them. I'm aware of these biases of personal experience playing out but, at the same time, I think this experience gives me insights into some of the disadvantages.
Indeed, this is about your attributions, and whether they are comfortable external stable attributions ("an unfair process") or uncomfortable internal ones ("I didn't perform as well as I expected"). As you've been able to identify your initial thoughts (which are quite self-critical) and later attempts at reasoning (including describing legitimate possibilities as "making excuses"), you probably have quite a lot to reflect on. I'd think a lower test score might reflect high standards of competition (perhaps markedly higher than your cohort during your degree), stress/anxiety (particularly if you were telling yourself that your score on this task was a gatekeeper for your professional aspirations), tiredness, misreading of questions, perfectionism or spending too long on them, or the particular modality or type of questions might not match the prior assessments where you had done well, but it might also reflect your expectations (e.g. if you thought "I'm quite good at maths, so I can focus on preparing for the other stuff"). Thinking about all this will hopefully guide you as to how you can challenge some of those attributions, and either choose to change your preparation for next year or select courses with different selection methods.
Miriam

See my blog at http://clinpsyeye.wordpress.com

PinkFreud19
Posts: 44
Joined: Sat May 18, 2019 3:08 pm

Re: Deductive reasoning screening tests discussion

Post by PinkFreud19 » Wed Apr 08, 2020 12:51 pm

miriam wrote:
Tue Apr 07, 2020 4:05 pm
Indeed, this is about your attributions, and whether they are comfortable external stable attributions ("an unfair process") or uncomfortable internal ones ("I didn't perform as well as I expected"). As you've been able to identify your initial thoughts (which are quite self-critical) and later attempts at reasoning (including describing legitimate possibilities as "making excuses"), you probably have quite a lot to reflect on. I'd think a lower test score might reflect high standards of competition (perhaps markedly higher than your cohort during your degree), stress/anxiety (particularly if you were telling yourself that your score on this task was a gatekeeper for your professional aspirations), tiredness, misreading of questions, perfectionism or spending too long on them, or the particular modality or type of questions might not match the prior assessments where you had done well, but it might also reflect your expectations (e.g. if you thought "I'm quite good at maths, so I can focus on preparing for the other stuff"). Thinking about all this will hopefully guide you as to how you can challenge some of those attributions, and either choose to change your preparation for next year or select courses with different selection methods.
I appreciate your thoughts, Miriam, thank you. It is quite helpful to see these factors spelled out. There is some validity to the idea that performance on these tests is multifactorial, which is partly why I came to quite a critical conclusion about their use, and it makes me ask: how often is this occurring in others too? And are there minority or disadvantaged groups (I'm thinking beyond race and gender here) that are disproportionately affected by these factors?

On the flip side, I need to be careful not to go too far the other way with the external attributions, as you quite rightly pointed out. I was, perhaps, too tired and stressed when I sat the test, and I may not have given it as much practice and thought as the verbal test. These are definitely factors that I could have taken personal accountability for.

Furthermore, as Spatch pointed out, these factors are not unique to GMA tests; they also occur with interviews and group tasks. Although, to balance that point, interviews are an inevitable part of the process while GMAs are not.

To note, I was successful in my application last year and I am currently a first-year trainee. I am very grateful that non-GMA options were available, as this played to my strengths. However, I appreciate the danger of biases in favour of supporting an application system that favours us, rather than one that is fair, and that some brilliant future CPs are not in training with the explicit help of GMAs.

Spatch
Posts: 1435
Joined: Sun Mar 25, 2007 4:18 pm
Location: The other side of paradise

Re: Deductive reasoning screening tests discussion

Post by Spatch » Wed Apr 08, 2020 1:36 pm

Glad you found it thought provoking.
In the meantime, I agree and you have certainly softened my attitude. I suppose my contention has evolved from "I really dislike the use of GMAs" to "I believe diversity of selection processes between courses is optimal both for selecting a diverse range of candidates and for sitting more comfortably with me from an ethical perspective". I would, however, be quite disappointed if, in ten years' time, all courses required psychometric testing for shortlisting.
I personally think that the diversity of selection methods works well to a large extent, and would like to see a range of courses using methods of selection that fit with their individual ethos, i.e. more research-focussed courses testing research skills and more psychodynamically oriented courses using compatible selection methods. That said, I am very much aware of the outside perspective of many who feel that clinical psychologists are very much a 'cookie cutter' group and that our selection methods aren't diverse enough (not me though - I look like I belong more on an MBA intake, apparently).
Being subsequently reminded that these tests are valid, with the tempting interpretation that this must mean I am poor at numerical reasoning, or generally stupid, sits quite uncomfortably with a view of myself that all previous sources of evidence had built up. If I then try and reason that perhaps I was tired, and the mistakes I made on the test compounded and snowballed, plus some anxiety, this then feels as if I am trying to make excuses for myself; is this externalising blame?
I agree with Miriam's response about this and her possible reasons from a cognitive/performance perspective. However, regarding your question about externalising blame, if I were in therapist mode I would wonder if there was something more deep-seated about that particular aspect of yourself being challenged, or if there were any core beliefs being activated by that experience. I think these often come out at times like exams, dating, careers etc. What is so bad about "making excuses" in this context, and how could we possibly test this?

You are not alone in that defensiveness, though. You would be surprised how many times I have had the conversation with psychologists who advocate selection tests yet are not willing to sit them themselves and compare their results against their qualified/course team peers. As a supervisor, I have even been surprised by how many of my peers are reluctant to have their cognitive abilities tested (on tests they aren't familiar with or that are not knowledge based), to have their therapy directly rated by trainees, or to open themselves to potentially negative feedback from trainees. It's not just those at the start of the journey.
And at a meta-level, the personal uncertainty and cognitive dissonance created by this experience is perhaps a large contribution to my dislike of them. I'm aware of these biases of personal experience playing out but, at the same time, I think this experience gives me insights into some of the disadvantages.
That's pretty impressive self awareness and honesty. That will stand you in good stead in the future.
And are there minority or disadvantaged groups (I'm thinking beyond race and gender here) that are disproportionately affected by these factors?
Good research question, and the existing literature suggests that there are, for factors such as social class, parental education/wealth, ASD etc. Anecdotally, I suspect they would load heavily onto some of the Big 5 personality traits, as most academic tests do. Then again, some of this is tautological, because tests are supposed to pick up 'advantage', be that cognitive, interpersonal, experiential or knowledge based, and it would be strange to have a test that is not impacted by that. Above all, I would say most methods of selection for courses do directly, and harshly, discriminate against those that are not 'psychologically minded'. I will leave it up to you if you think that is good or bad.
Shameless plug alert:

Irrelevant Experience: The Secret Diary of an Assistant Psychologist is available at Amazon
http://www.amazon.co.uk/Irrelevant-Expe ... 00EQFE5JW/

hawke
Posts: 143
Joined: Tue Feb 07, 2017 11:10 am

Re: Deductive reasoning screening tests discussion

Post by hawke » Wed Apr 08, 2020 3:16 pm

Spatch wrote:
Wed Apr 08, 2020 1:36 pm
Above all, I would say most methods of selection for courses do directly, and harshly, discriminate against those that are not 'psychologically minded'. I will leave it up to you if you think that is good or bad.
I'd be really interested to hear you expand on this, Spatch.

Anecdotally, I definitely agree with this. It took me 5 interviews to get on the course across 2 years of applying. I had no issues getting interviews, and was very ready clinically and academically. It took the first application with 2 brutal rejections at interview and a reserve offer that came to nothing to make me realise how personally unprepared I was. I spent a year working on this, catalysed by a significant bereavement during the 2nd application, and now my course colleagues and friends/family would say the ability to reflect and think psychologically is a real strength of mine. But it took a lot of personal work to tap into that side of me.

I performed well in the screening tests (to acknowledge my own bias towards them!). One thing I think they capture well is being able to think quickly. The CPs I most admire, whether in the NHS or academia, are those that are able to make good reflective evidence-based decisions at speed multiple times throughout the day. Obviously on an individual level, confidence, knowledge and experience feed into that, and many systemic factors like good team relationships and organisational structures also contribute - but being able to digest and process a lot of information quickly is a skill common to all senior CPs I have met so far.

miriam
Site Admin
Posts: 7938
Joined: Sat Mar 24, 2007 11:20 pm
Location: Bucks

Re: Deductive reasoning screening tests discussion

Post by miriam » Wed Apr 08, 2020 6:55 pm

PinkFreud19 wrote:
Wed Apr 08, 2020 12:51 pm
To note, I was successful in my application last year and I am currently a first-year trainee. I am very grateful that non-GMA options were available, as this played to my strengths. However, I appreciate the danger of biases in favour of supporting an application system that favours us, rather than one that is fair, and that some brilliant future CPs are not in training with the explicit help of GMAs.
I thought you were a trainee already, and quite a reflective one at that, but I was confused by your post and was making a more general point.

I'd agree that there is a broader theme of how we all like tests that suit our patterns of skills, but to be genuinely diverse as a profession we have to be willing to look at how we recruit a range of varied people who have the right competencies to bring to the profession, not use ourselves as templates for more of the same, however unconsciously.
Miriam

See my blog at http://clinpsyeye.wordpress.com
