UPMC Physician Resources
Emerging Frontiers in Concussion - Session 2: Concussion Assessment and Clinical Profiles
Doctors Noel Zuckerbraun, Michael Collins and Philip Schatz present and discuss the latest research on concussions and traumatic brain injuries.
Upon completion of this activity, participants should be able to:
- Conceptualize concussion as a heterogeneous injury
- Describe clinical subtypes of concussion
- Improve their ability to interpret baseline and post-concussion assessment data
- Better identify factors influencing performance during baseline and post-concussion assessment
- Recognize the complexities of the psychometrics of the “reliability” of concussion assessment measures.
- Describe challenges to Emergency Department concussion management in youth
- Discuss and advocate for ED best practice for acute concussion management in youth
- Statements of Agreement From the Targeted Evaluation and Active Management (TEAM) Approaches to Treating Concussion Meeting Held in Pittsburgh, October 15-16, 2015. Collins MW, et al. Neurosurgery. 2016 Oct 12
- Concussions in American Football. Womble MN, Collins MW. Am J Orthop (Belle Mead NJ). 2016 Sep/Oct;45(6):352-356
- A revised factor structure for the post-concussion symptom scale: baseline and postconcussion factors. Kontos AP, Elbin RJ, Schatz P, Covassin T, Henry L, Pardini J, Collins MW. Am J Sports Med. 2012 Oct;40(10):2375-84
- Computer-related anxiety: examining the impact of technology-specific affect on the performance of a computerized neuropsychological assessment measure. Browndyke JN, Albert AL, Malone W, Schatz P, Paul RH, Cohen RA, Tucker KA, Gouvier WD. Appl Neuropsychol. 2002;9(4):210-8
- Assessing Symptoms in Adolescents Following Sport-Related Concussion: A Comparison of Four Different Approaches. Elbin RJ, Knox J, Kegel N, Schatz P, Lowder HB, French J, Burkhart S, Collins MW, Kontos AP. Appl Neuropsychol Child. 2016 Oct-Dec;5(4):294-302. doi: 10.1080/21622965.2015.1077334.
- Trends in visits for traumatic brain injury to emergency departments in the United States. Marin JR, Weaver MD, Yealy DM, Mannix RC. JAMA. 2014 May 14;311(18):191
- Use of modified acute concussion evaluation tools in the emergency department. Zuckerbraun NS, Atabaki S, Collins MW, Thomas D, Gioia GA. Pediatrics. 2014 Apr;133(4):635-42.
- Benefits of strict rest after acute concussion: a randomized controlled trial. Thomas DG, Apps JN, Hoffmann RG, McCrea M, Hammeke T. Pediatrics. 2015 Feb;135(2):213-23.
- Clinical Risk Score for Persistent Postconcussion Symptoms Among Children With Acute Concussion in the ED. Zemek R, Barrowman N, Freedman SB, Gravel J, Gagnon I, McGahern C, Aglipay M, Sangha G, Boutis K, Beer D, Craig W, Burns E, Farion KJ, Mikrogianakis A, Barlow K, Dubrovsky AS, Meeuwisse W, Gioia G, Meehan WP 3rd, Beauchamp MH, Kamil Y, Grool AM, Hoshizaki B, Anderson P, Brooks BL, Yeates KO, Vassilyadi M, Klassen T, Keightley M, Richer L, DeMatteo C, Osmond MH; Pediatric Emergency Research Canada (PERC) Concussion Team. JAMA. 2016 Mar 8;315(10):1014-25.
Dr. Michael Collins has financial interests with the following proprietary entity or entities producing health care goods or services as indicated below:
Stockholder: ImPACT Applications Inc.
Dr. Philip Schatz has financial interests with the following proprietary entity or entities producing health care goods or services as indicated below:
Consultant: ImPACT Applications Inc. Member of the ImPACT Advisory Board.
Dr. Noel Zuckerbraun has reported no relevant relationships with proprietary entities producing health care goods or services.
All presenters' disclosures of relevant financial relationships with any proprietary entity producing, marketing, re-selling, or distributing health care goods or services used on, or consumed by, patients are listed above. No other planners, members of the planning committee, speakers, presenters, authors, content reviewers and/or anyone else in a position to control the content of this education activity have relevant financial relationships to disclose.
The University of Pittsburgh School of Medicine is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians.
The University of Pittsburgh School of Medicine designates this enduring material for a maximum of 1.5 AMA PRA Category 1 Credits™. Each physician should only claim credit commensurate with the extent of their participation in the activity. Other health care professionals are awarded 1.50 continuing education units (CEU), which are equivalent to 1.5 contact hours.
For your credit transcript, please access our website 4 weeks post-completion at http://ccehs.upmc.edu and follow the link to the Credit Transcript page. If you do not provide the last 5 digits of your SSN on the next page you will not be able to access a CME credit transcript. Providing your SSN is voluntary.
Release Date: 12/14/16 | Last Modified On: 12/14/16 | Expires: 12/14/17
I'd like to invite you for this talk to be in the emergency department, but not as a patient, not as a consultant, rather as a fly on the wall. I really want you to think about the setting, because to understand how to manage any condition, including concussion, in the emergency department you have to understand the setting and what the goals are.
So we see the spectrum of TBI in the ED, but just to put it in perspective, a typical case, and we'll carry this through. We have a 14 year old girl who was hit in the head during a soccer game. She fell backwards, had brief LOC, no amnesia. Her symptoms are headache, fatigue and dizziness. She does have a history of migraines and she had a concussion about a year ago that took about 3 weeks to recover. She has a totally normal physical exam without any focal deficits, but she seems to maybe be answering questions a little bit slowly. So she is in the ED on a busy Saturday night with some worried parents. Thinking about what the best management for her will be is the basis of this talk. And hopefully at the end of this brief 15 minutes you'll be able to describe some challenges to concussion management in the emergency department, in youth in particular, and to discuss an important need to advocate for best practice for ED concussion management.
So as Mickey just said, the Emergency Dept. is often the frontline for those who are going to seek care acutely. Annually and nationally we see about 750,000 visits for concussion in youth. We are seeing more patients in the ED; I'm sure it's no surprise that, for various reasons, there is a national increase in TBI visits, and no surprise that the majority of them are mild traumatic brain injury.
This is data from Jen Marin, who is faculty in our division, who looked at national data from 2006 to 2010: there was a 30% increase in TBI visits. There are also reports of as much as a 60% increase over the last decade. Children's in Pittsburgh is no exception. We are the busiest ED in the city, just to give a local perspective; "if you build it they will come" is true. In 2009 we moved to Lawrenceville from Oakland and our volume has continued to increase. We see about 76,000 patients annually in the ED. Not all EDs are the same, but volume can be an issue, and when do we see our highest volume? When kids are out of school and when they are playing sports.
So what's the ED's role? Acutely we have to worry about which injuries are going to require immediate intervention; that's still on our mind in the Emergency Dept. setting and that's still our first responsibility regardless. The good news is that the majority of kids do not need CT scans and they can be evaluated clinically. As we've heard as a theme here already, the clinician really is the most important step, and we have good evidence to support who needs a CT and who doesn't.
This was a landmark evidence based paper published in 2009 in the Lancet: over 40,000 children, 25 Emergency Depts. nationally, and a derivation and validation set to show which kids are at low risk for clinically important head trauma. And this really, I think, helps us move the conversation from what kind of imaging needs to be done to diagnosing and managing concussion in the ED. Unfortunately for some this is a challenge. The ED teachable moment: some folks come to the ED for a CT, and the fear of the unknown can drive expectations. So some of the visit can be taken up by discussing that, which can be unfortunate and deter some of the conversation about concussion.
So why don't we just CT everyone, right? Well, I think this isn't new news, but just to bring it up as a point that we have to consider: CTs are more available and we certainly have seen an increased use of them. This is data from children 5 to 14 years of age from 1996 to 2010, and you see the rise there; that top line with the circles is head CTs. So why do we care? It's radiation, and although there is not an exact correlation we can make, we know that the risk is not zero and that it potentially has more of a possibility to play out in a child's lifetime, and so we need to consider that risk.
So moving on to another challenge in the Emergency Dept.: recognition. Of course, the kid that comes from the field boarded and collared, who lost consciousness, we usually don't miss that as a head injury. But occasionally we have traps for tunnel vision. So the field sees this and we see this. The field sees this and we see this. And so there is a risk of not recognizing the concussion when there are other distracting injuries taking place at the same time. So standardization for recognition in the Emergency Dept. setting is something that we'll get to.
The setting itself is challenging, and particularly in an academic ED setting: we have a steady group of about 30 attending clinicians that we can educate about cutting edge therapies and management; however, we have over 450 trainees annually in our Emergency Dept., so another challenge is standardization of the information that is brought forward to the family.
So we have these challenges, which I just went through: we are seeing more kids, the discussion can potentially be derailed, we have trouble at times recognizing the injury, and we need to provide standardization of information. How do we do that? This is really where we've worked with Dr. Collins and the Concussion Group to determine best practice and to turn these challenges into opportunities. And what I'd like you to do as you think about this is think about ways that you could reach out and advocate for best practice in your local Emergency Depts.
So one of the important projects that we did with Children's National in DC was to look at the CDC's ACE concussion tools, both the diagnostic tool and the discharge instructions, and modify those for use in the Emergency Dept. setting to overcome some of those barriers we talked about: to provide a standardized ED assessment, to provide early diagnosis and not miss cases, and to guide appropriate management with the discharge. And the critical point is linking the necessary follow-up.
We looked at kids 5 to 22 years who were discharged from the ED with a mild traumatic brain injury. We looked at the tools both pre- and post-implementation and we prospectively contacted families at 1, 2 and 4 weeks. Our primary outcome measure was follow-up because we truly believe that the evaluation can't end in the Emergency Dept. and the children need to move on and be followed. And we also looked at recovery patterns.
So we took the office version of the ACE, which was too complicated for the ED; we needed to make it more simplified, and we streamlined it into the nuts and bolts of what we needed in the acute phase.
At Children's in Pittsburgh we are paperless, like many Emergency Depts., and so we uniquely took the ACE-ED form and embedded it into our electronic tracking: the nurses in triage go through the ACE with the patient, and if the patient screens positive it pulls up an icon on the tracking board that we see in the Emergency Dept., notifying the clinician that the patient has screened positive for a concussion. So in those cases where there may be distracting injuries, that potentially could be of use.
Then we looked at our discharge instructions and we incorporated education on the potential range of symptoms that the patient may experience in the upcoming weeks and what to do, what not to do; particularly not getting involved in any circumstance where they could reinjure themselves and linking them to follow-up both with the pediatrician, which we recommended for a 3 day follow-up, and linking to our local concussion program. A unique part of the discharge instruction that we included was a school form. It was not only a sport excuse but it was also an education form that gave the school some information about potential enhancements they could do or things they could do in the classroom to help the child that was still symptomatic.
So our results. For our primary outcome measure we did show that follow-up improved, but I'm going to have you look at the slide for a second and look at the pre-intervention and the post-intervention follow-up; it's not that great. To start with it's about a third, and that's pretty typical of Emergency Dept. follow-up for other conditions like asthma. So we did improve that; the dark blue bar there is the 4 week follow-up, which was up to 60%, but certainly still room for improvement. And there have been other groups nationally that have looked at this, particularly for concussion, and the bottom line is that follow-up is not routine. The 60% is good, but that is another reason we should reach out and try to improve the follow-up rates from the Emergency Dept. for this injury.
So our recovery patterns were interesting. The total symptom score, the PCSS, is on the Y axis there, and if you look at the solid line, that's our post-intervention, which was increased compared to the pre-intervention, largely in week 1, and actually followed a pattern, if you are familiar with that score, that is more reflective of the typical injury pattern; in the pre-phase, the dotted line, they are really in the normative range. So hopefully we were improving their recognition to some degree. The time to return to activity also increased; that's the solid line there. That is also reflective of a more typical recovery pattern than the pre-phase shows.
So how do we interpret these results? Certainly the increased follow-up was good, with room for improvement like we talked about. But the increase in symptoms post-intervention: was that good news or bad news? That's something I just want to touch on to put in people's minds here. We felt, and hoped, that increased awareness was what was driving the report of increased symptoms, and that instruction adherence was what was driving the increased time to baseline activity level. But we wondered whether we were making the kids worse from the Emergency Dept. setting, and of course we don't want to do any harm. And we know information influences experience; we know about the placebo effect in medicine, and there is also a nocebo effect, where expectations of negative outcomes can influence the patient's experience. And we know that framing makes a difference: how things are put in terms of recovery can make a difference. So with that in mind, in summary, for ED best practice we obviously need to diagnose the injury and properly educate on what patients can do to help recovery. Most importantly, probably, we need to emphasize the need for follow-up and show that there is a link for them to get good follow-up with their primary care doctor or with a concussion specialist. But we need to do that while minding both a nocebo response and a framing bias in the instructions.
So we are going to shift gears a little bit and talk about what we are learning that's new about concussion management from the ED acute phase of injury standpoint. So I think it's not news to this audience but the pendulum seems to be swinging on rest and until recently there were no prospective data on rest as a treatment measure for the acute phase of injury in children.
Danny Thomas, who was the lead author of this recently published paper, was a fellow in our pediatric Emergency Dept., and some of his earlier work was done here with Dr. Collins' group as well. He published a randomized controlled trial on rest. They took children in the Emergency Dept. setting and compared prescribing 1 to 2 days of usual rest versus 5 days of strict rest, and looked at 10 day outcomes, and interestingly found there was no difference in 10 day outcomes in regards to neurocognition or balance. But probably even more interestingly, the strict 5 day rest group had more symptoms, and this is the data showing that. The symptom score is on the Y axis there and injury days post-ED on the X axis; the dark bars show that patients had more daily symptoms and slower symptom resolution in the strict rest group. So they concluded that more stringent rest offers no benefit, and that symptoms are probably influenced by recommending strict rest. Their final recommendation was that the usual care model of modest cognitive and physical activity currently is probably the best strategy for recovery.
So we know the ED population is a high risk population for prolonged recovery. In general there is a pretty good amount of work out there on this population, and about a third of patients that present in pediatric Emergency Depts. will still be reporting symptoms about a month out. This is work by Dr. Meehan's group actually; Eisenberg published this paper in Pediatrics in 2014, and the blue arrows highlight headache and fatigue being the most prominent symptoms. But this paper also showed that emotional symptoms were being seen more commonly in that post-phase, in the 1 month period.
So on that note, I'm going to end with a recent study that was published in March of this year by the PERC Canadian Concussion Team that looked at a clinical risk score for persistent post-concussion symptoms among children with acute concussion in the Emergency Dept. setting. It was another big study: multicenter, 9 Emergency Depts. in Canada, a cohort of youth, and a large derivation and validation set, with the main outcome of persistent symptoms defined as 3 or more new or worse symptoms at 28 days.
And about a third had symptoms at one month. Out of 46 candidate variables they were able to identify 9 that were associated with prolonged recovery, which included being female, being a teenager, having a history of migraines, having a prior concussion with symptoms that lasted more than a week, presenting with acute symptoms of headache, noise sensitivity and fatigue, and, on physical exam, answering questions slowly and having 4 or more errors on the BESS evaluation.
So they took these factors and developed a 12 point score to determine prolonged recovery risk. And they stratified that score into 3 different levels, with a low score giving a 4 to 12% probability of prolonged recovery versus a high score having a 57 to 81% probability of prolonged recovery. And the child in our initial case would have fallen into the high risk category by this score.
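As a rough sketch of how a stratified clinical score like this works mechanically, here is a minimal Python illustration. The factor names and point weights below are hypothetical placeholders, not the published PERC weights; only the 12 point maximum, the nine factor categories, and the low/high probability bands come from the talk.

```python
# Hypothetical sketch of a stratified clinical risk score.
# Point weights are illustrative placeholders, NOT the published PERC values.
RISK_FACTOR_POINTS = {
    "female": 2,
    "teenager": 1,
    "migraine_history": 1,
    "prior_concussion_over_week": 1,
    "headache": 1,
    "noise_sensitivity": 1,
    "fatigue": 2,
    "answers_slowly": 1,
    "bess_errors_4_plus": 2,
}  # sums to a maximum of 12 points

def risk_score(findings):
    """Sum the points for every finding present (findings is a set of factor names)."""
    return sum(pts for factor, pts in RISK_FACTOR_POINTS.items() if factor in findings)

def risk_band(score):
    """Map a 0-12 score into three strata, as described in the talk."""
    if score <= 3:
        return "low"           # roughly 4-12% probability of prolonged recovery
    elif score <= 8:
        return "intermediate"
    return "high"              # roughly 57-81% probability of prolonged recovery

# The case from this talk: teenage girl, migraine history, prior slow recovery,
# headache, fatigue, answering questions slowly.
case = {"female", "teenager", "migraine_history", "prior_concussion_over_week",
        "headache", "fatigue", "answers_slowly"}
print(risk_band(risk_score(case)))  # high
```

The point of the sketch is only the structure: additive points over binary findings, then banding into low, intermediate, and high risk. The actual weights and cutoffs are in the Zemek 2016 JAMA paper cited above.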
So what's the ED utility for something like this? I don't think it's ready for prime time; I think this needs to be validated, and the score had only fair ability to really predict the levels that are reported up there. It was better than clinician prediction from the Emergency Dept., which they also looked at and which was about 50-50, the toss of a coin, so it did a little bit better than that. But I think it is interesting that there might be some potential to simplify the assessment, and maybe provide some more realistic items, especially on the low end and the high end of these scores. And probably most useful, I think, for improving and pushing research would be to target these high risk patients for the much needed research in this field in terms of therapy.
So in summary, I hope you've learned a little bit about some of these challenges and opportunities that exist for excellent concussion care in the Emergency Dept. setting. And I encourage you again to reach out locally and develop collaborations for follow-up, as we have done here, which are really important and I think are benefitting our patients locally in Pittsburgh. ED best practice, I really believe, involves standardized diagnostics and education, which I think is helpful for consistent recognition and also for early education of parents and families. Instructions need to be well framed, need to keep in mind potential biases that the clinician can introduce, and should really set the path for modest cognitive and physical activity, not strict rest; obviously not condoning any activity that's going to put the child in harm's way. And, importantly, they should emphasize the need for ongoing follow-up so that patients can be seen by a specialist and get individualized management for their injury.
Before starting, I do disclose my interest in ImPACT, but I also want to disclose the fact that this meeting was underwritten by the NFL. They did not organize it, they did not have anything to do with the paper, they had nothing to do with the Statements of Agreement; they just paid the bill for people to come to Pittsburgh and attend the meeting.
The objective of my lecture here is to briefly set the stage before presenting the Statements of Agreement. I'm going to talk to you about a Harris Poll that we conducted at UPMC showing the perceptions of concussion out there. How many of you have seen the Harris Poll that we put together? Not too many; I think you'll find it very interesting. I am then going to briefly go through, explain, and actually show the 17-ish Statements of Agreement that we voted on at this meeting in Pittsburgh, to give you a handle on what's coming out in journal form very soon. And I'm also going to briefly talk about the evidence behind each statement.
So, knowing the fact that there seems to be this bigger and bigger chasm between what we are seeing in clinic and what parents coming to clinic report regarding the perceptions of concussion, we wanted to convene a Harris Poll last fall, or last spring I should say, trying to get a better understanding of the perceptions of concussion out there among U.S. adults, parents and kids. And so we commissioned this through the Harris Poll and they did a poll looking at over 2,000 adults. It was appropriately weighted for age, sex, race, ethnicity, etc. as you can see, and the Harris Poll is a really respected, well done survey. And this document is about 25, 30 pages long. If you want to text me, don't text me, email me, I will send you this poll, or you could go to rethinkconcussions.com and the Harris Poll is on that site. But I'm going to just show one slide going over some of the findings from this poll.
And it's very interesting. 24% of U.S. adults think a concussion is going to change their life forever; 72% believe that damage to the brain is permanent; 80% believe you can only lessen symptoms and never fully recover. That's 8 out of 10 U.S. adults who feel that there are permanent symptoms from concussion, which is mind numbing to me. 81% are not comfortable that they would know the steps to manage and treat a concussion if they sustained one. U.S. adults believe the treatments for concussion are refraining from physical activity, hydration and over the counters, which I don't think is going to treat anyone. So you know there is very little understanding of treatment. And lastly the poll found that 25% of U.S. adults will not allow their kids to play contact sports due to the fear of concussion.
There is a new study coming out very soon, I've seen it, I don't think it's been published yet, which is a survey of orthopedic team physicians asking if they'd let their kids play football, and I believe over 50% of orthopedic team physicians said they wouldn't allow their kids to play football due to the fear of injury, which is unbelievable. So for a clinician who sees patients every day, and I see patients get better and I do put them back to football or other sports, there just seems to be a pretty big difference between what we are seeing clinically, in terms of the fact that most kids get better, if not the great majority of them, versus what the public is perceiving about this injury.
The meeting that we put together here, you know I really do feel it's time to change the conversation on this injury. And we need to start somewhere, and we need to start through science, of course, as to changing perceptions. In the International Consensus Statements that exist, what's been perpetuated is really a rest and monitor approach. I think we all would agree that if you look at the International Consensus Statements, and the last meeting was 2012, it's really been rest, monitor, graduated return to play; those have been the parameters around treatment, and really the only treatments that have been discussed are rest and monitor. We need to move beyond that. We realize that there are more active approaches now, in the form of vision therapy and vestibular therapy and exertion therapy and medications, and there are a lot of advances we've made in terms of implementing treatment. You are going to hear this all tomorrow in terms of how we treat this, and a lot of you are doing similar things. You know we really need to evolve the field towards a more active treatment approach, and again, I like the words: we need to focus on process and not protocol.
The meeting we had in Pittsburgh was held here October 15-16 of 2015; myself, David Okonkwo and Anthony were the course directors of this meeting, and it's important to know how this was formulated. The NFL wanted to have a meeting on treatment, and they underwrote it. They had us do the inviting; they had nothing to do with who was invited. We had in the budget for about 40 people to be invited to the meeting, and we knew that it needed to be multidisciplinary. You know, what's so cool about seeing everyone here is we have neurosurgeons and neurology and PM&R and athletic training and physical therapy, and by the way, I've seen more physical therapists at this meeting than any prior meeting ever for concussion. So thank you for coming.
You know, it's becoming a very multidisciplinary approach to this injury and we wanted that reflected in who was invited to this meeting. And we really invited three different types of people: academics, clinicians and thought leaders. We could have invited 120 more people, but we had to get the list to 40, and we wanted it cross-disciplinary; myself and Anthony and David were the ones that put the list together in terms of who was invited. And we also wanted a group that didn't necessarily agree on a lot of stuff. We wanted people there that had a lot of different opinions. And so we put together, I thought, a pretty good slice of folks that are doing this work.
We had what ended up being 37 invited participants who were voting members of the conference. So all 37 of those people voted on the Statements of Agreement. We also had 18 stakeholder guests, nonvoting; it was a closed door meeting, invite only. We had 3 sessions that we went over during this meeting, and we wanted to vote on statements from these 3 areas: summary of the current approach to treating concussion, heterogeneity and evolving clinical profiles, and then specific strategies toward treatment. We had Statements of Agreement that we voted on after the presentations; we then convened as groups in 3 breakout sessions, the 3 different sections revised the wording of those statements, and then we revoted on the statements after revising the language. And so in the language that you are going to see in the statements, every word was carefully gone over in terms of how we framed it. Very carefully done, and the voting reflects that. This was all written into a White Paper that is in the final stages of review. We fully expect that this will be published soon. It has already gone through revisions, all 37 authors are on the paper, and this will hopefully be coming out very soon. Don't know when exactly.
The invited experts again were really across disciplines: we invited Neurosurgery, Neurology, Neuropsychology, Athletic Training, Sports Medicine, PM&R, Emergency Medicine, Psychiatry, etc. And you know, Bob Cantu was there, Julian was there, Rich Ellenbogen; I can go through these names. Barry Jordan, who I have a great deal of respect for, in Neurology; David Brody down at St. Louis; Mike McCrea in Neuropsychology; Ruben Echemendia; Gary Solomon; I mean, I can go through the list here, but you know Kevin was involved, John Almquist who is here was involved, Anne Mucha was involved, etc. And you know, it was an interesting experience to have this meeting.
There were a lot of people in the room, some of whom I haven't spoken to in years, and we met here in Pittsburgh. The first night we had dinner at the Duquesne Club and no one left until really late. Everyone was getting along, it was very collegial. I don't know, I may be blowing a little sunshine, but I found it to be extremely collegial and very refreshing how this meeting went. And even though there were differing opinions and there were a lot of real heated conversations about some of the things you are going to see, we all came together at the end and voted on these things; I won't say unanimously, but you can see the statements were voted on pretty strongly as a group.
We also had nonvoting participants as part of this meeting, and unlike the International Consensus Statement, we did not want any sporting influences on this document. We did not want the NFL's name, we did not want them voting on it, we didn't want the NHL involved in voting on it, etc. But we did want people in the room to have dialogue; when it came to voting on these statements, really only the 37 people that were invited voted on them. The sporting organizations involved you can see for yourself. From the military we had DOD, U.S. Army, U.S. Navy; Sidney Heinz was here, I believe he attended this meeting; and the CDC, the National Institutes of Health, and One Mind were all there, and they had a lot of very important input for this as well.
So what I'm going to do here, and it's the end of the day, is just present to you the statements that we've agreed upon and maybe talk about each one very briefly. But this is what will be coming out in publication form, and how we voted on these statements. In the voting, basically everyone voted. This was the final tally for each Statement, and you could vote agree, somewhat agree, somewhat disagree or disagree. And we only included Statements that had supporting agreement (somewhat agree or agree); those passed. There was only one Statement that didn't pass out of all of the ones that we prepared, and I frankly can't even remember what that statement was.
But here is what will be coming out and I think it's important, when you look at these Statements I think it's important to reflect on what exists in terms of Zurich and different International Consensus Groups and how there has been very little focus on treatment. And that was the purpose of this meeting was to really come up with a conceptual framework for treating concussion.
So the first Statement of Agreement reads: prior expert consensus for management of concussion includes no return to play on the same day, prescribed physical and cognitive rest until asymptomatic, accommodations at school and work as needed, and progressive aerobic exertion-based return to play based on symptoms. So 91% of us agreed that this is what currently exists from the International Consensus Groups. If you look at the Institute of Medicine report, this is what the International Consensus Groups state, but I think all of us would agree there is very little evidence regarding the efficacy of rest following concussion, and really we need to improve upon that.
The second Statement of Agreement: previous consensus statements have provided limited guidance with regard to the active treatment of concussion. We had 97.2% agreement; there was one person that somewhat agreed on that statement. I think it's pretty clear that there is really very little discussion about active treatment of concussion in any of the consensus statements. In fact, if you look at just these three, the AAN, the International Consensus Statement and the NCAA, what is currently in existence is physical and cognitive rest until the acute symptoms resolve, supervised graded exertion, gradual return to school and social activities. I think we all agree upon these things, but there has really been very little discussion regarding treatment, as much of what currently exists in these Consensus Statements is focused on prescribed rest. And again, the purpose of this meeting was to try to take it to the next level as to how we conceptualize treating this injury.
The third Statement of Agreement that will be published hopefully soon: there is limited empirical evidence for the effectiveness of prescribed physical and cognitive rest, with no multisite randomized controlled trials of rest following a concussion. We all pretty much agreed on that statement as well. If you look at what is out there in the literature, the evidence for rest is a handful of studies. Moser has published 2 papers, one just a few months ago, and Brown et al. published a paper. For example, the Moser study had a cohort of kids that had symptoms of concussion somewhat chronically; they had pre-rest neurocognitive data, rested the kids for a week, then redid the neurocognitive testing and found that the neurocognitive scores improved and the patients' symptoms improved. There are no good randomized prospective studies examining this. There are no large randomized trials starting in the first few days. The literature supporting the idea that rest is going to be effective at treating this problem simply doesn't exist. There just needs to be more work done.
The fourth statement: prescribed physical and cognitive rest may not be an effective strategy for all patients following concussion. We had 87.2% agreement on this statement; one person somewhat disagreed, but clearly as a group we felt there has to be more than just resting here in terms of how we treat this injury. We had a lot of conversations about this topic when we went over this statement, and we really leaned on what's out there with Danny Thomas and what he did; Noel presented this study earlier. There was also a study done by deKrujik in 2002 which compared bed rest to no bed rest in 107 patients with MTBI and did not find any differences in outcomes. There is a paucity of literature on this topic, and what does exist really doesn't support that rest is effective in terms of treating this injury.
We also published a study way back in 2008 where, I don't know if any of you have seen this study, but we were thinking about this a long time ago. We created an activity scale from 0 to 5, where 0 meant the subject did not engage in school or physical activity and 4 meant the patient had engaged in school and a sports game, and then we looked at their neurocognitive data at 6 days post-injury in a fairly large sample of kids. We actually found that moderate levels of physical and cognitive activity exhibited the best outcomes within 6 days of concussion. So this is a study we did almost a decade ago, and we found that moderate levels of physical and cognitive activity were actually beneficial to patient outcomes. And that was replicated not only with visual memory but with reaction time and other ImPACT scales, which showed the same findings.
Statement number 5: strict brain rest is not indicated and may have detrimental effects on patients following concussion. I feel that this is a very important statement for people to hear, and as a group we agreed upon it; again, one person disagreed, and most of us agreed or somewhat agreed. There are too many kids coming to our clinic that have been put in dark rooms, and it results in a lot of morbidity, a lot of anxiety, a lot of migraine, a lot of falling behind in school and a lot of social isolation. That treatment is well intended, but frankly it can cause a lot of detrimental effects. For us to be able to publish this and say that we agree that cocoon therapy is not indicated will hopefully drive a lot of change. We have to fill in the gaps here empirically, but we are going to come out and make this statement, and hopefully the days of dark rooms are going to be over soon, because they do create a lot of comorbidities in these patients.
Number 6: although most individuals follow a rapid course of recovery in the days to weeks following injury, concussions may involve varying lengths of recovery. If you look at the recent International Consensus Statement, I actually read 11 times in the paper that it's generally thought that 80 to 90% of athletes recover from concussion in 7 to 10 days. It was stated 11 times in the paper. That doesn't resonate with what we see in our clinic, nor does it resonate with what we are finding in terms of our research. We've published studies since the recent International Consensus Statement looking at what recovery looks like, and we find recovery really lasts up to 3 to 4 weeks for symptoms, up to 3 to 4 weeks for memory and up to 3 weeks for vestibular and ocular motor recovery.
And this is a paper done by Luke Henry, one of our fellows, and it really builds on other research that we've done showing that 80% of kids get better by 3 weeks; but when you add the ocular motor and vestibular pieces we are seeing recoveries taking a little longer. Bottom line: the International Consensus Statement claim that 80 to 90% of patients get better in 7 to 10 days doesn't resonate with what we are seeing in the field, and I think there needs to be some balance to that, because all coaches and parents think 80 to 90% of kids get better in 7 to 10 days. How many of you run concussion clinics? How many of you see patients? For how many of you does it resonate that 90% of kids get better within 7 to 10 days? Anybody? It just doesn't really happen that way when you use the right tools, ask the right questions and have the right protocols in place. So I think changing that is going to be beneficial.
7th Statement of Agreement: recovery from concussion is influenced by modifying factors, the severity of injury and the type and timing of treatment that is applied. We had 100% agreement on this statement. I think there is a real understanding now that the injury is heterogeneous, and that there are different risk factors that predict longer outcomes. There are more severe injuries and less severe injuries, and now that we have some ways to treat concussion, we don't yet know the right timing of treatment from an evidence-based standpoint. We don't know about the right dosing of treatment, but the point is that we are starting to get there and we'll get there eventually. As a group we agree that there is a lot of variability in how people recover from this injury, and that's backed up by extensive literature by our group and many others looking at the different constitutional and symptom risk factors. You've heard all of this today; I don't need to repeat it.
So that's the first section. The second section of the paper is really focusing on the heterogeneity involving clinical profiles of concussion. Now this is where the conversation got pretty interesting as a group. We all kind of agreed where the field is at and that we need to move beyond rest; but then how do we do that? And what's the conceptual framework we are going to work from? The statements here are a little more vague, and rightfully so, because there is a lot of work that needs to be done to really truly understand the profiles; but in trying to come up with a conceptual framework, here are some of the things we voted on.
Statement of Agreement 8: concussions are characterized by diverse symptoms and impairments in function, resulting in different clinical profiles and recovery trajectories. We spent hours on this one in terms of the right language to use in this statement, and I wish you could have seen where we started from. But after we came to accord on the language, there was complete unanimous agreement that there are different types of problems that we see from concussion and that we need to start looking at this injury in a profile way. Instead of managing concussion with a cookbook, one-size-fits-all approach, we need to come up with different profiles or problems and match treatments to those profiles or problems. And that's a big step for the field.
And this is what we propose in terms of our clinical profile model. There is very good work done by Ellis that proposes 3 profiles: autonomic, cervicogenic and vestibular/ocular. So there are a few other models out there. We are not saying our model is better than any other model, but we need to publish on these things and better understand them, and more research will lead us to understand what the profiles truly are. But there are different models that exist.
Number 9: a thorough multi-domain assessment is warranted to properly evaluate the clinical profiles of concussion. Every one of us agreed on that statement, and it's really very important to measure this injury in a multi-domain way. We all agreed that you need to look at cognition, symptoms, vestibular and ocular function and so on. We didn't agree on exactly how to measure it; it was more the concept that there needs to be a comprehensive approach to measuring the injury.
Number 10: a multidisciplinary treatment team offers the most comprehensive approach to treating the clinical profiles associated with concussion. 100% agreement, with 17.6% somewhat agreeing. We were careful with this because we realized not everyone has the ability to have a multidisciplinary clinic. One thing I want you to keep in mind is that in the paper that will be published we are going to have each Statement of Agreement, the supporting language behind it, and discussion on each of these Statements of Agreement. If you don't have the ability to have a multidisciplinary clinic, it's okay. We think it's optimal to have that, and perhaps we'll be getting centers of excellence and those types of things down the road, but you do what you can. In communities where that's not possible, you just try to piece it together as best as you can. We totally understand that.
This is the model we use here, but you can have other models as well. We had a good discussion; Javier Cardenas' model is different than ours, but we kind of do the same thing. And Rob Franks, you know, in Philly, he does a different thing, pretty similar but different. It all depends on who you have around; there is no one defined model of how we treat this thing.
Section number 3 covers the specific strategies that we wanted to vote on in terms of targeted evaluation and treatment, and here is what we voted on. I think this is the biggest one of all: concussion is treatable. We had 100% agreement on this topic; only 2 somewhat agreed out of the 37 authors, or whatever it was, maybe 3. This is a big statement, and the fact that we are going to be able to put it in a document and distribute and disseminate it matters. If you juxtapose this with the Harris Poll that we conducted, we have a lot of work to do in terms of educating the public that there are effective treatments for this injury and that kids get better. Hopefully this paper will lay the foundation for that to occur, and I think it's a pretty powerful statement that we are going to make in this paper. Again, the Harris Poll showed a lot of misperceptions: 72% believe that damage to the brain is permanent, whereas we all collectively agree that this is a treatable problem.
Number 12: preliminary evidence suggests that active rehab may improve symptom recovery more than prescribed rest alone. If you think contextually about the current Consensus Statements, this is quite a step forward for the field, in that more active approaches to treatment are better than rest alone, and we all agreed upon that statement.
And there was a study done by Darling in 2014 that actually showed 40% of children who were prescribed rest had subsequent problems in school that were unrelated to the cognitive effects of injury, which doesn't surprise me in any way, shape or form.
Active treatment strategies may be initiated early in the recovery following concussion. If you follow the current Consensus Statement, you cannot work anyone out until they are symptom free. I will argue that you will never get anyone better until you get them active, and the coauthors of this paper agreed with that. You have to understand what you are treating. You are not always treating a vestibular problem, but sometimes you are, and you don't get better from that until you activate, expose, recover. This statement will hopefully start to allow clinicians the ability to activate early in some patients.
We have to figure out the dosing, the timing, all that type of stuff, but this paper lays the foundation for us to change things quite significantly from what currently exists. And I'm hoping that the meeting coming up in Berlin will also do some of the same things we are doing in this paper. I think the people in that meeting will be aware that changes need to be made, away from a rest-and-monitor approach and toward a more active approach to managing this injury.
This is a big one: matching targeted and active treatments to clinical profiles may improve recovery trajectories following concussion. Again, quite a step forward, meaning we are actually matching treatments to problems, which I think is a big step for the field. Now we have to define what those problems are and all agree upon them, but at least we can start somewhere. So this is kind of our treatment model; we've talked about this ad nauseam, so I'm not going to go over it.
Number 15: patients returning to school and work while recovering from concussion benefit from individualized management strategies. Very important. You can't create the same academic accommodations for every kid, because if you have an ocular problem there is going to be a whole different set of accommodations than if it's a vestibular problem, versus migraine, versus anxiety. If it's anxiety, I don't provide accommodations; the best way to get them better is by getting them exposed, and I don't want to reinforce the anxiety. Whereas if I have a kid with an ocular problem, I'm going to be more careful with math and science and those sorts of things. So we really need to get much more specific in how we apply academic accommodations to our patients with concussion, and that's what we agreed upon in this statement. The same thing with return to work: return to work is going to be completely predicated upon the problem the patient has, not on a one-size-fits-all approach.
And lastly: pharmacological therapy may be indicated in select circumstances to treat symptoms and impairments related to concussion. We all agreed upon that. David Brody out of Wash U. gave a phenomenal talk on medications at this meeting, and you'll hear from Kelly Anderson tomorrow, who is really talented with the meds as well. In some cases medications are needed for kids with this injury. For example, if you have a patient that has chronic migraine, chronic sleep problems or chronic anxiety that you can't treat with your traditional behavioral management, vestibular therapy or exertion therapy, sometimes medications are indicated. We try to do the other things first, but sometimes meds are a very important part of the treatment process.
So those are the Statements. There are a few others I didn't include for the sake of time, but this is about a 30- to 40-page document. When boiled down I don't know how many journal pages it is, but it's thick. Hopefully it's going to set the stage to work from. The next step forward, and there is a whole section on future research in this paper, is what we have good momentum toward: creating multi-site randomized controlled trials aimed at treating concussion. A couple of sites looking at vestibular treatment, vision therapy and exertion therapy, sharing data, and actually doing good randomized controlled work looking at treatments. In that fashion we can figure out the right treatments, when to give them and the dosing of the treatments. There is so much more work to be done, but this is the first meeting we have ever had on treating concussion, and so we wanted to come up with some conceptual framework to do so. And that's what will be put forth hopefully very soon in the literature.
So in summary, a uniform approach involving prescribed rest and progressive return to activities may not be effective for all patients. There are emerging clinical profiles, and matching treatments to those profiles is really where the field is moving. At the end of the day, it's really time to change the conversation: this is a treatable injury, and hopefully this paper will take that step.
Quick disclaimer, I am a Professor at St. Joe's University, I am a member of the Scientific Advisory Board of ImPACT and also serve as a consultant to them. I'm sure you are most interested in my work with the Concussion Center of New Jersey, but probably the takeaway message is that all the work that I'm presenting was done as my role as a professor and I'm not receiving any royalties. And I believe 3 years ago I presented and was driving a 2005 Honda, and I'm still driving that Honda, so if I am consulting I'm not doing a good job.
So I'm going to try and pick up some time. The goals here are to review some of the challenges related to interpreting data, and we'll talk a lot about baseline data and also some post-concussion data; to look at what factors influence performance on cognitive tests and the data that we see; and to really try and understand the complexities of psychometrics and what reliability really is. And I'm going to tell you right now, I'm not sure that I know what reliability is. And finally, to try and talk about psychometrics without putting everyone to sleep before lunch.
I love tracking these publications because it seems that when I entered, somewhere around here, there wasn't a lot of data. This is PubMed searches with the word concussion in the title or abstract. I also looked at concussion and validity, and concussion and reliability, and not many people are interested in psychometrics. That was my entry into the field; it was a good little angle for an academician. I entered the field somewhere around 2000, and you can see the influence I've had, it's really measurable. But if you look at the number of articles that have been published just in the 2000s and you prorate them, what's it going to be at the end of this decade? There is exponential growth in the amount of research, and yet there is not a commensurate growth in our understanding of the psychometrics behind these tools.
So very, very briefly, I assume this is statistics 101, but we are going to get a bunch of that in the next 15 minutes. We don't want to be all over the place; we want to be reliable, which means consistent. But if I'm reliable all the time just saying nah, no, I don't like that; cheese, no, no; food, no, I don't like any of that stuff, well, I'm being consistent. Is this on autoplay? I'm not touching this. So we want to be consistent and we want to be on target. What we are hoping for is to make a consistent diagnosis or measurement of the behavior and to measure what we are supposed to be measuring.
So one of the problems is that we assume that all human behavior is normally distributed. I don't want to go too deeply into statistical theory, but the central limit theorem says that as you sample from a large population the distribution will approximate a normal curve, and if we draw smaller samples from that population the means of those samples will be normally distributed. If I'm collecting shoe sizes, they should be normally distributed. If I happen to find a classroom at my university that is only football players, that's probably not going to be a big deal, but if it's only basketball players, well, we are going to have outliers, right? So we tend to define outliers based on a certain number of standard deviations. In this case 95%, and we are going to talk about this a lot: with two tails, 95% of the population falls within two standard deviations, and 2.5% fall on either side; they get the As and the Fs of whatever we are measuring. Okay?
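As a rough illustration of those two ideas, the 95%-within-two-standard-deviations rule and the central limit theorem, here is a small simulation sketch (my own illustration, not from the talk; the mean of 100 and SD of 15 are arbitrary test-score-like values):

```python
import random
import statistics

random.seed(0)

# A large "population" of normally distributed scores (mean 100, SD 15)
population = [random.gauss(100, 15) for _ in range(100_000)]
mean = statistics.mean(population)
sd = statistics.pstdev(population)

# Two-tailed rule: about 95% of scores fall within two SDs of the mean,
# leaving roughly 2.5% in each tail (the As and the Fs)
within = sum(abs(x - mean) <= 2 * sd for x in population) / len(population)
print(f"within 2 SD: {within:.2%}")

# Central limit theorem: the means of many small samples drawn from the
# population are themselves approximately normally distributed around 100
sample_means = [statistics.mean(random.sample(population, 30))
                for _ in range(2_000)]
print(f"mean of sample means: {statistics.mean(sample_means):.1f}")
```

The exact theoretical figure for two SDs is about 95.4%; 1.96 SDs gives exactly 95%.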
Well, let's talk about outliers. We often talk about invalid baselines, and my first thought was that there really is no such thing as an invalid test; some individuals simply fall below 2 standard deviations from the mean. We look at a one-tailed test because we have that upper limit: you are not going to be an outlier at 100%. So it's who falls below the cutoff. Some research we've done on invalid baselines shows that around 5% of the population falls below the 5% cutoff. It happens that high school athletes come out around 6% and college athletes around 4%; average those and you get pretty darn close to 5%. So that makes a lot of sense, but who falls below that? Well, a much more disproportionate number of athletes with ADD and LD fall below that. You might expect that; these are individuals who are scoring below average, farther below average than would be expected. But there is a reason for that.
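That one-tailed 5% cutoff corresponds to a z-score of about -1.645. A quick sketch with Python's `statistics.NormalDist`, again using hypothetical score norms rather than any actual test's:

```python
from statistics import NormalDist

# Hypothetical normally distributed test score with mean 100 and SD 15
scores = NormalDist(mu=100, sigma=15)

# One-tailed cutoff: the score below which the bottom 5% of the
# population falls (z is about -1.645, so roughly 100 - 1.645 * 15)
cutoff = scores.inv_cdf(0.05)
print(f"5% cutoff score: {cutoff:.1f}")   # about 75.3

# By construction, 5% of the population scores below the cutoff
print(f"fraction below: {scores.cdf(cutoff):.3f}")  # 0.050
```

By definition about 5% of healthy test-takers land below this line, which is why "invalid" rates near 5% in large samples are exactly what the cutoff predicts.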
We did some research asking, what if you give these folks another assessment? Almost 9 out of 10 do better. That could be regression to the mean: people who do poorly do better afterwards, and people who do well do worse afterwards; they regress to the mean. So maybe they were goofing around, maybe they weren't paying attention, maybe they were sandbagging, if that's a real thing. We don't actually know, but once again, of the folks who obtained invalid scores the second time, a third were people with ADD and LD. So the folks that are falling below the cutoff are not necessarily, I mean, we don't say they are invalids, we don't use that word, but we call the tests invalid. I'm not sure that that inflection is a real difference. It's not a very nice thing to say about your patients, right, that they are an invalid.
So all scores are valid. I broke this down; every couple of minutes I'm just going to stop and make some points to keep everybody awake. All scores are valid, but some are outside the range of what we expect, and we need to interpret the data. My background is in neuropsychology; I'm a neuropsychologist. We do assessment: we don't give a test, we give an assessment. We need to understand the data that we are seeing. Not everybody is evaluating their data. Research by our colleague Tracy Covassin showed that out of a sample of about 400, only about 50% of athletic trainers are even looking at the baseline data to try and interpret it. If someone gets an invalid result or a score below expected levels, they are just letting it slide, okay.
We know that there are factors that affect scores on neurocognitive testing. Taking a test in a group versus an individual setting is going to create some noise in the system. We know that concussion history affects performance; it affects attention and concentration; it interacts with a history of attention deficit disorder or learning disability; and there is a whole slew of information on gender differences. These are all important factors to consider when just looking at how healthy individuals score at baseline.
Now, we expect, idealistically, that when we measure human performance it stays reliable, it stays stable, and I'm not sure why that is. What we actually see is that there is a change from time 1 to time 2; there is some error in the system. It could be random error, we don't know what it is, or systematic error like a practice effect: if you take a test once and then take it again, you usually get better at taking that test. And that's not new information. I am on autopilot; I have 40 seconds per slide.
The original study that came to us from the neuropsych community was Jeff Barth's Virginia Football Study, in which they tested football players and identified a non-concussed cohort; this was the quintessential prospective controlled cohort study. When a football player got injured, they grabbed one of his teammates, same age, same position and so forth, and tested him as well, and they tested serially. What was interesting is that impairment at that time was diagnosed and/or defined as failure to benefit from practice effects. If you look, the controls got better; this is time on the Trails B test, so you can see those practice effects in the light blue; and the football players are failing to benefit, lagging behind their control cohort because they are not benefitting from practice effects. So there was an understanding that human performance does deviate from time 1 to time 2, people get better, and there is an expectation that people are going to get better. Failure to get better was the problem in this cohort, or in this study.
So there are some assumptions that we need to talk about. The first one is that fluctuations or changes are actually due to deficits rather than to the measure, and that's a very, very big assumption. In the field of neuropsychology that might be the big assumption. We have this assumption that human behavior does not deviate from time 1 to time 2 and that we are measuring relatively enduring characteristics, which we call traits, rather than temporal states. And there is a whole state-trait debate in the psychology literature.
In reality we know that fluctuations or changes are not necessarily due to the measure. Here are basic autonomic functions; this has nothing to do with neuropsychology. If two independent raters measure heart rate and blood pressure on the same person at the same time, there is about a 19 to 26% difference between the raters; that's inter-rater reliability. That's not even counting doing one arm versus the other, because that also changes things. If you take an individual, have them lie down and rest and get their heart rate, then bring them in a week later, there is a variation of about 5 beats per minute. So what does that mean? Do we say that stethoscope sucks? That blood pressure cuff is not reliable? It's ridiculous to say there is a problem with the measure on something as simple as autonomic functions; it just doesn't work that way.
So we know human behavior does change from time 1 to time 2. I hope we understand that. Everybody does not perform the same now as they will in the future or as they did in the past. The gold standard in neuropsychology is the Wechsler Scales; this is the best measure that we have. If you are going to see a neuropsychologist doing a forensic assessment, they are going to be using the Wechsler Scales, okay.
This is Tulsky's work from validation studies of the Wechsler Memory Scale, and over an 11-month interval the correlations, and I'm not sure what that means, we'll get into that, are about .57 to .70. So they are not perfect 1-to-1 correlations; there is deviation in human performance. Look at some other verbal learning tests: they drop. This is the PAR manual by Benedict, research presented in an INS poster by Snow, and you have to really dig deep to find some of these, as well as Digit Span data by Barr. They are all about .5, .6 to .7, okay.
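One way to see why test-retest correlations land in the .5 to .7 range is to simulate a perfectly stable trait measured twice with noise. A constant practice effect shifts everyone's second score but does not hurt the correlation; random measurement error does. This is a sketch with made-up numbers, not data from any of the tests above:

```python
import random
import statistics

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

n = 5_000
true_scores = [random.gauss(100, 15) for _ in range(n)]  # stable trait

# Each administration adds independent measurement error; time 2 also
# adds a constant practice effect (everyone improves by about 5 points)
time1 = [t + random.gauss(0, 10) for t in true_scores]
time2 = [t + random.gauss(0, 10) + 5 for t in true_scores]

r = pearson(time1, time2)
print(f"test-retest r: {r:.2f}")  # lands in the .6-.7 range, not 1.0
```

With these numbers the expected correlation is trait variance over total variance, 225 / (225 + 100), about .69, even though the underlying trait never changed; real tests add state fluctuations on top of this.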
This is the only slide where I'm going to show comparative data from ImPACT and the like, but you can see that, looking just at working memory, the test-retest correlations for concussion scales go from .24 on CogSport up to .79 on ImPACT, and that compares very favorably to the gold standards of human performance in neuropsychology at 1 year, the 11-month data from Tulsky. So it's possible that we are actually measuring states and not traits. I know that's a lot to think about: what are we measuring when someone comes in? Is this how they always are, or is this how they are now, when they are not concussed?
So let's look at BESS data. I'm not a huge user of the BESS, but I do some research on this. BESS reliability is generally around .6 to .7, and it improves if you do certain things: some took away the double-leg stance, others tested over 2 days instead of 5 days, and the reliability improved. But these are all data on athletes, and I wouldn't suspect that athletes are naturally clumsy people with bad balance. I don't think we are picking up traits here when we look at the BESS. There is something happening situationally with balance. Performance on the field is not contingent on closing your eyes and holding one leg in the air; balance in an athletic environment is very different than balance in a laboratory environment. So it's possible that the measure isn't bad; it's just picking up variation because these people are bringing in situational factors that differ very much from what happens in the field.
So a couple more important points. Variation does occur, and probably more importantly, the cognitive domains that are traditionally sampled by concussion measures, attention, concentration, working memory, reaction time and processing speed, are highly susceptible to variation, and they are intercorrelated with one another. Attention is the first stage of learning; it's the first stage of cognition. If you can't absorb it and encode it, you are not going to retrieve it and reproduce it; human memory is based on attention. How many times do you talk to somebody and think, they are not paying attention, they just don't even listen to me? I know when I'm teaching I see this: they are not paying attention, and I know they are not going to do well on the exam. So if we look at otherwise healthy individuals, what we are really seeing is variation in performance, and maybe it's these variations in behavior that we are picking up, not a limitation of the test. The tests are doing exactly what they should do: picking up variations in human behavior.
So what can we say about baseline testing? Well, it was recommended as part of preparticipation examinations a while back. Right, in 2001 it was the cornerstone of concussion management: use baseline pre-injury testing and serial follow-up assessment. In Prague they said use baseline cognitive assessment and symptom scores, and in Zurich they said regardless of age or level of performance, do cognitive evaluations. And then by 2012 things had changed a bit. They said it's not required or mandatory; there is not sufficient evidence to recommend widespread routine use of baseline neurocognitive testing. And there is reasoning behind that in the document, however there were two pieces of information. One, it may be helpful and useful information for overall interpretation of post-concussion tests. Right, you need the baseline to help add information to interpret post-concussion data. And two, it also provides an additional educational opportunity. We'll come back to that educational part in a second.
So do we need baseline examinations? There really isn't a lot of literature on this, right. The normative tradition in neuropsychology says you look at normative samples; there are no baselines. People don't walk around with medical records of their own baseline neurocognitive functioning before a car accident, right? We don't have baseline data on life. So Ruben Echemendia looked at: let's take baseline scores and compare post-concussion scores to those, or take post-concussion scores and compare them to normative data. And you can get into the details of the study, I'm not going to do that because of time, but they essentially looked at cutoffs using 1 and 1.5 standard deviations, and they used reliable change, which we'll get into, which was about 1.64 standard deviations. But their ultimate result was that the majority of college athletes who experience meaningful change post-concussion can actually be identified without baseline data, that normative references do the job.
And one of the questions I thought was, well, does the majority apply equally, does it apply to everybody? And to put this in perspective, let's say you are above average, okay. We are all above average, you guys are all smart, it's a very well educated crowd, so we are all on the right side of the curve. So if you start 2 standard deviations above the mean and drop 1 or 1.5 against the normative reference, you are in the average range. You are not going to look like you are impaired if you use a normative reference. However if you start slightly below average, you are already classified as below average, and now you drop 1.5 standard deviations and you are way down, right, you are significantly impaired. So to me this didn't seem appropriate, especially coming from training in neuropsychology, where we understand that there is variation in human behavior.
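The arithmetic behind that point can be sketched with hypothetical z-scores. The cutoffs and the two athletes below are illustrative, not data from the study:

```python
# Illustrative z-score arithmetic: why a normative cutoff can miss a
# high-baseline athlete while flagging a low-baseline one for the same drop.

def impaired_vs_norms(post_z, cutoff=-1.5):
    """Normative comparison: flag only if the post-injury score is low
    relative to the population mean."""
    return post_z <= cutoff

def impaired_vs_baseline(baseline_z, post_z, drop=1.5):
    """Change-from-baseline: flag if the drop itself is large enough."""
    return (baseline_z - post_z) >= drop

# Athlete A starts 2 SD above the mean and drops 1.5 SD after injury.
print(impaired_vs_norms(2.0 - 1.5))          # False: still "average" vs. norms
print(impaired_vs_baseline(2.0, 2.0 - 1.5))  # True: a meaningful drop

# Athlete B starts half an SD below the mean and takes the same 1.5 SD drop.
print(impaired_vs_norms(-0.5 - 1.5))         # True: now 2 SD below the mean
```

The same 1.5 SD drop is invisible against norms for the high-baseline athlete but obvious against that athlete's own baseline.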
So we tried to replicate this but controlling for baseline level of functioning. We looked at individuals who were below average, average and above average at baseline and then said, let's look at what happens with change from baseline versus comparing to normative data. And the data here in red shows where we found differences. As you would expect, everybody that was below average looked below average regardless of whether we used normative references or change from baseline, okay. But as you move upwards, far fewer individuals were identified as impaired using references to normative data than using change from baseline, for those people who were average and above average, right. This is really important data. So the idea that normative comparisons can be used to cover the majority of athletes is wrong. We need baseline data because it helps identify individuals who fall outside of the average range, and that's at least 1/3 of our individuals, right, because if we believe the normal curve, 2/3 of individuals fall within the average range.
Well, if we are going to use change from baseline, what does it mean if an athlete gets 1 score below baseline? What does that mean? A more conservative individual would say that's bad, they are impaired, pull them out, right. Should we be using absolute thresholds? So I got pulled into this research with Grant Iverson to understand multivariate base rates, and I have to be honest with you, I didn't really even understand what he was asking me to do at the time, but if you know Grant Iverson it's just an honor to be asked. So the question was, what percentage of athletes show reliable change from baseline when completing 2 assessments? And we understand that when we give one measure the results should be normally distributed around the normal curve; but what happens when you give 4 or 5 measures, right? Looking at ImPACT we have verbal memory, visual memory, reaction time, processing speed, and we have symptoms. That's 5 measures. Is impairment normally distributed on 5 measures?
This comes from the field of neuropsychology in which we give an entire battery of tests, an entire day of testing. And then at the end of the day unilaterally or univariately we would look at all of the data and say what's impaired? Let's make sense of the data, without thinking what's the likelihood of getting 5 or 6 scores that are in the impaired range when you give an entire battery of tests? Okay, so that's the idea of multivariate base rates.
So this was healthy individuals; we reanalyzed data. One was a small sample, a 30 day test-retest of only 25 individuals, and the other a 1 year test-retest interval on 369 individuals. And if you look at the individual scores you can see that some get better, some get worse. There is some evidence of practice effects, and there are some individuals that get false negatives, okay. And it's very hard to interpret individual scores. But if you look at the number of scores out of 4 that fell below or above cutoffs using reliable change, there was much more practice effect than there was detriment. So people aren't getting concussed by taking a test, right; they don't look concussed after taking a test the second time, but some do. In fact you should only have 5% of people falling below the cutoff, and we have 8 people getting worse and 32% getting better; and in the 1 year data, which is probably a better indicator, a little over 5% were getting worse and 18% were getting better. But more importantly, nobody is getting worse on 2 or more scores. The likelihood of finding somebody that gets better or worse on 2 or more scores is almost zero. So the multivariate base rate would suggest that use of 1 score may reflect chance performance, but use of 2 or more scores is beyond chance. You are not going to find that in the general population, certainly not beyond the 5% that's expected.
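A quick way to see why one low score can be chance but two or more is not: if each of the 5 composites flags about 5% of healthy people at a 95% cutoff, the binomial distribution gives the chance rates. This sketch assumes the scores are independent, which they aren't quite (as noted earlier, they are intercorrelated), so treat it as a rough bound:

```python
from math import comb

def p_at_least(k, n=5, p=0.05):
    """Chance probability that k or more of n scores fall past a cutoff
    that flags 5% of healthy people per score (independence assumed)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

print(round(p_at_least(1), 3))  # 0.226: one flagged score is common by chance
print(round(p_at_least(2), 3))  # 0.023: two or more flagged scores is rare
```

So a single below-cutoff score happens by chance in more than a fifth of healthy test-takers, while two or more happens in only about 2%, which matches the multivariate base rate argument.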
So important point number 3: the opportunity to give baseline testing is an important opportunity for education. Legislation in the U.S. requires, across almost all states, some form of concussion management program, right, that athletes who are identified as concussed get pulled from play and are not returned until there is an independent medical evaluation; by what discipline varies from state to state. Well, how do you get people into a program? So perhaps the testing may be secondary to indoctrination into an actual program. How do athletes begin to learn about concussion? They are, hopefully, being taught by the individual that's proctoring the exam, that's leading the program, that's educating them about this potential problem and its risk factors; and that might be a real intangible benefit of doing baseline testing.
So: you don't want to compare post-concussion scores only to normative data, because you are going to inappropriately classify a lot of individuals, and scoring below baseline on only one score is not always indicative of impairment. I don't know if this is an example many of you have encountered, but what if someone scores below normative data, or significantly below baseline, on reaction time while their accuracy scores are all high? They know that if they don't pass the test, their physician or whomever is governing the program isn't going to let them back into play, so they are going to be more cautious, get all the answers right but be slower. But now they are below expected levels on reaction time. Is that impairment? All it takes is one question, right? Make it an assessment, not a test, right. It is not an event, it's a process. Ask them, what happened here? Oh, I went slower because I wanted to make sure I got it right. That's not impairment. But if you use an absolute threshold of one score, that person is going to look impaired, right. Next time they might rush, go faster and get lower accuracy, and now you are creating practice effects and all kinds of problems, okay.
What does it mean for a test to be reliable? Well, in this political climate it depends on what the meaning of the word reliable is, okay. And I love this quote because it fulfills the prophecy of what I'm trying to say: there is no such thing as the reliability of a test, unqualified; the coefficient has meaning only when applied to specific populations. So you can't just say a test is or is not reliable, and there are a lot of different ways to go about measuring it. So is reliability a correlation? I'm hoping this isn't the first time you've heard of Pearson's correlation, but it's how well the scores co-relate or go together, right. It's considered weak when individuals vary even though the group means are the same, meaning there is a lot of noise but the data on average goes upward together. So here is an example where everybody took the test and got 1 point better, so Y = X + 1. The correlation is 1.0, but everybody got better; there is a practice effect. That's not necessarily a good thing, but the data looks like it's perfectly correlated, okay.
There are other ways of measuring correlation, and perhaps a more popular way these days is with intraclass correlation coefficients. Interestingly, this was originally developed for inter-rater reliability, where there are independent observations of the same person doing something and it's about my rating versus your rating; now we say it's the same person taking the test at time 1 and time 2, but the observations are independent. I don't really understand how it was extrapolated from inter-rater reliability to test-retest, but it has been. It's supposed to be better for trial to trial consistency, especially when there is a lot of variation going on, and it's a little bit better at picking up practice effects.
So here is an example which we just showed before where the correlation was 1.0 where Y=X+1, so there is a practice effect where everyone is getting better by 1 point and the ICC is lower than the correlation, right, it's .83 instead of 1.0. If you have Y=X+2, so everyone is getting better by 2 points, the ICC drops a little bit more. So it is a better measure when accounting for practice effects but it is by no means a gold standard of saying this is reliable.
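The contrast between r and the ICC can be sketched with toy numbers. The scores below are made up, so the exact ICC values won't match the slide, but the pattern holds: r stays 1.0 while the ICC drops, and drops further as the practice effect grows.

```python
def icc_2_1(time1, time2):
    """ICC(2,1): two-way random effects, absolute agreement, single measure,
    computed from the classic mean-squares formulation."""
    n, k = len(time1), 2
    data = [[float(a), float(b)] for a, b in zip(time1, time2)]
    grand = sum(sum(row) for row in data) / (n * k)
    rows = [sum(row) / k for row in data]            # per-subject means
    cols = [sum(col) / n for col in zip(*data)]      # per-occasion means
    msr = k * sum((r - grand) ** 2 for r in rows) / (n - 1)   # between subjects
    msc = n * sum((c - grand) ** 2 for c in cols) / (k - 1)   # between occasions
    sse = sum((data[i][j] - rows[i] - cols[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                  # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def pearson_r(xs, ys):
    """Plain Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

x = [10.0, 12, 14, 16, 18]                       # hypothetical baseline scores
y1 = [v + 1 for v in x]                          # everyone improves by 1 point
print(pearson_r(x, y1))                          # 1.0: r is blind to the shift
print(round(icc_2_1(x, y1), 3))                  # below 1.0: agreement is penalized
print(round(icc_2_1(x, [v + 2 for v in x]), 3))  # lower still for a +2 shift
```

The ICC charges the test for the systematic shift between occasions, which is exactly why it's a bit better at picking up practice effects, though still no gold standard.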
Well, what happens between assessments? What happens from time 1 to time 2 on baseline assessments? So this is data that covers a bunch of different studies, going from 7 days between assessments, which was done a long time ago by Grant Iverson, up to 2 years. There is research at 30 days and 45 days (there are 2 studies on 45 days, which I'll talk about in a minute), and at 1 year and 2 years. And really the message here is that other than the middle one, the 45 days by Broglio, all of the coefficients do what you expect them to do: as the time between assessments increases, the relationship between scores decreases, regardless of whether you use correlation coefficients using r or ICCs. That makes sense. Human behavior changes from time 1 to time 2, and the more time that spans between those 2 assessments, the less shared correlation there is.
I like to bring this up just because I like to take shots at Broglio's study, because I think it's worth taking shots at, so I'm okay with that; it's on tape, I stand behind my words. Okay, so it's the importance of methodological rigor. Broglio has been heralded for the psychometric, you know, grace of his study. And what they did was give 3 test batteries back to back to back plus an effort test. So they literally gave 4 test batteries, and 34% of their sample fell outside of the expected range on what would be called validity, so they had a lot of noise. And the resulting scores probably had a lot of noise because they were using undergraduate student volunteers for course credit, right, the old human subject pool.
Well, one of Tracy Covassin's doctoral students, Nakayama (I know him by his first name, Yosuki), replicated this by giving just one battery and one effort test, and their invalidity rate dropped to 3%, which makes sense. And instead of undergraduate student volunteers, the participants were athletically active individuals, folks that met the American College of Sports Medicine definition of being athletically active by doing certain amounts of aerobic and anaerobic activity. And you can see that the correlation between assessments 45 days apart increased dramatically when you decrease the probable interference and fatigue effects of taking multiple test batteries, as well as having individuals who were engaged in the activity. So I think it's a better measure; when you look at this table I think it's a much better indicator, and it fits nicely between the 30 day and the 1 year data.
All right, do baseline test scores exhibit statistically significant change from time 1 to time 2? So one way of measuring reliability could be correlation; another is to ask, well, do the scores change? And the way we do that is just using a t-test, and it's really quite simple; I think anyone that took statistics knows that t-tests are probably the most basic statistical operation. It looks at the variability between scores using standard deviations, but at the group level. And often we are looking for practice effects, right; we expect that people are going to get better from time 1 to time 2. So this is 4 of those studies, 30 days, 45 days, 1 year and 2 years, and anything that is red on the blue background was statistically significant. The one year study had the largest sample, and without going into the intricacies of this, the larger the sample, the smaller the differences need to be to reach significance, but these were significant changes. So it appeared at one year people were getting better at almost everything, and at the other intervals there really isn't any evidence of significant change.
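A minimal sketch of the sample-size point, with invented scores: the paired t statistic is the mean difference over its standard error, so the same small practice effect that is nowhere near significant in a handful of test-takers becomes highly significant when the sample is large.

```python
import math
from statistics import mean, stdev

def paired_t(time1, time2):
    """Paired-samples t: mean of the difference scores over its standard error."""
    diffs = [b - a for a, b in zip(time1, time2)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

t1 = [50, 52, 48, 55, 51, 49, 53, 47]        # hypothetical baseline scores
t2 = [51, 52, 47, 57, 51, 50, 52, 48]        # retest: tiny average improvement

print(round(paired_t(t1, t2), 2))            # small t at n=8: not significant
print(round(paired_t(t1 * 25, t2 * 25), 2))  # identical effect at n=200: large t
```

The per-person differences are identical in both calls; only n changes, and t grows with the square root of n, which is why the one-year study's large sample turned the same small practice effect into a significant one.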
And if you look just at magnitude of change, you start to see that the number of points required to be significantly different is not all that huge. It's much smaller than a reliable change: let's say 12 points on visual memory is what's required to identify someone as concussed, but 3.5 is statistically significant. So it's really hard to understand statistical significance without clinical significance. But this says, oh yeah, over a year people are getting better at the test, but at other intervals they are not. It's kind of hard for me to understand how at 45 days you don't get a practice effect but you do a year later. I understand it because of sample size.
So maybe the way to measure reliability is stability. It's not correlation and it's not change, it's how stable the scores are. And the way that we measure stability is really looking at a magnitude of change with a confidence interval. I'm not going to get into these very deeply; you can read the literature if you find this interesting. If you don't, this will be the time you learn not to read the literature, and that's okay, I won't be hurt. But there are reliable change indices. They came from the psychotherapy literature, which said, if you put somebody into a therapeutic environment, how can you measure change at the end of psychotherapy when people are expected to get better by simply being in therapy? So what is beyond chance? There is variation, there is improvement; what is improvement beyond chance, okay?
And regression based measures say, let's make a regression equation: what should time 2 be based on time 1, and how far did you deviate from where you should be? So it's using a regression analysis, and that's a very simple way of describing these. But both use a confidence interval. They put a confidence interval of a certain number of standard deviations around the time 2 score and ask, did you fall outside of where you are supposed to be?
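As a sketch, the Jacobson and Truax reliable change index from that psychotherapy literature divides observed change by the standard error of a difference score. The SD and reliability below are made-up numbers, not values from any of the studies discussed:

```python
import math

def rci(baseline, retest, sd, reliability):
    """Reliable change index: observed change divided by the standard error
    of the difference between two scores with the given reliability."""
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    se_diff = math.sqrt(2) * sem            # SE of a difference of two scores
    return (retest - baseline) / se_diff

# Hypothetical memory composite: population SD 10, test-retest reliability .70.
z = rci(baseline=85, retest=78, sd=10, reliability=0.70)
print(round(z, 2))    # about -0.9
print(abs(z) > 1.96)  # False: a 7-point drop stays inside the 95% CI here
```

Notice how the cutoff depends on the reliability: the less reliable the measure, the bigger the drop has to be before it counts as change beyond chance.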
So this is an example using a 95% confidence interval. I kept in red the ones that were significant on the t-test of change, but this shows that when you use a more statistically appropriate measure of stability, even though there was statistically significant change, very few scores fall outside of the expected confidence interval. For every one of these, the number 5 is what you are expecting: it's a 95% confidence interval, so you are only expecting 5% of the scores to fall outside of that interval. And there is a couple, I mean, something happened here at 45 days in visual motor speed; we know that people get better at the act of taking a test, at interacting with the measures, okay. But there isn't any real evidence of clinically significant change even though there was statistically significant change. See, that wasn't so hard.
So reliability can be defined in many different ways: it can be correlations, it can be t-tests, it can be stability. And the reliability of a test, I think, is really relative to the question, the population, whether there is a clinical question, and probably the most important thing, which I won't answer today: does variability between baseline tests matter when you are looking at drop-off post-concussion? Because if you are going to get a significant drop-off of a couple standard deviations, does it matter if there is a little bit of variability or wavering between baselines? So you can't say a test is reliable without getting into the validity data, which we didn't have time to talk about today, sensitivity and specificity.
And you know the great quote: there are three kinds of lies, lies, damned lies and statistics. Well, with the same data I can deem a test to be reliable or unreliable based purely on the way I want to interpret it. So there are a lot of elephants in the room behind my comment; there are a lot of different ways to build up or take down a test, and I think when you look at the literature and you start to see these measures and which ones are being focused on, maybe it will give you a little more appreciation of the actual psychometric data that sits behind the tests.