Tag Archives: Education Research

More Indications of Positive Results from Auburn’s iPads

We’ve had iPads in our Kindergarten classrooms for more than a year now. This fall, we also rolled out iPads to our 1st grade students. All in the name of improving students’ mastery of literacy and math.

We know that we have too many students who aren’t demonstrating proficiency, so for several years we’ve been making sure that teachers get quality training in literacy and math instruction. We’re hopeful that this training, combined with the access to educational resources made possible through iPads, will increase that level of proficiency.

And when we examined gains made by last year’s kindergarten students, that’s what we found. Our kindergarten students had made more gains than in years past, leading our Curriculum Director to proclaim that taxpayers’ money is well spent.

Read more about our gains in the Sun Journal article Educators Say iPads Help Scores, and the MPBN radio story Auburn Educators Tout Benefits of iPads for Kindergartners (sorry, iPad users; you need Flash to listen to the story, but you can still peruse the article).

Keeping the Main Thing the Main Thing in Middle Level

I recently posted “Let’s Put the ‘Middle’ Back in Middle Level” over on the Bright Futures Blog.

In it, I argued that we middle level educators are being pulled away from our core values by a lot of competing priorities and goals. I wrote:

Middle level shouldn’t be about test taking, or getting kids to put aside their cell phones or Facebook pages, or high school readiness, or work readiness. It’s not even about “hormones with feet…” First and foremost, middle level needs to be about young adolescents: what are their characteristics and what practices are harmonious with those characteristics.

And later:

And the more we get away from that being our center (no pun intended), the harder it is to teach middle level students. That includes (and is perhaps especially true for) that list of important (but supporting) goals for middle level education…

I also shared some really great resources, available for free on the AMLE website, that we can use with our teachers, school boards, parents, and communities to remind everyone about the main thing in middle level education.

 

It’s Your Turn:

How are you keeping the main thing the main thing in middle level education?

 

What Students and Teachers Say About Voice and Choice

The students and teachers in my Underachievers Study (2001) all had things to say about student voice and choice.

All students in the Underachievers Study felt they learned better when they had choices about how they learned. The teachers interviewed agreed. Mrs. Libby and Mrs. Edwards both believe that choice is a way to spark student interest or to engage students. Students reported that they sometimes had free choice about what book to read or could select from several books. Occasionally students could choose between two class activities, such as when Mrs. Libby allowed her language arts class to decide either to watch a video or to work on creating a paper quilt. The students also said they could sometimes choose whom to work with or were involved in setting due dates and scheduling tests, and sometimes even in deciding how to assess a unit or project. Mrs. Jacques summed student choice up this way:

Choices? A lot of times I let them choose where they can sit, who they want to work with, what kind of learning environment they want to have in here. You see some of the stuff around the room, the different ways that they can report out. Sometimes they use overheads, sometimes they use charts, sometimes they use different things. I let them choose the kind of projects that they do.

In our interview, Mrs. Edwards and I described these kinds of choices as “the teacher making a skeleton and the students putting the skin on it.”

Mr. Mack believes that getting student input is the key to reaching reluctant learners:

Ask them if we are doing a certain unit, why they don’t like it? What type of things do they like? If it’s notes and discussions and a paper and pencil test at the end, they might not like that unit. Is there another way we can take the same information for them, that might be that they still take the notes but they do a model or a demonstration at the end. If they need to do the hands-on piece. I think that if you make it fun, exciting, they get into it without realizing that they are getting into it. And they’re starting to learn and they’re with you. And then at the end you ask them, “What did we do?” and they say, “We did this, this, and this,” and they were with you all the way along. But the student input helps me make it that way.

Choices and input were important components of project work for Ben, Doris, and Cathy. Doris said she wanted to do class projects and assignments her own way. Cathy wanted input into the kinds of work she did; she didn’t mind parameters, but didn’t want to be told exactly what to do. Ben thought he learned best when doing hands-on activities that students have more control over. Cathy noted, however, that most of the time, teachers lay out all the work to be done and students aren’t given many choices. Ben and Doris agreed. Doris found it boring when all the work was laid out for her. Ben didn’t see how he was given many choices in school and pointed out that he would like to have at least one course where he could learn what he wanted to:

Okay. I’d like to have a class where you get to learn what you want to learn. And it would be pretty much divided up [by interest group]… We have something like that only it’s an activity at the end of the day, called [activity period], that we only have on some days. I think there should be a class that is just, you choose what you’re going to learn. You have a little list of choices and you just choose.

Mrs. Jacques was explicit that students don’t get to choose what to learn: “As far as choosing the curriculum, you know, what they want to learn, that’s kind of set. So, they don’t really have much choice in that.” Seventh graders who participated in the Aspirations survey agreed. The data show that only about half the students felt that they got the chance to explore topics they found interesting (51%) or had opportunities to decide what they wanted to learn (44%).

Reference

Muir, M. (2001). What engages underachieving middle school students in learning? Middle School Journal, 33(2), 37-43.

An Extrinsic Motivator So Good It Should Be Your Secret Weapon

You’ve been reading through a series of my posts highlighting, mostly, counterproductive extrinsic motivators.

You must be wondering, though: are all extrinsic motivators bad? The kind of motivators Alfie Kohn describes (bribery-style rewards) do have a negative impact on learning, but are there other kinds of extrinsic rewards (besides random rewards) that will positively affect learning?

The answer is, Yes!

It is choice that can make extrinsic rewards almost as powerful as intrinsic motivation, the force that drives us to do what we’re interested in. Choice makes the difference. The fancy term is “autonomy-supportive strategies,” and it comes from self-determination theory (Deci & Ryan, 1985; Deci, Vallerand, Pelletier, & Ryan, 1991). Proponents of self-determination theory maintain that integrated regulation is as effective for achieving optimum learning as intrinsic motivation.

Student Voice and Choice can be achieved through a variety of strategies. Teachers can allow students to share in the decision-making and authority within the classroom. They may negotiate the curriculum with the teacher, or help the teacher decide how they will learn the curriculum. Teachers might involve students in planning the entire unit or simply give students a choice of doing one of three assignments. Project-based learning, for example, allows students numerous choices around what form their finished product will take, while learning valuable content determined by the teacher, district, or state. The key is to make sure students have choices about their learning.

Keep in mind that giving students choices does not mean “let them do what they want.” We don’t ask toddlers, “What do you want for dinner?” But we might say, “Do you want peas or carrots with dinner?”

The same is true of students. We would never think to ask them, “What do you want to learn?” But we might say, “We’re starting a unit on the Great Depression today and we’re all going to read a novel about the Depression. But let me tell you about these three outstanding books so you can choose which one you want to read…” We might progressively structure more and more choice for students so that they can take on more and more of their own learning. But that would only come with careful scaffolding, just as we might give our own children more and more choices about what they eat and, as they develop the ability to make healthy choices, eventually get to the point where we do ask our teenager what she would like for dinner.

This is why Student Voice and Choice is one of the Focus Five of Meaningful Engaged Learning and why it is one of the critical components of Customized Learning. It is so powerful that teachers should use it as their secret weapon in working to motivate students and help them be successful.

References:

Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. New York: Plenum.

Deci, E. L., Vallerand, R. J., Pelletier, L. G., & Ryan, R. M. (1991). Motivation and education: The self-determination perspective. Educational Psychologist, 26(3-4), 325-346.

4 Reasons We (Try To) Use Extrinsic Motivators

I just blogged about 5 reasons we should avoid extrinsic motivators such as punishments and rewards. If there is so much evidence against punishments and rewards, why are they used so widely in schools?

Here are 4 reasons I think we do it.

Reason 1 – They Are Widely Used:
Part of the answer may be precisely that: because they are used widely, we believe they are fine to use. Often a practice is implemented because of its perceived legitimacy rather than its proven effectiveness. This may be especially true since teachers are so challenged to find ways to reach underachieving students, and extrinsic motivators are more widely implemented and accepted than some of the other approaches to motivating students described in this series.

Reason 2 – They Tend To Have a Temporary Effect:
A second reason we may rely on punishments and rewards is, as I mentioned in the previous post, that they do tend to have a temporary desired effect. Have you ever had an itch? Perhaps poison ivy, or dry skin under a cast on your arm? Rewards are a lot like itches. When you scratch an itch, how does it feel for the first five seconds? Wonderful! And every moment after that? It hurts. Even when you leave it alone for a few minutes and then go back to scratch it, it hurts. But what do we do? We continue to scratch it over and over, trying to get back those really good five seconds from the beginning. That’s rewards: a really good initial positive effect, and then learning shuts down (but we keep doing it, trying to get that initial response back).

Reason 3 – Education’s Long Relationship With Behaviorism:
Another reason we might be quick to use extrinsic motivation is because of the long history of behaviorism in education. Skinner’s behaviorism maintains that all learning is actually only behavior and that all behavior can be conditioned and shaped through attention just to the behavior, through the pattern of stimulus-response-reinforcement. Although Skinner’s behaviorism has strongly impacted the world of education, his minimization of the role of thought and the mind brought about fiery responses from other educators and learning theorists. Perkins (1992, p. 59) responds by saying, “by ignoring human thinking as an invalid ‘folk theory,’ behaviorism discouraged some people from interacting with students in ways that made plain the workings of the mind.”

Prior to the advent of behaviorism, it was accepted that thought and mental processes play a crucial role in determining human action. But behaviorism buried this belief, with its conception of humans as robots, or machines with input—output connections. However, behaviorism no longer plays a dominant role in psychology, clearly because we are not robots, machines, or hydraulic pumps. A broad array of mental processes, including information search and retrieval, attention, memory, categorization, judgment, and decision-making play essential roles in determining why students behave as they do. (Weiner 1984, p. 16)

Perhaps the central problem with behaviorism is that it is presented as a general (comprehensive?) learning theory, instead of as a well developed, but small, piece of the puzzle. It has explanatory power for certain aspects of learning (such as appropriate behavior or recall of simple, disassociated facts) but lacks it for others (intrinsic interests, creativity, higher order thinking, anything related to the inner workings of the mind). This said, don’t think of behaviorism as incompatible with the cognitive theories. The “anti-cognitive” dimension grows from how behaviorism is sometimes applied. The theory, instead, simply describes an aspect of learning complementary to cognitive theories. Even Lepper, et al. (1973) and Weiner (1984) admit that behaviorist approaches, such as token economy systems, are effective at maintaining appropriate behavior in the classroom, an important precursor to learning.

Reason 4 – We’re Not Aware There Are Productive and Counterproductive Motivators:
Perhaps the largest reason that punishments and rewards are misused in schools is that teachers are not fully aware that there are both productive and counterproductive extrinsic motivators, or which is which.

It is this idea of counterproductive extrinsic motivators that I will explore in my next post in this series.

References:

Lepper, M. R., Greene, D., & Nisbett, R. E. (1973). Undermining children’s intrinsic interest with extrinsic reward: A test of the “overjustification” hypothesis. Journal of Personality and Social Psychology, 28, 129-137.

Perkins, D. (1992). Smart schools: Better thinking and learning for every child. NY, NY. The Free Press.

Weiner, B. (1984). Principles of a theory of student motivation and their application within an attributional framework. In R. Ames and C. Ames (Eds.), Research on motivation in education (Vol. 1): Student motivation (pp. 15-38). San Diego, CA: Academic Press.

Motivating Students: Focus on 5 Strategies

There are many children who are undermotivated, disengaged, and underachieving. One of the most persistent questions facing individual teachers is, “How do I motivate all children to learn?” You are probably one of the ones wondering how to reach them. You aren’t alone.

One approach to reaching all students is Meaningful Engaged Learning (MEL), based on my research. Schools working to improve student motivation, engagement, and achievement concentrate on balancing five focus areas:

  • Inviting Schools
  • Learning by Doing
  • Higher Order Thinking
  • Student Voice & Choice
  • Real World Connections

Here’s a brief overview of each strategy.

Inviting Schools
Sometimes it may seem like this has nothing to do with academics or engaging students in learning, but positive relationships and a warm, inviting school climate are perhaps the most important elements to put in place if you are to reach hard-to-teach students. I heard over and over again from the students I studied that they won’t learn from a teacher who doesn’t like them (and it doesn’t take much for a student to think the teacher doesn’t like him or her!). It’s important for everyone in the school to think about how to connect with students and how to create a positive climate and an emotionally and physically safe environment. Adult enthusiasm and humor go a long way, and teachers are well served to remember that one “aw-shucks!” often wipes out a thousand “atta-boys!”

Learning by Doing
When you realize that people learn naturally from the life they experience every day, it won’t surprise you that the brain is set up to learn better through active, hands-on endeavors. Many students request less bookwork and more hands-on activities. The students I studied were more willing to do bookwork if there was a project or activity as part of the lesson. Building models and displays, field trips and fieldwork, hands-on experiments, and craft activities are all strategies that help students learn.

Higher Order Thinking
It may seem counterintuitive, but focusing on memorizing facts actually makes it hard for students to recall the information later. That’s because the brain isn’t built to learn facts out of context. Higher order thinking (applying, analyzing, evaluating, and creating, in the revised Bloom’s Taxonomy) requires that learners make connections between new concepts, skills, and knowledge and previous concepts, skills, and knowledge. These connections are critical for building deep understanding and for facilitating recall and transfer, especially to new contexts. Remembering things is important and a significant goal of education, but remembering is the product of higher order thinking, not the other way around. Involving students in comparing and contrasting, drama, and using metaphors and examples are strategies to move quickly into higher order thinking.

Student Voice & Choice
Few people like being told what to do, but in reality, we all have things we have to do that may not be interesting to us or that we would not choose to do on our own. Nowhere is this truer than for children in school. So, how can we entice people to do these things? We often resort to rewards or punishments when we don’t know what else to do, but these have been repeatedly shown to be counterproductive and highly ineffective (Kohn, 1993). Instead, provide students voice and choice. Let them decide how they will do those things. This doesn’t mean allowing students to do whatever they want, but it means giving them choices (“Which of these three novels about the Great Depression would you like to read during this unit?”). Let students design learning activities, select resources, plan approaches to units, and make decisions about their learning.

Real World Connections
This focus area is often a missing motivator for students. Schools have long had the bad habit of teaching content out of context. Unfortunately, this approach produces isolated islands of learning: students can often recall the information only when they are in that particular classroom at that time of day, and they are not as able to apply it in day-to-day life. When learning is done in context, people can much more easily recall and apply knowledge in new situations (transfer). Making real world connections isn’t telling students how the content they are studying is used in the “outside world.” It’s about students using the knowledge the way people use it outside of school. Effective strategies include finding community connections, giving students real work to do, and finding authentic audiences for work.

 

This model isn’t new material; it is a synthesis of what we’ve known about good learning for a long time. The model is comprehensive, developed from education research, learning theories, teaching craft, and the voices of underachieving students.

But it is important to keep in mind that students need some critical mass of these strategies to be motivated. Teachers sometimes get discouraged when they introduce a single strategy and it doesn’t seem to impact their students’ motivation. The trick then isn’t to give up, but rather to introduce more of the strategies.

 

It’s Your Turn:

What are your best strategies for motivating students?

 

Is Middle School All About Grade Configuration?

There is a new study out which concludes that students take an academic plunge when they go to a 6-8 school rather than a K-8 school. The article is called The Middle Level Plunge.

At first glance, it seems to be a reasonably well designed study comparing student performance in 6-8 schools to that in K-8 schools (the old grade configuration dilemma!). The fallacy is in essentially equating the 6-8 grade configuration with “middle schooling”; the authors actually say, “Our results cast serious doubt on the wisdom of the middle-school experiment that has become such a prominent feature of American education.”

Here is the response that I posted as a comment on their article:

Thanks for adding to the research on the impact of school grade configuration. I especially appreciate that you didn’t just study the grade configurations, but also tried to control for various explanations, including teacher experience, school characteristics, and educational practices. You have defined each of these clearly in your article.

I am concerned, however, with your using the term “middle school” to mean the 6-8 configuration schools. You are clear that this is your definition in the article, but in middle level education circles, the term means something very different, and I fear your conclusions about 6-8 grade configuration will be misinterpreted as conclusions about middle school practices. Readers should be able to make their own distinctions, especially when the writing is clear, as your article is, but you and I both know that in our “sound bite lives” there are too many people who will see the words “middle school” and think that your definition is the same as my definition.

For middle level educators, “middle school” is essentially a set of developmentally appropriate educational practices applied in the middle level grades (generally considered grades 5-8), without regard to the grade configuration of the school housing those grades. Readers may find helpful the numerous resources available on the Association for Middle Level Education website (http://www.amle.org).

Further, the school characteristics and educational practices you examine are not those that define middle school practices. I would have looked for the characteristics defined in AMLE’s This We Believe (http://tinyurl.com/865xggv), or the Turning Points 2000 recommendations (http://www.turningpts.org/principle.htm).

Again, I am not criticizing your study or the clarity of your writing, but simply sharing the unfortunate possibility of confusion for school decision makers trying to make informed (especially research informed) decisions based on your article and the use of the term “middle school.”

Perhaps I could invite you to refer to the schools in your study as “6-8 schools” instead of “middle schools.”

So, my big objection is defining “middle school” as a grade configuration, the seeming conclusion that “the middle school experiment has failed,” and the possibility that decision makers will interpret this as if it were our definition of middle school…

I want to be clear, though. It is right and proper for researchers to select a term, define it, and use it in their article as they define it. It is expected that the reader will read such an article closely and critically. The authors of this study have done nothing wrong. Could it have been better (clearer to a wider audience) if they had done it differently? Yes.

But it is also right and proper for a reader to add their critique (politely and professionally) to the conversation through avenues such as comments on posts.

(For those of you exploring the Lead4Change model, this is a Branding and Buzz issue. Situations like these go directly to the issue of public perception of our initiatives and what role we play in communicating our vision. It is on us to try to correct misperceptions and to work toward the integrity of models we subscribe to.)

 

It’s Your Turn:

Are you a middle level educator or advocate? What are your thoughts about this study? I often ask you to post your comments here, but perhaps this time, you could post your comments on their article. And maybe you’d pass the word to your circle of middle grades contacts and they could comment, too…

 

Auburn’s iPad Research Project on the Seedlings Podcast

Seedlings is a great little podcast that, although about educational technology, is really about good teaching and learning.

So I felt honored when the Seedlings hosts invited me to return to talk about Auburn’s research on its Advantage 2014 program, best known for giving iPads to kindergartners. You can download that podcast and access related links here.

This was a follow-up to the previous podcast, where we talked both about Advantage 2014 and about Projects4ME, the statewide virtual, project-based, non-traditional program where students can earn high school credit by designing and doing projects instead of taking courses.

Responding to Critiques of Auburn’s iPad Research Claims

When we announced our research results last week, Audrey Watters was one of the first to cover them. Shortly thereafter, Justin Reich wrote a very thoughtful review of our research, and a response to Audrey’s blog post, at his EdTechResearcher blog. Others, through blog posts, comments, emails, and conversations, have asserted that we (Auburn School Department) have made claims that our data don’t warrant.

I’d like to take a moment and respond to various aspects of that idea.

But first, although it may appear that I am taking on Justin’s post, that isn’t quite true (or fair to Justin). Justin’s is the most public comment, so the easiest to point to. But I actually believe that Justin’s is a quite thoughtful (and largely fair) critique from a researcher’s perspective. Although I will directly address a couple things Justin wrote, I hope he will forgive me for seeming to hold up his post as I address larger questions of the appropriateness of our claims from our study.

Our Research Study vs. Published Research
Our results are initial results. A lot of people are interested in them (even the initial ones – there are not a lot of randomized controlled trials being done on iPads in education), so we decided to share what we had so far in the form of a research summary and a press release. But neither of these would be considered “published research” by a researcher (and we don’t consider them that either – we’re just sharing what we have so far). Published research is peer reviewed and has to meet standards for the kinds of information included. We actually have more data to collect and analyze (including more analyses on the data we already have) before we’re ready to publish.

For example, Justin was right to point out that we shared no information about the scales for the ten items we measured. As a result, some of the measures may seem much smaller than they are in proportion to their scale (because some of the scales are small), and we were not clear that it is inappropriate to compare the various measures as represented on our graph (because the scales differ). In hindsight, knowing we have mostly a lay audience for our current work, perhaps we should have been more explicit about the ten scales and perhaps created a scaled chart…

Mostly, I want my readers to know that even if I’m questioning some folks’ assertions that we’re overstating our conclusions, we are aware that there are real limitations to what we have shared to date.

Multiple Contexts for Interpreting Research Results
I have this debate with my researcher friends frequently. They say the only appropriate way to interpret research is from a researcher’s perspective. But I believe it can and should also be interpreted from a practitioner’s perspective, and that such an interpretation is not the same as a researcher’s. There is (and should be) a higher standard of review among researchers for what any results may mean. But practical implementation decisions can be made without such a high bar (and this is what makes my researcher friends mad, because they want everyone to be just like them!). This is just like how lawyers often ask you to stand much further back from the legal line than you need to. Or like a similar debate mathematicians have: if I stand some distance from my wife, then move halfway to her, then move halfway to her again, and on and on, mathematicians would say (mathematically) I will never reach her (which is true). On the other hand, we all know I would very quickly get close enough for practical purposes! 😉

Justin is very correct in his analysis of our research from a researcher’s perspective. But I believe that researchers and practitioners can, very appropriately, draw different conclusions from the findings. I also believe that both practitioners and researchers can overstate conclusions from examining the results.

I would wish (respectfully) that Justin might occasionally say in his writing, “from a researcher’s perspective…” If he lives in a researcher world, perhaps he doesn’t even notice this, or thinks it implied or redundant. But his blog is admittedly written not for an audience of researchers, but for an audience of educators who need help making sense of research.

Reacting to a Lay Blog as a Researcher
I think Justin has a good researcher head on him and is providing a service to educators by analyzing education research and offering his critique. I’m a little concerned that some of his critique was directed at Audrey’s post rather than directly at our research summary. Audrey is not a researcher. She’s an excellent education technology journalist. I think her coverage was pretty on target. But it was based on interviews with the researchers, Damian Bebell (one of the leading researchers on 1to1 learning with technology), Sue Dorris, and me, not on a researcher’s review of our published findings. At one point, Justin suggests that Audrey is responding to a graph in our research summary (as if she were a researcher). I would suggest she is responding to conversations with Damian, Sue, and me (as if she were a journalist). It is a major fallacy to think everyone should be a researcher, or think and analyze like one (just as it is a fallacy that we all should think or act from any one perspective, including as teachers, or parents, etc.). And it is important to consider an individual’s context in how we respond to them. Different contexts warrant different kinds of responses and reactions.

Was It the iPads or Was It Our Initiative?
Folks, including Audrey, asked how we knew what portion of our results came from the iPads and what portion from the professional development, etc. Our response is that it is all these things together. The lesson we learned from MLTI, the Maine Learning Technology Initiative, Maine’s statewide learning-with-laptops initiative that has been successfully implemented for more than a decade, is that these initiatives are not about a device, but about a systemic learning initiative with many moving parts. We have been using the Lead4Change model to help ensure we are taking a systemic approach and attending to the various parts and components.

That said, Justin is correct to point out that, from a research (and statistical) perspective, our study examined the impact that solely the iPad had on our students (one group of students had iPads, the other did not).

But for practitioners, especially those who might want to duplicate our initiative and/or our study, it is important to note that, operationally, our study examined the impact of the iPads as we implemented them, which is to say, systemically, including professional development and other components (Lead4Change being one way to approach an initiative systemically).

It is not unreasonable to expect that a district that simply handed out iPads would have a hard time duplicating our results. So although, statistically, it is just the iPads, in practice it is the iPads as we implemented them, as a systemic initiative.

Statistical Significance and the Issue of “No Difference” in 9 of the 10 Tests
The concept of “proof” is almost nonexistent in the research world. The only way you could prove something is if you could test every possible person that might be impacted or every situation. Instead, researchers have rules for selecting some subset of the entire population, rules for collecting data, and rules for running statistical analyses on those data. Part of why these rules are in place is because, when you are only really examining a small subset of your population, you want to try to control for the possibility that pure chance got you your results.

That’s where “statistical significance” comes in. This is the point at which researchers say, “We are now confident that these results can be explained by the intervention alone and we are not worried by the impact of chance.” Therefore, researchers have little confidence in results that do not show statistical significance.
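To make the chance-vs-intervention idea concrete, here is a minimal sketch of the arithmetic behind one common significance test, a two-sample (Welch) t statistic: it measures how many standard errors apart two group means are. All of the scores below are made up for illustration; they are not Auburn’s data, and our study used its own instruments and analyses.

```python
import math
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch two-sample t statistic: the difference between the two
    group means, divided by the standard error of that difference."""
    na, nb = len(sample_a), len(sample_b)
    var_a, var_b = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    se = math.sqrt(var_a / na + var_b / nb)
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical literacy scores for a small "iPad" group and a
# comparison group (invented numbers, for illustration only).
ipad = [14, 16, 15, 18, 17, 16, 17, 15]
comparison = [13, 15, 14, 16, 15, 14, 17, 14]

t = welch_t(ipad, comparison)
print(round(t, 2))  # → 1.93
```

With samples this small, a t statistic below roughly 2 does not clear the conventional significance threshold: the iPad group trends higher, but chance remains a plausible explanation. That is exactly the "inconclusive, not no-effect" situation described below, and why larger samples or more data can turn the same trend into a statistically significant result.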

Justin is right to say, from a researcher’s perspective, that a researcher should treat the 9 measures that were not statistically significant as if there were no difference in the results.

But that slightly overstates the case to the rest of the world, who are not researchers. For the rest of us, the one accurate thing to say about those 9 measures is that the results could be explained either by the intervention or by chance. It is not accurate for someone (and this is not what Justin wrote) to conclude that there is no positive impact from our program or that there is no evidence the program works. It is accurate to say we are unsure of the role chance played in those results.

This comes back to the idea about how researchers and practitioners can and should view data analyses differently. When noticing that the nine measures trended positive, the researcher should warn, “inconclusive!”

It is not on a practitioner, however, to make all decisions based solely on whether data are conclusive. If that were true, there would be no innovation (because there is never conclusive evidence that a new idea works before someone tries it). A practitioner should look at this from the perspective of making informed decisions, not of demanding conclusive proof. “Inconclusive” is very different from “you shouldn’t do it.” For a practitioner, the fact that all measures trended positive is itself information to consider, side by side with whether those trends are conclusive.

“This research does not show sufficient impact of the initiative” is as overstated from a statistical perspective as “We have proof this works” is from a decision-maker’s perspective.

We don’t pretend to have proof our program works. What is not overstated, however, and what Auburn has stated since we shared our findings, is this: researchers should conclude that we need more research, but the community should conclude that we have shown modest positive evidence of iPads extending our teachers’ impact on students’ literacy development, and should take this as a sign that we are good to continue our program, including into 1st grade.

We also think our results suggest that other districts should consider implementing their own thoughtfully designed iPads-for-learning initiatives.