Monthly Archives: February 2012

Auburn’s iPad Research Project on the Seedlings Podcast

Seedlings is a great little podcast that, although about educational technology, is really about good teaching and learning.

So I felt honored when the Seedlings hosts invited me to return to talk about Auburn’s research on their Advantage 2014 program, best known for giving iPads to kindergartners. You can download that podcast and access related links here.

This was a follow-up to the previous podcast, where we talked about both Advantage 2014 and Projects4ME, the statewide virtual, project-based, non-traditional program where students can earn high school credit by designing and doing projects instead of taking courses.

Responding to Critiques of Auburn’s iPad Research Claims

When we announced our research results last week, Audrey Watters was one of the first to cover them. Shortly thereafter, Justin Reich wrote a very thoughtful review of our research, and a response to Audrey’s blog post, at his EdTechResearcher blog. Others, in blog comments, posts, emails, and conversations, have asserted that we (Auburn School Department) have made claims that our data don’t warrant.

I’d like to take a moment and respond to various aspects of that idea.

But first: although it may appear that I am taking on Justin’s post, that isn’t quite true (or fair to Justin). Justin’s is the most public comment, so it is the easiest to point to. But I actually believe that Justin’s is a quite thoughtful (and largely fair) critique from a researcher’s perspective. Although I will directly address a couple of things Justin wrote, I hope he will forgive me for seeming to hold up his post as I address larger questions about the appropriateness of the claims we have drawn from our study.

Our Research Study vs. Published Research
Our results are initial results. A lot of people are interested in them (even the initial ones – there are not a lot of randomized control trials being done on iPads in education), so we decided to share what we had so far in the form of a research summary and a press release. But neither of these would be considered “published research” by a researcher (and we don’t consider them that either – we’re just sharing what we have so far). Published research is peer reviewed and has to meet standards for the kinds of information included. We actually have more data to collect and analyze (including more analyses on the data we already have) before we’re ready to publish.

For example, Justin was right to point out that we shared no information about the scales for the ten items we measured. As a result, some gains may look smaller than they are in proportion to their scale (because some of the scales are small), and we were not clear that it is inappropriate to compare the various measures against one another as represented on our graph (because the scales differ). In hindsight, knowing we have mostly a lay audience for our current work, perhaps we should have been more explicit about the ten scales and perhaps created a scaled chart…
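
For readers wondering what a “scaled chart” would add, here is a minimal sketch of the idea, with invented measure names and numbers (these are not our actual measures or data): each gain is expressed as a percent of its instrument’s full scale, so measures on different scales can sit fairly on one chart.

```python
# A minimal sketch of the "scaled chart" idea: express each measure's gain
# as a percent of its full scale, so measures on different scales can be
# compared on one chart. All names and numbers are invented for
# illustration; they are NOT Auburn's actual measures or results.
measures = {
    # name: (raw_gain, scale_min, scale_max)
    "Measure A": (3.0, 0, 54),
    "Measure B": (1.2, 0, 20),
    "Measure C": (0.8, 0, 24),
}

for name, (gain, low, high) in measures.items():
    pct_of_scale = 100 * gain / (high - low)
    print(f"{name}: raw gain {gain:.1f} = {pct_of_scale:.1f}% of scale")
```

Note that a raw gain of 1.2 on a 20-point scale is proportionally larger than a gain of 3.0 on a 54-point scale – exactly the kind of comparison our original graph obscured.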

Mostly, I want my readers to know that even if I’m questioning some folks’ assertions that we’re overstating our conclusions, we are aware that there are real limitations to what we have shared to date.

Multiple Contexts for Interpreting Research Results
I have this debate with my researcher friends frequently. They say the only appropriate way to interpret research is from a researcher’s perspective. But I believe that it can and should also be interpreted from a practitioner’s perspective, and that such an interpretation is not the same as a researcher’s. There is (and should be) a higher standard of review among researchers for what any results may mean. But practical implementation decisions can be made without such a high bar (and this is what makes my researcher friends mad, because they want everyone to be just like them!). This is just like how lawyers often ask you to stand much further back from the legal line than you need to. Or like a similar debate mathematicians have: if I stand some distance from my wife, then move half way to her, then move half way to her again, and on and on, mathematicians would say (mathematically) I will never reach her (which is true). On the other hand, we all know, I would very quickly get close enough for practical purposes! 😉
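
For the mathematically inclined, that halving riff works out like this (a quick worked calculation, nothing to do with our study):

```latex
% Distance remaining after n halving steps:
d_n = \frac{d_0}{2^n}
% This never reaches zero, but starting d_0 = 2 m apart, after just
% ten steps:
d_{10} = \frac{2\,\text{m}}{2^{10}} \approx 2\,\text{mm}
% which is "close enough for practical purposes" by anyone's standard.
```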

Justin is very correct in his analysis of our research from a researcher’s perspective. But I believe that researchers and practitioners can, very appropriately, draw different conclusions from the findings. I also believe that both practitioners and researchers can overstate conclusions from examining the results.

I would wish (respectfully) that Justin might occasionally say in his writing, “from a researcher’s perspective…” If he lives in a researcher world, perhaps he doesn’t even notice this, or thinks it implied or redundant. But his blog is admittedly not for an audience of researchers, but rather for an audience of educators who need help making sense of research.

Reacting to a Lay Blog as a Researcher
I think Justin has a good researcher head on him and is providing a service to educators by analyzing education research and offering his critique. I’m a little concerned that some of his critique was directed at Audrey’s post rather than directly at our research summary. Audrey is not a researcher. She’s an excellent education technology journalist. I think her coverage was pretty on target. But it was based on interviews with the researchers – Damian Bebell (one of the leading researchers on 1to1 learning with technology), Sue Dorris, and me – not on a researcher’s review of our published findings. At one point, Justin suggests that Audrey is responding to a graph in our research summary (as if she were a researcher). I would suggest she is responding to conversations with Damian, Sue, and me (as if she were a journalist). It is a major fallacy to think everyone should be a researcher, or think and analyze like one (just as it is a fallacy that we all should think or act from any one perspective, including as teachers, or parents, etc.). And it is important to consider an individual’s context in how we respond to them. Different contexts warrant different kinds of responses and reactions.

Was It the iPads or Was It Our Initiative?
Folks, including Audrey, asked how we knew what portion of our results came from the iPads and what portion from the professional development, etc. Our response is that it is all these things together. The lesson we learned from MLTI – the Maine Learning Technology Initiative, Maine’s statewide learning-with-laptops initiative, successfully implemented for more than a decade – is that these initiatives are not about a device, but about a systemic learning initiative with many moving parts. We have been using the Lead4Change model to help ensure we are taking a systemic approach and attending to the various parts and components.

That said, Justin is correct to point out that, from a research (and statistical) perspective, our study examined the impact of the iPad alone on our students (one group of students had iPads, the other did not).

But for practitioners, especially those who might want to duplicate our initiative and/or our study, it is important to note that, operationally, our study measured the impact of the iPads as we implemented them – which is to say, systemically, including professional development and other components (Lead4Change being one way to approach an initiative systemically).

It is not unreasonable to expect that a district that simply handed out iPads would have a hard time duplicating our results. So although, statistically, it is just the iPads, in practice it is the iPads as we implemented them: as a systemic initiative.

Statistical Significance and the Issue of “No Difference” in 9 of the 10 Tests
The concept of “proof” is almost nonexistent in the research world. The only way you could prove something is if you could test every possible person who might be impacted, in every situation. Instead, researchers have rules for selecting some subset of the entire population, rules for collecting data, and rules for running statistical analyses on those data. Part of why these rules are in place is that, when you are only examining a small subset of your population, you want to control for the possibility that pure chance got you your results.

That’s where “statistical significance” comes in. This is the point at which researchers say, “We are now confident that chance alone is unlikely to explain these results, and that the intervention did.” Therefore, researchers have little confidence in results that do not show statistical significance.
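
To make that concrete, here is a hedged sketch of the kind of significance test a researcher might run on two groups’ post-test scores. I’m using a plain two-sample t-test purely for illustration – the scores are invented, and this is not the actual analysis from our study:

```python
# Illustration only: could the difference between two groups' post-test
# means plausibly be chance? All scores below are invented.
from scipy import stats

ipad_group    = [14, 17, 15, 19, 16, 18, 15, 17]  # hypothetical iPad classrooms
control_group = [13, 15, 14, 16, 15, 14, 16, 13]  # hypothetical comparison classrooms

t_stat, p_value = stats.ttest_ind(ipad_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Statistically significant: chance alone is an unlikely explanation.")
else:
    print("Inconclusive: chance can't be ruled out, NOT the same as 'no effect'.")
```

The key point for non-researchers is in that last branch: a p-value above the cutoff means “we can’t rule out chance,” not “the program had no effect.”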

Justin is right to say, from a researcher’s perspective, that a researcher should treat the 9 measures that were not statistically significant as if there were no difference in the results.

But that slightly overstates the case to the rest of the world, who are not researchers. For the rest of us, the one thing that is accurate to say about those 9 measures is that the results could be explained either by the intervention or by chance. It is not accurate for someone to conclude (and this is not what Justin wrote) that there is no positive impact from our program, or that there is no evidence the program works. It is accurate to say we are unsure of the role chance played in those results.

This comes back to the idea about how researchers and practitioners can and should view data analyses differently. When noticing that the nine measures trended positive, the researcher should warn, “inconclusive!”

It is not on a practitioner, however, to make all decisions based solely on whether data are conclusive. If that were true, there would be no innovation (because there is never conclusive evidence a new idea works before someone tries it). A practitioner should look at this from the perspective of making informed decisions, not of demanding conclusive proof. “Inconclusive” is very different from “you shouldn’t do it.” For a practitioner, the fact that all measures trended positive is itself information to consider, side by side with whether those trends are conclusive.

“This research does not show sufficient impact of the initiative” is as overstated from a statistical perspective as “We have proof this works” is from a decision-maker’s perspective.

We don’t pretend to have proof our program works. What is not overstated, however – and what Auburn has said since we shared our findings – is the following: Researchers should conclude we need more research. But the community should conclude that we have shown modest positive evidence of iPads extending our teachers’ impact on students’ literacy development, and should take this as a sign we are good to continue our program, including into 1st grade.

We also think these results suggest that other districts should consider implementing their own thoughtfully designed iPads-for-learning initiatives.

More News on Auburn’s iPad Research Results

The other day, I blogged about our Phase 1 research results on the impact of Advantage 2014, our literacy and math initiative that includes 1to1 iPads in kindergarten. Now the press and blogosphere are starting to report on it, too.

Auburn’s press release, the research summary, and slides from the School Committee presentation are here.

It’s Your Turn:

Have you found press about this elsewhere? Please share!

Professional Development for Auburn’s iPad Kindergarten Teachers

Auburn is excited that our initial research results strongly suggest that our initiative is extending the impact our teachers are having on their students. It has prompted lots of requests to know more about what we’re doing for professional development with our teachers. Professional development is, clearly, one critical component to any school change initiative, and designing and providing the right kind of PD and support is a critical leadership role.

What professional development did we conduct with our kindergarten teachers?

Content of Professional Development – All of our PD and training has focused on a few topics. We wanted to expand our teachers’ skill at applying literacy best practices, and to ensure that our teachers and specialists working with kindergarten students had the capacity to select and apply appropriate apps directly to student academic needs, as well as to manage the iPads and work within the unique demands of this initiative. We have summed this up at the beginning of each of our PD session agendas with the following goals:

  1. Link iPads to learning.
  2. Problem-solve technology-related issues.
  3. Discuss best practices.
  4. Discuss and review apps.

How did we manage professional development and support that achieved these goals?

PD for Paradigm Shift – Although teachers can often, sometimes with coaching, apply best practices they are familiar with to unfamiliar contexts, the integration of technology at this level is often a paradigm shift for teachers. Rarely have teachers experienced learning with technology themselves, and many have received very little training with computers, let alone iPads. “PD for Paradigm Shift” recognizes that changing paradigms requires more than sharing information. Schema theory sheds the best light on how to structure professional development for large change: provide models and experiences. See it in action. Live it in action. That’s what we’ve tried to do for our teachers.

Getting Technology into the Hands of Teachers – A terrific first step for professional development is to get the technology into the hands of teachers, so they can become comfortable with it through their own use. We made sure that every teacher had an iPad to use over the summer for this purpose. But it is important to keep in mind that this develops a teacher’s personal-use skill, not their integration-for-learning skill. That’s not a problem. The problem comes from thinking that if teachers know how to use their iPad, they also know how to leverage it for their students’ learning…

Modeling: Visiting Classrooms – When teachers don’t have a lot of experience with an innovation, one way to get them that experience is to have them visit other teachers who are successfully doing similar work. This can be done in person, or vicariously through videos or stories (not descriptive articles so much as those that tell the story and paint a picture for the reader – remember, this is not about information, it’s about experience). Unfortunately, there weren’t a lot of classrooms for us to visit when we got started. We have tried to make it easy for our teachers to visit each other’s rooms, and we have had teachers record video (on their iPads!) and share examples of what they are doing. Now we’re working to make sure that other teachers can come visit our classrooms so they can begin to expand their experience (although we’re trying to be careful how we schedule and manage such visits so as not to distract too much from the learning that is supposed to be our first order of business!).

Modeling Effective Practices – Did we see a teacher do a great lesson? We had her model that lesson to the other teachers in a PD session. Did we learn a better way to sync or manage apps? We modeled that approach in a PD session. Did we think the press might start contacting teachers? We’d review procedures for dealing with press requests, as well as share talking points and provide them language they could choose to use if interviewed. (Our teachers’ favorite talking point: “That’s a great question! You should ask the superintendent.” I think they might be a little press shy!) 🙂

Connecting with Other Educators – A different approach to helping teachers and program leaders build models is to provide them opportunities to communicate with educators who are doing similar work. Networking is a powerful way for teachers to develop their own practice while helping colleagues (often in different states or countries!) develop theirs. We encourage our teachers to consider tweeting or blogging about their experiences, since doing so can help build a diverse professional learning network, although few have taken us up on those options to date. Our teachers are more eager to connect with teachers in more traditional ways: on the phone or via email. One avenue that has really opened possibilities for these connections was the national iPads in primary grades education conference we hosted last November. About half our participants were from Maine, and the other half were from across the country (we even had one from India!). We’re already planning next year’s conference.

Constructivist Approach – As we thought about designing PD for our teachers, we didn’t want to just hand teachers information or resources; for example, we didn’t want to just hand them “approved” apps. We wanted teachers to have an intimate understanding of the various components of the initiative they were on the front lines of implementing, including app (educational resource) selection. So we took a constructivist approach. For example, we had our teachers start by simply exploring apps. They had a limited budget for apps, but could also download as many free apps as they wanted. Then teachers made recommendations for the apps they thought should form the “core collection” – those apps the district would purchase for every classroom. We would give teachers two similar apps and ask, “Which one’s better?” to get them thinking about criteria for app selection; this eventually developed into a rubric. Finally, we correlated apps to our kindergarten curriculum. The constructivist approach ensures a deeper understanding based on teachers’ own experience.

Collaboratively Designed – I think one of our best secrets to successful professional development and support is realizing that none of us is as smart as a group of us together. We have tried to take a team approach to all design work for this initiative. Our Advantage 2014 Design Team includes central office administrators, our grant writer, a couple of School Committee members, a couple of parents and community members, some of our tech folks, and one of our elementary principals (and, of course, me, the Multiple Pathways Leader). We have smaller groups working on specific aspects of the program: funding, research, the Institute, and professional development. Our professional development planning group includes our Tech Director, our elementary Technology Coach, an elementary principal, one of our Special Education administrators, and me. And even though no teacher is officially on the PD design team, in reality they all are. We solicit their input in a variety of ways and work hard to be responsive to their needs (see the next section). Teachers helped us craft our policies and procedures, our expectations for teachers, our core collection of apps, our app selection rubric, and other significant components of our initiative.

I can’t overstate this: this work MUST be a team effort. I can’t tell you how many times in the last few weeks I’ve said, “See! That’s exactly why we have a team planning this!” And not just for PD, but for lots of different aspects of this work. I don’t care how good any individual in your district is; he or she doesn’t have the capacity represented by a collection of your staff, with their various experience bases, perspectives, and areas of expertise.

Continuous Improvement Focus – We’ve tried to be highly responsive to the needs of our teachers. In addition to listening to our teachers, asking them directly, and staying tuned in to situations as they develop, we use two tools. On a regular basis, we have our teachers complete one of two surveys we created in Google Docs. One asks how often they used the iPads that week for various types of tasks (these correspond to the expectations for teachers that we created collaboratively with them, and essentially get at fidelity of implementation). The other survey simply asks about their recent successes and challenges within the program. Although quite simple, both provide us with amazing data on what the teachers need right now. Although we plan our PD sessions in advance, we’ve been known to completely redesign a session hours before it starts based on what we’ve learned the teachers need.

Embedded Support – Our district has three technology integrators: one for the high school, one for the middle school, and one serving our elementary schools. As you can imagine, we’ve had our elementary technology coach spend much of her time working in our kindergarten classrooms. A good technology coach is really a good pedagogical coach. She can collaboratively design lessons with teachers, co-teach lessons, model lessons, sit back and observe and provide feedback, recommend resources, and otherwise support teachers. Although the technology coach becomes eyes and ears for program leaders, it is not an evaluative position. The teacher needs to feel safe with the coach working in her room, and we only use information from the coach to help direct resources and support.

Built on a Strong Literacy Foundation – Our teachers had been working on literacy instruction for several years prior to Advantage 2014 and the introduction of iPads to their classrooms. Auburn had been part of the Maine Literacy Project out of the University of Maine, and our teachers had done graduate-level work with the Project. Adding the iPads and their apps was a logical extension of this work, and the training we conducted specifically about the iPads was intended to extend this earlier work, not replace it.

Where Did We Find the Time? – We used the usual approaches: taking advantage of workshop days, after-school opportunities, and scheduling a couple days in the summer prior to school starting. But we also had the advantage of the district already having “Early Release Wednesdays” available for our elementary schools. We have used nearly every other Wednesday to provide several hours of training. Some days we met just with the “September Teachers” (the first round of teachers to get the iPads). Other times we met with all the kindergarten teachers, or just the specialists, or everyone all at once.

It’s Your Turn:

What are your best strategies for delivering professional development and support to your staff?

What Apps is Auburn Using and Other Advantage 2014 Information

Since the initial results of our research on improving kindergarten literacy with iPads have come out, we’ve received quite a few questions about what apps we are using, and other basic questions about our program.

Our teachers can largely choose which apps they use. A major component of our professional development with our teachers has focused on exploring apps, deciding what makes for a good app, correlating apps to our curriculum, selecting the apps for our common collection, and getting better at customizing apps to student needs.

This page has both our rubric for app selection and our core collection of apps correlated to our K literacy curriculum.

Additionally, here is where you can find lots of other basic information about Advantage 2014, Auburn’s literacy and math initiative that includes 1to1 iPads in kindergarten.

You can read much more about our program at our Advantage 2014 website.

There are lots of links to resources from the national iPads in Primary Grades Ed conference we hosted last November. (We’re planning now for next November’s conference!)

And this blog has posts about our work. Here are the ones in the Adv2014 category.

It’s Your Turn:

What are your favorite kindergarten apps?

Confirmed: iPads Extend a Teacher’s Impact on Kindergarten Literacy

I’m excited! I’m REALLY excited!

Our “Phase I” research results are in…

iPads in Kindergarten

We (Auburn School Department) took a big risk last May when we started down the path to launch the first 1to1 kindergarten learning-with-iPads initiative. We had confidence it would help us improve our literacy and math proficiency rates. One of our literacy specialists had used her own iPad with students to great success (one of the big reasons we moved forward). But there were also segments of the community that thought we were crazy.

Now we have pretty good evidence it works!

We did something not a lot of districts do: a randomized control trial. We randomly selected half our kindergarten classrooms to get iPads in September. The other half would use traditional methods until December, when they received their iPads. We used our regular kindergarten literacy screening tools (CPAA, Rigby, Observation Survey) for the pre-test and post-test. And across the board, the results trended positive for the iPad classrooms, with one area reaching statistical significance.
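
For districts curious what the random-assignment step looks like mechanically, here is a minimal sketch. The classroom labels and count are placeholders, not our actual rosters:

```python
# Sketch of randomly assigning classrooms to "iPads in September"
# (treatment) vs. "iPads in December" (delayed comparison).
# Classroom labels and count are placeholders.
import random

classrooms = [f"K-{i}" for i in range(1, 17)]  # 16 hypothetical K classrooms

random.seed(2011)               # fixed seed so the assignment is reproducible
random.shuffle(classrooms)
half = len(classrooms) // 2
september_group = sorted(classrooms[:half])    # get iPads in September
december_group  = sorted(classrooms[half:])    # get iPads in December

print("September iPads:", september_group)
print("December iPads: ", december_group)
```

Because every classroom has the same chance of landing in either group, differences at post-test can be attributed to the intervention (plus chance) rather than to how the classrooms were picked.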

These results are a strong indication that the iPad and its apps extend the impact our teachers have on our students’ literacy development. We definitely need more research (and will be continuing the study through the year, including comparing this year’s results to past years’), but these results should be more than enough evidence to address the community’s question, “How do we know this works?”

And I’m especially excited that we went all the way to the Gold Standard for education research: randomized control trials. That’s the level of research that can open doors to funding and to policy support.

Why do we think we got these results?

We asked our kindergarten teachers that question. Anyone walking by one of the classrooms can certainly see that student engagement and motivation are up when students are using the iPads. But our kindergarten teachers teased it out further. Because they are engaged, students are practicing longer. They are getting immediate feedback, so they are practicing better. Because we correlate our apps to our curriculum, they are practicing the right stuff. Because we select apps that won’t let students do things just any way, we know the students are practicing the right way. And because students are engaged, teachers are freer to work one-on-one with the students who need extra support at that moment.

We also believe we got the results we got because we have viewed this as an initiative with many moving parts that we are addressing systemically. A reporter asked me how we know how much of these results come from the iPad, how much from the professional development, and how much from the apps. I responded that it is all those things together, on purpose. We are using a systemic approach that recognizes our success is dependent on, among other things, the technology, choosing apps wisely, training and supporting teachers in a breadth of literacy strategies (including applying the iPad), partnering with people and organizations that have expertise and resources they can share with us, and finding data where we can so we can focus on continuous improvement.

And we’re moving forward – with our research, with getting better at math and literacy development in kindergarten, with figuring out how to move this to the first grade.

So. We have what we were looking for:

Confirmation that our vision works.

It’s Your Turn:

What do you think the implications of our research are? What do our findings mean to you?

Learn More about Projects4ME and Auburn’s iPad Program

The other night, I had the pleasure of joining Cheryl Oaks, Alice Barr, and Bob Sprankle on their Seedlings podcast.

We had a chance to talk about Auburn’s literacy and math initiative that includes 1to1 iPads in kindergarten, and Projects4ME, Maine’s statewide virtual project-based program for at-risk youth.

Check out the links and podcast here.