Category Archives: Branding and Buzz

Branding and Buzz

7 Social Media Articles to Help Your School’s Communication Impact

Schools and educational organizations are starting to realize that even though they are doing great work, they need to get that message out to their parents, communities, members, and constituents. “Branding and Buzz” is one of the “Supporting but Necessary” components of the Lead4Change Model, and encourages schools and organizations to state their case for the work they are doing, communicate with their community and beyond, tell their story, and present their evidence.

So begins my recent Bright Futures blog post on schools using social media to get their message out.

The post points readers toward the Social Media Examiner, a wonderful resource for helping organizations leverage social media. In particular, I highlighted 7 articles focused on getting the most from Facebook, Twitter, and blogging.

A lot of schools already have a Facebook page. Some are even using Twitter. Others have administrators or teachers who blog. But are they using these avenues to connect with parents, communities, and colleagues as effectively or with as much impact as they could?

I think these 7 articles can help ensure that schools do. The articles share wonderful tips from folks who are getting the most from their social media. Where can a school start?

Use the post as a jumping-off point. Do a deep dive into one of the articles, or have your staff or leadership team jigsaw a couple of them. Your school could take what it learns and decide on a couple of things to try out.

The Bright Futures Partnership did just that. We read the article 26 Tips For Writing Great Blog Posts, and decided on 5 or 6 things we were going to try (look for changes coming to the Bright Futures blog and see if you can spot which tips we put into action!). By the way, reading the article also allowed us to pat ourselves on the back for 5 or 6 things we were already doing!

 

It’s Your Turn:

What are your best strategies for getting your school’s or educational organization’s message out via social media?

 

Auburn’s iPad Research Project on the Seedlings Podcast

Seedlings is a great little podcast that, although about educational technology, is really about good teaching and learning.

So I felt honored when the Seedlings hosts invited me to return to talk about Auburn’s research on its Advantage 2014 program, best known for giving iPads to kindergartners. You can download that podcast and access related links here.

This was a follow up to the previous podcast, where we talked both about Advantage 2014, and Projects4ME, the statewide virtual project-based non-traditional program, where students can earn high school credit by designing and doing projects, instead of taking courses.

Responding to Critiques of Auburn’s iPad Research Claims

When we announced our research results last week, Audrey Watters was one of the first to cover it. Shortly thereafter, Justin Reich wrote a very thoughtful review of our research and response to Audrey’s blog post at his EdTechResearcher blog. Others, through blog comments, posts, emails, and conversations, have asserted that we (Auburn School Department) have made claims that our data don’t warrant.

I’d like to take a moment and respond to various aspects of that idea.

But first, although it may appear that I am taking on Justin’s post, that isn’t quite true (or fair to Justin). Justin’s is the most public comment, so the easiest to point to. But I actually believe that Justin’s is a quite thoughtful (and largely fair) critique from a researcher’s perspective. Although I will directly address a couple things Justin wrote, I hope he will forgive me for seeming to hold up his post as I address larger questions of the appropriateness of our claims from our study.

Our Research Study vs. Published Research
Our results are initial results. There are a lot of people interested in our results (even the initial ones – there are not a lot of randomized control trials being done on iPads in education), so we decided to share what we had so far in the form of a research summary and a press release. But neither of these would be considered “published research” by a researcher (and we don’t consider them that either – we’re just sharing what we have so far). Published research is peer reviewed and has to meet standards for the kinds of information included. We actually have more data to collect and analyze (including more analyses on the data we already have) before we’re ready to publish.

For example, Justin was right to point out that we shared no information about the scales for the ten items we measured. As such, some of the measures may seem much smaller when compared proportionally to their scale (because some of the scales are small), and we were not clear that it is inappropriate to try to make comparisons between the various measures as represented on our graph (because the scales are different). In hindsight, knowing we have a mostly lay audience for our current work, perhaps we should have been more explicit about the ten scales and perhaps created a scaled chart.

Mostly, I want my readers to know that even if I’m questioning some folks’ assertions that we’re overstating our conclusions, we are aware that there are real limitations to what we have shared to date.

Multiple Contexts for Interpreting Research Results
I have this debate with my researcher friends frequently. They say the only appropriate way to interpret research is from a researcher’s perspective. But I believe it can and should also be interpreted from a practitioner’s perspective, and that such an interpretation is not the same as a researcher’s. There is (and should be) a higher standard of review by researchers of what any results may mean. But practical implementation decisions can be made without such a high bar (and this is what makes my researcher friends mad, because they want everyone to be just like them!). This is just like how lawyers often ask you to stand much further back from the legal line than you need to. Or like a similar debate mathematicians have: if I stand some distance from my wife, then move halfway to her, then move halfway to her again, and on and on, mathematicians would say (mathematically) I will never reach her (which is true). On the other hand, we all know I would very quickly get close enough for practical purposes! 😉
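The halving analogy is easy to make concrete with a few lines of arithmetic (the starting distance here is invented purely for illustration):

```python
# A hypothetical starting distance -- any number behaves the same way
distance = 10.0  # feet

# Move halfway to the target ten times
for _ in range(10):
    distance /= 2

# Mathematically the remaining distance (10 / 2**10) is never zero,
# but after just ten halvings it is under a hundredth of a foot
print(distance)
```

After ten steps the gap is about 0.0098 feet: never mathematically zero, but close enough for any practical purpose.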

Justin is very correct in his analysis of our research from a researcher’s perspective. But I believe that researchers and practitioners can, very appropriately, draw different conclusions from the findings. I also believe that both practitioners and researchers can overstate conclusions from examining the results.

I would wish (respectfully) that Justin might occasionally say in his writing, “from a researcher’s perspective…” If he lives in a researcher world, perhaps he doesn’t even notice this, or thinks it implied or redundant. But his blog is admittedly not for an audience of researchers, but rather for an audience of educators who need help making sense of research.

Reacting to a Lay Blog as a Researcher
I think Justin has a good researcher head on him and is providing a service to educators by analyzing education research and offering his critique. I’m a little concerned that some of his critique was directed at Audrey’s post rather than directly at our research summary. Audrey is not a researcher. She’s an excellent education technology journalist. I think her coverage was pretty on target. But it was based on interviews with the researchers, Damian Bebell (one of the leading researchers on 1to1 learning with technology), Sue Dorris, and me, not on a researcher’s review of our published findings. At one point, Justin suggests that Audrey is responding to a graph in our research summary (as if she were a researcher). I would suggest she is responding to conversations with Damian, Sue, and me (as if she were a journalist). It is a major fallacy to think everyone should be a researcher, or think and analyze like one (just as it is a fallacy that we all should think or act from any one perspective, including as teachers, parents, etc.). And it is important to consider individuals’ contexts in how we respond to them. Different contexts warrant different kinds of responses and reactions.

Was It the iPads or Was It Our Initiative?
Folks, including Audrey, asked how we knew what portion of our results came from the iPads and what portion from the professional development, etc. Our response is that it is all these things together. The lesson we learned from MLTI, the Maine Learning Technology Initiative, Maine’s statewide learning-with-laptops initiative that has been successfully implemented for more than a decade, is that these initiatives are not about a device, but about a systemic learning initiative with many moving parts. We have been using the Lead4Change model to help ensure we are taking a systemic approach and attending to the various parts and components.

That said, Justin is correct to point out that, from a research (and statistical) perspective, our study examined the impact that solely the iPad had on our students (one group of students had iPads, the other did not).

But for practitioners, especially those who might want to duplicate our initiative and/or our study, it is important to note that, operationally, our study examined the impact of the iPads as we implemented them, which is to say, systemically, including professional development and other components (Lead4Change being one way to approach an initiative systemically).

It is not unreasonable to expect that a district that simply handed out iPads would have a hard time duplicating our results. So although, statistically, it is just the iPads, in practice, it is the iPads as we implemented them as a systemic initiative.

Statistical Significance and the Issue of “No Difference” in 9 of the 10 Tests
The concept of “proof” is almost nonexistent in the research world. The only way you could prove something is if you could test every possible person that might be impacted or every situation. Instead, researchers have rules for selecting some subset of the entire population, rules for collecting data, and rules for running statistical analyses on those data. Part of why these rules are in place is because, when you are only really examining a small subset of your population, you want to try to control for the possibility that pure chance got you your results.

That’s where “statistical significance” comes in. This is the point at which researchers say, “We are now confident that these results can be explained by the intervention alone and we are not worried by the impact of chance.” Therefore, researchers have little confidence in results that do not show statistical significance.
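One way to see what “could be explained by chance” means is a simple permutation test: shuffle the group labels many times and ask how often chance alone produces a difference as large as the one observed. The scores below are invented for illustration; they are not Auburn’s data.

```python
import random
import statistics

# Hypothetical post-test scores for two groups of eight (NOT Auburn's actual data)
ipad_group    = [14, 17, 15, 18, 16, 19, 15, 17]
control_group = [13, 15, 14, 16, 15, 14, 16, 13]

observed_diff = statistics.mean(ipad_group) - statistics.mean(control_group)

# Permutation test: if the group labels were assigned by pure chance,
# how often would the difference be at least as large as what we observed?
random.seed(0)
pooled = ipad_group + control_group
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:8]) - statistics.mean(pooled[8:])
    if diff >= observed_diff:
        count += 1

p_value = count / trials
print(f"observed difference: {observed_diff:.3f}, p = {p_value:.3f}")
```

A p-value below the conventional 0.05 cutoff is what researchers mean by “statistically significant”; above it, chance remains a plausible explanation, which is different from evidence of no effect.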

Justin is right to say, from a researcher’s perspective, that a researcher should treat the 9 measures that were not statistically significant as if there were no difference in the results.

But that is slightly overstating the case to the rest of the world who are not researchers. For the rest of us, the one thing that is accurate to say about those 9 measures is that the results could be explained by either the intervention or by chance. It is not accurate for someone (and this is not what Justin wrote) to conclude there is no positive impact from our program or that there is no evidence the program works. It is accurate to say we are unsure of the role chance played in those results.

This comes back to the idea about how researchers and practitioners can and should view data analyses differently. When noticing that the nine measures trended positive, the researcher should warn, “inconclusive!”

It is not on a practitioner, however, to make all decisions based solely on whether data are conclusive or not. If that were true, there would be no innovation (because there is never conclusive evidence a new idea works before someone tries it). A practitioner should look at this from the perspective of making informed decisions, not conclusive proof. “Inconclusive” is very different from “you shouldn’t do it.” For a practitioner, the fact that all measures trended positive is itself information to consider, side by side with whether those trends are conclusive or not.

“This research does not show sufficient impact of the initiative” is as overstated from a statistical perspective as “We have proof this works” is from a decision-maker’s perspective.

We don’t pretend to have proof our program works. What is not overstated, however, and what Auburn has stated since we shared our findings, is the following: Researchers should conclude we need more research. But the community should conclude that we have shown modest positive evidence of iPads extending our teachers’ impact on students’ literacy development, and should take this as suggesting we are good to continue our program, including into 1st grade.

We also think these results suggest that other districts should consider implementing their own thoughtfully designed iPads-for-learning initiatives.

More News on Auburn’s iPad Research Results

The other day, I blogged about our Phase 1 research results on the impact of Advantage 2014, our literacy and math initiative that includes 1to1 iPads in kindergarten. Now the press and blogosphere are starting to report on it, too.

Auburn’s press release, the research summary, and slides from the School Committee presentation are here.

It’s Your Turn:

Have you found press about this elsewhere? Please share!

Confirmed: iPads Extend a Teacher’s Impact on Kindergarten Literacy

I’m excited! I’m REALLY excited!

Our “Phase I” research results are in…

iPads in Kindergarten

We (Auburn School Department) took a big risk last May when we started down the path to have the first 1to1 kindergarten learning with iPads initiative. We had confidence it would help us improve our literacy and math proficiency rates. One of our literacy specialists had used her own iPad with students to great success (one of the big reasons we moved forward). But there were also segments of the community that thought we were crazy.

Now we have pretty good evidence it works!

We did something not a lot of districts do: a randomized control trial. We randomly selected half our kindergarten classrooms to get iPads in September. The other half would use traditional methods until December, when they received their iPads. We used our regular kindergarten literacy screening tools (CPAA, Rigby, Observation Survey) for the pre-test and post-test. And across the board, the results trended positive for the iPad classrooms, with one area reaching statistical significance.
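The random-assignment step at the heart of a randomized control trial can be sketched in a few lines; the classroom IDs and the count of 16 are invented for illustration (the post doesn’t say how many kindergarten classrooms Auburn has):

```python
import random

# Hypothetical classroom IDs -- Auburn's actual roster is not given in the post
classrooms = [f"K-{i}" for i in range(1, 17)]  # 16 kindergarten classrooms

random.seed(2011)            # fixed seed so the assignment is reproducible
random.shuffle(classrooms)   # randomize which classrooms land in each group

half = len(classrooms) // 2
september_ipads = sorted(classrooms[:half])   # treatment: iPads from September
december_ipads  = sorted(classrooms[half:])   # control: traditional methods until December

print("Treatment:", september_ipads)
print("Control:  ", december_ipads)
```

Because assignment is random, the two groups should be comparable on average before the intervention, which is what lets post-test differences be attributed to the iPads rather than to pre-existing differences between classrooms.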

These results are a strong indication that the iPad and its apps extend the impact our teachers have on our students’ literacy development. We definitely need more research (and will be continuing the study through the year, including comparing this year’s results to past years), but these results should be more than enough evidence to address the community’s question, “How do we know this works?”

And I’m especially excited that we went all the way to the Gold Standard for education research: randomized control trials. That’s the level of research that can open doors to funding and to policy support.

Why do we think we got these results?

We asked our kindergarten teachers that question. Anyone walking by one of the classrooms can certainly see that student engagement and motivation is up when using the iPads. But our kindergarten teachers teased it out further. Because they are engaged, students are practicing longer. They are getting immediate feedback, so they are practicing better. Because we correlate our apps to our curriculum, they are practicing the right stuff. Because we select apps that won’t let students do things just any way, we know the students are practicing the right way. Because they are engaged, teachers are more free to work one on one with the students who need extra support at that moment.

We also believe we got the results we got because we have viewed this as an initiative with many moving parts that we are addressing systemically. A reporter asked me: how do you know how much of these results are from the iPad, how much from the professional development, and how much from the apps? I responded that it is all those things together, on purpose. We are using a systemic approach that recognizes our success is dependent on, among other things, the technology, choosing apps wisely, training and supporting teachers in a breadth of literacy strategies (including applying the iPad), partnering with people and organizations that have expertise and resources they can share with us, and finding data where we can so we can focus on continuous improvement.

And we’re moving forward – with our research, with getting better at math and literacy development in kindergarten, with figuring out how to move this to the first grade.

So. We have what we were looking for:

Confirmation that our vision works.

It’s Your Turn:

What do you think the implications of our research are? What do our findings mean to you?

Entrepreneurial Thinking for Educators

I’ve been thinking a lot lately about entrepreneurial thinking. Branding and Buzz is one of the “Supporting But Necessary” components of the Lead4Change model, but it is becoming clear to me that educators generally do not think entrepreneurially or about how to market their good work. It is generally not part of their creative problem-solving skill set. (Nor can I think of a single reason why it would have been, up to this point. Educators generally haven’t had to live entrepreneurially, so why would they think that way? This isn’t blame or criticism. It’s just an observation.)

Not only am I thinking about how we might fund our innovative programs in schools (when we can barely get core services funded), but I know several groups of wonderful educators who put together conferences that always get the best reviews from participants, and nonetheless are facing greatly declining enrollments. We haven’t seen schools in such dire financial straits in a long time, and it doesn’t look like there will be any more money from the state or the Feds in the near future. And, just as in all perceived survival situations, all the supporting systems get shut down to keep the core systems operating… So it’s not surprising that PD is getting cut way back and conferences and institutes are struggling.

In other words, there is a growing need for educators to think entrepreneurially.

I’ll concentrate in this post on the notion of entrepreneurial thinking when trying to put together professional development opportunities for others, since a version of these thoughts was originally a response to a friend’s inquiry about how to improve registrations and attendance for a summer institute she was helping to put together.

So, what are the important pieces?

Entrepreneurial thinking has to move beyond us simply thinking about why things should be funded. I think teachers readily recognize that initiatives or conferences are worthwhile when they represent quality work, involve partnerships, or utilize social media in productive ways. These points aren’t wrong. They are great reasons for educators to get involved in those professional opportunities. But I think these points come up short when resources (funding) are scarce.

I fully believe that “doing quality work” is an important (critical!) component of living entrepreneurially. But it is clearly not sufficient. I doubt we have too many folks leaving the wonderful conferences they attend feeling that it wasn’t an awesome and professionally valuable experience. I doubt we have too many educators who aren’t drawn to the well-known names on the program. But if “doing quality work” were sufficient, we wouldn’t be struggling with registration… (And clearly if we did crappy work, it wouldn’t matter what else we did; we’d still struggle with getting folks to register.)

I put strategies such as partnering with others and leveraging social networks in the “doing quality work” category. Although they are critical to making sure that the conference goes well for the participants, they aren’t critical pieces to the challenge of getting people to register in the first place. Certainly they play a supporting role, just not a critical role. For example, Apple helped us with logistical support for our iPads in primary grades conference. Were we a success because of that? Certainly it contributed (HEAVILY) to the quality of experience that participants had during the conference, but it didn’t contribute to getting folks to sign up in the first place. (But I feel differently about a different Apple contribution described below.)

So, what then needs to be in place beyond “doing good work”? That’s what I’ve been thinking a lot about lately. Having our iPad in Primary Grades Education conference do so well and go so well at least has given me a real experience to dissect… So what did we have that other conferences that don’t fill might not? Here are some of my initial thoughts…

1) We had something THEY wanted (not something we wanted FOR them), and we promised to show them how they could get it, too. iPads in Ed are REALLY hot right now, and tech in primary grades is controversial enough for folks who want to try it to wonder how others are doing it… Therefore, also “right place, right time” is a piece of this.

2) We were bold, perhaps to the point of being audacious. We claimed openly and publicly and in the press that we were going to be the first 1to1 kindergarten iPad initiative (maybe we were, but maybe not), and that we were going to offer a national conference where others could learn about our success. (Audaciousness: where do we get off leading a national conference on an initiative we’ve been working on for less than 6 months!?!)

3) Building on the idea that others wanted to know how we were doing this, we built some sense of urgency by publicizing that we only had 100 slots (hurry now before they are all gone!). The irony is that we were also limited by how much room we had. If we wanted to do this conference locally, then our limited options for venue limited how much space we had…

4) We could easily market directly to our targeted customer base. Apple reps let their primary grades customers know about the conference (this is the “other” Apple contribution mentioned above), and we’re a member of the Maine Cohort for Customized Learning, so we let the other member districts know (one of those MCCL member districts brought 13 people! They were the largest team attending.). Other avenues helped (press releases, the ACTEM listserv, etc.) but weren’t where we got the bulk of our attendees.

So, “doing good work” is one piece of creating a good event for folks, but I think it is good marketing that gets them there in the first place (especially when PD is disappearing for survival…). In fact, I think we need to separate our thinking about (A) how do we make a good experience for participants and (B) how do we get people to register. When resources are rich (A) is probably sufficient for (B), but when resources are scarce, (A) doesn’t cut it alone.

So when I think about working on (B) in our Institute, I think #s 3 & 4 are just good, standard marketing, and probably not the factors that greatly impacted our enrollment. I think 1 and 2 are the biggies.

Granted, we were pretty lucky that we moved when we did and had a history with not just Apple but our Apple contacts (Jim Moulton & Tara Maker), and we were lucky that both our former and current Superintendents’ styles were bold and audacious.

So the question is, do you have to wait until fortuitous circumstances provide you with the right stuff THEY need (right place, right time) and bold/audacious partners, or can these be engineered? And, of course, I believe they can be engineered. That doesn’t mean it will be easy, just that with clever and different thinking it can be done. I’m thinking about these issues again, now, as we begin to plan our second iPad conference.

So I recommend (for my own team’s work and for my colleagues working to organize other opportunities), first, focusing on what it is that you have that they want. I think this kind of thinking requires two things: first, that you suspend thinking about what you want for them and instead think empathetically from their perspective, and second, that you reengineer what you want for them into the thing they want. That’s not to say that “what you want for them” is off base. It just means that it isn’t all that helpful to marketing…

Next, focus on being bold and audacious.

And remember: marketing isn’t sharing information. Marketing is making them want what you are offering.

It’s Your Turn:

What about a conference or institute would make you (or your principal, or your district) be willing to have you register and attend when funding is tight?

What I Wish The Union Had Said

Maine has had two sets of educational announcements in the last month. One was for the Commissioner’s plan, focused on customized learning and a performance-based diploma. The more recent came jointly from the Governor and the Commissioner, and focused on four proposed pieces of legislation: allowing public funding to be used toward (certified) private religious schools, school choice, teacher evaluation and accountability, and a greater focus on career and technical education.

Chris Galgay (president) and Rob Walker (executive director) from the Maine Education Association were at both announcements. News stories focused not just on the announcements, but on how the teachers union was critical of them.

Nationally, teacher unions have developed quite the reputation of blocking any kind of educational advancement and have become the villain in tales of attempts to improve education for all students.

I have mixed feelings about teacher unions. I think unions should be the defenders of the profession: negotiating contracts favorable to their membership, ensuring good working conditions, and monitoring evaluation procedures so they are fair and reasonable.

But the reputation teacher unions have is not for defending the profession, but for defending the least professional teachers, protecting mediocre performance, and preserving the right of teachers to do as they wish, rather than what needs to be done.

I suspect that this reputation is somewhat undeserved, but I also know I have personally experienced actions that reinforce it. At a special-purpose, project-based learning school I was part of creating, several teachers in the union told us they wouldn’t implement the educational program because the union told them they didn’t have to. When we had a workshop day, bought all the teachers lunch (which we did as a nice gesture), and told them what time lunch would be served, several teachers came to us and told us that we couldn’t require them to come to lunch because it violated union rules. (Who had required anyone to do anything? We had just done something nice…)

And I worry that the dour expressions of the MEA leadership and the news reports of their critical and negative message are reinforcing that image as well. And yet, my wife is involved in some very progressive projects of the MEA that demonstrate a very different kind of defending and preserving the profession…

Here is what I wish I had heard from the union:

The MEA and our membership are working hard to ensure that every zip code has a great school, so families indeed choose their local school. – I heard the union say they didn’t like school choice because it would take resources away from local schools. A long time ago, in the early 90’s when charter schools were first proposed in Maine, a teacher told me he was against charters because the public schools would just be left with the least desirable students. Yet, these issues would only come to pass if the local schools were schools people wouldn’t choose. Is the MEA defending undesirable teaching, educational programs, and schools rather than promoting its own vision for creating “Great Public Schools for Every Maine Student”?

The MEA and our membership are working to propose a teacher accountability and evaluation system that is fair to teachers, uses multiple measures, and is based on best practice. – In the past, I have heard the union say that they are against teachers being evaluated based on the performance of their students. This sounds too much to me like the union doesn’t believe that workers (including professionals) should be expected to be effective in their jobs. I fail to see how this helps defend and protect the profession. It also seems rather counter to their own efforts. The Instruction and Professional Development Committee has been working for a while on adapting an evaluation system based on the MTA Teacher Evaluation program, endorsed by Charlotte Danielson and Linda Darling-Hammond, and connected to the InTASC Model Core Teaching Standards. The MEA’s own position statement on teacher evaluation reads, “MEA wants a meaningful, high quality evaluation process that is based upon sound pedagogical criteria and multiple evaluation tools. It is in the best interests of students, programs, and career educators.”

Teachers ought to be given the training, support, and resources needed in order to do the job they are being asked to do. – With these new announcements, I heard Chris Galgay say that the MEA is against ineffective teachers being placed back on probation. Again, is the union really claiming that if you aren’t good at your job there should be no consequences? How does this give the message that teachers are professionals? On the other hand, it is a travesty when a teacher who needs help getting better at their job is not offered that assistance. Every teacher deserves professional development, coaching, and support, especially in this day of educational change. And it is right and proper for the MEA to be the organization that champions this on behalf of teachers.

In all fairness, I’m responding to what was reported on the news.  MEA leadership might have said these exact things and the reporters chose not to include them in their reports.

But I still believe that an organization would earn more power and cred by taking on the issues of the day and being the one proposing quality solutions, rather than appearing to defend the least common denominator and waiting around for others to propose solutions so it can publicly denounce them. The MEA is doing some very progressive work, ensuring that teaching is a quality profession contributing to the changing educational landscape. But at the same time, it gets the most press when it simply criticizes others’ work to improve education.

Or perhaps the MEA is simply caught between the days of the old unions that defended their workers no matter what, and the new unions that are trying to create a quality profession…