Category Archives: Branding and Buzz

Social Media for School Leaders

I just returned from the national middle school conference (AMLE12) in Portland, OR.

While there, I attended a wonderful session on Social Media for School Leaders by Howard Johnston and Ron Williamson. Their presentation struck a thoughtful balance between the realities of today's viral communication and the school context.

The presentation addressed the role of social media in five areas:

  1. Social Media and Schools
  2. School Safety and Crisis Management
  3. Communication
  4. Productivity
  5. Professional Growth

They made clear how important a tool social media is for schools and school leaders, and what an enormous opportunity is lost when schools shun it. They raised the following questions to suggest why school leaders might want to pay attention to the potential of social media:

  • Do you communicate with students, families and staff?
  • Do you monitor community views about your school?
  • Do your kids use social media?
  • Do you need to stay on top of cutting-edge educational topics?
  • Do you need to promote good news about your school in the community?

And they recommended a 5-step plan (based in part on findings from the Pew Internet and American Life Project) related to social media and school safety:

  1. Learn about social media and how it works
  2. Recognize that most teens use it responsibly
  3. Don’t attempt to ban it
  4. Help students, families and staff learn how to manage social media
  5. Focus on responsible student use

Johnston and Williamson provided a great list of resources available to school leaders.

Introduction to Twitter for Educators: 12 Resources & Strategies

The irony is that at the same time my district has banned Facebook and has a team working on a social media policy, our administrators are learning how to use Twitter for both Branding and Buzz and their own professional development.

(Well, maybe it isn’t irony. I think it is exactly the Yin and Yang of social media that has schools confused about what to do with it. On one hand, social media seems to lead to distraction and bullying. On the other hand, it is a powerful marketing tool and a tool for building a professional learning community.)

Auburn’s administrative team will soon get a brief Introduction to Twitter inservice. These are the resources I will be sharing with them.

What other resources would you share? (Please add your suggestions in the comments.)

Getting Started with Twitter:

Leveraging Twitter as Your Professional Learning Community:

Leveraging Twitter for Building Branding and Buzz Around Your School:

Leveraging Twitter for Teaching and Learning:

Is Middle School All About Grade Configuration?

There is a new study out that concludes students take an academic plunge when they attend a 6-8 school rather than a K-8 school. The article is called The Middle Level Plunge.

At first glance, it seems to be a reasonably well-designed study comparing student performance in a 6-8 school to that in a K-8 school (the old grade configuration dilemma!). Their fallacy is in essentially equating the 6-8 grade configuration with “middle schooling”; they actually say, “Our results cast serious doubt on the wisdom of the middle-school experiment that has become such a prominent feature of American education.”

Here is the response that I posted as a comment on their article:

Thanks for adding to the research on the impact of school grade configuration. I especially appreciate that you didn’t just study the grade configurations, but also tried to control for various explanations, including teacher experience, school characteristics, and educational practices. You have defined each of these clearly in your article.

I am concerned, however, with your using the term “middle school” to mean the 6-8 configuration schools. You are clear that this is your definition in the article, but in middle level education circles, the term means something very different, and I fear your conclusions about 6-8 grade configuration will be misinterpreted as conclusions about middle school practices. Readers should be able to make their own distinctions, especially when the writing is clear, as your article is, but you and I both know that in our “sound bite lives” there are too many people who will see the words “middle school” and think that your definition is the same as my definition.

For middle level educators, “middle school” is essentially a set of developmentally appropriate educational practices applied in the middle level grades (generally considered grades 5-8), without regard to the grade configuration of the school housing those grades. Readers may find helpful the numerous resources available on the Association for Middle Level Education website (http://www.amle.org).

Further, the school characteristics and educational practices you examine are not those that define middle school practices. I would have looked for the characteristics defined in AMLE’s This We Believe (http://tinyurl.com/865xggv), or the Turning Points 2000 recommendations (http://www.turningpts.org/principle.htm).

Again, I am not criticizing your study or the clarity of your writing, but simply sharing the unfortunate possibility of confusion for school decision makers trying to make informed (especially research informed) decisions based on your article and the use of the term “middle school.”

Perhaps I could invite you to refer to the schools in your study as “6-8 schools” instead of “middle schools.”

So, my big objection is the study’s defining of “middle school” as a grade configuration, its seeming conclusion that “the middle school experiment has failed,” and the possibility that decision makers will interpret this as if it were our definition of middle school…

I want to be clear, though. It is right and proper for researchers to select a term, define it, and use it in their article as they define it. It is expected that the reader will read such an article closely and critically. The authors of this study have done nothing wrong. Could it have been better (clearer to a wider audience) if they had done it differently? Yes.

But it is also right and proper for a reader to add their critique (politely and professionally) to the conversation through avenues such as comments on posts.

(For those of you exploring the Lead4Change model, this is a Branding and Buzz issue. Situations like these go directly to the issue of public perception of our initiatives and what role we play in communicating our vision. It is on us to try to correct misperceptions and to work toward the integrity of models we subscribe to.)

It’s Your Turn:

Are you a middle level educator or advocate? What are your thoughts about this study? I often ask you to post your comments here, but perhaps this time, you could post your comments on their article. And maybe you’d pass the word to your circle of middle grades contacts and they could comment, too…

7 Social Media Articles to Help Your School’s Communication Impact

Schools and educational organizations are starting to realize that even though they are doing great work, they need to get that message out to their parents, communities, members, and constituents. “Branding and Buzz” is one of the “Supporting but Necessary” components of the Lead4Change Model, and encourages schools and organizations to state their case for the work they are doing, communicate with their community and beyond, tell their story, and present their evidence.

So begins my recent Bright Futures blog post on schools using social media to get their message out.

The post points readers toward the Social Media Examiner, a wonderful resource for helping organizations leverage social media. In particular, I highlighted 7 articles focused on getting the most from Facebook, Twitter, and blogging.

A lot of schools already have a Facebook page. Some are even using Twitter. Others have administrators or teachers who blog. But are they using these avenues to connect with parents, communities, and colleagues as effectively or with as much impact as they could?

I think these 7 articles can help ensure that schools do. The articles share wonderful tips from folks who are getting the most from their social media. Where can a school start?

Use the post as a jumping-off point. Do a deep dive into one of the articles, or have your staff or leadership team jigsaw a couple of them. Your school could take what it learns and decide on a couple of things to try out.

The Bright Futures Partnership did just that. We read the article 26 Tips For Writing Great Blog Posts, and decided on 5 or 6 things we were going to try (look for changes coming to the Bright Futures blog and see if you can spot which tips we put into action!). By the way, reading the article also allowed us to pat ourselves on the back for 5 or 6 things we were already doing!

It’s Your Turn:

What are your best strategies for getting your school’s or educational organization’s message out via social media?

Auburn’s iPad Research Project on the Seedlings Podcast

Seedlings is a great little podcast that, although about educational technology, is really about good teaching and learning.

So I felt honored when the Seedlings hosts invited me to return to talk about Auburn’s research on its Advantage 2014 program, best known for giving iPads to kindergartners. You can download that podcast and access related links here.

This was a follow-up to the previous podcast, where we talked about both Advantage 2014 and Projects4ME, the statewide virtual, project-based, non-traditional program where students can earn high school credit by designing and doing projects instead of taking courses.

Responding to Critiques of Auburn’s iPad Research Claims

When we announced our research results last week, Audrey Watters was one of the first to cover them. Shortly thereafter, Justin Reich wrote a very thoughtful review of our research, and a response to Audrey’s blog post, at his EdTechResearcher blog. Others, through blog comments, posts, emails, and conversations, have asserted that we (the Auburn School Department) have made claims that our data don’t warrant.

I’d like to take a moment and respond to various aspects of that idea.

But first, although it may appear that I am taking on Justin’s post, that isn’t quite true (or fair to Justin). Justin’s is the most public comment, so the easiest to point to. I actually believe his is a quite thoughtful (and largely fair) critique from a researcher’s perspective. Although I will directly address a couple of things Justin wrote, I hope he will forgive me for seeming to hold up his post as I address larger questions about the appropriateness of the claims drawn from our study.

Our Research Study vs. Published Research
Our results are initial results. A lot of people are interested in them (even the initial ones – there are not a lot of randomized controlled trials being done on iPads in education), so we decided to share what we had so far in the form of a research summary and a press release. But neither of these would be considered “published research” by a researcher (and we don’t consider them that, either – we’re just sharing what we have so far). Published research is peer reviewed and has to meet standards for the kinds of information included. We actually have more data to collect and analyze (including more analyses on the data we already have) before we’re ready to publish.

For example, Justin was right to point out that we shared no information about the scales for the ten items we measured. As a result, some differences may look much smaller than they would if viewed in proportion to their scales (because some of the scales are small), and we were not clear that it is inappropriate to compare the various measures against one another as represented on our graph (because the scales are different). In hindsight, knowing we have a mostly lay audience for our current work, perhaps we should have been more explicit about the ten scales and perhaps created a scaled chart…
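
To illustrate what such a scaled comparison would look like, here is a minimal sketch. The measure names, differences, and scale ranges below are made up for illustration, not our actual data: dividing each raw difference by the range of its scale expresses all the measures as comparable proportions.

```python
# Sketch: expressing raw score differences as a fraction of each
# measure's scale range so they can be compared fairly on one chart.
# All names, differences, and scale ranges below are hypothetical.

measures = [
    # (name, raw difference, scale minimum, scale maximum)
    ("Letter naming", 1.2, 0, 20),
    ("Rhyming", 0.8, 0, 10),
    ("Print concepts", 0.5, 0, 5),
]

for name, diff, lo, hi in measures:
    scaled = diff / (hi - lo)  # difference as a proportion of the full scale
    print(f"{name}: raw = {diff:+.1f}, scaled = {scaled:+.1%}")
```

A chart built from the scaled values, rather than the raw differences, would let readers compare the ten measures side by side without being misled by the differing scales.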

Mostly, I want my readers to know that even if I’m questioning some folks’ assertions that we’re overstating our conclusions, we are aware that there are real limitations to what we have shared to date.

Multiple Contexts for Interpreting Research Results
I have this debate with my researcher friends frequently. They say the only appropriate way to interpret research is from a researcher’s perspective. But I believe it can and should also be interpreted from a practitioner’s perspective, and that such an interpretation is not the same as a researcher’s. There is (and should be) a higher standard of review among researchers for what any results may mean. But practical implementation decisions can be made without such a high bar (and this is what makes my researcher friends mad, because they want everyone to be just like them!). This is just like how lawyers often ask you to stand much further back from the legal line than you need to. Or like a similar debate mathematicians have: if I stand some distance from my wife, then move halfway to her, then move halfway to her again, and on and on, mathematicians would say (mathematically) I will never reach her (which is true). On the other hand, we all know I would very quickly get close enough for practical purposes! 😉
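
For readers who like the halfway example in symbols, here is a small worked version (with $d$ as the starting distance and $\varepsilon$ as "close enough for practical purposes," symbols I'm introducing just for this sketch):

$$\text{distance remaining after } n \text{ halvings} \;=\; \frac{d}{2^n} \;>\; 0 \quad \text{for every finite } n,$$

$$\text{yet} \quad \frac{d}{2^n} < \varepsilon \quad \text{as soon as} \quad n > \log_2\frac{d}{\varepsilon}.$$

So if she stands $d = 16$ feet away and "close enough" is $\varepsilon = 0.1$ feet, then $\log_2(16/0.1) \approx 7.3$: eight halvings suffice for all practical purposes, even though, mathematically, I never arrive.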

Justin is very correct in his analysis of our research from a researcher’s perspective. But I believe that researchers and practitioners can, very appropriately, draw different conclusions from the findings. I also believe that both practitioners and researchers can overstate conclusions from examining the results.

I would wish (respectfully) that Justin might occasionally say in his writing, “from a researcher’s perspective…” If he lives in a researcher world, perhaps he doesn’t even notice this, or thinks it implied or redundant. But his blog is admittedly not for an audience of researchers, but rather for an audience of educators who need help making sense of research.

Reacting to a Lay Blog as a Researcher
I think Justin has a good researcher head on him and is providing a service to educators by analyzing education research and offering his critique. I’m a little concerned that some of his critique was directed at Audrey’s post rather than directly at our research summary. Audrey is not a researcher. She’s an excellent education technology journalist. I think her coverage was pretty on target. But it was based on interviews with the researchers, Damian Bebell (one of the leading researchers on 1to1 learning with technology), Sue Dorris, and me, not on a researcher’s review of our published findings. At one point, Justin suggests that Audrey is responding to a graph in our research summary (as if she were a researcher). I would suggest she is responding to conversations with Damian, Sue, and me (as if she were a journalist). It is a major fallacy to think everyone should be a researcher, or think and analyze like one (just as it is a fallacy that we should all think or act from any one perspective, including as teachers, parents, etc.). And it is important to consider an individual’s context in how we respond to them. Different contexts warrant different kinds of responses and reactions.

Was It the iPads or Was It Our Initiative?
Folks, including Audrey, asked how we knew what portion of our results came from the iPads and what portion from the professional development, etc. Our response is that it is all these things together. The lesson we learned from MLTI (the Maine Learning Technology Initiative, Maine’s statewide learning-with-laptops initiative, successfully implemented for more than a decade) is that these initiatives are not about a device, but about a systemic learning initiative with many moving parts. We have been using the Lead4Change model to help ensure we are taking a systemic approach and attending to the various parts and components.

That said, Justin is correct to point out that, from a research (and statistical) perspective, our study examined the impact of the iPad alone (one group of students had iPads, the other did not).

But for practitioners, especially those who might want to duplicate our initiative and/or our study, it is important to note that, operationally, our study examined the impact of the iPads as we implemented them, which is to say systemically, including professional development and other components (Lead4Change being one way to approach an initiative systemically).

It is not unreasonable to expect that a district that simply handed out iPads would have a hard time duplicating our results. So although, statistically, it is just the iPads, in practice it is the iPads as we implemented them, as a systemic initiative.

Statistical Significance and the Issue of “No Difference” in 9 of the 10 Tests
The concept of “proof” is almost nonexistent in the research world. The only way you could prove something would be to test every possible person who might be impacted, in every situation. Instead, researchers have rules for selecting some subset of the entire population, rules for collecting data, and rules for running statistical analyses on those data. Part of why these rules exist is that, when you are really only examining a small subset of your population, you want to control for the possibility that pure chance got you your results.

That’s where “statistical significance” comes in. This is the point at which researchers say, “We are now confident that these results can be explained by the intervention alone, and we are not worried about the impact of chance.” Therefore, researchers have little confidence in results that do not show statistical significance.
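
To make “statistical significance” concrete, here is a minimal sketch of the kind of test involved. The scores and the conventional 0.05 threshold below are hypothetical illustrations, not Auburn’s data or the study’s actual analysis:

```python
# Sketch of a two-sample significance test with made-up data.
# The scores below are invented for illustration; they are not
# Auburn's data, and this is not the study's actual analysis.
from scipy import stats

ipad_group = [14, 16, 15, 18, 17, 15, 16, 19]     # hypothetical scores
control_group = [13, 15, 14, 16, 14, 15, 13, 16]  # hypothetical scores

result = stats.ttest_ind(ipad_group, control_group)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

# By convention, p < 0.05 means researchers are comfortable ruling out
# chance; p >= 0.05 means "inconclusive," which, as argued below, is
# not the same as "no effect."
if result.pvalue < 0.05:
    print("Statistically significant: chance is an unlikely explanation.")
else:
    print("Not significant: the difference could be the intervention or chance.")
```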

Justin is right to say, from a researcher’s perspective, that a researcher should treat the 9 measures that were not statistically significant as if there were no difference in the results.

But that slightly overstates the case to the rest of the world, who are not researchers. For the rest of us, the one thing that is accurate to say about those 9 measures is that the results could be explained either by the intervention or by chance. It is not accurate for someone to conclude (and this is not what Justin wrote) that there is no positive impact from our program, or that there is no evidence the program works. It is accurate to say we are unsure of the role chance played in those results.

This comes back to the idea about how researchers and practitioners can and should view data analyses differently. When noticing that the nine measures trended positive, the researcher should warn, “inconclusive!”

It is not on a practitioner, however, to make all decisions based solely on whether data are conclusive. If that were true, there would be no innovation (because there is never conclusive evidence that a new idea works before someone tries it). A practitioner should look at this from the perspective of making informed decisions, not seeking conclusive proof. “Inconclusive” is very different from “you shouldn’t do it.” For a practitioner, the fact that all measures trended positive is itself information to consider, side by side with whether those trends are conclusive.

“This research does not show sufficient impact of the initiative” is as overstated from a statistical perspective as “We have proof this works” is from a decision-maker’s perspective.

We don’t pretend to have proof our program works. What is not overstated, however, and what Auburn has stated since we shared our findings, are the following appropriate conclusions from our study: researchers should conclude that we need more research, but the community should conclude that we have shown modest positive evidence of iPads extending our teachers’ impact on students’ literacy development, and should take this as suggesting we are good to continue our program, including into 1st grade.

We also think the results suggest that other districts should consider implementing their own thoughtfully designed iPads-for-learning initiatives.

More News on Auburn’s iPad Research Results

The other day, I blogged about our Phase 1 research results on the impact of Advantage 2014, our literacy and math initiative that includes 1to1 iPads in kindergarten. Now the press and blogosphere are starting to report on it, too.

Auburn’s press release, the research summary, and slides from the School Committee presentation are here.

It’s Your Turn:

Have you found press about this elsewhere? Please share!