Wednesday, October 16, 2013

O Sole Mio

Much is made of the value of collaboration in online learning. Harasim (2005) categorizes collaborative learning as "the most powerful principle of online course design and delivery" (cited in Palloff & Pratt, 2007, p. 157); Draves (2002) calls it the "heart and soul of an online course." And Certo, Cauley, and Chafin (2002); Watson and Battistich (2006); Oosterhof, Conrad, and Ely (2008); and Frost (2013) are among the many who place collaboration squarely at the heart of online learning communities.

But some people prefer to fly solo. Some learners find collaboration difficult (even stressful) and don't believe a group project allows for a valid assessment of individual effort.

Here's my question: If the learning objectives are not dependent on a collaborative effort, should learners be allowed to opt out of group projects? What (if any) is the potential harm to the learner and the learning community? What (if any) are the potential benefits?

In your response, include at least one potential disadvantage and one potential benefit for the learner, and one of each for the learning community. Be sure to cite resources in your response.

You may download the rubric for this discussion here.

Sally

References

Certo, J., Cauley, K. M., & Chafin, C. (2002, April). Students' perspectives on their high school experience. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Draves, W. (2002). Teaching online (2nd ed.). River Falls, WI: LERN Books.

Frost, S. (2013). The advantages of working in groups in the workplace. Retrieved from http://smallbusiness.chron.com/advantages-working-groups-workplace-10711.html

Oosterhof, A., Conrad, R.-M., & Ely, D. P. (2008). Assessing learners online. Upper Saddle River, NJ: Pearson.

Palloff, R., & Pratt, K. (2007). Building online communities: Effective strategies for the virtual classroom. San Francisco, CA: Jossey-Bass.

Watson, M., & Battistich, V. (2006). Building and sustaining caring communities. In C. M. Evertson & C. S. Weinstein (Eds.), Handbook of classroom management: Research, practice, and contemporary issues (pp. 253-279). Mahwah, NJ: Erlbaum.

Wednesday, September 18, 2013

Cheating and the Online Environment

Cheating is an ongoing concern and a reality in both face-to-face and online settings (Oosterhof, Conrad, & Ely, 2008). If it is more common online, there are at least two possible explanations: one, the physical separation inherent in online learning facilitates cheating among those who are so inclined (Rowe, 2004); and two, online instructors may assess students more frequently in attempts to validate student performance (Oosterhof, Conrad, & Ely, 2008), and more frequent assessment means more frequent occasions to cheat.

The nature of cheating has changed from copying answers from a student at the next desk to deliberately “crashing” a timed test to gain more time, hacking into instructor accounts to preview assessment questions, and even changing grades in online student records (Cizek, 2001). Our society's reliance on test scores has risen dramatically over the last decade, driven at least in part by technologies that enable quick, cost-effective administration and scoring of broad-scale assessments, and with that has come a disturbing trend of educators cheating and enabling or encouraging students to cheat (Cizek, 2001). In this climate, cheating (for some) has become a form of political or social protest: “Generally, there appears to be a growing indifference on the part of educators toward the behavior and even an increasing sense that cheating is a justifiable response to externally-mandated tests” (Cizek, 2001, p. 17).

The steps to be taken to minimize cheating in an online environment depend on the nature of and motivation for the cheating. For the purposes of this discussion, I offer three “global” strategies: creativity, judgments, and software.
 
Creativity in instructional design can reduce the frequency of cheating. For example, it is more difficult to cheat on a group project, performance assessment, or essay test than on a multiple-choice quiz (Rowe, 2004). Designers can also “build in” security with frequent opportunities for assessment throughout the course or training. This can be particularly helpful in online learning, where instructors don't have the benefits of face-to-face contact for assessing their students.

Humans are hard-wired to make judgments. Online instructors need to consider what they know about individual students (including their demonstrated abilities and the quality of their work) and group assessment norms and trends, and make judgments about whether cheating may have occurred. Also, instructors should consider the amount and type of work they assign. Some students report that they cheat because the workload is too heavy or the assignments are boring or meaningless (Stephans & Wangaard, 2001).

Software exists that can help increase testing security and minimize cheating. Assessment Systems Corporation is one company (I'm sure there are others) that offers test authoring, hosting, and psychometric services.

Should the definition of cheating evolve along with the tools we use to produce work in an online environment? A week ago, I would have answered, “No. Cheating is cheating is cheating and it's wrong.” But Maher (2008) has changed my thinking. “If a student is going to talk with a bunch of other students and network with them to exchange information to produce a paper, isn't that a skill that we want them to take to the workplace? If I can find someone who is working in advertising and who knows how to push a product, and they can collect information from other sources and borrow and steal and put it together and reshape it, isn't that a skill that I want them to have?” (Maher, 2008).

So perhaps the definition of cheating should evolve to fit with current notions of “work” and “learning.” But where, then, is the line between “collaboration” and “copying”? Maher has this to say: “... say that you're going to do something else that you can look at other people's projects, but the way I assess what you're doing is going to take into account that you're going to look at what other people are doing. Your work still has to be original, but to get inspiration from other people and to craft your work in response to theirs or alongside theirs is not something that's necessarily a problem.” I love that idea on its own, and I love it even more when I overlay Cizek (2001): “From the broadest perspective, it may be useful to entirely reconceptualize testing so that successful test performance can be more consistently and directly linked to student effort and effective instruction, and so that unsuccessful performance is accompanied by sufficient diagnostic information about students’ strengths and weaknesses” (p. 10).

Sally

References

Cizek, G. J. (2001, April). An overview of issues concerning cheating on large-scale tests. Paper presented at the annual meeting of the National Council on Measurement in Education, Seattle, WA.

Maher, S. (2008). Interviews. Retrieved from http://www.pbs.org/wgbh/pages/frontline/kidsonline/interviews/maher.html#5

Oosterhof, A., Conrad, R.-M., & Ely, D. P. (2008). Assessing learners online. Upper Saddle River, NJ: Pearson.

Rowe, N. (2004). Cheating in online student assessment: Beyond plagiarism. Online Journal of Distance Learning Administration, 7(2). Retrieved from http://www.westga.edu/~distance/ojdla/summer72/rowe72.html

Stephans, J. M., & Wangaard, D. B. (2001). Teaching for integrity: Steps to prevent cheating in your classroom. Retrieved from http://www.ethicsed.org/programs/integrity-works/pdf/teachingforintegrity.pdf

Tuesday, August 6, 2013

Technology and Multimedia in Online Learning

Introduce a technology to an online learning experience, and you tee up the question, “What impact does this technology have on the online learner/learner group/facilitator?” Obvious answers are that technology:
  • Enables students and facilitators to communicate conveniently across geographic boundaries
  • Facilitates the development of technical skills
  • Offers opportunities to enrich learning with interactive multimedia design elements
But do technology and multimedia actually change the value proposition of online learning? They can; whether or not they do is a function of instructional design and facilitation.
 
“The greatest challenge in assessing an online engaged activity is determining the quality of thought expressed” (Conrad & Donaldson, 2011, p. 29). This is a critical need, and technology can help here. The Discussion Analysis Tool, also known as ForumManager (Jeong, 2003), “evaluates patterns in online interactions” (Conrad & Donaldson, 2011, p. 29). ForumManager analyzes the quantity and depth of discussion entries per participant. That's one tool. Instructional design is another. Aligning discussion board activities with Bloom's Taxonomy encourages critical thinking that can improve learner success and satisfaction.
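I haven't used ForumManager, so I won't speak to its actual output, but the kind of analysis Conrad and Donaldson describe (quantity and depth of entries per participant) is easy to picture in code. Here is a toy sketch in Python, with entirely hypothetical posts, names, and output format:

```python
# Toy sketch of discussion-forum analysis in the spirit of tools like
# ForumManager; the data, names, and output format here are hypothetical.
from collections import defaultdict

# Each post: (author, reply depth in the thread); depth 0 = a new thread.
posts = [
    ("Ana", 0), ("Ben", 1), ("Ana", 2), ("Cam", 1), ("Ben", 3),
]

counts = defaultdict(int)     # how many entries each participant made
max_depth = defaultdict(int)  # deepest reply level each participant reached

for author, depth in posts:
    counts[author] += 1
    max_depth[author] = max(max_depth[author], depth)

for author in sorted(counts):
    print(f"{author}: {counts[author]} posts, deepest reply at level {max_depth[author]}")
```

Even something this crude makes it obvious who is posting one-off surface replies and who is sustaining deep threads.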

As I (and others) have said before, the primary consideration in implementing technology for online learning is whether or not the technology supports achievement of learning objectives. Cool, whiz-bang multimedia that does nothing to enrich learning is merely a fancy and costly distraction. Technology that requires too much of learners (either because the tech is too time-consuming or user-unfriendly, or because the learner group lacks requisite experience with the technology) undermines the very learning process it is supposed to support.

The craft of instructional design is as fluid as technology itself, and will continue to evolve in response to learner need and emerging technology. Our charge is to make sure our decisions are driven by learner need, not by our interest in technology.


References
Conrad, R., & Donaldson, J. (2011). Engaging the online learner: Activities and resources for creative instruction. San Francisco, CA: Jossey-Bass.

Jeong, A. (2003). Sequential analysis of group interaction and critical thinking in online threaded discussions. American Journal of Distance Education, 17(1), 25-43.

Wednesday, July 24, 2013

Gaming to Increase Your Training ROI

Gaming is in its heyday, fueled by a confluence of accessible technology and the business case for learner engagement. If you've heard it once, you've heard it a thousand times: fully engaged learners synthesize content, make deeper connections, retain learning, and improve their performance better than those who are not engaged. And in both face-to-face and online training spaces, well-designed games and activities are key to increasing learner engagement because they give learners opportunities to interact with core content and peer groups, and that interaction is essential for learning to occur.

Whether you are working with a freelance instructional designer or designing training games in-house, this is the time to increase your training ROI with games and activities that: 
  • Align with learning objectives, learner characteristics, and expectations
  • Are rooted in course content
  • Offer opportunities for learners to practice core skills, get feedback, and improve their skills over time
  • Are manageable within the learning context (with regard to time, resource, and skill limitations)
  • Promote learner satisfaction and enjoyment

Like much of business, training is migrating online, and "e-learning" doesn't even come close to describing the range of training possibility that exists today. Instructional designers are dropping sales representatives into simulated realities to develop their skills in realistic scenarios; delivering just-in-time messaging to help them effectively manage customer objections; and giving employees exciting opportunities to develop essential skills in collaboration, communication, time management, and problem-solving with activities accessible from a variety of mobile devices. 
 
Training initiatives are vital and expensive. A qualified instructional designer understands the relationship between gaming and learning and can design training that ignites your trainees and gives you (and them) a solid return on your investment.

Sally

References
 
Conrad, R., & Donaldson, J. A. (2011). Engaging the online learner: Activities and resources for creative instruction (Updated ed.). San Francisco, CA: Jossey-Bass.

Schreiner, E. (2013). What are the benefits of games in education & learning activities? Retrieved from http://www.ehow.com/list_6158842_benefits-games-education-learning-activities_.html

Shank, P. (2006). Activities aren't optional. Online Classroom, 4-5. Retrieved from the Walden Library using the Education Research Complete database.

Friday, July 19, 2013

Setting up an Online Learning Experience

Starting well is essential for a successful online learning experience, and it doesn’t happen automatically.  A skilled facilitator spends the time necessary to achieve social, cognitive, and teaching presence from the outset (Boettcher & Conrad, 2010) and continues to actively support learners throughout the course according to their individual needs and learning styles.

Having a comprehensive understanding of available technology enables the facilitator to select the best technology for a particular application and to effectively coach learners who have not mastered the technology. Without technological clarity, a facilitator may overload herself and the learners with unnecessary tasks or technology that interferes with learning (Boettcher & Conrad, 2010).

It is essential to clearly communicate expectations to learners. Most adult learners are goal-oriented, and mastery of skills boosts their confidence and improves their self-esteem (Malamed, 2013). Learners need to understand expectations so they can take steps to meet them and feel satisfied with their online learning experience (Boettcher & Conrad, 2010).

When setting up an online learning experience, facilitators should provide an opportunity for learners to share biographical information. This fosters openness and community, and also provides a frame of reference for the facilitator when interacting with learners in discussion forums. When a facilitator uses student names and references relevant biographical information, this personalizes the learning experience and promotes learner engagement (Laureate Education, Inc., n.d.).

Conversely, facilitators need to be aware that some learners are reluctant to share personal information for fear of being stereotyped (e.g., by race, gender, ethnicity, geography, or relationship status) (Laureate Education, Inc., n.d.). Personal information should be invited, not required. As well, when designing ice breakers and discussion prompts, facilitators should allow learners to choose from activities with varying degrees of openness.

Great beginnings lead to great middles and great endings!

Sally

References

Boettcher, J. V., & Conrad, R. (2010). The online teaching survival guide: Simple and practical pedagogical tips. San Francisco, CA: Jossey-Bass.

Laureate Education, Inc. (Producer). (n.d.). Launching the online learning experience [DVD]. Baltimore, MD: Author.

Malamed, C. (2013). Characteristics of adult learners. Retrieved from http://theelearningcoach.com/learning/characteristics-of-adult-learners

Wednesday, July 3, 2013

Online Learning Communities

How do online learning communities significantly impact both student learning and satisfaction within online courses? Compared to traditional education, online learning communities offer more opportunities for learner-to-learner engagement (Laureate Education, Inc., n.d.). Through online learning communities, learners exchange ideas and information with more, and more far-flung, peers than is possible in traditional educational settings. This aggregation of multiple intelligences (Gardner, 2003) exposes learners to different learning styles, which can increase student satisfaction and both real and perceived learning (Gilbert & Han, 2002).

What are the essential elements of online community building? The goal of an online learning community is “a sense of co-created knowledge and meaning” (Laureate Education, Inc., n.d.). Essential elements are those that support that goal, including:
• Orientation to online learning and to the course
• Navigation that is intuitive and clear for learners with varying degrees of technical ability
• Presence of the facilitator or instructor early and often
• Invitation to students to post a bio of themselves
• Feedback that is timely, specific, and supportive (Laureate Education, Inc., n.d.).

How can online learning communities be sustained? The long-term health of an online learning community depends on robust and ongoing co-creation of learning experiences in which learners and facilitators participate and evolve to the fullness of their abilities (Laureate Education, Inc., n.d.).

What is the relationship between community building and effective online instruction? Having been a member of an online learning community for 14 months, I know that community is a prerequisite for effective online instruction. You don't have to take my word for it: “Knowledge is literally the set of connections between entities, or the adjustment of the strengths of those connections” (Downes, 2012), and “Experiences with the environment are critical to learning” (Ertmer & Newby, 1993).

What did you learn that will help you become a more effective instructor in the future? I learn most about this by example, and I will seek to replicate the successes of some of my online professors who are:
• Organized – Online learning is fast-paced and intense. Instructors with excellent time and project management skills afford their students the best opportunities to interact with information and each other and thus, make deeper connections with the course material.
• Present – Online instructors who establish social presence in the community set a tone of openness and collaboration. Instructors who are present are able to probe for understanding and challenge learners to reflect deeply on course material, so they can enrich the group discussions.
• Clear – Instructors who give clear direction enable students to deliver as expected and experience mastery of course material.

Sally Bacchetta

References
Downes, S. (2012). Connectivism and Connective Knowledge: Essays on meaning and learning networks.

Ertmer, P., & Newby, T. (1993). Behaviorism, cognitivism, constructivism: Comparing critical features from an instructional design perspective. Performance Improvement Quarterly, 6(4), 50-72.

Gardner, H. (2003, April 21). Multiple intelligences after 20 years. Paper presented to the American Educational Research Association, Chicago, IL. Retrieved from http://www.pz.harvard.edu/Pls/HG_MI_after_20_years.pdf.

Gilbert, J.E. & Han, C.Y. (2002). Arthur: A personalized instructional system. Journal of Network and Computing Applications, 22(3), 149-160.

Laureate Education, Inc. (Producer). (n.d.). Online learning communities [DVD]. Baltimore, MD: Author.

Monday, July 1, 2013

Continuing the Journey

Today I move into the next leg of my journey toward a Master's degree in Instructional Design with a course called Online Instructional Strategies. I'm glad you've come along. I'm looking forward to more growing and learning together. Welcome!

Sally Bacchetta

Insights on Program Evaluation

Having just completed a Program Evaluation course at Walden University, I reflect here on the experience.

I didn’t realize that the process of creating and presenting my program evaluation plan was central to my learning until it was done. It was in pulling the disparate pieces of the plan together into a cohesive whole that I found meaning. Only then did I realize that had I not thoroughly explored the program context; not stretched myself to identify primary and secondary stakeholders and their interests, needs, and biases; not named my values and committed myself to them; not reflected on my own biases; not considered the impact of my report, my evaluation plan, which may have held together well enough to bring me to the last week of the course, would ultimately have failed the final analysis.

Program evaluation can inform what needs to change and form the basis of a change management plan, but first, it must fit the program context. It is incumbent on the evaluator to consider how contextual factors may inform the selection of an evaluation model. Program evaluation is inherently a political process, and an evaluator who ignores, avoids, or mismanages the political realities of evaluation limits the effectiveness and usefulness of the process (Fitzpatrick, Sanders, & Worthen, 2010).

This experience has shown me that working with stakeholders is one of the most challenging aspects of program evaluation. The important work of planning a program evaluation can be upset by stakeholder conflict, politics, bias, and unexpected manifestations of organizational culture. And yet, as an evaluator, I have a professional obligation to find my way to promote meaningful evaluation and the application of evaluation results by stakeholders (Fitzpatrick et al., 2010).

Technology can facilitate communication with stakeholders and simplify the processes of data collection, data management, and research (Laureate Education, Inc., 2009), but it is only a tool; the evaluator must provide the raw material and craft the work. In every phase of evaluation it is incumbent on the evaluator to uphold the priority of justice (Schweigert, 2007); to mine the program context for cultural cues, gaps in understanding, potential bias, and feasibility concerns; and to demonstrate and promote respect for stakeholders and the evaluation process. Bias is the weed that pervades the evaluation process, from the evaluator’s preference for a particular approach or data collection design, to overt or covert liking of some stakeholders more than others, to finding some steps of the evaluation process more interesting, more compelling, or more exhausting than others. The presence of bias is a given. Evaluators must be frankly self-reflective about their role in the evaluation process and circumspect about client requests, so as to minimize the potential for bias and ethical compromise (Fitzpatrick et al., 2010).

At some point, situational circumstance requires evaluators to make interpretations and best guesses (Schweigert, 2007), which are subject to bias and ethical compromise. I carry with me from this course Sieber’s (1980) conclusion that “being ethical in program evaluation is a process of growth in understanding, perception, and creative problem-solving ability that respects the interests of individuals and of society” (p. 53).

Sally Bacchetta
 
References

American Evaluation Association. (2004). Guiding principles. Retrieved from www.eval.org/Publications/Guiding Principles.asp

Fetterman, D. (2001). The transformation of evaluation into a collaboration: A vision of evaluation in the 21st century. American Journal of Evaluation, 22(3), 381–384. Retrieved from the Education Research Complete database.

Fitzpatrick, J., Sanders, J., & Worthen, B. (2010). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.

Laureate Education, Inc. (Producer). (2009). Formative and summative evaluation [DVD]. United States: Author.

Schweigert, F. J. (2007). The priority of justice: A framework approach to ethics in program evaluation. Evaluation and Program Planning, 30(4), 394–399.

Sieber, J. E. (1980). Being ethical: Professional and personal decisions in program evaluation. In R. E. Perloff & E. Perloff (Eds.), Values, ethics, and standards in evaluation. New Directions for Program Evaluation, No. 7, 51-61. San Francisco, CA: Jossey-Bass.

Worthen, B. (2001). Whither evaluation? That all depends. American Journal of Evaluation, 22(3), 409–416. Retrieved from the Education Research Complete database.

Wednesday, May 15, 2013

The Political Nature of Program Evaluation

Program evaluation is a political process. An evaluator who ignores, avoids, or mismanages the political realities of evaluation limits the effectiveness and usefulness of the process (Fitzpatrick, Sanders, & Worthen, 2010). Ethical complexities wind in and among the more overt political features of evaluation such as financial support, stakeholder allegiance, and social impact. Morris and Cohn (1993) detail several ways in which stakeholders may seek to influence evaluation outcomes, and Fitzpatrick et al. (2010) caution that evaluators also need to be aware of their own potential to taint the evaluative process.

If we accept that evaluation is political (and, therefore, ripe for ethical complication), then we must ask how best to balance the objectivity required in a program evaluation with the political interests of stakeholders. We must ask, “What ethical standards and values need to be emphasized in program evaluation?”

The Program Evaluation Standards (Yarbrough, Shulha, Hopson, & Caruthers, 2011) and the American Evaluation Association’s (AEA) Guiding Principles for Evaluators (American Evaluation Association, 2004) provide a broad, somewhat obvious, framework for ethical conduct.

Fitzpatrick et al. (2010) are more specific, encouraging evaluators to be both self-reflective about their role in the evaluation process and circumspect about client requests, so as to minimize the potential for bias and ethical compromise: “…the client may be asking for what the client perceives as editing changes, but the evaluator sees as watering down the clarity or strength of the judgments made” (p. 81). And Schweigert (2007) roots evaluator responsibility in the notion of justice – public, procedural, and distributive.

From this we can extract answers to the question “What ethical standards and values need to be emphasized in program evaluation?” 

Ethical standards:
  • Those detailed in the AEA’s and other professionally recognized codes of conduct.

Values:
  • Commitment to truth – what Schweigert (2007) calls the priority of justice
  • Cultural sensitivity
  • Respect (for stakeholders, ourselves, and the evaluation process)

It seems that no professional code or personal charter can do the whole job. No matter how pointed the professional standards, situational circumstance requires evaluators to make interpretations and best guesses (Schweigert, 2007), which are subject to bias and ethical compromise. Weiss (2006) lays bare any illusions we may have that we are above or beyond the snare of bias and ethical confusion: “You never start from scratch. We pick up the ideas that are congenial to our own perspective. Therefore, people pick up this thought or that interpretation of a research report that fits with what they know or what they want to do” (p. 480).

I have thought about this a lot over the past few days, returning again and again to Sieber’s (1980) conclusion that “being ethical in program evaluation is a process of growth in understanding, perception, and creative problem-solving ability that respects the interests of individuals and of society” (p. 53). 


References
 
American Evaluation Association. (2004). Guiding principles. Retrieved from www.eval.org/Publications/Guiding Principles.asp

Fitzpatrick, J., Sanders, J., & Worthen, B. (2010). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.

Morris, M., & Cohn, R. (1993). Program evaluators and ethical challenges: A national survey. Evaluation Review, 17, 621-642.

Schweigert, F. J. (2007). The priority of justice: A framework approach to ethics in program evaluation. Evaluation and Program Planning, 30(4), 394–399.

Sieber, J. E. (1980). Being ethical: Professional and personal decisions in program evaluation. In R. E. Perloff & E. Perloff (Eds.), Values, ethics, and standards in evaluation. New Directions for Program Evaluation, No. 7, 51-61. San Francisco, CA: Jossey-Bass.

Weiss, C. H., & Mark, M. M. (2006). The oral history of evaluation Part IV: The professional evolution of Carol Weiss. American Journal of Evaluation, 27(4), 474-483.

Thursday, May 2, 2013

Fostering Behavior Change

In Fostering Behavior Change (Tulgan, 2013), Bruce Tulgan offers seven best practices for creating training that increases knowledge uptake and meaningful behavior change. Two-thirds of my way through a Master’s degree in Instructional Design and Technology, I first thought Tulgan’s tips were obvious. Simplistic. After thinking about it quite a bit, I’m sure they are. Why would Tulgan, an established training expert, tell us what we already know? Because it’s true. Because he’s right.  

There is no magic to training, and all the cool whiz-bang technology in the world doesn’t change the fact that effective training is a product of sound design and delivery. Tulgan’s tips should seem obvious, because he is reminding instructional designers and trainers of what we already know, yet sometimes fail to execute. We need to leverage needs assessments to align instructional objectives with identifiable skill and knowledge gaps, link instructional content to real life, and deliver content to multiple memory centers. Sticky training offers actionable solutions and learning extensions. Finally, we need to follow up and cultivate support for ongoing learning.

We know this. We need to do it. Every time.

Make a great day,

Sally


Reference
 
Tulgan, B. (2013, January/February). Fostering behavior change. Training, 50(1), p. 9.

Sunday, April 21, 2013

The Future of Distance Learning

I was recently asked to share my thoughts on the future of distance learning, and I’m struggling to be original about the topic. It seems self-evident that the future of distance learning is expansive and inclusive and ever-so-much-more-so. Distance learning has burgeoned far beyond the realm of training for medical transcription and is ubiquitous in corporate settings, higher education, and K-12 education. But the real driver is not just that more people are learning at a distance; it’s that more people are recognizing distance education as a viable alternative to traditional F2F learning. The quality of distance learning, coupled with the broad availability of emerging technologies, has transformed distance learning from a fall-back, Plan B position to a deliberate first choice.

The growing acceptance of distance learning is fueled by a global increase in online communication. As more of us spend more time together online, engaging with more diverse groups than we ever would in person, the complications of distance matter less, and the benefits matter more, to individuals, corporations, and educational institutions (Laureate Education, Inc., 2010).

If there is a popularity ceiling for distance learning, it may be framed and fortified by distance education institutions themselves. A study by Gambescia and Paolucci (2009) found that few institutions effectively leverage their academic integrity in their promotions, relying instead on convenience and flexibility to appeal to potential students. The study didn’t reveal reasons for this, but I’ll speculate on two:

  1. Convenience and flexibility are big draws for distance learning. It’s slam dunk marketing.
  2. It’s easier to leverage innate characteristics of distance learning than it is to ensure the academic integrity of distance education.
As Gambescia and Paolucci note, “to ensure a high-level of academic fidelity and integrity for online degree programs is not simply a matter of the university transferring current academic assets to the new online degree programs—throwing it over the fence, so to speak. Transferring such academic assets to online degree programs will understandably call for changes, as the inputs and outputs of online degree program offerings by design can be quite different” (Gambescia & Paolucci, 2009).

Assuming the momentum gathering around distance learning is indeed forward momentum, distance learning institutions themselves have work to do. Enrollment – ballooning. Acceptance – growing. Perceived quality – ?

More is expected of those to whom more has been given; so it is in distance learning as in other aspects of life. Those of us who are distance learners or work in distance learning are uniquely positioned to cultivate positive impressions of distance learning. We do this best by doing well in our endeavors and demonstrating the high standards of distance education today.

Sally Bacchetta

References

Gambescia, S., & Paolucci, R. (2009). Academic fidelity and integrity as attributes of university online degree program offerings. Online Journal of Distance Learning Administration, 12(1). Retrieved from http://www.westga.edu/~distance/ojdla/spring121/gambescia121.html

Laureate Education, Inc. (Producer). (2010). The future of distance education [DVD]. Baltimore, MD: Author.

Schmidt, E., & Gallegos, A. (2001). Distance learning: Issues and concerns of distance learners. Journal of Industrial Technology, 17(3). Retrieved from http://atmae.org/jit/Articles/schmidt041801.pdf

Wednesday, April 10, 2013

Scope Creep: A Horror Story

This week brings another interesting assignment for my Project Management course. I am asked to reflect on an experience I had with scope creep and, well... see for yourself. 

Describe a project, either personal or professional, that experienced issues related to scope creep.  In a former career I managed a residential treatment program for adults with mental illnesses. I supervised the staff and residents of a ten-apartment facility, provided counseling, assisted with activities of daily living, and participated in inter-disciplinary planning and treatment for each resident. The need for mental health care far outpaced available resources, and the CEO continually scouted for new properties to acquire and convert to accommodate our long waiting list.

Such a property was found not far from my facility, and I was flattered when the CEO asked for my help in establishing the new facility. He explained that I would maintain my current duties and also be responsible for interviewing, hiring, and training the new staff. I would also supervise them until a manager was hired and trained. I had a good deal of experience and a solid team, and I was confident that I could take over the additional responsibilities without compromising either program.

What specific scope creep issues occurred?  Almost immediately my role at the new location changed from temporary manager to design consultant, construction site supervisor, accounting rep, professional cleaner, and building superintendent. With each passing day I found myself making decisions I had neither the qualifications nor the desire to make.

What interior paint colors do you want? Sally can decide.
We need to order furniture. Let’s ask Sally to do it.
What equipment do we need to set up the new office? The new kitchen? The residents’ bedrooms? Have Sally develop a list. Give Sally the corporate credit card. Let’s have Sally be there to take delivery.
There are bats in the fireplace. My invoice hasn’t been paid. We found mold in the basement. The porch foundation didn’t pass inspection. Call Sally!

How did you or other stakeholders deal with those issues at the time?  My staff stepped up and took on extra responsibilities. The contractors became progressively less motivated, less patient, and less concerned with quality. The CEO went on a 14-day cruise with his wife, and I quit exercising, dusting, and cooking decent meals; I drank too much coffee and slept very little, always with a pager by my side.

Looking back on the experience now, had you been in the position of managing the project, what could you have done to better manage these issues and control the scope of the project? What I could have done better is to actually manage the project instead of scrambling to keep up with the scope creep. I lacked project management experience, and I was so focused on the overwhelming need for more formalized mental health support that it didn’t occur to me to refuse (or question) any of the tasks that were dumped on me.

If I knew then what I know now, I would have drafted some type of work breakdown structure (Portny, Mantel, Meredith, Shafer, Sutton, & Kramer, 2008). I would have listed Level 1, 2, 3, etc., tasks and sub-tasks and identified and allocated resources for each. I would have outlined an appropriate chain-of-command and sought approval for each of these documents (Greer, 2010). And I would have said “no.”

“No” to hauling office furniture up a flight of stairs. “No” to manually seeding the acre lot using an old rusty spreader so we could save a few bucks. And absolutely, positively, “no” to checking to see if there really were bats in the fireplace. Yeah, there were. There sure were.
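If I were sketching that work breakdown structure today, I'd rough it out as a simple tree before opening any PM software. A minimal sketch in Python (the task names and resource assignments are hypothetical illustrations, not the actual project's):

```python
# Minimal sketch of a work breakdown structure (WBS) as a nested dict;
# task names and resources are hypothetical examples, not the real project.
wbs = {
    "1 Open new facility": {
        "1.1 Staff the facility": {
            "1.1.1 Interview candidates": {"resource": "program manager"},
            "1.1.2 Train new hires": {"resource": "program manager"},
        },
        "1.2 Furnish apartments": {
            "1.2.1 Order furniture": {"resource": "purchasing"},
            "1.2.2 Take delivery": {"resource": "site staff"},
        },
    },
}

def print_wbs(node, indent=0):
    """Walk the WBS tree, printing each task and its assigned resource."""
    for task, child in node.items():
        if task == "resource":
            continue  # an attribute of the parent task, not a sub-task
        resource = child.get("resource") if isinstance(child, dict) else None
        print(" " * indent + (f"{task} [{resource}]" if resource else task))
        if isinstance(child, dict):
            print_wbs(child, indent + 2)

print_wbs(wbs)
```

Even a rough tree like this would have forced the question I never asked: who, exactly, is the resource for each task?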


Resources
 
Greer, M. (2010). The project management minimalist: Just enough PM to rock your projects! (Laureate custom ed.). Baltimore: Laureate Education, Inc.

Portny, S. E., Mantel, S. J., Meredith, J. R., Shafer, S. M., Sutton, M. M., & Kramer, B. E. (2008). Project management: Planning, scheduling, and controlling projects. Hoboken, NJ: John Wiley & Sons, Inc.

Wednesday, March 27, 2013

Managing Your Project Schedule

One of the most challenging aspects of instructional design project management is managing the project schedule. The schedule must be rigid enough to provide structure, yet fluid enough to allow for the inevitable surprises of project work. One way to develop your own scheduling style is to read and learn about what works for other people, so here is my two cents.

As often as possible I work back-to-front to draft my ideal network diagram, beginning with the end date of the final deliverable and working backwards to identify prerequisite (predecessor) activities and deliverables, time requirements, and event deadlines, based on required, procedural, and logical relationships (Portny et al., 2008).
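To make the back-to-front idea concrete, here is a minimal sketch of the backward pass (the activities, durations, and due date are hypothetical illustrations of my approach, not any tool's method): starting from the final due date, each activity's latest finish is determined by the latest start of its successors.

```python
# Back-to-front scheduling sketch: given a due date for the final deliverable,
# walk backwards through predecessors to find each activity's latest start.
# Activities, durations, and dependencies are hypothetical examples.
from datetime import date, timedelta

# activity -> (duration in days, activities that depend on it finishing first)
activities = {
    "final deliverable": (2, []),
    "client review":     (3, ["final deliverable"]),
    "draft outline":     (14, ["client review"]),
}

due = date(2013, 4, 30)   # hypothetical due date of the final deliverable
latest_finish = {}        # memo of computed latest-finish dates

def finish_by(name):
    """Latest date this activity can finish with all successors still on time."""
    if name not in latest_finish:
        duration, successors = activities[name]
        if not successors:
            latest_finish[name] = due
        else:
            # Must finish by the earliest "latest start" among its successors.
            latest_finish[name] = min(
                finish_by(s) - timedelta(days=activities[s][0]) for s in successors
            )
    return latest_finish[name]

for name in activities:
    lf = finish_by(name)
    ls = lf - timedelta(days=activities[name][0])
    print(f"{name}: latest start {ls}, latest finish {lf}")
```

The same walk generalizes to a full network diagram with branching predecessors; the min() is what captures the tightest of the required, procedural, and logical dependencies.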

I then compare my ideal network diagram to what I call “probable reality,” which is a consideration of my experience with:

  • Projects in general
  • Similar projects
  • What I know about this client
  • What I know about this project team
  • My own schedule and workload

Probable reality reflects my best guess of project limitations and unknowns (Portny et al., 2008) and their anticipated impact on the project. For example, let’s assume that my project team has 21 days to submit a deliverable of a module outline with instructional objectives. I consider my experience with projects in general and similar projects, and determine that I need 14 days (span time) to complete the deliverable.

In my ideal network diagram I allow 3 days for the client to review and approve the outline (17 days). However, I know this client is slow to review and approve materials, so I build an extra 2 days into the activity phase (19 days). That leaves the team only 2 days to review client feedback, make revisions, and re-submit the outline to the client.

If I know the project team is highly organized and on point, I may take a risk and go with that. But if I don’t know the project team well, or if I know them to be slow or unorganized or very busy, I will decide that it’s not enough time and I need to cut some time somewhere else in the process.
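The arithmetic in that example is simple enough to check in a few lines, and writing it down keeps me honest. A minimal sketch (the function name, variables, and threshold are my own conventions, not from any PM text):

```python
# Sanity check of the buffer arithmetic from the example above; the
# function name, variables, and 3-day threshold are my own conventions.
def remaining_revision_days(window, work_span, review, buffer):
    """Days left for revisions after work, client review, and buffer."""
    return window - (work_span + review + buffer)

window = 21     # days until the deliverable is due
work_span = 14  # span time to produce the module outline
review = 3      # ideal client review-and-approval time
buffer = 2      # extra padding for a slow-to-review client

left = remaining_revision_days(window, work_span, review, buffer)
print(f"Days left to revise and re-submit: {left}")  # -> 2

if left < 3:    # the threshold is a judgment call, not a rule
    print("Tight: cut time somewhere else in the process.")
```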

Every activity eats up time (Laureate Education, Inc., 2012), and when time is tight, my first choice is to cut time on my end where I have the most control. I then consider my schedule and workload to find a way to complete the deliverable in less than 14 days.

Freelance work can be very patchwork, and I sometimes function as ID, writer, and PM all in one. In those situations I can track everything with a simple Word document and an Excel spreadsheet. When working with a team, however, project management software is almost essential (Fabac, 2006). It facilitates consistent communication and sharing of timelines, milestones, progress, and changes between team members and clients.

Project work is a wild ride every time. It can be stressful, but the feeling of success is a rush!

Sally Bacchetta

References

Fabac, J. N. (2006). Project management for systematic training. Advances in Developing Human Resources, 8(4), 540–547.

Laureate Education, Inc. (Producer). (2012). Creating a project schedule [Multimedia program]. Baltimore, MD: Author.

Portny, S. E., Mantel, S. J., Meredith, J. R., Shafer, S. M., Sutton, M. M., & Kramer, B. E. (2008). Project management: Planning, scheduling, and controlling projects. Hoboken, NJ: John Wiley & Sons, Inc.

Thursday, March 21, 2013

Communication

Which mode of communication do you prefer: written, audio, or video? This week I reviewed a sample communication in each of those three modes to see if the type of communication affected my interpretation of the message. Here is the communication in written form: 

“Hi Mark,
I know you have been busy and possibly in that all day meeting today, but I really need an ETA on the missing report. Because your report contains data I need to finish my report, I might miss my own deadline if I don’t get your report soon. Please let me know when you think you can get your report sent over to me, or even if you can send the data I need in a separate email.
I really appreciate your help.
Jane” (Laureate Education, Inc., 2012).

Written – When I read this I experienced Jane as feeling urgent, conscientious, and compassionate. She is clear about her request and also steps outside her own perspective to share insight into Mark’s workload and offer a compromise solution. If I got this email I would make it a priority to give Jane what she needs and thank her for her patience.

Audio – Next, I listened to an audio recording of Jane reading the same communication. Again, Jane came across as urgent, conscientious, and compassionate, although slightly less assertive.

Video – Finally, I watched a video of Jane speaking the communication. Jane seemed friendly, supportive, and quite a bit less assertive than in the email and audio. Jane did something that women often do (and it drives me crazy!), which is to make a statement sound like a question by raising her voice inflection at the end of the sentence. “So please let me know…? If you can send it over soon?” (Laureate Education, Inc., 2012). This diminishes the urgency of the situation and suggests that she is flexible about waiting for the data. “According to Deborah Tannen, we hear a downward cadence as ‘closed’ or ‘final,’ with the extreme being ‘controlling.’ Conversely, we hear an upward cadence as ‘open’ and ‘flexible’ with the extreme being ‘indecisive’” (Tannen, 2011). If I were an over-scheduled Mark, I would not likely prioritize getting Jane’s data to her.

After this exercise I reflected on the nature of communication among some of the project teams I’ve worked with. All have blended written, audio and face-to-face communication. Is one mode better than another?

I prefer written communication, and I want it to be clear and concise. (Ernest Hemingway never wasted a word, and I wish more people were that way.) Some people need to give chapter and verse of everything they say, and others like to chit chat for a minute or two before they discuss the issue at hand.

A project manager needs to be able to reach, and receive from, everyone on the team, regardless of communication style. This is easier when the PM is familiar with the team. Once I recognize someone’s style, I make an effort to communicate with them in that style and mode. It helps ensure that the message sent is actually received.



References

Laureate Education, Inc. (Producer). (2012). The art of effective communication [Multimedia program]. Baltimore, MD: Author.
 
Tannen, D. (2011). The eloquent woman blog. Retrieved from http://eloquentwoman.blogspot.com/2011/09/do-you-end-sentences-with-upward.html

Thursday, March 14, 2013

Sales Training Project Post-mortem



The project debrief, or post-mortem, is one of my favorite parts of a project because I learn as much from a negative project experience as I do from a positive one. By identifying what did and didn’t work I can be better prepared for the next project (Greer, 2010). This is my analysis of the “Rep Expo” training project.
Context
I was hired to design the instruction and develop the content for a three-day sales training workshop. Each module was to be designed as a stand-alone unit to be delivered by a face-to-face facilitator. The client provided a list of topics to be addressed, and I was given the freedom to make all design and content decisions, with draft materials to be approved by the project manager and the client. The PM described the client as “extremely picky, unavailable, and unable to articulate what he wants.” I was brought in (freelance) because two in-house individuals had failed to successfully complete the project. The project was now well behind timeline and over budget, and the client had threatened to terminate the contract “if you folks can’t turn this around quickly.” We were given 60 days to complete the project and a successful beta test.

Post-mortem
What contributed to the project’s success or failure? The positive and negative drivers of this project were closely enmeshed. What follows is my post-mortem list of the PM's actions, pro and con; on balance, the cons drove the project to failure.

 Pro: Arranged for a F2F kickoff meeting with the client to clarify client objectives, expectations, and other project details.
Con: Attended the meeting with several other internal team members and behaved unprofessionally (holding whispered personal conversations while the client was talking, openly disagreeing with the client about details of the project, frequently checking her watch, leaving to make personal phone calls, and assuring the client that we had the capabilities to develop specific learning objects, knowing full well that we did not).

Pro: Developed and maintained a comprehensive production schedule.
Con: Made no attempt to adhere to the schedule; it was completely moot.

Pro: Scheduled weekly internal meetings.
Con: Rescheduled or canceled most meetings with little advance notice; was absent from several meetings; meeting minutes routinely contained significant errors.  

Pro: Supplied me with contact information for the client-selected subject matter expert (SME).
Con: Failed to notify SME that he had been selected to consult on the project, so my initial call was a complete surprise. Failed to include SME in internal meetings and progress reports.

Pro: Provided me with a written SOW and contract for my services.
Con: Ignored the terms of payment detailed in the contract.

Pro: Assigned a team of graphic designers, animators, programmers, and editors to support my piece of the project.
Con: Failed to monitor their progress or hold them accountable to any standards, which resulted in loss of time and excessive costs.

Which parts of the PM process, if included, would have made the project more successful? Why?
  1. Project planning (Portny, Mantel, Meredith, Shafer, & Sutton, 2008). If the PM had developed a clear and feasible project plan and held team members accountable, we would have been able to meet our deadlines.
  2. Reporting on project activities (Portny et al., 2008). The PM routinely “hid” from the client and other team members, waiting several days to return time sensitive calls and emails. Delayed communication resulted in missed deadlines, incomplete revisions, an unhappy client, and frustrated team members.
  3. Managing the accomplishment of objectives, within time and budget targets (Portny et al., 2008). The PM is responsible for planning, organizing, and controlling the project and the project team (Portny et al., 2008). Had she actually managed at all, we might have been able to save the project and the client.
  4. Identifying tasks and phases necessary to complete the project (Greer, 2010). “Because projects are, by definition, temporary endeavors, it is essential to identify how each phase or collection of activities will be judged by stakeholders to be formally or officially completed” (Greer, 2010, p. 20). Although the PM identified the ultimate deliverables, she neglected the milestone deliverables along the way.
  5. Driving forward. The PM bears responsibility for tracking progress against objectives and intervening as necessary to “correct problems, remove obstacles, and keep the project moving as planned” (Greer, 2010, p. 31). This project became mired again and again; the client missed his scheduled launch date, which caused him professional embarrassment and personal stress.

Outcome
The PM created enough of a project shell that we had some successes throughout the project, most significantly that the client was very pleased with the design, content, and interactivity of the workshop. However, his dissatisfaction with the issues detailed herein prevailed, and he terminated the contract on the basis of non-performance.  

References

Greer, M. (2010). The project management minimalist: Just enough PM to rock your projects! (Laureate custom ed.). Baltimore: Laureate Education, Inc.

Portny, S. E., Mantel, S. J., Meredith, J. R., Shafer, S. M., Sutton, M. M., & Kramer, B. E. (2008). Project management: Planning, scheduling, and controlling projects. Hoboken, NJ: John Wiley & Sons, Inc.