Sunday, October 30
Technology is always, at best, two steps forward, one step back. But the more comfortable you are with the technology, the more you can take advantage of the good and mitigate the bad (although too much comfort can blind you to the bad, but that's another point).
One of the trickiest parts of producing a learning program is to navigate what I will call the lag of death. This is the difference (or delta, if I am feeling insecure) between the comfort level of learning program sponsors and learning program users.
Typically, less technologically mature people place greater value on short-term predictability, command and control, processes, certification, ease of use, evenness of distribution, and risk mitigation. More technologically mature people place greater faith in communities, engagement, richness of experience, short-term chaos leading to long-term order, uneven distribution, and personal responsibility.
Navigating this lag of death is tricky, high stakes, and critical. But it increasingly has to be done if we are to thrive.
* note: this chart is a back-of-the-napkin sketch, more thematic than specific. I look forward to enriching it based on the comments of others.
Saturday, October 29
David Grebow's recent post (Wait a minute, let me Google it ...) really got me thinking, and since I hate the fact that comments do not get seen by those receiving this blog's RSS feed, I decided to write a follow-on post rather than a comment.
Breaking News! Google is doing all kinds of interesting stuff, much of which will have an impact on learning. One of their latest endeavors is Google Base:
Google Base is Google’s database into which you can add all types of content. We’ll host your content and make it searchable online for free.
Sounds like a potential learning object repository to me. Read more about it. Couple this with their free Google Desktop (to index and search your computer and intranet files) and you may have a pretty decent knowledge management solution.
No doubt about it, Google is great, especially when it comes to connecting people to content. As a learning resource, however, it falls quite short. Fundamentally, the quality of the content is often suspect: just because something appears on a web page does not mean it is correct. And ironically, today's search engines are almost too good - there is simply too much content available now.
To me, learning is always made up of content AND collaboration. I learn more from reading a book and discussing it than from simply reading it.
Content + Collaboration = Learning
(see my post on Search sucks - where's the context?)
So you can see where I'm going with this ... Google needs to add a way to explicitly rate content (Google's PageRank implicitly ranks the popularity of a web page), and Google needs to add a way for people to comment on the web page links that result from a search query.
"Who creates it? Who maintains it?"
Easy. Those who use Google. They have the option to rate the pages and the option to leave a comment. Google has a large enough (massive) user base to make this work. Think of this as a 'people filter'.
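To make that 'people filter' idea a bit more concrete, here is a toy sketch of how explicit user ratings could be blended with a page's implicit relevance score. Everything here is my own invented assumption - the function names, the weights, the 1-5 star scale - not anything Google actually offers:

```python
# Toy "people filter": blend a search engine's implicit relevance
# score (0..1) with explicit 1-5 star user ratings. All names,
# weights, and scales are hypothetical, for illustration only.

def people_filtered_score(base_score, ratings, weight=0.5,
                          prior=3.0, prior_count=5):
    """Blend implicit relevance with explicit user ratings.

    A Bayesian-style prior (assume prior_count imaginary votes of
    'prior' stars) keeps a single 5-star vote from instantly
    outranking a well-established page.
    """
    n = len(ratings)
    avg = (sum(ratings) + prior * prior_count) / (n + prior_count)
    normalized = (avg - 1) / 4            # map 1..5 stars onto 0..1
    return (1 - weight) * base_score + weight * normalized

# A page with many good ratings edges out one with a single perfect vote.
popular = people_filtered_score(0.6, [5, 4, 5, 4, 5, 4])
lone = people_filtered_score(0.6, [5])
```

The prior is the interesting design choice: without it, one enthusiastic (or malicious) rater could dominate, which is exactly the gaming problem any public rating system would have to solve.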
This ability to connect people to people through content is, I think, critical. I often go to Amazon.com and read the book reviews because I want to know what actual people think of the content. Those with validated identities I trust more than those without. A search is not always going to give me the answer I am looking for - this approach gives me the option to tap into the collective knowledge of other users.
Content is so Web 1.0; People is Web 2.0. (grin) Google has the opportunity to start connecting people to people - let's see if they will take advantage of it.
Is this the poor man's version of Jay Cross' workflow learning? What do you think - is Google becoming the best way for rapid, informal learning?
Friday, October 28
I remember quite a few high school & university classes where class participation counted as a relatively significant part of your grade. With most integrations between virtual classrooms and LMS products it seems like launching the classroom URL is about all that is tracked.
This strikes me as the equivalent of getting credit for class participation just by walking in the door. At least some of the better integrations check that you didn't walk out (leave the meeting URL) before the end of class, or they apply a "completion" threshold of some sort (a percentage of the meeting's actual duration, or a minimum time spent in the session).
A good virtual classroom facilitator or instructor draws the participants into the content with chat, polls, simulations, and all sorts of interactions. But how is that tracked or managed? I might remember the names of a few active participants, but then again, eyewitness reports aren't always accurate. Allowing the instructor or leader to enter a value or description for participation after the fact is a start, but how about automating this too?
Could we have integrations that reported back individuals' quiz scores from virtual classes, a participation status indicated by upstream VoIP, the frequency or length of chat entries, and questions asked?
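As a back-of-the-napkin sketch of what an integration might do with those signals (the field names, weights, and caps below are all invented for illustration, not any real classroom or LMS product's API):

```python
# Hypothetical sketch: combine attendance, chat activity, questions,
# and quiz results reported by a virtual-classroom integration into
# a single 0-100 participation value. Weights are arbitrary choices.

def participation_score(session):
    """Return a participation score from tracked session data."""
    attendance = min(session["minutes_attended"] / session["class_minutes"], 1.0)
    chat = min(session["chat_entries"] / 10, 1.0)         # full credit at 10 entries
    questions = min(session["questions_asked"] / 3, 1.0)  # full credit at 3 questions
    quiz = session["quiz_score"] / 100
    return round(100 * (0.4 * attendance + 0.2 * chat +
                        0.2 * questions + 0.2 * quiz), 1)

# "Walked in the door" vs. an actually engaged participant:
walked_in = {"minutes_attended": 5, "class_minutes": 60,
             "chat_entries": 0, "questions_asked": 0, "quiz_score": 0}
engaged = {"minutes_attended": 58, "class_minutes": 60,
           "chat_entries": 12, "questions_asked": 2, "quiz_score": 85}
```

Even a crude formula like this separates the door-walkers from the participants, which is more than launch-URL tracking can do.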
Is anybody else wanting more real data from virtual classes? What data would you want and how would you foresee using these reports?
Tom King, Macromedia
Thursday, October 27
Google has become a digital extension of my memory. The older I get the more I use it. If I forget how to do something, or cannot remember a fact or name or place, I Google around for a bit and find it.
For example, this morning I was on the phone talking with a client. We started talking about a film, and neither one of us could remember the name, only that Al Pacino was in it. So I Googled "Al Pacino Filmography", and two clicks later, I 'knew' the name of the movie. Earlier in the week, I had moved all the living room furniture and, in the process, unhooked the VCR, DVD player, and TV. When I went to plug in this Medusa's head of wires, I could not for the life of me remember how the VCR fit back into the scheme of things.
Right. I Googled "Mitsubishi DVD VCR Connections" and three clicks later had the operating manual that was lost in the same place that socks go to in the washing machine.
What does this have to do with learning? Everything. It points to the fact that I do not need to know something, or know how to do something, and I can still know it and do it. My performance is acceptable. It's my memory that sucks. Google is my brain plugged into the internet, the largest repository of information created since they set fire to the Library of Alexandria. Wait a minute, who was it who set that fire ... hold on a sec ... okay, it was either Julius Caesar or Caliph Omar. Yep, Googled.
So why are we spending untold amounts of time and money on learning programs that are not necessary? They could be effectively replaced by a computer on a fast Wi-Fi connection to a knowledge repository. Does anyone assess what needs 'real learning' versus what only needs to be searched for, used, and then forgotten? This is one place where some technology, beyond the flipchart, can come to the rescue and save us from needless "training". And it would be just in time ...
Tony Carlson tells us that "We process more information in a 24-hour period, than the average person 500 years ago would come in contact with in a lifetime..." Are we part of the problem, or the solution?
Maybe I'll ask Google ... .
Tuesday, October 25
- When you talk about development time, the context is downloadable Flash-based mini-games. Flash-based mini-games, like this one, can be developed in just a few weeks, and yet they still carry critical messages and offer high interactivity.
- When you talk about manuals and online references, even FAQs, the dead elephant in the room is wikis. Fluid, up to date, and organic, they grow faster and are more accurate than most published documents. Wikipedia outgrew the Encyclopaedia Britannica.
- When you talk about knowledge management, the hidden context is blogs and podcasts.
- When you talk about interactivity in e-learning, the specter is computer games. Clicking a few buttons now and then can never compare with the total engagement of Halo 2.
What are some other dead elephants?
Monday, October 24
My first thought was using Centra, but the client always uses the telephone for voice, and I am not sure if they have the ability to save a recording.
A second option is to use the record voice option in PowerPoint.
A third is to use a tool like Audacity to record an MP3, and then have someone else align it with the slides.
Does anyone know of either a great approach, or even great instructions that someone else has prepared to take individuals through this process?
Sunday, October 23
- for whom literacy is a skill.
- using it as a means for studying values based on literacy.
- functioning in a world of prepackaged artifacts.
- active beyond the limitations of literacy, such as stretching cognitive boundaries or defining new means and methods of communication and interaction.
Writers, editors, and some educators see it as a skill: they make a living by knowing and applying the rules of correct language usage.
Others gain value by exploring the great wealth of writings, poetry, history, and philosophy.
The majority, estimated at about 75 percent of the population, view it as an artifact or service, such as the mathematics in a calculator, the writing on a greeting card, or the spelling and writing routines incorporated into word processors.
And finally, there are groups, such as artists and designers, who actively attempt to push it beyond its limits.
For instructional designers, these four branches of literacy are quite useful in that they help us to identify the type of media preferred by our target audience. In addition, they show the branch that teachers, trainers, educators, instructional designers, etc. must join if they desire to help others learn -- the artists and designers who actively attempt to push it beyond its limits. For it is only by pushing literacy to its limits that we will be able to reach the broadest group of learners possible.
While some may bemoan the decline of literacy, others look forward to the instruments that are slowly, but surely, replacing it, such as audio, visual media, and text messages. While this last medium does sound quite literate at first, it manages to break almost every rule in order to obtain velocity:
Grade schoolers are starting to get laptops and of course literacy skills, such as handwriting and spelling, start to suffer. Yet, there was life before literacy, and there will be life when it declines. Thus, maybe it is better to not look at it as "declining," but rather as being...well...replaced.
Literacy presupposes the existence of a shared symbol system that mediates information between the individual's mind and external events (see Technological Literacy Reconsidered). Thus, just as mathematicians from all over the world can share and understand formulas, savvy cellphone users can understand the above text message.
The literacy that is shaping the netcitizens of today is technological literacy -- knowledge about what technology is, how it works, what purposes it can serve, and how it can be used efficiently and effectively to achieve specific goals. It encodes and decodes messages via three dimensions:
These three dimensions closely relate to Dyrenfurth's (1991) three dimensions: "Technological literacy is a multidimensional term that necessarily includes the ability to use technology (practical dimension = knowledge), the ability to understand the issues raised by our use of technology (civic dimension = capabilities), and the appreciation for the significance of technology (cultural dimension = thinking & acting)." For more on these dimensions, see the Quicktime movie.
Knowledge acquisition took place at a slow pace during the age of literacy. With the advancement of technology, we are no longer at the mercy of language (and the literacies associated with it), as the exchange of complex data via graphics, multimedia, etc., is more appropriate to our faster-paced society. Our present knowledge economy is not driven by faster computers, but rather by human cognition embodied in experiences that support further diversification of experiences. And the more means we find to diversify our experiences, the faster our knowledge acquisition will be.
NOTES: You can find The Civilization of Illiteracy in three formats:
Dyrenfurth, M. (1991). Technological literacy synthesized. In M. J. Dyrenfurth & M. R. Kozak (Eds.), Technological literacy (40th Yearbook of the Council on Technology Teacher Education, pp. 138-183). Peoria, IL: Macmillan, McGraw-Hill.
Thursday, October 20
I hope I didn't sound too much like a Luddite when I wrote " Let’s stop building, advertising and selling systems and technologies that will provide the solution. " My intention isn't to impede progress and continued experimentation. I do believe that the various technologies many of us have been developing for years render vital services and that their impact will grow. I also believe that growth will only become significant when a few cultural changes take place within the world of learning. On the other hand, I don’t believe current conditions are yet favorable for that moment of quantum leap.
My major beef is with the hyper-commercialisation, the "advertising and selling" part rather than the "building" part. Elliot asked some years ago, "If we build it, will they come?" Given the number of items that have been built and delivered, it's probably safe today to say that the answer is "no" (thanks, Anonymous, for the summary of the CHE study). Before we build, however, we need to design. And before we design, we need to have an idea of why we are designing (other than the hope of eventually selling it to the select few because the design looks good and exploits this year's augmented processing power).
I believe – as many do -- that more will come out of the Open Source movement than from vendors of systems (who are becoming fewer and fewer, as Ben Watson reminds us). My healthy doubts about what Open Source will ultimately deliver hover around how non-commercial creativity can fare in a vehemently and violently commercial world. But that’s a philosophical and sociological problem, not an educational problem.
Flipcharts actually have evolved in various ways, but the ways of using them by creative trainers have evolved much more than the technology itself. With electronic gadgets, it’s the opposite. The people responsible for making learning happen are deprived of the means of doing anything about it. Moore’s law has taught us that every 18 months someone’s going to deliver to our doorstep (COD, of course) everything we need to solve the problems we are too backward, poor, unorganized or handicapped (in terms of technological savvy) to solve ourselves. If we don’t pay, we’re excluded from the community of “best practice”, which might more accurately be called “best purchase”. The laws of the production/consumer society trump all others. The race for innovation, which should be about creativity and solving real learning problems, is dominated by the rich and lazy, those with the biggest marketing budgets.
It’s no wonder then that trainers and learners – as the CHE study reveals – feel not so much alienated as simply excluded. Still the technology is there to be used and in fact is being used, but with little sense of purpose and, I would submit, a great deal of waste. I guess that’s the price of hype.
Wednesday, October 19
Saturday, October 15
Having said that, I love a real analogy. For example, six years ago, I found it very useful to apply experiences with ERPs and CRMs to the then emerging area of LMSs. It did provide real glimpses into the future. As a rule, cold trends actually provide more insight than hot trends.
So with all of that as a caveat, I would like to present an analogy that I hope is an example of the second, not first, category.
- If you want to see how educational simulations will be created to teach history in K-12 and undergraduate environments, look at the various Star Trek computer games.
- Star Trek is a series of many, many (far too many?) stories that cover a coherent timeline, with consistent cultures and technologies. Furthermore, Star Trek events, despite the single brand, have been created by many, many individuals and teams. There are missing links; there are contradictions. This background material is a fairly good analogy for history source material.
- Game developers have tried to capture the essence of the story in game form. Star Trek games have more often than not used existing genres (First-Person Shooter, Real-Time Strategy, now MMORPG). They have often had to go deep (space-ship battles in general) rather than broad (from beginning to end of The Wrath of Khan). Some new genres have been created (Bridge Commander, Starfleet Command). Some games have been created by modding other games (the Star Trek mod for Half-Life 2). Some are very complex, developing deep expertise; some could not be more simple. Some just apply the high-level theme to an existing game (Star Trek pinball or a Star Trek trivia game). This maps fairly well to the effort that different developers will probably go through to create historical educational sims.
- Star Trek fans are very engaged in the process. Every new Star Trek game brings heaps of criticism, people trashing the experience as not accurate enough (what about episode 212, when Captain Yeri fired six photon blasts in less than five seconds?), or not fun enough, or not broad/comprehensive enough. They then mod the experience, in some cases fixing inaccuracies, in some cases making something less accurate but more fun. This maps fairly well to the role of other historians/instructors looking at the experience.
Some Lessons learned:
- No one sim will capture the entire experience. Sims will often go deep, not broad.
- The more accurate the sim, the more frustrating it will be to play the first time.
- The less accurate the sim, the easier it is to game it, but all sims, no matter how accurate, can be gamed at some point. Multi-player games create environments where people are faster to break the illusion and try to exploit the rules.
- Debates around specifics are inevitable, but should not be used as an excuse to discount the entire experience. 100% accuracy is neither possible nor desirable.
- Huge holes in source material will be uncovered that were missed by linear thinkers but that are glaring to more dynamic content creators.
- Creating new genres is more powerful but also more risky than using old ones.
- Small simple games can be more instructive than super complex ones, especially for teaching about high level relationships.
- Sims won't replace the source material, but augment it.
- Time lines are less important than interactions.
- Communities are key.
I think the real power of educational simulations in K-12 and higher ed will come when we rethink our curriculum altogether. But for those intent on history-based educational simulations (and my hat goes off to every one of you - let me know how I can help!), I think the analogy is a good one.
Wednesday, October 12
Now, having gone through a "merger" myself (SkillSoft and SmartForce), it sounds like Blackboard is really acquiring WebCT, and it doesn't bode well for WebCT given that there is a tremendous amount of product overlap between the two companies (a similar comparison would be if SkillSoft and NETg, the leading providers in the corporate education space, merged). It also doesn't bode well for WebCT's willingness to play with open source (sorry, Harold!). If you are a WebCT employee, you may want to dust off your resume with comments like this:
The combined company expects to realize significant efficiencies by leveraging shared development infrastructure, and mitigating duplicative marketing initiatives and administrative expenditures.
This announcement comes on the heels of Saba acquiring Centra. Now this deal makes more sense, as Saba continues to evolve beyond its Learning Management System (LMS) roots (Saba recently bought THINQ) by rounding out its services offering. Personally, I like its focus on 'on-demand learning' and its Service-Oriented Architecture, as I think generic content is a tough business to be in. Plus, Saba + Centra creates a US$100M-revenue company, which is not too shabby!
And of course back in August we had WebEx buying Intranets.com and SumTotal (created out of Docent + Click2Learn) buying PathLore as I discussed in this post.
Clark - you need to update your Chart of Consolidations!
The Resulting Big Five: (forecasted annual revenue)
* Strangely, none of the investors seem to like any of these deals, as the stocks of the related companies all dropped when the news was announced.
So whom do you think will be doing a merger or acquisition next?
What do you think of these recent ones?
Having spent so much time with a broadly international crowd of people who spend very little, if any, of their time speculating about the future of technology for training has allowed me to take some distance and possibly see a few things with more focus. One of the things that strikes me is how linked e-learning culture is to certain trends in the U.S. economy, even though the implications are necessarily global. And if I mention "e-learning culture," it means that I can identify a group of people who share that culture (namely, us) in contrast to all the other groups of people who don't. Which introduces the somewhat embarrassing question of whether e-learning culture is really compatible with other cultures.
Listening to Elliott Masie correctly telling me (through an audio feed) that memory sticks will allow all sorts of things that no one could have imagined made me realize why I seriously doubt that any of what he describes will ever make an impact on learning. I feel exactly the same way about games, simulations, and all kinds of "ideal" and idealized content (and I've spent twenty years of my life designing, producing, and publishing the stuff). It all makes sense... but, when all is said and done, it just doesn't seem to take off, even though we can usually get it to work (and even prove that it can produce results).
One of the major reasons for failure is culture-specific: Elliott's idea - and many others born out of technological innovation - supposes learners are social monads, a thought that is relatively easy to entertain in an individualist culture such as that of the U.S. but unimaginable elsewhere. And even in the U.S. it's easier to imagine than to achieve, because even though our culture teaches us to think of ourselves as monads, and our pragmatic sense tells us to try out any promising solution, we actually aren't monads: we are heavily linked to others through visible and invisible social networks (which, by the way, only vaguely parallel our technical networks). And those networks provide most of our models of behavior, whether we're aware of it or not.
Looking back at fifty years of technological innovation, what do I see? The only true revolutionary breakthrough in training technology is… the flipchart! It changed things much more than we think (PowerPoint did as well, but in a totally different – and I would say regressive - direction). CBT/multimedia/eLearning has produced a niche market for products and services but bears less resemblance to a revolutionary development in training than it does to the hula-hoop (a great concept, a new and intriguing object, fun to have a go at, a winning topic of conversation, mildly frustrating to start using, possibly addictive in the short term but destined to have a short lifetime). What’s great about the flipchart is that nobody noticed it or talked about it. It arrived stealthily and did its job, allowing us to create, store, distribute and display flexible information in original ways. It also provided a fascinating link to group dynamics, giving trainers a tool to change learners’ perception of the learning environment and the goals associated with it (e.g. by having groups work in parallel and post their results on the wall). It was (and is) absolutely wonderful technology. And using it requires only minimal writing and drawing talents plus a bit of imagination on what to do with the pages. And best of all, no rival vendors telling you that their flipchart has more features than the one you just bought (and should feel guilty about). And no yearly upgrades!
So my suggestion is to do something similar with all our electronic technologies. Adopt and use them because we need them for storage and communication (independently of training) and then just have them around to help those who have something to teach others (formally or informally) get their messages across. Let’s stop building, advertising and selling systems and technologies that will provide the solution. Where Plato banned poets from the Republic, I would ban the vendors. People will end up providing the solution if you let them just use the technology they spontaneously accept for other purposes. Down with the constraints of training-specific technology. And down with instructional design (yeah, Jay, I’m with you as usual).
There are "Strong Brand" organizations: Microsoft, Google, Harvard University, IBM, Accenture. Even actors in a high-profile television series.
New employees are excited to get in. The parents are thrilled. Friends are jealous.
The new employees struggle, learn the culture, learn the rules, gain a lot of interesting perspectives and SOPs. Their prestige in the outside world climbs as a result of their new home.
To the outside world, these people have it made. They are part of a superb organization. They have access to resources. But the no-longer-new employees start getting restless. At some point, they realize that they are supporting the brand, not necessarily helping themselves. Their own professional identity risks becoming too associated with the Strong Brand. They do not have as much control as they would like. To some degree, the view outsiders have of them becomes increasingly at odds with the internal organization's.
Unless they are fast-tracked, they grow resentful, ultimately either leaving, or sticking around but detached.
I don't know what the solution is. But everyone I know who is in that cycle feels unique, so I thought I would share this.
Tuesday, October 11
A few months ago, I saw a presentation by Byron Reeves of Stanford, who is doing some really interesting research using fMRI (like a CAT scan) to look at areas of brain activation when game players are faced with certain tasks and situations. He found that:
- People were more excited when they got to pick their own avatars rather than being assigned one
- People were more excited in a rich media environment
- The story for the game had a big effect on excitement
What he found from his research that can be applied to work situations is:
- Don't underestimate the value of fun!
- Reinforcement in multiple time domains is important
- Many of the game social and management skills learned are transferable to work situations
Byron's group has applied components of a game called the "Three-Ring Pirate's Game" to a call center application with great success.
My Experience in Applying Gaming Principles
I am wondering if some of the results of this type of research apply to learning. In my own experience, I have found that to teach about collaboration and collaboration technologies, talking about it (lecture) was not adequate; I had to create the BTG (Business Transformation Game), a hands-on, scenario-based, role-playing game that helped people learn about collaboration technologies, as well as their own behavioral interactions, by actually using a variety of technologies in a scenario we created. I have done this for a number of clients and have found that the level of learning among those who play the BTG is much higher than among those who just receive a lecture.
More About Games
There are 60 million active gamers in the U.S. today. Although most of them are males aged 14-34, that population has been rapidly shifting towards both women and older players.
There are real benefits that gamers can get from playing games - for instance, increasing their ability to deal with spatial rotation, or people with Asperger's syndrome getting better at social interactions.
From my point of view, an even larger and more tangible benefit comes from the fact that these online games give a variety of people the chance to try on and work in roles that they might not normally get until many years later in their careers. In a game, your group or team can deal with lots of different challenges, and those on the team gain experience in handling situations they probably would not otherwise encounter. If such an unlikely situation does come up in the real world, they are better prepared to deal with it. Much like a simulation, some of these games may really be a type of immunization against the future!
There are all types of learners: auditory, visual, kinesthetic, etc. I believe games engage all types of learners and provide them with not only new information but a "practice area" to try on new roles and behaviors before they have to use them in a critical situation in real life!
What do you think?
The result, says Carl Honore, journalist and author of "In Praise of Slowness," is a situation where the digital communications that were supposed to make working lives run more smoothly are actually preventing people from getting critical tasks accomplished.
Chris Capossela, a VP in the Microsoft Information Worker Business Unit, says that "People are ultra connected. And you know what? Now they are starting to realize, 'Wow, I want to actually stop getting interrupted.'" Dan Russell, a researcher at IBM's Almaden Research Center, turns off instant e-mail notification, looks at e-mail only twice a day, and has cut the time he spends on e-mail in half. Other organizations, like Veritas Software, have implemented "no e-mail Fridays." Employees can't e-mail one another on Friday, but they are allowed to e-mail customers or other parts of the storage company if they have to. The result? Workers spend more time connecting face to face.
A study by Hewlett-Packard earlier this year found that 62 percent of British adults are addicted to their e-mail--checking messages during meetings, after working hours and on vacation. Half of workers felt a need to respond to e-mails immediately or within an hour, and one in five people reported being "happy" to interrupt a business or social gathering to respond to an e-mail or phone message.
Even airlines are starting to offer broadband Internet access. So how will we be able to deal with this tidal wave of communications?
"With Office 12, we will do things to make it a lot easier for people to be more effective in the way they manage all of these communication mechanisms," Capossela said. IBM also is looking at solutions to manage scheduling for the next version of Lotus Workplace, part of IBM's collection of software that rivals Office.
But technology may not be the solution. Like many issues in collaboration it is the "people and process issues" that are the crux of the problem.
"The problem, Russell said, is that there are only certain types of tasks that humans are good at doing simultaneously. Cooking and talking on the phone go together fine, as does walking and chewing gum (for most people). But try and do three math problems at once, and you are sure to end up in frustration."
I have written a lot about what I call "attention management" and what everyone else calls "continuous partial attention" (a term coined by Linda Stone). Stowe has been blogging about this for months, and he and I have had a few discussions on the subject.
Basically, he believes that your social networks are your filter for information overload: if A likes it, and I like and trust A, then I should like it. I agree with Stowe up to a point, in that social networks deal with only part of the problem. I do not believe that you will be able to filter enough through these networks to keep your bandwidth for both information and attention from being overwhelmed.
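To make the "social networks as filters" idea concrete, here is a minimal sketch of a trust-weighted filter. Everything in it -- the names, the trust scores, the threshold -- is an invented illustration, not anyone's actual system:

```python
def trust_filter(items, endorsements, trust, threshold=1.0):
    """Keep items whose trust-weighted endorsements meet the threshold.

    items        -- list of item ids
    endorsements -- dict: item id -> set of people who liked it
    trust        -- dict: person -> how much I trust them (0.0 to 1.0)
    """
    kept = []
    for item in items:
        # sum the trust I place in each person who endorsed this item
        score = sum(trust.get(person, 0.0)
                    for person in endorsements.get(item, set()))
        if score >= threshold:
            kept.append(item)
    return kept

# hypothetical data: A is trusted highly, B only somewhat
trust = {"A": 0.9, "B": 0.4}
endorsements = {"article-1": {"A", "B"}, "article-2": {"B"}}
print(trust_filter(["article-1", "article-2"], endorsements, trust))
# "article-1" passes (0.9 + 0.4 = 1.3); "article-2" (0.4) does not
```

The weakness is visible right in the sketch: anything your trusted contacts never see scores zero, which is exactly why filtering alone cannot solve the attention problem.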
I believe that the problem also needs to be attacked from the other direction: augmenting a person's ability to "attend" to content and events. In my view of the future there are a variety of technology solutions that might help, but I don't think the scheduling tools that Microsoft and Lotus are building are it. I believe that you will need to multiply your bandwidth and attention by multiplying your self.
Imagine some type of virtual agent that not only knows where you are, what you are doing, and what collaboration programs or devices you have, but also has a subset of your personality and is assigned to deal with specific types of tasks demanding your attention. For example, this virtual agent or avatar can deal with lower-level requests for attention, such as decisions about what to pick up at the grocery store. It knows your likes and dislikes and what is or is not in the refrigerator, and you have empowered it to make those shopping decisions and have the groceries delivered to your house at 6:00 pm (it knows your schedule and that you are due to have dinner with your family by 7:00 pm).
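Stripped of the speculation, the core of that agent is a triage rule: delegate the low-level requests, escalate the rest. The sketch below is purely illustrative -- the categories and handler strings are assumptions, not a real agent:

```python
# Categories the hypothetical agent is empowered to handle on its own.
LOW_LEVEL = {"grocery", "scheduling", "routine-email"}

def triage(request):
    """Return ('agent', action) for delegable requests, ('you', task) otherwise."""
    if request["category"] in LOW_LEVEL:
        # the agent acts on your stored preferences and schedule
        return ("agent", f"handled: {request['task']}")
    # critical requests still demand your own attention
    return ("you", request["task"])

requests = [
    {"category": "grocery", "task": "order milk for 6:00 pm delivery"},
    {"category": "crisis", "task": "client escalation"},
]
for r in requests:
    who, action = triage(r)
    print(who, "->", action)
```

The interesting (and hard) part, of course, is everything this sketch leaves out: how the agent acquires a subset of your personality and earns the authority to act on it.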
This leaves you free to deal with critical requests for your attention from your family, your boss, negotiating with a client, dealing with a crisis, etc. Since many fewer items fall into these "critical" categories, your bandwidth and attention are not overwhelmed, and yet all of these other demands on your attention are also being satisfied.
Blue Sky or Tomorrow's Solution?
I realize that a lot of what I have written here is theoretical. The days of intelligent agents that can augment my attention may be far off, but the tsunami of information and demands for my attention is here today. One of the biggest issues I had in school was paying attention to the teacher, especially if I was bored or had already done the work. I don't see online learning, or virtual classrooms (the way they are today), as a good solution to that problem. What do you think?
What is all this pointing to? Simply my new mantra - the application is becoming the platform. Wikis, blogs, podcasting - all part of the same dynamic. Call it what you will, but things that we used to think of as "applications" - discrete programs used for specific purposes, a search engine, a game - are becoming platforms for development. The first browser was an application until people started developing for the Web instead of the Net. eBay was an application, an auction site, until people started developing programs that were based on eBay - like automated auction programs.
This isn't exactly breakthrough thinking, but my question is really... where is this dynamic happening in the learning world? This is as much me actually asking the question and looking for answers as it is a rhetorical device. I want to know. Who are the folks creating "learning platforms" on which future learning applications will be able to be developed? If the answer here is silence, or even a muted reply, then the next question must be: why? Why, in the face of such staggering successes in other fields, is no one doing this? Computer gaming really took off with the release of the first DOOM in 1993, and that was largely due to two factors: they gave away the first three levels for free, and it came with an editor. That's right - from its release it was sold, marketed, and exploded at least in part because it became a platform.
So one final time and then I'll be quiet - who is developing learning products that both serve a primary function as a learning product and are also designed to act as development platforms - at little or no additional cost?
Sunday, October 9
Before Einstein, scientists would observe and record something, and then find the right mathematics to explain the results. Einstein comes along and reverses the process by finding a beautiful piece of mathematics based on some very deep insights into the way the universe works and then makes predictions about what ought to happen in the world.
Behold the power of human creativity.
Mihaly Csikszentmihalyi wrote that the creative process normally takes five steps (Creativity, 1996, p. 79):
- Preparation: becoming immersed in problematic issues that are interesting and arouse curiosity.
- Incubation: ideas churn around below the threshold of consciousness.
- Insight: the "Aha!" moment when the puzzle starts to fall together.
- Evaluation: deciding if the insight is valuable and worth pursuing.
- Elaboration: translating the insight into its final work (Creativity is 99% perspiration and 1% inspiration - Edison).
Dyson's story is interesting as it fits the five steps of creativity:
- Preparation: He goes to Princeton to study under the greats. He gets personally acquainted with the two central figures, recognizes the two theories are connected, and then goes through a six month period of directed preparation.
- Incubation: He spends two weeks relaxing
- Insight: and is hit with the "Aha!" of how to explain and connect the two.
- Evaluation: He then spends another six months creating, evaluating,
- Elaboration: and elaborating two papers that are accepted by the editors of Physical Review.
The Learning Process in the Knowledge Economy
Skills needed in this so-called "knowledge economy" go beyond rote memory to the next level -- the ability to think both critically and creatively.
Yet traditional learning systems have typically been centralized and operate on the principle that learners are unable to decide what they need to learn, thus the system does it for them, which in turn creates a vicious cycle -- put the learners in a system that does very little to encourage critical thinking, formal reasoning, or meta-learning; then tell them they are unable to decide what they need to learn, thus others will have to do it for them. And this carries on from schools to the business world.
This central control is stifling...it is closed to the possibility that people need to have a say in what they learn. It is closed to the next step in the learning process -- building a variety of experiences in order to build a strong knowledge base; which then creates the possibility for building a context or connection that no one else has created before. This is how problems are solved and novel ideas are created.
Learning needs to follow a similar process as Csikszentmihalyi's Five Steps of Creativity:
- Preparation: A two step process:
- Rote learning in order to create the building block of logic -- the means to use rules to make inferences, choose courses of action and answer questions.
- The collecting of information (called ontologies) -- In philosophy, an ontology is a theory about the nature of existence, of what types of things exist; ontology as a discipline studies such theories. The most typical kind of ontology has a taxonomy and a set of inference rules.
- Incubation: A period to reflect and interact with others.
- Insight: Making new connections.
- Evaluation: Metacognition - planning, setting timelines, and allocating resources (Schank & Abelson, 1977) for new "connections." Metacognition also designs strategies for accomplishing goals once they have been set.
- Elaboration: Turning the idea into reality.
This Tuesday (October 11, 2005) on PBS -- Einstein's big idea: The Legacy of E = mc2
Friday, October 7
Maybe this is a thought crime, but the meme that's propagating in my head this morning is that instructional design will once again mimic software design. (Where do you think those human performance flowcharts came from anyway?) Read this article about lightweight software development and the surge toward Web 2.0.
Now, let me indulge in some cut-and-paste thinking. Modify the article by substituting instructional or instruction for software, and Cross for Fried. Here's what you get.
Traditional instructional development is expensive, resource-intensive, and born of a Cold War mentality, Cross said. His advice is to "think about one-downing instead of one-upping, and underdoing competitors" -- beating them with less.
According to Cross, in the era of lightweight apps and simple products you need less money, people, time, abstractions, and instruction. The more I dig into how people learn, the more convinced I become that we've been trying to do things the hard way. We used to think our job was designing instructional systems. I'm beginning to think we're nurturing the evolution of learning experiences.
Cross believes that money mostly buys salaries, and that you only need three people -- a designer, a programmer, and a utility player, whom he calls a "sweeper." The feature set should be scaled for the headcount. Having less time is also an advantage. "You spend time in unproductive meetings and overanalyzing the product. Less time forces you to spend less time on better things," Cross said.
He suggested 30 hours per week per person, which "forces you into building better products and being creative with your time." And, if you have less time, you have less time to think about abstractions, such as functional specification documents, which Cross characterized as a waste of time. "Instead, build the product and start from the user interface customer experience first; then wrap with the technology," Cross said. "The interface screens are the functional specification."
Finally, building less instruction means fewer features, less documentation, minimal support, and less confusion in selling the product. "Less instruction is key to building very specific tools. There are a million simple problems to solve with less. Competitors solving complicated ones are most likely to fail," Cross said. "For Web-based instruction there are plenty of simple problems to pick from that you can nail."
Instructional design tries to fix things that are broken. It begins with assessing what's wrong, "gaps," and leads to developing grandiose, cure-all solutions. Learning evolution begins with what you've got and nurtures incremental improvement.
We see the same sort of issue on the front page of our newspapers. On the one hand, some people believe a master designer released Earth 1.0 about six thousand years ago. Other folks believe Earth beta has been evolving for billions of years; it's a web without a weaver.
Do you believe in the intelligent design of instruction or the evolution of species of learning?
This is a follow-on to a previous post "The Number 2". It was triggered by a comment that Jay Cross made:
"In 1920, Bluma Zeigarnik notices that waiters in coffeehouses memorize remarkably complex orders and then flush them from memory once the transaction is complete. There's more at work here than short-term vs. long-term memory. In 1927, Bluma's research found that people retain about twice as much of a subject if they don't reach closure. When you put down a book, do it in mid-chapter. If you're leading a seminar, don't finish before the bell rings. Not closing out the topic creates tension in the brain that fades when the thing is finished."
Ultimately Zeigarnik proved that people remembered unfinished tasks about twice as well as completed ones. Thus, if an instructor wants students to remember a presentation, she will end the class in mid-sentence, before drawing a final conclusion. Direct marketers use the Zeigarnik effect to whet their readers' interest. To remember the book you're reading, take a break in mid-chapter, not at a more natural stopping point. If you want to keep something actively in mind, don't close it out. Let it hang. There's more on Internet Time if you're curious.
Simple idea. It sounds like it works. So why don't more people do it?
There's all this fantastic research out there about How We Learn. We've read and talked a lot about it here and elsewhere. Talk is cheap ...
Without turning people into lab rats, WHY aren't more of us who are responsible for learning creating controlled experiments in learning? Tackle the same training problem in several different ways and see what works, what the resource costs are, and what the ROI is. I think it would be enlightening to see the same training problem 'solved' in a number of ways:
- Using an instructor-based classroom format: one with a grand finale, one using the 'end the session in mid-sentence' ambiguity model
- Developed as a workflow/EPSS program - no 'learning' involved
- Done as a good online program delivered via LMS
- Developed and delivered in a mistake-based/ambiguity-based mode
- And perhaps even designed as a simple simulation
You (our blog readers) might even be interested in taking pieces of the puzzle. We could start with a finished program from the above list that has been shown to get good results from the students, and use it to develop the other models for testing. We could publish the results here. The focus could be corporate workplace learning.
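The comparison proposed above could be tallied very simply once the data is in. Here is a throwaway sketch; every figure in it is a made-up placeholder standing in for measurements the experiment would actually produce:

```python
# Hypothetical results for one training problem solved several ways.
# Costs and benefits are invented placeholders, not real data.
conditions = {
    "classroom (grand finale)":  {"cost": 12000, "benefit": 15000},
    "classroom (mid-sentence)":  {"cost": 12000, "benefit": 18000},
    "workflow/EPSS":             {"cost": 8000,  "benefit": 14000},
    "online via LMS":            {"cost": 6000,  "benefit": 9000},
    "mistake-based":             {"cost": 10000, "benefit": 17000},
    "simple simulation":         {"cost": 15000, "benefit": 21000},
}

def roi(cost, benefit):
    """Classic ROI: net benefit over cost, as a percentage."""
    return 100.0 * (benefit - cost) / cost

# Rank the delivery modes from best to worst return.
for name, c in sorted(conditions.items(),
                      key=lambda kv: roi(kv[1]["cost"], kv[1]["benefit"]),
                      reverse=True):
    print(f"{name}: ROI {roi(c['cost'], c['benefit']):.0f}%")
```

The arithmetic is trivial; the hard and valuable part is running the same training problem through each condition with comparable learners and honest measurement.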
We have this dinosaur of a model that we hide behind the jargon of pedagogy, that we cling to like drowning sailors ... almost like taking your vinyl long-playing records, turntable, and stereo speakers on an airplane to listen to music. We all know and talk about the ways that research and technology are changing the ways we live and work.
What about the ways we learn? And I'm not just talking about 'a classroom online'. To me, that is an oxymoron akin to jumbo shrimp. As the lyrics to one of my favorite tunes tells me, "Time keeps on slipping, slipping, slippin' into the future." Time, perhaps, to try something new and interesting?
Wednesday, October 5
There may be a planned illness outbreak in World of Warcraft so that sociologists can study and document players' reactions from the start. This is the converse of how we often use games and simulations: instead of creating an environment to evoke a specific response, it's creating an environment and documenting participants' emergent behaviors.
Tuesday, October 4
I've been hearing a lot lately about "... learning from our mistakes". Natural disasters. Personal mishaps. Problems at work. Issues at home. I keep hearing "We learn from our mistakes." So I wondered if there is a new approach to learning we might call "Mistake-Based Learning".
Then I was looking at a new piece of software that altogether avoids the need to learn anything at all or make any mistakes. It lets you know what you need to do, and when, and it's all a click away. This idea already has several names in the hat - the one I like the most is "Workflow Learning".
Sudden Idea: Only 2 Choices to Make
So suddenly I find myself simplifying the complex universe of learning into two groups:
- Mistake-Based Learning: Learning by making mistakes and learning from those mistakes, and
- Workflow Learning: Not learning at all, just finding what you need to do, or know, and doing it.
Choice 1: Mistake-Based Learning
Mistake-Based Learning is actually supported by the research into how we learn in the workplace. It's the 75/25 Rule, where 75% of workplace learning is done while you work and only 25% (at best) is accomplished in a more formal setting. That 25% includes everything from classes to online programs to simulations. It's been broken down into:
- 20% "I Know" and
- 5% "I Can Do".
Most training programs barely reach the "I Know" level. The best, using the latest interactive learning technology like simulations, hit the 25% "I Can Do" mark.
The remaining 75% is the "I Adopt & Adapt" level. It's a good definition of being ready to perform your job - the ability to adopt and adapt what you know and can do in response to an ever-changing set of circumstances.
I consider this real learning.
Choice 2: Workflow Learning
In The Future of eLearning, Jay Cross does a wonderful job of defining and covering the ideas, concepts and case studies of workflow learning. He does a brilliant analysis of the value of having immediate access to knowledge - what you need to know or do - when and where you need it. In summary, it means you don't ever need to learn how to do something, nor really know what to do; you just click a (fill in the blank technology tool), get the answer, follow the instructions, and move on. Seems mistake-proof, unless you misread the instructions.
I consider this rote learning.
Why This Matters
1. In companies and corporations all over the world, the attempt to recreate the schoolplace in the workplace is most obvious where rote learning rules and simply testing is okay. Performance is not relevant or really demonstrable, and just knowing the answer is everything. You do not have to really learn at all, just remember for a while. The model is not fundamentally wrong, just completely misapplied. It does not go far enough in using the emerging technology that can replace the classroom, and it is never in the right place at the right time. Sort of an 'out-of-the-workplace' learning model. What is really needed is a true Workplace Learning program.
2. In the actual workplace, as opposed to the fictional schoolplace, learning by doing is what it's really all about: test scores (or LMS completion rates) are irrelevant, performance is key, and know-how is everything. So we have learning by your mistakes, or Mistake-Based Learning, a new kind of program of which simulations are the tip of the iceberg, as you ascend into the 75% level through "I Know" and "I Can Do" towards adopting and adapting.
That means that in any given situation in which I am asked to help people learn something, I can easily choose to create a Mistake-Based Learning program or a real Workflow Learning program. Both of these choices involve new directions for people in charge of the Corporate Brain. The majority of the thinking has been in the "Workflow Learning" category. Again I refer you to my friend Jay Cross if you want more details.
What remains to be worked on are the Mistake-Based Programs. If we truly and really learn by our mistakes, and experience is the best teacher, then what does Mistake-Based Learning look like? When is Mistake-Based Learning a better choice than Workflow Learning? Aside from simulations, are we creating these types of programs? Will companies even allow the idea of "training" when it is a Mistake-Based Learning program? Will they let us develop programs that set up employees to make mistakes-try again-succeed-and really learn? Or will the schoolplace model, an artifact of the Industrial Economy, continue to prevail? A model in which making a mistake means a lower grade, AND less capability to not make a real mistake when you're back at work? Where's the ROI in that?
Lots more questions. Tell me what you think ...
- That you can post out to most blogs
- Bring content to Writely by e-mail, upload, or original creation,
- Download into Word or HTML,
- all with revision history and the ability to revert to previous versions at the click of a button.
I agree with Harold - this is definitely the future, here for use today.
First, there are a few nuggets of interesting perspectives on simulation design.
But more importantly, there is a lot of space and energy, both on my part as the writer and your part as the consumer, dedicated to humor. Think of your own experience as a reader plowing through the material.
- Some will find it engaging.
- Some will find it lame.
- Some will find it a waste of space (mostly the people who really care about the material).
- Some will read all of the material because of the humor, which lowers tension and builds connectedness with the reader.
My meta-point to the interview was to let readers experience a game element themselves (the humor), and come to their own opinions on the risks and benefits of using game elements generally.
Who says learning by doing has to be complex?
Saturday, October 1
Your humble Blogmeister,