There are probably writers who work in a nice tidy sequential fashion.  They have an idea, sit down, research it, write it, hit ‘save’ or ‘publish’ and move on. I’m not one of those writers.

Well, I used to be one of those writers, but I’m now solidly in the school of writers who work something like this:
“Start with a couple dozen open tabs on Firefox and a collection of journal articles spread out on the desk, add a few thought-provoking conversations, have the connections start pinging in the brain … and the next thing you know, there are notes for 3 or 4 different posts and articles, but nothing actually written.”

At some point I realized that this is the blessing and the curse of living in this Web 2.0/Social Media world.  An abundance of resources, an abundance of conversations to trigger ideas, and an overabundance of details to try to do something useful with.

I’m realizing more and more that the problem is not a lack of information structuring, nor of information filtering; it’s that I’ve needed to learn to keep my resources for a given piece of writing all in one place.  My brain is more than willing to be distracted when I write, and it was an awful lot easier to maintain a hint of discipline when the only things within my grasp were directly related to the work at hand.  You know, Old School: at the big library table with references and note cards creating the only visual landscape; a wall of focused information blocking out the rest of the world.

So, I’m learning.  Learning when to allow myself time to dig into research and go down those rabbit trails that lead to serendipitous connections.   And learning to remember when it’s time to say “enough”.  Just because one has nearly infinite access to resources doesn’t mean it’s a good idea to gorge on all of them.   If I’ve missed an idea or a connection, I’ll find it later, on my own or through the comments of others.  But if I don’t get it written in the first place, there’s nothing there to improve on, to criticize or to expand on.  It’s that simple, and it’s that hard:  sit down and write.

Now, it would be really great if I had the discipline to tune out the siren song of Google Scholar, and the persistent calls of Twitter.  But it’s human nature, when you lift your head up from writing, to want to dig in and find “one more thing”.    I needed to find a way to keep all the resources for a project in one virtual workspace so I wouldn’t drift off while tracking down references, mind maps, emails, and notes that I’d collected.

For me this ultimately meant finding a software solution;  I started using Scrivener.   It allows me to recreate that giant library table of references and notes, removing the need to wander off to Google (or Mindmeister, or iStockphoto) to finish a project.

Writing is more than words.  It’s words fueled by ideas, challenging conversations, striking images, or problems to be solved.   Creating a virtual library table can make it possible for a writer to step back from the conversation long enough to actually produce something that contributes to the conversation.

There was a recent conversation on Twitter regarding the value of children learning to read an analog clock – one person classed it as an irrelevant skill, right up there with using slide rules.  It was the sort of side discussion that almost looks like a trivial bit of chit-chat, when it really is something quite important.   The crux of the conversation was: “we have a newer, more efficient tool (digital clock) so the old tool (analog clock) is irrelevant”.

But there’s something missing from that picture – we are not digital beings, we live and move and think in a physical world – we are physical people who happen to use digital tools.   Analog tools are exactly “what it says on the tin”: they are analogs, physical analogies, representations rooted in the world in which we live.    We connect to analog – it models physical realities or even complex abstractions.  Finding the meaning of abstractions represented digitally is a different game altogether – they might be a shortcut of expression, but they are not physically analogous representations; they add an extra layer of symbolism for the brain to process.

For this reason it probably takes most  people a few seconds to figure out the joke:
Q.  Why do computer scientists get Halloween and Christmas confused?
A.  Because Dec 25 = Oct 31.
(If you don’t spend a fair amount of time in the Math or CS playground, it may take an extra step or two of mental translation to get this one.  Hint: Dec = base 10, Oct = base 8)
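If you’d rather let a machine do the mental translation, a couple of lines of Python (purely my own aside, not part of the original joke) make the point:

```python
# "Oct 31" read as a base-8 (octal) numeral is 3*8 + 1 = 25,
# which is exactly "Dec 25" read as an ordinary base-10 (decimal) numeral.
print(int("31", 8))   # 25
print(0o31 == 25)     # True
```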

Time, in and of itself, is an analog tool: taking the day and breaking it into equal chunks for purposes of planning and communication.

The nature of non-digital gives us something we can naturally connect to.

The analog clock is a beautifully simple illustration of this.  It is an analog reference to an analog concept – a physical representation of the turning of the earth, if you like; mapping an abstraction (time and its passage) into a tangible, touchable model. Roitblat and Meyer describe it this way:

“The time representations in an analog clock directly reflect the similarities between times rather than symbolically describing them.  For example, on a digital clock, the representations of 9:58 and 9:59 share quite a few features relative to those represented by 9:59 and 10:00, but those times differ by exactly one minute.  In an analog clock, on the other hand the similarity between 9:58 and 9:59 is exactly the same as the similarity between 9:59 and 10:00.  An analog clock is a nonsymbolic (in the sense described here) representation that preserves the correspondence between the event to be represented and the characteristics of the representation.”
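To put the quoted point in concrete terms (a rough sketch of my own, not the authors’), you can compare the two representations directly: measure “digital similarity” as the display characters two read-outs share, and “analog similarity” as the angle the minute hand sweeps between the two times:

```python
def digital_similarity(a: str, b: str) -> int:
    """Count display characters two digital read-outs share, position by position."""
    return sum(x == y for x, y in zip(a.rjust(5), b.rjust(5)))

def analog_distance_deg(h1: int, m1: int, h2: int, m2: int) -> float:
    """Angle (in degrees) swept by the minute hand between two times."""
    return abs((h2 * 60 + m2) - (h1 * 60 + m1)) * 6.0  # the minute hand moves 6 degrees per minute

# The digital read-outs 9:58 and 9:59 look nearly identical,
# while 9:59 and 10:00 share almost nothing...
print(digital_similarity("9:58", "9:59"))    # 4 of 5 characters match
print(digital_similarity("9:59", "10:00"))   # 1 of 5 characters match (only the colon)

# ...yet on the analog dial the two gaps are exactly the same size.
print(analog_distance_deg(9, 58, 9, 59))     # 6.0 degrees
print(analog_distance_deg(9, 59, 10, 0))     # 6.0 degrees
```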

Beyond that, the analog clock is something learners can touch and feel and play with – see the mechanics and how the gear ratios drive the motion, grasp spatial and numeric relations based on something real and tangible that users experience every day.  It is a tool that is understood experientially, not merely a classroom lesson, so it can provide a meaningful schema for complex and abstract concepts beyond telling time.   Supplanting analog totally with digital (replacing physical representation with symbolic) might lead to faster reading of clocks, but not necessarily a faster grasp of the relation between different times (which is a significant part of the value of clocks).  It also, incidentally, removes a tool that gives grounding to abstract concepts such as ratios, through meaningful everyday experience.

If I were obliged to provide an analogy for analog clocks in the realm of mathematics tools,  it would be closer to manipulatives in a math class than to a slide rule.   (I would hope that I need not argue the case for students benefitting from understanding the meaning of mathematical operations, as opposed to merely memorizing the appropriate algorithms).

Symbolism has its place, and its merits, but generally these come after the learner has a real understanding of the concepts.  And understanding of complex or abstract topics often comes from the use of representation.  It is not “either-or”, it is “both-and”.
As learning professionals, at the end of the day, our evaluation of tools and methods must directly correspond to how humans actually think and interact with the world.  Unfortunately, wholesale or careless dismissal of tools and methods that are out of vogue is not uncommon; and this leads to a more difficult question.

The deeper issue ties back to how we evaluate evolving tools.  If we’re going to assess the relative merits of tools, and their relevance, we have to take into account not merely their objective use, but how they are actually used; as an example, an interesting post by @hypergogue considers the “affordances”, or qualities, of digital and paper documents in the sphere of knowledge work.  Paper documents, messy and tangible and shuffle-able, allow for a different kind of off-loading process for our brains than digital ones.  Neither digital nor paper documents are inherently useful or relevant (nor inherently un-useful or irrelevant) – relevance for learning is tied to use, cognition and meaning.  It is our task (among other things) to weigh tools well and use them wisely, on their own merits and within this framework.  To do this requires a shockingly old-fashioned tool-set: knowledge, logic, objective evaluation (the same tools needed for the dying art of discourse).  To acquire and apply that tool-set requires something even rarer in this modern age than discourse; what it requires is Time.

The question is: Are we willing to find that Time?

There must be something going on in the hive-mind that is the internet.  I’ve run across a half dozen references to skepticism in the past half hour.  All of which reminded me of something I wrote a while back.

It’s not an easy thing to be a skeptic.  It’s quite common for someone to express skepticism about something new, and then find themselves in a hailstorm of irate responses along the lines of:  “Oh, they’re just hidebound old fogies who can’t accept innovations or new ideas.”   Maybe, though, there’s a reason they are skeptics.  Maybe they are all for innovation, but are simply asking the right questions because they are just a little better and faster at analysis than the average bear.

People like new ideas (or even old ideas that have been dressed up in bright new clothes).  There is always that hope that there is that magic solution, that the “next great and wonderful thing” will actually live up to its promise and transform the world, or at least the workplace.

So “New” sells.  It sells books and products, it generates prestige…  And those who are clever enough to say “yeah, this is great in concept, but how are you going to handle…?”  tend to be dismissed as too old fashioned or resistant to change simply because it is human nature to want that new solution to be perfect; we like the fantasy (or cling to the hope) that somewhere there is a ‘silver bullet’ solution.

This is a big problem, because it removes the opportunity to address those potential pitfalls at an early point of adoption, where they could have been headed off before becoming bigger issues downstream.  We’ve all seen the “next great thing” fail to live up to its promise in the workplace, in education, in technology. And some of those failed initiatives need not have failed, perhaps would not have failed, had both the proponents and the skeptics sorted through issues at the front end instead of writing each other off as unrealistically naive and cynical, respectively.

All of which begs the question: When did phrases like “Can you give me some data?”, “How does this really work?”, or “There are some issues that need to be addressed” start being heard as wholesale rejection of an idea?  When did human minds become so narrow (or egos so fragile to criticism) that anything but absolute acceptance is deemed condemnation?

It points to a larger problem – the end of discourse.

Fingers could be pointed a lot of ways in this.

Educational systems that purport that they want students to “learn how to learn” but don’t teach them logic or rhetoric or any of those other old fashioned topics that allow for examination and conversation around all angles of a situation?

The business world, where the model has shifted from building businesses that will last and thrive for years to come, to merely seeking to make the best possible numbers for this quarter.  In this situation people want a quick win; there is no time for, and no interest in, real long-term viable solutions.  This model is not only systematically starving and killing off the flock of geese that lay the golden eggs, it also effectively puts employees in a perpetually defensive posture where opposing views or mention of flaws are viewed as threats to one’s career.

Regardless of the origins, this is a problem that needs to be addressed because it is bigger than “Is [insert innovation here] good or bad or neutral?”   If we can’t ask real questions, it’s going to be pretty tough to distinguish between “snake oil” and a good idea that needs refinement.  A lot of good ideas are going to get lost in the shuffle if there is not room to ask the hard questions that will take those ideas beyond the initial burst of enthusiasm to a point where they can reach their full potential.

I was reading a post by Bob Marshall, nodding in agreement with much of what he wrote.  I’m not in the software development business, but I often see the  same problems that he describes relating to good work:   [those who] “know how but can’t anyway because of where they work, who they work for and because of all the monkey-wrenches being lobbed into their daily routines…”   He was speaking of the software industry, but what he describes is not an uncommon issue, in any field.

I’ve run into similar scenarios, and one common factor among them is the general perception that every solution, every process, every approach, ought to “scale”.   Since, in most business circles, continuous growth is viewed as not just good, but essential, the desire for universal (and infinite) scalability of processes and procedures is understandable from the standpoint of efficiency.  Scaling may well streamline administrative functions (legal, HR, finance), but it is important to recognize that while certain aspects of a business are readily scalable, others (e.g. Operations, R&D) perhaps are not.  This non-scalability may not indicate a problem to solve, but a natural attribute of how human beings and communities really work.
Companies see themselves as a single expansive entity (and therefore embrace the model that universal, one-size-fits-all procedures are beneficial to organizational effectiveness), when in fact they are often effectively a bunch of boutique organizations welded together in a common enterprise.  If you talk with people in different functional groups of an organization, you know this; each group sounds like it works for a completely different company than the others.  What are often called silos are really the front doorsteps of the different small communities.  And how one group learns or produces will not translate directly to how another group does.
Whether the boutique (or community) model is most “efficient” on an algorithmic scale isn’t the point.  The point is that it is how human beings actually interact.   No matter how much you scale up an organization there will always be points of functional disconnect between how groups work within their specializations and the one-size-fits-all codification of the larger organization.  Humans will continue to interact in small connected groups and build their own, most effective approaches.  Universally scaled-up practices, while efficient, will not necessarily prove effective with respect to quality or productivity within the smaller, organically formed segments of an organization.
Maybe the key to bypassing the “monkey-wrenches” that stifle good work is to recognize that learning design or software design (or any other business activity) should not be presumed to be infinitely scalable.  It’s always going to be a balance between efficiency and effectiveness.  So keep the uniform approaches in the arenas where efficiency matters, but also determine where effectiveness is the greater goal than efficiency, and shape the policies to match how the work really happens.

Rigour is a popular term in learning and training environments.  It gets trotted out a lot in marketing materials as well.  But the problem is that a lot of what gets posited as “rigorous” is actually not.  In an elementary school textbook, a workplace learning module, or a keynote presentation, you’ll find things that look like rigour, but that doesn’t guarantee that they are.

Someone who really knows a topic will spot false rigour in an instant – much as adults may chuckle indulgently (or cringe) at adolescents who attempt to pose as being much older. So, maybe the first question is ‘how do I spot an expert?’, because they are the quickest, easiest path to spotting false rigour. A real expert is often easily identified by their ability to accurately reduce a complex concept into layman’s terms without losing the fundamental meaning.  Of course it might take a real expert to recognize that it was done properly – so that’s getting you into a worthless ‘infinite loop.’

Leaving us with a conundrum of the first order: it is very difficult to accurately call out artificial rigour without sufficient expertise.

So, what’s a non-expert to do?

My first instinct was to look at the problem from the perspective of fields like math and science (simply due to my own background).

In classroom texts it is not uncommon to find a sort of artificial rigor that was created to meet a list of criteria, as opposed to lessons rooted in true fundamental understanding and applicability.  The focus is not on a meaningful “why”, a reason we want students to learn something; it is rooted in lists and box-checking, which are themselves rooted in standards that have as much basis in perception and political agendas as they do in actual learning.

Box-checking driven learning has a high probability of being guilty of false rigour.    So that’s one warning signal, easily found, but it’s only a starting point.

What else comes into play?

We may not be experts on a given topic, but we can take what we know about expertise and use it as a guide.

A while back I wrote:

“If I’m really, really good at, let’s say, math, then I may not have to stop and think about quadratic equations because I intuitively grasp them; but if asked, I can clearly explain (in simple terms) why they have the solutions they have. If I am merely good at arithmetic, I can show you how to solve the equations (just by plugging numbers into the formulas), which might look like expertise to a novice, but is really just mechanics; in that case I know it works but don’t fully grasp why or how.  A lot of false rigour works the same way.”
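(As an aside of my own, not part of the quoted passage: the “why” behind the quadratic formula is nothing more exotic than completing the square – the formula falls out of the algebra rather than being handed down.)

```latex
% Completing the square on ax^2 + bx + c = 0, with a != 0:
\[
  x^2 + \frac{b}{a}x = -\frac{c}{a}
  \quad\Longrightarrow\quad
  \left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2}
  \quad\Longrightarrow\quad
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\]
```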

Simon Bostock countered these thoughts with the insight that being able to break concepts down into their component parts may (will) not work for all domains:

“I’m not sure true experts can always unpick and unpick. I think it depends, rather, on the domain.

Maths and physics are inherently unpickable, and the reputation of Feynman as a teacher, therefore, shines. Science depends on the principles of proof and peer-review so being a teacher (ie explaining stuff and testing that it’s been understood) is essentially the same as science. [Warning: massive over-simplification!!!]

But things like medicine, art and computer programming just have to work. We don’t necessarily care how the surgeon genius or the does-the-work-of-a-hundred programmer work. And we certainly don’t trouble them to explain themselves. In many cases, they probably couldn’t because it’s doubtful they’re aware of how they do it themselves – my feeling is that they’re drawing from as-yet-unnamed disciplines, and you can’t unpick things you can’t name”

And he’s absolutely right about this…

Different fields have differing degrees of inherent “unpickability”. I can see in the case of, say, a violinist – they can ‘unpick’ the details of technique and tone production, but as far as (for lack of a better word) artistry – well that’s a personal thing, that’s not so readily broken down.  But then again, in that case, I would put the expectations of instructional rigor on the technical aspects, and not assign it to the area of personal expression or artistry.   But we still do need to look at what constitutes rigour (or at least expertise) in topics that are not inherently dissectable.

The Role of Narratives

I was helping someone with a technical problem which they were grinding through rather mechanically, without any real understanding (I could recognize this, as I’ve been in the same situation).  I took a comparable problem and broke it down into logical components, but did so within the context of a narrative about the physical reality which the equations were describing.  The same person later was able to discuss another problem with me in terms of meaning, rather than mere mechanics.  They had crossed a threshold, perhaps not into expertise, but at least onto the path that leads there.

Expertise goes beyond merely breaking down a problem into component parts, it’s deeply tied into a narrative.  Real rigour has a narrative rooted in truth;  artificial rigour’s narrative is not entirely so – it looks almost like the truth, but on closer examination the narrative of artificial rigour is either rooted in superficial function, not understanding; or is rooted in fallacy.

We see artificial rigour in this guise in a lot of modern math curricula where elementary texts proclaim sub-sections to be “Algebra” when, in fact, the students do not have sufficient intuitive grasp of numeric relations for there to be any meaning to the work.    It looks like 8 year olds are ‘grokking’ algebraic concepts, but they do not truly do so because their minds are so filled with painful, tedious mechanics that they haven’t the mental energy left to grasp the intuitive connections.

The real narrative is one that shows mastery (and rigour), describing not “what is done” but “what it means”.

For non-technical areas like Simon’s examples of music or surgery there are two layers.  There is the mechanical aspect of the work, and then there is, for lack of a better word, “artistry”.    If I am a reasonably capable technical musician, I can follow along and imitate styles and variations by talented musicians, but I don’t have the internal grasp to create my own riffs.  To an outsider, on the right day from the right angle, I might look like I know a bit, but really I’m just a reflection of those who do.  An expert would know that pretty much right off; for someone else it might take some closer scrutiny over a bit of time to realize I can’t really improv like a pro.  A non-expert might not be sufficiently interested to notice.

For ‘unpickables’ (to use Simon’s term), expertise and rigour reveal themselves not in imitation but in creation.  The expert surgeon does not exactly mimic his peers, nor does Itzhak Perlman imitate other violinists; they may learn and absorb what other experts do on a technical level, but from that understanding, they can create.  So the teacher of these subjects does not provide rigour through mere mechanics, but through fostering the learner’s innate understanding, challenging it, stretching it.

Where Do We Go From Here?

It seems virtually impossible to separate a discussion of rigour from a discussion of expertise.  But it is possible for a non-expert in the field to keep a weather eye out for warning signs.

Artificial rigour tends to lean on the smoke and mirrors of a quick grind through the mechanical motions; this is a common feature of learning based on box-checking agendas.

Box-checking as a concept provides a bit of a compass star to another indicator – the antithesis of box-checking is understanding, and understanding often reveals itself in meaningful narratives (as opposed to snake-oil style narratives; substance rather than a sales pitch).  Real experts can create meaningful illustrations, applications, and narratives; false rigour can only ape what it has heard or seen.

Another measure of artificial rigour is that it tends to make one “feel” good (accomplished, affirmed…).   It is very appealing. Real rigour requires hard work. A bit like climbing a mountain: it may be pleasing at a deep gut level, but it doesn’t come easily or quickly.

I would love to have found a simple check-list (you know, like the box-checking discussed above) to help a novice identify real rigour when they see it.   But then again maybe that’s the point: if you are a novice it’s time to start asking around and finding experts.

The best I can offer is an invitation to continue the discussion.  I still have a lot to learn.

When did “Problem” become a dirty word?

In a business conversation, anyone who utters the “P” word is likely to be shut down with the statement that “there are no ‘problems’, there are ‘challenges’”.

For those who are from a technical background (e.g. NASA engineers), problems are a good thing – saying there is a problem immediately implies that there is also a solution.   There may be ‘challenges’ faced on the way to reaching those solutions, but problems are something we can work on, something we can do something about.  We can solve a problem and possibly prevent a failure.

But then again, “failure” is another word you can no longer use in business. Failures are now “learning opportunities”.

Now, it is true that if the NASA team had not solved the problems (sorry, ‘challenges’) on Apollo 13, they would have had quite a learning opportunity.  As would have the crew; but the crew’s learning opportunity would have been very ‘short-lived’.

One problem with the constant relabeling of terms is that you cannot change the nature of the thing the terms stand for, and eventually the emotional baggage of the old term will apply to the new. (“If it quacks like a duck, looks like a duck… it’s still a duck”.)   The flight crews of Challenger and Columbia would have suffered the same fate regardless of whether you called those situations ‘catastrophic failures’ or ‘learning opportunities’.

Another problem with relabeling ties into the cultural source of relabeling as a concept.  People whose work has real, direct results have less problem using strong words.  By strong words I don’t mean the kind that would have gotten your mouth washed out with soap by mom, I mean the kind that will get you a dressing down by your supervisor (and possibly lead to professional ‘learning opportunities’ for you).

If your work has a direct, observable function (be that as a farmer, or as a NASA engineer) you are fine with calling a failure a ‘failure’.  The fact that you will learn from your failures is a given; it’s part of the job, so obvious that you need not mention it.  You call problems problems, and then you go solve them.   And you also know there are some problems and failures that are out of your control: there are unpredictable elements to life and work and you handle them as they come.

In much of the corporate world, though, people’s work is so far removed from the ultimate results, that it is possible (and perhaps even savvy, in a Machiavellian way) to avoid calling things what they are. But relabeling truths doesn’t alter the truths – they are still the elephants in the room; elephants that no one dares mention in an environment of fear and mistrust.  It seems the more euphemisms an organization uses, the deeper the culture of fear and distrust, and the more paralyzed people are from actually taking action.  In a business where you cannot say ‘problem’ or ‘failure’, it is very clear that you are not allowed to have one, under any name.  It is also clear that there is no real desire to innovate or learn; the preferred approach is to sweep failures under the rug.  It is more important to protect oneself from blame than to solve the problems.

This situation represents a great loss of opportunity: when individuals and organizations face problems head on and learn from failures, that is where the real innovation happens.  Don’t believe me?  Ask the crew of Apollo 13 and the folks at NASA how much they learned, and how many solutions they innovated to bring that crew home.  And they did so because they faced up to the catastrophic failures on the craft, addressed the problems, and then set to work.  Heads were not going to roll for the failure; the only unacceptable action was not to try.

Innovation is the buzz-word of the day in businesses; if you want innovation to happen, call problems what they are.  And then go solve them.

“First figure out why you want the students to learn the subject and what you want them to know, and the method will result more or less by common sense.”
-Richard Feynman, 1952

The concept of real learning can be easy to describe but difficult to achieve.  The work of Richard Feynman provides an interesting case study of the value of starting with ‘Why’, and where to take things from there.

‘Why’, ‘What’… then ‘How’

The name Caltech tends to conjure the image of highly talented, motivated students, but in 1960 it was clear there was a problem. The standard two-year introductory physics course offered to freshmen and sophomores was actually dampening their enthusiasm.  The classes offered the usual necessary foundational topics in physics, but for students who walked into the university with visions of quantum mechanics, it was more than a slight let-down.  There was not a lot of connection made between what was presented in classroom lectures and where modern-day physics was heading.

Feynman recognized that what was missing was the ‘Why’ – the meaning, the reasons, the endgame, if you like, that stemmed from these foundations.  Without a sufficient ‘Why’ the ‘What’ and the ‘How’ are destined to go astray.  So from 1960-1962 he delivered what are now known as his Lectures on Physics.   They were presented to the entire introductory physics class, but the content was geared to spark the curiosity of the most advanced students (with the intent being that practice problems within their recitation sections would shore up practical understanding for others).  The lectures often presented, if sometimes only in a summary manner,  concepts beyond the students’ current understanding, giving them a window into where their studies could take them.  It was the kind of window that a standard, linearly presented course in physics did not provide.

The Lectures were not (at the time) an unqualified success.  Feynman recognized that the somewhat spontaneous nature of the lectures meant that there was not time for sufficient front-end preparation of practice problems that were to be provided by the instructors of the recitation sections.  Advance preparation is always a key “cost” to consider when looking at non-linear, inquiry-driven learning; it is also key to its success.  Despite the need for better preparation to allow for more effective practice problems, those students who ‘got’ the concepts were inspired and motivated in ways they would not have been otherwise (as were the many graduate students and professors who attended the lectures).   In principle, Feynman’s Lectures were on the right track; in practice, he was aware of the improvements and changes needed to make his approach effective (better opportunities for practice and support).

Surely You’re Joking… Evaluating Textbooks

In the book Surely You’re Joking, Mr. Feynman, there is a memorable chapter regarding a time in the 1960s when Feynman was asked to help review math textbooks for the State of California’s school system.  The whole story is worth reading (being both disturbing and entertaining) and can be found online.   The experience proved to be a bit of a shock for Feynman as he went through book after book. All the texts tried to embody the kind of real learning that Feynman himself strove to provide, but each one was guilty of serious shortcomings: inaccuracy, poor terminology, and ridiculous problems.

The root of the issue for all the texts was a sort of artificial rigor that was created to meet a list of criteria, as opposed to lessons rooted in true fundamental understanding and applicability.  Additionally, despite the names of ‘experts’ listed as authors of textbooks, the actual mass-assembly process used by publishers tended to involve many authors of limited knowledge and skill; the experts’ names were every bit as much window-dressing as were the aspirations to suggest the course was rigorous and correct.  It is an example of expediency and costs driving content.  Creating meaningful learning takes time, effort, and deep understanding of a concept.  (Sadly, having been on committees that review Math and Science texts, I can vouch for the fact that little has changed in the intervening decades.)

Feynman immediately recognized the lack of both substance and meaningful practice in the books.  In this case, the ‘Why’ for learning was: to meet standards generated by state bureaucracy.  This made it unlikely that the ‘what’ or ‘how’ of learning were going to be any more meaningful; if there is no real goal, it’s unlikely there will be meaningful practice; expediently checking off the boxes becomes the priority, taking precedence over deep learning.

Three Easy Pieces

When it comes down to it, providing the opportunity for real learning is quite simple, at least in principle:

Remember the Why
Context matters.  ‘Why’ you are learning something drives everything else from motivation (it’s your ‘elevator pitch’), to sense-making, to methods.

Front End Preparation
Good learning requires sufficient front-end preparation so that students have worthwhile opportunities to practice and learn.  This is how they make understanding their own.  Lack of advance preparation leads to a lot more box checking and micromanagement.  Yes, it’s more work at the beginning, but if the goal is learning, not just ticking things off the list…

Rigor Needs to Be Real, Not Just Window Dressing

Rigor for rigor’s sake can lose track of the ‘why’ and become another form of “box-checking”.  When you know the ‘why’ you have the chance for meaningful teaching based on deep knowledge.

Each time I’ve read something about “information overload” and how we need better external filters, it’s left me with a vague sense that we are overlooking the obvious.  And it really is obvious, once you stop and think about it:

Human minds are born to filter.

More than that, they are born to build very sophisticated, constantly evolving filters.  You know this intuitively; if you’ve ever taken a walk with a one-year-old child, they will stop and see every detail – every variation in grasses, or tree leaves; they stop to evaluate every sound, to admire every insect.  You don’t do this; your brain has learned to filter.

There have been a number of studies on infant language development.  A recent study looks at the filtering strategies employed by 18-24 month olds as they work to distinguish and learn individual words from the ambient noise of conversation.
A study back in the 1990s looked at how very young children (around age 2) had already learned to filter the normal variations of pronunciations within their native language, but would respond to extremely subtle variations of pronunciation in sounds that were not present in their native language.

Those are some pretty complex filters developing in very young minds.  And our filtering abilities grow and develop throughout our lives; we filter staggering amounts of information every day.  Don’t believe me?  Step outside and turn off your filters.  Try to catch how many ambient sounds, scents and visual details your brain has routinely learned to dismiss because they do not require action – they are background noise, safe, uninteresting.  Thousands of inputs filtered out every second.

You really notice the amount of daily filtering you do if you move to a new environment.  Your brain doesn’t know what sounds or smells it can safely ignore, nor what normal weather patterns look like, nor which insects it can allow you to simply overlook, nor what social cues are relevant.  So your senses are bombarded with a much higher level of detail; it can be overwhelming (and exhausting).   But over time your brain builds new filters for the new environment.

To some extent we are starting to see this evolution in our increasingly information-saturated 2.0 world.  On first exposure our mental filters are as overwhelmed as they would be if one moved from Duluth to Mumbai.  Over time our filtering ability evolves; we do, after all, have minds capable of handling the complex and unceasing inputs of the natural world.  But as we develop our media filters, we do have to take care that we are filtering well.**

**e.g.  Simon Bostock discussed an important article, Six Views of Embodied Cognition, which, among other things, looks at the cognitive strategies (often short-cuts) we use when time-pressured.  If those pressure-driven strategies are employed consistently over time, it seems possible they may lead to over-filtering, so that it becomes habitual to merely skim the online information stream, instead of selectively reading certain items with the same level of attention one would give to reading a good book.

Company Policy

August 5, 2010

I was asked, recently, about the advisability of including the Blog function in a Sharepoint implementation.  The answer to that lies in a question, the same primary question that needs to be asked if your business is looking at Twitter, Yammer, an in-house wiki, or a host of other Social Media tools.  The question is a simple one:

Is it your company policy to hire stupid people?

I’m guessing the answer to that question is “no”; that your HR policy is to hire talented, capable, highly motivated professionals who want to excel in their careers.  Assuming that is the case, then there are some other questions to consider:

Do you want to leverage the talents of your workforce to achieve the greatest business results?

Do you want employees to have access to the best in-house knowledge to support their performance?

Do you want to increase efficiency and productivity?

In this case, I’m guessing the answer is “yes”.

So, if you’ve hired intelligent, motivated adult professionals, maybe you need to let them be just that.  Given the opportunity, it is likely a good portion of them will have expertise and insight that they want to share.  And if that expertise is shared on an in-house blog or wiki, then that means the next time someone needs input or advice they’ll be able to track down the experts in the business instead of taking Hobson’s Choice, merely asking the person at the water cooler or in the office down the hall.  And as questions get asked, it’s good odds that more and more of the most needed information will end up on your blog or wiki so that the experts only have to put it out there once, not in twenty separate conversations.  More efficient for the information seekers; more efficient for the information sources.

Now, of course it is not that simple.  It’s easy to fritter away time on blog posts, micro-blogging or wikis.  It’s easy to spend too much time with social media and not enough time on projects.  But you and the rest of your organization face this already, with phone calls, email, the internet, impromptu conversations in the hallway….  Wasting time is a product of people and the company culture, not of tools.

In the same way, success of Social Media tools will also hinge on your business culture.  If you have a culture of information hoarding, or of viewing “failure” as worse than inaction, then the best tools in the world will not be effective in leveraging the knowledge and talents of your employees.   Because, as was said before, your employees aren’t stupid.  They’ll contribute and innovate in direct proportion to what your corporate culture really values.

The Forgotten Filter

April 19, 2010

I heard it again today – a colleague protested: “There’s too much information!”

Call it what you will:  “information overload”, “drinking from a fire hose”…  however you phrase it the complaint is universal.  And so are ideas for how to manage the deluge.   One of the key tools is filters, and by filters, I don’t mean the ones in your email inbox, but the incredibly perceptive, flexible filter of the human mind.

Experience and awareness of goals and needs will go a long way toward effective filtering when you try to decide which of the 200 links coming in on your twitter feed are actually worth clicking on, let alone which ones are worth reading in detail. But a filtration system on one end of the information stream will only be able to do so much.

Think about the water system – you may have a filter on your tap at home, but that filter is designed with the assumption that there are some really substantial filters “upstream” at the water company.  That upstream filter is critical; it can remove a lot of materials that would otherwise quickly overwhelm the downstream filter at your kitchen sink.

A key to reducing the information overload for our employees or clients is to work on building our own “upstream filters”.  This isn’t a new idea.  Years ago, in the early days of listservs, high membership lists would periodically send the users a reminder: “Before you post, ask yourself:  do 500 other people need to read this?”  That’s still a valid question, and businesses using social media internally or externally can benefit from applying it.  Sometimes that email, post, or tweet is relevant or will build relationships, but other times, it probably really only needs to go to a few people, not to a whole list of followers.

There are fantastic opportunities for serendipity in the unexpected things that are buried in the information streams;  some good upstream filtering would do a lot to improve the signal to noise ratio and make those hidden treasures easier to find.