The Relativity of Wrong, or How I Evaluate Technology
This well-written piece by The New York Times’ Nick Bilton, “Disruption: When Sharing on Facebook Comes at a Cost,” came across my feed this morning. Bilton does an excellent job dissecting the problem of trying to build a business model using Facebook. Entrepreneurs of all levels should read it.
However, Bilton’s analysis isn’t a surprise. In fact, this is the inevitable architecture of Facebook, which has always been built upon the idea of private networks, limited interaction, and hidden messages.
I. Technology As Distrust
One of the reasons I decided to (re)-pursue my teaching career after 12 years as a journalist was my realization that the industry wasn’t solving its most basic problem: understanding technology.
When I started my career in 1995 as a news aide at Cincinnati CityBeat, a local weekly, my then-boss refused to allow me to telnet into the Cincinnati Public Library to do our daily research. Instead of allowing me to access and print the files she needed (while she sat across from me), she made me trudge a few blocks down the road so that I could physically sit in the building.
When I asked why, she said she didn’t trust that what I was getting was the same as what was in the library.
In many ways, I would have that exact conversation in newsrooms throughout the next decade. As new technologies emerged that were built on the increasingly user-friendly Web platform, the possibilities for their use seemed both endless and untrustworthy.
The platform was too good. The possibilities too great. Surely, the reasoning went, nobody could understand how to use them, and so the new tools were relegated to a pile in the corner, one that future generations could sort through.
II. What You Don’t Actually Know
We live in an age when access to data is everywhere. Looking around my office, I have:
- a MacBook Pro that runs Windows and Apple’s OS
- a first-generation Motorola Xoom tablet with keyboard
- an iPad 3 with keyboard
- a Samsung Galaxy 2 smartphone
Wherever I happen to be throughout the day, I am just a swipe or touch away from accessing data points that can help me make a better decision. This, of course, was the dream of J.C.R. Licklider, the grandfather of our “Intergalactic Computing Network,” in “Man-Computer Symbiosis,” the paper that outlines how this network of ours should operate.
His main point: Humans are very good at creative problem solving, and computers are very good at parsing through big data sets. We should have devices (e.g. computers, mobile devices) that parse through a real-time network to deliver us the exact bits of data we need to make better decisions.
Of course, this access to data doesn’t make us smarter, and Licklider was clear about that. We still needed to apply our creative problem solving skills.
The unintended consequence of this network, though, is that our ability to interact with this networked technology doesn’t give us any understanding of its underlying capabilities and uses, any more than driving a car every day helps you understand how it works. Yet our minds have evolved to think in exactly that way. The Illusion of Knowledge causes our brains to overestimate our understanding of how something works the more we use it.
Now take these two ideas together:
- that we are now exposed to more data than ever before, which when used properly allows us to make better decisions; and
- that exposure to the use of something leads our minds to think we understand the underlying way in which a thing works.
You can begin to see the problem with understanding technology: the more people use a complex tool (the Web + Internet), the more they think they know how it works.
This leads to the Illusion of Potential, an idea that suggests humans overestimate their ability to know – or do – a thing.
What that leads us to is a problem Isaac Asimov called “The Relativity of Wrong.” The principle is this:
There are experts in a field, and there are amateurs. In every instance, the expert’s ideas are more likely to be right than an amateur’s. This doesn’t mean the expert is always right, but it does mean that with expertise comes a greater likelihood of being close to the answer even when you are wrong.
The problem with technology, as I hope I have demonstrated, is that basic cognitive functions have led everyone to believe they are an expert.
III. What I Forgot
Last week, I gave a presentation to a group of Afghan professors who have spent the last several months at Ball State. Our department has worked with them on various projects, and I was asked to have a discussion about social media and its impact on journalism.
The catch, though, was that the school where they teach isn’t equipped with much technology, so my talk focused on teaching social media without technology.
While I’m loath to call myself an expert in this area, I do know that my experience and education have helped me understand how social communication technologies work, and my classes spend several weeks considering technology before we do anything with modern tools.
As I worked through my presentation, I realized something: While my classes originally spent a great deal of time on the consideration of technology, they have evolved to the point where they now leap almost directly into the use of technology.
I flipped through my syllabi and read my project documents, and realized somewhere along the way I’d stopped preparing my students for long-term careers in technology and replaced that with vocational training on modern tools.
I was preparing them just enough to become the type of people who struggle to understand new concepts in technology.
I had eschewed the framework for evaluating technology.
IV. The Framework
I created a document called “Thoughts on Social Media, Networks, and How They Work” to provide a theoretical framework for evaluating any network.
I developed my framework through lots of reading, lots of trial-and-error, and lots of analysis (although I haven’t tested this scientifically so it’s more of a heuristic framework). That framework says:
All social networks do five things. They provide the ability to search, archive, and retrieve information from a real-time network, and to publish that information back to a selected audience.
As I evaluate any technology, I run a 3-part test on those 5 elements.
Part 1: Understanding:
The first analysis I do on any new technology is to examine how it allows me to search for content, archive and store that content, and retrieve that content in the way I want. Next, I analyze how effective its real-time network is and how it lets me publish back to a selected audience within that network.
For instance: Facebook does a very poor job of allowing me to search, archive, and retrieve information. It does an excellent job with the real-time network and the selective audience (with things such as private groups).
My basic analysis of Facebook has always been that it’s great for forming small, tight-knit social relationships built around affinity and friendship, but horrible for spreading information for the same reason. (We’ll talk about Weak Ties in a bit.)
Hence my insistence for years that Facebook is a terrible platform through which to promote and build a business.
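To make that five-element test concrete, here is a minimal scoring sketch in Python. The class, field names, and 0–5 scores are my own illustrative assumptions, not part of the original framework; the Facebook scores simply restate the essay’s judgment above.

```python
from dataclasses import dataclass

@dataclass
class NetworkProfile:
    """Rate each of the five capabilities from 0 (poor) to 5 (excellent)."""
    name: str
    search: int
    archive: int
    retrieve: int
    realtime: int
    selective_publish: int

    def summary(self) -> str:
        # Average the information-handling scores vs. the conversation scores.
        info_handling = (self.search + self.archive + self.retrieve) / 3
        conversation = (self.realtime + self.selective_publish) / 2
        if info_handling >= conversation:
            return f"{self.name}: better at storing and finding information"
        return f"{self.name}: better at real-time, audience-specific conversation"

# Hypothetical scores reflecting the essay's judgment, not measured data.
facebook = NetworkProfile("Facebook", search=1, archive=1, retrieve=1,
                          realtime=5, selective_publish=5)
print(facebook.summary())
```

Swapping in different scores for another network would tilt the summary the other way; the point of the sketch is only that the five elements can be weighed separately.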
Part 2: Analysis: There are three ways I try to understand a network.
- The Network Effect is the most obvious. I try to understand how large a network is, how interconnected that network is, and how well the network eliminates negative effects.
- The Strength of Weak Ties examines how easy it is for users to form fluid networks, and how much those networks are based upon affinities (weak ties) rather than friendships (strong ties). Weak Tie networks (e.g. Twitter) spread information more quickly and more globally than Strong Tie networks.
- Bartle’s Taxonomy is a less obvious one. Richard Bartle, the father of Multi-User Dungeons (MUDs), outlined 4 types of players within virtual games: Achievers, Explorers, Socializers, and Killers.
On the surface, Bartle’s taxonomy might seem a strange way to evaluate a social network, but I’ve found that using this simple game-mechanics framework helps give a sense of how robust the interactions might be.
In this framework:
- Achievers are people who want to win things. They derive motivation from leaderboards, badges, and public displays. (Twitter does this with its SPONSORED LISTS. Are you good enough to be on one? Facebook doesn’t really have a good function for this.)
- Explorers are people who take things apart and help others. In one way, this is the App environment: How open is the architecture? How much are people exploring new ways to use it? (Twitter was once robust here, but now it isn’t.)
- Socializers are people who enjoy chatting, collecting groups of people, organizing information, and spreading it. (Twitter is much better at this, at least in real time, than Facebook.)
- Killers are people who try to break the system. (Facebook’s real name policy prevents a great deal of spam; Twitter’s system is at times riddled with it.)
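One way this taxonomy might be applied to a network is to tally observed interactions by the player type they most resemble, and see which dominates. The interaction names, the mapping, and the sample data below are hypothetical illustrations, not a real dataset or API:

```python
from collections import Counter

# Map an interaction kind to the Bartle type it most resembles.
# These category names are invented for the sake of the sketch.
BARTLE_MAP = {
    "earn_badge": "Achiever",
    "top_leaderboard": "Achiever",
    "build_app": "Explorer",
    "find_new_use": "Explorer",
    "chat": "Socializer",
    "form_group": "Socializer",
    "spam": "Killer",
    "exploit": "Killer",
}

def dominant_type(interactions):
    """Return the Bartle type seen most often in a list of interactions."""
    tally = Counter(BARTLE_MAP[i] for i in interactions if i in BARTLE_MAP)
    bartle_type, _count = tally.most_common(1)[0]
    return bartle_type

sample = ["chat", "chat", "form_group", "spam", "build_app"]
print(dominant_type(sample))  # Socializer
```

A network whose tallies skew toward Killers, for instance, would be flagged as spam-prone; one that skews toward Socializers would suit real-time conversation.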
Part 3: Network Use: This is a framework I culled together from Larry Lessig’s books Code, The Future of Ideas, and Remix. Lessig spent a decade deconstructing the Internet, networks, and sharing, and in various places he has explained what makes a vibrant community. In some places he has used these phrases exactly (although exactly where escapes me now). What I do know is this: Lessig and Howard Rheingold were the drivers of this thinking:
There are 4 components that every community and network must have:
- Good content to attract a crowd;
- Keep the Commons simple;
- Simple interface for the Commons;
- Decentralized control and access
There are 4 systems of rules that must be imposed for any network to work:
- No free riders;
- Rules compliance and sanctions;
- Encourage commitment and reward participants;
- Allow for organic growth and change
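These two lists can be treated as a simple audit checklist. The sketch below is my own paraphrase of the eight items; the criterion names and the boolean-dictionary shape are illustrative assumptions, not anything Lessig or Rheingold prescribe:

```python
# The four community components and four rule systems, paraphrased as keys.
COMPONENTS = ["good_content", "simple_commons", "simple_interface",
              "decentralized_control"]
RULES = ["no_free_riders", "compliance_and_sanctions",
         "commitment_and_rewards", "organic_growth"]

def audit(network: dict) -> str:
    """network maps each criterion to True/False; report what's missing."""
    missing = [c for c in COMPONENTS + RULES if not network.get(c, False)]
    if not missing:
        return "healthy: all 8 criteria met"
    return "missing: " + ", ".join(missing)

# A hypothetical network that meets every criterion except free-rider control.
example = {c: True for c in COMPONENTS + RULES}
example["no_free_riders"] = False
print(audit(example))  # missing: no_free_riders
```

The output is deliberately coarse; as with the rest of the framework, this is a heuristic for spotting gaps, not a scientific measurement.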
This combines the ideas of basic human-computer interaction (HCI) and content analysis into easily understandable heuristics that let you judge the overall usefulness and health of a network.
This 3-part analysis of the social communication networks helps me figure out what the network is best suited to do. For example:
Facebook (in my analysis) is great for small, private, real-time conversations, while Twitter is good for large, public conversations that can be searched and archived (in the short term) to share and understand cultural moments.
Building a business on Facebook makes little sense, just as long-term strategizing (e.g. long-term customer service) on Twitter is a bad idea.
V. Back to Bilton
All of which brings us back to Bilton’s piece in The New York Times.
As I said at the start, I think it’s well written and logically sound. What’s astounding to me is that in 2013 we’re still surprised and caught off guard by the nature of the social networks that surround us.
This isn’t a knock on Bilton, the Times, or anyone in particular. As I noted, I realized my own classes had turned away from the more mundane ideas of how networks worked and succumbed to the more student-friendly ideas of building a network.
This is a reminder, though, that expertise in parsing these networks exists, and it doesn’t come from just prolonged use of those networks. It’s a reminder that a casual knowledge of a thing doesn’t make you an expert at understanding and using that thing, which means all those youngsters whom you believe get this technology in all likelihood really don’t. And it’s a reminder that there are people who can understand these networks for what they do before we use them, and that their analysis is different from your opinion.
Certainly the expert analysis might be wrong but, as Asimov said, that wrongness will be relative. In a fast-paced world where technologies emerge and fade in the blink of an eye, it’s more important than ever to fight the Illusions of Knowledge and Potential, and dig into the science of the network.