Rohan Jayasekera's thoughts on the evolving use of computers -- and the resulting effects

Occasional thoughts by Rohan Jayasekera of Toronto, Canada.


I've been online since 1971 and I like to smooth the way for everyone else. Among other things, I co-founded Sympatico, the world's first easy-to-use Internet service (and Canada's largest).


Sunday, December 24, 2006

Exceptions to openness

When the computer industry began, everything was “proprietary”, meaning owned by one company, e.g. the architecture of an IBM computer was different from that of a Univac. So if you wrote a program for an IBM machine you couldn’t run it on a Univac. (In fact, you couldn’t run it on any other model of IBM computer either!) But now, if you have a C++ or Java program it’s usually not that hard to “port” it elsewhere (the adjective “portable” having been turned into the verb “to port”). And if you obtain an MP3 audio file anywhere you can play it on any music player, because they all support MP3 format. “Open architectures” have taken hold. On the hardware front, when IBM introduced its PC it published specifications so that anyone could build cards that plugged into it to provide video, audio, you name it.

But if you buy a song from Apple’s iTunes music store, the only portable music player you can play it on is an iPod, because it’s in a proprietary format called “protected AAC” that is designed to prevent you from playing it on a competitor’s player. And if you have software written to be run under Microsoft Windows, it won’t run elsewhere. No openness here, yet iTunes and Windows are market leaders. Why?

Openness is great, but it’s messy. Consensus needs to be built, which takes time. And because there are always multiple standards organizations working in any area, there are often standards that are similar but not identical (e.g. U.S. vs. Europe). As Prof. Andrew S. Tanenbaum is often quoted as saying, “the nice thing about standards is that there are so many to choose from”.

There is an effective alternative to open standards: a single person or company can set the standard instead, and act as a benevolent dictator. Microsoft defines Windows, and if a decision is needed about something, Microsoft will make it — and it won’t take years. On the music front, because Apple controls everything in the iTunes/iPod world, it’s been able to make deals that nobody else could.

Many years ago I observed that in any one area there seemed to be, over time, room for one, and only one, proprietary architecture to have large market share.

Here’s why I think that is. The problem with dictatorships is that you can’t have more than one in an area, or they spend all their time warring against each other, all losing strength, while the “open” states remain prosperous. But if one dictator defeats the others, or if one never allowed any others to gain strength in the first place, it is possible for that dictator to compete with its “open” competitors as long as its offering is good enough that customers will tolerate being locked in to one supplier.

Comments welcome.

Saturday, December 23, 2006

Music piracy: no defence

A lot of forward-thinking writers have said that music piracy has been good for the music industry because it showed that there was a desire for downloading music, and that this is why we now have things like iTunes and Rhapsody.

I don’t agree. If there hadn’t been illegal sharing, the notoriously self-serving record companies would have continued to insist that we buy their music on physical media as usual, while independent musicians would have pioneered Internet distribution and new labels would have appeared to sell electronically. Then the old record companies would have slowly given up market share to new players interested in cashing in on the new method of distribution. And there wouldn’t be any of that abominable DRM.

Wednesday, December 20, 2006

Fostering a startup culture

I had lunch today with fellow TorCampers Matthew Burpee, Martin Cleaver, and Jonas Brandon. One topic that came up was the relative lack of a tech startup culture in Toronto. I believe that one cause is the lack of a sizable established network of people sharing such a culture. In Kitchener-Waterloo, 100 km west of Toronto, lots of potential startup people know each other, directly or indirectly, via the University of Waterloo. And in Kanata (a suburb of Ottawa) so many people are connected via Mitel and its descendant companies such as Corel and Newbridge (the latter now part of Alcatel).

Fortunately TorCamp has arisen and may grow that culture. To help in that process, after lunch we walked over to Jonas's nearby home and created a single page where TorCampers can profile themselves, complete with photos. It may seem like a small thing, but I think it helps to support the notion that TorCamp is a community of people and not just a series of events. And that such a community provides resources for those who might create a startup.

Saturday, December 16, 2006

Great expectations then; deflated expectations now

On Wednesday I attended the Toronto Venture Group’s monthly breakfast talk. Mark Evans of b5media spoke on the topic “Two Solitudes: The real differences between running an Internet start-up now and during the dot-com boom.” You can see the posts linked to by Mark for summaries, as well as one by Tom Purves, but I’d like to focus on the phrase that Mark emphasized: It’s all about the chairs.

The chairs he refers to are those used by the typical startup during the two periods: thousand-dollar Aeron chairs then, cheap but acceptable ones now. They symbolize the respectively free-spending and frugal ways.

Why the change? Since the tanking of the market for tech stocks starting in 2000, the expectations of riches and the accompanying appetite for risk have been greatly diminished. But that doesn’t fully explain it, not with the skyrocketing price of Google stock and the sale of YouTube for US$1.65 billion.

Part of the answer is that costs are much lower now (for a number of reasons). But that can’t be the whole story, because lower costs imply higher profits, not lower. I think the key difference is that the Web 2.0 startups expect much smaller revenues than the dot-coms did, and have set their expectations of potential wealth accordingly.

An Entrepreneur 2.0 probably wouldn’t mind getting rich by selling the company, but doesn’t see the probability of that as particularly high. So spending is kept low in order to keep the business going, and these people are very smart at doing that. Mark spoke of how at b5media they use Skype to avoid paying significant long-distance telephone charges, and how they work from home to avoid paying for office space — something that they can do effectively because I’m sure they use the Internet to its fullest advantage for online collaboration etc.

In addition to having a frontier spirit, Web 2.0 entrepreneurs are those most capable of using Web 2.0 tools to keep costs minimal, and those most willing to help work out the bugs. Methods they use now that turn out to work effectively will be copied by companies in other industries to cut their own costs. And when costs decrease in any competitive industry, so do prices. Deflation 2.0™.

Online storage

Those of you who have been following this blog know about my opinion that personal computers, as we traditionally use them, are a bad idea. There’s one main reason for this: a PC isn’t the best place to store your data.

I value most of my data. I rely on a lot of it pretty much every day: my to-do list, my grocery list, my “list” of what blogs I subscribe to and which of their posts I’ve read so far, my recent email. Even an email from years ago might unexpectedly turn out to be helpful, say if I got audited on my income tax and needed to support a claim. So how well is all this data protected?

1. My to-do list (together with most of my other data files) gets backed up once a week. I should probably do it more often.

2. My grocery list gets backed up daily, because it primarily lives in my Treo (PalmPilot) with approximately daily synchronization to my PC.

3. My blog data is backed up by people at Google who are paid to do it. If anything goes wrong, they’ll figure out how to fix it.

4. My email gets backed up once a month.

Of those four, my favourite is the one where Google takes care of things for me. (For free!)

The other aspect of valuing my data is that I want to get at it. If I’m away from my PC, that can be difficult. I could enable remote access to it, but then I’d need to leave it on all the time, and if Windows gets into trouble it’s hard to fix remotely (it can be hard enough to fix locally!). If my data is stored on some Internet-connected server, I can get at it anywhere I have Internet access. For blog-reading I use Google Reader, and when I’m sitting on the bus (or wherever) I can pull my Treo out of my pocket and catch up on my blog-reading — Google Reader knows what I have and haven’t yet read, regardless of whether I was using the PC or the Treo. It also helps that, since May, Google Reader has included a user interface specifically for mobile devices with small screens.

So: the data is stored in one place, but accessible from multiple places, with access tailored to the situation (screen size, keyboard availability, bandwidth, whatever). And the “one place” it’s stored is one where professionals will keep it safe.

“Putting all your eggs in one basket isn’t stupid if the basket sits on a cushioned floor in a high-security building in a non-coastal area with no fault lines.”
-Rohan Jayasekera, 2004

Admittedly, I am reliant on the Internet to get access to the data, and that is not 100% assured. So I’m swapping one type of risk for another. But the best of both worlds can be obtained by using either local caching (where my device keeps a copy of data I’ve recently accessed) or synchronization (where data is stored in two places, and any changes to one place are copied to the other when the two places happen to be connected). Then if I have no Internet connection I still have a lot of my data available. Local caching is implemented by Web browsers’ offline mode, and synchronization by, for instance, Omnidrive, which provides what appears to be another local hard drive that just happens to be mirrored to a server over the Internet.
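To make the synchronization idea concrete, here’s a minimal sketch in Python. It’s purely illustrative (it isn’t how Omnidrive or any particular product actually works): a one-way mirror that copies a file to the “server” directory whenever the local copy is newer or the mirror doesn’t have it yet.

```python
import os
import shutil

def sync(local_dir: str, remote_dir: str) -> list[str]:
    """One-way sync: copy files from local_dir to remote_dir
    when the local copy is newer or missing remotely.
    Returns the names of the files that were copied."""
    copied = []
    os.makedirs(remote_dir, exist_ok=True)
    for name in sorted(os.listdir(local_dir)):
        src = os.path.join(local_dir, name)
        dst = os.path.join(remote_dir, name)
        if not os.path.isfile(src):
            continue  # this sketch ignores subdirectories
        # Copy if the mirror is missing the file or has an older version
        if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
            shutil.copy2(src, dst)  # copy2 preserves the modification time
            copied.append(name)
    return copied
```

A real synchronizer would also handle deletions, subdirectories, and conflicts when both copies have changed; this sketch shows only the core “copy whatever is newer” comparison that makes the two places converge when they happen to be connected.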

I expect that most of us will find our data migrating to servers, both because we increasingly use Web-based applications such as Google Reader, which intrinsically use online storage, and because even for our locally-based applications such as Microsoft Word (for those who continue to splurge on such expensive luxuries) we’ll stop being willing to entrust our data to a single local hard drive. It may take a while, but just as buying antivirus programs became mainstream as a defensive measure so will using online storage.

And yes, if and when our Internet connection goes down we’ll curse — but we do already, because that’s where all the fun is. We will come to accept Internet dependency as a fact of life.

Monday, December 04, 2006

The nature and nurture of DemoCamp

Mark Kuznicki has written a great post about the nature of DemoCamp, a type of event pioneered here in Toronto that has attracted discussion about the small percentage of female attendees.

I think he’s right that DemoCamp has some intrinsically male-oriented characteristics that we can’t “fix”.

Another thing has been nagging me about DemoCamp. Often there is a show of hands of how many people are first-timers, and it’s always a lot. But the size of the audience isn’t growing, which means that a lot of people don’t come back. Clearly women aren’t the only ones who find the event unappealing.

Yet there is a core group of people, including women, who go as often as they can. I think that there is a hunger for this kind of gathering, and maybe the demos are just an excuse for a community of interest to get together. Slava Sakhnenko wrote that “Most people don’t show up to DemoCamp to look at the demos”.

Having more BarCamp-style events might be nice, but they are big deals to set up and run, and as I understand it the reason DemoCamp was invented was to have more frequent events. DemoCamp has been wildly successful in achieving that goal. What other events could we have that would achieve that same goal of frequency, but without the characteristics that many people evidently find unappealing?

It occurred to me that we could just dispense with the “event” and go straight to the after-discussion at the pub, but I think it’s important to have the common starting point of “I have such-and-such comment about demo X”. It gets the conversations started, and it allows all present to have something to say without having to be “experts”. TorontoWikiTuesdays are purely pub gatherings, and as much as I enjoy them I think they suffer from the lack of that common starting point.

Finding a replacement for the demos might be difficult. They have a bunch of highly desirable qualities: they’re usually about innovations, they’re usually about projects that the presenters are personally involved in, the “no PowerPoint” rule does a lot to avoid putting the audience to sleep, the “presentation” nature allows a large number of people to attend, and the lack of BarCamp’s “participation mandatory” rule allows the attendance of those who aren’t hard-core (but who might become hard-core later). Perhaps it will turn out that DemoCamp is the “worst except for all the alternatives”. I’m very keen to hear any suggestions for alternatives to DemoCamp that could also be run monthly and would also accommodate, and welcome, such a large number of people.

Saturday, December 02, 2006

Deflation 2.0 ™

Back in April I wrote about how Web 2.0 would contribute to deflation and its consequences (such as unemployment). I didn’t call this Deflation 2.0 ™, but since Google shows no matches for that term I’m coining it now, and trademarking it just like O’Reilly Media would. :-P Licensed use of this trademarked term, Deflation 2.0 ™ (did I mention that it’s trademarked?), is available at a reasonable price (my wife and I have expenses, you know, and our five cats plus the local homeless cats we feed collectively eat quite a lot).

Via Gagglescape I just read a Business 2.0 story about how some large corporations have turned to “crowdcasting”, where a large number of people (e.g. 3000 MBA students) compete to address a particular business problem (such as how to attract consumer attention to GE Money). This is work that might ordinarily have gone to big consulting firms. Instead it’s going to a few people at the crowdcasting company and those who win the competition. All the losers work for free — but their work is necessary because nobody knows in advance who will have the best ideas.

Crowdcasting is similar to crowdsourcing. Canada’s Cambrian House says of their particular crowdsourcing model “it’s like open source, but with money!” The thing is, you only get the money if your idea is chosen. (At least at Cambrian House there is an alternate way to make money: help build one of the chosen products. Then you too get a share of the resulting royalties. If there are any, of course, but that’s business.)

So, just as professional journalists have their turf invaded by amateur bloggers, professional consultants are now being encroached on by MBA students. It’s like hiring an intern, but without having your expensive office space occupied or having to buy more coffee.

Where does Web 2.0 come into this? It makes it feasible to manage the contributions of all those people. Rummaging through 3000 paper submissions or emails is not practical. The Business 2.0 story doesn’t go into how this is done, but this sentence does touch on it: “[Crowdcasting company] Idea Crossing also earns consulting and licensing fees from companies that use its Web platform to manage their own idea [competitions]”.

Why pay a bunch of people when you can have even more people work for you and pay even fewer of them? More Deflation 2.0 ™.