June 4th, 2008
Why is Google making more money every day while newspapers are making less? I’m going to pick on The Washington Post again only because it’s my local paper and this is a local example.
There were severe storms in the Washington area today, and the power went out in our Reston office. I wanted to find some information about the status of power outages to see whether we should go into the office tomorrow. Here’s what I found on the homepage of WashingtonPost.com:
This is the WASHINGTON Post, right? So where’s the news about Washington? We just got pounded by a nasty storm — but it’s not homepage worthy.
Fortunately, although it’s not top of mind for the homepage editors, it is top of mind for readers — I found the article about the storm in the list of most viewed articles in the far corner of the homepage. I went to the article, where I found highly useful information like this:
“We have a ton of trees down, a ton of traffic lights out,” said Loudoun County Sheriff’s Office spokesman Kraig Troxell.
Great, that’s very helpful.
So what’s my next step, when I can’t find what I want on the web? Of course:
Thanks, Google, just what I was looking for:
Wow, I thought — it can’t be that bad, can it? So I went back to the WashingtonPost.com homepage. This time, I clicked on the Metro section in the main navigation. Sure enough, the storm was the lead story.
And there at the top was the link to the same useless article. But then below the photo was this tiny link: Capital Weather Gang Blog: Storm Updates
I clicked on the link, and wow:
Real-time radar, frequent storm warning updates with LINKS, and… a link to that page I had been SEARCHING for on Dominion Power about outages. (Note the link to the useless news story buried at the bottom.)
It was a brilliant web-native news and information effort — BURIED three layers deep, where I couldn’t FIND it.
Is it any wonder why Google makes $20 billion on search?
And what’s the root cause? The useless article with no real-time data and no links was written for the PRINT newspaper. And the homepage is edited to match what will be important in the PRINT newspaper. And the navigation assumes I think like I do when I’m reading the PRINT newspaper. Want local news? Go to the metro SECTION.
The Capital Weather Gang blog is a great example of “getting” the web — but then making it impossible to find…
Oh, and if you click on the tiny Weather link on the homepage (which I only noticed on my fourth visit), you get a page that looks like the weather page in, you guessed it, the print newspaper — all STATIC.
Again, it takes another click to get to the dynamic, web-native weather blog.
Yesterday, I saw a ranking of the top 25 “newspaper websites” — and that’s exactly the problem, isn’t it? These are newsPAPER websites, instead of WEBsites.
WashingtonPost.com ranks #5, with this comment:
The figures from the WPO 10-Q indicate that revenue for the company’s online business is relatively small and represents only a modest part of the sales for the newspaper group. That is unfortunate. If any company should be right behind The New York Times in internet revenue it is the Post.
So much potential, like the hugely innovative weather blog, crushed by the weight of tradition. And it’s not just the Post, of course (not to unfairly pick on them) — it’s every print publisher boxed in by the legacy business.
Here’s an idea for newspaper website homepages — just a search box and a list of blogs. Seriously. Instead of putting all the web-native content and publishing in the blog ghetto, like NYTimes.com does, why not make that the WHOLE site? (I mean seriously, having a blog section on the website is like having a section in the paper for 14 column inch stories.)
It’s as if newspapers on the web are saying: here’s all the static stuff we produced for the paper — you want all of our dynamic web innovation? Oh, that’s downstairs, in the back room. Knock twice before you enter.
It’s a shame — so much marginalized value.
I bet I could stop going to the New York Times site entirely and just subscribe to all of their blog RSS feeds, and still get all the news, but in a web-native format, with data and LINKS.
Of course, the only way to do that is to click on 50 RSS buttons one at a time. And they only publish partial feeds.
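Mechanically, pulling all of those blog feeds into one newest-first stream is simple. Here’s a minimal sketch using only the Python standard library — the feed XML below is invented sample data standing in for the real feed URLs, which would each be fetched over HTTP:

```python
from email.utils import parsedate_to_datetime
from xml.etree import ElementTree

def parse_rss_items(xml_text):
    """Extract (published, title, link) tuples from an RSS 2.0 document."""
    root = ElementTree.fromstring(xml_text)
    return [
        (parsedate_to_datetime(item.findtext("pubDate")),
         item.findtext("title"),
         item.findtext("link"))
        for item in root.iter("item")
    ]

def merge_feeds(feed_documents):
    """Merge items from many feeds into one newest-first river of news."""
    merged = []
    for xml_text in feed_documents:
        merged.extend(parse_rss_items(xml_text))
    return sorted(merged, reverse=True)

# In practice each document would be fetched with urllib.request.urlopen()
# from the ~50 feed URLs; these inline samples stand in for them.
METRO_FEED = """<rss><channel><item>
<title>Storm knocks out power across region</title>
<link>http://example.com/storm</link>
<pubDate>Wed, 04 Jun 2008 18:30:00 -0400</pubDate>
</item></channel></rss>"""

WEATHER_FEED = """<rss><channel><item>
<title>Radar update: second line of storms approaching</title>
<link>http://example.com/radar</link>
<pubDate>Wed, 04 Jun 2008 19:10:00 -0400</pubDate>
</item></channel></rss>"""
```

The catch, of course, is the partial feeds: the aggregation works, but you’d still only get the first paragraph of each post.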
Mark Potts had a similar frustration with the storm coverage — and it looks like he never even found the weather blog.
Another big missed opportunity — the Dominion electric site can’t tell me specifically whether the power is still out in our office in Reston. But I bet Washington Post readers with offices in that area — or even in our office condo — could help me out, if someone gave them a place to do so. The Post weather blog has a ton of comments, but the information is haphazard — how about a structured data form where you can post your power outage status, maybe mapped on Google Maps?
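The structured form idea could be as simple as this sketch — the field names are invented, and a real version would need location data granular enough to place a pin on a map:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class OutageReport:
    """One reader-submitted status report (hypothetical schema)."""
    zip_code: str        # where the reader is reporting from
    has_power: bool      # current status at that location
    note: str = ""       # optional free-text detail, e.g. "trees down"

def outages_by_zip(reports):
    """Tally reports so each ZIP code can become a map marker
    sized by how many readers say the power is still out there."""
    counts = Counter()
    for report in reports:
        if not report.has_power:
            counts[report.zip_code] += 1
    return dict(counts)
```

The point is the structure: unlike a pile of blog comments, data like this can be counted, mapped, and queried ("is 20190 still dark?").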
Lastly, at least Google knows how to make the Post’s weather blog findable:
Jonathan Krim, the local editor from WashingtonPost.com, offers an important clarification:
As the editor for local coverage, I appreciate the comments on our coverage yesterday. But I am compelled to point out:
The page Scott uses for his example is not our home page for local users. We have one for our very large non-local audience, which is what you display in your blog post. You can change your settings, making the Washington home page your default, by clicking at the very top of the page. Had you looked at our local home page, you would have had a different experience, with very prominent display links to our capital weather gang coverage.
Thanks for the comment. I had already heard that others who were logged in had a different experience. Perhaps the lesson, then, is about assumptions around user registration and login. I’m a dedicated reader of WashingtonPost.com, but I never log in. It may be necessary to supplement the customization for logged-in users with geo-targeting based on IP address, which isn’t perfect, but it might have worked for me yesterday.
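That fallback amounts to a simple decision rule, sketched below. The IP range is a documentation placeholder, not a real Washington-area allocation, and a production system would consult a geolocation database rather than a hand-built table:

```python
import ipaddress

# Placeholder network, NOT real Washington-area IP space.
WASHINGTON_AREA_NETS = [ipaddress.ip_network("203.0.113.0/24")]

def choose_edition(login_pref=None, client_ip=None):
    """Pick a homepage edition: explicit login preference wins,
    then a coarse IP-to-region guess, then the national default."""
    if login_pref:                      # registered user picked an edition
        return login_pref
    if client_ip:                       # fall back to geo-targeting by IP
        addr = ipaddress.ip_address(client_ip)
        if any(addr in net for net in WASHINGTON_AREA_NETS):
            return "washington"
    return "national"                   # imperfect guesses default safely
```

The ordering matters: an explicit preference should always beat the IP guess, and an unrecognized IP should fall through to the national page rather than a wrong local one.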
I also think you should integrate the Capital Weather Gang blog into the main weather page, instead of requiring another click to get to it.
I think the main lesson is the tremendous pressure that Google puts on every site to make the user experience perfect. You had the data and coverage I wanted. You had the customization for local users. But somehow I still missed it and went to Google instead.
Several people have commented that my not knowing about the Post’s local customization for logged-in users, either from the Post directly or through another source, means I didn’t have all the facts. In one sense, that’s true, but the example here is not about WashingtonPost.com as an object in a vacuum with a certain feature set, or about what WashingtonPost.com thinks about how its site works, but about MY EXPERIENCE using the site. My experience was lacking, and therefore I concluded that it would be lacking for other users like me. Some people might have clicked on the Weather link, or gone straight to the Metro section, or been logged in. But my experience shows that this is not true for all users.
And the point of this post is not about the extent of WashingtonPost.com’s shortcomings, which may not be that significant, i.e. they are easily corrected, but about the demands of the web as dictated by the existence of Google. Google is obsessed with not letting any users fall through the cracks. Despite having customization for local users and the right content, I still fell through the cracks as a user of WashingtonPost.com. And that is the key fact of this post.
That’s the brutal reality of the web that we all live by. We can have all these features and content and design and intent, but the user experience is the only arbiter. Google understands this better than newspapers. If newspapers understood it better, their sites would get better, which would create more economic value for them on the web.