
Google Earth

Google Earth must be one of the coolest applications I’ve ever seen! Loving to travel, and with a vast interest in seeing the world, I found it a real eye-opener.

And just think about the implications! I really wonder where all this will end!


A tip: hold down the left mouse button to drag the map around, and the right mouse button while dragging up or down to zoom in and out.


PS. Thanks to Faruk for bringing this to my attention. DS.

XHTML and error handling

What I want to touch on in this post is how errors are handled when XHTML is served the way it should be. For the sake of argument, let’s say that we want to write and deliver XHTML (without turning this into a discussion about whether we should write HTML or XHTML).

First, some general background on how documents are sent to the requesting web browser. It’s all about the media type, as described in XHTML Media Types:

  • HTML: should be sent with the text/html MIME type.
  • XHTML 1.0: all flavors of XHTML 1.0 (strict, transitional and frameset) should be sent with the application/xhtml+xml MIME type, but may be sent as text/html when the document conforms to Appendix C of the XHTML specification.
  • XHTML 1.1: should be sent with the application/xhtml+xml MIME type; it should not be sent with the text/html MIME type.

So, what’s the difference? Web pages sent as text/html are interpreted as HTML, while those sent as application/xhtml+xml are treated as a form of XML. However, none of this applies to IE, which doesn’t even understand the application/xhtml+xml MIME type to begin with; it tries to download the document as a file instead. So, no application/xhtml+xml for IE.
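
To illustrate, here is a minimal content-negotiation sketch (a hypothetical Node.js handler, names are mine, not anything from the original setup): it sends application/xhtml+xml only to browsers whose Accept header claims support for it, and falls back to text/html for the likes of IE:

// Pick the MIME type based on the Accept header, so browsers that
// don't accept application/xhtml+xml (like IE) get text/html instead.
const http = require("http");

http.createServer(function (request, response) {
    const accept = request.headers.accept || "";
    const mimeType = accept.indexOf("application/xhtml+xml") !== -1
        ? "application/xhtml+xml" // parsed as XML, draconian error handling
        : "text/html";            // parsed as forgiving HTML
    response.writeHead(200, { "Content-Type": mimeType + "; charset=utf-8" });
    response.end('<html xmlns="http://www.w3.org/1999/xhtml">...</html>');
}).listen(8080);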

Aside from IE‘s lack of support for it, and the considerations Mark Pilgrim describes in his article The Road to XHTML 2.0: MIME Types, this means that when a web page sent as application/xhtml+xml contains a well-formedness error, the page won’t render at all.

When such an error occurs, the only thing displayed will be an error message. This is usually referred to as draconian error handling, and its history is told in The history of draconian error handling in XML.

My thoughts about this started partly with seeing many web developers write XHTML 1.1 web pages and then send them as text/html, using XHTML 1.1 only because it was the latest thing, not for any feature it actually offers (this also goes for some CMS companies that ship invalid XHTML 1.1 sent as text/html as the default in the page templates their customers build on). Sigh.

It was also partly inspired by an e-mail I got a couple of months ago, when Anne was kind enough to bring an error on my web site to my attention, under the hilarious subject line:

dude, someone fucked up your XHTML

What had happened was that Faruk Ates had entered a comment on one of my posts and his XHTML had been mangled along the way (probably through some misinterpretation by my WordPress system), which broke the well-formedness of the page so that my web site didn’t render at all.

Because of that, and thinking of major public web sites, I really wonder if that’s the optimal way to handle an error. Something as small as an unencoded ampersand (& instead of &amp;) in a link’s href attribute will make the page ill-formed, and thus not render. Given the low quality of the CMSs out there and the terrible output of many WYSIWYG editors, the “risk” (read: chance) of the code being valid and well-formed is smaller than that of the code being incorrect. Many, many web sites out there don’t deliver well-formed code.
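
You can see the draconian behavior in miniature by running markup through an XML parser; a small sketch, assuming a browser that provides DOMParser (the markup strings are just examples):

// Parse a string the way an application/xhtml+xml page would be parsed.
// Browsers report XML parse failures by generating a parsererror element.
function isWellFormed(source) {
    var doc = new DOMParser().parseFromString(source, "application/xml");
    return doc.getElementsByTagName("parsererror").length === 0;
}

isWellFormed("<p>Fish &amp; chips</p>"); // true - renders fine
isWellFormed("<p>Fish & chips</p>");     // false - as XHTML, nothing renders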

Personally, I agree with what Ben de Groot writes in his Markup in the Real World post. I prefer the advantages of XHTML when it comes to its syntax and the clarity of what is correct within it. However, Tommy once said to me that if you can’t guarantee valid XHTML, you shouldn’t use it. Generally, I see his point and think he’s right, but to strike the same note as Ben: I can guarantee my part of it, but there will always be factors like third-party content providers, such as ad networks, sub-par tools for the web site’s administrators, and so on. And for the reasons Ben mentions, I’d still go for XHTML.

So, to conclude, I have to ask: do you think XHTML sent as text/html is ok when it follows Appendix C of the XHTML specification? And do you agree with me that having a web site break and show nothing but an error when something isn’t well-formed is bad business practice?

Rise, Lord JavaScript

The time has come. JavaScript will rise again from its hidden trenches.

Jeremy Keith recently held his JavaScript presentation The Behaviour Layer at the @media conference in London, and from what I’ve heard and read, the crowd went Oooh and Aaah when he introduced the concept of the DOM and how to write unobtrusive JavaScript.

I reacted in two ways when I heard about his presentation and the crowd reaction:

  1. It’s very good and about time that this is being focused on.
  2. Even though it seems like it was a good presentation, judging from the slides, how come the crowd reacted that way? Shouldn’t they already know about this? The way I see it, interface development consists of three layers, where JavaScript corresponds to the behavior/interaction layer. When developing web interfaces, you should be aware of all three, and of the possibilities and caveats each one presents.

For a long time, JavaScript has had a bad reputation that I don’t think it deserves. That reputation is based on lack of knowledge and on common misconceptions that have spread like a virus. Let me address some of them:


JavaScript doesn’t work in all web browsers

This belief stems from an old era, the so-called browser wars, when IE 4/5 and Netscape 4 were fighting for domination. And we all know how that went…
Nowadays, if you write proper scripts according to the standardized DOM, also known as DOM scripting, you will reach virtually every web browser on the market. By comparison, that’s even more widespread support than CSS 2 has!

The other day, I was at a seminar held by one of the leading CMS manufacturers in Sweden, and one question was whether the next version of their product would stop being JavaScript-dependent, unlike the previous version (a dependence largely due to using Microsoft .NET and what it generates), or if its scripts would at least work properly cross-browser. The reply:

The problem we had was to get the scripts to work in all web browsers

The way he saw it, the problem was in the web browsers, not in the product, which upset me. At that point I had to step in and explain that the reason their scripts didn’t work is that Microsoft .NET’s controls generate JavaScript based on the scripting model Microsoft introduced with IE 4, and that’s why it fails in every other web browser.

If Microsoft had only taken the time and made the decision to implement proper DOM scripting, which is supported in every major web browser, including IE 5 and later on PCs, things would have been fine. So let’s kill, once and for all, this misunderstanding that has flourished for so long: correctly written scripts will work in any web browser.
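
As a small illustration of the difference (the element id is made up): the IE 4 model reaches elements through the proprietary document.all collection, while DOM scripting uses the standardized methods that every modern browser implements:

// IE 4-era, proprietary - fails everywhere but IE:
//     document.all["statusMessage"].innerText = "Saved!";

// Standardized DOM scripting - works in any major web browser:
var statusMessage = document.getElementById("statusMessage");
if (statusMessage) {
    statusMessage.appendChild(document.createTextNode("Saved!"));
}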


JavaScript doesn’t rhyme well with accessibility

JavaScript does rhyme well with accessibility; it’s just that some (or many) things developed with JavaScript haven’t. The reason is web developers not being aware of how it should be done correctly. Believe me, though: when it comes to writing JavaScript, every serious web developer focuses as much on accessibility and standards as the people promoting them do. And wherever JavaScript is used, be it for client-side form validation to avoid unnecessary round trips, for dynamic menus or for something else (why not an AJAX application?), a non-JavaScript alternative should always exist to cater to those for whom JavaScript isn’t a possibility.
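
As a sketch of that principle (the form and field ids are my own invention): client-side validation that merely saves a round trip, while the form keeps working, and the server keeps validating, when JavaScript isn’t available:

var form = document.getElementById("contactForm");
if (form) {
    form.onsubmit = function () {
        var email = document.getElementById("email");
        if (email && email.value.indexOf("@") === -1) {
            alert("Please enter a valid e-mail address.");
            return false; // block the submit so the user can correct it
        }
        return true; // proceed with the normal form submission
    };
}
// Without JavaScript the handler never runs and the form submits as
// usual - the server-side validation remains the actual safety net.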


So, how do you create a page with unobtrusive and accessible JavaScript? Humbly, I think the pictures page of my Rome trip web site is a pretty good example: it enhances the experience for those who have JavaScript, yet stays functional in cases where JavaScript can’t be used.

It has a script that is triggered when the page has loaded, but only in web browsers that support document.getElementById, which is verified through object detection. It then adds onmouseover and onmouseout events to the small images, so that hovering over one of them shows a larger version of that image. This means the HTML isn’t cluttered with tons of event handlers, and for visitors who don’t have JavaScript activated, or whose web browser doesn’t support it, each small image is also linked to the same larger version. It also means that the script won’t throw an error in web browsers that lack the necessary support, thanks to the object detection.
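
A sketch of that pattern (the ids and file name are assumptions, not the actual code of the Rome site):

window.onload = function () {
    // Object detection: older browsers simply keep the plain image links.
    if (!document.getElementById) {
        return;
    }
    var preview = document.getElementById("preview"); // the large image
    var container = document.getElementById("thumbnails");
    if (!preview || !container) {
        return;
    }
    var links = container.getElementsByTagName("a");
    for (var i = 0; i < links.length; i++) {
        links[i].onmouseover = function () {
            preview.src = this.href; // each link points to the large image
        };
        links[i].onmouseout = function () {
            preview.src = "images/placeholder.jpg";
        };
    }
};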


So now, get out there! Write DOM-based JavaScript that will enhance your web sites profusely!
Trust me, it’s a whole new level, and it will put a big smile on your face when you realize what you can accomplish! 🙂


Another player, another perspective

An old colleague of mine, Oscar Berg, has started blogging. Oscar is a well-experienced Business Analyst and Usability Designer, and I have to admire him for finding the time to start blogging while having two kids (and a third on the way).

He is one of the people behind the initial launch of the hugely successful hitta.se, and I actually wrote the very first HTML prototype of it. Unfortunately, the company that owned the technical part of the project decided that they knew enough to code the interface themselves. If you look at the web site’s code, apparently they didn’t…

Anyway, for those of you interested in the business perspective on things, I strongly recommend a visit to his blog.

How to generate valid XHTML with .NET

A common problem is that the Web Forms and Web Controls in ASP.NET generate invalid XHTML. Among the errors are invalid attributes and inline elements without a correct block-level container, as well as good ol’ HTML comments in script blocks, which prevent you from sending your XHTML with the application/xhtml+xml MIME type.
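
That last point deserves a quick illustration: under application/xhtml+xml, an XML parser treats old-style comment hiding as a real comment and throws the script away, so nothing executes. The XHTML-safe pattern is a CDATA section instead (a sketch of the two patterns, not ASP.NET’s actual output):

<script type="text/javascript">
<!--
alert("Under application/xhtml+xml this is comment content - it never runs");
// -->
</script>

<script type="text/javascript">
//<![CDATA[
alert("A CDATA section keeps the script intact under both MIME types");
//]]>
</script>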

All these errors are introduced automatically when the page in question is rendered, meaning that even if you write flawless code yourself, you will still fail to get valid output.

To the rescue, some solutions to take care of this:

Another option is to write your own fix, customized to your specific needs. That should take from a day and up, depending on where you set the bar.

Or maybe you’re one of the people hoping that ASP.NET 2.0 will take care of all this? In that case, I recommend reading Charl van Niekerk’s posts ASP.NET 2.0 – Part 1 and particularly ASP.NET 2.0 – Part 2.

ASP.NET 2.0 outputs lovely (*irony*) things like:

<form onsubmit="javascript:return WebForm_OnSubmit();">

and

document.all ? 
document.all["Login1_UserNameRequired"] : 
document.getElementById("Login1_UserNameRequired")

(for the vast IE 4 support required?) and still puts HTML comments in script blocks.
And when it comes to semantics, structure and unobtrusive JavaScript, it’s a mess.

Don’t get me wrong, I think it’s great that it validates, but validating alone doesn’t necessarily make for good code. Validation is just one of the components of a good web page; there are, for instance, also semantics, accessibility and unobtrusive JavaScript (and, just as important, offering a way for things to work without JavaScript as well, which connects back to accessibility).

My advice to you: one way or another, make the extra effort to ensure that the XHTML coming out of .NET is valid. It’s not that big a deal, and the work is totally reusable in your next .NET-based project.


Do you have experience with the above techniques for making it valid, or some other way to accomplish it? Please feel free to share!

Why accessibility?

Seems like a pretty easy question to answer, doesn’t it? Well, it isn’t, once you look further than the obvious standpoint that of course one wants every web site to be accessible and usable by everyone. From a perspective of respecting and catering to people with different needs, accessibility is a given.

However, let me give you some background first by pointing you to two good articles Roger has written:

  • Accessibility myths and misconceptions: a good summary of building accessible web sites.
  • Accessibility charlatans: about people jumping on the bandwagon, claiming they create accessible web sites to get PR and make money, and about ignorant journalists who don’t do proper fact-checking and simply support their claims.

With that as our canvas, enter the business factor. Sure, WAI is the hype word of the moment among IT salesmen, but I’m sorry to say: I still haven’t had the opportunity to work on a project that delivers an accessible web site, or even a project that has had that as a stated goal. Take, for instance, the project I’m working on right now: a web site based on a Microsoft .NET-based CMS product from one company, with extensions added by yet another company.

My problems:

  • To start, we all know that Microsoft .NET generates code that can’t be validated as strict XHTML or HTML.
  • The CMS product generates some invalid code on top of .NET’s plus the fact that it has a WYSIWYG tool based on Microsoft’s contenteditable property, which, to say the least, generates terrible code.
  • To top it off, the extensions mentioned above use span tags as the ultimate block level element, encompassing everything (span tags are inline elements, you morons).

There’s too little time and money in the project (as always), so there simply isn’t any window of opportunity, or incentive, for any system developer to fix the .NET errors. And even if those had been taken care of, we would still have the WYSIWYG and extension problems. So, with that in mind, accessibility isn’t even on the agenda. And we’re talking about a web site with roughly 60,000 – 70,000 visitors per day!

And, in my experience, this is not an uncommon situation at all. Generally, when you present the accessibility factor to a company, they want it “in the package”, but they’re not prepared to pay anything extra for it, nor to allow for any extra testing. Most companies (and most visitors who don’t need the accessibility enhancements) are happy if the web site in question displays somewhat correctly. They don’t give a damn whether it follows web standards, uses correct semantics or is accessible to people with other needs. Companies normally seem to think that the percentage of visitors they lose by not being accessible doesn’t make up for the time and testing it takes to go that extra mile (taking no notice of the bad will this creates, however).

A very current example: Jens Wedin works for a company whose web site was just elected the best government web site in Sweden. And, as he states:

Take note that the site do not follow web standards, is using frames, do not validate, have a lot of accessibility problems.

I have no doubt that Jens will do his best to push things in the right direction; he seems like a knowledgeable person. But the fact still stands that the site’s current state didn’t keep it from taking first place. Another example that keeps coming up is Google. Do a search for Robert Nyman, and the result page contains 437(!) validation warnings/errors. (Ironically, an MSN search, of all things, returns a valid XHTML Strict page, although not sent with the application/xhtml+xml MIME type, but that’s a discussion for another day.) This gives people the argument that:

Well, Google seems to be doing pretty well. And if they don’t care about web standards or accessibility, why should we?

I don’t really know how to argue with that. Even if every web interface developer in the whole world knew how to create accessible web sites, we would still need to convince businesses, customers and decision makers to take it into consideration, to sell the idea of accessibility to the people who ultimately make the call!

How do we do that?


Back from Rome – travel stories!

I’m back! It feels good to be home again. Actually, I’ve been home a little over a week, but that week has been spent going to a party with my new employer, attending a wedding and starting my new job. During that week, I’ve also been building and setting up our Rome trip web site, optimized for IE 4 and later (just kidding, ok?). We had a great time, and you can find travel stories, pictures and video clips on the web site!

Since people found out that I’ve come home again, they have started asking when I’m going to start blogging and write my next post. It makes me really happy to see that people appreciate my writing!

A friend of mine, Per at Gamepepper, pointed out to me that I have at least reached #3 in the Lifecycle of Bloggers. 🙂
However, I must confess that I’ve flirted with a lot of the other points as well…

I have come to the conclusion that I won’t be writing a new post every day, Monday through Friday, but rather somewhere around 2-4 posts per week. This is not due to a lack of topics or motivation; it’s just that I have to spend more time with my family and do other things besides sitting in front of the computer (no, this is not a faux “retirement”).
Also, writing every day has led to some people missing a few of my posts, since they simply don’t have the time to visit my web site or read the RSS feed every day (what could possibly be more important than that, I have no idea… ;-)).

So, with this move I hope my posts will become better and I aim to write really interesting pieces in the future!


I would be very happy if you were to write a comment about what you think of the Rome trip web site, or about my move to writing fewer (and hopefully better) posts!


Update! For you web developers out there: resize your web browser window on the Rome trip web site and look at the dynamically sized masthead image. Also, take note of the fixed navigation bar, which even works in IE 5!