Recently, ElektroPost, the company behind the Content Management System EPiServer, asked me to write an article for the Expert Panel in their Usability section. It’s targeted at project managers, editors and web developers who don’t work full-time with web interfaces. The article is titled Better Control and Cost Savings with Style Sheets (Swedish version), and I’ve also decided to publish the full article below to take comments. A big thank you to Henrik Box and Jeroen Mulder for reading my first drafts and giving me valuable feedback. Also, a thank you to ElektroPost for professional proofreading.
Here goes:
When the Web first became available to the public, everything was in the markup language HTML: the content, the presentation and the interaction. Since then, we’ve seen the advent of content management systems, which offer editors the possibility to publish content on the Web without any previous coding knowledge. The content they produce is saved in a database and then dynamically generated into its corresponding page.
All this has helped the Web to grow enormously, but it has often resulted in controlled chaos. Many Web sites had to be rebuilt from scratch as soon as any changes were to be introduced or a new Web browser was released. How can we change that scenario?
The next step is to make sure that you separate your content from your design and the way you want your pages to look. Style sheets, also known as Cascading Style Sheets (CSS), have been around for some time, but their usage has grown most rapidly during the last two years. The generally recommended approach to producing a Web site is to collect all visual presentation in one or several style sheets (a sketch follows after this list). For instance, there you specify:
all the colors that are used,
which font should be used, and if it should be scalable to cater to people with various sight disabilities,
images used for design purposes,
boundaries and placement of all elements used.
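A minimal sketch of what such a central style sheet can look like (the selectors, colors and image path here are purely illustrative):

body{
	/* Colors used throughout the site */
	color:#333;
	background:#FFF;
	/* A scalable font size, so that users can resize the text */
	font:0.8em Verdana, Arial, Helvetica, sans-serif;
}
h1{
	/* An image used purely for design purposes */
	background:url(images/heading-flourish.png) no-repeat left top;
	padding-left:30px;
}
#content{
	/* Boundaries and placement of the element */
	width:70%;
	margin:0 auto;
}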
So What Are the Biggest Selling Points?
The style sheet is the central place where you control the look and feel of every page on your Web site. This leads to a consistency that greatly improves the usability for the end user. What this also means is that you don’t need to update every page if you want to change something in the layout; this is done in one place and it affects every page. This is one of the major reasons you should never apply things like padding, margins, widths, etc. in the HTML, as it results in you having to go through all the files to make any updates.
It is also a way to gain better control of how the layout adapts to different screen resolutions and different sizes of the end user’s Web browser window. When it comes to increasing the performance of your Web site, you should be aware that the style sheet files are cached in the visitor’s Web browser. This means that they only need to be downloaded for the first page the visitor views on your Web site; from then on, the only data sent for the following pages is the HTML code. This is one of the really good reasons to move all the presentation out of the HTML file.
For an ordinary Web site, this means that using style sheets makes it possible to save 25% of the data that needs to be sent to the visitor for their first page visit, and then up to 50% for the pages after that.
A good example of how to totally redesign an entire Web site through style sheets is css Zen Garden, where the HTML code is the same for every page, and the only thing changed is the style sheet file.
What Should You Keep in Mind?
The power of using style sheets for your layouts should be combined with semantically correct HTML code in your templates and correct elements in your content. The usage of heading elements for headings, paragraph elements for paragraphs of text and list elements for different listings such as menus etc., will result in:
Pages that are more accessible to everyone.
Better search-engine ranking, by making it easier for the search engines to understand the weight of the text used in your pages, and by not having an unnecessary clutter of presentational code mixed with the content.
Easier maintenance of page templates and page content by focusing solely on the content, instead of having to think of the presentation as well.
Conclusion
By using style sheets you will gain better control, achieve easier maintenance and increase performance while saving bandwidth. Together with correct semantic code, you will also reach a better search-engine ranking and automatically increase your Web site’s accessibility.
For the moment, I’m working on a fairly big project where the interface design will be elastic. What do I mean by elastic? Basically, there are three ways one can choose to design the interface’s relation to the visitor’s resolution and web browser window size:
Fixed
The layout is fixed in pixels and takes no account of the resolution or the window size.
Fluid
The layout flows to fill the whole window, no matter how wide or narrow it is. This layout can be set in pixels, em or any other desirable unit.
Elastic
This is, in my opinion, the best one. It isn’t fixed, but it is also kept from getting too wide, for instance for wide-screen users. You specify a maximum width and a minimum width, and then you let the layout flow between those two values depending on the window size. The unit for this type can also be whichever one suits best: pixels, em, percentages etc.
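A rough sketch of an elastic wrapper (the id and the values are just examples, and note that some older web browsers, such as IE 6, lack min-width/max-width support):

div#wrapper{
	width:90%;
	/* Never narrower than this... */
	min-width:40em;
	/* ...and never wider than this, even on wide screens */
	max-width:75em;
	/* Center the layout in wider windows */
	margin:0 auto;
}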
Since I want my font to be scalable, and consequently the width and height of some elements resizable to go with that, I’ve chosen to use the em unit for most cases. This is to make it work in the different versions of Internet Explorer in Windows, since they can’t handle user resizing of a pixel-based font (as opposed to Firefox, Opera and Safari, amongst others).
It’s really invigorating to create something that’s so scalable and flexible, and I really do believe this will help target more end users and make the web site more usable and accessible for them. All the people involved in the project who have seen my HTML prototypes have really liked them and think it is a great approach.
But naturally, creating something like this isn’t without gripes. My biggest annoyance is web browser bugs, where number one on that list is rounding errors. If I have, for instance, a background color on an element and the font is resized, some elements will get a different height in Firefox than in Internet Explorer. This is because the em unit calculates the element’s boundaries from the font size in use and transforms it to pixels for rendering, which inevitably leads to different rounding results. Firefox definitely seems to be the worst at this.
The key to solving this is to find a unit where all (read: most) web browsers seem to agree on the rounding for different sizes. So, for the moment, I’m on top of things and I really like the web site I’m working on. But keeping it consistent is an everyday task.
You put your heart and soul into a web site, you put in those extra hours of fine-tuning some pixels, some scalability fixes, enhancing the accessibility or just plain making sure it’s valid and therefore as future-proof as possible. Enter: the customer.
Within a week they have usually messed it up in some way: one or several of their code monkeys, who are usually more “creative” than skilled, have been let loose on the code. And this will happen as long as they have access to the source code (which they, of course, should have; they’ve paid for it). But I sincerely think customers should think again before they start making quick fixes, and perhaps realize that what they got in the delivery was made that way intentionally, not out of chaos.
This then comes back to us web developers; it’s tough to have reference cases when you know most of them have been screwed over. Meeting a potential new customer, one wants to show the different projects one worked on before one lost control of the code:
Here’s the web site as it should have looked, before it got to the sorry state it’s in today.
It’s hard being a custodian, saying goodbye to your loved one.
I know this is the order of business, but I really wish some customers would think twice. For their own sake.
Yesterday I found out about the company clear:left, which consists of web development pros Andy Budd, Jeremy Keith and Richard Rutter. These three are extremely experienced and have had a great impact on web development. If I were working for a company in the UK, this is the company I’d contact for web development work.
I wonder what they charge, though… However, quality costs, and in the long run you will definitely save a lot of money if the job is done properly the first time around.
This made me wonder if there should be a Swedish equivalent of such a company, one with the best people we have to offer in this field. I know which people I’d like in that company (but that I won’t tell)…
So what do you think? Should there be a Swedish company with the big names we have?
Finally, the Opera web browser is now free. That means no more ads, no nothing. I expressed my opinions about Opera almost six months ago, and except for the getting-paid part, I think the other arguments still stand.
However, what’s good about this is that Opera will most likely see an increase in users, and that is what I like. If web standards-compliant web browsers like Mozilla Firefox, Safari and Opera gain more and more market share, it will force web developers to write valid and correct code, instead of just relying on Internet Explorer’s error handling for code that should never have seen the light of day.
I can just imagine projects where there will be conversations like:
- My code only works in Internet Explorer!
- That's because you didn't do the job properly
the first time! Stop writing such sloppy code
to begin with, and learn your profession!
I guess the future will tell…
Anyway, if you like Opera, rock on! Download away and have a good time!
…to create an e-Accessibility Quality Mark for Web services, as part of the Action Plan eEurope 2005: An information society for all.
This is really a commendable initiative; we have something similar in Sweden that Statskontoret is working on, called 24-timmarsmyndigheten.
The problem with Support-EAM is the example they set with their own web site. Although mostly valid, it’s not that accessible. Let me clear up a common misunderstanding here: just because a web site/page validates doesn’t mean it’s accessible. One crucially important factor in making it accessible is writing semantic code.
Their web site has a table-based layout, there are no skip links (although they might not be that necessary in this case) and there are places where headings aren’t marked up using the correct h1...h6 elements. There are also a number of inline styles, and script blocks that don’t have any comments around them to allow them to be hidden.
It has to be said, though, that their web site has been updated since Peter first visited it; they now use list elements for lists of links, heading tags in some places etc. But my point here is that such a big project, one that will affect the whole European Union, must be as close to perfect as possible when it comes to setting the bar for others.
Am I overdoing it, or do you generally disagree with my points of criticism? Or do I actually have a point? Let me know.
And there we go again. Recently, Microsoft has made a lot of good decisions, especially when it comes to collaborating with the WaSP to have their products, such as .NET, generate more valid and accessible code. This also includes getting the next version of Internet Explorer to implement better support for web standards and CSS. All this is great news and very good for the future. The developers at Microsoft really seem to be trying to do a good job.
But then Steve Ballmer comes along with this quote in Business Week:
We won the desktop. We won the server. We will win the Web. We will move fast, we will get there. We will win the Web.
This has already been discussed by, amongst others, Molly and Roger Johansson. And yes, I know that Ballmer is a businessman; he’s got to have this cocky attitude.
But the problem is, especially in light of all the good things Microsoft has done recently, that these kinds of statements just ruin the goodwill created; they just annoy people who have recently started to reconsider their opinion of Microsoft and to forget the past.
Ballmer is probably just doing this out of spite, or to get Microsoft investors all aroused. But please, some balance…
Always wondered where the term bug came from? Been pondering what debugging is about? Well, here goes:
One day in the 1940s, Harvard’s famed Mark I (the precursor of today’s computers) failed. When the Harvard scientists looked inside, they found a moth that had lodged in the Mark I’s circuits. They removed the moth with a pair of tweezers, and from then on, whenever there was a problem with the Mark I, the scientists said they were looking for bugs. The term has stuck through the years.
Why do we have to fight to be allowed to do things right? I mean, really? Look at all the web standards advocates out there, fighting to get their message through; people lobbying for style sheet-driven web sites and accessibility.
And all these battles are not about having cool scripts animating things all over the page, nor about doing something to show off to your friends. They are about keeping development costs down, vastly reducing bandwidth usage by having all presentation in CSS files that get cached in the visitor’s web browser, and reaching a lot more potential customers with web sites that are accessible.
I can’t believe I’m using my spare time, as do many other very talented people, fighting to get the message across. Every day, web sites and blogs all over the internet show how to better adhere to web standards and how to write the leanest and most efficient CSS, and publish tutorials and recommendations on how to reach higher accessibility (thus also gaining goodwill, which will result in even more business).
But we’re met by a wall of decision makers and Project Managers who just don’t understand what it’s about (or are too weak to take the discussion), and by tool manufacturers whose products deliver terrible code because they lack the skill to do it correctly, and because it’s too much of a hassle for them to learn and eventually set things right (since no one asks for it).
I mean, even Microsoft, with its history, understands the importance of this. The next version of Internet Explorer will have greatly improved web standards and CSS support, the next version of the .NET environment will encompass web standards and accessibility improvements, and their MSN Search is the only search engine out there that delivers valid XHTML code with the presentation contained in its own CSS.
Of course Microsoft still has a long way to go, but at least they’re on the right track. And if they can go through this, at their immense size as a company, what holds you smaller companies back from upgrading your skills? From learning how things are supposed to be done and how you will make much more money? From placing tougher demands on your web developers and tool manufacturers to deliver something that isn’t ghastly?
If you agree with me, give me a “hear, hear”, put your foot down and tell your managers that this can’t go on anymore. It’s business suicide to be in the web development business producing web sites without the know-how, or even the interest, to create a good, effective front-end layer.
Earlier this week I was interviewed by Dag König, and the interview is now available as a podcast (the mp3 file is around 14 MB, in Swedish). Dag is a seasoned Microsoft developer and architect, and he usually travels around Sweden giving seminars together with Microsoft.
Therefore, it was extra interesting to have this talk, and to meet a Microsoft developer who actually cares about and is interested in web standards and accessibility. We spoke for over an hour, and the final interview is 42 minutes. Bear in mind that I don’t actually sound like that (usually)! I’m not that accustomed to doing interviews, so my voice sounds extremely strained, and on top of that I was just coming out of a cold.
Personally, I think I sound monotonous and boring, like I’m just rambling for the sake of it, but somewhere in there at least a couple of sentences are good. Have a listen if you like, and let me know what you think.
As this is in Swedish only, I just want to express my interest in doing interviews in English, podcast or written, so more people can get something out of them. Feel free to contact me if you think I have something interesting to say.
Recently, it came to my attention that some people at my company were going to perform a “Firefox investigation”. What this meant was that they had built an extranet for a customer, who had now requested that it work in Firefox as well (it goes without saying that it was a solution that only worked in IE on Windows). Suppressing the need to exclaim to everyone involved that if they hadn’t done such a piss-poor job the first time around it would already have worked in Firefox (as well as in Opera, Safari and other standards-compliant web browsers), I decided to call the Project Manager and talk about this.
What I wanted to do was explain to him that it was dangerous to take on the project with the mindset that it should work in a certain web browser, as opposed to following the given recommendations and standards; the general approach is a much better guarantee of future compatibility, automatically targets more web browsers and gives easier maintenance. Naturally, every web browser has flaws that there might be workarounds for, but in general, if you write correct code you will get very close to a web site that works in as many web browsers and on as many platforms as possible.
So, I called him up, and it went a bit like this:
Introduction, bla bla bla
- But what you're saying is that you have the necessary
skills to make things work in Firefox?
- Well, yes. But I think it's really important that you
follow web standards when you rewrite/adapt your code,
instead of focusing on just a single web browser.
- Yeees, we will try to do that...
We were talking about using a so-called HTML validator
in this project, have you heard of those?
- Er.. Yes (wanting to scream: of course I fucking have,
that's the foundation to make sure that the client-side
code you use is valid!).
That's part of following web standards
(bla bla bla, is he getting me here?)
We spoke for a while, he seemed to understand what I,
as well as web standards, was about, and then
the call finished with:
- But if we need to talk to someone, you have
Firefox skills?
- Yes.... (Sigh.)
The problem in our call, as with many Project Managers and System Developers alike, is that they really don’t know about web standards and how things should be done. They have never heard of the importance of semantic markup.
So, for all of you out there whose mindset is still set in the browser war era (Internet Explorer vs. Netscape):
Those days are long gone. There’s a myriad of web browsers and platforms out there, together with accessibility and other factors that need to be taken into account. Read this line carefully, and then repeat it in every web project you go into:
Do not write your code adapted for web browsers, write it according to web standards.
Well, do they? Every tool they offer for web development “magically” rearranges the code as you write it, and again when it is delivered at runtime to the requesting web browser. I guess Microsoft’s intention is to make things as easy as possible for the web developer, and sure, setting up a dynamic page in .NET is fast (if we look away from the invalid code, the dependency on obtrusive JavaScript and the total lack of accessibility).
It probably helps some web developers, but to me it just adds a lot of extra time trying to correct the invalid output. Instead of helping or aiding me, it adds 25% to my working time covering up for its flaws.
My approach as a web developer is to have total control of the code being output. This means that if I have the necessary skills, the result will be good. Unfortunately, the way it is now, Microsoft’s tools are really keen on (or even horny about) trying to be more intelligent in their code generation than they expect the web developer to be.
I really hate it when a tool thinks it’s smarter than you and just gives you a lot of extra things you don’t need, nor ever asked for. It’s like going to a store to buy something, and the salesman throws in a lot of extra crap that you have to carry from the store and throw away afterwards.
It needs to be pointed out that this is not written out of contempt for Microsoft; I do think that they have created many good things too. This goes out to any company that makes tools that alter my code without being asked to. In this case, the company’s name is Microsoft.
Personally, I’ve always disliked uppercase tags in code. Uppercase characters in digital text are often perceived as screaming, and if that’s true, boy, have I been screamed at by a lot of the code I’ve seen. It looks bulky, and it feels like working with skyscrapers when doing a cut-and-paste operation; one expects the computer to start screaming from the effort.
Aesthetics aside, there’s also a good technical reason for not using uppercase tags: they’re not allowed in any flavor of XHTML. To quote the XHTML 1.0 specification:
XHTML documents must use lower case for all HTML element and attribute names
As a follow-up to this, if your XHTML/HTML is indeed lowercase, make sure that the element selectors in your CSS are lowercase too. Otherwise, you might not get the behavior you expect: uppercase selectors will not match lowercase XHTML tags if the page is sent with the MIME type application/xhtml+xml.
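For example, with a contrived rule (the color is just an example):

/* Sent as application/xhtml+xml, element selectors are case-sensitive */
DIV{color:#C00;} /* will never match a lowercase div element */
div{color:#C00;} /* will match */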
So, if you like and use uppercase tags, please lowercase your darlings. For me, and for the future.
Martin Söderlund has done an interview with me, and I think it turned out pretty well-balanced. Half of it is about web development, the other half about more personal things.
The interview is in Swedish, but what better reason to learn Swedish than this…? 😉
I’ve been wondering if image replacement and the promotion of it is really a good idea. But let’s start from the beginning: what is image replacement?
Image replacement is a common name for a technique of applying images to headings and the like from an external CSS file, as opposed to in the XHTML/HTML. The general approach is to hide the text content of the element (one way or the other) and instead show an image through CSS.
An example (Note: this is not the most sophisticated way to do it, but an easy one to get an overview of the basic idea):
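A minimal sketch of the basic idea, where the id, the sizes and the image path are of course just placeholders:

<h1 id="logo"><span>My Site Name</span></h1>

h1#logo{
	width:300px;
	height:100px;
	background:url(images/logo.png) no-repeat;
}
h1#logo span{
	/* Hide the text and let the background image show instead */
	display:none;
}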
There are some general arguments for using image replacement, and I thought I’d respond to them here:
You can add images for a rich typography or logo
Reply: You can do the same thing with an inline img tag.
Images referenced in the CSS file are cached
Reply: As far as I know, images referenced through inline img elements are cached in most, if not all, major web browsers.
Easier maintenance with a single file to edit
Reply: On pretty much every web site, the content is dynamically generated through ASP.NET, JSP, PHP or something similar. Then you just have a WebControl/include file with the content of your header, and the tag is still in just one file. And if you have a hard-coded site, basically any tool offers search-and-replace functionality to easily change the content of every file.
For accessibility reasons, it’s good to have a fallback text in the document
Reply: You get the same thing using the alt attribute on the img tag.
Another major reason for not using image replacement is that, to my knowledge, there’s still no way to handle the scenario where the user has a web browser setting with images off and CSS on; then they will see neither the text nor the image. There are, however, ways to do JavaScript-enhanced image replacement, but to me, being dependent on JavaScript isn’t an option either.
So, use image replacement if you want to. I know I won’t (at least not until someone convinces me of any advantage of it over using an img tag).
To generalize, there are three different standpoints web developers usually take when it comes to implementing JavaScript in a web page.
Make it JavaScript dependent
This usually means making the web site, and important functionality in it, dependent on the visitor having JavaScript activated and a web browser that supports JavaScript. Bad.
Have a noscript fallback
Often, in this case, the web site’s functionality is still dependent on JavaScript, but a noscript tag is included with a text explaining to those who don’t have JavaScript that they can’t use it. Better.
Not JavaScript dependent and no noscript tag
This is the ultimate scenario. JavaScript is used to progressively enhance the functionality of the web site, but all the main functionality works without it. And the noscript tag is redundant: instead, include the necessary elements or warning texts in the code that’s initially loaded, and then use JavaScript to hide them. Best!
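A minimal sketch of that third approach (the id and the message are made up):

<p id="js-warning">
	This page is best experienced with JavaScript enabled.
</p>
<script type="text/javascript">
	// Hide the warning for visitors who do have JavaScript
	if(document.getElementById){
		document.getElementById("js-warning").style.display = "none";
	}
</script>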
What I want to touch on with this post is how errors are handled when XHTML is served the way it should be. Let’s, for the sake of argument, say that we want to write and deliver XHTML (I don’t want to turn this into a discussion about whether we should write HTML or XHTML).
First, some general background information about how to send documents to the requesting web browser. It’s all about the media type, described in XHTML Media Types:
HTML
Should be sent with the text/html MIME type.
XHTML 1.0
All flavors of XHTML 1.0, strict, transitional and frameset, should be sent with the application/xhtml+xml MIME type, but may be sent as text/html when it conforms to Appendix C of the XHTML specification.
XHTML 1.1
Should be sent with the application/xhtml+xml MIME type; should not be sent with the text/html MIME type.
So, what’s the difference? Web pages sent as text/html are interpreted as HTML, while those sent as application/xhtml+xml are treated as a form of XML. However, this does not apply to IE, which doesn’t even understand the application/xhtml+xml MIME type to begin with, but instead tries to download the page as a file. So, no application/xhtml+xml for IE.
Aside from IE’s lack of support for it, and the considerations described by Mark Pilgrim in his article The Road to XHTML 2.0: MIME Types, it means that when a web page sent as application/xhtml+xml contains a well-formedness error, the page won’t render at all.
The only thing displayed will be an error message when such an error occurs. This is usually referred to as draconian error handling, and its history is told in The history of draconian error handling in XML.
My thoughts about this started partly from seeing many web developers write XHTML 1.1 web pages and then send them as text/html, using it only because it was the latest thing, not for any features that XHTML 1.1 offers (this also goes for some CMS companies that ship invalid XHTML 1.1 sent as text/html as the default in their page templates, for customers to copy). Sigh…
It is also partly inspired by an e-mail that I got a couple of months ago, when Anne was kind enough to bring an error on my web site to my attention, with the hilarious subject line:
dude, someone fucked up your XHTML
What had happened was that Faruk Ates had entered a comment on one of my posts where his XHTML had been messed up (probably because of some misinterpretation by my WordPress system), hence breaking the well-formedness of my web site so that it didn’t render at all.
Because of that, and when using it for major public web sites, I really wonder if that’s the optimal way to handle an error. Such a small thing as an unencoded ampersand (& instead of &amp;) in a link’s href attribute will result in the page not being well-formed, and thus not rendered. Given the low quality of the CMSs out there and the terrible output from many WYSIWYG editors, the “risk” (read: chance) of the code being valid and well-formed is smaller than that of the code being incorrect. Many, many web sites out there don’t deliver well-formed code.
Personally, I agree with what Ben de Groot writes in his Markup in the Real World post. I prefer the advantages of XHTML when it comes to its syntax and what is correct within it. However, Tommy once said to me that if you can’t guarantee valid XHTML, you shouldn’t use it. Generally, I see his point and think he’s right, but to strike the note Ben does: I can guarantee my part of it, but there will always be factors like third-party content providers, such as ad providers, sub-par tools for the web site’s administrators and so on. And for the reasons Ben mentions, I’d still go for XHTML.
So, to conclude, I have to ask: do you think XHTML sent as text/html is OK when it follows Appendix C of the XHTML specification? And do you agree with me that having a web site break and show nothing but an error when something isn’t well-formed is bad business practice?
I reacted in two ways when I heard about his presentation and the crowd reaction:
It’s very good and about time that this is being focused on.
Even though it seems like it was a good presentation, given the slides, how come the crowd reacted that way? Shouldn’t they already know about this? The way I see it, interface development consists of three layers, where JavaScript corresponds to the behavior/interaction layer. When developing web interfaces, you should be aware of all three and of the possibilities and caveats they present.
For a long time, JavaScript has had a bad reputation that I don’t think it deserves. It’s been based on a lack of knowledge and on common misconceptions that have spread like a virus. Let me address some of them:
JavaScript doesn’t work in all web browsers
This belief is based on an old era, the so-called browser wars days, when IE 4/5 and Netscape 4 were fighting for domination. And we all know how that went…
Nowadays, if you write proper scripts according to the standardized DOM, also known as DOM scripting, you will target virtually every web browser on the market. By comparison, you will even get more widespread support than CSS 2 has!
The other day, I was at a seminar held by one of the leading CMS manufacturers in Sweden, and one question was whether the next version of their product would stop being JavaScript dependent, as opposed to the previous version (largely a result of using Microsoft .NET and what it generates), or if its scripts would at least work properly cross-browser. The reply:
The problem we had was to get the scripts to work in all web browsers
The way he saw it, the problem was in the web browsers, not in the product, which upset me. At that point, I had to step in and explain that the reason their scripts didn’t work was that Microsoft .NET’s controls generate JavaScript based on the scripting model Microsoft introduced with IE 4, which is why they don’t work in any other web browser.
If Microsoft had only taken the proper time and decided to implement proper DOM scripting, which is supported in every major web browser, as well as in IE 5 and up on PCs, things would’ve been fine. So, let’s kill, once and for all, this misunderstanding that has flourished for a long time: correctly written scripts will work in any web browser.
JavaScript doesn’t rhyme well with accessibility
JavaScript does rhyme well with accessibility, but some/many things that have been developed with JavaScript haven’t. The reason for this is web developers not being aware of how it should be done correctly. However, believe me: when it comes to writing JavaScript, every serious web developer focuses as much on accessibility and standards as the people promoting them. And when JavaScript is used, be it for client-side form validation to avoid unnecessary round trips, for dynamic menus or for something else (why not an AJAX application?), a non-JavaScript alternative should always exist to cater for those for whom JavaScript isn’t a possibility.
So, how do you create a page with unobtrusive and accessible JavaScript? Humbly, I think the pictures page of my Rome trip web site is a pretty good example of how to enhance the experience for those who have JavaScript while staying functional in the cases where JavaScript can’t be used.
It has a script that triggers when the page is loaded, but only in web browsers that support document.getElementById, which is verified through object detection. It then adds onmouseover and onmouseout events to the small images; when they are hovered, a larger version of the current image is shown. This means that the HTML isn’t cluttered with tons of event handlers, and for those who don’t have JavaScript activated, or have a web browser that doesn’t support it, the small images are also linked to the same larger versions. It also means that the script won’t throw an error in web browsers that lack the necessary support, thanks to the object detection.
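A rough sketch of the idea (not the actual script; the element ids and image path are made up):

window.onload = function(){
	// Object detection: only continue in web browsers
	// that support the needed DOM methods
	if(!document.getElementById || !document.getElementsByTagName){
		return;
	}
	var largeImage = document.getElementById("large-image");
	var thumbnailLinks = document.getElementById("thumbnails").getElementsByTagName("a");
	for(var i = 0; i < thumbnailLinks.length; i++){
		thumbnailLinks[i].onmouseover = function(){
			// Each link already points to the large version,
			// so visitors without JavaScript can simply click it
			largeImage.src = this.href;
		};
		thumbnailLinks[i].onmouseout = function(){
			largeImage.src = "images/default-large.jpg";
		};
	}
};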
So now, get out there! Write DOM-based JavaScripts that will enhance your web sites profusely!
Trust me, it’s a whole new level that will give you a big smile when you realize what you can accomplish! 🙂
A common problem is that the Web Forms and Web Controls in ASP.NET generate invalid XHTML. Amongst these errors are invalid attributes and inline elements without a correct block-level container, as well as good ol’ HTML comments in script blocks, which prevent you from sending your XHTML with the application/xhtml+xml MIME type.
All these errors are automatically introduced when the page in question is rendered in the web browser, meaning that even if you write flawless code, you will still fail to get it valid.
To the rescue, some solutions to take care of this:
Another option can be to write your own fix and customize it to your specific needs. This should take from a day and up, depending on where you set the bar.
(for the vast IE 4 support required?) and still HTML comments in script blocks.
And when it comes to semantics, structure and unobtrusive JavaScript, it’s a mess.
Don’t get me wrong, I think it’s great that it validates, but validating alone doesn’t necessarily make it good code. Validation is just one of the components necessary for a good web page; there are, for instance, also semantics, accessibility and unobtrusive JavaScript (and, as importantly, offering a way for things to work without JavaScript as well, which connects back to accessibility).
My advice to you: some way or another, make the extra effort to make sure your XHTML from .NET is valid. It’s not that big of a deal, and it’s totally reusable in your next .NET-based project.
Do you have experience with the above techniques for making it valid, or some other way to accomplish it? Please feel free to share!
Seems like a pretty easy question to answer, doesn’t it? Well, it isn’t, if you look further than the obvious standpoint that of course one wants every web site to be accessible and usable by everyone. From a perspective of respecting and catering to people with different needs, accessibility is a given.
However, let me give you some background first by pointing you to two good articles Roger has written:
They are about people jumping on the bandwagon, stating that they create accessible web sites to get PR and make money, and about ignorant journalists who don’t do proper fact-checking and support those claims.
With that as our canvas, enter the business factor. Sure, WAI is the hype word of the moment for IT salesmen, but I’m sorry to say: I still haven’t had the opportunity to work on a project that delivers an accessible web site, or even one that has had that set as a goal. Take, for instance, the project I’m working on right now: a web site based on a Microsoft .NET-based CMS product from one company, with extensions added from yet another company.
My problems:
To start, we all know that Microsoft .NET generates code that can’t be validated as strict XHTML or HTML.
The CMS product generates some invalid code on top of .NET’s, plus it has a WYSIWYG tool based on Microsoft’s contenteditable property, which, to say the least, generates terrible code.
To top it off, the extensions mentioned above use span tags as the ultimate block-level element, encompassing everything (span tags are inline elements, you morons).
There’s too little time and money (as always) in the project, so there simply isn’t any window of opportunity or incentive for any system developer to fix the .NET errors. And even if that had been taken care of, we would still have the WYSIWYG and extension problems. So, with that in mind, accessibility isn’t even on the agenda. And we’re talking about a web site that has roughly 60,000 to 70,000 visitors per day!
And, from my experience, this is not an uncommon situation at all. Generally, when you present the accessibility factor to a company, they want it “in the package”, but they’re not ready to pay anything more for it or to allow for any extra testing. Most companies (and visitors who don’t need the accessibility enhancements) are happy if the web site in question displays somewhat correctly. They don’t give a damn whether it follows web standards, uses correct semantics or is accessible to people with other needs. The companies normally seem to think that the percentage of visitors they lose by not being accessible doesn’t make up for the time and testing it takes to go that extra mile (no notice taken of the ill will this creates, however).
Take note that the site does not follow web standards, uses frames, does not validate, and has a lot of accessibility problems.
I have no doubt that Jens will do his best to push things in the right direction in the future, he seems like a knowledgeable person, but the fact still stands that its current state didn’t keep it from getting first place. Another example that seems to come up is Google. Do a search for Robert Nyman, and it returns a page with 437(!) warnings/errors. (Ironically, an MSN search, of all web sites, returns a valid XHTML Strict page, although not sent with the application/xhtml+xml MIME type, but that’s a discussion for another day.) This gives people the argument that:
Well, Google seems to do pretty well. And if they don’t care about web standards or accessibility, why should we?
I don’t really know how to argue with that. Even if every web interface developer in the whole world knew how to create accessible web sites, we would still need to convince businesses, customers and decision makers to take it into consideration, to sell the idea of accessibility to the ones who ultimately make the call!
Last Friday I had the pleasure of meeting Roger Johansson, who was visiting our beautiful capital, Stockholm. He was attending a usability conference (with Jakob Nielsen, amongst others). We met up for a short “fika” (something like having a cup of coffee/tea with optional cookie/cake to go with it), and talked about web development. Roger has gotten some attention here in Sweden recently, bringing up the discussion about (the lack of) accessibility when it comes to web sites in the public sector (article: Webben är inte öppen för alla).
Something we really agreed on is the lack of respect CMS manufacturers show their clients when they create administrative interfaces that only work in IE on a PC. As if that weren’t enough, their WYSIWYG editors generate terrible and invalid code that cannot be presented as strict HTML or XHTML. We’re talking about editors that generate deprecated tags, uppercase tags, attribute values without quotes, invalid attributes and so on. Basically, worthless code that (to top it off) isn’t even well-formed, hence impossible to use in a stricter XHTML/XML scenario.
Why do they do this to us? Is it a lack of knowledge, or laziness, taking the easy way out? Maybe I’m cynical, but I’m apt to believe the latter. I’ve met many developers who just think it’s a hassle to produce valid code, and their beloved Microsoft makes it oh so easy for them, so why should they bother going the extra mile?
With all the different web browsers on the market now, be it for computers, cell phones or PDAs, the initiative and responsibility to make the web accessible for everyone has to be taken seriously.
For those of you reading this, let’s start with the WYSIWYG editors. Say no to IE-specific ones, and look at the alternatives. One of the best I’ve seen so far is the open source alternative TinyMCE by Moxiecode. It is solely based on JavaScript, which means that it doesn’t require any plug-ins or extra programs to run (the way it should be, in my opinion). For the moment, it works in IE and Mozilla/Firefox, meaning that with the Gecko support it is also available on the Mac and Linux platforms, as well as on the PC. And, of course, they’re Swedes! 🙂
Another alternative is XStandard, which uses a plug-in for advanced control over the editing. Unfortunately, it only works on PCs.
Using a WYSIWYG tool in your application(s)? Please take responsibility and make sure that it generates valid (and hopefully semantically correct) code!
Today is the launch of my lab web site, RobLab (yes, it’s a corny name), where I will have code, tips and tricks for anyone to use. For the moment, it only has a few things and I don’t know with what frequency I’ll be adding stuff, but that totally depends on the reception it gets. I hope the web site will act as a resource to you, and will be of use.
In a lot of the CSS code I see, people don’t seem to be aware of shorthand properties. They are an excellent way of keeping your stylesheet condensed and it also gives you an easy overview. Basically, it’s a way to write one-liners to affect all sides of an element or all similar properties grouped together.
The general rule is to apply length values for padding, margin and borders. They are to be written in this order, according to the element’s sides: top, right, bottom, left. If you only write two values, it will be for: top/bottom and right/left. Write three values and it will be applied like this: top, right/left, bottom.
NOTE: Remember to always specify a unit, such as px, em, % etc, unless the value is 0.
/* 2 px margin all around */
div#header{
margin:2px;
}
/* 2 px top/bottom margin and 4 px right/left margin */
div#header{
margin:2px 4px;
}
/* 1 px top margin, 4 px right/left margin and 8 px bottom margin */
div#header{
margin:1px 4px 8px;
}
/*
1 px top margin, 3 px right margin, 8 px bottom margin
and 5 px left margin
*/
div#header{
margin:1px 3px 8px 5px;
}
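Padding follows the same side order, and border has a shorthand of its own, grouping width, style and color:
/* 5 px top/bottom padding and 10 px right/left padding */
div#header{
	padding:5px 10px;
}
/* A 1 px solid black border on all sides */
div#header{
	border:1px solid #000;
}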
For font values, the shorthand property is applied in this order: font weight, font size/line height, font family.
/*
A bold font weight, 10 px big with a line height of 14 px
and desired font family specified in prioritized order.
The font weight value and the line height value can be omitted.
*/
div#header{
font:bold 10px/14px Verdana, Arial, Helvetica, sans-serif;
}
Background styles come in this order: background color, background image, background repeat, background attachment, background position. Like this:
/*
The element will get a white background and have
a non-repeated background image that is placed
in the element's top right corner.
*/
div#header{
background:#FFF url(images/funky-pic.png) no-repeat right top;
}
I think it was Eric Meyer who first coined this expression (or at least that’s where I first read it), and I think it describes what I sometimes feel. Whitespace blues is the feeling you experience when HTML tags that aren’t written next to each other result in weird effects, extra margins etc.
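Take, for instance, a snippet like this (purely illustrative):

<a href="/one/"><img src="images/one.png" alt="Picture one" /></a>
<a href="/two/"><img src="images/two.png" alt="Picture two" /></a>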
The above code can, in some cases/web browsers, generate extra margins and space in the displayed layout. Sometimes the only solution is to rewrite it like this:
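That is, the same illustrative snippet with the line break between the tags removed:

<a href="/one/"><img src="images/one.png" alt="Picture one" /></a><a href="/two/"><img src="images/two.png" alt="Picture two" /></a>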
Sad but true. Personally, I dislike giving up on writing well-structured code just to get rid of strange whitespace. So is taking whitespace into consideration really necessary for web browsers? Or should whitespace only have any effect whatsoever within PRE tags?
I’ve been meaning to write this post for a long time. But, as always when you hesitate, someone else comes along and writes exactly what you were going to write (in this case, Mark Wubben beat me to it). But I’m just going to write it anyway!
First, what is object detection? The general purpose of it is to check in JavaScript if, for instance, a certain method is supported as opposed to relying on detecting what web browser the visitor uses. For example:
// Object detection
if(document.getElementById){
// Use the document.getElementById method
// to access an element
}
// Browser detection
if(navigator.userAgent.search(/MSIE/) != -1){
// Deliver IE-specific code
}
Generally speaking, it is good practice to use object detection, especially given all the different web browsers out there on the market. It is also a way of future-proofing your application, as far as that can be done.
But it’s not a perfect solution that will work in all scenarios. I totally agree with Mark when he says that one has to complement it with some browser detection, because there are web browsers out there that claim to support one method or another, hence passing the object detection and then failing miserably on what one is trying to do.
I think it’s a bit narrow-minded to say things like:
Sure, in a perfect world. But the world isn’t perfect; we still have to deal with web browsers that act like they support what you’re trying to accomplish, but then don’t support it 100%, or not stably enough. Web interface development comes down to experience of web browser behavior, so it’s a bit pig-headed to say one should never use something that in some cases is a necessary complement. Web developers should use object detection as far as they can, but be ready to swallow their pride and use some browser detection to cover up for web browser flaws when necessary.
I mean, what counts in the end is the result, isn’t it?
Two short topics today (see the other one below). This one is about what I wrote about in the beginning of April: how to structure one’s CSS file. Now the more well-known Douglas Bowman has written a post where he explains some of his tricks.
And, as I also wrote in two comments to Douglas’ post, I use Bradbury Software’s TopStyle for writing my CSS, which gives me the possibility to expand or collapse CSS rules as I choose.
Also, one thing I think is missing in CSS is a class-like grouping method. For instance:
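Something along these lines, in purely hypothetical syntax (this is not valid CSS, just an illustration of the idea):

/* Define a group of declarations once... */
@group highlight{
	color:#C00;
	font-weight:bold;
}
/* ...and reuse it in several rules */
div#header h2{
	use:highlight;
}
ul#menu a:hover{
	use:highlight;
}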
Today, a rainy and gloomy day in Stockholm, seemed like a good day to talk about sIFR. Since the authors of sIFR (Mike Davidson and Mark Wubben) have written extensive and thorough explanations themselves, I won’t go into too much detail here.
Version 2 has also just been released.
However, there are three things I want to cover; Why sIFR has arisen, the concept and my opinion about it.
Why
The options for typography on the web are extremely limited: only a few basic fonts can be taken for granted on the visitor’s computer. This has held art directors, interface developers and their likes back, stuck with fonts like Arial and Verdana. Enter sIFR.
The concept
Basically, it is:
…a method to insert rich typography into web pages without sacrificing accessibility, search engine friendliness, or markup semantics.
The code for this doesn’t live in the XHTML/HTML file; it is applied through a JavaScript file and a Flash file. This means that the markup can look as simple as this: <h3>My heading</h3>. Then the magic begins: if the user’s web browser has JavaScript activated and the Flash plug-in, the script replaces the heading (or several headings, if that’s the case) with a small Flash movie that draws the text at the desired size, but gives you the ability to use any font you want, anti-aliased at that.
For people without JavaScript capabilities or the Flash plug-in, the H3 is simply displayed as normal.
My opinion
All in all, I think it serves a very good purpose, and I really appreciate it when people think outside the box. I’m also glad that they have thought of the accessibility perspective, and that there’s no tampering with the XHTML/HTML. As the authors themselves state, it should be used in moderation, mainly for headings and similar elements.
A common objection is: “Why don’t you just use images whose text and size will be dynamically generated on the server?”.
Well, each image is a separate request to the server, and, as opposed to images, Flash movies generated through sIFR scale according to the font size the user has set in his/her web browser. The downside is that they only scale when the document loads, not when the user resizes the text on the fly. Personally, I use Ctrl+ or Ctrl- all the time in Firefox while browsing web sites that don’t have a font size I like. This is merely a guess, but I think many users surf that way; they don’t just have font-size large for every web page they visit.
One can argue that you will reach a wider audience if you use images, since it is very likely that more users will accept images than JavaScript and Flash.
Images placed directly in the XHTML/HTML code also degrade nicely in screen readers, if you use the alt attribute correctly.
What you choose is up to you. I prefer having options, and I will most likely choose using image replacement or sIFR totally depending on the situation and project.
The WHATWG are working on the draft for Web Applications 1.0, which is about “extensions to HTML to make it more suitable for application development” and it “…represents a new version of HTML4 and XHTML1, along with a new version of the associated DOM2 HTML API“.
First, what is WHATWG?
It is a loose unofficial collaboration of Web browser manufacturers and interested parties who wish to develop new technologies designed to allow authors to write and deploy Applications over the World Wide Web.
Second, it will be interesting to see if the W3C will acknowledge it.
Third, I’m not sure that HTML 5, as Anne calls it, is a good name for it; it feels like something more competent than just a newer version of HTML.
However, maybe that name is only referring to the HTML part of it?
So, is this a good initiative? Or should we just stay with the current W3C recommendations about XHTML 1 and XHTML 2?
I don’t know.
What do you think?
PS. A nod to Anne for pointing me to this in the first place. DS.
Tommy has been asked the famous Ten Questions by the WSG. Although I don’t necessarily agree 100% with all of what Tommy says (for instance, about the benefits, or lack thereof, of using XHTML), he’s full of knowledge and it is a very good read.
Regarding coding semantically correct and using a strict doctype, be it HTML or XHTML, we’re definitely on the same page.
But when it comes to using the ABBR or the ACRONYM element for abbreviations, I think the discussion about different layers of presentation is far more important.
Personally, I use the ACRONYM element because I want IE users to be able to see it as well.
Si, capisco. But many people don’t understand. Or they don’t want to. Or both.
Interface code is supposed to consist of three layers:
Content (HTML)
Layout and looks (CSS)
Behavior/interaction (JavaScript)
This is so basic, just like using semantics. However, as Jeremy Keith writes in his excellent piece Gotta keep ’em separated (:hover Considered Harmful is also a recommended read), a common problem is that many, many developers use pseudo-classes in CSS for interaction effects.
Hence, of course they then complain about IE‘s incomplete support for CSS 2 (which is a very sad thing, I agree), since there are a lot more pseudo-classes available in it that are supported by web browsers like Firefox, Opera and others.
But pseudo-classes aren’t the way to go, and I don’t know what they’re doing in the CSS specification in the first place.
So why do developers use them? Like Jeremy, I believe it’s because it’s easier to do it that way, and because not all of them are accustomed to JavaScript. Peter-Paul Koch also wrote about this (amongst other things) almost a year ago, concluding:
However, everyone’s personal preference seems to be CSS these days, and that’s what bothers me. The balance is lost. People seem to be afraid of JavaScript.
And Jeremy wonderfully states:
There is a gap in your skill set that needs to be filled.
That’s all to say, really. Do it right or don’t use it. Many developers (including my Interface Developer colleague at the company where I work) argue:
But it’s so easy to have it in the CSS file.
But what kind of an argument is that? As easy as using tables instead of CSS? As easy as having FONT tags? As easy as using DIV tags for every element on the page and then style them?
It shouldn’t be easy, it should be correct.
There are two semi-valid counterarguments that should be answered:
Accessibility
True. There might be cases where the visitor’s web browser has JavaScript deactivated or doesn’t support it, but supports CSS.
But as always, the page should be usable for visitors without script support, so this is not a valid excuse. Another factor is that DOM scripting is more widely supported than CSS 2, which means that you will probably be able to target more users with an onfocus event than with the corresponding :focus pseudo-class in CSS.
Hover effects on A tags
Having numerous links in a page would end up in massive code if you were to add onmouseover and onmouseout attributes to every A tag. But that’s not the way I’d do it, and not the way I’d recommend. Luckily, Mark has written the code for display, so I don’t have to put it here.
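Roughly, though, the idea is something like this sketch (the class name is made up, and Mark’s actual code is more thorough):

window.onload = function(){
	// Object detection before touching the document
	if(!document.getElementsByTagName){
		return;
	}
	var links = document.getElementsByTagName("a");
	for(var i = 0; i < links.length; i++){
		// One handler per link, instead of inline attributes in the HTML
		links[i].onmouseover = function(){
			this.className = "hovered";
		};
		links[i].onmouseout = function(){
			this.className = "";
		};
	}
};

Combined with a .hovered rule in the style sheet, this gives the same effect as :hover, but keeps the behavior in the script layer.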
Now that we’ve finally managed to separate content and presentation, the risk is that the CSS file will just become the new bulk file, with interaction in it as well. Please, please don’t let us repeat the mistakes we made with HTML files in the beginning. Let’s separate things into where they belong.
When developing web sites, making them as accessible as possible is crucial, both to people with different kinds of disabilities and to all kinds of different devices, web browsers, screen readers etc.
Why? Out of respect for the user, while making the web site available to as many users as possible.
You need to be aware that the web site will not look the same to all visitors; not all UAs handle CSS and other things. The solution isn’t to code in HTML 3.2 and avoid using CSS and JavaScript, since that’s just plain dumb. The most important thing is to code semantically correctly, and to use HTML 4 Strict or XHTML Strict. If you use JavaScript, make sure that there’s some option for those who have it turned off or use a web browser that doesn’t support it.
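For reference, these are the strict doctypes in question:

HTML 4.01 Strict:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
XHTML 1.0 Strict:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">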
In 1999, the W3C’s WAI group released the 1.0 guidelines for creating accessible web sites (version 2.0 is still a working draft). They consist of three priorities:
Priority 1
Things the web content developer must satisfy. Fulfilling this leads to the Conformance Level “A”.
Priority 2
Things the web content developer should satisfy. Fulfilling this leads to the Conformance Level “AA”.
Priority 3
Things the web content developer may address. Fulfilling this leads to the Conformance Level “AAA”.
Eager to test your code now? You can check color contrasts with a tool that Roger gave me a tip about, and there’s a colorblind filter service available, but most of all I recommend the excellent accessibility-checking tool Bobby.
I have sinned. I confess. I’ve had the div-itis.
But now I’m cured (I hope)!
What is semantics about then? Basically, it’s about using the correct elements for the corresponding purpose.
This means using H1 to H6 elements for headings, P elements for paragraphs of text, UL/OL combined with LI elements for all kinds of lists and so on.
What you should not do is use DIV or SPAN elements for everything and then just style it all up in your CSS file.
For example, this is not semantically correct:
<div class="heading">Title</div>
<div class="text-paragraph">
Some text in <span class="bold">a paragraph</span>...
</div>
<div class="list-container">
<div class="list-item">Item one</div>
<div class="list-item">Item two</div>
<div class="list-item">Item three</div>
</div>
As opposed to this, that is semantically correct:
<h1>Title</h1>
<p>
Some text in <strong>a paragraph</strong>...
</p>
<ul>
<li>Item one</li>
<li>Item two</li>
<li>Item three</li>
</ul>
Some common objections are: a) "What's the gain?" and b) "But H1 elements don't look good/have different margins in different web browsers".
The answers:
a) The code will be easier to read when you can instantly see what the purpose of each element is. Another important factor is that the content will still be readable in web browsers that don't support CSS, for instance in PDAs, cell phones etc.
This leads to the point of being able to say that the web site will work in any web browser, instead of saying that the users have to have IE 5.0+, Firefox and so on.
Of course, this point only applies when the web site doesn't require that kind of support for scripting and the like. Generally, it all comes down to a question of accessibility: it should be possible to read and see the contents of a web site even if one's web browser doesn't support CSS.
b) That doesn't matter at all. I'm not preaching that you have to/should use the default look of those elements. The display and layout of elements should be taken care of in CSS, not in HTML (at all); the HTML should only be about structure and data. A brilliant example of this is the css Zen Garden, where all the different designs use the same HTML file and only apply different CSS to it.
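For example, if the default heading margins annoy you, a few lines of CSS (the values here are just made up for illustration) settle it once and for all:

h1 {
    margin: 0 0 0.5em 0; /* the same margin in every web browser */
    font-size: 150%;
    font-weight: normal;
}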
PS. I promise not to disclose any information about my visitors. But I just have to tell you this one thing: there is life out there! My blog has had a visitor from NASA. They put people on the moon, I write about developing web sites… Same, same. 😉 DS.
Since it consists of already existing techniques, it's just a matter of branding. I think it's good that it gets hyped and has gotten a new name to market. Used the correct way and under suitable circumstances, it can certainly enhance the user experience.
I do think it is an interesting way of doing it, and I hope the hype makes it easier to persuade/convince project managers to allow its usage in appropriate web sites. What do you think?
PS. Have a nice weekend, I’ll write more on Monday. DS.
In my previous job I worked for a company that has offshore development, mostly for bulk programming purposes to keep costs down. And not in any of the more common offshore places like India or Russia. No, their offshore development is in Belgrade, Serbia.
All the developers I've met/spoken to in the Belgrade office are very nice, but in my opinion the collaboration hasn't really worked out for them yet (due to a number of reasons that I won't go into here).
What I wonder is what kind of buttons are the most suitable: the built-in system buttons, creating your own images, or using links with JavaScript calls?
Generally, I don't like links with button functionality, like submitting a form, so to me that isn't an option.
When it comes to the two other alternatives, there isn't any easy answer. It can be a design issue: creating your own images can make everything look so much better, you know they will look the same in all web browsers, etc.
However, something I find confusing is when designers (usually Mac users themselves) create a design where the buttons just look like the buttons in Mac OS X. There's nothing wrong with that in itself, but it feels weird to have a design where you have to use images for buttons when they're just a rip-off of the built-in system buttons from another operating system.
If one is going to use images, I ask for more creativity, please.
But when it comes to the recognition factor, I lean towards using the built-in system buttons, since they look the way buttons do on most web pages, and they look like the buttons the user is used to, so he/she won't have any problem finding which button to click.
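To make the three alternatives concrete, this is roughly what they look like in the markup (the image name is just an example):

<!-- Built-in system button: no script needed, familiar to the user -->
<input type="submit" value="Save" />

<!-- Custom image button: full design control, still a real submit button -->
<input type="image" src="save.gif" alt="Save" />

<!-- Link with a JavaScript call: the one I'd avoid, since it breaks without script -->
<a href="#" onclick="document.forms[0].submit(); return false;">Save</a>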
Personally, I'm of the opinion that frames should never be used (iframes are a totally different question; they're just part of a "normal" page).
There are a number of reasons why you shouldn't use frames:
A couple of examples for the developer:
Difficult to keep the different pages synchronized, especially when it comes to manual reloading by the user
Hard to push out content, e.g. when a navigation in a frame has been updated
Search engines can find single pages and then link to them out of their context
A couple of examples for the user:
Impossible to create a bookmark for a certain page
Not possible to save a link that, for instance, goes directly to a product page
When it comes to the technical aspect, there are such good possibilities to cache parts of a page for reloading purposes that server load shouldn't be an argument for using frames.
Another argument people use in favor of frames is that the menu frame is always there (for instance, to the left) and doesn't "blink", but this is more of a browser thing than a merit of the technical solution. If one, for example, uses a Gecko-based web browser (such as Firefox), it fetches the page one has navigated to while keeping the current page visible until the next page is fully loaded, so one doesn't experience a white in-between page or a jump, as opposed to how it's handled in Internet Explorer.
From a web user interface perspective, there are a number of ways to emulate frames if one wants to, and the day Internet Explorer supports the CSS declaration position: fixed it will be a piece of cake.
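For instance, a menu that stays put, probably the most common reason for frames, only takes a few lines of CSS in web browsers that support it (the selectors and measurements below are just examples):

#menu {
    position: fixed; /* stays put when the page scrolls, just like a frame */
    top: 0;
    left: 0;
    width: 10em;
}
#content {
    margin-left: 11em; /* leave room for the fixed menu */
}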
So why do people still use frames? Lack of competence, or are there cases where it is motivated?
First of all, I have to tell you something funny that happened at work yesterday… I was trying to convince the girl sitting next to me to watch the movie Finding Neverland, and we talked generally about the movie. Then I went to Aftonbladet.se, copied a quote from their review of the movie and sent it to her through MSN Messenger. It read something like: "A fabulous tribute to the lust for dreaming and writing".
The thing was, we had been talking about our intranet in between, while I was doing this, so she thought the quote was my opinion about our intranet! 🙂 I really wish we had an intranet that gave me that feeling!
Which leads me to the topic of today… Naturally, everyone wants as much internal information as possible, but are traditional intranets the way to go? How many people visit their intranets at all? For those who don't, or rarely do, is it due to lack of time, a boring intranet layout/form, or "dead" content?
Something I think there's a lot of talk about is internal portals in companies, where people shouldn't have to look for documents on a lot of mapped network drives, but instead use a common general interface to access all relevant information, which of course would be role-based as well. Is this the way to go?
About a week and a half ago, I met the CEO of Wipcore, and he presented the new version of their system for e-commerce. Before I met him, I had decided to question the previous version of their tool, where the admin interface demanded that the user use Internet Explorer on a PC, and the fact that a DLL fix had to be installed on computers that didn't have developer programs with the necessary DLL files.
I was going to argue with him that, first of all, if the system is web based, you shouldn't need to install extra DLL files just to be able to run it, because then the principle of being accessible from any computer falls apart.
Second of all, you don't want to demand that they use Internet Explorer on a PC (even though it is an admin tool, where you can place different demands on the user/administrator). The least you can ask for, in my eyes, is that it is available in at least one web browser per platform among the three major ones: Windows, Mac and Linux. The only thing needed to achieve this is that you, besides Internet Explorer on PC, make sure you support Firefox (which also contains support for WYSIWYG editing through Midas), as sketched below.
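For those who haven't seen Midas: you switch it on by setting designMode on a document, typically in an iframe. A minimal sketch:

<iframe id="editor" src="about:blank"></iframe>

<script type="text/javascript">
window.onload = function () {
    // Turning on designMode makes the iframe's document editable
    document.getElementById("editor").contentDocument.designMode = "on";
};
</script>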
But before I got the chance to confront him about this, he presented their new .NET based version of the admin tool which, lo and behold, wasn't even in a web browser interface any more. They had come to the conclusion that they weren't satisfied with the functionality and stability offered in web browsers, and had decided to build a Windows application in .NET (a so-called Windows Form).
Since the administrators in their respective implementations were so few, and didn't have any real interest in working in the system from other computers or from, for example, home, they thought that a "real" application suited them better.
I haven't really decided what I think about this yet. Part of me is of the principle that as many things as possible should be web based and not require any installations; the only thing needed should be a capable web browser. On the other hand, I'm aware of the fact that, among other things, the functionality a Windows application offers can't really be matched by a web browser.
So, the question is: have they chosen the correct path or not? Are we trying to create too advanced solutions that web browsers aren't suited/ready for, or is it lack of knowledge and competence that results in companies avoiding web based interfaces?
Recently, I've been moving towards an attitude where I want to satisfy as many users as possible, which means that everyone should be able to see and use the web sites I build. To me, it feels kind of like a Google philosophy: reaching as many users as possible with really easy-to-use interfaces.
It has gone so far that I even avoid requiring JavaScript to be enabled in the user's web browser (people who have worked with me previously probably won't believe this, I love JavaScript!). But it's more about what it's worth: one shouldn't use functions, scripts, plug-ins etc. just for the sake of it, but actually use them when it is motivated and gives a necessary enhancement to the web site/page.
I mean, how many times haven't we all done very advanced things on a web page, things we've been particularly pleased with, only to find that it looks different on another computer and doesn't work on a third one just because, for instance, script is disabled, and so on.
No, I have moved more in a direction where, instead of using advanced functions in the client's web browser, I try to create web user interfaces that are managed through CSS and where content and its looks are totally separated, as in the brilliant example CSS Zen Garden, where every page has the same HTML and the CSS takes care of everything that has to do with looks and layout.
Also, I really like it when web sites give the user an option to change the font size for the current web site/page without having to go into the web browser settings to achieve that.
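A bare-bones sketch of such an option could look like this (in a real implementation, the links should be generated by script so they don't show up for visitors without JavaScript, and the chosen size should be saved in a cookie):

<script type="text/javascript">
function setFontSize(size) {
    // Setting the size on BODY makes all relative (em/%) sizes scale with it
    document.body.style.fontSize = size;
}
</script>

<a href="#" onclick="setFontSize('100%'); return false;">Normal text</a>
<a href="#" onclick="setFontSize('125%'); return false;">Larger text</a>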
I'm also of the firm belief that to reach the major audience (i.e. "all" users), it's vital to make things as easy to use as possible for them. I believe most inexperienced users are bothered/discouraged by texts like:
“Optimized resolution for this site is 800*600”
“You need to have JavaScript enabled in your web browser to be able to use this web site”
“You have to install Flash to hear our epileptic music and to see our bouncing circles”
I think the future is to follow the W3C recommendations, which most web browsers have pretty good support for today (except, mainly, the PC version of Internet Explorer), to reach as many users as possible.
To start thinking about the end user and show them respect, instead of just complaining about their lack of knowledge and thinking that they’re ignorant.
I suffer from a lack of motivation. I mean, it doesn't just bother me, I suffer from it. It isn't really related to my tasks here at work; it's just that web browsers really make me depressed.
Everything I code is tested in seven different web browsers and, sure as hell, there's ALWAYS something that differs between them. It's always some pixel, always in the last web browser you look in, that ruins the day.
I’m thinking about changing my ambitions with what I want to do…
Maybe program some more advanced things that demand a lot of logic and are tough to write; but at least then the day wouldn't consist of: "Oh no, it pushes to the right. [really dirty word] Where the HELL did that space come from?" and so on.
I'm convinced that I would prefer working with something that develops my logical thinking, rather than just accumulating experience of what's wrong in every web browser. All the knowledge I built up about the bugs in Netscape 4 is really useful now…