Lately, and especially since the beginning of the summer after the @media conference, people have proclaimed that the web standards war is won and that we should move on to focus on other things. Let me say this: the war is far from over.
Have you ever wished that WordPress would deliver HTML instead of XHTML on your blog? And if so, had no idea how to control the default XHTML tags generated in comments and the like? Fret no more!
It seems likely that Internet Explorer 7 will be released at the end of 2006. First, let me say that the IE team has undoubtedly done some great work when it comes to fixing the numerous flaws in IE 6, as well as adding a heap of new CSS support (detailed in Details on our CSS changes for IE7), although I think it’s a joke that display: table still isn’t supported.
But, my main question is: is catching up good enough?
Remember something called Geek Meet? People with a sincere interest in web development, always up for learning more and hanging out with like-minded people? Well, fret no more! It’s time again! 🙂
For those of you using the very flexible AJAX-S for slideshows, I have now created a little add-on script that hides the footer and only shows it when you move your mouse pointer over it.
For many web developers, CSS means numerous ways to create flexible designs, control fonts in a powerful manner and have a central location for controlling the entire look of your web site.
Unfortunately, CSS is far from perfect so I thought I’d list the most common disappointments I have, given the current state of CSS support, and I will also go a little into what your options are and what the future holds.
Event handling in JavaScript has been an issue for many web developers, and countless people have taken their own stab at solving it. When I wrote my post AJAX, JavaScript and accessibility, some commenters asked for a follow-up post explaining event handling in JavaScript. My idea here is to give you some basic background and also to tell you about a new and interesting solution.
Yes! I’m back! And let me tell you that I’ve missed you, and I’ve missed writing. There’s something extraordinary about writing blog posts and then getting in touch and making friends with people from all over the world. Having discussions with like-minded people about topics we share an interest in.
This post will be filled to the brim with various information; from a new feed URL and other changes to what I’ve been doing this summer, so please read on.
Next Geek Meet will take place June 8th at adocca entertainment, Södermälarstrand 57B, floor 6, and the time is 18.30. Please write a comment and provide a valid e-mail address if you know you can attend.
Last night we held the first Geek Meet in Stockholm. In my experience, it was an immense success, if for nothing else, at least compared to my expectations.
Pretty much everyone who had signed up actually came, about 17 people in total. After a rough start with a lot of unexpectedly locked doors in the building, people getting lost, one person held up by a robbery in downtown Stockholm etc., everyone seemed to really enjoy it.
Yesterday I went to visit some fellow consultants at their assignment for a subsidiary/department of one of Sweden’s largest banks. We had a talk about AJAX in general and different ways to implement it, and one of them opened his web browser to navigate to some AJAX-based web sites.
Something interesting followed next that really baffled me. Most web sites he went to had empty white patches where no content showed up, and some web pages even went completely blank. We knew for sure that JavaScript was enabled in his web browser of choice (IE, but still almost a real web browser… ;-)) so that couldn’t be the problem.
Then, naturally, we had to test my ASK script to see what was going on. The version we got served there was the fallback version that works without JavaScript, using regular links that reload the entire web page, meaning that no JavaScript events were applied.
After some digging, we found out that the JavaScript file was completely blank! The reason for this, apparently, is that the proxy server they had to go through to access the internet completely cleansed any JavaScript file containing this text:
new ActiveXObject
So much for object detection and every other approach we recommend to web developers. Not a single line of code was left behind in the file. And the problem is that it won’t throw an error or show the content of a noscript tag either; everything just stops working.
My initial reaction was that if they have such a tight security environment doing that, I really don’t care to cater to them. But as my boiling blood calmed down (kind of an exaggeration), I realized that this company is too big to ignore the fact that all its users got shut out.
Also, if they have a situation like this, it’s likely that many other large companies have a similar solution.
Conclusion: if you want to develop AJAX apps, make sure they work without JavaScript as well, and apply all scripts in an unobtrusive fashion. I’m just glad that ASK passed the test, with its accessible groundwork and the AJAX functionality built on top of that. (Actually, the Google Analytics code on the ASK page did throw an error when we tested it, but I think that was just a consequence of the proxy server doing its job…)
Is Web 2.0 as hyped as the dot-com businesses were? Are some people in every company/organization/movement more interested in patting each others’ backs than actually doing something worthwhile? Is the web still immensely exciting? Does Microsoft have a bad reputation? Are people still blinded by different technologies instead of focusing on the actual goals of a product?
This article is co-written with Anne van Kesteren, W3C member, contributor to the WHATWG, and specifications, R&D and QA person at Opera.
When developing a web page, DOM methods are generally the way to go when dynamically altering elements’ attributes and performing other operations. But what about adding content to a web page in the most efficient manner, both code- and performance-wise? We claim that innerHTML is unmatched by any DOM method available and that in most, if not all, situations it is the best option.
People seem to have this feeling that innerHTML is evil. Instead of one line of innerHTML, you would use about twenty lines of calls to the DOM, each such line making one change. However, innerHTML is actually not that bad. The web browser parses it much like it parses the original page and builds DOM nodes out of it, which are then inserted at the requested location. Some mutation events are dispatched for the few who care, and all is fine.
When it comes to having greater scalability in a web page, especially in AJAX scenarios, innerHTML offers unmatched flexibility. There have also been benchmark tests verifying that innerHTML is more efficient than using DOM methods.
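To illustrate the difference in code volume, here is a hedged sketch (the function names are made up for this example): both functions render the same list into a container element, one with a single innerHTML assignment, the other with one DOM call per change.

```javascript
// Build a simple list with one innerHTML assignment
function addListWithInnerHTML(container, items) {
    var html = "<ul>";
    for (var i = 0; i < items.length; i++) {
        html += "<li>" + items[i] + "</li>";
    }
    container.innerHTML = html + "</ul>";
}

// The same list built with DOM methods: one call per node created or appended
function addListWithDOM(container, items) {
    var list = document.createElement("ul");
    for (var i = 0; i < items.length; i++) {
        var item = document.createElement("li");
        item.appendChild(document.createTextNode(items[i]));
        list.appendChild(item);
    }
    container.appendChild(list);
}
```

For larger fragments the DOM version grows far faster than the innerHTML one, which is where the scalability argument comes from.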
The fact that it is not in a standard is simply because nobody got around to it. If you read the mailing list of the W3C Web API Working Group, you can see that Opera, Mozilla and Apple want it standardized, and we bet Microsoft would like the same thing. New entrants in the web browser market are probably interested as well, given that it has to be supported anyway. Not being in a standard is probably its biggest problem, apart from the name, which doesn’t really scale well. On the other hand, people complain a lot about document.write() as well, which is part of DOM Level 2 HTML.
So, go on! Start, or continue, to use the best tool available for the job!
Most web sites out there don’t abide by web standards, use table-based layouts and are JavaScript-dependent. If you work with web development and still haven’t got a clue, I think all hope is gone. Then you must be sincerely devoted to not doing a good job, or to straying from conventions just out of spite.
If you write valid and semantic markup, and add JavaScript in an unobtrusive fashion, your web site has come a long way when it comes to accessibility and SEO as well. It’s all there, one big package of building something great.
If you don’t do it that way and aren’t willing to learn, I won’t bother you anymore. It’s your problem, and something you have to deal with.
Law enforcement
Maybe I’m naive, but I don’t believe in laws enforcing accessibility. They can never be 100% fair and balanced, and it’s a highly subjective matter: what is truly accessible? On the other hand, I understand that when it comes to the public sector there have to be some regulations, since we’re dealing with information and facts that every citizen has a right to get to. That I support.
For the private sector, however, I sincerely hope that reaching more visitors – and thus getting more customers, a better search engine ranking, goodwill, and actually doing the right thing – should be incentive enough.
In the end, if companies choose to make their web sites inaccessible, it has to be their call. It’s their web site and they can do whatever they want with it. They will probably get bad press, as with Target, but I don’t think suing helps. Ultimately, my belief (read: vision) is that the market will cleanse itself; if you do things badly, people will choose another company to do business with. As easy as that.
Accessibility consultants
On the other hand, we have people fighting for accessibility. Most of them are good people doing it for a good cause, but sometimes their critique gets too harsh or comes across as elitist, and that doesn’t help. Companies pointed out in such contexts don’t take it as constructive criticism but as an attack, and choose to ignore the people pointing out their flaws. It has to be done in a more respectful manner.
Also, critique is always aimed at the companies it feels good to point the finger at. I’ve never seen anyone lash out at Flickr or Google Maps, although they don’t work properly with JavaScript disabled: the slideshow just goes dark in Flickr, and Google Maps redirects you to a web page telling you that your web browser isn’t fully supported.
Flickr slideshow with JavaScript disabled
The Google Maps redirect page if JavaScript is disabled
Why do people leave them be? My guess is that people like Flickr so much, and that Google Maps has such a great API for building mash-ups, that they’re willing to overlook such things. Don’t. Be consistent.
A great initiative
Accessibility is often looked upon as something holding web development back, which isn’t true if it’s implemented correctly. Also, some think that trying to make a web site accessible for people with any disability and/or platform means that it has to work exactly the same for everyone. It won’t. But make sure it degrades nicely so everyone can at least partake of the information being given.
To me, just bashing inaccessible web sites doesn’t seem to do the trick. The people responsible just seclude themselves in their shells and hope the problem will go away. Instead, I applaud initiatives such as Accessites.org, which is about rewarding good-looking, functionally excellent web sites that are at the same time accessible. I think that’s the way to do it: to show that something can be both great and accessible.
I’ve made a very minor change to the event handling to work around a bug in IE’s garbage collector (something I hear will be addressed automatically in IE 7). In 99.9% of cases you won’t notice any difference, but if you use it in a very advanced web site/web application it might make things better and less resource intensive.
Updated October 25th 2007
I get a number of e-mails asking how to start the slideshow as soon as the page has loaded. Add this code to the end of jas.js to make it happen:
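Something along these lines should do it. Hedged sketch: startSlideShow is a hypothetical name standing in for JaS’s actual start routine, which may be named differently in your copy of jas.js.

```javascript
// The typeof guard only makes this sketch safe to load outside a browser
if (typeof window !== "undefined") {
    window.onload = function () {
        // The setTimeout call works around a content parsing bug in IE
        window.setTimeout(function () {
            startSlideShow(); // hypothetical name for JaS's start routine
        }, 1);
    };
}
```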
(The setTimeout is to avoid a content parsing bug in Internet Explorer)
Pretty much everyone wants to display and show images to other people, right? Many use Flickr for this, and while I think it’s a great idea with some wonderful features, my main gripe is that if I present images, I want to do it on my own web site.
People who do it themselves, on the other hand, always think Flash is necessary just to have fading and a nice little slideshow. Not true.
Therefore, I created JaS – JavaScript Slides. It is a highly customizable JavaScript library for easily turning your images into a collection viewable as a slideshow, and with fading effects, if desired. It also supports automatic thumbnail creation and tagging of images, so the viewers can find the exact images they’re looking for.
Humbly described, it’s like your own little mini-Flickr that you can use wherever you want to, and skin and brand it the way you feel appropriate. It’s also a way to showcase the independence and separation of the interaction and the design of a web page.
The geek meet has gotten a sponsor that offers a place to be, food and drink. Not bad, eh? 🙂 The new location is: adocca entertainment, Södermälarstrand 57B, floor 6, and the time is 19.00.
Ok, this idea might crash and burn hard, but I just felt I had to do it! 🙂
The idea is to have a very informal gathering of people located in, or visiting, the Stockholm area who are interested in web standards, semantics, accessibility etc. It will be a time for people to meet and discuss, get to know each other and share experiences. The meeting will take place on April 25th at adocca entertainment, Södermälarstrand 57B, floor 6, and the time is 19.00.
Does this sound interesting to you? If yes, please write a comment letting me know if you’re coming!
So, if this doesn’t fail miserably, I really look forward to meeting you then! 🙂
Today I have a debate article in Computer Sweden, on page 2, no less (meaning that everyone will read it :-)). It can also be read here: Låt användarna påverka webben. Most of it just states the obvious about focusing on end users and catering to all different kinds of accessibility needs, but I also manage to throw in a little comment regarding what I think of the Web 2.0 hype.
Nevertheless, reaching 130 000 readers is never bad. 🙂
An alternative solution to this problem is my FlashReplace library.
Although news of this has been around for a while, many people seem to have missed it and/or didn’t think it was worth reading up on. On the contrary, the implications of this are huge and will most likely affect a lot of web sites. Due to the patent case with Eolas, Microsoft has been forced to update how ActiveX components behave in web pages.
This dreaded update, named Microsoft Security Advisory (912945), has been available for a couple of months, but on April 11 it will be forced out en masse through Windows Update so we have a few days till all hell breaks loose. If you want to test your web pages before that, you can download the patch and install it right now.
The gist of the patch is that no interaction with ActiveX elements is allowed until the user has activated the control by clicking it, or by tabbing to it and then pressing the spacebar or enter. When hovering over the ActiveX element, the user is presented with a tooltip that reads:
Click to activate and use this control
Naturally, no one wants their Flash movies, videos and the like to be presented to the end user like that. “Luckily”, there’s a fix for it, which I guess is due to some kind of loophole in the patent: if you create the ActiveX object, which in most cases means an object tag, through script, you bypass this security warning.
There’s an article on MSDN, Activating ActiveX Controls, which describes different techniques for doing this. Noteworthy is that it won’t work with inline scripts in the web page, only external ones.
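A hedged sketch of what such an external script might look like; the function and parameter names are made up, and innerHTML is used since adding param elements to an object through DOM methods can be problematic in IE.

```javascript
// This code must live in an external .js file; inline scripts in the page
// do not bypass the activation requirement
function embedFlash(container, movieUrl, width, height) {
    // innerHTML is used here since adding param elements to an object
    // through DOM methods can fail in IE
    container.innerHTML = '<object type="application/x-shockwave-flash"' +
        ' data="' + movieUrl + '" width="' + width + '" height="' + height + '">' +
        '<param name="movie" value="' + movieUrl + '" />' +
        '</object>';
}
```

It would then be called from that same external file, e.g. embedFlash(document.getElementById("flash-area"), "movie.swf", 400, 300), where the element id and file name are placeholders.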
Updated April 6th
Tanny pointed out a serious problem with the JavaScript solution; something I’d read about but hadn’t tested properly. If Disable Script Debugging is disabled in IE (the checkbox is unchecked), the script workaround won’t function either. However, I believe this setting is enabled by default in IE, so it will hopefully not affect a majority of end users. You find the option under:
Tools > Internet Options > Advanced, under Browsing.
What I think of this
I don’t know the deeper details of the patent case, but I think the whole idea sounds ridiculous. My general opinions/fears are:
Using Flash or video in your web pages shouldn’t, in my opinion, be dependent on whether script is available/enabled.
There will be so many cases of poor JavaScript practices trying to add content to a web page.
I’ve done some testing and ran into problems in IE when adding param elements to an object using DOM methods. Instead, writing out the same HTML code by using the innerHTML property worked… 😐
With this, XHTML web pages served as application/xhtml+xml will probably never see the light of day, since a lot of web pages will now depend on code like document.write and innerHTML (Note: innerHTML does indeed work in Firefox when the XHTML code is served as application/xhtml+xml).
What happens if/when Microsoft manages to appeal this decision and win in court? Should we all then change the code again?
If this sounds like too much to you and you want a library/tool to do all this for you when it comes to using Flash, you can take a look at FlashObject (although unfortunately it relies on innerHTML to render the content).
How to uninstall the update
As life on the web goes, many web developers won’t be aware of this, which means that you, as an end user, will have to activate every ActiveX control you come across. The solution to this is to uninstall the patch (thanks to City Of Rain for the heads-up):
Go to the Control Panel
Choose Add or Remove Programs
Check the Show Updates box
Find Update for Windows XP (KB912945) and choose Remove
So, whatever you do, please read up on this. It will affect you, as a web developer, end user or when supporting your grandfather’s computer usage…
With the advent of mass hype for building AJAX solutions, I find it necessary to shed some light on AJAX and JavaScript implementations, how they relate to and affect accessibility, and how the two can co-exist; one doesn’t exclude the other.
What is a progressive enhancement/unobtrusive JavaScript approach?
First, a good JavaScript approach is about implementing JavaScript in an unobtrusive way. Basically, what this means is avoiding some basic bad implementations:
No more inline event handlers in HTML elements, meaning that code like this should never be used:
<div onclick="doSomethingAnnoying()">A div</div>
Definitely never ever use javascript: links, like this:
<a href="javascript:doSomethingAnnoying()">A link</a>
No inline JavaScript blocks in your web pages at all.
How should I do it then?
Common things to think about are:
Have all your JavaScript in external files, for better accessibility and performance (JavaScript files are cached by the web browser and only need to be retrieved once), and apply events to elements from there.
Only apply JavaScript event handlers to elements that already have built-in functionality for communicating, like links and submit buttons.
Make sure the web site functions without JavaScript. JavaScript is supposed to be used to spice things up on top of already existing functionality, not to be the cornerstone the whole web site depends on.
Give me a good example
Sure! For instance, say you want to apply a certain JavaScript event to some links in your web page that show an information layer (e.g. a div that is initially hidden). How do you do it?
Use the window.onload event, which is triggered when the web page is fully loaded, to then apply your events to desired elements. There are many different ways of doing this and how to handle events, so here’s a simple example:
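A minimal sketch of such a script; the class name "show-info", the element id "info-layer" and the handler details are assumptions made for illustration.

```javascript
function applyEvents() {
    // Object detection: only proceed if the needed DOM methods exist
    if (document.getElementById && document.getElementsByTagName) {
        var arrAllLinks = document.getElementsByTagName("a");
        var oLink;
        for (var i = 0; i < arrAllLinks.length; i++) {
            oLink = arrAllLinks[i];
            // "show-info" is an assumed class name for this sketch
            if (oLink.className === "show-info") {
                oLink.onclick = function (oEvent) {
                    // oEvent: the standard event object; window.event: IE's
                    // proprietary one
                    var oEvt = oEvent || window.event;
                    if (oEvt && oEvt.preventDefault) {
                        oEvt.preventDefault();
                    }
                    // "info-layer" is an assumed id for the hidden div
                    document.getElementById("info-layer").style.display = "block";
                    return false; // also cancels the default navigation in IE
                };
            }
        }
    }
}

// Apply the events when the page is fully loaded (the typeof guard just
// keeps this sketch loadable outside a browser)
if (typeof window !== "undefined") {
    window.onload = applyEvents;
}
```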
The result is that web browsers that have JavaScript activated and that support the document.getElementById and document.getElementsByTagName methods will cancel the link’s navigation to the my-details.php page and instead show an information layer directly in the page. For those that don’t meet those criteria, the link will simply take them to the my-details page. Offering something extra for those with JavaScript enabled, while still degrading nicely and being fully functional for others.
Let’s break the script down, what happened?
window.onload = applyEvents;
First I tell the window to call a function when its onload event is triggered, i.e. when the page is fully loaded. Notice: no parentheses after the function name; with them, it would have been called immediately.
In the applyEvents function, the first line is this:
if (document.getElementById && document.getElementsByTagName) {
It uses an approach called object detection to see whether the document object supports the two methods we want to use: document.getElementById and document.getElementsByTagName (both are widely supported by most web browsers, don’t worry).
var arrAllLinks = document.getElementsByTagName("a");
Gets a collection of all link elements in the page (this could be done more efficiently with the getElementsByClassName script).
Loops through the collection of links to find the ones with a certain class name. Note the use of the variable oLink to avoid repeated lookups in the collection, and that it is declared outside the loop. All for performance reasons.
Applies the onclick event to the matching link(s) and cancels their default behavior. The check for oEvent in the event handler is the standard way of handling events, while event caters to Internet Explorer’s flawed and proprietary event handling. Now a click will show the information layer element instead.
What about AJAX, it said so in the title?
With the good practices and examples I’ve given above, it’s pretty much all about applying the same knowledge when doing something AJAX-based. With my AJAX library, ASK, it was my intention to implement it in that manner, and at the same time cater to well-known usability problems, such as making the back button work and making it possible to bookmark a specific state of an AJAX-based page. I definitely urge you to take a look at it and play around with it.
Something to think of when it comes to screen readers is that they might support the JavaScript you use but won’t notify the user that something has been updated in the page. For more on this discussion, please read Derek’s Javascript and Accessibility (yes, I saw the name of his post after I initially posted this one… 🙂 ).
As soon as the word accessibility is mentioned very strong feelings and opinions come into motion and the discussions go on all night. Therefore, I felt the need to take a shot at explaining my view on accessibility.
To me, it is all about making web sites accessible to people with disabilities and, at the same time, to people using different operating systems, web browsers and devices. I’m sure the general notion when the term accessibility was initially coined was to focus on, and cater to, people with special needs who don’t have the same prerequisites as everyone else. A very noble initiative, and a cornerstone if we ever want the web to be taken seriously.
But when making a web site accessible to people with disabilities, why wouldn’t we at the same time make it accessible to people who aren’t using Windows and Internet Explorer? It’s a mindset and an attitude that go hand-in-hand for me. Surely, everyone wants to reach an audience as wide as possible, right?
A thing that bothers me, though, is when accessibility advocates proclaim that we have to stay away from using JavaScript, Flash et al, all in the name of accessibility. Accessibility and using JavaScript, for example, aren’t mutually exclusive. It’s all about progressive enhancement: build a common ground, then implement enriching features in an unobtrusive way that doesn’t rule out accessibility.
So, let’s stop bickering about what we read into the word accessibility, and instead start focusing on reaching as many people as possible with this wonderful medium called the Internet!
This has to be a joke, some kind of twisted humor. Apparently the U.S. Government granted a patent to a web design company in California, one which:
…covers all rich-media technology implementations, including Flash, Flex, Java, Ajax, and XAML, when the rich-media application is accessed on any device over the Internet, including desktops, mobile devices, set-top boxes, and video game consoles…
This morning, when I read the headline article in Computer Sweden, I got upset, tired and saddened. Basically, the article calls Swedish companies out of date just because they aren’t using AJAX on their web sites. It also somehow manages to convey the notion that AJAX = Web 2.0.
First, AJAX is not Web 2.0. A Web 2.0 company/solution might use AJAX, and that’s it. Using AJAX doesn’t automatically make it Web 2.0. Period.
Second, calling AJAX modern is just ignorant. The technical possibilities have been around for years, the only thing that’s “new” is the acronym and the hype.
Third, even if it were a modern approach, why would everyone benefit from it? The web is already filled to the brim with unmotivated AJAX solutions; web sites that have sacrificed accessibility and usability just to be doing the latest thing. Now this magazine, probably the technical publication with the largest readership and reach in Sweden, helps spread the word that everything has to be AJAX-based. Without a doubt, this will lead a lot of web developers to start doing it right away, and managers will run to their employees proclaiming that they just can’t miss this.
The article is written by a reporter who, last week, published an article stating that web sites would have to be re-written for IE 7. Sure, if it were amateurs doing the job the first time around… So, needless to say, his track record suggests that maybe he doesn’t have the technical expertise. Which is fine, but then please do proper research before publishing such pieces. With such a job comes responsibility.
One company mentioned and quoted in the article is hitta.se, which proudly announces that its AJAX-based preloading maps are so much better than competitor Eniro’s. Ok, let’s take a swift look at hitta.se and see for ourselves:
With JavaScript disabled, no maps are shown at all (compared to Eniro’s that at least show up initially, but then the navigation of the map doesn’t work).
The code is riddled with inline styles and inline scripts, completely forsaking the professional approach of having this in separate layers.
The word semantics doesn’t seem to have gotten through at all to the web developers; the state of the HTML code is appalling.
So, where does this leave us? They’re proud to be using the “new” technology AJAX, while totally forsaking everything else when it comes to good practice, accessibility, usability and proper web interface development. If you implement such a simple thing as a map on a web page, and especially for such a popular service on the web, your responsibility is to make sure it isn’t dependent on JavaScript.
Does this mean that AJAX has to be inaccessible then? Absolutely not, it’s all about doing it the correct way. Also, I don’t have a problem with AJAX itself; on the contrary, I agree that used in a proper context, it can make using a web site a lot more interesting, useful and fast to use. But it should never be used at the cost of excluding users or normal web browsing behavior such as using the forward and back buttons in the web browser, bookmarking, reloading etc (this is all something I wanted to address with ASK – AJAX Source Kit).
Do I have a beef with hitta.se? Not at all, I just get tired when people make statements and say that they’re so much more in the loop than other companies, and then it’s obvious that they haven’t done their job correctly. In fact, I know the people behind specifying the concept, and I think it’s great! It’s just sad that the web developers implementing it didn’t have the skills to match it.
In conclusion, I’d advise hitta.se to make their next statement once they’ve done their job right. Till then, do your homework…
When I read Roger’s post Let’s skip Web 2.0 and go straight to Web 3.0 this morning, I experienced some strong feelings that I wanted to elaborate on. Basically, the post is a write-up about people jumping on the bandwagon, following every new tech hype and feeling they have to implement it.
I’m happy to call Roger a friend of mine, and generally we agree on this topic; my and his opinions also got a little clearer after an IM conversation about it. But, as I wrote in my comment on his web site, I think a lot of people will always do the latest thing just because they can, and a majority will do it in an unprofessional way. Cynical, maybe, but true. I don’t think we can ever stop people from behaving that way. It might be driven by web developers, people in sales or whoever.
So, the first point I want to stress is that people should try out new things, to see what they’re about and to form an opinion. But their responsibility, and job, is to do it without sacrificing things like accessibility and usability. A new technology or approach shouldn’t ruin all the conclusions people have previously reached about best practice in web development.
The second point is that if a lot of big names/pro-bloggers/(or whatever you want to call them) diss new technologies or mention them in a bad context, I’m afraid people will shy away from something that might actually be a good thing (I know Roger isn’t doing that in his post, but at first it seemed like that to me). It becomes a hype not to follow the hype, if you get me.
I think we should instead embrace the hypes that come along and carefully mould them into something good. Not just refrain from using them because they have become popular amongst less considerate web developers.
As most of you probably know, the target attribute isn’t allowed on links in strict HTML or strict XHTML. The thinking behind this decision, as I understand it and also see it, is that there are too many web browsers out there, be it in computers, PDAs or cell phones, and a number of factors apply. The most important ones seem to be:
Many of them don’t support opening new windows.
Most computer web browsers support tabbed browsing as well.
It should be up to the end user, not the web site, to decide if a link should be opened in the same window, a new window or a new tab; web developers shouldn’t force such behavior on people.
While all this is good and respectful and sounds great in theory, it’s not that easy in the real world. Let me take a case in point: in one of the projects I work on, there was a demand that a link should open in a new window. I came up with the usual counter-arguments for why we shouldn’t do that, but to no avail. The thing is, though, I partly agree with the customer and the project manager in this specific situation; a new window actually was somewhat justified:
The link was to a PDF file, with all the possible problems that might come with that, and as had already happened to many users (the web site in question is live), they clicked the link and then totally lost their orientation.
Most people don’t understand the behavior of tabs or new windows, and a majority get confused when they’re taken to another web site in the same window/tab. And yes, professional users, like I gather most of you are, have no problem, but we also have to consider our end users.
In the end, I went with the target attribute. Sure, I could have used unobtrusive JavaScript to add an onclick event and use window.open, and at the same time get perfectly valid code, but then it wouldn’t be as accessible, and it would depend on scripts to function properly.
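For reference, the unobtrusive window.open alternative mentioned above could have looked something like this sketch (the class name "new-window" is an assumption):

```javascript
function applyNewWindowLinks() {
    if (!document.getElementsByTagName) {
        return;
    }
    var links = document.getElementsByTagName("a");
    for (var i = 0; i < links.length; i++) {
        // "new-window" is an assumed class name marking the PDF links
        if (links[i].className === "new-window") {
            links[i].onclick = function () {
                window.open(this.href);
                return false; // cancel normal navigation; with scripts off,
                              // the link simply opens in the same window
            };
        }
    }
}

// Hook it up on page load (guarded so the sketch also loads outside a browser)
if (typeof window !== "undefined") {
    window.onload = applyNewWindowLinks;
}
```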
So, I feel a little perplexed about this: is target really a justifiable approach in some cases (though it has been terribly misused), or is my example just the exception that proves the rule? Should we take some responsibility in educating end users, or just deliver what they ask for?
Denny brought to my attention that the history and the links didn’t work flawlessly if you have the same target element for several ASK links. Therefore, I’ve now added a parameter to the object constructor, this.useSameTargetForSeveralCalls = false;, that should be set to true if you want to use the same target element for several ASK links. The default value is false, to avoid adding links to the history if they have different target elements, and also to save performance.
Updated September 29th 2006
I’ve implemented a more fail-safe way to use the XMLHTTP ActiveX object in IE, with proper fallbacks if the first attempt fails.
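A sketch of what such a fallback chain typically looks like (this illustrates the general approach, not ASK’s exact code):

```javascript
// Try the native XMLHttpRequest object first, then fall back through
// the ActiveX ProgIDs in IE, newest version first
function createXMLHTTP() {
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest(); // Mozilla, Safari, Opera (and IE 7)
  }
  if (typeof ActiveXObject !== "undefined") {
    var progIDs = [
      "Msxml2.XMLHTTP.6.0",
      "Msxml2.XMLHTTP.3.0",
      "Msxml2.XMLHTTP",
      "Microsoft.XMLHTTP"
    ];
    for (var i = 0; i < progIDs.length; i++) {
      try {
        return new ActiveXObject(progIDs[i]); // IE 5 and 6
      } catch (e) {
        // That version wasn't available; try the next, older one
      }
    }
  }
  return null; // No AJAX support at all
}
```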
Also, a very minor change has been made to the event handling to work around a bug in IE’s garbage collector (something I hear will be addressed automatically in IE 7). In 99.9% of cases you won’t notice any difference, but if you use it in a very advanced web site/web application it might make things better and less resource intensive.
I have always liked the approach of updating certain content on-the-fly in a web page without the need of reloading the entire content. This approach has been around for years and has fairly recently been nicknamed AJAX.
The thing with AJAX is that it needs JavaScript to work, and a direct consequence of its hype is that a lot of web sites have implemented it without catering to common usability and accessibility factors. This is something that has saddened me, and therefore I developed ASK – AJAX Source Kit – to address that, while at the same time offering a lightweight library for implementing AJAX functionality without having to worry about web browser differences.
The basic idea of it is to implement AJAX without sacrificing those factors and at the same time do it in an unobtrusive way, meaning that there’s no need for any event handlers or extra elements in the HTML code. All that is needed is to include the ASK JavaScript file, assign certain class names to the elements one wants to apply the ASK functionality to, and then implement accessible as well as AJAX-enhanced versions of the content that shall be retrieved dynamically.
My ASK concept was featured in the February issue of Treehouse Magazine, where you can find a more in-depth explanation of the code and the choices I made during its development.
My humble hope is that by seeing this, more web developers will understand what it takes to take a considerate approach to AJAX while using it to offer end users a richer experience. Please try it out and don’t hesitate to post any questions here that you might have.
With the humble title of this post, I guess I really need to add that the ways mentioned below are the ones I’ve found very reliable for getting a good search engine ranking. Naturally it varies a lot, but I get somewhere between 28 – 45% of my visitors from pure Google searches, simply from having a high ranking (and sometimes for terms that amaze me :-)). This is my advice:
Semantic code
Make sure you write semantically correct code, meaning that you need to use the correct element for the right situation. It is all about how you mark the words you are using, and how and in what context you want them to be interpreted.
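A made-up example: both lines below can be styled to look exactly the same, but only the first one tells a search engine (and a screen reader) that this is a heading.

```html
<!-- Semantically correct: this is the page's main heading -->
<h1>Stockholm travel guide</h1>

<!-- Can look identical with suitable CSS, but carries no meaning at all -->
<span class="big-bold-text">Stockholm travel guide</span>
```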
Friendly URLs
Make sure you have URLs with a good descriptive value, as opposed to one being made up of just a lot of parameters. There are different tools and settings to achieve this in most, if not all, web development environments. For instance, these two links both lead to the same web page:
Inbound links
If you get mentioned with good words in an appropriate context, especially from a web site that has a good PageRank, it will help push you up the search engine list.
These are the only tips I can give you; basically, it’s just about good web development practices and maintaining good relations with other web site owners.
I’m sure there are good SEO companies out there, but the ones I’ve come across have all been unprofessional and/or used very suspicious methods. And as soon as Google updates their algorithms, chaos ensues when some SEOs’ dubious work fails, since their tricks have been found out and taken care of. Then, naturally, it backfires and their customers get a very bad search engine ranking.
Just do as I suggest above; code properly and you will be safe. Look around and see what good search engine rankings most web development blogs get, just because they know how to implement a web site in a correct manner.
Come on, give us a bad example
Sure, but only since you asked for it. Recently the web site http://www.larmdirekt.se/ was brought to my attention. Navigate to their web site and disable CSS in your web browser (Ctrl/Command + Shift + S is one way to do it if you use the Web Developer extension in Firefox), or alternatively view the source code of the page.
In the footer, you will then find a link with the text “y”, which leads to the page http://www.larmdirekt.se/laarm/. Make sure to turn off JavaScript in your web browser and navigate to that page, and you will not believe your eyes: a little link farm. If you surf around those links you will, amongst others, find the names of some fairly large Swedish companies, and best of all: the name of the SEO company in the title bar.
So, go check out the code of your own web site right now, or ask your SEO what methods they use.
This post is mostly applicable to Swedish readers, but I believe most of you in other countries stumble across this fairly frequently too.
Here in Sweden we have a publication called Internetworld, whose target group is mostly private users and small businesses. Their articles mostly deal with business gain and short press releases about what has happened in the field of technology: new services on the web, Firefox increasing its user base etc. Out of general interest I read it, amongst a lot of other publications, just to stay on top of what’s going on and what people are talking about.
After I had worked a while in the internet business, I soon realized that they aren’t always exactly spot on with their articles, especially when it comes to technology choices, coding tips and the like. However, what they’ve written has mostly been harmless and can at least be of some help to amateurs starting to code.
However, I just browsed through the latest issue, with an article entitled “Web standards part 1 – Adapt the web site for different web browsers”. Just reading the headline, I realized it probably wasn’t going to be good. After going through it I came to the conclusion that it isn’t as bad as I first thought: they do, at least partly, try to convey the message that there actually is something out there called web standards, and that it is there to serve as a “code once, run everywhere” equivalent for web code.
Unfortunately, though, it has some parts and quotes that I sincerely think will hurt new web developers’ attitude towards web development, and that’s the reason I’m writing this. They briefly touch on the fact that web browser vendors interpret differently how web standards should be implemented. While that is to some degree true, it’s seldom knowledge that beginners need; it’s usually only interesting on a pretty high level, as long as you start out the correct way when you build your web sites. And it’s rarely a problem when you write HTML/XHTML; it’s usually when you code CSS that this becomes more evident (which will, as I understand it, be touched on in an upcoming part of this series).
The conclusion of the article is to follow web standards if you have no idea about your target group; otherwise, offer them an enhanced and web browser-specific version that only works under certain circumstances. Another conclusion is that web standards are an “advanced technique”, and it questions whether it’s worth requiring users to have such modern web browsers to be able to use your web site; talk about not understanding web standards.
I don’t know where to begin to describe how much damage such an attitude will do. Sure, naturally most if not every web site out there will work better in a later version of, say, Internet Explorer or Firefox than in Netscape 4, but that doesn’t give you the right to shut out users with an older web browser. It’s all about progressive enhancement.
Another thing is that even if you do know a lot about your visitors and the statistics, that situation can change almost overnight. You might build an Internet Explorer version on proprietary code, just to realize a month later that many of your visitors have started using other web browsers. Also, does anyone really know how many web browsers there are out there? Hundreds and hundreds, let me tell you: different web browsers on different operating systems, PDAs, cell phones, digital TV boxes et cetera. The only way to make sure that your code will work is to follow web standards. No, web standards will not solve your every problem, but they’re the closest you can get and definitely your best bet if you’re serious about what you do.
Let me quote some pieces in the article:
There are a number of occasions where you gain from following web standards, but there are also occasions when you don’t, which we will explain in some of the following tips.
After that, I never find a single tip where the difference is demonstrated. Also, that’s exactly the mindset that’s so dangerous; there has to be a realization that while web standards maybe won’t save the day automatically, they will never hold you back either.
In modern HTML, that is often referred to as XHTML…
What kind of crap is that? There’s HTML and there’s XHTML; they are two different things, and neither of them is really more modern than the other. Something that really bothers me is that this isn’t even mentioned, and doctypes are totally left out. No wonder you think there are differences out there if you don’t know how to choose a doctype and what effects that choice will have on the rest of the code.
Usually the unit px (pixels) is the one unit that gets interpreted most alike amongst the web browsers
While I kind of get what he’s going for, like percentage rounding errors in some web browsers and the like, talk about killing the accessibility factor. You can’t make a statement with such repercussions without explaining it in more detail. And what about ems? Ever heard of those?
In conclusion, maybe I’m way too hard on this guy. After all, I do sincerely believe that he meant well with the article and tried to help people, but my fear is that he did as much harm as good; hence this post.
In common web development people use query strings to pass parameters to the receiving web page. This technique is available in basically every language dealing with the web, such as ASP.NET, PHP, JSP, JavaScript etc.
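As a minimal, hypothetical illustration, here is a JavaScript sketch of reading such parameters; server-side environments offer the same through objects like Request.QueryString in ASP.NET or $_GET in PHP.

```javascript
// Parse the part after the "?" in a URL, e.g. "page=2&sort=name",
// into an object of name/value pairs
function parseQueryString(queryString) {
  var params = {};
  if (!queryString) {
    return params;
  }
  var pairs = queryString.split("&");
  for (var i = 0; i < pairs.length; i++) {
    var pair = pairs[i].split("=");
    params[decodeURIComponent(pair[0])] = decodeURIComponent(pair[1] || "");
  }
  return params;
}

// In a browser, you would typically pass it location.search.substring(1)
```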
Sure, query strings aren’t always the best way to do things, it depends on the situation, but in my opinion there are a lot of cases where it’s a justified and good approach. There are definitely a lot of scenarios when one can’t post forms to achieve this effect, but instead has to resort to query strings, for instance, when it comes to making a direct page available for bookmarking.
And yes, one can implement so-called friendly URLs, but from what I’ve seen it isn’t really the best approach either.
However, as most web developers are aware by now, using query strings will negatively affect your search engine ranking. My question is: why? Should we change a common web development standard just because search engines have a hard time dealing with it? And who are they to judge, when they use query strings extensively themselves?
Most of my normal working days consist of me developing web interfaces in a .NET environment using CMS tools based on it. As always, many people have opinions about Microsoft and their products, so I normally don’t even raise an eyebrow when I hear Microsoft getting criticised or dissected.
However, this was different. I normally don’t do this thing with just linking to other people’s posts, but I think Intrepid Noodle’s Asp.net (is) for Dummies… was interesting because it highlighted a problem I often see: programmers get lazy when it’s too easy for them and then many of them don’t know how to take care of things when they go awry. On the other hand, I’ve met a lot of skilled web developers using .NET but they usually shy away from the most common approach in Visual Studio.NET: drag and drop and all will be fine (or will it?). I also liked it because it was balanced and not just a Microsoft bashing.
Don’t get me wrong, I’m all for making things easier for web developers, but sometimes it’s just too easy…
I sometimes meet web developers who don’t even know anything about HTML anymore; WebControls with names like asp:Panel have become their new lingo for interface code. If you ask me, things have gone too far and become too Microsoftified.
So, go read the post and share your opinion. Out of respect for the original author, I urge you to write your comments at Intrepid Noodle’s web site, but I’ll leave comments open here too if you want to share anything with your favorite Robert. 🙂
First, just for you to understand where I’m coming from, let me tell you that I love JavaScript. It has given me, and continues to give, immense pleasure when it comes to web developing and I’ve been writing JavaScripts extensively since ’99, doing everything from minor validations and other checks to things like animations, Flash fallbacks and a Web OS.
So, let me move on to the topic of JavaScript animations. Faruk recently launched his web site, and more specifically presented the project he and Tim Hofman have been working on: the FACE project. To simplify, it’s a way of adding animation and visual effects to a web page through JavaScript and CSS. While I have no major objection when it comes to the code itself, I’m not really so sure about the concept.
Animations through JavaScript don’t really give the lean, smooth experience that technologies like Flash, or manipulating vector graphics in any other way, can, and using filters is, at least in IE, infamous for slowing the web browser down and draining memory. While I like the idea of not being dependent on any plug-in to create an effect, I think Flash is widespread enough not to be a problem.
Another perspective is that I, as a user, have a way to choose what kind of web site I want to visit. If I want a visually rich experience, maybe with sound as well, I visit a Flash-based web site, which normally offers a non-Flash version as well. But if I visit a “normal” web site, I really like that things aren’t moving around, blinking and flashing, and generally stealing my attention from the content.
This paragraph is probably going to sound a bit harsh, but the only reason I’m writing it is because I went through the same kind of evolution myself. While being very talented otherwise, Faruk is fairly new to JavaScript, and he’s now doing exactly what I did when I reached that level: creating animations. So, what I wonder is if he and Tim create this because they can, or because there’s a user base out there asking for it?
In general, I think adding interactivity to web pages through JavaScript is the right way to go, but then I think approaches like AJAX and its likes are fundamentally more interesting than animations. However, this is merely my humble opinion. I might be totally off-key here and people out there really long for this.
So, tell me what you think: are JavaScript animations just the new animated GIFs, or are they the future?
Most web sites I look at seem to have no idea how to create structured and valid layout when it comes to form elements. One of the things I get most annoyed at, both as a coder and a normal user, is when they’ve missed out on the wonderful and easy label element.
The label element is used together with form elements, making them accessible to screen readers while also making the text clickable to set focus to the element in question. Case in point:
You have a radio button, but since they’re pretty small, it might be difficult to click. You then add a label around the text next to it and make it reference the desired radio button: Voilà! You can now click the text as well to make a selection.
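A sketch of the technique, with a made-up id, using the label element’s for attribute to point at the radio button:

```html
<!-- The for attribute matches the input's id, so clicking the
     text "Male" selects the radio button -->
<input type="radio" name="gender" id="gender-male">
<label for="gender-male">Male</label>
```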
However, the example above demands that you know the id of the radio button. If you have dynamically generated forms, meaning a varying number of elements, you need to do some trickery to generate dynamic ids and then use the same values in their respective label elements.
There is one way around this, but unfortunately it doesn’t work in Internet Explorer. I do showcase it below, though, just so you’re aware of it.
<label>
<input type="radio" name="gender"> Male
</label>
You thought this was all funky and want to know more about improving your form coding, but have no idea where to go? Fear not: take a look at 10 Tips To A Better Form.
Last night I held a presentation for SWENUG about web standards and what to think about when developing web interfaces with .NET. It was interesting to meet a crowd of general web developers, not just people working with HTML, CSS and JavaScript.
After the presentation we had an open discussion for an hour or so, talking about the circumstances surrounding web development and what the future might hold. A question that came up hit the nail on the head: if everyone abides by web standards, no more, no less, what’s the gain for them?
Let’s break this down. There are two possible scenarios:
Not fully and/or properly implemented web standards.
Fully implemented web standards and some extra features on top of that.
When it comes to the first bullet, I think the answer is pretty clear. We need some kind of minimum ground to stand on, the least common denominator where we start developing. So far, so good.
The second bullet is more interesting. If software makers aren’t allowed to implement something extra to get that competitive edge, what’s their incentive? For instance, why would companies put a lot of time and money into developing a free web browser? For the good of the world? I don’t buy that. I think Microsoft has a web browser to make it work seamlessly with, as well as to promote, the other products in their product family.
On the other hand, offering something more than web standards will result in product-specific proprietary solutions and add-ons, and we don’t want that either; that would bring us back to 1999.
I guess a natural follow-up question then would be: Is Microsoft on to something with XAML and WPFE? Should we expect software companies to start delivering products that will give a richer experience for some and downgrade automatically to others?
I don’t really have a good answer to this, but I believe in two things:
Companies will want to deliver something more than their competitors.
We will see a need for emerging technologies to give users a richer experience on the web. Whether that will be something open like SVG or something company-specific, I have no idea.
First, we developed layouts based on pixels. Along came accessibility and scalability, and we started to specify our fonts with ems instead. Then, those of us who wanted to be really out there created whole layouts using ems, so the whole layout would scale accordingly to the user’s current text size setting, giving a more consistent design impression. Hand in hand with this, we also created layouts that were elastic, expanding but with a fixed maximum and/or minimum width.
The way I see it, we break our necks calculating pixels into ems, trying to make sure that every value is roundable. Then, of course, when the user changes his/her text size setting, there are bound to be rounding errors depending on the new size and things like inheritance of the em value into different elements.
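As a sketch of the kind of arithmetic involved, assuming the common default text size of 16px (the selector names are made up):

```css
/* 12px / 16px = 0.75em */
body { font-size: 0.75em; }

/* Inside body the em is now based on 12px, so a 760px-wide column
   becomes 760 / 12 = 63.333...em, a value that isn't even roundable;
   exactly the kind of calculation described above */
#content { width: 63.333em; }
```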
Personally, I think it’s gone too far. The reason people started using em for fonts wasn’t that pixels were a bad unit, but that Internet Explorer didn’t support resizing of the text when the font was specified in pixels.
Ever since I was a little kid playing video games, I’ve been amazed by the fact that no matter the size of the TV screen, the game adapts and you can just start playing. When I started developing web sites, I couldn’t believe the constraints of a fixed size delivered to everyone. Sure, vector graphics aren’t here yet for the web (I can’t understand why SVG isn’t already built into every web browser), but lately I’ve been testing something that gets us as close as possible: the Zoom feature in Opera.
I think it’s outright brilliant! Talk about making a site more accessible while keeping its general look! You zoom a web site to a desirable viewing size and it just works. It doesn’t matter if the font is in pixels, or if the web site itself has a hardcoded width. Scale, baby, scale.
My conclusion is that this feature should be mandatory in every web browser. Stop developing with ems, use your beloved pixels, and instead give us tools (read: web browsers) that offer users the features they need.
Let us instead focus on making sure no page demands JavaScript to function and that it’s possible to navigate around using only the keyboard.
A couple of weeks ago, we had a party at the company I work for. Outside of the bathrooms (where else?), I ran into a guy at work who I know is really interested in what the future might bring when it comes to the web. Naturally, then, I decided to ask him a question. This is how the conversation went:
-So, what do you think of Web 2.0?
-Web 2.0? Oh, you mean .NET 2.0! Jesus…
What really got to me is the smug way he said it, like he was correcting me and I didn’t really know the correct term. However, he seems like a nice guy, so I don’t think (read: hope) it was meant that way. I don’t take for granted that everyone should know what Web 2.0 is, but I do think that if you’re working with the Internet and billing your customers a lot of money, you should at least be aware of the biggest buzzwords currently in circulation.
However, from my experience, this seems to be a common problem amongst web developers specializing in Microsoft products: they seem to lack the necessary knowledge about what’s going on in the web world that isn’t originating from Microsoft. Of course this doesn’t apply to all of them, but at least a fair number match this description.
So please, open your eyes. Know your competitors, know your options. And you know why? Because anything else would be ignorant and not doing your job 100% correctly.
Ok, after the thing we do not speak about, I feel at least a little more stable.
Just wanted to let you know about two upcoming speaking performances for me.
Know IT, November 24th, 17.00
This will be an internal presentation for people working at Know IT in Sweden, but in case you’ve missed it, it’s on Thursday this week.
Swenug, December 1st, 17.00
I’ll be making a speaking performance at a SWENUG meeting December 1st. Some would label this as fraternizing with the enemy; I regard it as an opportunity to reach out and explain the problems and what to think of when working with .NET and wanting to deliver valid and accessible code. Anyway, it’s free (I think you need to register, but as far as I know, that’s for free too)!
While I’m very happy and grateful to get these opportunities to “spread the word”, I still find it kind of sad that it’s even necessary to evangelize about such obvious things as the benefits of CSS, semantic code etc. It’s like holding a lecture for some handymen, explaining about hammers, and they’d go:
– Oooohhhhhh. Nice. People use these nowadays?
One would’ve thought by now that the discussion would be about how to make cutting edge things with these tools, not explaining what they are to begin with.
Anyway… I’m glad for the opportunity, and if you feel like it, I’d be happy if you were to show up December 1st!
After the feedback I got on my initial AJAX-S release, I’ve compiled it and added new functionality and fixes. In release 2 you will find these beauties:
Incremental rendering.
Printable version.
Support for non-JavaScript users.
Keyboard events fixed so you will stay in the presentation.
Sure, the print design isn’t exactly ground-breaking, but that’s where you come in! Download AJAX-S and test it out with your presentation material and needs, and style it up with your own design. Let me know how it goes!
The last couple of days, the whole world wide web seems to be talking about Google and their latest release, Google Analytics. Since I thoroughly enjoy Gmail, think Google Maps is pretty cool and, naturally, use the search engine daily, I was intrigued to see that they were releasing a statistics service in the form of Google Analytics. And for free!
Of course I could’ve written a post right away telling you about the release, but I wanted to test it first to give you my first impressions. Apparently it was supposed to take 12 hours to get the account activated after signing up; a truth with modifications, if you ask the people who tried. After maybe 20 hours my account kicked in. Fair enough, I know everything about deadlines and tight release schedules.
There seem to be a lot of different views and ways of analyzing the data collected, all presented in a design that’s easy on the eyes. All you need to use it is to create an account (or use your Gmail one) and include a JavaScript file in the pages of your web site. Two things bothered me right away:
It’s not real time
To me, it then definitely loses its main attraction. I want to be able to check what has happened in the last hours, hell, even the last minutes. Live, ok? Now it seems I can only see the data from the day before; that is, when the day is over according to US time. Pretty annoying if you’re located in Sweden.
No localization
There seems to be no way, at least not as far as I can find, to localize the time zone and the way dates are presented. The American date format is pretty disturbing to the rest of the world, in case you didn’t know.
On top of that, it behaved inconsistently on different pages, but I guess every new release has its problems. However, just before I wrote this post, I tried to sign in to check if it was more stable now, and guess what happened? Every time I signed in, I got redirected to the start page of the search engine. WTF? I mean, really…
For the moment, I’m pretty disappointed. If a product/service is as shaky as Google Analytics seems to be right now, cancel it. Pull the plug. Fix the problems and re-release when it works, before it has created such enormous badwill (or perhaps that’s already too late).
But what if they succeed?
Well, then this might become interesting. It’s a free service which supposedly offers a lot of ways to analyze your stats; it’s bound to compete with other services. What will happen with things like Mint, Measure Map and StatCounter? Will they be pushed to become better? Will all aspects of those mentioned, as well as other statistics services, become free? Who knows…?
What does Robert use?
I use StatCounter, and so far I’m very pleased with it. It has always worked except for one time, and then I got instant feedback and support, and within an hour or two it was working fine again. Maybe it doesn’t offer as many ways to check the data as Google Analytics, but I prefer a small reliable service over a bulky shaky one any day.
I’m also very interested in what Measure Map will come up with. I signed up for an invitation a while ago (re-did it today), but still haven’t heard from them. If you guys read this, let me try it! 🙂
Why not Mint, you say? It’s created by the multi-talented (I did a search for multi-talented, by the way, and one of the results was Vin Diesel. Ha ha ha!) Shaun Inman, and people say it’s really good. I have two simple replies to that: I want it to be free and I don’t want to host it myself. Simple as that, but I do wish Shaun all the best and I’m sure he’ll do fine without me as a customer. 🙂
I also wonder, if you use one, what statistics software do you use? Let me know!
PS. By the way, why haven’t Google released Gmail to the public yet? Let people use it, it’s great. If you want a Gmail account, but don’t have an invite, just write a comment and tell me. I can send you one right away. DS.
PS 2. Thanks to Dejan who first tipped me about Google Analytics. DS.
Last week I bought the November issue of Treehouse magazine, from the people of Particletree, and I have to say it was the best three bucks I’ve spent in a long time! I instantly had to read it from cover to cover.
It starts off with two very interesting things: An addEvent article by Ryan Campbell explaining the need for such a function and the difference between the different solutions out there, and then goes straight into an interview with Peter-Paul Koch. Peter-Paul has had a tremendous impact on the web developing community, and especially the JavaScript part. However, I’ve never gotten the chance to meet him in person, and I’ve only gotten one e-mail reply from him (this was a number of years ago). So, Peter-Paul, if you read this, let’s make sure we meet. Who knows, I might even have something good to say too! 🙂
The magazine goes on with some other interesting interviews and articles; I really liked the Dead Poets Society feeling I got from the interview with, amongst many other things, teacher Lisa McMillan. Alex McClung also had an interesting article about writing accessible HTML code in his piece Understanding Section 508 (although he manages to call the alt attribute an alt tag once… :-)).
Basically, a highly recommended read altogether. I think the magazine will appeal to people of all levels of experience and interest working with the web. Go read it now!
The demo and the zip file are updated with a small fix to avoid generating invalid nodes while still offering the possibility to use custom HTML in any page, and the ability to display escaped code for presentations.
Updated the drop-down to support pressing the spacebar and enter keys when it has focus, to navigate directly to that page.
Important update!
By popular request, AJAX-S now supports XHTML code in the XML file as well. No escaping, no nothing, just write as you usually do! I now think it is a real contender to Eric Meyer’s S5!
For some reason unknown to me, the XSLT files failed to work in some Mozilla web browsers on some computers when they had an .xslt extension. I’ve changed the zip file so it now points to XSLT files with an .xml extension. If you’ve downloaded a previous version that didn’t work, please try the new one. Big thanks to Karl and especially Henrik Box for doing some extensive testing for me (Henrik wants to meet the girls behind girlspoke as a thanks… :-))!
Release 2!
After listening to the feedback I got, I’ve now done some major updates to AJAX-S. It now supports incremental rendering, non-JavaScript users and also offers a printable version. Go check the updated demo.
Changed the JavaScript detect for support for the XSLTProcessor object so it asks users that lack that support if they want to go to the printable page instead.
Added check to scroll the current incremental step into view if it wasn’t visible.
Updated with a different look for active increment, past increment and coming increment, and a setting if one wants the first or last increment to be selected when backing from an upcoming page.
Updated with a fix for two glitches in the keyboard navigation.
Add-on available as of September 7th, 2006
An add-on for AJAX-S has been developed, to automatically show/hide the footer of the slides.
I’ve been thinking about creating an AJAX-based slideshow for a while, and today it happened! Today I wrote my first line of code in this project (probably not the last one), but for the moment I feel very content with the results. The code is probably not perfect, but I’m going more for the concept here. The tweaking options are endless.
The idea came to me because I wanted a lightweight slideshow based on HTML, CSS and JavaScript, but I also wanted to separate the data of each page from the actual code that presents it. Therefore, I decided to move the data into an XML file and then use AJAX to retrieve it. The name AJAX-S is short for AJAX-Slides (or Asynchronous JavaScript and XML Slides, if you want to).
Naturally, one of my inspirations for creating an HTML-based slideshow is Eric Meyer and his S5. However, I wanted to take it one notch further, to make it more flexible and also usable for people with no HTML knowledge whatsoever. Another motivating factor was to transform only the data for the current page, as opposed to creating all the HTML needed for all the pages when the page is initially loaded. A leaner end user experience, basically.
It only works in IE 6 and Mozilla-based web browsers as of now. This is because of the need to do on the fly transformations on the client, which means the necessary support for ActiveXObject or XSLTProcessor has to be there. I think Opera 9 will support XSLTProcessor and probably some upcoming version of Safari too, so more widespread support in the future is very likely.
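The kind of feature detection this boils down to can be sketched roughly like this (the function name is mine, not taken from the actual AJAX-S source; the global object is passed in to make the sketch easy to test):

```javascript
// A rough sketch of the feature detection involved; the function
// name is illustrative and not from the AJAX-S source. "win" is
// the global window object, passed in rather than referenced
// directly to keep the function self-contained.
function canTransformOnClient(win) {
    // Mozilla-based browsers (and, hopefully, Opera 9 and some
    // upcoming Safari) expose XSLTProcessor; IE 6 offers
    // ActiveXObject for MSXML-based transformations.
    return typeof win.XSLTProcessor !== "undefined" ||
        typeof win.ActiveXObject !== "undefined";
}
```

If this returns false, the script can offer the visitor the printable version instead, as described in the changelog above.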
A freaky thing, which I hope is only a very unimportant detail, is that when I run it here at my host provider, I have to use the .xml extension instead of the .xslt one. Most likely a hosting issue only, however.
But enough of that now. Download AJAX-S or view the demo of AJAX-S. Please let me know what you think, and if there’s any major error in the code. Not a requirement at all, but if you use it and like it, I would appreciate getting credit for it. 🙂
About a week ago, Andy Clarke wrote a post entitled Advocating the quiet revolution. To sum it up, it’s about not trying to justify every choice of technology to your managers, clients and other people in your team, but simply writing code with web standards, separation of content and presentation, and accessibility in mind by default.
While this is true and good advice for you personally, it mostly only applies to situations with small teams/companies and when the customers don’t have developers who will inherit your code and continue to build on it. When working on a larger scale or in conjunction with the customer’s developers, it is crucial to explain and motivate the choices of technology, and why everyone in the project should abide by these guidelines.
Because if you do things right on your own and avoid informing everyone else affected by it, they won’t understand your code and will just alter it as soon as they get the chance. And if you just put your foot down and demand valid, accessible code from the developers without giving them reasons why, they will just run to the manager, complaining that it will take longer to develop (which is not true, but they usually say that out of fear, because they’ve just realized that they lack the necessary skills).
You don’t have to be a raging standardista full of elitism to convey this message; on the contrary. If you explain in a humble way why this is important, by mentioning factors like lower bandwidth usage, SEO, faster-loading pages, maintainability of code etc., then they might understand you, from a business perspective as well as a development perspective.
So, make sure you write good code. But also make sure to inform people around you why you do it, and why they should do it too.
It seems like the eternal question amongst web developers: HTML or XHTML? Wherever I look there seem to be forum posts raising the question, web developers asking me, or other people writing blog posts about what they believe is the right way to go. I’m not writing this post to tell you what the ultimate choice is, but rather to inform you about the consequences of what you choose. So, let’s take it from the top:
Strict or Transitional?
Definitely strict. Transitional doctypes are exactly what the name implies: a doctype for a phase of transition, not meant to be used permanently. If you write HTML and choose Transitional, you will get the Quirks Mode rendering, which results in web browsers just trying to mimic old and incorrect behavior; this means that rendering will be very different from web browser to web browser. If you choose XHTML Transitional, you will get the strict (or rather, strictest) mode available in IE (Note: from version 6) but you will trigger the Almost Standards Mode in Mozilla-based web browsers.
However, if you use a strict doctype, you will get full standards correct rendering and the most consistent and forward compatible interpretation of your pages.
What is XHTML?
An XHTML document is a document that has to be well-formed according to the rules of XML. Every tag has to be closed and correctly nested, and for tags like img, input, link etc., a closing slash should be added at the end of the tag, like this: <input type="text" />. If you’re writing code that should be accessible to people with Netscape 4 and some other web browsers, then make sure to have a space before the slash (Note: not to make it look good in Netscape 4, but to make it work at all).
You’re supposed to be able to save a page written in XHTML and use it as XML right away.
Why XHTML?
It totally depends on your needs. Some people find it very easy and consistent to code in its XML fashion, where everything has to be well-formed and every element has to be closed. Some people choose it to extend its functionality with namespaces, to use it in conjunction with MathML and so on. Other people might work with XHTML, not out of their own choice, but because the products they or their company use deliver XHTML.
I’ve heard something about application/xhtml+xml?
Yes, it’s all about what MIME type goes with your code. For HTML, the media type is text/html. The W3C, the organization behind many recommendations such as HTML and XHTML (albeit mostly known as web standards), states in its XHTML Media Types document:
‘application/xhtml+xml’ SHOULD be used for serving XHTML documents to XHTML user agents. Authors who wish to support both XHTML and HTML user agents MAY utilize content negotiation by serving HTML documents as ‘text/html’ and XHTML documents as ‘application/xhtml+xml’. Also note that it is not necessary for XHTML documents served as ‘application/xhtml+xml’ to follow the HTML Compatibility Guidelines.
What this translates to is that web browsers that can handle application/xhtml+xml should get it served that way. However, IE doesn’t support that media type, thus requiring you to send the code as text/html to it, basically resulting in you having to deliver the pages with different media types to different web browsers, using something called content negotiation. By now, you probably think it all sounds like too much of a hassle, and choose to go with text/html all over. After all, Appendix C. HTML Compatibility Guidelines confirms the validity of serving XHTML as text/html.
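On the server, content negotiation boils down to inspecting the Accept header the web browser sends. A minimal sketch of that decision (the function name is mine, and a real implementation should also honor the q values in the Accept header):

```javascript
// A minimal sketch of content negotiation for XHTML. The function
// name is illustrative; a production version should also parse the
// q (quality) values rather than just checking for presence.
function pickMediaType(acceptHeader) {
    // Serve application/xhtml+xml only to browsers that explicitly
    // say they accept it; everyone else (IE included) gets text/html.
    if (acceptHeader &&
        acceptHeader.indexOf("application/xhtml+xml") !== -1) {
        return "application/xhtml+xml";
    }
    return "text/html";
}
```

A Mozilla-based browser advertising application/xhtml+xml would then get the XML media type, while IE, which never sends it, falls through to text/html.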
However, then you read this:
XHTML documents served as ‘text/html’ will not be processed as XML [XML10], e.g. well-formedness errors may not be detected by user agents. Also be aware that HTML rules will be applied for DOM and style sheets…
Which means that web browsers will not render your pages as XHTML, but rather as HTML and fall back on error handling and trying to guess how it was meant to be. Then you’re most likely back at square one, because you probably don’t want it this way.
Also, something else that is utterly important to know is that certain scripting will not work when sent as application/xhtml+xml. For instance, if you use document.write or have ads on your page through an ad content provider using it (such as Google AdSense), it will stop working. If you implement an AJAX application using the innerHTML property on an element, that won’t work either.
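If you do want to serve application/xhtml+xml, the alternative is to build content with DOM methods instead of innerHTML or document.write. A small sketch of what that looks like (the function name is mine; only the general technique is taken from the discussion above):

```javascript
// Instead of element.innerHTML = "<strong>Done</strong>", which
// fails under application/xhtml+xml, build the nodes explicitly.
// The function name is illustrative; "doc" is the document object.
var XHTML_NS = "http://www.w3.org/1999/xhtml";

function setMessage(doc, element, text) {
    // Clear out any previous content.
    while (element.firstChild) {
        element.removeChild(element.firstChild);
    }
    // In an XML document, createElementNS (not createElement) is
    // what puts the new element in the XHTML namespace.
    var strong = doc.createElementNS(XHTML_NS, "strong");
    strong.appendChild(doc.createTextNode(text));
    element.appendChild(strong);
}
```

The DOM approach is more verbose, but it works identically whether the page is served as text/html or application/xhtml+xml.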
What’s Robert’s opinion?
My personal opinion is that the most important thing is that you choose a strict doctype, be it HTML or XHTML. If you want to use XHTML and serve it as text/html, make sure that you don’t intentionally have code that would break when served as application/xhtml+xml. Do not use scripting like the kind mentioned above in an XHTML page, and go the extra mile to make sure it is indeed well-formed. Also be very aware that a page that isn’t well-formed, sent as application/xhtml+xml, will not render at all, but will instead only display an error message to the end user.
Anne used me as a bat for HTML, but I’d rather be seen as a spokesman for making a thought-through decision, no matter which one it is. I sometimes work with HTML in my daily job and sometimes XHTML, depending on a lot of factors.
So, choose what you think suits your needs best, and choose wisely. And make sure it’s a deliberate decision.
I’m sitting here; just sipping some nice red wine and eating chocolate, celebrating that the last seven days are over now. I’ve been working double shifts for about a week, doing my hours as a consultant daytime, and working on redesigning this web site nighttime.
So finally: redesign! And I wanted to get it done as fast as possible; I couldn’t stand making a live redesign spread out over a longer period of time, like one of my friends does. There have been a number of reasons why I wanted to create and implement a new design for this web site, and the factors and choices have mainly been these:
Write the code myself
When I launched robertnyman.com, I installed WordPress and looked around for themes written for it. My previous design was a theme designed by Shawn Grimes, which I tweaked a bit to personalize it. But with me ranting about how web sites should be developed, I ought to live up to what I preach on my own web site. You know, the shoemaker’s children and all…
I wanted something that was really easy on the eyes, something that looked good and was also original enough to get some attention for that as well. All image material used here is from pictures I’ve taken myself. And since it’s an Easter Island theme, naturally there has to be an Easter egg; if you find and hold down a certain key combination, you will get to see a freaky picture of me! 🙂
Accessibility
I want this web site to be an example of being accessible to everyone:
With or without JavaScript enabled.
With or without CSS switched on/supported.
With a wider or narrower window.
With a smaller or larger text size setting in the visitor’s web browser.
With or without using a mouse.
Technology
Since I work full-time with web development and also have it as a hobby, this web site should be a showcase of how I think a web site should be. Therefore, the layout is elastic and works in most web browser window sizes. I also use AJAX for the search functionality, thus not requiring a post back of the whole page to see the search results.
But naturally, everything should degrade well too. The search has a fallback that works without JavaScript, and all JavaScript used is unobtrusive, meaning that all events are applied externally from a JavaScript file. The effect of this is that no elements have any inline event handlers whatsoever.
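Applying events externally instead of inline typically comes down to a small helper along these lines (a common pattern of the time; the function name is mine, not lifted from this site’s actual scripts):

```javascript
// A sketch of attaching events unobtrusively, so the HTML never
// needs inline onclick attributes. The function name is illustrative.
function addEvent(element, type, handler) {
    if (element.addEventListener) {
        // W3C DOM model (Mozilla, Opera, Safari)
        element.addEventListener(type, handler, false);
    } else if (element.attachEvent) {
        // IE's proprietary model
        element.attachEvent("on" + type, handler);
    } else {
        // Last resort: the traditional event property
        element["on" + type] = handler;
    }
}
```

With a helper like this, all event hookup lives in the external script file, and the markup stays clean whether or not JavaScript is enabled.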
It’s possible to easily navigate through the web site just using the keyboard, leaving out the dependency on using a mouse.
Something that will interest certain people out there, and definitely Anne van Kesteren, is that this web site is using strict HTML, not XHTML. The reasons? First, I’m tired of everyone using XHTML without knowing the reasons why. They just do it because their tools deliver it, they’ve heard it’s cool, etc.
Second, XHTML should be served as application/xhtml+xml. In my previous design, that was initially the case, but since WordPress wasn’t fool-proof and I still wanted it to be user-friendly to write comments on my posts, this ended up with me having to check the web site all the time just to make sure that nothing bad had gotten through. I then switched to serving XHTML as text/html for that design, according to Appendix C, but knowing that my code should stay valid so I could switch back to application/xhtml+xml whenever I wanted to, I made sure not to intentionally use anything that would break.
However, now I use innerHTML in my AJAX script and Google’s ads use document.write; two things that don’t work with application/xhtml+xml. So, my decision to use plain old HTML is definitely thought through and a very deliberate one. Maybe some day this web site will use XHTML again, but only the future can tell.
Testing web standards
I haven’t had access to an Apple computer during this whole design phase. The site has been coded using web standards, well-tested CSS approaches and object detection in JavaScript. My testing in Firefox and Opera 8, two of the most standards-compliant web browsers out there, leads me to believe that it should work in Safari too. Apple user? Please let me know!
So, get going now! Resize your web browser window, increase/decrease your text size setting, turn off using any CSS, try navigating using only your keyboard, turn off JavaScript and test it!
When that’s done, and your eyes have feasted on the new layout, please let me know what you think of it! 🙂
PS. Don’t miss the two new cool map functionalities; they can be found at the bottom of the left column. DS.
PS 2. A big thank you to Henrik Box for helping me evaluate my design sketches. DS.
About two weeks ago, I published An Open Letter to WaSP, and the feedback was very good and the following discussion at a good level. So this post is a kind of semi-follow up to that, based on my reflections on the comments I got.
What I wanted to target here was the “isn’t this for W3C”-reaction that I got, which really is an interesting discussion. We have the W3C that put together their recommendations and we have WaSP fighting for spreading the word and the awareness about web standards. Then Karl of W3C wrote an interesting comment about the W3C Education and Outreach group and pointed us to their work.
This led me to thinking: should WaSP then be a part of W3C?
Don’t get me wrong, WaSP has done tremendous work spreading web standards, especially lately with their collaboration with Microsoft, but I can honestly say that if I were to mention WaSP to my colleagues, most of them would think of a heavy metal band with a singer called Blackie Lawless. And if W3C has such a group, shouldn’t WaSP be that group? Evangelizing in the name of the W3C would probably get even more attention, and it would also come from the same organization as the recommendations. My belief is that it would help WaSP gain more credibility (not something they lack in my eyes, but in the eyes of people I meet).
I’ve been a bit busy lately, and therefore haven’t written about things I wanted to. So here’s a little sum-up of three things I think deserve mentioning:
I’ve used it for a little while, and it offers functionality as good as what can be found in the Web Developer Extension for Mozilla-based web browsers. However, what I really like about this one is the screen ruler; it’s a great way to quickly measure an element’s size or similar, without resorting to making a screen dump and checking it in Photoshop, or having a third-party program. What I also love about it is automatically getting an outline for elements, but only when they’re hovered with the mouse. The two gripes I have, however, are the lack of keyboard shortcuts for tasks like validating the code (or hell, even View Source!), and that if I have displayed the screen ruler, it sometimes seems to hijack the keyboard shortcut Ctrl + R after that (which I use for reloading a page).
I’m sure it’s a cool product and a great gadget, but it disturbs me that Steve Jobs for a long time has said that he doesn’t believe in it, the last time in a statement two weeks before its release.
The value of a blog
There’s a very hypothetical way to calculate how much a blog is really worth. Nevertheless, it’s always interesting to speculate! 🙂 Apparently, this blog is worth somewhere between $ 40 000 and $ 65 000. A decent amount for having written it for seven months in my spare time, purely out of interest. 🙂
To start with, if this is not your first visit here, you know I’m all for web standards. But from time to time, I feel that things get exaggerated. There’s a validation frenzy and way too much work, time and focus put into the wrong details.
IT projects are almost always under a tight deadline, and compromising is usually a way of web development life. So, my pet peeve is invalid attributes on elements. When I write code, of course I refrain from using them, hence making it valid. But in my case, I work in a lot of .NET-based projects, and Web Forms and the like produce invalid code, especially when using a strict HTML or XHTML doctype. Examples can be attributes like language=JavaScript and the name attribute on the form tag. There are ways to take care of this, but they might affect performance, especially for a web site with a lot of visitors.
While these attributes render the page invalid, to me it doesn’t really matter. I regard it as much more important to focus on writing accessible and semantic code, where the presentation is 100% controlled from CSS. And, as Peter-Paul Koch writes in Why QuirksMode, there are a lot of cases where using custom attributes would make the code a lot cleaner and more understandable. Having an attribute named “mandatory” on a form element would make a lot more sense than adding a class for it. Especially if the class attribute then weren’t used for any presentational purpose whatsoever, but only for hooking it up with JavaScript.
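The kind of JavaScript hook this argues for could look something like the following sketch (the attribute and function names are illustrative, not from any particular library):

```javascript
// A sketch of hooking validation up through a custom "mandatory"
// attribute instead of overloading the class attribute. The
// attribute and function names are illustrative only.
function isMandatory(field) {
    return field.getAttribute &&
        field.getAttribute("mandatory") === "true";
}

// Collect all mandatory fields from a list of form elements.
function findMandatoryFields(fields) {
    var result = [];
    for (var i = 0; i < fields.length; i++) {
        if (isMandatory(fields[i])) {
            result.push(fields[i]);
        }
    }
    return result;
}
```

The trade-off, as said, is that the custom attribute makes the page invalid under a strict doctype; the gain is that class is left alone for presentation.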
So, my advice is: make sure your code is well-formed, but after that, focus on the important parts instead of insignificant things like an invalid attribute. Then, if you have time, take care of the attribute too.
This article is co-written with Vlad Alexander, co-founder and in charge of development at Belus Technology, the company behind the highly successful XStandard WYSIWYG editor.
Web Standards are failing to break into mainstream development because the Web Standards community does not speak with a unified voice. When Web designers, Web Developers, IT managers and software vendors find information about Web Standards, instead of a succinct common approach, there are endless discussions and flame wars driven by individual interpretations of what the specs mean. So instead of getting the information they need, they see bickering over the importance of valid markup, nit-picking over DOCTYPE and MIME types, and squabbles over the role of accessibility.
Of course, debates about Web standards are healthy, and it’s natural that Web developers should consider some aspects of Web development to be more important than others. However, we need to agree on core Web Standards values that everyone can trust because they represent the consensus of opinion of the developer community. This does not mean that we should stop debating amongst ourselves, but newcomers to Web Standards need the confidence that comes from knowing that there is a single, agreed-upon approach to implementing Web Standards.
So how do we arrive at this single, agreed-upon understanding of what Web Standards are?
We compromise. And we locate our core Web Standards values in one place – WaSP.
We therefore ask that WaSP put together a task force to create a Web Standards Charter. The Charter will define what Web Standards are and recommend a single implementation approach. When necessary the Charter will be updated as dictated by the current state of the art and the latest best practices.
The Web Standards community will then be able to direct newcomers to the Charter as a solid starting point from which they can proceed to implement standards-compliant projects with confidence.
Once they have gained confidence, newcomers can join us in ongoing debates about Web Standards, adding to the strength and diversity of our community.